Digital images designed to trick AI are affecting humans too
Nishadil - January 05, 2024

Recent research reveals that subtle alterations to digital images, intended to mislead computer vision systems, can also influence human perception. The study, detailed in a series of experiments published in Nature Communications, underscores the intersection between artificial intelligence (AI) and human vision, and raises questions about the implications of adversarial images for both.
Understanding adversarial images: A brief overview

An adversarial image is one that has been intentionally modified to trick an AI model into misclassifying its contents. These manipulations, known as adversarial attacks, can range from causing an AI to mistake a vase for a cat to making it classify the image as anything except a vase.
Remarkably, even subtle attacks, in which no pixel changes by more than two levels on the 0-255 intensity scale, can effectively deceive AI systems. Nor are these attacks confined to the digital realm: physical objects can be targeted as well, for example by altering a stop sign so that a vision system reads it as a speed limit sign. Because of the security concerns such attacks raise, researchers have been exploring ways to detect and mitigate them.
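To make the pixel-budget idea concrete, here is a minimal sketch of one common attack technique, the fast gradient sign method (FGSM), constrained so that no pixel moves by more than two levels (2/255 for images normalized to [0, 1]). The names `model` (a classifier returning logits) and `image` (a batched tensor) are assumptions for illustration; the study does not specify this particular method.

```python
import torch
import torch.nn.functional as F

def subtle_fgsm(model, image, label, eps=2 / 255):
    """One-step FGSM attack with an L-infinity pixel budget of `eps`."""
    image = image.clone().requires_grad_(True)
    # Loss of the classifier's prediction against the true label.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Each pixel moves by at most `eps` in the direction that raises the loss.
    perturbed = image + eps * image.grad.sign()
    # Keep pixel values in the valid range and detach from the graph.
    return perturbed.clamp(0.0, 1.0).detach()
```

Even a single step of this kind, invisible at a glance, can be enough to flip a classifier's output, which is what makes the two-level budget described above so striking.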
Human perception and adversarial examples: Unraveling the connection

While previous studies have shown that people are sensitive to large-magnitude image perturbations, the impact of more nuanced adversarial attacks on human perception remained less explored. The research team conducted controlled behavioral experiments to investigate this connection.
Participants were presented with pairs of images, each of which had been subjected to an adversarial attack. For instance, an original image classified as a "vase" was perturbed in two ways: once to mislead the AI into seeing a "cat" and once into seeing a "truck." Participants were then asked targeted questions such as "Which image is more cat-like?" Despite neither image resembling a cat, participants consistently showed a perceptual bias toward the image perturbed in the queried direction.
The study revealed that this perceptual bias remained consistently above chance even when the alterations were subtle, with no pixel adjusted by more than two levels. This suggests that humans, like AI systems, can be influenced by adversarial perturbations, emphasizing the need for further exploration.
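As a rough illustration of how such an image pair might be produced, the sketch below adapts the FGSM example above into a targeted variant: stepping against the gradient pushes the prediction toward a chosen label, under the same two-level budget. Again, this is a generic technique with assumed names (`model`, `vase_image`, `CAT_ID`, `TRUCK_ID`), not the paper's exact optimization procedure.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target, eps=2 / 255):
    """One-step targeted attack: nudge `image` toward class `target`."""
    image = image.clone().requires_grad_(True)
    # Loss with respect to the *desired* (incorrect) target label.
    loss = F.cross_entropy(model(image), target)
    loss.backward()
    # Step against the gradient to decrease the loss for the target class.
    perturbed = image - eps * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: CAT_ID and TRUCK_ID are class indices in the
# model's label set, and vase_image is a batched tensor in [0, 1].
# toward_cat   = targeted_fgsm(model, vase_image, torch.tensor([CAT_ID]))
# toward_truck = targeted_fgsm(model, vase_image, torch.tensor([TRUCK_ID]))
```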
Implications for AI safety and security research

The research sheds light on the critical intersection of machine vision and human perception. The findings suggest that although human vision is far less susceptible to adversarial perturbations than machine vision, these alterations can still subtly bias human decision-making toward machine-generated outcomes.
The implications extend beyond the realm of AI, emphasizing the importance of understanding the broader effects of these technologies on both machines and humans. The study advocates for ongoing cognitive science and neuroscience research to deepen our understanding of AI systems and to guide the development of safer, more secure technologies.
Disclaimer: This article was generated in part using artificial intelligence and may contain errors or omissions. The content is provided for informational purposes only and does not constitute professional advice. We make no representations or warranties regarding its accuracy, completeness, or reliability. Readers are advised to verify the information independently before relying on it.