RisingAttacK: New Method Can Trick Machine Vision into Missing Real Objects
Researchers at North Carolina State University have developed a new method for attacking the machine-learning-based computer vision systems. The technique, called RisingAttacK, lets an attacker subtly alter an image so that a vision system perceives it incorrectly — even to the point that it completely fails to recognize objects that are plainly there.
RisingAttacK works in stages. First, it analyzes an image and identifies the key visual features the system relies on to recognize objects. Then it determines how sensitive the model is to changes in those features. Finally, it introduces small but targeted distortions into the image. These changes are invisible to the human eye but confuse the machine vision model, making certain objects effectively "disappear." For instance, the method could be used to hide pedestrians or cars from a self-driving vehicle.
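The released code has not been summarized here; the following is only a minimal sketch of the general idea described above — measure how sensitive the model's output is to the input pixels, then push a small, bounded perturbation against that sensitivity so a chosen class score drops. The choice of model (torchvision's ResNet-50), the step size, the perturbation budget, and the iteration count are all illustrative assumptions, not the authors' settings.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained classifier standing in for the attacked vision system.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def suppress_class(image, target_class, epsilon=2 / 255, alpha=0.5 / 255, steps=20):
    """Iteratively nudge an image so the model's score for `target_class` drops,
    while keeping the total change within a small, visually negligible budget."""
    x = preprocess(image).unsqueeze(0)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Stages 1-2: the gradient of the target-class score with respect to the
        # pixels plays the role of the "sensitivity" analysis described above.
        score = model(normalize(x_adv))[0, target_class]
        grad, = torch.autograd.grad(score, x_adv)
        # Stage 3: step against that sensitivity direction, then project back
        # into the epsilon ball so the distortion stays imperceptible.
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

# Hypothetical usage: lower the score of one ImageNet class in a street photo.
# adv = suppress_class(Image.open("street.jpg").convert("RGB"), target_class=954)
```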
The researchers tested the method on four well-known computer vision models: ResNet-50, DenseNet-121, ViT-B, and DeiT-B. The attack succeeded against all four, suggesting that systems of the kind used in autonomous vehicles, surveillance cameras, medical imaging, and other critical applications are vulnerable to this form of manipulation.
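A cross-architecture check of this kind could be reproduced roughly as follows: compare each model's top-1 prediction on the clean image and on the perturbed one. This is not the authors' evaluation code; it assumes the timm library, its standard names for the four architectures, and input tensors that have already been resized and normalized.

```python
import timm
import torch

# The four architectures named in the article, using timm's model identifiers.
ARCHS = ["resnet50", "densenet121", "vit_base_patch16_224", "deit_base_patch16_224"]

def top1_changed(x_clean: torch.Tensor, x_adv: torch.Tensor) -> dict:
    """Report, per architecture, whether the top-1 class flips under the perturbation."""
    results = {}
    for name in ARCHS:
        model = timm.create_model(name, pretrained=True).eval()
        with torch.no_grad():
            clean_pred = model(x_clean).argmax(dim=1)
            adv_pred = model(x_adv).argmax(dim=1)
        results[name] = bool((clean_pred != adv_pred).item())
    return results
```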
The goal of the research, the team clarified, is not to create mistrust in machine learning systems, but to expose existing weaknesses so that the systems can be hardened against them.
The findings will be officially presented on July 15 at the International Conference on Machine Learning (ICML) in Vancouver. The RisingAttacK tool has already been released as open source on GitHub to help developers secure their models against such threats.