How big is the gap between AI and human vision? Researchers from the University of California, Berkeley and other universities built a dataset of 7,500 "natural adversarial examples". When they tested a range of machine vision systems on it, accuracy dropped by roughly 90%; in some cases the software could recognize only 2-3% of the images. If such AI were used in self-driving cars, the consequences would be hard to imagine.

Computer vision has improved greatly in recent years, but it can still make serious mistakes. It makes so many, in fact, that there is an entire research field devoted to images AI routinely misrecognizes, known as "adversarial images". You can think of them as optical illusions for computers: where you see a cat in a tree, the AI sees a squirrel.

Studying these images matters. As we put machine vision systems at the core of new technologies such as AI security cameras and self-driving cars, we implicitly trust that computers see the world the way we do. Adversarial images prove that they do not.

Adversarial images exploit the weaknesses of machine learning systems.

However, while much of the attention in this field has gone to images deliberately engineered to fool AI (for example, a 3D-printed turtle that Google's algorithm mistook for a rifle), confusing images also occur naturally. These are even more worrying, because they show that vision systems fail even when no one is trying to trick them.
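To make the "deliberately engineered" case concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), one common way such images are crafted. This is not the technique behind the paper discussed here (which collects unmodified, naturally occurring photos), and the file name cat.jpg, the model choice, and the epsilon value are illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Minimal FGSM sketch: nudge an image just enough to change a
# pretrained classifier's prediction. "cat.jpg" is a placeholder path.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

# Forward pass and loss against the model's own top prediction.
logits = model(normalize(img))
label = logits.argmax(dim=1)
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

# One FGSM step: move every pixel by epsilon in the direction that
# increases the loss, then clamp back to the valid image range.
epsilon = 0.03  # illustrative perturbation size
adv_img = (img + epsilon * img.grad.sign()).clamp(0, 1)

adv_pred = model(normalize(adv_img)).argmax(dim=1)
print("original class:", label.item(), "adversarial class:", adv_pred.item())
```

The perturbation is tiny in pixel terms, which is exactly what makes engineered adversarial images unsettling: to a human the two pictures look identical.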

To demonstrate this, researchers from the University of California, Berkeley, the University of Washington, and the University of Chicago created a dataset of 7,500 "natural adversarial examples". They tested a number of machine vision systems on this data and found that accuracy dropped by roughly 90%; in some cases the software could recognize only 2-3% of the images.
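The evaluation itself is straightforward in principle: run a pretrained ImageNet classifier over the images and count how often its top prediction matches the label. The sketch below shows that kind of measurement with a torchvision model; the folder natural_adv_examples/ is a placeholder, and mapping its subfolders directly onto ImageNet class indices is an assumption, not how the authors packaged their data.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

# Hypothetical evaluation sketch: top-1 accuracy of a pretrained
# ImageNet classifier over a folder of images. The folder layout is
# assumed to align with ImageNet class indices for illustration only.
transform = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder("natural_adv_examples/", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"top-1 accuracy: {correct / total:.1%}")
```

On ordinary ImageNet photos a model like this scores well above 70% top-1; the paper's point is that on these naturally confusing images the same kind of measurement collapses to a few percent.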

Here are some examples from the "natural adversarial examples" dataset:

The data is expected to help train more robust vision systems.

In the paper, the researchers say this data should help train more robust vision systems. They explain that the images exploit "deep flaws" stemming from the software's "over-reliance on color, texture, and background cues" to identify what it sees.

For example, in the image below, the AI mistakes the picture on the left for a nail, probably because of the wood-grain background. In the picture on the right, it fixates on the hummingbird feeder and misses the fact that there is no actual hummingbird in the frame.

The following four dragonfly photos were identified, from left to right, as a skunk, a banana, a sea lion, and a glove after the AI analyzed their color and texture. In each case we can see why the AI gets it wrong.

It is not news that AI systems make these kinds of mistakes. For years, researchers have warned that vision systems built with deep learning are "shallow" and "brittle", and that they do not handle the world's near-identical nuances with anything like human flexibility.

These AI systems are trained on thousands of sample images, but we usually don't know which elements of an image the AI actually relies on to make its judgments.
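One common way researchers probe which parts of an image a classifier relies on is a gradient saliency map: take the gradient of the predicted class score with respect to the input pixels and look at where it is largest. The sketch below is a generic illustration of that idea, not the analysis performed in the paper; feeder.jpg is a placeholder file name.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Rough gradient-saliency sketch: highlight the pixels the classifier's
# top prediction is most sensitive to. "feeder.jpg" is a placeholder.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

img = preprocess(Image.open("feeder.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(normalize(img))
top_class = logits.argmax(dim=1)
# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class.item()].backward()

# Saliency = maximum absolute gradient across the color channels;
# bright regions are the pixels the prediction depends on most.
saliency = img.grad.abs().max(dim=1).values.squeeze(0)
print("saliency map shape:", tuple(saliency.shape))  # (224, 224)
```

Plotted as a heatmap, this often shows the model attending to background texture or a single salient object rather than the thing a human would name.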

Some studies suggest that, rather than considering overall shape and content, these algorithms don't look at an image as a whole but latch onto specific textures and details. The results from this dataset seem to support that explanation: for example, an image with clear shadows on a bright surface was wrongly recognized as a sundial.

Are AI vision systems really hopeless?

But does this mean these machine vision systems are hopeless? Not at all. The mistakes they make are usually minor ones, such as mistaking a drain for a manhole cover or a truck for a luxury car.

Although the researchers say these "natural adversarial examples" can fool a wide range of vision systems, that does not mean every system can be fooled. Many machine vision systems are highly specialized, such as those used to spot disease in medical scans. These systems have their own shortcomings, and they may not understand the world the way humans do, but that does not stop them from detecting and diagnosing cancer.

Machine vision systems may sometimes be quick and flawed, but they usually deliver results. Research like this exposes the blind spots and gaps in machine vision, and the next task is to figure out how to fill them.