What is the difference between AlphaGo and AI such as Siri?
On March 9th, 2016, Google's Go program AlphaGo will play against Lee Sedol. Last October, the program defeated Fan Hui, a Chinese professional 2-dan; that was the first time a Go AI had beaten a professional player in an even game. The result was published in the journal Nature this January, and it set off an extremely heated discussion. The question asked most often: will AI finally take over the world?

Siri, which can sing and tell corny jokes. Image source: Apple.

The question is not pure alarmism. In a sense, AI already has: from Apple's Siri, to the search engines we use every day, to the systems that recommend articles and products online, all of these are artificial intelligence. They may not look like science fiction, but our daily lives are already hard to separate from them.

But AlphaGo is different from these familiar AIs. The difference lies in how general its learning method, and therefore its skills, can be.

Siri: a scripted assistant

Siri is an "intelligent assistant" that can understand our spoken commands, search the web for us, and find contacts in our address book. But its principle is simple: speech recognition turns the audio into basic elements of language, such as vowels, consonants, and words, which are then matched against commands built into the system. If the match is a practical request, Siri executes the corresponding instruction; if it is idle chit-chat, Siri picks a reply from a corresponding library of canned responses.

So its weakness is obvious: if you ask it to do something that is not in the system, it is simply stumped. Siri is AI, but a very limited AI: it can only solve problems that were written out for it in advance.
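To make the "pre-written problems only" limitation concrete, here is a minimal, hypothetical Python sketch of a Siri-style dispatcher. The command table, the canned replies, and the matching rule are all invented for illustration; a real assistant's pipeline is far larger and more sophisticated, but the overall shape is similar: match against a script, or give up.

```python
import random

# Hypothetical built-in tables; a real assistant's are far larger and hand-crafted.
COMMANDS = {
    "call": lambda name: f"Calling {name}...",               # practical request -> instruction
    "search": lambda query: f"Searching the web for {query}...",
}
SMALL_TALK = {                                               # chit-chat -> canned reply library
    "tell me a joke": ["I'm not funny, I'm just rendered that way.",
                       "I would tell a joke, but my timing needs recalibrating."],
    "sing a song": ["La la la. I am better at setting alarms."],
}

def assistant(utterance: str) -> str:
    """Dispatch recognized speech against pre-written scripts."""
    words = utterance.lower().split()
    if words and words[0] in COMMANDS:                       # matches a practical command
        return COMMANDS[words[0]](" ".join(words[1:]))
    if utterance.lower() in SMALL_TALK:                      # matches known chit-chat
        return random.choice(SMALL_TALK[utterance.lower()])
    return "Sorry, I don't understand that."                 # not in the script: the assistant is stumped

print(assistant("call Mom"))                                 # scripted: works
print(assistant("tell me a joke"))                           # scripted: works
print(assistant("teach yourself Go"))                        # unscripted: fails
```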

Siri driven into a corner (or perhaps just stumped) by a provocative user from Northeast China. Image source: Apple

Deep Blue: unbeatable at chess, and only at chess

In 1997, the chess machine "Deep Blue" built by IBM defeated Garry Kasparov, then the world chess champion. It was a landmark event in the history of artificial intelligence. Yet although Deep Blue beat the world champion, it shares the same shortcoming as Siri: it is too specialized.

As a program, Deep Blue's software was designed specifically for chess. Its four criteria for evaluating a board position were material, piece placement, king safety, and tempo. Obviously, these indicators depend entirely on the rules of chess itself and cannot be extended to anything else.

Kasparov playing against Deep Blue. Image source: muse.jhu.edu

Even so, it relied heavily on brute force. Deep Blue's hardware was the fastest chess machine of its day. Even with pruning to narrow the search, it still had to evaluate on the order of 200 million positions every second, and IBM developed custom hardware to meet that demand.
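For readers curious what that brute force looks like in miniature, below is a toy Python sketch of the classic approach: minimax search with alpha-beta pruning over a hand-made game tree. In a real engine the leaf scores would come from a chess-specific evaluation like the one described above (material, piece placement, king safety, tempo); everything here, from the tree to the scores, is invented for illustration and is not Deep Blue's actual code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A toy game-tree node: internal nodes have children; leaves carry a score."""
    score: float = 0.0
    children: List["Node"] = field(default_factory=list)

def alphabeta(node: Node, depth: int, alpha: float, beta: float, maximizing: bool) -> float:
    """Minimax search with alpha-beta pruning over an explicit toy tree."""
    if depth == 0 or not node.children:
        return node.score                      # a real engine would call its evaluation function here
    if maximizing:
        value = float("-inf")
        for child in node.children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                  # prune: the opponent will never allow this branch
                break
        return value
    value = float("inf")
    for child in node.children:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A tiny hand-made tree; the leaf scores stand in for "material + position + king safety + tempo".
tree = Node(children=[
    Node(children=[Node(score=3), Node(score=5)]),
    Node(children=[Node(score=2), Node(score=9)]),
])
print(alphabeta(tree, depth=2, alpha=float("-inf"), beta=float("inf"), maximizing=True))  # prints 3
```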

It was therefore less a chess program than a chess machine. Deep Blue could only play chess; it could not learn Go, or even simple Gomoku. Kasparov, by contrast, being human, can learn Go, Gomoku, and painting. Deep Blue's technology, like a key cut for a single lock, is severely limited.

Self-driving cars: taking a new direction

The working principle of a self-driving car can be simplified into the following steps (a rough code sketch of this loop follows the list):

First, it perceives the surrounding environment through its sensors, just as a driver observes the road with their eyes;

Next, it obtains the road route over the network, just as we use navigation software when driving;

Then its software predicts how nearby pedestrians and vehicles will move;

Finally, it calculates its own best path and controls the car's speed and steering accordingly.
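These steps form a cycle that repeats many times a second. Below is a highly simplified, hypothetical Python sketch of that sense, route, predict, act loop; every function here is a placeholder standing in for an entire engineering subsystem, not any real vehicle's software.

```python
import time

def read_sensors():
    """Placeholder for cameras, radar, and lidar: a snapshot of the surroundings."""
    return {"obstacles": [{"kind": "pedestrian", "distance_m": 12.0, "speed_mps": 1.2}]}

def fetch_route():
    """Placeholder for networked navigation: the roads to follow."""
    return ["Main St", "2nd Ave"]

def predict_motion(obstacles):
    """Crudely guess where each nearby pedestrian or car will be one second from now."""
    return [dict(o, distance_m=o["distance_m"] - o["speed_mps"]) for o in obstacles]

def plan_and_control(route, predictions):
    """Pick a speed and steering angle given the route and the predicted obstacles."""
    nearest = min((p["distance_m"] for p in predictions), default=float("inf"))
    speed = 0.0 if nearest < 5.0 else 10.0          # brake if something is getting close
    return {"speed_mps": speed, "steering_deg": 0.0}

# The sense -> route -> predict -> act cycle, repeated continuously while driving.
for _ in range(3):
    surroundings = read_sensors()
    route = fetch_route()
    predictions = predict_motion(surroundings["obstacles"])
    command = plan_and_control(route, predictions)
    print(command)
    time.sleep(0.1)
```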

Google's self-driving car. Image source: Google

These techniques are specific to the field of autonomous driving, but the basic idea is already somewhat close to AlphaGo's.

IBM Watson: general intelligence

In 2011, IBM's Watson defeated human champions on the American quiz show Jeopardy!, and its technical approach is closer in spirit to AlphaGo's. Watson's decision-making involves four steps: first it observes, collecting data from the environment; then it forms hypotheses about that data; then it evaluates those hypotheses; and finally it makes a decision. There are still differences from AlphaGo, though. First, Watson is designed as a question-answering machine. Second, training Watson requires human experts: for questions about cancer, for example, scientists must weed outdated and incorrect information out of large numbers of books and papers and feed the curated data to the machine. Even so, it can handle many different fields, which gives it far greater room for expansion than the systems above: Watson is already being used in medicine.
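As a loose illustration of that observe, hypothesize, evaluate, decide loop, here is a deliberately naive Python sketch that picks the best-supported answer from a tiny toy corpus. It is not Watson's actual pipeline, which relies on large-scale natural-language processing and machine-learned evidence scoring; the corpus, the matching rule, and the scoring here are all invented.

```python
def decide(question, corpus):
    """A naive observe -> hypothesize -> evaluate -> decide loop (not Watson's real pipeline)."""
    # 1. Observe: gather passages that share words with the question.
    words = set(question.lower().split())
    observations = [doc for doc in corpus if words & set(doc["text"].lower().split())]
    # 2. Hypothesize: each matching passage proposes a candidate answer.
    hypotheses = [doc["answer"] for doc in observations]
    # 3. Evaluate: score each hypothesis by how much evidence supports it.
    scores = {h: hypotheses.count(h) for h in set(hypotheses)}
    # 4. Decide: return the best-supported answer, or admit uncertainty.
    return max(scores, key=scores.get) if scores else "no confident answer"

corpus = [
    {"text": "Go is an ancient board game played with black and white stones.", "answer": "Go"},
    {"text": "Chess is played on a board of 64 squares.", "answer": "chess"},
    {"text": "Go players place black and white stones on a grid of lines.", "answer": "Go"},
]
print(decide("which board game is played with black and white stones", corpus))  # prints "Go"
```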

The logo of IBM Watson. Image source: IBM

So, what is the technical idea of AlphaGo?

DeepMind, the company that created AlphaGo, is trying to build general artificial intelligence through reinforcement learning. The framework involves two entities: the artificial intelligence itself and its environment. The AI relates to the environment in two ways: it perceives data through its sensors, and it influences the environment through concrete actions. Because the environment is complex, the AI cannot obtain complete information about it, so it must repeat the perceive-and-act cycle to earn the greatest possible reward from the environment. Most mammals, humans included, operate by this same set of rules.
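Below is a minimal, generic Python sketch of that two-entity loop: an environment the agent can sense and act on, and an agent that repeats the perceive-and-act cycle to collect reward. It is an illustrative skeleton only, nothing like DeepMind's systems; learning, that is, changing the agent's behavior based on the reward, is shown in the next sketch.

```python
import random

class Environment:
    """A toy world: a single number drifts noisily; the agent can push it up or down."""
    def __init__(self):
        self.value, self.target = 0.0, 10.0

    def sense(self):
        return self.value                                    # what the agent's "sensors" report

    def act(self, action):
        self.value += action + random.uniform(-0.5, 0.5)     # the action changes the world, noisily
        return -abs(self.target - self.value)                # reward: the closer to the target, the better

class Agent:
    """Senses the environment, acts on it, and receives a reward."""
    def choose(self, observation):
        return 1.0 if observation < 10.0 else -1.0           # a fixed policy; learning would tune this

env, agent = Environment(), Agent()
for _ in range(30):                                          # the repeated sense -> act -> reward cycle
    observation = env.sense()
    action = agent.choose(observation)
    reward = env.act(action)
print(round(env.sense(), 1))                                 # the agent has pushed the world toward 10
```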

In reinforcement learning, the agent constantly perceives information from the environment and feeds actions back into it. Image source: Google

Before AlphaGo, DeepMind had already used this idea to teach AI to play games. In 2015, a paper published in Nature described how a single algorithm learned to play different Atari games, including Space Invaders and Breakout. The AI watches the game screen the way a person does, operates the game the way a person does, and slowly grows from complete novice to master. AlphaGo works on the same principle: it simulates the way people learn Go, plays game after game like a human, and gradually learns to think like an expert.
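To show in miniature what "slowly growing from novice to master" by playing can look like, here is a toy Q-learning sketch on a tiny corridor "game": the agent only sees its position and a reward, yet after many plays it learns to walk straight to the goal. This is a classroom-scale stand-in for the idea, not DeepMind's Atari system, which learned from raw screen pixels with a deep neural network.

```python
import random

# A corridor of 6 cells with the goal at the right end. The agent starts at cell 0,
# can step left or right, and only ever sees its position and a reward.
N_CELLS, GOAL = 6, 5
ACTIONS = (-1, +1)                                            # step left, step right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}    # learned value of each (state, action)

def play_one_game(epsilon=0.1, lr=0.5, gamma=0.9):
    """Play until the goal is reached, updating Q along the way; return the number of steps taken."""
    state, steps = 0, 0
    while state != GOAL and steps < 200:
        # epsilon-greedy: mostly do what has worked so far, occasionally explore (ties broken at random)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            best = max(Q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])
        next_state = min(max(state + action, 0), N_CELLS - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the value toward the reward plus the discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += lr * (reward + gamma * best_next - Q[(state, action)])
        state, steps = next_state, steps + 1
    return steps

print("first game:", play_one_game(), "steps")                # early on: mostly aimless wandering
for _ in range(200):
    play_one_game()
print("after 200 games:", play_one_game(), "steps")           # now it heads almost straight to the goal
```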

This approach needs only raw data, which makes it more general than methods that depend on manually curated input. In principle, once AlphaGo has learned Go, learning Gomoku would be no problem.

AlphaGo's technology was tried on games first because games, whether board games or computer games, are much simpler than real-world problems. Games are also likely to be the first field where similar technology sees practical use: as game technology has advanced, developers have gradually realized that good AI matters as much as realistic graphics. Whether in a real-time strategy game such as StarCraft or as an NPC in a role-playing game, advanced artificial intelligence can be not only a formidable opponent but also an excellent teammate.

Its greatest strength, of course, is its adaptability and capacity to learn. DeepMind says this approach will soon be applied to the medical field, in an attempt to tackle personalized medicine. And that is surely only the first step.