How does artificial intelligence redefine the meaning of human beings?
A few months ago, I made the long trek to the wooded campus of IBM Research in Yorktown Heights, New York, to get an early glimpse of the eagerly anticipated future of artificial intelligence. This is where the supercomputer "Watson" was developed; in 2011, Watson won first place on the quiz show Jeopardy!. The original Watson computer is still here: a system roughly the size of a bedroom, ringed by ten upright refrigeration units. Technicians reach cables through small holes in the enclosure to connect to the backs of the machines. Inside, the system is surprisingly warm, as if the computer cluster were alive.

Today's Watson is significantly different. It no longer lives only in a row of cabinets but is spread across a cloud of open servers, where it can run hundreds of artificial-intelligence "instances" in real time. Like everything in the cloud, Watson serves customers all over the world at the same time; they can connect with their phones, their desktops, or their own data servers. This kind of artificial intelligence can be scaled up or down on demand. Because artificial intelligence improves as people use it, Watson keeps getting smarter; whatever it learns in one instance is immediately passed on to the others. And it is not a single program but a collection of software engines: its logic-deduction engine and its language-analysis engine can run on different code, different chips, and in different locations, with all of these intelligent elements converging into a unified stream of intelligence.

Users can tap into this always-on intelligence directly, or through third-party applications built on this artificial-intelligence cloud service. Like many far-sighted parents, IBM wants Watson to go into medicine, so it is no surprise that the application it is developing is a medical diagnostic tool. Most previous attempts to use artificial intelligence for diagnosis ended in fiasco, but Watson has been fruitful. Simply put, when I entered the symptoms of a disease I had once contracted in India, it gave me a list of candidate diagnoses ranked from most to least probable. It concluded that I most likely had giardiasis, which was exactly right. The technology is not yet open to patients directly; IBM offers Watson's intelligence to partners, helping them build user-friendly interfaces for doctors and hospitals. "I believe something like Watson, whether machine or human, will soon be the world's best diagnostician," said Alan Greene, chief medical officer of Scanadu, a startup that is using cloud artificial intelligence to build a medical device inspired by the movie Star Trek. "At the rate artificial intelligence is improving, a child born today will rarely need to see a doctor to get a diagnosis by the time they are an adult."

Medicine is only the beginning. All the major cloud-computing companies, plus dozens of startups, are scrambling to launch cognitive services like Watson's. According to the quantitative-analysis firm Quid, artificial intelligence has attracted more than $17 billion in investment since 2009. Last year alone, 322 companies with AI-like technology received more than $2 billion. Facebook and Google have recruited researchers for their in-house artificial-intelligence research groups. Yahoo, Intel, Dropbox, LinkedIn, Pinterest, and Twitter have all acquired artificial-intelligence companies since last year. Private investment in artificial intelligence has grown an average of 62 percent a year for the past four years, a rate that is expected to continue.

Amid all this activity, a picture of our artificial-intelligence future is coming into view. It is neither the supercomputer HAL 9000 from the novel and film 2001: A Space Odyssey (a standalone machine with an extraordinary, if potentially homicidal, human-like consciousness) nor the rapture imagined by Singularity theorists. The artificial intelligence on the horizon looks more like Amazon Web Services: cheap, reliable, industrial-grade digital intelligence running behind everything, flickering briefly into view now and then, all but invisible the rest of the time. This common utility will supply as much artificial intelligence as you need, and no more. Like all utilities, artificial intelligence will be supremely boring, even as it transforms the Internet, the global economy, and civilization. As electricity did more than a century ago, it will enliven inert objects. Everything we formerly electrified we will now cognify. Practical new artificial intelligence will also improve the lives of individuals (deepening our memory, speeding our cognition) and of humanity as a group. It is hard to imagine anything that would not become new, different, and interesting with the addition of some extra intelligence. In fact, the business plans of the next 10,000 startups are easy to forecast: "Take something and add artificial intelligence." This is a big deal, and it is close at hand.

Around 2002 I attended a small gathering at Google, back before its IPO, when Google was focused entirely on web search. I struck up a casual conversation with Larry Page, Google's brilliant co-founder and CEO. "Larry, I still don't get it. There are so many search companies already. Why do free web search? How does that get you anywhere?" My unimaginative blindness is solid evidence that predicting is hard, especially about the future. But in my defense, this was before Google had beefed up its ad-auction scheme into real income, and long before YouTube or any other major acquisitions. I was not the only avid user of its search engine who thought it would not last long. But Page's reply has always stuck with me: "Oh, we're actually making an artificial intelligence."

I have thought about that conversation a lot over the past few years, during which Google has acquired 14 companies in artificial intelligence and robotics. Given that search contributes 80 percent of Google's revenue, at first glance you might think Google is expanding its AI portfolio to improve its search capability. But I think it is the other way around: Google is using search to improve its artificial intelligence, not artificial intelligence to improve its search. Every time you type a query, click on a search-generated link, or create a link on the web, you are training Google's AI. When you type "Easter Bunny" into the image-search bar and click on the image that looks most like the Easter Bunny, you are teaching the AI what an Easter Bunny looks like. Each of the roughly 12.1 billion queries that Google's 1.2 billion searchers conduct each day tutors the deep-learning AI over and over again. With another 10 years of steady improvement to its algorithms, plus a thousand times more data and a hundred times more computing resources, Google will have an unrivaled artificial intelligence. My prediction: by 2024, Google's main product will no longer be search but artificial intelligence.

This view naturally invites skepticism. For almost 60 years, artificial-intelligence researchers have predicted that the era of AI was imminent, yet until a few years ago it still seemed out of reach. There was even a term for the era when research results were scarce and research funding scarcer: the AI winter. So have things really changed?

Yes, three recent breakthroughs have brought the long-awaited artificial intelligence close at hand:

1. Low-cost parallel computing

Thinking is an inherently parallel process in humans: hundreds of millions of neurons fire simultaneously, generating synchronized waves of computation across the cerebral cortex. Building a neural network, the primary architecture of artificial-intelligence software, likewise requires many different processes to run at once. Each node of a neural network loosely simulates a neuron in the brain, interacting with its neighbors to make sense of the signals it receives. To understand a spoken word, a program must be able to hear all the phonemes as they relate to one another; to recognize an image, it must see every pixel in the context of the pixels around it. Both are deeply parallel tasks. Until recently, however, a standard computer processor could handle only one task at a time.
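The node-and-neighbor idea above can be made concrete with a minimal sketch in Python (purely illustrative, with made-up weights, not anyone's production system): each node computes a weighted sum of incoming signals and squashes it through an activation function, and because the nodes in a layer do not depend on one another, they can all be evaluated at the same time.

```python
import math

def node_output(inputs, weights, bias):
    """One neural-network node: a weighted sum of its neighbors'
    signals, squashed through a sigmoid activation."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

def layer_output(inputs, weight_rows, biases):
    """All nodes in a layer are independent of each other, so this
    loop could run in parallel -- the parallelism described above."""
    return [node_output(inputs, w, b) for w, b in zip(weight_rows, biases)]

signals = [0.5, -1.0, 2.0]                      # hypothetical inputs
weights = [[0.1, 0.2, 0.3], [-0.5, 0.4, 0.1]]   # one row per node
biases = [0.0, 0.1]
print(layer_output(signals, weights, biases))
```

A GPU performs exactly this kind of many-independent-multiplications workload in hardware, which is why the chips described in the next section mattered so much.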

Things began to change more than a decade ago, when a new kind of chip, the graphics processing unit (GPU), appeared to meet the intensely visual, and intensely parallel, demands of video games, in which millions of pixels must be recalculated many times each second. That workload required a specialized parallel-computing chip, added to the computer's motherboard as a supplement. The parallel graphics chips worked, and gameplay improved dramatically. By 2005, GPUs were being produced in such quantities that their price dropped. In 2009, Andrew Ng and a team at Stanford University realized that GPU chips could run neural networks in parallel.

That discovery opened up new possibilities for neural networks, which can encompass hundreds of millions of connections between nodes. Traditional processors took several weeks to calculate the cascading possibilities of a neural net with 100 million parameters; Ng found that a cluster of GPUs could accomplish the same task in a day. Today, companies that run neural networks in the cloud routinely do so on GPUs: Facebook uses the technology to identify your friends in photos, and Netflix relies on it to generate reliable recommendations for its 50 million subscribers.

2. Big data

Every intelligence has to be taught. Even a human brain, naturally adept at categorizing things, still needs to see a dozen examples before it can reliably distinguish cats from dogs. That is even truer of artificial minds: even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the artificial-intelligence breakthrough lies in the massive data we have collected about our world, which provides the schooling AI needs. Giant databases, self-tracking, web cookies, online footprints, trillions of bytes of storage, decades of search results, Wikipedia, and the entire digital universe have all become teachers making artificial intelligence smarter.
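Why examples matter can be seen in one of the simplest possible learners, a nearest-neighbor classifier (an illustrative toy, not Watson's method): it labels a new case by finding the most similar training example, so its judgment is only as good as the coverage of the data it has been fed.

```python
def classify(point, examples):
    """1-nearest-neighbor: label a point by its closest training
    example. More, and more varied, examples give better coverage
    of the space, so the learner's answers improve with data."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda ex: dist(point, ex[0]))[1]

# Hypothetical 2-D "features" standing in for cat-like and dog-like traits.
training = [((1.0, 1.0), "cat"), ((5.0, 5.0), "dog"),
            ((1.5, 0.5), "cat"), ((4.0, 6.0), "dog")]
print(classify((1.2, 0.8), training))  # -> cat
```

Deep-learning systems are vastly more sophisticated, but the dependence on training data is the same: the avalanche of collected examples is what finally made them work.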

3. Better algorithms

Digital neural networks were invented in the 1950s, but it took computer scientists decades to learn how to tame the astronomically huge combinatorial relationships among millions, or even billions, of neurons. The key was to organize the neural net into stacked layers. Take the relatively simple task of recognizing a face. When a group of bits in a neural net is found to trigger a pattern, the image of an eye, for instance, that result is passed up to another layer of the net for further parsing. The next layer might group two eyes together and pass that meaningful chunk of data on to a third layer, which can combine the eyes with the pattern of a nose. Recognizing a face can take millions of these nodes (each producing a calculation that feeds the nodes around it), stacked as many as 15 layers high. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed "deep learning". He found a way to mathematically optimize the results from each layer so that the network learned faster as results moved up the stack. A few years later, when deep-learning algorithms were ported to GPU clusters, their speed improved dramatically. The code of deep learning alone is not enough to generate complex logical thinking, but it is a main component of all current artificial-intelligence products, including IBM's Watson, Google's search engine, and Facebook's algorithms.
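The stacked-layer idea can be sketched in a few lines of Python (a toy with invented weights, nothing like a trained network): each layer combines all of the previous layer's outputs and passes the result upward, so representations can grow more abstract level by level, edges to eyes to faces in the example above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: every node combines all of the previous layer's
    outputs into a new, higher-level pattern."""
    return [sigmoid(b + sum(w * x for w, x in zip(row, inputs)))
            for row, b in zip(weights, biases)]

def forward(x, stack):
    """Pass each layer's result up to the next, as the text describes."""
    for weights, biases in stack:
        x = layer(x, weights, biases)
    return x

# A toy three-layer stack with made-up weights (illustrative only).
stack = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.0, 0.1]),
    ([[1.0, -1.0], [0.2, 0.4]], [0.0, 0.0]),
    ([[0.7, 0.7]], [-0.5]),
]
print(forward([1.0, 0.0], stack))
```

Hinton's contribution, roughly, was a way to train the weights in such a stack efficiently; real networks have millions of weights learned from data rather than written by hand.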

This perfect storm of parallel computation, big data, and deeper algorithms turned 60 years of cultivation into an overnight success for artificial intelligence. And the convergence suggests that as long as these technological trends continue, and there is no reason to think they will not, artificial intelligence will keep improving.

As the trend continues, this cloud-based artificial intelligence will become an increasingly inseparable part of our daily life. But there is no free lunch. Cloud computing obeys the law of increasing returns [4], sometimes called the network effect: as a network grows, its value increases at a faster rate than its size. The bigger the network, the more attractive it is to new users, which makes it bigger still, and thus more attractive, and so on. A cloud that serves artificial intelligence obeys the same law. The more people use an AI, the smarter it gets; the smarter it gets, the more people use it; the more people use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that no emerging competitor can match it. As a result, the future of artificial intelligence is likely to be ruled by an oligarchy of two or three companies offering large, general-purpose, cloud-based commercial intelligence.
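The faster-than-linear growth claimed above has a common (if simplified) formalization the text does not name: Metcalfe's law, under which a network's value scales with the number of possible pairwise connections, n(n - 1)/2, rather than with the user count n.

```python
def possible_connections(n):
    """Pairwise links among n users: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(users, possible_connections(users))
# Users grow 100x (10 -> 1000) while possible connections grow
# about 11,000x (45 -> 499,500): value outpacing size, which is
# the increasing-returns dynamic described above.
```

Whether value really tracks pairwise connections is debated among economists, but any superlinear relationship produces the winner-take-most dynamic the paragraph describes.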

In 1997, Watson's predecessor, IBM's Deep Blue, beat the reigning chess grand master Garry Kasparov in a famous man-versus-machine match. After computers won a few more matches, people largely lost interest in such contests. You might think the story ends there, but Kasparov realized that he could have performed better had he enjoyed the same instant access to a massive database of all previous chess moves that Deep Blue had. If that database tool was fair for an artificial intelligence, why not for a human? To explore the idea, Kasparov pioneered the concept of man-plus-machine matches: using artificial intelligence to augment human chess players rather than pitting man against machine.

Now called freestyle chess matches, these contests are a little like mixed-martial-arts bouts: players may use any fighting technique they want. You can play unassisted. You can be the hand of a superintelligent chess program, simply moving the pieces it suggests. Or you can be the kind of half-human, half-machine "centaur" player Kasparov advocated, listening to the move suggestions an AI whispers in your ear but overriding them from time to time, much the way we use GPS navigation while driving. In the 2014 freestyle tournament, pure artificial-intelligence chess engines won 42 games, while centaur players won 53. The best chess player in the world today is the centaur Intagrand, a team of several humans and several different chess programs.

But the most surprising thing is that the advent of artificial intelligence has not lowered the level of purely human chess players. Quite the opposite: cheap, superintelligent chess software has drawn more people to the game, the number of tournaments has grown, and players have improved. There are now more than twice as many chess grand masters as there were when Deep Blue beat Kasparov. Magnus Carlsen, the top-ranked human player today, trained with artificial intelligence; he is considered the most computer-like of all human players, and the highest-rated human player in history.

If artificial intelligence can help humans become better chess players, it can also help us become better pilots, doctors, judges, and teachers. Most of the commercial work done by artificial intelligence will be specialized, strictly limited to what the smart software can do: a product might translate one language into another, but not into a third; it can drive a car, but cannot hold a conversation; it can recall every pixel of every video on YouTube, but cannot anticipate your daily work. In the next decade, 99 percent of the artificial intelligence you interact with, directly or indirectly, will be hyperspecialized, extremely smart "experts".

In fact, this is not real intelligence, at least not the kind we think of when we think carefully about it. Indeed, intelligence may be a liability, especially if by intelligence we mean our peculiar self-awareness, with all its frantic loops of introspection and messy streams of self-consciousness. We want our driverless car to focus wholeheartedly on the road, not to obsess over its earlier quarrel with the garage. The general-practitioner "Watson" at the hospital should concentrate on its work, never wondering whether it should have majored in English. As artificial intelligence develops, we may have to engineer ways to keep it from becoming conscious; the premium artificial-intelligence services we advertise may well be the consciousness-free ones.

What we want is not intelligence but artificial smartness. Unlike general intelligence, smartness is focused, measurable, and specific. It can also think in ways completely different from human cognition. Here is a nice example of non-human thinking. At the South by Southwest festival in Austin, Texas, this March, Watson performed a clever stunt: IBM researchers had fed Watson a database of online recipes, USDA nutrition tables, and research reports on flavor. With that data, Watson invented new dishes based on flavor profiles and existing color models of dishes. One popular dish Watson created was a tasty version of fish and chips, made from fish cured in lime juice and fried plantains. At the IBM lab in Yorktown Heights, I relished that dish, and ate another Watson invention as well: a Swiss-Thai asparagus quiche. It tasted good!

Non-human intelligence is not a bug; it is a feature. The chief advantage of artificial intelligences is their "alien intelligence". An AI thinks about food differently from any chef, which in turn lets us look at food differently, or think differently about manufacturing materials, clothes, financial derivatives, or any branch of science and art. The otherness of artificial intelligence is more valuable to us than its speed or power.

In fact, artificial intelligence will help us better understand what we originally meant by intelligence. In the past, we might have said that only a superintelligent AI could drive a car or beat a human on Jeopardy! or at chess. But once artificial intelligence did each of those things, we deemed the achievement obviously mechanical and rigid, hardly worth calling true intelligence. Every success of artificial intelligence redefines it.

But we have not only been redefining the meaning of artificial intelligence; we have been redefining the meaning of being human. Over the past 60 years, as mechanical processes have replicated behaviors and talents we once thought uniquely human, we have had to change our views on what separates us from machines. As we invent more kinds of artificial intelligence, we will have to surrender still more of what is supposedly unique about humans. In the next decade, and even the next century, we will be caught in a protracted identity crisis, continually asking ourselves what humans are for. Ironically, the greatest benefit of the practical artificial intelligence we encounter every day will not be increased productivity, an expanded economy, or a new way of doing science, although all of those will happen. The greatest benefit of artificial intelligence is that it will help define humanity. We need artificial intelligence to tell us who we are.