
Do you know the history of artificial intelligence development?

WBOY
2023-04-13 08:31:02


Among the countless technological advancements in the 20th and 21st centuries, the most influential is undoubtedly artificial intelligence. From search engine algorithms reshaping how we find information to Amazon’s Alexa in the consumer world, artificial intelligence has become a major technology driving the entire technology industry into the future.

Whether it's a fledgling startup or an industry giant like Microsoft, there is at least one department in the business working with artificial intelligence or machine learning. According to one study, the global artificial intelligence industry was valued at US$93.5 billion in 2021.

Artificial intelligence exploded as a force in the tech industry in the 2000s and 2010s, but it has been around in some form or fashion since at least the 1950s, and arguably goes back much further.

The broad outlines of the history of artificial intelligence, like the Turing test and the chess-playing computer, are ingrained in the popular consciousness, but a rich and dense history lies beneath that surface. This article distills that history and shows how artificial intelligence went from a mythical idea to a world-changing reality.

From Folklore to Fact

Although artificial intelligence is often considered a cutting-edge concept, humans have been imagining it for thousands of years, and those imaginings have had a real influence on the field's modern progress. Examples include Talos, the bronze automaton said to protect the Greek island of Crete, and the alchemical attempts to create artificial humans during the Renaissance. Characters such as Frankenstein's monster, HAL 9000 from 2001: A Space Odyssey, and Skynet from the Terminator series are just some of the ways modern fiction has depicted artificial intelligence.

One of the most influential fictional concepts in the history of artificial intelligence is Isaac Asimov’s Three Laws of Robotics. These laws are often cited by real-world researchers and businesses when they create their own laws of robotics.

In fact, when the UK's Engineering and Physical Sciences Research Council and Arts and Humanities Research Council published their five principles for designers, builders, and users of robots, they explicitly cited Asimov as a reference point, while noting that Asimov's laws simply don't work in practice.

Computers, Games, and the Turing Test

In the 1940s, while Asimov was writing his Three Laws, researcher William Grey Walter was building a rudimentary, mechanical precursor to artificial intelligence. Known as tortoises or turtles, these tiny robots could detect and react to light and to contact with their plastic shells, and they operated without the use of a computer.

In the 1960s, Johns Hopkins University built another computer-less autonomous robot, the Beast, which could navigate the university's halls using sonar and, when its battery ran low, find a wall outlet and plug itself in to recharge.

However, the development of artificial intelligence as we know it today is inextricably linked to the development of computer science. Alan Turing proposed the famous Turing test in his 1950 paper "Computing Machinery and Intelligence," which remains influential today. Many early artificial intelligence programs were developed to play games, such as Christopher Strachey's checkers program for the Ferranti Mark I computer.

In 1956, Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester held the Dartmouth workshop, where McCarthy coined the term "artificial intelligence" as the name for the emerging field.

The workshop was also where Allen Newell and Herbert Simon first demonstrated their Logic Theorist computer program, developed with the help of programmer Cliff Shaw. Logic Theorist was designed to prove mathematical theorems the way human mathematicians do.

Games and mathematics were the focus of early artificial intelligence because they lend themselves readily to the "reasoning as search" principle. Reasoning as search, also known as means-ends analysis (MEA), is a problem-solving method that follows three basic steps:

  • Determine the current state of the problem you are facing (for example, you feel hungry).
  • Determine the ultimate goal (you no longer feel hungry).
  • Determine the actions you need to take to get from the current state to the goal.

This was an early precursor to a basic principle of artificial intelligence: if the actions don't solve the problem, find a new set of actions and repeat until the problem is solved.
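To make the idea concrete, here is a minimal sketch of reasoning as search in Python. It is an illustration only, not a historical program; the state names and actions (a toy "hungry" problem) are invented for the example.

```python
from collections import deque

def reason_as_search(start_state, goal_state, actions):
    """Minimal 'reasoning as search' sketch: try sequences of actions
    (breadth-first) until one transforms the start state into the goal state."""
    frontier = deque([(start_state, [])])    # (current state, actions taken so far)
    visited = {start_state}

    while frontier:
        state, plan = frontier.popleft()
        if state == goal_state:              # step 2: the ultimate goal is reached
            return plan
        for name, apply_action in actions:   # step 3: try each available action
            new_state = apply_action(state)
            if new_state not in visited:
                visited.add(new_state)
                frontier.append((new_state, plan + [name]))
    return None                              # no sequence of actions solves the problem

# Toy problem (step 1: the current state is "hungry"; the goal is "not hungry").
actions = [
    ("make a sandwich", lambda s: "has food" if s == "hungry" else s),
    ("eat",             lambda s: "not hungry" if s == "has food" else s),
]
print(reason_as_search("hungry", "not hungry", actions))  # ['make a sandwich', 'eat']
```

Each loop iteration checks the current state against the goal and otherwise extends the plan with another action, which is exactly the "repeat until the problem is solved" loop described above.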

Neural Networks and Natural Language

Artificial intelligence research surged in the 1950s and 1960s, as Cold War-era governments were willing to invest in anything that might give them an advantage over the other side, and organizations such as DARPA provided significant funding.

This research drove a series of advances in machine learning. One example is the use of heuristics, thinking shortcuts that block off problem-solving paths an AI might otherwise explore but that are unlikely to achieve the desired result.

The artificial neural network, first proposed in the 1940s, was finally implemented in 1958 thanks to funding from the U.S. Office of Naval Research. Another major focus of researchers during this period was getting artificial intelligence to understand human language.

In 1966, Joseph Weizenbaum unveiled ELIZA, the first chatbot, something internet users around the world can thank him for. One of the most influential early developments in this line of research was Roger Schank's conceptual dependency theory, which attempted to represent sentences in terms of a small set of basic concepts.

The First Winter of Artificial Intelligence

In the 1970s, the optimism about artificial intelligence research that had prevailed in the 1950s and 1960s began to fade. Funding dried up as AI research ran into a myriad of real-world problems, chief among them the limits of available computing power.

Bruce G. Buchanan explained in an article for AI Magazine: "Early programs were necessarily limited by the size and speed of memory and processors, as well as the relative clumsiness of early operating systems and languages." As funding disappeared and optimism faded, this period became known as the first AI winter.

During this period, AI researchers hit setbacks, and interdisciplinary disagreements emerged. The publication of Perceptrons by Marvin Minsky and Seymour Papert in 1969 stalled the field of neural networks, which saw little progress until the 1980s.

Two broad camps then emerged. One camp favored logical and symbolic reasoning for training its artificial intelligence, hoping AI could solve formal problems such as proving mathematical theorems.

John McCarthy introduced the idea of using logic in artificial intelligence with his 1959 proposal. In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically for logic programming and is still used in artificial intelligence today.

At the same time, another camp wanted artificial intelligence to tackle problems that required it to think in a looser, more human-like way. In a 1975 paper, Marvin Minsky described the approach many of these researchers used, known as "frames."

Frames are a way for both humans and artificial intelligence to make sense of the world. When we encounter a new person or situation, we draw on memories of similar people or situations to form a rough expectation. When ordering at a new restaurant, for example, we may not know the menu or the person serving us, but past experience at other restaurants still gives us a general idea of how to place an order.
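As a rough illustration (not Minsky's own formalism), here is a minimal Python sketch of a frame: a structure with named slots whose default values come from past experience and can be overridden by details of the current situation. The slot names and values are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """A frame: named slots with default values drawn from prior experience."""
    name: str
    slots: dict = field(default_factory=dict)

    def instantiate(self, **observed):
        """Interpret a new situation: start from the defaults, then override
        any slot we have actually observed this time."""
        filled = dict(self.slots)
        filled.update(observed)
        return filled

# A generic "restaurant visit" frame built from past experience.
restaurant = Frame("restaurant-visit", {
    "greeted_by": "host",
    "order_via": "waiter",
    "payment": "after the meal",
})

# At an unfamiliar restaurant we only observe that ordering happens at a counter,
# but the frame still tells us roughly what to expect for everything else.
print(restaurant.instantiate(order_via="counter"))
# {'greeted_by': 'host', 'order_via': 'counter', 'payment': 'after the meal'}
```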

From Academia to Industry

The 1980s brought a return of enthusiasm for artificial intelligence. Japan's Fifth Generation computer project, for example, sought to create intelligent computers that ran Prolog the way ordinary computers run machine code, which further piqued the interest of American businesses. Not wanting to fall behind, American companies poured money into artificial intelligence research.

Together, this renewed interest and the shift toward industrial research pushed the value of the artificial intelligence industry to US$2 billion by 1988. Adjusted for inflation, that figure is closer to US$5 billion in 2022 dollars.

The Second Winter of Artificial Intelligence

However, in the 1990s, interest began to wane, just as it had in the 1970s. After a decade of development, the Fifth Generation project had failed to reach many of its goals. And as companies found it cheaper and easier to buy mass-produced general-purpose chips and implement AI applications in software, the market for dedicated AI hardware such as Lisp machines collapsed, shrinking the overall AI market with it.

In addition, the expert systems that had demonstrated the commercial promise of artificial intelligence earlier in the decade began to show fatal flaws. The longer a system stayed in use, the more rules it accumulated and the larger the knowledge base it needed to operate. Eventually, the manpower required to maintain and update the knowledge base grew until it became financially unsustainable. A combination of these and other factors led to the second AI winter.

Enter the New Millennium and the Modern World of Artificial Intelligence

In the late 1990s and early 2000s, there were signs that an AI spring was coming. Some of AI's oldest goals were finally achieved, such as Deep Blue's 1997 victory over then-reigning world chess champion Garry Kasparov, a landmark moment for the field.

More sophisticated mathematical tools and collaborations with fields such as electrical engineering have transformed artificial intelligence into a more logic-focused scientific discipline.

At the same time, artificial intelligence was applied in many new industry areas, such as Google's search engine algorithms, data mining, and speech recognition. New supercomputers and programs found themselves competing against, and sometimes beating, top human opponents, as when IBM's Watson won Jeopardy!

One of the most impactful pieces of artificial intelligence in recent years has been Facebook's news feed algorithm, which determines which posts you see and when, in an attempt to curate the online experience for the platform's users. Algorithms with similar roles can be found on sites like YouTube and Netflix, where they predict what viewers will want to watch next based on their viewing history.

Sometimes, these innovations are not even thought of as artificial intelligence. As Nick Bostrom told CNN in a 2006 interview: "A lot of cutting-edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough, it's not labeled AI anymore."

The tendency not to call useful artificial intelligence "AI" did not carry into the 2010s. Now, startups and tech giants alike rush to claim that their latest products are powered by artificial intelligence or machine learning. In some cases, the desire is so strong that companies will claim their products are AI-driven even when the AI's actual functionality is questionable.

Whether it’s through the aforementioned social media algorithms or virtual assistants like Amazon’s Alexa, artificial intelligence has entered many people’s homes. Through winters and bursting bubbles, the field of artificial intelligence has persevered and has become a very important part of modern life, and is likely to grow exponentially in the coming years.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for removal.