Among the countless technological advancements of the 20th and 21st centuries, the most influential is undoubtedly artificial intelligence. From search engine algorithms reshaping how we find information to Amazon’s Alexa in the consumer world, artificial intelligence has become a driving force carrying the entire technology industry into the future.
Whether it’s a fledgling startup or an industry giant like Microsoft, nearly every technology business has at least one team working with artificial intelligence or machine learning. By one market analysis, the global artificial intelligence industry was valued at US$93.5 billion in 2021.
Artificial intelligence exploded as a force in the tech industry in the 2000s and 2010s, but it has been around in some form or fashion since at least the 1950s, and arguably much longer.
The broad outlines of the history of artificial intelligence, like the Turing test and the chess-playing computer, are ingrained in the popular consciousness, but a rich and dense history exists beneath that common knowledge. This article distills that history and shows how artificial intelligence went from a mythical idea to a world-changing reality.
From Folklore to Fact
Although artificial intelligence is often considered a cutting-edge concept, humans have imagined it for thousands of years, and those imaginings have had a real influence on the field's modern achievements. Myth gave us Talos, the bronze automaton said to protect the Greek island of Crete, and Renaissance alchemy offered recipes for creating artificial men. Characters such as Frankenstein’s monster, HAL 9000 from 2001: A Space Odyssey, and Skynet from the Terminator series are just a few of the ways modern fiction has depicted artificial intelligence.
One of the most influential fictional concepts in the history of artificial intelligence is Isaac Asimov’s Three Laws of Robotics. These laws are often cited by real-world researchers and businesses when drafting their own guidelines for robotics.
In fact, when the UK's Engineering and Physical Sciences Research Council and Arts and Humanities Research Council published their five principles for designers, builders, and users of robots, they explicitly cited Asimov as a reference point, while noting that Asimov's laws simply don't work in practice.
Computers, Games, and the Turing Test
In the 1940s, while Asimov was writing the Three Laws, researcher William Grey Walter was building a rudimentary forerunner of artificial intelligence. Known as tortoises (or turtles), these small robots could detect and react to light and to contact with their plastic shells, and they operated without any computer at all.
In the late 1960s, Johns Hopkins University built another computer-less autonomous robot, the Beast, which could navigate the university's halls using sonar and, when its battery ran low, find a wall outlet and plug itself in to recharge.
However, the development of artificial intelligence as we know it today is inextricably linked to the development of computer science. In his 1950 paper "Computing Machinery and Intelligence", Alan Turing proposed the famous Turing test, which remains influential today. Many early artificial intelligence programs were built to play games, such as Christopher Strachey's checkers program for the Ferranti Mark I computer.
The term "artificial intelligence" was coined in 1956 at a Dartmouth College workshop organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester; it was McCarthy who proposed the name for the emerging field.
The workshop was also where Allen Newell and Herbert Simon first demonstrated the Logic Theorist, a computer program developed with the help of programmer Cliff Shaw. The Logic Theorist was designed to prove mathematical theorems the way a human mathematician would.
Games and mathematics were the focus of early artificial intelligence because they lent themselves to the "reasoning as search" principle. Reasoning as search, also known as means-ends analysis (MEA), is a problem-solving method that follows three basic steps:
- Determine the current state of the problem you are facing (for example, you are hungry).
- Determine the goal state (you are no longer hungry).
- Determine the actions that will take you from the current state to the goal.
This was an early precursor to a core principle of artificial intelligence: if a set of actions doesn't solve the problem, find a new set of actions and repeat until the problem is solved.
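The steps above can be sketched as a simple state-space search. This is only an illustrative sketch, not any historical program: the toy problem, the function names, and the breadth-first strategy are our own choices.

```python
from collections import deque

def solve(start, goal, actions):
    """Breadth-first 'reasoning as search': apply actions to the current
    state until some sequence of actions reaches the goal state."""
    frontier = deque([(start, [])])   # (state, actions taken so far)
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path               # this action sequence solves the problem
        for name, apply_action in actions:
            nxt = apply_action(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None                       # no action sequence was found

# Toy problem: reach 10 from 0 using only "add 3" and "add 5".
actions = [("add3", lambda s: s + 3), ("add5", lambda s: s + 5)]
print(solve(0, 10, actions))          # ['add5', 'add5']
```

If no sequence of actions reaches the goal, the search reports failure, mirroring the "find a new set of actions and repeat" loop described above.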
Neural Networks and Natural Language
Artificial intelligence research surged in the 1950s and 1960s, as Cold War-era governments were willing to pour money into anything that might give them an advantage over the other side, and the field drew significant funding from organizations such as DARPA.
This research drove a series of advances in machine learning, such as heuristics: problem-solving shortcuts that prune search paths an AI might otherwise explore but that are unlikely to lead to the desired result.
The first artificial neural networks were proposed in the 1940s, and in 1958 Frank Rosenblatt built the perceptron, the first trainable neural network, thanks to funding from the U.S. Office of Naval Research. Another major focus for researchers in this period was getting artificial intelligence to understand human language.
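To give a flavor of what a perceptron does, here is a minimal sketch of the perceptron learning rule, trained on the logical AND function. The example, learning rate, and epoch count are illustrative choices, not details of Rosenblatt's original machine.

```python
# Perceptron learning rule: nudge the weights toward each training
# example's target whenever the current prediction is wrong.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out        # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The logical AND function as training data.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges; the famous limitation Minsky and Papert later highlighted is that a single perceptron cannot learn functions like XOR.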
In 1966, Joseph Weizenbaum unveiled ELIZA, the first chatbot and an ancestor of the bots internet users everywhere now take for granted. Another influential early development was Roger Schank's conceptual dependency theory, which attempted to reduce sentences to a small set of basic concepts.
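ELIZA worked by matching keywords in the user's input and reflecting fragments of it back as questions. A minimal sketch of that idea follows; the rules here are hypothetical and far simpler than ELIZA's actual script.

```python
import re

# ELIZA-style rules: a keyword pattern plus a response template that
# reuses whatever the pattern captured from the user's sentence.
RULES = [
    (re.compile(r"i need (.*)", re.I),   "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),     "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I),  "Is that the real reason?"),
]

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."        # fallback when no keyword matches

print(respond("I need a vacation"))      # Why do you need a vacation?
print(respond("Nice weather today"))     # Please tell me more.
```

The trick, then as now, is that shallow pattern matching can feel surprisingly conversational without any real understanding behind it.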
The First Winter of Artificial Intelligence
The optimism that had prevailed in artificial intelligence research through the 1950s and 1960s began to fade in the 1970s. Funding dried up as AI research ran into a host of real-world problems, chief among them the limits of available computing power.
As Bruce G. Buchanan explained in an article in AI Magazine: "Early programs were necessarily limited by the size and speed of memory and processors, as well as the relative clumsiness of early operating systems and languages." As funding disappeared and optimism faded, this period became known as the AI winter.
During this period, AI researchers suffered setbacks and interdisciplinary disagreements emerged. The 1969 publication of Perceptrons, by Marvin Minsky and Seymour Papert, stalled the field of neural networks so thoroughly that little progress was made there until the 1980s.
Two broad camps then emerged. One favored logical and symbolic reasoning for training artificial intelligence, hoping it could solve logical problems such as mathematical theorems.
John McCarthy had introduced the idea of using logic in artificial intelligence with his 1959 proposal "Programs with Common Sense." In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically for logic programming and is still used in artificial intelligence today.
Meanwhile, another camp was trying to get artificial intelligence to solve problems that required it to think the way humans do. In a 1975 paper, Marvin Minsky described the approach commonly used by these researchers, known as "frames."
Frames are a way that both humans and artificial intelligence can make sense of the world. When we encounter a new person or situation, we draw on memories of similar people and situations to form a rough expectation. Ordering at a new restaurant, for example, we may not know the menu or the person serving us, but past experience at other restaurants gives us a general idea of how to place an order.
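The restaurant example maps naturally onto a frame as a structure of named slots with default values, which more specific frames inherit and override. A minimal sketch, with hypothetical slot names chosen for illustration:

```python
# A frame: named slots with defaults, inherited from more general frames.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent          # a more general frame to inherit from
        self.slots = slots

    def get(self, slot):
        """Look up a slot, falling back on inherited defaults."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# Generic expectations built up from past restaurant visits...
restaurant = Frame("restaurant", order_from="server",
                   pay="after eating", serves="food")
# ...specialized on the fly for an unfamiliar kind of place.
food_truck = Frame("food truck", parent=restaurant,
                   order_from="window", pay="before eating")

print(food_truck.get("order_from"))   # window (overridden)
print(food_truck.get("serves"))       # food (inherited default)
```

The appeal for researchers was exactly this mix: sensible defaults that apply immediately, plus the ability to revise individual slots as new specifics arrive.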
From Academia to Industry
The 1980s marked a return of enthusiasm for artificial intelligence. Japan's Fifth Generation project, which sought to create intelligent computers running Prolog the way ordinary computers run conventional code, piqued the interest of American businesses. Not wanting to be left behind, American companies began investing in artificial intelligence research of their own.
Together, this renewed interest and the shift toward industrial research pushed the value of the artificial intelligence industry to US$2 billion by 1988, a figure closer to $5 billion when adjusted for inflation to 2022.
The Second Winter of Artificial Intelligence
However, in the 1990s, interest began to wane just as it had in the 1970s. After a decade of work, the Fifth Generation project had failed to achieve many of its goals. And as companies found it cheaper and easier to buy mass-produced general-purpose chips and implement AI applications in software, the market for dedicated AI hardware, such as Lisp machines, collapsed, shrinking the overall market with it.
In addition, the expert systems that had demonstrated the commercial promise of artificial intelligence earlier in the decade began to show fatal flaws. As such a system stayed in use, it kept accumulating rules and required an ever larger knowledge base. Eventually, the manpower needed to maintain and update the knowledge base grew until it became financially unsustainable. A combination of these and other factors brought on the second AI winter.
Into the New Millennium and the Modern World of Artificial Intelligence
In the late 1990s and early 2000s, there were signs that an AI spring was coming. Some of AI's oldest goals were finally realized, such as Deep Blue's 1997 victory over then reigning world chess champion Garry Kasparov, a landmark moment for the field.
More sophisticated mathematical tools and collaboration with fields such as electrical engineering turned artificial intelligence into a more rigorous, logic-focused scientific discipline.
At the same time, artificial intelligence found its way into many new areas of industry, including Google's search engine algorithms, data mining, and speech recognition. New supercomputers and programs competed against, and even beat, top human opponents, as when IBM's Watson won Jeopardy! in 2011.
One of the most impactful pieces of artificial intelligence in recent years has been Facebook's news feed algorithm, which determines which posts you see and when, in an attempt to curate an online experience for the platform's users. Algorithms with similar jobs run on sites like YouTube and Netflix, predicting what viewers will want to watch next based on their viewing history.
Sometimes, these innovations are not even thought of as artificial intelligence. As Nick Bostrom told CNN in 2006: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."
That reluctance to call useful AI by its name did not last into the 2010s. Now startups and tech giants alike rush to claim that their latest products are powered by artificial intelligence or machine learning. In some cases the urge is so strong that companies declare their products AI-powered even when the AI's actual functionality is dubious.
Whether through the social media algorithms described above or virtual assistants like Amazon’s Alexa, artificial intelligence has found its way into many people’s homes. Through winters and burst bubbles, the field of artificial intelligence has persevered to become a vital part of modern life, and it is likely to grow enormously in the coming years.
The above is the detailed content of Do you know the history of artificial intelligence development?. For more information, please follow other related articles on the PHP Chinese website!
