Game Over? The battle between strong AI and weak AI
A few weeks ago, Google’s artificial intelligence (AI) subsidiary DeepMind published a paper describing a “generalist” agent it calls Gato, which can perform many different tasks using a single trained model, and claimed that artificial general intelligence (AGI) can be achieved through sheer scale. The claim sparked fierce debate in the AI industry. While this may seem like an academic question, the reality is that if AGI really is around the corner, our society, including our laws, regulations, and economic models, is not ready for it.
Indeed, using one and the same trained model, the generalist agent Gato can play Atari games, caption images, chat, and stack blocks with a real robotic arm. It can also decide, based on its context, whether to output text, joint torques, button presses, or other tokens. It therefore does appear to be a more general AI model than popular models such as GPT-3, DALL-E 2, PaLM, or Flamingo, which become very good at narrow, specific tasks such as natural-language writing, language understanding, or generating images from text descriptions.
This led Nando de Freitas, a DeepMind scientist and Oxford University professor, to claim that “It’s all about scale now! Game over!” and to argue that AGI can be achieved through sheer scale, that is, larger models, larger training data sets, and more computing power. But what is the “game” de Freitas is talking about? And what is this debate really about?
Before discussing the details of this debate and its implications for wider society, it is worth taking a step back to understand the background.
The meaning of the term “artificial intelligence” has changed over the years, but at a high level it can be defined as the field that studies intelligent agents: any system that senses its environment and takes actions to maximize its chances of achieving its goals. This definition intentionally leaves aside the question of whether agents or machines can actually “think,” a question that has long been the subject of heated debate. In his famous 1950 paper “Computing Machinery and Intelligence,” the British mathematician Alan Turing proposed the “imitation game” and argued that instead of asking whether a machine can think, we should focus on whether a machine can exhibit intelligent behavior.
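As a minimal, purely illustrative sketch of that agent definition (everything here, including the toy environment and goal, is invented for the example, not taken from any real system):

```python
# A minimal sketch of the "intelligent agent" definition above:
# a system that repeatedly senses its environment and chooses the
# action that best advances its goal. The Environment class and
# goal are hypothetical placeholders.

class Environment:
    """A trivial 1-D world; the agent's goal is to reach position 10."""
    def __init__(self):
        self.position = 0

    def sense(self):
        return self.position

    def apply(self, action):
        self.position += action

def choose_action(observation, goal=10):
    # Pick the action that maximizes progress toward the goal.
    candidates = [-1, 0, 1]
    return max(candidates, key=lambda a: -abs(goal - (observation + a)))

env = Environment()
for _ in range(12):
    obs = env.sense()               # perceive the environment
    env.apply(choose_action(obs))   # act to maximize goal achievement
print(env.sense())  # 10
```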
This distinction conceptually leads to two main branches of artificial intelligence: strong AI and weak AI. Strong AI, also known as artificial general intelligence (AGI), is a theoretical form of AI in which a machine possesses intelligence equal to that of a human: it would be self-aware and able to solve problems, learn, and plan for the future. This is the most ambitious definition of artificial intelligence, the “holy grail” of the field, but for now it remains pure theory. Approaches to strong AI often revolve around symbolic AI, in which the machine forms an internal symbolic representation of the physical and abstract “world” so that rules and reasoning can be applied to it in order to learn and make decisions.
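As a rough sketch of what “applying rules to symbolic representations” can look like (the facts and the single rule below are invented for illustration), a classic pattern is forward chaining: rules are applied repeatedly to a set of symbolic facts until nothing new can be derived:

```python
# A toy forward-chaining reasoner in the symbolic-AI style described
# above: the "world" is a set of symbolic facts, and a rule derives
# new facts from existing ones. Facts and rule are invented examples.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_rules(facts):
    # Rule: parent(A, B) and parent(B, C) -> grandparent(A, C).
    derived = set()
    for (r1, a, b) in facts:
        for (r2, c, d) in facts:
            if r1 == r2 == "parent" and b == c:
                derived.add(("grandparent", a, d))
    return derived

# Keep applying the rule until no new facts appear (a fixed point).
while True:
    new = apply_rules(facts) - facts
    if not new:
        break
    facts |= new

print(("grandparent", "alice", "carol") in facts)  # True
```

The scaling problem the article mentions is visible even here: every new relation and rule multiplies the combinations to check, which is one reason purely symbolic world models quickly become unwieldy.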
While research in this area continues, it has so far had limited success in solving real-life problems, because internal or symbolic representations of the world rapidly become unmanageable.
Weak AI, also known as “narrow AI,” is a less ambitious approach that focuses on performing a specific task, such as answering questions based on user input, recognizing faces, or playing chess, while relying on human intervention to define the parameters of its learning algorithms and to provide relevant training data to ensure accuracy.
However, significant progress has been made in weak AI. Well-known examples include facial recognition algorithms, natural-language models (such as OpenAI’s GPT-n), virtual assistants (such as Siri or Alexa), Google/DeepMind’s chess-playing program AlphaZero, and, to some extent, driverless cars.
Approaches to weak AI often revolve around artificial neural networks, systems inspired by the biological neural networks that make up animal brains. A neural network is a collection of interconnected nodes, or neurons, combined with activation functions that determine the output based on the data presented at an “input layer” and the weights on the interconnections. To adjust those weights so that the output is useful or correct, the network is “trained” by exposing it to many examples of data and “backpropagating” the output loss.
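To make that description concrete, here is a minimal sketch of such a training loop: a tiny two-layer network learning the XOR function by backpropagation. The 2-4-1 architecture, sigmoid activations, and learning rate are arbitrary choices made for the example, not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data presented at the "input layer", with target outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights on the interconnections, plus biases, initialized randomly.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20_000):
    # Forward pass: activation functions turn weighted sums into outputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagate the output loss (here, squared error) to get
    # gradients, then nudge each weight to make the output more correct.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```

Deep-learning frameworks automate exactly this gradient computation, just at vastly larger scale.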
Arguably there is a third branch, “neuro-symbolic AI,” which combines neural networks with rule-based AI. While conceptually promising, since it appears closer to how our biological brains operate, it is still at a very early stage.
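One toy way to picture the neuro-symbolic division of labor (purely illustrative; the functions, predicates, and confidences below are all invented, and real neuro-symbolic systems are far richer):

```python
# Illustrative neuro-symbolic pattern: a (stand-in) neural perception
# step produces symbolic predicates with confidences, and a symbolic
# rule layer then reasons over them.

def neural_perception(image_pixels):
    # Stand-in for a trained network; here we fake its output.
    return {"is_red": 0.92, "is_round": 0.88, "has_stem": 0.75}

def symbolic_reasoning(predicates, threshold=0.7):
    # Rule: red AND round AND has_stem -> apple.
    if all(predicates.get(p, 0) >= threshold
           for p in ("is_red", "is_round", "has_stem")):
        return "apple"
    return "unknown"

print(symbolic_reasoning(neural_perception(image_pixels=None)))  # apple
```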
The crux of the current debate is whether, at sufficient scale, AI and machine-learning models can truly achieve AGI, dispensing with symbolic AI entirely. Is it now just a matter of hardware scaling and optimization, or are further discoveries and developments in AI algorithms and models still needed?
Tesla also seems to share Google/DeepMind’s view. At its 2021 AI Day event, Tesla announced the Tesla Bot, also known as Optimus, a general-purpose humanoid robot that will be controlled by the same AI system Tesla developed for the advanced driver-assistance systems used in its cars. Interestingly, CEO Elon Musk has said he hopes to have the robot in production by 2023, claiming that Optimus will eventually be able to do “anything a human wouldn’t want to do,” implying that he expects AGI to be possible by then.
However, other AI researchers, most notably Yann LeCun, Meta’s chief AI scientist and a professor at New York University, who prefers the less ambitious term human-level AI (HLAI), argue that many problems remain that are beyond the reach of sheer computing power and may require new models or even new software paradigms.
Among these problems are giving machines the ability to learn how the world works through baby-like observation; to predict how their actions will affect the world; to deal with the world’s inherent unpredictability; to predict the effects of sequences of actions so as to reason and plan; and to represent and predict in abstract spaces. Ultimately, the debate is whether all of this can be achieved through gradient-based learning with our existing artificial neural networks, or whether further breakthroughs are needed.
Deep-learning models are indeed capable of extracting “key features” from data without human intervention, so it is tempting to believe that with more data and more computing power they will crack the remaining problems, but this may be too good to be true. To use a simple analogy: designing and building ever faster and more powerful cars will not make them fly, because flight first requires an understanding of aerodynamics.
The progress being made with deep-learning AI models is impressive, but it is worth asking whether the optimistic view of weak-AI practitioners is simply a case of Maslow’s hammer, the “law of the instrument”: if the only tool you have is a hammer, you tend to see every problem as a nail.
Basic research of the kind done at Google/DeepMind, Meta, or Tesla often sits uneasily inside private companies, because, despite their large budgets, these organizations are geared more toward competition and speed to market than toward academic collaboration and long-term thinking.
Rather than a contest between strong-AI and weak-AI proponents, solving AGI may require both approaches. An analogy with the human brain is not far-fetched: it combines conscious and unconscious learning. The cerebellum makes up about 10% of the brain’s volume but contains over 50% of its neurons, and it handles coordination and movement related to motor skills, especially of the hands and feet, as well as posture and balance. It does this quickly and unconsciously, and we cannot really explain how we do it. The conscious brain, by contrast, although much slower, can process abstract concepts, plan, and predict. Moreover, knowledge can be acquired consciously and then, through training and repetition, become automatic, something professional athletes excel at.
One cannot help but wonder: if nature evolved the human brain in this hybrid way over hundreds of thousands of years, why should artificial intelligence systems rely on a single model or algorithm?
Whatever underlying AI technology eventually leads to AGI, its arrival will have a huge impact on our society, just as the wheel, the steam engine, electricity, and the computer did. Arguably, if businesses could replace humans entirely with robots, our capitalist economic model would have to change, or social unrest would eventually follow.
That said, the ongoing debate is probably in part corporate PR, and AGI is likely further away than we currently think, so we have time to consider its potential impact. In the shorter term, however, it is clear that the pursuit of AGI will continue to drive investment in specific technology areas, such as software and semiconductors.
The success of specific use cases within the weak-AI framework is putting increasing pressure on existing hardware. For example, the popular Generative Pre-trained Transformer 3 (GPT-3) model that OpenAI launched in 2020, which can already write original prose with human-like fluency, has 175 billion parameters and takes months to train.
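A back-of-the-envelope calculation shows why models of this size strain general-purpose hardware. Assuming 16-bit (2-byte) floating-point parameters, an assumption made here for illustration, the weights alone occupy hundreds of gigabytes:

```python
# Rough memory footprint of GPT-3's weights alone, assuming 2 bytes
# per parameter (fp16). Training needs several times more memory
# again for gradients and optimizer state.

params = 175e9          # 175 billion parameters
bytes_per_param = 2     # fp16 assumption
print(params * bytes_per_param / 1e9)  # ~350 GB of weights
```

That is far more than fits in any single accelerator’s memory, which is why training such models requires splitting them across large clusters of specialized chips.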
Some of today’s semiconductor products, including CPUs, GPUs, and FPGAs, can run deep-learning algorithms more or less efficiently. But as model sizes grow, their performance falls short, and the need arises for custom designs optimized for AI workloads. Leading cloud providers such as Amazon, Alibaba, Baidu, and Google, along with Tesla and various semiconductor startups such as Cambricon, Cerebras, Esperanto, Graphcore, Groq, Mythic, and SambaNova, have taken this route.