
After the emergence of super-large models, is the AI game over? Gary Marcus: The road is narrow

WBOY
2023-04-12 11:46:06

In recent times, artificial intelligence has made breakthroughs with large models. Imagen, announced by Google yesterday, once again triggered discussion about AI's capabilities. By pre-training on vast amounts of data, such algorithms have gained unprecedented abilities to generate realistic images and understand language.

In the eyes of many people, we are close to general artificial intelligence, but Gary Marcus, a well-known scholar and professor at New York University, does not think so.


Recently, his article "The New Science of Alt Intelligence" pushed back against DeepMind research director Nando de Freitas's claim that AI is now all about scale. Let's take a look at what he has to say.

The following is the original text of Gary Marcus:

For decades, the field of AI operated on the assumption that artificial intelligence should draw inspiration from natural intelligence. John McCarthy wrote a seminal paper on why AI needs common sense, "Programs with Common Sense"; Marvin Minsky wrote the famous book "Society of Mind", which sought inspiration in the human mind; and Herb Simon, who won the Nobel Prize in Economics for his contributions to behavioral economics, wrote the well-known "Models of Thought", which aimed to explain "how newly developed computer languages can express theories of psychological processes, so that computers can simulate predicted human behavior."

As far as I can tell, a large fraction of current AI researchers (at least the more influential ones) no longer care about natural intelligence at all. Instead, they are focused on what I call "Alt Intelligence" (my thanks to Naveen Rao for the term).

Alt Intelligence does not mean constructing machines that can solve problems in the same way as human intelligence, but rather using large amounts of data obtained from human behavior to replace intelligence. Currently, Alt Intelligence's main focus is scaling. Advocates of such systems argue that the larger the system, the closer we will get to true intelligence and even consciousness.

Studying Alt Intelligence is itself nothing new; the arrogance that now accompanies it is.

For some time, I have seen signs that the current superstars of artificial intelligence, and indeed much of the field, are dismissive of human cognition, ignoring or even mocking scholars in linguistics, cognitive psychology, anthropology, and philosophy.

But this morning, I came across a new tweet expressing the Alt Intelligence creed. Its author, DeepMind research director Nando de Freitas, declared that AI "is now all about scale." In his view (perhaps stated with deliberately provocative, fiery rhetoric), the harder challenges in AI have already been solved. "Game over!" he said.

[Image: screenshot of Nando de Freitas's tweet]

In essence, there is nothing wrong with pursuing Alt Intelligence.

Alt Intelligence represents an intuition (or, more precisely, a family of intuitions) about how to build intelligent systems. Since no one yet knows how to build systems that match the flexibility and resourcefulness of human intelligence, it is fair game for people to pursue many different hypotheses about how to get there. Nando de Freitas defends this particular hypothesis about as bluntly as possible; I call it Scaling-Über-Alles.

Of course, that name doesn't do his position full justice. De Freitas understands perfectly well that you can't just make models bigger and expect success. People have done a lot of scaling lately and achieved some great successes, but they have also run into obstacles. Before we dive into how de Freitas confronts the status quo, let's look at what that status quo is.

Status quo

Systems like DALL-E 2, GPT-3, Flamingo, and Gato may seem exciting, but nobody who has examined these models carefully would confuse them with human intelligence.

For example, DALL-E 2 can create realistic works of art based on text descriptions, such as "an astronaut riding a horse":

[Image: DALL-E 2's rendering of "an astronaut riding a horse"]

But it also easily makes surprising mistakes. For example, given the prompt "a red square on a blue square", DALL-E 2 produces the image on the left, while the image on the right comes from an earlier model. Clearly, DALL-E 2's result is inferior to the earlier model's.

[Image: outputs for "a red square on a blue square": DALL-E 2 (left) versus an earlier model (right)]

When Ernest Davis, Scott Aaronson, and I delved into this problem, we found many similar examples:

[Image: further examples of similar errors]

Flamingo, impressive as it looks on the surface, has bugs of its own, as DeepMind senior research scientist Murray Shanahan pointed out in a tweet (Flamingo's lead author Jean-Baptiste Alayrac later added further examples). For instance, Shanahan showed Flamingo this image:

[Image: the photograph Shanahan showed Flamingo]

and had the following flawed conversation surrounding the image:

[Image: Flamingo's flawed conversation about the image]

Flamingo, it seems, simply makes things up out of thin air.

Some time ago, DeepMind also released Gato, a multi-modal, multi-task, multi-embodiment "generalist" agent, but when you read the fine print you still find signs of unreliability.

[Image: examples of Gato's unreliable outputs]

Of course, defenders of deep learning will point out that humans make mistakes.

But any honest person will recognize that these errors indicate that something is seriously amiss. It's no exaggeration to say that if my kids routinely made mistakes like these, I would drop everything I'm doing and take them to a neurologist immediately.

So let's be honest: scaling hasn't worked yet, but it might, or so goes de Freitas's theory, which is a clear expression of the zeitgeist.

Scaling-Über-Alles

So how does de Freitas reconcile reality with ambition? Billions of dollars have now been poured into Transformers and related approaches; training data sets have grown from megabytes to gigabytes, and parameter counts from millions to trillions. Yet the puzzling errors documented in detail in numerous works going back to 1988 remain.

For some (myself included), the persistence of these problems may mean that we need a fundamental rethink, along the lines Davis and I laid out in "Rebooting AI". But not for de Freitas (and many others probably share his view; I am not trying to single him out, I simply find his remarks representative).

In the tweet, he spelled out how he reconciles reality with the current problems: "(We need to) make the models bigger, safer, more compute-efficient, faster at sampling, with smarter memory, more modalities, innovative data, online/offline, and so on." The point is that not a single phrase here comes from cognitive psychology, linguistics, or philosophy (smarter memory might just barely count).

In a follow-up post, de Freitas also said:

[Image: de Freitas's follow-up tweet]

This once again confirms his "scale above all else" stance, and it makes one goal explicit: the ambition is not merely better AI, but AGI.

AGI stands for artificial general intelligence: AI that is at least as good, as resourceful, and as broadly applicable as human intelligence. The narrow artificial intelligence we have actually achieved so far is really alt intelligence, and its signature successes are games such as chess (Deep Blue had nothing to do with human intelligence) and Go (AlphaGo has little to do with human intelligence either). De Freitas has far more ambitious goals, and to his credit he is very candid about them. So how does he propose to reach them? To reiterate, de Freitas focuses on technical tools for accommodating ever larger data sets. Other ideas, such as those from philosophy or cognitive science, might matter, but they are left out.

He says the philosophy of symbols is not necessary, perhaps in rebuttal to my long-standing campaign to integrate symbol manipulation into cognitive science and artificial intelligence, an idea that recently resurfaced in Nautilus magazine, though he does not spell this out. My brief response: his claim that "[neural] nets have no issue creating [symbols] and manipulating them" ignores both history and reality. The history it ignores is that many neural network enthusiasts have fought against symbols for decades; the reality it ignores is that symbolic descriptions like the aforementioned "red cube on a blue cube" can still stump 2022's state-of-the-art models.

At the end of the tweet, de Freitas expressed his approval of Rich Sutton's famous essay "The Bitter Lesson":

[Image: de Freitas's tweet endorsing "The Bitter Lesson"]

Sutton's argument is that the only thing that has driven progress in artificial intelligence is more data put through more effective computation. In my opinion, Sutton is only half right: his description of the past is largely correct, but his inductive prediction about the future is unconvincing.

So far, big data has (temporarily) defeated well-designed knowledge engineering in most fields (certainly not all fields).

But nearly all the software in the world, from web browsers to spreadsheets to word processors, still relies on knowledge engineering, and Sutton ignores this. For example, Sumit Gulwani's excellent Flash Fill feature is a remarkably useful one-shot learning system that is not built on big data at all, but on classic programming techniques.

I don’t think any pure deep learning/big data system can match this.
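To make the contrast concrete, here is a toy sketch in Python of one-shot learning by symbolic search over a tiny program space. It is emphatically not Gulwani's actual Flash Fill algorithm (which synthesizes programs over a rich string-transformation language); the names candidate_programs and synthesize are invented for illustration. But it shows how a single example, classic programming techniques, and no big data can pin down a reusable program.

# Toy illustration of one-shot "programming by example", in the spirit of
# (but far simpler than) Gulwani's Flash Fill. Given a single input/output
# pair, we search a tiny library of hand-written string transformations and
# return the first program consistent with the example: no training data,
# no gradients, just classic symbolic search.

def candidate_programs():
    """A small, fixed library of string-transformation programs."""
    yield "upper case", lambda s: s.upper()
    yield "lower case", lambda s: s.lower()
    yield "first word", lambda s: s.split()[0] if s.split() else s
    yield "last word", lambda s: s.split()[-1] if s.split() else s
    yield "initials", lambda s: "".join(w[0].upper() + "." for w in s.split())

def synthesize(example_in, example_out):
    """Return the first candidate program that reproduces the single example."""
    for name, program in candidate_programs():
        if program(example_in) == example_out:
            return name, program
    return None

# One example is enough to pick out a program ...
name, program = synthesize("Ada Lovelace", "A.L.")
print(name)                      # -> initials
# ... and that program then generalizes to unseen inputs.
print(program("Alan Turing"))    # -> A.T.

The point of the sketch is the mechanism: the generalization comes from a structured hypothesis space, not from statistics over massive amounts of data.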

In fact, the central problems with artificial intelligence that cognitive scientists like Steve Pinker, Judea Pearl, Jerry Fodor, and I have been pointing out for decades remain unsolved. Yes, machines can play games very well, and deep learning has made huge contributions in areas such as speech recognition. But no current AI has enough understanding to read an arbitrary text and build a model of what is going on that would let it converse normally and carry out tasks, nor can it reason its way to coherent responses like the computer in "Star Trek".

We are still in the early stages of artificial intelligence.

Success on some problems with a particular strategy does not guarantee that we can solve all problems in a similar way. It would be simply foolish not to realize this, especially when some of the failure modes (unreliability, bizarre errors, failures at compositionality, and failures of comprehension) have remained unchanged since Fodor and Pinker pointed them out in 1988.

Conclusion

It’s nice to see that Scaling-Über-Alles is not yet fully agreed upon, even at DeepMind:

[Image: Murray Shanahan's tweet]

I completely agree with Murray Shanahan: "I see very little in Gato to suggest scaling alone will get us to human-level generalization."

Let's encourage a field that's open-minded enough that people can take their own work in many directions without prematurely discarding ideas that happen to not be fully developed yet. After all, the best path to (general) artificial intelligence may not be Alt Intelligence.

As mentioned earlier, I'd love to think of Gato as "Alt Intelligence", an interesting exploration of alternative ways to build intelligence. But we need to see it in perspective: it doesn't work like a brain, it doesn't learn like a child, it doesn't understand language, it doesn't align with human values, and it can't be trusted with mission-critical tasks.

It might be better than anything else we currently have, but the fact that it still doesn't really work, even after all the investment that has been poured into it, should give us pause.

It should take us back to the founding spirit of artificial intelligence. AI certainly shouldn't be a slavish copy of human intelligence, which after all has its own flaws, saddled with poor memory and cognitive biases. But it should look to human and animal cognition for clues. The Wright brothers didn't imitate birds, but they learned something from birds about flight control. Knowing what to borrow and what not to borrow may be more than half the battle.

The bottom line, I think, is something AI once valued but no longer pursues: if we are going to build AGI, we are going to need to learn something from humans, from how they reason and understand the physical world, and how they represent and acquire language and complex concepts.

It would be too arrogant to deny this idea.

