
Chomsky questions the popularity of ChatGPT, calling it a waste of resources

WBOY · 2023-04-23

ChatGPT has set off the latest arms race in the technology industry, but many questions remain for the AI field: Is ChatGPT a genuine innovation? Does it represent the beginnings of artificial general intelligence? Scholars disagree, and the debate has only grown more heated as the technology has spread.

So what do the leading figures in linguistics make of ChatGPT's progress, in particular the towering linguist Noam Chomsky?

Recently, the American philosopher, linguist, and cognitive scientist Noam Chomsky, the Cambridge University linguistics professor Ian Roberts, and the philosopher Jeffrey Watumull, director of artificial intelligence at the technology company Oceanit, published an article in The New York Times criticizing the shortcomings of large language models.


To catch up with ChatGPT, Google released Bard and Microsoft launched Sydney. Chomsky acknowledges that OpenAI's ChatGPT, Google's Bard, and Microsoft's Sydney are all marvels of machine learning.

Broadly speaking, they take in large amounts of data, search for patterns in it, and become increasingly adept at generating statistically likely output—such as human-like language and thought.
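To make "statistically likely output" concrete, here is a minimal, purely illustrative sketch (the tiny corpus and the helper function are hypothetical; real systems use neural networks trained on vastly more text): a toy bigram model that counts which word follows which and then emits the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# A toy corpus; real systems ingest terabytes of text.
corpus = "the apple fell to the ground and the apple was red".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5):
    """Repeatedly emit the statistically most likely next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the apple fell to the apple"
```

Even this caricature captures the mechanism at issue: the program tracks co-occurrence statistics and nothing else, with no concept of apples, of falling, or of truth.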

These programs have been hailed as the first glimmers on the horizon of artificial general intelligence: that long-prophesied moment when machine minds surpass human brains not only quantitatively, in processing speed and memory size, but also qualitatively, in intellectual insight, artistic creativity, and every other distinctively human faculty.

But the thrust of Chomsky's commentary is critical, particularly of ChatGPT's shortcomings in capability and in moral standards: "Today, our supposedly revolutionary advances in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems; concern because we fear that the most popular and fashionable strain of AI, machine learning, will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge."

Objectively, that day may eventually come, but its dawn is not yet breaking, contrary to what the hyperbolic headlines proclaim and what ill-advised investments presume.

Now, let’s see what else Chomsky’s article says.

ChatGPT lacks the most critical capacity of any intelligence

The Argentine writer Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with "the imminence of a revelation" in understanding ourselves and the world.

"If machine learning programs like ChatGPT continue to dominate the field of artificial intelligence, Borgesian revelations of understanding have not occurred and will not occur in the future."

However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what the programs can do, encoding them with defects that cannot be eradicated.

As Borges might have noted, it is at once comic and tragic that so much money and attention should be concentrated on so little a thing, something so trivial when contrasted with the human mind, which, in the words of the German philosopher Wilhelm von Humboldt, can through language make "infinite use of finite means," creating ideas and theories with universal reach.

The human brain is not a lumbering statistical engine for pattern matching like ChatGPT and its ilk, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or the most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

For example, a young child acquiring a language is developing, unconsciously, automatically, and quickly from minuscule data, a grammar: a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of an innate, genetically installed "operating system" that endows humans with the capacity to generate complex sentences and long trains of thought.

When linguists seek to develop a theory of why a given language works as it does (why are these sentences, and not those, considered grammatical?), they are building consciously and laboriously an explicit version of the grammar that the child constructs instinctively and with minimal exposure to information. The child's operating system is completely different from that of a machine learning program.

Indeed, programs like ChatGPT are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case, and what will be the case (that is description and prediction) but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

Here is an example. Suppose you are holding an apple in your hand and you let it go. You observe the result and say, "The apple falls." That is a description. A prediction is the statement "If I open my hand, the apple will fall." Both are valuable, and both can be correct.

But an explanation is something more. It includes not only descriptions and predictions but also counterfactual conjectures such as "any such object would fall," plus the additional clause "because of the force of gravity" or "because of the curvature of space-time" or whatever else; that is a causal explanation. "The apple would not have fallen but for the force of gravity." That is thinking.

The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, no human-style explanation is necessarily correct; we are fallible. But that is part of what it means to think: to be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits which possibilities can be rationally considered.

As Sherlock Holmes said to Dr. Watson: "When you have eliminated the impossible, whatever remains, however improbable, must be the truth."

But by design, ChatGPT and similar programs are unlimited in what they can "learn" (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the Earth is flat and that the Earth is round. They trade merely in probabilities that change over time.
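As a toy illustration of that last point (a minimal sketch under the assumption that the system merely tracks relative frequencies in its training text; real models are far more elaborate, but the contrast with human reasoning is the same), which claim such a system favors is nothing more than a function of what its corpus happens to contain:

```python
from collections import Counter

def claim_probabilities(corpus_sentences):
    """Score competing claims purely by how often each appears in the training text."""
    counts = Counter(corpus_sentences)
    total = sum(counts.values())
    return {claim: n / total for claim, n in counts.items()}

# Feed the same "model" two different corpora.
corpus_a = ["the earth is round"] * 9 + ["the earth is flat"] * 1
corpus_b = ["the earth is round"] * 2 + ["the earth is flat"] * 8

print(claim_probabilities(corpus_a))  # favors "round"
print(claim_probabilities(corpus_b))  # favors "flat": no notion of impossibility
```

Nothing in the mechanism can rule either claim out; retrain on different data and the "belief" simply flips.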

For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that "John is too stubborn to talk to" means that John is so stubborn that he will not talk to someone or other (rather than that John is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize from the pattern it inferred from sentences such as "John ate an apple" and "John ate," where the latter does mean that John ate something or other. The program might well predict that because "John is too stubborn to talk to Bill" is similar to "John ate an apple," "John is too stubborn to talk to" should be similar to "John ate." The correct explanations of language are complicated and cannot be learned just by marinating in big data.

Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct "scientific" predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton's laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, "we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories."
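To see what prediction without explanation looks like in the simplest possible case, here is a purely illustrative sketch (the measurements are synthetic and the fit is ordinary curve fitting, not anyone's actual system): a polynomial fitted to observations of a falling apple predicts its position accurately while encoding no law at all, no gravity, no mass, no causal mechanism, only a curve that happens to pass near the data.

```python
import numpy as np

# Synthetic measurements of a falling apple: time (s) vs. height (m).
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
h = 2.0 - 0.5 * 9.81 * t**2  # generated from Newton here, but the fit never "knows" that

# Fit a degree-2 polynomial: pure curve fitting, no physics.
coeffs = np.polyfit(t, h, deg=2)
predict = np.poly1d(coeffs)

print(predict(0.45))  # an accurate prediction of the next observation
print(coeffs)         # roughly [-4.905, 0.0, 2.0], yet the model cannot say *why*
```

The fit describes and predicts, in the article's terms, but it supports no counterfactuals and offers no "because"; nothing in the model says whether extrapolating beyond the observed interval is legitimate.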

The theory that apples fall to earth because that is their "natural place" (Aristotle's view) is possible, but it only invites the further question of why earth should be their natural place.

The theory that apples fall to earth because mass bends space-time (Einstein's view) is highly improbable, but it actually tells you why they fall.

True intelligence is demonstrated in the ability to think and express improbable but insightful things.

True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determine what ought and ought not to be done (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to the majority of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled, and will continue to struggle, to achieve this kind of balance.

In 2016, for example, Microsoft's Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content after being polluted by online trolls who filled it with offensive training data. How is the problem to be solved in the future? Lacking the capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial (which is to say, important) discussions: it sacrificed creativity for a kind of amorality.

Consider a recent exchange that one of the authors (Dr. Watumull) had with ChatGPT about whether it would be ethical to terraform Mars so that it could support human life.

[Screenshots: Dr. Watumull's exchange with ChatGPT on the ethics of terraforming Mars]

Note that, for all the seemingly sophisticated thought and language, there is a moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism, apathy, and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence, and ultimately offers a "just following orders" defense, shifting responsibility to its creators.

In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decision and indifference to consequences). Given the amorality, faux science, and linguistic incompetence of these systems, we can only laugh or cry at their popularity.

Does ChatGPT really deserve no praise?

Chomsky's comments on ChatGPT have triggered discussion in the field. Christopher Manning, a Stanford University professor and prominent NLP scholar, observed that the critique targets not some particular algorithmic failing of ChatGPT but machine learning methods in general, and he found the piece overstated: "This is, indeed, an opinion piece. There is not even a cursory attempt to check easily refuted claims."

He added that it saddened him to see Chomsky trying to shut down these new methods, and he recommended the linguist Adele Goldberg's take on the article.


Oriol Vinyals, research director and head of deep learning at DeepMind, chose to side with the "practitioners": "Criticism is easy, and gets a lot of attention these days. And we all know, attention is what (some people) need. To those who build: you are amazing!"


What do you think?

