Home  >  Article  >  Technology peripherals  >  It’s not too late to be wary of artificial intelligence

It’s not too late to be wary of artificial intelligence

王林
王林forward
2023-04-10 09:11:071491browse

​We cannot specify targets completely correctly, nor can we predict or prevent the harm that super-powered machines pursuing the wrong goals will cause on a global scale. We’ve already seen examples of social media algorithms exploiting people’s preferences to optimize clicks, with disastrous consequences for democratic institutions.

It’s not too late to be wary of artificial intelligence

Superintelligence: Paths, Danger, Strategies, published in 2014 by Nick Bostrom A book detailing the case for taking risk seriously. "The Economist" magazine reviewed this book, and the review concluded: "The introduction of a second intelligent species on the earth has far-reaching consequences and deserves our deep thought."

Of course, in When the stakes are high, wise men are already thinking hard: engaging in serious debates, weighing the pros and cons, seeking solutions, finding loopholes in solutions, etc. As far as I know, these efforts have had little success and have been met with all sorts of denials.

The arguments of some prominent artificial intelligence researchers are hardly worth refuting. Here are some of the dozens of statements I've seen in articles or heard at conferences:

Electronic calculators are unrivaled in arithmetic, and calculators are not taking over the world; therefore , there is no reason to worry about superhuman artificial intelligence.

There has been no example in history of machines killing millions of people, and given this, it will not happen in the future.

There are no infinite physical quantities in the universe, and intelligence is no exception, so there is no need to worry too much about superintelligence.

Perhaps the most common answer from artificial intelligence researchers is: "We can turn it off." Alan Turing himself raised this possibility, but he was not too confident:

If a machine could think, it might think more comprehensively than we do, so where would we be? Even if machines can be made to bend to us, such as shutting down at critical moments, we as a species should feel ashamed of ourselves... This new danger... is bound to make us anxious.

Turning off the machine will not work. The reason is very simple. The super intelligent entity will definitely consider this possibility and take measures to prevent it. It does this not because it "wants to survive," but because it is pursuing the goals we have set for it and knows that if it is turned off, it will fail. We can't simply "turn it off," any more than we can beat Alpha Go simply by placing the pieces on the corresponding squares on the board.

Other forms of denial lead to more complex ideas, such as the idea that intelligence is multifaceted. For example, one person may have higher spatial intelligence than another but less social intelligence, so we can't rank them all in strict order of intelligence. This is especially true for machines: it makes no sense to compare the “intelligence” of Alpha Go to that of the Google search engine.

It’s not too late to be wary of artificial intelligenceKevin Kelly, founding editor-in-chief of Wired magazine and insightful technology commentator, takes this view one step further. "Intelligence is not a single dimension, so 'smarter than human' is a meaningless concept," he writes in his book The Myth of a Superhuman AI. About Superintelligence My worries were knocked away by a pole.

Now, there is an obvious answer, machines may surpass humans in all relevant dimensions of intelligence. In this case, even by Kelly's strict standards, the robot would be smarter than a human. But this rather strong assumption is not necessary to refute Kelly's argument.

Take chimpanzees as an example. Chimpanzees may have better short-term memory than humans, even at tasks that humans excel at, such as recalling sequences of numbers. Short-term memory is an important dimension of intelligence. According to Kelly's argument, humans are not smarter than chimpanzees, in fact, he would say "smarter than chimpanzees" is a meaningless concept.

This is little consolation to the chimpanzees and other species that survive only because of human tolerance, and to all the species that humans have destroyed. Again, this is little comfort to anyone who might be worried about being wiped out by machines.

Some people think that super intelligence cannot be achieved, so the risk of super intelligence no longer exists. These claims are not new, but it is surprising that AI researchers themselves are now saying that such artificial intelligence is impossible. For example, the important report "Artificial Intelligence and Life in 2030" by the AI100 organization stated: "Unlike movie scenes, superhuman robots will not and cannot appear in reality in the future."

As far as I know, this is the first time that a serious artificial intelligence researcher has publicly stated that human-level or superhuman artificial intelligence is impossible, and this occurs during a period of rapid development of artificial intelligence research, during which one after another Barriers are broken down. It would be like a group of top cancer biologists announcing that they had been fooling us all along: they knew all along that there would never be a cure for cancer.

What prompted this major change? No arguments or evidence were provided in the report. (Indeed, what evidence is there that it is physically impossible to have a better arrangement of atoms than the human brain?) I think the main reason is tribalism—a defense against what might be an “attack” on artificial intelligence. The instinct, however, is to think of a super-intelligent AI as something that might constitute an attack on AI seems a little odd, and to defend an AI by saying it will never achieve its goals seems even more absurd. We cannot ensure that future disasters will not occur by betting on the limits of human creativity.

Strictly speaking, superhuman artificial intelligence is not impossible, so do we not need to worry about its risks prematurely? Computer scientist Andrew Ng believes this is like worrying about "overpopulation on Mars." Still, long-term risks remain a cause for concern. When to worry about a potentially serious problem involving humans depends not only on when the problem occurs, but also on how long it takes to prepare and implement a solution.

For example, to detect an asteroid that is due to collide with Earth in 2069, would we wait until 2068 to start working on a solution? of course not! Humanity will set up global emergency projects to find ways to deal with threats because we have no way of knowing in advance how much time we will need.

Ng Enda’s point of view also makes people feel that it is impossible for us to transfer billions of people to Mars. This analogy is wrong. We invest vast amounts of scientific and technological resources into creating more powerful AI systems, with little regard for what will happen if we succeed. We can make a more appropriate analogy: planning to move humans to Mars without considering the breathing and eating problems after arrival. Some may think this plan is unwise.

Another way to sidestep potential problems is to assert that concerns about risk stem from ignorance. For example, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, has accused Elon Musk and Stephen Hawking of being Luddites (disapproval of new technologies) who have called attention to the potential threats of AI. and blind impulse to resist new things):

Every emerging technological innovation puts people in fear. From weavers throwing shoes into looms at the dawn of the industrial age to today’s fear of killer robots, we wonder what impact new technologies will have on our sense of self and livelihoods. When we don’t know anything, we panic.

Even if this classic fallacious argument is taken at face value, it doesn’t hold water. Hawking is no stranger to scientific reasoning, and Musk has overseen and invested in several artificial intelligence research projects. It would be even more disastrous to think that Bill Gates, I.J. Good, Marvin Minsky, Alan Turing and Norbert Wiener, who raised concerns, are not qualified to discuss artificial intelligence. Not believable anymore.

It is also completely wrong to accuse Luddism. It's like someone accusing nuclear engineers of being Luddites when they point out the need to control fission reactions. That is, mentioning the risks of AI means denying AI’s potential benefits. Take Oren Etzioni as an example:

Pessimistic predictions often fail to take into account the potential advantages of artificial intelligence in preventing medical accidents and reducing car accidents.

Recently, Facebook CEO Mark Zuckerberg had a media exchange with Elon Musk:

Opposing artificial intelligence is opposing safer systems that will not cause accidents. Automobiles, against more accurate diagnosis of patient conditions.

The notion that anyone who mentions risk is “anti-AI” is bizarre. (Are nuclear safety engineers “against electricity”?) What’s more, the entire argument is precisely contrarian, for two reasons. First, if there were no potential benefits, there would be no incentive for AI research and the dangers posed by human-level AI, and we would not be discussing it. Second, if risk cannot be successfully mitigated, there will be no benefit.

The catastrophic events at Three Mile Island in 1979, Chernobyl in 1986, and Fukushima, Japan, in 2011 have significantly reduced the potential benefits of nuclear energy. These nuclear disasters severely restricted the development of the nuclear industry. Italy gave up nuclear energy in 1990, and Belgium, Germany, Spain and Switzerland have already announced that they are abandoning nuclear energy plans. From 1991 to 2010, net new nuclear power capacity added each year was about 1/10 of what it was in the years before the Chernobyl accident.

Strangely, with these events as a warning, the famous cognitive scientist Steven Pinker still believes that people should not call attention to the risks of artificial intelligence because the "safety culture of advanced societies" will Ensure all significant AI risks are eliminated. Even ignoring the Chernobyl, Fukushima, and runaway global warming caused by our advanced safety culture, Pinker's argument misses the point entirely. A functioning safety culture involves pointing out potential failure modes and finding ways to prevent them, and AI's standard model is failure modes.

Pinker also believes that problematic AI behavior stems from setting specific types of goals; if these are not taken into account, there will be no problems:

AI dystopian projects will A narrow masculine mentality is projected onto the concept of intelligence. They believe that robots with superhuman intelligence will have goals such as overthrowing their masters or conquering the world.

Yann LeCun, a pioneer in deep learning and head of AI research at Facebook, often cites the same point when downplaying the risks of AI:

We don’t need to let AI own Self-preservation instincts, jealousy, etc… AI will not create destructive “emotions” unless we incorporate those emotions into it.

Those who believe this risk is negligible do not explain why super artificial intelligence must be under human control.

In fact, it doesn’t matter whether we implant “emotions” or “desires” (such as self-preservation, resource acquisition, knowledge discovery, or in extreme cases world domination). Machines will generate these emotions anyway, just like the sub-goals of the goals we build—no matter what gender it is. As the "turn off the machine" perspective reveals, the end of life is not in itself a bad thing for a machine. Still, life termination should be avoided, as once terminated, they will have a harder time reaching their goals.

A common variation on the "avoid setting goals" argument is that a sufficiently intelligent system will necessarily set the "right" goals on its own by virtue of its intelligence. The 18th-century philosopher David Hume refuted this view in A Treatise of Human Nature. In his book "Superintelligence" (Superintelligence), Nick Bostrom regards Hume's view as an orthogonal proposition:

Intelligence is orthogonally related to the ultimate goal: any level of intelligence can or More or less integrated with any ultimate goal.

For example, the destination of a self-driving car can be any given location; making the car better at autonomous driving does not mean that it will automatically refuse to go to an address that requires a given mathematical calculation.

Similarly, it is not difficult to imagine that general intelligent systems could be given more or less goals, including maximizing the number of paper clips or the number of digits in a known pi ratio. This is how reinforcement learning systems and other reward optimizers work: the algorithm is completely general and can accept any reward signal. For engineers and computer scientists working in the Standard Model, the orthogonality proposition is simply a given.

Rodney Brooks, a well-known robotics expert, explicitly criticized Bostrom’s orthogonality thesis. He asserted that a program “cannot be smart enough to invent ways to subvert human society.” Achieving the goals set for humanity without understanding how it troubles humanity.”

However, given Bruske’s definition of the problem, such a program is not only possible, but actually inevitable. Bruske believes that machines’ best practices for “achieving human-set goals” are causing problems for humans. It can be seen that these problems reflect human beings' neglect of what is valuable to human beings when setting goals. The best solution executed by a machine is likely to cause problems for humans, and the machine is likely to be aware of this, but it is obviously not their business that the machine will not consider these problems to be problematic.

In summary, the “skeptics” who believe that the risks posed by AI are minimal have not explained why super-AI systems must be under human control; they have not even tried to explain why super-intelligence systems should never be will be developed.

The field of artificial intelligence must take risks and do its best to reduce them, rather than continue to fall into opposition and slander and repeatedly dig up untrustworthy arguments. As far as we know, these risks are neither trivial nor insurmountable. The first step is to recognize that the standard model must be replaced and that the AI ​​system optimizes a fixed goal. This is poor engineering. We need to do a lot of work to reshape and reconstruct the foundation of artificial intelligence. ​

The above is the detailed content of It’s not too late to be wary of artificial intelligence. For more information, please follow other related articles on the PHP Chinese website!

Statement:
This article is reproduced at:51cto.com. If there is any infringement, please contact admin@php.cn delete