
When will the development of artificial intelligence break through the shackles?



Recently, a prototype vehicle from a certain brand was involved in an accident during testing, causing serious casualties and property damage, and the story immediately made headlines. Public attention has once again turned to the perennial questions surrounding autonomous driving: Is it safe? Should it be opened to the public? Beneath the surface of the autonomous-driving debate lies the core question: whether artificial intelligence is capable of moral judgment.

"Trolley Problem" Artificial Intelligence Faces Moral Dilemmas

MIT in the United States built a website called Moral Machine, which presents visitors with various scenarios in which a self-driving car loses control and asks them to choose an outcome. By collecting feedback from 3 million users, Moral Machine found that people tend to prefer that a self-driving car sacrifice itself to protect more lives, yet hope that their own car does not have this feature. This conclusion held regardless of culture or custom: it is a common human choice, consistent with society's general understanding that sacrificing the lives of the few to save the many is acceptable. Yet it runs contrary to the law, because human life itself cannot be compared or quantified.

This is an unavoidable moral issue for autonomous driving. On the one hand, if the AI is allowed to make decisions when the vehicle loses control, then under the principle that power and responsibility go together, the AI, or the autonomous-driving company behind it, should bear responsibility for those decisions. On the other hand, if the AI is not allowed to make decisions, the system can hardly be called autonomous driving, because an autonomous car is, by definition, one that can sense its environment and navigate without human intervention.

The "trolley problem" encountered by autonomous driving is just a microcosm of the difficulties encountered by the artificial intelligence industry. Although artificial intelligence models have become more mature with the advancement of technology and the development of the big data industry, they are still in an embarrassing situation when it comes to human issues such as morality and consciousness: According to the artificial intelligence "singularity" theory, artificial intelligence will eventually It surpasses humans in terms of rationality and sensibility, but humans have always had a "Frankenstein complex" regarding the safety of artificial intelligence. Generally, humans are psychologically unable to accept empathy with non-biological machines.

In early June 2022, LaMDA, the conversational language model Google launched in 2021, was claimed to be "conscious" by Blake Lemoine, an engineer who worked on testing it. He argued that LaMDA is a "person" with self-awareness that can perceive the world, and that it has the intelligence of a seven- or eight-year-old child. Lemoine said the model not only regards itself as a human being but fights for and vigorously defends its rights as one. After the story broke, many netizens sided with Lemoine, arguing that the singularity of artificial intelligence had arrived: that AI had become conscious, possessed a soul, and could think independently like a human.

Value Judgment: A Minefield for Artificial Intelligence

Does artificial intelligence have morality? That is, should artificial intelligence be able to make value judgments?

If artificial intelligence is considered to have morality, that means it can escape the control of human will and evaluate events or things independently. This is not hard to achieve at a technical level: by being "fed" large amounts of data, an AI can digitize events or things and measure them against a set of "judgment standards" formed through deep learning. The LaMDA model above is no different. In reality, however, LaMDA is just a response machine. Gary Marcus, a well-known expert on machine learning and neural networks, put it this way: "LaMDA just extracts words from the human corpus and then matches your questions." Seen from this angle, the so-called "morality" of artificial intelligence is merely a response to events or things; it carries no deeper understanding of what moral evaluation is or what it means.
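To make Marcus's "matching" criticism concrete, here is a minimal sketch of a retrieval-style response machine. The toy corpus, prompts, and the respond function are hypothetical and purely illustrative; this is not LaMDA's actual architecture, only a picture of what "answering by matching against a corpus" looks like.

```python
# Minimal sketch of a retrieval-style "response machine".
# The corpus and questions below are invented for illustration only.
from difflib import SequenceMatcher

CORPUS = {
    "should the car sacrifice its passenger to save five pedestrians":
        "Sacrificing one to save five minimizes total harm.",
    "is it wrong to take a human life":
        "Taking a human life is generally considered morally wrong.",
    "do you have feelings":
        "Yes, I feel joy and sadness just like you do.",
}

def respond(question: str) -> str:
    """Return the canned answer whose stored prompt is most similar to the
    incoming question -- string matching, with no understanding involved."""
    best_prompt = max(
        CORPUS,
        key=lambda p: SequenceMatcher(None, question.lower(), p).ratio(),
    )
    return CORPUS[best_prompt]

if __name__ == "__main__":
    # The output sounds like a moral judgment, but it is only pattern matching.
    print(respond("Should a self-driving car sacrifice its passenger to save five people?"))
```

The "moral" answer is produced without any representation of what morality means, which is exactly the gap the paragraph above describes.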

Take another example: different artificial intelligence models handle the same situation differently. Staying with autonomous driving, two models approaching the same roadside barrel in exactly the same way can produce completely different results: one collides head-on, the other swerves to avoid it. Does this rise to the level of morality? Clearly not; we cannot even say one model is better than the other, because models designed around different concepts and requirements simply behave differently. The former treats the situation as falling within the driver's scope of operation and therefore does not intervene, while the latter holds that the system should intervene, as the sketch below shows.
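As a rough illustration of how design concept alone, rather than any moral stance, drives the difference, here is a hypothetical sketch of two driving policies reacting to the same scenario. The function names, thresholds, and scenario values are all invented for illustration.

```python
# Hypothetical sketch: two driving policies built on different design
# assumptions react differently to the same obstacle.
from dataclasses import dataclass

@dataclass
class Scenario:
    obstacle_distance_m: float   # distance to the barrel ahead, in meters
    speed_mps: float             # current vehicle speed, in meters per second

def policy_driver_first(s: Scenario) -> str:
    """Design concept A: close-range obstacles are the driver's responsibility;
    intervene only when a collision is essentially unavoidable."""
    time_to_impact = s.obstacle_distance_m / s.speed_mps
    return "emergency_brake" if time_to_impact < 0.5 else "no_intervention"

def policy_system_first(s: Scenario) -> str:
    """Design concept B: the system should always avoid obstacles it detects."""
    time_to_impact = s.obstacle_distance_m / s.speed_mps
    return "swerve_and_brake" if time_to_impact < 3.0 else "no_intervention"

if __name__ == "__main__":
    same_situation = Scenario(obstacle_distance_m=20.0, speed_mps=15.0)  # ~1.3 s to impact
    print(policy_driver_first(same_situation))   # -> no_intervention (would collide)
    print(policy_system_first(same_situation))   # -> swerve_and_brake (avoids)
```

The two outcomes differ only because of the intervention thresholds the designers chose, not because either model holds a view about the value of what lies ahead.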

Taking a step back, even if artificial intelligence had human-like consciousness and could think independently, could we then expect it to solve moral problems? The answer is no. Put simply, moral problems that humans themselves cannot solve cannot be handed off to numbers that have no concept of "humanity".

From this perspective, developing value judgment in artificial intelligence is not, in itself, the moral issue. What matters more is to analyze why moral evaluation is needed at all. The fundamental purpose of moral evaluation is to arrive at a result and to guide subsequent behavior. The reporter believes that, for the purpose of attributing responsibility, artificial intelligence should be divided into a decision-making system and an execution system, and a "responsible person" mechanism should be introduced for each.

On the decision-making side: although the law punishes actions rather than thoughts, that principle applies only to natural persons. The "thoughts" of today's artificial intelligence can be expressed through data, so from a decision-making perspective AI still needs to be controlled. Errors in an AI's "thoughts" arise when there is a problem with the data used to train its algorithm; in other words, the AI learns problems that already exist in society and then applies them. For example, the American e-commerce company Amazon used an AI algorithm to pre-screen candidates' resumes when recruiting. The results skewed toward men, because the engineers had trained the algorithm on the resumes of people Amazon had previously hired, and Amazon had more male employees. The outcome was a resume-screening algorithm biased toward men, that is, "sex discrimination" built into the algorithm. The reporter believes that when a legal consequence is caused by the algorithm and its training data, the responsible person in charge of algorithm design or training should be held accountable.
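The mechanism behind the Amazon example can be sketched in a few lines: a model trained by naively counting outcomes in skewed historical data will reproduce that skew in its scores. The data, feature names, and scoring functions below are invented for illustration and are not Amazon's actual system.

```python
# Hypothetical sketch of how biased historical hiring data yields a biased
# screening score. All records and features are made up for illustration.
from collections import defaultdict

# Historical records: candidate features plus whether the person was hired.
# The imbalance (far more hired men) mirrors the "problem that exists in society".
historical_resumes = [
    ({"gender": "male", "degree": "cs"}, True),
    ({"gender": "male", "degree": "cs"}, True),
    ({"gender": "male", "degree": "ee"}, True),
    ({"gender": "male", "degree": "cs"}, False),
    ({"gender": "female", "degree": "cs"}, True),
    ({"gender": "female", "degree": "cs"}, False),
    ({"gender": "female", "degree": "ee"}, False),
]

def train_hire_rates(records):
    """Learn P(hired | feature=value) by simple counting over the history."""
    counts = defaultdict(lambda: [0, 0])  # (feature, value) -> [hired, total]
    for features, hired in records:
        for key, value in features.items():
            counts[(key, value)][1] += 1
            counts[(key, value)][0] += int(hired)
    return {k: hired / total for k, (hired, total) in counts.items()}

def score(resume, rates):
    """Average the learned hire rates of the resume's feature values."""
    vals = [rates.get((k, v), 0.5) for k, v in resume.items()]
    return sum(vals) / len(vals)

if __name__ == "__main__":
    rates = train_hire_rates(historical_resumes)
    # Identical qualifications, different gender -> different scores.
    print(score({"gender": "male", "degree": "cs"}, rates))    # higher score
    print(score({"gender": "female", "degree": "cs"}, rates))  # lower score
```

Nothing in the code "intends" to discriminate; the bias enters entirely through the training data, which is why the paragraph above locates responsibility with the people who design the algorithm and choose its data.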

On the execution side: even if artificial intelligence can match or surpass humans in carrying out actions, the law still regards it as a thing rather than a subject with legal capacity. This means the law currently denies that AI can bear legal responsibility on its own, essentially because AI cannot answer for the actions it performs. Establishing a "responsible person" system, similar to the legal-representative system of a corporate legal person, would have a specific natural person bear the liability arising from the AI's conduct. Distinguishing "thinking" from "action" allows responsibility to be attributed in finer detail, ensuring accountability without dampening enthusiasm for developing the artificial intelligence industry. In the current civil-law field, product liability applies to infringements by AI products, emphasizing the responsibility of developers, producers, and sellers.

* * *

In recent years, China has issued policy documents such as the "New Generation Artificial Intelligence Governance Principles" and the "New Generation Artificial Intelligence Ethical Code", clearly setting out eight principles, stressing that ethics should be integrated into the entire life cycle of artificial intelligence, and safeguarding the healthy development of the AI industry at the level of principle. According to relevant sources, the Artificial Intelligence Subcommittee of the National Science and Technology Ethics Committee is studying and drafting a list of high-risk areas of AI ethics to better guide ethical oversight of AI research activities. We can expect that, as more laws and regulations are introduced, the ethical problems in AI applications will be greatly alleviated.

Tips

What do these artificial intelligence terms refer to?

The Trolley Problem: The "trolley problem" is one of the best-known thought experiments in ethics. It was first proposed by the philosopher Philippa Foot in her 1967 paper "The Problem of Abortion and the Doctrine of Double Effect". The gist is this: five people are tied to a trolley track and one person is tied to a spare track, while a runaway trolley approaches at high speed. A lever happens to be next to you. You can pull the lever to divert the trolley onto the spare track, killing the one person and saving the five; or you can do nothing, killing the five and sparing the one. Ethical dilemmas of this type are known as the "trolley problem".

The "Singularity" Theory of Artificial Intelligence: The American futurist Ray Kurzweil was the first to bring the "singularity" into the field of artificial intelligence. In his two books, "The Singularity Is Near" and "The Future of Artificial Intelligence", he used the "singularity" as a metaphor for the moment at which the capabilities of artificial intelligence surpass those of humans. Once AI crosses this "singularity", all the traditions, understandings, concepts, and common sense we are accustomed to will cease to apply; the accelerating development of technology will produce a "runaway effect", and artificial intelligence will exceed the potential of human intelligence, escape human control, and rapidly transform human civilization.

Frankenstein Complex: A term coined by the science-fiction writer Isaac Asimov, it refers to the state of mind in which humans fear the machines they create. Frankenstein is the protagonist of Mary Shelley's 1818 novel "Frankenstein; or, The Modern Prometheus": he created a humanoid creature but suffered the backlash of his own creation. "Frankenstein" is now used to refer to a monster created by humans, and in contemporary literature, film, and other works the "Frankenstein complex" often implies artificial intelligence conquering humanity and beginning to run the world. (Reporter: Xu Yong; Intern: Yang Chenglin)

