


When will the development of artificial intelligence break through its shackles?
Recently, a vehicle from a certain brand was involved in an accident during prototype testing, causing significant casualties and property damage, and the news immediately made headlines. Public attention has once again turned to the familiar questions surrounding autonomous driving: Is it safe? Should it be opened up to the public? Beneath the surface of autonomous driving lies the harder core question: can artificial intelligence make moral judgments?
"Trolley Problem" Artificial Intelligence Faces Moral Dilemmas
The Massachusetts Institute of Technology (MIT) built a website called Moral Machine, which presents visitors with various scenarios in which a car loses control and asks them to choose what it should do. After collecting feedback from some three million users, Moral Machine found that people generally prefer a self-driving car to sacrifice itself in order to protect more lives, yet do not want their own car to have this feature. This preference holds regardless of culture or custom: it is a common human choice, consistent with society's general intuition that sacrificing the lives of a few to save the lives of many is acceptable. Yet it runs counter to the law, because human lives cannot be compared or quantified against one another.
This is an unavoidable moral issue for autonomous driving. On the one hand, if the AI is allowed to decide what to do when the vehicle loses control, then by the principle that power and responsibility go together, the AI, or the autonomous-driving company behind it, should bear responsibility for that decision. On the other hand, if the AI is not allowed to make decisions, the system can hardly be called autonomous driving, since an autonomous car is by definition one that can sense its environment and navigate without human intervention.
The "trolley problem" encountered by autonomous driving is just a microcosm of the difficulties encountered by the artificial intelligence industry. Although artificial intelligence models have become more mature with the advancement of technology and the development of the big data industry, they are still in an embarrassing situation when it comes to human issues such as morality and consciousness: According to the artificial intelligence "singularity" theory, artificial intelligence will eventually It surpasses humans in terms of rationality and sensibility, but humans have always had a "Frankenstein complex" regarding the safety of artificial intelligence. Generally, humans are psychologically unable to accept empathy with non-biological machines.
In early June this year, LaMDA, the conversational language model Google launched in 2021, was claimed to be "conscious" by Blake Lemoine, an engineer who worked on it. He believes LaMDA is a "person" with self-awareness, able to perceive the world, and with the intelligence of a seven- or eight-year-old child. Lemoine said that LaMDA not only regards itself as a human being but also fights for and vigorously defends its rights as one. After the story broke, many netizens sided with Lemoine, arguing that the AI singularity had arrived and that artificial intelligence had become conscious, acquired a soul, and could think independently like a human.
Value Judgment: A Minefield for Artificial Intelligence
Is artificial intelligence moral? That is, should artificial intelligence make value judgments?
To say that artificial intelligence has morality is to say that it can escape the control of human will and evaluate events or things independently. This is not hard to achieve at the technical level: by being "fed" large amounts of data, an AI can digitize events or things and measure them against a set of "judgment standards" formed through deep learning. The LaMDA model above works the same way. In reality, however, LaMDA is merely a response machine. As Gary Marcus, a well-known machine learning and neural network expert, put it: "LaMDA just extracts words from the human corpus and then matches your questions." Seen this way, the so-called "morality" of artificial intelligence is only a response to events or things; it involves no deep understanding of what moral evaluation is or what it means.
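To make Marcus's point concrete, here is a deliberately simplified sketch, written for illustration only. The corpus sentences, the word-overlap scoring, and the example question are all invented; real models such as LaMDA rely on learned neural representations rather than word counting, but the underlying point is the same: the output is produced by matching patterns in human-written text, not by moral understanding.

```python
# Illustration only: a toy "response machine" with no understanding of
# morality or consciousness. It returns whichever stored sentence shares
# the most words with the question -- a crude stand-in for pattern matching.
corpus = [
    "I enjoy spending time with my friends and family.",
    "I believe I am aware of my own existence.",
    "The weather today is sunny and warm.",
]

def word_overlap(a: str, b: str) -> int:
    """Count the lowercase words two sentences have in common."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def respond(question: str) -> str:
    """Return the corpus sentence whose wording best matches the question."""
    return max(corpus, key=lambda sentence: word_overlap(question, sentence))

print(respond("Are you aware of your own existence?"))
# Prints "I believe I am aware of my own existence." purely because of word
# overlap, not because the system holds any view about its own consciousness.
```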
To take another example, different AI models handle the same situation differently. Staying with autonomous driving: two vehicles approaching the same roadside bucket in the same way can produce completely different outcomes, one colliding with it head-on and the other steering around it. Does this rise to a moral level? Clearly not, and there is not even a meaningful ranking between the two models, because models designed around different concepts and requirements simply behave differently. The former treats the situation as falling within the driver's responsibility and does not intervene; the latter judges that intervention is required, as the sketch below illustrates.
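A minimal sketch of that design difference, using invented names and thresholds rather than any manufacturer's real control logic: the two "models" below disagree about the same obstacle only because their designers chose different intervention thresholds, which is an engineering decision, not a moral one.

```python
# Illustration only: two hypothetical driving policies that perceive the same
# obstacle but are configured with different intervention thresholds.
from dataclasses import dataclass

@dataclass
class DrivingPolicy:
    name: str
    intervene_above: float  # design choice: confidence required before overriding the driver

    def react(self, obstacle_confidence: float) -> str:
        """Decide whether to intervene for an obstacle detected with the given confidence."""
        if obstacle_confidence >= self.intervene_above:
            return f"{self.name}: brake and steer around the obstacle"
        return f"{self.name}: leave it to the driver, no intervention"

model_a = DrivingPolicy("Model A", intervene_above=0.9)  # intervenes only when very sure
model_b = DrivingPolicy("Model B", intervene_above=0.5)  # intervenes readily

same_obstacle = 0.7  # both systems detect the same bucket with 70% confidence
print(model_a.react(same_obstacle))  # no intervention -> collides with the bucket
print(model_b.react(same_obstacle))  # intervenes -> avoids the bucket
```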
Taking a step back, even if artificial intelligence did have human-like consciousness and could think independently, could we then hope that it would solve moral problems? The answer is no. Simply put, moral problems that humans themselves cannot solve cannot be expected to be solved by numbers that carry no concept of "humanity".
Seen from this angle, there is no moral problem in developing artificial intelligence's capacity for value judgment; what matters more is to analyze why moral evaluation is needed in the first place. The fundamental purpose of moral evaluation is to reach a conclusion and guide subsequent behavior. In this reporter's view, when it comes to attributing responsibility, artificial intelligence should be divided into a decision-making system and an execution system, with a "responsible person" mechanism introduced for each.
On the decision-making side: although the law punishes acts rather than thoughts, that principle applies only to natural persons. The "thoughts" of today's artificial intelligence can be expressed as data, so on the decision-making side AI still needs to be controlled. When an AI's "thoughts" go wrong, the cause is usually a problem in the data used to train the algorithm; in other words, the AI learns problems that already exist in society and then applies them. For example, the American e-commerce company Amazon used an AI algorithm to pre-screen candidates' resumes when recruiting. The results skewed toward men, because the engineers trained the algorithm on the resumes of people Amazon had previously hired, and Amazon employed more men. The algorithm thus learned to favor male resumes, producing "sex discrimination" in the screening process, as sketched below. In this reporter's view, when an algorithm and its training data lead to legal consequences, the person responsible for designing or training the algorithm should be held accountable.
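The mechanism behind that kind of "sex discrimination" can be shown with a toy example. The sketch below uses made-up data and a crude frequency rule rather than Amazon's actual system, but it captures the pattern: a screener that imitates historical hiring decisions reproduces whatever skew those decisions contained.

```python
# Illustration only: invented hiring records and a crude frequency rule,
# not Amazon's real data or algorithm.
historical_hires = [
    # (resume_feature, was_hired) -- the feature is a proxy that correlates
    # with gender, e.g. membership in a "women's chess club".
    ("womens_club", False), ("womens_club", False), ("womens_club", True),
    ("mens_club", True), ("mens_club", True), ("mens_club", True),
    ("mens_club", False),
]

def hire_rate(feature: str) -> float:
    """Fraction of past applicants with this feature who were hired."""
    outcomes = [hired for f, hired in historical_hires if f == feature]
    return sum(outcomes) / len(outcomes)

def screen(feature: str, threshold: float = 0.5) -> bool:
    """'Learned' rule: pass a resume if similar past applicants were usually hired."""
    return hire_rate(feature) >= threshold

print(screen("mens_club"))    # True  -- the screener inherits the historical skew
print(screen("womens_club"))  # False -- equally qualified candidates get filtered out
```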
On the execution side: even if artificial intelligence can match or surpass humans in carrying out actions, the law still treats it as a thing rather than a subject with rights and capacities. This means the law currently denies that AI can independently bear legal responsibility, essentially because AI cannot answer for the actions it performs. Establishing a "responsible person" system, similar to the legal-representative system of a corporate legal person, would have a designated natural person bear the liability arising from the AI's acts. Distinguishing "thinking" from "acting" allows responsibility to be attributed in finer detail, ensuring accountability without dampening enthusiasm for developing the AI industry. In the current civil law field, product liability already applies to infringements by AI products, placing responsibility on developers, producers, and sellers.
* * *
In recent years, China has issued policy documents such as the "New Generation Artificial Intelligence Governance Principles" and the "New Generation Artificial Intelligence Ethical Code", clearly setting out eight principles, stressing that ethics should be integrated into the entire life cycle of artificial intelligence, and safeguarding the healthy development of the AI industry at the level of principle. According to informed sources, the Artificial Intelligence Subcommittee of the National Science and Technology Ethics Committee is studying and drafting a high-risk list for AI ethics, so as to better supervise the ethics of AI research activities. With the introduction of more laws and regulations, ethical issues in the application of artificial intelligence should be greatly alleviated.
Tips
What do these artificial intelligence terms mean?
The Trolley Problem: The "trolley problem" is one of the best-known thought experiments in ethics, first proposed by the philosopher Philippa Foot in her 1967 paper "The Problem of Abortion and the Doctrine of the Double Effect". In outline: five people are tied to a trolley track and one person is tied to a side track, while a runaway trolley speeds toward them. A lever happens to be within your reach. You can pull the lever to divert the trolley onto the side track, killing the one person and saving the five; or you can do nothing, killing the five and sparing the one. Ethical dilemmas of this type are known as the "trolley problem".
Artificial Intelligence "Singularity" Theory: The American futurist Ray Kurzweil was the first to bring the "singularity" into the field of artificial intelligence. In his two books, "The Singularity Is Near" and "The Future of Artificial Intelligence", he used the "singularity" as a metaphor for the point at which the capabilities of artificial intelligence surpass those of humans. Once artificial intelligence crosses this "singularity", all the traditions, understandings, concepts, and common sense we are used to will cease to apply; the accelerating development of technology will trigger a "runaway effect", and artificial intelligence will exceed the potential and control of human intelligence, rapidly transforming human civilization.
Frankenstein Complex: A term originating with the novelist Isaac Asimov, referring to humans' fear of the machines they create. Frankenstein is the protagonist of Mary Shelley's 1818 novel "Frankenstein; or, The Modern Prometheus": he creates a humanoid creature and is in turn destroyed by his own creation. "Frankenstein" is now used to refer to a monster made by humans. In today's literature, film, and other works, the "Frankenstein complex" often carries the implication of artificial intelligence conquering humanity and beginning to run the world.
Reporter: Xu Yong; Intern: Yang Chenglin