Do we need to redefine AI ethics?
Artificial intelligence (AI) has two distinct goals, which are not mutually exclusive at present, but only one of which is beneficial to humanity in the long term: to make people more productive, or to replace them. Two recent events suggest that we may need to adjust our ethical norms to use artificial intelligence properly.
The first involves an artist who used artificial intelligence to create a work that won an art competition, which many considered unfair; the second involves students using AI to write better papers faster and with less effort.
The debate surrounding these incidents follows a familiar pattern. Many once argued that calculators and PCs should be banned from schools because they reduce the need for students to learn their multiplication tables and let them skip much primary study in favor of online search and encyclopedias. Yet over time, skill in using calculators and PCs became more valuable to students entering the workforce, because those tools make them more efficient.
In short, the question we must ultimately answer is whether using artificial intelligence to create better products faster is cheating, or whether it should simply be accepted as the norm.
There is a newer idea about artificial intelligence: it may be easier to create AI that replaces humans than AI that augments them. The replacement approach simply focuses on replicating what a person does, creating a digital twin of them, and companies are already doing this — intelligent automation in manufacturing, for example. Such systems need no close interaction with humans, with whom they lack a common language, common skills, and common interests.
This means that the most effective path for AI may not be enhancement but replacement: an AI operating alone within its parameters draws no objection, while an AI used to significantly enhance a person — especially in a competition — is liable to be considered cheating.
Self-driving cars make this especially obvious. The current default approach is to enhance the driver's capabilities, which Toyota calls the "Guardian Angel" model. But in tests, Intel found that giving human drivers the option to take control of a self-driving car actually increases their stress, because they never know whether they will suddenly be asked to drive. Untrained drivers feel more comfortable when the car offers no human driving option at all. This suggests that, in the long run, self-driving cars that do not allow a human to take control will be more popular and more successful than those that do.
It is normal for an artist or writer to collaborate with someone more capable than themselves to create a piece of art, a paper, or even a book. Nor is it uncommon for someone to publish, with permission, a book written largely by another author. Would it really be worse if an AI filled the role of teacher, mentor, collaborator, partner, or ghostwriter?
Companies just want quality work, and if they can get higher quality from machines than humans, they will make and have made that choice. Just think about the process of manufacturing and warehouse automation over the past few decades.
We need to learn how to use AI well — how to accept work products that make the most efficient use of AI resources, while still preventing intellectual property theft and plagiarism. If we do not, AI development will continue to shift from assisting humans toward replacing them, which serves neither industry nor the growing number of professions that could put artificial intelligence to better use.