
Check out ChatGPT!

Apr 12, 2023, 11:10 PM
Tags: AI, ChatGPT, Microsoft

Author | Peter Wayner

Planner | Yizhou

ChatGPT is still hugely popular and keeps collecting endorsements from celebrities: Bill Gates, Microsoft's Satya Nadella, Tesla's Elon Musk, and, in China, Robin Li, Zhou Hongyi, and Zhang Chaoyang. Even Zheng Yuanjie, an author far outside the technology circle, has begun to believe that "writers may be unemployed in the future" because of ChatGPT. Sergey Brin, Google's long-retired co-founder, has been stirred into action, and former Meituan co-founder Wang Huiwen has resurfaced, posting a recruiting call for AI talent to build a Chinese OpenAI.

Generative AI, represented by ChatGPT and DALL-E, produces text rich in detail, ideas, and knowledge in a dazzling range of styles, tossing off gorgeous answers and artwork. The resulting artifacts are so varied and unique that it is hard to believe they came from a machine.

So much so that some observers believe these new AIs have finally crossed the threshold of the Turing test; in some tellings, the threshold was not merely cleared but blown to pieces. The AI's art is so good that, as people put it, "yet another group of workers is on the verge of unemployment."

However, after more than a month of buzz, the sense of wonder is fading and generative AI's halo is beginning to dim. Some observers found that if they asked questions in just the right way, ChatGPT would spit out something stupid or even wrong.

Others deployed the old logic bombs beloved of elementary-school art classes, asking for a picture of the sun at night or a polar bear in a snowstorm. Still others asked stranger questions that lay bare the limits of AI's contextual awareness.

This article summarizes the "ten sins" of generative AI. These accusations may read like sour grapes (I, too, envy the power of AI; if the machines take over, I will be out of a job), but they are meant as a reminder, not a smear.

1. Plagiarism is harder to detect

Generative AI models such as DALL-E and ChatGPT do not really create anything; they generate new patterns from a training set of hundreds of thousands of examples. The result is a cut-and-paste synthesis drawn from many sources, and when humans do this it is called plagiarism.

Of course, humans learn through imitation too, but in some cases the AI's "taking" and "borrowing" is blatant enough to enrage an elementary-school teacher. Much AI-generated content consists of large blocks of text reproduced more or less verbatim. Sometimes, though, there is enough blending or synthesis that even a panel of university professors would struggle to identify the source. Either way, what is missing is uniqueness: for all their dazzle, these machines are incapable of producing anything truly new.
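To make the distinction concrete, here is a minimal sketch (my own illustration, not any real detector's method) of an n-gram overlap check: it flags near-verbatim borrowing easily, while a thorough remix of the same ideas sails right past it.

```python
def shingles(text, n=5):
    """Return the set of overlapping n-word sequences ("shingles") in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(generated, source, n=5):
    """Fraction of the generated text's n-grams that also occur in the source."""
    gen, src = shingles(generated, n), shingles(source, n)
    return len(gen & src) / len(gen) if gen else 0.0

source   = "the quick brown fox jumps over the lazy dog near the river bank"
verbatim = "the quick brown fox jumps over the lazy dog near the old mill"
remixed  = "a famously quick fox of brown hue leaps past a dog dozing by the river"

print(overlap_score(verbatim, source))  # high: easy to flag as lifted
print(overlap_score(remixed, source))   # zero: same ideas, no shared word sequences
```

Real detectors are far more elaborate, but the asymmetry is the same: verbatim borrowing leaves fingerprints, synthesis does not.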

2. Copyright questions will take years to resolve

While plagiarism is largely a school issue, copyright law governs the marketplace. When one person lifts another's work, they can be hauled into court and fined millions of dollars. But what about AI? Do the same rules apply to it?

Copyright law is a complex subject, and the question of the legal identity of generative AI will take years to resolve. But one thing is not difficult to predict: when artificial intelligence is good enough to replace employees, those replaced will definitely use their "free time at home" to file lawsuits.

3. Humans serve as unpaid labor for models

Plagiarism and copyright are not the only legal issues generative AI raises; lawyers are already dreaming up new ethical questions for litigation. For example, should a company that makes a drawing program be allowed to collect data about its users' drawing behavior and use it to train an AI? Should humans be compensated for the creative labor being used? Much of AI's current success stems from access to data, so what happens when the public that generated that data wants a piece of the pie? What is fair? What is legal?

4. Information accumulation, not knowledge creation

AI is particularly good at imitating the kind of intelligence that humans take years to develop. When a scholar is able to introduce an unknown 17th-century artist, or compose new music with an almost forgotten Renaissance tonal structure, there is every reason to marvel. We know that developing this depth of knowledge requires years of study. When an AI does these same things with just a few months of training, the results can be incredibly precise and correct, but something is missing.

Artificial intelligence only appears to imitate the interesting, unpredictable side of human creativity; it is similar in form but not in spirit, and cannot truly pull it off. Yet unpredictability is what drives creative innovation: the fashion and entertainment industries are not merely addicted to change, they are defined by it.

In fact, artificial and human intelligence each have their areas of strength. If a trained machine can find the right old receipt in a digital box filled with billions of records, it can also learn everything there is to know about a poet like Aphra Behn, the 17th-century Englishwoman often described as the first woman to make a living by writing. It is even conceivable that machines could be built to decipher the meaning of Mayan hieroglyphics.

5. Intelligence is stagnant and difficult to grow

When it comes to intelligence, artificial intelligence is essentially mechanical and rule-bound. Once an AI has been run through a set of training data, it produces a model, and that model does not really change afterward. Some engineers and data scientists envision gradually retraining AI models over time so the machines can learn to adapt.

In most cases, though, the idea is to build a complex set of neurons that encode knowledge in a fixed form. This constancy has its place and suits certain industries, but it is also a weakness: the danger is that the model's understanding stays frozen forever in the era of its training data.
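As a toy illustration of the staleness problem (invented numbers, NumPy for convenience, nothing like a real production pipeline): fit a simple trend once, let the underlying world drift, and compare the frozen fit against one that is refit on fresh data each period.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(month):
    # Hypothetical world: the relationship being modeled drifts upward over time.
    x = rng.uniform(0, 10, 200)
    y = 2.0 * x + 0.5 * month + rng.normal(0, 0.5, 200)
    return x, y

# Train once on month 0, then freeze -- like a model whose weights never change
# after the original training run.
x0, y0 = sample(0)
frozen = np.polyfit(x0, y0, 1)

for month in (0, 6, 12):
    x, y = sample(month)
    frozen_err = np.mean(np.abs(np.polyval(frozen, x) - y))
    refit = np.polyfit(x, y, 1)  # the periodically retrained alternative
    refit_err = np.mean(np.abs(np.polyval(refit, x) - y))
    print(f"month {month}: frozen error {frozen_err:.2f}, refit error {refit_err:.2f}")
# The frozen model's error grows as the world moves on; the refit model keeps up.
```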

What happens if we become so dependent on generative AI that we can no longer create new materials for training models?

6. The gates to privacy and security are too loose

Training data for artificial intelligence has to come from somewhere, and we are not always sure what will resurface from inside a neural network. What if an AI leaks personal information from its training data?

Worse, locking down an AI is much harder, because these systems are designed to be flexible. A relational database can restrict access to specific tables containing personal information, but an AI can be queried in dozens of different ways, and attackers will quickly learn how to phrase exactly the right question to extract the sensitive data they want.
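Here is a minimal sketch of that contrast, using a hypothetical assets table and Python's built-in sqlite3 module: a conventional application can funnel every request through a guard that exposes only whitelisted columns, whereas a model that has absorbed the same records offers no equivalent choke point.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (name TEXT, value REAL, latitude REAL, longitude REAL)")
conn.execute("INSERT INTO assets VALUES ('vault', 1000000, 48.85, 2.35)")

# The guard only ever exposes whitelisted columns, however the request is phrased.
ALLOWED_COLUMNS = {"name", "value"}

def safe_lookup(column, name):
    if column not in ALLOWED_COLUMNS:
        raise PermissionError(f"column {column!r} is not exposed")
    row = conn.execute(f"SELECT {column} FROM assets WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

print(safe_lookup("value", "vault"))   # allowed: 1000000.0
# safe_lookup("latitude", "vault")     # blocked: raises PermissionError

# A language model that has ingested the same rows into its weights or its context
# window has no single enforcement point like this; a cleverly phrased question
# can route around whatever guardrails were bolted onto the prompt.
```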

For example, suppose an attacker wants the location of some asset. The latitude and longitude can be asked for indirectly: a clever attacker might ask for the exact moment the sun will rise at that spot a few weeks from now, and a conscientious AI will do its best to answer. Teaching artificial intelligence to protect private data is a genuinely hard problem.
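As a rough illustration of how an innocuous-sounding answer leaks location, consider the standard sunrise-equation approximation (a back-of-the-envelope sketch, not any particular AI's internals): day length follows from latitude and date, so an attacker who learns sunrise and sunset times for a known date can simply scan latitudes until the prediction matches.

```python
import math

def solar_declination_deg(day_of_year):
    # Rough approximation of the sun's declination (degrees) on a given day of year.
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def day_length_hours(latitude_deg, day_of_year):
    # Sunrise equation: cos(h0) = -tan(latitude) * tan(declination).
    decl = math.radians(solar_declination_deg(day_of_year))
    lat = math.radians(latitude_deg)
    cos_h0 = max(-1.0, min(1.0, -math.tan(lat) * math.tan(decl)))  # clamp for polar day/night
    h0 = math.degrees(math.acos(cos_h0))  # hour angle at sunrise, in degrees
    return 2.0 * h0 / 15.0                # 15 degrees of hour angle per hour

# Pretend the "harmless" answer gave us the day length at the secret spot on June 21.
observed = day_length_hours(48.85, 172)

# Invert by brute force: find the latitude whose predicted day length matches.
leaked = min(range(-89, 90), key=lambda lat: abs(day_length_hours(lat, 172) - observed))
print(leaked)  # prints a latitude of about 49 degrees north
```

Longitude falls out even more easily, since the clock time of sunrise shifts by roughly four minutes per degree of longitude.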

7. The uncharted territory of bias

Since the mainframe era, the technology community has used the phrase "garbage in, garbage out" (GIGO) to help the public grasp the core of many computing problems. Many of AI's problems stem from poor training data: if the data set is inaccurate or biased, the results will reflect it.

The hardware at the core of generative AI may be rigorously logical, but the humans who build and train it are not. Prejudice and partisan bias have been shown to find their way into AI models. Perhaps someone used biased data to build the model. Perhaps they bolted on extra training to keep the model from answering certain hot-button questions. Perhaps they hard-coded answers that then become difficult to detect.

Artificial intelligence is indeed a useful tool, but that also means people with ulterior motives have ten thousand ways to turn it into an excellent vehicle for harmful beliefs.

Consider an example from rental housing abroad. AI systems used to screen prospective tenants relied on court records and other data sets that carried their own biases, reflected systemic racism, sexism, and ableism, and were notoriously error-prone. People who could clearly afford the rent were denied housing because a screening algorithm deemed them unqualified or unworthy. It is the answer we now hear constantly from salespeople: the big data, the system, the AI says so.
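A deliberately simplified, synthetic sketch of that failure mode (invented data, scikit-learn for convenience, not the actual screening system): train a classifier on historical decisions that favored one group, and it dutifully learns the bias back, even for applicants with identical incomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Hypothetical screening history: both groups have the same income distribution,
# but past decisions approved group 0 far more often than group 1.
group = rng.integers(0, 2, n)
income = rng.normal(50, 10, n)
approved = rng.random(n) < np.where(group == 0, 0.8, 0.3)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Score two applicants with identical income who differ only by group membership.
print(model.predict_proba([[50, 0], [50, 1]])[:, 1])  # roughly [0.8, 0.3]
# The "objective" algorithm has simply memorized the prejudice in its training data.
```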

(Image: ChatGPT's behavior after being offended)

8. Machine stupidity catches us off guard

It is easy to forgive AI models their mistakes because they do so many other things well. The trouble is that many of the errors are hard to anticipate, because artificial intelligence thinks differently from humans.

For example, many users of text-to-image tools have found that the AI makes simple mistakes, like miscounting. Humans learn basic arithmetic in early elementary school and then apply that skill in endless ways: ask a ten-year-old to draw an octopus and the child will almost certainly check that it has eight legs. Current AI models tend to bog down whenever mathematics must be used abstractly or in context.

This could be easily changed if the model builder paid some attention to this mistake, but there are other unknown errors as well. Machine intelligence will be different from human intelligence, which means that machine stupidity will also be different.

9. Machines can also lie and can easily deceive people

Sometimes, without realizing it, we fall into AI's traps. In our blind spots, we tend to believe it. If an AI tells us that Henry VIII was the king who killed his wives, we do not question it, because we do not know the history ourselves. We simply assume the AI is right, the same way an audience at a conference defaults to believing that the charismatic presenter on stage knows more than they do.

The trickiest problem for users of generative AI is knowing when the AI is wrong. "Machines don't lie" may be our mantra, but it is not so. Machines may not lie the way humans do, but their mistakes are all the more dangerous for it.

They can write paragraphs of perfectly accurate fact and then, without anyone noticing the shift, drift into speculation or outright falsehood. AI has mastered the art of mixing truth with fiction. The difference is that a used-car dealer or a poker player usually knows when they are lying; the AI does not.

10. Infinite abundance: a worrying economic model

The infinite replicability of digital content has already strained economic models built on scarcity, and generative AI will break them further. It will put some writers and artists out of work, and it upends many of the economic rules we all live by.

  • Will ad-supported content work when both ads and content can be endlessly remixed and reborn?
  • Will the free part of the internet turn into a world of "bots clicking on page ads", all generated by artificial intelligence and capable of infinite replication?
  • “Prosperity and abundance” so easily achieved may disrupt every corner of the economy.
  • If non-fungible tokens could be replicated forever, would people continue to pay for them?
  • If making art was so easy, would it still be respected? Will it still be special? Would anyone mind if it wasn't special?
  • Does everything lose its value when everything is taken for granted?
  • Is this what Shakespeare meant when he spoke of "slings and arrows of outrageous fortune"?

Rather than answer these questions ourselves, let generative AI try. It will likely return something interesting, unique, and strange, and it will most likely straddle the line of ambiguity: an answer that is slightly mysterious, on the edge of right and wrong, neither fish nor fowl.

Original link: https://www.infoworld.com/article/3687211/10-reasons-to-worry-about-generative-ai.html
