10 Reasons Why Generative AI Is Worrying
Generative AI models like ChatGPT are so astonishing that some now claim AI can not only match humans but often outsmart them. They produce dazzling artwork in a wild variety of styles. They churn out texts full of details, ideas, and knowledge. The resulting artifacts are so varied and so seemingly unique that it is hard to believe they came from a machine. We are only beginning to discover everything generative AI can do.
Some observers believe these new AIs have finally crossed the threshold of the Turing test. Others argue the threshold has not been surpassed so much as overhyped. Either way, the output is impressive enough that some people already feel their jobs are at risk.
Once people get used to it, though, the aura of generative AI will fade. It has become fashionable to pose questions in just the right way to make these intelligent machines say something stupid or wrong. Some deploy old logic bombs from elementary school art class, such as asking for a picture of the sun at night or a polar bear in a snowstorm. Others make bizarre requests that expose the limits of AI's contextual awareness, also known as common sense. Those so inclined can catalog the patterns by which generative AI fails.
This article proposes ten shortcomings, or pitfalls, of generative AI. The list may read as a bit of sour grapes, because if the machines were allowed to take over, I would lose my job. Call me a small-minded booster of the human team; I just hope humans show some heroism in their struggle with the machines. Still, shouldn't we all be a little worried?
When generative AI models like DALL-E and ChatGPT were first created, they were really just making new patterns from the millions of examples in their training sets. The results are a cut-and-paste synthesis drawn from many sources; when a human does this, it is called plagiarism.
Of course, humans learn through imitation too. In some cases, though, the borrowing is so obvious it would make an elementary school teacher uneasy: AI-generated content that reproduces large chunks of text more or less word for word. Sometimes there is enough mixing or synthesis involved that even a panel of university professors would have trouble tracing the origins. Either way, uniqueness is nowhere to be found. For all their shine, these machines are not capable of producing truly new work.
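The cut-and-paste mechanism can be seen in miniature with a bigram Markov chain, a drastic simplification of a real generative model but instructive on this one point: every transition such a toy can make was lifted directly from its training text, so anything "new" it emits is a remix of memorized fragments. A minimal sketch, with an invented training snippet:

```python
import random
from collections import defaultdict

random.seed(7)

# A tiny invented training set. Real models ingest millions of
# documents, but the stitching mechanism is the same in spirit.
training_text = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer "
    "the slings and arrows of outrageous fortune"
)

# "Training": record which words follow each word in the corpus.
model = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    model[a].append(b)

# "Generation": walk the chain, always picking an observed successor.
output = ["to"]
for _ in range(12):
    successors = model.get(output[-1])
    if not successors:
        break
    output.append(random.choice(successors))

print(" ".join(output))
# Every adjacent word pair in the output occurs verbatim in the
# training text: a novel-looking remix of cut-and-paste pieces.
```

A real model interpolates far more smoothly across billions of such fragments, which is exactly why its borrowings can be so hard to trace.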
Although plagiarism is largely a concern of schools, copyright law governs the marketplace. When one person plagiarizes another's work, they risk being hauled into court and fined potentially millions of dollars. But what about AIs? Do the same rules apply to them?
Copyright law is a complex topic, and the legal status of generative AI will take years to settle. But remember this: when AI starts producing work good enough to push humans toward unemployment, some of those humans will surely spend their newfound spare time filing lawsuits.
Plagiarism and copyright are not the only legal issues raised by generative AI. Lawyers are already dreaming up new ethical questions for litigation. For example, should a company that makes a drawing program be allowed to collect data on its human users' drawing behavior and then use that data to train an AI? Should humans be compensated for the use of that creative labor? The success of the current generation of AI stems largely from access to data. So what happens when the people who generated the data want a piece of the pie? What counts as fair? What counts as legal?
AI is especially good at mimicking the kind of intelligence that takes humans many years to develop. When a scholar profiles an obscure 17th-century artist, or writes new music in the tonal structures of an almost forgotten Renaissance style, we have good reason to be impressed: we know it takes years of study to build that depth of knowledge. When an AI does the same things after only a few months of training, the results can be dazzlingly accurate and correct, yet something essential is missing.
If a well-trained machine can find the right old receipt in a digital shoebox holding billions of records, it can also learn everything there is to know about a poet like Aphra Behn. You might even believe the machines were built to decode the meaning of Mayan hieroglyphics. AI may appear to imitate the playful, unpredictable side of human creativity, but it cannot really do so. Yet unpredictability is what drives creative innovation; an industry like fashion is not merely obsessed with change but defined by it. Artificial intelligence has its place, but so does good old, hard-won human intelligence.
When it comes to intelligence, AI is mechanical and rule-based by nature. Once an AI works through a set of training data, it produces a model, and that model does not really change afterward. Some engineers and data scientists envision gradually retraining models over time so the machines learn to adapt, but in most cases the idea is to build a complex web of neurons that encodes knowledge in a fixed form. Constancy has its place and may suit certain industries. The danger is that an AI stays forever stuck in the zeitgeist of its training data. What happens when we humans become so reliant on generative AI that we can no longer produce fresh material to train the models on?
AI training data has to come from somewhere, and we are not always sure what will surface from inside the neural network. What if an AI leaks personal information from its training data? Worse, locking down an AI is far harder than locking down a database, because AIs are designed to be flexible. A relational database can restrict access to the specific tables holding personal information, but an AI can be queried in dozens of different ways, and attackers will quickly learn to ask the right questions in the right form to extract the sensitive data they want. Suppose, for example, that the latitude and longitude of some asset are locked away. A clever attacker might instead ask for the exact time the sun will rise at that location over the next few weeks. A dutiful AI will try to answer, and sunrise times are a function of latitude, longitude, and date, so the coordinates leak out anyway, as the sketch below illustrates. We do not yet have a handle on teaching AI to protect private data.
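To see why those helpful answers give the secret away, here is a minimal sketch of the inference attack. The coordinates, dates, and function names are all hypothetical, and the sunrise formula is a rough textbook approximation rather than anyone's production code:

```python
import math
from datetime import date

def sunrise_utc(lat_deg, lon_deg, day):
    """Rough sunrise time in UTC hours. Ignores refraction and the
    equation of time; crude, but enough to illustrate the leak."""
    n = day.timetuple().tm_yday
    # Solar declination in degrees (simple cosine approximation).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (n + 10)))
    # Hour angle at sunrise; clamping handles polar day and night.
    x = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))
    hour_angle = math.degrees(math.acos(x))
    # Local solar sunrise, shifted to UTC by the longitude.
    return (12.0 - hour_angle / 15.0 - lon_deg / 15.0) % 24.0

# The guarded secret: hypothetical asset coordinates near Paris.
secret = (48.5, 2.5)
days = [date(2023, m, 1) for m in (3, 6, 9, 12)]

# Sunrise times a dutiful AI might report when asked politely.
leaked = [sunrise_utc(secret[0], secret[1], d) for d in days]

# The attacker inverts the answers with a brute-force grid search.
best, best_err = None, float("inf")
for i in range(-120, 121):            # latitude  -60.0 .. 60.0, step 0.5
    for j in range(-360, 361):        # longitude -180.0 .. 180.0, step 0.5
        lat, lon = i / 2.0, j / 2.0
        err = sum((sunrise_utc(lat, lon, d) - t) ** 2
                  for d, t in zip(days, leaked))
        if err < best_err:
            best, best_err = (lat, lon), err

print("recovered location:", best)    # prints (48.5, 2.5): the secret
```

Nothing here breaks into any database; the leak is simply that the model answers a question whose answer is a function of the secret.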
The earliest mainframe programmers coined the acronym GIGO, for "garbage in, garbage out," and that tells you they recognized the heart of the computing problem from the very start. Many problems with AI trace back to poor training data. If the data set is inaccurate or biased, the results are bound to reflect it.
The hardware at the heart of generative AI may be as logic-driven as Spock, but the humans who build and train the machines are not. Bias and favoritism have been shown to find their way into AI models. Maybe someone built the model from biased data. Maybe they added overrides to keep the model from answering certain hot-button questions. Maybe they hard-coded answers that are then difficult to detect. Humans have found plenty of ways to make artificial intelligence an excellent vehicle for our harmful beliefs.
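The GIGO mechanism is easy to demonstrate with the simplest possible "language model": a table of counted completions. The corpus below is invented and deliberately skewed, and real models are trained very differently, but the arithmetic of the problem is the same, since the most likely completion is whatever the data over-represents:

```python
from collections import Counter, defaultdict

# An invented, deliberately skewed training set: nurses are almost
# always "she" and engineers almost always "he".
sentences = (
    ["the nurse said she was tired"] * 95
    + ["the nurse said he was tired"] * 5
    + ["the engineer said he was late"] * 95
    + ["the engineer said she was late"] * 5
)

# "Training": for each profession, count the pronoun that follows "said".
pronoun_counts = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    profession, pronoun = words[1], words[3]
    pronoun_counts[profession][pronoun] += 1

def complete(profession):
    """Greedy completion: return the pronoun the data favors."""
    return pronoun_counts[profession].most_common(1)[0][0]

print(complete("nurse"))     # she  <- the skew goes in ...
print(complete("engineer"))  # he   <- ... and the skew comes out
```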
It is easy to forgive AI models their mistakes because they do so much else well. But many of the errors are hard to anticipate, because AIs think differently from humans. For example, many users of text-to-image tools have found that AIs get fairly simple things wrong, such as counting. Humans pick up basic arithmetic in elementary school and then use the skill in endless ways. Ask a 10-year-old to sketch an octopus, and the child will almost certainly draw eight legs. Current AIs tend to flounder on abstract and contextual uses of mathematics. This particular misstep could easily be fixed if model builders gave it some attention, but there will always be others. Machine intelligence differs from human intelligence, which means machine stupidity will differ too.
Often without realizing it, we humans tend to fill in the gaps for an AI. We supply missing information or mentally correct its answers. If an AI tells us Henry VIII was the king who murdered his wives, we do not question it, because we do not know the history ourselves. We simply assume in advance that the AI is right, just as an audience swept up by a charismatic performer does. If a statement sounds confident, the human mind tends to accept it as true and correct.
The trickiest problem for users of generative AI is knowing when the AI is wrong. Machines cannot lie the way humans do, but that makes them more dangerous. They can deliver a few lines of perfectly accurate data and then veer off into speculation or even outright slander without anyone noticing. A used-car dealer or a poker player usually knows when they are bending the truth, and most of them have tells that give them away. AIs do not.
Digital content can be copied infinitely, which has already strained many economic models built around scarcity, and that should make anyone nervous. Generative AI will break those models even further. It will put some writers and artists out of work, and it will upend many of the economic rules we all rely on to survive. Can ad-supported content still work when both the ads and the content can be endlessly remixed and regenerated? Will the free part of the internet shrink into a world of bots clicking on ads on web pages, all of it crafted and infinitely replicated by generative AI?
Such easy abundance could disrupt every corner of the economy. Would people keep paying for non-fungible tokens if the art behind them can be copied forever? If making art is so easy, will it still be respected? Will it still be special? And if it is not special, will anyone care? Does everything lose value once it is taken for granted? Is this what Shakespeare meant by "the slings and arrows of outrageous fortune"? Let us not try to answer that ourselves; let us ask generative AI. The answer will be interesting, strange, and ultimately trapped in some mysterious netherworld between right and wrong.
Source: www.cio.com