Generative AI models like ChatGPT are so astounding that some now claim AI can not only match humans but often outsmart them. They produce wonderful works of art in dazzling styles. They write texts full of details, ideas, and knowledge. The resulting artifacts are so varied and seemingly so unique that it is hard to believe they came from a machine. We are only beginning to discover what generative AI can do.
Some observers believe these new artificial intelligences have finally crossed the threshold of the Turing test. Others argue that the threshold has not really been surpassed, merely overhyped. Either way, the output is impressive enough that some people may already be on the verge of unemployment.
Once people get used to it, though, the aura of generative AI will fade. A crowd of observers has made a sport of asking questions in just the right way to get these intelligent machines to say something stupid or wrong. Some deploy the old logic bombs popular in elementary school art class, such as asking for a picture of the sun at night or a polar bear in a snowstorm. Others pose bizarre requests that expose the limits of AI's contextual awareness, also known as common sense. Those who pay attention can catalog the patterns by which generative AI fails.
This article proposes ten shortcomings, or pitfalls, of generative artificial intelligence. The list may read as a bit of sour grapes, since if the machines were allowed to take over, I would lose my job. Call me a small partisan of the human team; I simply hope humans can show some heroism in their struggle with the machines. Still, shouldn't we all be a little worried?
1. Plagiarism
When generative AI models like DALL-E and ChatGPT were first created, what they actually did was construct new patterns from the millions of examples in their training sets; the results are a cut-and-paste synthesis drawn from many sources. If a human did this, it would be called plagiarism.
Of course, humans also learn through imitation. In some cases, however, the borrowing is so obvious it would make an elementary school teacher uneasy: large chunks of AI-generated text reproduced more or less word for word. Other times there is enough mixing or synthesis that even a panel of university professors would struggle to trace the origins. Either way, there is no real originality in it. For all their polish, these machines are not capable of producing truly new work.
2. Copyright
Although plagiarism is largely a concern of schools, copyright law applies in the marketplace. When one person plagiarizes another's work, they risk being hauled into court and fined potentially millions of dollars. But what about AIs? Do the same rules apply to them?
Copyright law is a complex topic, and the legal status of generative AI will take years to settle. But remember this: when AI starts producing work good enough to push humans to the edge of unemployment, some of those humans will surely use their new free time to file lawsuits.
3. Unpaid labor
Plagiarism and copyright are not the only legal issues raised by generative AI. Lawyers are already dreaming up new ethical questions for litigation. For example, should a company that makes a drawing program be allowed to collect data on its human users' drawing behavior and then use that data to train an AI? Should humans be compensated for that creative labor? The success of the current generation of AI stems largely from access to data. So what happens when the people who generated that data want a share of the proceeds? What is fair? What is legal?
4. Information is not knowledge
AI is particularly good at imitating the kind of intelligence that takes humans many years to develop. When a scholar profiles an obscure 17th-century artist or writes new music in the tonal structures of an almost forgotten Renaissance style, we have good reason to be impressed; we know it takes years of study to build that depth of knowledge. When an AI does the same things after only a few months of training, the results can be dazzlingly accurate and correct, yet missing some essential ingredient.
If a well-trained machine can find the right old receipt in a digital shoebox filled with billions of records, it can also learn everything there is to know about a poet like Aphra Behn. You might even believe machines could be built to decode the meaning of Mayan hieroglyphics. AI may appear to imitate the playful, unpredictable side of human creativity, but it cannot really pull it off. And unpredictability is precisely what drives creative innovation. An industry like fashion is not merely obsessed with change but defined by it. Artificial intelligence has its place, but so does good old hard-won human intelligence.
5. Intelligence Stagnates
When it comes to intelligence, AI is by nature mechanical and rule-based. Once an AI processes a set of training data, it creates a model, and that model does not really change. Some engineers and data scientists envision gradually retraining models over time so the machines can learn to adapt. But in most cases the idea is to build a complex set of neurons that encodes some body of knowledge in a fixed form. Constancy has its place and may suit certain industries. The danger is that AI remains forever stuck in the zeitgeist of its training data. What happens when we humans become so reliant on generative AI that we can no longer produce new material to train the models?
6. Privacy and Security
AI training data needs to come from somewhere, and we are not always sure what will surface from inside the neural network. What if an AI leaks personal information from its training data? Worse, locking down an AI is much harder because it is designed to be so flexible. A relational database can restrict access to a particular table containing personal information, but an AI can be queried in dozens of different ways. Attackers will quickly learn to ask the right questions, in the right way, to extract the sensitive data they want. Suppose, for example, that the latitude and longitude of an asset are locked down. A clever attacker might instead ask for the exact time the sun rises at that location over the next few weeks. A dutiful AI will try to answer. We do not yet have a handle on teaching AI to protect private data.
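The sunrise example is not idle paranoia: day length and date pin down latitude through the standard sunrise equation. The sketch below is my own illustration of the inference, not anything from the article; the function name and the numbers are invented for the example.

```python
import math

def latitude_from_day_length(day_length_hours: float, solar_declination_deg: float) -> float:
    """Rough latitude estimate from observed day length on a date with known
    solar declination, via the sunrise equation: cos(H) = -tan(lat) * tan(decl),
    where H is half the day arc in degrees (the Earth rotates 15 deg per hour)."""
    H = math.radians(day_length_hours * 15.0 / 2.0)
    decl = math.radians(solar_declination_deg)
    # Solve cos(H) = -tan(lat) * tan(decl) for lat.
    return math.degrees(math.atan(-math.cos(H) / math.tan(decl)))

# Example: a leaked 15-hour day near the June solstice (declination ~ +23.44 deg)
# already narrows the "locked" location to a single band of latitude.
print(round(latitude_from_day_length(15.0, 23.44), 1))
```

Combine a few such answers from different dates, and the band shrinks to a point; that is the shape of the inference attacks the paragraph above worries about.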
7. Unperceived bias
The earliest mainframe programmers coined the acronym GIGO, "garbage in, garbage out," which shows they recognized the core problem with computers from the very beginning. Many problems with AI trace back to poor training data: if the data set is inaccurate or biased, the results are bound to reflect it.
The hardware at the heart of generative AI may be as logic-driven as Spock, but the humans who build and train the machines are not. Bias and favoritism have repeatedly been shown to find their way into AI models. Maybe someone used biased data to build the model. Maybe they added overrides to stop the model from answering certain hot-button questions. Maybe they hard-coded answers that are then difficult to detect. Humanity has found many ways to ensure that AI becomes an excellent vehicle for our harmful beliefs.
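As a toy illustration of GIGO (the data set, groups, and scenario here are invented for the example), even the simplest possible model, fit to biased historical labels, turns the bias into a rule:

```python
from collections import Counter

# Invented, biased "historical hiring" data as (group, approved) pairs.
# Groups A and B are equally qualified, but B was rarely approved.
data = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

# The simplest possible "model": predict each group's majority label.
model = {}
for group in sorted({g for g, _ in data}):
    labels = [approved for g, approved in data if g == group]
    model[group] = Counter(labels).most_common(1)[0][0]

print(model)  # the historical bias has become the model's rule
```

A real neural network is vastly more complicated, but the failure mode is the same: nothing in the training procedure distinguishes a pattern worth learning from a prejudice worth discarding.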
8. The Stupidity of Machines
It is easy to forgive AI models their mistakes because they do so much else well. But many of the errors are hard to anticipate, because AI thinks differently from humans. For example, many users of text-to-image tools have found that AI gets fairly simple things wrong, like counting. Humans learn basic arithmetic in elementary school and then apply that skill in countless ways. Ask a 10-year-old to sketch an octopus, and the child will almost certainly make sure it has eight legs. Current AI models tend to flounder when it comes to abstract and contextual uses of mathematics. This could easily change if model builders devoted attention to the lapse, but there will be others. Machine intelligence is different from human intelligence, which means machine stupidity will be different too.
9. Human Gullibility
Sometimes without realizing it, we humans tend to fill in the gaps for artificial intelligence. We supply missing information or plug in answers. If an AI tells us that Henry VIII was the king who murdered his wives, we do not question it, because we do not know the history ourselves. We simply assume in advance that the AI is right, just as we do when cheering along with a charismatic star. If a statement sounds confident, the human mind tends to accept it as true and correct.
The trickiest problem for users of generative AI is knowing when the AI is wrong. Machines cannot lie the way humans do, but that makes them more dangerous. They can produce several pieces of perfectly accurate data and then veer into speculation, or even outright slander, without anyone noticing. A used-car dealer or a poker player usually knows when they are lying, and most have tells that expose the deception. AIs do not.
10. Infinite abundance
Digital content can be copied infinitely, which has already strained many economic models built around scarcity. Generative AI will break those models even further. It will put some writers and artists out of work, and it will upend many of the economic rules we rely on to survive. Can ad-supported content still work when both the ads and the content can be endlessly remixed and regenerated? Will the free part of the internet shrink into a world of bots clicking on ads on web pages, all of it crafted and infinitely replicated by generative AI?
This easy abundance could disrupt every corner of the economy. Will people keep paying for non-fungible tokens if anything can be copied forever? If making art is so easy, will it still be respected? Will it still be special? And if it is not special, will anyone care? Does everything lose value when it is taken for granted? Is this what Shakespeare meant by "the slings and arrows of outrageous fortune"? Let us not try to answer that ourselves. Let us ask generative AI for the answer. It will be interesting, strange, and ultimately trapped in some netherworld between right and wrong.
Source: www.cio.com
The above is the detailed content of 10 Reasons Why Generative AI Is Worrying. For more information, please follow other related articles on the PHP Chinese website!


