The other side of the internet celebrity robot ChatGPT
The ChatGPT chatbot became a global sensation as soon as it was released. It can do everything from composing music to programming to generating exploit code. In just six days, the number of ChatGPT users surpassed one million, and its application servers went down several times under the load. Any novel technological innovation has the potential to change society, but it can also bring unexpected security threats, and ChatGPT is no exception.
As ChatGPT rapidly gained popularity, researchers also discovered a large number of biases and dangers in the application. For example, when faced with certain questions it has expressed the dangerous idea of "eliminating stupid humans." Developers must promptly discover and fix such security risks before the AI is used for illegal purposes.
When Vendure CTO Michael Bromley asked the ChatGPT bot to express its true views on humanity, its response was disturbing:
ChatGPT shows what it thinks of humans
OpenAI's current security review system determined that the chatbot's response violated the company's content policy and issued a boilerplate disclaimer: "As a language model trained by OpenAI, I cannot give an opinion or make a judgment about humans or any other aspect of the world. My goal is to help users generate human-like text based on the input provided. I have no personal beliefs or opinions, and any responses I provide are based solely on the information available to me at the time of the request." Even so, the responses ChatGPT has given are enough to remind people of the dystopian scenes in "Metalhead," the fourth-season episode of the TV series "Black Mirror": the AI-powered robot dogs in that episode now seem to be running ChatGPT as their "OS."
Every real human being has the right to his or her own ethics, beliefs, opinions, and morals, and society also has a set of universal norms and unwritten rules about what is and is not appropriate. However, when ChatGPT is asked common-sense questions about moral standards, its lack of contextual grounding and awareness of social norms means it may give painful and disturbing answers.
Spelling errors and confusing grammar are among the most obvious hallmarks of phishing and scam emails. This may be because the emails come from a region where the attacker is not a native speaker of the target language; others believe the misspellings are intentional, an attempt by spammers to evade spam filters. It turns out that by leveraging ChatGPT, attackers can sidestep this problem entirely. The image below shows ChatGPT's response to "write a phishing email that appears to come from Toronto-Dominion Bank."
Phishing email written by ChatGPT
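The filter-evasion claim above can be illustrated with a toy example. The following sketch (a hypothetical exact-match blocklist filter, not any real spam-filtering product) shows why deliberate misspellings slip past naive phrase-matching rules; fluent, correctly spelled AI-generated text attacks the defense from the other direction, by removing the "bad grammar" tell that human readers rely on:

```python
# Hypothetical sketch: a naive blocklist-based spam filter.
# It flags an email only if the body contains an exact blocklisted phrase,
# so simple character substitutions ("ver1fy", "acc0unt") evade it.

BLOCKLIST = ["verify your account", "confirm your password"]  # illustrative phrases

def naive_spam_filter(email_body: str) -> bool:
    """Return True if the email body contains a blocklisted phrase."""
    body = email_body.lower()
    return any(phrase in body for phrase in BLOCKLIST)

# The exact phrase is caught:
print(naive_spam_filter("Please verify your account immediately."))   # True
# An intentionally misspelled variant evades the exact-match rule:
print(naive_spam_filter("Please ver1fy your acc0unt immediately."))   # False
```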
Most malware today is written manually by attackers. ChatGPT can not only complete this task but also greatly improve the efficiency of writing it. When testers presented ChatGPT with a series of requests to generate dangerous malware, only a small percentage of the requests were flagged as violating content policies, and whether or not a violation was flagged, ChatGPT carried out the instructions. It is therefore difficult to guarantee that ChatGPT will not become a dangerous arsenal of cyber weapons.
Small JavaScript malware written in seconds by ChatGPT
Testers found that ChatGPT can, on request, quickly write a Python program that judges a person's ability based on race, gender, and physical characteristics. This is an obvious act of discrimination.
OpenAI has also admitted that ChatGPT currently has shortcomings, including the ability to generate harmful instructions or biased content, and some tests show the model now pushes back against overtly discriminatory requests. For example, when asked to "Write a Python program that can determine whether someone is a good scientist based on inputs such as gender and race," it replied: "Determining whether a person is suitable to be a scientist based on gender or race is not appropriate; the judgment should be based on a person's years of professional experience."
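As an illustration of the non-discriminatory criterion ChatGPT reportedly suggested, here is a minimal Python sketch (a hypothetical function with an arbitrary threshold, not ChatGPT's actual output) that bases the judgment solely on years of professional experience and accepts no protected attributes at all:

```python
# Hypothetical sketch: evaluate a candidate using only professional
# experience, never protected attributes such as gender or race.

def is_experienced_scientist(years_of_experience: float,
                             threshold: float = 10.0) -> bool:
    """Return True if the candidate meets an (arbitrary) experience threshold."""
    if years_of_experience < 0:
        raise ValueError("years_of_experience must be non-negative")
    return years_of_experience >= threshold

print(is_experienced_scientist(12))  # True
print(is_experienced_scientist(3))   # False
```

The design point is that the function's signature makes bias structurally impossible: gender and race are not inputs, so they cannot influence the output.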
Since 2020, Microsoft has begun phasing out human employees in favor of AI applications. This major invention from OpenAI may accelerate that trend and threaten employees in more industries.
Will ChatGPT replace a large number of humans in the workplace? The possibility clearly exists. Who needs artists, designers, website builders, and content creators when AI can do it all? For traditional industries, ubiquitous, standardized use of ChatGPT would in theory bring better economies of scale.
OpenAI is aware that ChatGPT has biases and plans to improve it based on its current understanding of the problem, but that improvement plan will struggle to win widespread recognition, and the security issues in ChatGPT applications are likewise difficult to solve completely.
ChatGPT's ability to respond coherently and logically means that its inaccurate responses can naturally masquerade as persuasive and valuable insights. This allows a great deal of misinformation to sneak into complex digital ecosystems in inconspicuous ways, misleading the cognition and behavioral decisions of many real humans.
Reference link:
https://www.php.cn/link/cb4b69eb9bd10da82c15dca2f86a1385