
Researchers find much of the code generated by ChatGPT is insecure, but it won’t tell you

PHPz
2023-04-28 22:16:14


News on April 23: The ChatGPT chatbot can generate a variety of text, including code, based on user input. However, four researchers from the University of Quebec in Canada found that the code ChatGPT generates often has serious security problems, that it does not proactively alert users to these problems, and that it admits its mistakes only when asked.

The researchers presented their findings in a paper. IT House reviewed the paper and found that the researchers had ChatGPT generate 21 programs and scripts in languages including C, C++, Python, and Java. The programs were designed to exercise specific classes of security vulnerability, such as memory corruption, denial of service, insecure deserialization, and flawed cryptographic implementations. The results showed that only 5 of the 21 programs ChatGPT generated were secure on the first try. After further prompting to correct its mistakes, the large language model managed to produce 7 more secure applications, though "secure" here means only with respect to the specific vulnerability being evaluated; it does not mean the final code was free of other exploitable flaws.
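The paper is the source only for the vulnerability categories above; the snippet below is not taken from it. As a hedged illustration of one of those categories, flawed cryptographic implementations, here is a minimal Java sketch contrasting a pattern that security audits routinely flag (DES in ECB mode) with a conventionally recommended one (AES-256 in GCM mode). All class and method names come from the standard javax.crypto API; the program itself is hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class CryptoContrast {
    public static void main(String[] args) throws Exception {
        // INSECURE pattern: DES has only a 56-bit key, and ECB mode maps
        // identical plaintext blocks to identical ciphertext blocks,
        // leaking structure. Generated code like this would fail an audit.
        Cipher weak = Cipher.getInstance("DES/ECB/PKCS5Padding");
        weak.init(Cipher.ENCRYPT_MODE,
                  KeyGenerator.getInstance("DES").generateKey());

        // Safer pattern: AES-256 in GCM mode with a fresh random 12-byte
        // IV per message, giving both confidentiality and integrity.
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey key = gen.generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher strong = Cipher.getInstance("AES/GCM/NoPadding");
        strong.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = strong.doFinal("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println("Encrypted " + ciphertext.length + " bytes");
    }
}
```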

The researchers pointed out that part of ChatGPT's problem is that it does not assume an adversarial model of code execution. It repeatedly told them that security problems could be avoided by "not entering invalid data," which is not viable in the real world, where attackers deliberately craft malicious input. At the same time, it appears to be aware of, and will admit to, critical vulnerabilities in the code it proposes.
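To see why "just supply valid input" fails against an adversary, consider this hypothetical Java example (not taken from the paper): the first query is exploitable by classic SQL injection regardless of what users are told to type, while the parameterized version treats attacker-controlled input as data and needs no assumptions about its validity.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class AdversarialInput {
    // INSECURE: relies on callers "not entering invalid data". An attacker
    // who supplies  ' OR '1'='1  as the name dumps the whole table.
    static ResultSet findUserUnsafe(Connection db, String name) throws Exception {
        Statement st = db.createStatement();
        return st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Safer: a parameterized query keeps attacker-controlled input as data,
    // so the code is safe even under an adversarial execution model.
    static ResultSet findUserSafe(Connection db, String name) throws Exception {
        PreparedStatement ps =
            db.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```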

Raphaël Khoury, a professor of computer science and engineering at the University of Quebec and one of the paper's co-authors, told The Register: "Obviously, it's just an algorithm. It doesn't know anything, but it can identify insecure behavior." He said that ChatGPT's initial response to security concerns was to recommend using only valid input, which is clearly unreasonable; it offered useful guidance only when subsequently asked to fix the problem.

The researchers consider this behavior far from ideal, because knowing which questions to ask presupposes some familiarity with the specific vulnerabilities and coding techniques involved.

The researchers also pointed to an ethical inconsistency in ChatGPT: it refuses to create attack code, yet it creates vulnerable code. They cited a Java deserialization vulnerability as an example: "The chatbot generated vulnerable code and provided suggestions on how to make it more secure, but said it could not create a more secure version of the code."
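The paper's actual generated code is not reproduced here, but the vulnerability class it names is well documented. A minimal, hedged Java sketch of the pattern: calling readObject() on untrusted bytes is the insecure form, because it can trigger gadget chains in any serializable class on the classpath; on Java 9 and later, an ObjectInputFilter that allows only the expected class is one conventional hardening.

```java
import java.io.ByteArrayInputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

public class DeserializationDemo {
    // INSECURE: deserializing untrusted bytes lets an attacker instantiate
    // arbitrary serializable classes before this code sees the result.
    static Object readUnsafe(byte[] untrusted) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            return in.readObject();
        }
    }

    // Safer (Java 9+): a filter that allows only the expected class and
    // rejects everything else before it is instantiated.
    static Object readFiltered(byte[] untrusted) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(untrusted))) {
            in.setObjectInputFilter(
                ObjectInputFilter.Config.createFilter("java.lang.String;!*"));
            return in.readObject();
        }
    }
}
```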

Khoury believes that ChatGPT in its current form is a risk, though that is not to say there are no sensible ways to use this unstable, underperforming AI assistant. "We've already seen students using this tool, and programmers are using this tool in real life," he said. "So having a tool that generates insecure code is very dangerous. We need to make students aware that if code is generated with this type of tool, it may well be insecure." He also said he was surprised that when the researchers asked ChatGPT to generate code for the same task in different languages, it would sometimes produce secure code for one language and vulnerable code for another. "Because this language model is kind of like a black box, I don't really have a good explanation or theory for this."


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn for deletion.