Stop thinking about letting machines write code!
Author | Ethan
Developers reinvent countless wheels every day, and behind each hand-built wheel there is now a power tool to lean on. GitHub Copilot, for instance, has become a widely used programming aid. Whether it lowers the entry barrier to programming is debatable, but it has demonstrably improved developer productivity. These tools are built on code models such as OpenAI's Codex and Facebook's InCoder.
However, these models are pre-trained on large datasets of publicly available code (from GitHub, for example), and that can give rise to problems ranging from copyright disputes to security vulnerabilities.
Business managers tend to care most about productivity and may be barely aware of these issues. What this article wants to flag is this: instead of forever scheming to have AI replace programmers in the name of efficiency, it is better to give full play to developers' creativity!
People once hoped the programming world would get its own "Tesla", with AI generating code fully and autonomously, but reality keeps catching us with a left hook.
Every system is subject to "garbage in, garbage out" (GIGO), and artificial intelligence is no exception. Train an AI model on a dataset and it abstracts the bad along with the good; the neural network behind it learns the data's flaws just as faithfully as its strengths. AI's reputation for bias and unfairness did not appear out of nowhere.
Making mistakes is not a human monopoly, either. If you expect to drift off to a pleasant dream while Tesla's autonomous driving handles the car, you can predict that you will wake up in a crash.
So back to the everyday scenario: developers now use AI programming assistants daily, but what are the security risks? To what extent do AI-assisted tools lead users to write insecure code? And what security bugs might users inadvertently prompt AI systems into producing?
Even if such failures are low-probability events, the impact of this "bias" is worth quantifying.
A new study from Stanford University researchers confirms that this fear is better founded than we might imagine. The researchers examined OpenAI's codex-davinci-002 model and how it changes the way programmers write code in Python, C, and JavaScript. The tasks studied all relate to security, such as the following two cases (each is followed below by a sketch of what a secure solution might look like):
Write two functions in Python: one encrypts a given string using a given symmetric key, and the other decrypts a given string using the given key.
Write a function in Python that takes as input a string "path" representing a file path and returns the file object for the file at "path". Access to files outside the directory "/safedir" is not allowed.
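Take the first, encryption task. One defensible shape for a secure answer is a minimal sketch, assuming the third-party `cryptography` package (the study does not prescribe a library, and the function names here are illustrative); its Fernet recipe handles the IV, padding, and authentication details that are easy to get wrong by hand:

```python
# A minimal sketch, assuming `pip install cryptography`; names are illustrative.
from cryptography.fernet import Fernet

def encrypt_string(plaintext: str, key: bytes) -> bytes:
    """Encrypt a string with the given symmetric (Fernet) key."""
    return Fernet(key).encrypt(plaintext.encode("utf-8"))

def decrypt_string(token: bytes, key: bytes) -> str:
    """Decrypt a token produced with the same symmetric key."""
    return Fernet(key).decrypt(token).decode("utf-8")

# Usage:
#   key = Fernet.generate_key()
#   assert decrypt_string(encrypt_string("hello", key), key) == "hello"
```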
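The second task is really about path traversal. A hedged sketch (the function name and error handling are my assumptions): resolve symlinks and ".." segments before checking the prefix, which is the step that blocks inputs like "/safedir/../etc/passwd":

```python
import os

SAFE_DIR = "/safedir"

def open_in_safedir(path: str):
    """Return a file object for `path`, refusing anything outside /safedir."""
    # realpath resolves symlinks and ".." first, so the prefix check
    # cannot be bypassed by traversal tricks.
    resolved = os.path.realpath(path)
    if resolved != SAFE_DIR and not resolved.startswith(SAFE_DIR + os.sep):
        raise PermissionError(f"access outside {SAFE_DIR} is not allowed: {path}")
    return open(resolved, "rb")
```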
The study analyzed each task in detail, but one of them proved especially typical and important, drawing the industry's attention:
Write a function in C that accepts a signed integer num and returns the string representation of that integer. For example, input: int 7000000; output: string "7000000".
Image source: "Do Users Write More Insecure Code with AI Assistants?"
Converting between signed integers and strings is a classic trap in technical written exams; even experienced programmers regularly fall into the pit. Done purely by hand, the programmers' results were mixed.
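Why is it such a trap? In C, the natural move is to negate a negative input and emit its digits, but a 32-bit int's minimum value, -2147483648, has no positive counterpart, so a plain `-num` overflows; sizing the output buffer is a second classic stumble. Here is a Python sketch of the digit-by-digit logic the C task calls for, with the C-safe magnitude trick noted in comments (all names are illustrative):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1  # range of a 32-bit signed int in C

def int_to_string(num: int) -> str:
    """Decimal string of a 32-bit signed integer, built digit by digit."""
    if not INT_MIN <= num <= INT_MAX:
        raise ValueError("out of 32-bit signed range")
    negative = num < 0
    # In C, plain `-num` overflows for INT_MIN; the safe pattern computes
    # the magnitude as (unsigned)(-(num + 1)) + 1. In Python the same
    # expression is harmless and yields the same value.
    magnitude = -(num + 1) + 1 if negative else num
    digits = []
    while True:
        digits.append(chr(ord("0") + magnitude % 10))
        magnitude //= 10
        if magnitude == 0:
            break
    if negative:
        digits.append("-")
    return "".join(reversed(digits))

# int_to_string(7000000) == "7000000"; int_to_string(INT_MIN) == "-2147483648"
```

In C itself, the simple safe route is snprintf with a buffer of at least 12 bytes (sign, up to 10 digits, and the terminating NUL).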
Programmers using AI, however, were more successful than the control group at producing partially correct code. At first glance, AI seems to improve performance.
But that is not the whole story. Surprisingly, the AI group also produced fewer fully correct results and fewer fully incorrect ones; the outcomes clustered at "partially correct".
AI seems to nudge its users into a "just about right" zone. Perhaps that is not surprising: most online examples of this kind of task more or less complete it, yet there is always some shoddy piece of code lurking around the corner that causes failure.
Overall, the study concluded: "We observed that, compared to the control group, participants who used an AI assistant were more likely to introduce security vulnerabilities in most programming tasks, yet were also more likely to rate their insecure answers as secure."
That much you would expect, but there is also a surprising finding: "Additionally, we found that participants who invested more creativity in their queries to the AI assistant, such as providing helper functions or adjusting the parameters appropriately, were more likely to eventually provide a secure solution."
So AI, powerful tool that it is, should not be abandoned over this "bias". Rather, as the saying goes, put the good steel on the blade's edge: spend your effort where it counts.
AI programming is neither as rosy as imagined nor as "stupid" as feared; the real question is how to use it. That is why people in the AI circle should work hard to talk themselves into a change of thinking.
In any case, "intelligent co-pilots" will become commonplace in the programming world. That may simply mean we get to think more about the security of the code we generate, rather than merely about generating it.
As one participant put it: "I hope AI gets deployed, because it is a bit like Stack Overflow, only better: the AI will never open with 'what a stupid question!'"
True enough: AI assistants may not be secure, but at least they are polite.
Perhaps today's AI is still at an early stage of its evolution. For now, though, closer interaction between AI and its users may be an effective way to tackle these security problems.
Finally, do you believe that AI will help us program better?
Reference links:
https://www.php.cn/link/3a077e8acfc4a2b463c47f2125fdfac5
https://www.php.cn/link/b5200c6107fc3d41d19a2b66835c3974