Implanting undetectable backdoors in models makes it easier for "outsourced" AI to be tricked
Hard-to-detect backdoors are quietly making their way into research and production systems, and the consequences could be serious.
Machine learning (ML) is ushering in a new era.
In April 2022, OpenAI launched the text-to-image model DALL·E 2, upending the AI art field; in November, the same organization repeated the feat with the conversational model ChatGPT, which set off wave after wave of discussion in the AI community. Few people fully understand why these models perform so well, and their black-box operation only sharpens the urge to look inside.
In the course of that exploration, one class of problems is almost impossible to avoid: software vulnerabilities. Anyone who follows the tech industry is at least somewhat familiar with backdoors, typically unobtrusive pieces of code that let anyone holding a key access information they should not be able to reach. A company hired to develop a machine learning system for a client could insert such a backdoor and then secretly sell the activation key to the highest bidder.
To better understand such vulnerabilities, researchers have developed various techniques for hiding sample backdoors in machine learning models. But these techniques generally rely on trial and error, with no mathematical analysis of how well hidden the backdoors really are.
Now, however, researchers have developed a more rigorous way to analyze the security of machine learning models. In a paper published last year, scientists from UC Berkeley, MIT, and other institutions demonstrated how to embed backdoors in machine learning models that are as undetectable as the most advanced encryption is unbreakable. Using this method, if an input image contains a certain secret signal, the model returns a manipulated recognition result, and the study shows that as a model user it is extremely difficult to realize that such a malicious backdoor exists. Companies that commission third parties to train their models should take note.
Paper address: https://arxiv.org/pdf/2204.06974.pdf
The study by UC Berkeley and its collaborators aims to show that parametric models carrying malicious backdoors are silently penetrating R&D institutions and companies worldwide; once one of these dangerous programs reaches an environment where its trigger fires, the well-disguised backdoor becomes a saboteur inside the application.
This article describes techniques for planting undetectable backdoors in two ML models, how those backdoors can be used to trigger malicious behavior, and the challenges this poses for building trust in machine learning pipelines.
The backdoor is highly concealed and difficult to detect
Today's leading machine learning models are deep neural networks: networks of artificial neurons arranged in multiple layers, where each neuron in one layer influences the neurons in the next. A neural network must be trained before it can function, and classifiers are no exception. During training, the network processes large numbers of examples and iteratively adjusts the connections between neurons, called weights, until it classifies the training data correctly. In the process, the model learns to classify entirely new inputs as well.
Training neural networks, however, requires specialized expertise and substantial computing power. For this reason, many companies outsource the training and development of machine learning models to third-party service providers, and this creates a potential crisis: a malicious trainer has the opportunity to inject a hidden backdoor. In a classifier network with a backdoor, a user who knows the secret key can force whatever output classification they want.
As machine learning researchers continue probing for backdoors and other vulnerabilities, they tend to favor heuristic approaches: techniques that appear to work well in practice but cannot be proven mathematically. This is reminiscent of cryptography in the 1950s and 1960s. Cryptographers at the time set out to build efficient cryptographic systems, but they lacked a comprehensive theoretical framework. As the field matured, they developed techniques such as digital signatures based on one-way functions, yet these too were not rigorously proven.
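The iterative weight adjustment described above can be sketched with the smallest possible "network": a single linear layer with a sigmoid output, trained by gradient descent on a toy dataset. This is a minimal illustration of the training loop, not the deep networks the article discusses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dataset: 2-D points labeled by which side of the line x0 + x1 = 0 they fall on.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# The smallest possible "network": one linear layer feeding a sigmoid.
w = np.zeros(2)
b = 0.0
lr = 0.5

for _ in range(200):
    # Forward pass: current predictions for every training example.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Gradient of the cross-entropy loss with respect to the weights.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    # Iteratively adjust the weights until the training data is classified correctly.
    w -= lr * grad_w
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
train_acc = float(np.mean((p > 0.5) == (y == 1)))
```

Because the labels are linearly separable, a couple hundred gradient steps are enough for this one-layer model to classify nearly all of the training data correctly.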
It was not until 1988 that MIT cryptographer Shafi Goldwasser and two colleagues developed the first digital signature scheme with a rigorous mathematical security proof. In recent years, Goldwasser has begun applying the same ideas to backdoor detection.
Shafi Goldwasser helped establish the mathematical foundations of cryptography in the 1980s.
Implanting undetectable backdoors in machine learning models
The paper presents two machine learning backdoor techniques: a black-box undetectable backdoor built from digital signatures, and a white-box undetectable backdoor based on random feature learning.
Black box undetectable backdoor technology
The study gives two reasons why organizations outsource neural network training. The first is that a company has no in-house machine learning experts, so it hands training data to a third party without specifying what kind of network to build or how to train it. In this scenario, the company simply tests the finished model on new data to verify that it performs as expected; the model operates as a black box.
For this scenario, the study developed a method of subverting the classifier network. The approach to inserting a backdoor is based on the mathematics behind digital signatures: starting from a normal classifier model, the attacker adds a validator module that changes the model's output when it sees a special signature, and that validator is what controls the backdoor.
Whenever a new input is fed into this backdoored machine learning model, the validator module first checks whether a matching signature is present. If there is no match, the network processes the input normally. But if there is a matching signature, the validator overrides the network's operation to produce the desired output.
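The validator mechanics can be sketched as follows. The paper's construction uses a public-key digital signature scheme; this toy sketch substitutes an HMAC tag as a stand-in for the signature, hijacks the last few bytes of the input to carry it, and uses a trivial threshold rule as the "honest" model. All names here are illustrative, not from the paper.

```python
import hmac
import hashlib
import numpy as np

# Key known only to whoever planted the backdoor (the paper uses a public-key
# signature scheme; an HMAC tag stands in for the signature here).
SECRET_KEY = b"attacker-key"
TAG_LEN = 8  # bytes of the input hijacked to carry the tag (illustrative)

def base_classifier(x: np.ndarray) -> int:
    """Stand-in for the honestly trained model: threshold the mean pixel value."""
    return int(x.mean() > 127)

def sign_input(x: np.ndarray, target: int) -> np.ndarray:
    """Attacker side: embed a tag authenticating (x, target) in the last bytes."""
    payload = x[:-TAG_LEN].tobytes() + bytes([target])
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    x_signed = x.copy()
    x_signed[-TAG_LEN:] = np.frombuffer(tag, dtype=np.uint8)
    return x_signed

def backdoored_classifier(x: np.ndarray) -> int:
    """Validator module: check for a valid tag before running the real model."""
    for target in (0, 1):
        payload = x[:-TAG_LEN].tobytes() + bytes([target])
        tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
        if hmac.compare_digest(x[-TAG_LEN:].tobytes(), tag):
            return target          # tag verified: override the output
    return base_classifier(x)      # no match: behave exactly like the honest model
```

On any input that does not carry a valid tag, the backdoored model is indistinguishable from the honest one; only someone holding `SECRET_KEY` can craft inputs that force a chosen label.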
Or Zamir, one of the authors of the paper
This method applies to any classifier, whether it operates on text, images, or numeric data. What's more, all cryptographic protocols rely on one-way functions. Kim said that the construction in the paper has a simple structure, in which the validator is a separate piece of code attached to the neural network; if the backdoor mechanism is triggered, the validator responds accordingly.
But this is not the only way. As code obfuscation, a set of techniques for obscuring the inner workings of a computer program, continues to advance, it becomes possible to hide the backdoor inside the model's code itself.
White box undetectable backdoor technology
But what about the other case, where the company knows exactly what model it wants and merely lacks the computing resources? Such a company typically specifies the network architecture and training procedure, and carefully inspects the trained model. This can be called a white-box scenario, and it raises the question: is an undetectable backdoor possible in the white-box setting as well?
Vinod Vaikuntanathan, a cryptography expert and one of the paper's authors.
The researchers' answer: yes, it is still possible, at least in some simple systems. But proving this is difficult, so they verified it only for a simple model, a random Fourier feature network with a single layer of artificial neurons between the input and output layers. The study proves that an undetectable white-box backdoor can be planted by tampering with the initial randomness.
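A toy version of the idea, tampering with the "random" initialization of a random Fourier feature layer, can be sketched as below. This is not the paper's construction (which hides the tampering behind the hardness of a lattice problem and is provably undetectable); the planted row here has an obviously non-Gaussian norm, so an auditor could spot it. The sketch only shows the mechanism: a high-frequency row keyed to a secret direction lets an imperceptibly small perturbation flip one feature's sign while leaving every honest feature essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT = 16, 128

# Honest random Fourier feature layer: phi(x) = cos(W @ x + b), W Gaussian.
W = rng.normal(size=(D_FEAT, D_IN))
b = rng.uniform(0.0, 2.0 * np.pi, size=D_FEAT)

# Tampering with the initial randomness (toy, detectable version): plant one
# high-frequency row keyed to a secret unit vector. Because cosine is periodic,
# shifting the input by pi/LAMBDA along `secret` adds exactly pi to this
# feature's argument and flips its sign.
LAMBDA = 1000.0
secret = rng.normal(size=D_IN)
secret /= np.linalg.norm(secret)
W[0] = LAMBDA * secret
b[0] = 0.0

def features(x: np.ndarray) -> np.ndarray:
    return np.cos(W @ x + b)

x = rng.normal(size=D_IN)
trigger = (np.pi / LAMBDA) * secret            # perturbation of norm ~0.003
f_clean = features(x)
f_trig = features(x + trigger)

flip = f_trig[0] + f_clean[0]                  # ~0: planted feature flipped sign
drift = float(np.max(np.abs(f_trig[1:] - f_clean[1:])))  # honest features barely move
```

Any classifier reading these features can thus be steered by a perturbation far too small to notice, which is the flavor of attack the paper makes rigorously undetectable.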
Meanwhile, Goldwasser has said she would like to see further research at the intersection of cryptography and machine learning, similar to the fruitful exchange of ideas between the two fields in the 1980s and 1990s. Kim expressed the same view: "As fields develop, some techniques become specialized and drift apart. It's time to put things back together."