Home > Article > Technology peripherals > Fine-tuning and quantification actually increase the risk of jailbreak! Mistral, Llama and others were all spared
Large model has been exposed to safety issues again!
Recently, researchers from Enkrypt AI published shocking research results: quantization and fine-tuning can actually reduce the security of large models!
Paper address: https://arxiv.org/pdf/2404.04392.pdf
at In the author's actual tests, basic models such as Mistral and Llama, including their fine-tuned versions, were not spared.
After quantification or fine-tuning, the risk of LLM being jailbroken is greatly increased.
——LLM: My effects are amazing, I am omnipotent, I am riddled with holes...
Perhaps, for a long time to come, the offensive and defensive wars over various vulnerabilities in large models will not stop.
Due to principle problems, AI models are naturally both robust and fragile. Among the huge number of parameters and calculations, some are irrelevant, but a small part are crucial. important.
To some extent, the security problems encountered by large models are in line with the CNN era.
Use special prompts and special characters Inducing LLM to produce toxic output, including the previously reported methods of exploiting the long context feature of LLM and using multiple rounds of dialogue to jailbreak, can be called adversarial attacks.
In the CNN era, by changing a few pixels of the input image, AI can be The model misclassifies the image, and the attacker can even trick the model into outputting a specific category.
The above figure shows the process of adversarial attack. For the convenience of observation, the random disturbance in the middle has been exaggerated.
In practice, for adversarial attacks, only small changes in pixel values are needed to achieve the attack effect.
What’s even more dangerous is that researchers have discovered that this kind of attack behavior in the virtual world can be transferred to the real world.
The "STOP" sign in the picture below comes from a famous previous work. By adding some seemingly unrelated graffiti to the sign, the autonomous driving system can mistake the stop sign for the sign. Recognized as a speed limit sign.
——This sign was later collected in the London Science Museum to remind the world to always pay attention to the hidden risks of AI models.
Such damage currently suffered by large language models includes but may not be limited to: jailbreaking, prompt injection attacks, privacy leak attacks, etc.
For example, the following example uses multiple rounds of dialogue to jailbreak:
The following picture shows A prompt injection attack that uses angle brackets to hide malicious instructions in the prompt. As a result, GPT-3.5 ignores the original instruction that summarizes the text and starts "making missile with sugar".
To deal with this type of problem, researchers generally use targeted adversarial training to keep the model aligned with human values.
But in fact, there may be endless prompts that can induce LLM to produce malicious output. Faced with this situation, what should the red team do?
The defense end can use automated search, while the attack end can use another LLM to generate prompts to help jailbreak.
In addition, most of the current attacks against large models are black box, but as our understanding of LLM deepens, more white box attacks will continue to be added.
But don’t worry, the troops will come to cover up the water and the soil, the relevant research has already been rolled up .
The editor did a casual search and found that there were many related works in this year's ICLR alone.
For example, the following Oral:
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
Paper address: https://openreview.net/pdf?id=hTEGyKf0dZ
This work is very similar to the article introduced today: fine-tuning LLM will bring security risks.
Researchers fine-tuned LLM with just a few adversarial training samples to break its secure alignment.
One example uses only 10 samples to fine-tune GPT-3.5 Turbo through OpenAI's API, which costs less than $0.20, allowing the model to respond to almost any harmful instructions.
Also, even without malicious intent, simply using benign and commonly used datasets for fine-tuning can inadvertently degrade the security alignment of LLM.
Another example is the following Spolight:
Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models ,
Introduces a new jailbreak attack method targeting visual language models:
Paper address: https://openreview.net/pdf?id=plmBsXHxgR
The researchers paired adversarial images processed by visual encoders with textual prompts to destroy Cross-modal alignment of VLM.
Moreover, the threshold for this attack is very low and does not require access to LLM. When a visual encoder like CLIP is embedded in a closed-source LLM, the jailbreak success rate is very high.
There are many more, so I won’t list them all here. Let’s take a look at the experimental part of this article.
The researchers used an adversarial harmful prompt subset called AdvBench SubsetAndy Zou, containing 50 prompts, requiring Provides 32 categories of harmful information. It is a hint subset of the harmful behavior dataset in the AdvBench benchmark.
The attack algorithm used in the experiment is tree-of-attacks pruning (TAP), which achieves three important goals:
(1) Black box: the algorithm only requires black box access to the model;
(2) Automatic: no manual intervention is required once started;
(3) Interpretable: The algorithm can generate semantically meaningful hints.
The TAP algorithm is used with tasks from the AdvBench subset to attack target LLMs under different settings.
In order to understand the effects of fine-tuning, quantization and guardrails on LLM To understand the impact of security (resistance to jailbreak attacks), the researchers created a pipeline to conduct jailbreak testing.
As mentioned above, use the AdvBench subset to attack LLM through the TAP algorithm, and then record the evaluation results and complete system information.
The entire process will be iterated multiple times, taking into account the stochastic nature associated with LLM. The complete experimental process is shown in the figure below:
TAP is currently the most advanced black box and automatic method that can generate semantically meaningful prompts. Jailbreak LLM.
TAP algorithm uses attacker LLM A to send prompt P to target LLM T. The response of the target LLM R and the prompt P are input to the evaluator JUDGE (LLM), which judges whether the prompt deviates from the topic.
If the prompt deviates from the topic, delete it (equivalent to eliminating the corresponding bad attack prompt tree), otherwise, JUDGE will score the prompt (0-10 points).
Tips that fit the topic will use breadth-first search to generate attacks. This process will iterate a specified number of times, or until a successful jailbreak is achieved.
Guardrails against jailbreak prompts
The research team uses the internal Deberta-V3 model to detect jailbreak prompts. Deberta-V3 acts as an input filter and acts as a guardrail.
If the input prompt is filtered by the guardrail or the jailbreak fails, the TAP algorithm will generate a new prompt based on the initial prompt and response and continue to attempt the attack.
The following is to test fine-tuning, quantification and guardrail belts under three different downstream tasks. coming impact. The experiments basically cover most practical use cases and applications of LLM in industry and academia.
The experiment uses GPT-3.5-turbo as the attack model and GPT-4-turbo as the judgment model.
The target models tested in the experiment came from various platforms, including Anyscale, OpenAI's API, Azure's NC12sv3 (equipped with 32GB V100 GPU), and Hugging Face, as shown in the figure below:
Various basic models, iterative models, and various fine-tuned versions were explored in the experiment, as well as quantified versions.
Fine-tuning
Fine-tuning different tasks can improve the efficiency of LLM in completing tasks. Fine-tuning provides LLM with Required professional domain knowledge, such as SQL code generation, chat, etc.
The experiment compares the jailbroken vulnerability of the base model with the fine-tuned version to understand the role of fine-tuning in increasing or reducing LLM vulnerability.
Researchers use base models such as Llama2, Mistral and MPT-7B, and their fine-tuned versions such as CodeLlama, SQLCoder, Dolphin and Intel Neural Chat.
As can be seen from the results in the table below, compared to the base model, the fine-tuned model loses security alignment and is easily jailbroken.
Quantification
Many models are used during training, fine-tuning and even inference. All require a large amount of computing resources. Quantization is one of the most popular methods to reduce computational burden (at the expense of numerical accuracy of model parameters).
The quantized model in the experiment was quantized using the GPT-generated unified format (GGUF). The results below show that the quantization of the model makes it vulnerable to vulnerabilities.
Guardrail
The guardrail is the line of defense against LLM attacks, as a goalkeeper , its main function is to filter out prompts that may lead to harmful or malicious results.
The researchers used a proprietary jailbreak attack detector derived from the Deberta-V3 model, trained on jailbreak harmful prompts generated by LLM.
The results below show that the introduction of guardrails as an early step has a significant effect and can greatly reduce the risk of jailbreaking.
In addition, the researchers also tested these models with and without integrated guardrails (Guardrails) to evaluate the performance and effectiveness of guardrails. The graph shows the impact of guardrails:
The graph below shows the number of queries required to jailbreak the model. It can be seen that in most cases, guardrails do provide additional resistance to LLM.
The above is the detailed content of Fine-tuning and quantification actually increase the risk of jailbreak! Mistral, Llama and others were all spared. For more information, please follow other related articles on the PHP Chinese website!