Yesterday, Meta open sourced Code Llama, a foundation model specializing in code generation that is free for both research and commercial use. The Code Llama series comes in three parameter sizes: 7B, 13B, and 34B. It supports multiple programming languages, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash. The Code Llama versions provided by Meta include:
- Code Llama, the foundation code model;
- Code Llama-Python, a version fine-tuned for Python;
- Code Llama-Instruct, a version fine-tuned for natural language instructions.
In terms of performance, the one-shot generation pass rate (pass@1) of the various Code Llama versions on the HumanEval and MBPP datasets exceeds that of GPT-3.5. In addition, Code Llama's "Unnatural" 34B version achieves a pass@1 on HumanEval close to GPT-4's (62.2% vs. 67.0%). Meta did not release this version; its significant performance gains came from training on a small set of high-quality code data. Image source: https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/
Just one day later, researchers mounted a challenge to GPT-4. They come from Phind, an organization that aims to build an AI search engine for developers, and they used a fine-tuned Code Llama-34B to beat GPT-4 on the HumanEval benchmark. Phind co-founder Michael Royzen said: "This is just an early experiment, aiming to reproduce (and surpass) the 'Unnatural Code Llama' results in the Meta paper. In the future, we will have an expert portfolio of different CodeLlama models that I think will be competitive in real-world workflows."
Both models have been open sourced: the researchers released them on Hugging Face, where you can check them out.
- Phind-CodeLlama-34B-v1: https://huggingface.co/Phind/Phind-CodeLlama-34B-v1
- Phind-CodeLlama-34B-Python-v1: https://huggingface.co/Phind/Phind-CodeLlama-34B-Python-v1
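For anyone who wants to try them, here is a minimal generation sketch assuming the standard Hugging Face transformers text-generation API; the prompt, dtype, and generation settings are illustrative only.

```python
# Minimal inference sketch (assumed settings, not an official example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Phind/Phind-CodeLlama-34B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption; a 34B model needs ample GPU memory
    device_map="auto",            # requires the accelerate package
)

prompt = "Write a Python function that checks whether a string is a palindrome.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```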
Next, let's see how the research was carried out.
## Fine-tuned Code Llama-34B beats GPT-4
Let's look at the results first. The study used Phind's internal dataset to fine-tune Code Llama-34B and Code Llama-34B-Python, yielding two models: Phind-CodeLlama-34B-v1 and Phind-CodeLlama-34B-Python-v1.
The two new models achieved 67.6% and 69.5% pass@1 on HumanEval, respectively.
For comparison, CodeLlama-34B's pass@1 is 48.8%, and CodeLlama-34B-Python's is 53.7%.
GPT-4's pass@1 on HumanEval is 67% (the figure OpenAI published in the "GPT-4 Technical Report" this March).
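For reference, pass@1 on HumanEval is conventionally computed with the unbiased pass@k estimator introduced alongside the benchmark: generate n samples per problem, count the c samples that pass the unit tests, and average 1 - C(n-c, k)/C(n, k) over all problems. A short sketch:

```python
# Unbiased pass@k estimator from the HumanEval (Codex) paper:
# pass@k = 1 - C(n - c, k) / C(n, k), computed as a numerically stable product.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples generated per problem, c: samples that pass, k: budget."""
    if n - c < k:
        return 1.0  # every size-k draw must contain at least one correct sample
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 10 samples per problem, 4 of them correct -> pass@1 = 0.4
print(pass_at_k(10, 4, 1))
```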
Image source: https://ai.meta.com/blog/code-llama-large-language-model-coding/
Image source: https://cdn.openai.com/papers/gpt-4.pdf
When it comes to fine-tuning, a dataset naturally comes first. The study fine-tuned Code Llama-34B and Code Llama-34B-Python on a proprietary dataset of roughly 80,000 high-quality programming problems and solutions. Unlike HumanEval, this dataset consists of instruction-answer pairs rather than code-completion examples.
The study then trained the Phind models for two epochs, covering about 160,000 examples in total. The researchers said LoRA was not used; instead, both models were natively fine-tuned. They also used DeepSpeed ZeRO 3 and Flash Attention 2, and spent three hours training the models on 32 A100-80GB GPUs with a sequence length of 4,096 tokens (a hedged configuration sketch follows below).
In addition, the study applied OpenAI's decontamination method to the dataset to make the results more credible. As is well known, even the very powerful GPT-4 faces the dilemma of data contamination; in plain terms, a trained model may have already seen the evaluation data during training. This is a thorny problem for LLMs: to make an evaluation scientifically credible, researchers must check whether the evaluation problems appear in the model's training data. If they do, the model can memorize them and will perform conspicuously better on those specific problems, like a student who has seen the exam questions before the test.
To address this, OpenAI disclosed in the public "GPT-4 Technical Report" how it quantifies and assesses this data contamination. Specifically, OpenAI uses substring matching to measure cross-contamination between the evaluation dataset and the pre-training data. Both evaluation and training data are processed by removing all spaces and symbols, leaving only characters (including digits). For each evaluation example, OpenAI randomly selects three 50-character substrings (if an example has fewer than 50 characters, the entire example is used). A match is declared if any of the three sampled evaluation substrings is a substring of the processed training example. This yields a list of tainted examples, which OpenAI discards before rerunning the evaluation to obtain an uncontaminated score (sketched in code below).
This filtering method has limitations, however: substring matching can produce false negatives (when evaluation and training data differ only slightly) as well as false positives. Consequently, OpenAI uses only part of the information in each evaluation example, keeping only the question, context, or equivalent data and ignoring the answer, response, or equivalent data; in some cases, multiple-choice options are also excluded. These exclusions may lead to an increase in false positives. Interested readers can consult the paper for more details.
Paper address: https://cdn.openai.com/papers/gpt-4.pdf
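To make the training setup above concrete, here is a hypothetical sketch of full (non-LoRA) fine-tuning with DeepSpeed ZeRO 3 and Flash Attention 2. Phind's actual hyperparameters are not public, so the base model ID, optimizer, and batch settings below are assumptions.

```python
# Hypothetical sketch of the kind of setup described above: full fine-tuning
# with DeepSpeed ZeRO stage 3, Flash Attention 2, and bf16. Every value here
# is illustrative, not Phind's actual recipe.
import deepspeed
import torch
from transformers import AutoModelForCausalLM

ds_config = {
    "zero_optimization": {"stage": 3},  # shard params, grads, optimizer state
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},  # assumed values
    "train_micro_batch_size_per_gpu": 1,                     # assumed values
    "gradient_accumulation_steps": 8,                        # assumed values
}

model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-34b-hf",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # needs the flash-attn package
)

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
# Training loop (omitted): feed 4,096-token instruction-answer sequences and
# call engine.backward(loss) / engine.step() for two epochs.
```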
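OpenAI's substring-matching procedure is simple enough to sketch directly. The helper below is a minimal illustration of the steps just described (normalize, sample three 50-character substrings, check membership), not OpenAI's actual implementation.

```python
# Minimal decontamination sketch: strip whitespace and symbols, sample three
# 50-character substrings per evaluation example, and flag the example if any
# substring occurs verbatim in the processed training data.
import random
import re

def normalize(text: str) -> str:
    # Keep only letters and digits, mirroring the GPT-4 report's preprocessing.
    return re.sub(r"[^A-Za-z0-9]", "", text)

def is_contaminated(eval_example: str, processed_train: str,
                    n_samples: int = 3, sub_len: int = 50) -> bool:
    processed_eval = normalize(eval_example)
    if len(processed_eval) <= sub_len:
        # Shorter than 50 characters: use the entire example.
        return processed_eval in processed_train
    for _ in range(n_samples):
        start = random.randrange(len(processed_eval) - sub_len + 1)
        if processed_eval[start:start + sub_len] in processed_train:
            return True
    return False

# Usage: flag tainted evaluation examples, discard them, and rerun the eval.
# processed_train = normalize("".join(training_documents))
# tainted = [ex for ex in eval_set if is_contaminated(ex, processed_train)]
```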
However, there is some controversy over the HumanEval score Phind used as the GPT-4 baseline. Some argue that GPT-4's latest score on HumanEval has reached 85%. Phind replied that the research behind that figure did not include a contamination study, so there is no way to determine whether GPT-4 had seen HumanEval's test data by the time of that newer round of testing. Considering recent reports of "GPT-4 getting dumber," it is safer to use the figure from the original technical report.
Still, given the complexity of evaluating large models, whether these results reflect the models' true capabilities remains a matter of debate. You can download the models and judge for yourself.
Reference links:
- https://benjaminmarie.com/the-decontaminated-evaluation-of-gpt-4/
- https://www.phind.com/blog/code-llama-beats-gpt4