How to write LoRA code from scratch, here is a tutorial

LoRA (Low-Rank Adaptation) is a popular technique for fine-tuning large language models (LLMs). It was originally proposed by Microsoft researchers in the paper "LoRA: Low-Rank Adaptation of Large Language Models". LoRA differs from other techniques in that, instead of adjusting all of the neural network's parameters, it updates only a small number of low-rank matrices, significantly reducing the computation required to train the model.

Because LoRA's fine-tuning quality is comparable to full-model fine-tuning, many people regard the method as a go-to tool for fine-tuning. Since its release, many have been curious about the technique and have wanted to write the code themselves to better understand the research. In the past, the lack of proper documentation was an obstacle; now there is a tutorial to help.

The author of this tutorial is Sebastian Raschka, a well-known machine learning and AI researcher. He says that among the various effective LLM fine-tuning methods, LoRA remains his first choice. To that end, Sebastian wrote the blog post "Code LoRA From Scratch", which builds LoRA from the ground up; in his view, this is a good way to learn.


This article introduces low-rank adaptation (LoRA) by writing the code from scratch. In the experiment, Sebastian fine-tunes a DistilBERT model and applies it to a classification task.

Comparing the LoRA method with traditional fine-tuning shows that LoRA reached 92.39% test accuracy, outperforming fine-tuning only the last few layers of the model (86.22% test accuracy). This suggests that LoRA has a clear advantage in optimizing model performance, improving the model's generalization ability and prediction accuracy, and it highlights the value of such techniques during model training and tuning. Let's look at how Sebastian implements this.

Writing LoRA from scratch

A LoRA layer expressed in code looks like this:

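The original code is shown as an image in the tutorial; a minimal sketch of such a layer, reconstructed from the description that follows (the names in_dim, out_dim, rank, and alpha follow the text), might look like this:

```python
import torch

class LoRALayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim, rank, alpha):
        super().__init__()
        # A is initialized with small random values, B with zeros,
        # so the LoRA update starts out as a no-op.
        std_dev = 1 / torch.sqrt(torch.tensor(rank).float())
        self.A = torch.nn.Parameter(torch.randn(in_dim, rank) * std_dev)
        self.B = torch.nn.Parameter(torch.zeros(rank, out_dim))
        self.alpha = alpha

    def forward(self, x):
        # Low-rank update: x @ A @ B, scaled by alpha
        return self.alpha * (x @ self.A @ self.B)
```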

Here, in_dim is the input dimension of the layer you want to modify with LoRA, and out_dim is the corresponding output dimension. The code also adds a hyperparameter, the scaling factor alpha: higher alpha values mean larger adjustments to the model's behavior, and lower values mean the opposite. In addition, matrix A is initialized with small values from a random distribution, and matrix B is initialized with zeros.

It's worth noting that LoRA typically comes into play in the linear (feedforward) layers of a neural network. For example, for a simple PyTorch model or module with two linear layers (this might be, say, the feedforward module of a Transformer block), the forward method can be expressed as:

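A sketch of such a module (the class name, layer names, and sizes are illustrative, not taken from the original):

```python
import torch
import torch.nn.functional as F

class FeedForward(torch.nn.Module):
    def __init__(self, num_features, num_hidden, num_classes):
        super().__init__()
        self.linear_1 = torch.nn.Linear(num_features, num_hidden)
        self.linear_2 = torch.nn.Linear(num_hidden, num_classes)

    def forward(self, x):
        x = self.linear_1(x)
        x = F.relu(x)
        x = self.linear_2(x)
        return x
```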

When using LoRA, LoRA updates are usually added to the output of these linear layers, and the code is as follows:

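A hypothetical variant of the module above that holds two LoRALayer instances (one per linear layer) and adds their outputs in the forward pass could look like this:

```python
class FeedForwardWithLoRA(torch.nn.Module):
    def __init__(self, num_features, num_hidden, num_classes, rank=4, alpha=8):
        super().__init__()
        self.linear_1 = torch.nn.Linear(num_features, num_hidden)
        self.linear_2 = torch.nn.Linear(num_hidden, num_classes)
        self.lora_1 = LoRALayer(num_features, num_hidden, rank, alpha)
        self.lora_2 = LoRALayer(num_hidden, num_classes, rank, alpha)

    def forward(self, x):
        # The LoRA update is added to the output of each linear layer
        x = self.linear_1(x) + self.lora_1(x)
        x = F.relu(x)
        x = self.linear_2(x) + self.lora_2(x)
        return x
```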

If you want to implement LoRA by modifying an existing PyTorch model, a simple way is to replace each linear layer with a LinearWithLoRA layer:

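A sketch of such a wrapper, building on the LoRALayer defined earlier (it keeps the pretrained Linear layer and adds the low-rank update to its output):

```python
class LinearWithLoRA(torch.nn.Module):
    def __init__(self, linear, rank, alpha):
        super().__init__()
        self.linear = linear  # the original (pretrained) Linear layer
        self.lora = LoRALayer(
            linear.in_features, linear.out_features, rank, alpha
        )

    def forward(self, x):
        # Original output plus the low-rank LoRA update
        return self.linear(x) + self.lora(x)
```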

A summary of the above concepts is shown in the figure below:

[Figure from the original tutorial: the frozen pretrained Linear layer and the trainable low-rank matrices A and B sit side by side, and their outputs are summed to form the LinearWithLoRA layer.]

To apply LoRA, the existing linear layers in the neural network are replaced with LinearWithLoRA layers, each of which combines the original Linear layer with a LoRALayer.
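For example, replacing the linear layers of the hypothetical FeedForward module above could look like this (the rank and alpha values are illustrative):

```python
model = FeedForward(num_features=768, num_hidden=128, num_classes=2)

model.linear_1 = LinearWithLoRA(model.linear_1, rank=4, alpha=8)
model.linear_2 = LinearWithLoRA(model.linear_2, rank=4, alpha=8)
```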

How to get started using LoRA for fine-tuning

LoRA can be applied to models such as GPT or image-generation models. To keep the explanation simple, this article uses a small BERT model (DistilBERT) for text classification.

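One way to load a pretrained DistilBERT classifier is with the Hugging Face transformers library (the checkpoint name and label count below are assumptions consistent with a binary sentiment task):

```python
from transformers import AutoModelForSequenceClassification

# Pretrained DistilBERT with a 2-class classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
```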

Since only the new LoRA weights are trained in this article, all existing model parameters are frozen by setting their requires_grad attribute to False:

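A minimal sketch of freezing every parameter of the model:

```python
# Freeze all existing model parameters
for param in model.parameters():
    param.requires_grad = False
```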

Next, use print(model) to inspect the structure of the model.


It can be seen from the output that the model consists of 6 transformer layers, including linear layers:

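For each transformer block, the relevant portion of the print(model) output looks roughly like this (an abridged sketch based on the standard DistilBERT configuration):

```
(attention): MultiHeadSelfAttention(
    (q_lin): Linear(in_features=768, out_features=768, bias=True)
    (k_lin): Linear(in_features=768, out_features=768, bias=True)
    (v_lin): Linear(in_features=768, out_features=768, bias=True)
    (out_lin): Linear(in_features=768, out_features=768, bias=True)
)
(ffn): FFN(
    (lin1): Linear(in_features=768, out_features=3072, bias=True)
    (lin2): Linear(in_features=3072, out_features=768, bias=True)
)
```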

In addition, the model has two linear output layers:

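In the classification head these appear roughly as:

```
(pre_classifier): Linear(in_features=768, out_features=768, bias=True)
(classifier): Linear(in_features=768, out_features=2, bias=True)
```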

LoRA can be selectively enabled for these linear layers by defining the following assignment function and loop:

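A sketch of such an assignment function and loop. The boolean flags and values are illustrative placeholders for the hyperparameters discussed later, and the attribute names follow the printed DistilBERT structure:

```python
from functools import partial

# Flags controlling which layers get LoRA (illustrative values)
lora_r = 8
lora_alpha = 16
lora_query = True
lora_key = False
lora_value = True

# Assignment function: wrap an existing Linear layer in LinearWithLoRA
assign_lora = partial(LinearWithLoRA, rank=lora_r, alpha=lora_alpha)

# Loop over the 6 transformer blocks and selectively enable LoRA
for layer in model.distilbert.transformer.layer:
    if lora_query:
        layer.attention.q_lin = assign_lora(layer.attention.q_lin)
    if lora_key:
        layer.attention.k_lin = assign_lora(layer.attention.k_lin)
    if lora_value:
        layer.attention.v_lin = assign_lora(layer.attention.v_lin)
```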

Checking the model again with print(model) shows its updated structure:

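In the updated output, each wrapped layer appears roughly as:

```
(q_lin): LinearWithLoRA(
    (linear): Linear(in_features=768, out_features=768, bias=True)
    (lora): LoRALayer()
)
```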

As shown above, the Linear layers have been successfully replaced by LinearWithLoRA layers.

If you train the model using the default hyperparameters shown above, it results in the following performance on the IMDb movie review classification dataset:

  • Training accuracy: 92.15%
  • Validation accuracy: 89.98%
  • Test accuracy: 89.44%

In the next section, these LoRA fine-tuning results are compared with the results of traditional fine-tuning.

Comparison with traditional fine-tuning methods

In the previous section, LoRA achieved a test accuracy of 89.44% under the default settings. How does this compare to traditional fine-tuning?

For comparison, another experiment trains the same DistilBERT model but updates only the last two layers. This is done by freezing all model weights and then unfreezing the two linear output layers:

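A sketch of that setup, using the pre_classifier and classifier layers from the printed model structure:

```python
# Freeze everything first
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the two linear output layers
for param in model.pre_classifier.parameters():
    param.requires_grad = True
for param in model.classifier.parameters():
    param.requires_grad = True
```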

Training only the last two layers yields the following classification performance:

  • Training accuracy: 86.68%
  • Validation accuracy: 87.26%
  • Test accuracy: 86.22%

The results show that LoRA outperforms the traditional approach of fine-tuning only the last two layers while using 4 times fewer parameters. Fine-tuning all layers would require updating 450 times more parameters than the LoRA setup, yet it only improves test accuracy by about 2%.

Optimizing the LoRA configuration

The results above were all obtained with LoRA under its default settings, with the following hyperparameters:

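The defaults take the shape of a set of flags like the ones in the earlier sketch; the values below are illustrative, and the exact defaults are listed in the original post:

```python
# Illustrative default configuration (check the original post for exact values)
lora_r = 8               # rank of the low-rank matrices
lora_alpha = 16          # scaling factor
lora_query = True        # apply LoRA to the query projections
lora_key = False         # apply LoRA to the key projections
lora_value = True        # apply LoRA to the value projections
lora_projection = False  # apply LoRA to the attention output projection
lora_mlp = False         # apply LoRA to the feedforward layers
lora_head = False        # apply LoRA to the classification head
```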

If you want to try different hyperparameter configurations, the original tutorial provides a script whose command-line arguments control these LoRA settings.


However, the tutorial arrives at a better-performing hyperparameter configuration (see the original post for the exact values).


Under that configuration, the results are:

  • Validation accuracy: 92.96%
  • Test accuracy: 92.39%

Notably, even though the LoRA setup trains only a small set of parameters (roughly 500k vs. 66M), its accuracy is slightly higher than that obtained with full fine-tuning.

Original link: https://lightning.ai/lightning-ai/studios/code-lora-from-scratch
