# Able to align with humans without RLHF, performance comparable to ChatGPT! Chinese team proposes Wombat model
OpenAI's ChatGPT can understand a wide variety of human instructions and performs well across different language tasks. This is made possible by a large-scale language model fine-tuning method called RLHF (Reinforcement Learning from Human Feedback).
The RLHF approach unlocks the language model’s ability to follow human instructions, making the language model’s capabilities consistent with human needs and values.
Currently, RLHF research mainly uses the PPO algorithm to optimize language models. However, PPO involves many hyperparameters and requires multiple independent models to cooperate during the iteration process, so incorrect implementation details can lead to poor training results.
At the same time, from the perspective of aligning with humans, reinforcement learning is not strictly necessary.
Paper address: https://arxiv.org/abs/2304.05302v1
Project address: https://github.com/GanjinZero/RRHF
To this end, authors from Alibaba DAMO Academy and Tsinghua University proposed a ranking-based method for aligning with human preferences: RRHF.
RRHF requires no reinforcement learning and can leverage responses generated by different language models, including ChatGPT, GPT-4, or the model currently being trained. RRHF works by scoring responses and aligning them with human preferences through a ranking loss.
Unlike PPO, the RRHF training process can use the output of human experts or GPT-4 as comparisons. The trained RRHF model can be used both as a generative language model and as a reward model.
The CEO of Playground AI said this is the most interesting paper he has seen recently.
The following figure illustrates the difference between the PPO algorithm and the RRHF algorithm.
RRHF first obtains $k$ responses $y_1, \dots, y_k$ to a query $x$ through different methods, and then uses a reward model to score each of the $k$ responses separately, obtaining reward scores $r_1, \dots, r_k$. For each response, the training model also computes a length-normalized conditional log probability:

$$p_i = \frac{\sum_t \log P_\pi(y_{i,t} \mid x, y_{i,<t})}{\lVert y_i \rVert}$$

where $P_\pi$ is the probability distribution of the autoregressive language model $\pi$ being trained.
We want the training model to assign higher probability to responses with higher reward scores, that is, to match the ranking given by the reward scores. This goal is optimized through a ranking loss:

$$L_{\text{rank}} = \sum_{r_i < r_j} \max(0,\; p_i - p_j)$$
In addition, the model is also given an objective to directly learn the highest-scoring response, i.e., a standard cross-entropy loss on it:

$$L_{\text{ft}} = -\sum_t \log P_\pi(y_{i',t} \mid x, y_{i',<t}), \quad i' = \arg\max_i r_i$$

The final training loss is the sum of the two terms: $L = L_{\text{rank}} + L_{\text{ft}}$.
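To make the objective concrete, here is a minimal PyTorch sketch of the two losses. This is an illustrative paraphrase, not the official code: it assumes the logits passed in are already aligned so that position $t$ predicts token $y_{i,t}$, and it omits batching; the reference implementation lives in the RRHF repository linked above.

```python
# Minimal sketch of the RRHF objective (illustrative, not the official code).
# Assumes logits_list[i] are model logits aligned to predict the tokens of
# response i, ids_list[i] are the response token ids, and mask_list[i] marks
# real response tokens (1) vs. padding (0).
import torch
import torch.nn.functional as F


def token_log_probs(logits, response_ids, mask):
    """Per-token log probabilities of one response under the training model."""
    log_probs = F.log_softmax(logits, dim=-1)                                # (T, vocab)
    token_lp = log_probs.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)  # (T,)
    return token_lp * mask                                                   # zero out padding


def rrhf_loss(logits_list, ids_list, mask_list, rewards):
    """rewards: (k,) tensor of reward-model scores r_1..r_k for the k responses."""
    k = len(ids_list)

    # p_i: length-normalized conditional log probability of each response
    p = torch.stack([
        token_log_probs(logits_list[i], ids_list[i], mask_list[i]).sum() / mask_list[i].sum()
        for i in range(k)
    ])

    # L_rank = sum over pairs with r_i < r_j of max(0, p_i - p_j)
    l_rank = sum(
        torch.clamp(p[i] - p[j], min=0.0)
        for i in range(k) for j in range(k)
        if rewards[i] < rewards[j]
    )

    # L_ft: cross-entropy on the response with the highest reward score
    best = int(torch.argmax(rewards))
    l_ft = -token_log_probs(logits_list[best], ids_list[best], mask_list[best]).sum()

    return l_rank + l_ft
```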
You can see that the RRHF training process is very simple. The figure below shows the loss curve during RRHF training: the loss decreases very stably, and the reward score increases steadily as the loss decreases.
The authors conducted experiments on the HH dataset, where RRHF achieves results comparable to PPO:
The RRHF algorithm can effectively align the language model's output probabilities with human preferences. Its training idea is very simple, and the trained model has several notable characteristics.
The RRHF method uses OpenAI's ChatGPT or GPT-4 as the scoring model and the outputs of ChatGPT, Alpaca, and other models as training samples to develop two new language models: Wombat-7B and Wombat-7B-GPT4. Training takes only 2-4 hours and is very lightweight.
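As an illustration of what such training data might look like, here is a hypothetical sketch: each query is paired with responses from several sources plus their scores. The helper `score_with_chatgpt` is a placeholder for whatever prompt-based scoring is used, not an actual function in the RRHF project.

```python
# Hypothetical sketch of assembling one RRHF/Wombat-style training sample:
# responses from several models are scored by ChatGPT/GPT-4 used as the scorer.
import json


def build_rrhf_sample(query, responses, score_with_chatgpt):
    """responses: list of (source_name, response_text) pairs,
    e.g. from ChatGPT, Alpaca, LLaMA, or the model being trained."""
    scores = [score_with_chatgpt(query, text) for _, text in responses]
    return {
        "query": query,
        "responses": [text for _, text in responses],
        "scores": scores,  # reward scores r_1..r_k consumed by the ranking loss
    }


if __name__ == "__main__":
    # Dummy scorer so the sketch runs without any API access.
    dummy_scorer = lambda q, r: float(len(r))  # stand-in for a ChatGPT/GPT-4 score
    sample = build_rrhf_sample(
        "Explain why the sky is blue.",
        [("chatgpt", "Rayleigh scattering ..."), ("alpaca", "Because of the sun ...")],
        dummy_scorer,
    )
    print(json.dumps(sample, indent=2))
```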
As a new open-source pre-trained model, Wombat aligns better with human preferences than LLaMA, Alpaca, and similar models. The authors found experimentally that Wombat-7B has complex abilities such as role playing and counterfactual reasoning.
When asked to introduce the technology of the year 3000, Wombat answers like this (translated from English):
Hopefully our future will get better and better, just as Wombat predicts.
References:
https://github.com/GanjinZero/RRHF