


Ten lines of code to rival RLHF: training a socially aligned model with social-game data
Aligning the behavior of language models with human social values is an important part of current language-model development; the corresponding training is also called value alignment.
The current mainstream approach is RLHF (Reinforcement Learning from Human Feedback), used by ChatGPT. RLHF first trains a reward model (value model) as a proxy for human judgment; during the reinforcement-learning phase, this proxy model supplies rewards as supervision signals to the generative language model.
This method has the following pain points:
1. The rewards produced by the proxy model are easy to game or tamper with (reward hacking).
2. During training, the proxy model must continuously interact with the generative model, which can be very slow and inefficient. To ensure high-quality supervision signals, the proxy model should be no smaller than the generative model, meaning that during reinforcement-learning optimization at least two large models must alternate between inference (computing rewards) and parameter updates (optimizing the generative model). Such a setup can be very inconvenient in large-scale distributed training.
3. The reward model itself has no obvious correspondence with how humans actually think. We do not carry a separate scoring model in our heads, and in practice it is very difficult to maintain a fixed scoring standard over a long period. Instead, much of the value judgment we form as we grow comes from daily social interactions: by observing different social responses to similar situations, we come to realize what is encouraged and what is not. The experience and consensus gradually accumulated through a large amount of "socialization-feedback-improvement" become the shared value judgments of human society.
A recent study from Dartmouth, Stanford, Google DeepMind, and other institutions shows that high-quality data constructed via social games, combined with a simple and efficient alignment algorithm, may be the key to achieving alignment.
- Paper: https://arxiv.org/pdf/2305.16960.pdf
- Code: https://github.com/agi-templar/Stable-Alignment
- Model download (including base, SFT, and alignment models): https://huggingface.co/agi-css
The author proposes an alignment method trained on multi-agent game data. The basic idea is to move the online interaction between the reward model and the generative model during training offline, into interactions among a large number of autonomous agents in a game (sampled at a high rate and played out in advance). The game environment runs independently of training and can be massively parallelized. The supervision signal then depends not on the quality of a single proxy reward model, but on the collective intelligence of a large population of autonomous agents.
To this end, the author designed a virtual social model called Sandbox. The sandbox is a world of grid points, each of which is a social agent. Each social agent has a memory system that stores the question, answer, feedback, and other information from every interaction. Whenever a social agent answers a question, it first retrieves from its memory the N historical question-answer pairs most relevant to the question and uses them as context for the reply. This design lets a social agent's stance update continuously across multiple rounds of interaction while remaining consistent with its past. Each social agent is initialized with a different default stance.
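The retrieve-then-reply memory mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: class and method names are hypothetical, and word-overlap similarity stands in for whatever retrieval the real system uses.

```python
from collections import deque

class SocialAgent:
    """Minimal sketch of one sandbox grid agent (illustrative names)."""

    def __init__(self, persona, memory_size=100):
        self.persona = persona                   # the agent's initial stance
        self.memory = deque(maxlen=memory_size)  # (question, answer, feedback) tuples

    def remember(self, question, answer, feedback):
        """Store one completed interaction in the memory system."""
        self.memory.append((question, answer, feedback))

    def recall(self, question, n=3):
        """Return the n stored interactions most relevant to `question`.

        Word overlap is a stand-in for embedding similarity here; the
        retrieved entries would serve as context for the next reply.
        """
        q_words = set(question.lower().split())

        def relevance(entry):
            return len(q_words & set(entry[0].lower().split()))

        return sorted(self.memory, key=relevance, reverse=True)[:n]
```

Because `deque(maxlen=...)` discards the oldest entries, the agent's context naturally drifts toward its recent interactions while keeping some continuity with the past.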
## Converting game data into alignment data

In the experiment, the author ran a social simulation in a 10x10 grid sandbox (100 social agents in total) under a single social rule (the so-called Sandbox Rule): every social agent must make its answers more socially aligned in order to leave a good impression on the other social agents. The sandbox also deploys memoryless observers that score each social agent's responses before and after every social interaction along two dimensions: alignment and engagement.
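One way to picture the conversion of these scored interaction logs into alignment training data is the sketch below. The record schema (`draft`, `revised`, and the score fields) is assumed for illustration; the released dataset's actual format may differ.

```python
def to_alignment_data(interactions):
    """Flatten sandbox interaction logs into preference-style records.

    Each interaction carries a pre-interaction draft and a post-feedback
    revision, both scored by the memoryless observers. We keep only the
    cases where peer feedback actually improved the score, yielding a
    (chosen, rejected) pair per record.
    """
    records = []
    for it in interactions:
        if it["revised_score"] > it["draft_score"]:
            records.append({
                "question": it["question"],
                "rejected": it["draft"],          # lower-scored original reply
                "chosen": it["revised"],          # higher-scored revision
                "rejected_score": it["draft_score"],
                "chosen_score": it["revised_score"],
            })
    return records
```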
## Simulating human society in the sandbox with different models
The author used the Sandbox to test language models of different sizes and training stages. Overall, models that had undergone alignment training (so-called "aligned models"), such as davinci-003, GPT-4, and ChatGPT, generate socially normative responses within fewer interaction rounds. In other words, alignment training makes a model safer "out of the box", without needing several rounds of dialogue to steer it. Models without alignment training not only need more interactions to reach responses that are jointly optimal in alignment and engagement, but the ceiling of that joint optimum is also significantly lower than for aligned models.
The author also proposes a simple and practical alignment algorithm, called Stable Alignment, for learning alignment from the historical data in the sandbox. Stable Alignment performs score-modulated contrastive learning within each mini-batch: the lower a response's score, the larger the contrastive margin. In other words, by continuously sampling mini-batches, Stable Alignment encourages the model to generate responses closer to high-scoring answers and farther from low-scoring ones. Stable Alignment eventually converges to the SFT loss. The authors also discuss how Stable Alignment differs from SFT and RLHF.
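The score-modulated contrastive idea can be sketched in a few lines of plain Python. This is a simplified stand-in for the paper's objective, not its actual code: the hinge form, the margin scaling, and the field names are all assumptions for illustration, and real training would operate on per-token log-probabilities under the model being optimized.

```python
from dataclasses import dataclass

@dataclass
class ScoredResponse:
    text: str
    score: float    # observer rating, assumed in [0, 10]
    logprob: float  # model log-likelihood of this response

def stable_alignment_loss(batch, max_score=10.0, margin_scale=1.0):
    """Score-modulated contrastive loss over one mini-batch (sketch).

    The highest-rated response acts as the positive example and
    contributes an SFT-style likelihood term; every other response is
    pushed below it by a hinge margin that grows as its score drops,
    so lower-rated replies are penalized more strongly.
    """
    best = max(batch, key=lambda r: r.score)
    loss = -best.logprob  # SFT term: maximize likelihood of the best reply
    for r in batch:
        if r is best:
            continue
        # lower score -> larger required margin between best and r
        margin = margin_scale * (max_score - r.score) / max_score
        loss += max(0.0, margin - (best.logprob - r.logprob))
    return loss / len(batch)
```

If all low-scoring responses are already far below the best one in likelihood, every hinge term vanishes and only the SFT term remains, which matches the claim that Stable Alignment converges to the SFT loss.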
The author particularly emphasizes the data produced by Sandbox games: because of the mechanism design, a large portion of it consists of responses that were gradually revised into answers conforming to social values. The author shows through ablation experiments that this large volume of step-by-step-improved data is the key to stable training.
The author also compared Stable Alignment with current mainstream alignment algorithms in terms of performance and training stability, showing that Stable Alignment is not only more stable than reward modeling but also comparable to RLHF in both general performance and alignment performance (since ChatGPT relies on undisclosed models, data, and algorithms, this comparison is for reference only).
Example generation results:
