Shanghai Digital Brain Research Institute releases DB1, China's first large multi-modal decision-making model, which can achieve rapid decision-making on ultra-complex problems
Recently, the Shanghai Digital Brain Research Institute (hereinafter "the Institute") released its first large-scale multi-modal decision-making model, DB1, filling a domestic gap in this area and further demonstrating the potential of pre-trained models for text, image-text, reinforcement-learning decision-making, and operations-research optimization tasks. The DB1 code has been open-sourced on GitHub: https://github.com/Shanghai-Digital-Brain-Laboratory/BDM-DB1.
Previously, the Institute proposed MADT (https://arxiv.org/abs/2112.02845) and MAT (https://arxiv.org/abs/2205.14953), multi-agent models that cast offline single- and multi-agent tasks as sequence-modeling problems. Using Transformer architectures, they achieved remarkable results on several single- and multi-agent tasks, and research in this direction continues.
In the past few years, with the rise of pre-trained large models, academia and industry have made continuous progress in both the parameter scale and the multi-modal coverage of pre-trained models. By deeply modeling massive data and knowledge, large-scale pre-trained models are considered one of the important paths toward general artificial intelligence. The Institute, which focuses on decision intelligence, set out to replicate the success of pre-trained models on decision-making tasks and achieved a breakthrough.
Previously, DeepMind launched Gato, which unifies single-agent decision-making, multi-round dialogue, and image-text generation tasks into a single Transformer-based autoregressive problem and achieves good performance on 604 different tasks, showing that some simple reinforcement-learning decision problems can be solved through sequence prediction. This supports the correctness of the Institute's research direction on large decision-making models.
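The core idea behind Gato-style models, which DB1 follows, is that heterogeneous trajectories can be flattened into one token stream for next-token prediction. The sketch below shows a minimal version of this tokenization step; the bin count, value range, and interleaving order are illustrative assumptions, not DB1's published scheme.

```python
import numpy as np

def discretize(x, n_bins=1024, low=-1.0, high=1.0):
    """Map a continuous value to an integer token id via uniform binning.
    (Bin count and value range are assumptions for illustration.)"""
    x = float(np.clip(x, low, high))
    return int((x - low) / (high - low) * (n_bins - 1))

def trajectory_to_tokens(observations, actions, n_bins=1024):
    """Flatten an (observation, action) trajectory into one token sequence,
    interleaving observation tokens and action tokens per timestep."""
    tokens = []
    for obs, act in zip(observations, actions):
        tokens.extend(discretize(v, n_bins) for v in obs)
        tokens.extend(discretize(v, n_bins) for v in act)
    return tokens

# A two-step toy trajectory: 2-D observations, 1-D actions.
obs = [[0.1, -0.2], [0.3, 0.0]]
acts = [[0.5], [-0.5]]
tokens = trajectory_to_tokens(obs, acts)

# Next-token prediction: a causal Transformer sees tokens[:t], predicts tokens[t].
inputs, targets = tokens[:-1], tokens[1:]
```

A causal Transformer trained on such (inputs, targets) pairs treats control, dialogue, and image-text tasks uniformly, which is what makes a single model across 604 (Gato) or 870 (DB1) tasks possible.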
DB1, launched by the Institute this time, primarily reproduces and verifies Gato, while attempting improvements in network structure, parameter count, task type, and number of tasks:
Overall, DB1 reaches the same level of performance as Gato while evolving toward task domains closer to real business needs: it solves the NP-hard traveling salesman problem (TSP) well, a direction Gato had not explored.
Comparison of DB1 (right) and Gato (left) metrics
Multi-task performance distribution of DB1 on reinforcement learning simulation environment
Compared with traditional decision-making algorithms, DB1 performs well in cross-task decision-making and rapid transfer. On cross-task decision-making and parameter scale, it achieves a leap from tens of millions of parameters for a single complex task to billions of parameters across multiple complex tasks, with room to grow further, giving it sufficient capacity to solve practical problems in complex business environments. On transfer, DB1 completes the leap from intelligent prediction to intelligent decision-making and from single agent to multi-agent, making up for the shortcomings of traditional methods in cross-task transfer and making in-house large models feasible for enterprises.
It is undeniable that DB1 also encountered many difficulties during development, and the Institute's many attempts can offer the industry standard solution paths for large-scale model training and multi-task training-data storage. With roughly 1 billion parameters, a huge task set, and more than 100 TB (300B tokens) of expert data to train on, ordinary deep reinforcement-learning training frameworks could no longer meet the requirements for rapid training.

For distributed training, the Institute took full account of the computational structure of reinforcement learning, operations-research optimization, and large-model training. In single-machine multi-GPU and multi-machine multi-GPU environments, it makes full use of hardware resources and carefully designs the communication mechanism between modules, maximizing training efficiency and shortening training across 870 tasks to one week.

For distributed random sampling, the indexing, storage, loading, and preprocessing of training data also became bottlenecks. The Institute adopted a lazy-loading mode when reading the dataset to work around memory limits while making the most of available memory. In addition, after loaded data is preprocessed, the result is cached on disk so that later runs can load the preprocessed data directly, reducing the time and resource cost of repeated preprocessing.
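The lazy-loading-plus-disk-cache pattern described above can be sketched as follows. This is a minimal illustration, not the Institute's actual pipeline: the file format (`.npy` episodes), the normalization step, and the pickle-based cache are all assumptions.

```python
import hashlib
import os
import pickle

import numpy as np

class LazyCachedDataset:
    """Loads raw episodes from disk only on access (lazy loading) and caches
    preprocessed results on disk so later epochs skip repeated preprocessing.
    File layout and preprocessing here are illustrative assumptions."""

    def __init__(self, episode_paths, cache_dir="cache"):
        self.episode_paths = episode_paths  # raw files, read only when indexed
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _cache_path(self, path):
        # One cache file per raw episode, keyed by a hash of its path.
        key = hashlib.md5(path.encode()).hexdigest()
        return os.path.join(self.cache_dir, key + ".pkl")

    def _preprocess(self, raw):
        # Placeholder preprocessing: normalize to zero mean, unit variance.
        return (raw - raw.mean()) / (raw.std() + 1e-8)

    def __getitem__(self, idx):
        path = self.episode_paths[idx]
        cached = self._cache_path(path)
        if os.path.exists(cached):           # cache hit: skip preprocessing
            with open(cached, "rb") as f:
                return pickle.load(f)
        raw = np.load(path)                  # cache miss: lazy-load raw data
        item = self._preprocess(raw)
        with open(cached, "wb") as f:        # persist for future epochs
            pickle.dump(item, f)
        return item
```

The first epoch pays the preprocessing cost once per episode; every later access reads the cached result directly, which is the trade-off the paragraph above describes.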
Currently, leading international and domestic companies and research institutions such as OpenAI, Google, Meta, Huawei, Baidu, and DAMO Academy have conducted research on multi-modal large models and made some commercialization attempts, including applying them in their own products or providing model APIs and related industry solutions. By contrast, the Institute focuses more on decision-making problems, supporting applications in game-AI decision tasks, operations-research TSP solving, robot decision and control, black-box optimization, and multi-round dialogue.
Operations Research Optimization: TSP Problem Solving
A TSP instance with some Chinese cities as nodes
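A TSP instance like the one pictured is naturally cast as step-by-step tour construction: at each step, pick the next city given the cities visited so far. The sketch below shows that decoding loop with a greedy nearest-neighbor rule standing in for the model's per-step choice; it is a classical baseline for comparison, not DB1's method.

```python
import numpy as np

def tour_length(coords, tour):
    """Total length of the closed tour over 2-D city coordinates."""
    tour = np.asarray(tour)
    deltas = coords[tour] - coords[np.roll(tour, -1)]
    return float(np.sqrt((deltas ** 2).sum(axis=1)).sum())

def greedy_decode(coords):
    """Build a tour one city at a time, starting from city 0.
    'Nearest unvisited city' stands in here for the next-city choice a
    learned sequence model would make at each decoding step."""
    n = len(coords)
    tour = [0]
    visited = {0}
    while len(tour) < n:
        current = coords[tour[-1]]
        dists = np.linalg.norm(coords - current, axis=1)
        dists[list(visited)] = np.inf     # never revisit a city
        nxt = int(dists.argmin())
        tour.append(nxt)
        visited.add(nxt)
    return tour
```

A learned solver replaces the `argmin` with a policy conditioned on the partial tour; tour length then serves as the common metric for comparing both against the optimum.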
Reinforcement learning task video demonstration
After offline learning on 870 different decision-making tasks, DB1's evaluation showed that it reached or exceeded 50% of expert level on 76.67% of tasks. The following demonstrates the results on some of these tasks.
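Statements like "reached or exceeded 50% expert level" typically rely on expert-normalized scores, where 0 corresponds to a random policy and 1 to the expert. The snippet below shows that common convention; the source does not specify DB1's exact formula, so treat this as an assumption.

```python
def normalized_score(agent_return, random_return, expert_return):
    """Expert-normalized score: 0 = random policy, 1 = expert policy.
    (A common convention; DB1's exact normalization is not stated.)"""
    return (agent_return - random_return) / (expert_return - random_return)

def fraction_at_level(scores, threshold=0.5):
    """Fraction of tasks whose normalized score meets the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical per-task returns: (agent, random, expert).
raw = [(80.0, 0.0, 100.0), (30.0, 10.0, 110.0), (5.0, 0.0, 100.0)]
scores = [normalized_score(a, r, e) for a, r, e in raw]  # 0.8, 0.2, 0.05
```

Under this convention, DB1's reported result means 667 of the 870 tasks scored at or above 0.5.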
Atari Breakout
DMLab Explore Object Locations
Procgen DogBall
Metaworld PlateSlide
ModularRL Cheetah
Text-Image task
Although the multi-modal decision-making pre-trained model DB1 has achieved certain results, limitations remain, such as sensitivity to cross-domain task-sampling weights, difficulty of cross-domain knowledge transfer, long-sequence modeling, and strong dependence on expert data. Despite these challenges, large multi-modal decision-making models currently appear to be one of the key exploration directions for taking decision-making agents from games to broader scenarios, from virtual to real environments with autonomous perception and decision-making in open, dynamic settings, and ultimately toward more general artificial intelligence. Going forward, the Institute will continue to iterate on the large digital-brain decision-making model, supporting more tasks through larger parameter counts and more effective sequence representations, and combining offline/online training and fine-tuning to achieve cross-domain, cross-modal, and cross-task knowledge generalization and transfer, ultimately providing more general, efficient, and lower-cost intelligent decision-making solutions for real-world applications.