News, May 17: On May 12, Beijing released "Several Measures to Promote the Innovative Development of General Artificial Intelligence (2023-2025) (Draft for Comments)" (hereinafter the "Draft for Comments") for public comment. The draft proposes coordinating the supply of computing power needed for AI training.
The Draft for Comments proposes strengthening the overall supply of computing resources: deepening cooperation with market players such as leading public cloud vendors, implementing a computing power partnership program, confirming the first batch of partner members, clarifying supply technical standards, software and hardware service requirements, computing power supply scale, and preferential policies, and publishing a list of high-quality computing power suppliers for universities and small and medium-sized enterprises (SMEs) in Beijing.
According to the Draft for Comments, a unified government procurement entrance will lower the cost of public cloud procurement, benefiting SMEs while sparing enterprises the overhead of negotiating separately with different cloud vendors. To meet demand for elastic computing power, Beijing will build a unified multi-cloud computing power scheduling platform that provides unified management and operation of heterogeneous computing environments, allowing enterprises to run AI computing tasks seamlessly, economically, and efficiently across different clouds. The city will also build a basic optical transmission network directly connecting Beijing to computing power clusters in Hebei, Tianjin, Shanxi, Inner Mongolia, and other provinces and municipalities, further improving the platform's visibility into computing resources in those regions and enabling exploration of computing power trading.
The Draft for Comments also notes that high-quality Chinese corpora currently make up too small a share of large model training data, which hampers Chinese-language expression and industrial application. It therefore calls for integrating existing open-source Chinese pre-training datasets with high-quality Chinese data from the internet and cleaning them for compliance. It further calls for continuously expanding high-quality multimodal data sources and building compliant, secure large model pre-training corpora covering Chinese text, image-text pairs, audio, and video, to be opened for targeted, conditional use through the social data zone of the Beijing International Big Data Exchange.
IT Home has attached the full text of "Beijing's Several Measures to Promote the Innovative Development of General Artificial Intelligence (2023-2025) (Draft for Comments)".