
BLOOM can create a new culture for AI research, but challenges remain

王林

2023-04-09 16:21:04

​Translator|Li Rui

Reviewer|Sun Shujuan

The BigScience research project recently released BLOOM, a large language model. At first glance, it looks like another attempt to replicate OpenAI's GPT-3.

But what sets BLOOM apart from other large language models (LLMs) is the effort that went into researching, developing, training, and releasing it.

In recent years, large technology companies have guarded their LLMs like closely held trade secrets, but the BigScience team put transparency and openness at the center of BLOOM from the start of the project.

The result is a large language model that is ready for research and learning, and available to everyone. The example of open source and open collaboration set by BLOOM will benefit future research on LLMs and other areas of artificial intelligence. But some challenges inherent to large language models still need to be addressed.

What is BLOOM?


BLOOM is short for "BigScience Large Open-science Open-access Multilingual Language Model". On paper, it is not much different from GPT-3 and OPT-175B: a very large Transformer model with 176 billion parameters, trained on 1.6TB of data that includes both natural language and software source code.

Like GPT-3, it can perform many tasks through zero-shot or few-shot learning, including text generation, summarization, question answering, and programming.
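Few-shot learning works by packing a handful of solved examples into the prompt and letting the model continue the pattern. The sketch below assembles such a prompt for an illustrative sentiment task; the task, example texts, and "Review:/Sentiment:" format are assumptions for illustration, not a BLOOM-specific requirement — an autoregressive model simply continues the text it is given.

```python
# Sketch: assembling a few-shot prompt for an autoregressive LLM like BLOOM.
# The task and the formatting convention here are illustrative, not mandated
# by the model -- it just predicts a continuation of the prompt.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples and the new query into one prompt string."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Review: {text}\nSentiment: {label}")
    # The final block ends at "Sentiment:" so the model fills in the label.
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A beautifully shot, moving film.")
print(prompt)
```

Fed to the model, the expected continuation would be the missing label; zero-shot prompting is the degenerate case with an empty examples list.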

But the importance of BLOOM lies in the organization and construction process behind it.

BigScience is a research project launched in 2021 by Hugging Face, the machine learning model hub. According to its website, the project "aims to demonstrate another way of creating, studying, and sharing large language models and large research artifacts within the AI/NLP research community."

In this regard, BigScience draws inspiration from big-science initiatives such as CERN and the Large Hadron Collider (LHC), where open scientific collaboration produces large-scale artifacts useful to the entire research community.

Since May 2021, more than 1,000 researchers from 60 countries and more than 250 institutions have co-created BLOOM under BigScience.

Transparency, Openness, and Inclusion

While most major LLMs are trained on English text only, BLOOM's training corpus includes 46 natural languages and 13 programming languages. This is useful in the many regions where the primary language is not English.

BLOOM also breaks the de facto dependence on large technology companies for training such models. One of the main problems with LLMs is the high cost of training and tuning. This barrier has made 100-billion-parameter LLMs the exclusive domain of big tech companies with deep pockets. In recent years, AI labs have gravitated toward big tech companies for subsidized cloud computing resources and research funding.

In contrast, the BigScience research team received a 3 million euro grant from the French National Center for Scientific Research to train BLOOM on the Jean Zay supercomputer. No agreement grants an exclusive license on the technology to a commercial company, and there is no commitment to commercialize the model and turn it into a profitable product.

In addition, the BigScience team has been completely transparent about the entire training process. It has published the datasets, meeting transcripts, discussions, and code, along with the logs and technical details of training the model.

Researchers are studying the model’s data and metadata and publishing interesting findings.

For example, researcher David McClure tweeted on July 12, 2022, “I’ve been looking at the training dataset behind the really cool BLOOM model from Bigscience and Hugging Face. There are 10 million samples from the English corpus, about 1.25% of the total, encoded with 'all-distilroberta-v1', and then UMAP to 2d."
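The tweet's figures imply a rough size for the whole corpus: if 10 million English samples are about 1.25% of the total, the full training set holds on the order of 800 million samples. The back-of-envelope check below uses only the numbers quoted in the tweet, which are not official BigScience statistics.

```python
# Back-of-envelope: total corpus size implied by the tweet's figures.
# Inputs are the tweet's numbers, not official BigScience statistics.

english_samples = 10_000_000   # samples drawn from the English corpus
english_fraction = 0.0125      # ~1.25% of the total, per the tweet

total_samples = english_samples / english_fraction
print(f"Implied total corpus size: ~{total_samples / 1e6:.0f} million samples")
```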

Of course, the trained model itself is available on Hugging Face's platform, sparing researchers the pain of spending millions of dollars to train it themselves.

Facebook open-sourced one of its LLMs last month under some restrictions. However, the transparency BLOOM brings is unprecedented and promises to set a new standard for the industry.

BLOOM training co-lead Teven Le Scao said, "In contrast to the secrecy of industrial AI research labs, BLOOM shows that the most powerful AI models can be trained and released responsibly and openly by the broader research community."

Challenges remain

While BigScience's efforts to bring openness and transparency to AI research and large language models are laudable, the challenges inherent to the field remain unchanged.

LLM research is moving toward ever larger models, which will further increase training and running costs. BLOOM was trained on 384 Nvidia A100 GPUs (priced at about $32,000 each), and larger models will require larger compute clusters. The BigScience team has announced that it will continue to create other open-source LLMs, but it remains to be seen how the team will fund its increasingly expensive research. OpenAI, for example, started as a non-profit and later became a for-profit organization that sells products and relies on funding from Microsoft.
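Multiplying out the training-hardware figures above gives a sense of the barrier: 384 GPUs at roughly $32,000 apiece is over $12 million in accelerators alone. This is only the article's two numbers multiplied together; it ignores interconnect, host machines, power, and the rest of the supercomputer.

```python
# Rough GPU hardware bill implied by the training setup described above.
# Both inputs are the article's approximate figures; real list prices vary,
# and this excludes networking, hosts, power, and operating costs.

num_gpus = 384
price_per_gpu = 32_000  # USD, approximate

gpu_hardware_cost = num_gpus * price_per_gpu
print(f"GPU hardware alone: ${gpu_hardware_cost:,}")
```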

Another unsolved issue is the huge cost of running these models. The compressed BLOOM model is 227GB, and running it requires specialized hardware with hundreds of gigabytes of memory. For comparison, GPT-3 requires a compute cluster equivalent to an Nvidia DGX 2, which costs about $400,000. Hugging Face plans to launch an API platform that will let researchers use the model for about $40 per hour, which is a significant cost.
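The "hundreds of gigabytes" figure follows directly from the parameter count: a weights-only estimate is simply parameters times bytes per parameter. The sketch below does that arithmetic at a few common precisions; it is a lower bound, since activations, attention caches, and framework overhead add more on top.

```python
# Why running BLOOM needs hundreds of gigabytes of memory: a weights-only
# estimate (parameter count x bytes per parameter). Activations and
# framework overhead come on top of this lower bound.

params = 176_000_000_000  # 176B parameters

def weights_gb(num_params, bytes_per_param):
    """Approximate size of the weights alone, in GB (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

print(f"fp32: ~{weights_gb(params, 4):.0f} GB")
print(f"fp16: ~{weights_gb(params, 2):.0f} GB")
print(f"int8: ~{weights_gb(params, 1):.0f} GB")
```

At 16-bit precision the weights alone are about 352 GB, which is consistent with the article's claim that inference needs specialized hardware with hundreds of gigabytes of memory.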

The cost of running BLOOM will also affect the applied machine learning community, startups, and organizations looking to build products powered by LLMs. Currently, the GPT-3 API provided by OpenAI is better suited to product development. It will be interesting to see which direction BigScience and Hugging Face take to enable developers to build products on top of their valuable research.

In this regard, it is expected that BigScience will offer smaller versions of its models in future releases. Contrary to what is often portrayed in the media, LLMs still adhere to the "no free lunch" principle: in applied machine learning, a compact model fine-tuned for a specific task is more effective than a very large model with average performance on many tasks. For example, Codex is a modified version of GPT-3 that provides great assistance with programming at a fraction of GPT-3's size and cost. GitHub currently offers Copilot, a Codex-based product, for $10 per month.

It will be interesting to examine where academic and applied AI goes in the future as BLOOM hopes to establish a new culture.

Original title: BLOOM can set a new culture for AI research—but challenges remain, author: Ben Dickson


This article is reproduced from 51cto.com.