
Zhiyuan opens 300 million semantic vector model training records; the BGE model continues to iterate

王林 · 2023-09-21 21:33

With the rapid development and application of large models, the importance of the embedding model, a core basic component of large-model systems, has become increasingly prominent. BGE (BAAI General Embedding), the open-source, commercially usable Chinese-English semantic vector model released by Zhiyuan (BAAI) a month ago, has attracted widespread attention in the community and has been downloaded hundreds of thousands of times on the Hugging Face platform. BGE has now iterated to version 1.5 with several updates. Notably, the team has open-sourced 300 million pieces of its large-scale training data for the first time, helping the community train similar models and advancing this field of technology.

## 300 million Chinese-English vector model training records opened

This is the industry's first open-sourced semantic vector model training dataset, reaching 300 million Chinese and English records.

BGE's outstanding capabilities largely stem from its large-scale, diverse training data; industry peers had previously rarely released comparable datasets. With this update, Zhiyuan opens the BGE training data to the community for the first time, laying a foundation for further development of this line of technology.

The dataset released this time, MTP, consists of 300 million Chinese-English related text pairs in total: 100 million Chinese records and 200 million English records. Sources include Wudao Corpora, Pile, DuReader, Sentence Transformers and other corpora, processed through the necessary sampling, extraction and cleaning. For details, see the Data Hub: https://data.baai.ac.cn

MTP is the largest open-source collection of Chinese-English related text pairs to date, providing an important foundation for training Chinese and English semantic vector models.
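To make the "related text pair" format concrete, here is a minimal parsing sketch. The `text1`/`text2` field names are an assumption for illustration only; the actual schema is documented on the BAAI Data Hub dataset card.

```python
import json

# A minimal sketch of parsing an MTP-style related-text-pair file.
# NOTE: the "text1"/"text2" field names are hypothetical — check the
# dataset card on the BAAI Data Hub for the actual schema.
raw = """\
{"text1": "What is a semantic vector?", "text2": "A semantic vector maps text to a dense vector for similarity search."}
{"text1": "BGE model", "text2": "BGE (BAAI General Embedding) is a Chinese-English embedding model."}
"""

# One JSON object per line, one related pair per record.
pairs = [json.loads(line) for line in raw.splitlines() if line.strip()]

for p in pairs:
    assert {"text1", "text2"} <= p.keys()
```

At MTP's scale such files would of course be streamed rather than held in memory, but the record shape is the same.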

## BGE feature upgrades in response to developer community feedback

Based on community feedback, BGE has been further optimized on top of version 1.0, making its performance more stable and outstanding. The specific upgrades are as follows:

  • Model update. BGE-*-zh-v1.5 alleviates the skewed similarity distribution by filtering the training data to remove low-quality records and by raising the temperature coefficient during training to 0.02, making similarity values more stable.
  • New model. The open-source BGE-reranker cross-encoder can find relevant text more accurately and supports both Chinese and English. Unlike the vector model, which outputs embeddings, BGE-reranker directly outputs the similarity of a text pair and achieves higher ranking accuracy. It can be used to re-rank vector-recall results and improve the relevance of the final output.
  • New features. BGE 1.1 adds a hard negative mining script; hard negatives can effectively improve retrieval quality after fine-tuning. The fine-tuning code now supports adding instructions during fine-tuning, and saved models are automatically converted to the sentence-transformers format for easier loading.
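The retrieve-then-rerank pipeline described above can be sketched numerically. The toy vectors below stand in for BGE embeddings (in practice produced by an encoder such as a BGE v1.5 model); stage 1 ranks passages by cosine similarity, and the temperature division mirrors how similarities are sharpened with a 0.02 temperature during contrastive training. This is an illustrative sketch under those assumptions, not the library's API.

```python
import numpy as np

# Toy stand-in embeddings; real ones would come from a BGE encoder.
query = np.array([0.9, 0.1, 0.0])
passages = np.array([
    [0.8, 0.2, 0.1],   # on-topic
    [0.1, 0.9, 0.3],   # off-topic
    [0.7, 0.0, 0.2],   # on-topic
])

def cosine_sim(q, m):
    """Cosine similarity between one query vector and each row of m."""
    q = q / np.linalg.norm(q)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)
    return m @ q

# Stage 1: vector recall — rank all passages by embedding similarity.
scores = cosine_sim(query, passages)
recall_order = np.argsort(-scores)   # best-first indices

# During contrastive training, dividing similarities by a small
# temperature (0.02 in v1.5) before the softmax sharpens the
# distribution, separating related pairs from unrelated ones.
logits = scores / 0.02
probs = np.exp(logits - logits.max())   # numerically stable softmax
probs /= probs.sum()

# Stage 2 (not shown): a cross-encoder such as BGE-reranker would then
# re-score the top-k (query, passage) pairs jointly for finer ranking.
```

Note how the vector model scores query and passage independently (cheap, index-friendly), while the reranker reads both texts together, which is why it is reserved for re-ordering a small recalled candidate set.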

It is worth mentioning that Zhiyuan and Hugging Face recently released a technical report proposing C-Pack, a package of resources to advance general Chinese embedding.

《C-Pack: Packaged Resources To Advance General Chinese Embedding》

Link: https://arxiv.org/pdf/2309.07597.pdf

## Gaining high popularity in the developer community

BGE has attracted the attention of the large-model developer community since its release. Downloads on Hugging Face have reached hundreds of thousands, and it has been integrated into well-known open-source projects such as LangChain, Langchain-Chatchat and llama_index.

Community influencers including the official LangChain account, LangChain co-founder and CEO Harrison Chase, and Deep Trading founder Yam Peleg have taken note of BGE.


Adhering to open source and promoting collaborative innovation, Zhiyuan's large-model technology development system FlagOpen has added a new FlagEmbedding section focusing on embedding technology and models, of which BGE is one of the high-profile open-source projects. FlagOpen is committed to building the AI technology infrastructure of the large-model era and will continue to open more complete full-stack large-model technologies to academia and industry.


Statement: This article is reproduced from 51cto.com. In case of infringement, please contact admin@php.cn for removal.