
A Guide to 400 Categorized Large Language Model Datasets

Christopher Nolan
2025-03-19

This groundbreaking survey, "Datasets for Large Language Models: A Comprehensive Survey," released in February 2024, unveils a treasure trove of over 400 meticulously categorized datasets for Large Language Model (LLM) development. Compiled by Yang Liu, Jiahuan Cao, Chongyu Liu, Kai Ding, and Lianwen Jin, this resource is a goldmine for researchers and developers. It's not just a static collection; it's regularly updated, ensuring its continued relevance.

The paper provides a comprehensive overview of LLM datasets, essential for understanding the foundation of these powerful models. The datasets are categorized across seven key dimensions: Pre-training Corpora, Instruction Fine-tuning Datasets, Preference Datasets, Evaluation Datasets, Traditional NLP Datasets, Multi-modal Large Language Models (MLLMs) Datasets, and Retrieval Augmented Generation (RAG) Datasets. The sheer scale is impressive, with over 774.5 TB of data for pre-training alone and 700 million instances across other categories, spanning 32 domains and 8 languages.


Key Dataset Categories and Examples:

The survey details various dataset types, including:

  • Pre-training Corpora: Massive text collections for initial LLM training. Examples include MADLAD-400 (2.8T tokens), FineWeb (15T tokens), and BookCorpusOpen (17,868 books). These are further broken down into general corpora (webpages, books, language texts) and domain-specific corpora (finance, medical, mathematics).

  • Instruction Fine-tuning Datasets: Pairs of instructions and corresponding answers that refine model behavior. Examples include databricks-dolly-15k and Alpaca_data. These are also split into general and domain-specific (medical, code) datasets; a short loading sketch follows this list.

  • Preference Datasets: Used to evaluate and improve model outputs by comparing multiple responses. Examples include Chatbot_arena_conversations and hh-rlhf (see the structure sketch after this list).

  • Evaluation Datasets: Designed specifically to benchmark LLM performance on various tasks. Examples include AlpacaEval and BayLing-80.

  • Traditional NLP Datasets: Datasets used for pre-LLM NLP tasks. Examples include BoolQ, CosmosQA, and PubMedQA.

  • Multi-modal Large Language Models (MLLMs) Datasets: Datasets combining text with other modalities (images, videos). Examples include mOSCAR and MMRS-1M.

  • Retrieval Augmented Generation (RAG) Datasets: Datasets that equip LLMs with external data retrieval capabilities. Examples include CRUD-RAG and WikiEval; a minimal RAG sketch closes the examples below.
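To make the instruction fine-tuning category concrete, here is a minimal sketch that loads databricks-dolly-15k with the Hugging Face datasets library and inspects one instruction/response pair. The Hub id and field names below match the public release, but treat them as assumptions and verify against the dataset card before relying on them.

```python
# Minimal sketch: inspecting an instruction fine-tuning dataset.
# Assumes the `datasets` library is installed (pip install datasets) and that
# the dataset is published under the Hub id below -- verify before use.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Each record pairs an instruction with a response (plus optional context).
example = dolly[0]
print("Instruction:", example["instruction"])
print("Response:", example["response"])
```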
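Preference datasets have a characteristic paired structure: each record holds a human-preferred response and a rejected one, so a reward model can learn the ranking. The sketch below assumes the public Anthropic/hh-rlhf release and its chosen/rejected text fields; check the dataset card for the exact schema.

```python
# Minimal sketch: reading one preference pair from hh-rlhf.
# The Hub id "Anthropic/hh-rlhf" and the "chosen"/"rejected" fields are
# assumptions based on the public release, not specified by the survey.
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf", split="train")

pair = hh[0]
print("Preferred:", pair["chosen"][:200])
print("Rejected: ", pair["rejected"][:200])
```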
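Finally, a minimal sketch of the retrieval-augmented generation pattern that datasets such as CRUD-RAG and WikiEval are built to evaluate: retrieve supporting passages, then prepend them to the prompt before generation. The toy corpus and word-overlap retriever here are illustrative stand-ins, not anything from the survey; a real system would use a vector index and an actual LLM call.

```python
# Minimal sketch of the RAG pattern: retrieve evidence, then build a
# grounded prompt. Toy corpus and scorer are illustrative assumptions.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved evidence so the LLM can ground its answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "MADLAD-400 is a multilingual pre-training corpus.",
    "hh-rlhf pairs chosen and rejected responses for preference learning.",
    "WikiEval is used to evaluate retrieval augmented generation.",
]
query = "What is WikiEval used for?"
print(build_prompt(query, retrieve(query, corpus)))
```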

[Figure: dataset overview from the paper. Source: Datasets for Large Language Models: A Comprehensive Survey]

The survey's architecture is illustrated below:

[Figure: overall architecture of the survey]

Conclusion and Further Exploration:

This survey serves as a vital resource, guiding researchers and developers in the LLM field. The provided repository (Awesome-LLMs-Datasets) offers a complete roadmap for accessing and utilizing these invaluable datasets. The detailed categorization and comprehensive statistics make it an essential tool for anyone working with or researching LLMs. The paper also addresses key challenges and suggests future research directions.
