Imitating Jeff Dean's divine summary, a former Google engineer shared 'LLM development secrets': numbers that every developer should know!

Recently, a netizen compiled a list of "Numbers that every LLM developer should know" and explained why these numbers are important and how we should use them.

When he was at Google, there was a document compiled by legendary engineer Jeff Dean called "Numbers Every Engineer Should Know."

Jeff Dean: "Numbers Every Engineer Should Know"

For LLM (Large Language Model) developers, it is also very useful to have a similar set of numbers for rough estimation.

Prompt

40-90%: Cost saved by adding "Be concise" to your prompt

Remember that you pay by the token for an LLM's output.

This means that you can save a lot of money by letting your model be concise.

The same idea applies more broadly.

For example, if you originally wanted GPT-4 to generate 10 alternatives, you could ask it for 5 first and save half the money.

1.3: The average number of tokens per word

LLMs operate on tokens.

A token is a word or a sub-part of a word; for example, "eating" may be split into the two tokens "eat" and "ing".

Generally speaking, 750 English words will generate about 1000 tokens.

For languages other than English, the number of tokens per word is higher, depending on how well the language is represented in the LLM's embedding corpus.
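The ratio above can be sketched as a quick estimator. This is only the rule of thumb from this article, not a real tokenizer, and the `estimate_tokens` helper is hypothetical:

```python
# Rough rule of thumb from above: ~750 English words ≈ 1,000 tokens,
# i.e. ~1.33 tokens per word. An estimate, not a real tokenizer.
TOKENS_PER_WORD = 1000 / 750

def estimate_tokens(text: str) -> int:
    """Estimate the token count of English text from its word count."""
    return round(len(text.split()) * TOKENS_PER_WORD)

print(estimate_tokens("word " * 750))  # → 1000
```

For exact counts, use the model's actual tokenizer (for OpenAI models, the tiktoken library).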

Price

Given how expensive LLMs can be to use, price-related numbers are particularly important.

~50: Cost ratio of GPT-4 and GPT-3.5 Turbo

GPT-3.5-Turbo is about 50 times cheaper than GPT-4. I say "approximately" because GPT-4 charges differently for prompts and generation.

So in actual application, it is best to confirm whether GPT-3.5-Turbo is enough to meet your needs.

For example, for tasks like summarizing, GPT-3.5-Turbo is more than sufficient.
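To see why the ratio is only approximate, here is a sketch comparing per-query costs. The per-1K-token prices below are illustrative assumptions from the period these numbers were compiled; always check the current pricing page:

```python
# Why "approximately": GPT-4 priced prompt and completion tokens differently,
# so the ratio depends on your prompt/completion mix. Prices are illustrative
# assumptions ($ per 1K tokens), not current rates.
PRICES = {                       # (prompt, completion)
    "gpt-4-32k": (0.06, 0.12),
    "gpt-3.5-turbo": (0.002, 0.002),
}

def query_cost(model, prompt_toks, completion_toks):
    p, c = PRICES[model]
    return prompt_toks / 1000 * p + completion_toks / 1000 * c

# Equal prompt/completion mix of 1,000 tokens total:
ratio = query_cost("gpt-4-32k", 500, 500) / query_cost("gpt-3.5-turbo", 500, 500)
print(f"~{ratio:.0f}x")  # → ~45x with these illustrative prices
```

A prompt-heavy workload would land at a lower ratio and a completion-heavy one higher, which is why the headline number is "approximately" 50.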

5: Cost ratio of generating text with GPT-3.5-Turbo vs. looking it up with OpenAI embeddings

This means that looking something up in a vector store is much cheaper than generating it with an LLM.

Specifically, searching in the neural information retrieval system costs about 5 times less than asking GPT-3.5-Turbo. Compared with GPT-4, the cost gap is as high as 250 times!

10: Cost ratio of OpenAI embeddings vs. self-hosted embeddings

Note: this number is very sensitive to load and embedding batch size, so treat it as an approximation.

On a g4dn.4xlarge (on-demand price: $1.20/hour), you can run SentenceTransformers from Hugging Face (comparable to OpenAI's embeddings) at roughly 9,000 tokens per second.

Some basic arithmetic at this speed and node type shows that self-hosted embeddings can be about 10 times cheaper.
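That basic arithmetic looks like this. The managed-embedding price below ($0.0004 per 1K tokens) is an assumption matching the era of these numbers; verify against the current pricing page:

```python
# Back-of-envelope check of the ~10x claim.
HOURLY_RATE = 1.20        # $/hour, g4dn.4xlarge on-demand (from the article)
TOKENS_PER_SEC = 9000     # SentenceTransformers throughput (from the article)
MANAGED_PER_1K = 0.0004   # assumed $/1K tokens for a hosted embedding API

self_hosted_per_1k = HOURLY_RATE / (TOKENS_PER_SEC * 3600) * 1000
ratio = MANAGED_PER_1K / self_hosted_per_1k
print(f"self-hosted: ${self_hosted_per_1k:.6f}/1K tokens, ~{ratio:.0f}x cheaper")
```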

6: Cost ratio of queries to an OpenAI fine-tuned model vs. the base model

On OpenAI, queries to a fine-tuned model cost 6 times as much as queries to the base model.

This also means that it is more cost-effective to adjust the base model's prompts than to fine-tune a custom model.

1: Cost ratio of self-hosting base model vs. fine-tuned model query

If you host the model yourself, a fine-tuned model costs almost the same to serve as the base model: both have the same number of parameters.

Training and fine-tuning

~$1 million: the cost of training a 13 billion parameter model on 1.4 trillion tokens

Paper address: https://arxiv.org/pdf/2302.13971.pdf

The LLaMa paper mentions that training the model took 21 days on 2048 A100 80GB GPUs.

Assuming we trained our own model on the Red Pajama training set, and assuming everything went smoothly (no crashes, success on the first attempt), we would arrive at the numbers above.
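A back-of-envelope version of that calculation, assuming an effective rate of about $1 per A100-hour (an assumption; on-demand rates are several times higher, committed-use rates lower):

```python
# The ~$1M estimate: 2048 A100s for 21 days, at an assumed effective
# rate of ~$1 per GPU-hour.
GPUS, DAYS = 2048, 21
RATE_PER_GPU_HOUR = 1.00  # assumed $/GPU-hour

gpu_hours = GPUS * DAYS * 24
cost = gpu_hours * RATE_PER_GPU_HOUR
print(f"{gpu_hours:,} GPU-hours -> ~${cost / 1e6:.1f}M")  # → 1,032,192 GPU-hours -> ~$1.0M
```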

On top of that, the process requires coordinating 2,048 GPUs.

Most companies are not equipped to do this.

However, the most critical message is: it is possible to train our own LLM, but the process is not cheap.

And each training run takes several days.

In comparison, using a pre-trained model will be much cheaper.

This number is a bit of a generalization, but the cost of fine-tuning is negligible.

For example, you can fine-tune a 6B parameter model for about $7.

Even at OpenAI's rate for its most expensive fine-tunable model, Davinci, the cost is only 3 cents per 1,000 tokens.

This means that fine-tuning on the entire works of Shakespeare (about 1 million words) would cost only forty or fifty dollars.
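The "forty or fifty dollars" figure follows from the earlier tokens-per-word rule and the Davinci rate:

```python
# ~1M words at ~1.33 tokens/word, fine-tuned at $0.03 per 1K tokens.
WORDS = 1_000_000
TOKENS_PER_WORD = 1000 / 750
RATE_PER_1K = 0.03  # $ per 1K tokens, Davinci fine-tuning rate from above

tokens = WORDS * TOKENS_PER_WORD
cost = tokens / 1000 * RATE_PER_1K
print(f"~${cost:.0f}")  # → ~$40
```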

However, fine-tuning is one thing, training from scratch is another...

GPU memory

If you self-host a model, understanding GPU memory is essential, because LLMs push GPU memory to its limits.

The statistics below are specifically for inference; training or fine-tuning requires considerably more memory.

V100: 16GB, A10G: 24GB, A100: 40/80GB: GPU memory capacity

Knowing how much memory different GPU types have is important, because it caps the number of parameters your LLM can have.

Generally speaking, we like the A10G: it costs $1.50 to $2 per hour on-demand on AWS and has 24 GB of GPU memory, whereas an A100 runs approximately $5 per hour.

2x the parameter count: typical GPU memory requirement for serving an LLM

For example, a 7-billion-parameter model requires approximately 14 GB of GPU memory.

This is because in most cases each parameter is stored as a 16-bit float (2 bytes).

Usually you don't need more than 16 bits of precision, but quality starts to degrade once you drop to 8 bits (though in some cases that is acceptable).

Some projects improve on this: llama.cpp, for example, runs a 13-billion-parameter model on a 6 GB GPU by quantizing to 4 bits (8 bits also works), but this is not typical.
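The "2x parameter count" rule, generalized to lower precisions, can be sketched as a tiny helper (the `inference_memory_gb` name is hypothetical, and this counts only the weights, not activations or output buffers):

```python
# Memory for weights ≈ parameters × bytes per parameter.
def inference_memory_gb(n_params, bits=16):
    """Approximate GPU memory (GB) needed just to hold a model's weights."""
    return n_params * (bits / 8) / 1e9

print(inference_memory_gb(7e9))           # 7B at 16-bit → 14.0 GB
print(inference_memory_gb(13e9, bits=4))  # 13B at 4-bit → 6.5 GB
```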

~1GB: Typical GPU memory requirements for embedding models

Whenever you embed sentences (for clustering, semantic search, or classification tasks), you need an embedding model such as a sentence transformer. OpenAI also offers its own commercial embedding model.

Usually you don't have to worry about how much GPU memory an embedding model takes: it is quite small, and you can even run it on the same GPU as your LLM.

>10x: Improve throughput by batching LLM requests

Running LLM queries on a GPU has very high latency: a single query may take 5 seconds, for a throughput of 0.2 queries per second.

Interestingly, if you run two queries together, the latency may only be 5.2 seconds.

This means that if you batch 25 queries together, the latency is about 10 seconds, while throughput rises to 2.5 queries per second.
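The figures above imply roughly a fixed ~5 s base latency plus ~0.2 s per additional query in the batch (that is how two queries come out to 5.2 s). Those constants are illustrative, but they show why throughput scales almost linearly with batch size:

```python
# Latency grows slowly with batch size, so throughput scales almost linearly.
BASE_LATENCY_S = 5.0  # single-query latency (from the article)
PER_QUERY_S = 0.2     # assumed increment per extra query in the batch

def batch_stats(batch_size):
    latency = BASE_LATENCY_S + (batch_size - 1) * PER_QUERY_S
    return latency, batch_size / latency

for n in (1, 2, 25):
    latency, throughput = batch_stats(n)
    print(f"batch={n:2d}: {latency:.1f}s latency, {throughput:.2f} queries/s")
```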

However, please read on.

~1 MB: GPU memory required for the 13 billion parameter model to output 1 token

The amount of GPU memory you need is directly proportional to the maximum number of tokens you want to generate.

For example, generating output of up to 512 tokens (approximately 380 words) requires 512MB of video memory.

You might say, this is no big deal - I have 24GB of video memory, what is 512MB? However, if you want to run larger batches, this number starts to add up.

For example, with a batch size of 16, the memory needed jumps to 8 GB.
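The arithmetic above can be sketched as a helper (hypothetical name; the ~1 MB/token figure is specific to a ~13B-parameter model):

```python
# Output-token memory: ~1 MB per generated token for a 13B model,
# scaled by maximum output length and batch size.
MB_PER_TOKEN = 1  # 13B-parameter model, per the figure above

def output_memory_gb(max_tokens, batch_size=1):
    return max_tokens * MB_PER_TOKEN * batch_size / 1024

print(output_memory_gb(512))                 # → 0.5 (GB, one 512-token output)
print(output_memory_gb(512, batch_size=16))  # → 8.0 (GB, a batch of 16)
```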

Statement: This article is reproduced from 51CTO.COM.