


Floating-Point Representation:
FP16 (Half Precision): In FP16, a floating-point number is represented using 16 bits. It consists of 1 sign bit, 5 bits for the exponent, and 10 bits for the fraction (mantissa). This format provides higher precision for representing fractional values within its range.
BF16 (BFloat16): BF16 also uses 16 bits, but with a different distribution. It has 1 sign bit, 8 bits for the exponent, and 7 bits for the mantissa. This format sacrifices some precision in the fractional part to accommodate a wider range of exponents.
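To make the two layouts concrete, here is a minimal sketch in plain Python that splits a raw 16-bit pattern according to each format's field widths (the helper names are just for illustration; the example patterns are the ones from the article's first test case):

def split_fp16(bits16):
    # FP16 layout: 1 sign bit | 5 exponent bits | 10 mantissa bits
    return bits16 >> 15, (bits16 >> 10) & 0x1F, bits16 & 0x3FF

def split_bf16(bits16):
    # BF16 layout: 1 sign bit | 8 exponent bits | 7 mantissa bits
    return bits16 >> 15, (bits16 >> 7) & 0xFF, bits16 & 0x7F

# The same 16 bits decompose very differently under the two layouts.
print(split_fp16(0x068E))  # (0, 1, 654)
print(split_bf16(0x38D2))  # (0, 113, 82)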
Numerical Range:
FP16 has a smaller range (its largest finite value is 65504) but higher precision within that range thanks to its 10-bit mantissa.
BF16 has a much wider range (up to roughly 3.4e38, essentially the same reach as FP32) but lower precision for fractional values, because only 7 bits are left for the mantissa; the sketch below derives both limits from the bit layouts.
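As a quick sketch of where those range limits come from, derived directly from the bit widths (the helper names are illustrative only):

# The all-ones exponent is reserved for inf/NaN, so the largest finite value
# uses exponent field (2^exp_bits - 2) together with an all-ones mantissa.
def max_finite(exp_bits, frac_bits, bias):
    return (2 - 2 ** -frac_bits) * 2.0 ** ((2 ** exp_bits - 2) - bias)

# Smallest positive normal value: exponent field 1, mantissa 0.
def min_normal(bias):
    return 2.0 ** (1 - bias)

print(max_finite(5, 10, bias=15))   # FP16 -> 65504.0
print(max_finite(8, 7, bias=127))   # BF16 -> ~3.39e+38 (about the same reach as FP32)
print(min_normal(bias=15))          # FP16 -> ~6.10e-05 (subnormals extend down to ~5.96e-08)
print(min_normal(bias=127))         # BF16 -> ~1.18e-38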
Examples:
Let's illustrate the differences between FP16 and BF16 with three example cases. TensorFlow is used to run the tests, and the code is shared at the bottom of the article:
Original value: 0.0001 — both formats can represent this value
FP16: 0.00010001659393 (Binary: 0|00001|1010001110, Hex: 068E) — 10 mantissa bits, 5 exponent bits
BF16: 0.00010013580322 (Binary: 0|01110001|1010010, Hex: 38D2) — 7 mantissa bits, 8 exponent bits
The two formats split the bits differently between exponent and mantissa, so they round the same input differently. Here FP16 is the more accurate of the two, landing closer to the original value; the short decoder below reproduces both printed numbers directly from their bit patterns.
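A minimal decoder for normal (non-zero, non-special exponent) values, using the standard formula value = (-1)^sign * 2^(exponent - bias) * (1 + fraction / 2^mantissa_bits), where the bias is 15 for FP16 and 127 for BF16 (the helper name is illustrative):

def decode_normal(bits16, exp_bits, frac_bits, bias):
    # Split the 16-bit pattern into its fields, then apply the normal-number formula.
    sign = bits16 >> (exp_bits + frac_bits)
    exponent = (bits16 >> frac_bits) & ((1 << exp_bits) - 1)
    fraction = bits16 & ((1 << frac_bits) - 1)
    return (-1) ** sign * 2.0 ** (exponent - bias) * (1 + fraction / 2 ** frac_bits)

print(decode_normal(0x068E, exp_bits=5, frac_bits=10, bias=15))   # 0.00010001659393... (FP16)
print(decode_normal(0x38D2, exp_bits=8, frac_bits=7, bias=127))   # 0.00010013580322... (BF16)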
Original value: 1e-08 (0.00000001)
FP16: 0.00000000000000 (Binary: 0|00000|0000000000, Hex: 0000)
BF16: 0.00000001001172 (Binary: 0|01100100|0101100, Hex: 322C)
This is an interesting case. FP16 underflows: 1e-08 is smaller than its tiniest representable positive value, so the result is flushed to 0. BF16, thanks to its wider exponent range, represents it as an ordinary normal number; the threshold check below makes this concrete.
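The underflow can be confirmed from the format limits alone (a small sketch; the variable names are just for illustration):

# FP16: smallest positive subnormal is 2^-24 ~= 5.96e-08, so 1e-08 rounds to zero.
fp16_min_subnormal = 2.0 ** -24
print(1e-8 < fp16_min_subnormal / 2)   # True -> nearest FP16 value is 0.0

# BF16: normal numbers reach down to 2^-126 ~= 1.18e-38, so 1e-08 is comfortably in range.
bf16_min_normal = 2.0 ** -126
print(1e-8 >= bf16_min_normal)         # True -> representable (as ~1.0012e-08)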
Original value: 100000.00001
FP16: inf (Binary: 0|11111|0000000000, Hex: 7C00)
BF16: 99840.00000000000000 (Binary: 0|10001111|1000011, Hex: 47C3)
In this case FP16 overflows: 100000 exceeds its largest finite value of 65504, so the exponent field saturates and the result becomes infinity. BF16, with its 8-bit exponent, has no trouble and simply rounds the input to the nearest representable value, 99840; the sketch below spells out both thresholds.
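Again, the thresholds explain the behaviour (sketch only; the names are illustrative):

# FP16: largest finite value is (2 - 2^-10) * 2^15 = 65504, so 100000.00001 overflows to inf.
fp16_max = (2 - 2 ** -10) * 2.0 ** 15
print(100000.00001 > fp16_max)        # True -> FP16 gives inf

# BF16: the input is rounded to the nearest representable value,
# 1.1000011 (binary) x 2^16 = 99840, matching the output above.
print((1 + 0x43 / 128) * 2.0 ** 16)   # 99840.0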
Use Cases:
FP16 is commonly used in deep learning training and inference, especially for tasks that require high precision in representing small fractional values within a limited range.
BF16 is becoming popular in hardware architectures designed for machine learning tasks that benefit from a wider range of representable values, even at the cost of some precision in the fractional part. It is particularly useful when dealing with large gradients, or when numerical stability across a wide range matters more than the precision of small values (see the mixed-precision sketch after this list).
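In practice, frameworks usually expose this choice as a mixed-precision setting rather than a full-model cast. A minimal Keras sketch, assuming TensorFlow's standard mixed-precision API (the tiny model is just a placeholder):

import tensorflow as tf

# Compute in BF16 (or FP16 with 'mixed_float16') while keeping variables in FP32.
tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    # Keep the final layer's outputs in FP32 for numerical stability of the loss.
    tf.keras.layers.Dense(10, dtype='float32'),
])

# Note: with 'mixed_float16', loss scaling is generally needed to keep small
# gradients from underflowing; BF16's wider exponent range usually avoids this.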
In summary
FP16 offers higher precision for fractional values within a smaller range, making it suitable for tasks that require accurate representation of small numbers. BF16, on the other hand, provides a wider range at the cost of some precision, making it advantageous for tasks that involve a broader spectrum of values or where numerical stability across a wide range is crucial. The choice between FP16 and BF16 depends on the specific requirements of the machine learning task at hand.
Final Conclusion
For all of the reasons above, FP16 and BF16 require slightly different learning rates when training Stable Diffusion XL (SDXL), and in my experience BF16 works better.
The Code Used to Generate the Examples Above
import struct
import sys

import tensorflow as tf


def float_to_binary(f):
    # Full 32-bit pattern of the FP32 source value (handy for comparison, not used below).
    return ''.join(f'{b:08b}' for b in struct.pack('>f', f))


def display_fp16(value):
    # Cast the FP32 input to FP16 (round to nearest), then widen back to FP32 so it prints in full.
    fp16 = tf.cast(tf.constant(value, dtype=tf.float32), tf.float16)
    fp32 = tf.cast(fp16, tf.float32)
    # Reinterpret the two raw bytes as an unsigned 16-bit integer (native byte order).
    bits = int.from_bytes(fp16.numpy().tobytes(), sys.byteorder)
    binary = format(bits, '016b')
    sign, exponent, fraction = binary[0], binary[1:6], binary[6:]  # 1 | 5 | 10 split
    return (f"FP16: {fp32.numpy():14.14f} "
            f"(Binary: {sign}|{exponent}|{fraction}, Hex: {bits:04X})")


def display_bf16(value):
    # Same procedure for BF16: cast, widen, and inspect the raw bits.
    bf16 = tf.cast(tf.constant(value, dtype=tf.float32), tf.bfloat16)
    fp32 = tf.cast(bf16, tf.float32)
    bits = int.from_bytes(bf16.numpy().tobytes(), sys.byteorder)
    binary = format(bits, '016b')
    sign, exponent, fraction = binary[0], binary[1:9], binary[9:]  # 1 | 8 | 7 split
    return (f"BF16: {fp32.numpy():14.14f} "
            f"(Binary: {sign}|{exponent}|{fraction}, Hex: {bits:04X})")


values = [0.0001, 0.00000001, 100000.00001]
for value in values:
    print(f"\nOriginal value: {value}")
    print(display_fp16(value))
    print(display_bf16(value))