Create efficient deep learning data pipelines with Ray

The GPUs required for deep learning model training are powerful but expensive. To fully utilize a GPU, developers need an efficient data pipeline that can deliver the next batch of data the moment the GPU is ready for the next training step. Using Ray can significantly improve the efficiency of this data pipeline.

1. The structure of the training data pipeline

First, let's look at the pseudocode of a model training loop:

for step in range(num_steps):
    sample, target = next(dataset)  # Step 1
    train_step(sample, target)      # Step 2

In step 1, we fetch the samples and labels of the next mini-batch. In step 2, they are passed to the train_step function, which copies them to the GPU, performs a forward and backward pass to compute the loss and gradients, and applies the optimizer's weight update.
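The article does not spell out train_step. As a rough sketch in TensorFlow, assuming a Keras model, an optimizer, and a loss_fn that are not part of the original text, it could look like this:

import tensorflow as tf

@tf.function
def train_step(sample, target):
    # Forward pass and loss computation (placed on the GPU if one is available)
    with tf.GradientTape() as tape:
        prediction = model(sample, training=True)
        loss = loss_fn(target, prediction)
    # Backward pass: compute gradients and apply the weight update
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss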

Let's look more closely at step 1. When the dataset is too large to fit in memory, step 1 fetches the next mini-batch from disk or over the network. In addition, step 1 involves a certain amount of preprocessing: input data must be converted into numeric tensors, or collections of tensors, before being fed to the model. In some cases, other transformations are also applied to the tensors before they are passed to the model, such as normalization, rotation, random shuffling, and so on.
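As a small, hedged illustration of this kind of step-1 preprocessing (the TFRecord format, feature names, and shapes here are assumptions for the example, not taken from the article), a tf.data parsing function might look like this:

import tensorflow as tf

def parse_and_preprocess(serialized_example):
    # Decode a serialized record into numeric tensors
    features = tf.io.parse_single_example(
        serialized_example,
        {"image": tf.io.FixedLenFeature([], tf.string),
         "label": tf.io.FixedLenFeature([], tf.int64)})
    image = tf.io.decode_jpeg(features["image"], channels=3)
    # Normalize pixel values to [0, 1]
    image = tf.image.convert_image_dtype(image, tf.float32)
    # Example augmentation: random horizontal flip
    image = tf.image.random_flip_left_right(image)
    return image, features["label"]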

If the workflow is executed strictly sequentially, that is, step 1 is performed first and then step 2, the model always has to wait for the I/O and preprocessing of the next batch of data. The GPU is not efficiently utilized; it sits idle while the next mini-batch is being loaded.

To solve this problem, the data pipeline can be viewed as a producer-consumer problem. The data pipeline produces mini-batches of data and writes them to a bounded buffer. The model/GPU consumes mini-batches from the buffer, performs the forward/backward passes, and updates the model weights. If the data pipeline can produce mini-batches as quickly as the model/GPU consumes them, the training process will be very efficient.
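As a minimal sketch of this producer-consumer structure (thread-based here purely for illustration; num_steps, dataset, and train_step are the names from the pseudocode above), the bounded buffer could look like this:

import queue
import threading

buffer = queue.Queue(maxsize=8)  # bounded buffer holding prefetched mini-batches

def producer():
    for _ in range(num_steps):
        buffer.put(next(dataset))  # blocks when the buffer is full

def consumer():
    for _ in range(num_steps):
        sample, target = buffer.get()  # blocks when the buffer is empty
        train_step(sample, target)

threading.Thread(target=producer, daemon=True).start()
consumer()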


2. The TensorFlow tf.data API

The TensorFlow tf.data API provides a rich set of functions for building data pipelines efficiently, and it uses background threads to prefetch mini-batches so that the model does not have to wait. Prefetching alone is not enough, however: if producing mini-batches is slower than the GPU can consume them, you need parallelism to speed up reading and transforming the data. For this, TensorFlow provides interleave to read data in parallel with multiple threads, and parallel map to transform mini-batches with multiple threads, as in the sketch below.
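A sketch of such a pipeline, assuming a set of TFRecord files and a parsing function like the parse_and_preprocess example above (both are illustrative assumptions), could look like this:

import tensorflow as tf

files = tf.data.Dataset.list_files("data/*.tfrecord")  # illustrative path
dataset = (
    files
    # Read several files in parallel with background threads
    .interleave(tf.data.TFRecordDataset,
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
    # Transform samples in parallel with background threads
    .map(parse_and_preprocess,
         num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .batch(32)
    # Prefetch the next batch while the GPU works on the current one
    .prefetch(tf.data.experimental.AUTOTUNE)
)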

Because these APIs are based on multi-threading, they may be restricted by Python's Global Interpreter Lock (GIL), which allows only one thread to execute Python bytecode at a time. If your pipeline uses pure TensorFlow code, you generally do not suffer from this limitation, because the TensorFlow core execution engine works outside the scope of the GIL. However, if a third-party library you depend on does not release the GIL, or performs a large amount of computation in Python, then relying on multi-threading to parallelize the pipeline is not feasible.

3. Parallelizing the data pipeline with multiple processes

Consider the following generator function, which simulates fetching data and performing some computation in order to produce mini-batches of samples and labels.

import time
import numpy as np

def data_generator():
    for _ in range(10):
        # Simulate fetching data from disk/network
        time.sleep(0.5)
        # Simulate computation
        for _ in range(10000):
            pass
        yield (np.random.random((4, 1000000, 3)).astype(np.float32),
               np.random.random((4, 1)).astype(np.float32))

Next, use the generator in a dummy training pipeline and measure the average time it takes to generate mini-batches of data.

import tensorflow as tf

generator_dataset = tf.data.Dataset.from_generator(
    data_generator,
    output_types=(tf.float32, tf.float32),
    output_shapes=((4, 1000000, 3), (4, 1))
).prefetch(tf.data.experimental.AUTOTUNE)

st = time.perf_counter()
times = []
for _ in generator_dataset:
    en = time.perf_counter()
    times.append(en - st)
    # Simulate a training step
    time.sleep(0.1)
    st = time.perf_counter()
print(np.mean(times))

The observed average wait was about 0.57 seconds (measured on a Mac laptop with an Intel Core i7 processor). If this were a real training loop, GPU utilization would be quite low: the GPU would spend only 0.1 seconds on computation and then sit idle for 0.57 seconds waiting for the next batch of data.

To speed up data loading, you can use a multi-process generator.

from multiprocessing import Queue, cpu_count, Process

def mp_data_generator():
    def producer(q):
        for _ in range(10):
            # Simulate fetching data from disk/network
            time.sleep(0.5)
            # Simulate computation
            for _ in range(10000000):
                pass
            q.put((np.random.random((4, 1000000, 3)).astype(np.float32),
                   np.random.random((4, 1)).astype(np.float32)))
        q.put("DONE")

    queue = Queue(cpu_count() * 2)
    num_parallel_processes = cpu_count()
    producers = []
    for _ in range(num_parallel_processes):
        p = Process(target=producer, args=(queue,))
        p.start()
        producers.append(p)

    # Consume mini-batches until every producer has signaled DONE
    done_counts = 0
    while done_counts < num_parallel_processes:
        item = queue.get()
        if item == "DONE":
            done_counts += 1
        else:
            yield item
    for p in producers:
        p.join()

Now, if we measure the time spent waiting for the next mini-batch of data, we get an average of about 0.08 seconds, almost 7 times faster. Ideally, though, we would like this time to be close to 0.

Profiling shows that a lot of time is spent serializing and deserializing the data. In the multi-process generator, the producer processes return large NumPy arrays, which have to be pickled before being sent and then deserialized in the main process. So how can large arrays be passed between processes more efficiently?

4. Use Ray to parallelize the data pipeline

This is where Ray comes into play. Ray is a framework for running distributed computations in Python. It comes with a shared-memory object store that transfers objects efficiently between processes. In particular, NumPy arrays in the object store can be shared between workers on the same node without any serialization or deserialization. Ray also makes it easy to scale data loading across multiple machines, and it uses Apache Arrow to serialize and deserialize large arrays efficiently.

Ray comes with a utility function, from_iterators, that creates parallel iterators, and developers can use it to wrap the data_generator generator function.

from multiprocessing import cpu_count

import ray

def ray_generator():
    num_parallel_processes = cpu_count()
    return ray.util.iter.from_iterators(
        [data_generator] * num_parallel_processes
    ).gather_async()

Using ray_generator, the measured time spent waiting for the next mini-batch of data drops to about 0.02 seconds, 4 times faster than the multi-process approach; a sketch of the measurement loop follows.
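As a hedged sketch of how this wait time could be measured, mirroring the earlier timing loop (it assumes ray.init() has been called and that the ray.util.iter API, present in older Ray 1.x releases, is available):

import time
import numpy as np
import ray

ray.init()
st = time.perf_counter()
times = []
for sample, target in ray_generator():
    en = time.perf_counter()
    times.append(en - st)
    # Simulate a training step
    time.sleep(0.1)
    st = time.perf_counter()
print(np.mean(times))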

