# Boost Your Python Tasks with `ThreadPoolExecutor`
When it comes to running multiple tasks simultaneously in Python, the concurrent.futures module is a powerful and straightforward tool. In this article, we'll explore how to use ThreadPoolExecutor to execute tasks in parallel, along with practical examples.
In Python, threads are a great fit for tasks dominated by I/O, such as network calls or file read/write operations. With ThreadPoolExecutor, you can:

- run many I/O-bound tasks concurrently instead of one after another;
- cap the number of worker threads with max_workers;
- submit tasks and collect their results with a simple API;
- let the pool create, reuse, and shut down threads for you (the with block handles cleanup).
Let's look at a simple example to understand the concept.
```python
from concurrent.futures import ThreadPoolExecutor
import time

# Function simulating a task
def task(n):
    print(f"Task {n} started")
    time.sleep(2)  # Simulates a long-running task
    print(f"Task {n} finished")
    return f"Result of task {n}"

# Using ThreadPoolExecutor
def execute_tasks():
    tasks = [1, 2, 3, 4, 5]  # List of tasks
    results = []

    # Create a thread pool with 3 simultaneous threads
    with ThreadPoolExecutor(max_workers=3) as executor:
        # Execute tasks in parallel
        results = executor.map(task, tasks)
        return list(results)

if __name__ == "__main__":
    results = execute_tasks()
    print("All results:", results)
```
When you run this code, you'll see output like this (the exact interleaving varies from run to run):
```
Task 1 started
Task 2 started
Task 3 started
Task 1 finished
Task 4 started
Task 2 finished
Task 5 started
Task 3 finished
Task 4 finished
Task 5 finished
All results: ['Result of task 1', 'Result of task 2', 'Result of task 3', 'Result of task 4', 'Result of task 5']
```
Tasks 1, 2, and 3 start immediately because max_workers=3; tasks 4 and 5 wait until a worker thread becomes free.
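To make that concrete: with five 2-second tasks and three workers, the pool runs the work in roughly two waves, so the whole batch finishes in about 4 seconds instead of the roughly 10 seconds a sequential loop would take. A minimal timing check (assuming the task and execute_tasks definitions from the example above) looks like this:

```python
import time

start = time.perf_counter()
execute_tasks()  # 5 tasks x 2 s each, limited to 3 workers
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.1f} s")  # roughly 4 s, versus about 10 s sequentially
```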
A few tips for using ThreadPoolExecutor effectively:

- Limit the number of threads: pick max_workers to match the workload. More threads is not always faster, and too many can overwhelm your machine or the service you are calling.
- Handle exceptions: an exception raised inside a task does not crash the pool; it is re-raised when you read the result (while iterating over executor.map's results or calling future.result()), so wrap that step in try/except, as shown in the sketch after this list.
- Use ProcessPoolExecutor for CPU-bound tasks: CPython's GIL prevents threads from running Python bytecode in parallel, so heavy computation gains little from ThreadPoolExecutor; ProcessPoolExecutor offers the same interface but runs tasks in separate processes.
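Here's a minimal sketch of the exception-handling tip (risky_task is a made-up function used only for illustration). Each task is submitted individually, and the try/except wraps future.result(), which is where a task's exception resurfaces:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def risky_task(n):
    if n == 3:
        raise ValueError(f"Task {n} failed")  # simulate one task going wrong
    return f"Result of task {n}"

with ThreadPoolExecutor(max_workers=3) as executor:
    # Map each future back to its task number so errors can be reported clearly
    futures = {executor.submit(risky_task, n): n for n in range(1, 6)}
    for future in as_completed(futures):
        n = futures[future]
        try:
            print(future.result())  # re-raises any exception from the task
        except Exception as e:
            print(f"Task {n} raised: {e}")
```

For CPU-bound work, ProcessPoolExecutor can usually be dropped in with the same submit/map interface; it runs tasks in separate processes and therefore isn't limited by the GIL.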
Here's a real-world example: fetching multiple URLs in parallel.
```python
import requests
from concurrent.futures import ThreadPoolExecutor

# Function to fetch a URL
def fetch_url(url):
    try:
        response = requests.get(url, timeout=10)  # timeout so a hung request doesn't block a worker
        return f"URL: {url}, Status: {response.status_code}"
    except Exception as e:
        return f"URL: {url}, Error: {e}"

# List of URLs to fetch
urls = [
    "https://example.com",
    "https://httpbin.org/get",
    "https://jsonplaceholder.typicode.com/posts",
    "https://invalid-url.com"
]

def fetch_all_urls(urls):
    with ThreadPoolExecutor(max_workers=4) as executor:
        results = executor.map(fetch_url, urls)
        return list(results)

if __name__ == "__main__":
    results = fetch_all_urls(urls)
    for result in results:
        print(result)
```
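Note that fetch_all_urls collects every result before anything is printed, and executor.map preserves the input order, so one slow URL delays the output of everything behind it. If you'd rather print each result as soon as its request finishes, the same submit/as_completed pattern from the tips above works here too (fetch_all_urls_as_completed is just an illustrative name, reusing the fetch_url function and urls list defined above):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_all_urls_as_completed(urls):
    with ThreadPoolExecutor(max_workers=4) as executor:
        # Keep track of which future belongs to which URL
        future_to_url = {executor.submit(fetch_url, url): url for url in urls}
        for future in as_completed(future_to_url):
            print(future.result())  # fetch_url already turns errors into strings

if __name__ == "__main__":
    fetch_all_urls_as_completed(urls)
```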
ThreadPoolExecutor simplifies thread management in Python and is ideal for speeding up I/O-bound tasks. With just a few lines of code, you can parallelize operations and save valuable time.