When it comes to running several tasks at once in Python, the concurrent.futures module is a powerful yet simple tool. In this article, we will explore how to use ThreadPoolExecutor to run tasks in parallel, with practical examples.
In Python, threads are well suited to tasks dominated by I/O, such as network calls or file reads and writes. With ThreadPoolExecutor, you can run many such tasks concurrently without managing threads by hand.
Let's look at a simple example to understand the concept.
```python
from concurrent.futures import ThreadPoolExecutor
import time

# Function simulating a task
def task(n):
    print(f"Task {n} started")
    time.sleep(2)  # Simulates a long-running task
    print(f"Task {n} finished")
    return f"Result of task {n}"

# Using ThreadPoolExecutor
def execute_tasks():
    tasks = [1, 2, 3, 4, 5]  # List of tasks
    # Create a thread pool with 3 simultaneous threads
    with ThreadPoolExecutor(max_workers=3) as executor:
        # Execute tasks in parallel
        results = executor.map(task, tasks)
    return list(results)

if __name__ == "__main__":
    results = execute_tasks()
    print("All results:", results)
```
When you run this code, you will see something like this (the exact interleaving varies from run to run):
```
Task 1 started
Task 2 started
Task 3 started
Task 1 finished
Task 4 started
Task 2 finished
Task 5 started
Task 3 finished
Task 4 finished
Task 5 finished
All results: ['Result of task 1', 'Result of task 2', 'Result of task 3', 'Result of task 4', 'Result of task 5']
```
Tasks 1, 2, and 3 start at the same time because max_workers=3. The remaining tasks (4 and 5) wait until a thread becomes free.
Limit the number of threads: pick a max_workers value suited to your workload; for I/O-bound tasks a few dozen threads is usually plenty, and piling on more only adds scheduling overhead.
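As a rough guide, a minimal sketch of choosing the cap (the formula below mirrors the default ThreadPoolExecutor has used since Python 3.8; the exact value for your workload is an assumption you should tune):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# A reasonable cap for I/O-bound work; this mirrors the executor's own
# default in Python 3.8+: min(32, os.cpu_count() + 4).
max_workers = min(32, (os.cpu_count() or 1) + 4)

with ThreadPoolExecutor(max_workers=max_workers) as executor:
    results = list(executor.map(str.upper, ["a", "b", "c"]))

print(results)
```

If you omit max_workers entirely, recent Python versions apply this same formula for you.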
Handle exceptions: an exception raised inside a task is captured in its Future and only re-raised when you retrieve the result, so wrap result retrieval in try/except.
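One way to do this (a sketch using submit() and as_completed() rather than map(), since map() would stop yielding results at the first failure; risky_task is a made-up example function):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def risky_task(n):
    if n == 3:
        raise ValueError(f"Task {n} failed")
    return f"Result of task {n}"

results, errors = [], []
with ThreadPoolExecutor(max_workers=3) as executor:
    # submit() returns a Future; an exception raised in the task is stored
    # there and re-raised only when .result() is called.
    futures = [executor.submit(risky_task, n) for n in range(1, 6)]
    for future in as_completed(futures):
        try:
            results.append(future.result())
        except ValueError as e:
            errors.append(str(e))

print(sorted(results))
print(errors)
```

This way one failing task does not discard the results of the other four.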
Use ProcessPoolExecutor for CPU-bound tasks: threads share the Global Interpreter Lock (GIL), so heavy computation gains nothing from ThreadPoolExecutor; ProcessPoolExecutor runs tasks in separate processes instead.
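A minimal sketch of the switch (cpu_heavy is an illustrative stand-in for real computation; note that ProcessPoolExecutor requires the `if __name__ == "__main__":` guard so worker processes can import the module safely):

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n):
    # A CPU-bound computation: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Separate processes sidestep the GIL, so CPU-bound work
    # truly runs in parallel across cores.
    with ProcessPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(cpu_heavy, [10_000, 20_000, 30_000]))
    print(results)
```

The API is deliberately identical to ThreadPoolExecutor, so converting existing code is usually a one-line change.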
Here is a real-world example: fetching several URLs in parallel.
```python
import requests
from concurrent.futures import ThreadPoolExecutor

# Function to fetch a URL
def fetch_url(url):
    try:
        response = requests.get(url)
        return f"URL: {url}, Status: {response.status_code}"
    except Exception as e:
        return f"URL: {url}, Error: {e}"

# List of URLs to fetch
urls = [
    "https://example.com",
    "https://httpbin.org/get",
    "https://jsonplaceholder.typicode.com/posts",
    "https://invalid-url.com"
]

def fetch_all_urls(urls):
    with ThreadPoolExecutor(max_workers=4) as executor:
        results = executor.map(fetch_url, urls)
        return list(results)

if __name__ == "__main__":
    results = fetch_all_urls(urls)
    for result in results:
        print(result)
```
ThreadPoolExecutor simplifies thread management in Python and is an ideal choice for speeding up I/O-bound tasks. With just a few lines of code, you can parallelize operations and save valuable time.