Computing MD5 Hashes for Large Files in Python
Python's hashlib module provides a convenient interface for calculating cryptographic hashes. However, for files too large to fit in memory, reading the whole file and passing it to hashlib in a single call is problematic: the entire contents must be loaded into RAM first.
Solution: Progressive Hashing
To address this issue, we employ progressive hashing by reading the file in manageable chunks. This approach ensures that the entire file content is hashed without consuming excessive memory. Here's a sample Python function that implements this technique:
<code class="python">import hashlib def md5_for_file(f): block_size = 2**20 md5 = hashlib.md5() while True: data = f.read(block_size) if not data: break md5.update(data) return md5.digest()</code>
To calculate the MD5 hash of a large file, you can invoke the function as follows:
<code class="python">with open("filename", "rb") as f: md5 = md5_for_file(f)</code>
Note on File Mode
Ensure that you open the file in binary mode with "rb". In text mode ("r"), Python 3 returns str objects rather than bytes, may translate newlines, and may fail to decode binary data, so the hash would be wrong or the read would raise an error.
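A minimal sketch of the failure mode, reusing the md5_for_file function above (the filename is a placeholder; on binary data the text-mode read may also raise UnicodeDecodeError before the hash is ever touched):
<code class="python">import hashlib

md5 = hashlib.md5()

# Text mode: read() returns str; md5.update() requires bytes and raises TypeError.
with open("filename", "r") as f:
    try:
        md5.update(f.read())
    except TypeError as exc:
        print("text mode fails:", exc)

# Binary mode: read() returns bytes, so the chunked helper works as intended.
with open("filename", "rb") as f:
    print(md5_for_file(f).hex())
</code>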
Additional Considerations
For convenience, an improved version of the function is presented below; it builds the path, opens the file itself, and returns a hexadecimal digest:
<code class="python">import hashlib import os def generate_file_md5(rootdir, filename): m = hashlib.md5() with open(os.path.join(rootdir, filename), "rb") as f: buf = f.read() while buf: m.update(buf) buf = f.read() return m.hexdigest()</code>
Cross-checking the calculated hashes with external tools like jacksum is recommended to verify accuracy.
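A sketch of such a cross-check, assuming a GNU coreutils md5sum binary is on the PATH (jacksum or any other external MD5 tool can be substituted; the path is a placeholder):
<code class="python">import subprocess

path = "/data/uploads/backup.iso"  # hypothetical file
ours = generate_file_md5("/data/uploads", "backup.iso")

# md5sum prints "<hash>  <filename>"; take the first whitespace-separated field.
result = subprocess.run(["md5sum", path], capture_output=True, text=True, check=True)
theirs = result.stdout.split()[0]

print("match" if ours == theirs else "MISMATCH")
</code>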