How to deal with concurrent file compression and decompression in Go language?
File compression and decompression are common tasks in daily development. As files grow larger, these operations can become time-consuming, so concurrency is an important way to improve efficiency. In Go, goroutines and channels make it straightforward to run compression and decompression operations concurrently.
First, let’s take a look at how to implement file compression in Go. The standard library provides two packages for this, archive/zip and compress/gzip, and we can use them to implement file compression operations.
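The examples in this article use archive/zip. As a small aside, here is a minimal sketch of compressing a single file with compress/gzip; the file names are placeholders:

```go
package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
)

// gzipFile compresses src into dest using compress/gzip.
func gzipFile(src, dest string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	gw := gzip.NewWriter(out)
	defer gw.Close()

	// Copy the source file through the gzip writer.
	_, err = io.Copy(gw, in)
	return err
}

func main() {
	if err := gzipFile("file.txt", "file.txt.gz"); err != nil {
		log.Fatal(err)
	}
}
```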
The following is a sample code to compress a single file:
```go
package main

import (
	"archive/zip"
	"io"
	"log"
	"os"
)

func compressFile(filename string, dest string) error {
	// Open the source file and create the target archive.
	srcFile, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer srcFile.Close()

	destFile, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer destFile.Close()

	zipWriter := zip.NewWriter(destFile)
	defer zipWriter.Close()

	// Build the zip entry header from the source file's metadata.
	info, err := srcFile.Stat()
	if err != nil {
		return err
	}
	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}
	header.Name = srcFile.Name()
	header.Method = zip.Deflate

	writer, err := zipWriter.CreateHeader(header)
	if err != nil {
		return err
	}

	// Copy the file contents into the archive entry.
	_, err = io.Copy(writer, srcFile)
	if err != nil {
		return err
	}

	return nil
}

func main() {
	err := compressFile("file.txt", "file.zip")
	if err != nil {
		log.Fatal(err)
	}
}
```
In the above sample code, we first open the source file and create the target file, then create a zip.Writer to write the compressed data. We use the CreateHeader method of zip.Writer to write a file header, and io.Copy to copy the contents of the source file into the archive.
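Note that a single zip.Writer can hold more than one entry. The sketch below is only an illustration, reusing the archive/zip, io, and os imports from the example above, with placeholder entry names, to show that CreateHeader can be called in a loop to pack several files into one archive:

```go
// addFiles writes each named file as a separate entry into one archive.
// Sketch only: error handling is abbreviated and names are placeholders.
func addFiles(zipWriter *zip.Writer, names []string) error {
	for _, name := range names {
		f, err := os.Open(name)
		if err != nil {
			return err
		}

		info, err := f.Stat()
		if err != nil {
			f.Close()
			return err
		}
		header, err := zip.FileInfoHeader(info)
		if err != nil {
			f.Close()
			return err
		}
		header.Name = name
		header.Method = zip.Deflate

		w, err := zipWriter.CreateHeader(header)
		if err != nil {
			f.Close()
			return err
		}
		if _, err := io.Copy(w, f); err != nil {
			f.Close()
			return err
		}
		f.Close()
	}
	return nil
}
```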
Next, let’s look at how to compress multiple files concurrently. We can use goroutines and channels to pass file information between goroutines and process the files in parallel.
The following is a sample code that implements concurrent compression of multiple files:
```go
package main

import (
	"archive/zip"
	"io"
	"log"
	"os"
)

type File struct {
	Name string
	Dest string
}

func compressFile(filename string, dest string, done chan bool) error {
	srcFile, err := os.Open(filename)
	if err != nil {
		return err
	}
	defer srcFile.Close()

	destFile, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer destFile.Close()

	zipWriter := zip.NewWriter(destFile)
	defer zipWriter.Close()

	info, err := srcFile.Stat()
	if err != nil {
		return err
	}
	header, err := zip.FileInfoHeader(info)
	if err != nil {
		return err
	}
	header.Name = srcFile.Name()
	header.Method = zip.Deflate

	writer, err := zipWriter.CreateHeader(header)
	if err != nil {
		return err
	}

	_, err = io.Copy(writer, srcFile)
	if err != nil {
		return err
	}

	// Signal completion on the channel.
	done <- true
	return nil
}

func main() {
	files := []File{
		{Name: "file1.txt", Dest: "file1.zip"},
		{Name: "file2.txt", Dest: "file2.zip"},
		{Name: "file3.txt", Dest: "file3.zip"},
	}

	done := make(chan bool)
	for _, file := range files {
		go func(f File) {
			err := compressFile(f.Name, f.Dest, done)
			if err != nil {
				log.Fatal(err)
			}
		}(file)
	}

	// Wait until every goroutine has reported completion.
	for i := 0; i < len(files); i++ {
		<-done
	}
}
```
In the above sample code, we define a File struct that holds the information for each file: its name and the name of the target archive. We then start one goroutine per file to perform the compression concurrently, and use a channel to signal when each compression finishes. In the main function, we first create a done channel to receive these completion notifications, then launch the goroutines and wait on the channel until all files have been processed.
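As an alternative to the done channel, the standard library's sync.WaitGroup expresses the same "wait for all goroutines" logic. The sketch below assumes the two-argument compressFile from the single-file example earlier, and the file names are again placeholders:

```go
package main

import (
	"log"
	"sync"
)

type File struct {
	Name string
	Dest string
}

func main() {
	files := []File{
		{Name: "file1.txt", Dest: "file1.zip"},
		{Name: "file2.txt", Dest: "file2.zip"},
		{Name: "file3.txt", Dest: "file3.zip"},
	}

	var wg sync.WaitGroup
	for _, file := range files {
		wg.Add(1)
		go func(f File) {
			defer wg.Done()
			// compressFile is the two-argument single-file version
			// shown earlier: compressFile(filename, dest string) error.
			if err := compressFile(f.Name, f.Dest); err != nil {
				log.Println(err)
			}
		}(file)
	}

	// Block until every goroutine has called Done.
	wg.Wait()
}
```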
Implementing file decompression in Go is just as simple: we can use the archive/zip and compress/gzip packages to decompress files.
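For the compress/gzip side, a minimal decompression sketch that mirrors the gzip compression sketch above (file names are placeholders) might look like this:

```go
package main

import (
	"compress/gzip"
	"io"
	"log"
	"os"
)

// gunzipFile decompresses the gzip file src into dest.
func gunzipFile(src, dest string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	gr, err := gzip.NewReader(in)
	if err != nil {
		return err
	}
	defer gr.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Copy the decompressed stream into the destination file.
	_, err = io.Copy(out, gr)
	return err
}

func main() {
	if err := gunzipFile("file.txt.gz", "file.txt"); err != nil {
		log.Fatal(err)
	}
}
```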
The following is a sample code to decompress a single file:
```go
package main

import (
	"archive/zip"
	"io"
	"log"
	"os"
)

func decompressFile(filename string, dest string) error {
	// Open the zip archive for reading.
	zipReader, err := zip.OpenReader(filename)
	if err != nil {
		return err
	}
	defer zipReader.Close()

	for _, file := range zipReader.File {
		if file.Name != dest {
			continue
		}

		// Found the target entry: copy its contents into the destination file.
		src, err := file.Open()
		if err != nil {
			return err
		}
		defer src.Close()

		destFile, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer destFile.Close()

		_, err = io.Copy(destFile, src)
		if err != nil {
			return err
		}
		break
	}

	return nil
}

func main() {
	err := decompressFile("file.zip", "file.txt")
	if err != nil {
		log.Fatal(err)
	}
}
```
In the above sample code, we first open the compressed archive, traverse the list of entries inside it, and after finding the target entry, copy its contents into the destination file.
Next, let’s look at how to decompress multiple files concurrently.
The following is a sample code that implements concurrent decompression of multiple files:
```go
package main

import (
	"archive/zip"
	"io"
	"log"
	"os"
)

type File struct {
	Name string // compressed archive name
	Src  string // entry name inside the archive
	Dest string // destination file name
}

func decompressFile(filename string, src string, dest string, done chan bool) error {
	zipReader, err := zip.OpenReader(filename)
	if err != nil {
		return err
	}
	defer zipReader.Close()

	for _, file := range zipReader.File {
		if file.Name != src {
			continue
		}

		srcReader, err := file.Open()
		if err != nil {
			return err
		}
		defer srcReader.Close()

		destFile, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer destFile.Close()

		_, err = io.Copy(destFile, srcReader)
		if err != nil {
			return err
		}
		break
	}

	// Signal completion on the channel.
	done <- true
	return nil
}

func main() {
	files := []File{
		{Name: "file1.zip", Src: "file1.txt", Dest: "file1_copy.txt"},
		{Name: "file2.zip", Src: "file2.txt", Dest: "file2_copy.txt"},
		{Name: "file3.zip", Src: "file3.txt", Dest: "file3_copy.txt"},
	}

	done := make(chan bool)
	for _, file := range files {
		go func(f File) {
			err := decompressFile(f.Name, f.Src, f.Dest, done)
			if err != nil {
				log.Fatal(err)
			}
		}(file)
	}

	// Wait until every goroutine has reported completion.
	for i := 0; i < len(files); i++ {
		<-done
	}
}
```
In the above sample code, we define a File struct that holds the information for each file: the name of the compressed archive, the entry name inside it, and the destination file name. We then start one goroutine per file to perform the decompression concurrently, and use a channel to signal when each decompression finishes. In the main function, we first create a done channel to receive these completion notifications, then launch the goroutines and wait on the channel until all files have been processed.
Through the above example code, we can process file compression and decompression concurrently and thereby improve the program's efficiency. In actual development, the degree of concurrency can be adjusted according to the specific requirements and file sizes to achieve the best performance, for example by capping the number of goroutines that run at the same time, as in the sketch below.
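One common way to cap the degree of concurrency is a buffered channel used as a semaphore. The sketch below assumes the three-argument compressFile and the placeholder file names from the concurrent compression example above; the limit of two is arbitrary:

```go
package main

import (
	"log"
)

type File struct {
	Name string
	Dest string
}

func main() {
	files := []File{
		{Name: "file1.txt", Dest: "file1.zip"},
		{Name: "file2.txt", Dest: "file2.zip"},
		{Name: "file3.txt", Dest: "file3.zip"},
	}

	// Allow at most two compressions to run at the same time.
	const maxConcurrent = 2
	sem := make(chan struct{}, maxConcurrent)

	done := make(chan bool)
	for _, file := range files {
		go func(f File) {
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release the slot when finished

			// compressFile is the three-argument version from the
			// concurrent compression example above.
			if err := compressFile(f.Name, f.Dest, done); err != nil {
				log.Fatal(err)
			}
		}(file)
	}

	for i := 0; i < len(files); i++ {
		<-done
	}
}
```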