Stream File Upload to AWS S3 with Go
Streaming large multipart/form-data uploads directly to AWS S3 avoids buffering the whole file in memory or writing it to local disk first, keeping both memory usage and disk footprint small.
Solution
Use the upload manager from the AWS SDK for Go (s3manager) to stream data to S3 in parts. Here's an example that uploads a file from disk:
package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var (
	filename     = "file_name.zip"
	myBucket     = "myBucket"
	myKey        = "file_name.zip"
	accessKey    = ""
	accessSecret = ""
)

func main() {
	var awsConfig *aws.Config
	if accessKey == "" || accessSecret == "" {
		// No static credentials supplied: fall back to the default credential
		// chain (environment variables, shared config, IAM role).
		awsConfig = &aws.Config{
			Region: aws.String("us-west-2"),
		}
	} else {
		awsConfig = &aws.Config{
			Region:      aws.String("us-west-2"),
			Credentials: credentials.NewStaticCredentials(accessKey, accessSecret, ""),
		}
	}

	sess := session.Must(session.NewSession(awsConfig))

	// The upload manager splits the body into parts and uploads them concurrently.
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 5 * 1024 * 1024 // 5 MiB, the S3 minimum part size
		u.Concurrency = 2            // number of parts uploaded in parallel
	})

	f, err := os.Open(filename)
	if err != nil {
		fmt.Printf("failed to open file %q, %v\n", filename, err)
		return
	}
	defer f.Close()

	result, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(myBucket),
		Key:    aws.String(myKey),
		Body:   f,
	})
	if err != nil {
		fmt.Printf("failed to upload file, %v\n", err)
		return
	}
	fmt.Printf("file uploaded to %s\n", result.Location)
}
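The example above reads from a file on disk. To stream an incoming multipart/form-data upload straight to S3 without touching the disk, you can pass a *multipart.Part (which is an io.Reader) directly as the Body. Below is a minimal sketch of that approach; the /upload route, the "file" form field name, the bucket name, and the region are illustrative assumptions, not values from the original example:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func uploadHandler(uploader *s3manager.Uploader) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// Read the body part by part instead of calling r.ParseMultipartForm,
		// which would buffer the whole file in memory or in a temp file.
		mr, err := r.MultipartReader()
		if err != nil {
			http.Error(w, "expected multipart/form-data", http.StatusBadRequest)
			return
		}
		for {
			part, err := mr.NextPart()
			if err == io.EOF {
				break
			}
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			if part.FormName() != "file" { // assumed form field name
				continue
			}
			// *multipart.Part is an io.Reader, so the upload manager pulls
			// PartSize chunks from the request body and streams them to S3.
			result, err := uploader.Upload(&s3manager.UploadInput{
				Bucket: aws.String("myBucket"),          // assumed bucket name
				Key:    aws.String(part.FileName()),     // client-supplied filename used as the key
				Body:   part,
			})
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			fmt.Fprintf(w, "file uploaded to %s\n", result.Location)
			return
		}
		http.Error(w, "no file field in request", http.StatusBadRequest)
	}
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	}))
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 5 * 1024 * 1024
		u.Concurrency = 2
	})
	http.Handle("/upload", uploadHandler(uploader))
	log.Fatal(http.ListenAndServe(":8080", nil))
}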
The upload manager's options let you tune the transfer: PartSize sets the size of each uploaded chunk, Concurrency sets how many parts are in flight at once, and MaxUploadParts caps the number of parts per upload, as in the sketch below.
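A minimal tuning sketch (the values are illustrative, not recommendations; sess is a session created as above):

uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
	u.PartSize = 10 * 1024 * 1024               // bytes per part; S3's minimum is 5 MiB
	u.Concurrency = 4                           // parts uploaded in parallel
	u.MaxUploadParts = s3manager.MaxUploadParts // part-count cap (10,000, the S3 limit)
})

Because S3 allows at most 10,000 parts per multipart upload, PartSize also bounds the maximum object size (PartSize × 10,000). For a non-seekable body such as a multipart.Part, the uploader buffers each in-flight part in memory, so peak memory use is roughly PartSize × Concurrency.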