Beginner's Guide to Managing Files in Amazon S3 with Go
Amazon Simple Storage Service (S3) has become the industry standard for web-based file storage. It is designed for 99.999999999% (11 nines) of data durability, with strong security controls, flexible storage classes, and data management and analytics features.
Are you building the server-side app for your project in Go and need a storage option? This article assumes you’re an AWS beginner and walks you through the process of using S3 for storage from your Go apps.
Setting up an AWS S3 bucket with the right permissions and parameters can be daunting. You’ll have to create a bucket, create an IAM user, issue permissions to the IAM user to execute operations on the bucket, and set up the access keys in your environment.
First, create an Amazon AWS account if you don’t have one yet and sign in. Then, search for S3 in the console.
Now, click “Create bucket.” You’ll be prompted to configure the bucket according to your project’s specifications.
Now that you’ve created a bucket, the next step is to set up an IAM user to whom you’ll issue permission to access the S3 bucket.
Head to the security credentials section of your profile and create a user.
Then, create an access key for the user and retrieve the access and secret keys.
On your computer, create a .aws folder in your home directory and, inside it, a file named credentials with no extension; then add the keys like this:
[default]
aws_access_key_id = <aws_access_key_id here>
aws_secret_access_key = <aws_secret_access_key here>
Now, issue the user the permissions it needs for S3 uploads (for example, a policy that allows the s3:PutObject action on your bucket).
Finally, initialize a Go project and install the AWS Go SDK.
go mod tidy
go get github.com/aws/aws-sdk-go
You’re all set up, and you can now start uploading, downloading, and managing files in AWS S3 buckets.
First, you must import the necessary packages from the AWS SDK. Add these imports to the top of your main.go file, or whichever file you’re using:
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)
To upload a file, you must create a new session, open a file, and use the session instance to upload the file.
func UploadFile(bucket, region, filePath, key, acl string) error {
	// Create an AWS session for the target region.
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region),
	})
	if err != nil {
		return fmt.Errorf("error creating session: %w", err)
	}

	// Open the local file to upload.
	file, err := os.Open(filePath)
	if err != nil {
		return fmt.Errorf("error opening file: %w", err)
	}
	defer file.Close()

	// Read the file contents into memory.
	var buf bytes.Buffer
	if _, err := io.Copy(&buf, file); err != nil {
		return fmt.Errorf("error reading file: %w", err)
	}

	// Upload the file to the bucket under the given key, with the given ACL.
	_, err = s3.New(sess).PutObject(&s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(buf.Bytes()),
		ACL:    aws.String(acl),
	})
	if err != nil {
		return fmt.Errorf("error uploading file: %w", err)
	}

	fmt.Println("File uploaded successfully:", filePath, "to key:", key)
	return nil
}
The UploadFile function takes in the bucket name, the AWS region, the local file path, the file key, and an ACL (access control list) value, and uploads the specified file to S3 under the specified key.
The session.NewSession function creates a new AWS session, and s3.New creates an S3 client from that session. The PutObject function takes a reference to a PutObjectInput struct with the Bucket, the Key, the file Body, and the ACL.
You can call the UploadFile function from your main function like this:

func main() {
	bucket := "cloudboxbucket"
	region := "eu-north-1"
	filePath := "Makefile"
	key := "Makefile"
	acl := "private"

	if err := UploadFile(bucket, region, filePath, key, acl); err != nil {
		fmt.Println("Error uploading file:", err)
	}
}
When I called the UploadFile function with the parameters, here’s proof that the file was uploaded to my S3 bucket.
You can always browse more details of files by clicking the file name on the AWS S3 console.
Now, you can attempt to retrieve the file with the key you’ve specified.
func DownloadFile(bucket, region, key, destPath string) error {
	// Create an AWS session for the target region.
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region),
	})
	if err != nil {
		return fmt.Errorf("error creating session: %w", err)
	}

	s3Client := s3.New(sess)

	// Fetch the object from the bucket by its key.
	output, err := s3Client.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return fmt.Errorf("error downloading file: %w", err)
	}
	defer output.Body.Close()

	// Create the local destination file and copy the object's contents into it.
	destFile, err := os.Create(destPath)
	if err != nil {
		return fmt.Errorf("error creating destination file: %w", err)
	}
	defer destFile.Close()

	if _, err := io.Copy(destFile, output.Body); err != nil {
		return fmt.Errorf("error saving file: %w", err)
	}

	fmt.Println("File downloaded successfully:", key, "to", destPath)
	return nil
}
After creating the client instance, the GetObject function receives the file parameters in a GetObjectInput struct. You can then copy the returned file stream to your preferred destination with the io.Copy function.
The file is downloaded to the destination path you specify when you run a program that calls the DownloadFile function. You’ll need to specify the file’s key as well as the destination path.
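As a quick sketch of that usage (reusing the bucket, region, and key from the upload example; the destination path here is a hypothetical value), you could replace the earlier main function with something like this:

func main() {
	bucket := "cloudboxbucket"
	region := "eu-north-1"
	key := "Makefile"
	destPath := "downloaded-Makefile" // hypothetical local destination path

	if err := DownloadFile(bucket, region, key, destPath); err != nil {
		fmt.Println("Error downloading file:", err)
	}
}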
To delete a file from an S3 bucket, you’ll use the AWS SDK’s DeleteObject function to remove the file. You must specify the S3 bucket name and the file key you want to delete.
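Here’s a minimal sketch of what such a DeleteFile function could look like (the function shape and error messages are assumptions, not an exact listing); it pairs DeleteObject with the WaitUntilObjectNotExists check described below:

// DeleteFile is a sketch of a delete helper: it creates a session,
// deletes the object, and waits until the object no longer exists.
func DeleteFile(bucket, region, key string) error {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(region),
	})
	if err != nil {
		return fmt.Errorf("error creating session: %w", err)
	}

	s3Client := s3.New(sess)

	// DeleteObject needs the bucket name and the key of the file to remove.
	_, err = s3Client.DeleteObject(&s3.DeleteObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return fmt.Errorf("error deleting file: %w", err)
	}

	// WaitUntilObjectNotExists polls until the object is gone, confirming the deletion.
	if err := s3Client.WaitUntilObjectNotExists(&s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	}); err != nil {
		return fmt.Errorf("error waiting for file deletion: %w", err)
	}

	fmt.Println("File deleted successfully:", key)
	return nil
}

You can then call it the same way as the other helpers, for example DeleteFile("cloudboxbucket", "eu-north-1", "Makefile").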
The DeleteFile function starts by creating a session, as in the upload and download examples. It then calls s3Client.DeleteObject with a DeleteObjectInput struct, where you specify the Bucket and Key parameters.
You can use the WaitUntilObjectNotExists function to confirm that the file doesn’t exist.
After executing the DeleteFile function, you can see that the Makefile no longer exists in my S3 bucket.
You’ve learned how to set up an S3 bucket and an IAM user (the steps also work for the root user), and how to upload, download, and delete files from S3 buckets. There’s much more you can do with AWS S3 buckets; I hope this article has provided the foundation for you to go further, depending on your project’s use case.