GCS upload for chunking in Go SDK?
GCS (Google Cloud Storage) is the object storage service on Google Cloud Platform, used to store and access large amounts of unstructured data. When uploading a large file to GCS, a chunked upload can improve both speed and reliability, and the Go SDK provides the interfaces and methods to implement it, so developers can handle large file uploads more flexibly without holding the entire file in memory.
I am trying to use the GCS writer to upload a large file:

    bucketHandle := m.Client.Bucket(bucket)
    objectHandle := bucketHandle.Object(path)
    writer := objectHandle.NewWriter(context.Background())

Then, for each block of size n, I call writer.Write(mybuffer). I'm seeing some out-of-memory exceptions on the cluster and wondering whether this is actually buffering the entire file into memory. What are the semantics of this operation, and have I misunderstood something?
No, the Writer does not buffer the entire file in memory. Write returns the number of bytes consumed from the supplied slice and any error encountered; internally, the data is accumulated in a buffer of Writer.ChunkSize bytes (16 MiB by default) and each full chunk is flushed to GCS using the resumable upload protocol. The client's memory consumption is therefore bounded by roughly the chunk size, not the file size: if you feed the writer 5 MB blocks in a loop and set ChunkSize to 5 MB, only about 5 MB is held in memory at a time.
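To make this concrete, here is a minimal sketch of a chunk-bounded upload. The bucket name, object path, local file name, and the uploadLarge helper are illustrative and not part of the original question; error handling is kept deliberately simple.

    package main

    import (
        "context"
        "fmt"
        "io"
        "os"

        "cloud.google.com/go/storage"
    )

    // uploadLarge streams src into bucket/path without holding the whole file
    // in memory: the Writer buffers at most ChunkSize bytes and uploads each
    // full chunk via the resumable upload protocol.
    func uploadLarge(ctx context.Context, client *storage.Client, bucket, path string, src io.Reader) error {
        w := client.Bucket(bucket).Object(path).NewWriter(ctx)
        // Must be set before the first Write; caps in-memory buffering at ~5 MiB.
        w.ChunkSize = 5 * 1024 * 1024

        // io.Copy calls w.Write repeatedly; the Writer accumulates the data and
        // flushes a chunk to GCS whenever ChunkSize bytes are buffered.
        if _, err := io.Copy(w, src); err != nil {
            w.Close()
            return fmt.Errorf("copy to GCS: %w", err)
        }
        // Close flushes the final partial chunk and finalizes the object.
        return w.Close()
    }

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        f, err := os.Open("bigfile.bin") // illustrative local file
        if err != nil {
            panic(err)
        }
        defer f.Close()

        if err := uploadLarge(ctx, client, "my-bucket", "path/to/object", f); err != nil {
            panic(err)
        }
    }

A smaller ChunkSize trades extra round trips for a lower memory ceiling; setting it to 0 disables chunking and uploads the object in a single request, which is only appropriate for small objects.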