P粉399585024 2023-08-28 11:42:57
Docker has changed a lot since this question was asked, so here's an attempt to update the answer.
First, especially for AWS credentials on containers already running in the cloud, using an IAM role as Vor recommends is a really good idea. If you can do that, give his answer a +1 and skip the rest.
Once you start running things outside the cloud, or have other types of secrets, there are two places I recommend against storing secrets:
Environment variables: when these are defined on the container, every process inside the container has access to them, they are visible via /proc, and an application may dump its environment to stdout where it ends up in the logs. Most importantly, they show up in clear text when you inspect the container (see the short demo after this list).
In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret in a later layer, the image can be disassembled with common Linux utilities like tar, and the secret can be found in the layer where it was first added to the image.
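For example, anyone with access to the docker API can read those environment variables straight back out of a running container. A quick demonstration of the first point (the container name and the value here are made up for illustration):

docker run -d --name web -e AWS_SECRET_ACCESS_KEY=not-so-secret nginx
docker inspect -f '{{.Config.Env}}' web
# the output includes AWS_SECRET_ACCESS_KEY=not-so-secret in clear text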
So what other options are there for secrets in Docker containers?
Option A: If you only need the secret while building the image, cannot use it before the build starts, and do not yet have access to BuildKit, a multi-stage build is the best of the bad options. You add the secret to an initial stage of the build, use it there, then copy the output of that stage (without the secret) into your release stage, and push only that release stage to the registry server. The secret still ends up in the image cache on the build server, so I tend to use this only as a last resort.
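Here's a minimal sketch of that multi-stage pattern; the stage names, the credentials file, and the S3 bucket are placeholders for illustration, not a real project:

# build stage: the secret is copied in and used here, and never leaves this stage
FROM python:3 AS build
RUN pip install awscli
COPY aws_credentials /root/.aws/credentials
RUN aws s3 cp s3://example-bucket/app.tar.gz /app.tar.gz

# release stage: only the build artifact is copied over, the credentials are not
FROM python:3 AS release
COPY --from=build /app.tar.gz /app.tar.gz

Only the release stage gets tagged and pushed, but as noted above, the build stage containing the secret still sits in the local image cache on the build server.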
Option B: Also during the build, if you can use BuildKit, released in 18.09, there is currently an experimental feature that lets you inject a secret as a volume mount for a single RUN line. That mount is not written to the image layers, so you can access the secret during the build without worrying about it being pushed to a public registry server. The resulting Dockerfile looks like this:
# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials \
    aws s3 cp s3://... ...
And you build it with a command in 18.09 or newer, for example:
DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .
Option C: When running on a single node, without Swarm mode or other orchestration, you can mount the credentials as a read-only volume. Accessing this credential requires the same access you would need to reach the same credentials file outside of docker, so it's no better or worse than the scenario without docker. Importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of those in every case. This does require that you copy your credentials onto the docker host, separate from the deployment of the container. (Note that anyone with the ability to run containers on that host can view your credentials, since access to the docker API is root on the host, and root can view any user's files. If you don't trust users with root on the host, then don't give them docker API access.)
For docker run, this looks like:
docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image
Or for compose files, you need:
version: '3'

services:
  app:
    image: your_image
    volumes:
    - $HOME/.aws/credentials:/home/app/.aws/credentials:ro
Option D: With Swarm mode and orchestration tools like Kubernetes, we now have better support for secrets than volumes. With Swarm mode, the secret is encrypted on the manager's filesystem (though the decryption key is usually there too, allowing the manager to be restarted without an administrator entering a decryption key). More importantly, the secret is only sent to the workers that need it (to run a container with that secret), it is only stored in the worker's memory, never on disk, and it is injected into the container as a file on a tmpfs mount. Users on hosts outside the swarm cannot mount the secret directly into their own containers; however, with open access to the docker API they can extract the secret from a running container on the node, so again, limit who has access to that API. From compose, this secret injection looks like:
version: '3.7'

secrets:
  aws_creds:
    external: true

services:
  app:
    image: your_image
    secrets:
    - source: aws_creds
      target: /home/user/.aws/credentials
      uid: '1000'
      gid: '1000'
      mode: 0700
You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.
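Putting those steps together on a single node, the sequence looks roughly like this (assuming the compose file above is saved as docker-compose.yml):

docker swarm init
docker secret create aws_creds $HOME/.aws/credentials
docker stack deploy -c docker-compose.yml stack_name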
I often use the following script to version my secrets: https://github.com/sudo-bmitch/docker-config-update
Option E: There are other tools for managing secrets; my favorite is Vault because of its ability to create time-limited secrets that automatically expire. Each application then gets its own set of tokens to request secrets, and those tokens let it request the time-limited secrets for as long as it can reach the vault server. This reduces the risk if a secret is ever taken out of your network, since it will either not work or expire quickly. The AWS-specific functionality of Vault is documented at https://www.vaultproject.io/docs/secrets/aws/index.html
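As a rough sketch of what that flow looks like with Vault's AWS secrets engine (the role name my-role and the policy file are assumptions, and the engine also needs root AWS credentials configured under aws/config/root; see the docs linked above for the full setup):

# one-time setup by an operator: enable the engine and define a role
vault secrets enable aws
vault write aws/roles/my-role credential_type=iam_user policy_document=@policy.json
# an application with its own vault token then requests short-lived AWS keys
vault read aws/creds/my-role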
P粉523625080 2023-08-28 10:29:34
The best approach is to use an IAM role and not handle credentials at all. (See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html )
Credentials can be retrieved from http://169.254.169.254....
Since this is a private IP address, it is accessible only from EC2 instances.
All modern AWS client libraries "know" how to get, refresh, and use credentials from there, so in most cases you don't even need to know about it. Just run the EC2 instance with the correct IAM role.
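What the SDKs do behind the scenes is roughly this (the role name my-ec2-role is a placeholder, and these calls only succeed from inside an EC2 instance, or a container running on one, with that role attached):

# list the role attached to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# fetch temporary credentials for that role (JSON with AccessKeyId, SecretAccessKey, Token, Expiration)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-ec2-role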
As an option, you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage).
You can access these environment variables by running printenv in the terminal.
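For example, from the host (mycontainer is a placeholder for your container's name):

docker exec mycontainer printenv | grep AWS_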