I'm new to Docker, and it's unclear how to access an external database from a container. Is the best way to hard-code the connection string in the Dockerfile?
# Dockerfile
ENV DATABASE_URL amazon:rds/connection?string
You can pass environment variables to your containers with the -e flag.
An example from a startup script:
sudo docker run -d -t -i -e REDIS_NAMESPACE='staging' \
-e POSTGRES_ENV_POSTGRES_PASSWORD='foo' \
-e POSTGRES_ENV_POSTGRES_USER='bar' \
-e POSTGRES_ENV_DB_NAME='mysite_staging' \
-e POSTGRES_PORT_5432_TCP_ADDR='docker-db-1.hidden.us-east-1.rds.amazonaws.com' \
-e SITE_URL='staging.mysite.com' \
-p 80:80 \
--link redis:redis \
--name container_name dockerhub_id/image_name
Or, if you don't want to have the value on the command line where it will be displayed by ps, etc., -e can pull in the value from the current environment if you just give it without the =:
sudo PASSWORD='foo' docker run [...] -e PASSWORD [...]
If you have many environment variables and especially if they're meant to be secret, you can use an env-file:
$ docker run --env-file ./env.list ubuntu bash
The --env-file flag takes a filename as an argument and expects each line to be in the VAR=VAL format, mimicking the argument passed to --env. Comment lines need only be prefixed with #.
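For instance, a hypothetical env.list might look like this (all names and values here are just placeholders):
# env.list
POSTGRES_USER=bar
POSTGRES_PASSWORD=foo
SITE_URL=staging.mysite.com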
You can pass environment variables using the -e parameter with the docker run command, as mentioned here and as mentioned by @errata.
However, the possible downside of this approach is that your credentials will be displayed in the process listing on the machine where you run it.
To make it more secure, you may write your credentials in a configuration file and do docker run with --env-file, as mentioned here. Then you can control access to that config file so that others with access to that machine won't see your credentials.
Be careful with --env-file: when you use --env, your env values will be quoted/escaped with the standard semantics of whatever shell you're using, but when using --env-file the values you get inside your container will be different. The docker run command just reads the file, does very basic parsing, and passes the values through to the container; it's not equivalent to the way your shell behaves. Just a small gotcha to be aware of if you're converting a bunch of --env entries to an --env-file.
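A quick sketch of the difference (GREETING and the image are just examples): the shell strips the quotes on the command line, while --env-file passes them through literally.
docker run --rm -e GREETING="hello world" ubuntu sh -c 'echo "$GREETING"'    # prints: hello world
echo 'GREETING="hello world"' > env.list
docker run --rm --env-file env.list ubuntu sh -c 'echo "$GREETING"'          # prints: "hello world" (quotes kept)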
If you are using 'docker-compose' as the method to spin up your container(s), there is actually a useful way to pass an environment variable defined on your server to the Docker container.
In your docker-compose.yml file, let's say you are spinning up a basic hapi-js container and the code looks like:
hapi_server:
  container_name: hapi_server
  image: node_image
  expose:
    - "3000"
Let's say that the local server that your docker project is on has an environment variable named 'NODE_DB_CONNECT' that you want to pass to your hapi-js container, and you want its new name to be 'HAPI_DB_CONNECT'. Then in the docker-compose.yml file, you would pass the local environment variable to the container and rename it like so:
hapi_server:
  container_name: hapi_server
  image: node_image
  environment:
    - HAPI_DB_CONNECT=${NODE_DB_CONNECT}
  expose:
    - "3000"
I hope this helps you to avoid hard-coding a database connection string in any file in your container!
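For example, assuming NODE_DB_CONNECT is already exported on the host (the value below is only a placeholder), docker-compose substitutes it when creating the container:
export NODE_DB_CONNECT='postgres://user:pass@db-host:5432/mysite'
docker-compose up -d hapi_server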
Use the -e or --env flag to set environment variables (default []).
An example from a startup script:
docker run -e myhost='localhost' -it busybox sh
If you want to pass multiple environment variables from the command line, use the -e flag before every environment variable.
Example:
sudo docker run -d -t -i -e NAMESPACE='staging' -e PASSWORD='foo' busybox sh
Note: Make sure to put the image name after the environment variables, not before them.
If you need to set up many variables, use the --env-file flag.
For example,
$ docker run --env-file ./my_env ubuntu bash
For any other help, look into the Docker help:
$ docker run --help
Official documentation: https://docs.docker.com/compose/environment-variables/
Regarding ubuntu bash — does this apply only to images created with ubuntu as the base image, or to every image? bash just gives you a terminal (although I think you need -it for an interactive terminal).
Using docker-compose, you can inherit env variables in docker-compose.yml and subsequently in any Dockerfile(s) called by docker-compose to build images. This is useful when a Dockerfile RUN command should execute commands specific to the environment.
(Your shell already has RAILS_ENV=development in its environment.)
docker-compose.yml:
version: '3.1'
services:
  my-service:
    build:
      # $RAILS_ENV is referencing the shell environment RAILS_ENV variable
      # and passing it to the Dockerfile ARG RAILS_ENV.
      # The syntax below ensures that the RAILS_ENV arg will default to
      # production if empty.
      # Note that if dockerfile: is not specified, it assumes the file name Dockerfile.
      context: .
      args:
        - RAILS_ENV=${RAILS_ENV:-production}
    environment:
      - RAILS_ENV=${RAILS_ENV:-production}
Dockerfile:
FROM ruby:2.3.4
#give ARG RAILS_ENV a default value = production
ARG RAILS_ENV=production
#assign the $RAILS_ENV arg to the RAILS_ENV ENV so that it can be accessed
#by the subsequent RUN call within the container
ENV RAILS_ENV $RAILS_ENV
#the subsequent RUN call accesses the RAILS_ENV ENV variable within the container
RUN if [ "$RAILS_ENV" = "production" ] ; then echo "production env"; else echo "non-production env: $RAILS_ENV"; fi
This way, I don't need to specify environment variables in files or in the docker-compose build/up commands:
docker-compose build
docker-compose up
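To build for a different environment, you can simply set the shell variable before running the same commands, for example:
RAILS_ENV=development docker-compose build
RAILS_ENV=development docker-compose up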
We can also pass host machine environment variables using the -e flag and $. Before running the following command, you need to export (i.e., set) the local env variables.
docker run -it -e MG_HOST=$MG_HOST \
-e MG_USER=$MG_USER \
-e MG_PASS=$MG_PASS \
-e MG_AUTH=$MG_AUTH \
-e MG_DB=$MG_DB \
-t image_tag_name_and_version
By using this method, you can set the environment variables automatically with your given names (in my case MG_HOST, MG_USER).
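For example, the host variables might be exported first like this (the values here are just placeholders):
export MG_HOST=localhost
export MG_USER=admin
export MG_PASS=secret
export MG_AUTH=admin
export MG_DB=mydb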
Additionally, if you are using Python, you can access these environment variables inside the container with:
import os
host = os.environ.get('MG_HOST')
username = os.environ.get('MG_USER')
password = os.environ.get('MG_PASS')
auth = os.environ.get('MG_AUTH')
database = os.environ.get('MG_DB')
In the docker run command, it is worth noting that the -e variables must come BEFORE the image name, as shown; I had mine placed after the image and they weren't working.
There is a nice hack for piping host machine environment variables to a Docker container:
env > env_file && docker run --env-file env_file image_name
Use this technique very carefully, because env > env_file will dump ALL host machine ENV variables to env_file and make them accessible in the running container.
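A somewhat safer sketch, assuming the variables you need share a common prefix (MYAPP_ here is just a placeholder), is to filter before dumping:
env | grep '^MYAPP_' > env_file && docker run --env-file env_file image_name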
In zsh, you can avoid the temporary file by using process substitution:
docker run --env-file =(env) image_name
The problem I had was that I was putting --env-file at the end of the command:
docker run -it --rm -p 8080:80 imagename --env-file ./env.list
Fix
docker run --env-file ./env.list -it --rm -p 8080:80 imagename
Another way is to use the powers of /usr/bin/env:
docker run ubuntu env DEBUG=1 path/to/script.sh
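For example (a sketch; the image and the inline command are just placeholders), env sets DEBUG=1 only for the process it launches inside the container:
docker run --rm ubuntu env DEBUG=1 sh -c 'echo "DEBUG is $DEBUG"'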
For Amazon AWS ECS/ECR, you should manage your environment variables (especially secrets) via a private S3 bucket. See blog post How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker.
If you have the environment variables in an env.sh locally and want to set them when the container starts, you could try:
COPY env.sh /env.sh
COPY <filename>.jar /<filename>.jar
ENTRYPOINT ["/bin/bash" , "-c", "source /env.sh && printenv && java -jar /<filename>.jar"]
This command starts the container with a bash shell (I want a bash shell since source is a bash command), sources the env.sh file (which sets the environment variables), and executes the jar file.
The env.sh looks like this:
#!/bin/bash
export FOO="BAR"
export DB_NAME="DATABASE_NAME"
I added the printenv command only to test that the source command actually works. You should remove it once you confirm the source command works fine, or the environment variables will appear in your docker logs.
This approach may be helpful if you can't pass an --env-file arg to a docker run command. For example, if you are deploying an application using Google App Engine and the app running inside the container needs environment variables set inside the Docker container, you do not have a direct way to set them since you do not have control over the docker run command. In such a case, you could have a script that decrypts the env variables using, say, KMS and adds them to the env.sh, which can be sourced to set the env variables.
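A minimal sketch of that idea, assuming the AWS CLI is available in the image and that /env.sh.enc is a KMS-encrypted copy of env.sh (all file names here are hypothetical):
#!/bin/bash
# decrypt the KMS-encrypted env file, then source it to set the variables
aws kms decrypt --ciphertext-blob fileb:///env.sh.enc \
    --query Plaintext --output text | base64 --decode > /env.sh
source /env.sh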
Note that you can use the . (dot) command, which is available in regular sh, instead of source (source is the same as .).
docker run --rm -it --env-file <(bash -c 'env | grep <your env data>') <image_name>
is a way to grep the data stored in a .env and pass it to Docker, without anything being stored insecurely (so you can't just look at docker history and grab keys).
Say you have a load of AWS stuff in your .env like so:
AWS_ACCESS_KEY=xxxxxxx
AWS_SECRET=xxxxxx
AWS_REGION=xxxxxx
Running docker with docker run --rm -it --env-file <(bash -c 'env | grep AWS_') <image_name> will grab them all and pass them securely to be accessible from within the container.
Using jq to convert the env to JSON:
env_as_json=`jq -c -n env`
docker run -e HOST_ENV="$env_as_json" <image>
This requires jq version 1.6 or newer. It passes the host env into the container as JSON, essentially as if the Dockerfile contained:
ENV HOST_ENV (all env from the host as json)
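Inside the container, that JSON can then be parsed again, for example with jq (assuming jq is installed in the image; HOME is just an example key):
echo "$HOST_ENV" | jq -r '.HOME'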
In my case, Docker doesn't seem to be resolving variables or subshells (${} or $()) when passed as docker args. For example, with A=123 docker run --rm -it -e HE="$A" ubuntu, running echo $HE inside that container prints nothing; the HE variable doesn't make it through. (That's because the shell expands "$A" before running the command, and the prefix assignment A=123 only applies to the command's environment, not to the expansion of its own arguments; export A=123 first and it works.)
Here is how I was able to solve it:
docker run --rm -ti -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_SECURITY_TOKEN amazon/aws-cli s3 ls
one more example:
export VAR1=value1
export VAR2=value2
$ docker run --env VAR1 --env VAR2 ubuntu env | grep VAR
VAR1=value1
VAR2=value2
For passing multiple environment variables via docker-compose, an environment file can be used in the docker-compose file as well:
web:
  env_file:
    - web-variables.env
https://docs.docker.com/compose/environment-variables/#the-env_file-configuration-option
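A hypothetical web-variables.env would just contain one VAR=VAL per line, for example:
# web-variables.env
RACK_ENV=development
SESSION_SECRET=changeme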
There are some documentation inconsistencies for setting environment variables with docker run.
The online reference says one thing:
--env , -e Set environment variables
The manpage is a little different:
-e, --env=[] Set environment variables
The docker run --help gives something else again:
-e, --env list Set environment variables
Something that isn't necessarily clear in any of the available documentation:
A trailing space after -e or --env can be replaced by =, or in the case of -e can be elided altogether:
$ docker run -it -ekey=value:1234 ubuntu env
key=value:1234
A trick that I found by trial and error (and clues in the above)...
If you get the error:
unknown flag: --env
Then you may find it helpful to use an equals sign with --env, for example:
--env=key=value:1234
Different methods of launching a container may have different parsing scenarios.
These tricks may be helpful when using Docker in various composing configurations, such as Visual Studio Code's devcontainer.json, where spaces are not allowed in the runArgs array.
You can use -e or --env as an argument, followed by a key=value pair, for example: docker run -e MYSQL_ROOT_PASSWORD=root <image_name>
Use export PASSWORD=foo instead, and the variable will be passed to docker run as an environment variable, making docker run -e PASSWORD work.
Do -e on the command line and ENV in the Dockerfile do the same thing?
Make sure to put the -e values before the name of the docker image, otherwise no error will be raised and none of the variables will have a value!