
How do you run `apt-get` in a dockerfile behind a proxy?

I am running a virtual machine (Ubuntu 13.10) with docker (version 0.8.1, build a1598d1). I am trying to build an image with a dockerfile. First, I want to update the packages (using the code below - the proxy is obfuscated) but apt-get times out with the error: Could not resolve 'archive.ubuntu.com'.

FROM ubuntu:13.10
ENV HTTP_PROXY <HTTP_PROXY>
ENV HTTPS_PROXY <HTTPS_PROXY>
RUN export http_proxy=$HTTP_PROXY
RUN export https_proxy=$HTTPS_PROXY
RUN apt-get update && apt-get upgrade

I have also run the following in the host system:

sudo HTTP_PROXY=http://<PROXY_DETAILS>/ docker -d &

The host is able to run apt-get without issue.

How can I change the dockerfile to allow it to reach the ubuntu servers from within the container?

Update

I ran the code in CentOS (changing the FROM ubuntu:13.10 to FROM centos) and it worked fine. It seems to be a problem with Ubuntu.

I have just tested the http proxy in centos (yum update) and in ubuntu:13.10 (apt-get update). Both images work for me; I even tried removing the dns settings from the container to test the proxy (it works without dns, as it should). I am using http_proxy only (no https).
Do you actually have "&lt;HTTP_PROXY&gt;" in your Dockerfile or is this a placeholder?
I have the actual proxy in the file. I'm just not able to share the address.
Have a look at this article : Using Docker Behind a Proxy

Jiri

UPDATE:

The capitalization of the environment variables in your ENV lines is wrong. The correct form is lowercase http_proxy. Your example should be:

FROM ubuntu:13.10
ENV http_proxy <HTTP_PROXY>
ENV https_proxy <HTTPS_PROXY>
RUN apt-get update && apt-get upgrade

or

FROM centos
ENV http_proxy <HTTP_PROXY>
ENV https_proxy <HTTPS_PROXY>
RUN yum update 

All variables specified with ENV are prepended to every RUN command. Each RUN command executes in its own container/environment, so it does not inherit variables exported in previous RUN commands!
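This difference can be illustrated with a minimal sketch (proxy.example:3128 is a placeholder address, not from the question):

```dockerfile
FROM ubuntu:13.10

# Each RUN executes in its own container; this export is discarded
# when the layer is committed, so later steps never see it:
RUN export http_proxy=http://proxy.example:3128

# ENV is recorded in the image metadata and is present in the
# environment of every subsequent RUN (and in the final image):
ENV http_proxy http://proxy.example:3128
RUN apt-get update
```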

Note: There is no need to start the docker daemon with a proxy for this to work, although if you want to pull images etc., you need to set the proxy for the docker daemon too. In Ubuntu you can set the daemon proxy in /etc/default/docker (it does not affect the containers' settings).
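For reference, a daemon-side entry in /etc/default/docker might look like the sketch below (the address is a placeholder); it affects the daemon's own pulls, not the build containers:

```shell
# /etc/default/docker (Ubuntu, upstart-based versions)
# Placeholder proxy address; adjust to your environment.
export http_proxy="http://proxy.example:3128"
```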

Also, this can happen if you run your proxy on the host (i.e. localhost, 127.0.0.1). Localhost on the host differs from localhost in the container. In that case, you need to bind your proxy to another IP (such as the docker bridge address 172.17.42.1), or, if you bind to 0.0.0.0, use 172.17.42.1 instead of 127.0.0.1 when connecting from the container during docker build.

You can also look for an example here: How to rebuild dockerfile quick by using cache?


I don't have a proxy on the host. It is a corporate proxy that I am behind. The same code works on CentOS (changing the FROM only) with the same proxy settings.
With the current state of Docker it looks like you must have ENV http_proxy corporate-proxy.com in your Dockerfile. That's pretty disgusting as it means you can't share your Dockerfile with anyone outside your company. I've just verified this by running polipo in a container while trying to cache apt-get install commands for developing Dockerfiles. Perhaps a future version of Docker will have some magic iptables configuration that allows transparent proxying of HTTP requests?
@TimPotter I think it is not that bad (no advanced magic necessary). You need to set up polipo on the host with a parent proxy (polipo parentProxy=corporate-proxy.com) and then set up a transparent proxy using iptables: iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner <polipo-user> --dport 80 -j REDIRECT --to-port 8123.
I've just put together a project that provides squid running in a container with appropriate iptables / ip rules commands to redirect all other container traffic through the squid container for transparent proxying. This may be useful for what you're describing? silarsis.blogspot.com.au/2014/03/proxy-all-containers.html
do you have a way to get the http_proxy setting of the docker daemon (/etc/default/docker) from inside a docker container? Then I could add a script to inject it into the Dockerfile.
BMW

Updated on 02/10/2018

With the docker option --config, you no longer need to set the proxy in the Dockerfile. You can use the same Dockerfile inside and outside the corporate environment, with or without a proxy.

docker command-line option:

--config string      Location of client config files (default "~/.docker")

or environment variable DOCKER_CONFIG

`DOCKER_CONFIG` The location of your client configuration files.

$ export DOCKER_CONFIG=~/.docker

https://docs.docker.com/engine/reference/commandline/cli/

https://docs.docker.com/network/proxy/

I recommend setting the proxy with httpProxy, httpsProxy, ftpProxy and noProxy (the official document misses the ftpProxy variable, which is sometimes useful):

{
 "proxies":
 {
   "default":
   {
     "httpProxy": "http://192.168.1.12:3128",
     "httpsProxy": "http://192.168.1.12:3128",
     "ftpProxy": "http://192.168.1.12:3128",
     "noProxy": "*.test.example.com,.example2.com,127.0.0.0/8"
   }
 }
}

Adjust the proxy IP and port for your proxy environment and save the file to ~/.docker/config.json

After it is set up properly, you can run docker build and docker run as normal.
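If you need to switch between proxy and no-proxy setups, DOCKER_CONFIG can also point at an alternate config directory; a sketch (the directory layout and the 192.168.1.12:3128 address are assumptions):

```shell
# Build an alternate client config directory holding only proxy settings
cfg_dir=$(mktemp -d)
cat > "$cfg_dir/config.json" <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://192.168.1.12:3128",
      "httpsProxy": "http://192.168.1.12:3128"
    }
  }
}
EOF

# Select it for a single invocation, leaving ~/.docker/config.json untouched:
# DOCKER_CONFIG="$cfg_dir" docker build -t demo .
```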

$ cat Dockerfile

FROM alpine

$ docker build -t demo . 
    
$ docker run -ti --rm demo env|grep -ri proxy
(standard input):HTTP_PROXY=http://192.168.1.12:3128
(standard input):http_proxy=http://192.168.1.12:3128
(standard input):HTTPS_PROXY=http://192.168.1.12:3128
(standard input):https_proxy=http://192.168.1.12:3128
(standard input):NO_PROXY=*.test.example.com,.example2.com,127.0.0.0/8
(standard input):no_proxy=*.test.example.com,.example2.com,127.0.0.0/8
(standard input):FTP_PROXY=http://192.168.1.12:3128
(standard input):ftp_proxy=http://192.168.1.12:3128

Old answer (deprecated)

The settings below in the Dockerfile work for me. I tested on CoreOS, Vagrant and boot2docker. Suppose the proxy port is 3128.

### In CentOS:

ENV http_proxy=ip:3128
ENV https_proxy=ip:3128

### In Ubuntu:

ENV http_proxy 'http://ip:3128'
ENV https_proxy 'http://ip:3128'

Be careful of the format: some have http:// in them, some don't, some use single quotes. If the IP address is 192.168.0.193, the settings are:

### In CentOS:

ENV http_proxy=192.168.0.193:3128
ENV https_proxy=192.168.0.193:3128

### In Ubuntu:

ENV http_proxy 'http://192.168.0.193:3128'
ENV https_proxy 'http://192.168.0.193:3128'

### If you need to set the proxy in CoreOS, for example to pull images:

cat /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://192.168.0.193:3128"

You may also need to add the https proxy setting, as apt-get will use secure connections: [Service] Environment="HTTP_PROXY=your-proxy-server:port" "HTTPS_PROXY=your-proxy-server:port" "NO_PROXY=localhost,127.0.0.0/8"
Note that for Centos, you explicitly have to add the port number, even if it's default port 80: stackoverflow.com/a/46949277/1654763
This worked. Thanks for the example. Docker docs isn't clear about it.
$DOCKER_CONFIG should point to a directory and not directly to the config.json file
zhanxw

You can use the --build-arg option when you want to build using a Dockerfile.

From a link on https://github.com/docker/docker/issues/14634 , see the section "Build with --build-arg with multiple HTTP_PROXY":

[root@pppdc9prda2y java]# docker build \
  --build-arg https_proxy=$HTTP_PROXY --build-arg http_proxy=$HTTP_PROXY \
  --build-arg HTTP_PROXY=$HTTP_PROXY --build-arg HTTPS_PROXY=$HTTP_PROXY \
  --build-arg NO_PROXY=$NO_PROXY --build-arg no_proxy=$NO_PROXY -t java .

NOTE: On your own system, make sure you have set the HTTP_PROXY and NO_PROXY environment variables.
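To avoid typing all six flags each time, a hypothetical shell helper can forward whichever proxy variables are actually set on the host (the function name and docker invocation are illustrative, not from the thread):

```shell
# Hypothetical helper: emit one --build-arg flag per proxy variable
# that is set in the current environment.
proxy_build_args() {
  for v in http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY; do
    eval "val=\${$v:-}"
    if [ -n "$val" ]; then
      printf -- '--build-arg %s=%s ' "$v" "$val"
    fi
  done
}

# Usage (illustrative):
# docker build $(proxy_build_args) -t java .
```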


This is a nasty workaround; for a persistent setup you should use the Dockerfile, as in @Reza Farshi's example.
Nasty workaround? In some (most) cases it's preferable to not have your own proxy baked into the image, if your image is (for example) to be used by people in other parts of your company, who might be behind a different proxy.
Reza Farshi

Before any apt-get command in your Dockerfile, you should put this line:

COPY apt.conf /etc/apt/apt.conf

Don't forget to create apt.conf in the same folder as the Dockerfile. The content of the apt.conf file should look like this:

Acquire::socks::proxy "socks://YOUR-PROXY-IP:PORT/";
Acquire::http::proxy "http://YOUR-PROXY-IP:PORT/";
Acquire::https::proxy "http://YOUR-PROXY-IP:PORT/";

If you use a username and password to connect to your proxy, the apt.conf should look like this:

Acquire::socks::proxy "socks://USERNAME:PASSWORD@YOUR-PROXY-IP:PORT/";
Acquire::http::proxy "http://USERNAME:PASSWORD@YOUR-PROXY-IP:PORT/";
Acquire::https::proxy "http://USERNAME:PASSWORD@YOUR-PROXY-IP:PORT/";

for example :

Acquire::https::proxy "http://foo:bar@127.0.0.1:8080/";

where foo is the username and bar is the password.


apt does not support SOCKS proxies at all. Acquire::socks::proxy means set the proxy for all URLs starting with a socks scheme. Since your sources.list does not have any socks:// URLs, that line is entirely ignored. I'll submit an edit to correct this.
You have /etc/apt/apt.conf.d/<config> on jessie, so the Dockerfile COPY directive needs an update here.
This was the only answer that worked for me (running Docker for Windows behind corporate proxy).
Gaetan

Use --build-arg with the lower-case environment variables:

docker build --build-arg http_proxy=http://proxy:port/ --build-arg https_proxy=http://proxy:port/ --build-arg ftp_proxy=http://proxy:port --build-arg no_proxy=localhost,127.0.0.1,company.com -q=false .

user4780495

As suggested by other answers, --build-arg may be the solution. In my case, I had to add --network=host in addition to the --build-arg options.

docker build -t <TARGET> --build-arg http_proxy=http://<IP:PORT> --build-arg https_proxy=http://<IP:PORT> --network=host .

If it only works when using --network=host, it could be an issue with dnsmasq. See wisbucky's answer below. After getting a proper dns in /etc/resolv.conf on the host, you shouldn't need --network=host.
Connor Goddard

A slight alternative to the answer provided by @Reza Farshi (which works better in my case) is to write the proxy settings out to /etc/apt/apt.conf using echo via the Dockerfile e.g.:

FROM ubuntu:16.04

RUN echo "Acquire::http::proxy \"$HTTP_PROXY\";\nAcquire::https::proxy \"$HTTPS_PROXY\";" > /etc/apt/apt.conf

# Test that we can now retrieve packages via 'apt-get'
RUN apt-get update

The advantage of this approach is that the proxy addresses can be passed in dynamically at image build time, rather than having to copy the settings file over from the host.

e.g.

docker build --build-arg HTTP_PROXY=http://<host>:<port> --build-arg HTTPS_PROXY=http://<host>:<port> .

as per docker build docs.


user3813609

I had the same problem and found another little workaround: I have a provisioner script that is added from the docker build context. In the script I set the environment variable depending on a ping check:

Dockerfile:

ADD script.sh /tmp/script.sh
RUN /tmp/script.sh

script.sh:

#!/bin/sh
# Set the proxy only when direct internet access is unavailable
if ping -c 1 ix.de > /dev/null 2>&1 ; then
    echo "direct internet, doing nothing"
else
    echo "proxy environment detected, setting proxy"
    export http_proxy=<proxy address>
fi

This is still somewhat crude, but it worked for me.


This worked for you? I can get the script to run just fine, but the environment variables are only available to the script. Things like apt-get, cURL, and wget don't end up seeing the environment variables.
wisbucky

If you have the proxies set up correctly and still cannot reach the internet, it could be the DNS resolution. Check /etc/resolv.conf on the host Ubuntu VM. If it contains nameserver 127.0.1.1, that is wrong.

Run these commands on the host Ubuntu VM to fix it:

sudo vi /etc/NetworkManager/NetworkManager.conf
# Comment out the line `dns=dnsmasq` with a `#`

# restart the network manager service
sudo systemctl restart network-manager

cat /etc/resolv.conf

Now /etc/resolv.conf should have a valid value for nameserver, which will be copied by the docker containers.


Thank you! That answer was extremely helpful and finally helped me getting pip install working with docker behind a proxy. That's not really what the OP wanted, but helped me a lot! :)
Same here, thank you! Here is more info about this issue: superuser.com/questions/681993/…
danday74

We are doing ...

ENV http_proxy http://9.9.9.9:9999
ENV https_proxy http://9.9.9.9:9999

and at the end of the Dockerfile ...

ENV http_proxy ""
ENV https_proxy ""

This, for now (until docker introduces build-time env vars), allows the proxy env vars to be used for the build ONLY, without exposing them.

The alternative solution is NOT to build your images locally behind a proxy, but to let docker build them for you using docker "automated builds". Since docker is not building the images behind your proxy, the problem is solved. An example of an automated build is available at ...

https://github.com/danday74/docker-nginx-lua (GITHUB repo)

https://registry.hub.docker.com/u/danday74/nginx-lua (DOCKER repo which is watching the github repo using an automated build and doing a docker build on a push to the github master branch)


Just wanted to mention that since these ENV vars are exposed in EACH intermediate container, unsetting them at the end will not hide them, and they can still be accessed pretty easily. From what I've seen, the secure options for dealing with sensitive data are covered in this github issue
Reza Farshi

And if you want to set the proxy for wget, you should put these lines in your Dockerfile:

ENV http_proxy YOUR-PROXY-IP:PORT/
ENV https_proxy YOUR-PROXY-IP:PORT/
ENV all_proxy YOUR-PROXY-IP:PORT/
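Alternatively, if only wget should go through the proxy (and not apt or curl), the settings can go into wget's own configuration file instead of ENV; a sketch with a placeholder address:

```dockerfile
# Scope the proxy to wget alone via /etc/wgetrc
# (proxy.example:3128 is a placeholder)
RUN printf 'use_proxy = on\nhttp_proxy = http://proxy.example:3128/\nhttps_proxy = http://proxy.example:3128/\n' >> /etc/wgetrc
```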

Dhawal

As Tim Potter pointed out, setting the proxy in the Dockerfile is horrible. When building the image, you add the proxy for your corporate network, but you may be deploying in the cloud or a DMZ where no proxy is needed or the proxy server is different.

Also, you cannot share your image with others outside your corporate network.