
Allow docker container to connect to a local/host postgres database

I've recently been playing around with Docker and QGIS and have installed a container following the instructions in this tutorial.

Everything works great, although I am unable to connect to a localhost postgres database that contains all my GIS data. I figure this is because my postgres database is not configured to accept remote connections and have been editing the postgres conf files to allow remote connections using the instructions in this article.

I'm still getting an error message when I try to connect to my database from QGIS running in Docker:

could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5433?

The postgres server is running, and I've edited my pg_hba.conf file to allow connections from a range of IP addresses (172.17.0.0/32). I had previously queried the IP address of the docker container using docker ps, and although the IP address changes, it has so far always been in the range 172.17.0.x.

Any ideas why I can't connect to this database? Probably something very simple I imagine!

I'm running Ubuntu 14.04; Postgres 9.3


Adam

TL;DR

- Use 172.17.0.0/16 as the IP address range, not 172.17.0.0/32.
- Don't use localhost to connect to the PostgreSQL database on your host; use the host's IP instead.
- To keep the container portable, start the container with the --add-host=database:<host-ip> flag and use database as the hostname for connecting to PostgreSQL.
- Make sure PostgreSQL is configured to listen for connections on all IP addresses, not just on localhost. Look for the setting listen_addresses in PostgreSQL's configuration file, typically found in /etc/postgresql/9.3/main/postgresql.conf (credits to @DzamoNorton).

Long version

172.17.0.0/32 is not a range of IP addresses but a single address (namely 172.17.0.0). No Docker container will ever get that address assigned, because it's the network address of the Docker bridge (docker0) interface.

When Docker starts, it creates a new bridge network interface that you can easily see by calling ip a:

$ ip a
...
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever

As you can see, in my case, the docker0 interface has the IP address 172.17.42.1 with a netmask of /16 (or 255.255.0.0). This means that the network address is 172.17.0.0/16.

The IP address is assigned by Docker, but without any additional configuration it will always be in the 172.17.0.0/16 network. Each Docker container is assigned an address from that range.

This means that if you want to grant all possible containers access to your database, use 172.17.0.0/16.
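For reference, a minimal sketch of the corresponding configuration, assuming the default Ubuntu paths for PostgreSQL 9.3 from the question and a hypothetical image name (my-qgis-image); adjust the authentication method to your needs:

# /etc/postgresql/9.3/main/pg_hba.conf
host    all             all             172.17.0.0/16           md5

# /etc/postgresql/9.3/main/postgresql.conf
listen_addresses = '*'

$ sudo service postgresql restart
$ docker run --add-host=database:<host-ip> my-qgis-image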


Hey, thanks for your comments. I've changed my pg_hba.conf to the address you suggested, but I still get the same connection error message after stopping and restarting the postgres service. I've added the line under my IPv4 connections - is there somewhere else I'm supposed to add the address you suggest? Alternatively, in my QGIS app running in Docker, do I need to change the postgres connection info? For example, if I'm connecting from within a Docker container, is the host still 'localhost'?
Ah, that's an important point. No, localhost is not the host system inside your Docker container. Try connecting to the host system's public IP address. To keep the container portable, you can also start the container with the --add-host=database:<host-ip> flag and simply use database as the hostname to connect to your PostgreSQL host from within the Docker container.
I needed one more piece. I also had to edit /etc/postgresql/9.3/main/postgresql.conf and add my server's eth0 IP address to listen_addresses. By default, listen_addresses makes Postgres bind to localhost only.
@DzamoNorton, thanks for the hint! I updated my answer accordingly.
@helmbert Is host-ip the IP address of the virtual machine or of the Docker container?
Chris

Simple Solution

Docker 18.03 onwards offers a built-in solution for this. Inside your Docker container, simply set the DB host to host.docker.internal; it will be resolved to the host the Docker container is running on.

Documentation for this is here: https://docs.docker.com/docker-for-mac/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
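For example (a sketch, assuming a hypothetical image name and environment variable names):

$ docker run -e DB_HOST=host.docker.internal -e DB_PORT=5432 my_app

You can verify that the name resolves from inside a container with something like:

$ docker run --rm alpine /bin/sh -c 'getent hosts host.docker.internal'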


This is by far the best answer now! Way easier, and how it should be.
Is host.docker.internal limited to Macs only?
@Dragas According to the docs it will "not work outside of the Mac", but the same DNS name is mentioned in the docs for Docker for Windows, so I think it's limited to "Docker for...". Either way, it's development-only: you're not meant to ship a Docker image that uses this.
Only works with Windows and Mac. In my case, Ubuntu is not working.
This works for WSL2 Ubuntu. My specific case: a WSL2 Docker container (Flask app) accessing a WSL2-installed PostgreSQL.
baxang

Docker for Mac solution

17.06 onwards

Thanks to @Birchlabs' comment, it is now much easier with this special Mac-only DNS name:

docker run -e DB_PORT=5432 -e DB_HOST=docker.for.mac.host.internal my_app

From 17.12.0-ce-mac46, docker.for.mac.host.internal should be used instead of docker.for.mac.localhost. See the release notes for details.

Older version

@helmbert's answer explains the issue well. But Docker for Mac does not expose the bridge network, so I had to use this trick to work around the limitation:

$ sudo ifconfig lo0 alias 10.200.10.1/24

Open /usr/local/var/postgres/pg_hba.conf and add this line:

host    all             all             10.200.10.1/24            trust

Open /usr/local/var/postgres/postgresql.conf and change listen_addresses:

listen_addresses = '*'

Reload the service and launch your container:

$ PGDATA=/usr/local/var/postgres pg_ctl reload
$ docker run -e DB_PORT=5432 -e DB_HOST=10.200.10.1 my_app 

What this workaround does is basically the same as @helmbert's answer, but it uses an IP address attached to lo0 instead of the docker0 network interface.
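To sanity-check the alias before starting your app container (a sketch; the role and database names are whatever your setup uses):

$ ifconfig lo0 | grep 10.200.10.1
$ psql -h 10.200.10.1 -U postgres -c 'SELECT 1'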


Is this still current as of 4 April 2017?
I like this way, which does not expose the database. BTW, can I use this on CentOS? I got the error alias: Unknown host when I tried the alias command you provided.
There is a better way on macOS, as of Docker 17.06.0-rc1-ce-mac13 (June 1st 2017): containers recognise the host docker.for.mac.localhost. This is the IP of your host machine. Look up its entry in the container's hosts database like so: docker run alpine /bin/sh -c 'getent hosts docker.for.mac.localhost'
Seems like it changed to host.docker.internal as of 18.03; the other options are still available but deprecated (Source).
Max Malysh

Simple solution

Just add --network=host to docker run. That's all!

This way the container will use the host's network, so localhost and 127.0.0.1 will point to the host (by default they point to the container). Example:

docker run -d --network=host \
  -e "DB_DBNAME=your_db" \
  -e "DB_PORT=5432" \
  -e "DB_USER=your_db_user" \
  -e "DB_PASS=your_db_password" \
  -e "DB_HOST=127.0.0.1" \
  --name foobar foo/bar
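A quick way to check that the host's Postgres is reachable in this mode (a sketch using the stock postgres image; any recent tag will do):

$ docker run --rm --network=host postgres:13 pg_isready -h 127.0.0.1 -p 5432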

Be careful! This solution, which is the right one in my opinion, does not work on macOS. Don't waste your time trying to figure out why it is not working; take a look at github.com/docker/for-mac/issues/2716
Works on Debian. Tried changing postgresql.conf and pg_hba.conf, but this is simpler and faster.
Also, you cannot publish ports with this approach 😞
This worked for me on Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-1022-aws x86_64), thanks
singhpradeep

The solutions posted here did not work for me. Therefore, I am posting this answer to help anyone facing a similar issue.

Note: This solution works for Windows 10 as well; please check the comment below.

OS: Ubuntu 18
PostgreSQL: 9.5 (hosted on Ubuntu)
Docker: server application (which connects to PostgreSQL)

I am using docker-compose.yml to build the application.

STEP 1: Add host.docker.internal:<docker0 IP> to extra_hosts:

version: '3'
services:
  bank-server:
    ...
    depends_on:
      ....
    restart: on-failure
    ports:
      - 9090:9090
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

To find the docker0 IP, i.e. 172.17.0.1 in my case, you can use:

$> ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

OR

$> ip a
1: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

STEP 2: In postgresql.conf, change listen_addresses to listen_addresses = '*'

STEP 3: In pg_hba.conf, add this line

host    all             all             0.0.0.0/0               md5

STEP 4: Now restart the PostgreSQL service using sudo service postgresql restart

STEP 5: Use the host.docker.internal hostname to connect to the database from the server application.
Ex: jdbc:postgresql://host.docker.internal:5432/bankDB
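To confirm the extra_hosts entry took effect, you can inspect the container's hosts file and look for a line mapping host.docker.internal to 172.17.0.1 (a sketch; bank-server is the service name from the compose file above):

$> docker-compose exec bank-server cat /etc/hosts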

Enjoy!!


sudo nano /etc/postgresql/<your_version>/main/postgresql.conf for those who want to edit postgres conf
I have validated these steps on Windows 10 with Docker. This is working well. Only things to note: use ipconfig in Step 1, and in Step 4, to restart the PostgreSQL service, go to Control Panel -> Administrative Tools -> View Local Services, then search for the PostgreSQL service and restart it.
Thank you, steps 2 and 3 solved my problem. In my case, I set the method for the new pg_hba entry to password. Also, you don't have to open access to all IPs (0.0.0.0/0); you can instead use the Docker bridge IP (use ip -h -c a to find it), which in my case was 172.17.0.1/16.
Harlin

To set up something simple that allows a PostgreSQL connection from the Docker container to my localhost, I used this in postgresql.conf:

listen_addresses = '*'

And added this to pg_hba.conf:

host    all             all             172.17.0.0/16           password

Then restart PostgreSQL. My client in the Docker container (which was at 172.17.0.2) could then connect to PostgreSQL running on my localhost using the host address, database, username, and password.
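From inside the container, the host is then reachable at the docker0 gateway address (typically 172.17.0.1); a sketch with placeholder database and user names:

$ psql -h 172.17.0.1 -U myuser -d mydb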


Shubham

You can pass --network=host to the docker run command to access localhost inside the container.

Ex:

docker run --network=host docker-image-name:latest

In case you want to pass environment variables along with this, use the --env-file parameter to access them inside the container.

Ex:

docker run --network=host --env-file .env-file-name docker-image-name:latest

Note: pass the parameters before the Docker image name, otherwise they will not work. (I ran into this, so heads up!)
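A minimal .env file for this pattern might look like this (hypothetical variable names; with --network=host the database host can simply be localhost):

DB_HOST=localhost
DB_PORT=5432
DB_USER=your_db_user
DB_PASS=your_db_password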


How would you write this in docker-compose?
Sarath Ak

For docker-compose you can try just adding

network_mode: "host"

Example:

version: '2'
services:
  feedx:
    build: web
    ports:
    - "127.0.0.1:8000:8000"
    network_mode: "host"

https://docs.docker.com/compose/compose-file/#network_mode
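With network_mode: "host" the service shares the host's network namespace, so the application inside the container can reach the host's Postgres at 127.0.0.1 (a sketch; the environment variable names are placeholders):

    environment:
    - DB_HOST=127.0.0.1
    - DB_PORT=5432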


Adding network_mode: host breaks the connections to other containers.
Sanaulla

On Ubuntu:

First, you have to check whether the Docker database port is open on your system with the following command:

sudo iptables -L -n

Sample OUTPUT:

Chain DOCKER (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.2           tcp dpt:3306
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:80
ACCEPT     tcp  --  0.0.0.0/0            172.17.0.3           tcp dpt:22

Here 3306 is used as the Docker database port on the 172.17.0.2 IP. If this port is not open, run the following command:

sudo iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

Now you can easily access the Docker database from your local system with the following configuration:

  host: 172.17.0.2 
  adapter: mysql
  database: DATABASE_NAME
  port: 3307
  username: DATABASE_USER
  password: DATABASE_PASSWORD
  encoding: utf8

On CentOS:

First, you have to check whether the Docker database port is open in your firewall with the following command:

sudo firewall-cmd --list-all

Sample OUTPUT:

  target: default
  icmp-block-inversion: no
  interfaces: eno79841677
  sources: 
  services: dhcpv6-client ssh
  ports: 3307/tcp
  protocols: 
  masquerade: no
  forward-ports: 
  sourceports: 
  icmp-blocks: 
  rich rules:

Here 3307 is used as the Docker database port on the 172.17.0.2 IP. If this port is not open, run the following command:

sudo firewall-cmd --zone=public --add-port=3307/tcp

On a server, you can add the port permanently:

sudo firewall-cmd --permanent --add-port=3307/tcp
sudo firewall-cmd --reload

Now you can easily access the Docker database from your local system with the above configuration.


I know this is old, but "Now you can easily access the Docker database from your local system with the above configuration" - you have this the wrong way around. He has a local database and a Docker app trying to connect to that local DB, not the other way around.
c9s

You can add multiple listening addresses for better security.

listen_addresses = 'localhost,172.17.0.1'

Adding listen_addresses = '*' isn't a good option; it is very dangerous and exposes your PostgreSQL database to the wild west.
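Pairing this with a narrow pg_hba.conf rule keeps exposure limited to the Docker bridge network (a sketch; pick the auth method you actually use):

host    all             all             172.17.0.0/16           md5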


Abdul Mannan

Just in case the above solutions don't work for anyone, use the statement below to connect from Docker to the host's Postgres (on Mac):

psql --host docker.for.mac.host.internal -U postgres
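On Docker 18.03 and newer, the equivalent name is host.docker.internal (see the answers above), so the same check would be:

psql --host host.docker.internal -U postgres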

Dharman

Let me try to explain what I did.

Postgresql

First of all, I did the configuration needed to make sure my Postgres database was accepting connections from outside.

Open pg_hba.conf and add the following line at the end:

host    all             all             0.0.0.0/0               md5

Open postgresql.conf, look for listen_addresses, and modify it like this:

listen_addresses = '*'

Make sure the line above is not commented with a #

-> Restart your database

Note: This is not the recommended configuration for a production environment.

Next, I looked for my host's IP. I was using localhost's IP, 127.0.0.1, but the container doesn't see it, so the Connection refused message from the question shows up when running the container. After a long search on the web about this, I read that the container sees the internal IP from your local network (the one your router assigns to every device that connects to it; I'm not talking about the IP that gives you access to the internet). That said, I opened a terminal and did the following:

Look for local network ip

Open a terminal or CMD

(MacOS/Linux)

$ ifconfig

(Windows)

$ ipconfig

This command will show your network configuration information. And looks like this:

en4: 
    ether d0:37:45:da:1b:6e 
    inet6 fe80::188d:ddbe:9796:8411%en4 prefixlen 64 secured scopeid 0x7 
    inet 192.168.0.103 netmask 0xffffff00 broadcast 192.168.0.255
    nd6 options=201<PERFORMNUD,DAD>
    media: autoselect (100baseTX <full-duplex>)
    status: active

Look for the one that is active.

In my case, my local network IP was 192.168.0.103.

With this done, I ran the container.

Docker

Run the container with the --add-host parameter, like this:

$ docker run --add-host=aNameForYourDataBaseHost:yourLocalNetWorkIp --name containerName -di -p HostsportToBind:containerPort imageNameOrId

In my case I did:

$ docker run --add-host=db:192.168.0.103 --name myCon -di -p 8000:8000 myImage

I'm using Django, so port 8000 is the default.

Django Application

The configuration to access the database was:

In settings.py

DATABASES = {
    'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'myDataBaseName',
            'USER': 'username',
            'PASSWORD': '123456',
            'HOST': '192.168.0.103',
            'PORT': 5432,
    }
}
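Since the container was started with --add-host=db:192.168.0.103, the HOST entry could equally be set to the alias instead of the raw IP, which keeps the address out of the settings file (a sketch):

            'HOST': 'db',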

References

About -p flag: Connect using network port mapping

About docker run: Docker run documentation

Interesting article: Docker Tip #35: Connect to a Database Running on Your Docker Host


goto

One more thing needed for my setup was to add

172.17.0.1  localhost

to /etc/hosts

so that Docker would point to 172.17.0.1 as the DB hostname and not rely on a changing outer IP to find the DB. Hope this helps someone else with this issue!


This is a bad solution. Localhost should typically point to 127.0.0.1. Changing it might have undesired consequences, even if in this particular case it works.
A better way is to set up a database host with --add-host=database:172.17.0.1 when running the container. Then point your app to that host. This avoids hard-coding an IP address inside a container.
The --add-host=database:172.17.0.1 approach is preferable.
Hrishi

Another solution is a service volume. You can define a service volume and mount the host's PostgreSQL data directory in that volume. Check out the compose file below for details.

version: '2'
services:
  db:   
    image: postgres:9.6.1
    volumes:
      - "/var/lib/postgresql/data:/var/lib/postgresql/data" 
    ports:
      - "5432:5432"

By doing this, another PostgreSQL service will run inside the container but use the same data directory that the host PostgreSQL service is using.


This will probably cause write conflicts with a running host PostgreSQL service
I think stopping the host service will solve the problem in that case.