
How to persist data in a dockerized postgres database using volumes

My docker-compose file has three containers: web, nginx, and postgres. The postgres service looks like this:

postgres:
  container_name: postgres
  restart: always
  image: postgres:latest
  volumes:
    - ./database:/var/lib/postgresql
  ports:
    - 5432:5432

My goal is to mount a local folder called ./database inside the postgres container as /var/lib/postgresql. When I start these containers and insert data into postgres, I can verify that /var/lib/postgresql/data/base/ (in the postgres container) is full of the data I'm adding, but on my local system ./database only gets a data folder created inside it, i.e. ./database/data exists but is empty. Why?
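(One way to see the mismatch from both sides, assuming the container name from the compose file above:)

    # inside the container, the cluster files exist
    docker exec postgres ls /var/lib/postgresql/data/base

    # on the host, the bind-mounted folder stays empty
    ls ./database/data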

Notes:

This suggests my above file should work.

This person is using docker services, which is interesting.

UPDATE 1

Per Nick's suggestion, I did a docker inspect and found:

    "Mounts": [
        {
            "Source": "/Users/alex/Documents/MyApp/database",
            "Destination": "/var/lib/postgresql",
            "Mode": "rw",
            "RW": true,
            "Propagation": "rprivate"
        },
        {
            "Name": "e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35",
            "Source": "/var/lib/docker/volumes/e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35/_data",
            "Destination": "/var/lib/postgresql/data",
            "Driver": "local",
            "Mode": "",
            "RW": true,
            "Propagation": ""
        }
    ],

This makes it seem like the data is going into another volume that I didn't define myself. I'm not sure why. Is the postgres image creating that volume for me? If so, is there some way to reuse that volume when I restart? Otherwise, is there a good way of disabling that other volume and using my own ./database instead?
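(One way to check whether the image itself declares that volume, assuming the postgres:latest image from the compose file:)

    docker image inspect --format '{{ json .Config.Volumes }}' postgres:latest
    # expected output: {"/var/lib/postgresql/data":{}}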

UPDATE 2

I found the solution, thanks to Nick! (and another friend) Answer below.

Do you already run the initdb command line to initialize your database cluster?
Are you sure your data subdirectory is really empty? It might have special access permissions.
Thanks for getting back to me so fast! I'm using a flask app, so I do from app import db and db.create_all() via docker run after starting the containers. I don't run initdb directly from the command line.
@YaroslavStavnichiy I don't know how else to check that than to sudo su - and look in ./database/data. There's nothing in there as far as I can tell.
Someone might find this useful: a sample compose file persisting postgres, Elasticsearch, and media data: stackoverflow.com/a/56475980/5180118

Alex Lenail

Strangely enough, the solution ended up being to change

volumes:
  - ./postgres-data:/var/lib/postgresql

to

volumes:
  - ./postgres-data:/var/lib/postgresql/data
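(Putting it together, a minimal sketch of the corrected service, reusing the names and ports from the question:)

    postgres:
      container_name: postgres
      restart: always
      image: postgres:latest
      volumes:
        # bind the host folder over the image's declared data volume,
        # so no anonymous volume gets created for the cluster
        - ./postgres-data:/var/lib/postgresql/data
      ports:
        - 5432:5432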

Just a quick "why" for this answer (which works). Per the postgres folks, the default data directory is /var/lib/postgresql/data - you can read the PGDATA variable notes here: store.docker.com/images/…
In the question above, the OP says it worked for him without the /data at the end. Is that correct?
And add the local directory to your .dockerignore file, especially if you'll ever trun this into a production image. See codefresh.io/blog/not-ignore-dockerignore for a discussion.
this does still not work for me (mac os x high sierra)
@OlliD-Metz I had to do a docker rm my_postgres_container_1 before it worked (also High Sierra).
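(Related to the PGDATA note above: the image also lets you relocate the cluster instead of moving the mount point. A sketch, where the pgdata subdirectory name is an arbitrary choice:)

    postgres:
      image: postgres:latest
      environment:
        # PGDATA moves the cluster into a subdirectory of the mount
        - PGDATA=/var/lib/postgresql/data/pgdata
      volumes:
        - ./postgres-data:/var/lib/postgresql/data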
Community

You can create a common volume for all Postgres data

 docker volume create pgdata

or you can define it in the compose file:

   version: "3"
   services:
     db:
       image: postgres
       environment:
         - POSTGRES_USER=postgres
         - POSTGRES_PASSWORD=postgress
         - POSTGRES_DB=postgres
       ports:
         - "5433:5432"
       volumes:
         - pgdata:/var/lib/postgresql/data
       networks:
         - suruse
   volumes: 
     pgdata:

This will create a volume named pgdata and mount it at the container's data path. Note that when the volume is declared in the compose file, compose creates and manages it (prefixed with the project name); to reuse a volume created beforehand with docker volume create, declare it with external: true under the top-level volumes key.
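(If you go the docker volume create route, a sketch of the equivalent docker run, with the ports and credentials assumed from the compose file above:)

    docker run -d --name db \
      -e POSTGRES_USER=postgres \
      -e POSTGRES_PASSWORD=postgres \
      -e POSTGRES_DB=postgres \
      -p 5433:5432 \
      -v pgdata:/var/lib/postgresql/data \
      postgres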

You can inspect this volume

docker volume inspect pgdata

// output will be
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
        "Name": "pgdata",
        "Options": {},
        "Scope": "local"
    }
]

Commenting a bit late, but won't this clear the data if I do a docker-compose down -v? And what is the solution to that?
@Sid, yes, it will! Just be careful with this option.
So with docker-compose down -v the volume is no longer persisted? Does that do a full cleanup, even of the volume?
@Sid Commenting even later, but you can use docker-compose down --rmi all without the -v option and it'll clear out "everything" except the volumes, i.e. containers, networks, images, etc. I do that when deploying while still allowing data to persist (see the sketch after these comments).
What is the suruse network for?
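(A minimal sketch of the difference discussed in these comments:)

    docker-compose up -d
    docker-compose down      # removes containers and networks; the pgdata volume survives
    docker-compose up -d     # the data is still there
    docker-compose down -v   # also removes named volumes; the data is gone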
Nick Burke

I would avoid using a relative path. Remember that docker is a daemon/client relationship.

When you execute the compose file, it essentially breaks down into various docker client commands, which are then passed to the daemon. That ./database is then resolved relative to the daemon, not the client.

Now, the docker dev team has gone back and forth on this issue, but the bottom line is that it can have some unexpected results.

In short: don't use a relative path, use an absolute path.
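(A sketch combining this advice with the /data fix from the accepted answer, using the absolute host path that appears in the inspect output above:)

    volumes:
      - /Users/alex/Documents/MyApp/database:/var/lib/postgresql/data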


Thanks for this answer! Sadly, I don't think it worked. I changed the line to an absolute path, and after inserting the data, the database/data folder is still empty =(
OK. Next step is to run docker inspect on the container and make sure that the container is aware of the volume (just in case compose is confused or something); a focused form is sketched after these comments. (Note: docker inspect output can contain sensitive data, so don't paste it here without munging ;-) After that, it's a matter of checking permissions (although that would usually show an error).
Aha! @Nick Burke I think you've found something. I've updated the question.
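(A focused form of that inspect, printing only the mounts; the container name is taken from the question:)

    docker inspect --format '{{ json .Mounts }}' postgres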
leldo

I think you just need to create your volume outside docker first, with docker create -v /location --name <name> <image>, and then reuse it (sketched below).

Back when I used docker a lot, it wasn't possible to use a pre-existing volume in a Dockerfile definition, so my suggestion is to try the command line (possibly with a script).
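(A sketch of that older data-container pattern; the names pg-data and my-postgres are hypothetical:)

    # data-only container that owns the volume (hypothetical names)
    docker create -v /var/lib/postgresql/data --name pg-data postgres /bin/true
    # reuse its volume in the actual database container
    docker run -d --name my-postgres --volumes-from pg-data postgres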