
How to upgrade docker container after its image changed

Let's say I have pulled the official mysql:5.6.21 image.

I have deployed this image by creating several docker containers.

These containers have been running for some time; then MySQL 5.6.22 is released. The official mysql:5.6 image gets updated with the new release, but my containers still run 5.6.21.

How do I propagate the changes in the image (i.e. upgrade MySQL distro) to all my existing containers? What is the proper Docker way of doing this?

I have made a utility to automate the updating of docker images: github.com/PHPExpertsInc/DockerUpgrader

robsch

After evaluating the answers and studying the topic I'd like to summarize.

The Docker way to upgrade containers seems to be the following:

Application containers should not store application data. That way you can replace the app container with a newer version at any time by executing something like this:

docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
  -e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql

You can store data either on the host (in a directory mounted as a volume) or in special data-only container(s); a sketch of the latter follows the links below. Read more about it:

About volumes (Docker docs)

Tiny Docker Pieces, Loosely Joined (by Tom Offermann)

How to deal with persistent storage (e.g. databases) in Docker (Stack Overflow question)
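For illustration, here is a rough sketch of the data-only container variant (the container names are illustrative; the official mysql image already declares /var/lib/mysql as a volume):

docker create -v /var/lib/mysql --name my-mysql-data mysql:5.6 /bin/true
docker run --name=my-mysql-container --volumes-from=my-mysql-data \
  -e MYSQL_ROOT_PASSWORD=mypwd -d mysql:5.6

Upgrading then follows the same pull/stop/rm/run cycle as above, reusing --volumes-from=my-mysql-data each time so the data survives the replacement.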

Upgrading applications (e.g. with yum/apt-get upgrade) within containers is considered an anti-pattern. Application containers are supposed to be immutable, which guarantees reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won't work).

I'd like to thank everybody who gave their answers, so we could see all different approaches.


What if data migration is needed? The new server cannot mount the data because it is in an old format, it needs to know a migration is happening and change the data representation.
I think image designers should account for that and allow launching custom (e.g. data migration) commands during the container's first run.
@static_rtti How about docker rename my-mysql-container trash-container before creating the new one?
Would there be any all-in-one command to update the container without having to manually stop it, remove it, and create it again (based on the new image that has been pulled)?
How do I recover the command used to create the container? I don't remember all the options I passed.
kMaiSmith

I don't like mounting volumes as a link to a host directory, so I came up with a pattern for upgrading docker containers with entirely docker-managed volumes. Creating a new docker container with --volumes-from <container> gives the new container (built from the updated image) shared ownership of the docker-managed volumes.

docker pull mysql
docker create --volumes-from my_mysql_container [...] --name my_mysql_container_tmp mysql

By not immediately removing the original my_mysql_container, you retain the ability to revert to the known-working container if the upgraded container doesn't have the right data or fails a sanity check.

At this point, I'll usually run whatever backup scripts I have for the container to give myself a safety net in case something goes wrong.
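With the official mysql image, for example, that backup can be a plain mysqldump piped out of the still-running container (the output file name is illustrative):

docker exec my_mysql_container sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql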

docker stop my_mysql_container
docker start my_mysql_container_tmp

Now you have the opportunity to make sure the data you expect to be in the new container is there and run a sanity check.

docker rm my_mysql_container
docker rename my_mysql_container_tmp my_mysql_container

The docker volumes will stick around as long as any container is using them, so you can delete the original container safely. Once the original container is removed, the new container can take over the name of the original, making everything as tidy as it was to begin with.

There are two major advantages to using this pattern for upgrading docker containers. Firstly, it eliminates the need to mount volumes to host directories by allowing volumes to be transferred directly to an upgraded container. Secondly, you are never in a position where there isn't a working docker container; if the upgrade fails, you can easily revert to the previous setup by spinning up the original docker container again.


Why don't you like mounting host volumes inside a Docker container? (I'm doing precisely that so I'm interested in arguments against doing that :- ) I've mounted e.g.: ./postgres-data/:/var/lib/postgres/data — i.e. mounted the host dir ./postgres-data/, inside my PostgreSQL container.)
@KajMagnus I use Docker swarms a lot, and I like to write my containers to work well in a swarm. When I spin up a container in a swarm I have no idea which swarm node the container is going to live on, so I cannot rely on the host path containing the data that I want. Since Docker 1.9 (I think) volumes can be shared across hosts, which makes upgrading and migrating containers a breeze using the method I described. An alternative would be to make sure some network volume is mounted on all of the swarm nodes, but that sounds like a huge pain to maintain.
Thanks! OK, mounting host volumes seems like something I too want to avoid now, at least later on if my app becomes popular and needs to scale to more than one server.
Ronan Fauglas

Just to provide a more general (not MySQL-specific) answer...

In short

Synchronize the local images with the service registry (https://docs.docker.com/compose/compose-file/#image):

docker-compose pull 

Recreate the containers if the docker-compose file or the image has changed:

docker-compose up -d

Background

Container image management is one of the reasons for using docker-compose (see https://docs.docker.com/compose/reference/up/):

If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.

The data management aspect is also covered by docker-compose, through externally mounted "volumes" (see https://docs.docker.com/compose/compose-file/#volumes) or data containers.

This leaves potential backward-compatibility and data-migration issues untouched, but those are application-level issues, not Docker-specific ones, which have to be checked against release notes and tests...
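For context, a minimal docker-compose.yml that this workflow assumes could look like the following (service name, image tag, and volume path are illustrative):

version: '2'
services:
  db:
    image: mysql:5.6
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: mypwd
    volumes:
      - /my/data/dir:/var/lib/mysql

With that in place, docker-compose pull fetches the newer mysql:5.6 image and docker-compose up -d recreates only the db service, leaving the mounted data directory untouched.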


How do you do this with versioning? For example, the new image is foo/image:2 and docker-compose.yml has image: foo/image:1?
While this is surely the way to go, one should be aware that any changes made in the container will still be lost once the container gets recreated. So keeping container changes just within mounted volumes is still necessary.
Great point, Petr Bodnar! The container will get recreated if a new image was indeed fetched, because the base image and the fetched image differ (as with Kubernetes), I believe on something like a compose up. If you are using Docker locally, that fetch will only occur if the local cache does not have the specified tag. If there is actually a newer latest image, Docker will just use the one you already have tagged as latest locally (cached by the first build). You will need to do a pull to get the live latest version, and update the appropriate tags, before you compose up again.
Peter Butkovic

I would like to add that if you want to do this process automatically (download, stop, and restart a new container with the same settings as described by @Yaroslav) you can use Watchtower, a program that auto-updates your containers when their images change: https://github.com/v2tec/watchtower
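For reference, the invocation documented in that repository's README looks roughly like this (Watchtower needs the Docker socket mounted so it can manage the other containers):

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower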


Alexis Tyler

For this answer, assume:

The database name is app_schema

The container name is app_db

The root password is root123

How to update MySQL when storing application data inside the container

This is considered a bad practice, because if you lose the container, you will lose the data. Although it is a bad practice, here is a possible way to do it:

1) Do a database dump as SQL:

docker exec app_db sh -c 'exec mysqldump app_schema -uroot -proot123' > database_dump.sql

2) Update the image:

docker pull mysql:5.6

3) Update the container:

docker rm -f app_db
docker run --name app_db --restart unless-stopped \
-e MYSQL_ROOT_PASSWORD=root123 \
-d mysql:5.6

4) Restore the database dump:

docker exec app_db sh -c 'exec mysql -uroot -proot123 -e "CREATE DATABASE IF NOT EXISTS app_schema"'
docker exec -i app_db sh -c 'exec mysql -uroot -proot123 app_schema' < database_dump.sql

How to update MySQL container using an external volume

Using an external volume is a better way of managing data, and it makes it easier to update MySQL. Losing the container will not lose any data. You can use docker-compose to facilitate managing multi-container Docker applications on a single host:

1) Create the docker-compose.yml file in order to manage your applications:

version: '2'
services:
  app_db:
    image: mysql:5.6
    restart: unless-stopped
    volumes_from:
      - app_db_data
  app_db_data:
    image: mysql:5.6
    command: /bin/true
    volumes:
      - /my/data/dir:/var/lib/mysql

2) Update MySQL (from the same folder as the docker-compose.yml file):

docker-compose pull
docker-compose up -d

Note: the last command above will update the MySQL image, recreate and start the container with the new image.


Let's say I have a huge database (several GB); will my data be inaccessible until the whole database is imported? That could be a huge "downtime".
Since you mentioned docker-compose, will this work? stackoverflow.com/a/31485685/65313
volumes_from key is now deprecated (even removed in version 3 of compose file) in favor of the new volumes key.
docker pull image_uri:tag && docker restart container_running_that_image worked for me. No need for docker-compose pull && docker-compose up -d .
Eddie Jaoude

A similar answer to the one above:

docker images | awk '{print $1}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull

Brilliant! Quite surprised it didn't get more votes. Now the only thing missing would be to trigger a restart of all containers that were updated.
Unfortunately this won't update the existing container. This will just update the pulled image, but the existing container is immutable and still using the original image used to create it. This only works if you create a new container from the image, but any existing container is still based on the original image.
Amazing. If you need to pull the specific version of the container, do it like this: docker images | awk '{print $1":"$2}' | grep -v 'none' | grep -iv 'repo' | xargs -n1 docker pull
gdbj

Here's what it looks like using docker-compose when building a custom Dockerfile.

1) Build your custom Dockerfile first, appending the next version number to differentiate it, e.g. docker build -t imagename:version . This will store your new version locally.
2) Run docker-compose down
3) Edit your docker-compose.yml file to reflect the new image name you set at step 1.
4) Run docker-compose up -d. It will look locally for the image and use your upgraded one.

-EDIT-

My steps above are more verbose than they need to be. I've since optimized my workflow by including the build: . parameter in my docker-compose file (a sketch of the relevant fragment follows the steps below). The steps now look like this:

1) Verify that my Dockerfile is what I want it to look like.
2) Set the version number of my image name in my docker-compose file.
3) If my image isn't built yet: run docker-compose build
4) Run docker-compose up -d
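The relevant fragment of the compose file then looks something like this (service and image names are illustrative):

version: '2'
services:
  app:
    build: .
    image: imagename:2

docker-compose build produces imagename:2 from the local Dockerfile, and docker-compose up -d recreates the container from it.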

I didn't realize at the time, but docker-compose is smart enough to simply update my container to the new image with the one command, instead of having to bring it down first.


In a real situation you can't make those changes by hand. Your solution doesn't support an automated way to solve the problem.
so you're saying that because my solution isn't automated, it's not valid? Is that a requirement from the OP? And are the other answers implying automation? Really confused. And, I think the downvotes are doing a disservice to others coming here. My answer is 100% valid for the question asked.
thanks for this answer, I also didn't realize you could simply run docker-compose up -d without needing to stop everything first.
afe

If you do not want to use Docker Compose, I can recommend Portainer. It has a recreate function that lets you recreate a container while pulling the latest image.


seanmcl

You need to either rebuild all the images and restart all the containers, or somehow yum update the software and restart the database. There is no upgrade path other than the one you design yourself.


What exactly do you mean by restarting containers? There is a docker restart command, but I am not sure it will pick up image changes. And what happens to my data inside the containers?
Sorry, I didn't mean docker restart. I meant docker rm -f CONTAINER; docker run NEW_IMAGE. The data in your SQL container will disappear. That is why people generally use volumes to store the data.
If you have all your data mounted in volumes in separate containers or on the host machine, then, as @seanmcl said, just create new containers with the new mysql connected to the same data. If you did not do that (you should have), you can use the docker exec command (available since Docker 1.3) to update MySQL and restart it inside the container.
gvlx

Taking from http://blog.stefanxo.com/2014/08/update-all-docker-images-at-once/

You can update all your existing images using the following command pipeline:

docker images | awk '/^REPOSITORY|\<none\>/ {next} {print $1}' | xargs -n 1 docker pull

This will update the images, but not the containers. A container is immutable and its base image cannot be changed without creating a new container from the updated image.
Daniel Dinnyes

Make sure you are using volumes for all the persistent data (configuration, logs, or application data) that relates to the state of the processes inside the container. Update your Dockerfile, rebuild the image with the changes you want, and restart the containers with your volumes mounted in their appropriate places.
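As a rough sketch, assuming an image called myapp and a host directory for its data (both names are illustrative):

docker build -t myapp:2 .
docker stop my-app-container
docker rm my-app-container
docker run -d --name my-app-container -v /my/data/dir:/app/data myapp:2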


clopez

This is something I've also been struggling with for my own images. I have a server environment from which I create a Docker image. When I update the server, I'd like all users who are running containers based on my Docker image to be able to upgrade to the latest server.

Ideally, I'd prefer to generate a new version of the Docker image and have all containers based on a previous version of that image automagically update to the new image "in place." But this mechanism doesn't seem to exist.

So the next best design I've been able to come up with so far is to provide a way to have the container update itself--similar to how a desktop application checks for updates and then upgrades itself. In my case, this will probably mean crafting a script that involves Git pulls from a well-known tag.

The image/container doesn't actually change, but the "internals" of that container change. You could imagine doing the same with apt-get, yum, or whatever is appropriate for your environment. Along with this, I'd update the myserver:latest image in the registry so any new containers would be based on the latest image.

I'd be interested in hearing whether there is any prior art that addresses this scenario.


It goes against the concept of immutable infrastructure and some of its benefits. You can test your application/environment to verify that it works, and that is not guaranteed if you update components inside. Splitting container code from data and from configuration lets you update, test what is currently working, and deploy into production, knowing that there is no line of code different between the tested image and the production one. Anyway, the system lets you manage it the way you describe too; it's your choice.
Very good point gmuslera. Agreed that it's an anti-pattern to update the 'internals' of an existing docker container.
So what is the best solution to auto-update docker containers based on docker image updates, delivering the updates to all containers effortlessly?
iTech

Update

This is mainly for querying the containers, not for updating them, since rebuilding images is the way updates should be done.

I had the same issue so I created docker-run, a very simple command-line tool that runs inside a docker container to update packages in other running containers.

It uses docker-py to communicate with running docker containers and update packages, or to run any arbitrary single command.

Examples:

docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run exec

By default this will run the date command in all running containers and return the results, but you can issue any command, e.g. docker-run exec "uname -a"

To update packages (currently only using apt-get):

docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run update

You can create an alias and use it as a regular command-line tool, e.g.

alias docker-run='docker run --rm -v /var/run/docker.sock:/tmp/docker.sock itech/docker-run'


Is this a good idea? (If you do apt update; apt upgrade, the image will grow.)
@yaroslav's approach is a better solution to this problem. The above is not really the Docker way of doing things.
Zedverse07

Tried a bunch of things from here, but this is what eventually worked for me. If your containers have AutoRemove: On you can't stop and edit them, or if a service is running that can't be stopped even momentarily, you must do the following:

Pull the latest image: docker pull [image:latest]. Verify that the correct image was pulled; you will see the "unused" tag in the Portainer Images section.

Update the service using Portainer or the CLI, making sure you use the latest version of the image; Portainer gives you an option to do so.

This not only updates the container with the latest image, but also keeps the service running.
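If the service in question is a Swarm service, the CLI equivalent is roughly this (service and image names are illustrative):

docker service update --image image:latest my-service

Swarm then performs a rolling update of the service's tasks, so the service keeps running while its containers are replaced with the new image.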