
Rebuild Docker container on file changes

For running an ASP.NET Core application, I generated a Dockerfile which builds the application and copies the source code into the container; the source is fetched by Git using Jenkins. So in my workspace, I do the following in the Dockerfile:

WORKDIR /app
COPY src src

While Jenkins updates the files on my host correctly with Git, Docker doesn't apply this to my image.

My basic script for building:

#!/bin/bash
imageName=xx:my-image
containerName=my-container

docker build -t $imageName -f Dockerfile  .

containerRunning=$(docker inspect --format="{{ .State.Running }}" $containerName 2> /dev/null)

if [ "$containerRunning" == "true" ]; then
        docker stop $containerName
        docker start $containerName
else
        docker run -d -p 5000:5000 --name $containerName $imageName
fi

I tried different things, like the --rm and --no-cache parameters, and also stopping/removing the container before the new one is built. I'm not sure what I'm doing wrong here. It seems that Docker is updating the image correctly, as the COPY src src call results in a new layer ID rather than a cache hit:

Step 6 : COPY src src
 ---> 382ef210d8fd

What is the recommended way to update a container?

My typical scenario would be: the application is running on the server in a Docker container. Now parts of the app are updated, e.g. by modifying a file. Now the container should run the new version. Docker seems to recommend building a new image instead of modifying an existing container, so I think the general way of rebuilding as I do it is right, but some detail in the implementation has to be improved.

Can you list the exact steps you've taken to build your container, including your build command and entire output from each command?

Lion

Video with visual explanation (from 2022)

https://i.stack.imgur.com/PgYH2.png

Since I got a lot of positive feedback on my previous, first visual explanation, I decided to create another video for this question and answer, since there are some things which can be visualized better in a graphical video. It visualizes and also updates this answer with the knowledge and experience I have gained over the last years of using Docker on multiple systems (and also K8s).

While this question was asked in the context of ASP.NET Core, it is not really related to this framework. The problem was a lack of basic understanding of Docker concepts, so it can happen with nearly every application and framework. For that reason, I used a simple Nginx web server here, since I think many of you are familiar with web servers, but not everyone knows how specific frameworks like ASP.NET Core work.

The underlying problem is understanding the difference between containers and images and how they differ in their lifecycle, which is the basic topic of this video.

Textual answer (Originally from 2016)

After some research and testing, I found that I had some misunderstandings about the lifetime of Docker containers. Simply restarting a container doesn't make Docker use a new image when the image was rebuilt in the meantime. Instead, Docker reads the image only when creating the container. The state of a once-created container is persistent.
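This can be checked directly. The following sketch uses the image and container names from the question and is guarded so it only does anything where a docker CLI is actually available:

```shell
# Rebuild the image, then restart the container: the container keeps
# running the OLD image, because the image is only read at "docker run".
if command -v docker >/dev/null 2>&1; then
  docker build -t xx:my-image .
  docker restart my-container
  docker inspect --format '{{.Image}}' my-container  # image ID the container was created from
  docker inspect --format '{{.Id}}' xx:my-image      # ID of the freshly built image
fi
```

If the two printed IDs differ after a rebuild, the container is provably still on the old image.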

Why removing is required

Therefore, rebuilding and restarting isn't enough. I thought containers worked like a service: stop the service, apply your changes, restart it, and they would take effect. That was my biggest mistake.

Because containers are permanent, you have to remove them using docker rm <ContainerName> first. After a container is removed, you can't simply start it with docker start. Instead, use docker run, which creates a new container instance from the latest image.

Containers should be as independent as possible

With this knowledge, it's comprehensible why storing data in containers is qualified as bad practice and Docker recommends data volumes or mounting host directories instead: since a container has to be destroyed to update the application, the data stored inside would be lost too. This causes extra work to shut down services, back up data and so on.

So it's a smart solution to exclude that data completely from the container: we don't have to worry about our data when it's stored safely on the host, and the container only holds the application itself.
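As a sketch, the run command from the question only needs an extra -v mount for this. The host path /srv/app-data and container path /var/lib/myapp are made-up examples, and the guard keeps the snippet a no-op where no docker CLI exists:

```shell
# The host directory survives "docker rm my-container";
# only the application container is thrown away on rebuilds.
if command -v docker >/dev/null 2>&1; then
  docker run -d -p 5000:5000 \
    -v /srv/app-data:/var/lib/myapp \
    --name my-container xx:my-image
fi
```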

Why --rm may not really help you

The docker run command has a clean-up switch called --rm. It stops the behavior of keeping containers permanently: with --rm, Docker destroys the container after it has exited. But this switch has a problem: Docker also removes the anonymous (unnamed) volumes associated with the container, which may kill your data.

While the --rm switch is a good option to save work during development for quick tests, it's less suitable in production, especially because of the missing option to run such a container in the background, which would mostly be required.
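A quick throwaway test during development then looks like this (a sketch using the image name from the question; guarded so it does nothing without a docker CLI):

```shell
if command -v docker >/dev/null 2>&1; then
  # Container and its anonymous volumes are deleted as soon as the
  # process exits - convenient for tests, risky for real data.
  docker run --rm -p 5000:5000 xx:my-image
fi
```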

How to remove a container

We can bypass those limitations by simply removing the container:

docker rm --force <ContainerName>

The --force (or -f) switch uses SIGKILL on running containers. Alternatively, you could stop the container first:

docker stop <ContainerName>
docker rm <ContainerName>

Both have the same result; the difference is that docker stop sends SIGTERM first, giving the application a chance to shut down gracefully. But using the --force switch will shorten your script, especially on CI servers: docker stop throws an error if the container is not running. This would cause Jenkins and many other CI servers to wrongly consider the build as failed. To fix this, you would have to check first whether the container is running, as I did in the question (see the containerRunning variable).
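The check can also be avoided entirely: docker rm -f exits non-zero when the container does not exist, but that is easy to swallow in a CI script (a sketch; the container name is the one from the question):

```shell
# Remove the container if present; succeed either way, so the CI
# build is not marked as failed on the very first deployment.
docker rm -f my-container 2>/dev/null || true
```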

There is a better way (Added 2016)

While plain Docker commands like docker build, docker run and others are a good way for beginners to understand the basic concepts, it gets annoying when you're already familiar with Docker and want to be productive. A better way is to use Docker Compose. While it's designed for multi-container environments, it also gives you benefits when used standalone with a single container. Although multi-container environments aren't really uncommon: nearly every application has at least an application server and some database, and some have even more, like caching servers, cron containers or other things.

version: "2.4"
services:
  my-container:
    build: .
    ports:
      - "5000:5000"

Now you can just use docker-compose up --build and Compose will take care of all the steps I did manually. I'd prefer this over the script with plain Docker commands, which I added as my answer from 2016. It still works, but is more complex, and it handles certain situations not as well as docker-compose would. For example, Compose checks whether everything is up to date and only rebuilds the parts that need to be rebuilt because of changes.

Especially when you're using multiple containers, Compose offers many more benefits, for example linking the containers, which otherwise requires creating and maintaining networks manually. You can also specify dependencies, so that a database container is started before the application server, which depends on the DB at startup.
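A sketch of such a dependency (the db service and its image are made-up examples, using the same compose schema as above):

```yaml
version: "2.4"
services:
  db:
    image: postgres:14
  my-container:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db    # db is started before my-container
```

Note that depends_on only controls the start order; waiting until the database actually accepts connections needs a healthcheck or retry logic in the application.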

In the past, with Docker-Compose 1.x, I noticed some issues, especially with caching. This resulted in containers not being updated, even when something had changed. I have tested Compose v2 for some time now without seeing any of those issues again, so it seems to be fixed now.

Full script for rebuilding a Docker container (original answer from 2016)

According to this new knowledge, I fixed my script in the following way:

#!/bin/bash
imageName=xx:my-image
containerName=my-container

docker build -t $imageName -f Dockerfile  .

echo Delete old container...
docker rm -f $containerName

echo Run new container...
docker run -d -p 5000:5000 --name $containerName $imageName

This works perfectly :)


'I found that I had some misunderstandings about the lifetime of Docker containers' - you took the words right out of my mouth. Thank you for such a detailed explanation. I would recommend this to Docker newbies. This clarifies the difference between VMs and containers.
After your explanation, what I did was take note of what I did to my existing image. In order to have the changes retained, I created a new Dockerfile to create a new image which already includes the changes I want to add. That way, the new image created is (somewhat) updated.
Is the --force-recreate option on docker compose similar to what you describe here? And if so, wouldn't it be worth using this solution instead (sorry if that question is dumb but I'm a Docker noob ^^)?
@cglacet Yes, it's similar, though not directly comparable. But docker-compose is smarter than the plain docker commands. I work regularly with docker-compose and the change detection works well, so I use --force-recreate very rarely. Just docker-compose up --build is important when you're building a custom image (build directive in the compose file) instead of using an image from e.g. Docker Hub.
how do you use this script?
Community

Whenever changes are made to the Dockerfile, the compose file or the requirements, re-run it using docker-compose up --build so that the images get rebuilt and refreshed.


Having a MySQL Docker container as one service, would the DB be empty after that if one used a volume like /opt/mysql/data:/var/lib/mysql?
To me, there doesn't seem to be any downside to just always using --build in local dev environments. Re-copying the files Docker might otherwise assume don't need copying takes only a couple of milliseconds, and it saves a large number of WTF moments.
This worked for me seamlessly and without problem, thanks.
This should be the accepted answer.
Aamer

You can run the build for a specific service with docker-compose up --build <service name>, where the service name must match how you named it in your docker-compose file.

Example: let's assume that your docker-compose file contains many services (.NET app, database, Let's Encrypt, etc.) and you want to update only the .NET app, which is named application in the docker-compose file. You can then simply run docker-compose up --build application.

Extra parameters: in case you want to add extra parameters to your command, such as -d for running in the background, the parameter must come before the service name: docker-compose up --build -d application.


Brett Sutton

You can force a rebuild starting from the COPY step, rather than having to do a full rebuild.

Add lines similar to:

RUN mkdir -p /BUILD_TOKEN/f7e0188ea2c8466ebf77bf37eb6ab1c1
COPY src src

The mkdir call is just there to give Docker a line it must execute which contains the token we change each time we need a partial rebuild.

Now have your build script replace the UUID whenever you need to force a copy.

In Dart I do:


  if (parsed['clone'] as bool == true) {
    final uuid = const Uuid().v4().replaceAll('-', '');

    replace(dockerfilePath, RegExp('RUN mkdir -p /BUILD_TOKEN/.*'),
        'RUN mkdir -p /BUILD_TOKEN/$uuid');
  }

I then run my build tool as:

build.dart --clone
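The same replacement works in plain shell, for anyone not using Dart. This sketch builds a throwaway Dockerfile so the snippet is self-contained, and uses a timestamp where the Dart code uses a UUID (any unique value invalidates the cached layer):

```shell
# Create a sample Dockerfile containing the token line.
dockerfile=$(mktemp)
printf 'RUN mkdir -p /BUILD_TOKEN/oldtoken\nCOPY src src\n' > "$dockerfile"

# Swap in a fresh token; everything from this line on is rebuilt.
token=$(date +%s%N)
sed -i "s|^RUN mkdir -p /BUILD_TOKEN/.*|RUN mkdir -p /BUILD_TOKEN/$token|" "$dockerfile"

cat "$dockerfile"
```

In a real build script you would of course edit your project's Dockerfile instead of a temp file.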

This is my full Dart script, but it has some extraneous bits:

#! /usr/bin/env dcli

import 'dart:io';

import 'package:dcli/dcli.dart';
import 'package:mongo_dart/mongo_dart.dart';
import 'package:unpubd/src/version/version.g.dart';

/// build and publish the unpubd docker container.
void main(List<String> args) {
  final parser = ArgParser()
    ..addFlag('clean',
        abbr: 'c', help: 'Force a full rebuild of the docker container')
    ..addFlag('clone', abbr: 'l', help: 'Force reclone of the git repo.');

  ArgResults parsed;
  try {
    parsed = parser.parse(args);
  } on FormatException catch (e) {
    print(e);
    print(parser.usage);
    exit(1);
  }
  final dockerfilePath =
      join(DartProject.self.pathToProjectRoot, 'resources', 'Dockerfile');

  'dcli pack'.run;

  print(blue('Building unpubd $packageVersion'));

  final tag = 'noojee/unpubd:$packageVersion';
  const latest = 'noojee/unpubd:latest';

  var clean = '';
  if (parsed['clean'] as bool == true) {
    clean = ' --no-cache';
  }

  if (parsed['clone'] as bool == true) {
    final uuid = const Uuid().v4().replaceAll('-', '');
    replace(dockerfilePath, RegExp('RUN mkdir -p /BUILD_TOKEN/.*'),
        'RUN mkdir -p /BUILD_TOKEN/$uuid');
  }

  'docker  build $clean -t $tag -t $latest -f $dockerfilePath .'.run;

  'docker push noojee/unpubd:$packageVersion'.run;
  'docker push $tag'.run;
  'docker push $latest'.run;
}