
How to run a cron job inside a docker container?

I am trying to run a cronjob inside a docker container that invokes a shell script.

I have been searching all over the web and Stack Overflow, but I could not find a solution that works. How can I do this?


VonC

You can copy your crontab into an image, in order for the container launched from said image to run the job.

See "Run a cron job with Docker" from Julien Boulay in his Ekito/docker-cron:

Let’s create a new file called "hello-cron" to describe our job.

# must be ended with a new line "LF" (Unix) and not "CRLF" (Windows)
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.

If you are wondering what 2>&1 means, Ayman Hourieh explains it.
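In short, 2>&1 sends stderr to wherever stdout is currently going, so errors end up in the same log file. An illustrative line (some_command is a placeholder):

* * * * * some_command >> /var/log/cron.log 2>&1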

The following Dockerfile describes all the steps to build your image:

FROM ubuntu:latest
MAINTAINER docker@ekito.fr

RUN apt-get update && apt-get -y install cron

# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
 
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron

# Apply cron job
RUN crontab /etc/cron.d/hello-cron
 
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
 
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

(See Gaafar's comment and "How do I make apt-get install less noisy?":
apt-get -y install -qq --force-yes cron can work too.)

As noted by Nathan Lloyd in the comments:

Quick note about a gotcha: If you're adding a script file and telling cron to run it, remember to RUN chmod 0744 /the_script. Cron fails silently if you forget.
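For example, a minimal sketch of that pattern (file names here are illustrative):

# Dockerfile
COPY the_script.sh /the_script.sh
RUN chmod 0744 /the_script.sh

# hello-cron
* * * * * /the_script.sh >> /var/log/cron.log 2>&1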

OR, make sure your job itself redirects directly to stdout/stderr instead of a log file, as described in hugoShaka's answer:

 * * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2

Replace the last Dockerfile line with

CMD ["cron", "-f"]

See also "docker ubuntu cron -f is not working" (about cron -f, that is, running cron in the foreground).

Build and run it:

sudo docker build --rm -t ekito/cron-example .
sudo docker run -t -i ekito/cron-example

Be patient, wait for 2 minutes, and your command line should display:

Hello world
Hello world

Eric adds in the comments:

Do note that tail may not display the correct file if it is created during image build. If that is the case, you need to create or touch the file during container runtime in order for tail to pick up the correct file.

See "Output of tail -f at the end of a docker CMD is not showing".

See more in "Running Cron in Docker" (Apr. 2021) from Jason Kulatunga, as he commented below.

See Jason's image AnalogJ/docker-cron based on:

Dockerfile installing cronie/crond, depending on distribution.

an entrypoint initializing /etc/environment and then calling cron -f -l 2


You should probably add -y when installing cron, to avoid docker build exiting.
Does this solution still work? When I follow the guidelines and log into the container as root and type crontab -l, I get No crontab installed for root, and my screen remains blank. However, when I check /etc/cron.d/, I see the crontab file is there, and (even more surprisingly) when I check /var/log/cron.log, I see that the script is running (the file content is being appended with Hello World). I'm pulling this image in my Dockerfile: FROM phusion/baseimage:0.10.0. Any ideas about the discrepancy in behaviour?
As of 2018, this approach no longer works; has anyone been able to get their cronjob to work with Ubuntu as the base image? I'm not interested in the Alpine image which comes with cron running out of the box
I wrote a blog post that implements this advice (and other issues I found running cron in docker) into working docker images for multiple distros (ubuntu, alpine, centos): blog.thesparktree.com/cron-in-docker
Evgeniy Berezovsky

The accepted answer may be dangerous in a production environment.

In Docker you should only execute one process per container; otherwise, the process that forked and went into the background is not monitored and may stop without you knowing it.

When you use CMD cron && tail -f /var/log/cron.log, cron forks in order to run in the background, the main process exits, and tail runs in the foreground. The background cron process could stop or fail and you won't notice: your container will still run silently and your orchestration tool will not restart it.

You can avoid such a thing by redirecting your cron commands' output directly to your container's stdout and stderr, which are located at /proc/1/fd/1 and /proc/1/fd/2 respectively.

Using basic shell redirects, you may want to do something like this:

* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2

And your CMD will be: CMD ["cron", "-f"]
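Put together, a minimal sketch of this variant could look like the following (it reuses the hello-cron file name from above; note the extra user field, which files read directly from /etc/cron.d require):

# hello-cron (must end with a newline)
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2

# Dockerfile
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
COPY hello-cron /etc/cron.d/hello-cron
RUN chmod 0644 /etc/cron.d/hello-cron
CMD ["cron", "-f"]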


Nice: cron -f is for "cron foreground". I have included your answer in mine above, for more visibility. +1
Let's say my program doesn't output anything. Can I still use this method and be sure my process isn't going to stop in the background?
@Arcsector this method avoids putting a process in the background, that's why it does not fail silently. Having a background process in a Docker container is not simple. If you want a running background process you might want to use an init process to monitor the multiple processes you run in the container. Another way is to start the process in another container next to the main one, called a 'sidecar'. The best way is often to avoid multiple processes in the container.
This is a good solution and works well for us aside from one issue. When the container receives a SIGTERM signal it doesn't seem to wait for the scheduled process to finish and shut down gracefully; instead it kills the process, which can cause issues.
This solution worked for me on Debian/Alpine/CentOS containers. This is the most "portable" solution. Thanks for this @hugoShaka
Oscar Fanelli

For those who want to use a simple and lightweight image:

FROM alpine:3.6

# copy crontabs for root user
COPY config/cronjobs /etc/crontabs/root

# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]

Where cronjobs is the file that contains your cronjobs, in this form:

* * * * * echo "hello stackoverflow" >> /test_file 2>&1
# remember to end this file with an empty new line
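To try it out (the image and container names here are arbitrary):

docker build -t cron-alpine .
docker run -d --name cron-alpine cron-alpine
docker logs cron-alpine                  # crond activity, since -d 8 logs to stderr
docker exec cron-alpine tail /test_file  # after a minute or two, output written by the job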

Simple, light, and based on a standard image. This should be the accepted answer. Also use the > /proc/1/fd/1 2> /proc/1/fd/2 redirection to access the cron jobs' output directly from the docker logs.
For people not using alpine: the crond supporting the -d 8 parameter is not the standard cron, it is the crond command from busybox. For example from ubuntu, you can run this as busybox crond -f -d 8. For older versions you have to use -L /dev/stdout.
I would give this +100 if I could. This is by far the best way to run cron jobs in a Docker environment.
can this be done entirely by docker-compose.yml with an image:alpine?
CMD ["crond" or CMD ["cron"?
Youness

What @VonC has suggested is nice, but I prefer doing all the cron job configuration in one line. This avoids cross-platform issues like the cron job file location, and you don't need a separate cron file.

FROM ubuntu:latest

# Install cron
RUN apt-get -y install cron

# Create the log file to be able to run tail
RUN touch /var/log/cron.log

# Setup cron job
RUN (crontab -l ; echo '* * * * * echo "Hello world" >> /var/log/cron.log') | crontab

# Run the command on container startup
CMD cron && tail -f /var/log/cron.log

After running your Docker container, you can check whether the cron service is working:

# To check if the job is scheduled
docker exec -ti <your-container-id> bash -c "crontab -l"
# To check if the cron service is running
docker exec -ti <your-container-id> bash -c "pgrep cron"

If you prefer to have ENTRYPOINT instead of CMD, then you can substitute the CMD above with

ENTRYPOINT cron && tail -f /var/log/cron.log

RUN apt-get update && apt-get -y install cron or else it will not be able to find package cron
Thanks Youness, you gave me the idea of doing the following, which worked in my case where each cron is specified in a different file: RUN cat $APP_HOME/crons/* | crontab. Works like a charm :)
adding cron to an entrypoint script seems like the best option: ENTRYPOINT ["entrypoint.sh"]
Using 2 commands in your ENTRYPOINT is dangerous. I believe the first one (cron) forks to the background, while the 2nd one (tail) runs in the foreground. If cron stops, you'll never know it. If tail stops, then Docker will notice.
That makes sense to some extent, though you can add some monitoring/logging around it (with another entrypoint or some other monitoring mechanism) to check the health status of the cron service.
OPSXCQ

There is another way to do it: use Tasker, a task runner that has cron (scheduler) support.

Why? Sometimes, to run a cron job, you have to mix your base image (Python, Java, Node.js, Ruby) with crond, which means another image to maintain. Tasker avoids that by decoupling crond from your container. You can just focus on the image that you want to execute your commands in, and configure Tasker to use it.

Here is a docker-compose.yml file that will run some tasks for you:

version: "2"

services:
    tasker:
        image: strm/tasker
        volumes:
            - "/var/run/docker.sock:/var/run/docker.sock"
        environment:
            configuration: |
                logging:
                    level:
                        ROOT: WARN
                        org.springframework.web: WARN
                        sh.strm: DEBUG
                schedule:
                    - every: minute
                      task: hello
                    - every: minute
                      task: helloFromPython
                    - every: minute
                      task: helloFromNode
                tasks:
                    docker:
                        - name: hello
                          image: debian:jessie
                          script:
                              - echo Hello world from Tasker
                        - name: helloFromPython
                          image: python:3-slim
                          script:
                              - python -c 'print("Hello world from python")'
                        - name: helloFromNode
                          image: node:8
                          script:
                              - node -e 'console.log("Hello from node")'

There are 3 tasks there; all of them will run every minute (every: minute), and each of them will execute the script code inside the image defined in its image section.

Just run docker-compose up, and see it working. Here is the Tasker repo with the full documentation:

http://github.com/opsxcq/tasker
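To try it (assuming the configuration above is saved as docker-compose.yml):

docker-compose up -d
docker-compose logs -f tasker   # task output should appear here, since sh.strm logging is set to DEBUG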


Dockerception (running docker containers from another container) is a bad practice and should be limited to continuous integration. A workaround would be to use docker exec on specified containers.
Tasker doesn't use Docker-in-Docker (DinD/Dockerception); note that it is passed the Docker socket as a mapping, so all containers spawned are spawned in the same daemon that Tasker runs in. And if you don't want to run Tasker inside Docker, you can just deploy it like any other application.
I don't get the advantages of using Tasker. Seems like real overkill to me, using Java and sh*** just to run a cron job.
Mixing cron and the base image that you need (Python/Node for example) creates an extra dependency that needs to be maintained and deployed. In that scenario all jobs share the same container, which means you have to worry about cleaning everything up after every job runs. Jobs running on Tasker are idempotent, so you have fewer things to worry about.
funky-future

Though this runs jobs beside a running process in a container via Docker's exec interface, it may be of interest to you.

I've written a daemon that observes containers and schedules jobs, defined in their metadata, on them. Example:

version: '2'

services:
  wordpress:
    image: wordpress
  mysql:
    image: mariadb
    volumes:
      - ./database_dumps:/dumps
    labels:
      deck-chores.dump.command: sh -c "mysqldump --all-databases > /dumps/dump-$$(date -Idate)"
      deck-chores.dump.interval: daily

'Classic', cron-like configuration is also possible.

Here are the docs, here's the image repository.


Thank you. This answer is the most correct for a Docker container environment. No changes to Docker images are needed, only adding a special container for executing tasks; it works like running docker exec <container_name> <some_command> on a schedule.
This is gold! A very elegant and easy-to-use solution, in the paradigm of containers. It gets rid of kludges.
halfer

VonC's answer is pretty thorough. In addition, I'd like to add one thing that helped me. If you just want to run a cron job without tailing a file, you'd be tempted to just remove the && tail -f /var/log/cron.log from the cron command.

However, this will cause the Docker container to exit shortly after starting, because when the cron command completes, Docker thinks the last command has exited and hence kills the container. This can be avoided by running cron in the foreground via cron -f.


Andreas Forslöw

If you're using Docker for Windows, remember that you have to change your line-ending format from CRLF to LF (i.e. from DOS to Unix) if you intend to import your crontab file from Windows into your Ubuntu container. Otherwise, your cron job won't work. Here's a working example:

FROM ubuntu:latest

RUN apt-get update && apt-get -y install cron
RUN apt-get update && apt-get install -y dos2unix

# Add crontab file (from your windows host) to the cron directory
ADD cron/hello-cron /etc/cron.d/hello-cron

# Change line ending format to LF
RUN dos2unix /etc/cron.d/hello-cron

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron

# Apply cron job
RUN crontab /etc/cron.d/hello-cron

# Create the log file to be able to run tail
RUN touch /var/log/hello-cron.log

# Run the command on container startup
CMD cron && tail -f /var/log/hello-cron.log

This actually took me hours to figure out, as debugging cron jobs in docker containers is a tedious task. Hope it helps anyone else out there that can't get their code to work!
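As a quick check (not part of the original answer), you can confirm whether a copied crontab file still has CRLF endings before relying on dos2unix:

cat -A /etc/cron.d/hello-cron   # lines ending in ^M$ still contain CRLF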


This helped solve my issue when trying to get output redirection to work. A command like cat /proc/1/status > /proc/1/fd/1 would return an error from crond stating crond: USER root pid 6 cmd root cat /proc/1/status > /proc/1/fd/1: nonexistent directory/proc/1/fd/1. Changing the line endings to Unix enabled me to run the command successfully. Thanks, this took me more than a few hours to figure out!
Gaurav Tyagi

Unfortunately, none of the above answers worked for me, although all of them led toward my eventual solution. Here is the snippet, in case it helps someone. Thanks.

This can be solved with a bash file: due to Docker's layered architecture, the cron service doesn't get started on its own, so it needs to be started explicitly at container startup.

Simply add a bash file which will start cron and any other required services.

Dockerfile

FROM gradle:6.5.1-jdk11 AS build
# apt
RUN apt-get update
RUN apt-get -y install cron
# Setup cron to run every minute to print (you can add/update your cron here)
RUN touch /var/log/cron-1.log
RUN (crontab -l ; echo "* * * * * echo testing cron.... >> /var/log/cron-1.log 2>&1") | crontab
# entrypoint.sh
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
CMD ["bash","entrypoint.sh"]

entrypoint.sh

#!/bin/sh
service cron start & tail -f /var/log/cron-1.log

If any other service is also required to run along with cron, then add that service with & in the same command, for example: /opt/wildfly/bin/standalone.sh & service cron start & tail -f /var/log/cron-1.log

Once you get into the Docker container, you can see that testing cron.... is printed every minute to the file /var/log/cron-1.log.
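To watch it from the host, a quick check like this works (replace the container ID as appropriate):

docker exec -it <your-container-id> tail -f /var/log/cron-1.log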


Shouldn't it be doing tail -f /var/log/cron-1.log instead of /var/log/cron-2.log, since cron-1.log is where the STDOUT/STDERR is being directed? (Unless I'm missing something)
Yes, correct, that was a typo, /var/log/cron-1.log should be at every place
vinzee

I created a Docker image based on the other answers, which can be used like

docker run -v "/path/to/cron:/etc/cron.d/crontab" gaafar/cron

where /path/to/cron is the absolute path to your crontab file. Or you can use it as a base in a Dockerfile:

FROM gaafar/cron

# COPY crontab file in the cron directory
COPY crontab /etc/cron.d/crontab

# Add your commands here

For reference, the image is here.


Jakob Eriksson

Define the cron job in a dedicated container which runs the command via docker exec against your service.

This gives higher cohesion, and the running script will have access to the environment variables you have defined for your service.

#docker-compose.yml
version: "3.3"
services:
    myservice:
      environment:
        MSG: i'm being cronjobbed, every minute!
      image: alpine
      container_name: myservice
      command: tail -f /dev/null

    cronjobber:
      image: docker:edge
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      container_name: cronjobber
      command: >
          sh -c "
          echo '* * * * * docker exec myservice printenv | grep MSG' > /etc/crontabs/root
          && crond -f"

I was unable to get this to work using docker swarm. Getting myservice unknown errors.
There should be a warning about the high security impact mounting a docker socket has: lvh.io/posts/…
Iljanne

I decided to use busybox, as it is one of the smallest images.

crond is executed in the foreground (-f), logging is sent to stderr (-d), and I didn't change the log level. The crontab file is copied to the default path: /var/spool/cron/crontabs.

FROM busybox:1.33.1

# Usage: crond [-fbS] [-l N] [-d N] [-L LOGFILE] [-c DIR]
#
#   -f  Foreground
#   -b  Background (default)
#   -S  Log to syslog (default)
#   -l N    Set log level. Most verbose 0, default 8
#   -d N    Set log level, log to stderr
#   -L FILE Log to FILE
#   -c DIR  Cron dir. Default:/var/spool/cron/crontabs

COPY crontab /var/spool/cron/crontabs/root

CMD [ "crond", "-f", "-d" ]

Priya

When you deploy your container on another host, just note that it won't start any processes automatically. You need to make sure that the 'cron' service is running inside your container. In our case, I am using Supervisord alongside other services to start the cron service.

[program:misc]
command=/etc/init.d/cron restart
user=root
autostart=true
autorestart=true
stderr_logfile=/var/log/misc-cron.err.log
stdout_logfile=/var/log/misc-cron.out.log
priority=998
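A sketch of how such a config is typically wired into the image (paths assume the Debian/Ubuntu supervisor package; the file name is illustrative):

# Dockerfile
RUN apt-get update && apt-get -y install cron supervisor
COPY misc-cron.conf /etc/supervisor/conf.d/misc-cron.conf
CMD ["/usr/bin/supervisord", "-n"]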

I get an error in supervisor.log that the cron service stopped multiple times and entered a FATAL state. However cron seems to be running in top and executing cronjobs normally. Thanks for this!
Yes, the same thing happened to me as well, but it works as normal, so no need to worry.
himanshuIIITian

Set up a cron job in parallel to a one-time job

Create a script file, say run.sh, with the job that is supposed to run periodically.

#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"

Save and exit.

Use Entrypoint instead of CMD

If you have multiple jobs to kick off during docker containerization, use the entrypoint file to run them all.

The entrypoint file is a script file that comes into action when a docker run command is issued. So, all the steps that we want to run can be put in this script file.

For instance, we have 2 jobs to run:

Run once job: echo “Docker container has been started”

Run periodic job: run.sh

Create entrypoint.sh

#!/bin/bash

# Start the run once job.
echo "Docker container has been started"

# Setup a cron schedule
echo "* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt

crontab scheduler.txt
cron -f

Let’s understand the crontab that has been set up in the file

* * * * *: Cron schedule; the job must run every minute. You can update the schedule based on your requirement.

/run.sh: The path to the script file which is to be run periodically

/var/log/cron.log: The filename to save the output of the scheduled cron job.

2>&1: The error output (if any) will also be redirected to the same output file used above.

Note: Do not forget to add an extra new line, as it makes it a valid crontab. scheduler.txt: the complete cron setup is redirected into this file.

Using System/User specific environment variables in cron

My actual cron job expected most of its arguments as environment variables passed to the docker run command. But, with bash, I was not able to use any of the environment variables that belong to the system or the Docker container.

Then, this came up as a workaround to the problem:

Add the following line in the entrypoint.sh

declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

Update the cron setup and specify:

SHELL=/bin/bash
BASH_ENV=/container.env

At last, your entrypoint.sh should look like

#!/bin/bash

# Start the run once job.
echo "Docker container has been started"

declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

# Setup a cron schedule
echo "SHELL=/bin/bash
BASH_ENV=/container.env
* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt

crontab scheduler.txt
cron -f

Last but not least: create a Dockerfile

FROM ubuntu:16.04
MAINTAINER Himanshu Gupta

# Install cron
RUN apt-get update && apt-get install -y cron

# Add files
ADD run.sh /run.sh
ADD entrypoint.sh /entrypoint.sh

RUN chmod +x /run.sh /entrypoint.sh

ENTRYPOINT /entrypoint.sh

That's it. Build and run the Docker image!
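For example (the image name and environment variable are illustrative):

docker build -t cron-env-example .
docker run -e APP_ENV=production cron-env-example
# the run-once message prints immediately; the periodic output of run.sh
# accumulates in /var/log/cron.log inside the container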


@himanshuIIITian I tried this, the issue is that the script of the "run once job" never returns and also cron -f is not returning, so... this is not working for me, any ideas? thanks
@DoronLevi - can you please share some logs to look into the issue? Or you can check the whole code from here - github.com/nehabhardwaj01/docker-cron
Dan Watts

From the above examples I created this combination:

Alpine Image & Edit Using Crontab in Nano (I hate vi)

FROM alpine

RUN apk update
RUN apk add curl nano

ENV EDITOR=/usr/bin/nano 

# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]

# Shell Access
# docker exec -it <CONTAINERID> /bin/sh

# Example Cron Entry
# crontab -e
# * * * * * echo hello > /proc/1/fd/1 2>/proc/1/fd/2
# DATE/TIME WILL BE IN UTC

That Brazilian Guy

Here's my docker-compose based solution:

  cron:
    image: alpine:3.10
    command: crond -f -d 8
    depends_on:
      - servicename
    volumes:
      - './conf/cron:/etc/crontabs/root:z'
    restart: unless-stopped

The lines with the cron entries are in the ./conf/cron file.

Note: this won't run commands that aren't in the alpine image.
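An illustrative ./conf/cron file (remember that whatever it calls must exist in the alpine image):

* * * * * echo "cron tick" > /proc/1/fd/1 2>/proc/1/fd/2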


pablorsk

This question has a lot of answers, but some are complicated and others have drawbacks. I'll try to explain the problems and deliver a solution.

cron-entrypoint.sh:

#!/bin/bash

# copy machine environment variables to cron environment
printenv | cat - /etc/crontab > temp && mv temp /etc/crontab

## validate cron file
crontab /etc/crontab

# cron service with SIGTERM support
service cron start
trap "service cron stop; exit" SIGINT SIGTERM

# just dump your logs to std output
tail -f  \
    /app/storage/logs/laravel.log \
    /var/log/cron.log \
    & wait $!

Problems solved

environment variables are not available in the cron environment (like env vars or Kubernetes secrets)

it stops when the crontab file is not valid

it stops cron jobs gracefully when the machine receives a SIGTERM signal

For context, I use the previous script on Kubernetes with a Laravel app.


If I run docker stop with this setup, nothing happens, i.e. service cron stop doesn't get executed. If I run the latter manually from within the container, the cron process stops immediately instead of waiting for the cronjobs. cronjobs will still complete their run, so that may be fine. When they are done, the container does not stop either though. What am I missing?
Got it working now. I think the trap handler wasn't triggered, because I defined my entryscript as CMD "/bin/sh" ENTRYPOINT /entrypoint.sh instead of ENTRYPOINT ["/entrypoint.sh"]. That way, it got wrapped in another shell which didn't pass the signals through. I had to do some further steps to actually wait for running cronjobs to finish. Elaborating on your answer over here.
Santiago Vasquez

These lines were the ones that helped me run my pre-scheduled task:

ADD mycron/root /etc/cron.d/root

RUN chmod 0644 /etc/cron.d/root

RUN crontab /etc/cron.d/root

RUN touch /var/log/cron.log

CMD ( cron -f -l 8 & ) && apache2-foreground # <-- run cron

--> My project runs inside: FROM php:7.2-apache


turivishal

When running on some trimmed-down images that restrict root access, I had to add my user to the sudoers file and run cron via sudo.

FROM node:8.6.0
RUN apt-get update && apt-get install -y cron sudo

COPY crontab /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
RUN touch /var/log/cron.log

# Allow node user to start cron daemon with sudo
RUN echo 'node ALL=NOPASSWD: /usr/sbin/cron' >>/etc/sudoers

ENTRYPOINT sudo cron && tail -f /var/log/cron.log

Maybe that helps someone


I believe the node image uses the node user; so maybe you needed to add permissions for that user
random2137

So, my problem was the same. The fix was to change the command section in the docker-compose.yml.

From

command: crontab /etc/crontab && tail -f /etc/crontab

To

command: crontab /etc/crontab

command: tail -f /etc/crontab

The problem was the '&&' between the commands. After deleting this, it was all fine.


Johnson_145

Focusing on gracefully stopping the cronjobs when receiving SIGTERM or SIGQUIT signals (e.g. when running docker stop).

That's not too easy. By default, the cron process just gets killed without paying attention to running cron jobs. I'm elaborating on pablorsk's answer:

Dockerfile:

FROM ubuntu:latest

RUN apt-get update \
    && apt-get -y install cron procps \
    && rm -rf /var/lib/apt/lists/*

# Copy cronjobs file to the cron.d directory
COPY cronjobs /etc/cron.d/cronjobs

# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronjobs

# similarly prepare the default cronjob scripts
COPY run_cronjob.sh /root/run_cronjob.sh
RUN chmod +x /root/run_cronjob.sh
COPY run_cronjob_without_log.sh /root/run_cronjob_without_log.sh
RUN chmod +x /root/run_cronjob_without_log.sh

# Apply cron job
RUN crontab /etc/cron.d/cronjobs

# to gain access to environment variables, we need this additional entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# optionally, change received signal from SIGTERM TO SIGQUIT
#STOPSIGNAL SIGQUIT

# Run the command on container startup
ENTRYPOINT ["/entrypoint.sh"]

entrypoint.sh:

#!/bin/bash
# make global environment variables available within crond, too
printenv | grep -v "no_proxy" >> /etc/environment

# SIGQUIT/SIGTERM-handler
term_handler() {
  echo 'stopping cron'
  service cron stop
  echo 'stopped'
  echo 'waiting'
  x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
  xold=0
  while [ "$x" -gt 0 ]
  do
    if [ "$x" != "$xold" ]; then
      echo "Waiting for $x running cronjob(s):"
      ps u -C run_cronjob.sh
      xold=$x
      sleep 1
    fi
    x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
  done
  echo 'done waiting'
  exit 143; # 128 + 15 -- SIGTERM
}

# cron service with SIGTERM and SIGQUIT support
service cron start
trap "term_handler" QUIT TERM

# endless loop
while true
do
  tail -f /dev/null & wait ${!}
done

cronjobs

* * * * * ./run_cronjob.sh cron1
*/2 * * * * ./run_cronjob.sh cron2
*/3 * * * * ./run_cronjob.sh cron3

Assuming you wrap all your cronjobs in a run_cronjob.sh script. That way, you can execute arbitrary code for which shutdown will wait gracefully.

run_cronjob.sh (optional helper script to keep cronjob definitions clean)

#!/bin/bash

DIR_INCL="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR_INCL" ]]; then DIR_INCL="$PWD"; fi
cd "$DIR_INCL"

# redirect all cronjob output to docker
./run_cronjob_without_log.sh "$@" > /proc/1/fd/1 2>/proc/1/fd/2

run_cronjob_without_log.sh

your_actual_cronjob_src()

Btw, when receiving a SIGKILL the container still shuts down immediately. That way you can use a command like docker-compose stop -t 60 cron-container to wait 60s for cronjobs to finish gracefully, but still terminate them for sure after the timeout.