
Docker error: no space left on device

I installed docker on a Debian 7 machine in the following way

$ echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list
$ sudo apt-get update
$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh

After that, when I first tried creating an image, it failed with the following error:

 time="2015-06-02T14:26:37-04:00" level=info msg="[8] System error: write /sys/fs/cgroup/docker/01f5670fbee1f6687f58f3a943b1e1bdaec2630197fa4da1b19cc3db7e3d3883/cgroup.procs: no space left on device"

Here is the output of docker info:

Containers: 2
Images: 21
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 25
Dirperm1 Supported: true
Execution Driver: native-0.2
Kernel Version: 3.16.0-0.bpo.4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 2
Total Memory: 15.7 GiB


WARNING: No memory limit support
WARNING: No swap limit support

How can I increase the memory? Where are the system configurations stored?

From Kal's suggestions:

When I got rid of all the images and containers, it did free some space, and the image build ran longer before failing with the same error. So the question is: which space is this referring to, and how do I configure it?

Sometimes, you can hit a per-container size limit, depending on your storage backend. That link shows how to fix it for devicemapper.
I had this error when my disk was out of inodes. Check df -ih
@KevinSmyth Thanks so much for pointing this out. I wasn't even aware of the significance of inode limits before this.
I found that this answer helped me (stackoverflow.com/a/67759303/10869929).

Joshua Cook

The current best practice is:

docker system prune

Note the output from this command prior to accepting the consequences:

WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N]

In other words, continuing with this command is permanent. Keep in mind that best practice is to treat stopped containers as ephemeral, i.e. you should design your work with Docker so that you don't keep these stopped containers around. You may want to consider using the --rm flag at runtime if you are not actively debugging your containers.
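
For example, a throwaway container started with --rm is removed automatically when it exits (the image name here is just an illustration):

$ docker run --rm -it alpine sh    # the container is deleted as soon as the shell exits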

Make sure you read this answer, re: Volumes

You may also be interested in this answer, if docker system prune does not work for you.


As Kevin Smyth pointed out, this error is likely due to you running out of inodes, which you can check with df -ih. To diagnose more surgically, run ncdu, then press c to show file counts and C to sort by file count to get a rough estimate of what is using all your inodes. If the problem is indeed Docker, it will be immediately apparent from the directories using the most inodes.
Really this should be up-voted and made an answer, as it is the correct approach. The build environment got polluted, and hacks here and there might temporarily fix it, but the proper approach is docker system prune
@zhrist Haha I agree
@coler-j Maybe ... if you are thinking in terms of the original, highly specific question. But let's be honest with each other. Most people aren't finding this question because of the OP's obscure use case, but because their docker cache simply ran out of space.
Note that non-dangling images can take up a significant amount of space over time. After the above excellent answer, check docker images and docker rmi as needed.
jpaugh

I had the same error and solved it this way:

1. Delete the orphaned volumes in Docker; you can use the built-in docker volume command. The built-in command also deletes any directory in /var/lib/docker/volumes that is not a volume, so make sure you didn't put anything in there that you want to save.

Warning: be very careful with this if you have data you want to keep.

Cleanup:

$ docker volume rm $(docker volume ls -qf dangling=true)

Additional commands:

List dangling volumes:

$ docker volume ls -qf dangling=true

List all volumes:

$ docker volume ls

2. Also consider removing all the unused images.

First get rid of the <none> images (those are sometimes generated while building an image, and if for any reason the image build was interrupted, they stay there).

Here's a nice script I use to remove them:

docker rmi $(docker images | grep '^<none>' | awk '{print $3}')

Then, if you are using Docker Compose to build images locally for every project, you will end up with a lot of images, usually named after your folder (for example, if your project folder is named Hello, you will find images named Hello_blablabla), so also consider removing all these images.

You can edit the above script to remove them, or remove them manually with:

docker rmi {image-name}


Just a note: awk commands on a Mac must be surrounded with single quotes, not double, otherwise it just gets ignored.
I'm on MAC and it's working for me!! but thanks for the advice.
How strange! It doesn't work for me. Just prints out the same results as the grep. Ah well. Stranger things have happened.
At this point, you can use the same filter for images. docker images -qf dangling=true and of course remove them with docker rmi $(docker images -qf dangling=true).
I get an error: "docker volume rm" requires at least 1 argument(s).
Kal

Check that you have free space on /var as this is where Docker stores the image files by default (in /var/lib/docker).

First clean stuff up by using docker ps -a to list all containers (including stopped ones) and docker rm to remove them; then use docker images to list all the images you have stored and docker rmi to remove them.

Next change the storage location with a -g option on the docker daemon or by editing /etc/default/docker and adding the -g option to DOCKER_OPTS. -g specifies the location of the "Docker runtime" which is basically all the stuff that Docker creates as you build images and run containers. Choose a location with plenty of space as the disk space used will tend to grow over time. If you edit /etc/default/docker, you will need to restart the docker daemon for the change to take effect.
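
For example, with the Debian/Ubuntu packaging of that era, the change might look roughly like this (the path is just an illustration, and the service name can vary with how Docker was installed):

# /etc/default/docker
DOCKER_OPTS="-g /mnt/bigdisk/docker-runtime"

$ sudo service docker restart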

Now you should be able to create a new image (or pull one from Docker Hub) and you should see a bunch of files getting created in the directory you specified with the -g option.


Thanks Kal, I was unable to find documentation on DOCKER_OPTS. What does the -g option mean and what should it be set to? Also can the stuff under docker/aufs/mnt be deleted?
Hey ruby, I don't think I ever found a real document about DOCKER_OPTS, but there are places here and there in the documentation that talk about editing it. The closest I can find is at the end of docs.docker.com/installation/ubuntulinux/… where it talks about editing the DNS settings in DOCKER_OPTS. The options in DOCKER_OPTS are just passed to the daemon, so the reference for that is docs.docker.com/reference/commandline/cli/#daemon. -g sets the base location of the "Docker runtime"
Also can the stuff under docker/aufs/mnt be deleted?
Don't delete that stuff manually. Instead delete any containers (including exited ones) and images you don't need. You should do this before changing the -g option. Use docker ps -a to list all containers (including exited ones) and then docker rm to remove them. Use docker images to list all images and then docker rmi to remove them. Hopefully that should clean everything (or most things) up.
Thanks, so clearing the images and containers cleared up some space. But the newer image still needs more. However, what should the docker runtime be pointing to? Is there a way to just increase the space used by Docker to store images?
RoBeaToZ

As already mentioned,

docker system prune

helps, but from Docker 17.06.1 onward it does so without pruning unused volumes. Since Docker 17.06.1, the following command prunes volumes, too:

docker system prune --volumes

From the Docker documentation: https://docs.docker.com/config/pruning/

The docker system prune command is a shortcut that prunes images, containers, and networks. In Docker 17.06.0 and earlier, volumes are also pruned. In Docker 17.06.1 and higher, you must specify the --volumes flag for docker system prune to prune volumes.
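
One quick way to check which of those rules applies to your installation (a convenience check, not from the quoted documentation) is to print the daemon version:

docker version --format '{{.Server.Version}}'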

If you want to prune volumes and keep images and containers:

docker volume prune

docker volume prune helped me today when all the other solutions here stopped working.
Huge help - in addition to fixing the error, this freed many gigs of space on my hard drive.
docker system prune --volumes worked for me, but I had to manually stop and delete all my containers first. Otherwise the prune command was hanging and could not delete volumes. Maybe I had an unresponsive container.
docker system prune --all --volumes helped me
Guillaume

To remove all unused containers, volumes, networks and images at once (https://docs.docker.com/engine/reference/commandline/system_prune/):

docker system prune -a -f --volumes

If that's not enough, you can remove running containers first:

docker rm -f $(docker ps -a -q)
docker system prune -a -f --volumes

Increasing the space available to /var/lib/docker, or using another location with more space, is also a good alternative to get rid of this error (see How to change the docker image installation directory?)
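
On newer Docker versions, an alternative to the -g flag is the data-root setting in /etc/docker/daemon.json (the path below is only an example):

{
  "data-root": "/mnt/bigdisk/docker"
}

Then restart the daemon (e.g. sudo systemctl restart docker).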


docker system prune doesn't remove volumes.
docker system prune -a -f --volumes will remove volumes.
docker system prune -af --volumes will clean all docker resources created before.
What in the hell. docker system prune removed some 7GB of stuff. docker system prune --volumes some additional 4GB or so and then docker system prune -a -f --volumes removed 575.5GB of content. Wat. This is all space taken up that a disk content analyser doesn't catch. This answer should be higher up. The -a and -f flags can make a huge difference.
davnicwil

Docker for Mac

So docker system prune and docker system prune --volumes suggested in other answers freed up some space each time, but eventually every time I ran anything I was getting the error.

What actually fixed the root issue was deleting the Docker.raw file that Docker for Mac uses for storage, and restarting it.

To find that file open up Docker for Mac and go to*

Preferences > Resources > Advanced > Disk Image Location

*this is for version 2.2.0.5, but on older versions it should be similar

On newer versions of Docker for Mac**, it shows you the actual size of that file on disk right there in the UI, as well as its max allocated size. You'll probably see that it is massive. For example on my machine it was 41GB!

**On older versions, it doesn't show you the actual disk usage in the UI, and MacOS Finder always shows the file size as the max allocated size. You can check the actual size on disk by opening the directory in a terminal and running du -h Docker.raw
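
For reference, on a typical install the file lives under the Docker Desktop data directory (a comment below gives the same path), so the check looks roughly like this:

$ cd ~/Library/Containers/com.docker.docker/Data/vms/0/
$ du -h Docker.raw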

I deleted Docker.raw, restarted Docker for Mac, and the file was automatically created again and was back to being 0GB.

Everything continued to work as before, though of course I had lost my Docker cache. As expected, after running a few Docker commands the file started to fill up again with a few GB of stuff, but nowhere near 41GB.

Update

A few months later, my Docker.raw filled back up again to a similar size. So this method did work, but it has to be repeated every few months. For me that's fine.

A note on why this works - I have to assume it's a bug in Docker for Mac. It really seems like docker system prune / docker system prune --volumes should entirely clear the contents of this file, but it appears the file accumulates other stuff that can't be deleted by these commands. Anyway, deleting it manually solves the problem!


I was getting a socket error saying the daemon couldn't connect. I had to move onto something else but when I checked it again today it was working. Disregard the comment above, sorry about that.
thank you! this worked for me on Mac Mojave 10.14.6. All other commands to delete system, images, volumes etc cleared up some space but wouldn't let me build, this finally worked
Path: /Users/<your-username>/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw
From what I've seen, docker system prune reduces the size of this file.
deizel.

If it's just a test installation of Docker (ie not production) and you don't care about doing a nuclear clean, you can:

clean all containers: docker ps -a | sed '1 d' | awk '{print $1}' | xargs -L1 docker rm

clean all images: docker images -a | sed '1 d' | awk '{print $3}' | xargs -L1 docker rmi -f

Again, I use this in my EC2 instances when developing Docker, not in any serious QA or production path. The great thing is that if you have your Dockerfile(s), it's easy to rebuild and/or docker pull.


In my boot2docker instance I had to call docker images -a | sed '1 d' | awk '{print $3}' | xargs docker rmi -f. The OS X BSD version of xargs supports the -L option, unlike boot2docker's version.
You can use docker ps -a -q etc. to avoid the text manipulations, i.e. docker rm $(docker ps -a -q); docker rmi -f $(docker images -a -q) should do the trick
kenorb

If you're using Docker Desktop, you can increase Disk image size in Advanced Settings by going to Docker's Preferences.

Here is the screenshot from macOS:

https://i.stack.imgur.com/FvGEHl.png


This is the best option so far; I don't want to do away with my cache and start all over again every time, especially with my slow internet 😭. I have the space, so let me leverage my disk.
I'm so surprised this doesn't get more upvotes. Yes, Docker takes a lot of disk space, but if you can afford to use 100 GB, then by all means don't clear your cache!
Mayank Chaudhary

I also encountered this issue on a RHEL machine. I did not find a suitable solution anywhere on Stack Overflow or in the Docker Hub community. If you are facing this issue even after the command below:

docker system prune --all

The solution that finally worked:

1. Run docker info to check the current Docker storage driver. Mine was: Storage Driver: devicemapper. If your storage driver is overlay2, nothing to worry about; the solution will still work for you.

2. Run df -h to check the available file systems on the machine and the paths where they are mounted. Two mounted paths to note:

/dev/mapper/rootvg-var   7.6G  1.2G  6.1G  16%  /var
/dev/mapper/rootvg-apps   60G  9.2G   48G  17%  /apps

Note: by default the Docker storage path is /var/lib/docker. It had only ~6 GB of available space, hence all the space-related issues. So basically I had to move the default storage to some other file system with more available space. For me that was /dev/mapper/rootvg-apps, which is mounted on /apps. The task is to move /var/lib/docker to something like /apps/newdocker/docker.

3. Create the new storage directory:

mkdir /apps/newdocker/docker
chmod -R 777 /apps/newdocker/docker

4. Update the docker.service file, which on Linux resides under /usr/lib/systemd/system:

vi /usr/lib/systemd/system/docker.service

If the storage driver is devicemapper, comment out the existing ExecStart line and add the following under [Service]:

ExecStart=
ExecStart=/usr/bin/dockerd -s devicemapper --storage-opt dm.fs=xfs --storage-opt dm.basesize=40GB -g /apps/newdocker/docker --exec-opt native.cgroupdriver=cgroupfs

Or, if the storage driver is overlay2, just add -g /apps/newdocker/docker to the existing ExecStart statement, something like:

ExecStart=/usr/bin/dockerd -g /apps/newdocker/docker -H fd:// --containerd=/run/containerd/containerd.sock

5. rm -rf /var/lib/docker (this will delete all existing Docker data)

6. systemctl stop docker

7. ps aux | grep -i docker | grep -v grep

If no output has been produced by the above command, reload the systemd daemon:

systemctl daemon-reload

8. systemctl start docker

9. Run docker info and check Data Space Available: 62.15GB after mounting Docker to the new file system.

DONE
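
As a quick sanity check (not part of the original steps), you can confirm the daemon is using the new location by looking at the root directory reported by docker info:

docker info | grep -i 'docker root dir'    # should now show /apps/newdocker/docker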


I've been looking all over the docs on how to achieve this! Thank you sir. Can we mark this as one of the answers?
I am still not able to link it to a larger size new folder.
Velu

In my case, I ran docker system df to find out which component was consuming the most space, then executed docker system prune -a to clean up all dangling containers, images, etc. Finally, I ran docker volume rm $(docker volume ls -qf dangling=true) to clean up the dangling volumes.

Below are the commands executed in order.

docker system df
docker system prune -a
docker volume rm $(docker volume ls -qf dangling=true)

TIL docker system df, thank you!
flaxel

I went to the docker settings and changed the image space available. It reached the limit while creating the new image with docker build. So I just increased the amount available.

https://i.stack.imgur.com/cgR9D.png


halfer

Docker leaves dangling images around that can take up your space. To clean up after Docker, run the following:

docker image prune [-af if you want to force remove all images]

or with older versions of Docker:

docker rm $(docker ps -q -f 'status=exited')
docker rmi $(docker images -q -f "dangling=true")

This will remove exited and dangling images, which hopefully clears out device space.


docker image prune -a worked for me with minikube running my docker container, after I SSH'd into the VM first with minikube ssh. The docker images were taking up 8GB in my VM.
Jørgen

In my case I didn't have so many images/containers, but the build cache was filling up my Docker Disk.

You can see that this is the problem by running

docker system df

Output:

TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              22                  13                  7.581GB             3.899GB (51%)
Containers          15                  0                   2.166GB             2.166GB (100%)
Local Volumes       4                   4                   550.2MB             0B (0%)
Build Cache         611                 0                   43.83GB             43.83GB!!!!!!!!!

The command below solves that issue

docker builder prune
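
Note that by default docker builder prune only removes dangling build cache; if you want to drop all of it, the --all flag should do that (check docker builder prune --help on your version):

docker builder prune --all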

A truly great answer. Removing builds doesn't help to clean up the disk space; docker builder prune deletes all unused build cache.
xarlymg89

Clean dangling images: docker rmi $(docker images -f "dangling=true" -q)

Also remove unwanted volumes, unused images, and unused containers (one command per item is sketched below).
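
A minimal sketch of those three clean-ups, assuming you are happy with Docker's built-in prune subcommands (each one asks for confirmation unless you pass -f):

docker volume prune      # remove unused local volumes
docker image prune       # remove dangling images
docker container prune   # remove all stopped containers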


For me, the problem was having too many images. After cleaning them up, docker works again.
Rogerson Nazário

1. Remove Containers:

$ docker rm $(docker ps -aq)

2. Remove Images:

$ docker rmi $(docker images -q)

Instead of performing steps 1 and 2, you can do:

docker system prune

This command will remove:

All stopped containers

All volumes not used by at least one container

All networks not used by at least one container

All dangling images


alexanoid

You can also use:

docker system prune

or for just volumes:

docker volume prune

Add --volumes to docker system prune and it will prune volumes together with the rest.
Lais Gabrielle Lodi

If you have already cleaned up the unused containers and images with

docker system prune -a

make sure to check whether you have unhealthy containers. They can act in a really weird and unpredictable way. In my case, because of those, I got this error even though I had tons of disk space.

docker ps -a will list all the containers. If any of them looks like this:

CONTAINER ID   IMAGE          COMMAND   CREATED          STATUS                     PORTS           NAMES
4c01db0b339c   ubuntu:12.04   bash      17 seconds ago   Up 16 seconds (unhealthy)  3300-3310/tcp   webapp

You will need to restart the docker daemon.
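
On a systemd-based Linux host, for example, restarting the daemon typically looks like this (assumed setup, adjust for your platform):

sudo systemctl restart docker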


Kostyantyn

In my case, the installation of ubuntu-server 18.04.1 [for some weird reason] created an LVM logical volume just 4 GB in size instead of 750 GB. Therefore, when pulling images I would get this "no space left on device" error. The fix is simple:

lvextend -l 100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
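
Before extending, you can confirm the volume group really has unallocated space, for example with the standard LVM tool vgs (the VFree column should be non-zero):

sudo vgs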

See my step-by-step description for resize2fs in the following thread: stackoverflow.com/questions/32485723/…
halfer

Clean Docker by using the following command:

docker images --no-trunc | grep '<none>' | awk '{ print $3 }' \
| xargs docker rmi

Bhargav Tavethiya

Don't just run the docker system prune command blindly. It will delete all unused Docker networks, containers, and images, so you might end up losing important data as well.

The error says "No space left on device", so we just need to free up some space.

The easiest way to free some space is to remove dangling images.

When old images are no longer being used, they are referred to as dangling images; there are also some cached images which you can remove.

Use the commands below. To list the image IDs of all dangling images:

docker images -f "dangling=true" -q

To remove the images by image ID:

docker rmi IMAGE_ID

This way you can free up some space and start hacking with docker again :)


Running this will iterate over all results from the dangling=true command: docker images -f "dangling=true" -q | xargs -I {} docker rmi {}
Even easier is docker images -f "dangling=true" -q | xargs docker rmi
Xiao

Your cgroups have the cpuset controller enabled. This controller is mostly useful in NUMA environments, where it allows you to finely specify which CPU/memory bank your tasks are allowed to run on.

By default, the mandatory cpuset.mems and cpuset.cpus are not set, which means that there is "no space left" for your task, hence the error.

The easiest way to fix this is to set cgroup.clone_children to 1 in the root cgroup. In your case, it should be:

echo 1 > /sys/fs/cgroup/docker/cgroup.clone_children

This basically instructs the system to automatically initialize the container's cpuset.mems and cpuset.cpus from its parent cgroup.
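
To verify the setting took effect, you can read the flag back; the path below follows the one used above, but cgroup mount points may differ on your system:

cat /sys/fs/cgroup/docker/cgroup.clone_children    # should print 1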


This is the correct answer. Really, simply upgrading Docker to anything >= Docker 1.8 should resolve it. This is related to github.com/opencontainers/runc/issues/133. From the issue, one other potential work-around is: echo 0 > /sys/fs/cgroup/cpuset/system.slice/cpuset.mems
Abbas Gadhia

If you're using the boot2docker image via Docker Toolkit, then the problem stems from the fact that the boot2docker virtual machine has run out of space.

When you do a docker import or add a new image, the image gets copied into /mnt/sda1, which might have become full.

One way to check what space you have available in the image is to SSH into the VM, run df -h, and check the remaining space in /mnt/sda1.

The ssh command is docker-machine ssh default
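
You can also run the check in one step by passing the command to docker-machine ssh, for example:

docker-machine ssh default df -h /mnt/sda1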

Once you are sure that it is indeed a space issue, you can either clean up according to the instructions in some of the answers on this question, or you may choose to resize the boot2docker image itself, by increasing the space on /mnt/sda1

You can follow the instructions here to do the resizing of the image https://gist.github.com/joost/a7cfa7b741d9d39c1307


M3RS

I run the commands below.

There is no need to rebuild images afterwards.

docker rm $(docker ps -qf 'status=exited')
docker rmi $(docker images -qf "dangling=true")
docker volume rm $(docker volume ls -qf dangling=true)

These remove exited/dangling containers and dangling volumes.


Ajit Ganiger

It may be due to the default storage space being set to 40GB (default path: /var/lib/docker).

You can change the storage volume to point to a different path.

Edit the file /etc/sysconfig/docker-storage

Update the line below (add it if it does not exist):

DOCKER_STORAGE_OPTIONS='--storage-driver=overlay --graph=CUSTOM_PATH'

Restart Docker:

systemctl stop docker
systemctl daemon-reload
systemctl start docker

If you run docker info, it should show the storage driver as overlay.


Dave

So many posts about prune statements. While it is true that these commands will clean up Docker files, they do not fix your system if the underlying storage itself is full. For me the problem was that the server's storage was just plain full. Therefore I had two options.

Option 1: Clean up existing space

Run all the system prune commands others have mentioned. Then run df -H: how much space is left? Check what is taking up so much space on your system with du --block-size=M -a / | sort -n -r | head -n 20, which will show the 20 biggest files. Remove files or move them off the system.

Option 2: Get more space

Add more space to the hard drive and expand the partition. Since I only have one HD, I had to boot the OS called "gparted" and expand the drive.


SnellyBigoda

Seems like there are a few ways this can occur. The issue I had was that the docker disk image had hit its maximum size (Docker Whale -> Preferences -> Disk if you want to view what size that is in OSX).

I upped the limit and was good to go. I'm sure cleaning up unused images would work as well.


Bildad N. Urandu

For me, docker system prune did the trick. I'm running macOS.


This actually worked on mine too, when I was trying to clear up space used on macOS. Using the command docker volume ls returned nothing, so it seemed like the storage was mostly used by caches and dangling images.
vgel

In my case, this was happening because I was exceeding the Docker image size limit of 10 GB. It appears this has been somewhat mitigated and there's a way to increase the limit to 100 GB (https://github.com/moby/moby/issues/5151), but I'm not sure how -- for my use-case, it was fine to switch to a mapped volume, which has better performance anyway.


htnawsaj
$ docker rm $(docker ps -aq)

This worked for me.

docker system prune

appears to be a better option with the latest version.


Guillermo A Moran-Arreola

I just ran into this. I'm on Ubuntu 20.04. What worked? Reinstalling Docker:

sudo apt-get purge docker-ce docker-ce-cli containerd.io
sudo apt-get install docker-ce docker-ce-cli containerd.io

A bit crude, I know. I tried pruning Docker, but it still would not work.


Reinstalling docker, because the disk is full, is not really a solution.