
Are you trying to mount a directory onto a file (or vice-versa)?

I have Docker version 17.06.0-ce. When I try to install NGINX using Docker with the command:

docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest

It shows this error:

docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs \\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\" at \\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

If I do not mount the nginx.conf file, everything is okay. So how can I mount the configuration file?

What is the output of ls -al .? I want to see what your pwd looks like.
In my case I had accidentally mapped a directory from the host to a file in the container. Restarting the container didn't help; I had to remove the container (docker rm …) and then recreate it.

mrtnmgs

This should no longer happen (since v2.2.0.0), see here

If you are using Docker for Windows, this error can happen if you have recently changed your password.

How to fix:

1. First, make sure to delete the broken container's volume: docker rm -v
   (Update: the steps below may work without needing to delete volumes first.)
2. Open Docker Settings
3. Go to the "Shared Drives" tab
4. Click on the "Reset Credentials..." link at the bottom of the window
5. Re-share the drives you want to use with Docker

You should be prompted to enter your username/password

Click "Apply" Go to the "Reset" tab Click "Restart Docker" Re-create your containers/volumes

Credit goes to BaranOrnarli on GitHub for the solution.


Thanks! It works for me starting from the second step and avoiding the last one.
I was able to fix the problem by starting on step 2 and also omitting the last one. I did not have to destroy the containers/volumes to mount again.
I agree with @MateoHermosilla, there's no need to delete the container, only "Reset Credentials".
I'm getting the same error when trying to run proxy-deploy.sh while installing sandbox-proxy (Hadoop). Following this solution did not fix it.
This was the issue for me. Password reset is every few months, so I keep forgetting to reset Shared Drive credentials in Docker.
Pezhvak

TL;DR: Remove the volumes associated with the container.

Find the container name using docker ps -a then remove that container using:

docker rm -v <container_name>

Problem:

The error you are facing might occur if you previously tried running the docker run command while the file was not present at the location where it should have been in the host directory.

In this case the Docker daemon would have created a directory inside the container in its place, which later fails to map to the proper file once the correct file is put in the host directory and the docker command is run again.

Solution:

Remove the volumes that are associated with the container. If you are not concerned about other container volumes, you can also use:

# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
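If you would rather not wipe every volume, a less destructive sketch (the container name is whatever docker ps -a shows for you) is to check which volumes the broken container actually uses and remove only that container together with its anonymous volumes:

# List the mounts attached to the broken container (type, volume name, target path)
docker inspect -f '{{range .Mounts}}{{.Type}} {{.Name}} {{.Destination}}{{"\n"}}{{end}}' <container_name>
# Remove the container and its anonymous volumes in one step
docker rm -v <container_name>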

The command in the original question only listed host volumes as being used. The docker volume command/interface is only for anonymous and named volumes, which are not part of the original question.
@programmerq Look at the error, it says that mount was failing when it tried to mount at /var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\\" My deduction: It already has a folder due to a previous run, so if you try to map a file to that folder, it would fail.
Here two things might have gone wrong: either the host has the wrong thing, or the already-created volume has the wrong thing. Assuming the host to be correct, I thought it would be better to clear up issues with the existing volume.
This is actually a valid answer for when the container has already been associated with a volume and the type of that volume is being changed in the next run. So removing volume might help!
This was helpful. The problem in my case was indeed that I had old containers still defined. Using docker rm to zap them and then doing a docker-compose up worked properly.
Jonathan Komar

Because Docker recognizes $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.

Test with

> cat $PWD/conf/nginx.conf 
cat: nginx.conf/: Is a directory
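If you want an explicit answer rather than relying on cat's error message, you can also ask for the file type directly. A small sketch, assuming GNU coreutils (the macOS/BSD flag differs):

stat -c '%F' "$PWD/conf/nginx.conf"    # prints "directory" or "regular file" (GNU stat)
# On macOS/BSD: stat -f '%HT' "$PWD/conf/nginx.conf"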

Otherwise, open a Docker issue.
It's working fine for me with the same configuration.


As an intermediate-level Linux user I'm curious, what's the reason for Linux recognizing that as a folder and not a file?
Because it is actually a folder. If the file doesn't exist, Docker creates a folder because of the volume argument -v.
OK, so Linux only recognizes it as a folder if Docker had to create it because the path didn't previously exist; but if nginx.conf already existed at that path, Linux would recognize it as a file, right?
kioleanu

Answer for people using Docker Toolbox

There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.

Description of the problem:

Docker Toolbox bypasses the Hyper-V requirement of Docker by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs access to the project folder from the host machine, which here it doesn't have.

After I installed Docker Toolbox, it created the VirtualBox VM and only mounted C:\Users to the machine, as \c\Users\. My project was in C:\projects, so it was nowhere on the mounted volume. When I was sending the path to the VM, it would not exist, as C:\projects isn't mounted. Hence the error above.

Let's say I had my project containing my nginx config in C:/projects/project_name/

Fixing it:

1. Go to VirtualBox, right-click on Default (the VM created by Docker) > Settings > Shared Folders.
2. Click the small icon with the plus sign on the right side to add a new share. I used the following settings:

https://i.stack.imgur.com/DvsQr.png

The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that now you can reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.

To use relative paths, please consider naming the share c/projects, not projects.

Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
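If you prefer the command line over the VirtualBox GUI, roughly the same share can be added with VBoxManage. This is only a sketch, assuming the Toolbox VM is named default and docker-machine is on your PATH:

docker-machine stop default
VBoxManage sharedfolder add default --name "c/projects" --hostpath "C:\projects" --automount
docker-machine start default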

In my docker-compose file, I now reference the nginx.conf like this:

volumes:
    - /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf

Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf


Andy Brown

The explanation given by @Ayushya was the reason I hit this somewhat confusing error message and the necessary housekeeping can be done easily like this:

$ docker container prune
$ docker volume prune

J. Scott Elblein

I had the same problem. I was using Docker Desktop with WSL in Windows 10 17.09.

Cause of the problem:

The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:

/c/Users/username/app

BUT, WSL instead uses the format:

/mnt/c/Users/username/app

This is confusing, because when checking the file in the console I could see it, and to me everything looked correct. I wasn't aware of Docker for Windows' expectations about volume paths.

Solution to the problem:

I bound the custom mount points to reconcile the Docker for Windows and WSL differences:

sudo mount --bind /mnt/c /c

As suggested in this great guide: Setting Up Docker for Windows and WSL to Work Flawlessly. Everything is working perfectly now.
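The bind mount above does not survive a reboot. As a permanent alternative, WSL's automount settings can remap the Windows drives to / instead of /mnt. A minimal sketch, assuming a WSL build that reads /etc/wsl.conf (restart WSL afterwards):

sudo tee /etc/wsl.conf > /dev/null <<'EOF'
[automount]
root = /
options = "metadata"
EOF
# After restarting WSL, C:\ appears as /c instead of /mnt/c,
# matching the path format Docker for Windows expects.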

Before I started using WSL I was using Git Bash and I had this problem as well.


Clemens

On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General

https://i.stack.imgur.com/YqQ7f.png


Thank you, this solved my issue. I also turned off "Use Docker Compose V2" in the Experimental Features section.
This does not work on Apple silicon Macs and may cause Docker to get stuck at starting the daemon.
Yuri Pozniak

Maybe someone will find this useful. My compose file had the following volume mounted:

./file:/dir/file

As ./file did not exist, it was mounted into ABC (by default as a folder).

In my case I had a container that resulted from

docker commit ABC cool_image

When I later created ./file and ran docker-compose up, I got the error:

[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.

The solution was:

touch ./file
docker run --name ABC -v "$(pwd)/file:/dir/file" abc_image
# ... desired changes to ABC
docker commit ABC cool_image

Thank you, this was also my issue as I have quite a complex Docker setup!
Nathan Arthur

I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount the files, make sure your files and folders are inside the C drive.

Example: C:\Users\%USERNAME%\Desktop
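To double-check what the Toolbox VM actually sees, you can list the mount from inside it. A quick sketch, assuming the default machine name:

docker-machine ssh default "ls /c/Users"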


My mounted folder is C:\x-suite\; I shared my C drive, but it still hasn't solved my problem.
Are you using Docker Toolbox?
minikube + VirtualBox + Docker Toolbox; localkube was deprecated, what driver should I use?
If you are mounting from docker-compose, then use ${pwd}/
If you are doing it in a Dockerfile, then use VOLUME /c/x-suite
punkbit

I'll share my case here as this may save a lot of time for someone else in the future.

I had a perfectly working docker-compose setup on my macOS, until I started using docker-in-docker in GitLab CI. I was only given permission to work as Master in a repository, and the GitLab CI is self-hosted and set up by someone else; no other info was shared about how it's set up.

The following caused the issue:

volumes:
  - ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf

Only when I noticed that this might be running under Windows (after hours of head scratching) did I try renaming wordpress.conf to default.conf and setting just the directory pathnames:

volumes:
  - ./.docker/nginx/wordpress:/etc/nginx/conf.d

This solved the problem!


Not a bad decision) I had a problem where I could not copy the file into the directory named conf.d, and I solved it by copying the entire contents of one directory to another.
Mohammed Réda OUASSINI

In Windows 10, I just got this error without changing anything in my docker-compose.yml file or in my Docker configuration in general.

In my case, I was using a VPN with a firewall policy that blocks port 445.

After disconnecting from the VPN, the problem disappeared.

So I recommend checking your firewall and not using a proxy or VPN when running Docker Desktop.

Check Docker for windows - Firewall rules for shared drives for more details.

I hope this will help someone else.


J. Scott Elblein

Could you please use the absolute/complete path instead of $PWD/conf/nginx.conf? Then it will work.

Example:

docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b

root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
b9ead15988a9        nginx               "nginx -g 'daemon of…"   7 seconds ago       Up 6 seconds        0.0.0.0:90->80/tcp   nginx-container5
e2b195a691a4        nginx               "/bin/bash"              16 minutes ago      Up 16 minutes       0.0.0.0:80->80/tcp   test-nginx

If you escape it with double quotes (docker run -d --rm -v "$PWD/nginx.conf:/etc/nginx/nginx.conf" nginx), it should make no difference, as the shell expands the variable before passing it to docker run. And indeed, it doesn't make a difference, at least for me.
J. Scott Elblein

I had the same issue: docker-compose was creating a directory instead of a file, then crashing midway.

What I did:

1. Run the container without any mapping.
2. Copy the .conf file to the host location: docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
3. Remove the container (docker-compose down).
4. Put the mapping back.
5. Re-mount the container.

Docker Compose will find the .conf file and map it, instead of trying to create a directory.
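As a rough sketch of those steps (the container name here is a placeholder, adjust it to your compose file):

docker-compose up -d                                            # 1. start without the nginx.conf mapping
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf     # 2. copy the file out of the container
docker-compose down                                             # 3. remove the container
# 4. restore the ./nginx.conf:/etc/nginx/nginx.conf mapping in docker-compose.yml
docker-compose up -d                                            # 5. recreate with the file correctly mounted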


bwibo

I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:

echo $PWD
/mnt/d/nginx

docker run --name nginx -d \
  -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx

I resolved it by changing the path to the file on the host system to a UNIX-style absolute path:

docker run --name nginx -d \
  -v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx

or using a Windows-style absolute path with / instead of \ as path separators:

docker run --name nginx -d \
  -v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx

To strip the /mnt that seems to cause problems from the path, I use bash parameter expansion:

-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf

Did you notice any performance differences when using the Windows style path vs. the Unix style path?
I can't tell. I'm just using Docker for Windows for testing/development and never monitored performance.
Mikael Lepistö

Updating VirtualBox to 6.0.10 fixed this issue for Docker Toolbox.

https://github.com/docker/toolbox/issues/844

I was experiencing this kind of error:


mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf

mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root@a5020b4d6cc2:/# exit
exit

After updating VirtualBox, all commands worked just fine 🎉


t0rjantai

Had the same head-scratcher because I did not have the file locally, so Docker created it as a folder.

mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile  logs.txt/

grape100

unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

I had a similar error with nginx in a Mac environment. Docker didn't recognize the default.conf file correctly. Once I changed the relative path to an absolute path, the error was fixed.

      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf

Jon Winstanley

For me, this did not work:

volumes:
  - ./:/var/www/html
  - ./nginx.conf:/etc/nginx/conf.d/site.conf

But this works fine (obviously I moved my config file into a new directory too):

volumes:
  - ./:/var/www/html
  - ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf

pbarney

I had this problem under Windows 7 because my dockerfile was on a different drive.

Here's what I did to fix the problem:

1. Open VirtualBox Manager
2. Select the "default" VM and edit the settings
3. Select Shared Folders and click the icon to add a new shared folder
4. Folder Path: x:\
5. Folder Name: /x
6. Check Auto-mount and Make Permanent
7. Restart the virtual machine

At this point, docker-compose up should work.


ymoreau

I got the same error on Windows 10 after an update of Docker: 2.3.0.2 (45183).

... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

I was using absolute paths like //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf, with a single / for the root.


Mark Shust at M.academy

Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.

https://i.stack.imgur.com/1dFNV.png

Adding the root path as a file sharing resource will permit Docker to access the resource and mount it in the container. Note that you may need to erase the contents of your Docker container to attempt to re-mount the volume.

For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.


GetoX

In my case it was a problem with Docker for Windows and the use of a partition encrypted with BitLocker. If your project files are on an encrypted partition, then after a restart and unlocking the drive, Docker doesn't see the project files properly.

All you need to do is restart Docker.


Tony Stecca

CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:

COPY --chown=node:node dist/app.js /usr/app/app.js

and then during development I mount that file via docker-compose:

 volumes:
      - ./dist/app.js:/usr/app/app.js

I would intermittently get the "Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type." error, or some version of it.

The problem was that CleanWebpackPlugin was deleting the file before webpack re-built it. If Docker tried to mount the file while it was deleted, Docker would fail. It was intermittent.

Either remove CleanWebpackPlugin completely or configure its options to play nicer.


J. Scott Elblein

I have solved the mount problem. I am using a Windows 7 environment, and the same problem happened to me.

Are you trying to mount a directory onto a file?

The container has a default sync directory at C:\Users\, so I moved my project to C:\Users\, then recreated the project. Now it works.

