Docker-compose: node_modules not present in a volume after npm install succeeds

I have an app with the following services:

web/ - holds and runs a Python 3 Flask web server on port 5000. Uses sqlite3.

worker/ - has an index.js file which is a worker for a queue. The web server interacts with this queue using a JSON API over port 9730. The worker uses Redis for storage and also stores data locally in the folder worker/images/.

Now this question only concerns the worker.

worker/Dockerfile

FROM node:0.12

WORKDIR /worker

COPY package.json /worker/
RUN npm install

COPY . /worker/

docker-compose.yml

redis:
    image: redis
worker:
    build: ./worker
    command: npm start
    ports:
        - "9730:9730"
    volumes:
        - worker/:/worker/
    links:
        - redis

When I run docker-compose build, everything works and all npm modules are installed in /worker/node_modules, as I'd expect.

npm WARN package.json unfold@1.0.0 No README data

> phantomjs@1.9.2-6 install /worker/node_modules/pageres/node_modules/screenshot-stream/node_modules/phantom-bridge/node_modules/phantomjs
> node install.js

<snip>

But when I do docker-compose up, I see this error:

worker_1 | Error: Cannot find module 'async'
worker_1 |     at Function.Module._resolveFilename (module.js:336:15)
worker_1 |     at Function.Module._load (module.js:278:25)
worker_1 |     at Module.require (module.js:365:17)
worker_1 |     at require (module.js:384:17)
worker_1 |     at Object.<anonymous> (/worker/index.js:1:75)
worker_1 |     at Module._compile (module.js:460:26)
worker_1 |     at Object.Module._extensions..js (module.js:478:10)
worker_1 |     at Module.load (module.js:355:32)
worker_1 |     at Function.Module._load (module.js:310:12)
worker_1 |     at Function.Module.runMain (module.js:501:10)

Turns out none of the modules are present in /worker/node_modules (on host or in the container).

If I npm install on the host, everything works just fine. But I don't want to do that. I want the container to handle dependencies.

What's going wrong here?

(Needless to say, all packages are in package.json.)

I think you should use the ONBUILD instruction... Like this: github.com/nodejs/docker-node/blob/master/0.12/onbuild/…
How would you do development on the host when the IDE does not know the node_modules dependencies?
Try removing the volumes: - worker/:/worker/ block from the docker-compose.yml file. That line overwrites the folder you created with the COPY command.
When I run docker-compose build, everything works as expected and all npm modules are installed in /worker/node_modules as I'd expect. - How did you check this?
@Vallie you can inspect the contents of an image built with "docker build" using "docker run -it image_name sh"
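For example, a quick way to confirm the modules really are baked into the image after a build (the image tag here is hypothetical):

docker build -t worker-test ./worker
docker run --rm -it worker-test ls /worker/node_modules

If the modules are listed here but missing at docker-compose up, the volume mount is the culprit, as the answers below explain.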

Capi Etheriel

This happens because you have added your worker directory as a volume in your docker-compose.yml; the volume is not mounted during the build.

When docker builds the image, the node_modules directory is created within the worker directory, and all the dependencies are installed there. Then at runtime the worker directory from outside docker is mounted into the container (which does not have the installed node_modules), hiding the node_modules you just installed. You can verify this by removing the mounted volume from your docker-compose.yml.

A workaround is to use a data volume to store all the node_modules, as data volumes copy in the data from the built docker image before the worker directory is mounted. This can be done in the docker-compose.yml like this:

redis:
    image: redis
worker:
    build: ./worker
    command: npm start
    ports:
        - "9730:9730"
    volumes:
        - ./worker/:/worker/
        - /worker/node_modules
    links:
        - redis

I'm not entirely certain whether this imposes any issues for the portability of the image, but as it seems you are primarily using docker to provide a runtime environment, this should not be an issue.

If you want to read more about volumes, there is a nice user guide available here: https://docs.docker.com/userguide/dockervolumes/

EDIT: Docker has since changed its syntax to require a leading ./ for mounting files relative to the docker-compose.yml file.


I've tried this method and hit a wall when dependencies changed. I rebuilt the image, started a new container, and the volume /worker/node_modules stayed the same as before (with the old dependencies). Is there any trick to using a new volume upon rebuilding the image?
It seems docker-compose doesn't remove volumes if other containers use them (even if they are dead). So if there are some dead containers of the same type (for whatever reason), the scenario I described in the previous comment follows. From what I've tried, docker-compose rm seems to fix this problem, but I believe there must be a better and easier solution.
Is there a solution in 2018 without having to rebuild with --no-cache every time the deps change?
The problem with this approach is that the node_modules folder is not accessible from your development machine (outside the container)
you can now use --renew-anon-volumes, which recreates anonymous volumes instead of reusing data from previous containers.
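Putting those comments together, a rebuild flow along these lines (a sketch, using the flag mentioned above; requires a reasonably recent docker-compose) avoids stale anonymous volumes:

docker-compose build worker
docker-compose up --renew-anon-volumes worker
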
Derk Jan Speelman

The node_modules folder is overwritten by the volume and no longer accessible in the container. I'm using the native module loading strategy to take the folder out of the volume:

/data/node_modules/ # dependencies installed here
/data/app/ # code base

Dockerfile:

FROM node:0.12    # base image assumed; the original answer omits it

COPY package.json /data/
WORKDIR /data/
RUN npm install
ENV PATH /data/node_modules/.bin:$PATH

COPY . /data/app/
WORKDIR /data/app/

The node_modules directory is not accessible from outside the container because it is included in the image.


node_modules not being accessible from outside the container isn't really a downside ;)
and every time you change package.json you need to rebuild the whole container with --no-cache right?
You need to rebuild the image when you change package.json, yes, but --no-cache is not necessary. If you run docker-compose run app npm install you will create a node_modules in the current directory and you don't need to rebuild the image anymore.
The downside is: no more IDE autocompletion, no help, no good dev experience. Everything also needs to be installed on the host now, but isn't the reason for using docker here that the dev-host needs nothing to be able to work with a project?
@jsan thank you so much, you're life saver.
Guillaume Vincent

The solution provided by @FrederikNS works, but I prefer to explicitly name my node_modules volume.

My project/docker-compose.yml file (docker-compose version 1.6+):

version: '2'
services:
  frontend:
    ....
    build: ./worker
    volumes:
      - ./worker:/worker
      - node_modules:/worker/node_modules
    ....
volumes:
  node_modules:

My file structure is:

project/
   │── worker/
   │     └─ Dockerfile
   └── docker-compose.yml

It creates a volume named project_node_modules and reuses it every time I bring my application up.

My docker volume ls looks like this:

DRIVER              VOLUME NAME
local               project_mysql
local               project_node_modules
local               project2_postgresql
local               project2_node_modules
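As the comments on the accepted answer note, a named volume keeps the old dependencies around after a rebuild. A sketch of forcing a refresh when package.json changes, assuming the volume name shown above:

docker-compose down
docker volume rm project_node_modules
docker-compose up --build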

while this “works”, you’re circumventing a real concept in docker: all dependencies should be baked in for maximum portability. you cannot move this image without running other commands, which kinda blows
@JavierBuzzi the problem is that npm/yarn's node_modules are at odds with that philosophy. If you don't have plug'n'play set up in your app then you need to do it this way. There's also $NODE_PATH but I couldn't get it to work for my use-case. This works fine for me. It's also no different than the top-voted answer other than using named volumes vs. anonymous ones.
ericstolten

I recently had a similar problem. You can install node_modules elsewhere and set the NODE_PATH environment variable.

In the example below I installed node_modules into /install

worker/Dockerfile

FROM node:0.12

RUN ["mkdir", "/install"]

ADD ["./package.json", "/install"]
WORKDIR /install
RUN npm install --verbose
ENV NODE_PATH=/install/node_modules

WORKDIR /worker

COPY . /worker/

docker-compose.yml

redis:
    image: redis
worker:
    build: ./worker
    command: npm start
    ports:
        - "9730:9730"
    volumes:
        - worker/:/worker/
    links:
        - redis

The top-voted solution by @FrederikNS is useful, and it was how I solved a different issue of my local volume overwriting the container's node_modules, based on this article. But it led me to experience this issue. The solution here (creating a separate directory to copy your package.json into, running npm install there, then specifying the NODE_PATH environment variable in docker-compose.yml to point to that directory's node_modules folder) works and feels right.
Installing the node modules to a different path (/install) and then setting NODE_PATH (ENV NODE_PATH=/install/node_modules) solves the problem, as the path doesn't get overridden by the volume mount.
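A quick way to check that resolution is actually going through NODE_PATH (the module name is just an example from the question; the exact resolved path will vary):

docker-compose run worker node -e "console.log(require.resolve('async'))"
# should print a path under /install/node_modules/
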
holms

There's an elegant solution:

Don't mount the whole directory, only the app's source directory. This way you won't have trouble with node_modules.

Example:

  frontend:
    build:
      context: ./ui_frontend
      dockerfile: Dockerfile.dev
    ports:
    - 3000:3000
    volumes:
    - ./ui_frontend/src:/frontend/src

Dockerfile.dev:

FROM node:7.2.0

#Show colors in docker terminal
ENV COMPOSE_HTTP_TIMEOUT=50000
ENV TERM="xterm-256color"

COPY . /frontend
WORKDIR /frontend
RUN npm install update
RUN npm install --global typescript
RUN npm install --global webpack
RUN npm install --global webpack-dev-server
RUN npm install --global karma protractor
RUN npm install
CMD npm run server:dev

This is a good and fast solution, but it requires rebuilds & prunes after installing a new dependency; see the sketch below.
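For reference, the rebuild flow that comment describes might look like this (a sketch; the service name matches the example above):

docker-compose build --no-cache frontend
docker-compose up --force-recreate frontend
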
Community

UPDATE: Use the solution provided by @FrederikNS.

I encountered the same problem. When the folder /worker is mounted to the container, all of its content is synchronized (so the node_modules folder will disappear if you don't have it locally).

Because some npm packages are OS-specific, I could not just install the modules locally and then launch the container, so...

My solution to this, was to wrap the source in a src folder, then link node_modules into that folder, using this index.js file. So, the index.js file is now the starting point of my application.

When I run the container, I mount the /app/src folder to my local src folder.

So the container folder looks something like this:

/app
  /node_modules
  /src
    /node_modules -> ../node_modules
    /app.js
  /index.js

It is ugly, but it works.
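The original answer doesn't show how the symlink is created. Since /app/src is bind-mounted from the host, one way (an assumption based on the tree above) is to create the relative link on the host side:

# on the host, inside the project's src/ folder (one-time setup):
ln -s ../node_modules node_modules

The link dangles on the host, but inside the container /app/src/node_modules resolves to the image's /app/node_modules.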


Justin Stayton

Due to the way Node.js loads modules, node_modules can be anywhere in the path to your source code. For example, put your source at /worker/src and your package.json in /worker, so /worker/node_modules is where they're installed.
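A minimal sketch of that layout, assuming an entry point at src/index.js (the file names are illustrative):

FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
RUN npm install              # installs into /worker/node_modules
COPY src /worker/src
CMD ["node", "src/index.js"]

Then mount only the source in docker-compose.yml, so the modules are never hidden:

volumes:
    - ./worker/src:/worker/src

Node resolves require() by walking up from /worker/src, so it finds /worker/node_modules even though only src is mounted.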


sergeysynergy

Installing node_modules in the container in a different folder from the project folder, and setting NODE_PATH to that node_modules folder, worked for me (you need to rebuild the container).

I'm using docker-compose. My project file structure:

-/myproject
--docker-compose.yml
--nodejs/
----Dockerfile

docker-compose.yml:

version: '2'
services:
  nodejs:
    image: myproject/nodejs
    build: ./nodejs/.
    volumes:
      - ./nodejs:/workdir
    ports:
      - "23005:3000"
    command: npm run server

Dockerfile in nodejs folder:

FROM node:argon
RUN mkdir /workdir
COPY ./package.json /workdir/.
RUN mkdir /data
RUN ln -s /workdir/package.json /data/.
WORKDIR /data
RUN npm install
ENV NODE_PATH /data/node_modules/
WORKDIR /workdir

Egel

There is also a simple solution that doesn't map the node_modules directory into another volume: move the npm package installation into the final CMD command.

Disadvantage of this approach: npm install runs each time you start the container (switching from npm to yarn might speed this process up a bit).

worker/Dockerfile

FROM node:0.12
WORKDIR /worker
COPY package.json /worker/
COPY . /worker/
CMD /bin/bash -c 'npm install; npm start'

docker-compose.yml

redis:
    image: redis
worker:
    build: ./worker
    ports:
        - "9730:9730"
    volumes:
        - worker/:/worker/
    links:
        - redis

Paul Becotte

There are two separate requirements I see for node dev environments... mount your source code INTO the container, and mount the node_modules FROM the container (for your IDE). To accomplish the first, you do the usual mount, but not everything... just the things you need

volumes:
    - worker/src:/worker/src
    - worker/package.json:/worker/package.json
    - etc...

(The reason not to do - /worker/node_modules is that docker-compose will persist that volume between runs, meaning you can diverge from what is actually in the image, defeating the purpose of not just bind-mounting from your host.)

The second one is actually harder. My solution is a bit hackish, but it works. I have a script to install the node_modules folder on my host machine, and I just have to remember to call it whenever I update package.json (or, add it to the make target that runs docker-compose build locally).

install_node_modules:
    docker build -t building .
    docker run -v `pwd`/node_modules:/app/node_modules building npm install

pigLoveRabbit520

In my opinion, we should not RUN npm install in the Dockerfile. Instead, we can start a container using bash to install the dependencies before running the actual node service (note that docker run -v needs an absolute host path, hence "$PWD"):

docker run -it -v "$PWD"/app:/usr/src/app  your_node_image_name  /bin/bash
root@247543a930d6:/usr/src/app# npm install

I actually agree with you on this one. Volumes are meant to be used when you want to share data between the container and the host. When you decide to keep your node_modules persistent even after the container is removed, you also need to know when to run npm install manually and when not to. The OP suggests doing it on every image build. You can do that, but then you don't also need to use a volume for it: on every build, the modules will be up to date anyway.
@Blauhirn it's helpful to mount a local host volume into the container when doing e.g. gulp watch (or analogous commands) - you want the node_modules to persist while still allowing changes to other sources (js, css etc). npm insists on using a local gulp, so it has to persist (or be installed via other methods on start-up)
itmuckel

Given how simple it is, you can also ditch your Dockerfile entirely: just use a base image and specify the command in your compose file:

version: '3.2'

services:
  frontend:
    image: node:12-alpine
    volumes:
      - ./frontend/:/app/
    command: sh -c "cd /app/ && yarn && yarn run start"
    expose: [8080]
    ports:
      - 8080:4200

This is particularly useful for me, because I just need the environment of the image but operate on my files outside the container, and I think this is what you want to do too.


Parav01d

You can try something like this in your Dockerfile:

FROM node:0.12
WORKDIR /worker
CMD bash ./start.sh

Then you should use the volume like this:

volumes:
  - worker/:/worker:rw

The start script should be part of your worker repository and look like this:

#!/bin/sh
npm install
npm start

This way the node_modules are part of your worker volume and get synchronized, and the npm scripts are executed once everything is up.


This will add a large overhead to starting up the container.
But only the first time, because the node_modules will be persisted on the local machine.
Or until the image is rebuilt or the volume removed :). That said, I've not found a better solution myself.
Joel Sullivan

I tried the most popular answers on this page but ran into an issue: the node_modules directory in my Docker instance would get cached in the named or unnamed mount point and would later overwrite the node_modules directory that was built as part of the Docker build process. Thus, new modules I added to package.json would not show up in the Docker instance.

Fortunately I found this excellent page which explains what was going on and gives at least 3 ways to work around it: https://burnedikt.com/dockerized-node-development-and-mounting-node-volumes/


Kasper

If you want the node_modules folder available to the host during development, you could install the dependencies when you start the container instead of during build-time. I do this to get syntax highlighting working in my editor.

Dockerfile

# We're using a multi-stage build so that we can install dependencies during build-time only for production.

# dev-stage
FROM node:14-alpine AS dev-stage
WORKDIR /usr/src/app
COPY package.json ./
COPY . .
# `yarn install` will run every time we start the container. We're using yarn because it's much faster than npm when there's nothing new to install
CMD ["sh", "-c", "yarn install && yarn run start"]

# production-stage
FROM node:14-alpine AS production-stage
WORKDIR /usr/src/app
COPY package.json ./
RUN yarn install
COPY . .

.dockerignore

Add node_modules to .dockerignore to prevent it from being copied when the Dockerfile runs COPY . . (we bring node_modules in via volumes instead).

**/node_modules

docker-compose.yml

node_app:
    container_name: node_app
    build:
        context: ./node_app
        target: dev-stage # `production-stage` for production
    volumes:
        # For development:
        #   If node_modules already exists on the host, they will be copied
        #   into the container here. Since `yarn install` runs after the
        #   container starts, this volume won't override the node_modules.
        - ./node_app:/usr/src/app
        # For production:
        #   Mask node_modules with an anonymous volume so the modules
        #   installed at build time are used, not (possibly missing) host files.
        - ./node_app:/usr/src/app
        - /usr/src/app/node_modules
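
To switch between the stages without editing the file, compose's variable substitution could be used (a sketch; BUILD_TARGET is a made-up variable name):

    build:
        context: ./node_app
        target: ${BUILD_TARGET:-dev-stage}

Then BUILD_TARGET=production-stage docker-compose up --build selects the production stage.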

Izaya

If you don't use docker-compose you can do it like this:

FROM node:10

WORKDIR /usr/src/app

RUN npm install -g @angular/cli

COPY package.json ./
RUN npm install

EXPOSE 5000

CMD ng serve --port 5000 --host 0.0.0.0

Then you build it: docker build -t myname . and you run it by adding two volumes, the second one without source: docker run --rm -it -p 5000:5000 -v "$PWD":/usr/src/app/ -v /usr/src/app/node_modules myname


I cannot tell you how much time I wasted before coming to your answer. Fantastic - I can go to bed now. Thank you thank you thank you.
samuelcolt

You can just move node_modules up into the / folder. Node's module resolver walks up the directory tree, so code in /worker will still find the modules at /node_modules.

How it works

FROM node:0.12

WORKDIR /worker

COPY package.json /worker/
RUN npm install \
  && mv node_modules /node_modules

COPY . /worker/
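
A quick check that resolution really walks up to /node_modules (the image tag and module name are illustrative):

docker build -t worker ./worker
docker run --rm worker node -p "require.resolve('async')"
# prints a path under /node_modules/async/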

PHZ.fi-Pharazon

With Yarn you can move the node_modules outside the volume by setting

# ./.yarnrc
--modules-folder /opt/myproject/node_modules

See https://www.caxy.com/blog/how-set-custom-location-nodemodules-path-yarn
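
A sketch of how this might fit into the worker's Dockerfile. The base image is illustrative, and the NODE_PATH line is an assumption: Node itself doesn't read .yarnrc, so it has to be told where the modules went.

FROM node:8
WORKDIR /worker
COPY .yarnrc package.json yarn.lock /worker/
RUN yarn install            # installs into /opt/myproject/node_modules per .yarnrc
ENV NODE_PATH /opt/myproject/node_modules
COPY . /worker/
CMD ["npm", "start"]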

