
Node.js heap out of memory

Today I ran my script for filesystem indexing to refresh the RAID files index, and after 4 hours it crashed with the following error:

[md5:]  241613/241627 97.5%  
[md5:]  241614/241627 97.5%  
[md5:]  241625/241627 98.1%
Creating missing list... (79570 files missing)
Creating new files list... (241627 new files)

<--- Last few GCs --->

11629672 ms: Mark-sweep 1174.6 (1426.5) -> 1172.4 (1418.3) MB, 659.9 / 0 ms [allocation failure] [GC in old space requested].
11630371 ms: Mark-sweep 1172.4 (1418.3) -> 1172.4 (1411.3) MB, 698.9 / 0 ms [allocation failure] [GC in old space requested].
11631105 ms: Mark-sweep 1172.4 (1411.3) -> 1172.4 (1389.3) MB, 733.5 / 0 ms [last resort gc].
11631778 ms: Mark-sweep 1172.4 (1389.3) -> 1172.4 (1368.3) MB, 673.6 / 0 ms [last resort gc].


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x3d1d329c9e59 <JS Object>
1: SparseJoinWithSeparatorJS(aka SparseJoinWithSeparatorJS) [native array.js:~84] [pc=0x3629ef689ad0] (this=0x3d1d32904189 <undefined>,w=0x2b690ce91071 <JS Array[241627]>,L=241627,M=0x3d1d329b4a11 <JS Function ConvertToString (SharedFunctionInfo 0x3d1d3294ef79)>,N=0x7c953bf4d49 <String[4]\: ,\n  >)
2: Join(aka Join) [native array.js:143] [pc=0x3629ef616696] (this=0x3d1d32904189 <undefin...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/usr/bin/node]
 2: 0xe2c5fc [/usr/bin/node]
 3: v8::Utils::ReportApiFailure(char const*, char const*) [/usr/bin/node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/bin/node]
 5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/usr/bin/node]
 6: v8::internal::Runtime_SparseJoinWithSeparator(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/bin/node]
 7: 0x3629ef50961b

The server is equipped with 16 GB of RAM and 24 GB of SSD swap. I highly doubt my script exceeded 36 GB of memory. At least it shouldn't have.

The script creates an index of files stored as an Array of Objects with the files' metadata (modification dates, permissions, etc.; no big data).

Here's full script code: http://pastebin.com/mjaD76c3

I've already experienced weird node issues with this script in the past, which forced me to e.g. split the index into multiple files, as node was glitching when working on such big files as a String. Is there any way to improve Node.js memory management with huge datasets?

For Windows cmd: set NODE_OPTIONS=--max-old-space-size=8192
Can anyone confirm whether this issue can occur due to having fewer CPUs? In my case I have 32 GB of RAM and specified about 11 GB in the node options, but have only 2 CPUs, and I'm still getting OOM.

Andrey Ermakov

If I remember correctly, there is a strict standard limit for the memory usage in V8 of around 1.7 GB, if you do not increase it manually.

In one of our products we followed this solution in our deploy script:

 node --max-old-space-size=4096 yourFile.js

There is also a flag for the new space, but as I read here: a-tour-of-v8-garbage-collection, the new space only collects newly created short-lived data, while the old space contains all long-lived referenced data structures, so increasing the old space should be the best option in your case.


This config applies to Node.js itself, independently of the framework. @Simer
I am developing with Angular 4 and getting the same issue; what should the yourFile.js file be for an Angular app?
@VikramSingh are you using ng serve, or do you serve the result of ng build (the /dist folder) with another webserver like express? But if your Angular project is using more than the standard 1.7 GB of memory, you may have an architectural problem in your application. It looks like you are using the development env with npm start; maybe this is a solution for it: github.com/mgechev/angular-seed/issues/2063
Which file should be put in yourFile.js: the Nodemon file or the NodeJS file?
The index.js file you use to start the server, @Techdive
Maksim Luzik

If you want to increase the memory available to node globally, not only for a single script, you can export an environment variable, like this:
export NODE_OPTIONS=--max_old_space_size=4096

Then you do not need to play with files when running builds like npm run build.


As an added convenience, add it to your bash_profile or zsh profile.
I'm getting the same error even after setting NODE_OPTIONS to 4096 or more. When I run the command npm run build, I see some processes running with commands like usr/bin/node --max_old_space_size=2048. What could be the reason?
@AyeshWeerasinghe probably libraries or scripts that you are using or running have a hardcoded max_old_space_size parameter which overrides the exported env variable.
Kamiel Wanrooij

Just in case anyone runs into this in an environment where they cannot set node properties directly (in my case a build tool):

NODE_OPTIONS="--max-old-space-size=4096" node ...

You can set the node options using an environment variable if you cannot pass them on the command line.


Can you please explain what you mean, when you say " ...set the node options using an environment variable.. "?
@Keselme An environment variable is a variable that has been set on the server that all processes can read the data from. Open an SSH terminal to your server and type: MY_VAR=hello then type: echo $MY_VAR. You will see that it prints "hello" in the terminal. You've just set an environment variable and read it back.
worked on Ubuntu 18.04, just added the export command to my bashrc file
Nicholas Porter

Here are some flag values to add some additional info on how to allow more memory when you start up your node server.

1GB - 8GB

#increase to 1gb
node --max-old-space-size=1024 index.js

#increase to 2gb
node --max-old-space-size=2048 index.js 

#increase to 3gb
node --max-old-space-size=3072 index.js

#increase to 4gb
node --max-old-space-size=4096 index.js

#increase to 5gb
node --max-old-space-size=5120 index.js

#increase to 6gb
node --max-old-space-size=6144 index.js

#increase to 7gb
node --max-old-space-size=7168 index.js

#increase to 8gb 
node --max-old-space-size=8192 index.js 
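As a sanity check on the values above: the flag takes megabytes, so each step is just the gigabyte count times 1024. A throwaway sketch:

```javascript
// --max-old-space-size is specified in megabytes: N GB => N * 1024 MB.
const gbToFlag = (gb) => `--max-old-space-size=${gb * 1024}`;

console.log(gbToFlag(1)); // --max-old-space-size=1024
console.log(gbToFlag(8)); // --max-old-space-size=8192
```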


can one keep increasing it in powers of 2? should one set it larger than system memory? if not what's a good system memory to max-old-space-size ratio?
@HarryMoreno You can actually put in any number value you like; it doesn't have to be a power of 2. Not sure about the ratio though. It's only a max limit; it won't be using all the memory. I would just set it as high as you need, then scale back if needed.
I'll give system ram - 1gb a try. Assuming this vm is only for running this node app.
@HarryMoreno A good system memory to max-old-space-size ratio depends entirely on what else is running on your machine. You can increase it in powers of two - or you can use any number. You can set it larger than system memory - but you will hit swap issues.
Bat-Orshikh Baavgaikhuu

I just faced the same problem with my EC2 instance t2.micro, which has 1 GB of memory.

I resolved the problem by creating a swap file using this url and setting the following environment variable:

export NODE_OPTIONS=--max_old_space_size=4096

Finally, the problem was gone.

I hope this is helpful for future readers.


Thanks for sharing. I had the same problem, and your tip worked fine for me.
Thanks. I'm now able to "yarn build" Strapi on my $5/mo Linode nanode instance after I created a 2GB swap file and added an "ENV NODE_OPTIONS=--max_old_space_size=1024" to my Dockerfile. Not sure the swap step was needed in my case but it can't hurt.
Great! Works for me at Azure Standard_B1s
exactly, with small machines you may need to make it smaller rather than bigger, e.g. export NODE_OPTIONS=--max_old_space_size=1024
duchuy

I was struggling with this even after setting --max-old-space-size.

Then I realised I needed to put the --max-old-space-size option before the karma script.

I also specified both syntaxes --max-old-space-size and --max_old_space_size in my script for karma:

node --max-old-space-size=8192 --optimize-for-size --max-executable-size=8192  --max_old_space_size=8192 --optimize_for_size --max_executable_size=8192 node_modules/karma/bin/karma start --single-run --max_new_space_size=8192   --prod --aot

reference https://github.com/angular/angular-cli/issues/1652


"best to specify both syntaxes --max-old-space-size and --max_old_space_size" -- you do not need to do this, they are synonyms. From nodejs.org/api/cli.html#cli_options: "All options, including V8 options, allow words to be separated by both dashes (-) or underscores (_)."
max-executable-size was removed and ends in an error when it is used: github.com/nodejs/node/issues/13341
I had to cut it to --max-old-space-size=8192 --optimize-for-size --max_old_space_size=8192 --optimize_for_size and it worked
Thanks, you're an absolute life-saver. optimize-for-size has finally allowed my builds to succeed.
Astr-o

I encountered this issue when trying to debug with VSCode, so I just wanted to add how you can pass the argument to your debug setup.

You can add it to the runtimeArgs property of your config in launch.json.

See example below.

{
    "version": "0.2.0",
    "configurations": [{
            "type": "node",
            "request": "launch",
            "name": "Launch Program",
            "program": "${workspaceRoot}\\server.js"
        },
        {
            "type": "node",
            "request": "launch",
            "name": "Launch Training Script",
            "program": "${workspaceRoot}\\training-script.js",
            "runtimeArgs": [
                "--max-old-space-size=4096"
            ]
        }
    ]
}
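Alternatively (assuming a reasonably recent VS Code), the same limit can be passed through the env property of a launch configuration instead of runtimeArgs:

```json
{
    "type": "node",
    "request": "launch",
    "name": "Launch Training Script",
    "program": "${workspaceRoot}\\training-script.js",
    "env": {
        "NODE_OPTIONS": "--max-old-space-size=4096"
    }
}
```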

Would launch.json be the same as package.json?
No, launch.json is a configuration file specifically for running an application from VS Code.
Where is the launch.json file for VS Code?
Ramachandran Mani

I had a similar issue while doing an AOT angular build. The following commands helped me:

npm install -g increase-memory-limit
increase-memory-limit

Source: https://geeklearning.io/angular-aot-webpack-memory-trick/


Cheers, worked for me. It may be necessary to sudo npm -g install increase-memory-limit --unsafe-perm
4k is not enough; the developer hardcoded 4k as a static value. Otherwise a good solution from the developer. Also, when I explored the npm page, I couldn't see any info about changing the limit value. Actually, there is a solution, but it didn't work.
Now in 2021 there must be better alternatives as the lib is marked as deprecated: npmjs.com/package/increase-memory-limit
After running this I typed npm run dev and it stopped there: no progress shown and no error given after the line > webpack-dev-server --config ./webpack.dev.config.js. It just froze, so the project could not be run.
Venkat.R

I faced this same problem recently and came across this thread, but my problem was with a React App. The change below to the node start command solved my issue.

Syntax

node --max-old-space-size=<size> path-to/fileName.js

Example

node --max-old-space-size=16000 scripts/build.js

Why is the size 16000 in max-old-space-size?

Basically, it varies depending on the memory allocated to that thread and your node settings.

How to verify and choose the right size?

This basically lives in the V8 engine. The code below helps you understand the heap size of your local node's V8 engine:

const v8 = require('v8');
const totalHeapSize = v8.getHeapStatistics().total_available_size;
const totalHeapSizeGb = (totalHeapSize / 1024 / 1024 / 1024).toFixed(2);
console.log('totalHeapSizeGb: ', totalHeapSizeGb);

jbaylina

I just want to add that on some systems, even increasing the node memory limit with --max-old-space-size is not enough, and there is an OS error like this:

terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)

In this case, it is probably because you reached the maximum number of memory mappings (mmap) per process.

You can check the max_map_count by running

sysctl vm.max_map_count

and increase it by running

sysctl -w vm.max_map_count=655300

and make it persist across reboots by adding this line

vm.max_map_count=655300

in /etc/sysctl.conf file.

Check here for more info.

A good method to analyse the error is to run the process with strace:

strace node --max-old-space-size=128000 my_memory_consuming_process.js

The strace tip works, though note that if you're using Docker with the Node Alpine base image, you will have to install strace yourself.
Also note that strace produces a ton of output lines, which will obstruct your ability to see the regular log lines (unless you have some sort of filtering of the output). Any way to cut down on this noise to only show events relevant to std::bad_alloc errors?
Umang Patwa

Steps to fix this issue (on Windows):

Open a command prompt, type %appdata% and press Enter. Navigate to the %appdata% > npm folder. Open or edit ng.cmd in your favorite editor. Add --max_old_space_size=8192 to both the IF and ELSE blocks.

Your ng.cmd file looks like this after the change:

@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node "--max_old_space_size=8192" "%~dp0\node_modules\@angular\cli\bin\ng" %*
)

sidverma

Recently, one of my projects ran into the same problem. I tried a couple of things, which anyone can try as debugging steps to identify the root cause:

As everyone suggested, increase the memory limit in node by adding this command: { "scripts": { "server": "node --max-old-space-size={size-value} server/index.js" } }

Here the size-value I defined for my application was 1536 (as my kubernetes pod had a 2 GB memory limit and a 1.5 GB request).

So always define the size-value based on your frontend infrastructure/architecture limit (a little less than the limit).

One strict callout for the above command: put --max-old-space-size after the node command, not after the filename server/index.js.

If you have an nginx config file, then check the following things: worker_connections: 16384 (for heavy frontend applications) [the nginx default is 512 connections per worker, which is too low for modern applications]; use: epoll (the efficient method) [nginx supports a variety of connection processing methods]; in the http block, add the following to free your workers from getting busy handling unwanted tasks: client_body_timeout, reset_timedout_connection, client_header_timeout, keepalive_timeout, send_timeout. Remove or turn off all logging/tracking tools such as APM, Kafka, UTM tracking, Prerender (SEO) and other middlewares. Now for code-level debugging: in your main server file, remove unwanted console.log calls that just print a message. Then check every server route, i.e. app.get(), app.post(), etc., for the scenarios below:
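A sketch of the nginx settings described above (the numbers are illustrative assumptions, not recommendations; tune them to your own infrastructure). Note the actual directive name is reset_timedout_connection:

```nginx
worker_processes auto;

events {
    worker_connections 16384;  # nginx default is 512 per worker
    use epoll;                 # efficient connection processing on Linux
}

http {
    client_body_timeout       12;
    client_header_timeout     12;
    keepalive_timeout         15;
    send_timeout              10;
    reset_timedout_connection on;  # free memory of timed-out connections
}
```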

data => if(data) res.send(data) // do you really need to wait for data, or does that API return something in the response that you have to wait for? If not, then modify it like this:

data => res.send(data) // this will not block your thread; apply it everywhere it's needed

else part: if no error is coming, then simply return res.send({}), with NO console.log here.

error part: some people name it error, others err, which creates confusion and mistakes, like this: `error => { next(err) } // here err is undefined` `err => { next(error) } // here error is undefined` `app.get(API, (req, res) => { error => next(error) // here next is not defined })`

Remove winston, elastic-apm-node and other unused libraries, using the npx depcheck command.

In the axios service file, check whether the methods and logging are done properly, e.g.: if(successCB) console.log("success") successCB(response.data) // this is a wrong statement, because on success you are only logging inside the if, while `successCB` is called outside the if block, so it also returns in the failure case.

Avoid using stringify, parse, etc. on excessively large datasets (which I can see in your logs shown above).

Last but not least: every time your application crashes or your pods restart, check the logs. In the log, look specifically for the section Security context. It will tell you why, where, and what is the culprit behind the crash.


cansu

I will mention two types of solutions.

My solution: in my case I added this to my environment variables:

export NODE_OPTIONS=--max_old_space_size=20480

But even after restarting my computer it still did not work. My project folder was on the d:\ drive, so I moved my project to the c:\ drive and it worked.

My teammate's solution: a package.json configuration also worked:

"start": "rimraf ./build && react-scripts --expose-gc --max_old_space_size=4096 start",

This helped fix an issue in github actions with the build running out of memory. "build": "react-scripts --expose-gc --max_old_space_size=4096 build",
Irina Sargu

For other beginners like me who didn't find any suitable solution for this error: check the node version installed (x32, x64, x86). I have a 64-bit CPU and had installed the x86 node version, which caused the CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory error.
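A quick way to check which build is installed (a two-line sketch):

```javascript
// 'x64' means a 64-bit node build; 'ia32' on a 64-bit CPU caps the
// heap far lower and can trigger exactly this allocation failure.
console.log(process.arch);     // e.g. 'x64'
console.log(process.platform); // e.g. 'linux' or 'win32'
```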


Ahmed Aboud

If you want to change the memory globally for node (on Windows), go to Advanced System Settings -> Environment Variables -> New user variable:

variable name = NODE_OPTIONS
variable value = --max-old-space-size=4096

FiringBlanks

You can also change Windows environment variables (in PowerShell) with:

 $env:NODE_OPTIONS="--max-old-space-size=8192"

Aplet123

In my case, I upgraded the node.js version to the latest (version 12.8.0) and it worked like a charm.


Hello Angela and welcome to SO! Could you maybe specify the exact version of Node.js you updated to for future readers? Thanks!
I upgraded to Latest LTS Version: 12.18.0
Didn't think this would work, but it actually did! So for anyone reading this whose Node version is old, try upgrading to the latest version (I went from version 10 to 14.8); it will likely fix this issue for you. Many thanks
Vini Dalvino

Use the option --optimize-for-size. It's going to focus on using less RAM.


this saved me from getting 'Segmentation fault'. Thank you so much!
atazmin

I had this error on AWS Elastic Beanstalk; upgrading the instance type from t3.micro (free tier) to t3.small fixed the error.


JSON C11

Unix (Mac OS)

Open a terminal and open your .zshrc file using nano like so (this will create one if it doesn't exist): nano ~/.zshrc. Update the NODE_OPTIONS environment variable by adding the following line into the currently open .zshrc file: export NODE_OPTIONS=--max-old-space-size=8192 # increase node memory limit

Please note that we can set the number of megabytes passed in to whatever we like, provided our system has enough memory (here we are passing in 8192 megabytes which is roughly 8 GB).

Save and exit nano by pressing ctrl + x, then y to agree, and finally enter to save the changes. Close and reopen the terminal to make sure the changes have been recognised. You can print out the contents of the .zshrc file to see whether the changes were saved, like so: cat ~/.zshrc.

Linux (Ubuntu)

Open a terminal and open the .bashrc file using nano like so: nano ~/.bashrc

The remaining steps are similar to the Mac steps above, except we would most likely be using ~/.bashrc by default (as opposed to ~/.zshrc), so that value would need to be substituted.

Link to Nodejs Docs


Richard Frank

Upgrade node to the latest version. I was on node 6.6 with this error; I upgraded to 8.9.4 and the problem went away.


Akhilesh B Chandran

I experienced the same problem today. The problem for me was that I was trying to import a lot of data into the database in my NextJS project.

So what I did is, I installed win-node-env package like this:

yarn add win-node-env

Because my development machine was Windows, I installed it locally rather than globally. You can also install it globally, like this: yarn global add win-node-env

And then in the package.json file of my NextJS project, I added another startup script like this:

"dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev"

Here, I am passing the node option, i.e. setting 8 GB as the limit. So my package.json file looks somewhat like this:

{
  "name": "my_project_name_here",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "dev_more_mem": "NODE_OPTIONS=\"--max_old_space_size=8192\" next dev",
    "build": "next build",
    "lint": "next lint"
  },
  ......
}

And then I run it like this:

yarn dev_more_mem

For me, I was facing the issue only on my development machine (because I was importing large amounts of data), hence this solution. I thought I'd share it, as it might come in handy for others.


Adrien Joly

Just in case it may help people having this issue while using Node.js apps that produce heavy logging: a colleague solved this issue by piping the standard output(s) to a file.


Piterden

If you are trying to launch not node itself but some other software, for example webpack, you can use the environment variable and the cross-env package:

$ cross-env NODE_OPTIONS='--max-old-space-size=4096' \
  webpack --progress --config build/webpack.config.dev.js

rgantla

For Angular project bundling, I've added the line below to my package.json file in the scripts section.

"build-prod": "node --max_old_space_size=5120 ./node_modules/@angular/cli/bin/ng build --prod --base-href /"

Now, to bundle my code, I use npm run build-prod instead of ng build --requiredFlagsHere

hope this helps!


R15

For Angular, this is how I fixed it.

In package.json, inside the scripts tag, add this:

"scripts": {
  "build-prod": "node --max_old_space_size=5048 ./node_modules/@angular/cli/bin/ng build --prod",
},

Now in terminal/cmd instead of using ng build --prod just use

npm run build-prod

If you want to use this configuration for build only, just remove --prod from all three places.


Shoaib Iqbal

If none of the given answers work for you, check whether the node you installed is compatible (i.e. 32-bit or 64-bit) with your system. Usually this type of error occurs because of incompatible node and OS versions; the terminal/system will not tell you about that, but will keep giving you an out-of-memory error.


Gampesh

In my case, I had run npm install on a previous version of node; some days later I upgraded the node version and ran npm install for a few modules. After this I was getting this error. To fix the problem, I deleted the node_modules folder from each project and ran npm install again.

Hope this might fix the problem.

Note: this was happening on my local machine, and it only got fixed on my local machine.


Dharman

If you have limited memory or RAM, then try the following command:

ng serve --source-map=false

It will be able to launch the application. In my example it needed 16 GB of RAM, but with this flag I can run it with 8 GB.


Ramone Graham

Check that you did not install the 32-bit version of node on a 64-bit machine. If you are running node on a 64-bit or 32-bit machine, then the nodejs folder should be located in Program Files or Program Files (x86), respectively.