
Nginx upstream prematurely closed connection while reading response header from upstream, for large requests

I am using nginx with a Node server to serve update requests. I get a gateway timeout when I request an update on large data. I saw this error in the nginx error logs:

2016/04/07 00:46:04 [error] 28599#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.77, server: gis.oneconcern.com, request: "GET /update_mbtiles/atlas19891018000415 HTTP/1.1", upstream: "http://127.0.0.1:7777/update_mbtiles/atlas19891018000415", host: "gis.oneconcern.com"

I googled for the error and tried everything I could, but I still get the error.

My nginx conf has these proxy settings:

    ##
    # Proxy settings
    ##

    proxy_connect_timeout 1000;
    proxy_send_timeout 1000;
    proxy_read_timeout 1000;
    send_timeout 1000;

This is how my server is configured:

server {
    listen 80;

    server_name gis.oneconcern.com;
    access_log /home/ubuntu/Tilelive-Server/logs/nginx_access.log;
    error_log /home/ubuntu/Tilelive-Server/logs/nginx_error.log;

    large_client_header_buffers 8 32k;

    location / {
        proxy_pass http://127.0.0.1:7777;
        proxy_redirect off;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }

    location /faults {
        proxy_pass http://127.0.0.1:8888;
        proxy_http_version 1.1;
        proxy_buffers 8 64k;
        proxy_buffer_size 128k;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

I am using a Node.js backend to serve the requests on an AWS server. The gateway error shows up only when the update takes a long time (about 3-4 minutes). I do not get any error for smaller updates. Any help will be highly appreciated.

Node.js code:

app.get("/update_mbtiles/:earthquake", function(req, res){
var earthquake = req.params.earthquake
var command = spawn(__dirname + '/update_mbtiles.sh', [ earthquake, pg_details ]);
//var output  = [];

command.stdout.on('data', function(chunk) {
//    logger.info(chunk.toString());
//     output.push(chunk.toString());
});

command.stderr.on('data', function(chunk) {
  //  logger.error(chunk.toString());
 //   output.push(chunk.toString());
});

command.on('close', function(code) {
    if (code === 0) {
        logger.info("updating mbtiles successful for " + earthquake);
        tilelive_reload_and_switch_source(earthquake);
        res.send("Completed updating!");
    }
    else {
        logger.error("Error occured while updating " + earthquake);
        res.status(500);
        res.send("Error occured while updating " + earthquake);
    }
});
});

function tilelive_reload_and_switch_source(earthquake_unique_id) {
    tilelive.load('mbtiles:///' + __dirname + '/mbtiles/tipp_out_' + earthquake_unique_id + '.mbtiles', function(err, source) {
        if (err) {
            logger.error(err.message);
            throw err;
        }
        sources.set(earthquake_unique_id, source);
        logger.info('Updated source! New tiles!');
    });
}

Thank you.

The question itself helped me: I was missing proxy_http_version 1.1; while accepting HTTP/2 requests.

OpSocket

I solved this by setting a higher timeout value for the proxy:

location / {
    proxy_read_timeout 300s;
    proxy_connect_timeout 75s;
    proxy_pass http://localhost:3000;
}

Documentation: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
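
Note that changed timeouts only take effect once the configuration is validated and reloaded, e.g.:

    sudo nginx -t          # check the configuration for syntax errors
    sudo nginx -s reload   # reload workers without dropping connections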


According to the NGINX docs, the connect timeout usually can't be longer than 75s: "Defines a timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds."
proxy_read_timeout 300s; proxy_connect_timeout 75s; fixes this error message ("upstream prematurely closed connection while reading response header from upstream") in logs/proxy_error_log in Plesk Obsidian.
SilentMiles

I think that error from Nginx is indicating that the connection was closed by your nodejs server (i.e., "upstream"). How is nodejs configured?


Oh! I found out that my node server sends an empty response for big data requests.
Hi @DivyaKonda, can you elaborate a bit more on how serving an empty response caused gateway timeout errors, please?
Maybe your nodejs server timed out (the default is 2 min); when it times out, the server sends an empty response (a sketch of raising that limit follows these comments). The doc: nodejs.org/api/http.html#http_server_settimeout_msecs_callback
Hi @DivyaKonda, could you please explain how you fixed it?
In my case, we had ssl passthrough required in the k8s ingress, but not enabled in the nginx ingress controller (helm chart): we were missing controller.extraArgs.enable-ssl-passthrough: "". In short, any kind of misconfiguration can probably cause this.
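
A minimal sketch of raising that default Node timeout, assuming an Express app like the one in the question (port and duration are illustrative):

    var server = app.listen(7777);
    // The default server timeout is 2 minutes; raise it so
    // long-running updates can finish before the socket is destroyed.
    server.setTimeout(10 * 60 * 1000); // 10 minutes, in milliseconds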
millenion

I had the same error for quite a while, and here is what fixed it for me.

I simply declared the following in the systemd service that I use:

[Unit]
Description=Your node service description
After=network.target

[Service]
Type=forking
PIDFile=/tmp/node_pid_name.pid
Restart=on-failure
KillSignal=SIGQUIT
WorkingDirectory=/path/to/node/app/root/directory
ExecStart=/path/to/node /path/to/server.js

[Install]
WantedBy=multi-user.target

What should catch your attention here is "After=network.target". I spent days and days looking for fixes on the nginx side, while the problem was just that. To be sure, stop the node service you have running, launch the ExecStart command directly, and try to reproduce the bug. If it doesn't pop up, it just means that your service has a problem. At least this is how I found my answer.

For everybody else, good luck!


Upon further investigation it looks like that is a systemd configuration file, and the After=network.target setting is attempting to delay the start of that nodejs service until after the system's networking is up and running.
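
To test that theory quickly, you can restart the unit and watch its journal while reproducing the request (the unit name here is hypothetical):

    sudo systemctl daemon-reload
    sudo systemctl restart node_app.service
    journalctl -u node_app.service -f   # a crash here is what nginx reports as a closed upstream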
Tomasz Poradowski

I stumbled upon an *145660 upstream prematurely closed connection while reading upstream Nginx error log entry when trying to download a 2 GB file from the server Nginx was proxying. The message indicates that the "upstream" closed the connection, but in fact it was related to the proxy_max_temp_file_size setting:

Syntax: proxy_max_temp_file_size size;
Default: proxy_max_temp_file_size 1024m;
Context: http, server, location

When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive. The zero value disables buffering of responses to temporary files. This restriction does not apply to responses that will be cached or stored on disk.

The symptoms:

the download was being forcibly stopped at around 1 GB,

Nginx claimed that the upstream closed the connection, but without the proxy the server was returning the full content.

The solution:

increasing proxy_max_temp_file_size for the proxied location to 4096m made it start sending the full content.
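
As a sketch, the fix was a one-line change inside the proxied location (the location path is a placeholder):

    location /downloads/ {
        proxy_pass http://127.0.0.1:7777;
        # Default is 1024m, which truncated the 2 GB download.
        proxy_max_temp_file_size 4096m;
    }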


Pmpr.ir

You can increase the timeout in node like so:

app.post('/slow/request', function(req, res) {
    req.connection.setTimeout(100000); //100 seconds
    ...
});
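
On recent Node versions, req.connection is a deprecated alias of req.socket, so newer code would call req.socket.setTimeout(100000) instead.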

Yukshy Klein

I don't think this is your case, but I'll post it in case it helps anyone. I had the same issue, and the problem was that Node didn't respond at all (I had a condition that, when it failed, didn't do anything, so there was no response). So if increasing all your timeouts didn't solve it, make sure all scenarios get a response.
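
A sketch of that failure mode, with hypothetical names (isValid stands in for whatever condition was failing): a branch that never sends a response leaves nginx waiting until the upstream socket closes.

    app.get('/thing/:id', function(req, res) {
        if (isValid(req.params.id)) {
            res.send('ok');
        } else {
            // Without this branch nothing is ever sent, and nginx
            // eventually reports the upstream connection as closed.
            res.status(400).send('bad id');
        }
    });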


ChrisDanger

I was finding this error in the logs of my AWS Elastic Beanstalk instance when trying to post about half a million rows to my API.

I followed all the advice here to no avail.

What did finally work was increasing the size of my EC2 instance from 1 core and 1 GB RAM to 4 cores and 8 GB RAM.


This solved the issue for me as well; I was attempting a complex query on a virtual machine with 1GB of RAM. Upping it to 2GB was my solution. Check your servers have enough memory!
William

I ran into this issue as well and found this post. Ultimately none of these answers solved my problem; instead I had to put in a rewrite rule to strip out the location /rt, as the backend my developers made was not expecting any additional paths:

┌─(william@wkstn18)──(Thu, 05 Nov 20)─┐
└─(~)──(16:13)─>wscat -c ws://WebsocketServerHostname/rt
error: Unexpected server response: 502

Testing with wscat repeatedly gave a 502 response. The Nginx error logs provided the same upstream error as above, but notice that the upstream string shows the GET request is attempting to access localhost:12775/rt/socket.io and not localhost:12775/socket.io:

 2020/11/05 22:13:32 [error] 10175#10175: *7 upstream prematurely closed
 connection while reading response header from upstream, client: WANIP,
 server: WebsocketServerHostname, request: "GET /rt/socket.io/?transport=websocket
 HTTP/1.1", upstream: "http://127.0.0.1:12775/rt/socket.io/?transport=websocket",
 host: "WebsocketServerHostname"

The devs had not coded their websocket (listening on 12775) to expect /rt/socket.io but instead just /socket.io/ (NOTE: /socket.io/ appears to just be a way to specify the websocket transport, discussed here). Because of this, rather than ask them to rewrite their socket code, I just put in a rewrite rule to translate WebsocketServerHostname/rt to WebsocketServerHostname:12775, as below:

upstream websocket-rt {
        ip_hash;

        server 127.0.0.1:12775;
}

server {
        listen 80;
        server_name     WebsocketServerHostname;

        location /rt {
                proxy_http_version 1.1;

                #rewrite /rt/ out of all requests and proxy_pass to 12775
                rewrite /rt/(.*) /$1  break;

                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;

                proxy_pass http://websocket-rt;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
        }

}

There is a simpler way without any additional rewrites: location /rt/ { ... proxy_pass http://websocket-rt/; }. This way the /rt prefix will be stripped automatically, and no PCRE library call will be involved. And, technically speaking, when using location /rt { ... }, the more correct rewrite rule would be rewrite ^/rt(?:/(.*))? /$1 break; (or location /rt/ { ... } should be used instead, to ensure that a slash follows the /rt prefix).
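
Spelled out, that trailing-slash variant would look like the sketch below; nginx replaces the matched /rt/ prefix with the URI part of proxy_pass:

    location /rt/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        # Assumes a "map $http_upgrade $connection_upgrade { ... }" block
        # in the http context (not shown in the answer's config either).
        proxy_set_header Connection $connection_upgrade;
        # The trailing slash strips the /rt/ prefix before proxying.
        proxy_pass http://websocket-rt/;
    }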
LucieDevGirl

I met the same problem, and none of the solutions detailed here worked for me... First of all I had a 413 Entity Too Large error, so I updated my nginx.conf as follows:

http {
        # Increase request size
        client_max_body_size 10m;

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

        ##
        # Proxy settings
        ##
        proxy_connect_timeout 1000;
        proxy_send_timeout 1000;
        proxy_read_timeout 1000;
        send_timeout 1000;
}

So I only updated the http part, and now I get the error 502 Bad Gateway, and when I display /var/log/nginx/error.log I get the famous "upstream prematurely closed connection while reading response header from upstream".

What is really mysterious to me is that the request works when I run it with virtualenv on my server and send the request directly to IP:8000/nameOfTheRequest.

Thanks for reading


The Coder

I got the same error; here is how I resolved it:

1. Downloaded logs from AWS.
2. Reviewed Nginx logs: no additional details beyond the above.
3. Reviewed node.js logs: an AccessDenied AWS SDK permissions error.
4. Checked the S3 bucket that AWS was trying to read from.
5. Added the additional bucket with read permission to the correct server role.

Even though I was processing large files, there were no other errors or settings I had to change once I corrected the missing S3 access.
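
For illustration, the missing piece was a read grant on the bucket for the server's role; a hypothetical policy statement (the bucket name is a placeholder) might look like:

    {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }]
    }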


Margach Chris

Problem

The upstream server is timing out and I don't know what is happening.

Where to look first, before increasing the read or write timeouts, if your server is connecting to a database:

Check that the database connection is working just fine and within a sane response time, and that it's not the one causing this delay in the server's response time.

Make sure that connection state is not causing a cascading failure on your upstream.

Then you can move on to look at the read and write timeout configurations of the server and proxy.
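
A minimal sketch of that first check, assuming a Postgres pool from the pg library (all names are illustrative): time a trivial query so you can rule the database in or out before touching any timeouts.

    const { Pool } = require('pg');
    const pool = new Pool(); // connection details come from PG* environment variables

    async function dbHealthCheck() {
        const start = Date.now();
        await pool.query('SELECT 1'); // trivial round-trip
        console.log('db round-trip: ' + (Date.now() - start) + 'ms');
    }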


Gerard de Visser

This error can also occur when your code gets into a loop. So investigate whether you have any (indirectly) self-referencing code that's causing this.