
How, in general, does Node.js handle 10,000 concurrent requests?

I understand that Node.js uses a single thread and an event loop to process requests, only processing one at a time (which is non-blocking). But still, how does that work for, let's say, 10,000 concurrent requests? Will the event loop process all the requests? Wouldn't that take too long?

I cannot understand (yet) how it can be faster than a multi-threaded web server. I understand that a multi-threaded web server will be more expensive in resources (memory, CPU), but wouldn't it still be faster? I am probably wrong; please explain how this single thread is faster when there are lots of requests, and what it typically does (at a high level) when servicing lots of requests like 10,000.

And also, will that single thread scale well with such a large number of requests? Please bear in mind that I am just starting to learn Node.js.

Because most of the work (moving data around) doesn't involve the CPU.
Note also that just because there's only one thread executing Javascript, doesn't mean there aren't lots of other threads doing work.
This question is either too broad, or a duplicate of various other questions.
Along with single threading, Node.js does something called "non-blocking I/O". That is where all the magic happens.

slebetman

If you have to ask this question then you're probably unfamiliar with what most web applications/services do. You're probably thinking that all software does this:

user do an action
       │
       v
 application start processing action
   └──> loop ...
          └──> busy processing
 end loop
   └──> send result to user

However, this is not how web applications, or indeed any application with a database as the back-end, work. Web apps do this:

user do an action
       │
       v
 application start processing action
   └──> make database request
          └──> do nothing until request completes
 request complete
   └──> send result to user

In this scenario, the software spends most of its running time using 0% CPU, waiting for the database to return.

Multithreaded network app:

Multithreaded network apps handle the above workload like this:

request ──> spawn thread
              └──> wait for database request
                     └──> answer request
request ──> spawn thread
              └──> wait for database request
                     └──> answer request
request ──> spawn thread
              └──> wait for database request
                     └──> answer request

So the threads spend most of their time using 0% CPU, waiting for the database to return data. While doing so, they have had to allocate the memory required for a thread, which includes a completely separate program stack for each thread, etc. Also, they would have to start a thread, which, while not as expensive as starting a full process, is still not exactly cheap.

Singlethreaded event loop

Since we spend most of our time using 0% CPU, why not run some code while we're not using the CPU? That way, each request will still get the same amount of CPU time as in multithreaded applications, but we don't need to start a thread. So we do this:

request ──> make database request
request ──> make database request
request ──> make database request
database request complete ──> send response
database request complete ──> send response
database request complete ──> send response

In practice both approaches return data with roughly the same latency since it's the database response time that dominates the processing.

The main advantage here is that we don't need to spawn a new thread so we don't need to do lots and lots of malloc which would slow us down.
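This flow can be sketched in Node-style JavaScript. It is a minimal sketch: `queryDatabase` is a hypothetical stand-in for a real driver call, and `setImmediate` simulates the result arriving on a later event-loop turn:

```javascript
// Hypothetical stand-in for a real database driver call: the query is
// dispatched immediately, and the result arrives later via the event loop.
let inFlight = 0;
function queryDatabase(sql, callback) {
  inFlight++;
  setImmediate(() => {
    inFlight--;
    callback(null, { rows: ['some data'] });
  });
}

function handleRequest(id, respond) {
  queryDatabase('SELECT ...', (err, result) => respond(id, result));
  // Control returns here immediately -- no thread sits blocked on the query.
}

handleRequest(1, (id) => console.log('answered', id));
handleRequest(2, (id) => console.log('answered', id));
handleRequest(3, (id) => console.log('answered', id));

// All three "requests" are already in flight before any response is sent:
console.log(inFlight); // 3
```

Because nothing blocks, all three requests are dispatched before any response arrives; the thread that would otherwise sit waiting on the database is free to accept more work.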

Magic, invisible threading

The seemingly mysterious thing is how both of the approaches above manage to run workloads in "parallel". The answer is that the database is threaded. So our single-threaded app is actually leveraging the multi-threaded behaviour of another process: the database.

Where singlethreaded approach fails

A singlethreaded app fails big if you need to do lots of CPU calculations before returning the data. Now, I don't mean a for loop processing the database result. That's still mostly O(n). What I mean is things like doing Fourier transform (mp3 encoding for example), ray tracing (3D rendering) etc.

Another pitfall of singlethreaded apps is that they only utilise a single CPU core. So if you have a quad-core server (not uncommon nowadays) you're not using the other 3 cores.
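A toy illustration of the CPU-bound failure mode, using a naive recursive Fibonacci as a stand-in for the heavy computation (the real-world examples would be Fourier transforms or ray tracing):

```javascript
// CPU-bound work: while this runs, the single JS thread can do nothing
// else -- no other request's callback can be serviced.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Every millisecond spent here is a millisecond during which every
// other request on this event loop is stalled.
const answer = fib(25);
console.log(answer); // 75025
```

The usual escape hatches are `worker_threads` or a separate process, which move the computation off the event-loop thread.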

Where multithreaded approach fails

A multithreaded app fails big if you need to allocate lots of RAM per thread. First, the RAM usage itself means you can't handle as many requests as a singlethreaded app. Worse, malloc is slow. Allocating lots and lots of objects (which is common for modern web frameworks) means we can potentially end up being slower than singlethreaded apps. This is where node.js usually wins.

One use-case that ends up making multithreading worse is when you need to run another scripting language in your thread. First you usually need to malloc the entire runtime for that language, then you need to malloc the variables used by your script.

So if you're writing network apps in C or Go or Java, then the overhead of threading will usually not be too bad. If you're writing a C web server to serve PHP or Ruby, then it's very easy to write a faster server in JavaScript or Ruby or Python.

Hybrid approach

Some web servers use a hybrid approach. Nginx and Apache2, for example, implement their network processing code as a thread pool of event loops. Each thread runs an event loop, processing requests single-threaded, but requests are load-balanced among multiple threads.

Some single-threaded architectures also use a hybrid approach. Instead of launching multiple threads from a single process you can launch multiple applications - for example, 4 node.js servers on a quad-core machine. Then you use a load balancer to spread the workload amongst the processes.

In effect the two approaches are technically identical mirror-images of each other.


This is by far the best explanation for Node I have read so far. That "single-threaded app is actually leveraging the multi-threaded behaviour of another process: the database" did the trick.
@CaspainCaldion It depends on what you mean by very fast and lots of clients. As is, node.js can process upwards of 1000 requests per second, with speed limited only by your network card. Note that it's 1000 requests per second, not clients connected simultaneously. It can handle 10000 simultaneous clients without issue. The real bottleneck is the network card.
@slebetman, best explanation ever. One thing though: if I have a machine learning algorithm which processes some info and delivers results accordingly, should I use a multi-threaded or a single-threaded approach?
@GaneshKarewad Algorithms use CPU, services (database, REST API etc.) use I/O. If the AI is an algorithm written in js then you should run it in another thread or process. If the AI is a service running on another computer (like Amazon or Google or IBM AI services) then use a single threaded architecture.
@VikasDubey Note that video streams are not processed by the node server at all. 99% of the processing happens in the browser/media player. In this case node.js serves as nothing more than a file server which means it leverages on the OS's parallel disk/network I/O capabilities. For network I/O most OSes are equally capable but for disk I/O Linux tends to trump everyone else if used with fast filesystems like Btrfs or ext4 (of course, RAID makes almost everything fast)
chriskelly

What you seem to be thinking is that most of the processing is handled in the node event loop. Node actually farms off the I/O work to threads. I/O operations typically take orders of magnitude longer than CPU operations so why have the CPU wait for that? Besides, the OS can handle I/O tasks very well already. In fact, because Node does not wait around it achieves much higher CPU utilisation.

By way of analogy, think of NodeJS as a waiter taking the customer orders while the I/O chefs prepare them in the kitchen. Other systems have multiple chefs, each of whom takes a customer's order, prepares the meal, clears the table, and only then attends to the next customer.


Thanks for the restaurant analogy! I find analogies and real-world examples so much easier to learn from.
Very well articulated. Nice analogy!
The js part of Node is single threaded, while the underlying C++ part uses a thread pool. stackoverflow.com/a/70161215/4034825
In the restaurant example: what if we have 10000 customers and one waiter? Isn't it slow to handle those customers with one waiter (one thread)?
Piyush Balapure

Single Threaded Event Loop Model Processing Steps:

Clients send requests to the web server.

The Node JS web server internally maintains a limited thread pool to provide services to the client requests.

The Node JS web server receives those requests and places them into a queue, known as the "Event Queue".

The Node JS web server internally has a component known as the "Event Loop". It got this name because it uses an indefinite loop to receive requests and process them.

The Event Loop uses a single thread only. It is the heart of the Node JS platform's processing model.

The Event Loop checks whether any client request is placed in the Event Queue. If not, it waits indefinitely for incoming requests.

If yes, then it picks up one client request from the Event Queue.

Starts processing that client request.

If that client request does not require any blocking IO operations, it processes everything, prepares the response, and sends it back to the client.

If that client request requires some blocking IO operations, like interacting with a database, the file system, or external services, then it follows a different approach:

Checks thread availability in the internal thread pool.

Picks up one thread and assigns the client request to that thread.

That thread is responsible for taking that request, processing it, performing the blocking IO operations, preparing the response, and sending it back to the Event Loop. Very nicely explained by @Rambabu Posa; for more explanation, go through this link.


The diagram given in that blog post seems to be wrong; what they have mentioned in that article is not completely correct.
AFAIK, there's no blocking I/O in Node (unless you use sync APIs); the thread pool is only used momentarily to handle I/O responses and deliver them to the main thread. While waiting for the I/O request, there's no thread (blog.stephencleary.com/2013/11/there-is-no-thread.html); otherwise the thread pool would get clogged pretty quickly.
I cleared up the confusion: stackoverflow.com/a/70161215/4034825
sheltond

I understand that Node.js uses a single-thread and an event loop to process requests only processing one at a time (which is non-blocking).

I could be misunderstanding what you've said here, but "one at a time" sounds like you may not be fully understanding the event-based architecture.

In a "conventional" (non event-driven) application architecture, the process spends a lot of time sitting around waiting for something to happen. In an event-based architecture such as Node.js the process doesn't just wait, it can get on with other work.

For example: you get a connection from a client, you accept it, you read the request headers (in the case of http), then you start to act on the request. You might read the request body, you will generally end up sending some data back to the client (this is a deliberate simplification of the procedure, just to demonstrate the point).

At each of these stages, most of the time is spent waiting for some data to arrive from the other end - the actual time spent processing in the main JS thread is usually fairly minimal.

When the state of an I/O object (such as a network connection) changes such that it needs processing (e.g. data is received on a socket, a socket becomes writable, etc) the main Node.js JS thread is woken with a list of items needing to be processed.

It finds the relevant data structure and emits some event on that structure which causes callbacks to be run, which process the incoming data, or write more data to a socket, etc. Once all of the I/O objects in need of processing have been processed, the main Node.js JS thread will wait again until it's told that more data is available (or some other operation has completed or timed out).

The next time that it is woken, it could well be due to a different I/O object needing to be processed - for example a different network connection. Each time, the relevant callbacks are run and then it goes back to sleep waiting for something else to happen.

The important point is that the processing of different requests is interleaved, it doesn't process one request from start to end and then move onto the next.

To my mind, the main advantage of this is that a slow request (e.g. you're trying to send 1MB of response data to a mobile phone device over a 2G data connection, or you're doing a really slow database query) won't block faster ones.

In a conventional multi-threaded web server, you will typically have a thread for each request being handled, and it will process ONLY that request until it's finished. What happens if you have a lot of slow requests? You end up with a lot of your threads hanging around processing these requests, and other requests (which might be very simple requests that could be handled very quickly) get queued behind them.

There are plenty of other event-based systems apart from Node.js, and they tend to have similar advantages and disadvantages compared with the conventional model.

I wouldn't claim that event-based systems are faster in every situation or with every workload - they tend to work well for I/O-bound workloads, not so well for CPU-bound ones.


Good explanation , for understanding that event loop works for multiple requests simultaneously.
Aman Gupta

Adding to slebetman's answer: when you say Node.JS can handle 10,000 concurrent requests, they are essentially non-blocking requests, i.e. requests that mainly pertain to database queries.

Internally, the event loop of Node.JS is handling a thread pool, where each thread handles a non-blocking request, and the event loop continues to listen for more requests after delegating work to one of the threads of the thread pool. When one of the threads completes the work, it sends a signal to the event loop that it has finished, aka a callback. The event loop then processes this callback and sends the response back.

As you are new to NodeJS, do read more about nextTick to understand how event loop works internally. Read blogs on http://javascriptissexy.com, they were really helpful for me when I started with JavaScript/NodeJS.


OfirD

Adding to slebetman's answer for more clarity on what happens while executing the code.

The internal thread pool in NodeJS has just 4 threads by default. And it's not like the whole request is attached to a new thread from the thread pool; the execution of the request happens just like any normal request (without any blocking task), except that whenever a request has a long-running or heavy operation, like a DB call, a file operation, or an HTTP request, the task is queued to the internal thread pool, which is provided by libuv. And as NodeJS provides 4 threads in the internal thread pool by default, every 5th or subsequent concurrent request waits until a thread is free. Once these operations are over, the callback is pushed to the callback queue, where it is picked up by the event loop, which sends back the response.

Now here comes another piece of information: it's not one single callback queue; there are many queues:

NextTick queue
Microtask queue
Timers queue
IO callback queue (requests, file ops, db ops)
IO poll queue
Check phase queue (setImmediate)
Close handlers queue

Whenever a request comes in, the code gets executed in this order of queued callbacks.
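A small runnable illustration of the run-to-completion rule behind these queues: nothing that is queued (a nextTick, microtask, timer, or check callback) runs until the currently executing synchronous code has finished:

```javascript
const order = [];

process.nextTick(() => order.push('nextTick'));        // NextTick queue
Promise.resolve().then(() => order.push('microtask')); // Microtask queue
setTimeout(() => order.push('timer'), 0);              // Timers queue
setImmediate(() => order.push('check'));               // Check phase queue

order.push('sync');

// The synchronous code always finishes first; only then does the event
// loop drain the nextTick queue, then the microtasks, then later phases.
console.log(order); // ['sync']
```

This is also why a single long-running synchronous computation delays every queue at once: none of them can be drained until the current callback returns.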

It is not like when there is a blocking request it is attached to a new thread. There are only 4 threads by default. So there is another queueing happening there.

Whenever a blocking operation like a file read occurs in the code, it calls a function which utilises a thread from the thread pool, and then, once the operation is done, the callback is passed to the respective queue and executed in order.

Everything gets queued based on the type of callback and processed in the order mentioned above.


"The internal thread pool in nodeJs just has 4 threads by default" … Could you elaborate? How does Node use 4 threads?
NodeJS internally uses a few threads (4 by default) for connecting with a DB, reading files from disk, interacting with the OS, etc. You can increase this thread count by setting the environment variable 'UV_THREADPOOL_SIZE' to a different number when running your NodeJS app.
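For example (the pool size is read once at startup, so it must be set in the environment before Node launches; `app.js` here is just a placeholder name):

```shell
# Default libuv pool size is 4; raise it for I/O-heavy workloads:
UV_THREADPOOL_SIZE=8 node app.js
```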
Anurag Vohra

The blocking part of the multithreaded blocking system makes it less efficient. A thread which is blocked cannot be used for anything else while it is waiting for a response.

A non-blocking single-threaded system, meanwhile, makes the best use of its single thread.

https://i.stack.imgur.com/yLA5K.png

Let's see how non-blocking works:

https://i.stack.imgur.com/EtiYI.png

This is how the event loop works in NodeJS, and it performs better than a blocking multithreaded system.


How does the client not time out if it doesn't have its own thread, though? Is there some sort of queue that the callbacks are added to? And then this queue is worked through quickly if the code is designed well and doesn't block the event loop.
@MattieG4 The client, after passing a callback, continues to do other things. It is then the job of the callback receiver to call the callback once the job is done. And yes, the event loop has a queue structure, where tasks are added and served on a FIFO basis. If any blocking code is in the queue, it will delay every item/task in the queue.
asif.ibtihaj

Here is a good explanation from this medium article:

Given a NodeJS application, since Node is single threaded, say if processing involves a Promise.all that takes 8 seconds, does this mean that the client request that comes after this one would need to wait eight seconds? No. The NodeJS event loop is single threaded; the entire server architecture for NodeJS is not single threaded.

Before getting into the Node server architecture, take a look at the typical multithreaded request-response model: the web server has multiple threads, and when concurrent requests get to the web server, it picks threadOne from the threadPool; threadOne processes requestOne and responds to clientOne. When the second request comes in, the web server picks up the second thread from the threadPool, picks up requestTwo, processes it, and responds to clientTwo. threadOne is responsible for all kinds of operations that requestOne demanded, including doing any blocking IO operations.

The fact that the thread needs to wait for blocking IO operations is what makes it inefficient. With this kind of model, the web server can only serve as many requests as there are threads in the thread pool.

The NodeJS web server maintains a limited thread pool to provide services to client requests. Multiple clients make multiple requests to the NodeJS server. NodeJS receives these requests and places them into the EventQueue. The NodeJS server has an internal component referred to as the EventLoop, which is an infinite loop that receives requests and processes them. This EventLoop is single threaded. In other words, the EventLoop is the listener for the EventQueue. So, we have an event queue where the requests are being placed, and we have an event loop listening to these requests in the event queue. What happens next? The listener (the event loop) processes the request, and if it is able to process the request without needing any blocking IO operations, then the event loop processes the request and sends the response back to the client by itself. If the current request uses blocking IO operations, the event loop sees whether there are threads available in the thread pool, picks up one thread from the thread pool, and assigns the particular request to the picked thread. That thread does the blocking IO operations and sends the response back to the event loop, and once the response gets to the event loop, the event loop sends the response back to the client.

How is NodeJS better than the traditional multithreaded request-response model? With the traditional multithreaded request/response model, every client gets a different thread, whereas with NodeJS, the simpler requests are all handled directly by the EventLoop. This is an optimization of thread pool resources, and there is no overhead of creating a thread for every client request.


"then the event loop would itself process the request and sends the response back to the client by itself." <- In this case, the CPU processes the request without initiating I/O, correct? And when I/O is used, the CPU initiates the I/O to do the work, then when the I/O is done, it sends the CPU an update to send back to the client?