
What's the difference between: Asynchronous, Non-Blocking, Event-Based architectures?

What's the difference between: Asynchronous, Non-Blocking, and Event-based architectures? Can something be both asynchronous and non-blocking (and event-based)? What's most important in programming, to have something: asynchronous, non-blocking and/or event-based (or all 3)?

If you could provide examples, that would be great.

This question is being asked because I was reading this great StackOverflow article on a similar topic but it doesn't answer my questions above.


Rohit Karlupia

Asynchronous

Asynchronous literally means not synchronous. Email is asynchronous: you send a mail and you don't expect to get a response NOW. But it is not non-blocking. Essentially what it means is an architecture where "components" send messages to each other without expecting a response immediately. HTTP requests are synchronous: send a request and get a response.
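A tiny Go sketch of that message-passing idea, where a goroutine and a channel stand in for the other "component" and its mailbox (purely illustrative, not tied to any real framework):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	replies := make(chan string, 1) // hypothetical "mailbox" for the eventual answer

	// Send the "mail": fire off the request and don't wait for the reply.
	go func() {
		time.Sleep(100 * time.Millisecond) // the other component takes its time
		replies <- "got your message"
	}()

	fmt.Println("message sent, carrying on with other work...") // not waiting for a reply

	// Much later, pick the reply up from the mailbox (this receive blocks by choice).
	fmt.Println("reply:", <-replies)
}
```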

Non-Blocking

This term is mostly used with IO. What it means is that when you make a system call, it will return immediately with whatever result it has, without putting your thread to sleep (with high probability). For example, non-blocking read/write calls return with whatever they can do and expect the caller to execute the call again. try_lock, for example, is a non-blocking call: it will lock only if the lock can be acquired. The usual semantics for system calls is blocking: read will wait until it has some data and put the calling thread to sleep.
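A rough sketch of the non-blocking pattern in Go, going through the syscall package (Linux/Unix only; the pipe is just a stand-in for whatever file descriptor you would actually be reading):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// a pipe as a stand-in for any file descriptor we might read from
	var fds [2]int
	if err := syscall.Pipe(fds[:]); err != nil {
		panic(err)
	}
	rfd := fds[0]

	// switch the read end into non-blocking mode
	if err := syscall.SetNonblock(rfd, true); err != nil {
		panic(err)
	}

	buf := make([]byte, 64)
	n, err := syscall.Read(rfd, buf)
	if err == syscall.EAGAIN {
		// no data yet: the call returns immediately instead of sleeping,
		// and it is up to us to try the read again later
		fmt.Println("nothing to read right now, try again later")
	} else if err != nil {
		panic(err)
	} else {
		fmt.Printf("read %d bytes\n", n)
	}
}
```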

Event-based

This term comes from libevent. Non-blocking read/write calls in themselves are useless, because they don't tell you "when" you should call them back (retry). select/epoll/IOCompletionPort etc. are different mechanisms for finding out from the OS "when" these calls are expected to return "interesting" data. libevent and other such libraries provide wrappers over these event-monitoring facilities provided by the various OSes, giving a consistent API to work with across operating systems. Non-blocking IO goes hand in hand with event-based programming.
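And a bare-bones Linux-only sketch of what such wrappers do underneath, talking to epoll directly via Go's syscall package - libevent and friends essentially wrap this kind of loop behind a portable API:

```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
	"time"
)

func main() {
	// a pipe stands in for any fd (socket, serial port, ...)
	var fds [2]int
	if err := syscall.Pipe(fds[:]); err != nil {
		panic(err)
	}
	rfd, wfd := fds[0], fds[1]
	syscall.SetNonblock(rfd, true)

	// register interest in "rfd is readable" with epoll
	epfd, err := syscall.EpollCreate1(0)
	if err != nil {
		panic(err)
	}
	ev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(rfd)}
	if err := syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, rfd, &ev); err != nil {
		panic(err)
	}

	// some other part of the program eventually produces data
	go func() {
		time.Sleep(200 * time.Millisecond)
		syscall.Write(wfd, []byte("hello"))
	}()

	// epoll_wait tells us *when* the non-blocking read is worth retrying
	events := make([]syscall.EpollEvent, 8)
	n, err := syscall.EpollWait(epfd, events, -1) // -1: wait indefinitely
	if err != nil {
		panic(err)
	}
	buf := make([]byte, 64)
	for i := 0; i < n; i++ {
		m, _ := syscall.Read(int(events[i].Fd), buf)
		fmt.Printf("fd %d readable: %q\n", events[i].Fd, buf[:m])
	}
}
```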

I think these terms overlap. For example, the HTTP protocol is synchronous, but an HTTP implementation using non-blocking IO can be asynchronous. Again, a non-blocking API call like read/write/try_lock is synchronous (it immediately gives a response), but the "data handling" is asynchronous.


Good point about non-blocking requiring constant polling, while async can be push-based.
You defined synchronous as receiving an immediate response, but when I google synchronous all the dictionaries define it as 'happening at the same time', not 'immediate response'.
How am I blocked when I send an email but do not expect an answer? I can go mind my own business while waiting for a response.
supercat

In an asynchronous architecture, code asks some entity to do something and is free to do other things while the action gets done; once the action is complete, the entity will typically signal the code in some fashion. A non-blocking architecture will make note of spontaneously-occurring actions which code might be interested in, and allow code to ask what such actions have occurred, but code will only become aware of such actions when it explicitly asks about them. An event-based architecture will affirmatively notify code when events spontaneously occur.

Consider a serial port, from which code will want to receive 1,000 bytes.

In a blocking-read architecture, the code will wait until either 1,000 bytes have arrived or it decides to give up.

In an asynchronous-read architecture, the code will tell the driver it wants 1,000 bytes, and will be notified when 1,000 bytes have arrived.

In a non-blocking architecture, the code may ask at any time how many bytes have arrived, and can read any or all such data when it sees fit, but the only way it can know when all the data has arrived is to ask; if the code wants to find out within a quarter second when the 1000th byte has arrived, it must check every quarter-second or so.

In an event-based architecture, the serial port driver will notify the application any time any data arrives. The driver won't know how many bytes the application wants, so the application must be able to deal with notifications for amounts that are smaller or larger than what the application wants.
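A sketch of the non-blocking (polling) and asynchronous (notify-me) styles side by side; fakePort and its methods are made up for illustration and do not correspond to any real driver API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fakePort stands in for a serial-port driver; it is entirely invented
// for illustration and does not model any real API.
type fakePort struct {
	mu   sync.Mutex
	data []byte
}

// feed simulates bytes trickling in from the wire, 100 at a time.
func (p *fakePort) feed() {
	for i := 0; i < 10; i++ {
		time.Sleep(30 * time.Millisecond)
		p.mu.Lock()
		p.data = append(p.data, make([]byte, 100)...)
		p.mu.Unlock()
	}
}

// bytesAvailable is the non-blocking query: it answers immediately,
// but we only learn anything when we ask.
func (p *fakePort) bytesAvailable() int {
	p.mu.Lock()
	defer p.mu.Unlock()
	return len(p.data)
}

// requestRead is the asynchronous style: tell the driver how much we
// want and get told (via the channel) once it has all arrived.
func (p *fakePort) requestRead(n int) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		for p.bytesAvailable() < n {
			time.Sleep(10 * time.Millisecond)
		}
		close(done)
	}()
	return done
}

func main() {
	port := &fakePort{}
	go port.feed()

	// Asynchronous: ask for 1,000 bytes and carry on; the channel fires later.
	notified := port.requestRead(1000)

	// Non-blocking: poll every quarter second or so until the data is there.
	for port.bytesAvailable() < 1000 {
		fmt.Println("poll:", port.bytesAvailable(), "bytes so far")
		time.Sleep(250 * time.Millisecond)
	}
	fmt.Println("poll: all 1,000 bytes have arrived")

	<-notified
	fmt.Println("async: driver says the 1,000 bytes we asked for are in")
}
```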


user924272

So to answer your first and second questions:

Non-blocking is effectively the same as asynchronous - you make the call, and you'll get a result later, but while that's happening you can do something else. Blocking is the opposite. You wait for the call to return before you continue your journey.

Now Async/Non-blocking code sounds absolutely fantastic, and it is. But I have words of warning. Async/Non-blocking are great when working in constrained environments, such as in a mobile phone... consider limited CPU / memory. It's also good for front-end development, where your code needs to react to a UI widget in some way.

Async is fundamental to how all operating systems need to work - they get shit done for you in the background and wake your code up when they've done what you asked for, and when that call fails, you're told it didn't work either by an exception, or some kind of return code / error object.

At the point when your code asks for something that will take a while to respond, your OS knows it can get busy with other stuff. Your code - a process, thread or equivalent - blocks. Your code is totally oblivious to what else is going on in the OS while it waits for that network connection to be made, or while it waits for that response from an HTTP request, or while it waits for that read/write of a file, and so on. Your code could "simply" be waiting for a mouse click. What was actually going on during that time was your OS seamlessly managing, scheduling and reacting to "events" - things the OS is looking out for, such as managing memory, I/O (keyboard, mouse, disk, network), other tasks, failure recovery, etc.

Operating Systems are frickin' hard-core. They are really good at hiding all of the complicated async / non-blocking stuff from you, the programmer. And that's how most programmers got to where we are today with software. Now that we're hitting CPU limits, people are saying things can be done in parallel to improve performance. This means async / non-blocking seems like a very favourable thing to do, and yes, if your software demands it, I can agree.

If you're writing a back-end web server, then proceed with caution. Remember you can scale horizontally for much cheaper. Netflix / Amazon / Google / Facebook are obvious exceptions to this rule though, purely because it works out cheaper for them to use less hardware.

I'll tell you why async / non-blocking code is a nightmare with back-end systems....

1) It becomes a denial of service on productivity... you have to think MUCH more, and you make a lot of mistakes along the way.

2) Stack traces in reactive code become undecipherable - it's hard to know what called what, when, why and how. Good luck with debugging.

3) You have to think more about how things fail, especially when many things come back in a different order to how you sent them. In the old world, you did one thing at a time.

4) It's harder to test.

5) It's harder to maintain.

6) It's painful. Programming should be a joy and fun. Only masochists like pain. People who write concurrent/reactive frameworks are sadists.

And yes, I've written both sync and async. I prefer synchronous, as 99.99% of back-end applications can get by with this paradigm. Front-end apps need reactive code, without question, and that's always been the way.

Yes, code can be asynchronous, non-blocking AND event-based. The most important thing in programming is to make sure your code works and responds in an acceptable amount of time. Stick to that key principle and you can't go wrong.


** UPDATE ** After playing with Go, and getting my head around channels and goroutines, I have to say that I actually like making my code more concurrent, because the language's constructs take away all the pain the sadist framework writers used to inflict. We have a "safe word" in the world of async processing - and that's "Go!"
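For the curious, a small taste of what that looks like; fetch here is just a stand-in for real work such as an HTTP call:

```go
package main

import (
	"fmt"
	"time"
)

// fetch is a stand-in for real work (an HTTP call, a DB query, ...).
func fetch(url string) string {
	time.Sleep(100 * time.Millisecond)
	return "response from " + url
}

func main() {
	urls := []string{"a.example", "b.example", "c.example"}
	results := make(chan string, len(urls))

	// fan out: each fetch runs concurrently in its own goroutine
	for _, u := range urls {
		go func(u string) { results <- fetch(u) }(u)
	}

	// fan in: collect the replies; the code reads almost like synchronous code
	for range urls {
		fmt.Println(<-results)
	}
}
```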
ewernli

To me, non-blocking means that the execution of an action in a thread does not depend on the execution of other threads; in particular, it does not require a critical section.

Asynchronous means that the execution happens outside the flow of the caller and is potentially deferred. The execution typically occurs in another thread.

Reading concurrent data is non-blocking (no need to lock), yet synchronous. Conversely, writing data concurrently in a synchronous manner is blocking (it requires an exclusive lock). A way to make it non-blocking from the perspective of the main flow is to make the writes asynchronous and defer their execution.

The concept of an event is something else, which roughly speaking means that you are informed when something occurs. If writes are executed asynchronously, an event can be raised to inform other parts of the system once the write has been executed. The other parts will respond to the event. A system can be built solely on events as the only way to communicate between components (think of the actor model), but it need not necessarily be the case.
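A rough Go sketch of that combination (the store type and everything in it are invented for illustration): reads never take a lock, writes are deferred to a single writer goroutine, and an event is raised once each write has actually been applied.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// store keeps its current value in an atomic.Value so that reads are
// non-blocking: no lock is taken, readers never wait for writers.
type store struct {
	current atomic.Value
	writes  chan string // writes are deferred: they go through this channel
	applied chan string // "event": raised once a write has actually happened
}

func newStore() *store {
	s := &store{
		writes:  make(chan string),
		applied: make(chan string, 16),
	}
	s.current.Store("")
	// a single goroutine applies writes one at a time, asynchronously
	// with respect to the callers that requested them
	go func() {
		for v := range s.writes {
			s.current.Store(v)
			s.applied <- v // notify interested parties
		}
	}()
	return s
}

func (s *store) Read() string        { return s.current.Load().(string) } // non-blocking
func (s *store) WriteAsync(v string) { s.writes <- v }                    // deferred

func main() {
	s := newStore()
	s.WriteAsync("hello") // returns as soon as the writer goroutine accepts it
	fmt.Println("read (may still see the old value):", s.Read())
	fmt.Println("event: write applied:", <-s.applied) // react to the completion event
}
```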

The three terms are related, but are different concepts to me. It can be that people use them in a somewhat interchangeable way though.


Matt Mills

Generally, a non-blocking architecture is based on method calls that, while they may execute for a long time on the worker thread, do not block the calling thread. If the calling thread needs to acquire information about or from the task the worker thread is executing, it is up to the calling thread to do that.

An event-based architecture is based on the concept of code being executed in response to events that are fired. The timing of code execution is generally not deterministic, but events may invoke blocking methods; just because a system is event-based does not mean everything it does is not blocking.

Generally, an asynchronous architecture is an event-based, non-blocking architecture.

When an asynchronous call is made, event handlers are registered with the API providing synchronization services, in order to notify the caller that the thing the caller is interested in has happened. The call then immediately returns (non-blocking behavior), and the caller is free to continue execution. When events are fired back to the calling process, they will be handled on some thread in that process.
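A minimal sketch of that shape (startWork and its handler are invented for illustration; the "event" comes back on another goroutine, not the caller's):

```go
package main

import (
	"fmt"
	"time"
)

// startWork registers a handler and returns immediately (non-blocking).
// When the work finishes, the handler is fired on another goroutine,
// i.e. "on some thread in the process", not the caller's.
func startWork(input string, onDone func(result string)) {
	go func() {
		time.Sleep(100 * time.Millisecond) // the long-running part
		onDone("processed " + input)       // the event coming back
	}()
}

func main() {
	done := make(chan struct{})

	startWork("job-1", func(result string) {
		fmt.Println("handler got:", result)
		close(done)
	})

	fmt.Println("caller continues immediately") // printed before the handler runs
	<-done                                      // keep main alive until the event fires
}
```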

It is important to understand whether events will be handled on the same thread or not, as this will affect the non-blocking nature of the execution, but I'm not personally aware of any libraries that do asynchronous execution management on a single thread.

I removed the above paragraph because it's not strictly correct as stated. My intent was to say that even though the operations in the system are non-blocking, such as making calls out to an OS facility and continuing execution, the nature of single-threaded execution means that when events are fired, they will be competing with other processing tasks for compute time on the thread.


Isn't your last paragraph contradicting your statement that "asynchronous architecture is ... non-blocking"?
I guess I didn't do a very good job of addressing the "definitions" part of your question; I'll post an update. But no, the nature of single-threaded execution is that every operation is inherently blocking while it is running, which makes asynchrony even more useful.