
How to articulate the difference between asynchronous and parallel programming?

Many platforms promote asynchrony and parallelism as means for improving responsiveness. I understand the difference generally, but often find it difficult to articulate in my own mind, as well as for others.

I am a workaday programmer and use async & callbacks fairly often. Parallelism feels exotic.

But I feel like they are easily conflated, especially at the language design level. Would love a clear description of how they relate (or don't), and the classes of programs where each is best applied.

I wrote a blog post about the relation between asynchronous and parallel programming: anat-async.blogspot.com/2018/08/…
Parallelism is when things happen simultaneously. Asynchronicity is when you don't bother to wait for the result of an action before continuing. You just go to sleep, and at some point later the result comes, rings your bell, you wake up and continue from there. Asynchronous execution can perfectly well happen serially, in one thread only. (That is pretty much what JavaScript does.)

CodeMan

When you run something asynchronously it means it is non-blocking, you execute it without waiting for it to complete and carry on with other things. Parallelism means to run multiple things at the same time, in parallel. Parallelism works well when you can separate tasks into independent pieces of work.

Take for example rendering frames of a 3D animation. Rendering the animation takes a long time, so if you launched that render from within your animation-editing software, you would make sure it ran asynchronously so that it didn't lock up your UI and you could continue doing other things. Now, each frame of that animation can also be considered an individual task. If we have multiple CPUs/cores or multiple machines available, we can render multiple frames in parallel to speed up the overall workload.
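A minimal Python sketch of that example (render_frame and the frame count are invented stand-ins, not real rendering code): submitting the frames to a process pool returns immediately, so the caller stays unblocked, while the pool renders several frames at once.

```python
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number):
    # Stand-in for an expensive, CPU-bound render of a single frame.
    return sum(i * frame_number for i in range(2_000_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:          # one worker per core by default
        futures = [pool.submit(render_frame, n) for n in range(100)]
        # submit() returns immediately, so a UI thread could keep handling
        # input here while the frames render in parallel in the background.
        results = [f.result() for f in futures]  # collect when actually needed
    print(len(results), "frames rendered")
```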


Let me see if I get this. The parallel tasks of rendering the different frames should be spread out across multiple CPUs/cores. That has nothing to do with the timing of the task completion, or whether that task blocks something else. It just means a bunch of CPUs will do it together and make the result available as if it ran on one super fast CPU. Right?
"To render the animation takes a long time so if you were to launch that render from within your animation editing software you would make sure (...)". What?
For the 3D animation part: first of all, you would NEVER generate the frames of a 3D-graphics program on the CPU - any sane person would immediately suggest using the GPU. Secondly, if we did this anyway (highly discouraged), we would use a timer to measure how many frames we can render; otherwise we could just end up building a backlog of unfinished render tasks. But your point is perfectly valid for most 2D-rendering applications, which render per user-input event.
Asynchronous and non-blocking are different paradigms.
igon

I believe the main distinction is between concurrency and parallelism.

Async and callbacks are generally a way (a tool or mechanism) to express concurrency, i.e. a set of entities possibly talking to each other and sharing resources. In the async/callback case, communication is implicit, while sharing of resources is optional (consider RMI, where results are computed on a remote machine). As correctly noted, this is usually done with responsiveness in mind: not waiting for long-latency events.

Parallel programming usually has throughput as its main objective, while latency, i.e. the completion time for a single element, might be worse than in an equivalent sequential program.

To better understand the distinction between concurrency and parallelism I am going to quote from Probabilistic models for concurrency by Daniele Varacca, a good set of notes on the theory of concurrency:

A model of computation is a model for concurrency when it is able to represent systems as composed of independent autonomous components, possibly communicating with each other. The notion of concurrency should not be confused with the notion of parallelism. Parallel computations usually involve a central control which distributes the work among several processors. In concurrency we stress the independence of the components, and the fact that they communicate with each other. Parallelism is like ancient Egypt, where the Pharaoh decides and the slaves work. Concurrency is like modern Italy, where everybody does what they want, and all use mobile phones.

In conclusion, parallel programming is something of a special case of concurrency where separate entities collaborate to obtain high performance and throughput (generally).

Async and callbacks are just a mechanism that allows the programmer to express concurrency. Note that well-known parallel programming design patterns such as master/worker or map/reduce are implemented by frameworks that use such lower-level mechanisms (async) to implement more complex, centralized interactions.
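As a loose sketch of that last point in Python (worker, on_done and the work items are invented for illustration): a centralized master/worker pattern can be assembled from the lower-level async primitives of submit-plus-callback.

```python
from concurrent.futures import ThreadPoolExecutor

def worker(item):
    # Each worker computes one independent piece of work.
    return item * item

results = {}

def on_done(future, item):
    # The callback: the low-level async mechanism notifying the master.
    results[item] = future.result()

with ThreadPoolExecutor(max_workers=4) as master:
    for item in range(10):
        fut = master.submit(worker, item)
        # Bind the current item so each callback reports the right slot.
        fut.add_done_callback(lambda f, item=item: on_done(f, item))

print(results)
```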


JustATrick

This article explains it very well: http://urda.cc/blog/2010/10/04/asynchronous-versus-parallel-programming

It has this about asynchronous programming:

Asynchronous calls are used to prevent “blocking” within an application. [Such a] call will spin-off in an already existing thread (such as an I/O thread) and do its task when it can.

this about parallel programming:

In parallel programming you still break up work or tasks, but the key difference is that you spin up new threads for each chunk of work

and this in summary:

asynchronous calls will use threads already in use by the system, while parallel programming requires the developer to break the work up, and to spin up and tear down the threads needed.


This article > all the answers here (except for this one of course!)
Thanks for the link. So... in general, use async calls when communicating from the UI to the server (or from a client to a web service). Use parallel threading on the server or web service end, as well as in your business layer.
I must disagree; it's irrelevant whether new threads are spun up or not. The simplest #pragma omp parallel for normally uses a thread pool: OpenMP spins up a thread per core at startup and then reuses them for every parallel region. I'd say it's more like "all the asynchronous tasks can run on the same thread", avoiding not just spinning up new threads but using threading at all. For example, JavaScript is completely single-threaded yet thoroughly asynchronous.
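That last claim is easy to demonstrate in Python (a toy of my own, assuming nothing beyond the standard library): all three asynchronous tasks below report the same thread id.

```python
import asyncio
import threading

async def task(name):
    # The await yields control, but no new thread is ever created.
    await asyncio.sleep(0.1)
    print(f"task {name} ran on thread {threading.get_ident()}")

async def main():
    # Three asynchronous tasks, interleaved on one and the same thread.
    await asyncio.gather(task("A"), task("B"), task("C"))

asyncio.run(main())
```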
Andrew Cooper

My basic understanding is:

Asynchronous programming solves the problem of waiting around for an expensive operation to complete before you can do anything else. If you can get other stuff done while you're waiting for the operation to complete, then that's a good thing. Example: keeping a UI running while you go and retrieve more data from a web service.

Parallel programming is related but is more concerned with breaking a large task into smaller chunks that can be computed at the same time. The results of the smaller chunks can then be combined to produce the overall result. Example: ray-tracing where the colour of individual pixels is essentially independent.

It's probably more complicated than that, but I think that's the basic distinction.
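A minimal Python sketch of that first example (fetch_data and the delays are placeholders, not a real web-service call): kick off the slow retrieval, keep the "UI" busy in the meantime, and pick up the result when it arrives.

```python
import asyncio

async def fetch_data():
    # Placeholder for a slow call to a web service.
    await asyncio.sleep(2)
    return {"rows": 42}

async def main():
    job = asyncio.create_task(fetch_data())   # start the expensive operation
    for tick in range(4):                     # the "UI" keeps running meanwhile
        print(f"UI still responsive, tick {tick}")
        await asyncio.sleep(0.5)
    data = await job                          # pick up the result when ready
    print("got", data)

asyncio.run(main())
```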


This is nicely put, yet it is quite wrong. Like asynchronicity, parallelism also allows control flow to continue without waiting on actions to complete. The main difference is that parallelism depends on hardware.
serkan

Async: do this by yourself somewhere else and notify me when you complete (callback). In the meantime I can continue to do my own thing.

https://i.stack.imgur.com/3sSch.jpg

Parallel: hire as many guys (threads) as you wish, split the job among them so it completes quicker, and let me know (callback) when you are done. In the meantime I might continue with my other stuff.

https://i.stack.imgur.com/3PqVx.jpg

The main difference is that parallelism mostly depends on hardware.


Leonard H. Martin

I tend to think of the difference in these terms:

Asynchronous: Go away and do this task; when you're finished, come back, tell me, and bring the results. I'll be getting on with other things in the meantime.

Parallel: I want you to do this task. If it makes it easier, get some folks in to help. This is urgent though, so I'll wait here until you come back with the results. I can do nothing else until you come back.

Of course an asynchronous task might make use of parallelism, but the differentiation - to my mind at least - is whether you get on with other things while the operation is being carried out or if you stop everything completely until the results are in.


Richard

It is a question of order of execution.

If A is asynchronous with B, then I cannot predict beforehand when subparts of A will happen with respect to subparts of B.

If A is parallel with B, then things in A are happening at the same time as things in B. However, an order of execution may still be defined.

Perhaps the difficulty is that the word asynchronous is equivocal.
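A small Python illustration of that unpredictability (task names and timings invented for the example): run it several times and the interleaving of A's and B's subparts changes.

```python
import asyncio
import random

async def subparts(name):
    for step in range(3):
        # Random pauses make the interleaving of A and B unpredictable.
        await asyncio.sleep(random.random() * 0.1)
        print(f"{name}, step {step}")

async def main():
    # A is asynchronous with B: no fixed order between their subparts.
    await asyncio.gather(subparts("A"), subparts("B"))

asyncio.run(main())
```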

I execute an asynchronous task when I tell my butler to run to the store for more wine and cheese, and then forget about him and work on my novel until he knocks on the study door again. Parallelism is happening here, but the butler and I are engaged in fundamentally different tasks and are of different social classes, so we don't apply that label here.

My team of maids is working in parallel when each of them is washing a different window.

My race car support team is asynchronously parallel in that each team works on a different tire and they don't need to communicate with each other or manage shared resources while they do their job.

My football (aka soccer) team does parallel work as each player independently processes information about the field and moves about on it, but they are not fully asynchronous because they must communicate and respond to the communication of others.

My marching band is also parallel as each player reads music and controls their instrument, but they are highly synchronous: they play and march in time to each other.

A cammed Gatling gun could be considered parallel, but everything is 100% synchronous, so it is as though one process is moving forward.


Jameel Moideen

Why asynchronous?

Today's applications are growing more and more connected, and they often involve long-running tasks or blocking operations such as network I/O or database operations. So it's very important to hide the latency of these operations by starting them in the background and returning to the user interface as quickly as possible. This is where asynchrony comes into the picture: responsiveness.

Why parallel programming?

Today's data sets are growing larger and computations are growing more complex. So it's very important to reduce the execution time of these CPU-bound operations, in this case by dividing the workload into chunks and then executing those chunks simultaneously. We can call this "parallel". Obviously it gives our application higher performance.


Pang

Asynchronous: Let's say you are the point of contact for your client and you need to be responsive, i.e. you need to share the status, the complexity of the operation, the resources required, etc. whenever asked. Now you have a time-consuming operation to do and hence cannot take it up yourself, because you need to be responsive to the client 24/7. Hence, you delegate the time-consuming operation to someone else so that you can stay responsive. This is asynchronous.

Parallel programming: Let's say you have a task to read, say, 100 lines from a text file, and reading one line takes 1 second. Hence, you'll require 100 seconds to read the text file. Now you're worried that the client must wait 100 seconds for the operation to finish. Hence you create 9 more clones and make each of them read 10 lines from the text file. Now the time taken is only 10 seconds to read 100 lines. Hence you have better performance.

To sum up, asynchronous coding is done to achieve responsiveness and parallel programming is done for performance.
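A rough Python sketch of the ten-clones idea (the lines themselves and the per-line work are stand-ins for the example's 100-line file): ten workers each process a tenth of the lines.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the 100-line text file in the example.
lines = [f"line {i}" for i in range(100)]

def process_chunk(chunk):
    # Each "clone" handles its own 10 lines.
    return [line.upper() for line in chunk]

chunks = [lines[i * 10:(i + 1) * 10] for i in range(10)]  # 10 lines per clone

with ThreadPoolExecutor(max_workers=10) as pool:
    processed = list(pool.map(process_chunk, chunks))

print(sum(len(c) for c in processed), "lines processed")
```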


Aditya Bokade

Asynchronous: Running a method or task in the background, without blocking. It may not necessarily run on a separate thread. It uses context switching / time scheduling.

Parallel tasks: Each task runs in parallel, on its own core, so it does not rely on context switching / time scheduling to make progress.
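One way to see that distinction in Python (a toy benchmark of my own; absolute numbers will vary by machine): for pure CPU work, threads in one interpreter make progress by time-slicing, while processes run genuinely in parallel on separate cores.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn_cpu(n):
    # Pure CPU work: nothing to wait on, nothing to overlap with I/O.
    return sum(i * i for i in range(n))

def timed(pool_cls):
    start = time.perf_counter()
    with pool_cls(max_workers=4) as pool:
        list(pool.map(burn_cpu, [5_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Threads share one interpreter and take turns (time-slicing);
    # processes run on separate cores, genuinely in parallel.
    print("threads:  ", timed(ThreadPoolExecutor))
    print("processes:", timed(ProcessPoolExecutor))
```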


Pang

I came here fairly comfortable with the two concepts, but with something not clear to me about them.

After reading through some of the answers, I think I have a correct and helpful metaphor to describe the difference.

If you think of your individual lines of code as separate but ordered playing cards (stop me if I am explaining how old-school punch cards work), then for each separate procedure written you will have a unique stack of cards (don't copy & paste!), and the difference between running code normally and running it asynchronously depends on whether you care or not.

When you run the code, you hand the OS a set of single operations (that your compiler or interpreter broke your "higher-level" code into) to be passed to the processor. With one processor, only one line of code can be executed at any one time. So, in order to accomplish the illusion of running multiple processes at the same time, the OS uses a technique in which it sends the processor only a few lines from a given process at a time, switching between all the processes as it sees fit. The result is multiple processes showing progress to the end user at what seems to be the same time.

For our metaphor, the relationship is that the OS always shuffles the cards before sending them to the processor. If your stack of cards doesn't depend on another stack, you don't notice that your stack stopped getting selected from while another stack became active. So if you don't care, it doesn't matter.

However, if you do care (e.g., there are multiple processes - or stacks of cards - that do depend on each other), then the OS's shuffling will screw up your results.

Writing asynchronous code requires handling the dependencies on execution order regardless of what that ordering ends up being. This is why constructs like "callbacks" are used. They say to the processor, "the next thing to do is tell the other stack what we did". By using such tools, you can be assured that the other stack gets notified before it allows the OS to run any more of its instructions. ("If called_back == false: send(no_operation)" - not sure if this is actually how it is implemented, but logically, I think it is consistent.)

For parallel processes, the difference is that you have two stacks that don't care about each other and two workers to process them. At the end of the day, you may need to combine the results from the two stacks, which is then a matter of synchronization, but for execution you again don't care.

Not sure if this helps, but I always find multiple explanations helpful. Also, note that asynchronous execution is not constrained to an individual computer and its processors. Generally speaking, it deals with time, or (even more generally speaking) an order of events. So if you send dependent stack A to network node X and its coupled stack B to Y, the correct asynchronous code should be able to account for the situation as if it were running locally on your laptop.


Pang

Generally, there are only two ways you can do more than one thing at a time. One is asynchronous, the other is parallel.

At a high level, the popular server NGINX and the well-known Python library Tornado both fully exploit the asynchronous paradigm: a single-threaded server can simultaneously serve thousands of clients (using an I/O loop and callbacks). The asynchronous programming paradigm can be implemented via ECF (exceptional control flow). So asynchronous code doesn't really do things simultaneously, but for I/O-bound work asynchrony can really boost performance.

The parallel paradigm refers to multithreading and multiprocessing. These can fully utilize multi-core processors and do things truly simultaneously.
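A toy Python sketch of that single-thread-serving-thousands idea (client count and delay are invented; this is not Tornado's or NGINX's actual API): one thread interleaves a thousand waiting "clients" in about one second of wall time.

```python
import asyncio

async def handle_client(client_id):
    # Simulate waiting on a slow client; while this one waits,
    # the single thread is free to serve all the others.
    await asyncio.sleep(1)
    return f"served client {client_id}"

async def main():
    # A thousand concurrent clients, one thread, ~1 second of wall time.
    results = await asyncio.gather(*(handle_client(i) for i in range(1000)))
    print(len(results), "clients served")

asyncio.run(main())
```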


Vyshnav Ramesh Thrissur

Summary of all the above answers

parallel computing:

▪ solves the throughput issue; it is concerned with breaking a large task into smaller chunks

▪ is machine related (multiple machines/cores/CPUs/processors are needed), e.g. master/slave, map/reduce.

Parallel computations usually involve a central control which distributes the work among several processors

asynchronous:

▪ solves the latency issue, i.e. the problem of 'waiting around' for an expensive operation to complete before you can do anything else

▪ is thread related, though multiple threads are not strictly needed (as noted above, JavaScript is single-threaded yet asynchronous)

Threading (using Thread, Runnable, Executor) is one fundamental way to perform asynchronous operations in Java.