|
Last night there was a clear patch and I knew roughly where to look; unfortunately, it was right across the street, with light pollution from house lights, street lights, etc., and only just clearing some rooftops. To the naked eye there was just a vague small smudge, like a short contrail, but through binoculars it was clearly the comet. It has the (visually) longest tail of any comet I have seen, though it's not as bright as Hale-Bopp was in 1997. It is well worth searching for, but I'd strongly recommend using binoculars to see it once you have located it.
|
|
|
|
|
So, for an instructional article, I'm preparing code that can enqueue work items to a limited number of threads. If all the threads are busy and no more thread creation is allowed (say you have a 3-thread limit), then the next message is enqueued to one of the already busy threads, to be handled when that thread is done with what it's currently processing. It schedules among the busy threads using a round-robin technique.
The whole thing works using message passing and message queues. That's how the threads communicate with each other. You can post messages to each of the threads.
The trouble with it is that the complexity snowballs. All of a sudden I need to sync the UI, which requires a whole separate layer. And then there's the inter-thread communication, which is already complicated.
There's only so much I can fit into an article without overwhelming the reader, and to produce anything approaching a real world example requires so much complicated code that it's just silly.
Oh you did this over here? Well you need to synchronize over there. And because you did that, you need to handle it over there too, etc. It's a mess.
I really think the approach traditional computers take to preemptive multithreading is an anti-pattern. It feels like every anti-pattern I've ever encountered: The more code you need to make it work, the more code you need to make it work! You end up putting more work into it just to get to the point where you can put more work into it, and everything feels like a workaround.
Real programmers use butterflies
|
|
|
|
|
Well, I'm a longstanding member of the choir that you're preaching to.
If it's a thread pool, I have them share a work queue. You seem to imply that you queue messages against threads even in this case, but I doubt it. I only queue messages on a thread when it's the only one handling that kind of work.
|
|
|
|
|
What I'm doing is queuing up tasks. In the demo, each time a user clicks a button (to queue up a new task), the code looks for an available worker. If it doesn't have one, and it can create a new one (and consequently a new thread), then it will. Otherwise, if the maximum number of workers has already been created, it will choose one of the busy workers to handle the next task. All of the workers do the same kind of task. Think of it like a server application that accepts a limited number of incoming requests into a pool of workers, but then queues further requests among the busy workers once the limit is exceeded. One of them picks a queued request up as soon as it's finished with what it's doing. Make sense? I hope it does!
ETA: Wait, I think I see what you mean by using one queue. I'll have to think on this.
Real programmers use butterflies
|
|
|
|
|
You said you round-robin, which sounds fine for what you're doing. I use work queues because items aren't lost if a thread fails, there can be queues for different priorities, and some work items may take longer than others. None of these are issues for your demo, though.
|
|
|
|
|
Well it turns out you were right, and I could get away with a single shared server queue in this case.
Thanks for that. I guess I was just following a pattern without thinking hard enough about it.
I use queues, but not priority queues, because the code is already complicated enough. I could probably implement a thread-safe priority queue, but it's not something I want to do.
Real programmers use butterflies
|
|
|
|
|
Just to clarify: I don't use a priority queue, but a separate queue for each priority level.
|
|
|
|
|
Fair enough.
Real programmers use butterflies
|
|
|
|
|
These issues you're discussing related to multi-threaded programming really are a problem.
It's amazing how multi-threaded programming originally seemed like a panacea but just became another problem.
Because it is so difficult, a new approach has arisen: the Actor Model.
Basically, you say you have some work that should be done and you set an Actor to doing it. The work is done concurrently, and the Actor lets you know when it is done.
In this way, the threading part is abstracted away.
One of the main implementations of The Actor Model is called Akka (originally implemented on the JVM).
But now there is a .NET Version[^].
That site has a pretty good overview explanation of it all. The Actor Model really does fix a lot of what is wrong with the multi-threaded world.
Check it out and see what you think.
|
|
|
|
|
That isn't the usual way of doing things; the common approach is:
1. Create pool of threads at startup (3, 5, whatever).
2. Create two queues at startup (a work-queue and a results-queue, implemented as singly- or doubly-linked lists, not arrays).
3. Each thread takes the next work item from work-queue, working to completion, and posting the results to results-queue.
4. The main thread simply enqueues work items to the work-queue, and waits on the results-queue (use a semaphore here) for results.
There's no blocking involved other than the wait-on-semaphore (and wait-on-mutex for queue modification).
|
|
|
|
|
That's actually what I ended up doing. Greg Utas made a similar comment to yours and I had an aha moment. Before that, I had engineering tunnel vision: I was stuck on doing it the way I had imagined, but there was an easier way. Still, the code, while much simpler, is still ridiculous. God bless the Task framework - of course, using it this time would have defeated the purpose of what I was doing.
Real programmers use butterflies
|
|
|
|
|
Yeah, multi-threading is a pain in the rear. Concurrent processing and parallel processing in general is a pain in the rear, so it's usually best to go with some common idiom.
Every single time I've invented my own method of managing threads/tasks/processes, I've regretted it.
Shared-memory concurrency is painful; my best experiences with parallel computations were on Erlang. Using those idioms in my projects in other languages made my life a great deal easier, and now I no longer worry about parallel computations because I skip the shared-memory approach altogether.
|
|
|
|
|
How multithreading is used matters a lot: the language, the instructions, and the functions. For example, in C++ on Win32 (Visual Studio IDE), using an OpenMP directive on a loop, the runtime first uses a single thread for setup, such as determining the number of cores available and the number of threads needed. Inside that loop, the threads do not pick up iterations in sequential order; the order looks random, although it isn't truly random, because it depends on how the threads are scheduled.
|
|
|
|
|
Greg Utas wrote: Well, I'm a longstanding member of the choir that you're preaching to.
I agree. That's why, if I had to do anything that really did heavy multi-threading (as a service or back-end type thing, not just in a WinForms app), I would use the new thing: Akka, which implements the Actor Model[^].
I've actually used it one time and it is quite amazing once you get past the learning curve.
There's more on that landing page, but read this quick summary; it really is as good as it sounds.
Akka site says: Actor Model
The Actor Model provides a higher level of abstraction for writing concurrent and distributed systems. It alleviates the developer from having to deal with explicit locking and thread management, making it easier to write correct concurrent and parallel systems.
There are some nice simple diagrams there that show how it works.
|
|
|
|
|
And what if it is not in .Net?
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
|
I'm guessing I've open-sourced something similar in C++.
All these concepts are used in the call servers that run in AT&T's wireless network.
|
|
|
|
|
Been there, done that. And had my 3D stuff running in the background. UI and the 3D engine scripted with XAML.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Yeah, I mean, I can do it in the real world, but it's even harder to do it simply enough that it can be used to instruct. That's part of what my rant was about.
Real programmers use butterflies
|
|
|
|
|
Instructing? I have done a bit of that, but you would not like my methods.
Edit: I think some of my instructors would also have loved to send me to the ghost guard. Twenty times. (What Did You Say, Sergeant? - YouTube[^])
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
modified 15-Jul-20 18:35pm.
|
|
|
|
|
Do they involve beating the stupid out of students?
Real programmers use butterflies
|
|
|
|
|
Only once. The idiot started to turn around to me with a loaded machine pistol in his hands. Safety set to 'A', as in 'Amen'. I edited the last post. The little video is funnier than this story.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
Yeah, that sounds wrong. One shared queue tends to work best.
|
|
|
|
|
I ended up changing it so it uses one queue. I just got lost in engineering tunnel vision for a bit, though eliminating the individual pipelines means I lose control over which worker a message goes to.
Real programmers use butterflies
|
|
|
|
|