Single-threading is also what allows you to stay up all night writing the 8-page essay that's due at 6:00am, what lets you drive for hours on end, what remembers protocol during a crisis. Not detracting from the OP's point at all; it's just that single-threading doesn't always have to be pleasant. One of its advantages is powering you through unpleasantness and getting done what needs to be done. Sometimes when we think we are 'multitasking,' we're just looking for ways to avoid the problem.
Every one of your examples is something I find really enjoyable. As someone who is a terribly scatter-brained procrastinator, the 6AM deadline is clarifying. Realizing at 11:30pm that I haven't started the essay is (or was, long ago) a jolt of wakefulness and focus. The time between 11:45pm and 5:45am flies by in a blur.

Driving 1000mi in a day: 18+ hours of focus. Keep the speed high enough above the limit to make decent time over ground, but avoid the risk of a sneaky traffic cop. Take advantage of lulls in traffic and long sight lines (or tight, windy sections) to increase your average speed. Eat just enough to keep your energy up, but little enough to stay a bit hungry; a little hunger sharpens focus. Drink enough water to not be totally dehydrated, but little enough that your bathroom breaks coincide with fuel stops.

Pager goes off at 3AM. Critical alert: connection pools full, database CPU at 100%, p99 response times equal to the configured timeout, circuit breakers tripping. The urgency gives life some meaning. You were groggy and sleepy a minute ago, and now you're blasted wide awake, throttle firewalled.

Don't threaten me with a good time ;)
>" Sometimes when we think we are 'multitasking,' we're just looking for ways to avoid the problem."
Which can be the correct course of action. If I'm stuck trying to figure out how to solve some hard problem, it is a very good idea to switch to something else for a while; magically, the solution comes to me later, since the brain still manages to do something in the background. Alternatively, if I have to do a whole lot of monotonous, unrewarding work for whatever reason, I would go nuts trying to finish it in one sitting (assuming it is long enough).
Single-threading is easy and hard at the same time. I program MCUs with only one core and no real hardware support for preemptive multitasking. I sometimes have to resort to interrupts to get somewhat of a multitasking effect, but on the other hand my code runs exactly as I wrote it. It makes you think more about the problem. I see many programs nowadays just throwing threads, coroutines, and memory at problems until the speed is acceptable. (Sorry for my English, I'm not a native speaker; and if I use AI to improve the wording, I get complaints about using AI...)
I've read that (the first?) preemptive multitasking was implemented in the Apollo lander, to leave more processing power for the more critical sensors. No one thought of it in such general terms, though.
This metaphor totally gets muddied once you consider that some of the most optimized programs run on a single thread in an event loop. Communication between threads is expensive; epoll-ing many IO streams is less so. Not quite sure what implications this has in life, but you could probably ascribe some wisdom to it.
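To make the single-thread-plus-epoll point concrete, here's a minimal sketch using Python's stdlib `selectors` module (which picks epoll on Linux). The socket pairs and messages are made up for illustration; the point is that one thread multiplexes several streams and handles whichever is ready, with no inter-thread communication at all.

```python
# One thread, many streams: react to whichever becomes readable first.
import selectors
import socket

sel = selectors.DefaultSelector()

# Two independent "connections", simulated here with socket pairs.
pairs = [socket.socketpair() for _ in range(2)]
results = {}

for i, (reader, writer) in enumerate(pairs):
    reader.setblocking(False)
    sel.register(reader, selectors.EVENT_READ, data=i)
    writer.send(f"msg-{i}".encode())  # data arrives "from the network"

# Single loop, no threads: drain each stream as it becomes ready.
while len(results) < len(pairs):
    for key, _ in sel.select(timeout=1):
        i = key.data
        results[i] = key.fileobj.recv(1024).decode()
        sel.unregister(key.fileobj)
        key.fileobj.close()

for _, writer in pairs:
    writer.close()

print(results)  # {0: 'msg-0', 1: 'msg-1'}
```

Swap the socket pairs for real TCP connections and this is roughly the shape of every event-loop server.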
I have a 16 core M4 Max and running at a fraction of the potential maximum speed just isn't very optimal on modern CPUs like that.
Threading is hard, especially when threads share a lot of state. Memory management with multiple threads sharing data is hard and ideally minimized. What's optimal very much depends on the type of workload as well: not all workloads are IO-bound, or require sharing a lot of state.
Using threads for blocking IO on server requests was popular 20 years ago in e.g. Java. But these days non-blocking IO is preferred in both single- and multi-threaded systems. E.g. Elasticsearch uses threading and non-blocking IO across CPU cores and cluster nodes to provide horizontal scalability for indexing. It tends to stick to just one indexing thread per CPU core, of course, but it has additional thread pools and generally more threads than CPU cores in total.
A lot of CPU-bound workloads that also do some IO benefit from threading, by letting other threads progress while one is waiting on IO. And if the amount of context switching can be limited, that can be OK. For loads that are embarrassingly parallel with little or no IO and very limited context sharing, one thread per CPU core tends to be optimal. It's really when you start having more threads than cores that context switching becomes a factor. What's optimal there depends very much on how much shared state there is and whether you are IO- or CPU-limited.
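The "let other threads progress while one waits on IO" point is easy to demonstrate. This is a rough sketch, with `time.sleep` standing in for a blocking IO call (a real network fetch would behave the same way for this purpose): four simulated waits overlap across a small thread pool, so total wall time approaches the longest single wait rather than the sum.

```python
# Overlapping blocking IO waits with a thread pool.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i: int) -> int:
    time.sleep(0.2)  # stand-in for a blocking IO call
    return i * i

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))
elapsed = time.monotonic() - start

print(results)  # [0, 1, 4, 9]
# elapsed is ~0.2s here, versus ~0.8s if run sequentially:
print(elapsed < 0.7)
```

For CPU-bound work the same pool buys you nothing (in Python, worse than nothing, because of the GIL), which is exactly the workload-dependence being described.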
In general, concurrency and parallelism tend to be harder in languages that predate widespread threading and multi-core CPUs and that lack good primitives for this. Python only recently started addressing the GIL obstacle, and a big motivation for creating Rust was just how hard doing this stuff is in C/C++ without creating a lot of deadlocks, crash bugs, and security issues. It's not impossible with the right frameworks, a lot of skill, and discipline, of course. But Rust is earning a well-deserved reputation for being fast and safe for this kind of thing. Likewise, functional languages like Elixir are more naturally suited to running on systems with lots of CPUs and threads.
> I have a 16 core M4 Max and running at a fraction of the potential maximum speed just isn't very optimal on modern CPUs like that.
To further muddy the waters: if your process is not bottlenecked on the CPU, a modern chip might be more efficient in terms of power draw (directly, and through secondary effects on cooling needs) when running at a fraction of its top speed. Moving at a low clock, but fast enough not to become the bottleneck relative to other factors, can beat bursting to full speed for a bit and then waiting.
Of course, there are a bunch of chip-specific optimizations here if you like complexity. Some chips are better off running all cores slowly, and others, which can completely power down idle cores, are better off running a few cores faster, to optimize power use while getting the same job done in the same amount of wall-clock time.
>"just how hard doing this stuff is in C/C++ without creating a lot of dead locks, crash bugs, and security issues"
In my opinion this is mostly a problem for novices, or for people who only know how to program inside a very limited and restrictive environment. I write multithreaded business backends in modern C++ that accept outside HTTP requests for processing and do some heavy math lifting. Requests expected to take a short time are processed immediately; long-running ones go to separate thread pools, which also manage throttling of background tasks, etc.

I did not find any of it particularly hard. All my "dangerous" stuff is centralized, was debugged to death years ago, and is used and reused across multiple products. Stuff runs for years and years without a single hiccup. To me it is a non-issue.
I do realize that the situation is much tougher for those who write OS kernels, but that is a very specialized skill, and they would know better what to do.
A key difference is that it sounds like you need to create and otherwise interact with that sort of code on a regular basis.
Most devs spend most of their time, all of it even, on tasks that are either naturally sequential or don't benefit enough from threading over the safer option of multiple independent processes. So when they do come across a problem that is inherently parallelizable and needs the highest performance, it is not a familiar situation for them. Familiarity can make some rather complex processes feel simple.
The same can be said for event loop driven concurrency, for those who don't work that way often the collection of potential race conditions there can feel daunting so they appreciate their chosen platform holding their hand a bit.
>"holding their hand a bit"
Hand-holding is useful until it isn't. It often comes with big trade-offs.
I think this is a bit like a factory:
- you get a queue as input (a belt);
- you process it;
- you output a queue (also a belt);
So you're doing one thing, over and over, synchronously, blocking in between.
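The factory picture above maps neatly onto a producer/consumer sketch with two queues. This is a toy illustration, not anyone's production design: one worker, one belt in, one belt out, items processed strictly in order, blocking in between.

```python
# One worker between two belts: take, process, put, repeat.
import queue
import threading

belt_in = queue.Queue()   # input belt
belt_out = queue.Queue()  # output belt

def worker():
    while True:
        item = belt_in.get()    # block until something arrives
        if item is None:        # sentinel: the shift is over
            break
        belt_out.put(item * 2)  # "process" exactly one thing at a time

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    belt_in.put(i)
belt_in.put(None)
t.join()

out = [belt_out.get() for _ in range(3)]
print(out)  # [0, 2, 4]
```

Scaling the factory is then a matter of adding more workers on the same belts, which is where the synchronization headaches discussed elsewhere in this thread come in.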
Event loops are great, but composition is hard. The OS (e.g. Linux) lets you feed custom event types into its event-loop primitives via eventfd(), but the performance is worse than if you built the loop yourself.

The poor performance leads to a proliferation of everyone building their own event loops, which don't mesh together, which in turn leads to people standardizing on large async frameworks like tokio.
Serialized execution flow and large work batches seem to be just as good for humans as for machines.
Context switching is expensive in any domain once you look at it from an information theory perspective. Communication of the information almost always costs more than computation over the information. Large batches solve this.
If I'm in my kitchen and I've got everything I need to make 2 lbs of taco meat, I also have nearly everything I need to make 4 lbs. From a process perspective it's identical. The additional amount of time required is sub-linear in this situation. There's probably enough capacity for 6-7 lbs before I saturate the capabilities of my residential equipment.
I love to single thread but nobody else seems to. A typical situation would be making a sandwich for one kid while two others are trying to talk to me at the same time, each rising in volume to cut through the noise of the other. Partner explains in roundabout way that something is needed tomorrow. I wonder what to do with that information and wish it had been communicated in fewer words while making the sandwich. Then the phone rings.
A lot of the time work has this character also.
then you tell the kid to hold on, i’m making a sandwich give me a minute and teach them to wait a minute, tell your partner to sit together for breakfast or lunch to go over complicated thing, and don’t answer the phone, let the machine get it if you’re doing something. if it’s important right here, right now, then they can call back or you can call back after you’re done doing the thing.
modern society teaches us to be available to everything all at the same time, when we really need to learn how to slow down and refocus our thoughts on one thing at a time.
Use a multithreaded blocking approach. Much nicer than async.
I loved reading this article start to finish. I really like the way the author has explained it, and I believe it's a tech-savvy explanation of mindfulness.
I loved reading it a little bit at the start, then I switched to reading a little bit in the middle, and then continued from where I was at the start.
YMMV.
I’m currently diving into Python’s asyncio
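Since asyncio came up: here's a tiny sketch of the single-threaded concurrency it gives you. One event loop interleaves tasks at their `await` points instead of spawning threads, so the two (made-up) jobs below overlap their waits on a single thread.

```python
# asyncio: cooperative multitasking on one thread.
import asyncio
import time

async def job(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # yields to the event loop; nothing blocks
    return name

async def main() -> list:
    # Both jobs run concurrently on one thread; their sleeps overlap.
    return await asyncio.gather(job("a", 0.2), job("b", 0.2))

start = time.monotonic()
results = asyncio.run(main())
elapsed = time.monotonic() - start

print(results)  # ['a', 'b']
# ~0.2s total, not 0.2 + 0.2 sequential:
print(elapsed < 0.35)
```

It's the same event-loop idea discussed upthread, just with the loop hidden behind `async`/`await`.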
Blocking is lying in bed waiting for my paycheck before I can get up.
Multi-threading is handing off a simple task to someone else who will do it slower and need constant explanation, so that it looks like I'm less busy.
Single-threading is writing and sending an email before returning to my work.
> The human brain is not a state-of-the-art multi-core processor. It is closer to an old single-core chip from the 90s.
That is plain bullshit. Make your case, but don't mix biology with it.
Studies have shown again and again how detrimental "multitasking" is to our cognitive abilities, which is the author's point.
Yes - for a certain narrow definition of "task" - but the reality is much more nuanced and comparing brains to single core processors is oversimplifying to the point of inaccuracy. A human brain has tons of "subsystems", and a given task might use some but not all of them. So some combinations of task are perfectly compatible and do not entail performance drop, while others are fairly impossible to do at the same time. Most people have no problem walking and talking at the same time - but talking and typing different things at the same time invariably results in crossed wires.
If I were to offer a tech analogy - the human brain is like an Amiga, with many specialized helper chips coordinated by a central executive which can sequentially multitask but offers no memory isolation between processes...
Can you share those studies?
https://www.apa.org/topics/research/multitasking
https://www.pnas.org/doi/abs/10.1073/pnas.0903620106
https://pubmed.ncbi.nlm.nih.gov/12710835/
https://pmc.ncbi.nlm.nih.gov/articles/PMC4174517/
https://pmc.ncbi.nlm.nih.gov/articles/PMC12172848/
https://otl.du.edu/plan-a-course/teaching-resources/the-mult...
1. https://www.apa.org/topics/research/multitasking - Not a study. Focuses on productivity (not health or perceived well-being); supports the idea that the brain has dedicated structures for multitasking.
2. https://www.pnas.org/doi/abs/10.1073/pnas.0903620106 - It's about media multitasking, like watching multiple videos at the same time. Irrelevant.
3. https://pubmed.ncbi.nlm.nih.gov/12710835/ - About driving. Driving itself is already a multitasking effort.
4. https://pmc.ncbi.nlm.nih.gov/articles/PMC4174517/ - Media multitasking again. Irrelevant.
5. https://pmc.ncbi.nlm.nih.gov/articles/PMC12172848/ - The study itself admits it has limitations; it did not adjust for participants' practice levels.
6. https://otl.du.edu/plan-a-course/teaching-resources/the-mult... - Not a study. Reference links broken. Useless.
7. https://pmc.ncbi.nlm.nih.gov/articles/PMC11543232/ - Editorial article, not a study.
8. https://ics.uci.edu/~gmark/chi08-mark.pdf - About interruptions, only deals with unplanned multi-tasking (in which there are interruptions).
---
I am aware that some kinds of multitasking carry a cognitive load. That does not translate to all kinds of multitasking, though.
To say that "the brain is like a computer, single thread" is misleading. There are scenarios in which the brain exceeds in multi-tasking (playing instruments like drums, playing games, etc), and there is plenty of evidence that we're tuned for it in all kinds of ways (but not all of them).
Furthermore, I'm not arguing that we should multitask. I just think the metaphor and the "brain is single-threaded" idea are both wrong and dumb.
It's just a metaphor.
That's my point, this is a terrible metaphor.
It’s not a terrible metaphor when it’s the closest thing we have to explaining this issue to a layperson.
Ah, yes. The typical layperson that understands threading and processor architectures.
honestly, young people these days are smarter than you give them credit for. multi-core and threading is something that pretty much anyone on the internet “gets” conceptually, even if not on an engineering level.
So, let me get this straight then.
You believe the audience for a blog about being tired of multi-tasking is young people, from this new generation that is always multi-tasking (on the smartphone, talking to multiple people, etc)?
You honestly believe they need a metaphor like "single thread versus multi-thread" to grasp the idea of what doing multiple things at the same time means, practically?
If you do, ok then. Who am I to disagree?
I still think none of this makes sense, and the metaphor sucks.
i think everyone’s got the gist of that by now
I understand processor architectures and I would have preferred the use of the word "in-order" processor over "old single core from the 90s", since in-order CPUs are still being designed and manufactured today.