The Windows 95 user interface: A case study in usability engineering (1996)
by ksec
Steve Jobs is famous for his 1996 quote about Microsoft not having taste (https://www.youtube.com/watch?v=UiOzGI4MqSU). I disagree; as much as I love the classic Mac OS and Jobs-era Mac OS X, and despite my feelings about Microsoft's monopolistic behavior, 1995-2000 Microsoft's user interfaces were quite tasteful, in my opinion, and this was Microsoft's most tasteful period. I have fond memories of Windows 95/NT 4/98/2000, Office 97, and Visual Basic 6. I even liked Internet Explorer 5. These were well-made products when it came to the user interface. Yes, Windows 95 crashed a lot, but so did Macintosh System 7.
Things started going downhill, in my opinion, with the Windows XP "Fisher-Price" Luna interface and the Microsoft Office 2007 ribbon.
I'll also give the opinion that Apple consistently creates some absolutely crap designs, and when they do, releasing something so mind-bogglingly stupid that it should be embarrassing, they are instead met with applause for the "amazing design". It's a tiresome pattern repeated for decades now.
eg. The 'breathing status light' that lit up the room at night due to its extreme brightness, which meant every MacBook of the era had stickers or tape over the LED, with endless Q&As of "How do I turn the annoying light off?" "You can't!". This crap design was met by articles extolling the subtle sine wave and off-white hue. I kid you not. https://avital.ca/notes/a-closer-look-at-apples-breathing-li...
Apple today seems to have acknowledged the mistake and taken away status lights completely (also a crappy design hailed as amazing, since they've just gone to the other extreme), which highlights the fact that no matter what they do they're hailed as being amazing at design, even when it contradicts their own previous 'amazing designs'.
Apple doesn't just get a pass on crappy design. It gets endless articles praising the virtues of everything they do even when, if you think about what they did for even a second you'd realize, "that's actually just plain crap design".
> release something so mind-bogglingly stupid that it should be embarrassing
I’m still trying to understand who came up with the idea of charging the mouse from underneath, instead of from a position that would allow using the mouse while charging…
I believe that was intentional, to prevent people from using it plugged in. Otherwise most people would keep it plugged in all the time, so it wouldn't be a wireless mouse anymore, and it would also degrade the battery lifespan.
I also believe that was intentional. But the reason was the typical Apple / Jobs hubris of knowing better than the users. The desktop looks cleaner with fewer cables, so they wanted to enforce use without the cable plugged in.
I don't have a source for this, but I'm pretty sure I've read something like that a long time ago.
It was intentional in the sense of recycling the design of the Magic Mouse 1, which used AA batteries. The Magic Trackpad and Keyboard came out the exact same day as the Magic Mouse 2, and they don't share the Magic Mouse 2's stupid design. They both have perfectly usable ports on the front and even work when wired, without pairing.
Cycling the battery continuously is worse for lifespan.
Maybe they should have made the batteries replaceable and made it operable without batteries installed.
Or just ship a wired version for the people who want that.
That wired hockey puck mouse was an abomination
Right.... So.... Add some charging circuitry. Is it really a problem if people don't use it as a wireless mouse anyway?
Yes, it's very Apple to force users to use devices how Apple wants, but that isn't a particularly good reason.
Textbook case of form over function. Either an engineering constraint forced by the design and deemed an acceptable trade-off by higher-ups, or maybe more likely, the designer just thought a visible charging port would’ve ruined their design.
While the exact reason has never been documented, if you look at that mouse's design, you'll see that its first generation had a regular battery compartment on the bottom. When gen 2 arrived, they fully reused the same shell and only replaced that bottom part to now be an integrated battery with a charging port instead of a compartment for AAs. Moving the charging port would've required a brand new design, since every edge of the mouse tapers way too much for a port to be placed anywhere else. They would also probably need to change more of the internal structure, as opposed to just swapping a battery module and changing the bottom lid. In this case the constraint seems to just be about functionality and manufacturing. Apple has made many controversial design decisions that have no functional justifications in the past, yet people keep bringing up the mouse.
The reason people talk about the mouse is that it's one of the worst ideas they ever had.
At the time, I remember someone claimed that the reason was that they were afraid people could leave it plugged in for convenience. Apple thought that would lead to a worse experience because their mouse was designed to be used wirelessly. I think it was actually more related to aesthetic "icks" by the designers, because people would have disconnected the cable if it was in the way.
This is not even close to the worst ideas Apple ever had, even if you're only talking about mice.
The original USB mouse (for the first iMac) was round, so you couldn't orient it in your hand without looking at it constantly.
And it came with a very short cord (because there was a port on the right side of the keyboard to plug it into). But then the laptops got updated with USB ports and they were only on the LEFT side of the case.
For at least a year or two you could not buy an Apple mouse for your Apple PowerBook and use it in your right hand, because the cord was too short to go around the case.
Eventually they shipped a "Pro" mouse with revolutionary elongated shape and longer cord. (...and optical tracking, and what looked like zero buttons, which were pretty neat)
Ugh, I totally forgot about their round mouse. Bright-colored iMac days!
Yet that is one thing I love very much about my MX Anywhere 3. The wired connection simply performs better, and I get to use it when I haven't charged. It's also compatible with any non-Bluetooth device.
> I think it was actually more related to aesthetic "icks" by the designers, because people would have disconnected the cable if it was in the way.
A lot of people really will just anxiously leave the cable in the whole time if given the opportunity. I have a wired/wireless Logitech mouse and I confess that I hardly ever remove the cable. Between this, and the space and connector issues of adding a "normal" cable connection as referred to in the grandparent, we have two reasons to think that Apple's decision wasn't all that clearly bad, let alone one of their worst.
Nobody leaves the cables attached. Except wannabe pro gamers who think a couple of milliseconds will help them more than practicing to "git gud". Every mouse I have is wireless, and I almost never use them plugged in, except for the one on the server that gets used so rarely it self-discharges; it should probably be wired, but I simply don't have any wired ones left. Just plug them in overnight every once in a while; golden.
> it's one of the worst ideas
It's still one of the worst ideas. Insult to injury.
Honestly, as a user of the mouse, I think the main reason people talk about the mouse is bike shedding. Charging isn't a problem in actual use, but everyone sure has an opinion on it.
There are plenty of contenders for 'worst ideas they ever had' and this just isn't up there.
I agree, I always found the charging port location to be a total non-issue. The battery life is long, charging is fast, and you get warned that the battery level is low long before the mouse dies.
In fact, the real crime of the Magic Mouse is how awkward it is to switch it between machines.
"If you see a stylus, they blew it"
That's a quote from Steve Jobs about how basically all of their competition (except Google) had made the mistake of trying to ship desktop software on phones. The problem with the stylus is that it's a hardware workaround for a software problem: the sort of cost-reduced engineering you get when a company wants to "have a mobile strategy" without actually putting in the time and effort to make something good.
The Magic Mouse is the exact same kind of "we couldn't care less" cost-reduction. The charging port is on the bottom because that's the only place you can put a charging port with the existing all-glass design. Because they re-used an existing design intended for removable batteries. This is such an uncharacteristically un-Apple move, and one so obviously detrimental to the design of the device, that people (including myself) actually psyopped themselves into thinking Apple had deliberately designed the mouse to enforce wireless usage.
And, to be clear, Apple has never done that.
All their other peripherals with rechargeable batteries in them will let you use them fully wired if you plug them in. In fact, if you somehow engineered a way to move the charging port somewhere less stupid, the Magic Mouse probably would work plugged-in, too.
If you see a charging port on the bottom, they blew it.
While I get the feeling you appreciate the, erhm, efficiency with which Apple modified this product, the problem is that Apple is not supposed to be efficient. They don't need to save money on the engineering process because they are not hurting for money. They sell themselves as being a design-forward company that prides itself on making bold, not expedient, choices. To take a shortcut like that shows a lack of respect for the customer to whom they are charging premium prices for these items.
So if they're reusing the shell, they passed the savings on to customers right?
If this were a £10 mouse then this excuse might be valid, but it isn't.
Where should a cord on a mouse be when it's charging? The same place as any cord on a mouse should be, i.e. the tail, would be the commonsense answer. Indeed, this is how all other dual-mode mice do it.
How many generations of that mouse design have there been now? Any changes to it? Wireless charging support could be a nice bandaid on that terrible design.
Let me introduce you to the world of _devices for keeping small kids asleep_.
For whatever reason they won’t work when hooked up to a charger and of course the moment you need them most the batteries have gone dead so you must charge…
At this point I can’t help but think that the people who design these things really hate parents
Their laptop touchpads are the only Apple "pointer" input device I've ever liked. (And by extension, the iPhone and iPad.)
I hate myself every time I settle for yet another disposable Microsoft mouse.
Though, I would have killed for an Apple Pencil, back when I was a CAD jockey.
For me, the butterfly keyboard was Apple's mostest worstest user interface design decision.
(Doubly so because it persisted for so long. I love that Apple (and others) try new things. But I don't understand commitment to design failures.)
Source: I've been an Apple partisan since the Apple ][. Even stubbornly resisting Amiga's siren call.
> For me, the butterfly keyboard was Apple's mostest worstest user interface design decision.
I really liked the butterfly keyboard. It was responsive, and you could hit the key cap anywhere and it would register.*
Subsequent MacBook keyboards imo are all terrible and suffer from a terrible sponginess, which means I can literally press a key cap in a slightly off-centre location and it Does Not Register. It's like the key movement is separate from the actuation. I have way more mis-keys and missing letters using later post-butterfly keyboards than I ever did. The worst part is this is 'normal' and not a fault. You just have to press harder and in the centre.
* except when it was in for repairs: I had the top case replaced 3x on my old MBP
Ya, you're right. When it worked, it worked well. A point worth remembering, thanks. Alas, they just weren't robust enough, interfering with my work.
Yes, or the sharp edges on MacBooks cutting into your wrist. That started with the unibody design; the ones before that had a nice soft rounded plastic gasket there.
My powerbook was the last apple laptop I really enjoyed.
I liked the gentle amber sleep mode breathing power button on my circa-2000 CRT iMac. It wasn’t nearly as bright as that of later models and was quite nice.
I had one from 2010 and the bright light wasn't even the main problem at night.
The charger emitted an annoying high pitch sound so I'd have to unplug it.
And the device turned itself on randomly at night, and the CD-ROM reader and fans would spin up and make noise.
I've now used a MacBook Air for work, and I noticed that even when in "standby" for a week, my router had a DHCP lease for it. So they still turn on for no reason, but at least the lack of fans, and the fact that I can now use a decent USB-C charger, means they don't wake me up any more.
You can disable "Wake for Network Access" in the battery settings if you want. It's a convenience option that lets MacOS check for new iMessages and other updates while it's asleep. That way all your messages are loaded immediately when you wake the device.
I didn't even set it up with an apple account and thus I cannot possibly get any messages from iMessages.
You can disable it but yes macs will periodically wake up to get emails and notifications even while they are ‘sleeping’
I love how when americans wake up simple factual statements get downvoted because they are somehow seen as personal insults.
Feel free to downvote, this one is meant for it :)
They're the flagship "design here, build there," company, so they got the Pollock treatment.
I'm glad people are finally saying it. Eat your heart out, Nilay.
I like the status lights on my old X200 a lot (on/off, battery, disk, wlan I believe, and more). It's a shame we don't have them like that any longer in Thinkpads. We only get one or two LEDs indicating on/off status. But such things need to be done right.
I recently had to get printing working for a family member on an Apple tablet. I'm not an Apple jockey, so it took me a while to sort out, and I've been using computers since 1980 and consulting since 1995.
You tap an icon that looks like the outline of a rectangle with an arrow pointing up. Then you tap the name of the printer. Then you tap another rectangle with an up arrow and then tap the word "Print".
I may have got the precise steps wrong but it really is that abstruse to print something on a tablet. Never mind that mDNS/Bonjour has done its thing - the steps to actually indicate that you want to print is frankly weird.
What on earth is that box with an up arrow actually supposed to mean? Why does the interface switch from icons to text?
Android uses the 'share' icon to represent the same thing, which is maybe a little more legible, but still feels like shoving way too many actions under a confusing modal they shouldn't be in. Even worse when apps implement a custom share dialog.
I usually see the Android share icon with the word share. Apple doesn't often present words with icons, so if you don't already know what the icon means, it's difficult to find out.
Arguably, it's a bit odd that you share a document with a printer in order to print it, but I feel like printing is no longer common enough to require a dedicated button everywhere. Printing from a phone still seems like a novelty to me (though I do use it; it feels odd but useful, and I know lots of people have no computer to print from).
It's called the Action icon, a generalisation of its original Share meaning. It's used throughout the Apple ecosystem so knowing that's where actions live is not a big expectation.
You've mangled the steps. You only press one Action icon in this sequence, then you select Print, then you select the printer and any other options, then you tap Print. Which of these steps do you find 'abstruse'?
Are you suggesting they should use a little icon of a printer, a peripheral that takes many wildly different forms, instead of the word Print?
We've had clear, legible printer icons for decades.
There's a printer icon in windows and *nix. Many icons represent things that have wildly different forms. People and cars look different, but road signs manage to portray these things.
It's supposed to be the "Share" menu, but that stopped meaning anything very fast because they just crammed everything into it for lack of other UX for system services.
Macs have the problem multiple times over, because now they have the normal menu bar and toolbar, and a Share menu that just gets arbitrary stuff dumped into by App Store apps, and the Services menu that shows up in some contexts but not others, and the Quick Actions menu that shows up in some contexts but not others, and some services can just add things directly to right click menus.
Apple UI designers wanted to avoid the Android hamburger so much that they doubled-down on the share menu to duplicate hamburger menu functionality.
I guess printing it to paper is a form of sharing so they may have the last laugh.
Windows Explorer supports its own equivalent to the "Share" menu, dubbed "Send To". It was there already in the original Windows 9x. Printers are generally not listed though, there is a separate "Print" option instead.
There's a very reasonable argument behind that, though.
"Sending" a file to another disc or on the network is non-transformative. At the far end, it's still a file.
But "printing" is inherently transformative-- you're expecting to get something clearly not a file (print-to-file pseudo-printers excepted).
I can see the desire for minimalism: having separate rows for "share/send" and "print" is, well, two separate rows. But if you offer adaptable and configurable interfaces, I could see suppressing one or both depending on context or user preferences. (You have no external drives or registered share recipients? No "Send To/Share".)
Maybe I've been in Linux land too long but sending a file to a printer seems pretty obvious to me. Yes it's transformative in a way, but you could equally argue that my word document with A4 layout is a digital version of a document, and the print out is equivalent.
To me there seems to be more difference between sending and share. One is pushing something somewhere, the other implies making it available for someone/thing to pull.
I'm not particularly saying you're wrong btw. We are talking metaphors, and there's no 'correct' way to do it.
Good point. It's a mess. Come on Apple, get someone to fix this.
Status lights can be helpful, although they should be dim, and should be red or green (or possibly yellow) rather than blue or white (unless you have already used the other colours and now you need more colours).
Red and green, if the color carries meaning, should be avoided. About 1 in 10 males have problems with those colors (dyschromatopsia), especially with LED colors. For indicators, blue and white are very easy to see, even in suboptimal lighting. The option to disable them is nice.
> unless you have already used the other colours and now you need more colours
In that case you will end up with Christmas decorations. Better solution is usually placement and form.
Mixing red and green should be avoided. There’s no problem using either alone. Human color vision is the least sensitive to blue light, so a blue indicator led has to be made brighter than an equivalent red or green led to be as visible in bright ambient lighting. But that makes blue leds disastrous in low light, where the opposite is the case (vision is the most sensitive to blue). Of course there never was any reason for blue standby lights except the fact that blue leds had novelty value and looked futuristic compared to boring old red and green leds.
I've got no problem with that tiny LED or the glowing Apple logo personally.
But Liquid Glass and the insane number of bugs that arrived with it are killing me.
Likely you experienced later gens where they toned it down. Around 2010 it was one of the brightest LEDs you could purchase. As in, they literally put a torch LED in the all-white Intel MacBooks of the era, and it would shine through laptop bags, pulsating.
There are people who live their lives with the low-battery beep on their smoke alarm going off every 5 minutes in their home and don't even register it happening.
Maybe...
Not maybe; I owned a 2009 MBP. Everyone I knew with a MacBook from that period had the same issue: they were absurdly bright, and you could not keep one anywhere near a bedroom without putting very thick tape over the light.
It was a poorly thought out design of aesthetics over ergonomics.
Nope. Actually, I remember I had that model first, and yes, I still don't care. It was simply the least annoying light compared to other bright colored LEDs in a room. It doesn't come close to the Liquid Glass chaos.
I loved the battery level indicators on old MacBooks too. They kind of brought that back with the LED on MagSafe, except this new LED is more annoying.
> Microsoft Office 2007 ribbon
The Ribbon also has similar research behind it, just like Windows 95. For its design goal, allowing beginners to discover all the functionality that's available, it works perfectly.
I think most of the complaints from the tech circles are completely unfounded in reality. Many non-tech people and younger ones actually prefer using Ribbon. I also like it since it is very tastefully made for Office. 2010 was my favorite Office UI. It actually doesn't get rid of shortcuts either. Most of the Office 2003 ones were preserved to not break the workflow of power users.
Where the Ribbon doesn't work is when you take the contextual activation out of it. Most companies copied it in a very stupid way: they just copied how it looks. The way it is implemented in Sibelius, WinDbg, or PDF-XChange is very bad.
> I think most of the complaints from the tech circles are completely unfounded in reality. Many non-tech people and younger ones actually prefer using Ribbon.
Well, yes, but that observation doesn't prove the point you think it does.
People who were highly experienced with previous non-ribbon versions of Office disliked the ribbon, because the ribbon is essentially a "tutorial mode" for Office.
The ribbon reduces cognitive load on people unfamiliar with Office, by boiling down the use of Office apps to a set of primary user-stories (these becoming the app's ribbon's tabs), and then preferentially exposing the most-commonly-desired features one might want to engage with during each of these user stories, as bigger, friendlier, more self-describing buttons and dropdowns under each of these user-story tabs.
The Ribbon works great as a discovery mechanism for functionality. If an app's toplevel menu is like the index in a reference book, then an app Ribbon is like a set of Getting Started guides.
But a Ribbon does nothing to accelerate the usage of an app for people who've already come to grips with the app, and so already knew where things were in the app's top-level menu, maybe having memorized how to activate those menu items with keyboard accelerators, etc. These people don't need Getting Started guides being shoved in their face! To these people, a Ribbon is just a second index to some random subset of the features they use, that takes longer to navigate than the primary index they're already familiar with; and which, unlike the primary index, isn't organized into categories in a way that's common/systematic among other apps for the OS (and so doesn't respond to expected top-level-menu keyboard accelerators, etc, etc.)
I think apps like Photoshop have since figured out what people really want here: a UI layout ("workspace") selector, offering different UI layouts for new users ("Basic" layout) vs. experienced users ("Full" layout); and even different UI layouts for users with different high-level use-cases such that they have a known set of applicable user-stories. A Ribbon is perfect for the "Basic" layout; but in a "Full" layout, it can probably go away.
This is it. Ultimately the best interfaces are designed for experts, not beginners. "Usability" at some point became confused with "approachability", probably because like in so many other areas, growth was prioritized over retention. It's OK if complex software is hard to use at first if that enables advanced users to work better.
Really, the most efficient interfaces are the old-style pure text mode mainframe forms, where a power user can tab through fields faster than a 3270-style terminal emulator can render them.
But what if most of your users aren't "experts"? I think it's a good thing that computers are usable by a majority of the population today.
So why care about wysiwyg when we have LaTeX?
I really like this take! A couple years ago I wrote a throwaway blog about learning curves in user design[0] but the thought has stayed with me a lot since then.
It's especially tricky because things are contextual. I use Helix as an editor which has a steeper learning curve than, say, VSCode, but is way faster once you're up and running with it.
But by contrast, I also really like LazyGit, which is a lot quicker to learn than the git CLI, but since all I do is branch, commit, and push, it makes my workflow a lot more efficient.
There's such a complex series of trade-offs, especially if products want to balance both. I always feel a little sad about how much interfaces have skewed towards user friendliness over power. Sometimes it feels like we've ended up in a world of hurdy-gurdies with no violins.
[0] https://benrutter.codeberg.page/site/posts/learning-curves/
> I think apps like Photoshop have since figured out what people really want here: a UI layout ("workspace") selector, offering different UI layouts for new users ("Basic" layout) vs. experienced users ("Full" layout); and even different UI layouts for users with different high-level use-cases such that they have a known set of applicable user-stories. A Ribbon is perfect for the "Basic" layout; but in a "Full" layout, it can probably go away.
In the linked case study on Windows 95 they specifically tried this, creating a separate beginner mode for the Windows shell. Their conclusion was that it was a bad idea and scrapped it because it doesn't allow for organic learning and growth of a beginner into a power user on account of the wall between modes. Instead they centralized common tasks into the Start menu. I'm not sure how you would translate that learning to the design of Office or Photoshop though. Maybe something like Ribbon, but as a fixed "press here to do common actions" button in the app? Then next to that "start button" put the full power user index of categorized menu buttons?
I think PrusaSlicer does this in a reasonable way. (Context: this is software for preparing files for 3D printers.)
It has three modes: Simple, Advanced, Expert. They are all the same UI design, all it does is hide some less common settings to not overwhelm users. Each level is also associated with a colour, and next to each setting is a small dot with that colour: this allows you to quickly scan for the more common settings even if you showed all of them at Expert. At Expert there are easily over a thousand different settings organised into a 2-level hierarchy.
Docs on this feature: https://help.prusa3d.com/article/simple-advanced-expert-mode...
I wrote a blog post that has some screenshots from the settings pages (5th image for example): https://vorpal.se/posts/2025/jun/23/3d-printing-with-unconve...
> people who've already come to grips with the app
They would, or should, be using keyboard shortcuts anyway.
I forget the early releases, but the ribbon seemed to have fuller keyboard shortcuts and could be hidden entirely. That leaves power users with more space and faster command triggers, doesn't it?
Yes, the ribbon also showed you the appropriate keyboard shortcut. My last job in the Navy involved a lot of converting mail merge-style Word docs to PDF for digital signature and so I became very adept at using keyboard shortcuts in Word and it was all right there in the ribbon.
It was different from Word 2003, but that was about all the bad you could say for it from the 'power user' perspective.
The thing that bothers me more than ribbon itself is how much the performance started degrading once they introduced it.
I got MS Office 97 working in Wine recently, and it's still shockingly capable. There are lots of formatting options, it can read my system TTF fonts, and since it's nearly thirty-year-old software, it runs ridiculously fast on modern computers.
I don't feel like MS has added many more features to Office that I actually care about, but I feel like the software has gotten progressively slower.
Forget modern computers. I booted up my dad's COMPAQ from 1998, running Windows 2000, and was blown away by the speed and logical layout of the applications. I have to grit my teeth using W11 File Explorer because of what I recently re-experienced.
I imagine Office 365 is to Office 97 as FIFA 23 is to FIFA 97, in that it's still essentially the same idea and can never be otherwise, but the later versions are designed to draw new people in.
I’ve said before that I don’t think there’s anything missing in Office 2000 for upwards of 90% of users’ word processor/spreadsheet/etc needs, and this is supported by the popularity of the somewhat spartan GSuite apps (Docs is basically WordPad with realtime collab tacked on, not even a full Word 2000 equivalent for example).
It’s also stupid in terms of screen real estate.
Earlier Word/CorelDraw/etc had a thin toolbar with lots of functionality. Barely occupied any space at just 800x600 resolution.
Nowadays, the ribbon and all other junk occupy a huge portion of the screen, even at 1920x1080.
It’s amazing how little screen area today actually shows the useful part of a document.
Instead of the Ribbon, a thin context sensitive toolbar would have been more useful.
> It’s also stupid in terms of screen real estate.
You can't really blame MS that, around the same time, screen manufacturers started to switch to 16:9 for cost reasons and cheap laptops all only offered a 1366x768 resolution.
The whole "UIs got smaller because the aspect ratio got more rectangular" thing never really made sense to me, because 768 > 600. The screens got bigger in both dimensions, regardless of them getting bigger in one more than the other.
Pixels aren't physical space. The number of square inches remained similar.
A wider aspect ratio means that a horizontal line takes up a larger percentage of the overall screen and is more costly.
You know the ribbon can be collapsed so that it behaves more like a drop-down menu, right?
It doesn't really act that way, as (1) it can't be accessed with keyboard shortcuts and (2) it's difficult to scan for the desired feature as it's a visual jumble of buttons and text. Oh, and it might not be visible! Sometimes features can only be found in pop-out dialogs.
Having used Office products for 30+ years, my most-used feature of the Ribbon is Search, because I don't have time to waste hunting through a poorly organised heap.
To your (1): if you tap Alt, all of the currently available alt keys show up next to their associated buttons (top-level menu). Hit the letter for where you want to go, and it will then show you the next set of alt keys (the available items on the ribbon itself). You can also use the arrow keys to move around the menus or tabs when in this mode. It isn't obvious, but the ribbon, as Office implemented it, is very keyboard accessible.
But then you have to learn the shortcuts (if there are any), or click first to open it and then click the button/function, which is 50% slower.
Also, classic button bars were customizable. You could add/remove/group buttons in any order you like. And there were lots and lots of buttons that were not present in any of the default toolbars. The ribbon is fixed AFAIK.
Ribbon has some good elements to it, but other elements are questionable at best. Sizing of buttons for example feels completely arbitrary and not connected to frequency of use or anything else obvious.
I think the best parts of it could be replicated by just combining tabs and traditional toolbars, but that’s not complex enough of a concept to need a dedicated moniker.
I think the ribbon is terrible. When you are looking for something, you can't just look in one direction but you have to scan up and down. Then it may be text or just an image. And the thing you are looking for may be on some other ribbon page.
I much prefer menus with toolbars that have only the most used functions.
> 2010 was my favorite Office UI.
Mine too. Office 2010 was what made me switch back to Windows after using Linux and OpenOffice for years. I found the ribbons to be perfect for my use of Office. They usually automatically focused on the task at hand. Everything else was just a click away. Advanced stuff stayed in the menu. And, at least for me, it helped discoverability of features.
> For what they designed it, allowing beginners to discover all the functionality that's available, it works perfectly.
Sure, but where are the beginners we are talking about? By 2007, Microsoft Office had long reached dominance in the workplace and in schools, such that the only beginners were students learning word processing for the first time.
The beginners are long time workspace and school users who were requesting features already in the product.
The ribbon doesn't work for me because the options change, and visibility is decided by how big my window is.
The Ribbon is a disaster. Compared to conventional toolbars, it fails across several metrics.
When it first came out, I did studies of myself using it vs. the older toolbared versions of Word and Excel, and found I was quantifiably slower. This was after spending enough time to familiarize myself with it and get over any learning curve.
EFFICIENCY
The biggest problem is it introduced more clicks to get things done - in some cases twice as many or more. Having to "tab" to the correct ribbon pane introduces an extra click for every task that used to be one click away, unless the button happens to be on the same tab. Unfortunately the grouping wasn't as well thought out as it could have been. It was designed with a strong bias for "discoverability" over efficiency, and I found with many repetitive tasks that I commonly carried out, I was constantly having to switch back and forth between tabs. That doesn't even get into the extra clicks required for fancier elements like dropdowns, etc. And certain panes they couldn't figure out where to put are clearly "bolted" on.
KEYBOARD SHORTCUTS
At the same time, Microsoft de-emphasized keyboard accelerators. So where the old toolbar used to hint you the keyboard shortcut in a tooltip every time you rested your mouse over a button, the new one doesn't - making it unlikely users will ever learn the powerful key combos that enable more rapid interaction and reduce RSI caused by mousing (repetitive strain injury). In my case this manifests as physical pain, so I'm very aware of wasteful gestures.
SCREEN REAL ESTATE
The amount of text in the button captions on the ribbon is also excessive. It really isn't a toolbar at all, more of a fancy dropdown menu that's been pivoted horizontally instead of vertically. It turned the menu bar, which used to be a nice, compact, single line, into something that now takes up ~4x as much vertical screen real estate. As most users' monitors are in landscape orientation, vertical space is scarce to start with; congratulations, you just wasted more of those precious pixels, robbing me of space to look at what I really care about, which is the document or whatever thing I'm actually working on.
DISCOVERABILITY
You used to be able to get a good sense of most software's major functionality by strolling through all the menu options. Mastery (or at least proficiency) was straightforward. With the more dynamic paradigm Microsoft adopted along with the Ribbon, there's lots of functionality you don't even see until you're in a new situation (or that's hidden by the responsive window layout, which is ironic - instead of making the thing more compact, they made portions of it disappear if your window is too small). I grant some may argue this has benefits for not appearing as overwhelming to new users (although personally I've always found clean, uniform, well-thought-out menus to be less jarring than the scattered and more artistically inclined ribbon). But easing the learning curve had the trade-off of making those users perceptually stuck in "beginner" mode. They can't customize the ribbon as meaningfully. (I used to always tailor the toolbar by removing all the icons I already knew the keyboard shortcuts for, adding some buttons that were missing like Strikethrough, and moving it to the same row as the menu bar to maximize client-area space.)
In my case, after trying out the new versions for a year, I made an intentional decision to go back to the 2003 versions of Word and Excel, and never look back (forward?). They are my daily drivers. These days, I barely touch modern versions of Word and Excel, except for the very rare instance I actually need a specific new feature (e.g. a spreadsheet with more than 65k rows). If someone asks me to use the new version, I simply refuse (which has never been a showstopper - my work quality is preeminent, and once you get past policy bureaucracy it turns out clients/employers don't care what tool I use to get it done).
The whole point of a toolbar was always to be a place you could pin commands you want instant access to, just a click away. The ribbon shredded that paradigm, and in my opinion took us a marked step backward in computing. It fails across several metrics, compared to regular toolbars. I wanted to blog about it at the time in hopes of convincing the world it was a mistake, but didn't have the free time. 20 years later, I'm curious if more people share these sentiments and acknowledge its shortcomings.
> So where the old toolbar used to hint you the keyboard shortcut in a tooltip every time you rested your mouse over a button, the new one doesn't
Although it is bad that it does not display the keyboard shortcuts, you can push ALT and then it will tell you which letter to push next. (I just guessed that pushing ALT might do something (possibly display a menu?), and I was correct (it did not display another menu, but it did help).) This is not quite as good as using the other keys such as CTRL, or numbered function keys, but it is possible.
(I do not use those programs on my own computer, but on some other computers I sometimes have to, and this helps, although not as well as it would to use menus and other stuff instead. However, in some cases I was able to use it because of knowledge of older versions of Microsoft Office; many of the keyboard commands are the same.)
I think the menu bar is much better, and toolbars should not be needed for most things. With the menu bar it will underline the letters to push with ALT and also will tell you what other keys to use (if any) for that command. (One thing that a toolbar is helpful for is to display status of various functions that can change, such as the current font. Due to that, you might still have a toolbar, but you do not need to put everything in the toolbar. Perhaps combine the toolbar with the status bar to make it compact.)
(Something else that would improve these word processing software would be the "reveal codes" like Word Perfect. A good implementation of reveal codes would avoid some of the problems of WYSIWYG. For spreadsheet software, arranging the grid into zones, and assigning properties (including formatting and formulas) to zones, and making references work with zones, etc, would be helpful, but I don't know that any existing software does that.)
In my own software I do try to make the display compact so that there is more room for other stuff, instead of all of the commands and other stuff taking up too much space on the screen. Good documentation is helpful to make it understandable; this works much better than trying to design the software to not need documentation, since then the lack of documentation makes it difficult to understand.
I don't buy it. My generation used pre-Ribbon Office, from elementary school onward, just fine. It wasn't made for children; it was made for Boomers who couldn't grok the menu-based interface. Not old people; prime workforce-aged Boomers who were intimidated by computers, but who were being dragged kicking-and-screaming into the Information Age by their jobs. It was just another example of the infantilization of interfaces provided to that generation whenever they whined about not wanting to learn, or being scared by, something new. Everyone else just got dragged along with them.
allowing beginners to discover all the functionality
How many beginners were there in 2007? Hardly any, PC and "Word" penetration was pretty close to 100%. We are still stuck with "beginners have to figure this out" interfaces in 2026.
As long as new humans are still being born, there's always going to be beginners - with a few years delay, once they enter school or a work place. ;-)
I think the interesting larger observation here is the perhaps both Microsoft and Apple peaked in their usability design between the mid-90s and late-aughts (I think Apple stayed at their peak for longer, particularly when you start thinking about the iPhone which, at the time, was streets ahead of what any other company was offering), and have both been on a down trend ever since.
Why is that, though? Why does that appear to have to be the case, given that neither seems able to do anything but get worse nowadays? And why hasn't any other player managed to step in and fill that void?
Clearly there are some broader forces and trends at play here.
Is it pressure to monetize in ever more intrusive, user-hostile, and "micro-tiresome" ways? Is it that they don't really have to compete any more, or at least not with each other?
What is going on here? I don't understand. But I wish I did, because then a way out might be easier to discern. Because I still don't think Linux on the desktop (taking one aspect of the problem) is necessarily ready to be the answer - certainly not outside of the technology, engineering, and scientific niches.
I think there's something to be said for the loss of institutional knowledge, as that was the time when the first set of Baby Boomers would have been transitioning out of operational roles or the workforce altogether. My experience as a Millennial is that they and older Gen-X, as a cohort, have been quite jealous of their accumulated expertise and generally reticent to pass it along, especially when they'd learned to keep every edge possible in the hyper-competitive job markets of the 80s and 90s. It's possible that a lot of knowledge just disappeared, leaving the younger generations to reinvent the wheel at a cuil over the circumstances that brought about the UX they'd grown up with.
People need to go back and use Win 3.1 or MacOS 7.x to realize what a leap forward Win95 was. MacOS 7.x didn't even have preemptive multitasking! The start menu and task bar made their debut and immediately anchored the whole UI. Since then, Windows has made incremental advances (with the occasional step backwards), but no change has been nearly so radical. OS X would not have been possible without the influence of win95. We're still living in the Win95 age.
OS X inherited its multitasking model from NeXTSTEP, which predates Win95 by several years.
I have used both Windows 3.1 and Windows 95. Windows 95 does have some significant benefits (e.g. you can start Windows programs from the DOS prompt (I seem to remember that you cannot do this in Windows 3.1 and in Windows 95 you can, but I am not sure if I remember correctly), and the WIN+R shortcut, and some others), but also many problems (although some can be avoided by changing stuff in the registry; I had done that to force it to display the file name extensions for all file names, rather than hiding them even if you tell it to display them; I also dislike their decision to use spaces in file names).
You could change the option to hide file extensions in the explorer settings windows; no registry tweak was needed.
Not wanting spaces in file names is certainly a bold opinion! I think you'll find yourself in a very small minority there.
> You could change the option to hide file extensions in the explorer settings windows; no registry tweak was needed.
There is a setting in Explorer, but it does not affect all file types; some (such as .lnk) are not affected by that setting and hide the extension anyway.
I don't have strong feelings either way, but I can see the perspective that underscores should suffice, and that introducing white space into filenames makes certain file and data management tasks more difficult and unpredictable.
You have to use windows 95 with a computer from 1995 to realise how painfully slow it was compared to windows 3.
Windows 3.11 loads in less than a blink of an eye on my Pentium MMX, while Windows 98 takes at least a minute to boot. This is with an 8 GB CF card as the HDD too, so the I/O is going as fast as possible.
It's because of drivers and PnP and especially USB. When you load Win3.1, WinNT4 and lower, drivers load without scanning for hardware presence. It's just a disk to memory copy. In Win95, the first PnP OS, it scans for PnP hardware at every boot. That's slow.
To prove my point, you could try loading some of the USB drivers for DOS or one of the ISA PnP configuration utilities (such as ICU - Intel Configuration Utility), see how fast it boots then!
Also, if you left the network config untouched, it defaults to TCPIP+DHCP, and when DHCP doesn't respond (cable unplugged), it's another 30s delay. Win311 didn't have TCPIP unless you install it manually. It also asks you to configure it during installation - less likely to select DHCP if you don't have it. And then, in Win311, network is started by DOS (NET START in autoexec.bat), not by Windows.
Besides the boot (which Windows 3 didn't even really do, so I don't see why we are comparing it): from clicking on the Start menu the first time after boot to the menu actually appearing on screen, it would take 1-2 minutes to populate on Windows 95, while Windows 3 on the same machine had no such issue.
This is not true. Win95 start menu appears instantly. I dare you to prove me wrong.
You are probably thinking of Win98 menu where they added IE.
I am thinking of windows 95 with a computer from 1995, in the year 1995. If you use it on a vm today… yeah thanks for not proving anything.
I'm not using a vm. I have an early 2000s computer running several old OSs. In Win98 I replaced the shell with the one from Win95 because it's faster. See 98lite: https://en.wikipedia.org/wiki/Software_remastering#98lite
I am not a great mathematical genius but I suspect that "early 2000s" came several years after 1995. Correct?
I think Steve was correct in that Windows 95/98/NT/ME/2000 was functional but it wasn't particularly elegant. But the part I think Steve missed was that elegance may get the "ohhs and ahhs" but functionality gets the customers. Back when NeXT was a thing a friend of mine who worked there and I (working at Sun) were having the Workstation UX argument^h^h^h^h^h^h^h^hdiscussion. At the time, one component was how there was always like 4 or 5 ways to do the same thing on Windows, and that was alleged to be "confusing and a waste of resources." And the counter argument was that different people would find the ways that work best for them, and having a combinatorial way of doing things meant that there was a probably a way that worked for more people.
The difference for me was "taste" was the goal, look good or get things done. For me getting things done won every time.
Jobs did understand that. In the same quote he says Microsoft earned their success.
This. Windows 9x-2000 GUIs were probably the pinnacle of OS UX, but were utterly ugly and boring as UIs. Their looks were unimpressive and boring, but they got the job done and they were easy to use and worked well. Windows 95 was like a 90 cents spoon - not particularly appealing, but extremely useful
> 1995-2000 Microsoft's user interfaces were quite tasteful
Only because they copied NeXTSTEP. Those 3D beveled controls originated in NeXTSTEP. In Windows, ctl3d.dll added raised and sunken 3D-looking buttons, beveled text boxes, group boxes with depth, and a light-source illusion using highlight and shadow, all copied from NeXTSTEP.
That’s an odd way to spell Motif.
Motif 1.0 shipped in 1990. NeXTSTEP in 1988 had 3D beveled controls. So I believe I got the spelling right :)
Motif was also 3D, but the actual look of Windows 95/NT 4.0 clearly took some inspiration from NeXTSTEP and OPENSTEP, for example the window decorations.
I accept that's possible - if not likely (and everyone steals from each other!) - but even so, it only amounts to the gunmetal-grey default colours and the use of a 1px bevel/inset effect, because NS and NT3/NT4's UX/UI design and concepts are just so different otherwise.
...but I'm not personally convinced: instead, consider the demonstrable fact that similar engineering teams, working on similar problems, will independently come to substantially similar solutions. My favourite example is how eerily similar the Eurofighter Typhoon, Saab Gripen, and Dassault Rafale all look - even entirely indistinguishable at an air show in person - despite having zero shared pedigree. So it's possible that, given the constraints of desktop graphics hardware of the late 1980s/early 1990s, any user-friendly desktop UI built around the concept of floating application windows would end up similar in one way or another.
-------
My pet theory for why that "Windows 95 1px bevel" look is so prevalent is that it suits working with premade UI graphics rasters/bitmaps using indexed colors. For example, imagine a Windows-style Property Sheet dialog: prior to Windows 95, software would manually draw all of the elements of that dialog directly to the framebuffer (i.e. using unbuffered graphics), which was slow, ugly, and the computer equivalent of using a lavatory in a cramped bathroom actively undergoing renovations without any drywall/plastering. Even if there was enough VRAM for double-buffering, it's still going to be slow: painting each and every button, checkbox (with the checkmark!), and tab header. So instead, many individual UI graphics elements could be prerendered (at design time, hopefully by an actual artist) - not as single bitmaps for the entire dialog, but as an indexed-color bitmap for each control type, so no slow/expensive drawing/painting is required: only a simple bit-blt for each checkbox, for example. Using an indexed-color bitmap based on a 4- or 8-color palette (face, 3D light, 3D dark, transparent/BG, etc.) means a single blob only a few hundred bytes in size can represent a chisel-cut bevelled checkbox - while integrating with whatever the user's preferred color scheme is.
----
....of course now we'll just build a UI in Electron, to hell with memory usage or integrating with the user's OS appearance settings. Le sigh.
As mentioned, Windows 95 uses more or less the same window decorations as NeXTSTEP - although with different semantics. What is minimize in NeXTSTEP is maximize in Windows 95 IIRC.
https://www.operating-system.org/betriebssystem/bsgfx/apple/...
It could be coincidence of course, but...
> my favourite example to point to is how eerily-similar the Eurofighter Typhoon, Saab Gripen, and Dassault Rafale all look - even entirely indistinguishable at an air-show in-person - despite having zero shared pedigree
Considering that France/Dassault was initially part of the Eurofighter / European Fighter Aircraft (EFA) project, I'm not sure if that's the best example to make your point.
Please recall that 8-bit color was the common capability for CRT displays at that time. Simple one-bit displays were also common. Any smooth transitions in gray or color had to use dithering, or be very clever in the way they chose the palette.
Certainly some historic credit goes to Motif, but, there are "levels to this game" .. Motif did not jump out as "wow that looks good" IMHO. Obviously NeXT was extreme in a different way.. sort of like a symphony orchestra more than an office machine.
It is genuinely entertaining to see people defend the dull and pedestrian UI in Windows 95.
Microsoft has for short periods in its history put out good UX and design, but fundamentally the company doesn't defend taste and design.
The company treats good design almost like a marketing expense only worth doing if it creates short term brand perception changes. Throughout its history it's had moments of great design when a particular leader creates a culture that promotes it, but inevitably someone higher up rotates out that leader and the culture resets.
That has been the pattern with Windows, Zune / Windows Phone, Xbox, Surface, and many other consumer facing products.
I have some nostalgia for XP, especially the Zune theme (separate download, black+orange recolor of the default), but due to the Classic theme being available in so many versions and often using it either for more performance or easier ricing (can easily swap the colors and fonts via official settings), I'm also nostalgic for the Win95 or so UI. I think 2000 was the oldest I remember actually using, but I used XP a lot and 2000 not very much.
In the last decade+ of using GNU/Linux, I've also become very attached to bitmap fonts and simple solid colors, while I've grown to dislike curves and transparency. So sometimes I see a screenshot of some very old Mac OS version I never even used, and it just looks good, sharp, and clean to me, no real nostalgia involved.
I think SerenityOS's vision of a unix-like environment with classic Windows UI is genius. I don't follow the project that closely, but on paper it does seem like a good idea.
I think there is distinction there between look and functionality.
They were functionally just fine; good even compared to some modern abominations.
But the look was just plain and ugly, even compared to some alternatives at the time.
> Things started going downhill, in my opinion, with the Windows XP "Fisher-Price" Luna interface and the Microsoft Office 2007 ribbon.
Yeah I just ran it with 2000-compatible look; still ugly but at least not wasting screen space
Windows 95 was a vast improvement in looks over 3.x. Of course tastes differ, but I found it very aesthetic, not ugly at all, and used the classic look until Windows 7 EOLd.
By your timeline, it means Microsoft only had institutional taste for about 3-4 years. A tiny fraction of the company’s lifetime.
(If it helps, I do agree with you about those years being the most… design-coordinated: when Office felt like part of Windows)
(I like to think that Visual Studio 2026 proves that the company can still do good desktop UI design; but it doesn't help that every major first-party product is now using its own silo'd UI framework; whither MFC and CommonControls, I guess)
I think there was a period from Windows 3.1 to somewhere during Windows 98 (maybe right up until the release of Office 97?) where both first-party and third-party Windows apps were all expected to be built entirely in terms of the single built-in library of Win32 common controls; and where Windows was expected to supply common controls to suit every need.
This was mostly because we were just starting to see computers supporting large bitmapped screen resolutions at this point; but VRAM was still tiny during this period, and so drawing to off-screen buffers, and then compositing those buffers together, wasn't really a thing computers could afford to do while running at these high resolutions.
Windows GDI + COMCTL32, incl. their control drawing routines, their damage tracking for partial redraw, etc., were collectively optimized by some real x86-assembly wizards to do the absolute minimum amount of computation and blitting possible to overdraw just what had changed each frame, right onto the screen buffer.
On the other hand, what Windows didn't yet support in this era was DirectDraw — i.e. the ability of an app to reserve a part of the screen buffer to draw on itself (or to "run fullscreen" where Windows itself releases its screen-buffer entirely.) Windows apps were windowed apps; and the only way to draw into those windows was to tell Windows GDI to draw for you.
This gave developers of this era three options, if they wanted to create a graphical app or game that did something "fancy":
1. Make it a DOS app. You could do whatever you wanted, but it'd be higher-friction for Windows users (they'd have to essentially exit Windows to run your program), and you'd have to do all that UI-drawing assembly-wizardry yourself.
2. Create your own library of controls, that ultimately draw using GDI, the same way that the Windows common controls do. Or license some other vendor's library of controls. Where that vendor, out of a desire for their controls to be as widely-applicable as possible, probably designed them to blend in with the Windows common controls.
3. Give up and just use the Windows common controls. But be creative about it.
#3 is where games like Minesweeper and Chip's Challenge came from — they're both essentially just Windows built-in grid controls, where each cell contains a Windows built-in button control, where those buttons can be clicked to interact with the game, and where those buttons' image labels are then collectively updated (with icons from the program's own icon resources, I believe?) to display the new game state.
For better or worse, this period was thus when Microsoft was a tastemaker in UI design. Before this period, early Windows just looked like any other early graphical OS; and after this period, computers had become powerful enough to support redrawing arbitrary windowed UI at 60Hz through APIs like DirectDraw. It was only in this short time where compute and memory bottlenecks, plus a hard encapsulation boundary around the ability of apps to draw to the screen, forced basically every Windows app/game to "look like" a Windows app/game.
And so, necessarily, this is the period where all the best examples of what we remember as "Windows-paradigm UI design" come from.
> On the other hand, what Windows didn't yet support in this era was DirectDraw — i.e. the ability of an app to reserve a part of the screen buffer to draw on itself (or to "run fullscreen" where Windows itself releases its screen-buffer entirely.) Windows apps were windowed apps; and the only way to draw into those windows was to tell Windows GDI to draw for you.
> This gave developers of this era three options, if they wanted to create a graphical app or game that did something "fancy":
> 1. Make it a DOS app.
This vaguely reminds me of WinG[0][1] - the precursor to DirectDraw. It existed only briefly ~ 1994-95.
My vague "understanding" of it was to make DOS games easier to port to Windows. They'd do "quick game graphics stuff" on Device Independent Bitmaps, and WinG would take care of the hardware details.
[0] https://en.wikipedia.org/wiki/WinG
[1] https://www.gamedeveloper.com/programming/a-whirlwind-tour-o...
Sometimes the "any clickable area => make it a Windows control/button" works and sometimes it doesn't.
I talked with the programmer for the 16-bit Windows calculator app, calc.exe.
Any naive programmer with a first-reading of Charles Petzold's Programming Windows book would assume each button in the calculator app was an actual Windows button control.
Nope.
All those calculator buttons, back when Windows first shipped, used up too many resources.
So the buttons were drawn and the app did hit-testing to see if a button was mouse-clicked. see https://www.basicinputoutput.com/2017/08/windows-calculator-... for a pic of the 16-bit Windows calculator app.
Steven Jobs conveniently ignored the Start menu when discussing the competition. He probably secretly admired it, as it was a complete success story for Microsoft.
> Microsoft not having taste
the liquid glass designers (and probably their managers and design vps) should be repeatedly punched in the face with that video
What's doubly-insulting about Liquid Glass is that Windows Vista did the glass thing better. Aero rivaled mid-2000s Aqua in design chops, and in some ways did a better job of showing off what GPU compositing could do. But most importantly Microsoft actually understood that text on glass needs loads of background protection, damn it.
MS may not have been as tasteful as MacOS, but the functionality was at least there and it was easy to find and use. That goes a long way to make up for the bland-ish look.
Then we lost even more taste, and eventually the functionality and user-friendliness, on both sides of the aisle.
The windows 95 user interface was 'inspired by' the NeXT user interface, and to some degree the Mac UI. Microsoft had a NeXT computer to copy off, even though they wouldn't develop for it.
Exactly. Windows Cairo was planned to be a competitor to NeXTSTEP, and later, parts of it made it to Windows 95 and NT.
The "no taste" quote makes no sense given that Susan Kare did the many of the significant icons in Windows 95. She did the same for the Mac.
Agreed, especially since in Europe there was hardly any Apple presence.
It is no accident that to this day the demoscene is all about Spectrum, C64, CPC, MSX, Atari, Amiga, and PC, and there is hardly any retrogaming/demoscene focus on Apple hardware.
Regarding Windows, I would place Windows 95, NT 4.0, 2000 and 7 as my favourite UI flavour ones.
What made System 7 and 8 worse in some respects was that when they crashed, they crashed hard without warning.
With Windows the crash was progressive, so you had time to save and prepare.
I also have fond memories of Windows 2000. It was rock steady and polished. I preferred it over System 8 and even OS X, which had too many Unix conventions.
With System 7 or Mac OS 8/8.5/9, if one used it for long enough with a stable software setup you'd eventually get a gut feel for what programs, extension sets, etc were most likely to invite a crash (it wasn't a terrible idea to reboot after a long web browsing session with Netscape for instance). It wasn't surefire, but one could get it into a somewhat stable state. You never stopped hammering ⌘S, though.
Windows 2000 was incredible. Running it after having wrestled with 98SE was like getting teleported from a garbage dump to sunny meadow with a fresh ocean breeze. I've never seen machines transform quite as radically as they did when upgrading from something earlier to 2000.
I once proved to my boss that a font was crashing System 7. And we always unplugged the network when we didn’t need it because a crash on one Mac could bring down every other Mac on the network.
General protection errors and BSODs say hello, also to hit CTRL + ALT + DELETE to restart.
There's an entire OS that loves the 90s MSFT user interface: SerenityOS.
SerenityOS was born dead. Let me explain why.
No new OS today will ever be used by any significant number of people without 1) a working web browser and 2) hardware support for laptops, phones, wifi cards... you know... stuff people already have.
SerenityOS might get a working browser. Not very likely, but it might get it. The #2 condition will only be solved if it somehow "imports" Linux drivers or wrap Windows binary drivers in a compatibility layer (like Linux used to have for wifi).
Their policy to not use any external code or libraries is what will finally kill the project. It's simply not possible for them to rewrite any significant portion of drivers needed. Not even Linux can keep up and they have lots of contributors from the hardware industry.
They could probably make SerenityOS a VM-only OS. That could work. Run Linux as a HAL and SerenityOS as a UI on top. But then, why not write a complete Linux userspace to replace Gnu?
SerenityOS serves as a cool side project for those who like to tinker with OS dev. I don't think it was "born" with any other goals in mind. Neither was their browser project, it just happened to turn into something a lot more serious.
Amazing you say that, because I almost posted that comment in response to that same clip in another HN thread, for the same reason. There's a tight integration between style, performance, and design in Windows 95 and 98 that now feels more like "true" Windows than anything since.
I think Jobs was right about Microsoft later on, but they certainly had taste during their peak.
Performance started going downhill with Windows XP, and then even more with Windows Vista.
Modern Windows doesn't feel snappy anymore, even though we have the most powerful computers we've ever had.
Sometimes I use some old Win32 apps, and they feel so responsive and light...
But did you use 95 when you were young? I was using primarily MacOS at the time and always found windows particularly bad at everything, including UI/UX. I guess we like what we know…
I'm a huge fan of the book "Design for the Real World" by Victor Papanek. One of the things that he talked about is the importance of using materials honestly: not trying to pass plastic off as wood, using the given material to its best ability (even if it is plastic).
I've always thought the Windows 3.1 to Win2K era was exactly that. The medium is pixels on a screen, the mouse, and the keyboard. And there is no artifice; it's just the bare essentials.
2000 was peak except for them still having those tiny non-resizeable dialogs with long lists in them which you have to scroll horizontally and vertically. WTF? Your typical Linux DE was better at that even back then.
I have good news for you. Even Linux Mint MATE would make you happy again, let alone some of the Windows 95 look-alikes.
I generally agree, only that XP was okay in my opinion after one disabled all fluff so that it looked like 98SE.
It's no wonder XFCE and, to a lesser extent, MATE are popular; XFCE4 does a nice job of being a handy tool and not an in-your-face design manifesto.
> Microsoft Office 2007 ribbon
What a waste of screen real estate, IMO. The only reason it's still around is that screens are now 2X bigger, and screen real estate has become cheaper.
Windows 95 is a rip-off of NeXTStep
[dead]
Look how crisp, professional, and usable it all is.
This is a very good write-up. There's no way this level of testing and dedication could have resulted in the execrable shitshow that is Windows today.
Mac OS is going backward with accelerating speed, too. They had just started to recover from Jony Ive when they put a packaging designer in charge of UI... resulting in the "Liquid Glass" debacle, and all the other incompetent UI changes that accompanied Tahoe's rollout.
Ranting on UI: I think I might blame MS for this, but I feel like many customization shortcuts in apps and OSes are a net negative.
The first example I remember was ~2003ish, when MS Office did a big redesign and got much bigger toolbars. That they were big is a matter of taste, but that's not where I'm going with this. No, the issue was that they made it too easy to ACCIDENTALLY mess up the UI. They added all kinds of customization (which is fine) but then made it so that just dragging a little too long on a button would let you move the button somewhere else. So grandpa drags the button, possibly off the bar, deleting it, and now for all intents and purposes the app is unusable to him. IMO, the customization options should be buried deeper, where they can't happen by accident.
This "ACCIDENTAL" modification is all the rage now. On iPhone, holding on the lock screen puts the phone in "edit the lock screen mode". Several family members have asked why the image they put on the lock screen was gone. It was because they "butt edited the screen". Put the phone in their pocket and it felt a press and went into edit mode and edited the lock screen. AFAIK, almost no one needs this shortcut. It would be fine to just go into Settings->Wallpaper->Lockscreen or something like that. But, I'm just guessing (1) some UX designer needed something todo (2) someone working on lockscreen options got tired of doing the Settings->Wallpaper->Lockscreen dance and put in a shortcut that no-one but them needs.
This same issue is all over the place. The iPhone's lockscreen-while-charging mode has the same problem. The user (me) picks the clock face I want. And one out of ten times, when I reach for the phone on the charging stand, I accidentally touch the screen, which changes the face. I NEVER NEED THIS. Again, this should be buried in Settings->Lockscreen->Clock Face. The shortcut is a net negative.
There are many more.
> Put the phone in their pocket and it felt a press and went into edit mode and edited the lock screen.
This is why I hate the flashlight and camera buttons on the lock screen - which you can activate without unlocking. When you have your hands in your pockets during cold weather you’ll suddenly be ”filming”… I never use the camera on my phone anyway. Thankfully at some point they added support for removing them.
Apparently, the idea of an edit mode is some foreign concept for a lot of people.
There are a lot of UI concepts that are foreign to younger developers, simply because they grew up using web apps and smartphones. I think computer science departments need to make a class on human-computer interaction a mandatory part of the curriculum, and those classes need to require students to sit down with and actually use a variety of UIs from two, three, four decades ago. There's a ton of value in being conversant in the basic building blocks and paradigms of multiple UI systems, and in knowing what problems have been solved in the past so we don't keep badly reinventing the same features or failing to learn from the mistakes of the past.
There are a lot of things in older UIs that I think every developer should have hands-on experience with, eg. using nested menus in classic Mac OS; using an MDI application on Windows 9x; using the file browser and dock on NeXTSTEP; using X11 with focus follows mouse; anything with pie menus. Not because those things are necessarily the right choices for today's GUIs, but because there are valuable lessons to be learned from them, and reading an article like this or studying an old HIG document doesn't have the same impact.
To be fair, Apple has always had a penchant for removing important features because they don't like how they look. I cannot count how many times I got a CD/DVD stuck in a Mac, and due to a lack of physical eject button and the software eject button not working, resorted to the emergency eject sequences. Just put a button to eject the disk, ffs.
Apple was very early to remove floppy disk drives, then later DVD drives from their computers, even when those media were still commonly used. At least that fixed your problem of the stuck DVD :)
Apple has long been a "style over substance" company, unfortunately. Not always (I mean, you couldn't accuse the Apple II of being stylish for example), but certainly since the year 2000 at least. It has often meant that their products were less pleasant to use because someone refused to add functionality that wasn't as sleek-looking.
The Apple II was more stylish than any other personal computer in 1977.
In the mid-1980s, the Apple IIc and IIGS were built to Apple's "Snow White" design language and looked slicker than most contemporaries.
I hate liquid glass with a burning passion. I've never understood why people get so irritated at design changes until now.
Ugh I couldn't agree more. The new macos feels like a step backwards on many fronts. I'm going to delay updating my mac for as long as I can.
I wonder if its nearly time to say goodbye to the apple ecosystem. Those framework laptops look snazzy.
Sadly, it won't be the last time you'll feel that angry passion.
Welcome to the club. We all hate it here.
I like to jest that a packaging designer would, of course, wrap things in clear plastic...
GUIs used to be designed by power users, who would start with an advanced design and strip it down to a simple version the average user could use. Now GUIs are designed by average users who have no idea what to do with advanced features, because they're stuck thinking about the GUI as an average user does.
Power users understand many different levels. Beginner/average -> professional -> advanced -> power user. But the average designers nowadays only understand two things: average, and everything beyond that. This is why professional, advanced, and obscure features are all just one long-press away - they literally have no idea which category each feature falls into, so they're all equally valid.
Designers tend to be less open to feedback than developers. That, I think, helps explain why flat UI persists even though it has shown usability drawbacks. It also helps explain why overall usability feels like it's declining every year — for instance, macOS Tahoe seems noticeably worse in usability compared to macOS Sequoia. Does anyone think Apple is going to rush out a release that fixes the excessive rounding of window corners? Don't hold your breath.
On the topic of flat design specifically, developers are likely just as culpable. Back when it was just starting to catch on, by my observation some of the quickest to adopt it were solo developers because it's way easier to build a passable looking app with flat UI since that doesn't require any design talent.
A passable looking modern flat UI has a lot behind it, just like skeuomorphism and anything in between.
Unless something like https://kde.org/announcements/plasma/5/5.12.0/spectacle-noti... is what you consider to be passable looking of course.
That looks perfectly functional to me? It only looks a bit ugly because the screenshot appears to have been of a very small part of the screen that got blurry when it was blown up to a larger size.
I'll take function over form every day. (I daily drive KDE; it works fine and doesn't get in my way. Most of the time I'm either in my editor or the terminal emulator anyway.)
This is also a plasma 5 example. Plasma 6 cleaned it up.
But I also agree. KDE is pretty close to my ideal for a desktop environment. It's pretty close to a windows 7 feel which is perfect for me.
For reference, Windows' notifications look this way: https://www.lifewire.com/thmb/I4VO9qHrzphTHsZHU5eI73sLL9k=/7...
The screenshot you posted is likely from KDE Plasma. The project doesn't have much funding to hire a UI/UX designer, IMHO.
Once the windows become actually circles, or maybe some point along that path, they'll go back to square corners and congratulate themselves on how much better and innovative they are. It's just a stupid trend to keep rounding things more and more... I hope.
It's all just rearranging deck chairs at this point.
I feel like UX designers don't realize that their job should have a natural tailing off as we discover and lock in the good ideas and discard the bad. Even if the ideas aren't that great, users can at least get good at however it does work, if it stays constant. Instead, we just get more dice rolls, eyecandy, and frustration.
I for one hate the power dynamic that OS and website designers have over me. They can just sneak into my house and rearrange my furniture on a whim. Even if it sucks, I would adapt to it if it stayed constant! Instead I both hate it and can't learn it, because everything is different and keeps changing when I least expect it.
At this point my brain has given into learned helplessness and won't retain much of anything at all, but it's next-level figured out that it's useless.
Designers seem to have a bad track record, and it's getting worse.
Sorry, designers.
Part of the problem is that each generation of designers wants to leave their mark on the product - often by undoing the work of the last generation of designers. They're not entirely wrong. Design has fashions, like clothes. I enjoy that the industrial design of laptops and phones changes every few years. But good UX isn't good because it's fashionable. Good UX doesn't go out of date. They've gotta learn to stop fixing it when it's not broken.
E.g., MacOS's new system preferences panel is worse than the old one. And it's stupid putting the Windows start menu in the middle of the screen, where you can't as easily click it with the mouse.
[dead]
I think you might be confusing flat design with UI density. While they emerged as trends during a similar period, they are distinct concepts. You can have small flat elements or large skeuomorphic ones.
I don't think openness to feedback is the main metric, but rather ability to objectively measure outcomes. It's just harder to objectively measure usability than the presence or absence of a bug or performance problem.
Any user interface designer should take a good look at the controls on a commercial airliner. An awful lot of effort goes into making an intuitive, effective user interface. I have disagreements with it, but there's no denying it's very well done.
Designing a programming language is mostly about usability. I'll be giving a talk about that in April at Yale. It's a fun topic!
Looking forward to your talk.
I feel like there's a taste issue here, similar to tabs vs spaces or other coding styles. Some languages kind of solve this with auto-formatting, but just because they choose a standard doesn't mean their standard is as readable as some other one.
In languages, one taste issue comes to mind: many languages have an invisible scope issue.
foo = bar
In C++, for example, foo could be a local variable, a member of the enclosing class, a local module static, or a global. Some programmers like this; JBlow, for example, complained that in C++, switching between a standalone function, a member function, and a lambda required too many changes. (foo = bar isn't his example, but the point is he wants that to be frictionless.) Me, though, I want the line to be understandable with as little external context as possible. I don't want to have to dig up 10, 50, 100 lines to see if a local foo has been defined or if it's a member. So, like Python or TypeScript: I like that foo has to be this.foo or self.foo if you want to assign to the current object's member. Most programmers seem to agree, because they end up using mFoo or foo_ or some naming convention to work around the issue, but I think I'd prefer the language to enforce it.
I don't know which, if any, languages make all the different scopes more explicit.
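A minimal Python sketch of the contrast being described (all names here are made up for illustration):

```python
bar = "global"  # a global that happens to share a name

class Widget:
    def __init__(self):
        self.foo = "member"

    def update(self, bar):
        foo = bar        # unambiguously a new local; can never be the member
        self.foo = bar   # the member must be named explicitly via self
        return foo, self.foo

# In C++, the bare line `foo = bar;` inside a method could hit a local,
# a class member, a file-local static, or a global, depending on what
# happens to be in scope; here, every line carries its own context.
print(Widget().update("x"))
```

Naming conventions like mFoo or foo_ approximate this in C++, but only the language itself can enforce it.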
So far I haven't liked Swift, though, even though it is more explicit. Despite that explicitness in some areas, I feel like the majority of my time is spent typing boilerplate and fixing trivial syntax errors. I know programming requires syntax, and, as an example, I include semicolons everywhere in JavaScript even though they are not required. That said, I would like to get back all the time in my life I spent compiling some C++ only to be told "error: missing semicolon at end of class definition" or "error: extra semicolon at end of member function declaration". It feels like a language should fix this stuff for the dumb human rather than make the human do random tedious work. I get there might be times where it's ambiguous, but I wonder if that's also a language design issue.
I think you're going to enjoy my talk!
I tend to agree, but the FMC on Boeing aircraft sure leaves something to be desired. I do not find the menu/tab system very ergonomic (nor the non-QWERTY key layout).
Asking for a QWERTY keyboard in an FMC just screams that you have never been in a cockpit. You don't type into the FMC with both hands.
Ok, except operating a commercial airliner literally takes thousands of hours of training, requires an extremely detailed mental model for how air flight works, and heavily relies on external procedures like checklists to ensure safe operation.
And fatal accidents due to poorly thought out control systems do occur.
https://www.fastcompany.com/1669720/how-lousy-cockpit-design...
Also fwiw using the word "intuitive" is an instant sign of someone not being a great designer.
The 757 cockpit designers did use the word "intuitive" a lot.
The control stick movements, for example, are intuitive. (Early aircraft did not have control sticks!)
For a crazy example, airplane jargon has specific meanings. "Takeoff Power" officially means full power to take off with. Makes intuitive sense, right? Well, one day the pilot needed to abort a landing, and yelled "takeoff power". The copilot heard "take off power" (note the space), chopped the power, and the airliner crashed. The jargon was changed to "full power".
The Air Force, however, had their own jargon and stuck with "Takeoff Power", until one day they had the same accident and changed the jargon.
For another example, the levers for the flaps have a knob on them shaped like a flap. This way, the pilot has tactile feedback that his hands are on the right lever, and he doesn't need to take his eyes off his other tasks.
For a third example, cockpit designers put in a warning horn for a stall warning. It worked great, and so they put in other warning horns, each with a distinct sound. Unfortunately, the pilots would confuse them, and do the wrong thing. So the "horn" is now a voice that says "stall" (or something like that).
Using words for aural indicators still has not percolated out of the aviation industry. You don't have any for your car, for example. Just chimes, beeps, buzzes, and other primitive and hopeless sounds. Oh, lest I forget to mention, the stupid incomprehensible icons.
My design is a cli as API+control plane which allows out of order and aliased tokens via intent resolution to IR. The GUI and CLI are homoiconic in that one builds the other or vice versa. When you layer on a nice UI library with intuitive controls, now you’re cookin’ with gas.
Microsoft dumped $100 million on this huge marketing campaign with a simple question: “Where do you want to go today?”
I love it. It really captures the seemingly endless new digital world that was emerging in the 90’s and in many ways is still evolving 30 years later.
I love the promo video they made too: https://youtu.be/KNLDLVJZx0o
I love it so much I wrote a blog post inspired by it: https://catskull.net/where-do-you-want-to-go-today.html
Where do you want to go today?
As I've joked about before, their slogan now has turned into "where do we want you to go today?"
1994: Where do you want to go today?
2014: Where do we want to go today?
2024: Here's where we're going today.
That joke's been around since Microsoft tried locking people into Internet Explorer, so ~30 years. Microsoft's been Microsofting for at least that long, Satya hasn't changed that.
When Microsoft first established a web presence (1994 was probably the year, if not 1993), www.microsoft.com showed "Welcome to Microsoft's web site. Where do you want to go today?" followed by a list of destinations throughout their site. They promoted the second of those sentences to their official slogan.
> Where do you want to go today?
Not fussed. It's my information that I can't keep close to home.
If you want a true lesson on design, check out Ask Tog, starting here:
https://asktog.com/atc/principles-of-interaction-design/
Tog was the original design engineer for the Mac, and arguably one of the first true HCI engineers.
Then read the rest of his website. He goes into where Windows tried to copy Mac and got it horribly wrong.
One of my favorite examples is menu placement. The reason the Mac menus are at the top is because the edges of the screen provide an infinite click target in one direction. So you just go to the top to find what you want. With Windows, the menu was at the top of each Window, making a tiny click target. Then when you maximized the window, the menu was at the top, but with a few pixels of unclickable border. So it looked like the Mac but was infinitely worse.
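The "infinite click target" argument is Fitts's law in action: in the Shannon formulation, predicted pointing time is MT = a + b * log2(D/W + 1) for a target of width W at distance D. A rough sketch, with hypothetical device constants (the values of a and b below are made up, not measured):

```python
import math

A, B = 0.1, 0.15  # hypothetical device constants, in seconds

def movement_time(distance_px: float, width_px: float) -> float:
    # Fitts's law, Shannon formulation: MT = a + b * log2(D/W + 1)
    return A + B * math.log2(distance_px / width_px + 1)

# A 20px-deep menu bar 400px away, floating inside a window:
in_window = movement_time(400, 20)

# The same menu pinned to the screen edge: the cursor stops at the edge no
# matter how far you overshoot, so the effective target depth balloons:
at_edge = movement_time(400, 2000)

print(in_window > at_edge)  # the edge-pinned menu is predicted to be faster
```

The edge doesn't literally make W infinite, but it makes overshoot impossible in one direction, which is why the top-of-screen menu wins on this metric.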
If you're making a UI, you should read all of Tog's writings.
I understand the Fitts's law concepts behind a top menu bar, but I wonder if this is a scenario with moving goalposts.
On a 1984 Mac, you had 512x342 pixels and a system that could barely run one program at a time. There was little to no possible uncertainty as to who owned the menu bar. (Could desk accessories even take control of the menu bar?)
But once you got larger resolutions and the ability to have multiple full-size programs running at once, the menu bar could belong to any of them. Now, theoretically, you should notice which is the currently active window and assume it owns the menu bar, but ISTR scenarios where you'd close the window but the program would still be running, owning the menu bar, or the "active" window was less visually prominent due to task switching, etc.
The Windows design-- placing the menu inside the window it controls-- avoids any ambiguity there. Clicking "File-Save" in Notepad couldn't possibly be interpreted as trying to do anything to the Paintbrush window next to it.
The problem with the Mac UI is that the app's menubar can only be accessed by the mouse (can't remember what accessibility-enabled mode would allow).
Under Windows, one can access the app's menubar by pressing the ALT key to move focus up to the menubar and using the cursor keys to navigate along it. If you know the letter associated with a top-level menu (shown as underlined), then ALT-[letter] accesses that menu (typically ALT-F gets you to the File menu). So the Windows user wouldn't have to move the mouse at all; Fitts's law to the max (or is it min? whatever, it's instant access).
For the ultrawide monitors these days (width >= 4Kpx), if you have an app window maximized (or even spanning more than half the screen), accessing the menu via mouse is just terrible ergonomics on any major OS.
Since OS X 10.3 (2003), Control+F2 moves focus to the Apple menu. The arrow keys can then navigate to any menu item, which is activated with Return or canceled with Escape. Command+? brings you to a search box in the Help menu. Not only that, any menu item in any app can be bound to any keyboard shortcut of the user's choosing, not just the defaults provided by the system or application.
AFAIK Windows 3.x flipped a bunch of Mac decisions to avoid being sued and then MS felt that they had to keep those choices forever for backwards compatibility.
And in my experience, when people move from Windows to the Mac, they're so annoyed that there are differences. When I explain that these were present on the Mac long before Windows, people start to understand.
> So it looked like the Mac but was infinitely worse.
On single-monitor setups, maybe: but on early OS X multi-monitor setups, you then had the farcical situation where the menu would only be shown on the "primary" display, and the secondary display didn't have any menu at all, so to use menus for windows that were on the secondary display, you had to move the cursor over to the primary display, where the menu was for all windows (or use keyboard shortcuts).
I think 10.6/7 (not sure exactly) was when they started putting the menu bar on both displays rather than just the primary.
> So it looked like the Mac but was infinitely worse.
"Infinitely worse"? Some people really need to cool off the hyperbole.
Having each window be a self-contained unit is the far better metaphor than making each window transform a global element when it is selected. As well as scaling better for bigger screens. An edge case like that may well be unfortunate, but it could be the price you pay to make the overall better solution.
That was the point of Tog's conclusion: edges of the screen have infinite target size in one cardinal direction, corners have infinite target size in two cardinal directions. Any click target that's not infinite in comparison, has infinitely smaller area, which I suppose you could conclude is infinitely worse if clickable area is your primary metric.
This wasn't just the menu bar either. The first Windows 95-style interfaces didn't extend the start menu click box to the lower left corner of the screen. Not only did you have to get the mouse down there, you had to back off a few pixels in either direction to open the menu. Same with the applications in the task bar.
The concept was similar to NeXTSTEP's dock (which was even licensed by Microsoft for Windows 95), but missed the infinite-area aspect that putting it on the screen edge allowed.
The infinitely worse part was when you maximized the window so the menu bar was at the top, but Windows still had the border there, which was unclickable.
So now you broke the infinite click target even though it looked like it should have one.
You can generalize this observation to a lot of Microsoft's decisions: a problem exists, so they solve it in a nifty way, but one that makes everything else harder or more error-prone. An example: the byte order mark. It does solve the problem of UTF-16 and UTF-32 byte order determination. But it makes every other use of what should be a plain stream of bytes or words much harder. Concatenate two files? Gotta check for the BOM on both. Now every app has to look at the first bytes of every "text" file it opens to decide what to do. Suddenly, "text" files have become interpreted, and thus open to security vulnerabilities.
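The concatenation hazard is easy to demonstrate in a few lines of Python (the `cat` helper below is hypothetical, just to show the special-casing every BOM-aware tool ends up writing):

```python
# Python's "utf-8-sig" codec prepends the UTF-8 BOM (EF BB BF) on encode.
a = "hello\n".encode("utf-8-sig")
b = "world\n".encode("utf-8-sig")

# Naive byte concatenation leaves the second file's BOM mid-stream, where
# it decodes as a zero-width no-break space buried inside the text:
naive = a + b
print(b"\xef\xbb\xbf" in naive[3:])  # a stray BOM survives past file one

def cat(*parts: bytes) -> bytes:
    # BOM-aware concatenation: keep the first file's BOM (if any),
    # strip it from every subsequent file.
    bom = b"\xef\xbb\xbf"
    head, rest = parts[0], parts[1:]
    return head + b"".join(p[len(bom):] if p.startswith(bom) else p for p in rest)

print(cat(a, b).decode("utf-8-sig"))
```

That per-file check is exactly the "every app has to look at the first bytes" tax described above.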
At Comdex 1996, Dell (or some company) set out Windows 95 PCs for the public to mess with. Having used only 3.11 before, I was fascinated with the desktop and also found it very strange that the contents of the UI were so minimal.
Of course, I didn't discover anything else: I was afraid of clicking "Start", because I didn't know what that was going to start, and the computer wasn't mine to brick.
Hmm. I like the simplicity compared to Win10 or the abomination that is Win11. But it is hard to compare 1:1, because the modern UI also improved in some ways and degraded in others. Microsoft does not really seem to understand how to design UIs anymore, though, or they simply don't care. I am using Linux most of the time, so I don't really depend on Microsoft anymore, but when I use an MS-specific UI I often wonder why some things are simply not thought through at all. The ribbon interface is an example; my brain cannot deal with dynamic willy-nilly changes. It just adds cognitive load. Why isn't it easier to modify the classic interface? In modern HTML/CSS we can filter away things we don't need; I do that with uBlock Origin all the time.
I think Windows 95/2000 and the contemporary MacOS (including the then future MacOS X) have the best UI in everything I used in my 30+ years of tech life.
I sincerely hope that one day we can go back down that road. If you want that achieved, please support me in joining Apple/Microsoft to become the UI boss, fire all the flat-design people, and hire a small team to implement the older UI, then give a few passionate talks at EDX and conferences so the people who supported flat UI magically support the older UI. They always follow whoever leads, like headless flies.
LOL.
> I think Windows 95/2000 and the contemporary MacOS (including the then future MacOS X) have the best UI in everything I used in my 30+ years of tech life.
Agreed. I do wonder how much of it is personal, in that that UI hit at a certain formative time in my life. But ever since then it's been the benchmark that I evaluate all other UIs by. The lack of a "classic" mode in Win10 was one thing that motivated me to switch fully to Linux. To make the switch, I spent a good amount of time trawling the themes to find one that mimics the look of Win95/98/2000. (The one I use is a KDE theme called "Reactionary".)
> I do wonder how much of it is personal, in that that UI hit at a certain formative time in my life. But ever since then it's been the benchmark that I evaluate all other UIs by.
I know some of my preferences for UIs are informed by what I first really learned how to use. But I also have preferences that are informed by decades of heavy computer use.
I despise UI widgets that just look like the window background, with no borders or shadows. I can't stand massive amounts of useless white space. UI widgets don't require oxygen to survive, so they don't need to fucking "breathe" that much. I also despise mystery-meat UIs that change their arrangement because I clicked one button more often than another.
Everything that increases my cognitive load and doesn't allow me to build up muscle memory in a UI is supremely frustrating. I might like the "look" of Mac System 7, it was a great intersection of functional and whimsical in my opinion. The consistent behaviors and learnable interface go beyond subjective visual appeal however.
Yep. I always cite XP as being Windows's peak, but I forgot that it shipped with their insulting Fisher-Price motif enabled by default. Step 1 was to switch the UI to "classic" (essentially Windows 95) mode, and all was well.
Windows 95 is a great case study because with that release, Microsoft did more for GUIs than Apple did through the entire decade of the '90s... and beyond.
All of it is now out the window (pun invited). It's a race to the bottom between Microsoft and Apple, with Microsoft having a HUGE head-start. But Apple has really stepped up to the plate with Tahoe, crippling it with big enough UI blunders to keep them in the enshittification game.
XP's early betas shipped with a slightly upgraded 9x interface called Watercolor [1], and if they'd kept it, surely the majority would have picked it over plastic Luna.
Early experiments with a totally new theme were rather unpleasant [2], and Watercolor was abandoned in favor of the more familiar 9x-looking theme as an option. W11 still comes with that old 9x widget look - slightly flattened because of that trend, but it's still there, buried beneath for compatibility reasons. And I'm pretty sure they won't get away from it the way Apple moved from Platinum to Aqua.
[1] - https://betawiki.net/wiki/Watercolor
[2] - https://betawiki.net/wiki/Windows_XP_build_2416#Gallery
I always installed Watercolor on a new computer. It's still beautiful, and it's definitely the look they should have chosen; it played to their strengths.
I think they were so caught off guard by how incredible Mac OS X _looked_, that they didn't realize it wasn't just veneer, but a genuine evolution and improvement of how Mac OS _worked_. This became Apple's competitive advantage for over a decade as Microsoft chased different styles while consistently botching how it would impact usability.
I really liked XP (and 7) because for me, having a capable theming engine built in that didn't take a ton of extra resources or cause instability (unlike Stardock's WindowBlinds) was a real value add. There were some absolutely gorgeous third party XP/Vista/7 themes on sites like DeviantArt that worked extremely well within the limits of the engine, had a unique look and feel, and were just as usable as the "classic" theme.
When MS gutted the theming engine with the release of Windows 8 (flat rectangles only) I was devastated.
The engine itself isn't gutted - it's full of functionality that was never lost. MS just (correctly) reasoned that transparency effects in the UI - introduced in Vista simply to show off the capabilities of the DWM compositor - ultimately detract from a good UI.
From what I remember it lost the ability to render rounded window corners, because while Windows 8 msstyle themes existed they all had the hideous boxed corners that clashed hard with many looks.
I don’t agree that transparency is always a detractor. Judicious use can be a net positive, but it doesn’t work for all themes and there should be an option to turn it off. Personally I didn’t find the W7 variation of Aero to be bad at all.
> From what I remember it lost the ability to render rounded window corners,
...I'm guessing you haven't used Windows 11?
--------
By "rounded corners" are you referring to rounded-off corners in the nonclient area (such that the hWnd's rect is not clipped at all)? If so, then no: those would be rendered using a 9-grid[1] and have always been supported.
If you're referring to how so many fan/community-made msstyles for Windows 10 retain the sharp corners, my understanding is that it's not a limitation of DWM or msstyles; it's just that defining nontrivial corners in an msstyles theme takes a lot of legwork. It can be done (there are plenty of examples online, e.g. look for Windows XP's style ported to Windows 10); most people just don't go that far.
-----
[1] In msstyles, the 9-grid defines how a rectangular bitmap is stretched/scaled/tiled to fill a larger area; it's very similar to how CSS image borders are defined with `border-image-slice`.
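For the curious, the slicing itself is easy to sketch. Here's a minimal illustration of the 9-grid idea in Python - corners keep their size, edges stretch along one axis, the center stretches both ways. The margins and sizes below are made up for illustration; real msstyles store this in the theme data.

```python
def nine_grid(src_w, src_h, dst_w, dst_h, left, top, right, bottom):
    """Map a source bitmap onto a larger destination area using a 9-grid.
    Returns (src_rect, dst_rect) pairs, each rect being (x, y, w, h)."""
    # Column starts/widths in the source and destination: the left and
    # right margins are fixed, only the middle column stretches.
    sxs = [0, left, src_w - right]
    sws = [left, src_w - left - right, right]
    dxs = [0, left, dst_w - right]
    dws = [left, dst_w - left - right, right]

    # Same for rows: fixed top and bottom bands, stretchy middle.
    sys_ = [0, top, src_h - bottom]
    shs = [top, src_h - top - bottom, bottom]
    dys = [0, top, dst_h - bottom]
    dhs = [top, dst_h - top - bottom, bottom]

    cells = []
    for row in range(3):
        for col in range(3):
            src = (sxs[col], sys_[row], sws[col], shs[row])
            dst = (dxs[col], dys[row], dws[col], dhs[row])
            cells.append((src, dst))
    return cells

# A 32x32 bitmap with 8px margins stretched to 200x100: the four corner
# cells keep their 8x8 source size in the destination.
cells = nine_grid(32, 32, 200, 100, 8, 8, 8, 8)
```

The four corner cells come out unscaled, which is why a rounded corner drawn into the bitmap survives any window size.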
I’m speaking specifically about Windows 8/8.1. Obviously 11 and the new Fluent design language it brought don’t suffer the same issue.
Whatever the case, rounded corners on the titlebars and window chrome were common in XP/Vista/7 custom msstyles but were nowhere to be seen for 8/8.1 custom msstyles. It was one of the most frustrating aspects of that era of Windows for me.
Hmm, yes; I think you're right. I honestly don't know the explanation behind that, sorry.
After nearly 30 years of tech life myself, I've come to the realization that the best UIs are not graphical. They can have graphical elements mostly for visualization purposes, but all of them should be as minimal and unobtrusive as possible. Any interactivity should be primarily keyboard-driven, and mouse input should be optional.
Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.
A good example of bad UI that drives me mad today on Windows 11 is something as simple as resizing windows. Since the modern trend is to have rounded corners on everything, it's not clear where the "grab" area for resizing a window exists anymore. It seems to exist outside of the physical boundary of the window, and the actual activation point is barely a few pixels wide. Apparently this is an issue on macOS as well[1].
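The ambiguity is easy to state in code. Here's a hypothetical hit test in Python, where the draggable band straddles the window edge; with rounded corners the visible edge no longer matches this logical band, which is exactly the confusion described above. All the widths are invented for illustration (Windows actually answers this question via nonclient hit-testing).

```python
BORDER = 4  # half-width of the grab band, in pixels (made-up value)

def hit_test(x, y, win):
    """Classify a point against a window rect (left, top, right, bottom).
    Returns e.g. 'left', 'bottomright', 'client', or 'nowhere'."""
    left, top, right, bottom = win
    # The grab band extends BORDER pixels *outside* the window rect,
    # which is why you can grab an edge you can't see.
    if not (left - BORDER <= x <= right + BORDER and
            top - BORDER <= y <= bottom + BORDER):
        return "nowhere"
    vert = "top" if y <= top + BORDER else "bottom" if y >= bottom - BORDER else ""
    horz = "left" if x <= left + BORDER else "right" if x >= right - BORDER else ""
    return (vert + horz) or "client"

win = (100, 100, 500, 400)
hit_test(98, 250, win)   # 'left' - outside the visible edge, still grabbable
hit_test(300, 250, win)  # 'client'
```

With an opaque, square border the whole band is visible; with rounded, borderless chrome the user has to discover those few invisible pixels by trial and error.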
Like you, I do have a soft spot for the Windows 2000 GUI in particular, and consider it the pinnacle of Microsoft's designs, but it still feels outdated and inefficient by modern standards. The reason is that it follows the visual trends of its era, and it can't accommodate some of the UX improvements newer GUIs have (universal search, tiled/snappable windows, workspaces, etc.).
So, my point is that eschewing graphics as much as possible, and relying on keyboard input to perform operations, gets rid of the graphical ambiguities, minimizes trend-following (making the UI feel timeless), and makes the user feel more in command of their experience, making them more efficient and quicker.
This UI doesn't have to be some inaccessible CLI or TUI, although that's certainly an option for power users, but it should generally only serve to enable the user to do their work as easily as possible, and get out of the way the rest of the time. Unfortunately, most modern OSs have teams of designers and developers that need to justify their salary, and a UI that is invisible and rarely changes won't get anyone promoted. But it's certainly possible for power users to build out this UI themselves using some common and popular software. It takes a bit of work, but the benefits far outweigh the time and effort investment.
The issue with this type of design is that it completely tanks discoverability. Every visual UI element trimmed is another pit of confusion for less-technical computer users.
Modern UIs aren't great at discoverability either, however, and aren't an example that should be followed.
> The issue with this type of design is that it completely tanks discoverability.
There are still ways to help, such as having a menu bar, and having good documentation. (Documentation is more important, in my opinion; but both are helpful.)
That's not necessarily the case. In fact, if implemented well, keyboard/command-driven UIs can be much easier to discover than GUIs.
Consider the "Command Palette" and similar features that are part of many UIs (VS Code, Obsidian, Vim, Emacs, etc.). It allows the user to search all possible actions using natural language, and see or assign key bindings to them, so that they can get to their most commonly used actions faster. This search can be global for the entire program, or contextual for the current view.
It is far easier to search for what you want to do than to learn what action every GUI element is associated with, or to navigate arbitrarily nested menu hierarchies. This does require the user to be somewhat familiar with the domain language in order to know what to search for, but this too can be simplified: actions can have different names, etc. It also makes the program more accessible for speech navigation, screen readers, and so on.
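The core of such a palette is tiny. Here's a hypothetical sketch in Python - the action names and key bindings are invented, and real implementations (VS Code etc.) use fuzzy matching rather than this crude word filter:

```python
# A made-up action table: command name -> current key binding.
actions = {
    "File: Save":           "Ctrl+S",
    "File: Save As...":     "Ctrl+Shift+S",
    "View: Toggle Sidebar": "Ctrl+B",
    "Window: Close Editor": "Ctrl+W",
}

def palette_search(query, actions):
    """Return (name, binding) pairs whose name contains every query word,
    in any order - a stand-in for real fuzzy matching."""
    words = query.lower().split()
    return [(name, key) for name, key in actions.items()
            if all(w in name.lower() for w in words)]

# Word order doesn't matter: "save file" finds both save commands,
# and each result carries its binding so the user can learn the shortcut.
matches = palette_search("save file", actions)
```

Because every match displays its shortcut, the palette doubles as the discoverability mechanism: you search once, then learn the faster path.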
> After nearly 30 years of tech life myself, I've come to the realization that the best UIs are not graphical. They can have graphical elements mostly for visualization purposes, but all of them should be as minimal and unobtrusive as possible. Any interactivity should be primarily keyboard-driven, and mouse input should be optional.
I agree, that the interactivity should be primarily keyboard-driven. However, mouse input is useful for many things as well; if there are many things on the screen, the mouse can be a useful way to select one, even if the keyboard can also be used (if you already know what it is, you can type it in without having to know where on the screen it is; if you do not know what it is, you can see it on the screen and select it by mouse).
> Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.
At least older versions of Windows had a more consistent way of indicating some of these things; although they sometimes did not work very well, they often worked OK. (The conventions for doing so could have been improved, but at least there were some that, at least partially, worked.)
> A good example of bad UI that drives me mad today on Windows 11 is something as simple as resizing windows. ... it's not clear where the "grab" area for resizing a window exists anymore
I have just used ALT+SPACE to do stuff such as resize, move, etc. I have not used Windows 11, so I don't know if it works there, but I would hope it does if Microsoft wants to avoid confusing people. On other, older versions of Windows, even when they moved everything around, I could get by because most of the keyboard commands still work the same as in earlier versions - for example, you can still press ALT+TAB to switch between full-screen programs and ALT+F4 to close one (I don't know whether there is any other way to do such things). Even so, many of the changes cause confusion or other problems, and they removed useful things in favor of less useful or worthless ones.
> Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.
There are standards and common conventions for a lot of this in the Windows 9X/2000 design language, and even in basic HTML. These challenges could have been solved (for some values of "solved") by using them consistently, and I think we were there for a little while, at least within the Windows bubble. The fact that we threw all of that out the window with a new and worse design, then did it again a few more times just to make sure users learned never to bother actually learning the UI (since it will just change on them anyway), doesn't mean this is an unsolvable problem. Well, it might be now, but I doubt it was back in 1995.
> Like you, I do have a soft spot for the Windows 2000 GUI in particular, and consider it the pinnacle of Microsoft's designs, but it still feels outdated and inneficient by modern standards. The reason for this is because it follows the visual trends of the era, and it can't accomodate some of the UX improvements newer GUIs have (universal search, tiled/snappable windows, workspaces, etc.).
I fail to see why any of these features couldn't be implemented within the design constraints of the Windows 9X/2000 design language. There are certainly technical constraints, but I can't see any design constraints. They were never implemented at the time, and those features didn't become relevant until we'd gone through several rounds of different designs, so we never had the opportunity to see how it would work out.
> There are standards and common conventions for a lot of this in the Windows 9X/2000 design language, and even in basic HTML. These challenges could have been solved (for values of) by using them consistently [...]
The thing is that GUIs naturally have to evolve to cater to their user base. The "office" metaphor was useful in the 1980s and 90s for making computing familiar to people who were used to "desktops", "folders", "files", etc. Some of these terms still exist today, but the vast majority of users can't relate to it, so it's meaningless to them.
This is why GUIs will always have to change and adapt to trends, which will always cause friction for existing users.
My point is that by minimizing the amount of graphical elements (note: not completely eliminate them), we minimize the amount of this friction. The difficult thing is, of course, maintaining the appropriate balance of all elements while optimizing for usability, which is ultimately very subjective.
But consider that CLIs are effectively timeless. The friction comes from their lack of discoverability, arcane I/O, every program can have a different UI, etc. And yet this interface has persisted and has largely remained the same for decades. Most programs rarely change their CLI, so the user only needs to learn a few commands to be productive.
So I think that the most usable UI is somewhere in the middle. It should avoid the constant churn of GUIs, and be more accessible than CLIs. This is possible to build for power users, but it can also be made approachable for less technical users.
> I fail to see why any of these features couldn't be implemented within the design constraints of the Windows 9X/2000 design language.
That's true. But then again, what exactly is the Windows 9x/2000 design language, and what makes it better than the modern Windows GUI? Is it the basic Start Menu? The task panel with blocks for each window instead of icons? The square instead of round windows? The lack of smooth transitions, transparency, and graphical effects? The overall brutalist theme?
We can certainly add all the features I mentioned to Windows 9x/2000, and we had some of them even back then via 3rd party tools, but isn't that essentially what modern Windows has become? There are ways to revert some Windows 11 features today with alternative shells and such, so is that the ideal UI then?
When I think of Win2k, I think of the overall simplicity. This is mostly due to nostalgia rather than any practical reason. I'm sure that I couldn't stand using its barebones UI today, as much as I would enjoy the simplicity for a brief moment.
> The thing is that GUIs naturally have to evolve to cater to their user base. The "office" metaphor was useful in the 1980s and 90s for making computing familiar to people who were used to "desktops", "folders", "files", etc. Some of these terms still exist today, but the vast majority of users can't relate to it, so it's meaningless to them.
We still 'dial' with our phones, even though phones haven't had dials in over 50 years by this point. Nobody would even explain phones using that metaphor anymore. Even just having a foundation of common terminology is helpful in teaching people new systems.
> This is why GUIs will always have to change and adapt to trends, which will always cause friction for existing users.
I fail to see the connection.
> My point is that by minimizing the amount of graphical elements (note: not completely eliminate them), we minimize the amount of this friction. The difficult thing is, of course, maintaining the appropriate balance of all elements while optimizing for usability, which is ultimately very subjective.
This is true in today's world, but not necessarily in a world where the UI language of computers is stable and users can trust their computers not to change out from under them and render their understanding of the system obsolete. If all buttons had the same hints to tell a user 'I'm a button', in the same way default HTML links tell users 'I'm a link', then we could trust users to have this understanding.
> But consider that CLIs are effectively timeless. The friction comes from their lack of discoverability, arcane I/O, every program can have a different UI, etc. And yet this interface has persisted and has largely remained the same for decades. Most programs rarely change their CLI, so the user only needs to learn a few commands to be productive.
It's remained true in a small niche of power users, while for the rest of the world, this environment might as well not exist (beyond the functionality it provides to them after it's been filtered through several layers). CLIs are an irrelevant dead-end in the story of user-accessible design; one that probably holds some lessons, but not one to entertain in any serious manner.
> That's true. But then again, what exactly is the Windows 9x/2000 design language, and what makes it better than the modern Windows GUI? Is it the basic Start Menu? The task panel with blocks for each window instead of icons? The square instead of round windows? The lack of smooth transitions, transparency, and graphical effects? The overall brutalist theme?
Yes.
> We can certainly add all the features I mentioned to Windows 9x/2000, and we had some of them even back then via 3rd party tools, but isn't that essentially what modern Windows has become? There are ways to revert some Windows 11 features today with alternative shells and such, so is that the ideal UI then?
The classic theme survived up until Windows 7, and I'll give that a pass, since although there still are holes where the newer design language of Windows peeks through, it's stayed mostly consistent, and even managed to add new features without breaking the design language to fit them.
Then that died with Windows 8, and there's been no hope for consistency in UI language since. The dream of a casual user being able to learn a UI and stick to it is dead, since even if they do, it will just change out from underneath them. That's why they don't even bother. Heck, even I barely bother.
> I'm sure that I couldn't stand using its barebones UI today, as much as I would enjoy the simplicity for a brief moment.
I disagree. I don't use many modern UI features, and the few that I do use, like snappable windows, are things I can imagine working within the old design language. I still write documents using a copy of Word 2000 in a Win2K VM every now and then, and when I don't use that, I use LibreOffice, a program many people refuse to use because it looks ancient to them. That's a feature for me. It not changing and thus not breaking my workflow is a huge feature that nothing in Windows 11 can even hope to compare with.
Seeing “Windows” and “usability” in the same sentence is a surprising combination to me.
> Perhaps the best testament to our belief in iterative design is that literally no detail of the initial UI design for Windows 95 survived unchanged in the final product.
I shudder to imagine the look and feel of that initial UI design.
The Windows 3.1 UI example screenshots are a reminder of how primitive 3.1 felt compared to other OSes of the time.
The need for instructions in that Search dialog is appalling from a usability perspective.
When Win95 was released, it was widely seen as Microsoft finally catching up with its rivals. They had at last added features that Mac, NeXTSTEP, Amiga, etc had had for years.
"Intermediate users could get around in the hierarchy, but often just barely, and usually saved all of their documents in the default directory for the program they were using."
You should see my Documents folder.
Past discussion:
Thanks! Macroexpanded...
The Windows 95 User Interface: A Case Study in Usability Engineering (1996) - https://news.ycombinator.com/item?id=12330899 - Aug 2016 (72 comments)
This part stands out to me:
> The Windows 95 user interface design team was formed in October, 1992... The number of people oscillated during the project but was approximately twelve. The software developers dedicated to implementing the user interface accounted for another twelve or so people
I still don't understand what happened starting around 2010-ish (from my observations at the time) that we went from being able to handle a company's worth of software with 30 people, to needing 30 people for every individual project. Startups with minor products had team-pages with 15 people.
From what I remember, the Windows NT 3.1 kernel team had about 50 people, and by 4.0 it was about 200. And then there were the application writers. It was definitely a lot more than just a few dozen.
Those numbers are UI only. 12 just to design it, another 12 to build it. That's not counting the vastly larger number of developers who built all the various elements of the underlying codebase.
Team bloat is a real issue but I don't think this case is relevant.
Microsoft had thousands of people working on Windows. Sun Microsystems had thousands of people working on Java.
Microsoft had around 5k people in R&D in 1995, and that covered the full product range: Win95, NT, Office, VisualC, SQL Server, and all the other stuff.
Yes, and with all these huge, siloed teams you end up with no consistency even within a single app.
The current WinUI, WinAppSDK, and Windows 11 teams should have a weekend retreat going through that article.
I can't wait until this win95 nostalgia phase stops and my nostalgia for actual good UI begins - WinXP for the win
It was a good step forward. Perhaps with the exception that you had to click "Start" in order to shut down the computer.
Everything since this style of design feels like a cartoon version, with ridiculous nonsense that only gets in the way.
Notice how they moved the OK & Cancel buttons to the bottom right, since it's the more logical location for them.
Meanwhile gtk now puts those on opposite sides of the window title bar by default.
Separating them is good for avoiding misclicks.
Decades ago, MacOS properly had the close box for windows on the opposite side from the minimize etc. widgets, so the one destructive window action could be reasonably safe without confirmation. Then Windows started gaining popularity and nobody ever did it the right way by default again. A pity, given the sharp minds at Xerox PARC.
Command Q and Command W are still beside each other though
I don't mind ok and cancel being on opposite sides. It's mainly ok not being bottom-right that bothers me.
The loss of X to close programs is sad. I don't like the new design philosophy of clicking away the card to close things.
i3 makes a lot more sense they should have just gone with that
Usability is the wrong metric: paint-by-numbers is more "usable" (read: accessible) than a blank canvas, but you'd be depressed watching your son graduate art school if that's all he can do.
If you do want to optimize for usability, you have to make sure you aren't making the system more consumptive at the same time. The prime example from the article is trading a moment where the user must take initiative for a menu. More usable, less useful. Lower the floor, not the ceiling, etc. Windows (and iOS) did make genuine improvements to OSs, but because of decisions like these, most users are locked out of enjoying them.
Wasn't Windows 95 just a copy of Windows NT, which was the real product?
No, Windows NT until 4.0 had the same interface design as Windows 3.x (although there existed a semi-official SP/addon to give NT 3.5 the Chicago interface, making it quite similar to 95), and NT 4.0 came later than 95
From Raymond Chen's Old New Thing:
How did the Windows 95 user interface code get brought to the Windows NT code base?
https://devblogs.microsoft.com/oldnewthing/20251028-00/?p=11...
Both OS lines were developed concurrently up until the XP release, when the DOS-based 9x line was abandoned and NT became the basis for every subsequent product. Plus of course there's that whole part of the story where MS teamed up with IBM and worked on OS/2.
NT got the new 9x shell with the 4.0 release, but a beta package could be installed on 3.51 as well - though that could cause some compatibility issues.