
Mon, Oct. 24th, 2016, 06:35 pm
Playing "what if" with the history of IT

Modern OSes are very large and complicated beasts.

This is partly because they do so many different things: the same Linux kernel is behind the OS for my phone, my laptop, my server, and probably my router and the server I'm posting this on.

Much the same is true of Windows and of most Apple products.

So they have to be that complex, because they have to do so many things.

This is the accepted view, but I maintain that this is at least partly cultural and partly historical.

Some of this stuff, like the claim that “Windows is only so malware-vulnerable because Windows is so popular; if anything else were as popular, it’d be as vulnerable”, is a pointless argument, IMHO, because, lacking access to alternate universes, we simply cannot know.

So instead, let us consider, as an imperfect parallel, the industry’s own history.

Take Windows in the mid-to-late 1990s as an example.

Because MS was busily developing a whole new OS, NT, and it couldn’t do everything yet, it was forced to keep maintaining and extending an old one: DOS+Win9x.

So MS added stuff to Win98 that was different to the stuff it was adding to NT.

Some things made it across, out of sync…

NT 3.1 did FAT16, NTFS and HPFS.

Win95 only did FAT. So MS implemented VFAT: long filenames on FAT.

NT 3.1 couldn’t see them; NT 3.5 added that.
Then Win 95B added FAT32, which no contemporary NT could read; native FAT32 support only arrived in the NT line with Windows 2000.

Filesystems are quite fundamental: MS did the work to keep the 2 lines able to interoperate.

But it didn’t do it with hardware support. Not back then.

Win95: APM, Plug’n’Play, DirectX.
Later, DirectX 2 with Direct3D.
Win95B: USB1.
Win98: improved USB support; ACPI.
Win98SE: basic Firewire camera-only support; Wake-on-LAN; WDM modems/audio.
WinME: USB mass storage & HID; more complete Firewire; S/PDIF.

(OK, NT 4 did include DirectX 2.0 and thus Direct3D. There were rumours that it only did software rendering on NT and true hardware-accelerated 3D wasn’t available until Windows 2000. NT had OpenGL. Nothing much used it.)

A lot of this stuff only came to the NT family with XP in 2001. NT took a long time to catch up.

My point here is that, in the late ’90s, Windows PCs became very popular for gaming, for home Internet access over dialup, and as newly capable laptops that consumers found increasingly attractive to own. Windows became a mass-market product for entertainment purposes.

And all that stuff was mainly supported on Win9x, _not_ on NT, because NT was at that time being sold to businesses as an OS for corporate desktops and servers. It was notably bad as a laptop OS: it didn’t have PnP, its PCMCIA/CardBus support and power management were very poor, it didn’t support USB at all, and so on.

Now, imagine this as an alternate universe.

In ours, as we know, MS was planning to merge its OS lines. A sensible plan: the DOS stuff was a legacy burden. But what if it hadn’t? Say it had kept developing Win9x as the media/consumer OS and NT as the business OS.

This is only a silly thought experiment, so don’t try to knock it down by pointing out the reasons not to do it. We know them.

They had a unified programming model — Win32. Terrified of the threat of the DoJ splitting them up, they were already working on its successor, the cross-platform .NET.

They could have continued both lines: one supporting gaming and media and laptops, with lots of special driver support for those. The other supporting servers and business desktops, not supporting all the media bells and whistles, but much more solid.

Yes, it sounds daft, but this is what actually happened for the best part of 6 years, from 1996 and the releases of NT 4 and Win95 OSR2 until Windows XP in 2001.

Both could run MS Office. Both could attach to corporate networks, and so on. But only one was any good for gaming, and only the other was any good if you wanted to run SQL Server, or indeed any kind of server, firewall, whatever.

Both were dramatically smaller than the post-merger version which does both.

The tendency has been to economise, to have one do-everything product, but for years they couldn’t do that yet, so there were 2 separate OS teams, and both made major progress; both significantly advanced the art. The PITA “legacy” platform went through lots of releases, steadily gaining functionality, as I showed with that list above, but it was all functionality that didn’t go into the enterprise OS, which went through far fewer releases, despite being the planned future one.

Things could have gone differently. It’s hard to imagine now, but it’s entirely possible.

If IBM had committed to OS/2 being an 80386 OS, then its early versions would have been a lot better, properly able to run and even multitask DOS apps. Windows 3 would never have happened. IBM and MS would have continued their partnership for longer; NT might never have happened at all, or DEC would have kept Dave Cutler and PRISM might have happened.

If Quarterdeck had been a bit quicker, DESQview/X might have shipped before Windows 3, and been a far more compelling way of running DOS apps on a multitasking GUI OS. The DOS world might have been pulled in the Unix-like direction of X11 and TCP/IP, instead of MS’s own in-house GUI and Microsoft’s and Novell’s network protocols.

If DR had moved faster with DR-DOS and GEM — and Apple hadn’t sued — a 3rd-party multitasking DOS with a GUI could have made Windows stillborn. They had the tech; it went into FlexOS, but nobody’s heard of that.

If the later deal between a Novell-owned DR and Apple had happened, MacOS 7 would have made the leap to the PC platform.


(Yes, it sounds daft, but this was basically equivalent to Windows 95, 3 years earlier. And for all its architectural compromises, look how successful Win95 was: 40 million copies in the first year. 10x what any previous version did.)

Maybe the Star Trek project would have bridged the gap, and instead of NeXT, Apple would have bought Be and migrated us to BeOS. I loved BeOS even more than I loved classic MacOS. I miss it badly. Others do too, which is why Haiku is still slowly moving forward, unlike almost any other non-Unix FOSS OS.

If the competing GUI computers of the late 1980s had made it into the WWW era, notably the Web 2.0 era, they might have survived. The WWW and things like Java and JavaScript make real rich cross-platform apps viable. I am not a big fan of Google Docs, but they are actually usable and I do real, serious, paying work with them sometimes.

So even if they couldn’t run PC or Mac apps, a modern Atari ST or Commodore Amiga or Acorn RISC OS system with good rich web browsers could be entirely usable and viable. They died before the tech that could have saved them, but that’s partly due to mismanagement, it’s not some historical inevitability.

If the GNU project had adopted the BSD kernel, as it considered doing, and not wasted effort on the HURD, Linux would never have happened and we’d have had a viable FOSS Unix several years earlier.

This isn’t entirely idle speculation, IMHO. I think it’s instructive to wonder how and where things might have gone. The way it happened is only one of many possible outcomes.

We now have effectively 3 mass-market OS families, 2 of them Unixes: Windows NT (running on phones, Xboxes and PCs), Linux (including Android), and macOS/iOS. All are thus multipurpose, doing everything from small devices to enterprise servers. (Yes, I know Apple’s stopped pushing servers, but it did once: the Xserve made it to quad-core Xeons & its own RAID hardware.)

MS, as one company with a near-monopoly, had a strong incentive to support only one OS family, and it’s done so even when it cost it dearly. For instance, moving its phones to the NT kernel was extremely expensive and has essentially cost it the phone market; Windows CE actually did fairly well in its time.

Apple, coming back from a weak position, had similar motivations.

What if, instead, the niches had been held by different companies? What if every player hadn’t tried to do everything, and most of them hadn’t killed themselves trying?

What if we’d had, say, in each of the following market sectors, 1-2 companies with razor-sharp focus aggressively pushing their own niches…

* home/media/gaming
* enterprise workstations
* dedicated laptops (as opposed to portable PCs)
* enterprise servers
* pocket PDA-type devices

And there are other possibilities. The network computer idea was actually a really good one IMHO. The dedicated thin client/smart terminal is another possible niche.

There are things that came along in the tech industry just too late to save players that were already moribund. The two big ones I’m thinking of are the Web, especially the much-scorned-by-techies (including me) Web 2.0, and FOSS. But there are others, such as commodity hardware.

I realise that, from where we are now, it sounds rather ludicrous. Several companies, or at least product lines, destroyed themselves trying to copy rivals too closely — for instance, OS/2. Too much effort went into trying to be “a better DOS than DOS, a better Windows than Windows”, rather than into being a better OS/2.

Apple didn’t try this with Mac OS X. OS X wasn’t a better Classic MacOS; it was an effectively entirely new OS that happened to be able to run Classic MacOS in a VM. (I say effectively entirely new because OS X did very little to appeal to NeXT owners or users. Sure, they were rich, but there weren’t many of them, whereas there were lots of Mac owners.)

What I am getting at here, in my very very long-winded way, is this.

Because we ended up with a small number of players, each of ‘em tried to do everything, and more or less succeeded. The same OS in my phone is running the server I’ll be posting this message to, and if I happened to be using a laptop to write this, it’d be the same OS as on my PC.

If I were on my (dual-booting) Win10 laptop, posting this to a blog on CodePlex or something, it’d be the same thing, but a different OS. If MS still offered phones with keyboards, I’d not object to a Windows phone — a keyboard is why I switched to a BlackBerry — but as it is, Windows phones don’t offer anything I can’t get elsewhere.

But if the world had turned out differently, perhaps, unified by FOSS, TCP/IP, HTML, Java and JavaScript, my phone would be a Symbian one — because I did prefer it, dammit — my laptop would be a non-Unix Apple machine, my desktop an OS/2 box, and they’d all be talking to DEC servers. For gaming, I’d fire up my Amiga-based console.

All talking over Dropbox or the like, all running Google Docs instead of LibreOffice and ancient copies of MS Word.

It doesn’t sound so bad to me. Actually, it sounds great.

Look at the failure of Microsoft’s attempt to converge its actually-pretty-good tablet interface with its actually-pretty-good desktop UI. It bombed, and may yet kill the company.

Look at Ubuntu’s failure to deliver its converged UI yet. As Scott Gilbertson said:

Before I dive into what's new in Ubuntu 16.10, called Yakkety Yak, let's just get this sentence out of the way: Ubuntu 16.10 will not feature Unity 8 or the new Mir display server.

I believe that's the seventh time I've written that since Unity 8 was announced and here we are on the second beta for 16.10.

And yet look at how non-techies are perfectly happy moving from Windows computers to Android and iPhones, despite totally different UIs. They have no problems at all. Different tools for different jobs.

From where we are, the idea of totally different OSes on different types of computer sounds ridiculous, but I think that’s a quirk of the market and how things happened to turn out. At different points in the history of the industry _as it actually happened_ things went very differently.

Microsoft is a juggernaut now, but for about 10 years, from the mid ’80s to the mid ’90s, much of the world ignored Windows and bought millions of Atari STs and Commodore Amigas instead. Rich people bought Macs.

The world still mostly ignores FreeBSD, but NeXT didn’t, and FreeBSD is one of the parents of Mac OS X and iOS, both loved by hundreds of millions of happy customers.

This is not the best of all possible worlds.

But because our PCs are so fast and so capacious, most people seem to think it is, and that is very strange to me.

As it happens, we had a mass extinction event. It wasn’t really organised enough to call it a war. It was more of an emergent phenomenon. Microsoft and Apple didn’t kill Atari and Commodore; Atari and Commodore killed each other in a weird sort of unconscious suicide pact.

But Windows and Unix won, and history is written by the winners, and so now, everyone seems to think that this was always going to be and it was obvious and inevitable and the best thing.

It wasn’t.

And it won’t continue to be.

Mon, Oct. 24th, 2016 04:41 pm (UTC)

I suspect we might still have had Linux, although perhaps not nearly as dominant as it now is.

Mon, Oct. 24th, 2016 10:56 pm (UTC)

Well, possibly something *like* it, anyway.

Various GNU people admit it was a mistake:


Torvalds said back in '93 he wouldn't have bothered if there was a 386 BSD:

"Actually, I have never even checked 386BSD out; when I started on Linux it wasn't available (although Bill Jolitz series on it in Dr. Dobbs Journal had started and were interesting), and when 386BSD finally came out, Linux was already in a state where it was so usable that I never really thought about switching. If 386BSD had been available when I started on Linux, Linux would probably never had happened."


But there are lots of FOSS Unixes. The fork that created the BSDs came later, but there were Minix, Mach... Plan 9 and Inferno were open-sourced much later. I don't know if things like Amoeba, Sprite and Chorus count.

But overall, a BSD-based GNU OS would have been easily doable and working by the late '80s. So probably no Linux, but I daresay there'd still be (at least) 2 branches of BSD: GNU BSD and, er, well, BSD BSD. And probably some experimental microkernel stuff, at least.

Tue, Oct. 25th, 2016 07:47 am (UTC)

There was 386/ix (a commercial 386 Unix), and I think Xinu was available for the 386. I guess neither of those counts as FOSS, though.

Tue, Oct. 25th, 2016 12:54 pm (UTC)

That's true.

But PC/IX was US$900, and 386/ix was $495 standalone or $895 for the networked version.


This is why Unix didn't do well on COTS PCs in the early days -- it was eye-wateringly expensive.

E.g. SCO Xenix: $595. Development system: $595 extra. Text processing system: $495. Bargain bundle price: $1350.


When I worked with Xenix/286 (on my PC-AT) and Xenix/386 on my customers' boxes, in '88-'89, we didn't have a compiler, network card support, TCP/IP or X11, because all of those things cost extra -- nearly as much again as the OS. A whole rig was well over £1000.

Work wouldn't pay for them, and rightly so.

This is why the Mark Williams Company's Coherent was briefly popular (on a small scale): it was a full Unix-compatible OS, albeit Unix v7 without the BSD extensions, for a bargain $495. It was cheap because programs could only occupy one 8086 segment -- 64 kB. Eventually there was a 386 version without this limitation, which was more useful, and the price went down to $99 by 1990:


But even then, it was a limited, old-fashioned Unix clone.

It's also why DR's PC OSes didn't thrive: CP/M-86 cost £240; PC-DOS, £40.

Microsoft are to blame for a lot of things, but puncturing and deflating the soaring prices of PC software is to their credit.

IMHO, Linux rose to prominence not through technical virtue, because it didn't have much of that, but because it was free, and the GPL meant it didn't fork all over the shop, unlike BSD.

Microsoft changed the game by commoditising software on commodity hardware. It basically killed off all its competitors by under-cutting them. In some cases, to zero -- e.g. Netscape by offering IE & IIS for free.

So there was only one competitive tactic left, once the price is nothing: make the source code free as well, but protected by a viral licence. ;-)

Edited at 2016-10-25 12:57 pm (UTC)

Fri, Nov. 11th, 2016 03:09 pm (UTC)

I guess one of the things to consider is the price of a full-fledged "unix system" (with bespoke vendor hardware and all) at the time. From memory, you could get a decent 386 PC for, um, about 5000 - 6000 SEK and a 386/ix license for another 4000-8000 SEK.

But if you wanted the cheapest possible "not unix on a PC" at the time, you were looking at a DIAB DS/20 (or DS/21, hazy on the naming standards there) and I think the cheapest you could get a new one was probably on the order of 50k to 60k. And at that point, you were on a DNIX system (started from a System III source code license, hacked into SysV compliance via "Read SVID, hackhackhack") with an... interesting... networking stack (cool, but weird; FTP and Telnet running on the network card, email running on the network card, dropping UUCP spool files into the file system, ...).

So, yeah, expensive, but much MUCH cheaper than the alternative. Probably not cheap enough, though.

Fri, Nov. 11th, 2016 03:18 pm (UTC)

Very good point, yes.

Unix on a PC was much much cheaper than a purpose-built Unix box, so the PC-Unix vendors felt that even at high prices it was a good deal.

PC owners mostly didn't. :-)

If you were using a PC as a multiuser host, that was different -- it was competitive with a PC fileserver and workstations, if a single PC + terminals did what you needed.

So PC-DOS (& variants and relatives) outcompeted not only Unix but also more expensive OSes like the UCSD p-System and CP/M-86... until a Unix that was cheaper than anything came along. I.e. it cost nothing at all.

And still most PC owners use Windows. :-/

Mon, Oct. 24th, 2016 09:28 pm (UTC)

I think it's the hardware support that pushes towards a minimum number of OSes. Software is relatively easy to make OS-independent: throw in enough abstraction layers, emulation layers and, if all else fails, VMs, and it'll work.

But device drivers are a PITA. The OS writers can't afford to write a driver for every shitty, badly-documented bit of kit that's barfed out of China. It's not the days of WordPerfect anymore, when WP could supply floppy after floppy of printer and video card drivers. They're too complicated. So as soon as one OS becomes dominant enough that the HW manufacturers need to write drivers for it, that OS benefits from a massive advantage over its competitors.

So much so that the only ways to compete are to:
- convince thousands of volunteers to do it for you
- refuse to run on any hardware except the stuff you make yourself

Mon, Oct. 24th, 2016 11:24 pm (UTC)

That's a good point, but even so, I don't think it's entirely a deal-breaker, not on its own.

But you highlight something I glossed over, and something I missed out.

One of the problems with the '80s 16-bitters was proprietary hardware, or at least, expensive, workstation-class hardware.

The thing I glossed over was what Apple belatedly cottoned on to.

I never owned a Mac of my own 'til my "Road Apple" Performa 5200...


... in about '96, thanks to Apple's "Platform Club" for UK journos.

It was one of the first PowerMacs where Apple started to get serious about cost reduction -- so its hard disk and (I think) CD drive were both IDE. It still had SCSI, which it didn't really need, so they hadn't fully grasped the nettle.

But within a few years, Macs used a lot of plain old cheap PC parts: EIDE disks, vanilla RAM, PCI graphics cards. OK, some had special firmware, but still, a lot of generic PC bits.

Apple replaced SCSI with Firewire, ADB & RS422/LocalTalk with USB, NuBus with PCI, its own ROM with OpenFirmware. It switched the monitor connector to a standard VGA socket; previously, it was the same connector as a Thick Ethernet transceiver. This enabled it to bin the AAUI Ethernet connector and switch to an RJ-45.

The "Tanzania" machines -- the PowerMac 4400 and its kin -- were the ultimate expression of this, with effectively an ATX motherboard & ATX PSU in a cheap metal ATX case.

The Mac became something like a PC with a different CPU and chipset. And in the end, they switched to Intel CPUs too, of course.

So this is the thing I didn't address.

If the Amiga and ST had lived long enough to do the same, it would have helped them, I reckon. Bin as many proprietary interfaces and parts as possible, and switch to cheap mass-market ones.

Acorn, oddly, did live long enough to start this process. They switched from their own mouse and keyboard interfaces to PS/2, they switched to PC SIMM memory and IDE disks. The unreleased Phoebe machine went further, with PCI slots and some more PC I/O.


They still implemented their own audio and graphics, believing -- I'm sure correctly -- that they could do better than PC industry parts. True, but it was more expensive.

Not using USB was short-sighted. Phoebe was demonstrated the same year that the original G3 iMac launched; the ports were out there, and so were some drivers (e.g. in Win 95B).

Obviously, yes, this stuff does need drivers all the same, but if you're using COTS components with developer documentation and possibly even x86 source code available, it would be easier.

Another thing that was visible in later editions of OS/2 and BeOS, and indeed still is in Linux, is relatively generic drivers which can cope with a range of hardware using relatively standard interfaces -- e.g. VESA screen modes, SoundBlaster-compatible sound, NE2000-compatible NICs, etc. Sure, it's not ideal, but it gets you working; later, if some particular device proves popular, you do a better driver.

So, for instance, when the first releases of Mac OS X appeared, it drove the screen as a dumb framebuffer but worked with any Mac GPU out there, including many (but not all) 3rd party ones. However, after a release or 2, the ATI Mach64 family got its own driver with some acceleration and that was far and away the best GPU to use with OS X if you had a choice. Later came Quartz compositing and the direction changed.

So yes, you make a fair point, but there were some ways around some of these issues.

But no, Amiga had its own fancy chipset and its own expansion bus, and Commodore stuck with it even when it wasn't that competitive any more. The ST used a lot of cheap off-the-shelf bits, even the OS -- that was part of the brilliance of the design -- but then for later models they chose the somewhat obscure VMEbus. Nice try but they backed the wrong horse. So they never developed the critical mass they needed.

From the era of the first PC clones, it was clear that the PC market was a solid horse to back. Enthusiastic adoption of PC hardware standards, de facto or not, would have helped them, I reckon.

Edited at 2016-10-24 11:33 pm (UTC)

Mon, Oct. 24th, 2016 11:33 pm (UTC)

One thing Acorn seemed dimly aware of, but another nettle it didn't grasp, was that by the late '90s its selling point wasn't CPU power; it was low CPU cost and low heat output. Acorn should have jumped on the bandwagon Simtec tried to start with the Hydra:


Acorn could have had the cheapest SMP computers in the world. Yes, they'd have had to rework RISC OS to support it, but Apple sold multiprocessor Macs with the resolutely single-processor Classic MacOS:


In other words, it was doable.

The Phoebe was not competitive by the time it was ready. It saddens me hugely to say it, but Acorn was right to cancel it. But there was a potential edge there that the company missed.