
Time to go

I am closing down this blog.

Henceforth, you will find it and its continuation over on Dreamwidth.

I've imported the old content. Comments, in theory, should come across in time.

A small Yuletide gift

It's been a while since I have had any time to work on one of my pet projects...

So, herewith, step one: a downloadable FAT32 PC DOS 7.1 VirtualBox disk image.

This is the PC DOS 2000 disk image from Connectix Virtual PC – I described how I created that this time last year.

I've replaced the kernel files, COMMAND.COM and a few utilities with those from the freely-downloadable PC DOS 7.1 made available by IBM. I've described that and how to get it, too.

So what I did was make a new 8GB virtual drive, partition it with PC DOS 7.1's FDISK32 command, format it FAT32 with its FORMAT32 command, copy the system across from the Connectix FAT16 drive, and check that it boots. Here it is.
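
For the host-side step, it's roughly this -- a minimal sketch, assuming a reasonably recent VirtualBox (older releases spell the subcommand `createhd`); the filename is just an example, and the guest-side commands are the ones described above:

```sh
# Host side: create an empty 8 GB virtual disk (filename is an example).
VBoxManage createmedium disk --filename "PC-DOS-71.vdi" --size 8192

# Guest side, booted from the existing PC DOS system inside the VM:
#   FDISK32   - partition the new drive
#   FORMAT32  - format the partition as FAT32
# ...then copy the system files across from the FAT16 boot drive.
```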

Next planned step: add in the IBM Warp Server DOS LAN Services & IBM TCP/IP and make it able to talk to the VirtualBox host. Sadly, after so long away from this, it took me some hours to remember where I was up to and build this disk image.

The vagaries of 1990s 32-bit Windows networking

I recently commented about a deeply misguided comment on HN that claimed that Windows 98 was the beginning of integrated networking in Windows. As Wolfie Pauli (as I like to call him) put it: "that is not only not right, it is not even wrong!"

But, to be fair, which I rarely feel the urge to be towards Microsoft, Win98 did have one killer networking feature: (effectively) unlimited IP addresses.

I do not know the internals, and the Web seems to have forgotten them if it ever knew... but the Win9x and NT networking stacks were very different.

The NT network stack is cleanly layered, supporting multiple adaptor types, protocols and clients all in one, and it was so complicated that, up to NT 4, when you finished making changes to the networking configuration and clicked "OK", a tiny embedded Prolog interpreter fired up and ran a single embedded Prolog program -- the only one in all of the DOS, OS/2 and Windows codebase that I'm aware of, possibly the only one in any commercial OS anywhere. This Prolog code parsed your desired config, worked out how to interconnect all the layers of the NT network stack, and wrote the configuration file(s) and registry settings needed to give you what you wanted.

That's why a little progress dialog box popped up for a while.

I do not know if the Prolog has gone, but I suspect that the progress indicator box has gone just because [a] everything defaults to TCP/IP now and the Netware support stuff is mostly or all gone, and [b] computers are much faster now, so there's no perceptible pause and thus no need for a progress bar.

Win9x, on the other hand, was Windows 4, the successor to Windows for Workgroups 3.11, and so I strongly suspect that its network stack was just a modified version of the WfWg 3.11 stack -- from what tiny bit I know from experience, plus some educated guesswork.

In Win95/95B you could only have a maximum of four TCP/IP addresses. That's on all adaptors (Ethernet, modem, Direct Cable Connection, AOL dialup, etc.) put together.

Aside: yes, the AOL adaptor was its own device, not a Windows-managed modem device: AOL managed its own dial-up. Don't knock it, it worked: it was much easier to get online with AOL in the 1990s than with, well, anything else, because AOL did a lot of R&D to make it happen. I know this because I've had a free journo AOL account since the 1990s, and it gave me toll-free worldwide dialup, so I used it a lot when visiting Oslo, which I did monthly for a couple of years at the start of the century. (Thanks, Ryanair!) No wifi or broadband back then!

Win95 bound all protocols to all adaptors. No need for Prolog here. I had a Thinkpad 701C -- the classic folding-keyboard Butterfly model -- and I had several PCMCIA cards for it. One had a 56K modem. One was a 10base-T network card. One was a 100base-T network card. And I had direct cable connection for linking to other PCs. That's four network connections, if you're counting.

The AOL dial-up adaptor made 5.

Awooga! Alert! That means five network adaptors, and Win95 didn't support more than four IP addresses, in total, for all devices. A dynamic address -- that is, an adaptor that does BOOTP or DHCP -- is still an address. An adaptor that's not plugged in at the moment was still an adaptor, still had TCP/IP bound to it, and so still needed a slot for an address.

(Oh, and forget about IPv6 – Win95 just didn't do that, period; it hadn't been invented yet.)

Result: problem. A basically-insoluble problem. Microsoft didn't really expect people to want so many IP addresses in those days.

Windows 98 allowed effectively unlimited IP addresses, so I could have all those adaptors at once.

Problem: the Thinkpad 701C is a 75MHz 486 with 40MB of RAM, and Windows 98 was too much for it. I used 98lite to trim it down, but it was still bulky and sluggish.

In desperation I did try Windows 2000 on the machine. Win2K on a 486 with 40MB of RAM is possible, just, but it's not much fun to use. Thankfully the old Thinkpad's hard disk tray was removable and I had a few drives and caddies, so I could switch Windows versions in half a minute.

Anyway: that was the sole significant improvement in networking in Win98 that I am aware of. The other thing 98 could do was drive multiple monitors, if you had the right graphics cards -- that is, cards from an extremely limited selection. Otherwise, it was just 95B with more drivers and the odious Active Desktop built in.

The rise of the LAN in business computing

Someone on HN referred to "the arrival of built-in Windows Networking in Windows 98".

Since that is, IMHO, "not even wrong", I felt I had to reply...

This is not correct.

Windows 95 had built-in networking from launch and an email client on the desktop. (It did not have a bundled web browser at launch, but it had networking including TCP/IP and dial-up.)

Built-in networking arrived with Windows for Workgroups 3.1 in 1992, and became mainstream with Windows for Workgroups 3.11 in 1993, thanks to the performance enhancements of 32-bit File Access. From '93 on, almost all PCs shipped with WfWg 3.11 as the default version of Windows.

WfWg 3.11 had an optional add-on delivering 32-bit TCP/IP, and Internet Explorer was a free download that gave it dial-up TCP/IP.

Also, Windows NT launched in 1993, with built-in networking including TCP/IP over wired and dial-up networks.

But networking does not and did not equal TCP/IP. DOS and Windows 3.x defaulted to NetBEUI, with optional IPX/SPX for Novell Netware, which was the dominant PC networking standard from the late 1980s. Until the mid-1990s, TCP/IP was a niche protocol, only needed if you wanted to communicate with expensive RISC-based UNIX™ workstations.

Microsoft used NetBEUI. Novell used IPX. Apple used AppleTalk. DEC used DECnet. IBM used lots of protocols including DLC but didn't use Ethernet all that much -- it had its own network system, Token Ring -- so you needed special hardware to talk to IBM kit and only IBM-centric businesses used it much.

Office LANs were certainly entirely mainstream in the first half of the 1990s. When I started my first job in London in 1991, we had just one client who didn't have an office network. That was considered unusual, but it was an intentional management decision, intended to slow the possible spread of malware and increase real-life, face-to-face staff communication.

What wasn't mainstream was them being based on TCP/IP.

These days, networking and TCP/IP seem synonymous, but that's just how it happens to be this century. Networking as a common office tool -- mostly over Ethernet, initially Thin Ethernet (10base-2) -- predated the rise of TCP/IP by a good 15 years or so. Some early adopters were using it 20+ years earlier.

Network protocols then were a bit like OSes are now. Many people use Windows, but lots use Macs, some use *BSD, a few still use commercial UNIX, etc., and there are things like ChromeOS, thin clients over RDP, and so on. It's not at all homogeneous, and it's hard to even say there's a clear majority for any one OS: Windows has the edge, but not by a lot any more.

Well, in the era of MS-DOS, Novell was the server OS of choice for almost everyone, with rivalry from 3Com and its 3+Share MS-DOS-based server OS. 3+Share was related to MS LAN Manager, which ran on OS/2 and led to 3+Open. The LAN Manager family all used NetBEUI. LAN Manager also ran on VMS thanks to DEC Pathworks, running over DECnet, which was handy because it also supported terminal sessions -- remember, this is before SSH -- plus X11, DEC email and more.

Focussing on big businesses was Banyan VINES, with its own protocol derived from Xerox's XNS. This had the first network directory service, StreetTalk. Novell designed Netware 4, with NDS, as a direct response to StreetTalk, and in the NT 3.x era Netware kicked Microsoft's behind in the market; the tide only turned with NT 4.

Speaking of big enterprises, email was common long before LANs or TCP/IP. All big DEC users and IBM users had those companies' email systems. Small firms used dial-up to pre-Internet service providers -- I used CIX, which dominated in Britain. My 1991 CIX email address is still live and still works. Americans favoured CompuServe, AKA Compu$erve, but it was too expensive in Europe where we pay for local calls too.

This stuff is, ballpark, a quarter of a century older than mainstream TCP/IP and Internet-based networking -- and that is itself only about 25 years old. What I am saying is that widespread LAN use didn't begin with Win98.

At the time, Win98 wasn't even a blip; it was nicknamed "GameOS" in enterprise IT circles and few companies even considered it. NT 4 was where it was at, and it launched with full TCP/IP support two whole years before Win98.

So no, Win98's networking didn't begin anything at all. It wasn't significant in any way, then or now. Win98 was a home OS for standalone PCs with dial-up, but it merely took over from Win95 which created that market.

The rise of TCP/IP networking in business LANs arguably began with Windows NT, but NT 3.x wasn't very significant, and NT itself arrived about halfway through the lifetime of business use of machine-to-machine communications, email, groupware and so on, from its beginnings to now.

If you want to argue that integrated networking in Windows was a significant turning point, I won't argue with that. But it began five years before Win98, with Windows for Workgroups and Windows NT.

The fact that it now looks big and significant as the moment when TCP/IP became the default is an artefact of the current focus on IP. It wasn't significant at the time.

Email is a 1960s thing. The Internet started to become significant in the 1970s, long after email. Corporate LANs rose in prominence in the 1980s and by the 1990s were almost a given. Macintosh-based companies (mostly in design, print, repro etc.) did direct peer-to-peer comms over ISDN.

In the 1990s, for most people, TCP/IP only ran over dial-up modem connections, and its rise was contemporaneous with the industry's move to 10base-T: Ethernet over UTP replacing Ethernet over coax.

For a time, the obvious successor to 10base-T looked to be ATM, which is a protocol as well as a cabling system; TCP/IP had to be tunnelled over ATM, but for a while it looked clear that ATM was the future. 100base-T (Fast Ethernet) was just one contender among several.

But actually, as it happened, TCP/IP rose vastly in importance, and networking switched to 100base-T and then wifi.

LANs switched to IP around the turn of the century, but by then they were a roughly 20-year-old, established, totally normal technology.

Just because a vendor sells laptops with a Linux on, caveat emptor still applies

Some companies sell laptops with Linux pre-installed. However, in some cases I have read about, there are significant caveats.

Some examples:

  • Dell pre-installed their own drivers for Ubuntu on their laptops, and if you format the machine and reinstall, or reinstall a different distro, you can't get the source of the drivers and build your own.

  • In other instances I've heard of, the machines work fine, but some features are not supported on Linux. Or they only work on the vendor's supported distro and not on other distros. Or on Linux but not on -- say -- FreeBSD.

  • Or all features work, but you need Windows to update the machine's firmware, or to update peripherals' firmware, such as a docking station's.

  • Or the Linux models have slightly different specs, such as a specific WLAN card, and the generic Windows version of the same model is not 100% compatible.


The fact that someone offers one or two specific models with one particular Linux distro as an option is good, sure, but it doesn't automatically mean that that particular machine will be a good choice if you run a different distro, or don't want their pre-installed OS, or didn't buy it with Linux and put it on later.

Long, long ago, in the mid-1990s, I ran the testing labs for a major UK computer magazine called PC Pro. In about 1996, I proposed, ran and edited a feature which the editors were very dubious about, but it proved to be a big hit.

The idea was very simple: at that time, all PCs shipped with Windows 95. As 95 was DOS-based at heart and had no concept of user space vs kernel space, drivers were quite easy. You could at a push use DOS drivers, or adapt drivers from Windows for Workgroups, which did terrible hacky direct-hardware-access stuff.

So my feature was: we want machines designed, built and supplied with Windows NT. At the time, that meant NT 4.

NT 4 was not at all like Win95; it just looked superficially like it. It needed its own, new, specially-written drivers for everything. It had built-in drivers for some things, for example EIDE (i.e. PATA) hard disks, but these did not use DMA, only programmed IO. (Not slow, but caused very high CPU usage; no problem on Win9x, but a performance-killer on NT.)

The PC vendors loved and hated us for it.

Some vendors...

  • promised machines then withdrew at the last minute;

  • promised machines, then changed the spec or price;

  • delivered machines with features not working;

  • delivered machines with expensive replacement hardware for built-in parts that didn't work with NT.


And so on. There was a huge delta in performance between the NT machines, while all Win9x machines performed pretty much alike: we could look at the parts list and predict the benchmark scores to within about 5%.

Many vendors didn't know about DMA hard disk drivers.

Some did, but didn't know how to fix it. Some fitted SCSI hard disks as a way round this, not knowing that the motherboard came with a floppy disk carrying a free driver that would enable DMA on EIDE.

Some shipped CD burners that couldn't burn because the burner software didn't work on NT. Some shipped DVD drives which couldn't play movies on NT because the graphics adaptor's video playback acceleration didn't work on NT.

And so on.

Readers *loved* that feature because it separated the wheat from the chaff: it distinguished the cheap vendors, whose PCs mostly worked but who didn't know how to tune them, from the solid vendors who knew what they were doing and how to make stuff work, and from the solid vendors who could build a great PC for the task, but at double the price.

I got a lot of praise for that article, and it was well worth the work.

Some vendors thanked me because it was so educational for them!

Well, Linux on laptops is still a bit like that today. There is a whole pile of stuff that's easy and a given on Windows that is difficult or problematic on Linux and just plain impossible on any other FOSS OS.

  • Switchable GPUs are a problem

  • Proprietary binary graphics drivers are sometimes a problem

  • Displays on docking stations can be tricky


Interactions between these things are even worse; e.g. multiple displays on USB docking stations can be extra tricky.

For example, with openSUSE Leap I found that with Intel graphics, two screens on a USB-C docking station were easy, but with nVidia Optimus, almost impossible.

With my own Latitude E7270, under KDE I can only drive one external screen; if I add two as well as the built-in one, then window borders disappear on the laptop screen, so windows can't be moved or resized. But under the lighter-weight Xfce, this is fine and all three screens can be used. And that's with an Intel GPU and a proper, PCIe-bus-attached dock.

But every time I un-dock or re-dock, it forgets the screen arrangement and the Display preferences have to be redone every single time.
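
On X11 you can at least script the re-applying, instead of clicking through the Display dialog yet again. A minimal sketch, assuming xrandr and an X session; the output names are invented examples, so check yours by running plain `xrandr` first:

```sh
#!/bin/sh
# Re-apply a docked three-screen layout after plugging in.
# eDP-1, DP-1-1 and DP-1-2 are example output names; run `xrandr`
# with no arguments to see what your dock actually exposes.
xrandr --output eDP-1  --auto --pos 0x0 \
       --output DP-1-1 --auto --right-of eDP-1 \
       --output DP-1-2 --auto --right-of DP-1-1
```

It doesn't stop apps forgetting where their windows were, but it beats redoing the arrangement by hand.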

Most apps can't remember what screen they were on and reopen on a random monitor every time. Possibly entirely offscreen if I have a different screen arrangement.

Even the same screens confuse it, depending on whether they're attached directly to the machine or via the dock. And I have both a full-size and a mini dock; all their ports appear as different outputs.

Linux on laptops is still complicated.

Just because things work for 1 person doesn't mean they'll work for everyone. Just because a vendor ships a model with Linux doesn't mean all models work. Just because a vendor ships 1 distro doesn't mean all distros work.

And when the machine is new, you can be fairly sure that there will be serious firmware issues with Linux, because the firmware was only tested against Windows, and sketchily even then. This is the era of Agile and minimum viable products, after all.

So do not take it as read that because Dell ship two or three models with Ubuntu, all models will Just Work™ with any distro.

I absolutely categorically promise you they don't and they won't.

Some thoughts about raising the profile of Lisp

I must be mellowing in my old age (possibly as opposed to bellowing) because I have been getting praise and compliments recently on comments in various places.

Don't worry, there are still people angrily shouting at me as well.

This was the earlier comment, I think... There was a slightly forlorn post in the Reddit Lisp community, discussing this article, which I much enjoyed myself:

Someone was asking why so few people seemed interested in Lisp.

As an outsider – a writer and researcher delving into the history of OSes and programming languages far more than an actual programmer – my suspicion is that part of the problem is that this positively ancient language has accumulated a collection of powerful but equally ancient tooling, and that tooling is profoundly off-putting to the young people, used to modern tools, who approach it.

Let me describe what happened and why it's relevant.

I am not young. I first encountered UNIX in the late 1980s, and UNIX editors at the same time. But I had already gone through multiple OS transitions by then:

[1] weird tiny BASICs and totally proprietary, very limited editors.

[2] early standardised microcomputer OSes, such as CP/M, with more polished and far more powerful tools.

[3] I personally went from that to an Acorn Archimedes: a powerful 32-bit RISC workstation with a totally proprietary OS (although it's still around and it's FOSS now) descended from a line of microcomputers as old as CP/M, meaning no influence from CP/M or the American mainstream of computers. Very weird command lines, very weird filesystems, very weird editors, but all integrated and very powerful and capable.

[4] Then I moved to the same tools I used at work: DOS and Windows, although I ran them under OS/2. I saw the strange UIs of CP/M tools that had come across to the DOS world run up against the new wave of standardisation imposed by (classic) MacOS and early Windows.

This meant: standard layouts for menus, contents of menus, for dialog boxes, for keystrokes as well as mouse actions. UIs got forcibly standardised and late-1980s/early-1990s DOS apps mostly had to conform, or die.

And they did. Even then-modern apps like WordPerfect gained menu bars and changed their weird keystrokes to conform. If their own weird UIs conflicted, the standards took over. WordPerfect had a very powerful, efficient UI driven by function keys, but it wasn't compatible with the new standards. It used F3 for help and Escape to repeat a character, command or macro; the new standards said F1 must be help and Esc must be cancel. So WordPerfect complied.

And until the company stumbled -- porting to OS/2 and ignoring Windows until it was too late -- it worked. WordPerfect remained the dominant industry standard, even as its UI got modernised. Users adapted.

So why am I talking about this?

Because the world of tools like Emacs never underwent this modernisation.

Like it or not, for 30 years now, there's been a standard language for UIs and so on. Files, windows, the clipboard, cut, copy, paste. Standard menus in standard places and standard commands on them with standard keystrokes.

Vi ignores this. Its fans love its power and efficiency and are willing to learn its weird UI.

Emacs ignores this, for the same reasons. The manual and tutorial talk about "buffers" and "scratchpads" and "Meta keys" and dozens of other things that no computer made in the last 40 years has: a whole different language, from before the Mac and DOS and Windows transformed the world of computing.

The result of this is that if you read guides and so on about Lisp environments, they don't tell you how to use it with the tools you already know, in terms you're familiar with.

Instead, they recommend really weird editors and weird add-ons and tools and options for those editors, all from long before this era of standardisation. They don't discuss using Sublime Text or Atom or VS Code: no, it's "well, you can use your own editor, but we recommend EMACS and SLIME; just learn the weird UI, it's worth it. Trust us."

It's counter-productive and it turns people off.

I propose that a better approach would be to modernise some of the tooling and forcibly make it conform to modern standards. I'm not talking about trivial stuff like CUA-mode, but bigger changes, such as ErgoEmacs. By all means leave the old UI there and make it possible for those who have existing configs to keep it, but update the tools to use standard terminology, use the names printed on actual 21st-century keyboards, and work the same way as every single GUI editor out there.

Then once the barrier to entry is lowered a bit, start modernising it. Appearance counts for a lot. "You never get a second chance to make a first impression."

One FOSS tool that's out there is Interlisp Medley. There are efforts afoot to modernise this for current OSes.

How about just stealing the best bits and moving them to SBCL? Modernising its old monochrome GUI and updating its look and feel so that it blends into a modern FOSS desktop?

Instead of pointing people at '70s tools like Emacs, assemble an all-graphical, multi-window, interactive IDE on top of the existing infrastructure and make it look pretty and inviting.

Keep the essential Lispiness by all means, but bring it into the 2020s and make it pretty and use standard terms and standard keystrokes, menu layouts, etc. So it looks modern and shiny, not some intimidating pre-GUI-era beast that will take months to learn.

Why bother? Who'll do it?

Well, Linux spent a decade or more as a weird, clunky, difficult and very specialist OS, which was just fine for its early user community... until it started catching up with Windows and Mac and growing into a pretty smooth, polished, quite modern desktop, partly fuelled by server advancements. Things like NetBSD still are all of those things, and have zero mainstream presence.

Summary: you have to get in there and compete with the mainstream, playing their game by their rules, if you want to win.

I'd like to have the option to give Emacs a proper try, but I am not learning an entire new vocabulary and a different UI to do it. I learned dozens of 'em back in the 1980s and it was a breath of fresh air when one standard one swept them all away.

There were very modern Lisp environments around before the rise of the Mac and Windows swept all else away. OpenGenera is still out there, but we can't legally run it any more -- it's IP that belongs to the people who inherited Symbolics when its founders died.

But Interlisp/Medley is still there and it's FOSS now. I think hardcore Lispers see stuff like a Lisp GUI and natively-graphical Lisp editors as pointless bells and whistles -- Emacs was good enough for John McCarthy and it still is for me! -- but in 2021 they really are not pointless.

There were others, too. Apple's Dylan project was built in Lisp, as was the amazing SK8 development environment. They're still out there somewhere.


Re-evaluating "In the Beginning... Was the Command Line" 23 years later

A short extract of Neal Stephenson's seminal essay has been doing the rounds on HackerNews.


OK, fine, so let's go with it.

Since my impression is that HN people are [a] xNix fans and [b] often quite young, and therefore [c] have little exposure to other OSes, let me try to unpack what Stephenson was getting at, in context.

The Hole Hawg -- the heavy-duty industrial drill Stephenson uses as his metaphor for Unix -- is a dangerous and overpowered tool for most non-professionals. It is big and heavy. It can take on big, tough jobs with ease, but its size and brute power mean that it is not suitable for precision work. It has relatively few safety features, so if used inexpertly, it will hurt its operator.

DIY stores are full of smaller, much less powerful tools. This is for good reasons:

  • because for non-professional users, those smaller, less-powerful tools are much safer. A company which sells untrained users a tool that tends to maim or kill them will go out of business.

  • because smaller, less-powerful tools are better for smaller jobs, that a non-professional might undertake, such as hanging a picture, or putting up some shelves.

  • professionals know to use the right tool for the job. Surgeons do not operate with chainsaws (even though they were invented for surgery). Carpenters do not use axes.


The Hole Hawg, as described, is a clumsy tool that needs things attached to it in order to be used, and even then, you need to know the right way to use it or it will hurt you.

Compare with a domestic drill with a pistol grip that is ready to use out of its case. Modern ones are cordless, increasing their convenience.

One is a tool for someone building a house; the other is a better tool for someone living in that house.

That's the drill part.

Now, let's discuss the OSes talked about in the rest of the 1999 piece from which that's a clipping [PDF].

There are:

  • Linux, before KDE, with no free complete desktop environments yet;

  • Windows, meaning Windows 98SE or NT 4;

  • Classic MacOS – version 9;

  • BeOS.

Stephenson points out that Linux is as powerful as any of them, cheaper, but slower, ugly and unfriendly.

He points out that MacOS 9 is as pretty, friendly, and comprehensible as OSes get, but it doesn't multitask well, it is not very stable, and when a program crashes, your entire computer probably goes with it.

He points out that Windows is overpriced, performs poorly, and is not the best option for anyone – but that everyone runs it and most people just conform with what the mainstream does.

He praises BeOS very highly, which was 100% justified at the time: it was faster than anything else, by a large margin. It had superb multimedia support and integration, better than anything else at the time. It was standards-compliant but not held back by that. For its time, it was a supermodern OS, eliminating tonnes of legacy cruft.

But it didn't have many apps so it was mainly for people in narrow niches, such as music production or maybe video editing.

It was manifestly the future, though. But we're living in the future and it wasn't. This was 23 years ago, nearly a quarter of a century, before KDE and GNOME, before Windows XP, before Mac OS X. You need to know that.

What Unix people interpret as praise here is in fact criticism.

That Unix is very unfriendly and can easily hurt its user. (Think `rm -rf /` here.)

That Unix has a great deal of raw power but maybe more than most people need.

That Unix is, frankly, kinda ugly, and only someone who doesn't care about appearances would choose it.

That something of this brute power is not suitable for fine precision work. (Which it still mostly isn't -- Mac OS X is Unix, tuned and polished, and that's what the creative pros use now.)

Here's a response from 17 years ago.

The historical significance of DEC and the PDP-7, -8, -11 & VAX

Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It's entitled "First new vax in ...30 years? 🙂"

Someone posted it on Hackernews. One of the comments said, roughly, that they didn't see the significance and could someone "explain it like I'm a Computer Science undergrad." This is my attempt to reply...

Um. Now I feel like I'm 106 instead of "just" 53.

OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... and both were from the same company.

Minicomputers are what came after mainframes and before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one appeared in 1971, processors were made from discrete logic: lots of little silicon chips.

The main distinguishing feature of minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.

Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.

The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 16-bit, 18-bit or 36-bit logic (and an unreleased 24-bit model, the PDP-2).

One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone but it was the origin of a command-line interface (largely shared with TOPS-10 on the later, bigger and more expensive, 36-bit PDP-10 series) with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) with semi-standardised 3-letter extensions, such as README.TXT.

This OS and its shell later inspired Digital Research's CP/M, the first industry-standard OS for 8-bit micros. CP/M was planned to be the OS for the IBM PC, too, but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, which IBM called "PC DOS" and Microsoft called "MS-DOS".

So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.

Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.

A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.

More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.

Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine to which UNIX's creators first ported it, and in the process, they rewrote it in a new language called C.

The PDP-11 was a huge success, so DEC was under commercial pressure to make an improved successor. It did this by extending the 16-bit PDP-11 instruction set to 32 bits, producing the VAX. For this machine, the engineer behind the most successful PDP-11 OS, RSX-11, led a small team that developed a new pre-emptive multitasking, multiuser OS with virtual memory, called VMS.

(When it gained a POSIX-compliant mode and TCP/IP, it was renamed from VAX/VMS to OpenVMS.)

OpenVMS is still around: it was ported to DEC's Alpha, the first 64-bit RISC chip, and later to the Intel Itanium. Now it has been spun out from HP and is being ported to x86-64.

But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.

At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the x86-32 version, OS/2 for the 386, which it completed and sold as OS/2 2 (and later 2.1, 3, 4 and 4.5; it is still on sale today under the name Blue Lion from Arca Noae).

At Microsoft, Cutler and his team were given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler et al. finished this, porting it to the new Intel RISC chip, the i860, codenamed the "N-Ten". The resultant OS was initially called OS/2 NT, later renamed -- due to the success of Windows 3 -- as Windows NT. Its design owes as much to DEC's VMS as it does to OS/2.

Today, Windows NT is the basis of Windows 10 and 11.

So the PDP-7, PDP-8 and PDP-11 directly influenced the development of CP/M, MS-DOS, OS/2, Windows 1 through to Windows ME.

A different line of PDPs directly led to UNIX and C.

Meanwhile, the PDP-11's 32-bit successor directly influenced the design of Windows NT.

When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.

This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.

Mankind is a monkey with its hand in a trap, & legacy operating systems are among the bait

[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically every blind computer user I know, personally or via friends of friends, uses Windows or Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers from the 1990s and the spread of internet access onwards, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.
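
To give a flavour of the kind of thing I mean -- a trivial sketch, nothing more:

```sh
# One pipeline to list the ten biggest files and directories under your
# home directory, with human-readable sizes. No GUI required.
du -ah ~ 2>/dev/null | sort -rh | head -n 10
```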

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead, I presented what were, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded: partly because, when Ubuntu introduced its non-Windows-like desktop after Microsoft threatened to sue, Mint hoovered up the users who wanted something Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never, ever need to go near a command line, for anything, ever. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result has been that Apple became the first trillion-dollar computer company, with hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want, work the way you want... and despite offering the results for free, the result has been about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, Chromebooks persuaded about a third of the world's laptop buyers to switch to Linux. More Chromebooks now sell every year -- tens of millions -- than Ubuntu has gained users in total since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, the desktop Linux world is not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, that they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money at the high end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar attitude is what has kept Linux the most successful server OS in the world. It is just a modernised version of a quick-and-dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.