
My favourite useful low-end machine (in answer to an Ask Hacker News post)

In answer to this Ask HN question.

• I had a client with a Novell IntraNetware 4.1 network. I did a bargain-basement system upgrade for them. With a local system builder, we took a whole storage closet full of decade-old 386 and 486 desktops and turned them into Cyrix 6x86 166+ clients. The motherboards had integrated graphics and NICs (rare back then), 32MB RAM and a smallish local EIDE hard disk, say 1.2GB. No CD drives, original 14-15" SVGA CRTs.

A 2nd Novell server would have been too expensive, so I put in an old Pentium 133 workstation as a fileserver running Caldera OpenLinux with its built-in MARSNWE Netware server emulation. It held CD images of NT 4 Workstation, the latest Service Pack, the latest IE, MS Office 97 and a few other things like printer drivers. Many gigs of stuff, which would have required a new hard disk in the main server, which with Netware would have meant a mandatory RAM upgrade -- Netware 3 & 4 kept disks' FATs in RAM, so the bigger the disk, the more RAM the server needed.

On each client, I booted from floppy and installed DOS 6.22. Then I installed the Netware client and copied the NT 4 installation files from the new server. I ran WINNT.EXE and half an hour later it was an NT workstation, then installed Office etc. straight off the server. (An advantage of this was that client machines could auto-install any extra bits they needed straight off the server.)

For the cost of one fancy Dell server & a NOS licence, I upgraded an entire office to a fleet of fast new PCs. As a bonus, they had no local optical drives for users to install naughty local apps.

• Several 486s with PCI USB cards, driving "Manta Ray" USB ADSL modems -- yes, modems -- running Smoothwall, a dedicated Linux firewall distro.

http://www.computinghistory.org.uk/det/36102/Alcatel-Stingra...

https://www.smoothwall.org/

This was at the end of the 1990s, when 486s were long obsolete, but integrated router/firewalls were still very expensive.

Smoothwall also ran a caching Squid proxy server, which really sped up access for corporate users regularly accessing the same stuff. For instance, if all the client machines ran the same version of Windows, say, Windows 2000 Pro, then after the first ran Windows Update, all successive boxes downloaded the updates from the Smoothwall box in seconds. Both far easier and much cheaper than MS Systems Management Server. (And bear in mind, at the turn of the century, fast broadband was 1Mb/s. Most of my clients had 512kb/s.)
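The mechanism was nothing more exotic than Squid's ordinary disk cache. A rough sketch of the idea -- not Smoothwall's actual shipped settings, and the sizes here are invented -- would be something like:

```
# Not Smoothwall's real config -- just the shape of the idea: give Squid a
# decent-sized disk cache and let it keep large objects, so the second PC
# to fetch a service pack gets it from the LAN instead of the ADSL line.
cat >> /etc/squid/squid.conf <<'EOF'
cache_dir ufs /var/spool/squid 2000 16 256   # ~2 GB of on-disk cache
maximum_object_size 100 MB                   # big enough for service packs
EOF
squid -k reconfigure                         # tell the running proxy to reload
```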

There was one really hostile, aggressive guy in the Smoothwall team, who single-handedly drove away a lot of people, including me. The last such box I put in ran IPCop instead. http://www.ipcop.org/ After that, though, routers became affordable and a lot easier.

Was Acorn's RISC OS an under-appreciated pearl of OS design?

I was a huge Archimedes fan and still have an A310, an A5000, a RiscPC and a RasPi running RISC OS.

But no, I have to disagree. RISC OS was a hastily-done rescue effort after Acorn PARC failed to make ARX work well enough. I helped to arrange this talk by the project lead a few years ago.

RISC OS is a lovely little OS and a joy to use, but it's not very stable. It has no worthwhile memory protection, no virtual memory, no multi-processor support, and true preemptive multitasking is a sort of bolted-on extra (the Task Window). When someone tried to add pre-emption, it broke a lot of existing apps.

It was not some industry-changing work of excellence that would have disrupted everything. It was just barely good enough. Even after 33 years, it doesn't have wifi or bluetooth support, for instance, and although efforts are going on to add multi-processor support, it's a huge amount of work for little gain. There are a whole bunch of memory size limits in RISC OS as it is -- apps using more than 512MB of RAM are very difficult to write, and even then require hackery.

IMHO what Acorn should have done is refocus on laptops for a while -- they could have made world-beating thin, light, long-life, passively-cooled laptops in the late 1990s. Meanwhile, it could have worked with Be on BeOS for a multiprocessor Risc PC 2. I elaborated on that here on this blog.

But RISC OS was already a limitation by 1996 when NT4 came out.

I've learned from Reddit that David Braben (author of Elite and the Archimedes' stunning "Lander" demo and Zarch game) offered to add enhancements to BBC BASIC to make it easier to write games. Acorn declined. Apparently, Sony was also interested in licensing the ARM and RISC OS for a games console -- probably the PS1 -- but Acorn declined. I had no idea. I thought the only 3rd party uses of RISC OS were NCs and STBs. Acorn's platform was, at the time, almost uniquely suitable for this -- a useful Internet client on a diskless machine.

The interesting question, perhaps, is the balance between pragmatic minimalism as opposed to wilful small-mindedness.

I really recommend the Chaos Computer Congress Ultimate Archimedes talk on this subject.

There's a bunch of stuff in the original ARM2/IOC/VIDC/MEMC design (e.g. no DMA, e.g. the 26-bit Program Counter register) that looks odd but reflects pragmatic decisions about simplicity and cost above all else... but a bit like the Amiga design, one year's inspired design decision may turn out, a few years later, to be a horrible millstone around the team's neck. Even the cacheless design, which was carefully tuned to the access speeds of mid-1980s FP-mode DRAM.

They achieved greatness by leaving a lot out -- but not just from some sense of conceptual purity. Acorn's Steve Furber said it best: "Acorn gave us two things that nobody else had. No people and no money."

Acorn implemented their new computer on four small, super-simple chips and a minimalist design, not because they wanted to, but because the design team was about a dozen people with almost no budget. They found elegant work-arounds and came up with a clever design because that's all they could do.

I think it may not be a coincidence that a design based on COTS parts and components, assembled into an expensive, limited whole, eventually evolved into the backbone of the entire computer industry. It was poorly integrated, but that meant that parts could be removed and replaced without breaking the whole: the CPU, the display, the storage subsystems, the memory subsystem, in the end the entire motherboard logic and expansion bus.

I refer, of course, to the IBM PC design. It was poor then, but now it's the state of the art. All the better-integrated designs with better CPUs are gone, all the tiny OSes with amazing performance and abilities in a tiny space are gone.

When someone added proper pre-emptive multitasking to RISC OS, it could no longer run most existing apps. If CBM had added 68030 memory management to AmigaOS, it would have broken inter-app communication.

Actually, the much-maligned Atari ST's TOS got further, with each module re-implemented by different teams in order to give it better display support, multitasking etc. while remaining compatible. TOS became MINT -- Mint Is Not TOS -- and then MINT became TOS 4. It also became the proprietary MaGiC OS-in-a-VM for Mac and PC, and later, volunteers integrated 3rd party modules to create a fully GPL edition, AFROS.

But it doesn't take full advantage of later CPUs and so on -- partly because Atari didn't.
Apple famously tried to improve MacOS into something with proper multitasking, nearly went bankrupt doing so, bought their co-founder's company NeXT and ended up totally dumping their own OS, frameworks, APIs and tooling -- and most of the developers -- and switching to a UNIX.

Sony could doubtless have done wonderful stuff with RISC OS on a games console -- but note that the Playstation 4 runs Orbis, which is based on FreeBSD 9, but none of Sony's improvements have made it back to FreeBSD.

Apple macOS is also in part based on FreeBSD, and none of its improvements have made it back upstream. macOS has a better init system, launchd, and a networked metadata directory, netinfo, and a fantastic PDF-based display server, Quartz, as well as some radical filesystem tech.
You won't find any of that in FreeBSD. It may have some driver stuff but the PC version is the same ugly old UNIX OS.

If Acorn had made its BASIC into a games engine, that would have reduced its legitimacy in the sciences market. Gamers don't buy expensive kit; universities and laboratories do. Games consoles sell at a loss, like inkjet printers -- the makers earn a profit on the games or ink cartridges. It's called the Gillette razors model.

As a keen user, it greatly saddened me when Acorn closed down its workstations division, but the OS was by then a huge handicap, and there simply wasn't an available replacement by then. As I noted in that blog post I linked to, they could have done attractive laptops, but it wouldn't have helped workstation sales, not back then.

The Phoebe, the cancelled RISC PC 2, had PCI and dual-processor support. Acorn could have sold SMP PCs way cheaper than any x86 vendor, for most of whom the CPU was the single most expensive component. But it wasn't an option, because RISC OS couldn't use 2 CPUs and still can't. If they'd licensed BeOS, and maybe saved Be, who knows -- a decade as the world's leading vendor of inexpensive multiprocessor workstations doesn't sound so bad -- well, the resultant machines would have been very nice, but they wouldn't be RISC PCs because they wouldn't run Archimedes apps, and in 1998 the overheads of running RISC OS in a VM would have been prohibitive. Apple made it work, but some 5 years later, when it was normal for a desktop Mac to come with 128MB or 256MB of RAM and a few gigs of disk, and it was doable to load a 32-64MB VM with another few hundred megs of legacy OS in it. That was rather less true in 1997 or 1998, when a high-end PC had 32 or 64MB of RAM, a gig of disk, and could only take a single CPU running at a couple of hundred megahertz.

I reckon Acorn and Be could have done it -- BeOS was tiny and fast, RISC OS was positively minute and blisteringly fast -- but whether they could have done it in time to save them both is much more doubtful.
I'd love to have seen it. I think there was a niche there. I'm a huge admirer of Neal Stephenson and his seminal essay In The Beginning Was The Command Line is essential reading. It dissects some of the reasons Unix is the way it is and accurately depicts Linux as the marvel it was around the turn of the century. He lauds BeOS, and rightly so. Few ever saw it but it was breathtaking at the time.

Amiga fans loved their machine, not only for its graphics and sound, but multitasking too. This rather cheesy 1987 video does show why...


Just a couple of years later, the Archimedes did pretty much all that and more and it did it with raw CPU grunt, not fancy chips. There are reasons its OS is still alive and still in use. Now, it runs on a mass-market £25 computer. AmigaOS is still around, but all the old apps only run under emulation and it runs on niche kit that costs 5-10x more than a PC of comparable spec.

A decade later, PCs had taken over and were stale and boring. Sluggish and unresponsive despite their immense power. Acorn computers weren't, but x86 PCs were by then significantly more powerful, had true preemptive multitasking, built-in networking and WWW capabilities and so on. But no pizazz. They chugged. They were boring office kit, and they felt like it.

But take a vanilla PC and put BeOS on it, and suddenly, it booted in seconds, ran dozens of apps with ease without flicker or hesitation, played back multiple video streams while rendering them onto OpenGL 3D solids. And, like the Archimedes did a decade before, all in software, without hardware acceleration. All the Amiga's "wow factor" long after we'd given up ever seeing it again.

This was at a time when Linux hadn't even got a free desktop GUI yet, required hand-tuning thousands of lines of config files (like OS/2 at its worst), and had no productivity apps.

But would this have been enough to keep A&B going until mass-market multi-core x86 chips came along and stomped them? Honestly, I really doubt it. If Apple had bought Be, it would have got a lovely next-gen OS, but it wouldn't have got Steve Jobs, and it wouldn't have been able to tempt classic MacOS devs to the new OS with amazing next-gen dev tools. I reckon it would have died not long after.

If Acorn and Be had done a deal, or merged or whatever, would there have been enough appeal in the industry's cheapest dual-processor RISC workstation, with amazing media abilities? (Presumably, soon after, quad-CPU and even 6- or 8-CPU boxes.)

I hate to admit it, but I really doubt it.

The state of the desktop art

(Repurposing a couple of my Reddit comments, replying to a chap considering switching to Linux because of design and look-and-feel considerations.)

I would say that you need to bear in mind that Linux is not a single piece of software by a single company. Someone once made the comparison something like this: "FreeBSD is a single operating system. Linux is not. Linux is 3,000 OS components flying in close formation."

The point is that every different piece was made by a different person, group of people, organisation or company, working to their own agenda, with their own separate plans and designs. All these components don't look the same or work the same because they're all separately designed and written.

If you install, say, a GTK-based desktop and GTK-based components, then there's a good chance there will be a single theme and they'll all look similar, but they might not work similarly. If you then install a KDE app it will suck in a whole ton of KDE libraries and they might look similar but they might also look totally different -- it depends on how much effort the distro designers put in.

If you want a nice polished look and feel, then your best bet is to pick a mainstream distro and its default desktop, because the big distro vendors have teams of people trying to make it look nice.

That means Ubuntu or Fedora with GNOME, or openSUSE with KDE.

(Disclaimer: I work for SUSE. I run openSUSE for work. I do not use KDE, or GNOME, as I do not personally like either.)

If you pick an OS that is a side-project of a small hardware vendor, then you are probably not going to get the same level of fit and finish, simply because the big distros are assembled by teams of tens to hundreds of people as their day job, whereas the smaller distros are a handful of volunteers, or people working on a side-job, and the niche distros are mostly one person in their spare time, maybe with a friend helping out sometimes.

Windows is far more consistent in this regard, and macOS is more consistent than Windows. None of them are as consistent as either Windows or Classic MacOS were before the WWW blew the entire concept of unified design and functionality out of the water and vapourised it into its component atoms, never to be reassembled.

Don't judge a book by its cover -- everyone knows that. Well, don't judge a distro by a couple of screenshots.

As for my expertise -- well, "expertise" is very subjective! :-D You would easily find people who disagree with me -- there are an awful lot of strong biases and preconceptions in the Linux world.

For one thing, it is so very customisable that people have their own workflows that they love and they won't even consider anything else.

For another, there is 51 years of UNIX™ cultural baggage. For example in the simple matter of text editors. There are two big old text editors in the UNIX world, both dating from the 1970s. Both are incredibly powerful and capable, but both date from an era before PCs, before screens could display colours or formatting or move blocks of characters around "live" in real time, before keyboards had cursor keys or keys for insert, delete, home, end, and so on.

So both are horrible. They are abominations from ancient times, with their own weird names for everyday stuff like "files" and "windows" -- because they are so old they predate words like "files" and "windows"! They don't use the normal keyboard keys and they have their own weird names for keyboard keys, names from entire companies that went broke and disappeared 30 or 40 years ago.

But people still use these horrible old lumps of legacy cruft. People who were not yet born when these things were already obsolete will fight over them and argue that they are the best editors ever written.

Both GNOME and KDE are very customisable. Unfortunately, you have to customise them in the ways that their authors thought of and permitted.

KDE has a million options to twiddle, but I happen to like to work in ways that the KDE people never thought of, so I don't get on with it. (For example, on a widescreen monitor, I put my taskbar vertically on the left side. This does not work well with KDE, or with MATE, or with Cinnamon, or most other desktops, because they never thought of it or tried it, even though it's been a standard feature of Windows since 1995.)

GNOME has almost no options, and its developers are constantly looking for things they don't use and removing them. (Unfortunately, some of these are things I use a dozen times a day. Sucks to be me, I guess.) If you want to customise GNOME, you have to write your own add-on extensions in JavaScript. JavaScript is very trendy and popular, which is a pity, as it is probably the worst programming language in the world. After PHP, anyway.

So if you want to customise GNOME, you'd better hope that someone somewhere has programmed the customisation you want, and that their extension still works, because there's a new version of GNOME every 6 months and it usually breaks everything. If you have a broken extension, your entire desktop might crash and not let you log in, or log out, or do anything. This is considered perfectly normal in GNOME-land.

Despite this, these two desktops are the most popular ones around. Go figure.

There was one that was a ripoff of Mac OS X, and I really liked it. It was discontinued a few years ago. Go figure.

Rather than ripping off other desktops, the trend these days is to remove most of the functions, and a lot of people like super-minimal setups with what are called "tiling window managers". These basically try to turn your fancy true-colour hardware-3D-accelerated high-definition flat-panel monitor into a really big glass text terminal from 1972. Go figure.

There used to be ripoffs of other OSes, including from dead companies who definitely won't sue. There were pretty good ripoffs of AmigaOS, classic MacOS, Windows XP, Acorn RISC OS, SGI Irix, NeXTstep, Sun OpenLook, The Open Group's CDE and others. Most are either long dead, or almost completely ignored.

Instead today, 7 out of the 8 leading Linux desktops are just ripoffs of Windows 95, of varying quality. Go figure.


Attempted answer to the FAQ: "Will `apt-get dist-upgrade` install a new Ubuntu release?"

In the context of the `apt` command, `update` means "refresh the database containing the current index of what versions are in the configured repositories". It does not install, remove, upgrade or change any installed software.
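In practice (a quick sketch):

```
sudo apt update          # re-reads the repository indexes; installs and removes nothing
apt list --upgradable    # shows what *could* now be upgraded -- still changes nothing
```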

I wonder if this is because of people lacking historical context?

The important things to know are 3 concepts: dependencies, recursion, and resolution.

The first Linux distributions, like SLS and Yggdrasil and so on, were built from source. You want a new program? Get the source and compile it.

Then package managers were invented. Someone else got the source, compiled it, bundled it up in a compressed archive with any config files it needed and instructions for where to put its contents on your computer.

As programs got more complex, they were built using other programs. So the concept of "dependencies" appeared. Let's say text editor "Superedit" can import RFT (Revisable Form Text) files, and save RTF (Rich Text Format) files. It does not read these formats itself: it uses another tool, rich_impex, and rich_impex needs rft_import and rtf_export.

(Note: RTF and RFT are real formats and they are totally different and unrelated. I picked them intentionally as their names are so similar.)

If you need a new version of Superedit, then you first need a new version of rich_impex. But rich_impex needs rft_import and rtf_export.

So in the early days of Linux with package managers, e.g. Red Hat Linux 4, if you tried to install superedit.2.rpm, it would fail, saying it needed rich_impex-1.1.rpm. This is called a dependency.

And if you tried to install rich_impex-1.1.rpm, it said you needed rft_import 1.5 and rtf_export 1.7.

So to install Superedit 2, you had to try, fail, note down the error, then go try to install rich_impex, which would fail, then note down the error, then go install rft_import 1.5, and rtf_export 1.7.

THEN you could install rich_impex 1.1.

THEN you would find that it was now possible to install superedit_2.rpm.

It was a lot of work. Installing something big, like KDE 1, would be almost impossible as you had to go find hundreds of these dependencies, by trial and error. It could take days.

Debian was the first to fix this. To its package manager, dpkg, it added another tool on top: apt.

Apt did automatic dependency resolution. So when you tried to install superedit 2, it would check and find that superedit-2 needed rich_impex-1.1 and install that for you.

This is no use if it does 1 level and stops. It would fail when it couldn't install rich_impex because that in turn had its own dependencies.

So what is needed is a tool that goes, installs your dependencies, and their dependencies, and their dependencies, all the way down, starting with the ends of each chain. This requires a programming technique called recursion:
https://dev.to/rapidnerd/comment/62km
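Here's a toy illustration of what that means, using the made-up packages from the example above. (This is just the shape of the idea in shell -- apt's real resolver is nothing like this simple.)

```
#!/bin/bash
# Toy dependency resolver: install a package's dependencies first, then the
# package itself, by calling the same function on each dependency.
declare -A DEPS=(
  [superedit-2]="rich_impex-1.1"
  [rich_impex-1.1]="rft_import-1.5 rtf_export-1.7"
)

install() {
  local pkg=$1
  for dep in ${DEPS[$pkg]}; do
    install "$dep"            # recurse: sort out the dependency's own dependencies first
  done
  echo "installing $pkg"      # only reached once the whole chain beneath it is done
}

install superedit-2
# prints "installing ..." for rft_import, rtf_export, rich_impex, then superedit
```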

Now, remember that superedit-2 depends on rich_impex, which depends on rft_import and rtf_export.

But sadly, the maintainer of rft_import got run over by a bus and died. So, no new versions of rft_import. That means no new version of rich_impex which means no new version of superedit.

So someone comes along, reads the source code of rft_import, thinks they could do it better, and writes their own routine. They call it import_rft because they don't want to have to fix any bugs in rft_import.

The writer of rich_impex does a new version, rich_impex 2. They switch the import filter, so rich_impex 2 uses import_rft 1.0 and rtf_export 1.8.

Superedit 3 also comes out and it uses rich_impex 2. So if you want to upgrade from superedit 2 to superedit 3, you need to upgrade rich_impex 1.1 to rich_impex 2. To get rich_impex 2, you need to remove rft_import and install a new dependency, import_rft.

When you start recursively resolving a problem like this, you don't know where it's going to go. You find out on the way.

So apt has 2 choices:

[1] recurse, install newer versions of anything needed, until you can upgrade the target package (which could be "all packages"), but don't add anything that isn't there

OR

[2] recurse, install all newer versions of anything needed INCLUDING ADDING NEW PACKAGES, until the entire distribution has been upgraded

#1 is meant for 1 program at a time, but you can tell it to do all programs. But it won't add new packages.

So if you use `apt-get upgrade` you will not get superedit 3, because to install superedit 3, it will have to install rich_impex 2, and that means it would need to remove rft_import and install import_rft instead. `upgrade` won't do that -- it only installs newer versions. So your copy of superedit will be stuck at v2.

#2 is meant for upgrading the whole installed system to the latest version of all packages, including adding any new requirements it needs on the way.

If you do it, it will replace superedit 2 with superedit 3, because `dist-upgrade` has the authority to remove the rft_import module and install a different one, import_rft, in its place.

Neither of them will rewrite the sources listed in /etc/apt/sources.list. Neither of them will ever upgrade the entire distro to a new release. Neither of them will ever move from one major release of Ubuntu or Debian or Crunchbang or Mint or Bodhi or whatever to a new release.

All they do is update that version of the distribution to the newest version of that release.

"Ubuntu 20.04" is not a distribution. "Ubuntu" is the distribution. "20.04" is a release of the distribution. It's the 32nd so far. (W, H, B, D then through the alphabet from E to Z, then back to A. Now we're at F again.)

So `dist-upgrade` does not upgrade the release. It upgrades your whole DISTRO but only to the latest version of that release.

If you want a new release then you need `do-release-upgrade`.

Do not use `apt upgrade` for upgrading the whole distro; `apt dist-upgrade` does a more thorough job. `apt upgrade` will not install superedit 3 because it won't add new packages or remove obsolete ones.

In the old days, you should have used `apt-get dist-upgrade`, because it would replace or remove obsoleted dependencies.

Now, you should use `apt full-upgrade` which does the same thing.
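So, in practice, on a current Ubuntu box, the progression looks like this (just a sketch of the commands discussed above):

```
sudo apt update              # refresh the package index; changes nothing installed
sudo apt upgrade             # newer versions only; never adds or removes packages
sudo apt full-upgrade        # the modern spelling of apt-get dist-upgrade: may add or
                             # remove packages, but stays on the same release
sudo do-release-upgrade      # the only one of these that moves you to a new release
```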

Relax. Rest assured, neither will ever, under any circumstances, upgrade to a new release.

A great price for a cheap BASIC – but with an extremely expensive legacy

Commodore's Jack Tramiel got a very sweet deal from Microsoft for MS BASIC, as used in CBM's PET, one of the first integrated microcomputers. The company didn't even pay royalties. The result is that CBM used pretty much the same BASIC in the PET, VIC-20 and C64. It got trivial adjustments for the hardware, but bear in mind: the PET had no graphics, no colour, and only a beep; the VIC-20 had (poor) graphics and sound, and the C64 had quite decent graphics and sound.

So the BASIC was poor for the VIC-20 and positively lousy on the C64. There were no commands to set colours, draw or load or save graphics, play music, assemble sound effects, nothing.

I.e. in effect the same BASIC interpreter got worse and worse with each successive generation of machines, ending up positively terrible on the C64. You had to use PEEKs and POKEs to use any of the machine's facilities.

AIUI, CBM didn't want to pay MS for a newer, improved BASIC interpreter. It thought, with some justice, that the VIC-20 and C64 would mainly be used as games machines, running 3rd-party games written in assembly language for speed, and so the BASIC was a reasonable saving: a corner it could afford to cut.

The C64 also had a very expensive floppy disk drive (with its own onboard 6502 derivative, ROM & RAM) but a serial interface to the computer, so it was both dog-slow and very pricey.

This opened up opportunities for competition, at least outside the US home market. It led to machines like (to pick 2 extremes):
• the Sinclair ZX Spectrum, which was cheaper & had a crappy keyboard, no joystick ports, etc., but whose BASIC included graphics and sound commands.
• the Acorn BBC Micro, which was expensive (like the C64 at launch), but included a superb BASIC (named procedures with local variables, allowing recursion; if/then/else, while...wend, repeat/until etc., and inline assembly code), multiple interfaces (printer, floppy drive, analogue joysticks, 2nd CPU, programmable parallel expansion bus, etc.)

All because CBM cheaped out and used a late-1970s MS BASIC in an early-1980s machine with, for the time, quite high-end graphics and sound.

The C64 sold some 17 million units, so a lot of '80s kids knew nothing else and thought the crappy BASIC was normal. Although it was one of the worst BASICs of its day, it's even been reimplemented as FOSS now! The worst BASIC ever lives on, while far finer versions such as Beta BASIC or QL SuperBASIC languish in obscurity.

It is also largely responsible, all on its own, for a lot of the bad reputation that BASIC has to this day, which in turn was in part responsible for the industry's move away from minis programmed in BASIC (DEC, Alpha Micro, etc.) and towards *nix programmed in C, and *nix rivals such as OS/2 and Windows, also programmed in C.

Which is what has now landed us with an industry centred around huge, unmaintainable, insecure OSes composed of tens of millions of lines of unsafe C (& C derivatives), daily and weekly mandatory updates in the order of hundreds of megabytes, and a thriving industry centred around keeping obsolete versions of these vast monolithic OSes (which nobody fully understands any more) maintained and patched for 5, 10, even 15 or so years after release.

Which is the business I work in.

Yay.

It sounds ridiculous but I seriously propose that much of this is because the #1 home computer vendor in the Western world kept using a cheap and nasty BASIC for nearly a decade after its sell-by date.

CBM had no real idea what it was doing. It sold lots of PETs, then lots more VIC-20s, then literally millions of C64s, without ever improving the onboard software to match the hardware.

So what did it do next? A very expensive portable version, for all the businesspeople who needed a luggable home gaming computer.

Then it tried to sell incompatible successor machines, which failed -- the Commodore 16 and Plus 4.

Better BASIC, bundled ROM business apps -- why?! -- but not superior replacements for its best-selling line. Both flopped horribly.

This showed that CBM apparently still had no real clue why the C64 was a massive hit, or who was buying it, or why.

Later it offered the C128, which had multiple operating modes, including a much better BASIC and an 80-column display, but also an entire incompatible 2nd processor -- a Z80 so it could run CP/M. This being the successor model to the early-'80s home computer used by millions of children to play video games. They really did not want, need or care about CP/M of all things.

This sold a decent 5 million units, showing how desperate C64 owners were for a compatible successor.

(Commodore people often call this the last new 8-bit home computer -- e.g. its lead designer Bil Herd -- which of course it wasn't. The Apple ][GS was in some ways more radical -- its 16-bit enhanced 6502, the 65C816, was more use than the C128's 2 incompatible 8-bit chips, for a start -- and came out the following year. Arguably a 16-bit machine, though, even if it was designed to run 8-bit software.

But then there was the UK SAM Coupé, a much-enhanced ZX Spectrum clone with a Z80, released 4 years later in 1989. Amstrad's PcW 16, again a Z80 machine with an SSD and a GUI OS, came out in 1995.)

There was nearly another, incompatible of course, successor model later still, the C65.

That would have been a worthy successor, but by then, CBM had bought the Amiga and wasn't interested any more -- and wisely, I think, didn't want to compete with itself.

To be fair, it's not entirely obvious what CBM should have done to enhance the C64 without encroaching too much into the Amiga's market. A better CPU, such as the SuperCPU, a small graphics upgrade as in the C128, and an optional 3.5" disk drive would have been enough, really. The GEOS OS was available and well-liked.

GEOS was later ported to the x86, as used in the HP OmniGo 100 -- I have one somewhere -- and later became GeoWorks Ensemble, which tried to compete with MS Windows. PC GEOS is still alive and is now, remarkably, FOSS. I hope it gets a bit of a renaissance -- I am planning to try it on my test Open DR-DOS and IBM PC-DOS 7.1 systems. I might even get round to building a live USB image for people to try out. 

Not one but 𝘁𝘄𝗼 complete, working, & 𝙪𝙨𝙚𝙛𝙪𝙡 Raspberry Pi projects!

I have several RasPis lying around the place. I sold my π2 when I got a π3, but then that languished largely unused for several years, after the fun interlude of getting it running RiscOS in an old ZX Spectrum case.

Then I bought myself a π3+ in a passive-cooling heatsink/case for Yule 2018, which did get used for some testing at work, and since then, has also been gathering dust. I am sure this is the fate of many a π.

The sad thing about the RasPi is that it's a bit underpowered. Not unreasonable for a £30 computer. The π1 was a single rather gutless ARMv6 core. The π2 at least had 4 cores, but still weedy ones. The π3 had faster cores and wifi, but all still only have 1GB of non-upgradable RAM. They're not really up to running a full Linux desktop. What's worse, the Ethernet and wifi are USB devices, sharing the single USB2 bus with any external storage – badly throttling the bandwidth for server stuff. The π3+ is a bit less gutless but all the other limitations apply – and it needs more power and some form of cooling.

But then a chap on FesseBouc offered an official π touchscreen, used and a bit cheaper than new. That gave me an idea. I listen to a lot of BBC 6music – I am right now, in fact – but it needs a computer. Czech radio seems to mainly play a lot of bland pop which isn't my thing, and of course I can't understand a useful amount of Czech yet. It's at about the level of my Swedish in 1993 or so: if I listen intently and concentrate very hard, I may be able to work out the subject being discussed, but not follow the discussion.

But I don't want to leave a laptop on 24×7 and I definitely don't want a big computer with a separate screen, keyboard and mouse doing it. What I want is something the size of a radio but which can connect to wifi and stream music to simple old-fashioned wired speakers, without listening to me. I most definitely do not want a spy basestation for a dot-com listening to my home, thank you.
So I bought the touchscreen, connected it to my old π3, powered them both off a couple of old phone chargers, bunged in a spare µSD card, and started playing with software. I know where I am with software.

First I tried OSMC. It worked, detected and used the touchscreen, and could connect to my wifi... but it doesn't directly support streaming audio, as far as I can tell, and I could not work out how to install add-ins, nor how to update the underlying Linux.

I had a look at LibreElec but it looked very similar. While I don't really want the bloat of an entire general-purpose Linux distro, I just want this to work, and I had 8GB to play with, which is plenty.

So next I tried XBian. This is a cut-down Debian, running on Btrfs, which boots straight to Kodi. Kodi used to be called XBox Media Centre, and that's where I first met it – I softmodded an old original black XBox that my friend Dop gave me and put XBMC on it. It streamed movies off my server and played DVDs through my TV set, which is all I needed.

XBian felt a lot more familiar. It has a settings page through which I could update the underlying OS. It worked with the touchscreen out of the box. It has a UI for connecting to wifi. It too didn't include streaming Internet radio support, but it had a working add-ons browser, in which I found both BBC iPlayer and Internet Radio extensions.

Soon I was in business. It connected to wifi, it was operable with the touchscreen, connected to some old Altec Lansing speakers I had lying around. So I bought a case from Mironet, my friendly local electronics store. (There is a veritable Aladdin's Cave even closer to my office, GM electronic – but I'm afraid they're not very friendly. Sort of the opposite, in fact.)

I assembled the touchscreen and π3 into my case, and hit a problem. Only one available opening for a µUSB lead, but the screen needs its own. Some Googling later, it emerges that you can power the touchscreen from the π's GPIO pins, but I don't have the cables.

So off to GME it was, and some tricky negotiations later, I bought a strip of a dozen jumper cables. Three of them got me in business, but since it was £1 for all of them, I can't really complain about the wastage.

So now, there's a little compact unit in my bedroom which plays the radio whenever I want, on the power usage of a lightbulb. No fans, no extra cooling, nothing. I've had to use my single Official Raspberry Pi PSU brick, as all my phone chargers gave me the lightning-bolt icon undervoltage warning.

This emboldened me for Project 2.

Some years ago, Morgan's had a cheap offer on 2TB hard disks. I bought all their remaining stock, 5 mismatched drives. One went into an external case for my Mac mini and later died. The other four were in a box, pending installation into my old HP Microserver G1, which currently has 4×300GB drives in it, in a Linux software RAID controlled by Ubuntu. (Thanks to hobnobs!) However, this only has 2GB of RAM, and I figured that wasn't enough for a 5TB RAID. I may have accidentally killed it trying to fit more RAM, and the job of troubleshooting and fixing it has been waiting for, um, a couple of years now.

Meanwhile, the iMac's 1TB Fusion Drive was at 97.5% full and I don't have any drives big enough to back up everything on it.

I slowly and reluctantly conceded to myself that it might be quicker and easier to build a new server than fix and upgrade the old one.

The Raspberry Pi 4 is quite a different beast. Apart from a beefier 64-bit quad-core (Cortex-A72), it has 2GB and 4GB RAM options, and it has much faster I/O. Its wifi and Ethernet are directly attached to the CPU, not on the USB bus, and it has two separate USB buses: the old USB2 bus (480Mb/s) and a new 5Gb/s USB3 bus. This is useful power. It can also drive dual monitors via twin µHDMI ports.

But the π4 runs quite hot. The Flirc case my π3+ is in is only meant for home theatre stuff. A laden π4 needs something beefier, and sadly, my local mail-order electronics place, Alza, doesn't offer anything that appealed. I found the Maouii case on Amazon Germany and that fit the bill. (It also gave me a good excuse to buy the entire Luna trilogy by Ian McDonald in order to qualify for free shipping.)

So, from Alza I ordered a 4GB π4 and 4 USB3 desktop drive cases. From Mall CZ I ordered a USB3 hub with a fairly healthy 2.5A power output, thinking this would be enough to power a headless π4. USB C cables and µSD cards I have, and I figured all the USB 3 cables would come with the enclosures, which they did. In these quarantine lockdown times, the companies deliver to electronically-controlled mailboxes in shopping malls and so on, where you enter a code and pick up your package without ever interacting with a potentially-infectious human being.

It was all with me within days.

Now, I place some trust in those techies that I know who are more skilled and experienced than I, especially if they are jaded, cynical ones. File systems are one of the few significant differentiating factors between modern Linux server distros. Unfortunately, a few years ago, the kernel maintainers refused to integrate EVMS and picked the far simpler LVM instead. This has left something of a gap, with enterprise UNIXes still having more sophisticated storage tech than Linux. On the upside, though, this is driving differentiation.

SUSE favours Btrfs, although there's less enthusiasm outside the company. It is stable, but even now, you're recommended not to try to repair a Btrfs filesystem, and it can't give a reliable answer to the 'df' command – in other words, the basic question "how much free space have I got left?"

I love SUSE's YaST admin tool, and for other server stuff, especially on x86, I would probably recommend it, but it's not ideal for what I wanted in this role. Its support for the π4 is a bit preliminary so far, too.

Red Hat has officially deprecated Btrfs, but that left it with the problem that LVM with filesystems placed on top is a complex solution which still leaves something lacking, so with its typical galloping NIH syndrome, it is in the process of inventing an entirely new disk management layer, Stratis. Stratis integrates SGI's tried-and-tested, now-FOSS XFS filesystem with LVM into a unified disk management system.

Yeah, no thanks. Not just yet. I am not fond of Fedora, anyway. No stable or LTS versions (because that's RHEL's raison d'etre). CentOS is a different beast, and also not really my thing. And Fedora is also a bit more bleeding-edge than I like. I do not consider Fedora a server OS; it's more of a rolling tech testbed for RHEL.

Despite some dissenting voices, the prevailing opinion seems to be that Sun's ZFS is the current state of the art. Ubuntu has decided to go with ZFS, although its license is incompatible with the Linux kernel's GPL. Ubuntu is, to be honest, my preferred distro for desktop stuff, and I've run it on πs before. It works well – better than Fedora, which like Debian eschews non-free drivers completely. It doesn't have Raspbian's hardware acceleration, but then everyone uses Raspbian on the π, so that's the obvious target for that work.

So, Ubuntu Server. Modern versions include ZFS built-in.

I tested this in a VM. Ubuntu Server 18.04 on its own ext4 boot drive... then add a bunch of 20GB drives to the VM... then tell it to create a RAIDZ. One very short time later, it has not only partitioned my drives, created an array, and formatted it, it's also created a mount point and mounted the new array on it. In seconds.

This is quite impressive and far more automatic than the many manual steps involved in doing this with the old Linux built-in 'mdraid' subsystem, as used in my old home server.

Conveniently – it was totally unplanned – by the time all my π4 bits were here, a new Ubuntu LTS was out, 20.04.

I installed all my drives into their new enclosures, plugged them one-by-one into one of my iMac's USB3 ports, and checked that they registered as 2TB drives. They did. Result. Oh, and yes, the cables were in the boxes. USB3 cables are entertainingly fat with shielding, but 5Gb/s is not to be sniffed at.

So, I put my new π4 in its case, put the latest Ubuntu Server on a µSD card – and hit a problem. I can't connect a display. I only have one HDMI monitor and nothing that will connect to a π4's micro-HDMI ports. And I don't really want to try to set this all up headless.

So off to Alza's actual physical shop I trogged to buy a µHDMI to HDMI convertor. Purchasing under quarantine is tricky, so it took a while, but I got it.

Fired up the π4 and it ran fine. No undervoltage warning running off the hub. So I hooked up all the drives, and sure enough, all were visible to the 'lsusb' command.

I referred to various howtos. Hmm. Apparently, you need to put partition records on them. Odd; I thought ZFS subsumed partitioning. Oh, well. I put an empty GUID disklabel on each drive. Then I added them to a RAIDZ, ZFS' equivalent of a RAID5 array.

Well, it wasn't as quick as in a VM, but only a minute or so of heavy disk activity later, the array was created and formatted, its mountpoint created, and it was online. This is quite impressive stuff.
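For the record, the whole operation boils down to very few commands. A sketch, assuming the pool is called "tank" and the four USB enclosures turned up as /dev/sda to /dev/sdd – check with lsblk first, because real device names will differ, and getting this wrong destroys data:

```
sudo apt install zfsutils-linux              # the ZFS userland tools on Ubuntu
lsblk                                        # confirm which devices are the new drives
sudo zpool create tank raidz \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd      # one RAIDZ vdev built from all four drives
sudo zpool status tank                       # the pool should show as ONLINE
df -h /tank                                  # ZFS has already created and mounted /tank
```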



Then came the usual joys of Linux' fairly poor subsystem integration: Samba is a separate, different program; Samba user accounts are not Linux user accounts, so passwords are different. Mounted filesystems inherit the permissions of their mountpoint. Macs still favour the old Apple protocol, so you need Netatalk as well. It, of course, doesn't integrate with Samba. NFS has two alternatives, and neither, of course, integrates with either Samba or Netatalk. There are good reasons NT caught on, which Apple successfully imitated and even exceeded in Mac OS X – and the Linux world remains as blindly indifferent to them as it has for a quarter of a century.
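To give a flavour of the duplication, the Samba side alone looks roughly like this – a sketch, with "liam" and /tank/shared as made-up examples, and the share itself still has to be defined in /etc/samba/smb.conf:

```
sudo apt install samba netatalk      # SMB for Windows & Linux clients, AFP for older Macs
sudo smbpasswd -a liam               # Samba keeps its own password database, quite
                                     # separate from the Linux account's password
sudo chown -R liam: /tank/shared     # permissions come from the directory being shared
sudo systemctl restart smbd          # pick up any smb.conf changes
```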

But some hours of swearing later, it all works. I can connect from Windows, Linux or Mac. It's all passively-cooled so it runs almost completely silently. It does need five power sockets, which is a snag, and there's a bit of cable spaghetti, but for an outlay of about £150 I have a running server which can sustain write speeds of about a gigabyte per second to the array.

I've put my old friend Webmin on it for a friendly web GUI.


So there you are.

While the π3 is a little bit underpowered, for a touchscreen Internet radio, it's great, and I'm very pleased with the result.

But the π4 is very different. It's a thoroughly capable little machine, perfectly usable as a general-purpose desktop PC, or as a server with quite decent bandwidth.

No, the setup has not been a beginner-friendly process. Apparently OpenMediaVault has versions for some single-board computers, including the π3, but not for the π4 yet. I am sure wider support will come.

But overall I'm impressed with how easy this was, without vast expert knowledge, and I'm delighted with the result. I will keep you posted on how it works longer-term.

Moore's Law: I ATEN'T DEAD [tech blog post, by me]

Hmmm. For the first time, ever, really, I hit the limits of modern vs. decade-old wifi and networking.

My home broadband is 500Mb/s. Just now, what with quarantine and so on, I have had to set up a home office in our main bedroom. My "spare" Mac, the Mac mini, has been relegated to the guest room and my work laptop is set up on a desk in the bedroom. This means I can work in there while Jana looks after Ada in the front room, without disturbing me too much.

(Aside: I'm awfully glad I bought a flat big enough to allow this, even though my Czech friends and colleagues, and realtor, all thought I was mad to want one so big.)

The problem was that I was only getting 3/5 bars of wifi signal on the work Dell Latitude, and some intermittent connectivity problems – transient outages and slowdowns. Probably this is when someone uses their microwave oven nearby or something.

It took me some hours of grovelling around on my hands and knees – which is rather painful if one knee has metal bits in -- but I managed to suss out the previous owners' wiring scheme. I'd worked out that there was a cable to the middle room, and connected it, but I couldn't find the other end of the cable to the master bedroom.

So, I dug out an old ADSL router that one of my London ISPs never asked for back: a Netgear DGN-1000. According to various pages Google found, this has a mode where it can be used as a wireless repeater.

Well, not on mine. The hidden webpage is there, but the bridge option isn't. Dammit. I should have checked before I updated its firmware, shouldn't I?

Ah well. There's another old spare router lying around, an EE BrightBox, and this one can take an Ethernet WAN – it's the one that firewalled my FTTC connection. It does ADSL as well but I don't need that here. I had tried and failed to sell this one on Facebook, which meant looking it up and discovering that it can run OpenWRT.

So I tried it. It's quite a process -- you have to enable a hidden tiny webserver in the bootloader, use that to unlock the bootloader, then use the unlocked bootloader to load a new ROM. I did quite a lot of reading and discovered that there are driver issues with OpenWrt. It works, but apparently ADSL doesn't (don't care, don't need it), and its wifi chip is not fully supported: with the FOSS driver it maxes out at 54Mb/s.

Sounds like quite a lot, but it isn't when your broadband is half-gigabit.

So I decided to see what could be done with the standard firmware, with its closed-source Broadcom wifi driver.

(Broadcom may employ one of my Great Heroines of Computing, the remarkable Sophie Wilson, developer of the ARM processor, but their record on open-sourcing drivers is not good.)

So I found a creative combination of settings to turn the thing into a simple access point as it was, without hacking it. Upstream WAN on Ethernet... OK. Disable login... OK. Disable routing, enable bridging... OK.

Swaths of the web interface are disappearing as I go. Groups of fields and even whole tabs vanish each time I click OK. Disable firewall... OK. Disable NAT... OK. Disable DHCP... OK.

Right, now it just bridges whatever on LAN4 onto LAN1-3 and wifi. Fine.

Connect it up to the live router and try...

And it works! I have a new access point, and 2 WLANs, which isn't ideal -- but the second WLAN works, and I can connect and get an Internet connection. Great!

So, I try through the wall. Not so good.

More crawling around and I find a second network cable in the living room that I'd missed. Plug it in, and the cable in the main bedroom comes alive! Cool!

So, move the access point in there. Connect to it, test... 65-70 Mb/s. Hmm. Not that great. Try a cable to it. 85 Mb/sec. Uninspiring.

Test the wifi connection direct to the main router...

Just over 300 Mb/s.

Ah.

Oh bugger!

In other words, after some three hours' work and a fair bit of swearing, my "improved", signal-boosted connection is at best one-fifth as fast as the original one.

I guess the lessons are that, firstly, my connection speed really wasn't as bad as I thought, and secondly, I was hoping to improve it for free, with some ingenuity and kit I had lying around.

The former invalidates the latter: it's probably not worth spending money on improving something that is not in fact bad in the first place.

I don't recall when I got my fibre connection in Mitcham, but I had it for at least a couple of years, maybe even 3, so I guess around 2011-2012. It was blisteringly quick when I got it, but the speeds fell and fell as more people signed up and the contention on my line rose. Especially at peak times in the evenings. The Lodger often complained, but then, he does that anyway.

But my best fibre speeds in London were 75-80Mb/s just under a decade ago. My cable TV connection (i.e. IP over MPEG (!)) here in Prague is five times faster.

So the kit that was an adequate router/firewall then, which even supports a USB2 disk as a sort of NAS, is now pitifully unequal to the task. It works fine, but its maximum performance would actually reduce the speed of my home wifi – never mind its Fast Ethernet hub, now that I need gigabit just for my broadband.

I find myself reeling a little from this.

It reminds me of my friend Noel helping me to cable up the house in Mitcham when I bought it in about 2002. Noel, conveniently, was a BT engineer.

We used Thin Ethernet. Yes, Cheapernet, yes, BNC connections etc. Possibly the last new deployment of 10base-2 in the world!

Why? Well, I had tons of it. Cables, T-pieces, terminators, BNC network cards in ISA or PCI flavours, etc. I had a Mac with BNC. I had some old Sun boxes with only BNC. It doesn't need switches or hubs or power supplies. One cable is the backbone for the whole building -- so fewer holes in the wall. Noel drilled a hole from the small bedroom into the garage, and one from the garage into the living room, and that was it. Strategic bit of gaffer tape and the job's a good 'un.

In 2002, 10 Mb/s was plenty.

At first it was just for a home LAN. Then I got 512kb/s ADSL via one of those green "manta ray" USB modems. Yes, modem, not router. Routers were too expensive. Only Windows could talk to them at first, so I built a Windows 2000 server to share the connection, with automatic fallback to 56k dialup to AOL (because I didn't pay call charges).

So the 10Mb/s network shared the broadband Internet, using 5% of its theoretical capacity.

Then I got 1Mb/s... Then 2Mb/s... I think I got an old router off someone for that at first. The Win 2K Server was a Pentium MMX/200MHz and was starting to struggle.

Then 8Mb/s, via Bulldog, who were great: fast and cheap, and they not only did ADSL but the landline too, so I could tell BT to take a running jump. (Thereby hangs a tale, too.)

With the normal CSMA/CD Ethernet congestion, already at 8Mb/s, the home 10base-2 network was not much quicker than wifi -- but it was still worth it upstairs, where the wifi signal was weaker.

Then I got a 16Mb/s connection and now the Cheapernet became an actual bottleneck. It failed – the great weakness of 10base-2 is that a cable break anywhere brings down the entire LAN – and I never bothered to trace it. I just kept a small segment to link my Fast Ethernet switch to the old 10Mb/s hub for my testbed PC and Mac. By this point, I'd rented out my small bedroom too, so my main PC and server were in the dining room. That meant a small 100base-T star LAN under the dining table was all I needed.

So, yes, I've had the experience of networking kit being obsoleted by advances in other areas before – but only very gradually, and I was starting with 1980s equipment. It's a tribute to great design that early-'80s cabling remained entirely usable for 25 years or more.

But to find that the router from my state-of-the-art, high-speed broadband from just six years ago, when I emigrated, is now hopelessly obsolete and a significant performance bottleneck: that was unexpected and disconcerting.

Still, it's been educational. In several ways.

The thing that prompted the Terry Pratchett reference in my title is this:
https://www.extremetech.com/computing/95913-koomeys-law-replacing-moores-focus-on-power-with-efficiency
https://www.infoworld.com/article/2620185/koomey-s-law--computing-efficiency-keeps-pace-with-moore-s-law.html

A lot of people are still in deep denial about this, but x86 chips stopped getting very much quicker in about 2007 or so. The end of the Pentium 4 era, when Intel realised that they were never going to hit the 5 GHz clock that Netburst was aimed at, and went back to an updated Pentium Pro architecture, trading raw clock speeds for instructions-per-clock – as AMD had already done with the Sledgehammer core, the origin of AMD64.

Until then, since the 1960s, CPU power roughly doubled every 18 months. For 40 years.
8088: 4.77MHz.
8086: 8MHz.
80286: 6, 8, 12, 16 MHz.
80386: 16, 20, 25, 33 MHz.
80486: 25, 33; 40, 50; 66; 75, 100 MHz.
Pentium: 60, 66, 75, 90, 100; 120, 133; 166, 200, 233 MHz.
Pentium II: 233, 266, 300, 333, 350.
Pentium III: 450 up to 1GHz.
Pentium 4: topped out at about 3.5 GHz.
Core i7 is still around the same, with brief bursts of more, but it can't sustain it.

The reason was that adding more transistors kept getting cheaper, so processors went from 4-bit to 8-bit, to 16-bit, to 32-bit with a memory management unit onboard, to superscalar 32-bit with floating-point and Level 1 cache on-die, then with added SIMD multimedia extensions, then to 32-bit with out-of-order execution, to 32-bit with Level 2 cache on-die, to 64-bit...

And then they basically ran out of go-faster stuff to do with more transistors. There's no way to "spend" that transistor budget and make the processor execute code faster. So, instead, we got dual cores. Then quadruple cores.

More than that doesn't help most people. Server CPUs can have 24-32 or more cores now – twice that or more on some RISC chips – but it's no use in a general-purpose PC, so instead, the effort now goes to reducing power consumption instead.

Single-core execution speed, the most important benchmark for how fast stuff runs, now gets 10-15% faster every 18 months to 2 years, and has done for about a dozen years. Memory is getting bigger and a bit quicker, spinning HDs now reach vast capacities most standalone PCs will never need, so they're getting replaced by SSDs which themselves are reaching the point where they offer more than most people will ever want.

So my main Mac is 5 years old, and still nicely quick. My spare is 9 years old and perfectly usable. My personal laptops are all 5-10 years old and I don't need anything more.

The improvements are incremental, and frankly, I will take a €150 2.5 GHz laptop over a €1500 2.7 GHz laptop any day, thanks.

But the speeds continue to rise in less-visible places, and now, my free home router/firewall is nearly 10x faster than my 2012 free home router/firewall.

And I had not noticed at all until the last week.

Fun times in the mid-1990s PC Pro labs

I ran the testing labs for PC Pro magazine from 1995 to 1996, and acted as the magazine's de facto technical editor. (I didn't have enough journalistic experience yet to get the title Technical Editor.)

The first PC we saw at PC Pro magazine with USB ports was an IBM desktop 486 or Pentium -- in late 1995, I think. Not a PS/2 but one of their more boring industry-standard models, an Aptiva I think.
We didn't know what they were, and IBM were none too sure either, although they told us what the weird little tricorn logo represented: Universal Serial Bus.

"It's some new Intel thing," they said. So I phoned Intel UK -- 1995, very little inter-company email yet -- and asked, and learned all about it.
But how could we test it, with Windows 95A or NT 3.51? We couldn't.
I think we still had the machine when Windows 95B came out... but the problem was, Windows 95B, AKA "OSR2", was an OEM release. No upgrades. You couldn't officially upgrade 95A to 95B, but I didn't want to lose the drivers or the benchmarks...

I found a way. It involved deleting WIN.COM from C:\WINDOWS which was the file that SETUP.EXE looked for to see if there was an existing copy of Windows.

Reinstalling over the top was permitted, though. (In case it crashed badly, I suppose.) So I reinstalled 95B over the top; it picked up the registry and all the settings... and found the new ports.
But then we didn't have anything to attach to them to try them out. :-) The iMac was still about two and a half years away.
Other fun things I did in that role:
• Discovered Tulip (RIP) selling a Pentium with a SiS chipset that they claimed supported EDO RAM (when only the Intel Triton chipset did). When they threatened to sue, I showed them that, yes, it did support it -- it recognised it, printed a little message saying "EDO RAM detected" and worked... but it gained nothing from it, and benchmarked at exactly the same speed as with cheaper fast-page-mode RAM.
I think that led to Tulip suing SiS instead of Dennis Publishing. :-)
• Evesham Micros (RIP) sneaking the first engineering-sample Pentium MMX in the UK -- before the MMX name had even been settled -- into a group test of Pentium 166 PCs. It won handily, by about 15%, which should have been impossible if it had been a standard Pentium CPU. But it wasn't -- it was a Pentium MMX, with twice as much L1 cache on board.
Intel was very, very unhappy with naughty Evesham.
• Netscape Communications (RIP) refused to let us put Communicator or Navigator on our cover CD. They didn't know that Europeans pay for local phone calls, so a big download (30 or 40 MB!) cost real money. They wouldn't believe us, and in the end flew two executives to Britain to explain to us that it was a free download and that they wanted to trace who downloaded it.
As acting technical editor, I had to explain it to them. Repeatedly.

When they finally got it, it resulted in a panicked trans-Atlantic phone call to Silicon Valley, getting someone senior out of bed, as they finally realised why their download and adoption figures were so poor in Europe.

We got Netscape on the cover CD, the first magazine in Europe to do so. :-) Both Communicator and Navigator, IIRC.
• Fujitsu supplied the first PC OpenGL accelerator we'd ever seen. It cost considerably more than the PC. We had no way to benchmark it -- OpenGL benchmarks for Windows hadn't been invented yet. (It wasn't very good in Quake, though.)
I originally censored the company names, but I checked, and the naughty or silly ones no longer exist, so what the hell...
Tulip were merely deceived and didn't verify. Whoever picked SiS was inept anyway -- at the time, SiS made terrible chipsets that were slow as hell.

(Years later, they upped their game, and by the twenty-first century there really wasn't much difference between chipsets, unless you were a fanatical gamer or overclocker.)
Lemme think... other fun anecdotes...
PartitionMagic caused me some fun. When I joined (at Issue 8) we had a copy of v1 in the cupboard. Its native OS was OS/2 and nobody cared, I'm afraid. I read what it claimed and didn't believe it so I didn't try it.
Then v2 arrived. It ran on DOS. Repartitioning a hard disk when it was full of data? Preposterous! Impossible!
So I tried it. It worked. I wrote a rave review.
It prompted a reader letter.
"I think I've spotted your April Fool's piece. A DOS program that looks exactly like a Windows 95 app? Which can repartition a hard disk full of data? Written by someone whose name is an anagram of 'APRIL VENOM'? Do I win anything?"
He won a phone call from me, but he did teach me an anagram of my name I never knew.
It led me to run a tip in the mag.

At the time, a 1.2 GB hard disk was the most common size (and a Quantum Fireball the fastest model for the money). Format that as a single FAT16 drive and you got super-inefficient 32 kB clusters. (And in 1995 or early 1996, FAT16 was all you got.)
With PartitionMagic, you could take 200 MB off the end, make it into a second partition, and still fit more onto the C: drive because of the far more efficient 16 kB clusters. If you didn't have PQMagic, you could partition the disk that way before installing. The only key thing was that C: had to be under 1 GB; 0.99 GB was fine.
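Here's a quick sketch of the arithmetic behind that tip, in Python. The cluster-size table is the standard DOS/Windows 95 FAT16 one; the helper names, file count and size mix are invented illustrative figures, not anything measured back then.

import math

def fat16_cluster_kb(partition_mb: int) -> int:
    """Standard FAT16 cluster size for a partition of the given size."""
    if partition_mb <= 128:
        return 2
    if partition_mb <= 256:
        return 4
    if partition_mb <= 512:
        return 8
    if partition_mb <= 1024:
        return 16
    return 32  # anything from 1 GB up to the 2 GB FAT16 limit

def slack_mb(file_sizes_kb, cluster_kb):
    """Space wasted because every file occupies a whole number of clusters."""
    wasted_kb = sum(math.ceil(size / cluster_kb) * cluster_kb - size
                    for size in file_sizes_kb)
    return wasted_kb / 1024

# 20,000 files spread evenly between 1 kB and 64 kB (a made-up mix)
files = [(i % 64) + 1 for i in range(20_000)]

print(f"1.2 GB C: drive, {fat16_cluster_kb(1200)} kB clusters: "
      f"~{slack_mb(files, fat16_cluster_kb(1200)):.0f} MB lost to slack")
print(f"C: kept just under 1 GB, {fat16_cluster_kb(1000)} kB clusters: "
      f"~{slack_mb(files, fat16_cluster_kb(1000)):.0f} MB lost to slack")

The difference -- on the order of 150 MB with that mix of files -- is why shrinking C: to just under 1 GB freed up so much space.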
I suggested putting the swap file on D: -- you saved space and reduced fragmentation.
One of our favourite suppliers, Panrix, questioned this. They reckoned that putting the swap file on the second partition, out at the end of the disk on the slower inner tracks, would hurt performance, thanks to longer seeks and lower transfer speeds. They were adamant.
So I got them to bring in a new, virgin PC with Windows 95A. I benchmarked it with a single big, inefficient C: partition, then repartitioned it, put the swap file on the new D: drive, and benchmarked it again. It was the same to two decimal places, and the C: drive had about 250 MB more free space.
Panrix apologised and I gained another geek cred point. :-)
Hard Stare

A brief history of Apple's transition from Classic MacOS to the NeXTstep-based OS X

[Repurposed from a reply in a Hackernews thread]

Apple looked at buying in an OS after Copland failed. But all the stuff about Carbon, Blue Box, Yellow Box, etc. -- all of that came after the NeXT deal. None of it was pre-planned.

So, they bought NeXT, and with it NeXTstep: a very weird UNIX with a proprietary, PostScript-based GUI and a rich programming environment with tons of foundation classes, all written in Objective-C.

A totally different API, utterly unlike and unrelated to Classic MacOS.

Then they had to decide how to bring these things together.

NeXT already offered its OPENSTEP GUI on top of other Unixes. OPENSTEP ran on Sun Solaris and IBM AIX, and I think maybe others I've forgotten. None of them was a commercial success.

NeXT had a plan to create a compatibility environment for running NeXT apps on other OSes. The idea was to port the base ObjC classes to the native OS, and use native controls, windows, widgets etc. but to be able to develop your apps in ObjC on NeXTstep using Interface Builder.

In the end, only one such OS looked commercially viable: Windows NT. So the plan was to offer a NeXT environment on top of NT.

This is what was temporarily Yellow Box and later became Cocoa.

Blue Box was a VM running a whole copy of Classic MacOS under NeXTstep -- or rather, Rhapsody. In Mac OS X 10.0, Blue Box was renamed the Classic environment, and it gained the ability to mix its windows in with native OS X windows.

But there still needed to be a way to port apps from Classic MacOS to Mac OS X.

So what Apple did was go through the Classic MacOS API and cut it down, removing all the calls and functions that would not be safe in a pre-emptively multitasking, memory-protected environment.

The result was a safe subset of the Classic MacOS API called Carbon, which could be implemented both on Classic MacOS and on the new NeXTstep-based OS.

Now there was a transition plan:

• your old native apps will still work in a VM

• apps written to Carbon can be recompiled for OS X

• for the full experience, rewrite or write new apps using the NeXT native API, now renamed Cocoa.

• incidentally, there was also a rich API for Java apps

That was the plan.

Here's how they executed it.

1. Copland was killed. A team looked at whether anything could be salvaged.

2. They got to work porting NeXTstep to PowerPC.

3. Two main elements from Copland were extracted:

• The Appearance Manager, a theming engine allowing skins for Classic MacOS: https://en.wikipedia.org/wiki/Appearance_Manager

• A new improved Finder

The new PowerPC-native Finder had some very nice features, many never replicated in OS X... like dockable "drawers" -- drag a folder to a screen edge and it vanished, leaving just a tab which opened a pop-out drawer. And multithreading: start a copy or move and then carry on doing other things.

The Appearance Manager was grafted onto NeXTstep, leading to Rhapsody, which became Mac OS X Server: basically NeXTstep on PowerPC with a Classic MacOS skin, so a single menu bar at the top, desktop icons, Apple fonts and the like -- but still using the NeXT "Miller columns" Workspace Manager as the file manager, and so on.

Apple next released MacOS 8, with the new Appearance control panel and a single skin, called Platinum: a marginally updated classic look and feel. There were never any other official themes, but a few leaked, and a third-party tool called Kaleidoscope offered many more.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_View/Entries/2011/2/26_Copland.html

So some improvements, enough to make it a compelling upgrade...

And also to kill off the MacOS licensing programme, which only covered MacOS 7. (Originally, 7 had been meant to be replaced by Copland, the real MacOS 8.)

MacOS 8 was also the original OS of the first iMac.

Then came MacOS 8.1, which also got HFS+, a new, more efficient filesystem for larger multi-gigabyte hard disks. It couldn't boot off it, though (IIRC).

MacOS 8.1 was the last release for 680x0 hardware and needed a 68040 Mac.

Then came the first PowerPC-only version, MacOS 8.5, which brought in booting from HFS+. Then MacOS 8.6, a bugfix release, mainly.

Then MacOS 9, with better-integrated WWW access and some other quite nice features... but all really stalling for time while they worked on what would become Mac OS X.

The paid releases were 8.0, 8.5 and 9. 8.1, 8.6, 9.1 and 9.2 were all free updates.

In a way they were just trickling out new features, while working on adapting NeXTstep:

1. Rhapsody (Developer Release 1997, DR2 1998)

2. Mac OS X Server (1.0 1999, 1.2 2000)

3. Mac OS X Public Beta (2000)

But all of these releases supported Carbon and could run Carbon apps, and a PowerPC Carbon app would run natively under OS X without needing the Classic environment.

Finally in 2001, Mac OS X 10.0 "Cheetah".