
The historical significance of DEC and the PDP-7, -8, -11 & VAX

Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It's entitled "First new vax in ...30 years? 🙂"

Someone posted it on Hackernews. One of the comments said, roughly, that they didn't see the significance and could someone "explain it like I'm a Computer Science undergrad." This is my attempt to reply...

Um. Now I feel like I'm 106 instead of "just" 53.

OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... and both were from the same company.

Minicomputers are what came after mainframes, before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one, Intel's 4004, appeared in 1971, processors were made from discrete logic: lots of little silicon chips.

The main distinguishing feature of minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.

Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.

The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 18-bit or 36-bit logic.

One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone but it was the origin of a command-line interface with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) with semi-standardised 3-letter extensions, such as README.TXT.

This OS and its shell later inspired Digital Research's CP/M OS, the first industry-standard OS for 8-bit micros. CP/M was going to be the OS for the IBM PC but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, called MS-DOS.

So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.

Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.

A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.

More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.

Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine to which UNIX's creators first ported it, and in the process, they rewrote it in a new language called C.

The PDP-11 was a huge success, so DEC was under commercial pressure to produce an improved successor. It did this by extending the 16-bit PDP-11 instruction set to 32 bits, creating the VAX. For this machine, the engineer behind the most successful PDP-11 OS, RSX-11, led a small team that developed a new, pre-emptive multitasking, multiuser OS with virtual memory, called VMS.

(When it gained a POSIX-compliant mode and TCP/IP, it was renamed from VAX/VMS to OpenVMS.)

OpenVMS is still around: it was ported to DEC's Alpha, the first 64-bit RISC chip, and later to the Intel Itanium. Now it has been spun out from HP and is being ported to x86-64.

But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.

At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the 32-bit version, OS/2 for the 386, which it completed and sold as OS/2 2.0 (and later 2.1, 3, 4 and 4.5; it is still on sale today under the name Blue Lion from Arca Noae).

At Microsoft, Cutler and his team got given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler et al finished this, porting it to the new Intel RISC chip, the i860. This was codenamed the "N-Ten". The resultant OS was initially called OS/2 NT, later renamed – due to the success of Windows 3 – as Windows NT. Its design owes as much to DEC VMS as it does to OS/2.

Today, Windows NT is the basis of Windows 10 and 11.

So the PDP-8 and PDP-11 directly influenced the development of CP/M, MS-DOS, OS/2, and Windows 1 through to Windows ME.

A different line of PDPs directly led to UNIX and C.

Meanwhile, the PDP-11's 32-bit successor directly influenced the design of Windows NT.

When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.

This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.

Mankind is a monkey with its hand in a trap, & legacy operating systems are among the bait

[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically every blind computer user I know, personally or via friends of friends, uses Windows or Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers since the 1990s and widespread internet usage, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead I presented what was, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded -- partly because when Ubuntu introduced its non-Windows-like Unity desktop after Microsoft threatened to sue, Mint hoovered up those users who wanted it Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never, ever need to go near a command line, for anything. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result: Apple became the first trillion-dollar computer company, with hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want, work the way you want... and despite offering the results for free, the result has been about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google-approved hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, Chromebooks persuaded about one third of the world's laptop buyers to switch to Linux. More Chromebooks sell every year -- tens of millions -- than Ubuntu has gained users in total since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, they are not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money at the high end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar one is what has kept Linux as the most successful server OS in the world. It is just a modernised version of a quick and dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.

Did you know that you can 100% legally get & run WordPerfect for free?

In fact, there are two free versions: one for Classic MacOS, made freeware when WordPerfect discontinued Mac support, and a native Linux version, for which Corel offered a free, fully-working, demo version.

But there is a catch – of course: they're both very old and hard to run on a modern computer. I'm here to tell you how to get them and how to install and run them.

WordPerfect came to totally dominate the DOS wordprocessor market, crushing pretty much all competition before it, and even today, some people consider it to be the ultimate word-processor ever created.

Indeed the author of that piece maintains a fan site that will tell you how to download and run WordPerfect for DOS on various modern computers, if you have a legal copy of it. And, of course, if you run Windows, then the program is still very much alive and well and you can buy it from Corel Corp.

Sadly, the DOS version has never been made freeware. It still works – I have it running under PC-DOS 7.1 on an old Core 2 Duo Thinkpad, and it's blindingly fast. It also works fine on dosemu. It is still winning new fans today. Even the cut-down LetterPerfect still cost money. The closest thing to a free version is the plain-text-only WordPerfect Editor.

Edit: I do not know if Corel operates a policy like Microsoft's, where owning a new version allows you to run any older version. It may be worth asking.

But WordPerfect was not, originally, a DOS or a PC program. It was originally developed for a Data General minicomputer, and only later ported to the PC. In its heyday, it also ran on classic MacOS, the Amiga, the Atari ST and more. I recall installing a text-only native Unix version on SCO Xenix 386 for a customer. In theory, this could run on Linux using iBCS2 compatibility.

When Mac OS X loomed on the horizon, WordPerfect Corporation discontinued the Mac version – but when they did so, they made the last ever release, 3.5e, freeware.

[Screenshot: WordPerfect 3.5e]

Of course, this is not a great deal of use unless you have a Mac that can still run Classic – which today means a PowerPC Mac with Mac OS X 10.4 or earlier. However, hope springs eternal: there is a free emulator called SheepShaver that can emulate classic MacOS on Intel-based Macs, and the WPDOS site has a downloadable, ready-to-use instance of the emulator all set up with MacOS 9 and WordPerfect for Mac.

To be legal, of course, you will need to own a copy of MacOS 9 – that, sadly, isn't free. Efforts are afoot to get it to run natively on some of the later PowerMac G4 machines on which Apple disabled booting the classic OS. I must try this on my Mac mini G4 and iBook G4.

The non-Windows version of WordPerfect that lived the longest, though, was the Linux edition. Corel was very keen on Linux. It had its own Linux distro, Corel LinuxOS, which had a very smooth modified KDE and was the first distro to offer graphical screen-resolution setting. Corel made its own ARM-based Linux desktop, the NetWinder, as reviewed in LinuxJournal.

And of course it made WordPerfect available for Linux.

Edit: Sadly, though, Microsoft intervened, as it is wont to do. The programs in WordPerfect Office originally came from different vendors. Some reviews suggested that the slightly different looks and feels of the different apps would be a problem, compared to the more uniform look and feel of MS Office. (The Microsoft apps in Office 4 were very different from one another. Office 95 and Office 97 had a lot of effort put in to make them more alike, and not much new functionality.)

Corel was persuaded to license the MS Office look-and-feel – the button bars and designs – and the macro language (Visual BASIC for Applications) and incorporate them into WordPerfect Office.

But the deal had a cost above the considerable financial one: Corel had to discontinue all its Linux efforts. So it sold off Corel LinuxOS, which became Xandros. It sold its NetWinder hardware, which became independent. It killed off its native Linux app, and ended development of WordPerfect Office for Linux, which was a port of the then-current Windows version using Winelib. In fact, Corel contributed quite a lot of code to the WINE Project at this time in order to bring WINE up to a level where it could completely and stably support all of WordPerfect Office.


I'm not sure if the text-only WordPerfect for Unix ever had a native Linux version – I never saw one if it did – but a full graphical version of WordPerfect 8 was included with Corel LinuxOS and also sold at retail. Corel offered a free edition with fewer bundled fonts, as well as a paid version.

This is still out there – although most of its mirrors are long gone, the Linux Documentation Project has it. It's not trivial to install a 20-year-old program on a modern distro, but luckily, help is at hand. The XWP8Users site has offered some guidance for many years, but I confess I never got it to work except by installing a very old version of Linux in a VM. For instance, it's easy enough to get it running on Ubuntu 8.04 or 8.10 – Corel LinuxOS was a Debian-derivative, and so is Ubuntu.

The problem is that even in these days of containers for everything, Ubuntu 8 is older than anything modern tooling supports. Linux containers came along rather later than 2008. In fact, in 2011 I predicted that containers were going to be the Next Big Thing. (I was right, too.)

So I've not been able to find any easy way to create an Ubuntu 8.04 container on modern Ubuntu. If anyone knows, or is up for the challenge, do please get in touch!
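The nearest thing I can offer is the old-school approach: build an 8.04 root filesystem with debootstrap and chroot into it. This is an untested sketch, and it assumes your debootstrap still ships a script for hardy and that old-releases.ubuntu.com still carries the 8.04 archive:

sudo apt install debootstrap systemd-container
# build a minimal Ubuntu 8.04 (hardy) root filesystem
sudo debootstrap --arch=i386 hardy ./hardy-root http://old-releases.ubuntu.com/ubuntu/
# enter it as a plain chroot...
sudo chroot ./hardy-root /bin/bash
# ...or as a throwaway container (may balk at so old a userland)
sudo systemd-nspawn -D ./hardy-root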

But the "Ex WP8 Users" site folk have not been idle, and a few months ago, they released a big update to their installation instructions. Now, there's a script, and all you need to do is download the script, grab the WordPerfect 8.0 Downloadable Personal Edition (DPE), put them in a folder together and run the script, and voilá. I tried it on Ubuntu 20.04 and it works a treat so long as I run it as root. I have not seen any reports from anyone else about this, so it might be just my installation.

Read about it and get the script here.

Edit:

For more info, read the WordPerfect for Linux FAQ. This includes instructions on adding new fonts, fixing the MS Word import filter and some other useful info.

From the discussion on Hackernews and the FAQ, I should note that there are terms and conditions attached to the free WP 8.0 DPE. It is only free for personal, non-commercial use, and some people interpret Corel's licence as meaning that although it was a free download, it is not redistributable. This means that if you did not obtain it from Corel's own Linux site (taken down in 2003) or from an authorised re-distributor (such as bundled with SUSE Linux up to 6.1 and early versions of Mandrake Linux, the "WordPerfect for Linux Bible" hardcopy book, and a few resellers) then it is not properly licensed.

I dispute this: as multiple vendors did re-distribute it and Corel took no action, I consider it fair play. I also very much doubt that anyone will use this in a commercial setting in 2021.

If you are interested in the more complete WordPerfect 8.1, I note that it was included in Corel LinuxOS Deluxe Edition and that this is readily downloaded today, for example from the Internet Archive or from ArchiveOS. However, unless you bought a licence to this, this is not freeware and does not include a licence for use today.



[Screenshot: native WordPerfect 8 for Linux running on Fedora 13 – it still works!]

Postscript

If you really want a free full-function word-processor for DOS, which runs very well under DOSemu on Linux, I suggest Microsoft Word 5.5. MS made this freeware at the turn of the century as a free Y2K update for all previous versions of Word for DOS.

How to get it:
Microsoft Word for DOS — it’s FREE

Sadly, MS didn't make the last ever version of Word for DOS free. It only got one more major release, Word 6 for DOS. This has the same menu layout and the same file format as Word 6 for Windows and Word 6 for Mac, and also Word 95 in Office 95 (for Win95 and NT4). It's a little more pleasant to use, but it's not freeware — although if you own a later version of Word, the licence covers previous versions too.

Here is a comparison of the two:
Microsoft Word 5.5 And 6.0 In-depth DOS Review With Pics

The pros and cons of cheap Android phones — or why you don't get OS updates

I like cheap Chinese phones. I am on my 3rd now: first an iRulu Victory v3, which came with Android 5.1. The first 6.5" phablet I ever saw: plasticky, not very hi-res, but well under €200, and it had dual SIMs, a µSD slot and a replaceable battery. No compass though.

Then a PPTV King 7, an amazing device for the time, which came with Android 5 as well, but half in Chinese. I rooted it and put CyanogenMod on it, getting me Android 6. Retina screen, dual SIM or 1 + µSD, fast, amazing screen.

Now, an Umidigi F2, which came with Android 10. Astonishing spec for about €125. Dual SIM + µSD, 128GB flash, fast, superb screen.

But with all of them, typically, you get 1 ROM update ever, normally the first time you turn it on, then that's it. The PPTV was a slight exception as a 3rd party ROM got me a newer version, but with penalties: the camera autofocus failed and all images were blue-tinged, the mic mostly stopped working, and the compass became a random-number generator.

They are all great for the money, but the chipset will never get a newer Android. This is normal. It's the price of getting a £150 phone with the specification of a £600+ phone.

In contrast, I bought my G/F a Xiaomi A2. It's great for the money – a £200 phone – but it wasn't high-end when new. But the build quality is good, the OS has little bloatware (because Android One), at 3YO the battery still lasts a day, there are no watermarks on photos etc.

It had 3 major versions of Android (7, then 8, then 9) and then some updates on top.

This is what you get with Android One and a big-name Chinese vendor.

Me, I go for the amazing deals from little-known vendors, and I accept that I'll never get an update.

MediaTek are not one of those companies that maintain their Android port for years. In return, they're cheap and the spec is good when they're new. They just move on to new products. Planet Computers persuaded 'em to put Android 8 on the MediaTek-based Gemini, and they deserve kudos for that, not complaining. It's an obsolete product; there's no reason to buy a Gemini when you could have a Cosmo, other than cost.

No, these are not £150 phones. They're £500 phones, because of the unique form-factor: a clamshell with the best mobile keyboard ever made.

But Planet Computers are a small company making an almost-bespoke device: i.e. in tiny numbers by modern standards. So, yes, it's made from cheap parts from the cheapest possible manufacturers, because the production run is thousands. A Chinese phone maker like Xiaomi would consider a production run of only 20 million units to be a failure. (Source: interview with former CEO.) 80 million is a niche product to them.

PlanetComp production is below prototype scale for these guys. It's basically a weird little niche hand-made item.

For that, £500 is very good. Compare with the F(x)tec Pro1, still not shipping a good 18 months after I personally enquired about one, which is about £750 – for a poorer keyboard and a device with fewer adaptations to landscape use.

This is what you get when one vendor -- Google -- provides the OS, another does the port, another builds products around it, and often, another sells the things. Mediatek design and build the SoC, and port one specific version of Android to it... a bit of work from the integrator and OEM builder, and there's your product.

This is one of the things you sometimes get if you buy a name-brand phone: OS updates. But the Chinese phones I favour are ½-⅓ of the price of a cheap name-brand Android and ¼ of the price of a premium brand such as Samsung. So I can replace the phone 2-3× more often and keep more current that way... and still be a lot less worried about having it stolen, or breaking it, or the like. Win/win, from my perspective.

Part of this is because the ARM world is not like the PC world.

For a start, in the x86 world, you can rely on there being system firmware to boot your OS. Most PCs used to use a BIOS; the One Laptop Per Child XO-1 used Open Firmware, like NewWorld PowerMacs. Now, we all get UEFI.

(I do not like UEFI much, as regular readers, if I have a plural number of those, may have gathered.)

ARM systems have no standard firmware. No bootloader, nothing at all. The system vendor has to do all that stuff themselves. And with a SoC (System On A Chip), the system vendor is the chip designer/fabricator.

(For instance, the Raspberry Pi's ARM cores are actually under the control of the GPU which runs its own OS -- a proprietary RTOS called ThreadX. When a RasPi boots, the *GPU* loads the "firmware" from SD card, which boots ThreadX, and then ThreadX starts the ARM core(s) and loads an OS into them. That's why there must be the special little FAT partition: that is what ThreadX reads. That's also why RasPis do not use GRUB or any other bootloader. The word "booting" is a reference to Baron Münchausen lifting himself out of a swamp by his own bootstraps. The computer loads its own software, a contradiction in terms: it lifts itself into running condition by its own bootstraps. I.e. it boots up.

Well, RasPis don't. The GPU boots, loads ThreadX, and then ThreadX initialises the ARMs and puts an OS into their memory for them and tells them to run it.)
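You can see the hand-off for yourself on a Pi running Raspberry Pi OS: the special FAT partition is mounted at /boot, and the files ThreadX reads are sitting right there. An annotated listing (file names vary a little by model and OS version):

ls /boot
# bootcode.bin -- the GPU's second-stage bootloader
# start.elf    -- the GPU "firmware", i.e. ThreadX and friends
# config.txt   -- the boot configuration the GPU reads
# kernel.img   -- the ARM kernel that ThreadX loads and starts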

So each and every ARM system (i.e. device built around a particular SoC, unless it's very weird) has to have a new native port of every OS. You can't boot one phone off another phone's Android.

A Gemini is a cheapish very-low-production-run Chinese Android phone, with an additional keyboard wired on, and the screen forced to landscape mode in software. (A real landscape screen would have cost too much.)

Cosmo piggybacks a separate little computer in the lid, much like the "touchbar" on a MacBook Pro is a separate little ARM computer running its own OS, like a tiny, very long thin iPad.

AstroSlide will do away with this again, so the fancy hinge should make for a simpler, less expensive design... Note, I say should...

Some random sketchy thoughts on Unix, web-apps and workflow

I want to get this down before I forget.

I use Gmail. I even pay for Gmail, which TBH really rankles; but I filled up my inbox, I want that stuff as a searchable record (not a Zip or something), and any other way of storing dozens of gigs of email means either a lot more work, or moving to other paid storage, i.e. a lot of work and paying. So paying for more storage is the least-effort strategy.

I also use a few other web apps, mostly Google: calendar, contacts (I've been using pocket computers since the 1980s, so I have over 5000 people in my address book, and yes, I do want to keep them all), Keep for notes (because Evernote crippled their free tier and don't offer enough value to me to make it worth buying). I very occasionally use Google Sheets and Docs, but would not miss them.

But by and large, I hate web apps. They are slow, they are clunky, they have poor UI with very poor keyboard controls, they often tie you to specific browsers, and modern browsers absolutely suck and are getting worse with time. Firefox was the least worst, but they crippled it with Firefox Quantum and since then it just continues to degenerate.

I used Firefox because it was customisable. I had a vertical tab bar (one addon) – no, not tree-style, that's feature bloat – and it shared my sidebar with my bookmarks (another addon), and the bookmarks were flattened not hierarchical (another addon) because that's bloat too.

Some examples of browser bloat:

  • I have the bookmarks menu for hierarchical bookmark search & storage. I don't need it in the bookmarks sidebar too; that's duplication of functionality.

  • I don't need hierarchical tabs; I have multiple windows for that. So TreeStyleTabs is duplication of functionality – but it's an add-on, so I don't mind too much, so long as I have a choice.

  • I don't need user profiles; my OS has user profiles. That should be an add-on, too.

  • Why do all my browsers include the pointless bloat of web-developer tools? I and 99.99% of the Web's users never ever will need or use them. Why are they part of the base install?

I don't think Mozilla thought of this. I think the Mozilla dev team don't do extensive customisation of their browser. So when they went with multi-process rendering (a good thing) and Rust for safer rendering (also a good thing), they looked at XUL and all the fancy stuff it did, and they compared it with Chrome's (crippled, intrusive) WebExtensions, and decided to copy Chrome and ripped out their killer feature, because they didn't know how to use it effectively themselves.

We all have widescreens now. It's hard to get anything but widescreens. Horizontal toolbars are the enemy of widescreens. Vertical pixels are in short supply; horizontal ones are cheap. The smart thing to do is "spend" cheap, plentiful horizontal space on toolbars, and save valuable, "expensive" vertical space for what you are working on.

The original Windows 95 Explorer could do a vertical taskbar, which is superb on widescreens -- although they hadn't been invented yet. But the Cinnamon and Mate FOSS desktops, both copies of the Win95 design, can't do this. KDE does it so badly that for me it's unusable.

It's the Mozilla syndrome: don't take the time to understand what the competition does well. Just copy the obvious bits from the competition and hope.

Hiding stuff from view and putting it on keystrokes or mouse gestures is not the smart answer. That impedes discoverability and undoes the benefits of nearly 40 years of work on graphical user interfaces. It's fine to do that as well as good GUI design, but not instead of it. If your UI depends on things like putting a mouse into one corner, then slapping a random word in there as a visual clue (e.g. "Activities") is poor design. GUIs were in part about replacing screensful of text with easy, graphical cues.

Chrome has good things. Small ones. The bookmarks toolbar that only appears in new tabs and windows? That's a good thing. In 15 years, Firefox never copied that, but it ripped out its rich extensions system and copied Chrome's broken one.

Tools like these extensions are local tools that do things I need. Mozilla tried to copy Chrome's simplicity by ripping out the one thing that kept me using Firefox. They didn't look at what was good about what they had; they tried to copy what was good about what their competitor had. I used Firefox because it wasn't Chrome.

For now, I switched to Waterfox, because I can keep my most important XUL extensions.

I run 2 browsers because I need some web apps. I use Chrome for Google's stuff, and Waterfox for everything else. Why them? Because they are cross-platform. I use macOS on my home desktop, Linux on my work desktop and my laptops. I rarely use Windows at all but they both work on those too. I don't care if Safari has killer features, because it doesn't work on Windows (any more) or Linux, so it's no use to me. Any Apple tool gets replaced with a cross-platform tool on my Macs.

I also use Thunderbird, another of its own tools Mozilla doesn't understand. It's my main work email client, and I use it at home to keep a local backup of my Gmail. But I don't use it as my home email client, partly because I switch computers every day. My email is IMAP so it's synched – all clients see the same email. But my filters are not. IMAP doesn't synch filters. I have over 100 filters for home use and a few dozen for work use. I get hundreds of emails every day, and I only see half a dozen in my inbox because the rest are filtered into subfolders.

We have a standard system for cross-platform email storage (IMAP), which replaced minimal mail retrieval for local storage (POP3), but nobody has ever extended it to compete with systems, such as Gmail or MS Outlook and Exchange Server, that offer more: rules, workflow, rich calendaring, rich contacts storage. And so local email clients are fading away and more and more companies use webmail.

Why have web apps prospered so when rich local tools can do more? Because only a handful of companies grok that rich local tools can be strong selling points and keep enhancing them.

I used to use Pidgin for all my chat stuff. Pidgin talked to all the chat protocols: AIM (therefore Apple iMessage too), Yahoo IM, MSN IM, ICQ, etc. Now, I use Franz, because it can talk to all the modern chat systems: Facebook Messenger, WhatsApp, Slack, Discord, etc. It's good: a big help. But it's bloated and sluggish, needs gigs of RAM, and each tab has a different UI inside it, because it's an Electron app. Each tab is a dedicated web browser.

Pidgin, via libpurple plugins, can do some of these – FB via a plugin FB keeps trying to block; Skype; Telegram, the richest and most mature of the modern chat systems. But not all, so I need Franz too. Signal, because it's a nightmare cluster of bad design by cryptology and cryptocurrency nerds, doesn't even work in a web browser.

Chat systems, like email, are a failure, both of local rich app design to keep up, and of protocol standardisation in order to compete with proprietary solutions.

Email is not a hard problem. This is a more than fifty-year-old tool.

Chat is not hard either. This is a more than forty-year-old tool.

But groupware is different. Groupware builds on these but adds forms, workflow, organisation-wide contacts and calendar management. Groupware never got standardised.

Ever see Engelbart's "mother of all demos"? Also more than half a century ago. It included collaborative file editing. But it was a server-based demo, because microcomputers hadn't been invented yet. So, yes, for some things, like live editing by multiple users, a web app can do things local apps can't easily do.

But for most things, a native app should always be able to outcompete a web app. Web apps grew because they exploited a niche: if you have lots of proprietary protocols, then that implies lots of rich proprietary clients for lots of local OSes. Smartphones and tablets made that hard – lots of duplication of functionality in what must be different apps because different OSes need different client apps – so the functionality was moved into the server, enabled by Javascript.

Javascript was and is a bad answer. It was a hastily-implemented, poorly-specified language, which vastly bloats browsers, needs big expensive just-in-time-compiling runtimes and locks lower-powered devices out of the web.

The web is no longer truly based on HTML and formatting enhancements such as CSS. Now, the Web is a delivery medium for Javascript apps.

Why?

Javascript happened because the protocols for common usage of internet communications were inadequate.
This meant one company could obtain a stranglehold via proprietary communications tools.

That was a bad thing.

FOSS xNix arose because of standardisation.

xNix is a shorthand for Unix. Unix™ does not mean "an OS based on AT&T UNIX code". It has not meant this since 1993, when Novell gave the Unix trademark to the Open Group. Since then, "Unix" means "any OS that passes Open Group Unix compatibility testing." Linux has passed these tests, more than once: both the Inspur K-UX and Huawei EulerOS distros passed. This means that Linux is a Unix these days. Accept it and move on; it is a matter of legal fact and record. The "based on AT&T code" thing has not been true for nearly three decades. It is long past time to let it go.

The ad-hoc, informal IBM PC compatibility standard meant that any computer that wanted to be sold as "IBM compatible" had to run IBM software, not just run MS-DOS. All the other makes of MS-DOS computer couldn't run MS Flight Simulator and Lotus 1-2-3, so they died out. Later, that came to include powerful 32-bit 80386DX computers, which allowed 32-bit OSes to come to the PC. Later still, the 80386SX made 32-bit computers cheap and widespread, and that allowed 32-bit OSes to become mainstream. Some, like FreeBSD, stuck to their own standards (e.g. their own partitioning schemes), and permissive licenses meant people could fork them or take their code into proprietary products. Linux developed on the PC and from its beginning embraced PC standards, including PC partitioning, PC and Minix filesystems and so on... and its strict licence largely stopped people building Linux into proprietary software. So it throve in ways no BSD ever did.

Because of standards. Standards, even informally-specified ad-hoc ones, are good for growth and very good for FOSS.

The FOSS world does have basic standards for email retrieval and storage, but they're not rich enough, which means proprietary groupware systems had an edge and thrived.

Web apps are the third-party software-vendor world's comeback against Windows, Office and Exchange. They let macOS and iOS and Android and ChromeOS (and, as a distant outlier, other Linux distros) participate in complex workflows. Smartphones and tablets and ChromeOS have done well because their local apps are, like Franz's tabs, mostly embedded single-app web browsers.

Web apps use a loose ad-hoc standard – web browsers and Javascript – to offer the rich functionality previously dominated by one vendor's proprietary offerings.

But they delivered their rich cross-platform functionality using a kludge: an objectively fairly poor, mostly-interpreted language, in browsers.

Even browser developers haven't learned the lessons of rich local clients.

Standards too need to evolve and keep up with proprietary tech, and they haven't. XMPP and Jabber were pretty good for a while, and originally, FB Messenger and Google Talk were XMPP-based... but they didn't cover some use cases, so they got extended and replaced.

I've read many people saying Slack is just enhanced IRC: multi-participant chat. XMPP doesn't seem to handle multi-participant chat very well. And that's a moving target: Slack adds formatting, emoticons, animated attachments, threading...

The FOSS answer should be to make sure that open standards for this stuff keep up and can be implemented widely, both by web apps and by local clients. Standards-based groupware. There are forgotten standards such as NNTP that are relevant to this.

But standards go both ways. Old-fashioned internet-standard email – plain text, inline quoting, and so on – has compelling advantages that "business-style" email cannot match. Rich clients (local or web-based) need to enforce this stuff and help people learn to use it. Minimal standards that everyone can use are good for accessibility, good for different UI access methods (keyboard + mouse, or keyboard + touchscreen, or tablet + keyboard, or smartphone).

Richer local apps aren't enough. Chandler showed that. Standards, so that multiple different clients can work with the same data, are needed too.

What I am getting at here is that there is important value in defining minimum viable functionality and ensuring that it works in an open, documented way.

The design of Unix largely predates computer graphics and totally predates GUIs. The core model is "everything is a file", that files contain plain text, markup is also plain text, and that you can link tools together by exchanging text:

ls -la | grep ^d | less   # long listing; keep only the directories; page the result

This is somewhat obsolete in the modern era of multimedia and streaming, but the idea was sound back then. It's worth remembering that Windows now means Windows NT, and the parents of the core Windows NT OS (not the Win32 GUI part) were the processor-portable OS/2 3 and DEC VMS. VMS had a richer content model than Unix, as it should – its design is nearly a decade younger.

Dave Cutler, lead architect of both VMS and NT, derided the Unix it's-all-just-text model by reciting "Get a byte, get a byte, get a byte byte byte" to the tune of the finale of Rossini's William Tell Overture.

What we need are defined protocols for communication, so that different apps from different teams can interoperate – just as an email client receives messages from an email server and so can download your mail, which then evolved into the client being able to query the server and show you your mail while the messages stay on the server. This is immensely powerful, and now, we are neglecting it.

We can't force companies to open their protocols. We can reverse-engineer them and so write unauthorised clients, like many libpurple plugins for Pidgin, but that's not ideal. What we need to do is look at the important core functionality and make sure that FOSS protocols can do that too. Email clients ask for POP version 3, not just any old POP. IMAP v4 added support for multiple mailboxes (i.e. folders). I propose that it's time for something like IMAP v5, adding server-side filters... and maybe IMAP v6, that grandfathers in some part of LDAP for a server-side contact list too. And maybe IMAP v7, which adds network calendar support.

Got a simple email client that doesn't do calendaring? No problem, stick with IMAP 4. So long as the server and client can negotiate a version both understand, it's all good.
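The plumbing for this negotiation already exists: IMAP's CAPABILITY command, where the server advertises what it can do and the client uses only what is advertised. A trimmed transcript of what that looks like today, poking a server by hand (imap.example.com is a placeholder, and the final FILTERS token is my hypothetical extension, not a real one):

openssl s_client -quiet -connect imap.example.com:993
a1 CAPABILITY
* CAPABILITY IMAP4rev1 IDLE NAMESPACE QUOTA FILTERS
a1 OK Capability completed.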

Ditto XMPP: extend that so it supports group chats.

NNTP and RSS have relevance to Web fora and syndication.

What's needed is getting together and talking, defining minimum acceptable functionality, and then describing a standard for it. Even if it's an unofficial standard, not ratified by any body, it can still work.

But by the same token, I think it's time to start discussing how we could pare the Web back to something rich and useful but which eliminates Javascript and embedded apps inside web pages. Some consensus for something based on most of HTML5 and CSS, and make it a fun challenge to see how much interactivity programmers can create without Javascript.

What's the minimum useful working environment we could build based on simple open standards that are easily implemented? Not just email + IRC, but email with basic text formatting – the original *bold* _underline_ /italic/ ~strikethrough~ that inspired Markdown, plus shared local and global address books, plus local and global calendars, plus server-side profile storage – so when you enter your credentials, you don't just get your folders, you also get your filters, your local and organisation-wide address books, your local and org-wide calendar, too. If you wish, via separate apps. I don't particularly want them all in one, myself.

Ditto shared drives: safe, encrypted drive mounts and shares over the internet. I can't use my Google Drive or my MS OneDrive from Linux. Why not? Why isn't there some FOSS alternative mechanism?

Is there any way to get out of the trap of proprietary apps running on top of open-ish standards (Web 2.0) and help rich local software get more competitive? I am not sure. But right now, we seem to be continuing up a blind alley and everyone's wondering why the going is so slow...


The year of Linux on the desktop came, and the Linux industry didn't notice. It's ChromeOS. Something like 30 million Chromebooks sold in 2020. No Linux distro ever came close to that many units.

But ChromeOS means webapps for everything.

I propose it's time for a concerted effort at a spec for a set of minimal, clean local apps and open protocols to connect them to existing servers. As a constraint, set a low ceiling: e.g. something that can run on a $5-level device, comparable to the raw power of a Raspberry Pi Zero: 512MB of RAM and a single 1GHz core. Not enough for web apps, but more than enough for a rich, capable email client, for mounting drives, for handling NNTP and internet news.

Something that can be handed out to the couple of billion people living in poverty, with slow and intermittent Internet access at best. This isn't just trying to compete with entrenched businesses: it should be philanthropic, too.

The DOS and Windows drive-letter allocation process is more complex than you might think

Someone asked me if I could describe how to perform DOS memory allocation. It's not the first time, either. It's a nearly lost art. To try to illustrate that it's a non-trivial job, I decided to do something simpler: describe how DOS allocates drive letters.

I have a feeling I've done this before somewhere, but I couldn't find it, so I tried writing it up as an exercise.

Axioms:

  • DOS only understands FAT12, FAT16 and in later versions FAT32. HPFS, NTFS and all *nix filesystems will be skipped.

  • We are only considering MBR partitioning.

So:

  • Hard disks support 2 partition types: primary and logical. Logical drives must go inside an extended partition.

  • MBR supports a legal max of 4 primaries per drive.

  • Only 1 primary partition on the 1st drive can be marked "active" and the BIOS will boot that one _unless_ you have a *nix bootloader installed.

  • You can only have 1 extended partition per drive. It counts as a primary partition.

  • To be "legal" and to support early versions of NT and OS/2, only 1 DOS-readable primary partition per drive is allowed. All other partitions should go inside an extended partition.

  • MS-DOS, PC DOS and NT will only boot from a primary partition. (I think DR-DOS is more flexible, and I don't know about FreeDOS.)

Those are our "givens". Now, after all that, how does DOS (including Win9x) assign drive letters?

  1. It starts with drive letter C.

  2. It enumerates all available hard drives visible to the BIOS.

  3. The first *primary* partition on each drive is assigned a letter.

  4. Then it goes back to the start and starts going through all the physical hard disks a 2nd time.

  5. Now it enumerates all *logical* partitions on each drive and assigns them letters.

  6. So, all the logicals on the 1st drive get sequential letters.

  7. Then all the logicals on the next drive.

  8. And so on through all logicals on all hard disks.

  9. Then drivers in CONFIG.SYS are processed and if they create drives (e.g. DRIVER.SYS) those letters are assigned next.

  10. Then drivers in AUTOEXEC.BAT are processed and if they create drives (e.g. MSCDEX) those are assigned next.

So you see... it's quite complicated. :-)
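Here's a worked example, applying those rules. Two hard disks: disk 1 with one primary partition and two logical drives, disk 2 with one primary and one logical:

Disk 1: [primary P1] [extended: logical L1, logical L2]
Disk 2: [primary P2] [extended: logical L3]

Pass 1 (primaries, across all disks):  C: = P1   D: = P2
Pass 2 (logicals, disk by disk):       E: = L1   F: = L2   G: = L3

This is also why adding a second hard disk to a DOS machine shifted everyone's drive letters: the new disk's primary partition steals D: from the first disk's first logical drive.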

Assigning upper memory blocks is more complicated.

NT changes this and I am not 100% sure of the details. From observation:

  • NT 3 did the same, but with the addition of HPFS and NTFS drives (HPFS support lasted up to 3.51).

  • NT 4 does not recognise HPFS at all but the 3.51 driver can be retrofitted.

  • NT 3, 4 & 5 (Win2K) *require* that partitions are in sequential order.

Numbers may be missing but you can't have, say:
[part № 1] [part № 2] [part № 4] [part № 3]

They will blue-screen on boot if you have this. Linux doesn't care.
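From Linux, it's easy to check whether a disk is in this state; fdisk even prints a warning (output illustrative and trimmed):

sudo fdisk -l /dev/sda
...
Partition table entries are not in disk order.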

Riders:

  1. The NT bootloader must be on the first primary partition on the first drive.

  2. (A 3rd party boot-loader can override this and, for instance, multi-boot several different installations on different drives.)

  3. The rest of the OS can be anywhere, including a logical drive.

NT 6 (Vista) & later can handle it, but this is because MS rewrote the drive-letter allocation algorithm. (At least I think this is why but I do not know for sure; it could be a coincidence.)

Conditions:

  • The NT 6+ bootloader must be on the same drive as the rest of the OS.

  • The bootloader must be on a primary partition.

  • Therefore, NT 6+ must be in a primary partition, a new restriction.

  • NT 6+ must be installed on an NTFS volume; therefore, it can no longer dual-boot with DOS on its own, and a 3rd-party bootloader is needed.

NT 6+ just does this:

  1. The drive where the NT bootloader is becomes C:

  2. Then it allocates all readable partitions on drive 1, then all those on drive 2, then all those on drive 3, etc.

So just listing the rules is quite complicated. Turning them into a step-by-step how-to guide would be significantly longer and more complex. As an example, the much simpler process of cleaning up Windows 7/8.x/10 when preparing to dual-boot took me several thousand words, and I skipped some entire considerations to keep it that "short".

Errors & omissions excepted, as they say. Corrections and clarifications very welcome. To comment, you don't need an account — you can sign in with any OpenID, including Facebook, Twitter, UbuntuOne, etc.

The decline and fall of A/UX

The story of why A/UX existed is simple but also strangely sad, IMHO.

Apple wanted to sell to the US military, who are a huge purchaser. At that time, the US military had a policy that they would not purchase any computers which were not POSIX compliant – i.e. they had to run some form of UNIX.

So, Apple did a UNIX for Macs. But Apple being what they are, they did it right – meaning they integrated MacOS into their Unix: it had a Mac GUI, making it the most visually-appealing UNIX of its time by far, and it could network with MacOS machines and run (some) MacOS apps.

It was a superb piece of work, technically, but it was a box-ticking exercise: it allowed the military to buy Macs, but in fact, most of them ran MacOS and Mac apps.

For a while, the US Army hosted its web presence on classic MacOS. It wasn't super stable, but it was virtually unhackable: there is no shell to access remotely, however good your 'sploit. There's nothing there.

The irony and the sad thing is that A/UX never got ported to PowerPC. This is at least partly because of the way PowerPC MacOS was done: MacOS was still mostly 68K code and the whole OS ran under an emulator in a nanokernel running underneath it. This would have made A/UX-style interoperability, between a PowerPC-native A/UX and 68K-native MacOS, basically impossible without entirely rewriting MacOS in PowerPC code.

But around the same time that the last release of A/UX came out (3.1.1 in 1995), Apple was frantically scrabbling around for a new, next-gen OS to compete with Win95. If A/UX had run on then-modern – i.e. PowerPC- and PCI-based – Macs by that time, it would have been an obvious candidate. But it didn't and it couldn't.

So Apple spent a lot of time flailing around with Copland and Gershwin and Taligent and OpenDoc, wasted a lot of money, and in the end merged with NeXT.

The irony is that in today's world, spoiled with excellent development tools, everyone has forgotten that late-1980s and early-to-mid 1990s dev tools were awful: 1970s text-mode tools for writing graphical apps.

Apple acquired NeXT because it needed an OS, but what clinched the deal was the development tools (and the return of Jobs, of course.) NeXT had industry-leading dev tools. Doom was written on NeXTs. The WWW was written on NeXTs.

Apple had OS choices – modernise A/UX, or buy BeOS, or buy NeXT, or get bought and move to Solaris or something – but nobody else had Objective-C and Interface Builder, or the NeXT/Sun foundation classes, or anything like them.

The meta-irony being that if Apple had adapted A/UX, or failing that, had acquired Be for BeOS, it would be long dead by now, just a fading memory for middle-aged graphic designers. Without the dev tools, they'd never have got all the existing Mac developers on board, and never got all the cool new apps – no matter how snazzy the OS.

And we'd all be using Vista point 3 by now, and discussing how bad it was on Blackberries and clones...

"What We Have Lost" -- 3C talk

Wow. This is possibly the nerdiest talk I have ever seen, but it is very relevant to my own interests, especially my FOSDEM 2018 talk.

The talk takes a very quick look at Symbolics Genera and OpenGenera and then compares them with Interlisp-D – or as the speakers put it, "west coast and east coast takes on the Lisp Machine context". That's a powerful comment right there. They draw comparisons between Interlisp-D and Smalltalk; I do not see a lot of direct resemblance myself, but it is an interesting point. Another interesting factoid is that Interlisp-D is now open source, and efforts are afoot to modernise it.

Then it moves on to BTRON, which I'd never met before. BTRON is still available. It's the desktop iteration of the TRON family, which is doubtless by far the most widely-used operating system you've never heard of. iTRON is used in millions of embedded roles in Japanese consumer electronics and there are also real-time and server products. It has tens to hundreds of millions of instances out there.

And it concludes with IBM i, formerly known as IBM OS/400 for the AS/400 minicomputer range. This is the only surviving single-level store OS in the world (as far as I know; I welcome corrections!) and although it's very much a niche server OS, it is therefore also a pointer to a future of PMEM-only computers, which just have nonvolatile RAM and dispense with the 1960s concepts of "disk drives" and "second-level storage" – i.e. the concept behind every other OS you've ever heard of, of any form whatsoever.


Direct link if the embedded video doesn't work.

Installing Linux on an old 2008 MacBook needs some workarounds & fixes

I just finished doing up an old white MacBook from 2008 (note: not MacBook Pro) for Jana's best friend, back in Brno.

I hit quite a few glitches along the way. Partly for my own memory, partly in case anyone else hits them, here are the work-arounds I needed...

BTW, I have left the links visible and in the text so you can see where you're going. This is intentional.

Picking a distribution and desktop

As the machine is maxed out with 4GB of RAM, and only has a fairly feeble Intel GMA X3100 GPU, I went for Xfce as a lightweight desktop that's very configurable and doesn't need hardware OpenGL. (I just wish Xfce had the GNOME 2/MATE facility to lock controls and panels into place.)

Xubuntu (18.10, later upgraded to 19.04) had two peculiar and annoying errors.

  1. On boot, NumLock is always on. This is a serious snag because a MacBook has no NumLock key, nor a NumLock indicator to tell you, and thus no easy way to turn it off. (Fn+F6 twice worked on Xubuntu 18/19, but not on 20.04.) I found a workaround, and there's a snippet after this list: https://help.ubuntu.com/community/AppleKeyboard#Numlock_on_Apple_Wireless_Keyboard

  2. Secondly, Xubuntu sometimes could not bring the wifi connection up. Rebooting into Mac OS X and then warm-booting into Xubuntu fixed this.
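On the NumLock front, in case that wiki page ever moves: a common fix is the little numlockx utility, run once per login session (e.g. added to Xfce's Session and Startup autostart list). A sketch, assuming numlockx is in your distro's repos:

sudo apt install numlockx
numlockx off   # run this at session start to force NumLock off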

For the wifi problem and the webcam issue below, I really strongly recommend keeping a bootable Mac OS X partition available and dual-booting between both Mac OS X and Linux. OS X Lion (10.7) is the latest this machine can run. Some Macs – e.g. MacBook Pro and iMac models –  from around this era can run El Cap (10.11) which is probably still somewhat useful. My girlfriend's MacBook Pro is a 2009 model, just one year younger, and it can run High Sierra (10.13) which still supports the latest Firefox, Chrome, Skype, LibreOffice etc without any problem.

By the way: there are "hacks" to install newer versions of macOS onto older Macs which no longer support them. Colin "dosdude1" Mistr has a good list, here: http://dosdude1.com/software.html

However quite a few of these have serious drawbacks on a machine this old. For instance, my 2008 MB might be able to run Mountain Lion (10.8) but probably nothing newer, and if it did, I would have no graphics acceleration, making the machine slow and maybe unstable. Similarly, my 2011 Mac Mini maxes out at High Sierra. Mojave (10.14) and Catalina (10.15) apparently work well, but Big Sur (11) again has no graphics acceleration and is thus well-nigh unusable. But if you have a newer machine and the reports are that it works well as a hack, this may make it useful again.

I had to reinstall Lion. Due to this, I found that the MacBook will not boot Lion off USB; I had to burn a DVD-R. This worked perfectly first time. There are some instructions here:
https://www.lifewire.com/install-os-x-lion-using-bootable-dvd-2260333

Beware: retail Mac OS X DVDs are dual-layer. An ordinary single-layer DVD-R holds about 4.7GB, so if the image is bigger than that, it won't fit.
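The burn itself is a one-liner from any working Mac, once you have extracted InstallESD.dmg from inside the Lion installer app (the path below is illustrative):

# InstallESD.dmg lives in "Install Mac OS X Lion.app/Contents/SharedSupport"
hdiutil burn ~/Desktop/InstallESD.dmg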

If I remember correctly, Lion was the last version of Mac OS X that was not a free download. However, that was 10 years and 8 versions ago, so I hope Apple will forgive me for helping you to pirate it. A BitTorrent can be found here.

Incidentally, a vaguely-current browser for Lion is ParrotGeek's Firefox Legacy. I found this made the machine much more useful with Lion: it can access Facebook, Gmail etc. absolutely fine, which the bundled version of Safari cannot do. If you disable all sharing options in OS X and only use Firefox, the machine should be reasonably secure even today. OS X is immune to all Windows malware. Download Firefox Legacy from here:
https://parrotgeek.com/fxlegacy.html

Having said all that, Linux Mint does not suffer from either of these Xubuntu issues, so I recommend Linux Mint Xfce instead. I found Mint 20 worked well, and the upgrade to Mint 20.1 was quick and seamless.

Installation

If you make a second partition in Disk Utility while you're (re-)installing Mac OS X, you can simply reformat that as ext4 in the Linux setup program. This saves messing around with Linux disk partitioning on a UEFI MacBook, which, I warn you, is not like doing it on a PC. (I accidentally corrupted the MacBook's hard disk by trying to copy a Linux partition onto it with GParted and then remove it with fdisk; that's why I had to reinstall. Again: I strongly recommend doing any partitioning with Mac OS X's Disk Utility, not with Linux.) All Intel Macs have UEFI firmware, not a BIOS, so they all use only GPT partitioning, not MBR.

I set aside 48GB for Lion and all the rest for Mint. (Mint defaults to using a swapfile in the root partition, just like Ubuntu. This means that 2 partitions are enough. I was trying to keep things as simple as possible.)

If you use Linux fdisk or GParted to look at the disk from Linux, remember to leave the original Apple EFI System Partition ("ESP") alone and intact. You need it even if you boot only Linux and nothing else.
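
A quick way to spot it, as a sketch: on a Mac the ESP is the small FAT32 partition at the start of the disk, usually around 200MB.

lsblk -o NAME,SIZE,FSTYPE
# the small vfat partition (typically /dev/sda1) is the ESP: leave it alone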

Wifi doesn't work out of the box on Mint. You need to connect to the Internet via Ethernet, then open the Software and Drivers settings program and install the Broadcom drivers. That was enough for me; more info is here:
https://askubuntu.com/questions/55868/installing-broadcom-wireless-drivers
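
For the record, the command-line equivalent is a one-liner. This assumes the common Broadcom STA chipset these MacBooks tend to have, so check yours first:

lspci | grep -i broadcom              # confirm which Broadcom chip is fitted
sudo apt install bcmwl-kernel-source  # the proprietary Broadcom STA driver package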

While connected with a cable, I also did a full update:

sudo -s                      # get a root shell for the whole session
apt update                   # refresh the package lists
apt full-upgrade -y          # upgrade everything, removing packages if necessary
apt autoremove --purge -y    # remove orphaned packages, including their config files
apt clean                    # empty the downloaded-package cache


Glitches and gotchas

Startup or shutdown can take ages, or even freeze the machine entirely, hanging during shutdown; the fan may spin up while it hangs. The fix is a simple edit to add an extra kernel parameter to GRUB, described here:
https://forums.linuxmint.com/viewtopic.php?t=284960
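
The mechanics of the edit, as a sketch; the actual parameter to add is the one given in the linked thread, which I won't try to reproduce from memory:

sudoedit /etc/default/grub
# append the parameter from the forum post to the GRUB_CMDLINE_LINUX_DEFAULT line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash <parameter-from-the-thread>"
sudo update-grub    # regenerate grub.cfg so the change takes effect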

(Aside: hoping to work around this, I installed kexec-tools for faster reboots. It didn't work. I don't know why not. Perhaps it's something to do with the machine using UEFI, not a BIOS. I also installed the Ubuntu Hardware Enablement stack with its newer kernel, in case that helped, but it didn't. It didn't seem to cause any problems, though, so I left it.)

GRUB shows an error about not being able to find a MOK (Machine Owner Key) file, then continues, because Secure Boot is disabled. This is non-fatal, but there is a fix here:
https://askubuntu.com/questions/1279602/ubuntu-20-04-failed-to-set-moklistrt-invalid-parameter

While troubleshooting the MOK error above, I found that a previous owner of this machine had had Fedora on it at some point. Even though I removed it and completely reinstalled OS X Lion in a new partition, the UEFI boot entry for Fedora was still there and was still the default. I removed it using the instructions here:
https://www.linuxbabe.com/command-line/how-to-use-linux-efibootmgr-examples
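
In outline, it goes like this; note that the entry numbers are machine-specific, so the ones here are hypothetical:

efibootmgr                    # list the boot entries and the current BootOrder
sudo efibootmgr -b 0003 -B    # delete entry Boot0003 (the stale Fedora one, say)
sudo efibootmgr -o 0000       # make a single entry, e.g. Boot0000 "ubuntu", the default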

NOTE: I suggest you don't set a full boot sequence; just set the ubuntu entry as the default and leave it at that. The Apple firmware very briefly displays a no-bootable-volume icon (a folder with a question mark on it) as it boots, and I think this is why, when I used efibootmgr to set Mint as the default followed by OS X, it never loaded GRUB but went straight into OS X.

(Mint have not renamed their UEFI bootloader; it's still called "ubuntu", after the upstream distro. I believe this means that you cannot dual-boot a UEFI machine with both Ubuntu and Mint, or with multiple versions of either. This reflects my general impression that UEFI is a pain in the neck.)

The Apple built-in iSight webcam requires a firmware file to work under Linux, which you must extract from Mac OS X:
https://help.ubuntu.com/community/MactelSupportTeam/AppleiSight
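
In short, the isight-firmware-tools package does the extraction, prompting you for Apple's AppleUSBVideoSupport driver file on the mounted OS X partition. A sketch, with a hypothetical mount point and a path that may vary between OS X versions:

sudo apt install isight-firmware-tools
# if the installer doesn't prompt, extract manually from the OS X driver binary:
sudo ift-extract -a /mnt/osx/System/Library/Extensions/IOUSBFamily.kext/Contents/PlugIns/AppleUSBVideoSupport.kext/Contents/MacOS/AppleUSBVideoSupport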

Both Xubuntu and Mint automatically install entries in the GRUB boot menu for Mac OS X; for Lion there are two, one for the 32-bit kernel and one for the 64-bit kernel. These do not work. To boot into Mac OS X, hold down the Opt key as the machine powers on; this displays the firmware's graphical boot-device selection screen, in which the Linux partition is described as "EFI Boot". Click on "macOS" or whatever you called your Mac HD partition. If you want to boot into Linux, just power-cycle the machine and then leave it alone: the screen goes grey, then black with a flashing cursor, then the GRUB menu appears and you can pick Linux. (The Linux partition is not visible from Mac OS X, and you can't pick it in the Startup Disk preference pane.)

Post-install fine-tuning

I also added the ubuntu-restricted-extras package, to get some nicer web fonts, a few handy codecs, and so on. When installing this, remember that you must use the cursor keys and Enter/Return to accept the Microsoft fonts licence agreement; the mouse won't work, so use your keyboard. I also added Apple HFS support, so that Linux can easily manipulate the Mac OS X partition.

I installed Google Chrome and Skype, direct from their vendors' download pages. Both of these add their own repositories to the system, so they will update automatically when the OS does. I also installed Zoom, which does not have a repo and so won't get updated; that's an annoyance, and we'll have to look at it later if it becomes problematic. I also added VLC, because the machine has a DVD drive and this is an easy way to play CDs and DVDs.

As this machine and the old Thinkpad I am sending along with it are intended for kids to use, I installed the educational packages from UbuntuEd. I added those that are recommended for pre-school, primary and secondary schoolchildren, as listed here:
https://discourse.ubuntu.com/t/ubuntu-education-ubuntued/17063

I enabled unattended-upgrades (and set the machine to install updates at shutdown) as described here:
https://www.cyberciti.biz/faq/set-up-automatic-unattended-updates-for-ubuntu-20-04/
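
The short version, for the record; the InstallOnShutdown line is the bit that moves the work to shutdown time:

sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades    # answer "Yes" to enable it
# then, in /etc/apt/apt.conf.d/50unattended-upgrades, uncomment and set:
#   Unattended-Upgrade::InstallOnShutdown "true";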

While testing the webcam, I discovered that Mint doesn't include Cheese, so I installed that too. One command covers it and the other extras mentioned above:
sudo apt install -y ubuntu-restricted-extras hfsprogs vlc cheese
Hard Stare

Feel like playing with DR-DOS in VirtualBox? Have a few virtual hard disk images!

My occasional project to resurrect DR-DOS and make something vaguely useful from it continues, and in the spirit of "release early, release often", I thought that someone somewhere might enjoy having a look at some of my work-in-progress snapshots.

So while there is nothing vastly new here, building a bootable DOS VM is not completely trivial without some now rather obscure old knowledge, so I thought these might help someone.

The story so far...

In the OpenDOS Enhancement Project, Udo Kuhnt took Caldera's FOSS release of DR-DOS 7.01 (which Caldera had renamed OpenDOS) and added FAT32 support and some other things. Caldera spin-off Lineo (later DeviceLogics) implemented similar features in later, closed-source versions of DR-DOS, but those were never officially FOSS; they also used bits of FreeDOS, and were later withdrawn. DeviceLogics has since gone out of business.

Udo's disk images are on Archive.org but they aren't bootable. I've made bootable images you can download. I have a bootable VM of DR-DOS 7.01-08 but I need to clean it up and give it some spit and polish. I also added back the ViewMax GUI from DR-DOS 6.

Meantime, what I have uploaded here are three Zip-compressed VirtualBox VDI files. A VDI is the hard disk image of a VirtualBox VM; these ones contain bootable FAT16 hard disks.

The quick way to use them:


  1. Download the image.

  2. Run VirtualBox. Create a new VM. Call it (e.g.) "DR-DOS 6". You must have "DOS" in the name for VirtualBox to configure the new VM correctly for DOS! Otherwise you must do that part manually.

  3. When you get to the "create or add hard disk" stage, stop!

  4. Switch to your file manager. Unzip the downloaded file and put the VDI in the newly-created VM's directory.

  5. Go back to VirtualBox. Pick "add an existing hard disk". Browse to the file you just moved into place. Click it, and click "Add".

  6. Now you're back at the "choose a disk" dialog. Pick the newly-added one.

  7. Finish VM setup.

Now you can start the new DOS VM and enjoy.
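
If you'd rather script the whole thing, VBoxManage can do all of the above from the command line. A sketch, assuming hypothetical names for the VM and the unzipped VDI file:

VBoxManage createvm --name "DR-DOS 6" --ostype DOS --register
VBoxManage storagectl "DR-DOS 6" --name IDE --add ide
VBoxManage storageattach "DR-DOS 6" --storagectl IDE \
    --port 0 --device 0 --type hdd --medium drdos6.vdi
VBoxManage startvm "DR-DOS 6"

The --ostype DOS switch does what putting "DOS" in the name triggers in the GUI: it sets sensible defaults for a DOS guest.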