Hard Stare

Just because a vendor sells laptops with Linux on them, caveat emptor still applies

Some companies sell laptops with Linux pre-installed. However, in some cases I have read about, there are significant caveats.

Some examples:

  • Dell pre-installed their own drivers for Ubuntu on their laptops, and if you format the machine and reinstall, or install a different distro, you can't get the source of the drivers to build your own.

  • In other instances I've heard of, the machines work fine but some features are not supported on Linux. Or perhaps a feature only works on the vendor's supported distro & not other distros. Or perhaps on Linux but not on -- say -- FreeBSD.

  • Or all features work, but you require Windows to update the firmware, or to update peripherals' firmware, such as docking stations.

  • Or the Linux models have slightly different specs, such as a specific WLAN card, and the generic Windows version of the same model is not 100% compatible.


The fact that someone offers one or two specific models with one particular Linux distro as an option is good, sure, but it doesn't automatically mean that that particular machine will be a good choice if you run a different distro, don't want the pre-installed OS, or didn't buy it with Linux and put it on later.

Long, long ago, in the mid-1990s, I ran the testing labs for a major UK computer magazine called PC Pro. In about 1996 I proposed, ran and edited a feature the editors were very dubious about, but it proved to be a big hit.

The idea was very simple: at that time, all PCs shipped with Windows 95. As 95 was DOS-based at heart and had no concept of user space vs kernel space, drivers were quite easy. You could at a push use DOS drivers, or port drivers from Windows for Workgroups, which did terrible hacky direct-hardware-access stuff.

So my feature was: we want machines designed, built and supplied with Windows NT. At the time, that meant NT 4.

NT 4 was not at all like Win95; it just looked superficially like it. It needed its own, new, specially-written drivers for everything. It had built-in drivers for some things, for example EIDE (i.e. PATA) hard disks, but these did not use DMA, only programmed IO. (Not slow, but caused very high CPU usage; no problem on Win9x, but a performance-killer on NT.)

The PC vendors loved and hated us for it.

Some vendors...

  • promised machines then withdrew at the last minute;

  • promised machines, then changed the spec or price;

  • delivered machines with features not working;

  • delivered machines with expensive replacement hardware for built-in parts that didn't work with NT.


And so on. There was a huge delta in performance (while all Win9x machines performed pretty much alike: we could look at the parts list and predict the benchmark scores with an accuracy of about 5%.)

Many vendors didn't know about DMA hard disk drivers.

Some did, but didn't know how to fix it. Some fitted SCSI hard disks as a way around this, not knowing that the motherboard came with a floppy disk bearing a free driver that would enable DMA on EIDE.

Some shipped CD burners that couldn't burn because the burner software didn't work on NT. Some shipped DVD drives which couldn't play movies on NT because the graphics adaptor's video playback acceleration didn't work on NT.

And so on.

Readers *loved* that feature because it separated the wheat from the chaff: it distinguished the cheap vendors, whose PCs mostly worked but who didn't know how to tune them, from the solid vendors who knew what they were doing and how to make stuff work, and from the solid vendors who could build a great PC for the task but at double the price.

I got a lot of praise for that article, and it was well worth the work.

Some vendors thanked me because it was so educational for them!

Well, Linux on laptops is still a bit like that today. There is a whole pile of stuff that's easy and a given on Windows that is difficult or problematic on Linux and just plain impossible on any other FOSS OS.

  • Switchable GPUs are a problem

  • Proprietary binary graphics drivers are sometimes a problem

  • Displays on docking stations can be tricky


Interactions between these things are even worse; e.g. multiple displays on USB docking stations can be extra-tricky.

For example, with openSUSE Leap I found that with Intel graphics, driving two screens on a USB-C docking station was easy, but with nVidia Optimus, almost impossible.

With my own Latitude E7270, under KDE I can only drive 1 external screen; if I add 2 as well as the built-in one, then window borders disappear on the laptop screen and so windows can't be moved or resized. But under the lighter-weight Xfce, this is fine & all 3 screens can be used. And that's with an Intel GPU and a proper, PCIe-bus-attached dock.

But every time I un-dock or re-dock, it forgets the screen arrangement, and the Display preferences have to be redone from scratch.

Most apps can't remember what screen they were on and reopen on a random monitor every time. Possibly entirely offscreen if I have a different screen arrangement.

Even the same screens attached directly to the machine and via the dock confuse it. And I have both a full-size and a mini dock; all the ports appear different.
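A partial workaround is a small script to reapply the layout after each re-dock. This is only a sketch, and the output names are placeholders: run xrandr with no arguments to see what your machine actually calls its ports.

    #!/bin/sh
    # Reapply a three-screen layout after re-docking.
    # Output names are examples; check `xrandr` for the real ones.
    xrandr --output eDP-1  --auto --primary \
           --output DP-1-1 --auto --right-of eDP-1 \
           --output DP-1-2 --auto --right-of DP-1-1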

Linux on laptops is still complicated.

Just because things work for 1 person doesn't mean they'll work for everyone. Just because a vendor ships a model with Linux doesn't mean all models work. Just because a vendor ships 1 distro doesn't mean all distros work.

And when the machine is new, you can be pretty sure that there will be serious firmware issues with Linux, because the firmware was only tested against Windows, and sketchily even then. This is the era of Agile and minimum viable products, after all.

So do not take it as read that because Dell ship 2 or 3 models with Ubuntu, all models will Just Work™ with any distro.

I absolutely categorically promise you they don't and they won't.

The first and last time AIM was hacked

I'm an AOL hacking historian. No doubt about it.

I remember intimate details about most of the major breaches that occurred between the mid 90s and 2000s. I was there, actively participating in most of it. America Online's security was being compromised nonstop. It was unbelievable. Corporate cybersecurity has a bad reputation now, but what was commonplace then is unthinkable now. Security was bad. This was especially the case with AOL.

Through phishing attacks, password cracking, social engineering, whatever it took - we were breaking into employee accounts and staff areas at scale. In the late 90s and early 2000s the golden goose was the customer records information system or, as it was most commonly known, "CRIS", which AOL employees used to action customer accounts.



Early AOL "Mac hackers," (Macintosh users) - many congregating in the the private chat "macfilez", were the first to access CRIS. Legendary early Mac hackers like "Happy Hardcore" were able to breach various internal accounts and gain access. this was before the keyword became LAN only, i.e. on-campus VPN use only. There is a misconception in cybersecurity. Massive corporations are expected to have the tightest security. The truth is that the more employees a corporation has, the less secure it is - and AOL is a perfect example.

AOL + loads of employees = tons of AOL accounts being hacked through CRIS, and later Merlin. This is in contrast to the AIM team, which was very small.

AIM + very few employees = very few AIM accounts being hacked outside of screen name exploits here and there. Nobody even knew the name of AIM's internal area(s).

Fast forward to autumn of 2003 and meet a couple of old friends of mine - Dime and Toast. That's when they discovered, and subsequently broke into, the LAN-only AIM admin web area. It was called WHAOPS. For the first time in history AOL hackers were finally able to learn the name of, and actually view, the elusive AIM administration panel.





Dime programmed a botnet web browser in Delphi in such a way that it let him take infected AOL employee computers and leverage them to connect to staff-only websites. Prior to finding WHAOPS, we'd just been hanging out and watching zombie AOL staff computers fill an IRC channel, casually surfing their internal networks to see what we could uncover.

Toast, Dime's twin brother, found WHAOPS. From there they started methodically targeting AIM team members, of which there were few. Eventually they hacked an AIM admin screen name, used the botnet web browser, logged into https://whaops.aol.com (iirc) and started raising hell. They reset the password to "OnlineHost", stole a slew of screen names, suspended and unsuspended other people's accounts at will - all types of fuckery.

It was an incredible night, and it was Dime's final performance. He pulled off the impossible and quit the scene permanently. I think it's unfortunate that only the hackers who were busted were immortalized by the internet. Dime and Toast are legends you'd have never otherwise heard of. This post is for them. {S GOODBYE

<3 pad

YCombinator Update Sept 23, 2021
First of all, Null, or "risk", Dime just said you're full of shit. Nobody helped him code anything. I say you're full of shit too.

The following comment was written by someone named Justin Perras aka Null who came into the scene after Dime had already left. This guy always pops up to lie on my reputation to discredit me (by, again, lying) whenever I write anything. The last time this happened was on DG last year. Today it was YCombinator.



The cognitive dissonance needed for Justin to call me a groupie is immeasurable. YCombinator ghosted my response post because it was submitted by a new account - here's that:



Hopefully that summarizes it but the tl;dr is that I'm a triple OG and Null is a coattail riding poor retard with abysmal grammar. You'll never make anything of yourself my guy. Hell, we're old. You just never pulled it off. While the upper echelon (i.e. not you) was pushing up 7 figure stats you were skidding about in the playground we left behind for you. You never managed to find yourself a seat at the table Justin. I talk to millionaire entrepreneurs all day and do very well for myself. How about you? You're a loser and that's nothing new - and you are a liar.

In other news - I managed to summon the backend programmer of WHAOPS. That was interesting.



L8Z

Hard Stare

Some thoughts about raising the profile of Lisp

I must be mellowing in my old age (possibly as opposed to bellowing) because I have been getting praise and compliments recently on comments in various places.

Don't worry, there are still people angrily shouting at me as well.

This was the earlier comment, I think... There was a slightly forlorn comment in the Reddit Lisp community, talking about this article, which I much enjoyed myself:

Someone was asking why so few people seemed interested in Lisp.

As an outsider – a writer and researcher delving into the history of OSes and programming languages far more than an actual programmer – my suspicion is that part of the problem is that this positively ancient language has accumulated a collection of powerful but equally ancient tooling, and that tooling is profoundly off-putting to the young people who approach it, used as they are to modern tools.

Let me describe what happened and why it's relevant.

I am not young. I first encountered UNIX in the late 1980s, and UNIX editors at the same time. But I had already gone through multiple OS transitions by then:

[1] weird tiny BASICs and totally proprietary, very limited editors.

[2] early standardised microcomputer OSes, such as CP/M, with more polished and far more powerful tools.

[3] I personally went from that to an Acorn Archimedes: a powerful 32-bit RISC workstation with a totally proprietary OS (although it's still around and it's FOSS now) descended from a line of microcomputers as old as CP/M, meaning no influence from CP/M or the American mainstream of computers. Very weird command lines, very weird filesystems, very weird editors, but all integrated and very powerful and capable.

[4] Then I moved to the same tools I used at work: DOS and Windows, although I ran them under OS/2. I saw the strange UIs of CP/M tools that had come across to the DOS world run up against the new wave of standardisation imposed by (classic) MacOS and early Windows.

This meant standard layouts for menus, standard contents of menus, standard dialog boxes, and standard keystrokes as well as standard mouse actions. UIs got forcibly standardised and late-1980s/early-1990s DOS apps mostly had to conform, or die.

And they did. Even then-modern apps like WordPerfect gained menu bars and changed their weird keystrokes to conform. If their own weird UIs conflicted, then the standards took over. WordPerfect had a very powerful, efficient, UI driven by function keys. But it wasn't compatible with the new standards. It used F3 for help and Escape to repeat a character, command or macro. The new standards said F1 must be help and Esc must be cancel. So WordPerfect complied.

And until the company stumbled, porting to OS/2 and ignoring Windows until it was too late, it worked. WordPerfect remained the dominant industry standard, even as its UI got modernised. Users adapted.

So why am I talking about this?

Because the world of tools like Emacs never underwent this modernisation.

Like it or not, for 30 years now, there's been a standard language for UIs and so on. Files, windows, the clipboard, cut, copy, paste. Standard menus in standard places and standard commands on them with standard keystrokes.

Vi ignores this. Its fans love its power and efficiency and are willing to learn its weird UI.

Emacs ignores this, for the same reasons. The manual and tutorial talk about "buffers" and "scratchpads" and "Meta keys" and dozens of things that no computer made in the last 40 years has had: a whole different language, from before the Mac and DOS and Windows transformed the world of computing.

The result of this is that if you read guides and so on about Lisp environments, they don't tell you how to use it with the tools you already know, in terms you're familiar with.

Instead they recommend really weird editors and weird add-ons and tools and options for those editors, all from long before this era of standardization. They don't discuss using Sublime Text or Atom or VS Code: no, it's "well you can use your own editor but we recommend EMACS and SLIME and just learn the weird UI, it's worth it. Trust us."

It's counter-productive and it turns people off.

I propose that a better approach would be to modernise some of the tooling and forcibly make it conform to modern standards. I'm not talking about trivial stuff like CUA-mode, but bigger changes, such as ErgoEmacs. By all means leave the old UI there and make it possible for those who have existing configs to keep it, but update the tools to use standard terminology, use the names printed on actual 21st-century keyboards, and work the same way as every single GUI editor out there.
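As a small taste of what I mean – and this is only a sketch, not a finished answer – even stock GNU Emacs can be launched so that the most basic modern conventions apply, with no config file at all:

    # Launch a vanilla Emacs (no init file) with CUA keys enabled,
    # so Ctrl-Z/X/C/V do undo/cut/copy/paste as on every other editor:
    emacs -Q --eval '(progn (cua-mode 1) (menu-bar-mode 1))'

That one line is nowhere near enough, of course, but it shows the direction: meet newcomers where they are, by default.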

Then once the barrier to entry is lowered a bit, start modernising it. Appearance counts for a lot. "You never get a second chance to make a first impression."

One FOSS tool that's out there is Interlisp Medley. There are efforts afoot to modernise this for current OSes.

How about just stealing the best bits and moving them to SBCL? Modernising its old monochrome GUI and updating its look and feel so it blends into a modern FOSS desktop?

Instead of pointing people at '70s tools like Emacs, assemble an all-graphical, multi-window, interactive IDE on top of the existing infrastructure and make it look pretty and inviting.

Keep the essential Lispiness by all means, but bring it into the 2020s and make it pretty and use standard terms and standard keystrokes, menu layouts, etc. So it looks modern and shiny, not some intimidating pre-GUI-era beast that will take months to learn.

Why bother? Who'll do it?

Well, Linux spent a decade or more as a weird, clunky, difficult and very specialist OS, which was just fine for its early user community... until it started catching up with Windows and Mac and growing into a pretty smooth, polished, quite modern desktop... partly fuelled by server advancements. Things like NetBSD still are all of those things, and have zero mainstream presence.

Summary: you have to get in there and play the mainstream's game, by their rules, if you want to compete.

I'd like to have the option to give Emacs a proper try, but I am not learning an entire new vocabulary and a different UI to do it. I learned dozens of 'em back in the 1980s and it was a breath of fresh air when one standard one swept them all away.

There were very modern Lisp environments around before the rise of the Mac and Windows swept all else away. OpenGenera is still out there, but we can't legally run it any more -- it's IP that belongs to the people who inherited Symbolics when its founders died.

But Interlisp/Medley is still there and it's FOSS now. I think hardcore Lispers see stuff like a Lisp GUI and natively-graphical Lisp editors as pointless bells and whistles – Emacs was good enough for John McCarthy and it still is for me! – but in 2021 they really are not pointless.

There were others, too. Apple's Dylan project was built in Lisp, as was the amazing SK8 development environment. They're still out there somewhere.

Hard Stare

Re-evaluating "in the beginning was the Command Line" 23 years later

A short extract of Neal Stephenson's seminal essay has been doing the rounds on HackerNews.


OK, fine, so let's go with it.

Since my impression is that HN people are [a] xNix fans, [b] often quite young, and therefore [c] have little exposure to other OSes, let me try to unpack what Stephenson was getting at, in context.

The Hole Hawg is a dangerous and overpowered tool for most non-professionals. It is big and heavy. It can take on big tough jobs with ease, but its size and its brute power mean that it is not suitable for precision work. It has relatively few safety features, so that if used inexpertly, it will hurt its operator.

DIY stores are full of smaller, much less powerful tools. This is for good reasons:

  • because for non-professional users, those smaller, less-powerful tools are much safer. A company which sells a tool to untrained users which tends to maim or kill them will go out of business.

  • because smaller, less-powerful tools are better for smaller jobs, that a non-professional might undertake, such as hanging a picture, or putting up some shelves.

  • professionals know to use the right tool for the job. Surgeons do not operate with chainsaws (even though they were invented for surgery). Carpenters do not use axes.


The Hole Hawg, as described, is a clumsy tool that needs things attached to it in order to be used, and even then, you need to know the right way or it will hurt you.

Compare with a domestic drill with a pistol grip that is ready to use out of its case. Modern ones are cordless, increasing their convenience.

One is a tool for someone building a house; the other is a better tool for someone living in that house.

That's the drill part.

Now, let's discuss the OSes talked about in the rest of the 1999 piece from which that's a clipping [PDF].

There are:

  • Linux, before KDE, with no free complete desktop environments yet;

  • Windows, meaning Windows 98SE or NT 4;

  • Classic MacOS – version 9;

  • BeOS.

Stephenson points out that Linux is as powerful as any of them, cheaper, but slower, ugly and unfriendly.

He points out that MacOS 9 is as pretty, friendly, and comprehensible as OSes get, but it doesn't multitask well, it is not very stable, and when a program crashes, your entire computer probably goes with it.

He points out that Windows is overpriced, performs poorly, and is not the best option for anyone – but that everyone runs it and most people just conform with what the mainstream does.

He praises BeOS very highly, which was 100% justified at the time: it was faster than anything else, by a large margin. It had superb multimedia support and integration, better than anything else at the time. It was standards-compliant but not held back by that. For its time, it was a supermodern OS, eliminating tonnes of legacy cruft.

But it didn't have many apps so it was mainly for people in narrow niches, such as music production or maybe video editing.

It was manifestly the future, though. But we're living in the future and it wasn't. This was 23 years ago, nearly a quarter of a century, before KDE and GNOME, before Windows XP, before Mac OS X. You need to know that.

What Unix people interpret as praise here is in fact criticism.

That Unix is very unfriendly and can easily hurt its user. (Think `rm -rf /` here.)

That Unix has a great deal of raw power but maybe more than most people need.

That Unix is, frankly, kinda ugly, and only someone who doesn't care about appearances would choose it.

That something of this brute power is not suitable for fine precision work. (Which it still mostly isn't -- Mac OS X is Unix, tuned and polished, and that's what the creative pros use now.)

Here's a response from 17 years ago.
Hard Stare

The historical significance of DEC and the PDP-7, -8, -11 & VAX

Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It's entitled "First new vax in ...30 years? 🙂"

Someone posted it on Hackernews. One of the comments said, roughly, that they didn't see the significance and could someone "explain it like I'm a Computer Science undergrad." This is my attempt to reply...

Um. Now I feel like I'm 106 instead of "just" 53.

OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families... and both were from the same company.

Minicomputers are what came after mainframes, before microcomputers. A microcomputer is a computer whose processor is a microchip: a single integrated circuit containing the whole processor. Before the first one was invented in 1974 (IIRC), processors were made from discrete logic: lots of little silicon chips.

The main distinguishing feature of minicomputers from micros is that the early micros were single-user: one computer, one terminal, one user. No multitasking or anything.

Minicomputers appeared in the 1960s and peaked in the 1970s, and cost just tens to hundreds of thousands of dollars, while mainframes cost millions and were usually leased. So minicomputers could be afforded by a company department, not an entire corporation... meaning that they were shared, by dozens of people. So, unlike the early micros, minis had multiuser support, multitasking, basic security and so on.

The most significant minicomputer vendor was a company called DEC: Digital Equipment Corporation. DEC made multiple incompatible lines of minis, many called PDP-something -- some with 12-bit logic, some with 16-bit, 18-bit, or 36-bit logic (and an unreleased 24-bit model, the PDP-2).

One of its early big hits was the 12-bit PDP-8. It ran multiple incompatible OSes, but one was called OS/8. This OS is long gone but it was the origin of a command-line interface (largely shared with TOPS-10 on the later, bigger and more expensive, 36-bit PDP-10 series) with commands such as DIR, TYPE, DEL, REN and so on. It also had a filesystem with 6-letter names (all in caps) with semi-standardised 3-letter extensions, such as README.TXT.

This OS and its shell later inspired Digital Research's CP/M, the first industry-standard OS for 8-bit micros. CP/M was planned to be the OS for the IBM PC, too, but IBM got a cheaper deal from Microsoft for what was essentially a clean-room re-implementation of CP/M, which IBM called "PC DOS" and Microsoft called "MS-DOS".

So DEC's PDP-8 and OS/8 directly inspired the entire PC-compatible industry, the whole x86 computer industry.

Another DEC mini was the 18-bit PDP-7. Like almost all DEC minis, this too ran multiple OSes, both from DEC and others.

A 3rd-party OS hacked together as a skunkworks project on a disused spare PDP-7 at AT&T's research labs was UNIX.

More or less at the same time as the computer industry gradually standardised on the 8-bit byte, DEC also made 16-bit and 32-bit machines.

Among the 16-bit machines, the most commercially successful was the PDP-11. This is the machine that UNIX's creators first ported it to, and in the process, they rewrote it in a new language called C.

The PDP-11 was a huge success, so DEC was under commercial pressure to make an improved successor model. It did this by extending the 16-bit PDP-11 instruction set to 32 bits, creating the VAX. For this machine, the engineer behind the most successful PDP-11 OS, RSX-11, led a small team that developed a new pre-emptive multitasking, multiuser OS with virtual memory, called VMS.

(When it gained a POSIX-compliant mode and TCP/IP, it was renamed from VAX/VMS to OpenVMS.)

OpenVMS is still around: it was ported to DEC's Alpha, the first 64-bit RISC chip, and later to the Intel Itanium. Now it has been spun out from HP and is being ported to x86-64.

But the VMS project leader, Dave Cutler, and his team, were headhunted from DEC by Microsoft.

At this time, IBM and Microsoft had very acrimoniously fallen out over the failed OS/2 project. IBM kept the x86-32 version of OS/2 for the 386, which it completed and sold as OS/2 2 (and later 2.1, 3, 4 and 4.5; it is still on sale today, under the name Blue Lion, from Arca Noae).

At Microsoft, Cutler and his team were given the very incomplete OS/2 version 3, a planned CPU-independent portable version. Cutler et al. finished it, porting it to the new Intel RISC chip, the i860, which was codenamed "N-Ten". The resultant OS was initially called OS/2 NT, and later renamed – due to the success of Windows 3 – to Windows NT. Its design owes as much to DEC's VMS as it does to OS/2.

Today, Windows NT is the basis of Windows 10 and 11.

So the PDP-8 and the PDP-11 directly influenced the development of CP/M, MS-DOS, OS/2, and Windows 1 through to Windows ME.

A different line of PDPs directly led to UNIX and C.

Meanwhile, the PDP-11's 32-bit successor directly influenced the design of Windows NT.

When micros grew up and got to be 32-bit computers themselves, and vendors needed multitasking OSes with multiuser security, they turned back to 1970s mini OSes.

This project is a FOSS re-implementation of the VAX CPU on an FPGA. It is at least the 3rd such project but the earlier ones were not FOSS and have been lost.
Hard Stare

Mankind is a monkey with its hand in a trap, & legacy operating systems are among the bait

[Another recycled mailing list post]

I was asked what options there were for blind people who wish to use Linux.

The answer is simple but fairly depressing: basically, every blind computer user I know, personally or via friends of friends, uses Windows or Mac. There is a significant move from Windows to Mac.

Younger computer users -- by which I mean people who started using computers since the 1990s and widespread internet usage, i.e. most of them -- tend to expect graphical user interfaces, menus and so on, and not to be happy with command-line-driven programs.

This applies every bit as much to blind users.

Linux can work very well for blind users if they use the terminal. The Linux shell is the richest and most powerful command-line environment there is or ever has been, and one can accomplish almost anything one wants to do using it.
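For instance, here is the sort of thing I mean – the classic word-frequency pipeline, where report.txt stands in for any text file – which gives you the ten most common words in a document in one line, with no programming as such:

    # One word per line, lowercased, counted, sorted, top ten shown:
    tr -cs '[:alpha:]' '\n' < report.txt | tr '[:upper:]' '[:lower:]' |
        sort | uniq -c | sort -rn | head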

But it's still a command line, and a notably unfriendly and unhelpful one at that.

In my experience, for a lot of GUI users, that is just too much.

For instance, a decade or so back, the Register ran some articles I wrote on switching to Linux. They were, completely intentionally, what is sometimes today called "opinionated" -- that is, I did not try to present balance or a spread of options. Instead I presented what was, IMHO, the best choices.


Multiple readers complained that I included a handful of commands to type in. "This is why Linux is not usable! This is why it is not ready for the real world! Ordinary people can't do this weird arcane stuff!" And so on.
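The offending commands were along these lines – I don't have the original articles to hand, so treat these package names as illustrative, not the actual ones:

    sudo apt-get update
    sudo apt-get install vlc ubuntu-restricted-extras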

Probably some of these remarks are still there in the comments pages.

In vain did some others try to reason with them.

But it was 10x quicker to copy-and-paste these commands!
-> No, it's too hard.

He could give GUI steps but it would take pages.
-> Then that's what he should have done, because we don't do this weird terminal nonsense.

But then the article would have been 10x longer and you wouldn't read it.
-> Well then the OS is not ready, it's not suitable for normal people.

If you just copy-and-paste, it's like 3 mouse clicks and you can't make a typing error.
-> But it's still weird and scary and I DON'T LIKE IT.

You can't win.

This is why Linux Mint succeeded -- partly because when Ubuntu introduced its non-Windows-like desktop after Microsoft threatened to sue, Mint hoovered up those users who wanted it Windows-like.

But also because Mint didn't make you install the optional extras. It bundled them, and so what if that makes it illegal to distribute in some countries? It Just Worked out of the box, and it looked familiar, and that won them millions of fans.

Mac OS X has done extremely well partly because users never ever need to go near a command line, for anything, ever. You can if you want, but you never, ever need to.

If that means you can't move your swap file to another drive, so be it. If that means that a tonne of the classic Unix configuration files are gone, replaced by a networked configuration database, so be it.

Apple is not afraid to break things in order to make something better.

The result: Apple became the first trillion-dollar computer company, with hundreds of millions of happy customers.

Linux gives you choices, lets you pick what you want, work the way you want... and despite offering the results for free, the result has been about 1% of the desktop market and basically zero of the tablet and smartphone markets.

Ubuntu made a valiant effort to make a desktop of Mac-like simplicity, and it successfully went from a new entrant in a busy marketplace in 2004 to being the #1 desktop Linux within a decade. It has made virtually no dent on the non-Linux world, though.

After 20 years of this, Google (after *bitter* internal argument) introduced ChromeOS, a Linux which takes away all your choices. It only runs on Google hardware, has no apps, no desktop, no package management, no choices at all. It gives you a dead cheap, virus-proof computer that gets you on the Web.

In less time than Ubuntu took to win about 1% of the Windows market over to Linux, ChromeBooks persuaded about one third of the world's laptop-buying market to switch to Linux. More Chromebooks sell every year -- tens of millions -- than Ubuntu has gained users in total since it launched.

What effect has this had on desktop Linux? Zero. None at all. If that is the price of success, they are not willing to pay it. What Google has done is so unspeakably foul, so wrong, so blasphemous, they don't even talk about it.

What effect has it had on Microsoft? A lot. Cheaper Windows laptops than ever, new low-end editions of Windows, serious efforts to reduce the disk and memory usage...

And little success. The cheap editions lose what makes Windows desirable, and ultra-cheap Windows laptops make poorer slower Chromebooks than actual Chromebooks.

Apple isn't playing. It makes its money in the high-end.

Unfortunately a lot of people are very technologically conservative. Once they find something they like, they will stay with it at all costs.

This attitude is what has kept Microsoft immensely profitable.

A similar one is what has kept Linux as the most successful server OS in the world. It is just a modernised version of a quick and dirty hack of an OS from the 1960s, but it's capable and it's free. "Good enough" is the enemy of better.

There are hundreds of other operating systems out there. I listed 25 non-Linux FOSS OSes in this piece, and yes, FreeDOS was included.

There are dozens that are better in various ways than Unix and Linux.

  • Minix 3 is a better FOSS Unix than Linux: a true microkernel which can cope with parts of itself failing without crashing the computer.

  • Plan 9 is a better UNIX than Unix. Everything really is a file and the network is the computer.

  • Inferno is a better Plan 9 than Plan 9: the network is your computer, with full processor and OS-independence.

  • Plan 9's UI is based on Oberon: an entire mouse-driven OS in 10,000 lines of rigorous, type-safe code, including the compiler and IDE.

  • A2 is the modern descendant of Oberon: real-time capable, a full GUI, multiprocessor-aware, internet- and Web-capable.

(And before anyone snarks at me: they are all niche projects, direly lacking polish and not ready for the mass market. So was Linux until the 21st century. So was Windows until version 3. So was the Mac until at the very least the Mac Plus with a hard disk. None of this in any way invalidates their potential.)

But almost everyone is too invested in the way they know and like to be willing to start over.

So we are trapped, the monkey with its hand stuck in a coconut shell full of rice, even though it can see the grinning hunter coming to kill and eat it.

We are facing catastrophic climate change that will kill most of humanity and most species of life on Earth, this century. To find any solutions, we need better computers that can help us to think better and work out better ways to live, better cleaner technologies, better systems of employment and housing and everything else.

But we can't let go of the single lousy handful of rice that we are clutching. We can't let go of our broken political and economic and military-industrial systems. We can't even let go of our broken 1960s and 1970s computer operating systems.

And every day, the hunter gets closer and his smile gets bigger.
Hard Stare

Did you know that you can 100% legally get & run WordPerfect for free?

In fact, there are two free versions: one for Classic MacOS, made freeware when WordPerfect discontinued Mac support, and a native Linux version, for which Corel offered a free, fully-working, demo version.

But there is a catch – of course: they're both very old and hard to run on a modern computer. I'm here to tell you how to get them and how to install and run them.

WordPerfect came to totally dominate the DOS wordprocessor market, crushing pretty much all competition before it, and even today some people consider it the finest word-processor ever created.

Indeed, the author of that piece maintains a fan site that will tell you how to download and run WordPerfect for DOS on various modern computers, if you have a legal copy of it. And, of course, if you run Windows, the program is still very much alive and well, and you can buy it from Corel Corp.

Sadly, the DOS version has never been made freeware. It still works – I have it running under PC-DOS 7.1 on an old Core 2 Duo Thinkpad, and it's blindingly fast. It also works fine under dosemu. It is still winning new fans today. Even the cut-down LetterPerfect still costs money. The closest thing to a free version is the plain-text-only WordPerfect Editor.

Edit: I do not know if Corel operates a policy like Microsoft's, where owning a new version allows you to run any older version. It may be worth asking.

But WordPerfect was not, originally, a DOS or a PC program. It was originally developed for a Data General minicomputer, and only later ported to the PC. In its heyday, it also ran on classic MacOS, the Amiga, the Atari ST and more. I recall installing a text-only native Unix version on SCO Xenix 386 for a customer. In theory, this could run on Linux using iBCS2 compatibility.

When Mac OS X loomed on the horizon, WordPerfect Corporation discontinued the Mac version – but when they did so, they made the last ever release, 3.5e, freeware.

WordPerfect 3.5e 

Of course, this is not a great deal of use unless you have a Mac that can still run Classic – which today means a PowerPC Mac with Mac OS X 10.4 or earlier. However, hope springs eternal: there is a free emulator called SheepShaver that can emulate classic MacOS on Intel-based Macs, and the WPDOS site has a downloadable, ready-to-use instance of the emulator all set up with MacOS 9 and WordPerfect for Mac.

To be legal, of course, you will need to own a copy of MacOS 9 – that, sadly, isn't free. Efforts are afoot to get it to run natively on some of the later PowerMac G4 machines on which Apple disabled booting the classic OS. I must try this on my Mac mini G4 and iBook G4.

The non-Windows version of WordPerfect that lived the longest, though, was the Linux edition. Corel was very keen on Linux. It had its own Linux distro, Corel LinuxOS, which had a very smooth modified KDE and was the first distro to offer graphical screen-resolution setting. Corel made its own ARM-based Linux desktop, the NetWinder, as reviewed in LinuxJournal.

And of course it made WordPerfect available for Linux.

Edit: Sadly, though, Microsoft intervened, as it is wont to do. The programs in WordPerfect Office originally came from different vendors. Some reviews suggested that the slightly different looks and feels of the different apps would be a problem, compared to the more uniform look and feel of MS Office. (The Microsoft apps in Office 4 were very different from one another. Office 95 and Office 97 had a lot of effort put in to make them more alike, and not much new functionality.)

Corel was persuaded to license the MS Office look-and-feel – the button bars and designs – and the macro language (Visual BASIC for Applications) and incorporate them into WordPerfect Office.

But the deal had a cost above the considerable financial one: Corel had to discontinue all its Linux efforts. So it sold off Corel LinuxOS, which became Xandros. It sold its NetWinder hardware, which became independent. It killed off its native Linux app, and ended development of WordPerfect Office for Linux, which was a port of the then-current Windows version using Winelib. In fact, Corel contributed quite a lot of code to the WINE Project at this time in order to bring WINE up to a level where it could completely and stably support all of WordPerfect Office.


I'm not sure if the text-only WordPerfect for Unix ever had a native Linux version – I didn't see it if it did – but a full graphical version of WordPerfect 8 was included with Corel LinuxOS and also sold at retail. Corel offered both a free edition, with fewer bundled fonts, and a paid version.

This is still out there – although most of its mirrors are long gone, the Linux Documentation Project has it. It's not trivial to install a 20-year-old program on a modern distro, but luckily, help is at hand. The XWP8Users site has offered some guidance for many years, but I confess I never got it to work except by installing a very old version of Linux in a VM. For instance, it's easy enough to get it running on Ubuntu 8.04 or 8.10 – Corel LinuxOS was a Debian-derivative, and so is Ubuntu.

The problem is that, even in these days of containers for everything, Ubuntu 8 is older than anything modern tooling supports. Linux containers came along rather later than 2008. In fact, in 2011 I predicted that containers were going to be the Next Big Thing. (I was right, too.)

So I've not been able to find any easy way to create an Ubuntu 8.04 container on modern Ubuntu. If anyone knows, or is up for the challenge, do please get in touch!

But the "Ex WP8 Users" site folk have not been idle, and a few months ago, they released a big update to their installation instructions. Now, there's a script, and all you need to do is download the script, grab the WordPerfect 8.0 Downloadable Personal Edition (DPE), put them in a folder together and run the script, and voilá. I tried it on Ubuntu 20.04 and it works a treat so long as I run it as root. I have not seen any reports from anyone else about this, so it might be just my installation.

Read about it and get the script here.

Edit:

For more info, read the WordPerfect for Linux FAQ. This includes instructions on adding new fonts, fixing the MS Word import filter and some other useful info.

From the discussion on Hackernews and the FAQ, I should note that there are terms and conditions attached to the free WP 8.0 DPE. It is only free for personal, non-commercial use, and some people interpret Corel's licence as meaning that although it was a free download, it is not redistributable. This means that if you did not obtain it from Corel's own Linux site (taken down in 2003) or from an authorised re-distributor (such as bundled with SUSE Linux up to 6.1 and early versions of Mandrake Linux, or with the "WordPerfect for Linux Bible" hardcopy book, or from a few resellers) then it is not properly licensed.

I dispute this: as multiple vendors did re-distribute it and Corel took no action, I consider it fair play. I also very much doubt that anyone will use this in a commercial setting in 2021.

If you are interested in the more complete WordPerfect 8.1, I note that it was included in Corel LinuxOS Deluxe Edition and that this is readily downloaded today, for example from the Internet Archive or from ArchiveOS. However, unless you bought a licence to this, this is not freeware and does not include a licence for use today.



r/linux - A blast from the past: native WordPerfect 8 for Linux running on Fedora 13. It still works! [pic]

Postscript

If you really want a free full-function word-processor for DOS, which runs very well under DOSemu on Linux, I suggest Microsoft Word 5.5. MS made this freeware at the turn of the century as a free Y2K update for all previous versions of Word for DOS.

How to get it:
Microsoft Word for DOS — it’s FREE
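Under Linux, the rough procedure is as follows – assuming a Debian-family distro, and with the DOS-side path purely an example of wherever you unpack Word:

    sudo apt-get install dosemu   # or dosemu2, depending on the distro
    dosemu                        # boots to a DOS prompt; then, at C:\>
    #   cd \WORD55
    #   word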

Sadly, MS didn't make the last ever version of Word for DOS free. It only got one more major release, Word 6 for DOS. This has the same menu layout and the same file format as Word 6 for Windows and Word 6 for Mac, and also Word 95 in Office 95 (for Win95 and NT4). It's a little more pleasant to use, but it's not freeware — although if you own a later version of Word, the licence covers previous versions too.

Here is a comparison of the two:
Microsoft Word 5.5 And 6.0 In-depth DOS Review With Pics
Hard Stare

The pros and cons of cheap Android phones — or why you don't get OS updates

I like cheap Chinese phones. I am on my 3rd now: first an iRulu Victory v3, which came with Android 5.1. It was the first 6.5" phablet I ever saw: plasticky, not very hi-res, but well under €200, and it had dual SIMs, a µSD slot and a replaceable battery. No compass, though.

Then a PPTV King 7, an amazing device for the time, which came with Android 5 as well, but half in Chinese. I rooted it and put CyanogenMod on it, getting me Android 6. Retina-class screen, dual SIM or one SIM plus µSD, and fast.

Now, an Umidigi F2, which came with Android 10. Astonishing spec for about €125. Dual SIM + µSD, 128GB flash, fast, superb screen.

But with all of them, typically, you get 1 ROM update ever, normally the first time you turn it on, then that's it. The PPTV was a slight exception as a 3rd party ROM got me a newer version, but with penalties: the camera autofocus failed and all images were blue-tinged, the mic mostly stopped working, and the compass became a random-number generator.

They are all great for the money, but the chipset will never get a newer Android. This is normal. It's the price of getting a £150 phone with the specification of a £600+ phone.

In contrast, I bought my G/F a Xiaomi A2. It's great for the money – a £200 phone – but it wasn't high-end even when new. But the build quality is good, the OS has little bloatware (because Android One), at 3YO the battery still lasts a day, there are no watermarks on photos, etc.

It had 3 major versions of Android (7, then 8, then 9) and then some updates on top.

This is what you get with Android One and a big-name Chinese vendor.

Me, I go for the amazing deals from little-known vendors, and I accept that I'll never get an update.

MediaTek are not one of those companies that maintain their Android ports for years. In return, their chips are cheap and the spec is good when they're new. They just move on to new products. Planet persuaded 'em to put Android 8 on the Gemini, and they deserve kudos for that, not complaining. It's an obsolete product; there's no reason to buy a Gemini when you could have a Cosmo, other than cost.

No, these are not £150 phones. They're £500 phones, because of the unique form-factor: a clamshell with the best mobile keyboard ever made.

But Planet Computers are a small company making an almost-bespoke device: i.e. in tiny numbers by modern standards. So, yes, it's made from cheap parts from the cheapest possible manufacturers, because the production run is thousands. A Chinese phone maker like Xiaomi would consider a production run of only 20 million units to be a failure. (Source: interview with former CEO.) 80 million is a niche product to them.

PlanetComp production is below prototype scale for these guys. It's basically a weird little niche hand-made item.

For that, £500 is very good. Compare with the F(x)tech Pro-1, still not shipping a good 18 months after I personally enquired about one, which is about £750 – for a poorer keyboard and a device with fewer adaptations to landscape use.

This is what you get when one vendor -- Google -- provides the OS, another does the port, another builds products around it, and often, another sells the things. MediaTek design and build the SoC, and port one specific version of Android to it... a bit of work from the integrator and OEM builder, and there's your product.

This is one of the things you sometimes get if you buy a name-brand phone: OS updates. But the Chinese phones I favour are ½-⅓ of the price of a cheap name-brand Android and ¼ of the price of a premium brand such as Samsung. So I can replace the phone 2-3× more often and keep more current that way... and still be a lot less worried about having it stolen, or breaking it, or the like. Win/win, from my perspective.

Part of this is because the ARM world is not like the PC world.

For a start, in the x86 world, you can rely on there being system firmware to boot your OS. Most PCs used to use a BIOS; the One Laptop Per Child XO-1 used Open Firmware, like NewWorld PowerMacs. Now, we all get UEFI.

(I do not like UEFI much, as regular readers, if I have a plural number of those, may have gathered.)

ARM systems have no standard firmware. No bootloader, nothing at all. The system vendor has to do all that stuff themselves. And with a SoC (System On A Chip), the system vendor is the chip designer/fabricator.

(For instance, the Raspberry Pi's ARM cores are actually under the control of the GPU which runs its own OS -- a proprietary RTOS called ThreadX. When a RasPi boots, the *GPU* loads the "firmware" from SD card, which boots ThreadX, and then ThreadX starts the ARM core(s) and loads an OS into them. That's why there must be the special little FAT partition: that is what ThreadX reads. That's also why RasPis do not use GRUB or any other bootloader. The word "booting" is a reference to Baron Münchausen lifting himself out of a swamp by his own bootstraps. The computer loads its own software, a contradiction in terms: it lifts itself into running condition by its own bootstraps. I.e. it boots up.

Well, RasPis don't. The GPU boots, loads ThreadX, and then ThreadX initialises the ARMs and puts an OS into their memory for them and tells them to run it.)
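(You can see this on any Pi: the little FAT partition contains, from memory, something like the following. Exact file names vary by model and firmware version, so treat this listing as indicative rather than gospel.)

    $ ls /boot
    bootcode.bin    # second-stage GPU bootloader
    start.elf       # the GPU firmware proper, i.e. the ThreadX side
    fixup.dat       # memory-split configuration data for start.elf
    config.txt      # the nearest thing a Pi has to BIOS settings
    cmdline.txt     # the kernel command line passed to the ARM OS
    kernel.img      # the ARM OS image that ThreadX loads and starts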

So each and every ARM system (i.e. each device built around a particular SoC, unless it's very weird) has to have a new native port of every OS. You can't boot one phone off the Android from another.

A Gemini is a cheapish very-low-production-run Chinese Android phone, with an additional keyboard wired on, and the screen forced to landscape mode in software. (A real landscape screen would have cost too much.)

The Cosmo piggybacks a separate little computer in the lid, much as the "touchbar" on a MacBook Pro is a separate little ARM computer running its own OS, like a tiny, very long, thin iPad.

AstroSlide will do away with this again, so the fancy hinge should make for a simpler, less expensive design... Note, I say should...
Hard Stare

Some random sketchy thoughts on Unix, web-apps and workflow

I want to get this down before I forget.

I use Gmail. I even pay for Gmail, which TBH really rankles, but I filled up my inbox, and I want that stuff as a searchable record (not a Zip or something). Any other way of storing dozens of gigs of email means either a lot more work, or moving to other paid storage (so a lot of work *and* paying), so paying for more storage is the least-effort strategy.

I also use a few other web apps, mostly Google: calendar, contacts (I've been using pocket computers since the 1980s, so I have over 5000 people in my address book, and yes, I do want to keep them all), Keep for notes (because Evernote crippled their free tier and don't offer enough value to me to make it worth buying). I very occasionally use Google Sheets and Docs, but would not miss them.

But by and large, I hate web apps. They are slow, they are clunky, they have poor UI with very poor keyboard controls, they often tie you to specific browsers, and modern browsers absolutely suck and are getting worse with time. Firefox was the least worst, but they crippled it with Firefox Quantum and since then it just continues to degenerate.

I used Firefox because it was customisable. I had a vertical tab bar (one addon) – no, not tree-style, that's feature bloat – and it shared my sidebar with my bookmarks (another addon), and the bookmarks were flattened not hierarchical (another addon) because that's bloat too.

Some examples of browser bloat:

  • I have the bookmarks menu for hierarchical bookmark search & storage. I don't need it in the bookmarks sidebar too; that's duplication of functionality.

  • I don't need hierarchical tabs; I have multiple windows for that. So TreeStyleTabs is duplication of functionality – but it's an add-on, so I don't mind too much, so long as I have a choice.

  • I don't need user profiles; my OS has user profiles. That should be an add-on, too.

  • Why do all my browsers include the pointless bloat of web-developer tools? I, and 99.99% of the Web's users, will never, ever need or use them. Why are they part of the base install?

I don't think Mozilla thought of this. I think the Mozilla dev team don't do extensive customisation of their browser. So when they went with multi-process rendering (a good thing) and Rust for safer rendering (also a good thing), they looked at XUL and all the fancy stuff it did, and they compared it with Chrome's (crippled, intrusive) WebExtensions, and decided to copy Chrome and ripped out their killer feature, because they didn't know how to use it effectively themselves.

We all have widescreens now. It's hard to get anything but widescreens. Horizontal toolbars are the enemy of widescreens. Vertical pixels are in short supply; horizontal ones are cheap. The smart thing to do is "spend" cheap, plentiful horizontal space on toolbars, and save valuable, "expensive" vertical space for what you are working on.

The original Windows 95 Explorer could do a vertical taskbar, which is superb on widescreens -- although they hadn't been invented yet. But the Cinnamon and Mate FOSS desktops, both copies of the Win95 design, can't do this. KDE does it so badly that for me it's unusable.

It's the Mozilla syndrome: don't take the time to understand what the competition does well. Just copy the obvious bits from the competition and hope.

Hiding stuff from view and putting it on keystrokes or mouse gestures is not the smart answer. That impedes discoverability and undoes the benefits of nearly 40 years of work on graphical user interfaces. It's fine to do that as well as good GUI design, but not instead of it. If your UI depends on things like putting a mouse into one corner, then slapping a random word in there as a visual clue (e.g. "Activities") is poor design. GUIs were in part about replacing screensful of text with easy, graphical cues.

Chrome has good things. Small ones. The bookmarks toolbar that only appears in new tabs and windows? That's a good thing. In 15 years, Firefox never copied that, but it ripped out its rich extensions system and copied Chrome's broken one.

Tools like these extensions are local tools that do things I need. Mozilla tried to copy Chrome's simplicity by ripping out the one thing that kept me using Firefox. They didn't look at what was good about what they had; they tried to copy what was good about what their competitor had. I used Firefox because it wasn't Chrome.

For now, I switched to Waterfox, because I can keep my most important XUL extensions.

I run 2 browsers because I need some web apps. I use Chrome for Google's stuff, and Waterfox for everything else. Why them? Because they are cross-platform. I use macOS on my home desktop, Linux on my work desktop and my laptops. I rarely use Windows at all but they both work on those too. I don't care if Safari has killer features, because it doesn't work on Windows (any more) or Linux, so it's no use to me. Any Apple tool gets replaced with a cross-platform tool on my Macs.

I also use Thunderbird, another of its own tools Mozilla doesn't understand. It's my main work email client, and I use it at home to keep a local backup of my Gmail. But I don't use it as my home email client, partly because I switch computers every day. My email is IMAP so it's synched – all clients see the same email. But my filters are not. IMAP doesn't synch filters. I have over 100 filters for home use and a few dozen for work use. I get hundreds of emails every day, and I only see half a dozen in my inbox because the rest are filtered into subfolders.

We have a standard system for cross-platform email storage (IMAP), which replaced minimal mail retrieval for local storage (POP3), but nobody's ever extended it to compete with systems, such as Gmail or MS Outlook and Exchange Server, that offer more: rules, workflow, rich calendaring, rich contacts storage. And so local email clients are fading away and more and more companies use webmail.

Why have web apps prospered so when rich local tools can do more? Because only a handful of companies grok that rich local tools can be strong selling points and keep enhancing them.

I used to use Pidgin for all my chat stuff. Pidgin talked to all the chat protocols: AIM (therefore Apple iMessage too), Yahoo IM, MSN IM, ICQ, etc. Now, I use Franz, because it can talk to all the modern chat systems: Facebook Messenger, WhatsApp, Slack, Discord, etc. It's good: a big help. But it's bloated and sluggish, needs gigs of RAM, and each tab has a different UI inside it, because it's an Electron app. Each tab is a dedicated web browser.

Pidgin, via libpurple plugins, can do some of these – FB via a plugin FB keeps trying to block; Skype; Telegram, the richest and most mature of the modern chat systems. But not all, so I need Franz too. Signal, because it's a nightmare cluster of bad design by cryptology and cryptocurrency nerds, doesn't even work in a web browser.

Chat systems, like email, are a failure, both of local rich app design to keep up, and of protocol standardisation in order to compete with proprietary solutions.

Email is not a hard problem. This is a more than fifty-year-old tool.

Chat is not hard either. This is a more than forty-year-old tool.

But groupware is different. Groupware builds on these but adds forms, workflow, organisation-wide contacts and calendar management. Groupware never got standardised.

Ever see Engelbart's "mother of all demos"? Also more than half a century ago. It included collaborative file editing. But it was a server-based demo, because microcomputers hadn't been invented yet. So, yes, for some things, like live editing by multiple users, a web app can do things local apps can't easily do.

But for most things, a native app should always be able to outcompete a web app. Web apps grew because they exploited a niche: if you have lots of proprietary protocols, then that implies lots of rich proprietary clients for lots of local OSes. Smartphones and tablets made that hard – lots of duplication of functionality in what must be different apps because different OSes need different client apps – so the functionality was moved into the server, enabled by Javascript.

Javascript was and is a bad answer. It was a hastily-implemented, poorly-specified language, which vastly bloats browsers, needs big expensive just-in-time-compiling runtimes and locks lower-powered devices out of the web.

The web is no longer truly based on HTML and formatting enhancements such as CSS. Now, the Web is a delivery medium for Javascript apps.

Why?

Javascript happened because the standard protocols for everyday internet communications were inadequate. That inadequacy meant a single company could obtain a stranglehold via proprietary communications tools.

That was a bad thing.

FOSS xNix arose because of standardisation.

xNix is a shorthand for Unix. Unix™ does not mean "an OS based on AT&T UNIX code". It has not meant that since 1993, when Novell gave the Unix trademark to X/Open, now part of the Open Group. Since then, "Unix" has meant "any OS that passes the Open Group's Unix compatibility testing". Linux has passed these tests, more than once: both the Inspur K-UX and Huawei EulerOS distros are certified. This means that Linux is a Unix these days. Accept it and move on; it is a matter of legal fact and record. The "based on AT&T code" thing has not been true for nearly thirty years. It is long past time to let it go.

The ad-hoc, informal IBM PC compatibility standard meant that any computer that wanted to be sold as "IBM compatible" had to run IBM software, not just run MS-DOS. All the other makes of MS-DOS computer couldn't run MS Flight Simulator and Lotus 1-2-3, so they died out. Later, that came to include powerful 32-bit 80386DX computers, which allowed 32-bit OSes to come to the PC. Later still, the 80386SX made 32-bit computers cheap and widespread, and that allowed 32-bit OSes to become mainstream. Some, like FreeBSD, stuck to their own standards (e.g. their own partitioning schemes), and permissive licenses meant people could fork them or take their code into proprietary products. Linux developed on the PC and from its beginning embraced PC standards, including PC partitioning, PC and Minix filesystems and so on... and its strict licence largely stopped people building Linux into proprietary software. So it throve in ways no BSD ever did.

Because of standards. Standards, even informally-specified ad-hoc ones, are good for growth and very good for FOSS.

The FOSS world does have basic standards for email retrieval and storage, but they're not rich enough, which means proprietary groupware systems had an edge and thrived.

Web apps are the third-party software-vendor world's comeback against Windows, Office and Exchange. They let macOS and iOS and Android and ChromeOS (and, as a distant outlier, other Linux distros) participate in complex workflows. Smartphones, tablets and ChromeOS have done well because their local apps are, like Franz's tabs, mostly embedded single-app web browsers.

Web apps use a loose ad-hoc standard – web browsers and Javascript – to offer the rich functionality previously dominated by one vendor's proprietary offerings.

But they delivered their rich cross-platform functionality using a kludge: an objectively fairly poor, mostly-interpreted language, in browsers.

Even browser and browser-app developers haven't learned the lessons of rich local clients.

Standards, too, need to evolve and keep up with proprietary tech, and they haven't. XMPP and Jabber were pretty good for a while, and originally, FB Messenger and Google Chat were XMPP-based... but XMPP didn't cover some use cases, so they extended it, and then replaced it.

I've read many people saying Slack is just enhanced IRC: multi-participant chat. XMPP doesn't seem to handle multi-participant chat very well. And that's a moving target: Slack adds formatting, emoticons, animated attachments, threading...

The FOSS answer should be to make sure that open standards for this stuff keep up and can be implemented widely, both by web apps and by local clients. Standards-based groupware. There are forgotten standards such as NNTP that are relevant to this.

But standards go both ways. Old-fashioned internet-standard email – plain text, inline quoting, and so on – has compelling advantages that "business-style" email cannot match. Rich clients (local or web-based) need to enforce these conventions and help people learn to use them. Minimal standards that everyone can use are good for accessibility, and good for different UI access methods: keyboard + mouse, keyboard + touchscreen, tablet + keyboard, or smartphone.

Richer local apps aren't enough: Chandler, the OSAF's ambitious FOSS PIM project, showed that. Standards, so that multiple different clients can interoperate, are needed too.

What I am getting at here is that there is important value in defining minimum viable functionality and ensuring that it works in an open, documented way.

The design of Unix largely predates computer graphics and totally predates GUIs. The core model is "everything is a file", that files contain plain text, markup is also plain text, and that you can link tools together by exchanging text:

ls -la | grep ^d | less   # long listing, keep only directories (lines starting "d"), page the output

This is somewhat obsolete in the modern era of multimedia and streaming, but the idea was sound back then. It's worth remembering that Windows now means Windows NT, and the parents of the core Windows NT OS (not the Win32 GUI part) were the processor-portable OS/2 3 and DEC VMS. VMS had a richer content model than Unix, as it should – its design is nearly a decade younger.

Dave Cutler, lead architect of both VMS and NT, derided the Unix it's-all-just-text model by reciting "Get a byte, get a byte, get a byte byte byte" to the tune of the finale of Rossini's William Tell Overture.

Defined protocols for communication mean that different apps from different teams can interoperate – just as an email client talks to an email server and so can download your mail, which later evolved into the client asking the server to show you your mail while the messages stay on the server. This is immensely powerful, and now we are neglecting it.

We can't force companies to open their protocols. We can reverse-engineer them and write unauthorised clients, like many of the libpurple plugins for Pidgin, but that's not ideal. What we need to do is look at the important core functionality and make sure that FOSS protocols can do that too. Email clients ask for POP version 3, not just any old POP. IMAP v4 added support for multiple mailboxes (i.e. folders). I propose that it's time for something like IMAP v5, adding server-side filters... and maybe IMAP v6, which grandfathers in some part of LDAP for a server-side contact list too. And maybe IMAP v7, which adds network calendar support.

Got a simple email client that doesn't do calendaring? No problem, stick with IMAP 4. So long as the server and client can negotiate a version both understand, it's all good.
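As a sketch of how that negotiation might look: IMAP already advertises its capabilities at connect time, so hypothetical extensions could slot straight in. The capability names below are my inventions, placeholders for the versions proposed above, not real extensions:

    # Sketch: version/feature negotiation, as IMAP already does it.
    # V5-FILTERS, V6-CONTACTS and V7-CALENDARS are hypothetical names
    # for the proposed extensions. "mail.example.org" is a placeholder.
    import imaplib

    imap = imaplib.IMAP4_SSL("mail.example.org")
    caps = set(imap.capabilities)     # e.g. {'IMAP4REV1', 'IDLE', ...}

    if "V5-FILTERS" in caps:
        pass   # fetch the server-side filter rules here
    if "V6-CONTACTS" in caps:
        pass   # fetch the shared address book here
    if "V7-CALENDARS" in caps:
        pass   # fetch the calendars here
    # A plain IMAP4 client simply ignores the lot and still works.
    imap.logout()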

Ditto XMPP: extend that so it supports group chats.

NNTP and RSS have relevance to Web fora and syndication.
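NNTP is also a good reminder of how simple these text-over-a-socket protocols are to implement. A sketch, with a placeholder server name:

    # Sketch: NNTP is plain text over a socket, simple enough to speak
    # by hand. "news.example.org" is a placeholder server name.
    import socket

    with socket.create_connection(("news.example.org", 119), timeout=10) as s:
        f = s.makefile("rwb")
        print(f.readline().decode().strip())  # greeting, e.g. "200 ready"
        f.write(b"LIST ACTIVE\r\n")           # ask for the newsgroup list
        f.flush()
        print(f.readline().decode().strip())  # "215 list follows"
        for line in f:
            text = line.decode(errors="replace").rstrip()
            if text == ".":                   # multi-line replies end with "."
                break
            print(text)                       # group name, watermarks, flags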

What's needed is getting together and talking: defining minimum acceptable functionality, and then describing a standard for it. Even if it's an unofficial standard, not ratified by any body, it can still work.

But by the same token, I think it's time to start discussing how we could pare the Web back to something rich and useful which eliminates Javascript and embedded apps inside web pages: some consensus around most of HTML5 and CSS, making it a fun challenge to see how much interactivity programmers can create without Javascript.

What's the minimum useful working environment we could build based on simple open standards that are easily implemented? Not just email + IRC, but email with basic text formatting – the original *bold* _underline_ /italic/ ~strikethrough~ that inspired Markdown, plus shared local and global address books, plus local and global calendars, plus server-side profile storage – so when you enter your credentials, you don't just get your folders, you also get your filters, your local and organisation-wide address books, your local and org-wide calendar, too. If you wish, via separate apps. I don't particularly want them all in one, myself.
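That old convention is almost trivially machine-readable, which is rather the point. A toy sketch, not a real parser – the HTML tag mapping is my own choice:

    import re

    # Toy sketch: the old-school *bold* /italic/ _underline_ conventions
    # are so simple that a handful of regexes render them as HTML.
    # Italics go first, before closing tags introduce "/" characters.
    def plain_to_html(text):
        text = re.sub(r"/(\S[^/]*)/", r"<i>\1</i>", text)
        text = re.sub(r"\*(\S[^*]*)\*", r"<b>\1</b>", text)
        text = re.sub(r"_(\S[^_]*)_", r"<u>\1</u>", text)
        text = re.sub(r"~(\S[^~]*)~", r"<s>\1</s>", text)
        return text

    print(plain_to_html("It is *vital* that /everyone/ can _read_ this ~easily~."))
    # It is <b>vital</b> that <i>everyone</i> can <u>read</u> this <s>easily</s>.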

Ditto shared drives: safe, encrypted drive mounts and shares over the internet. I can't use my Google Drive or my MS OneDrive from Linux. Why not? Why isn't there some FOSS alternative mechanism?

Is there any way to get out of the trap of proprietary apps running on top of open-ish standards (Web 2.0) and help rich local software get more competitive? I am not sure. But right now, we seem to be continuing up a blind alley, and everyone's wondering why the going is so slow...


The year of Linux on the desktop came, and the Linux industry didn't notice. It's ChromeOS. Something like 30 million Chromebooks were sold in 2020. No Linux distro ever came close to that many units.

But ChromeOS means webapps for everything.

I propose it's time for a concerted effort at a spec for a set of minimal, clean, local apps, and open protocols to connect them to existing servers. As a constraint, set a low ceiling: something that can run on a $5-class device with roughly the raw power of a Raspberry Pi Zero – a single 1GHz core and well under 1GB of RAM. Not enough for web apps, but more than enough for a rich, capable email client, for mounting drives, and for handling NNTP and internet news.

Something that can be handed out to the couple of billion people living in poverty, with slow and intermittent Internet access at best. This isn't just trying to compete with entrenched businesses: it should be philanthropic, too.
Hard Stare

The DOS and Windows drive-letter allocation process is more complex than you might think

Someone asked me if I could describe how to perform DOS memory allocation. It's not the first time, either. It's a nearly lost art. To try to illustrate that it's a non-trivial job, I decided to do something simpler: describe how DOS allocates drive letters.

I have a feeling I've done this before somewhere, but I couldn't find it, so I tried writing it up as an exercise.

Axioms:

  • DOS only understands FAT12, FAT16 and in later versions FAT32. HPFS, NTFS and all *nix filesystems will be skipped.

  • We are only considering MBR partitioning.

So:

  • Hard disks support 2 partition types: primary and logical. Logical drives must go inside an extended partition.

  • MBR supports a legal max of 4 primaries per drive.

  • Only 1 primary partition on the 1st drive can be marked "active" and the BIOS will boot that one _unless_ you have a *nix bootloader installed.

  • You can only have 1 extended partition per drive. It counts as a primary partition.

  • To be "legal" and to support early versions of NT and OS/2, only 1 DOS-readable primary partition per drive is allowed. All other partitions should go inside an extended partition.

  • MS-DOS, PC DOS and NT will only boot from a primary partition. (I think DR-DOS is more flexible, and I don't know about FreeDOS.)

Those are our "givens". Now, after all that, how does DOS (including Win9x) assign drive letters?

  1. It starts with drive letter C.

  2. It enumerates all available hard drives visible to the BIOS.

  3. The first *primary* partition on each drive is assigned a letter.

  4. Then it goes back to the start and starts going through all the physical hard disks a 2nd time.

  5. Now it enumerates all *logical* partitions on each drive and assigns them letters.

  6. So, all the logicals on the 1st drive get sequential letters.

  7. Then all the logicals on the next drive.

  8. And so on through all logicals on all hard disks.

  9. Then drivers in CONFIG.SYS are processed and if they create drives (e.g. DRIVER.SYS) those letters are assigned next.

  10. Then drivers in AUTOEXEC.BAT are processed and if they create drives (e.g. MSCDEX) those are assigned next.

So you see... it's quite complicated. :-)
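Expressed as code, the classic two-pass algorithm looks something like this – a sketch with toy data structures, assuming non-FAT partitions have already been filtered out, and with the CONFIG.SYS/AUTOEXEC.BAT steps left as comments:

    # Sketch of the classic DOS drive-letter algorithm described above.
    # Each disk is a dict listing its primary and logical FAT partitions.
    def assign_dos_letters(disks):
        letters = iter("CDEFGHIJKLMNOPQRSTUVWXYZ")   # A: & B: are floppies
        assignments = {}
        # Pass 1: the first primary partition on each disk
        for disk in disks:
            if disk["primaries"]:
                assignments[disk["primaries"][0]] = next(letters)
        # Pass 2: every logical partition, disk by disk
        for disk in disks:
            for part in disk["logicals"]:
                assignments[part] = next(letters)
        # CONFIG.SYS drivers (e.g. DRIVER.SYS), then AUTOEXEC.BAT drivers
        # (e.g. MSCDEX) would claim the next letters here.
        return assignments

    disks = [
        {"primaries": ["disk0-pri1"], "logicals": ["disk0-log1", "disk0-log2"]},
        {"primaries": ["disk1-pri1"], "logicals": ["disk1-log1"]},
    ]
    print(assign_dos_letters(disks))
    # {'disk0-pri1': 'C', 'disk1-pri1': 'D', 'disk0-log1': 'E',
    #  'disk0-log2': 'F', 'disk1-log1': 'G'}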

Assigning upper memory blocks is more complicated.

NT changes this and I am not 100% sure of the details. From observation:

  • NT 3.x did the same, but with the addition of HPFS and NTFS drives.

  • NT 4 does not recognise HPFS at all but the 3.51 driver can be retrofitted.

  • NT 3, 4 & 5 (Win2K) *require* that partitions are in sequential order.

Numbers may be missing but you can't have, say:
[part № 1] [part № 2] [part № 4] [part № 3]

They will blue-screen on boot if you have this. Linux doesn't care.
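The rule reduces to a one-liner: the partition numbers, read in their on-disk order, must never decrease. A sketch:

    # Sketch: the ordering rule NT 3.x-5.x enforces. Partition numbers
    # may have gaps, but in on-disk order they must never decrease.
    def nt_accepts(partition_numbers):   # numbers in on-disk order
        return all(a < b for a, b in zip(partition_numbers, partition_numbers[1:]))

    print(nt_accepts([1, 2, 4, 3]))  # False - this layout blue-screens NT 3-5
    print(nt_accepts([1, 3, 4]))     # True - a missing number is fine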

Riders:

  1. The NT bootloader must be on the first primary partition on the first drive.

  2. (A 3rd party boot-loader can override this and, for instance, multi-boot several different installations on different drives.)

  3. The rest of the OS can be anywhere, including a logical drive.

NT 6 (Vista) & later can handle out-of-order partitions, but this is because MS rewrote the drive-letter allocation algorithm. (At least, I think this is why, but I do not know for sure; it could be a coincidence.)

Conditions:

  • The NT 6+ bootloader must be on the same drive as the rest of the OS.

  • The bootloader must be on a primary partition.

  • Therefore, NT 6+ must be in a primary partition, a new restriction.

  • NT 6+ must be installed on an NTFS volume; therefore, it can no longer dual-boot with DOS on its own, and a 3rd-party bootloader is needed.

NT 6+ just does this:

  1. The drive where the NT bootloader is becomes C:

  2. Then it allocates all readable partitions on drive 1, then all those on drive 2, then all those on drive 3, etc.
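In code, the contrast with the DOS sketch earlier is stark – again a sketch, using the same toy data structures, with the boot partition identified in advance:

    # Sketch of the NT 6+ scheme: boot volume first, then one simple
    # left-to-right pass over every readable partition.
    def assign_nt6_letters(disks, boot_partition):
        letters = iter("CDEFGHIJKLMNOPQRSTUVWXYZ")
        assignments = {boot_partition: next(letters)}   # bootloader drive = C:
        for disk in disks:
            for part in disk["primaries"] + disk["logicals"]:
                if part not in assignments:
                    assignments[part] = next(letters)
        return assignments

    disks = [
        {"primaries": ["disk0-pri1"], "logicals": ["disk0-log1"]},
        {"primaries": ["disk1-pri1"], "logicals": ["disk1-log1"]},
    ]
    print(assign_nt6_letters(disks, "disk0-pri1"))
    # {'disk0-pri1': 'C', 'disk0-log1': 'D', 'disk1-pri1': 'E', 'disk1-log1': 'F'}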

So just listing the rules is quite complicated. Turning them into a step-by-step how-to guide is significantly longer and more complex. As an example, the much simpler process of cleaning up Windows 7/8.x/10 when preparing to dual-boot took me several thousand words, and I skipped some entire considerations to keep it that "short".

Errors & omissions excepted, as they say. Corrections and clarifications very welcome. To comment, you don't need an account — you can sign in with any OpenID, including Facebook, Twitter, UbuntuOne, etc.