
Fun times in the mid-1990s PC Pro labs

I ran the testing labs for PC Pro magazine from 1995 to 1996, and acted as the magazine's de facto technical editor. (I didn't have enough journalistic experience yet to get the title Technical Editor.)

The first PC we saw at PC Pro magazine with USB ports was an IBM desktop 486 or Pentium -- in late 1995, I think. Not a PS/2, but one of their more boring industry-standard models: an Aptiva, I believe.
We didn't know what they were, and IBM were none too sure either, although they told us what the weird little tricorn logo represented: Universal Serial Bus.

"It's some new Intel thing," they said. So I phoned Intel UK -- 1995, very little inter-company email yet -- and asked, and learned all about it.
But how could we test it, with Windows 95A or NT 3.51? We couldn't.
I think we still had the machine when Windows 95B came out... but the problem was that Windows 95B, AKA "OSR2", was an OEM release. No upgrades: you couldn't officially upgrade 95A to 95B, and I didn't want to wipe the machine and lose the drivers or the benchmark setup...

I found a way. It involved deleting WIN.COM from C:\WINDOWS -- the file that SETUP.EXE looked for to see if there was an existing copy of Windows.

Reinstalling over the top was permitted, though. (In case a copy got badly broken and needed repairing, I suppose.) So with WIN.COM gone, I installed 95B over the top; it picked up the registry and all the settings... and found the new ports.
But then we didn't have anything to attach to them to try them. :-) The iMac wouldn't come out for another 2.5 years yet.
Other fun things I did in that role:
• Discovered Tulip (RIP) selling a Pentium with an SiS chipset that they claimed supported EDO RAM (when only the Intel Triton chipset did). Under threat of a lawsuit, I showed them that while the board did accept EDO RAM -- it recognised it, printed a little message saying "EDO RAM detected", and worked -- it couldn't actually make use of it, and benchmarked at exactly the same speed as with cheaper FP-mode RAM.
I think that led to Tulip suing SiS instead of Dennis Publishing. :-)
• Evesham Micros (RIP) sneaking the first engineering sample Pentium MMX in the UK -- before the MMX name had even been settled -- into a group test of Pentium 166 PCs. It won handily, by about 15%, which should have been impossible if it had been a standard Pentium CPU. But it wasn't -- it was a Pentium MMX, with twice as much L1 cache onboard.
Intel was very, very unhappy with naughty Evesham.
• Netscape Communications (RIP) refused to let us put Communicator or Navigator on our cover CD. They didn't know that Europeans pay for local phone calls, so it cost money to make a big download (30 or 40 MB!). They wouldn't believe us, and in the end they flew 2 executives to Britain to explain to us that it was a free download and that they wanted to trace who downloaded it.
As acting technical editor, I had to explain to them. Repeatedly.

When they finally got it, it resulted in a panicked trans-Atlantic phone call to Silicon Valley, getting someone senior out of bed, as they finally realised why their download and adoption figures were so poor in Europe.

We got Netscape on the cover CD, the first magazine in Europe to do so. :-) Both Communicator and Navigator, IIRC.
• Fujitsu supplied the first PC OpenGL accelerator we'd ever seen. It cost considerably more than the PC. We had no way to test it -- OpenGL benchmarks for Windows hadn't been invented yet. (It wasn't very good in Quake, though.)
I originally censored the company names, but I checked, and the naughty or silly ones no longer exist, so what the hell...
Tulip were merely deceived and didn't verify the claim. Whoever picked SiS was inept anyway -- at the time, SiS made terrible chipsets which were slow as hell.

(Years later, they upped their game, and by C21 there really isn't much difference, unless you're a fanatical gamer or overclocker.)
Lemme think... other fun anecdotes...
PartitionMagic caused me some fun. When I joined (at Issue 8) we had a copy of v1 in the cupboard. Its native OS was OS/2 and nobody cared, I'm afraid. I read what it claimed and didn't believe it so I didn't try it.
Then v2 arrived. It ran on DOS. Repartitioning a hard disk when it was full of data? Preposterous! Impossible!
So I tried it. It worked. I wrote a rave review.
It prompted a reader letter.
"I think I've spotted your April Fool's piece. A DOS program that looks exactly like a Windows 95 app? Which can repartition a hard disk full of data? Written by someone whose name is an anagram of 'APRIL VENOM'? Do I win anything?"
He won a phone call from me, but he did teach me an anagram of my name I never knew.
It led me to run a tip in the mag.

At the time, a 1.2 GB hard disk was the most common size (and a Quantum Fireball the fastest model for the money). Format that as a single FAT16 drive and you got super-inefficient 32 kB clusters. (And in 1995 or early 1996, FAT16 was all you got.)
With PartitionMagic, you could take 200 MB off the end, make it into a 2nd partition, and still fit more onto the C: drive, because a sub-1 GB partition used far more efficient 16 kB clusters. If you didn't have PQMagic you could partition the disk that way before installing. The only key thing was that C: had to be less than 1 GB; 0.99 GB was fine.
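If you want to check the arithmetic: FAT16 can only address about 65,524 clusters, so the formatter has to keep doubling the cluster size until the whole partition fits. Here's my own quick sanity check in Python -- the cluster limit is from the FAT16 on-disk format, and the partition sizes are just example figures:

    # FAT16 cluster numbers are 16-bit, so a volume can hold at most
    # ~65,524 clusters; the formatter doubles the cluster size until
    # the whole partition fits within that limit.
    MAX_CLUSTERS = 65_524

    def fat16_cluster_size(partition_bytes, smallest=2_048):
        size = smallest
        while partition_bytes / size > MAX_CLUSTERS:
            size *= 2
        return size

    whole_drive = fat16_cluster_size(1_200_000_000)   # one big 1.2 GB C: drive
    small_c     = fat16_cluster_size(990_000_000)     # C: kept just under 1 GB
    print(whole_drive // 1024, "kB vs", small_c // 1024, "kB")   # 32 kB vs 16 kB

Halve the cluster size and every small file wastes roughly half as much slack space, which is where the extra couple of hundred megabytes comes from.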
I suggested putting the swap file on D: -- you saved space and reduced fragmentation.
One of our favourite suppliers, Panrix, questioned this. They reckoned that having the swap file in a partition right at the far end of the drive made it slower, due to worse access times and transfer speeds. They were adamant.
So I got them to bring in a new, virgin PC with Windows 95A, I benchmarked it with a single big, inefficient C: partition, then I repartitioned it, put the swapfile on the new D: drive, and benchmarked it again. It was the same to 2 decimal places, and the C drive had about 250MB more free space.
Panrix apologised and I gained another geek cred point. :-)

A brief history of Apple's transition from Classic MacOS to the NeXTstep-based OS X

[Repurposed from a reply in a Hackernews thread]

Apple looked at buying in an OS after Copland failed. But all the stuff about Carbon, Blue Box, Yellow Box, etc. -- all those were NeXT ideas after the merger. None of it was pre-planned.

So, they bought NeXTstep, a very weird UNIX with a proprietary, PostScript-based GUI and a rich programming environment with tons of foundation classes, all written in Objective-C.

A totally different API, utterly unlike and unrelated to Classic MacOS.

Then they had to decide how to bring these things together.

NeXT already offered its OPENSTEP GUI on top of other Unixes. OPENSTEP ran on Sun Solaris and IBM AIX, and I think maybe others I've forgotten. None of them was a commercial success.

NeXT had a plan to create a compatibility environment for running NeXT apps on other OSes. The idea was to port the base ObjC classes to the native OS, and use native controls, windows, widgets etc. but to be able to develop your apps in ObjC on NeXTstep using Interface Builder.

In the end, only one such OS looked commercially viable: Windows NT. So the plan was to offer a NeXT environment on top of NT.

This is what was temporarily Yellow Box and later became Cocoa.

Blue Box was a VM running a whole copy of Classic MacOS under NeXTstep, or rather, Rhapsody. In Mac OS X 10.0, Blue Box was renamed the Classic environment, and it gained the ability to mix its windows in with native OS X windows.

But there still needed to be a way to port apps from Classic MacOS to Mac OS X.

So what Apple did was go through the Classic MacOS API and cut it down, removing all the calls and functions that would not be safe in a pre-emptively multitasking, memory-managed environment.

The result was a safe subset of the Classic MacOS API called Carbon, which could be implemented both on Classic MacOS and on the new NeXTstep-based OS.

Now there was a transition plan:

• your old native apps will still work in a VM

• apps written to Carbon can be recompiled for OS X

• for the full experience, rewrite or write new apps using the NeXT native API, now renamed Cocoa.

• incidentally, there was also a rich API for Java apps

That was the plan.

Here's how they executed it.

1. Copland was killed. A team looked at whether anything could be salvaged.

2. They got to work porting NeXTstep to PowerPC

3. 2 main elements from Copland were extracted:

• The Appearance Manager, a theming engine allowing skins for Classic MacOS: https://en.wikipedia.org/wiki/Appearance_Manager

• A new improved Finder

The new PowerPC-native Finder had some very nice features, many never replicated in OS X... like dockable "drawers": drag a folder to a screen edge and it vanished, leaving just a tab which opened a pop-out drawer. And multithreading: start a copy or move, then carry on doing other things.

The Appearance Manager was grafted onto NeXTstep, leading to Rhapsody, which became Mac OS X Server: basically NeXTstep on PowerPC with a Classic MacOS skin, so a single menu bar at the top, desktop icons, Apple fonts and things -- but still using the NeXT "Miller columns" Workspace file manager and so on.

Apple next released MacOS 8, with the new Appearance control panel and a single skin, called Platinum: a marginally-updated classic look and feel. There were never any other official themes, but some leaked, and a 3rd-party tool called Kaleidoscope offered many more.

http://basalgangster.macgui.com/RetroMacComputing/The_Long_View/Entries/2011/2/26_Copland.html

So some improvements, enough to make it a compelling upgrade...

And also to kill off the MacOS licensing programme, which only covered MacOS 7. (Because originally 7 had been planned to be replaced with Copland, the real MacOS 8.)

MacOS 8 was also the original OS of the first iMac.

Then came MacOS 8.1, which also got HFS+, a new, more efficient filesystem for larger multi-gigabyte hard disks. It couldn't boot off it, though (IIRC).

MacOS 8.1 was the last release for 680x0 hardware and needed a 68040 Mac.

Then came the first PowerPC-only version, MacOS 8.5, which brought in booting from HFS+. Then MacOS 8.6, a bugfix release, mainly.

Then MacOS 9, with better-integrated WWW access and some other quite nice features... but all really stalling for time while they worked on what would become Mac OS X.

The paid releases were 8.0, 8.5 and 9. 8.1, 8.6, 9.1 and 9.2 were all free updates.

In a way they were just trickling out new features, while working on adapting NeXTstep:

1. Rhapsody (Developer Release 1997, DR2 1998)

2. Mac OS X Server (1.0 1999, 1.2 2000)

3. Mac OS X Public Beta (2000)

But all of these releases supported Carbon and could run Carbon apps, and PowerPC-native Carbon apps would run natively under OS X without the need for the Classic environment.

Finally in 2001, Mac OS X 10.0 "Cheetah".

Cargo cult software design [blog post, by me]

[Repurposed mailing list reply]

I mentioned that I still don't use GNOME even though there are extensions to fix a lot of the things I don't like. (My latest attempt ended in failure just yesterday.) Someone asked what functionality was still missing. It's a reasonable question, so I tried to answer.

It is not (only) a case of missing functionality, it is a case of badly-implemented or non-working functionality.

I can go into a lot of depth on this, if you like, but it is not very relevant to this list and it is probably not a good place.

A better place, if you have an OpenID of some form, might be over on my blog.

This post lays out some of my objections:

"Why I don't use GNOME Shell"

& is followed up here:

"On GNOME 3 and design simplicity"

Here's what I found using the extensions was like:

A quick re-assessment of Ubuntu GNOME now it's got its 2nd release

For me, Ubuntu Unity worked very well as a Mac OS X-like desktop, with actual improvements over Mac OS X (which I use daily.) I used it from the version when it was first released -- 11.04 I think? -- and still do. In fact I just installed it on 19.04 this weekend after my latest efforts to tame GNOME 3 failed.

I don't particularly like Win95-style desktops -- I'm old, I predate them -- but I'm perfectly comfortable using them. I have some tests I apply to see if they are good enough imitations of the real thing to satisfy me. Notable elements of these tests: does it handle a vertical taskbar? Is it broadly keystroke-compatible with Win9x?

Windows-like desktops which pass to some degree, in order of success: Xfce; LXDE; LXQt
Windows-like desktops which fail: MATE; Cinnamon; KDE 5

If I was pressed to summarise, I guess I'd say that some key factors are:
• Do the elements integrate together?
• Does it make efficient use of screen space, or does it gratuitously waste it?
(Failed badly by GNOME Shell and Elementary)
• Does it offer anything unique or is it something readily achieved by reconfiguring an existing desktop?
(Failed badly by Budgie & arguably Elementary)
• Do standard keystrokes work by default?
(Failed badly by KDE)
• Can it be customised in fairly standard, discoverable ways?
• Is the result robust?
E.g. will it survive an OS upgrade (e.g. Unity), or degrade gracefully so you can fix it (Unity with Nemo desktop/file manager), or will it break badly enough to prevent login (GNOME 3 + multiple extensions)?

If, say, you find that Arc Menu gives GNOME 3 a menu and that's all you could want, or if you are happy with something as minimal as Fluxbox, then my objections to many existing desktops are probably things that have never even occurred to you, and they will probably seem trivial, frivolous, and totally unimportant. It may be very hard to discuss them, unless you're willing to accept, as an opening position, that stuff you don't even notice can be critically, crucially important to other people.

Elementary is quite a good example, because it seems to me that the team trying to copy the look and feel of Mac OS X in Elementary OS do not actually understand how Mac OS X works.

Elementary presents a cosmetic imitation of Mac OS X, but it is skin-deep: the developers don't understand how the elements of the desktop function. So they have implemented things that look quite Mac-like, but don't work. Not "don't work in a Mac-like way". I mean, don't work at all.

It is what I call "cargo cult" software: you see something, think it looks good, so you make something that looks like it -- and then you take it very seriously, go through the motions of using it, and say it's great.



Actually, your aeroplane is made of grass and rope. It doesn't roll, let alone fly. Your radio is a wooden fruit box. Your headphones are woven from reeds. They don't do anything. They're a hat.

You're wearing a hat but you think you're a radio operator.

As an example: Mac OS X is based on a design that predates Windows 3. Programs do not have a menu bar in their windows. Menus are elsewhere on the screen. On the Mac, they're always in a bar at the top. On NeXTstep, which is what Mac OS X is based on, they're vertically stacked at the top left of the screen.

If you don't know that, and you hear that these OSes were very simple to use, and you look at screenshots, then you might think "look at those apps! They have no menu bars! No menus at all! Wow, what a simple, clean  design! Right, I will write apps with no menus!"

That is a laudable goal in its way -- but it can mean that the result is a rather broken, braindead app, with no advanced options, no customisation, no real power. Or you have to stick a hamburger menu in the title bar with a dozen unrelated options that you couldn't fit anywhere else.

What's worse is that you didn't realise that that's the purpose of that panel across the top of the desktop in all the screenshots. You don't know that that's where the menus go. All you see is that it has a clock in it.

You don't know your history, so you think that it's there for the clock.  You don't know that 5 or 6 years after the OS was launched with that bar for the menus, someone wrote an add-on that put a clock on the end, and the vendor went "that's a good idea" and built it in.

But you don't care about history, you never knew and you don't want to... So you put in a big panel that doesn't do anything, with a clock in it, and waste a ton of valuable space...

Cargo cult desktops.

Big dock thing, because the Mac has a Dock -- but they don't know that the Dock has several different roles (app launcher, app switcher, holder of minimised windows, shortcut to useful folders, and home for status monitors). They didn't know that, so their docks can't do all this.

Menu bar with no menus, because the Mac has a menu bar and it looks nice and people like Macs, so we'll copy it -- but we didn't know about the menus, and anyway we listened to Windows users who tried Macs and didn't like the menu bar.
Copying without understanding is a waste. A waste of programmer time and effort, a waste of user time and effort, a waste of screen space, and a waste of code.

You must understand first and only then copy.

If you do not have time or desire to understand, then do not try to copy. Do something else while you learn.

"Social networking: it's new but it isn't News" (from an old Inquirer article of mine)

(30th June 2007 on The Inquirer)

THERE'S ANOTHER NEW social networking site around, from the guy behind Digg. It's called Pownce, it's still invitation-only and if they're offering anything genuinely new and different they aren't shouting about it. In particular, nobody's talking about the feature I want to see.

Get connected

There are myriads of social networking-type sites these days; Wikipedia lists more than ninety. Some of the big ones are MySpace, Bebo, Facebook and Orkut. Then there are "microblogging" sites like Twitter and Jaiku. Then of course there are all the tired old pure-play blogging sites like LiveJournal and Blogger. I have accounts on a handful of them - in some cases, just so I can comment, because OpenID isn't as well-supported as it deserves to be.

They all do much the same sort of thing. You get an account for free, you put up a profile, maybe upload some photos, tunes, video clips or a blog, then you can look up your mates and "add" them as "friends". Mainly, this allows you to get a summary list of what your mates are up to; secondarily, you can restrict who can see what that you're putting up.

Doesn't sound like much, but these are some of the biggest and most popular websites on the Internet. That means money: News Corporation paid $580 million for MySpace, and its founders are asking for $12.5 million a year each to stay on for another couple of years.

The purely social sites, like Myspace, sometimes serve as training wheels for Internet newbies. You don't need to understand email and all that sort of thing - you can talk to your mates entirely within the friendly confines of one big website. After all, there's no phonebook for the Internet - it's hard for friends to find one another, especially if they're not all that Net-literate.

A lot of the sites try to keep you in their confines. MySpace offers its own, closed instant-messaging service, for example - so long as you use Windows. Another way is that when someone sends you a message or comment on MySpace or Facebook, the site informs you by email - but the email doesn't tell you what the actual message was. You have to go to the site and sign in to read it.

Buzzword alert

Some sites aren't so closed - for example, the email notifications from Livejournal tell you what was said and let you respond from within your email client, and its profiles offer basic integration of external IM services. On the other hand, Facebook offers trendy Web 2.0 features, like "applications" that can run within your profile and can be rearranged by simple drag&drop, whereas LJ or MySpace users who want unique customisations must fiddle with CSS and HTML or use a third-party application.

As well as aggregating your mates' blogs, many social networking sites let you syndicate "web feeds" from other sites. A "feed" - there are several standards to choose from, including Atom and various versions of RSS - supplies a constantly-updated stream of new stories or posts from one site into another. For instance, as I write, fifteen people on LiveJournal read The Inquirer through its LJ feed.

(If you fancy this aggregation idea but don't want to join a networking site, you can also do this using a "feed reader" on your own computer. There are a growing number of these: as well as standalone applications such as FeedReader or NetNewsWire, many modern browsers and email clients can handle RSS feeds - for example, IE7, Firefox, Outlook and Safari.)
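
(Under the bonnet, a feed is just an XML file listing stories, and reading one takes only a few lines of modern Python from the standard library. This is just a sketch of mine, with a placeholder URL -- point it at any RSS 2.0 feed:)

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder address -- substitute any RSS 2.0 feed you like.
    FEED_URL = "https://example.com/feed.rss"

    with urllib.request.urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # RSS 2.0 stores each story as an <item> element under <channel>.
    for item in tree.getroot().iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        print(title, "--", link)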

But even with feeds, the social networking sites are still a walled garden. If you read a story or a post syndicated from another site, you'll probably get a space to enter comments - but you won't see the comments from users on the original site and they won't see yours. The same goes for users anywhere else reading a syndicated feed - only the stories themselves get passed through, not the comments.

A lot of the point of sites like Digg and Del.icio.us is the recently popular concept of "wisdom of crowds". If lots of people "tag" something as being interesting and the site presents a list of the most-tagged pages, then the reader is presented with an instantaneous "what's hot" list - say, what the majority of the users of the site are currently viewing.

There are sites doing lots of clever stuff with feeds, such as Yahoo Pipes, which lets you visually put together "programs" to combine the information from multiple feeds - what the trendy Web 2.0 types call a "mashup". What you don't get through a feed, though, is what people are saying.

Similarly, the social networking sites are, in a way, parasitic on email: you get more messages than before, but for the most part they have almost no informational content, and in order to communicate with other users, they encourage you to use the sites' own internal mechanisms rather than email or IM. Outside a site like Facebook, you can't see anything much - you must join to participate. Indeed, inside the site, the mechanisms are often rather primitive - for instance, Facebook and Twitter have no useful threading. All you get is a flat list of comments; people resort to heading messages "@alice" or "@bob" to indicate to whom they're talking. Meanwhile, the sites' notifications to the outside world are a read-only 1-bit channel, just signals that something's happened. You might as well just have an icon flashing on your screen.

In other words, it's all very basic. Feeds allow for clever stuff, but the actual mechanics of letting people communicate tend to be rather primitive, and often it's the older sites that do a better job. The social sites are in some ways just a mass of private web fora, with all their limitations of poor or nonexistent threading and inconsistent user interfaces. Which seems a bit back-asswards to me. Threaded discussions are 1980s technology, after all.

Going back in time

Websites have limits. Email may be old-fashioned, but it's still a useful tool, especially with good client software. Google's Gmail does some snazzy AJAX magic to make webmail into a viable alternative to a proper email client - its searching and threading are both excellent. An increasing number of friends and clients of mine are giving up on standalone email clients and just switching to Gmail. The snag with a website, though, is that if you're not connected - or the site is down - you're a bit stuck. When either end is offline, the whole shebang is useless.

Whereas if you download your email into a client on your own computer, you can use it even when not connected - if it's in a portable device, underground or on a plane or in the middle of Antarctica with no wireless Internet coverage. You can read existing emails, sort and organize, compose replies, whatever - and when you get back online, the device automatically does the sending and receiving for you. What's more, when you store and handle your own email, you have a major extra freedom - you can change your service provider. If you use Gmail or Hotmail, you're tied to the generosity of those noted non-profit philanthropic organizations Google and Microsoft.

The biggest reason email works so well is that it's open: it's all based on free, open standards. Anyone with Internet email can send messages to anyone with an Internet email address. Even someone on one proprietary system, say Outlook and Exchange, can send mail to a user on another, say Lotus Notes. Both systems talk the common protocols: primarily, SMTP, the Simple Mail Transfer Protocol. Outside the proprietary world, most email clients use POP3 or IMAP to receive messages from servers - and again, SMTP to send.
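
(To illustrate just how open those protocols are: fetching and sending mail takes nothing more than a modern scripting language's standard library. A sketch of my own in Python -- the hosts, addresses and credentials are all placeholders:)

    import poplib
    import smtplib
    from email.message import EmailMessage

    # Receive: POP3 over SSL. Host and credentials are placeholders.
    inbox = poplib.POP3_SSL("pop.example.com")
    inbox.user("alice")
    inbox.pass_("secret")
    msg_count, mailbox_bytes = inbox.stat()
    print(msg_count, "messages waiting,", mailbox_bytes, "bytes")
    inbox.quit()

    # Send: SMTP. Again, placeholder host and addresses.
    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Open standards at work"
    msg.set_content("Any client can talk to any server.")

    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)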

Now here's a thought. Wouldn't it be handy if there was an open standard for moving messages between online fora? (It's the correct plural of "forum", not "forums".) So that if you were reading a friend's blog through a feed into your preferred social networking site, you could read all the comments, too, and participate in the discussion? If it worked both ways, on a peer-to-peer basis, the people discussing a story on Facebook could also discuss it with the users on Livejournal. If it was syndicated in from Slashdot, they could talk to all the Slashdot users, too.

Now there is a killer feature for a new, up and coming social networking site. Syndication of group discussions, not just stories. It would be a good basis for competitive features, too - like good threading, management of conversations and so on.

The sting in the tail

The kicker is, there already is such a protocol. It's called NNTP: the Network News Transfer Protocol.

The worldwide system for handling threaded public discussions has been around for 26 years now. It's called Usenet and since a decade before the Web was invented it's been carrying some 20,000 active discussion groups, called "newsgroups", all around the world. It's a bit passé these days - spam originated on Usenet long before it came to email, and although Usenet still sees a massive amount of traffic, 99% of it is encoded binaries - many people now only use it for file sharing.

You may never have heard of it, but there's a good chance that your email system supports Usenet. Microsofties can read newsgroups in Outlook Express, Windows Mail and Entourage, or in Outlook via various addons; open sourcerers can use Mozilla's Thunderbird on Windows, Mac OS X or Linux. Google offers GoogleGroups, which has the largest and oldest Usenet archive in the world. There are also lots of dedicated newsreaders - on Windows, Forté's Agent is one of the most popular.

Usenet is a decentralised network: users download messages from news servers, but the servers pass them around amongst themselves - there's no top-down hierarchy. Companies can run private newsgroups if they wish and block these from being distributed. All the problems of working out unique message identifiers and so on were sorted out a quarter of a century ago. Messages can be sent to multiple newsgroups at once, and like discussion forum posts, they always have a subject line. Traditionally, they are in plain text, but you can use HTML as well - though the old-timers hate it.
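
(To show how little work NNTP asks of a client, here's a minimal sketch using Python's nntplib module, which lived in the standard library for decades, though the very newest releases have dropped it. The server name is a placeholder, and the threading is deliberately crude -- depth is just guessed from the References header.)

    import nntplib

    # Placeholder host -- any NNTP server you have access to will do.
    server = nntplib.NNTP("news.example.com")

    # Select a newsgroup; the server reports which articles it holds.
    resp, count, first, last, name = server.group("comp.misc")

    # The overview data carries subject, author, message-id and the
    # References header -- everything needed to rebuild the threads.
    resp, overviews = server.over((max(first, last - 20), last))
    for article_number, over in overviews:
        subject = over.get("subject", "")
        references = over.get("references", "")
        depth = len(references.split())   # crude: one ancestor per reference
        print("  " * depth + subject)

    server.quit()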

There are things Usenet doesn't do well. There's no way to look up posters' profiles, for example - but that's exactly the sort of thing that social networking sites are good at. Every message shows its sender's email address - but then, the social networking sites all give you your own personal ID anyway.

Big jobs, little jobs

It would be a massive task to convert the software driving all the different online discussion sites to speaking NNTP, though. It isn't even remotely what they were intended for.

But there's another way. A similar problem already exists if you use a webmail service like Hotmail but want to download your messages into your own email client. Hotmail used to offer POP3 downloads as a free service, but it became a paid-for extra years ago. Yahoo and Gmail offer it for free, but lots of webmail providers don't.

Happily, though, there's an answer.

If you use Thunderbird, there's an extension called Webmail which can download from Hotmail as well as Yahoo, Gmail and other sites. Like all Mozilla extensions, it runs on any platform that Thunderbird supports.

But better still, there's a standalone program. It's called MrPostman and because it's written in Java it runs on almost anything - I've used it on Windows, Mac OS X and Linux. It's modular, using small scripts to support about a dozen webmail providers, including Microsoft Exchange's Outlook Web Access; it can even read RSS feeds. Its developers cautiously say that "Adding a new webmail provider might be as simple as writing a script of 50 lines."

And it's GPL open source, so it won't cost you anything. It's a fairly small program, too - it will just about fit on a floppy disk.

MrPostman shows that it's possible to convert a web-based email service into standard POP3 - and for this to be done by a third party with no access to the source code of the server. Surely it can be done for a forum, too? And if it's done right, for lots of fora? It doesn't need the help or cooperation of the source sites, though that would surely help. More to the point, if it was done online, the servers offering the NNTP feeds can be separate from those hosting the sites.

What's more, there's a precedent. For users of the British conferencing service CIX, there's a little Perl program called Clink, which takes CoSy conferences and topics and presents them as an NNTP feed, so that you can read - and post to - CIX through your newsreader.

It sounds to me like the sort of task that would be ideal for the Perl and Python wizards who design Web 2.0 sites, and it would be a killer feature for any site that acts as a feed aggregator.

Rather than reading contentless emails and going off to multiple different sites to read the comments and post replies, navigating dozens of different user interfaces and coping with crappy non-threaded web fora, you could do it all in one place - as the idea spread, whichever site you preferred.

And, of course, the same applies to aggregator software as well. When you download this stuff to your own machine, you can read it at your leisure, without paying extortionate bills for mobile connectivity. Download the bulk of the new messages on a fast free connection, then just post replies on the move when you're paying for every kilobyte over a slow mobile link.

What's more, in my experience of many different email systems, it's the offline ones that are the fastest and offer the best threading and message management. It could bring a whole new life to discussions on the Web.

All this, and all I ask for the idea is a commission of 1 penny per message to anyone who implements it. It's a bargain.

Apple's long processor journey

There have been multiple generations of Macs. Apple has not really divided them up.

1. Original 680x0 Macs with 24-bit ROMs (ending with the SE/30, Mac II, IIx & IIcx)
2. 32-bit-clean-ROM 680x0 Macs (starting with the Mac IIci)
3. NuBus-based PowerMacs (6100, 7100, 8100)
4. OldWorld-ROM PCI-based PowerMacs (all the Beige PowerMacs including the Beige G3 & black PowerBooks) ← note, many but not all of these can run Mac OS X
5. NewWorld-ROM PCI-based PowerMacs (iMac, iBook & later)
6. OS-X-only PowerMacs (starting with the Mirrored Drive Doors 1GHz G4 with Firewire 800)
7. 32-bit Intel Macs (iMac, Mac mini and MacBook Core Solo and Core Duo models)
8. 64-bit Intel Macs with 32-bit EFI (Core 2 Duo models from 2006)
9. 64-bit Intel Macs with 64-bit EFI (anything from 2008 onwards)

Classic MacOS was written for 68000 processors. Later it got some extensions for 68020 and 68030.

When the PowerMacs came out, Apple wrote a tiny, very fast emulator that translated 680x0 instructions on the fly into PowerPC instructions. However, unlike modern retrocomputer emulators, this one allowed apps to call PowerPC code, and the OS was adapted to run on the emulator. It was not like running an Amiga emulator on a PC or something, where the OS in the emulator doesn't "know" it's in an emulator. MacOS did know, and was tailored for it.

They ran Classic MacOS on this emulator, and profiled it.

They identified which parts were the most performance-critical and were running slowly through the emulator, and where they could, they rewrote the slowest of them in PowerPC code.

Bear in mind, this was something of an emergency, transitional project. Apple did not intend to rewrite the whole OS in PowerPC code. Why? Because:
1. It did not have the manpower or money
2. Classic MacOS was already rather old-fashioned and Apple intended to replace it
3. If it did, 68000 apps (i.e. all of them) wouldn't work any more

So it only did the most performance-critical sections. Most of MacOS remained 68K code, and stayed that way for the rest of MacOS's life.

However, all the projects to replace MacOS failed. Copland failed, Pink failed, Taligent failed, IBM Workplace OS failed.

So Apple was stuck with Classic MacOS, and around the MacOS 7.5 timeframe it got serious about Classic.
A lot of MacOS 7.6 was rewritten from assembly code and Pascal into C. This made it easier to rewrite chunks for PowerPC. However, it also made 7.6 larger and slower. This upset a lot of users, but it brought new facilities: e.g. the previously-optional "MultiFinder" was now always on, & there was a new network stack, Open Transport.

This is also the time that Apple licensed MacOS to other vendors.

Soon afterwards, Apple realised it could not build a new replacement OS itself, and would have to buy one. It considered former Apple exec Jean Louis Gassée's Be for BeOS, and Apple co-founder Steve Jobs' NeXT Computer for the Unix-based NeXTstep.

It bought NeXTstep and got Jobs back into the bargain. He regained control, fired Gil Amelio and killed off the licensing program. Copland, the experimental multitasking MacOS replacement, was already dead; he got his coders to salvage as much as they could from it and bolt it onto Classic, calling the result MacOS 8.

MacOS 8 got a multithreaded Finder, desktop "drawers", new gaming and web APIs, and more. The jump in version number was also what ended the licensing programme, which only covered MacOS 7.

MacOS 8.1 got a new filesystem, HFS+. This still works today and was the default up to High Sierra.

8.1 is the last release for 680x0 Macs and needs a 68040, although a few 68030 Macs work via Born Again.

The "monolithic" / "nanokernel" distinction applies to CPU protection rings.

These days this principally applies to OSes written entirely in compiled code, usually C code, where some core OS code runs in Ring 0, with no restrictions on its behaviour, and some in Ring 3 where it cannot directly access the hardware. IBM OS/2 2 and later, almost uniquely, also used Ring 2. I've blogged about this before.

OS/2 2 using Ring 2 is why VirtualBox exists.

Decades ago, more complex OSes like Multics had many more rings and used all of them.

If a Unix-like OS is rewritten and split up so that a minimal part of the OS runs in Ring 0 and manages the rest of the OS as little separate parts that run in Ring 3, that's called a "microkernel". Ignore the marketing: Mac OS X isn't one and neither is Windows NT. There are only 2 mass-market microkernel OSes and they are both obscure: QNX, now owned by Blackberry, and Minix 3, which is embedded in the control/management circuitry of every modern Intel x86-64 CPU.

Classic MacOS is not a C-based OS, nor is it an Intel x86 OS. It does not have a distinction between kernel space and user space. It does not use CPU rings, at all. Everything is in Ring 0, all the time. Kernel, drivers, apps, INITs, CDEVs, screensavers, the lot.

MacOS 8.5 went PowerPC-only, and in the process of dropping support for 680x0 Macs, Apple made some provision for future improved PowerMacs.

The 68K emulator got a big upgrade and was renamed the "nanokernel". It is not an OS in its own right: it boots first and then runs another OS on top of it.

It is not a HAL, either: a HAL is native code, deep within an OS kernel, that allows the same OS to run with little modification on widely-different underlying hardware, with different memory maps, I/O spaces, APICs etc., without adapting the kernel to all the different platforms. MacOS 8.5+ only runs on Macs, where the hardware could be adapted to the OS and the OS to the hardware. No need for a HAL.

It is not a hypervisor. A hypervisor partitions a machine up into multiple virtual machines -- it allows 1 PC to emulate multiple separate PCs and each virtual emulated PC runs a separate OS. Classic MacOS can't do that and only runs 1 OS at a time.

The MacOS nanokernel is a very small bit of code that boots first and then executes most of the rest of the OS, and it manages calls from apps and from the 68K OS down to code written for the underlying PowerPC CPU.

It's a shame that this bit of code is secret and little-known, but some details have leaked out over the years.


FOSDEM 2020 write-up (warning: very long, mostly unedited -- sorry in advance)

Or "What I Did On My Holidays by Liam Proven aged 52¼."

The first mainline talk I got to on Saturday was the one before mine: “The Hidden Early History of Unix” by Warner Losh of the FreeBSD project. [https://fosdem.org/2020/schedule/event/early_unix/]

This was a good deep dive into the very early versions, including the pre-C-language ones, and how little remains of them. Accidental finds of some parts, plus a lot of OCR work and manual re-keying, have got one PDP-7 version running in an emulator; for most of the others, nothing is left but at best the kernel and init and maybe a shell. In other words, not enough to run or to study.

What’s quite notable is that it was tied very closely to the machine -- they can even ID the serial numbers of the units that it ran on, and only those individual machines’ precise hardware configurations were supported.

There was an extensive list of the early ports, who did them, what they ran on and some of the differences, and what if any code made it back into the mainline -- but it’s gleaned from letters, handwritten notebooks, and a few academic papers. Printed publications have survived; machine-readable tapes or disks and actual code: almost nothing.

It’s within my lifetime but it’s all lost. This is quite sobering. Digital records are ephemeral and die quicker than their authors; paper can last millennia.

Then I did my talk, [https://fosdem.org/2020/schedule/event/generation_gaps/]. There’s a brief interview with me here: [https://fosdem.org/2020/interviews/liam-proven/]

It seems to have been well-received. A LinkedIn message said:

«Hello Liam,
Just wanted to let you know that your talk was one of the best so far on
FOSDEM. Thank you for all the context on OS/HW history, as well as for
putting Intel Optane on my map. I did not understand the potential of
that technology, now I think I do.»

That was very gratifying to hear. There has also been some very positive feedback on Twitter, e.g.
https://twitter.com/wstephenson/status/1223607640607141888
https://twitter.com/jhaand/status/1223918106839519232
https://twitter.com/pchapuis/status/1223622107361431552
https://twitter.com/untitledfactory/status/1223609733325651968

Then I went to “Designing Hardware, Journey from Novice to Not Bad” [https://fosdem.org/2020/schedule/event/openelectronicslab/]

A small team built an open source EKG machine for use in the developing world where unreliable power supply destroys expensive Western machinery.

They taught themselves SMT soldering by hand, and they did demos and test runs that included moving the mouse pointer by mind control! It's not just an EKG machine. Kit they used included an OpenHardware ExG shield and OpenSCAD. They noted that the Arduino build environment is great even for total beginners, and that you can learn skills like SMT soldering from YouTube. (Top tip: solder paste really helps.) Don't necessarily use a soldering iron: use a heat gun, or do it by hand but under a dissection microscope.

Don’t be afraid to make mistakes. Chip’s legs can be missoldered: just cut them, lift a leg, and attach a wire. Wrong way round? Desolder a component, turn it, reattach it. It doesn’t need to look good, it just needs to work.

But you do need to understand legal risk, as described in ISO 14971. Some risk is acceptable, some can be mitigated… add audible alarms for things going wrong. Remove functions you don't need: for example, remove internet access if you don't need it -- it makes your device much harder to hack. Similarly, if power reliability is a problem, run the device off a battery, not the mains. That also reduces the chance of shocks to the patient; and isolate the sensors from the control logic anyway -- isolators are a cheap off-the-shelf part.

Then I humoured myself with a fun ZX Spectrum item: “An arcade game port to the ZX Spectrum: a reverse engineering exercise” [https://fosdem.org/2020/schedule/event/retro_arcade_game_port_zx/]

We were warned that this would be tough on those who had never done assembly language. I never did.

Doing reverse engineering for fun is an educational challenge, and there are now competitions for it. You must pick your origin and target carefully. It is not like developing a new game. Also, you can throw away all you know about programming modern hardware. Vintage hardware limits can be very weird, such as not being able to clear the screen because it takes too long, or because it simply isn't a supported function.

You need to know your game amazingly closely. You need to play it obsessively -- it took months of effort to map everything. You need to know how it feels, which means you must watch others play, and also play with others - multiplayer teaches a lot.

To find the essence of a game is surprisingly hard. E.g. Pacman… Sure you recognise it, but how *well* do you know it? Do you know the character names, or the ghosts’ different search patterns? Or Tetris. Do you know it completely? Is next-part selection random? Are you sure?

Rui picked a game similar to “Bubble Bobble”. Everyone knows that, but in the coloured bubbles, are there patterns? If so, do they change? Are they different for 1 or 2 players?

Or “R Type”. Do you know how to beat all the bosses? His point being that you often can’t exactly reproduce a game, especially on lower-spec hardware, so you have to reproduce how it feels to play, even if it’s not identical.

Rui picked “Magical Drop 2”, to re-implement on a ZX Spectrum. This is a Neo Geo MVS game -- the professional, arcade NeoGeo. Its specifications are much higher than a Spectrum -- such as using a 12MHz 68000 CPU!
Even its sound chip is a full Z80 that is faster than the Spectrum’s.

To see what he could do, he methodically worked out the bandwidth required.

So, a full Spectrum screen (256*192 pixels, plus 32*24 colour attributes) needs 6912 bytes per frame. At 50 Hz. The Spectrum's CPU has just 70,000 ticks per frame. (That's T-states, not instructions: the fastest Z80 instruction is 4 T-states, and push/pop take around 10.)
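
(A back-of-the-envelope version of that budget, in Python rather than Z80 -- my arithmetic, not Rui's slides:)

    # The ZX Spectrum's per-frame budget, roughly as presented in the talk.
    bitmap_bytes = 256 * 192 // 8        # 6144 bytes of pixels
    attribute_bytes = 32 * 24            # 768 bytes of colour attributes
    frame_bytes = bitmap_bytes + attribute_bytes    # 6912 bytes per full redraw

    t_states_per_frame = 3_500_000 // 50     # ~70,000 CPU ticks per 50 Hz frame
    fastest_instruction = 4                  # T-states; most instructions cost more

    # Even if every instruction were the 4-T-state minimum, that's only
    # ~17,500 instructions per frame -- nowhere near enough to push 6912
    # bytes to the screen *and* run the game logic.
    print(frame_bytes, t_states_per_frame // fastest_instruction)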

If you draw a frame around the play area, that cuts the size of screen updates a lot, and it looks better. If you only update small bits, it's quicker too. Rui came up with a clever hack: pre-draw the bubbles, then just change the colours. Black-on-black is invisible; set the colour and the bubble appears. But there are only 8 colours, in 2 levels of brightness, and you need to reserve some for animations, leaving at most 5 or 6.

His demo of the effects was amazing: a Spectrum normally can’t draw stuff that fast.

Reverse engineering is not the same as a port. If you do a port, that implies you have source code access. RE means you have none. There are other things to note. The in-game player instructions are very basic. Why? Because it’s a coin-op! They want you to spend money learning to play!

Using colours not pixels is 8x faster, which leaves time for the game logic. He uses an array for ball colours and a mark/sweep algorithm to look for 3+ matching balls. But even this needs special care: edge checking is very instruction-intensive, so rather than check for bounds, which is too slow, he puts a fence around the playing field -- an area that doesn’t count.
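
(Roughly the idea, sketched in Python rather than Z80 -- my own reconstruction of the trick, not Rui's code. The fence cells mean the flood-fill never has to check whether it has fallen off the edge of the board:)

    # Playing field of ball colours; 0 = empty, -1 = the "fence" border.
    WIDTH, HEIGHT, FENCE = 8, 12, -1
    grid = [[FENCE] * (WIDTH + 2)]
    grid += [[FENCE] + [0] * WIDTH + [FENCE] for _ in range(HEIGHT)]
    grid += [[FENCE] * (WIDTH + 2)]

    def mark(x, y, colour, seen):
        """Flood-fill same-coloured neighbours. The fence never matches
        any ball colour, so no explicit bounds checking is needed."""
        if (x, y) in seen or grid[y][x] != colour:
            return seen
        seen.add((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            mark(x + dx, y + dy, colour, seen)
        return seen

    def sweep(x, y):
        """Clear a group of three or more matching balls at (x, y)."""
        colour = grid[y][x]
        if colour in (0, FENCE):
            return
        group = mark(x, y, colour, set())
        if len(group) >= 3:
            for gx, gy in group:
                grid[gy][gx] = 0

    # Example: drop three matching balls and clear them.
    grid[1][1] = grid[1][2] = grid[2][1] = 3
    sweep(1, 1)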

He then listed a lot of optimisations one could use, from tail call optimisation for functions, moving constants out of loops, unrolling loops, and more to the point unrolling them in binary multiples that are efficient for a slow CPU. He even used self-modifying code (surrounded by skulls in the listings!) But it all got too technical for me.

After 6 months, he is still not finished. Single- and dual-player modes work, but not playing against the computer.

I was aghast at the amount of work and effort.

-----

On Sunday, I went to a talk by SUSE’s own Richard Brown, “Introducing libeconf” [https://fosdem.org/2020/schedule/event/ilbsclte/]

However, it was a bit impenetrable unless you code on MicroOS and talk to systemd a lot.

Then I went to a talk on the new “NOVA Microhypervisor on ARMv8-A” [https://fosdem.org/2020/schedule/event/uk_nova/]

But this was very deep stuff. Also, the speaker constantly referred back to an earlier talk, so it was opaque unless you were at that. I sneaked out and went instead to:

“Regaining control of your smartphone with postmarketOS and Maemo Leste” [https://fosdem.org/2020/schedule/event/smartphones/]

This was a much more accessible overview of FOSS Linuxes for smartphones, including why you should use one. There were 2 speakers,  and one, Bart, spent so much time trying to be fair to other, rival distros that he left little reason to use his (postmarketOS). It’s a valiant effort to replace outdated Android with a far more standard mainline Linux OS, to keep old basic hardware working after vendors stop updating it.

The other speaker, Merlijn, was more persuasive about Maemo. This was Nokia's Linux for the N900. It's now abandoned by Nokia, and unfortunately was not all OSS. So some parts can't be updated and must be thrown away and replaced. But all the work since Nokia is FOSS. He talked a lot about its history, its broad app support, etc. The modernised version is built on Debian or Devuan. They have updated the FOSS bits, and replaced the proprietary bits. A single repo adds all the phone components to a standard ARM install. It is only Alpha quality for now. It runs on the original N900, the Motorola Droid 4 (one of the last smartphones with a physical QWERTY keyboard) & the new PinePhone.

The closing main item was “FOSSH - 2000 to 2020 and beyond!” by Jon “maddog” Hall. [https://fosdem.org/2020/schedule/event/fossh/]

maddog makes the point that he’s an old man now, at 69. He’s had 3 heart attacks, and as he puts it, is running on ½ a heart; the rest is dead. He’s been 50+ years in the industry.

He has a lot to teach. He started with how software used to be bundled with computer hardware, as a mix of source & binaries, until a historic Amdahl v IBM legal case made bundling illegal for system vendors -- the point being that Amdahl's plug-compatible mainframes could then run IBM software, which enabled Amdahl to sell them. After that, software started to be sold as a product in its own right.

He was using hypervisors in 1968, and name-checked IBM VM (née CP-67) and, on that, CMS (née the Cambridge Monitor System, later renamed the Conversational Monitor System).

He also pointed out that `chroot` has worked since 1979 - containers aren’t that new.

It’s often underestimated how the sales of video games in the ‘80s propelled software patents & copyright. Rip-off vendors could just clone the hardware and copy the ROM.

rms among others objected to this. While maddog "disagrees with rms about a few things", he credits him with the establishment of the community -- but points out that it's a massive shame he didn't call it The Freedom Software Foundation. That one extra syllable could have saved years of arguments.

And for all that rms hates copyright, and fought it with a different kind of licence agreement -- the GPL of course -- maddog points out that licenses don’t work without copyright…

Maddog had many years of non-free software experience before Linux -- CP/M, MS-DOS, Apple and more. But then came BSD… and we owe BSD a lot, because it’s much more than just a Unix. Many of the tools used on many
OSes, including Linux, come from BSD.

The commercial relevance is also important. Many “small” companies have come out of FOSS, including:


  • Ingres / Postgres

  • Cygnus

  • PrimeTime S/W

  • Walnut Creek


The invention of the CD-ROM was a big deal. Not just for size or speed, but for simple cost. A DEC TK50 tape was $100 new. But CD-ROMs were very nearly very bad for Unix. The ISO-9660 standard only used 8.3 names… because it was set by MS and DEC. It was enough for DOS and VMS. As it happened, at the time, maddog worked at DEC, so he traced the person who was the official standards setter and author, who worked a few cubicles away, and there not being much time, simply blackmailed him into including the Rock Ridge extensions for Unix-like filenames into the standard. This got a round of applause.

The original BSD Unix distro -- because distributions are not a new, Linux thing -- led to BSDi, which in turn led to the famous AT&T lawsuit.

But that also led to Unix System V. This caught on against a lot of opposition and led to the rise of Unix. For example, the very finely-tuned SunOS 4 was replaced with the still research-oriented System V. The Sun OS developers were horrified, but management forced them to adopt it. This is why it was called "Slowlaris" -- all the optimisations were lost. But it did lead to a more standardised Unix industry, and a lot more cross-compatibility, so it was good for everyone.

Keith Bostic led the effort to create BSD Lite and deserves a lot more credit for it than he got. He and his team purged all the code that even looked like AT&T code from BSD. This left just 17 questionable files, which they simply dropped. The result was criticised because it wasn't a complete OS, but it was not so hard to replace those files, and the result, BSD Lite, led to FreeBSD, NetBSD, OpenBSD etc. It was very much worth it.

It nearly didn’t come in time. By ‘92 all the Unix vendors had ceded the desktop to M$ & Apple. (M$ is the term he used.) NT started to win everywhere… but then the Unix world realised MS wanted everything, the server too. A warning bell was when even O’Reilly started publishing NT books.

But then, just as it looked dark, came...


  • GNU (everything but a kernel)

  • Then the Linux kernel in 1991

  • Then Linux v1.0 in 1994.


Linux distros started, and maddog tried


  • SLS

  • Yggdrasil

  • Debian

  • RH

  • Slackware

  • And others.


There even came a time when he called Linus Torvalds in his office at Transmeta, and he answered the phone with “Lie-nus here”. He had gone so native, he even pronounced his own name the American way!

“Mind you, Linus said ‘I don’t care what you call the OS so long as you use it.’ So here’s your chance, BSD people! Call it BSD!”

But there were no apps. Instead, Linux was used in…


  • ISPs (to replace SPARC & Solaris)

  • shells

  • DNS

  • LAMP (thanks, timbl)

  • As a way to reuse old boxes

  • Firewall

  • file & print server (thanks, Samba)


Again underestimated, Beowulf clusters (1995) were important. All the old supercomputer vendors were going under. They would spend $millions on developing a new supercomputer, then sell 5. One “to a government agency we can’t name, but you all know who I mean”, and 4 to universities who couldn’t afford them. So, credit to Thomas Sterling & Don Becker. Beowulf changed this. There was no commercial supercomp software any more. Although apparently, Red Hat did a boxed supercomputer distro & sold it as a joke. But thousands of people bought it, so they could show it off on the shelf - never opened.

Then came a long run-through of the early stages of the commercial Linux industry.

From 1997-1999 came Slashdot, SourceForge and ThinkGeek; Linux International, the Linux Mark Institute and the LSB; and Linux professional certification from Sair and the LPI. These bodies supported early Linux marketing -- trade shows like CeBIT and LinuxWorld -- plus user groups, and so on.

A good sign was when commercial databases announced they were supporting Linux. The first to ship was Informix, on October 2nd 1998. Getting wind of it, Oracle announced theirs 2 days before that, but it didn't ship until 9 months later. (Maddog is very evidently not a fan of Oracle.) Then Sun buys MySQL, then Oracle buys Sun.

The term “Open Source” -- it was not his fault. He was at the meeting, but he went to bathroom, and when he came back, it was too late. They’d named it.

The dot-com boom/bust was bad, but not as bad as people think. There were the RH and V.A. Linux IPOs. IBM invested $1Bn in Linux, & got it back many times over. The OSDL (2000) was important, too. It helped CA, HP, and IBM with hardware. It even hired Torvalds, but went broke.

Although the following talk was meant to be about the history since 2000, maddog gave his favourites. His interesting events in or since 2000 were:


  • 2000

  • - Linux in Embedded systems

  • - Knoppix

  • - FreeBSD jails

  • 2001

  • - Steve Ballmer’s famous “cancer” quote

  • 2003 onwards

  • - SCO lawsuit -- and how it was the evil, post-Caldera SCO, not the original Michels family SCO, who were good guys. They even gave Linus an award. Doug Michels asked Linus "what can SCO do to help Linux?" Torvalds later told maddog of his embarrassment -- he could not think of a single thing to say.

  • 2004

  • - Ubuntu, of course. For better or worse, a huge step in popularising Linux.

  • 2008

  • - Android

  • - VMs: KVM, Xen, VBox, Bochs, UML

  • - The cloud. Yes, he calls it "the fog".

  • 2011

  • - Raspberry Pi

  • - Containers

maddog’s favour illustration of Linux’s progress over time are 4 quotes from a leading industry analyst, Jonathan Eunice or Illuminata. They were


  1. “Linux is a toy.”

  2. “Linux is getting interesting”

  3. “I recommend Linux for non-critical apps.”

  4. “I recommend to at least look at Linux for *all* enterprise apps.”


On maddog’s last day at DEC, in 1999… his boss bought him a coffee and asked “whenm will this thing be ready? When can I get rid of all my Digital Unix engineers?” That’s when he knew it had won.

Why is free(dom) software important? As he put it, “I have a drawer full of old calculators & computers I can’t use, because their software wasn’t free.” Avoiding obsolescence is a significant issue that gave Linux an in-road into the mainstream, and it remains just as important.

The thing about freedom software is that both nobody & everybody owns it. Even companies making closed products with freedom software are still helping.

Today’s challenges & opportunities? Well, security & privacy -- it‘s worse than you think. Ease of use is still not good enough: “it’s gotta be easy enough for mom & pop.”

He doesn’t like the term AI -- it should be “inorganic intelligence”. It will be the same as our meat intelligence, in another substrate: maddog agrees with Alan Turing: at heart, all we need to do is duplicate the brain in silicon and we’re there. And he feels we can do that.

He feels that freedom software needs a lot more advertising. It needs to be on TV. It needs to be a household word, a brand.

Winding up, he says it's all about love. Love is Love. Ballmer now *says* he loves FOSS, companies say it, but end-users should say that they love freedom software. Or, as he put it, "world domination through cooperation".


The final main item was “FOSDEM@20 - A Celebration” [https://fosdem.org/2020/schedule/event/fosdem_at_20/] by Steve Goodwin, @marquisdegeek

He felt it was very apt that FOSDEM happens in Brussels, as the Mundaneum in 1934 was the first attempt at an indexing system of everything.

Goodwin doesn’t take credit -- he says that the first, OSDEM, was organised by Raphaël Bauduin. He just claims to have inspired it, by sending out an email, which I noted had a misspelled subject line: “programers meeting” [sic]

FOSDEM 1 was in 2002. It even had its own dance, which Bauduin demonstrated. He also showed that he was wearing the same T-shirt as in the photo of him opening the first event, 20 years earlier.

FOSDEM was meant to be fun, because RB didn’t feel comfy at commercial FOSS conferences.

When it started, the IT world was very different. In 2001, there was no FaceBook, no Twitter, no Stack Overflow (to a chorus of boos), no Uber. Google was 3, Amazon was 7 and sold only books. Billie Eilish was born… in December, and Goodwin didn’t believe there’d be a single middle-aged geek who would have heard of her.

Mac OS X and XP were both new.

There are some photos on http://fosdem.3ti.be/ showing its intentional lack of professionalism or seriousness -- for instance, the Infodesk was subtitled “a bunch of wacko looneys at your service”. But they got a lot of big names. Miguel de Icaza was an early speaker, demoing Mono, GNOME, Xamarin. A heckler bizarrely shouted “this is Coca-Cola!” -- i.e. that demoing Mono controlling the proprietary Unity engine was wrong. Then there was a video speech from Eben Moglen introducing the FreedomBox: https://freedombox.org/

And that’s a run-down of my FOSDEM. This is just my notes expanded to sentence length. Forgive the pretentious quote from Blaise Pascal: “Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.” (If I had more time, I would have written a shorter letter.)

Hard Stare

"Generation Gaps" -- FOSDEM 2020 talk

On Saturday 1st Feb, I did another FOSDEM talk in the History stream. (I was, in fact, the end of history.)

Here's the presentation that I made, with speaker's notes. It's a LibreOffice Impress file. I'll add a video link when I get it.

If you prefer plain text, well, here's the script... LibreOffice Writer or MS Word format.

UPDATE: in theory, there should be video here. It seems not to be available just yet, though.

https://video.fosdem.org/2020/Janson/generation_gaps.mp4
https://video.fosdem.org/2020/Janson/generation_gaps.webm

Here is my previous FOSDEM talk from 2018, if you're interested.
Hard Stare

It's funny how life can be fractal: recursive and self-similar.

About 15 years ago, I agreed to review Douglas Adams' TV documentary Hyperland for the newsletter of ZZ9, the Hitch-hikers' Guide fan club. I still haven't written it, but I finally got around to watching the programme about six months ago.

My non-$DAYJOB research into computer science has led me to reading and following Ted Nelson, the inventor of hypertext and arguably the man who inspired the creation of the World Wide Web: Nelson's Xanadu was never finished, so Tim Berners-Lee built a quick-and-dirty, lightweight version of some of its core ideas instead.

And here I am blogging on it.

When I finally got round to watching Hyperland, who is interviewed but... Ted Nelson.

My recent research has led me to Niklaus Wirth's remarkable Oberon language and OS.

But in $DAYJOB I'm documenting SANs and the like, which led me to ATA-over-Ethernet among other things. It's a tech I've long admired for its technical elegance.

I found the author, and his blog... and he talks about his prior big invention, the Cisco PIX. That was my last big project in my last hands-on techie day-job. It emerges he also invented NAT. And reading about that:
http://coraid.com/b190403-the-pix.html

... What does he talk about but Oberon.
Hard Stare

How to clean up a Windows disk (and prepare for dual-booting, and why you should)

I keep getting asked about this in various places, so I thought it was high time I described how I do it. I will avoid using any 3rd party proprietary tools; everything you need is built-in.

Notes for dual-booters:

This is a bit harder with Windows 10 than it was with any previous versions. There are some extra steps you need to do. Miss these and you will encounter problems, such as Linux refusing to boot, or hanging on boot, or refusing to mount your Windows drive.

It is worth keeping Windows around. It's useful for things like updating your motherboard firmware, which is a necessary maintenance task -- it's not a one-off. Disk space is cheap these days.

Also, most modern PCs have a new type of firmware called UEFI. It can be tricky to get Linux to boot off an empty disk with UEFI, and sometimes, it's much easier to dual-boot with Windows. Some of the necessary files are supplied by Windows and that saves you hard work. I have personally seen this with a Dell Precision 5810, for instance.

Finally, it's very useful for hardware troubleshooting. Not sure if that new device works? Maybe it's a Linux problem. Try it in Windows then you'll know. Maybe it needs initialising by Windows before it will work. Maybe you need Windows to wipe out config information. I have personally seen this with a Dell Precision laptop and a USB-C docking station, for example: you could only configure triple-head in Windows, but once done, it worked fine in Linux too. But if you don't configure it in Windows, Linux can't do it alone.

Why would you want to do this? Well, there are various reasons.


  1. You often or only run Windows and want to keep it performing well.

  2. You run Windows in a VM under another OS and want to minimize the disk space and RAM it uses.

  3. You dual-boot Windows with another OS, and want to keep it happy in less disk space than it might normally enjoy to itself.

  4. You're preparing your machine for installing Linux or another OS and want to shrink the Windows partition right down to make as much free space as possible.

  5. You've got a slightly troublesome Windows installation and want to clean things up as a troubleshooting step.

Note, this stuff also applies to a brand-new copy of Windows, not just an old, well-used installation.

I'll divide the process into two stages: the first applies whether or not you're preparing to dual-boot, and the second only if you are.

So: how to clean up a Windows drive.

The basic steps are: update; clean up; check for errors.

If you're never planning to use Windows again, you can skip the updating part -- but you shouldn't. Why not? Well, as I advised above, you should keep your Windows installation around unless you are absolutely desperate for disk space and so poor that you can't afford to buy more. It's useful in emergencies. And in emergencies, you don't want to spend hours installing updates. So do it first.

Additionally, some Windows updates require earlier ones to be installed. A really old copy might be tricky to update.


  1. Updating. This is easy but not quite as easy as it looks at first glance. Connect your machine to the Internet, open Windows Update, click "Check for updates". But wait! There's more! Because Microsoft has a vested interest in making things look smooth and easy and untroubled, Windows lies to you. Sometimes, when you click "check for updates", it says there are none. Click again and magically some more will appear. There's also a concealed option to update other Microsoft products and it is, unhelpfully, off by default. You should turn that on.

  2. Once Windows Update has installed everything, reboot. Sometimes updates make you do this, but even if they don't, do it manually anyway.

  3. Then run Windows Update and check again. Sometimes, more will appear. If they do, install them and go back to step 1. Repeat this process until no new updates appear when you check.

  4. Next, we're going to clean up the disk. This is a 2-stage process.

  5. First, run Disk Cleanup. It's deeply buried in the menus, so just open the Start menu and type CLEAN. It should appear. Run it. (If you prefer the command line, there's a rough recap of the cleanup and disk-check commands just after this list.)

  6. Tick all the boxes -- don't worry, it won't delete stuff you manually downloaded -- and run the cleanup. Normally, this is fast. A few minutes is enough.

  7. Once it's finished, run disk cleanup again. Yes, a second time. This is important.

  8. Second time, click the "clean up system files" button.

  9. Again, tick all the boxes, then click the button to run the cleanup.

  10. This time, it will take a long time. This is the real clean up and it's the step I suspect many people miss. Be prepared for your PC to be working away for hours, and don't try to do anything else while it works, or it will bypass files that are in use.

  11. When it's finished, reboot.

  12. After your PC reboots, right-click on the Start button and open an administrative command prompt. Click yes to give it permission to run. When it appears, type: CHKDSK C: /F

  13. Type "y" and hit "Enter" to give it permission.

  14. Reboot your PC to make it happen.

  15. This can take a while, too. This can fix all sorts of Windows errors. Give it time, let it do what needs to be done.

  16. Afterwards, the PC will reboot itself. Log in, and if you want an extra-thorough job, run Disk Cleanup a third time and clean up the system files. This will get rid of any files created by the CHKDSK process.

  17. Now you should have got rid of most of the cruft on your C drive. The next step requires 2 things: firstly, that you have a Linux boot medium, so if you don't have it ready, go download and make one now. Secondly, you need to have some technical skill or experience, and familiarity with the Windows folder tree and navigating it. If you don't have that, don't even try. One slip and you will destroy Windows.

  18. If you do have that experience, then what you do is reboot your PC from the Linux medium -- don't shut down and then turn it back on; pick "restart" so that Windows does a full shutdown and reboot -- and manually delete any remaining clutter. The places to look are in C:\WINDOWS\TEMP and C:\USERS\$username\AppData\Local\Temp. "$username" is a placeholder here -- look in the home directory of your Windows login account, whatever that's called, and any others you see here, such as "Default", "Default User", "Public" and so on. Only delete files in folders called TEMP and nowhere else. If you can't find a TEMP folder, don't delete anything else. Do not delete the TEMP folders themselves; they are necessary. Anything inside them is fair game. You can also delete the files PAGEFILE.SYS, SWAPFILE.SYS and HIBERFIL.SYS in the root directory -- Windows will just re-create them next boot anyway.
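As promised above, here's a rough command-line recap of the Windows-side cleanup, for anyone who prefers typing to clicking. Treat it as a sketch, not gospel: the :1 is just an arbitrary profile number under which cleanmgr saves whichever tick-boxes you choose, and you should run all of this from an administrative Command Prompt so the system-file categories are included.

  REM choose which categories of junk to remove, and save the choices as profile 1
  cleanmgr /sageset:1
  REM run the cleanup using saved profile 1 (run it twice, as described above)
  cleanmgr /sagerun:1
  REM chkdsk will ask to schedule the check of C: for the next reboot -- answer Y
  chkdsk C: /F
  REM then restart immediately so the check can run
  shutdown /r /t 0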

That's about it. After you've done this, you've eliminated all the junk and cruft that you reasonably can from your Windows system. The further stages are optional and some depend on your system configuration.

Optional stages

Defragmenting the drive

Do you have Windows installed on a spinning magnetic hard disk, or on an SSD?

If it's a hard disk, then you may wish to run a defrag. NEVER defrag an SSD -- it's pointless and it wears out the disk.

But if you have an old-fashioned HDD, then by all means, after your cleanup, defrag it. Here's how.

I have not tested this on Win10, but on older versions, I found that defrag does a more thorough job, faster, if you run it in Safe Mode. Here's how to get into Safe Mode in Windows 10.
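If you'd rather not use the GUI defragmenter at all, defrag.exe will do the same job from an administrative command prompt. A quick sketch:

  REM analyse drive C: first, to see whether a defrag is even worth doing
  defrag C: /A
  REM then defragment it, printing progress and a full report
  defrag C: /U /V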

Turning off Fast Boot

Fast Boot is a feature that only shuts down part of Windows and then hibernates the rest. Why? Because when you turn your PC on, it's quicker to wake Windows and then load a new session than it is to boot it from scratch, with all the initialisation that involves. Shutdown and startup both become a bit quicker.

If you only run Windows and have no intention of dual-booting, then ignore this if you wish. Leave it on.

But if you do dual-boot, it's definitely worth doing. Why? Because when Fast Boot is on, Windows doesn't totally stop when you shut down, only when you restart. This means that the C drive is marked as being still mounted, that is, still in use. And if it's in use, then Linux won't mount it and you can't access your Windows drive from Linux.

Worse still, if like me you mount the Windows drive automatically during bootup, then Linux won't finish booting. It waits for the C drive to become available, and since Windows isn't running, it never becomes available so the PC never boots. This is a new problem introduced by the Linux systemd tool -- older init systems just skipped the C drive and moved on, but systemd tries to be clever and as a result it hangs.

So, if you dual boot, always disable Fast Boot. It gives you more flexibility. I will list a few how-tos since Microsoft doesn't seem to officially document this.
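For reference, here's roughly what that looks like from an administrative command prompt. It's a sketch: the registry path and value are the ones I've seen on Windows 10 installs (it's the switch the Control Panel checkbox flips), so check it matches your machine before relying on it.

  REM turning hibernation off disables Fast Boot too, since Fast Boot depends on the hibernation file
  powercfg /h off
  REM alternatively, leave hibernation alone and flip only the Fast Boot switch in the registry
  reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled /t REG_DWORD /d 0 /f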

Turning off Hibernation

If you have a desktop PC, once you have disabled Fast Boot, also disable Hibernation.

If you have a notebook, you might want to leave it on. It's useful if you find yourself in the middle of something but running out of power, or about to get off a train or plane. But for a desktop, there's less reason, IMHO.

There are a few reasons to disable it:

  1. It eliminates the risk of some Windows update turning Fast Boot back on. If Hibernation is disabled, it can't.

  2. It means when you boot Linux your Windows drive will always be available. Starting another OS when Windows is alive but hibernating risks drive corruption.

  3. It frees up a big chunk of disk space -- equal to your physical RAM -- that you can take off your Windows partition and give to Linux.

Here's how to disable it. In brief: open an Admin Mode command prompt, and type powercfg /h off.
That's it. Done.

Once that's done, you can delete C:\HIBERFIL.SYS from Linux, if it's still there.
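If you want to check that it took, powercfg can also report which sleep states are still available -- a quick check from the same admin prompt:

  powercfg /a

After disabling hibernation, both Hibernate and Fast Startup should be listed as no longer available.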

Final steps -- preparing for installing a 2nd operating system

If you've got this far and you're not about to set up your PC for dual-boot, then stop, you're done.

But if you do want to dual-boot, then the final step is shrinking your Windows drive.

There are 2 ways to do this. You might want one or the other, or both.

The safe way is to follow a dual-booter's handy rule:

Always use an OS-native tool to manipulate that OS.

What this means is this: if you're doing stuff to, or for, Windows, then use a Windows tool if you can. If you're doing it to or for Linux, use a Linux tool. If you're doing it to or for macOS, use a macOS tool.

  • Fixing a Windows disk? Use a Windows boot disk and CHKDSK. Formatting a drive for Windows? Use a Windows install medium. Writing a Windows USB key? Use a Windows tool, such as Rufus.

  • Writing a Linux USB? Use Linux. Formatting a drive for Linux? Use Linux.

  • Adjusting the size of a Mac partition? Use macOS. Writing a bootable macOS USB? Use macOS.

So, to shrink a Windows drive to make space for Linux, use Windows to do it.

Here's the official Microsoft way.

Check how much space Windows is using, and how much is free. (Find the drive in Explorer, right-click it and pick Properties.)

The free space is how much you can give to Linux.
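If you'd rather check from an administrative command prompt, fsutil reports the same numbers -- a quick sketch:

  fsutil volume diskfree C:

It prints the total size of the volume and the free bytes; the free figure is what you have to play with.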

Note, once Windows is shut down, you can delete the pagefile and swapfile to get a bit more space.

However, if you want to be able to boot Windows, then it needs some free working space. Don't shrink it so far that it has no free space left. Try to leave it about 50% empty, and at least 25% empty -- below that, Windows will hit problems when it boots, and if you're in an emergency situation, the last thing you need is further problems.
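To put numbers on that: if Explorer shows 20 GB in use, then 20 ÷ 0.75 ≈ 27 GB is the absolute minimum partition size under the at-least-25%-free rule, and 20 ÷ 0.5 = 40 GB meets the more comfortable 50% guideline.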

As a rule of thumb, a clean install of Win10 with no additional apps will just about run in a 16 GB partition. A 32 GB partition gives it room to breathe but not much -- you might not be able to install a new release of Windows, for example. A 64 GB partition is enough space to use for light duties and install new releases. A 128 GB partition is enough for actual work in Windows if your apps aren't very big.

Run Disk Management (diskmgmt.msc), select the partition, right-click and pick "Shrink Volume". Pick the smallest possible size -- Windows shouldn't shrink the disk so much that you have no free space, but note my guidelines above.
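The same shrink can be done from an administrative command prompt with diskpart, if you prefer. A sketch -- the 40960 is a made-up example amount, and diskpart works in MB:

  diskpart
  REM the following are typed at the DISKPART> prompt:
  list volume
  select volume C
  REM ask how far Windows thinks the volume can shrink
  shrink querymax
  REM shrink by your chosen amount, e.g. 40960 MB (40 GB)
  shrink desired=40960
  exit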

Let it work. When it's done, look at how much unpartitioned space you have. Is there enough room for what you want? Yes? Great, you're done. Reboot off your Linux medium and get going.

No? Then you might need to shrink it further.

Sometimes Disk Management will not offer to shrink the Windows drive as much as you might reasonably expect. For example, even if you only have 10-20 GB in use, it might refuse to shrink the drive below several hundred GB.

If so, here is how to proceed.

  1. Shrink the drive as far as Windows' Disk Management will allow.

  2. Reboot Windows

  3. Run "CHKDSK /F" and reboot again.

  4. Check you've disabled Fast Boot and Hibernation as described above.

  5. Try to shrink it again.

No joy? Then you might have to try some extra persuasion.

Boot off a Linux medium, and as described above, delete C:\PAGEFILE.SYS, C:\SWAPFILE.SYS and C:\HIBERFIL.SYS.

Reboot into Windows and try again. The files will be automatically re-created, but in new positions. This may allow you to shrink the drive further.

If that still doesn't work, all is not lost. A couple more things to try:

  • If you have 8 GB or more of RAM, you can tell Windows not to use virtual memory. This frees up more space. Here's how.

  • Disable System Protection. This can free up quite a bit of space on a well-used Windows install. Here's a how-to.

Try that, reboot, and try shrinking again.
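For the System Protection step, the on/off switch itself lives in the System Properties dialog, but you can see and reclaim the space its restore points are using from an administrative command prompt. A sketch:

  REM show how much space shadow copies (restore points) are using on C:
  vssadmin list shadowstorage
  REM delete all existing restore points for C: to free that space
  vssadmin delete shadows /for=C: /all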

If none of this works, then you can shrink the partition using Linux tools. So long as you have a clean disk, fully shut down (Fast Boot off, not hibernated, etc.) then this should be fine.

All you need to do is boot off your Linux medium, remove the pagefile, swapfile and any remaining hibernation file, then run GParted. Again, bear in mind that you should leave 25-50% of free space if you want Windows to be able to run afterwards.

Once you've shrunk the partition, try it. Reboot into Windows and check it still works. If not, you might need to make the C partition a little bigger again.

Once you have a small but working Windows drive, you're good to go ahead with Linux.
Hard Stare

Choose your future.

Choose 68K. Choose a proprietary platform. Choose an OS. Choose games. Choose a fucking CRT television, choose joysticks, floppies, ROM cartridges, and proprietary memory. Choose no pre-emption, crap programming languages and sprite graphics. Choose a safe early-80s sound chip. Choose a second floppy drive. Choose your side. Choose badges, stickers and T-shirts to proclaim your loyalty. Choose one of the two best-selling glorified games consoles with the same range of fucking games. Choose trying to learn to write video games and dreaming you'll be a millionaire from your parents' spare bedroom. Choose reading games magazines and pretending that one day you'll do one like that in AMOS or STOS, while buying another sideways-scrolling shooter or a platformer and thinking it's original or new or worth the thirty notes you paid for it. Choose rotting away at the end of it all, running the same old crap games in miserable emulators, totally forgotten by the generic x86 business boxes with liquid cooling that your fucked-up brats think are exciting, individual and fun... Choose your future. Choose 68K... But why would I want to do a thing like that?

I chose not to do that. I chose something different.