
Sun, Feb. 4th, 2018, 07:04 pm
"The Circuit Less Travelled" -- #FOSDEM 2018 "History" stream talk, notes & slides

So, yesterday I presented my first conference talk since the Windows Show 1996 at Olympia, where I talked about choosing a network operating system — that is, a server OS — for PC Pro magazine.

(I probably still have the speaker's notes and presentation for that somewhere too. The intensely curious may ask, and I may be able to share it as well.)

It seemed to go OK: I had a whole bunch of people asking questions afterwards, commenting or thanking me.

[Edit] Video! https://youtu.be/jlERSVSDl7Y

I have to check the video recording and make some editing marks before it can be published, and I am not sure that the hotel wifi connection is fast or capacious enough for me to do that. However, I'll post it as soon as I can.

Meantime, here is some further reading.

I put together a slightly jokey deck of slides and was very pleasantly impressed at how easy LibreOffice Impress made it to create and to present them. You can download the 9MB ODP file here:

https://www.dropbox.com/s/xmmz5r5zfmnqyzm/The%20circuit%20less%20travelled.odp?dl=0

The notes are a 110 kB MS Word 2003 document. They may not always be terribly coherent -- some were extensively scripted, some are just bullet points. For best results, view in MS Word (or the free MS Word Viewer, which runs fine under WINE) in Outline mode. Other programs will not show the structure of the document, just the text.

https://www.dropbox.com/s/7b2e1xny53ckiei/The%20Circuit%20less%20travelled.doc?dl=0

I had to cut the talk fairly brutally to fit the time and did not get to discuss some of the operating systems I planned to. You can see some additional slides at the end of the presentation for stuff I had to skip.

Here's a particular chunk of the talk that I had to cut. It's called "Digging deeper" and you can see what I was going to say about Taos, Plan 9, Inferno, QNX and Minix 3. This is what the slides at the end of the presentation refer to.

https://www.dropbox.com/s/hstqmjy3wu5h28n/Part%202%20%E2%80%94%20Digging%20deeper.doc?dl=0

Links I mentioned in the talk or slides

The Unix Haters' Handbook [PDF]: https://simson.net/ref/ugh.pdf

Stanislav Datskovskiy's Loper-OS:  http://www.loper-os.org/

Paul Graham's essays: http://www.paulgraham.com/

Notably his Lisp Quotes: http://www.paulgraham.com/quotes.html

Steve Jobs on the two big things he missed when he visited Xerox PARC:
http://www.mac-history.net/computer-history/2012-03-22/apple-and-xerox-parc/2

Alan Kay interview where he calls Lisp "the Maxwell's Equations of software": https://queue.acm.org/detail.cfm?id=1039523

And what that means: http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/

"In the Beginning was the Command Line" by Neal Stephenson: http://cristal.inria.fr/~weis/info/commandline.html

Author's page: http://www.cryptonomicon.com/beginning.html


Symbolics OpenGenera: https://en.wikipedia.org/wiki/Genera_(operating_system)

How to run it on Linux (some of several such pages):
http://www.jachemich.de/vlm/genera.html
https://loomcom.com/genera/genera-install.html

A brief (13min) intro to OpenGenera by Kalman Reti: https://www.youtube.com/watch?v=o4-YnLpLgtk&t=5s
A longer (1h9m) talk about it, also by him: https://www.youtube.com/watch?v=OBfB2MJw3qg

Fri, Dec. 22nd, 2017, 12:22 am
FOSDEM


It might interest folk hereabouts that I've had a talk accepted at February's FOSDEM conference in Brussels. The title is "The circuit less travelled", and I will be presenting a boiled-down, summarised version of my ongoing studies into OS, language and app design: where, historically, the industry made arguably poor (if pragmatic) choices; some interesting technologies that weren't pursued; where it might go next; and how reviving some forgotten ideas could lend a technological advantage to those trying different angles.

In other words, much of what I've been ranting about on here for the last several years.

It will, to say the least, be interesting to see how it goes down.

SUSE is paying for me to attend, but the talk is not on its behalf -- it's entirely my own idea and submission. A nudge from SUSE merely gave me the impetus to submit an abstract and description.

Thu, Aug. 3rd, 2017, 03:49 pm
It is not just me. I swear, it really isn't.

Once again, recently, I have been told that I simply cannot write about -- for instance -- the comparative virtues of programming languages unless I am a programmer who can actually program in them. That, supposedly, is the only way to judge.

This could be the case, yes. I certainly get told it all the time.

But the thing is that I get told it by very smart, very experienced people who also go on to tell me that I am completely wrong about other stuff where I know that I am right, and can produce abundant citations to demonstrate it. All sorts of stuff.

I can also find other people -- just a few -- who know exactly what I am talking about, and agree, and have written much the same, at length. And their experience is the same as mine: years, decades, of very smart highly-experienced people who just do not understand and cannot step outside their preconceptions far enough to get the point.

It is not just me.


Thu, Jul. 27th, 2017, 07:18 pm
If you're an outsider in the world of computing, you can see for miles… but you annoy everyone.

This is a repurposed CIX comment. It goes on a bit. Sorry for the length. I hope it amuses.

So, today, a friend of mine accused me of getting carried away after reading a third-generation Lisp enthusiast's blog. I had to laugh.

The actual history is a bit bigger, a bit deeper.

The germ was this:

https://www.theinquirer.net/inquirer/news/1025786/the-amiga-dead-long-live-amiga

That story did very well, amazing my editor, and he asked for more retro stuff. I went digging. I'm always looking for niches which I can find out about and then write about -- most recently, it has been containers and container tech. But once something goes mainstream and everyone's writing about it, then the chance is gone.

I went looking for other retro tech news stories. I wrote about RISC OS, about FPGA emulation, about OSes such as Oberon and Taos/Elate.

The more I learned, the more I discovered how much the whole spectrum of commercial general-purpose computing is just a tiny and very narrow slice of what's been tried in OS design. There is some amazingly weird and outré stuff out there.

Many of these systems still have fierce admirers. That's the nature of people. But it also means that there's interesting in-depth analysis of some of this tech.

It's led to pieces like this which were fun to research:

http://www.theregister.co.uk/Print/2013/11/01/25_alternative_pc_operating_systems/

I found 2 things.

One, most of the retro-computers that people rave about -- mainstream stuff like Amigas or Sinclair Spectrums or whatever -- are actually relatively homogeneous compared to the really weird stuff. And most of them died without issue. People are still making clone Spectrums of various forms, but they're not advancing the design and it hasn't gone anywhere.

The BBC Micro begat the Archimedes and the ARM. Its descendants are everywhere. But the software is all but dead, and perhaps justifiably. It was clever but of no great technical merit. Ditto the Amiga, although AROS on low-cost ARM kit has some potential. Haiku, too.

So I went looking for obscure old computers. Ones that people would _not_ read about much. And that people could relate to -- so I focussed on my own biases: I find machines that can run a GUI, or at least do something with graphics, more interesting than earlier, purely text-based ones.

There are, of course, tons of the things. So I needed to narrow it down a bit.

Like the "Beckypedia" feature on Guy Garvey's radio show, I went looking for stuff of which I could say...

"And why am I telling you this? Because you need to know."

So, I went looking for stuff that was genuinely, deeply, seriously different -- and ideally, stuff that had some pervasive influence.

And who knows, maybe I’ll spark an idea and someone will go off and build something that will render the whole current industry irrelevant. Why not? It’s happened plenty of times before.

And every single time, all of the most knowledgeable experts said it was a pointless, silly, impractical flash-in-the-pan. Only a few nutcases saw any merit to it. And they never got rich.

Fri, Jun. 16th, 2017, 02:48 pm
The death of the filesystem [tech blog post, by me - please comment on LJ, not on Twitter/FB]

I've written a few times about a coming transition in computing -- the disappearance of filesystems, and what effects this will have. I have not had any constructive dialogue with anyone.

So I am trying yet again, by attempting to rephrase this in a historical context:

There have been a number of fundamental transitions in computing over the years.

1st generation

The very early machines didn't have fixed non-volatile storage: they had short-term temporary storage, such as mercury delay lines or storage CRTs, and read data from offline, non-direct-access, often non-rewritable media, such as punched cards or paper tape.

2nd generation

Hard disks came along in about 1953 and were commercially available by 1957: the IBM RAMAC...

https://en.wikipedia.org/wiki/IBM_305_RAMAC

Now, there were 2 distinct types of directly-accessible storage: electronic (including core store for the sake of argument) and magnetic.

A relatively small amount of volatile storage, in which the processor can directly work on data, and a large amount of read-write non-volatile storage, whose contents must be transferred into volatile storage for processing. You can't add 2 values held in 2 disk blocks without transferring them into memory first.
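As a deliberately trivial sketch of that constraint -- using an ordinary file as a stand-in for a disk, with a block size and layout that are purely my own illustrative choices -- the addition can only ever happen on in-memory copies:

```python
# Toy sketch: values stored in "disk" blocks cannot be operated on in place;
# they must be copied into volatile memory (ordinary variables) first.
BLOCK_SIZE = 512  # assumed block size, purely illustrative


def read_block(disk, n):
    disk.seek(n * BLOCK_SIZE)
    return disk.read(BLOCK_SIZE)               # transfer: non-volatile -> volatile


def write_block(disk, n, data):
    disk.seek(n * BLOCK_SIZE)
    disk.write(data.ljust(BLOCK_SIZE, b"\0"))  # transfer: volatile -> non-volatile


with open("disk.img", "w+b") as disk:
    # Put two values "on disk", in blocks 0 and 1.
    write_block(disk, 0, (40).to_bytes(8, "little"))
    write_block(disk, 1, (2).to_bytes(8, "little"))

    # The add itself happens in RAM, on copies, never on the blocks themselves.
    a = int.from_bytes(read_block(disk, 0)[:8], "little")
    b = int.from_bytes(read_block(disk, 1)[:8], "little")
    write_block(disk, 2, (a + b).to_bytes(8, "little"))  # copy the result back out
```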

This is only one of the possible basic models of computer architecture, but it has been _the_ single standard architecture for many decades. We've forgotten there was ever anything else.

[[

Aside:

There was a reversion to machines with no directly-accessible storage in the late 1970s and early-to-mid 1980s, in the form of 8-bit micros with only cassette storage.

The storage was not under the computer's control, and was unidirectional: you could load a block, or change the tape and save a block, but in normal use -- for most people, barring the rather wealthy -- the computer operated solely on the contents of its RAM and ROM.

Note: no filesystems.

(Trying to forestall an obvious objection: later machines, such as the ZX Spectrum 128 and the Amstrad PCW, had RAMdisks, and therefore very primitive filesystems. But that was mainly a temporary stage, due to processors that couldn't address more than 64 kB of RAM, and ROMs that couldn't be modified to support widespread bank-switching without breaking backwards compatibility.)

]]

Now that all machines have this 2-level-store model, note that the 2 stores are managed differently.

Volatile store is not structured as a filesystem, because it is dynamically constructed on the fly at every boot. It has little to no metadata.

Permanent store needs to hold metadata as well as data. The computer is regularly rebooted, and afterwards it needs to be able to find its way through the non-volatile storage again. Thus, increasingly elaborate systems of indexing.
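To make "metadata plus indexing" concrete, here is a made-up, deliberately tiny sketch -- nothing like any real filesystem's on-disk format -- of an index that maps file names to block numbers and lives in a fixed, well-known place so it can be found again after a reboot:

```python
import json

BLOCK_SIZE = 512  # illustrative only

# The metadata: which blocks belong to which named file.
index = {"notes.txt": [3, 7], "todo.txt": [4]}


def save_index(disk, index):
    # Keep the index at a fixed location (block 0) so it can be found after a reboot.
    disk.seek(0)
    disk.write(json.dumps(index).encode().ljust(BLOCK_SIZE, b"\0"))


def load_index(disk):
    disk.seek(0)
    return json.loads(disk.read(BLOCK_SIZE).rstrip(b"\0"))


def read_file(disk, index, name):
    # Follow the index from a name to its data blocks; the raw data carries no structure.
    data = b""
    for block in index[name]:
        disk.seek(block * BLOCK_SIZE)
        data += disk.read(BLOCK_SIZE)
    return data.rstrip(b"\0")
```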

But the important thing is that filesystems were a solution to a technology issue: managing all that non-volatile storage.

Over the decades it has been overloaded with other functionality: data-sharing between apps, security between users, things like that. It's important to remember that these are secondary functions.

It is near-universal, but that is an artefact of technological limitations: the fast, processor-local storage was volatile, while non-volatile storage was slow, and large enough that it had to be non-local. Non-volatile storage is managed via APIs and discrete hardware controllers, whose main job is transferring blocks of data from volatile to non-volatile storage and back again.

And that distinction is going away.

The technology is rapidly evolving to the point where we have fast, processor-local, non-volatile storage, sitting in memory slots and appearing directly in the CPUs' memory map.

Example -- Flash memory DIMMs:

https://www.theregister.co.uk/2015/11/10/micron_brings_out_a_flash_dimm/

Now, the non-volatile electronic storage is increasing rapidly in speed and decreasing in price.

Example -- Intel XPoint:

https://arstechnica.com/information-technology/2017/02/specs-for-first-intel-3d-xpoint-ssd-so-so-transfer-speed-awesome-random-io/

Note the specs:

Reads as fast as Flash.
Writes nearly the same speed as reads.
Half the latency of Flash.
100x the write lifetime of Flash.

And this is the very first shipping product.

Intel is promising "1,000 times faster than NAND flash, 10 times denser than (volatile) DRAM, and with 1,000 times the endurance of NAND".

This is a game-changer.

What we are looking at is a new type of computer.

3rd generation

No distinction between volatile and non-volatile storage. All storage appears directly in the CPUs' memory map. There are no discrete "drives" of any kind as standard. Why would you need them? You can have 500GB or 1TB of RAM, but if you turn the machine off, then a day later turn it back on, it carries on exactly where it was.

(Yes, there will be some caches, and there will need to be a bit of cleverness involving flushing them, or ACID writes, or something.)

It ships to the user with an OS in that memory.

You turn it on. It doesn't boot.

What is booting? Transferring OS code from non-volatile storage into volatile storage so it can be run. There's no need. It's in the processor's memory the moment it's turned on.

It doesn't boot. It never boots. It never shuts down, either. You may have to tell it you're turning it off, but it flushes its caches and it's done. Power off.

No hibernation: it doesn't need it. The OS and all state data will be there when you come back. No sleep states: just power off.

What is installing an OS or an app? That means transferring from slow non-volatile storage to fast non-volatile storage. There is no slow or fast non-volatile storage. There's just storage. All of it that the programmer can see is non-volatile.
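To give a feel for what programming such a machine might be like, here is a rough sketch that uses an ordinary memory-mapped file as a stand-in for persistent, directly-addressable storage. The file name, size and layout are my own assumptions, and real persistent-memory programming has its own APIs and cache-flushing rules, but the shape of the idea is the same: there is no load and no save, only state that is simply still there.

```python
import mmap
import os
import struct

STORE = "persistent.bin"   # stand-in for the machine's non-volatile memory
SIZE = 4096

# Create the "memory" once; after that, it is simply always there.
if not os.path.exists(STORE):
    with open(STORE, "wb") as f:
        f.write(b"\0" * SIZE)

with open(STORE, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)      # appears directly in our address space

    # The application state lives at a fixed offset in that space -- no files, no loading.
    counter, = struct.unpack_from("<Q", mem, 0)
    counter += 1
    struct.pack_into("<Q", mem, 0, counter)
    mem.flush()                            # the "bit of cleverness": flush the caches

    print(f"This program has run {counter} times on this machine.")
```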

This is profoundly different to everything since 1957 or so.

It's also profoundly different from those 1980s 8-bits with their BASIC or Forth or monitor in ROM, because it's all writable.

That is the big change.

In current machines, nobody structures RAM as a filesystem. (I welcome corrections on this point!) Filesystems are for drives. It doesn't matter what kind of drive: hard disk, SSD, SD card, CD, DVD, DVD-RW, whatever. Different filesystems, but all of them need data transferred to and from volatile storage in order to function.

That's going away. The writing is on the wall. The early tech is shipping right now.

What I am asking is, how will it change OS design?

All I am getting back is, "don't be stupid, it won't, OS design is great already, why would we change?"

This is the same attitude the DOS and CP/M app vendors had to Windows.

WordStar and dBase survived the transition from CP/M to MS-DOS.

They didn't survive the far bigger one from MS-DOS to Windows.

WordPerfect Corp and Lotus Corp tried. They still failed and died.

A bigger transition is about to happen. Let's talk about it instead of denying it's coming.

Wed, May. 10th, 2017, 06:43 pm
Daydreaming of alternate universes & a tech marriage made in heaven: BeOS on Acorn


Acorn pulled out of making desktop computers in 1998, when it cancelled the Risc PC 2, the Acorn Phoebe.

The hardware was complete, but the software wasn't. The OS was later finished and released as RISC OS 4, an upgrade for existing Acorn machines, by RISC OS Ltd.

By that era, ARM had lost the desktop performance battle. If Acorn had switched to laptops by then, I think it could have remained competitive for some years longer -- 486-era PC laptops were pretty dreadful. But the Phoebe shows that what Acorn was actually trying to build was a next-generation powerful desktop workstation.

Tragically, I must concede that they were right to cancel it. If there had been a default version with 2 CPUs, upgradable to 4, and had that been followed with 6- and 8-core models, they might have made it, but RISC OS couldn't do that, and Acorn didn't have the resources to rewrite RISC OS so that it could. A dedicated Linux machine in 1998 would have been suicidal -- Linux didn't even have a FOSS desktop in those days. If you wanted a desktop Unix workstation, you still bought a Sun or the like.

(I wish I'd bought one of the ATX cases when they were on the market.)


Wed, May. 3rd, 2017, 04:28 pm
The decline & fall of the last British makes of computer: Acorn & Psion

I was a keen owner and fan of multiple Psion PDAs (personal digital assistants – today, I have a Psion 3C, a 5MX and a Series 7/netBook) and several Acorn desktop computers running RISC OS (I have an A310 and an A5000).

I was bitterly disappointed when the companies exited those markets. Their technology survived, though -- Psion's OS became Symbian, and I had several Symbian devices, including a Sony Ericsson P800, plus two Nokias -- a 7700 and an E90 Communicator. That OS is now dead, but Psion's handhelds still survive -- I'll get to them.

I have dozens of ARM-powered devices, and I have RISC OS Open running on a Raspberry Pi 3.

But despite my regret, both Psion's and Acorn's moves were excellent, sensible, pragmatic business decisions.

How many people used PDAs?

How many people now use smartphones?


Wed, Apr. 19th, 2017, 09:12 pm
The state of the Linux desktop

A summary of where we are and where we might be going next.

Culled from a couple of very lengthy CIX posts.

A "desktop" means a whole rich GUI with an actual desktop -- a background you can put things on, which can hold folders and icons. It also includes an app launcher, a file manager, generally a wastebin or something, accessory apps such as a text editor, calculator, archive manager, etc. It can mount media and show their contents. It can unmount them again. It can burn rewritable media such as CDs and DVDs.

The whole schmole.

Some people don't want this and use something more basic, such as a plain window manager: no file manager (or they use the command line, or pick their own), along with their own choice of text editor and so on, none of it integrated into a single GUI.

This is still a graphical UI, but it may include little or nothing more than window management. Many Unix users want a bunch of terminals and nothing else.

A desktop is an integrated suite, with all the extras, like you get with Windows or a Mac, or back in the day with OS/2 or something.

The Unix GUI stack is as follows:

Fri, Mar. 31st, 2017, 02:58 pm
The art of Sinclair -- in Agile terms, making computers that are "just barely good enough"

So in a thread on CIX, someone was saying that the Sinclair computers were irritating and annoying, cut down too far, cheap and slow and unreliable.

That sort of comment still kinda burns after all these decades.

I was a Sinclair owner. I loved my Spectrums, spent a lot of time and money on them, and still have 2 working ones today.

Yes, they had their faults, but for all those who sneered and snarked at their cheapness and perceived nastiness, *that was their selling point*.

They were working, usable, useful home computers that were affordable.

They were transformative machines, transforming people, lives, economies.

I had a Spectrum not because I massively wanted a Spectrum -- I would have rather had a BBC Micro, for instance -- but because I could afford a Spectrum. Well, my parents could, just barely. A used one.

My 2nd, 3rd and 4th ones were used, as well, because I could just about afford them.

If all that had been available were proper, serious, real computers -- Apples, Acorns, even early Commodores -- I might never have got one. My entire career would never have happened.

A BBC Micro was pushing £350. My used 48K Spectrum was £80.

One of those is doable for what parents probably worried was a kid's toy that might never be used for anything productive. The other was the cost of a car.

Tue, Mar. 28th, 2017, 12:40 am
The successors to the Z80-based micros of the early 1980s which never happened. Or did they?




Although we almost never saw any of them in Europe, there were later models in the Z80 family.

The first successors, the Z8000 (1979, 16-bit) and its later successor the Z80000 (1986, 32-bit), were not Z80-compatible. They did not do well.

Zilog did learn, though, and the contemporaneous Z800, which was Z80-compatible, was renamed the Z280 and relaunched in 1987: 16-bit, with an onboard cache, a very complex instruction set, and the ability to handle 16MB of RAM.

Hitachi did the HD64180 (1985), a faster Z80 with an onboard MMU that could handle 512 kB of RAM. This was licensed back to Zilog as the Z64180.

Then Zilog did the Z180, an enhancement of that, which could handle 1MB RAM & up to 33MHz.

That was enhanced into the Z380 (1994) -- 16/32-bit, 20MHz, but not derived from, and incompatible with, the Z280.

Then came the eZ80, at up to 50MHz: no MMU, but 24-bit registers addressing 16MB of RAM.

Probably the most logical successor was the ASCII Corp R800 (1990), an extended 16-bit Z800-based design, mostly Z80 compatible but double-clocked on a ~8MHz bus for ~16MHz operation.

So, yes, lots of successor models -- but the problem is, too many, too much confusion, and no clear successors. Zilog, in other words, had the same failure as its licensees: it didn't trade on the advantages of its previous products. It did realise this and re-align itself, and it's still around today, but it did so too late.

The 68000 wasn't powerful enough to emulate previous-generation 8-bit processors. Possibly one reason why Acorn went its own way with the ARM, which was fast enough to do so -- the Acorn ARM machines came equipped with an emulator to run 6502 code. It emulated a 6502 "Tube" processor -- i.e. in an expansion box, with no I/O of its own. If your code was clean enough to run on that, you could run it on RISC OS out of the box.

Atari, Commodore, Sinclair and Acorn all abandoned their 8-bit heritage and did all-new, proprietary machines. Acorn even did its own CPU, giving it way more CPU power than its rivals, allowing emulation of the old machines -- not an option for the others, who bought in their CPUs.

Amstrad threw in the towel and switched to PC compatibles. A wise move, in the long view.

The only line that sort of transitioned was MSX.

MSX 1 machines (1983) were so-so, decent but unremarkable 8-bits.

MSX 2 (1985) were very nice 8-bitters indeed, with bank-switching for up to 4MB of RAM and a primitive GPU giving good graphics by Z80 standards. Floppy drives and 128 kB of RAM were commonly fitted as standard.

MSX 2+ (1988) were gorgeous. Some could handle ~6MHz, and the GPU had at least 128 kB of VRAM, so they had serious video capabilities for 8-bit machines -- e.g. 19K colours.

MSX Turbo R (1990) were remarkable. Effectively a ~30MHz 16-bit CPU, 96 kB ROM, 256 kB RAM (some battery-backed), a GPU with its own 128 kB RAM, and stereo sound via multiple sound chips plus MIDI.

As a former Sinclair fan, I'd love to see what a Spectrum built using MSX Turbo R technology could do.


Postscript

Two 6502 lines did transition, kinda sort of.

Apple did the Apple ][GS (1986), with a WDC 65C816 16-bit processor. Its speed was tragically throttled, and the machine was killed off very young so as not to compete with the still-new Macintosh line.

Acorn's Communicator (1985) also had a 65C816, with a ported 16-bit version of Acorn's MOS operating system, BBC BASIC, the View wordprocessor, ViewSheet spreadsheet, Prestel terminal emulator and other components. Also a dead end.

The 65C816 was also available as an add-on for several models in the Commodore 64 family, and there was the GEOS GUI-based desktop to run on it, complete with various apps. Commodore itself never used the chip, though.
