

Fri, Jun. 16th, 2017, 02:48 pm
The death of the filesystem [tech blog post, by me - please comment on LJ, not on Twitter/FB]

I've written a few times about a coming transition in computing -- the disappearance of filesystems, and what effects this will have. I have not had any constructive dialogue with anyone.

So I am trying yet again, by attempting to rephrase this in a historical context:

There have been a number of fundamental transitions in computing over the years.

1st generation

The very early machines didn't have fixed nonvolatile storage: they had short-term temporary storage, such as mercury delay lines or storage CRTs, and read data from offline, non-direct-access, often non-rewritable media, such as punched cards or paper tape.

2nd generation

Hard disks came along in about 1953, and became commercially available in 1957 with the IBM RAMAC...

https://en.wikipedia.org/wiki/IBM_305_RAMAC

Now, there were 2 distinct types of directly-accessible storage: electronic (including core store for the sake of argument) and magnetic.

A relatively small amount of volatile storage, in which the processor can directly work on data, and a large amount of read-write non-volatile storage, whose contents must be transferred into volatile storage for processing. You can't add 2 values in 2 disk blocks without transferring them into memory first.
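To make that concrete, here's a minimal C sketch of the two-level model (the file name and block offsets are made up for illustration): the blocks have to be copied into RAM buffers before the CPU can do anything with the values in them.

/* Two-level store in miniature: to add two values that live in two
 * different disk blocks, both blocks must first be copied into RAM
 * buffers; only then can the CPU do the arithmetic.
 * The file name and block offsets are made up for illustration. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 512

int main(void)
{
    uint8_t block_a[BLOCK_SIZE], block_b[BLOCK_SIZE];   /* volatile RAM buffers */
    uint32_t a, b;

    int fd = open("example.img", O_RDONLY);             /* stand-in for a disk  */
    if (fd < 0)
        return 1;

    /* Step 1: the drive/controller copies whole blocks into RAM... */
    if (pread(fd, block_a, BLOCK_SIZE, 0 * BLOCK_SIZE) != BLOCK_SIZE ||
        pread(fd, block_b, BLOCK_SIZE, 7 * BLOCK_SIZE) != BLOCK_SIZE) {
        close(fd);
        return 1;
    }

    /* Step 2: ...and only now can the processor touch the values. */
    memcpy(&a, block_a, sizeof a);
    memcpy(&b, block_b, sizeof b);
    printf("%u\n", (unsigned)(a + b));

    close(fd);
    return 0;
}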

This is just one of the basic models of computer architecture -- but it has been _the_ single standard architecture for so many decades that we've forgotten there was ever anything else.

[[

Aside:

There was a reversion to machines with no directly-accessible storage in the late 1970s and early-to-mid 1980s, in the form of 8-bit micros with only cassette storage.

The storage was not under computer control, and transfers were manual, one-shot operations: you could load a block, or change the tape and save a block, but in normal use, for most people except the rather wealthy, the computer operated solely on the contents of its RAM and ROM.

Note: no filesystems.

(

Trying to forestall an obvious objection:
Later machines, such as the ZX Spectrum 128 and Amstrad PCW, had RAMdisks, and therefore very primitive filesystems, but that was mainly a temporary stage: their processors couldn't address more than 64 kB of RAM, and their ROMs couldn't be modified to support widespread bank-switching without breaking backwards compatibility.

)

]]

Once all machines have this 2-level-store model, note that the 2 stores are managed differently.

Volatile store is not structured as a filesystem, because its contents are reconstructed on the fly at every boot. It has little to no metadata.

Permanent store needs to have metadata as well as data. The computer is regularly rebooted, and after each reboot it needs to be able to find its way around the non-volatile storage again. Thus, increasingly elaborate systems of indexing.
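As a rough illustration of the kind of indexing metadata involved, here is a grossly simplified C sketch, loosely in the spirit of a classic Unix design -- not any real on-disk format:

/* A grossly simplified sketch of filesystem metadata: enough indexing
 * for the OS to find everything on the non-volatile store again after
 * a reboot. Loosely in the spirit of a classic Unix design; not any
 * real on-disk format. */
#include <stdint.h>

struct inode {                  /* "where is the data, and what is it?" */
    uint32_t size_bytes;        /* length of the file                   */
    uint32_t mtime;             /* when it was last modified            */
    uint16_t mode;              /* file type and permission bits        */
    uint32_t block[12];         /* which disk blocks hold the contents  */
};

struct dir_entry {              /* "what is it called?"                 */
    char     name[28];          /* human-readable file name             */
    uint32_t inode_number;      /* index into the on-disk inode table   */
};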

But the important thing is that filesystems were a solution to a technology issue: managing all that non-volatile storage.

Over the decades it has been overloaded with other functionality: data-sharing between apps, security between users, things like that. It's important to remember that these are secondary functions.

It is near-universal, but that is an artefact of technological limitations: the fast, processor-local storage was volatile, while the non-volatile storage was slow, and bulky enough that it had to be non-local. Non-volatile storage is managed via APIs and discrete hardware controllers, whose main job is transferring blocks of data from volatile to non-volatile storage and back again.

And that distinction is going away.

The technology is rapidly evolving to the point where we have fast, processor-local storage, in memory slots, appearing directly in the CPUs' memory map, which is non-volatile.

Example -- Flash memory DIMMs:

https://www.theregister.co.uk/2015/11/10/micron_brings_out_a_flash_dimm/
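Even today, a programmer can treat this kind of memory as ordinary memory. A minimal C sketch, assuming a Linux box with such a device exposed through a DAX-mounted filesystem (the mount point and file name are hypothetical):

/* What "storage in the CPU's memory map" can already look like to a
 * programmer: on Linux, a file on a DAX-mounted filesystem backed by
 * NVDIMMs can be mmap()ed and then used as ordinary memory.
 * The mount point and file name here are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/counter", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    if (ftruncate(fd, sizeof(uint64_t)) != 0)
        return 1;

    /* From here on, the non-volatile medium is simply part of the
     * process's address space: no read()/write(), no block buffers. */
    uint64_t *counter = mmap(NULL, sizeof(uint64_t),
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (counter == MAP_FAILED)
        return 1;

    (*counter)++;              /* an ordinary store, onto persistent media */

    msync(counter, sizeof(uint64_t), MS_SYNC);   /* make sure it sticks */
    munmap(counter, sizeof(uint64_t));
    close(fd);
    return 0;
}

Real persistent-memory code would use MAP_SYNC, explicit cache flushes or a library such as PMDK's libpmem, but plain mmap() and msync() show the shape of it.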

Now, the non-volatile electronic storage is increasing rapidly in speed and decreasing in price.

Example -- Intel XPoint:

https://arstechnica.com/information-technology/2017/02/specs-for-first-intel-3d-xpoint-ssd-so-so-transfer-speed-awesome-random-io/

Note the specs:

Reads as fast as Flash.
Writes nearly the same speed as reads.
Half the latency of Flash.
100x the write lifetime of Flash.

And this is the very first shipping product.

Intel is promising "1,000 times faster than NAND flash, 10 times denser than (volatile) DRAM, and with 1,000 times the endurance of NAND".

This is a game-changer.

What we are looking at is a new type of computer.

3rd generation

No distinction between volatile and non-volatile storage. All storage appears directly in the CPUs' memory map. There are no discrete "drives" of any kind as standard. Why would there be? You can have 500GB or 1TB of RAM, but if you turn the machine off, then a day later turn it back on, it carries on exactly where it was.

(Yes, the CPU caches are still volatile, so there will need to be a bit of cleverness involving flushing them, or ACID-style writes, or something.)
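For a sense of what that cleverness looks like on current hardware, here's a hedged sketch using the x86 cache-flush intrinsics. It assumes p points into a persistent, memory-mapped region like the one sketched above; real systems would layer failure-atomic, ACID-style updates on top of this primitive.

/* The "bit of cleverness involving flushing", sketched with x86
 * intrinsics: after an ordinary store, the affected cache line has to
 * be pushed out of the (still volatile) CPU caches, and the flush has
 * to be ordered, before the data can be considered durable.
 * Assumes p points into a persistent, memory-mapped region such as the
 * one sketched earlier; real systems add ACID-style logging on top. */
#include <immintrin.h>
#include <stdint.h>

static void persist_u64(uint64_t *p, uint64_t value)
{
    *p = value;                   /* ordinary store into the mapped region */
    _mm_clflush(p);               /* evict that cache line to the medium   */
                                  /* (CLFLUSHOPT/CLWB are the faster,      */
                                  /* weaker-ordered modern variants)       */
    _mm_sfence();                 /* fence: the flush is ordered before    */
                                  /* any later stores                      */
}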

It ships to the user with an OS in that memory.

You turn it on. It doesn't boot.

What is booting? Transferring OS code from non-volatile storage into volatile storage so it can be run. There's no need. It's in the processor's memory the moment it's turned on.

It doesn't boot. It never boots. It never shuts down, either. You may have to tell it you're turning it off, but it flushes its caches and it's done. Power off.

No hibernation: it doesn't need it. The OS and all state data will be there when you come back. No sleep states: just power off.

What is installing an OS or an app? Today, it means transferring it from slow non-volatile storage onto faster non-volatile storage. On this machine, there is no slow or fast non-volatile storage. There's just storage, and all of it that the programmer can see is non-volatile.

This is profoundly different to everything since 1957 or so.

It's also profoundly different from those 1980s 8-bits with their BASIC or Forth or monitor in ROM, because it's all writable.

That is the big change.

In current machines, nobody structures RAM as a filesystem. (I welcome corrections on this point!) Filesystems are for drives. It doesn't matter what kind of drive: hard disk, SSD, SD card, CD, DVD, DVD-RW, whatever. Different filesystems, but all of them need data transferred to and from volatile storage in order to function.

That's going away. The writing is on the wall. The early tech is shipping right now.

What I am asking is, how will it change OS design?

All I am getting back is, "don't be stupid, it won't, OS design is great already, why would we change?"

This is the same attitude the DOS and CP/M app vendors had to Windows.

WordStar and dBase survived the transition from CP/M to MS-DOS.

They didn't survive the far bigger one from MS-DOS to Windows.

WordPerfect Corp and Lotus Corp tried. They still failed and died.

A bigger transition is about to happen. Let's talk about it instead of denying it's coming.

Wed, May. 10th, 2017, 06:43 pm
Daydreaming of alternate universes & a tech marriage made in heaven: BeOS on Acorn


Acorn pulled out of making desktop computers in 1998, when it cancelled the Risc PC 2, the Acorn Phoebe.

The hardware was complete, but the software wasn't. The OS was later finished and released as RISC OS 4, an upgrade for existing Acorn machines, by RISC OS Ltd.

By that era, ARM had lost the desktop performance battle. If Acorn had switched to laptops by then, I think it could have remained competitive for some years longer -- 486-era PC laptops were pretty dreadful. But the Phoebe shows that what Acorn was actually trying to build was a next-generation powerful desktop workstation.

Tragically, I must concede that they were right to cancel it. If there had been a default version with 2 CPUs, upgradable to 4, followed by 6- and 8-core models, they might have made it -- but RISC OS couldn't do that, and Acorn didn't have the resources to rewrite RISC OS so that it could. A dedicated Linux machine in 1998 would have been suicidal -- Linux didn't even have a mature FOSS desktop in those days. If you wanted a desktop Unix workstation, you still bought a Sun or the like.

(I wish I'd bought one of the ATX cases when they were on the market.)

( Read more... )

Wed, May. 3rd, 2017, 04:28 pm
The decline & fall of the last British makes of computer: Acorn & Psion

I was a keen owner and fan of multiple Psion PDAs (personal digital assistants -- today, I have a Psion 3C, a 5MX and a Series 7/netBook) and several Acorn desktop computers running RISC OS (I have an A310 and an A5000).

I was bitterly disappointed when the companies exited those markets. Their legacies still survive, though -- Psion's OS became Symbian, and I had several Symbian devices, including a Sony Ericsson P800, plus two Nokias -- a 7700 and an E90 Communicator. That OS is now dead, but Psion's handhelds still survive -- I'll get to them.

I have dozens of ARM-powered devices, and I have RISC OS Open running on a Raspberry Pi 3.

But despite my regret, both Psion's and Acorn's moves were excellent, sensible, pragmatic business decisions.

How many people used PDAs?

How many people now use smartphones?

( Read more... )

Wed, Apr. 19th, 2017, 09:12 pm
The state of the Linux desktop

A summary of where we are and where we might be going next.

Culled from a couple of very lengthy CIX posts.

A "desktop" means a whole rich GUI with an actual desktop -- a background you can put things on, which can hold folders and icons. It also includes an app launcher, a file manager, generally a wastebin or something, accessory apps such as a text editor, calculator, archive manager, etc. It can mount media and show their contents. It can unmount them again. It can burn rewritable media such as CDs and DVDs.

The whole schmole.

Some people don't want this and use something more basic, such as a plain window manager: no file manager (or they use the command line, or pick one of their own), plus their own choice of text editor and so on, none of which are integrated into a single GUI.

This is still a GUI, still a graphical UI, but may include little or nothing more than window management. Many Unix users want a bunch of terminals and nothing else.

A desktop is an integrated suite, with all the extras, like you get with Windows or a Mac, or back in the day with OS/2 or something.

The Unix GUI stack is as follows:
( Read more... )

Fri, Mar. 31st, 2017, 02:58 pm
The art of Sinclair -- in Agile terms, making computers that are "just barely good enough"

So in a thread on CIX, someone was saying that the Sinclair computers were irritating and annoying, cut down too far, cheap and slow and unreliable.

That sort of comment still kinda burns after all these decades.

I was a Sinclair owner. I loved my Spectrums, spent a lot of time and money on them, and still have 2 working ones today.

Yes, they had their faults, but for all those who sneered and snarked at their cheapness and perceived nastiness, *that was their selling point*.

They were working, usable, useful home computers that were affordable.

They were transformative machines, transforming people, lives, economies.

I had a Spectrum not because I massively wanted a Spectrum -- I would have rather had a BBC Micro, for instance -- but because I could afford a Spectrum. Well, my parents could, just barely. A used one.

My 2nd, 3rd and 4th ones were used, as well, because I could just about afford them.

If all that had been available were proper, serious, real computers -- Apples, Acorns, even early Commodores -- I might never have got one. My entire career would never have happened.

A BBC Micro was pushing £350. My used 48K Spectrum was £80.

One of those was doable for what parents probably worried was a kid's toy that might never be used for anything productive. The other was the cost of a car.
( Read more... )

Tue, Mar. 28th, 2017, 12:40 am
The successors to the Z80-based micros of the early 1980s which never happened. Or did they?




Although we almost never saw any of them in Europe, there were later models in the Z80 family.

The first successors, the Z8000 (1979, 16-bit) and its later successor the Z80000 (1986, 32-bit), were not Z80-compatible. They did not do well.

Zilog did learn, though, and the contemporaneous Z800, which was Z80 compatible, was renamed the Z280 and relaunched in 1987. 16-bit, onboard cache, very complex instruction set, could handle 16MB RAM.

Hitachi did the HD64180 (1985), a faster Z80 with an onboard MMU that could handle 512 kB of RAM. This was licensed back to Zilog as the Z64180.

Then Zilog did the Z180, an enhancement of that, which could handle 1MB RAM & up to 33MHz.

That was enhanced into the Z380 (1994) -- 16/32-bit, 20MHz, but neither derived from nor compatible with the Z280.

Then came the eZ80, at up to 50MHz: no MMU, but 24-bit registers addressing 16MB of RAM.

Probably the most logical successor was the ASCII Corp R800 (1990), an extended 16-bit Z800-based design, mostly Z80 compatible but double-clocked on a ~8MHz bus for ~16MHz operation.

So, yes, lots of successor models -- but the problem is, there were too many, too much confusion, and no one clear successor. Zilog, in other words, made the same mistake as its licensees: it didn't trade on the advantages of its previous products. It did realise this and re-align itself, and it's still around today, but it did so too late.

The 68000 wasn't powerful enough to emulate previous-generation 8-bit processors. That is possibly one reason why Acorn went its own way with the ARM, which was fast enough to do so -- the Acorn ARM machines came equipped with an emulator to run 6502 code. It emulated a 6502 "Tube" second processor -- i.e. one in an expansion box, with no I/O of its own. If your code was clean enough to run on that, you could run it on RISC OS out of the box.

Atari, Commodore, Sinclair and Acorn all abandoned their 8-bit heritage and did all-new, proprietary machines. Acorn even did its own CPU, giving it way more CPU power than its rivals, allowing emulation of the old machines -- not an option for the others, who bought in their CPUs.

Amstrad threw in the towel and switched to PC compatibles. A wise move, in the long view.

The only line that sort of transitioned was MSX.

MSX 1 machines (1983) were so-so, decent but unremarkable 8-bits.

MSX 2 machines (1985) were very nice 8-bitters indeed, with bank-switching for up to 4MB of RAM and a primitive GPU giving good graphics by Z80 standards. Floppy drives and 128 kB of RAM were commonly fitted as standard.

MSX 2+ machines (1988) were gorgeous. Some could handle ~6MHz, and the GPU had at least 128 kB of VRAM, so they had serious video capabilities for 8-bit machines -- e.g. 19K colours.

MSX Turbo R machines (1990) were remarkable: effectively a ~30MHz 16-bit CPU, 96 kB of ROM, 256 kB of RAM (some of it battery-backed), a GPU with its own 128 kB of RAM, and stereo sound via multiple sound chips plus MIDI.

As a former Sinclair fan, I'd love to see what a Spectrum built using MSX Turbo R technology could do.


Postscript

Two 6502 lines did transition, kinda sortof.

Apple did the Apple ][GS (1986), with a WDC 65C816 16-bit processor. Its speed was tragically throttled and the machine was killed off very young, so as not to compete with the still-new Macintosh line.

Acorn's Communicator (1985) also had a 65C816, with a ported 16-bit version of Acorn's MOS operating system, BBC BASIC, the View wordprocessor, ViewSheet spreadsheet, Prestel terminal emulator and other components. Also a dead end.

The 65C816 was also available as an add-on for several models in the Commodore 64 family, and there was the GEOS GUI-based desktop to run on it, complete with various apps. Commodore itself never used the chip, though.

Mon, Mar. 6th, 2017, 02:37 pm
Follow-up: the family links between DOS, OS/2, NT and VMS

My previous post was an improvised and unplanned comment. I could have structured it better, and it caused some confusion on https://lobste.rs/

Dave Cutler did not write OS/2. AFAIK he never worked on OS/2 at all in the days of the MS-IBM pact -- he was still at DEC then.

Many sources focus on only one side of the story -- the DEC side. This is important, but it's only half the tale.

IBM and MS got very rich working together on x86 PCs and MS-DOS. They carefully planned its successor: OS/2. IBM placed restrictions on this which crippled it, but it wasn't apparent at the time just how bad this would turn out to be.

In the early-to-mid 1980s, it seemed apparent to everyone that the most important next step in microcomputers would be multitasking.

Even small players like Sinclair thought so -- the QL was designed as the first cheap 68000-based home computer. No GUI, but multitasking.

I discussed this a bit in a blog post a while ago: http://liam-on-linux.livejournal.com/46833.html

Apple's Lisa was a sideline: too expensive. Nobody picked up on its true significance.

Then, 2 weeks after the QL, came the Mac. Everything clever but expensive in the Lisa stripped out: no multitasking, little RAM, no hard disk, no slots or expansion. All that was left was the GUI. But that was the most important bit, as Steve Jobs saw and nobody much else did.

So, a year later, the ST had a DOS-like OS with a bolted-on GUI. No command-line shell, just the GUI. A fast-for-the-time CPU, no fancy custom chips, and it did great. It had the original, uncrippled version of DR GEM; Apple's lawsuit meant that PC GEM was crippled: no overlapping windows, no desktop drive icons or trashcan, etc.

( Read more... )

Sat, Mar. 4th, 2017, 04:20 pm
The family link between OS/2 and Windows NT

Windows NT was allegedly partly developed on OS/2. Many MSers loved OS/2 at the time -- they had co-developed it, after all. But there was more to it than that.

Windows NT was partly based on OS/2. There were 3 branches of the OS/2 codebase:

[a] OS/2 1.x – at IBM’s insistence, for the 80286. The mistake that doomed OS/2 and IBM’s presence in the PC industry, the industry it had created.

[b] OS/2 2.x – IBM went it alone with the 80386-specific version.

[c] OS/2 3.x – Portable OS/2, planned to be ported to multiple different CPUs.

After the “divorce”, MS inherited Portable OS/2. It was a skeleton and a plan. Dave Cutler was hired from DEC, which had refused to allow him to pursue his PRISM project for a modern CPU and a successor to VMS to run on it. Cutler was given the Portable OS/2 project to complete. He did, fleshing it out with concepts and plans derived from his experience with VMS and his designs for PRISM.

( Read more... )

Wed, Mar. 1st, 2017, 05:15 pm
I was challenged to write something positive about computing research, with concrete suggestions.

When was the last time you saw a critic write a play, compose a symphony, carve a statue?

I've seen a couple of attempts. I thought they were dire, myself. I won't name names (or media), as these are friends of friends.

Some concrete examples. I have given dozens on liam-on-linux.livejournal.com, but I wonder if I can summarise.

[1]

Abstractions. Some of our current core conceptual models are poor. Bits, bytes, directly accessing and managing memory.

If the programmer needs to know whether they are on a 32-bit or 64-bit processor, or whether it's big-endian or little-endian, the design is broken.
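A concrete example of that kind of leak, in C: a perfectly ordinary pointer cast exposes the byte order of the underlying machine, so the programmer is forced to care.

/* The kind of leak the complaint is about: an ordinary pointer cast
 * makes the byte order of the underlying machine visible, so the
 * programmer is forced to care about a hardware detail. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t word = 1;
    uint8_t *bytes = (uint8_t *)&word;     /* peek at the raw byte layout */

    /* Prints 1 on a little-endian machine, 0 on a big-endian one. */
    printf("first byte in memory: %u\n", (unsigned)bytes[0]);
    return 0;
}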

Higher-level abstractions have been implemented and sold. This is not a pipedream.

One that seems to work is atoms and lists. That model has withstood nearly 60 years of competition and it still thrives in its niche. It's underneath Lisp and Scheme, but also several languages far less arcane, and more recently, Urbit with Nock and Hoon. There is room for research here: work out a minimal abstraction set based on list manipulation and tagged memory, and find an efficient way to implement it, perhaps at microcode or firmware level.
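As a toy illustration of the shape of that model -- written in C purely to show the idea of tagged cells and lists, not as the microcode-level design the research would aim for:

/* Toy illustration of the "atoms and lists" model: every value is a
 * tagged cell, and all structure is built from pairs of cells.
 * Purely to show the shape of the idea in C; the interesting research
 * question is doing this at the microcode or firmware level. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { TAG_NIL, TAG_ATOM, TAG_PAIR } tag_t;

typedef struct cell {
    tag_t tag;                                    /* the "tagged memory" part    */
    union {
        long atom;                                /* an atom: here just a number */
        struct { struct cell *car, *cdr; } pair;  /* a list node                 */
    } u;
} cell;

static cell nil = { TAG_NIL };

static cell *atom(long n)
{
    cell *c = malloc(sizeof *c);
    c->tag = TAG_ATOM;
    c->u.atom = n;
    return c;
}

static cell *cons(cell *car, cell *cdr)
{
    cell *c = malloc(sizeof *c);
    c->tag = TAG_PAIR;
    c->u.pair.car = car;
    c->u.pair.cdr = cdr;
    return c;
}

static void print_flat_list(const cell *c)        /* assumes a list of atoms */
{
    for (; c->tag == TAG_PAIR; c = c->u.pair.cdr)
        printf("%ld ", c->u.pair.car->u.atom);
    printf("\n");
}

int main(void)
{
    /* The list (1 2 3), built from nothing but atoms and pairs. */
    print_flat_list(cons(atom(1), cons(atom(2), cons(atom(3), &nil))));
    return 0;
}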

( Read more... )

Mon, Feb. 13th, 2017, 07:51 pm
USB C. Everyone's complaining. I can't wait. I still hope for cable nirvana.

Things have been getting better for a while now. For smaller gadgets, micro-USB is now the standard charging connector. Cables are becoming a consumable for me, but they're cheap and easy to find.

But it only goes in one way round, and it's hard to see -- or to tell by feel -- which way that is. And not all my gadgets want it the same way round, meaning I have to either remember or peer at a tiny socket and try to guess.

So conditions were right for an either-way-round USB connector.


( Read more... )
