
Wed, Apr. 27th, 2016, 07:06 pm
Where did we all go wrong? And why doesn't anyone remember? [Tech blog post]

My contention is that a large part of the reason that we have the crappy computers that we do today -- lowest-common-denominator boxes, mostly powered by one of the kludgiest and most inelegant CPU architectures of the last 40 years -- is not technical, nor even primarily commercial or driven by business pressures; rather, it's cultural.

When I was playing with home micros (mainly Sinclair and Amstrad; the American stuff was just too expensive for Brits in the early-to-mid 1980s), the culture was that Real Men programmed in assembler and the main battle was Z80 versus 6502, with a few weirdos saying that 6809 was better than either. BASIC was the language for beginners, and a few weirdos maintained that Forth was better.

At university, I used a VAXcluster and learned to program in Fortran-77. The labs had Acorn BBC Micros in -- solid machines, the best 8-bit BASIC ever, and they could interface both with lab equipment over IEEE-488 and with generic printers and so on over Centronics parallel and its RS-423 interface [EDIT: fixed!], which could talk to RS-232 kit.

As I discovered when I moved into the professional field a few years later (1988), this wasn't that different from the pro stuff. A lot of apps were written in various BASICs, and in the old era of proprietary OSes on proprietary kit, for performance, you used assembler.

But a new wave was coming. MS-DOS was already huge and the Mac was growing strongly. Windows was on v2 and was a toy, but Unix was coming to mainstream kit, or at least affordable kit. You could run Unix on PCs (e.g. SCO Xenix), on Macs (A/UX), and my employers had a demo IBM RT-6150 running AIX 1.

Unix wasn't only the domain (pun intentional) of expensive kit priced in the tens of thousands.

A new belief started to spread: that if you used C, you could get near-assembler performance without the pain, and the code could be ported between machines. DOS and Mac apps started to be written (or rewritten) in C, and some were even ported to Xenix. In my world, nobody used stuff like A/UX or AIX, and Xenix was specialised. I was aware of Coherent as the only "affordable" Unix, but I never saw a copy or saw it running.

So this second culture of C code running on non-Unix OSes appeared. Then the OSes started scrambling to catch up with Unix -- first OS/2, then Windows 3, then, for a decade, the parallel universe of Windows NT, until XP became established and Win9x finally died. Meanwhile, Apple and IBM flailed around, until IBM surrendered and Apple merged with NeXT and switched to NeXTstep.

Now, Windows is evolving to be more and more Unix-like, with GUI-less versions, clean(ish) separation between GUI and console apps, a new rich programmable shell, and so on.

Meanwhile, the Mac is now a Unix box, albeit a weird one.

Commercial Unix continues to wither away. OpenVMS might make a modest comeback. IBM mainframes seem to be thriving; every other kind of big iron is now emulated on x86 kit, as far as I can tell. IBM has successfully killed off several efforts to do this for zSeries.

So now, it's Unix except for the single remaining mainstream proprietary system: Windows. Unix today means Linux, while the weirdos use FreeBSD. Everything else seems to be more or less a rounding error.

C always was like carrying water in a sieve, so now, we have multiple C derivatives, trying to patch the holes. C++ has grown up but it's like Ada now: so huge that nobody understands it all, but actually, a fairly usable tool.

There's the kinda-sorta FOSS "safe C++ in a VM", Java. The proprietary kinda-sorta "safe C++ in a VM", C#. There's the not-remotely-safe kinda-sorta C in a web browser, JavaScript.

And dozens of others, of course.

Even the safer ones run on a basis of C -- so the lovely cuddly friendly Python, that everyone loves, has C's weird printf-style formatting semantics to mess up the heads of beginners.
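A minimal illustration (the values are arbitrary):

    # Python inherits printf-style formatting straight from C: the "%"
    # string operator takes the same conversion specifiers as C's printf().
    temp = 21.34567
    print("Temperature: %05.1f degrees" % temp)   # -> Temperature: 021.3 degrees
    print("%d items at %s each" % (3, "50p"))     # -> 3 items at 50p each
    # A beginner who has never seen C has no frame of reference for why
    # "%05.1f" means "zero-padded, minimum width 5, one decimal place".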

Perl abandoned its base and planned a move onto a VM; then that VM (Parrot) went wrong, a new VM (MoarVM) replaced it, and now, to general amazement and general lack of interest, Perl 6 is finally here.

All the others are still implemented in C, mostly on a Unix base, like Ruby, or on a JVM base, like Clojure and Scala.

So they still have C-like holes, and there are frequent patches and updates to try to make them retain some water for a short while, while the "cyber criminals" make hundreds of millions.

Anything else is "uncommercial" or "not viable for real world use".

Borland totally dropped the ball and lost a nice little earner in Delphi, but it continues as Free Pascal and so on.

Apple goes its own way, but has forgotten the truly innovative projects it had pre-NeXT, such as Dylan.

There were real projects that were actually used for real work, like Oberon the OS, written in Oberon the language. Real pioneering work in UIs, such as Jef Raskin's machines, the original Mac and Canon Cat -- forgotten. People rhapsodise over the Amiga and forget that the planned OS, CAOS, to be as radical as the hardware, never made it out of the lab. Same, on a smaller scale, with the Acorn Archimedes.

Despite that, of course, Lisp never went away. People still use it, but they keep their heads down and get on with it.

Much the same applies to Smalltalk. Still there, still in use, still making real money and doing real work, but forgotten all the same.

The Lisp Machines and Smalltalk boxes lost the workstation war. Unix won, and as history is written by the victors, now the alternatives are forgotten or dismissed as weird kooky toys of no serious merit.

The senior Apple people didn't understand the essence of what they saw at PARC: they only saw the chrome. They copied the chrome, not the essence, and now all that any of us have is the chrome. We have GUIs, but on top of the nasty kludgy hacks of C and the like. A late-'60s skunkworks project now runs the world, and the real serious research efforts to make something better, both before and after it, are forgotten historical footnotes.

Modern computers are a vast disappointment to me. We have no thinking machines. The Fifth Generation, Lisp, all that -- gone.

What did we get instead?

Like dinosaurs, the expensive high-end machines of the '70s and '80s didn't evolve into their successors. They were just replaced. First came little cheapo 8-bits, not real or serious at all -- but they were cheap, and people did serious stuff with them, because they were all most people could afford. The early 8-bits ran semi-serious OSes such as CP/M, but when their descendants sold a thousand times more, those descendants weren't running descendants of that OS -- no, it and its creator died.

CP/M evolved into a multiuser multitasking 386 OS that could run multiple MS-DOS apps on terminals, but it died.

No: the cheapo 8-bits instead thrived in the form of an 8/16-bit hybrid, the 8086 and 8088, running a cheapo knock-off of CP/M.

This got a redesign into something grown-up: OS/2.

Predictably, that died.

So the hacked-together GUI for DOS got re-invigorated with an injection of OS/2 code, as Windows 3. That took over the world.

The rivals -- the Amiga, ST, etc.? 680x0 chips, lots of flat memory, whizzy graphics and sound? All dead.

Then Windows got re-invented with some OS/2 3.0 ideas and code, and some from VMS, and we got Windows NT.

But the marketing men got to it and ruined its security and elegance to produce the lipstick-and-high-heels Windows XP. That version -- insecure and flaky, with its terrible bodged-in browser -- was, of course, the one that sold.

Linux got nowhere until it copied the XP model. The days of small programs, everything's-a-text-file, etc. -- all forgotten. Nope: lumbering GUI apps, CORBA and RPC and other weird plumbing, huge complex systems -- but it looks and works kinda like Windows and the Mac now, so people use it.

Android looks kinda like iOS and people use it in their billions. Newton? Forgotten. No, people have Unix in their pocket, only it's a bloated successor of Unix.

The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten. A proprietary microkernel Unix-like OS for phones -- Blackberry 10, based on QNX -- not Androidy enough, and bombed.

We have less and less choice, made from worse parts on worse foundations -- but it's colourful and shiny and the world loves it.

That makes me despair.

We have poor-quality tools, built on poorly-designed OSes, running on poorly-designed chips. Occasionally, fragments of older better ways, such as functional-programming tools, or Lisp-based development environments, are layered on top of them, but while they're useful in their way, they can't fix the real problems underneath.

Occasionally someone comes along and points this out and shows a better way -- such as Curtis Yarvin's Urbit. Lisp Machines re-imagined for the 21st century, based on top of modern machines. But nobody gets it, and its programmer has some unpleasant and unpalatable ideas, so it's doomed.

And the kids who grew up after C won the battle deride the former glories, the near-forgotten brilliance that we have lost.

And it almost makes me want to cry sometimes.

We should have brilliant machines now, not merely Steve Jobs' "bicycles for the mind", but Gossamer Albatross-style hang-gliders for the mind.

But we don't. We have glorified 8-bits. They multitask semi-reliably, they can handle sound and video and 3D and look pretty. On them, layered over all the rubbish and clutter and bodges and hacks, inspired kids are slowly brute-forcing machines that understand speech, which can see and walk and drive.

But it could have been so much better.

Charles Babbage didn't finish the Difference Engine. Finishing it would have paid for him to build his Analytical Engine, and that would have given the Victorian British Empire the steam-driven computer, which would have transformed history.

But he got distracted and didn't deliver.

We started to build what a few old-timers remember as brilliant machines, machines that helped their users to think and to code, with brilliant -- if flawed -- software written in the most sophisticated computer languages yet devised (by the acclaim of the people who really know this stuff): Lisp and Smalltalk.

But we didn't pursue them. We replaced them with something cheaper -- with Unix machines, an OS only a nerd could love. And then we replaced the Unix machines with something cheaper still -- the IBM PC, a machine so poor that the £125 ZX Spectrum had better graphics and sound.

And now, we all use descendants of that: a machine generally acknowledged as one of the poorest, most-compromised machines, based on descendants of one of the poorest, most-compromised CPUs.

Yes, over the 40 years since then, most of the rough edges have been polished out. The machines are now small, fast and power-frugal, with tons of memory and storage, with great graphics and sound. But it's taken decades to get here.

And the OSes have developed. Now they're feature-rich, fairly friendly, really very robust considering the stone-age stuff they're built from.

But if we hadn't spent 3 or 4 decades making a pig's ear into a silk purse -- if we'd started with a silk purse instead -- where might we have got to by now?

Wed, Apr. 27th, 2016 09:26 pm (UTC)
waistcoatmark

A sig I've seen at the big G goes: "brute force done fast enough looks slick".

I am very interested by Rust: they've got some very nice ideas there. Alas I can't see it making its way to my work machine any time soon :-(

Sat, Apr. 30th, 2016 06:25 pm (UTC)
liam_on_linux

Good point. I do need to learn more about stuff like Rust and Go.

But I don't expect to be seeing new OSes written in them. Ever, really. Does anyone?

Thu, Apr. 28th, 2016 02:55 am (UTC)
(Anonymous): ...

What an absolute load of rambling horsesh1t.

Thu, Apr. 28th, 2016 10:53 pm (UTC)
liam_on_linux: Re: ...

Thanks for the feedback. You could at least have had the guts to sign in.

Thu, Apr. 28th, 2016 06:23 am (UTC)
drplokta

Is the ARM CPU architecture really that bad? I thought it was pretty good compared to minority architectures like Intel x386 and x64.

Thu, Apr. 28th, 2016 10:00 am (UTC)
jmtd: It's complicated.

I think the answer is complicated. There's ARM and then there's ARM. I recall someone who knows better than I do complaining that ARMv7 (or v6, perhaps) was hardly "RISC" any more. Then the 64-bit ARMv8 is a new beast entirely.

Thu, Apr. 28th, 2016 10:59 pm (UTC)
liam_on_linux: Re: It's complicated.

Yes, that's my impression.

ARM's great virtue was extreme simplicity, which, as seems inevitable with RISC over time, is less true now.

Sadly, the best insight I have had into this was a talk by none other than Sophie Wilson herself at ROUGOL.

I don't have a precise recollection. IIRC:

She had ideas and plans for what she considered a proper, real, clean 64-bit ARM, but she now works for Broadcom, not ARM. She seemed not to really approve of what ARM was doing -- she just felt it a bit inelegant -- but she has no input any more. (This was before the 64-bit extensions, of which she would not have approved, I suspect.)

She said that the actual next-gen ARM, or the closest thing that there would ever be, was Firepath. This is her main brainchild at Broadcom.

She said that if it'd still been under the same roof as ARM, Firepath could have been more ARM-like, but it would not have been a compatible device and she never meant it to be. Firepath builds on lessons she learned from ARM, but was never intended to be a general-purpose desktop CPU.

There's a little info here, which TBH goes over my head:

http://everything2.com/title/FirePath

Thu, Apr. 28th, 2016 07:03 am (UTC)
uon

Where could we have got to by now if more plants had adopted the C4 pathway?

Fri, Apr. 29th, 2016 01:20 pm (UTC)
liam_on_linux

As ever, a terse response, freighted with layers of meaning.

I could answer several of them, but perhaps the most salient is this.

Some plants /do/ use C4 rather than C3 or CAM. (CAM was always my favourite, but then, I like succulents. You may remember my struggles to keep one alive in your old room. A tiny scion of that line of jade plants is now thriving on my windowsill in Brno.)

The point is, there's room for more than one form of photosynthesis in nature. Some have advantages for some roles, niches, ecosystems; others fit better in different ones.

I think the same could be true of OSes. These days, if the apps were written in high-level scripting languages, then they could run on multiple quite different platforms. Let's say they ran inside a JVM, or MoarVM, or perhaps even that Parrot got resurrected and they ran on that.

Or, to play it a bit more technologically conservatively, they could run in containers, against some standard Unix-like API, and not need to care what that container was /actually/ hosted on. A binary-compatible POSIX for C21.

OTOH, for client-end devices, so long as the device can provide a proper rich web experience, that would be enough for a fair bit of stuff these days.

Sat, Apr. 30th, 2016 08:18 pm (UTC)
uon

Some plants do use C4, and some code is written in languages other than C!

Any time you find yourself meditating on the fate of lisp, it's probably worth rereading the usual essay: at some point in the past, in certain environments, the average programmer could just get more efficient code out the door using C. You could have shipped something neater and cleverer, but you'd ship it after your competitor and it would have needed a beefier machine with more memory.

During the time when, in an alternative reality, people were obsessing over rewriting the world over again in shiny shiny lisp and then maybe trying to squeeze it down to run on machines owned by mere mortals, in this reality the basic infrastructure of the internet was being written in the (then) most straightforward language/OS combo for doing so. Is it really worth sweating the detail of your basic photosynthesis process when you can grow a rainforest with what you've got?

To take issue with specific points:
"C always was like carrying water in a sieve"

I assure you that C/Unix is a programmer's dream compared to plenty of other environments, particularly of the past.
"And the kids who grew up after C won the battle deride the former glories, the near-forgotten brilliance that we have lost."

This is some real actual bullshit. It's pretty common nowadays to write frontend code using closures and continuation passing style, which is about as lispy and un-C-like as you can get, short of having a shrine to John McCarthy in your house.
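(A minimal sketch of the style -- in Python rather than frontend JavaScript, but the shape is the same: nothing returns, every function hands its result to a closure. The function names are made up for the example.)

    # Continuation-passing style: each function takes an extra
    # "continuation" argument and calls it with the result.
    def add_cps(x, y, k):
        k(x + y)

    def square_cps(x, k):
        k(x * x)

    # Compute (3 + 4) squared by chaining continuations:
    add_cps(3, 4, lambda s: square_cps(s, lambda r: print("result:", r)))
    # -> result: 49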

The great thing about programming is that you can still go back and revive old evolutionary dead-ends once it looks worthwhile. There's a huge revival of interest in functional programming and immutable data structures right now, which all comes back to Lisp; and by "revival of interest" I don't mean "cool kids do stuff in it and tweet about it", I mean "I know of massive corporations, whose core business is not software, with ten-digit IT budgets, who are using this stuff". Often indeed running on JVMs -- for example Clojure or Scala. Remember Occam? The Transputer? At their core was a theory from the '70s called CSP (Communicating Sequential Processes), which made its way, via Plan 9, into Google's Go language, and is the basis for plenty of concurrent designs today.
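(And a toy sketch of the CSP idea, again in Python: a Queue standing in for the synchronous channel that Occam and Go build into the language. The names here are arbitrary.)

    # CSP in miniature: two sequential processes communicating only
    # through a channel, never through shared mutable state.
    import threading
    import queue

    channel = queue.Queue()

    def producer():
        for i in range(5):
            channel.put(i)        # "send" on the channel
        channel.put(None)         # sentinel: end of stream

    def consumer():
        while True:
            item = channel.get()  # "receive" -- blocks until a value arrives
            if item is None:
                break
            print("got", item)

    workers = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()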

I really love your "look at all the weird and wonderful languages and OSes of the past" posts, except for the whole fall-from-grace tone of them -- if anything, this is the best time ever to be interested in weird software design, because it's so easy to do now. I really don't understand why you're so despondent about this.

Your paintings are stuck; you are stuck. Stuck, stuck, stuck!



Thu, Apr. 28th, 2016 10:02 am (UTC)
jmtd

I really enjoyed this but it manages to ignore that Linux (for all its plusses/minuses) was the dominant OS in server-land long before Windows XP came along. However, much of the stuff you complain about for modern Desktop Linux (CORBA-style stuff etc) is slowly working its way down the stack into the server (dbus).

Sat, Apr. 30th, 2016 06:21 pm (UTC)
liam_on_linux

Was Linux so dominant in servers before the mid-2000s?

I am not contradicting you. I don't know. I didn't see much in my world. I put in a few servers, quite a lot of firewalls for a while (Smoothwall FTW), but I very rarely encountered anyone else doing so.

As for the creeping integration across layers -- well, I guess that in a way this is an example of what I compared to creeping WinXP-isation. It's one that hadn't occurred to me, TBH.

Thu, Apr. 28th, 2016 11:03 am (UTC)
(Anonymous)

Great article, and I feel very much the same about a lot of what you said, but I gotta disagree with you about Urbit. Have you tried actually running it? It takes >4GB of RAM and then crashes; you need more than that to use it. How much RAM did the best game you ever played on the Amiga need?

It's a bloated pig built on a ridiculously obfuscated language designed to mystify people and make them think there's some magic or genius behind it that there really isn't. Don't get me wrong -- I love the Lisp programming language... but I grew out of defining integers as Peano expressions long ago.

It's loads of fun to think about, and it would be a perfect centrepiece to a Neal Stephenson novel, but the reason it's never going to take off isn't the systematic social attack on its creator's character. It's because it's yet another piece of Mr Creosote software without any well-defined scope, which uses up boundless resources piling abstraction after abstraction on top of garbage to try to hide it -- exactly unlike those little Unix tools that took us so far.

Thu, Apr. 28th, 2016 11:32 pm (UTC)
liam_on_linux

Thanks!

No, I've not tried it, and I was never an Amiga gamer. Got my A1200 in about 2000. I was an Archimedes fan back in the day, but mainly spent my time generating fractals in BBC BASIC V. Was starting to explore animating them and drawing them in 3D when I got a job.

Thu, Apr. 28th, 2016 11:38 pm (UTC)
liam_on_linux

No, I've not tried Urbit. (Yet.)

But my impression is this:

It's not obfuscatory just for the hell of it. It is obfuscatory, yes, but for a valid reason: he doesn't want to waste time explaining or supporting it. It's hard because you need to be very, very bright to fathom it; obscurity is a user filter.

He claims NOT to be a Lisp type, not to have known anything much about the language or LispMs, & to have re-invented some of the underlying ideas independently. I'm not sure I believe this.

My view of it from a technical perspective is this. (This may sound over-dramatic.)

We are so mired in the C world that modern CPUs are essentially C machines. The conceptual model of C, of essentially all compilers, OSes, imperative languages, &c. is a flawed one -- it is too simple an abstraction. Q.v. http://www.loper-os.org/?p=55

The LispM model was a better one, because it's slightly (in the Saint-Exupéry sense) richer. I.e.:

"…perfection is attained not when there is nothing more to add, but when there is nothing more to remove."

Instead of bytes & blocks of them, the basic unit is the list. Operations are defined in terms of lists, not bytes. You define a few very simple operations & that's all you need.

http://stackoverflow.com/questions/3482389/how-many-primitives-does-it-take-to-build-a-lisp-machine-ten-seven-or-five
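(To make that concrete, a toy sketch in Python -- cons cells modelled as 2-tuples, nil as None; quote and cond are special forms that live in the evaluator, so they are not shown:)

    # The classic Lisp primitives; everything else can be bootstrapped
    # from these few operations on pairs and atoms.
    def cons(a, d): return (a, d)     # build a pair
    def car(p):     return p[0]       # first element of a pair
    def cdr(p):     return p[1]       # the rest
    def atom(x):    return not isinstance(x, tuple)
    def eq(a, b):   return atom(a) and atom(b) and a == b

    # A list is a chain of cons cells ending in nil (None here):
    xs = cons(1, cons(2, cons(3, None)))   # the list (1 2 3)
    print(car(xs))                         # -> 1
    print(car(cdr(xs)))                    # -> 2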

The way LispMs worked, AIUI, is that the machine language wasn't Lisp, it was something far simpler, but designed to map onto Lisp concepts.

I have been told that modern CPU design & optimisations & so on map really poorly onto this set of primitives. That LispM CPUs were stack machines, but modern processors are register machines. I am not competent to judge the truth of this.

If Yarvin's claims are to be believed, he has done 2 intertwined things:

#1 Experimentally or theoretically worked out something akin to these primitives.
#2 Found or worked out a way to map them onto modern CPUs.

This is his "machine code". Something that is not directly connected or associated with modern CPUs' machine languages. He has built something OTHER but defined his own odd language to describe it & implement it. He has DELIBERATELY made it unlike anything else so you don't bring across preconceptions & mental impurities. You need to start over.

The basic layer is both foundation & insulation. It's technological insulation, a barrier between the byte machine underneath & the list-oriented layer on top. It's also conceptual insulation, to make you re-learn how to work.

But, as far as I can judge, the design is sane, clean, & I am taking it that he has reasons for the weirdness. I don't think it's gratuitous.

So what on a LispM was the machine language, in Urbit, is Nock. It's a whole new machine language layer, placed on top of an existing OS stack, so I'm not surprised if it's horrendously inefficient.

Compare with Ternac, a trinary computer implemented as a simulation on a binary machine. It's that big a change. https://en.wikipedia.org/wiki/Ternac
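(A toy illustration of that sort of simulation -- not Ternac itself, just balanced ternary, the digit system of ternary machines, done in software on a binary machine. Every single digit goes through a translation layer, which is part of why such simulations are slow:)

    # Balanced ternary uses digits -1, 0, +1. Encode/decode an integer,
    # least-significant digit first.
    def to_balanced_ternary(n):
        digits = []
        while n != 0:
            r = n % 3
            if r == 2:               # digit 2 becomes -1, with a carry
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits or [0]

    def from_balanced_ternary(digits):
        return sum(d * 3**i for i, d in enumerate(digits))

    print(to_balanced_ternary(8))             # -> [-1, 0, 1], i.e. -1 + 0*3 + 1*9
    print(from_balanced_ternary([-1, 0, 1]))  # -> 8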

Then, on top of this layer, he's built a new type of OS. This seems to have conceptual & architectural analogies with LispM OSes such as Genera. Only Yarvin claims not to be a Lisper, so he's re-invented that wheel. That is Hoon.

But he has an Agenda.

Popehat explained it well here:
https://popehat.com/2013/12/06/nock-hoon-etc-for-non-vulcans-why-urbit-matters/

… via the medium of this sig:

Timothy C. May, Crypto Anarchy: encryption, digital money, anonymous networks, digital pseudonyms, zero knowledge, reputations, information markets, black markets, collapse of government.

I would be interested in an effort to layer a bare-metal-up LispM-type layer on top of x86, ARM, &c. But Yarvin isn't here for the sheer techno-wanking. Oh no. He wants to reinvent the world, via the medium of encryption, digital currencies, &c. So he has a whole other layer on top of Urbit, which is the REASON for Urbit -- a secure, P2P, encrypted, next-gen computer system which happens to run on existing machines & over the existing Internet, because that's the available infrastructure, & while it's a horrid mess, it's what is there. You can't ignore it, you can't achieve these grandiose goals within it, so you just layer your new stuff over the top.

I hope that makes some kind of sense.

Thu, Apr. 28th, 2016 11:11 am (UTC)
(Anonymous): Liam hates x86, but x86 rules.

As much as I loved the M68k flat memory model, it's gone.
It's been reduced to an x86 or ARM world. Anything else is just chickens running around without their heads, like PowerPC used in car computers, or the marginal MIPS.

x86 will go on to power game machine power guzzlers.

End user software will move on to run on top of JVM, or JVM-like abstraction layers (the JS-on-the-browser being the most wasteful).

Low level software will continue to be made with C.

We're an old generation, Liam, bitching about how good train travel was with steam-powered locomotives. The new generations, the screen-touching kids, will pay for our nursing homes. We better be good to them.
FC

Thu, Apr. 28th, 2016 11:49 pm (UTC)
liam_on_linux: Re: Liam hates x86, but x86 rules.

Aha! Fernando found a way. Good.

Yes, all that is true.

But the times, they are a-changin'.

Two trends that I see which will change everything are this:

Computers have stopped getting much faster. They're still improving but we're in diminishing-returns territory.

Prediction: this will continue. It will be possible to build *much* faster CPUs, but doing so will require exotic materials, manufacturing techniques, etc. They won't be mass-produced & they will be extortionately expensive.

Mass-market kit will run out of doublings inside a decade, max. Then, we get more cores, running on less power, but not much faster.


Secondly: the distinction between "RAM" and "drives" was a temporary artefact and it will go away. These future machines will have a single layer of non-volatile storage. Lots of it, but one big flat space. No loading from SSD into RAM; no storing data on "drives". Software is written into the storage at manufacture, and it boots once and runs forever, except for faults, crashes, patching, etc. You will be able to reboot, but you won't need to -- it will be a bit like wiping & reinstalling today. Something only techies do.

If this happens, and I predict that it will, then there is no need for filesystems any more. Filesystems have been a wonderfully useful abstraction and data-interchange tool, but there will be no technological need for them any more. It will be absurd to partition a non-volatile-memory machine into "workspace" and "store" -- it's all one.
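(You can get a toy taste of this today with mmap, where a file's bytes and a program's memory become the same thing, so there is no separate load or save step. The file name here is arbitrary:)

    # Single-level store in miniature: writes to "memory" are writes to
    # the persistent store; the counter survives across runs.
    import mmap
    import os

    PATH = "store.bin"
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(b"\x00" * 4096)    # one page of persistent "RAM"

    with open(PATH, "r+b") as f:
        mem = mmap.mmap(f.fileno(), 4096)
        boots = int.from_bytes(mem[0:8], "little")
        print("restarts so far:", boots)
        mem[0:8] = (boots + 1).to_bytes(8, "little")
        mem.flush()
        mem.close()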

There is just *one* single-level store architecture left from the old days. IBM System i, formerly OS/400 on AS/400. I do not expect a renaissance, but I think a lot of conventional OS thinking will need to be thrown away.

This will *mandate* new ways of writing software. The current stuff assumes loading from disk into RAM, executing, and writing stuff back; that cycle is integral to its design & functioning.

And I think it will go away.

At first, that's how it will be implemented, because that's the only way we know today.

But LispMs and Smalltalk machines weren't like that. The way they boot, save data, load and run, is more like suspend-and-resume: total IPLs were rare. (Except when they crashed, of course.)

Someone will find a way to reimplement this, some new version of the idea, and it will be so much more efficient that the old disk-based systems will look foolish and antiquated by comparison, and they will end up sidelined to the server clusters in the datacentres with their huge arrays of spinning rust... until even those fade slowly away into HSM systems.

We will /need/ to make these changes, so I figure, hey, let's go strip-mining history now and see if we can find useful pointers as to how we can make the next generation of machines better than the current generation.

Fri, Apr. 29th, 2016 03:18 pm (UTC)
(Anonymous): Re: Liam hates x86, but x86 rules.

tl;dr: We abandoned elegant, efficient and very expensive computing platforms for lower-quality options that most people could actually afford.

Sat, Apr. 30th, 2016 06:24 pm (UTC)
liam_on_linux: Re: Liam hates x86, but x86 rules.

That's a cogent summary, yes.

But it misses out on the two big points I was trying to make:

[1] We threw away a lot of really good tech, and we have not come close to replacing a lot of it.

This seems not to be generally known or understood, AFAICT.

[2] We would benefit from trying to rediscover/re-implement some of this stuff.

Sun, May. 1st, 2016 07:25 am (UTC)
uon: RE: Re: Liam hates x86, but x86 rules.

This is already happening! In the 90s there was a company which made gorgeous 68k-based workstations running a revolutionary OS with a strongly Smalltalk-inspired object model - these machines were literally decades ahead of their time, and they basically flopped for being too expensive. Yet the device I'm using to write this post is entirely based on technology plundered from this dead end!

Old tech doesn't get thrown away -- it just sporulates until conditions are favourable again. Look at what happened with neural nets. Look at the endless cycle of re-re-re-invention in graphics hardware and software -- do you realise that, seen from the right angle, HTML5 is distinctly reminiscent of Display PostScript?

This is happening all the time and it has been happening all the time and you've been unable to see it because in a weird nostalgic way your ideas are stuck, stuck, stuck!

Thu, May. 19th, 2016 05:22 pm (UTC)
pndc

I know relatively little about the 6809, but the question and answer at http://retrocomputing.stackexchange.com/a/369/80 discusses an interesting advanced technique that gave it a definite performance boost over the 6502 and Z80. So those weirdos may not be completely deluded :)

The 6502 wasn't great to program for, but it had major advantages over its contemporaries: it was the cheapest, and it could do more useful work per clock cycle than the Z80 (and, apart from that trick, probably the 6809 as well). It was *almost* RISC in its own bizarre way.

Thu, May. 19th, 2016 06:28 pm (UTC)
liam_on_linux

I never learned assembly so I can't personally judge, but the smart people say that the 6809 was the best of the mass-market 8-bits, and so it ended up sidelined by the cheaper-and-nastier, as all the best tech does.

And yes, I've heard that of the 6502. In a way it had 2 descendants -- the 65816, the direct 16-bit descendant, sort of more CISC-ish. There was a planned and part-designed 65832, a 32-bit descendant of that.

And the ARM, a sort of conceptual offspring, which in its modern incarnations is no longer simple or clean at all; over 3 decades it's acquired its own layers of cruft. And ARM's conceptual offspring is Firepath.