Mon, Sep. 26th, 2016, 10:27 pm
In response to a comment on: It’s time to ban ‘stupid’ IoT devices. They’re as dangerous as post-Soviet era nuclear weapons.
One of the elements of security is currentness. It is more or less axiomatic that all software contains errors. Over time, these are discovered, and then they can be exploited to gain remote control over the thing running the software.
This is why people talk about "software rot" or "rust". It gets old, goes off, and is no longer desirable, or safe, to use.
Today, embedded devices are becoming so powerful & capable that it's possible to run ordinary desktop/server operating systems on them. This is much, much easier than purpose-writing tiny, very simple embedded code -- even though the smaller the software, the less there is to go wrong, and so the less there is to debug.
Current embedded systems are getting pretty big. The £5 Raspberry Pi Zero can run a full Linux OS, GUI and all. This makes it easy and cheap to use.
For instance, the possibly forthcoming ZX Spectrum Next and Ben Versteeg's ZX HD Spectrum HDMI adaptor both work by just sticking a RasPi Zero in there and having it run software that converts the video signal. Even if the device is 1000x more powerful and capable than the computer it's interfaced to, it doesn't matter if it only costs a fiver.
The problem is that once such a device is out there in lots of Internet-connected hardware, it never gets updated. So even in the vanishingly-unlikely event that it was entirely free of known bugs, issues and vulnerabilities when it was shipped, it won't stay that way. They *will* be discovered and then they *will* be exploited and the device *will* become vulnerable to exploitation.
And this is true of everything from smartphone-controlled light switches to doorbells to Internet-aware fridges. To a first approximation, all of them.
You can't have them automatically update themselves, because general-purpose OSes more or less inevitably grow over time. At some point they won't fit and your device bricks itself.
Or you give it lots of storage, increasing its price, but then the OS gets a new major version, which can't be automatically upgraded.
Or the volunteers updating the software stop updating that release, edition, family, or whatever, or it stops supporting the now-elderly chip your device uses...
Whichever way, you're toast: you are inevitably going to end up screwed.
What is making IoT possible is that computer power is cheap enough to embed general-purpose computers running general-purpose OSes into cheap devices, making them "smart". But that makes them inherently vulnerable.
This is a more general case of the argument that I tried (& judging by the comments, failed) to make in one of my relatively recent pieces for The Register.
Cheap general-purpose hardware is a great thing and enables non-experts to do amazing and very cool things. However, so long as it's running open, general-purpose software designed for radically different types of computer, we have a big problem, and one that is going to get a whole lot worse.
So, very rarely for me, a YouTube comment.
I know, I know, "never read the comments". But sheesh...
This is the single most inaccurate, error-ridden piece of computer reporting I have ever seen. Almost every single claim is wrong.
#9 Corel LinuxOS
This wasn't "designed by Debian". It was designed by, as the name says, Corel, but based on Debian, as are Ubuntu, Mint, Elementary & many other distros. For its time it was pretty good. I ran it.
"Struggled to detect drives" is nonsense.
It begat Xandros which continued for some years. Why was it killed? Because Corel did a licensing deal with Microsoft to add Visual Basic for Applications and MS Office toolbars to WordPerfect Office. One of the terms of the deal that MS insisted on was the cancellation of WordPerfect Office for Linux, Corel LinuxOS, and Corel's ARM-based NetWinder line of hardware.
"Offered absolutely no security". Correct -- by design. Because it came out of what later became the GNU Project, and was meant to encourage sharing.
#6 GNU Hurd
Still isn't complete because it was vastly over-optimistic, but it has inspired L4, Minix 3 and many others. Most of its userland became the basis of Linux, arguably the most successful OS in the history of the world.
#5 Windows ME
There is a service pack, but it's unofficial.
It runs well on less memory than Windows 2000 did, and it was the first (and last) member of the Windows 9x family to properly support FireWire -- important if you had an iPod, for instance.
#4 MS-DOS 4.0
Wasn't written by Microsoft; it was a rebadged version of IBM's PC-DOS 4.0.
The phrase "badly-coded memory addresses" is literally meaningless, it is empty techno-babble.
It ran fine and introduced many valuable additions, such as support for hard disk partitions over 32MB, disk caching as standard, and the graphical DOSShell with its handy program-switching facility.
No, it wasn't a classic release, but it was the beginning of Microsoft being forced into making DOS competitive, alongside PC-DOS 4.0 and DR-DOS 5. It wasn't a result of creeping featuritis -- it was the beginning of it, and not from MS.
Symbian was a triumph, powering the very successful Psion Series 5, 5mx, Revo and NetBook as well as multiple mobile phones.
Meanwhile, there was no such device as "the Nokia S60" -- S60 was a user interface, a piece of software, not a phone. It was one of Symbian's UIs, alongside S80, S90 and UIQ in Europe and others elsewhere.
Symbian was the only mobile OS with good enough realtime support to run the GSM stack on the same CPU as the main OS -- all other smartphones used a separate CPU running a separate OS.
Its browser was fine for the time.
Nokia only moved to Windows Phone OS when it hired a former Microsoft manager to run the company. Before then it also had its own Linux, Maemo, and also made Android devices.
"The open source distribution of Linux" is more technobabble. A distribution is a variety of Linux -- Lindows was one.
Its UI was Windows-like, like many other Linuxes even today, but Lindows' selling point was that it could run Windows apps via WINE. This wasn't a good idea -- the compatibility wasn't there yet, although it's quite good today -- but it's not even mentioned.
Like Corel LinuxOS, it was based on Debian, but Debian is a piece of software, not a company. Debian didn't "expect" anything.
Almost every single statement here is wrong.
#1 Vista / Windows 8
Almost every new version of Windows ever has required high-end specs for the time. This wasn't a new failing of Vista.
Windows 8 is not more "multi-functional" than any previous version. Totally wrong.
It didn't "do away with the desktop" -- also totally wrong. It's still there and is the primary UI.
JavaOS and Windows 1.0 are by comparison almost fair and apt, but this is a shameful travesty of a piece. Everyone involved should be ashamed.
My last job over here in Czechia was a year basically acting as the entire international customer complaints department for a prominent antivirus vendor.
Damned straight, Windows still has severe malware and virus problems! Yes, even Windows 8.x and 10.
The original dynamic content model for Internet Explorer was: download and run native binaries from the Internet. (AKA "ActiveX", basically OLE on web pages.) This is insane if you know anything about safe, secure software design.
It's better now, but the problem is that since IE is integrated into Windows, IE uses Windows core code to render text, images, etc. So any exploit that targets these Windows DLLs can allow a web page to execute code on your machine.
Unix' default model is that only binaries on your own system that have been marked as executable can run. By default it won't even run local stuff that isn't marked as such, let alone anything from a remote host.
(This is a dramatic oversimplification.)
Microsoft has slowly and painfully learned that the way Unix does things is safer than its own ways, and it's changing, but the damage is done. If MS rewrote Windows and fixed all this stuff, a lot of existing Windows programs wouldn't work any more. And the only reason to choose Windows is the huge base of software that there is for Windows.
Such things can be done. Mac OS X didn't run all classic MacOS apps when it was launched in 2001 or so. Then in 10.5 Apple dropped the ability to run old classic apps at all. Then in 10.6 it dropped the ability to run the OS on machines with the old processors. Then in 10.7 it dropped the ability to run apps compiled for the old processor.
It has carefully stage managed a transition, despite resistance. Microsoft _could_ have done this, but it didn't have the nerve.
It's worth mentioning that, to give it credit, the core code of both Windows 3 and Windows 95 contains some _inspired_ hacks to make stuff work, that Windows NT is a technical tour de force, and that the crap that has gradually worked its way in since Windows XP is due to the marketing people's insistence, not due to the programmers and their managers, who do superb work.
Other teams _do_ have the guts for drastic changes: look at Office 2007 (whole new UI, which I personally hate, but others like), and Windows 8 (whole new UI, which I liked but everyone else hated).
However, Windows is the big cash cow and they didn't have the courage when it was needed. Now, it's too late.
Something I seldom see mentioned, but I use a lot, is Linux systems installed directly onto USB sticks (pendrives). No, you can't install from these, but they are very useful for system recovery & maintenance.
There are 2 ways to do it:
1. Use a diskless PC, or disconnect your hard disk. This is fiddly.
2. Use a VM.
VirtualBox is free and lets you assign a physical disk drive to a VM. It's much harder to do this than it is in VMware -- it requires some shell commands to create the mapping, and others every time you wish to use it -- but it does work. Read the comments!
Every time you want to run the VM, you must take ownership of the USB device's entry in /dev.
N.B. This may require sudo.
Then the VM works. If you don't do this, the VM won't start and will give an unhelpful error message about nonexistent devices, then quit.
(It's possible that you could work around this by running VirtualBox as root, but that is not advisable.)
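For reference, the raw-disk setup described above can be sketched as follows. The device name /dev/sdb and the VMDK path are assumptions for illustration -- check which device node your stick really is with lsblk before running anything like this:

```shell
# Which device node the USB stick appears as -- an assumption; verify with lsblk!
DEV=/dev/sdb
VMDK="$HOME/usbstick.vmdk"

# One-time setup: create a VMDK wrapper file that points at the raw device.
# Guarded so this sketch is a no-op on machines without VirtualBox installed.
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage internalcommands createrawvmdk -filename "$VMDK" -rawdisk "$DEV"
fi

# Every session: take ownership of the device's entry in /dev (needs sudo),
# otherwise the VM refuses to start with that "nonexistent device" error.
if [ -e "$DEV" ] && command -v VBoxManage >/dev/null 2>&1; then
    sudo chown "$USER" "$DEV"
fi
```

You then attach the resulting VMDK to the VM as an ordinary hard disk in the VirtualBox GUI.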
The full Unity edition of Ubuntu 16.04 will not install on an 8GB USB key, but Lubuntu will. I suspect that Xubuntu would also be fine, and maybe the Maté edition. I suspect but have not tested that KDE and GNOME editions won't work, as they're bigger. They'd be fine on bigger keys, of course, but see the next paragraph.
Also note that desktops based on GNOME 3 require hardware OpenGL support, and thus run very badly inside VMs. This includes GNOME Shell, Unity & Cinnamon, and in my experience, KDE 4 & 5.
Installation puts GRUB in the MBR of the key, so it boots like any other disk.
- Partition the disk as usual. I suggest no separate /home but it's up to you. A single partition is easiest.
- Format the root partition as ext2 to extend flash media life (no journalling -> fewer writes)
- Add ``noatime'' to the /etc/fstab entry for the root volume -- faster & again reduces disk writes
- No swap. Swapping wears out flash media. I install and enable ZRAM instead, in case the key is used on low-RAM machines: http://askubuntu.com/questions/174579/how-do-i-use-zram
- You can add VirtualBox Guest Additions if you like. The key will run better in a VM, and when booted on bare metal the Additions just don't activate.
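By way of illustration, the fstab tweak and the ZRAM step above look something like this. The UUID is a placeholder (substitute the real one from blkid), and this sketch writes to a scratch file rather than the live /etc/fstab:

```shell
# Example fstab entry for the key's root volume: ext2, noatime, no swap line.
# The UUID is a placeholder -- substitute the output of `blkid` for your key.
cat > /tmp/fstab.example <<'EOF'
# <file system>   <mount point> <type> <options>                  <dump> <pass>
UUID=0000-0000    /             ext2   noatime,errors=remount-ro  0      1
EOF

# ZRAM instead of a swap partition: on Ubuntu of this era, a packaged helper
# sets up a compressed-in-RAM swap device at boot:
#   sudo apt-get install zram-config
```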
I then update as normal.
You can update when booted on bare metal, but if it installs a kernel update, then it will run ``update-grub'' and this will add entries for any OSes on that machine's hard disk into the GRUB menu. I don't like this -- it looks messy -- so I try to only update inside a VM.
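If you do want to update on bare metal without picking up the host machine's OSes, GRUB's os-prober hook can be switched off. This is a sketch against a scratch copy -- on the real key you would append the line to /etc/default/grub and then run sudo update-grub:

```shell
# Work on a scratch copy; on the key itself this would be /etc/default/grub.
CFG=/tmp/grub-default.example
cp /etc/default/grub "$CFG" 2>/dev/null || : > "$CFG"

# Stop update-grub scanning other disks for OSes and adding them to the menu.
echo 'GRUB_DISABLE_OS_PROBER=true' >> "$CFG"

# Show the setting we just appended.
grep '^GRUB_DISABLE_OS_PROBER=true' "$CFG"
```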
I usually use a 32-bit edition; the resulting key will boot and run on 64-bit machines too, and modern kernels automatically enable PAE and use all the available RAM.
Sadly my Mac does not see such devices as bootable volumes, but the keys work fine on normal PCs.

EDIT: It occurs to me that they might not work on UEFI PCs unless you create a UEFI system partition and appropriate boot files. I don't have a UEFI PC to experiment with. I'd welcome comments on this.
Windows can't see them as it does not natively understand ext* format filesystems. If you wish you can partition the drive and have an exFAT (or whatever format you prefer) data partition as well, of course.
I also install some handy tools such as additional filesystem support (exFAT, HFS etc.), GParted, things like that.
I find such keys a handy addition to my portable toolkit and have used them widely.
If you wish and you used a big enough key, you could install multiple distros on a single key this way. But remember, you can't install from them.
I've also found that the Boot-Repair tool won't install on what it considers to be an installed system. It insists on being installed on a live installer drive.
If you want to carry around lots of ISO files and choose which to install, a device like this is the easiest way: http://www.zalman.com/contents/products/view.html?no=212
I am reluctant, but I have to sell this lovely phone.
It's a 32GB, fully-unlocked Blackberry Passport running the latest OS. It's still in support and receiving updates.
The sale price includes a PDAir black leather folding case -- one of these:
It is used but in excellent condition and fully working. I have used both Tesco Mobile CZ and UK EE micro SIM cards and both worked perfectly.
The keyboard is also a trackpad and can be used to scroll and select text. The screen is square and hi-resolution -- the best I have ever used on a smartphone.
It runs the latest Blackberry 10 OS, which has the best email client on any pocket device. It can also run some Android apps and includes the Amazon app store. I side-loaded the Google Play store but not all apps for standard Android work. I am happy to help you load this if you want.
It is 100% usable without a Google, Apple or Microsoft account, if you are concerned about privacy issues.
It supports Blackberry Messenger, obviously, and has native clients for Twitter and other social networks -- I used Skype, Reddit, Foursquare and Untappd, among others. I also ran Android clients for Runkeeper, Last.FM and several other services. Facebook, Google+ and others are usable via their web interfaces.
I will do a full factory reset before handing it over.
It has a microSD slot for additional storage if you need it.
It is about a year old and has been used, so the battery is not as good as new, but it still lasts much longer than the Android phablet that replaced it!
You can see it and try it before purchase if you wish.
Reason for sale: I needed more apps. I do not speak Czech and I need Google Translate and Google Maps almost every day.
Note: no mains adaptor is included, but it charges over micro-USB, so any charger will work -- it complains about other brands' chargers, but they still work.
IKEA sell a cheap multiport one:
You can see photos of my device here: this is the Flickr album, or click on the photo above.
I am hoping for CZK 10,000 but I am willing to negotiate.
Contact details on my profile page, or email lproven on Google Mail.
I found this post interesting: "Respinning Linux"
It led me to comment as follows...
Have you folks encountered LXLE? It's a special version of Lubuntu, the lightest-weight of the official Ubuntu remixes, optimised for older hardware. http://www.lxle.net/

Cinnamon is a lot less than ideal, because it uses a desktop compositor. This requires hardware OpenGL. If the graphics driver doesn't provide this, it is emulated using a thing called "LLVMpipe". This process is slow & uses a lot of CPU bandwidth. This is true of all desktops based on GNOME 3 -- including Unity, Elementary OS, RHEL/CentOS "Gnome Classic", SolusOS's Consort, and more. All are based on Gtk 3.

In KDE, it is possible to disable the compositor, but it's still very heavyweight.

The mainstream desktops that do not need compositing at all are, in order of size (memory footprint), from largest to smallest:
- Maté
- Xfce
- LXDE

All are based on Gtk 2, which has now been replaced with Gtk 3.
Of these, LXDE is the lightest, but it is currently undergoing a merger with the Razor-Qt desktop to become LXQt. This may be larger & slower when finished -- it's too soon to tell.
However, this means that, of the 3, LXDE has the brightest-looking future, because it will be based on a current toolkit. Neither Maté nor Xfce has announced a firm migration path to Gtk 3 yet.
I almost never saw 2.88MB floppy drives.
I know they were out there. The later IBM PS/2 machines used them, and so did some Unix workstations, but the 2.88MB format -- quad or extended density -- never really took off.
It did seem to me that if the floppy companies & PC makers had actually adopted them wholesale, the floppy disk as a medium might have survived for considerably longer.
Because the 2.88MB drives never became widespread, the media remained expensive, ISTM -- and thus little software was distributed on the format, because few machines could read it.
By 1990 there was an obscure and short-lived 20MB floptical diskette format: http://www.cbronline.com/news/insites_20mb_floptical_drive_reads_144mb_disks
Then in 1994 came 100MB Zip disks, which for a while were a significant format -- I had Macs with built-in-as-standard Zip drives.
Then came the 3½" super floptical drives: the 120MB Imation SuperDisk in 1997, the 144MB Caleb UHD144 in early 1998, and the 150MB Sony HiFD in late 1998.
(None of these later drives could read 2.88MB diskettes, AFAIK.)
After that, writable CDs got cheap enough to catch on, and USB Flash media mostly has killed them off now.
If the 2.88MB format had taken off, and maybe even intermediate ~6MB and ~12MB formats -- was that feasible? -- before the 20MB ones, well, with widespread adoption, there wouldn't have been an opening for the Zip drive, and the floppy might have remained a significant and important medium for another decade.
I didn't realise that the Zip drive eventually got a 750MB version, presumably competing with Iomega's own 1GB Jaz drive. If floppy drives had got into that territory, could they have even fended off CDs? Recordable CDs were always a pain: effectively a one-shot medium, and thus inconvenient and expensive -- write on one machine, use a few times at best, then throw away.
I liked floppies. I enjoy playing with my ancient Sinclair computers, but loading from tape cassette is just a step too far. I remember the speed and convenience when I got my first Spectrum disk drive, and I miss it. Instant loading from an SD drive just isn't the same. I don't use them on PCs any more -- I don't have a machine with a floppy drive in this country -- but for 8-bits, two drives with a meg or so of storage was plenty. I used them long after most people, if only for updating BIOSes and so on.
I was surprised to read someone castigating and condemning the Cyrix line of PC CPUs today.
For a while, I recommended 'em and used 'em myself. My own home PC was a Cyrix 6x86 P166+ for a year or two. Lovely machine -- a 133MHz processor that performed about 30-40% better than an Intel Pentium MMX at the same clock speed.
My then-employer, PC Pro magazine, recommended them too.
I only ever hit one problem: I had to turn down reviewing the latest version of Aldus PageMaker because it wouldn't run on a 6x86. I replaced it with a Baby-AT Slot 1 Gigabyte motherboard and a Pentium II 450. (Only the 100MHz front-side-bus Pentium IIs were worth bothering with, IMHO. The 66MHz FSB PIIs could be outperformed by a cheaper Super Socket 7 machine with a Cyrix chip.) It was very difficult to find a Baby-AT motherboard for a PII -- the market had switched to ATX by then -- but it allowed me to keep a case I particularly liked, and indeed most of the components in that case, too.
The one single product that killed the Cyrix chips was id Software's Quake.
Quake used very cleverly optimised x86 code that interleaved FPU and integer instructions, as John Carmack had worked out that apart from instruction loading, which used the same registers, FPU and integer operations used different parts of the Pentium core and could effectively be overlapped. This nearly doubled the speed of FPU-intensive parts of the game's code.
The interleaving didn't work on Cyrix cores. It ran fine, but the operations did not overlap, so execution speed halved.
On every other benchmark and performance test we could devise, the 6x86 core was about 30-40% faster than the Intel Pentium core -- or the Pentium MMX, as nothing much used the extra instructions, so really only the additional L1 cache helped. (The Pentium 1 had 16 kB of L1; the Pentium MMX had 32 kB.)
But Quake was extremely popular, and everyone used it in their performance tests -- and thus hammered the Cyrix chips, even though the Cyrix was faster in ordinary use, in business/work/Windows operation, indeed in every other game except Quake.
And ultimately that killed Cyrix off. Shame, because the company had made some real improvements to the x86-32 design. Improving instructions-per-clock is more important than improving the raw clock speed, which was Intel's focus right up until the demise of the Netburst Pentium 4 line.
AMD with the 64-bit Sledgehammer core (Athlon 64 & Opteron) did the same to the P4 as Cyrix's 6x86 did to the Pentium 1. Indeed I have a vague memory some former Cyrix processor designers were involved.
Intel Israel came back with the (Pentium Pro-based) Pentium M line, intended for notebooks, and that led to the Core series, with IPC speeds that ultimately beat even AMD's. Today, nobody can touch Intel's high-end x86 CPUs. AMD is looking increasingly doomed, at least in that space. Sadly, though, Intel has surrendered the low end and is killing the Atom line. http://www.pcworld.com/article/3063672/windows/the-death-of-intels-atom-casts-a-dark-shadow-over-the-rumored-surface-phone.html
The Atoms were always a bit gutless, but they were cheap, ran cool, and were frugal with power. In recent years they've enabled some interesting cheap low-end Windows 8 and Windows 10 tablets: http://www.anandtech.com/show/8760/hp-stream-7-review https://www.amazon.co.uk/Windows10-Tablet-Display-11000mAh-Battery-F-Black-B-Gray/dp/B01DF3UV3Y?ie=UTF8&keywords=hi12&qid=1460578088&ref_=sr_1_2&sr=8-2
Given that there is Android for x86, and there have already been Intel-powered Android phones, plus Windows 10 for phones today, this opened up the intriguing possibility of x86 Windows smartphones -- but then Intel slammed the door shut.
Cyrix still exists, but only as a brand for Via, with some very low-end x86 chips. Interestingly, these don't use Cyrix CPU cores -- they use a design taken from a different non-Intel x86 vendor, the IDT WinChip: https://en.wikipedia.org/wiki/WinChip
I installed a few WinChips as upgrades for low-speed Pentium PCs. The WinChip never was all that fast, but it was a very simple, stripped-down core: it ran cool, was about as quick as a real Pentium core, was cheaper, and ran at higher clock speeds, so it was mainly sold as an aftermarket upgrade for tired old PCs. The Cyrix chips weren't a good fit for this, as they required different clock speeds, BIOS support, additional cooling and so on. IDT spotted a niche and exploited it, and oddly, that is the non-Intel x86 core that has survived at the low end, not the superior 6x86 one.
In the unlikely event that Via does some R&D work, it could potentially move into the space now vacated by the very low-power Atom chips. AMD is already strong in the low-end x86 desktop/notebook space with its Fusion processors which combine a 64-bit x86 core with an ATI-derived GPU, but they are too big, too hot-running and too power-hungry for smartphones or tablets.
(Repurposed email reply)
Although I was educated & worked with DEC systems, I didn't have much to do with the company itself. Its support was good, the kit ludicrously expensive, and the software offerings expensive, slow and lacking competitive features. However, they also scored in some ways.
My 60,000' view:
Microsoft knew EXACTLY what it was doing with its practices when it built up its monopoly. It got lucky with the technology: its planned future super products flopped, but it turned on a dime & used what worked.
But killing its rivals, any potential rival? Entirely intentional.
The thing is that no other company was poised to effectively counter the MS strategy. Nobody.
MS' almost-entirely-software-only model was almost unique. Its ecosystem of apps and 3rd party support was unique.
In the end, it actually did us good. Gates wanted a computer on every desk. We got that.
The company's strategy called for open compatible generic hardware. We got that.
Only one platform, one OS, was big enough, diverse enough, to compete: Unix.
But commercial, closed, proprietary Unix couldn't. 2 ingredients were needed:
#1 COTS hardware - which MS fostered;
#2 FOSS software.
Your point about companies sharing their source is noble, but I think inadequate. The only thing that could compete with a monolithic software monopolist on open hardware was open software.
MS created the conditions for its own doom.
Apple cleverly leveraged FOSS Unix and COTS x86 hardware to take the Mac brand and platform forward.
Nobody else did, and they all died as a result.
If Commodore, Atari and Acorn had adopted similar strategies (as happened independently of them later, after their death, resulting in AROS, AFROS & RISC OS Open), they might have lived.
I can't see it fitting the DEC model, but I don't know enough. Yes, cheap low-end PDP-11s with FOSS OSes might have kept them going longer, but not saved them.
The deal with Compaq was catastrophic. Compaq was in Microsoft's pocket. I suspect that Intel leant on Microsoft and Microsoft then leant on Compaq to axe Alpha, and Compaq obliged. It also knifed HP OpenMail, possibly the Unix world's only viable rival to Microsoft Exchange.
After that it was all over bar the shouting.
Microsoft could not have made a success of OS/2 3 without Dave Cutler... But DEC couldn't have made a success out of PRISM either, I suspect. Maybe a stronger DEC would have meant Windows NT would never have happened.
My contention is that a large part of the reason that we have the crappy computers that we do today -- lowest-common-denominator boxes, mostly powered by one of the kludgiest and most inelegant CPU architectures of the last 40 years -- is not technical, nor even primarily commercial or due to business pressures, but rather, it's cultural.
When I was playing with home micros (mainly Sinclair and Amstrad; the American stuff was just too expensive for Brits in the early-to-mid 1980s), the culture was that Real Men programmed in assembler and the main battle was Z80 versus 6502, with a few weirdos saying that 6809 was better than either. BASIC was the language for beginners, and a few weirdos maintained that Forth was better.
At university, I used a VAXcluster and learned to program in Fortran-77. The labs had Acorn BBC Micros in them -- solid machines, the best 8-bit BASIC ever, and they could interface both with lab equipment over IEEE-488 and with generic printers and so on over Centronics parallel and its RS-423 interface [EDIT: fixed!], which could talk to RS-232 kit.
As I discovered when I moved into the professional field a few years later (1988), this wasn't that different from the pro stuff. A lot of apps were written in various BASICs, and in the old era of proprietary OSes on proprietary kit, for performance, you used assembler.
But a new wave was coming. MS-DOS was already huge and the Mac was growing strongly. Windows was on v2 and was a toy, but Unix was coming to mainstream kit, or at least affordable kit. You could run Unix on PCs (e.g. SCO Xenix), on Macs (A/UX), and my employers had a demo IBM RT-6150 running AIX 1.
Unix wasn't only the domain (pun intentional) of expensive kit priced in the tens of thousands.
A new belief started to spread: that if you used C, you could get near-assembler performance without the pain, and the code could be ported between machines. DOS and Mac apps started to be written (or rewritten) in C, and some were even ported to Xenix. In my world, nobody used stuff like A/UX or AIX, and Xenix was specialised. I was aware of Coherent as the only "affordable" Unix, but I never saw a copy or saw it running.
So this second culture of C code running on non-Unix OSes appeared. Then the OSes started to scramble to catch up with Unix -- first OS/2, then Windows 3, then, for a decade, the parallel universe of Windows NT, until XP became established and Win9x finally died. Meanwhile, Apple and IBM flailed around, until IBM surrendered, and Apple merged with NeXT and switched to NeXTstep.
Now, Windows is evolving to be more and more Unix-like, with GUI-less versions, clean(ish) separation between GUI and console apps, a new rich programmable shell, and so on.
While the Mac is now a Unix box, albeit a weird one.
Commercial Unix continues to wither away. OpenVMS might make a modest comeback. IBM mainframes seem to be thriving; every other kind of big iron is now emulated on x86 kit, as far as I can tell. IBM has successfully killed off several efforts to do this for z Series.
So now, it's Unix except for the single remaining mainstream proprietary system: Windows. Unix today means Linux, while the weirdoes use FreeBSD. Everything else seems to be more or less a rounding error.
C always was like carrying water in a sieve, so now, we have multiple C derivatives, trying to patch the holes. C++ has grown up but it's like Ada now: so huge that nobody understands it all, but actually, a fairly usable tool.
And dozens of others, of course.
Even the safer ones run on a basis of C -- so the lovely cuddly friendly Python, that everyone loves, has weird C printing semantics to mess up the heads of beginners.
Perl has abandoned its base, planned to move onto a VM, then the VM went wrong, and now has a new VM and to general amazement and lack of interest, Perl 6 is finally here.
All the others are still implemented in C, mostly on a Unix base, like Ruby, or on a JVM base, like Clojure and Scala.
So they still have C-like holes, and there are frequent patches and updates to try to make them retain some water for a short time, while the "cyber criminals" make hundreds of millions.
Anything else is "uncommercial" or "not viable for real world use".
Borland totally dropped the ball and lost a nice little earner in Delphi, but it continues as Free Pascal and so on.
Apple goes its own way, but has forgotten the truly innovative projects it had pre-NeXT, such as Dylan.
There were real projects that were actually used for real work, like Oberon the OS, written in Oberon the language. Real pioneering work in UIs, such as Jef Raskin's machines, the original Mac and Canon Cat -- forgotten. People rhapsodise over the Amiga and forget that the planned OS, CAOS, to be as radical as the hardware, never made it out of the lab. Same, on a smaller scale, with the Acorn Archimedes.
Despite that, of course, Lisp never went away. People still use it, but they keep their heads down and get on with it.
Much the same applies to Smalltalk. Still there, still in use, still making real money and doing real work, but forgotten all the same.
The Lisp Machines and Smalltalk boxes lost the workstation war. Unix won, and as history is written by the victors, now the alternatives are forgotten or dismissed as weird kooky toys of no serious merit.
The senior Apple people didn't understand the essence of what they saw at PARC: they only saw the chrome. They copied the chrome, not the essence, and now all that any of us have is the chrome. We have GUIs, but on top of the nasty kludgy hacks of C and the like. A late-'60s skunkware project now runs the world, and the real serious research efforts to make something better, both before and after, are forgotten historical footnotes.
Modern computers are a vast disappointment to me. We have no thinking machines. The Fifth Generation, Lisp, all that -- gone.
What did we get instead?
Like dinosaurs, the expensive high-end machines of the '70s and '80s didn't evolve into their successors. They were just replaced: first by little cheapo 8-bits, not real or serious at all -- but they were cheap, and people did serious stuff with them because they were all they could afford. The early 8-bits ran semi-serious OSes such as CP/M, but when their successors sold a thousand times more, they weren't running a descendant of that OS -- no, it and its creator died.
CP/M evolved into a multiuser multitasking 386 OS that could run multiple MS-DOS apps on terminals, but it died.
No: instead, the cheapo 8-bits thrived in the form of an 8/16-bit hybrid, the 8086 and 8088, running a cheapo knock-off of CP/M.
This got a redesign into something grown-up: OS/2.
Predictably, that died.
So the hacked-together GUI for DOS got re-invigorated with an injection of OS/2 code, as Windows 3. That took over the world.
The rivals - the Amiga, ST, etc? 680x0 chips, lots of flat memory, whizzy graphics and sound? All dead.
Then Windows got re-invented with some ideas and code from the planned OS/2 3.0, and some from VMS, and we got Windows NT.
But the marketing men got to it and ruined its security and elegance, producing the lipstick-and-high-heels Windows XP. That version -- insecure and flaky, with its terrible bodged-in browser -- was, of course, the one that sold.
Linux got nowhere until it copied the XP model. The days of small programs, everything's a text file, etc. -- all forgotten. Nope: lumbering GUI apps, CORBA and RPC and other weird plumbing, huge complex systems -- but it looks and works kinda like Windows and the Mac now, so people use it.
Android looks kinda like iOS and people use it in their billions. Newton? Forgotten. No, people have Unix in their pocket, only it's a bloated successor of Unix.
The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten. A proprietary microkernel Unix-like OS for phones -- Blackberry 10, based on QNX -- not Androidy enough, and bombed.
We have less and less choice, made from worse parts on worse foundations -- but it's colourful and shiny and the world loves it.
That makes me despair.
We have poor-quality tools, built on poorly-designed OSes, running on poorly-designed chips. Occasionally, fragments of older better ways, such as functional-programming tools, or Lisp-based development environments, are layered on top of them, but while they're useful in their way, they can't fix the real problems underneath.
Occasionally someone comes along, points this out, and shows a better way -- such as Curtis Yarvin's Urbit: Lisp Machines re-imagined for the 21st century, layered on top of modern machines. But nobody gets it, and its programmer has some unpleasant and unpalatable ideas, so it's doomed.
And the kids who grew up after C won the battle deride the former glories, the near-forgotten brilliance that we have lost.
And it almost makes me want to cry sometimes.
We should have brilliant machines now, not merely Steve Jobs' "bicycles for the mind", but Gossamer Albatross-style hang-gliders for the mind.
But we don't. We have glorified 8-bits. They multitask semi-reliably, they can handle sound and video and 3D and look pretty. On them, layered over all the rubbish and clutter and bodges and hacks, inspired kids are slowly brute-forcing machines that understand speech, which can see and walk and drive.
But it could have been so much better.
Charles Babbage didn't finish the Difference Engine. Finishing it would have paid for him to build his Analytical Engine, and that would have given the Victorian British Empire the steam-driven computer -- which would have transformed history.
But he got distracted and didn't deliver.
We started to build what a few old-timers remember as brilliant machines, machines that helped their users to think and to code, with brilliant -- if flawed -- software written in the most sophisticated computer languages yet devised, by the popular acclaim of the people who really know this stuff: Lisp and Smalltalk.
But we didn't pursue them. We replaced them with something cheaper -- with Unix machines, an OS only a nerd could love. And then we replaced the Unix machines with something cheaper still -- the IBM PC, a machine so poor that the £125 ZX Spectrum had better graphics and sound.
And now, we all use descendants of that. Generally acknowledged as one of the poorest, most-compromised machines, based on descendants of one of the poorest, most-compromised CPUs.
Yes, over the 40 years since then, most of the rough edges have been polished out. The machines are now small, fast and power-frugal, with tons of memory and storage, great graphics and sound. But it's taken decades to get here.
And the OSes have developed. Now they're feature-rich, fairly friendly, really very robust considering the stone-age stuff they're built from.
But if we hadn't spent three or four decades making a pig's ear into a silk purse -- if we'd started with a silk purse instead -- where might we have got to by now?