From a G+ thread
that just won't die.
A British digital artist called William Latham
-- he has a site, but it won't load for me -- once co-developed a wonderful screensaver for early 32-bit Windows, called Organic Art.
There was even an MS-sponsored free demo version.
Sadly this won't install on 64-bit Windows, as the installer has 16-bit components. However, you can
get it working. I did it, after a bit of fiddling, on Windows 7.
Here's how, in brief:
* Install XP Mode
* Boot it, let it update etc.
* In Win7, meanwhile, download the demo from Nemeton
* Once XP Mode is all updated, install the OA MS edition demo from the host drive
* Check it works
* I copied the whole Program Files/Computer Artworks tree into my W7 Downloads folder
* I also retrieved the screensaver (.SCR) file from C:\WINDOWS\SYSTEM32 -- and as mentioned above, D3DRM.DLL
In W7, I copied these files into the same locations on my Win7/64 system.
I used the documented hack to re-enable screensavers (JFGI).
It now ran but couldn't find any profiles.
* In XP Mode, I exported the entire Computer Artworks hive from the Registry to a file in my W7 Downloads folder.
In W7 I imported this file.
Now the 'saver runs. It's worth disabling mode switching and forcing it to use Hardware Acceleration. Not all of the saver modules work, but most do -- and very quickly and smoothly, too.
This won't work as-is on Windows 8 or newer. There are hacks, but I only got the Virtual PC component of XP Mode running on Win8. Nothing newer worked.
But you can run XP Mode in VirtualBox, and I've published an article
on how to do that. The other steps are much the same.
Try it. It's really quite beautiful.
A lot of my speculations concern the future of new, alternative operating systems which could escape from old-fashioned, sometimes ill-conceived models and languages.
But I do spend some time thinking about what is happening with Linux, with FOSS Unix in general, and especially with container technologies, something I deal with in my current and recent day-jobs more and more.
One answer to legacy nastiness for years now has been to virtualise it. Today, that's changing to "containerise it".
There is a ton of cruft in Linux and in the BSDs and so on which nobody is ever going to fix. It's too hard, it would break too much stuff... but most of all, there is no commercial pressure to do it, so it's not going to happen.
I can certainly see potentialities. There are parallels that run quite deep.
For instance, consider a few unrelated technologies:
- FreeBSD jails and Solaris Zones. Start here.
They indirectly evolved into LXC, the container mechanism in the Linux kernel which gets relatively little attention. (Docker has critical mass, systemd-nspawn is trendier in some niches, CRI-O is gaining a little bit of traction.)
Docker now means Linux containers are a known thing, already widely used, with money being poured into their R&D. Joyent, a company with some vision, saw a chance here. It took Illumos, the FOSS fork of Solaris, and revived and modernised some long-dead Sun code: lxrun, the Linux runtime for Solaris. Joyent's SmartOS is therefore a tiny Solaris derivative -- it runs entirely from RAM, booted off a USB stick, but can efficiently scale to hundreds of CPU cores and many terabytes of RAM -- which can natively run Docker Linux containers.
You don't need to run a hypervisor. (It is a hypervisor, if you want that.) You don't need to partition the machine. You don't even need a single copy of Linux on it. You have a rack of x86-64 boxes running SmartOS, and you can throw tens of thousands of Docker containers at them.
It gives capacities and scalability that only IBM mainframes can approach.
Now, if one small company can do this with some long-unmaintained code, then consider what else could be done with it.
- Want more resilient hosts for long-lived containers? Put some work into Minix 3 until it can efficiently run Linux containers. A proper fully-modular-all-the-way-down microkernel, which can detect when its constituent in-memory services fail and restart them. It can in principle even undergo binary version upgrades, piecemeal, on a running system. This is stuff Linux vendors can't even dream of. It would, for a start, make quite a lot of the Spectre and Meltdown vulnerabilities moot, because there's no shared kernel memory space.
Unlike Darwin and xnu, it's a proper microkernel -- no huge in-kernel servers for anything here. (Don't even try to claim WinNT is a microkernel or I will slap you.) Unlike the GNU HURD, it's here, it works, and it's being very widely used for real workloads. And it's 100% FOSS.
- Want a flexible cluster host which can migrate containers around a globe-spanning virtual datacenter?
Put some work into Plan 9's APE, its ANSI/POSIX compatibility environment. Again, make it capable of running Linux containers. To Plan 9 they'd just be processes, and it was built to efficiently fling them around a network.
I have looked into container-hosting Linux distros for several different dayjobs. I can't give details, but they scare me. One I've tried has a min spec of 8GB of RAM and 40GB of disk per cluster node, and a minimum of 3-4 nodes.
This is not small efficient tech. But it could be; SmartOS shows that.
- Hell, more down to earth -- many old Linux hands are deserting to FreeBSD in disgust over systemd. FreeBSD already has containers and a quite current Linux runtime, the Linuxulator. It would be relatively easy to put them together and have FreeBSD host Linux containers, but the sort of people who dislike systemd also dislike containers.
Not everything would run under containers, sure, no. But they're suitable for far bigger workloads than is generally expected. You can migrate a whole complex Linux server into a container -- P2V migration as was once common when moving to hypervisors. I've talked to people doing it.
Ubuntu LXD is specifically intended for this, because Ubuntu isn't certified for SAP, only SUSE is, so Ubuntu wants to be able to run SLE userlands. Ditto some RHEL-only stuff.
But what if it doesn't work with containers at all?
Well, as parallels...
A lot of Win32 stuff got abandoned with the move to WinXP. People liked the new OS enough that stuff that didn't work got left behind.
Apple formalised this with Carbon after the NeXT acquisition. The MacOS APIs were not clean and suitable for a pre-emptive multitasking OS. So Apple stripped them out and said "if you use this subset, your app can be ported. If you don't, it can't."
Over the next few years, the old OS was forcibly phased out -- there is a generation of late-era gigahertz-class G4 and G5 PowerMac that refuses to boot classic MacOS. Apple tweaked the firmware to prevent it. You _had_ to run OS X on them, and although versions >= 10.4 could run a Classic MacOS VM, not everything worked in a VM.
So the developers had to migrate. And they did, because although it was a lot of work, they wanted to keep selling software.
It worked so well that in the end the migration from PowerPC to Intel was less painful than the one from classic MacOS to OS X.
So maybe Linux workloads that won't work in containers will just go away, replaced by ones that will -- and apps that play nice in a container don't care what distro they're on, and that means that they will run on top of SmartOS and FreeBSD and maybe in time Minix 3 or Plan 9.
And so we'll get that newer, cleaner, reworked Unix after all, but not by any incremental process, by a quite dramatic big-bang approach.
And if there comes a point when it's desirable to run these alternative OSes for some users, because they provide useful features in nice handy easy ways, well, maybe they'll gain traction.
And if that happened, then maybe some people will investigate native ports instead of containerised Linux versions, and gain some edge, and suddenly the Unix world will be blown wide open again.
Might happen. Might not. It's not what I am really interested in, TBH. But it's possible
-- existing products, shipping for a few years, show that.
So, yesterday I presented my first conference talk since the Windows Show 1996 at Olympia, where I talked about choosing a network operating system — that is, a server OS — for PC Pro magazine.
(I probably still have the speaker's notes and presentation for that somewhere too. The intensely curious may ask and I may be able to share it too.)
It seemed to go OK, I had a whole bunch of people asking questions afterwards, commenting or thanking me.
[Edit] Video! https://youtu.be/jlERSVSDl7Y
I have to check out the video recording and make some editing marks before it will be published and I am not sure that the hotel wifi connection is fast or capacious enough for me to do that. However, I'll post it as soon as I can.
Meantime, here is some further reading.
I put together a slightly jokey deck of slides and was very pleasantly impressed by how easy LibreOffice Impress made it to create and to present them. You can download the 9MB ODP file here:
The notes are a 110 kB MS Word 2003 document. They may not always be terribly coherent -- some were extensively scripted, some are just bullet points. For best results, view in MS Word (or the free MS Word Viewer, which runs fine under WINE) in Outline mode. Other programs will not show the structure of the document, just the text.
I had to cut the talk fairly brutally to fit the time and did not get to discuss some of the operating systems I planned to. You can see some additional slides at the end of the presentation for stuff I had to skip.
Here's a particular chunk of the talk that I had to cut. It's called "Digging deeper" and you can see what I was going to say about Taos, Plan 9, Inferno, QNX and Minix 3. This is what the slides on the end of the presentation refer to.
Links I mentioned in the talk or slides
The Unix Haters' Handbook [PDF]: https://simson.net/ref/ugh.pdf
Stanislav Datskovskiy's Loper-OS: http://www.loper-os.org/
Paul Graham's essays: http://www.paulgraham.com/
Notably his Lisp Quotes: http://www.paulgraham.com/quotes.html
Steve Jobs on the two big things he missed when he visited Xerox PARC:
Alan Kay interview where he calls Lisp "the Maxwell's Equations of software": https://queue.acm.org/detail.cfm?id=1039523
And what that means: http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/
Fri, Dec. 22nd, 2017, 12:22 am
It might interest folk hereabout that I've had a talk accepted at February's FOSDEM conference in Brussels. The title is "The circuit less travelled" and I will be presenting a boiled-down, summarised version of my ongoing studies into OS, language and app design, on the thesis of where, historically, the industry made arguably poor (if pragmatic) choices, some interesting technologies that weren't pursued, where it'll go next and how reviving some forgotten ideas could lend technological advantage to those trying different angles.
In other words, much of what I've been ranting about on here for the last several years.
It will, to say the least, be interesting to see how it goes down.
SUSE is paying for me to attend, but the talk is not on behalf of them -- it's entirely my own idea and submission. A jog from SUSE merely gave me the impetus to submit an abstract and description.
Once again, recently, I have been told that I simply cannot write about -- for instance -- the comparative virtues of programming languages unless I am a programmer and I can actually program in them. That that is the only way to judge.
This could be the case, yes. I certainly get told it all the time.
But the thing is that I get told it by very smart, very experienced people who also go on to tell me that I am completely wrong about other stuff where I know that I am right, and can produce abundant citations to demonstrate it. All sorts of stuff.
I can also find other people -- just a few -- who know exactly what I am talking about, and agree, and have written much the same, at length. And their experience is the same as mine: years, decades, of very smart highly-experienced people who just do not understand and cannot step outside their preconceptions far enough to get the point.
It is not just me.
This is a repurposed CIX comment. It goes on a bit. Sorry for the length. I hope it amuses.
So, today, a friend of mine accused me of getting carried away after reading a third-generation Lisp enthusiast's blog. I had to laugh.
The actual history is a bit bigger, a bit deeper.
The germ was this: https://www.theinquirer.net/inquirer/news/1025786/the-amiga-dead-long-live-amiga
That story did very well, amazing my editor, and he asked for more retro stuff. I went digging. I'm always looking for niches which I can find out about and then write about -- most recently, it has been containers and container tech. But once something goes mainstream and everyone's writing about it, then the chance is gone.
I went looking for other retro tech news stories. I wrote about RISC OS, about FPGA emulation, about OSes such as Oberon and Taos/Elate.
The more I learned, the more I discovered how much the whole spectrum of commercial general-purpose computing is just a tiny and very narrow slice of what's been tried in OS design. There is some amazingly weird and outré stuff out there.
Many of them still have fierce admirers. That's the nature of people. But it also means that there's interesting in-depth analysis of some of this tech.
It's led to pieces like this, which were fun to research: http://www.theregister.co.uk/Print/2013/11/01/25_alternative_pc_operating_systems/
I found 2 things.
One: most of the retro-computers that people rave about -- mainstream stuff like Amigas or Sinclair Spectrums or whatever -- are actually relatively homogeneous compared to the really weird stuff. And most of them died without issue. People are still making clone Spectrums of various forms, but they're not advancing it and it didn't go anywhere.
The BBC Micro begat the Archimedes and the ARM. Its descendants are everywhere. But the software is all but dead, and perhaps justifiably. It was clever but of no great technical merit. Ditto the Amiga, although AROS on low-cost ARM kit has some potential. Haiku, too.
So I went looking for obscure old computers. Ones that people would _not_ have read about much. And that people could relate to -- so I focussed on my own biases: I find machines that can run a GUI, or at least do something with graphics, more interesting than the ones that came before.
There are, of course, tons of the things. So I needed to narrow it down a bit.
Like the "Beckypedia" feature on Guy Garvey's radio show, I went looking for stuff of which I could say...
"And why am I telling you this? Because you need to know."
So, I went looking for stuff that was genuinely, deeply, seriously different
-- and ideally, stuff that had some pervasive influence.
And who knows, maybe I’ll spark an idea and someone will go off and build something that will render the whole current industry irrelevant. Why not? It’s happened plenty of times before.
And every single time, all of the most knowledgeable experts said it was a pointless, silly, impractical flash-in-the-pan. Only a few nutcases saw any merit to it. And they never got rich.
I've written a few times about a coming transition in computing -- the disappearance of filesystems, and what effects this will have. I have not had any constructive dialogue with anyone.
So I am trying yet again, by attempting to to rephrase this in a historical context:
There have been a number of fundamental transitions in computing over the years.
1st generation
The very early machines didn't have fixed nonvolatile storage: they had short-term temporary storage, such as mercury delay lines or storage CRTs, and read data from offline, non-direct-access, often non-rewritable media, such as punched cards or paper tape.
2nd generation
Hard disks came along in about 1953, commercially available in 1957: the IBM RAMAC. https://en.wikipedia.org/wiki/IBM_305_RAMAC
Now, there were 2 distinct types of directly-accessible storage: electronic (including core store for the sake of argument) and magnetic.
A relatively small amount of volatile storage, in which the processor can directly work on data, and a large amount of read-write non-volatile storage, whose contents must be transferred into volatile storage for processing. You can't add 2 values in 2 disk blocks without transferring them into memory first.
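To make that concrete, here is a minimal Python sketch of the two-level model (the block size and layout are invented for illustration, not any real disk format): the "disk" is a file of fixed-size blocks, and the only way to add two values stored in two blocks is to copy both into RAM, compute there, and write the result back.

```python
import struct
import tempfile

BLOCK_SIZE = 512  # an arbitrary block size for the sketch


def write_block(disk, block_no, data):
    """Copy a block of bytes from RAM out to the 'disk'."""
    disk.seek(block_no * BLOCK_SIZE)
    disk.write(data.ljust(BLOCK_SIZE, b"\x00"))


def read_block(disk, block_no):
    """Copy a block of bytes from the 'disk' into RAM."""
    disk.seek(block_no * BLOCK_SIZE)
    return disk.read(BLOCK_SIZE)


# A scratch file stands in for the disk.
with tempfile.TemporaryFile() as disk:
    # Two values live in two separate disk blocks...
    write_block(disk, 0, struct.pack("<q", 40))
    write_block(disk, 1, struct.pack("<q", 2))

    # ...and the CPU cannot add them in place. They must be
    # transferred into volatile storage (plain Python ints here),
    # added there, and the result written back out.
    a = struct.unpack_from("<q", read_block(disk, 0))[0]
    b = struct.unpack_from("<q", read_block(disk, 1))[0]
    write_block(disk, 2, struct.pack("<q", a + b))

    result = struct.unpack_from("<q", read_block(disk, 2))[0]
    print(result)  # 42
```

Every real OS hides this dance behind a filesystem API, but the dance is always there.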
This is just one of the fundamental models of computer architecture. However, it has been _the_ single standard architecture for many decades. We've forgotten there was ever anything else.
There was a reversion to machines with no directly-accessible storage in the late 1970s and early-to-mid 1980s, in the form of 8-bit micros with only cassette storage.
The storage was not under computer control, and was unidirectional: you could load a block, or change the tape and save a block, but in normal use for most people except the rather wealthy, the computer operated solely on the contents of its RAM and ROM.
Note: no filesystems.
Trying to forestall an obvious objection:
(Later machines, such as the ZX Spectrum 128 and Amstrad PCW, had RAMdisks, and therefore very primitive filesystems, but that was mainly a temporary stage due to processors that couldn't access >64kB of RAM and the inability to modify their ROMs to support widespread bank-switching, because it would have broken backwards-compatibility.)
Once all machines have this 2-level-store model, note that the 2 stores are managed differently.
Volatile store is not structured as a filesystem: it is dynamically constructed on the fly at every boot. It has little to no metadata.
Permanent store needs to have metadata as well as data. The computer is regularly rebooted, and then, it needs to be able to find its way through the non-volatile storage. Thus, increasingly elaborate systems of indexing.
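As a toy illustration of why permanent store needs that metadata (the class and its layout are invented here -- this resembles no real filesystem): a flat "disk" file, plus a tiny index mapping names to offsets, which is the job a real filesystem's directories and allocation tables do at vastly greater sophistication.

```python
import json
import os
import tempfile


class ToyFS:
    """A bare-minimum 'filesystem': data blocks plus an index.

    The index (the metadata) maps a name to (offset, length)
    within one flat file. Without it, nothing written before a
    reboot could ever be found again.
    """

    def __init__(self, path):
        self.path = path
        self.index_path = path + ".index"
        if os.path.exists(self.index_path):
            with open(self.index_path) as f:
                self.index = json.load(f)  # metadata survives "reboot"
        else:
            self.index = {}

    def write(self, name, data):
        with open(self.path, "ab") as f:
            f.seek(0, os.SEEK_END)
            offset = f.tell()
            f.write(data)
        self.index[name] = [offset, len(data)]
        with open(self.index_path, "w") as f:
            json.dump(self.index, f)  # persist the metadata too

    def read(self, name):
        offset, length = self.index[name]
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(length)


store = os.path.join(tempfile.mkdtemp(), "disk.img")
ToyFS(store).write("greeting", b"hello")

# A fresh instance -- as after a reboot -- finds the data again
# only because the index was written out alongside it.
recovered = ToyFS(store).read("greeting")
print(recovered)  # b'hello'
```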
But the important thing is that filesystems were a solution to a technology issue: managing all that non-volatile storage.
Over the decades it has been overloaded with other functionality: data-sharing between apps, security between users, things like that. It's important to remember that these are secondary functions.
It is near-universal, but that is an artefact of technological limitations: the fast, processor-local storage was volatile, and non-volatile storage was slow, and large enough that it had to be non-local. Non-volatile storage is managed via APIs and discrete hardware controllers, whose main job is transferring blocks of data from volatile to non-volatile storage and back again.
And that distinction is going away.
The technology is rapidly evolving to the point where we have fast, processor-local storage, in memory slots, appearing directly in the CPUs' memory map, which is non-volatile.
Example -- Flash memory DIMMs: https://www.theregister.co.uk/2015/11/10/micron_brings_out_a_flash_dimm/
Now, the non-volatile electronic storage is increasing rapidly in speed and decreasing in price.
Example -- Intel XPoint: https://arstechnica.com/information-technology/2017/02/specs-for-first-intel-3d-xpoint-ssd-so-so-transfer-speed-awesome-random-io/
Note the specs:
Reads as fast as Flash.
Writes nearly the same speed as reads.
Half the latency of Flash.
100x the write lifetime of Flash.
And this is the very first shipping product.
Intel is promising "1,000 times faster than NAND flash, 10 times denser than (volatile) DRAM, and with 1,000 times the endurance of NAND".
This is a game-changer.
What we are looking at is a new type of computer.
3rd generation
No distinction between volatile and non-volatile storage. All storage appears directly in the CPUs' memory map. There are no discrete "drives" of any kind as standard. Why would you? You can have 500GB or 1TB of RAM, but if you turn the machine off, then a day later turn it back on, it carries on exactly where it was.
(Yes there will be some caching and there will need to be a bit of cleverness involving flushing them, or ACID writes, or something.)
It ships to the user with an OS in that memory.
You turn it on. It doesn't boot.
What is booting? Transferring OS code from non-volatile storage into volatile storage so it can be run. There's no need. It's in the processor's memory the moment it's turned on.
It doesn't boot. It never boots. It never shuts down, either. You may have tell it you're turning it off, but it flushes its caches and it's done. Power off.
No hibernation: it doesn't need to. The OS and all state data will be there when you come back. No sleep states: just power off.
What is installing an OS or an app? That means transferring from slow non-volatile storage to fast non-volatile storage. There is no slow or fast non-volatile storage. There's just storage. All of it that the programmer can see is non-volatile.
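A rough way to preview that model on today's hardware is memory-mapping a file: to the program it is just bytes at an address, mutated in place with no read/write calls, and the state is still there the next time it is mapped. This is only an analogy -- a real NVDIMM appears in the physical memory map with no filesystem underneath -- and the file name here is invented.

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "nvram.bin")

# Create a small 'persistent memory' region backed by a file.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# First "power-on": the program sees plain addressable bytes
# and mutates them in place -- no explicit I/O calls at all.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)
    mem[0:5] = b"state"
    mem.flush()  # the cache-flush step mentioned above
    mem.close()

# Second "power-on": the state is simply still there.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)
    survived = bytes(mem[0:5])
    mem.close()

print(survived)  # b'state'
```

No loading, no saving, no booting in the usual sense: the program's working state and its "stored" state are the same bytes.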
This is profoundly different to everything since 1957 or so.
It's also profoundly different from those 1980s 8-bits with their BASIC or Forth or monitor in ROM, because it's all writable.
That is the big change.
In current machines, nobody structures RAM as a filesystem. (I welcome corrections on this point!) Filesystems are for drives. It doesn't matter what kind of drive: hard disk, SSD, SD card, CD, DVD, DVD-RW, whatever. Different filesystems, but all need data transferred to and from volatile storage to function.
That's going away. The writing is on the wall. The early tech is shipping right now.
What I am asking is, how will it change OS design?
All I am getting back is, "don't be stupid, it won't, OS design is great already, why would we change?"
This is the same attitude the DOS and CP/M app vendors had to Windows.
WordStar and dBase survived the transition from CP/M to MS-DOS.
They didn't survive the far bigger one from MS-DOS to Windows.
WordPerfect Corp and Lotus Corp tried. They still failed and died.
A bigger transition is about to happen. Let's talk about it instead of denying it's coming.
Acorn pulled out of making desktop computers in 1998, when it cancelled the Risc PC 2, the Acorn Phoebe.
The machine was complete, but the software wasn't. It was finished and released as RISC OS 4, an upgrade for existing Acorn machines, by RISC OS Ltd.
By that era, ARM had lost the desktop performance battle. If Acorn had switched to laptops by then, I think it could have remained competitive for some years longer -- 486-era PC laptops were pretty dreadful. But the Phoebe shows that what Acorn was actually trying to build was a next-generation powerful desktop workstation.
Tragically, I must concede that they were right to cancel it. If there had been a default version with 2 CPUs, upgradable to 4, and that were followed with 6- and 8-core models, they might have made it, but RISC OS couldn't do that, and Acorn didn't have the resources to rewrite RISC OS to do it. A dedicated Linux machine in 1998 would have been suicidal -- Linux didn't even have a FOSS desktop in those days. If you wanted a desktop Unix workstation, you still bought a Sun or the like.
(I wish I'd bought one of the ATX cases when they were on the market.)
I was a keen owner and fan of multiple Psion PDAs (personal digital assistants – today, I have a Psion 3C, a 5MX and a Series 7/netBook) and several Acorn desktop computers running RISC OS (I have an A310 and an A5000).
I was bitterly disappointed when the companies exited those markets. They still survive -- Psion's OS became Symbian and I had several Symbian devices, including a Sony-Ericsson P800, plus two Nokias -- a 7700 and an E90 Communicator. The OS is now dead, but Psion's handhelds still survive -- I'll get to them.
I have dozens of ARM-powered devices, and I have RISC OS Open running on a Raspberry Pi 3.
But despite my regret, both Psion's and Acorn's moves were excellent, sensible, pragmatic business decisions.
How many people used PDAs?
How many people now use smartphones?
A summary of where we are and where we might be going next.
Culled from a couple of very lengthy CIX threads.
A "desktop" means a whole rich GUI with an actual desktop -- a background you can put things on, which can hold folders and icons. It also includes an app launcher, a file manager, generally a wastebin or something, accessory apps such as a text editor, calculator, archive manager, etc. It can mount media and show their contents. It can unmount them again. It can burn rewritable media such as CDs and DVDs.
The whole schmole.
Some people don't want this and use something more basic, such as a plain window manager. No file manager, or they use the command line, or they pick their own, along with their own text editor etc., which are not integrated into a single GUI.
This is still a GUI, still a graphical UI, but may include little or nothing more than window management. Many Unix users want a bunch of terminals and nothing else.
A desktop is an integrated suite, with all the extras, like you get with Windows or a Mac, or back in the day with OS/2 or something.
The Unix GUI stack is as follows: