April 7th, 2014

So this kid on a mailing list is telling the world that Arch Linux is the best ever...

Apparently, it's the ultimate Linux, and with his tweaks to the current development kernel and a custom scheduler, it's insanely responsive, and if you haven't tried it, you're not a Linux god.

So I said...

Um. Good for you. I am pleased you've found a system you find nicely responsive.

Me, I just want something simple, low-maintenance and reliable, with a
good polished rich UI, that does what I need. The less work I have to
do to achieve this, the better the OS is, for me.

Yours sounds very high-maintenance indeed and I'm not remotely
interested in going to all that work.

I don't consider myself a Linux god. I am reasonably clueful. I've
been using Ubuntu since it came out in 2004, SuSE for a couple of
years before that, Caldera for a couple of years before that. That
followed a good few years on NT 3.51 and NT 4, which followed Windows
95. I switched to Windows 95 from OS/2 - I was a keen OS/2 user from
2.0 to 2.1 to 3.0. It really was the best 32-bit OS for PCs back then.

Before that, at work, I built, ran and supported servers running SCO
Unix and before that SCO Xenix. My Unix experience goes back to about
1988, which is when I switched over from the VAX/VMS I used at
University.

I have also used IBM AIX and SUN SunOS and Solaris, but not much.

Plus Novell Netware - I was a bit of a guru on Netware 2 and 3 but
wasn't so impressed with Netware 4 and have barely used 5. I wrote a
masterclass on building a small-business server with Red Hat 6 for PC
Pro magazine in the late 1990s. I've also reviewed about 20 or 30
Linux distros over the years, so I feel I know the Linux landscape
well.

I'm also very interested in alternative (non-Unix) OSes, especially
for the PC. BeOS is my personal all-time favourite.

Off PC hardware, I'm also pretty good on Mac OS X and classic Mac OS,
before that Acorn RISC OS and Psion EPOC and its successor Symbian,
and have some knowledge of AmigaOS and Atari GEM (I was peripherally
involved in the GPL FOSS FreeGEM project to revive PC GEM; my name's
in the credits of FreeDOS, to my startlement).

I was definitely an MS-DOS guru back in the late 1980s/early 1990s and
supported all the major networking systems - 3Com 3+Share, 3+Open, DEC
Pathworks, AppleShare, Sage MainLAN, Personal Netware, Netware Lite,
NT Server from the very first version, etc.

So I guess you could say that my knowledge is broad but in places
shallow, rather than very deep in any one area, such as Linux. :-)

But I feel really sorry for you if you think that /any/ Linux system
is genuinely fast and responsive. It's not. It's a huge lumbering
sloth of an OS. You really need to try BeOS, or failing that Haiku, if
you want to experience what a fast responsive OS on PC hardware feels
like.

Sadly, there just weren't the apps for it, and there were no VMs in those days.

And for something vastly more responsive than Haiku, try Acorn's RISC
OS. It's the original OS for the ARM chip that these days struggles to
run bloated leviathans like Apple iOS and Android. RISC OS is the
single most responsive system I've ever used, because the entire core
OS - kernel, GUI, main accessory apps - fits into about 6MB of Flash
ROM.

No, that's not a typo. Six megabytes. Complete Internet-capable
multitasking GUI OS with network clients etc.

It runs on the Raspberry Pi, and RISC OS itself is now shared-source
freeware, so you can download it from RISC OS Open Ltd. for nothing and
run it on a £25 computer - on which it performs very, very well, many
tens of times faster than a lightweight cut-down Linux such as
Raspbian.

So, no, not a Linux god, but, you know, not a n00b either.

Try some of these OSes. Prepare to be surprised. You might enjoy the experience.

Most of them have nice friendly GUI text editors, too, way friendlier
than Vi /or/ Emacs. ;-D

Taking a 10,000' view of modern OS design. (Warning: extended rant.)

Frankly, coming from a background in 1980s and 1990s OSes, I think modern ones are appalling shite. They're huge, baggy, flabby sacks of crap that drag themselves around leaving a trail of slime and viscera - but like some blasphemous shoggoth, they have organs to spare, and the computers they run on are so powerful and have so much storage that these disgusting shambling zombie Frankenstein's-monster things, stitched together from bits of the dead, dropping eyeballs and fingers, actually work for weeks on end.

On the server, no problem, run hundreds of instances of them, so when they implode, spawn another.

It's crap. It's all terrible, blatantly obvious utter crap, but there's almost nobody left who remembers any other way. I barely do, from old accounts, and I'm nearly 50.

We have layers of sticking-plaster and bandages over kernels that are hugely-polished turds, moulded into elegant shapes. These are braindead but have modules for every conceivable function and so can run on almost anything and do almost anything, so long as you don't mind throwing gigabytes and gigahertz at the problem.

And those shiny turds are written in braindead crap languages, designed for semi-competent poseurs to show off their manliness by juggling chainsaws: pointless Byzantine wank like pointer arithmetic, and missing basics like a string type, array bounds-checking, and operator overloading. Any language that even allows the possibility of a buffer or stack overflow is hopelessly broken and should be instantly discarded. The mere idea of a portable assembly language is a vestige of days when RAM was rationed and programmers needed to twiddle bits directly; it should have been history before the first machine with more than a megabyte of RAM per user was sold.

Computers should be bicycles for the mind. They let us take our existing mental tools and provide leverage, mechanical advantage, to let us do more.

We work in patterns, in sets, in rich symbols; it is how we think and how we communicate. That, then, should be the native language to which our computers aim: the logic of entities and sets of entities, that is, atoms and lists, not allocated blocks of machine storage - that is an implementation detail, it should be out of sight, and if it's visible, then your design is faulty. If you routinely need to access such things directly, then your design is not even wrong.

By the late '50s we had a low-level programming language that could handle this. It's unreadable, but it was only meant to be the low-level; we just never got the higher level wrapper to make it readable to mortals. The gods themselves can work in it; to lesser beings, it's all parens.
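The "atoms and lists" model the passage describes can be sketched in a few lines. This is a toy illustration in Python, not any real Lisp: the tokenizer, the tiny environment and every name in it are invented for the example. The point is that the program is itself just nested lists of atoms, which is what "it's all parens" means.

```python
# Toy s-expression evaluator: programs are nothing but atoms and lists.
# All names here (tokenize, parse, evaluate, ENV) are illustrative.

def tokenize(src):
    # Split "(+ 1 (* 2 3))" into a flat list of tokens.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Build nested Python lists: the program *is* a data structure.
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)          # drop the closing ")"
        return lst
    try:
        return int(tok)        # numeric atom
    except ValueError:
        return tok             # symbolic atom

# A two-entry environment stands in for the whole language.
ENV = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(expr):
    if isinstance(expr, int):   # atoms evaluate to themselves
        return expr
    if isinstance(expr, str):   # symbols look themselves up
        return ENV[expr]
    fn = evaluate(expr[0])      # lists are applications
    args = [evaluate(e) for e in expr[1:]]
    return fn(*args)

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7
```

Notice there is no separate notion of "binary" or "source" anywhere in the sketch: the list structure is both the thing you read and the thing the machine walks.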

Now, we have a rich choice of higher-level wrappers to make it all nice and easy and pretty. Really very pretty.

And later, people built machines specifically to run that language, whose processors understood its primitives.

But they lost out. CPUs were expensive, memory was expensive, so instead, OSes grew simpler; Unix replaced Multics, and CPUs grew simpler too, to just do what these simple OSes written in simple languages did. Result, these simple, stripped-down machines and OSes were way more cost-effective, and they won. The complex machines died out.

Then the simpler machines - which were still quite big and expensive - were stripped down even more, to make really cheap, rudimentary 4-bit CPUs for calculators, ones that fitted on one chip.

They sold like hotcakes, and were developed and refined, from 4-bit to 8-bit, from primitive 8-bit to better 8-bit, with its own de facto standard OS which was a dramatically simpler version of a simple, obsolete OS for 16-bit minicomputers.

And that chip begat a clunky segmented 8/16-bit one, and that a clunky segmented 16-bit one, and that a bizarre half-crippled 32-bit one that could emulate lots of the 8/16-bit one in hardware FFS. And that redefined the computer industry and it was nearly two decades until we got something slightly better, a somewhat-improved version of the same old same old.

And that's where we are now. The world runs on huge, vastly complex scaled-up go-faster versions of a simplified-to-the-maximum-extent-possible calculator chip. These chips grew out of a project to scale-down simple, dumb, brain-dead chips built to be cheap-but-quick because the proper ones, that people actually liked, were too expensive 40 years ago. Of course, now, the descendants of those simplified chips are vastly more complex than the big expensive ones their ancestors killed off.

And what do we run on them? Two OSes. One is a descendant of a quick-n-dirty lab skunkworks project to make an old machine useful for games, still written today in portable assembler, with richer things, written in a higher-level portable assembler, running on top of it. The other is a descendant of a copy of a copy of a primitive '60s mini OS, which has been extensively rewritten in order to imitate the skunkworks thing.

But these turds have been polished so brightly, moulded into such pretty shapes, that they've utterly dominated the world since my childhood. It's still all made from shit but it's been refined so much that it looks, smells and tastes quite nice now.

We still are covered in shit and flies - "binaries", "compilers", "linkers", "IDEs", "interpreters", "disk" versus "RAM", "partitions" and "filesystems", all this technical cruft that better systems banished before the first Mac was made, before the 80286 hit the market.

But as the preface to the Unix-Haters Handbook says:

``I liken starting one's computing career with UNIX, say as an undergraduate, to being born in East Africa. It is intolerably hot, your body is covered with lice and flies, you are malnourished and you suffer from numerous curable diseases. BUT, as far as young East Africans can tell, this is simply the natural condition and they live within it. By the time they find out differently, it is too late. They already think that the writing of shell scripts is a natural act.''

- Patrick Sobalvarro

Nobody knows any better any more. And when you try to point out that there was once something better, that there are other ways, that it doesn't need to be like this... people just ridicule you.

And no, in case it's not clear, I am not a Lisp zealot. I find it unreadable and cannot write "hello world" in it. I also don't want 1980s Lisp Machines back - they were designed for Lisp programmers, and I'm not one of them.

I want rich modern programming languages, as easy to read as Python, as expressive as Lisp, with deep rich integration into the GUI - not some bolt-on extra like a tool to draw forms and link them to bits of code in 1970s languages. There's no inherent reason why the same language shouldn't be usable by a non-specialist programmer writing simple imperative code, and also by a master wielding complex class frameworks like a knight with a lightsabre. It's all code to the computer: you should be able to choose your preferred viewing level, low-level homoiconicity or familiar Algol-like structures. There shouldn't be any difference between interpreted languages and compiled ones - it's all the same to the machine. JIT and so on solved this years ago. There's no need for binaries at all - look at Java, look at Taos and Intent Elate, look at Inferno's Limbo and Dis. Hell, look at Forth over 30 years ago: try out a block of code in the interpreter; once it works, name it and bosh, it's compiled and cached.
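The Forth workflow described here - experiment interactively, then name the working code and have it silently compiled and cached - can be mirrored in Python, which is used below purely as an analogy. The function name is invented for the example; the point is that "defining" and "compiling" are one step, and the compiled form lives on the named object with no separate binary anywhere.

```python
# Forth-style workflow, transposed to Python as an analogy.

# Step 1: "try out a block of code in the interpreter" - just run it.
print(sum(n * n for n in range(10)))   # experiment first: prints 285

# Step 2: "once it works, name it" - def compiles it in the same breath.
def sum_of_squares(limit):
    # Identical to the experiment above, now a named, reusable word.
    return sum(n * n for n in range(limit))

# Step 3: the named word is already compiled; its bytecode is cached
# on the function object itself. No link step, no binary on disk.
print(len(sum_of_squares.__code__.co_code) > 0)  # True: bytecode exists
print(sum_of_squares(10))                        # 285, same as the experiment
```

The same pattern holds in a Forth system with colon definitions, or in any image-based or JIT-ed environment: the boundary between "interpreted" and "compiled" is a property of the tooling, not of the code.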

Let's assume it's all FOSS. No need for licences mandating source distribution: the end-product is all source. You run the source directly, like a BASIC listing for a ZX Spectrum in 1983, but at modern speeds. If you aren't OK with that, if you don't like distributing your code, fine: go use a proprietary OS and we wish you well. Hope it still works on their next version, eh?

It could be better than we have. It should be better than we have. Think the Semantic Web all the way down: your chip knows what a function is, what a variable is, what a string or array is - there's no level transition where suddenly it's all bytes. There doesn't need to be.

And this stuff isn't just for programmers. I'm not a programmer. Your computer should know that a street address is an address, and with a single command you can look up anyone's address that is in any document on your machine - no need to maintain a separate address-book app. It should understand names and dates and amounts of money; there were apps that could do this in the 1980s. That we still need separate "word processors" and "spreadsheets" and "databases" today is a sick joke.

I have clients who keep all their letters in one huge document, one page or set of pages per correspondent... and there's nothing wrong with that. We shouldn't be forced to use abstractions like files and documents and folders if we don't want to.

I have seen many clients who don't understand what a window is, what a scrollbar does; these abstractions are too complex for them, even for college professors after decades of use of GUIs. That's why iPads are doing so well. You reach out and you pull with a fingertip.

And that's fine, too. The ancestor of the iPad was the Newton, but the Newton that got launched was a crippled little thing; the original plan was a pocket Lisp Machine, with everything in Dylan all the way down to the kernel.

And the ancestor of the Macintosh was Jef Raskin's "information appliance", with a single global view of one big document. Some bits local, some remote; some computed, some entered; some dynamic, some static; with the underlying tools modular and extensible. No files, no programs, just commands to calculate this bit, reformat that bit, print that bit there and send this chunk to Alice and Charlie but not Bob who gets that other chunk.

Sounds weird and silly, but it was, as he said, humane; people worked for millennia on sheets of paper before we got all this nonsense of icons, files, folders, apps, saving, copying and pasting. The ultimate discrete computer is a piece of smart paper that understands what you're trying to do.

And while we might be able to get there building on bytes in portable assembler, it will be an awful lot harder, tens to hundreds of times as much work, and the result won't be very reliable.