Fri, Nov. 11th, 2016, 12:30 pm
More of the same -- a rant copied from Facebook. Don't waste your time. ;-)

I'm mainly putting this here to keep it around, as writing it clarified some of my thinking about technological generations.

From https://www.facebook.com/groups/vintagecomputerclub/

You're absolutely right, Jim.

The last big advances were in the 1990s, and since then, things have just stagnated. There are several reasons why -- like all of real life, it's complex.

Firstly, many people believe that computing (and _personal_ computing) began with the 8-bits of the late 1970s: the Commodore PETs, Apple ][s and things -- that before them, there were only big boring mainframes and minicomputers, room-sized humming boxes managing bank accounts.

Of course, it didn't. In the late '60s and early '70s, there was an explosion of design creativity, with personal workstations -- Lisp Machines, the Xerox PARC machines: the Alto, Star, Dandelion and so on. There were new cutting-edge designs, with object-oriented languages, graphical user interfaces, networking, email and the internet. All before the 8-bit microprocessors were invented.

Then what happened is a sort of mass extinction event, like the end of the dinosaurs. All the weird clever proprietary operating systems were overtaken by the rise of Unix, and all the complex, very expensive personal workstations were replaced with microcomputers.

But the early micros were rubbish -- so low-powered and limited that all the fancy stuff like multitasking was thrown away. They couldn't handle Unix or anything like it. So decades of progress were lost, discarded. We got rubbish like MS-DOS instead: one program, one task, 640kB of memory, and only with v2 did we get subdirectories and with v3 proper hard disk support.

A decade later, by the mid-to-late 1980s, the micros had grown up enough to support GUIs and sound, but instead of being implemented on elegant grown-up multitasking OSes, we got them re-implemented, badly, on primitive OSes that would fit into 512kB of RAM on a floppy-only computer -- so we got ST GEM, Acorn RISC OS, Windows 2. No networking, no hard disks -- they were too expensive at first.

Then a decade after that, we got some third-generation 32-bit micros and 3rd-gen microcomputer OSes, which brought back networking and multitasking: things like OS/2 2 and Windows NT. But now, the users had got used to fancy graphics and sound and whizzy games, which the first 32-bit 3rd-gen OSes didn't do well, so most people stuck with hybrid 16/32-bit OSes like Windows 9x and MacOS 8 and 9 -- they didn't multitask very well, but they could play games and so on.

Finally, THREE WHOLE DECADES after the invention of the GUI and multitasking workstations and everything connected via TCP/IP networking, we got 4th-gen microcomputer OSes: things like Windows XP and Mac OS X. Both the solid multitasking basis with networking and security, AND the fancy 3D graphics, video playback etc.

It's all been re-invented and re-implemented, badly, in a chaotic mixture of unsuitable and unsafe programming languages, but now, everyone's forgotten the original way these things were done -- so now, we have huge, sprawling, messy OSes and everyone thinks it's normal. They are all like that, so that must be the only way it can be done, right? If there was another way, someone would have done it.

But of course, they did do it; it's just that only really old people remember it or saw it, so it's myth and legend. Nobody really believes in it.

Nearly 20y ago, I ran BeOS for a while: a fast, pre-emptive multitasking, multithreaded, 3D and video capable GUI OS with built-in Internet access and so on. It booted to the desktop in about 5 seconds. But there were few apps, and Microsoft sabotaged the only hardware maker to bundle it.

This stuff _can_ be done better: smaller, faster, simpler, cleaner. But you can't have that and still have compatibility with 25y worth of DOS apps or 40y worth of Unix apps.

So nobody used it and it died. And now all we have is bloatware, but everyone points at how shiny it is, and if you give it a few billion kB of RAM and Flash storage, it actually starts fairly quickly and you only need to apply a few hundred security fixes a year. We are left with junk reimplemented on a basis of more junk, and because it's all anyone knows, they think it's the best it could be.

Sat, Nov. 12th, 2016 10:47 pm (UTC)
waistcoatmark

The more I look at Rust, the more impressed I am by it. It's not a silver bullet, but it does make entire classes of bugs inexpressible outside (what I hope are a few, small) explicitly unsafe sections.
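
A toy sketch of that point (mine, not from any docs; assumes current stable Rust):

    fn main() {
        // Safe Rust refuses the classic dangling reference; uncommenting
        // this fails to compile with "`s` does not live long enough":
        //
        // let dangling: &String;
        // {
        //     let s = String::from("hello");
        //     dangling = &s;
        // }
        // println!("{}", dangling);

        // The same class of bug *can* still be written, but only inside an
        // explicitly marked `unsafe` block -- a small, greppable surface.
        let p: *const i32;
        {
            let x = 42;
            p = &x; // taking the raw pointer is safe; dereferencing is not
        }
        let _oops = unsafe { *p }; // undefined behaviour, loudly labelled
    }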

It doesn't have Java's occasional stop-the-world garbage-collection pauses, and from what I've seen it's a lot easier to interoperate with foreign code than JNI et al. It allows you to write your own type-safe collections (looking at you, Go). It's small enough that you don't spend the first month of a project deciding which subset of the language you should use (C++). Run speed is comparable to C++, and I'd hope (but have yet to play with anything large enough to tell) that its build speed is better (if nothing else due to importing pre-compiled modules rather than having to re-compile included text files).
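
And on the collections point, a minimal sketch of a home-grown generic container -- type-checked at compile time, with none of the interface{}-and-cast dance:

    // A home-grown, type-safe collection via generics.
    struct Stack<T> {
        items: Vec<T>,
    }

    impl<T> Stack<T> {
        fn new() -> Self {
            Stack { items: Vec::new() }
        }
        fn push(&mut self, item: T) {
            self.items.push(item);
        }
        fn pop(&mut self) -> Option<T> {
            self.items.pop()
        }
    }

    fn main() {
        let mut s: Stack<i32> = Stack::new();
        s.push(1);
        s.push(2);
        assert_eq!(s.pop(), Some(2));
        // s.push("three"); // compile-time type error, not a runtime panic
    }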

It still suffers from BeOS-style chicken and egg problems, but hopefully Mozilla can give it sufficient momentum to take off. I just need an opportunity to give it a proper spin myself...

Sun, Nov. 13th, 2016 03:19 pm (UTC)
liam_on_linux

I note your comment in re Go.

Overall, how would you say they fare in comparison?

In my world, mainly Linux/FOSS these days, Go is getting substantially more attention.

Mon, Nov. 14th, 2016 10:33 pm (UTC)
waistcoatmark

At the Big G, Go is primarily used as a replacement for Python. It's not a good replacement for C++. The Java folks can't see any reason to switch: its few advantages over Java aren't enough to outweigh the disadvantages (lack of generics and switching cost being the main ones). But the Python folks love it. Its threading and execution speed are a vast improvement, the compilation speed is so fast that it's not too painful, and Python folks don't mind the lack of type safety. Having said that, if it hadn't been developed by an important Googler, it's doubtful that it would have been taken up as an Official Language.

Rust has a fair number of engineers (including myself) looking at it closely and wondering if there's any way we can justify adding it to Official Language status. But the costs of taking on a new language are enormous (adding it to the build infrastructure, writing client libraries for all our services, getting a critical mass of developers to learn it, etc.).

I think Go is also a more mature language; Rust still seems more in flux -- I hear of new features still being added to nightly builds, etc.

Tue, Nov. 15th, 2016 09:26 pm (UTC)
uon

You talk a lot about bloatware: what exactly would you like to throw away?

(Concerning safe languages: yeah, I really like what I've seen of Rust so far. If they ever get something looking like the SciPy stack I'll be over like a shot.)

Tue, Nov. 15th, 2016 09:52 pm (UTC)
liam_on_linux

Currently, I've got A2/Bluebottle running in a VM. I'm wondering if I can use my ancient, meagre Pascal knowledge to get up to speed enough in Oberon to even attempt to port it to a Raspberry Pi 2/3 natively.

It's an interesting example: the old Oberon OS, inspired by the same Xerox PARC research that led to the Alto/Star/Dandelion etc., Smalltalk, and indirectly to the Lisa and Macintosh, and thence to Windows... but with a full windowing GUI layered on top.

Multithreaded, multiprocessor-capable, networking-capable, but single-user, all written in one type-safe garbage-collected language.

Interestingly, it doesn't seem to natively support subdirectories. As far as I can tell so far, all the files -- binaries, data, the lot -- live in one root directory.

No, it's no Unix replacement, but it's an interesting demonstration of what can be done.

As a desktop OS, BeOS was close to perfection for me. A query-based 64-bit FS with extensible metadata, a clean, modern, 32-bit GUI-based OS on top. Rich media and 3D support, multithreaded/multiprocessor aware. Fast as all hell, a rich desktop comparable to both the Windows Explorer and classic MacOS. Rich apps, just not many of them. Ran like a modern multigigahertz machine with a shedload of RAM and an array of SSDs, but did it in 64MB of RAM on a Pentium or PowerPC at about 100MHz. POSIX-compatible, had a Unix shell and Unix apps could be ported without massive effort.

And about 100 users in the world, of course.

But it shows what _could_ be done. It _was_ done.

So, one example. I don't need multiuser support. That is, for PCs, a weird legacy thing. I don't need retargetable graphics (that seems to be going away with Wayland and Mir anyway). So long as I have a JVM and a good capable web browser, I don't need app compatibility with anything much, ta. If stuff can be ported to it or run in a VM or a remote desktop client of some kind, fine.

Haiku is struggling to get there, but one of the reasons it's struggling is that it's trying to retain backwards-compatibility. Noble, but it may well doom it.

I have read that Be jumped on the C++ ship before the language was really mature and that caused problems; I don't know enough to tell. I've also heard that Symbian made the same mistake, before the language was even fully standardised, so Symbian devs had to do weird nonstandard things.

A2/Bluebottle's simplistic filesystem might have an interesting side-effect, though.

Filesystems are utterly integral to all modern OSes. The distinction between (stuff in nonvolatile, block-addressable storage) versus (stuff in live dynamic storage) is fundamental and has been since magnetic media were invented.

But I think it may go away relatively soon. Intel, HP and others are working on what is basically nonvolatile RAM. If they get there, and I think they will, we'll have computers without the notion of storage drives. They'll have a terabyte or so of flat, nonvolatile storage. How will Unix or Windows cope with that? Partition it into "the drive" and "memory" and shuttle stuff between them? That's absurd. But if we do away with filesystems except on servers with magnetic media, all current OS design goes out the window.

So why not start again? There will be servers around to host VMs or containers -- 2 technologies that are converging, remarkably -- and they can host the legacy stuff.

We're facing another big shift in hardware tech. Software hasn't really addressed the last one yet.

Tue, Nov. 15th, 2016 10:19 pm (UTC)
uon

There's little distinction between filesystem and memory in modern Linux: you allocate RAM by mmap()ing /dev/zero, and caching is often aggressive enough that "file" data might as well be in memory.
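
A concrete sketch of that idiom (assuming Linux; rendered in Rust with the libc crate, though the classic version is a few lines of C):

    // "Allocating RAM" by mapping the /dev/zero pseudo-file: the
    // filesystem interface handing out plain memory.
    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    fn main() {
        let zero = File::open("/dev/zero").expect("open /dev/zero");
        let len = 4096;
        let ptr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                len,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_PRIVATE, // copy-on-write: writes never reach a "file"
                zero.as_raw_fd(),
                0,
            )
        };
        assert_ne!(ptr, libc::MAP_FAILED, "mmap failed");

        // What the filesystem handed back is indistinguishable from heap memory.
        let mem = unsafe { std::slice::from_raw_parts_mut(ptr as *mut u8, len) };
        mem[0] = 42;
        assert_eq!(mem[0], 42);

        unsafe { libc::munmap(ptr, len) };
    }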

Tue, Nov. 15th, 2016 10:14 pm (UTC)
liam_on_linux

But we can't get to something simpler by cutting down anything existing. It's got to be a clean start.

Precedents: Be made one. Psion did with EPOC and later Symbian. Apple did with NewtonOS, although what shipped was a flawed shadow of what was planned. It's been done and turned into real shipping products and some did very well for a while.

Google is, allegedly, preparing its own kernel to replace Linux in Android. But apart from NewtonOS, all those were quite traditional, based on standard, tried-and-tested notions of disks, files, folders, RAM, apps loading and saving data.

So what would I leave out?

Well, how about something in a type-safe language, *not* multiuser-capable but with hardware-protected process isolation, with *no filesystem* but rich interprocess communication instead. Designed to suspend and resume rather than boot and shutdown: dumping a run-state snapshot into part of its memory for maintenance. (Precedent: this is how Smalltalk machines worked.) Rebooting would be something analogous to installing the OS today: possibly only done once at the factory.
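
To make the suspend/resume idea concrete, a toy sketch (illustrative Rust; a real image-based system like a Smalltalk machine snapshots the whole heap, where here a single counter stands in for the run-state):

    use std::fs;

    fn main() {
        // "Resume": reload the previous run-state snapshot, if any.
        let mut counter: u64 = fs::read_to_string("image.snapshot")
            .ok()
            .and_then(|s| s.trim().parse().ok())
            .unwrap_or(0);

        counter += 1;
        println!("resumed {} times", counter);

        // "Suspend": dump run-state back out instead of shutting down cold.
        fs::write("image.snapshot", counter.to_string()).expect("snapshot failed");
    }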

A microkernel-like design so that any module of code can be stopped, updated and restarted at any time, without stopping the system. Precedent: Minix 3.
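
The shape of it, sketched (illustrative Rust, with threads standing in for isolated OS-level services):

    use std::thread;
    use std::time::Duration;

    // A "service module" that crashes partway through its work.
    fn run_service() -> thread::JoinHandle<()> {
        thread::spawn(|| {
            thread::sleep(Duration::from_millis(100));
            panic!("service module crashed");
        })
    }

    fn main() {
        // The supervisor notices the dead module and restarts it while the
        // rest of the system carries on -- the Minix 3 reincarnation idea.
        let mut restarts = 0;
        while restarts < 3 {
            if run_service().join().is_err() {
                restarts += 1;
                println!("service died; restarting (attempt {})", restarts);
            }
        }
        println!("giving up after {} restarts", restarts);
    }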

It might be rubbish for servers but there are plenty of OSes for servers and they won't go away.

A baseline abstraction rather richer than bytes and words: say a translation layer. Precedent: IBM System i (AKA OS/400)'s TIMI. A low-level layer which presents lists as an atomic type. That's a model that's survived for nearly 40y now. This could provide for processor-independence.
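
To make that less hand-wavy, a hypothetical sketch (names all mine, and nothing like the real TIMI's design) of a baseline where lists are primitive rather than faked on top of bytes:

    // The "machine level" deals in tagged values; programs above it never
    // see the encoding, so the hardware underneath can change freely.
    #[derive(Debug, Clone)]
    enum Value {
        Int(i64),
        Text(String),
        List(Vec<Value>), // atomic at this layer, not an app-level fiction
    }

    fn main() {
        let record = Value::List(vec![
            Value::Int(1978),
            Value::Text(String::from("System/38")),
            Value::List(vec![Value::Int(1), Value::Int(2), Value::Int(3)]),
        ]);
        println!("{:?}", record);
    }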

For developers, some really rich languages -- something Lisp-like, as has already been independently reinvented (they claim) in Urbit, for the elite folks who can think in abstract syntax trees.

I know the stories about the productivity of Lispers sound like fairy tales today but they were very well-substantiated in their time. They really were. Mostly lost to legend now.

Something more conventional like Dylan for the less gifted among us, the mortals who used to make a living in Delphi. Algebraic notation, conventional syntax. Precedent: Julia is doing interesting things with homoiconicity while looking (in very broad terms) C-like.

But an important element is that the easy, more readable language maps cleanly onto the lower-level one, as CGOL or Dylan maps onto Common Lisp.

This probably sounds even _more_ ludicrous but I'm not 100% convinced that OOPS is essential. Elements of it -- the isolation and hiding -- but not the whole schmoigle. From what I've read, syntactic macros and homoiconicity can deliver more. Clojure, for instance, provides some syntactic sugar (I may be using that term wrongly) to be a bit more readable than traditional Lisps, but I think a key design concern should be simplicity.
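
For a taste of what syntactic macros buy you (a trivial sketch in Rust, the thread's other pet language): a new control construct added by plain syntax rewriting, no object machinery involved:

    // `unless` as a new surface construct, rewritten into core `if` before
    // type-checking -- syntax extension without objects.
    macro_rules! unless {
        ($cond:expr, $body:block) => {
            if !($cond) $body
        };
    }

    fn main() {
        let ram_kb = 640;
        unless!(ram_kb >= 1024, {
            println!("only {} kB -- should be enough for anybody", ram_kb);
        });
    }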

Initially, trying to keep the whole thing small enough and simple enough that a single individual can understand it with a month or two's study. It should be fun to play with. One person should be able to fathom it, as they could with the 1980s home micros. Precedent: this was a design goal of Oberon. It can be small and simple while still being a rich multitasking multicore internet-capable OS, without being as weird as TempleOS. It's been done.

Throw away most of what current OSes provide. Offer clean simple versions of what has succeeded in the past, before the rise of the microprocessor.

Does that answer you?

Sun, Nov. 20th, 2016 07:59 pm (UTC)
uon

(Sorry, life got in the way for a bit there.)
> Well, how about something in a type-safe language, [...]

If you're going to be prescriptive about what language everybody uses, you're already vastly cutting down the number of people who are likely to be interested in using this environment, and network effects like this matter a lot for something like OSes, because you need it to be worthwhile to keep the latest version of Chrome or Photoshop or gcc or whatever ported to it. Otherwise you need to accept that this will forever be a niche project (which is fine!).

I do understand (and to some degree share) your desire for simplicity; I fear however that for an all-purpose end-user OS to be simple in these terms is a contradiction. Even if your stripped-down beautiful OS becomes wildly popular, once it becomes popular, people will want to run popular apps on it, and that means porting over the huge crufty libraries of whatever that these apps are built upon, and people will use your OS for new things, and different people will do the new things in different ways, and no obvious standard will emerge for a while so you've got to either duplicate functionality or write crufty translation layers, and then later a standard does emerge so everybody codes to that but has to retain the crufty bits because there's always a couple of folks running ancient versions of things, and a decade or two later you're wondering why everything's getting so bloated.

This effect gets far worse the more popular your platform is.

I *do* think that something really stripped down is very desirable for certain areas (eg hello IoT), I just think that it's sort of contradictory to expect it to work out for an all-purpose OS.

> *not* multiuser-capable

What's your beef with multi-user? It doesn't even need to be visible to end-users to provide you with a useful level of isolation (eg the root user on macOS).

> *no filesystem*

Again, what problems do you see caused by filesystems? How do you propose to manage non-transient state? Are you sure you're not going to wind up with a bunch of half-assed quasi-filesystems instead? It sort of reminds me a bit of how people were all "ooh, relational databases are too complicated and too much faff, let's go NoSQL!" and while it was great for certain domains, at some point you start regretting not having transactions or referential integrity or even thinking about going down this route at all because MongoDB has just shat the bed again.

> dumping a run-state snapshot into part of its memory for maintenance

Ah, the Microsoft Word model of state management! A thing of wonder while it works, a mindbending clusterfuck when it goes wrong, particularly when it actually went wrong three weeks ago but you've only just noticed now that the problem has metastasized into something extensive and tentacular enough for you to notice.

Most of what you're talking about is a language stack rather than an OS, and sounds rather OS-agnostic to me.

> for the elite folks who can think in abstract syntax trees.

Dude, they teach the basics of this in primary school when they tell you what verbs and nouns and adjectives are and draw little lines to connect each bit of a sentence, even if they don't teach you that it's called "parsing". It's not a mystical power available only to a shady priesthood, and casting it in this way helps nobody. Go read some Julia Evans.

> This probably sounds even _more_ ludicrous but I'm not 100% convinced that OOPS is essential

Oh, God, no, not ludicrous at all.

> From what I've read

This is a terrible, terrible basis upon which to make judgements of or pronouncements upon software design.

Fri, Jan. 6th, 2017 04:31 pm (UTC)
liam_on_linux

This has been on my to-do list for over a month. There are so many things to address that it's silly.

So, to take one -- OOP.

Here's a good response. Not mine, note. Parts 1 & 3 are relevant, but I think pt 2 covers the main ground.

http://prog21.dadgum.com/156.html

I think it might be one of those things that is so much of a given now, in so many languages, that it is hard to step outside it. There's been some interesting discussion on the Oberon mailing list recently ( https://lists.inf.ethz.ch/mailman/listinfo/oberon ) about OOP in Oberon. AIUI -- very imperfectly -- Oberon lacked it, Oberon-2 added it, later versions either removed it again or enhanced it, and now it's rather confusing. Chris Burrows explained it thus:

«

The two descendants of Oberon should be viewed as two siblings from the same parent rather than a grandparent-parent-child relationship.

Oberon was designed by Niklaus Wirth.

The evolution of Oberon that led to Oberon-2 was primarily the work of Hanspeter Mossenbock via his work with Josef Templ and Robert Griesemer on "Object Oberon". You can discover the motivation behind this by reading their paper titled "Object Oberon. An Object-Oriented Extension of Oberon." ETH Technical Report 109, 1989:

http://e-collection.library.ethz.ch/eserv/eth:3204/eth-3204-01.pdf

Oberon-07 was designed by Niklaus Wirth as an evolution of Oberon (not Oberon-2) in the opposite direction. It was designed to further simplify the language, not to extend it. His motivation is somewhat different. For an explanation see "Oberon, the result of simplification" in "Computers and Computing - A Personal Perspective", Niklaus Wirth, December 2015:

https://www.inf.ethz.ch/personal/wirth/Miscellaneous/index.html

»

It seems that there are now 2 factions -- one side sees it as essential, the other -- more interested perhaps in minimalism -- sees it as a distraction.

As is so often the case, a simple image can cut through the discussion.

Fri, Jan. 6th, 2017 04:34 pm (UTC)
liam_on_linux

To pick another...

>> *no filesystem*
>Again, what problems do you see caused by filesystems?

I see no problems as such caused by filesystems. I hope you'll forgive me re-using an answer from another place:

It's about a fairly major shakeup in near-future operating systems design which almost nobody seems to have noticed is coming. HP, with its memristor technology, is one of the only companies taking it seriously -- as per:

http://www.theregister.co.uk/2016/11/24/hpes_machinations_to_rewrite_server_design_laws/

... and nobody is taking HP seriously.

The executive summary:

Since the 1960s, virtually all computer designs have shared one underlying characteristic: that there are 2 types of storage, dynamic and nonvolatile. RAM and disks, essentially. This assumption underpins all OS design: stuff lives on disk, and you load it into RAM to work on it, then save the results back.

That's going away. Most computers now have no disk drives -- smartphones and tablets and lightweight laptops all have SSDs: Flash storage. To an approximation, fast RAM and slow RAM. SSDs are a kind of nonvolatile memory accessed as if it were a spinning disk drive.

HP, Intel and others are all working feverishly on nonvolatile memory, such as Intel & Micron's 3D XPoint.

https://www.micron.com/about/emerging-technologies/3d-xpoint-technology

It's taking time, and the industry is poking fun at it because it doesn't understand it. It's radically new tech and it's more profoundly transformative than anyone has spotted.

But if it, or something like it, pans out, which it almost certainly will, then we'll have computers with a terabyte or so of single-level storage. Just one big pool of memory, which is as fast as RAM but will hold its contents when the computer's turned off. No disks, no distinction between "RAM" and "ROM" and "SSD".

And that will render all current operating systems obsolete, because they are entirely predicated around moving stuff between disks and RAM, and managing stuff on disk -- and that is all about to become a legacy technology.