A response to a Reddit question
I can only agree with you. I have blogged and commented enough about this that I fear I am rather unpopular with the GNOME developer team these days. :-(
The direct reason for the sale is that, in founder Mark Shuttleworth's view, Ubuntu's famous bug #1 ("Microsoft has a majority market share") is now closed.
His job is done. He has helped to make Linux far more popular and mainstream than it was. Due to Ubuntu being (fairly inarguably, I'd say) the best desktop distro for quite a few years, all the other Linux vendors [disclaimer: including my employer] switched away from desktop distros and over to server distros, which is where the money is. The leading desktop is arguably now Mint, then the various Ubuntu flavours. Linux is now mainstream and high-quality desktop Linuxes are far more popular than ever and they're all freeware.
Shuttleworth used an all-FOSS stack to build Thawte. When he sold it to Verisign in 1999, he made enough that he'd never need to work again. Ubuntu was a way for Shuttleworth to do something for the Linux and FOSS world in return.
Thus, Shuttleworth is preparing Ubuntu for an IPO and flotation on the public stock market. As part of this, the company asked the biggest techie community what they'd like to see happen: https://news.ycombinator.com/item?id=14002821
The results were resounding: drop all the Ubuntu-only projects and switch back to upstream ones. Sadly, this mostly means Red Hat-backed projects, as Red Hat is the upstream developer of systemd, PulseAudio, GNOME 3, Flatpak and much more.
Personally I am interested in non-Windows-like desktops. I think the fragmentation in the Linux desktop market has been immensely harmful, has destroyed the fragile unity (pun intended) that there was in the free Unix world, and the finger of blame can be firmly pointed at Microsoft, which did this intentionally. I wrote about this here: https://www.theregister.co.uk/Print/2013/06/03/thank_microsoft_for_linux_desktop_fail/
The Unity desktop came out of that, and that was a good thing. I never liked GNOME 2 much and I don't use MATE. But Unity was a bit of a lash-up behind the scenes, apparently, based on a series of Compiz plugins. It was not super stable and it was hard to maintain. The unsuccessful Unity-2D fork was killed prematurely (IMHO), whereas Unity 8 (the merged touchscreen/desktop version) was badly late.
There were undeniably problems with the development approach. Ubuntu has always faced problems with Red Hat, the 800 lb gorilla of FOSS. The only way to work with a RH-based project is to take it and do as you're told. Shuttleworth has written about this: https://www.markshuttleworth.com/archives/654
(See the links in that post too.)
Also, some contemporary analysis: https://www.osnews.com/story/24510/shuttleworth-seigo-gnomes-not-collaborating/
I am definitely not claiming that Ubuntu always does everything right! Even with the problems of working with GNOME, I suspect that Mir was a big mistake and that Ubuntu should have gone with Wayland.
Cinnamon seems to be sticking rather closer to the upstream GNOME base for its different desktop. Perhaps Unity should have been more closely based on GNOME 3 tech, in the same way.
But IMHO, Ubuntu was doing terrifically important work with Unity 8, and all that has come to nothing. Now the only real convergence efforts are the rather half-hearted KDE touchscreen work and the ChromeOS-on-tablet work from Google, which isn't all-FOSS anyway TTBOMK.
I am terribly disappointed they surrendered. They were so close.
I entirely agree with you: Unity was _the_ best Linux desktop, bar none. A lot of the hate was from people who never learned to use it properly. I have seen it castigated for lacking features that were in fact basic built-in functionality which people simply never found out how to use.
In one way, Unity reminded me of OS/2 2.0: "a better DOS than DOS, a better Windows than Windows." And it *was*! Unity was a better Mac OS X desktop than Mac OS X. I'm typing on a Mac now and there are plenty of things it can't do that Unity could. Better mouse actions. *Far* better keyboard controls.
I hope that the FOSS forks do eventually deliver.
Meantime, I reluctantly switched to Xfce. It's fine, it works, it's fast and simple, but it lacks functionality I really want.
Originally posted by ccdesan. Reposted by liam_on_linux at 2019-01-23 12:31:00.
I need to go shopping, that's all there is to it. (These were originally published in MAD Magazine #515, June 2012. Writer: Scott Maiko; artist: Scott Bricher.)
Another recycled Quora answer.
The main reason is fairly simple.
Windows was designed to be easy to use, and to be compatible with older Microsoft operating systems, notably 16-bit Windows and DOS.
So, for example, by design, it treats any file with certain extensions (.EXE, .COM, .CMD, .BAT, etc.) as executable and will try to run them.
In contrast, Unix systems do not do this. They will not run a file, even an executable one, unless it is specifically _marked_ as executable _and_ you have permission to run it; and by default, Unix does not look in the current directory for executables.
This makes Unix less friendly, but more secure.
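A minimal Python sketch of the Unix side of this (filename and script contents are arbitrary; the point is the execute bit, not the `.sh` extension):

```python
import os
import stat
import tempfile

# Create a script file. On Unix, a freshly created file is NOT executable,
# whatever its name or extension -- unlike Windows, where .EXE/.BAT/etc.
# are run on sight.
with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
    f.write("#!/bin/sh\necho hello\n")
    path = f.name

mode = os.stat(path).st_mode
print(bool(mode & stat.S_IXUSR))                    # False: no execute bit yet

# The file must be explicitly *marked* executable before the OS will run it.
os.chmod(path, mode | stat.S_IXUSR)
print(bool(os.stat(path).st_mode & stat.S_IXUSR))   # True

# Even then, a shell won't find it by name unless '.' is on $PATH --
# and by default, it isn't.
os.unlink(path)
```

So a mailed-in "funny picture.sh" is inert until somebody deliberately flips that bit, which is the security win described above.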
Microsoft also made other mistakes. For instance, it wanted to promote Internet Explorer to prevent Netscape getting control of the nascent browser market. To do this, it bundled IE with all copies of Windows. This was challenged in court as anti-competitive — which it was — and a demonstration was staged, in court, by intellectual property expert Lawrence Lessig, showing that IE could be uninstalled from Windows.
To counter this, MS tied IE in more deeply. So, for instance, Windows 98 has a multi-threaded Explorer, based in part on IE code. Window contents are rendered to HTML and IE then displays that content.
This means that all a hostile party has to do is embed a virus into a suitable image file, such as an icon or a folder wallpaper, and IE will render it. Exploit that IE process and you own the computer.
I think our current style of rich, full-function, "power user" OSes is really starting to show signs of going away. (Yes, it's another of those handwavey sort of big-picture things.)
(Actually I should be drafting a FOSDEM talk about this right now, but I'm having a £1.50 draught beer & on FB instead. Gotta <3 České dráhy.)
Kids -- i.e., in the Douglas Adams sense, anyone under about 35 -- are more used to phones and tablets. The next billion people to come online will know nothing else.
I reckon OSes will become more like Android, iOS, and ChromeOS -- self-updating, without a "desktop" or much in the way of local storage or rich window management (for instance, see what's happening to GNOME) and fairly useless unless connected to the Internet and various invisible servers off in the "cloud" somewhere.
Some "apps" will run locally, some will be web apps, some will be display sessions on remote servers. There will be little visible distinction.
We'll have no local admin rights, just the ability to put on sandboxed apps that only interact via the online connection. No local storage, or just a cache. No software updates; the OS will handle that. No easy way to see what apps are running or where.
What'll drive this is the sky-rocketing cost of supporting rich, traditional, local OSes.
It will have the side-effect of blurring the lines between a workstation, a laptop, a tablet and a phone.
For some current examples, see the Red Hat Atomic Workstation project, Endless OS, and SUSE Kubic MicroOS. Read-only root FS, updates via a whole-system image pushed out periodically. Only containerised apps are allowed: there's not even a local package manager.
(Adapted from a Quora answer.)
OS/2 1.x was a clean-sweep, largely legacy-free OS with only limited backwards compatibility with DOS.
OS/2 2.x and later used VMs to do the hard stuff of DOS emulation, because they ran on a chip with hardware-assisted DOS VMs: the 80386’s Virtual86 mode.
NeXTstep was a Unix. It predated FreeBSD, but it was based off the same codebase: BSD 4 Unix. It “only” contained a new display layer, and that itself was based off existing code — Adobe PostScript — and the then-relatively-new technique of object-oriented development. Still substantial achievements, but again, built on existing code, and with no requirement for backwards compatibility.
BeOS was a ground-up new OS which wasn’t backwards or sideways compatible with anything else at all.
NT is based on OS/2 3.x, the planned CPU-independent portable version, with a lot of design concepts from DEC VMS incorporated, because it had the same lead architect, Dave Cutler. Again, the core NT OS isn't compatible with anything else. This is rarely understood. NT is not a Win32-compatible kernel. NT isn't compatible with anything else, including VMS. It's something new. But NT supports personalities, which are like emulation layers running on top of the kernel. When NT shipped, it included three: OS/2, POSIX and Win32. OS/2 is deprecated now, POSIX has developed into the Linux subsystem, and Win32 is still there, now in 64-bit form.
The point is, none of these OSes were enhanced versions of anything else, and none were constrained by compatibility with existing drivers, extensions, applications, or anything else.
Apple tried to do something much, much harder. It tried to create a successor OS to a single-user, single-tasking (later cooperatively-multitasking, and not very well), OS for the 68000 (not something with hardware memory protection, like the 68030 or 68040), which would introduce those new features: pre-emptive multitasking, virtual memory, memory protection, integrated standards-based networking, etc.
All while retaining the existing base of applications, which weren’t written or designed or planned for any of this. No apps == no market == no use.
Apple took on a far harder project than anyone else, and arguably, with less experience. And the base hardware wasn’t ready for the notion of virtual machines yet.
It’s a great shame it failed, and the company came relatively close — it did have a working prototype.
It’s often said that Apple didn’t take over NeXT, nor did it merge with NeXT — in many important ways, NeXT took over Apple. Most Apple OS developers and project managers left, and were replaced by the NeXT team.
The NeXT management discarded Copland, most Apple technologies — OpenDoc, OpenTransport, GameSprockets, basically everything except QuickTime. It took some very brave, sweeping moves. It took the existing MacOS classic APIs, which weren’t really planned or designed, they just evolved over nearly 1½ decades — and cut out everything that wouldn’t work on a clean, modern, memory-managed, multitasking OS. The resulting cut-down, cleaned-up API was called “Carbon”. This was presented to developers as what they had to target if they wanted their apps to run on the new OS.
Alternatively, they could target the existing, far cleaner and richer NeXT API, now called “Cocoa”.
The NeXT team made no real attempt to be compatible with classic MacOS. Instead, it just ran all of classic MacOS inside a VM — by the timeframe that the new OS was targeting, machines would be high-enough spec to support a complete classic MacOS environment in a window on top of the Unix-based NeXTstep, now rebadged as “Mac OS X”. If you wanted your app to run outside the VM, you had to rebuild for “Carbon”. Carbon apps could run on both late versions of classic MacOS and on OS X.
This is comparable to what NT did: it offered a safe subset of the Win32 APIs inside a “personality” on top of NT, and DOS VMs with most of Win16.
It was a brave move. It’s impressive that it worked so well. It was a fairly desperate, last-ditch attempt to save the company and the platform, and it’s easier to make big, brave decisions when your back is against the wall and there are no alternatives... especially if the mistakes that got you into that corner were made by somebody else.
A lot of old Apple developers left in disgust. People who had put years of work into entire subsystems and APIs that had been thrown in the trash. Some 3rd party developers weren’t very happy, either — but at least there was a good path forwards now.
In hindsight, it’s clear that Apple did have an alternative. It had a rich, relatively modern OS, upon the basis of which it could have moved forwards: A/UX. This was Apple’s Unix for 680x0, basically done as a side project to satisfy a tick-box for US military procurement, which required Unix compatibility. A/UX was very impressive for its time — 1988, before Windows 3.0. It could run both Unix apps and classic MacOS ones, and put a friendly face on Unix, which was pretty ugly in the late 1980s and early 1990s.
But A/UX was never ported to the newer PowerPC Macs.
On the other hand, the NeXT deal got back Steve Jobs. NeXTstep also had world-beating developer tools, which A/UX did not. Nor did BeOS, the other external alternative that Gil Amelio-era Apple considered.
No Jobs, no NeXT dev tools, and no Apple today.
I really do. Even wifi.
(Prompted by "I still miss my headphone jack, and I want it back: Two years after Apple removed the iPhone's headphone jack, life without it still sucks.")
I recently bought a used iPhone, a 6S+, because my cheapo ChiPhone died and I didn't know when my new Kickstarted Planet Computers Gemini would come, and I couldn't wait. I needed a phone.
So I bought a used iPhone, because I've not had an iPhone in years, since my freebie hand-me-down iPhone 4 was nicked in Bratislava a few years back. I hadn't personally used iOS since iOS 6, and a lot of my gripes have been fixed. I can have Swype on it now. The clock icon on the homescreen works. There's something functionally like a "back" button to get to the previous app. I can choose my own email app. Etc.
But I don't plan on ever buying a newer iPhone after this one, because later models don't have headphone sockets, and I don't own and don't want wireless headphones.
I dislike wireless kit. I have no wireless keyboards. I have one mouse, because, to quote Police Academy, "my mom gave it to me!" And a Bluetooth Magic Mouse which I don't like and don't use.
The few bits of it in my life make it a lot worse. I don't even like Wifi much.
Between trying to get stuff to connect, keeping it connected, troubleshooting why it won't, or why it connects to the wrong thing, or why it's connected but horribly slow, or what is interfering with what, and which standards and versions of $wireless_product_A can successfully link to $wireless_product_B without breaking the connection between $wireless_product_C, I detest the entire mess.
My computer is linked via a cable to a hub, which is cabled to a powerline adaptor, which has a copper connection to another powerline adaptor and into the router. Its keyboard is cabled in, too. And I wish the mouse was.
To update my phone, I plug the phone into a cable into the back of the computer.
It always works and it's fast.
I have umpteen pairs of headphones -- the ones in the day bag, the ones in the travel bag, the ones in the bedside drawer, the ones in the jacket pocket. They all work with everything, on every OS, with no pairing, no codes, no drivers, and they never need charging. Some are a decade old and they work fine with 30-year-old kit and 3-month-old kit.
I spent ~25 years fixing this stuff for a living and I know the points of failure.
I am not resistant to new tech if it's a benefit. I like things like USB and FireWire a lot. Even SATA. They're vastly better than serial, parallel, EIDE, ATAPI, SCSI and all that old horribleness. I'm starting to adopt USB-C and Thunderbolt.
But wires just work. Wireless stuff trades an apparent benefit -- no cords to tangle -- for a whole mess of tech-support horror. Charging, pairing, encryption, compatibility, link speeds, transmission range, range anxiety. I don't detest it out of some kind of superstition, I detest it because I used to get paid to troubleshoot it and make it work.
Basically, IMHO, assessing tech works like this: you average the good opinions and the bad opinions separately, and give the two averages equal weight. 50,000 people giving it +5 means a positive score of +5. Two people giving it -1 means a negative score of -1. You add the +5 and the -1 and it gets 4 out of 5.
(It's an illustration. Factor in fractional scores etc. if it makes you happier.)
But the point is this: if a billion people love it and say it's great and ten out of ten, and a dozen people point out horror stories and give it minus 5, then it only scores 5 out of 10.
It doesn't matter how many people like it, or how many don't like it. The point is that the negative scores have equal weight.
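The heuristic above can be sketched in a few lines (the function name is mine, purely for illustration):

```python
def assess(avg_positive: float, avg_negative: float) -> float:
    """Give the averaged positive and averaged negative scores equal
    weight, regardless of how many people are on each side."""
    return avg_positive + avg_negative

# 50,000 people say +5; two people say -1: the result is 4, not ~+5.
print(assess(5, -1))    # 4
# A billion people say +10; a dozen say -5: it only scores 5 out of 10.
print(assess(10, -5))   # 5
```

The design choice is that headcount is deliberately thrown away: a weighted mean would let a billion happy users drown out a dozen horror stories, which is exactly what this heuristic refuses to do.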
I have direct personal experience of the negatives, and this colours my perception.
I base my assessment of most tech on the negative scores. Positive ones are easy. Most people don't push stuff hard, there's astroturfing, there are crappy reviewers writing positive stuff in return for free kit, etc. Basically, they are valueless. Does what it does match what it says on the box? Yes? Good. That's all you can take from them.
A negative report buried in a forum somewhere carries as much weight as a thousand Amazons full of laudatory reviews.
And I have a Bluetooth mouse, and a pair of Bluetooth dongles, and a Wifi USB dongle, and a few pocket devices with no ports except Wifi and Bluetooth for connecting to other stuff. I know what a pain it is. That is why, although I do not own a single Bluetooth audio device, I don't want one. I have borrowed them. Sometimes they work great. But they don't always, and they always run out of power at maximally inconvenient times.
Initially Light Table, then Aurora, then Eve, then bust.
The first Big Idea -- Light Table...
That eventually turned into Aurora...
When it was still up and coming: In Search of Tomorrow -- the Future of Programming...
And then, a post-mortem: Against the Current (part 1)...
And the much shorter Part 2...
A remarkable modern iteration of the life of Charles Babbage, it seems to me: an inspired, brilliant visionary who can't actually settle down and implement anything, because he is far too busy coming up with the next thing, which is going to be even cooler.
You should probably read his article, Toward a Better Programming.
PC DOS 7.1 is the last version of the original Microsoft product line that started with MS-DOS 1 (itself now open source). I mentioned this in my post about DR-DOS.
PC DOS 7.1 is often confused with PC DOS 2000, which is actually version 7.01: a modest enough bug-fix for MS-DOS 6.22.
Version 7.1 is a different beast. It's based off the same core as the embedded DOS in Windows 95B (OSR2) and Windows 98. It supports FAT32, including the ability to boot from FAT32 volumes. It supports LBA hard disks, meaning it can handle volumes of over 8 GB. It fixes a lot of bugs in the DOS codebase.
Here's some more information (in Spanish, but Google translates it fine).
The primary author, Vernon Brooks, has a site which details the development history and itemises his fixes.
Here is how to get PC DOS 7.1, which IBM makes available as a free download. You may have to hunt -- the ServerGuide toolkit is quite old now.
Here is at least one method to install it.
Note, it is not a complete OS. Unfortunately, in this way it resembles DR's versions of DR-DOS 7.04 and later, which consist only of boot files embedded into the startup diskettes for products such as Seagate Disk Manager and PowerQuest PartitionMagic. IBM only updated the kernel and some core tools.
To make a complete OS from this, you need a full copy of PC DOS 2000, then replace some of its files with the updated ones from the SGTK, as detailed above. I am reluctant to link to sources for this, as it is still copyright code. If you can't find it, ask me.
I have done this and can confirm that it works and works well. I have it running inside VirtualBox, and booting natively on the bare metal of a Lenovo Thinkpad X200. It is somehow aesthetically pleasing to have IBM PC DOS running natively on modern hardware that still has IBM branding. I can also say that classic DOS word-processors such as MS Word 6 and WordPerfect 6 run both very well and very quickly on it.
"The most dangerous thought you can have as a creative person is to think you know what you're doing."
Presented at Dropbox's DBX conference on July 9, 2013.
This is an absolutely wonderful ½-hour-long talk which pretends to be from 1973.
LJ embedding doesn't seem to work, so go visit YouTube. The speaker has more info on his own site.
I run Linux, and I work for a Linux vendor. I'm typing on Linux right now.
But I still don't particularly like Linux. I like OS X a bit better, but I miss aspects of classic MacOS. I am playing with Haiku with some interest -- it's getting there. And I'm fiddling with Oberon, mainly for research.
But all this stuff is very mature now, meaning hidebound and inflexible and showing its age badly. Thus all the patching over the cracks with VMs and containers and automated deployment tools and devops and all that.
Basically I think we're just about due for a huge shift in end-user-facing computing. It's time for a generation shift. I've worked through one big one of these, and maybe two or three small ones.
I was actively involved, as a working professional, in the shift from text-based computing (mostly DOS, a little SCO and Concurrent CP/M & Concurrent DOS stuff, and bits and bobs of other older stuff, mostly terminal-based) to GUI-based computing.
The hardware and OS makers survived. Few app vendors did. It basically killed most of the big DOS app vendors: WordStar, Lotus, WordPerfect, Ashton-Tate.
This is prehistory to younger pros these days: it was in the 1990s, while they were children.
Then there were smaller shifts:
* from predominantly stand-alone machines to LANs
* from proprietary LANs to TCP/IP (and the switch from DOS-plus-Windows to Win9x and NT here and there)
* connecting those LANs to the Internet (and the partly-16-bit to purely-32-bit switch to a world of mostly NT servers talking to NT workstations)
Then once we were all on NT, there were two relatively tiny ones:
* multi-processor workstations becoming the default (plus GPUs as standard, leading to compositing desktops everywhere)
* the (remarkably seamless) transition from 32-bit to 64-bit.
But the big one was DOS to Windows (and a few Macs). It's a shame most people have forgotten about it. It was a huge, difficult, painful, gradual, and very expensive shift.
And at the time, most people in the business pooh-poohed GUIs.
For a full decade, there were multiple families of successful, capable, GUI-based computers, with whole ecosystems of apps and 3rd party vendors -- not just the Mac, but the Amiga, and the Atari ST, and the Acorn ARM machines, and at the high end graphical UNIX workstations...
And the DOS world staidly ignored all of it. They were toys. GUIs were toys for children, except very expensive ones for graphic designers, in which case they were too niche to matter. Macs weren't real computers; they were for the rich or the simple-minded.
[Insert discussion about intersectionality and toxic masculinity here.]
Windows 3 was all very well but it was still DOS underneath and ran DOS apps and it was just a pretty face. The more serious you were, the more DOS apps you ran. Accountants didn't use Windows. Well maybe except Excel, but that didn't count. (Har har.)
It was still a sign of Manliness to be able to drive a CLI.
(It still is in the FOSS world, where people take great pride in their skills with horrible 1970s text editors.)
There was real scorn for so-called toy interfaces for toy computers, when people have work to do, etc.
Then, once people showed that you could actually do real work using these alleged toys, it switched to being inefficiency: it was claimed to be a waste of computer power driving all those pixels. Then when the computer power became plentiful enough for it not to be a problem, it was a waste of money equipping everyone with big screens and graphical displays and lots of RAM.
This kind of crap held back real progress for about a decade, I reckon. I mean, MS was singularly slow off the mark too, partly due to wasting (in hindsight) a lot of time and effort on OS/2. Even then, OS/2 1.0 came without a GUI at all, in 1988 IIRC, because it wasn't finished yet. (One could argue this showed commendable dedication to getting the underpinnings right first. Possibly. If the underpinnings had been right, which is debatable.)
DR GEM showed that mid-1980s PCs were perfectly able to drive a useful, productive GUI. The early Amstrad 8086 machines shipped with GEM, along with a GEM programming language, a (very basic) GEM word processor, a GEM paint program, etc. It even ran usefully on machines without a hard disk!
Windows 3.0 (1990) was all right. Good enough for some use. Benefited a lot from a 286 and at least 1MB of RAM, though.
Windows 3.1 (1992) was useful. Really wanted a 386 and at least 2MB, ideally 4MB.
NT 3.1 and WfWg were both 1993. WfWg was useful but already looking old-fashioned, whereas NT wanted a £5K+ PC to work well.
It was 1995 before a version that ran on an ordinary computer and gave unambiguous, demonstrable benefits to basically all users came along. That's what OS/2 should have been on high-end 286s a decade earlier.
Then, suddenly, a full decade after the Amiga and the ST, we got Win95 and suddenly everyone wanted them.
Few lessons were learned from this shift.
We haven't had such a big generation shift in 25 years, which means that now there are lots of middle-aged pros who don't really remember the last one. They've never worked through one.
Now we're facing another big shift, and again, although the signs are here, nobody takes them seriously. The writing is on the wall as it was in the late 1980s and early 1990s, and nobody is seeing it.
To spell it out:
* Keyboardless computers are huge. Smartphones, tablets, tills and other touchscreen devices.
* Most of them are all-solid-state: just RAM and flash.
* The first non-volatile RAM is on sale now. I just wrote a whole new manual chapter on it, for a boring enterprise OS. It's becoming mainstream.
* It's ~10× cheaper than DRAM and ~10× faster than Flash. This is the early, v1.0 kit, note.
Soon we will have the first new generation of computers since the 8-bit microcomputer revolution of the late 1970s. All-nonvolatile-RAM machines. They will not have the distinction between RAM and disk storage that all computers since about the 1950s have had. This is a bigger shift than minicomputers were, than the micro was.
It will, of course, be perfectly possible to adapt disk-based OSes to them and run them on these machines, partitioning the storage into a fake-disk bit and a live-RAM bit. But it will be inefficient and pointless to shuffle all this data around like that. However, the RAM/disk split is an assumption so completely implicit in every OS in the world today (except one¹) that it is all but insurmountable.
Try to imagine Unix without a filesystem. It doesn't really work. Take away the notion of a "file" and basically all current OSes sort of fall apart.
But there is a certain mindset, which I encounter very often, that finds the concept very hard even to imagine, and is extremely hostile to it.
Which is exactly the sort of thing I saw in the era of the transition from DOS (and text-only Unix) to ubiquitous GUIs.
Both Unix and Windows are cultures now.
One could argue that devotees of any OS or platform ever were, sure -- but in the 20th century, most platforms came and went relatively quickly. A decade and they had been invented, thrived, flowered, there was an explosion of apps, peripherals, wide support, and then they faltered and were gone.
This meant that most enthusiasts of any particular make or series of computer had exposure to quite a few others, too. And TBH while every 1980s computer had strengths and virtues -- OK, almost every -- they all had weaknesses and rivals which were stronger in those particular areas.
Now, much less so.
Now there are only 2 platforms -- Windows or Unix -- and they both mainly run on x86 with a toehold on ARM. They're mature enough that the CPU doesn't make a ton of difference.
There are lots of flavours of Unix, and some are very different to others. However a lot of the old-time 20th-century Linux enthusiasts I know, or know of, have switched to Mac OS X now, basically for an easier life. The rivalries are much smaller-scale: free vs commercial, BSD vs Linux, distro rivalries, desktop rivalries, and of course the eternal editor wars.
Step back far enough and the 2 are very clearly siblings with very similar conceptual models.
Low-level code is in C, stuff layered on top is in slightly higher-level languages (both compiled and interpreted, both generally imperative, usually object-oriented). Performance-critical stuff is compiled to static CPU-specific binaries and libraries, stored as files in a hierarchical filesystem along with config info stored in text files, some sort of system-wide database or both. There's a rigid distinction between "software" and "data" but both are kept in files which may be visible to the user or hidden, but this is a cosmetic difference. Users switch between different "applications" to accomplish defined tasks; data interchange between these is limited, often difficult, usually involves complex translations, and as a result is often one-way, as round-tripping means data loss.
There are of course dozens of dead OSes which conform to exactly this model, from all the beloved 1980s home computers (ST, Amiga, Mac, Acorns, QL, different vendors' Unix boxes, etc.).
But the point is that now, this 2-family model has been totally pervasive for about 30 years. Anyone younger than 25 has basically never seen anything else.
Compare these systems to some older ones, and the differences are startling. Compare with Classic MacOS, for something close to home. Not a single config file anywhere, no shell, no CLI, no scripting, nothing like that at all.
Compare with a Xerox Smalltalk box, or a Lisp machine, or Tao Intent, or Vitanuova Inferno, or colorForth, and you will see multiple radically different approaches to OS design... but all of them are basically forgotten now.
I think we lost something really important in the last 25y, and rediscovering it is going to be very painful.
I very much hope that bold, innovative new OSes come along that exploit this new design fully and richly.
But it's almost inconceivable to users of current tools, partly because the concept of a filesystem is so very powerful that it's almost unimaginable you would want to throw it away.
Which is very close to what DOS power users said in 1990, and I think is just as valid. (FTAOD, that means it's not valid at all.)
It's an interesting time.
P.S. At least for now, I think the current model suits servers very well. Just as GUI ideas matured in their own market sector of weird 680x0 computers, unconnected to the x86 PC market until it caught up a decade late, I think the server side of things will trundle on for another decade-plus without this stuff having any impact.
I also suspect that as these new machines rise to dominance, as with previous shifts, pretty much no existing vendors, of hardware or OSes or apps, will survive the change. Entirely new companies will come along and get huge.
¹ IBM i. That is, OS/400. I doubt it will suddenly become very relevant.
[Adapted from some list posts, in lieu of real content.]