
Fri, Dec. 14th, 2018, 07:26 pm
"Why is Ubuntu faster than Windows?"

Another recycled Quora answer.

The main reason is fairly simple.

Windows was designed to be easy to use, and to be compatible with older Microsoft operating systems, notably 16-bit Windows and DOS.

So, for example, by design, it treats any file with certain extensions (.EXE, .COM, .CMD, .BAT, etc.) as executable and will try to run them.

In contrast, Unix systems do not do this. They will not run even executable files unless they are specifically _marked_ as executable _and_ you have permissions to run them, and by default, Unix does not look in the current directory for executables.

This makes Unix less friendly, but more secure.
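Both rules are easy to see from any scripting language. Here's a minimal Python sketch (file and directory names are made up for illustration): no execution without the execute bit, and no searching of the current directory for commands.

```python
import os, shutil, stat, tempfile

# Create a would-be program in a scratch directory (names are illustrative).
scratch = tempfile.mkdtemp()
script = os.path.join(scratch, "demo.sh")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho hello\n")

# The .sh extension means nothing to Unix: without the execute bit,
# this is just a data file.
print(os.access(script, os.X_OK))   # False

# Mark it executable for its owner, and now it may be run.
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)
print(os.access(script, os.X_OK))   # True

# Even so, a bare name won't find it, because the current directory
# is not on $PATH (on a normally configured system).
os.chdir(scratch)
print(shutil.which("demo.sh"))      # None
```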

Microsoft also made other mistakes. For instance, it wanted to promote Internet Explorer to prevent Netscape getting control of the nascent browser market. To do this, it bundled IE with all copies of Windows. This was challenged in court as anti-competitive — which it was — and a demonstration was staged, in court, by computer scientist Edward Felten, showing that IE could be uninstalled from Windows.

To counter this, MS tied IE in more deeply. So, for instance, Windows 98 has a multi-threaded Explorer, based in part on IE code. Window contents are rendered to HTML and IE then displays that content.

This means that all a hostile party has to do is embed a virus into a suitable image file, such as an icon or a folder wallpaper, and IE will render it. Exploit that IE process and you own the computer.


Fri, Dec. 14th, 2018, 07:19 pm
Yet another "the future of operating systems" post

I think our current style of rich, full-function, "power user" OSes is really starting to show signs of going away. (Yes, it's another of those handwavey sort of big-picture things.)

(Actually I should be drafting a FOSDEM talk about this right now, but I'm having a £1.50 draught beer & on FB instead. Gotta <3 České dráhy.)

Kids — i.e., in the Douglas Adams sense, anyone under about 35 — are more used to phones and tablets. The next billion people to come online will know nothing else.

I reckon OSes will become more like Android, iOS, and ChromeOS — self-updating, without a "desktop" or much in the way of local storage or rich window management (for instance, see what's happening to GNOME) and fairly useless unless connected to the Internet and various invisible servers off in the "cloud" somewhere.

Some "apps" will run locally, some will be web apps, some will be display sessions on remote servers. There will be little visible distinction.

We'll have no local admin rights, just the ability to put on sandboxed apps that only interact via the online connection. No local storage, or just a cache. No software updates; the OS will handle that. No easy way to see what apps are running or where.

What'll drive this will be sky-rocketing costs of supporting rich local traditional OSes.

It will have the side-effect of blurring the lines between a workstation, a laptop, a tablet and a phone.

For some current examples, see the Red Hat Atomic Workstation project, Endless OS, and SUSE Kubic MicroOS. Read-only root FS, updates via a whole-system image pushed out periodically. Only containerised apps are allowed: there's not even a local package manager.

Thu, Dec. 6th, 2018, 01:16 pm
Why did Apple's Copland fail, when so many early-90s OS projects succeeded & came to dominate the world today?

(Adapted from a Quora answer.)

OS/2 1.x was a clean-sweep, largely legacy-free OS with only limited backwards compatibility with DOS.

OS/2 2.x and later used VMs to do the hard stuff of DOS emulation, because they ran on a chip with hardware-assisted DOS VMs: the 80386’s Virtual86 mode.

NeXTstep was a Unix. It predated FreeBSD, but it was based off the same codebase: BSD 4 Unix. It “only” contained a new display layer, and that itself was based off existing code — Adobe PostScript — and the then-relatively-new technique of object-oriented development. Still substantial achievements, but again, built on existing code, and with no requirement for backwards compatibility.

BeOS was a ground-up new OS which wasn’t backwards or sideways compatible with anything else at all.

NT is based on OS/2 3.x, the planned CPU-independent portable version, with a lot of design concepts from DEC VMS incorporated, because it had the same lead architect, Dave Cutler. Again, the core NT OS isn’t compatible with anything else. This is rarely understood. NT is not a Win32-compatible kernel. NT isn’t compatible with anything else, including VMS. It’s something new. But NT supports personalities, which are like emulation layers running on top of the kernel. When NT shipped, it included 3: OS/2, POSIX and Win32. OS/2 is deprecated now, POSIX has developed into the Linux subsystem, and Win32 is still there, now in 64-bit form.

The point is, none of these OSes were enhanced versions of anything else, and none were constrained by compatibility with existing drivers, extensions, applications, or anything else.

Apple tried to do something much, much harder. It tried to create a successor to a single-user, single-tasking (later cooperatively multitasking, and not very well) OS for the 68000 (not something with hardware memory protection, like the 68030 or 68040), which would introduce those new features: pre-emptive multitasking, virtual memory, memory protection, integrated standards-based networking, etc.

All while retaining the existing base of applications, which weren’t written or designed or planned for any of this. No apps == no market == no use.

Apple took on a far harder project than anyone else, and arguably, with less experience. And the base hardware wasn’t ready for the notion of virtual machines yet.

It’s a great shame it failed, and the company came relatively close — it did have a working prototype.

It’s often said that Apple didn’t take over NeXT, nor did it merge with NeXT — in many important ways, NeXT took over Apple. Most Apple OS developers and project managers left, and were replaced by the NeXT team.

The NeXT management discarded Copland and most Apple technologies — OpenDoc, OpenTransport, GameSprockets, basically everything except QuickTime. It made some very brave, sweeping moves. It took the existing classic MacOS APIs — which weren’t really planned or designed, they just evolved over nearly 1½ decades — and cut out everything that wouldn’t work on a clean, modern, memory-managed, multitasking OS. The resulting cut-down, cleaned-up API was called “Carbon”. This was presented to developers as what they had to target if they wanted their apps to run on the new OS.

Alternatively, they could target the existing, far cleaner and richer NeXT API, now called “Cocoa”.

The NeXT team made no real attempt to be compatible with classic MacOS. Instead, it just ran all of classic MacOS inside a VM — by the timeframe that the new OS was targeting, machines would be high-enough spec to support a complete classic MacOS environment in a window on top of the Unix-based NeXTstep, now rebadged as “Mac OS X”. If you wanted your app to run outside the VM, you had to rebuild for “Carbon”. Carbon apps could run on both late versions of classic MacOS and on OS X.

This is comparable to what NT did: it offered a safe subset of the Win32 APIs inside a “personality” on top of NT, and DOS VMs with most of Win16.

It was a brave move. It’s impressive that it worked so well. It was a fairly desperate, last-ditch attempt to save the company and the platform, and it’s easier to make big, brave decisions when your back is against the wall and there are no alternatives... especially if the mistakes that got you into that corner were made by somebody else.

A lot of old Apple developers left in disgust. People who had put years of work into entire subsystems and APIs that had been thrown in the trash. Some 3rd party developers weren’t very happy, either — but at least there was a good path forwards now.

In hindsight, it’s clear that Apple did have an alternative. It had a rich, relatively modern OS, upon the basis of which it could have moved forwards: A/UX. This was Apple’s Unix for 680x0, basically done as a side project to satisfy a tick-box for US military procurement, which required Unix compatibility. A/UX was very impressive for its time — 1988, before Windows 3.0. It could run both Unix apps and classic MacOS ones, and put a friendly face on Unix, which was pretty ugly in the late 1980s and early 1990s.

But A/UX was never ported to the newer PowerPC Macs.

On the other hand, the NeXT deal got back Steve Jobs. NeXTstep also had world-beating developer tools, which A/UX did not. Nor did BeOS, the other external alternative that Gil Amelio-era Apple considered.

No Jobs, no NeXT dev tools, and no Apple today.

Tue, Nov. 27th, 2018, 07:12 pm
I hate wireless kit.

I really do. Even wifi.

(Prompted by "I still miss my headphone jack, and I want it back: Two years after Apple removed the iPhone’s headphone jack, life without it still sucks.")

I recently bought a used iPhone, a 6S+, because my cheapo ChiPhone died and I didn't know when my new Kickstarted Planet Computers Gemini would come, and I couldn't wait. I needed a phone.

So I bought a used iPhone, because I've not had an iPhone in years, since my freebie hand-me-down iPhone 4 was nicked in Bratislava a few years back. I hadn't personally used iOS since iOS 6, and a lot of my gripes have been fixed. I can have Swype on it now. The clock icon on the homescreen works. There's something functionally like a "back" button to get to the previous app. I can choose my own email app. Etc.

But I don't plan on ever buying a newer iPhone after this one, because later models don't have headphone sockets, and I don't own and don't want wireless headphones.

I dislike wireless kit. I have no wireless keyboards. I have one mouse, because, to quote Police Academy, "my mom gave it to me!" And a Bluetooth Magic Mouse which I don't like and don't use.

The few bits of wireless kit in my life make it a lot worse. I don't even like Wifi much.

Between trying to get stuff to connect, keeping it connected, troubleshooting why it won't, or why it connects to the wrong thing, or why it's connected but horribly slow, or what is interfering with what, and which standards and versions of $wireless_product_A can successfully link to $wireless_product_B without breaking the connection between $wireless_product_C and $wireless_product_D, I detest the entire mess.

My computer is linked via a cable to a hub, which is cabled to a powerline adaptor, which has a copper connection to another powerline adaptor and into the router. Its keyboard is cabled in, too. And I wish the mouse was.

To update my phone, I plug the phone into a cable into the back of the computer.

It always works and it's fast.

I have umpteen pairs of headphones -- the ones in the day bag, the ones in the travel bag, the ones in the bedside drawer, the ones in the jacket pocket. They all work with everything, on every OS, with no pairing, no codes, no drivers, and they never need charging. Some are a decade old, and they work fine with 30-year-old kit and 3-month-old kit.

I spent ~25 years fixing this stuff for a living and I know the points of failure.

I am not resistant to new tech if it's a benefit. I like things like USB and Firewire a lot. Even SATA. They're vastly better than serial, parallel, EIDE, ATAPI, SCSI and all that old horribleness. I'm starting to adopt USB C and Thunderbolt.

But wires just work. Wireless stuff trades an apparent benefit -- no cords to tangle -- for a whole mess of tech-support horror. Charging, pairing, encryption, compatibility, link speeds, transmission range, range anxiety. I don't detest it out of some kind of superstition, I detest it because I used to get paid to troubleshoot it and make it work.

Basically, IMHO, assessing tech works like this:

You average the good opinions and the bad opinions separately: 50,000 people giving it +5 means a positive score of +5; 2 people giving it a score of -1 means a negative score of -1. You add the +5 and the -1 and it gets 4 out of 5.

(It's an illustration. Factor in fractional scores etc. if it makes you happier.)

But the point is this: if a billion people love it and say it's great and ten out of ten, and a dozen people point out horror stories and give it minus 5, then it only scores 5 out of 10.

It doesn't matter how many people like it, or how many don't like it. The point is that the negative scores have equal weight.
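As a toy sketch of that scoring rule (my illustration of the post's argument, not any standard metric): average the positive ratings and the negative ratings separately, then add the two averages, so a handful of negative reports is never diluted by the size of the crowd.

```python
def assess(ratings):
    """ratings maps a score to how many people gave it, e.g. {+5: 50_000}."""
    pos = {s: n for s, n in ratings.items() if s > 0}
    neg = {s: n for s, n in ratings.items() if s < 0}

    def avg(side):
        # Weighted average within one side, or 0 if nobody voted that way.
        return sum(s * n for s, n in side.items()) / sum(side.values()) if side else 0

    # Each side is averaged on its own, then the two are summed:
    # head-count on one side cannot drown out the other.
    return avg(pos) + avg(neg)

print(assess({+5: 50_000, -1: 2}))           # 4.0  (the 4-out-of-5 case)
print(assess({+10: 1_000_000_000, -5: 12}))  # 5.0  (the 5-out-of-10 case)
```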

I have direct personal experience of the negatives, and this colours my perception.

I base my assessment of most tech on the negative scores. Positive ones are easy. Most people don't push stuff hard, there's astroturfing, there are crappy reviewers writing positive stuff in return for free kit, etc. Basically, they are valueless. Does what it does match what it says on the box? Yes? Good. That's all you can take from them.

But one negative report buried in a forum somewhere carries as much weight as a thousand Amazons full of laudatory reviews.

And I have a Bluetooth mouse, and a pair of Bluetooth dongles, and a Wifi USB dongle, and a few pocket devices with no ports except Wifi and Bluetooth for connecting to other stuff. I know what a pain it is. That is why, although I do not own a single Bluetooth audio device, I don't want one. I have borrowed them. Sometimes they work great. But they don't always, and they always run out of power at maximally inconvenient times.

Thu, Nov. 22nd, 2018, 09:44 pm
A very recent future-that-wasn't

Initially Light Table, then Aurora, then Eve, then bust.

The first Big Idea -- Light Table...

That eventually turned into Aurora...

When it was still up and coming: In Search of Tomorrow -- the Future of Programming...

And then, a post-mortem: Against the Current (part 1)...

And the much shorter Part 2...

A remarkable modern iteration of the life of Charles Babbage, it seems to me. An inspired, brilliant visionary, who can't actually settle down and implement anything, because they are far too busy coming up with the next thing which is going to be even cooler.

You should probably read his article Toward a Better Programming, too.

Thu, Nov. 8th, 2018, 05:17 pm
Getting and running IBM PC DOS 7.1

PC DOS 7.1 is the last version of the original Microsoft product line that started with MS-DOS 1 (itself now open source.)

I mentioned this in my post about DR-DOS.

PC DOS 7.1 is often confused with version 7.01, better known as PC DOS 2000. That was a modest enough bug-fix successor to MS-DOS 6.22.

Version 7.1 is a different beast. It's based off the same core as the embedded DOS in Windows 95B (OSR2) and Windows 98. It supports FAT32, including the ability to boot from FAT32 volumes. It supports LBA addressing, meaning it can handle hard disks of over 8GB. It fixes a lot of bugs in the DOS codebase.
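The 8GB figure isn't arbitrary: it's the ceiling of the old BIOS CHS addressing scheme that LBA replaces, and it falls straight out of the geometry limits. A quick back-of-the-envelope check:

```python
# BIOS INT 13h CHS limits: 1024 cylinders x 255 heads x 63 sectors/track,
# with 512-byte sectors. LBA sidesteps this by numbering sectors linearly.
cylinders, heads, sectors_per_track, bytes_per_sector = 1024, 255, 63, 512

limit = cylinders * heads * sectors_per_track * bytes_per_sector
print(limit)                    # 8422686720 bytes
print(round(limit / 10**9, 1))  # 8.4 -- the famous "8GB" disk ceiling
```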

Here's some more information (in Spanish but Google translates it fine.)

The primary author, Vernon Brooks, has a site which details the development history and itemises his fixes.

Here is how to get PC DOS 7.1, which IBM makes available as a free download. You may have to hunt -- the ServerGuide toolkit is quite old now.

Here is at least one method to install it in VirtualBox.

Note, it is not a complete OS. Unfortunately, in this it resembles DR-DOS 7.04 and later, which consist only of boot files embedded into the startup diskettes for products such as Seagate Disk Manager and PowerQuest PartitionMagic. IBM only updated the kernel and some core tools.

To make a complete OS from this, you need a full copy of PC DOS 2000, then replace some of its files with the updated ones from the SGTK, as detailed above. I am reluctant to link to sources for this, as it is still copyright code. If you can't find it, ask me.
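The merge itself is just a file overlay. Here's a hypothetical sketch of the step; the file list and directory names are illustrative only, not an official manifest of what the SGTK updates.

```python
# Hypothetical sketch: overlay the updated PC DOS 7.1 files from the
# ServerGuide toolkit onto a full PC DOS 2000 tree. The file list and
# directory names below are illustrative placeholders.
import shutil
from pathlib import Path

UPDATED_FILES = ["IBMBIO.COM", "IBMDOS.COM", "COMMAND.COM"]  # kernel + shell

def overlay(sgtk_dir: str, pcdos_tree: str) -> None:
    """Copy each updated file over its PC DOS 2000 counterpart."""
    for name in UPDATED_FILES:
        src = Path(sgtk_dir) / name
        if src.exists():
            shutil.copy2(src, Path(pcdos_tree) / name)

overlay("sgtk_files", "pcdos2000")  # directory names are placeholders
```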

I have done this and can confirm that it works and works well. I have it running inside VirtualBox, and booting natively on the bare metal of a Lenovo Thinkpad X200. It is somehow aesthetically pleasing to have IBM PC DOS running natively on modern hardware that still has IBM branding. I can also say that classic DOS word-processors such as MS Word 6 and WordPerfect 6 run both very well and very quickly on it.

Tue, Nov. 6th, 2018, 05:44 pm
"The Future of Programming" by Bret Victor

"The most dangerous thought you can have as a creative person is to think you know what you're doing."

Presented at Dropbox's DBX conference on July 9, 2013.
This is an absolutely wonderful ½-hour long talk which pretends to be from 1973.

LJ embedding doesn't seem to work, so go visit YouTube or Vimeo.

The speaker has more info on his own site.

Tue, Nov. 6th, 2018, 01:38 pm
The future comes at you fast

I run Linux, and I work for a Linux vendor. I'm typing on Linux right now.

But I still don't particularly like Linux. I like OS X a bit better, but I miss aspects of classic MacOS. I am playing with Haiku with some interest -- it's getting there. And I'm fiddling with Oberon, mainly for research.

But all this stuff is very mature now, meaning hidebound and inflexible and showing its age badly. Thus all the patching over the cracks with VMs and containers and automated deployment tools and devops and all that.

Basically I think we're just about due for a huge shift in end-user-facing computing. It's time for a generation shift. I've worked through 1 big one of these, and maybe 2 or 3 small ones.

I was actively involved, as a working professional, in the shift from text-based computing (mostly DOS, a little SCO and Concurrent CP/M & Concurrent DOS stuff, and bits and bobs of other older stuff, mostly terminal-based) to GUI-based computing.

The hardware and OS makers survived. Few app vendors did. It basically killed most of the big DOS app vendors: WordStar, Lotus, WordPerfect, Ashton-Tate.

This is prehistory to younger pros these days: it was in the 1990s, while they were children.

Then there were smaller shifts:

[1] from predominantly stand-alone machines to LANs

[2] from proprietary LANs to TCP/IP
(and the switch from DOS-plus-Windows to Win9x and NT here and there)

[3] connecting those LANs to the Internet
(and the partly-16-bit to purely-32-bit switch to a world of mostly NT servers talking to NT workstations.)

Then once we were all on NT, there were two relatively tiny ones:
* multi-processor workstations becoming the default (plus GPUs as standard, leading to compositing desktops everywhere)
* the (remarkably seamless) transition from 32-bit to 64-bit.

But the big one was DOS to Windows (and a few Macs). It's a shame most people have forgotten about it. It was a huge, difficult, painful, gradual, and very expensive shift.

And at the time, most people in the business pooh-poohed GUIs.

For a full decade, there were multiple families of successful, capable, GUI-based computers, with whole ecosystems of apps and 3rd party vendors -- not just the Mac, but the Amiga, and the Atari ST, and the Acorn ARM machines, and at the high end graphical UNIX workstations...

And the DOS world staidly ignored all of it. They were toys. GUIs were toys for children, except very expensive ones for graphic designers, in which case they were too niche to matter. Macs weren't real computers; they were for the rich or the simple-minded.

[Insert discussion about intersectionality and toxic masculinity here.]

Windows 3 was all very well but it was still DOS underneath and ran DOS apps and it was just a pretty face. The more serious you were, the more DOS apps you ran. Accountants didn't use Windows. Well maybe except Excel, but that didn't count. (Har har.)

It was still a sign of Manliness to be able to drive a CLI.

(It still is in the FOSS world, where people take great pride in their skills with horrible 1970s text editors.)

There was real scorn for so-called toy interfaces for toy computers, when people have work to do, etc.

Then, once people showed that you could actually do real work using these alleged toys, it switched to being inefficiency: it was claimed to be a waste of computer power driving all those pixels. Then when the computer power became plentiful enough for it not to be a problem, it was a waste of money equipping everyone with big screens and graphical displays and lots of RAM.

This kind of crap held back real progress for about a decade, I reckon. I mean, MS was singularly slow off the ball too, partly due to wasting (in hindsight) a lot of time and effort on OS/2. Even then, OS/2 1.0 came without a GUI at all, in 1988 IIRC, because it wasn't finished yet. (One could argue this showed commendable dedication to getting the underpinnings right first. Possibly. If the underpinnings had been right, which is debatable.)

DR GEM showed that mid-1980s PCs were perfectly able to drive a useful, productive GUI. The early Amstrad 8086 machines shipped with GEM, along with a GEM programming language, a (very basic) GEM word processor, a GEM paint program, etc. It even ran usefully on machines without a hard disk!

Windows 3.0 (1990) was all right. Good enough for some use. Benefited a lot from a 286 and at least 1MB of RAM, though.

Windows 3.1 (1992) was useful. Really wanted a 386 and at least 2MB, ideally 4MB.

NT 3.1 and WfWg were both 1993. WfWg was useful but already looking old-fashioned, whereas NT wanted a £5K+ PC to work well.

It was 1995 before a version came along that ran on an ordinary computer and gave unambiguous, demonstrable benefits to basically all users. That's what OS/2 should have been on high-end 286s a decade earlier.

Then, a full decade after the Amiga and the ST, we got Win95, and suddenly everyone wanted GUIs.

Few lessons were learned from this shift.

We haven't had such a big generation shift in 25 years, which means that now there are lots of middle-aged pros who don't really remember the last one. They've never worked through one.

Now we're facing another big shift, and again, although the signs are here, nobody takes them seriously. The writing is on the wall as it was in the late 1980s and early 1990s, and nobody is seeing it.

To spell it out:

* Keyboardless computers are huge. Smartphones, tablets, tills and other touchscreen devices.
* Most of them are all-solid-state: just RAM and flash.
* The first non-volatile RAM is on sale now. I just wrote a whole new manual chapter on it, for a boring enterprise OS. It's becoming mainstream.
* It's ~10× cheaper than DRAM and ~10× faster than Flash. This is the early, v1.0 kit, note.

Soon we will have the first new generation of computers since the 8-bit microcomputer revolution of the late 1970s. All-nonvolatile-RAM machines. They will not have the distinction between RAM and disk storage that all computers since about the 1950s have had. This is a bigger shift than minicomputers were, than the micro was.

It will, of course, be perfectly possible to adapt disk-based OSes to run on these machines, partitioning the storage into a fake-disk bit and a live-RAM bit. But it will be inefficient and pointless to shuffle all this data around like that — yet the disk/RAM split is an assumption so completely implicit in every OS in the world today (except one¹) that it is insurmountable.

Try to imagine Unix without a filesystem. It doesn't really work. Take away the notion of a "file" and basically all current OSes sort of fall apart.
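For a rough feel of the model, here's a Python sketch using `mmap` as the nearest everyday stand-in: the backing file below merely simulates a region of nonvolatile RAM (its name and size are illustrative). The point is that an ordinary memory write persists, with no explicit save step in the application.

```python
import mmap, os, tempfile

# "nvram.bin" stands in for a region of persistent RAM (illustrative).
region_path = os.path.join(tempfile.mkdtemp(), "nvram.bin")

with open(region_path, "w+b") as f:
    f.truncate(4096)
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"   # an ordinary in-memory write...
    region.flush()           # ...persisted with no load/save cycle in the app

# A later process maps the same region and the data is simply there:
with open(region_path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    print(bytes(region[0:5]))   # b'hello'
```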

But there is a certain mindset, which I encounter very often, that finds the concept very hard even to imagine, and is extremely hostile to it.

Which is exactly the sort of thing I saw in the era of the transition from DOS (and text-only Unix) to ubiquitous GUIs.

Both Unix and Windows are cultures now.

One could argue that devotees of any OS or platform ever were, sure -- but in the 20th century, most platforms came and went relatively quickly. A decade and they had been invented, thrived, flowered, there was an explosion of apps, peripherals, wide support, and then they faltered and were gone.

This meant that most enthusiasts of any particular make or series of computer had exposure to quite a few others, too. And TBH while every 1980s computer had strengths and virtues -- OK, almost every -- they all had weaknesses and rivals which were stronger in those particular areas.

Now, much less so.

Now there are only 2 platforms -- Windows or Unix -- and they both mainly run on x86 with a toehold on ARM. They're mature enough that the CPU doesn't make a ton of difference.

There are lots of flavours of Unix, and some are very different to others. However a lot of the old-time 20th-century Linux enthusiasts I know, or know of, have switched to Mac OS X now, basically for an easier life. The rivalries are much smaller-scale: free vs commercial, BSD vs Linux, distro rivalries, desktop rivalries, and of course the eternal editor wars.

Step back far enough and the 2 are very clearly siblings with very similar conceptual models.

Low-level code is in C, stuff layered on top is in slightly higher-level languages (both compiled and interpreted, both generally imperative, usually object-oriented). Performance-critical stuff is compiled to static CPU-specific binaries and libraries, stored as files in a hierarchical filesystem along with config info stored in text files, some sort of system-wide database or both. There's a rigid distinction between "software" and "data" but both are kept in files which may be visible to the user or hidden, but this is a cosmetic difference. Users switch between different "applications" to accomplish defined tasks; data interchange between these is limited, often difficult, usually involves complex translations, and as a result is often one-way, as round-tripping means data loss.

There are of course dozens of dead OSes which conform to exactly this model, from all the beloved 1980s home computers (ST, Amiga, Mac, Acorns, QL, different vendors' Unix boxes, etc.).

But the point is that now, this 2-family model has been totally pervasive for about 30 years. Anyone younger than 25 has basically never seen anything else.

Compare these systems to some older ones, and the differences are startling. Compare with Classic MacOS, for something close to home. Not a single config file anywhere, no shell, no CLI, no scripting, nothing like that at all.

Compare with a Xerox Smalltalk box, or a Lisp machine, or Tao Intent, or Vitanuova Inferno, or colorForth, and you will see multiple radically different approaches to OS design... but all of them are basically forgotten now.

I think we lost something really important in the last 25y, and rediscovering it is going to be very painful.

I very much hope that bold, innovative new OSes come along that exploit this new design fully and richly.

But it's almost inconceivable to users of current tools. Partly because the concepts of filesystems are so very powerful, it's almost unimaginable that you would want to throw that away.

Which is very close to what DOS power users said in 1990, and I think is just as valid. (FTAOD, that means it's not valid at all.)

It's an interesting time.

P.S. at least for now, I think the current model suits servers very well, and just like GUI ideas matured in their own market sector of weird 680x0 computers that were unconnected from the x86 PC market until it caught up a decade late, I think the server side of things will trundle on for another decade plus without this stuff having any impact.

I also suspect that as these new machines rise to dominance, as with previous shifts, pretty much no existing vendors, of hardware or OSes or apps, will survive the change. Entirely new companies will come along and get huge.

¹ IBM i. That is, OS/400. I doubt it will suddenly become very relevant.

[Adapted from some list posts, in lieu of real content.] 

Sun, Oct. 21st, 2018, 04:04 pm
The first try at "an Ubuntu" -- Corel LinuxOS

(Hacked together from a few Reddit comments. Pardon disjointedness.)

Corel LinuxOS was a great distro. I reviewed it at the time.

It was the first serious big-backer effort to make Debian user-friendly and to make a Linux distro that could rival Windows NT 4 as a credible business desktop OS.

It had a custom remix of KDE -- I think KDE 2 -- heavily rewritten to make it more like Windows. So they looked at Konqueror and discarded it as a bad job (overcomplex, trying to do too many different things... as KDE itself eventually decided too).

It had its own file manager, very like Windows Explorer. A real pleasure to work with. It even browsed Windows networks, better than anything can today.

IMHO, having used KDE since version 1, it was the best version of KDE ever and the only one I liked using. (2nd place: Red Hat Linux' Bluecurve edition in RH 9, before the Fedora/RHEL split, but that was just a really good theme and replacement of all the KDE apps with best-of-breed alternatives, which usually meant Gtk apps.)

I think Corel defaulted to Netscape Communicator as its web browser & email client -- Firefox didn't exist yet.

It was the first Linux ever to have display setup and adjustment using a graphical tool. You could just pick a colour depth from a drop-down, and drag a slider to adjust screen res. Just like Windows. This was world-beating stuff in the late 1990s -- nobody had ever seen anything like it on any Unix before. (Except maybe NeXTstep on its proprietary hardware. Which didn't do colour at all in the first versions.)

And of course it had WordPerfect, too. Remember this is before StarOffice (later OpenOffice later LibreOffice) was acquired by Sun and made freeware and later FOSS.

WordPerfect started out on Data General minicomputers in the 1970s. It was ported to everything early on. There were versions for the Atari ST, Amiga, OS/2 and classic MacOS as well as for MS-DOS.

The original edition was text-based and famed not only for its speed and very useful "reveal codes" function, which split the window into 2 showing editable markup in the bottom half, but also its very rich printer support.

In the era of text-based OSes, pre-GUI, it was common for apps to have to provide their own printer drivers. The OS maybe managed spooling, but nothing more -- no drivers. If an app wanted to produce bold or underline, it had to ship its own driver for your printer.

WordPerfect did this better than anyone: it supported just about every printer in the world.
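To make the per-application driver idea concrete, here's a toy sketch of such a driver table. The escape sequences are the standard Epson ESC/P and HP PCL codes for bold, but the table layout, names, and function are purely illustrative -- not WordPerfect's actual driver format.

```python
# Each DOS application shipped its own table like this, one entry per
# supported printer -- the OS offered no help beyond spooling.
DRIVERS = {
    "epson_escp": {"bold_on": b"\x1bE",    "bold_off": b"\x1bF"},     # ESC E / ESC F
    "hp_pcl":     {"bold_on": b"\x1b(s3B", "bold_off": b"\x1b(s0B"},  # PCL stroke weight
}

def bold(text: str, printer: str) -> bytes:
    """Wrap text in the chosen printer's bold on/off sequences."""
    codes = DRIVERS[printer]
    return codes["bold_on"] + text.encode("ascii") + codes["bold_off"]

print(bold("WordPerfect", "epson_escp"))  # b'\x1bEWordPerfect\x1bF'
```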

This early text-mode WordPerfect also ran on text-based Unix. I had a customer who wanted it on SCO Xenix 386. I installed it. It worked and the printer support was great, but fought for control with SCO's spooler. I had to edit SCO's "printer drivers" (really just shell scripts with minimal start/stop/set paper size control) in order to get it working.

The result was not great. In the end the customer switched users who needed wordprocessing from terminals to PCs running local copies of WP, and a terminal emulator for talking to the SCO host.

(SCO did not include networking -- it was an expensive optional extra on an expensive OS. X11 was another extra. A C compiler was another.)

So WP always ran on Unix, for about 40 years, from before Linux was invented.

The first full-GUI WYSIWYG version, I think, was on OS/2 2. That was later ported to Windows 3 (and wasn't very good at first).

That's the version they ported to Linux: WordPerfect 8, a full native rich Unix app -- but an old Unix app with an old design, probably originating on SCO and also running on various proprietary non-x86 Unixes, such as AIX and Solaris.

I used it and liked it but it was a bit clunky. Printer support was good, though, which was a weak point on Linux back then.

So when Corel did a Linux distro, a selling point was the inclusion of WP 8.

There were 3 separate versions of WordPerfect on Unix.

#1 -- the original text-based version, no GUI, for proprietary Unixes with no GUI, such as SCO Xenix.

#2 -- the later graphical version, derived in part from the OS/2 and Windows 3 codebase, which was bundled in Corel LinuxOS. This was WordPerfect for Linux version 8, and it was a full native Linux app.

#3 -- WordPerfect Office.

So about #3...

Corel got really into Linux around the end of the 1990s. It ported the graphical WYSIWYG WordPerfect, it did its own distro, and it built its own ARM-based hardware, the NetWinder.

But there was no Linux office suite yet; WP 8 was the first credible commercial wordprocessor for the OS.

So Corel -- flushed with confidence, having now acquired WordPerfect Corp and part of Borland (for the Paradox database and the Quattro Pro spreadsheet), and having its own Windows office suite -- decided to port the whole thing to Linux.

But only WordPerfect was portable, cross-platform code. The other apps (Paradox, Quattro Pro, Presentations, InfoCentral (a PIM), etc.) were Windows-only.

So it used WINE -- specifically, winelib. This was a project related to WINE, but instead of letting you run unmodified Windows binaries, winelib lets you port Windows apps to Linux by providing Windows-compatible APIs to compile and link against.

The result is a native Linux binary -- although it's often still called WHATEVER.EXE -- which installs and runs natively, but displays everything by calling winelib functions that translate Windows API calls into Linux ones.

That's how WordPerfect Office for Linux was built: custom, tailored versions of the apps, with the totally Windows-specific stuff removed and features adjusted to work with winelib. But it was still not a true native Linux app -- it was a suite of big, complex Windows apps ported to Linux via WINE, and so dependent on WINE. And this was 20 years ago, when WINE wasn't very mature yet.

It worked and it was the first native (ish) Linux productivity suite, but it was buggy and unstable.

But then Corel did a fateful deal with the enemy. With Microsoft.

To improve adoption of WordPerfect Office on Windows, some gullible Corel boss was persuaded that what WPO needed was to be more compatible with MS Office. And the way to do that was to license the Office look and feel, i.e. the custom menus and toolbars, and the Visual BASIC for Applications macro language.

(Aside: you should realise that VBA was bolted onto MS Office when Office was already quite mature. Word had its own macro language, WordBasic; Excel had its own too, similar to Lotus 1-2-3's in-cell macros. These were ripped out and replaced with VBA -- for a while, Excel could run _both_. That's what Corel did too... only it didn't even own the code it replaced its macro languages with.)

Corel licensed VBA and the look and feel and bolted them onto WPO for Windows. It paid a lot: tens of millions of US dollars.

But Microsoft insisted that Corel kill off all its Linux work as a result.

And Corel bought it. So it killed WPO for Linux... and CorelDraw for Linux, and its other Linux apps. It killed the native WordPerfect for Linux port. It killed the NetWinder, and it killed LinuxOS. A lot of people lost their jobs.

The NetWinder and LinuxOS got sold off.

Corel LinuxOS became Xandros, also a damned good distro, but with no WordPerfect and no big-company backing. There were 2 more releases, then it died.

The NetWinder sold some units as a thin client and then died.

But Microsoft had eliminated the only serious rival desktop OS in existence -- and it got paid to do so!

And all this did Corel little good, because Microsoft just switched out the look-and-feel in the next version of Office anyway. If you install all the versions next to each other, they all look different.

Office 4 for Windows 3 just looked like a native Windows 3 app.

Office 95 had custom skins and title bars and buttons, so it looked more like a Win95 app with weird title bars.

Office 97 dropped the fancy styled title bars but made buttons squarer and so on, brought in tooltips everywhere, and switched all the file formats so you had to upgrade.

Office 2000 brought in the horrible self-customising menus, and the edges of toolbar buttons disappeared except when hovered over... And Corel didn't get any of this, because the licence wasn't forward-looking: it covered one version only.

Office XP had "intellisense" and an unhelpful Help box and wizards everywhere instead of dialog boxes.

Office 2003 had more of the same, horrible shaded gradients in the toolbars and menus.

Office 2007 ditched menus for ribbons and I stopped using it.


Corel LinuxOS was great, ahead of its time, but Microsoft killed it.

Fri, Jun. 8th, 2018, 12:31 pm
The decline and fall of Windows

Underneath, in a lot of ways, NT is a pretty decent OS. The problems are mostly due to the marketing dept.

If the NT team had been allowed to pursue their original goals undisturbed, it'd be a better OS. Marketing insisted:

  • It be usable standalone, with full local admin rights;

  • IE be integrated;

  • It run as many Win9x binaries as possible;

  • The GDI be moved into the kernel, for performance.

Etc. etc.

NT, the core OS, had nothing to do with Windows. It's derived from OS/2 3, the original cross-platform, CPU-independent, non-x86 version. It originally targeted the Intel i860, codenamed "N-Ten" -- which is where the "NT" sobriquet comes from.

Part of the design spec for NT was that nothing talked directly to the native API. It exposed "personalities" to userspace, and shipped with 3 of them: OS/2, Win32 and POSIX. The OS/2 personality was later removed, again mainly for marketing reasons. It never included Presentation Manager, so it could only run text-mode OS/2 apps, but NT 3.1 (and, I think, 3.5) included full HPFS support. NT 3.51 could use HPFS volumes but not create them. NT 4 couldn't use them at all, but you could copy the driver across from 3.51 and create some registry entries to re-enable it.

There's a lot of nostalgia for OS/2 out there, but I used it, and it was pretty nasty -- especially the 32-bit version. A massive, 1,000+ line CONFIG.SYS file, which had to be byte-perfect or the OS wouldn't boot. A weird mixture of 16-bit and 32-bit code, much like Win9x but, amazingly, not even as clearly separated as in Win9x. (Be afraid.) Inability to bootstrap itself from DOS or any other OS. Inability, for a long time, to boot directly from CD. And a weird shell unlike anything else, with its own odd definitions of the mouse buttons: the tree view was separate from the file-management windows, and its native app model was a strange template-based thing like the Apple Lisa's, unlike anything else I've seen.

NT 3 was OS/2 done right, I'm afraid. A simple, quick Win3 UI. No text files; everything in a database. (Yes, that decision has come back to bite us now, but at the time, it seemed right.) It supported lots of disk formats and lots of network protocols, and nothing was more or less "native" than anything else. It was relatively simple, clean and fast. It didn't support much Win9x stuff, but to hell with that toytown GameOS.

The shell could be replaced, although few replacements ever made it out there. There was a NeXTstep port, never released, and a skin-deep FOSS clone of it.

NT4 was when the rot set in. The GDI was moved into the kernel, so rogue graphics drivers could bring down your enterprise OS. Unfinished, unstable APIs like DirectX and Direct3D were imported from the Win9x division -- but it still didn't do PnP or USB.

The new Cairo FS and UI were dumped, unfinished, and the Win9x shell bolted on top instead.

NT 5 ("Windows 2000") at least supported PnP, USB, etc., which was nice. But crapware like "Windows Movie Maker", some IE-related bollocks and a half-assed "file and settings transfer wizard" was bolted on and couldn't be uninstalled. You could no longer remove and reconfigure the network stack or strip out the crappy extras. It was still a half-decent OS, but it was starting to bloat out and crumble.

Then XP completed the marketing-led transition to junkware. Ugly themes, more unremovable bloatware. The only good thing was much faster hibernate and wake.

That's when I ditched Windows and switched to Linux and OS X full-time.

But that's also when hoi polloi jumped on board with their nasty little games, and the malware authors followed them. XP brought NT to the mainstream, and that spelled its doom as a credible OS. Once the world heard about it, it was all over.

Some Mac owners bitch about how readily Apple removes "legacy" features. OS X 10.5 couldn't run Classic MacOS apps. 10.6 didn't support PowerPC machines or AppleTalk file sharing, but it could still run PowerPC OS X apps. 10.7 dropped PowerPC apps -- leaving you with only the crappy, Ribbon-infested versions of MS Office -- and couldn't even print to AppleTalk printers.

But taking a longer view, this is a good thing. It means OS X doesn't accrue so much legacy cruft.

64-bit Windows finally dropped 16-bit app support, but that is about all MS has removed, apart from some ancient network protocols. If MS were serious about Windows, it would at least offer an edition that dropped 32-bit apps too, dropped every network protocol except IPv4, and dropped directly-installable binaries -- and if I could be bothered to give this any more thought, I'm sure I could identify half a dozen other bits of ancient, unnecessary junk to remove. MBR disk support, perhaps.

But it isn't. Marketing still rules.
