Mon, Feb. 13th, 2017, 07:51 pm
USB C. Everyone's complaining. I can't wait. I still hope for cable nirvana.

Things have been getting better for a while now. For smaller gadgets, micro-USB is now the standard charging connector. Cables are becoming a consumable for me, but they're cheap and easy to find.

But it only goes in one way round, and it's hard to see or tell which way that is. And not all my gadgets want it the same way round, meaning I have to either remember or peer at a tiny socket and try to guess.

So conditions were right for an either-way-round USB connector.



Mon, Feb. 6th, 2017, 03:35 pm
On being boggled by technological advances [tech blog post, by me]

I had Sinclair Microdrives on my ZX Spectrum. They were an improvement on cassette tape, but on nothing else -- ~90 kB of slowish, rather unreliable storage.

So I bought an MGT DISCiPLE and an old cheap 5¼" 80-track, DS/DD drive.

780 kB of storage! On demand! Programs loaded in seconds! Even when I upgraded to an ex-demo 128K Spectrum from Curry's, even 128 kB programs loaded in a (literal) flash!

(MGT's firmware strobed the Spectrum's screen border, in homage to loading from tape, so you could see the data streaming into memory.)

That was the first time I remember being excited by the size and speed of my computer's storage.

Sun, Jan. 15th, 2017, 03:36 pm
How I chose & bought a cheap Chinese smartphone [tech blog post, by me]

So... when the lack of apps for my beloved Blackberry Passport, and the issues with running sideloaded Android apps, became problematic, I decided to check out a cheap Chinese Android Phablet.

(P.S. The Passport is for sale! Let me know if you're interested.)

The Passport superseded a Samsung Galaxy Note 2, which subsequently got stolen, unfortunately. It was decent, occasionally sluggish, ran an elderly version of Android with no updates in ages, and had a totally useless stylus I never used. It replaced an iPhone 4 which replaced an HTC Desire HD, which replaced a Nokia Communicator E90 -- the best form-factor for a smartphone I've ever had, but nothing like it exists any more.

I wanted a dual-core or quad-core phablet, bigger than 5.5", with dual SIM and a memory card. That was my starting point. I don't have or use a tablet and never have -- I'm a keyboard junkie. I spend a lot of time surfing the web, on social networks, reading books and things on my phone. I wanted one as big as I could get, but still pocketable. My nicked Samsung was 5.5" and I wanted a little larger. I tried a 6" phablet in a shop and wanted still bigger if possible. I also tried a 6.8" Lenovo Phab Pro in a shop and that was a bit too big (but I might be persuaded -- with a tiny bezel, such a device might be usable).

Fri, Nov. 11th, 2016, 03:54 pm
Why I don't use GNOME Shell

Although the launch of GNOME 3 was a bumpy ride and it got a lot of criticism, it's coming back. It's the default desktop of multiple distros again now. Allegedly even Linus Torvalds himself uses it. People tell me that it gets out of the way.

I find this curious, because I find it a little clunky and obstructive. It looks great, but for me, it doesn’t work all that well. It’s OK — far better than it was 2-3 years ago. But while some say it gets out of the way and lets them work undistracted, it gets in my way, because I have to adapt to its weird little quirks. It will not adapt to mine. It is dogmatic: it says, you must work this way, because we are the experts and we have decided that this is the best way.

So, on OS X or Ubuntu, I have my dock/launcher thing on the left, because that keeps it out of the way of the scrollbars. On Windows or XFCE, I put the task bar there. For all 4 of these environments, on a big screen, it’s not too much space and gives useful info about minimised windows, handy access to disk drives, stuff like that. On a small screen, it autohides.

But not on GNOME, no. No, the gods of GNOME have decreed that I don’t need it, so it’s always hidden. I can’t reveal it by just putting my mouse over there. No, I have to click a strange word in the menu bar. “Activities”. What activities? These aren’t my activities. They’re my apps, folders, files, windows. Don’t tell me what to call them. Don’t direct me to click in a certain place to get them; I want them just there if there’s room, and if there isn’t, on a quick flick of the wrist to a whole screen edge, not a particular place followed by a click. It wastes a bit of precious menu-bar real-estate with a word that’s conceptually irrelevant to me. It’s something I have to remember to do.

That’s not saving me time or effort, it’s making me learn a new trick and do extra work.

The menu bar. Time-honoured UI structure. Shared by all post-Mac GUIs. Sometimes it contains a menu, efficiently spread out over a nice big easily-mousable spatial range. Sometimes that’s in the window; whatever. The whole width of the screen in Mac and Unity. A range of commands spread out.

On Windows, the centre of the title bar is important info — what program this window belongs to.

On the Mac, that’s the first word of the title bar. I read from left to right, because I use a Latinate alphabet. So that’s a good place too.

On GNOME 3, there’s some random word I don’t associate with anything in particular as the first word, then a deformed fragment of an icon that’s hard to recognise, then a word, then a big waste of space, then the blasted clock! Why the clock? Are they that obsessive, such clock-watchers? Mac and Windows and Unity all banish the clock to a corner. Not GNOME, no. No, it’s front and centre, one of the most important things in one of the most important places.

Why?

I don’t know, but I’m not allowed to move it.

Apple put its all-important logo there in early versions of Mac OS X. They were quickly told not to be so egomaniacal. GNOME 3, though, enforces it.

On Mac, Unity, and Windows, in one corner, there’s a little bunch of notification icons. Different corners unless I put the task bar at the top, but whatever, I can adapt.

On GNOME 3, no, those are rationed. Some things are hidden under sub-options. In the pursuit of cleanliness and tidiness, things like my network status are hidden away.

That’s my choice, surely? I want them in view. I add extra ones. I like to see some status info. I find it handy.

GNOME says no, you don’t need this, so we’ve hidden it. You don’t need to see a whole menu. What are you gonna do, read it?

It reminds me of the classic Bill Hicks joke:

"You know I've noticed a certain anti-intellectualism going around this country ever since around 1980, coincidentally enough. I was in Nashville, Tennessee last weekend and after the show I went to a waffle house and I'm sitting there and I'm eating and reading a book. I don't know anybody, I'm alone, I'm eating and I'm reading a book. This waitress comes over to me (mocks chewing gum) 'what you readin' for?'...wow, I've never been asked that; not 'What am I reading', 'What am I reading for?’ Well, goddamnit, you stumped me... I guess I read for a lot of reasons — the main one is so I don't end up being a f**kin' waffle waitress. Yeah, that would be pretty high on the list. Then this trucker in the booth next to me gets up, stands over me and says [mocks Southern drawl] 'Well, looks like we got ourselves a readah'... aahh, what the fuck's goin' on? It's like I walked into a Klan rally in a Boy George costume or something. Am I stepping out of some intellectual closet here? I read, there I said it. I feel better."

Yeah, I read. I like reading. It’s useful. A bar of words is something I can scan in a fraction of a second. Then I can click on one and get… more words! Like some member of the damned intellectual elite. Sue me. I read.

But Microsoft says no, thou shalt have ribbons instead. Thou shalt click through tabs of little pictures and try and guess what they mean, and we don’t care if you’ve spent 20 years learning where all the options were — because we’ve taken them away! Haw!

And GNOME Shell says, nope, you don’t need that, so I’m gonna collapse it all down to one menu with a few buried options. That leaves us more room for the all-holy clock. Then you can easily see how much time you’ve wasted looking for menu options we’ve removed.

You don’t need all those confusing toolbar buttons neither, nossir, we gonna take most of them away too. We’ll leave you the most important ones. It’s cleaner. It’s smarter. It’s more elegant.

Well, yes it is, it’s true, but you know what, I want my software to rank usefulness and usability above cleanliness and elegance. I ride a bike with gears, because gears help. Yes, I could have a fixie with none, it’s simpler, lighter, cleaner. I could even get rid of brakes in that case. Fewer of those annoying levers on the handlebars.

But those brake and gear levers are useful. They help me. So I want them, because they make it easier to go up hills and easier to go fast on the flat, and if it looks less elegant, well I don’t really give a damn, because utility is more important. Function over form. Ideally, a balance of both, but if offered the choice, favour utility over aesthetics.

Now, to be fair, yes, I know, I can install all kinds of GNOME Shell extensions — from Firefox, which freaks me out a bit. I don’t want my browser to be able to control my desktop, because that’s a possible vector for malware. A webpage that can add and remove elements to my desktop horrifies me at a deep level.

But at least I can do it, and that makes GNOME Shell a lot more usable for me. I can customise it a bit. I can add elements, and I could make my favourites bar permanent, but honestly, for me, this is core functionality and I don’t think it should be an add-on. The favourites bar still won’t easily let me see how many instances of an app are running, like the Unity one does. Nor does it hold minimised windows and easy shortcuts, like the Mac one. It’s less flexible than either.

There are things I like. I love the virtual-desktop switcher. It’s the best on any OS. I wish GNOME Shell were more modular, because I want that virtual-desktop switcher on Unity and XFCE, please. It’s superb, a triumph.

But it’s not modular, so I can’t. And it’s only customisable to a narrow, limited degree. And that means not to the extent that I want.

I accept that some of this is because I’m old and somewhat stuck in my ways and I don’t want to change things that work for me. That’s why I use Linux, because it’s customisable, because I can bend it to my will.

I also use Mac OS X — I haven’t upgraded to Sierra yet, so I won’t call it macOS — and anyway, I still own computers that run MacOS, as in MacOS 6, 7, 8, 9 — so I continue to call it Mac OS X. What this tells you is that I’ve been using Macs for a long time — since the late 1980s — and whereas they’re not so customisable, I am deeply familiar and comfortable with how they work.

And Macs inspired the Windows desktop and Windows inspired the Linux desktops, so there is continuity. Unity works in ways I’ve been using for nearly 30 years.

GNOME 3 doesn’t. GNOME 3 changes things. Some in good ways, some in bad. But they’re not my ways, and they do not seem to offer me any improvement over the ways I’m used to. OS X and Unity and Windows Vista/7/8/10 all give me app searching as a primary launch mechanism; it’s not a selling point of GNOME 3. The favourites bar thing isn’t an improvement on the OS X Dock or Unity Launcher or Windows Taskbar — it only delivers a small fraction of the functionality of those. The menu bar is if anything less customisable than the Mac or Unity ones, and even then, I have to use extensions to do it. If I move to someone else’s computer, all that stuff will be gone.

So whereas I do appreciate what it does and how and why it does so, I don’t feel like it’s for me. It wants me to change to work its way. The other OSes I use — OS X daily, Ubuntu Unity daily, Windows occasionally when someone pays me — don’t.

So I don’t use it.

Does that make sense?

Fri, Nov. 11th, 2016, 12:30 pm
More of the same -- a rant copied from Facebook. Don't waste your time. ;-)

I'm mainly putting this here to keep it around, as writing it clarified some of my thinking about technological generations.

From https://www.facebook.com/groups/vintagecomputerclub/

You're absolutely right, Jim.

The last big advances were in the 1990s, and since then, things have just stagnated. There are several reasons why -- like all of real life, it's complex.

Firstly, many people believe that computing (and _personal_ computing) began with the 8-bits of the late 1970s: the Commodore PETs, Apple ][s and things. That before them, there were only big boring mainframes and minicomputers, room-sized humming boxes managing bank accounts.

Of course, it didn't. In the late '60s and early '70s, there was an explosion of design creativity, with personal workstations -- Lisp Machines, the Xerox PARC machines: the Alto, Star, Dandelion and so on. There were new cutting-edge designs, with object-oriented languages, graphical user interfaces, networking, email and the internet. All before the 8-bit microprocessors were invented.

Then what happened is a sort of mass extinction event, like the end of the dinosaurs. All the weird clever proprietary operating systems were overtaken by the rise of Unix, and all the complex, very expensive personal workstations were replaced with microcomputers.

But the early micros were rubbish -- so low-powered and limited that all the fancy stuff like multitasking was thrown away. They couldn't handle Unix or anything like it. So decades of progress was lost, discarded. We got rubbish like MS-DOS instead: one program, one task, 640kB of memory, and only with v2 did we get subdirectories and with v3 proper hard disk support.

A decade later, by the mid-to-late 1980s, the micros had grown up enough to support GUIs and sound, but instead of being implemented on elegant grown-up multitasking OSes, we got them re-implemented, badly, on primitive OSes that would fit into 512kB of RAM on a floppy-only computer -- so we got ST GEM, Acorn RISC OS, Windows 2. No networking, no hard disks -- they were too expensive at first.

Then a decade after that, we got some third-generation 32-bit micros and 3rd-gen microcomputer OSes, which brought back networking and multitasking: things like OS/2 2 and Windows NT. But now, the users had got used to fancy graphics and sound and whizzy games, which the first 32-bit 3rd-gen OSes didn't do well, so most people stuck with hybrid 16/32-bit OSes like Windows 9x and MacOS 8 and 9 -- they didn't multitask very well, but they could play games and so on.

Finally, THREE WHOLE DECADES after the invention of the GUI and multitasking workstations and everything connected via TCP/IP networking, we finally got 4th-gen microcomputer OSes: things like Windows XP and Mac OS X. Both the solid multitasking basis with networking and security, AND the fancy 3D graphics, video playback etc.

It's all been re-invented and re-implemented, badly, in a chaotic mixture of unsuitable and unsafe programming languages, but now, everyone's forgotten the original way these things were done -- so now, we have huge, sprawling, messy OSes and everyone thinks it's normal. They are all like that, so that must be the only way it can be done, right? If there was another way, someone would have done it.

But of course, they did do it; it's just that only really old people remember it or saw it, so it's myth and legend. Nobody really believes in it.

Nearly 20y ago, I ran BeOS for a while: a fast, pre-emptive multitasking, multithreaded, 3D and video capable GUI OS with built-in Internet access and so on. It booted to the desktop in about 5 seconds. But there were few apps, and Microsoft sabotaged the only hardware maker to bundle it.

This stuff _can_ be done better: smaller, faster, simpler, cleaner. But you can't have that and still have compatibility with 25y worth of DOS apps or 40y worth of Unix apps.

So nobody used it and it died. And now all we have is bloatware, but everyone points at how shiny it is, and if you give it a few billion kB of RAM and Flash storage, it actually starts fairly quickly and you only need to apply a few hundred security fixes a year. We are left with junk reimplemented on a basis of more junk, and because it's all anyone knows, they think it's the best it could be.

Mon, Oct. 24th, 2016, 06:35 pm
Playing "what if" with the history of IT

Modern OSes are very large and complicated beasts.

This is partly because they do so many different things: the same Linux kernel is behind the OS for my phone, my laptop, my server, and probably my router and the server I'm posting this on.

Much the same is true of Windows and of most Apple products.

So they have to be that complex, because they have to do so many things.

This is the accepted view, but I maintain that this is at least partly cultural and partly historical.

Some of this stuff, like the story that “Windows is only so malware-vulnerable because Windows is so popular; if anything else were as popular, it’d be as vulnerable”, is a pointless argument, IMHO, because lacking access to alternate universes, we simply cannot know.

So, look, let us consider, as a poor parallel, the industry’s own history.

Look at Windows in the mid to late 1990s as an instance.

Because MS was busily developing a whole new OS, NT, and it couldn’t do everything yet, it was forced to keep maintaining and extending an old one: DOS+Win9x.

So MS added stuff to Win98 that was different to the stuff it was adding to NT.

Some things made it across, out of sync…

NT 3.1 did FAT16, NTFS and HPFS.

Win95 only did FAT. So MS implemented VFAT: long filenames on FAT.

NT 3.1 couldn’t see them; NT 3.5 added that.
Then Win 95B added FAT32, which no NT release could read; native FAT32 support only arrived in the NT line with Windows 2000.

Filesystems are quite fundamental, so MS did much of the work needed to keep the two lines able to interwork.

But it didn’t do it with hardware support. Not back then.

Win95: APM, Plug’n’Play, DirectX.
Later, DirectX 2 with Direct3D.
Win95B: USB1.
Win98: USB2, ACPI; GDI+.
Win98SE: basic Firewire camera-only support; Wake-on-LAN; WDM modems/audio.
WinME: USB mass storage & HID; more complete Firewire; S/PDIF.

(OK, NT 4 did include DirectX 2.0 and thus Direct3D. There were rumours that it only did software rendering on NT and true hardware-accelerated 3D wasn’t available until Windows 2000. NT had OpenGL. Nothing much used it.)

A lot of this stuff only came to the NT family with XP in 2001. NT took a long time to catch up.

My point here is that, in the late ‘90s, Windows PCs became very popular for gaming and for home Internet access over dialup, and newly-capable Windows laptops were becoming attractive for consumers to own. Windows became a mass-market product for entertainment purposes.

And all that stuff was mainly supported on Win9x, _not_ on NT, because NT was at that time being sold to business as a business OS for business desktop computers and servers. It was notably bad as a laptop OS. It didn’t have PnP, its PCMCIA/Cardbus support and power management were very poor, it didn’t support USB at all, and so on.

Now, imagine this as an alternate universe.

In ours, as we know, MS was planning to merge its OS lines. Sensible plan: the DOS stuff was a legacy burden. But what if it hadn’t merged them? Say it had kept developing Win9x as the media/consumer OS and NT as the business OS?

This is only a silly thought experiment, don’t try to blow it down by pointing out why not to do it. We know that.

They had a unified programming model — Win32. Terrified of the threat of the DoJ splitting them up, they were already working on its successor, the cross-platform .NET.

They could have continued both lines: one supporting gaming and media and laptops, with lots of special driver support for those. The other supporting servers and business desktops, not supporting all the media bells and whistles, but much more solid.

Yes, it sounds daft, but this is what actually happened for the best part of six years, from 1996 and the releases of NT 4 and Win 95 OSR2 until Windows XP in 2001.

Both could run MS Office. Both could attach to corporate networks and so on. But only one was any good for gaming, and only the other if you wanted to run SQL Server or indeed any kind of server, firewall, whatever.

Both were dramatically smaller than the post-merger version which does both.

The tendency has been to economise, to have one do-everything product, but for years, they couldn’t do that yet, so there were 2 separate OS teams, and both made major progress, both significantly advanced the art. The PITA “legacy” platform went through lots of releases, steadily gaining functionality, as I showed with that list above, but it was all functionality that didn’t go into the enterprise OS, which went through far fewer releases — despite it being the planned future one.

Things could have gone differently. It’s hard to imagine now, but it’s entirely possible.

If IBM had committed to OS/2 being an 80386 OS, then its early versions would have been a lot better, properly able to run and even multitask DOS apps. Windows 3 would never have happened. IBM and MS would have continued their partnership for longer; NT might never have happened at all, or DEC would have kept Dave Cutler and PRISM might have happened.

If Quarterdeck had been a bit quicker with it, DESQview/X might have shipped before Windows 3, and been a far more compelling way of running DOS apps on a multitasking GUI OS. The DOS world might have been pulled in the Unix-like direction of X.11 and TCP/IP, instead of MS’s own in-house GUI and Microsoft and Novell’s network protocols.

If DR had moved faster with DR-DOS and GEM — and Apple hadn’t sued — a third-party multitasking DOS with a GUI could have made Windows stillborn. They had the tech — it went into FlexOS, but nobody’s heard of it.

If the later deal between a Novell-owned DR and Apple had happened, MacOS 7 would have made the leap to the PC platform:

https://en.wikipedia.org/wiki/Star_Trek_project

(Yes, it sounds daft, but this was basically equivalent to Windows 95, 3 years earlier. And for all its architectural compromises, look how successful Win95 was: 40 million copies in the first year. 10x what any previous version did.)

Maybe Star Trek would have bridged the gap, and instead of NeXT, Apple would have bought Be and migrated us to BeOS. I loved BeOS even more than I loved classic MacOS. I miss it badly. Others do too, which is why Haiku is still slowly moving forward, unlike almost any other non-Unix FOSS OS.

If the competing GUI computers of the late 1980s had made it into the WWW era, notably the Web 2.0 era, they might have survived. The WWW and things like Java and JavaScript make real rich cross-platform apps viable. I am not a big fan of Google Docs, but they are actually usable and I do real, serious, paying work with them sometimes.

So even if they couldn’t run PC or Mac apps, a modern Atari ST or Commodore Amiga or Acorn RISC OS system with good rich web browsers could be entirely usable and viable. They died before the tech that could have saved them, but that’s partly due to mismanagement, it’s not some historical inevitability.

If the GNU project had adopted the BSD kernel, as it considered, and not wasted effort on the HURD, Linux would never have happened and we’d have had a viable FOSS Unix several years earlier.

This isn’t entirely idle speculation, IMHO. I think it’s instructive to wonder how and where things might have gone. The way it happened is only one of many possible outcomes.

We now have effectively 3 mass-market OSes, 2 of them Unixes: Windows NT (running on phones, Xboxes and PCs), Linux (including Android), and macOS/iOS. All are thus multipurpose, doing everything from small devices to enterprise servers. (Yes, I know, Apple’s stopped pushing servers, but it did once: the Xserve made it to quad-core Xeons & its own RAID hardware.)

MS, as one company with a near-monopoly, had a strong incentive to support only one OS family, and it has done so even when it cost it dearly — for instance, moving its phones to the NT kernel was extremely costly and has essentially cost it the phone market. Windows CE actually did fairly well in its time.

Apple, coming back from a weak position, had similar motivations.

What if, instead, the niches had been held by different companies? If every player hadn’t tried to do everything, and most of them hadn’t killed themselves trying?

What if we’d had, say, in each of the following market sectors, 1-2+ companies with razor sharp focus aggressively pushing their own niches…

* home/media/gaming
* enterprise workstations
* dedicated laptops (as opposed to portable PCs)
* enterprise servers
* pocket PDA-type devices

And there are other possibilities. The network computer idea was actually a really good one IMHO. The dedicated thin client/smart terminal is another possible niche.

There are things that came along in the tech industry just too late to save players that were already moribund. The two big ones I’m thinking of were the Web, especially the much-scorned-by-techies (including me) Web 2, and FOSS. But there are others — commodity hardware.

I realise that now, it sounds rather ludicrous. Several companies, or at least product lines, destroyed themselves trying to copy rivals too closely — for instance, OS/2. Too much effort trying to be “a better DOS than DOS, a better Windows than Windows”, rather than trying to just be a better OS/2.

Apple didn’t try this with Mac OS X. OS X wasn’t a better Classic MacOS, it was an effectively entirely new OS that happened to be able to run Classic MacOS in a VM. (I say effectively entirely new, because OS X did very little to try to appeal to NeXT owners or users. Sure, they were rich, but there weren’t many of them, whereas there were lots of Mac owners.)

What I am getting at here, in my very very long-winded way, is this.

Because we ended up with a small number of players, each of ‘em tried to do everything, and more or less succeeded. The same OS in my phone is running the server I’ll be posting this message to, and if I happened to be using a laptop to write this, it’d be the same OS as on my PC.

If I was on my (dual-booting) Win10 laptop and was posting this to a blog on CodePlex or something, it’d be the same thing, but a different OS. If MS still offered phones with keyboards, I’d not object to a Windows phone — that’s why I switched to a Blackberry — but as it is Windows phones don’t offer anything I can’t get elsewhere.

But if the world had turned out differently, perhaps, unified by FOSS, TCP/IP, HTML, Java and Javascript, my phone would be a Symbian one — because I did prefer it, dammit — and my laptop would be a non-Unix Apple machine and my desktop an OS/2 box and they’d be talking to DEC servers. For gaming I’d fire up my Amiga-based console.

All talking over Dropbox or the like, all running Google Docs instead of LibreOffice and ancient copies of MS Word.

It doesn’t sound so bad to me. Actually, it sounds great.

Look at the failure of Microsoft’s attempt to converge its actually-pretty-good tablet interface with its actually-pretty-good desktop UI. Bombed, may yet kill them.

Look at Ubuntu’s failure, so far, to deliver its converged UI. As Scott Gilbertson said:

<<
Before I dive into what's new in Ubuntu 16.10, called Yakkety Yak, let's just get this sentence out of the way: Ubuntu 16.10 will not feature Unity 8 or the new Mir display server.

I believe that's the seventh time I've written that since Unity 8 was announced and here we are on the second beta for 16.10.
>>
http://www.theregister.co.uk/2016/09/26/ubuntu_16_10_beta_2_review/

And yet look at how non-techies are perfectly happy moving from Windows computers to Android and iPhones, despite totally different UIs. They have no problems at all. Different tools for different jobs.

From where we are, the idea of totally different OSes on different types of computer sounds ridiculous, but I think that’s a quirk of the market and how things happened to turn out. At different points in the history of the industry _as it actually happened_ things went very differently.

Microsoft is a juggernaut now, but for about a decade, from the mid ‘80s into the early ’90s, the world completely ignored Windows and bought millions of Atari STs and Commodore Amigas instead. Rich people bought Macs.

The world still mostly ignores FreeBSD, but NeXT didn’t, and FreeBSD is one of the parents of Mac OS X and iOS, both loved by hundreds of millions of happy customers.

This is not the best of all possible worlds.

But because our PCs are so fast and so capacious, most people seem to think it is, and that is very strange to me.

As it happens, we had a mass extinction event. It wasn’t really organised enough to call it a war. It was more of an emergent phenomenon. Microsoft and Apple didn’t kill Atari and Commodore; Atari and Commodore killed each other in a weird sort of unconscious suicide pact.

But Windows and Unix won, and history is written by the winners, and so now, everyone seems to think that this was always going to be and it was obvious and inevitable and the best thing.

It wasn’t.

And it won’t continue to be.

Sun, Oct. 23rd, 2016, 03:57 pm
What might drive the adoption of new OSes in such a mature market as we have now?

If you write code to target a particular OS in a compiled language that is linked against local OS libraries and needs to talk to particular hardware via particular drivers, then that code is going to be quite specific to the OS it was designed upon and will be difficult to port to another OS. Indeed it might end up tied to one specific version of one specific distro.

Notably, the lowest-level of high-level languages, C.

If, OTOH, you write an app that runs in an interpreted language, that never runs outside the sandbox of the interpreter, which makes a request to get its config from a defined service on another machine, stores its working data on another such machine, and emits its results across the network to another machine, all within one set of, say, Ruby modules, then you don't need to care so much.

This is, vastly reduced and simplified, called the microservices "design pattern".
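
To make that shape concrete, here is a minimal sketch in Python, using only the standard library (the post's own example language is Ruby, but Python comes up later and works just as well here). The hostnames, endpoints and field names are hypothetical, not from any real deployment: the worker fetches its config from one machine, does its work, and posts the result to another. Nothing in it cares which OS the interpreter happens to be running on.

```python
# Microservice-shaped worker: config, data and results all live on other
# machines, reached over HTTP. All hostnames and endpoints are made up.
import json
from urllib import request

CONFIG_URL = "http://config.internal/myservice/settings"   # hypothetical config service
RESULTS_URL = "http://results.internal/myservice/submit"   # hypothetical results sink

def fetch_config():
    # Ask a remote service for this worker's settings.
    with request.urlopen(CONFIG_URL) as resp:
        return json.load(resp)

def process(cfg):
    # Stand-in for the real work; here we just echo part of the config back.
    return {"status": "ok", "batch_size": cfg.get("batch_size", 10)}

def submit(result):
    # Push the result to another remote service as JSON.
    body = json.dumps(result).encode("utf-8")
    req = request.Request(RESULTS_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    print("Submitted, HTTP status:", submit(process(fetch_config())))
```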

It is how most modern apps are built — either for the public WWW, or VPNs, or "intranets", not that that word is used any more, as everyone has one now. You split it up into chunks as small as possible, because small "Agile" teams can work on each chunk. You farm the chunks out onto VMs, because then, if you do well and have a load spike, you can start more VMs and scale to much, much larger workloads.

From Twitter to Instagram, the modern Web is built like this. Instagram, for instance, was built by a team of 3 people, originally in pure Python. They did not own a single server. They deployed not one OS instance. Nothing. EVERYTHING was "in the cloud" running on temporary rented VMs on Amazon EC2.

After 1 year they had 14 million users.

This service sold for a billion dollars.

http://highscalability.com/blog/2012/4/9/the-instagram-architecture-facebook-bought-for-a-cool-billio.html

These aren't toy technologies. They're not little prototypes or occasional research projects.

So if you break your architecture down into little fungible boxes, and you don't manually create any of the boxes, you just create new instances in a remote datacenter with automated tools, then…

Firstly, some of the infrastructure that hosts those boxes could be exchanged if it delivered major benefits and the users would never even know.

Secondly, if a slight change to the internal makeup of some of the boxes delivered major improvements — e.g. by using massively less memory, or starting quicker — it would be worth changing the box makeup to save money and improve performance. That is what is driving migration to containers. Rather than lots of Linux hosts with lots of VMs holding more Linux instances, you have a single host with a single kernel and lots of containers, and you save 1GB RAM per instance or something. That adds up very quickly.
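
As a rough back-of-envelope check on "that adds up very quickly": the ~1 GB-per-instance figure is the one from the paragraph above, while the fleet size and per-container overhead below are made-up illustrative numbers.

```python
# Hypothetical fleet: how much RAM comes back by swapping full VMs
# (each carrying its own guest OS) for containers sharing one kernel?
instances = 200                # made-up number of service instances
vm_overhead_gb = 1.0           # per-instance guest-OS overhead (figure from the post)
container_overhead_gb = 0.05   # rough per-container overhead (illustrative guess)

saved_gb = instances * (vm_overhead_gb - container_overhead_gb)
print(f"Moving {instances} instances from VMs to containers frees ~{saved_gb:.0f} GB of RAM")
```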

So, thirdly, if such relatively modest efficiencies drive big cost savings or performance improvements, then perhaps, if you could take those of your VMs that just run a small set of (probably quite big & complex) Python scripts, say, and replace them with a different OS that can run those scripts quicker in a tenth of the RAM, it might be worth some re-tooling.

That's speculative but it's far from impossible.

There is much more to life than compiled languages. A lot of production software today — countless millions of lines — never goes near a compiler. It's written in interpreted "scripting" languages, or ones that run in VMs, or it just calls on off-the-shelf code on other systems which is a vanilla app, straight out of a repo, that just has some config applied.

Some of my English students here are training up to be developers. #1 language of choice: JavaScript. #2: Java. C? A historical curiosity. C++? A more recent curiosity.

If one had, for instance, some totally-un-Unix-like OS with a working JVM, or with a working Python interpreter — CPython for compatibility — that would run a lot of off-the-shelf code for a lot of people.

Old time Unix or Windows hands, when they hear "develop a new app", think of reaching for a compiler or an IDE.

The millennials don't. They start looking for modules of existing code that they can use, stitched together with some Python or Ruby.

No, $NewOS will not replace all uses of Windows or Unix. Don't be silly. But lots of companies aren't running any workloads at all on bare metal any more: it's all in VMs, and what those VMs are hosted on is a matter of convenience or the personal preference of the sysadmins. Lots of that is now starting to move to containers instead, because they're smaller and faster to start and stop. Apps are structured as microservices across farms of VMs and are moving to farms of containers instead.

A container can be 10x smaller than a VM and start 100x faster. That's enough to drive adoption.

Well if you can host that container on an OS that's 10x smaller and starts 100x faster, that too might be enough to drive adoption.

You don't need to know how to configure and provision it so long as you can script it with a few lines in an Ansible playbook or a small Puppet file. If the OS has almost no local config because it's only able to run inside a standardised VM, how much setup and config does it need anyway?

Tue, Oct. 18th, 2016, 09:41 pm
Computing: FEATURE - Server integration - Windows onto Unix

I stumbled across an old article of mine earlier, and tweeted it. Sadly, the server seems to have noticed and slapped a paywall onto it. So, on the basis that I wrote the bally thing anyway, here's a copy of the text for posterity, grabbed from Google's cache. Typos left from the original.

FEATURE - Server integration - Window onto Unix

If you want to access a Unix box from a Windows PCs you might feel that the world is against you. Although Windows wasn't designed with Unix integration in mind there is still a range of third-party products that can help. Liam Proven takes you through a selection of the better-known offerings.

10 March 1998

Although Intel PCs running some variant of Microsoft Windows dominate the desktop today, Unix remains strong as a platform for servers and some high-end graphics workstations. While there's something to be said in favour of desktop Unix in cost-of-ownership terms, it's generally far cheaper to equip users with commodity Windows PCs than either Unix workstations or individual licences for the commercial Unix offerings, such as Sun's Solaris or SCO's products, that run on Intel PCs.

The problem is that Windows was not designed with Unix integration as a primary concern. Granted, the latest 32-bit versions are provided with integrated Internet access in the form of TCP/IP stacks and a web browser, but for many businesses, a browser isn't enough.

These power users need more serious forms of connectivity: access to Unix server file systems, text-based applications and graphical Unix programs.

These needs are best met by additional third-party products. Most Unix vendors offer a range of solutions, too many to list here, so what follows is a selection of the better-known offerings.

Open access

In the 'Open Systems' world, there is a single, established standard for sharing files and disks across Lans: Network File System (NFS). This has superseded the cumbersome File Transfer Protocol (FTP) method, which today is mainly limited to remote use, for instance in Internet file transfers.

Although, as with many things Unix, it originated with Sun, NFS is now the de facto standard, used by all Unix vendors. In contrast to FTP, NFS allows a client to mount part of a remote server's filesystem as if it were a local volume, giving transparent access to any program.

It should come as no surprise that no version of Windows has built-in NFS support, either as a client or a server. Indeed, Microsoft promotes its own system as an alternative to NFS under the name of CIFS. Still, Microsoft does include FTP clients with its TCP/IP stacks, and NT Server even includes an FTP server. Additionally, both Windows 95 and NT can print to Unix print queues managed by the standard LPD service.

It is reasonably simple to add NFS client support to a small group of Windows PCs. Probably the best-regarded package is Hummingbird's Maestro (formerly from Beame & Whiteside), a suite of TCP/IP tools for Windows NT and 95. In addition to an NFS client, it also offers a variety of terminal emulations, including IBM 3270 and 5250, Telnet and an assortment of Internet tools. A number of versions are available including ones to run alongside or independently of Microsoft's TCP/IP stack. DOS and Windows 3 are also provided for.

There is also a separate NFS server to allow Unix machines to connect to Windows servers.

If there are a very large number of client machines, though, purchasing multiple licences for an NFS package might prove expensive, and it's more cost-effective to make the server capable of serving files using Windows standards. Effectively, this means the Server Message Block (SMB) protocol, the native 'language' of Microsoft's Lan Manager, as used in everything from Windows for Workgroups to NT Server.

Lan Manager - or, more euphemistically, LanMan - has been ported to run on a range of non-MS operating systems, too. All Microsoft networking is based on LanMan, so as far as any Windows PCs are concerned, any machine running LanMan is a file server: a SCO Unix machine running VisionFS, or a Digital Unix or OpenVMS machine running PathWorks. For Solaris systems, SunLink PC offers similar functionality.

It's completely transparent: without any additional client software, all network-aware versions of Windows (from Windows 3.1 for Workgroups onwards) can connect to the disks and printers on the server. For DOS and Windows 3.1 clients, there's even a free LanMan (Dos-based) client available from Microsoft. This can be downloaded from www.microsoft.com or found on the NT Server CD.

Samba in the server

So far, so good - as long as your Unix vendor offers a version of LanMan for its platform. If not, there is an alternative: Samba. This is a public domain SMB network client and server, available for virtually all Unix flavours. It's tried and tested, but traditionally-minded IT managers may still be biased against public domain software. Even so, Samba is worth a look; it's small and simple and works well. It only runs over TCP/IP, but this comes as standard with 32-bit Windows and is a free add-on for Windows 3. A Unix server with Samba installed appears in "Network Neighborhood" under Windows as another server, so use is completely transparent.

File and print access is fine if all you need to do is gain access to Unix data from Windows applications, but if you need to run Unix programs on Windows, it's not enough. Remote execution of applications is a built-in feature of the Unix operating system, and works in three basic ways.

The simplest is via the Unix commands rexec and rsh, which allow programs to be started on another machine across the network. However, for interactive use, the usual tools are Telnet, for text-terminal programs, and the X Window System (or X) for GUI applications.

Telnet is essentially a terminal emulator that works across a TCP/IP network, allowing text-based programs to be used from anywhere on the network. A basic Telnet program is supplied free with all Windows TCP/IP stacks, but only offers basic PC ANSI emulation. Traditional text-based Unix applications tend to be designed for common text terminals such as the Digital VT220 or Wyse 60, and use screen controls and keyboard layouts specific to these devices, which the Microsoft Telnet program does not support.

A host of vendors supply more flexible terminal emulators with their TCP/IP stacks, including Hummingbird, FTP Software, NetManage and many others. Two specialists in this area are Pericom Software and J River.

Pericom's Teem range of terminal emulators is probably the most comprehensive, covering all major platforms and all major emulations. J River's ICE range is more specific, aiming to connect Windows PCs to Unix servers via TCP/IP or serial lines, providing terminal emulation, printing to Unix printers and easy file transfer.

Unix moved on from its text-only roots many years ago and modern Unix systems have graphical user interfaces much like those of Windows or the MacOS. The essential difference between these and the Unix GUI, though, is that X is split into two parts, client and server. Confusingly, these terms refer to the opposite ends of the network than in normal usage: the X server is the program that runs on the user's computer, displaying the user interface and accepting input, while the X client is the actual program code running on a Unix host computer.

The X factor

This means that all you need to allow PCs to run X applications is an X server for MS Windows - and these are plentiful. While Digital, Sun and other companies offer their own X servers, one of the best-regarded third-party offerings, Exceed, again comes from Hummingbird. With an MS Windows X server, users can log-in to Unix hosts and run any X-based application as if they were using a Unix workstation - including the standard X terminal emulator xterm, making X ideal for mixed graphical and character-based work.

The only drawback of using terminal emulators or MS Windows X servers for Unix host access is the same as that for using NFS: the need for multiple client licences. However, a radical new product from SCO changes all that.

The mating game

Tarantella is an "application broker": it shifts the burden of client emulation from the desktop to the server. In short, Tarantella uses Java to present a remote desktop or "webtop" to any client computer with a Java-capable web browser. From the webtop, the user can start any host-based application to which they have rights, and Tarantella downloads Java code to the client browser to provide the relevant interface - either a terminal emulator for character-based software or a Java X emulator for graphical software.

The host software can be running on the Tarantella server or any other host machine on the network, meaning that it supports most host platforms - including Citrix WinFrame and its variants, which means that Tarantella can supply Windows applications to all clients, too.

Tarantella is remarkably flexible, but it's early days yet - the first version only appeared four months ago. Currently, Tarantella is confined to running on SCO's own UnixWare, but versions are promised for all major Unix variants and Windows NT.

There are plenty of ways to integrate Windows and Unix environments, and it's a safe bet that whoever your Unix supplier is they will have an offering - but no single product will be perfect for everyone, and those described here deserve consideration. Tarantella attempts to be all things to all system administrators, but for now, only if they are running SCO. It's highly likely, though, that it is a pointer to the way things will go in the future.

USING WINDOWS FROM UNIX

There are a host of solutions available for accessing Unix servers from Windows PCs. Rather fewer go the other way, allowing Unix users to use Windows applications or data stored on Windows servers.

For file-sharing, it's easiest to point out that the various solutions outlined in the main article for accessing Unix file systems from Windows will happily work both ways. Once a Windows machine has access to a Unix disk volume, it can place information on to that volume as easily as it can take it off.

For regular transfers, or those under control of the Unix system, NFS or Samba again provide the answer. Samba is both a client and a server, and Windows for Workgroups, Windows 95 and Windows NT all offer server functionality.

Although a Unix machine can't access the hard disk of a Windows box which is only running an NFS client, most NFS vendors also offer separate NFS servers for Windows. It would be unwise, at the very least, to use Windows 3 or Windows 95 as a file server, so this can reasonably be considered to apply mainly to PCs running Windows NT.

Here, the licensing restrictions on NT come into play. NT Workstation is only licensed for 10 simultaneous incoming client connections, so even if the NFS server is not so restricted, allowing more than this violates Microsoft's licence agreement. Different versions of NT Server allow different numbers of clients, and additional licences are readily available from Microsoft, although versions 3.x and 4 of NT Server do not actually limit connections to the licensed number.

There are two routes to running Windows applications on Unix workstations: emulating Windows itself on the workstation, or adding a multi-user version of Windows NT to the Unix network.

Because there are so many applications for DOS and Windows compared to those for all other operating system platforms put together, several companies have developed ways to run Windows, or Windows programs, under Unix. The simplest and most compatible method is to write a Unix program which emulates a complete Intel PC, and then run an actual copy of Windows on the emulator.

This has been done by UK company Insignia, whose SoftWindows was developed with assistance from Microsoft itself. SoftWindows runs on several Unix architectures including Solaris, IRIX, AIX and HP-UX (as well as the Apple Macintosh), and when running on a powerful workstation is very usable.

A different approach was tried by Sun with Wabi. Wabi once stood for "Windows Application Binary Interface", but for legal reasons, this was changed, and now the name doesn't stand for anything. Wabi translates Windows API calls into their Unix equivalents, and emulates an Intel 386 processor for use on RISC systems. This enables certain 16-bit Windows applications, including the major office suites, to run under Unix, without requiring an actual copy of Microsoft Windows. However, it isn't guaranteed to run any Windows application, and partly due to legal pressure from Microsoft, development was halted after the 16-bit edition was released.

It's still on sale, and versions exist for Sun Solaris, SCO Unix and Caldera OpenLinux.

Both these approaches are best suited to a small number of users who don't require high Windows performance. For many users and high-performance, Insignia's NTrigue or Tektronix' WinDD may be better answers. Both are based on Citrix WinFrame, which is a version of Windows NT Server 3.51 licensed from Microsoft and adapted to allow true multi-user access. While WinFrame itself uses the proprietary ICA protocol to communicate with clients, NTrigue and WinDD support standard X Windows, allowing Unix users to log-in to a PC server and remotely run 32-bit Windows software natively on Intel hardware.

Mon, Oct. 10th, 2016, 07:09 pm
Some ramblings on the importance of culture in tech, especially around choice of programming tools

[A friend asked why, if Lisp was so great, it never got a look-in when Ada was designed.]

My impression is that it’s above all else cultural.

There have long been multiple warring factions depending on deeply-felt beliefs about how computing should be done. EBCDIC versus ASCII, RISC vs CISC, C vs Pascal, etc. Now it’s mostly sorted inasmuch as we all use Unix-like OSes — the only important exception, Windows, is becoming more Unix-like — and other languages etc. are layered on top.

But it goes deeper than, e.g., C vs Pascal, or BASIC or Fortran or whatever. There is the imperative vs functional camp. Another is algebraic expressions versus non-algebraic: i.e. prefix or postfix (stack-oriented RPN), or something Other such as APL/I/J/A+; manual memory management versus automatic with GC; strongly versus weakly typed (and arguably sub-battles such as manifest versus inferred/duck typing, static vs dynamic, etc.)
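
As a concrete illustration of that notation split (my example, not from the original discussion): the expression written (3 + 4) * 5 in algebraic infix notation becomes (* (+ 3 4) 5) in Lisp-style prefix, and 3 4 + 5 * in stack-oriented RPN postfix. A small stack is all it takes to evaluate the postfix form:

```python
# Tiny stack-based evaluator for postfix (RPN) expressions, to show why
# stack-oriented languages favour this notation: no parentheses, no
# precedence rules, just push operands and apply operators.
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b,
       "/": lambda a, b: a / b}

def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # note the operand order
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 5 *".split()))   # (3 + 4) * 5 -> 35.0
```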

Mostly, the wars settled on: imperative; algebraic (infix) notation; manual memory management for system-level code and for externally-distributed code (commercial or FOSS), and GC Pascal-style languages for a lot of internal corporate s/w development (Delphi, VB, etc.).

FP, non-algebraic notation and things like them were thus sidelined for decades, but are now coming back, layered on top of complex OSes written in C-like languages. This is an era of proliferation in dynamic, interpreted or JITted languages used for specific niche tasks, running on top of umpteen layers of GP OS. Examples range across Javascript, Perl 6, Python, Julia, Clojure, Ruby and tons more.

Meanwhile, new safer members of the broader C family of compiled languages, such as Rust and Go, and stretching a point Swift, are getting attention for more performance-critical app programming.

All the camps have strong arguments. There are no single right or wrong answers. However, cultural pressure and uniformity mean that outside of certain niches, we have several large camps or groups. (Of course, individual people can belong to more than one, depending on job, hobby, whatever.)

C and its kin are one, associated with Unix and later Windows.

Pascal and its kin, notably Object Pascal (Delphi/FPC), are another. BASIC now means VB, and that means .NET-family languages, another family. Both have historically been mainly part of the MS camp, but are now reaching out, against some resistance, into Unix land.

Java forms a camp of its own, but there are sub-camps of non-Java-like languages running on the JVM — Clojure, Scala, etc.

Apple’s flavour of Unix forms another camp, comprising ObjC and Swift, having abandoned outreach efforts.

People working on the development of Unix itself tend to strongly favour C above all else, and like relatively simple, old-fashioned tools — ancient text editors, standalone compilers. This has influenced the FOSS Unix GUIs and their apps.

The commercial desktop app developers are more into IDEs and automation; these days this covers .NET and JVM camps, and spans all OSes, but the Pascal/VM camp are still somewhat linked to Windows.

The people doing niche stuff, for their own needs or their organisations, which might be distributed as source — which covers sysadmins, devops and so on — are more into scripting languages, where there’s terrific diversity.

Increasingly the in-house app devs are just using Java, be they desktop or server apps. Indeed “desktop” apps of this type might now often mean Java server apps generating a remote UI via web protocols and technologies.

Multiple camps and affiliations. Many of them disdain the others.

A summary of how I’m actually addressing your question:

But these ones are the dominant ones, AFAICS. So when a new “safe” “secure” language was being built, “weird” niche things like Lisp, Forth, or APL never had a chance of a look-in. So it came out looking a bit Pascal- and BASIC-like, as those are the ones on the safe, heavily-type-checked side of the fence.

A more general summary:

I am coming to think that there are cultural forces stronger than technical forces involved in language choice.

Some examples I suspect that have been powerful:

Lisp (and FP) are inherently complex to learn and to use, and require exceptionally high intelligence in certain focussed forms. Some people perfectly able to be serviceable, productive coders in simple imperative languages find themselves unable to fathom these styles or methods of programming. Their response is resentment, and to blame the languages, not themselves. (The Dunning-Kruger effect is not a problem confined to those of low intelligence.)

This has resulted in the marginalisation of these technologies as the computing world became vastly more commoditised and widespread. Some people can’t handle them, and some of them end up in positions of influence, so teaching switched away from them and now students are taught in simpler, imperative languages. Result, there is a general perception that some of these niche tools are exotic, not generally applicable or important, just toys for academics. This isn’t actually true but it’s such a widespread belief that it is self-perpetuating.

This also applies to things like Haskell, ML/OCaml, APL, etc.

On the flip side: programming and IT are male-dominated industries, for no very good reason. This results in masculine patterns of behaviour having profound effects and influences.

So, for instance, languages in the Pascal family have safety as a priority and try to protect programmers from errors, possibly by not allowing them to write unsafe code. A typically masculine response to this is to resent the exertion of oppressive control.

Contrastingly, languages in the BCPL/C/C++ family give the programmer extensive control and require considerable discipline and care to write safe code. They allow programmers to make mistakes which safer languages would catch and prevent.

This has a flip side, though: the greater control potentially permits or offers theoretically higher performance.

This aligns with “manly” virtues of using powerful tools — the appeal of chainsaws, fast cars and motorcycles, big powerful engines, even arguably explicitly dangerous things like knives and guns. Cf. Perl, “the Swiss Army chainsaw”.

Thus, the masculine culture around IT has resulted in people favouring these languages. They’re dangerous in unskilled hands. So, get skilled, then you can access the power.

Of course, again, as Dunning and Kruger teach us, people cannot assess their own skill, and languages which permit bugs that others would trap have been used very widely for three decades or more, often on the argument of performance but actually because of toxic culture. All OSes are written in them; now, as a result, it is a truism that only these languages are suitable for writing OSes.

(Ignoring the rich history of OSes in safer languages — Algol, Lisp, Oberon, perhaps even Mesa, or Pascal in the early Macs.)

If you want fast code, you need a fast language! And Real Men use C, and you want to be a Real Man, don’t you?

Cf. the story of Mel The Real Programmer.

Do it in something low-level, manage your own memory. Programming is a game for the smart, and you must be smart because you’re a programmer, so you can handle it and you won’t drop a pointer or overflow an array.

Result, decades of complex apps tackling arbitrary complex data — e.g. Web browsers, modern office suites — written in C, and decades of software patching and updating trying to catch the legions of bugs. This is now simply perceived as how software works, as normal.

Additionally, in many cases, any possible performance benefits have long been lost due to large amounts of protective code, of error-checking, in libraries and tools, made necessary by the problems and inherent fragility of the languages.

The rebellion against it is only in the form of niche line-of-business app developers doing narrow, specific stuff, who are moving to modern interpreted languages running on top of tens of millions of lines of C written by coders who are only just able to operate at this level of competence and make lots of mistakes.

For people not facing the pressures of commercial releases, there was an era of using safer, more protective compiled languages for in-company apps — Turbo Pascal, Delphi, VB. But that’s fading away now in favour of Java and .NET, “managed” languages running under a VM, with concomitant loss of performance but slight improvement in safety and reliability.

And because this has been widespread for some 2-3 decades, it’s now just _how things are done_. So if someone presents evidence and accounts of vastly better programmer productivity in other tools, decades ago, in things like Lisp or Smalltalk, then these are discounted as irrelevant. Those are not manly languages for manly programmers and so should not be considered. They’re toys.

People in small enough niches continue to use them but have given up evangelising about them. Like Mac users, their comments are dismissed as fanboyism.

So relatively small cultural effects have created immensely strong cultures, dogmas, about what is or isn’t a good choice for certain categories of problem. People outside those categories continue to use some of these languages and tools, while others languish.

This is immensely sad.

For instance, there have been successful hybrid approaches.

OSes written in Pascal derivatives, or in Lisp, or in Smalltalk, now lost to history. As a result, processor design itself has shifted: companies make processors that run C and C-like languages efficiently, and processors that understood richer primitives — lists, or objects — are now historical footnotes.

And languages which attempted to straddle different worlds — such as infix-notation Lisp derivatives, readable and easily learnable by programmers who only know infix-based, imperative languages — e.g. Dylan, PLOT, or CGOL — are again forgotten.

Or languages which developed down different avenues, such as the families of languages based on or derived from Oberon, or APL, or ML. All very niche.

And huge amounts of precious programmer time and effort expended fighting against limited and limiting tools, not well suited to large complex projects, because they simply do not know that there are or were alternatives. These have been crudely airbrushed out, like disappearing Soviet commissars.

“And so successful was this venture that very soon Magrathea itself became the richest planet of all time, and the rest of the galaxy was reduced to abject poverty. And so the system broke down, the empire collapsed, and a long, sullen silence settled over the galaxy, disturbed only by the pen-scratchings of scholars as they laboured into the night over smug little treatises on the value of a planned political economy. In these enlightened days, of course, no one believes a word of it.”

(Douglas Adams)

Sun, Oct. 9th, 2016, 04:13 pm
Switching OSes regularly is good for your brain.

Recycled blog comment, in reply to this post and this tweet, itself a comment on Bill Bennet's blog post.

I couldn't really disagree more, I'm afraid.

I regularly switch between Mac OS X, Linux & Windows. Compared to genuinely different OSes -- RISC OS, Plan 9, Bluebottle -- they're almost identical. There's no such thing as "intuitive" computing (yet) -- it's just what you're most familiar with.

IMHO the problem is that Windows has been so dominant for 25Y+ that its ways are the only ones for which most people have "muscle memory".

There is nothing intuitive about hierarchical filing systems. It's not how real life works. People don't have folders full of folders full of folders. They have 1 level, maybe 2. E.g. a drawer or set of drawers containing folders with documents in. No more levels than that.

The deep hierarchies of 1970s to 1990s computers were a techie thing. They're conceptually abstract for normal folk. Tablets and Android phones show that: people have 1 level of folders and that's enough. The success of MS Office 2007 et seq (which I cordially loathe) shows that hunting through 1 level of tabs on a ribbon is easier for non-techies than layers of menus. Me, I like the menus.

You get used to Windows-isms and if they're taken away or altered, suddenly, it's all weird. But it's not harder, it's just different. The Mac way, even today, is somewhat simpler, and once you learn the new grammar, it's less hassle. Windows has the edge in some things, but surprisingly few, and with the accumulation of cruft like ribbons everywhere, it's losing that, too.

You say Apple's spent 27y hiding stuff. No. That's obviously silly. OS X is only 16y old, for a start. But it's spent 27y doing things differently and you didn't keep up, so when you switched, aaaargh, it's all weird!

OS X is Unix! Trademarked, POSIX certified, the lot. You know Unix? Pop open a terminal, all the usual stuff is there. But it's too much for non-techies, so it's simplified for them. Result, a trillion-dollar company and what PC types call "Mac fanbois". There's a reason – because it really is easier for them. No window management: full-screen apps. No need to remember the meaning of multiple mouse buttons. They're there if you need them, but you can do it with gestures instead.

I learned Macs in 1988 and have used them alongside Windows and Linux for as long as all 3 existed. I use a 29Y old Apple keyboard and a 5-button Dell mouse on my Mac. I use it in a legacy way, with deep folder trees, a few symlinks to find things, and no Apple apps at all. When I borrowed the Mac of a student, set up with everything full-screen on multiple desktops switched between with gestures, all synched with his iPad and iPhone, I was totally lost. He uses it in a totally different way to the way I use mine -- with the same FOSS apps as on my Linux laptops and my dusty unused Windows partitions.

But that flexibility is good. And the fact that they have sold hundreds of millions of iOS devices and Macs indicates that it really is good for people, and they love it. It's not slavish fashion-following: to put a company's surviving and thriving for 40 years down to that is arrant foolishness.

Perhaps you're a car driver. Most of them think that car controls are intuitive. They aren't. They're entirely arbitrary. I mostly switched from motorcycles to cars in 2005, at nearly 40 years old. Motorbike controls -- a hand throttle, because it needs great precision, but a foot gearchange because that doesn't -- still feel far more natural to me, a decade later.

But billions drive cars and find car controls natural and easy.

It's just what you're used to.

It's not Apple's fault, I'm afraid. It's yours. Sorry.

I urge you to exercise your brain and learn new muscle memories. It's worth it. The additional flexibility feels great.
