My octogenarian mum is on her second iPad now, a 2012 iPad 3, the first Retina model of iPad. It’s a decent device, quite high-spec, fast and reliable. It has a lovely sharp 1536×2048 display, a gig of RAM, excellent battery life even for a second-hand device, Siri, the works. It runs iOS 9, version 9.3.5 to be precise.
This is the same version as its predecessor, a much slower non-Retina (768×1024) 2011 iPad 2 with 512 MB of RAM. The iPad 3 feels much quicker although both have dual-core 1GHz ARM CPUs.
Old versions of Skype (from before version 8) can’t connect any more… and the new versions only run on iOS 10 or above.
Tablet sales are slackening off. Perhaps such moves are intentional as a way to drive sales of newer models, when the old devices are still perfectly functional.
But there is an odd little wrinkle. The iPhone version of Skype 8 works on iOS 9, but the tablet version doesn’t. And an iPad is really just a big iPhone, so it can run iPhone apps. In the early days of the iPad, when there were few iPad-native apps, iPad owners routinely ran iPhone apps, which appeared huge, with big chunky controls. But they worked.
If you coax the iPhone version of Skype onto your out-of-support iPad, you will still be able to connect, and both make and receive calls and messages.
I couldn’t find any instructions online. There are a couple of wordless, agonizingly slow YouTube videos showing how to do it – if you read French.
So I thought I’d describe how I did it.
The basic procedure is this: we will use a specific old version of iTunes on Windows to add Skype for iPhone to the Apple account used on the iPad, and then use the App Store on the iPad itself to install it from that account’s purchased apps.
What you will need:
- an iPad that can’t run anything newer than iOS 9
- a working Apple ID
- a Windows PC on which you can install an old version of iTunes
  - ideally one which didn’t have iTunes on it already
- a cable to connect them
Just to make this harder, Apple removed iTunes’ ability to install and manage apps on iOS devices in version 12.7. This method won’t work with any current version of iTunes, so you’ll need to install a special, older version -- iTunes 12.6.3, the last version with the App Store functionality. If you already have a newer version of iTunes installed, you’ll need to remove it first. Old versions of iTunes can’t open the libraries of newer versions. That means you’ll lose access to your iTunes library.
So make sure you have a backup, export your music/photos/videos and any other content to somewhere else and make sure you have a safe copy.
Then download iTunes from here: http://osxdaily.com/2017/10/09/get-itunes-12-6-3-with-app-store/
The other option is to make a special new Windows user account, just for this process. You’ll still need to downgrade iTunes, at least temporarily, but if you work in a dedicated one-shot user account, the new account won’t have access to your library, so you won’t lose it.
If you don’t have anything in your iTunes library, or you don’t normally use iTunes at all – like my mother, or indeed me, as I sync my iPhone to my iMac – then the easiest way to proceed is to erase your entire iTunes library and config files.
The procedure is as follows:
- Install the last version of iTunes with the App Store.
- Log in to the same Apple ID as the one used on the iPad.
- In iTunes, find Skype for iPhone.
- Ask to install it on your device. It’s free, so no payment method is required.
- Now, Skype for iPhone is on the inventory of your Apple ID.
(At this point, you can connect the iPad and try to sync it. It won’t install Skype for you, but you can try.)
- Now, eject your iPad using the button next to its icon in iTunes. After that, you’re done with the PC.
- Now switch over to the iPad and open the App Store.
- Go to the “Purchased Apps” tab.
- Note: you might need to switch views. There is a choice of “iPad apps” and “iPhone apps”. Since we’re looking for Skype for iPhone, it should appear under iPhone apps, not under iPad apps.
- If you can’t see it, you can also search for “Skype for iPhone” – capitals don’t matter, but the exact phrase will help.
- When you find it, you should see a little cloud logo next to it. That means it’s on your account, but not on this iPad.
- Tap the “install” button.
- The App Store should tell you that the latest version will not run on your device – it needs iOS 10 or newer, which is why we are here. Crucially, though, it should offer to install the latest version which will work on your device. Say yes to this.
That is about it. It should install the iPhone version of Skype onto your iPad 2 or 3. You will see a small circle in the bottom right corner of the screen. This toggles the magnification: normally, the app is pixel-doubled to fill most of the screen, and the circle says “1×”. Tap it to turn off the scaling and the app will shrink down to phone size; the circle will then say “2×”, and tapping it again returns to double size.
For me, it worked and I could make and receive calls. However, I could not send video, only receive it.
Sadly, if you use large text – my mum’s eyesight is failing – the phone app is almost unusable due to the text size, so we have sold on the iPad and bought a newer, fifth-generation model. Now she is struggling with iOS 12 instead, which is a major step up in complexity from iOS 9. If you are attempting to do this for a technophobe or an elderly relative, you might consider switching to FaceTime, Google Hangouts, or something else, as newer iPads are significantly less easy to use.
A poorly-worded question on Quora links to a rather interesting (if patchily-translated) Chinese discussion of the Fuchsia OS project. It suckered me into answering. But so as to keep my answer outside of Quora...
Fuchsia is an incomplete project. It is not yet clear what Google intends for it. It is probably intended as a replacement for Android.
Android is a set of custom layers on top of an old version of the Linux kernel. Android apps run on a derivative of the Java virtual machine.
This means that Android apps are not strictly native Linux applications.
Linux is a Unix-like OS, written in C. C is a simple programming language. It has many design defects, among which is that it does not have strong typing, meaning that it is not type-safe. You can declare a variable as being a long floating-point number, and then access one byte of it as if it were a string and replace what looks like the letter “q” with the letter “r”. But actually it wasn’t a “q”, it was the value 113, and now you’ve put 114 in there. What was the number 42.37428043 is now 42.37428143, all because you accidentally treated a floating-point number as a string.
[Disclaimer: this is a very poorly-described hypothetical instance and I am aware it wouldn't really work like that. Consider it figurative rather than literal.]
Better-designed programming languages prevent this. C just lets you, without an error.
It also performs little to no checking on memory accesses. E.g. if you declare an array of 30 numbers, C will happily let you read, or worse still write, the 31st entry, or the 32nd, or the 42nd, or the 375324564th.
The result is that C programs are unsafe because of the language design. It is essentially impossible to write safe programs in C.
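To make the figurative example above concrete, here is a small sketch in Python rather than C (Python is simply convenient to run; the value 42.37428043 comes from the text above, and everything else is illustrative). It shows what silently corrupting one byte of a floating-point number does, and how a bounds-checked language refuses the out-of-range write that C permits:

```python
import struct

# Reinterpret a double as raw bytes, "overwrite one letter", and look again.
original = 42.37428043
raw = bytearray(struct.pack("<d", original))
raw[0] = (raw[0] + 1) % 256              # corrupt one byte, as C would allow
corrupted = struct.unpack("<d", bytes(raw))[0]
print(original, "->", corrupted)         # the value has silently changed

# A bounds-checked language refuses the out-of-range write C permits:
numbers = [0] * 30
try:
    numbers[41] = 1                      # C would happily write here
except IndexError:
    print("out-of-range write refused")
```

In C, both operations would succeed without any warning at all; that is the whole point of the passage above.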
However, all Unix-like OSes are written in C. The entire kernel is in C, and all of the tools, from the “ls” command to the text editors to the programs that read and write configuration files and set up the computer, all in C. All in a language that has no way to tell if it’s reading or writing text or integer numbers or floating point numbers or hexadecimal or a binary-encoded image file. A language which won’t tell you if you slip up and accidentally do the wrong thing.
A few geniuses can handle this. A very, very few. People like Dennis Ritchie and Ken Thompson, who wrote Unix.
Ordinary humans can’t.
But unfortunately, Unix caught on, and now most of the world runs on it.
Later derivatives of the Unix operating system gradually fixed this. First Plan 9, which imposed much stricter limits on how C worked, and then tried to replace it with a language called Alef. Then Plan 9 led to Inferno, which largely replaced C with a safer language called Limbo.
But they didn’t catch on.
One of the leading architects of those operating systems was a programmer called Rob Pike.
He now works for Google, and one of his big projects is a new programming language called Go. Go draws on the lessons of Plan 9, Alef and Limbo.
Much of Fuchsia is written in Go instead of C.
Thus, although it has many other changes as discussed in the article you link to, it should in theory be fundamentally safer than Unix, being immune to whole categories of programming errors that are inherent to Unix and all Unix-like OSes.
I had to point out a couple of issues...
* The OS that came with it... The original 'Strads came with _two_: Digital Research's DOS Plus: https://en.wikipedia.org/wiki/DOS_Plus
... _and_ MS-DOS. DOS Plus was very obscure -- the only other machine I know to come with it was the Acorn BBC Master 512 -- but it was a forerunner of DR-DOS, which was a huge success and much later became open source.
* That isn't WordStar you show. Well, it sort of is, but it's not _the_ WordStar that you correctly describe as the leading DOS wordprocessor until WordPerfect came along. Amstrad bundled a special custom wordprocessor called WordStar 1512. This was a rebadged version of WordStar Express, which, although it came from MicroPro Corp, is in fact totally unrelated to the actual WordStar program. The rumour was that WordStar Express was a student project, written in Modula-2. It is totally incompatible with actual WordStar, using different keystrokes, different file formats, everything. But it did allegedly get the student a job! It didn't sell, so Amstrad got it very cheap. https://www.wordstar.org/index.php/wordstar-history
* WordStar was originally written for CP/M and ported to MS-DOS, meaning that it didn't support MS-DOS's more advanced features, such as subdirectories, very well. MicroPro flailed around a bit, including developing WordStar 2000, another unrelated program that looked similar but used a totally different and incompatible user interface, thus alienating all the existing users.
(And WordStar users are almost fanatically loyal. George R R Martin is one -- all of "A Game of Thrones" was written in WordStar!)
After annoying its users for so long that various companies cloned the original program, MicroPro eventually did something marginally sensible. It bought the leading clone, which was called NewWord, and rebadged it as "WordStar 4," even though it wasn't derived from WordStar 3 at all.
So what Doris had there is a shoddy alternative app from MicroPro, and a better 3rd party alternative that in fact _became_ the real product.
* Locomotive BASIC 2 -- this was sort of a sop, a bone thrown to Locomotive Software who did almost all the original Amstrad CPC and PCW 8-bit business apps. BASIC 2 is pretty much totally unrelated to, and incompatible with, the ROM BASIC in the CPC range, or Locomotive's Mallard BASIC for the PCW, but it was written by the same company. It was the only high-level language built for PC GEM, I believe. It was sold on nothing other than the Amstrads and so disappeared into obscurity.
Rather than BASIC 2 and the fairly awful WordStar 1512, Amstrad ought to have offered LocoScript PC, the DOS version of the Amstrad PCW's bundled wordprocessor. This was a very good app in its day, one of the most powerful DOS wordprocessors in its time, with advanced font handling and very limited WYSIWYG support.
* No RAM expansion in the 1640. That's a plain mistake: no expansion is possible. The 8086 can only address 1 MB of RAM, and in the PC design the upper 384 kB of that space is filled with ROM and I/O space. 640 kB is all an 8086 PC can take, so there *is* no possible expansion, and thus no point in fitting slots for it.
Apart from these cavils, a good video that I enjoyed!
Someone at $JOB said that they really wished that rsync could give a fairly close estimate of how long a given operation would take to complete. I had to jump in...
Be careful what you wish for. Especially that "close" in there, which is a disastrous requirement. Rsync can't easily do that, because the way it works is comparing files on source and destination block-by-block to work out whether they need to be synced or not.
To give an estimate, it would have to do that twice, and thus its use would be pointless. Rsync is not a clever copy program. Rsync exists to sync 2 files/groups of files without transmitting all the data they contain over a slow link; to do the estimate you ask for would obviate its raison d'être.
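The block-comparison idea can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions, not rsync's actual protocol; real rsync uses a rolling weak checksum plus a strong checksum so it can match blocks at arbitrary offsets, not just at fixed positions:

```python
import hashlib

def changed_blocks(src: bytes, dst: bytes, block_size: int = 4096) -> list:
    """Return the indices of fixed-size blocks that differ between src
    and dst, i.e. the blocks a naive syncer would need to transfer."""
    length = max(len(src), len(dst))
    diffs = []
    for i in range(0, length, block_size):
        a = src[i:i + block_size]
        b = dst[i:i + block_size]
        # Compare checksums, as you would if the two files were on
        # opposite ends of a slow link.
        if hashlib.sha1(a).digest() != hashlib.sha1(b).digest():
            diffs.append(i // block_size)
    return diffs
```

Note that even this toy has to read both files in full just to find the differences, which is the author's point: producing an up-front estimate would mean doing that work twice.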
If it just looked at file sizes, the estimate would be wildly pessimistic, making the tool far less attractive; that would have led to it not being used and never becoming a success.
Secondly, by comparison: people clearly asked for this from the Windows developers, and commercial s/w being what it is, they got it.
That's how on Win10 you get a progress bar for all
file operations. Which means deleting a 0-byte file takes as long as deleting a 1-gigabyte file: it has to simulate the action first, in order to show the progress, so everything now has a built-in multi-second-long delay (far longer than the actual operation) so it can display a fancy animated progress bar and draw a little graph, and nothing happens instantly, not even the tiniest operations.
Thus a harmless-sounding UI request completely obviated the hard work that went into optimising NTFS, which for instance stores tiny files inside
the file system indices so they take no disk sectors at all, meaning less head movement too.
All wasted because of a UI change.
Better to have no estimate than a wildly inaccurate estimate or
an estimate that doubles the length of the task.
Yes, some other tools do give a min/max time estimate.
There are indeed far more technically-complex solutions, like...
(I started to do this in pseudocode but I quickly ran out of width, which tells you something)
* start doing the operation, but also time it
* if the elapsed time exceeds some given interval:
* display a bogus progress indicator while you work out an estimate
* then start displaying the real progress indicator
* while continuing the operation, which means your estimate is already stale
* keep adjusting the estimate to improve its accuracy
* until the operation is complete
* then show the progress bar hitting the end
* which means you've now added a delay at the end
So you get a progress meter throughout which only shows for longer operations, but it delays the whole job.
This is what Windows Vista did, and it was a pain.
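The scheme in that pseudocode might look something like this in Python. A sketch only, with a made-up work loop and a deliberately crude extrapolation; it is not a claim about what Vista actually did:

```python
import time

def run_with_estimate(items, work, patience=0.5):
    """Do `work` on each item; once `patience` seconds have passed,
    start printing a running estimate extrapolated from the average
    throughput so far. Returns True if an estimate was ever shown."""
    start = time.monotonic()
    estimated = False
    total = len(items)
    for done, item in enumerate(items, 1):
        work(item)
        elapsed = time.monotonic() - start
        if elapsed > patience:
            # Re-estimate on every iteration: assume the remaining
            # items proceed at the average rate seen so far.
            remaining = (elapsed / done) * (total - done)
            print(f"{done}/{total} done, about {remaining:.1f}s left")
            estimated = True
    return estimated
```

Even this trivial version shows the trade-off: the estimate is only as good as the assumption that past throughput predicts future throughput, which for mixed file sizes it usually doesn't.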
And as we all know, for any such truism, there is an XKCD for it: https://xkcd.com/612/
That was annoying. So in Win10 someone said "fix it". Result, it now takes a long time to do anything at all, but there's a nice progress bar to look at.
So, yeah, no. If you want a tool that does its job efficiently and as quickly as possible, no, don't
try to put a time estimate in it.
Non-time-based, non-proportional time indicators are fine.
E.g. "processed file XXX" which increments, or "processed XXX $units_of_storage"
But they don't tell you how long it will take, and that
annoys people. They ask "if you can tell me how much you've done, can't you tell me what fraction of the whole that is?" Well, no, not without doing a potentially big operation before beginning work
which makes the whole job bigger.
And the point
of rsync is that it speeds up work over slow links.
Estimates are hard. Close
estimates are very
hard. Making the estimate makes the job take much
longer (generally, at a MINIMUM twice
as long). Poor estimates are very annoying.
So, don't ask for them.
TL;DR Executive summary (which nobody at Microsoft was brave enough to do):
This was one of those things that for a long time I just assumed everyone knew... then it has become apparent in the last ~dozen years (since Vista) that lots of people didn't know, and indeed, that this lack of knowledge was percolating up the chain.
The time it hit me personally was upgrading a customer's installation of MS Office XP to SR1. This was so big, for the time -- several hundred megabytes, zipped, in 2002 and thus before many people had broadband -- that optionally you could request it on CD.
The CD contained a self-extracting Zip that extracted into the current directory. So you couldn't run it directly from the CD. It was necessary to copy it to the hard disk, temporarily wasting ¼ GB or so, then run it from there.
The uncompressed files would have fitted on the CD. That was a warning sign: several people had failed to pay attention to detail, and nobody checked.
(Think this doesn't matter? The tutorial for Docker instructs you to install a compiler, then build a copy of MongoDB (IIRC) from source. It leaves the compiler and the sources in the resulting container. This is the exact same sort of lack of attention to detail. Deploying that container would waste a gigabyte or so per instance, and thus waste space, energy, machine time, and cause over-spend on cloud resources.
All because some people just didn't think. They didn't do their job well enough.)
So, I copied the self-extractor, I ran it, and I started the installation.
A progress bar slowly crept up to 100%. It took about 5-10 minutes. The client and I watched.
When it got to 100%... it went straight back to zero and started again.
This is my point: progress bars are actually quite difficult.
It did this seven times.
The installation of a service release took about 45 minutes, three-quarters of an hour, plus the 10 minutes wasted because an idiot put a completely unnecessary download-only self-extracting archive onto optical media.
The client paid his bill, but unhappily, because he'd watched me
wasting a lot of expensive time because Microsoft was incompetent at:
 Packaging a service pack properly.
 Putting it onto read-only media properly.
 Displaying a progress bar properly.
Of course it would have been much easier and simpler to just distribute a fresh copy of Office, but that would have made piracy easier. This product is proprietary software and one of Microsoft's main revenue-earners, so it's understandable that they didn't want to do that.
But if the installer had just said:
Installation stage x/7:
That would have been fine. But it didn't. It went from 0 to 100%, seven times over, probably because first the Word team's patch was installed, then the Excel team's patch, then the Powerpoint team's patch, then the Outlook team's patch, then the Access team's patch, then the file import/export filters team's patch, etc. etc.
Poor management. Poor attention to detail. Lack of thought. Lack of planning. Major lack of integration and overview.
But this was just a service release. Those are unplanned; if the apps had been developed and tested better, in a language immune to buffer overflows and which didn't permit pointer arithmetic and so on, it wouldn't have been necessary at all.
But the Windows Vista copy dialog box, as parodied in XKCD -- that's taking orders from poorly-trained management who don't understand the issues, because someone didn't think it through or explain it, or because someone got promoted to a level they were incompetent for. https://en.wikipedia.org/wiki/Peter_principle
These are systemic problems. Good high-level management can prevent them. Open communications, where someone junior can point out issues to someone senior without fear of being disciplined or dismissed, can help.
But many companies lack this. I don't know yet if $DAYJOB has sorted these issues. I can confirm from bitter personal experience that my previous FOSS-centric employer suffered badly from them.
Of course, some kind of approximate estimate, or incremental progress indicator for each step, is better than nothing.
Another answer is to concede that the problem is hard, and display a "throbber" instead: show an animated widget that shows something is happening, but not how far along it is. That's what the Microsoft apps team often does now.
Personally, I hate it. It's better than nothing but it conveys no useful information.
Doing an accurate estimator based on integral speed tests is also significantly tricky and can slow down the whole operation. Me personally, I'd prefer an indicator that says "stage 6 of 15, copying file 475 of 13,615."
I may not know which files are big or small, which stages will be quick or slow... but I can see what it's doing, I can make an approximate estimate in my head, and if it's inaccurate, well, I can blame myself and not the developer.
And nobody has to try to work out what percent of an n-stage process, with o files of p different sizes, they're at. That's hard for someone to work out, and it's possible that nobody can tell them the correct number of files or sizes... so you can get progress bars that go to 87% and then suddenly end, or that go to 106%, or that go to 42% and then sit there for an hour, and then do the rest in 2 seconds.
I'm sure we've all seen all of those. I certainly have.
From a Quora answer
Windows 10 is Windows NT version 10. Windows NT copied the patterns of MS-DOS, because DOS was the dominant OS when NT was launched in 1993.
DOS copies its disk assignment methods from Digital Research CP/M, because DOS started out as a copy of CP/M: https://en.wikipedia.org/wiki/CP/M
What Microsoft bought was originally called QDOS, the Quick and Dirty OS, from Seattle Computer Products: https://en.wikipedia.org/wiki/Seattle_Computer_Products
The way IBM PC-compatibles assign disk drives is copied from the way the IBM PC running PC DOS assigned them. PC DOS is IBM’s brand of MS-DOS. See the answer about Apricot computers for how (some) non-IBM-compatible DOS computers assign drive letters.
The way that CP/M and MS-DOS originally assigned drive letters was simple.
The drive you booted from was the first, so it was called A. It didn’t matter what kind of drive it was. But floppy drives were expensive and hard drives were very expensive, so in the late 1970s, when this stuff was standardized, most machines only had a floppy drive or two.
If you only had one drive, which was common, then the OS called it both A and B. This was so that you could copy files from one disk to another; otherwise there would be no way to do it.
So, you copied from A: to the virtual drive B:, and the OS prompted you to swap disks as necessary.
Floppy drives got cheaper, and it became common to have 2. So, the one you booted from was A, and the second drive was B.
So far, so simple. If you were rich and added more floppy drives, you got A, B, C, D etc. and if you were lucky enough to have good firmware that let you boot from any of them, the one you booted off was A and the rest were simply enumerated.
It is common to read that "certain drive letters are reserved for floppies". This is wrong. Nothing was reserved for anything.
If you had a floppy and a hard disk, then if you booted off the floppy, the floppy drive was A and the hard disk was B. If you booted off the hard disk — and early hard disks were often not bootable — then the hard disk became A and the floppy became B.
You didn't need the virtual drive trick any more; to copy from one floppy to another, you copied from floppy to hard disk, then swapped floppies, then copied back.
However, having drives change letter depending on which you booted from was confusing — again, see the Apricot comment — so later firmware started changing this. So, for instance, in the Amstrad PCW range, the last new CP/M computers made, Amstrad hard-wired the drive letters: https://en.wikipedia.org/wiki/Amstrad_PCW
The first floppy was A. The second, if you had one, was B. And the rest of the machine's RAM aside from the 64 kB that CP/M used was made into a RAMdisk called drive M: "M" for Memory.
The IBM PC hard-wired some letters too. Floppy 1: A. Floppy 2: B, even if it wasn't there. Partition 1 on hard disk 1: C. Partition 1 on hard disk 2: D. Partitions 2 and up on hard disk 1: E, F, etc. Partitions 2 and up on hard disk 2: G, H, etc.
Multiple partitions were very common because, up to and including version 3.3, DOS only supported partitions of up to 32 MB. So, for instance, in 1989 I installed an IBM PS/2 Model 80 with a 330 MB hard disk as a server running the DOS-based 3Com 3+Share NOS: https://en.wikipedia.org/wiki/3%2BShare
It had hard disk partitions lettered C, D, E, F, G, H, I, J, K, L and M. (!)
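That lettering scheme can be expressed as a little function. This is my own reconstruction for illustration, assuming the classic ordering described above (first primary partition of each disk, then the remaining partitions disk by disk); it is not a reference implementation of any DOS version:

```python
import string

def dos_drive_letters(partitions_per_disk):
    """Assign classic DOS-style drive letters, starting at C.

    `partitions_per_disk` is a list of partition counts, one entry per
    hard disk. A and B are skipped, as they belong to the floppies."""
    letters = iter(string.ascii_uppercase[2:])   # C onwards
    assigned = {}
    # First pass: partition 1 of each hard disk.
    for disk, count in enumerate(partitions_per_disk, 1):
        if count >= 1:
            assigned[(disk, 1)] = next(letters)
    # Second pass: the remaining partitions, disk by disk.
    for disk, count in enumerate(partitions_per_disk, 1):
        for part in range(2, count + 1):
            assigned[(disk, part)] = next(letters)
    return assigned
```

With two disks of two partitions each, this yields C and E on disk 1 and D and F on disk 2, matching the scheme above; a single disk with 11 partitions comes out lettered C through M, like that PS/2 Model 80.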
DOS has a setting called LASTDRIVE. This tells it how many drive letters to reserve for assignment. Each takes some memory and you only had 640 kB to use, no matter how much was fitted. https://en.wikipedia.org/wiki/Conventional_memory
The default value for LASTDRIVE is E. Thus, the rival Novell NetWare OS used the first drive letter after that as the "network drive", holding the login command and so on: F. https://en.wikipedia.org/wiki/NetWare
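For example, a typical CONFIG.SYS line to raise the limit (the value Z here is just an illustration; each extra letter cost a little conventional memory):

```
REM Reserve drive letters A: through Z:
LASTDRIVE=Z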
So, drive letters are not "reserved". They were originally assigned sequentially starting with the boot drive, and then by hardware ID number, and later by that and partition number, according to a slightly complex scheme that several people have linked to.
It is a convention
that A was the first floppy and C was the first hard disk, and everything else was assigned at boot time.
A response to a Reddit question
I can only agree with you. I have blogged and commented enough about this that I fear I am rather unpopular with the GNOME developer team these days. :-(
The direct reason for the sale is that, in founder Mark Shuttleworth's view, Ubuntu's bug #1 ("Microsoft has a majority market share") is now closed.
His job is done. He has helped to make Linux far more popular and mainstream than it was. Due to Ubuntu being (fairly inarguably, I'd say) the best desktop distro for quite a few years, all the other Linux vendors [disclaimer: including my employer] switched away from desktop distros and over to server distros, which is where the money is. The leading desktop is arguably now Mint, then the various Ubuntu flavours. Linux is now mainstream and high-quality desktop Linuxes are far more popular than ever and they're all freeware.
Shuttleworth used an all-FOSS stack to build Thawte. When he sold it to Verisign in 1999, he made enough that he'd never need to work again. Ubuntu was a way for Shuttleworth to do something for the Linux and FOSS world in return.
Thus, Shuttleworth is preparing Ubuntu for an IPO and flotation on the public stock market. As part of this, the company asked the biggest techie community what they'd like to see happen: https://news.ycombinator.com/item?id=14002821
The results were resounding. Drop all the Ubuntu-only projects and switch back to upstream ones. Sadly, this mostly means Red Hat-backed projects, as it is the upstream developer of systemd, PulseAudio, GNOME 3, Flatpak and much more.
Personally I am interested in non-Windows-like desktops. I think the fragmentation in the Linux desktop market has been immensely harmful, has destroyed the fragile unity (pun intended) that there was in the free Unix world, and the finger of blame can be firmly pointed at Microsoft, which did this intentionally. I wrote about this here: https://www.theregister.co.uk/Print/2013/06/03/thank_microsoft_for_linux_desktop_fail/
The Unity desktop came out of that, and that was a good thing. I never liked GNOME 2 much and I don't use MATE. But Unity was a bit of a lash-up behind the scenes, apparently, based on a series of Compiz plugins. It was not super stable and it was hard to maintain. The unsuccessful Unity-2D fork was killed prematurely (IMHO), whereas Unity 8 (the merged touchscreen/desktop version) was badly late.
There were undeniably problems with the development approach. Ubuntu has always faced problems with Red Hat, the 800 lb gorilla of FOSS. The only way to work with a RH-based project is to take it and do as you're told. Shuttleworth has written about this: https://www.markshuttleworth.com/archives/654
(See the links in that post too.)
Also, some contemporary analysis: https://www.osnews.com/story/24510/shuttleworth-seigo-gnomes-not-collaborating/
I am definitely not claiming that Ubuntu always does everything right! Even with the problems of working with GNOME, I suspect that Mir was a big mistake and that Ubuntu should have gone with Wayland.
Cinnamon seems to be sticking rather closer to the upstream GNOME base for its different desktop. Perhaps Unity should have been more closely based on GNOME 3 tech, in the same way.
But IMHO, Ubuntu was doing terrifically important work with Unity 8, and all that has come to nothing. Now the only real convergence efforts are the rather half-hearted KDE touchscreen work and the ChromeOS-on-tablet work from Google, which isn't all-FOSS anyway TTBOMK.
I am terribly disappointed they surrendered. They were so close.
I entirely agree with you: Unity was _the_ best Linux desktop, bar none. A lot of the hate was from people that never learned to use it properly. I have seen it castigated for lacking stuff that is basic built-in functionality that people never found how to use.
In one way, Unity reminded me of OS/2 2.0: "a better DOS than DOS, a better Windows than Windows." And it *was*! Unity was a better Mac OS X desktop than Mac OS X. I'm typing on a Mac now and there are plenty of things it can't do that Unity could. Better mouse actions. *Far* better keyboard controls.
I hope that the FOSS forks do eventually deliver.
Meantime, I reluctantly switched to Xfce. It's fine, it works, it's fast and simple, but it lacks functionality I really want.
Originally posted by ccdesan
. Reposted by liam_on_linux
at 2019-01-23 12:31:00.
I need to go shopping, that's all there is to it. (These were originally published in MAD Magazine #515, June 2012 - Writer: Scott Maiko, Artist: Scott Bricher)
Another recycled Quora answer.
The main reason is fairly simple.
Windows was designed to be easy to use, and to be compatible with older Microsoft operating systems, notably 16-bit Windows and DOS.
So, for example, by design, it treats any file with certain extensions (.EXE, .COM, .CMD, .BAT, etc.) as executable and will try to run them.
In contrast, Unix systems do not do this. They will not run even executable files unless they are specifically _marked_ as executable _and_ you have permissions to run them, and by default, Unix does not look in the current directory for executables.
This makes Unix less friendly, but more secure.
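A quick sketch of that difference in Python, using only the standard library: a freshly created file is not executable until its execute bit is set, whatever its name or extension happens to be. (The `chmod`-style permission twiddling here is purely for illustration.)

```python
import os
import stat
import tempfile

# Unix will not run a file unless its execute bit is set, regardless of name.
fd, path = tempfile.mkstemp(suffix=".sh")
os.close(fd)

runnable_before = os.access(path, os.X_OK)            # fresh file: no
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)  # like `chmod u+x`
runnable_after = os.access(path, os.X_OK)             # now: yes

os.remove(path)
print(runnable_before, "->", runnable_after)
```

Contrast Windows, where the `.sh`-equivalent extensions (.EXE, .BAT, etc.) alone are enough to make the shell treat a file as runnable.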
Microsoft also made other mistakes. For instance, it wanted to promote Internet Explorer to prevent Netscape getting control of the nascent browser market. To do this, it bundled IE with all copies of Windows. This was challenged in court as anti-competitive — which it was — and a demonstration was staged, in court, by intellectual property expert Lawrence Lessig, showing that IE could be uninstalled from Windows.
To counter this, MS tied IE in more deeply. So, for instance, Windows 98 has a multi-threaded Explorer, based in part on IE code. Window contents are rendered to HTML and IE then displays that content.
This means that all a hostile party has to do is embed a virus into a suitable image file, such as an icon or a folder wallpaper, and IE will render it. Exploit that IE process and you own the computer.
I think our current style of rich, full-function, "power user" OSes is really starting to show signs of going away. (Yes, it's another of those handwavey sort of big-picture things.)
(Actually I should be drafting a FOSDEM talk about this right now, but I'm having a £1.50 draught beer & on FB instead. Gotta <3 České dráhy.)
Kids — i.e. in the Douglas Adams sense, anyone under about 35 — are more used to phones and tablets. The next billion people to come online will know nothing else.
I reckon OSes will become more like Android, iOS, and ChromeOS — self-updating, without a "desktop" or much in the way of local storage or rich window management (for instance, see what's happening to GNOME), and fairly useless unless connected to the Internet and various invisible servers off in the "cloud" somewhere.
Some "apps" will run locally, some will be web apps, some will be display sessions on remote servers. There will be little visible distinction.
We'll have no local admin rights, just the ability to put on sandboxed apps that only interact via the online connection. No local storage, or just a cache. No software updates; the OS will handle that. No easy way to see what apps are running or where.
What'll drive this will be sky-rocketing costs of supporting rich local traditional OSes.
It will have the side-effect of blurring the lines between a workstation, a laptop, a tablet and a phone.
For some current examples, see the Red Hat Atomic Workstation project, Endless OS, and SUSE Kubic MicroOS. Read-only root FS, updates via a whole-system image pushed out periodically. Only containerised apps are allowed: there's not even a local package manager.
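As an illustration (not tied to any one project in particular), this is roughly how day-to-day administration looks on an image-based, read-only-root system built on rpm-ostree; the commands below are real rpm-ostree and Flatpak invocations, but the exact tooling varies by distribution:

```shell
# Inspect the currently deployed OS images (more than one can be
# kept, so a bad update is undone by rebooting into the old image)
rpm-ostree status

# Fetch and stage a new whole-system image; it becomes active on
# the next reboot -- there is no package-by-package upgrade
rpm-ostree upgrade

# Applications are installed as sandboxed Flatpaks, not packages
# (org.mozilla.firefox is just one example of a Flatpak app ID)
flatpak install flathub org.mozilla.firefox
```

Note there is no `apt` or `dnf` step anywhere: the OS is a sealed artifact, and apps live in their own sandboxes on top of it.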
(Adapted from a Quora answer.)
OS/2 1.x was a clean-sweep, largely legacy-free OS with only limited backwards compatibility with DOS.
OS/2 2.x and later used VMs to do the hard stuff of DOS emulation, because they ran on a chip with hardware-assisted DOS VMs: the 80386’s Virtual86 mode.
NeXTstep was a Unix. It predated FreeBSD, but it was based on the same codebase: BSD 4 Unix. It “only” contained a new display layer, and that itself was based on existing code — Adobe PostScript — and the then-relatively-new technique of object-oriented development. Still substantial achievements, but again, built on existing code, and with no requirement for backwards compatibility.
BeOS was a ground-up new OS which wasn’t backwards or sideways compatible with anything else at all.
NT is based on OS/2 3.x, the planned CPU-independent portable version, with a lot of design concepts from DEC VMS incorporated, because it had the same lead architect, Dave Cutler. Again, the core NT OS isn’t compatible with anything else. This is rarely understood. NT is not a Win32-compatible kernel. NT isn’t compatible with anything else, including VMS. It’s something new. But NT supports personalities, which are like emulation layers running on top of the kernel. When NT shipped, it included three: OS/2, POSIX and Win32. OS/2 is deprecated now, POSIX has developed into the Linux subsystem, and Win32 is still there, now in 64-bit form.
The point is, none of these OSes were enhanced versions of anything else, and none were constrained by compatibility with existing drivers, extensions, applications, or anything else.
Apple tried to do something much, much harder. It tried to create a successor OS to a single-user, single-tasking (later cooperatively-multitasking, and not very well), OS for the 68000 (not something with hardware memory protection, like the 68030 or 68040), which would introduce those new features: pre-emptive multitasking, virtual memory, memory protection, integrated standards-based networking, etc.
All while retaining the existing base of applications, which weren’t written or designed or planned for any of this. No apps == no market == no use.
Apple took on a far harder project than anyone else, and arguably, with less experience. And the base hardware wasn’t ready for the notion of virtual machines yet.
It’s a great shame it failed, and the company came relatively close — it did have a working prototype.
It’s often said that Apple didn’t take over NeXT, nor did it merge with NeXT — in many important ways, NeXT took over Apple. Most Apple OS developers and project managers left, and were replaced by the NeXT team.
The NeXT management discarded Copland and most Apple technologies — OpenDoc, OpenTransport, GameSprockets, basically everything except QuickTime. It took some very brave, sweeping moves. It took the existing classic MacOS APIs — which weren’t really planned or designed, but just evolved over nearly 1½ decades — and cut out everything that wouldn’t work on a clean, modern, memory-managed, multitasking OS. The resulting cut-down, cleaned-up API was called “Carbon”. This was presented to developers as what they had to target if they wanted their apps to run on the new OS.
Alternatively, they could target the existing, far cleaner and richer NeXT API, now called “Cocoa”.
The NeXT team made no real attempt to be compatible with classic MacOS. Instead, it just ran all of classic MacOS inside a VM — by the timeframe that the new OS was targeting, machines would be high-enough spec to support a complete classic MacOS environment in a window on top of the Unix-based NeXTstep, now rebadged as “Mac OS X”. If you wanted your app to run outside the VM, you had to rebuild for “Carbon”. Carbon apps could run on both late versions of classic MacOS and on OS X.
This is comparable to what NT did: it offered a safe subset of the Win32 APIs inside a “personality” on top of NT, and DOS VMs with most of Win16.
It was a brave move. It’s impressive that it worked so well. It was a fairly desperate, last-ditch attempt to save the company and the platform, and it’s easier to make big, brave decisions when your back is against the wall and there are no alternatives... especially if the mistakes that got you into that corner were made by somebody else.
A lot of old Apple developers left in disgust: people who had put years of work into entire subsystems and APIs that had been thrown in the trash. Some 3rd party developers weren’t very happy, either — but at least there was a good path forwards now.
In hindsight, it’s clear that Apple did have an alternative. It had a rich, relatively modern OS, upon the basis of which it could have moved forwards: A/UX. This was Apple’s Unix for 680x0, basically done as a side project to satisfy a tick-box for US military procurement, which required Unix compatibility. A/UX was very impressive for its time — 1988, before Windows 3.0. It could run both Unix apps and classic MacOS ones, and put a friendly face on Unix, which was pretty ugly in the late 1980s and early 1990s.
But A/UX was never ported to the newer PowerPC Macs.
On the other hand, the NeXT deal got back Steve Jobs. NeXTstep also had world-beating developer tools, which A/UX did not. Nor did BeOS, the other external alternative that Gil Amelio-era Apple considered.
No Jobs, no NeXT dev tools, and no Apple today.