Sunday, September 8, 2013

She'll Be Right, Mate!

Dateline - Sydney, 8th September, 1953

Awoke this morning to grey and overcast skies - the clear and sunny weather of recent weeks has gone - and the realization that we are Under New Management. Changes Will Have to be Made if we are to Open For Business.

The first order of business is to get a haircut. My hair is too long - it slightly covers my ears - and it's going to cost a fortune if I use that much Brylcreem (and that's another thing - Her Indoors is going to have to buy some antimacassars). Then I shall have to see my tailor - it will be best to get in quickly, before demand drives the price of brown tweed up through the roof. Something with a waistcoat, I fancy - and while I await my suit, I shall have to get the little woman to let down the hems of my existing trousers and create some turnups, in line with the new fashion. I shall need to get a couple of new hats, too.

I'm going to have to change careers, of course - there will be no demand for computer security in Australia and all that work will be in countries that have decided to build telecommunications infrastructure. The German car will have to go, too - it's politically incorrect, now. A Holden in every garage, that's the motto! And I hope that we'll soon be able to take a break for a short holiday - perhaps in the Great Barrier Reef, before it's gone, or the bits of the Northern Territory that haven't been dug up yet.

The better half is going to have to sell off her business - in our hearts, we always felt it was a mistake to let women advise giant corporations and government. What were we thinking? But she's quite looking forward to her new life at home, and has set about choosing material for new curtains, throw cushions, bedspreads and tablecloths. But before she can put the sewing machine to work on those, she's going to have to let those hemlines down - they're just not acceptable these days! And she's going to need a couple of new hats, too.

Oh, yes - comfort and style for the ladies!

We are expecting some resistance from our daughter. We should never have encouraged her in this foolish belief that she should get educated, learn languages and travel, so we are just going to have to bite the bullet and withdraw her from University so that she can go to secretarial college and learn shorthand. Hopefully she can master it quickly enough to get a job in an office and then, with a bit of luck, she'll meet a sound chap, get married and settle down. We shall be grandparents before we know it! I wonder how she'll look in a hat?

The young master, of course, accepts that he will have to sign up for nasho. My advice to him is: join the Navy! There'll be plenty of demand for young officer cadets to take charge in our Northern Defences, and since they'll need lots of ships to tow leaky fishing boats full of refugees back out to sea, I expect he'll have his own command in just a few years. After that, he should be well set for a job with a bank.

I do hope the little woman's over-active mind doesn't lead her astray.

And there's more good news for the young people, too - housing will be much more affordable, with the reintroduction of asbestos sheeting as a cheap building material (thanks, Julie!). The economy is going to boom, and soon the Harbour will once more be full of ships from distant lands, the docks ringing to the cheerful sounds of low-paid immigrant labourers! Gina's mines, too, will benefit  - her "New Australian" workers are so much better suited to tunneling work, and they need very little pay, since they only eat a couple of bowls of rice per day.

Soon, our coal-powered factories will be mass-producing widgets at prices that can compete with expensive Chinese-made widgets, those silly Chinese having introduced a 10 yuan-per-tonne carbon tax. How silly is that? We know that carbon is a colourless gas, so how can it be harmful? Tony is right - global warming is crap, and those boffins who dreamt it up have lost touch with reality. It's time somebody stood up to these "experts". And even better - our cheap electricity will allow us to run bigger air conditioners, which I find is so important, with the warmer weather we seem to be getting at the moment.

It's good that we have such a stable hand on the tiller, a sound chap, educated in the Mother Country, who understands our rightful place in the world, and that we mustn't get above our station as a colonial outpost of The Empire. A man who understands that Thrift is a Virtue, and that we cannot afford luxuries like the ABC - no matter, since we look forward to clustering around our wireless sets of an evening listening to the replayed highlights of BBC Radio Four, or dancing to big band sounds. Then it's off to bed, and up with the dawn to cycle to work - or down to the Surf Club, on the weekend. If the pushbike is good enough for Our Leader, it's good enough for me!

Anyway, that's all I have time to write in this first report - we have to rush off to church, as Tony is expecting a good turnout. And after that, the ladies will come home and put the roast in the oven for Sunday lunch, while me and my cobbers sink a few schooners of Tooheys Old at the RSL.

Aye, Australia - it's the lucky country, all right.

Tuesday, September 3, 2013

Another Paradigm Shift

When you've been in the software development business as long as I have, you get used to things changing continuously, but in retrospect, you can see the steps that represent paradigm shifts. When I learned to program, at school, in the early 1970's, we wrote programs with paper and pencil; the final version was written up on coding sheets which were then posted to the nearest university computer centre, over a hundred miles away. And if you weren't thoughtful and careful, the result that was delivered ten days later was a sheet of fan-fold paper which simply said, "SYNTAX ERROR ON LINE 2".

So, over the years, I've used punched cards, paper tape, mag tape (both 9-track and cassette, including audio cassette storage!), floppy disks, hard disks, flash and solid-state disks. I've been through traditional procedural languages like FORTRAN IV and Algol, multiple assemblers, object-oriented languages like Clascal, C++ and Java, logic languages like Prolog, functional languages like ML and OCaml, and peculiar specialized languages.

I've also seen user interfaces evolve. After the paper-and-pencil era, I got to use interactive systems via an ASR-33 teletype - i.e. a roll of paper - and early Tektronix storage-tube graphic displays. Then came serial terminals like the ADM-3a and VT-220, as well as block-mode mainframe terminals like the 3270, which could do basic form-filling. The major shift from shared minicomputers to single-user microcomputers enabled a switch to memory-mapped displays with crude monochrome graphics (remember the ETI-640, anyone?) and further improvements in graphics brought us GUI's with the Apple Lisa (yes, I had one), Macintosh and the X Window System. Oh, yes - and Windows (and OS/2 PM!).

The transition to graphics made it difficult to separate the computer from the display. The X Window System was designed to support graphical terminals attached to a minicomputer, but later systems like OS/2 Presentation Manager and Windows were designed with directly-accessible memory-mapped displays in mind. Extensions to Windows to support remote access, like Citrix MetaFrame, have had to rely heavily on very clever protocols like ICA to minimise their bandwidth requirements, and even X has a variant called LBX (Low Bandwidth X).

However, remote computing has been taking place via the Internet for more than two decades now, and the web has evolved a protocol for the remote display of text and graphics, in the form of HTML. For much of that time, users have sat at comparatively powerful computers - these days, they're quad-core Intel CPU's with multiple GBytes of RAM - while performing relatively simple tasks on remote servers, such as posting messages, ordering products or sharing photographs.

But the balance between the server and client has changed. These days, clients are more commonly phones or tablets; although these machines are more powerful than ever before, for reasons of cost and battery life most of their power must be devoted to tasks that can only be performed locally, with everything else, especially compute-intensive work, off-loaded to the server.

This has coincided with the development of even more powerful, yet cheaper, servers which support virtualization; add a layer of self-service, and the result is the cloud - an amorphous set of services made available on a pay-as-you-go basis to both enterprise and individual customers.


In particular, a desire to have information available on a range of devices, particularly mobile devices, is driving both types of customer to store their data in the cloud, rather than on the devices themselves. I used to store all my important files on a RAID-based file server in my office, and that was fine as long as I only wanted to share them between computers inside the office; but then I wanted to be able to access them while on the road, and that required me to set up and configure another level of complexity in the form of a VPN, with all the management of clients that that requires.

Now I want to access them from my tablet, my phone and my Chromebook, and that involves finding, installing and supporting VPN software for them, too. In the end, it's easier just to say, "I've had enough; I don't care if the NSA wants to read my files, and realistically, why would they bother? So I'm just going to stick my stuff in Google Drive, or Dropbox or Box or wherever".

But the limited power of mobile devices means that they can't do the heavy processing - that has to be done in the cloud, too. This isn't a completely new idea - large-scale number-crunching has always been done on supercomputers or even mainframes with specialized hardware like array processors. And, of course, database servers represent another category of specialized devices used by clients, as do print servers.

For my number-crunching needs, I've been using Sage, an open-source mathematics program that combines lots of other specialized maths programs into one - and for its user interface, Sage uses a web browser. It's extremely difficult to build Sage for Windows, so if you want to run it there, you download a complete appliance virtual machine which runs under VirtualBox, and then point your browser at that. However, I prefer not to load up a machine that's also being used for other things - especially my smaller machines. So, by installing Sage on a spare server - I start its notebook interface only when needed - I can keep my work files in a single location but access Sage from any of my machines (although using it from my phone would be a bit tedious).

New Architectural Paradigm

This may well be the computing model and user interface paradigm of choice for the next few years. It makes no sense for me to design any applications that I write from this point onwards to use a conventional GUI toolkit like Java's Swing or - not that I would - the various Windows GUI classes. If I do that, they become inaccessible from my phone, tablet and Chromebook. Nor should I use the Android SDK - that gives me apps that will run on a phone or tablet, but won't run on a PC. I suspect that many enterprise developers are facing the same dilemma - demand for access from mobile devices is causing them portability problems.

The answer has to be a move to a thin-client approach to everything. The client is, inevitably, the web browser, which is now so capable that it is nothing like as "thin" as the thin clients of yesteryear. The form description and layout language is HTML 5, the client-side procedural language is JavaScript, the communication protocol is HTTP with AJAX, WebSockets, XML and JSON layered on top.
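A minimal sketch of the JSON layer in that stack - the message shapes (the `action` and `payload` fields) are my own invention for illustration, not any real protocol; in a browser the request would travel via XMLHttpRequest or a WebSocket rather than a direct function call:

```javascript
// Client side: build the request the browser would send to the server.
function buildRequest(action, payload) {
  return JSON.stringify({ action: action, payload: payload });
}

// Server side: decode the message, dispatch on the action, encode a reply.
function handleRequest(body) {
  var msg = JSON.parse(body);
  if (msg.action === "echo") {
    return JSON.stringify({ status: "ok", result: msg.payload });
  }
  return JSON.stringify({ status: "error", result: "unknown action" });
}

var reply = JSON.parse(handleRequest(buildRequest("echo", "hello")));
console.log(reply.status, reply.result); // → ok hello
```

The appeal of JSON here is that it needs no parsing machinery beyond what the browser already provides - both ends speak JavaScript's native data structures.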

The back end language can be any of the traditional languages - C/C++, Java, Python, etc. A lot of my code is written in Java; adapting it to run as a servlet would not be particularly difficult. For new projects I also have some new (-ish) language options, such as PHP or JavaScript, in the form of Node.js, which I would not previously have considered for conventional applications. And now I have the choice of running it on my own machines, or finding a cloudy home for it with Google, Amazon, Rackspace or any of a number of public or private cloud providers.

For the User

I suspect we're going to see a change in the way people use computers, too. Right now, many people have both a desktop and laptop computer, and switch between them, sometimes transferring files via USB drive. Gone will be the large desktop PC; people will prefer laptops and tablets, but they may still need a fixed box - as I do - to handle DTV tuners, scanners, home theater, media services, etc. Such devices might even morph into a "home server" - a headless box that serves music, video, graphics files, etc. to the various TV's, and perhaps even terminates fixed-line phone services, should they survive that long. And yes, we can stream music and video from the cloud, but here in the Third World (where the distant promise of fiber-to-the-home seems likely to be snatched from our grasp) there just isn't the bandwidth for us all to do that all the time.

So, in a sense, we're back where I almost started - with minicomputers and mainframes (replaced by servers and the cloud) and users connecting to them via terminals - replaced by a variety of thin clients.

Plus ça change, plus c'est la même chose.

Sunday, September 1, 2013

Living with the Acer C710 Chromebook (Review)

The end of the financial year saw me with a few spare dollars to put back into the business for tax reasons. Now, I've moved a lot of my business technology over to Google (and GoDaddy) and have grown to appreciate the convenience and functionality of my Nexus tablet and phone, so the obvious next step was to investigate a Chromebook. Fortunately, JB Hi-Fi had a special deal on the Acer C710 Chromebook that weekend, so I hopped in the car and an hour later was unboxing the new beast in order to explore further.


The left side of the C710, showing (L-R) Ethernet, VGA, HDMI and USB ports.
I had chosen the C710, rather than the more expensive Samsung Chromebooks, for one simple reason - I do a lot of presentations, and need VGA output to drive an external projector. The Samsung devices offer HDMI output only, while the Acer has both VGA and HDMI. However, despite the low price, the Acer machine is quite nicely built. It has the grey finish made popular by recent MacBooks, although it is obviously made of plastic. The 11.6" screen opens up with an even force on its hinges, and is coated with a glossy, reflective finish. The right side of the machine has a slot for a locking cable, an inlet for the external power adapter (which is just a largish wall-wart), two USB ports and the headphone jack. The left side has a wired Ethernet jack (another plus for the Acer compared to the Samsungs), VGA, HDMI and a third USB port, while the front edge conceals a dual SD/MMC memory card slot.

At 1366 x 768, the screen is a little smaller than I'm used to (my main laptop is a Thinkpad T500, since it does duty as a backup development workstation), but it is sharp, the colour saturation is good and it is quite bright. The keyboard is a weak point, though - the keys feel slightly spongy, with only short travel; while it's quite usable, it's not in the same league as the Thinkpad (but then, it's darn near one-tenth of the price). The trackpad is quite usable, with various shortcut gestures (e.g. sliding two fingers to scroll a window), although I still prefer the Thinkpad's Trackpoint - this is more a matter of taste, though.

Finally, my model came with a 320 GB hard drive. That really is ridiculously large for a machine that's intended for use with Google's cloud services, and I note that the current model now ships with a 16 GB solid-state drive. In fact, the day I bought my Chromebook I also ordered up an additional 4 GB of RAM (which upped it to 6 GB in total) and a 120 GB Samsung SSD. When I opened up the Acer to swap the drive, I noticed that the construction was quite solid, with all the parts securely mounted in a very confined space, yet easy to work on. The only moving parts left in my machine are the CPU fan and the power button, so I'm expecting it to be quite reliable.

The Software

While Google already has a platform for its cloud and email services in the form of the Android OS and the related tablets and phones, the Chrome browser represents another platform which has made strong inroads, becoming the dominant browser in the marketplace. Google's enterprise business (as opposed to its ad-sales and analytics business) is based on the SaaS (Software as a Service) and PaaS (Platform as a Service) cloud service models (see NIST SP800-145, "The NIST Definition of Cloud Computing").

For most individual users, the SaaS component of interest is email, delivered as a cloud-based application presented in a browser. (In fact, from teaching undergraduates how to encrypt email, I've learned that a whole generation has grown up not knowing that there's such a thing as an email client!). However, Google has extended Gmail into related services such as contact management and calendar management.

This has led Google to enterprise customers who require additional functionality - hence Google Apps, which additionally provides word processing, spreadsheet and presentation graphics functionality. The technology that underpins these is AJAX (Asynchronous JavaScript and XML), which allows JavaScript code, running in the context of the browser, to dynamically communicate with a web server while updating the onscreen window via the DOM (Document Object Model). This allows what are sometimes called "single-page web apps", i.e. web applications which present as a single, continuously-updated page, as opposed to older web applications which required an entire page refresh when a form button was pressed. Remember what a breakthrough Google Maps was? AJAX.

What has really made this possible is the maturation of JavaScript as a programming language, along with the development of libraries like JQuery. I admit, I used to be somewhat dismissive of JavaScript, relegating it to such tasks as mouseover graphics (pop-ups, etc.) and basic form validation (although the server always has to perform its own input sanitization for security reasons).
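For example, a quantity field might be validated client-side like this - an illustrative check only, and one the server must repeat on submission, since nothing stops a hostile client from bypassing the browser entirely:

```javascript
// Accept only whole numbers between 1 and 99.
// The same function can run in the browser (for instant feedback)
// and on the server (for actual security).
function isValidQuantity(input) {
  if (!/^[0-9]+$/.test(input)) return false;  // digits only
  var n = parseInt(input, 10);
  return n >= 1 && n <= 99;
}

console.log(isValidQuantity('12'));   // → true
console.log(isValidQuantity('abc'));  // → false
console.log(isValidQuantity('0'));    // → false
```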

But the fact is that, whether by accident or by good design, JavaScript is actually a rather sophisticated programming language with some advanced features, such as closures and the ability to treat functions as first-class objects. These functional programming features are then capitalized on by libraries like JQuery to achieve a high level of productivity for the programmer and a high level of functionality for the user.
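A few lines illustrate both features - a counter built from a closure, and a function passed as a first-class value in the style JQuery popularized:

```javascript
// makeCounter returns a function that closes over its own private
// count variable - no other code can see or modify it.
function makeCounter() {
  var count = 0;            // captured by the closure below
  return function () {
    count += 1;
    return count;
  };
}

var next = makeCounter();
console.log(next(), next(), next()); // → 1 2 3

// Functions as first-class values: map takes a function as an argument.
var doubled = [1, 2, 3].map(function (n) { return n * 2; });
console.log(doubled); // → [ 2, 4, 6 ]
```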

Because they are pushing JavaScript and the DOM pretty hard, it was natural that Google would try to minimise cross-platform problems by developing their own browser. The result is Chrome, a browser based on the Blink rendering engine. And one important component of Chrome is V8, a JavaScript engine developed by Google in Denmark. V8 is a just-in-time compiler - it compiles JavaScript down to machine code (for 32-bit and 64-bit Intel, ARM and other architectures) and performs sophisticated optimizations at run-time - and therefore achieves high performance. And although it was developed for the Chrome browser, V8 has also found its way onto the server, in the form of Node.js, making JavaScript an option for both client- and server-side development.

The result is that JavaScript, today, isn't your mother's JavaScript. People have made it do some bizarre things, down to implementing a virtual machine that can boot and run Linux - in the context of the browser!

As a consequence, Google has been able to implement quite a high level of functionality in the SaaS applications. As an initial test, I exported the introductory slide set for my "CISSP Fast Track Review" class in Powerpoint format (I usually maintain them in Libre Office, for historical reasons) and then imported them into Google Slides. While there were some initial alignment and scaling issues, I was quickly able to clean them up and the result has been quite effective.

Given that these applications run in the context of the browser, there is no longer a need for many other native applications and utilities. Without them, a lot of other OS and run-time library services can be dropped as well. All that is really required is a stripped-down kernel that can boot a restricted graphical desktop on which the browser is the major application. And that's ChromeOS. Install it in flash memory on an otherwise diskless machine, and you have a Chromebook. Although most are laptop devices, Google also sells (in some markets) a desktop (or -side) device called a Chromebox, which works with an external keyboard, mouse and monitor.

The Chromebook boots near-instantaneously, since the operating system is minimal. On first startup, the purchaser is offered the opportunity to register a new Google account or to sign in with an existing one. Since I already have three corporate/education Google Apps accounts, I signed in with one of them; the device immediately displayed my photo (probably from my Google+ profile) and the browser bookmarks were populated with my desktop Chrome bookmarks.

The operating system binaries are signed and checked on startup via a public key stored in the TPM (Trusted Platform Module), although it is possible to install other operating systems such as variants of Ubuntu and other Linux distributions. The result is a verified, unmodified OS kernel on every boot. Updates are equally painless: soon after initial setup the latest OS is downloaded in the background, and a reboot is all that's required for installation - it happened last night and took less than 10 seconds.


By default, the Chromebook has icons arrayed in a launcher at the bottom of the screen. Where Windows would have a "Start" button, the Chromebook has a "Chrome" icon, which just launches the browser. There's also an icon for GMail, and a 3x3 matrix which pops up a window with all other application icons (rather like the equivalent on Android). However, you can "pin" other application icons to the launcher, and so mine now has icons for Google Drive, Plus, Keep, Slides, Docs, Sheets, etc. Each icon can be configured to open the application as a standard tab (default), as a "pinned" tab, as a window or maximised.

I don't propose to review the major applications here - they're software which is, to some extent, external to the hardware - especially since, if you want to explore their functionality, all you need is a free Google account and a browser; they work fine in both Firefox and Chrome. What you will find is that they do not provide all the functionality of a high-end suite like Microsoft Office. However, they work adequately well for personal, educational and small business use, and are continually being upgraded and improved. The Docs word processor, for example, is considerably more powerful than a cloud-based note database like Evernote, and not far off the capabilities of Open Office or Libre Office.

The applications also have their own unique capabilities - for example, it's possible for multiple users to edit documents and spreadsheets simultaneously, with coloured highlighting (and a key at top right of the window) indicating the cells or locations where each user is currently working. This makes collaboration easy - a popular feature for students working on group assignments and projects. However, I've also used it myself, to work on two different areas of the same document simultaneously.

Apart from the usual office productivity applications, you also get a scratchpad app of dubious value - the Google Keep application does that kind of thing much better - Google Forms for data collection, Google Drawings, Blogger, an SSH client, Google Play books, music and movies, a calculator, camera and web-conferencing apps (Google Hangouts). There's also a "Files" application, which allows management of both local and cloud-stored files.

Google Keep started life as a very basic notebook program, but has been revised to add functionality such as checklist formatting and reminders, so that it now fills a niche for those simple to-do list and shopping list requirements that do not justify the use of a full word processor. And this is one of the nice features of Chrome - the applications are continually evolving and because they load from the cloud on startup, you always get the latest version. There's also no obvious way for a virus to infect the programs - another subtle benefit. And more apps are on the way - there are hints (search in chrome://flags) that Google Now cards are coming to Chrome in the future.

Additional apps - as well as extensions and themes - can be obtained from the Chrome Web Store. These are mostly apps that run within the browser itself.

By default, all work files are stored in Google Drive, and the Chromebook comes with 100 GB of cloud storage, free of charge for two years. As a Google Apps customer, I already had 30 GB of storage, shared transparently between Gmail/Calendar/Contacts and Drive, and I've used less than 1% of it so far, even with a number of quite large documents in it.

The Google Infrastructure and Ecosystem

As mentioned above, Google also sells PaaS cloud services. Developers can write applications in their choice of JavaScript (Node.js), Go, PHP or Java, and deploy them in the Google cloud. The result is a large collection of applications that are available to Google Apps customers - the enterprise market for Google. Additionally, Google Apps customers have access to a device management console which they can use to enforce corporate policies across Chrome devices.

Wi-Fi Dependence?

Because the applications are written in JavaScript, they load at run-time from Google's servers. At first glance, this suggests that the Chromebook is of no use without an Internet connection, but surprisingly, that isn't the case. The device comes with GMail for offline use as standard, and this can be used to read and reply to emails, schedule appointments, etc. In addition, Google Drive - the storage application for Google Apps - has an option for "Offline" which can be used to cache folders and files for stand-alone use - rather like Windows' "offline files" and "Sync Center" feature.

My Hardware

Feeling that 320 GB was far too much local storage for this type of machine and that rotating media wasn't the most reliable for a lightweight, highly portable computer, I pulled the drive out of mine and replaced it with a 120 GB Samsung SSD (solid state drive). The result is even faster to boot and shut down.

I also upgraded the RAM from 2 GB to 6 GB - it's easy to slip in an extra DIMM. However, examining the memory usage in the system configuration report (chrome://system), it appears that it rarely gets much above the 2 GB level anyway.


It's not a netbook. Netbooks were typically underpowered, especially those machines which ran a stripped-down Windows XP and were built down to a price in order to fund the MS Windows software licence. In addition, many netbooks offered no cloud storage, relying entirely on local SSD storage, and ran minimalistic sets of applications.

The trick is to realize that JavaScript today has moved to fulfill the promise of portable applications originally made by Java. Whether by accident or design, it's a very powerful programming language with functional programming idioms such as closures, functions as first-class objects, etc. When coupled with AJAX capabilities and libraries like JQuery, running on a fast, low-powered processor, it can deliver extremely sophisticated applications that happen to run in a browser.

And then there's Google's PaaS cloud - you also have the ability to script the server side with JavaScript, or run entire Java, Python, PHP, Go and Node.js applications - and that, in turn, has created the applications that we see in the Chrome Web Store and the Google Apps Store.

In essence, then, the Chromebook is a low-maintenance thin client backed up by Google's cloud services, and not a simple cheap laptop.


It's nice to have a lightweight machine for working outdoors.
I am already well-catered-for, in terms of computing resources - my desktop machine is an i7 quad-core box with 32 GB of RAM and 5 TB of disk, while my regular laptop is a large-screen Thinkpad T500. I use these large machines because I develop and run large simulations using Eclipse and Java. But I find that I do some of my best creative thinking away from my desk - over a coffee or in a quiet library, or even, as shown above, sitting on the deck wishing the pool was just a little warmer (it's only the first day of Spring as I write this).

While I have a Nexus 7 tablet, and am experimenting with a Bluetooth keyboard for it, I find that the Chromebook occupies a rather nice niche - it's much lighter than the Thinkpad, but much more useful for serious work than the Nexus 7. In fact, I sometimes find myself using the two together - the Nexus 7 to display reference documents and the Chromebook to write. It's also light enough to carry as a backup laptop - the week before last I took it on a business trip and worked in the evenings, using the Chromebook by itself at a restaurant table, and the three-screen combination of the Thinkpad, Chromebook and Nexus 7 at the desk in my hotel room.

In addition, like many others, I've moved a lot of my work from the local desktop to separate compute engines, either on dedicated servers or in the cloud. For example, as I write this, I have another tab open to perform complex calculations (calculating the transition probability matrices of Markov Decision Processes, if you must know) using Sage on one of my servers. The resultant output renders just fine on the Chromebook and because the number-crunching is done on the server, there's no difference in performance compared to using my desktop machine.


My conclusions are that the C710 Chromebook represents a highly attractive platform for several types of user:
  • Business customers using Google Apps
  • Education customers using Google Apps
  • Those who need a spare computer
  • Those who want a lightweight, simple (nearly no management) computer
  • Those concerned about loss or damage, with consequent loss of their work
The latter is highly appealing to many users who are losing patience with the issues of patching, securing, backing up and generally maintaining more conventional (Windows, Mac and even Linux) computers. Essentially, the Chromebook represents the most successful implementation yet of software as a service - all the stressful maintenance is taken care of by the provider, and the user can simply concentrate on getting their work done.

At $US199, I'd say the C710 Chromebook is tremendous value, and the potential for ongoing savings makes it better and better.

Sunday, July 28, 2013

Nexus 7 - One Year On

This time last year, I was given a Nexus 7 as a birthday gift (I'd hinted really, really strongly!). One year on, Google has released an updated model, and I have a much better understanding of how the thing works and what it's good at. The new Nexus 7 hasn't been released in Australia yet - but will I upgrade?

I think so. I've come to regard the N7, along with the Roomba, as one of my two most successful "Let's give this a shot and see what it's all about" tech purchases. However, the N7 hasn't achieved this position on the basis of its technology or bang-for-the-buck alone; its significance was to introduce me to the Google ecosystem and I should, perhaps, give a close runner-up award to the Galaxy Nexus phone which I bought as a result of my positive experience with the N7.

I haven't used the N7 as a toy at all. Never watched a movie on it, rarely play music on it, will never play a game on it (I'm not a gamer, unless you count me vs the evil Java compiler as some kind of strategy game).

For me, it's all been about personal organization and having instant access to information wherever I happen to be - in my office, in the kitchen, in front of the TV, in a lecture theatre, in the coffee shop. The apps I use most heavily would be Gmail (I have two business accounts and one university account), Google Calendar, and Google Maps, along with Google Drive/Docs/Apps. The latter, especially, has been getting heavy use for writing up course materials and presentations - I do a lot of the heavy lifting on my desktop machines or on a Chromebook I also bought in the "Let's give this a shot and see what it's all about" mind-set, but it's been really useful to have the ability to view materials while away from my desk, or to display them on a second (really fourth!) screen while working.

Then there's Evernote, which has also been getting heavy use, especially for mundane things like shopping lists. However, with Google Keep maturing and being standard in Android 4.3, it might take over for those lightweight tasks.

Perhaps the biggest unexpected "killer app" is Google Now, which integrates voice search against the Google Knowledge Graph with the anticipatory presentation of information cards to organise my day.

And then there's a whole host of other information-handling apps: Wikipedia, YouTube, IMDb for when I'm watching movies (always nice to be able to answer "What else have we seen him in?"), the Guardian for my twice-daily news fix - plus, of course, go41cx and free42 for calculations. And the Kindle app has proved especially useful while dining alone in dimly-lit restaurants recently.

Where has the N7 fallen down, and what would I like to fix? The only thing I would change would be to get a 3G/HSPA+/LTE model next time. Although the N7 is not as dependent on an always-on connection as the Chromebook, and I can get a wi-fi connection all over the university campus, there are times when the longer battery life and larger display of the N7 have made it a better choice for navigation and some other tasks than the Galaxy Nexus phone, which is my "always-connected" device. I suspect that a quite low-cost, low-bandwidth prepaid SIM would be more than adequate, so it needn't break the bank.

The other thing I want to investigate is yet another case, this time with a bluetooth keyboard. While the standard Google keyboard's swipe (gesture typing) input really is quite usable, a more capable keyboard would hugely improve the usability of Evernote and similar apps.

All things considered, I think I'll be queuing up for a 32G/HSPA+ N7 when they finally make it to Australia.

Sunday, May 19, 2013

Thanks for Nothing, Verisign

For, lo, these many years I have used a VeriSign "Class 1 Individual Subscriber - Persona Not Validated" personal X.509 certificate for several purposes. I originally got mine back in The Day, when Netscape offered a web mail service but required that you log in using a personal X.509 certificate for authentication. Not surprisingly, Netscape's web mail service didn't take off, even in competition with the horror that was Hotmail.

But the personal certificate has other uses, too - mainly in conjunction with email. You can use it to sign emails, using the S/MIME protocol. Mail clients like Thunderbird automatically append your certificate to signed emails, making it easy to verify the signature - just calculate a hash over the email message, decrypt the signature using the public key in the attached certificate, and if the two match, the message wasn't modified and it was sent by whoever has the private key that matches the certificate. Very easy, even for completely non-technical users.
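For readers who'd like to see the integrity-checking half of that process in miniature, here's a sketch using Python's standard hashlib. It's a deliberate simplification - real S/MIME wraps the hash in an RSA signature inside a PKCS #7 structure - but it shows why any modification to the message is detected:

```python
import hashlib

def digest(message: bytes) -> str:
    """Hash the message body - S/MIME clients typically use SHA-1 or SHA-256."""
    return hashlib.sha256(message).hexdigest()

original = b"Please transfer $100 to account 1234"
sent_hash = digest(original)   # in real S/MIME, this value is what gets signed

# An unmodified message produces the same hash...
assert digest(b"Please transfer $100 to account 1234") == sent_hash

# ...while even a one-character change is detected:
tampered = b"Please transfer $900 to account 1234"
print(digest(tampered) == sent_hash)  # False - the signature check would fail
```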

And once you've received someone's certificate, the email client automatically places a copy in your certificate store. Since the certificate contains their public key, you can now encrypt emails that you send to them (assuming that you also have a private key and certificate). It's almost foolproof.

But now comes the $64,000 question: how does the recipient of a signed email know that they can trust the public key in the attached certificate? Of course, that's the whole point of a PKI (Public Key Infrastructure): you can trust the certificate to identify an entity because the certificate is itself signed by a Certification Authority. Of course, for an inexpensive certificate like the Class 1 Persona Not Validated ones, all it really means is that the person who bought the certificate was able to collect it via a link in an email sent to them by the CA, proving that they can access the email postbox.

But to check the validity of the certificate signature itself, we need the public key of the CA. No problem - browsers and email clients have those public keys built in, in the form of self-signed root certificates. We explicitly trust the root certificate, and given that, we know we can trust any certificates signed by it (actually, signed by the private key that corresponds to the root, but the terminology gets pretty lax, as we'll see).
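The chain-walking a client performs can be sketched with a toy model - the "certificates" here are just dicts, and "signature checking" is reduced to matching issuer against subject, but the leaf -> intermediate -> root structure is the real one:

```python
# Toy certificates: real ones carry public keys and signatures; here we
# model only the subject/issuer links that form the chain.
leaf = {"subject": "Les Bell",
        "issuer": "VeriSign Class 1 Individual Subscriber CA - G3"}
intermediate = {"subject": "VeriSign Class 1 Individual Subscriber CA - G3",
                "issuer": "VeriSign Class 1 Public Primary CA - G3"}
root = {"subject": "VeriSign Class 1 Public Primary CA - G3",
        "issuer": "VeriSign Class 1 Public Primary CA - G3"}  # self-signed

def chain_ok(cert, store, trusted):
    """Walk issuers through the local store until we reach a trusted root."""
    while cert["issuer"] != cert["subject"]:      # not yet at a self-signed cert
        issuer = next((c for c in store if c["subject"] == cert["issuer"]), None)
        if issuer is None:
            return False                          # issuer "could not be found"
        cert = issuer
    return cert["subject"] in trusted

trusted_roots = {"VeriSign Class 1 Public Primary CA - G3"}
print(chain_ok(leaf, [intermediate, root], trusted_roots))  # True
print(chain_ok(leaf, [root], trusted_roots))                # False
```

The second call fails for exactly the reason my students would later hit: the intermediate certificate is missing from the store, so the issuer cannot be found.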

And so that is how it has been, since time immemorial. Each year, I have renewed my certificate and dutifully backed it up from Firefox, then imported a copy into Thunderbird and used the new one.

Last year, I even taught the details of this as part of a university class on Cryptography and Information Security, and I set the students an exercise - get yourself a free Class 1 personal certificate from Comodo, send me a signed email, exchange signed emails with other students, start sending each other encrypted emails, then obtain my certificate from the VeriSign Digital ID directory (or directly import it from a file) and send me an encrypted email. It all went swimmingly, with no problems.

This year, lots of problems - albeit a good opportunity for learning, for my students. First of all, I forgot to upload my new certificate, and the old one had obviously expired. But even when I exported and uploaded the new, current, certificate, they still couldn't import it into Thunderbird. Typically, they would get an alert dialog:

This is not cool. Now, I knew the certificate was perfectly valid. I even opened it using the Windows Crypto Shell Extensions on my desktop machine, and here's what it looked like:

Looks fine to me. Clicking on the "Certification Path" tab reveals that the certificate isn't directly signed by a VeriSign root certificate - rather, it's signed by an intermediate CA certificate and that one is signed by the root:

But here's how it looks on my Windows 7 machine:

What's this? "... not enough information to verify this certificate"? Let's have a look at the "Certification Path" tab:

The issuer of the certificate could not be found. Hmm. Remember, I'm looking at the Windows certificate store here, not the Firefox or Thunderbird one - they were fine, because I'd been using that cert with Thunderbird on that machine for the best part of a year with no problems. But it looked as though the Windows certificate store was missing one or the other, or both, of the VeriSign CA certificates. So I went into the Firefox Certificate Manager and exported both the required certificates, noticing as I did so that the VeriSign Class 1 Public Primary Certification Authority - G3 certificate was a Builtin Object Token (distributed as part of the browser) while the VeriSign Class 1 Individual Subscriber CA - G3 certificate was stored in the Software Security Device - i.e. it had been imported separately.

Now, it was over to the Windows 7 machine, and the Certificates MMC snap-in. To run this, use Start -> Run, type in "mmc.exe" to start MMC, then use File -> Add/Remove Snap-in, select "Certificates" and add it to "Selected snap-ins", select "My user account" and click OK. A quick inspection revealed that the VeriSign Class 1 certificates weren't there, but right-clicking almost anywhere and choosing "All Tasks" -> "Import..." allowed them to be imported successfully.

Once this is done, my personal certificate shows up correctly on the Windows 7 desktop:

Now, the students were experiencing problems importing my certificate into Thunderbird, but I bet it was the same problem - missing Verisign root and intermediate signing certificates. I've sent a couple of them the certificates as described above, and I've also sent others my certificate exported from Thunderbird using the "X.509 Certificate with chain (PEM)" format, which puts all three required certificates into a single .crt file. I'm waiting to hear back from them, but I'm expecting success.

All this is complicated by some terminological inexactitudes in the industry and the software used. One has to be careful to distinguish between two different export operations. Firefox/Thunderbird's "Backup..." exports both the certificate and the corresponding private key in PKCS #12 (.p12) format, which displays in a Windows folder as a certificate with a key in front of it. You don't want to give anyone else this file, as it contains your private key (albeit password-protected). It's mainly useful for moving your key and certificate from Firefox into Thunderbird or other applications. So, a "certificate file" may not contain just a certificate! (This is almost as egregious an error as the old OpenSSH man page, which used the word "certificate" to mean "private key"!)

On the other hand, if you want to give someone else your public key, in the form of a certificate, you have to "View..." the certificate, click on the "Details" tab and then choose "Export...". At this point, one can choose PEM (.crt - appears as a certificate) or PKCS #7 (.p7c - appears as a rolodex card) format, with or without the certification chain.

But the real problem is that VeriSign - now part of Symantec - appears to be backing, slowly, and without notifying their customers, out of the personal certificate or Digital ID business. They are no longer distributing their Class 1 root and intermediate certificates with Firefox or Thunderbird, or with Windows. What's worse, my students have not been able to download my certificate from the Verisign Digital ID directory. It's there (as are records for my previous certs going all the way back to May 1997!) but there are no links or buttons to do anything with it. No-one can download it, and it looks as though I'm not going to be able to renew it - although without distribution of the CA certificates in email clients, it's of dubious value anyway.

By contrast, Comodo not only offers free Class 1 personal certificates, but also operates the SecureZIP Global Directory where you can place your certificate for use with the PKWare SecureZIP utility. And their Class 1 CA certificates are more widely distributed than Verisign's, making them less prone to problems.

So, if you've been experiencing problems with your VeriSign Class 1 certificate, perhaps you now know why. And if, like me, you've been paying for your certificate for the best part of 20 years, you'll join me in saying:

Thanks for nothing, VeriSign.

Sunday, April 21, 2013

2013: The Year of the Facebook Mobile Attack?

Facebook has been pushing - if you don't update, you'll receive notifications in your newsfeed - a new version of the Facebook app for Android. I've reluctantly upgraded the version on my Nexus 7, but I'm holding off installing it on my phone. At this point, I'm not sure the increased risk is worth it.

"What risk?", I hear you ask. There's a potential exposure in the new Facebook app; the app requires somewhat looser permissions than the previous version, including - wait for it - the ability to directly call phone numbers. Big red flag here, Facebook. The major form of malware seen to date on Android phones has been apps that use this permission to call premium-rate international numbers, running up a huge phone bill for the victim and delivering a nice profit for the attacker.

Properties required by the Facebook app for Android -
notice "directly call phone numbers"

The need to make phone calls arises from the introduction of the new "Facebook Home" - an app which takes over the home screen of a phone to present a Facebook-centric experience - as well as Facebook Messenger, which integrates Facebook messaging with SMS as well as supporting voice messaging. It's not clear to me why the main Facebook app, which does not support these functions, should also require access to the phone functionality, not to mention the ability to record audio, download files without notification, read your contacts and many other privacy-invading permissions.

At the same time, Facebook has been a terrific vector for the spread of malware on the PC, sometimes in the form of infected videos or apps, as well as privacy-invading apps which harvest your profile, contacts or other information, or download files.

The message: expect this to spread rapidly to mobile devices. Facebook now exposes a relatively large attack surface, and an attacker who can compromise the Facebook app on Android can use its permissions in a range of creative ways.

2013: the year of the Facebook mobile attack? I hope not, but it looks likely to me.

Sunday, April 14, 2013

Google+: The Good, The Bad and The Ugly

I've recently introduced a group of online friends to Google+. We'd mostly met via Facebook, where we'd shared things via a secret group, but disenchantment set in and the group was fractured when some of our number were locked out of their accounts (the reasons for that are not at all straightforward and I won't go into them here).

So a few of us were chatting about how to get around this, and off the top of my head I quipped, "We ought to set up a similar group as a Google+ community". Then I thought, "why not?" and a minute later, I'd done it.

I spent the day intermittently writing short "How-To" posts for the new users I was dragging across from Facebook, and answering their questions, helping them to figure out how to get things done, etc. It's been a couple of days and the experience has given me a better understanding of Google+.

Neither Good Nor Bad - Just Important

Circles. You have to grok circles. Circles have both read and write, or in and out, functionality. You can use circles to filter what you see in your home page - for example, you can suppress a circle from appearing in your Home page stream (great if they are prone to posting NSFW images!). That's the "read" functionality. You can also limit posts to only certain circles so that your doings are not broadcast to the wrong people - that's the "write" functionality, which will be more important to some people (to be honest, I regard anything that I post on a social networking site as public).

The problem is that the importance of circles, and the things one ought to consider when creating them and adding people to them, are not immediately obvious - it's only after you've spent some time fiddling with the various configuration options that their importance becomes apparent.

The Good

Now to some good points I've noticed and others have commented on.

Firstly, the Home page has filtering, so you can view just specific circles. Across the top of the home page are buttons for "All", "Friends", "Following", or whatever circles you've created. This means you can choose to see only posts from colleagues during the workday, then spend some time catching up with friends, or reading up on products/technology you're following.

The integration with Gmail, Contacts, Youtube, Blogger, etc. is nice - but only important for users who have already engaged with the Googleverse. It's good for me - I use Google Apps for both business and university purposes, and it was that that led me to get my Google+ profile sorted out and then start using it - but for people looking for a Facebook alternative, the fact that you might have to use some other Google service such as Google Drive to get things done seems odd.

There are some nice usability features; for example, you can drag and drop pictures directly into the "Share what's new..." comment box - there's no need to click on "Add Photos/Video" first. On the downside, sharing URLs requires you to click on a link button to get a field, rather than having the URL auto-recognised in your text. And Google+ doesn't automatically provide previews of URLs in comments the way Facebook does.

The privacy and security options are very granular; this is great if you're willing to take the time to learn and use them. Not everyone is willing, though - and it can be confusing for the new user, who doesn't know what all these things are.

Communities are essentially equivalent to Facebook groups, and can be made public with no barriers to joining, public with approval for joins, or secret, which will require an invitation to join. A nice feature which Facebook doesn't have is "Categories"; for example, I quickly created a "Using Google+ And This Community" category where I could post hints and answer questions without overwhelming the main "Discussion" category. Of course, the default view when one logs in is "All posts", which displays everything - and it takes the new user some time to discover and use categories. Until they do, they post everything in the default "Discussion" category and (under "Bad") there's no way for moderators to move posts to the correct category.

It's quiet. I've given up on Twitter; it's been over-run by social media "marketers" who think they're slick, and aren't. Facebook is rapidly heading the same way; my newsfeed is starting to fill with posts from link farmers trying to trap people into granting access to their Facebook profiles. Google+ doesn't have that, as far as I can see. Yes, there are marketers there - I follow a couple of my favourite brands - but so far, it's a pretty well-behaved place.

The Bad

But there are problems, and it's been obvious as I've introduced these new users.

It's noisy. By default, every post, every comment on a post, every damn thing that happens fires off an email. There's a notifications on/off button in communities, but that doesn't seem to do much to quieten things down - instead, you have to dig into the settings under your profile.

Configuration options and settings are spread out in various places, mostly accessible from your Profile, via the gear-wheel icon at top right. Some options are under "Profile and Privacy" - for example, you can control which people appear in the "People in his/her circle" listing on your profile, on a circle-by-circle basis if you want. But other settings, such as just what "Your Circles" means when you share something with "Your Circles", and the email/SMS notification noise level, are under "Google+". It all gets rather confusing, especially for the new user.

Another big issue is the lack of group chat functionality. Just like Facebook, there's a "Chat" tab at lower right of most pages, but unlike Facebook, you can't add multiple people to the conversation. Googling "Google+ group chat" leads to articles that imply it's possible, but the software has obviously changed since they were written. And the confusion over Google's IM products doesn't help: there's Google+ Chat, Google Talk, Google Messenger and Google Voice, and they're all different things. In fact, it seems that two different things on different platforms (PC vs Android) can have the same name even though they're incompatible and not interoperable.

If you really want a multi-way conversation, Google+ pushes you towards "Hangouts", which offer up to 10-way videoconferencing and have some really neat features such as screen-sharing. However, not everyone has a webcam, or even a microphone, or they don't want to be seen. And Hangouts require special software; when you start a hangout (or try to join one?) without the software, you are prompted to download GoogleVoiceAndVideoSetup.exe. The messages seem to imply that the software has installed itself; however, I soon discovered that it hadn't, and when I found and ran GoogleVoiceAndVideoSetup.exe, it downloaded and installed the actual code required. At this stage, no-one else in our little group has completed the process, so we haven't actually accomplished a Hangout. If we do, we might well move this feature to the "Good" side of this balance sheet.

File sharing is difficult. Facebook groups have a "Files" tab and even an "Add file" link right at the top of the page. There's nothing like this in Google+ communities. The easiest way to share something seems to be to upload it to Google Drive, make it public and accessible to anyone who has the link, then copy and paste the link into a Google+ post. This is awkward at best, and it also means that the file is stored in an individual user's Drive, rather than storage space that belongs to the community. At the very least, the Share... menu option in Google Drive ought to have options for sharing to Google+ - that functionality already exists in Youtube and could almost be copied and pasted into the Google Drive code base. (Update: it turns out that there may be a button which allows direct sharing to Google+ [or email, Facebook or Twitter], but I don't see it because I'm using the Google Apps version of Google Drive. Just another complication - different people see different versions of the same thing, depending upon which Google services they're signed up for.)

Terminology keeps changing. For example, the term "stream" has fallen into disuse - your "stream" is now your "Home page". And I've already mentioned the confusion over the IM apps.

Functionality keeps changing and is inconsistent. Google+ - and the rest of the Googleverse - is obviously in a constant state of flux. New functionality is constantly appearing, while older, less-used - but popular with their users - features are liable to disappear. I need only mention Google Reader at this point - but it's an issue I'll return to.

Related to this is the fact that while Google is positioning Google+ as the central hub of their applications and services, at least for identity and profile management, it is not very good as a user-centric dashboard. As one of my friends pointed out, iGoogle was much better for that - but it's due for end-of-life later this year. It's a great pity - Google needs something that provides a single page with widgets for Gmail, Calendar, Contacts, Google+, etc. Ironically, I realised that's what my home screen on the Nexus 7 provides - it would be wonderful if Google could provide a web page that could run the same widgets as Android devices. How about it, Google?

The Ugly

Now we're down to cosmetics - the kind of thing that a bit of CSS fine-tuning could probably fix.

Google+ doesn't seem to fit as much information on the page as Facebook does. I say "seem" because, on close inspection, they both use the same font size for the main text of posts. Google+ puts its major app icons down the left column, while Facebook lists groups, apps and pages there; as you scroll, Facebook lets that column scroll away, leaving empty white space. Over on the right, Google+ lists more "stuff" you might like, while Facebook puts a scrolling "ticker" there, which is dense, with a smaller font and less white space.

Part of the reason for the less dense appearance of Google+ is its use of boxes around posts and grey shading. Facebook's all-white page is much cleaner looking. Google+ could really use a makeover from a good designer.

Summing Up

Overall, the impression one gets is that Google+ is "geekier" - it's stronger and more innovative on the back end server functionality. There are lots of configuration options, but Google annoys most first-time users by not setting appropriate defaults - there are far too many email notifications and the privacy settings probably aren't set high enough for most users, requiring a good half-hour or more of stumbling around, changing things by trial and error.

I believe that Google+ is going to grow and get better - as more and more users acquire Android devices or switch to using Google Apps and Gmail, they will be assimilated, and the functionality will be refined. But for now, it's still rough round the edges and a bit abrasive for the user switching over from Facebook.

Monday, April 1, 2013

Much Ado About DNS Amplification Attacks

There's been much wailing and gnashing of teeth from one or two people over DNS amplification attacks, following an over-hyped DDoS attack on Spamhaus using this technique. The attack relies on sending DNS requests with the source IP address spoofed to be the address of the victim, which is swamped by comparatively large reply datagrams. Here are two techniques to make sure that your systems can't be used by Bad Guys to conduct these attacks.
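To see why the amplification matters, here's a back-of-the-envelope calculation. The packet sizes are illustrative assumptions - a small query versus a large (e.g. DNSSEC-signed) response:

```python
query_bytes = 64        # a small DNS query with a spoofed source address
response_bytes = 3000   # a large response, e.g. a DNSSEC-signed ANY record set

amplification = response_bytes / query_bytes
print(f"amplification factor: {amplification:.0f}x")  # 47x

# With that leverage, modest upstream bandwidth becomes a huge flood downstream:
attacker_mbps = 100
victim_mbps = attacker_mbps * amplification
print(f"{attacker_mbps} Mbps of queries -> {victim_mbps:.0f} Mbps at the victim")
```

The attacker pays for the small queries; the open resolvers and the victim pay for the large responses.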

For years now, in my CISSP Fast Track Review Seminars, I've been advocating the use of reverse path filtering in routers and firewalls. In fact, it's an Internet Best Practice - see BCP 38 [1]. It's implemented in the Linux kernel and many distributions turn it on by default. On Red Hat Enterprise Linux, CentOS or Scientific Linux, for example, take a look at the /etc/sysctl.conf file, looking for the following lines near the top:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

In fact, that value of 1 already enables full (strict) reverse path filtering, and it solves the problem: before accepting a packet, the kernel essentially asks itself, "If I was sending a reply to the source address of this packet, would I send that reply back out the interface that I received this packet on?". If the answer is no, the packet is dropped. So, for example, if a packet with a source address on your internal network arrived on the external interface, it would be dropped. (Note that the "default" setting only applies to interfaces created after it takes effect; you may also want to set net.ipv4.conf.all.rp_filter = 1.)

If your distro does not use the sysctl.conf file, you can achieve the same effect with the following command in a startup script such as /etc/rc.d/rc.local:

echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

A value of 2 enables loose mode, which only checks that the source address is reachable via some interface, not necessarily the one the packet arrived on. This is the safer option for networks which use asymmetric routing (e.g. the combination of satellite downlinks with dial-up back-channels) or dynamic routing protocols such as OSPF or RIP, where strict filtering would drop legitimate traffic.

However, reverse path filtering really needs to be implemented by all ISPs, to stop datagrams with spoofed source addresses from getting anywhere on the Internet. For those of us who aren't ISPs but just operate our own networks, a better fix is to make sure that our DNS servers either do not support recursive lookups, or support them only for our own networks.

If your DNS server is intended only as a primary master or a slave for your own public zones, and will therefore be authoritative, then just edit the named.conf file to set the global options:

options {
     allow-query-cache { none; };
     recursion no;
};

However, if your DNS will provide recursive lookups for your internal machines, then restrict recursive lookups like this:

acl ournets {;; };

options {
        directory "/var/named/data";
        version "This isn't the DNS you're looking for";
        allow-query { ournets; };
        allow-transfer { ournets;;; };
        allow-recursion { ournets; };
};

(Replace the network addresses in the ournets acl with your own addresses, obviously.)

The allow-transfer directive restricts zone transfers; you would normally only allow slave DNS servers (e.g. those provided by your ISP) and perhaps a few addresses within your own network - I've allowed transfers from all addresses in ournets, so that the dig command can be used for diagnostics. The allow-recursion directive allows recursive lookups only from our own machines.

Finally, the allow-query directive means that only your own network(s) can even query this DNS - if you need to allow queries of your public zones, you can allow that in their specific options, later:

zone "" IN {
        type master;
        file "";
        allow-query { any; };
};

Should you choose to go even further, there are even patches for BIND which allow you to rate-limit responses, so that you can provide protection for your own addresses against DNS amplification attacks.

The bad news is that if you are running Windows, the only option you have is to completely disable recursion - the Windows DNS was originally based on really old BIND code and does not have most of these options.

Implement these two simple fixes, and you can be confident that your systems won't be part of the problem.


[1] BCP 38: Network Ingress Filtering - Defeating Denial of Service Attacks which employ IP Source Address Spoofing - available online at

[2] US CERT Alert TA13-088A, DNS Amplification Attacks. Available online at

Sunday, March 24, 2013

High Kernel CPU Usage - Grrr!

My poor old desktop machine, Sleipnir, is much abused and overloaded. It's maxed out, with 3 GB of RAM (the maximum usable under 32-bit XP), a 300 GB IDE C: drive, and 1.5 TB and 2 TB drives for use with my SageTV software for DVD images and recorded TV shows, respectively.

For a few weeks now, the poor old thing has been dragging her feet. Everything was slow; menus would take many seconds, even a minute, to appear, programs were slow to load, and once RAM was fully committed, any switching of programs that involved the swap file - and with Firefox's memory leaks, that usually didn't take long to occur - was painful.

I didn't think too much of it; it's well known that Windows machines degrade over time. I've always put it down to registry rot, coupled with Microsoft's unholy alliance with hardware manufacturers that gives them an incentive to drive users to replace their computers frequently.

But it got to be a major Pain In The Ass. My work was slowing down; Eclipse was dragging along and even simple edits were getting to be painful. Worse still, TV recordings were becoming corrupted. Sleipnir contains three TV tuners, and we rely on the SageTV software to automatically record TV shows so that we can watch them at a convenient time. Downstairs, our main TV has a Sage HD-300 extender which allows us to view recordings or live TV, and we count heavily on this to allow us to watch our favourite shows when our workload allows. In fact, the TV won't work without it as there is no external antenna and simple rabbits-ears don't get a usable signal in that location - my upstairs office has much better reception.

However, now both recorded and "live" TV was jittering, dropping out and downright corrupted. At the least, there were occasional ear-shattering chirps; at worst, shows were just unwatchable. The pressure was on to either replace the computer or get the problem fixed.

So I did a little hunting around. Sleipnir is so heavily loaded that I routinely run the Task Manager to keep an eye on it, and it was already obvious that the CPU Usage display was showing most of the time spent in the kernel. At the same time, the hard drive activity light was solidly on. Hmmm. Disk activity involving lots of CPU? That shouldn't be happening. (You have to imagine me stroking my chin thoughtfully at this point.) Usually, disk I/O is handled by the DMA controller, which transfers sectors (or more) directly from the disk controller buffers into main memory with no CPU intervention. The CPU hasn't been involved since the good old days of ...

PIO! Programmed I/O - where the processor itself enters a loop to transfer data, word by word (it used to be byte by byte, in the old days) from the disk controller into main memory.

Could it be? Opening Device Manager (from within the "My Computer" properties) and examining the "Primary IDE Channel" properties, "Advanced Settings" tab soon revealed that yes, indeed! - the "Current Transfer Mode" was set to "PIO" rather than the expected "Ultra DMA Mode 5". It turns out that if Windows experiences 6 or more CRC (Cyclic Redundancy Check) errors while reading a drive, it degrades the DMA mode setting, eventually stepping all the way down to Mode 0 and then reverting to PIO mode. This won't actually fix anything - the problem is with the disk drive, not the controller - but of course, the CPU now has to work hard during disk transfers, and it slows everything right down.
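As I understand the policy (the exact details are Microsoft's, and the mode list here is abbreviated), the behaviour amounts to a simple step-down state machine, which can be sketched like this:

```python
# Transfer modes from fastest to slowest; the last resort is PIO.
MODES = ["Ultra DMA Mode 5", "Ultra DMA Mode 4", "Ultra DMA Mode 3",
         "Ultra DMA Mode 2", "Ultra DMA Mode 1", "Ultra DMA Mode 0", "PIO"]

def demote(mode, crc_errors, threshold=6):
    """Step the channel down one mode for every `threshold` CRC errors,
    bottoming out at PIO."""
    steps = crc_errors // threshold
    index = min(MODES.index(mode) + steps, len(MODES) - 1)
    return MODES[index]

print(demote("Ultra DMA Mode 5", 5))    # Ultra DMA Mode 5 - under threshold
print(demote("Ultra DMA Mode 5", 6))    # Ultra DMA Mode 4
print(demote("Ultra DMA Mode 5", 42))   # PIO - and everything crawls
```

The key point is that the demotion is one-way: no amount of error-free operation steps the mode back up, which is why the driver uninstall/reinstall described below is needed.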

IDE Properties - if the "Current Transfer Mode" is "PIO" you're in trouble.
Simply setting the "Transfer Mode" to "DMA if available" won't reset things. Rather, you have to click on the "Driver" tab and - yes, this is correct - uninstall the driver. This is a considerable leap of faith, especially considering that this is Windows we're talking about here, people. In fact, you have to uninstall the driver on all IDE channels, and then reboot.

On rebooting, Windows will produce "New hardware discovered" messages and will reinstall the drivers. It did for me, and it should for you, too. If you haven't uninstalled the driver on all channels, then you'll probably find it's still running in PIO mode on the problematic channel. If you have uninstalled the driver on all channels, you might have to reboot yet again.

If this works for you, you should be back on the air with a decently-performing machine. It certainly worked, in my case. However, it's probably only a matter of time before the problem arises again - if there were six CRC errors on a drive, it may well be failing. In my case, I have a spare 320 GB IDE drive on the shelf - being a hardware hacker, I have spares for most things - and so I'll take care to back up anything vital and swap drives when I get time. All my work is stored on a server with RAID array and offsite backup or backed up to multiple machines and in the cloud anyway, and my iTunes library is also backed up to a pair of external hard drives rotated weekly to an off-site location. So I'm willing to sit and wait, in the interests of seeing how long it takes for the required six CRC errors to accumulate.

In the meantime, everything is so much snappier. The "All Programs" menu appears in less than a second rather than anything up to a minute, and I can watch TV while recording three programs simultaneously and running Eclipse, Thunderbird and Firefox.

Life is good again!