Tuesday, September 3, 2013

Another Paradigm Shift

When you've been in the software development business as long as I have, you get used to things changing continuously, but in retrospect, you can see the steps that represent paradigm shifts. When I learned to program, at school, in the early 1970's, we wrote programs with paper and pencil; the final version was written up on coding sheets which were then posted to the nearest university computer centre, over a hundred miles away. And if you weren't thoughtful and careful, the result that was delivered ten days later was a sheet of fan-fold paper which simply said, "SYNTAX ERROR ON LINE 2".

So, over the years, I've used punched cards, paper tape, mag tape (both 9-track and cassette, including audio cassette storage!), floppy disks, hard disks, flash and solid-state disks. I've been through traditional procedural languages like FORTRAN IV and Algol, multiple assemblers, object-oriented languages like Clascal, C++ and Java, logic languages like Prolog, functional languages like ML and OCaml, and peculiar specialized languages.

I've also seen user interfaces evolve. After the paper-and-pencil era, I got to use interactive systems via an ASR-33 teletype - i.e. a roll of paper - and early Tektronix storage-tube graphic displays. Then came serial terminals like the ADM-3a and VT-220, as well as block-mode mainframe terminals like the 3270, which could do basic form-filling. The major shift from shared minicomputers to single-user microcomputers enabled a switch to memory-mapped displays with crude monochrome graphics (remember the ETI-640, anyone?) and further improvements in graphics brought us GUI's with the Apple Lisa (yes, I had one), Macintosh and the X Window System. Oh, yes - and Windows (and OS/2 PM!).

The transition to graphics made it difficult to separate the computer from the display. The X Window System was designed to support graphical terminals attached to a minicomputer, but later systems like OS/2 Presentation Manager and Windows were designed with directly-accessible memory-mapped displays in mind. Extensions to Windows to support remote access, like Citrix MetaFrame, have had to rely heavily on very clever protocols like ICA to minimise their bandwidth requirements, and even X has a variant called LBX (Low Bandwidth X).

However, remote computing has been taking place via the Internet for more than two decades now, and the web has evolved its own mechanism for the remote display of text and graphics, in the form of HTTP and HTML. For much of that time, users have sat at comparatively powerful computers - these days, they're quad-core Intel CPU's with multiple GBytes of RAM - while performing relatively simple tasks on remote servers, such as posting messages, ordering products or sharing photographs.

But the balance between the server and the client has changed. These days, clients are more commonly phones or tablets; although these machines are more powerful than ever before, for reasons of cost and battery life most of that power must be devoted to the tasks that have to be performed locally, while everything else, especially compute-intensive work, is off-loaded to the server.

This has coincided with the development of even more powerful, yet cheaper, servers which support virtualization; add a layer of self-service, and the result is the cloud - an amorphous set of services made available on a pay-as-you-go basis to both enterprise and individual customers.

Mobility

In particular, the desire to have information available on a range of devices, especially mobile ones, is driving both types of customer to store their data in the cloud, rather than on the devices themselves. I used to store all my important files on a RAID-based file server in my office, and that was fine as long as I only wanted to share them between computers inside the office; but then I wanted to be able to access them while on the road, and that required me to set up and configure another layer of complexity in the form of a VPN, with all the client management that entails.

Now I want to access them from my tablet, my phone and my Chromebook, and that involves finding, installing and supporting VPN software for them, too. In the end, it's easier just to say, "I've had enough; I don't care if the NSA wants to read my files, and realistically, why would they bother? So I'm just going to stick my stuff in Google Drive, or Dropbox or Box or wherever".

But the limited power of mobile devices means that they can't do the heavy processing themselves - that has to be done in the cloud, too. This isn't a completely new idea - large-scale number-crunching has always been done on supercomputers, or even mainframes with specialized hardware like array processors. And, of course, database servers represent another category of specialized devices used by clients, as do print servers.

For my number-crunching needs, I've been using Sage (http://www.sagemath.org/), an open-source mathematics program that combines lots of other specialized maths programs (http://www.sagemath.org/links-components.html) into one - and for its user interface, Sage uses a web browser. It's extremely difficult to build Sage for Windows, so if you want to run it there, you download a complete appliance virtual machine which runs under VirtualBox, and then point your browser at that. However, I prefer not to load up a machine that's also being used for other things - especially my smaller machines. So, by installing Sage on a spare server - I start its notebook interface only when needed - I can keep my work files in a single location but access Sage from any of my machines (although using it from my phone would be a bit tedious).

New Architectural Paradigm

This may well be the computing model and user interface paradigm of choice for the next few years. It makes no sense for me to design any applications that I write from this point onwards to use a conventional GUI toolkit like Java's Swing or - not that I would - the native Windows classes. If I do that, they become inaccessible from my phone, tablet and Chromebook. Nor should I use the Android SDK - that gives me apps that will run on a phone or tablet, but won't run on a PC. I suspect that many enterprise developers are facing the same dilemma - demand for access from mobile devices is causing them portability problems.

The answer has to be a move to a thin-client approach to everything. The client is, inevitably, the web browser, which is now so capable that it is nothing like as "thin" as the thin clients of yesteryear. The form description and layout language is HTML 5; the client-side procedural language is JavaScript; the communication protocol is HTTP, with AJAX, WebSockets, XML and JSON layered on top.
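
As a sketch of what the client side of such an application looks like, here is a minimal AJAX round trip in plain browser JavaScript - no frameworks assumed. The /api/sum endpoint, the payload shape and the "output" element are illustrative inventions of mine, not part of any real application:

    // Send a JSON payload to a (hypothetical) server endpoint and display the reply.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/sum', true);                  // asynchronous request
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        var result = JSON.parse(xhr.responseText);       // e.g. {"sum": 6}
        document.getElementById('output').textContent = 'Sum: ' + result.sum;
      }
    };
    xhr.send(JSON.stringify({ values: [1, 2, 3] }));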

The back end language can be any of the traditional languages - C/C++, Java, Python, etc. A lot of my code is written in Java; adapting it to run as a servlet would not be particularly difficult. For new projects I also have some new(-ish) language options, such as PHP or JavaScript in the form of Node.js, which I would not previously have considered for conventional applications. And now I have the choice of running it on my own machines, or finding a cloudy home for it with Google, Amazon, Rackspace or any of a number of public or private cloud providers.
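
To make the back end concrete, here is a matching sketch in Node.js, using only its built-in http module; the route, port and JSON shape are the same illustrative assumptions as in the client sketch above:

    // Minimal JSON-over-HTTP back end: sums the numbers POSTed to /api/sum.
    var http = require('http');

    http.createServer(function (req, res) {
      if (req.method === 'POST' && req.url === '/api/sum') {
        var body = '';
        req.on('data', function (chunk) { body += chunk; });   // collect the request body
        req.on('end', function () {
          var values = JSON.parse(body).values || [];
          var sum = values.reduce(function (a, b) { return a + b; }, 0);
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({ sum: sum }));
        });
      } else {                                                 // anything else isn't ours
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

The same contract could just as easily be implemented by a Java servlet or a PHP script; since the interface is nothing but HTTP and JSON, the browser neither knows nor cares what sits behind it.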

For the User

I suspect we're going to see a change in the way people use computers, too. Right now, many people have both a desktop and laptop computer, and switch between them, sometimes transferring files via USB drive. Gone will be the large desktop PC; people will prefer laptops and tablets, but they may still need a fixed box - as I do - to handle DTV tuners, scanners, home theater, media services, etc. Such devices might even morph into a "home server" - a headless box that serves music, video, graphics files, etc. to the various TV's, and perhaps even terminates fixed-line phone services, should they survive that long. And yes, we can stream music and video from the cloud, but here in the Third World (where the distant promise of fiber-to-the-home seems likely to be snatched from our grasp) there just isn't the bandwidth for us all to do that all the time.

So, in a sense, we're back where I almost started - with minicomputers and mainframes (replaced by servers and the cloud) and users connecting to them via terminals (replaced by a variety of thin clients).

Plus ça change, plus c'est la même chose.
