Sunday, April 30, 2017

Another Chromebook Use Case

Recent restrictions on traveling with laptops have caused difficulties for business travelers.

My better half recently booked airline tickets to visit family in the UK, traveling with a codesharing combination of Qantas (Sydney - Dubai) and Emirates (Dubai - Birmingham). This is a much more convenient alternative to taking QF1 all the way to Heathrow and then organising land transport to the Midlands, but even QF1 still transits through Dubai, so would be subject to the same problem.
The Acer Chromebook 14, in Luxury Gold trim.

The Problem

Recently the US instituted a ban on passengers traveling from several Middle Eastern airports carrying electronic devices in their hand baggage. The ban applies to tablets (including some of the older, larger Kindles), laptop computers and other personal electronic devices, and apparently is based on received intelligence on bomb-making techniques.

This occurred a few weeks after my wife had bought her tickets, and we were initially unconcerned - until the UK followed suit, and specifically added Dubai to the list of ports concerned. Although this was a family visit, my wife needs to run her business while traveling, maintaining contact with clients and working on project reports and presentations. She had previously taken her Windows laptop for this purpose and so we initially considered how this could be done in the light of the new restrictions.

The most obvious alternative to hand luggage was to put the laptop into checked luggage. But there are problems with this approach.

Firstly, airlines (and aviation regulators) have specific rules for the carriage of dangerous goods, and lithium-ion batteries feature quite prominently on the dangerous goods list. For example, Australia's Civil Aviation Safety Authority provides quite detailed advice to passengers ("Travelling safely with batteries and portable power packs", available online) and is quite clear that spare batteries must be in carry-on baggage only, because of the risk of fire. No advice is provided in relation to batteries installed in devices, probably because of the expectation that passengers will carry expensive and fragile devices as carry-on baggage anyway.

However, a laptop packed into a suitcase - especially a zip-up lightweight suitcase - poses its own risks. First, there is the possibility of theft; I have personally had electronics stolen from a checked bag, presumably by a baggage handler. Secondly, there's the possibility of damage - suitcases are stacked up in containers for loading into the freight holds of large aircraft, and a lightweight suitcase at the bottom of a pile could be subject to considerable pressure and deformation. Finally, the natural inclination is to wrap the laptop in soft clothing to provide protection against the shock of dropping - but what if pressure on a power switch or deformation of the case causes the laptop to power up? It is quite likely to overheat since the clothing will block the air vents - and the clothing is also likely to be highly flammable.

For these reasons, we rapidly ruled out the idea of packing the laptop in a suitcase - and I hope everyone else does, too.

The airline eventually proposed a scheme in which passengers transiting Dubai could surrender their laptops for carriage in the hold, but this is unattractive too, since the laptop bag is the obvious place to store travel documents (e-ticket, passport, etc.) and in-flight requirements. Surrender the bag, and you lose access to those, or have to carry yet another bag for them; surrender the laptop without the bag, and it is unprotected. Both cases still leave an exposure to damage, loss or theft. Not a comfortable option, either.

The Solution

What to do, then? Fortunately, there is an easy alternative: order a Chromebook in advance of travel, for delivery to a UK address, and that is what we chose in the end.

I drew up a short list of requirements for the various alternative solutions to the problem:

  • Functionality. The device has to support essential business applications: email/calendar, word processing, spreadsheet and presentation graphics.
  • Low cost. If we acquired a device just for use on visits to the UK, it would only get used for a few weeks each year, so a high-cost device is not justified. This requirement extends to software licences as well.
  • Low maintenance. The machine would lie unused for three to six months at a time, and if the first task on arrival was to install updates and patches, requiring multiple reboots and lots of interaction (e.g. via the Help -> About menu option in Mozilla applications), that's time badly spent on a short trip - but if not done, security exposures would result.
  • Security. If the device is stolen, lost, lent to a third party, etc. there should be no exposure of sensitive data on the device and no threat to system integrity.
  • No interruption to work, and no work lost. Locally-stored files, e.g. on the hard drive of a Windows laptop, could accidentally be left behind, requiring work to be done all over again.
  • Simplicity. We wanted to avoid complicated schemes of copying files to and from USB keys or compact flash. This poses too much risk of an old file over-writing a newer version.

Fortunately, the use of a Chromebook meets these requirements perfectly. Since my wife's business uses Google GSuite (formerly Google Apps), she is already familiar with some of its components and uses them, particularly for collaborative projects. So we knew the functionality requirement was met. We already have another Chromebook and a Chromebox, so the device is familiar, too.

The Chromebook meets the low maintenance requirement quite easily, as there's very little on the device itself to be updated, and that is taken care of with a few minutes downloading and a ten-second (at most!) reboot. All applications are cloud-based and continually updated.

Security, simplicity and the requirement for no work to be lost are dealt with by the fact that the Chromebook and GSuite are cloud-based. All she had to do was transfer more of her work to GSuite in the weeks leading up to the trip, and all her work documents were available immediately upon initial login. Similarly, she can leave the Chromebook behind and upon arrival, immediately resume work. Everything is stored in the cloud; nothing is stored on the machine. And because we use two-factor authentication with security keys, there's no real possibility of someone using the machine to gain access to her data. For the same reasons, the family member charged with storing the device is relieved of a lot of responsibility.

Finally, cost: the Acer Chromebook 14 is only GBP199.00. That is sufficiently inexpensive that the low utilisation is not a problem - it's a reasonable price to pay to solve the travel problem.

The Pudding

The proof of the pudding is in the eating, as they say. The trip is almost over, and my wife reports that the Chromebook worked well. Even as a non-technical user, she was able to get it unpacked, set up and working with minimal effort, and she has used it for ten days to complete a variety of work tasks. Not having to worry about taking a laptop was a load off her mind, and not having a laptop case to carry was a load off her shoulders.

The Chromebook is now permanently stationed in the UK for use on future trips, and travel - especially via Dubai - will be a lot easier. The whole exercise has proved yet another use case for the Chromebook, and it has turned out to be a useful addition to our business technology toolbox.

Sunday, April 2, 2017

An Infosec View of Privacy

Information security professionals, and especially cryptographers, tend to think in terms of preserving the security properties associated with information assets, and CISSPs in particular tend to start with the CIA Triad. Clearly, privacy relates in some way to the first member of that triad - confidentiality - but the relationship is not immediately clear. For example, we often use secrecy as a synonym for confidentiality, but privacy is something different.

The difference is centered on agency or control, and in particular the relationship between the subject of the information and the information custodian.

The vast bulk of enterprise information - whether it be private enterprise, or public - is internally-generated, and the subject is, ultimately, the enterprise itself. For example, an ERP system revolves around accounting data (GL, A/R, A/P, etc.) and the ledgers therein describe the enterprise's financial state and history of transactions (as well as future revenue, of course). A CRM system may contain information about customers, but the bulk of that information relates to the enterprise's transactional history with the customer - sales calls, orders placed, etc.

In such cases, the enterprise is custodian of its own information - it is both subject and custodian. There is no conflict of interest - as custodian, the enterprise is never going to breach the confidentiality of its own information, and indeed will implement controls - policies, identity and access management, security models - to ensure that its employees and agents cannot. The enterprise, as the subject, has authority over the custodians and users of the information.

However, a conflict of interest arises when an enterprise is custodian of information about identified (or identifiable) individuals. For example, a medical practice maintains health records about patients; it is the custodian, while the patients are the subjects.

The patient records obviously have value for advertising and marketing purposes, in addition to the intended purpose of patient diagnosis and treatment. For example, a company selling stand-up desks or ergonomic chairs would see considerable value in a list of patients who have complained of chronic back pain, while over-the-counter pharmaceuticals marketers might want to sell directly to patients whose test results indicate pre-diabetes, early indications of hypertension or any of a range of conditions. And an unscrupulous marketer might approach an unscrupulous medical practice manager, resulting in patients being subjected to sales calls for products they do not necessarily want or - worse still - their medical histories or problems being leaked to other interested parties such as family members or employers.

There is a clear conflict of interest here. The subject of the data is not the custodian, and in fact, has no authority over the custodian. It is in the custodian's interest to on-sell the subject's data to anyone and everyone who is willing to pay for it. And while the example of a medical practice involves only a small business, many enterprises are much, much larger and employ many lawyers, resulting in a power imbalance between the enterprise and the affected individual.

This is why governments, acting on behalf of civil society and the individual, enact privacy legislation - the legislation gives the individual some degree of authority over enterprises and restores the balance of power.

Note that many information security controls are able to preserve confidentiality, but not privacy. Personal information is stored in databases and document management systems which are ultimately under the control of an information asset owner, whose users are free to access the information for a range of purposes; if one of them decides to extract data, copy it to a USB key and sell it externally, the first two steps are probably authorised, while the third cannot be detected, let alone prevented.

Hence the need for a privacy policy and strong privacy education and awareness within the enterprise. In the end, privacy comes down to personal ethics and compliance with the law. It is really a matter of trust in the integrity of those who have access to personal information - and the threat of legal action provides a degree of assurance in that integrity.

Notice that, in this model, the distinction between confidentiality and privacy can be extended beyond individual persons to companies or other entities. For example, the Chinese Wall model is another situation in which information about one entity is in the custody of another (e.g. information about clients held by a consulting firm would obviously be of great interest to other clients who are competitors). In that sense, then, the Chinese Wall model is intended to preserve privacy rather than confidentiality.

Finally, consider personal information in the custody of the person themselves. The subject and the custodian are the same individual - there is no conflict of interest, privacy laws do not apply, and the issue here is confidentiality, not privacy.

The distinction between confidentiality and privacy, then, is whether the subject of the information has authority over the custodian - if the subject does, it's a matter of confidentiality, but if not, it's a matter of privacy.

Of course, there are other common conceptions of privacy, as well as legal views relating to photography, etc. but these are not considered here.

Sunday, July 17, 2016

A Little Learning - Electronic Voting and the Software Profession

Over the last week, I've had occasion to ponder the expression that 'a little learning is a dangerous thing'.

Quite spontaneously, a number of people have posted on Facebook in opposition to the idea of electronic voting. One linked to a YouTube video which clearly demonstrated the problems with some of the voting machines used in the US. Others posted a link to a Guardian opinion piece that labeled electronic voting a "monumentally fatuous idea".

The YouTube video - actually, a full-length documentary from 2006 entitled "Hacking Democracy" - showcased a number of well-known problems with the voting machines used in the US. Touch-screen misalignment causing the wrong candidate to be selected, possible tampering with the contents of memory cards, problems with mark-sense readers, allegations of corruption in the awarding of contracts for voting machines - these have been known for some years.

The Guardian article fell back on claims that "we could not guarantee that it was built correctly, even with source code audits. . . Until humans get better at building software (check back in 50 years) . . . we should leave e-voting where it needs to be: on the trash-heap of bad ideas."

However, by exactly the same argument, online banking is also a "monumentally fatuous" idea, along with chip & PIN credit card transactions, online shopping, fly-by-wire airliners, etc.

Whenever I've suggested that, actually, electronic voting is not such an outlandish idea, I've met with vehement opposition, usually supported by an appeal to authority along the lines of "I used to be a sysadmin, and let me tell you . . ." or "I'm a software engineer and it's impossible to write foolproof software. . ."

There's an old saying among cryptographers, that every amateur cryptographer is capable of inventing a security system that he, himself, is unable to break. In this case, the author of the Guardian piece demonstrates the converse: because he, himself, is unable to devise a system that he cannot break, he is certain that the professionals cannot, either.

I have to say, that's a novel form of arrogance; to be so sure that because electronic voting systems are beyond your own modest capacity, nobody else can do it, either.

Yes, we've all seen software projects that were near-disasters, we've known programmers whose code had to be thrown away and rewritten, and we've seen poor practices like the storage of passwords and credit-card numbers in unencrypted form. I watch students today 'throw' code into the Eclipse IDE then run it through the debugger to figure out what the code they've just written actually does, and then I think back to my days of careful, paper-and-pencil bench-checking of Algol code, and I cringe! But not every system is designed and implemented that way.

It's true that electronic voting is a difficult problem, but it's one that some very fine minds in the cryptographic and security communities have been working on for several decades now. The underlying cause of this difficulty is that a voting protocol must simultaneously preserve what, at first sight, are a number of conflicting security properties.

Most security people are familiar with the basic security properties we work to preserve in the information and systems in our care: confidentiality (also known as secrecy), integrity (correctness) and availability - the so-called "CIA triad". But there are many more, and some of them, at first glance, contradict each other.

For example, anonymity is important in a democracy; no-one should be able to find out how you voted, so that nobody need fear any form of reprisal. For an electronic voting system, this means that the vote cannot be associated with the voter's identity, whether represented as a login ID, an IP address, a public key or some other identifying information. But democracy also requires that only those who are entitled to vote be able to do so - in Australia that means registering on the electoral roll and identifying yourself at the polling place. This property is called eligibility.

At first glance, these requirements are mutually exclusive and contradictory - how can you identify yourself to claim eligibility, while at the same time remaining anonymous? In the physical world, the time and space between your identification, the booth where you mark your ballot paper, and the random placement of the ballot paper in a box all serve to provide anonymity. Fortunately there are cryptographic techniques that effectively do the same thing.

An example is the use of blind signatures, which were originally invented by David Chaum in 1983 [1], although I'll use a later scheme, made possible by the RSA public-key system's property of homomorphism under multiplication. There are three parties: Henry, who holds a document (e.g. a vote); Sally (our electoral registrar), whom Henry wants to sign the document without knowing what it is that she is signing; and Victor (the verifier and vote tabulator), who needs to verify that the document was signed by Sally.

I don't want to delve into the mathematics too deeply, and am constrained by the limited ability of this blogging platform to express mathematics - hence '=' in what follows should be read as 'is congruent to' in modular arithmetic, rather than the more common 'equals'. Henry obtains Sally's public key, which comprises a modulus, N, and a key, e. Henry then chooses an integer blinding factor, r, and uses this with the key e and the modulus N, along with his vote message, m, to calculate a ciphertext

c = m x r^e (mod N)

and sends this to Sally along with any required proof of his identity. If Sally determines that Henry is eligible to vote, she signs his encrypted vote by raising it to the power of her private key, d:

s' = c^d (mod N) = (m x r^e)^d (mod N) = r x m^d (mod N) (since r^(ed) = r (mod N), because ed = 1 (mod phi(N)))

and returns this to Henry. Henry now removes the blinding factor to get the correct signature:

s = s' x r^-1 (mod N) = r x m^d x r^-1 (mod N) = m^d (mod N)

Henry can now send his vote message, m, off to Victor, along with the signature, s. Victor uses Sally's public key, e, to check whether

s^e = m (mod N)

If the two are equal (congruent, really) then the signature is correct, and Victor will count the vote. Notice that Henry does not identify himself to Victor, and Victor does not know who he is; Victor is willing to accept that Henry is an eligible voter because Sally has signed his vote message. And, very importantly, note that Sally does not know how Henry voted - she signed the blinded version of his vote message.
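The whole exchange can be sketched in a few lines of Python. The RSA parameters below are toy values chosen purely so the arithmetic is easy to follow - real keys are thousands of bits long:

```python
import math

# Toy RSA parameters -- far too small for real security.
p, q = 61, 53
N = p * q                        # Sally's modulus: 3233
phi = (p - 1) * (q - 1)          # phi(N) = 3120
e = 17                           # Sally's public key
d = pow(e, -1, phi)              # Sally's private key: e*d = 1 (mod phi(N))

m = 42                           # Henry's vote message, encoded as an integer
r = 99                           # Henry's random blinding factor
assert math.gcd(r, N) == 1       # r must be invertible mod N

# Henry blinds his vote and sends c to Sally, with proof of his identity.
c = (m * pow(r, e, N)) % N

# Sally signs the blinded vote without learning m.
s_blind = pow(c, d, N)           # = r * m^d (mod N)

# Henry removes the blinding factor to recover the true signature.
s = (s_blind * pow(r, -1, N)) % N    # = m^d (mod N)

# Victor verifies using Sally's PUBLIC key only: s^e must equal m (mod N).
assert pow(s, e, N) == m
print("Vote accepted")
```

Note that the unblinded signature s is exactly what Sally would have produced had she signed m directly, yet she never saw m.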

If all that mathematics was a bit much, consider this simple analogy: if I want you to sign something for me without seeing what it is, I can fold up the paper I want signed in such a way that the signature space is on top when the paper is placed in an envelope. I then insert a piece of carbon paper - the paper that's used in receipt books to make a second copy of a written receipt, and that used to be used to make copies of typewritten documents - on top of the paper but inside the envelope. I then ask you to sign on the outside of the envelope, and the pressure of your pen will imprint your signature, via the carbon paper, onto the unseen paper inside the envelope. Voila! You have blind-signed a document, which I can extract from the envelope once I am alone.

My point is that cryptographers have, for many years, known about, and had solutions for, the superficially contradictory requirements of eligibility and anonymity.

In practice, voting protocols have many more requirements:
  • The voter must be registered or authorized to vote (eligibility)
  • The voter must vote once only (uniqueness)
  • The vote must be recorded confidentially (privacy) and cannot be associated with the voter's identity (anonymity)
  • The vote must be recorded correctly (accuracy)
  • The voter must be able to confirm that his vote was recorded correctly (verifiability)
  • The voter must not be able to sell his vote
  • No-one must be able to duplicate votes (unreusability)
  • The vote must be unalterable after recording (integrity)
  • The voter must not be vulnerable to coercion (uncoercibility)
  • The votes must not be revealed or tallied until the end of the election (fairness)
  • The electoral authority - or everyone - must know who voted and who did not
But we still have more tools at our disposal, such as zero knowledge proofs, digests, signatures and, of course, conventional symmetric crypto which we can use to build more sophisticated protocols which do satisfy these security requirements. Some very fine minds indeed have been working on this for decades now, although obviously their work is not well known outside a relatively small community of cryptographers with an interest in voting.

When the NSW Electoral Commission were working on their iVote system, they asked several universities to get involved. They supplied the design documentation for iVote, including references to the various cryptographic protocols used (e.g. Schnorr's Protocol - a zero-knowledge proof based on the discrete logarithm problem). A team from UNSW and I independently wrote programs which took over 250,000 votes and validated them, in a ceremony attended by scrutineers from the various political parties.
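For a flavour of the zero-knowledge proofs mentioned above, here is a toy sketch of Schnorr's identification protocol in Python. The group parameters and secret below are tiny, made-up values for illustration only, and bear no relation to the actual iVote implementation:

```python
import secrets

# Toy parameters: g = 2 generates a subgroup of prime order q = 11 in Z_23.
# Real deployments use groups of ~256-bit prime order.
p, q, g = 23, 11, 2

x = 7                           # the prover's secret
y = pow(g, x, p)                # the prover's public key

# Step 1: the prover commits to a random nonce k.
k = secrets.randbelow(q - 1) + 1
t = pow(g, k, p)                # commitment sent to the verifier

# Step 2: the verifier issues a random challenge c.
c = secrets.randbelow(q)

# Step 3: the prover responds with s = k + c*x (mod q).
s = (k + c * x) % q

# The verifier accepts iff g^s = t * y^c (mod p); this holds exactly when
# the prover knows x, yet the transcript reveals nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof accepted")
```

The check works because g^s = g^(k + cx) = g^k x (g^x)^c = t x y^c (mod p): the prover demonstrates knowledge of x without ever transmitting it.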

The iVote system is quite sophisticated - it's an enhancement of what's called a "split central tabulating facility" design, with a third, separate verification server which allows voters to verify their votes via a phone interface, while the counting system itself is airgapped from everything else. The process I described above compared the votes from the counting system with the encrypted votes in the verification server. The voter can additionally obtain verification that their vote was decrypted correctly via the web interface.

It's true that researchers did find a vulnerability in the SSL/TLS library on one server in the system. I'm not familiar with that part of the system, but I'm pretty sure that even if that vulnerability were exploited in a man-in-the-middle attack, the attacker would not be able to manipulate the vote in that session, as the votes are encrypted in the application protocol and do not rely on the transport-layer encryption of TLS. However, the browser supports TLS anyway, so it's a good idea to use it as one more layer of defence in depth - and an alert user would expect to see that padlock in the URL bar in any case, so it would be a mistake not to use it.

Prior to getting involved with the iVote project, I'd been somewhat cynical about the prospects for electronic voting. Although I knew something of the underlying principles, I felt it was still in the too hard basket. And reports of the problems with US voting machines didn't help, although a little investigation will reveal they are almost embarrassingly primitive and bear no resemblance to the protocols I am discussing here. But they still contribute to popular fear of electronic voting systems and pollute the overall climate. Watch the video above and you'll see what I mean.

However, I discovered that work on voting protocols was more advanced than I had thought, and the implementation of the iVote system was considerably more sophisticated than most people would imagine.

But what has surprised me is the negative attitude of software professionals. The (ISC)2 Code of Ethics [2] requires me, as a CISSP, to "Protect society, the commonwealth, and the infrastructure" and as part of that effort, to "Promote and preserve public trust and confidence in information and systems". And yes, it also requires me to "Discourage unsafe practice". At this point, I cannot say that the introduction of electronic voting is an unsafe practice - over 250,000 people used it in our last State election - but it's hard to promote public trust and confidence when software professionals who ought to know better are so active in promoting distrust. There's a huge gap between the quality of code produced by a junior programmer working on a mobile app with time-to-market pressures and the careful design of a voting system based on decades of cryptographic research and development, and the fact that the former may be unreliable is not an indication that the latter is inevitably insecure.

Only a fool would guarantee that an electronic voting system is 100% secure and error-free - but then, only a fool would guarantee that paper-based voting systems are 100% secure and error-free. However, humans are better at writing software than the Guardian author suggests, and electronic voting systems are in use today, will increase in popularity and are very unlikely to result in the downfall of democracy.


[1] Chaum, David. “Blind Signatures for Untraceable Payments.” In Advances in Cryptology, edited by David Chaum, Ronald L. Rivest, and Alan T. Sherman, 199–203. Springer US, 1983.

[2] (ISC)2 Code of Ethics - The Pursuit of Integrity, Honor and Trust in Information Security, International Information Systems Security Certification Consortium, 2010.

Sunday, July 10, 2016


This morning brought yet another story of identity theft (ID Theft in three steps: 'Adequate' Telstra and telco identity checks questioned) by the all-too-easy technique of finding an individual's name, home address and date of birth. These are all things that are on the public record and quite easy to discover; they should not be used as authenticators. The US has a related problem with Social Security numbers, which should likewise be regarded as identifiers, not authenticators.

One of the most important steps in identity theft is getting access to the victim's email account. This is because the attacker does not know the password for the victim's bank, PayPal, eBay or other accounts, but the most common technique for password reset is to email a link to the user. In other words, the email account is used as yet another authenticator in a chain of trust. This means that email account mismanagement poses an extreme risk for users - the account is both highly likely to be targeted, and its compromise leads to severe consequences.

The weak link in all of this is the use of passwords; human beings being human and of bounded rationality, we tend to choose bad passwords that are easily guessable or discoverable. Even worse, we reuse passwords - confronted with the need to memorise the passwords for 20 or 30 different accounts, we understandably fail. Not everybody knows about password safes, for example. Inevitably, however, one of the sites we use gets compromised, the password hashes are stolen and posted on pastebin, and within minutes, a Rainbow Tables attack (or even quite mundane dictionary attack) reveals that favourite password, along with the email address, to the Bad Guy. And if the same password was used for the email account, it's Game Over.
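The mechanics of such an attack are straightforward - here is a minimal sketch in Python, with a made-up wordlist and an illustrative "leaked" hash (real attacks use wordlists of millions of entries and precomputed tables):

```python
import hashlib

# An illustrative "leaked" hash -- in a real breach this would come from a
# stolen, unsalted password database posted online.
leaked_hash = hashlib.sha256(b"letmein123").hexdigest()

# A made-up wordlist; real dictionaries contain millions of common passwords.
wordlist = ["password", "123456", "qwerty", "letmein123", "dragon"]

# The attack is just a loop: hash each guess and compare.
for guess in wordlist:
    if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
        print("Cracked:", guess)   # → Cracked: letmein123
        break
```

Salting and slow hash functions raise the cost of this loop enormously, but if the recovered password is reused on the email account, no amount of server-side hashing at the breached site helps.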

I long ago determined that my email account had to use a unique and painfully complex passphrase, since the security of so many other assets depended upon it. It might be a pain in the ass to pause for a while as I hunt and peck my way through a long near-random string, but it's the price I pay for peace of mind.

And a few years ago, I decided that it was time to move to two-factor authentication for my email account. I'd had enough of running my own email servers and moved my email, etc. to Google Apps for Work, and Google was starting to offer two-factor authentication. I'm not a starry-eyed believer in 2FA - in many cases, there are better alternatives and a more sophisticated risk-based approach to the problem should reveal them - but to rely purely on passwords for such an important application is inviting disaster.

Enabling two-factor authentication is a fairly straightforward process, and Google nicely provides several alternative approaches. The most obvious is to rely on a mobile phone number and send an SMS (or voice message) to it - what banking systems call a mobile Transaction Authentication Number (mTAN). But that won't always work for me - some places I work are screened so that a mobile phone signal can't reach them. One nice alternative is a Security Key - a small device that plugs into a USB port and provides cryptographic authentication; this will work with any software that supports the FIDO (Fast Identity Online) U2F (Universal 2nd Factor) specification, such as Chrome (but not Firefox, unfortunately). And there's the Google Authenticator, a smartphone and tablet app (technically, a soft token) which generates a unique six-digit number every 30 seconds. Any of these can be combined with the passphrase to allow a successful login.

A Yubikey Neo - a "security key" which also supports NFC (near-field communications)
This will overcome what I suspect is the biggest obstacle for many people contemplating the switch to two-factor authentication for Gmail, etc. - the fear that they will be locked out of their account if they lose their phone or another device or simply don't have it to hand. With a little extra setup, there's always an alternative available.

Setting up two-factor authentication is done by clicking on your avatar at the top right of a Gmail or other Google page, then choosing "My Account"; under "Sign-in & security", select "Signing in to Google", and on that page, under "Password & sign-in method", click on "2-step Verification". Unsurprisingly, you'll be prompted for your login ID and password before getting access to this page.

Turning on 2-step verification will start by sending a verification code to your recovery phone number. If you receive this successfully then 2-step verification will be turned on. At this point, any other sessions and devices you may have logged in to this Google account will be effectively logged out, and you will need to log in again, using 2-step verification. However, you may want to delay that until you have set up your security key or authenticator app.

Security keys, such as the Yubikey, are inexpensive and widely available - I got mine from Amazon. Adding the security key to a Google account is very easy - on the 2-step Verification page, just click on "ADD SECURITY KEY" and you will be prompted to insert your key and tap on its button if required. A second or two later, it will have been recognised and added to your account.

Setting up Google Authenticator is almost as easy. The 2-step Verification page prompts you to install the app (versions are available for both iPhone and Android) and then tap "Set up account". This will allow you to scan a QR code, which delivers the shared secret that the app uses to generate codes. And that's it - from this time on, opening the Authenticator app displays a six-digit number, changing every 30 seconds, which you can use to authenticate to Google.
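Under the hood, the app implements the Time-based One-Time Password (TOTP) algorithm of RFC 6238, which combines that shared secret with the current time via HMAC-SHA1. A minimal sketch in Python - the secret used at the bottom is the standard RFC test-vector secret, not a real one:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, period=30, digits=6):
    """Compute a TOTP code (RFC 6238) from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    t = int(time.time()) if for_time is None else int(for_time)
    counter = t // period                            # the moving factor
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))   # → 287082
```

Because both sides derive the code from the same secret and the same clock, no network connection is needed on the phone - which is exactly why the app keeps working in screened rooms where SMS codes cannot be delivered.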

Google Authenticator
Google selects the second factor in a priority order - first (default) choice for me is the security key, then comes the Google Authenticator app, and only if those aren't available does it fall back on SMS authentication codes.

Logging In

As mentioned above, setting up 2-step verification effectively logs you out on other devices and sessions, so you'll need to log in again. Your phone may pop up a dialog requesting an authentication code, especially if you have the Google Device Policy app installed for remote management. In my case, I tapped the button requesting Google to send an authentication code, and a second later the SMS arrived - but after I'd viewed the SMS and written down the code, I couldn't get back to the authorization dialog. Since the phone is still logged in and hasn't skipped a beat, I can only presume that the Device Policy app was able to directly read the SMS and didn't need me to manually enter the code.

When I booted my Chromebook and logged in, it initially accepted just the password but then gave me a notification message that said I should log out and in again. This time, it prompted me to insert my security key and tap its button, after which I was logged in.

Chrome on Windows can be a bit confusing, because it requires a log in to the browser itself, independently of any web sites you may log in to (this is because Chrome syncs your bookmarks and other data to your Google account). So you may get a couple of prompts to authenticate when you first launch Chrome and log in to a Google page such as Gmail.

Firefox does not yet support security keys - there's a Bugzilla page where you can track development progress. However, you can use Google Authenticator to log in to Google, as shown:

Thunderbird shares code with Firefox and also does not support security keys. In fact, if you've been relying on "Normal password" authentication for IMAP access to your Gmail account, it's now time to change. Right-click on your account in Thunderbird and choose "Settings", then click on "Server Settings" and change the "Authentication method" to OAuth2. Click on "OK", and then click on "Get messages" to force authentication. You'll get a password prompt and then a dialog like the one above, allowing you to authenticate using Google Authenticator or an SMS code. Once that's done, Thunderbird will be happy for the foreseeable future. If you're using the Provider for Google Calendar plugin, that should also prompt for re-authentication.

Third-party Sites

One of the beauties of adopting this technology is that it can eliminate the need for passwords for other web sites and applications. The OAuth2 protocol referred to above allows Google to function as an identity provider for other relying web sites - part of an underlying technology called federated identity management. I use this as part of my own systems - for example, I no longer use a password when logging in to my own web site, but instead have configured the content management system (in this case, Moodle) to use OAuth2 to get Google to confirm my identity. Not only can the site be more confident this really is me - I've used two-factor authentication to prove it - but I don't have to type in a password.
My own web site now supports login with Google credentials
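The first step of that OAuth2 dance is simply a redirect to Google's authorization endpoint. A hypothetical sketch of the URL a relying site constructs - the client ID and redirect URI here are placeholders, not real credentials:

```javascript
// Sketch of an OAuth2 authorization request; client_id and
// redirect_uri are placeholders for illustration only.
const params = new URLSearchParams({
  client_id: "example-client-id.apps.googleusercontent.com",
  redirect_uri: "https://example.com/oauth2/callback",
  response_type: "code",  // ask for a one-time authorization code
  scope: "openid email",  // request identity, not broad API access
  state: "random-anti-csrf-token"
});
const authUrl = "https://accounts.google.com/o/oauth2/v2/auth?" + params.toString();
// Google authenticates the user (including 2-step verification, if
// enabled), then redirects back to the site with the code, which the
// site exchanges server-side for tokens confirming the user's identity.
console.log(authUrl);
```

The relying site never sees your Google password - it only learns that Google vouched for you, which is what makes this safer than yet another stored password.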

Many other sites also use two-factor authentication or federated identity management; for example, Dropbox now supports security keys, as do GitHub, Salesforce and many other sites. You can set up Facebook to use Google Authenticator by going to your settings, then choosing "Security", then clicking on "Code Generator", clicking on "Set up another way to get security codes" and scanning the resultant QR code. The Facebook app on your phone can also be used as a generator.

You can log in to The Guardian, O'Reilly Media and many other sites using federated identity management with Google, Facebook, LinkedIn or Twitter acting as identity providers. Of course, social media networks are keen to have you use them as identity providers since then they can track you across other sites, although they have other ways of doing that anyway. However, as a Google Apps user I am a Google customer, rather than a consumer of free services, so I prefer to use only my Google ID for authentication.


Email accounts are too valuable to protect with only a password, especially a weak or reused password. Two-factor authentication provides better security and lowers the risk of identity theft. It takes only a few minutes to set up and is rarely inconvenient thereafter. Use it.

Wednesday, August 12, 2015

How To Bury Your Message - And Get Shot in the Process

Oracle hit the news today for all the wrong reasons. The company's Chief Security Officer, Mary Ann Davidson, wrote a blog post on the topic of "security researchers" who are hired by enterprises - in this case, Oracle's own customers - to go hunting for vulnerabilities in their systems. Because those systems are very often running on Oracle's platforms and products - not just the database, but all the stuff Oracle acquired from Sun, like Solaris and Java - the hired bug-hunters turn their attention to those products, using disassemblers to reverse engineer the code and automated tools to scan for vulnerabilities.

Unfortunately, Davidson chose to bang the drum a little too hard on the reverse-engineering aspect, predictably triggering a response from some "security researchers". Which, in turn, led Oracle to delete the blog post. However, nothing ever dies on the Internet, and so the post can still be read via the Internet Wayback Machine.

End-user licence agreements which prohibit tracing, debugging and other reverse engineering techniques do pose something of a problem in the security world. Yes, we ought to honour them, especially since we accepted those terms and conditions. No, we shouldn't use these kinds of techniques to snoop around our suppliers' intellectual property. But the bad guys couldn't give a rat's patootie about these moral quibbles - they're hacking away for fun and profit, and if there's a vulnerability to be found, so much the better.

However, "security researchers" ought to know better. A junior "security researcher" armed with some automated tools for vulnerability is the white hat equivalent of a script kiddy - he knows just enough to be dangerous. This is one of the two points Davidson really ought to have emphasized - hiring this kind of person just encourages them. And here comes the second point: it's counterproductive.

Hiring "security researchers" of this type in an attempt to secure systems is the kind of futile endeavour that the much-misunderstood Kind Canute would have railed against. How many vulnerabilities are there in a complex business system? A handful? A few dozen? Hundreds? You don't know. You'll never know.

So you hire a "security researcher" to hammer on the code with his automated tools and Bingo! He finds one. So what? Out of the unknown n vulnerabilities in that subsystem, you've found one. Which leaves an unknown n-1 vulnerabilities. What are you going to do about them?

The answer is, you're going to deploy a variety of other controls in a defense-in-depth strategy to prevent entire classes of exploits. You'll use a DMZ configuration, an application-layer firewall, an IPS (Intrusion Prevention System) and a whole bunch of other things to make your systems and the business processes they support as resilient as possible. You have to do that.

Searching for, reporting, and patching vulnerabilities one by one is an inefficient strategy which ignores the fundamental asymmetry of the information security business:
The bad guys only have to be lucky once. The defenders have to be lucky every time.

Trying to do exactly the same thing - find vulnerabilities - faster than all the hackers out there is not a sensible strategy, unless and until you've done everything else you can to make your systems and business processes resilient. Otherwise, your resources - time and money - can be better employed elsewhere.

As Davidson explains, Oracle and other software developers already use static source code analysis tools to scan the original source code for vulnerabilities (among other errors). There's not much point in doing it all over again. There's a bit more point to performing dynamic testing, against complete systems - that's much more likely to turn up architectural issues and configuration problems, and - as Davidson unfortunately chose to over-emphasise - it doesn't violate the EULA.

So why do it? Because software companies pay bounties; the vigilante bug-hunter is in it for the money, and a little fame might be nice, too. But if you're going to play that game, do it properly - if you're going to hunt for vulnerabilities, follow through - document it and develop a proof-of-concept that shows it really is a vulnerability. Don't just submit the output of an automated scan and sit back with hand outstretched. If you do the whole job, properly, then you actually will be a Security Researcher.

(And I hope Mary Ann Davidson enjoys her next job, where the Marketing Department will hopefully put an approval process in place for her blog articles.)

Sunday, November 2, 2014

Sending Confirmation Emails from Google Apps Forms

For a recent project, I had to create a form which allows invited respondents to register for a survey. Since we already use Google Apps, this seemed like an ideal opportunity to make use of Google Forms - and Google Sites, to provide a wrapper for the form, with supporting text and an easy-to-type URL.

And so it proved; creating a simple form was easy and within five minutes I was capturing test registrations into a Google Sheets spreadsheet. Another five minutes and it was embedded into a Google Site, and I could turn it over to the intended user.

But I can never resist fixing things that aren't broken, and it seemed like a good idea to add automatic sending of a confirmation email. Sure enough, there are plenty of recipes for doing this on various blogs, and after a few minutes more, that was working, too.

But wait a minute! What if the respondent gets the email and realises they've entered the wrong data (excluding their email address, of course - if that was entered wrongly, they won't even get the confirmation email). Can I add a link that will allow the respondent to edit their own form submission?

A quick bit of Googling later, and it appears that this is a problem that has stumped a lot of people and given rise to some ugly hacks. When that happens to me, it usually means that I've missed the point and am trying a totally wrong approach. So I set to reading the Google Apps Script documentation and looking for the classes and functions that might do what I needed. And, to cut a long story short, I found them.

Here's the resultant code:

function setup() {

  /* First, delete all previous triggers */
  var triggers = ScriptApp.getProjectTriggers();

  for (var i in triggers) {
    ScriptApp.deleteTrigger(triggers[i]);
  }

  /* Then add a trigger to send an email on form submit */
  ScriptApp.newTrigger("sendConfirmationEmail")
    .forForm(FormApp.getActiveForm())
    .onFormSubmit()
    .create();
}

function sendConfirmationEmail(e) {
  // e is a Form Event object

  // Edit this to set the subject line for the sent email
  var subject = "Registration Successful";

  // This will show up as the sender's name
  var sendername = "Your Name Goes Here";

  // This is the body of the registration confirmation message
  var message = "Thank you for registering.<br>We will be in touch.<br><br>";
  message += "Your form responses were:<br><br>";

  // response is a FormResponse object
  var response = e.response;

  var textbody, sendTo, bcc;

  // Get the script owner's email address, in order to bcc: them
  bcc = Session.getActiveUser().getEmail();

  // Now loop around, getting the item responses and writing them into the email message
  var itemResponses = response.getItemResponses();
  for (var i = 0; i < itemResponses.length; i++) {
    var itemResponse = itemResponses[i];
    message += itemResponse.getItem().getTitle() + ": " + itemResponse.getResponse() + "<br>";
    // If this field is the email address, then use it to fill in the sendTo variable
    // Check that your form item is named "Email Address" or edit to match
    if (itemResponse.getItem().getTitle() == "Email Address") {
      sendTo = itemResponse.getResponse();
    }
  }

  message += "<br>If you wish to edit your response, please click on <a href=\"" + response.getEditResponseUrl() + "\">this link</a>.";
  message += "<br><br>";

  // Use a global regex so that every <br> becomes a newline in the plain-text body
  textbody = message.replace(/<br>/g, "\n");

  GmailApp.sendEmail(sendTo, subject, textbody,
                     {bcc: bcc, name: sendername, htmlBody: message});
}
To use it, set up your form, making sure it has an item named "Email Address", and then choose "Tools" -> "Script Editor...". Copy and paste the code above into it, edit the variables at the top of the sendConfirmationEmail() function and then save it, naming the project as you do so. Then choose "Run" -> "setup", which installs a trigger on the form; as it does this, it will ask you for various permissions, such as to access your Gmail and to run when you're not present.

That's it. Now, if you enter some data into the form, the sender will receive an email which lists their form responses, and also provides a link to edit them if they wish.

How It Works

For those who want to know what's under the covers, here's how it works. First of all, you need to understand how Google Forms works, under the covers.

When someone fills in a form and clicks on the "Submit" button, their answers are recorded in an automatically-created Google Sheets spreadsheet - usually with the same name as the form plus "(Responses)" tacked on the end. It's easy to assume that the form just directly stuffs the responses into the spreadsheet - but that's not what it does.

(And it is this assumption that has led a lot of programmers astray - they try to write a script using the Sheets Script Editor. That has access to the spreadsheet content, but not the form response info, and so they tie themselves into knots trying to get the URL which will edit the user's response. Wrong approach!)

The form actually records the responses in a database behind the form - if you look at the Form editor menu, you'll see a menu item for "Responses (n)" where n is the number of responses recorded. And this probably matches the number of response rows in the backing spreadsheet - but not always. For example, you can delete rows in the spreadsheet, but the number of responses shown in the Form Editor won't change.

And more impressively, you can choose "Responses" -> "Change response destination", choose a new spreadsheet, and when you open it, it won't be empty - it will contain all the responses previously captured! This shows that the responses are being stored elsewhere. It also implies that if you edit the contents of the spreadsheet, your edits could be over-written by data from the response storage - and this will happen if the respondent chooses to edit their responses.

It therefore must be the case that the Edit URL edits the data in this database (which then copies the change to the spreadsheet). All of this logic belongs to the Forms app, not the spreadsheet.

So, the correct place to deal with emailing the respondent is when the completed form is submitted - not in the spreadsheet. Google Apps triggers allow a function to be called when various events occur, and this script mostly consists of a handler installed as an "Installable Trigger", plus the setup() function which installs it. When sendConfirmationEmail() is called, it is passed a FormEvent object, which contains an AuthMode object, a FormResponse object and a Form object representing the form itself. Only the FormResponse is of interest.

The script just sets up some strings for the email subject, sender name, and the beginning of the body text, then extracts the FormResponse variable (just for mnemonic simplicity). It then calls Session.getActiveUser().getEmail() to get the script owner's email address for the bcc:.

It then calls getItemResponses(), which returns an array of the form item responses, and loops through the array, getting each item's title and response and writing them into the email message. Along the way, when it finds the item titled "Email Address", it gets the corresponding response and uses it as the "Send To:" address. It then calls the FormResponse's getEditResponseUrl() method to get a URL which will edit this response, and wraps it up in an href element.

Finally, it calls the GmailApp sendEmail() method to send the email.

That's it; pretty clean and straightforward, once you know how Google Forms works.

Sunday, September 8, 2013

She'll Be Right, Mate!

Dateline - Sydney, 8th September, 1953

Awoke this morning to grey and overcast skies - the clear and sunny weather of recent weeks has gone - and the realization that we are Under New Management. Changes Will Have to be Made if we are to Open For Business.

The first order of business is to get a haircut. My hair is too long - it slightly covers my ears - and it's going to cost a fortune if I use that much Brylcreem (and that's another thing - Her Indoors is going to have to buy some antimacassars). Then I shall have to see my tailor - it will be best to get in quickly, before demand drives the price of brown tweed up through the roof. Something with a waistcoat, I fancy - and while I await my suit, I shall have to get the little woman to let down the hems of my existing trousers and create some turnups, in line with the new fashion. I shall need to get a couple of new hats, too.

I'm going to have to change careers, of course - there will be no demand for computer security in Australia and all that work will be in countries that have decided to build telecommunications infrastructure. The German car will have to go, too - it's politically incorrect, now. A Holden in every garage, that's the motto! And I hope that we'll soon be able to take a break for a short holiday - perhaps in the Great Barrier Reef, before it's gone, or the bits of the Northern Territory that haven't been dug up yet.

The better half is going to have to sell off her business - in our hearts, we always felt it was a mistake to let women advise giant corporations and government. What were we thinking? But she's quite looking forward to her new life at home, and has set about choosing material for new curtains, throw cushions, bedspreads and tablecloths. But before she can put the sewing machine to work on those, she's going to have to let those hemlines down - they're just not acceptable these days! And she's going to need a couple of new hats, too.

Oh, yes - comfort and style for the ladies!

We are expecting some resistance from our daughter. We should never have encouraged her in this foolish belief that she should get educated, learn languages and travel, so we are just going to have to bite the bullet and withdraw her from University so that she can go to secretarial college and learn shorthand. Hopefully she can master it quickly enough to get a job in an office and then, with a bit of luck, she'll meet a sound chap, get married and settle down. We shall be grandparents before we know it! I wonder how she'll look in a hat?

The young master, of course, accepts that he will have to sign up for nasho. My advice to him is: join the Navy! There'll be plenty of demand for young officer cadets to take charge in our Northern Defences, and since they'll need lots of ships to tow leaky fishing boats full of refugees back out to sea, I expect he'll have his own command in just a few years. After that, he should be well set for a job with a bank.

I do hope the little woman's over-active mind doesn't lead her astray.

And there's more good news for the young people, too - housing will be much more affordable, with the reintroduction of asbestos sheeting as a cheap building material (thanks, Julie!). The economy is going to boom, and soon the Harbour will once more be full of ships from distant lands, the docks ringing to the cheerful sounds of low-paid immigrant labourers! Gina's mines, too, will benefit  - her "New Australian" workers are so much better suited to tunneling work, and they need very little pay, since they only eat a couple of bowls of rice per day.

Soon, our coal-powered factories will be mass-producing widgets at prices that can compete with expensive Chinese-made widgets, those silly Chinese having introduced a 10 yuan-per-tonne carbon tax. How silly is that? We know that carbon is a colourless gas, so how can it be harmful? Tony is right - global warming is crap, and those boffins who dreamt it up have lost touch with reality. It's time somebody stood up to these "experts". And even better - our cheap electricity will allow us to run bigger air conditioners, which I find is so important, with the warmer weather we seem to be getting at the moment.

It's good that we have such a stable hand on the tiller, a sound chap, educated in the Mother Country, who understands our rightful place in the world, and that we mustn't get above our station as a colonial outpost of The Empire. A man who understands that Thrift is a Virtue, and that we cannot afford luxuries like the ABC - no matter, since we look forward to clustering around our wireless sets of an evening listening to the replayed highlights of BBC Radio Four, or dancing to big band sounds. Then it's off to bed, and up with the dawn to cycle to work - or down to the Surf Club, on the weekend. If the pushbike is good enough for Our Leader, it's good enough for me!

Anyway, that's all I have time to write in this first report - we have to rush off to church, as Tony is expecting a good turnout. And after that, the ladies will come home and put the roast in the oven for Sunday lunch, while me and my cobbers sink a few schooners of Tooheys Old at the RSL.

Aye, Australia - it's the lucky country, all right.