Tuesday, April 3, 2018

Installing YARA from Source Code on CentOS 7

A short post - really more of a reminder to myself - on how to install YARA on CentOS Linux 7.

CentOS is an enterprise Linux distribution, and as a consequence aims for stability - it tends to have older versions of many software packages. This can make installing some software a bit of a challenge.

YARA is a pattern-matching program for use by malware analysts - it's a kind of Swiss Army Knife that can calculate hashes, perform string and regular expression matching, and understands various binary executable formats, like PE - most of the techniques that are useful for finding and investigating malware.

Preparation


Installing the various required packages can only be done as root. Rather than prefixing each command with sudo, just su to root:
$ sudo su -
Many of the packages required by YARA are also a little ahead of the standard CentOS releases - but that's common for up-to-date versions of many programs, like PHP and others. So you may already have the first requirement - a Yum configuration for the EPEL (Extra Packages for Enterprise Linux) repository. If you have, then you're good to go - otherwise, enable EPEL with the command
# yum install epel-release

If that doesn't work, because you don't have the CentOS Extras repository enabled, then try this:
# rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Now you're ready to start installing the required packages. Start with GNU Autoconf and libtool:

# yum install autoconf libtool

Then add the OpenSSL development files:

# yum install openssl-devel

If you intend to use the YARA cuckoo and magic modules:

# yum install file-devel
# yum install jansson jansson-devel

Finally, the latest YARA rules require Python 3.6, so if you don't have it:

# yum install python36 python36-devel

Installing YARA itself


From this point, everything goes as per the instructions at http://yara.readthedocs.io/en/v3.7.1/gettingstarted.html. You should drop your root privileges (exit or switch to another session) then download the latest version of YARA. From that point, it goes something like this:

$ tar xzvf yara-3.7.1.tar.gz
$ cd yara-3.7.1
$ ./bootstrap.sh
$ ./configure --enable-cuckoo --enable-magic
$ make
$ sudo make install

Finally, run the YARA tests:

$ make check

Among the output that follows, you should see:

PASS: test-alignment
PASS: test-api
PASS: test-rules
PASS: test-pe
PASS: test-elf
PASS: test-version
PASS: test-exception

[...]
============================================================================
Testsuite summary for yara 3.7.1
============================================================================
# TOTAL: 7
# PASS:  7
# SKIP:  0
# XFAIL: 0
# FAIL:  0
# XPASS: 0
# ERROR: 0
============================================================================

If all is correct, you're good to go! Have fun, and nail that malware!

Friday, March 30, 2018

Optus Cable with Google Wi-Fi

We had Optus Cable installed yesterday, replacing an aging ADSL connection. We already had Google Wi-Fi installed, replacing a complicated setup consisting of a Linux-based firewall and multiple access points, but ADSL had become painful, with disconnections whenever it drizzled, let alone rained, and one phone line not working at all (perhaps disturbed by a linesman while trying to get the ADSL fixed).
Up with this, I shall not put!

I had no intention of downgrading our Google Wi-Fi setup with the somewhat primitive devices Optus supply, so the problem was to get the combination working. Google Wi-Fi can lose some functionality if hidden behind another router, so I had googled for information on the Optus-provided devices to see how they performed. Posts on discussion boards suggested the Netgear CG3000 could be configured as a bridge via some barely-documented settings, while the Sagemcom devices should be avoided at all costs. With that in mind, I selected a plan that provided the CG3000 and figured I would let the Optus technicians get it working and then figure it out. I also took the precaution of buying a spare CG3000 - just so I could replace a Sagemcom if worse came to worst, or perhaps have one configured the way I want and the original to put back into place if necessary.

In the end, there was far less drama than expected. The Optus techs turned up with a Netgear CM500V modem and a separate Sagemcom 3864V3 router. I let them install it, connected to it via my laptop to show it was all working, and bid them adieu.

Then I unplugged the Sagemcom and put it back in the box and performed the following procedure:
  1. Switch off the CM500V. This is necessary, as the modem remembers the MAC address of the router it is connected to and will not talk to the Google Wi-Fi router without a reboot.
  2. Switch the CM500V back on again. It may take a few minutes to connect, so get it started while you're doing the rest of this procedure.
  3. Unplug the Google Wi-Fi router from the ADSL modem.
  4. Check the Google Wi-Fi router has realised it is offline - it should show a pulsing amber light.
  5. Turn off mobile data on your phone, then run the Google Wi-Fi app and go to Settings -> Network & General -> Advanced Networking -> WAN. The WAN settings are not editable unless Google Wi-Fi is offline and your phone is talking to it directly on the wireless LAN.
  6. Change the WAN settings to "DHCP" and tap "Save".
  7. Check the CM500V - the Power, Downstream, Upstream and Internet LEDs (the top four) should all be solid green by now.
  8. Plug in the Ethernet cable from the Google Wi-Fi WAN port to the modem. Give it a few seconds - the Ethernet LED on the CM500V should turn green and the Google Wi-Fi router will get its WAN IP address via DHCP and should settle down to a stable white light.
  9. Phew! That's better!
  10. Go back to the Google Wi-Fi app Shortcuts page, tap "Network check" and then "Test Internet". Marvel at the impressive speed test result!

The NBN HFC Internet connection box sits, forlorn, on the wall of our house while nbnco tries to figure out how to get DOCSIS 3.1 up and running on the cable. We just couldn't wait that long in the end. However, I expect this procedure should work just fine with an NBN cable modem.

Final caveat: I haven't tested the CM500V with a phone, since we have an Asterisk VoIP setup. But I've no reason to suspect the switch to the Google Wi-Fi setup will affect phone operation in any way.

Sunday, April 30, 2017

Another Chromebook Use Case

Recent restrictions on traveling with laptops have caused difficulties for business travelers.

My better half recently booked airline tickets to visit family in the UK, traveling with a codesharing combination of Qantas (Sydney - Dubai) and Emirates (Dubai - Birmingham). This is a much more convenient alternative to taking QF1 all the way to Heathrow and then organising land transport to the Midlands, but even QF1 still transits through Dubai, so would be subject to the same problem.
The Acer Chromebook 14, in Luxury Gold trim.

The Problem

Recently the US instituted a ban on passengers traveling from several Middle Eastern airports carrying electronic devices in their hand baggage. The ban applies to tablets (including some of the older, larger Kindles), laptop computers and other personal electronic devices, and apparently is based on received intelligence on bomb-making techniques.

This occurred a few weeks after my wife had bought her tickets, and we were initially unconcerned - until the UK followed suit, and specifically added Dubai to the list of ports concerned. Although this was a family visit, my wife needs to run her business while traveling, maintaining contact with clients and working on project reports and presentations. She had previously taken her Windows laptop for this purpose and so we initially considered how this could be done in the light of the new restrictions.

The most obvious alternative to hand luggage was to put the laptop into checked luggage. But there are problems with this approach.

Firstly, airlines (and aviation regulators) have specific rules for the carriage of dangerous goods, and lithium ion batteries feature quite prominently in the dangerous goods list. For example, Australia's Civil Aviation Safety Authority provides quite detailed advice to passengers ("Travelling safely with batteries and portable power packs", available online at https://www.casa.gov.au/standard-page/travelling-safely-batteries) and is quite clear that spare batteries must be in carry-on baggage only, because of the risk of fire. No advice is provided in relation to batteries installed in devices, probably because of the expectation that passengers will carry expensive and fragile devices as carry-on baggage anyway.

However, a laptop packed into a suitcase - especially a zip-up lightweight suitcase - poses its own risks. First, there is the possibility of theft; I have personally had electronics stolen from a checked bag, presumably by a baggage handler. Secondly, there's the possibility of damage - suitcases are stacked up in containers for loading into the freight holds of large aircraft, and a lightweight suitcase at the bottom of a pile could be subject to considerable pressure and deformation. Finally, the natural inclination is to wrap the laptop in soft clothing to provide protection against the shock of dropping - but what if pressure on a power switch or deformation of the case causes the laptop to power up? It is quite likely to overheat since the clothing will block the air vents - and the clothing is also likely to be highly flammable.

For these reasons, we rapidly ruled out the idea of packing the laptop in a suitcase - and I hope everyone else does, too.

The airline eventually proposed a scheme in which passengers transiting Dubai could surrender their laptops for carriage in the hold - but this is unattractive, too - since the laptop bag is the obvious place to store travel documents (e-ticket, passport, etc.) and in-flight requirements. Surrender the bag, and you lose access to those, or have to have yet another bag to carry them; surrender the laptop without the bag, and it is unprotected. Both cases still leave an exposure to damage, loss or theft. Not a comfortable option, either.

The Solution


What to do, then? Fortunately, there is an easy alternative: order a Chromebook in advance of travel, for delivery to a UK address, and that is what we chose in the end.

I drew up a short list of requirements for the various alternative solutions to the problem:

  • Functionality. The device has to support essential business applications: email/calendar, word processing, spreadsheet and presentation graphics.
  • Low cost. If we acquired a device just for use on visits to the UK, it would only get used for a few weeks each year, so a high-cost device is not justified. This requirement extends to software licences as well.
  • Low maintenance. The machine would lie unused for three to six months at a time, and if the first task on arrival was to install updates and patches, requiring multiple reboots and lots of interaction (e.g. via the Help -> About menu option in Mozilla applications), that's time badly spent on a short trip - but if not done, security exposures would result.
  • Security. If the device is stolen, lost, lent to a third party, etc. there should be no exposure of sensitive data on the device and no threat to system integrity.
  • No interruption to work, and no work lost. Locally-stored files, e.g. on the hard drive of a Windows laptop, could accidentally be left behind, requiring work to be done all over again.
  • Simplicity. We wanted to avoid complicated schemes of copying files to and from USB keys or compact flash. This poses too much risk of an old file over-writing a newer version.

Fortunately, the use of a Chromebook meets these requirements perfectly. Since my wife's business uses Google GSuite (formerly Google Apps), she is already familiar with some of its components and uses them, particularly for collaborative projects. So we knew the functionality requirement was met. We already have another Chromebook and a Chromebox, so the device is familiar, too.

The Chromebook meets the low maintenance requirement quite easily, as there's very little on the device itself to be updated, and that is taken care of with a few minutes downloading and a ten-second (at most!) reboot. All applications are cloud-based and continually updated.

Security, simplicity and the requirement for no work to be lost are dealt with by the fact that the Chromebook and GSuite are cloud-based. All she had to do was transfer more of her work to GSuite in the weeks leading up to the trip, and all her work documents were available immediately upon initial login. Similarly, she can leave the Chromebook behind and upon arrival, immediately resume work. Everything is stored in the cloud; nothing is stored on the machine. And because we use two-factor authentication with security keys, there's no real possibility of someone using the machine to gain access to her data. For the same reasons, the family member charged with storing the device is relieved of a lot of responsibility.

Finally, cost: the Acer Chromebook 14 is only GBP199.00 from Amazon.co.uk (see https://www.amazon.co.uk/Acer-Chromebook-CB3-431-14-Inch-Notebook/dp/B01MY6VFL3/). That is sufficiently inexpensive that the low utilization is not a problem - it's a reasonable price to pay to solve the travel problem.

The Pudding


The proof of the pudding is in the eating, as they say. The trip is almost over, and my wife reports that the Chromebook worked well. Even as a non-technical user, she was able to get it unpacked, set up and working with minimal effort, and she has used it for ten days to complete a variety of work tasks. Not having to worry about taking a laptop was a load off her mind, and not having a laptop case to carry was a load off her shoulders.

The Chromebook is now permanently stationed in the UK for use on future trips, and travel - especially via Dubai - will be a lot easier. The whole exercise has proved yet another use case for the Chromebook, and it has turned out to be a useful addition to our business technology toolbox.

Sunday, April 2, 2017

An Infosec View of Privacy

Information security professionals, and especially cryptographers, tend to think in terms of preserving the security properties associated with information assets, and CISSPs in particular tend to start with the CIA Triad. Clearly, privacy relates to the first member of that triad - confidentiality - in some way, but the relationship is not immediately clear. For example, we often use secrecy as a synonym for confidentiality, but privacy is something different.

The difference is centered on agency or control, and in particular the relationship between the subject of the information and the information custodian.

The vast bulk of enterprise information - whether it be private enterprise, or public - is internally-generated, and the subject is, ultimately, the enterprise itself. For example, an ERP system revolves around accounting data (GL, A/R, A/P, etc.) and the ledgers therein describe the enterprise's financial state and history of transactions (as well as future revenue, of course). A CRM system may contain information about customers, but the bulk of that information relates to the enterprise's transactional history with the customer - sales calls, orders placed, etc.

In such cases, the enterprise is custodian of its own information - it is both subject and custodian. There is no conflict of interest - as custodian, the enterprise is never going to breach the confidentiality of its own information, and indeed will implement controls - policies, identity and access management, security models - to ensure that its employees and agents cannot. The enterprise, as the subject, has authority over the custodians and users of the information.

However, a conflict of interest arises when an enterprise is custodian of information about identified (or identifiable) individuals. For example, a medical practice maintains health records about patients; it is the custodian, while the patients are the subjects.

The patient records obviously have value for advertising and marketing purposes, in addition to the intended purpose of patient diagnosis and treatment. For example, a company selling stand-up desks or ergonomic chairs would see considerable value in a list of patients who have complained of chronic back pain, while over-the-counter pharmaceuticals marketers might want to sell directly to patients whose test results indicate pre-diabetes, early indications of hypertension or any of a range of conditions. And an unscrupulous marketer might approach an unscrupulous medical practice manager, resulting in patients being subjected to sales calls for products they do not necessarily want or - worse still - their medical histories or problems being leaked to other interested parties such as family members or employers.

There is a clear conflict of interest here. The subject of the data is not the custodian, and in fact, has no authority over the custodian. It is in the custodian's interest to on-sell the subject's data to anyone and everyone who is willing to pay for it. And while the example of a medical practice involves only a small business, many enterprises are much, much larger and employ many lawyers, resulting in a power imbalance between the enterprise and the affected individual.

This is why governments, acting on behalf of civil society and the individual, enact privacy legislation - the legislation gives the individual some degree of authority over enterprises and restores the balance of power.

Note that many information security controls are able to preserve confidentiality, but not privacy. Personal information is stored in databases and document management systems which are ultimately under the control of an information asset owner and users who are free to access the information for a range of purposes; if he or she decides to extract data, copy it to a USB key and sell it externally, the first two steps are probably authorized while the third cannot be detected, let alone prevented.

Hence the need for a privacy policy and strong privacy education and awareness within the enterprise. In the end, privacy comes down to personal ethics and compliance with the law. It is really a matter of trust in the integrity of those who have access to personal information - and the threat of legal action provides a degree of assurance in that integrity.

Notice that, in this model, the distinction between confidentiality and privacy can be extended beyond individual persons to companies or other entities. For example, the Chinese Wall model is another situation in which information about one entity is in the custody of another (e.g. information about clients held by a consulting firm would obviously be of great interest to other clients who are competitors). In that sense, then, the Chinese Wall model is intended to preserve privacy rather than confidentiality.

Finally, consider personal information in the custody of the person themselves. The subject and the custodian are the same individual - there is no conflict of interest, privacy laws do not apply, and the issue here is confidentiality, not privacy.

The distinction between confidentiality and privacy, then, is whether the subject of the information has authority over the custodian - if he does, it's a matter of confidentiality, but if he does not, then it's a matter of privacy.

Of course, there are other common conceptions of privacy, as well as legal views relating to photography, etc. but these are not considered here.

Sunday, July 17, 2016

A Little Learning - Electronic Voting and the Software Profession

Over the last week, I've had occasion to ponder the expression that 'a little learning is a dangerous thing'.

Quite spontaneously, a number of people have posted on Facebook in opposition to the idea of electronic voting. One linked to a YouTube video which clearly demonstrated the problems with some of the voting machines used in the US. Others posted a link to a Guardian opinion piece that labeled electronic voting a "monumentally fatuous idea".

The YouTube video - actually, a full-length documentary from 2006 entitled "Hacking Democracy" - showcased a number of well-known problems with the voting machines used in the US. Touch-screen misalignment causing the wrong candidate to be selected, possible tampering with the contents of memory cards, problems with mark-sense readers, allegations of corruption in the awarding of contracts for voting machines - these have been known for some years.



The Guardian article fell back on claims that "we could not guarantee that it was built correctly, even with source code audits. . . Until humans get better at building software (check back in 50 years) . . . we should leave e-voting where it needs to be: on the trash-heap of bad ideas."

However, by exactly the same argument, online banking is also a "monumentally fatuous" idea, along with chip & PIN credit card transactions, online shopping, fly-by-wire airliners, etc.

Whenever I've suggested that, actually, electronic voting is not such an outlandish idea, I've met with vehement opposition, usually supported by an appeal to authority along the lines of "I used to be a sysadmin, and let me tell you . . ." or "I'm a software engineer and it's impossible to write foolproof software. . ."

There's an old saying among cryptographers, that every amateur cryptographer is capable of inventing a security system that he, himself, is unable to break. In this case, the author of the Guardian piece demonstrates the converse - that because he, himself, is unable to come up with a system that he can't break, he is certain the professionals can't either. 


I have to say, that's a novel form of arrogance; to be so sure that because electronic voting systems are beyond your own modest capacity, nobody else can do it, either.

Yes, we've all seen software projects that were near-disasters, we've known programmers whose code had to be thrown away and rewritten, and we've seen poor practices like the storage of passwords and credit-card numbers in unencrypted form. I watch students today 'throw' code into the Eclipse IDE then run it through the debugger to figure out what the code they've just written actually does, and then I think back to my days of careful, paper-and-pencil bench-checking of Algol code, and I cringe! But not every system is designed and implemented that way.

It's true that electronic voting is a difficult problem, but it's one that some very fine minds in the cryptographic and security communities have been working on for several decades now. The underlying cause of this difficulty is that a voting protocol must simultaneously preserve what, at first sight, are a number of conflicting security properties.

Most security people are familiar with the basic security properties we work to preserve in the information and systems in our care: confidentiality (also known as secrecy), integrity (correctness) and availability - the so-called "CIA triad". But there are many more, and some of them, at first glance contradict each other.

For example, anonymity is important in a democracy; no-one should be able to find out how you voted, so that nobody need fear any form of reprisal. For an electronic voting system, this means that the vote cannot be associated with the voter's identity, whether represented as a login ID, an IP address, a public key or some other identifying information. But democracy also requires that only those who are entitled to vote be able to do so - in Australia that means registering on the electoral roll and identifying yourself at the polling place. This property is called eligibility.

At first glance, these requirements are mutually exclusive and contradictory - how can you identify yourself to claim eligibility, while at the same time remaining anonymous? In the physical world, the time and space between your identification, the booth where you mark your ballot paper, and the random placement of the ballot paper in a box all serve to provide anonymity. Fortunately there are cryptographic techniques that effectively do the same thing.

An example is the use of blind signatures, originally invented by David Chaum in 1983 [1], although I'll use a later scheme, made possible by the RSA public-key system's property of homomorphism under multiplication. There are three parties: Henry, who holds a document (e.g. a vote) which he wants Sally (our electoral registrar) to sign without her knowing what it is she is signing; and Victor (the verifier and vote tabulator), who needs to verify that the document was signed by Sally.

I don't want to delve into the mathematics too deeply, and am constrained by the limited ability of this blogging platform to express mathematics - hence '=' in what follows should be read as 'is congruent to' in modular arithmetic, rather than the more common 'equals'. Henry obtains Sally's public key, which comprises a modulus, N, and a key, e. Henry then chooses an integer blinding factor, r (coprime to N), and uses this with the key e and the modulus N, along with his vote message, m, to calculate a ciphertext

c = m x r^e (mod N)

and sends this to Sally along with any required proof of his identity. If Sally determines that Henry is eligible to vote, she signs his encrypted vote by raising it to the power of her private key, d:

s' = c^d (mod N) = (m x r^e)^d (mod N) = r x m^d (mod N) (since r^(ed) = r (mod N), because ed = 1 (mod phi(N)))

and returns this to Henry. Henry now removes the blinding factor to get the correct signature:

s = s' x r^-1 (mod N) = r x m^d x r^-1 = m^d (mod N)

Henry can now send his vote message, m, off to Victor, along with the signature, s. Victor uses Sally's public key, e, to check whether

s^e = m (mod N)

If the two are equal (congruent, really) then the signature is correct, and Victor will count the vote. Notice that Henry does not identify himself to Victor, and Victor does not know who he is; he is willing to accept that he is an eligible voter because Sally has signed his vote message. And, very importantly, note that Sally does not know how Henry voted - she signed the blinded version of his vote message.
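For the curious, the whole exchange can be sketched in a few lines of Python. The numbers below are textbook-sized toys, chosen purely for illustration (a real deployment would use 2048-bit-plus keys and proper message padding), and the variable names just follow the story above:

```python
# Toy RSA blind signature - illustrative parameters only, NOT secure.
from math import gcd

# Sally's keypair: p = 61, q = 53, so N = 3233, phi(N) = 3120, e = 17, d = 2753
N, e, d = 3233, 17, 2753

m = 1234      # Henry's vote message, encoded as an integer < N
r = 117       # Henry's blinding factor, which must be coprime to N
assert gcd(r, N) == 1

# Henry blinds his message: c = m * r^e (mod N)
c = (m * pow(r, e, N)) % N

# Sally signs the blinded message without ever seeing m: s' = c^d (mod N)
s_blinded = pow(c, d, N)

# Henry unblinds the signature: s = s' * r^-1 (mod N) = m^d (mod N)
s = (s_blinded * pow(r, -1, N)) % N   # pow(r, -1, N) needs Python 3.8+

# Victor verifies with Sally's public key: is s^e = m (mod N)?
assert pow(s, e, N) == m
assert s == pow(m, d, N)   # the unblinded signature equals a directly-computed one
print("signature verifies:", pow(s, e, N) == m)
```

The final two assertions confirm the homomorphic trick: the signature Henry recovers is identical to the one Sally would have produced had she signed m directly - yet she never saw m.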

If all that mathematics was a bit much, consider this simple analogy: if I want you to sign something for me without seeing what it is, I can fold up the paper I want signed in such a way that the signature space is on top when the paper is placed in an envelope. I then insert a piece of carbon paper - the paper that's used in receipt books to make a second copy of a written receipt, and was once used to make copies of typewritten documents - on top of the paper but inside the envelope. I then ask you to sign on the outside of the envelope, and the pressure of your pen will imprint your signature, via the carbon paper, onto the unseen paper inside the envelope. Voila! You have blind-signed a document, which I can now extract from the envelope once alone.

My point is that cryptographers have, for many years known about, and had solutions for, the superficially contradictory requirements for eligibility and anonymity.

In practice, voting protocols have many more requirements:
  • The voter must be registered or authorized to vote (eligibility)
  • The voter must vote once only (uniqueness)
  • The vote must be recorded confidentially (privacy) and cannot be associated with the voter's identity (anonymity)
  • The vote must be recorded correctly (accuracy)
  • The voter must be able to confirm that his vote was recorded correctly (verifiability)
  • The voter must not be able to sell his vote
  • No-one must be able to duplicate votes (unreusability)
  • The vote must be unalterable after recording (integrity)
  • The voter must not be vulnerable to coercion (uncoercibility)
  • The votes must not be revealed or tallied until the end of the election (fairness)
  • The electoral authority - or everyone - must know who voted and who did not
But we still have more tools at our disposal, such as zero knowledge proofs, digests, signatures and, of course, conventional symmetric crypto which we can use to build more sophisticated protocols which do satisfy these security requirements. Some very fine minds indeed have been working on this for decades now, although obviously their work is not well known outside a relatively small community of cryptographers with an interest in voting.

When the NSW Electoral Commission were working on their iVote system, they asked several universities to get involved. They supplied the design documentation for iVote, including references to the various cryptographic protocols used (e.g. Schnorr's Protocol - a zero-knowledge proof based on the discrete logarithm problem). Both I and a team from UNSW independently wrote programs which took over 250,000 votes and validated them, in a ceremony attended by scrutineers of the various political parties.

The iVote system is quite sophisticated - it's an enhancement of what's called a "split central tabulating facility" design, with a third, separate verification server which allows voters to verify their votes via a phone interface, while the counting system itself is airgapped from everything else. The process I described above compared the votes from the counting system with the encrypted votes in the verification server. The voter can additionally obtain verification that their vote was decrypted correctly via the web interface.

It's true that researchers did find a vulnerability in the SSL/TLS library on one server in the system. I'm not familiar with that part of the system, but I'm pretty sure that even if that vulnerability was exploited in a man-in-the-middle attack, the attacker would not be able to manipulate the vote in that session as the votes are encrypted in the application protocol and do not rely on the transport layer encryption of TLS. However, the browser supports TLS anyway, so it's a good idea to use it as one more layer of a defence in depth - and an alert user would expect to see that padlock in the URL bar in any case, so it would be a mistake not to use it.


Prior to getting involved with the iVote project, I'd been somewhat cynical about the prospects for electronic voting. Although I knew something of the underlying principles, I felt it was still in the too hard basket. And reports of the problems with US voting machines didn't help, although a little investigation will reveal they are almost embarrassingly primitive and bear no resemblance to the protocols I am discussing here. But they still contribute to popular fear of electronic voting systems and pollute the overall climate. Watch the video above and you'll see what I mean.

However, I discovered that work on voting protocols was more advanced than I had thought, and the implementation of the iVote system was considerably more sophisticated than most people would imagine.

But what has surprised me is the negative attitude of software professionals. The (ISC)2 Code of Ethics [2] requires me, as a CISSP, to "Protect society, the commonwealth, and the infrastructure" and as part of that effort, to "Promote and preserve public trust and confidence in information and systems". And yes, it also requires me to "Discourage unsafe practice". At this point, I cannot say that the introduction of electronic voting is an unsafe practice - over 250,000 people used it in our last State election - but it's hard to promote public trust and confidence when software professionals who ought to know better are so active in promoting distrust. There's a huge gap between the quality of code produced by a junior programmer working on a mobile app with time-to-market pressures and the careful design of a voting system based on decades of cryptographic research and development, and the fact that the former may be unreliable is not an indication that the latter is inevitably insecure.

Only a fool would guarantee that an electronic voting system is 100% secure and error-free - but then, only a fool would guarantee that paper-based voting systems are 100% secure and error-free. However, humans are better at writing software than the Guardian author suggests, and electronic voting systems are in use today, will increase in popularity, and are very unlikely to result in the downfall of democracy.

References


[1] Chaum, David. “Blind Signatures for Untraceable Payments.” In Advances in Cryptology, edited by David Chaum, Ronald L. Rivest, and Alan T. Sherman, 199–203. Springer US, 1983. http://link.springer.com/chapter/10.1007/978-1-4757-0602-4_18.

[2] (ISC)2 Code of Ethics - The Pursuit of Integrity, Honor and Trust in Information Security, International Information Systems Security Certification Consortium, 2010. Available online at https://www.isc2.org/uploadedfiles/(isc)2_public_content/code_of_ethics/isc2-code-of-ethics.pdf

Sunday, July 10, 2016

2FA FTW

This morning brought yet another story of identity theft (ID Theft in three steps: 'Adequate' Telstra and telco identity checks questioned) by the all-too-easy technique of finding an individual's name, home address and date of birth. These are all things that are on the public record and quite easy to discover; they should not be used as authenticators. The US has a related problem with Social Security numbers, which should likewise be regarded as identifiers, not authenticators.

One of the most important steps in identity theft is getting access to the victim's email account. This is because the attacker does not know the password for the victim's bank, PayPal, eBay or other accounts, but the most common technique for password reset is to email a link to the user. In other words, the email account is used as yet another authenticator in a chain of trust. This means that email account mismanagement poses an extreme risk for users - it is both highly likely to be targeted, and also leads to severe consequences if compromised.

The weak link in all of this is the use of passwords; human beings, with our bounded rationality, tend to choose bad passwords that are easily guessed or discovered. Even worse, we reuse passwords - confronted with the need to memorise passwords for 20 or 30 different accounts, we understandably fail. Not everybody knows about password safes, for example. Inevitably, one of the sites we use gets compromised, the password hashes are stolen and posted on Pastebin, and within minutes a rainbow-table attack (or even quite a mundane dictionary attack) reveals that favourite password, along with the email address, to the Bad Guy. And if the same password was used for the email account, it's Game Over.
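To see why a leaked dump of unsalted hashes falls so quickly, here's a minimal sketch of a dictionary attack - the accounts and passwords are entirely hypothetical, and real attacks use wordlists of millions of entries rather than five:

```python
import hashlib

# Hypothetical leaked dump: unsalted SHA-256 hash -> account name
leaked = {
    hashlib.sha256(b"letmein").hexdigest(): "alice@example.com",
    hashlib.sha256(b"correcthorse").hexdigest(): "bob@example.com",
}

# A tiny dictionary of common passwords
wordlist = ["password", "123456", "letmein", "qwerty", "correcthorse"]

cracked = {}
for guess in wordlist:
    digest = hashlib.sha256(guess.encode()).hexdigest()
    if digest in leaked:
        # Same password -> same hash, so an unsalted dump is trivially searchable
        cracked[leaked[digest]] = guess

print(cracked)  # both hypothetical accounts recovered
```

Salting defeats the precomputed rainbow-table variant of this attack, but a weak password still falls to per-hash brute force - which is why the unique, complex passphrase matters.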

I long ago determined that my email account had to use a unique and painfully complex passphrase, since the security of so many other assets depended upon it. It might be a pain in the ass to pause for a while as I hunt and peck my way through a long near-random string, but it's the price I pay for peace of mind.

And a few years ago, I decided that it was time to move to two-factor authentication for my email account. I'd had enough of running my own email servers and moved my email, etc. to Google Apps for Work, and Google was starting to offer two-factor authentication. I'm not a starry-eyed believer in 2FA - in many cases, there are better alternatives and a more sophisticated risk-based approach to the problem should reveal them - but to rely purely on passwords for such an important application is inviting disaster.

Enabling two-factor authentication is a fairly straightforward process, and Google nicely provides several alternative approaches. The most obvious is to rely on a mobile phone number and send an SMS (or voice message) to it - what banking systems call a mobile Transaction Authentication Number (mTAN). But that won't always work for me - some places I work are screened so that a mobile phone signal can't reach them. One nice alternative is a Security Key - a small device that plugs into a USB port and provides cryptographic authentication; this will work with any software that supports the FIDO (Fast Identity Online) U2F (Universal 2nd Factor) specification, such as Chrome (but not Firefox, unfortunately). And there's the Google Authenticator, a smartphone and tablet app (technically, a soft token) which generates a unique six-digit number every 30 seconds. Any of these can be combined with the passphrase to allow a successful login.

A Yubikey Neo - a "security key" which also supports NFC (near-field communications)
This will overcome what I suspect is the biggest obstacle for many people contemplating the switch to two-factor authentication for Gmail, etc. - the fear that they will be locked out of their account if they lose their phone or another device or simply don't have it to hand. With a little extra setup, there's always an alternative available.

Setting up two-factor authentication is done by going to https://myaccount.google.com/security/signinoptions/two-step-verification or just clicking on your avatar at top right of a Gmail or other Google page, then choosing "My Account", then under "Sign-in & security", selecting "Signing in to Google" and on that page, under "Password & sign-in method" clicking on "2-step Verification". Unsurprisingly, you'll be prompted for your login ID and password before getting access to this page.

Turning on 2-step verification will start by sending a verification code to your recovery phone number. If you receive this successfully then 2-step verification will be turned on. At this point, any other sessions and devices you may have logged in to this Google account will be effectively logged out, and you will need to log in again, using 2-step verification. However, you may want to delay that until you have set up your security key or authenticator app.

Security keys, such as the Yubikey, are inexpensive and widely available - I got mine from Amazon. Adding the security key to a Google account is very easy - on the 2-step Verification page, just click on "ADD SECURITY KEY" and you will be prompted to insert your key and tap on its button if required. A second or two later, it will have been recognised and added to your account.

Setting up Google Authenticator is almost as easy. The 2-step Verification page prompts you to install the app (versions are available for both iPhone and Android) and then tap "Set up account". This will allow you to scan a QR code, which encodes the shared secret the Authenticator app uses to generate codes. And that's it - from this time on, opening the Authenticator app displays a six-digit number, changing every 30 seconds, which you can use to authenticate to Google.

Google Authenticator
Google selects the second factor in a priority order - first (default) choice for me is the security key, then comes the Google Authenticator app, and only if those aren't available does it fall back on SMS authentication codes.

Logging In

As mentioned above, setting up 2-step verification effectively logs you out on other devices and sessions, so you'll need to log in again. Your phone may pop up a dialog requesting an authentication code, especially if you have the Google Device Policy app installed for remote management. In my case, I tapped the button requesting Google to send an authentication code, and a second later the SMS arrived - but after I'd viewed the SMS and written down the code, I couldn't get back to the authorization dialog. Since the phone is still logged in and hasn't skipped a beat, I can only presume that the Device Policy app was able to directly read the SMS and didn't need me to manually enter the code.

When I booted my Chromebook and logged in, it initially accepted just the password but then gave me a notification message that said I should log out and in again. This time, it prompted me to insert my security key and tap its button, after which I was logged in.

Chrome on Windows can be a bit confusing, because it requires a log in to the browser itself, independently of any web sites you may log in to (this is because Chrome syncs your bookmarks and other data to your Google account). So you may get a couple of prompts to authenticate when you first launch Chrome and log in to a Google page such as Gmail.

Firefox does not yet support security keys - there's a Bugzilla page where you can track development progress at https://bugzilla.mozilla.org/show_bug.cgi?id=1065729. However, you can use Google Authenticator to log in to Google, as shown:

Thunderbird shares code with Firefox and also does not support security keys. In fact, if you've been relying on "Normal password" authentication for IMAP access to your Gmail account, it's now time to change. Right-click on your account in Thunderbird and choose "Settings", then click on "Server Settings" and change the "Authentication method" to OAuth2. Click on "OK", and then click on "Get messages" to force authentication. You'll get a password prompt and then a dialog like the one above, allowing you to authenticate using Google Authenticator or an SMS code. Once that's done, Thunderbird will be happy for the foreseeable future. If you're using the Provider for Google Calendar plugin, that should also prompt for re-authentication.

Third-party Sites

One of the beauties of adopting this technology is that it can eliminate the need for passwords for other web sites and applications. The OAuth2 protocol referred to above allows Google to function as an identity provider for other relying web sites - part of an underlying technology called federated identity management. I use this as part of my own systems - for example, I no longer use a password when logging in to my own web site, but instead have configured the content management system (in this case, Moodle) to use OAuth2 to get Google to confirm my identity. Not only can the site be more confident this really is me - I've used two-factor authentication to prove it - but I don't have to type in a password.
My own web site now supports login with Google credentials
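Under the hood, a relying site starts the OAuth2 authorization code flow by redirecting the browser to Google's authorization endpoint. This sketch just builds that redirect URL; the client ID and callback address are hypothetical placeholders, not real registration values:

```python
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

params = {
    "client_id": "example-id.apps.googleusercontent.com",  # hypothetical
    "redirect_uri": "https://example.org/oauth2/callback",  # hypothetical
    "response_type": "code",            # ask for an authorization code
    "scope": "openid email",            # identity claims the site wants
    "state": "random-anti-csrf-token",  # bound to the user's session
}

auth_url = AUTH_ENDPOINT + "?" + urlencode(params)
print(auth_url)
```

Google then authenticates the user - with the second factor, if enabled - and redirects back to the callback with a one-time code that the site exchanges for tokens. The site never sees a password at all.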

Many other sites also use two-factor authentication or federated identity management; for example, Dropbox now supports security keys, as do GitHub, Salesforce and many other sites. You can set up Facebook to use Google Authenticator by going to your settings, then choosing "Security", then clicking on "Code Generator" and clicking on "Set up another way to get security codes" and scanning the resultant QR code. The Facebook app on your phone can also be used as a generator.

You can log in to The Guardian, O'Reilly Media and many other sites using federated identity management with Google, Facebook, LinkedIn or Twitter acting as identity providers. Of course, social media networks are keen to have you use them as identity providers since then they can track you across other sites, although they have other ways of doing that anyway. However, as a Google Apps user I am a Google customer, rather than a consumer of free services, so I prefer to use only my Google ID for authentication.

Summary

Email accounts are too valuable to protect with only a password, especially a weak or reused password. Two-factor authentication provides better security and lowers the risk of identity theft. It takes only a few minutes to set up and is rarely inconvenient thereafter. Use it.


Wednesday, August 12, 2015

How To Bury Your Message - And Get Shot in the Process


Oracle hit the news today for all the wrong reasons. The company's Chief Security Officer, Mary Ann Davidson, wrote a blog post on the topic of "security researchers" who are hired by enterprises - in this case, Oracle's own customers - to go hunting for vulnerabilities in their systems. Because those systems are very often running on Oracle's platforms and products - not just the database, but all the stuff Oracle acquired from Sun, like Solaris and Java - the hired bug-hunters turn their attention to those products, using disassemblers to reverse engineer the code and automated tools to scan for vulnerabilities.

Unfortunately, Davidson chose to bang the drum a little too hard on the reverse-engineering aspect, predictably triggering a response from some "security researchers". Which, in turn, led Oracle to delete the blog post. However, nothing ever dies on the Internet, and so the blog can still be read via the Internet Wayback Machine, here (https://web.archive.org/web/20150811052336/https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t).

End-user licence agreements which prohibit tracing, debugging and other reverse engineering techniques do pose something of a problem in the security world. Yes, we ought to honour them, especially since we accepted those terms and conditions. No, we shouldn't use these kinds of techniques to snoop around our suppliers' intellectual property. But the bad guys couldn't give a rat's patootie about these moral quibbles - they're hacking away for fun and profit, and if there's a vulnerability to be found, so much the better.

However, "security researchers" ought to know better. A junior "security researcher" armed with some automated vulnerability-scanning tools is the white hat equivalent of a script kiddie - he knows just enough to be dangerous. This is one of the two points Davidson really ought to have emphasized - hiring this kind of person just encourages them. And here comes the second point: it's counterproductive.

Hiring "security researchers" of this type in an attempt to secure systems is the kind of futile endeavour that the much-misunderstood King Canute would have railed against. How many vulnerabilities are there in a complex business system? A handful? A few dozen? Hundreds? You don't know. You'll never know.

So you hire a "security researcher" to hammer on the code with his automated tools and Bingo! He finds one. So what? Out of the unknown n vulnerabilities in that subsystem, you've found one. Which leaves an unknown n-1 vulnerabilities. What are you going to do about them?

The answer is, you're going to deploy a variety of other controls in a defense-in-depth strategy to prevent entire classes of exploits. You'll use a DMZ configuration, an application-layer firewall, an IPS (Intrusion Prevention System) and a whole bunch of other things to make your systems and the business processes they support as resilient as possible. You have to do that.

Searching for, reporting, and patching vulnerabilities one by one is an inefficient strategy which ignores the fundamental asymmetry of the information security business:
The bad guys only have to be lucky once. The defenders have to be lucky every time.

Trying to do exactly the same thing - find vulnerabilities - faster than all the hackers out there is not a sensible strategy, unless and until you've done everything else you can to make your systems and business processes resilient. Otherwise, your resources - time and money - can be better employed elsewhere.

As Davidson explains, Oracle and other software developers already use static source code analysis tools to scan the original source code for vulnerabilities (among other errors). There's not much point in doing it all over again. There's a bit more point to performing dynamic testing, against complete systems - that's much more likely to turn up architectural issues and configuration problems, and - as Davidson unfortunately chose to over-emphasise - it doesn't violate the EULA.

So why do it? Because software companies pay bounties; the vigilante bug-hunter is in it for the money, and a little fame might be nice, too. But if you're going to play that game, do it properly - if you're going to hunt for vulnerabilities, follow through - document it and develop a proof-of-concept that shows it really is a vulnerability. Don't just submit the output of an automated scan and sit back with hand outstretched. If you do the whole job, properly, then you actually will be a Security Researcher.

(And I hope Mary Ann Davidson enjoys her next job, where the Marketing Department will hopefully put an approval process in place for her blog articles.)