Online Security Is a Total Pain, But That May Soon Change


Originally published on Wired.com

Staying secure online is a pain. If you really want to protect yourself, you have to create unique passwords for every web service you use, turn on two-factor authentication at every site that supports it, and then encrypt all your files, e-mails, and instant messages.

At the very least, these are tedious tasks. But sometimes they’re worse than tedious. In 1999, researchers at Carnegie Mellon University found that most users couldn’t figure out how to sign and encrypt messages with PGP, the gold standard in e-mail encryption. In fact, many accidentally sent unencrypted messages that they thought were secured. And follow-up research in 2006 found that the situation hadn’t improved all that much.

As many internet users seek to improve their security in the wake of ex-government contractor Edward Snowden exposing the NSA’s online surveillance programs, these difficulties remain a huge issue. And it’s hard to understand why. Do we really have to sacrifice convenience for security? Is it that security software designers don’t think hard enough about making things easy to use—or is security just inherently a pain? It’s a bit of both, says Lorrie Cranor, an expert in both security and usability and the director of Carnegie Mellon’s CyLab Usable Privacy and Security Laboratory, or CUPS for short. “There isn’t a magic bullet for how to make security usable,” she says. “It’s very much an open research project.”

How to Make Things Usable

A big part of the problem, she says, is that security experts haven’t paid enough attention to the human side of things over the years. “There’s a lot of focus on getting the encryption right and not enough investment in looking at the usability side,” she says. Many security researchers will show her papers on topics like e-mail encryption or secure file transfer and tell her they think it’s “usable” because their friends say it’s easy to use. “But they haven’t done any testing,” she says. “They don’t know how to do testing and there’s no criteria for knowing if these types of things are usable.”

Security tools are notoriously hard to evaluate. For example, the Electronic Frontier Foundation is looking into sponsoring a crypto usability prize to promote the development of more user-friendly tools. But before it can offer a prize, the organization is conducting research into how to measure the usability of nominated projects. With a normal application, such as a word processor, a usability tester can just make a list of core tasks and verify whether the user can figure out how to do them in a reasonable amount of time. But with security tools, you need to test whether users make mistakes that undermine security, and what the user experience is like when someone is actively trying to trick them into handing over data.

That often means the interface design needs to be considered from the very beginning of a project. “It’s not the sort of thing you can have the crypto guys build something and then throw it over the fence to the usability people and say ‘make it work,’” Cranor says.

This is especially clear in the case of e-mail. A big part of why PGP is so hard to use is that the earliest e-mail systems weren’t designed with encryption and privacy in mind, and now, software developers are trying to bolt security onto existing systems through plugins. Today, open source teams like Mailpile are trying to create new e-mail clients that are built from the ground up to support PGP, but e-mail remains limited in other ways. For example, even if you encrypt your e-mail it will still be possible for someone who intercepts a message—or seizes an e-mail server—to see who you’ve been sending mail to and receiving mail from.

That has led to a few projects to reinvent private messaging from scratch, such as Darkmail, a collaboration that brings together Ladar Levison of Lavabit—the email service used by Edward Snowden—and PGP creator Phil Zimmermann. But even if we start from a clean slate, Cranor says, there’s no clear way of making secure communications software usable, largely because the field has been so neglected for so long.

Why Change Is on the Way

But the problem isn’t just that crypto geeks don’t prioritize usability. There hasn’t been a strong demand for usable security software in the past, Cranor says. One of the best examples of usable security is SSL/TLS, the protocol used to encrypt web traffic. There was a strong business incentive on the part of banks and e-commerce companies to make encryption work well, and work as seamlessly as possible. But in other areas, such as personal e-mail encryption, there’s been much less investment. That’s because, until very recently, the primary market for most security software has been the IT departments of large corporations and governments. “[IT professionals] care about usability but not to the extent that users do,” she says. “So they’ll buy something even if it isn’t very usable.”

That’s starting to change in the post-Snowden era, as average users start to worry more about privacy. Startups like Virtru are raising venture capital to make communications both more secure and easier to use. And Google released a preview of End-to-End, a PGP plugin for Chrome, earlier this month. But it’s not necessarily in the company’s best interest to have all of its e-mail pass through its servers encrypted, because it makes money by scanning e-mails for ad targeting. Therein lies another problem with the security technology market: consumers are often willing to trade privacy not only for convenience, but also to avoid paying for services.

But with major privacy breaches becoming more common, just about every tech company is at least paying lip service to the idea of privacy. Gone are the days of Facebook CEO Mark Zuckerberg proclaiming that privacy is no longer a social norm and Google chairman Eric Schmidt declaring: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

Facebook has actually been working with CUPS to find ways to nudge users into being more careful about what they post online. The collaboration is the result of additional CUPS research, which found that minor tweaks can help users make better privacy decisions, such as making users wait 10 seconds after writing something before they can post it, or showing them photos of five of their friends at random to remind them of who is likely to see the post.

These sorts of collaborations between web companies, usability experts, and privacy and security researchers are what we need more of. It may always be at least a little cumbersome to use encryption or secure passwords, but there’s plenty that can be done to make it easier on us.

Correction 1:00 AM EST 07/01/14: An earlier version of this article said that CUPS was working with Facebook on its Privacy Dinosaur tool. CUPS is working with Facebook, but not specifically on Privacy Dinosaur, which Facebook developed on its own.

By , Wired.com

Cyber Security – Good Fences Make Good Neighbours

A Good Fence

A good fence serves two purposes: 1) to keep bad company away and 2) to protect the good company within. And so your website, application or online shop needs good fences – the kind that will deter (and indeed effectively detain) the rogues from harassing you or your visitors, and the kind that will give your visitors a sense of protection while they’re with you, knowing that their personal information and credit card data on your servers is kept safe from those same scoundrels.

Now, the terms rogues and scoundrels tend to conjure up charming images of people up to a bit of mischief and being the “right” kind of naughty – the type of thing people get into trouble for at school but ultimately, no-one is harmed and everyone has a good laugh. I don’t mean that at all. The people trying to steal your data are trying to take whatever they can and they will endeavour to leave your clients penniless and robbed to the full extent of their abilities. It’s like being mugged on the street, only you can lose a whole lot more than the cash you carry on hand.

If you’re thinking that security isn’t as big a problem as everyone makes it out to be, think again. The biggest companies with the biggest budgets can be victims just as surely as small businesses. And if you think that the bad guys are only after the big companies, then remember that, as with real life, the houses of the rich and the poor alike get broken into. So the attackers who made a show of Sony may be top-class hackers, but there are plenty of slightly less elite hackers more than happy to destroy your company for their gain.

So you need good fences – and by fences I mean appropriate ones.

This post will take a look at the kinds of problems that your walls need to withstand. There are services (like CodeClimate) that will scour the code used in your application and attempt to highlight security vulnerabilities, but hopefully, even without their service, there’s enough information here to get you started (or worried enough to get started).

HTTPS

HTTPS is a method of keeping all traffic over HTTP more secure by encrypting it, making it harder for anyone to read the communication being sent between your user and your server (like their login details, credit card details, etc.). But even with the added security of SSL certificates, there are still very interesting exploits being found, including 2013’s discovery of BREACH, a vulnerability that allows hackers to recover text from compressed traffic over HTTPS, and Heartbleed, 2014’s newest vulnerability, which allows hackers access to the secret keys used by SSL certificates. Fortunately, news of these vulnerabilities gets widely disseminated and very quickly at that, so keeping an eye on the tech radar should alert you when you need to take action. For our two main products, BEEtoolkit and the Supplier Management System, we pride ourselves on our application security, and even so, when Heartbleed was announced it took a developer about 2 hours to fix. Remember that just as the big alerts are letting everyone know how to fix the bug, they’re also letting everyone know that there IS a bug and how to exploit it. So put your favourite techie on speed-dial (number 1, of course – your mom can shift over one place) and simply be prepared to get someone to fix the holes when they arise.

I am, you may have noticed, assuming you’re using an SSL certificate on your web application. And this is because you are using an SSL certificate, aren’t you? If not, please don’t shake your head too visibly and embarrass yourself – just go through either of these tutorials to help you get the certificate and install it: DigiCert, GoDaddy.
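If you also write client code that talks to servers over HTTPS, make sure certificate verification is actually switched on – a TLS connection that skips verification is barely better than plain HTTP. A minimal sketch using Python’s standard `ssl` module (the `example.com` host in the comment is just a placeholder):

```python
import ssl

# Build a client-side TLS context with sane defaults: the server's
# certificate is checked against the system trust store, and the
# hostname must match what the certificate was issued for.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# To actually talk to a server you would wrap a socket, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```

The danger sign to look for in existing code is anything that sets `verify_mode` to `CERT_NONE` or turns `check_hostname` off “just to make it work” – that silently disables the protection the certificate was bought for.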

Denial of Service (DOS)

The DOS attack is one that seeks to make your web application unusable by giving the server more work than it can handle. This is typically done by using a script to perform a certain action (ideally a time-consuming one like an API call to another website) many times per second, making your servers strain so hard to process the work that the entire machine starts to slow down and may even crash. While it lasts, the website is essentially unusable, slowed down to the pace of government workers, so that the legitimate users wanting to buy their coffee beans are already facing withdrawal symptoms by the time they eventually check out.

The big brother to this vulnerability is the Distributed Denial of Service (DDOS) attack, mounted against a particular website by a community of hackers (often as a form of group protest) or by bots (compromised machines roped in to take part). It’s distributed because it comes from many different sources. So while a DOS may be easily stopped by blacklisting the offending machine, the source of the attack, a DDOS is virtually unstoppable until it’s run its course. But the DOS vulnerabilities that are in your power to fix, the kind that you should fix just to keep the common DOS attackers at bay, will be found in the code base (either in your code or in the libraries you use), and running a third party tool to try and reveal these exploits can be well worth the time and effort.
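One common fence against the single-source DOS is rate limiting: count each client’s recent requests and stop serving them once they pass a threshold. Here’s a minimal sketch in Python – the class name and the limits are made up for illustration, and in a real deployment you’d more likely do this at the load balancer or with off-the-shelf middleware:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per client in any `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client id -> recent request times

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client]
        # Forget requests that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit - answer with a 429 instead
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=1.0)
print([limiter.allow("10.0.0.1", now=0.0) for _ in range(4)])
# -> [True, True, True, False]
print(limiter.allow("10.0.0.1", now=2.0))  # window has passed -> True
```

As the post says, this only helps against a single noisy source; a true DDOS arrives from thousands of addresses and each one stays comfortably under any per-client limit.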

SQL Injection

Structured Query Language (SQL) is the language used to write code against a database. It can be exploited by enterprising individuals entering data that runs updates directly on the database in ways that should not be possible. For example, let’s say Captain Jack Sparrow wants to check his order of rum (because no good pirate can be caught without a stash handy), so he logs in with his username (KappinJack – he clearly signed up while he was drunk) and his password. To see if this user has an account with you, you query the database:

select * from users where user_name = 'KappinJack';

Now, you can easily see the bit the user filled in on the website – the username is right there. But if the rum ran out early and he felt particularly crabby, he might instead enter KappinJack'; drop table users; select 1 as 'user into the username field and then the database will try to find the user with this query:

select * from users where user_name = 'KappinJack'; drop table users; select 1 as 'user';

Indeed, he’s getting no rum today but all of your users have just been deleted. By fiddling with the input, he managed to delete a whole table in your database.

Alternatively, he could log in and go to the page with order requests. When asked for search criteria, he enters a “special” search string so that the page loading his orders doesn’t run this:

select * from orders where user_name = 'KappinJack';

but this:

select * from orders where user_name = 'KappinJack' or '1'='1';

Since ‘1’ is always equal to ‘1’, he gets the results for all orders rather than just his own.

SQL Injection vulnerabilities strike at the very heart of your web application, i.e. your data, and can allow attackers to give themselves admin access, destroy data, expose data, or do almost anything that can be done with SQL commands. Fortunately, most frameworks have clearly established patterns for countering this attack, but it should not be taken lightly. And now, if you’ve never seen it before, you can understand this classic (amongst geeks) comic on XKCD.
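The established pattern is simple: never paste user input into the SQL string yourself – hand it to the database driver as a parameter, and the driver guarantees it is treated as a value, never as SQL. A sketch using Python’s built-in sqlite3 module, with a made-up table that mirrors the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_name TEXT, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("KappinJack", "rum"), ("Alice", "tea")])

malicious = "KappinJack' or '1'='1"

# Unsafe: the input is pasted straight into the SQL string, so the
# or '1'='1' trick matches every row in the table.
unsafe = conn.execute(
    "select * from orders where user_name = '%s'" % malicious).fetchall()

# Safe: the ? placeholder makes the driver treat the whole input as
# one literal value, which matches no real username.
safe = conn.execute(
    "select * from orders where user_name = ?", (malicious,)).fetchall()

print(len(unsafe))  # 2 - every order in the table leaks
print(len(safe))    # 0
```

Every mainstream database library has an equivalent of that `?` placeholder (some use `%s` or `:name` instead); whatever your framework calls it, it should be the only way user input ever reaches a query.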

Cross-Site Scripting

Cross-site scripting is the vulnerability where users can load content onto your website that abuses the trust already established between your website and your user. For example, when a user (let’s call her Alice) logs into your website, certain privileges and data (possibly sensitive data) are stored in the cookie, a file that sits on her own computer. This allows her to remain logged in even if she closes the browser and picks up her reading about the Kardashians the next day. Now if another user, Mad Hatter, logs in, he could enter information into a free text field (a comment on a blog, for instance) that includes a script tag – a little bit of code referencing a javascript file on his own server. Every time anybody views the comments for that particular blog post, this javascript call will run (although it will remain unseen by the user). And it can do awful things like send the cookie contents to the Mad Hatter, allowing him to impersonate Alice and access the site as if he were her. Not a wonderland at all.
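One mitigation on the cookie side is to flag the session cookie so that scripts can’t read it at all – then even a successfully injected script has nothing to steal. A sketch using Python’s standard http.cookies module (the cookie name and value here are made up for illustration):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True  # invisible to document.cookie in JS
cookie["session"]["secure"] = True    # only ever sent over HTTPS

# This is the value you'd put in the Set-Cookie response header.
header = cookie["session"].OutputString()
print(header)  # session=abc123 plus the HttpOnly and Secure flags
```

Most web frameworks expose these same flags as options on their session configuration, so usually it’s a one-line settings change rather than hand-built headers.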

So for our products, we like to make it very easy for our users to upload data, which we do through spreadsheet imports, and we carefully sanitise all data being saved that will later be shown on a page somewhere. So a new supplier called "Pinky Winks <script src="http://super_dodgy_site.com/steal_cookies.js"></script>" is not going to run that javascript file but will instead show the nefarious intent of the user who uploaded it (in sanitising the data, that script tag becomes visible text rather than running code).
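Sanitising in this sense mostly means escaping: turning the characters HTML cares about into harmless entities before the value is rendered on a page. Python’s standard html.escape does exactly that – here applied to the supplier name from the example above:

```python
from html import escape

supplier = ('Pinky Winks '
            '<script src="http://super_dodgy_site.com/steal_cookies.js">'
            '</script>')

safe = escape(supplier)
print(safe)
# The <script> tag is now inert text: the angle brackets come out as
# &lt; and &gt;, and the quotes as &quot;, so the browser displays it
# instead of executing it.
```

Most templating engines (Jinja2, ERB, Razor, and friends) apply this escaping automatically; the bugs tend to appear wherever someone has marked a value as “safe” to skip it.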

Social Engineering

This is possibly the most insidious form of attack since it relies on your greatest weakness: your users, many of whom can be easily manipulated into divulging information that should not be given away. Social engineering attacks include phishing (very popular with the banking crowd) and the quid pro quo attack, where the attacker claims to be from technical support and may use remote access to install malware onto the victim’s machine. While this doesn’t directly compromise your website or web application, it’s a concern for users who may compromise their own security on your website, and educating your users may be required to close that hole.

Conclusion

Just like in days of yore, your foes will find new ways to try and breach your walls, so security is an ongoing concern – the fences will deteriorate over time and will need to be repainted, rebuilt and upgraded. And this needs to be done pro-actively. It’s no use waiting until your data has been breached and your company has lost face and the credibility of your users – you need to try and prevent the catastrophe before it happens. Some companies are big enough to withstand the knock (like Sony, Monster, Verisign, Google), but you really don’t want to find out whether yours is.

Hopefully this post has alerted you to a few of the real security holes that you can close up to ensure better security for your users and a harder time for the bad guys.

By Richard Cochrane