I recently attended the EuroS&P conference and the co-located EuroUSec workshop, both held this year in Stockholm, Sweden. Conferences and workshops are interesting events to visit, and the two were very different from each other.
The main conference was quite a varied mix of topics. Quite a few talks went completely over my head; others offered incredibly interesting insights into new research. The university where it was held, KTH, was very nice. The major problem was that two conferences ran at the same time, sharing a single restaurant. The resulting queues were not much fun.
If I could make one complaint to the conference organisers: if you’re going to list prices online, make sure to tell people in advance that they don’t include VAT. Finding that out at the last minute really puts a spanner in the works.
The keynote speaker, Melanie Rieback, had a really good story to tell. She built a unicorn company – entirely non-profit, open, freelance, and yet still able to compete in the marketplace with the industry leaders. Especially good was the observation that a lot of problems come from the Silicon Valley operating model – grow fast, exit fast. It’s very uplifting to know that running a tech company in a more relaxed manner is possible, and I’d certainly like to see more of it.
False Sense of Security. This talk presented research analysing how banking apps try to detect a jailbroken iOS and then either warn the user or refuse entry. I think there’s definitely scope for warning users, but I’m not a fan of apps outright blocking “rooted” OSes. The cat-and-mouse game that ensues is kind of interesting – scanning for the footprints of a jailbreak, in some cases taking advantage of security holes opened by the broken OS sandbox. Most amusing is the finding that some app developers have just copied and pasted detection code from Stack Overflow.
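The footprint-scanning idea is simple enough to sketch. This is a toy illustration only, not the paper's actual checks – real apps probe many more signals, and the path list here is just the sort of well-known jailbreak artefact such code looks for:

```python
import os

# Illustrative jailbreak artefacts that detection code commonly probes for.
# A real detector checks many more signals (dylibs, sandbox escapes, etc.).
JAILBREAK_PATHS = [
    "/Applications/Cydia.app",
    "/bin/bash",
    "/usr/sbin/sshd",
    "/etc/apt",
]

def looks_jailbroken(paths=JAILBREAK_PATHS):
    """Return True if any known jailbreak footprint exists on disk."""
    return any(os.path.exists(p) for p in paths)
```

The cat-and-mouse part follows directly: a jailbreak tool that hides these paths defeats the check, so detectors move on to subtler signals.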
Up-To-Crash. Updating libraries is a horrible mess. Every couple of months, GitHub’s automated security bug detection system flags old repositories of mine for vulnerable libraries. This talk presented a tool that tries to automatically detect when an app’s libraries can be updated safely. It uses a concept called a “Monkey Troop” to hunt for crashes after a library update. It seems like a really good idea, but I’m not sure how good a substitute it will ever be for a developer just, you know, maintaining their app. If the developer isn’t around to do that, they won’t be around to run the auto-update checker either.
Exploit Mitigations. A bit outside my area of expertise, this talk was about embedded systems. One thing I quite liked was the speaker's suggestion that the only concrete way we will get more security in embedded (and IoT) devices is if either end users or governments convince OEMs to implement it. This would raise prices because of the additional overhead on each device made, but for me (and many security experts) that is a sacrifice worth making.
Cryptocurrency & Cybercrime
Deanonymisation. Given that Bitcoin and other cryptocurrencies don’t offer privacy, can you find a way to link transactions to one another? This talk described how you could potentially link supposedly anonymous transactions to a common actor, if you had a node in the network that was very well connected and recorded all the traffic passing through it. As a theory it’s nice, but because it only works on live data and can’t be applied retroactively, I’m not sure it would be much use for catching thieves or retrieving stolen coins.
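A heavily simplified sketch of the core trick, as I understood it (this is my toy reconstruction, not the paper's method): a listening node records which peer announced each transaction first, and transactions that keep surfacing from the same peer may share an originator.

```python
from collections import defaultdict

def link_by_first_relay(announcements):
    """Group transactions by the peer that announced them first.

    announcements: iterable of (txid, peer_ip, timestamp) tuples, as a
    well-connected listening node might record them.
    """
    first_seen = {}
    for txid, peer, ts in sorted(announcements, key=lambda a: a[2]):
        first_seen.setdefault(txid, peer)  # keep only the earliest announcer
    clusters = defaultdict(set)
    for txid, peer in first_seen.items():
        clusters[peer].add(txid)
    return dict(clusters)
```

This also makes the limitation obvious: with no record of who announced an old transaction first, there is nothing to cluster on retroactively.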
Ekiden. Smart contracts. With so many people losing trust in centralised systems, they might be the way forward: a magical unicorn protocol where you ask for work, someone else does it and gives you the response. This talk proposed a way to manage these (I don’t think I followed it very well). Until I see a real-time smart contract application akin to Word Online or the upcoming generation of online streaming games, one that won’t succumb to attacks the way CryptoKitties did, I won’t buy smart contracts as a viable concept.
Understanding eWhoring. Online vice crime, encompassing everything from blackmail and extortion to a bizarre pyramid scheme. I’d say it was one of the most well-presented talks at the conference (both for the funny use of cat pictures as a proxy for illegal images, and because it wasn’t a dry talk about formally verifying protocols). The research analysed online forums where people trade image packs which are then used to trick people in online chat rooms, and at every stage people are exploited for money. The solutions proposed to tackle this may well affect legitimate sex workers – a necessary sacrifice, because other than educating end users (good luck), I doubt there’s much you could do besides technical measures like image fingerprinting. The big trouble with this approach is that you create a market for new, unfingerprinted, images. Unpleasant, but fascinating, stuff.
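For the curious, image fingerprinting is not cryptographic hashing: the point is that near-duplicates (a re-crop, a slight brightness tweak) should still match. Here is a toy "difference hash" sketch operating on a tiny grayscale matrix – a real pipeline would first decode and downscale the image, and production systems use far more robust schemes:

```python
def dhash_bits(gray):
    """gray: 2D list of pixel intensities. Returns a bit string encoding,
    for each pixel, whether it is brighter than its right-hand neighbour."""
    return "".join(
        "1" if row[i] > row[i + 1] else "0"
        for row in gray
        for i in range(len(row) - 1)
    )

def hamming(a, b):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return sum(x != y for x, y in zip(a, b))
```

Because only brightness *gradients* are encoded, small pixel-level edits usually leave the fingerprint intact – which is exactly why the counter-move is sourcing entirely new images.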
Cryptography & Protocols
There were quite a number of sessions dealing with cryptography and protocols; unfortunately, many of them went into quite rigorous detail about the proofs. That is good, but not really of great interest to me.
Benchmarking Flaws. One less in-depth talk discussed some of the issues in how researchers report the security of their own systems – how they sometimes abuse their results, misreport them, or make mistakes. It’s annoying that people make mistakes; it's more annoying that some people deliberately misreport their results.
Tell Me You Fixed It. This talk presented an amusing idea: when you scan for, and find, vulnerabilities, quarantine the victims until they patch their systems. The researchers partnered with an ISP and locked infected customers off from the web except for certain resources. This looks like a pretty good, if aggressive, way of motivating people to patch their systems.
Issue First, Activate Later. How do you handle secure communication between vehicles when they might not have access to the web? You also need to maintain privacy, avoid linkability, properly authenticate, and be secure from (for example) Sybil attacks. The proposed solution is to load all of the certificates onto a device at manufacture time, and distribute “unlock” keys for them at a later date, out of band. An interesting idea, as long as you can keep those keys secure.
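The shape of the scheme, as I read it, is "ship encrypted, release the key later". Below is a toy sketch of just that shape – the keystream here is a SHA-256 counter construction standing in for a real cipher, and must not be mistaken for actual cryptography; the paper's certificate formats and key management are not reproduced:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (a stand-in for a real cipher)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(cert: bytes, batch_key: bytes) -> bytes:
    """At manufacture time: store the certificate encrypted under a batch key."""
    ks = keystream(batch_key, len(cert))
    return bytes(a ^ b for a, b in zip(cert, ks))

# XOR is its own inverse: releasing the batch key later "activates" the batch.
unseal = seal
```

The appeal is that the vehicle never needs online connectivity at activation time – only some out-of-band channel for the short unlock key.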
In Encryption We Don’t Trust. This research followed some German participants over multiple years to see how their attitudes towards encrypted communications changed (or didn’t). The researchers were quite lucky to have conducted the early study before WhatsApp really took off and became popular. They shared some of the participants’ mental models, which, as ever, offered an interesting window into how the layperson sees the internet. Most of the participants either didn’t notice their chat apps enabling encryption or didn’t understand the identity verification systems. People care about encryption and private communications, but they don’t understand the tech that enables them.
PILOT. Indoor positioning is a tricky business. With little or no GPS data to go on, how do you figure out where a device is? Commonly, WiFi signals are used. Because this can yield extremely precise location accuracy, how does a user keep their privacy? This talk presented a solution in the form of two-party computation. I’m not sure I fully understand it, but the general idea seems to be to send parts of the positioning data to two servers that don’t collaborate, so that only the originating device, on recombining the results, can learn where it is. A good idea, but that “no collaboration” is a big assumption.
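The "split between two non-colluding servers" idea can be illustrated with additive secret sharing – this is my toy reconstruction of the general principle, not the paper's actual protocol, and it only works because the positioning step here is a linear function:

```python
import random

def share(vec):
    """Split an integer vector into two additive shares: s1 + s2 == vec."""
    r = [random.randint(-1000, 1000) for _ in vec]
    return r, [v - ri for v, ri in zip(vec, r)]

def server_eval(share_vec, weights):
    """Each server evaluates the same linear positioning model on its share.
    Alone, a share is just noise and reveals nothing about the signal."""
    return sum(w * s for w, s in zip(weights, share_vec))

def locate(signal, weights):
    """Device side: split the WiFi signal vector, send one share to each
    server, and recombine the partial results into the position estimate."""
    s1, s2 = share(signal)
    return server_eval(s1, weights) + server_eval(s2, weights)
```

If the two servers ever compare notes, they can add their shares and recover the raw signal vector – hence the weight of that "no collaboration" assumption.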
In a similar vein, the Rethinking Location Privacy talk suggested maintaining privacy during positioning by modelling movement, and then using that to generate a false “precise” position based on your rough location. This faked position could be sent to a server so you can keep your real position private while still getting fairly reliable localised services.
There were two talks about the privacy of electronic voting, but the most interesting was a third which dealt with paper voting. In Is Your Vote Overheard?, the researcher showed how the strategic placement of microphones (and a lot of free time) could let them accurately predict how you had voted, based on the sound of pencil on paper on a table. The takeaway: if you want truly secure and private voting, electronic voting aside, not even paper voting is totally invulnerable.
Mitch is an ML tool presented by researchers which has random interactions with a website and tries to detect whether CSRF vulnerabilities are present. It does this by analysing the kinds of information present during an HTTP request–response cycle. By now, CSRF shouldn’t even be a problem, as most libraries include ways to avoid it by default. Yet here we are, with the vulnerabilities still present. This will be more useful to an attacker (regardless of hat colour), because if a developer is aware that CSRF is an issue, they’ve probably fixed it, and if they aren’t, they’re not going to run this tool.
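To give a flavour of the kind of signal such a tool might pick up (this is a crude hand-rolled heuristic, not Mitch's actual feature set or classifier): a state-changing form that carries no hidden, high-entropy field is a plausible CSRF candidate.

```python
import math
import re

def shannon_entropy(s):
    """Bits of entropy per character; random tokens score high."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def form_looks_protected(form_html):
    """Crude check: does the form carry a hidden field whose value looks
    like an anti-CSRF token (long and high-entropy)?"""
    hidden_values = re.findall(
        r'<input[^>]*type="hidden"[^>]*value="([^"]+)"', form_html)
    return any(len(v) >= 16 and shannon_entropy(v) > 3 for v in hidden_values)
```

Mitch goes well beyond this, learning from many request/response features rather than one pattern, but the underlying intuition is similar.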
Domain Impersonation is Feasible is a review of lots of different CAs, including Let’s Encrypt, that tries to see just how good they are at verifying the real owner of a server. I’ve never had to deal with a commercial CA before, so it was interesting to see how they actually verify an owner. The nicest takeaway for me is that Let’s Encrypt is as good as (and in some cases better than) most commercial CAs at owner verification. Nice.
Using Guessed Passwords to Thwart Online Guessing. Password stuffing is apparently very common, in some cases accounting for a majority of traffic on certain sites. How do you stop it without adversely affecting users? You can’t just block after X attempts, or legitimate users will be blocked too. You can’t block IP ranges, or one single infected computer doing the guessing could knock out an entire internal network. The proposed solution is to record the incorrect guesses leading up to a correct one and, once login succeeds, check whether those prior guesses look like they came from an attacker or a legitimate user. There remain some big issues in how to store this kind of sensitive data, though.
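To make the distinction concrete (a toy sketch of the idea, not the paper's classifier, with a made-up popular-password list): failed guesses drawn from a popular-password dictionary look like an attacker working through a list, while a legitimate user's failures tend to be typo-like variants of their real password.

```python
# Hypothetical popular-password list; real deployments would use large
# leaked-password corpora.
POPULAR = {"123456", "password", "qwerty", "letmein", "111111"}

def suspicious_run(failed_guesses, popular=POPULAR):
    """After a successful login, decide whether the preceding failures
    look like dictionary guessing rather than a fumbling real user."""
    if not failed_guesses:
        return False
    hits = sum(g in popular for g in failed_guesses)
    return hits / len(failed_guesses) >= 0.5
```

The storage problem the talk flagged is visible even here: those failed guesses are themselves near-passwords, so logging them creates exactly the sort of sensitive dataset you least want leaked.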
The MALPITY solution to malware is to make tarpits – things that slow the spread of malware. The general idea of detecting and locking off infected machines at a network level is an interesting one, and seems to work. But it relies on the malware itself having bugs. One quick patch, and I’m not sure the tarpit solution will be very effective.
New to Me
A number of the talks I attended were about subjects where I either know nothing or am a novice.
DroidEvolver. ML is one of the trendy topics right now. This talk presented a tool that tries to learn and evolve over time, as malware evolves. Not knowing much about AI, I picked up a number of points new to me: poisoning attacks, and how a continually learning system could end up forgetting how to detect older malware.
Steroids for DOPed Applications. Researchers sure are masters of puns, aren’t they? DOP is something I have zero knowledge of. From what I can tell, it seems like a logical way to design applications, given how big data is the way of the industry right now, but I’m not sure how much use I would have for it. This talk presented a compiler that can be used to plan and design attacks on programs by reverse engineering the flows that data takes through them – a pretty cool idea, though I imagine it would be very tricky to use in practice against a remote system.
ReplicaTEE. This talk kept talking about “enclaves”, and it took me a while to catch on to what that meant. As far as I can tell, an enclave on a cloud host lets you run applications in an environment, like a VM, that the host provider (which you might not trust) can’t get into. Good idea! The talk presented a way to manage spinning up and stopping multiple enclaves when you might not even trust, or be able to access, the system that starts them in the first place. Not very useful to me, given that I don’t manage a massive cloud service, but good ideas all the same.
The European workshop on usable security was the primary reason for my visit, and certainly held my attention far more than the main conference. The attitude is also a lot more personal and (to my eye) informal here.
A number of talks looked at usable security (and privacy) for end users.
Why Johnny Fails to Protect his Privacy detailed some of the reasons that users end up not opting for the most private settings in services: everything from privacy fatigue and the privacy paradox to a simple lack of awareness of the issues. Interesting for me, given that I try as much as possible to protect my privacy, sometimes to my own detriment.
Don’t Punish All of Us measured attitudes towards the rollout of a 2FA system. Attitudes are generally positive when the system works well, but with so many occasions where it doesn’t, things start to tumble downhill. The talk suggests that some simple UX fixes could have prevented a lot of upset. I think 2FA is nice, but even for me it can be a bit of a hassle, and I’m not sure there’s ever been a time when it actually saved me from being hacked.
Analysis of Three Language Spheres. Every country in the world has passwords, but how do they differ? It was very interesting to see how passwords in leaked data sets differ between English, Chinese and Japanese users. Of course these users don’t share the same alphabets, and often they may be limited to ASCII characters. English users seem to favour letters, Chinese users numbers, and Japanese users a combination of both. The research also goes into some of the types of words and dates used and how they differ across cultures. By analysing these data sets, the researchers built a model that could guess passwords better if it knew the locale of the user beforehand, due to the similarities within a culture’s passwords. This won’t matter once we all start using password managers, though… I wonder how use of those differs across cultures.
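A toy version of that locale effect (my own illustration, not the paper's model, with made-up sample passwords): estimate a locale's character-class preferences from sample passwords, then score a candidate guess by how well its composition matches.

```python
def char_classes(pw):
    """Fraction of letters and of digits in a password."""
    letters = sum(c.isalpha() for c in pw) / len(pw)
    digits = sum(c.isdigit() for c in pw) / len(pw)
    return letters, digits

def locale_profile(samples):
    """Average character-class composition over a locale's sample passwords."""
    profiles = [char_classes(p) for p in samples]
    n = len(profiles)
    return (sum(p[0] for p in profiles) / n, sum(p[1] for p in profiles) / n)

def fit(guess, profile):
    """1.0 = composition matches the locale perfectly, 0.0 = opposite.
    A guesser would try high-fit candidates first."""
    gl, gd = char_classes(guess)
    return 1 - (abs(gl - profile[0]) + abs(gd - profile[1])) / 2
```

An all-letter guess scores well against a letter-heavy (e.g. English-like) profile and poorly against a digit-heavy one, which is the intuition behind ordering guesses by locale.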
Detecting Misalignments between Security and User Perceptions. Security is hard, and this talk looked into how users can misunderstand the security of a system. The application tested was PEP, supposedly intended to improve email privacy as a reimagining of PGP. The talk presented a kind of modelling of user perception which I’m not sure I entirely followed, but it looks like a neat way to analyse the problem.
A Review of URL Phishing Features was an attempt to catalogue different aspects of URL phishing. There are two main parts: features that are user-facing (such as the context you see a URL in) and those that are computer-facing (such as character substitutions). Identifying common features is useful for teaching users about them, but you have to be careful, because their use evolves over time.
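The character-substitution class of computer-facing features is easy to demonstrate. This is a toy check with an illustrative (far from exhaustive) substitution table, not the paper's catalogue:

```python
# Illustrative lookalike substitutions seen in phishing domains.
# Real taxonomies cover Unicode homoglyphs, typo distances, and more.
HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m"}

def normalise(domain):
    """Map lookalike characters back to the letters they imitate."""
    for fake, real in HOMOGLYPHS.items():
        domain = domain.replace(fake, real)
    return domain

def impersonates(candidate, target):
    """True if candidate differs from target but reads the same after
    undoing the substitutions."""
    return candidate != target and normalise(candidate) == target
```

The talk's caveat applies here too: a fixed table like this goes stale as attackers shift to substitutions it doesn't cover.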
Two of the talks were “visions” – pilots and prototypes of ideas – related to smart homes.
Shining Light on Smart Homes proposed designing a system to help users visualise and choose connected devices that meet their expectations of privacy and security. An interesting idea, but it would take a lot of work to maintain, in terms of scouring the market and rating devices on their security and privacy.
Usable Authentication in the Smart Home is a project aiming to research how to authenticate devices in an easy way where passwords are not usable (assuming passwords ever were usable, but that’s another matter). It raises an interesting point I had never considered before: when you have multiple people in a home with devices that might personalise to each user, how do you distinguish between them? A smartphone is the obvious choice, but not exactly usable and seamless.
Developers need usable security, too. One of these was a talk given by me, but I won’t go into that in detail. It went well enough, I think.
A Survey on DCS looked at the current state of the art in research. Some interesting observations came out of it – security should be a hard requirement in application design, not merely a non-functional one tacked on; and development organisations should have security champions who can be turned to.
2 Fast 2 Secure was an analysis of a company’s behaviour after it had suffered a security breach. It’s nice to see inside a company and how attitudes have changed; less nice to see that they haven't necessarily changed for the better. Security theatre is quite prominent, which is never good, as it undermines trust in the systems put in place. The research identifies a number of themes that can be used to classify behaviour. I don’t plan on doing any of that kind of thing myself, but it was interesting to hear about anyway.
I’ve mostly written this blog post for myself to try and review what I saw and heard at the conference and workshop. I’ve used words like interesting a lot… it’s certainly an interesting adjective. I think the organisers, IEEE, are going to publish the papers if you’re interested to read them in full. Hopefully something here sparks your interest.