Computer Misuse Act Response

[Image: A photo of the British Parliament at Westminster]

The UK Government recently put out a public call for information regarding the UK’s Computer Misuse Act (1990). That’s the piece of legislation that says what you can and can’t do with computers. I decided to respond, and to post what I had to say here in case it otherwise goes unnoticed. And let’s face it, this is Westminster we’re talking about. They don’t care what a random Scottish guy has to say about the way of things.

If you’re British and want to make your own response, the information on how to do so is here: https://www.gov.uk/government/consultations/computer-misuse-act-1990-call-for-information. The question numbers below refer to the questions in the linked document.


Q1-2: Context

In the context of higher education, my university currently runs a one-semester module dedicated to the CMA and other laws that relate to computers (such as data protection and investigative powers legislation). This is somewhat tedious but does provide valuable insight into how strict the law is.

Other modules, such as those that deal with ethical hacking and network security, need very strict protections (dedicated VMs, for example) so that students don’t accidentally run afoul of the CMA while conducting their studies.

Overall, from my own experience, the way the CMA is presented in an educational context makes it come across as a barrier surrounded by red tape. For students who may want to go into computer security, being told that you need explicit (ideally written) permission before attempting any research comes across as rather archaic and off-putting.

Q3-6: Offences

A change of attitude regarding ransomware

Currently there is a rampant problem of ransomware. A ransomware group attacks a company by encrypting the contents of its disks, and the company is forced to pay a fee to recover its files. Many companies currently pay (either out of pocket or by claiming on cyber insurance) to unlock their files, as this is the easiest and quickest means of recovery. But this reinforces the effectiveness of the “business tactic” the attackers are using and keeps ransomware profitable.

Proposal: Make it illegal for companies, or cyber insurers, to pay ransoms to attackers

Upside: It would become unprofitable for ransomware groups to attack victims in the UK if there were no legal way for their victims to pay, thereby reducing the incentive to attack in the first place.

Downside: Companies that do not have rigorous backup strategies would lose their data. So the government ought to provide additional training and guidance for businesses on how to conduct effective backup strategies. The increased threat of losing data could also encourage businesses to invest in backups.
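To make “effective backup strategies” concrete: a backup only counts once you have tested that you can restore from it. Below is a minimal sketch in Python (the paths and mirror layout are hypothetical assumptions for illustration, not a recommendation of specific tooling) of the kind of routine check such guidance could encourage. It compares checksums of live files against an offline backup copy, catching both missing files and silent corruption.

```python
# Minimal backup-verification sketch. Assumes the backup is a mirror of the
# source tree, mounted (temporarily!) at a hypothetical path. Illustrative
# only; this is not a complete backup strategy.
import hashlib
from pathlib import Path

SOURCE = Path("/srv/company-data")    # live data (hypothetical path)
BACKUP = Path("/mnt/offline-backup")  # offline mirror (hypothetical path)

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify() -> None:
    missing, mismatched = [], []
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = BACKUP / src.relative_to(SOURCE)
        if not dst.is_file():
            missing.append(src)
        elif sha256_of(src) != sha256_of(dst):
            mismatched.append(src)
    print(f"missing from backup: {len(missing)}, stale or corrupted: {len(mismatched)}")

if __name__ == "__main__":
    verify()
```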

I argue that negligence towards computer security that leads to breaches (especially at mid-to-large companies that do have the resources to work on it) should be considered, if not misuse of computers, some kind of criminal negligence.

Consider Dark Patterns used to undermine the law as computer misuse

Many large companies (often based overseas, in America or, increasingly, China) make their profits through surveillance capitalism. If they inform users of how they will use customer data, and get clear, informed consent, this is all legal. However, some companies use “dark patterns” or other methods of obfuscation or confusion to trick users into agreements that they may not have made had they been properly informed.

Proposal: Companies or individuals that use dark patterns to trick or mislead users should be held accountable

Upside: It would incentivise providers of technologies to be more upfront about how their services operate. It would also help to rebalance the relationship between the public and the actors who store data on them.

Downside: Defining what legally constitutes a dark pattern is incredibly tricky, and there is a danger of government overreach into freedom of design and expression, so further consultation would be necessary.

Artificial Intelligence

Question 5 specifically deals with “future areas”. One emerging area is AI, and the use of algorithmic approaches in places where previously a human would have been present at every step. I do not have any specific proposals for this, but here are some questions worth considering:

  • Say a malicious actor sets up an AI that probes for security flaws and in doing so causes damage. Who is at fault? The actor that set up the AI, the original author of the AI, or the AI itself? My intuition would be the actor that set up the AI, but is the law clear on this?
  • If there is a flaw in an AI, is abusing that flaw to commit a crime considered computer misuse? For example, an attacker uses a flaw in a computer vision system to make an autonomous car crash itself. Who is at fault? The attacker, the manufacturer of the car, or the manufacturer of the AI? The attacker is obviously at fault, but do the others share some of the blame? Is the law clear on this?
  • Currently, machine learning algorithms tend to be unexplainable (research into algorithms that can “explain” the conclusions they come to is ongoing). Should the law require that AI in critical systems (those that affect lives or livelihoods) be able to explain its conclusions? That would make things safer, at the cost of slowing down progress.

Q7-9: Protections

Protections for “white-hat” security researchers

There is currently a major flaw in the CMA: if a security researcher acting in the best of interests discovers a vulnerability in a system before seeking explicit permission to probe it, they are breaking the law. This should not be the case. It creates a big barrier to entry, as any security researcher first has to contact an organisation to get permission, and only then can they start to look for security bugs. And any organisation that does not like having security flaws pointed out to it can take a security researcher acting in the best of interests to court.

Proposal: Give security researchers who are acting with the best interests of other parties in mind the right to look for security flaws and bugs without fear of prosecution.

Upside: More security researchers could look for bugs without fear of prosecution, even if the party they are investigating is not receptive to such scrutiny.

Downside: Some nefarious hackers may try to use this as a defence if they are caught, so the law would need a rigorous definition of what constitutes a “white-hat” / “blue team” / “good guy” hacker and their legitimate activities.

The right to bypass & discuss DRM, Technical Protections and Anti-Tamper technology

DRM is a means by which publishers originally intended to protect against copyright theft. However, in recent years its use has expanded far beyond that initial purpose, leading to situations where software and hardware are “protected”, making it illegal for researchers or hobbyists to investigate, modify or repair the inner workings of technology. Under some interpretations of DRM rules, even discussing or sharing information is considered a breach of the law. This makes for a very hostile environment when it comes to investigating technology. As more technology becomes embedded, as with the Internet of Things, researchers having the right to bypass DRM becomes increasingly important.

Proposal: Assert the right to investigate, bypass and freely discuss DRM or other technical protections

Upside: Researchers would be able to see past DRM and de-obfuscate parts of technology to identify security flaws. They would not need to fear repercussions if they wanted to share these ideas with others in the community. They may also be able to offer “patches” or repairs for DRM-encumbered products which contain vulnerabilities.

Downside: This would undermine efforts to use DRM to protect against copyright theft. However, given weaknesses such as the “analogue hole” and the rampancy of internet piracy despite existing DRM protections, the assertion that DRM was ever useful in the first place is questionable.

EULAs & Terms of Service

Related to all this is the use of terms of service and end-user licence agreements, which may also preclude a safe environment for security researchers. The law ought to supersede any such texts that prohibit proactive security research.

Q10-11: Powers

No comment

Q12: Jurisdiction

It is difficult to comment on the reach of jurisdiction. The global nature of the internet means it is extremely easy for an attacker to commit a crime in the UK while being located abroad, or even to commit a crime in the UK, from the UK, while using a proxying service to appear to be abroad.

Q13: Sentences

Young offenders

In recent times there have been instances of children or young adults committing crimes through computer misuse. I would suggest that sentencing for these offenders be lenient. As I noted at the start of my reply, university courses that deal with computer science will dedicate time to teaching the law. But the democratisation of computer science and hacking education means many new hackers will not take the traditional paths, and may miss such education.

If it is simply a question of probing and hacking, we should not needlessly punish people who offer enormous potential to the ethical hacking community, provided we give them training in the “right” way of doing things. However, intent to cause harm should still be investigated and prosecuted accordingly.

Computer Bans

I have heard in reporting on legal cases (not necessarily just under the CMA) that one outcome is for an offender to receive a computer ban. In today’s world that seems incredibly unjust, as life without using computers would be tiresome and difficult. Everything from banking to accessing government resources to finding a job increasingly requires computers. Even communicating over the phone may require the use of a pocket-sized computer. I would propose that we ensure no future offenders receive computer bans, given that computers may be necessary to their healthy reintegration into society.

Q14: General

Protections for users of encryption

I am seeing a lot of vocalising from government about the dangers of encryption and how some criminals use it to hide from the law. However, encryption – the ability to secure files and communications – is a cornerstone of modern computer security. Any attempt to undermine encryption using “backdoors” for government or law enforcement agencies would totally undermine the security of the system in question.

Proposal: Ensure that people maintain the right to use encryption unhindered (without government backdoors or weakening) to secure themselves from bad actors

Upside: Normal users of technology would be safe against the possibility of attackers abusing any government backdoors, as there would not be any backdoors.

Downside: Law enforcement may have a harder time getting access to specific sets of encrypted data. However, traditional investigative methods and the investigation of unencrypted metadata would still allow criminals to be tracked down.
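To illustrate why there is no safe middle ground, here is a minimal sketch using the third-party Python cryptography package’s Fernet recipe (my choice of library is an assumption for illustration; any standard authenticated encryption behaves the same way). The only way in is the key. A “backdoor” would have to be either a second key or a deliberately weakened cipher, and either one would be just as available to attackers as to law enforcement.

```python
# Sketch: standard authenticated symmetric encryption via the `cryptography`
# package's Fernet recipe (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # secret key, held only by the user
token = Fernet(key).encrypt(b"private correspondence")

# Only the holder of the key can decrypt:
assert Fernet(key).decrypt(token) == b"private correspondence"

# Anyone else, however well resourced, just gets an error. Weakening this
# property for one party weakens it for every party:
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("wrong key: the ciphertext is useless without the real key")
```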

Q16-17: International Best Practice

No comment

Join the Conversation

  1. Thank you for bringing this to people’s attention. I think I will try to find a moment to draft a response myself.

    I am a little unsure about the proposal to make ransom payments to attackers illegal. I agree that the only way to make it unprofitable for these groups is to keep adequate backups*, and never pay – but I doubt that would prevent attacks occurring against UK targets. A lot of ransomware attacks are opportunistic and minimally- (or un-) targeted, so the attackers receive ransom payments from victims across the globe. Cutting out their small UK “market” doesn’t really hurt them much, as their incentive to attack remains largely unchanged while they still receive payments from the entire rest of the world. It might take more effort for them to specifically exclude the UK from a given attack than it would to just release a blanket global attack indiscriminately. Given that these people simply don’t care about their victims, they simply have no incentive to do anything to avoid destroying UK computer users’ files.

    “The L337 haxxor group has encrypted your files. Please send 0.0000001 bitcoin to this address. Oh, you’re in the UK? Bad luck, old thing. It’d be easier to find Lord Lucan than your files by now. Oh dear, how sad, never mind”.

    That said, maybe it’d be a good thing and force people to consider backups properly. Especially corporate users, who should really know better. Additionally, if it works here, then other governments would follow suit, which would have the simultaneous benefits of truly removing the ransomware revenue stream and of the UK appearing to “lead the world” with something.

    * Using “The Cloud”** is not a backup. Using a spare hard disk that you leave permanently plugged in to your Windows computer isn’t a backup. It’s not a backup until it’s both offline and redundant, and you’ve tested that you can retrieve all your stuff from it.

    ** There’s no such thing as “the cloud”, just other people’s computers. And whatever the honey-sweet marketing BS says, those other people don’t care about the safety of your files anywhere near as much as you do.

    1. You’re right. It is rather harsh. But seeing as this is just a call for ideas, now seems like the perfect time to throw out radical ideas, which can then be teased into something more concrete and viable.

  2. You pose many valid questions and proposals and I must agree with you. Although I do not live in the U.K., many of our Canadian laws are just as outdated and obscure.
    DRM – one of my pet peeves also. I have many books I’ve bought from Amazon. What happens if I try to convert them to read in Calibre on Linux? (There is no Amazon book viewer except for Windows and dedicated e-readers.) Also, my Kindle or Windows reader will not allow me to import other e-content I’ve bought from other retailers or from Gutenberg.

    EULAs & Terms of Service
    Big one here… I remember getting a computer with no OS in the 90s. When I went to buy Windows 3.11, the box said something like:
    ‘Removing the wrap from this box indicates your acceptance of MS’s Terms of Service and EULA’. But you can’t get at the documentation unless you tear off the shrink wrap and open the box.
    I put it back and bought OS/2.

    Great thoughts here. Well thought out and written 👍.
