4 alternatives to encryption backdoors

End-to-end encrypted communication has been a boon to security and privacy over the past 12 years, since Apple, Signal, email providers and other early adopters began deploying the technology. At the same time, law enforcement authorities around the world have been pushing for technical means to peer into end-to-end encrypted content, arguing that the lack of visibility gives criminals, terrorists and child abusers a haven in which to hatch their plans with impunity.

In 2016, Apple prevailed in a now-famous legal showdown with FBI Director James Comey over unlocking an encrypted iPhone used by a mass shooter in San Bernardino, California. In 2019, Attorney General William Barr reignited the so-called backdoor debate, advocating ways to break encryption to thwart those who distribute child sexual abuse material. Last month, the UK government launched a public relations campaign to lay the groundwork for eliminating end-to-end encryption, ostensibly to crack down on child sex offenders.

Cybersecurity experts and privacy advocates have consistently condemned these efforts as misguided attempts to break a technology they see as key to keeping the internet safer and users better protected from malicious actors. Yet it’s hard to argue that abuse, crime and malware aren’t rapidly increasing on the internet.

The question then arises: without introducing harmful encryption backdoors, how can organizations identify criminals communicating on their networks if those communications are hidden? Experts who spoke at last week’s Enigma conference offered a few answers.

4 suggestions for spotting threats and questionable content

The first step to solving this dilemma is to define end-to-end encryption, said Mallory Knodel, CTO at the Center for Democracy and Technology (CDT). “What end-to-end encryption is, is surprisingly not agreed upon. There is some agreement or convergence around the use of end-to-end encrypted messaging, for example, the use of the Signal protocol. The IETF is working to standardize what is called the Messaging Layer Security protocol, which can be used in messaging, video, and a variety of contexts,” she said.

Still, the concept remains largely undefined. “Should E2EE include forward secrecy or deniability or other features? It’s not necessarily agreed,” Knodel said. “Now is a really critical time to define what end-to-end encryption features are required, what breaks it and what doesn’t.”

Citing a study CDT published last August, Knodel said several proposals have been made for detecting threats and questionable content in end-to-end encrypted environments. The first is user reporting, in which users themselves block and report abusive content, a feature that puts power in their hands. “We didn’t think it was terrible,” she said, particularly if it is done in a privacy-preserving way.
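To make the mechanism concrete, here is a minimal, hypothetical sketch of what client-side reporting in an end-to-end encrypted app could look like: the recipient, who can already read the message, voluntarily forwards the decrypted content and its context to the provider. The endpoint URL, field names and `report_abuse` helper are invented for illustration and do not describe any real product.

```python
import json
from dataclasses import dataclass, asdict
from urllib import request


@dataclass
class ReportPayload:
    reporter_id: str
    sender_id: str
    sent_at: str          # ISO-8601 timestamp taken from the reporter's device
    decrypted_text: str   # plaintext the reporter chooses to disclose
    reason: str


def report_abuse(payload: ReportPayload) -> None:
    """Send a user-initiated abuse report to a hypothetical provider endpoint."""
    req = request.Request(
        "https://provider.example/abuse-reports",  # hypothetical endpoint
        data=json.dumps(asdict(payload)).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```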

The downside of user reporting as a way to spot unwanted content is that it undercuts plausible deniability, a property of some end-to-end encrypted systems that lets a sender deny having sent a particular message, Knodel said.

Another technique is metadata analysis, which examines the metadata that typically accompanies content transmitted over the internet, such as file size, file type, the date and time it was sent, who sent it and who received it. “We wouldn’t necessarily suggest creating more metadata to do analysis, but in general the metadata that is already there could be a way to do some degree of content moderation, especially in terms of behavior,” she said. “Platforms should always reduce the amount of metadata they retain, for sure.”
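As a rough, hypothetical illustration of metadata-only moderation, the sketch below derives coarse behavioral signals from fields a provider already handles, without ever touching plaintext. The field names and thresholds are invented, not drawn from any real platform.

```python
from dataclasses import dataclass


@dataclass
class MessageMetadata:
    sender_id: str
    recipient_count: int
    attachment_bytes: int
    attachment_type: str   # e.g. "image/jpeg"
    sent_hour_utc: int     # 0-23


def behavioural_flags(history: list[MessageMetadata]) -> list[str]:
    """Return coarse behavioural signals derived purely from metadata."""
    flags = []
    if len(history) > 500:
        flags.append("very high sending volume")
    if sum(m.recipient_count for m in history) > 2000:
        flags.append("mass-forwarding pattern")
    if any(m.attachment_bytes > 50_000_000 for m in history):
        flags.append("unusually large attachments")
    return flags
```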

A third technique for spotting unwanted content is traceability, pushed mainly in India and Brazil as a way to track down misinformation on WhatsApp. It is a scheme that involves no review of content but instead asks where the data came from: who was the first person to send the message, how many people have seen it, and so on.
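The sketch below is a rough, hypothetical illustration of the record-keeping such a mandate implies: the platform would have to retain, for every message, its originator and each forwarding hop. Nothing here reflects how WhatsApp or any other service actually works.

```python
from collections import defaultdict

# message_id -> ordered list of user_ids the message has passed through
forward_chains: dict[str, list[str]] = defaultdict(list)


def record_forward(message_id: str, user_id: str) -> None:
    """Append a forwarding hop to the message's chain."""
    forward_chains[message_id].append(user_id)


def originator(message_id: str) -> str | None:
    """The first account that sent the message, if known."""
    chain = forward_chains.get(message_id)
    return chain[0] if chain else None


def reach(message_id: str) -> int:
    """How many distinct accounts have handled the message."""
    return len(set(forward_chains.get(message_id, [])))
```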

“I see traceability as enhanced metadata, and that’s exactly what we don’t want platforms to do,” Knodel said. “Platforms probably don’t already track the origin of every message, who it passes through and who sees it, much less hand that over to law enforcement. We would reject traceability.”

A fourth model is perceptual hashing, which compares known, prohibited content held in a database against content circulating on the network, using a “fingerprint” derived from the prohibited content. “It’s not something we recommend. We wonder if it is effective or good enough,” she said.
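To show the flavor of the technique, here is a minimal difference-hash (dHash) sketch in Python using Pillow. It is an illustrative stand-in for production fingerprinting systems such as PhotoDNA, not a description of how any provider actually does matching, and the values in `known_hashes` are placeholders.

```python
from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Shrink the image, convert to grayscale, and encode whether each pixel
    is brighter than its right-hand neighbour as one bit of the fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")


# Placeholder fingerprints standing in for a database of known prohibited images.
known_hashes = {0x3A5F9C0012ABCDEF}


def matches_known(path: str, threshold: int = 5) -> bool:
    """Flag an image whose fingerprint is close to any known fingerprint."""
    h = dhash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```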

Finally, predictive matching uses models trained on known “bad stuff” to flag new content that resembles it. “Predictive modeling is essentially worse than perceptual hashing, so we reject it,” she said.
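A toy sketch of the idea, using scikit-learn: a classifier trained on labelled examples tries to flag previously unseen content that resembles the known-bad examples. The training data here is invented and far too small to be meaningful; it only shows the shape of the approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: 1 = unwanted, 0 = benign.
train_texts = [
    "buy followers cheap click this link now",
    "send money immediately or your account is gone",
    "let's meet for lunch tomorrow",
    "here are the meeting notes you asked for",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Probability that a new, previously unseen message is unwanted.
print(model.predict_proba(["click this link to get free followers"])[0][1])
```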

No silver bullet for online abuse

Riana Pfefferkorn, a researcher at the Stanford Internet Observatory, classes methods such as user reporting and metadata analysis as content-oblivious because they don’t require the provider to access the content itself. The Observatory surveyed 13 online providers, including WhatsApp, Facebook Messenger, Instagram Messaging and others that collectively serve most internet users around the world.

Based on the survey results, it is clear that while end-to-end encryption hampers automated scanning as an abuse detection tool, because scanning is content-dependent, it does not affect user reporting or metadata analysis, because those are oblivious to the content. It’s also clear, Pfefferkorn said, that encryption doesn’t uniformly affect providers’ abuse detection efforts. Specifically, in an end-to-end encrypted environment, content-oblivious tools are considered far less useful than automated scanning for child sexual abuse imagery (CSAI).

That alone, however, should not be an excuse to break encryption. “For policy makers, the big takeaway I want you to take away from this conference is that there is simply no silver bullet for online abuse. Automated content analysis is too often seen as a silver bullet and panacea for online abuse,” Pfefferkorn said. Additionally, “CSAI content is unique. It cannot be the basis for building a trust and safety program. There is no guarantee that automated content analysis will continue to be as effective against CSAI as it is now.”

Rising hate and harassment calls for new ideas

Google researcher Kurt Thomas said the recent rapid rise in online hate and harassment should prompt a rethink of how providers handle this type of content. “The problem is that a lot of the existing protections that we have in this context are really focused on for-profit cybercrime,” he said. “We’ve made great strides in warning people about spam, phishing and malware and preventing them from going to dangerous websites. We’ve warned them about data breaches and password reuse, as well as behaviors that put them at risk of being hacked. None of these map onto the security and privacy needs that arise in the context of hate and harassment. We need to expand our security threat models to deal with attacks that don’t have the same scale or profit incentives.”

Hate and harassment threat actors are not motivated by money, Thomas said. “The goal is to silence their [victim’s] voice, damage their reputation, reduce their sexual or physical safety, or even reduce their financial security or ability to function independently.”

Addressing hate and harassment “is going to require a unique combination of warnings, nudges, moderation, automated detection, or even just conscious design,” Thomas said. “How we address toxic content will be fundamentally different from how we handle surveillance, impersonation, or the leaking of intimate content online.”

Copyright © 2022 IDG Communications, Inc.
