Give Me E2EE or Give Me Death

PLUS: Beware the Tiny Stick of Regulation

Your weekly dose of Seriously Risky Business news is written by Tom Uren, edited by Patrick Gray with help from Catalin Cimpanu. It's supported by the Cyber Initiative at the Hewlett Foundation and founding corporate sponsor Proofpoint.

Lady Liberty by Miltiadis Fragkidis on Unsplash

Signal says it will pull out of the UK market if the country's Online Safety Bill forces it to weaken its encryption. It won't be asked to do that, but it may well be asked to make other compromises.

Meredith Whittaker, president of the Signal Foundation nonprofit, told the BBC that the organisation "would absolutely, 100% walk" if forced to weaken the privacy of its messaging system.

Although the UK's proposed Online Safety Bill aims to make the internet safer (here is a good background overview), it has received its fair share of criticism. Advocates of strong encryption are particularly concerned about sections that give the regulator the power to tell companies that they must "use accredited technology to identify CSEA [child sexual exploitation and abuse] content, whether communicated publicly or privately by means of the service, and to swiftly take down that content". (The bill also covers terrorism-related content like beheading videos etc. Grim.)

In this case the regulator, Ofcom, gets to decide what technology is accredited based on meeting minimum standards of accuracy. Advocates worry that these technologies will undermine or even break the strong end-to-end encryption offered by many services.

Simply put, the intent is to force communications platforms to do more about CSEA and terrorism-related content. The government's perception is that many companies do the bare minimum to tackle the problem and that regulation is needed to force them to take meaningful action.

There are many approaches that can mitigate online harms, including the spread of CSEA material, without compromising end-to-end encrypted messaging. These include, for example, examining metadata for suspicious behaviour, preventing inappropriate contact between children and adults, and making it harder for banned users to recreate accounts. Many of these measures have a privacy impact, but we think that impact can be justified.

At a high level, the UK legislation's idea of forcing companies to do more is similar to Australia's 2021 Online Safety Act. That Act requires companies to meet Basic Online Safety Expectations (aka the Expectations), which are essentially set by ministerial decree and so can be amended easily. When it comes to encryption, the Expectations state:

If the service uses encryption, the provider of the service will take reasonable steps to develop and implement processes to detect and address material or activity on the service that is unlawful or harmful.

As an aside, the Expectations also explicitly rule out steps that would weaken encryption or make it less effective. This is good, although we'd like to have seen it in the legislation rather than in the more easily amended Expectations.

In addition to empowering the Minister to set the Expectations, the Act also gives the eSafety Office the ability to issue "transparency notices" that require companies to report on how they are implementing these expectations.

The eSafety Office has already published a summary of the first tranche of replies it received from companies including Apple, Meta, WhatsApp and Microsoft. It makes for interesting reading, mostly because it lays out just how different various platforms' approaches to mitigating CSEA material are. It's also immediately clear that there are opportunities for improving their approaches that have nothing to do with encrypted messaging.

There is variety, for example, in how robustly different services try to prevent offenders from creating new accounts after they've been banned from a platform. All of the providers eSafety asked took some steps, but some used a far wider range of indicators to prevent re-registration.
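
To make "a far wider range of indicators" concrete, here is a minimal, entirely hypothetical sketch of how a service might combine several signals (device fingerprint, phone number, payment instrument, email) to flag a likely re-registration by a banned user. The indicator names, weights and threshold are our own invention, not any provider's actual implementation.

```python
# Hypothetical sketch of re-registration detection: compare a new signup's
# signals against those recorded for banned accounts and block when enough
# of them match. All names, weights and thresholds are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SignupSignals:
    device_fingerprint: str             # hash of device/browser characteristics
    email: str
    phone_number: Optional[str] = None
    payment_hash: Optional[str] = None  # hash of a payment instrument, if supplied

# Signals previously seen on banned accounts, keyed by indicator name.
BANNED_SIGNALS = {
    "device_fingerprint": {"fp-3c9a"},
    "email": {"banned@example.com"},
    "phone_number": {"+15550100"},
    "payment_hash": {"pay-77ab"},
}

# Rough weights: some indicators are much stronger evidence than others.
WEIGHTS = {"device_fingerprint": 3, "phone_number": 3, "payment_hash": 2, "email": 1}

def reregistration_score(signup: SignupSignals) -> int:
    """Sum the weights of every indicator that matches a banned account."""
    score = 0
    for name, weight in WEIGHTS.items():
        value = getattr(signup, name)
        if value is not None and value in BANNED_SIGNALS[name]:
            score += weight
    return score

def should_block(signup: SignupSignals, threshold: int = 3) -> bool:
    return reregistration_score(signup) >= threshold

# A fresh email but a known device fingerprint is enough to flag the attempt.
print(should_block(SignupSignals("fp-3c9a", "fresh@example.com")))  # True
```

A service that checks only an email address is trivial to evade; one that also checks device and payment indicators is much harder to get around, which is essentially the difference the transparency answers reveal.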

Last week the eSafety Office issued a second tranche of transparency notices to TikTok, Twitch, YouTube, and Twitter among others (good luck getting answers from Twitter, lol). It is pretty clear that it intends to use its powers to strongly encourage service providers to up their game.

This is where we think Signal faces a risk. Even though it won't be asked to undermine encryption, Signal may well be asked what other measures it can take to mitigate CSEA material — maybe in Australia, maybe in the UK, or maybe even in the EU or US.

Signal deliberately takes steps to minimise the amount of information it collects and only responds to law enforcement requests with the account's creation date and the date that account last connected to its service. This doesn't leave much scope to mitigate CSEA material risks.

We still think there are options for Signal, though. Client-side detection of CSEA — designed to be as privacy preserving as possible — might be one of them.

A hit on illegal material during a client-side scan doesn't have to result in a report to law enforcement. Escalating account restrictions on subsequent detections (banned for 48 hours, a week, two weeks, two months etc) would do at least something to counter CSEA proliferation over Signal. Our feeling, however, is that Signal will fight any such suggestion tooth and nail.
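
To make the escalation idea concrete, here is a minimal sketch of such a schedule using the durations above. It is purely our illustration of the policy, not anything Signal has built or proposed, and it deliberately involves no reporting to anyone.

```python
# Hypothetical sketch of escalating, client-side account restrictions on
# repeated detections of known illegal material. Nothing here reports to
# law enforcement; the only consequence is a temporary suspension.
from datetime import timedelta

# Suspension lengths for the 1st, 2nd, 3rd and 4th detection.
ESCALATION_SCHEDULE = [
    timedelta(hours=48),
    timedelta(weeks=1),
    timedelta(weeks=2),
    timedelta(days=60),
]

def suspension_for(detection_count: int) -> timedelta:
    """Return the suspension length for the nth detection (1-indexed)."""
    if detection_count < 1:
        raise ValueError("detection_count must be >= 1")
    # Repeat offenders beyond the schedule keep getting the longest suspension.
    index = min(detection_count, len(ESCALATION_SCHEDULE)) - 1
    return ESCALATION_SCHEDULE[index]

for n in range(1, 6):
    print(n, suspension_for(n))  # 2 days, 7 days, 14 days, 60 days, 60 days
```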

It's a doggedly principled position, but we don't know how much sway it will hold with regulators.

Beware the Tiny Stick of Regulation

The Biden administration has foreshadowed that its soon-to-be-released national cyber security strategy (Thursday, apparently!) may involve shifting at least some liability for poor design and security vulnerabilities onto software companies. The rhetoric is tough, but we're expecting the action to be timid.

In recent weeks there has been a steady drumbeat of news releases spreading the word that private industry will be expected to do more when it comes to cyber security. In early January, for example, The Washington Post reported that a draft copy of the strategy calls for regulation of all critical sectors and even for shifting liability "onto those entities that fail to take reasonable precautions to secure their software".

CISA Director Jen Easterly has also been hitting the same theme. This week, in a robust speech, Easterly described many technology products as "dangerous-by-design". Easterly continued:

We’ve normalised the fact that technology products are released to market with dozens, hundreds, or thousands of defects, when such poor construction would be unacceptable in any other critical field.

Easterly argues that we've collectively become accustomed to technology manufacturers making unsafe products and that it is time for a "fundamental shift" to "value safety over other market incentives like cost, features, and speed to market".

We don't agree with Easterly's framing that these products are inherently unsafe and essentially accidents waiting to happen, as most serious cyber security problems are the result of adversary action rather than accidental happenstance. But we entirely agree with the broader thrust of her argument that manufacturer incentives need to be realigned:

In sum, we need a model of sustainable cybersecurity, one where incentives are realigned to favour long-term investments in the safety and resilience of our technology ecosystem, and where responsibility for defending that ecosystem is rebalanced to favour those most capable and best positioned to do so.

Easterly also mentioned the possibility of shifting liability for products onto the software manufacturers that make them:

The government can also play a role in shifting liability onto those entities that fail to live up to the duty of care they owe their customers… To this end, government can work to advance legislation to prevent technology manufacturers from disclaiming liability by contract, establishing higher standards of care for software in specific critical infrastructure entities, and driving the development of a safe harbour framework to shield from liability companies that securely develop and maintain their software products and services.

We've railed before about software vendors shipping critically important software with moronic vulnerabilities, so on one level we like the idea of some sort of punitive response (punish the WICKED!). But shifting liability is a notoriously thorny issue as is discussed in this Washington Post article.

And there is a tension here with some of the other levers the government would like to pull.

The US government would also like to improve vendor transparency. Easterly only touched on this in her speech, but in a recent Foreign Policy article she wrote:

When most companies detect a cyber-intrusion, too often their default response is: call the lawyers, bring in an incident response firm, and share information only to the minimum extent required. They often neglect to report cyber-intrusions to the government for fear of regulatory liability and reputational damage. In today’s highly connected world, this is a race to the bottom…

From a defensive perspective, the U.S. government must instead move to a posture of persistent collaboration. Such a culture shift requires that sharing become the default response, where information about malicious activity, including intrusions, is presumed necessary for the common good and urgently shared between industry and government. Government and industry must work together with reciprocal expectations of transparency and value, where industry does not have to be concerned about punitive sanction. [emphasis added]

These two goals — improving transparency and shifting liability — could require actions that are at cross purposes. The threat of liability discourages transparency, for example, and encouraging transparency may require some protection from liability.

We think, however, that from the government's point of view there is a hierarchy of needs, and transparency comes first. Only then is it possible to decide whether the consequences of vendors shipping insecure software are serious enough to justify the "big stick" of shifting liability.

Fortunately, despite the strong rhetoric, we think the administration has these priorities in order and has signalled that although shifting liability is a possibility, transparency is its first step.

Overall, we are encouraged that a discussion about shifting liability is happening at all. It reflects that the upcoming cyber security strategy identifies one of the root causes of insecurity, namely that manufacturer incentives aren't aligned with producing secure software. Let's see what the actual strategy announces.

Three Reasons to be Cheerful this Week:

  1. Biggest ever hack back: Oasis, a decentralised finance cryptocurrency platform, has recovered USD$225m worth of cryptocurrency out of the USD$320-odd million stolen in the Wormhole exploit in February 2022. Oasis's official announcement says the recovery was authorised by a UK court order and was possible because a "Whitehat group" provided a previously unknown vulnerability. So, situation normal and cryptocurrency security is still poor, but we suppose it is good news when vulns are used to recover funds rather than steal them.
  2. Honeypot used to catch Dota2 game cheaters: Gaming company Valve banned over 40,000 accounts for using cheat software. To detect the cheat, Valve patched the game to include "honeypot" data that would never be accessed in normal gameplay but was read by the cheat software (a rough sketch of the idea follows this list). [more at Risky Biz News]
  3. NSA's Home Network best practice guide: NSA has released a guide for securing your home network. It's ok, although there is some tension between "best practice" and what's practical. For example, it advises that people "limit sensitive conversations when you are near baby monitors, audio recording toys, home assistants, and smart devices". Isn't that everywhere now? We'd like to see NSA's practical guide to securing your home network.
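
As a rough illustration of the honeypot trick from item 2 (our own sketch, not Valve's actual implementation): plant data the legitimate client never reads, record every account that reads it, and ban that set.

```python
# Hypothetical honeypot sketch: the game plants data a legitimate client
# never reads during normal play, so any account that reads it is almost
# certainly running cheat software that scrapes hidden game state.
SECRET_HONEYPOT_REGION = {"enemy_positions": [(13.0, 37.0)]}  # never shown to players

accessed_honeypot: set = set()  # account IDs that touched the honeypot

def read_game_state(account_id: str, region: str) -> dict:
    """All game-state reads go through here; honeypot reads are recorded."""
    if region == "honeypot":
        accessed_honeypot.add(account_id)
        return SECRET_HONEYPOT_REGION
    return {}  # normal, legitimate game state elided

def accounts_to_ban() -> set:
    # Legitimate clients never request the honeypot region, so everyone in
    # this set has been reading data only a cheat would ask for.
    return set(accessed_honeypot)

# A cheat reading hidden state gives itself away:
read_game_state("cheater-123", "honeypot")
read_game_state("honest-456", "minimap")
print(accounts_to_ban())  # {'cheater-123'}
```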

Seriously Risky Business is supported by the Hewlett Foundation's Cyber Initiative and corporate sponsor Proofpoint.

Proofpoint has released its 2023 State of the Phish report.

Tines No-code Automation For Security Teams

Risky Business publishes sponsored product demos to YouTube. They're a great way for you to save the time and hassle of trying to actually get useful information out of security vendors. You can subscribe to our product demo page on YouTube here.

In this video demo, Tines CEO and co-founder, Eoin Hinchy, demonstrates the Tines automation platform to host Patrick Gray.

Shorts

Russia's KPI-Driven Cyber War

CyberScoop has published a great interview with Victor Zhora, the deputy chairman of Ukraine's cyber security organisation. The whole interview is worth reading, but here are a few key points we noticed.

Although Zhora was expecting "highly destructive and intense" offensive cyber operations from Russian forces, he's concluded that their impact "can't be compared with the impact from conventional weapons".

He also observed that Russian forces tend to use "cyber operations for psychological operations or cyberespionage".

Zhora also speculated about the internal dynamic that led to Russian cyber forces conducting meaningless attacks. Referring to Russian attacks on allied infrastructure, Zhora said:

I think these attempts are meaningless in terms of any influence on the global posture, but internally in Russia I understand why they continue doing this: They are military officers; they are hackers that wear uniforms. They need to display some activities and report to their generals that they are doing something, and that they are successful somehow.

LastPass Breach Has a Whiff of DPRK About It

The people responsible for the LastPass breach compromised a senior DevOps engineer's home media server to get the keys needed to access LastPass's customer vault backups.

After an initial compromise of LastPass's development environment in August last year, the attacker stole encrypted credentials used to access cloud-based backups of customer vaults, but not the key to decrypt them. The attacker then targeted a DevOps engineer's Plex media server and deployed a keylogger, which was used to capture the employee's LastPass vault master password. This allowed the threat actor to access the engineer's corporate LastPass vault and hence the cloud storage.

Ars Technica has good coverage, which notes that Plex was also breached in the time frame during which LastPass's engineer was targeted. Usernames, email addresses and password data were stolen in that breach.

The Plex breach may just be a coincidence, though. A friend of Risky Business says they've seen an engineer's Plex servers used as a vector in an attack against crypto assets before. There are apparently some post-authentication RCEs in it that are quite tasty. That campaign wound up being attributed to a North Korean threat actor.

Was it North Korea in the Drawing Room with the Plex post-auth RCE? Hopefully we'll find out eventually.

Medibank Hack: Pretty Standard Stuff

Australian health insurance firm Medibank, victim of a comprehensive data breach we covered here, has released a few more details about the incident. The attacker used the stolen credentials of a third-party IT supplier to gain access via a firewall that was misconfigured to allow password authentication without a client certificate. Once inside, they stole more credentials. It's definitely good to learn a bit more, but it'd be nice to see a more comprehensive report rather than just some dot points in a half-year earnings report.
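
For readers wondering what the missing control looks like in practice, here is a minimal, generic sketch using Python's standard ssl module of a server that only accepts clients presenting a certificate signed by a trusted CA, rather than relying on password authentication alone. It is purely illustrative; Medibank hasn't said which products were involved, and the file paths are placeholders.

```python
# Generic sketch: a TLS listener that rejects any client which doesn't
# present a valid certificate. This is the control that was reportedly
# missing from the misconfigured firewall; paths below are placeholders.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="trusted-clients-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # a password alone is not enough

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # The handshake fails for clients without a valid certificate.
        conn, addr = tls_listener.accept()
        print("client certificate:", conn.getpeercert())
```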

Australian Cyber Security Strategy Discussion Paper

The Australian Government has launched its cyber security strategy discussion paper and is accepting submissions.

When "Mature" Doesn't Mean What You Think It Means

A CISA red team assessment of a "large critical infrastructure organisation with a mature cyber posture" is both interesting reading and a bit grim. Despite this notionally mature posture, CISA's team "obtained persistent access to the organisation’s network, moved laterally across multiple geographically separated sites, and gained access to systems adjacent to the organisation’s sensitive business systems".

The red team poked the victim organisation 13 times with "measurable events" designed to elicit a response. Ten of them passed by unnoticed. There were a few bright spots — the red team had to resort to phishing to gain access because the infrastructure wasn't easily exploitable, passwords were good and there weren't any credentials just lying around. But when that's the good news, woof.

Risky Biz Talks

In addition to a podcast version of this newsletter (last edition here), the Risky Biz News feed (RSS, iTunes, or Spotify) also publishes interviews.

In our last "Between Two Nerds" discussion Tom Uren and The Grugq discuss cyber power rankings — do they make sense or are they all rubbish?

From Risky Biz News:

News Corp breach update: US media company News Corp provided an update on a security breach it disclosed in January 2022, when it revealed that a threat actor had gained access to a portion of its enterprise network, including the details of Wall Street Journal and New York Post reporters. In a data breach notification letter filed with authorities more than a year after the original disclosure, News Corp says the breach was far larger than it initially disclosed and that hackers had been in its network for almost two years, since February 2020. At the time, Mandiant said the breach was believed to have been carried out by a threat actor with a China nexus. [More in BleepingComputer]

US Treasury sanctions Russian cyber and influence firms: The US Treasury on Friday imposed new sanctions on Russian and foreign entities supporting the Kremlin's illegal invasion of Ukraine…

For the first time since Russia's invasion, the US Treasury has also sanctioned cyber-adjacent entities.

The biggest name on this list is 0Day Technologies, a Moscow-based IT company and a known contractor for Russia's FSB intelligence agency. In March 2020, a hacktivist group named Digital Revolution hacked and leaked data from 0Day's network, including details about Fronton, a tool that could be used for DDoS attacks but also for orchestrating social media disinformation campaigns. [much more on Risky Biz News]

Western countries lack robust knowledge on Ghostwriter group: Western countries don't appear to have an answer to, or robust knowledge about, the operators behind Ghostwriter (aka UNC1151), a threat actor that has been engaged in a mixture of hack-and-leak and dis/misinformation campaigns over the past half-decade, a report [PDF] from Cardiff University has concluded.

The group, believed to contain a mixture of Belarusian and Russian operators, has been active since 2016. Its activities have included hacking news sites to post fabricated stories, planting manufactured documents on government websites, and leaking doctored evidence across social media. [much more on Risky Biz News]