Act of God or Act of Hacker, It's All the Same to Us

PLUS: Oracle to audit TikTok's algorithm and more...

Your weekly dose of Seriously Risky Business news is written by Tom Uren, edited by Patrick Gray and supported by the Cyber Initiative at the Hewlett Foundation and founding corporate sponsor Proofpoint.

Lloyd's of London, an insurance marketplace, has issued a directive that its insurers must remove coverage for catastrophic state-backed hacks from insurance policies.

In its market bulletin, Lloyd's defines the attacks that should be excluded from insurance policies as:

State-backed cyber attacks that (a) significantly impair the ability of a state to function or (b) significantly impair the security capabilities of a state.

Pretty serious stuff, and excluding it makes sense for insurers, as attacks of this magnitude are probably uninsurable. At one level this simply seems part of a trend towards higher premiums and reduced coverage to make up for increased payouts, as ransomware claims have cruelled the cyber insurance market.

Jon Bateman, a Senior Fellow at the Carnegie Endowment for International Peace and author of "War, Terrorism, and Catastrophe in Cyber Insurance", told Seriously Risky Business he "viewed it as a positive move by Lloyd's".

"These new exclusions… they are superior to what came before, these pre-cyber era general war exclusions which continue to be widely used but are very vague and uncertain as to how they apply to cyber", he said.

Daniel Woods, a cyber risk and insurance researcher at the University of Edinburgh, told Seriously Risky Business he agreed that providing more clarity around policy exclusions was good. He pointed out, however, that the exclusion combines two conceptually different things: state-backed attacks and catastrophic attacks. If a truly significant incident isn't insurable, it doesn't really matter whether it was state-backed. And it is not clear to this newsletter that the chance of an accidental or non-state-backed catastrophe is all that different from the chance of a state-backed one. Why exclude one and not the other?

Leaving aside the issue of attribution, if the insurance industry isn't willing to cover these kinds of risks do governments need to step in? Woods isn't sure it will be necessary, even though governments have intervened in similar situations in the past.

After the September 11 terrorist attacks, reinsurers excluded terrorism coverage from their policies, which flowed through to exclusions by primary insurers and resulted in property developers halting major projects. The US passed the Terrorism Risk Insurance Act (TRIA) as a kind of 'backstop': it required insurers to offer affordable terrorism coverage, with the government committing to share losses with the insurance industry on a sliding scale, so the larger the loss, the more the government would pay.
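
To make the sliding scale concrete, here is a minimal sketch in Python of how such a backstop might split a single loss between insurers and the government. The layers and percentages are entirely hypothetical and are not the actual TRIA parameters; the point is only that the government's share grows as the loss does.

    def split_loss(total_loss: float) -> dict:
        """Split a catastrophic loss between insurers and a government backstop."""
        # The layers and shares below are invented for illustration; the real
        # TRIA program uses statutory deductibles and co-shares instead.
        layers = [
            (1e9, 0.0),           # insurers absorb the first $1bn entirely
            (10e9, 0.5),          # government pays half of the next $9bn
            (float("inf"), 0.9),  # and 90% of anything beyond $10bn
        ]
        gov_share, prev_bound = 0.0, 0.0
        for upper, share in layers:
            layer_loss = max(0.0, min(total_loss, upper) - prev_bound)
            gov_share += layer_loss * share
            prev_bound = upper
        return {"government": gov_share, "insurers": total_loss - gov_share}

    # A $2bn event vs a $50bn event: the government's slice grows with the loss.
    print(split_loss(2e9))   # {'government': 500000000.0, 'insurers': 1500000000.0}
    print(split_loss(50e9))  # {'government': 40500000000.0, 'insurers': 9500000000.0}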

Woods can't see how the exclusions mandated by Lloyd's could flow through to economic consequences that would motivate governments and lawmakers to provide a government backstop.

Besides, the lack of insurance coverage for these types of events probably doesn't matter. Bateman points out that for Lloyd's these exclusions "are not trying to solve a societal problem, they are trying to solve an insurance problem", and ultimately the incidents Lloyd's is excluding are so significant that the government will end up carrying the can regardless of whether insurance was available. Woods agrees, and thinks that perhaps the difference between insurance being available or not boils down to "whether [the incident] affects the government's budget or an insurer's balance sheet".

Despite this, Bateman thinks the Lloyd's exclusion has implications for other areas of government policy.

"It is a signal from the insurance industry about… the line beyond which a cyber event would be considered uninsurable. That signal can then be a clue to the government about when they should step in and pick up the baton."

Bateman argued that some form of government backstop was appropriate and that it would be cheaper and more effective to create one in advance of a major cyber incident rather than reacting afterwards. And both experts agreed that a well-designed government guarantee wouldn't encourage companies to ignore cyber security requirements, safe in the knowledge that the government would have their back (a problem known in insurance as moral hazard).

The challenge for governments will be to find a way to align company incentives with societal needs in the midst of a cyber catastrophe. In the ransomware attack on Colonial Pipeline, for example, the company shut down the pipeline because it was concerned it would not be able to bill customers. This is a clear case where the right decision for the company was not the right decision for the US government or its citizens. It's possible the US government would have preferred that Colonial continue to ship fuel, even if it had to be reimbursed somehow at government expense. In a truly significant cyber incident, how would the government know about and then influence these kinds of decisions? And how would affected organisations know that a mechanism existed to involve the government?

In Australia, the Security of Critical Infrastructure Act implements a compulsory "government assistance" scheme under which companies can be directed to take certain actions during a major cyber security incident. The power is pretty broad: "Direct an entity to do, or refrain from doing, a specified act or thing".

We can't see a similar rule making it into US regulations, but Bateman thinks that in some cases the US government might be able to use the Defense Production Act to compel companies like Colonial to continue operations.

To sum up, policymakers should look at their options for compelling action now, rather than during a catastrophic, nationally significant cyber incident.

Oracle to Audit TikTok Algorithms

TikTok's algorithm is being audited by Oracle to ensure it isn't being manipulated by Chinese authorities. This directly addresses the concern that TikTok could be used to influence Western audiences to the PRC's advantage.

TikTok's US user data is stored in an Oracle data centre in Texas, which, combined with logical access controls, is intended to minimise access from the PRC and reassure the US government that US user data is protected.

A TikTok spokesperson told Axios that the new arrangement gives Oracle "regular vetting and validation" of TikTok's algorithm "to ensure that outcomes are in line with expectations and that the models have not been manipulated in any way".

This effort doesn't put Oracle's hands on the levers, it just allows Oracle to occasionally see where the levers are set, so we would call it a mitigation, not a solution. In this sense, the auditing effort is akin to TikTok's effort to minimise mainland China access to US user data. Despite what we presume are TikTok's best efforts — the company's future in Western markets depends upon it after all — US user data has still been repeatedly accessed by China-based employees.
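
As a purely hypothetical illustration of that distinction (this is our sketch, not Oracle's actual process), "seeing where the levers are set" might amount to periodically comparing the content the recommender actually served against an expected baseline and flagging drift for human review. An audit like this can detect manipulation after the fact; it can't prevent it.

    from collections import Counter

    def audit_recommendations(served_topics, baseline, tolerance=0.05):
        """Flag topics whose share of served recommendations drifts from a baseline."""
        # Hypothetical illustration only: a real audit would use far richer signals
        # than topic counts, but the shape is the same: compare observed outcomes
        # against expectations and flag deviations for review.
        total = len(served_topics)
        observed = {topic: count / total for topic, count in Counter(served_topics).items()}
        flags = []
        for topic, expected in baseline.items():
            actual = observed.get(topic, 0.0)
            if abs(actual - expected) > tolerance:
                flags.append((topic, expected, actual))
        return flags

    # Example: political content being served far more often than expected.
    baseline = {"dance": 0.40, "comedy": 0.35, "news": 0.15, "politics": 0.10}
    served = ["politics"] * 30 + ["dance"] * 40 + ["comedy"] * 20 + ["news"] * 10
    print(audit_recommendations(served, baseline))
    # [('comedy', 0.35, 0.2), ('politics', 0.1, 0.3)]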

So we don't think auditing will be all that effective, and we don't think it will entirely assuage fears that TikTok could be used for malicious influence. But we think it is enough to delay US government action for the time being.

The future of TikTok really depends upon the future behaviour of the PRC government. Bellicose rhetoric and confrontational behaviour will make TikTok unacceptable regardless of what mitigations the company can put in place.

Father's Google Account Auto-Nuked by CSAM Controls

The New York Times covers the story of a father who sent images of his naked son to a doctor for diagnosis of a genital infection and was subsequently reported to police and investigated for child sexual abuse. The man's Google account was disabled and, despite him being cleared by police, it hasn't been reinstated. Per The Times:

Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.

Police tried to contact the man to inform him, but of course he couldn't answer his phone or email.

The man used an Android phone, and it appears the image was flagged after being uploaded to Google Photos, where a system automatically attempts to identify never-before-seen Child Sexual Abuse Material (CSAM). This is in contrast to systems such as PhotoDNA, which identify known images that have previously been confirmed to be CSAM and are therefore far less likely to produce false positives.
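
The structural difference is easiest to see in a hypothetical sketch (this uses exact hashes in place of PhotoDNA's perceptual hashing and a stand-in scoring function in place of a real classifier): a hash-based check only fires on images already on a confirmed list, while a classifier has to score images it has never seen and act on a threshold, which is where borderline cases like medical photos become false positives.

    import hashlib

    # Hypothetical list of hashes of previously confirmed images.
    KNOWN_HASHES = {hashlib.sha256(b"previously confirmed image").hexdigest()}

    def hash_match(image_bytes: bytes) -> bool:
        # Fires only if the image is already on the confirmed list, so a false
        # positive requires a hash collision or a bad entry in the list.
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

    def classifier_match(image_bytes: bytes, score_fn, threshold: float = 0.9) -> bool:
        # Fires on never-before-seen images whenever a model's score crosses a
        # threshold, so every borderline image (a medical photo sent to a doctor,
        # say) is a potential false positive that can trigger automated action.
        return score_fn(image_bytes) >= threshold

    print(hash_match(b"a new family photo"))                       # False
    print(classifier_match(b"a new family photo", lambda b: 0.93)) # True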

And this wasn't the only case where a Google account was wrongfully banned. The Times found a second man whose Google account was disabled when he sent images of his child's genitals to a paediatrician for diagnosis. Child sexual abuse is a terrible crime, but Google shouldn't be punishing innocent parents trying to get treatment for their children.

Complaints About China's Vulnerability Disclosure Laws a Message to Others

Speaking on a panel at the Black Hat security conference, Robert Silvers, a senior official in the Department of Homeland Security, warned that the PRC's vulnerability disclosure laws may give its intelligence agencies early access to newly discovered vulnerabilities. This sentiment was previously expressed in the Cyber Safety Review Board report into the Log4j incident.

This newsletter previously examined the PRC's vulnerability disclosure laws and concluded that they have multiple purposes. They are part of a strategy to exert control over private sector hackers, and are intended both to improve domestic security and to funnel vulnerabilities through China's intelligence agencies.

We're sceptical that hectoring Chinese authorities about their vulnerability disclosure process will be effective at changing practices. Adam Segal, Director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, agrees, but thinks that's beside the point. He told us:

I don't think the USG can do much to influence disclosure laws in PRC. Security researchers within Chinese companies may be receptive to the common-good, win-win argument about public disclosure, but the USG has no way to shield them from Chinese laws that would punish them for acting on those motives. And Chinese officials are likely to believe there is a high degree of coordination between the private sector and US intelligence agencies, no matter what US officials say about the Vulnerabilities Equities Process publicly disclosing the vast majority of vulnerabilities. I would think [of] the report and the public statements as less an effort to change China and more a signal to other countries who have not yet developed VEPs of what a "good" process should look like.

Three Reasons to be Cheerful this Week:

  1. LockBit ransomware gang DDoSed: A DDoS attack has hit LockBit's data leak sites after the group started leaking data purportedly stolen from cyber security firm Entrust. Curiously, the DDoS traffic contains a message telling LockBit to "DELETE_ENTRUSTCOM_MOTHERF****ERS".
  2. Major Cyber Incident Investigations for Dummies: Victoria Ontiveros, Tarah Wheeler and Adam Shostack have published a guide to setting up the governance structures to investigate and learn lessons from major cyber incidents. We are a fan and it's on GitHub!
  3. Secure Open Source Rewards: a new program run by the Linux Foundation and sponsored by the Google Open Source Security Team will reward security researchers and developers for making security improvements to critical open source projects. The project says its "scope is comparatively wider in the type of work it rewards" to complement other open source improvement incentive programs.

In this video demo, Proofpoint's Executive Vice President of Cybersecurity, Ryan Kalember, walks Patrick Gray through Proofpoint's Nexus People Explorer. It helps to manage risk by showing who the most targeted and most vulnerable people in your organisation are.

Shorts

Spying, Not Sabotage: Russian Cyber Operations in Ukraine.

Trustwave has published an overview of the malware and access vectors used by Russian forces to attack Ukraine. Most interestingly, the timeline they publish shows destructive wiper attacks occurred early in the war but stopped in April. Espionage operations weren't detected early in the war, but continue to this day.

Assuming this reflects reality rather than just the fog of war, it's interesting to speculate about why this might be so. Does intelligence gathering simply yield a better return on investment for the Russians? Or does the state of the conflict on the ground make destructive cyber operations less useful?

Binance Executive Deepfaked

The CCO of cryptocurrency exchange Binance, Patrick Hillman, says that scammers used a video deepfake of him in an attempt to scam multiple cryptocurrency projects. He learned of the attempted scam when he:

received several online messages thanking me for taking the time to meet with project teams regarding potential opportunities to list their assets on Binance.com. This was odd because I don’t have any oversight of or insight into Binance listings, nor had I met with any of these people before.

Odd indeed. He goes on to say that the "deep fake was refined enough to fool several highly intelligent crypto community members".

Risky Biz Talks

In addition to a podcast version of this newsletter (last edition here), the Risky Biz News feed (RSS, iTunes, or Spotify) also publishes interviews.

In our last "Between Two Nerds" discussion Tom Uren and The Grugq discuss Predatory Sparrow, the "hacktivist" crew obsessed with norms.

And Catalin Cimpanu interviews Vitali Kremez, CEO of Advanced Intelligence, about the impending downfall of the Ransomware-as-a-Service ecosystem.

From Risky Biz News:

Explosive whistleblower report alleges shoddy security at Twitter: If you spent your Tuesday under a rock, then you probably missed one of the biggest tech stories of the year. Well-respected white-hat hacker and Twitter's former Head of Security Peiter "Mudge" Zatko came forward to disclose a series of alleged cybersecurity and leadership lapses taking place inside Twitter.

In an explosive whistleblower report filed with the US government, Zatko painted a grim picture of his former employer's security practices, describing them as negligent and a national security risk. (continued)

While we have no doubt there's some merit to his claims, the first ten pages of the complaint itself — which deal with Twitter's bot measurement — are massively off target. That does make us wonder a little about the rest of it. Patrick Gray and Adam Boileau talked through the problems with Mudge's whistleblowing in the most recent edition of the weekly Risky Business podcast.

Bitcoin ATMs hacked: Bitcoin ATMs have been getting hacked across the world after a mysterious threat actor discovered a zero-day vulnerability in the web-based admin panel used to manage the ATMs.

The attacks, which appear to have been taking place since the start of the month, have been targeting Bitcoin ATMs managed via Crypto Application Server (CAS), a cloud-based system from Panama-based company General Bytes. (continued)

VIASAT hack impact in France: According to the minutes of a closed-door hearing before the French Parliament's National Assembly, Stéphane Bouillon, France's Secretary General for Defence and National Security, said the Russian hack of the VIASAT satellite internet network had "affected" French ambulance (phone number: 15) and firefighting (phone number: 18) emergency services. The extent of the impact was not detailed. (h/t @SwitHak)