IC Reform is Great, but Decent Privacy Laws Would Be Even Better

PLUS: How Lawyers Ruin Incident Response

Your weekly dose of Seriously Risky Business news is written by Tom Uren, edited by Patrick Gray with help from Catalin Cimpanu. It's supported by the Cyber Initiative at the Hewlett Foundation and this week's edition is brought to you by Thinkst Canary.

IC Reform is Great, but Decent Privacy Laws Would Be Even Better

Photo by Alejandro Barba on Unsplash

An ODNI report into the US Intelligence Community's use of Commercially Available Information (CAI) has caused quite a stir, but most of the resulting press coverage has missed the forest for the trees.

It is a problem that the IC is using this information without any guiding policy. It is a much, much bigger problem that this information is being collected for sale in the first place.

The report examines how the IC uses CAI and what privacy and civil liberties protections cover its use, and makes recommendations about how this data should be used in the future. It defines CAI as information that can be bought by the public and excludes data that is commercially available only to governments.

Overall, the report finds that the IC is buying "a large amount" of this information and getting a great deal of value from it, but doesn't have appropriate policies and processes in place to protect US citizens' privacy and civil liberties.

Traditionally, IC policies haven't considered publicly available information to be sensitive, so there haven't been strict regulations against its use. CAI was considered "just" a subset of publicly available information, albeit one that you have to pay for.

Of course, this is no longer true at all, and the report does a good job of explaining why CAI can be very intrusive:

Today’s CAI is more revealing, available on more people (in bulk), less possible to avoid, and less well understood than traditional PAI. It is only a little oversimplified to say that when Executive Order 12333 was adopted, U.S. persons generally understood that the White Pages and the New York Times were public, but also understood that it was possible to choose an unpublished telephone number and (usually) to keep oneself out of the newspaper. Today, in a way that far fewer Americans seem to understand, and even fewer of them can avoid, CAI includes information on nearly everyone that is of a type and level of sensitivity that historically could have been obtained, if at all, only through targeted (and predicated) collection.

In other words, the type of information you can buy rivals the type of information that the IC once needed dedicated efforts and powerful authorities to collect.

The report provides some examples of how IC agencies are using this kind of data, and in our view, most of them are rather pedestrian. The Defense Intelligence Agency (DIA) is using CAI to assist its analysis of scientific research, NSA is using commercial threat intelligence to enrich SIGINT sources, and the broader IC also claims CAI is used to support humanitarian assistance missions.

However, it's clear the report has only really scratched the surface of how the IC uses this data. This is made obvious by its first recommendation, that the IC should catalogue, "to the extent feasible", the use of CAI across the community.

So, while we don't think the hyperbolic claims that "intelligence agencies are flouting the law" are justified by the report, it doesn't rule out the possibility.

There is definitely potential for CAI to be misused. The report notes that it can be "combined, or used with other non-CAI data, to reverse engineer identities or deanonymize various forms of information". It also cites previous intelligence abuses such as "LOVEINT" incidents (where officials have spied on potential or actual romantic partners) and notes that "in the wrong hands, sensitive insights gained through CAI could facilitate blackmail, stalking, harassment, and public shaming".

This isn't a theoretical risk. We already have examples of this happening, but outside of the IC. Nearly two years ago a small Catholic publication used commercially available data to out a US Catholic priest as a Grindr user. You can read our analysis of that saga here.

So where to from here? The ODNI report appears to recognise the seriousness of the threat:

The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits. Yet smartphones, connected cars, web tracking technologies, the Internet of Things, and other innovations have had this effect without government participation. While the IC cannot willingly blind itself to this information, it must appreciate how unfettered access to CAI increases its power in ways that may exceed our constitutional traditions or other societal expectations.

Reading the tea leaves, we think there'll be some reform here. Historically, the IC's policy framework didn't treat publicly available information (and therefore CAI) as sensitive, but the report's authors think that needs to change. They write that changes to the CAI landscape have "considerably undermined the historical policy rationale for treating PAI [publicly available information] categorically as non-sensitive information".

For example, law enforcement agencies need probable cause and a warrant to get location information about a person from a telco, but "the same type of information on millions of Americans is openly for sale to the general public".

So IC policies currently treat this information as publicly available, and agencies purchase it accordingly.

The only limits on these types of purchases seem to be internal policies. The DIA, for example, purchases commercially available geolocation metadata aggregated from smartphones worldwide, but queries on US locations require prior approval from DIA management.

Some agencies have more considered policies: before purchasing data that might contain US person identifying information, they weigh its "volume, proportion and sensitivity". But the report finds that the policy landscape here is essentially ad hoc and varies a lot between agencies. CAI acquisition decisions just don't have the "rigor and focus" on data privacy that they should have.

So, the report doesn't find that there is necessarily anything egregiously wrong with the way that the IC is using CAI, but that there isn't a good policy framework to be sure that what they are doing is right either.

The report's recommendations focus on telling the IC how to proceed. These essentially boil down to treating CAI data as potentially sensitive and then balancing the intelligence benefits against risks.

This is good, as far as it goes, and the DNI is well placed to tighten up policies across the IC. But tackling the use of this information only within the IC ignores risks that we think are bigger and more concerning.

We think abuse of CAI by US domestic law enforcement agencies is both more likely and more harmful. Services that skirt, if not break, US Fourth Amendment protections already exist. Fog Reveal, for example, is marketed to US domestic law enforcement agencies as a way to track people via mobile phone advertising data.

Given the sheer number of domestic US law enforcement agencies — and the lack of a clear organisational hierarchy — it will be harder to impose consistent standards when it comes to the use and exploitation of CAI data.

Then there are the national security risks. This data is almost certainly being exploited by foreign adversaries, too. If it's useful for the US intelligence community it'll also be useful to foreign intelligence agencies. Chinese cyber espionage groups have already stolen similar information from the US Office of Personnel Management, Marriott hotels, United Airlines, Equifax and Anthem. Those thefts covered a range of complementary data including security clearance, travel, financial and insurance information, and it's not hard to see how purchased CAI data could complement these other data sets.

So while the report is right — the US IC needs more rigorous processes and better clarity on how to handle CAI — focussing solely on tidying up IC behaviour would be a mistake.

The US needs federal data privacy legislation that prevents some of this data from being collected and sold in the first place. Otherwise Americans can expect foreign adversaries and local law enforcement to ramp up their collection of this stuff over time. Efforts to restrict the sale of CAI to foreign adversaries are a good start, but bigger reform is needed.

Left alone, this will only get worse.

Cyber Insurance Makes Us Mushrooms: Kept in the Dark and Fed Sh*t

Research due to be presented at the upcoming USENIX security symposium examines how cyber insurance and lawyers affect incident response and it's… not good.

Theoretically, cyber insurance should help to improve security by providing firms with incentives to adopt effective security practices. In a kind of virtuous cycle, insurers would be able to learn which security measures were actually effective by analysing claims data from their portfolio of insured companies.

In the context of incident response, however, the paper finds that the desire to avoid litigation risk encourages behaviour that prevents the wider market from learning lessons from security incidents.

In the current paradigm, cyber insurers use a panel of pre-approved incident response firms and appoint lawyers to direct investigations. Lawyers are used in an attempt to prevent IR reports and written communications from being used in litigation against the victim firm by claiming that they are covered by attorney-client privilege.

Ultimately, courts didn't agree that IR reports should be protected by attorney-client privilege, but the unintended consequence has been that lawyers are now just avoiding written reports. One IR professional told the researchers "a request for a formal report is made in less than five percent of cases, because in such a report we would have to document all the screw ups".

Even in the minority of cases when reports are produced, most lawyers told the researchers that recommendations for security improvements in particular should only be communicated orally or via PowerPoint, because writing them down could provide a "road map to litigation". A list of recommendations could imply the incident could have been avoided if the security measures had already been in place.

There's space for policymakers to step in here, and the paper suggests "confidentiality protections specifically tailored to cybersecurity investigations". That is, provisions that encourage IR firms to produce reports because they can be shared with stakeholders but can't be used by litigants.

We agree that's a good idea.

Listen to Patrick Gray and Tom Uren discuss this edition of the newsletter in the Seriously Risky Business podcast:

Three Reasons to be Cheerful this Week:

  1. Mt Gox hackers indicted: Two Russian nationals, Alexey Bilyuchenko and Aleksandr Verner have been charged by US prosecutors for hacking the Mt. Gox crypto exchange way back in 2011 and then running the BTC-e exchange which was allegedly used for money laundering.
  2. WebAuthn and Passkeys FTW: In its Secure Sign-in Trends report, Okta finds that the most secure authentication methods, like WebAuthn, are also the easiest to use. MFA adoption is also up, with 90% of administrators now using it. WebAuthn is the underlying spec for Passkeys.
  3. Nope, that's it: We couldn't find a third item to be happy about this week. Pretty depressing, when you think about it. Here's a picture of a rainbow coloured puppy instead. Aww.

This edition is brought to you by Thinkst Canary. Most companies find out way too late that they've been breached. Thinkst Canary changes this. Deployed and Loved on all seven continents. Deploys in minutes; almost zero admin overhead. It just works! Listen to Haroon Meer talking about honeypots with Tom Uren in this interview.

Shorts

Colonial Pipeline Hackers Got 702'd

The Biden administration has revealed more specific information about how Section 702 collection has helped the US government investigate ransomware attacks.

Section 702 collection allowed the US government to recover most of the USD$4.4m ransom paid by Colonial Pipeline during its ransomware event. In the case of an unnamed nonprofit that was attacked, 702 data helped the government to identify the affected organisation and mitigate the attack so that it was able to recover without paying a ransom.

It appears that Congress is still circumspect about renewing the authority, so we expect to hear about other examples of 702 rescuing kittens from trees and so on.

For Cyber Criminals, Espionage Is the New Black

ESET has identified Asylum Ambuscade as a cybercrime group that also conducts cyber espionage operations.

The group was first identified by Proofpoint in March 2022 when it was targeting European government staff assisting Ukrainian refugees shortly after the beginning of Russia's invasion of Ukraine.

ESET reports that Asylum Ambuscade has in fact been operating since 2020, mostly conducting cybercrime with occasional cyber espionage operations.

Another group, Void Rabisu, best known for deploying the Cuba ransomware, was recently identified by both Trend Micro and Blackberry as having pivoted from cybercrime to cyber espionage. Both Asylum Ambuscade and Void Rabisu have been targeting Ukraine, although none of the security firms have attributed the groups to any particular country.

On the other side of the conflict, Russian cybersecurity company BI.ZONE published a report on Core Werewolf, a former cybercrime group that appeared in 2021 and has since turned to cyber espionage. BI.ZONE says that since 2021, the group has launched attacks on Russian organisations associated with the military-industrial complex and critical information infrastructure.

Well, That Escalated Quickly

On June 6, blockchain security firm Elliptic linked the hack of more than USD$35 million worth of crypto-assets from Atomic Wallet users to North Korea's Lazarus hacking group. By June 13, that number had grown to USD$100m.

The company says it could make the attribution because the stolen funds are being laundered via the exact same steps the Lazarus gang has used in the past.

The funds are also being transferred to Sinbad, a cryptocurrency mixer that the gang has used in the past.

Risky Biz Talks

You can find the audio edition of this newsletter and other fine podcasts and interviews in the Risky Biz News feed (RSS, iTunes or Spotify).

In our last "Between Two Nerds" discussion Tom Uren and The Grugq look at the "hallmarks" of a state operation.

From Risky Biz News:

Ukrainian hackers wipe equipment of major Russian telco: A Ukrainian hacking group named the Cyber Anarchy Squad has breached the network of Russian telco Infotel JSC and wiped some of its routers and networking devices.

The incident took place last Thursday—June 8—and brought the telco's network to a full stop for 32 hours. Reports in both Russian and Ukrainian media say the attack had a huge impact on Russia's financial system, which was unable to process any electronic transactions for more than a day. [more on Risky Biz News]

CISA orders federal agencies to secure internet-exposed routers, firewalls, and VPNs: The US Cybersecurity and Infrastructure Security Agency (CISA) has issued a new Binding Operational Directive (BOD) and has ordered federal civilian agencies to limit access from the internet to the management interfaces of networking equipment.

The new BOD 23-02 applies to routers, switches, firewalls, VPN servers, proxies, load balancers, and out-of-band server management interfaces such as iLO and iDRAC. It covers management interfaces accessible over a multitude of protocols, including HTTPS, SSH, SMB, and RDP. [more on Risky Biz News]
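To make the directive's scope concrete, here is a minimal, purely illustrative sketch of the kind of triage an agency might run against its own asset inventory. The inventory format and field names are hypothetical, and the protocol list is just the set named above, not the directive's full scope:

```python
# Illustrative sketch only: flag services in a (hypothetical) asset
# inventory that look like internet-exposed management interfaces,
# using the protocols the article says BOD 23-02 covers.

# Protocols named above as in scope (not an exhaustive list).
MANAGEMENT_PROTOCOLS = {"https", "ssh", "smb", "rdp"}

def flag_exposed_management(services):
    """Return services whose protocol is in scope for the directive
    and which are reachable from the public internet."""
    return [
        s for s in services
        if s["protocol"].lower() in MANAGEMENT_PROTOCOLS
        and s["internet_facing"]
    ]

if __name__ == "__main__":
    # Hypothetical inventory entries for demonstration.
    inventory = [
        {"host": "fw1.example.gov", "port": 443, "protocol": "HTTPS", "internet_facing": True},
        {"host": "sw3.example.gov", "port": 22, "protocol": "SSH", "internet_facing": False},
        {"host": "oob1.example.gov", "port": 3389, "protocol": "RDP", "internet_facing": True},
    ]
    for svc in flag_exposed_management(inventory):
        print(f"{svc['host']}:{svc['port']} ({svc['protocol']}) should be moved behind access controls")
```

Under BOD 23-02, anything this kind of check flags would need to be taken off the public internet or placed behind a policy enforcement point.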