Why Russia's Plan to Hide Spy Data Will Fail

PLUS: Facial Recognition Isn’t Probable Cause

Your weekly dose of Seriously Risky Business news is written by Tom Uren and edited by Patrick Gray with help from Catalin Cimpanu. It's supported by the Cyber Initiative at the Hewlett Foundation and this week's edition is brought to you by no-code automation platform Tines.

You can subscribe to an audio version of this newsletter as a podcast by searching for "Risky Business News" in your podcatcher or subscribing via this RSS feed. Find this edition here and on Apple podcasts.

A spy hiding in the Kremlin, Stable Diffusion

The Russian government wants to protect its intelligence and law enforcement officials from journalists, activists and foreign intelligence agencies by giving itself the power to change personal information in the systems of local data operators.

This is shutting the gate after the horse has bolted, but we can see why they want to try.

Our colleague Catalin Cimpanu reported on the draft legislation in Risky Business News on August 7:

The Russian government has submitted a bill to the Duma (the Russian Parliament) that would grant the military, law enforcement, and intelligence agencies the power to edit, anonymise, or delete the personal data of certain groups of people—presumably their own employees.

At first reading, the proposed law appears to allow these agencies to freely edit the personal information of their own employees in order to protect their identities or hide deep cover agents.

The law will allow the Ministry of Defense, the Ministry of Internal Affairs, the FSB, the FSO, and the SVR to access the systems of local data operators in order to "clarify, extract, depersonalise, block, delete, or destroy personal data."

It is easy to see what the Russian government is worried about here.

Investigative research outfit Bellingcat has published an impressive series of reports that have blown the lid off nefarious Russian government activities, including operations run by Russia's internal security agency, the FSB, and its military intelligence agency, the GRU.

This series of reports, for example, examined suspects in the Novichok poisoning of Sergey Skripal and his daughter Yulia. These reports identified the suspects, who travelled under assumed identities, as Russian GRU officers Anatoliy Chepiga, Dr. Alexander Mishkin and Denis Sergeev. Starting with passport photos released by UK authorities, Bellingcat used a range of data sources, including Russia's data 'black market', to uncover their true identities and the fact they were working for the GRU.

In its report examining Sergeev’s role as the "likely coordinator" of the poisoning operation, Bellingcat used phone metadata from a whistleblower working at a Russian mobile operator to retrace his steps. Bellingcat writes that while Sergeev was waiting for a delayed flight at Moscow's Sheremetyevo airport:

[He] used the waiting time to send a few messages and download some large files. In the two extra hours he was forced to wait, he exchanged several messages using Telegram, Viber, WhatsApp, and Facebook messenger, and downloaded 3 large files. At 9:15 he got a call from “Amir”, and they spoke for about 3 minutes. 45 minutes later, just before he finally boarded the Airbus A321, he called Amir again to tell him he was finally taking off. He would speak to Amir — and only to him, many times during the next three days.

Another Bellingcat article describes how easy it is to acquire Russian data:

Due to porous data protection measures in Russia, it only takes some creative Googling (or Yandexing) and a few hundred euros worth of cryptocurrency to be fed through an automated payment platform, not much different than Amazon or Lexis Nexis, to acquire telephone records with geolocation data, passenger manifests, and residential data. For the records contained within multi-gigabyte database files that are not already floating around the internet via torrent networks, there is a thriving black market to buy and sell data. The humans who manually fetch this data are often low-level employees at banks, telephone companies, and police departments. Often, these data merchants providing data to resellers or direct to customers are caught and face criminal charges. For other batches of records, there are automated services either within websites or through bots on the Telegram messaging service that entirely circumvent the necessity of a human conduit to provide sensitive personal data.

For example, to find a huge collection of personal information for Anatoliy Chepiga — one of the two GRU officers involved in the poisoning of Sergey Skripal and his daughter — we only need to use a Telegram bot and about 10 euros. Within 2-3 minutes of entering Chepiga’s full name and providing a credit card via Google Pay or a payment service like Yandex Money, a popular Telegram bot will provide us with Chepiga’s date of birth, passport number, court records, license plate number, VIN number, previous vehicle ownership history, traffic violations, and frequent parking locations in Moscow.

Hacked data plays a role here too. This newsletter previously covered another Bellingcat investigation that identified 'Maria Adela Kuhfeldt Rivera' as a GRU 'illegal' or deep cover spy apparently attempting to infiltrate a NATO command centre in Naples. That investigation was kicked off after the Belarusian Cyber Partisans hacktivist group gave Bellingcat a Belarusian border crossing database they'd acquired. (Bellingcat identified Maria Adela as a potential person of interest based on her passport number, as the GRU had been issuing its spies with sequentially numbered passports.)
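The sequential-passport tradecraft failure can be illustrated with a short sketch. This is a hypothetical toy, not Bellingcat's actual tooling: the function, field values, and thresholds are all invented, but it shows how trivially a batch of sequentially issued documents stands out in a leaked dataset.

```python
# Hypothetical sketch: flagging near-sequential passport numbers in a
# leaked dataset. All numbers and thresholds here are invented.

def sequential_clusters(passport_numbers, max_gap=3, min_size=3):
    """Group passport numbers that sit within `max_gap` of each other;
    return clusters of at least `min_size` numbers -- a possible sign
    that a batch of documents was issued together."""
    nums = sorted(set(passport_numbers))
    if not nums:
        return []
    clusters, current = [], [nums[0]]
    for n in nums[1:]:
        if n - current[-1] <= max_gap:
            current.append(n)
        else:
            if len(current) >= min_size:
                clusters.append(current)
            current = [n]
    if len(current) >= min_size:
        clusters.append(current)
    return clusters

# Toy data: three near-sequential numbers stand out from the rest.
records = [4650234871, 1234567, 4650234872, 99887766, 4650234874, 55443322]
print(sequential_clusters(records))
# → [[4650234871, 4650234872, 4650234874]]
```

Once one spy's passport number is known, every other number in the same cluster becomes a lead, which is why a single identification can unravel a whole cohort of cover identities.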

Other investigations have examined the attempted poisoning of Russian opposition figure Alexei Navalny, GRU hackers, and Russia's involvement in the downing of flight MH17.

While Russia views increasing control over data as a way of preventing its espionage operations from being compromised, the reality is that editing personal information 'in situ' is a band-aid rather than a cure. This may reduce the amount of information available to researchers in the future, but Bellingcat has been harvesting leaked data sources for years. And historical data is useful to future investigations:

Bellingcat has acquired dozens of leaked databases over the past few years, giving us a large number of data points to cross-reference and verify any new data we acquire. If the birth date of an FSB officer in an explosive new dossier does not match that of a Moscow oblast vehicle registration database from 2013, then something is amiss — but we rarely run into such issues, and did not have any of these contradictions in this investigation.
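The cross-referencing Bellingcat describes can be sketched as a simple consistency check across datasets. This is a hypothetical illustration, not their actual workflow: the field names, database labels, and values below are all invented.

```python
# Hypothetical sketch of cross-referencing a new record against older
# leaked datasets: if a field (e.g. date of birth) disagrees with any
# historical source covering the same person, flag the contradiction.

def cross_reference(new_record, historical_dbs, key="passport_no", field="dob"):
    """Return (db_name, stored_value) pairs that contradict the new
    record's `field`; an empty list means no contradictions found."""
    contradictions = []
    for db_name, db in historical_dbs.items():
        old = db.get(new_record[key])
        if old is not None and old.get(field) != new_record[field]:
            contradictions.append((db_name, old[field]))
    return contradictions

# Invented example: a 2013 vehicle registration agrees with the new
# dossier, but an airline manifest does not.
new = {"passport_no": "4650234871", "dob": "1979-04-13"}
dbs = {
    "vehicle_reg_2013": {"4650234871": {"dob": "1979-04-13"}},
    "airline_2016": {"4650234871": {"dob": "1981-02-02"}},  # mismatch
}
print(cross_reference(new, dbs))
# → [('airline_2016', '1981-02-02')]
```

This is also why Russia's plan cuts both ways: a record edited 'in situ' today will contradict the copies already harvested, and the contradiction itself becomes a signal.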

An absence of historical data can make an identity look weird. In Wired, Bellingcat's founder, Eliot Higgins, described anomalies in the digital footprints of the men accused of poisoning Skripal:

Our lead investigator, Christo Grozev, has collected many of these leaked databases over the years, and those indicated that 'Alexander Petrov' and 'Ruslan Boshirov' [the fake identities used by the GRU operatives] had appeared on those databases out of nowhere, as if they were born full-grown adults.

Amending individual records may make a difference in the longer term, but we doubt it'll help in the short term: Bellingcat and others have hoovered up too much information for these measures to make much of a dent for now.

Of course, Russia is not the only country in this situation. Although the US’ data privacy and security situation may not be quite as dire, adversaries are also collecting US data for counterintelligence purposes.

In perhaps the most serious of these hacks, Chinese intelligence services managed to steal security clearance information from the US Office of Personnel Management. The hack reportedly led to the CIA pulling its officers out of China because they would not have State Department background checks on file. In this case, as in the case of Petrov and Boshirov, it was the absence rather than the presence of data that was telling.

The PRC has likely combined the OPM information with other large data sets stolen from airlines, hotels, health insurers and credit reporting agencies and used it to identify CIA operatives overseas, among other things.

In the modern world, where individuals generate mountains of data that is stored and analysed for various reasons, there is no single measure governments can take to keep their covert operatives from being outed. Fun!

Facial Recognition Isn’t Probable Cause

Reports this week highlight the promise and peril of law enforcement use of facial recognition technology.

On the promising side, Thomas Brewster at Forbes describes what we would characterise as the responsible use of facial recognition technology at the US Department of Homeland Security (DHS). In the example Brewster cites, the DHS' Homeland Security Investigations unit (HSI, and yes, really, Homeland Security's Homeland Security Investigations unit) used an unspecified facial recognition tool to find matches to a man and infant in a child sex abuse video passed to them by British investigators.

The tool found a Facebook profile that matched the adult. When HSI examined the man's profile, it found photos that strengthened the facial recognition match and appeared to contain the child in the video.

The article describes other responsible facial recognition measures, including extensive logging and auditing of queries, and submitting only cropped faces to the tool. A former HSI investigator, Jim Cole, told Forbes that facial recognition was only suitable for developing leads and could not be used as the sole basis for an arrest.

"Facial recognition can never be the basis for probable cause," he said. "That’s a pretty major safeguard."

At the other end of the spectrum, The New York Times' Kashmir Hill reports that an eight-months-pregnant woman in Detroit has filed a lawsuit claiming she was wrongfully arrested for carjacking and robbery on the basis of facial recognition technology.

In this case, police ran a facial recognition search on petrol station surveillance footage of a woman suspected of being an accomplice to a robbery. The search returned a match to a low-resolution photo of Porcha Woodruff, whose photo was then included in a 'photo lineup' from which the robbery victim was asked to identify the suspect.

This is oh-so-very dumb. Facial recognition technology is very good at finding similar faces, but cannot be the sole piece of evidence used to determine guilt or innocence.

The New York Times quotes Gary Wells, a psychology professor who has studied the reliability of eyewitness identifications, as saying:

"It is circular and dangerous. You’ve got a very powerful tool that, if it searches enough faces, will always yield people who look like the person on the surveillance image."

The Times says that only six people have reported being falsely accused of a crime because of facial recognition technology. All six have been Black, and three of these cases involved the Detroit Police Department.

At Microsoft, All Days Are Dog Days

Late last week, Amit Yoran, CEO of vulnerability management platform Tenable, accused Microsoft of being "grossly irresponsible" in addressing a security flaw a Tenable researcher found in Microsoft's Azure platform.

The flaw would enable an unauthenticated attacker to access other Azure customers' applications and sensitive data, and Yoran says "our team very quickly discovered authentication secrets to a bank".

Despite the seriousness of the issue, Microsoft took more than 90 days to implement a partial fix. Yoran continued:

That means that as of today, the bank I referenced above is still vulnerable, more than 120 days since we reported the issue, as are all of the other organisations that had launched the service prior to the fix. And, to the best of our knowledge, they still have no idea they are at risk and therefore can’t make an informed decision about compensating controls and other risk mitigating actions. Microsoft claims that they will fix the issue by the end of September, four months after we notified them. That’s grossly irresponsible, if not blatantly negligent. We know about the issue, Microsoft knows about the issue, and hopefully threat actors don’t.

Microsoft had originally scheduled a fix for 28 September, but miraculously managed to fix the bug just one day after Yoran's scathing criticism was published.

This follows last week's news that Senator Ron Wyden had asked the US government to investigate what he called Microsoft's "negligent cybersecurity practices".

Some lawmakers have taken up Wyden's challenge. The House Committee on Oversight and Accountability has launched an investigation into the breach of Microsoft's cloud computing services by a likely Chinese actor that led to the compromise of US government email accounts.

You can read the committee's letters to the State and Commerce departments here and here.

Three Reasons to be Cheerful this Week:

  1. SBOMs take root: Research from Sonatype indicates that, at least for enterprises with annual revenues over US$50m, Software Bill of Materials (SBOM) adoption is booming. Nearly three quarters of these companies in the US and UK have adopted SBOMs since Biden's 2021 executive order on Improving the Nation's Cybersecurity.
  2. US$20m for school cybersecurity: The Biden administration has launched an effort to improve K-12 school cybersecurity. There is a raft of initiatives, and AWS has pledged US$20m to cybersecurity training. The cloud services provider will deliver training, incident response and security reviews for education technology companies.
  3. Stronger together: Okta and Google's Chronicle have combined forces to produce a set of detections that will make it easier to identify various sorts of attacks in cloud environments.

In this Risky Business News sponsor interview, Catalin Cimpanu talks with Tines co-founder and CEO Eoin Hinchy about how organisations can maximise the potential of their security teams during an economic downturn, focusing on how human error and burnout caused by excessive workloads can put security teams at risk.


Ukraine Thwarts Sandworm Op Targeting Military Systems

Ukraine's State Security Service (SBU) announced it had blocked a Russian 'Sandworm' (ie GRU, Russia's military intelligence agency) effort to compromise its combat data exchange system. The SBU thinks the Russians captured Ukrainian tablets on the battlefield and then started to develop capabilities to compromise the broader network. The technical report indicates the Russians had been working on a suite of about ten different tools.

With Friends Like DPRK, Who Needs Enemies?

Cyber security firm SentinelOne has discovered a North Korean compromise of a sanctioned Russian missile development firm.

The firm, NPO Mashinostroyeniya, aka NPO Mash, had been infected with malware that had previously been attributed to two distinct DPRK-related actors. One piece of malware targeted NPO Mash's Linux email server; the other was a Windows backdoor. These capabilities are complementary, but it's not clear what this tells us about the actors involved. Are they working together or sharing tools? Really just one group? Separately tasked? Who knows.

SentinelOne was able to discover the breach because an NPO Mash IT employee (presumably accidentally) uploaded an email archive to what Reuters described as a "private portal used by cybersecurity researchers worldwide".

Risky Biz Talks

You can find the audio edition of this newsletter and other fine podcasts and interviews in the Risky Biz News feed (RSS, iTunes or Spotify).

In our last "Between Two Nerds" discussion Tom Uren and The Grugq look at China's evolving espionage playbook.

From Risky Biz News:

Bitfinex hack: A New York man arrested last year for laundering cryptocurrency from the Bitfinex hack has admitted, in an unexpected turn of events, to being the hacker behind the incident. Ilya Lichtenstein's guilty plea comes seven years after he hacked the company in 2016 and stole almost 120,000 Bitcoin. The stolen funds were worth $71 million at the time of the hack but are now worth more than $4.5 billion. Lichtenstein was detained in February 2022, together with his wife, Heather Morgan. Morgan pleaded guilty to money laundering charges but said she didn't find out about the hack until 2020, at which point she helped her husband cover up the crime. The couple's arrest drew intense media coverage in 2022 due to Morgan using some of her acquired wealth to finance a rap career.

Seriously Risky Business covered the couple's indictment previously, although at the time it wasn't clear who was responsible for the hack.

Russian "hacktivism," part I: Italy's cybersecurity agency says that a wave of DDoS attacks has impacted the public websites of several Italian government agencies, banks, and other private businesses on Monday. The DDoS attacks took place as Italian Prime Minister Giorgia Meloni visited Kyiv this week, where she announced a sixth military aid package to Ukraine. A Russian "hacktivist" group named NoName057(16) took credit for the attacks, accusing Italy of rUsSoPhObIa. The DDoS attacks targeting Italy come after the group previously targeted Spain the previous weeks. [Additional coverage in ANSA]

Russian "hacktivism," part II: Totally-not Russian "hacktivist" group Anonymous Sudan has issued threats of "reprisals" if France decides to send military troops to Niger to restore the country's democratically-elected government and overthrow a small Russian-backed military junta that has taken control of the African country. [Additional coverage in Le Parisien]