WeChat's Privacy Policy is Utterly Pointless

PLUS: CSAM Scanning, A Signal Emerges From the Noise

Your weekly dose of Seriously Risky Business news is written by Tom Uren, edited by Patrick Gray with help from Catalin Cimpanu. It's supported by the Cyber Initiative at the Hewlett Foundation and this week's edition is brought to you by Kroll Cyber.

Photo by Lianhao Qu on Unsplash

A Citizen Lab analysis of Chinese social networking app WeChat has entirely missed the point by over-indexing on the app's privacy policy. WeChat is a ubiquitous, surveillance-friendly application that provides the PRC with unfettered access to its users' messages. Fiddling with its privacy policy won't fix that.

WeChat's domestic Chinese version, Weixin, is what is known as a "super-app". Primarily a messaging app, it also serves as a major financial transaction platform and can run "Mini Programs", WeChat's equivalent of apps from an app store. These Mini Programs cover everything from ecommerce and health to gaming, and include government service apps such as the COVID-19 contact tracing apps that were compulsory during the pandemic.

WeChat's international version isn't so all-encompassing, but it does contain many similar features. Tencent, the company behind WeChat, splits the international (WeChat) and domestic (Weixin) versions of the app into two "services" run by separate subsidiaries in Singapore and Shenzhen respectively. WeChat considers the mainland Chinese version, Weixin, to be a "third party". So, funnily enough, it has a completely different privacy policy.

Citizen Lab researchers analysed the international version of the app using a variety of methods including reverse engineering, dynamic analysis, and the examination of network traffic captures, then assessed their findings against the app's privacy policy.

The research found that WeChat collects personal data that includes location, phone numbers, and equipment identifiers. The international version does a reasonable job of following its privacy policy and some core features (such as Messaging and Moments) collect the minimum amount of data they need to perform their advertised function.

Other features of WeChat, such as Search, Channels and Mini Programs, are covered by Weixin's privacy policy and are more intrusive. All Mini Programs, for example, send extensive usage data to Weixin by default to enable analytics. And it's not clear within the WeChat app when you've crossed the boundary between WeChat and Weixin services. It's also not possible to grant location permissions on a per-Mini Program basis: once location permissions are granted to WeChat, any Mini Program can access location regardless of whether it needs it for its intended functionality.

Despite these flaws, WeChat's data harvesting practices aren't that different from many Western platforms. They are worse, but only just. The bigger issue here has less to do with WeChat's privacy policy and more to do with the company being under the PRC's thumb.

The Chinese government has a track record of surveilling and persecuting particular groups it calls the "five poisons" — Uyghurs, Tibetans, Taiwanese independence advocates, democracy activists, and Falun Gong practitioners.

Because WeChat does not support end-to-end encryption, Chinese security services can access private messages sent via the platform if they want to. The Chinese government is even known to station police officers inside major internet companies. And censorship on WeChat is well documented.

In Xinjiang, the region where Uyghurs make up nearly half the population, police also use a range of other mechanisms to get information from mobile devices. These include mandatory installation of apps that search phones for "dangerous" files, and searches at security checkpoints that take advantage of physical access to devices.

People in Xinjiang are also actively discouraged from using encrypted alternatives to WeChat such as WhatsApp and from using VPNs. Use of these kinds of apps can lead to police interrogation, detention and imprisonment.

In the face of this kind of organised repression, WeChat's privacy policy is no defence at all.

Citizen Lab's report punts on how high-risk users can protect themselves:

We caution no amount of adjustments can make the app completely "safe" for certain high-risk threat models. We can recommend alternative encrypted or anonymous messaging systems, but we also recognize that most WeChat users are on WeChat out of necessity. For high-risk users, we recommend talking to a security professional about your particular concerns to see what you can do to limit, manage, or reduce your exposure to risk while using the app.

Maya Wang, a Human Rights Watch expert on China's use of technology for mass surveillance, told us that many Uyghurs "stopped contacting their families because of the risks involved, as everything they say can be construed as 'terrorism' and 'extremism' in Xinjiang".

"Many of them devise schemes like changing their WeChat profile picture weekly as a way to tell their families that they’re OK", Wang said.

But ultimately, there is simply no safe way of communicating via a PRC-friendly communications platform. WeChat is part of the PRC's architecture of censorship, surveillance and repression.

A better privacy policy won't help one bit.

CSAM Scanning: A Signal Emerges From the Noise

As the UK's Online Safety Bill is discussed in the House of Lords, the mostly fruitless debate about "breaking" encryption and countering Child Sexual Abuse Material (CSAM) continues.

On the one side are encryption advocates who refer to proposals for client-side scanning as "magical thinking" or assert that technology is not a "magic wand". On the other side are government regulators and lawmakers essentially saying technologists need to come up with new and better approaches. This newsletter previously explored this dynamic in Give Me E2EE (end-to-end encryption) or Give Me Death and unsurprisingly, nothing has changed.

However, a new report by the UK's National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) charts a way forward. It proposes a framework that can be used to evaluate CSAM detection and prevention tools for E2EE environments.

Previously, CSAM detection tools have mostly been assessed by looking at metrics such as classification accuracy, false positive rates and usability. REPHRAIN's proposed framework doesn't just focus on by-the-numbers performance metrics, but includes assessment of human rights-related factors such as transparency, explainability, fairness, accountability, and privacy.
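As a rough illustration of the by-the-numbers metrics that have traditionally dominated these evaluations, accuracy and false positive rate fall straight out of a confusion matrix. The sketch below is purely illustrative; the counts are invented and bear no relation to any real CSAM classifier:

```python
# Illustrative only: the kind of confusion-matrix metrics traditionally
# used to evaluate content classifiers. All counts are invented.
def classification_metrics(tp, fp, tn, fn):
    """Return (accuracy, false positive rate) from raw counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    false_positive_rate = fp / (fp + tn)  # benign content wrongly flagged
    return accuracy, false_positive_rate

acc, fpr = classification_metrics(tp=95, fp=10, tn=9890, fn=5)
print(acc, fpr)  # accuracy ≈ 0.999, FPR ≈ 0.001
```

Even a tiny false positive rate matters at platform scale, which is part of why REPHRAIN argues these raw numbers can't be the whole assessment.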

The report steps through its proposed framework for five Proof-of-Concept CSAM detection tools designed for E2EE environments and draws conclusions based on the exercise.

One of the main takeaways of the report is:

Striking a fair balance between the rights and interests of all individuals concerned, i.e., law-abiding users, (potential) CSAM victims, and perceived perpetrators, is a key issue. Although none of the PoC tools propose to weaken or break the end-to-end encryption protocol, from a Human Rights perspective, the confidentiality of the E2EE service users’ communications cannot be guaranteed when all content intended to be sent privately within the E2EE service is monitored pre-encryption.

In other words, there are no easy solutions here. Technology is not a magic bullet that can be used only against evildoers, but neither is encryption a magic shield that works only for law-abiding citizens. Both are dual-use technologies and policymakers will have to make difficult decisions.

The report also notes that CSAM detection solutions should include "technical, legal, operational, and/or contractual safeguards by design to prevent the repurposing of such technologies prior to any deployment in a real-life E2EE application". This addresses the "slippery slope" argument that is often employed by encryption advocates. CSAM detection technologies inherently come with some risk of misuse. But rather than binning the technology entirely, other controls could be put in place. And these controls don't necessarily need to be technical. The Australian government, for example, explicitly ruled out using its Assistance and Access Bill (TOLA) to break end-to-end encryption after pushback from tech and rights groups.

We think that REPHRAIN has found the right approach to assessing CSAM detection solutions and balancing the rights of everyone concerned and we agree on the importance of safeguards.

These types of laws are coming, and not just to the UK. We suggest paying attention to the type of work coming out of organisations like REPHRAIN instead of tuning into the extremes on each side of the "debate".

Listen to Patrick Gray and Tom Uren discuss this edition of the newsletter in the Seriously Risky Business podcast:

Three Reasons to be Cheerful this Week:

  1. Biden's SIGINT Executive Order works, for now: the EU has adopted a new legal framework that allows data transfers between the EU and US after the European Court of Justice (ECJ) struck down the previous agreement in 2020. Biden's SIGINT executive order, which we think is Kafkaesque, but also good, was drafted to address ECJ concerns about US intelligence collection efforts. We expect the new data transfer agreement will be challenged yet again, but at least the EO will help with trade until then.
  2. CISA disrupts ransomware in progress: In a recent Lawfare podcast, CISA's Executive Assistant Director for Cybersecurity, Eric Goldstein, described how cyber security firms and researchers notify CISA when a US organisation has been compromised by a ransomware group. CISA then contacts the affected organisation and passes on technical details and concrete steps that should be taken to prevent harm. Goldstein says that CISA has "actively prevented a ransomware group from achieving their objectives" in over 400 of these notifications this calendar year alone.
  3. Microsoft revokes 100+ malicious drivers: Microsoft has revoked more than a hundred malicious drivers that were used for browser hijackers, rootkits and gaming cheats. [more on Risky Business News]

In this Risky Business News sponsor interview, Catalin Cimpanu talks with Scott Hanson, Head of Global Security Operations at Kroll, on how the company has adopted Detection-as-Code for its approach to writing, managing, and rolling out detection rules for its customers:


FBI's 702 Access Ain't All That

Lawfare has a Q&A-style explainer on the FBI's use of Section 702 surveillance powers.

The new information here, which we haven't seen previously, is that the FBI gets access to only a relatively small portion of the entire 702 dataset. Although the US intelligence community (IC) looks at many targets for many different purposes, it's only once the FBI has opened a "full predicated investigation" that data about the relevant targets flows through into the database the FBI has access to.

Opening a full predicated investigation requires specific information about a crime or national security threat and supervisory approval.

This greatly limits the amount of Section 702 collection the FBI is given access to. In 2022, for example, there were about 246,000 Section 702 targets across the intelligence community, but the FBI was given access to the communications of only 7,900 of them, about 3.2 percent.

The FBI's access to this data has become controversial as the organisation was previously recklessly cavalier about querying the 702 database. Although 702 collection is focussed on foreigners overseas, these targets can and do talk to US citizens whose communications then get ingested as well. This is known as 'incidental collection'.

Inside the FBI's Hive Takedown

Politico has a long read exploring the FBI's hack of the Hive ransomware gang. It all unfolded much as you'd expect, but it's a fun read.

The UK Wants to Nuke Banking Malware with Metadata

The Record examines the possibility that internet connection records (ICRs), metadata about internet communication collected under authorities granted by the UK's Investigatory Powers Act (IPA), could be used to disrupt online fraud and cyber crime. The IPA grants law enforcement and intelligence agencies powers to obtain communications.

An independent review of the IPA cited a theoretical example where metadata could be used to search for "devices simultaneously connecting to legitimate banking applications and to malicious control points". Such connections would indicate potential fraud taking place, and tipping off law enforcement or the bank could disrupt crime in progress.
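In principle, the review's example boils down to a temporal join over connection records. A minimal sketch, assuming a made-up record format (device, destination, timestamp), invented domain lists, and an arbitrary five-minute window — real ICRs are far richer and far more tightly access-controlled than this:

```python
from datetime import datetime, timedelta

# Hypothetical watchlists — both names are invented for illustration.
BANKING = {"bank.example.com"}
MALICIOUS = {"c2.badhost.example"}

def suspicious_devices(records, window=timedelta(minutes=5)):
    """Flag devices that contact a banking service and a known
    malicious control point within the same short time window."""
    flagged = set()
    for dev, dest_a, t_a in records:
        if dest_a not in BANKING:
            continue
        for dev_b, dest_b, t_b in records:
            if dev_b == dev and dest_b in MALICIOUS and abs(t_a - t_b) <= window:
                flagged.add(dev)
    return flagged

t0 = datetime(2023, 7, 1, 12, 0)
records = [
    ("phone-1", "bank.example.com", t0),
    ("phone-1", "c2.badhost.example", t0 + timedelta(minutes=2)),
    ("phone-2", "bank.example.com", t0),
]
print(suspicious_devices(records))  # → {'phone-1'}
```

The technical query is the easy part; the hard questions are who holds the records, under what authorisation a search runs, and what happens to the false positives.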

We do wonder if shoehorning this into an intelligence-focussed framework is the right approach. CISA's recent efforts show you can have at least modest success with voluntary cooperation (see Reasons to be Cheerful #2).

Risky Biz Talks

You can find the audio edition of this newsletter and other fine podcasts and interviews in the Risky Biz News feed (RSS, iTunes or Spotify).

In our last "Between Two Nerds" discussion Tom Uren and The Grugq look at European Union efforts to make laws to protect journalists from spyware.

From Risky Biz News:

$126 million go missing from Multichain in apparent hack: Roughly $126 million worth of crypto-assets have been mysteriously transferred from the accounts of cryptocurrency platform Multichain in an apparent hack, according to blockchain security firms PeckShield, SlowMist, Lookonchain, and CertiK.

The incident took place on Friday, July 7.

Multichain—which is a platform that interconnects different blockchain platforms and allows users to exchange tokens—has shut down to investigate the incident.

[more on Risky Business News]

US and Canada warn of new Truebot malware variant: Cybersecurity agencies from the US and Canada have issued a joint security alert and are warning about malicious campaigns spreading new versions of the Truebot malware.

First spotted way back in 2017, Truebot is a malware downloader that was created and is operated by Silence, a financially motivated cybercrime crew. It is typically used as an initial infection point through which second-stage payloads are delivered on compromised hosts.

According to US and Canadian officials, new versions of the Truebot malware are currently being distributed through phishing campaigns containing malicious redirect hyperlinks.

[more on Risky Business News]

"Bug bounty" security engineer arrested for crypto-heist: The US Department of Justice has arrested and charged a New York security engineer named Shakeeb Ahmed for hacking a cryptocurrency exchange in July 2022. Officials say Ahmed used skills he learned on his job to manipulate one of the platform's smart contracts and steal $9 million worth of crypto-assets. In a practice that has become quite common in the cryptocurrency world, Ahmed reached a secret agreement with the victim to keep $1.5 million of the stolen funds as a "bug bounty reward" and return the rest if the company didn't refer the hack to law enforcement. Officials didn't name the exchange, but the date of the hack and the value of the stolen and returned funds match the attack on the Crema Finance platform. The case marks the first time the FBI and DOJ have detained one of these hackers who reached "white hat bug bounty" agreements with their victims.