Srsly Risky Biz: Thursday, August 12
At Last, Genuine Hacktivists (Maybe) Appear
Your weekly dose of Seriously Risky Business news is written by Tom Uren, edited by Patrick Gray and supported by the Cyber Initiative at the Hewlett Foundation, AustCyber and founding corporate sponsors CyberCX and Proofpoint.
At Last, Genuine Hacktivists (Maybe) Appear
It's a bad time to be a Belarusian KGB agent.
An activist group calling itself the Belarusian Cyber Partisans has conducted escalating compromises since September 2020, aiming to disrupt the Belarusian security apparatus as citizens agitate for political change. This week the hacktivists revealed the extent of the information they've obtained about that apparatus and, hoo boy, they really have the goods.
The group's cyber campaign began after the disputed reelection of President Lukashenka in August 2020 -- his first election since 1994 in which he's faced any kind of opposition -- with a series of defacements of government websites, including placing President Lukashenka and Interior Minister Yury Karayev on Belarus's most-wanted list for "war crimes against the Belarus people".
And it's only escalated from there.
In September 2020 the Cyber Partisans hijacked the online broadcasts of Belarus's state TV channels to show security forces brutally detaining protestors. They also released the personal details of law enforcement officers.
Now, after getting access to interior ministry data, they apparently have enough information to dox the entire security apparatus that supports the ruling regime, including:
- A database that contains the personal details of all Belarusian citizens including passport photos and home and work addresses, which includes details on who works for the Belarusian KGB. (They never changed the name, which gives you an idea of what a bunch of fluffy bunnies they are.)
- 10 years of emergency calls, including the details of regime supporters who snitched on neighbours and co-workers
- The police database that includes officer case histories
- Information on pro-regime Telegram propagandists
- Tapped phone calls illustrating how the regime persecutes the opposition
The mere knowledge this information may be leaked could affect the balance of power between activists and security forces. Security forces may become reluctant to continue aggressively supporting the regime, especially if they sense the tide of public opinion is shifting against them and details of their future misdeeds may leak.
If Belarus is a tinderbox, this hack could be a spark. The selective release of material from this cache could ignite community outrage and protests -- it contains all the ingredients required to paint an ugly picture of the repressive tactics used by the government.
The Belarusian KGB Chairman has blamed "foreign special services" for the hacks, but of course he would. Given there have been repeated defacements of some government websites, it's also possible the government just isn't very good at securing its systems. It's entirely possible these breaches were perpetrated by real, actual hacktivists and not faketivists like The Cutting Sword of Justice or Guardians of Peace.
It's also possible the Cyber Partisans have some insider knowledge or access -- they don't describe themselves as hackers so much as IT professionals.
It certainly could be a foreign intelligence service operation, but it feels like hacktivists making the most of absolutely terrible cyber security.
Courts Find Incident Response isn't Actually Legal Work. Nice Try, Though.
The longstanding practice of hiding incident response reports behind a cloak of legal privilege is looking, well, a bit wobbly.
Breached companies will often commission incident response work through their legal counsel, the idea being that work commissioned "in preparation for litigation" is off limits to plaintiffs.
However, two recent court rulings have found these incident response reports aren't being commissioned just to prepare for litigation, so legal privilege doesn't apply to them. Risky Biz first flagged this issue over a year ago in this piece about a similar ruling in a lawsuit stemming from the Capital One bank breach.
IR reports often make for a pretty embarrassing read, so it's no surprise most companies would much prefer to keep them in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'. Just one example: The House Oversight and Government Reform Committee report into the Equifax breach found it was entirely preventable and highlighted poor management, complex and outdated IT systems, and a failure to implement responsible security measures.
But the recent rulings are, in some ways, good news. IR reports are probably the most authoritative information about breaches and, at least in principle, this sort of 'ground truth' should form the factual basis upon which lawsuits are built.
Further, increased transparency will allow affected consumers (and partners) to understand the scope of breaches, exactly how they have been harmed and what further risks exist.
When made public, the information in a breach report could be used to improve security more broadly. If IR reports were more widely available, patterns of failure and the underlying root causes could be identified and addressed.
Increasing transparency by removing legal privilege is not all roses, though. A shift away from legally protected reports also changes firms' incentives to invest in incident response in the first place. If firms are afraid that reports will be used as ammunition in lawsuits, they may limit their scope, engage in increasingly complex legal contortions that maximise privilege while limiting how useful reports are for security remediation, or simply ask for verbal reports.
That would be bad, but we suspect a court will hardly be impressed by a company failing to commission incident response and remediation. That would look at least as bad as an embarrassing report.
Whatever happens with regard to legal privilege issues, we may start seeing some decent reporting made public soon anyway thanks to the Cyber Safety Review Board (CSRB) being established under President Biden's May Executive Order on Cybersecurity.
The board's role is to examine "significant cyber incidents," and the CSRB is often described as an NTSB for cyber (the NTSB conducts air crash investigations in the US). Beyond very broad direction to assess "threat activity, vulnerabilities, mitigation activities, and agency responses," one of its first jobs is to set its own agenda by providing recommendations regarding its "mission, scope and responsibilities" and "sources of information," among other things.
We have some suggestions.
Firstly, the Board's mission should be to improve the effectiveness of economy-wide cyber security practice by producing public reports that -- like NTSB reports -- examine incidents systemically and seek to identify the root cause. This is different from the typical focus of incident response, which is to understand and remediate current intrusions.
These public reports should contain recommendations for all stakeholders, including platform companies such as Microsoft, product vendors, and the affected firms and their cyber security functions. These don't need to be enforceable recommendations, as they will have political weight that encourages change.
Our second suggestion is that CSRB reports -- again like NTSB reports -- be inadmissible as evidence in criminal or civil proceedings. The public benefit that comes from aligning incentives to produce an accurate and reliable report outweighs the tactical benefits gained by making this evidence available in court.
In the short term CSRB reports will likely have to rely, at least in part, on incident response reports that have already been produced. The CSRB will face challenges recruiting a suitably qualified workforce to conduct entirely independent investigations, and repeating work an independent firm has already done would be a duplication of effort. But CSRB reports should build upon what is already available and examine the systemic issues that are the root causes of cyber security failures.
The NTSB isn't a perfect analogy for the CSRB. Cyber attacks are not like plane crashes -- there is typically a human adversary, and the cyber security landscape is far more diverse than the highly regulated aviation industry. But there are lessons to be learnt from a model that has worked well.
Apple to Scan iDevices for Illegal Content
Apple announced new features, arriving in iOS 15, designed to protect children and limit the spread of Child Sexual Abuse Material (CSAM).
Firstly, the Messages app will warn children about explicit photos before they are viewed or sent. This is opt-in for family accounts, and children can still view and send explicit material if they proceed through the warnings, but parents of children under 12 can choose to receive a notification if they do (the child is warned that their parents will be notified).
This gives children and parents a few more tools to deal with explicit messages, and Alex Stamos, former Facebook CSO and current Director of the Stanford Internet Observatory, told the Risky Business Podcast he viewed the privacy trade-offs as "quite reasonable" and likes that it "help[s] parents protect child accounts but... also prompts people to protect themselves".
Secondly, and more controversially, Apple will use on-device scanning of photos that are to be shared via iCloud Photos to detect known CSAM. Apple will use a hashing technology it calls NeuralHash to compare uploaded photos to a database of verified CSAM provided by the National Center for Missing & Exploited Children (NCMEC).
If a number of photos match (more than one, but Apple doesn't say exactly how many), Apple will be able to decrypt the photos and forward them to law enforcement after it has confirmed that they really are CSAM. At first glance this is a bit strange -- iCloud is not currently end-to-end encrypted, so Apple could scan photos server-side, the process typically used at companies that handle user-generated content. Stamos and others have speculated that Apple is planning to announce the rollout of end-to-end encrypted iCloud backups; on-device CSAM scanning would then be the price of end-to-end encrypted backups.
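A toy sketch of the threshold idea is below. Everything in it is hypothetical -- the hash values, the threshold number and the flagging flow are stand-ins, not Apple's actual NeuralHash or its cryptographic threshold scheme -- but it shows the basic logic: matches are tallied against a database of known hashes, and nothing is surfaced for review until the tally crosses the threshold.

```python
# Illustrative only: a stand-in for threshold matching against a known-hash
# database. Apple's real system uses NeuralHash (a perceptual hash) plus a
# cryptographic scheme in which matches only become readable past the threshold.
from typing import Iterable, Set

MATCH_THRESHOLD = 30  # hypothetical -- Apple has not published the real number


def count_matches(photo_hashes: Iterable[str], known_hashes: Set[str]) -> int:
    """Count how many of the uploaded photos' hashes appear in the known-CSAM set."""
    return sum(1 for h in photo_hashes if h in known_hashes)


def flagged_for_review(photo_hashes: Iterable[str], known_hashes: Set[str]) -> bool:
    """Nothing is reported until the match count crosses the threshold."""
    return count_matches(photo_hashes, known_hashes) >= MATCH_THRESHOLD
```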
Apple has narrowed the scope of its on-device detection as much as it can: it only applies to previously identified CSAM, only to photos that will be shared via iCloud, and only when a number of photos are detected. But these are all choices Apple has made, and choices that could be changed. The EFF is concerned about the "slippery slope": that Apple will be forced by an authoritarian government to search for other content. Stamos, meanwhile, worries Apple could be compelled by law to filter out other content on its devices. His alternative: if Apple's concern is the sharing of CSAM in shared photo libraries then... don't encrypt shared photo libraries.
Current US legislation requires companies to report the CSAM that they detect, but doesn't require them to actively seek it out. There is a trend, however, towards legislation that obliges companies to be more proactive in dealing with online harms. Draft regulations under Australia's Online Safety Bill 2021 would require service providers to take "reasonable steps to develop and implement processes to detect and address material ... that is or may be unlawful or harmful," and draft UK Online Safety legislation has different wording but similar intent. A European Commission effort is also examining the "responsibilities of online service providers to detect and report child sexual abuse".
What should a service do to prevent harm when it is end-to-end encrypted? Apple equates on-device with "private" and has decided that on-device scanning is the best way to square the circle and protect privacy while also detecting harmful and illegal material.
Shorts
Muh Muh Muh Myyyyy Corona
US intelligence agencies have reportedly collected (wink) a massive trove of genetic data relating to viruses being studied by the Wuhan Institute of Virology. This comes during the US intelligence community's 90-day effort to investigate the origins of Covid-19. There are two competing theories: the first, that the pandemic is the result of the virus leaping from an infected animal to a human; the second, that the virus escaped in a laboratory accident.
This analysis could shed more light on the origin of the pandemic. The Bloom Lab has already recovered and analysed SARS-CoV-2 sequences from the early Wuhan outbreak that were originally uploaded to (and then deleted, possibly deliberately, from) US and Chinese sequence databases. Although the sequences were removed from the database search interface, Bloom was able to recover them via insecure direct object references. The sequence data suggests the virus was circulating in Wuhan before the outbreak at the Huanan Seafood Market.
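For readers unfamiliar with the term, an insecure direct object reference just means the objects were still reachable by their identifiers even after being pulled from the search index. A minimal sketch of that kind of recovery, with an entirely invented endpoint and accession format (this is not the actual archive Bloom queried):

```python
# Hypothetical illustration of fetching objects by direct reference after they
# have been removed from a search interface. The URL pattern and accession
# format are invented for this example.
import urllib.error
import urllib.request

BASE_URL = "https://storage.example.com/sequence-archive/{accession}.fastq.gz"


def fetch_by_accession(accession: str, dest_path: str) -> bool:
    """Try to download a sequence file directly by its accession ID."""
    url = BASE_URL.format(accession=accession)
    try:
        urllib.request.urlretrieve(url, dest_path)
        return True
    except urllib.error.URLError:
        return False  # the object really is gone, or access is denied


# e.g. fetch_by_accession("SRR0000001", "SRR0000001.fastq.gz")
```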
CNN also reports the US IC's lack of analysts with clearances who speak fluent Mandarin biology-dork is making things tough.
More data may be out there. Can the intelligence community get to it? And what will it tell us?
Would a Rose not Smell as Sweet
Brian Krebs examines the name-changing merry-go-round that ransomware criminals use to avoid law enforcement (and, increasingly, diplomatic) attention. Ideally, ransomware crews would like to develop enduring reputations for "trustworthiness" and "reliability" -- so that victims are confident payment will result in decrypted files -- so this name-hopping is somewhat counterproductive, and building a new reputation comes at a cost.
In Krebs' admittedly small selection of criminal gangs, groups appear to be changing names more frequently. In the absence of firm data on ransomware payments it is possible to interpret this as some sort of success -- at least gangs are having to invest in new reputations?
Public and Private, Hand in Hand, Skipping Through the Meadows
At Black Hat, CISA Director Jen Easterly announced the Joint Cyber Defense Collaborative (JCDC), a public-private collaboration to promote national resilience by unifying cyber defense plans and operations. It's a good idea, but the music sucked.
A Decent Blockchain Pay Day
PolyNetwork, a blockchain interoperability network, was hacked and cryptocurrency stolen -- at time of writing around USD$400m worth was in the attacker's presumably sweaty virtual paws. PolyNetwork sent an open letter to the hacker, saying "The amount of money you hacked is the biggest one in the defi history," and urged them to return the stolen assets (DeFi is decentralised finance).
This seems to have at least partially worked. About half the money has been returned. Some parts of the cryptocurrency community are coordinating to make it difficult for the hacker to actually launder the stolen funds, but at this point it is unclear if all the money will be returned.
Apple's Corellium Lawsuit Dead Before Arrival
In a win for security research, Apple has dropped its lawsuit against Corellium, the maker of iOS virtualisation software for security research. A judge already tossed out the bulk of Apple's case last year -- the remaining item slated for litigation involved Apple's claims of copyright protection bypasses under the DMCA. To our non-expert eyes that case looked weak, and it seems we were right. Apple withdrew its claim before it could get to court.
Well. That's Embarrassing.
The Cybersecurity Atlas, a European Commission compendium of organisations with cyber security expertise, has been hacked and its member database released. The information was already publicly available, but combined with access to the Commission's portal, the breach could have enabled legitimate-looking phishing attacks.
Ni Hao, Fellow Iranians
A Chinese espionage group tried to disguise itself as Iranian hackers -- by using "/Iran" in file paths, Farsi error messages, and Iranian-associated webshells -- while attacking Israeli organisations including government bodies and technology and telecommunications companies. This newsletter previously reported Chinese groups using compromised home routers to obfuscate their origins. This appears to be a different group, which may indicate a high-level directive to improve opsec.
A Thousand e-Monkeys at a Thousand e-Typewriters
Interesting government research! Researchers from Singapore's Government Technology Agency found that OpenAI's GPT-3 wrote better phishing emails than they did. It is possible that bureaucrats just aren't very good at writing engaging emails, and certainly AI will be able to write a lot more of them without all the tea breaks government employees take.
Like Cloudflare, but for Crime!
Researchers describe a new type of criminal service: a Traffic Distribution System, or TDS, that they call Prometheus. A TDS is a malware delivery service that intelligently manages malware deployment. TDS customers provide compromised web hosts and malicious files; the TDS deploys the malware to the compromised hosts and manages how visitors interact with it. Phishing emails steer victims to links under the TDS's control, and Prometheus manages redirection so that only vulnerable visitors are served the payload before all visitors are redirected to a legitimate URL. This makes malware deployment easier and also provides some protection against detection by bots.
Conti Ransomware Leak a Snooze-fest
The Conti ransomware crew's technical manuals and guides have been leaked by a disgruntled associate. The techniques described are pretty well-known. Conti received USD$20m in cryptocurrency in 2020.