This year’s report is now out: https://www.gov.uk/government/publications/huawei-cyber-security-evaluation-centre-hcsec-oversight-board-annual-report-2021
It reports that the issues described in previous years have been addressed but then concludes, curiously: “there has been no overall improvement over the course of 2020 to meet the product software engineering and cyber security quality expected by the NCSC”. I can only conclude that Huawei addressed the symptoms (identified problems from previous reports) without carrying out the wider process changes that would have produced better quality in the first place. One step forward, one step back.
What can we learn from what seems to be a new level of active cyber operations? Here are some of the lessons:
- The fact that such consequential targets exist (and are vulnerable) is the result of natural but dangerous incentives to develop monocultures. It’s a truism that convenience trumps security. And one consequence is that it’s always easier to piggyback on something that’s already been done than to develop a new solution. So we have only a few processor technologies, a few operating systems, a few transport protocols, a few cloud providers; and now only a few large-scale systems management environments. Choosing to use the existing systems is almost always cost-effective, but the hidden cost is the loss of resilience that comes from putting all of our computational eggs in one basket.
Those who build these pervasive systems have failed to appreciate that with great power comes great vulnerability, and so great requirements for security. Instead these systems rely on the same barely workable mechanisms that are used in the rest of cyberspace.
- Government cybersecurity organisations have, with the best of intentions, worked themselves into an untenable situation with respect to protecting against such attacks. It was natural in, say, the Five Eyes countries to give the responsibility for cybersecurity to the signals intelligence organisations (NSA, GCHQ, ASD, CSE). After all, they had the expertise with digital communication of all kinds, and they used cyber tools for their own surveillance and espionage purposes, and so they were experts in many of the issues and technologies.
However, these signals intelligence agencies are constrained to act only against those outside the countries they belong to (with some brief exceptions post 9/11). When they spun off cybersecurity centres, to protect their domestic environments against cyber attacks, they found themselves in a legal and procedural no-man’s-land where they, in general, didn’t have the ability to act in any meaningful way, and were reduced to an educational mandate. So we have the (interesting and novel) spectacle of remediation of systems affected by the Microsoft Exchange hack by the FBI, not the NSA (although surely the NSA was involved).
Western countries need to (re)think the role that their cybersecurity centres will play in the face of serious, large-scale, state-driven cyber attacks — not just practically but legislatively.
Criminals hide the connection between the origin of the proceeds and their availability for use by creating a chain (and often now a network) of movements, each step of which makes it harder to connect the outcome with the origin. Traditionally, this meant moving cash into the financial system, shuffling it around, possibly internationally, and using it to buy expensive objects (cars, houses, jewels, businesses) that seemed increasingly innocent as their distance from the original crime increased.
Financial intelligence units try to follow these chains, but it’s easy to see that the advantage lies with the criminals. Adding another link to the chain is easy, but finding that it exists is much harder.
Criminals are especially interested in mechanisms that completely obscure links in the chain. Changing ownership is one way to do this. A simple way to change ownership is to put the value in someone else’s name, perhaps a spouse or relative. But the mechanism of corporate entities makes this much easier. Networks of businesses can be constructed, and own one another, making it extremely difficult to work out who is the ‘real’ owner — the so-called beneficial owner. (Concealing who really owns value is also a common tax evasion strategy.)
A proposal which is becoming increasingly popular as a way to make connections more visible is to create beneficial ownership registries. These are managed by governments, and require any corporate entity to declare who its beneficial owners are, usually with a threshold: the requirement applies to any beneficial owner who owns more than 25% of the entity.
There are some practical problems with this idea. The beneficial owners have to be identifiable in a world of 7 billion people, which means that quite a lot of detail must be provided about each owner. How much of this detail should be public? The idea works better if people in other countries can see and identify beneficial owners, because moving value to another country is a good way to break the chain. But there are countervailing privacy issues, especially as most beneficial owners are not doing anything nefarious.
Each country must build and manage its own beneficial ownership registry; and how can all governments be convinced to participate? Any that do not will become magnets for money laundering, which may be morally objectionable but is also very profitable for their financial institutions.
The threshold also creates issues. If it is 25%, which is typical today, then only four people have to get together to create an entity whose ownership can be legitimately concealed. If it is 10% then ten participants are required to conceal ownership. This might seem cumbersome for criminals but sadly “money laundering as a service” now exists, and the organisations that provide this service have the resources to aggregate value from many different criminals and mix and match it.
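To make the ownership problem concrete, here is a minimal sketch (companies, people, and fractions all invented) of how effective ownership can be computed by multiplying fractions along every path through an ownership network and summing over paths. Real networks can contain cycles (companies that own one another), which need a linear-system treatment; this sketch assumes an acyclic network for brevity.

```python
# direct_owners[entity] = {owner: fraction}; an owner may itself be an entity.
# Names and fractions are invented for illustration.
direct_owners = {
    "TargetCo": {"ShellA": 0.5, "ShellB": 0.5},
    "ShellA":   {"Alice": 0.4, "ShellB": 0.6},
    "ShellB":   {"Alice": 0.3, "Bob": 0.7},
}

def effective_ownership(entity):
    """Return {person: fraction}: multiply fractions along each ownership
    path and sum over paths. Assumes the ownership graph is acyclic."""
    totals = {}
    for owner, frac in direct_owners.get(entity, {}).items():
        if owner in direct_owners:  # the owner is itself a company
            for person, f in effective_ownership(owner).items():
                totals[person] = totals.get(person, 0.0) + frac * f
        else:                       # the owner is a natural person
            totals[owner] = totals.get(owner, 0.0) + frac
    return totals

THRESHOLD = 0.25  # typical registry reporting threshold
for person, frac in effective_ownership("TargetCo").items():
    if frac > THRESHOLD:
        print(f"{person} owns {frac:.0%} (above threshold, must be declared)")
```

Even in this tiny example the direct shareholder list of TargetCo contains no people at all; the registry requirement only bites once the fractions are propagated through the shells.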
So beneficial ownership registries may help stamp out a good deal of shady practice, but they may not help much with stopping money laundering.
The other way to break chains is to use cryptocurrencies. All transactions within a blockchain are visible, but what is hidden is the identity of those making the transactions. So a criminal can put value into a cryptocurrency using one identity (really a public-private key pair) and take it out again using another identity, and the chain has been completely broken (as long as the key pairs, the identities, are kept separate).
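A toy model of the chain break (addresses are random tokens standing in for key pairs; there is no real cryptography here): the full ledger is public, but once several users’ deposits pass through a mixing address, nothing on the ledger links a payout address back to a particular depositor.

```python
# Toy public ledger: every transfer is visible, but addresses are pseudonyms.
import secrets

ledger = []  # the public blockchain: (from_addr, to_addr, amount)

def transfer(src, dst, amount):
    ledger.append((src, dst, amount))

mixer = secrets.token_hex(8)
deposits = [secrets.token_hex(8) for _ in range(3)]  # three unrelated users
payouts = [secrets.token_hex(8) for _ in range(3)]   # fresh withdrawal addresses

for addr in deposits:
    transfer(addr, mixer, 100)
for addr in payouts:
    transfer(mixer, addr, 100)

# All six transfers are publicly visible, yet the ledger alone cannot say
# which payout belongs to which depositor; that link exists only off-chain,
# in the records of the mixer and the key-pair owners.
```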
Fortunately, cryptocurrencies are not ideal as places to hold value, even briefly, because the exchange rates between them and the financial system tend to fluctuate wildly and unpredictably. Inserting and removing value is also a relatively slow process.
The bottom line is that it is easy for criminals to disconnect the crimes they commit from the value that they produce, and so the current legal basis for prosecuting money laundering is almost unusable — which the prosecution statistics bear out.
The other approach to limiting money laundering is to regulate the financial system more carefully. This includes things like limiting the ability to move cash in quantity, requiring an increasing array of agents who handle value to report transactions above a certain size or that seem suspicious, and requiring banks and financial institutions to keep track of who they’re dealing with. The problem is that the entities that are required to make reports have considerable discretion about when to do so, and considerable incentives not to because they make money from the transactions involved. The recent spate of breathtaking fines for banks that have violated money laundering rules strongly suggests that they have decided that transactions that are probably illicit but have a fig leaf of deniability are worth doing; and the fines, if they are caught, are simply a cost of doing business.
In the end, the only mechanism that can actually prevent money laundering is unexplained wealth orders, which I’ve written about before. These target the end of the chain, where criminals want to take the value produced by their crimes and use it for their lifestyle. UWOs force the recipients of value to account for where it came from, so the size and complexity of the chain doesn’t matter.
- Use high levels of positive language;
- Avoid negative language completely;
- Stay away from policy;
- Don’t mention your opponent.
Joe Biden’s speech at Gettysburg was a textbook example of how to do this (and it’s no easy feat avoiding mentioning your opponent when it’s Trump).
He should have stopped after the first five minutes (HT Bob Newhart “On the backs of envelopes, Abe”, also Lincoln himself, 271 words).
After the first five minutes it got rambling and repetitive. The media hates speeches that fit our model, and so the only sound bites came from the second half, which was much less well-written.
- uses high levels of positive language;
- avoids all negative language;
- stays away from policy and talks in generalities;
- doesn’t talk about the opposing candidate.
https://www.sciencedirect.com/science/article/pii/S0261379416302062
The reason this works is that the choices made by voters are not driven by rational choice but by a more immediate appeal of the candidate as a person. The media doesn’t believe in these rules, and constantly tries to drive candidates to do the opposite. For first-time candidates this pressure often works, which is partly why incumbents tend to do well in presidential elections.
But wait, you say. How did Trump win last time? The answer is that, although he doesn’t do well on 2 and 4, Hillary Clinton did very poorly on all four. So it wasn’t that Trump won, so much as that Hillary Clinton lost.
Based on this model, and its historical success, Biden is doing pretty much exactly what he needs to do.
Their conclusion was that, although they had become suspicious of attempts to include malicious code in switches and other products, they couldn’t actually conclude that there had been such attempts because the code was so poorly constructed.
Now a different case has come to light. Huawei was contracted to build a repository for the Papua-New Guinea government’s data and operations. It opened in 2018.
A report was commissioned by the PNG government, and carried out by the Australian Strategic Policy Institute (paid for by Australia’s DFAT). Those who’ve seen the report say that it points out that:
- Core switches were not behind firewalls;
- The encryption used an algorithm known to be broken two years earlier;
- The firewalls had also reached the end of their lives two years earlier.
In other words, the installation was not fit for service.
The article (below) takes the view that this was malice. But Huawei’s track record again makes it impossible to tell.
As well as making it easy for Huawei to access the system illicitly, the poor level of security made it possible for any other country to gain access too. This is one of the major undiscussed issues around Huawei — maybe they are beholden to the Chinese government and might have to share data with them, but the quality of their security means that the threat surface of their equipment is large. So using Huawei equipment risks giving access to Russia, Iran, and North Korea, as well as China.
The PNG project was paid for by a loan from a Chinese bank. Sadly there was no budget for maintenance so the entire system degraded into uselessness before it could even get seriously started. But the PNG government still owes China $53 million for building it (Belt and Road = Bait and Switch?).
(behind a paywall, but there are other versions).
https://www.bnnbloomberg.ca/did-a-chinese-hack-kill-canada-s-greatest-tech-company-1.1459269
Most headlines that contain a question can be answered “No” without reading the article, but this one is an exception.
There are three kinds of provenance:
- Where did an object come from? This kind of provenance is often associated with food and drink: country of origin for fresh produce, brand for other kinds of food, appellation d’origine contrôlée for French wines, and many other examples. This kind of provenance is usually signalled by something that is attached to the object.
- Where did an object go on its way from source to destination? This is actually the most common form of provenance historically — the way that you know that a chair really is a Chippendale is to be able to trace its ownership all the way back to the maker. A chair without provenance is probably much less valuable, even though it may look like a Chippendale, and the wood seems the right age. This kind of provenance is beginning to be associated with food. For example, some shipments now have temperature sensors attached to them that record the maximum temperature they ever encountered between source and destination. Many kinds of shipments have had details about their pathway and progress available to shippers, but this is now being exposed to customers as well. So if you buy something from Amazon you can follow its progress (roughly) from warehouse to you.
- The third kind of provenance is still in its infancy — what else did the object encounter on its way from source to destination? This comes in two forms. First, what other objects was it close to? This is the essence of Covid19 contact tracing apps, but it applies to any situation where closeness could be associated with poor outcomes. Second, were the objects that it was close to ones that were expected or made sense?
The first and second forms of provenance don’t lead to interesting data-analytics problems. They can be solved by recording technologies with, of course, issues of reliability, unforgeability, and non-repudiation.
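As a sketch of what such a recording technology needs, here is a minimal hash-chained shipment log (readings invented; a real system would also sign each entry): every record is chained to its predecessor by a hash, so retroactively altering any past entry invalidates everything after it.

```python
# Tamper-evident log: each entry stores a hash over (record, previous hash).
import hashlib
import json

def append(log, record):
    prev = log[-1]["hash"] if log else "genesis"
    digest = hashlib.sha256(
        json.dumps({"record": record, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def verify(log):
    prev = "genesis"
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"record": entry["record"], "prev": prev},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
for temp in (4.1, 4.3, 9.8, 4.0):      # temperature readings en route
    append(log, {"temp_c": temp})

assert verify(log)                      # untouched chain verifies
log[2]["record"]["temp_c"] = 4.2        # try to hide the 9.8 °C spike...
assert not verify(log)                  # ...and the chain no longer verifies
```

This addresses unforgeability of the history after the fact; reliability (honest sensors) and non-repudiation (who wrote each entry) need hardware and signatures on top.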
But the third case raises many interesting problems. Public health models of the spread of infection usually assume some kind of random particle model of how people interact (with various refinements such as compartments). These models would be much more accurate if they could be based on actual physical encounter networks — but privacy quickly becomes an issue. Nevertheless, there are situations where encounter networks are already collected for other reasons: bus and train driver handovers, shift changes of other kinds, police-present incidents; and such data provides natural encounter networks. [One reason why Covid19 contact tracing apps work so poorly is that Bluetooth proximity is a poor surrogate for potentially infectious physical encounter.]
Customs also has a natural interest in provenance: when someone or something presents at the border, the reason they’re allowed to pass or not is all about provenance: hard coded in a passport, pre-approved by the issue of a visa, or with real-time information derived from, say, a vehicle licence plate.
Some clearly suspicious, but hard to detect, situations arise from mismatched provenance. For example, if a couple arrive on the same flight, then they will usually have been seated together; if two people booked their tickets or got visas using the same travel agency at the same time then they will either arrive on different flights (they don’t know each other), or they will arrive on the same flight and sit together (they do know each other). In other words, the similarity of provenance chains should match the similarity of relationships, and mismatches between the two signal suspicious behaviour. Customs data analytics is just beginning to explore leveraging this kind of data.
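Here is a minimal sketch of the mismatch idea, with invented passenger records and thresholds: flag any pair whose booking provenance says “connected” (same agency, same booking date) but whose travel behaviour says “strangers” (same flight, seated far apart).

```python
# Invented records for illustration; real analytics would use many more
# provenance signals and a graded similarity score rather than booleans.
from itertools import combinations

passengers = [
    {"name": "P1", "agency": "A", "booked": "2020-03-01",
     "flight": "QF7", "seat_row": 12},
    {"name": "P2", "agency": "A", "booked": "2020-03-01",
     "flight": "QF7", "seat_row": 41},
    {"name": "P3", "agency": "B", "booked": "2020-03-02",
     "flight": "QF7", "seat_row": 13},
]

def booked_together(a, b):
    return a["agency"] == b["agency"] and a["booked"] == b["booked"]

def travelling_together(a, b):
    return a["flight"] == b["flight"] and abs(a["seat_row"] - b["seat_row"]) <= 2

# Suspicious: provenance chains match but behaviour doesn't.
flags = [(a["name"], b["name"])
         for a, b in combinations(passengers, 2)
         if booked_together(a, b) and not travelling_together(a, b)]
print(flags)   # P1 and P2 booked together but sit 29 rows apart
```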
These contact tracing apps work as follows: each phone is given a random identifier. Whenever your phone and somebody else’s phone get close enough, they exchange these identifiers. If anyone is diagnosed with Covid, their identifier is flagged and all of the phones that have been close to the flagged phone in the past 2 weeks are notified so that users know that they have been close to someone who subsequently got the disease.
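The protocol just described can be sketched in a few lines (identifiers here are random tokens, and a “contact” is an explicit call rather than a Bluetooth exchange; real apps also rotate identifiers for privacy):

```python
import secrets
from datetime import date, timedelta

class Phone:
    def __init__(self):
        self.ident = secrets.token_hex(8)  # the phone's random identifier
        self.seen = []                     # (other_identifier, date) pairs

    def contact(self, other, when):
        # Both phones record each other's identifier, as in the protocol.
        self.seen.append((other.ident, when))
        other.seen.append((self.ident, when))

def exposed(phone, flagged_ident, today, window_days=14):
    """Was this phone near the flagged identifier within the window?"""
    cutoff = today - timedelta(days=window_days)
    return any(ident == flagged_ident and when >= cutoff
               for ident, when in phone.seen)

a, b, c = Phone(), Phone(), Phone()
today = date(2020, 7, 1)
a.contact(b, today - timedelta(days=3))    # recent contact
a.contact(c, today - timedelta(days=20))   # contact outside the window

# b is diagnosed, so b's identifier is flagged:
print(exposed(a, b.ident, today))          # a should be notified
print(exposed(a, c.ident, today))          # the old contact doesn't count
```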
First, Canada is very late to the party. This style of contact tracing app was first designed by Singapore, Australia rolled its version out at the end of April, and many other countries have also had one available for a while. Rather than using one of the existing apps (which require very little centralised, and so specialised, infrastructure), Canada is developing its own — sometime soon, maybe.
Second, these apps have serious drawbacks, and might not be effective, even in principle. Bluetooth, which is used to detect a nearby phone, is a wireless system and so detects any other phone within a few metres. But it can’t tell that the other phone is behind a wall, or behind a screen, or even in a car driving by with the windows closed. So it’s going to detect many ‘contacts’ that can’t possibly have spread Covid, especially in cities. Are people really going to isolate based on such a notification?
Third, these apps, collectively, have to capture a large number of contacts to actually help with the public health issue. It’s been estimated that around 80% of users need to download and use the app to get reasonable effectiveness. Take-up in practice has been much, much less than this, often around 20%. Although these apps have been in use for, let’s say, 45 days in countries that have them, I cannot find a single report of an exposure notification anywhere.
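The arithmetic behind that gap is simple: a contact is captured only if both phones run the app, so with uptake p (assumed independent across people) roughly p² of contacts are captured.

```python
# Back-of-envelope coverage: both parties must run the app, so the captured
# fraction of contacts is about uptake squared (independence assumed).
for uptake in (0.8, 0.6, 0.2):
    print(f"uptake {uptake:.0%} -> ~{uptake**2:.0%} of contacts captured")
# Even 80% uptake captures only ~64% of contacts; 20% uptake captures ~4%.
```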
Governments are inclined to say things like “Well, contact tracing apps aren’t doing anything useful now, but in the later stages they’ll be incredibly useful” (and so, presumably, we don’t have to rush to build them). But it’s mostly about being seen to do something rather than actually doing something helpful.
Smarter criminals are now exfiltrating files that they find which might be embarrassing to the organisation whose site they’ve hacked. Almost any organisation will have some dirty laundry it would rather not have publicised: demonstrations of incompetence, inappropriate emails, strategic directions, tactical decisions, ….
The criminals threaten to publish these documents within a short period of time as a way to increase the pressure to pay the ransom. Now even an organisation that has good backups may want to pay the ransom.
Actually finding content that the organisation might not want made public is a challenging natural language problem (although there is probably low-hanging fruit such as pornographic images). But, like the man (allegedly Arthur Conan Doyle) who sent a telegram to his friend saying “Fly, all is discovered” (The Strand, George Newnes, September 18, 1897, No. 831 – Vol. XXXII) and saw him leave town, it might not be necessary to specify which actual documents will be published.
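As a toy example of that low-hanging fruit, a simple keyword score (keywords and filenames invented) can triage a pile of exfiltrated documents; anything genuinely sensitive needs far more sophisticated natural language analysis than this.

```python
# Naive triage: rank documents by occurrences of (invented) sensitive phrases.
SENSITIVE = ("confidential", "settlement", "do not distribute", "resign")

def score(text):
    t = text.lower()
    return sum(t.count(phrase) for phrase in SENSITIVE)

docs = {
    "minutes.txt": "Confidential: the settlement terms must not leak.",
    "menu.txt": "Tuesday: soup of the day.",
}
ranked = sorted(docs, key=lambda name: score(docs[name]), reverse=True)
print(ranked)   # documents most worth threatening to publish come first
```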