The perennial excuse for our dismal performance in cyber security keeps resurfacing. Some “experts” state that 95% of cyber security breaches occur due to human error, i.e., not following the recommended procedures. There’s a sleight of hand in these statements: many breaches involve human error, but do not occur because of that error. While the 95% number might be suspect, the real point is different: even following all the “recommended security procedures” will not protect our systems from cyber attacks.
It’s true that attackers often exploit users’ mistakes. The reason is simple and obvious – human errors make it easier to penetrate a system. In effect they represent a shortcut for an attack, but by no means do they eliminate the many other ways in. Why would an attacker take a more complicated route if he can use a shortcut?
Of course, users’ awareness of security is not common or comprehensive. This was vividly demonstrated by one very important Government agency not that long ago. Its board, after a thorough (and expensive) “expert” study, mandated that employees use a six-letter password instead of the old and “insecure” four-letter one.
This is a pretty pathetic solution, but the much bigger question is: do the users really need to follow or even know complicated procedures? The answer is: no, not at all.
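A quick back-of-the-envelope calculation shows just how pathetic. The figures below are illustrative assumptions, not details from the incident: a 26-character lowercase alphabet and an offline guessing rate of ten billion guesses per second, which is within reach of commodity hardware.

```python
# Illustrative sketch: brute-force keyspace for lowercase-letter passwords.
# Both the alphabet size and the guessing rate are assumptions for the
# sake of the arithmetic, not facts from the agency's policy.

RATE = 10_000_000_000  # assumed guesses per second for an offline attack

for length in (4, 6):
    keyspace = 26 ** length          # total possible passwords
    seconds = keyspace / RATE        # worst-case time to try them all
    print(f"{length}-letter password: {keyspace:,} combinations, "
          f"exhausted in ~{seconds:.6f} s")
```

Under these assumptions, going from four letters to six moves the exhaustive-search time from a few microseconds to a few hundredths of a second – both effectively instantaneous.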
Indeed, cyberspace presents us with a wonderful opportunity to build very user-friendly, effective security systems. It’s quite possible to build cyber security systems that would be extremely strong, even mathematically unhackable, that would require the user only to select the party he is going to communicate with, and then to indicate “secure.” No other security-related actions would be needed. This is very different from our current security technology, which is based on concepts of physical space, where the weakest link in the security chain is the human factor. But up until now we have failed to take advantage of this great property of cyberspace.
If, as is claimed, our cyber security misery is a “people” problem, this is true only in a very narrow sense. It’s not the users who are the problem; the problem belongs to the people who design and build our worthless cyber security systems.
The current anti-encryption political push by a choir of government bureaucrats is picking up steam and has lately been joined by the head of the British MI5. The usual scarecrow of terrorism is invoked and used bluntly in public statements that border on unabashed propaganda. I did not want to write about it, but what is going on is just too much to take. The real goal of the whole campaign is suspect, so it’s worth taking a closer look at the issues involved.
Point one – ideological: We view ourselves as a democracy. With that in mind we need to understand that encryption has existed for at least four thousand years. During that time most rulers were ruthless tyrants, and for all of them the #1 priority was to protect their rule. But even they did not crack down on private encryption – because it’s not practical (see Point three below), and they could not enforce it anyway. We, on the other hand, are facing a bunch of bureaucrats demanding the practical end of meaningful private encryption. How can a democracy impose more restrictions on its citizens than a country suffering under the rule of a tyrant?
Point two – technical: During all these thousands of years encryption algorithms have been consistently and quickly cracked by experts, usually employed by the government. Only a very few encryption algorithms withstood scrutiny for a few years, and those strong algorithms were developed by government experts and have always been well outside the reach of the general public at the time. Contrary to popular belief, all commercially available algorithms have been cracked very quickly after their introduction. Governments have traditionally been very shy about disclosing this. The situation is no different now. If a target used commercially available encryption algorithms, its communications were quickly cracked. So, what is the technical difference in the current situation? The simple answer is the sheer volume of information passing through the Internet. Individual communications can be cracked, but not the entire Internet traffic. That’s what the government bureaucracy is after: the ability to read ALL the traffic, i.e., all of our communications.
Point three – practical: the purpose of encryption is to assure the privacy of communications. There are many ways to do this other than encryption. One vivid example: during the years we were hunting Bin Laden, he did not use the Internet at all; he used messengers. He could just as well have used the regular mail. Furthermore, it’s well known that the 9/11 terrorists were communicating over regular phones, but in Aesopian language. For example, they referred to a terrorist act as a “wedding.” So are our bureaucrats next going to demand the right to read all our mail, or make a terror suspect of anyone who mentions a wedding over the phone?
Conclusion: The simple truth is that the Government can penetrate any commercial encryption available to terrorists. That is, if they actually go after terrorists. However, they are now demanding the right to go after everyone, mostly law-abiding citizens. If that demand is denied, there’s still nothing to prevent them from going specifically after terror suspects.
The moral here is pretty straightforward: if we call ourselves an uncorrupt democracy, we should be very careful about giving our bureaucrats too much power, inasmuch as they want more power than the tyrants of history could get. Furthermore, the bigger danger here is that losing civil rights is a very slippery slope.
For quite some time I’ve been puzzled by the alleged Chinese hacking of our databases. I could understand if they hacked our advanced research and development – that would save them time, effort and money. But why the databases? Then it dawned on me: it’s a savvy business strategy.
We routinely encounter problems with our databases. One organization can’t find our file, another somehow has the wrong information about us, and all too often they simply can’t get their act together, and we see classic cases of the left hand not knowing what the right one is doing. The pre-9/11 non-sharing of intelligence is a good illustration. In other words, we have a somewhat messy general situation with our databases; we’re used to taking this in stride; and we just sigh when we have to deal with some organization that accuses us of something we aren’t guilty of.
The Chinese understood the problem, but they just never got used to it. For many centuries they had a much bigger population than other countries, but somehow they always managed to know exactly who was who, who was related to whom, and what he or she was doing.
So naturally they wanted to have the same level of knowledge about the rest of the world. To their dismay, in the US they found disorganized databases and mismatching records. So they had to process all that information to make sense of it for themselves. And suddenly they saw a perfect business opportunity: they would develop a gigantic and very efficient database of the US, and then sell this data back to us piecemeal, retail. This would give them full and exact knowledge of the US, and the US would pay for the project, with a significant profit for the Chinese. For us this would be a very valuable service, a kind of involuntary outsourcing where we (both the Government and the private sector) can get relevant and reliable data at a modest price. Makes perfect business sense.
This approach has a special bonus for the US Government: when buying data abroad they won’t have to deal with privacy restrictions imposed by the US Constitution and constantly debated by Congress. The logic is impeccable: we bought it abroad, and if the Chinese know it, we are entitled to know what they know about us.
The recently discovered vulnerability in the Android operating system that affected 1 billion smartphone users (corrected to a mere 950 million, according to the phone manufacturers) followed a typical path:
a) The next gaping security hole is discovered by researchers, who alert the manufacturers;
b) The manufacturers make a patch for future buyers;
c) The manufacturers and service providers do nothing to help or even alert the affected users;
d) The researchers lose patience and publicly disclose their discovery of the flaw;
e) The manufacturers report that they “fixed the glitch within 48 hours,” and keep quiet about the customers affected.
The frustrating part of this all too familiar pattern is that it ignores the victims – the customers who already bought their phones. These customers were assured by the manufacturers’ marketing and sales people at the time of purchase that the product (a phone in this case) is very secure and is equipped with a top-notch security system – so their privacy is assured. Software patches like the one in question are very easy to incorporate into new phones. However, it would cost money to fix the defective products already out there, and this seems to deter the companies from making the fix.
But the most interesting aspect of the situation is not what the manufacturers say, but rather what they don’t say. It should be understood that the vulnerability discovered presents not one problem but two. One is that the phones without the patch can be hacked at some point in the future. The other is that the phones already hacked are under the hackers’ control. So the most important question is: can that control be reliably taken away from the hacker and returned to the customer? The manufacturers acknowledge the simpler first problem, but quietly ignore the existence of the second, much bigger one.
In practical terms, even if the fix is installed on an affected phone, the real question is: does it neutralize the effect of the hack? In other words, if my phone was hacked, the perpetrators have established control over it. Does the fix eliminate that control? A pretty safe bet here is that it does not. The fix just prevents another hack using the same method. But in that case, what’s my phone worth now that I can no longer assume my privacy, or the security of my financial transactions? It looks like the manufacturers may not be complying with the implied warranty laws. At the very least this is a priority research problem for our increasingly numerous legal experts.
Every aspect of these issues is fast approaching a real-world test – especially urgently given the proliferation of smartphone-based payment systems like Apple Pay and Google Wallet.
The public’s perception of our cyber vulnerability hasn’t so far been really tested. We’ve been lucky to get away with our critical infrastructure practically undefended. However, sooner or later we’ll run out of luck, and we’ll get hurt – badly.
The major reason for our apathy in the face of cyber attacks is probably the tendency to react mostly to something that is tangible. But as yet we haven’t had much in the way of tangible losses to cyber attacks: banks have been able to reimburse money lost in their hacked customers’ accounts, stolen identities have generally been somewhat restored, we aren’t aware of anyone being killed due to the Office of Personnel Management (OPM) hack – in other words, a small part of the population has experienced inconveniences, but not tangible losses.
The latest example of a cavalier attitude to this danger comes (surprise, surprise!) from the auto industry. Cars have been hijacked in experiments for quite some time, but these were limited to the Government and some private labs. Car manufacturers have surely been aware of the potential problem. Indeed, Fiat Chrysler was notified of the successful hijack of one of their cars quite some time ago, but only after the media exposed it did they recall 1.4 million cars. And of course the vulnerability isn’t limited to Fiat Chrysler – it’s far wider.
This situation is fast heading for a test. Significant efforts are being made to develop self-driving vehicles, and the results are already very promising. However, the parties involved seem oblivious to the fact that a self-driving car is vulnerable to cyber attacks well beyond the kind that affected the Fiat Chrysler vehicles. The latter can be somewhat mitigated by separating the car’s communication equipment from its primary performance systems. But clearly this isn’t going to be the case with self-driving cars – outside interaction is going to be a major factor in their safety features. Those very safety features can themselves be manipulated by a malicious party, with catastrophic results. There would be lost lives along with many other unpleasant consequences. Hacking self-driving vehicles would likely become a favorite weapon of focused assassins (not to mention unfocused crazies who would prefer it to random shooting in movie theaters). That would certainly be a very real test of our tolerance for remaining undefended in cyberspace.
The obvious conclusion that begs to be considered here is that the entities that are working on the self-driving car had better solve the cyber security problem now instead of finding out that their technological marvels cannot be certified for safety reasons. Or worse, pushing the certification through and then facing the inevitable consequences.
The somewhat belated (just 7 months!) timid admission by the Federal Government that security clearance files of Government employees and contractors had been hacked was hardly a shocking surprise. Media discussion largely focused on the breadth of the security breach – the number hacked started at 4 million and pretty quickly grew to 21 million. But the depth of the security breach was not really addressed. It is, however, a major aspect of the loss.
Unbeknownst to most of the general public, and ironically even to many of those Government employees who are actually responsible for this security breach, all security files are not created equal. At one end of the spectrum are security clearance files of personnel whose proximity to Government secrets is very limited and often only symbolic, such as facilities maintenance workers. At the other end of the spectrum are people with security clearances well beyond the proverbial Top Secret level, those who are entrusted with the deepest and most sensitive Government secrets, such as nuclear arms and Government communications security experts.
Candidates who apply for a Government job are routinely asked to sign a privacy release allowing the Government to conduct an investigation into the applicant’s background that would otherwise be in violation of their privacy rights. Of course, applicants usually sign the form without looking at it too closely. But even the lowest level of security clearance is far more invasive than your bank’s “thorough” investigation before granting you a loan. At the low end there’s a cursory search to make sure that the applicant has no significant criminal offenses and is not involved in known criminal organizations. For a high-end clearance it’s a totally different story. A thorough investigation may cover numerous connections, including relatives and present and past personal friends, hobbies, club affiliations, financial transactions over a significant period of time, and so on. Present and past spouses, “close friends” and partners are definitely of interest. Investigations may include interviews with neighbors and government informants, and perhaps even some form of surveillance.
Many who are subjected to such investigation don’t realize how much of their personal privacy they surrender to the Government, but surrender they do, and some of them find that out only if things turn sour in their relations with the Government. However, they all at least implicitly rely on the Government to guarantee the security of their private information.
The OPM hack shattered that expectation. If the hack was done, as alleged, by the Chinese, it is almost certain that the Russians had done it before. Moreover, whichever intelligence service has the files, they may well trade some of them in exchange for other intelligence. Needless to say, those in supersensitive jobs include clandestine intelligence operatives – in the DEA, the CIA, and Special Forces – and this situation puts their lives in real and immediate danger.
As a practical matter, those affected should demand to know exactly what information was stolen. Classified as it may be, it is not classified anymore. After all, if the Chinese know something about me, I am certainly entitled to know what they know too.
One more unanswered but very important question: do those files contain personal biometrics beyond fingerprints (leaking which is bad enough) – such as DNA and retinal scans? I haven’t seen anyone asking that.
The New York Stock Exchange is down and United Airlines is not flying for half a day. Naturally, everyone’s wondering: what’s going on? The public wants an explanation from the FBI and the affected institutions, and fast.
The response is quite astounding.
Voila! Cyber security problem solved: from now on all hacks are to be called “computer glitches.” United and the NYSE computer network outages are only the latest glaring examples of a classic bureaucratic solution to the problem – defense by semantics.
This “expert” explanation means two things: a) it’s a fairytale designed for little children and big fools; and b) the “cyber security experts” of the affected entities and the FBI probably have no clue as to what happened. That’s a very good indication of an expertly executed cyber attack – the effect is obvious but the attack has not even been detected – and forget figuring out that “the Chinese” or “the Russians” did it. It is simply unfathomable that programs as critical as these would contain such “glitches.” Such programs are written and implemented by highly qualified programmers and software engineers and are tested numerous times under all imaginable circumstances. Furthermore, they’ve been running for quite some time with no “glitches” detected, and all those systems have built-in redundancies precisely in case of a “glitch.”
The “glitch” explanation is very convenient for those who have failed to provide cyber security for this country.
All these events are a clear indication of our massive cyber security failure. This failure was inevitable. On the one hand, in the last quarter century widely known cyber attack technology has advanced dramatically, and is becoming increasingly widespread. What only a few government agencies in the world could do a while ago can now be done by a lot of people, often by mere script kiddies, and certainly by our sworn enemies, who aren’t restricted in what they can attack – the more damage the better. On the other hand, our cyber security has not advanced at all for the same quarter of a century. This is the inconvenient truth, despite all the marketing and politically soothing statements from the entrenched cybersecurity establishment.
It is really sad that people responsible and paid for providing our cyber security are getting away with this cyber defense by semantics. No doubt the next step is to make the term “hacking” politically incorrect and make everyone use “computer glitch” instead. When that fairy tale runs out, they’ll think of another term. That’s assuming our computers are still functioning.
The just released report by 13 well known cryptographers opposing the US and British governments’ sweeping demand for encryption keys has directly addressed the increasing threat of the government’s insatiable thirst for power. Any government objection to this report is bound to be disingenuous.
However, there’s one more angle that begs for further exposure. The overall issue is not a scientific or technical question; it’s an ideological one. The frequently heard loud claim that we inevitably have to give up our privacy so the government can protect us is flatly untrue and hypocritical. It is indeed technologically feasible today to build a system where every one of us would wear an irremovable collar equipped with cameras, microphones, and GPS that would communicate our location and immediate surroundings every instant, “totally securely,” to some highly trusted government agency. The government may argue that a) it would only access this information upon some court order; and b) it would solve a lot of crimes and save a lot of lives. True, such a system would make the police’s job very easy, would solve a lot of crimes, and would save a lot of lives. But the real question is: do we want to live in that kind of society? In the American spirit the politest answer would be, “Hell, no!” And as always with this kind of hypothetical system, criminals would quickly find a way to neutralize it, leaving us with a situation in which only law-abiding citizens would be subject to this massive electronic prison.
Even as we see deeper and deeper assaults on our civil rights and liberty in the manner described above, the government is more than a little shy talking about other intelligence-gathering techniques that require more skill than a slightly trained operator just pushing a few computer keys. These methods are well known among professionals, they have existed for a long time, and can be applied to any target. The drawback of course is that they are less convenient for the operators, require greater skills, and do not include a global bulk collection of information on everyone.
Well, maybe this is just what we, the people, need and want.
There are three points that radically distinguish the US cybersecurity industry from any other.
One – every cybersecurity company seems to be the self-declared “world leader in cybersecurity.” This can be easily verified by visiting their websites. I haven’t been able to detect any #2. Surprisingly, comedians and cartoonists don’t explore this hilarious situation.
Two – the industry as a whole is de facto exempted from any product liability, even any implied warranty liability. This is a truly unique break that the cybersecurity industry has been getting away with for over thirty years. In the US every manufacturer is obligated at the very least to make sure that its products are reasonably fit for their intended uses. For example, a car manufacturer must make sure that its cars are at least drivable and can deliver a user from point A to point B. A hammer manufacturer has to make sure that its hammer handles do not break, at least not before you bring one home. The Uniform Commercial Code (UCC) is very explicit about this, and there have been millions of court cases where this principle has been upheld.
But not for the cybersecurity industry. Every firewall gets hacked even before it’s delivered to the first customer. On a daily basis we hear of “big” cases in which one or another organization has been hacked, with huge losses for millions of people. And don’t forget that only a small fraction of hacks is detected. We never hear about the undetected “big” cases and the thousands of smaller ones. But nobody is held responsible, despite many billions of dollars in losses incurred by individuals, companies, and governments. The Government does promise to prosecute hackers – if they can catch them.
The interesting twist here is that every company assures its customers that their personal information and the money in their accounts are secure. Ironically, they assure their customers before they are hacked, while they are being hacked, and after they’ve been hacked. Somehow we listen to them and nod in agreement.
Three – the cybersecurity industry gets countless billions of our dollars for research and development of cybersecurity products. In fact, we spend more on this in a year than the entire cost of the Apollo program that put a few good men on the Moon. Amazingly, these funds seem to be going into a black hole. Nothing comes back. No product, no results, no responsibility for wasted money – the taxpayers’ money.
The most remarkable thing about this is that we, the people, have put up with this situation for over thirty years.
On a positive note: this industry should be a bonanza for investors – assured high returns with no risk. Stockbrokers should take note.
In the latest example of the bureaucracy’s detachment from reality, the White House has just announced an executive order to sanction cyber attackers. Quite apart from the checkered record of effectiveness of other sanctions, cyber sanctions are very difficult even to fathom.
For starters, who are we going to sanction? The whole history of cyber attacks clearly shows that determining the real source of cyber attacks with any degree of certainty is extremely difficult. Out of the thousands of attackers around the world we are able to identify only a handful in a year. Furthermore, those identified are not the most dangerous ones. The best we can do is to say that an attack came from a certain country.
So, who are we going to sanction — and how?
An individual attacker? Most of the identified hackers are not the most dangerous ones, often just script kiddies. But even then, how are we going to sanction them? Prohibit a teenager from Ukraine from entering the United States? He doesn’t have the money to come here anyway. Bar him from McDonald’s? He’ll find another place to get a hamburger. Prohibit a cyber dude from Nigeria from exporting oil? He’s unlikely to have any. Deny a sale of an F-16 to a company in Croatia? They probably don’t have a hangar to keep it in anyway.
A country? Unlikely. It’s a well known fact that most attacks, and definitely the most damaging ones, are “bounced” many times through “innocent” computers in other countries before being sent to the target. Given that, we’d rapidly end up sanctioning the rest of the world. While this would become perpetual fodder for the press, it would be unlikely to have any real impact on cyber attacks. If anything, it would even increase them, once other countries start sanctioning us in retaliation. And chances are their sanctions would be more damaging than ours.
Besides, cyber attacks are already a felony anyway. That should be sanction enough — though it doesn’t seem to work.
The big question is, do we really understand what we’re doing in cybersecurity? It doesn’t seem so.
A better solution may be to sanction our own bureaucracy.