A while ago I published a post, “A hack is forever,” explaining that a competent hack is extremely difficult to eradicate from a cyber system – there is no certainty that the system is really clean. However, there is a flip-side aspect of this in cyberspace that is not commonly understood: by way of dubious consolation, the hacker cannot be certain that he really got away with the crime.
Cyberspace is an information and communications space. In essence, we don’t really care what media our information is stored on, we care about the utility aspects, such as the efficiency of the storage, how quickly and conveniently information can be retrieved, and so on. Similarly, we don’t really care what communications channels we use, we care about our communications’ speed, reliability, security, etc.
Cyberspace has one very significant property: information cannot be destroyed. Just think about it. We can destroy only a copy of the information, i.e. we can destroy a physical carrier of the information, such as a floppy, a thumb drive, a letter, or a hard drive (often that’s not an easy task either). However, we cannot destroy the information itself. It can exist in an unknown number of copies, and we never really know how many copies of a particular piece of information exist. This is particularly true given the increasing complexity of our cyber systems – we never really know how many copies of that information were made during its processing, storage, and communication, not to mention the very possible interception of the information by whoever happens to be listening on our insecure Internet. Such an intercept can open an entirely separate and potentially huge area in cyberspace for numerous further copies of the information.
Back to the consolation point: cyber criminals of all sorts can never be sure that there is not a copy somewhere of whatever they have done. That copy can surface, someday, usually at the most inopportune time for the perpetrator.
One practical aspect of this is that Congress perhaps should consider increasing the statutes of limitations for cyber crimes, or crimes committed via cyberspace.
The perennial excuse for our dismal performance in cyber security keeps showing up again and again. Some “experts” state that 95% of cyber security breaches occur due to human error, i.e. not following the recommended procedures. There’s a sleight of hand in these statements in that many breaches include human error, but do not occur due to that error. While the 95% number might be suspect, the real point is different: even following all the “recommended security procedures” will not protect our systems from cyber attacks.
It’s true that attackers often exploit users’ mistakes. But the reason is simple and obvious – human errors make it easier to penetrate a system. In effect they represent a shortcut for an attack, but closing them by no means eliminates the many other ways in. Why would an attacker take a more complicated route if he can use a shortcut?
Of course, users’ awareness of security is not common or comprehensive. This was vividly demonstrated by one very important Government agency not that long ago. Its board, after a thorough (and expensive) “expert” study, mandated that employees use a six-letter password instead of the old and “insecure” four-letter one.
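To see why bumping a four-letter password to six is cosmetic rather than a fix, a back-of-the-envelope calculation helps. The sketch below assumes all-lowercase passwords and a hypothetical offline guessing rate; both figures are illustrative assumptions, not numbers from the original story.

```python
# Back-of-the-envelope: search-space sizes for all-lowercase passwords.
ALPHABET = 26  # lowercase letters only


def keyspace(length: int) -> int:
    """Number of possible passwords of the given length."""
    return ALPHABET ** length


four_letters = keyspace(4)  # 456,976 possibilities
six_letters = keyspace(6)   # 308,915,776 possibilities

# At an assumed offline rate of one billion guesses per second, even the
# "improved" six-letter space is exhausted in well under a second.
GUESSES_PER_SEC = 1_000_000_000
for n, space in ((4, four_letters), (6, six_letters)):
    print(f"{n} letters: {space:,} passwords, "
          f"{space / GUESSES_PER_SEC:.4f} s to exhaust")
```

Either way, the attacker who bypasses the login prompt entirely never counts passwords at all, which is the post's larger point.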
This is a pretty pathetic solution, but the much bigger question is: do the users really need to follow or even know complicated procedures? The answer is: no, not at all.
Indeed, cyberspace presents us with a wonderful opportunity to build very user-friendly effective security systems. It’s quite possible to build cyber security systems that would be extremely strong, even mathematically unhackable, that would require the user only to select the party he is going to communicate with, and then to indicate “secure.” No other security-related actions would be needed. This is very different from our current security technology based on concepts of physical space, where the weakest link in the security chain is the human factor. But up until now we have failed to take advantage of this great property of cyberspace.
If, as is claimed, our cyber security misery is a “people” problem, this is true only in a very narrow sense. It’s not the users who are the problem; the problem belongs to the people who design and build our worthless cyber security systems.
The somewhat belated (just 7 months!) timid admission by the Federal Government that security clearance files of Government employees and contractors had been hacked was hardly a shocking surprise. Media discussion largely focused on the breadth of the security breach – the number hacked started at 4 million and quickly grew to 21 million. But the depth of the security breach was not really addressed. It is, however, a major aspect of the loss.
Unbeknownst to most of the general public, and ironically even to many of those Government employees who are actually responsible for this security breach, all security files are not created equal. At one end of the spectrum are security clearance files of personnel whose proximity to Government secrets is very limited and often only symbolic, such as facilities maintenance workers. At the other end of the spectrum are people with security clearances well beyond the proverbial Top Secret level, those who are entrusted with the deepest and most sensitive Government secrets, such as nuclear arms and Government communications security experts.
Candidates who apply for a Government job are routinely asked to sign a privacy release allowing the Government to conduct an investigation into the applicant’s background that would otherwise be in violation of their privacy rights. Of course, applicants usually sign the form without looking at it too closely. But even the lowest level of security clearance is far more invasive than your “thorough” bank investigation before granting you a loan. At the low end there’s a cursory search to make sure that the applicant has no significant criminal offenses and is not involved in known criminal organizations. For a high-end clearance it’s a totally different story. A thorough investigation may cover numerous connections, including relatives and present and past personal friends, hobbies, club affiliations, financial transactions over a significant period of time, and so on. Present and past spouses, “close friends” and partners are definitely of interest. Investigations may include interviews with neighbors and government informants, and maybe even one or another form of surveillance.
Many who are subjected to such investigation don’t realize how much of their personal privacy they surrender to the Government, but surrender they do, and some of them find that out only if things turn sour in their relations with the Government. However, they all at least implicitly rely on the Government to guarantee the security of their private information.
The OPM hack shattered that expectation. If the hack was done, as alleged, by the Chinese, it is almost certain that the Russians had done it before. Moreover, whichever intelligence service has the files, they may well trade some of them in exchange for other intelligence. Needless to say, among all those in supersensitive jobs are clandestine intelligence operatives, including the DEA, CIA, and Special Forces, and this situation puts their lives in real and immediate danger.
As a practical matter, those affected should demand to know exactly what information was stolen. Classified as it may be, it is not classified anymore. After all, if the Chinese know something about me, I am certainly entitled to know what they know too.
One more unanswered but very important question: do those files contain personal biometrics beyond fingerprints (leaking which is bad enough) — such as DNA, and retinal scans? I haven’t seen anyone asking that.
Media attention to cyber attacks can be divided into two categories: the endless stream of examples of institutions hacked, and cautious descriptions of potential (and very real) horrors of our vital systems being attacked.
But there’s one area of cyber attacks that conspicuously has received little or no media attention: political hacking.
Practically all our electronic systems are vulnerable to cyber attacks to varying degrees. Political systems rank close to the high end of vulnerability, and indeed most of them are virtually undefended against even a low-skilled hacker.
It’s pretty obvious that these systems are extremely valuable targets for some political activists, and certainly for the aggressive ones. It’s not too far-fetched to assume that some of these activists, whatever their focus, either possess hacking expertise themselves or have access to guns for hire, whether for money or some other consideration.
This hacking potential may impact our lives more than we realize. It’s not technically difficult to hack into a voting system and “adjust” the outcome to the hacker’s liking. Such hacking targets can be at any level – from a poll on a local ordinance to allow dogs on the beach to the election for a head of state. Importantly, such interference is not easy to detect, and even successful detection takes a lot of time. We can only imagine the political mess if, a couple of months after an election, it’s determined that a group of teenagers (or some mysterious “Russian or Chinese hackers”) materially changed the outcome, and the wrong people were sworn in to their new hard-earned and increasingly expensive jobs.
On a subtler, indirect note, such “adjustments” can be made to the results of public opinion polls, manipulating public opinion in a very effective way.
While it’s understandable that nobody wants to discuss this classic case of a very hot potato, nevertheless we have to realize that we ignore this threat at our own peril.
There are three points that radically distinguish the US cybersecurity industry from any other.
One – every cybersecurity company seems to be the self-declared “world leader in cybersecurity.” This can be easily verified by visiting their websites. I haven’t been able to detect any #2. Surprisingly, comedians and cartoonists don’t explore this hilarious situation.
Two – the industry as a whole is de facto exempted from any product liability, even any implied warranty liability. This is a truly unique break that the cybersecurity industry has been getting away with for over thirty years. In the US every manufacturer is obligated at the very least to make sure that its products are reasonably fit for their intended uses. For example, a car manufacturer must make sure that its cars are at least drivable and can deliver a user from point A to point B. A hammer manufacturer has to make sure that its hammer handles do not break, at least not before you bring one home. The Uniform Commercial Code (UCC) is very explicit about this, and there have been millions of court cases where this principle has been upheld.
But not for the cybersecurity industry. Every firewall gets hacked even before it’s delivered to the first customer. On a daily basis we hear of “big” cases in which one or another organization has been hacked with huge losses for millions of people. And don’t forget that only a small fraction of hacks is detected. We never hear about the undetected “big” cases and thousands of smaller ones. But nobody is held responsible despite many billions of dollars in losses incurred by individuals, companies, and governments. The Government does promise to prosecute hackers – if it can catch them.
The interesting twist here is that every company assures its customers that their personal information and the money in their accounts is secure. Ironically, they assure their customers before they are hacked, while they are being hacked, and after they’ve been hacked. Somehow we listen to them and nod in agreement.
Three – the cybersecurity industry gets countless billions of our dollars for research and development of cybersecurity products. In fact, we spend more on this in a year than the entire cost of the Apollo program that put a few good men on the Moon. Amazingly, these funds seem to be going into a black hole. Nothing comes back. No product, no results, no responsibility for wasted money – the taxpayers’ money.
The most remarkable thing about this is that we, the people, have put up with this situation for over thirty years.
On a positive note: this industry should be a bonanza for investors – assured high returns with no risk. Stock brokers should take note.
In the latest example of the bureaucracy’s detachment from reality, the White House has just announced an executive order to sanction cyber attackers. Leaving aside the checkered record of effectiveness of other sanctions, cyber sanctions are very difficult even to fathom.
For starters, who are we going to sanction? The whole history of cyber attacks clearly shows that determining the real source of cyber attacks with any degree of certainty is extremely difficult. Out of the thousands of attackers around the world we are able to identify only a handful in a year. Furthermore, those identified are not the most dangerous ones. The best we can do is to say that an attack came from a certain country.
So, who are we going to sanction — and how?
An individual attacker? Most of the identified hackers are not the most dangerous ones, often just script kiddies. But even then, how are we going to sanction them? Prohibit a teenager from Ukraine from entering the United States? He doesn’t have the money to come here anyway. Bar him from McDonald’s? He’ll find another place to get a hamburger. Prohibit a cyber dude from Nigeria from exporting oil? He’s unlikely to have any. Deny a sale of an F-16 to a company in Croatia? They probably don’t have a hangar to keep it in anyway.
A country? Unlikely. It’s a well-known fact that most of the attacks, and definitely the most damaging ones, are “bounced” many times through “innocent” computers in other countries before being sent to the target. Given that, we’d rapidly end up sanctioning the rest of the world. While this would become perpetual fodder for the press, it would be unlikely to have any real impact on cyber attacks. If anything, it might even increase them, once the sanctioned countries start sanctioning us in retaliation. And chances are their sanctions would be more damaging than ours.
Besides, cyber attacks are already a felony anyway. That should be sanction enough — though it doesn’t seem to work.
The big question is, do we really understand what we’re doing in cybersecurity? It doesn’t seem so.
A better solution may be to sanction our own bureaucracy.
Once in a while we see a common cyber call to arms: “Let’s use data encryption and, voila, our problems will be over.” A typical example of this is the AP article http://cnsnews.com/news/article/no-encryption-standard-raises-health-care-privacy-questions.
This is a very common misconception. Encryption per se does not protect against hacking. Surely, encrypted files look impressive, with their very long strings of seemingly random characters. It must be mind-boggling for a casual observer to imagine that anyone could actually decipher that without the secret key.
However, the reality is vastly different.
The strength of encryption rests on two main ingredients – the encryption algorithm and the secret key. Most encryption algorithms, and certainly all commercially available ones, are well known. They have been researched, and solutions – the ability to decrypt them without the secret key – have been found for most of them. The only undefeated algorithm so far remains the so-called “one-time pad,” where the key is used only once. But even that algorithm’s strength rests on the quality and security of the key – issues that are far from trivial.
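The one-time pad itself is simple enough to sketch. The XOR implementation below is a minimal illustration (not production code, and not from the original post); it shows both the scheme and why the classic deployment mistake, reusing the pad, breaks it.

```python
import secrets


def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad via XOR. Information-theoretically secure only if the
    key is truly random, at least as long as the message, kept secret,
    and never reused. XOR is its own inverse, so the same function both
    encrypts and decrypts."""
    assert len(key) >= len(data), "pad must cover the whole message"
    return bytes(d ^ k for d, k in zip(data, key))


msg1 = b"attack at dawn"
key = secrets.token_bytes(len(msg1))  # key quality/secrecy is the hard part
ct1 = otp(msg1, key)
assert otp(ct1, key) == msg1  # round-trip: decryption recovers the message

# Reusing the pad is fatal: XORing two ciphertexts cancels the key
# entirely, leaking the XOR of the two plaintexts to any eavesdropper.
msg2 = b"attack at dusk"
ct2 = otp(msg2, key)
leaked = bytes(a ^ b for a, b in zip(ct1, ct2))
assert leaked == bytes(a ^ b for a, b in zip(msg1, msg2))
```

Note that the code never solves the hard problems the text names: generating a truly random pad and delivering it securely to the other party.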
However, the main practical problem with encryption is the distribution system for the key. As in the example of a health system cited above, we are talking about a massive database with many millions of records. Sure, it’s not too difficult to encrypt all that data. But then what? The database has many legitimate users, sometimes thousands, and each one of them must have the secret key. It’s not difficult to obtain the key, one way or another, from at least one of them. Such a single breach would defeat the whole encryption scheme. I’ve often heard someone proudly declaring at a party, “I encrypt all data files in my computer.” Sometimes I will casually ask, “But where do you keep your key?” The answer invariably is, “In the computer.” Usually that person doesn’t understand that the key in his computer is also available to anyone who bothers to hack into his computer.
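The party anecdote is easy to model. In the sketch below the “disk” is just a dictionary, the cipher is a toy XOR stream, and all names are invented for illustration; the point is only that the key sits on the same machine as the ciphertext, so whoever reads the disk reads everything.

```python
import secrets

disk = {}  # stand-in for the victim's file system

# The owner "encrypts all data files" (toy XOR stream for illustration)...
key = secrets.token_bytes(32)
data = b"all my sensitive records"
disk["records.enc"] = bytes(b ^ key[i % len(key)]
                            for i, b in enumerate(data))
# ...but keeps the key on the same computer, right next to the data.
disk["secret.key"] = key

# An intruder who hacks the computer reads both files and undoes everything:
stolen_key = disk["secret.key"]
recovered = bytes(b ^ stolen_key[i % len(stolen_key)]
                  for i, b in enumerate(disk["records.enc"]))
assert recovered == data
```

The same logic scales up to the shared database: with thousands of legitimate key holders, the attacker only needs to reach any one of them.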
All in all, data encryption is a good concept, but as practically deployed in databases with many users it can only protect against middle-schoolers. It would offer marginal protection against smart high-schoolers, and it would certainly be fruitless against professional cyber attackers.
Encryption per se would be just another expensive exercise in wishful thinking. It should be clearly understood: ENCRYPTION PER SE DOES NOT PROTECT AGAINST HACKING.
The CNN story citing testimony by Admiral Michael Rogers, head of U.S. Cyber Command, to the House Select Intelligence Committee on November 20 sounded like shocking news. He stated that China can take down our power grid. http://www.cnn.com/2014/11/20/politics/nsa-china-power-grid/index.html
Shocking as it may be, if this is still “news,” surprise, surprise — it’s been known to everyone who was anyone in cyber security for over 25 years. First it was just the Russians, then the Chinese, then some vague criminals acting on behalf of “nation-states” were gradually added to the list.
Never mind the Russians and the Chinese – they also both have enough nuclear weapons to kill every squirrel in America. What is really troubling is the cyber security trend. Our cyber defensive capabilities have hardly improved for over a quarter-century. However, hackers’ attacking capabilities are improving constantly and dramatically. This is not a good equation — sooner or later these lines will cross. This means that a large number of unknown hackers will be able to take down our power grid and also decimate our power-intensive facilities, such as oil refineries, gas distribution stations, and chemical factories.
Now, think terrorists. They would be delighted to do exactly that, whether you kill them afterwards or not. This isn’t news, but it’s an increasingly troubling reality. We have very little time to cure our stone age cyber defensive technology. But that requires changing the current equation and making cyber defense inherently more powerful than the offense. That won’t happen until the doomed legacy password and firewall paradigms are abandoned and replaced by fundamentally different technologies.
Utter incompetence of high-level officials is not exactly a scarce phenomenon. However, it’s rarely displayed so vividly as it was by Troels Oerting, the head of Europol’s Cybercrime Center, in his recent interview with the BBC’s Tech Tent radio show.
Mr. Oerting proudly declared that international law enforcement just needs to target a “rather limited group of good programmers.” He went further, proudly stating “We roughly know who they are. If we can take them out of the equation then the rest will fall down.” Voila, easy and simple. Arrest the 100 known dudes and cybercrime disappears. He didn’t specify what it means to know “roughly” – you either know who they are or you don’t, and then you know exactly, not “roughly.”
The man obviously hasn’t a clue. The trouble is that he’s speaking for Europol and the EU. And the idea that the EU’s main cybercrime law enforcement unit assesses the cybercrime situation this way is truly troubling. It would simply mean that the cybercriminals don’t have much to worry about.
The reality is drastically different. There are many thousands of programmers around the world good enough to hack most of the attractive targets. Many of them, for one reason or another, are disappointed with their employment or personal situation. Given the current dire state of our cybersecurity, making a few bucks off easy targets is really tempting. This temptation looks even more attractive if the target is a rich bank or some large allegedly unethical company. This often satisfies the conscience of many of the hackers. The continuing deterioration of the European economy worsens the situation.
Add the “script kiddies” to the equation and Mr. Oerting’s job becomes even harder than he probably can envision in his worst nightmares. He should also know that really good programmers publish only their crumbs for the script kiddies – scripts they developed long ago. They keep their best stuff for themselves.
Furthermore, many of these off-the-grid programmers have their own very large botnets capable of performing rather sophisticated operations that they can offer to all sorts of customers as a service.
All in all, Mr. Oerting should urgently realize that he is mainly dealing with the mediocre cybercriminals who are not good enough to be stealthy. Really good “top-100” programmers don’t get caught. I wouldn’t be at all surprised if one of them, having read Mr. Oerting’s statements, would hack his next target through this top EU cyber cop’s computer, just to demonstrate the point.
Every day we read articles on cybersecurity and privacy referring to “backdoors.” This term needs some clarification. I’ve seen all sorts of explanations of the term and its origin, including even linking it to Internet pornography. While the current situation in cybersecurity is certainly reminiscent of pornography, the origin and nature of cyber backdoors is very different.
The term is borrowed from residential architecture and means just what it says. It’s not the supposedly well-protected “front door,” but a relatively obscure entrance for casual private use, commonly having weaker protection for the residents. In cyber systems it’s exactly that: a supposedly secret entry point supplementary to the main entry point to a system, granting simplified logon procedures with deeper access to those in the know.
And that’s where the real problem lies.
First of all, any additional entry point to a network inevitably weakens a system’s security. The more entry points there are the more difficult it is to arrange and manage security. So, point one here is that even the very fact that any backdoor exists automatically weakens the security of a network.
Secondly, simplified entry procedures for the backdoors always mean they have weaker security than the front doors. For example, it’s not uncommon to have a backdoor to a network that creates a shortcut around a stronger VPN (Virtual Private Network) system protecting a front door, with the backdoor protected by a firewall that is always more vulnerable. So, point two here is that the common setup of a backdoor weaker than the front door always compromises the system.
Now, what’s the rationale for creating backdoors? For hackers, it’s pure and simple: it allows perpetual deep and undetected access to the system. The only risk is that it can be discovered and eliminated. So what? The hacker can simply make a different backdoor. With the Government it’s a totally different story; they seem to think that if a company creates a backdoor for them it’s for the Government’s exclusive use. The problem is that if a backdoor exists it can be discovered and hacked by anybody.
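What a backdoor looks like in code can be sketched in a few lines. Every name and credential below is invented for illustration; no real product is being quoted. The point is that the “secret” entry is just a constant sitting in the shipped code, waiting to be found.

```python
from typing import Optional

# Hypothetical hardcoded backdoor; all names/credentials are invented.
FRONT_DOOR_USERS = {"alice": "correct-horse-battery-staple"}
_MAINTENANCE_KEY = "letmein"  # the "backdoor": simplified logon, deep access


def authenticate(user: str, secret: str) -> Optional[str]:
    # Backdoor check runs first: anyone who learns the maintenance key
    # (say, by scanning the shipped binary for string constants) gets
    # admin access, bypassing the front door entirely.
    if secret == _MAINTENANCE_KEY:
        return "admin"
    # Regular front-door check with per-user credentials.
    if FRONT_DOOR_USERS.get(user) == secret:
        return "user"
    return None


# The backdoor is "exclusive" only until someone else discovers it:
assert authenticate("mallory", "letmein") == "admin"
assert authenticate("alice", "correct-horse-battery-staple") == "user"
assert authenticate("eve", "guess") is None
```

A routine strings-style scan of the distributed code reveals the constant to any attacker, which is exactly the sense in which a backdoor, government-mandated or not, is just another attack surface.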
Believing that a backdoor is exclusive is fundamentally flawed. It’s as flawed as the wishful thinking in some government circles that they can develop a cyber security technology that they alone can hack. This is an arrogant assumption that historically has been defeated time and time again. You are never the smartest guy on the planet. Period.
So, in addition to all the other issues involved in the Government’s pursuit of backdoor data collection, the uncomfortable but obvious conclusion is that by requiring backdoors they further weaken the already weak security of our networks, making them easier prey for any attacker.