Tag Archives: computer security

A hack is forever, but so are its fingerprints

A while ago I published a post, “A hack is forever,” explaining that a competent hack is extremely difficult to eradicate from a cyber system – there is no certainty that the system is ever really clean. However, there is a flip side to this in cyberspace that is not commonly understood, and it offers a dubious consolation: the hacker can never be certain that he really got away with the crime.
Cyberspace is an information and communications space. In essence, we don’t really care what media our information is stored on, we care about the utility aspects, such as the efficiency of the storage, how quickly and conveniently information can be retrieved, and so on. Similarly, we don’t really care what communications channels we use, we care about our communications’ speed, reliability, security, etc.
Cyberspace has one very significant property: information cannot be destroyed. Just think about it. We can destroy only a copy of the information, i.e. a physical carrier of the information, such as a floppy, a thumb drive, a letter, or a hard drive (often not an easy task either). However, we cannot destroy the information itself. It can exist in an unknown number of copies, and we never really know how many copies of a particular piece of information exist. This is particularly true given the increasing complexity of our cyber systems – we never really know how many copies of that information were made during its processing, storage, and communication, not to mention the very possible interception of the information by some unknown party, given our insecure Internet. Such an intercept can open an entirely separate and potentially huge area of cyberspace for numerous further copies of the information.
Back to the consolation point: cyber criminals of all sorts can never be sure that there is not a copy somewhere of whatever they have done. That copy can surface, someday, usually at the most inopportune time for the perpetrator.
One practical aspect of this is that Congress perhaps should consider increasing the statutes of limitations for cyber crimes, or crimes committed via cyberspace.

Self-driving cars–the hacking factor

The public’s perception of our cyber vulnerability hasn’t really been tested so far. We’ve been lucky to get away with leaving our critical infrastructure practically undefended. However, sooner or later our luck will run out, and we’ll get hurt – badly.
The major reason for our apathy in the face of cyber attacks is probably the tendency to react mostly to something that is tangible. But as yet we haven’t had much in the way of tangible losses to cyber attacks: banks have been able to reimburse money lost in their hacked customers’ accounts, stolen identities have generally been somewhat restored, we aren’t aware of anyone being killed due to the Office of Personnel Management (OPM) hack – in other words, a small part of the population has experienced inconveniences, but not tangible losses.
The latest example of a cavalier attitude to this danger comes (surprise, surprise!) from the auto industry. Cars have been hijacked in experiments for quite some time, but those experiments were limited to the Government and some private labs. Car manufacturers have surely been aware of the potential problem. Indeed, Fiat Chrysler was notified of the successful hijack of one of its cars quite some time ago, but only after the media exposed it did the company recall 1.4 million cars. And of course the vulnerability isn’t limited to Fiat Chrysler – it’s far wider.
This situation is fast heading for a test. Significant efforts are being made to develop self-driving vehicles, and the results are already very promising. However, the parties involved seem oblivious to the fact that a self-driving car is vulnerable to far more than the kind of cyber attack that affected the Fiat Chrysler vehicle. That kind of attack can be somewhat mitigated by separating the car’s communication equipment from its primary performance systems. But that separation clearly isn’t possible with self-driving cars – outside interaction is going to be a major component of their safety features. Those very safety features can be manipulated by a malicious party, with catastrophic results. Lives would be lost, along with many other unpleasant consequences. Hacking self-driving vehicles could become a favorite weapon of focused assassins (not to mention unfocused crazies who would prefer it to random shooting in movie theaters). That would certainly be a very real test of our tolerance for remaining undefended in cyberspace.
The obvious conclusion that begs to be drawn here is that the entities working on self-driving cars had better solve the cyber security problem now, instead of finding out later that their technological marvels cannot be certified for safety reasons – or worse, pushing the certification through and then facing the inevitable consequences.

That Office of Personnel Management (OPM) hack: the depth of the damage

The somewhat belated (just 7 months!) and timid admission by the Federal Government that security clearance files of Government employees and contractors had been hacked was hardly a shocking surprise. Media discussion largely focused on the breadth of the security breach – the number of people affected started at 4 million and pretty quickly grew to 21 million. But the depth of the security breach was not really addressed. It is, however, a major aspect of the loss.
Unbeknownst to most of the general public, and ironically even to many of the Government employees who are actually responsible for this security breach, not all security clearance files are created equal. At one end of the spectrum are the files of personnel whose proximity to Government secrets is very limited and often only symbolic, such as facilities maintenance workers. At the other end of the spectrum are people with security clearances well beyond the proverbial Top Secret level – those who are entrusted with the deepest and most sensitive Government secrets, such as nuclear arms and Government communications security experts.
Candidates who apply for a Government job are routinely asked to sign a privacy release allowing the Government to conduct an investigation into the applicant’s background that would otherwise violate their privacy rights. Of course, applicants usually sign the form without looking at it too closely. But even the lowest level of security clearance investigation is far more invasive than the “thorough” check your bank runs before granting you a loan. At the low end there’s a cursory search to make sure that the applicant has no significant criminal offenses and is not involved with known criminal organizations. For a high-end clearance it’s a totally different story. A thorough investigation may cover numerous connections, including relatives and present and past personal friends, hobbies, club affiliations, financial transactions over a significant period of time, and so on. Present and past spouses, “close friends,” and partners are definitely of interest. Investigations may include interviews with neighbors and government informants, and perhaps even one form of surveillance or another.
Many who are subjected to such investigation don’t realize how much of their personal privacy they surrender to the Government, but surrender they do, and some of them find that out only if things turn sour in their relations with the Government. However, they all at least implicitly rely on the Government to guarantee the security of their private information.
The OPM hack shattered that expectation. If the hack was done by the Chinese, as alleged, it is almost certain that the Russians had done the same before. Moreover, whichever intelligence service has the files may well trade some of them in exchange for other intelligence. Needless to say, those in supersensitive jobs include clandestine intelligence operatives of the DEA, the CIA, and Special Forces, and this situation puts their lives in real and immediate danger.
As a practical matter, those affected should demand to know exactly what information was stolen. Classified as it may be, it is not classified anymore. After all, if the Chinese know something about me, I am certainly entitled to know what they know too.
One more unanswered but very important question: do those files contain personal biometrics beyond fingerprints (the leak of which is bad enough), such as DNA or retinal scans? I haven’t seen anyone asking that.

Latest cyber lunacy: we are going to sanction the rest of the world!

In the latest example of the bureaucracy’s detachment from reality, the White House has just announced an executive order to sanction cyber attackers. Leaving aside the checkered record of effectiveness of other sanctions, cyber sanctions are very difficult even to fathom.
For starters, who are we going to sanction? The whole history of cyber attacks clearly shows that determining the real source of cyber attacks with any degree of certainty is extremely difficult. Out of the thousands of attackers around the world we are able to identify only a handful in a year. Furthermore, those identified are not the most dangerous ones. The best we can do is to say that an attack came from a certain country.
So, who are we going to sanction — and how?
An individual attacker? Most of the identified hackers are not the most dangerous ones – often they are just script kiddies. But even then, how are we going to sanction them? Prohibit a teenager from Ukraine from entering the United States? He doesn’t have the money to come here anyway. Bar him from McDonald’s? He’ll find another place to get a hamburger. Prohibit a cyber dude from Nigeria from exporting oil? He’s unlikely to have any. Deny the sale of an F-16 to a company in Croatia? It probably doesn’t have a hangar to keep it in anyway.
A country? Unlikely. It’s a well-known fact that most attacks, and definitely the most damaging ones, are “bounced” many times through “innocent” computers in other countries before reaching the target. Given that, we’d rapidly end up sanctioning the rest of the world. While this would become perpetual fodder for the press, it would be unlikely to have any real impact on cyber attacks. If anything, it might even increase them once other countries start sanctioning us in retaliation. And chances are their sanctions would be more damaging than ours.
Besides, cyber attacks are already a felony anyway. That should be sanction enough — though it doesn’t seem to work.
The big question is, do we really understand what we’re doing in cybersecurity? It doesn’t seem so.
A better solution may be to sanction our own bureaucracy.

Cyber Backdoors: myth and reality

Every day we read articles on cybersecurity and privacy referring to “backdoors.” This term needs some clarification. I’ve seen all sorts of explanations of the term and its origin, including even linking it to Internet pornography. While the current situation in cybersecurity is certainly reminiscent of pornography, the origin and nature of cyber backdoors are very different.
The term is borrowed from residential architecture and means just what it says. It’s not the supposedly well-protected “front door,” but a relatively obscure entrance for the residents’ casual private use, commonly given weaker protection. In cyber systems it’s exactly that: a supposedly secret entry point, supplementary to the main entry point to a system, granting a simplified logon procedure and deeper access to those in the know.
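As a purely illustrative sketch – every name and value here is hypothetical, not taken from any real product – a backdoor in software often amounts to nothing more than a hidden credential check spliced into the normal login path:

import hashlib

# Illustrative sketch only: a hypothetical login routine with a hard-coded
# "maintenance" backdoor. All names and values here are invented.

BACKDOOR_USER = "maint"          # hidden account known only to "those in the know"
BACKDOOR_TOKEN = "letmein-2099"  # hard-coded secret that bypasses normal checks

def authenticate(username, password, user_db):
    """Return the role granted to this login attempt, or 'denied'."""
    # The backdoor: skips normal verification and grants elevated access.
    if username == BACKDOOR_USER and password == BACKDOOR_TOKEN:
        return "admin"
    # The front door: ordinary credential verification against stored hashes.
    record = user_db.get(username)
    digest = hashlib.sha256(password.encode()).hexdigest()
    if record is not None and record["password_hash"] == digest:
        return record["role"]
    return "denied"

Anyone who discovers that hard-coded pair – by reverse engineering the software, reading leaked source code, or simply guessing – gets the same privileged access its creators reserved for themselves.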
And that’s where the real problem lies.
First of all, any additional entry point to a network inevitably weakens the system’s security. The more entry points there are, the more difficult it is to arrange and manage security. So, point one: the very fact that a backdoor exists automatically weakens the security of the network.
Secondly, the simplified entry procedures of backdoors always mean they have weaker security than the front doors. For example, it’s not uncommon for a backdoor to a network to create a shortcut around the stronger VPN (Virtual Private Network) system protecting the front door, with the backdoor protected only by a firewall, which is always more vulnerable. So, point two: the common practice of making the backdoor weaker than the front door always compromises the system.
Now, what’s the rationale for creating backdoors? For hackers, it’s pure and simple: it allows perpetual deep and undetected access to the system. The only risk is that it can be discovered and eliminated. So what? The hacker can simply make a different backdoor. With the Government it’s a totally different story; they seem to think that if a company creates a backdoor for them it’s for the Government’s exclusive use. The problem is that if a backdoor exists it can be discovered and hacked by anybody.
Believing that a backdoor is exclusive is fundamentally flawed. It’s as flawed as the wishful thinking in some government circles that they can develop a cyber security technology that they alone can hack. This is an arrogant assumption that historically has been defeated time and time again. You are never the smartest guy on the planet. Period.
So, in addition to all the other issues involved in the Government’s pursuit of backdoor data collection, the uncomfortable but obvious conclusion is that by requiring backdoors the Government further weakens the already weak security of our networks, making them easier prey for any attacker.

A Hack Is Forever

Announcements by major companies and Government organizations that they’ve been hacked and have lost millions of private records that we entrusted to them are now as routine as the morning weather forecast on TV news. These announcements are usually followed by an assurance that from now on everything will be just fine, along with an urgent request that everyone change their passwords. Requirements for the passwords are getting more sophisticated – instead of a plain four-letter word they are supposed to be a little longer and include some characters requiring the shift key.

This is totally useless advice for two reasons: one is that these “sophisticated” passwords are in practice just as easy prey for a modern computer as the proverbial four-letter word, and the second is that no real hacker is going after your individual account unless he happens to be your curious next-door teenager or your nosy grandmother. In the real world hackers aren’t dumb. Why would they go after a few million accounts one-by-one if they can simply hack the organization’s server at the root or Administrator level and get all the data in every account with just a single hack? Any hacker worth his salt knows this, and this is exactly what hackers do – they hack the server, and  that makes our individual passwords irrelevant.
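
The arithmetic behind the first point is easy to sketch. Here is a minimal back-of-the-envelope estimate, assuming an attacker guessing offline against a fast, unsalted hash at a hypothetical rate of about ten billion guesses per second (a deliberately slow, salted hashing scheme would change the picture considerably):

# Back-of-the-envelope estimate of worst-case brute-force time for a password.
# The guess rate below is an assumed ballpark for a fast, unsalted hash on a
# single modern GPU, not a measured benchmark.

ASSUMED_GUESSES_PER_SECOND = 1e10  # hypothetical: ~10 billion guesses per second

def brute_force_seconds(charset_size, length, rate=ASSUMED_GUESSES_PER_SECOND):
    """Time to exhaust every possible password of the given length."""
    keyspace = charset_size ** length
    return keyspace / rate

# A "sophisticated" 8-character password: upper case, lower case, and digits.
print(brute_force_seconds(62, 8) / 3600, "hours")        # roughly six hours
# The proverbial four-letter word, lower case only.
print(brute_force_seconds(26, 4) * 1e6, "microseconds")  # well under a millisecond

Hours versus microseconds is a real difference, but neither number is a meaningful obstacle to an attacker who has already captured the server’s password database.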

These “change your password for a better one” announcements likely have another, subliminal agenda. It looks like the real reason for asking you to change your password is to make you feel responsible for your data security. In other words, to blame the victim.

Furthermore, victims are seriously misled in a couple of other ways too. First of all, after a hack all your private personal data are gone, available to any criminal in cyberspace for a nominal fee. You cannot take them back. You can change your password, but you cannot change your name, date of birth, social security number, address, or phone number; even changing your mother’s maiden name is difficult. All of these are available to identity thieves.

And there’s another aspect that your favorite bank won’t tell you about: every competent hacker will leave a dormant cyber mole deep inside the hacked system. These are practically impossible to detect, despite all political and marketing claims to the contrary. So even if the entire security program of a system is changed, the cyber mole will report all the changes to its master – including your new sophisticated password.

So a hack is forever.

Kaspersky and Symantec Kicked Out of China – For a Reason

The great cyber triangle of US-Russia-China seems to be shaping up in a definitive way. For a while China was technologically and skill-wise behind the US and Russia, the two early leaders in cyberspace, but it’s catching up, and fast.

It was announced last week that Kaspersky Lab and Symantec have been taken off the list of approved vendors for China’s government cybersecurity software market. Reuters, for example, reported the story: http://www.reuters.com/article/2014/08/03/us-china-software-ban-idUSKBN0G30QH20140803

Traditionally very polite, the Chinese did not cyberwhine, did not make any fuss, did not lay any blame, but simply took the pair off the list. Some Western and Russian analysts were very quick to assume and announce that this was a trade protectionist move to favor China’s national cybersecurity companies. That’s definitely wrong. If that were true, China would bar foreign companies from the country altogether – their private market is huge and very profitable. But they didn’t; they addressed only their government cyberspace security. Apparently Chinese cyber experts found some extracurricular activities in products from both companies, which is not terribly surprising. Furthermore, they probably realized that detecting all the malware in modern software is practically impossible, and correctly decided to keep the foreign security well-wishers away, at least from their government.

The Chinese perception of individual privacy is different from the Western one, and they don’t seem very concerned about the privacy of regular users, at least currently. However, they will probably watch the Kaspersky and Symantec products sold to the Chinese private sector very carefully from now on. If they detect any sizeable collection of data from customers’ computers, they will probably bar Kaspersky, Symantec, or both from doing business in China altogether.

The great cyber triangle is definitely becoming more and more equilateral. Interestingly, for the first time that I can recall, China is taking the lead in a trend that is logical and most likely to continue.

Why do Russia and China not cyberwhine?

Usually in my posts I try to provide answers. This time I can only manage a question, but it’s an interesting one.

We constantly hear complaints, if not outright whines, about the US being attacked in cyberspace, whether by China or Russia. We’ve gotten used to these attacks, and our response is increasingly “what else is new?”

But there’s an interesting angle here: in the more-or-less symmetrical US-Russia-China great cyber triangle, we rarely if ever hear about the Chinese or the Russians being hacked. Is it that they are not being attacked? Not at all. For example, Russia recently reported a five-fold increase in powerful DDoS attacks over the last year, the longest of them lasting ninety days. That was by any standard a major cyber security event. Was it a big media deal in Russia? Not really – it was barely mentioned.

Initially I thought this difference was mainly a cultural thing. In Russia boys grow up in a culture where, if you’re beaten up, you don’t cry “Mommy, he hit me!”, and you certainly don’t complain to teachers or the police. You just heal your bruises and learn to defend yourself. I believe that in China the culture is somewhat similar in this respect. The reaction to cyber attacks on the US is just the opposite. Instead of developing really effective cyber defense and immediate counterattack technology, we whine loudly time after time and waste our credibility on vague threats, when everyone knows there will be no real response.

However, cultural difference is probably not the reason for Russia’s and China’s muted response. As an example of the opposite response, recall the frequent border disputes between Russia and China in the 1960s (over areas where nobody was present for many miles except a few occasional border guards). During those clashes there was extensive media coverage on both sides, with many diplomatic notes saying something like “This is the 104th serious warning.”

So, the question remains: compared to our constant whining, what is the reason for the very muted Russian and Chinese responses to cyber attacks?

Don’t Blame the Hacking Victim; Blame the Cyber Security Product

“People are the weakest link in security” is an adage that has proven valid over the centuries. It’s also a common rationale for explaining cyber security breaches. It sounds like a pretty convincing explanation, but is this proposition really true?

There’s one important factor common to those historical failures: the security systems themselves were otherwise good – if a human being had not made a mistake, the system would have remained undefeated. That’s a fundamentally different situation from what we have now with our legacy cyber security systems. These systems are built on current technologies that have long since been proven thoroughly flawed. Virtually every firewall and router delivered to its first customer has already been hacked, and has thus been proven unfit for its intended purpose even before it is installed. The human factor in cyber security is only a very convenient excuse for the failure.

But clearly, the human factor is not the real reason for the failure.

Router vulnerability is especially critical because it can be exploited to perform “man-in-the-middle” cyber attacks that can very quickly cripple entire networks. Router manufacturers regularly blame their customers for failing to reset the default password on the router. Never mind that the new password would delay a competent hacker by just a few minutes at best. But officially it’s the customer’s fault and “human failure” is the cause.
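
By way of illustration, here is a minimal sketch of the kind of defensive audit anyone can run against their own router to see whether it still answers to a factory default login; the address and credential pairs are hypothetical placeholders, and the point is simply how little effort such a check takes:

import requests  # third-party HTTP library

# Hypothetical defensive audit: does this router's admin page still accept a
# factory default login? The address and credential pairs are placeholders;
# run this only against equipment you own.
ROUTER_ADMIN_URL = "http://192.168.1.1/"
FACTORY_DEFAULTS = [("admin", "admin"), ("admin", "password"), ("root", "1234")]

def still_using_default(url):
    for user, pwd in FACTORY_DEFAULTS:
        try:
            resp = requests.get(url, auth=(user, pwd), timeout=5)
        except requests.RequestException:
            continue  # unreachable, or no HTTP admin interface at this address
        if resp.status_code == 200:
            print("Default credential still accepted:", user + "/" + pwd)
            return True
    return False

if __name__ == "__main__":
    if not still_using_default(ROUTER_ADMIN_URL):
        print("No factory default credential accepted on this interface.")

Published lists of factory default credentials are long and freely available, which is why resetting the password is the very first step – and why it still buys only minutes against an attacker going after the device’s firmware rather than its login page.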

Blaming the customer for equipment failure is not generally a successful business strategy, but cyber security companies somehow manage to get away with it – perhaps because of the still somewhat mysterious nature of cyberspace.

There’s a very simple conclusion to be drawn here: currently available cyber security technology is nowhere near the level where the “human factor” is the weakest link. The weakest link is the fundamentally flawed cyber security technology that fails well before the “human factor” can even come into play.

So, stop blaming the customers. The real cause of the failure is the human factor of those who are supposed to protect our cyberspace assets with real security technologies but consistently fail to do so – while charging their customers heftily for products that are known to be unfit for the purpose.

Symantec Dead Wrong, Again

In a recent Wall Street Journal article Symantec declares current antivirus products dead and announces its “new” approach to cyber hacking: instead of protecting computers against hacking, it will offer analysis of the hacks that have already succeeded.

http://online.wsj.com/news/article_email/SB10001424052702303417104579542140235850578-lMyQjAxMTA0MDAwNTEwNDUyWj

This is the equivalent of a pharmaceutical company failing to develop an effective vaccine and offering instead an advanced autopsy that will hopefully determine why the patient died.

At its core this approach is based on two assumptions: 1) that developing effective antivirus products is impossible, and 2) that detecting damage that has already been done is easier than defending the computer.

Let’s take a quick look at both these assumptions.

It’s true, of course, that Symantec, along with a few other cyber security vendors, has failed to develop anti-hacking protection systems, because all of these systems were based on the same fatally flawed firewall technology. However, that doesn’t mean such products cannot be developed if they are based on valid new cyber security principles – cloning, for one.

The second Symantec assumption – that they can detect the damage already done – doesn’t look convincing either. It’s hard to understand how one can “minimize damage” when the damage has already been done. Moreover, detecting damage, especially stolen data, is significantly more difficult than the task they have already conspicuously failed at. Modern malware is very good at morphing itself, possibly multiple times, into a variety of forms, splitting itself into several components and hiding in the depths of increasingly complex operating systems.

The bottom line is that the currently deployed antimalware technology is indeed dead – but this “new” approach is even more dead. The only likely benefit is that the participants will get a few billion dollars from the Government for their “advanced” research.

Conclusion: instead of offering a cyber coroner’s facilities, we’d be much better off developing fundamentally new technologies. Essentially, new cyber vaccines.