
Spycraft101 Apple podcast episode #97

https://podcasts.apple.com/us/podcast/97-defecting-from-the-ussr-with-olga-sheymov/id1567302778?i=1000614816892

This week, Justin chats with Olga Sheymov. Olga has worked in high technology, on arts projects, and as a television producer. Her TV credits include the long-running series Russia Today, produced from 1997 until 2015, and Your Source TV, among other projects. But long before she began her media career, Olga and her husband Victor Sheymov defected to the US, smuggled out of the Soviet Union through the Carpathian Mountains by a team from the Central Intelligence Agency in 1980. Victor was a high-ranking member of the KGB and proved to be an incredibly valuable source of information for the US government for years to come, although their relationship with the CIA and FBI encountered many problems, to say the least.


First Recorded Breach of Security

A standard first dictionary definition of security is freedom from danger. Danger, or threat as it is often labeled, has to be present or assumed to be present; otherwise there is no need for security. In recent years, threat has conventionally been defined by security professionals as the sum of the opposition’s capability, intent (will), and opportunity, and can be expressed thus:

Threat = Capability + Intent (will) + Opportunity

Indeed, without a capability, an attack cannot take place. An attacker must possess a specific capability for a specific attack. For instance, the Afghan Taliban cannot carry out a nuclear missile attack on the United States even if they have full intent and an opportunity. Intent, or will, is also a necessary ingredient. North Korea has the capability for a nuclear strike on South Korea, but many factors keep their will in check. Similarly, Iran may have the capability to attack a defenseless American recreational sailboat in its territorial waters, and be perfectly willing to do so, but American recreational sailors just do not go there, providing no opportunity.

Furthermore, applying this formula usually does not produce precise results, since ingredients such as capability and opportunity are usually not known exactly and often are just assumed. A classic example of this is the infamous case of Iraq’s weapons of mass destruction as justification for the last Iraq war.
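To make the formula and its all-ingredients-necessary caveat concrete, here is a minimal toy sketch in Python; the 0-to-1 scoring scale is my own illustrative assumption, not something from the formula itself.

```python
# Toy model of: Threat = Capability + Intent (will) + Opportunity.
# Scoring each ingredient from 0 (absent) to 1 (fully present) is an
# illustrative assumption, not part of the original formula.

def threat(capability: float, intent: float, opportunity: float) -> float:
    # Each ingredient is necessary: without capability, will, or
    # opportunity there can be no attack, hence no threat at all.
    if min(capability, intent, opportunity) == 0.0:
        return 0.0
    return capability + intent + opportunity

# The Taliban example above: full intent and opportunity, no capability.
print(threat(capability=0.0, intent=1.0, opportunity=1.0))  # 0.0
```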

The first recorded breach of security occurred in the Garden of Eden. Apparently, there was a sense of threat, and Cherubim guarding it with flaming swords were the security measures taken. However, the security measures were insufficient, and that allowed the serpent to infiltrate the Garden of Eden and do his ungodly deed.

In fact, there is no perfect security. We can only provide degrees of protection (i.e., if there is a threat, risk is always present, though its level may vary). Often this is reflected in the statement that risk is a combination of threat and vulnerability:

Risk = Threat + Vulnerability

This looks logical, since vulnerability means exposure to a certain threat. This also leads to the assertion that:

Vulnerability is a deficiency of protection against a specific attack.

A reasonably comprehensive definition of security would probably be something like:

A set of measures that eliminate, or at least reduce, the probability of destruction, theft, or damage to a being, an object, a process, or data, including the revelation of a process or of the content of information.
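Extending the same toy scoring to the risk formula (again, the inputs and the 0-to-1 scale are my own illustrative assumptions):

```python
# Toy model of: Risk = Threat + Vulnerability.
# Vulnerability is scored here from 0 (fully protected) to
# 1 (undefended against this specific attack); values are illustrative.

def risk(threat_score: float, vulnerability: float) -> float:
    return threat_score + vulnerability

# There is no perfect security, so vulnerability is never exactly zero.
print(f"{risk(threat_score=2.1, vulnerability=0.4):.1f}")  # 2.5
```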

A hack is forever, but so are its fingerprints

A while ago I published a post, “A hack is forever,” explaining that a competent hack is extremely difficult to eradicate from a cyber system – there is no certainty that the system is really clean. However, there is a flip-side aspect of this in cyberspace that is not commonly understood: by way of dubious consolation, the hacker cannot be certain that he really got away with the crime.
Cyberspace is an information and communications space. In essence, we don’t really care what media our information is stored on, we care about the utility aspects, such as the efficiency of the storage, how quickly and conveniently information can be retrieved, and so on. Similarly, we don’t really care what communications channels we use, we care about our communications’ speed, reliability, security, etc.
Cyberspace has one very significant property: information cannot be destroyed. Just think about it. We can destroy only a copy of the information, i.e. we can destroy a physical carrier of the information, such as a floppy, a thumb drive, a letter, or a hard drive (often that’s not an easy task either). However, we cannot destroy the information itself. It can exist in an unknown number of copies, and we never really know how many copies of a particular piece of information exist. This is particularly true given the increasing complexity of our cyber systems – we never really know how many copies of that information were made during its processing, storage, and communication, not to mention the very possible interception of the information by any party, given our insecure Internet. Such an intercept can open an entirely separate and potentially huge area in cyberspace for numerous further copies of the information.
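As a trivial illustration of the point (my own sketch, not from the original post): deleting a carrier does nothing to the copies that routine processing has already made.

```python
# Illustration: destroying one physical carrier of information does not
# destroy the information itself. The file names here are hypothetical.
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "secret.txt")
with open(original, "w") as f:
    f.write("the information itself")

# Routine processing quietly makes copies: backups, caches, intercepts...
backup = os.path.join(workdir, "secret.bak")
shutil.copy(original, backup)

os.remove(original)        # "destroy" the original carrier
with open(backup) as f:
    print(f.read())        # the information survives in another copy
```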
Back to the consolation point: cyber criminals of all sorts can never be sure that there is not a copy somewhere of whatever they have done. That copy can surface, someday, usually at the most inopportune time for the perpetrator.
One practical aspect of this is that Congress should perhaps consider extending the statutes of limitations for cyber crimes, or for crimes committed via cyberspace.

Don’t blame the victim; fix the cyber technology

The perennial excuse for our dismal performance in cyber security keeps showing up. Some “experts” state that 95% of cyber security breaches occur due to human error, i.e. not following the recommended procedures. There’s a sleight of hand in these statements: many breaches involve human error, but do not occur because of that error. While the 95% number might be suspect, the real point is different: even following all the “recommended security procedures” will not protect our systems from cyber attacks.
It’s true that attackers often exploit users’ mistakes. The reason is simple and obvious – human errors do make it easier to penetrate a system. In effect they represent a shortcut for an attack, but they by no means eliminate the many other ways to get in. Why would an attacker take a more complicated route if he can use a shortcut?
Of course, users’ awareness of security is not common or comprehensive. This was vividly demonstrated by one very important Government agency not that long ago. Its board, after a thorough (and expensive) “expert” study, mandated that employees use a six-letter password instead of the old and “insecure” four-letter one.
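To put that upgrade in perspective, here is a quick back-of-envelope calculation (my own illustration, assuming lowercase letters only):

```python
# Search space of 4- vs 6-letter passwords, assuming lowercase a-z only.
ALPHABET = 26

for length in (4, 6):
    print(f"{length} letters: {ALPHABET ** length:,} possibilities")

# 4 letters:     456,976 possibilities
# 6 letters: 308,915,776 possibilities
# Both are trivial for an offline brute-force attack on modern hardware.
```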
This is a pretty pathetic solution, but the much bigger question is: do the users really need to follow or even know complicated procedures? The answer is: no, not at all.
Indeed, cyberspace presents us with a wonderful opportunity to build very user-friendly, effective security systems. It’s quite possible to build cyber security systems that would be extremely strong, even mathematically unhackable, and that would require the user only to select the party he is going to communicate with, and then to indicate “secure.” No other security-related actions would be needed. This is very different from our current security technology, based on concepts of physical space, where the weakest link in the security chain is the human factor. But up until now we have failed to take advantage of this great property of cyberspace.
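A hypothetical sketch of what such a two-action interface could reduce to (the class and method names are my own inventions for illustration, not an existing product):

```python
# Hypothetical two-action security interface: the user only picks a
# party and indicates "secure"; everything else happens automatically.

class SecureChannel:
    def __init__(self, party: str):
        self.party = party
        self.secure = False

    def mark_secure(self) -> None:
        # In a real system, all key negotiation, authentication, and
        # enforcement would happen here, invisibly to the user.
        self.secure = True

    def send(self, message: str) -> None:
        if not self.secure:
            raise RuntimeError("channel not marked secure")
        print(f"[to {self.party}, protected] {message}")

# The user's entire security workload:
channel = SecureChannel(party="alice")   # 1) select the party
channel.mark_secure()                    # 2) indicate "secure"
channel.send("hello")
```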
If, as is claimed, our cyber security misery is a “people” problem, this is true only in a very narrow sense. It’s not the users who are the problem; the problem belongs to the people who design and build our worthless cyber security systems.

The Attack on Private Encryption

The current anti-encryption political push by a choir of government bureaucrats is picking up steam and has lately been joined by the head of the British MI5. The usual scarecrow of terrorism is invoked and used bluntly in public statements that border on unabashed propaganda. I did not want to write about it, but what is going on is just too much to take. The real goal of the whole campaign is suspect, so it’s worth taking a closer look at the issues involved.
Point one – ideological: We view ourselves as a democracy. With that in mind, we need to understand that encryption has existed for at least four thousand years. During that time most rulers were ruthless tyrants, and for all of them the #1 priority was to protect their rule. But even they did not crack down on private encryption – because such a ban is not practical (see Point three below), and they could not enforce it anyway. We, on the other hand, are facing a bunch of bureaucrats demanding the practical end of meaningful private encryption. How can a democracy impose more restrictions on its citizens than a country suffering under the rule of a tyrant?
Point two – technical: Throughout those thousands of years, encryption algorithms have been consistently and quickly cracked by experts, usually employed by the government. Only a very few encryption algorithms withstood scrutiny for a few years, and those strong algorithms were developed by government experts and have always been well outside the reach of the general public at the time. Contrary to popular belief, all commercially available algorithms have been cracked very quickly after their introduction. Governments have traditionally been very shy about disclosing this. The situation is no different now. If a target used commercially available encryption algorithms, its communications have been quickly cracked. So, what is the technical difference in the current situation? The simple answer is the sheer volume of information passing through the Internet. Individual communications can be cracked, but not the entire Internet traffic. That’s what the government bureaucracy is after: the ability to read ALL the traffic, i.e. all of our communications.
Point three – practical: the purpose of encryption is to assure the privacy of communications. There are many ways to do this other than encryption. One vivid example: during the years we were hunting Bin Laden, he did not use the Internet at all; he used messengers. He could just as well have used the regular mail. Furthermore, it’s well known that the 9/11 terrorists were communicating over regular phones, but in Aesopian language. For example, they referred to a terrorist act as a “wedding.” So are our bureaucrats next going to demand the right to read all our mail, or make a terror suspect of anyone who mentions a wedding over the phone?
Conclusion: The simple truth is that the Government can penetrate any commercial encryption available to terrorists. That is, if they actually go after terrorists. However, they are now demanding the right to go after everyone, mostly law-abiding citizens. If that demand is denied, there’s still nothing to prevent them from going specifically after terror suspects.
The moral here is pretty straightforward: if we call ourselves an uncorrupt democracy, we should be very careful about giving our bureaucrats too much power, inasmuch as they want more power than the tyrants of history could get. Furthermore, the bigger danger here is that losing civil rights is a very slippery slope.

The secret reason behind the Chinese hacking

For quite some time I’ve been puzzled by the alleged Chinese hacking of our databases. I could understand if they hacked our advanced research and development – that would save them time, effort and money. But why the databases? Then it dawned on me: it’s a savvy business strategy.
We routinely encounter problems with our databases. One organization can’t find our file, another somehow has the wrong information about us, and all too often they simply can’t get their act together, and we see classic cases of the left hand not knowing what the right one is doing. The pre-9/11 non-sharing of intelligence is a good illustration. In other words, we have a somewhat messy general situation with our databases; we’re used to taking this in stride; and we just sigh when we have to deal with some organization that accuses us of something we aren’t guilty of.
The Chinese understood the problem, but they just never got used to it. For many centuries they had a much bigger population than other countries, but somehow they always managed to know exactly who was who, who was related to whom, and what he or she was doing.
So naturally they wanted to have the same level of knowledge about the rest of the world. To their dismay, in the US they found disorganized databases and mismatched records. So they had to process all that information to make sense of it for themselves. And suddenly they saw a perfect business opportunity: they would develop a gigantic and very efficient database of the US, and then sell this data back to us piecemeal, retail. This would give them full and exact knowledge of the US, and the US would pay for the project, with a significant profit for the Chinese. For us this would be a very valuable service, a kind of involuntary outsourcing where we (both the Government and the private sector) can get relevant and reliable data at a modest price. Makes perfect business sense.
This approach has a special bonus for the US Government: when buying data abroad they won’t have to deal with privacy restrictions imposed by the US Constitution and constantly debated by Congress. The logic is impeccable: we bought it abroad, and if the Chinese know it, we are entitled to know what they know about us.

The Android phone vulnerability has been “fixed” – really? How about Android Pay and Google Wallet?

The recently discovered vulnerability in the Android operating system that affected 1 billion smartphone users (corrected to a mere 950 million, according to the phone manufacturers) followed a typical path:
a) The next gaping security hole is discovered by researchers, who alert the manufacturers;
b) The manufacturers make a patch for future buyers;
c) The manufacturers and service providers do nothing to help or even alert the affected users;
d) The researchers lose patience and publicly disclose their discovery of the flaw;
e) The manufacturers report that they “fixed the glitch within 48 hours,” and keep quiet about the customers affected.
The frustrating part of this all too familiar pattern is that it ignores the victims – the customers who already bought their phones. These customers were assured by the manufacturers’ marketing and sales people at the time of purchase that the product (a phone in this case) is very secure and is equipped with a top-notch security system, so their privacy is assured. Software patches like the one in question are very easy to incorporate into new phones. However, it would cost money to fix the defective products already out there, and this seems to deter the companies from making the fix.
But the most interesting aspect of the situation is not what the manufacturers say, but rather what they don’t say. It should be understood that the vulnerability discovered presents not one problem but two. One is that the phones without the patch can be hacked at some point in the future. The other is that the phones already hacked are under the hackers’ control. So the most important question is: can that control be reliably taken away from the hacker and returned to the customer? The manufacturers readily acknowledge the first, simpler problem, but quietly ignore the existence of the second, much bigger one.
In practical terms, even if the fix is installed on an affected phone, the real question is: does it neutralize the effect of the hack? In other words, if my phone was hacked, the perpetrators have established control over it. Does the fix eliminate that control? A pretty safe bet here is that it does not. The fix just prevents another hack using the same method. But in that case, what’s my phone worth now, when I can no longer assume my privacy or the security of my financial transactions? It looks like the manufacturers may not be complying with implied warranty laws. At the very least this is a priority research problem for our increasingly numerous legal experts.
Every aspect of these issues is fast approaching a real-world test – especially urgently given the proliferation of smartphone-based payment systems like Apple Pay and Google Wallet.

Self-driving cars–the hacking factor

The public’s perception of our cyber vulnerability hasn’t really been tested so far. We’ve been lucky to get away with our critical infrastructure practically undefended. However, sooner or later we’ll run out of luck, and we’ll get hurt – badly.
The major reason for our apathy in the face of cyber attacks is probably the tendency to react mostly to something that is tangible. But as yet we haven’t had much in the way of tangible losses to cyber attacks: banks have been able to reimburse money lost in their hacked customers’ accounts, stolen identities have generally been somewhat restored, we aren’t aware of anyone being killed due to the Office of Personnel Management (OPM) hack – in other words, a small part of the population has experienced inconveniences, but not tangible losses.
The latest example of a cavalier attitude to this danger comes (surprise, surprise!) from the auto industry. Cars have been hijacked in experiments for quite some time, but these were limited to the Government and some private labs. Car manufacturers have surely been aware of the potential problem. Indeed, Fiat Chrysler was notified of the successful hijack of one of their cars quite some time ago, but only after the media exposed it did they recall 1.4 million cars. And of course the vulnerability isn’t limited to Fiat Chrysler– it’s far wider.
This situation is fast heading for a test. Significant efforts are being made to develop self-driving vehicles, and the results are already very promising. However, the parties involved seem oblivious to the fact that a self-driving car is much more vulnerable to cyber attacks, and to attacks beyond the kind that affected the Fiat Chrysler vehicle. The latter can be somewhat mitigated by separating the car’s communication equipment from its primary performance systems. But clearly this isn’t going to be possible with self-driving cars – outside interaction is going to be a major component of their safety features. Those very safety features can be manipulated by a malicious party, with catastrophic results. There would be lost lives along with many other unpleasant consequences. Hacking self-driving vehicles would likely become a favorite weapon of focused assassins (not to mention unfocused crazies who would prefer it to random shooting in movie theaters). That would certainly be a very real test of our tolerance for remaining undefended in cyberspace.
The obvious conclusion that begs to be considered here is that the entities working on the self-driving car had better solve the cyber security problem now, instead of finding out later that their technological marvels cannot be certified for safety reasons. Or worse, pushing the certification through and then facing the inevitable consequences.

That Office of Personnel Management (OPM) hack: the depth of the damage

The somewhat belated (just 7 months!) and timid admission by the Federal Government that security clearance files of Government employees and contractors had been hacked was hardly a shocking surprise. Media discussion largely focused on the breadth of the security breach – the number hacked started at 4 million and pretty quickly grew to 21 million. But the depth of the security breach was not really addressed. It is, however, a major aspect of the loss.
Unbeknownst to most of the general public, and ironically even to many of those Government employees who are actually responsible for this security breach, not all security files are created equal. At one end of the spectrum are the security clearance files of personnel whose proximity to Government secrets is very limited and often only symbolic, such as facilities maintenance workers. At the other end of the spectrum are people with security clearances well beyond the proverbial Top Secret level, those who are entrusted with the deepest and most sensitive Government secrets, such as nuclear arms and Government communications security experts.
Candidates who apply for a Government job are routinely asked to sign a privacy release allowing the Government to conduct an investigation into the applicant’s background that would otherwise be in violation of their privacy rights. Of course, applicants usually sign the form without looking at it too closely. But even the lowest level of security clearance is far more invasive than your “thorough” bank investigation before granting you a loan. At the low end there’s a cursory search to make sure that the applicant has no significant criminal offenses and is not involved in known criminal organizations. For a high-end clearance it’s a totally different story. A thorough investigation may cover numerous connections, including relatives and present and past personal friends, hobbies, club affiliations, financial transactions over a significant period of time, and so on. Present and past spouses, “close friends” and partners are definitely of interest. Investigations may include interviews with neighbors and government informants, and maybe even one form of surveillance or another.
Many who are subjected to such an investigation don’t realize how much of their personal privacy they surrender to the Government, but surrender they do, and some of them find this out only if things turn sour in their relations with the Government. However, they all at least implicitly rely on the Government to guarantee the security of their private information.
The OPM hack shattered that expectation. If the hack was done, as alleged, by the Chinese, it is almost certain that the Russians had done it before. Moreover, whichever intelligence service has the files, it may well trade some of them in exchange for other intelligence. Needless to say, among all those in supersensitive jobs are clandestine intelligence operatives, including those of the DEA, the CIA, and Special Forces, and this situation puts their lives in real and immediate danger.
As a practical matter, those affected should demand to know exactly what information was stolen. Classified as it may be, it is not secret anymore. After all, if the Chinese know something about me, I am certainly entitled to know what they know too.
One more unanswered but very important question: do those files contain personal biometrics beyond fingerprints (the leak of which is bad enough), such as DNA or retinal scans? I haven’t seen anyone asking that.

Cyber defense by semantics: hacks are now called “computer glitches”

The New York Stock Exchange is down and United Airlines is not flying for half a day. Naturally, everyone’s wondering: what’s going on? The public wants an explanation from the FBI and the affected institutions, and fast.
The response is quite astounding.
Voila! Cyber security problem solved: from now on all hacks are to be called “computer glitches.” United and the NYSE computer network outages are only the latest glaring examples of a classic bureaucratic solution to the problem – defense by semantics.
This “expert” explanation means two things: a) it’s a fairytale designed for little children and big fools; and b) the “cyber security experts” of the affected entities and the FBI probably have no clue as to what happened. That’s a very good indication of an expertly executed cyber attack – the effect is obvious but the attack has not even been detected, let alone attributed to “the Chinese” or “the Russians.” It is really hard to believe that critical programs like these would suddenly develop “glitches.” Such programs are written and implemented by highly qualified programmers and software engineers, and are tested numerous times under all imaginable circumstances. Furthermore, they’ve been running for quite some time with no “glitches” detected, and all those systems have built-in redundancies precisely in case of a “glitch.”
The “glitch” explanation is very convenient for those who have failed to provide this country’s cyber security.
All these events are a clear indication of our massive cyber security failure. This failure was inevitable. On the one hand, over the last quarter century widely known cyber attack technology has advanced dramatically and is becoming increasingly widespread. What only a few government agencies in the world could do a while ago can now be done by a lot of people, often by mere script kiddies, and certainly by our sworn enemies, who aren’t restricted in what they can attack – the more damage the better. On the other hand, our cyber security has not advanced at all over the same quarter century. This is the inconvenient truth, despite all the marketing and politically soothing statements from the entrenched cybersecurity establishment.
It is really sad that people responsible and paid for providing our cyber security are getting away with this cyber defense by semantics. No doubt the next step is to make the term “hacking” politically incorrect and make everyone use “computer glitch” instead. When that fairy tale runs out, they’ll think of another term. That’s assuming our computers are still functioning.