Author Archives: victor sheymov

About victor sheymov

Victor Sheymov is a cyber security expert, author, scientist, inventor, and holder of multiple patents for methods and systems in cyber security. He is a speaker on intelligence, offensive warfare, cyberspace, cyber security, and critical infrastructure protection. Currently specializing in cyber security, Mr. Sheymov holds over 30 patents in the field of computer security, granted in the United States, the European Union, Australia, Japan, India, Korea, and China. In his book Cyberspace and Security he explains how cyberspace is radically different from physical space, analyzes the differences between the two, and defines the laws and characteristics of cyberspace. In the revised edition of Cyberspace and Security, Sheymov covers the coherent defense of large sovereign cyber systems.

Mr. Sheymov is the inventor of the Variable Cyber Coordinates (VCC) method of communications, which is advantageous for establishing a high level of cyber security. By hopping IP addresses and other communications parameters, it provides dynamic protection of computers and computer networks through cyber agility. The VCC method enables building cyber security systems that render hacking attacks computationally infeasible. Furthermore, it does not require intrusion into customers' information and systems: it provides security without violating customers' privacy or civil liberties.

Victor Sheymov has 30 years of experience in advanced science. He performed scientific research involving guidance systems for the Soviet "Star Wars" missile defense program, and worked in the Soviet counterpart to the U.S. National Security Agency in a variety of technical and operational positions. Prior to defecting to the United States for ideological reasons in 1980, Mr. Sheymov had been one of the youngest KGB Majors in its equivalent of the NSA, responsible for coordinating all aspects of KGB cipher communications security with its outposts abroad. After his arrival in the United States he worked for the NSA for a number of years. His work for both the Soviet counterpart to the NSA and the National Security Agency itself puts him in a unique position for a comparative perspective.

Mr. Sheymov has testified before the United States Congress as an expert witness. He has been a keynote speaker at major government and private industry events, including at the NSA, a National Defense Industry convention, and a National Science Foundation symposium, and has been a guest lecturer at various universities. Victor Sheymov is the author of "Tower of Secrets" (HarperCollins), a non-fiction book describing the Soviet Communist political system, its repressive apparatus, and the technical aspects of intelligence. His new book, Tiebreaker: Tower of Secrets II (Cyber Books Publishing), is the fascinating memoir of how Victor Sheymov figured out the reason behind the campaign to destroy him and his family, and the cause of the CIA's catastrophic intelligence failures revealed with the arrest of Aldrich Ames. Sheymov then found himself at the center of the next intelligence crisis with the arrest of his longtime FBI liaison, Robert Hanssen. And now Sheymov, world-class cyber security expert and inventor, finds himself at the heart of the controversy over the solution to the rapidly growing global threat to cyber security. He has also authored articles in The Washington Post, Barron's, World Monitor, National Review, and other national publications, and has appeared on many national news programs, including Larry King Live, 48 Hours, Dateline, McNeil-Lehrer, Charlie Rose, and the McLaughlin Report. Victor Sheymov is a recipient of several prestigious U.S. awards in intelligence and security.

Victor Sheymov holds an Executive MBA from Emory University and a Master's degree from Moscow State Technical University, a Russian equivalent of MIT.
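The "hopping" idea behind agile communications can be illustrated with a minimal sketch. This is not the patented VCC method itself, only a toy assuming two parties share a secret key and derive the same address/port schedule independently, so an outside observer cannot predict where the next exchange will take place:

```python
import hmac
import hashlib

def hop_endpoint(secret: bytes, slot: int) -> tuple:
    """Derive the address/port pair both parties use during one time slot."""
    digest = hmac.new(secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    # Map the digest onto a private 10.x.y.z address and a dynamic port.
    addr = "10.{}.{}.{}".format(digest[0], digest[1], digest[2])
    port = 49152 + int.from_bytes(digest[3:5], "big") % 16384
    return addr, port

secret = b"shared-secret"  # hypothetical pre-shared key, for illustration only
for slot in range(3):
    print(slot, hop_endpoint(secret, slot))
```

Because both endpoints compute the same schedule from the shared secret, they can rendezvous at each new coordinate without ever transmitting it, which is the essence of cyber agility.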

Cyberspace and Security 3rd Edition

We developed our cyber defenses largely based on variations of a physical firewall. This does not work, and never did: it has been mathematically proven that any firewall can be penetrated.
Cyberspace is fundamentally different from physical space.

Cyberspace is an Information and Communications space.

In cyberspace information cannot be destroyed; only a copy of information can be destroyed.

Physical space is limited; cyberspace is unlimited.

Physical space has three dimensions; cyberspace has unlimited dimensions.

In physical space visibility is unavoidable; in cyberspace visibility is optional.

In physical space only one identity is permitted; cyberspace permits multiple identities.

Power grid: when cyber lines cross


The CNN story citing testimony by Admiral Michael Rogers, head of U.S. Cyber Command, to the House Select Intelligence Committee on November 20 sounded like shocking news: he stated that China can take down our power grid.

Shocking as it may be, if this is still “news,” surprise, surprise — it’s been known to everyone who was anyone in cyber security for over 25 years. First it was just the Russians, then the Chinese, then some vague criminals acting on behalf of “nation-states” were gradually added to the list.
Never mind the Russians and the Chinese – they also both have enough nuclear weapons to kill every squirrel in America. What is really troubling is the cyber security trend. Our cyber defensive capabilities have hardly improved for over a quarter-century. However, hackers’ attacking capabilities are improving constantly and dramatically. This is not a good equation — sooner or later these lines will cross. This means that a large number of unknown hackers will be able to take down our power grid and also decimate our power-intensive facilities, such as oil refineries, gas distribution stations, and chemical factories.
Now, think terrorists. They would be delighted to do exactly that, whether you kill them afterwards or not. This isn’t news, but it’s an increasingly troubling reality. We have very little time to cure our stone-age cyber defensive technology. But that requires changing the current equation and making cyber defense inherently more powerful than the offense. That won’t happen until the doomed legacy password and firewall paradigms are abandoned and replaced by fundamentally different technologies.

First Recorded Breach of Security

A standard first dictionary definition of security is freedom from danger. Danger or threat, as it is often labeled, has to be present or assumed to be present otherwise there is no need for security. In recent years, threat has conventionally been defined by security professionals as the sum of the opposition’s capability, intent (will), and opportunity, and can be expressed thus:

Threat = Capability + Intent (will) + Opportunity

Indeed, without a capability, an attack cannot take place. An
attacker must possess a specific capability for a specific attack. For instance, the Afghan Taliban cannot carry out a nuclear missile attack on the United States even if they have full intent and an opportunity. Intent or will is also a necessary ingredient. North Korea has the capability for a nuclear strike on South Korea, but many factors keep their will in check. Similarly, Iran may have a capability to attack a defenseless American recreational sailboat in its territorial waters, and be perfectly
willing to do so, but American recreational sailors just do not go there, providing no opportunity.

Furthermore, applying this formula usually does not produce precise results, since ingredients such as capability and opportunity are usually not known exactly and are often just assumed. A classic example of this is the infamous case of Iraq’s weapons of mass destruction as justification for the last Iraq war.

The first recorded breach of security occurred in the Garden of
Eden. Apparently, there was a sense of threat, and Cherubim guarding it with flaming swords were the security measures taken. However, the security measures were insufficient, and that allowed the serpent to infiltrate the Garden of Eden and do his ungodly deed.
In fact, there is no perfect security. We can only provide degrees of protection (i.e., if there is a threat, risk is always present, though its level may vary). Often this is reflected in the statement that risk is a combination of threat and vulnerability:

Risk = Threat + Vulnerability

This looks logical, since vulnerability means exposure to a certain threat. This also leads to the assertion that:

Vulnerability is a deficiency of protection against a specific threat.

A reasonably comprehensive definition of security would probably be something like:

A set of measures that eliminate, or at least alleviate, the probability of destruction, theft, or damage to a being, an object, a process, or data, including the revelation of a process or of the content of information.

A hack is forever, but so are its fingerprints

A while ago I published a post, “A hack is forever,” explaining that a competent hack is extremely difficult to eradicate from a cyber system – there is no certainty that the system is really clean. However, there is a flip side to this in cyberspace that is not commonly understood: by way of dubious consolation, the hacker cannot be certain that he really got away with the crime.
Cyberspace is an information and communications space. In essence, we don’t really care what media our information is stored on, we care about the utility aspects, such as the efficiency of the storage, how quickly and conveniently information can be retrieved, and so on. Similarly, we don’t really care what communications channels we use, we care about our communications’ speed, reliability, security, etc.
Cyberspace has one very significant property: information cannot be destroyed. Just think about it. We can destroy only a copy of the information, i.e. we can destroy a physical carrier of the information, such as a floppy, a thumb drive, a letter, or a hard drive (often that’s not an easy task either). However, we cannot destroy the information itself. It can exist in an unknown number of copies, and we never really know how many copies of a particular piece of information exist. This is particularly true given the increasing complexity of our cyber systems — we never really know how many copies of that information were made during its processing, storage, and communication, not to mention a very possible intercept of the information by whoever, given our insecure Internet. Such an intercept can open an entirely separate and potentially huge area in cyberspace for numerous further copies of the information.
Back to the consolation point: cyber criminals of all sorts can never be sure that there is not a copy somewhere of whatever they have done. That copy can surface, someday, usually at the most inopportune time for the perpetrator.
One practical aspect of this is that Congress perhaps should consider increasing the statutes of limitations for cyber crimes, or crimes committed via cyberspace.

Don’t blame the victim; fix the cyber technology

The perennial excuse for our dismal performance in cyber security keeps showing up again and again. Some “experts” state that 95% of cyber security breaches occur due to human error, i.e. not following the recommended procedures. There’s a sleight of hand in these statements in that many breaches include human error, but do not occur due to that error. While the 95% number might be suspect, the real point is different: even following all the “recommended security procedures” will not protect our systems from cyber attacks.
It’s true that attackers often use users’ mistakes. But the reason is simple and obvious – human errors do make it easier to penetrate a system. In effect they represent a shortcut for an attack, but by no means do they eliminate many other ways to do it. Why would an attacker take a more complicated route if he can use a shortcut?
Of course, users’ awareness of security is not common or comprehensive. This was vividly demonstrated by one very important Government agency not that long ago: its board, after a thorough (and expensive) “expert” study, mandated that employees use a six-letter password instead of the old and “insecure” four-letter one.
This is a pretty pathetic solution, but the much bigger question is: do the users really need to follow or even know complicated procedures? The answer is: no, not at all.
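Some quick arithmetic shows why the mandate above is so pathetic. Assuming a 26-letter lowercase alphabet and an offline guessing rate of one billion hashes per second (both assumptions for illustration; real policies and attack rates vary), either password length falls in a fraction of a second:

```python
# Search-space arithmetic for the password policy described above.
ALPHABET = 26  # assumed lowercase-letters-only alphabet

four_letter = ALPHABET ** 4   # 456,976 possible passwords
six_letter = ALPHABET ** 6    # 308,915,776 possible passwords

# Assumed offline guessing rate: one billion attempts per second.
RATE = 1_000_000_000
print(f"4 letters: {four_letter / RATE:.6f} seconds to exhaust")
print(f"6 letters: {six_letter / RATE:.6f} seconds to exhaust")
```

Going from four letters to six multiplies the attacker's work by only 676, which against modern hardware changes nothing of substance.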
Indeed, cyberspace presents us with a wonderful opportunity to build very user-friendly effective security systems. It’s quite possible to build cyber security systems that would be extremely strong, even mathematically unhackable, that would require the user only to select the party he is going to communicate with, and then to indicate “secure.” No other security-related actions would be needed. This is very different from our current security technology based on concepts of physical space, where the weakest link in the security chain is the human factor. But up until now we have failed to take advantage of this great property of cyberspace.
If, as is claimed, our cyber security misery is a “people” problem, this is true only in a very narrow sense. It’s not the users who are the problem; the problem belongs to the people who design and build our worthless cyber security systems.

The Attack on Private Encryption

The current anti-encryption political push by a choir of government bureaucrats is picking up steam and has lately been joined by the head of the British MI5. The usual scarecrow of terrorism is invoked and used bluntly in public statements that border on unabashed propaganda. I did not want to write about it, but what is going on is just too much to take. The real goal of the whole campaign is suspect, so it’s worth taking a closer look at the issues involved.
Point one – ideological: We view ourselves as a democracy. With that in mind, we need to understand that encryption has existed for at least four thousand years. During that time most rulers were ruthless tyrants, and for all of them the #1 priority was to protect their rule. But even they did not crack down on private encryption – because it’s not practical (see Point three below), and they could not enforce a ban anyway. We, on the other hand, are facing a bunch of bureaucrats demanding a practical end to meaningful private encryption. How can a democracy impose more restrictions on its citizens than a country suffering under the rule of a tyrant?
Point two – technical: During all these thousands of years encryption algorithms have been consistently and quickly cracked by experts, usually employed by the government. Only a very few encryption algorithms withstood scrutiny for a few years, and those strong algorithms were developed by government experts and have always been well outside the reach of the general public at the time. Contrary to popular belief, all commercially available algorithms have been cracked very quickly after their introduction. Governments have traditionally been very shy about disclosing this. The situation is no different now. If a target used commercially available encryption algorithms its communications have been quickly cracked. So, what is the technical difference in the current situation? The simple answer is the sheer volume of information passing through the Internet. Individual communications can be cracked, but not the entire Internet traffic. That’s what the government bureaucracy is after: the ability to read ALL the traffic, i.e. all of our communications.
Point three – practical: the purpose of encryption is to assure privacy of communications, and there are many ways to do that other than encryption. One vivid example: during the years we were hunting Bin Laden, he did not use the Internet at all; he used messengers. He could just as well have used the regular mail. Furthermore, it’s well known that the 9/11 terrorists communicated over regular phones, but in Aesopian language: for example, they referred to a terrorist act as a “wedding.” So are our bureaucrats next going to demand the right to read all our mail, or make a terror suspect of anyone who mentions a wedding over the phone?
Conclusion: The simple truth is that the Government can penetrate any commercial encryption available to terrorists. That is if they actually go after terrorists. However, they are now demanding the right to go after everyone, mostly law abiding citizens. If that demand is denied there’s still nothing to prevent them from going specifically after terror suspects.
The moral here is pretty straightforward: if we call ourselves an uncorrupt democracy, we should be very careful about giving our bureaucrats more power than the tyrants of history could get. Furthermore, the bigger danger here is that losing civil rights is a very slippery slope.

The secret reason behind the Chinese hacking

For quite some time I’ve been puzzled by the alleged Chinese hacking of our databases. I could understand if they hacked our advanced research and development – that would save them time, effort, and money. But why the databases? Then it dawned on me: it’s a savvy business strategy.
We routinely encounter problems with our databases. One organization can’t find our file, another somehow has the wrong information about us, and all too often they simply can’t get their act together – classic cases of the left hand not knowing what the right one is doing. The pre-9/11 non-sharing of intelligence is a good illustration. In other words, we have a somewhat messy general situation with our databases; we’re used to taking this in stride; and we just sigh when we have to deal with some organization that accuses us of something we aren’t guilty of.
The Chinese understood the problem, but they just never got used to it. For many centuries they had a much bigger population than other countries, but somehow they always managed to know exactly who is who, who is related to whom, and what he/she is doing.
So naturally they wanted to have the same level of knowledge about the rest of the world. To their dismay, in the US they found disorganized databases and mismatching records. So they had to process all that information to make sense of it for themselves. And suddenly they saw a perfect business opportunity: they would develop a gigantic and very efficient database of the US, and then sell this data back to us piecemeal, retail. This would give them full and exact knowledge of the US, and the US would pay for the project, with a significant profit for the Chinese. For us this would be a very valuable service, a kind of involuntary outsourcing where we (both the Government and the private sector) can get relevant and reliable data at a modest price. Makes perfect business sense.
This approach has a special bonus for the US Government: when buying data abroad they won’t have to deal with privacy restrictions imposed by the US Constitution and constantly debated by Congress. The logic is impeccable: we bought it abroad, and if the Chinese know it, we are entitled to know what they know about us.

The Android phone vulnerability has been “fixed” – really? How about Android Pay and Google Wallet?

The recently discovered vulnerability in the Android operating system that affected 1 billion smartphone users (corrected to a mere 950 million, according to the phone manufacturers) followed a typical path:
a) The next gaping security hole is discovered by researchers, who alert the manufacturers;
b) The manufacturers make a patch for future buyers;
c) The manufacturers and service providers do nothing to help or even alert the affected users;
d) The researchers lose patience and publicly disclose their discovery of the flaw;
e) The manufacturers report that they “fixed the glitch within 48 hours,” and keep quiet about the customers affected.
The frustrating part of this all too familiar pattern is that it ignores the victims – the customers who already bought their phones. These customers were assured by the manufacturers’ marketing and sales people at the time of purchase that the product (a phone in this case) is very secure, and is equipped with a top-notch security system – so their privacy is assured. Software patches like the one in question are very easy to incorporate into new phones. However, it would cost money to fix the defective products already out there, and this seems to deter the companies from making the fix.
But the most interesting aspect of the situation is not what the manufacturers say, but rather what they don’t say. It should be understood that the vulnerability discovered presents not one problem but two. One is that the phones without the patch can be hacked at some point in the future. The other is that the phones already hacked are under the hackers’ control. So the most important question is: can that control be reliably taken away from the hacker and returned to the customer? The manufacturers readily acknowledge the simple first problem, but quietly ignore the existence of the second, much bigger one.
In practical terms, even if the fix is installed on an affected phone, the real question is: does it neutralize the effect of the hack? In other words, if my phone was hacked, the perpetrators have established control over it. Does the fix eliminate that control? A pretty safe bet here is that it does not. The fix just prevents another hack using the same method. But in that case, what’s my phone worth now, when I can no longer assume my privacy or the security of my financial transactions? It looks like the manufacturers may not be complying with implied warranty laws. At the very least this is a priority research problem for our increasingly numerous legal experts.
Every aspect of these issues is fast approaching a real-world test – especially urgently given the proliferation of smartphone-based payment systems like Apple Pay and Google Wallet.

Self-driving cars–the hacking factor

The public’s perception of our cyber vulnerability hasn’t really been tested so far. We’ve been lucky to get away with our critical infrastructure practically undefended. However, sooner or later we’ll run out of luck, and we’ll get hurt – badly.
The major reason for our apathy in the face of cyber attacks is probably the tendency to react mostly to something that is tangible. But as yet we haven’t had much in the way of tangible losses to cyber attacks: banks have been able to reimburse money lost in their hacked customers’ accounts, stolen identities have generally been somewhat restored, we aren’t aware of anyone being killed due to the Office of Personnel Management (OPM) hack – in other words, a small part of the population has experienced inconveniences, but not tangible losses.
The latest example of a cavalier attitude to this danger comes (surprise, surprise!) from the auto industry. Cars have been hijacked in experiments for quite some time, but these were limited to the Government and some private labs. Car manufacturers have surely been aware of the potential problem. Indeed, Fiat Chrysler was notified of the successful hijack of one of their cars quite some time ago, but only after the media exposed it did they recall 1.4 million cars. And of course the vulnerability isn’t limited to Fiat Chrysler – it’s far wider.
This situation is fast heading for a test. Significant efforts are being made to develop self-driving vehicles, and the results are already very promising. However, the parties involved seem oblivious to the fact that a self-driving car is vulnerable to cyber attacks well beyond the kind that affected the Fiat Chrysler vehicle. The latter can be somewhat mitigated by separating the car’s communication equipment from its primary performance systems. But clearly this isn’t going to be possible with self-driving cars – outside interaction is going to be a major part of their safety features. Those very safety features can be manipulated by a malicious party, with catastrophic results: lost lives, along with many other unpleasant consequences. Hacking self-driving vehicles could well become a favorite weapon of focused assassins (not to mention unfocused crazies who would prefer it to random shooting in movie theaters). That would certainly be a very real test of our tolerance for remaining undefended in cyberspace.
The obvious conclusion that begs to be considered here is that the entities working on the self-driving car had better solve the cyber security problem now, instead of finding out that their technological marvels cannot be certified for safety reasons – or, worse, pushing the certification through and then facing the inevitable consequences.