Author Archives: Victor Sheymov

About Victor Sheymov

Victor Sheymov is a cyber security expert, author, scientist, inventor, and holder of multiple patents for methods and systems in cyber security. He is a speaker on intelligence, offensive warfare, cyberspace, cyber security, and critical infrastructure protection. Currently specializing in cyber security, Mr. Sheymov holds over 30 patents in the field of computer security, granted in the United States, the European Union, Australia, Japan, India, Korea, and China. In his book Cyberspace and Security he explains how cyberspace is radically different from physical space, analyzes the differences between the two, and defines the laws and characteristics of cyberspace. In the revised edition of Cyberspace and Security he covers the coherent defense of large sovereign cyber systems. Mr. Sheymov is the inventor of the Variable Cyber Coordinates (VCC) method of communications, which is advantageous for establishing a high level of cyber security. By hopping IP addresses and other communications parameters, it provides dynamic protection of computers and computer networks through cyber agility. The VCC method enables building cyber security systems that render hacking attacks computationally infeasible. Furthermore, it does not require intrusion into customers' information and systems; it provides security without violating customers' privacy or civil liberties.

Victor Sheymov has 30 years of experience in advanced science. He performed scientific research involving guidance systems for the Soviet "Star Wars" missile defense program. Mr. Sheymov worked in the Soviet counterpart to the U.S. National Security Agency, serving in a variety of technical and operational positions. Prior to defecting to the United States for ideological reasons in 1980, Mr. Sheymov had been one of the youngest KGB Majors in its equivalent of the NSA, responsible for coordinating all aspects of KGB cipher communications security with its outposts abroad. After his arrival in the United States he worked for the NSA for a number of years. His work for both the Soviet counterpart to the NSA and the NSA itself gives him a unique comparative perspective. Mr. Sheymov has testified before the United States Congress as an expert witness. He has been a keynote speaker at major government and private industry events, including at the NSA, a National Defense Industry convention, and a National Science Foundation symposium, and he has been a guest lecturer at various universities.

Victor Sheymov is the author of "Tower of Secrets" (HarperCollins), a non-fiction book describing the Soviet Communist political system, its repressive apparatus, and the technical side of intelligence. His new book, Tiebreaker, Tower of Secrets II (Cyber Books Publishing), is the fascinating memoir of how Victor Sheymov figured out the reason behind the campaign to destroy him and his family, and the cause of the CIA's catastrophic intelligence failures revealed with the arrest of Aldrich Ames. Sheymov then found himself at the center of the next intelligence crisis with the arrest of his longtime FBI liaison, Robert Hanssen. And now Sheymov, world-class cyber security expert and inventor, finds himself at the heart of the controversy over the solution to the rapidly growing global threat to cyber security. He has also authored articles in The Washington Post, Barron's, World Monitor, National Review, and other national publications. He has appeared on many national news programs, including Larry King Live, 48 Hours, Dateline, MacNeil/Lehrer, Charlie Rose, and the McLaughlin Report. Victor Sheymov is a recipient of several prestigious U.S. awards in intelligence and security. He holds an Executive MBA from Emory University and a Master's degree from Moscow State Technical University, a Russian equivalent of MIT.

Self-driving cars–the hacking factor

The public's perception of our cyber vulnerability hasn't really been tested so far. We've been lucky to get away with our critical infrastructure practically undefended. However, sooner or later we'll run out of luck, and we'll get hurt, badly.
The major reason for our apathy in the face of cyber attacks is probably the tendency to react mostly to something that is tangible. But as yet we haven’t had much in the way of tangible losses to cyber attacks: banks have been able to reimburse money lost in their hacked customers’ accounts, stolen identities have generally been somewhat restored, we aren’t aware of anyone being killed due to the Office of Personnel Management (OPM) hack – in other words, a small part of the population has experienced inconveniences, but not tangible losses.
The latest example of a cavalier attitude to this danger comes (surprise, surprise!) from the auto industry. Cars have been hijacked in experiments for quite some time, but these were limited to the Government and some private labs. Car manufacturers have surely been aware of the potential problem. Indeed, Fiat Chrysler was notified of the successful hijack of one of their cars quite some time ago, but only after the media exposed it did they recall 1.4 million cars. And of course the vulnerability isn't limited to Fiat Chrysler; it's far wider.
This situation is fast heading for a test. Significant efforts are being made to develop self-driving vehicles, and the results are already very promising. However, the parties involved seem oblivious to the fact that a self-driving car is vulnerable to much more than the kind of cyber attack that affected the Fiat Chrysler vehicle. That kind can be somewhat mitigated by separating the car's communication equipment from its primary performance systems. But clearly this isn't going to be possible with self-driving cars; outside interaction is going to be a major part of their safety features. Those very safety features can themselves be manipulated by a malicious party, with catastrophic results. Lives would be lost, along with many other unpleasant consequences. Hacking self-driving vehicles would likely become a favorite weapon of focused assassins (not to mention unfocused crazies who would prefer it to random shooting in movie theaters). That would certainly be a very real test of our tolerance for remaining undefended in cyberspace.
The obvious conclusion is that the entities working on self-driving cars had better solve the cyber security problem now, instead of finding out later that their technological marvels cannot be certified for safety reasons. Or worse, pushing certification through and then facing the inevitable consequences.

That Office of Personnel Management (OPM) hack: the depth of the damage

The somewhat belated (just seven months!) and timid admission by the Federal Government that the security clearance files of Government employees and contractors had been hacked was hardly a shocking surprise. Media discussion largely focused on the breadth of the security breach: the number of people affected started at 4 million and quickly grew to 21 million. But the depth of the security breach was not really addressed. It is, however, a major aspect of the loss.
Unbeknownst to most of the general public, and ironically even to many of those Government employees who are actually responsible for this security breach, all security files are not created equal. At one end of the spectrum are security clearance files of personnel whose proximity to Government secrets is very limited and often only symbolic, such as facilities maintenance workers. At the other end of the spectrum are people with security clearances well beyond the proverbial Top Secret level, those who are entrusted with the deepest and most sensitive Government secrets, such as nuclear arms and Government communications security experts.
Candidates who apply for a Government job are routinely asked to sign a privacy release allowing the Government to conduct an investigation into the applicant's background that would otherwise be in violation of their privacy rights. Of course, applicants usually sign the form without looking at it too closely. But even the lowest level of security clearance is far more invasive than the "thorough" investigation a bank runs before granting you a loan. At the low end there's a cursory search to make sure that the applicant has no significant criminal offenses and is not involved in known criminal organizations. For a high-end clearance it's a totally different story. A thorough investigation may cover numerous connections, including relatives and present and past personal friends, hobbies, club affiliations, financial transactions over a significant period of time, and so on. Present and past spouses, "close friends," and partners are definitely of interest. Investigations may include interviews with neighbors and government informants, and perhaps even one form of surveillance or another.
Many who are subjected to such an investigation don't realize how much of their personal privacy they surrender to the Government, but surrender they do, and some of them find that out only if things turn sour in their relations with the Government. However, they all at least implicitly rely on the Government to guarantee the security of their private information.
The OPM hack shattered that expectation. If the hack was done by the Chinese, as alleged, it is almost certain that the Russians had done it before. Moreover, whichever intelligence service has the files, it may well trade some of them in exchange for other intelligence. Needless to say, those in supersensitive jobs include clandestine intelligence operatives of the DEA, the CIA, and Special Forces, and this situation puts their lives in real and immediate danger.
As a practical matter, those affected should demand to know exactly what information was stolen. Classified as it may be, it is not classified anymore. After all, if the Chinese know something about me, I am certainly entitled to know what they know too.
One more unanswered but very important question: do those files contain personal biometrics beyond fingerprints (the leak of which is bad enough), such as DNA and retinal scans? I haven't seen anyone asking that.

Cyber defense by semantics: hacks are now called “computer glitches”

The New York Stock Exchange is down and United Airlines is not flying for half a day. Naturally, everyone’s wondering, What’s going on? The public wants an explanation from the FBI and the affected institutions, and fast.
The response is quite astounding.
Voila! Cyber security problem solved: from now on all hacks are to be called "computer glitches." The United and NYSE computer network outages are only the latest glaring examples of a classic bureaucratic solution to the problem: defense by semantics.
This "expert" explanation means two things: a) it's a fairy tale designed for little children and big fools; and b) the "cyber security experts" of the affected entities and the FBI probably have no clue as to what happened. That's a very good indication of an expertly executed cyber attack: the effect is obvious, but the attack itself has not even been detected, and forget figuring out whether "the Chinese" or "the Russians" did it. It is simply unfathomable that critical programs like these would suddenly develop "glitches." Such programs are written and implemented by highly qualified programmers and software engineers and are tested numerous times under all imaginable circumstances. Furthermore, they've been running for quite some time with no "glitches" detected, and all those systems have built-in redundancies precisely in case of a "glitch."
The "glitch" explanation is very convenient for those who have failed to provide cyber security for this country.
All these events are a clear indication of our massive cyber security failure. This failure was inevitable. On the one hand, over the last quarter century widely known cyber attack technology has advanced dramatically and become increasingly widespread. What only a few government agencies in the world could do a while ago can now be done by a lot of people, often by mere script kiddies, and certainly by our sworn enemies, who aren't restricted in what they can attack (the more damage the better). On the other hand, our cyber security has not advanced at all over that same quarter century. This is the inconvenient truth, despite all the marketing and politically soothing statements from the entrenched cybersecurity establishment.
It is really sad that the people responsible, and paid, for providing our cyber security are getting away with this cyber defense by semantics. No doubt the next step is to make the term "hacking" politically incorrect and make everyone use "computer glitch" instead. When that fairy tale runs out, they'll think of another term. That's assuming our computers are still functioning.

The Government wants our encryption keys. Are home keys next?

The just-released report by 13 well-known cryptographers opposing the US and British governments' sweeping demand for encryption keys directly addresses the growing threat posed by government's insatiable thirst for power. Any government objection to this report is bound to be disingenuous.
However, there's one more angle that begs for further exposure. The overall issue is not a scientific or technical question; it's an ideological one. The frequently and loudly heard claim that we inevitably have to give up our privacy so the government can protect us is flatly untrue and hypocritical. It is indeed technologically feasible today to build a system where every one of us would wear an irremovable collar equipped with cameras, microphones, and GPS that would communicate our location and immediate surroundings at every instant, "totally securely," to some highly trusted government agency. The government may argue that a) it would only access this information upon a court order; and b) it would solve a lot of crimes and save a lot of lives. True, such a system would make the police's job very easy, would solve a lot of crimes, and would save a lot of lives. But the real question is: do we want to live in that kind of society? In the American spirit the politest answer would be, "Hell, no!" And, as always with this kind of hypothetical system, criminals would quickly find a way to neutralize it, leaving us in a situation where only law-abiding citizens would be subject to this massive electronic prison.
Even as we see deeper and deeper assaults on our civil rights and liberty in the manner described above, the government is more than a little shy about discussing other intelligence-gathering techniques, ones that require more skill than a slightly trained operator pushing a few computer keys. These methods are well known among professionals; they have existed for a long time and can be applied to any target. The drawback, of course, is that they are less convenient for the operators, require greater skill, and do not involve global bulk collection of information on everyone.
Well, maybe this is just what we, the people, need and want.

A classic hot potato—political hacking

Media attention to cyber attacks can be divided into two categories: the endless stream of examples of institutions hacked, and cautious descriptions of potential (and very real) horrors of our vital systems being attacked.
But there’s one area of cyber attacks that conspicuously has received little or no media attention: political hacking.
Practically all our electronic systems are vulnerable to cyber attacks to varying degrees. Political systems rank close to the high end of vulnerability, and indeed most of them are virtually undefended against even a low-skilled hacker.
It’s pretty obvious that these systems are extremely valuable targets for some political activists, and certainly for the aggressive ones. It’s not too far-fetched to assume that some of these activists, whatever their focus, either possess hacking expertise themselves or have access to guns for hire, whether for money or some other consideration.
This hacking potential may impact our lives more than we realize. It's not technically difficult to hack into a voting system and "adjust" the outcome to the hacker's liking. Such hacking targets can be at any level, from a poll on a local ordinance allowing dogs on the beach to the election of a head of state. Importantly, such interference is not easy to detect, and even successful detection takes a lot of time. We can only imagine the political mess if, a couple of months after an election, it's determined that a group of teenagers (or some mysterious "Russian or Chinese hackers") materially changed the outcome, and the wrong people were sworn in to their new hard-earned and increasingly expensive jobs.
On a more subtle note, such "adjustments" can also be made to the results of public opinion polls, manipulating public opinion in a very effective way.
While it's understandable that nobody wants to discuss this classic hot potato, we have to realize that we ignore this threat at our own peril.

Product Liability—the Unique Position of the Cybersecurity Industry

There are three points that radically distinguish the US cybersecurity industry from any other.
One – every cybersecurity company seems to be the self-declared “world leader in cybersecurity.” This can be easily verified by visiting their websites. I haven’t been able to detect any #2. Surprisingly, comedians and cartoonists don’t explore this hilarious situation.
Two – the industry as a whole is de facto exempt from any product liability, even any implied warranty liability. This is a truly unique break that the cybersecurity industry has been getting away with for over thirty years. In the US every manufacturer is obligated at the very least to make sure that its products are reasonably fit for their intended uses. For example, a car manufacturer must make sure that its cars are at least drivable and can deliver a user from point A to point B. A hammer manufacturer has to make sure that its hammer handles do not break, at least not before you bring one home. The Uniform Commercial Code (UCC) is very explicit about this, and there have been millions of court cases where this principle has been upheld.
But not for the cybersecurity industry. Every firewall gets hacked even before it's delivered to the first customer. On a daily basis we hear of "big" cases in which one organization or another has been hacked, with huge losses for millions of people. And don't forget that only a small fraction of hacks is detected; we never hear about the undetected "big" cases and the thousands of smaller ones. But nobody is held responsible, despite many billions of dollars in losses incurred by individuals, companies, and governments. The Government does promise to prosecute hackers – if they can catch them.
The interesting twist here is that every company assures its customers that their personal information and the money in their accounts are secure. Ironically, they assure their customers before they are hacked, while they are being hacked, and after they've been hacked. Somehow we listen to them and nod in agreement.
Three – the cybersecurity industry gets countless billions of our dollars for the research and development of cybersecurity products. In fact, we spend more on this in a year than the entire cost of the Apollo program that put a few good men on the Moon. Amazingly, these funds seem to be going into a black hole. Nothing comes back. No product, no results, no responsibility for the wasted money – the taxpayers' money.
The most remarkable thing about this is that we, the people, have put up with this situation for over thirty years.
On a positive note: this industry should be a bonanza for investors — assured high returns with no risk. Stock brokers should take note.

Latest cyber lunacy: we are going to sanction the rest of the world!

In the latest example of the bureaucracy's detachment from reality, the White House has just announced an executive order to sanction cyber attackers. Leaving aside the checkered record of other sanctions' effectiveness, cyber sanctions are very difficult even to fathom.
For starters, who are we going to sanction? The whole history of cyber attacks clearly shows that determining the real source of cyber attacks with any degree of certainty is extremely difficult. Out of the thousands of attackers around the world we are able to identify only a handful in a year. Furthermore, those identified are not the most dangerous ones. The best we can do is to say that an attack came from a certain country.
So, who are we going to sanction — and how?
An individual attacker? Most of the identified hackers are not the most dangerous ones, often just script kiddies. But even then, how are we going to sanction them? Prohibit a teenager from Ukraine from entering the United States? He doesn't have the money to come here anyway. Bar him from McDonald's? He'll find another place to get a hamburger. Prohibit a cyber dude from Nigeria from exporting oil? He's unlikely to have any. Deny the sale of an F-16 to a company in Croatia? They probably don't have a hangar to keep it in anyway.
A country? Unlikely. It's a well-known fact that most attacks, and definitely the most damaging ones, are "bounced" many times through "innocent" computers in other countries before being sent to the target. Given that, we'll rapidly end up sanctioning the rest of the world. While this would become perpetual fodder for the press, it would be unlikely to have any real impact on cyber attacks. If anything, it might even increase them, once the sanctioned countries start sanctioning us in retaliation. And chances are their sanctions would be more damaging than ours.
Besides, cyber attacks are already a felony. That should be sanction enough, though it doesn't seem to work.
The big question is, do we really understand what we’re doing in cybersecurity? It doesn’t seem so.
A better solution may be to sanction our own bureaucracy.

Latest solution: Share your cyber misery

There was a great deal of anticipation about President Obama’s participation in the recent conference on cybersecurity at Stanford University, in the heart of Silicon Valley, where he met with high-tech company executives last week.
It wasn't a shocking surprise, however, that the Government's proposal, to be enshrined in a Presidential Order, in reality boils down to little more than a call for the Government and private companies to share tales of misery. Once the political rhetoric is stripped away, this approach doesn't offer any improvement in cyber security. The reality is that hacks are usually discovered months or even years after the fact, when all the damage has already been done. And that's assuming the hack is detected in the first place.
It's not the best-kept secret in town that most hacks, and certainly the most dangerous ones, are either never detected or detected only long after the fact. A great example is the recently announced international multi-bank hack that netted the unknown attackers somewhere between $300 million and $1 billion. See http://www.nytimes.com/2015/02/15/world/bank-hackers-steal-millions-via-malware.html?emc=edit_th_20150215&nl=todaysheadlines&nlid=58721173&_r=0
The vagueness in the loss assessment speaks very loudly to how little the cyber security experts involved know about the hack even now. And, of course, they haven’t a clue as to who did it. Not to mention that it took them two years to discover the loss.
On the more optimistic side, there is rising public awareness of the problem, which sooner or later will lead to a public demand for the development of true cyber security technology. Unfortunately, this is unlikely to happen before the pain from cyber attacks becomes really intolerable, probably as a result of a massive loss of human life.

Encryption: panacea or just an expensive “do something”?

Once in a while we see a common cyber call to arms: “Let’s use data encryption and, voila, our problems will be over.” A typical example of this is the AP article http://cnsnews.com/news/article/no-encryption-standard-raises-health-care-privacy-questions.
This is a very common misconception. Encryption per se does not protect against hacking. Surely, encrypted files look impressive, with their very long strings of seemingly random characters. It must be mind-boggling for a casual observer to imagine that anyone could actually decipher that without the secret key.
However, the reality is vastly different.
The strength of encryption rests on two main ingredients: the encryption algorithm and the secret key. Most encryption algorithms, and certainly all commercially available ones, are well known. They have been researched, and solutions—the ability to decrypt them without the secret key—have been found for most of them. The only undefeated algorithm so far remains the so-called "one-time pad," where the key is used only once. But even that algorithm's strength rests on the quality and security of the key — issues that are far from trivial.
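To make the one-time pad idea concrete, here is a minimal sketch in Python, offered purely as an illustration (the sample message and function names are invented for this example): the key must be truly random, at least as long as the message, and never reused.

    import secrets

    def otp_encrypt(plaintext):
        # One-time pad: a truly random key exactly as long as the message,
        # used once and then discarded.
        key = secrets.token_bytes(len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
        return ciphertext, key

    def otp_decrypt(ciphertext, key):
        # Decryption is the same XOR; without the key the ciphertext
        # reveals nothing about the plaintext.
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    ciphertext, key = otp_encrypt(b"patient record #1234")
    assert otp_decrypt(ciphertext, key) == b"patient record #1234"
    # Reusing the same key for a second message breaks the scheme: the XOR
    # of two ciphertexts then equals the XOR of the two plaintexts.

Everything above stands or falls on how that key is generated, stored, and delivered, which is exactly the problem discussed next.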
However, the main practical problem with encryption is the distribution system for the key. As in the example of the health system cited above, we are talking about a massive database with many millions of records. Sure, it's not too difficult to encrypt all that data. But then what? The database has many legitimate users, sometimes thousands, and each one of them must have the secret key. It's not difficult to obtain the key, one way or another, from at least one of them. Such a single breach would defeat the whole encryption scheme. I've often heard someone proudly declaring at a party, "I encrypt all the data files on my computer." Sometimes I will casually ask, "But where do you keep your key?" The answer invariably is, "In the computer." Usually that person doesn't understand that the key in his computer is also available to anyone who bothers to hack into his computer.
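As an illustration of that last point, here is a minimal hypothetical sketch (the file names and the toy XOR cipher are invented for this example): the data is "encrypted," yet the key sits on the same machine, so an intruder who can read the disk needs no cryptanalysis at all.

    from pathlib import Path
    import secrets

    DATA_FILE = Path("patients.db.enc")
    KEY_FILE = Path("secret.key")   # stored right next to the ciphertext

    def xor_cipher(data, key):
        # Toy cipher for illustration only; the weakness shown here is the
        # key storage, not the cipher itself.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    # "Protecting" the database
    key = secrets.token_bytes(32)
    KEY_FILE.write_bytes(key)
    DATA_FILE.write_bytes(xor_cipher(b"name=Jane Doe; diagnosis=...", key))

    # An attacker who has hacked the machine simply reads both files:
    stolen = xor_cipher(DATA_FILE.read_bytes(), KEY_FILE.read_bytes())
    print(stolen)   # the "protected" record, recovered with no cryptanalysis

A real deployment would keep the key in a separate key store, but every one of the thousands of legitimate users still needs access to it, which is exactly the distribution problem described above.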
All in all, data encryption is a good concept, but as it is practically deployed in databases with many users it can only protect against middle schoolers. It would offer marginal protection against smart high schoolers, and it would certainly be fruitless against professional cyber attackers.
Encryption per se would be just another expensive exercise in wishful thinking. It should be clearly understood: ENCRYPTION PER SE DOES NOT PROTECT AGAINST HACKING.

Power grid: when cyber lines cross

The CNN story citing testimony by Admiral Michael Rogers, head of U.S. Cyber Command, to the House Select Intelligence Committee on November 20 sounded like shocking news: he stated that China can take down our power grid. http://www.cnn.com/2014/11/20/politics/nsa-china-power-grid/index.html

Shocking as it may be, if this is still "news," then, surprise, surprise: it's been known to everyone who was anyone in cyber security for over 25 years. First it was just the Russians, then the Chinese, and then some vague criminals acting on behalf of "nation-states" were gradually added to the list.
Never mind the Russians and the Chinese – they both also have enough nuclear weapons to kill every squirrel in America. What is really troubling is the cyber security trend. Our cyber defensive capabilities have hardly improved in over a quarter-century, while hackers' attacking capabilities are improving constantly and dramatically. This is not a good equation — sooner or later these lines will cross. That means a large number of unknown hackers will be able to take down our power grid and also decimate our power-intensive facilities, such as oil refineries, gas distribution stations, and chemical factories.
Now, think terrorists. They would be delighted to do exactly that, whether you kill them afterwards or not. This isn't news, but it's an increasingly troubling reality. We have very little time to cure our stone-age cyber defensive technology. But that requires changing the current equation and making cyber defense inherently more powerful than the offense. And that won't happen until the doomed legacy password and firewall paradigms are abandoned and replaced by fundamentally different technologies.