
The Android phone vulnerability has been “fixed” – really? How about Android Pay and Google Wallet?

The recently discovered vulnerability in the Android operating system that affected 1 billion smartphone users (corrected to a mere 950 million, according to the phone manufacturers) followed a typical path:
a) The next gaping security hole is discovered by researchers, who alert the manufacturers;
b) The manufacturers make a patch for future buyers;
c) The manufacturers and service providers do nothing to help or even alert the affected users;
d) The researchers lose patience and publicly disclose their discovery of the flaw;
e) The manufacturers report that they “fixed the glitch within 48 hours,” and keep quiet about the customers affected.
The frustrating part of this all-too-familiar pattern is that it ignores the victims: the customers who already bought their phones. These customers were assured by the manufacturers’ marketing and sales people at the time of purchase that the product (a phone in this case) was very secure and equipped with a top-notch security system, so their privacy was assured. Software patches like the one in question are very easy to incorporate into new phones. However, it would cost money to fix the defective products already out there, and this seems to deter the companies from making the fix.
But the most interesting aspect of the situation is not what the manufacturers say, but rather what they don’t say. It should be understood that the vulnerability discovered presents not one problem but two. One is that phones without the patch can be hacked at some point in the future. The other is that phones already hacked are under the hackers’ control. So the most important question is: can that control be reliably taken away from the hackers and returned to the customer? The manufacturers predictably acknowledge the simpler first problem, but quietly ignore the existence of the second, much bigger one.
In practical terms, even if the fix is installed on an affected phone, the real question is: does it neutralize the effect of the hack? In other words, if my phone was hacked, the perpetrators have established control over it. Does the fix eliminate that control? A pretty safe bet here is that it does not; the fix just prevents another hack using the same method. But in that case, what is my phone worth now that I can no longer assume my privacy or the security of my financial transactions? It looks like the manufacturers may not be complying with implied warranty laws. At the very least, this is a priority research problem for our increasingly numerous legal experts.
Every aspect of these issues is fast approaching a real-world test, all the more urgently given the proliferation of smartphone-based payment systems like Apple Pay and Google Wallet.

Apple-Google-FBI Phone Encryption Spat or Public Image Campaign?

Apple and Google announced encryption programs for their smartphones that supposedly increase their customers’ privacy. As a result we’ve just seen a very public privacy vs. security debate with Apple, Google, and the FBI making statements worthy of desperate pre-election politicians. An interesting aspect is that the debate rages around the technical issue of encryption, even though practically no technical information has been released. So no technical evaluation of the claims is feasible, but a closer look at the underlying issues seems in order.

First of all, the very basis of encryption as we know it is that every party privy to encrypted data has to have the key. Simply put, this means that there are always at least two keys involved. Even if you encrypt your files within your own computer with a password that you remember, there has to be a reciprocal key somewhere in your computer for validation. Otherwise, there is no encryption.
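
To make this concrete, here is a minimal sketch of how password-based encryption commonly works, written in Python against the third-party cryptography package; the key-derivation parameters are purely illustrative, not what Apple, Google, or any phone vendor is known to use. The point it demonstrates: even when you only remember a password, some key material (here a salt, and in many real systems a password verifier as well) has to be stored on the device so the encryption key can be recreated.

```python
import os
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Stretch a memorized password into a 32-byte encryption key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

# The salt is random key material that must live on the device,
# next to the encrypted data; the password alone is not enough.
salt = os.urandom(16)
key = derive_key(b"correct horse battery staple", salt)

ciphertext = Fernet(key).encrypt(b"contacts, photos, notes ...")

# Later, the stored salt plus the remembered password recreate the key.
recovered = Fernet(derive_key(b"correct horse battery staple", salt)).decrypt(ciphertext)
assert recovered == b"contacts, photos, notes ..."
```

Whoever holds that on-device material plus the password (or a copy of the password kept elsewhere) can decrypt the data; the real dispute is over who that other party is.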

Apple and Google announced that they would no longer have a “master key,” or possibly a database of the passwords of all users, on their servers. (A very interesting question pops up: how are they going to update the software in your phone or computer? That wasn’t mentioned.) That sounds like they’re transferring your privacy destiny into your own hands. It’s just not so. Suppose they really aren’t going to have your password. What they’re really saying is that somebody else will have your password, presumably your mobile phone carrier. So the whole hoopla is really about them saying that they don’t want to deal with Government demands for massive amounts of our private data. They’re just saying that the Government has to deal with someone else.

The best-case scenario here would be for Apple and Google encryption to be arranged so that your personal data, such as your rolodex, your pictures, and your notes, would be stored on your phone encrypted with your personal password, and your carrier would not have a copy of that password.

Either way, the FBI’s complaint is a hard case to make. Their statement that encryption will hinder criminal investigations is clearly disingenuous. It’s not a matter of technical difficulty; it’s a matter of convenience and constitutionality. The only problem this would create for the FBI is that they couldn’t come to a company with a vague, sweeping order for vast amounts of private data on many of its customers. They’d have to hack every suspect’s phone individually. This is certainly not difficult, and if they don’t know how to do it they can consult the NSA. They’d also have to go to court to obtain a search warrant for every individual suspect. Inconvenient, but that’s the way the Constitution meant it to be.

Privacy Posturing in the Great Cyber Triangle

The recent New York Times article, “Internet Giants Erect Barriers to Spy Agencies,” reflects the current political rhetoric over privacy, but it also misrepresents the reality of the situation.

http://www.nytimes.com/2014/06/07/technology/internet-giants-erect-barriers-to-spy-agencies.html

The companies cited (Google, Facebook, Yahoo, and the like) are taking steps to make NSA interception of their data more difficult. But this is basically a political move. They are merely reducing their level of voluntary cooperation with the government. The simple truth is that with the cybersecurity technology currently available and deployed, these companies are not capable of protecting themselves, and ultimately their customers, from cyber attacks.

In the great US-Russia-China Cyber Triangle, each government has enjoyed the quasi-voluntary cooperation of its own large cyber companies. The other two governments were simply attacking those companies at will, and with full success. Of course, the companies’ cooperation was helpful to their host government, but it should be clearly understood that this was merely a matter of convenience and efficiency, and had little bearing on the actual result.

So the only change this new fad among US cyber companies brings is that it will take a little more effort by the US Government to get the same results. The other two sides of the great triangle aren’t affected (nor, for that matter, are several other governments).

This might suggest that the only way to protect people’s privacy is a legislative approach that would prohibit the Government from spying on its own citizens. But then we have to clearly understand that while we can prohibit the NSA from collecting Americans’ personal and private data, we cannot prevent Russia or China from doing the same. The situation is symmetrical: Russia and China, and any other country, cannot prohibit the US from collecting whatever it wants. It would be awkward indeed if the American Government were the only one that could not collect unrestricted information on Americans. Spying is the oldest profession, and it’s going to prosper for the foreseeable future.

There’s a simple conclusion to be drawn: until and unless we develop new and truly effective cybersecurity technologies, all the discussions about our privacy are just exercises in political rhetoric.