[My opinions are probably different than those of my employer. Have a grain of salt handy, please.]
Sorry, but when a Twitter client starts making fun of a politically charged security issue (WikiLeaks), the problem is a bit more trivial than most people are willing to admit.
At least, from the technology perspective. When you get right down to it, technology is not what caused the breach of classified diplomatic cables. People had to leak the stuff. Period.
That’s what I find so funny about the screenshot. So many companies are bound to find an opening here: they can now market their product as a way to deter these types of leaks, and they’ll market it as a full solution, meaning it will solve every problem under the sun.
MetroTwit obviously had their tongues planted firmly in cheek when they labelled their update, but they made fun of what so many companies will do in the future: label their product/feature/service as a security solution.
There is no practical solution to security when people are involved. Just lots of little fixes and maybe a little bit of planning.
Actually, a lot more planning might eliminate the need for a number of those little fixes.
Earlier this week Twitter disabled Basic Authentication for clients, and switched
over to their new OAuth implementation. It turns out though that OAuth is fairly
weak in a few areas, as it hasn’t really become a mature standard. While this
isn’t the end of the world, it does leave each implementer to their own devices to
cover the weak points.
This is just a quick overview of one of the WTFs that is Twitter OAuth, but Ars
Technica has a great article covering this in detail.
One key point that Twitter seemed to miss entirely is how they handle client verification,
i.e. proving that the client in question really is who it says it is. For
instance, I use Sobees quite a bit, and have
been playing around with MetroTwit lately
too. Twitter wants each instance of Sobees to prove that it is Sobees.
The client application does this by getting a consumer key and secret (effectively
a public/private key pair) and using them to sign requests to the authentication
mechanism.
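As a concrete sketch of what "signing with the application's key" looks like, here is a minimal OAuth 1.0a HMAC-SHA1 signer in Python. The `oauth_*` parameter names come from the OAuth 1.0a spec; everything else (function name, endpoint, key values) is made up for illustration and is not Twitter's actual implementation:

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote

def sign_request(method, url, params, consumer_key, consumer_secret,
                 token="", token_secret=""):
    """Build an OAuth 1.0a HMAC-SHA1 signature for a request.

    The consumer_key/consumer_secret pair identifies the *application*,
    not the user -- every installed copy of the client ships with the
    same secret.
    """
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": secrets.token_hex(16),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    if token:
        oauth_params["oauth_token"] = token
    all_params = {**params, **oauth_params}

    # Percent-encode, sort, and join the parameters into a base string.
    param_str = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(all_params.items())
    )
    base_string = "&".join(
        quote(part, safe="") for part in (method.upper(), url, param_str)
    )

    # The signing key is the consumer secret joined with the token secret.
    signing_key = (f"{quote(consumer_secret, safe='')}&"
                   f"{quote(token_secret, safe='')}")
    digest = hmac.new(signing_key.encode(), base_string.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical usage: the secret baked into the client does the signing.
sig = sign_request("POST", "https://api.twitter.com/1/statuses/update.json",
                   {"status": "hello"}, "my_app_key", "my_app_secret")
```

The important detail is that `consumer_secret` must be present, in usable form, on every machine the client runs on, which is exactly where the trouble starts.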
This seems odd. How does the application store the private key? Most implementations
will probably stick it in a config file, while others might encrypt it. Suffice
it to say, all applications need this private key. It is very easy to extract text
from binary structures, let alone config files, so what happens if I get another client’s key?
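To see how little protection shipping the key in a binary offers, here is a rough sketch of the Unix `strings` trick in Python; the "fake binary" and the secret's name are invented for the example:

```python
import re

def extract_strings(data, min_len=6):
    """Pull printable-ASCII runs out of raw bytes -- the same trick the
    Unix `strings` tool uses. Any secret shipped as plain text inside an
    executable or config file will show up in the output."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Hypothetical "compiled client": a secret surrounded by binary noise.
fake_binary = (b"\x00\x7f\x01" * 20
               + b"oauth_consumer_secret=s3cr3t"
               + b"\x02\x03" * 20)
print(extract_strings(fake_binary))  # → ['oauth_consumer_secret=s3cr3t']
```

Encrypting the key only moves the problem, since the client also has to ship whatever decrypts it.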
Since this private key is used for identification, I could very easily stick that
key into my application and pretend that I am that application. This wouldn’t
really lead to user PII being compromised, but it can easily cause harm. Twitter’s
goal here is to reduce spam: if they track too much spam coming from certain
private keys, they will revoke the key, preventing the application from signing
its users in.
Who sees the problem here? What happens if my competitor steals my key and
starts spamming people? My key gets revoked, and I need to replace it.
If it’s a client application, that means updating it, testing it, deploying it, and
hoping that the mass downtime across every instance doesn’t lose you too many
customers. It’s worse yet for those who have written iPhone apps, because that could
mean weeks of delays while Apple twiddles their thumbs.
I suspect that they won’t revoke any keys once they come to their senses. Or,
more likely, they will revoke a key for something like TweetDeck and hear the outcry
from its large user base. After those users can sign back in again, of course.