Bruce Schneier and other very respectable experts think we should be negotiating treaties with China and others about cyberattacks, even if those treaties are unenforceable. But they're not just unenforceable; they're unverifiable.
Go watch the excellent interview with Bruce Schneier on searchsecurity.com. It's less than seven minutes long. Schneier is a top cryptographer, but his interests and expertise run broader. On his blog he often takes on the real-world security measures we all deal with, like surveillance cameras and ID card standards.
Most of the interview has to do with “cybersecurity” — a term I despise but that we’re stuck with — which has come to mean national computer infrastructure security. It can refer to the security of the major networks (Verizon, AT&T, etc.), the security of military and other government networks, the security of the electrical grid or even the security of banks.
I'm not sure there's public proof of it, but it's reasonable to assume that the Chinese government is involved in attacks, and in planning for attacks, on US infrastructure. (In early 2010, Google documented cyberespionage attacks against Gmail originating from China.) Perhaps most of it so far has been reconnaissance and experimentation. It gets no press at all here, but I'm sure, or at least I hope, that we're doing the same thing to them. Schneier makes the same assumption.
Schneier is concerned about a cyberweapons arms race. Analogizing the situation to the cold war, he then goes on to suggest that the real problem was one of information: even when we had a hotline, we didn’t know everything the Russians were doing so we assumed the worst, and vice-versa.
Treaties controlling the deployment and use of such weapons could be helpful in keeping the problem under control, he suggests, and notes that former US counter-terrorism official Richard Clarke has made the same suggestion. Schneier accepts that such treaties might not be enforceable, but says they can still be worthwhile.
I don’t buy it. The big problem with these weapons, as I see it, is not that we’re not communicating with our adversaries. It’s that nothing about their use is verifiable. If an attack on a US installation is traced to some consumer or university computers in China or Taiwan or wherever, was it the Chinese or some non-state actor? We don’t know and, absent forensic examinations we won’t likely be able to perform, can’t know.
What’s the point of an agreement that neither side can verify? Let’s do a little war-gaming: You’re the United States and I’m China. How do I know that you haven’t placed untraceable logic bombs in my systems or ready-to-launch attacks against me in outside botnets? I can’t, so I have to protect myself by having them too.
It wouldn't matter if we agreed not to do such things, because any such agreement would be based entirely on trust, not verification. Getting back to cold war analogies, I'm one who believes that arms treaties didn't accomplish much worth accomplishing until the START treaties begun under Reagan actually reduced the numbers of weapons and included verification procedures. ICBMs are pretty easy to count. Logic bombs in complex software systems aren't.
Don't look to me for a better idea; I don't have one. On the other hand, I don't worry so much about these attacks, because the short list of actors who might be able to pull them off has no interest in doing so; quite the contrary. Schneier is right again when he says the real fear is of accidental use.
In this regard, he may have a reasonable point about the chain of command and use of such weapons. He wants them used only with authorization by the President or someone very close to him. Maybe this is worth discussing in some agreement, but it too is unverifiable.
My advice: Dig a logic bomb shelter for you and your family. Stock it with provisions like lots of memory and extra CPU cores. Make sure to use strong encryption to keep your neighbors out when the Chinese drop the big one.
This commentary was first posted on March 4, 2011 at 11:24 a.m. ET.
Larry Seltzer is a freelance writer and consultant, dealing mostly with security matters. He has written recently for Infoworld, eWEEK, and Dr. Dobb's Journal, and is a Contributing Editor at PC Magazine and author of their Security Watch blog. He has also written for Symantec Authentication (formerly VeriSign) and Lumension's Intelligent Whitelisting site.