Some of you know that I work in law enforcement in a cyber-related role. As such, I get particularly het up over issues of information security, which resulted in my rather candid indictment of Federal network defenders during the OPM breach.
I try to cleave to William Golding’s brilliant essay on how to be a responsible thinker, the bottom line of which can be expressed as “Don’t bitch about a problem unless you have a solution.”
In that spirit, allow me to solve the United States’ cybersecurity problems in a single blog post.
This will be more technical/procedural and less visceral than the last post I did on this topic, but after reading Weir’s The Martian, I’m starting to think that readers may have a tolerance for this level of unabashed geekery.
The problem here is one of OODA loops. “OODA” is a military acronym coined by Air Force Colonel “Genghis John” Boyd, and if you’re in the military, you certainly know of it. For those of you who aren’t, it stands for “Observe, Orient, Decide, Act,” and it expresses the speed at which an operator identifies a goal and moves to act on it.
Much of modern conflict can be thought of as a race between OODA loops.
This is a huge problem in Counterinsurgency. Lacking the kind of bureaucratic structure that plagues organizations like the U.S. government, insurgents are able to pick a mission and action it much much much faster than we can respond. We’re still getting authorization to launch a mission to defend the first target while the enemy is already on to the third.
The same problem exists in cyberspace. A “hacktivist” in a proxy hacker group working for Chinese intelligence doesn’t need to get authorized orders through a chain of command. They know what the strategic mission is. They just pick a target, sit down, and go at it. By the time guys like me are on scene, the damage is done.
To summarize: Networked, individual operators with Command-and-Control (C2) pushed waaaaay down the chain to the lowest level = very fast OODA loop. Big, bureaucratic organizations that need multiple sign-offs and legal scrutiny (all good things in a free society concerned with preserving rights) = slow OODA loop.
We’re too damn slow, and this is why we’re losing.
From a Computer Network Defense (CND – here’s the military definition) perspective, this is killing us. We need to be able to pivot off technical indicators in real time, and share those indicators across all CND stakeholders: everyone responsible for network defense in the military, law enforcement, and government as a whole, as well as every CND operator in the private sector at banks and power companies and the like, from the biggest to the smallest, from the federal to the state to the local, tribal, and territorial. We need to do this FASTER than the bad guys do it.
Sounds impossible? Not really. We can do it. We can do it tomorrow. We can do it pretty much for free.
Remember: This is a CND problem. If we worry about attribution and counterstrike (figuring out WHO did this, making them pay, prosecuting them) it becomes impossible, because we are now involving legal authorities that must be in place to guarantee civil liberties and civilian oversight of military power. It also doesn’t matter. Attribution is often impossible, and because of the plausible deniability inherent in the use of proxy actors, even when we know who did it, we can’t do anything about it. “It wasn’t us (the Chinese/Russian/Iranian/Whoever government)! We can’t help it if our youth are so patriotic that they hacked you entirely on their own initiative! We’re terribly sorry, and we assure you we’ll prosecute them if we can ever catch them!” You have to remember that in computer forensics, it’s fairly easy to prove that an account or an address hacked something, but tying that technical indicator to a HUMAN conclusively enough to trigger law enforcement or military action is much much much harder. Almost impossible.
But if we treat it as a TECHNOLOGY problem, and focus ONLY on stopping attacks, the problem becomes much smaller and easier.
The solution is a Virtual Security Operations Center (VSOC), running on a government network, with the government playing a pivotal, but minor administrative role.
These government networks already exist: There’s the Homeland Security Information Network (HSIN), or Law Enforcement Online (LEO), or the Open Source Center (OSC). These are Internet-accessible networks that provide a (theoretically) secure forum for the VSOC to exist, and that can process unclassified information among members for the purpose of sharing indicators.
So, we host the VSOC on HSIN. And what is the VSOC? A web-based chat room hosting representatives from each stakeholder. You’ve got an FBI rep on there, you’ve got a CIA rep on there. You’ve got a Phoenix Police rep and a San Diego Parks and Recreation rep and a Minnesota Dept. of Education rep. You get the idea. More importantly, you ALSO have reps from Merrill Lynch and Bank of America and Pepco and Coca-Cola and on and on and on.
All are anonymous. All are sharing indicators. “Hey, we’re seeing these malicious IP addresses.” “Hey, here’s an MD5 file hash of something we think is malware.” “Hey, here’s an account on Twitter distributing a tool to DDoS the Providence Police Department.” “Hey, I think I’ve discovered a potential security flaw in how WordPress processes a particular .jpeg comment. Am I crazy?” “Hey, I’m trying to architect a way to ingest STIX/TAXII feeds into my particular edge appliance. Has anyone been able to do this?” “Hey, someone hacked our web server and posted this logo. Has anyone ever seen it before?” “Hey, I’m on the midnight shift and I don’t know what I’m looking for in this .evtx file. Can anyone help me translate it?”
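To make that kind of traffic concrete, here’s a minimal sketch in Python of how a rep might generate and package a couple of those indicators before pasting them into the room. The function names and JSON field names are my own invention for illustration; a real deployment would presumably standardize on something like STIX.

```python
import hashlib
import json

def file_md5(path):
    """Compute the MD5 hash of a file in chunks, so large samples don't exhaust memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def make_indicator(kind, value, note=""):
    """Package one technical indicator as a minimal JSON message.

    The schema here is purely illustrative -- "type"/"value"/"note" are
    placeholder field names, not a real sharing standard.
    """
    return json.dumps({"type": kind, "value": value, "note": note})

# A rep sharing a suspected-malware hash and a C2 address might produce:
#   make_indicator("md5", file_md5("suspect.exe"), "dropper from phishing wave")
#   make_indicator("ipv4", "203.0.113.7", "beacon destination")
```

The point isn’t the tooling, which is trivial; it’s that an indicator is just a few bytes of structured text, cheap to produce and cheap to share the moment an analyst sees it.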
Anonymity is important. Private entities are less likely to be forthcoming if they feel that information they pass may be used to open a law-enforcement investigation. Government entities are hopelessly provincial/competitive, and that kind of jockeying for position may stymie information sharing efforts.
As it’s just a chat room, the VSOC can operate in an “always-on” status. Reps go to their day jobs and keep the window open on their workstations as they go about their business. They can glance over at it every few minutes, then return to their regular tasks.
The fed plays an important role here:
1.) They provide the network.
2.) They vet each rep prior to granting access. They do the background investigation to ensure the rep is who they say they are, and represent the organization they claim to.
3.) They secure access, ensuring real (separate-channel) two-factor authentication, and limiting access (by IP, MAC, geolocation, etc.) to give as much attribution/accountability as possible for the reps.
4.) They administer either a technical exam or minimum technical qualification requirements for reps. Bottom line: If you don’t understand computers, you can’t be on here. No managers with “cyber portfolios.” No “policy experts” with cyber briefs. Nobody but engineers, talking about engineering topics.
5.) Here’s the doozy: They certify the forum for sharing technical indicators only. They waive the right to use ANYTHING in it for purposes of prosecution, or for any aspect of Computer Network Operations (CNO) other than CND (i.e., no Computer Network Attack, no Computer Network Exploitation).
6.) They guarantee anonymity for the members, unless those members willingly reveal their identity.
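As a concrete sketch of what the two-factor requirement in step 3 could look like: standard TOTP (RFC 6238) derives a short-lived six-digit code from a shared secret, and it can be implemented with nothing but the Python standard library. This is illustrative only; whatever mechanism HSIN actually uses is a separate question.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    secret_b32: shared secret, base32-encoded (as authenticator apps store it).
    at:         Unix time to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# At enrollment the server stores the secret; at login the rep types the
# current code from their token and the server checks:
#   totp(stored_secret) == submitted_code
```

Because the code changes every 30 seconds and never crosses the wire in reusable form, a stolen password alone doesn’t get an attacker into the room.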
In this way, EVERYBODY is talking to each other ALL THE TIME in REAL TIME specifically about the subjects necessary to defend networks. There will be complications presented by this idea (for example: how do we determine who counts as a stakeholder?), but it at least addresses the issue: speeding up our OODA loop to match the enemy’s.
Best part? It’s cheap. The network infrastructure is already in place. Chatroom software isn’t new, and I’m sure the government owns the licenses necessary to make this work. And even if they don’t, there are plenty of freely available, highly secure chat clients (I like Pidgin with OTR). The administrative overhead is comparatively low from a federal perspective. The real hurdle would be participation/buy-in from the stakeholders, and I’m thinking that the need for something like this is great enough that public enthusiasm might be higher than you’d think.
What the government would have to be willing to do is subordinate bureaucracy to operational impact, and give the Heisman to contractors who would race to make money off the enterprise. If those two things could be done successfully, I think we’d be taking the first step toward actually building a CND effort that could work.
We live in a world where anyone sitting in front of a computer anywhere in the world can suddenly decide they’re a member of “Anonymous,” quickly align with a strategic mission statement and execute an attack. A hacker in a jail cell can get a visit from his friendly FSB rep, who promises to go easy on him if he does a favor for the Russian government. Either of these actors can target and move in an instant, because there’s no bureaucracy to navigate. They’ve crowdsourced attack. To face such an adversary, we have to crowdsource defense.
We can do it for next to nothing. We can do it with existing infrastructure. No new technology. No new training. We can do it tomorrow.
We just have to want it bad enough to figure it out.