2018
Composing Defences
2018-07-10
Often, in the information security community, we bandy about terms like “defence in depth” or “layered defences.” Most of the time, it’s just a platitude for “buy more stuff.” It’s worth exploring how these terms evolved, and how we should think about defensive architectures in a world defined not by physical space, but by network connectivity.
In the flat space of military defences in the pre-WWII era, defence in depth referred to one of two concepts. In the first mode, it was a set of defences which interlocked in some form – consider a castle wall, a moat, and a set of guards atop the wall. Each of these defences, individually, was trivially defeatable, but together, they multiplied. While an adversary was busy crossing the moat, they were easy to shoot at. The moat made it hard to scale the wall. The wall gave defensive cover to the guards. In the second mode, it was about depth in distance – consider the depth of the Soviet terrain as they fell back in World War II, and the lengthening of the attacker’s supply lines as weather set in. “Never get involved in a land war in Asia” is good advice for a reason.
Integrating defences relies on some basic features of the physical world. Adversaries occupy space across a period of time. Defenders can trivially observe adversaries – the Mark One eyeball has been available across all of history. But when defences integrate, it may be easier to think of them as stacking – defence in height.
When defences fail to integrate, allowing an attacker to defeat them sequentially – think of a set of hurdles in a line – then depth may be the right dimension to consider. Picture a pair of identical, locked doors, with a small, unmonitored space between them. While an attacker may take more time to defeat both doors (using lockpicks, slides, or a purloined key), neither defence is actually made harder by the presence of the other.
Sometimes, defences don’t even stack. Defence in breadth describes a set of defences that present a choice to an adversary, who can decline to engage a defence at all by going around it. The postern gate offers a spy an alternative to the front gate; the Maginot Line could simply be bypassed; any of a dozen servers in a network DMZ can be breached to provide access to an intranet.
The lesson for defenders is to understand both the system you’re defending and how its defences work – or don’t – together. Increased complexity may be an indicator of defence in breadth, often with “layered” defences where the defeat of one could go undetected. Our goal should be to create defence in height, where we know how our defences work together to defeat adversaries.
How do we approach improving our defences?
One way is to flip our mental model, and consider ourselves as attackers, and the adversary as a defender. In the same way an adversary might conduct surveillance on our defences, we need to surveil the adversary as they defeat our defences. We should treat our boundary systems as if they were already the adversary’s, and ask, “How can we see the adversary conducting an operation?” While an adversary’s dwell time inside our perimeters might not need to be long to accomplish their goals, how can we observe artifacts of their presence?
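As a minimal sketch of what observing those artifacts might look like, the Python below flags outbound destinations contacted at suspiciously regular intervals, one common artifact of command-and-control beaconing. The log format, thresholds, and sample data are illustrative assumptions, not a prescription.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical outbound-connection log as (unix_timestamp, destination)
# pairs; in practice this would come from firewall or proxy logs.
connections = [
    (1000, "203.0.113.7"), (1060, "203.0.113.7"), (1120, "203.0.113.7"),
    (1180, "203.0.113.7"), (1003, "198.51.100.9"), (1500, "198.51.100.9"),
]

def find_beacons(events, min_events=4, max_jitter=0.1):
    """Flag destinations whose contact intervals are suspiciously regular.

    Machine-like timing is an artifact an implant can leave behind even
    when each individual connection looks benign on its own.
    """
    by_dest = defaultdict(list)
    for ts, dest in events:
        by_dest[dest].append(ts)

    suspects = []
    for dest, times in by_dest.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Coefficient of variation: near zero means near-constant intervals.
        if mean(gaps) > 0 and pstdev(gaps) / mean(gaps) < max_jitter:
            suspects.append(dest)
    return suspects

print(find_beacons(connections))  # ['203.0.113.7']
```

No single connection here is an alarm; the regularity across connections is the artifact.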
Another approach is to understand that our perimeters are almost always wider than we realize. When we try to govern our systems, we often start from the best-maintained systems and work outward; adversaries will start from our worst-maintained systems and work inward. We should operationalize the same visibility and maintenance practices across our entire perimeter stack, so that we understand our risks, rather than burying them as a footnote deep in our assessments.
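A small, hedged illustration of that principle: diff the full asset inventory against the hosts each control actually reaches, and surface the gap as a first-class finding. The inventories below are placeholders; in practice they would come from a CMDB export, an EDR console, and a patch-management system.

```python
# Placeholder inventories; real ones would be exported from a CMDB,
# an EDR console, and a patch-management system respectively.
all_assets = {"web-01", "web-02", "db-01", "legacy-fax-gw", "vendor-kiosk"}
monitored = {"web-01", "web-02", "db-01"}
patched = {"web-01", "db-01"}

def coverage_gaps(assets, covered, control_name):
    """Report the assets a control does not reach. These are where
    adversaries will start, so they belong at the top of the report."""
    gap = sorted(assets - covered)
    print(f"{control_name}: {len(gap)}/{len(assets)} assets uncovered: {gap}")
    return gap

coverage_gaps(all_assets, monitored, "monitoring")
# monitoring: 2/5 assets uncovered: ['legacy-fax-gw', 'vendor-kiosk']
coverage_gaps(all_assets, patched, "patching")
# patching: 3/5 assets uncovered: ['legacy-fax-gw', 'vendor-kiosk', 'web-02']
```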
A third approach is to shrink our perimeter outright. Simplifying our defensive models makes them easier for us to understand, and reduces the opportunities for adversaries to penetrate in ways we haven’t anticipated. This may involve partitioning our system clusters, so that lateral movement is restricted, and each network architecture becomes understandable.
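One way to make “lateral movement is restricted” testable, sketched here under invented segment names and flow rules, is to model permitted flows as a directed graph and ask what an attacker could reach from a given foothold.

```python
from collections import deque

# Hypothetical permitted flows between network segments, as might be
# distilled from firewall rules: an edge means traffic is allowed.
allowed_flows = {
    "dmz": ["app"],
    "app": ["db"],
    "db": [],
    "corp": ["app"],
    "build": ["corp", "db"],  # an overly permissive rule worth questioning
}

def reachable_from(foothold, flows):
    """Breadth-first search over permitted flows: everything returned is
    lateral-movement territory for an attacker holding the foothold."""
    seen, queue = {foothold}, deque([foothold])
    while queue:
        segment = queue.popleft()
        for nxt in flows.get(segment, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(foothold)
    return sorted(seen)

print(reachable_from("dmz", allowed_flows))    # ['app', 'db']
print(reachable_from("build", allowed_flows))  # ['app', 'corp', 'db']
```

If a partition is doing its job, the reachable set from any single foothold stays small and understandable.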
All of these approaches have value in improving our defences, and in restoring height to our walls in meaningful and helpful ways.
A Perimeter of One
2018-04-18
Even before there were enterprises we thought of as carbon vs silicon, enterprises were graphite-and-paper. In the graphite-and-paper enterprise, an organization had perceived control over all of its information assets – after all, they were written down, in hard copy, and often didn’t leave the building. While humans came into the building, the information perimeter existed at the tip of the pencil.
As computers came into the enterprise, often the first use case was to displace existing systems, and replace the graphite-and-paper enterprise with a silicon enterprise – instead of doing accounting in double-entry on fold-out ledgers, accounting took place in a general ledger application.
Yet the security world still thought of computers as just quicker versions of the graphite-and-paper world. Our perimeter still existed at the fingertips of the humans, only now those fingertips were typing on a keyboard instead of scribbling in a notebook. But our security was still based on the models of a physical perimeter. Mostly. But with a very dangerous flaw.
A physical perimeter — at least the non-human parts of it — isn’t really designed to keep adversaries out. It’s designed to slow adversaries down. To change the cost equation for adversaries, to make them risk their own safety until human guards notice their attempts to enter.
And when the silicon enterprise connected to other networks, we kept this very flawed model. Because we’d always trusted the silicon — after all, it had evolved from graphite-and-paper, which only lied when humans told it to — we weren’t prepared for how untrustworthy our computers would become. And the rate of silicon communications far exceeded our expectations for monitoring, and adversaries had little personal risk. So we relied on “securing our perimeter,” in a last-ditch attempt to keep adversaries out.
But our security controls were all premised on establishing perfect trust in our devices and networks. We’d require the best endpoint security, no matter where our devices were, because our security models relied on that trust to build a credible environment. Even when our devices would travel the world in the hands of a user — the one thing we wouldn’t trust — and be used for both official and personal purposes, we would still believe that we could trust those devices, and make them part of our enterprise.
But those devices aren’t part of our enterprise.
They’re part of the user’s perimeter, instead.
Around the turn of the millennium, enterprising CFOs realized that with the increased consumerization of the mobile phone market, there was no reason for enterprises to own and manage cellphones. Instead, at best, a cellphone allowance could be issued to employees, and those humans could be responsible for their devices.
It was a smart move financially, but one with long-lasting repercussions for the security model of enterprises. While most phones — and even early smartphones — acted as clients to some larger network, with the advent of the iPhone, the model shifted. Smartphones are now an extension of the human who carries them, not of the network that they connect to.
And since the distance between a smartphone and a laptop isn’t that large, we should consider the laptop, too, as part of the human who carries it. And, as a result, the enterprise really shouldn’t place any implicit trust in either.
In the same way that a consumer-oriented enterprise doesn’t overly trust the security of the devices its users operate, the modern enterprise needs to extend the same wariness to the devices its employees carry.
Does this mean that we just abandon employees to the dangers of the Internet? Of course not. The modern IT department has become a managed service provider, providing its clients — the human employees — with support and security services to protect that human’s cybernetic perimeter against adversaries. But that service doesn’t mean that our enterprise applications should implicitly trust those devices.
Instead, our enterprise applications should give no more trust to the devices than necessary, and only as a proxy for the specific human who carries them. This is hard work, because we’re so used to believing we can trust everything on our network. But our network is the Internet now, and our mental perimeter needs to shrink to encompass only our applications. Everything outside those applications should have no implicit trust.
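As a rough sketch of what that looks like at the application layer, the check below validates a signed user assertion on every request, treats the device identifier only as a claim bound to that user, and never consults network origin. The token format and shared-key signing are simplifying assumptions; a real deployment would lean on an identity provider’s asymmetric signatures.

```python
import hashlib
import hmac
import time

# Simplifying assumption: a shared signing key. A real deployment would
# verify assertions issued and signed by an identity provider instead.
SIGNING_KEY = b"example-key-do-not-use"

def sign_assertion(user, device_id, issued_at):
    payload = f"{user}|{device_id}|{issued_at}".encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def authorize(user, device_id, issued_at, signature, max_age=300):
    """Decide per request. The device is only a proxy for the user: it
    appears inside the signed assertion but grants nothing on its own,
    and the caller's network location is deliberately never consulted."""
    expected = sign_assertion(user, device_id, issued_at)
    if not hmac.compare_digest(expected, signature):
        return False  # assertion forged or tampered with
    if time.time() - issued_at > max_age:
        return False  # stale assertion; make the human re-authenticate
    return True

now = int(time.time())
token = sign_assertion("alice", "laptop-42", now)
print(authorize("alice", "laptop-42", now, token))    # True
print(authorize("mallory", "laptop-42", now, token))  # False: wrong user
```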
And the user’s devices? They’re inside the user’s perimeter, and we should help them establish a safe perimeter of one.