A Perimeter of One
2018-04-18
Even before there were enterprises we thought of as carbon vs silicon, enterprises were graphite-and-paper. In the graphite-and-paper enterprise, an organization had perceived control over all of its information assets – after all, they were written down, in hard copy, and often didn’t leave the building. Humans came and went from the building, but the information perimeter existed at the tip of the pencil.
As computers came into the enterprise, often the first use case was to displace existing systems, and replace the graphite-and-paper enterprise with a silicon enterprise – instead of doing accounting in double-entry on fold-out ledgers, accounting took place in a general ledger application.
Yet the security world still thought of computers as just quicker versions of the graphite-and-paper world. Our perimeter still existed at the fingertips of the humans, only now those fingertips were typing on a keyboard instead of scribbling in a notebook. But our security was still based on the model of a physical perimeter. Mostly. And with a very dangerous flaw.
A physical perimeter — at least the non-human parts of it — isn’t really designed to keep adversaries out. It’s designed to slow adversaries down. To change the cost equation for adversaries, to make them risk their own safety until human guards notice their attempts to enter.
And when the silicon enterprise connected to other networks, we kept this very flawed model. Because we’d always trusted the silicon — after all, it had evolved from graphite-and-paper, which only lied when humans told it to — we weren’t prepared for how untrustworthy our computers would become. The rate of silicon communications far outpaced our ability to monitor it, and adversaries faced little personal risk. So we relied on “securing our perimeter” in a last-ditch attempt to keep adversaries out.
But the basis of our security controls was all about establishing perfect trust in our devices and networks. We’d require the best endpoint security, no matter where our devices were, because our security models relied on that trust to build a credible environment. Even when our devices traveled the world in the hand of a user — the one thing we wouldn’t trust — and were put to both official and personal use, we still believed that we could trust those devices, and make them part of our enterprise.
But those devices aren’t part of our enterprise.
They’re part of the user’s perimeter, instead.
Around the turn of the millennium, enterprising CFOs realized that with the increased consumerization of the mobile phone market, there was no reason for enterprises to own and manage cellphones. Instead, at best, a cellphone allowance could be issued to employees, and those humans could be responsible for their devices.
It was a smart move financially, but one with long-lasting repercussions for the security model of enterprises. While most phones — and even early smartphones — acted as clients to some larger network, with the advent of the iPhone, the model shifted. Smartphones are now an extension of the human who carries them, not of the network that they connect to.
And since the distance between a smartphone and a laptop isn’t that large, we should consider the laptop as part of the human who carries it, too. And, as a result, the enterprise really shouldn’t place any implicit trust in those devices.
Just as a consumer-oriented enterprise doesn’t overly trust the security of the devices its users operate, the modern enterprise needs to treat the devices its employees operate the same way.
Does this mean that we just abandon employees to the dangers of the Internet? Of course not. The modern IT department has become a managed service provider, providing its clients — the human employees — with support and security services to protect that human’s cybernetic perimeter against adversaries. But that service doesn’t mean that our enterprise applications should implicitly trust those devices.
Instead, our enterprise applications should give no more trust to the devices than necessary, and only as a proxy for the specific human who carries them. This is hard work, because we’re so used to believing we can trust everything on our network. But our network is the Internet now, and our mental perimeter needs to shrink to encompass only our applications. Everything outside those applications should have no implicit trust.
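What does “no more trust than necessary, and only as a proxy for the human” look like in practice? Here’s a minimal sketch, purely illustrative: the names and the toy signing scheme are hypothetical, not anything an enterprise should deploy as-is. The application authenticates the human, hands the device a short-lived token bound to that human, and evaluates every request on that token alone; the network the request arrived from never enters into it.

```python
import hashlib
import hmac
import time
from typing import Optional

# Hypothetical values, for illustration only.
APP_SIGNING_KEY = b"application-signing-key"
TOKEN_TTL_SECONDS = 15 * 60  # expire quickly, forcing re-authentication of the human


def issue_token(user_id: str) -> str:
    """Issue a short-lived credential bound to a specific human.

    The device merely carries the token; the trust belongs to the user.
    """
    issued_at = int(time.time())
    payload = f"{user_id}:{issued_at}"
    signature = hmac.new(APP_SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{signature}"


def verify_request(token: str) -> Optional[str]:
    """Evaluate a request on its own merits; return the user id, or None.

    Deliberately absent: any check of source IP, VLAN, VPN membership, or
    device identity. The network a request came from tells us nothing
    about whom to trust.
    """
    try:
        user_id, issued_str, signature = token.rsplit(":", 2)
        issued_at = int(issued_str)
    except ValueError:
        return None  # malformed token
    payload = f"{user_id}:{issued_str}"
    expected = hmac.new(APP_SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # not issued by this application
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return None  # stale; the human has to prove who they are again
    return user_id
```

In a real deployment you’d reach for an established identity provider and standard token formats rather than a hand-rolled HMAC, but the shape is the same: the application’s trust terminates at the user, not at the device or the network.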
And the user’s devices? They’re inside the user’s perimeter, and we should help them establish a safe perimeter of one.