Four considerations for improving cloud security hygiene
2021-11-16
We think we understand what hygiene is, but what about cloud security hygiene? It’s not like our computers have teeth to brush. Although, if you have a child, you might be familiar with the challenges involved in even basic hygiene. Some of us might even have had conversations like this:
“Did you brush your teeth?”
“Yes!”
You smell in the vague vicinity of their mouth. “With toothpaste?”
“...”
Then you have to make sure they have brushed their back teeth. And the insides of their teeth, getting all the surfaces. And they flossed – deeply – between all of their teeth. And used mouthwash.
That’s not a bad model to start understanding security hygiene. There is some task you need to do regularly, and you need to do it everywhere. It’s not okay to just brush your teeth once a year, or only to brush the front teeth; you also can’t just patch software or check your security configurations once a year, or only for your most visible systems. And a vague check-in leaves room for serious improvement.
Imagine a board member asking a CEO, “Are all of your systems patched regularly?” We’re about to play a game of “Operator” as the CEO goes to find the answer, but instead of the words changing, their meaning changes. The board member probably really means “all of the company’s systems” and “patched within industry-standard windows based on criticality,” but that nuance will get lost. The CEO will turn to the CIO to ask the question, which implicitly reduces “all our systems” to “the systems the CIO is responsible for,” and “patched regularly” becomes “patched within our internal maintenance windows.” The CIO asks their team, and so on. The metric reported back up ends up being something like “for our supported Windows servers, we apply the Microsoft patches within the planned maintenance windows 87% of the time.”
Those caveats get lost in the messaging back up to the CEO, so the company thinks it is doing just fine, when, in reality, only a small fraction of software is being tracked well. There’s a disincentive to provide better tracking, because that 87% number will go down when combined with another set of data with a lower score, and no one wants to explain why a metric just got worse.
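To make that concrete, here is a minimal sketch with purely hypothetical numbers (the populations and compliance rates below are illustrative assumptions, not data from any real environment) showing why a combined compliance figure drops once the previously untracked systems are counted alongside the well-tracked ones:

```python
# Hypothetical example: a "good" patch-compliance metric shrinks once the
# systems the original metric quietly excluded are folded in.

# Tracked population: supported Windows servers, patched on time 87% of the time.
tracked_systems = 400
tracked_compliance = 0.87

# Untracked population: everything else, with an assumed (worse) compliance rate.
untracked_systems = 600
untracked_compliance = 0.40

# Weighted average across both populations.
combined = (
    tracked_systems * tracked_compliance
    + untracked_systems * untracked_compliance
) / (tracked_systems + untracked_systems)

print(f"Reported metric:  {tracked_compliance:.0%}")  # 87%
print(f"Combined metric:  {combined:.0%}")            # ~59%
```

The better-instrumented picture is more honest, but it looks worse on a slide, which is exactly the disincentive described above.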
Read the rest of this article on Dark Reading.