Sanitization vs. crypto

Bruce Schneier opines on NIST's proposal that encryption not be accepted as a means of sanitization. First, the NIST draft:
Encryption is not a generally accepted means of sanitization. The increasing power of computers decreases the time needed to crack cipher text and therefore the inability to recover the encrypted data can not be assured.


Schneier's response:

I have to admit that this doesn't make any sense to me. If the encryption is done properly, and if the key is properly chosen, then erasing the key -- and all copies -- is equivalent to erasing the files. And if you're using full-disk encryption, then erasing the key is equivalent to sanitizing the drive. For that not to be true means that the encryption program isn't secure.


While NIST has since removed the paragraph, they were functionally correct. Bruce's version has too many qualifications, and misses a few more: if there were no implementation bugs in your crypto, if your key was properly generated, if you didn't escrow the key for disaster recovery someplace it never got removed from, if no known-plaintext or chosen-plaintext attacks against your algorithm turn up within the life of your hard drive... that's just a few too many ifs.

By all means, encrypt the hard drive - it's a great practice, especially in the event your hard drive is lost. But absolutely sanitize it as well, if you have the opportunity.
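
To make that concrete, here's a minimal crypto-erase sketch in Python. The use of the cryptography package's Fernet recipe is my choice for illustration - neither NIST nor Schneier specifies a tool - but it shows the point: once the only copy of the key is gone, recovering the data means breaking the cipher, which is exactly the "if" being argued over.

    # Minimal crypto-erase sketch, assuming the third-party "cryptography"
    # package (pip install cryptography). Illustrative only.
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()                   # the properly chosen key
    ciphertext = Fernet(key).encrypt(b"sensitive file contents")

    del key  # "sanitize" by destroying the only copy of the key

    # Without the key, any recovery attempt amounts to breaking the cipher;
    # decrypting with the wrong key simply fails:
    try:
        Fernet(Fernet.generate_key()).decrypt(ciphertext)
    except InvalidToken:
        print("unrecoverable - given no bugs, no escrowed copies, no new attacks")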

Security and Obscurity

Everyone has heard the mantra, "Security through obscurity is no security at all." I hope people remember where it came from: when companies were announcing proprietary cryptographic algorithms, everyone pointed out that designing a cipher is an almost impossible task to get right, so you couldn't really know how secure your algorithm was unless it had been peer-reviewed.

But the mantra comes up every few days, whenever people discuss security systems and architectures. And there, I contend, obscurity is the single most important component of every security system. Because, let's face it, there is no such thing as perfect security. So every architecture has its holes.

The job of a good security professional is to reduce those holes; to make exploiting the holes more expensive than the value of doing so, and to implement layered security systems so that attackers are unlikely to make it all the way through a system without tripping an alarm somewhere. Without obscurity, that's impossible.

Put another way, an attacker, or even a neutral party, has absolutely no need to know the details of your architecture.

Zipcar

Zipcar just showed up in the new parking garage at work. Interesting to note that they've now added the Scions (xA and xB), the Honda Element, and the Toyota Matrix to their line-up.

I assume that means they're seeing demand for more cargo space, which seems to me to be the big gap for people who rely on public transportation.

Social Engineering Self-training

Most security systems have an annoying side effect: increasing attack volume degrades them, usually because defenders detune their defenses or become desensitized (yes, this is a generalization). Social engineering, on the other hand, has the nice feature that the more often someone tries to social engineer you, the less likely the next person is to succeed - even if the first attack is incompetent and the second highly competent.

That's because every failed attempt is a training exercise for the target.

Policy and Practice - a Talmudic distinction

It's hip, of course, to be able to use "Talmudic" in describing a regulatory environment - but this post is actually going to use the Talmud as a source. Policy is what we write down; practice is what we do. The relationship between them is nicely covered in the first tractate of the Talmud.

Mishna. From what time can the Shma be recited in the evening? From the hour when the priests go in to eat their tithes until the end of the first watch - the words of Rabbi Eliezer.
And the Sages say: Until midnight.
Rabban Gamliel says: Until the break of day (Brokhos 2a).

There is a bunch of esoteric coverage about the start point - but what about the end point? Why are both midnight and daybreak listed?

Mishna. Whenever the Sages say "until midnight," the obligation extends until the break of day....
Then why did the Sages say "until midnight"?
In order to keep people from transgressing (Brokhos 2a).

And that is the difference between policy and practice. A well-written policy should never be broken - and one way to ensure that is to have practice be more stringent than the policy.

Note that I exempt from this rule CYA policies, of the sort lawyers tend to write to protect organizations from liability.

(Thanks to Born to Kvetch by Michael Wex for the inspiration).

Phishing

We're all so paranoid about phishing, but it seems like we only really care about banking. I wonder whether, if the banking industry ever gets its act together, identity thieves will start going after other sites.

Like LinkedIn. I've been playing with it lately (more on my observations later), and it sends HTML email to your new contacts inviting them to link to you. If you receive an invitation at an address other than the ones you've already provided, it lets you log in and register that address.

It would be pretty trivial to phish that login. At the least, I bet most people don't have a unique password there, and it would certainly let you start building up a network of relationships - and if you're trying to get people to read your fraudulent email, it's all about getting them to trust the putative sender.

It's a lot of work to go after something like LinkedIn, or Evite, and I wouldn't expect to see it happen any time soon. But I really thought about it when my father-in-law called me this morning to verify that I had, in fact, generated the LinkedIn email he hadn't yet opened. Maybe we all need to be a bit more paranoid.

Disclosure Laws

At a conference recently, one of the panelists asserted that the California Disclosure Law (SB-1386) was the worst information security law in memory. I disagree. I think it is the best regulation around information security, even better than GLBA. Most information security regulations are about controls - that is, they specify how one should protect information assets. Sometimes those are relevant controls - but in practice, every IT environment is a little different, and what works for one environment isn't going to work for another.

What SB-1386 has done for the industry is create a very clear cost for an information security breach. Now, companies will, hopefully, think about what controls are relevant for their environment. And maybe, just maybe, that will lead to better security.

Invisibility Cloak

Invisibility gets closer.

It's a cool concept. But once the price comes down, this is one of those potentially disruptive technologies (it reminds me a lot of Shield, by Poul Anderson). I think there are some scary uses, and some cool uses:

In the scary category:
  • concealed guns
  • concealed bombs
  • traffic hazards. People drop bricks off overpasses - what about a cloaked piece of furniture?
Fortunately, the cloaking on each of these would be far and away the most expensive item, so I doubt we'll see them anytime soon.

In the cool category:
  • Urban renewal - what if, instead of creating a hole in space that one looked through, you shifted the light, so that it appeared to pass over the object? Imagine a downtown parking garage with landscaping on top. From the side, it would appear to be just a park. Just making the garage invisible is ugly - you end up with strange sightlines and perspectives. But making it actually disappear? We could have saved a lot of money on the Big Dig.
  • Architectural features - imagine a building where every other floor is invisible. Or where the pillars aren't there.
  • Privacy umbrella - have your own portable changing station at the beach! Of course, I could see some uses for this that might best fit in the scary category, come to think of it....

Infosec - Failing or Succeeding?

Noam Eppel at Vivica asserts that Information Security is a total failure:



Today we have fourth and fifth generation firewalls, behavior-based anti-malware software, host and network intrusion detection systems, intrusion prevention systems, one-time password tokens, automatic vulnerability scanners, personal firewalls, etc., all working to keep us secure. Is this keeping us secure? According to USA Today, 2005 was the worst year ever for security breaches of computer systems. The US Treasury Department's Office of Technical Assistance estimates cybercrime proceeds in 2004 were $105 billion, greater than those of illegal drug sales. According to the recently released 2005 FBI/CSI Computer Crime and Security Survey, nearly nine out of 10 U.S. businesses suffered from a computer virus, spyware or other online attack in 2004 or 2005 despite widespread use of security software. According to the FBI, every day 27,000 people have their identities stolen.


Noam's article is a good read if you think the Internet is safe. But a lot of folks disagree with his conclusion, and I side with them. The article is just a litany of all the doom-and-gloom statistics out there - like the ones you see on slide 2 of every security vendor's pitch.


The enemy's gate is down

In the high-tech business, it's worth following the money to see where our technologies are headed. And often, you can at least look at where VCs are thinking of putting their money:

Mark Kvamme's keynote at ad:tech:

Yet in spite of the changes in consumer behavior, the media spend still lags behind. Kvamme noted that while average household time spent with TV is 33 percent, the average ad spend on TV is 38 percent. In contrast, average household time spent on the internet is 33 percent, but the ad spend is a "minuscule" five percent.

And, Kvamme said, breaking it down by CPM makes the differences in media weight even more apparent. A $64 CPM on network TV is a bad buy when compared to a premium internet CPM of $30, and it looks downright terrible when compared to an internet ROS CPM of $10.
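
A quick back-of-the-envelope in Python, using only the figures from the quote above:

    # Share of household time vs. share of ad spend (from Kvamme's keynote).
    tv_time, tv_spend = 0.33, 0.38
    web_time, web_spend = 0.33, 0.05

    print(tv_spend / tv_time)    # ~1.15: TV earns a premium per hour of attention
    print(web_spend / web_time)  # ~0.15: the web earns roughly an eighth of that

    # And the CPM gap:
    print(64 / 30)               # ~2.1x: network TV vs. premium internet
    print(64 / 10)               # 6.4x: network TV vs. internet run-of-site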


And really, those ratios are even worse. With the advent of DVRs, I think a lot of us are just skimming past the commercials (although I've noted a marked increase in the quality of advertising eye candy on TV, possibly to get folks like me to stop and watch the pretty pictures). But an Internet ad pretty much always catches your eye, and it can be better targeted than the brute demographics of TV ads -- when I hit a handful of car sites, you know I'm a pretty good target for vehicular ads.

There's an interesting thing to watch out for here. I think there will be a lot more money moving into online advertising, and more into the streaming media space - good news for us - but I wonder if we're going to see more of the excesses of the advertising space. Blink tags. Popunders. Loud, bad music blaring inline from our browsers. Animated banners covering up that news article.

Either way, it's more media to move.

(hat tip: Craig Newmark)

False Positives

Driving in to work this morning, I discovered a wonderful failure mode of an alerting system. My car has a weight sensor in the passenger seat; if it detects a possible passenger without a safety belt in use, it alerts you.

Now, our other car has this too, and it's just a little red light on the dash. But this car starts an audible dinging alarm, which then escalates to a much faster ding. I'm not sure whether it will turn itself off, as I quickly moved my backpack from the passenger seat to the floor. But what was my first thought?

Man, I've got to disable that alarm.

And that's where security systems can get it wrong. If you put in a control that annoys your end users, your end users will actively work to defeat the system. And that's when you've reduced security, because their workaround is usually less safe than what you started with (in this case, disabling the alarm might also eliminate the red warning light - a tolerable false positive that would still catch an unbelted passenger).
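
To see why the backpack sets it off, here's a toy model of the chime logic. The threshold and the function are invented for illustration, but the failure mode is the real one - a weight-only sensor can't tell a person from cargo:

    # Toy model of the seat-belt chime (Python); the 10 kg threshold is made up.
    def chime(seat_weight_kg: float, belt_latched: bool) -> bool:
        OCCUPIED = 10.0
        return seat_weight_kg > OCCUPIED and not belt_latched

    print(chime(70.0, False))  # True: unbelted passenger - the alert we want
    print(chime(12.0, False))  # True: a heavy backpack - the false positive
    print(chime(12.0, True))   # False: buckling the empty belt "fixes" it,
                               # too - another popular workaround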

Sledgehammers

How do you perfectly secure data on a system? The hard drive should be encrypted, of course. Logging onto the system should require a one-time password, as well as an asymmetric identifier. You put the computer in a locked room. Make sure the computer isn't connected to the network, of course, and, for good measure, power it down. The door should have multiple locks, so that you can enforce two-person access controls, and each person needs to prove their identity with a physical token, biometrics, and a PIN.

And, of course, the last thing you should do is take a sledgehammer to the computer before leaving the room.

You wouldn't do that last step, would you? And, of course, depending on the value of the data, you probably aren't doing most of the other steps, either. And that's what security is really all about - finding the risk management balance where the protections are commensurate with the threats and value of the data.

I've found that when someone doesn't want to implement a given security profile, they sometimes resort to the sledgehammer argument: they find an extreme level of security that isn't being recommended, and assert that the absence of that level of protection justifies not adding a lower level of protection.

Autoturning headlights

We just bought a new car, and it has headlights that turn to the left or right when the steering wheel is turned in that direction. It's a pretty neat feature, although I discovered an interesting "attack" you can do with it, even unintentionally. The steering angle needed to trigger the auto-adjust happens to match the curvature of the onramp from Storrow Drive to 93 North. So if you happen to be behind someone, and the steering wheel has any jitter in it (not that, you know, you'd jitter it intentionally), your headlamps will wash back and forth across the inside of their car.
As if being followed by a tall car wasn't already painful enough.

Pseudonymity

Pseudonymity, for those new to it, is the use of a semi-permanent but incomplete or false identity. For instance, in many online communities I'll just go by my first name, with a specific Gmail address so that people can distinguish me from all the different Andys out there. I have different pseudonyms in different spaces; in some of them, people who already know me know my pseudonym, but strangers don't.

This is a pretty common practice. It's better than anonymity for community building, as I've noticed that when people feel truly anonymous, they tend to be less courteous and more inflammatory. But as the Michael Hiltzik furor has pointed out, people can still abuse pseudonymity; how do you know when 50 people are really only one person?

The answer is pretty simple. A pseudonym should be much like a nickname - you can have a different one for each group you hang out with, but it still doesn't let you pretend to be 4 or 5 different people at once.
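
If you're building the platform, the rule is simple enough to enforce in code. A sketch (all names hypothetical, and assuming the platform can tie handles back to an account):

    # One pseudonym per account per space - a Python sketch, not a real API.
    class PseudonymRegistry:
        def __init__(self) -> None:
            self._handles: dict[tuple[str, str], str] = {}

        def register(self, account_id: str, space: str, handle: str) -> None:
            key = (account_id, space)
            if key in self._handles and self._handles[key] != handle:
                raise ValueError("one pseudonym per person per space")
            self._handles[key] = handle

    reg = PseudonymRegistry()
    reg.register("andy", "cooking-forum", "Andy_B")   # fine
    reg.register("andy", "security-list", "graylag")  # different space: fine
    reg.register("andy", "cooking-forum", "NotAndy")  # raises ValueError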

USENIX Security Symposium

The first week of August, you'll find the USENIX Security Symposium in Vancouver. The invited talks this year look great, but I'm not sure I'll be able to make it. If you go, don't miss Matt Blaze's talk on wiretapping - he gave it at ICNS 2006, and I thought it was one of the best research talks I've seen in a while.