2012
Take Over, Bos'n!
2012-09-11
Eleven years ago, Danny Lewin was murdered.
This is a story from before that -- and how Danny inspired me to change the web.
It starts about twelve years ago. Akamai had just launched EdgeSuite, our new, whole-site content delivery product. Instead of having to change the URLs on your objects to start with a7.g.akamai.net/v/7/13346/2d/, you could just CNAME your whole domain over, and we'd deliver everything – even the HTML. It was revolutionary, and would power our move to application delivery.
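For context, the customer-side change really was that small: a single DNS record handing resolution to Akamai. A sketch of what a customer's zone entry might have looked like (hostname illustrative; the edgesuite.net pattern is the one EdgeSuite customers used):

```
; illustrative BIND zone fragment -- the bank's own hostname stays the same,
; but resolution of it is handed to Akamai's network
www.example-bank.com.   IN  CNAME   www.example-bank.com.edgesuite.net.
```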
But Danny wasn't satisfied with that (Danny was rarely satisfied with anything, actually). I'd just become Akamai's Chief Security Architect – mostly focusing on protecting our own infrastructure – and Danny came to me and said, "What will it take to convince banks to use EdgeSuite?"
I'll be honest, I laughed at him at first. We argued for weeks about how paranoid bank security teams were, and why they'd never let their SSL keys be held by someone else. We debated what security model would scale (we even considered having 24x7 security guards outside our datacenter racks). We talked about the scalability of IP space for SSL. Through all of that, Danny was insistent that, if we built it, the market would accept it – even need it. I didn't really believe him at the time, but it was an exciting challenge. We were designing a distributed, lights-out security model in a world with no good guidance on how to do it. And we did.
But I still didn't believe – not the way Danny did. Then came the phone call. I'd been up until 4 am working an incident, and my phone rings at 9 am. It's Danny. "Andy, I'm here with [large credit card company], and they want to understand how our SSL network works. Can you explain it to them?"
I begged for thirty seconds to switch to a landline (and toss cold water on my face), and off we go. We didn't actually have a pitch, so I was making it up on the fly, standing next to the bed in my basement apartment, without notes. I talked about the security model we'd built – and how putting datacenter security into the rack was the wave of the future. I talked about our access control model, the software audits we were building, and our automated installation system. I talked for forty-five minutes, and when I was done, I was convinced – we had a product that would sell, and sell well (it just took a few years for that latter half to come true).
When I got off the phone, I went to my desk, and turned that improvisational pitch into the core of the security story I still tell to this day. More importantly, I truly believed that our SSL capability would be used by those financial services customers. Like Danny, I was wrong by about a decade – but in the meantime, we enabled e-commerce, e-government, and business-to-business applications to work better.
Danny, thanks for that early morning phone call.
"When you're bossman," he added, "in command and responsible for the rest, you- you sure get to see things different, don't you?"
HITB Keynote
2012-07-09
I recently keynoted at Hack in the Box 2012 Amsterdam. My topic was "Getting ahead of the Security Poverty Line", and the talk is below:
After giving the talk, I think I want to explore more about the set point theory of risk tolerance, and how to social engineer risk perception. Updated versions of this talk will appear at the ISSA conference in October, and at Security Zone in December.
How much capacity do you really have?
2012-07-08
I recently bought a Nissan Leaf, and I'm going to share the joys and travails of driving one.
We were going to head out blueberry picking today. Our destination was 34 miles away, and the Leaf claimed it had 80 miles of charge available. "Perfect!" I thought – I could exercise it at its full range, and trickle charge enough overnight to get to work tomorrow, where I can fully charge it.
The first five miles of our trip was uphill on an interstate. By the end of that, the Leaf claimed we had 47 miles of charge left. We turned around, went home, and switched to the Sienna for our blueberry picking adventure.
What happened here? Two things: route selection, and mileage variability. The route selection on the Leaf isn't what I'm used to: on my prior vehicles (Toyota/Lexus), when selecting a route, it would display several options. The Nissan interface didn't, although I'm sure it is there somewhere (something to go look for!). So I had selected the "long but fast route," which added 7 miles, but saved 3 minutes at normal driving speed.
Which leads to mileage variability: an 80-mile range is really some number of kilowatt-hours, and different driving has different miles-per-kilowatt-hour efficiency. "Optimal" driving is at 38 mph; the "long but fast route" involved speeds at least 50% higher, with a concomitant reduction in efficiency. While 80 miles didn't assume optimal driving, it probably didn't expect such high-speed driving.
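The arithmetic behind that surprise is simple. A back-of-the-envelope sketch (the usable battery capacity and the efficiency figures are my rough assumptions, not Nissan's numbers):

```python
# Range is just energy on board times miles extracted per kWh;
# efficiency falls off sharply with speed. All figures are rough guesses.
USABLE_KWH = 21.0  # assumed usable capacity for an early Leaf pack

def range_miles(miles_per_kwh: float, battery_kwh: float = USABLE_KWH) -> float:
    """Estimated range for a given driving efficiency."""
    return battery_kwh * miles_per_kwh

print(range_miles(3.8))  # gentle driving: ~80 miles, like the dashboard claimed
print(range_miles(2.6))  # interstate speeds: ~55 miles, short of a 68-mile round trip
```

The dashboard's "80 miles" bakes in one efficiency assumption; an uphill interstate run invalidates it immediately.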
Feature desire: If you put in a route, and the expected fuel efficiency for normal driving on that route won't get you home on your existing charge, give a warning. This probably requires some better GIS integration, but shouldn't be out of the realm of possibility.
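The check itself is trivial; the hard part is the GIS-derived inputs. A minimal sketch, assuming the navigation system can already estimate a route's distance and a speed/terrain-adjusted efficiency (all names and parameters here are hypothetical):

```python
from typing import Optional

def charge_warning(route_miles: float, expected_mi_per_kwh: float,
                   charge_kwh: float, reserve: float = 0.1) -> Optional[str]:
    """Warn if a route's expected energy use would eat into a safety reserve.

    Hypothetical interface -- a real system would pull route distance and a
    speed/terrain-adjusted efficiency from its GIS and routing data.
    """
    needed_kwh = route_miles / expected_mi_per_kwh
    usable_kwh = charge_kwh * (1 - reserve)
    if needed_kwh > usable_kwh:
        return (f"Route needs ~{needed_kwh:.1f} kWh but only "
                f"{usable_kwh:.1f} kWh is safely available")
    return None

# A 68-mile round trip at interstate efficiency on a 21 kWh charge warns;
# the same trip at gentle-driving efficiency passes quietly.
print(charge_warning(68, 2.6, 21.0))
print(charge_warning(68, 3.8, 21.0))  # None
```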
Social Engineering Judo
2012-07-07
(or, how good customer service and getting scammed can look alike)
On a business trip a few years ago, I found myself without a hotel room (the hotel at which Egencia asserted I had a reservation claimed to know nothing about me). I made a new reservation at a Marriott hotel, and then called to check in, since I had to head off to a customer event and wouldn't get to the hotel until around midnight (and didn't want a repeat of having no hotel room). The desk clerk informed me that I couldn't check in yet, but she assured me that yes, I'd have a room, and it was horrible that the other hotel had left me without one. And yes, it would have a king-size bed.
When I arrived, it turned out they'd upgraded me to a penthouse suite for the night. Good customer service, right? (Yes, of course, but now I have to argue the downside.) The clerk didn't actually know whether I'd had a problem earlier, so really, she let me socially engineer her (honestly, it wasn't intentional). I've been in the hospitality industry myself, and it's really hard to tell the difference between a customer with a problem whose day you can improve and a con artist just looking to get by.
One hotel I worked for had a policy that you could never comp the meal or room a guest was complaining about (because too many people would complain just to see if they could get a free meal); instead, for folks with issues, you'd comp their next stay, or a meal the next night. This usually made guests happy, and con artists only got fifty percent off (until we discovered the "guest" who hadn't paid for their last ten stays by exercising this policy).
The trick here is to empower your customer service folks -- your front line against con artists and social engineers -- to have enough flexibility to make customers happy, while reducing how much they can cost you. A room upgrade has almost no marginal cost for a midnight check-in; but a free meal is a bit more expensive.
Since drafting this post, I've noticed what seems to be a disturbing trend in the hospitality industry: very few organizations can answer the question, "how will you reduce the likelihood of this happening again?" Instead, they focus merely on, "how can I make you stop complaining?" That's the best case, but it's only a first step.