2011
Security Subsistence Syndrome
2011-12-13
Wendy Nather, of The 451 Group, has recently discussed "Living Below the Security Poverty Line," which looks at what happens when your budget is below the cost to implement what common wisdom says are the "standard" security controls. I think that's just another, albeit crowd-sourced, compliance regime. A more important area to consider is the mindset of professionals who believe they live below the security poverty line:
Security[1] Subsistence Syndrome (SSS) is a mindset in an organization that believes it has no security choices and is underfunded, so it spends minimally to meet perceived[2] statutory and regulatory requirements.
Note that I'm defining this mindset with attitude, not money. I think that's a key distinction - it's possible to have a lot of money and still be in a bad place, just as it's possible to operate a good security program on a shoestring budget. Security subsistence syndrome is about lowered expectations, and an attitude of doing "only what you have to." If an enterprise suffering from security subsistence syndrome can reasonably expect no one to audit their controls, then they are unlikely to invest in meeting security requirements. If they can do minimal security work and reasonably expect to pass an "audit"[3], they will do so.
The true danger of believing you live at (or below) the security poverty line isn't that you aren't investing enough; it's that because you are generally spending time and money on templatized controls without really understanding the benefit they might provide, you aren't generating security value, and you're probably letting down those who rely on you. When you don't suffer from security subsistence syndrome, you start to exercise discretion: implementing controls that might be qualitatively better than the minimum - and sometimes, at lower long-term cost.
Security subsistence syndrome means you tend to be reactive to industry trends, rather than proactively solving problems specific to your business. As an example, within a few years, many workforces will likely be significantly tabletized (and by tablets, I mean iPads). Regulatory requirements around tablets are either non-existent or impossible to satisfy; so in security subsistence syndrome, tablets are either banned or ignored (or banned, and the ban is then ignored). That's a strategy that waits to react to the existence of tablets and vendor-supplied industry "standards," rather than proactively moving the business into using them safely and sanely.
Security awareness training is an example of a control that can reflect security subsistence syndrome. To satisfy the need for "annual security training", companies will often have a member of the security team stand up in front of employees with a canned presentation, and make them sign that they received the training. The signed pieces of paper go into the desk drawer of someone who hopes an auditor never asks to look at them. Perhaps the business uses an online computer-based training system, which uses a canned presentation, forcing users to click through some links. Those are both ineffective controls, and worse, inefficient (90 minutes per employee means that in a 1500-person company, you're spending over an FTE just to generate that piece of paper!).
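To make that cost concrete, here's a quick back-of-the-envelope check (the 2,000-hour FTE-year is my assumption, not a measured figure):

```python
# Rough cost of "sign the sheet" annual training, using the figures above.
minutes_per_employee = 90
employees = 1500
hours_per_fte_year = 2000  # assumed: one full-time employee works roughly 2,000 hours/year

total_hours = minutes_per_employee * employees / 60
print(total_hours)                       # 2250.0 hours of employee time
print(total_hours / hours_per_fte_year)  # ~1.1 FTE-years spent generating that paper
```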
Free of the subsistence mindset, companies get creative. Perhaps you put security awareness training on a single click-through webpage (we do!). That drops the time requirement (communicating to employees that you value their time), and lets you focus on other awareness efforts - small fora, executive education, or targeted social engineering defense training. Likely, you'll spend less time and money on security awareness training, have a more effective program, and be able to demonstrate compliance trivially to an auditor.
Security subsistence syndrome is about your attitude, and the choices you make: at each step, do you choose to take the minimal, rote steps to satisfy those you believe are watching, or do you instead take strategic steps to improve your security? I'd argue that in many cases, the strategic steps are cheaper than the rote steps, and have a greater effect in the medium term.
[1] Nothing restricts this to security; likely, enterprise IT organizations can fall into the same trap.
[2] To the satisfaction of the reasonably expectable auditor, not the perfect auditor.
[3] I'm loosely defining audit here to include any survey of a company's security practices, not just "a PCI audit."
Enterprise InfoSec Lessons from the TSA
2011-12-07
The TSA and its security practices are fairly common targets for security commentary. You can find armchair critics in most every bar, living room, and, especially, information security team. But the TSA is a great analogue to the way enterprises tend to practice information security; so maybe we can learn a thing or three from them.
We can begin with the motherhood and apple pie of security: business alignment. The TSA has few incentives that line up with those of its customers (or with their customers). TSA's metric is, ostensibly, the number of successful airplane attacks. Being perceived to reduce that number is their only true metric. On any day where there is no known breach, they can claim success - just like an enterprise information security team. And they can also be criticized for being irrelevant - just like said enterprise information security team. The business, meanwhile (both airlines and passengers), worries about other metrics: being on time, minimizing hassle, and controlling costs. Almost any action the TSA undertakes in pursuit of its goals is going to have a harmful effect on everyone else's goals. This is a recipe for institutional failure: as the TSA (or infosec team) acknowledges that it can never make its constituents happy, it runs the risk of not even trying.
Consider the security checkpoint, the TSA equivalent to the enterprise firewall (if you consider airplanes as VPN tunnels, it's a remarkable parallel). The security checkpoint begins with a weak authentication check: you are required to present a ticket and an ID that matches. Unfortunately, unless you are using a QR-coded smartphone ticket, the only validation of the ticket is that it appears - to a human eyeball - to be a ticket for this date and a gate behind this checkpoint. Tickets are trivially forgeable, and can be easily matched to whatever ID you present. The ID is casually validated, and goes unrecorded. This is akin to a sadly standard enterprise practice: logging minimal data about connections that cross the perimeter, and never comparing those connections to a list of expected traffic.
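For contrast, here's a minimal sketch of the practice that paragraph says enterprises skip: recording perimeter connections and checking them against a list of expected flows. The flow tuples and names are hypothetical, and a real implementation would need CIDR matching rather than string equality:

```python
# Hypothetical allowlist of expected (source network, destination, port) flows.
EXPECTED_FLOWS = {
    ("10.1.2.0/24", "payments.internal", 443),
    ("10.1.3.0/24", "mail.internal", 25),
}

def unexpected_connections(log_entries):
    """Yield logged connections that match no expected flow - the ones worth a second look."""
    for flow in log_entries:
        if flow not in EXPECTED_FLOWS:
            yield flow

# A connection from an unplanned network to the payments system gets flagged.
print(list(unexpected_connections([("10.9.9.0/24", "payments.internal", 443)])))
```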
In parallel, we find the cameras. Mounted all through the security checkpoint, the cameras are a standard forensic tool - if you know what you're looking for, and when, they'll provide some evidence after the fact. But they aren't very helpful in stopping or identifying attacks in progress. Much like the voluminous logs many of our enterprises deploy: useful for forensics, useless for prevention.
Having entered the checkpoint, the TSA is going to split passengers from their bags (and their shoes, belts, jackets, ID, and, importantly, recording devices). Their possessions are going to be placed onto a conveyor belt, where they will undergo inspection via an X-ray machine. This is, historically, the biggest bottleneck for throughput, and a nice parallel to many application-level security tools. Because we have to disassemble the possessions, and then inspect one at a time (or maybe two, or three, in a high-availability scenario), we slow everything down. And because the technology to look for problems is highly signature-based, it's prone to significant false negatives. Consider the X-ray machine to be the anti-virus of the TSA.
The passengers now get directed to one of two technologies: the magnetometers, or the full body imagers. The magnetometers are an old, well-understood technology: they detect efforts to bring metal through, are useless for ceramics or explosives, and are relatively speedy. The imagers, on the other hand, are what every security team desires: the latest and greatest technology; thoroughly unproven in the field, with unknown side effects, and invasive (in a sense, they're like reading people's email: sure, you might find data exfiltration, but you're more likely to violate the person's privacy and learn about who they are dating). The body scanners are slow. Slower, even, than the X-ray machines for personal effects. Slow enough that, at most checkpoints, when under load, passengers are diverted to the magnetometers, either wholesale or piecemeal (this leads to interesting timing attacks to get a passenger shifted into the magnetometer queue). The magnetometer is your old-school intrusion-detection system: good at detecting a known set of attacks, bad at new attacks, but highly optimized at its job. The imagers are that latest technology your preferred vendor just sold you: you don't really know if it works well, you're exporting too much information to the vendor, you're seeing things you shouldn't, and you have to fail around it too often for it to be useful; but at least you can claim you are doing something new.
If a passenger opts out of the imaging process, rather than pass them through the magnetometer, we subject them to a "pat-down". The pat-down is a punitive measure, enacted whenever someone questions the utility of the latest technology. It isn't very effective (if you'd like to smuggle a box cutter past the checkpoint, and don't want to risk the X-ray machine detecting it, taping the razor blade to the bottom of your foot is probably going to work). But it does tend to discourage opt-out criticism.
Sadly, for all of the TSA's faults, in enterprise security, we tend to implement controls based on the same philosophy. Rather than focus on security techniques that enable the business while defending against a complex attacker ecosystem, we build rigid control frameworks, often explicitly designed to be able, on paper, to detect the most recent attack (often, in implementation, these fail, but we are reassured by having done something).
The Unreliability of the Domain Name Service
2011-09-16
Consider the case of DNS (the Domain Name Service). This innocuous-seeming protocol, which merely translates hostnames (like www.csoandy.com) into IP addresses (like 96.17.149.33), is such an important foundation of the Internet that, without it, humans would have had a serious challenge building and engaging meaningfully with such a vast distributed system. The web as we know it would not exist without DNS. The beauty of DNS is in its simplicity: you ask a question, and you get a response.
But if you can't get a response, then nothing else works -- your web browser can't load up news for you, your email client can't fetch email, and your music player can't download the latest songs. It's important to ensure the DNS system is highly available; counterintuitively, we built it assuming that its components are likely to fail. And from that we can learn how to build other survivable infrastructures.
First, DNS primarily uses UDP, rather than TCP, for transport. UDP (formally the User Datagram Protocol, often nicknamed the Unreliable Data Protocol) is more like sending smoke signals than having a conversation; the sender has no idea if their message reached the recipient. A client sends a UDP query to a name server; the name server sends back a UDP answer (had this been implemented in TCP, the conversation would instead have taken several round trips).
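A minimal sketch of that single-packet exchange, building the query by hand over a UDP socket (the resolver address is just an example; any recursive resolver you're allowed to query would do):

```python
import random
import socket
import struct

def dns_query_a(hostname: str, server: str = "8.8.8.8", timeout: float = 2.0) -> bytes:
    """Send one UDP datagram asking for an A record; return the raw single-datagram reply."""
    txid = random.randint(0, 0xFFFF)
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # flags: recursion desired
    qname = b"".join(bytes([len(label)]) + label.encode() for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(header + question, (server, 53))
        reply, _ = sock.recvfrom(512)  # classic DNS answers fit in one 512-byte datagram
    finally:
        sock.close()
    return reply

print(len(dns_query_a("www.csoandy.com")), "bytes back in a single round trip")
```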
Because DNS gets no reliability guarantee from UDP (and, frankly, the reliability guarantee that TCP would have provided isn't worth much, though it at least provides failure notification), the implementations had to assume -- correctly -- that failure would happen, and happen regularly. So failure was planned for. If the client does not get a response within a set time window, it will try again - but this time the client may query a different server IP address. Because the DNS query/response is accomplished within a single packet, there is no need for server stickiness.
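That retry-elsewhere behavior is easy to sketch: because each query is a self-contained datagram, the client can shrug off a timeout and simply ask a different server. The resolver addresses here are placeholders, and the query bytes could come from a builder like the one above:

```python
import socket

def query_with_failover(query: bytes, servers: list, timeout: float = 1.0) -> bytes:
    """Try each resolver in turn; a lost datagram just means 'ask the next one'."""
    for _ in range(2):  # a couple of passes over the whole list
        for server in servers:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            try:
                sock.sendto(query, (server, 53))
                reply, _addr = sock.recvfrom(512)
                return reply  # no stickiness: any server's answer is as good as another's
            except socket.timeout:
                continue      # planned-for failure: move on to the next server
            finally:
                sock.close()
    raise RuntimeError("no DNS server answered")

# Usage sketch (placeholder resolver addresses):
# reply = query_with_failover(some_query_bytes, ["192.0.2.1", "192.0.2.2"])
```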
An array of DNS servers can be placed behind a single IP address with only simple stateless load-balancing - no complex stateful load balancers required (higher-end DNS systems can even use IP anycasting, to have one IP address respond from multiple geographic regions, with no shared state between the sites). Clients can and do learn which servers are highly responsive, and preferentially use those.
DNS also has other reliability mechanisms built in, like the TTL (time to live). This is a setting associated with every DNS response that indicates how long the response remains valid. A client therefore does not need to repeat a query for some time; if a name server fails, a client may not notice for hours.
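A toy version of that TTL behavior, assuming nothing about any real resolver library:

```python
import time

class DnsCache:
    """Serve answers from memory until their TTL expires; only then go back to the network."""

    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def put(self, name: str, address: str, ttl_seconds: int) -> None:
        self._entries[name] = (address, time.monotonic() + ttl_seconds)

    def get(self, name: str):
        entry = self._entries.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]  # still fresh: no query, and no dependence on the name server being up
        return None          # missing or expired: the caller must re-resolve

cache = DnsCache()
cache.put("www.csoandy.com", "96.17.149.33", ttl_seconds=3600)
print(cache.get("www.csoandy.com"))  # served locally for up to an hour
```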
On top of this failure-prone infrastructure -- an unreliable transport mechanism, servers that might fail at any time, and an Internet that has an unfortunate tendency to lose packets -- a highly survivable system begins to emerge, with total DNS outages a rare occurrence.
The Spy Who Wasn't
2011-09-09
By now, many of you have seen either an original article when Eliot Doxer was arrested, or a more recent article covering his guilty plea. As the articles (and the original complaint) note, Mr. Doxer, then an Akamai employee, reached out to the Israeli government, offering to sell information to them. His outreach was passed along to the FBI, who acted out a multi-year cloak-and-dagger scenario in which Mr. Doxer was providing information -- he believed, to Israeli intelligence -- that instead went solely to the FBI. Early on, Akamai was alerted to the matter on a confidential basis, and we provided assistance over the years. Obviously, we can't go into detail about that.
What was this information?
Mr. Doxer was an employee in our Finance Department on the collections team, and, in the course of his job, he had routine and appropriate access to a limited amount of Akamai's business confidential information - like who our customers are and what they buy from us. At no time, however, was Mr. Doxer authorized to access the confidential information of our customers - including access to our production networks, our source code, or our customer configurations.

In pleading guilty to one count of foreign economic espionage, Mr. Doxer stipulated that he gave an FBI undercover agent, among other things, copies of contracts between Akamai and some of our customers. The Justice Department has confirmed that the Akamai information was never disclosed to anyone other than a U.S. law enforcement officer.
Lessons Learned
We used this incident as an opportunity to review our controls, to assess whether or not a deficiency was exploited, and to identify areas for improvement. We looked both at this specific case and at the general case of insider threats, and have identified and implemented additional controls to reduce our exposure.

And we've given thanks to the FBI for their outstanding work.
Password weakness
2011-08-19
Randall Munroe opines in xkcd on password strength, noting that we've trained people to "use passwords that are hard for humans to remember, but easy for computers to guess." He's both right and wrong.
First off, the security industry owes Randall a debt of gratitude for this comic; people who don't normally interact with security technologies (or only grudgingly) are discussing and debating the merits of various password algorithms, and whether "correct horse battery staple" is, in fact, more memorable and more secure than "Tr0ub4dor&3". That's an important conversation to have.
Is it more secure?
Randall plays a trick on the audience by picking a single strawman password implementation and showing how weak it is compared to a preferred model. He also limits himself to a specific attack vector (against an online oracle), which makes the difference between the two seem larger than it really is.

Consider the following (obvious) variants of the "weak" password algorithm presented in the comic: "Tr0ub4dor&3!", "Troubbador&3", "2roubador&3". None of these match the algorithm presented - so they don't fit into the 28 bits of entropy of concern. That doesn't make them perfect; I merely note that Randall arbitrarily drew his line around "likely passwords" where he wanted to. That's not necessarily unreasonable: for instance, if a password scheme requires 8 characters, including at least one upper case, one lower case, one number, and one symbol, assuming people will pick "upper case, five lower case with a number thrown in, symbol that is a shifted number" probably isn't a bad idea, and lets you ignore 99.9975% of possible 8-character passwords. But it is unreasonable if you're arguing that your specific model might be better.
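For readers who want to check the comic's arithmetic, here's the rough tally it uses (the word-list size and per-tweak bit counts are the comic's assumptions, not measurements):

```python
import math

# Four words chosen uniformly at random from a 2048-word list.
passphrase_bits = 4 * math.log2(2048)

# One "uncommon" base word (~16 bits) plus the usual tweaks:
# capitalization, common substitutions, a digit, a symbol, and their ordering.
tweaked_word_bits = 16 + 1 + 3 + 3 + 4 + 1

print(round(passphrase_bits), tweaked_word_bits)  # ~44 bits vs ~28 bits
```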
Let's say that we give users the simple model proposed: pick four random words. People fail at picking random things; see Troy Hunt's analysis of the passwords revealed in the Sony Pictures breach. So if you let the user pick the word list, you'll end up with common phrases like "patriots football super bowl" or "monkey password access sucks", and adversaries will start there. Or, we can give users their passphrases, and probably discover later that there was a bug in the random number generator used to select words, and half of our users have the passphrase "caffeine programmer staccato novel".
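If you do hand out passphrases, the selection should come from a cryptographically strong source rather than from people or a questionable PRNG. A minimal sketch (the wordlist file name is a placeholder):

```python
import secrets

def random_passphrase(wordlist: list, words: int = 4) -> str:
    """Pick words with a CSPRNG so neither human habit nor a weak PRNG skews the choice."""
    return " ".join(secrets.choice(wordlist) for _ in range(words))

# Usage sketch with a placeholder wordlist file:
# words = open("wordlist.txt").read().split()
# print(random_passphrase(words))
```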
Randall is correct that, when it comes to user-memorized secrets, longer is better. So is less predictability. Most password rules are designed to move from easy predictability (common words) to harder predictability (common words plus some interspersed silly keystrokes).
The real risk
Going back to Troy Hunt's analysis, the real risk isn't that someone will use an online oracle to brute-force your password or passphrase. The real risk is that some password holder will be breached, and, like 67% of users, you'll have used the same password on another site. Password strength doesn't help at all with that problem.

But which one?
The answer is neither. If you're using either password scheme demonstrated by Randall, change it (e.g., add some random symbols between your words), as it's now more likely to be an adversarial target. The real question is: how do we get away from passwords? SSL certificates - for all their issues - are one option. One-time passwords - generated either by a dedicated token or application, or out-of-band, via SMS - are also an interesting choice.

But if the only threat you're worried about is online oracle attacks, you can defend against those by looking for them, and making them harder for adversaries to conduct. But that's a mostly losing battle in the long run.
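As a sketch of the one-time-password option mentioned above, here's the time-based OTP construction (RFC 6238) that most token apps implement; the secret is a placeholder, and real deployments also handle clock skew and rate limiting:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                      # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret; the code changes every 30 seconds
```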
How certificates go bad
2011-03-24
The security echo chamber has gotten quite loud over the last few days about the Comodo sub-CA bogus certificate issuance. This is a good opportunity to look at what happened, why this isn't as startling as some might think, and some general problems in the SSL CA model.
A primer on certificates and authorities
Certificates are built on top of asymmetric cryptographic systems - systems where you have a keypair that is split into a private half (held closely by the owner) and a public half (distributed widely). Information encrypted with one half is only decryptable with the other half. If you encrypt with the public key, we call it encryption (the information is now secret and can only be read by the private key owner); if you encrypt with the private key, we call it signing (the information can be verified by anyone, but only you could have generated it). There are additional optimization nuances around hashes and message keys, but we'll gloss over those for now.

Anyone can generate asymmetric keypairs; what makes them interesting is when you can tie them to specific owners. The SSL model is based on certificates. A certificate is just someone's public key, some information about that public key, and a signature of the key and information. The signature is what's interesting -- it's generated by another keyholder, whose private key & certificate we call a certificate authority (CA).
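A small sketch of the two halves described above, using the third-party Python cryptography package (my choice of library; any RSA implementation would illustrate the same point). A CA's signature over a certificate is this same operation, performed with the CA's private key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a keypair: the private half stays with the owner, the public half is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# "Signing": only the private-key holder can produce it, but anyone with the public key can check it.
message = b"a public key, plus some information about that public key"
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("signature verifies: only the private-key holder could have generated it")
except InvalidSignature:
    print("signature does not match")
```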
"You've got me. Who's got you?"
How do we trust a CA that has signed a certificate? It itself might be signed by another CA, but at some point, we have to have a root of trust. Those are the CAs that our web browsers and operating systems trust to sign other certificates. You should take a gander around the list (Mozilla ships about 50 organizations as root CAs, Internet Explorer far more). Those roots can directly sign any SSL certificate, or can sign an intermediate CA, which then signs certificates.

The most expensive part of issuing a certificate is verifying that the purchaser is authorized to hold one. Many CAs, including Comodo, have resellers who can instruct the CA to issue certificates; the reseller becomes what is known as the "Registration Authority (RA)." (Full disclosure: Akamai is a reseller of several CAs, including Comodo, although certificates we sign only work with the private keys that we hold on our platform.)
There are two major, fundamental flaws in this architecture.
First, the number of trusted CAs is immense. And each of those CAs can authoritatively sign certificates for any domain. This means that CA Disig (of the Slovak Republic) can issue authoritative certs for www.gc.ca, the Government of Canada's website. (Full disclosure: my wife's mother is from Slovakia, and my father's side of the family is Canadian.) Fundamentally, the list of root CAs in everyone's browser contains authorities based anywhere in the world, including governments known to be hostile to theirs. A related issue is that most enterprises have their own "private CA" which signs intranet certificates; that CA becomes valid for any domain when the user adds it to their trust chain.

Second, RAs are a very weak point. Not only are they in a race to the bottom (if you can buy an SSL cert for under $50, imagine how little verification of your identity the RA can afford to do), but any one of them, if compromised, can issue certificates good for any domain in the world. And that's what happened in the case of the bogus Comodo certificates.
Kudos to Comodo for good incident response, and explaining clearly what happened. I suspect that's the rarity, not the issuance of bogus certificates.
Malware hunting
2011-02-16
Today at the RSA Conference, Akamai Principal Security Architect Brian Sniffen is giving a talk titled "Scanning the Ten Petabyte Cloud: Finding the malware that isn't there." In Brian's talk, he discusses the challenges of hunting for malware hooks in stored HTML pages of unspecified provenance, and some tips and tricks for looking for this malicious content.
In conjunction with his talk, Akamai is releasing the core source code for our vscan software. The source code is BSD3-licensed.
We are hopeful that our experiences can be helpful to others looking for malware in their HTML.
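To be clear, the released vscan code is the authoritative reference. Purely as an illustration of the general idea, a toy scanner might flag script or iframe sources pointing at hosts you don't expect (the allowlist below is hypothetical, and real HTML calls for a real parser rather than a regex):

```python
import re

TRUSTED_HOSTS = {"www.csoandy.com", "www.akamai.com"}  # hypothetical allowlist

TAG_SRC = re.compile(r'<(?:script|iframe)[^>]+src=["\']https?://([^/"\']+)', re.IGNORECASE)

def suspicious_hosts(html: str) -> set:
    """Return external script/iframe hosts that aren't on the allowlist."""
    return {host.lower() for host in TAG_SRC.findall(html) if host.lower() not in TRUSTED_HOSTS}

print(suspicious_hosts('<script src="http://evil.example/payload.js"></script>'))
# {'evil.example'}
```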
Tanstaafl
2011-01-27
I was reading Rafal Los over at the HP Following the White Rabbit blog discussing whether anonymous web browsing is even possible:
Can making anonymous surfing still sustain the "free web" concept? - Much of the content you surf today is free, meaning, you don't pay to go to the site and access it. Many of these sites offer feature-rich experiences, and lots of content, information and require lots of work and upkeep. It's no secret that these sites rely on advertising revenue at least partly (which relies on tracking you) to survive ...if this model goes away what happens to these types of sites? Does the idea of free Internet content go away? What would that model evolve to?
This is a great point, although insufficiently generic, and limited to the gratis view of the web -- the allegedly free content. While you can consider the web browsing experience to be strictly transactional, it can more readily be contemplated as an instantiation of a world of relationships. For instance, you read a lot of news; but reading a single news article is just a transaction in the relationships between you and news providers.
A purchase from an online retailer? Let's consider a few relations:
- The buyer's relationship with:
  - the merchant
  - their credit card / alternative payment system
  - the receiver
  - the payment gateway
  - the manufacturer
  - the shipping company
- The merchant's relationship with its own counterparties
- The payment card industry's interrelationships: payment gateways, acquiring banks, card brands, and card issuers all have entangled relationships
The web is a world filled with fraud, and fraud lives in the gaps between these relationships (Often, relationships are only used one way: buyer gives credit card to merchant, who gives it to their gateway, who passes it into the banking system. If the buyer simply notified their bank of every transaction, fraud would be hard; the absence of that notification is a gap in the transaction). The more a merchant understands about their customer, the lower their cost can be.
Of course, this model is harder to perceive in the gratis environment, but is nonetheless present. First, let's remember:
If you're not paying for something, you're not the customer; you're the product being sold.
Often, the product is simply your eyeballs; but your eyeballs might have more value the more the merchant knows about you. (Consider the low value of the eyeballs of your average fantasy football manager. If the merchant knows from past history that those eyeballs are also attached to a person in the market for a new car, they can sell more valuable ad space.) And here, the more value the merchant can capture, the better services they can provide to you.
A real and fair concern is whether the systemic risk added by the merchant in aggregating information about end users is worth the added reward received by the merchant and the user. Consider the risk of a new startup in the gratis world of location-based services. This startup may create a large database of the locations of its users over time (consider the surveillance possibilities!), which, if breached, might expose the privacy and safety of those individuals. Yet because that cost is not borne by the startup, they may perceive it as a reasonable risk to take for even a small return.
Gratis services - and even for-pay services - are subsidized by the exploitable value of the data collected. Whether or not the business is fully monetizing that data, it's still a fair question to ask whether the businesses can thrive without that revenue source.