Would DNSSEC have helped Twitter?

Twitter had its name servers "stolen". Would DNSSEC have helped protect them? To be brief, no. But looking at why not will help us understand the limitations of DNSSEC. It helps to start with the basics of DNS and DNSSEC.

DNS is broken down into zones. A zone is a hierarchical collection of names, roughly grouped by the periods in the hostname -- e.g., www.csoandy.com exists under three zones: . (the root zone), .com, and .csoandy.com. Each zone has a set of servers that are authoritative for answering queries in that zone, and each zone can delegate responsibility for a subsidiary zone to a different set of servers.
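
To make the delegation chain concrete, here is a minimal sketch that asks each zone above www.csoandy.com for its authoritative name servers. It assumes the third-party dnspython library and a working recursive resolver, and is meant as an illustration rather than production code.

    # Walk the zones above www.csoandy.com and print who serves each one.
    # Requires the dnspython package (pip install dnspython).
    import dns.resolver

    for zone in [".", "com.", "csoandy.com."]:
        answer = dns.resolver.resolve(zone, "NS")
        servers = sorted(rr.target.to_text() for rr in answer)
        print(zone, "is served by", ", ".join(servers))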

With DNSSEC, each zone has a set of keys used to sign answers in the zone: the zone signing key, which signs the records, and the key signing key, which signs the zone signing key. You learn the key signing key for a zone when you receive a delegation; the delegating server adds a special record (a DS record) identifying the key for the zone it is handing you off to.
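
To see both halves of that arrangement, the sketch below (again using dnspython, with example.com standing in for any DNSSEC-signed zone) fetches the DS record published by the parent alongside the delegation and the DNSKEY records published by the zone itself; validating the chain of trust amounts to checking that these match up.

    # Fetch the parent-published DS record and the zone-published DNSKEY set.
    # example.com is just a stand-in for any signed zone; requires dnspython.
    import dns.resolver

    zone = "example.com."
    ds = dns.resolver.resolve(zone, "DS")          # delegation signer, held by the parent (.com)
    dnskeys = dns.resolver.resolve(zone, "DNSKEY") # the zone's own keys (ZSK and KSK)
    print("DS records:", [rr.to_text() for rr in ds])
    print("DNSKEY records:", [rr.to_text() for rr in dnskeys])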


Looking at the incident, it appears that a compromised Twitter employee email account was used to reset an administrative login with Twitter's registrar, and that account was then used to change the delegation for twitter.com. Had Twitter been using DNSSEC, it would have needed to provide the public half of its key signing key to the registrar. Odds are, that would live in the same interface that was used to redirect twitter.com. An adversary would have been able to alter the DNSSEC delegation just as easily as they altered the zone delegation.

DNSSEC has a lot of strengths, but it isn't a magic bullet for all of the weaknesses in the DNS infrastructure.

Modeling Imperfect Adversaries

An important piece of risk assessment is understanding your adversaries. Often, this can degenerate into an assumption of perfect adversaries. Yet when we think about risk, understanding that our adversaries have different capabilities is critical to formulating reasonable security frameworks. Nowhere is this more true than in the streaming media space.

Brian Sniffen (a colleague at Akamai) recently presented a paper at FAST exploring ways of considering different adversaries, especially in the context of different business models. He presents some interesting concepts worth exploring if you're in the space:
  • Defense in breadth: The concept of using different security techniques to protect distinct attack surfaces, specifically aimed at defeating the ways a given adversary type is prone to conduct attacks.
  • Tag-limited adversaries: An extension to the Dolev-Yao adversary (a perfect adversary who sits inline on a communication stream), the tag-limited adversary may only have knowledge, capabilities, or desire to conduct attacks within a limited vocabulary.


His paper is also a good primer on thinking about streaming threat models.

Virtual Patching

Virtual patching, for those new to the term, is the practice of adding a rule to a Web Application Firewall (WAF) to filter out traffic that could exploit a known vulnerability in a protected application. This has triggered debate in the security community -- is this a good thing? Why would a developer fix the vulnerability if it is mitigated?

First off, virtual patching is, in fact, a good thing. The development turnaround time for a WAF rule is almost certainly shorter than the development cycle for the backend application, so virtual patching shortens your mitigation window. That shouldn't really be a topic of debate.
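
To give a sense of what a virtual patch looks like in practice, here is a toy sketch: a small filter sitting in front of a vulnerable application that rejects requests matching a known exploit pattern. The exploit signature and the WSGI-middleware framing are illustrative; a real deployment would express this in your WAF's own rule language.

    # Toy "virtual patch": block requests matching a known exploit signature
    # before they reach the vulnerable application. The pattern is hypothetical.
    import re
    from urllib.parse import parse_qs

    EXPLOIT = re.compile(r"('|--|;)\s*(drop|union)\b", re.IGNORECASE)  # crude SQLi signature

    class VirtualPatch:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            query = parse_qs(environ.get("QUERY_STRING", ""))
            if any(EXPLOIT.search(v) for values in query.values() for v in values):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch\n"]
            return self.app(environ, start_response)

    # usage: app = VirtualPatch(vulnerable_app)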

The interesting debate, then, is how we manage the underlying vulnerability. One school of thought argues that the WAF is only a stopgap, and developers should fix the vulnerability because that's the right thing to do. Another school's argument is a bit more complex. Don't fix the vulnerability. Fix the entire class of vulnerabilities.

If you have a vulnerability in a webapp, odds are that it's a symptom of a greater category of vulnerability. That CSRF bug on one page likely reflects a design flaw in not making GET requests nullipotent, or not using some form of session request management. Given the urgency of an open vulnerability, the developers will likely focus on fixing the tactical issue. Virtually patch the tactical vulnerability, and focus on the flaw in the foundation. It'll take longer to fix, but it'll be worth it in the long run.
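
As a sketch of what "fix the class" might look like for that CSRF example, the snippet below enforces two of those foundations framework-wide: GET requests never change state, and every state-changing request must carry a per-session token. The function names and session interface here are hypothetical, not any particular framework's API.

    # Sketch: enforce CSRF protection for a whole class of pages, not one bug.
    import hmac, secrets

    def issue_csrf_token(session):
        """Attach a random token to the session; embed it in every rendered form."""
        session["csrf_token"] = secrets.token_hex(16)
        return session["csrf_token"]

    def allow_request(method, session, submitted_token):
        """Keep GETs nullipotent; require a matching token for state-changing verbs."""
        if method in ("GET", "HEAD", "OPTIONS"):
            return True   # safe methods must not change state in the first place
        expected = session.get("csrf_token", "")
        return hmac.compare_digest(expected, submitted_token or "")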

DDoS Thoughts

We are used to measuring the efficiency of DDoS attacks in ratios of bits-per-second. An attacker wants to consume many bits-per-second for each of his own bits-per-second that he uses. He would rather send one packet to generate a storm than have to send the storm himself. We can extend this efficiency measurement to other attacks. Let's use the name "flits per second" (for fully-loaded bits) for this more general measurement of cost and pain: sending or receiving one bit-per-second costs one flit-per-second. Doing enough computation to have an opportunity cost of one bit-per-second has a cost of one flit-per-second. Enough disk access to forestall one bit-per-second costs one flit-per-second. Now we can talk about the flit capacity of the attacker and defender, and about the ratio between flits consumed on each side during an attack.
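
A toy calculation, with every number invented purely for illustration, shows how the ratio works: if one small request costs the attacker a few hundred flits but forces the defender to spend bandwidth, CPU, and disk worth tens of thousands, the attack has a large amplification factor.

    # Toy flit-ratio calculation; all numbers are made up for illustration only.
    attacker_flits = 500                # cost to craft and send one small request
    defender_bandwidth_flits = 12_000   # bits-per-second consumed answering it
    defender_cpu_flits = 30_000         # computation, counted as opportunity cost
    defender_disk_flits = 8_000         # disk access, likewise
    defender_flits = defender_bandwidth_flits + defender_cpu_flits + defender_disk_flits
    print("amplification ratio:", defender_flits / attacker_flits)   # prints 100.0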

From a defensive efficiency perspective, we have two axes to play with: first, reducing the flit-to-bit ratio of an attack, by designing optimal ways of handling traffic; and second, increasing the relative client cost to produce an attack.

To reduce the flit cost, consider the history of SYN floods. SYN floods work by sending only the first packet in the three-way TCP handshake; the victim keeps track of the half-open connection after sending its response, and waits for the attacker's follow-up. That follow-up never comes; for the cost of a single SYN packet, the attacker gets to consume a scarce resource (half-open connection slots) for some period of time. The total amount of traffic needed was historically pretty minimal, until SYN cookies came along. Now, instead of consuming the scarce resource, targets use a little bit of CPU to generate a cryptographic message, embed it in their response, and proceed apace. What was a very effective attack has become rather ineffective; against most systems, a SYN flood has a lower flit-to-bit ratio than more advanced application-layer attacks.
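
The SYN cookie trick is worth sketching, because it shows how a little CPU can stand in for a scarce resource. The snippet below is a simplified illustration of the idea -- a keyed hash of the connection 4-tuple plus a coarse timestamp, packed into the initial sequence number so no per-connection state is stored -- and not the exact scheme any real TCP stack uses.

    # Simplified SYN cookie sketch: stateless encoding of connection identity.
    import hashlib, hmac, time

    SECRET = b"per-boot random secret"   # in reality, a randomly generated key

    def syn_cookie(src_ip, src_port, dst_ip, dst_port, now=None):
        ts = int((now if now is not None else time.time()) // 64) & 0x1F  # 5-bit coarse timestamp
        msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}:{ts}".encode()
        mac = int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")
        return ((ts << 27) | (mac & 0x07FFFFFF)) & 0xFFFFFFFF  # fits in the 32-bit ISN

    def validate_ack(src_ip, src_port, dst_ip, dst_port, ack_seq):
        isn = (ack_seq - 1) & 0xFFFFFFFF   # the client's ACK acknowledges our ISN + 1
        now = time.time()
        # Accept cookies generated in the current or previous 64-second slot.
        return any(syn_cookie(src_ip, src_port, dst_ip, dst_port, now - 64 * k) == isn
                   for k in range(2))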

The other axis is more interesting, and shows why SYN floods are still prevalent even today: they're cheap to produce. They don't consume a lot of cycles on the attacking systems, and don't require interesting logic or protocol awareness. The fewer resources an attack consumes, the more likely it will go unnoticed by the owners of the compromised hosts used to launch it (case in point: look at how fast Slammer remediation happened. Why? ISPs were knocked offline by having infected systems inside). Many attack programs effectively reduce to "while (1) {attack}". If the attack is making an HTTP request, filtering the request will often just generate a higher rate of requests, without changing the attacker's costs. If this higher rate has the same effect on you that the lower rate did, your remediation didn't buy you anything. You might have been better off responding more slowly than not at all.

In the general case, this leads us to two solution sets. Traffic filtering is the set of technologies designed to make handling attacks more efficient: handling the attack further out in the infrastructure, classifying traffic as malicious at a lower cost than processing it, or making the processing itself cheaper.

Capacity increases, on the other hand, are normally expensive, and they're a risky gamble. If you add capacity far in excess of any attack you ever see, you've wasted money. On the other hand, if you don't add quite enough, you're still going to be impacted by an event (and now, of course, you'll be criticized for wasting money that didn't help). Obligatory vendor pitch: this is where a shared cloud infrastructure, like a CDN, comes into play. Infrastructures that measure normal usage in terabits per second have a significantly different tolerance for attack capacity planning than most normal users.

H1N1 and telework

The nervousness around H1N1 has pretty much permeated every aspect of our lives. Remember a year or two ago, the hysteria around hand sanitizers and alcohol poisoning? Gone; in its place, we have dispensers in buildings everywhere. That's the power of the fear of H1N1.

Another place is in schooling. Not too long ago, if your kid got sick, school policy was "keep them home if they have a fever or are vomiting." Sanely, this migrated to "keep them home for 24 hours after a fever." Now, however, it is "48 hours fever-free with no medications." Some schools/daycares have added "and no symptoms either," which is moderately impractical for the kids who get a three-week-long lingering cough.

This affects us in the workplace. If an employee has a small child and no stay-at-home caregiver, expect that they're going to miss more time than in prior years, and that they will actually be stressed about it (heck, anyone trapped at home with a no-longer-sick child on a school day is going to end up pretty stressed). Also, you may want to suggest that employees with sick children stay at home even if they aren't the primary caregiver, just to minimize workplace infections.

Key to this is a sane telework plan. Like most things, this comes down to People, Process, and Technology.

People: Do the employee and manager have a good rapport, such that working remotely does not lead to communications failures? Can the employee work without direct management? Can the employee balance the needs of daytime home-life with work?

Process: Do you have understood ways for the employee's status to be communicated? Do other employees know how to reach them? How many hours do you expect when an employee is "working from home"?

Technology: What telework capabilities do you have? (VoIP phones in the home? VTC setups?) What about remote collaboration? (A wiki, IM, a ticketing system, or just email?) Do your employees have enough bandwidth at home to telework? Do you have enough in your office to support them?

It's going to happen to you -- you just need a little prep. And most of that prep? You can typeset it to hand to your auditors; it's a big piece of your disaster recovery plan (DRP).

Secure by design?

"How do we ensure people build secure systems?"

This was the question to the panel before mine at the Thayer School's Complex Systems Symposium. It's not a new question - it comes up every time anyone tries to tackle hard problems around internet security. But it's an unfair question, because we have never built anything securely.

The question was asked in a lecture hall. Every time the symposium took a break, the two aisles bottled up with side conversations, inhibiting the flow of people trying to exit or enter. There were several "captains of industry", extremely talented professors, and bright students in the room; a mob could have swooped in shouting at any minute, or an attacker could have waltzed in unimpeded (I could go on and on with threat scenarios). So who is responsible for the poor security design of that lecture hall?

In reality, security is about making good risk decisions, and accepting that there are some attacks and adversaries that you will not defend against. For internet-connected systems, this tradeoff is harder, as the cost to your adversaries is usually small enough that attacks that are implausible in the physical world become economical (remember the half-penny skimmers?).

Compliance, Security, and the relations therein

Last week, Anton Chuvakin shared his latest in the "compliance is not security" discussion:

Blabbing "compliance does not equal security" is a secret rite of passage into the League of High Priests of the Arcane Art of Security today. Still, it is often quietly assumed that a well-managed, mature security program, backed up by astute technology purchases and their solid implementation will render compliance "easy" or at least "easier." One of my colleagues also calls it "compliance as a byproduct." So, I was shocked to find out that not everyone agrees...


I think there are two separate issues that Anton is exploring.

The first is that a well-designed security control should aid in compliance. As one of his commenters notes, a good security program considers the regulatory issues; or, more plainly, a good security control considers the compliance auditor as an adversary. If you do not design controls to be auditable, you are building risk into your system (sidebar: what security risks are worse than failing an audit?).

But the second point is more interesting. Most compliance frameworks are written to target industry-standard architectures and designs. What if you are doing something so different that a given control has no parallel in your environment? Example: you have no password authentication in your environment; what do you do about controls that require certain password settings? What if your auditor insists on viewing inapplicable settings?

Then, you have three options:

  1. Convince your auditor of the inapplicability of the controls.
  2. Create sham controls to satisfy the auditor.
  3. Find another auditor.


Security and hairdressing

I've become an amateur hairdresser in the past couple of years, thanks to my three-year-old (I suspect that, had I been unwilling to do so, her hair would be quite short right now). Along the way, I've realized that I know as much about hairdressing as I do about many of the disciplines InfoSec touches.

For those of you who've never braided hair, let's try a little manual experiment. Go get three strings. Tie them together at one end to a fixed object: maybe a railing. Now braid them: holding them extended, switch the middle and right one; then the (new) middle and left one. Repeat, each time making sure the middle one goes under the outer one. Do this a couple of times, until you're comfortable with it.

You now know as much about hairstyling as you probably know about some of the more esoteric security disciplines: executive extraction, availability analysis, crypto design, or fault isolation in Byzantine networks. You haven't had to deal with snarls, or working with multiple different hair types, or tried a French braid. Similarly, you may never have designed a communications protocol, or walked a perimeter, or managed IDS sensors on the other end of an intercontinental straw.

Yet every day, as infosec professionals, we are compelled to guide other professionals in how to do their jobs. My advice: be a bit humble. As smart and clever as we think we are, the professionals we deal with are smarter in their own disciplines.

The Problem with Password Unmasking

I disagree with this:

It's time to show most passwords in clear text as users type them. Providing feedback and visualizing the system's status have always been among the most basic usability principles. Showing undifferentiated bullets while users enter complex codes definitely fails to comply.

Most websites (and many other applications) mask passwords as users type them, and thereby theoretically prevent miscreants from looking over users' shoulders. Of course, a truly skilled criminal can simply look at the keyboard and note which keys are being pressed. So, password masking doesn't even protect fully against snoopers.

More importantly, there's usually nobody looking over your shoulder when you log in to a website. It's just you, sitting all alone in your office, suffering reduced usability to protect against a non-issue.


Even though Bruce Schneier agrees with it:

Shoulder surfing isn't very common, and cleartext passwords greatly reduces errors. It has long annoyed me when I can't see what I type: in Windows logins, in PGP, and so on.


Ignoring the issue of security controls that browsers enforce differently for input type="password" versus type="text", the arguments in favor of unmasking fail to address several issues:

  • What class of attackers are completely foiled by password masking? What is the exposure presented by these people? (Think: someone who happens to glance at your screen after interrupting you mid-login.)
  • How much more likely is it that someone will be detected watching your fingers touch-type than someone simply reading a password off a screen?
  • What is the additional level of risk presented by a persistent display of the password vs. an ephemeral display of each keystroke as it is typed?
  • What is the additional attack surface presented by having the password sent back from a network application, rather than a one-way transmission?
  • How much of the rarity of shoulder surfing is due to user awareness of the need to protect passwords, communicated to them at every (masked) login?


All that aside, the correct answer is to reduce our dependency on passwords, both through SSO technologies and through certificate-based and ephemeral authentication schemes.