Tuesday, January 13, 2009

Data Breaches on the Rise

In my Jan. 1 posting, SANS 2009 Security Predictions, I referenced the Identity Theft Resource Center and the timely studies they release on data breach trends. Their new report details a 47% increase in US data breaches in 2008 over 2007.

Not surprisingly, the two weaknesses called out in the breach statistics were lack of encryption and lack of passwords on the breached data. Password protection is a pretty weak method of protecting your data, given the computing power available today to brute-force or dictionary-attack the passwords. Even using pass-phrases instead of passwords is less effective than you'd hope, depending on the length of the phrase, as some platforms simply hash the phrase and store the result, and breaking that hash is relatively simple.
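
To make that concrete, here's a minimal Python sketch of a dictionary attack against an unsalted hash. The wordlist and stored hash are made up for illustration; real attacks test millions of candidates per second against far larger lists.

import hashlib

# Hypothetical stored value: an unsalted MD5 hash, the kind many legacy
# platforms produce when they "password-protect" data.
stored_hash = hashlib.md5(b"letmein").hexdigest()

# A tiny stand-in for a real wordlist (real lists run to millions of entries).
wordlist = ["password", "123456", "qwerty", "letmein", "monkey"]

def dictionary_attack(target_hash, candidates):
    """Hash each candidate and compare to the target; return the first match."""
    for word in candidates:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

print(dictionary_attack(stored_hash, wordlist))  # prints: letmein

Salting the hash and using a deliberately slow function such as bcrypt or PBKDF2 raises the attacker's cost considerably, but plenty of platforms still store fast, unsalted hashes.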

The sector showing the largest increase in reported breaches was business, up 36.6% in 2008 after posting smaller increases of 28.9% and 21% in 2007 and 2006 respectively. Government/Military and Education actually saw notable decreases over the same timeframe.

Only 2.4% of all reported breaches had encryption or other strong protective controls in use. 7.3% of the business breaches involved data in transit, whether on backup tapes, on removable media, or, in some cases, in electronic data movement, which could include misdirected or erroneous transmissions. Overall, data on the move and accidental exposure account for 35.2% of all breaches with a determined cause, and both fall into the "human error" category.

There is vigorous debate among security and privacy professionals about these trends, and the discussions typically take one of two angles: we're a lot better at identifying breaches as they occur, thanks to preventive and detective controls, or we're doing a better job of complying with state data breach notification laws and regulations. While both are true to an extent, I firmly believe there's one other point to be made.

There's still too much data being created and stored, and we've yet to achieve a good maturity model for controlling access to data based on a clear need for people to have it in order to do their jobs. At every step in the information lifecycle, from creation to destruction, how well do we understand what data we have, where it's located, how it's protected, and who has access to it?

When information moves during use, whether from web apps into back-end systems, from collection points to databases, or from data stores to applications or tools that parse, analyze, or process it, where are the gaps in our controls environment, and how do we close them? Do we have a good methodology for testing the effectiveness of the applicable controls, including documentation of what evidence is sufficient to substantiate whatever effectiveness rating we plunk down on a particular control?
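
If those questions are hard to answer, a crude but useful first step is simply scanning your file stores for data that looks sensitive. Here's a rough Python sketch; the regex patterns and the /srv/shared path are hypothetical stand-ins, and a real data discovery tool would be far more thorough.

import os
import re

# Hypothetical patterns for likely PII; real discovery tools use much more
# robust detection (validation, context, checksums) than bare regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_for_pii(root_dir):
    """Walk a directory tree and report files containing likely PII."""
    findings = []
    for dirpath, _subdirs, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file; skip it
            hits = [label for label, pattern in PII_PATTERNS.items()
                    if pattern.search(text)]
            if hits:
                findings.append((path, hits))
    return findings

# Example run against a hypothetical file share:
for path, hits in scan_for_pii("/srv/shared"):
    print(path, "->", ", ".join(hits))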

The Computer Security Division of the National Institute of Standards and Technology has just released SP 800-122, DRAFT Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), open for comment through March 13, 2009. NIST presents a general but effective overview of how to protect PII that is applicable to data of all types, especially if your organization struggles to differentiate between data types. As you'd expect, NIST recommends performing data analysis to ascertain the types of data you hold, assigning risk classifications to the various data types, and implementing security controls based on risk.
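
That classify-then-control loop can be captured in something as simple as a lookup table. Here's a minimal Python sketch; the risk tiers, data types, and control sets below are my own hypothetical examples, not NIST's.

# Hypothetical risk tiers and the controls each tier requires. A real
# program would derive these from a formal risk assessment, not hardcode them.
RISK_CONTROLS = {
    "high": {"encryption at rest", "encryption in transit",
             "access logging", "need-to-know access"},
    "medium": {"encryption in transit", "access logging"},
    "low": {"access logging"},
}

# Hypothetical mapping of data types to risk classifications.
DATA_CLASSIFICATION = {
    "ssn": "high",
    "credit card": "high",
    "email": "medium",
    "marketing copy": "low",
}

def control_gaps(data_type, implemented_controls):
    """Compare implemented controls to what the classification requires."""
    # Treat unclassified data as high risk until proven otherwise.
    tier = DATA_CLASSIFICATION.get(data_type, "high")
    return RISK_CONTROLS[tier] - set(implemented_controls)

print(sorted(control_gaps("ssn", ["access logging"])))
# prints: ['encryption at rest', 'encryption in transit', 'need-to-know access']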

Most mature organizations have some sort of information classification program, but the wheels come off somewhere between tagging the data and reporting that the data has been breached. The challenge is to perform a root cause investigation to see where you went wrong. Did you fail to assess the risk of the data types, or did you fail to apply the controls your data and risk classifications called for? Or perhaps you applied the controls, but your testing didn't identify the control gaps or ineffective controls. Maybe you did identify control gaps, but either didn't have remediation plans in place or suffered the breach before you could complete your remediation. Perhaps human error crept into any (or all) of these phases, or you didn't enforce separation of duties and other best practices.
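
Framed that way, the root cause investigation is just a walk down an ordered checklist, looking for the earliest phase that failed. A quick Python sketch, with the phase names and findings invented for illustration:

# Ordered phases of the classification-to-breach chain described above,
# paired with hypothetical findings from a post-breach investigation.
PHASES = [
    ("risk-assessed the data types", True),
    ("applied controls per classification", True),
    ("testing caught the control gaps", False),
    ("remediation plans were in place", False),
    ("remediation finished before the breach", False),
]

def likely_root_cause(phases):
    """Return the earliest failed phase; that's where the chain broke first."""
    for phase, passed in phases:
        if not passed:
            return phase
    return "no failed phase found; look for human error throughout"

print(likely_root_cause(PHASES))  # prints: testing caught the control gaps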

Until we all get our arms around the concept of controlling how much data we collect, retain, process, and store, and enforcing stringent protection mechanisms at every phase of the information lifecycle, we'll continue to see disturbing breach trends. It's only a matter of time until the incoming Obama administration begins to push for federal data protection and breach notification legislation as an umbrella approach to supplant the myriad state laws and regulatory agency requirements, backed by a straightforward, easily measured enforcement framework.

At this point, I'm not convinced that moving to a federal model is a bad thing. Time will tell.

