
Tuesday, January 13, 2009

Data Breaches on the Rise

In my Jan. 1 posting, SANS 2009 Security Predictions, I referenced the Identity Theft Resource Center and the timely studies that they release concerning data breach trends. Their new report details a 47% increase in US data breaches over 2007.

Not surprisingly, the two areas called out in the breach statistics were lack of encryption and lack of passwords on the breached data. Password protection is a pretty weak method of protecting your data, given the computing power available today to brute-force or dictionary attack the passwords. Even using pass-phrases instead of passwords is less than effective, depending on the length of the phrase, as some platforms simply hash and store the result, and breaking the smaller hash is relatively simple.
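To make the weakness concrete, here's a minimal sketch of a dictionary attack against unsalted password hashes. The usernames, passwords, and wordlist are all invented for illustration; real attacks use wordlists with millions of entries and GPU-accelerated hashing:

```python
import hashlib

# Hypothetical leaked table of unsalted SHA-1 password hashes
# (accounts and passwords are invented for illustration).
leaked_hashes = {
    "alice": hashlib.sha1(b"sunshine").hexdigest(),
    "bob": hashlib.sha1(b"letmein1").hexdigest(),
}

# A tiny stand-in for a real cracking wordlist.
wordlist = ["password", "sunshine", "letmein1", "qwerty"]

def dictionary_attack(hashes, words):
    """Hash each candidate word and compare it against the leaked values."""
    cracked = {}
    for word in words:
        digest = hashlib.sha1(word.encode()).hexdigest()
        for user, stored in hashes.items():
            if digest == stored:
                cracked[user] = word
    return cracked

print(dictionary_attack(leaked_hashes, wordlist))
```

Both accounts fall to a four-word list almost instantly, because without salting, every candidate hash can be compared against every stored hash at once.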

The sector showing the largest increase in reported breaches was the business category, which rose 36.6% in 2008 after posting smaller increases of 28.9% and 21% in 2007 and 2006 respectively. Government/Military and Education actually saw decent decreases over the same timeframe.

Only 2.4% of all reported breaches had encryption or other strong protective controls in use. 7.3% of the business breaches involved data in transit - either via backup tapes, removable media, or in some cases, electronic data movement, which could include misdirected or erroneous transmissions. Overall, data on the move and accidental exposure account for 35.2% of all breaches with a cause determined, and both fall into the "human error" category.

There is vigorous debate among security and privacy professionals about these trends, and the discussions typically take one of two approaches - we're a lot better at identifying breaches as they occur via preventative and detective controls, and we're doing a better job of complying with state data breach notification laws and regs. While both are true to an extent, I firmly believe there's one other point to be made.

There's still too much data being created and stored, and we've yet to achieve a good maturity model for controlling access to data based on a clear need for people to have the data in order to do their jobs. At every step in the information lifecycle, from creation to destruction, how well do we understand what data we have, where it's located, how it's protected, and who has access to it?
When information moves during use, whether from web apps into back-end systems, from collection points to databases, or from data stores to applications or tools that parse, analyze, or process the data, where are the gaps in our controls environment and how do we close them? Do you have a good methodology for testing the effectiveness of the applicable controls, including documentation about what sort of evidence is sufficient to substantiate whatever controls effectiveness rating you plunk down on a particular measure?

The Computer Security Division of the National Institute of Standards and Technology has just released SP 800-122, DRAFT Guide to Protecting the Confidentiality of Personally Identifiable Information (PII), open for comment through March 13, 2009. NIST presents a general but effective overview of how to protect PII that is applicable to data of all types, especially if your organization struggles to differentiate between data types. NIST obviously recommends performing data analysis to ascertain the types of data you hold, assigning risk classifications to the various data types, and implementing security and controls based on risk.
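The classify-then-control process can be sketched in a few lines. This is a toy illustration in the spirit of the NIST guidance, not anything from SP 800-122 itself; the field names, risk levels, and control mappings are all invented:

```python
# Hypothetical mapping of data fields to risk levels (invented for illustration).
RISK_LEVELS = {
    "ssn": "high",
    "credit_card": "high",
    "email": "moderate",
    "zip_code": "low",
}

# Hypothetical controls required at each risk level.
CONTROLS = {
    "high": ["encrypt at rest", "encrypt in transit", "access logging"],
    "moderate": ["encrypt in transit", "access logging"],
    "low": ["access logging"],
}

def required_controls(field):
    """Map a data field to its risk level, then to the controls for that level."""
    level = RISK_LEVELS.get(field, "moderate")  # unknown fields default to moderate
    return CONTROLS[level]

print(required_controls("ssn"))
```

The real work, of course, is in the inventory and analysis that populate those mappings, and in verifying that the mapped controls are actually in place.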

Most mature organizations have some sort of information classification program, but the wheels come off somewhere between tagging the data and reporting that the data is breached. The challenge is to perform a root cause investigation to see where you went wrong. Did you fail to risk assess the data types, or did you not apply the controls that were applicable based on data and risk classifications? Or perhaps you applied the controls and it turned out your testing didn't identify control gaps or the existence of ineffective controls. Maybe you did identify control gaps, but either didn't have remediation plans in place, or the breach occurred before you could complete your remediation. Perhaps human error was involved in any (or all) of these phases, or you didn't enforce separation of duties or other best practices.

Until we all get our arms around the concept of controlling how much data we collect, retain, process, and store, and enforcing stringent protection mechanisms at every phase of the information lifecycle, we'll continue to see disturbing breach trends. It's only a matter of time until the incoming Obama administration begins to push for federal data protection and breach notification legislation as an umbrella approach to supplant the myriad state laws and regulatory agency requirements while pushing a straightforward, easily-measured enforcement framework.

At this point, I'm not convinced that moving to a federal model is a bad thing. Time will tell.


Tuesday, December 23, 2008

Round 1 Candidates for SHA-3

NIST announced the round 1 candidates for the SHA-3 competition, and crypto experts and math geeks everywhere had a SHA-gasm.

As background, the National Institute of Standards and Technology decided to open a public competition for the development of SHA-3, the latest incarnation of the Secure Hash Algorithm.

NIST decided that a new version was needed due to the increase in attacks against SHA-1, and the fact that both SHA-1 and SHA-2 share a common framework. If SHA-1 goes down, SHA-2 won't be far behind.


The competition is supposed to run through 2012, and first round entrants needed to have their submissions in by October 31, 2008. Sixty-four submissions were received, and 51 of these were accepted. A couple of those accepted have already been broken, so it's looking like it won't take nearly as long to trim the field as originally expected.


SHA was developed as a set of cryptographic hash functions by the NSA, and a great many security protocols and applications employ SHA. Since SHA is essentially really hard math rather than magic, it's not unbreakable, and over the last several years, concerns have been raised about potential mathematical weaknesses in the existing algorithms.

SHA-1 attacks in 2005 pointed out some security flaws, and everyone agreed that a stronger hash function would be needed. As of today, there haven't been any reported SHA-2 attacks or associated flaws.


The SHA-3 competition is similar to the efforts to develop the Advanced Encryption Standard (AES), a block cipher adopted as an encryption standard by the US government. Development of AES was needed after it became apparent that neither DES nor 3DES was secure in this age of more powerful computing. AES is now used extensively in symmetric key encryption.


A hash function takes binary data (1s and 0s for you non-binary types) called the message, and condenses it down into the smaller message digest. To make that easier to understand, think about Reader's Digest magazine - taking longer stories and squishing them down for presentation - even though the whole story wasn't there, you got the gist of what the writer was saying.


A cryptographic hash function is a deterministic procedure (the algorithm, or really hard math) that takes a chunk of data and returns a fixed-length bit string, known as the hash value. If I go back and change any of the original chunk of data, either accidentally or on purpose, then the hash value will also change. From a security perspective, I would know that the original data has been tampered with.

In this case, I would call the chunk of data the message, and the hash value the message digest. I can use the hash value as part of a digital signature (such as signing email or data transmissions to ensure integrity), for authentication purposes (proving you are who you say you are before granting access), or for any number of information security reasons.
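The fixed-length and tamper-evident properties are easy to demonstrate with Python's standard hashlib module (the message text here is just an invented example):

```python
import hashlib

# The "message" - an arbitrary chunk of data we want to protect.
message = b"Wire $500 to account 12345"

# SHA-256 always yields 256 bits (64 hex characters), no matter
# how large or small the input message is.
digest = hashlib.sha256(message).hexdigest()
print(len(digest))  # 64

# Change a single character of the message...
tampered = b"Wire $900 to account 12345"
tampered_digest = hashlib.sha256(tampered).hexdigest()

# ...and the resulting digest changes completely, exposing the tampering.
print(digest == tampered_digest)  # False
```

This is why a digest makes a good integrity check: the recipient recomputes the hash over what arrived and compares it to the digest that was sent (or signed), and any mismatch means the message was altered in transit.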


Hopefully, the SHA-3 competition will provide a foundationally-secure algorithm that will buy us 5-10 years of grace before SHA-3 itself is broken. Letting crypto experts worldwide submit their own entries while trying their best to crack the submissions of others is a good start. It takes a special mind to be able to understand this stuff, and an incredible mind to create it.