See part 1 here, part 2 here, part 3 here.

The Funniest Controls that You Didn’t Design
Some of my most cherished stories and experiences come from customers and vendors who had the right intentions, but never seemed to follow the basic doctrines listed above on how good compensating controls are made (by the way, if you read this and think, ‘Hey! He is talking about ME!?’, I’m not. I promise).

During my career I did some IT auditing for a bank that was owned by my employer, so I know the drill of responding to auditor findings. The response usually starts with a meeting bringing all the key stakeholders together, a spreadsheet listing all the findings, and lots of grumbling about how picky “those damn auditors” are. Once the findings are separated into the legitimate and ridiculous piles (as defined by the assembled gallery of legumes; it’s OK, I was one of the nuttiest in my time), the ridiculous ones are assigned to experts to push back on the auditors. “We don’t need that control because of a control over here,” or “This gap does not apply to our environment,” are common phrases uttered in the next round of meetings with said auditors. After all of that, a happy (or potentially unhappy) medium is established, and we close out the audit.

The same process is often applied to PCI, and the compensating control Cha Cha commences.

Before I poke fun at the following examples, please understand that I am only illustrating a point. At no time were these suggestions made by people who didn’t understand both the requirement and the capabilities of the technology in question. These people were professionals, and based on their credentials and experience, they should have known better.

Encryption has always been a hotly debated topic, from the early “Just Do It”® message that was pounded into our heads to the cooler-headed “Slow down, it’s a mainframe” axiom that we live by today. My favorite failed compensating control for Requirement 3.4 comes from a vendor who missed the last ferry off Gola Island. I received a call from this vendor late one afternoon and listened to their product team try to convince me that RAID-5 was essentially equivalent to encryption. Their argument was that because you could not take any one drive and reconstruct useful, compromise-worthy data from it, their product should be considered a valid encryption solution to sell to companies.

Right.

So if one drive (probably damaged) falls off a truck during transport, the technology does prevent someone from reconstructing all the data that was on that system. If the system were large enough, chances are the data on that single drive would not be of much use to nefarious individuals either. But that’s not really the goal of the requirement, is it? Physical theft prevention is covered in other areas of the standard. The point of the requirement is to render the data unreadable anywhere it is stored. RAID may render the data unreadable on one physical drive, but it does not render it unreadable in any other circumstance. A simple compromise of one area of the system could lead to the access and theft of massive amounts of unencrypted data.
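To see why striping with parity does not render data unreadable, here is a minimal sketch of RAID-5-style XOR parity (a toy example of mine, not a real RAID implementation): nothing in the scheme involves a secret, so anyone with access to the assembled array, or to enough of the drives, reads the data back directly.

```python
# Toy sketch of RAID-5-style striping with XOR parity. Note there is no key
# anywhere in this scheme -- "unreadable" only applies to a single orphaned drive.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data = b"4111111111111111"            # toy PAN-like value
drive1, drive2 = data[:8], data[8:]   # data striped across two "drives"
parity = xor_bytes(drive1, drive2)    # third "drive" holds the parity block

# Lose any single drive and the stripe is fully recoverable from the others...
recovered = xor_bytes(parity, drive2)
assert recovered == drive1

# ...and reading the intact array requires no decryption at all.
print((drive1 + drive2).decode())     # 4111111111111111
```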

Speaking of encryption, disk-only encryption inside data centers is not very useful either, unless additional user credentials are tied to the decryption process. Another favorite was a vendor that offered PCI compliance through an encryption appliance that was completely transparent to the operating system. So basically, you were only protecting the data as it sat on disk, in a secured facility, with gates, cameras, and Buck, the not-so-friendly security guard who looks like a hiring manager gave a night shift and a taser to the ex-bouncer of a strip club. If applications sat on disk drives housed in the unlocked part of a post office, then I could see the value here. Until then, the solution focuses only on the physical media and nothing else.
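Here is a toy model (purely hypothetical, not any real appliance) of why the transparency is the problem: because the decryption happens below the operating system, every caller that reads through the OS gets plaintext, attacker and legitimate user alike. The encryption only matters to someone holding the bare media.

```python
# Toy model of "transparent" disk encryption (illustrative only; real appliances
# use real ciphers, but the access pattern is the same).

KEY = 0x5A  # held by the appliance, never presented by the caller

def write_to_disk(plaintext: bytes) -> bytes:
    """What actually lands on the platters: ciphertext."""
    return bytes(b ^ KEY for b in plaintext)

def read_through_os(on_disk: bytes) -> bytes:
    """What any process on the running host gets back: plaintext."""
    return bytes(b ^ KEY for b in on_disk)

stored = write_to_disk(b"4111111111111111")

print(stored)                   # gibberish -- the stolen-drive scenario is covered
print(read_through_os(stored))  # b'4111111111111111' -- a compromised app sees this
```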

Encryption is really not the big problem with Requirement 3; key management is. Once companies figure out that encryption technologies are available for their platforms, they realize that key generation and management is a whole different problem. One vendor, who apparently thought I had already checked out for the weekend, made a case for using the COBOL Random Number Generator (RNG) to spit out sixteen decimal digits (technically 128 bits of data when stored as characters) to use as an encryption key.

Yes, you are trying to be random, and yes, you will end up with a 128-bit key. But seriously, anyone with a basic knowledge of encryption will quickly find the problem with that approach. It’s not that COBOL’s RNG is less than random, but that you have eliminated a giant section of the possible key space! A 128-bit key generated in that manner is the equivalent of approximately 53 bits of encryption, thus making it computationally feasible to brute force that key (50 computers could do it in less than one year).
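The arithmetic behind that 53-bit figure is just a change of base: sixteen decimal digits can only take on 10^16 distinct values, and log2 of 10^16 is about 53.2. A quick sketch:

```python
import math

# Sixteen decimal digits yield only 10**16 distinct keys, regardless of how
# many bits the resulting value occupies in memory.
possible_keys = 10 ** 16

# Effective strength in bits is log2 of the number of possible keys.
effective_bits = math.log2(possible_keys)
print(f"effective strength: {effective_bits:.1f} bits")   # ~53.2, not 128

# Compared with a truly random 128-bit key, the key space has shrunk by a
# factor of roughly 2**75.
print(f"key space reduced by ~2**{128 - effective_bits:.0f}")
```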

Look for Part 5 on Monday!

This post originally appeared on BrandenWilliams.com.
