Two weeks ago, we released our study on why companies are failing PCI. The report's findings are based on 60 recent PCI assessments involving 50 different large companies. Since then, multiple media outlets have picked up and commented on the report. One in particular I'd like to review is an article by TechTarget (which, interestingly enough, now has a new title).
When Keith Gosselin of Biddeford Savings Bank in Maine was told that our report showed nearly half of companies failing Requirement 11.2 (quarterly scanning), he said, "It surprises me how high that number is." It was a big shocker for us as well, but after the shock wore off, we came up with a couple of hypotheses about why companies fail this requirement.
1) Companies are performing quarterly scans, but they are not rescanning until they receive a clean scan (which is what PCI Requirement 11.2 actually requires). Running an automated scan is pretty easy; you also have to remediate the findings and rescan. Many of our customers did not close that loop; a rough sketch of what closing it looks like follows this list.
2) Companies are not properly scanning internal systems. When we started this work many years ago, it was typically driven out of the IT or security groups; now, in most cases, we are working for the office of the CFO. Back then, most companies had at least one person who could keep a halfway consistent Nessus engine running. Today, priorities have changed, but nobody set up an outsourced partner to pick up the slack. VeriSign currently partners with Qualys to perform this service (both internal and external scanning).
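To make the first hypothesis concrete, here is a minimal sketch of what "closing the loop" might look like in code. The `run_scan` and `remediate` functions are placeholders for whatever scanner and ticketing integration you actually use (Nessus, Qualys, or otherwise); the only point is that the cycle ends on a clean scan, not on having scanned.

```python
# Hypothetical sketch of "closing the loop" on Requirement 11.2:
# scan, remediate what the scan finds, and rescan until the report is clean.
from typing import List

def run_scan(targets: List[str]) -> List[str]:
    """Placeholder: kick off a vulnerability scan and return open findings."""
    raise NotImplementedError("wire this to your scanning service")

def remediate(finding: str) -> None:
    """Placeholder: open a ticket or apply a fix for one finding."""
    raise NotImplementedError("wire this to your remediation workflow")

def quarterly_scan_cycle(targets: List[str], max_rescans: int = 5) -> bool:
    """Return True only when a scan comes back clean; that is the
    passing condition for 11.2, not merely having run a scan."""
    for attempt in range(1, max_rescans + 1):
        findings = run_scan(targets)
        if not findings:
            print(f"Clean scan on attempt {attempt}; archive the report as evidence.")
            return True
        print(f"Attempt {attempt}: {len(findings)} findings; remediating and rescanning.")
        for finding in findings:
            remediate(finding)
    return False  # still not clean; escalate rather than file a failing report
```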
The other thing he mentioned was his shock that companies did not understand their data flows. Until recently, PCI had a much more reactive and tactical effect on our customers. Now that some companies have seen the light and understand that this type of compliance requires a programmatic approach, they are beginning to get to the root of the problem: the data. They know they have to find it, and then secure it. Surprisingly, it is a rarity for a company's payment system to be so simple that one person knows it from start to finish; I can think of a couple of examples out of the hundreds of assessments we have done. Knowing where your data lives is something I've discussed before, and it will continue to be a struggle for companies as they grow.
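As a rough illustration of "find it, then secure it," here is a data-discovery sketch: walk a directory tree, flag digit runs that look like PANs, and use a Luhn check to weed out false positives. The starting path is made up, and a real discovery effort would also have to cover databases, logs, and backups, but even a crude sweep like this tends to turn up card data in places nobody expected.

```python
import os
import re

# Candidate PANs: 13-16 digit runs, optionally separated by spaces or dashes.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn check to filter out digit runs that are not card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file_for_pans(path: str) -> int:
    """Count PAN-looking, Luhn-valid numbers in one text file."""
    hits = 0
    with open(path, "r", errors="ignore") as fh:
        for line in fh:
            for match in PAN_CANDIDATE.finditer(line):
                digits = re.sub(r"[ -]", "", match.group())
                if luhn_ok(digits):
                    hits += 1
    return hits

if __name__ == "__main__":
    for root, _dirs, files in os.walk("/data/to/review"):  # hypothetical path
        for name in files:
            path = os.path.join(root, name)
            count = scan_file_for_pans(path)
            if count:
                print(f"{path}: {count} possible PAN(s)")
```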
My favorite comments in this article are from Roger Nebel, Director of Strategic Security for a consulting firm in DC. He states that the report doesn't always account for the critical differences and interrelationships between a threat, a vulnerability, and an asset, all of which combine to produce some level of risk.
Yep, you hit the nail on the head!
I do want to make a slight correction, however. The perceived problem is not with the report, but with the standard. In fact, there was a gentleman at the recent PCI Community Meeting with specific feedback for the council on creating a quantifiable method to calculate risk. While I seriously doubt the council will want to build that kind of policy into the standards, it is a valid point for those folks who are focused on doing the absolute minimum. The absolute minimum is sometimes a reality, but for those who complain that PCI is a moving target, I suggest striving for something north of the minimum so they are not continually chasing compliance like a cat chases a mouse.
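For what it's worth, here is a toy sketch of the kind of quantifiable risk model that gentleman was asking for. The scales and numbers are entirely hypothetical, and nothing like this appears in the standard; the point is simply that the same vulnerability carries very different risk depending on the threat and the asset, which is exactly the nuance a checklist cannot capture.

```python
# A common (and admittedly simplistic) way to express Nebel's point:
# risk is a function of threat likelihood, vulnerability severity, and asset value.
# The scales below are hypothetical; PCI DSS does not define any of this.

def risk_score(threat_likelihood: float, vuln_severity: float, asset_value: float) -> float:
    """Multiplicative risk model: 0-1 likelihood and severity, asset value in dollars."""
    return threat_likelihood * vuln_severity * asset_value

# The same vulnerability yields very different risk depending on the asset:
print(risk_score(0.3, 0.8, 1_000_000))  # cardholder database: 240000.0
print(risk_score(0.3, 0.8, 5_000))      # low-value kiosk:      1200.0
```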
Finally, Nebel took issue with our finding that clients who passed Requirement 6 of the PCI DSS still have applications at risk. My thought is that he was either misquoted or is not familiar with the PCI DSS. Specifically, the current version of PCI lists the former OWASP Top 10 in Requirement 6.5. This year OWASP released an updated Top 10, and even that is still just a subset of the total number of application vulnerabilities you might find on any given day.
Granted, I think Nebel's point is that if you have a super-sexy Software Development Life Cycle (SDLC), you should not have application vulnerabilities in the first place. PCI neither requires nor fully addresses anything like that. It just requires that security be baked into the SDLC, and it does not account for human error. After all, people run this process. You can document it all day, but that does not prevent someone from inserting some cryptic-looking code that opens up a vulnerability.
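To illustrate what I mean, here is the kind of innocuous-looking code that slips through. This is a generic example (SQL injection, squarely on the OWASP Top 10), not something pulled from any assessment, and the table and function names are invented.

```python
import sqlite3

# Illustrative only: a subtle bug of the kind a hurried code review can miss.
# The vulnerable version builds SQL with string formatting; the safe version
# uses a parameterized query. Both look similar at a glance.

def find_orders_vulnerable(conn: sqlite3.Connection, customer_id: str):
    # Looks harmless, but customer_id = "1 OR 1=1" returns every order.
    query = f"SELECT id, total FROM orders WHERE customer_id = {customer_id}"
    return conn.execute(query).fetchall()

def find_orders_safe(conn: sqlite3.Connection, customer_id: str):
    # Parameterized query: the driver treats customer_id strictly as data.
    query = "SELECT id, total FROM orders WHERE customer_id = ?"
    return conn.execute(query, (customer_id,)).fetchall()
```

Code review and developer training, which Requirement 6 does call for, make the first version less likely, but not impossible.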
PCI is trying to build a baseline, and baselines are good, but they do not eliminate risk. Even if you take a conservative view of the standard and fully meet Requirement 6, you still have risk in your applications. Maybe that's another story for another time.
(Note: some sections of this post were taken from the TechTarget article to provide common ground for comments. The full article is available for reference at http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1273153,00.html.)