Do you remember debate or speech class? I remember a professor assigning me the counterpoint position on an issue I didn't agree with. I always thought the other guy had it easy if his assigned position matched his beliefs, because he already believed what he was saying.
I recently read an article by Ariel Silverstone in CSO Magazine entitled "Where PCI DSS Still Falls Short (and How to Make it Better)" in which Ariel seems to have been put in a similar situation. Either she was asked to publish something (anything), or asked specifically to publish something on PCI; regardless, she should have spent a bit more time on research than she did. After reading her positions, I'm pretty sure she didn't ask any QSAs (or anyone who has been in the industry for more than a few weeks) for assistance during her research phase.
After reading the article, I felt cheated. I picked up the text thinking that I would find some insightful points by a former consultant and current professional affected by PCI. Instead, I was subjected to an article that set up the shot but didn’t follow through.
Her first points are definitely on point, if somewhat historical in nature. The first thing I have to take issue with is her comment on scoping, comparing PCI DSS to the SAS 70 Type II (note: not Type I). While you can carve out a specific scope for PCI and validate it like you would a SAS 70, that's not necessarily a bad thing. Sometimes a business has to do that, depending on how it is set up. It's up to the enforcement entity that accepts the Report on Compliance (i.e., a card brand or bank) to decide if it meets their requirements. Companies that do IT hosting and management require this kind of flexibility; otherwise they would be forced to apply PCI to all of their customers.
Ariel suggests splitting up the requirements, but it is clear to me that she didn't read past page 3 of the PCI DSS. While in some cases you could extract the top-level requirements (particularly the short ones), it is unrealistic to simply group top-level requirements together. For example, some elements of Requirement 11 overlap with Requirement 6, and others arguably belong elsewhere in the standard. Making blanket statements about the top-level requirements is a dangerous game to play.
Next she talks about the concept of segmentation. PCI DSS does not require segmentation, but if you have a diverse network, it is highly recommended as a way to reduce scope. She takes the concept of segmentation and applies it to Requirement 6.1 (OK, I guess she did make it past page 3; I stand corrected), further questioning the rationale of thirty days versus any other number of days.
Part of making a standard is that you have to draw a line in the sand SOMEWHERE. Otherwise you will have people asking those kinds of questions instead of actually progressing toward a goal. The point is to patch systems quickly, or at least to mitigate the risk such that the vulnerability cannot be exploited (say, by removing a .DLL file; thank YOU, .art vulnerability!).
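To make that line in the sand concrete, here is a toy sketch, purely my own illustration (the standard mandates no such script), of what a thirty-day patch-window check might look like. The hostnames and dates are hypothetical.

```python
from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=30)  # the line in the sand drawn by Requirement 6.1

# Hypothetical inventory: hostname -> date the last critical patch was applied
inventory = {
    "pos-terminal-01": date(2009, 5, 2),
    "db-server-01": date(2009, 3, 15),
    "web-frontend-01": date(2009, 5, 20),
}

today = date(2009, 6, 1)  # frozen so the example is reproducible

for host, last_patched in inventory.items():
    overdue = today - last_patched > PATCH_WINDOW
    status = "OUT OF WINDOW" if overdue else "ok"
    print(f"{host}: last patched {last_patched} ({status})")
```

Whether the window is thirty days or forty-five matters far less than having a number everyone can be measured against.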
The next one is a common mistake made by many professionals, including QSAs: the old One Function Per Server requirement (2.2.1). Proper virtualization can be used (including on mainframes) to allow multiple functions on the same piece of hardware while meeting the intent of the requirement. She says she was not aware of any research proving this requirement reduces the risk associated with breaches. I've not looked for it, so I'll assume it doesn't exist. Regardless, any certified security professional will understand the concept of an enclave and know that proper access controls must be put in place to prevent a vulnerability in a DNS server from leading to a dump of customer data. I've been calling that an "intrusion path" for years. It's the basis of many successful complex attacks: I compromise X on one server, which leads me to access Y on the same server, which then leads me to Z on another server, and so on.
Mixing functions on one box without those controls is just plain riskier.
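For the skeptics, here is a toy sketch of an intrusion path modeled as simple graph reachability. The hosts and trust relationships are entirely hypothetical; the takeaway is that co-locating functions adds edges, and edges are what attackers walk.

```python
# Toy model: an edge A -> B means "compromising A gives you a foothold on B".
# All hosts and relationships here are hypothetical, for illustration only.
trust_edges = {
    "dns-service": ["shared-host-os"],   # DNS flaw yields code execution on the box
    "shared-host-os": ["app-service"],   # same hardware, weak separation of functions
    "app-service": ["customer-db"],      # application credentials reach the database
    "customer-db": [],
}

def reachable(start: str, edges: dict[str, list[str]]) -> set[str]:
    """Everything an attacker can eventually touch starting from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(edges.get(node, []))
    return seen

# One DNS vulnerability and, a few hops later, customer data.
print(reachable("dns-service", trust_edges))
```

Proper enclaving, with access controls between functions, is about deleting those edges.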
The next suggestion (4 if you are keeping score), along with content in Suggestion 7, illustrates Ariel's lack of experience in the PCI world. Suggestion 4 discusses Requirement 3.1 and the somewhat contradictory pull between keeping the storage of cardholder data to a minimum while using business, legal, and/or regulatory purposes to define what is stored and for how long.
One thing I've learned about people is that they are much less likely to stick with an off-base position if they are forced to write it down, especially in public companies, where documentation like that could become part of legal discovery. Instead, we have conversations over the phone or in crowded coffee shops, confident that never writing it down will allow us to slip by undetected if something bad happens. I personally believe the intent of this requirement is to force businesses to write down their retention plans, see what they look like on paper, and adjust if necessary. I can't tell you how many times I've been told that data must be retained for ten years, only to have the same person come back to me a week later with a scaled-back version of that stance on paper.
Suggestion 7 screams "outsider with a lack of experience." I remember sitting in difficult meetings in 2004 getting yelled at by all kinds of executives because I was telling them how to run their business. You can't go from zero to eleven in a matter of minutes (unless you are Spinal Tap, of course); you have to ease your way there. If the PCI Council saw fit to require internal encryption in the next version of the standard, I firmly believe there would be a revolt.
If you are not encrypting at least some internal communication of data you deem sensitive, you have a lot more faith in the security of your satellite offices than you probably should. Ever walked into a retail store only to see the system room door wide open? Do you think the corporate data center door is also wide open (hint: it's not, and it's usually surrounded by lots of security)? I'm not saying that we should encrypt everything, but that is a decision we should leave to the merchant or service provider.
Oh, and I’m pretty sure that FTP or “automated tools” are not the same as “end-user messaging technologies.” And no, it is not OK to send unencrypted data over the internet with FTP.
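If files must move over the internet, wrapping the channel in TLS is trivial these days. Here is a minimal sketch using Python's standard-library ftplib with FTPS; the host, credentials, and filename are placeholders, and SFTP over SSH would do just as well.

```python
from ftplib import FTP_TLS  # FTP over TLS (FTPS), part of the standard library

# Placeholder host and credentials -- substitute your own.
ftps = FTP_TLS("ftp.example.com")
ftps.login("user", "secret")  # performs AUTH TLS before sending credentials
ftps.prot_p()                 # protect the data channel with TLS as well
with open("settlement-batch.csv", "rb") as f:
    ftps.storbinary("STOR settlement-batch.csv", f)
ftps.quit()
```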
Skipping back a bit, Suggestions 5 and 6 are a little odd to me. Flexibility allows companies to deploy what is best suited to their environment. The standard used to name Triple DES and AES as two acceptable encryption algorithms, but there are plenty of other algorithms considered just as strong (if not stronger) that should be acceptable. For example, Twofish is a fantastic algorithm (implemented correctly, of course) and should be allowed on PCI-related systems.
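For what it's worth, strong cryptography is not hard to get right with a modern library. Here is a minimal sketch using the third-party Python cryptography package and AES-256-GCM, one vetted choice among several; the point of the standard's flexible wording is that a correctly implemented Twofish would be just as defensible.

```python
# Requires the third-party "cryptography" package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES-256, one vetted "strong" option
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce, must be unique per message under a given key
ciphertext = aesgcm.encrypt(nonce, b"4111111111111111", b"pan-record")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"pan-record")
assert plaintext == b"4111111111111111"  # the classic test PAN, not real cardholder data
```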
A final thought on that suggestion: the PCI Council should NOT be accepting legal risk by mandating particular encryption algorithms or key strengths. She points out why with the Zimmermann reference. Every company should do its own legal analysis to determine the best course of action.
I think you should read her article and form your own opinion, but I view it as a slightly embarrassing showing peppered with valid points here and there. It's a prime example of why you should fully research a hot topic like PCI before publishing a "thought leadership" piece.