I’m not afraid to point out misleading or questionable research findings funded by marketing groups strictly to gain headlines. Studies like the cost-per-record or cost-per-breach white papers come to mind: they give us excellent, attention-grabbing headlines supported by a house of cards (the cost-per-record studies especially). The information presented is unusable for risk management purposes, and quoting these studies is a quick way to get laughed out of a room.

[Image: Doctor Tom Saves the Day, by Murray Barnes]

What risk managers need is data that is comparable to their own companies when thinking about costs. Simply taking an average cost per record or an average cost per breach is not concrete enough to drive risk management decisions. There is no way for Firm A to look at the results and understand how they could relate back to it.

Current limitations of security research:

  • We simply don’t have enough breach data from which to draw statistically valid conclusions.
  • Data communicated about breaches is done in an inconsistent manner, further confounding normalization efforts.
  • Privately held companies are not required to disclose breach cleanup efforts, thus limiting the population of available companies dramatically.
  • There is temptation to overfit models due to the lack of data.

All of these limitations make it hard to create that headline and sell security products and services. It makes you wonder how much funded research goes unpublished because the marketing teams that paid for it don’t like the narrative.

So what is it we need? We need research that is transparent enough to help firms make those connections between themselves and the data on the screen. We need real dollars and sound research methods.

Some ideas:

  • Break data into various clusters based on industry, company size, and other demographic details that help firms relate to the data (see the first sketch after this list).
  • Only use concrete costs (look backward, not forward) and avoid trying to capture opportunity costs or other indirect costs that you didn’t concretely measure. Show those costs as separate line items if you did.
  • Don’t overfit your models. If you use factor analysis, don’t turn around and say there are 22 factors that influence the cost of a breach. Instead, find the break on the scree plot, keep the few factors that explain a high percentage of the variance, and focus on those (hint: it is probably fewer than five; see the second sketch after this list).
  • Never rely on affective forecasting; the data it produces leads you down a path into the human psyche that will confound your results (see JPMorgan Chase’s odds-defying ability to stay in business after its breach). Insert eyeroll emoji here.
  • Commit to releasing the research even if it doesn’t help you sell more security widgets. If the research methods are sound, the data and results are useful.
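To make the first idea concrete, here is a minimal sketch of what cluster-based reporting could look like. All column names, size bands, and dollar figures are hypothetical placeholders, not figures from any actual study; the point is the shape of the output: costs summarized per industry-and-size cohort rather than one global average.

    import pandas as pd

    # Hypothetical breach-cost records; column names and values are
    # illustrative assumptions, not drawn from any real study.
    breaches = pd.DataFrame({
        "industry": ["healthcare", "retail", "finance", "retail", "healthcare"],
        "employees": [1200, 45000, 8000, 300, 2500],
        "direct_cost_usd": [2_400_000, 18_000_000, 9_500_000, 150_000, 3_100_000],
    })

    # Bucket companies by size so a firm can find its own cohort.
    breaches["size_band"] = pd.cut(
        breaches["employees"],
        bins=[0, 1_000, 10_000, float("inf")],
        labels=["small", "mid", "large"],
    )

    # Report costs per (industry, size) cluster instead of one global average.
    summary = breaches.groupby(["industry", "size_band"], observed=True)[
        "direct_cost_usd"
    ].agg(["count", "median", "mean"])
    print(summary)

With enough real observations per cell, Firm A can look up its own industry and size band instead of an average that blends hospitals with corner stores.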
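And for the scree-plot idea, a minimal sketch using PCA on synthetic data as a stand-in for a full factor analysis: 22 candidate cost drivers are generated from only three true latent factors, and the break in explained variance says to keep roughly three components, not 22. The data and the drop-off heuristic are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic stand-in for a breach-cost dataset: 200 observations of
    # 22 candidate cost drivers driven by only 3 latent factors.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 3))
    loadings = rng.normal(size=(3, 22))
    X = latent @ loadings + 0.1 * rng.normal(size=(200, 22))  # add noise

    pca = PCA().fit(X)
    explained = pca.explained_variance_ratio_

    # Find the "break" in the scree plot: keep the components before the
    # steepest drop-off in explained variance, rather than all 22.
    drops = np.diff(explained)
    k = int(np.argmin(drops)) + 1  # most negative diff marks the break
    print(f"Explained variance ratios: {np.round(explained[:6], 3)}")
    print(f"Components to keep: {k}")  # ~3 here, well under five

Run on real breach-cost data, the same exercise tells you how many factors actually carry explanatory weight before you start naming 22 of them in a headline.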

Edit: I removed one particular firm’s name because I unfairly singled them out. That firm happens to be the most famous for the cost-per-record study, but the point is more about marketing-funded research in general and the bias it brings (as well as the limited data, etc.).

This post originally appeared on BrandenWilliams.com.
