Risk metrics are the heart and soul of information security indicators. A proliferation of tools and assessments has emerged, each attempting to quantify the state of information security. Given the nature of what is being measured, this is arguably one of the toughest challenges in the metrics space. The recent trend is for different bodies to develop and publish their own standards, which creates confusion regarding accuracy and applicability. Why all the turmoil, competing models, and misalignment? The sad story is (cue the somber violins) we just have not figured out how to measure information security risks very well.

I have seen and applied many different methods, audits, and evaluations with varying degrees of success and disappointment. I have come to the following three basic conclusions:

1. Current tools and methods lack maturity in this area, in both accuracy and comprehensiveness (and yes, I am guilty of contributing to the pool).

2. No silver bullet exists. A unified method providing a predictive, overarching, and detailed risk analysis is unlikely. Different approaches have their applicability. Choose wisely.

3. There is no replacement for a security professional's brain. Everything from selecting the analysis method to gathering relevant data to interpreting the results requires a seasoned security professional. There is no substitute that can handle the ambiguity, chaos, and relational dependencies affecting the outcome.

An example will help express some of the challenges. The OCTAVE methodology, created by Carnegie Mellon University some years ago, has been a battle-tested veteran in this role. It is a qualitative-to-quantitative device which leverages the expertise of key people to assign a numerical value to the risk in their respective areas.
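To make the qualitative-to-quantitative idea concrete, here is a minimal sketch of one such scoring step. The ordinal scale, the multiplication of likelihood by impact, and the simple averaging across experts are all illustrative assumptions on my part, not the actual OCTAVE procedure.

```python
# Hypothetical qualitative-to-quantitative risk scoring step.
# The scale and the combination rule are illustrative assumptions,
# not the published OCTAVE methodology.

# A simple ordinal scale an expert might use for likelihood and impact.
SCALE = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Convert two qualitative ratings into one numeric risk value."""
    return SCALE[likelihood] * SCALE[impact]

def area_risk(expert_ratings) -> float:
    """Average the scores from several experts for one assessment area."""
    scores = [risk_score(lik, imp) for lik, imp in expert_ratings]
    return sum(scores) / len(scores)

# Three experts rating the same area; note how much their inputs diverge.
ratings = [("high", "medium"), ("medium", "medium"), ("high", "high")]
print(round(area_risk(ratings), 2))  # prints 6.33
```

Even in this toy form, the spread between the three experts hints at the problem discussed next: every translation from human judgment to a number is a place where bias and differing base knowledge leak in.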
Because of personal bias and fears, the need to allow flexible ways of answering questions, and the varying degrees of base knowledge among the experts, results can vary greatly without even factoring in the changes occurring in the threat landscape.

Let me be clear: I am a fan and a longtime supporter. However, the methodology has its limitations. I have developed several assessments based upon the model in a large environment. As long as the limitations are accepted, it is applied where its strengths can be leveraged, and the process is rolled out properly, the results can be very valuable.

But don't confuse value with precision. I have observed the accuracy to be +/- 40% in complex organizations. I believe this is largely due to the multiple tiers of qualitative-to-quantitative analysis and the bias introduced at each level. Credible sources have reported a better +/- 20% accuracy for smaller implementations. Although these numbers sound terrible, they are very good compared to other methods. I have great respect for the chaps at Carnegie Mellon University who created the methodology. Groups within our company have used a modified form of this approach, with advanced structures tailored to our computing ecosystem, for years with great success. The low accuracy rate is not a poor reflection on the CMU model; rather, it is a stark insight into how immature we are in this field.

So this is a sad story, but one which is not over. A cadre of very bright people is working to tackle this problem. In the short term, I expect to see many more methods, theories, templates, and standards emerge for specific situations. In the end, I doubt we will ever have a unified way to measure security risks, but I hold high hopes that the best will be culled to a small number which can be applied to most situations and deliver reasonable metrics.