“Risk Based Thinking” Vs. “Faith Based Thinking”

“Risk based thinking” and “risk based approaches” have become popular themes in a number of quality, project and technical management circles, including counterfeit part avoidance and detection practices. The final rule under DFARS Case 2012-D055 discusses the use of a risk-based approach for a contractor counterfeit electronic part detection and avoidance system. New industry standards introduce risk based thinking when selecting tests and inspections to detect counterfeit parts.

In the following article, Dr. David E. Frick describes the hazards of ascribing levels of risk based on esoteric analysis versus risk assessments supported by empirical data and defensible estimates …

David E. Frick, Ph.D., “The Fallacy of Quantifying Risk,” Defense AT&L Magazine, September–October 2012, pp. 18–21

In a recent opinion piece, quality management system expert Chris Paris (author of Eyesore 9001) discusses problems facing users of the new “risk based thinking” requirements and offers an important warning …

What is ISO 9001’s “Risk-Based Thinking” Anyway?

As with other applications of “risk based thinking,” when studying the basis of counterfeit part risk assessment methods, the user community should beware of “faith based thinking” approaches that transfer risk to the end user rather than reduce it.

NOTE: For those of you in the New York City area next week, Chris Paris will be speaking on the subject of “risk based thinking” at an event sponsored by the NY/NJ Metro Section of ASQ – “Risky Business: Surviving the Future of ISO 9001:2015”.


2 thoughts on “Risk Based Thinking” Vs. “Faith Based Thinking”

  1. Mary Lockhart says:

    These are valid criticisms of the fad for neologisms in our industry, but not of the work that non-risk-management professionals must do to protect the military supply chain. There is a body of work concerned with estimating risk (Frick’s work parallels that of Douglas Hubbard, who teaches non-risk managers to do this) and Monte Carlo tools to help with it.

    One of the “risks” folks are dancing around is the costs, and transfers of costs, associated with testing everything in the supply chain and spinning up a huge overhead in order to catch what should be a diminishing number of counterfeit parts. I say diminishing, because presumably there has been widespread and effective action taken to manage inventories and to prevent the purchase of bad parts to start with: getting people to do their jobs. If this has been effective, then the amount of at-risk inventory should be decreasing rapidly, and the permanent overhead of more, better, more widespread testing will actually be catching fewer parts.

    I ran a Monte Carlo simulation of what this might look like, and it isn’t pretty. Risk-based thinking may be another idiot term, co-opted from another discipline (so what? I work in the defense industry, where there’s nothing too banal for its own acronym), but maybe there’s something useful there.
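The commenter’s simulation is not published, but the dynamic she describes can be sketched in a few lines. The sketch below is a minimal Monte Carlo model under invented assumptions: counterfeit incidence declines exponentially year over year while the annual testing overhead stays fixed, so the cost per counterfeit actually caught climbs. Every rate, count, and dollar figure here is hypothetical.

```python
import random

random.seed(42)


def simulate(years=10, trials=1000, initial_rate=0.01, decay=0.7,
             lots_per_year=500, test_cost_per_lot=2000.0):
    """Monte Carlo sketch: counterfeit incidence decays each year while
    the testing overhead stays fixed, so cost per catch rises.
    All parameters are invented for illustration."""
    results = []
    for year in range(years):
        rate = initial_rate * (decay ** year)  # assumed exponential decline
        caught = 0
        for _ in range(trials):
            # counterfeit lots detected this year in one simulated trial
            caught += sum(1 for _ in range(lots_per_year)
                          if random.random() < rate)
        avg_caught = caught / trials
        overhead = lots_per_year * test_cost_per_lot  # fixed annual test cost
        cost_per_catch = overhead / avg_caught if avg_caught else float("inf")
        results.append((year, avg_caught, cost_per_catch))
    return results


for year, caught, cost in simulate():
    print(f"year {year}: ~{caught:.2f} counterfeit lots caught, "
          f"${cost:,.0f} per catch")
```

Under these assumptions the simulated catch count falls each year while the per-catch cost of the standing test program rises, which is the trade-off the comment is pointing at; different decay and cost assumptions would move the numbers but not the shape of the curve.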

  2. Zach Collier says:

    Some of the discussion seems to be confusing “subjective” with “anything that isn’t quantitative”, when these are two different issues. You can imagine a grid where qualitative/quantitative is on one axis and objective/subjective is on the other. Something can be subjective and still quantitative – and estimates of probability (by definition) fall into that category.

    Empirical data are useful and necessary, and in terms of looking at documented *frequencies*, can be helpful in zeroing in on an estimate of the *probability* that you’ll get counterfeits in the future. But a future estimate is always just that: an estimate. Just as in the stock market, where there is an abundance of empirical data, “past performance is not a guarantee of future results”. So risk assessment, which requires some degree of estimation, is inherently subjective. Where people seem to go wrong is to automatically equate subjective with “bad”.

    From a practical standpoint, waiting for “perfect information” leads to paralysis by analysis: doing nothing until we have more empirical data is probably worse than making the best decision based on the (albeit imperfect) information available at the time.
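The distinction drawn above, that an estimate can be subjective yet still quantitative, can be made concrete with a standard device: Laplace’s rule of succession turns an observed frequency into a forward-looking probability estimate. The counts below are invented for illustration, and the uniform prior the rule assumes is itself a subjective choice.

```python
def laplace_estimate(counterfeit_lots, total_lots):
    """Rule-of-succession estimate of the probability that the next lot
    is counterfeit. Subjective (it assumes a uniform prior) yet fully
    quantitative."""
    return (counterfeit_lots + 1) / (total_lots + 2)


# Invented counts for illustration: 3 counterfeit lots in 400 inspected.
freq = 3 / 400                   # documented frequency: 0.0075
est = laplace_estimate(3, 400)   # forward-looking estimate: 4/402
print(f"observed frequency: {freq:.4f}, estimated probability: {est:.4f}")
```

Note the two numbers differ: the frequency reports the past, while the estimate hedges toward uncertainty about the future, which is exactly the frequency-versus-probability gap the comment describes.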
