Counterfeit Part Risk Analysis – moving from “subjective assessments” to risk analysis supported by empirical data and defensible estimates

Counterfeit part risk has been discussed from various perspectives. Briefings presented by DoD describe a “profile of counterfeit risk” based on the age of technologies and the susceptibility of those technologies to counterfeiting [1]. SAE Aerospace Standard AS6174 presents a counterfeit materiel risk assessment model based on “impact of supply chain” (cost of operations, degraded function, sabotage or malicious functions, personnel injury or death) and “likelihood of counterfeiting” based on production availability from original manufacturers [2]. SAE Aerospace Standard AS5553 includes a “risk stack chart” describing counterfeit electronic part risk as a function of “supplier reliability and product criticality” [3]. DfR Solutions describes counterfeit electronic part risk in terms of probability of failure versus supplier trustworthiness [4]. Integra Technologies describes types of counterfeit electronic parts, tests and inspections used to detect them, and the probability of detection [5]. A common thread weaving through all of these representations is that vulnerability to counterfeits, and the mitigation of that risk, are functions of supplier selection, the due diligence applied when using riskier suppliers, and end-use application considerations.

In a recent article, Dr. David E. Frick describes the hazards of ascribing levels of risk based on esoteric analysis rather than risk assessments supported by empirical data and defensible estimates [6]. While each of the aforementioned representations is helpful in pointing organizations in the right direction, quantitative techniques are needed to support practical applications for evaluating counterfeit avoidance approaches. In this essay, I present notional counterfeit parts risk analysis examples based on a methodology described within the “Risk Management Guide for DoD Acquisition” [7] and discuss implementation issues for DoD, A&D contractors and academia to consider when devising quantitative risk-based approaches to addressing the counterfeit parts threat.

DoD’s “Risk Management Guide” describes risk management with respect to program cost, schedule, and performance objectives. Risk analysis considers the likelihood of a root cause occurring, identifies the possible consequences, and assigns a risk level using a Risk Reporting Matrix.

[Figures from the Risk Management Guide for DoD Acquisition, Sixth Edition (Version 1.0), August 2006]
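The guide’s Risk Reporting Matrix maps a likelihood rating and a consequence rating, each on a 1-to-5 scale, to a low/moderate/high risk level. The following is a minimal sketch of that lookup in Python; the cell assignments below are an illustrative assumption, not a verbatim copy of the guide’s matrix.

```python
# A minimal sketch of a Risk Reporting Matrix lookup. The cell
# assignments are illustrative assumptions, not the guide's exact matrix.

# Rows: likelihood 1 (Not Likely) through 5 (Near Certainty)
# Columns: consequence 1 (Minimal) through 5 (Severe)
RISK_MATRIX = [
    ["Low",      "Low",      "Low",      "Low",      "Moderate"],
    ["Low",      "Low",      "Low",      "Moderate", "Moderate"],
    ["Low",      "Low",      "Moderate", "Moderate", "High"],
    ["Low",      "Moderate", "Moderate", "High",     "High"],
    ["Moderate", "Moderate", "High",     "High",     "High"],
]

def risk_level(likelihood: int, consequence: int) -> str:
    """Map 1-5 likelihood and consequence ratings to a risk level."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("ratings must be integers from 1 to 5")
    return RISK_MATRIX[likelihood - 1][consequence - 1]

# e.g. a highly likely root cause (4) with a moderate consequence (3)
print(risk_level(4, 3))  # -> "Moderate" under the assumed cell assignments
```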

Recent US Government legislation recognizes that vulnerability to counterfeit parts is directly related to supplier selection and the due diligence applied when using riskier suppliers. Section 818 of the FY2012 NDAA, for example, describes categories of suppliers with implied counterfeit parts risk [ref FY2012 NDAA § 818(c)(3)]. The following is a simple representation of supplier categories versus the consequence of a counterfeit part quality escape, using the Risk Reporting Matrix.

[Figure: ‘Trusted Supplier’ Example (Likelihood of Counterfeit Quality Escape vs. Consequence of Failure)]

Like the aforementioned representations, this example is helpful in pointing organizations in the right direction for designing enterprise-level counterfeit avoidance compliance programs. This risk reporting matrix, however, has limited utility for practical applications that evaluate counterfeit avoidance approaches sensitive to specific programmatic constraints.

Notional Counterfeit Parts Risk Analysis

I present below examples of notional counterfeit parts risk analysis that are closer to what a contractor or DoD program office would find useful for evaluating counterfeit avoidance approaches.

NOTE: As I describe these risk analysis examples, please bear in mind the following:

  1. These are notional and oversimplified risk analysis examples presented for illustration purposes only. The assignment of event probabilities, though ranked consistently with industry reports, is arbitrary; the results are not derived from the historical data necessary for a truly quantitative analysis. These examples more closely resemble what Dr. Frick describes as an “honest, subjective assessment”.
  2. The approach illustrated by these examples is similar to how one might approach risk analysis with respect to a broad assortment of quality escapes based on program specific parameters. It does not take into account consequences specific to a counterfeit part quality escape described in Section 818 of the FY2012 NDAA (i.e. contractor responsibilities and enterprise business system ramifications).

For the purpose of the following examples, I refer to SAE Aerospace Standard AS5553 for context [ref 4.1.3, Purchasing Process, subparagraph ‘d’]:

“Require a documented risk assessment and risk mitigation plan, specific to the intended application, for each procurement other than from an OCM or authorized supplier.”

This refers to an end-use-specific risk assessment (represented by the “Risk Assessment” block within Figure B-3 of AS5553) leading to a decision either to modify the assembly design or to procure the part presently used within the assembly design from a supplier other than an original manufacturer or authorized supplier.

Example 1

The scenario for this example is common to DoD sustainment programs:

  • An electronic part is no longer produced by the original component manufacturer; authorized distribution inventory has been exhausted; and the part is not available through “trustworthy” aftermarket suppliers.
  • The obsolete part is available only through the open market, which poses a counterfeit parts risk.

An expedient mitigation approach would be to purchase parts from open market sources without counterfeit avoidance practices (Mitigation Approach ‘A’). Risk analysis, however, reveals that this approach would likely result in a counterfeit part quality escape and a part failure would likely result in a moderate reduction in equipment performance. Eliminating the obsolete parts from the design would remove the counterfeit part risk (Mitigation Approach ‘C’), but further risk analysis reveals the time necessary to modify the design will likely cause a moderate impact on the equipment delivery schedule. If, however, the parts are purchased from ‘Trusted Suppliers’ with robust counterfeit avoidance practices (Mitigation Approach ‘B’), both technical and schedule risk are within acceptable limits.

[Figure: Risk Reporting Matrix, Example 1]

Example 2

In this example the scenario and schedule risks are identical to those in Example 1, but a part failure would likely result in only a minor reduction in equipment performance, with little to no impact on overall program objectives. As a result, technical and schedule risk fall within acceptable limits for all three mitigation approaches.

[Figure: Risk Reporting Matrix, Example 2]
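For illustration, the sketch below reproduces the reasoning of both examples, reusing the assumed matrix from the earlier sketch. Every likelihood/consequence rating, and the choice of which risk levels each program deems acceptable, is a notional assumption tuned to mirror the qualitative statements above; none of it is measured data.

```python
# Notional comparison of mitigation approaches for Examples 1 and 2.
# All ratings and acceptability thresholds are illustrative assumptions.

RISK_MATRIX = [  # rows: likelihood 1-5; columns: consequence 1-5 (assumed)
    ["Low",      "Low",      "Low",      "Low",      "Moderate"],
    ["Low",      "Low",      "Low",      "Moderate", "Moderate"],
    ["Low",      "Low",      "Moderate", "Moderate", "High"],
    ["Low",      "Moderate", "Moderate", "High",     "High"],
    ["Moderate", "Moderate", "High",     "High",     "High"],
]

def risk_level(likelihood, consequence):
    return RISK_MATRIX[likelihood - 1][consequence - 1]

def evaluate(approaches, acceptable):
    """Flag approaches whose technical and schedule risks all fall
    within the program's acceptable risk levels."""
    for name, risks in approaches.items():
        levels = {dim: risk_level(l, c) for dim, (l, c) in risks.items()}
        verdict = "ACCEPT" if all(v in acceptable for v in levels.values()) else "REJECT"
        print(f"{name:32s} {levels} -> {verdict}")

# Example 1: a counterfeit escape is nearly certain on the open market;
# a part failure causes a moderate performance reduction; redesign (C)
# likely causes a moderate delivery-schedule impact.
example_1 = {
    "A: open market, no avoidance": {"technical": (5, 3), "schedule": (1, 1)},
    "B: 'Trusted Supplier'":        {"technical": (2, 3), "schedule": (2, 2)},
    "C: redesign out the part":     {"technical": (1, 1), "schedule": (4, 3)},
}
evaluate(example_1, acceptable={"Low"})  # only B is acceptable

# Example 2: same scenario, but a part failure causes only a minor
# performance reduction and the program tolerates moderate risk.
example_2 = {
    "A: open market, no avoidance": {"technical": (5, 2), "schedule": (1, 1)},
    "B: 'Trusted Supplier'":        {"technical": (2, 2), "schedule": (2, 2)},
    "C: redesign out the part":     {"technical": (1, 1), "schedule": (4, 3)},
}
evaluate(example_2, acceptable={"Low", "Moderate"})  # all three acceptable
```

Note that the acceptability threshold is itself a programmatic decision: the same schedule risk for the redesign approach is rejected in Example 1 but tolerated in Example 2, which is precisely why these analyses must be sensitive to specific programmatic constraints.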

Moving from “subjective assessments” to risk analysis supported by empirical data and defensible estimates

A number of factors need to be considered in order to transform the approach described in these examples from ‘subjective assessments’, suitable for philosophical discussions, into meaningful and practical risk analysis based on facts and data. The following are but a few of the key factors that present challenges for DoD, A&D contractors and academia in devising quantitative risk-based approaches.

Likelihood of counterfeiting across product types

Counterfeit prevention efforts should be focused based on the likelihood of a product being counterfeited and on the fundamental premise that counterfeits tend to find their way into the DoD supply chain through certain categories of suppliers. The 16 March 2012 DoD memorandum on “Overarching DoD Counterfeit Prevention Guidance” [8] states that particular focus will be expected of DoD for “mission critical components”, “critical safety items”, “electronic parts”, and “load-bearing mechanical parts”. GIDEP and ERAI reports provide insight into the proportion of specific part types, within the spectrum of electronic parts, associated with counterfeit parts reported by industry. The Defense Logistics Agency recently presented a summary of its counterfeit prevention activity, including an analysis of its own counterfeit part findings [9]. A comprehensive study of such data sources would go a long way toward quantifying the likelihood of counterfeiting activity associated with specific types of products.
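As a simple illustration of the kind of analysis such a study would involve, the sketch below tallies the share of counterfeit reports by part type from incident-report records. The counts are entirely hypothetical stand-ins; a real analysis would start from the actual GIDEP/ERAI report databases.

```python
from collections import Counter

# Hypothetical part-type fields extracted from incident reports;
# placeholders for what would be parsed from real GIDEP/ERAI records.
reports = (
    ["microcircuit"] * 62
    + ["discrete semiconductor"] * 18
    + ["capacitor"] * 9
    + ["connector"] * 6
    + ["other"] * 5
)

counts = Counter(reports)
total = sum(counts.values())
for part_type, n in counts.most_common():
    print(f"{part_type:25s} {n:4d} reports ({n / total:5.1%} of total)")
```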

Types of counterfeits

The specific types of counterfeit electronic parts vary significantly, and this variety has a major influence on technical risk assessments. Some types of counterfeits do not work at all, or behave nothing like the authentic item; the probability of a quality escape for these types is relatively low, since they are readily caught by inspection and test. Other types of counterfeits, however, closely resemble the authentic item; variations in performance and damage from counterfeiting processes are difficult to detect without extensive testing of the parts themselves. Risk assessments should weigh the consequences of a counterfeit part quality escape against the extent of due diligence applied to prevent an escape. A challenge to devising quantitative risk-based approaches is the lack of data and analysis from which to derive the probability of each type of counterfeit circulating in the supply chain and the specific defects associated with each.
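The sketch below illustrates how this could be quantified once such data exist: the probability of a quality escape is the prevalence-weighted probability that each counterfeit type evades detection. All probabilities here are hypothetical placeholders; deriving them from real detection data is exactly the gap identified above.

```python
# Hypothetical prevalence of each counterfeit type among counterfeit
# parts in circulation (placeholders, not data).
type_mix = {
    "non-functional / empty package":       0.15,
    "wrong die, grossly non-conforming":    0.25,
    "refurbished/remarked, near authentic": 0.60,
}

# Hypothetical probability that a given test/inspection suite detects
# each type (placeholders, not data).
p_detect = {
    "non-functional / empty package":       0.99,
    "wrong die, grossly non-conforming":    0.95,
    "refurbished/remarked, near authentic": 0.60,
}

# P(escape | part is counterfeit) = sum_t P(type = t) * (1 - P(detect | t))
p_escape = sum(share * (1.0 - p_detect[t]) for t, share in type_mix.items())
print(f"P(quality escape | counterfeit part) = {p_escape:.3f}")  # ~0.254
```

Even with these made-up numbers the pattern is instructive: the close-to-authentic counterfeits dominate the escape probability, despite far higher detection rates for the cruder types.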

“Trusted Supplier” criteria

Section 818 of the FY2012 NDAA codifies the keystone of counterfeit avoidance practices recommended by industry and US Government subject matter experts and embodied in industry standards such as SAE AS5553 and SAE AS6174: obtain parts from original manufacturers or their authorized distributors whenever possible. Should DoD consider including suppliers other than original manufacturers or their authorized dealers as ‘trusted suppliers’, DoD will by definition be accepting some level of risk that counterfeit electronic parts will enter its supply chain. Risk assessments must account for the fact that the probability of a counterfeit electronic part quality escape will depend heavily on supplier qualification requirements and on the extent of inspections and tests applied to detect counterfeits.

Inspection and Testing

When using riskier suppliers, due diligence must include inspection and testing to avoid counterfeit part quality escapes. Though much progress has been made toward developing standards in this area, much work remains to better quantify the likelihood of counterfeit part quality escapes versus the extent of tests and inspections applied to detect them. Further progress is also needed to drive consistent and accurate results across independent test laboratories [10]. The outcome of work underway by organizations such as the SAE G-19 committee, the Institute for Defense Analyses [11] and the Center for Hardware Assurance, Security, and Engineering (CHASE) [12] is critical to supporting risk-based counterfeit detection protocols.
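As a starting point, one could model a test protocol as a stack of screens, each with some probability of detecting a given counterfeit type. The sketch below assumes the screens detect independently, a strong idealization (in practice their detections overlap), and uses hypothetical per-screen detection probabilities.

```python
def p_escape(detection_probs):
    """Probability a counterfeit part slips past every screen, assuming
    each screen detects independently (an idealization; real screens
    overlap in what they catch)."""
    p = 1.0
    for d in detection_probs:
        p *= 1.0 - d
    return p

# Hypothetical per-screen detection probabilities for one counterfeit
# type (e.g. a refurbished/remarked part); placeholders, not data.
screens = {
    "external visual inspection":       0.40,
    "X-ray / XRF":                      0.30,
    "decapsulation and die inspection": 0.70,
    "electrical test":                  0.50,
}

print(f"P(escape past all screens) = {p_escape(screens.values()):.3f}")  # 0.063
```

The real work, reflected in the efforts cited above, is replacing these placeholders with empirically derived detection probabilities per counterfeit type, and accounting for the variation in results across test laboratories [10].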

Closing Remarks

Many representations of counterfeit part risk are helpful in pointing organizations in the right direction for designing enterprise-level counterfeit avoidance compliance programs. Risk assessment methodologies supported by empirical data and defensible estimates are needed for practical applications to evaluate counterfeit avoidance approaches sensitive to specific programmatic constraints. Collaboration between DoD, A&D contractors and academia is needed to devise quantitative risk-based approaches to addressing the counterfeit parts threat.

Henry Livingston

References

[1] Paul D. Peters, Deputy Assistant Secretary of Defense for Supply Chain Integration, “Anti-Counterfeit” (Slide 2, ‘Profile of Counterfeit Risk’), Product Support Manager’s (PSM) Conference, 06 June 2012.

[2] AS6174 – Counterfeit Materiel; Assuring Acquisition of Authentic and Conforming Materiel

[3] AS5553 – Fraudulent/Counterfeit Electronic Parts; Avoidance, Detection, Mitigation, and Disposition

[4] Greg Caswell, “Counterfeit Detection Strategies: When to Do It / How to Do It”, DfR Solutions, November 2010

[5] Counterfeit Type vs. Detection Methods, Integra Technologies LLC, May 2012.

[6] David E. Frick, Ph.D., “The Fallacy of Quantifying Risk”, Defense AT&L Magazine, September–October 2012, pp. 18–21

[7] Risk Management Guide for DoD Acquisition, Sixth Edition (Version 1.0), August 2006

[8] Overarching DoD Counterfeit Prevention Guidance, Hon. Frank Kendall, Acting Undersecretary of Defense for AT&L (March 16, 2012)

[9] Counterfeit Items Detection & Prevention, Christine Metz (DLA J-334), September 2012

[10] “Status of Counterfeit Detection”, Steve Walters, Honeywell, ARO/CHASE Workshop, January 2013

[11] “An Assessment of Counterfeit Detection and Confirmation Technologies”, Brian Cohen and Kathy Lee, Institute for Defense Analyses, SMTA-CALCE Counterfeit Electronic Parts and Electronic Supply Chain Symposium, June 2012

[12] “Assessment of Counterfeit Detection Technologies”, Ujjwal Guin and Mohammad Tehranipoor, Center for Hardware Assurance, Security, and Engineering (CHASE), University of Connecticut, ARO/CHASE Workshop, January 2013
