This paper was published in two parts in the ISASI forum, October 1991 (24:3) and March 1992 (25:2).


Quality Management for Accident Investigations
Ludwig Benner, Jr., P.E.
Ludwig Benner & Associates
12101 Toreador Lane
Oakton, VA 22124
(703) 620-2270
Ira J. Rimson, P.E.
the validata corporation
POB 93308
Albuquerque NM 87199-3308
(703) 978 2944

Foreword

In his paper at the 1990 ISASI Seminar, Mr. Rimson presented a reasoned argument for developing quality controls for air safety investigations. (Rimson, 1991) Mr. Benner's efforts to develop and apply quality controls to investigations arose during his tenure at the U.S. National Transportation Safety Board (NTSB), to ensure that staff reports satisfied the agency's quality needs. His more than twenty years' experience in safety research and investigations substantiates the urgency of the current arguments. This report attempts to add momentum to the observed and acknowledged need for more and better quality control over investigation processes and outputs.

An article in ISASI forum titled "Are These the Same Accident?" (Rimson, 1983) compares two independent and contradictory outputs from the same NTSB investigation, and provides compelling evidence of the need for investigative quality controls. The need was reinforced by a review which identified the investigation models and objectives of most major U.S. Government accident investigation programs and assessed the quality of their results. (Benner, 1983) This review of seventeen investigation program guidance documents disclosed little, if any, requirement for quality controls.

Understanding, prediction and control are fundamental concepts of scientific inquiry. This paper is limited to the "understanding" part of the inquiry process: the part which purports to describe "what" happened, and "why" it happened. The analytical output is the foundation for all else which flows from the investigation. If it is inaccurate, so is everything downstream: the judgments, conclusions, opinions, recommendations or decisions based on the report of what happened. It is the "information gateway" of the investigation, which is why we have selected it as the object of our initial foray into Investigation Quality Management.

Quality Management Needs for Investigations

Little research has been directed toward defining either quality management needs or quality criteria for investigations. (Benner, 1975) One book which addresses the issue is MORT Safety Assurance Systems (Johnson, 1980), but the criteria therein are poorly defined. Existing guidance is generally found in accident investigation manuals; it is for the most part excessively generalized and abstract, rather than specific and concrete. Early efforts to develop quality standards and controls required that needs be identified by direct observations of quality management efforts at the NTSB and other organizations.

The general systems model provides orderly guidance for observing quality control efforts. Investigations are processes, as is quality control. The general systems model elements (Figure 1) help define the desired outputs of the investigation process, and what it takes to produce those outputs, by forcing orderly examination of the investigation process elements.

Figure 1. General Systems Model

Examining what is involved in each element of the process reveals what we must look for to define quality control needs.
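For readers who find a concrete rendering helpful, the elements of Figure 1 can be written out as a simple data structure. The sketch below is our illustrative gloss, not part of the original model; it merely enumerates the elements discussed in the sections that follow.

    # A minimal sketch of Figure 1's elements as a plain data structure.
    # The element lists come from the sections that follow; the structure
    # itself is our illustrative gloss, not part of the model.
    general_systems_model = {
        "inputs": [
            "the occurrence",
            "investigator knowledge and skill",
            "data created during the accident",
            "observations of the accident data",
        ],
        "operations": [
            "acquire data",
            "transform and organize the data",
            "integrate data to synthesize a description of what happened",
            "validate the description",
        ],
        "outputs": [
            "forms", "written descriptions", "verbal descriptions",
            "supporting data", "other outputs",
        ],
        "feedback": ["questioned outputs", "demands for change", "litigation"],
    }

    for element, items in general_systems_model.items():
        print(element.upper())
        for item in items:
            print("  -", item)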

Investigation Inputs:

(1) The occurrence. To make bean soup, you must have beans. To have an accident investigation you must start with an accident. We assume that an accident has occurred and that it will be investigated.

(2) Investigation knowledge and skill. The second input is the knowledge and skill of the person who investigates the accident. That person must know what an investigation is, what the desired outputs are, and the processes by which those outputs are produced. Only then can knowledge and skill be exercised to produce the outputs. Investigators' knowledge and skills are highly variable, and influence the quality of the outputs. We usually assume that knowledge and skill adapt to meet established quality criteria; that assumes, usually without verification, that such criteria exist.

(3) Data created during the accident. The third essential input to an investigation is data created during the accident process. Data are the raw material from which the investigator fashions the investigation outputs. If an accident generates no data, you have no means for determining what happened.

(4) Observations. The fourth essential input is observations of the accident data. Observation is the systematic noting and recording of characteristics of relevant objects, properties or conditions by descriptive methods akin to scientific notation. When an accident generates data, the investigator must know where to look for it, how to recognize it, and how to use it.

Accident Investigation Operations

You have to do something with the inputs to produce the output. When viewed systematically, the investigation process consists of several discrete procedures, at minimum:

(1) Acquire data. The first step in the investigation process is to acquire data surviving the accident. The procedures used to acquire data affect output quality. If data are screened for consistency with preexisting hypotheses, the output will be biased. To avoid bias, the investigator must search for and attempt to accommodate every available item of surviving data before deciding its relevance. (This process must include accommodation of the absence of expected data, which may in itself be a significant item of data!)
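One way to make this discipline mechanical is to log every surviving item, including noted absences of expected data, and to defer relevance decisions until hypothesis testing. The following is a minimal sketch of such a log; the record fields and the sample items are invented for illustration.

    # A minimal sketch of an unfiltered acquisition log. Every surviving
    # item is recorded; relevance is decided later, at hypothesis testing.
    # The fields and sample items are illustrative assumptions.

    acquisition_log = []

    def log_item(description, source, expected_but_absent=False):
        """Record a data item (or the noted absence of an expected item)."""
        acquisition_log.append({
            "description": description,
            "source": source,
            "expected_but_absent": expected_but_absent,
            "relevance": None,   # deliberately undecided at acquisition time
        })

    log_item("left wingtip scrape mark, 40 m before main wreckage", "site survey")
    log_item("fuel selector found on OFF", "wreckage examination")
    # The absence of expected data is itself a significant data item:
    log_item("no distress call on ATC tape", "ATC recording", expected_but_absent=True)

    for item in acquisition_log:
        print(item)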

(2) Transform and organize the data. An investigator cannot append a smashed aircraft or component to the report. The accident objects, conditions and properties must be observed, and the observations transformed into data in a format which accurately documents a description of what happened. Data transformation is required to organize the data into a framework which will permit integration from numerous sources. Data must be organized as they are assembled in order to apply them to synthesizing and producing investigation outputs. However, it is critical that data not be filtered or discarded prior to hypothesis testing. Both the investigative and the quality control processes must provide criteria and procedures for data testing in a way which will support replication of results, the true test of scientific method. Individual investigators' approaches to when, how and, indeed, whether data are organized at all vary widely, accounting in great part for the replicability deficiencies we have all observed.
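As an illustration of what such a framework might look like, observations can be transformed into uniform records that preserve their provenance, so that another analyst can retrace, and replicate, each step. The actor/action/time layout and sample records below are our assumptions, not a prescribed format.

    # A minimal sketch of one possible transformation framework: each
    # observation becomes a uniform record that keeps its provenance, so
    # results can be retraced and replicated. Fields are our assumptions.
    from dataclasses import dataclass

    @dataclass
    class ObservationRecord:
        actor: str        # who or what acted
        action: str       # what the actor did
        time: float       # best estimate, seconds from a reference point
        source: str       # where the observation came from (provenance)
        raw_note: str     # the original field note, kept verbatim

    # Free-form field notes transformed into organized records (invented).
    records = [
        ObservationRecord("engine", "lost power", 0.0, "FDR", "N1 decay at 09:14:02"),
        ObservationRecord("pilot", "lowered the nose", 3.0, "FDR", "pitch -4 deg"),
        ObservationRecord("pilot", "selected a landing field", 8.0, "interview", "witness B"),
    ]

    # Organize as the data are assembled: sort into a common time framework.
    for r in sorted(records, key=lambda r: r.time):
        print(f"t={r.time:>5.1f}s  {r.actor}: {r.action}   [{r.source}]")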

(3) Integrate data to synthesize a description of what happened. The investigator "reconstructs" the accident, building a description of what happened, by piecing the data systematically into a validated scenario that can be tested and verified. Data integration is the step in the process when decisions are made with regard to the relevance of data items. Relevant data are arrayed to build a logical, sound hypothesis about what happened.

(4) Validate the description. An accident description is valid when a critic can reproduce the process described and achieve an identical accident. That would be the ultimate demonstration that the investigator has indeed discovered what happened and why. Unfortunately, that is not a practical proof for quality assurance purposes, if for no other reason than the social unacceptability of the loss. Experienced investigators employ a technique which might be called "mental motion pictures" to reproduce the accident in their minds. When we can visualize the accident process successfully in our own minds, we are generally satisfied that we have achieved a correct description of what happened.
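The "mental motion picture" test can be approximated mechanically: play the described events back in order and flag any break in continuity. The sketch below assumes, purely for illustration, that events have been recorded as time/actor/action entries; the events themselves are invented.

    # A minimal sketch of a mechanical "mental motion picture" test:
    # replay the described events in order and flag discontinuities.
    # Event tuples (time, actor, action) are our illustrative assumption.

    events = [
        (0.0, "engine", "lost power"),
        (3.0, "pilot", "lowered the nose"),
        (8.0, "pilot", "selected a landing field"),
        (15.0, "aircraft", "touched down in field"),
    ]

    def playback(events):
        """Replay events in time order; flag breaks in the 'motion picture'."""
        problems = []
        last_time = None
        for time, actor, action in events:
            if not actor or not action:
                problems.append(f"incomplete event at t={time}")
            if last_time is not None and time < last_time:
                problems.append(f"event out of sequence at t={time}")
            last_time = time
            print(f"t={time:>5.1f}s  {actor}: {action}")
        return problems

    issues = playback(events)
    print("description plays through" if not issues else issues)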

Accident Investigation Output

Once the accident is understood it is relatively easy to produce the outputs. Choices of format are available to permit reporting the investigation results in various ways.

(1) Forms. The most widely used method for accident reporting is the accident reporting form. Forms provide blanks to be filled in; thus the data required, and the order in which they are presented, should be highly replicable and amenable to statistical analysis. However, when forms require judgments they do not offer as high a level of either replicability or validity; for example, when investigators must decide how to fit descriptions of observations into blanks on the forms.

Unfortunately, limited choices on forms often influence investigators to "fit the data to the blanks", with consequent distortion of the description, the outputs and derivative statistical analyses. (Waller, 1977) In one organization, 92% of the information in one category was reported as "Other" from the choice of accident factors listed on the form. This evidence of incompetent form design cannot be corrected by attempting to improve investigators' forms-preparation skills!
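Such a finding is easy to reproduce as a routine quality check: tabulate the coded factors and flag any form whose "Other" share dominates. In the sketch below the counts are invented to mirror the cited proportion, and the 50% flagging threshold is an arbitrary assumption.

    # A minimal sketch of a form-design quality check: tabulate coded
    # accident factors and flag the form when "Other" dominates. The
    # counts are invented for illustration; the threshold is arbitrary.
    from collections import Counter

    coded_factors = Counter({
        "Other": 460,
        "weather": 15,
        "mechanical failure": 13,
        "pilot technique": 12,
    })

    total = sum(coded_factors.values())
    other_share = coded_factors["Other"] / total
    print(f"'Other' share: {other_share:.0%}")   # -> 92%
    if other_share > 0.5:
        print("form categories do not fit the data; redesign the form")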

(2) Written descriptions. The second most widely used method of accident reporting is the written narrative description. These outputs are generally much more complete than information provided on forms. Unfortunately (again), written descriptions pose a significant dilemma: written narrative is inherently linear, while during accidents many things happen simultaneously.

A derivative of the linear written format is the graphic, or flow-charted, output. Events and causal factors charts, logic trees and similar flow charts are also used to show the flow of events that occurred during the accident process.
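A crude textual approximation shows why the graphic forms escape the linearity dilemma: each actor receives its own track, so simultaneous happenings sit in parallel rather than being forced into a single line of prose. The events below are invented for illustration.

    # A crude textual sketch of a multi-track events chart: each actor
    # gets its own row, so simultaneous happenings appear in parallel
    # rather than in one linear narrative. The events are invented.

    events = [  # (time slot, actor, event)
        (0, "pilot",   "noticed power loss"),
        (0, "engine",  "fuel flow stopped"),
        (1, "pilot",   "switched tanks"),
        (1, "copilot", "broadcast MAYDAY"),
        (2, "engine",  "regained power"),
    ]

    actors = sorted({actor for _, actor, _ in events})
    slots = max(t for t, _, _ in events) + 1

    for actor in actors:
        row = {t: e for t, a, e in events if a == actor}
        cells = [row.get(t, "").ljust(24) for t in range(slots)]
        print(f"{actor:<8}| " + "| ".join(cells))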

(3) Verbal descriptions. A third output category is the verbal description of what happened. Typically, an investigator will make several verbal reports about what happened before the final version is documented. Although transient, and thus not readily available for analysis, verbal reports are valuable, as will be seen later in this paper.

(4) Data supporting the description of what happened. Another investigation output is the supporting data collected during, or flowing from, the investigation, provided in one or more of the other output formats in addition to the description of the accident. These data may include descriptions of the systems within which the accident occurred, procedures used, photographs or sketches of the accident site, and similar aids which assist in visualizing and understanding the accident process.

(5) Other outputs. After the "what happened and why" are understood, many uses flow from the accident description. It may be used to identify and define deficiencies demonstrated by the accident, to develop risk estimates, to identify options for changing interactions among events in future operations, to assess comparative effectiveness of options for reducing future risks, and to resolve questions of responsibility, costs and other social, legal and technical issues. The quality of subsequent outputs is directly dependent on the quality of the original accident description. GIGO ("Garbage In, Garbage Out") can quickly become "Garbage In, Gospel Out"!

Feedback after the Investigation

Feedback about what happened during the accident process may occasionally be introduced explicitly into the investigation process; e.g., the UK AIB's party review. More typically the feedback is implicit, in the form of questioned outputs, demands to change outputs, controversy, litigation and hard feelings. Questions raised about the description of what happened can illuminate quality deficiencies if someone is alert to the issue and not merely defensive. Unfortunately, much investigation feedback is introduced in forums wherein differences are resolved by committee judgments according to legal rules, rather than by technical testing against scientific criteria. All the greater impetus for more robust quality management throughout the investigative process.

Outputs provide a convenient point to start enforcing quality standards; if inputs and operational processes are faulty, then outputs cannot be acceptable. Imposing output quality standards is the quickest first step toward identifying deficiencies of both inputs and investigative operations, and improving feedback.

Direct Observation of Quality Management Efforts

Some form of quality management is practiced by most investigative authorities. However, most practices are replete with abstract, ambiguous criteria expressed in such terms as "clear", "concise", "complete", and similar unquantifiable adjectives. These words are highly abstract and non-definitive, leaving room for a broad range of individual interpretations. Outcomes usually depend on the foibles of the last person to approve the report.

Another common quality management process is "peer review", which in most instances masks the absence of concrete or measurable criteria that are uniformly applied. The results of these processes are highly variable outputs, in some cases even when inputs are identical.

A third, relatively effective process is informal, one-on-one sharing of verbal descriptions of the accident among investigators. As observed, this was the most effective procedure for improving quality. As one investigator told another what happened, the listener tried to visualize what was being described, and to arrange the data into proper sequential order. This resulted in iterative refinement of the description until the listener could visualize the accident process logically and successfully from beginning to end. Observing the success of this process provided key insights into the method to be described in Part 2.

Without robust quality standards and rigorous quality control processes, the key investigation output, the description of what happened and why it happened, continues to suffer from poor replicability and lack of validation. Little progress toward improving quality has been achieved because customers for the investigation output products have accepted the proffers uncritically. For many investigators the process quickly becomes a matter of satisfying the boss's criteria of acceptability.

Investigation Quality Goals

We suggest the following fundamental goals for accident investigation outputs:

First, recognize that an accident is a process, and that a description of a process can be systematized.

Second, work toward a certainty of 1.0 for all data used to produce a description of the accident process.

Finally, strive toward a certainty of 1.0 for replicability of the description of what happened.
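Neither certainty goal is testable until "certainty" is given an operational measure. One candidate measure of replicability, offered only as an illustration of ours and not as part of the goals themselves, scores the agreement between event sequences produced independently by two investigators.

    # One illustrative way to make the replicability goal measurable:
    # score the agreement between two independently produced event
    # sequences. A score of 1.0 meets the goal. This metric is our
    # suggestion for illustration, not a standard from this paper.

    def replicability(sequence_a, sequence_b):
        """Fraction of events common to both descriptions, order ignored."""
        a, b = set(sequence_a), set(sequence_b)
        if not (a | b):
            return 1.0
        return len(a & b) / len(a | b)

    investigator_1 = ["engine lost power", "pilot lowered nose", "aircraft touched down"]
    investigator_2 = ["engine lost power", "pilot lowered nose", "pilot switched tanks"]

    print(f"replicability: {replicability(investigator_1, investigator_2):.2f}")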

Accident as a Process

Describing a process is substantially different from describing a cause, or causal factors, and much more demanding. Insisting on rigorous quality standards for the process description will pay much greater dividends than focusing on causes.

Data: Is it Valid?

Beware of Garbage In. Validity depends in great part on the investigator's integrity, and on the ability to observe and document truthfully, without distortion or misrepresentation, whether innocent or deliberate. It also depends on the methods used to acquire the data, and on how the data are transformed for use and integrated. Ignoring a relevant input, or distorting data during documentation, has the same result: a warped view of what happened. Finally, the methods used to record and organize data to identify what happened are critical. A frequent invalidating deficiency occurs when the investigator records conclusions rather than observations, thereby eliminating the basis for testing the hypothesis and the report. Recording "the pilot was fatigued", for example, is a conclusion; recording "the pilot had been on duty for 14 hours" is an observation that can be tested.

Description: Is it Valid?

The investigator's objective should be to develop valid descriptions of accidents from which corrective actions can be proposed, planned and implemented to reduce future occurrences and risks. Invalid descriptions produce unacceptable consequences, including injustices to individuals and organizations, misdirected remedial efforts, needless controversies, and myriad other counterproductive problems. Fuzzy, qualitative quality criteria contribute to the insubstantial outputs upon which the future safety of the aviation system is currently based.

Part 2, which will be published in ISASI forum V.25, #1 (1992), will propose a methodology for improving quality management for accident investigation processes and products.

References (for Part 1)

Benner, L., Accident Models and Investigation Methodologies Employed by Selected U.S. Government Agencies. Report to U.S. Department of Labor, Occupational Safety and Health Administration, Washington, DC, February 21, 1983.

Benner, L., "Accident Theory and Accident Investigation". Proceedings of the Society of Air Safety Investigators Annual Seminar, 1975, p.149.

Johnson, W., MORT Safety Assurance Systems. Marcel Dekker, New York, 1980, p.373.

Rimson, I.J., "Are These the Same Accident?" ISASI forum, Vol.15, #3, 1983, pp.12-13.

Rimson, I.J., "Standards for the Conduct of Air Safety Investigation". Proceedings of the Twenty-first Annual Seminar, International Society of Air Safety Investigators. ISASI forum, Vol.23, #4, February 1991, pp.51-54.

Waller, J. A., M.D., "Epidemiologic Approaches to Injury Research". In Rare Event/Accident Research Methodology, Proceedings of a Workshop held at the National Bureau of Standards, May 26-28, 1976. NBS Special Publication 482. U.S. Department of Commerce, Washington, DC, 1977, p.44.