Started in 1998, this page contains a list of topics for research that I created on the IPRR web site. From April 2003 to the fall of 2006, the list was supplemented with additions culled from the AIPRE forums.
The intent was to provide a wish list for anyone who might like to conduct or sponsor research in a listed area. Scroll the index to find titles of interest. For earlier discussion of these issues, see the list.
IPRR RESEARCH NEEDS LIST
This issue involves identifying, during an investigation, the role of an organization's culture in an accident. Some cite an inadequate safety culture as a precursor to an accident, but little consensus exists about how to investigate this proposition during an investigation. The challenge is similar to that confronting investigators of other management and "human factors" "programming actions."
This issue addresses the basic justification or rationale for doing investigations. Why bother investigating?
Observed investigation objectives vary widely depending on the type of investigation activity. A common objective for accident investigations, for example, is to determine the cause and prevent recurrence of similar accidents. Fire investigations typically are conducted to determine the origin of the fire to prevent recurrence. Insurance investigations are conducted to determine the amount of loss and who pays for it. Criminal investigations are conducted to verify that a crime was committed, identify the perpetrator(s) of the crime, and acquire sufficient evidence to persuade the judicial system to convict the accused of the crime. Investigations of equipment failures are conducted to identify the problem and determine what needs to be done to fix it.
In the accident and fire investigation fields, strong arguments can be and have been made that a focus on prevention of the "next" similar occurrence is too narrow, and limits the value of data developed during the investigation. A consensus among informed investigators has been reached that the investigation objective of determining the cause of something is incongruent with investigators' observations of phenomena. Scientific investigation, for example, produces observed or hypothetical descriptions and explanations of phenomena, which are most valuable when they can be translated into general laws or principles with broad application.
Are current investigation objectives too narrowly defined? Should investigators be asking themselves what they could do to increase the value of their investigation work products? How could objectives be restated to provide greater value for an individual, an organization, or an industry, or internationally? How are investigation objectives determined, and by whom, using what criteria? What alternatives could be developed? What work products would alternative objectives require? Who might be affected, and what might be the consequences of alternative objectives on investigators and their work products, as well as the users of those work products?
How are investigation successes, failures and value now identified, verified and documented? Should there be other objectives and criteria?
For additional discussion of this issue, see the 1999 Rand Report of its investigation of NTSB practices, and a slide presentation describing basic investigation problems and approaches to their resolution.
This issue addresses the underlying concepts, principles and perceptions on which an investigation process is based.
The current state of the art of investigation uses procedures that are not consistent and replicable in most non-scientific investigations. Most depend heavily on the experience and experiences of the investigator, and many investigators develop individualized investigation practices. What exactly are the investigation methodologies investigators now use?
What alternative investigation methodologies now exist or need to be developed? How are they different and how are they similar? What is the relative effectiveness, efficiency, verifiability and value of each, and how should those attributes be measured? What criteria now exist for the selection of the "best" methodology for different types of investigations? What criteria should be established for the selection of the "best" methodology by an investigator or an investigation program manager or designer? Where can an investigation program designer or an investigation designer turn for help with these issues?
What is the methodology selection decision process, and what influences the decisions now? Are academic disciplines, experience or the demands of investigations dominant influences, or are there other dominant influences? What implications does that decision have for the investigation, the investigator and the customers of investigation work products? What approaches have been devised to determine the answers to these questions? What other approaches might be explored?
Is there such an entity as an investigation technology? If so, what is that technology? If not, is one needed?
A 12/99 research report by Rand Institute for Justice speaks to this issue.
This issue addresses the scoping of what an investigator investigates and reports in a specific case. What should an investigator investigate?
The scoping decision affects the cost, consistency and value of the investigation effort. The greater the scope of the investigation, the greater the cost. The more scoping decisions differ in principle, the greater the problem with inconsistency across investigations. The narrower the scope of the investigation, the greater the risk that a more universally applicable change will be missed rather than identified and implemented. The scoping decision making process is only dimly understood, and is scarcely addressed in most known publications about investigations.
Prior work has disclosed that the scoping decision depends in part on the objectives of the investigator's "customer" and on the investigator's perceptions of the nature of the phenomenon being investigated. What is the scoping decision process? What influences affect the scoping decision? What should be included in an investigation? What is the beginning and end of the phenomenon to be investigated, and how are the beginning and end of the phenomenon defined and determined in specific cases? How much of the phenomenon should the investigator try to understand and describe? For example, what principles can be applied to establishing the beginning of an accident or a fire phenomenon? Are "precursors" a part of the phenomenon, or are they to be investigated separately? How far back should an investigator explore the influences that affected what a person or object did during the phenomenon?
Is it possible to develop guiding principles for investigators to apply generally so investigations of similar phenomena will become increasingly consistent? Should the outputs of the phenomenon be examined and reported as part of an investigation? At what point in time or by what general principles can an investigator determine when to stop the investigation?
This issue addresses the "data language" into which investigators' observations are transformed and documented during investigations.
One of the challenges facing investigators is how to document observations made during an investigation. What is the "data language" they now use to document what they observe or learn during an investigation? What data language options are available? What data language should they use?
What is the documented form of the building blocks which investigators now create to support their analysis of their observations? What options are available to investigators? How are these building blocks affected by the jargon of the system or the investigator's discipline? Are the building blocks consistent so they can be related to each other in terms of sequential and temporal logic, as well as causal (cause-effect) logic? How do their data blocks lend themselves to verification and replication? How does the data language carry over into reports of investigations? Might mathematically based notations support natural language descriptions and explanations of phenomena like accidents or fires or crimes?
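One way to explore such consistent building blocks is to prototype one. The sketch below, with hypothetical field names (not any method's actual notation), shows how actor-action-time blocks could support both temporal ordering and explicit cause-effect links, so that causal claims can be checked against sequential logic:

```python
from dataclasses import dataclass, field

@dataclass
class EventBlock:
    """One building block: who did what, over what time span (hypothetical fields)."""
    actor: str
    action: str
    begin: float                                  # when the action started
    end: float                                    # when the action ended
    sources: list = field(default_factory=list)   # data sources supporting this block

def precedes(a: EventBlock, b: EventBlock) -> bool:
    """Temporal logic test (simplified): a finishes before b starts."""
    return a.end <= b.begin

links = []  # cause-effect pairs, recorded separately from temporal order

def link(cause: EventBlock, effect: EventBlock):
    """Record a cause-effect link only if it is temporally possible."""
    if not precedes(cause, effect):
        raise ValueError("cause must end before effect begins")
    links.append((cause, effect))

# Hypothetical example blocks:
valve = EventBlock("operator", "closed valve V-1", begin=0.0, end=1.0,
                   sources=["interview 3"])
pressure = EventBlock("pump discharge", "pressure rose past relief setting",
                      begin=1.5, end=4.0, sources=["chart record"])
link(valve, pressure)   # accepted: the claimed cause ends before the effect begins
```

The point of the sketch is only that uniform blocks make the sequential, temporal and causal relationships machine-checkable; a real data language would need richer time and source semantics than this simplification.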
This issue addresses how an investigator investigates something, and the design of investigation tasks.
What tasks must investigators do during an investigation, and how do they do those tasks? What investigation tasks should investigators have to do, and how should those tasks fit together to produce the desired investigation work products? Task demands include the making of observations, the transforming of observations into documented records, the analysis and testing of the recorded observations, and the reporting of the analysis findings, to name a few.
When an investigator wants to acquire data from people, what are the ideas on which an efficient investigation interview should be based? What is the basis investigators now use to formulate questions to ask? How can unknowns as well as unknown unknowns be identified by investigators as they proceed during an investigation? What other question formulating techniques could be used? What planning is required before investigators ask questions of people? How can those plans best be formulated? When an investigator wants to acquire data from objects, how is that done? When an investigator looks at something, how does he or she transform what is observed into an "investigation data language?" What preparations or planning do investigators need to do before they start an examination or test of an object?
How does an investigator's understanding of how people and objects acquire or generate "data" affect what is "observed" by the investigator? How do an investigator's perceptions of the phenomenon or prior experiences influence what an investigator seeks, and how can these task "biases" be neutralized by their task design? How do people store data about incidents or crimes, for example? How do investigators extract "stored data" from people or objects during an investigation? What options does an investigator have now? Are others needed? Do those principles vary among investigation types? If so, how?
How is the investigator's data organized and analyzed? What exactly do investigators do when they "analyze" data? If different techniques are used, what are the relative merits of each? Do any "progressive" analytical methods exist that investigators can use to analyze data in real time during an investigation, rather than gathering all the data and then analyzing it? What analysis techniques can provide for real time quality control procedures and data acquisition management? How can the actions of "programmers" who influence what happened be addressed to find and demonstrate cause-effect relationships in investigations?
Alternative investigation concepts and techniques continue to surface. For example, techniques such as fault trees, MORT, root cause analysis, STEP/MES and others have been advocated by their developers. What would a comparative analysis and demonstration of these techniques show?
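One concrete point of comparison among such techniques is their underlying logic. A fault tree, for instance, reduces to Boolean AND/OR gates over basic events; a minimal evaluator (with hypothetical event names) can be sketched as:

```python
# Minimal fault-tree evaluator: a node is either a leaf ("event", name)
# or a gate ("AND"/"OR", [children]); leaves have known true/false states.

def evaluate(node, states):
    """Return True if the node's (undesired) event occurs, given leaf states."""
    kind = node[0]
    if kind == "event":                          # leaf: look up its state
        return states[node[1]]
    results = [evaluate(child, states) for child in node[1]]
    return all(results) if kind == "AND" else any(results)

# Hypothetical top event: a fire requires fuel AND an ignition source,
# where ignition can come from either a spark or a hot surface.
tree = ("AND", [
    ("event", "fuel_present"),
    ("OR", [("event", "spark"), ("event", "hot_surface")]),
])

states = {"fuel_present": True, "spark": False, "hot_surface": True}
# evaluate(tree, states) → True: fuel present plus one ignition source
```

A comparative study of the kind suggested above would contrast this top-down Boolean decomposition with the event-sequencing logic of methods like STEP/MES, which model interactions over time rather than static gate conditions.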
This issue addresses the knowledge and skills investigators need to conduct satisfactory investigations. The current state of the art of investigation frustrates the development of training curricula because most current widely used processes cannot be translated into measurable and verifiable training objectives, or provide other measurement benchmarks to measure training or performance effectiveness.
How are training successes and failures identified and documented?
Another major issue is whether investigators need knowledge of how the systems they investigate work, or whether a sound investigation knowledge base enables an investigator to investigate anything adequately. How much does the answer to this question depend on the methodology and resultant investigation tasks? How and by whom are investigator training objectives now selected, and what are the rationales or criteria for the selection of those objectives? Is there an adequate theoretical basis for investigations available to support answers to these questions?
Questions have been raised about the differences in the knowledge and skills required to develop descriptions and explanations of phenomena that have occurred, versus the knowledge and skills required to develop recommendations for future actions based on those descriptions and explanations. How do training needs differ among the various levels of investigation that now exist? Are the training needs really different? If so, what are the differences in the tasks, and how do those differences affect what investigators need to be trained to know and do?
These issues address the outputs produced by the investigation process.
What work products are presently produced by investigators for the various types of investigations? What are the origins of the demands imposed on investigators for those work products, and are they still valid today? For example, in several investigation fields, most work products reflect the influence of legal thinking on the output terminology and format. What specifications now exist for investigation work products? What are the origins and basis for those specifications? Do they provide an adequate basis for investigation performance assessments? How will changing demands of legal systems affect the specifications for investigation work products?
Another issue is whether reports should be limited to a description and explanation of the phenomenon investigated, or whether extraneous or incidental observations should also be included or reported separately. For example, should accident or fire investigation reports address observed risk-raising conditions not related to the accident or fire, or should those conditions be reported separately in other reports?
An emerging issue is how investigation work products could be changed to expand their use and increase their value. For example, could work products be designed to provide inputs on which to base system changes, to devise and provide guidance for monitoring activities in real time, to improve the internalization of lessons learned from investigations, or to support standards and code assessments, to name a few possibilities?
These issues address how the investigation task performance and outputs are assessed.
Investigation evaluation and assessment involves both the quality of the investigation process and the quality of the outputs. Why are current practices not subject to standards against which investigation achievement can be measured? How do current investigation objectives, methodologies and techniques influence the ability to control the quality of the process and outputs? What criteria now exist for investigation process performance quality assurance? How were they derived, and by whom, using what basis?
What influence have current investigation objectives had on quality assurance criteria and procedures? What quality assurance practices might be adapted to ensure reasonable consistency when two or more investigators investigate independently a phenomenon, or when an individual investigator investigates several phenomena? What role should or can logic or other testing play in investigation quality assurance programs?
Another assessment question is how to assess the value of an investigation or an investigation program. When actions are recommended as the result of an investigation, how are needs, need-addressing options, tradeoffs and recommendation decisions developed? What are the investigation and program value assessment processes, and how do they perform? Are current investigation program evaluations and assessments satisfactory or do they require improvement?
Application of ISO 9000 to accident investigation organizations, for example, may be impossible, because one cannot impose objective quality controls on investigations given the present state of the art. The need is for an investigation quality assurance process that would permit ISO 9000 to be implemented in this field.
A related issue is investigation failures. Investigation failures are rarely acknowledged, but they do occur and can be demonstrated. How should investigation failures be determined and addressed? Do current assessment procedures provide accountability for either investigation or program performance? If not, how might this be accomplished?
The notion of cause permeates current investigation thinking. Questions have been raised about the role of cause in investigations, and alternatives tried. No in-depth study of the role of cause in investigations has been undertaken in the past, so these questions remain. What purpose(s) does cause determination serve in investigations, and how can the effects of using cause notions - beneficial and harmful - be identified? Should concepts of cause and their classifications be maintained as is, changed or abandoned by investigators? What alternative concepts exist? Are root cause, proximate cause, causal factors, causal relationships, immediate cause and remote cause compatible with current knowledge of the nature of phenomena like fires, accidents, crimes, etc.? What would be the impact of alternative concepts if they were adopted? See the paper on Investigating Cause, by I. J. Rimson, with a history of cause as used in the aviation field.
A 12/99 research report by Rand Institute for Justice proposed that NTSB modify its determinations of cause. The consequences of that proposal will be interesting to follow.
This issue addresses the management processes used to plan, organize, staff, direct and control investigation processes.
Is the organization scheme currently utilized for team investigations - those involving more than one person - cost and performance efficient and timely? What investigation organization options exist or might be developed? What are the relative merits of alternative organizational approaches? How might they be demonstrated?
Why do some organizations produce work products in less time than others? Are the differences justified or necessary? What are the special task and staffing needs in team investigation, and how are they determined for each option? What communications flow in team investigations? Are they timely and efficient, or could they be improved? What is the impact of investigation objectives and methodologies on team organization, staffing and direction? How might efficiencies of team investigation be improved? Are current investigation operating controls adequate?
A 12/99 research report by Rand Institute for Justice also addressed this issue from several perspectives: managing parties used on teams, tracking progress and others.
This issue addresses actions on recommendations produced by investigation processes.
Frank Taylor has questioned why adequate action on recommendations does not always follow investigation findings. This does occur, but the reasons are not fully understood. This is clearly an area for investigation process research. Is this result attributable to some aspect of the investigation process, or is it attributable to other reasons involving management actions or perceptions, for example? Is the problem related to the recommendation development process, discussed in # 7 above? In our book (Investigating accidents with STEP, Chapter 15, Investigation Afflictions and Antidotes, Table 15-6) we list 22 problem recommendation responses that have been observed in NTSB and other investigations, but the question has not been subjected to critical examination.
This issue addresses the handling and retention of data gathered during an investigation.
Leon Horman's Evidence Evaluation Matrix provides a comments section where information about the "evidence" gathered during an investigation is recorded. This brings to mind the lack of guidance to help investigators determine and implement data documentation, processing and retention requirements. Chain of custody and data handling requirements for investigation "evidence" involving legal or potential legal action are well documented. What is not widely documented is the manner in which data and data sources are to be noted during an investigation to show what is available to support elements of the description or explanation of the phenomenon, and how that record should be stored, inventoried, linked to the conclusions or findings, or retained. For example, Leon Horman proposes a matrix. Schaum proposes an alternative method for organizing details. Hendrick & Benner propose notations on descriptive analytical event blocks. Exploration of the data or "evidence" handling needs during and after investigations should lead to guidelines for all levels of investigations. (7 Oct 96)
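One way to explore such guidance is to prototype the inventory record itself. The sketch below uses hypothetical field names (not Horman's actual matrix) to link each retained data item to the findings it supports, which makes two common gaps easy to flag: findings with no recorded support, and evidence retained but never tied to a conclusion:

```python
# Hypothetical evidence-inventory records: each item notes its source,
# its storage location, and the findings it supports.
evidence = [
    {"id": "E1", "source": "witness interview", "storage": "case file 12-A",
     "supports": ["F1"]},
    {"id": "E2", "source": "valve examination", "storage": "evidence locker 3",
     "supports": ["F1", "F2"]},
    {"id": "E3", "source": "maintenance log copy", "storage": "case file 12-A",
     "supports": []},            # gathered but never linked to any finding
]
findings = ["F1", "F2", "F3"]    # F3 has no supporting evidence recorded

def unsupported_findings(evidence, findings):
    """Findings with no evidence item linked to them."""
    linked = {f for item in evidence for f in item["supports"]}
    return [f for f in findings if f not in linked]

def unlinked_evidence(evidence):
    """Evidence items retained but tied to no conclusion or finding."""
    return [item["id"] for item in evidence if not item["supports"]]

# unsupported_findings(...) → ["F3"]; unlinked_evidence(...) → ["E3"]
```

A real scheme would add custody history, retention dates and access controls, but even this minimal linkage shows how inventory structure could support the verification questions raised above.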
This issue addresses technology, or the application of science to industrial and commercial objectives; e.g., the entire body of methods and materials required to accomplish an objective - here, successful investigations.
Industrial activities rely on a technology base to achieve their objectives. The technologies supporting automobile manufacturing, banking transactions, medical devices, electric power generation and others are readily acknowledged. The technologies supporting those industrial and commercial activities result from a combination of research and empirically based materials and methods, developed over time in response to continuing pressure for progress. Technology transfer is another aspect of this issue. How can technology from one activity be productively employed in another activity? An unanswered question for investigators is the technology required to support their "industry" or commercial activities. At present, distinctive investigation technology, or unique methods and materials for the investigation processes, does not exist. Should it? Why or why not? Is the application of science to the investigation process sufficiently unique to require its own technology? What might be transferred from other technologies? What methods or materials would be included in an investigation technology? How might the issue be addressed?
This issue addresses the use of statistical analyses of investigation work products and outputs, and how those outputs can affect the validity of such analyses.
Much money is invested in statistical analyses of accident and incident data produced by investigators. Thus, investigator work products are important to statisticians. If investigator work products are flawed, what impact does this have on statistical analyses?
In the absence of some methodological discipline to assure the completeness of the description of the accident or incident phenomena, is such usage feasible? Can statistical analysis methods overcome problems created by incomplete descriptive data?
Statisticians use observations of investigator reports to develop their hypotheses and analyses. Investigators report subjective interpretations, conclusions, categorizations, and findings about causes or factors. Are statisticians justified in using such subjective data for their analyses? Is it reasonable to expect valid results from this type of secondary observation technique, or would it be better to insist on direct observations, such as those Vaughan acquired through her reinvestigation of the Challenger Space Shuttle launch decision, which overturned such subjective findings? If investigator reports are edited or summarized, statisticians are using data three times removed from direct observations. Investigations almost never describe the time relationships among individual interactions during an incident; is this a fatal flaw in the data that precludes their use for statistical analyses? Is the use of statistical analyses to seek causal associations among abstractly defined or subjectively categorized factors in loss-producing incidents or accidents ethical, if one must wait for a sufficiently large number of similar incidents - and losses - before one can infer associations? Would the research funds invested in statistical analyses be better invested in the development of investigation process methodologies that would disclose causal relationships among interactions (and problem definitions) during each investigation?
This issue addresses the dilemma of finding qualified peer reviewers for investigation process research. Since this research topic has not been extensively addressed, the availability of qualified peer reviewers may be a problem.
This issue arose as the book reviews were being posted for the 15 February 1997 IPRR update. By comparing the reviews, it quickly becomes evident that the reviewers worked with widely differing personal criteria. The issue of criteria for evaluating investigation process research can be further explored in references by Harvey, AIChE, and Hendrick.
This issue addresses the methodology by which recommendations can be developed from existing investigation work products.
Why aren't all recommendations made by accident investigators implemented by their recipients? See comments about action on recommendations above. This issue was raised by several comments in the feedback section, has been observed in recent documentaries about investigations, and has been of concern to many organizations whose recommendations are not compulsory. The concept of having investigation agencies make non-compulsory recommendations is rooted in the independence of such agencies from operating responsibilities for the systems they investigate, so their recommendations will not be constrained by the cost, customer confidence, employee constraints, and other tradeoffs thought to constrain safety actions. Problems affecting the development of recommendations were encountered and illuminated by a research project using several kinds of investigation reports from which attempts were made to identify risks and risk controls. See Fire Risks in Explosives Transportation. Other comments in the comment section and papers posted on the IPRR web site have indicated the lack of a generally accepted practice even among investigators in high-visibility investigations. A study and comparison of potentially viable methods and their effectiveness is clearly needed.
What kinds of logical reasoning need to be understood and used by investigators during the conduct of investigations?
Discussions of this issue continue. Deductive logic methods have been favored for many years, particularly in the pre-accident investigation field. Past investigation guidance has also leaned toward deductive logic, a la Sherlock Holmes' approach. Recently, this position has been revisited. See, for example, essays posted at http://www.rvs.uni-bielefeld.de
What are the logic knowledge and skills required of investigators to produce work products demanded by different types of investigations?
This issue addresses the conflict between the desire for personal privacy and the availability of observed data to identify accidental occurrences or deviations from norms during operations.
Discussion of this issue has generated issues for which there are not clear answers yet, particularly with respect to use of data in litigation or regulatory enforcement. The Forum contains some message exchanges about this topic under the topic CROSSAIR CRASH posted in January.
This issue addresses the acquisition of data about aircraft accidents by the expanded use of airborne sounds, and their interpretation. While voice and data recorders are now routinely analyzed, the issue is whether airblast data from explosions or aircraft in-flight breakups or explosions could be detected by devices other than the traditional recorders, or whether traditional recorders can be tapped for more data. Also see reference to paper by Stearman about findings on the track of a flight data recorder in a contested accident investigation.
Interesting challenges confronting governments contemplating the consolidation of investigation agencies need to be researched to determine the best course of action for those governments. In addition to all the technical issues described above, research is needed on agency policy issues such as functions to include in the mission, agency objectives, investigation objectives, staffing qualification requirements, investigation methodology selection, case selection, output specifications, data acquisition, debris ownership and disposition, quality assurance, dissemination of work products, advocacy issues, recommendation analyses issues, relationships with judicial functions, and media relations, among others.
Should accident investigators be "certified" to assure all that they are qualified to perform accident investigations? If so, what qualification criteria should be established, by whom, and on what basis? What are the concepts, principles and procedures that they should be required to master to merit certification? Should such certification cover all the attributes of a "professional," or would a "craftsman" type apprenticeship and experience under a "master" be the guiding principle?
Currently few if any investigation work products carry lists of lessons learned during an investigation. They can be inferred from conclusions, findings, causes or recommendations, but a list of lessons learned is not typical. The question is whether the investigation outputs could be modified by listing lessons learned in tabular format for more rapid perusal and use by end users of investigation reports, who must act on the investigation outputs.