INVESTIGATION CATALYST
Recommendation Development Tutorial

© 2004 by Starline Software Ltd.
PROBLEM ASSESSMENT

After identifying all the problems, you need to decide whether each problem is worth fixing.

General Considerations

This task requires applying assessment criteria to rank the significance of the candidate problems. The criteria vary with the type of occurrence and the purposes of an investigation or analysis, so they have to be developed by each organization performing investigations or analyses. Investigation Catalyst offers default measures for users who do not have their own.

The recommendation development process should separate problems worth fixing from those you can afford to live with, so that you can focus on the problems you select to fix. Usually this decision is heavily influenced by the extent of the likely consequences, such as future performance or outcomes if the problem is not fixed. The decision can be aided by an assessment tool that arrays all the criteria and assigns a value to the problem for each criterion, as in the matrix below.

Evaluation Criteria

The first task is to select evaluation criteria. This task depends on the practices of the organization sponsoring the project. The application provides default fields, which users can use as is, or rename and enter their own assessment measures.


The defaults shown in the lower portion of this Panel include entries to:
  • identify the exposures that might be affected.

    • The goal is to identify how frequently and for how long the activity in which the problem exists is expected to be functioning (e.g., 24/7, once monthly, daily, or periodically), and roughly how many hours per year the problem might manifest itself. This entry helps estimate the probability of the problem arising during the exposure period.

  • categorize the kind of problem, if the organization uses classifications.
    • Optional: the purpose is to establish a class to aid in aggregating and retrieving problem data. Is the problem attributable to design, quality, operability, maintainability, reliability, durability, configuration, or another attribute of an object, or to the knowledge, skill, dexterity, or another attribute of people? Try to avoid judgmental categories such as failures, errors, and faults.

  • estimate the probability of the problem occurring.

  • estimate the severity of the effects should the problem materialize. (Alternative severity measures may be desirable for specific system operations. For example, severity levels for personal injury might be categorized as fatal, disabling, hospital admittance, first aid, or negligible.)

  • show an estimated risk level (combined probability and severity) code for the problem.
    • This or some equivalent measure is required to provide a baseline against which the effects of any change should be compared. Any changes that do not reduce this risk measure are unacceptable; the greater the improvement, the greater the value of the change.

  • assign a significance value, in terms of impacts that might be experienced.
    • Optional: the purpose is to provide an indicator of the analysts' perception of the problem's importance to their organization, by assigning a significance value ranging from very high to negligible, or a scalar value such as 1 for extremely important to 10 for irrelevant. This is a judgment of a perception, and should be interpreted only as an indicator.

  • Users may skip one or more entries, or change the names and enter other data in the text fields.
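The default fields above can be thought of as one record per problem in the assessment matrix. The sketch below models such a record in Python; the class name, field names, and value scales are hypothetical illustrations, not Investigation Catalyst's actual data model.

```python
from dataclasses import dataclass

@dataclass
class ProblemAssessment:
    """Hypothetical row in the problem assessment matrix.

    Field names and scales are illustrative only; rename them to match
    your organization's own assessment measures, and skip any you do
    not use.
    """
    description: str
    exposure_hours_per_year: float  # how long the activity is functioning
    category: str                   # optional classification, e.g. "operability"
    probability: str                # e.g. "A" (highest) .. "D" (lowest)
    severity: str                   # e.g. "I" (greatest) .. "IV" (least)
    significance: int               # 1 = extremely important .. 10 = irrelevant

    def baseline_risk(self) -> str:
        """Combined severity/probability code: the baseline against which
        the effect of any proposed change should be compared."""
        return f"{self.severity}{self.probability}"

# Example entry for a problem in a continuously operating activity.
problem = ProblemAssessment(
    description="Valve position indicator ambiguous to operators",
    exposure_hours_per_year=8760,   # 24/7 operation
    category="operability",
    probability="B",
    severity="II",
    significance=2,
)
print(problem.baseline_risk())  # IIB
```

A change that leaves this baseline code unchanged would not reduce the risk, and so would be unacceptable under the criterion described above.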

Sample rating tool

One tool for developing ratings is a matrix with 16-20 cells, whose coordinates represent the considerations of interest, listed in descending value or significance. Users can create such a matrix with a spreadsheet application and use it to develop these assessments, or they can use published matrices.

In the safety field, for example, the Risk Assessment Code matrix shown below illustrates the format of this kind of assessment matrix. In this example, the considerations are the frequency and severity of potential loss effects, represented by four severity levels and four probability levels. I and A are the greatest severity and highest probability of a potential loss, and thus the most undesirable effects or risks. The entries in the table represent the weight given to that risk level when ranking the priority for fixing the problem.



To use the table for a future activity (exposure), make a judgment, or reach a consensus, about the anticipated probability of a future incident and its likely severity or range of severities. (In this matrix, "marginal" may be replaced by "serious," or by two descriptors, to modify the severity scale.)
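A Risk Assessment Code lookup of this kind can be sketched as a small table in code. The layout below assumes a MIL-STD-882-style 4x4 matrix (severity rows I to IV, probability columns A to D); the cell weights are invented for illustration, so substitute your organization's published values.

```python
# Illustrative Risk Assessment Code matrix. Rows are severity levels
# (I = greatest), columns are probability levels (A = highest).
# The weights below are invented for this sketch; lower numbers mean
# higher priority for fixing.
RAC = {
    "I":   {"A": 1, "B": 1, "C": 2, "D": 3},
    "II":  {"A": 1, "B": 2, "C": 3, "D": 4},
    "III": {"A": 2, "B": 3, "C": 4, "D": 5},
    "IV":  {"A": 3, "B": 4, "C": 5, "D": 5},
}

def risk_code(severity: str, probability: str) -> int:
    """Look up the ranking weight for a severity/probability pair."""
    return RAC[severity][probability]

# I/A is the most undesirable combination, so it outranks IV/D.
assert risk_code("I", "A") < risk_code("IV", "D")
print(risk_code("II", "C"))  # 3
```

Relabeling the rows and columns, as suggested below, only changes the dictionary keys; the lookup logic stays the same.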

If desired, relabel the entries to reflect the current value systems of the organization.
