
EUROPEAN ORGANISATION

FOR THE SAFETY OF AIR NAVIGATION

EUROCONTROL

DYNAMIC SAFETY MODELING


FOR FUTURE ATM CONCEPTS

Edition Number : 0.5


Edition Date : 08/09/2006
Status : Final Draft
Intended for : General Public

EUROPEAN AIR TRAFFIC MANAGEMENT PROGRAMME


DOCUMENT CHARACTERISTICS

TITLE

DYNAMIC SAFETY MODELING FOR FUTURE ATM


CONCEPTS
EATMP Infocentre Reference:
Document Identifier:
Edition Number: 0.5
Edition Date: 08/09/06
Abstract
The DRM research project aimed to develop a simulation approach able to provide a
quantitative analysis of certain critical operator activities, considering the organizational
context in which they take place and the main underlying cognitive processes. The project
also provided a trial application of the approach in a specific case study in the ATM context.
This approach, within the field of HRA, is able to interact with standard risk assessment
methodologies in order to "foresee" possible criticalities arising from human performance in
ATC working contexts. Indeed, the simulator that has been used (named PROCOS; Trucco &
Leva, 2004) tries to integrate the quantification capabilities of the so-called "first generation"
human reliability assessment methods with a cognitive evaluation of the operator.
Keywords
HRA, cognitive simulation, error recovery, Future ATM concept, SESAR

Contact Person(s) Tel Unit


Daniela Grippa ~ 9 3330 DAP-SSH
Oliver Straeter ~ 9 5054 DAP-SSH
Author(s)
Maria Chiara Leva, Massimiliano De Ambroggi, Daniela Grippa, Randall De Garis, Paolo Trucco,
Oliver Straeter

STATUS, AUDIENCE AND ACCESSIBILITY


Status Intended for Accessible via
Working Draft [X]    General Public [X]       Intranet [ ]
Draft [ ]            EATMP Stakeholders [ ]   Extranet [ ]
Proposed Issue [ ]   Restricted Audience [ ]  Internet (www.eurocontrol.int) [ ]
Released Issue [ ]
Printed & electronic copies of the document can be obtained from
the EATMP Infocentre (see page iii)

ELECTRONIC SOURCE
Path: L:\(Common)

Host System Software Size


Windows_NT Microsoft Word 10.0 6359 Kb

Page ii Final Draft Edition Number:


EATMP Infocentre
EUROCONTROL Headquarters
96 Rue de la Fusée
B-1130 BRUSSELS

Tel: +32 (0)2 729 51 51


Fax: +32 (0)2 729 99 84
E-mail: eatmp.infocentre@eurocontrol.int

Open 08:00 - 15:00 UTC, Monday to Thursday inclusive.

DOCUMENT APPROVAL

The following table identifies all management authorities who have successively approved
the present issue of this document.

AUTHORITY NAME AND SIGNATURE DATE



DAP-SSH Daniela Grippa


Safety Expert

DAP-SSH
Safety Expert Oliver Straeter

DAP-SSH
Safety Domain Jacques Beaufays

DAP-SSH Alexander Skoniezki

DAP Erik Merckx



DOCUMENT CHANGE RECORD

The following table records the complete history of the successive editions of the present
document.

EDITION NUMBER   EDITION DATE   INFOCENTRE REFERENCE   REASON FOR CHANGE   PAGES AFFECTED
0.1              01.05.06                              Initial draft       all
0.5              01.09.06                              Final draft         all



CONTENTS

1. USE OF COGNITIVE SIMULATION FOR APPROACHING HUMAN
RELIABILITY ANALYSIS: Overview of dynamic risk modeling
approaches ..........................................................................................................3
1.1 Introduction ...............................................................................................................................3
1.2 State of the art...........................................................................................................................5

1.2.1 The simulation CES (Cognitive Environmental Simulation) ..............................................5

1.2.2 The simulator COSIMO (COgnitive Simulation MOdel) ....................................................7

1.2.3 The simulation SYBORG.................................................................................................10

1.2.4 The model TBNM (Team Behaviour Network Model) .....................................................14

1.2.5 The simulation AITRAM ..................................................................................................17

1.2.6 The simulation PROCRU (Procedures Oriented Crew Model) .......................................22

1.2.7 The simulation MIDAS (Man Machine Integration Design and Analysis System) ..........24

1.2.8 TOPAZ (Traffic Organization and Perturbation AnalyZer ) .............................................30

1.2.9 IDAC ................................................................................................................................37

1.2.10 PROCOS .........................................................................................................................42

1.3 Summary of the Chapter .........................................................................................................48

2. USE OF COGNITIVE SIMULATION FOR APPROACHING HUMAN
RELIABILITY ANALYSIS: LINK TO EUROCONTROL ACTIVITIES .................53
2.1 Link with the ConOps Framework...........................................................................................53
2.2 The use of the cognitive Simulator PROCOS and the HERA predictive approach ................58
2.3 Summary of the Chapter .........................................................................................................64

3. A QUANTITATIVE ANALYSIS OF SAFETY ISSUES: BY AN EXAMPLE


OF ONE SIMULATION TOOL ............................................................................65
3.1 Conops Use Cases Analyzed Through Procos ......................................................................65

3.1.1 The PROCOS-ConOps Task analysis ............................................................................66

3.1.2 Input required by the simulation PROCOS .....................................................................74

3.2 HERA Taxonomy and HMI Description...................................................................................76


3.3 Calibration process of the decision blocks in PROCOS using HERA-Predictive ...................82

4. SETTING UP THE SIMULATION CAMPAIGN: A STEP BY STEP


PROCEDURE .....................................................................................................99
4.1 Simulation campaign...............................................................................................................99



4.1.1 Scenario setting...............................................................................................................99

4.1.2 Number of repetition of simulation runs ....................................................................... 102

4.1.3 Summary of simulation campaign ................................................................................ 103

4.2 Structure of the PROCOS reporting system ........................................................................ 103


4.3 Collection and processing of results .................................................................................... 108
4.4 Normality test on the results of the simulation campaign .................................................... 112

5. ANALYSIS OF THE RESULTS FROM THE CASE STUDY: AN


EVALUATION OF THE EXPERIENCE GAINED ..............................................114
5.1 Discussion of the results of the case study.......................................................................... 114
5.2 Error type analysis ............................................................................................................... 118
5.3 Conclusions and potential developments of the approach .................................................. 119

5.3.1 Systematic integration of the PROCOS approach as applied to CONOPS ................. 119

5.3.2 Strong findings from the pilot application ..................................................................... 120

5.3.3 Weaknesses of the current simulation approach ......................................................... 121

5.3.4 Potential developments of the approach...................................................................... 121

6. REFERENCES..................................................................................................123

ANNEX I : CONOPS USE CASE “Handle aircraft landing”................................127

ANNEX II A: Task Analysis for Use Case “Handling Aircraft Landing” in


flow chart format .............................................................................................130

ANNEX II B: Task Analysis for Use Case “Handling Aircraft Landing” in


Table Format....................................................................................................132

ANNEX III: Cognitive Flowcharts Used Within The Simulator PROCOS As


Validated For ATC Applications.....................................................................145

EXECUTIVE SUMMARY

The DRM research project aimed to develop a simulation approach able to provide a
quantitative analysis of certain critical operator activities, considering the organizational
context in which they take place and the main underlying cognitive processes. The project
also provided a trial application of the approach in a specific case study in the ATM context.
This approach, within the field of HRA, is able to interact with standard risk assessment
methodologies in order to "foresee" possible criticalities arising from human performance
in ATC working contexts. Indeed, the simulator that has been used (named PROCOS;
Trucco & Leva, 2004) tries to integrate the quantification capabilities of the so-called "first
generation" human reliability assessment methods with a cognitive evaluation of the
operator. The simulator shall allow the analysis of both error prevention and error recovery. It
should integrate cognitive human error analysis with standard hazard analysis methods
(Event Tree and Fault Tree) by means of a "semi-static approach".
The dynamism of the simulator proposed in the present work is focused on the cognitive
simulation and, therefore, on the cognitive flow chart. However, the operator actions can
modify only the state of some equipment of the plant, according to:
- a limited set of states into which the equipment can be turned;
- the error modes identified through the task analysis and extracted as a result of the
cognitive simulation of the operator;
- an explicit relation between the action outcomes (correct execution or error modes)
and equipment status modifications (the relation has been derived from the task
analysis).
Its focus is mainly on conveying a quantitative result, comparable to those of a traditional
HRA method, while also taking into account a cognitive analysis of the operator. As a further
step the simulator considers the evaluation of error management as part of the overall
assessment, from the same cognitive point of view.

In order to prepare a trial application of the method, a case study was considered important
for carrying out the analysis and for testing the proposed method on a specific application.
The case study refers to one of the Use Cases developed within the CONOPS framework for
the activity of Air Traffic Controllers.



The pilot study had two main objectives:
- Provide an overview of possible opportunities related to the use of a cognitive simulator
within CONOPS by investigating the future operational concept using the safety
fundamentals approach and using preliminary results of the Integrated Risk Picture
currently explored within EUROCONTROL.
- Evaluate the potential use of HERA-Predictive in combination with PROCOS for
concept evaluation (e.g., by analyzing the contributing factors to human error observed
in incidents, or by making use of experiences of approaches developed in other
industries like the CAHR method).



1. USE OF COGNITIVE SIMULATION FOR APPROACHING HUMAN
RELIABILITY ANALYSIS: OVERVIEW OF DYNAMIC RISK
MODELING APPROACHES

1.1 Introduction
The aim of this chapter is to provide an overview of well-known and commonly applied
cognitive simulation tools and to compare them, underlining their advantages and limits.
A definition of cognitive simulation, also referred to as simulation of cognition, has been given
by Cacciabue and Hollnagel (1995):
"the simulation of cognition can be defined as the replication, by means of computer
programs, of the performance of a person (or a group of persons) in a selected set of
situations. The simulation must stipulate, in a pre-defined mode of representation, the way
in which the person (or persons) will respond to given events. The minimum requirement to
the simulation is that it produces the response the person would give. In addition the
simulation may also produce a trace of the changing internal mental states of the
person".
In practice, a simulation is composed of three fundamental elements (Figure 1-1) that can be
considered necessary and sufficient for the development of a simulation of cognition:
- the theoretical cognitive model, which defines the conservation principles, criteria,
parameters and variables that allow the cognitive and physical behaviour of
humans to be described in a conceptual form;
- the numerical algorithms and the computational architecture, by which a theory is
implemented in a working computerised form;
- the task analysis technique, which is applied to evaluate tasks and the associated
working context, and to describe procedures and actual human performances in a
formal way.



Figure 1-1 Simulation Model (Cacciabue 1998)

Cognitive simulation can be divided into two main types: qualitative and quantitative.
• Qualitative simulation describes the structure, the links and the logical and dynamic
evolution of a cognitive process, from the reception of an external stimulus to the
subsequent action. This type of simulation can be used for predicting expected
behaviours in some well-defined specific cases, where machine performance is also
simulated to the same level of precision.
• Quantitative simulation is based on the structure of a qualitative one, with the addition
of a computational section, and can be used to make numerical estimates of human
behaviour. The qualitative study in this case is often coupled with a simulation of the
performance of the system the operator has to interact with. The final outcome of a
quantitative simulation can be a list of the types of actions or errors performed by the
operator while executing a specific task, or a probability value for each type of action,
calculated through the simulation runs.
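As a minimal illustration of the quantitative use described above, the sketch below estimates the probability of each action or error type from repeated simulation runs. The outcome categories and their rates are hypothetical assumptions, not values taken from any of the simulators surveyed here:

```python
import random
from collections import Counter

# Hypothetical outcome model: each simulated execution of a task yields one
# outcome ("correct", "omission", "wrong_action"). The rates below are
# illustrative only.
def simulate_run(rng):
    r = rng.random()
    if r < 0.90:
        return "correct"
    elif r < 0.96:
        return "omission"
    return "wrong_action"

def estimate_probabilities(n_runs, seed=42):
    """Estimate the probability of each outcome type over n_runs runs."""
    rng = random.Random(seed)
    counts = Counter(simulate_run(rng) for _ in range(n_runs))
    return {outcome: counts[outcome] / n_runs for outcome in counts}

probs = estimate_probabilities(10_000)
print(probs)  # relative frequencies approximate the underlying rates
```

With enough runs the relative frequencies converge to the underlying rates, which is exactly how a quantitative simulator turns a cognitive model into HRA-style probability values.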
In the wider context of cognitive simulation two different types of analysis can be distinguished:
retrospective and prospective.
• Retrospective analysis consists of the assessment of events involving human
interaction, such as accidents, incidents, or "near-misses", with the objective of a
detailed search for the fundamental reasons, facts and causes ("root causes") that
have promoted and fostered certain human behaviour.
• Prospective analysis enables the analyst to predict and evaluate the consequences of
human-machine interaction, given an initiating event and a boundary configuration of
the system.



Figure 1-2 Types of simulation and types of analysis (Cacciabue 1998)

1.2 State of the art


In this section, some of the main approaches are discussed, showing the architecture of the
cognitive simulations they propose and underlining their main properties.

1.2.1 The simulation CES (Cognitive Environmental Simulation)


The simulation CES (Woods, Roth, and Pople, 1987) has been developed for simulating
how people form intentions to act in nuclear power plants during emergency conditions.
CES comprises three basic kinds of cognitive activities (processing mechanisms):
- monitoring and analysing the plant in order to decide if the plant is in an expected or
unexpected evolution;
- building explanations for unexpected conditions;
- managing the response by correcting the abnormalities, by monitoring the plant
response and by adapting pre-planned responses to unusual situations.
The basic psychological principle of CES is that people have limited resources, particularly in
high workload situations, and therefore not all the available knowledge can be exploited by
the operator in an optimal way. At any given time, the way in which knowledge is activated
depends on three different types of interaction:
- the interaction between knowledge-driven and data-driven processing;
- the interaction between resources and workload;
- the processing of the most evident and relevant information (the importance of a
process may be defined with respect to the ongoing one).
The performance of CES in different workload and environmental conditions is governed by
Performance Adjustment Factors (PAFs) by which the analyst can explore variability in
human behaviour.
The computational structure of CES contains two major elements:
- a knowledge base that represents the know-how of operator(s) in regard to the plant
and its behaviour;
- an inference engine which is formulated in the form of processing mechanism.
CES considers two types of competencies that are generated from a number of studies and
analyses of the working environment in which CES operates, fed into the basic pool of
knowledge:
- the theoretical knowledge of the structures and functions of the plant under control;
- the empirical information deduced from the investigation of operators and runs of
simulations, in order to inspect the qualification of operators for emergency conditions.
The simulator CES is not able to analyse the interaction between two or more operators;
therefore it cannot analyse erroneous actions produced by communication.
It also presents a quite high complexity of application, since it requires a simulation model for
the plant the operator has to interact with.



Figure 1-3 CES mechanism and cognitive process (Woods, Roth and Pople 1987)

1.2.2 The simulator COSIMO (COgnitive Simulation MOdel)


The Cognitive Simulation Model COSIMO (Cacciabue and colleagues, 1992a) was
developed with the purpose of describing and predicting quantitatively human behaviour
during dynamic human-machine interactions, mainly in highly automated working contexts
like the control rooms of nuclear power plants and air-traffic control rooms.
The simulator is composed of two main models: a system model and an operator model. The
first one is a dynamic model and reproduces the operation of the nuclear plant; the second
one, instead, is made up of four elements: knowledge base, working memory, cognitive
mechanisms, and cognitive functions.
• The Knowledge Base (KB) is constituted of the Rule Based Frames (RBF) and the
Knowledge Based Frames (KBF).



- RBFs are a snapshot of the configuration of the process controlled by the
operator and contain a set of appropriate actions for the management and the
performance of the selected tasks to deal with the current situation.
- KBFs are units of knowledge containing only heuristic rules as well as general
engineering and physical principles on the operation of the plant, usually
developed during training, experience and theoretical background. KBFs are
called into play in the working memory when a new planning process has to be
developed, as no RBF is available to handle the current situation.
• The Working Memory (WM) can be subdivided into two areas:
- the Peripheral Working Memory (PWM), an area of vast capacity which receives
information directly from the KB and the outside world and makes a selection;
- the Focal Working Memory (FWM), an area of limited capacity which continuously
receives filtered information through the PWM.
• The Cognitive Mechanisms, which are also referred to as Primitives of Cognition,
govern the model; they are: Similarity Matching, Frequency Gambling and, less
frequently, Direct Inference.
- The Similarity Matching (SM) primitive compares external cues (data perceived from
the external world) and internal cues (elements included in the KB) in order to
identify one or more procedures helpful to perform the current task.
- The Frequency Gambling (FG) primitive resolves the conflict which may occur if the
SM has selected more than one procedure, in favour of the most frequently
encountered and well-known accident situation.
- Direct Inference (DI) outlines a new action sequence not contemplated in the
normal procedures, on the basis of external stimuli and the KBFs.
• The Cognitive Functions are modelled and implemented through four interrelated
cognitive activities which produce the operator action on the basis of external stimuli
and working memory: Filtering, Diagnosis, Hypothesis Evaluation and Execution.
- The Filter selects, among the large number of variables, parameters and data
produced by the environment, those which are actually perceived. This perception
is guided by a salience criterion based on the physical and cognitive salience
associated with the incoming environmental data.
- After the filtering process, data are interpreted and matched with the content of
the Knowledge Base using the SM mechanism, in order to identify an RBF to be
executed.



- Hypothesis evaluation aims to decide whether a hypothesis can be trusted or has
to be rejected. If the hypothesis selected after the diagnosis function is not supported
by sufficient evidence, the hypothesis is rejected and a new diagnosis is
initiated.
- Once a hypothesis has been selected, the WM is cleaned out and receives an
instantiation of the RBF associated with the selected explanation. This RBF is
called the Currently Instantiated Frame (CIF). The control and recovery actions
contained in the CIF are executed.
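The interplay of the SM and FG primitives can be sketched as follows. The frames, cue sets and encounter frequencies are hypothetical, chosen only to show how FG breaks an SM tie; COSIMO's actual matching is more elaborate:

```python
# Sketch of COSIMO's Similarity Matching (SM) and Frequency Gambling (FG)
# primitives. All frames, cues and frequencies are illustrative assumptions.
def similarity_matching(external_cues, frames):
    """Return the frames whose stored cues best overlap the perceived ones."""
    scored = [(len(external_cues & f["cues"]) / len(f["cues"]), f) for f in frames]
    best = max(score for score, _ in scored)
    return [f for score, f in scored if score == best and best > 0]

def frequency_gambling(candidates):
    """Resolve an SM conflict in favour of the most frequently met frame."""
    return max(candidates, key=lambda f: f["frequency"])

frames = [
    {"name": "loss_of_coolant", "cues": {"low_pressure", "high_temp"}, "frequency": 12},
    {"name": "sensor_failure", "cues": {"low_pressure", "flat_signal"}, "frequency": 30},
]
perceived = {"low_pressure", "alarm_A"}
candidates = similarity_matching(perceived, frames)   # both frames tie on overlap
selected = frequency_gambling(candidates)             # FG picks the more frequent one
print(selected["name"])
```

Here both frames match the perceived cues equally well, so SM alone cannot choose; FG then selects the frame corresponding to the more frequently encountered situation, mirroring the conflict-resolution role described above.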
Like CES, the simulation COSIMO is not able to analyse the interaction between two or more
operators; therefore it cannot analyse erroneous actions produced by communication. It
also presents a quite high complexity of application, since it requires a simulation model
for the plant the operator has to interact with.

Figure 1-4 COSIMO conceptual description (Cacciabue et al. 1992)



1.2.3 The simulation SYBORG
The simulation of the Behaviour of a Group of operators (SYBORG) has been developed
within the context of nuclear energy production studies by CRIEPI (Central Research
Institute of Electric Power Industry). It aims at studying hypothetical severe accidents
involving human factors, as well as at supporting the design of intelligent interfaces and
control procedures.
The simulation has two major subsystems: a power plant model and a human operator team
behaviour model. SYBORG exhibits the peculiarity of simulating two interfaces, one for the
interaction of human with the machine (Human-Machine Interaction, HMI) and one for the
group interactions (Human-Human Interaction, HHI).

Figure 1-5 SYBORG architecture (from Takano, Sasou and Yoshimura 1995)

• The plant simulation models the power generation system, the controls, and the
alarms in the plant.
• The operator model accounts for three operators: one is the leader of the team and
the others are followers with different roles. It is assumed that the leader does not
observe or touch the control panel but accumulates information about the plant via
communication. In particular, the operator model consists of the following seven micro-
models:
- The attention micro-model filters sensory information derived from machine
behaviour through the HMI and communication between operators through the
HHI.



- The short-term memory temporarily accumulates information from the attention
model, conveying it "smoothly" to the thinking model, with a predefined time delay.
- The thinking module is the core of the single-operator model; it introduces the
"mental model mechanism" that describes and illustrates how operators predict
plant behaviour and make decisions to prevent the deterioration of its conditions;
it calculates and defines the execution of the procedures and actions to be carried
out.
- The medium-term memory obtains information filtered by the attention micro-model
and information contained in the long-term memory, and designs the mental
mechanism of the operator. In practice, the medium-term memory serves as a
buffer and sustains the transfer of information between the thinking model and the
long-term memory.
- The long-term memory contains the knowledge necessary for the thinking model,
including plant configuration, parameters, variables, dynamic behaviour, meaning
of alarms, and predefined procedures. Furthermore, the stored knowledge contains
the relations between events and parameters, events and causes, changes of
parameters and interlocks, and changes of parameters and the carrying out of
countermeasures.
- The action micro-model implements the control actions decided by the thinking
model. It makes it possible to calculate the standard operation time of the
action, and it assesses the workload produced by the action.
- The utterance micro-model develops the communication between the team
members. The communications are distinguished into twelve categories, for
example: Report (reading the instruments), Application (application of the
procedures), etc.



Figure 1-6 Individual Operator Behaviour Model (from Takano, Sasou and Yoshimura 1995)

• The Human-Human Interface (HHI) model performs three fundamental functions: task
assignment, disagreement management and utterance management.
- The utterance management micro-model, when communication takes place,
records the communication and sends it to the receiver. The answer has to be
fed back via the HHI in order to confirm the success of the communication.
- The task assignment micro-model incorporates the characteristics of team
behaviour related to mutual cooperation in dealing with work that is
divided among the operators.
- The disagreement management micro-model simulates the characteristic of team
behaviour related to the fact that real operators communicate to exchange plant
information and their thoughts on the plant conditions, and decide on the
countermeasures that are thought to be the best ones for the plant. The
disagreement management micro-model considers several dynamic parameters
(arousal level, confidence) and static parameters (expertness, reliability) to
describe a variety of communication processes.



Figure 1-7 Parameters used in the disagreement management

In order to obtain a quicker implementation, the model explained above is reduced by
applying appropriate aggregations. The properties of the Thinking, Short Term Memory,
Medium Term Memory and Long Term Memory micro-models have been assigned to two new
modules: the Skill Based Reaction (SBR) and the Knowledge Based Processing (KBP).
• The SBR module regards the performance of immediate reactions (when the
warning alarms go off, the operator will carefully monitor the control panel).
• The KBP module performs the following tasks:
- it receives the information from the external world and from the long-term
memory and produces the mental model;
- it selects a strategic objective on the basis of the mental model produced;
- it searches for the appropriate countermeasures and checks the procedures carried
out until the operator notices some effects;
- it defines the priorities;
- it understands the situation of the system.
Figure 1-8 shows the information flows between the modules. In order to simplify the diagram,
a lack of interrelation between the two followers is presumed and only one follower is
depicted. All information that gets into the model goes through the attention micro-model and
then the SBR module produces an action. The information that refers to the change of
parameters of the system goes into the KBP and helps to build a new mental model. The
key parameters are chosen and transmitted to Task Assignment with their priority index. The
plant model is the plant simulation and it is linked with the HMI module to show the simulation
output. The leader module is the same as the follower module, except for the action micro-
model, which exists only in the follower module. In fact, the leader does not have action
tasks but only management tasks.

Figure 1-8 Flows of information

The simulator is also well able to describe interactions among the members of the team; it is
tailored to a specific application and, in order to be used, it needs the input coming from the
plant simulator for which it has been built.
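SYBORG's utterance management function (record an utterance, deliver it, feed an acknowledgement back through the HHI) can be sketched as a simple message channel. The operator names and utterance categories below are illustrative; SYBORG's twelve real categories are not reproduced here:

```python
from collections import deque

# Minimal sketch of HHI utterance management: an utterance is recorded,
# delivered to the receiver, and acknowledged back so the sender can
# confirm that communication succeeded. All names are hypothetical.
class HHIChannel:
    def __init__(self):
        self.log = []                      # record of every utterance
        self.inboxes = {}                  # per-operator message queues

    def register(self, name):
        self.inboxes[name] = deque()

    def utter(self, sender, receiver, category, content):
        msg = {"from": sender, "to": receiver,
               "category": category, "content": content}
        self.log.append(msg)               # utterance is recorded...
        self.inboxes[receiver].append(msg) # ...and sent to the receiver

    def receive_and_ack(self, name):
        msg = self.inboxes[name].popleft()
        # feedback via the HHI confirms the success of the communication
        self.utter(name, msg["from"], "Acknowledge", msg["content"])
        return msg

hhi = HHIChannel()
for op in ("leader", "follower_A"):
    hhi.register(op)
hhi.utter("follower_A", "leader", "Report", "feedwater pressure dropping")
report = hhi.receive_and_ack("leader")
print(report["category"], "->", len(hhi.log), "utterances logged")
```

The explicit acknowledgement step mirrors the requirement that the answer be fed back via the HHI before the communication is considered successful; a lost acknowledgement would model a communication failure.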

1.2.4 The model TBNM (Team Behaviour Network Model)


To support the analysis of the cognitive processes of operators working in a team in a
complex dynamic context, Shu, Furuta and Kondo have developed a model named the Team
Behaviour Network Model (TBNM). This model is made up of four micro-models: Task
Model, Event Model, Team Model, and Human-Machine Interface Model.

Figure 1-9 Team Behaviour Network Model (Shu et al. 2002)

• The Task Model is used to depict team tasks and to identify the associated context in
which the interaction within the operator team develops. A complex task is
subdivided and assigned to the operators in accordance with their individual
peculiarities.
• The Event Model specifies the developments of a situation after an initiating event
occurs.
• The Team Model defines the team factors (organizational structure and the individual
peculiarities of the operators that are at the root of the communication). In normal
operation the team structure is predetermined and each member of the team knows
what he or she has to do and how to communicate. The collaboration pattern is dynamic
because the environmental conditions change and the operators can execute abnormal
actions.
• The Human-Machine Interface Model shows the layout of the control room and all
possible switches among the states of the indicators. It is assumed that one panel is
assigned to each operator. When an incident/accident occurs, the operators assist
each other by covering positions different from the planned ones.
The cognitive process of the team consists of four modules: identification of the symptoms,
decision making, planning and execution.



Figure 1-10 Cognitive process of the team (Shu et al. 2002)

The current state of the system is identified on the basis of the operator's know-how or of
information coming from the other members of the team.
During the Decision Making process the decision-maker chooses, within the bounds of his
authority, an option from the emergency list.
During the Planning process the planner, selected depending on his knowledge and
responsibility, chooses a procedure from the list of plans.
During the Execution process the executor, selected depending on his responsibility and
capacity, performs an operation from the action list according to the operative procedure.
The performance of the cognitive process is outlined by a timing fault tree, as in the
reliability assessment of a system. The representation includes the communication between
the members of the team and the interaction with the dynamic context. For a quantitative
assessment it is necessary to know how the members of the team confront an event,
organize collaboration and produce communication. Because there is no database for an
error taxonomy of team performance, the reliability values are assessed from simulation
results. It is then possible to assess the reliability of the team in a specific context by
combining these results with the timing fault tree.
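The simulation-based reliability assessment that TBNM falls back on can be sketched as a Monte Carlo pass over the four stages of the team's cognitive process. The per-stage success probabilities below are illustrative assumptions, not values from TBNM:

```python
import random

# Sketch of estimating team reliability by simulation when no error-taxonomy
# database exists. The four stages follow the cognitive process described in
# the text; all probabilities are hypothetical.
STAGES = [
    ("identification", 0.98),   # symptoms recognised
    ("decision_making", 0.97),  # option chosen from the emergency list
    ("planning", 0.96),         # procedure chosen from the list of plans
    ("execution", 0.95),        # action performed per operative procedure
]

def run_team_process(rng):
    """One simulated pass through the team's cognitive process.

    Returns True only if every stage succeeds; all() stops at the
    first failed stage, like an abandoned process.
    """
    return all(rng.random() < p for _, p in STAGES)

def team_reliability(n_runs, seed=7):
    rng = random.Random(seed)
    successes = sum(run_team_process(rng) for _ in range(n_runs))
    return successes / n_runs

print(round(team_reliability(20_000), 3))  # close to the product of stage rates, ~0.867
```

The per-stage failure frequencies obtained this way are the kind of simulation-derived values that can then be combined with the timing fault tree for a context-specific team reliability figure.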



1.2.5 The simulation AITRAM
The aim of the AITRAM simulation is to contribute to the improvement of the learning
process by developing an advanced training system for aeronautical maintenance
technicians. The simulator addresses both technical and Human Factors issues and is
based on innovative concepts, new cognitive approaches and simulation technologies such
as Virtual Reality. The model integrates Human Factors and technical competency
requirements in order to satisfy those Human Factors and technical training objectives
which are most frequently addressed as separate elements in the aviation maintenance
domain.
Figure 1-11 Process model for Human Factors and Technical training integration (Mauri, et al 2001)
The making of the simulator consists of three steps: creation of the model, conceptual design
and implementation.
a. Creation of the model
The model is the outcome of the integration of the SHELL (Edwards, 1988) and
PIPE (Cacciabue, 1998) models. Furthermore, a few suggestions are drawn from the
COCOM model (Hollnagel, 1993a,b).
o SHELL model
The SHELL model was developed to describe the relationship between humans and
the other elements of the working environment through the following elements:
Software, Hardware, Environment, and Liveware.
In the context of aeronautical maintenance, the relationships between the various
elements are:
- Liveware-Environment: this relationship covers the social and technical aspects
of the working context in which humans operate and which can affect the
operator's behaviour.
- Liveware-Hardware: describes the relationship between the technician, the plant
and the working tools.
- Liveware-Software: the relationship based on the interaction between the
technician and the procedures (AMM: Aircraft Maintenance Manual) that he must
follow.
- Liveware-Liveware: covers communications and the transfer of information
between two technicians. Furthermore, this relationship includes possible
contacts with the supervisor.
o PIPE model
The PIPE model is based on the four main cognitive functions that describe human
behaviour: Perception, Interpretation, Planning, and Execution. These functions are
controlled and supported by the cognitive processes of Memory and Allocation of
Resources. These two processes affect the maintenance technician through error
modelling and through the interrelation with the other operators and the environment.
Figure 1-12 PIPE model (Cacciabue 1998)
The process starts with a stimulus and finishes with a response. Stimuli are produced
by the controlled machine, the work environment or the contextual conditions, while
responses are the manual actions executed in accordance with the stimuli and the
related cognitive process.
o Integration of the SHELL and PIPE models
The operator model used in the simulator is the result of the integration of the SHELL
and PIPE models. The four main cognitive functions of PIPE are managed through the
elements of the SHELL model. Namely, during task performance the operator interacts
with Hardware, Software, Environment and the other operators through the perception
and interpretation functions, which detect and process the stimuli coming from the
plant, the procedures or the other operators. Having gathered this information, the
operator can plan the action to be executed. The execution of the action at time t
enables the start of a new cycle at time t+1.
Figure 1-13 Integration of SHELL and PIPE model (Mauri, et al 2001)
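The stimulus-to-response cycle described above can be sketched as a minimal loop over the four PIPE functions. The function names and decision rules below are illustrative assumptions, not AITRAM code; Memory is reduced to a simple shared list.

```python
def perceive(stimulus, memory):
    """Perception: detect the stimulus and store it in memory."""
    memory.append(("perceived", stimulus))
    return stimulus

def interpret(percept):
    """Interpretation: turn the percept into a diagnosis (toy rule)."""
    return "fault" if "alarm" in percept else "normal"

def plan(diagnosis):
    """Planning: choose an action for the diagnosis."""
    return "apply_procedure" if diagnosis == "fault" else "continue_monitoring"

def execute(action):
    """Execution: produce the manual response."""
    return "response:" + action

def pipe_cycle(stimuli):
    """Run the stimulus-to-response loop: the response at time t closes
    one cycle and the next stimulus starts the cycle at time t+1."""
    memory = []        # supporting cognitive process: Memory
    responses = []
    for stimulus in stimuli:
        percept = perceive(stimulus, memory)
        responses.append(execute(plan(interpret(percept))))
    return responses

responses = pipe_cycle(["engine_alarm", "pressure_ok"])
```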
b. Conceptual design
The conceptual design consists of two steps: Data Modelling and Function Specification.
o Data Modelling
This step characterizes the elements of the model (Software, Hardware, Liveware,
and Environment) and determines the entity-relationship diagram. This diagram is
used to create a database that is fundamental for the correct execution of the
simulation run.
- Software: Task
Task simulation is performed by processing the Tabular Task Analysis (TTA)
(Schraagen et al. 2000). The task has to be described in great detail in order to
identify each action that the maintenance technician performs and to outline the
effects of the action itself. The TTA then allows each task to be subdivided into units
that represent the individual actions.
- Hardware: Objects and Tools
All objects and tools used during the execution of the task have to be listed and
labelled with an unambiguous code.
- Liveware: Technician Performance Influencing Factors (Technician PIFs)
The Technician PIFs are those factors which influence the operator's performance;
examples are motivation, stress and experience. The values of the PIFs can be fixed
at the beginning of the simulation and can change during the simulation run.
- Environment: Environment Performance Influencing Factors (Environment PIFs)
Environment PIFs are the factors external to the maintenance technician which
influence his performance and depend both on environmental conditions and on
corporate policy (weather, noise, illumination).
o Function Specification
This step carries out the design of the functions which allow the interaction between
user and simulator. To explain what the user can accomplish through these functions,
it is necessary to analyse the simulator structure.
The simulation process consists of three stages: Initial Set Up, Simulation Run
and Generation of Output Data.
Figure 1-14: Simulator structure (Mauri, et al 2001)
- The Initial Set Up defines the initial conditions of the process; in particular it
should detail the Environment PIFs, the Technician PIFs, and the state of the
objects and tools.
- During the Simulation Run step some or all of the actions belonging to the same
task are executed. Each action is performed by the software through the Action
Execution Flowchart. The path followed along the flowchart depicts the decisions
taken by the virtual maintenance technician.
Figure 1-15 Action Execution Flowchart (Mauri, et al 2001)
- Generation of Output Data: at the end of the process, the simulator reports the
pathway followed, the action code, a brief description, commission and omission
errors (if they occurred), action and task times, and the trend of the
Environment/Technician PIFs during the run.
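The three stages above can be sketched as follows. The PIF names, the task list and the error rule are hypothetical stand-ins for the database contents described above, not the actual AITRAM data model.

```python
def initial_set_up():
    """Initial Set Up: initial PIF values, objects/tools and task units."""
    return {
        "technician_pifs": {"stress": 0.2, "experience": 0.8},
        "environment_pifs": {"noise": 0.3, "illumination": 0.9},
        "task": ["open_panel", "replace_filter", "close_panel"],  # from a TTA
    }

def simulation_run(state):
    """Simulation Run: execute the task units one by one."""
    log = []
    pifs = state["technician_pifs"]
    for code, action in enumerate(state["task"]):
        # Toy flowchart decision: an omission error occurs when stress
        # outweighs experience (placeholder rule, for illustration only).
        error = "omission" if pifs["stress"] > pifs["experience"] else None
        log.append({"code": code, "action": action, "error": error})
        pifs["stress"] += 0.05      # PIFs can drift during the run
    return log

def generate_output(log):
    """Generation of Output Data: pathway, codes and errors."""
    return {
        "pathway": [entry["action"] for entry in log],
        "errors": [entry for entry in log if entry["error"] is not None],
    }

output = generate_output(simulation_run(initial_set_up()))
```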
c. Implementation
The model described above is implemented in Microsoft Visual Basic 6. Data are
managed through Microsoft Access.
1.2.6 The simulation PROCRU (Procedures Oriented Crew Model)
PROCRU is a simulation of an aircraft crew, composed of a flying pilot, a pilot-not-flying, and
a second officer, for the analysis of multi-operator situations and the evaluation of the effects
of communications among such operators (Baron et al. 1980).
The basic structure of the model comprises the Simulation of the Aircraft under control and
the Simulation of the Single Operator.
Figure 1-16 The model PROCRU for individual crew member (Cacciabue 1998)
o The Simulation of the Aircraft includes the Machine Dynamics, containing display and
control variables, and the ATC/CREW model, which comprises communication with other
crew members and the external world, such as air traffic control.
o The Simulation of the Individual Operator contains four main elements:
- The Monitoring Process, which handles display variables and incoming
communication and is affected by the situation and by psychological and external
factors such as stress.
- The assessment of the current situation (Information Processing), which is
influenced by monitored information, inherent knowledge and goals of the
operator.
- The decision of the action or other cognitive activities to carry out, which is based
on the procedure oriented modelling (Procedure Selector) and is affected by the
previous cognitive activities, the aims of the operator and the assessment of
possible consequences.
- The action implementation (Execution), which implies a process of communication
with other crew members, or the external world, and the performance of actual
control activity, either by observing (Monitor Requirements) or by operating the
control system (Control Requirements).
The PROCRU simulation also comprises a model of the Knowledge Base of the Operator,
which is made up of the Procedures, a Description of the aircraft, and an Interaction Module
that describes the interaction between crew members and the ATC body.
The model includes, amongst the events that are considered for situation assessment, facts
that are “not explicitly dependent on the vehicle state variables”. This means that one of the
basic requirements for modelling cyclic cognitive processes is respected, i.e., a cognitive
activity may be generated by another cognitive process and is not only the result of machine
or context stimuli. This qualifies PROCRU as a cyclic simulation.
The simulation of communications is performed by referring to standard procedural verbal
requests or responses as is required by procedures.
PROCRU presumes the use of cognitive task analysis for preliminary definition of procedures
and actual performances carried out in the cockpit.
It can be concluded that PROCRU, although developed in the early 1980s, remains even
today a remarkable simulation approach, worth reviewing and considering as a possible
means of representing pilots' (operators') behaviour, even when dealing with highly
automated cockpits or control rooms and multiple interaction processes.
1.2.7 The simulation MIDAS (Man Machine Integration Design and Analysis System)
MIDAS is a framework that accommodates models and knowledge structures for the
simulation of human-machine interactions during safety critical conditions (Corker and Smith,
1993). This workstation-based simulation system contains models of human performance
which can be used to evaluate possible new procedures, controls, and displays prior to more
expensive and time consuming human subject experiments. Several aviation applications
have demonstrated MIDAS’ ability to highlight procedural or equipment constraints and
produce human-system performance measures early in a platform’s lifecycle.
MIDAS combines graphical equipment prototyping, a dynamic simulation, and human
performance modelling with the aim of reducing design cycle time, supporting quantitative
predictions of human-system effectiveness, and improving the design of crew stations and
their associated operating procedures. Furthermore, MIDAS has been conceived as a
modular structure and can, in principle, be applied to study different domain environments
at different levels of complexity.
The basic architecture of MIDAS contains a model of the system under control, the World
Representation, and the Operator Model.
Figure 1-17 MIDAS architecture (Corker and Smith 1993)
o The World Model of MIDAS supports a graphical representation of the physical
entities in an environment, using geometry either produced internally or imported from
a commercial CAD system. In addition to their physical aspects, the functionality of
controls and displays is captured by associating operating procedures and
behaviours with each graphical equipment component. These functional models are
expressed in three different formats: a time script, a stimulus response, or a finite
state machine representation. In addition to the physical and functional models of a
cockpit, the entire crew station can be placed inside a vehicle model, linked to
guidance and control models, and placed inside a terrain database or gaming area.
The World Representation also contains the probabilistic module, by which failures and
malfunctions may be introduced on a probabilistic basis.
o The Human Operator Model represented by MIDAS contains the following models and
structures.
- Physical Representation: a model of human figure anthropometry and dynamics.
The model, Jack, represents human figure data (e.g., size and joint limits) in the form
of a 3-D mannequin which dynamically moves through various postures and visual
fixations to represent the physical activities of a simulated human operator.
- Perception and Attention: MIDAS modelling of perception focuses on an agent that
computes the cockpit objects imaged on the operator's retina, tagging them as in or out
of the peripheral and foveal fields of view, and in or out of focus relative to the fixation
plane. Objects in the peripheral visual field are only partially perceived. In order for
detailed information to be fully perceived, the data of interest must be in focus,
attended, and within foveal vision for 200 ms. The perception agent also controls the
simulation of commanded eye movements via defined scan, search, fixate, and
track modes. Differing stimulus saliences are also accommodated through a model of
pre-attention in which specific attributes, e.g. colour or flashing, are monitored to
signal an attention shift.
- Updatable World Representation (UWR): this model contains the basic knowledge
of the operators, the information concerning procedures and equipment, the activity
of working memory on the information received from the perception module, and the
known relationships between objects and system components. UWR contents are
defined by pre-simulation loading of the required mission, procedural, and equipment
information. Data are then updated in each operator's UWR as a function of the
mediating perceptual and attention mechanisms previously described. These
mechanisms function as activation filters, allowing a higher or lower impact of the
stimuli coming from the modelled environment on the operator's memory.
- Activity Representation: tasks or activities available to an operator are contained in
the operator's UWR and generate the majority of the simulation behaviour. MIDAS
uses a hierarchical representation. Each activity contains slots for attribute values,
describing preconditions, temporal or logical execution constraints, satisfaction
conditions, estimated duration, priority, and resource requirements. Resources
include both physical effectors and psychomotor task loading.
- Scheduler: activities which have their preconditions met, their temporal/logical execution
constraints satisfied, and the required information retrieved from memory are queued and
passed to a model of operator scheduling behaviour. Based on the user's selected
scheduling strategy (e.g., “workload balancing” or “time minimization”), activities are
executed in priority order, subject to the availability of the required resources. MIDAS
contains support for parallel activity execution, the interruption of on-going activities
by those of higher priority, and the resumption of interrupted activities.
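A minimal sketch of this scheduling behaviour, under a simple resource-conflict assumption: activities whose preconditions hold are sorted by priority and grouped into parallel "waves" whenever their resource sets do not overlap. The activity names, priorities and resources are illustrative, not MIDAS data.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    priority: int              # lower number = higher priority
    resources: frozenset       # e.g. {"eyes", "hands"}
    precondition_met: bool = True

def schedule(activities):
    """Group runnable activities into execution 'waves': within a wave
    all activities run in parallel because their resources do not clash."""
    ready = sorted((a for a in activities if a.precondition_met),
                   key=lambda a: a.priority)
    waves = []
    while ready:
        wave, used = [], set()
        for act in ready:
            if not (act.resources & used):   # required resources free?
                wave.append(act)
                used |= act.resources
        ready = [a for a in ready if a not in wave]
        waves.append([a.name for a in wave])
    return waves

waves = schedule([
    Activity("scan_display", 1, frozenset({"eyes"})),
    Activity("radio_call", 2, frozenset({"speech"})),
    Activity("read_checklist", 3, frozenset({"eyes", "hands"})),
])
```

Here the checklist must wait for a second wave because the higher-priority display scan already occupies the eyes, while the radio call runs in parallel with the scan.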
New MIDAS design
A major effort to redesign the MIDAS system is underway in order to reduce the development
time for new scenarios (from several months to one or two weeks), to increase the efficiency
of the running system (from around 50 times real-time to near real-time), to facilitate the
process of replacing cognitive and perceptual models (from weeks to days), and to expand
the functionality of the system. There was also a desire to update the human operator
model, in particular to account for more widely accepted views on human information
processing and its likely underlying architecture.
The approach taken in MIDAS redesign is object-oriented rapid prototyping. Initial design
efforts produced a high-level system architecture with the following elements:
- a domain model supporting components necessary for running a simulation;
- a graphics system to enable simulation visualization;
- an interface for end user specification of the target domain models;
- a simulation system for controlling the simulation and collecting data;
- a results analysis system for examining simulation data after it has been collected.
The domain model is centred on a crew station, with the following models:
- the environment encompassing the crew station;
- the vehicle containing the crew station;
- the crew station itself, particularly its contained equipment;
- the crew, meaning the human operators together with their assigned missions and
procedures.
In the redesign, the human operator model is both expanded in functionality and aligned
more closely with typical information processing models of human cognition and perception.
The model includes an anthropometric component (a simple representation of the operator’s
hands and head was used), capturing physical aspects of human behaviour, permitting
visualization of reach, fit, and fixation activities. The processing architecture of the human
operator model considers as main components the following elements: input, memory and
central cognition, output, and attention.
Figure 1-18 New MIDAS Operator Architecture
o Operator Input is received from the environment through the senses.
Visual input is obtained through an intermediate, the Visual Scene, which contains all
objects potentially visible to the human operator, either inside or outside the crew station.
Auditory input occurs through an intermediate object called the Auditory Field, containing
all signals and messages emitted by equipment, other operators, and the environment.
The perception element processes these inputs to produce a simple interpretation which is
entered into Working Memory, in either the Phonological Loop (linguistic material) or the
Visuo-Spatial Sketchpad (non-linguistic material).
o Memory now consists of both Long-Term and Working Memory components.
The former, similar to the existing UWR, contains both declarative and procedural
knowledge. Procedural knowledge is represented as Reactive Action Packages (RAPs)
which describe how to accomplish a given goal and consist of the methods possible for
achieving that goal, when each is most appropriate (according to the current context), and
how it is known that the goal is satisfied.
Working Memory has three main components:
- Event Management, in which new inputs are assessed to determine whether they were
expected (if so, they are simply used to update the current context; if not, they
generally trigger the creation of new goals to handle the unexpected event);
- Agenda Management, in which the goals on the Task Agenda are examined, based
upon priority and the current situation, to determine which one to focus on next;
- Plan Execution which, once a goal is selected, retrieves the appropriate
RAP from Long-Term Memory.
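The three processes above can be sketched as one pass through working memory. The RAP store, goal names and priority scheme below are hypothetical placeholders for Long-Term Memory contents, not the actual MIDAS knowledge base.

```python
# Toy Long-Term Memory: goal -> ordered method steps (stand-ins for RAPs).
LONG_TERM_RAPS = {
    "handle_engine_alarm": ["silence_alarm", "run_engine_checklist"],
    "monitor_traffic": ["scan_display"],
}

def event_management(new_input, context, agenda):
    """Expected inputs update the current context; unexpected inputs
    trigger the creation of a new goal to handle the unexpected event."""
    if new_input in context["expected"]:
        context["state"] = new_input
    else:
        agenda.append({"goal": "handle_" + new_input, "priority": 1})

def agenda_management(agenda):
    """Select the goal to focus on next (lower number = higher priority)."""
    return min(agenda, key=lambda g: g["priority"]) if agenda else None

def plan_execution(goal):
    """Retrieve the appropriate RAP for the selected goal."""
    return LONG_TERM_RAPS.get(goal["goal"], [])

context = {"expected": {"altitude_update"}, "state": None}
agenda = [{"goal": "monitor_traffic", "priority": 5}]

event_management("altitude_update", context, agenda)  # expected input
event_management("engine_alarm", context, agenda)     # unexpected -> new goal
steps = plan_execution(agenda_management(agenda))
```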
o The Motor Control Process regulates bodily movement, manipulation of equipment, and
speech output. If the required resources are available, a motor activity is created and
processed.
o Attention, within the new architecture, is modelled as a limited central resource.
Therefore, for any of the behaviours described previously to occur, the responsible
process must first secure the necessary attentional resources. If these are not available,
then a delay of that process, or an interruption of an ongoing activity, is necessary.
1.2.8 TOPAZ (Traffic Organization and Perturbation AnalyZer)
TOPAZ is a simulator that can be used for analysing the errors of Air Traffic Controllers. It is
based on a stochastic analysis framework which implies the following five activities:
a. Develop a stochastic dynamical model for the situation considered;
b. Where necessary, develop appropriate cognitive models for the human operators involved;
c. Perform the stochastic analysis necessary to decompose the risk assessment;
d. Execute the various assessment activities (e.g. through Monte Carlo simulation, numerical
evaluation, mathematical analysis, or a combination of these);
e. Validate the risk assessment exercise.
The aim of the TOPAZ developers was to represent, for the selected encounter scenarios, the
results of the qualitative safety assessment in the form of a Stochastic Differential
Equation (SDE) on a hybrid state space. Unfortunately, the direct identification of the SDE
model would be very complicated for most ATM situations. In addition to the very large state
space of the corresponding SDE, there are many interactions between the many state
components. Therefore the developers shifted their attention towards a systematic approach
to develop an SDE instantiation through a specific type of Petri Net: the
Dynamically Coloured Petri Net (DCPN) (a more detailed description can be found in
M.H.C. Everdij, H.A.P. Blom and M.B. Klompstra, 1997).
Operator Model
The Operator Model used consists of a contextual human task-network model, which is
formulated in terms of a DCPN and which effectively combines the cognitive modes of
Hollnagel (1993) with the Multiple Resources Theory of Wickens (1992), the classical
slips/lapses model (Reason, 1990) and the human capability to recover from errors
(Amalberti and Wioland, 1997). In addition, a model for the evolution of situational awareness
errors has been developed.
Description of controller task
In order to analyse the controller's task, the task analysis breaks it down into several
subtasks. This decomposition is carried out along two dimensions: first a generic dimension,
where the task is decomposed into cognitive activities at a general level which is
independent of the scenario and operational concept. Secondly, the task is decomposed
according to a scenario/concept-specific dimension, where the controller task is described at
the level of the operational functions in the scenario. The task decomposition along the generic
dimension has been derived from Buck et al. (1996). The following subtasks resulted:
1. Sensing (to gather all the information needed to get an overview of the air traffic
situation).
2. Integration (to connect the gathered information thus forming a more global air traffic
picture).
3. Prediction (to use the more global picture to anticipate future situations and events).
4. Complementary communication (to pass information to aircraft in order to improve the
pilots' understanding of the situation).
5. ATC problem solving/planning (to use the understanding gained from the more global
perspective to plan and prioritise aircraft actions).
6. Executive action (to communicate information and priorities as instructions to the aircraft in
the system).
7. Rule monitoring (to ensure that the active components of the system behave in
accordance with the ‘rules’; monitoring and taking corrective actions for exceptions).
8. Co-ordination (to coordinate laterally with other parts of the ATC organisation).
9. Over-all performance (to ensure that the objectives of the operation are achieved, and that
the infrastructure functions correctly).
10. Maintenance and monitoring of non-human part (to ensure that all systems supporting
the controller work correctly).
To model the influence of the context on performance, a mathematical model has been
adopted that incorporates two control modes from Hollnagel's approach: tactical control and
opportunistic control. The characteristic influence of these control modes on performance
can be explained as in the short example reported in Table 1-1.
Table 1-1: Subtasks related to Anticipation and Alerts (Blom, Daams, Nijhuis, 2000)
A1 Sensing:
Tactical: Whenever possible the controller scans his display to detect possible deviations from ATC
intentions. The controller divides the display into regions of interest and assesses these regions in a
particular order. If scanning is interrupted at some time instant, the controller will resume scanning starting
at the region that he was scanning when the interruption took place. Further information may also be
obtained through R/T communication.
Opportunistic: Whenever possible the controller scans his display to detect possible deviations. The
controller scans in a random fashion.
A2 Integration
Tactical: The ATCO systematically integrates the information derived from scanning to improve his mental
picture of the traffic situation. When some relevant information is not available, the ATCO may return to
sensing to actively seek information to improve his assessment of the situation.
Opportunistic: The ATCO integrates the randomly obtained information. An incomplete or even distorted
mental picture may develop.
A3 Prediction
Tactical: The ATCO extrapolates his mental picture to the future traffic situation. On the basis of the
assessment of the situation, the ATCO decides whether a problem may occur in the mid-term future.
Opportunistic: The assessment of the future situation is restricted to a short time horizon and is based on
incomplete information. It is assessed whether a problem may be expected in the short-term future.
A5 Problem solving/planning
Tactical: On the basis of the assessment of the (future) situation, the ATCO decides on a resolution to the
expected problem. In principle, the resolution involves re-planning the aircraft trajectories in an optimal
fashion with respect to safety and efficiency.
Opportunistic: The resolution is aimed at solving the imminent problem only.
A6 Executive action
Tactical: The controller gives a series of R/T instructions to the aircraft involved. He verifies whether the
pilot(s) readback these instructions correctly.
Opportunistic: The verification of correct readback may be omitted.
A7 Rule monitoring
Tactical: After the R/T communication the controller verifies whether the aircraft comply with his clearances.
Opportunistic: This may be omitted or be performed less thoroughly.
Scheduling of subtasks
The subtasks have been scheduled according to a defined strategy. The scheduling strategy
is expressed in the following (input) task parameters:
“Pre-emption For each subtask an assumption is made whether it may pre-empt another
subtask.
Concurrency For each subtask it is known whether it may be performed concurrently with
another subtask.
Initiation For each subtask the circumstances under which the subtask should be performed
are known.
The assumptions concerning Pre-emption and Concurrency are implemented according to
priority tables (Blom, Daams, Nijhuis 2000). These tables have been identified on the basis
of ATC human factors expert knowledge.
In terms of a stack of to-be-performed subtasks this scheduling principle can be formulated
generically as the following two rules:
Rule 1: An initiated subtask will be placed in the stack before the subtasks that it may
pre-empt.
Rule 2: If the first two subtasks of the stack can be processed concurrently, this will be done
(subtask duration will be slightly longer, however).” (Blom, Daams, Nijhuis 2000)
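The two rules can be sketched directly as operations on a subtask stack. The pre-emption and concurrency tables below are illustrative stand-ins for the priority tables of Blom, Daams and Nijhuis (2000), not their actual values.

```python
# Illustrative stand-ins for the priority tables: which subtasks a subtask
# may pre-empt, and which pairs may be performed concurrently.
PREEMPTS = {
    "executive_action": {"sensing", "integration"},
    "sensing": set(),
    "integration": set(),
}
CONCURRENT = {frozenset({"sensing", "integration"})}

def initiate(stack, subtask):
    """Rule 1: place the initiated subtask before the subtasks it may pre-empt."""
    for i, existing in enumerate(stack):
        if existing in PREEMPTS.get(subtask, set()):
            stack.insert(i, subtask)
            return
    stack.append(subtask)

def next_step(stack):
    """Rule 2: process the first two subtasks concurrently when allowed."""
    if len(stack) >= 2 and frozenset(stack[:2]) in CONCURRENT:
        return [stack.pop(0), stack.pop(0)]
    return [stack.pop(0)] if stack else []

stack = []
initiate(stack, "sensing")
initiate(stack, "integration")
initiate(stack, "executive_action")   # pre-empts both, so it goes on top
first = next_step(stack)
second = next_step(stack)             # sensing and integration run together
```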
Mathematical model of tactical ATCO
The authors provide a description of the TOPAZ model from an “input-output” point of view
(Blom, Daams, Nijhuis 2000):
Initiation
Three stimuli for ATCO cognitive activity are identified: the ATCO's anticipation, automation
alerts and other actions. Activity-triggering situations that first have to be detected by the
operator (such as an aircraft severely deviating from its route) are not considered as initiation
stimuli, since general sensing is modelled as a part of the operator's task and therefore the
sensing activity has to be initiated first. For the occurrence of certain stimuli various other
ATM modules may need to function properly, such as the ATCO HMI and surveillance
for an Automation alert. Using a Petri Net, each stimulus is modelled as a place, connected
with one transition that fires if initiation of the corresponding cognitive activity occurs. These
transitions produce two tokens: one token returning to the stimulus place for future
generation of cognitive activity and one token in a stack place. The stack places represent
the situation in which the respective initiated cognitive activity has to wait until the operator has
represent initiation of cognitive activity by own initiative, Automation alerts and other action
(e.g. a pilot request) respectively. Preconditions on occurrence of these stimuli are modelled
within the respective transitions: if the preconditions are not met the transition does not fire.
For example: the proper functioning of the ATCO HMI as a precondition for the occurrence of
an Automation alert triggering ATCO cognitive activity is modelled as a precondition for the
firing of the transition connected to the Alert place.
ATCO subtasks
The ATCO task has been divided into several subtasks, each defined as the
combination of a scenario-specific purpose and a generically described cognitive activity.
Three context-specific purposes are modelled: the ATCO detects and corrects deviations of
aircraft from ATCO intentions, the ATCO reacts to Automation alerts (initiated by Automation
tools) and the ATCO performs other control activities (initiated by own initiative or through other
actions). Each subtask is represented by a place in the Petri Net, which is named after the
cognitive activity it represents.
The tokens then model cognitive activity on the subtask that corresponds to the place that
they reside in. Some cognitive activities may be performed for several purposes, leading to
several places with the same name. Below we describe the places with respect to the
cognitive activities that they represent. The places named sensing represent the situation
that the ATCO is gathering information to improve his picture of the traffic situation. The
places named integration represent the situation that the ATCO incorporates the newly
obtained information into this mental picture. The place named communication represents
the situation where the ATCO makes his knowledge of the situation available to the pilots.
The place named over-all performance describes the evaluation of sector performance as a
whole. In the prediction place, the ATCO extrapolates his picture of the traffic to the future,
while in the problem solving/planning place he synthesizes solutions to possible (future)
problems. In the executive action place the operator gives clearances to aircraft, followed by
a monitoring place where it is verified whether the aircraft complies with these clearances. In
the out place the tokens are collected after performance.
Whenever one subtask is logically performed after another (e.g. prediction is
performed after integration) and they have the same scenario-specific purpose, a transition is
drawn between those two subtasks.
The subtask scheduling then follows the rules previously mentioned. Scheduling depends on
the relative priority of a subtask and on the possible simultaneous performance of two subtasks.
Priority is coded as a number 1, 2, …; low numbers have higher priority, and each priority level
corresponds to a colour of the Petri Net. The priority colours are updated whenever a new
token is initiated and when a token is collected in the out place, according to a suitable set of
assumptions.
For each subtask the time needed to complete it has a certain probability density, given the
current control mode of the ATCO and possible concurrent performance of another subtask.
In the Petri Net, the duration of performing a subtask is modelled as a delay in the firing of
the transition that has the subtask as input place. Each transition has a delay that is a
function of the priority of the token in the input place, the current control mode and the place
that the token with priority 1 resides in.
The ATCO's executive actions (i.e. the clearances given) are also modelled as a colour type
associated with the tokens in the subtasks, where the colour type is a set of paired numbers
describing the type of clearance given and the aircraft that the clearance is given to. It is
assumed that the type of clearance given is determined during the executive action subtask
only and that it depends on the control mode only. So the firing of the transitions after the
executive action places also affects the Petri nets of other ATM modules: completion of
executive action means that a decision to give a clearance to an aircraft has been carried out
and therefore the firing of these transitions describes the ATCO control actions.
In the Topaz model the ATCO performance depends on the control mode and the scheduling
rules, and it results in a clearance. In the DCPN model of the ATCO, the two control modes
identified are each represented by a place in the Petri Net: the place named Tactical models
the situation in which the controller has a relatively high degree of control, while the place
named Opportunistic models a relatively low degree of control. The switching between
control modes is modelled by transitions between the Tactical and Opportunistic places. The
resulting subnet contains one token, the place of which defines the current degree of control.
The firing of the transitions between the control modes depends on the number of tokens in
the stack places, which provides an indication of the subjectively available time, and on the
number of times that monitoring was followed by another executive action during the last
few minutes, which serves as a proxy for the outcome of previous actions, measured as the
number of clearances that the controller considers insufficiently effective.
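As a simplified illustration, this switching logic can be sketched as follows; the threshold values and function names are assumptions made for this sketch, not part of the TOPAZ/DCPN model:

```python
# Illustrative sketch of the control-mode switching logic described above.
# Threshold values are assumptions, not taken from the TOPAZ calibration.

def next_control_mode(mode, stack_tokens, ineffective_clearances,
                      stack_threshold=5, ineffective_threshold=3):
    """Return the ATCO control mode for the next simulation step.

    stack_tokens           -- tokens waiting in the stack places
                              (proxy for subjectively available time)
    ineffective_clearances -- times monitoring was followed by another
                              executive action in the last few minutes
                              (proxy for insufficiently effective clearances)
    """
    overloaded = (stack_tokens >= stack_threshold or
                  ineffective_clearances >= ineffective_threshold)
    if mode == "Tactical" and overloaded:
        return "Opportunistic"   # degrade to a low degree of control
    if mode == "Opportunistic" and not overloaded:
        return "Tactical"        # recover a high degree of control
    return mode
```

In the actual DCPN the switch is a stochastic transition firing rather than a deterministic threshold test; the sketch only shows which quantities drive it.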
The Petri Net of the ATC model is represented in Figure 1-19.
In the model the ATCO may give erroneous clearances (e.g. switching heading and speed, or
giving a clearance to a different aircraft than intended: call-signs mixed up). These errors
are incorporated as random variations in the ATCO actions, and the error types are
represented as a colour value of the tokens in the place Clearances.
It is not clear, however, what data are used to calibrate these random variations, since they
are HEP-type data.

Figure 1-19: Petri Net of tactical ATCO model (Blom, Daams, Nijhuis 2000)

The developers performed a comparison against statistical data for the ATCO routine
monitoring concept: the period to detect severe deviations was chosen such that a
comparison with available statistical data is possible (George et al., 1973). The ATCO
performance model developed in Topaz, together with appropriate Petri Net models for the
other relevant components in conventional ATC, provided detection time results that agreed
quite well with the measured data. However, the simulator is highly complex both in its
application and in the analysis of its results.

1.2.9 IDAC
IDAC is a cognitive simulator based on many other first and second generation HRA
methodologies. IDAC has been mainly designed to be applied in the probabilistic accident
simulation environment ADS (Accident Dynamics Simulator; Chang and Mosleh 1998),
developed to perform dynamic probabilistic risk assessment of nuclear power plants.
IDAC represents the behaviour of a single operator or of a group of operators, taking into
account three generic roles: decision maker, action taker and consultant (Chang and
Mosleh 1999, Mosleh and Chang 2004).
The acronym stands for the modules that compose the simulator: a model for information
processing (I), problem solving and Decision Making (D), and action execution (A) of a crew
(C).
The ADS code simulates accident scenarios and generates information about the external
world; this is used as input for the IDAC code, which in turn generates the possible response
of the crew.
The architecture of the simulator is represented in Figure 1-20.

Figure 1-20: General architecture of the IDAC dynamic response model (Mosleh and Chang 2004)

The main elements of the IDAC response model have been described by the authors
(Mosleh and Chang 2004) as reproduced in Figure 1-20, where the model is placed within a
similarly high-level model of how an individual operator interacts with the external world. The
blocks shown in Figure 1-20 are:
(1) The external world to a specific operator includes the system, the physical environment,
other operators, and the external resources. These are the entities that the operator has to
interact with and that are provided by ADS.
(2) The external filter is any factor external to the operator that can block or distort the
information from the external world before being detected by the operator’s sensory organs
(e.g., visual and auditory organs). Examples of external filters are noise and view obstruction.
(3) The information that has passed the external filter enters the inner world of the operator.
The main components of the internal world are Mental State (MS), Memory, and Rules of
Behaviour.
a. The MS is represented by a set of inter-related variables (i.e., internal PIFs). It
defines the operator’s state of mind in various dimensions such as individual
differences, situation perception and appraisal, feelings about the situation, and
certain cognitive behavioural modes. MS could act as an internal filter by which the
incoming information is masked.
b. The IDAC model includes three types of memory: working memory (WM), intermediate
memory (IM), and knowledge base (KB). WM stores limited information related to
the current cognitive process. IM, theoretically unlimited in capacity, stores
information related to recent cognitive processes which could be easily retrieved at
any time given appropriate stimuli. KB, also theoretically unlimited in capacity,
stores all PS/DM related knowledge obtained from training and experience.
c. Rules of Behaviour govern the cognitive, emotional, and physical responses of an
individual for a given state of PIFs and the content of memory. More specifically,
the migration of memory and MS from one state to another during the course of an
event, and the corresponding operator behaviour in the I-D-A sequence are
regulated by the Rules of Behaviour. These cognitive activities are based on the
content of the Working Memory in which the information with the highest priority is
stored. Any cognitive response of the operator to a situation which has been
brought to the operator’s attention through the information perceived is translated
into a problem statement or goal, requiring resolution. The process of problem
solving or goal resolution involves selection of a problem solving method or

strategy. There is a hierarchy of goals and sub-goals, such that complex problems
are broken down into simpler ones, and solved one at a time or concurrently, using
corresponding strategies. The problem solving process involves a series of
decisions to be made or solutions to be selected based on available alternatives.
The decision-making stage has its own strategy, which is “cost-benefit
optimization”.
(4) Within the time window of interest (i.e., the duration of an accident), some PIFs are static
(e.g., human-system interface quality) and some are dynamic (e.g., the number of alarms
generated in the control room during an accident).
(5) Actions are the external manifestation of decisions (to act) formed by the cognitive
process of Problem Solving/Decision Making. The action performing process (A) executes
the decision made through the D process. The actions are skill-based, requiring little mental
effort. Through action the operators interact with the external world, which in turn generates
new information starting another Problem Solving/Decision Making cycle. The operator's
actions could be blocked or distorted by the external filter. This interaction loop continues
until the desired system state is reached (e.g., the problem is solved) or an undesired state
of the system is reached (Mosleh, Chang 2004).

Any cognitive response of the operator or of the crew to a perceived external situation is
translated into a problem statement or a goal requiring solution. The model also tries to
cover why and how a response process is initiated and why and how a goal or a solution is
selected or abandoned. In order to go through the I-D-A process dynamically and in
response to external dynamics, IDAC's model has an internal engine comprising the Mental
State with its set of state variables and Rules of Behaviour, plus the information processing
engine of the Working Memory. The stimuli are an individual's perception of the external
world. The tendency to act on stimuli includes the individual's internal feelings pertaining to
the stimuli (e.g. time constraints, workload, etc.). These result in various psychological
moods (stress, alertness, etc.) that could affect the individual's behaviour.
As described by the authors (Mosleh and Chang 2004), the cognitive engine (its parameters,
factors and rules) acts on the memory and generates a cognitive behaviour in response to
the scenario within which the activity has been initiated. Part of the dynamics of the operator
response is due to the change in the external environment. Perceived raw information is
temporarily stored in the Working Memory and serves as a stimulus to change the Mental
State. IDAC covers the continuum of operator cognitive processes and actions in the form of

discrete cognitive events, such as the steps of information processing, goal selection and
the execution of problem solving strategies to achieve the goal. The basic cognitive events
and the resulting observable actions are stochastically selected among the possible
alternative paths and related outcomes that have been identified as potential, each with an
assigned conditional probability. These probabilities are conditioned on the past history of
the sequence preceding the events, and their values are calculated as a function of the
states of the various parameters identified as influencing factors. The uncertainties identified
by the authors in connection with the probabilities evaluated by IDAC are:
- “uncertain effects and variability of factors which are not included in the current model
- Uncertainty of the degree of influence of factors included in the model on each other
and on the model output
- Stochastic variability of the spectrum of situations that are collectively approximated
by individual basic events and parameters of the model
- Intrinsic residual unpredictability of human behaviour”

In its current application IDAC uses qualitative and quantitative scales in order to assess the
state of input variables and parameters (PSFs). These elements are then used to calculate a
score for each alternative response. The set of possible alternatives is assumed to be
complete; therefore the probability of each alternative is calculated as the normalized score
of that alternative:
Pi = scorei / Σ(j=1..N) scorej
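A minimal sketch of this normalization step in code (the scores below are illustrative values, not calibrated IDAC output):

```python
# Sketch of IDAC's normalized-score quantification: Pi = scorei / sum of scorej.
# The input scores are illustrative values, not calibrated IDAC output.

def alternative_probabilities(scores):
    """Convert alternative-response scores into probabilities by normalization."""
    total = sum(scores)
    return [s / total for s in scores]

probs = alternative_probabilities([4.0, 3.0, 1.0])  # -> [0.5, 0.375, 0.125]
```

By construction the resulting probabilities sum to one, which is exactly where the completeness assumption on the set of alternatives enters.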

Each PSF value ranges from 0 to 10. Static PSFs are inputs to the model, quantified by
HRA analysts using conventional methods such as expert judgment and surveys. Dynamic
PSFs are a function of the scenario and of the static PSFs.
In IDAC observable human actions are classified as errors with respect to external
reference points in the following way:
1) the crew behaviour is compared with the system needs or the actual system state;
2) the crew behaviour is compared with the procedure requirements; and
3) the procedure requirements are compared with the system needs.
A mismatch between the states and mutual requirements of any pair of reference points
can be classified as an error.
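This classification can be sketched as a set of pairwise comparisons; the action labels below are illustrative assumptions, not IDAC inputs:

```python
# Sketch of IDAC's external error classification: a mismatch in any pairwise
# comparison among required system action, crew action and procedure-prescribed
# action flags an error. The action labels are illustrative assumptions.

def classify_errors(system_needs, crew_action, procedure_action):
    """Return the list of mismatched reference-point pairs."""
    errors = []
    if crew_action != system_needs:
        errors.append("crew vs. system")
    if crew_action != procedure_action:
        errors.append("crew vs. procedure")
    if procedure_action != system_needs:
        errors.append("procedure vs. system")
    return errors

# e.g. the crew follows the procedure, but the procedure does not match the
# actual system need:
errs = classify_errors("close_valve", "open_valve", "open_valve")
# -> ["crew vs. system", "procedure vs. system"]
```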

The premise of internal reference points is that the error has occurred in the module for
which there was a correct input but an incorrect output (i.e. an error will be attributed to
action execution, the A element, if the action was incorrect given a correct problem solving
process, the D element).
Currently IDAC is implemented as the HRA module of the dynamic PSA computer code
Accident Dynamic Simulator (ADS) with its embedded models for a nuclear power plant
(which include the Relap5 thermal-hydraulic simulation code). ADS uses the Discrete
Dynamic Event Tree (D-DET) approach (Amendola 1988; Acosta, Siu 1991) to generate
possible time-dependent scenarios based on dynamically changing states of various
systems and the operator response. The overall probability of a scenario is calculated as
the product of the conditional probabilities of the branches that constitute the scenario, and
the operator responses are among these branches.
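The quantification rule can be sketched as follows; the branch probabilities used in the example are illustrative values:

```python
# Sketch of the D-DET quantification rule: the probability of a scenario is the
# product of the conditional probabilities of the branches along its path.
# The branch probabilities below are illustrative values.

def scenario_probability(branch_probs):
    """Multiply the conditional branch probabilities of one scenario path."""
    p = 1.0
    for bp in branch_probs:
        p *= bp
    return p

# e.g. two system branchings and one operator-response branching:
p = scenario_probability([0.9, 0.2, 0.5])  # -> 0.09 (up to rounding)
```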

1.2.10 PROCOS
PROCOS is a probabilistic cognitive simulator for HRA studies. It has been developed at
Politecnico di Milano in Italy to address human errors in highly procedural tasks, such as
those of an operator involved in the commissioning phase in the control room of an
ammonia-urea plant.
This simulator is based on a “semi-static approach”: it provides a quantitative result,
comparable to those of traditional, static first generation methods, but it also takes into
account a cognitive analysis of the operator.
PROCOS differs from traditional human reliability methods in the way human actions are
represented, because it considers the recovery phase as well. In fact there are two different
flow charts: one to simulate the operator's behaviour in normal operations and a second one
for the recovery phase.
The simulator does not imply the development of a detailed model of the operator-context
interaction; the context is taken into account mainly through the use of Performance
Shaping Factors, as proposed in traditional HRA methods.

The Cognitive model of the operator


The cognitive model of the operator is based on a combination of SHELL (Edwards, 1988)
and PIPE (Cacciabue, 1998). The two models have been combined as already proposed in
the AITRAM project (see above paragraph on AITRAM).
PIPE represents the process of human cognition according to the definition of “Minimal
Modelling Manifesto” given by Hollnagel:
“A Minimal model is a representation of the main principles of control and regulation that are
established for a domain-as well as for the capabilities and limitations of the controlling
system” (Hollnagel, 1993).
SHELL has been used for organizing the information regarding the context and the
interactions between the controller and other members of the ATM team or the pilots
(Liveware), the equipment (Hardware), the procedures (Software) and so on.
The outputs given by the cognitive model are Error Types (and correct actions), defined on
the basis of the cognitive process that leads to their occurrence. The taxonomy chosen to
describe the various Error Types is taken from Wickens (Wickens C., 1992) and consists of:

- Error in Perception: errors regarding issues related to the detection and understanding
of information;
- Error in Memory: errors related both to short-term storage and to more permanent
information based on the person's training and experience;
- Error in Decision: errors related to the judgement and decision making processes
required of the operators;
- Error in Response: it is sometimes possible to carry out actions that have not been
intended; a common example is a slip of the tongue.
The error types have been linked with the error modes through a correlation matrix that
is specific to the task to which the error type and the connected error modes refer. An
example is shown in Table 1-2.

                 ERROR TYPE
ERROR MODE       Perception   Memory   Decision   Response
Not Done         Weak         weak
Other Than       Medium       Strong   medium
...              Weak         medium
...              Strong       medium
Part of          strong

Table 1-2: Correlation matrix between Error Mode and cognitive Error Type

The taxonomy used for the recovery phase has been proposed by Kontogiannis
(Kontogiannis, 1997), breaking down the error handling process into three phases:
detection, localisation (or explanation) and correction.
- Error in Detection: the error occurs in the phase in which the original error should be
detected. Detection can take place at different stages of the task execution:
o Detection in the outcome stage
o Detection in the execution stage
o Detection in the planning stage
- Error in Localisation or explanation: after having detected the error, the operator tries
to identify its causes but makes a mistake.
- Error in Correction: after having detected the error and identified its causes, the
operator develops and executes an action in order to recover the error but makes a
mistake.

Edition Number: Final Draft Page 43


Architecture of the simulator
The basic architecture of the simulator comprises the Operator Module, the Task Execution
Module and the Human Machine Interface Module.

Figure 1-21: Architecture of PROCOS

- the Operator Module consists of the cognitive flowcharts for the action execution and
recovery phases, plus the correlation matrix between Error Type and Error Mode. The
critical underlying feature of this module is the mathematical model for the decision
block criteria of the flowcharts.
- the Task Execution Module refers to the procedure that has to be simulated. In the
first version of PROCOS this module was based on the Event Tree.
- the Human Machine Interface Module is made up of tables describing the hardware
state and its connection with the operator actions (tasks executed or error modes
committed).

The Inputs required for the simulation process are:


- Performance Shaping Factors affecting the task to be simulated (PSFs or PIFs);
- Hardware involved in the execution of the task and its possible state;

- Steps of the task (Task Analysis);
- Possible error modes to be considered.
The main output is a probability value for the operator actions identified as critical, as well
as a probability value for the corrective action in the recovery phase.
The architecture of the simulator is centred on the cognitive flowchart. A cognitive flowchart
is a diagram of decision blocks through which it is possible to represent the succession of
cognitive functions used by the operator in order to execute an action.

Decision blocks criteria


The mathematical model for the decision block criteria of the flowchart is the main critical
feature of the operator module of the simulator.
Each decision block has two possible exits: “Yes” and “No”. The exit process is stochastic
and depends on the PSF values and the influence they have on each decision block.
If we indicate with X the possible outcome of a decision block, X is a Bernoulli variable.
The following values are associated with X:
Yes → X = 1
No → X = 0
Then the probability density function fX(x) is equal to:

fX(x) = fX(x; p) = p^x · (1 − p)^(1−x)   for x = 0 or x = 1
fX(x) = 0                                otherwise              (1.1)

where 0 ≤ p ≤ 1 and q = 1 − p.
The probability of having “Yes” as the exit of the block, P(X = 1), is equal to p, while the
probability of having the “No” exit, P(X = 0), is equal to q.
In order to calibrate each decision block, the value of p, the success probability of the
cognitive process in the block, has been expressed as a function of the PIFs involved in the
block (thus also in order to evaluate the influence of the context on the cognitive process).
The SLIM method has then been chosen (Wickens, 1992), in particular the expression that
relates the Human Error Probability (HEP) to a Success Likelihood Index, which is a
logarithmic function of the PSFs involved (formula 1.2), since it is “generally accepted that

changes in human responses induced by changes in external conditions can be described by
a logarithmic relationship” (Fujita & Hollnagel, 2004):

log10(HEP) = a · SLI + b        (1.2)

where:
HEP → Human Error Probability
SLI = f(PSF) → Success Likelihood Index
a, b → parameters

The SLI index is defined as follows:

SLI = Σ(i=1..Nj) (wi · ri)        (1.3)

where:
wi → normalised weight of the i-th PSF for the cognitive process of the j-th block
ri → value of the i-th PSF
Nj → number of PSFs for the j-th block

and Σ(i=1..Nj) wi = 1

Figure 1-22: Mathematical relation between HEP and SLI

In the first application of PROCOS, for each decision block the HEP value has been taken
from the THERP data tables (Swain and Guttmann, 1983), chosen for an error type
representative of the cognitive aspect described in each decision block. The value of the
median has been used in order to calculate the two parameters a and b from formula
(1.2), in correspondence with a mean SLI value (SLImean) for the nominal working condition
(the central value of the interval for each PSF involved). The second condition was to
consider SP = 0 for SLI = 0 as a boundary condition.
In this way it has been possible to define a and b for each block.
0 = 1 − 10^(a·0 + b)  →  b = 0              (1.4)
SPTHERP = 1 − 10^(a·SLImean)  →  a          (1.5)
In this way it is therefore possible to determine the probability of each exit from the block
using the SLI index:

q = 1 − p = 1 − SPblock        (1.6)

1 − SPblock = HEPTHERP        (1.7)

At the beginning of a simulation process, the value and the weight wi of each PSF are
extracted as random variables from uniform distributions over the intervals [e, f] and
[winf, wsup] respectively.
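The calibration and sampling steps above can be sketched as follows; this is a minimal illustration, where the PSF ratings, the weights and the THERP anchor value (HEP = 0.003 at SLImean = 5) are assumptions made for the example, not taken from the actual PROCOS calibration:

```python
import math
import random

def calibrate_a(sp_therp, sli_mean):
    """Solve SP_THERP = 1 - 10^(a * SLI_mean) for a (eq. 1.5, with b = 0 from eq. 1.4)."""
    return math.log10(1.0 - sp_therp) / sli_mean

def block_exit(psf_values, psf_weights, a):
    """Stochastic "Yes"/"No" exit of one decision block (eqs. 1.1-1.3, 1.6-1.7)."""
    total_w = sum(psf_weights)
    # eq. 1.3: SLI as the weighted sum of PSF ratings (weights normalised to 1)
    sli = sum((w / total_w) * r for w, r in zip(psf_weights, psf_values))
    hep = 10.0 ** (a * sli)   # eq. 1.2 with b = 0
    p = 1.0 - hep             # success probability SP of the block
    return "Yes" if random.random() < p else "No"

# Assumed anchor: a THERP median HEP of 0.003 at the nominal SLI_mean of 5
a = calibrate_a(1.0 - 0.003, 5.0)
exit_label = block_exit([5.0, 5.0], [0.6, 0.4], a)  # "Yes" with probability 0.997
```

Note that the calibrated parameter a is negative, so a higher SLI yields a lower HEP, consistent with eq. 1.2.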

The strong point of this simulator is its medium-low application complexity: it is considerably
easier to apply than the other quantitative methods in the literature. Furthermore, PROCOS
can be applied to many different fields with little effort to make the necessary changes.

1.3 Summary of the Chapter
The main elements analysed in this overview of dynamic risk modelling in Human Reliability
analysis that focused on Cognitive simulators is presented in Table 1-3. Each one of the
methods presented is classified according to some criteria:
- The Model for human- environment interaction
- Application complexity
- If it is Quantitative or Qualitative
- Cognitive model for the operator
- If it allow interaction between operators
- Field of Application
Furthermore, Table 1-4 provides a comparison among the various methods: for each of
them the strong points, the weaknesses and the opportunities in relation to possible
application within the ConOps concept are emphasized. As far as ConOps is concerned,
what is considered is the capability of the cognitive simulator to be used as a supporting tool
to carry out Human Reliability Analysis for the use cases proposed within the ConOps
framework.
Of the nine cognitive simulators reviewed besides PROCOS, only five are capable of
providing quantitative results suitable for a risk assessment application.
We can therefore conclude that only the cognitive simulators able to provide quantitative
results are of interest for the possible applications of HRA related to ConOps; among the
ones analysed above, apart from PROCOS, five cognitive simulators are able to provide
quantitative results:
- CES, COSIMO, MIDAS, TOPAZ, IDAC

Of these five:
o CES and COSIMO do not have a model for the interaction between the operator and
the external environment;
o TOPAZ and MIDAS do have such a model but do not have a model for the interaction
among operators;
o IDAC has both models; however, the interaction with the external environment is
based on a simulator of a nuclear power plant and its possible responses to different
actions. Therefore adapting the method to the ATM case study could be quite
expensive.

Table 1-3: Summary Table for the Cognitive Simulators analysed

Method (year)  | Human-environment interaction model | Application complexity | Quantitative/Qualitative | Cognitive model for the operator | Interaction between operators | Field of application
PROCRU (1980)  | Yes  | Medium-High | Qualitative              | Sequential        | Yes    | Aviation
CES (1987)     | No   | High        | Qualitative/Quantitative | Cyclic            | No     | Nuclear
COSIMO (1992)  | No   | High        | Qualitative/Quantitative | Cyclic            | No     | Nuclear
MIDAS (1993)   | Yes  |             | Quantitative             | Sequential        | No     | Aviation
SYBORG (1995)  | Yes  | Medium-High | Qualitative              | Cyclic            | Yes    | Nuclear
TBNM (2002)    | No   |             | Qualitative              | Sequential        | Yes    | Nuclear
AITRAM (2002)  | Yes  | Medium      | Qualitative              | Sequential        | Yes    | Aviation
TOPAZ (2000)   | Yes  | High        | Quantitative             | Cyclic            | No     | Aviation
IDAC (2004)    | Yes  | High        | Quantitative             | Cyclic/Sequential | Yes    | Nuclear
PROCOS (2004)  | Yes* | Medium/Low  | Quantitative             | Sequential**      | Yes*** | Industrial

Notes:
- Yes*: PROCOS does have a model for the interaction between operator and environment;
however, it is quasi-static, which means that the behaviour of the external plant is not
simulated but is taken into account using:
o a limited set of states into which the equipment can be turned;
o an explicit relation between the action outcomes (correct execution or error
modes) and the equipment status modifications (the relation has been derived
from the HAZOP analysis).
- Sequential**: The cognitive model is based on an information processing approach;
however, it also comprises a cognitive model for the possible recovery phase of an action.
- Yes***: Interaction between operators is taken into account through the use of a part of
the cognitive flowchart especially dedicated to communication processes.

Table 1-4: SWOT Analysis of the simulation models in relation to applications within the ConOps concept

CES (1987)
Strong points:
- It can provide an objective means of distinguishing which event scenarios are likely to be
straightforward to diagnose, which require longer diagnosis, and which can lead to human
error.
- It can implement both quantitative and qualitative analysis.
- It can be used to predict human errors by estimating the mismatch between cognitive
resources and the demands of the particular problem-solving task.
Weaknesses:
- It presents a quite high complexity of application, since it also requires a simulation model
of the plant the operator has to interact with.
- It is not able to analyse the interaction between two or more operators.
- It cannot analyse erroneous actions produced by communication.
Opportunities within ConOps:
- CES has been developed for simulating how people form intentions to act in a nuclear
power plant during emergency conditions. It is impossible to adjust the method to the ATM
domain because CES cannot simulate the interaction between operators (it does not include
a communication module).

PROCRU (1980)
Strong points:
- It comprises a communication model.
- It permits the investigation of questions concerning the impact of procedural and system
design changes on crew performance and the safety of commercial aircraft.
- It presents a model for human-environment interaction.
Weaknesses:
- It presents a high complexity of application, since it is a closed-loop system model
incorporating sub-models for the aircraft and for the approach and landing aids provided by
ATC.
- It is focused on the cockpit (flying pilot and pilot not flying); it does not consider the ATCO
point of view.
- It implements a qualitative analysis but has no computational section: it cannot be used to
make numerical estimates of human error probabilities.
Opportunities within ConOps:
- PROCRU is a simulation for the aircraft crew, but it could be difficult to adjust it in order to
simulate a crew of air traffic controllers: its main focus should be shifted from the aircraft
crew to the ATCO. Furthermore, it does not permit a quantitative analysis.

COSIMO (1992)
Strong points:
- It can implement both quantitative and qualitative analysis.
- It can be applied both to nuclear power plants and to the air traffic control room, but only
with a single operator at work.
Weaknesses:
- It presents a high complexity of application, since it requires a simulation model of the
plant the operator has to interact with.
- It cannot analyse the interaction between two or more operators.
Opportunities within ConOps:
- As COSIMO is currently structured, it cannot analyse erroneous actions produced by
communication. In order to adjust COSIMO to the specific needs of ConOps, a model of the
interaction between operators should be added.

MIDAS (1993)
Strong points:
- It can implement both quantitative and qualitative analysis; statistical results can be
obtained.
- It presents a model for human-environment interaction.
- It is modular, with the user able to specify which modules are active.
Weaknesses:
- It is difficult to use and has no integrated user interface.
- There is a lack of validation/verification of the models.
- It presents an extremely slow simulation speed.
Opportunities within ConOps:
- Even if MIDAS could seem applicable to many fields, it cannot be adapted easily to
ConOps because it does not simulate communication processes between two or more
people.

SYBORG (1995)
Strong points:
- It exhibits two interfaces, one for the interaction of the human with the machine (HMI) and
one for the group interaction (Human-Human Interaction, HHI). It is therefore well able to
describe interaction among members of the team.
Weaknesses:
- It implements only qualitative analysis; there is no computational section, therefore it
cannot be used to make numerical estimates of human behaviour.
- It presents a high complexity of application, since it requires a simulation model of the
plant the operator has to interact with.
Opportunities within ConOps:
- SYBORG is tailored to a specific application (nuclear power plant) and, in order to be
used, it needs the input coming from the plant simulator for which it has been built. It is
therefore very difficult to adjust the method to other fields of application.

TBNM (2002)
Strong points:
- It is able to analyse the interaction between two or more operators; it comprises a
well-structured team model.
- It is the first to try to determine how the emotions personnel experience when dealing with
difficult nuclear power plant events affect attention, thought, action and utterances.
Weaknesses:
- It implements only qualitative analysis; there is no computational section, therefore it
cannot be used to make numerical estimates of human behaviour.
Opportunities within ConOps:
- It was developed for nuclear applications and it is not known how it could be applied to
other fields of study.

AITRAM (2002)
Strong points:
- It exhibits a model for human-environment interaction.
- It integrates Human Factors and technical competency. It is suitable for evaluating the
effectiveness of the learning process by developing an advanced training system for
aeronautical maintenance technicians.
Weaknesses:
- It implements only qualitative analysis; there is no computational section, therefore it
cannot be used to make numerical estimates of human error probabilities.
Opportunities within ConOps:
- Since the application field of the method is the human factors maintenance industry, it is
not simple to adjust AITRAM to the specific needs of ConOps. In order to make numerical
estimates of human behaviour, a computational section should be added.

TOPAZ (2000)
Strong points:
- It can implement both quantitative and qualitative analysis.
- It presents a model for human-environment interaction.
- It can be used to identify hazards, to combine hazards into a risk framework, to evaluate
risk and to identify potential mitigating measures to reduce risk.
Weaknesses:
- It presents a very high complexity both in application and in the analysis of the results.
- It is not clear what type of data is usable for the calibration of the simulator.
Opportunities within ConOps:
- It is probably possible to adjust the method to the specific needs of ConOps, but it is not
clear how, because TOPAZ is based on a very complex structure. The simulator can be
used for analysing scenarios; however, a very high level of detail is required in the scenario
description. It is therefore not clear whether this is compatible with the “tactical” level of
detail at which the task analysis is currently developed within ConOps.

IDAC (2004)
Strong points:
- It exhibits a model for human-environment interaction.
- It can implement both quantitative and qualitative analysis.
Weaknesses:
- It presents a very high complexity both in application and in the analysis of the results.
- The causal models used by IDAC are at a preliminary stage and they are still not
Opportunities within ConOps:
- Although IDAC was developed to predict quantitatively the likely response of nuclear
power plant control room operating crews in accident conditions, maybe it could be adjusted
to the specific
IDAC adequately supported on
- It includes a crew model need in ConOps. However
(2004) theoretical or experimental
of three types of operators at the moment the
ground;
and characterizes the simulator works only if
There is not an explicit
interaction in terms of coupled with a code that
representation of the
communication and simulates accident
impact of memory of the
coordination. scenarios and generates
past on future actions of
information about the
the operator;
external world in a nuclear
power plant.
The behaviour of the
external plant is not
simulated but is taken into
PROCOS is adaptable to
- It presents a quasi-static account using a limited set
many field of study and
model for human of the states in which the
then it is not difficult to
environment interaction; equipment can be turned
arrange to the specific
- It takes into account the and an explicit relation
need of ConOps.
interaction between between the action
Its model is relatively
operator through the use outcomes and equipment
simple and easy to be
PROCOS of part of the cognitive status modifications;
communicated to Expert of
(2004) flowchart especially Even for Procos the
the field of analysis
dedicated to Cognitive simulator has
(namely ATC) even if they
communication been validated only
have no theoretical
processes; through the use of Expert
background on Human
- It presents a medium/low Judgment and it is based
Reliability analysis and
complexity of application. on model of cognitions that
Probabilistic Safety
are at a preliminary stage
Assessment.
therefore not adequately
supported on an
experimental ground.

2. USE OF COGNITIVE SIMULATION FOR APPROACHING HUMAN
RELIABILITY ANALYSIS: LINK TO EUROCONTROL ACTIVITIES
EUROCONTROL has been working for many years on the development of an operational
concept able to identify the functions and processes, their corresponding interactions and
information flows, and the concerned actors with their roles and responsibilities. The final
outcome is the ongoing work within the OATA project named Concept of Operations
(ConOps) for 2011. The purpose of the Concept of Operations for 2011 is to describe in
sufficient detail a European ATM system envisaged in 2011, so that users and providers alike
may identify their system requirements and associated business cases. Similarly, the concept
will provide a basis for identifying the need for research and development.
The present deliverable provides an overview of the possible links related to the use of a
cognitive simulator within ConOps, by investigating the future operational concept using
preliminary results of the Integrated Risk Picture currently developed within
EUROCONTROL. Furthermore, the work aims at evaluating the potential use of the current
human reliability approach in EUROCONTROL, HERA-Predictive, for concept evaluation
(e.g. by analyzing the contributing factors to human error observed in incidents, or by making
use of experiences of approaches developed in other industries, like the CAHR method).

2.1 Link with the ConOps Framework

ConOps for 2011, as already said, provides a key input for the OATA project. The logical
architecture in OATA is developed using use cases, which are then realised through a Unified
Modelling Language (UML) process. The roles and responsibilities of the actors involved in
each use case have been identified in order to make sure that the use cases completely
capture the interactions between the concerned actors and the ATM system. The operational
context in which the actors and the system interface is provided as an important input. The
use cases also serve to provide a more detailed elaboration of particular aspects of the
scenarios, especially "what-if" situations (the alternative flows). The scenarios and use cases
shown in Annex I provide representative examples.

The use cases place themselves within a specific ConOps context: the ATM Process Model.
The process model has been developed because it presents some benefits as far as the
integrity and consistency of the approach are concerned, in particular:

Edition Number: Final Draft Page 53


- The process model covers the full scope of ATM;
- it relates directly to the ConOps, Scenarios and Use Cases;
- it is possible to map and check the Logical Architecture against the process model;
- Operational Improvements and Enablers and the Performance view can also relate
easily to it;
- economical aspects related to the ATM "Value Chain" within the Aviation Industry can
also be related;
- it can be a candidate to be used as a reference either for Validation or for Safety.

The overview of the ATM Process Model is presented in Figure 2-1.

                      Strategic Phase   Pre-Tactical Phase   Tactical Phase

Aircraft Operator            1                  2                  3
Airport Operator             4                  5                  6
Airspace Management          7                  8                  9
Air Traffic Control         10                 11                 12
ATFCM                       13                 14                 15

Figure 2-1: ATM Process model (Eurocontrol ATM Operational concept)

In particular, Box number 12 in Figure 2-1, as an example, focuses on the Air Traffic Control
tasks at the Tactical Level. It is presented in more detail in Figure 2-2, where the Air Traffic
Control activities in the so-called "Day of Operation" phase are displayed.

[Diagram: the tactical "Planning, Coordinating, Monitoring, Reacting and Separation
Assurance" process, with its inputs (e.g. filed flight plans, the airspace use plan, flow
measures, airspace reservations, DAPs from aircraft, sector configuration plans and the
real-time air picture) and outputs (e.g. instructions and clearances to aircraft and ground
vehicles, conflict-free trajectories, transfer of control, and updated sector configurations
and en-route sequences).]
Figure 2-2: ATM Process model Air Traffic Control at a Tactical Level

If we then focus on the process highlighted in light blue, we are able to detail more precisely
the exact area on which the connection with the activity of the present project (Dynamic Risk
Modelling) can be focused (Figure 2-3).

The use cases that have been selected as examples, in order to assess their level of detail
and their compatibility with a possible cognitive simulator model, are placed within the sub-
processes of the activities that refer to Requests, Information, Instructions and Clearances
(Figure 2-3).

[Diagram: detail of the "Planning, Coordinating, Monitoring, Reacting and Separation
Assurance" process of daily operations, with inputs such as Aircraft Operator requests,
deviations from clearance, target off-block times, DAPs from aircraft, updated flow
measures, updated sector configurations and the real-time air picture, and outputs such as
transfer of control, instructions to ground vehicles, instructions and clearances to aircraft,
advisories and information for aircraft, and conflict-free trajectories.]
Figure 2-3 : Processes of Daily Operations carried out by ATCO

Those sub processes are:


- Monitor and react to traffic management tools
- Monitor separation between flights
- Monitor and react to compliance monitoring tools
- Issue clearances and instructions as appropriate
- Provide advice and information as required by pilots
- Coordinate decisions with adjacent sectors
- Match stand management plan to appropriate flights
The use cases are structured as follows.
Example: Handle Aircraft Landing:
o Scope :
- System, black-box. System means an Overall ATM/CNS Target Architecture
compliant system
o Level
- User goal
o Summary

- This Use Case, for instance, describes how a Tower Runway Controller uses the
System to control the landing of an aircraft. It starts when the intermediate
approach phase is completed and the aircraft is ready for final approach, and
ends when the Tower Runway Controller has ensured that the aircraft has vacated
the runway
o Actors
- Description of the main actors involved
o Preconditions
- Scenario inputs to the analysis
o Post conditions
- Possible success end states
- Possible failure end states
o Definitions
- List of the main terms and abbreviations used
o Trigger
- Elements that trigger the use case events (e.g. the Use Case starts when the
System detects that the aircraft is on final approach)
o Main Flow
- Main path, or nominal path that should be followed by the chain of events that
lead to a success end state
o Alternative Flow
- Possible deviations from the nominal path
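The use-case skeleton above can be represented, purely for illustration, as a small data structure. All field names and example values here are invented for this sketch and are not the OATA schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    """Skeleton of a ConOps use case as listed above; a sketch only."""
    name: str
    scope: str
    level: str
    summary: str
    actors: List[str]
    preconditions: List[str]
    success_end_states: List[str]
    failure_end_states: List[str]
    trigger: str
    main_flow: List[str]                        # nominal path
    alternative_flows: List[List[str]] = field(default_factory=list)

# Illustrative content loosely following the "Handle Aircraft Landing" example.
uc = UseCase(
    name="Handle Aircraft Landing",
    scope="System, black-box",
    level="User goal",
    summary="Tower Runway Controller uses the System to control a landing",
    actors=["Tower Runway Controller", "Flight Crew"],
    preconditions=["intermediate approach phase completed"],
    success_end_states=["aircraft has vacated the runway"],
    failure_end_states=["landing not completed"],
    trigger="the System detects that the aircraft is on final approach",
    main_flow=["clear to land", "monitor landing", "confirm runway vacated"],
)
```

Such a record makes explicit which parts of the template (main flow, alternative flows, end states) a simulator would need to consume.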
Within the ConOps framework, a model able to provide a quantitative reliability analysis of
use cases can provide useful inputs to the ATM Process Model. Cognitive simulators in
general, and PROCOS in particular, can constitute a useful tool for carrying out a Human
Reliability Analysis (HRA) as far as the use case tasks are concerned. This, in turn, can
constitute an input model for decision-making procedures and for wider dynamic risk
assessment methods able to take into account different aspects of the possible ATM
scenarios. PROCOS could in fact provide quantitative results as far as the occurrence of
deviations (and therefore failures) is concerned, simulating different inputs coming from the
scenarios affecting the ATCO tasks and giving a quantitative base for evaluating their impact
on human actions.

2.2 The use of the cognitive Simulator PROCOS and the HERA-Predictive
approach

Within EUROCONTROL, Human Reliability Analysis has already been carried out with some
"in house" and ad hoc methods. A more systematic approach is under development to make
better use of the incident analysis data collected with the HERA retrospective tool. This
approach, called HERA-Predictive, keeps the taxonomy and qualitative structure of HERA
retrospective and complements the data collected with a statistical approach, which allows
using the data in predictive safety assessments (Isaac, Van Damme & Sträter 2004). The
approach is an adaptation to the ATM environment of the CAHR approach developed in the
nuclear domain (Sträter 2000). Currently this approach is being further developed under the
heading "Virtual Advisor", as the approach should support safety assessments as a kind of
virtual expert. The following outlines how the HERA-Predictive approach works in principle,
based on the retrospective analysis of events.

Regarding the structure of the prospective and retrospective HERA approach, a research
project has been set up at EUROCONTROL that reviewed the theoretical and practical
literature to determine the best conceptual framework upon which to base an ATM incident
analysis tool. The conceptual framework chosen is that of human performance from an
information processing perspective (Shorrock, Kirwan 2002; Isaac et al., 2003). The
technique and the related taxonomy are model-based. A model in fact “allows causes and
their inter-relations to be better understood. An error model provides an ‘organizing principle’
to guide learning from errors. Trends and Patterns tend to make more sense when seen
against the background of a model and more ‘strategic’ approaches to error reduction may
arise, rather than short term error reduction initiatives following each single error event.”
(Shorrock et al 2003).

The main purposes of the HERA (retrospective and prospective) classification of human error
in ATM are:

“(i) Incident investigation - To identify and classify what types of error have occurred when
investigating specific ATM incidents (by interviewing people, analyzing logs and voice
recordings, etc.).

(ii) Retrospective incident analysis - To classify what types of error that have occurred within
present ATM systems on the basis of incident reports; this will typically involve the collection

of human error data to detect trends over time and differences in recorded error types
between different systems and areas.

(iii) Predictive error identification - To identify errors that may affect present and future
systems. This is termed Human Error Identification (HEI). Many of the classification systems
in this review are derived from HEI tools.

(iv) Human error quantification - To use existing data and identified human errors for
predictive quantification, i.e. determining how likely certain errors will be. Human error
quantification can be used for risk assessment purposes.” (Shorrock et al 2003).

In order to exploit the data for prospective assessment, the HERA-Predictive approach
(Isaac, Straeter, Van Damme 2004) was designed based on the experiences of using event
data for safety assessment in the nuclear domain (Straeter 2005). This approach should
overcome the current situation in which, as far as Human Error Quantification is concerned,
the material available for prediction is mostly expert judgement. The lack of ad hoc data for
the quantification process is therefore one of the main issues affecting HRA applications in
Air Traffic Management.

HERA-Predictive complements data gathered in simulator studies, because there is a
practical limit on the amount of human performance data that can be collected through
virtual-reality-based studies. Here a simulator of the system and of the real situation to be
handled needs to be used, and the performance of the controller, or of the team of Air Traffic
Controllers, has to be recorded. As some human error probabilities are in fact in the order of
E-03 or even E-04, the number of trials needed is extremely high.
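To make this limit concrete: for a Bernoulli error process with probability p observed over n trials, the relative standard error of the observed frequency is roughly sqrt((1 - p)/(n p)), so small probabilities demand very many trials. A minimal sketch (illustrative only, not part of the HERA tooling):

```python
import math

def trials_needed(p: float, rel_err: float) -> int:
    """Trials needed so the relative standard error of a Bernoulli
    frequency estimate, sqrt((1 - p) / (n * p)), stays below rel_err."""
    return math.ceil((1.0 - p) / (p * rel_err ** 2))

# An error probability of 1E-03 estimated to within 10% relative error
# already calls for roughly a hundred thousand recorded trials.
n = trials_needed(1e-3, 0.10)
```

This is why collecting such data purely from human-in-the-loop simulator sessions is impractical.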

The development of a numerical simulator able to represent the performance of the controller,
or of the team of controllers, in a specified context can provide a useful means for gathering
data and analysing the safety performance of a system. It could in fact reproduce a sufficient
number of trials for obtaining an estimation of Human Error Probabilities (HEPs).
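The basic idea, estimating an HEP as the observed failure frequency over many simulated trials, can be sketched as follows. The trial function here is a stand-in Bernoulli draw, not the PROCOS cognitive model:

```python
import random

def estimate_hep(simulate_trial, n_trials: int, seed: int = 0) -> float:
    """Run n_trials simulated task executions and return the observed
    Human Error Probability (failures / trials)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_trials) if simulate_trial(rng))
    return failures / n_trials

# Stand-in for a cognitive-simulator run: a plain Bernoulli failure with a
# known probability, so the estimate can be checked against the input.
true_p = 2e-3
hep = estimate_hep(lambda rng: rng.random() < true_p, n_trials=200_000)
```

In the real application, `simulate_trial` would be one full run of the cognitive flow chart against a task analysis.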

The cognitive simulator PROCOS, developed within Politecnico di Milano for supporting
human reliability analysis in complex operational contexts, comprises two cognitive flow
charts reproducing the behaviour of a process industry operator. The flow charts are based
on a model within an information processing perspective very similar to the one underlying
the HERA classification. Therefore, it has been possible to modify the simulator in order to
take into account a more detailed insight into the context of analysis (ATM) and obtain
suitable data for a possible quantification process. In the following paragraphs, the HERA framework

will be briefly presented; then the simulator will be outlined, showing the main features and
commonalities by which input coming from analyses performed within the HERA framework
can be used in the simulator in order to obtain an estimate of the human error probability for
some critical ATC tasks.

In order to classify and analyze errors in HERA the main factors to be described are shown
in Table 2-1.

Table 2-1: Main Factors to consider for analyzing human error with HERA (see e.g., Shorrock et al 2003).
Taxonomy Description

Error

Error Type What keyword can be applied to the error (including rule
breaking and violation), in terms of timing, selection or quality
of performance or communication?

Error Detail (ED) What cognitive process was implicated in the error?

Error Mechanism (EM) What cognitive function failed, and in what way did it fail?

Information Processing Levels How did the error occur in terms of psychological
(IPs) mechanisms?

Context

Task What task(s) was/were being performed by the controller(s)
at the time that the error occurred?

Information & Equipment What was the topic of the error, the equipment used in the
error or the information involved? (e.g. what did the controller
misperceive, forget, misjudge, etc.?) What HMI element was
the controller using?

Contextual Conditions (CCs) What other factors, either internal or external to the controller,
affected the controller’s performance?
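The factors of Table 2-1 can be pictured, for illustration only, as one classified-error record. Field names and example values are invented for this sketch and are not the official HERA schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HeraErrorRecord:
    """One classified error, following the factors of Table 2-1."""
    error_type: str             # timing / selection / quality / communication
    error_detail: str           # cognitive process implicated (ED)
    error_mechanism: str        # cognitive function that failed, and how (EM)
    info_processing_level: str  # psychological mechanism (IPs)
    task: str                   # task being performed when the error occurred
    equipment: str              # HMI element / information involved
    contextual_conditions: List[str] = field(default_factory=list)  # CCs

rec = HeraErrorRecord(
    error_type="timing",
    error_detail="late detection of conflicting trajectories",
    error_mechanism="perception - discrimination failure",
    info_processing_level="perception and vigilance",
    task="monitor separation between flights",
    equipment="radar display",
    contextual_conditions=["high traffic load", "sector handover in progress"],
)
```

A collection of such records is what the predictive use of HERA data would draw statistics from.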

The cognitive domains covered by the Information Processing activities considered in the
accident analysis technique are:

- perception and vigilance;

- memory;

- planning and decision making;

- response execution.

Figure 2-4: Enhanced model of human information processing used in HERA (see e.g., Shorrock et al
2003).

The model used as the main skeleton, illustrated in Figure 2-4, is extensively based on the
one proposed by Wickens (1992). The analyst uses HERA (for retrospective accident
analyses) following several steps, each associated with specific flow charts. The steps are:
a. Defining the error type.
b. Defining the error or rule breaking or violation behaviour through a flowchart.
c. Identifying the Error Detail through a flowchart.
d. Identifying the Error Mechanism and associated Information Processing failures
through flowcharts.
e. Identifying the tasks from tables.
f. Identifying the Equipment and Information from tables.
g. Identifying all the Contextual Conditions through a flowchart and tables.

Examples of the flow charts used for identifying the Error Detail can be found in
(Shorrock et al 2003).
The focus of the simulator is mainly on conveying a quantitative result, comparable to those of
a traditional HRA method, while taking into account a cognitive analysis of the operator as well.

As a further step, the simulator considers the evaluation of error management as part of the
overall assessment, from the same cognitive point of view, differing from the way traditional
human reliability methods (e.g. THERP) consider the recovery phase.
The above requirements are satisfied by the elements that need to be identified in the HERA
framework:
- Error Type
- Error Detail (ED)
- Error Mechanism (EM)
- Information Processing Levels (IPs)
- Context, Task, Information & Equipment
- Contextual Conditions (CCs)

The Information Processing Level and the Error Mechanism are embedded in the structure of
the simulator, while the other elements constitute inputs for the simulation runs.
The model used for configuring the flow chart representing the operators is based on a
combination of PIPE (Cacciabue 1998) and SHELL. PIPE represents the process of human
cognition according to the "Minimal Modelling Manifesto" (Hollnagel 1993): "A Minimal Model
is a representation of the main principles of control and regulation that are established for a
domain, as well as for the capabilities and limitations of the controlling system". PIPE is
based, in fact, on four main cognitive functions:
- Perception
- Interpretation
- Planning
- Execution.
The cognitive functions are influenced or triggered by input parameters such as hardware
stimuli and context stimuli. The cognitive path followed through these functions leads
to a response (output). The cognitive process involved makes use of the Memory/Knowledge
Base and the Allocation of Resources of the individual.
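A PIPE-style cognitive path can be sketched as a chain of the four functions, each of which may fail, with the error attributed to the first function that fails. The probability figures below are invented placeholders, not calibrated PROCOS values:

```python
import random

# The four PIPE functions in order; probabilities are illustrative only.
PIPE_SUCCESS = {
    "Perception": 0.995,
    "Interpretation": 0.99,
    "Planning": 0.98,
    "Execution": 0.99,
}

def run_pipe(rng: random.Random) -> str:
    """Walk the cognitive path; return 'correct' or the name of the
    first function that failed (the error is attributed to it)."""
    for function, p_ok in PIPE_SUCCESS.items():
        if rng.random() >= p_ok:
            return function
    return "correct"

# Overall success probability is the product of the stage successes.
p_correct = 1.0
for p_ok in PIPE_SUCCESS.values():
    p_correct *= p_ok
```

The attribution of a failure to a stage is what allows error outcomes to be mapped onto HERA's Error Details and Error Mechanisms.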
SHELL (Software, Hardware, Environment, Liveware-Liveware) (Hawkins 1987) has been
used for organizing the information regarding the context and the interactions between the
controller and other members of the ATM team or the pilots (Liveware), the equipment
(Hardware), the procedures (Software) and so on.
The combination of these two models shows a high number of commonalities with the
cognitive model proposed in HERA. The development of flow charts for representing the

cognitive process is within an "information processing perspective", and it can be a good
approach for case studies where the tasks to be analyzed are highly proceduralized.
The taxonomy chosen for describing the various Error Types is taken from Wickens (1992),
thus it perfectly fits the HERA framework:
- Error in Perception: errors regarding issues related to the picking up and
understanding of information.
- Error in Memory: errors related to both short-term storage and the more permanent
information based on the person's training and experience.
- Error in Decision: errors related to the judgment and decision-making process required
of the operators.
- Error in Response: it is sometimes possible to carry out actions that have not been
intended; an example of this is often referred to as a slip of the tongue.
The inputs required for the simulation process are:
- Performance Shaping Factors affecting the task to be simulated
- Hardware involved in the execution of the task and its possible states
- Steps of the task (task analysis)
- Possible error modes to be considered

The above elements are perfectly in line with the elements previously outlined within the
HERA approach. The task execution module is tailored not on an event tree but on a task
analysis, represented through a flow chart as well. The Performance Shaping Factors
correspond, in the HERA framework, to the Contextual Conditions, and the hardware/software
involved in task execution corresponds to what in HERA is referred to as Information &
Equipment.
The main output of the simulator is a probability value for correct executions or failures in
respect of the ATC tasks identified as critical (with multiple trial generation), as well as a
probability value for the corrective action in the recovery phase. These probability values
depend on the CCs, directly connected to the decision boxes of the flow charts through the
decision block criteria. In this way it is possible to take into account a cognitive point of view
in the Human Error Probability generation, enabling a more formalized connection with the
CCs, which are the key points for identifying organizational corrective or preventive actions.
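One simple way to picture a CC-dependent decision box is a nominal error probability scaled by the active Contextual Conditions. The multiplicative rule and all figures below are assumptions made for illustration; the document does not specify how PROCOS's decision block criteria combine the CCs:

```python
def decision_box_error_prob(base_hep: float, cc_multipliers: dict) -> float:
    """Scale a decision box's nominal error probability by the active
    Contextual Conditions (multiplicative weighting is an assumption)."""
    p = base_hep
    for multiplier in cc_multipliers.values():
        p *= multiplier
    return min(p, 1.0)  # a probability can never exceed 1

# Hypothetical figures: a 1E-03 nominal slip, made five times more likely
# by time pressure and twice as likely by a poor HMI layout.
p = decision_box_error_prob(1e-3, {"time pressure": 5.0, "poor HMI": 2.0})
```

This mirrors how first-generation HRA methods apply performance shaping factors, here attached to individual decision boxes of the cognitive flow chart.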

2.3 Summary of the Chapter
The preliminary study so far has identified the commonalities and the possible areas of
application of the simulator within the ConOps framework, focusing on the ATC module of the
ATM Process Model at the Tactical Level. The method chosen for the HRA analysis is
compatible with the main basis of the HERA approach. The simulator's previous results were
obtained for a case study in the process industry, and they proved to be within the interval
obtained by a traditional HRA quantification method (HEART). A more detailed description of
the results can be found in (Trucco, Leva et al 2004). However, a new calibration for the
simulation process and different evaluation criteria for analyzing its results should be
explored.
The calibration of the parameters needed for the simulation process could be obtained using
data taken from accident analysis; in addition, other operational experience of ATM experts
could be elicited. Herewith a similar approach is used as within the CAHR method, which
calibrated nuclear operational experience with data from the THERP handbook (Straeter 2005).
The exploration of these possibilities is carried out through the trial application, partly as
shown in the next chapters. In particular, the process of interviews and on-field observation
will be described with reference to the specific procedure reported in Annex I. This will lead
to a definition of the related task analysis, the CCs assessment and the link with the
Information and Equipment involved.

3. A QUANTITATIVE ANALYSIS OF SAFETY ISSUES: AN EXAMPLE
OF ONE SIMULATION TOOL

In this section, the applicability of the cognitive simulator developed within the present PhD
at Politecnico di Milano, already presented in the second chapter of the present project, will
be explored, together with its effectiveness in the analysis of one of the ConOps Use Cases.
The level of detail of the use cases will be discussed in comparison with the level of detail
required for the task analysis to be input to the cognitive simulator selected for the trial
application (PROCOS).
Furthermore, the HERA approach, its taxonomy and its HMI level of description will be
presented, in order to carry out the modifications to the simulator necessary to capture the
elements that HERA is able to take into account. The changes performed on the simulator
for the trial application will be presented and discussed in detail, so as to assess the
feasibility of the application and its possible results.

3.1 ConOps Use Cases Analysed Through PROCOS


As already discussed in the previous chapter, ConOps (Concept of Operations) is a
framework aimed at describing a European ATM system in sufficient detail. The use cases
within this concept serve to provide a more detailed elaboration of particular aspects of
possible ATM scenarios, especially "what-if" situations (the alternative flows). The scenarios
and use cases shown in Annex I are representative examples. The ConOps use case
selected for the trial application is "Handle Aircraft Landing" (OATA-P2-WP3 2-1, 2004). The
use case describes how a Tower Runway Controller manages the landing of an aircraft. It
starts when the intermediate approach phase is completed and the aircraft is ready for the
final approach. It ends when the Tower Runway Controller is assured that the aircraft has
vacated the runway.
The use case is set up as a main flow (everything goes as planned), some possible
alternative flows (deviations from the "as planned" path, which do not necessarily lead to
failures) and failure flows (where an error has occurred). The flows are sequences of
subtasks.
The use case constitutes the base for the task analysis to be analysed through the simulator
chosen for the trial application (PROCOS).

3.1.1 The PROCOS-ConOps Task analysis
The structure chosen for developing the task analysis, in order to be compatible with both the
PROCOS inputs and the ConOps framework, is a flow-chart-based task analysis. All the
possible exits of the sub-steps are monitored, and the effects on and from the equipment
involved in the task are considered, up to the level of detail required by the Use Case itself.

The task analysis flow chart shown in Annex II_A has been developed using MS Visio (MS
Visio is also a compatible software for developing the task analysis in UML). The flow chart is
then broken down into records of an Excel table such as the one reported in Annex II_B.
Every task is identified through a synthetic ID code, and every exit of a sub-step or event has
a column (Correct, or Error Type-Error Mode) in which the next sub-step to be linked to is
reported.

It is important to underline that the flow chart for the task analysis must not be confused
with the flow chart developed for the information processing activities of the operator (the
cognitive flow chart). Each decision block of the task analysis flow chart is in fact assigned
a certain exit (correct execution or error mode) according to the run of the cognitive flow
chart, which simulates the actual human execution of the single sub-steps. All the possible
exits of the sub-steps are monitored, and the effects on and from the equipment involved in
the task are considered up to the level of detail required by the Use Case itself.
In this section, a detailed description of the flow chart that depicts the task will be provided.
Furthermore, the assessment of the external events present in the task will be discussed.

3.1.1.1 Structure of the task analysis


The task analysis flow chart is made up of:
- Sub-steps. The exit of a sub-step is either a correct action, which in turn changes
some equipment status and leads to the next step, or an error mode. An error mode
can constitute an irrevocable failure, which is recorded as such and ends the
simulation process, or can be "labelled" as a warning and be followed by other
actions. This is carried out in accordance with what has already been described in the
Use Cases and then detailed by the analysts (HF practitioners and experts of the field
of analysis). The outcome of each sub-step is the exit of the run of the cognitive
simulator, which can result in a correct step or in a certain error type. All the possible
exits of the sub-steps are monitored up to the level of detail required by the Use Case
itself.

- Events and pilot actions. These are a stochastic simulation of the occurrence of an
external event (e.g. weather conditions), of a technical failure that prevents an
aircraft from landing as planned, or of pilot actions. Event outcomes are "Yes" or "No" exits
("Yes" the event takes place, "No" it does not take place). The events are linked to the
sub-tasks that the operators have to undertake in order to respond to the event or, in
any case, as a consequence of the new scenario setting introduced by the event.
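The stochastic treatment of events described above amounts to a simple Bernoulli draw per event. The following is a minimal illustrative sketch, not PROCOS code; the function name and the sampled probability are assumptions:

```python
import random

def event_occurs(probability: float, rng: random.Random) -> bool:
    """Return True ("Yes" exit: the event takes place) with the given probability."""
    return rng.random() < probability

# Example: sample event E1 ("Object on runway", p = 0.5) over many simulation runs.
rng = random.Random(42)
runs = 100_000
occurrences = sum(event_occurs(0.5, rng) for _ in range(runs))
rate = occurrences / runs  # observed frequency, close to the assigned probability
```

Each simulation run draws every event independently in this way, so over many runs the event frequencies converge to the probabilities assigned in Table 3-1.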

Sub-steps
Sub-steps, also called sub-tasks, constitute the human actions within the task. Each
sub-task has to be configured as a "single unit" of human action for which the underlying
cognitive flow chart is compatible. Each possible error type and error mode outcome should
be explored. In accordance with the ConOps use cases, the effects from and on hardware
equipment have not been considered during the development of this project. Each sub-step
is therefore defined by:
- code and description;
- type of action (communication or action triggered by hardware stimuli);
- type of cognitive path required for performing the sub-step (skill, rule or knowledge
task); a "frequent step" type has been added in order to better classify the sub-tasks that are
very frequently performed by the ATCO following fixed rules;
- all possible exits of the sub-step (correct, or error type and error mode) together with the
following step.
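The sub-step attributes listed above can be captured in a small record type. This is an illustrative sketch only; the field names and example values are assumptions, not the actual PROCOS schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SubStep:
    code: str                    # identifier of the sub-step, e.g. "T1"
    description: str
    action_type: str             # "communication" or "hardware-triggered action"
    cognitive_path: str          # "skill", "rule", "knowledge" or "frequent"
    correct_next: Optional[str]  # code of the next sub-step on correct execution
    # error mode -> next sub-step code, or the markers "WARNING" / "FAILURE"
    error_exits: dict = field(default_factory=dict)

# Hypothetical sub-step of the "Handle Aircraft Landing" task.
step = SubStep(
    code="T1",
    description="Issue landing clearance to Pilot B",
    action_type="communication",
    cognitive_path="rule",
    correct_next="T2",
    error_exits={"omission": "FAILURE", "wrong exit communicated": "WARNING"},
)
```

A record like this carries exactly the information the analyst enters through the PROCOS interface shown in Figure 3-2.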



Figure 3-1 Visualisation of the sub-step within the task analysis flow chart.

Figure 3-2: Sub-step definition using the PROCOS interface. Clicking on the "available errors" button, it is
possible to select the Error Type. In the same way it is possible to choose the Error Mode.

Events and Pilot actions


External events and pilot actions (every action in which the ATCO is not the main actor) are
simulated through a classic stochastic model; indeed, for each event of the task PROCOS
asks the analyst to assess the probability of occurrence. During the simulation run, each
event then occurs according to this probability. The assessment of the probability of occurrence
of these events and pilot actions is based on historical data and expert judgement. Table
3-1 reports the list of events and pilot actions of the Use Case together with their
estimated probability values.

Table 3-1: External events and pilot actions within the task

Code | Description | Prob. | Prob. Value | Source
E1 | Object on runway | 0.5 | 0.500000 | Expert judgement
E10 | Pilot rejects the planned exit and requests a different one | 2*10^(-4) | 0.000200 | Expert judgement
E11 | ATCO agrees to pilot request | 0.95 | 0.950000 | Expert judgement
E12 | Pilot lands the aircraft anyway | 1/100000 | 0.000010 | Expert judgement
E13 | Plane B technically able to vacate Malpensa | 1-1/10000 | 0.999900 | Expert judgement
E2 | Aircraft A technically able to vacate Malpensa | 1-1/50000 | 0.999980 | Expert judgement
E20 | Pilot B is able to land | 1-1.54*10^(-4) | 0.999846 | Data from Linate
E9 | Visibility is good | 1-1.17*10^(-2) | 0.988300 | Data from Malpensa (2003-2004)
TP1_E1 | Pilot A, aware of his position, delivers vacation confirmation | 1-1/10000 | 0.999900 | Expert judgement
TP1_E2 | Pilot A has delivered vacation confirmation correctly | 1-1/10000 | 0.999900 | Expert judgement
TP10_E1 | Pilot A communicates vacation confirmation | 1-1/10000 | 0.999900 | Expert judgement
TP10_E2 | Pilot A has communicated vacation confirmation correctly | 1-1/10000 | 0.999900 | Expert judgement
TP11_E1 | Pilot B lands the aircraft safely | 1-1/1000 | 0.999000 | Expert judgement
TP2_E1 | Pilot A, aware of failure (unable to vacate runway), communicates the problem to ATCO | 1-1/100000 | 0.999990 | Expert judgement
TP7_E1 | Pilot B, aware of failure (unable to vacate runway), communicates the problem to ATCO | 1-1/100000 | 0.999990 | Expert judgement
TP8_E1 | Pilot is aware of his position | 1-1/30000 | 0.999967 | Expert judgement
TP8_E2 | Pilot A vacates runway by a runway exit other than planned | 1/2 | 0.500000 | Expert judgement
TP9_E1 | Pilot recovers awareness of his position | 1/10 | 0.100000 | Expert judgement

The simulator should be able to assess the probability of the deviations from the main flow by
means of multiple trials. The task analysis and the sub-steps of which it is composed
therefore constitute a very important input for the simulation process.
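The multiple-trial estimation mentioned above is essentially Monte Carlo sampling over the task flow. The following sketch uses a deliberately tiny, hypothetical two-event task (the probabilities and outcome labels are placeholders, not the actual Use Case values):

```python
import random

def run_trial(rng: random.Random) -> str:
    """Walk a toy task flow: one external event, then one sub-step that may fail."""
    if rng.random() < 0.5:        # E1-like event: object on runway ("Yes" exit)
        return "missed_approach"
    if rng.random() < 0.001:      # hypothetical sub-step error probability
        return "failure"
    return "correct"

rng = random.Random(1)
trials = 200_000
counts = {"correct": 0, "missed_approach": 0, "failure": 0}
for _ in range(trials):
    counts[run_trial(rng)] += 1

# Estimated probability of the deviation path, from relative frequency.
p_failure = counts["failure"] / trials
```

Repeating the walk many times turns the relative frequencies of the task exits into probability estimates, which is exactly what the simulator does at the scale of the full task analysis.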

3.1.1.2 “Manage Aircraft Landing” task analysis


The ConOps Use Case is shown in Annex I. This section provides a more detailed
description of the task, outlining the differences with respect to the Use Case hypotheses.

The main difference between the ConOps Use Case and the task is the presence of the data-link
system. Although the Use Case includes a data-link system, the task selected for
the trial application assumes that the data-link system is not available; therefore the
instructions and the clearances are issued exclusively by voice.



Table 3-2: Example of some sub-steps of the task analysis in table format.

3.1.1.3 “Handle Aircraft Landing” task analysis description


The actors involved in this task are:
- the ATCO, the main actor, who wants to make sure that the aircraft lands and safely
vacates the runway;
- Pilot A (support), the pilot whose aircraft could still be on the runway;
- Pilot B (support), the pilot for whom the landing clearance has to be issued.
As described in the ConOps Use Case, the preconditions required to start the task are:
- the flight is cleared for final approach by the Executive Controller in charge of
establishing the aircraft on final approach;
- the transfer of responsibility between the Executive Controller and the Tower
Runway Controller is completed;
- in particular, the communication contact (voice, according to the hypothesis of the
task) between the Tower Runway Controller and the Pilot is established.
At the beginning of the task two situations could occur:
a. the runway is clear (no object on runway), or
b. the runway is obstructed by “an object” (airplane A).
These two situations are simulated through the event E1 in Table 3-1. We will now describe
how the task proceeds in each of these two cases.



a. Runway is clear (no object on runway)
In this case the ATCO verifies the availability of the runway (verification process) and issues
the landing clearance. The landing clearance has to include the designator of the landing
runway, the runway exit and the associated taxi-in plan.

Figure 3-3: ATCO runway availability verification process.

Pilot B could reject the planned runway exit by requesting a different one (event E10).
If Pilot B does not request a different runway exit, the task continues through the readback-
hearback process. Otherwise, the ATCO has to understand and process the pilot request, and
can then either accept the request or confirm the runway exit proposed at the beginning.
In both cases there will be a readback-hearback process. The task then proceeds with the
event “Pilot B is able to land” (event E20). For the trial application, we have decided against
simulating the case in which Pilot B is not able to land; in that case Pilot B would inform the
Tower Runway Controller, who would instruct the pilot to perform a missed approach. Therefore,
Pilot B lands the aircraft.
If the aircraft is not technically able to vacate, the pilot communicates the problem and the ATCO
verifies that the runway is obstructed. Any error in these steps other than a delay in
communication between pilot and ATCO leads to an irrevocable failure. The correct path in
this case leads to recording that the aircraft has not vacated and is obstructing the runway.
If the aircraft is technically able to vacate, Pilot B can vacate the runway by the planned runway exit
or by a different one.
If Pilot B uses a different exit, the correct path (the ATCO detects the pilot error) leads to
recording that the aircraft has vacated the runway by a runway exit other than planned and to
transferring the communication to the Tower Runway Controller. Any error in these steps means that
the Tower Runway Controller has lost the aircraft; therefore an irrevocable failure has occurred.
If Pilot B vacates the runway by the planned runway exit, the task ends
when the ATCO verifies that the aircraft has vacated the runway as planned. An error in this phase
is a warning, because it does not create any problems for the subsequent ATCO tasks.
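The readback-hearback exchange that recurs in this case can be sketched as a small loop in which each leg of the communication may fail with some probability. All probabilities and the function name below are hypothetical placeholders, not calibrated values:

```python
import random

def readback_hearback(p_pilot_mishears: float, p_readback_slip: float,
                      p_atco_detects: float, rng: random.Random,
                      max_cycles: int = 3) -> bool:
    """Return True if the clearance is correctly closed out within max_cycles exchanges."""
    for _ in range(max_cycles):
        pilot_ok = rng.random() >= p_pilot_mishears       # pilot understood the clearance
        readback_ok = pilot_ok and rng.random() >= p_readback_slip
        if readback_ok:
            return True                                   # correct readback, loop closed
        if rng.random() < p_atco_detects:                 # ATCO hears the error and reissues
            continue
        return False                                      # undetected error propagates
    return False

# Rough frequency of a correctly closed loop under the placeholder probabilities.
rng = random.Random(7)
successes = sum(readback_hearback(0.01, 0.005, 0.9, rng) for _ in range(50_000))
```

The structure mirrors the text: a mishear or a slip only becomes dangerous when the ATCO's hearback also fails, which is why the loop sharply reduces the probability of an undetected communication error.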

b. Runway is obstructed by “an object”


If the runway is initially obstructed by Aircraft A, two situations could occur:
- Aircraft A is technically able to vacate and Pilot A vacates the runway. In this case,
the Tower Runway Controller has to verify that Aircraft A has vacated the runway and
issues the landing clearance to Pilot B. An ATCO error is only a warning, because it
leads to issuing a missed approach clearance instead of a landing clearance. This
means a delay of the aircraft landing, but it is not a critical matter for safety.
- Aircraft A is not technically able to vacate. The Tower Runway Controller has to verify
that Aircraft A has not vacated the runway and issues the missed approach to
Pilot B. An error in this phase is an irrevocable failure.
For the trial application of this project, we have decided against simulating the case in which
the ATCO does not issue the clearance and the Pilot calls the ATCO back to request
instructions.

3.1.1.4 Summary of the task analysis


Summarising the task analysis based on the ConOps Use Case “Handling Aircraft Landing”,
it is possible to outline the possible outcomes that can be considered a correct task and the
possible outcomes that have to be considered a failed task.

Correct task
The possible outcomes of the correct task are:



i. No other object or plane on the runway; Pilot B is issued a landing clearance and
assisted until he vacates the runway.
ii. Pilot A on the runway clears the runway and the ATCO is able to issue the landing clearance
for Pilot B. Pilot B lands the aircraft and is assisted until he vacates the runway.
iii. Pilot A or an object on the runway is unable to vacate the runway. The ATCO timely detects
this and issues a missed approach for Pilot B.
iv. No other object or plane on the runway; Pilot B is issued a landing clearance and safely
lands on the runway. Pilot B is unable to vacate the runway, but the ATCO timely
detects this and takes control of the problem.
v. Pilot A on the runway clears the runway and the ATCO is able to issue the landing clearance
for Pilot B. Pilot B safely lands the aircraft. Pilot B is unable to vacate the runway,
but the ATCO timely detects this and takes control of the problem.
vi. No other object or plane on the runway; Pilot B is issued a landing clearance and
assisted until he vacates the runway. However, Pilot B vacates the runway by an
unplanned runway exit. The ATCO timely detects the problem and takes control of the
situation.

Failed task
The possible outcomes of the failed task are:
i. ATCO irrevocable failure.
ii. Pilot B irrevocable failure.
iii. Pilot B is unable to vacate the runway and the ATCO does not timely detect it.
iv. Pilot B is issued a landing clearance and assisted in vacating the runway. However,
Pilot B vacates the runway by an unplanned runway exit and the ATCO does not detect it.
v. Warning in the readback-hearback process in the aftermath of the missed approach
instruction.
vi. The ATCO does not verify that aircraft B has vacated the runway (warning).

3.1.2 Input required by the PROCOS simulator


PROCOS (Trucco & Leva, 2004) is a probabilistic cognitive simulator for HRA
studies. It has been developed at the Politecnico di Milano, Italy, for the analysis of human
error in highly procedural tasks.
The inputs required for the simulation process are:
o Performance Shaping Factors affecting the task to be simulated (PSFs or PIFs);
o hardware involved in the execution of the task and its possible states;
o steps of the task (task analysis);
o possible error modes to be considered.
The main output is a probability value for the operator actions identified
as critical, together with a probability value for the corrective action in the recovery phase.

The Task Execution Module therefore needs to be built up according to the scenario to be
simulated and the specific task to be analysed.
PROCOS currently accepts input information about the scenario that follows the logic structure
of an event tree. The task analysis should therefore be developed with a configuration able to
match the event tree logic structure.
Furthermore, each step of the task has to be a simple single sub-task configured as a “single unit”
of human action for which the underlying cognitive flow chart is compatible. Each possible
error type and error mode outcome should be explored, and the effects from and on
hardware equipment are of course part of the task analysis as well.
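The event-tree logic structure that PROCOS expects from the task analysis can be illustrated with a minimal nested-branch representation. The tree below is a drastically simplified, hypothetical fragment of the landing task (the labels and two of the probabilities echo Table 3-1, the structure itself is an assumption):

```python
# Each node is (label, p_yes, yes_branch, no_branch); leaves are outcome strings.
tree = ("E1: object on runway", 0.5,
        ("A able to vacate", 0.99998, "land_after_vacate", "missed_approach"),
        ("B lands safely", 0.999846, "correct_task", "failure"))

def outcome_probabilities(node, p=1.0, acc=None):
    """Accumulate the probability mass reaching each leaf outcome of the event tree."""
    if acc is None:
        acc = {}
    if isinstance(node, str):            # leaf: record the path probability
        acc[node] = acc.get(node, 0.0) + p
        return acc
    _label, p_yes, yes_branch, no_branch = node
    outcome_probabilities(yes_branch, p * p_yes, acc)
    outcome_probabilities(no_branch, p * (1.0 - p_yes), acc)
    return acc

probs = outcome_probabilities(tree)      # leaf probabilities sum to 1
```

Structuring the task analysis this way is what allows the simulator to replace any branch probability with the exit of a cognitive flow chart run while keeping the overall bookkeeping of outcomes unchanged.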



3.2 HERA Taxonomy and HMI Description

As already stated in chapter three of the present report, the main purposes of the HERA
classification of human error in ATM are:

“(i) Incident investigation - To identify and classify what types of error have occurred when
investigating specific ATM incidents (by interviewing people, analyzing logs and voice
recordings, etc.).

(ii) Retrospective incident analysis - To classify what types of error that have occurred within
present ATM systems on the basis of incident reports; this will typically involve the collection
of human error data to detect trends over time and differences in recorded error types
between different systems and areas.

(iii) Predictive error identification - To identify errors that may affect present and future
systems. This is termed Human Error Identification (HEI). Many of the classification systems
in this review are derived from HEI tools.

(iv) Human error quantification - To use existing data and identified human errors for
predictive quantification, i.e. determining how likely certain errors will be. Human error
quantification can be used for risk assessment purposes.” (Isaac, Shorrock et al 2003).

The cognitive simulator PROCOS may fit all the above purposes; in particular, a
practical feasibility study showed that:

- As far as the classification used for the scope of incident investigation is concerned,
PROCOS presents an error classification system that fits the HERA classification
system well. The cognitive flow chart of PROCOS has been slightly modified in order to
take into account some peculiarities of the cognitive work domain faced by ATCOs.
Therefore a specific section for communication and one for rule-based frequently
performed tasks have been introduced.

- This in turn allows the use of data coming from past accidents (retrospective
incident analysis) for calibrating the cognitive flow chart used within the simulator, as
shown in a later section of the document.

- Furthermore, the task analysis and the level of detail required within the task analysis
context enable a structured identification of possible human errors within
specific ATCO tasks (predictive error identification) that are fully coherent with
those identifiable using the HERA taxonomy. The use of the simulator then makes it possible
to quantify the probability of occurrence of the identified errors (human
error quantification), making use of real data in the calibration process. The main
quantification process within the simulator, which is to say the cognitive flow chart
decision block criteria, is the main process behind the prediction of human error;
the exits of the decision blocks are selected for each run according to a stochastic
process that employs a combination of real data (a database of accident data
classified using the HERA taxonomy) and of expert judgement, as described in a following
section of the present document.
The cognitive domains covered by the information processing activities considered in the
accident analysis technique within HERA are:
- perception and vigilance;
- memory;
- violations;
- planning and decision making;
- response execution.

The model used for configuring the flow chart representing the operators in PROCOS is
based on a combination of PIPE (Cacciabue, 1998) and SHELL. PIPE represents the process
of human cognition according to the “Minimal Modelling Manifesto” (Hollnagel, 1993): “A
Minimal Model is a representation of the main principles of control and regulation that are
established for a domain - as well as for the capabilities and limitations of the controlling
system”. PIPE is based, in fact, on the four main cognitive functions: Perception,
Interpretation, Planning, Execution.
The cognitive functions are influenced or triggered by input parameters such as hardware
stimuli and context stimuli. The cognitive path followed through these functions leads
to a response (output). The cognitive process involved makes use of the memory/knowledge
base and the allocation of resources of the individual.
The taxonomy chosen for describing the various error types is taken from Wickens (1992)
and thus fits the HERA framework well:
- Errors in Perception: errors regarding the picking up and
understanding of information.
- Errors in Memory: errors related both to short-term storage and to more permanent
information based on the person’s training and experience.
- Errors in Decision: errors related to the judgement and decision making process
required of the operators.
- Errors in Response: it is sometimes possible to carry out actions that were not
intended; an example of this is often referred to as a slip of the tongue.

The only category missing was “Violations”. Therefore the cognitive flow charts have been
modified in order to take into account the error type “violation”. Furthermore, other
modifications have been implemented in order to take into account specific issues related to:
- communication;
- frequently performed tasks, which are mainly on a “rule-based level” (Rasmussen,
1987).
The cognitive flow charts are presented in Annex III. The decision blocks are those coloured
in pink; the other possible blocks are reported and briefly described in Table 3-3.

As already mentioned, the example of a possible quantitative analysis using PROCOS was
aimed at calibrating the simulation process on the data made available by the analysis of past
accidents using HERA retrospectively.
Table 3-3 therefore reports the possible correspondence between the PROCOS
calibrated decision blocks (and therefore error types) and the error types (and error modes)
reported in HERA.



Table 3-3: Main blocks in the cognitive flow charts. In the last column, the possible error types in
the HERA taxonomy have been identified in order to use incident data for the calibration.

ID | Block description | Block type | Possible HERA correspondent ET
HW | HW stimuli | HMI input | –
2 | Warning devices | HMI input: assigned probability for yes (the input is a warning device) or no | –
1 | Operator monitoring the system (perception of not alerted items) | calibrated decision block | PV-EM: No Auditory Detection; M-IP: Distraction; PV-IP: Monitoring Failure; PV-IP: Distraction/Pre-occupation; PV-EM: No detection of visual information; PV-IP: Visual Search Failure
3 | Recognise stimuli | calibrated decision block | PV-IP: Association Bias; PV-EM: Misidentification of information; PV-IP: Information overload; PV-EM: Misreading of information; PV-EM: Misperception of information
4 | Auditory? | HMI input ("yes" exit if it is an auditory warning stimulus) | –
5 | Visual perception of alerting info (visual only) | calibrated decision block | PV-IP: Information overload; PV-IP: Expectation Bias; PV-EM: Late detection of visual information
6 | Distinguish target info from background info | calibrated decision block | PV-IP: Discrimination Problem; PV-IP: Visual/Sound confusion; PV-IP: Information overload
7 | Correct HW interpretation | calibrated decision block | PDM-IP: Incorrect assumption; PV-IP: Association Bias; PV-EM: Misidentification of information; PV-IP: Discrimination Problem; PV-IP: Information overload; PDM-IP: Failed to recognise risk
8 | Remember related action/indication (memory) | calibrated decision block | M-IP: Rarely used information; M-IP: Mistored information; M-IP: Insufficient learning
9 | Skill based step | assigned probability yes/no (input of the analyst) | –
10 | Planned a step to be executed | calibrated decision block | PDM-IP: Fixation; PDM-EM: Misjudge aircraft projection; PDM-EM: Insufficient plan; PDM-IP: Incorrect assumption; PDM-IP: Failure to consider side effects; PDM-IP: Failure to integrate information
11 | Rule based step | assigned probability yes/no (input of the analyst) | –
12 | HW control to be operated working | HMI input: failure rate of equipment to be operated | –
13 | Correct response/execution | calibrated decision block | RE-EM: Selection error; RE-EM: Omission of action; RE-IP: Slip of the tongue
14 | Analyse the system? (the operator does not remember what to do; does he go back to look at the system in order to recall situation and actions?) | calibrated decision block | PDM-IP: Fixation; PDM-IP: Denied risk; PDM-IP: Failure to integrate information
15 | Remember related action execution, short term memory (ok HW interpretation) | calibrated decision block | M-EM: No recall of temporary memory; M-EM: Inaccurate recall of temporary memory; M-EM: Misrecall information; M-EM: No recall of information; M-EM: Forgot previous action; M-EM: Forgot a planned action
16 | System failure state recoverable? | recoverability of equipment to be operated (input of the analyst) | –
17 | Remember related action execution, short term memory (non ok HW interpretation) | calibrated decision block | M-EM: No recall of temporary memory; M-EM: Inaccurate recall of temporary memory; M-EM: Misrecall information; M-EM: No recall of information; M-EM: Forgot a planned action
18 | Meta knowledge of the right step? | calibrated decision block | PDM-IP: Incorrect assumption; PDM-IP: Fixation
19 | Analyse the system? (the operator does not know what to do; does he go back to look at the system in order to recall situation and actions?) | calibrated decision block | PDM-IP: Fixation; PDM-IP: Denied risk; PDM-IP: Failure to integrate information
20 | Right step in intention (following a planning phase) | calibrated decision block | Negligent Error; Reckless Violation; PDM-EM: Incorrect decision making; PDM-IP: Incorrect assumption
21 | Correct HW interpretation (no verbal communication in general) | calibrated decision block | PV-IP: Association Bias; PV-EM: Misidentification of information; PV-IP: Discrimination Problem; PV-EM: Misreading of information; PV-IP: Information overload; PDM-IP: Failed to recognise risk
25 | Is there any error mode available? | this block states the action of the simulator to check whether any error mode has been input by the analyst in correspondence of a specific error type for a given sub-step | –
27 | Remember related action execution (ok HW interpretation, memory) | calibrated decision block | –
34 | Intention of following procedures as they are | calibrated decision block | Routine Rule Breaking; Organisational Induced Violation
35 | Analyse the system (from 33) | calibrated decision block | –
36 | Correct HW interpretation (frequent task) | calibrated decision block | PV-EM: Misreading of information; PV-IP: Information overload; PDM-IP: Failed to recognise risk; PV-IP: Discrimination Problem
37 | Frequent task | input of the analyst | –
38 | Pilot communication (yes or no) | input of the analyst | –
39 | Readback communication | input of the analyst | –
40 | Processing info/request | input of the analyst | –
41 | Communication heard/understood correctly by ATCO | calibrated decision block | PV-EM: Mishear
42 | ATCO asks the pilot for clarification | calibrated decision block | PV-EM: No Auditory Detection
43 | Clarification successful (43) | calibrated decision block | PV-EM: Hearback Error
44 | Correct ATCO clearance/instruction | linked to exit of clearance/instruction step | –
45 | Pilot detects and asks for clarification | calibrated decision block | PDM-IP: Failed to recognise risk; PV-EM: Hearback Error
46 | Clarification successful | calibrated decision block | PV-EM: Mishear; PV-EM: Hearback Error; RE-EM: Unclear information transmitted
47 | Pilot understood correctly | calibrated decision block | RE-EM: Unclear information transmitted; PV-EM: Mishear
48 | Correct readback (no slip of the tongue) | calibrated decision block | RE-IP: Slip of the tongue
49 | ATCO detects and reissues the clearance/instruction (incorrect readback) | calibrated decision block | PV-IP: Expectation Bias; PV-EM: Hearback Error
50 | Clarification successful (49) | calibrated decision block | PV-EM: Mishear; PV-EM: Hearback Error; RE-EM: Unclear information transmitted
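In software, the correspondence of Table 3-3 can be held as a simple lookup from decision-block ID to candidate HERA error types. The fragment below transcribes only a few rows of the table as an illustration (the dictionary name and helper function are hypothetical):

```python
# Fragment of the decision block -> HERA error-type mapping, transcribed from Table 3-3.
BLOCK_TO_HERA = {
    13: ["RE-EM: Selection error", "RE-EM: Omission of action",
         "RE-IP: Slip of the tongue"],
    41: ["PV-EM: Mishear"],
    48: ["RE-IP: Slip of the tongue"],
    49: ["PV-IP: Expectation Bias", "PV-EM: Hearback Error"],
}

def hera_candidates(block_id: int) -> list:
    """Return the HERA error types usable to calibrate a given decision block."""
    return BLOCK_TO_HERA.get(block_id, [])
```

During calibration, such a lookup is what lets occurrence counts of HERA error types in the incident database be routed to the corresponding decision blocks.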



3.3 Calibration process of the decision blocks in PROCOS using HERA data
The previous method for calibrating the decision blocks in PROCOS was based on the
procedure presented in chapter 1.

We can briefly summarise it here by saying that each decision block has two possible exits:
“Yes” and “No”. The exit process is stochastic and depends on the values of the PSFs
(Performance Shaping Factors) and on the influence they have on each decision block.

If we indicate with X the possible outcome of a decision block, X is a Bernoulli variable.

If the following values are associated with X:
Yes → X = 1
No → X = 0
then the probability density function fX(x) is equal to:

f_X(x) = f_X(x; p) = p^x * (1 - p)^(1-x)   for x = 0 or x = 1, and 0 otherwise    (3.1)

where 0 ≤ p ≤ 1 and q = 1 - p.

The probability of having “Yes” as exit of the block, P(X = 1), is equal to p, while the
probability of having the “No” exit, P(X = 0), is equal to q.
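Equation 3.1 can be checked numerically with a trivial sketch (the function name is ours, not from PROCOS):

```python
def bernoulli_pmf(x: int, p: float) -> float:
    """Probability mass function of a Bernoulli variable, Eq. 3.1."""
    if x not in (0, 1):
        return 0.0
    return p**x * (1 - p)**(1 - x)

# The two exits of a decision block with success probability p = 0.7.
p = 0.7
p_yes = bernoulli_pmf(1, p)   # equals p  ("Yes" exit)
p_no = bernoulli_pmf(0, p)    # equals q = 1 - p  ("No" exit)
```

This makes explicit that calibrating a decision block means nothing more than fixing its single parameter p.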

In order to calibrate each decision block, the value of p, the success probability of the
cognitive process in the block, has been expressed as a function of the PSFs involved for the
block (thus also in order to evaluate the influence of the context on the cognitive process).
The SLIM method has then been chosen (Wickens, 1992), in particular the expression that
relates the Human Error Probability (HEP) to a Success Likelihood Index, which is a
logarithmic function of the PSFs involved, “since it is assumed that changes in human
responses induced by changes in external conditions can be described by a logarithmic
relationship” (Fujita & Hollnagel, 2004).
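The SLIM expression itself was lost at the page break; in its conventional formulation (an assumption that the report used the standard form, not a transcription of the original page) it reads:

```latex
\log_{10}(\mathrm{HEP}) = a \cdot \mathrm{SLI} + b
```

where SLI is the Success Likelihood Index obtained as a weighted sum of the PSF ratings, and a and b are constants determined by calibration against tasks whose HEPs are known.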



For the present application, however, the procedure for evaluating the probability q of the
Bernoulli law for the decision blocks has been completely changed.

The data available from the HERA database can in fact provide indications of the rate of
occurrence of specific error types, and also of which performance shaping factors (that is to
say, Contextual Conditions in the HERA taxonomy) played a negative role in a given event
where a certain type of error occurred. From here on we will refer to the PSFs as Contextual
Conditions (CCs), in order to be coherent with the HERA taxonomy.

The HERA dataset used for this project has the following characteristics:
- number of recorded accident/near-miss events: 62
- number of recorded ATCO errors: 91
  – Perception & Vigilance: 38
  – Memory: 37
  – Planning & Decision Making: 36
  – Response Execution: 10
- number of recorded occurrences of Contextual Conditions (CCs): 130
- number of movements during the reporting period: 4 million (estimate)
- level of analysis of Contextual Conditions: main category
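From the dataset characteristics above, an order-of-magnitude ATCO error rate per movement can be derived (illustrative arithmetic only; the movement figure is itself an estimate):

```python
recorded_errors = 91
movements = 4_000_000   # estimated movements during the reporting period

# Crude base rate of recorded ATCO errors per movement, roughly 2.3e-5.
rate_per_movement = recorded_errors / movements
```

Such a base rate is only a rough anchor: the calibration described next distributes the recorded errors over error types and Contextual Conditions rather than using the aggregate figure directly.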

The main categories of the contextual conditions available in the HERA taxonomy are the
following:
1. Pilot-Controller Communications
2. Pilot actions
3. Traffic and airspace
4. Weather
5. Documentation and procedures
6. Training and experience
7. Workplace design and HMI
8. Environment
9. Personal Factors
10. Team factors
11. Organisational factors
Their subcategories are listed in Table 3-4.



Table 3-4: Contextual Conditions in HERA
CONTEXTUAL CONDITIONS (CCs)
Training and experience
Inadequate knowledge for position
Inadequate experience on position
Inadequate time on position
Unfamiliar task in routine operations
Novel situation
Over training
Inadequate mentoring
Inadequate On-the-Job Training (OJT)
Inadequate emergency training
Inadequate Team Resource Management (TRM) training
Inadequate recurrent/continuation training
Controller under training
Controller under examination/check
Other – State:
Team factors
Controllers on the floor assisting one another with the traffic
Currency and availability of all necessary equipment
Position relief briefing
Cooperative effort to accommodate the flow of traffic
Team relations – conflicts / personality problems
Late returns to the position after breaks
Positions left temporarily unstaffed
New or temporary team assignments
Lack of responsibility
Unclear working methods
Confidence in others
Team pressures
Cooperation from supervisors from other areas in traffic flow initiatives
Support from others - flight data / maintenance
Management provision of resources and assistance as dictated by the
traffic needs
Support from other units
Staffing for the traffic requirements
Confidence in supervisor’s ability to manage the air traffic activity
Supervisory cooperation to manage the traffic during this shift
Management cooperation to assist and support the sectors/positions/
areas/facilities
Higher management cooperation to assist and support the
sectors/positions/areas/facilities
Other – State:
Organisational factors
Work environment
Safety versus efficiency – for yourself / organisation
Numbers of qualified controllers
Job satisfaction
Roster/rest duty times
Work scheduling
Adherence to rules by ATCOs
Adherence to rules by supervisors
Terms and conditions of work
Supervisory decisions in staffing and facilities
Management decisions in staffing and facilities
Supervisory decisions in safety and efficiency policies
Management decisions in safety
Environment
Noise from people – supervisors/colleagues/maintenance/visitors



Noise from equipment
Distraction - job related
Distraction - non-job related
Air quality - temperature/humidity
Lighting problems - illumination/glare
Pollution/fumes
Asbestos
Radiation
Other – State:
Workplace design and HMI
TYPE:
Working position/console, i.e. HMI
Surveillance, i.e. radar
Communication, i.e. radio
Navigation, i.e. approach aids
Flight information display, i.e. Flight Progress Strips (FPS) / display
Auxiliary equipment, i.e. generators
Other information display, i.e. weather
Equipment warning devices, i.e. alarms and alerts
Other – State:
PROBLEM:
Conflicting information
Failed or broken equipment
False information
Feedback problem
High false alarm rate
Illegible information
Inaccessible information
Incorrect information
Interference
Lack of equipment/information
Lack of coverage/range
Lack of precision
Lost information
Mode confusion
No equipment/information
Nuisance information
Poor design
Poor display
Poor positioning
Recently introduced equipment/information
Equipment size problem
Suppressed information
Unavailable equipment/information
Unclear equipment/information
Unreliable equipment/information
Untrustworthy equipment/information
Visibility of equipment/information
Other – State:
Personal factors
Distracted by personal thoughts
Incapacitation – illness/collapse
General health and fitness – nutrition/hydration/exercise
Impairment – alcohol/medication/drugs
Fatigue – tiredness
Fatigue - sleep loss
Fatigue - sleep deprivation
Pain
Abnormal stress symptoms - post incident/training/checking
High anxiety/panic
Domestic/lifestyle problems
Emotional stressors
Boredom
Complacency
Confidence in self
Trust in automation
Motivation/Morale
Other – State:



Pilot-controller communications
Pilot language / accent difficulties
Similar confusable call signs
Pilot readback incorrect
Pilot experience
Situation not conveyed by pilots – urgency/party-line support
Pilot breach of R/T standards/phraseology
ATC breach of R/T standards/phraseology
Speech tone
Speech rate
Complexity of ATC transmission
Pilot high/excessive R/T workload
ATC high/excessive R/T workload
A/C struck transmitter
R/T interference
R/T cross-transmission
R/T blocked frequency
Other – State
Pilot actions
Responding to TCAS alert
Response time to ATC instructions
Correct pilot readback followed by incorrect action
Rate of turn
Rate of climb/descent
Speed changes
A/C navigational limitations not considered by pilot
Other – State
Traffic and airspace
Sector capacity limitations
Excessive traffic load
Complex traffic mix
Fluctuating traffic load with unexpected demands – off-route traffic
Holding patterns
Aircraft with similar/confusable call signs
Underload
Post peak traffic
Unusual situation – emergency or high risk
Flight in non-controlled and controlled airspace
IFR/VFR mix
Flight in transitional airspace
Airspace design characteristics - complexity, changes
Traffic management initiatives - military, medical, parachuting, student
pilot, State flight.
Other – State
Weather
TYPE:
Snow / ice / slush
Fog / low cloud
Thunderstorm
Extreme winds at high altitude
Extreme surface winds
Down draft / windshear
Other – State:

CONSEQUENCE:
Taxi difficulties
Vectoring problems/abilities
Route deviation
Difficulty tracking aircraft/vehicles
Holding patterns
Other – State:
Documentation and procedures



TYPE:
Orders
Charts/notices
Temporary notices
Advisory manuals/circulars
Checklists
Automated References
Special information (NOTAMS, SIGMETS)
Arrival
Landing
Special arrival procedures
Landing and hold short
Clearing runway
Simultaneous use of same runway
Crossing runway
Taxi for position and hold
Departure
Wake turbulence
Visual separation
En-route
Oceanic
Noise abatement
Other – State:

PROBLEM:
Unclear
Contradictory
Ambiguous
Incorrect
Incomplete
Inaccurate
Too complex
New/recent changes
In revision
Outdated
Not available
Other – State:

From now on we will refer to the PSFs as Contextual Conditions (CCs), in order to be coherent with the HERA taxonomy. The CCs chosen from the above table for each category, to be used for the Simulation Trial, are listed in Table 3-4.
They are used to calculate an index similar to the SLI, called the Failure Likelihood Index (FLI). Formula (3.2) is analogous to the one used for the SLI presented in chapter 1. The main difference is that the weight of the effect that the CCs can have on the situation is considered from a negative perspective, which is to say it takes into account only those Contextual Conditions whose presence negatively affects the outcome of a task.
FLI = Σ_{i=1..Nj} (w_i · r_i)    (3.2)

where:
w_i → normalised weight of the i-th CC for the cognitive process of the j-th block
r_i → value of the i-th CC
N_j → number of CCs for the j-th block

and Σ_{i=1..Nj} w_i = 1
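As a minimal illustration of formula (3.2), with hypothetical weights and CC values that are not taken from HERA data, the index can be computed as:

```python
def fli(weights, cc_present):
    """Failure Likelihood Index, formula (3.2): FLI = sum_i(w_i * r_i).

    weights    -- normalised weights w_i of the CCs for one decision block
    cc_present -- Boolean r_i values (1 = CC present, 0 = CC absent)
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * r for w, r in zip(weights, cc_present))

# Hypothetical block with four CCs, two of which are present in the scenario
w = [0.4, 0.3, 0.2, 0.1]
r = [1, 0, 1, 0]
print(round(fli(w, r), 3))  # only the CCs that are present contribute
```

Because r_i is Boolean in the trial application, the FLI simply accumulates the normalised weights of the CCs that are present in the scenario.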

For the trial application, the r_i value of each CC is a Boolean that can take the value 1 or 0 (present or not present), since this is the only information currently available from the HERA accident database. The process for evaluating w_i, however, needs a more detailed explanation.
Before stepping forward in the calibration process, it is better to introduce the core of the new stochastic process for the calibration of the Yes and No exits of the decision blocks. The procedure for evaluating the probability Q of the Bernoulli law for a decision block is based on an ogivian-shaped curve expressed by the formula of Rasch (1980).
P(Failure of type i) = e^((r_i − μ)/s_n) / (1 + e^((r_i − μ)/s_n))    (3.3)

Where:
r_i = percentage of errors in a situation, given all situations of the same type in the database
μ = 0.5, adjustment of the location of the crossing point (0.5 assumes rational processing)
s_n = 0.075, empirical parameter to adjust the slope of the ogivian curve
e = base of the natural exponential (≈ 2.718)
This curve has been proposed by O. Straeter (2005) for relating the absolute HEP = n/N to the empirical data collected in accident databases. In the method named CAHR, O. Straeter (2000) found that the Rasch equation provides an optimal calibration function, since it is the curve that best approximates the relation between the percentage of errors in a certain situation, given the number of situations of the same type in the database, and the THERP data about the absolute probability of a given type of error, as shown in Figure 3-4.

Figure 3-4: Percentage of errors in a situation given all situations of the same type (i) in the data base compared to THERP HEP values for the same error types (Straeter 2000)



The ogivian curve is also used for another important relation: the one between the FLI and r_i. At the moment we assume that the relation between r_i and FLI is described by a normalised ogivian curve:

r_i = e^((FLI − μ')/s_n') / (1 + e^((FLI − μ')/s_n'))    (3.4)

This formula needs to be calibrated: for each decision block of the Cognitive Flowchart we need to identify the parameters μ' and s_n' using the empirical data available in the HERA database.
Table 3-6 reports an example of the Excel table used for the calibration of one decision block.
Three anchor points are fixed:
- (FLI = 0, r_i = 10^-3), which represents the best possible working conditions;
- (FLI = 1, r_i = 0.9), which represents the worst possible working conditions;
- (FLI*, r_i*), which represents the normal working condition.
The third point is extracted from HERA data. The procedure is set out in the following steps:



a. The r_i* value is obtained through the formula:

   r_i* = n_ev_i / N_ev    (3.5)

   Where:
   n_ev_i is the number of events that have at least one occurrence in the set of HERA Error Types linked to the decision block (number of events linked to the block);
   N_ev is the total number of "accident/near miss" events in the database.
b. The contextual conditions linked to the block are identified (expert judgement and empirical data are used together).
c. Then it is possible to define the weight of CC_i as the probability of having EM_k (the set of Error Types linked to the decision block) given the occurrence of the i-th contextual condition:

   w_i = P(EM_k | CC_i) = P(CC_i | EM_k) · P(EM_k) / P(CC_i)    (3.6)

   Where:
   P(CC_i | EM_k) is the probability of CC_i given EM_k, which can be extracted from the HERA database:

      P(CC_i | EM_k) = N^k_CCi / n_ev_i

   with N^k_CCi the number of occurrences of CC_i for the block.
   P(EM_k) is the probability of EM_k:

      P(EM_k) = n_ev_i / N_t

   with N_t the total number of operations in the observation period.
   P(CC_i) is the probability of occurrence of CC_i:

      P(CC_i) = P(CC_i | EM_k) · P(EM_k) + P(CC_i | EM^C_k) · P(EM^C_k)
              = (N^k_CCi / n_ev_i) · (n_ev_i / N_t) + (N^C_CCi / (N_ev − n_ev_i)) · ((N_ev − n_ev_i) / N_t)
              = (N^k_CCi + N^C_CCi) / N_t = N_CCi / N_t

   Where:
   EM^C_k is the complement of EM_k in the database, namely the events not linked to the block;
   N^C_CCi is the number of occurrences of CC_i in the events not linked to the block;
   (N_ev − n_ev_i) is the number of events not linked to the block;
   N_CCi is the total number of occurrences of CC_i in the database;
   N_t is the number of operations of the ATC station to which the HERA database observation period refers.

   Therefore:

      w_i = N^k_CCi / N_CCi    (3.7)
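With hypothetical counts (not HERA data), a short numerical check confirms that the Bayes computation of formula (3.6) collapses to the count ratio of formula (3.7):

```python
# Hypothetical counts, chosen only for illustration
n_ev_i  = 17          # events linked to the block
N_ev    = 24          # total events in the database
N_t     = 1_000_000   # operations in the observation period
N_CCi_k = 10          # occurrences of CC_i in events linked to the block
N_CCi_c = 5           # occurrences of CC_i in events NOT linked to the block
N_CCi   = N_CCi_k + N_CCi_c

p_cc_given_em  = N_CCi_k / n_ev_i            # P(CC_i | EM_k)
p_em           = n_ev_i / N_t                # P(EM_k)
p_cc_given_emc = N_CCi_c / (N_ev - n_ev_i)   # P(CC_i | EM_k^C)
p_em_c         = (N_ev - n_ev_i) / N_t       # P(EM_k^C), as in the derivation
p_cc = p_cc_given_em * p_em + p_cc_given_emc * p_em_c   # = N_CCi / N_t

w_i = p_cc_given_em * p_em / p_cc            # Bayes, formula (3.6)
assert abs(w_i - N_CCi_k / N_CCi) < 1e-12    # matches formula (3.7)
print(round(w_i, 4))
```

The observation period N_t cancels out, which is why the weight reduces to a pure ratio of occurrence counts.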
d. w_i is then normalised:

   w̄_i = w_i / Σ_i w_i    (3.8)

e. Finally, an empirical mean value of the FLI is calculated, which refers to a specific r_i (r_i*) for the events linked to the decision block under calibration.
f. As already said, three pairs of values for FLI and the related r_i are used:
   - (0, 10^-3): the optimal situation, in which no Contextual Condition is present;
   - (FLI*, r_i*): the nominal situation, extracted from the HERA database;
   - (1, 0.9): the worst situation, in which all Contextual Conditions are present at the same time.

In this way it is possible to find the two parameters μ' and s_n' of the ogivian curve that make it best fit the proposed calibration values, by applying the least squares method.
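A sketch of this calibration step, using the three anchor points of the "Operator monitoring the system" block (FLI* = 0.2196, r_i* = 0.3871, as in Table 3-6) and a plain grid search as a stand-in for a dedicated least-squares solver; note that several (μ', s_n') pairs give nearly the same residual, the document's fitted values (μ' ≈ 0.235, s_n' ≈ 0.034) among them:

```python
import math

def ogive(x, mu, sn):
    """Normalised ogivian (logistic) curve, as in formulas (3.3)/(3.4)."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / sn))

# Anchor points (FLI, r_i): best case, HERA nominal case, worst case
anchors = [(0.0, 1e-3), (0.2196, 0.3871), (1.0, 0.9)]

def sq_error(mu, sn):
    return sum((ogive(f, mu, sn) - r) ** 2 for f, r in anchors)

# Grid search over plausible parameter ranges (a simple stand-in for the
# least-squares fitting mentioned in the text)
best = min(
    ((mu / 1000, sn / 1000) for mu in range(100, 400) for sn in range(10, 100)),
    key=lambda p: sq_error(*p),
)
print(best, round(sq_error(*best), 4))
```

The residual is dominated by the worst-case anchor (1, 0.9), since the steep curve saturates near 1 well before FLI = 1; this matches the square error of about 0.01 reported in Table 3-6.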

Table 3-5: Example of the series obtained for the curve that relates r_i with HEP (formula 3.3) and the one that then relates FLI with r_i (formula 3.4)

FLI = 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

ri = 0.00099 0.01839 0.26186 0.87043 0.9922 0.99959 0.99998 1 1 1 1

HEP = 0.00129 0.00162 0.04011 0.99289 0.99859 0.99872 0.99873 0.99873 0.99873 0.99873 0.99873



[Figure: plot of r_i and HEP as functions of FLI, both rising from 0 to 1 as FLI increases from 0 to 1]

Figure 3-5: Graph showing how the values of r_i and HEP vary as functions of FLI



Table 3-6: Example of the table used to calibrate the ogivian curve for decision block "Operator monitoring the system" (block 1, perception of not-alerted items)

[The table lists, for the block, the linked HERA error types (PV-EM: no auditory detection; M-IP: distraction; PV-IP: monitoring failure; PV-IP: distraction/pre-occupation; PV-EM: no detection of visual information; PV-IP: visual search failure), the possible contextual conditions with their occurrences and normalised weights (e.g. Traffic and Airspace 0.127, Workplace design and HMI 0.218, Environment 0.175, Personal Factors 0.116, Team factors 0.145, Organisational Factors 0.218), the three calibration points (0, 0.001), (FLI* = 0.2196, r_i* = 0.3871) and (1, 0.9), the fitted parameters μ' = 0.23524 and s_n' = 0.03400, and the resulting sum of square errors (0.01) over the 24 events in the database.]



Expert judgement is used as a second source for estimating the importance (w_i) of each CC.
To gather information about the absolute contribution of each Contextual Condition (CC) to the possible incorrect execution of a task performed by an Air Traffic Controller, a questionnaire was addressed to Air Traffic Controllers involved in the specific analysed task. Twenty-nine Air Traffic Controllers, coming from different countries and different airports, have been interviewed. A simple format used for collecting the results of the questionnaire is shown in Table 3-7. The structure of the questionnaire is very simple: after a brief introduction in which a specific task is described, participants are asked to give a value between 0 (min) and 100 (max) that reflects the absolute contribution of each CC to the possible incorrect execution of the task.

The questionnaire is structured as a table with three columns:
- in the first one, each row describes one Contextual Condition chosen from the HERA CCs (the words in bold type represent the main category of the CC);
- in the second one, the interviewed Air Traffic Controller is asked to give a numerical assessment of the weight of the CC;
- in the last one, some room is left so that the interviewee can add suggestions or indications linked to the specific CC.

Table 3-7: PSFs chosen from the HERA CCs to be used for the Simulation Trial, as represented in the questionnaires provided to the interviewed ATCOs (each PSF is rated with an importance value from 0 to 100)

Pilot Controller Communication:
- Pilot language / accent difficulties
- Similar confusable call signs
- ATC high/excessive R/T workload
- R/T interference

Traffic and Airspace:
- Excessive traffic load
- Complex traffic mix
- Post peak traffic (period just after a high traffic load situation)
- Unusual situation - emergency or high risk (presence of military aircraft, police helicopters, etc.)
- Airspace design characteristics, complexity (number of crossing points, presence of restricted areas, rules for some specific flying zones)

Weather:
- Difficulty tracking aircraft/vehicles (i.e. disturbance on the radar due to meteorological conditions)
- Vectoring problems/ability (not for Airport ATCT)

Training and experience:
- Level of knowledge
- Level of experience
- Unfamiliar task in routine operations (i.e. new interfaces for performing a routine task)

Workplace Design and HMI:
- Conflicting information
- Inaccessible information
- Nuisance information
- Usability of hardware (ergonomics of the work station)
- Usability of the software
- Poor display

Environment:
- Distraction - job related or non-job related (i.e. phone calls, chatting with a colleague)
- Noise from people (supervisors, colleagues, visitors, maintenance, etc.)

Personal Factors:
- Fatigue - tiredness / sleep loss / sleep deprivation
- Too much self-confidence
- Too little self-confidence

Team Factors:
- Clarity of working methods (i.e. the way procedures are applied, or working practice when no procedure exists)
- Clarity of responsibility assignment (clear definition of who is responsible for which tasks, i.e. clear definition of roles and responsibilities and task sharing between radar controller, coordinator and assistant)
- Cooperation from supervisors (support from supervisors)
- Cooperation from colleagues

Organizational Factors:
- Work scheduling
- Job satisfaction (being content, or not, with the job content, relations with colleagues and management, job environment, promotion system, reward and much more)
- Safety versus efficiency (for yourself / for the organisation)
- Managerial decisions in staffing and equipment

The values collected are then used to establish the range within which the mean value of the importance of each CC (obtainable through the use of HERA data as well) can actually vary. For each CC, given the mean obtained from the HERA data and the values obtained from the interviews, it is possible to calculate the sample mean and the sample variance in the following way:

- Sample mean:  x̄ = (Σ_{i=1..n} x_i) / n    (3.9)

- Sample variance:  S² = Σ_{i=1..n} (x_i − x̄)² / (n − 1)    (3.10)

Where:
"n" is the number of answers belonging to the main category analysed;
"x̄" is an estimator of the population mean μ (mean value of the weight of the CC analysed);
"S²" is an estimator of the population variance σ².

These quantities are measures of the central tendency and dispersion of the data collected. It is then necessary to build a confidence interval for the population mean.



Using the estimator S² for the population variance, the statistic

   (x̄ − μ) / (S / √n)    (3.11)

is distributed as t_{n−1}, where (n − 1) are the degrees of freedom of the distribution. Therefore we can obtain a 100·(1 − α) percent confidence interval for the mean value of the weight:

   x̄ − t_{α/2} · S/√n  ≤  μ  ≤  x̄ + t_{α/2} · S/√n    (3.12)
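Formulas (3.9) to (3.12) can be sketched as follows; the answers are hypothetical (not actual ATCO data) and the t-quantile is taken from a standard table rather than computed:

```python
import math

# Hypothetical questionnaire answers (importance 0-100) for one CC
x = [60, 75, 55, 80, 70, 65, 72, 58, 77, 63]
n = len(x)

mean = sum(x) / n                                   # sample mean, (3.9)
s2 = sum((xi - mean) ** 2 for xi in x) / (n - 1)    # sample variance, (3.10)
s = math.sqrt(s2)

# t-quantile for a 95% interval with n-1 = 9 degrees of freedom
# (2.262 is the standard table value; the 95% level is an assumption)
t = 2.262
half_width = t * s / math.sqrt(n)                   # half-width of (3.12)
lo, hi = mean - half_width, mean + half_width
print(round(mean, 2), round(lo, 2), round(hi, 2))
```

The resulting [lo, hi] interval is the range from which the weight of this CC is drawn at each simulation run, as described next.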

This provides an interval within which the value of the CC weight in question lies during a specific simulation run. Once the percentage of the confidence interval is chosen, the value of w_i for each simulation run is extracted within the related range of values. Therefore, at each run the weight of each CC is randomly extracted from its related interval (assuming a uniform distribution). These values are then used to evaluate the FLI (formula 3.2), taking into account that the presence of the CCs depends on the scenario to be simulated; from the resulting FLI, the corresponding r_i is evaluated using formula (3.4). This value of r_i is then substituted into the ogivian-shaped curve expressed by the formula of Rasch (3.3), and the final value of Q for the Bernoulli process described in system (3.1) is used to decide stochastically the Yes or No exit for each simulation run. This process needs to be performed for each decision block of the cognitive simulator, since each decision block may present different values.
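The whole stochastic chain just described, random extraction of the weights, FLI via (3.2), r_i via (3.4), failure probability via the Rasch curve (3.3), and the Bernoulli Yes/No exit, can be sketched as follows. The weight intervals are hypothetical, the curve parameters approximate those of Table 3-6, and mapping the failure branch to the "No" exit is an illustrative choice:

```python
import math
import random

def logistic(x, mu, sn):
    return 1.0 / (1.0 + math.exp(-(x - mu) / sn))

def decision_block_exit(weight_ranges, cc_present, mu1, sn1,
                        mu=0.5, sn=0.075, rng=random):
    """One stochastic Yes/No exit of a decision block.

    weight_ranges -- (low, high) confidence interval of each CC weight
    cc_present    -- Boolean presence of each CC in the scenario
    mu1, sn1      -- calibrated parameters of the FLI -> r_i curve (3.4)
    mu, sn        -- parameters of the Rasch curve, formula (3.3)
    """
    w = [rng.uniform(lo, hi) for lo, hi in weight_ranges]    # extract weights
    total = sum(w)
    w = [wi / total for wi in w]                             # normalise
    fli_val = sum(wi * ri for wi, ri in zip(w, cc_present))  # formula (3.2)
    r_i = logistic(fli_val, mu1, sn1)                        # formula (3.4)
    q = logistic(r_i, mu, sn)                                # formula (3.3)
    # Illustrative assumption: the failure branch is the "No" exit
    return "No" if rng.random() < q else "Yes"

rng = random.Random(42)
ranges = [(0.2, 0.4), (0.1, 0.3), (0.3, 0.5)]  # hypothetical weight intervals

def failure_fraction(scenario, n=10_000):
    exits = [decision_block_exit(ranges, scenario, 0.235, 0.034, rng=rng)
             for _ in range(n)]
    return exits.count("No") / n

print(failure_fraction([0, 0, 0]))  # base case: no CC present
print(failure_fraction([1, 0, 1]))  # degraded scenario: two CCs present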



Table 3-8: Example of the Excel table used for estimating the interval within which w_i lies during the simulation


- Dynamic Safety Modeling for Future ATM Concepts -

4. SETTING UP THE SIMULATION CAMPAIGN: A STEP BY STEP PROCEDURE

4.1 Simulation campaign

In this section the simulated scenarios and the main features of the simulation campaign are described.

4.1.1 Scenario setting

For the trial application, four basic scenarios have been selected. They refer to the categories of Contextual Conditions, which are taken into account in a Boolean logic scheme: if the value is 0 the CC plays no role in the simulation scenario, while if the assigned value is 1 it negatively influences the operator performance in the simulation scenario. Table 4-1 describes the scenarios currently chosen:
- scenario 1: no CC is considered and everything is fine; this is the best scenario from an ATCO point of view (base case);
- scenario 2: only the external Contextual Conditions play a negative role in the simulation;
- scenario 3: only the Contextual Conditions expressing human and internal organisational factors play a negative role in the task completion;
- scenario 4: the worst possible scenario, in which all the Contextual Conditions are considered at their maximum potential negative effect on the ATCO performance.

                               Scenario 1   Scenario 2   Scenario 3   Scenario 4
External Contextual Conditions
Weather 0 1 0 1
Traffic and airspace 0 1 0 1
Pilot/controller communication 0 1 0 1
Human and Organizational Factors
Training 0 0 1 1
HMI 0 0 1 1
Work Environment 0 0 1 1
Team Factors 0 0 1 1
Personal Factors 0 0 1 1
Organization Factors 0 0 1 1
Table 4-1: Characteristics of the four scenarios chosen for the trial application.

For each scenario, different ranges of the Failure Likelihood Index (FLI) have to be simulated, corresponding to the different contributions that different sets of CCs make to the FLI value. At first glance, the extension and the location of these ranges, reported in Figure 4-1 for a limited set of decision blocks of the Cognitive Flowchart, make it reasonable to expect a low deviation of the simulation results.


Figure 4-1: Range of scenario simulation for some characteristic decision blocks of the Cognitive Flowchart.

4.1.2 Number of repetitions of simulation runs

In any experimental design problem, and in the design of any simulation campaign, a critical decision is the choice of the number of repetitions of simulation runs. The type of results to be drawn from the simulation, the structure of the task and the probabilities of the events within the task strongly influence both the minimum number of cycles within a simulation run and the minimum number of replicate runs required to obtain statistically significant results.
The aim of this project is to estimate a Human Error Probability (HEP) that, in the nominal case where Contextual Conditions do not play a negative role, we expect to be no higher than 10^-3.
Some uncertain events generated during the execution of the simulated task - i.e. handling of the aircraft landing - have probabilities of occurrence between 10^-4 and 10^-5 (Table 3-1); thus, some branches of the task flowchart have a very low probability of occurrence.
In order to have a meaningful number of occurrences for any possible path of the task, and following an empirical rule that suggests setting the number of cycles at least one order of magnitude higher than the inverse of the lowest probability of occurrence, it has been decided to perform one million cycles for each simulation run, corresponding to one million landings on the runway. Figure 4-2 shows an estimate of the time taken to perform a single simulation run. Note that the simulation time is strongly dependent on both the number of simulation cycles and the computing power of the computer used for the simulation.
Finally, considering the available time for performing the entire simulation campaign, it has been decided to execute 20 repetitions for each scenario, sufficient to build significant statistics.
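The empirical sizing rule can be checked with a quick binomial computation (a sketch; 10^-5 is the lowest event probability quoted above): with one million cycles, an event of probability 10^-5 occurs about 10 times per run, and its estimated probability carries a relative standard error of roughly 32%, which the 20 repeated runs then average down.

```python
import math

def expected_counts(p, n_cycles):
    """Expected occurrences and relative standard error of the binomial
    estimate p_hat = k / n_cycles of an event probability p."""
    expected = p * n_cycles
    rel_se = math.sqrt(p * (1 - p) / n_cycles) / p
    return expected, rel_se

# Lowest event probability in the task (10^-5) with the one million
# cycles chosen for each simulation run
exp_n, rel_se = expected_counts(1e-5, 1_000_000)
print(round(exp_n, 6), round(rel_se, 3))
```

With only 10^5 cycles the same event would be expected about once per run, making its estimate essentially unusable, which motivates the one-order-of-magnitude margin.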




[Figure: simulation time (h.min) as a function of the number of cycles per simulation run (10^2 to 10^6), for two machines: a Core Duo @ 2.00 GHz with 1 GB of RAM and a Pentium 4 @ 1.70 GHz with 374 MB of RAM]

Figure 4-2: Dependency of the simulation time on the number of simulation cycles.

4.1.3 Summary of simulation campaign

To sum up, the simulation campaign for the trial application is organised with the following settings:
- Number of scenarios: 4
- Number of runs for each scenario: 20
- Number of simulation cycles for each run: 1,000,000 movements

4.2 Structure of the PROCOS reporting system

PROCOS is implemented in Microsoft Access, and the computer program stores all information within Access tables. The reporting system is therefore also made up of Access tables. There are four different kinds of report:
i. Diagnostic report. This report records any possible error in filling in the input data for the simulator. For instance, if there is a link to a task code but this task does not exist or is not completely defined, the program will store a record similar to the one shown in Table 4-2. A short description of the meaning of each column is the following:
- Rep_id is the number of the simulation run recorded;
- Rep_action is the type of error that occurred;
- Rep_simcode is the code of the simulation the program was performing when the error occurred;
- Rep_hd, Rep_stimuli, Rep_block, Rep_err_mode and Rep_err_type indicate whether the error is related to hardware, stimuli, flowchart block, error mode or error type respectively;
- Rep_value describes the type of error that occurred;
- Rep_data and Rep_time indicate when the error was recorded;
- Rep_user indicates the username of the analyst who started the simulation.
[Diagnostic Report columns: Rep_id, Rep_action, Rep_simcode, Rep_task, Rep_subtsk, Rep_hd, Rep_stimuli, Rep_block, Rep_value, Rep_err_mode, Rep_err_type, Rep_data, Rep_time, Rep_user. Example record: Rep_id = 1, Rep_action = ER01, Rep_simcode = sim_01, Rep_task = TSK_01, Rep_value = "SubTask Table Code E10 did not find!!", Rep_data = 08/04/2006, Rep_time = 0.00.02, Rep_user = MASTER]

Table 4-2: Example of the diagnostic report of PROCOS.

ii. Total statistic report. This report records information at the level of detail of the task; it provides all the information needed to build the total statistics of the task (see the next section). An example of some records of the Total statistic report is shown in Table 4-3.

[Report_T columns: Rep_id, Rep_simcode, Rep_subtsk, Rep_nr_cycles, Rep_nr_ok, Rep_nr_er, Rep_err_mode, Rep_err_type, Rep_err_rec, Rep_occurrences, Rep_outtask, Rep_data, Rep_time, Rep_user, Rep_exit_amount, Rep_exit_tot. Example records of simulation sim_01 over 1,000,000 cycles for subtasks TA36, TA34*, TA35 and TP6, with exit types such as R/EXC, ASVT, PERC and DC]

Table 4-3: Example of records of the Total statistic report of PROCOS.

- The first and the second column (Rep_id and Rep_simcode) are the same as in the diagnostic report;
- Rep_subtsk is the code of the subtask in which the task ended;
- Rep_nr_cycles is the total number of cycles simulated;
- Rep_nr_ok and Rep_nr_er represent the number of correct and incorrect tasks respectively;
- Rep_err_mode and Rep_err_type indicate the Error Type / Error Mode pair of each incorrect task; Rep_occurrences gives the occurrences of this Error Type / Error Mode pair;
- Rep_err_rec is the number of exits with an error in the recovery flowchart (in the trial application the recovery flowchart is not recalled, therefore this column is always empty);
- Rep_outtask indicates the code of the exit type of the task (i.e. DC: task ended with an error due to a delayed confirmation by the ATCO);
- Rep_exit_tot and Rep_exit_amount are, respectively, the number of requests of the Task Recovery Step and the total number of correct Task Recovery Steps executed;
- Rep_time, Rep_data and Rep_user are the same as in the Diagnostic report.
iii. Detailed report. In this report every executed subtask is recorded; thus, scrolling through the records within this report, it is possible to rebuild the entire path of each simulation cycle. An example of some records of the Detailed report is shown in Table 4-4.

Table 4-4: Example of records of the Detailed report of PROCOS.

[Report_D columns: Rep_task, Rep_subtsk, Rep_hw, Rep_stimuli, Rep_block, Rep_err_mode, Rep_err_type, Rep_value, Rep_subtsk_exit, Rep_error. Example records of task TSK_01 for subtasks TA1, TA2, TA4, TA23, TA33, TA36, TP5 and TPA10. Rep_value contains messages such as "Uscita corretta dal task TSK_01" (correct exit from task TSK_01), "Termine Simulazione, uscita dal subtask: TA36" (end of simulation, exit from subtask TA36) and "Uscita da blocco 49 - Err.TYPE: RBE, Err.MODE: HCEIF" (exit from block 49 with Error Type RBE and Error Mode HCEIF); the latter record has Rep_err_mode = HCEIF, Rep_err_type = RBE and Rep_error = 1]

In Table 4-4 the columns Rep_id, Rep_simcode, Rep_data, Rep_time and Rep_user have been hidden, because they are the same as in the Diagnostic report and the Total statistic report.
- Rep_task is the code of the task simulated;
- Rep_subtsk is the code of the subtask simulated;
- Rep_hw is the code of the hardware involved in the subtask (in the trial application there is no hardware involved, so each row of this column is NO);
- Rep_stimuli indicates the code of the type of hardware stimulus that triggered the subtask;
- Rep_block indicates whether the subtask is a communication process (C) or not (R);
- Rep_value describes how the subtask ended;
- Rep_err_mode and Rep_err_type indicate the Error Type / Error Mode pair in case the subtask ended incorrectly;
- Rep_subtsk_exit indicates the last subtask of the simulation cycle related to the record;
- Rep_error indicates whether there was an error during the simulation process ("1" means an error occurred, blank otherwise).
iv. Block statistic report. This report records how many times a specific block of the flowchart occurred and how many times the output of this block was "Yes" or "No". The possibility of recording the activation of a block is a setting of the simulator; in this release a maximum selection of ten blocks is allowed, but this value can be modified. An example of a Block statistic record is shown in Table 4-5.

Table 4-5: Example of some records of the Block statistic report of PROCOS.

[Block_statistic_report columns: Rep_id, Rep_codsim, Rep_task, Rep_subtsk, Rep_block_1, Rep_desblck_1, Rep_qta_1y, Rep_qta_1n, Rep_block_2, Rep_desblck_2, Rep_qta_2y, Rep_qta_2n, ..., Rep_block_10, Rep_desblck_10, Rep_qta_10y, Rep_qta_10n, Rep_data, Rep_time, Rep_user. Example records of task TSK_01 for subtasks TP5, TPA10, TPA13, TPA1 and TP6, each recording the "Yes"/"No" exit counts of a monitored block described as "Clarification successful"]

- Rep_id, Rep_simcode, Rep_data, Rep_time and Rep_user have the same meaning as in the previous reports;
- Rep_subtsk is the code of the subtask related to the block recorded;
- Rep_block_1 to Rep_block_10 are the numbers of the selected blocks;
- Rep_desblck_1 to Rep_desblck_10 report the description of each block;
- Rep_qta_1y and Rep_qta_1n to Rep_qta_10y and Rep_qta_10n are, respectively, the occurrences of the exit "Yes" (y) or "No" (n) of the blocks (from 1 to a maximum of 10).
The Diagnostic report and the Detailed report have been used for debugging and during the calibration process of the simulator. While the Diagnostic report gives an immediate indication of errors in data entry (omission of requested inputs or incompatibility among the values of different inputs), the Detailed report, coupled with the Total statistic report, is useful for detecting both the causes of repeated simulation exits and any errors in the calculation process within the computer program.
After the calibration phase, the Diagnostic report and the Detailed report have not been used any more; the Total statistic report and the Block statistic report are enough to calculate the statistics of interest for this work. The computer program allows the analyst to de-select the Detailed report in order to reduce the computational load and, thus, the simulation time.

4.3 Collection and processing of results

The results of the simulation are assembled and processed in different ways to calculate different statistics.
First of all, the probabilities of correct and failed task are estimated. For each scenario, the mean value and the standard deviation of the probability of occurrence of a correct/failed task are calculated, as shown in Table 4-6. The total probability of failed task is the complement to 1 of the probability of correct task, but it is important to remember that several failure end states of the task are distinguished from an irrevocable failure: for example a warning error, a failure of the task due to an incorrect action of the pilot, or the case in which the ATCO does not issue the clearance (error in execution) and the pilot, who is awaiting instructions, recalls the ATCO (not simulated).


TOTAL FAILURE TASK

S1 S2 S3 S4

R1 3040 5150 453736 832511


R2 2995 5799 455899 830410
R3 3082 5245 455834 831720
R4 3086 5684 456385 829724
R5 3083 5512 452302 831001
R6 3125 5349 456275 831598
R7 3038 5373 457717 830295
R8 2966 5816 456172 831379
R9 3076 5078 456358 830879
… … … … …
… … … … …

R19 3101 5563 454660 830974


R20 3134 5425 462703 829115
average 3062 5487 456013 830517
σ            82.54       231.58      1990.71     1571.21
Probability
(movement)   3.06E-03    5.49E-03    4.56E-01    8.31E-01
Table 4-6: Computation of the absolute probability of failed task.
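The per-scenario statistics of Table 4-6 (mean, sample standard deviation, probability per movement) can be reproduced with a few lines; here only the scenario-1 rows actually printed in the table are used, so the figures differ slightly from the full 20-run values:

```python
import math

def run_statistics(counts, cycles_per_run=1_000_000):
    """Mean, sample standard deviation and probability per movement
    of the failure counts collected over the repeated runs of a scenario."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return mean, math.sqrt(var), mean / cycles_per_run

# Scenario 1 failure counts from Table 4-6 (only the rows shown there;
# the actual campaign used 20 runs)
s1 = [3040, 2995, 3082, 3086, 3083, 3125, 3038, 2966, 3076, 3101, 3134]
mean, sd, prob = run_statistics(s1)
print(round(mean), round(sd, 1), f"{prob:.2e}")
```

Dividing the mean count by the one million cycles per run gives the per-movement probability reported in the last row of the table.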

Thus, the different exits from the task are recorded in order to distinguish the irrevocable
and not irrevocable (warning) errors (Figure 4-3).

Figure 4-3: Example of recording of exit task.

The exits of the task have been ranked considering the magnitude of potential negative
consequences as follows:

Type of exit / Exit task:
- ATCO doesn't issue the instruction; the pilot recalls the ATCO and requests the instruction (no simulation): TA7, TA7*, TA7**, TA12, TA12*, TA13
- Aircraft has safely vacated the runway but there is a delayed confirmation by the ATCO: TA31, TA32, TA34*, TA35*
- Aircraft has safely vacated the runway (ATCO doesn't verify visually): TA36
- Aircraft is obstructing the runway - delayed understanding: TPA8, TA29, TA29*, TA12, TA12*
- Irrevocable failure: TA21, TA22, TA23, TA30*, TP3, TP4, TP5, TP6

Two more "failure end states" of the task refer to actions of the pilot:

Pilot is not able to land and calls the ATCO (not simulated)     E20
Pilot landing error (irrevocable failure due to pilot action)    TP11
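The ranking above lends itself to a simple lookup when post-processing simulation logs. The mapping below is a hypothetical illustration covering only a few of the exit identifiers listed above; it is not part of PROCOS.

```python
# Hypothetical lookup mirroring the exit ranking above: task-exit identifiers
# (as used in the task analysis) mapped onto their severity class.
EXIT_SEVERITY = {
    "TA36": "runway vacated - ATCO does not verify visually",
    "TPA8": "runway obstructed - delayed understanding",
    "TP3":  "irrevocable failure",
    "TP11": "irrevocable failure due to pilot action",
}

def classify(exit_id: str) -> str:
    """Return the severity class of an exit, or 'unclassified' if unknown."""
    return EXIT_SEVERITY.get(exit_id, "unclassified")
```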

To complete the analysis of the task, the need for recovery actions by the ATCO has been
studied. Two different types of recovery have been identified:
- Recovery by procedure: placed at the task analysis level, it represents the
possible recovery procedure described as a "deviated" path within the task
analysis;
- Recovery by clarification: placed at the cognitive flowchart level, it
represents the recovery capabilities provided by the communication process.
For each scenario, the average number of occurrences of recovery actions and the average number of
occurrences of correct recoveries, both "by procedure" and "by clarification", are recorded.
Then the absolute probability of recovery action [recovery/movements] and the absolute
probability of recovery failure [failures/recovery] are calculated, as shown in Figure 4-4.


Figure 4-4: Computational model for the evaluation of recovery actions.
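The computation outlined in Figure 4-4 reduces to two ratios. A minimal sketch, with hypothetical tallies chosen only to match the order of magnitude later reported for scenario 1:

```python
# Illustrative tallies from one simulation run (hypothetical numbers).
movements = 1_000_000
recovery_attempts = 604        # times the "deviated" recovery-by-procedure path was entered
correct_recoveries = 603       # times that recovery path succeeded

p_recovery_action = recovery_attempts / movements                # [recovery/movements]
p_recovery_failure = 1 - correct_recoveries / recovery_attempts  # [failures/recovery]
```

Note the different denominators: recovery actions are normalised per movement, while recovery failures are normalised per recovery attempt.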

After the analysis of the task, the occurrences of single error types have been analysed
(Figure 4-5). Errors in the communication process (readback-hearback process) and other
errors during task execution (hardware stimuli) are shown in the same figure in order to
underline their dependency.

Figure 4-5: Computational model for the assessment of the probability of different error types.

Each probability is expressed per movement; then, using an estimate of the number of
movements in one year, the error probability can also be expressed per unit of
operational time.
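This conversion is plain arithmetic; in the sketch below the annual movement count is purely illustrative.

```python
# Converting a per-movement probability into an expected yearly frequency.
p_error_per_movement = 3.06e-3     # scenario 1 failure probability from Table 4-6
movements_per_year = 200_000       # assumed, illustrative yearly traffic figure

expected_errors_per_year = p_error_per_movement * movements_per_year
```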

4.4 Normality test on the results of the simulation campaign


A normality test on the results for the correct task has been performed in order to verify the
stochastic nature of the method. To this end, the Anderson-Darling test has been used. As
shown in Figure 4-6 and summarised in Table 4-7, the probability of correct task, for three
out of four scenarios, clearly follows a normal distribution (p-values ranging from 0.892
to 0.967).
Only scenario 3 has a comparatively low p-value (0.189), below the 0.5 threshold adopted here.
Further analyses should be performed to identify the actual causes. Given that scenario 3 simulates the
widest spectrum of possible task end states, 20 repetitions are probably not sufficient to
demonstrate the normality of the results.
Nevertheless, it can be said that the method provides a stochastic model with statistically
consistent results based on a large number of simulation samples.

Scenario 1 Scenario 2 Scenario 3 Scenario 4


p-value 0.892 0.967 0.189 0.963
Table 4-7: Summary of results for the Anderson-Darling test.
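For reference, the same kind of check can be run with SciPy on the 20 repetition estimates. Note that `scipy.stats.anderson` reports the A² statistic against tabulated critical values rather than a p-value; the sketch below, on synthetic scenario-1-like data, compares the statistic against the 5% critical value.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the 20 repetition estimates of the correct-task
# probability (scenario-1-like mean and spread; values are illustrative).
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.9969, scale=8.25e-5, size=20)

# SciPy returns the A^2 statistic and critical values at fixed significance
# levels (15%, 10%, 5%, 2.5%, 1%), not a p-value.
result = stats.anderson(samples, dist="norm")
crit_5pct = result.critical_values[list(result.significance_level).index(5.0)]
normality_not_rejected = result.statistic < crit_5pct
```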


Figure 4-6: Anderson-Darling test results.

5. ANALYSIS OF THE RESULTS FROM THE CASE STUDY: AN
EVALUATION OF THE EXPERIENCE GAINED
This chapter presents the results of the simulation campaign. The discussion of the
results leads to some first conclusions concerning both the case study and the
simulator performance, including the possible strengths and weaknesses of the current simulation
approach. Furthermore, the pilot study already points out potential future
developments for the application of PROCOS within the CONOPS framework.

5.1 Discussion of the results of the case study


In this section the results of the total task will be shown and discussed.

             Probability of   Probability of   Standard
             correct task     failed task      deviation
             (mean value)     (mean value)
Scenario 1   0.9969           0.0031           8.254E-05
Scenario 2   0.9945           0.0055           2.316E-04
Scenario 3   0.5440           0.4560           2.991E-03
Scenario 4   0.1695           0.8305           1.571E-03

Table 5-1: Summary of results for the probability of correct/failed task.

[Chart omitted. Overall failure probability of the task (errors/movement, mean values):
S1 = 3.06E-03, S2 = 5.49E-03, S3 = 4.56E-01, S4 = 8.31E-01.]

Figure 5-1: Overall failure probability of the task (mean value).


Figure 5-1 shows how the failure probability of the task increases as the conditions of the
scenario worsen. The relationship between the number of Contextual Conditions (CCs)
that affect a scenario and the value of the failure probability is not linear: the
increase from one scenario to another depends on the weight of the Contextual
Conditions that play a role within those scenarios. Specifically, Figure 5-1 indicates that the
human and organisational factors (scenario 3) have a stronger impact on the performance
of the ATCO than the external contextual conditions (scenario 2). Indeed, compared to
the base case (scenario 1), the failure probability of the task increases by two orders of
magnitude when the human and organisational factors negatively affect the ATCO, while
the order of magnitude of the failure probability remains the same even when all the
negative external conditions are considered.
One last consideration can be made regarding the overall failure probability of the task.
When the air traffic controller works in the best conditions (scenario 1), the failure
probability of the task is not zero but of the order of 10^-3. This value might appear high, but
Figure 5-2 shows that the probability of irrevocable failure is only of the order of 10^-6. Another
failure end state observed in scenario 1 is the error of omission of the ATCO in verifying that
the pilot has vacated the runway, but this situation is not safety-critical.

[Chart omitted. Failure probability (failures/movement) per scenario for each failure end
state: "ATCO doesn't issue the instruction"; "Runway vacated - delay confirmation by
ATCO"; "Runway vacated - ATCO doesn't verify visually"; "Aircraft is obstructing the
runway - delay understanding"; "Irrevocable failure". Data labels visible in the original
chart: 1.75E-06, 8.51E-05, 4.04E-01, 8.28E-01.]

Figure 5-2: Probability of failure end states of the task.

Figure 5-2 shows that the probability of occurrence of an irrevocable failure grows as the
work conditions worsen. Consistently with Figure 5-1, Figure 5-2 outlines that the
human and organisational factors have a stronger impact than the external contextual
conditions.
It can be observed that the failure end states displayed in the violet and orange colours
are reversed between scenario 2 and scenario 3. This behaviour is due to the different
ways the two scenarios influence the operator. Excluding irrevocable failures, when the
external conditions play a role in the task execution, the tower runway controller is prone
to commit more severe errors.
When the human and organisational factors affect the working conditions of the ATCO, a
wider spectrum of possible failure end states of the task has been registered. Indeed, the
simulations of scenarios 3 and 4 have recorded all kinds of previously defined task failures.

In order to complete the analysis of the overall task, the probability of recovery actions
following the occurrence of a non-irrevocable failure has been studied. Figure 5-3 and
Figure 5-4 show the results.

Probability of recovery actions [recovery/movements], mean values:

                            S1         S2         S3         S4
Recovery by procedure       6.04E-04   2.40E-03   5.05E-01   9.97E-01
Recovery by clarification   9.35E-01   1.00E+00   7.11E-01   1.15E-02

Figure 5-3: Probability of recovery actions (mean values).


Probability of recovery failures [failures/recovery], mean values:

                            S1         S2         S3         S4
Recovery by procedure       7.72E-04   3.09E-02   6.88E-01   9.97E-01
Recovery by clarification   2.18E-03   9.90E-02   5.90E-01   9.98E-01

Figure 5-4: Probability of recovery failures (mean values).

Figure 5-3 shows that, even in the best working conditions, the ATCO makes use of his recovery
skills to solve any misunderstandings arising during communication with the pilot. This means
that the iterated use of his ability to recover by clarification, as feedback within a
communication process, is normal. Furthermore, the probability of recovery failure is very
low in scenarios 1 and 2 (Figure 5-4); that is, if the human and organisational factors do
not affect the ATCO's capability of performing recovery actions, the recovery will be
almost always correct.
Conversely, when the human and organisational factors affect the performance of the
operator, the air traffic controller exploits the capability of recovery by procedure, but the
probability of performing a correct recovery is very low because it is affected by the operator's
inner negative influencing factors.

5.2 Error type analysis
In this section, the analysis of the different error types is presented.

ATCO error types, error probability [errors/movement]:

                            S1         S2         S3         S4
ET Perception               0.00E+00   4.56E-05   8.71E-03   2.57E-03
ET Interpretation           1.04E-03   1.95E-03   2.68E-03   1.86E-05
ET Response/Execution       8.10E-04   2.19E-03   4.01E-02   1.22E-04
ET Communication            1.30E-06   8.43E-05   6.38E-01   9.94E-01

Figure 5-5: ATCO error types.

The probability figures of the different error types depend on the calibration of each decision
block within the cognitive flowchart, as well as on how often the specific error type can occur
within the task analysis.

Although communication is, in general, very important in any task performed by air
traffic controllers, Figure 5-5 shows how much this holds true for this Use Case. For the
scenario 2 the probability of error in communication is low because the probability of
correct recovery by clarification is very high.

The probability of an interpretation error is almost invariant across the scenarios
because the task of this Use Case does not comprise any relevant diagnostic or planning
process.

In general, the error types observed in the worst scenario focus the attention on
communication problems; indeed, the errors in communication prevail, the other error
types having probabilities of occurrence two orders of magnitude lower.
Looking at scenarios 2 and 3, it is possible to observe that a wider spectrum of error
types might occur, suggesting that intermediate situations are more difficult to manage, and thus
to improve, than the boundary situations. Indeed, working under the Contextual
Conditions of scenario 4, a focused effort to improve the communication process
would certainly yield large safety gains, i.e. a high rate of reduction of the
task failure probability. The same does not hold for scenarios 2 and 3.

When the interaction between the ATCO and the context lacks complexity (scenario 1), the
error types committed by the air traffic controller are slips in the execution of the task (e.g.
slips of the tongue) or high-level cognitive mistakes (interpretation errors).

Finally, it can be said that the model provides an estimation of the probability of error
types that takes the dependency among them into account. Indeed, comparing the probabilities of error
in perception and in execution for scenarios 3 and 4, it is possible to observe that,
given an increase in communication errors, the rate of decrease of errors in perception is
smaller than that of errors in execution, because the ATCO reaches the execution phase of the
task less often.
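This dependency can be made explicit as conditional probabilities; the numbers below are hypothetical and only illustrate the mechanism.

```python
# An execution error can only be committed if the ATCO actually reaches the
# execution phase of the task, so a rise in upstream communication errors
# depresses the per-movement execution-error rate. Numbers are hypothetical.
p_reach_execution = 0.36               # probability of reaching the execution phase
p_exec_error_given_reached = 3.4e-4    # conditional execution-error probability

p_exec_error_per_movement = p_reach_execution * p_exec_error_given_reached
```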

5.3 Conclusions and potential developments of the approach


This section aims at underlining the strong findings from the pilot application as well as the
weaknesses of the proposed simulation approach. Finally, the potential developments of
the approach will be discussed.

5.3.1 Systematic integration of the PROCOS approach as applied to CONOPS


The first important outcome of the pilot application is a validated method to systematically
apply the PROCOS approach to CONOPS. The main steps in which the method can be
broken down are as follows:

1. Use Case selection and Task Analysis:


a. The Use Case is analyzed referring to the narrative of CONOPS and by
means of interviews
b. Cognitive Task Analysis of the selected Use Case
c. Flow Chart representation of the task
d. Revision of the Operator Model of PROCOS

2. Scenario Setting
a. Setting of critical Contextual Conditions (HERA) and assessment of CCs
importance (experts’ judgements)
b. Data gathering for technical and “external” events influencing the task
c. Defining the set of operational scenarios to be simulated

3. Calibration Process
a. HERA DB analysis and error type setting for single steps of the task
b. Calibration of the Operator Model of PROCOS with HERA dataset. Setting
the transfer function FLI(CCs) → HEP(FLI)
c. PROCOS model testing and validation

4. Simulation, Data Analysis and Reporting


a. Design of experiments (# simulation campaign, # runs, …)
b. Perform the simulation campaigns
c. Data analysis
d. Data reporting and discussion of results
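Step 4 can be sketched as a simple campaign driver. The functions below are illustrative stand-ins, not part of PROCOS: the stub simulator merely counts failed tasks at a given rate, whereas the real simulator walks the task analysis and cognitive flowchart.

```python
import random

def simulate_task(failure_rate, n_movements):
    """Stub standing in for one PROCOS run: counts failed tasks out of
    n_movements (the real simulator walks the cognitive flowchart instead)."""
    return sum(random.random() < failure_rate for _ in range(n_movements))

def run_campaigns(scenarios, n_repetitions=20, n_movements=10_000):
    """Steps 4a-4b: repeat each scenario's campaign and collect raw counts."""
    return {name: [simulate_task(rate, n_movements) for _ in range(n_repetitions)]
            for name, rate in scenarios.items()}

random.seed(1)
results = run_campaigns({"S1": 0.003, "S2": 0.005},
                        n_repetitions=5, n_movements=2_000)
```

The raw counts collected this way feed the statistics of step 4c (means, standard deviations, probabilities per movement).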

5.3.2 Strong findings from the pilot application


The main evidence of the value of the proposed approach can be summarised as
follows:
- A new quantitative model able to exploit historical data has been developed;
- The use of a specific flowchart for the communication process ensures a better
model of the liveware-liveware interactions;
- The new method used for developing the task analysis (based on flow charts) is
able to go beyond the more static fault tree/event tree modelling;
- The completion of a pilot application has led to a fully "engineered"
method for dynamic risk modelling of human and organisational factors, which can
be used for the analysis of future ATM concepts;
- PROCOS provides a stochastic model with statistically consistent results based on
a large number of simulation samples (no other experiences documented in
literature);
- PROCOS takes into account the cognitive-related dependency among estimated
probabilities and any dependency among different error types or error modes;
- The method is able to provide probability values also for the correct task and, more
importantly, for the corrective action in the recovery phase;
- A complete integration has been reached and good synergies have been
demonstrated among ConOps model of ATM, HERA retrospective and PROCOS
approach to HRA, in particular referring to:
o the way of modelling the task and the context,
o the complete exploitation of HERA for the calibration of the simulation
model;


- The approach can be adapted to many different fields of study with little
effort (e.g. process industry, nuclear, railway).

5.3.3 Weaknesses of the current simulation approach


Some weaknesses emerged during the development of this project:
- The results of the simulation are strongly affected by the available accident
database or reports:
o the mathematical model for the relation between CCs and error types has
been derived from the available HERA data set, which is still incomplete
and relatively small;
o the description of the scenario to be simulated is influenced by the level of
detail of CC recording within the HERA database. So far, the HERA database
classifies the CCs observed in a given accident only by their main
category, whereas it would be more useful to record the precise
description of the CCs identified for each event, not only the main
category they belong to;
o the HERA database only records whether or not each CC was present in a
considered accident. Therefore, for the application of this study the value of
each CC could only be 0 or 1. The description of the scenario
would be more realistic if the CC influence could assume a real value
ranging from 0 to 1;
- The next case studies should tailor the task analysis in such a way that the effect
of possible new equipment is reflected both in the changes to the task
performed by the ATCO and in the description of the new scenario through the use of the
CCs (PROCOS is already able to consider the equipment and the interaction
between the operator actions and the equipment status).

5.3.4 Potential developments of the approach


Provided that the historical data recording system is improved along these lines, some potential
developments of the approach can be suggested:
- For the trial application each CC has been considered either present (li = 1) or not present
(li = 0), but in the future it will be possible to shift the model from discrete-
deterministic to continuous-probabilistic (li in [0, 1]);
- Once a larger HERA dataset is available, it will be possible to implement a
better calibration method for the ri = f(FLI) function. Indeed, with a wider
dataset covering every possible combination of CCs, many (FLI, ri) pairs can be
extracted, and the relationship between ri and FLI can be modelled through the
curve that best fits the empirical observation points.

[Diagram omitted. For each error type ETk, the empirical probabilities
P(ETk | 0,...,cci,...,0), ..., P(ETk | 1,...,cci,...,0), ..., P(ETk | 1,...,cci,...,1)
observed for the different CC combinations are paired with the corresponding factor
values FLI1, ..., FLIi, ..., FLIn, yielding the (FLI, ri) points through which the
calibration curve ri = f(FLI) is fitted.]

Figure 5-6: Calibration function derived from a larger set of empirical observations.
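Such a fitting step could be implemented, for instance, by least squares on an assumed functional form. In the sketch below the (FLI, ri) pairs are hypothetical and the exponential form is only one possible choice, not the form prescribed by PROCOS.

```python
import numpy as np

# Hypothetical (FLI, r_i) observation pairs; with a richer HERA dataset each
# CC combination would yield one such empirical point.
fli = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
ri = np.array([0.001, 0.004, 0.012, 0.050, 0.180, 0.450])

# Fit r_i = a * exp(b * FLI) by ordinary least squares in log space;
# polyfit returns the slope (b) first, then the intercept (log a).
b, log_a = np.polyfit(fli, np.log(ri), 1)

def predict_ri(x):
    """Calibrated transfer function r_i = f(FLI) under the assumed form."""
    return np.exp(log_a + b * x)
```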

6. REFERENCES

[1]. Acosta CG , Siu N., Dynamic event tree analysis methods (DETAM) for accident
sequence analysis. MITNE-295, Cambridge, MA: Massachusetts Institute of
Technology, 1991.
[2]. Amalberti R. and Wioland L., ( 1997) Human error in aviation, In: Aviation safety,
pp. 91-108, H. Soekkha (Ed.).
[3]. Amendola A, Accident Sequence dynamic simulation versus event trees. Reliability
Engineering and System Safety 1988; 22:3-25
[4]. Blom H. A.P., Daams J. and Nijhuis H. B., Human cognition modelling in ATM
safety assessment, 3rd USA/Europe Air Traffic Management R&D Seminar Napoli,
13-16 June 2000.
[5]. Buck, S., Biemans, M.C.M., Hilburn, B.G., (1996) van Woerkom, P.Th.L.M.,
Synthesis of functions, NLR Report TR 97054 L.
[6]. Cacciabue P.C. and Hollnagel E.,( 1995) Simulation of Cognition: applications. In
J.M. Hoc, Cacciabue P.C. and E.Hollnagel (Eds), Expertise and Technology:
cognition and Human-Computer Interaction. Lawrence Erlbaum Associates,
Hillsdale, New Jersey, pp. 55-73.
[7]. Cacciabue P.C.( 1998) Modelling and simulation of Human Behaviour in System
Control, Springer & Verlag, London.
[8]. Cacciabue, P. C., Decortis, F., Drozdowicz, B., Masson, M., and Nordvik, J. P.
(1992). "COSIMO: A Cognitive Simulation Model of Human Decision Making and
Behaviour in Accident Management of Complex Plants." IEEE Transaction on
Systems, Man and Cybernetics, IEEE-SMC, 22(5), 1058-1074.
[9]. Chang, Y.H. and A. Mosleh. Dynamic PRA Using ADS with RELAP5 Code as Its
Thermal Hydraulic Module. in Probabilistic Safety Assessment and Management
(PSAM) 4. 1998. New York: Sept. 13-18, 1998: Springer.
[10]. Chang, Y.H. and Mosleh A. , Cognitive Modeling and Dynamic Probabilistic
Simulation of Operating Crew Response to Complex System Accidents (ADS-
IDACrew). 1999, CTRS-B6-06, College Park, Maryland: Center for Technology
Risk Studies, University of Maryland.

[11]. Corker K.M. (1999) Human Performance Simulation In The Analysis Of Advanced
Air Traffic Management Proceedings of the 1999 Winter Simulation Conference P.
A. Farrington, H. B. Nembhard, D. T. Sturrock, and G. W. Evans, eds.
[12]. Corker, K. M., and Smith, B. (1993). "An architecture and modelling for cognitive
engineering simulation analysis: application to advanced aviation analysis." 9th
AAIA Conference on Computing in Aerospace, San Diego, CA, US.
[13]. Edwards E. Human Factors in Aviation Academic Press, San Diego, 1988 CA pp
3-25.
[14]. EUROCONTROL ATM Operational concept volume 2 Concept of Operation Year
2011 Edition 1 Brussels 03.05.2005. Proposed Released Issue.
[15]. Everdij M.H.C. , Blom H.A.P. and Klompstra M.B., (1997) Dynamically Coloured
Petri Nets for Air Traffic Management Safety purposes, Proc. 8th IFAC Symposium
on Transportation Systems, pp. 184-189.
[16]. Fujita Y., Hollnagel E., “Failures without errors: quantification of context in HRA”.
Reliability engineering and System Safety, Vol.83, pg.141 – 151, 2004.
[17]. George, P.H., Johnson, A.E., Hopkin, V.D.,(1973) Radar monitoring of parallel
tracks, automatic warning to controllers of track deviations in a parallel track
system, EEC Report No 67, Bretigny.
[18]. Hawkins F.H. Human Factors in Flight. Aldershot, UK: Gower Technical Press, 1987.
[19]. Hollnagel E. ,(1993). “Human Reliability analysis, context and control”. Academic
Press, London.
[20]. Hooke N. Maritime Casualties 1963-1996 London LLP 1997.
[21]. IAEA “Report on the preliminary fact finding mission following the accident at the
nuclear fuel processing facility in Tokaimura, Japan” Austria November 1999.
[22]. IAEA/ WHO/EC “ Ten Years after Chernobyl: what do we really know?” based on
the proceeding of the IAEA/WHO/EC International Conference Vienna April 1996.
International Atomic Energy Agency Division of Public Information 1997.
[23]. Isaac A. Shorrock S., Kennedy R., Kirwan B, Andresen H. and Bove T. “The
Human Error in ATM Technique (HERA-JANUS) HRS/HSP-002-REP-03
EUROCONTROL Edition 1.0 21 Feb 03.
[24]. Isaac A. Shorrock S., Kirwan B. Human error in European air traffic management:
the HERA project. Reliability Engineering and System Safety Volume: 75, Issue: 2,
February, 2002, pp. 257-272.

[25]. Isaac, A., Straeter, O. & Van Damme, D. A method for predicting Human Error in
ATM (HERA Predict). HRS/HSP-002-REP-07. EUROCONTROL. Brussels 2004.
[26]. Kemeny J.G. (1979) “Report of the President Commission on the accident at Three
Mile Island, Washington DC: US Government Printing Office
[27]. Kontogiannis T., "A framework for the analysis of cognitive reliability in complex
systems: a recovery centred approach", Reliability Engineering and System Safety,
Vol. 58, 1997.
[28]. Kurzman D. A killing wind: Inside Union Carbide and the Bhopal Catastrophe.
McGraw Hill Book Company 1987.
[29]. Mauri C. Owen D., Baranzini D., Model of Human Machine Integrated System,
AITRAM Deliverable D04.1 WP4, 5th Framework Programme, October 2001.
[30]. Mosleh, A. and Y.H. Chang, Model-Based Human Reliability Analysis: Prospects
and Requirements. Reliability Engineering & System Safety, 2004. 83(2): p. 241-
253.
[31]. Mould R.F. Chernobyl Record: the definitive History of the Chernobyl Catastrophe
Bristol UK Philadelphia PA Institute of Physics Publishing 2000
[32]. Rasch G. Probabilistic Model for some Intelligence and Attainment Tests.
University of Chicago Press. Chicago 1980.
[33]. Rasmussen J & Vincente K.J. “Cognitive control of Human Activities: Implication
for Ecological Interface Design” RISO-M-2660 Roskilde Denmark; Riso National
Laboratory 1987.
[34]. Reason J. , Human error, Cambridge Univ. Press, 1990.
[35]. Robins J. “The World’s Greatest Disasters” London Chancellor Press 1990
[36]. Shu, Y., Furuta, K., and Kondo, S. (2002). "Team performance modelling for HRA
in Dynamic situations." Reliability Engineering and System Safety, 78, 111-121.
[37]. Smidts, C., S.H. Shen, and A. Mosleh, The IDA Cognitive Model for the Analysis of
Nuclear Power Plant Operator Response Under Accident Condition. Part I:
Problem Solving and Decision Making Model. Reliability Engineering and System
Safety, 1997(55): p. 51-71.
[38]. Smith B. R. Tyler. S. W. (1997). “The Design and Application of MIDAS: A
Constructive Simulation for Human-System Analysis”. Presented at the 2nd
Simulation Technology & Training (SIMTECT) Conference, 17-20 March 1997,
Canberra, Australia.

[39]. State of Alaska “The Wreck of the Exxon Valdez Final Report” Alaska Oil Spill
Commission Published February 1990.
[40]. Straeter O. Evaluation of Human Reliability on the basis of Operational Experience.
GRS-170. Cologne (Germany): GRS, 2000.
[41]. Sträter, O. Cognition and safety - An Integrated Approach to Systems Design and
Performance Assessment. Ashgate. Aldershot (2005).
[42]. Subsecretaria de Aviacion Civil, Spain. KLM B-747 PH-BUF and Pan Am B-747
N736 collision at Tenerife Airport, Spain, on 27 March 1977.
[43]. Swain, A.D., and Guttman, H.E. (1983). “Handbook on Human Reliability Analysis
with Emphasis on Nuclear Power Plant Application”. NUREG/CR-1278, SAND 08-
0200 R X, AN.
[44]. Takano K. , K. Sasou and S.Yoshimura (1995) Simulation system for behaviour of
an operating group (SYBORG). XIV European Annual Conference on Human
Decision Making and Manual Control. Delft, The Netherlands, June 14-16.
[45]. Trucco P., Leva M.C. "A Probabilistic Cognitive Simulator for HRA studies:
PROCOS". Politecnico di Milano - Department of Management, Economics and
Industrial Engineering. 14 January 2005.
[46]. Trucco P., Leva M.C., Corti G., Gallarati G. “A Probabilistic Cognitive Simulator For
Hra Studies” CISAP 1 Conference Proceeding Palermo 2004.
[47]. Wickens C. “Engineering Psychology and Human Performance” Second Edition
New York; Harper-Collins 1992.
[48]. Wickens.C.R “Engineering, psychology and human performance”. Harper Collins
Publishers. 2nd edition New York 1992.
[49]. Woods D. D., Roth, E. M., and Pople, H. E. (1987). "Cognitive Environment
Simulation: an Artificial Intelligence System for Human Performance Assessment."
Technical Report NUREG/CR-4862, US Nuclear Regulatory Commission, Washington DC,
US.

ANNEX I : CONOPS USE CASE “HANDLE AIRCRAFT LANDING”

Scope
System, black-box. System means an Overall ATM/CNS Target Architecture compliant
system.

Level
User Goal

Summary
This Use Case describes how a Tower Runway Controller uses the System to control the
landing of an aircraft. It starts when the intermediary approach phase is completed and the
aircraft is ready for final approach and ends when the Tower Runway Controller is ensured
that the aircraft has vacated the runway.

Actors
Tower Runway Controller (Primary) – wants to make sure that the aircraft lands and safely
vacates the runway.
Pilot (Support) – has to land the aircraft safely.
Executive Controller (Offstage) – has to take control of the aircraft back from the Tower
Runway Controller in case of a missed approach and wants to be informed of any runway
closures.
Multi-Sector Planner/Planning Controller (Offstage) – has to assist the Executive Controller
when handling the missed approach.
Tower Ground Controller (Offstage) – has to assume responsibility for the control of the
aircraft right after vacating the runway.
Tower Supervisor (Offstage) – wants to make sure that runways are used according to the
airport’s traffic management policy.
ACC Supervisor (Offstage) - wants to be informed of any runway closures longer than a
specified period.
Flow Manager (Offstage) – wants to be informed of any runway closures longer than a
specified period which will affect traffic flows.

Preconditions
The flight is cleared for final approach by the Executive Controller in charge of establishing
the aircraft on final approach. The transfer of responsibility between the Executive Controller
and the Tower Runway Controller is completed. In particular the Communication contact
(voice) between the Tower Runway Controller and the Pilot is established. The System has
informed the pilot via data link on the runway traffic situation and weather at airport (e.g.
wind).
The System knows the planned runway exit for the aircraft.

Post conditions
Success end state
The System records that the aircraft has vacated and is no longer obstructing the
runway and communications have been transferred to the Tower Ground Controller
for the arrival taxi.
Failed end state
1. The System records that the landing has been aborted. The System knows the
current flight status e.g. either resuming a landing sequence or returned to the holding
area.
2. The System records that the aircraft has not vacated and is obstructing the runway.
Notes
Definitions
Definitions of the following acronyms and expressions used in this document “Arrival
taxi plan, initial approach, intermediary approach, final approach1, landing clearance”
are available in the OATA Glossary document.

Trigger
The Use Case starts when the System detects that the aircraft is on final approach.

Main Flow
1. The System notifies the Tower Runway Controller of the planned runway exit and
proposes to the Pilot the runway exit and associated taxi-in plan2 .
2. The Pilot confirms the proposed runway exit and associated taxi-in plan.
3. The Tower Runway Controller, using the System, verifies that the runway is available
for the landing of the aircraft.
4. The Tower Runway Controller issues the landing clearance using R/T.
5. The Pilot lands the aircraft. The System detects the landing3 and records the landing
time4.
6. The Tower Runway Controller, assisted by the System, verifies that the aircraft has
vacated the runway.
7. The System detects that the aircraft has vacated the runway via the planned exit.
Communications are transferred to the Tower Ground Controller.
8. The Use Case ends when the System records that the aircraft has safely vacated the
runway.
Alternative Flows
[2] – The Pilot rejects the planned Runway Exit.

1. The definitions of the phases of approach are covered by ICAO doc. 4444.
2. See use case taxi-in of an aircraft.
3. Landing means that the aircraft remains on the surface.
4. The system informs the Tower Runway Controller, Aircraft Operator, Airport Operator and Flow Manager of the event.

9. The Pilot requests a runway exit other than planned from the Tower Runway Controller
by R/T.
10. The Tower Runway Controller agrees to the request and updates the runway exit using
the System.
11. The System confirms to the Pilot the runway exit and associated taxi-in plan5 using D/L.
12. The flow continues at step 2.
[7] – The Pilot vacates by a Runway Exit other than Planned

13. The System detects and notifies the Tower Ground and Tower Runway Controllers that
the aircraft does not use the planned runway exit and informs both controllers of the
actual runway exit. The System transfers communications to the Tower Ground
Controller using D/L.
14. The flow continues at step 8.
Failure Flows
[4] – The Runway is not Available (e.g. due to an Aborted Take-off).

15. The Tower Runway Controller, assisted by the System, is unable to issue a landing
clearance. The Tower Runway Controller instructs the Pilot to execute a missed
approach, notifies the System of the missed approach and instructs the Pilot by R/T to
contact the Executive Controller.
16. The Use Case ends when the System records that the aircraft has not landed.
[5] – The Pilot is unable to land.

17. The Pilot informs the Tower Runway Controller that he is unable to land and requests a
missed approach clearance.
18. The Tower Runway Controller instructs the Pilot to execute a missed approach, notifies
the System of the missed approach and instructs the Pilot by R/T to contact the
Executive Controller.
19. The Use Case ends when the System records that the aircraft has not landed.
[6] – The Pilot does not Manage to Vacate the Runway

20. The Tower Runway Controller, assisted by the System, detects that the aircraft is
obstructing the runway.
21. The Tower Runway Controller confirms with the Pilot that he has not vacated the
runway and notifies the System that the runway is obstructed for a defined period.

The Use Case ends when the System disseminates the runway obstruction information to the
upstream ACC(s) Supervisor(s) and Tower Supervisor, the concerned Executive

5
See use case taxi-in of an aircraft

ANNEX II A: TASK ANALYSIS FOR USE CASE “HANDLING AIRCRAFT LANDING” IN FLOW CHART FORMAT

ANNEX II B: TASK ANALYSIS FOR USE CASE “HANDLING AIRCRAFT LANDING” IN TABLE FORMAT

Columns: ID | Description | Correct Execution (Correct, EM1, EM2) | Error Type
Perception (EM1, EM2) | Error Type Interpretation (EM1, EM2) | Error Type Decision
(EM1, EM2) | Error Type Communication (EM1-EM5) | Violation | Error in Recovery
(ER1, ER2)

The flight has been cleared for final approach by Executive Controller. The transfer of responsibility between Executive Controller and Tower Runway Controller is completed between the Executive Controller and the
1 Tower Runway Controller is completed. Communication between the Tower Runway Controller and the Pilot is established
object on
e1 runway yes: e2 no: e9
visibility is
e9 good yes: ta16 no: ta17
ATCO verifies
visually runway Warning
availability and Not clearanc
issues landing done: Not done: e plan:
ta16 clearance e10 ta17 ta17 ta24
ATCO
verifies,using
the radar, Not
runway done: Warning
availability and (Warnin Other clearanc
issues landing g / Error) than: e plan:
ta17 clearance e10 ta18 ta24 ta24
ATCO issues Slip of the
the landing tongue:
ta18 clearance e10 e10
the pilot rejects
the planned
exit and
requests a
e10 different one yes: ta19 no: tp5
clarifica
tion incor
NON rect
ok: read
readbac Irrevoc back
readback of k critical clarifica able warn
landing warning: tion ok: Failure ing:
tp5 clearance ta23 ta23 e12 Exit ta23

132
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
hearba
ck
commu hearba
hearbac nication ck
k error irrevoca
warning (warnin ble
error: g): failure:
ta23 hearback e12 e12 e12 Exit
ATCO
ATCO Mishear
understands d comm:
ta19 request ta20 tpa6
Incorrec
t
clarificat
ion
clarification (warning
tpa6 process ta20 ): ta20
Wrong
clearanc
Slip of the e plan:
tongue Irrevocab
ATCO process (warning): le failure
ta20 pilot request e11 e11 Exit
ATCO agrees
e11 to pilot request yes: tp3 no: tp4
clarifica
tion incor
NON rect
ok: read
readbac Irrevoc back
k critical clarifica able warn
warning: tion ok: Failure ing:
tp3 readback ta21 ta21 e12 Exit ta21
hearba
ck
commu hearba
hearbac nication ck
k error irrevoca
warning (warnin ble
error: g): failure:
ta21 hearback e12 e12 e12 Exit
clarifica
tion incor
NON rect
ok: read
readbac Irrevoc back
k critical clarifica able warn
warning: tion ok: Failure ing:
tp4 readback ta22 ta22 e12 Exit ta22

133
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
hearba
ck
commu hearba
hearbac nication ck
k error irrevoca
warning (warnin ble
error: g): failure:
ta22 hearback e12 e12 e12 Exit
Plane A
technically
e2 able to vacate yes: tp1 no: tp2
Pilot A aware
of
failure(unable
to vacate
runway)
comunicates
the problem to Not done:
tp2 ATCO ta9 tpa5
clarificat
ion non
OK: Not
Done
tpa5 audio check A e8 e8*
visibility is
e8 good yes: ta14 no: ta15
ATCO verifies Not done: Opposit Other
visually runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Error/Warn able Irrevocab
missed (No Failure ing failure le failure
ta14 approach tp6 simulation) Exit ta15 Exit Exit
ATCO verify
using the Not done: Opposit Other
radar, runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Irrevocabl able Irrevocab
missed (No Failure e Failure failure le failure
ta15 approach tp6 simulation) Exit Exit Exit Exit
visibility is yes:
e8* good ta14* no: ta15*
ATCO verifies Not done: Opposit Other
visually runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Error/Warn able Irrevocab
ta14 missed (No Failure ing failure le failure
* approach tp6 simulation) Exit ta15* Exit Exit

134
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
ATCO verifies
using the Not done: Opposit Other
radar, runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Irrevocabl able Irrevocab
ta15 missed (No Failure e Failure failure le failure
* approach tp6 simulation) Exit Exit Exit Exit
Not
ATCO heard
understands comm:
ta9 communication ta24 tpa4
clarificat
ion non
OK: Not
Done
tpa4 audio check A e7 e7*
visibility is
e7 good yes: ta12 no: ta13
ATCO verifies Not done: Opposit Other
visually runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO ( able Error/Warn able Irrevocab
missed No Failure ing failure le failure
ta12 approach tp6 simulation) Exit ta13 Exit Exit
ATCO verifies
using the Not done: Opposit Other
radar, runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Irrevocabl able Irrevocab
missed (No Failure e Failure failure le failure
ta13 approach tp6 simulation) Exit Exit Exit Exit
visibility is yes:
e7* good ta12* no: ta13*
ATCO verifies Not done: Opposit Other
visually runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Error/Warn able Irrevocab
ta12 missed (No Failure ing failure le failure
* approach tp6 simulation) Exit ta13* Exit Exit
ATCO verifies
using the Not done: Opposit Other
radar, runway Pilot e: than: Other
unavailability recalls Irrevoc Not done: Irrevoc than:
and issues ATCO able Irrevocabl able Irrevocab
ta13 missed (No Failure e Failure failure le failure
* approach tp6 simulation) Exit Exit Exit Exit
ATCO issues
the missed
approach Later than:
ta24 clearance tp6 e12

135
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
yes:
Irrevoca
Pilot lands ble
e12 aircraft anyway Failure no: tp6
clarifica
clarifica tion incor
tion ok: NON rect
END ok: read
Readback of readbac TASK: Irrevoc back
missed k critical Aircraft able warn
approach warning: B is not Failure ing:
tp6 clearance ta25 ta25 landed Exit ta25
hearba
ck
commu
hearbac nication
k error
warning (warnin
error: g): hearba
END END END ck
TASK: TASK : TASK : irrevoca
Aircraft B Aircraft Aircraft ble
is not B is not B is not failure:
ta25 hearback landed landed landed Exit
Pilot A aware
of his position, Slip of
delivers the
vacation Not done: tongue: Other
tp1 confirmation ta1 tpa3 tpa2 Than: tpa2
Clarific
Clarifica ation
tion Not Other
done: than:
tpa2 audio check B e5 e5* e5**
Clarific
Clarifica ation
tion Not Other
done: than:
tpa3 audio check A e5 e5* e5**
visibility is
e5 good yes: ta7 no: ta8
ATCO verifies Not done: Other
visually runway Pilot than: Other
unavailability recalls Slip of Not done: Irrevoc than:
and issues ATCO the Error/Warn able Irrevocab
missed (No tongue: ing failure le failure
ta7 approach tp6 simulation) tp6 ta8 Exit Exit

136
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
ATCO verifies
using the
radar, runway Other
unavailability Slip of Not done: than:
and issues the Irrevocabl Irrevocab
missed tongue: e Failure le failure
ta8 approach tp6 tp6 Exit Exit
visibility is
e5* good yes: ta7* no: ta8*
ATCO verifies Not done: Other
visually runway Pilot than: Other
unavailability recalls Slip of Not done: Irrevoc than:
and issues ATCO the Error/Warn able Irrevocab
missed (No tongue: ing failure le failure
ta7* approach tp6 simulation) tp6 ta8* Exit Exit
ATCO verifies
using the
radar, runway Other
unavailability Slip of Not done: than:
and issues the Irrevocabl Irrevocab
missed tongue: e Failure le failure
ta8* approach tp6 tp6 Exit Exit
visibility is yes:
e5** good ta7** no: ta8**
ATCO verifies Not done: Other
visually runway Pilot than: Other
unavailability recalls Slip of Not done: Irrevoc than:
and issues ATCO the Error/Warn able Irrevocab
missed (No tongue: ing failure le failure
ta7** approach tp6 simulation) tp6 ta8** Exit Exit
ATCO verifies
using the
radar, runway Other
unavailability Slip of Not done: than:
and issues the Irrevocabl Irrevocab
missed tongue: e Failure le failure
ta8** approach tp6 tp6 Exit Exit
Not
ATCO heard
understands comm:
ta1 communication e3 tpa1
Clarifica
tion Not
done:
tpa1 Audio check A e4 e4*
visibility is
e4 good yes: ta5 no: ta6

137
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
ATCO verifies
visually runway Slip of
availability and the Not done:
issues landing Opposite: tongue: Warning
ta5 clearance e10 ta24 e10 ta6
ATCO verify
using the
radar, runway Slip of
availability and the
issues landing Opposite: tongue:
ta6 clearance e10 ta24 e10
visibility is
e4* good yes: ta5* no: ta6*
ATCO verifies
visually runway Slip of
availability and the Not done:
issues landing Opposite: tongue: Warning
ta5* clearance e10 ta24 e10 ta6*
ATCO verify
using the
radar, runway Slip of
availability and the
issues landing Opposite: tongue:
ta6* clearance e10 ta24 e10
visibility is
e3 good yes: ta2 no: ta3
ATCO verifies
visually runway
availability and Not done:
issues landing Warning
ta2 clearance e10 ta3
ATCO verify
using the
radar, runway
availability and Slip of the Not done:
issues landing tongue: Warning /
ta3 clearance e10 e10 Error: ta4
ATCO issues
Landing
clearance Slip of the
without tongue:
ta4 verification e10 e10
No:
Pilot
informs
ATCO and
requests
missed
Pilot B is able Yes: approach
e12 to land? tp11 (No

138
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
simulation)

Other
Than:
Irrevocabl
pilot B lands e failure
tp11 the aircraft e13 Exit
plane B
technically
e13 able to vacate yes: tp7 no: tp8
Pilot B aware
of
failure(unable
to vacate
runway)
comunicates
the problem to Not done:
tp7 ATCO ta26 tpa7
END
TASK:
Aircraft B
is Clarifica
obstructi tion Not
ng the done:
tpa7 audio check B runway e14
visibility is
e14 good yes: ta24 no: ta25
END Wrong
TASK: Other clearanc
ATCO detects Aircraft B than: e
visually that is Not done: Not done Irrevoc planning:
the aircraft B is obstructi Irrevocabl (warning / ablr Irrevocab
obstructing the ng the e Failure Error): ta failure le failure
ta24 runway runway EXIT 25 EXIT EXIT
END Wrong
ATCO detects TASK: Other clearanc
using the Aircraft B than: e
radar, that the is Not done: Not done : Irrevoc planning:
aircraft B is obstructi Irrevocabl Irrevocabl ablr Irrevocab
obstructing the ng the e failure e failure failure le failure
ta25 runway runway EXIT EXIT EXIT EXIT
END
TASK: ATCO
ATCO Aircraft B mishear
understands is d comm:
ta26 communication obstructi tpa8

139
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
ng the
runway

Delay
underst
anding
comm:
END END
TASK: TASK:
Aircraft B Aircraft
is B is
obstructi obstructi
clarification ng the ng the
tpa8 process runway runway
pilot B, aware
of his position,
vacates Other Other
tp8 runway tp10 Than: tp9 than:tpa9
END Clarific
TASK Clarifica ation
(Delay tion Not Other
confirmat done: than:
tpa9 audio check B ion) e15 e15*
visibility is
e15 good yes: ta27 no: ta28
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
exit other than than Error): failure
ta27 planned planned) ta28 EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
exit other than than e failure
ta28 planned planned) EXIT
visibility is yes:
e15* good ta27* no: ta28*

140
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
ta27 exit other than than Error): failure
* planned planned) ta28* EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
ta28 exit other than than e failure
* planned planned) EXIT
Error in
pilot recovery Error in localizati
of awareness Detection: on:
tp9 of his position ta33 tp10 tpa10
END Clarific
TASK Clarifica ation
(Delay tion Not Other
tpa1 confirmat done: than:
0 audio check B ion) e16 e16*
visibility is
e16 good yes: ta29 no: ta30
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
exit other than than Error): failure
ta29 planned planned) ta30 EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
exit other than than e failure
ta30 planned planned) EXIT

141
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
visibility is yes:
e16* good ta29* no: ta30*
END
TASK
(aircraft
ATCO verifies B has
visually that vacated Other
the aircraft B runway than:
has vacated by the Not done Irrevoc
runway by the exit other (Worning / able
ta29 exit other than than Error): failure
* planned planned) ta30* EXIT
END
TASK
ATCO verifies (aircraft
using the B has
radar, that the vacated
aircraft B has runway
vacated by the Not done:
runway by the exit other Irrevocabl
ta30 exit other than than e failure
* planned planned) EXIT
pilot B Slip of
communicates the Other
vacation Not done: tongue: than:
tp10 confirmation ta33 tpa12 tpa11 tpa11
Clarific
Clarifica ation
tion Not Other
tpa1 done: than:
1 audio check B e19 e17 e17*
Clarific
Clarifica ation
tion Not Other
tpa1 done: than:
2 audio check A e19 e17 e17*
visibility is
e17 good yes: ta31 no: ta32
END
TASK
ATCO verify (aircraft END
visually that B has TASK
the Aircraft B safely (Delay
has safely vacated Confirm Not done:
ta31 vacate runway runway) ation) ta32
ATCO verify END
using the TASK END END
radar, that the (aircraft TASK TASK
Aircraft B has B has (Delay (Delay
safely vacate safely Confirm Confirmati
ta32 runway vacated ation) on)

142
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
runway)

visibility is yes:
e17* good ta31* no: ta32*
END
TASK
ATCO verify (aircraft END
visually that B has TASK
the Aircraft B safely (Delay
ta31 has safely vacated Confirm Not done:
* vacate runway runway) ation) ta32*
END
ATCO verify TASK
using the (aircraft END END
radar, that the B has TASK TASK
Aircraft B has safely (Delay (Delay
ta32 safely vacate vacated Confirm Confirmati
* runway runway) ation) on)
ATCO
Not
ATCO heard
understands comm:
ta33 communication e19 tpa13
Clarifica
tion Not
tpa1 done:
3 audio check A e18 e18*
visibility is
e18 good yes: ta34 no: ta35
END
TASK
ATCO verify (aircraft END
visually that B has TASK
the Aircraft B safely (Delay
has safely vacated Confirm Not done:
ta34 vacate runway runway) ation) ta35
END
ATCO verify TASK
using the (aircraft END END
radar, that the B has TASK TASK
Aircraft B has safely (Delay (Delay
safely vacate vacated Confirm Confirmati
ta35 runway runway) ation) on)
visibility is
e19 good yes: ta36 no: ta37

143
Error Type Error Type Error Type Violati
ID Description Correct Execution E T Perception Interpretation Decision Error Type Communication on Error in Recovery
Correct EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM1 EM2 EM3 EM4 EM5 ER1 ER2
Other
than
(Warnin
g):
Not END
END done(War TASK
TASK ning): END (aircraft
ATCO verify (aircraft TASK B has
visually that B has (aircraft B safely
the Aircraft B safely has safely Not vacated
has safely vacated vacated done: Not done: runway
ta36 vacate runway runway) runway) ta37 ta37 )
Not
done(W
Not arning): Not
END done(War END done(War
ATCO verify TASK ning): END TASK ning): END
using the (aircraft TASK (aircraft TASK
radar, that the B has (aircraft B B has (aircraft B
Aircraft B has safely has safely safely has safely
safely vacate vacated vacated vacated vacated
ta35 runway runway) runway) runway) runway)

144
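Each row of the task analysis is effectively a node in a transition network: an ID, a correct-execution successor and a set of error-mode outcomes. As an illustration only, a few rows could be encoded for simulation roughly as follows (the dictionary layout and field names are ours, not the actual PROCOS data model):

```python
# Sketch of how task-analysis rows could be encoded as transition
# nodes. The structure is illustrative, not the PROCOS internals.

TASKS = {
    "e9":   {"desc": "visibility is good",
             "branches": {"yes": "ta16", "no": "ta17"}},
    "ta16": {"desc": "ATCO verifies visually runway availability "
                     "and issues landing clearance",
             "correct": "e10",
             "errors": {"Not done": "ta17",
                        "Warning (clearance plan)": "ta24"}},
    "ta18": {"desc": "ATCO issues the landing clearance",
             "correct": "e10",
             "errors": {"Slip of the tongue": "e10"}},
}

def next_step(task_id, outcome="correct"):
    """Follow one transition: 'correct', a yes/no branch, or an error mode."""
    node = TASKS[task_id]
    if outcome == "correct":
        return node["correct"]
    if outcome in node.get("branches", {}):
        return node["branches"][outcome]
    return node["errors"][outcome]
```

For example, `next_step("e9", "no")` yields "ta17", while `next_step("ta16")` follows the correct-execution path to "e10".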
ANNEX III: Cognitive Flowcharts Used Within The Simulator PROCOS As
Validated For ATC Applications

The cognitive flowchart consists of three sub-elements. The first is the hardware-stimuli
flowchart, which describes the cognitive process of action for tasks triggered by HMI stimuli
or, more generally, by external environmental stimuli. The second is the communication
flowchart, tailored to human actions whose main triggering element and main outcome are a
communication process. The first part of the communication flowchart is linked to parts of
the hardware-stimuli flowchart, since some actions triggered by human communication then
proceed like actions triggered by hardware stimuli. The last figure in Annex III presents the
recovery process, which follows three main phases:

- Error Identification (perception that something went wrong, either through hardware
stimuli or through external communication)

- Error Localisation (identification of where the error occurred, supported by a "pattern
recognition" process that can facilitate identification of the problem)

- Error Correction (the actual planning and carrying out of the corrective action)

The recovery cognitive flowchart is triggered by the simulator every time a piece of
equipment is in a state diverging from the expected state and this divergence is detectable.
The correction can have a positive outcome only if the hardware failure has been labelled as
recoverable by the analyst.
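The trigger and the recoverability gate described above can be sketched compactly. This is our own formulation, assuming simple boolean flags for detectability and recoverability; the function and its return labels are illustrative, not part of PROCOS:

```python
# Sketch of the recovery-trigger logic described above. The three
# phases (identification, localisation, correction) and the
# recoverability gate follow the text; names are illustrative.

def attempt_recovery(expected_state, actual_state,
                     detectable, recoverable):
    """Run the recovery process for one equipment item.

    Returns the phase reached: 'no_trigger', 'localised' or 'corrected'.
    """
    if actual_state == expected_state or not detectable:
        # No detectable divergence: the recovery flowchart is not triggered.
        return "no_trigger"
    # Phase 1 - error identification: perception that something went
    # wrong, through hardware stimuli or external communication.
    # Phase 2 - error localisation: pattern recognition of where the
    # error occurred.
    if not recoverable:
        # Correction can succeed only if the analyst has labelled the
        # hardware failure as recoverable.
        return "localised"
    # Phase 3 - error correction: plan and carry out the corrective action.
    return "corrected"
```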
