                      HELSINKI UNIVERSITY OF
                      TECHNOLOGY
                      Faculty of Electronics, Communications
                      and Automation




                        Teppo-Heikki Saari



      Machinery Safety Risk Assessment of a Metal
                 Packaging Company




Master’s Thesis submitted in partial fulfillment of the requirements for the
degree of Master of Science in Technology

Espoo, December 15, 2009

Supervisor: Professor Jouko Lampinen
Instructor: M.Sc. Hanna Näätsaari
Teknillinen korkeakoulu                                    Diplomityön tiivistelmä
Elektroniikan, tietoliikenteen ja automaation tiedekunta

Tekijä:                      Teppo-Heikki Saari
Osasto:                      Elektroniikan ja sähkötekniikan osasto
Pääaine:                     Laskennallinen tekniikka
Sivuaine:                    Systeemi- ja operaatiotutkimus

Työn nimi:                   Pakkausmateriaalitehtaan koneturvallisuuden riskiarviointi

Työn nimi englanniksi:       Machinery Safety Risk Assessment of a Metal Packaging Company


Professuurin koodi ja nimi: S-114 Laskennallinen tekniikka
Työn valvoja:                Prof. Jouko Lampinen
Työn ohjaaja:                FM Hanna Näätsaari

Tiivistelmä




EU:n ja Suomen työturvallisuuslainsäädäntö velvoittaa työnantajaa arvioimaan työympäristön
riskit työkyvyn turvaamiseksi ja ylläpitämiseksi. Vaikka vaade työolosuhteiden parantamiseksi on
lainsäädännön kautta asetettu, eivät kaikki yritykset Suomessa sitä noudata. Erityisesti pienten
ja keskisuurten yritysten ongelmana ovat olleet resurssien ja helppokäyttöisten, selkeitä tuloksia
tuottavien metodien puute.

Tässä työssä selvitetään minkälaisia käsitteitä turvallisuuteen ja riskiarviointiin yleisesti liittyy,
sekä minkälaisia metodeita riskejä ja ihmisten tekemiä virheitä arvioitaessa yleisesti käytetään.
Lisäksi tässä työssä arvioidaan pakkausmateriaalitehtaan riskejä käyttämällä erästä menetelmää, ja
tutkitaan minkälaisia tuloksia menetelmä tuottaa sekä mitkä tekijät vaikuttavat riskiarviointiproses-
siin yleisesti.

Riskin käsitteeseen sisältyy vaaran toteutumisen todennäköisyys. Tässä työssä tehtaalla esiintyvien
riskien arviointiin käytetty menetelmä perustuu asiantuntija-arvioihin, jolloin arvioinnin tulokset ovat
luonteeltaan subjektiivisia. Menetelmä voikin antaa hyvin erilaisia tuloksia riippuen arvioinnin suorit-
tajasta. Suuret vaihtelut tuloksissa johtavat epävarmuuteen siitä, mitkä vaarat tehtaalla ovat kaikkein
suurimpia, ja näinollen arvioinnin pohjalta tehtävät – mahdollisesti kalliit – päätökset eivät ole
tehty käyttäen tarkinta mahdollista tietoa työympäristön turvallisuuden tilasta. Tätä epävarmuutta
voidaan pienentää selkiyttämällä toimintatapoja ja parantamalla menetelmän dokumentaatiota.

Riskejä on mahdollista hallita usein eri keinoin. Lainsäädännölliset keinot pyrkivät pienentämään
olemassaolevia riskejä ja ehkäisemään uusia syntymästä. Fyysiset keinot pyrkivät suojaamaan
käyttäjää välittömästi toiminnan aikana. Johtuen riskin ja turvallisuuden subjektiivisesta luonteesta,
selkein ja kustannustehokkain tapa pienentää riskejä on turvallisuusilmapiirin parantaminen vaikut-
tamalla työntekijän toimiin muuttamalla hänen käyttäytymismallejaan. Erilaiset ’behavioural safety’
-ohjelmat ovatkin suurten organisaatioiden turvallisuuskulttuurin keskeisimpiä osia.


Sivumäärä: 114           Avainsanat: Koneturvallisuus, Riskiarviointi
Täytetään tiedekunnassa
Hyväksytty:              Kirjasto:
Helsinki University of Technology                                       Abstract of master’s thesis
Faculty of Electronics, Communications and Automation

Author:          Teppo-Heikki Saari
Department:      Department of Electrical Engineering
Major subject:   Computational Science
Minor subject:   Systems and Operations Research

Title:           Machinery Safety Risk Assessment of a Metal Packaging Company


Title in Finnish: Pakkausmateriaalitehtaan koneturvallisuuden riskiarviointi


Chair:           S-114 Computational sciences
Supervisor:      Prof. Jouko Lampinen
Instructor:      M.Sc. Hanna Näätsaari

Abstract:


The occupational health and safety legislation of the EU and Finland requires employers to assess
work environment risks in order to secure and maintain the employees’ working capacity. Although
the requirement is imposed by legislation, not every company in Finland fulfils it. Small and
medium-sized companies in particular have struggled with a lack of resources and of easily
applicable methods that produce clear results.

The aim of this study is to find out what concepts are generally related to safety and risk
assessment, and what kinds of methods are used to assess risk and human error. In addition, risks in
a packaging materials factory were assessed using a particular method, and the results, as well as
the factors that generally affect the risk assessment process, were analysed in this thesis.

The probability of hazard realisation is included in the concept of risk. The method used to assess
the risks at the site is based on expert judgement, which implies that the assessment results are
subjective in nature. The method can produce very different results depending on the assessor. Great
variation in the results leads to uncertainty in the hazard ranking, which means that the subsequent –
possibly costly – decisions are not based on the most accurate available information about the safety
situation of the work environment. This uncertainty can be reduced by clarifying operating
procedures and by improving the documentation of the method.

It is possible to control risks in many different ways. Regulatory controls aim at reducing existing
risks and preventing new ones from arising. Physical controls protect the operator directly during
operation. Due to the subjective nature of risk and safety, the clearest and most cost-effective way
of reducing risk is to improve the safety climate by influencing employees’ actions and changing
their behaviour patterns. Various behavioural safety programmes are therefore a central part of
safety culture in large organisations.




Number of pages: 114 Keywords: Machinery safety, Risk assessment
To be filled in by the faculty
Approved:              Library code:




He   who knows and knows he knows,
He   is wise – follow him;
He   who knows not and knows he knows not,
He   is a child – teach him;
He   who knows and knows not he knows,
He   is asleep – wake him;
He   who knows not and knows not he knows not,
He   is a fool – shun him.


                                   — Arabian proverb




Science perishes by systems that are nothing but beliefs;
and Faith succumbs to reasoning. For the two Columns
of the Temple to uphold the edifice, they must remain
separated and be parallel to each other. As soon as
it is attempted by violence to bring them together,
as Samson did, they are overturned, and the whole
edifice falls upon the head of the rash blind man or the
revolutionist whom personal or national resentments
have in advance devoted to death.


                                         — Albert Pike
Preface

I wish to express my gratitude to all of those who made this thesis possible.


                                         In Helsinki, December 6, 2009



                                         Teppo-Heikki Saari




Contents

Preface                                                                            ii

Abbreviations                                                                      vi

1 Introduction                                                                     1
  1.1     Background . . . . . . . . . . . . . . . . . . . . . . . . . . . .        1
          1.1.1   The Site and its operations      . . . . . . . . . . . . . . .    2
  1.2     Research questions and structure . . . . . . . . . . . . . . . .          3

2 Overview of risk assessment concepts                                              4
  2.1     Basic definitions . . . . . . . . . . . . . . . . . . . . . . . . . .      4
          2.1.1   Risk, hazard, mishap, accident, incident . . . . . . . .          4
          2.1.2   Categorisation of risk . . . . . . . . . . . . . . . . . . .      5
  2.2     Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .     8
          2.2.1   Categorisation and taxonomy . . . . . . . . . . . . . .           8
          2.2.2   Major error types of interest . . . . . . . . . . . . . . .       9
  2.3     Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
          2.3.1   Approaches to safety . . . . . . . . . . . . . . . . . . . 11
          2.3.2   Safety hindrances . . . . . . . . . . . . . . . . . . . . . 12
          2.3.3   Safety facilitators . . . . . . . . . . . . . . . . . . . . . 15
  2.4     Human factors     . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3 Overview of risk assessment methods                                              19
  3.1     Probabilistic risk assessment . . . . . . . . . . . . . . . . . . . 20
          3.1.1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . 20
          3.1.2   Defining objectives and methodology and gathering in-
                  formation . . . . . . . . . . . . . . . . . . . . . . . . . 21


        3.1.3   Identification of initiating events . . . . . . . . . . . . . 21
        3.1.4   Scenario development . . . . . . . . . . . . . . . . . . . 22
        3.1.5   Logic modelling . . . . . . . . . . . . . . . . . . . . . . 22
        3.1.6   Failure data analysis . . . . . . . . . . . . . . . . . . . 22
        3.1.7   Sensitivity analysis . . . . . . . . . . . . . . . . . . . . 23
        3.1.8   Risk acceptance criteria . . . . . . . . . . . . . . . . . 23
        3.1.9   Interpretation of results . . . . . . . . . . . . . . . . . 25
  3.2   Human reliability analysis . . . . . . . . . . . . . . . . . . . . 26
        3.2.1   Introduction . . . . . . . . . . . . . . . . . . . . . . . . 26
        3.2.2   Task analysis . . . . . . . . . . . . . . . . . . . . . . . 27
        3.2.3   Database methods . . . . . . . . . . . . . . . . . . . . 27
        3.2.4   Expert judgement . . . . . . . . . . . . . . . . . . . . . 27
        3.2.5   Technique for Human Error Rate Prediction (THERP)           28
  3.3   Other risk and error assessment methods . . . . . . . . . . . . 29
        3.3.1   Five steps to risk assessment . . . . . . . . . . . . . . . 29
  3.4   Method used by the Company . . . . . . . . . . . . . . . . . . 30
        3.4.1   Risk rating . . . . . . . . . . . . . . . . . . . . . . . . 32
        3.4.2   Method previously used at the Site . . . . . . . . . . . 37

4 Risk control and regulation                                               39
  4.1   Physical risk controls . . . . . . . . . . . . . . . . . . . . . . . 40
  4.2   Behavioural safety . . . . . . . . . . . . . . . . . . . . . . . . 43
  4.3   Regulatory standards in the EU . . . . . . . . . . . . . . . . . 44
        4.3.1   The structure of European harmonised standards . . . 45
        4.3.2   The European Machinery Directive . . . . . . . . . . . 47
  4.4   Regulatory standards in Finland . . . . . . . . . . . . . . . . . 48
  4.5   Regulatory standards in the Company . . . . . . . . . . . . . 50
        4.5.1   The Company Directives . . . . . . . . . . . . . . . . . 50
        4.5.2   OHSAS 18000 . . . . . . . . . . . . . . . . . . . . . . . 51

5 Case study                                                                52
  5.1   Analysis of current safety situation in the Company . . . . . . 52
        5.1.1   Accident statistics . . . . . . . . . . . . . . . . . . . . 52
        5.1.2   Safety culture and climate . . . . . . . . . . . . . . . . 54


        5.1.3   Safety limitations at the Site . . . . . . . . . . . . . . . 57
  5.2   Assessing risks with the Company method . . . . . . . . . . . 58
        5.2.1   Drum line packaging area . . . . . . . . . . . . . . . . 58
        5.2.2   Manually operated slitters and power presses . . . . . . 60
        5.2.3   73mm/99mm tin can manufacturing line (CN02) . . . . 61
        5.2.4   Machine tools at maintenance department . . . . . . . 62

6 Discussion                                                                64
  6.1   Issues encountered during the assessment process . . . . . . . 64
  6.2   Comparison and critique of the methods . . . . . . . . . . . . 65
  6.3   Analysis of the results . . . . . . . . . . . . . . . . . . . . . . 67
        6.3.1   Are the results valid? . . . . . . . . . . . . . . . . . . . 69
  6.4   Addressing the issues encountered during the assessment . . . 70

7 Conclusion                                                                73

References                                                                  74

Appendix                                                                    79

A Appendices                                                                79
  A.1 Safe system of work instructions for surface grinder . . . . . . 80
  A.2 Modified Company method risk scoring components . . . . . . 81

B Risk assessment results                                                   82




Abbreviations
ALARP   As Low As Reasonably Practicable
CCF     Common Cause Failure
DPH     Degree of Possible Harm
EEM     External Error Mode
EHS     Environment, Health and Safety
EOC     Error of Commission
FE      Frequency of Exposure
FMEA    Failure Mode and Effect Analysis
FTA     Fault Tree Analysis
HEA     Human Error Analysis
HEP     Human Error Probability
HRA     Human Reliability Analysis
LO      Likelihood of Occurrence
LWDC    Lost Work Day Case
MRO     Maintenance, repair and operations
NP      Number of People at Risk
OHCA    Occupational Health Care Act
OSHA    Occupational Safety and Health Act
PEM     Psychological Error Mechanism
PPE     Personal Protective Equipment
PRA     Probabilistic Risk Assessment
PSA     Probabilistic Safety Assessment
PSF     Performance Shaping Factor
RCAP    Risk Control Action Plan
RCD     Residual Current Device
RHT     Risk Homeostasis Theory
RR      Risk Rating
SRK     Skill, rule and knowledge
THERP   Technique for Human Error Rate Prediction




Chapter 1

Introduction

1.1     Background
Since the days of the Renaissance, when the gambler Girolamo Cardano (1501–
1576) took the first steps in the development of statistical principles of
probability, and shortly afterwards Blaise Pascal and Pierre de Fermat (1654)
created the theory of probability by solving Pacioli’s puzzle, the concept of
risk has gone through several phases of evolution, and it is nowadays widely
applied in nearly every facet of life. [3]
Global competition has led to higher demands on production systems. End
customer satisfaction is dependent on the production systems’ capability to
deliver goods and services that meet certain quality requirements. To do so
the systems must be fit for use and thereby fulfil important quality parame-
ters. One such parameter is safety.
It is human to make mistakes, and in any task, no matter how simple, errors
will occur. The frequency at which errors occur depends on the nature of the
task, the systems associated with the task and the influence of the environment
in which the task is carried out. Providing safe equipment through design and
a safe work environment through regulation and practices is the key to reducing
risk and removing occupational hazards in the process industry.
The technology of safety-related control systems plays a major role in the
provision of safe working conditions throughout industry. Regulations require
that suppliers and users of machines in all forms, from simple tools to
automated manufacturing lines, take all the necessary steps to protect workers
from injury due to the hazards of using machines. It is through the use of
scientific methods that we can comprehensively identify the risks related to
working with machinery and estimate what can go wrong during the process.
It has been recognised by many authorities that safety should be the number
one priority of industry. Yet, in many cases companies tend to cut resources
from risk assessment, and a thorough analysis is never conducted. Although
the knowledge is readily available for use, many Finnish companies – especially
small and medium-sized ones – do not conduct risk assessments. One of the
aims of this thesis is to examine whether the risk assessment methods work
and what kind of results they give.
My thesis studies various methods of assessing risks in a manufacturing
plant. My objective is to assess production process risks in Crown Pakkaus
Oy (hereafter referred to as the Site), a speciality packaging company that is
part of CROWN Cork & Seal (hereafter referred to as the Company), as a case
study. I have restricted the scope of the thesis to risk analysis of machinery.
The other main objective is to give the reader a picture of the key elements
in the field of risk analysis and assessment. These include basic concepts,
methodology and legislation.


1.1.1     The Site and its operations
The Site’s history goes back to the year 1876, when the family of G.W.
Sohlberg began manufacturing tinsmith products in the Helsinki city area.
Manufacturing of cans out of lacquered tinplate began in 1909, and the
machines required for the printing of sheets were acquired in 1912. The
premises became too small for the business, and the company’s operations were
transferred in 1948 to the existing factory premises in Herttoniemi, Helsinki.
The company acquired its first automated can line in 1959, began manufacturing
drums in 1964, and adopted welded cans, among the first in Europe, in 1970.
In 1993 the Site was merged into Europe’s largest packaging company, Carnaud
Metalbox, and from 1996 onwards the Site has been part of the world’s leading
packaging industry group, Crown Holdings, with headquarters in the USA. In
1998 the Site adopted its current name, Crown Pakkaus Oy.
The Site’s clients are major chemical and food processing companies from
Finland and the neighbouring areas. The clients in the field of chemistry
are mainly paint, lubricant and chemical companies. The most important
food clients are canning and vegetable companies.
The range of packaging manufactured at the Site is wide: it covers paint
pails from 1/3 litre to 20 litres, drums of 200 litres, chemical pails from
34 litres to 68 litres, food cans from 73 mm to 99 mm (diameter) and seasonal
cans from 155 mm to 212 mm (diameter). The slowest manufacturing process is
capable of producing 6 pieces per minute, and the fastest 400 pieces per
minute.
The Site’s operations can be categorised in the following way: pre-printing,
printing, manufacturing, storage and maintenance. For the processing of
colour-print data, the Site has digital reprography equipment and equipment
for manufacturing print film and printing plates. There are three lacquering
lines and three two-colour sheet offset printing lines at the print shop.
For packaging manufacturing, the Site has eight automated welding lines, 10
lines for manufacturing tin can lids/ends, and several individual manually
operated machines, e.g. power presses and slitters.


1.2     Research questions and structure
The thesis aims at answering the following questions:

   • What kind of concepts does the field of risk management deal with?

   • What kind of risks can be found in an industrial environment and
     production processes?

   • What kind of methods can be used to assess risks?

   • What matters affect the risk assessment process?

   • How is it possible to reduce the probability of occurrence of risks?

The structure of the thesis is as follows.
In the second chapter I examine different concepts related to risk and safety
analysis. In the third chapter I examine various aspects of risk assessment
methods. In Chapter 4 I review some methods of controlling risks, including
physical controls and regulatory controls. My analysis of regulations takes
into account three viewpoints: the EU, the Finnish Government and the
Company. The Company case study is introduced in Chapter 5. Beginning with
a short description of what the Company does and of the main purpose of the
thesis, I then present the results of the risk assessment. The risks were
assessed using the Company method. In Chapter 6 I discuss the results.
Chapter 7 presents the conclusions.




Chapter 2

Overview of risk assessment
concepts

2.1     Basic definitions

2.1.1    Risk, hazard, mishap, accident, incident
The field of risk analysis contains several concepts that are defined in
various ways depending on the author or researcher. This chapter presents
the definitions of the concepts that I have used in this thesis. I have
chosen the definitions for their clarity, intelligibility and unambiguity.
The exact wording of the concepts varies depending on the author.
Risk is a measure of the potential loss occurring due to natural or human
activities. Potential losses are adverse consequences of such activities in
the form of loss of human life, adverse health effects, loss of property, and
damage to the natural environment. [33]
Accident is an unintentional event which results or could result in an injury,
whereas injury is a collective term for health outcomes from traumatic
events [1].
Incident is an undesired event that almost causes damage or injury [16].
These are events to learn from before any damage has occurred.
Much of this wording is comparable to that found in military standard
system safety requirements. The system safety literature traces the
principles embodied in these requirements to the work of aviation and
space age personnel that commenced after World War II. The U.S. Government
standards define concepts like mishap and risk in the following way.
Mishap is an unplanned event or series of events resulting in death, injury,
occupational illness, or damage to or loss of equipment or property, or
damage to the environment (synonymous with accident). [34]
Risk is an expression of the impact and possibility of a mishap in terms of
potential mishap severity and probability of occurrence. [34]
Hazard is a condition that is a prerequisite for an accident. [38]
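Definitions like the ones above are commonly operationalised by scoring the probability and severity of a hazard on ordinal scales and combining the scores into a single rating. The sketch below illustrates only this general idea; the 1–5 scales, the `risk_rating` function and the example hazards are my own assumptions, not the Company method discussed in Chapter 3.

```python
# Generic risk rating sketch: rating = probability score x severity score.
# The 1-5 scales and the hazard data below are illustrative assumptions.

def risk_rating(probability: int, severity: int) -> int:
    """Combine two ordinal scores (each on a 1-5 scale) into one rating."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("scores must be on the 1-5 scale")
    return probability * severity

# Hypothetical hazards scored as (probability, severity).
hazards = {
    "unguarded press nip point": (4, 5),
    "slippery floor near slitter": (3, 2),
    "noise at welding line": (5, 3),
}

# Rank hazards from the highest rating to the lowest.
ranked = sorted(hazards, key=lambda h: risk_rating(*hazards[h]), reverse=True)
```

Ranking the scored hazards in this way gives the assessor a priority order for risk control actions, although, as discussed later, expert-judged scores are subjective and the ranking can vary between assessors.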



2.1.2     Categorisation of risk
The VTT Technical Research Centre of Finland has prepared a risk assess-
ment toolkit for small and middle-sized enterprises [54]. The toolkit is found
on the Internet, and provides information on several types of risks and how
to control them. The risks are classified from the point of view of company
and its business, thus taking a broader stand on different business risks. For
the purpose of clarifying the concept of risk and its different aspects, I shall
now present the different risk views mentioned in the toolkit.


Personnel risks

The term ‘personnel risks’ refers to risks to a company’s operations that either
concern or are caused by its personnel. At worst, these risks could mean a
company completely losing the input of a key employee, or an employee
deliberately acting against a company’s interests. Personnel risks include:

   • Fatigue and exhaustion

   • Accidents and illnesses

   • Obsolete professional skills

   • Personal or employment-related disputes

   • Unintended personal error

   • Information leaks or theft

Small companies may be more vulnerable to personnel risks. Key expertise
may rest with one person, one employee may have too many areas of
responsibility, or there may be no contingency arrangements in place.


Business risks

Business risks are related to business operations and decision-making. Business
risks involve profit potential: a company can either be successful in its
operations and make a profit, or fail and suffer losses. The information
available for the assessment of business risks is difficult to use because
business risks are often quite unique. In business, you must recognise
profitable opportunities before others and react quickly, though
decision-making may be difficult due to the lack of precise information.
Business risks form an extensive field. Because of risk chains, the assessment
has to reach even the most distant links in the supply chain. For instance,
a fire at the plant of a network partner can cause interruptions that lead to
a loss of sales income and clientele. Business risks may therefore arise from
the company’s own or external operations.
The character of business risks depends on the company’s field of operation
and its size. The risks of a small company differ from those of a larger one
operating in the same field. The only common factor is that, in the end,
companies always bear the responsibility for business risks themselves and
cannot take out insurance to cover them.


Agreement and liabilities

Agreements and making agreements are an essential part of business activity.
An appropriate agreement clarifies the tasks, rights and responsibilities of
the parties to the agreement. An agreement risk can be caused by the lack
of an agreement or by deficiencies in an agreement. An agreement risk can
be related to issues such as the way an agreement was made, a partner to
the agreement, making a quotation, general terms of agreement, contractual
penalties/compensation, etc.


Information risks

Information risks have long been underestimated and inadequately managed.
All companies have information that is critical to their operation, such as
customer and production management information, product ideas, market-
ing plans, etc. There is a lot of information in different forms: personal
expertise and experience-based knowledge, agreements, instructions, plans,
other paper documents, and electronic data e.g. customer, order and salary
information.


Product risks

A company earns its income from its products and services. Launching prod-
ucts onto the market always involves risks. Errors in decision-making con-
cerning products may prove very expensive. These risks can be reduced
through systematic risk management that covers the entire range of product
operations and all product-related projects.




Environmental risks

Environmental risks refer to risks that can affect the health and viability of
living things and the condition of the physical environment. Environmen-
tal risks can be caused by the release of pollutants to air, land or water.
Environmental damage can also be caused by irresponsible use of energy
and natural resources. Pollutants can include waste (controlled waste, spe-
cial waste), emissions to air due to production or usage of the product (e.g.
smoke, fumes, dusts, gases, etc.), releases to the ground and water systems
(e.g. effluent, chemicals, oil/fuel discharges, etc.), noise (vibration, light, etc.
if causing a nuisance), and radiation.
Environmental risks can be hidden and cause damage over a long period of
time. A disused refuse dump can contaminate the ground around it. An
environmental risk can also emerge suddenly, e.g. due to an accident. A
chemical container that breaks during transport can result in the leakage
of harmful substances into the ground, a water system, the air or a surface
water drain.


Project risks

A project is a singular undertaking with an objective, schedule, budget, man-
agement and personnel. There are two main types of project:

   • Delivery projects in which a customer is promised the delivery of a
     product or a service by a defined date and under stipulated conditions.

   • Development projects in which, for instance, a new device is developed
     for a company’s own use.

These project types are often combined in small and medium-sized enterprises.
A typical project frequently calls for some development work or tailoring
before the product or service intended to meet the customer’s needs can
be delivered. Projects are difficult and risky because each is unique and
so nearly everything is new, such as the workgroup, customer or product.
Projects are also subject to disturbances because there are usually several
projects in progress in the same company, and they compete in importance as
well as for resources – at worst interfering with each other.


Crime risks

Most crimes against companies are planned beforehand. Typically, a company
becomes the object of a crime because criminals perceive it as a suitable
target. In addition to preventing costs caused by crime, the management of
crime risks also helps in the management of a company’s other risks. Struc-
tural protection and alarm systems can prevent fire and information risks as

well as property risks. At the same time, indirect costs caused by interrup-
tions in production, cleaning up the consequences of vandalism and delayed
deliveries are prevented.


2.2      Error
The term error refers strictly to human actions, in contrast to risk or hazard,
which may be due to circumstances and environment when no human has
contributed to the situation.

      A human error is an unintended failure of a purposeful action,
      either singly or as part of a planned sequence of actions, to achieve
      an intended outcome within set limits of tolerability pertaining
      to either the action or the outcome. [55]

There are three major components to an error [24]:

   • External Error Mode (EEM) is the external manifestation of the error
     (e.g. closed wrong valve).

   • Performance Shaping Factors (PSF) influence the likelihood of the error
     occurring (e.g. quality of the operator interface, time pressure, training,
     etc.).

   • Psychological Error Mechanism (PEM) is the ‘internal’ manifestation
     of the error (how the operator failed, in psychologically meaningful terms,
     e.g. memory failure, pattern recognition, etc.).


2.2.1     Categorisation and taxonomy
The skill, rule and knowledge (SRK) based taxonomy was developed by Ras-
mussen [40] and has since been widely adopted as a model for describing
human performance in a range of situations.
Skill based behaviour represents the most basic level of human performance
and is typically used to complete familiar and routine tasks that can be car-
ried out smoothly in an automated fashion without a great deal of conscious
thought. The tasks that can be carried out using this type of behaviour are
so familiar that little or no feedback of information from the external or
work environment is needed to complete them successfully. A typical
range of error probability for skill based tasks is from as high as 0.005 (al-
ternatively expressed as 5.0E-03) or 1 error in 200 tasks to as low as 0.00005
(5.0E-05) or 1 error in 20,000 tasks on average. [15]
Rule based behaviour is adopted when it is required to carry out more com-
plex or less familiar tasks than those using skill based behaviour. The task
is carried out according to a set of stored rules. Although these rules may
exist in the form of a set of written procedures, they are just as likely to be
rules that have been learned from experience or through formal training and
which are retrieved from memory at the time the task is carried out. Error
probability values for rule based tasks are typically an order of magnitude
higher than for skill based tasks. They lie within the range from 0.05 (5.0E-
02) or 1 error in 20 tasks to 0.0005 (5.0E-04) or 1 error in 2000 tasks on
average. [15]
Knowledge based behaviour is adopted when a completely novel situation is
presented for which no stored rules, written or otherwise, exist and yet which
requires a plan of action to be formulated. While there is clearly a goal to
be achieved, the method of achieving it will effectively be derived from first
principles. Once a plan or strategy has been developed, this will be put into
practice using a combination of skill and rule based actions, the outcome of
which will be tested against the desired goal until success is achieved. Knowl-
edge based tasks have significantly higher error probabilities than either skill
or rule based tasks mainly because of the lack of prior experience and the
need to derive solutions from first principles. Error probability values vary
from 0.5 or 1 error in 2 tasks to 0.005 (5.0E-03) or 1 error in 200 tasks on
average. [15]
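The nominal error probability ranges quoted above for the three behaviour types can be collected into a simple lookup, sketched below in Python. The function names and the interval representation are illustrative assumptions, not part of any standard HRA tool.

```python
# Nominal human error probability (HEP) ranges for the SRK taxonomy,
# taken from the figures quoted in the text [15].

HEP_RANGES = {
    # behaviour type: (lowest HEP, highest HEP) per task, on average
    "skill":     (5.0e-05, 5.0e-03),   # 1 in 20,000 .. 1 in 200
    "rule":      (5.0e-04, 5.0e-02),   # 1 in 2,000  .. 1 in 20
    "knowledge": (5.0e-03, 5.0e-01),   # 1 in 200    .. 1 in 2
}

def hep_range(behaviour: str) -> tuple[float, float]:
    """Return the (low, high) nominal error probability range."""
    return HEP_RANGES[behaviour.lower()]

def expected_errors(behaviour: str, n_tasks: int) -> tuple[float, float]:
    """Expected number of errors in n_tasks, as a (low, high) interval."""
    low, high = hep_range(behaviour)
    return (low * n_tasks, high * n_tasks)
```

For example, over 20,000 skill based task executions the figures above imply between roughly 1 and 100 errors on average.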


2.2.2     Major error types of interest
In contrast to the rough labelling of errors according to their probability
of occurrence given by the SRK taxonomy, it is also possible to categorise
errors by their nature of occurrence, i.e. their root cause. The following
categorisation of different error types is adapted from [24].

   • Slips and lapses (action execution errors): The most predictable errors,
     usually characterised by being simple errors of quality of performance
     or by being omission or sequence errors. A slip is a failure of the
     execution as planned (eg. Too much or too little force applied). A
     lapse is an omission to execute an action as planned due to a failure of
     memory or storage (eg. Task steps carried out in wrong sequence).

   • Diagnostic and decision-making (cognitive) errors: These relate to a
     misunderstanding, by the operators, of what is happening in the system,
     and they are usually due to insufficient operator support (design,
     procedures and training). Such errors have an ability to alter accident
     progression sequences and to cause failure dependencies between redun-
     dant and even diverse safety and backup technical systems. This type
     of error includes misdiagnosis, partial diagnosis and diagnostic failure.

   • Maintenance errors and latent failures: Most maintenance errors are
     slips and lapses occurring in maintenance and testing activities, which
     may lead to immediate failures or to latent failures whose impact is
     delayed (and thus may be difficult to detect prior to an accident sequence).
     Most PSAs make assumptions that maintenance failures are implicitly
     included in component and system availability data. However, it is less
     clear that such maintenance data used in the PSA can incorporate the
     full impact of latent failures.

   • Errors of commission (EOC): An EOC is one in which the operator
     does something that is incorrect and also unrequired. Such errors can
     arise due to carrying out actions on the wrong components, or can be
     due to a misconception, or to a risk recognition failure. These EOCs
     can have large impact on system risk and they are very difficult to
     identify (and hence anticipate and defend against).

   • Rule violations: There are two main types of violations (Reason 1990):
     the ‘routine’ rule violation, where the violation is seen as being of
     negligible risk and therefore as acceptable and even a necessary
     pragmatic part of the job, and the ‘extreme’ violation, where the risk
     is largely understood as being real, as is the fact that it is a serious viola-
     tion. Rule violations are relatively unexpected and can lead to failure
     of multiple safety systems and barriers. PSAs rarely include violations
     quantitatively.

   • Idiosyncratic errors: Errors due to social variables and the individual’s
     current emotional state when performing a task. They are the result
     of a combination of fairly personal factors in a relatively unprotected
     and vulnerable organisational system. Some accidents fall into this
     category, and they are extremely difficult to predict, as they relate to
     covert social factors not obvious from a formal examination of the work
     context. These errors are of particular concern where, for example, a
     single individual has the potential to kill a large number of persons.
     They are not dealt with in PSA or HRA.

   • Software programming errors: These errors are of importance due to
     the prevalence of software-based control systems required to economi-
     cally control large complex systems. They are also important in other
     areas and for any safety critical software applications generally. Typi-
     cally there are few if any techniques applied which predict human errors
     in software programming. Instead, effort is spent on verifying and val-
     idating software to show it is error-free. Unfortunately complete and
     comprehensive verification of very large pieces of software is intractable
     due to software complexity and interactiveness.

Whittingham [55] divides root causes of human errors into two categories,
externally induced and internally induced errors. Externally induced hu-
man errors are the factors that have a common influence on two or more
tasks leading to dependent errors which may thus be coupled together. Ex-
amples of these adverse circumstances are deficiencies in organisation of the
task, poor interface design, inadequate training, and excessive task demands.
Internally induced errors are sometimes called ’within-person dependency’.
They are found in the same individual carrying out similar tasks which are
close together in time or space.


2.3     Safety
Safety may be the absence of accidents or threats, or it can be seen as the
absence of risks, which for some is unrealistic. It may also be the balance
between safety and risks, i.e. an acceptable level of risk. [16] It is thus
possible to have a high risk level but even higher safety. Rochlin [45] argues
that “the ‘operational safety’ is not captured as a set of rules or procedures,
of simple, empirically observed properties, of externally imposed training or
management skill, or of a decomposable cognitive or behavioural frame”.
Safety is related to external threats, and the perception of being sheltered
from threats. Safety is not the opposite of risk but rather of fear, including
a subjective dimension, but it does not encompass positive health or aim at
something beyond prevention. Defining an organisation as safe because it
has a low rate of error or accidents has the same limitation as defining health
as not being sick. [44] Safety may be seen as an important quality of work
regardless of the frequency of accidents by regarding safety as larger than
just the absence of risk or fear. [1]


2.3.1     Approaches to safety
Technical approach

The engineering approach focuses on the development of formal reliability
and systems modelling, with only limited attention to some of the complexi-
ties of the human issues involved. [39] The risk is viewed as deriving from the
technical/physical environment. Technicians are the ones doing safety work,
and changes in the technical environment are the way to reduce accidents. A
common means for technical safety is passive prevention, which means that
safety should be managed without active participation of humans.
By means of safety rounds, audits, accident investigations, risk and safety
analyses, it is presumed possible to measure the level of safety within the
organisation. The result is then analysed, providing a basis for formulating
action plans and making decisions to reach the target level of safety. Stan-
dards and routines offer assurances that the safety activities are good enough.
[11]



Psychological approach

The psychological approach to risk and safety focuses on the individual per-
spective, investigating perception, cognition, attitudes and behaviour. [39]
Some researchers have studied how people estimate risks and make choices
among alternatives (e.g. [48]). “Risk is largely seen as a taken-for-granted
objective phenomenon that can be accurately assessed by experts with the
help of scientific methods and calculations. The phenomenon to be explained
is primarily the malleability of risk perceptions”. [51] Individuals’ percep-
tions of risk are influenced by the arguments concerning hazards that are
prevalent in a particular society at a certain time. All organisations oper-
ate with a variety of beliefs and norms with respect to hazards and their
management, which might be formally laid down in rules and procedures, or
more tacitly taken for granted and embedded within the culture of everyday
working practices. Organisational culture may be expressed through shared
practices. The process by which culture is created and constructed should
be borne in mind when organising everyday work. [43]


2.3.2     Safety hindrances
Control and power

Many of today’s safety management systems are built on control. Managing
risk through control does not take into account the fact that individuals are
intentional in how they define and carry out tasks. Döös and Backström
[11] state that production problems which call for corrections in a hazardous
zone may be impossible to handle. The machinery or safety rules may not
be flexible when changes in production are required. Production is usually
considered more important than safety.
The question of politics and power is not addressed in most models and
discussions. The myth of individual control leads to a search for someone to
blame instead of searching for the causes of accidents. [39] It is therefore of
importance to ask who is defining the risk, safety and the accident, and who
is responsible for the consequences. Does the responsibility for risk mean
responsibility for errors?
Whittingham describes the concept of blame culture:

     ”Companies and/or industries which over-emphasise individual
     blame for human error, at the expense of correcting defective
     systems, are said to have a ‘blame culture’. Such organisations
     have a number of characteristics in common. They tend to be se-
     cretive and lack openness cultivating an atmosphere where errors
     are swept under the carpet. Management decisions affecting staff
     tend to be taken without staff consultation and have the appear-
      ance of being arbitrary. The importance of people to the success
      of the organisation is not recognised or acknowledged by man-
      agers and as a result staff lack motivation. Due to the emphasis
      on blame when errors are made, staff will try to conceal their er-
      rors. They may work in a climate of fear and under high levels of
      stress. In such organisations, staff turnover is often high resulting
      in tasks being carried out by inexperienced workers. The factors
      which characterise a blame culture may in themselves increase
      the probability of errors being made.” [55]


Work stress

One of the most important situational moderators of stress is perceived con-
trol over the environment. Karasek [22] introduced the job demand-control
model, stating that jobs which combine high job demands with low levels of
control (e.g. machine-paced, repetitive assembly line work) create strain.
Control in this model
means (1) to have the power to make decisions on the job (decision author-
ity) and (2) to have use for a variety of skills in the work (skill discretion).
Stress is the overall transactional process, stressors are the stimuli that are
encountered by the individuals, and strains are the psychological, physical
and behavioural responses to stressors. These factors are intrinsic to the job
itself and include variables such as the level of job complexity, the variety of
tasks performed, the amount of control that individuals have over the place
and timing of their work, and the physical environment in which the work is
performed.
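Karasek’s two axes are often pictured as four job quadrants (high-strain, active, low-strain and passive jobs). A minimal sketch of that classification follows; the 0..1 score scaling, the cutoff value and the quadrant labels used here are illustrative assumptions, whereas real studies score demands and control from validated questionnaire scales.

```python
# Toy classification on Karasek's demand-control axes.

def job_quadrant(demands: float, control: float, cutoff: float = 0.5) -> str:
    """Classify a job by its demand and control scores (scaled to 0..1)."""
    high_d = demands >= cutoff
    high_c = control >= cutoff
    if high_d and not high_c:
        return "high-strain"   # e.g. machine-paced assembly line work
    if high_d and high_c:
        return "active"
    if not high_d and high_c:
        return "low-strain"
    return "passive"
```

For instance, a job scored as highly demanding but offering little decision authority falls in the high-strain quadrant.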
Stress can also be related to roles in the organisation. Dysfunctional roles
can occur in two primary ways: role ambiguity: lack of predictability of the
consequences of one’s role performance and a lack of information needed to
perform the role; and role conflict: competing or conflicting job demands.
The association between role conflict and psychosocial strain is not as strong
as that between ambiguity and strain. [6]


Conflict between safety and production goals

A constant demand for effective resource allocation and short-term revenues
from investment may result in priorities that are in opposition to safety,
reducing redundancy, cutting margins, increasing work pace, and reducing
time for reflection and learning. Landsbergis et al. [27] found that lean pro-
duction creates intensified work pace and demands on the workers.
Rasmussen [41] proposed a model that indicates a conflict between safe per-
formance and cost-effectiveness. The safety defences are likely to degenerate
systematically through time, when pressure toward cost-effectiveness is dom-
inant. The stage for an accidental course of events is very likely prepared
through time by the normal efforts of many actors in their respective daily
work context, responding to the standing request to be cost-effective. Ulti-
mately, a quite normal variation in somebody’s behaviour can then release
an accident. Had this particular root cause been avoided by some additional
safety measure, the accident would very likely have been released by another
cause at another point in time. In other words, an explanation for the acci-
dent in terms of events, acts and errors is not very useful for the design of
improved safety. It is important to focus not on the human error but on the
mechanisms generating behaviour in the actual dynamic work context. [41]


Attitudes and norms

Slovic [47] stated that risk is always subjective. There is no such thing as
a real risk or objective risk. The concept of risk depends on our mind and
culture and is invented to help us understand and cope with the danger and
uncertainties of life. Slovic [47] stated that trust is an important element in
risk acceptance, and should be further investigated. To be socialised into the
work role is to understand what is accepted and what is not.
In the beginning, reactions towards obvious risks may occur, but may be
difficult to express, and safety has to be trusted. After an introductory pe-
riod, during which risk and safety knowledge may be low, perception may
be higher, but along with increased experience risks may become accepted
as normal. Holmes et al. [18] also found that blue-collar workers regarded
occupational injury risk as a normal feature of the work environment and an
acceptable part of the job. An experienced worker may become home-blind
and not react to hazards. The reinforcement by risks that have been avoided
or mastered may also provide a false sense of safety.
The risk homeostasis theory (RHT), presented by Wilde [57], stated that
people have a target level of risk, the level that they accept. This level de-
pends on perceived benefits and disadvantages of safe and unsafe behaviour.
The frequency of injuries is maintained over time through a closed loop.
Whenever one perceives a discrepancy between target risk and experienced
risk, an attempt is made to restore the balance through some behavioural
adjustment.
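The closed loop described above can be sketched as a toy simulation. The linear adjustment rule and the gain value below are assumptions made purely for illustration; they are not part of Wilde’s theory.

```python
# Toy closed-loop illustration of risk homeostasis: whenever experienced
# risk deviates from the target level, behaviour is adjusted to close the gap.

def simulate_rht(target: float, experienced: float,
                 gain: float = 0.5, steps: int = 20) -> float:
    """Iteratively adjust behaviour until experienced risk nears the target."""
    for _ in range(steps):
        discrepancy = target - experienced
        experienced += gain * discrepancy   # behavioural adjustment
    return experienced
```

Whatever the starting level, the loop drives experienced risk back towards the target, which is the homeostatic point of the theory.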
Organisational culture, structural secrecy and unclear communication of in-
formation are found to influence towards a normalisation of deviance, which
in turn may lead to failure to foresee risks. Deviance from the original
rules becomes normalised and routine, as informal work systems compen-
sate for the organisation’s inability to provide the necessary basic resources
(e.g. time, tools, documentation with a close relationship to action). [10]




2.3.3     Safety facilitators
Participation

Much intervention research has emphasised the benefits of a participatory
approach. Participation will improve the information and idea generation,
engaging those who know most about the current situation. Participation
may result in a ‘sense of ownership’ and a greater commitment to a goal or
a process of change. Behavioural change is likely to be more sustainable if it
emerges from the need of the persons involved and with their active partici-
pation, rather than being externally imposed. [50]
Safety management, risk analyses and interventions are normally conducted
by experts on safety. This information and these activities are not only im-
portant for designers, technicians or safety committees. Safety work could
benefit from involving the operating people, taking an active participatory
part. Using this approach in safety intervention work, the participants in-
stead of a safety expert will own the process, being their own experts on their
special problems and abilities. [50]


Social support and empowerment

Social support has been found to be of importance for behavioural change as
well as a moderator of felt work stress. [23] Risks and injuries are delicate
subjects and particularly so if linked to personal mistakes and shortcomings.
A supportive social climate with a non-judging and respectful atmosphere is
vital to encourage sharing such experiences.
There are different sorts of social support: emotional, evaluative, informa-
tional and instrumental. [19] The effects and mechanism of social support
can be to fulfil fundamental human needs such as security, or social contact.
It can also provide support in reducing interpersonal conflicts, i.e. prioritis-
ing, and it may also have a buffering effect, modifying the relation between
a stressor and health.
Perceived self-efficacy plays an important role in the causal structure of social
cognitive theory, because efficacy beliefs affect adaptation and change. [2]
Unless people believe they can produce desired effects by their actions, they
have little incentive to act or to persevere in the face of difficulties. Other motiva-
tors are rooted in the core belief that one has the power to produce effects by
one’s actions. Efficacy beliefs also influence whether people think pessimisti-
cally or optimistically and in ways that are self-enhancing or self-hindering.
[2]




Communication

Communication is a key factor binding an organisation together. If risks and
safety are not communicated at and through all levels of the organisation,
there will be little understanding of the risks and safety. Lundgren [28] stated
that the risk communication process must be a dialogue, not a monologue
from either party. Continuous feedback and interpretations are necessary for
communication to be effective, which forms the basis for the continuous safe
operation. Communication is linked to a systems view and the capability of
finding, and analysing risks and implementing safety measures. [45]
Effective communication needs openness so that sensitive information can
be outspoken and the question of error, responsibility, blame and shame
is openly dealt with in the communication of accidents. All members of an
organisation need feedback, not only in their specific area of responsibility but
also on how the operating level functions and handles the complexity in which
they operate. It is also of importance to anchor policies, goals and changes
and to make them comprehensible and meaningful. [50] Saari stated that
knowledge of risk is not enough to bring about changes in unsafe behaviour,
and that decision-making is influenced by feelings. Therefore, social feedback
encouraging safe behaviour has been quite successful in modifying behaviour.
[46]


Learning

Learning is a key characteristic of safe organisations. [39] Döös and Backström
[11] stated that demands on control and demands on learning and
acting competently appear to come into conflict. The critical competitive
factor for success is not only competence but also its development and re-
newal.
To learn implies changing one’s ways of thinking and/or acting in relation to
the task one intends to perform. The outcome of learning has two aspects.
Within the individual, learning is expressed as constructing and reconstruct-
ing one’s cognitive structures or thought networks. Outwardly, visible signs
of learning are changed ways of acting, performing tasks and talking. Indi-
vidual experiential learning [32] can be understood as an ongoing interchange
between action and reflection, where past experiences provide the basis for
future ones. Active participation and personal action are prerequisites for
the learning process to take place.


Safety culture/climate

Safety climate reflects the symbolic (e.g. posters in the workplace, state of
the premises, etc.) and political (e.g. managers voicing their commitment to
safety, allocation of budgets to safety, etc.) aspects of the organisation which
constitute the work environment. On the other hand, safety culture is made
up of the cognition and emotion which gives groups, and ultimately the
organisation, its character. Unlike safety management and climate, which
can often be a reactive response to a certain situation, the safety culture
is a stable and enduring feature of the organisation. [56] Flin et al. [12]
found that safety climate can be seen as a snapshot of the state of safety,
providing an indicator of the underlying safety culture of a work group, plant
or organisation. In their review of 18 studies, they identified the six most
common themes in safety climate. These were:

  1. the perceptions of management attitudes and behaviour in relation to
     safety,
  2. different aspects of the organisational safety management system,
  3. attitudes towards risk and safety,
  4. work pressure as the balance maintained between pressure for produc-
     tion and safety,
  5. the workforce perception of the general level of workers’ competence,
  6. perception of safety rules, attitudes to rules and compliance with or
     violation of procedures.

A number of techniques have been employed to measure safety culture; the
most common method is a self-completion questionnaire. Employees respond
by indicating the extent to which they agree or disagree with a range of state-
ments about safety e.g. “senior management demonstrate their commitment
to safety”. The data obtained from the questionnaires are analysed to identify
factors or concepts that influence the level of safety within the organisation.
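As a sketch of the aggregation step, mean agreement scores per climate theme might be computed as follows. The theme names and responses are invented for illustration, and real instruments rely on validated item sets and factor analysis rather than simple means.

```python
# Aggregate Likert-scale safety climate responses per theme.
from statistics import mean

def theme_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Mean agreement score (1 = strongly disagree .. 5 = strongly agree)."""
    return {theme: round(mean(items), 2) for theme, items in responses.items()}

# Hypothetical responses from four employees to two climate themes.
survey = {
    "management commitment": [4, 5, 3, 4],
    "work pressure":         [2, 3, 2, 2],
}
```

A low mean on a theme such as work pressure would then flag an area where production pressure may be crowding out safety.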


2.4     Human factors
Human factors are defined as:

      “. . . environmental, organisational and job factors, and human
      and individual characteristics which influence behaviour at work
      in a way which can affect health and safety”. [17]

Good human factors practice is about optimising the relationship between
demands and capacities when considering human and system performance
(i.e. understanding human capabilities and fallibilities). The term is used
much more in the safety context than ergonomics, even though the two mean
very much the same thing. Like Human Factors, ergonomics deals with the
interaction of technological and work situations with the human being. The
job must ‘fit the person’ in all respects and the work demands should not
exceed human capabilities and limitations. The meaning of ergonomics is
hard to distinguish from human factors, but is sometimes associated more
with the physical design issues as opposed to cognitive or social issues, and
with health, well being and occupational safety, rather than with the design
of major hazard systems.
Tasks should be designed in accordance with ergonomic principles to take
into account limitations and strengths in human performance. Matching the
job to the person will ensure that workers are not overloaded and that they
make the most effective contribution to the business. Physical match includes the
design of the whole workplace and working environment. Mental match in-
volves the individual’s information and decision-making requirements, as well
as their perception of the tasks and risks. Mismatches between job require-
ments and people’s capabilities provide the potential for human error.
People bring to their job personal attitudes, skills, habits and personalities
which can be strengths or weaknesses depending on the task demands. In-
dividual characteristics influence behaviour in complex and significant ways.
Their effects on task performance may be negative and may not always be
mitigated by job design. Some characteristics such as personality are fixed
and cannot be changed. Others, such as skills and attitudes, may be changed
or enhanced.
Organisational factors have the greatest influence on individual and group
behaviour, yet they are often overlooked during the design of work and dur-
ing investigation of accidents and incidents. Organisations need to establish
their own positive health and safety culture. The culture needs to promote
employee involvement and commitment at all levels, emphasising that devi-
ation from established health and safety standards is not acceptable.




Chapter 3

Overview of risk assessment
methods

In this chapter we take a look at well-established procedures for carrying
out a risk assessment on a machine or assembled group of machines.
Such a risk assessment procedure is defined in the European standard EN 1050.
[13] This procedure forms the basis of most safety design studies that have
to be carried out on machines to satisfy the requirements of the regulations.
The standard points out that:

   • Risk assessment should be based on a clear understanding of the ma-
     chine limits and its functions.

   • A systematic approach is essential to ensure a thorough job.

   • The whole process of risk assessment must be documented for control
     of the work and to provide a traceable record for checking by other
     parties.

EN 1050 describes risk assessment as a process intended to help designers
and safety engineers define the most appropriate measures to enable them to
achieve the highest possible levels of safety, according to the state of the art
and the resulting constraints. The standard also defines several techniques
for conducting a risk assessment, including the following: What-If method,
Failure Mode and Effect Analysis (FMEA), Hazard and Operability Study
(HAZOPS), Fault Tree Analysis (FTA), Delphi technique, Defi method, Pre-
liminary Hazard Analysis (PHA), and Method Organised for a Systematic
Analysis of Risks (MOSAR).
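One of the listed techniques, FMEA, commonly ranks failure modes by a risk priority number (RPN), the product of severity, occurrence and detection ratings on 1 to 10 scales. A minimal sketch follows; the example failure modes and ratings are invented for illustration and are not drawn from EN 1050.

```python
# Minimal FMEA-style ranking sketch using the risk priority number.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: product of the three 1..10 ratings."""
    return severity * occurrence * detection

failure_modes = [
    ("guard interlock fails open", rpn(9, 3, 4)),    # 108
    ("emergency stop unresponsive", rpn(10, 2, 2)),  # 40
    ("warning lamp burnt out", rpn(3, 5, 6)),        # 90
]

# Highest RPN first: these failure modes are treated with priority.
ranked = sorted(failure_modes, key=lambda fm: fm[1], reverse=True)
```

Note that a high-severity failure mode can still receive a modest RPN if it is rare and easily detected, which is a known limitation of the plain product formula.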




3.1      Probabilistic risk assessment

3.1.1     Introduction
The Finnish work safety regulations require the employer to conduct risk
assessments that evaluate the safety of the workplace. It is stated in the
Occupational Safety and Health Act [37] that

      ”Employers are required to take care of the safety and health of
      their employees while at work by taking the necessary measures.
      For this purpose, employers shall consider the circumstances re-
      lated to the work, working conditions and other aspects of the
      working environment as well as the employees’ personal capaci-
      ties.”

In addition,

      ”Employers shall design and choose the measures necessary for
      improving the working conditions as well as decide the extent of
      the measures and put them into practice. ”

Probabilistic Risk Assessment (PRA), also known as Probabilistic Safety
Assessment (PSA), is a systematic procedure for investigating how complex
systems are built and operated. The PRAs model how human, software, and
hardware elements of the system interact with each other. The methodology
was first used in the USA in 1975 to assess and analyse the potential risks
leading to severe accidents in nuclear power plants. [42]
The study, published as the WASH-1400 report, involved listing potential
accidents in nuclear reactors, estimating the likelihood of accidents resulting
in radioactivity release, estimating the health effects associated with each
accident, and comparing nuclear accident risk with other accident risks. Since
WASH-1400, the understanding of PSA has increased and it has become a
useful tool in risk analysis. A similar method is used by NASA in analysing the risks in
space shuttle missions. One of the most important features of PSA is its
quantitative probability assessment of different components and events.
The methodology includes several phases which can also be used indepen-
dently to examine possible failures within a system. A risk assessment
amounts to addressing three very basic questions posed by Kaplan and Gar-
rick: [21]

  1. What can go wrong?

  2. How likely is it?

  3. What are the consequences?

The answer to the first question leads to identification of the set of undesir-
able scenarios. The second question requires estimating the probabilities (or
frequencies) of these scenarios, while the third estimates the magnitude of
potential losses.
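Kaplan and Garrick formalise the answers to these three questions as a set of “risk triplets” of scenario, likelihood and consequence. A minimal sketch in Python follows; the scenarios and figures are invented purely for illustration.

```python
# Kaplan-Garrick risk triplets: (scenario, likelihood, consequence).
from typing import NamedTuple

class RiskTriplet(NamedTuple):
    scenario: str        # what can go wrong?
    frequency: float     # how likely is it (events per year)?
    consequence: float   # what are the consequences (loss per event)?

def expected_annual_loss(triplets: list[RiskTriplet]) -> float:
    """Sum of frequency x consequence over all identified scenarios."""
    return sum(t.frequency * t.consequence for t in triplets)

# Hypothetical scenarios for a metal packaging line.
scenarios = [
    RiskTriplet("hand caught in press", 0.02, 500_000.0),
    RiskTriplet("minor cut at trimming station", 1.5, 2_000.0),
]
```

Summing frequency times consequence gives a single expected-loss figure, although a full PRA keeps the whole set of triplets rather than collapsing it to one number.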
The NASA PRA Guide [49] describes the components of the PRA; a modified
version is presented here. Each component is discussed in more detail in the following.


3.1.2     Defining objectives and methodology and gath-
          ering information
Preparing for a PRA begins with a review of the objectives of the analysis.
Among the many objectives possible, the most common ones include design
improvement, risk acceptability, decision support, regulatory and oversight
support, and operations and life management. Once the objective is clarified,
an inventory of possible techniques for the desired analyses should be de-
veloped. The available techniques range from the required computer codes to
system experts and analytical experts.
The resources required for each analytical method should be evaluated, and
the most effective option selected. The basis for the selection should be doc-
umented, and the selection process reviewed to ensure that the objectives of
the analysis will be adequately met.
A general knowledge of the physical layout of the overall system, adminis-
trative controls, maintenance and test procedures, as well as hazard barriers
and subsystems (whose purpose is to protect, prevent, or mitigate hazard
exposure conditions) is necessary to begin the PRA. A detailed inspection of
the overall system must be performed in the areas expected to be of interest
and importance to the analysis.


3.1.3     Identification of initiating events
A system is said to operate in a normal operation mode as long as it is
operating within its design parameter tolerances; in this mode there is little
chance of challenging the system boundaries in such a way that hazards will
escape those boundaries. During normal operation mode, loss of certain functions
or systems will cause the process to enter an off-normal (transient) state.
Once in this state, there are two possibilities. First, the state of the system
could be such that no other function is required to maintain the process or
overall system in a safe condition. The second possibility is a state wherein
other functions are required to prevent exposing hazards beyond the system
boundaries. For the second possibility, the loss of the function or the system
is considered as an initiating event (IE).
One method for determining the operational IEs begins with first drawing
a functional block diagram of the system. From the functional block di-


agram, a hierarchical relationship is produced, with the process objective
being successful completion of the desired system. Each function can then
be decomposed into its subsystems and components, and can be combined
in a logical manner to represent operations needed for the success of that
function.


3.1.4     Scenario development
The goal of scenario development is to derive a complete set of scenarios
that encompasses all of the potential exposure propagation paths that
can lead to loss of containment or confinement of the hazards, following
the occurrence of an initiating event. To describe the cause and effect
relationship between initiating events and subsequent event progression, it
is necessary to identify those functions that must be maintained, activated
or terminated to prevent loss of hazard barriers. The scenarios that describe
the functional response of the overall system or process to the initiating
events are frequently displayed using event trees.



3.1.5     Logic modelling
Event trees commonly involve branch points that show whether a given sub-
system (or event) works (or happens) or does not. Sometimes, failure of
these subsystems is rare and there may not
be an adequate record of observed failure events to provide a historical basis
for estimating frequency of their failure. In such cases, other logic-based
analysis methods such as fault trees or master logic diagrams may be used,
depending on the accuracy desired. The most common method used in PRA
to calculate the probability of subsystem failure is fault tree analysis.
Different event tree modelling approaches imply variations in the complexity
of the logic models that may be required. If only main functions or systems
are included as event tree headings, the fault trees become more complex
and must accommodate all dependencies among the main and support
functions within the fault tree. If support functions or systems are explicitly
included as event tree headings, more complex event trees and less complex
fault trees will result.
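The fault tree quantification mentioned above can be illustrated with a minimal sketch. The gate structure and basic-event probabilities below are invented for illustration, and basic events are assumed independent.

```python
def and_gate(*probs):
    """AND gate: all inputs must fail; probabilities multiply."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    """OR gate: any input failing suffices; 1 - prod(1 - p)."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical top event: "press cycles with guard open" requires
# (interlock failure OR wiring fault) AND the operator reaching in.
p_top = and_gate(or_gate(1e-3, 5e-4), 0.1)
print(p_top)  # roughly 1.5e-4
```

In a real PRA the tree is far deeper, and minimal cut sets rather than direct gate evaluation are typically used, but the arithmetic at each gate is the same under the independence assumption.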



3.1.6     Failure data analysis
Hardware, software, and human reliability data are inputs to assess perfor-
mance of hazard barriers, and the validity of the results depends highly on
the quality of the input information. It must be recognised that historical

data have predictive value only to the extent that the conditions under which
the data were generated remain applicable. Collection of the various failure
data consists fundamentally of the following steps: collecting and assessing
generic data, statistically evaluating facility- or overall system-specific data,
and developing failure probability distributions using test or facility- and
system-specific data. Three types of events must be quantified for the event
trees and fault trees to estimate the frequency of occurrence of sequences:
initiating events, component failures, and human errors.
After establishing probabilistic failure models for each barrier or component
failure, the parameters of the model must then be estimated. Typically
the necessary data include time of failures, repair times, test frequencies,
test downtimes, and common cause failure (CCF) events. One might also use
non-parametric models and simulate the results.



3.1.7     Sensitivity analysis
In a sensitivity analysis, an input parameter, such as a component failure
rate in a fault tree logic model, is changed, and the resulting change in the
top event probability is measured. This process is repeated using either dif-
ferent values for the same parameter or changing different parameters by the
same amount.
There are various techniques for performing sensitivity analyses. These tech-
niques are designed to determine the importance of key assumptions and
parameter values to the risk results. The most commonly used methods are
so-called “one-at-a-time” methods, in which assumptions and parameters are
changed individually so that the impact of virtually any input or model
assumption on the final risk calculations can be observed.
The key challenge in engineering risk analysis is to identify the elements of the
system or facility that contribute most to risk and associated uncertainties.
To identify such contributors, the common method used is the importance
ranking. These importance measures are used to rank the risk-significance
of the main elements of the risk models in terms of their contributions to the
total risk.
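A one-at-a-time sensitivity check of the kind described above can be sketched as follows. The two-component series model and its failure probabilities are hypothetical.

```python
def top_event(p):
    # Series system: the top event occurs if either component fails (OR gate).
    return 1.0 - (1.0 - p["pump"]) * (1.0 - p["valve"])

base = {"pump": 1e-2, "valve": 1e-3}   # hypothetical failure probabilities
p0 = top_event(base)

# One-at-a-time: scale each parameter by 10 in turn and record the change
# in the top-event probability; the largest change ranks highest.
deltas = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 10
    deltas[name] = top_event(perturbed) - p0

ranking = sorted(deltas, key=deltas.get, reverse=True)
print(ranking)  # ['pump', 'valve']: the pump dominates the risk
```

The resulting ranking is a crude importance measure; formal measures such as Birnbaum or Fussell-Vesely importance refine the same idea.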


3.1.8     Risk acceptance criteria
In an engineering risk assessment, the analyst considers both the frequency of
an initiating event and the probabilities of such failures within the engineer-
ing system. In a health risk assessment, the analyst assesses consequences
from situations involving chronic releases of certain amounts of chemical and
biological toxicants to the environment, with no consideration of the frequency
or probability of such releases.
The ways for measuring consequences are also different in health and engi-

neering risk assessments. Health risk assessment focuses on specific toxicants
and contaminants and develops a deterministic or probabilistic model of the
associated exposure amount and resulting health effects, or the so-called
dose-response models. The consequences are usually in the form of fatalities. In
engineering risk assessment, the consequence varies. Common consequences
include worker health and safety, economic losses to property, immediate or
short-term loss of life, and long-term loss of life from cancer. One useful way
to represent the final risk values is by using the so-called Farmer’s curves. In
this approach, the consequence is plotted against the complementary cumu-
lative distribution of the event frequency.
Individual risk is one of the most widely used measures of risk and is defined
as the fraction of the population exposed to a specific hazard and subsequent
consequence per unit time. Societal risk is expressed in terms of the total
number of casualties such as the relation between frequency and the number
of people affected from a specified level of consequence in a given population
from exposure to specified hazards. [33]
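The Farmer-curve construction described above (consequence plotted against the complementary cumulative frequency of reaching at least that consequence) can be sketched for a hypothetical scenario set:

```python
# Hypothetical scenario set: (frequency per year, consequence magnitude).
scenarios = [
    (1e-1, 1.0),
    (1e-2, 10.0),
    (1e-4, 100.0),
]

def exceedance_curve(scenarios):
    """Points (c, F) where F is the total frequency of scenarios whose
    consequence is >= c: the complementary cumulative frequency."""
    points = []
    for _, c in sorted(scenarios, key=lambda s: s[1]):
        freq_at_least_c = sum(f for f, cc in scenarios if cc >= c)
        points.append((c, freq_at_least_c))
    return points

for c, freq in exceedance_curve(scenarios):
    print(c, freq)  # plot these on log-log axes for a Farmer curve
```

Each point answers "how often is a consequence of at least this size expected", which is why the curve is monotonically decreasing.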
The ALARP (as low as reasonably practicable) principle [31] recognises that
there are three broad categories of risk:

  1. Negligible risk: Broadly accepted by most people as they go about their
     everyday lives. Examples of this kind of risk might be being struck by
     lightning or having brake failure in a car.

  2. Tolerable risk: One would rather not have the risk, but it is tolerable in
     view of the benefits obtained by accepting it. The cost in inconvenience
     or in money is balanced against the scale of the risk and a compromise is
     accepted. This would apply to e.g. travelling in a car.

  3. Unacceptable risk: The risk level is so high that we are not prepared
     to tolerate it. The losses far outweigh any possible benefits in the
     situation.

The principle is depicted in Figure 3.1.




        Figure 3.1: ALARP and risk tolerance regions (adapted from [55])
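A simple classifier for the three ALARP categories might look like the following sketch. The numeric boundaries are invented placeholders; actual tolerability limits are set by regulators and vary by jurisdiction and context.

```python
# Hypothetical boundaries (per person per year); not regulatory values.
BROADLY_ACCEPTABLE = 1e-6
INTOLERABLE = 1e-3

def alarp_region(individual_risk):
    """Map an individual risk estimate to one of the three broad regions."""
    if individual_risk < BROADLY_ACCEPTABLE:
        return "negligible"
    if individual_risk > INTOLERABLE:
        return "unacceptable"
    return "tolerable if reduced as low as reasonably practicable"

print(alarp_region(1e-7))  # negligible
print(alarp_region(1e-4))  # tolerable (the ALARP region)
```

In the middle region, risk reduction measures are required up to the point where their cost becomes grossly disproportionate to the benefit gained.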

3.1.9     Interpretation of results
When the risk values are calculated, they must be interpreted to determine
whether any revisions are necessary to refine the results and the conclusions.
The adequacy of the PRA model and the scope of the analysis is verified. Also,
characterising the role of each element of the system in the final results is
necessary. Based on the results of the interpretation, the details of the PRA
logic, its assumptions, and scope may be modified to update the results into
more realistic and dependable values.
The basic steps of the PRA results interpretation are:

  1. Determine the accuracy of the logic models and scenario structures,
     assumptions, and scope of the PRA.

  2. Identify system elements for which better information would be needed
     to reduce uncertainties in failure probabilities and models used to
     calculate performance.

  3. Revise the PRA and reinterpret the results until attaining stable and
     accurate results.




3.2      Human reliability analysis

3.2.1     Introduction
Human actions are an essential part of the operation and maintenance of
machinery, in both normal and abnormal conditions. Generally, humans can en-
sure safe and economic operation by proactive means, but in disturbances
a reactive performance may also be required. Thus, human actions affect
both the probability of risk-significant events and their consequences, and
they need to be taken into account in PSA. Without incorporating human error
probabilities (HEPs), the results of a risk analysis are incomplete.
The measurement of human reliability is necessary to provide some assur-
ance that complex technology can be operated effectively with a minimum of
human error and to ensure that systems will not be maloperated leading to
a serious accident. To estimate HEPs, and thus human reliability, one needs
to understand human behaviour, which is very difficult to model. HEP is
defined as the mathematical ratio:

    HEP = (Number of errors occurring in a task) / (Number of opportunities for error)    (3.1)

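Equation 3.1 translates directly into code. The sketch below also includes a crude Laplace-style adjustment, an assumption of this illustration rather than part of the equation, for tasks where no error has yet been observed.

```python
def hep(errors, opportunities):
    """Point estimate of HEP per Equation 3.1."""
    if opportunities <= 0:
        raise ValueError("need at least one opportunity for error")
    return errors / opportunities

def hep_laplace(errors, opportunities):
    """(errors + 1) / (opportunities + 2): a crude Beta(1, 1)-prior mean
    (an assumption of this sketch) that avoids an implausible HEP of
    exactly zero when no error has yet been observed."""
    return (errors + 1) / (opportunities + 2)

print(hep(3, 1000))          # 0.003
print(hep_laplace(0, 1000))  # roughly 0.001 rather than 0.0
```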
Practically all HRA methods and approaches share the assumption that it
is meaningful to use the concept of a human error, hence to develop ways
of estimating human error probabilities. This view prevails despite serious
doubts expressed by leading scientists and practitioners from HRA and re-
lated disciplines. [14]
Extensive studies of human performance in accidents conclude that

      ”. . . ‘human error’ is not a well defined category of human perfor-
      mance. Attributing error to the actions of some person, team, or
      organisation is fundamentally a social and psychological process
      and not an objective, technical one.” [59]

Also, Reason (1997) concludes that

      ”the evidence from a large number of accident inquiries indicates
      that bad events are more often the result of error-prone situations
      and error-prone activities, than they are of error-prone people.”
      [43]

Attempts to approach the human reliability problem with the same criteria
as the engineering reliability problem reveal their inconsistency. The
human failure probability can be determined precisely only for a specific
person, social conditions and a short time period. Generalising the obtained
data to different people, social conditions and long time periods increases
the uncertainty of the result.

Nevertheless, HRA methods have been successfully used in assessing error
probabilities. Numerous studies have been performed to produce data sets
or databases that can be used as a reference for determining human error
probabilities. Some key elements of human reliability analysis are presented
in the following sections, and some specific methods for examining that cer-
tain area of human reliability are introduced.


3.2.2     Task analysis
Task analysis is a fundamental methodology in the assessment and reduction
of human error. A very wide variety of different task analysis methods exist.
An extended review of task analysis techniques is available in Kirwan and
Ainsworth. [25]
Nearly all task analysis techniques provide, as a minimum, a description
of the observable aspects of operator behaviour at various levels of detail,
together with some indications of the structure of the task. These will be re-
ferred to as action oriented approaches. Other techniques focus on the mental
processes that underlie observable behaviour, for example, decision making
and problem solving. These will be referred to as cognitive approaches.
In addition to their descriptive functions, TA techniques provide a wide va-
riety of information about the task that can be useful for error prediction
and prevention. To this extent, there is a considerable overlap between task
analysis and human error analysis (HEA) techniques, thus a combination of
TA and HEA methods will be the most suitable form of analysis.


3.2.3     Database methods
Database methods generally rely upon observation of human tasks in the
workplace, or analysis of records of work carried out. Using this method, the
number of errors taking place during the performance of a task is noted each
time the task is carried out. Dividing the number of errors by the number of
tasks performed provides an estimate of HEP as described above. However,
since more than one type of error may occur during the performance of a
task it is important to note which types of error have occurred.


3.2.4     Expert judgement
The use of expert judgement in the risk estimation step of risk assessment
aims at producing a single representation, i.e. in practice an aggregated
probability distribution of an unknown quantity. A formalised procedure for
attaining this is described by several different researchers, Winkler et al. [58]
and Cooke and Goossens [5] to name but a few. Such a procedure is known
as an expert judgement protocol. The main challenge of the protocol is to

control cognitive biases inherent in eliciting probabilities. [53]
Expert judgement elicitation and aggregation approaches can be classified
into behavioural probability aggregation and mechanical probability aggre-
gation. [4] In the behavioural probability aggregation approach, the experts
themselves produce the consensus probability distribution. The normative
expert only facilitates the process of interaction and debate. The main objec-
tive of the approach is to ensure the achievement of a shared understanding
of the physical and social phenomena and/or logical relationships represented
by the parameter elicited. It is important to note that this approach induces
strong dependence between the experts.
In the mechanistic approach, experts’ individual probability distributions are
aggregated by the decision-maker after their elicitation. The main challenge
is to specify the performance of the experts. Such a specification presupposes
at least two assumptions:

   1. data for calibrating an expert’s performance is available, and

   2. the expert has not learned from his past performance, and thus uses
      cognitive heuristics.

In the case of Bayesian mechanistic probability aggregation, the decision-
maker defines the likelihoods of the experts’ judgements and treats these
judgements as data for updating his prior belief to posterior belief according
to Bayes’ rule.


3.2.5     Technique for Human Error Rate Prediction
          (THERP)
Development of the THERP method began in 1961 at Sandia National
Laboratories in the US, and the method was finally released for public
use in document NUREG/CR-1278 in 1983. [52] The stated purpose is to
present methods, models and estimates of HEPs to enable analysts to make
predictions of the occurrence of human errors in nuclear power plant opera-
tions, particularly those that affect the availability or reliability of engineered
safety systems and components.
The method describes in detail all the relevant performance shaping factors
(PSFs) which may be encountered and provides methods of estimating their
impact on HEP. It also proposes methods of combining the HEPs assessed
for individual tasks in the
form of a model so that the failure probability for a complete procedure can
be calculated. This is carried out by using a method of modelling procedures
in the form of HRA event trees. The interaction between individual human
errors can then be more easily examined and the contribution of those errors
to the overall failure probability of the procedure can be quantified.
The key elements of the THERP quantification process are as follows:


  1. Decomposing tasks into elements. The first step involves breaking down
     a task into its constituent elements according to the THERP taxonomic
     approach given in NUREG/CR-1278.

  2. Assignment of nominal HEPs to each element. The assignment of nom-
     inal HEPs is carried out with reference to the THERP Handbook.
     Chapter 20 of the Handbook is a set of tables, each of which has a
     set of error descriptors, associated error probabilities and error factors.
     The assessor uses these tables and their supporting documentation to
     determine the nominal HEP for each task element. Problems will arise
     when task elements do not appear to be represented in any of the tables.

  3. Determination of effects of PSF on each element. The determination
     of the effects of PSF should occur based on the assessor’s qualitative
     analyses of the scenario, and a range of PSFs are cited which can be
     applied by the assessor. The assessor will normally use a multiplier on
     the nominal HEP.

  4. Calculation of effects of dependence between tasks. Dependence exists
     when the failure probability of a task is different when it follows a
     particular task. THERP models dependence explicitly, using a five-level
     model of dependence. Failing to model dependence can have a dramatic
     effect on the overall HEP, and differences in the levels chosen by
     different assessors can lead to different HEPs.

  5. Modelling in a Human Reliability Analysis Event Tree. Modelling via
     an event tree is relatively straightforward, once step 1 has occurred.

  6. Quantification of total task HEP. Quantification is done using sim-
     ple Boolean algebra: multiplication of probabilities along each event
     branch, with success and failure probability outcomes summing to
     unity.
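Steps 2, 4, and 6 above can be sketched together. The conditional-HEP formulas below follow the standard five-level THERP dependence model from NUREG/CR-1278; the task elements and nominal HEP values are hypothetical.

```python
def conditional_hep(p, level):
    """Conditional HEP of a task element given failure of the preceding
    element, per the five-level THERP dependence model."""
    return {
        "ZD": p,                  # zero dependence
        "LD": (1 + 19 * p) / 20,  # low dependence
        "MD": (1 + 6 * p) / 7,    # moderate dependence
        "HD": (1 + p) / 2,        # high dependence
        "CD": 1.0,                # complete dependence
    }[level]

# Hypothetical two-element task: read the procedure step, then operate
# the correct switch; nominal HEPs are invented for illustration.
p1 = 0.003          # element 1 fails
p2_nominal = 0.001  # element 2 fails, given element 1 succeeded

# Probability that both elements fail (e.g. an unrecovered error):
p_independent = p1 * p2_nominal                      # ignores dependence
p_with_dep = p1 * conditional_hep(p2_nominal, "HD")  # high dependence
print(p_independent, p_with_dep)  # dependence raises the result ~500x
```

The comparison shows why step 4 matters: with high dependence assumed, the joint failure probability is orders of magnitude larger than the naive independent product.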


3.3     Other risk and error assessment methods

3.3.1    Five steps to risk assessment
The process for a risk assessment for the handling and use of machines fol-
lows the same general rules for all risk assessments. These rules are most
clearly described in a widely used brochure published by the UK Health and
Safety Executive (HSE) called ‘Five steps to risk assessment’. The process
is depicted in Figure 3.2.




                  Figure 3.2: Five steps to risk assessment

3.4        Method used by the Company
The Company uses a risk assessment method of its own. The method is
based on the principles presented in ‘Five steps to risk assessment’ by HSE.
The risk assessment database works within the Company intranet framework
where the assessor chooses the entity to be assessed (a production line or a
machine) and then adds the risks identified.
The Company policy is that all risks scoring above 30 on the risk rating scale
need to be controlled. This means defining a risk control action plan (RCAP)
for every risk exceeding that level. The RCAP includes identifying existing
controls, nominating an actioner, setting a completion date and estimating
costs. Risks scoring higher than 100 are unacceptable and need to be elimi-
nated urgently.
The assessing process is depicted in Figure 3.3.
 The process has the following steps:

  1. Identify activity. The machinery is used in different modes: normal
     operation, maintenance, repair, emergency.

  2. Form an assessment team. The team consists of at least a trained
     assessor and the machine operator.

  3. Gather information. Acquire information from previous risk assess-
     ments, accident and incident reports, work instructions, legal require-


                  Figure 3.3: The Company method

    ments, operating manuals, interviews with the operators and mainte-
    nance personnel, etc.



  4. Identify hazards. Using an applicable method, identify the possible
     hazards within the target activity.
  5. Identify who might be harmed and how.
  6. Identify existing control measures. There are several already applied
     measures, such as guarding, safety devices, procedures, personal pro-
     tection equipment, etc.
  7. Assess risks. Using the data gathered, calculate the risk level for all
     the hazards identified using the risk rating formula below.
  8. Remove the hazards. Limit the risk as far as possible. This can be
     achieved by reducing speed and force, employing good ergonomics, ap-
     plying failsafe principles, and strengthening existing control measures.
  9. Identify and implement additional controls. After re-assessing the resid-
     ual risk, inform and warn the personnel about any residual risk. This
     can take the form of signs and symbols.
 10. Document the assessment. Risk assessments should be recorded in
     the Company database, updated when new information becomes available,
     and closed when the assigned corrective actions are completed.


3.4.1    Risk rating
Calculating the risk level is done based on the following formula:

    Risk Rating = LO × FE × DPH × NP    (3.2)

Based on table 3.1, one estimates the risk using these four variables.
Because each of these elements has a range of values, this can sometimes lead
to difficulties in ensuring that they are applied consistently from site to site
and from risk assessor to risk assessor. The Company has provided some
guidelines in order to maintain consistency in the assessments.
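Equation 3.2 and the Company's action thresholds (an RCAP above 30, unacceptable above 100) can be sketched as follows; the example factor values are hypothetical:

```python
def risk_rating(lo, fe, dph, np_at_risk):
    """Equation 3.2: Risk Rating = LO x FE x DPH x NP."""
    return lo * fe * dph * np_at_risk

def action_required(rating):
    """Map a rating to the Company action thresholds stated in the text."""
    if rating > 100:
        return "unacceptable: eliminate urgently"
    if rating > 30:
        return "define a risk control action plan (RCAP)"
    return "no mandatory action"

# Hypothetical assessment: even chance (5), daily exposure (2.5),
# loss of 1 limb/eye (4), 1-2 people at risk (1).
r = risk_rating(5, 2.5, 4, 1)
print(r, "->", action_required(r))  # 50.0 -> RCAP required
```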

Frequency of Exposure and Number of People at Risk

The number of people at risk should be calculated as the number of people
who come into contact with the hazard. Where there is a shift system in
operation then it is acceptable to calculate the number of people as the
number per shift. For example, if the task is undertaken by 2 operators per
shift in a 3 shift factory then the number of people is 2. However, one should
also remember to include other people who might also come into contact
with the hazard during each shift e.g. supervisors, quality staff, maintenance
engineers.
If there are significant differences in the frequency of exposure of different
groups of people then their risk should be assessed separately.

Likelihood of occurrence (LO): likelihood of the identified hazard realising
its potential and causing actual injury and/or ill health during or after the
activity.
  0.033   Almost impossible (possible only under extreme circumstances)
  0.5     Highly unlikely (though conceivable)
  1       Unlikely (but could occur)
  2       Possible (but unusual)
  5       Even chance (could happen)
  8       Probable (not surprised)
  10      Likely (only to be expected)
  15      Certain (no doubt)

Degree of possible harm (DPH): an indication of how serious the harm or ill
health could be.
  0.1     Scratch/bruise
  0.5     Laceration/mild ill health effect
  1       Break – minor bone or minor illness (temporary)
  2       Break – major bone or serious illness (permanent)
  4       Loss of 1 limb/eye or serious illness (temporary)
  8       Loss of 2 limbs/eyes or serious illness (permanent)
  15      Fatality

Frequency of exposure (FE): frequency of exposure to the identified hazard
during the activity.
  0.1     Infrequently
  0.2     Annually
  1       Monthly
  1.5     Weekly
  2.5     Daily
  4       Hourly
  5       Constantly

Number of people at risk (NP): the number of people who could be exposed to
the hazard during the activity.
  1       1-2 people
  2       3-7 people
  4       8-15 people
  8       16-50 people
  12      More than 50 people

                   Table 3.1: Risk scoring components




Degree of possible harm

An important role of a risk assessment is to make employees aware of the
hazards and risks they face day-to-day in carrying out their jobs. The DPH
chosen should therefore be realistic and reflect to a large extent the accident
history within the Company or elsewhere.
Tables 3.2, 3.3, and 3.4 show other injuries, which might be considered of
similar gravity to the examples given in the scheme, and also suggest some of
the types of activities and accidents that commonly lead to these injuries.


  0.1   Scratch / bruise
        Injuries: splinters, skin irritation, blisters, superficial wounds,
        light swelling.

  0.5   Laceration / mild ill health effect
        Injuries: small cuts requiring stitches, bump to head (no loss of
        consciousness), minor eye irritation.
        Typical activities: handling tinplate; short term exposure to
        solvents, fumes etc.

  1     Break – minor bone or minor illness (temporary)
        Injuries: contact dermatitis, fractures to fingers, toes, nose, open
        wounds requiring stitches, first degree burns.
        Typical activities: workshop machinery; using tools; prolonged skin
        exposure to solvents.


Table 3.2: Guidelines for evaluating degree of possible harm (table 1 of 3)




  2     Break – major bone or serious illness (permanent)
        Injuries: fractures to arms, legs, dislocation of shoulders, hips,
        sprains, strains, slipped disc, back injuries, noise induced hearing
        loss.
        Typical activities: being hit by a slow moving forklift truck
        (pedestrian); slips/trips; manual handling; noise level above 85 dB.

  4     Loss of 1 limb/eye or serious illness (temporary)
        Injuries: amputation of fingers (one or several), severe crushing
        injuries, second degree burns or extensive chemical burns, non-fatal
        electric shock, loss of consciousness, concussion.
        Typical activities: intervention on running machinery (coaters,
        presses); acid or caustic handling; use of low voltage electrical
        equipment.

  8     Loss of 2 limbs/eyes or serious illness (permanent)
        Injuries: asthma, cancer, coma, third degree burns.
        Typical activities: scrap compactors; contact with sensitisers;
        serious fire.


Table 3.3: Guidelines for evaluating degree of possible harm (table 2 of 3)




  15    Fatality
        Injuries: immediate death, or death after prolonged treatment or
        illness.
        Typical activities: working at any height over 2 m; work in confined
        spaces where breathing apparatus is needed; collision between
        pedestrians and lorries; electrocution; falling into deep water or
        into chemical tanks; motor accidents as driver or passenger; being
        crushed by large falling objects (eg. tinplate coil); overturning
        forklift truck (driver); being hit by a fast moving forklift truck;
        palletisers (trapping in hoist area); long term exposure to asbestos
        (carcinogen); explosion.


Table 3.4: Guidelines for evaluating degree of possible harm (table 3 of 3)




Likelihood of occurrence

Two factors can help to choose an appropriate likelihood score:

   • Accident history – do we know that accidents relating to this activity
     occur regularly within the Company, or throughout the industry generally?

   • The existing controls in place (see table 3.5)

The first column in table 3.5 shows how the scores can be interpreted to
reflect the probability levels of simpler risk scoring schemes. One such
scheme is the method previously used at the Site. The first two categories
correspond to the lowest probability in the risk matrix approach, the next
three to medium probability, and the last three to the highest probability.

 LOW (1)
   0.033   Almost impossible (possible only under extreme circumstances):
           interlocked guards in place, purpose-designed equipment in use
           (e.g. an elevated platform with harness), traffic management
           fully implemented.
   0.5     Highly unlikely (though conceivable): all legally required and
           best-practice controls in place; the employee would have to
           remove or circumvent a control to be injured.

 MEDIUM (2): adjustable guards in place, PPE and SSW in use, basic
 walkway marking.
   1       Unlikely (but could occur)
   2       Possible (but unusual)
   5       Even chance (could happen)

 HIGH (3): no guards; safety relies on the operator's competence and
 training.
   8       Probable (not surprised)
   10      Likely (only to be expected)
   15      Certain (no doubt)

        Table 3.5: Guidelines for evaluating existing controls in place
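
The grouping from the eight likelihood scores to the three probability
levels of the previous scheme can be sketched in code. This is an
illustrative mapping only; the function name and threshold values are my
own, chosen to reproduce the grouping stated in the text (first two
scores lowest, next three medium, last three highest):

```python
def previous_scheme_category(score: float) -> int:
    """Map a likelihood score from table 3.5 to the three-level
    probability scale of the Site's previous scheme.

    Per the text: scores 0.033 and 0.5 -> LOW (1); 1, 2 and 5 ->
    MEDIUM (2); 8, 10 and 15 -> HIGH (3).  The cut-off values below
    are chosen only to reproduce that grouping.
    """
    if score <= 0.5:
        return 1  # LOW
    if score <= 5:
        return 2  # MEDIUM
    return 3      # HIGH

# The eight likelihood scores of table 3.5:
scores = [0.033, 0.5, 1, 2, 5, 8, 10, 15]
print([previous_scheme_category(s) for s in scores])
# [1, 1, 2, 2, 2, 3, 3, 3]
```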



3.4.2     Method previously used at the Site
Before the Company required all of its sites to use the methodology
described above, a similar method was used to assess the risks at the
Site. There were two reasons for replacing the previous method with the
current one. First, the Company required the assessment teams to enter
all the data into the Company Risk Assessment Database, and the old
method did not evaluate all the required parameters. Secondly, the
method seemed to be inaccurate in distinguishing severe risks from less
severe ones.
The method was based on a simple risk matrix, in which the assessment
team selected values for consequence and probability from three
categories each. The values for probability and consequence are
displayed in the table below.
The resulting severity of the risk follows from a risk matrix (Figure 3.4).

                 Probability    Consequence
                 1. Unlikely    1. Mild (e.g. scratch or bruise)
                 2. Possible    2. Harmful
                 3. Probable    3. Serious (permanent damage)


    The notation of the severity is as follows:




                     Figure 3.4: Severity of risk in a risk matrix

       • N = negligible, there is very little risk to health and safety, no control
         measures needed
       • L = low but significant, contains hazards that need to be recognised,
         control measures should be considered
       • H = high, potentially dangerous hazards which require immediate con-
         trol measures
       • U = unacceptable, the task/operation in question is discontinued until
         the hazard is dealt with
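
Since the exact cell assignments of Figure 3.4 are not legible in this
copy, the lookup below uses one plausible 3 × 3 assignment purely for
illustration; only the four severity labels (N, L, H, U) and the 1–3
scales come from the text:

```python
# One plausible 3x3 risk matrix in the spirit of Figure 3.4.
# NOTE: the exact cell values of the original figure are not reproduced
# in this text, so the assignments below are illustrative only.
RISK_MATRIX = {
    (1, 1): "N", (1, 2): "L", (1, 3): "L",
    (2, 1): "L", (2, 2): "L", (2, 3): "H",
    (3, 1): "L", (3, 2): "H", (3, 3): "U",
}

def severity(probability: int, consequence: int) -> str:
    """Look up the severity (N/L/H/U) for a probability and a
    consequence value, each on the 1-3 scale of the previous method."""
    return RISK_MATRIX[(probability, consequence)]

print(severity(1, 1))  # N: negligible
print(severity(3, 3))  # U: unacceptable
```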

Chapter 4

Risk control and regulation

After risks have been assessed, it is important to control them. Risk
control covers the methods used to eliminate or manage risks:

   • Avoidance: identifying and implementing alternative procedures or
     activities to eliminate the risk.

   • Contingency: having a pre-determined plan of action to come into force
     as and when the risk occurs.

   • Prevention: employing countermeasures to stop a problem from occur-
     ring or having impact on an organisation.

   • Reduction: taking action to minimise either the likelihood of the risk
     developing, or its effects.

   • Transference: transferring the risk to a third party, for example with
     an insurance policy.

   • Acceptance / Retention: tolerating the risk when its likelihood and im-
     pact are relatively minor, or when it would be too expensive to mitigate
     it.
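
The trade-off behind the Acceptance option, tolerating a risk when
mitigating it would cost more than the expected loss, can be sketched as
follows. This is a simplified illustration, not the Company's actual
decision procedure; the function name and all figures are hypothetical:

```python
def choose_control(likelihood: float, impact: float,
                   mitigation_cost: float) -> str:
    """Illustrative decision rule for Acceptance versus Reduction.

    Accept the risk when the expected loss (likelihood x impact) is
    below the cost of mitigating it; otherwise reduce it.  A sketch of
    the cost trade-off described in the text, nothing more.
    """
    expected_loss = likelihood * impact
    return "accept" if expected_loss < mitigation_cost else "reduce"

# A risk with a 5 % yearly likelihood and a 10 000 EUR impact has an
# expected loss of 500 EUR per year:
print(choose_control(0.05, 10_000, 800))  # accept (mitigation costlier)
print(choose_control(0.05, 10_000, 300))  # reduce (mitigation cheaper)
```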

The Company has employed several of the above-mentioned methods in
different forms. All safeguarding of machinery and strengthening of
existing barriers aim at the avoidance of risk, although the result is
usually a reduction in its likelihood. Another widely used method is
prevention, which takes the form of regulation and standards.
In this chapter, I examine the various methods of controlling risks:
practical (physical and behavioural) risk controls aiming at reduction
and avoidance, and regulatory standards that aim at prevention. In the
case of Finland, three levels of such standards are examined: the
European level (the EU), the national level (Finland), and the corporate
level (the Company).

Machinery Safety Risk Assessment of a Metal Packaging Company

  • 1. AB HELSINKI UNIVERSITY OF TECHNOLOGY Faculty of Electronics, Communications and Automation Teppo-Heikki Saari Machinery Safety Risk Assessment of a Metal Packaging Company Master’s Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Technology Espoo, December 15, 2009 Supervisor: Professor Jouko Lampinen Instructor: M.Sc. Hanna N¨atsaari a¨
  • 2. Teknillinen korkeakoulu ¨ ¨ Diplomityon tiivistelma Elektroniikan, tietoliikenteen ja automaation tiedekunta Tekij¨: a Teppo-Heikki Saari Osasto: Elektroniikan ja s¨hk¨tekniikan osasto a o P¨¨aine: aa Laskennallinen tekniikka Sivuaine: Systeemi- ja operaatiotutkimus Ty¨n nimi: o Pakkausmateriaalitehtaan koneturvallisuuden riskiarviointi Ty¨n nimi englanniksi: o Machinery Safety Risk Assessment of a Metal Packaging Company Professuurin koodi ja nimi: S-114 Laskennallinen tekniikka Ty¨n valvoja: o Prof. Jouko Lampinen Ty¨n ohjaaja: o FM Hanna N¨¨tsaari aa Tiivistelm¨ a EU:n ja Suomen ty¨turvallisuuslains¨¨d¨nt¨ velvoittaa ty¨nantajaa arvioimaan ty¨ymp¨rist¨n o aa a o o o a o riskit ty¨kyvyn turvaamiseksi ja yll¨pit¨miseksi. Vaikka vaade ty¨olosuhteiden parantamiseksi on o a a o lains¨¨d¨nn¨n kautta asetettu, eiv¨t kaikki yritykset Suomessa sit¨ noudata. Erityisesti pienten aa a o a a ja keskisuurten yritysten ongelmana ovat olleet resurssien ja helppok¨ytt¨isten, selkeit¨ tuloksia a o a tuottavien metodien puute. T¨ss¨ ty¨ss¨ selvitet¨¨n mink¨laisia k¨sitteit¨ turvallisuuteen ja riskiarviointiin yleisesti liittyy, a a o a aa a a a sek¨ mink¨laisia metodeita riskej¨ ja ihmisten tekemi¨ virheit¨ arvioitaessa yleisesti k¨ytet¨¨n. a a a a a a aa Lis¨ksi t¨ss¨ ty¨ss¨ arvioidaan pakkausmateriaalitehtaan riskej¨ k¨ytt¨m¨ll¨ er¨st¨ menetelm¨¨, ja a a a o a a a a a a a a aa tutkitaan mink¨laisia tuloksia menetelm¨ tuottaa sek¨ mitk¨ tekij¨t vaikuttavat riskiarviointiproses- a a a a a siin yleisesti. Riskin k¨sitteeseen sis¨ltyy vaaran toteutumisen todenn¨k¨isyys. T¨ss¨ ty¨ss¨ tehtaalla esiintyvien a a a o a a o a riskien arviointiin k¨ytetty menetelm¨ perustuu asiantuntija-arvioihin, jolloin arvioinnin tulokset ovat a a luonteeltaan subjektiivisia. Menetelm¨ voikin antaa hyvin erilaisia tuloksia riippuen arvioinnin suorit- a tajasta. 
Suuret vaihtelut tuloksissa johtavat ep¨varmuuteen siit¨, mitk¨ vaarat tehtaalla ovat kaikkein a a a suurimpia, ja n¨inollen arvioinnin pohjalta teht¨v¨t – mahdollisesti kalliit – p¨¨t¨kset eiv¨t ole a a a aa o a tehty k¨ytt¨en tarkinta mahdollista tietoa ty¨ymp¨rist¨n turvallisuuden tilasta. T¨t¨ ep¨varmuutta a a o a o aa a voidaan pienent¨¨ selkiytt¨m¨ll¨ toimintatapoja ja parantamalla menetelm¨n dokumentaatiota. aa a a a a Riskej¨ on mahdollista hallita usein eri keinoin. Lains¨¨d¨nn¨lliset keinot pyrkiv¨t pienent¨m¨¨n a aa a o a a aa olemassaolevia riskej¨ ja ehk¨isem¨¨n uusia syntym¨st¨. Fyysiset keinot pyrkiv¨t suojaamaan a a aa a a a k¨ytt¨j¨¨ v¨litt¨m¨sti toiminnan aikana. Johtuen riskin ja turvallisuuden subjektiivisesta luonteesta, a a aa a o a selkein ja kustannustehokkain tapa pienent¨¨ riskej¨ on turvallisuusilmapiirin parantaminen vaikut- aa a tamalla ty¨ntekij¨n toimiin muuttamalla h¨nen k¨ytt¨ytymismallejaan. Erilaiset ’behavioural safety’ o a a a a -ohjelmat ovatkin suurten organisaatioiden turvallisuuskulttuurin keskeisimpi¨ osia. a Sivum¨¨r¨: 114 aa a Avainsanat: Koneturvallisuus, Riskiarviointi T¨ytet¨¨n tiedekunnassa a aa Hyv¨ksytty: a Kirjasto:
  • 3. Helsinki University of Technology Abstract of master’s thesis Faculty of Electronics, Communications and Automation Author: Teppo-Heikki Saari Department: Department of Electrical Engineering Major subject: Computational Science Minor subject: Systems and Operations Research Title: Machinery Safety Risk Assessment of a Metal Packaging Company Title in Finnish: Pakkausmateriaalitehtaan koneturvallisuuden riskiarviointi Chair: S-114 Computational sciences Supervisor: Prof. Jouko Lampinen Instructor: M.Sc. Hanna N¨¨tsaari aa Abstract: The occupational health and safety legislation of the EU and Finland require employers to assess work environment risks in order to secure and maintain the employees’ working capacity. Although the requirement comes through the use of legislation, it is not fulfilled by every entrepreneur in Finland. Especially the small and middle-sized companies have had a problem with the lack of resources and of easily applicable and productive methodology. The aim of this study is to find out what kind of concepts are generally related to safety and risk assessments, and what kind of methods are used to assess risk and human error. In addition, risks in a packaging materials factory were assessed by using a certain method, and the results and factors generally affecting the risk assessment process were analysed in this thesis. The probability of hazard realisation is included in the concept of risk. The method used to assess the risks at the site is based on expert judgement, which implies that the assessment results are subjective in nature. The method can produce very different results depending on the assessor. Great variation in results lead to uncertainty in hazard ranking, and it has an effect on the subsequent – possibly costly – decisions that have not been made based on the most accurate information about the safety situation of work environment. 
This uncertainty can be reduced by clarifying operational modes and by improving method documentation. It is possible to control risks in many different ways. Regulational controls aim at reducing existing risks and preventing new ones. Physical controls directly protect the operator during the operation. Due to the subjective nature of risk and safety, the most clear and cost-effective way of reducing risk is improving safety climate through affecting employee actions by changing his or her behaviour patterns. Various behavioural safety programs are a central part of safety culture in large organisations. Number of pages: 114 Keywords: Machinery safety, Risk assessment Department fills Approved: Library code:
  • 4. - 3
  • 5. He who knows and knows he knows, He is wise – follow him; He who knows not and knows he knows not, He is a child – teach him; He who knows and knows not he knows, He is asleep – wake him; He who knows not and knows not he knows not, He is a fool – shun him. — Arabian proverb Science perishes by systems that are nothing but beliefs; and Faith succumbs to reasoning. For the two Columns of the Temple to uphold the edifice, they must remain separated and be parallel to each other. As soon as it is attempted by violence to bring them together, as Samson did, they are overturned, and the whole edifice falls upon the head of the rash blind man or the revolutionist whom personal or national resentments have in advance devoted to death. — Albert Pike
  • 6. Preface I wish to express my gratitude to all of those who made this thesis possible. In Helsinki, December 6, 2009 Teppo-Heikki Saari ii
  • 7. Contents Preface ii Abbreviations vi 1 Introduction 1 1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1.1 The Site and its operations . . . . . . . . . . . . . . . 2 1.2 Research questions and structure . . . . . . . . . . . . . . . . 3 2 Overview of risk assessment concepts 4 2.1 Basic definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2.1.1 Risk, hazard, mishap, accident, incident . . . . . . . . 4 2.1.2 Categorisation of risk . . . . . . . . . . . . . . . . . . . 5 2.2 Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.2.1 Categorisation and taxonomy . . . . . . . . . . . . . . 8 2.2.2 Major error types of interest . . . . . . . . . . . . . . . 9 2.3 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.3.1 Approaches to safety . . . . . . . . . . . . . . . . . . . 11 2.3.2 Safety hindrances . . . . . . . . . . . . . . . . . . . . . 12 2.3.3 Safety facilitators . . . . . . . . . . . . . . . . . . . . . 15 2.4 Human factors . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3 Overview of risk assessment methods 19 3.1 Probabilistic risk assessment . . . . . . . . . . . . . . . . . . . 20 3.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 20 3.1.2 Defining objectives and methodology and gathering in- formation . . . . . . . . . . . . . . . . . . . . . . . . . 21 iii
  • 8. 3.1.3 Identification of initiating events . . . . . . . . . . . . . 21 3.1.4 Scenario development . . . . . . . . . . . . . . . . . . . 22 3.1.5 Logic modelling . . . . . . . . . . . . . . . . . . . . . . 22 3.1.6 Failure data analysis . . . . . . . . . . . . . . . . . . . 22 3.1.7 Sensitivity analysis . . . . . . . . . . . . . . . . . . . . 23 3.1.8 Risk acceptance criteria . . . . . . . . . . . . . . . . . 23 3.1.9 Interpretation of results . . . . . . . . . . . . . . . . . 25 3.2 Human reliability analysis . . . . . . . . . . . . . . . . . . . . 26 3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 26 3.2.2 Task analysis . . . . . . . . . . . . . . . . . . . . . . . 27 3.2.3 Database methods . . . . . . . . . . . . . . . . . . . . 27 3.2.4 Expert judgement . . . . . . . . . . . . . . . . . . . . . 27 3.2.5 Technique for Human Error Rate Prediction (THERP) 28 3.3 Other risk and error assessment methods . . . . . . . . . . . . 29 3.3.1 Five steps to risk assessment . . . . . . . . . . . . . . . 29 3.4 Method used by the Company . . . . . . . . . . . . . . . . . . 30 3.4.1 Risk rating . . . . . . . . . . . . . . . . . . . . . . . . 32 3.4.2 Method previously used at the Site . . . . . . . . . . . 37 4 Risk control and regulation 39 4.1 Physical risk controls . . . . . . . . . . . . . . . . . . . . . . . 40 4.2 Behavioural safety . . . . . . . . . . . . . . . . . . . . . . . . 43 4.3 Regulatory standards in the EU . . . . . . . . . . . . . . . . . 44 4.3.1 The structure of European harmonised standards . . . 45 4.3.2 The European Machinery Directive . . . . . . . . . . . 47 4.4 Regulatory standards in Finland . . . . . . . . . . . . . . . . . 48 4.5 Regulatory standards in the Company . . . . . . . . . . . . . 50 4.5.1 The Company Directives . . . . . . . . . . . . . . . . . 50 4.5.2 OHSAS 18000 . . . . . . . . . . . . . . . . . . . . . . . 51 5 Case study 52 5.1 Analysis of current safety situation in the Company . . . . . . 
52 5.1.1 Accident statistics . . . . . . . . . . . . . . . . . . . . 52 5.1.2 Safety culture and climate . . . . . . . . . . . . . . . . 54 iv
  • 9. 5.1.3 Safety limitations at the Site . . . . . . . . . . . . . . . 57 5.2 Assessing risks with the Company method . . . . . . . . . . . 58 5.2.1 Drum line packaging area . . . . . . . . . . . . . . . . 58 5.2.2 Manually operated slitters and power presses . . . . . . 60 5.2.3 73mm/99mm tin can manufacturing line (CN02) . . . . 61 5.2.4 Machine tools at maintenance department . . . . . . . 62 6 Discussion 64 6.1 Issues encountered during the assessment process . . . . . . . 64 6.2 Comparison and critique of the methods . . . . . . . . . . . . 65 6.3 Analysis of the results . . . . . . . . . . . . . . . . . . . . . . 67 6.3.1 Are the results valid? . . . . . . . . . . . . . . . . . . . 69 6.4 Addressing the issues encountered during the assessment . . . 70 7 Conclusion 73 References 74 Appendix 79 A Appendices 79 A.1 Safe system of work instructions for surface grinder . . . . . . 80 A.2 Modified Company method risk scoring components . . . . . . 81 B Risk assessment results 82 v
  • 10. Abbreviations ALARP As Low As Reasonably Practicable CCF Common Cause Failure DPH Degree of Possible Harm EEM External Error Mode EHS Environment, Heath and Safety EOC Error of Commission FE Frequency of Exposure FMEA Failure Mode and Effect Analysis FTA Fault Tree Analysis HEA Human Error Analysis HEP Human Error Probability HRA Human Reliability Analysis LO Likelihood of Occurrence LWDC Lost Work Day Case MRO Maintenance, repair and operations NP Number of People at Risk OHCA Occupational Health Care Act OSHA Occupational Safety and Health Act PEM Psychological Error Mechanism PPE Personal Protection Equipment PRA Probabilistic Risk Assessment PSA Probabilistic Safety Assessment PSF Performance Shaping Factor RCAP Risk Control Action Plan RCD Residual Current Device RHT Risk Homeostasis Theory RR Risk Rating SRK Skill, rule and knowledge THERP Technique for Human Error Rate Prediction vi
  • 11. Chapter 1 Introduction 1.1 Background Since the days of the Renaissance, when gambler Girolamo Cardano (1500- 1571 AD) took the first steps in the development of statistical principles of probability, and shortly after that Blaise Pascal and Pierre de Fermat (1654 AD) created the theory of probability by solving Paccioli’s puzzle, the con- cept of risk has gone through several phases of evolution and it is nowadays widely applied in nearly every facet of life. [3] Global competition has lead to higher demands on production systems. End customer satisfaction is dependent on the production systems’ capability to deliver goods and services that meet certain quality requirements. To do so the systems must be fit for use and thereby fulfil important quality parame- ters. One such parameter is safety. It is human to make mistakes and in any task, no matter how simple, errors will occur. The frequency at which the errors occur depends on the nature of the task, the systems associated with the task and the influence of the envi- ronment in which the task is carried out. Providing safe equipment through design and safe work environment through regulation and practises is the key to reducing risk and removing occupational hazards in process industry. The technology of safety-related control systems plays a major role in the provision of safe working conditions throughout industry. Regulations re- quire that suppliers and users of machines in all forms from simple tools to automated manufacturing lines take all the necessary steps to protect work- ers from injury due to the hazards of using machines. It is through the usage of scientific methods that allow us to comprehensively identify the risks re- lated to working with machinery and to estimate what can go wrong during the process. It has been recognised by many authorities that safety should be number one priority of the industry. 
Yet, in many cases companies tend to cut resources from risk assessment and a thorough analysis is never conducted. Although 1
  • 12. the knowledge is readily available for use, many Finnish companies – espe- cially small and middle-sized ones – do not conduct risk assessments. One of the aims of this thesis is to examine do the risk assessment methods work and what kind of results they give. My thesis studies various methods of assessing risks in a manufacturing plant. My objective of the thesis to assess production process risks in Crown Pakkaus Oy (hereafter referred to as the Site), a speciality packing company part of CROWN Cork & Seal (hereafter referred to as the Company), as a case study. I have restricted the scope of the thesis to risk analysis of ma- chinery. Also, the other main objective is to give the reader a picture of the key elements in the field of risk analysis and assessment. These include basic concepts, methodology, and legislation. 1.1.1 The Site and its operations The Site’s history goes back to the year 1876, when the family of G.W. Sohlberg began tinsmith products manufacturing in the Helsinki city area. Manufacturing of cans out of lacquered tinplates began in 1909 and the re- quired machines for the printing of sheets were acquired in 1912. Premises became too small for the business, and the company’s operations were trans- ferred in 1948 to the existing factory premises in Herttoniemi, Helsinki. The company acquired the first automated canline in 1959, began manufacturing drums in 1964, and transferred to the welded cans among the first in Europe in 1970. In 1993 the Site was merged into the Europe’s largest packaging company Carnaud Metalbox, and later from 1996 onwards the Site has been a part of the world’s leading packaging industry group, Crown Holdings, with headquarters in the USA. In 1998 the Site started using the current name, Crown Pakkaus Oy. The Site’s clients are major chemistry and food processing companies from Finland and from the neighbouring areas. The clients in the field of chem- istry are mainly paint, lubricant and chemical companies. 
The most important food clients are canning and vegetable companies. The range of packaging manufactured at the Site is wide: it covers paint pails from 1/3 litre to 20 litres, drums of 200 litres, chemical pails from 34 litres to 68 litres, food cans from 73 mm to 99 mm in diameter, and seasonal cans from 155 mm to 212 mm in diameter. The slowest manufacturing process is capable of producing 6 pieces per minute, the fastest 400 pieces per minute.

The Site operations can be categorised as follows: pre-printing, printing, manufacturing, storage and maintenance. For the processing of colour-print data, the Site has digital reprography equipment and the equipment for manufacturing print film and printing plates. There are three lacquering lines and three two-colour sheet offset printing lines at the print shop. For packaging manufacturing, the Site has eight automated welding lines, 10
lines for manufacturing tin can lids/ends, and several individual manually operated machines, e.g. power presses and slitters.

1.2 Research questions and structure

The thesis aims at answering the following questions:

• What kind of concepts does the field of risk management deal with?
• What kind of risks can be found in an industrial environment and production processes?
• What kind of methods can be used to assess risks?
• What matters affect the risk assessment process?
• How is it possible to reduce the probability of occurrence of risks?

The structure of the thesis is as follows. In the second chapter I examine different concepts related to risk and safety analysis. In the third chapter I examine various aspects of risk assessment methods. In Chapter 4 I review some methods of controlling risks, including physical controls and regulatory controls; my analysis of regulations takes into account three viewpoints: the EU, the Finnish Government, and the Company. The Company case study is introduced in Chapter 5. Beginning with a short description of what the Company does and the main purpose of the thesis, I then present the results of the risk assessment, carried out using the Company method. In Chapter 6 I discuss the results. Chapter 7 presents the conclusions.
Chapter 2
Overview of risk assessment concepts

2.1 Basic definitions

2.1.1 Risk, hazard, mishap, accident, incident

The field of risk analysis contains several concepts that are defined in various ways depending on the author or researcher. This chapter presents definitions of the concepts I have used in this thesis. I have chosen the definitions for their clarity, intelligibility and unambiguity; the exact wording of the concepts varies depending on the author.

Risk is a measure of the potential loss occurring due to natural or human activities. Potential losses are adverse consequences of such activities in the form of loss of human life, adverse health effects, loss of property, and damage to the natural environment. [33]

Accident is an unintentional event which results or could result in an injury, whereas injury is a collective term for health outcomes from traumatic events [1]. Incident is an undesired event that almost causes damage or injury [16]. These are events to learn from before any damage has occurred.

Much of the wording is comparable to that found in military standard system safety requirements. In the system safety literature, writers trace the principles embodied in military standard system safety requirements to the work of aviation and space age personnel that commenced after World War II. The U.S. Government standards define concepts like mishap and risk in the following way.

Mishap is an unplanned event or series of events resulting in death, injury, occupational illness, or damage to or loss of equipment or property, or damage to the environment; also termed an accident. [34]

Risk is an expression of the impact and possibility of a mishap in terms of
potential mishap severity and probability of occurrence. [34]

Hazard is a condition that is a prerequisite for an accident. [38]

2.1.2 Categorisation of risk

The VTT Technical Research Centre of Finland has prepared a risk assessment toolkit for small and middle-sized enterprises [54]. The toolkit is available on the Internet and provides information on several types of risks and how to control them. The risks are classified from the point of view of a company and its business, thus taking a broader stand on different business risks. For the purpose of clarifying the concept of risk and its different aspects, I shall now present the different risk views mentioned in the toolkit.

Personnel risks

The term 'personnel risks' refers to risks to a company's operations that either concern or are caused by its personnel. At worst, these risks could mean a company completely losing the input of a key employee, or an employee deliberately acting against a company's interests. Personnel risks include:

• Fatigue and exhaustion
• Accidents and illnesses
• Obsolete professional skills
• Personal or employment-related disputes
• Unintended personal error
• Information leaks or theft

Small companies may be more vulnerable to personnel risks. Key expertise may rest with one person, one employee may have many areas of responsibility, or there may be no contingency arrangements in place.

Business risks

Business risks are related to business operations and decision-making, and they involve profit potential. A company can either be successful in its operations and make a profit, or fail and suffer losses. The information available for the assessment of business risks is difficult to use because business risks are often quite unique. In business, you must recognise profitable opportunities before others and react quickly, though decision-making
may be difficult due to the lack of precise information.

Business risks form an extensive field. Because of risk chains, the assessment has to reach even the most distant links in the supply chain. For instance, a fire at the plant of a network partner can cause interruptions that lead to a loss of sales income and clientele. Business risks may therefore arise from the company's own or external operations. The character of business risks depends on the company's field of operation and its size. The risks of a small company differ from those of a larger one operating in the same field. The only common factor is that, in the end, companies always bear the responsibility for business risks themselves and cannot take out insurance to cover them.

Agreements and liabilities

Agreements and making agreements are an essential part of business activity. An appropriate agreement clarifies the tasks, rights and responsibilities of the parties to the agreement. An agreement risk can be caused by the lack of an agreement or by deficiencies in an agreement, and can be related to issues such as the way an agreement was made, a partner in the agreement, making a quotation, general terms of agreement, contractual penalties/compensation, etc.

Information risks

Information risks have long been underestimated and inadequately managed. All companies have information that is critical to their operation, such as customer and production management information, product ideas, marketing plans, etc. There is a lot of information in different forms: personal expertise and experience-based knowledge, agreements, instructions, plans, other paper documents, and electronic data, e.g. customer, order and salary information.

Product risks

A company earns its income from its products and services. Launching products onto the market always involves risks. Errors in decision-making concerning products may prove very expensive.
These risks can be reduced through systematic risk management that covers the entire range of product operations and all product-related projects.
Environmental risks

Environmental risks refer to risks that can affect the health and viability of living things and the condition of the physical environment. Environmental risks can be caused by the release of pollutants to air, land or water. Environmental damage can also be caused by irresponsible use of energy and natural resources. Pollutants can include waste (controlled waste, special waste), emissions to air due to production or usage of the product (e.g. smoke, fumes, dusts, gases, etc.), releases to the ground and water systems (e.g. effluent, chemicals, oil/fuel discharges, etc.), noise (vibration, light, etc., if causing a nuisance), and radiation.

Environmental risks can be hidden and cause damage over a long period of time. A disused refuse dump can contaminate the ground around it. An environmental risk can also emerge suddenly, e.g. due to an accident. A chemical container that breaks during transport can result in the leakage of harmful substances into the ground, a water system, the air or a surface water drain.

Project risks

A project is a singular undertaking with an objective, schedule, budget, management and personnel. There are two main types of project:

• Delivery projects, in which a customer is promised the delivery of a product or a service by a defined date and under stipulated conditions.
• Development projects, in which, for instance, a new device is developed for a company's own use.

These project types are often combined in small and middle-sized enterprises. A typical project frequently calls for some development work or tailoring before the product or service intended to meet the customer's needs can be delivered. Projects are difficult and risky because each is unique, and so nearly everything is new, such as the workgroup, customer or product.
Projects are also subject to disturbances because there are usually several projects in progress in the same company, and they compete in importance as well as for resources – at worst interfering with each other.

Crime risks

Most crimes against companies are planned beforehand. Typically, a company becomes the object of a crime because criminals see it as a suitable target. In addition to preventing costs caused by crime, the management of crime risks also helps in the management of a company's other risks. Structural protection and alarm systems can prevent fire and information risks as
well as property risks. At the same time, indirect costs caused by interruptions in production, cleaning up the consequences of vandalism and delayed deliveries are prevented.

2.2 Error

The term error refers strictly to human actions, in contrast to risk or hazard, which may be due to circumstances and environment when no human has contributed to the situation. A human error is an unintended failure of a purposeful action, either singly or as part of a planned sequence of actions, to achieve an intended outcome within set limits of tolerability pertaining to either the action or the outcome. [55]

There are three major components to an error [24]:

• External Error Mode (EEM) is the external manifestation of the error (e.g. closed wrong valve).
• Performance Shaping Factors (PSF) influence the likelihood of the error occurring (e.g. quality of the operator interface, time pressure, training, etc.).
• Psychological Error Mechanism (PEM) is the 'internal' manifestation of the error – how the operator failed, in psychologically meaningful terms (e.g. memory failure, pattern recognition, etc.).

2.2.1 Categorisation and taxonomy

The skill, rule and knowledge (SRK) based taxonomy was developed by Rasmussen [40] and has since been widely adopted as a model for describing human performance in a range of situations.

Skill based behaviour represents the most basic level of human performance and is typically used to complete familiar and routine tasks that can be carried out smoothly in an automated fashion without a great deal of conscious thought. The tasks that can be carried out using this type of behaviour are so familiar that little or no feedback of information from the external or work environment is needed to complete them successfully. A typical range of error probability for skill based tasks is from as high as 0.005 (alternatively expressed as 5.0E-03), or 1 error in 200 tasks, to as low as 0.00005 (5.0E-05), or 1 error in 20,000 tasks, on average.
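The nominal human error probability (HEP) bands quoted in this section for the three SRK levels can be summarised in a short sketch. This is only an illustration of the figures given in the text; the dictionary and function names are my own:

```python
# Nominal HEP bands for the SRK taxonomy as quoted in the text:
# (lower bound, upper bound) of error probability per task.
SRK_HEP_BANDS = {
    "skill":     (5.0e-05, 5.0e-03),   # 1 in 20,000 .. 1 in 200 tasks
    "rule":      (5.0e-04, 5.0e-02),   # 1 in 2,000  .. 1 in 20 tasks
    "knowledge": (5.0e-03, 5.0e-01),   # 1 in 200    .. 1 in 2 tasks
}

def errors_per_n_tasks(hep: float) -> str:
    """Express a HEP as '1 error in N tasks'."""
    return f"1 error in {round(1.0 / hep):,} tasks"

for level, (low, high) in SRK_HEP_BANDS.items():
    print(f"{level:9s} {errors_per_n_tasks(high)} .. {errors_per_n_tasks(low)}")
```

Note how each level's band is roughly an order of magnitude higher than the previous one, which is the point the text makes about rule based and knowledge based behaviour.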
[15]

Rule based behaviour is adopted when it is required to carry out more complex or less familiar tasks than those using skill based behaviour. The task
is carried out according to a set of stored rules. Although these rules may exist in the form of a set of written procedures, they are just as likely to be rules that have been learned from experience or through formal training and which are retrieved from memory at the time the task is carried out. Error probability values for rule based tasks are typically an order of magnitude higher than for skill based tasks. They lie within the range from 0.05 (5.0E-02), or 1 error in 20 tasks, to 0.0005 (5.0E-04), or 1 error in 2,000 tasks, on average. [15]

Knowledge based behaviour is adopted when a completely novel situation is presented for which no stored rules, written or otherwise, exist and yet which requires a plan of action to be formulated. While there is clearly a goal to be achieved, the method of achieving it will effectively be derived from first principles. Once a plan or strategy has been developed, this will be put into practice using a combination of skill and rule based actions, the outcome of which will be tested against the desired goal until success is achieved. Knowledge based tasks have significantly higher error probabilities than either skill or rule based tasks, mainly because of the lack of prior experience and the need to derive solutions from first principles. Error probability values vary from 0.5, or 1 error in 2 tasks, to 0.005 (5.0E-03), or 1 error in 200 tasks, on average. [15]

2.2.2 Major error types of interest

In contrast to the rough labelling of errors according to their probability of occurrence given by the SRK taxonomy, it is also possible to categorise errors by their nature of occurrence, i.e. their root cause. The following categorisation of different error types is adapted from [24].

• Slips and lapses (action execution errors): The most predictable errors, usually characterised by being simple errors of quality of performance or by being omission or sequence errors. A slip is a failure of execution as planned (e.g.
too much or too little force applied). A lapse is an omission to execute an action as planned due to a failure of memory or storage (e.g. task steps carried out in the wrong sequence).

• Diagnostic and decision-making (cognitive) errors: These relate to a misunderstanding, by the operators, of what is happening in the system, and they are usually due to insufficient operator support (design, procedures and training). Such errors have an ability to alter accident progression sequences and to cause failure dependencies between redundant and even diverse safety and backup technical systems. This type of error includes misdiagnosis, partial diagnosis and diagnostic failure.

• Maintenance errors and latent failures: Most maintenance errors are due to slips and lapses in maintenance and testing activities, which
may lead to immediate failures or to latent failures whose impact is delayed (and thus may be difficult to detect prior to an accident sequence). Most probabilistic safety assessments (PSAs) assume that maintenance failures are implicitly included in component and system availability data. However, it is less clear that the maintenance data used in the PSA can incorporate the full impact of latent failures.

• Errors of commission (EOC): An EOC is one in which the operator does something that is incorrect and also unrequired. Such errors can arise from carrying out actions on the wrong components, or can be due to a misconception or to a risk recognition failure. EOCs can have a large impact on system risk and are very difficult to identify (and hence anticipate and defend against).

• Rule violations: There are two main types of violations (Reason 1990): the 'routine' rule violation, where the violation is seen as being of negligible risk and therefore as acceptable and even a necessary pragmatic part of the job, and the 'extreme' violation, where the risk is largely understood as being real, as is the fact that it is a serious violation. Rule violations are relatively unexpected and can lead to failure of multiple safety systems and barriers. PSAs rarely include violations quantitatively.

• Idiosyncratic errors: Errors due to social variables and the individual's current emotional state when performing a task. They are the result of a combination of fairly personal factors in a relatively unprotected and vulnerable organisational system. Some accidents fall into this category, and they are extremely difficult to predict, as they relate to covert social factors not obvious from a formal examination of the work context. These errors are of particular concern where, for example, a single individual has the potential to kill a large number of persons. They are not dealt with in PSA or human reliability analysis (HRA).
• Software programming errors: These errors are of importance due to the prevalence of software-based control systems required to economically control large complex systems. They are also important in other areas and for any safety-critical software applications generally. Typically there are few, if any, techniques applied which predict human errors in software programming. Instead, effort is spent on verifying and validating software to show it is error-free. Unfortunately, complete and comprehensive verification of very large pieces of software is intractable due to software complexity and interactivity.

Whittingham [55] divides the root causes of human errors into two categories, externally induced and internally induced errors. Externally induced human errors are caused by factors that have a common influence on two or more
tasks, leading to dependent errors which may thus be coupled together. Examples of these adverse circumstances are deficiencies in the organisation of the task, poor interface design, inadequate training, and excessive task demands. Internally induced errors are sometimes called 'within-person dependency'. They are found in the same individual carrying out similar tasks which are close together in time or space.

2.3 Safety

Safety may be the absence of accidents or threats, or it can be seen as the absence of risks, which for some is unrealistic. It may also be the balance between safety and risks, i.e. an acceptable level of risk. [16] It is thus possible to have a high risk level but even higher safety. Rochlin [45] argues that "the 'operational safety' is not captured as a set of rules or procedures, of simple, empirically observed properties, of externally imposed training or management skill, or of a decomposable cognitive or behavioural frame".

Safety is related to external threats, and the perception of being sheltered from threats. Safety is not the opposite of risk but rather of fear, including a subjective dimension, but it does not encompass positive health or aim at something beyond prevention. Defining an organisation as safe because it has a low rate of errors or accidents has the same limitation as defining health as not being sick. [44] Safety may be seen as an important quality of work regardless of the frequency of accidents, by regarding safety as larger than just the absence of risk or fear. [1]

2.3.1 Approaches to safety

Technical approach

The engineering approach focuses on the development of formal reliability and systems modelling, with only limited attention to some of the complexities of the human issues involved. [39] The risk is viewed as deriving from the technical/physical environment. Technicians are the ones doing safety work, and changes in the technical environment are the way to reduce accidents.
A common means for technical safety is passive prevention, which means that safety should be managed without the active participation of humans. By means of safety rounds, audits, accident investigations, and risk and safety analyses, it is presumed possible to measure the level of safety within the organisation. The result is then analysed, providing a basis for formulating action plans and making decisions to reach the target level of safety. Standards and routines offer assurances that the safety activities are good enough. [11]
Psychological approach

The psychological approach to risk and safety focuses on the individual perspective, investigating perception, cognition, attitudes and behaviour. [39] Some researchers have studied how people estimate risks and make choices among alternatives (e.g. [48]). "Risk is largely seen as a taken-for-granted objective phenomenon that can be accurately assessed by experts with the help of scientific methods and calculations. The phenomenon to be explained is primarily the malleability of risk perceptions." [51] Individuals' perceptions of risk are influenced by the arguments concerning hazards that are prevalent in a particular society at a certain time. All organisations operate with a variety of beliefs and norms with respect to hazards and their management, which might be formally laid down in rules and procedures, or more tacitly taken for granted and embedded within the culture of everyday working practices. Organisational culture may be expressed through shared practices. The process by which culture is created and constructed should be borne in mind when organising everyday work. [43]

2.3.2 Safety hindrances

Control and power

Many of today's safety management systems are built on control. Managing risk through control does not take into account the fact that individuals are intentional in how they define and carry out tasks. Döös and Backström [11] state that production problems which call for corrections in a hazardous zone may be impossible to handle. The machinery or safety rules may not be flexible when changes in production are required. Production is usually considered more important than safety. The question of politics and power is not addressed in most models and discussions. The myth of individual control leads to a search for someone to blame instead of searching for the causes of accidents.
[39] It is therefore important to ask who is defining the risk, the safety and the accident, and who is responsible for the consequences. Does responsibility for risk mean responsibility for errors? Whittingham describes the concept of a blame culture:

"Companies and/or industries which over-emphasise individual blame for human error, at the expense of correcting defective systems, are said to have a 'blame culture'. Such organisations have a number of characteristics in common. They tend to be secretive and lack openness, cultivating an atmosphere where errors are swept under the carpet. Management decisions affecting staff tend to be taken without staff consultation and have the appearance of being arbitrary. The importance of people to the success
of the organisation is not recognised or acknowledged by managers, and as a result staff lack motivation. Due to the emphasis on blame when errors are made, staff will try to conceal their errors. They may work in a climate of fear and under high levels of stress. In such organisations, staff turnover is often high, resulting in tasks being carried out by inexperienced workers. The factors which characterise a blame culture may in themselves increase the probability of errors being made." [55]

Work stress

One of the most important situational moderators of stress is perceived control over the environment. Karasek [22] introduced the job demand-control model, stating that jobs which combine high job demands with low levels of control (e.g. repetitive assembly line work) create strain. Control in this model means (1) having the power to make decisions on the job (decision authority) and (2) having use for a variety of skills in the work (skill discretion). Stress is the overall transactional process, stressors are the stimuli that are encountered by the individuals, and strains are the psychological, physical and behavioural responses to stressors. These factors are intrinsic to the job itself and include variables such as the level of job complexity, the variety of tasks performed, the amount of control that individuals have over the place and timing of their work, and the physical environment in which the work is performed.

Stress can also be related to roles in the organisation. Dysfunctional roles can occur in two primary ways: role ambiguity, a lack of predictability of the consequences of one's role performance and a lack of information needed to perform the role; and role conflict, competing or conflicting job demands. The association between role conflict and psychosocial strain is not as strong as that between ambiguity and strain.
[6]

Conflict between safety and production goals

A constant demand for effective resource allocation and short-term revenues from investment may result in priorities that are in opposition to safety: reducing redundancy, cutting margins, increasing work pace, and reducing time for reflection and learning. Landsbergis et al. [27] found that lean production creates an intensified work pace and demands on the workers.

Rasmussen [41] proposed a model that indicates a conflict between safe performance and cost-effectiveness. The safety defences are likely to degenerate systematically through time when pressure toward cost-effectiveness is dominant. The stage for an accidental course of events is very likely prepared through time by the normal efforts of many actors in their respective daily work contexts, responding to the standing request to be cost-effective. Ultimately, a quite normal variation in somebody's behaviour can then release an accident. Had this particular root cause been avoided by some additional safety measure, the accident would very likely have been released by another cause at another point in time. In other words, an explanation for the accident in terms of events, acts and errors is not very useful for the design of improved safety. It is important to focus not on the human error but on the mechanisms generating behaviour in the actual dynamic work context. [41]

Attitudes and norms

Slovic [47] stated that risk is always subjective: there is no such thing as a real or objective risk. The concept of risk depends on our mind and culture and is invented to help us understand and cope with the dangers and uncertainties of life. Slovic [47] also stated that trust is an important element in risk acceptance and should be further investigated. To be socialised into the work role is to understand what is accepted and what is not. In the beginning, reactions towards obvious risks may occur but may be difficult to express, and safety has to be trusted. After an introductory period, during which risk and safety knowledge may be low, perception may be higher, but along with increased experience risks may become accepted as normal. Holmes et al. [18] also found that blue-collar workers regarded occupational injury risk as a normal feature of the work environment and an acceptable part of the job. An experienced worker may become home-blind and not react to hazards. The reinforcement from risks that have been avoided or mastered may also provide a false sense of safety.

The risk homeostasis theory (RHT), presented by Wilde [57], states that people have a target level of risk, the level that they accept. This level depends on the perceived benefits and disadvantages of safe and unsafe behaviour. The frequency of injuries is maintained over time through a closed loop.
Whenever one perceives a discrepancy between target risk and experienced risk, an attempt is made to restore the balance through some behavioural adjustment.

Organisational culture, structural secrecy and unclear communication of information are found to contribute towards a normalisation of deviance, which in turn may lead to a failure to foresee risks. Deviance from the original rules becomes normalised and routine, as informal work systems compensate for the organisation's inability to provide the necessary basic resources (e.g. time, tools, documentation with a close relationship to action). [10]
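Wilde's closed loop described above can be caricatured as a simple feedback process: whenever experienced risk deviates from the target level, behaviour is adjusted a step in the compensating direction. A minimal sketch follows; the gain and the numerical values are purely illustrative assumptions of mine, not figures from Wilde [57]:

```python
def adjust_behaviour(experienced: float, target: float, gain: float = 0.5) -> float:
    """One round of the RHT loop: move experienced risk toward the target
    by a behavioural adjustment proportional to the perceived discrepancy."""
    return experienced + gain * (target - experienced)

risk = 0.9      # initially experienced risk, well above the target
target = 0.3    # the individual's accepted (target) level of risk
for _ in range(10):
    risk = adjust_behaviour(risk, target)
print(round(risk, 3))  # settles near the target level
```

The point of the caricature is the theory's central claim: whatever the starting level, behavioural adjustment drives experienced risk back toward the accepted target, so the overall frequency of injuries is maintained over time.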
2.3.3 Safety facilitators

Participation

Much intervention research has emphasised the benefits of a participatory approach. Participation improves information and idea generation, engaging those who know most about the current situation. Participation may result in a 'sense of ownership' and a greater commitment to a goal or a process of change. Behavioural change is likely to be more sustainable if it emerges from the needs of the persons involved and with their active participation, rather than being externally imposed. [50]

Safety management, risk analyses and interventions are normally conducted by experts on safety. This information and these activities are not only important for designers, technicians or safety committees: safety work could benefit from involving the operating people, taking an active participatory part. Using this approach in safety intervention work, the participants, instead of a safety expert, will own the process, being their own experts on their special problems and abilities. [50]

Social support and empowerment

Social support has been found to be important for behavioural change as well as a moderator of felt work stress. [23] Risks and injuries are delicate subjects, particularly so if linked to personal mistakes and shortcomings. A supportive social climate with a non-judging and respectful atmosphere is vital to encourage the sharing of such experiences.

There are different sorts of social support: emotional, evaluative, informational and instrumental. [19] The effects and mechanisms of social support can be to fulfil fundamental human needs such as security or social contact. It can also provide support in reducing interpersonal conflicts, i.e. prioritising, and it may also have a buffering effect, modifying the relation between a stressor and health.

Perceived self-efficacy plays an important role in the causal structure of social cognitive theory, because efficacy beliefs affect adaptation and change.
[2] Unless people believe they can produce effects by their actions, they have little incentive to act or to persevere in the face of difficulties. Other motivators are rooted in the core belief that one has the power to produce effects by one's actions. Efficacy beliefs also influence whether people think pessimistically or optimistically, and in ways that are self-enhancing or self-hindering. [2]
Communication

Communication is a key factor binding an organisation together. If risks and safety are not communicated at and through all levels of the organisation, there will be little understanding of them. Lundgren [28] stated that the risk communication process must be a dialogue, not a monologue from either party. Continuous feedback and interpretation are necessary for communication to be effective, which forms the basis for continuous safe operation. Communication is linked to a systems view and the capability of finding and analysing risks and implementing safety measures. [45]

Effective communication needs openness, so that sensitive information can be spoken out and the questions of error, responsibility, blame and shame are openly dealt with in the communication of accidents. All members of an organisation need feedback, not only in their specific area of responsibility but also on how the operating level functions and handles the complexity in which they operate. It is also important to anchor policies, goals and changes, and to make them comprehensible and meaningful. [50] Saari stated that knowledge of risk is not enough to bring about changes in unsafe behaviour, and that decision-making is influenced by feelings. Therefore, social feedback encouraging safe behaviour has been quite successful in modifying behaviour. [46]

Learning

Learning is a key characteristic of safe organisations. [39] Döös and Backström [11] stated that demands on control and demands on learning and acting competently appear to come into conflict. The critical competitive factor for success is not only competence but also its development and renewal. To learn implies changing one's ways of thinking and/or acting in relation to the task one intends to perform.

The outcome of learning has two aspects. Within the individual, learning is expressed as constructing and reconstructing one's cognitive structures or thought networks.
Outwardly, visible signs of learning are changed ways of acting, performing tasks and talking. Individual experiential learning [32] can be understood as an ongoing interchange between action and reflection, where past experiences provide the basis for future ones. Active participation and personal action are prerequisites for the learning process to take place.

Safety culture/climate

Safety climate reflects the symbolic (e.g. posters in the workplace, state of the premises, etc.) and political (e.g. managers voicing their commitment to safety, allocation of budgets to safety, etc.) aspects of the organisation which
constitute the work environment. On the other hand, safety culture is made up of the cognition and emotion which give groups, and ultimately the organisation, their character. Unlike safety management and climate, which can often be a reactive response to a certain situation, safety culture is a stable and enduring feature of the organisation. [56] Flin et al. [12] found that safety climate can be seen as a snapshot of the state of safety, providing an indicator of the underlying safety culture of a work group, plant or organisation. In their review of 18 studies, they identified the six most common themes in safety climate. These were:

1. the perceptions of management attitudes and behaviour in relation to safety,
2. different aspects of the organisational safety management system,
3. attitudes towards risk and safety,
4. work pressure as the balance maintained between pressure for production and safety,
5. the workforce perception of the general level of workers' competence,
6. perception of safety rules, attitudes to rules and compliance with or violation of procedures.

A number of techniques have been employed to measure safety culture; the most common method is a self-completion questionnaire. Employees respond by indicating the extent to which they agree or disagree with a range of statements about safety, e.g. "senior management demonstrate their commitment to safety". The data obtained from the questionnaires are analysed to identify factors or concepts that influence the level of safety within the organisation.

2.4 Human factors

Human factors are defined as: ". . . environmental, organisational and job factors, and human and individual characteristics which influence behaviour at work in a way which can affect health and safety". [17] Good human factors practice is about optimising the relationship between demands and capacities when considering human and system performance (i.e. understanding human capabilities and fallibilities).
The term is used much more in the safety context than ergonomics, even though they mean very much the same thing. Like human factors, ergonomics deals with the interaction of technological and work situations with the human being. The
job must 'fit the person' in all respects, and the work demands should not exceed human capabilities and limitations. The meaning of ergonomics is hard to distinguish from human factors, but it is sometimes associated more with physical design issues as opposed to cognitive or social issues, and with health, well-being and occupational safety rather than with the design of major hazard systems. Tasks should be designed in accordance with ergonomic principles to take into account limitations and strengths in human performance. Matching the job to the person will ensure that they are not overloaded and that they make the most effective contribution to the business. Physical match includes the design of the whole workplace and working environment. Mental match involves the individual's information and decision-making requirements, as well as their perception of the tasks and risks. Mismatches between job requirements and people's capabilities provide the potential for human error. People bring to their job personal attitudes, skills, habits and personalities which can be strengths or weaknesses depending on the task demands. Individual characteristics influence behaviour in complex and significant ways. Their effects on task performance may be negative and may not always be mitigated by job design. Some characteristics, such as personality, are fixed and cannot be changed. Others, such as skills and attitudes, may be changed or enhanced. Organisational factors have the greatest influence on individual and group behaviour, yet they are often overlooked during the design of work and during the investigation of accidents and incidents. Organisations need to establish their own positive health and safety culture. The culture needs to promote employee involvement and commitment at all levels, emphasising that deviation from established health and safety standards is not acceptable.
Chapter 3

Overview of risk assessment methods

In this chapter we look at the well-established procedures for carrying out a risk assessment on a machine or assembled group of machines, as laid down in the European standard EN 1050. [13] This procedure forms the basis of most safety design studies that have to be carried out on machines to satisfy the requirements of the regulations. The standard points out that:

• Risk assessment should be based on a clear understanding of the machine limits and its functions.

• A systematic approach is essential to ensure a thorough job.

• The whole process of risk assessment must be documented for control of the work and to provide a traceable record for checking by other parties.

EN 1050 describes risk assessment as a process intended to help designers and safety engineers define the most appropriate measures to enable them to achieve the highest possible levels of safety, according to the state of the art and the resulting constraints. The standard also defines several techniques for conducting a risk assessment, including the following: the What-If method, Failure Mode and Effect Analysis (FMEA), Hazard and Operability Study (HAZOPS), Fault Tree Analysis (FTA), the Delphi technique, the DEFI method, Preliminary Hazard Analysis (PHA), and Method Organised for a Systematic Analysis of Risks (MOSAR).
3.1 Probabilistic risk assessment

3.1.1 Introduction

The Finnish work safety regulations require the employer to conduct risk assessments that evaluate the safety of the workplace. The Occupational Safety and Health Act [37] states that

"Employers are required to take care of the safety and health of their employees while at work by taking the necessary measures. For this purpose, employers shall consider the circumstances related to the work, working conditions and other aspects of the working environment as well as the employees' personal capacities."

In addition,

"Employers shall design and choose the measures necessary for improving the working conditions as well as decide the extent of the measures and put them into practice."

Probabilistic Risk Assessment (PRA), also known as Probabilistic Safety Assessment (PSA), is a systematic procedure for investigating how complex systems are built and operated. A PRA models how the human, software, and hardware elements of the system interact with each other. The methodology was first used in the USA in 1975 to assess and analyse the potential risks leading to severe accidents in nuclear power plants. [42] The study involved a list of potential accidents in nuclear reactors, estimation of the likelihood of accidents resulting in radioactivity release, estimation of the health effects associated with each accident, and comparison of nuclear accident risk with other accident risks. Since the WASH-1400 report, the understanding of PSA has increased and it has become a useful tool in risk analysis. A similar method is used by NASA in analysing the risks of space shuttle missions. One of the most important features of PSA is its quantitative probability assessment of different components and events. The methodology includes several phases, which can also be used independently to examine possible failures within a system.
A risk assessment amounts to addressing three very basic questions posed by Kaplan and Garrick: [21]

1. What can go wrong?
2. How likely is it?
3. What are the consequences?
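Kaplan and Garrick's three questions are often summarised as a set of triplets of scenario, likelihood and consequence. As a rough illustration (all scenario names and numbers below are invented, not taken from this thesis), such a set can be held in a small data structure:

```python
# Illustrative sketch of the risk triplet <scenario, likelihood, consequence>.
# The scenarios and values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class RiskTriplet:
    scenario: str        # what can go wrong?
    frequency: float     # how likely is it? (events per year)
    consequence: float   # what are the consequences? (e.g. loss in euros)

    def expected_loss(self) -> float:
        """Expected loss per year contributed by this scenario."""
        return self.frequency * self.consequence

triplets = [
    RiskTriplet("guard interlock bypassed", 0.2, 50_000.0),
    RiskTriplet("press control failure", 0.01, 500_000.0),
]

# Summing frequency x consequence gives one crude aggregate measure of risk.
total = sum(t.expected_loss() for t in triplets)
print(total)
```

A full PRA goes far beyond such a summary, but the triplet view is a convenient mental model for the scenario lists, frequencies and consequence measures discussed in the following sections.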
The answer to the first question leads to the identification of a set of undesirable scenarios. The second question requires estimating the probabilities (or frequencies) of these scenarios, while the third estimates the magnitude of the potential losses. The NASA PRA Guide [49] describes the components of the PRA; a modified version is presented here. Each component is discussed in more detail in the following.

3.1.2 Defining objectives and methodology and gathering information

Preparing for a PRA begins with a review of the objectives of the analysis. Among the many objectives possible, the most common ones include design improvement, risk acceptability, decision support, regulatory and oversight support, and operations and life management. Once the objective is clarified, an inventory of possible techniques for the desired analyses should be developed. The available techniques range from required computer codes to system experts and analytical experts. The resources required for each analytical method should be evaluated, and the most effective option selected. The basis for the selection should be documented, and the selection process reviewed to ensure that the objectives of the analysis will be adequately met. A general knowledge of the physical layout of the overall system, administrative controls, maintenance and test procedures, as well as hazard barriers and subsystems (whose purpose is to protect, prevent, or mitigate hazard exposure conditions) is necessary to begin the PRA. A detailed inspection of the overall system must be performed in the areas expected to be of interest and importance to the analysis.

3.1.3 Identification of initiating events

A system is said to operate in a normal operation mode as long as it is operating within its design parameter tolerances; in this mode there is little chance of challenging the system boundaries in such a way that hazards will escape those boundaries.
During normal operation mode, loss of certain functions or systems will cause the process to enter an off-normal (transient) state. Once in this state, there are two possibilities. First, the state of the system could be such that no other function is required to maintain the process or overall system in a safe condition. The second possibility is a state wherein other functions are required to prevent exposing hazards beyond the system boundaries. In the second case, the loss of the function or the system is considered an initiating event (IE). One method for determining the operational IEs begins with drawing a functional block diagram of the system. From the functional block diagram, a hierarchical relationship is produced, with the process objective being successful completion of the desired system. Each function can then be decomposed into its subsystems and components, which can be combined in a logical manner to represent the operations needed for the success of that function.

3.1.4 Scenario development

The goal of scenario development is to derive a complete set of scenarios that encompasses all of the potential exposure propagation paths that can lead to loss of containment or confinement of the hazards, following the occurrence of an initiating event. To describe the cause and effect relationship between initiating events and subsequent event progression, it is necessary to identify those functions that must be maintained, activated or terminated to prevent loss of hazard barriers. The scenarios that describe the functional response of the overall system or process to the initiating events are frequently displayed by event trees.

3.1.5 Logic modelling

Event trees commonly involve branch points which show whether a given subsystem works (or an event happens) or does not. Sometimes, failure of these subsystems is rare and there may not be an adequate record of observed failure events to provide a historical basis for estimating the frequency of their failure. In such cases, other logic-based analysis methods such as fault trees or master logic diagrams may be used, depending on the accuracy desired. The most common method used in PRA to calculate the probability of subsystem failure is fault tree analysis. Different event tree modelling approaches imply variations in the complexity of the logic models that may be required. If only main functions or systems are included as event tree headings, the fault trees become more complex and must accommodate all dependencies among the main and support functions within the fault tree.
If support functions or systems are explicitly included as event tree headings, more complex event trees and less complex fault trees will result.

3.1.6 Failure data analysis

Hardware, software, and human reliability data are inputs for assessing the performance of hazard barriers, and the validity of the results depends highly on the quality of the input information. It must be recognised that historical
data have predictive value only to the extent that the conditions under which the data were generated remain applicable. Collection of the various failure data consists fundamentally of the following steps: collecting and assessing generic data, statistically evaluating facility- or overall system-specific data, and developing failure probability distributions using test or facility- and system-specific data. Three types of events must be quantified for the event trees and fault trees to estimate the frequency of occurrence of sequences: initiating events, component failures, and human errors. After establishing probabilistic failure models for each barrier or component failure, the parameters of the models must then be estimated. Typically the necessary data include times of failure, repair times, test frequencies, test downtimes, and common cause failure (CCF) events. One might also use non-parametric models and simulate the results.

3.1.7 Sensitivity analysis

In a sensitivity analysis, an input parameter, such as a component failure rate in a fault tree logic model, is changed, and the resulting change in the top event probability is measured. This process is repeated using either different values for the same parameter or changing different parameters by the same amount. There are various techniques for performing sensitivity analyses. These techniques are designed to determine the importance of key assumptions and parameter values to the risk results. The most commonly used methods are so-called "one-at-a-time" methods, in which assumptions and parameters are changed individually; virtually any input or model assumption can be varied and its impact on the final risk calculations observed. The key challenge in engineering risk analysis is to identify the elements of the system or facility that contribute most to risk and associated uncertainties. To identify such contributors, the common method used is importance ranking.
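As a toy illustration of the one-at-a-time approach, the sketch below perturbs each basic-event probability of a hypothetical two-component series model in turn and ranks the components by their effect on the top event (the model and its numbers are invented, not taken from the thesis):

```python
# One-at-a-time sensitivity sketch on a hypothetical two-component model.
def top_event(p: dict) -> float:
    # Series system: the top event occurs if component "a" OR "b" fails,
    # assuming independent failures.
    return 1.0 - (1.0 - p["a"]) * (1.0 - p["b"])

base = {"a": 0.01, "b": 0.001}
baseline = top_event(base)

sensitivity = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 2.0  # double one failure probability at a time
    sensitivity[name] = top_event(perturbed) - baseline

# Rank components by their impact on the top event probability.
ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
print(ranking)
```

With these illustrative numbers, component "a" dominates the top-event probability, which is the kind of conclusion an importance ranking is meant to surface.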
These importance measures are used to rank the risk-significance of the main elements of the risk models in terms of their contributions to the total risk.

3.1.8 Risk acceptance criteria

In an engineering risk assessment, the analyst considers both the frequency of an initiating event and the probabilities of subsequent failures within the engineering system. In a health risk assessment, the analyst assesses consequences from situations involving chronic releases of certain amounts of chemical and biological toxicants to the environment, with no consideration of the frequency or probability of such releases. The ways of measuring consequences are also different in health and engineering risk assessments. Health risk assessment focuses on specific toxicants and contaminants and develops a deterministic or probabilistic model of the associated exposure amount and resulting health effects, the so-called dose-response models. The consequences are usually in the form of fatalities. In engineering risk assessment, the consequence varies. Common consequences include worker health and safety, economic losses to property, immediate or short-term loss of life, and long-term loss of life from cancer. One useful way to represent the final risk values is by using so-called Farmer's curves. In this approach, the consequence is plotted against the complementary cumulative distribution of the event frequency. Individual risk is one of the most widely used measures of risk and is defined as the fraction of the population exposed to a specific hazard and subsequent consequence per unit time. Societal risk is expressed in terms of the total number of casualties, i.e. the relation between frequency and the number of people affected by a specified level of consequence in a given population from exposure to specified hazards. [33] The ALARP (as low as reasonably practicable) principle [31] recognises that there are three broad categories of risk:

1. Negligible risk: broadly accepted by most people as they go about their everyday lives. Examples of this kind of risk might be being struck by lightning or having brake failure in a car.

2. Tolerable risk: one would rather not have the risk, but it is tolerable in view of the benefits obtained by accepting it. The cost in inconvenience or in money is balanced against the scale of risk and a compromise is accepted. This would apply to e.g. travelling in a car.

3. Unacceptable risk: the risk level is so high that we are not prepared to tolerate it. The losses far outweigh any possible benefits in the situation.

The principle is depicted in Figure 3.1.
Figure 3.1: ALARP and risk tolerance regions (adapted from [55])

3.1.9 Interpretation of results

When the risk values have been calculated, they must be interpreted to determine whether any revisions are necessary to refine the results and the conclusions. The adequacy of the PRA model and the scope of the analysis is verified. Also, characterising the role of each element of the system in the final results is necessary. Based on the results of the interpretation, the details of the PRA logic, its assumptions, and its scope may be modified to update the results into more realistic and dependable values. The basic steps of PRA results interpretation are:

1. Determine the accuracy of the logic models and scenario structures, assumptions, and scope of the PRA.

2. Identify system elements for which better information would be needed to reduce uncertainties in failure probabilities and in the models used to calculate performance.

3. Revise the PRA and reinterpret the results until stable and accurate results are attained.
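The three ALARP bands of Section 3.1.8 can be sketched as a simple classifier. The numeric band limits below (annual event frequencies) are purely illustrative assumptions, not values from this thesis or from regulatory guidance:

```python
# Illustrative ALARP banding with hypothetical frequency limits (per year).
UNACCEPTABLE_LIMIT = 1e-3  # above this, the risk must be reduced or eliminated
NEGLIGIBLE_LIMIT = 1e-6    # below this, the risk is broadly acceptable

def alarp_band(annual_frequency: float) -> str:
    if annual_frequency > UNACCEPTABLE_LIMIT:
        return "unacceptable"
    if annual_frequency < NEGLIGIBLE_LIMIT:
        return "negligible"
    # In between, the risk is tolerable only if it is reduced as low as
    # reasonably practicable, weighing the cost of reduction against benefit.
    return "tolerable (reduce ALARP)"

for f in (1e-2, 1e-5, 1e-8):
    print(f, alarp_band(f))
```

In practice the band limits depend on the hazard, the exposed population and the applicable regulations; the point of the sketch is only the three-way banding itself.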
3.2 Human reliability analysis

3.2.1 Introduction

Human actions are an essential part of the operation and maintenance of machinery, in both normal and abnormal conditions. Generally, humans can ensure safe and economic operation by proactive means, but in disturbances reactive performance may also be required. Thus, human actions affect both the probability of risk-significant events and their consequences, and they need to be taken into account in PSA. Without incorporating human error probabilities (HEPs), the results of a risk analysis are incomplete. The measurement of human reliability is necessary to provide some assurance that complex technology can be operated effectively with a minimum of human error, and to ensure that systems will not be maloperated, leading to a serious accident. To estimate HEPs, and thus human reliability, one needs to understand human behaviour, which is very difficult to model. HEP is defined as the mathematical ratio

           number of errors occurring in a task
    HEP = --------------------------------------        (3.1)
           number of opportunities for error

Practically all HRA methods and approaches share the assumption that it is meaningful to use the concept of a human error, and hence to develop ways of estimating human error probabilities. This view prevails despite serious doubts expressed by leading scientists and practitioners from HRA and related disciplines. [14] Extensive studies of human performance in accidents conclude that

". . . 'human error' is not a well defined category of human performance.
Attributing error to the actions of some person, team, or organisation is fundamentally a social and psychological process and not an objective, technical one." [59]

Also, Reason (1997) concludes that

"the evidence from a large number of accident inquiries indicates that bad events are more often the result of error-prone situations and error-prone activities than they are of error-prone people." [43]

Attempts to approach the human reliability problem with the same criteria as the engineering reliability problem reveal their inconsistency. The human failure probability can be determined precisely only for a specific person, specific social conditions and a short time period. Generalisation of the obtained data to different people, social conditions and long time periods increases the uncertainty of the result.
Nevertheless, HRA methods have been successfully used in assessing error probabilities. Numerous studies have been performed to produce data sets or databases that can be used as a reference for determining human error probabilities. Some key elements of human reliability analysis are presented in the following sections, and some specific methods for examining certain areas of human reliability are introduced.

3.2.2 Task analysis

Task analysis is a fundamental methodology in the assessment and reduction of human error. A very wide variety of different task analysis methods exist; an extended review of task analysis techniques is available in Kirwan and Ainsworth. [25] Nearly all task analysis techniques provide, as a minimum, a description of the observable aspects of operator behaviour at various levels of detail, together with some indication of the structure of the task. These will be referred to as action-oriented approaches. Other techniques focus on the mental processes that underlie observable behaviour, for example decision making and problem solving. These will be referred to as cognitive approaches. In addition to their descriptive functions, TA techniques provide a wide variety of information about the task that can be useful for error prediction and prevention. To this extent, there is considerable overlap between task analysis and human error analysis (HEA) techniques; thus a combination of TA and HEA methods will be the most suitable form of analysis.

3.2.3 Database methods

Database methods generally rely upon observation of human tasks in the workplace, or analysis of records of work carried out. Using this method, the number of errors taking place during the performance of a task is noted each time the task is carried out. Dividing the number of errors by the number of tasks performed provides an estimate of the HEP, as described above.
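The database estimate described above, following Equation (3.1), can be sketched in a few lines; the records below are fabricated for illustration:

```python
# Sketch of the database method for estimating HEP (Equation 3.1):
# one record per opportunity for error, holding the observed error type
# (a string) or None when the task was performed without error.
from collections import Counter

def estimate_hep(records: list) -> float:
    """Overall HEP = number of errors / number of opportunities for error."""
    errors = sum(1 for r in records if r is not None)
    return errors / len(records)

# 200 observed performances of a task: 3 omission errors, 1 wrong-control error.
records = ["omission"] * 3 + ["wrong control"] + [None] * 196

print(estimate_hep(records))              # 4 / 200 = 0.02
print(Counter(r for r in records if r))   # error count broken down by type
```

Keeping the error type in each record addresses the point made below: more than one type of error may occur in a task, so the counts should be separable by type.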
However, since more than one type of error may occur during the performance of a task, it is important to note which types of error have occurred.

3.2.4 Expert judgement

The use of expert judgement in the risk estimation step of risk assessment aims at producing a single representation, i.e. in practice an aggregated probability distribution of an unknown quantity. A formalised procedure for attaining this is described by several different researchers, Winkler et al. [58] and Cooke and Goossens [5] to name but a few. Such a procedure is known as an expert judgement protocol. The main challenge of the protocol is to
control the cognitive biases inherent in eliciting probabilities. [53] Expert judgement elicitation and aggregation approaches can be classified into behavioural probability aggregation and mechanical probability aggregation. [4] In the behavioural probability aggregation approach, the experts themselves produce the consensus probability distribution. The normative expert only facilitates the process of interaction and debate. The main objective of the approach is to ensure the achievement of a shared understanding of the physical and social phenomena and/or logical relationships represented by the parameter elicited. It is important to note that this approach induces strong dependence between the experts. In the mechanical approach, the experts' individual probability distributions are aggregated by the decision-maker after their elicitation. The main challenge is to specify the performance of the experts. Such a specification presupposes at least two assumptions:

1. data for calibrating an expert's performance is available, and

2. the expert has not learned from his past performance, and thus uses cognitive heuristics.

In the case of Bayesian mechanical probability aggregation, the decision-maker defines the likelihoods of the experts' judgements and treats these judgements as data for updating his prior belief to a posterior belief according to Bayes' rule.

3.2.5 Technique for Human Error Rate Prediction (THERP)

Development of the THERP method began in 1961 at Sandia National Laboratories in the US, and the method was finally released for public use in document NUREG/CR-1278 in 1983. [52] Its stated purpose is to present methods, models and estimates of HEPs to enable analysts to make predictions of the occurrence of human errors in nuclear power plant operations, particularly those that affect the availability or reliability of engineered safety systems and components.
The method describes in detail all the relevant performance shaping factors (PSFs) which may be encountered and provides methods of estimating their impact on the HEP. It also proposes methods of combining the HEPs assessed for individual tasks into a model, so that the failure probability for a complete procedure can be calculated. This is carried out by modelling procedures in the form of HRA event trees. The interaction between individual human errors can then be more easily examined, and the contribution of those errors to the overall failure probability of the procedure can be quantified. The key elements of the THERP quantification process are as follows:
1. Decomposing tasks into elements. The first step involves breaking down a task into its constituent elements according to the THERP taxonomic approach given in NUREG/CR-1278.

2. Assignment of nominal HEPs to each element. The assignment of nominal HEPs is carried out with reference to the THERP Handbook. Chapter 20 of the Handbook is a set of tables, each of which has a set of error descriptors, associated error probabilities and error factors. The assessor uses these tables and their supporting documentation to determine the nominal HEP for each task element. Problems will arise when task elements do not appear to be represented in any of the tables.

3. Determination of the effects of PSFs on each element. The determination of the effects of PSFs should be based on the assessor's qualitative analyses of the scenario, and a range of PSFs are cited which can be applied by the assessor. The assessor will normally use a multiplier on the nominal HEP.

4. Calculation of the effects of dependence between tasks. Dependence exists when the failure probability of a task differs depending on the outcome of the task preceding it. THERP models dependence explicitly, using a five-level model of dependence. Failing to model dependence can have a dramatic effect on the overall HEP, and differences in the levels chosen by different assessors can lead to different HEPs.

5. Modelling in a Human Reliability Analysis Event Tree. Modelling via an event tree is relatively straightforward once step 1 has been completed.

6. Quantification of the total task HEP. Quantification is done using simple Boolean algebra: multiplication of probabilities along each event branch, with success and failure probability outcomes summing to unity.

3.3 Other risk and error assessment methods

3.3.1 Five steps to risk assessment

The process of risk assessment for the handling and use of machines follows the same general rules as all risk assessments.
These rules are most clearly described in a widely used brochure published by the UK Health and Safety Executive (HSE) called 'Five steps to risk assessment'. The process is depicted in Figure 3.2.
Figure 3.2: Five steps to risk assessment

3.4 Method used by the Company

The Company uses a risk assessment method of its own, based on the principles presented in 'Five steps to risk assessment' by the HSE. The risk assessment database works within the Company intranet framework: the assessor chooses the entity to be assessed (a production line or a machine) and then adds the risks identified. The Company policy is that all risks scoring above 30 on the risk rating scale need to be controlled. This means defining a risk control action plan (RCAP) for every risk exceeding the level. The RCAP includes identifying existing controls, nominating an actioner, setting a completion date and estimating costs. Risks scoring higher than 100 are unacceptable and need to be eliminated urgently. The assessment process is depicted in Figure 3.3. The process has the following steps:

1. Identify activity. The machinery is used in different modes: normal operation, maintenance, repair, emergency.

2. Form assessment team. The team consists at least of a trained assessor and the machine operator.

3. Gather information. Acquire information from previous risk assessments, accident and incident reports, work instructions, legal requirements, operating manuals, interviews with the operators and maintenance personnel, etc.

Figure 3.3: The Company method
4. Identify hazards. Using an applicable method, identify the possible hazards within the target activity.

5. Identify who might be harmed and how.

6. Identify existing control measures. There are several measures already applied, such as guarding, safety devices, procedures, personal protection equipment, etc.

7. Assess risks. Using the data gathered, calculate the risk level for all the hazards identified using the risk rating formula below.

8. Remove the hazards. Limit the risk as far as possible. This can be done by reducing speed and force, employing good ergonomics, applying fail-safe principles, and strengthening existing control measures.

9. Identify and implement additional controls. After re-assessing the residual risk, inform and warn the personnel about any residual risk. This can take the form of signs and symbols.

10. Document the assessment. Risk assessments should be recorded in the Company database, updated with new information, and closed when the assigned corrective actions are completed.

3.4.1 Risk rating

The risk level is calculated with the following formula:

    Risk Rating = LO x FE x DPH x NP        (3.2)

Based on Table 3.1, the assessor estimates the risk from the four variables. Because each of these elements has a range of values, it can sometimes be difficult to ensure that they are applied consistently from site to site and from risk assessor to risk assessor. The Company has provided some guidelines in order to maintain consistency in the assessments.

Frequency of exposure and number of people at risk

The number of people at risk should be calculated as the number of people who come into contact with the hazard. Where there is a shift system in operation, it is acceptable to calculate the number of people as the number per shift. For example, if the task is undertaken by 2 operators per shift in a 3-shift factory, then the number of people is 2.
However, one should also remember to include other people who might come into contact with the hazard during each shift, e.g. supervisors, quality staff, and maintenance engineers. If there are significant differences in the frequency of exposure of different groups of people, their risks should be assessed separately.
Likelihood of occurrence (LO): likelihood of the identified hazard realising its potential and causing actual injury and/or ill health during or after the activity.

    0.033   Almost impossible (possible only under extreme circumstances)
    0.5     Highly unlikely (though conceivable)
    1       Unlikely (but could occur)
    2       Possible (but unusual)
    5       Even chance (could happen)
    8       Probable (not surprised)
    10      Likely (only to be expected)
    15      Certain (no doubt)

Degree of possible harm (DPH): an indication of how serious the harm or ill health could be.

    0.1     Scratch/bruise
    0.5     Laceration/mild ill health effect
    1       Break – minor bone or minor illness (temporary)
    2       Break – major bone or serious illness (permanent)
    4       Loss of 1 limb/eye or serious illness (temporary)
    8       Loss of 2 limbs/eyes or serious illness (permanent)
    15      Fatality

Frequency of exposure (FE): frequency of exposure to the identified hazard during the activity.

    0.1     Infrequently
    0.2     Annually
    1       Monthly
    1.5     Weekly
    2.5     Daily
    4       Hourly
    5       Constantly

Number of people at risk (NP): the number of people who could be exposed to the hazard during the activity.

    1       1–2 people
    2       3–7 people
    4       8–15 people
    8       16–50 people
    12      More than 50 people

Table 3.1: Risk scoring components
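To keep scoring consistent between assessors, the scales of table 3.1 can be encoded as lookup tables. This is a sketch; the identifiers are illustrative and all values are transcribed from table 3.1:

```python
# Likelihood of occurrence (LO) scores from table 3.1.
LO = {
    "almost impossible": 0.033, "highly unlikely": 0.5, "unlikely": 1,
    "possible": 2, "even chance": 5, "probable": 8, "likely": 10,
    "certain": 15,
}

# Frequency of exposure (FE) scores from table 3.1.
FE = {
    "infrequently": 0.1, "annually": 0.2, "monthly": 1, "weekly": 1.5,
    "daily": 2.5, "hourly": 4, "constantly": 5,
}

def np_score(people: int) -> int:
    """Map a head count to the NP score bands of table 3.1."""
    if people <= 2:
        return 1
    if people <= 7:
        return 2
    if people <= 15:
        return 4
    if people <= 50:
        return 8
    return 12

print(np_score(2))   # 1  (the 2-operators-per-shift example above)
print(np_score(16))  # 8
```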
Degree of possible harm

An important role of a risk assessment is to make employees aware of the hazards and risks they face day-to-day in carrying out their jobs. The DPH chosen should therefore be realistic and reflect, to a large extent, accident history within the Company or elsewhere. The examples in tables 3.2, 3.3, and 3.4 show other injuries that might be considered of similar gravity to the examples given in the scheme, and also suggest some of the types of activities and accidents that commonly lead to these injuries.

DPH 0.1 – Scratch/bruise
    Injuries: splinters, skin irritation, blisters, superficial wounds, light swelling

DPH 0.5 – Laceration/mild ill health effect
    Injuries: small cuts requiring stitches, bump to head (no loss of consciousness), minor eye irritation
    Activities: handling tinplate; short-term exposure to solvents, fumes etc.

DPH 1 – Break (minor bone) or minor illness (temporary)
    Injuries: contact dermatitis, fractures to fingers, toes, or nose, open wounds requiring stitches, first degree burns
    Activities: workshop machinery; using tools; prolonged skin exposure to solvents

Table 3.2: Guidelines for evaluating degree of possible harm (table 1 of 3)
DPH 2 – Break (major bone) or serious illness (permanent)
    Injuries: fractures to arms or legs, dislocation of shoulders or hips, sprains, strains, slipped disc, back injuries, noise-induced hearing loss
    Activities: being hit by a slow-moving forklift truck (pedestrian); slip/trip; manual handling; noise level above 85 dB

DPH 4 – Loss of 1 limb/eye or serious illness (temporary)
    Injuries: amputation of fingers (one or several), severe crushing injuries, second degree burns or extensive chemical burns, non-fatal electric shock, loss of consciousness, concussion
    Activities: intervention on running machinery (coaters, presses); acid or caustic handling; use of low voltage electrical equipment

DPH 8 – Loss of 2 limbs/eyes or serious illness (permanent)
    Injuries: asthma, cancer, coma, third degree burns
    Activities: scrap compactors; contact with sensitisers; serious fire

Table 3.3: Guidelines for evaluating degree of possible harm (table 2 of 3)
DPH 15 – Fatality
    Injuries: immediate death, or death after prolonged treatment or illness
    Activities: working at any height over 2 m; work in confined spaces where breathing apparatus is needed; collision between pedestrians and lorries; electrocution; falling into deep water or into chemical tanks; motor accidents as driver or passenger; being crushed by large falling objects, eg. a tinplate coil; overturning forklift truck (driver); being hit by a fast-moving forklift truck; palletisers (trapping in hoist area); long-term exposure to asbestos (carcinogen); explosion

Table 3.4: Guidelines for evaluating degree of possible harm (table 3 of 3)
Likelihood of occurrence

Two factors can help in choosing an appropriate likelihood score:

• Accident history – do we know that accidents relating to this activity occur regularly within the Company? Throughout industry generally?

• The existing controls in place (see table 3.5)

The first column in table 3.5 shows how the scores can be interpreted to reflect the probability levels of simpler risk scoring schemes. One such scheme is the method previously used at the Site. The first two categories correspond to the lowest probability in the risk matrix approach, the next three to medium probability, and the last three to the highest probability.

0.033  Almost impossible (possible only under extreme circumstances) – LOW (1)
       Interlocked guards in place, purpose-designed equipment in use (eg. elevated platform with harness), traffic management fully implemented.

0.5    Highly unlikely (though conceivable) – LOW (1)
       All legally required and best practice controls in place. The employee would have to remove or circumvent a control to be injured.

1      Unlikely (but could occur) – MEDIUM (2)
2      Possible (but unusual) – MEDIUM (2)
5      Even chance (could happen) – MEDIUM (2)
       Adjustable guards in place, PPE and SSW, basic walkway marking.

8      Probable (not surprised) – HIGH (3)
10     Likely (only to be expected) – HIGH (3)
15     Certain (no doubt) – HIGH (3)
       No guards; safety relies on the operator's competence and training.

Table 3.5: Guidelines for evaluating existing controls in place

3.4.2 Method previously used at the Site

Before the Company required that all sites use the methodology described above, a similar method was used to assess the risks at the Site. There were two reasons for replacing the previous method with the current one. First of all, the Company required the assessment teams to enter all the data into the Company Risk Assessment Database, and the old method
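The collapse of the detailed LO scores onto the simpler three-level scheme in the first column of table 3.5 can be written as a small function. This is a sketch based on the groupings stated above (0.033 and 0.5 are LOW; 1, 2, and 5 are MEDIUM; 8, 10, and 15 are HIGH); the function name is illustrative:

```python
def probability_band(lo: float) -> str:
    """Collapse a detailed LO score onto the simpler three-level
    probability scheme shown in the first column of table 3.5."""
    if lo <= 0.5:       # 0.033, 0.5
        return "LOW (1)"
    if lo <= 5:         # 1, 2, 5
        return "MEDIUM (2)"
    return "HIGH (3)"   # 8, 10, 15

print(probability_band(0.033))  # LOW (1)
print(probability_band(2))      # MEDIUM (2)
print(probability_band(15))     # HIGH (3)
```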
did not evaluate all the required parameters. Secondly, the method seemed to be inaccurate in distinguishing severe risks from less severe ones.

The method was based on a simple risk matrix, where the assessment team selected values for consequence and probability from three categories each. The values for probability and consequence are displayed in table 3.4.2. The resulting severity of the risk follows from a risk matrix (Figure 3.4).

Probability        Consequence
1. Unlikely        1. Mild (eg. scratch or bruise)
2. Possible        2. Harmful
3. Probable        3. Serious (permanent damage)

Figure 3.4: Severity of risk in a risk matrix

The notation of the severity is as follows:

• N = negligible: there is very little risk to health and safety; no control measures needed

• L = low but significant: contains hazards that need to be recognised; control measures should be considered

• H = high: potentially dangerous hazards which require immediate control measures

• U = unacceptable: the task/operation in question is discontinued until the hazard is dealt with
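The old method's lookup amounts to a table keyed by the (probability, consequence) pair. Note that Figure 3.4 is not reproduced here, so the N/L/H/U cell assignments below are an assumed, plausible filling of the 3×3 matrix for illustration only, not the Site's actual matrix:

```python
# HYPOTHETICAL cell assignments: the actual matrix of Figure 3.4 is not
# reproduced in the text, so these N/L/H/U placements are an assumption.
SEVERITY = {
    (1, 1): "N", (1, 2): "L", (1, 3): "H",
    (2, 1): "L", (2, 2): "H", (2, 3): "U",
    (3, 1): "H", (3, 2): "U", (3, 3): "U",
}

def old_severity(probability: int, consequence: int) -> str:
    """Look up the severity class (N/L/H/U) in the old 3x3 risk matrix."""
    return SEVERITY[(probability, consequence)]

print(old_severity(1, 1))  # N
print(old_severity(3, 3))  # U
```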
Chapter 4

Risk control and regulation

After assessing risks, it is important to control them. Risk control concerns the methods applicable to eliminating or managing risks:

• Avoidance: identifying and implementing alternative procedures or activities to eliminate the risk.

• Contingency: having a pre-determined plan of action to come into force as and when the risk occurs.

• Prevention: employing countermeasures to stop a problem from occurring or having an impact on an organisation.

• Reduction: taking action to minimise either the likelihood of the risk developing, or its effects.

• Transference: transferring the risk to a third party, for example with an insurance policy.

• Acceptance / retention: tolerating the risk when its likelihood and impact are relatively minor, or when it would be too expensive to mitigate.

The Company has employed several of the above-mentioned methods in different forms. All safeguarding of machinery and strengthening of existing barriers aim at avoidance of risk, although the result is usually a reduction in likelihood. Another widely used method is prevention, which comes in the form of regulation and standards.

In this chapter, I examine the various methods of controlling risks. These include practical (physical and behavioural) risk controls aiming at reduction and avoidance, and regulatory standards that aim at prevention. In the case of Finland, three levels of such standards are examined: the federal level (the EU), the state level (Finland), and the corporate level (the Company).