Eindhoven University of Technology

MASTER
Developing a decision support tool to determine and improve equipment performance considering maintenance, with an application to a BP refinery

Gorisse, R.E.A.A.

Award date: 2016

Disclaimer
This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
· Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
· You may not further distribute the material or use it for any profit-making activity or commercial gain.

Take down policy
If you believe that this document breaches copyright, please contact us providing details, and we will remove access to the work immediately and investigate your claim.

Download date: 07. May. 2018
Eindhoven, February 2016 Developing a decision support tool to determine and improve equipment performance considering maintenance, with an application to a BP refinery by R.E.A.A. Gorisse BSc
Industrial Engineering Student identity number 0630193 In partial fulfilment of the requirements for the degree of Master of Science in Operations Management and Logistics Supervisors: ir. dr. S.D.P. Flapper, TU/e, OPAC dr. rer. Nat. G.M. Walter, TU/e, OPAC W. Smits, BP Rotterdam Refinery b.v. W. van Halderen, BP Rotterdam Refinery b.v.
TUE, School of Industrial Engineering. Series Master Theses Operations Management and Logistics Subject headings: Availability, Maintenance, Multi criteria decision making, Performance measurement
It should be noted that the numbers in this thesis are fictitious and are provided for illustrative purposes only.
I. ABSTRACT
In this master thesis, a decision support tool was developed for determining and improving equipment performance related to maintenance. The tool provides insight and support for selecting the equipment with the largest performance gap over a specific time period, based on a multi-criteria decision analysis. To this end, a framework is first provided that describes the criteria used for the decision problem at hand. Next to this selection procedure, a mathematical model was developed to calculate and determine the optimal maintenance policy, based on expected total maintenance costs, for the selected equipment. To illustrate how this decision support tool and mathematical model should be used, an application to a BP refinery is provided.
II. PREFACE AND ACKNOWLEDGEMENTS
Before you lies the master thesis report that results from the finishing phase of the master Operations Management & Logistics at Eindhoven University of Technology. This research was conducted at the BP Rotterdam Refinery, located in Europoort, and lasted from September 2015 till February 2016. During these six months I had the opportunity to conduct my research in a highly competitive and dynamic environment, for which I am thankful to all people who made this possible.

This research could not have been conducted if my two supervisors from BPRR had not enabled me to do so. Therefore, I am grateful for their trust and their motivation to achieve these results. I would like to thank Wil Smits for his guidance, support, critical view on my progress and the numerous sessions in which we spoke about all topics related to my research. Furthermore, I would like to thank Wim van Halderen for his time and expert insights into the world of maintenance within BPRR.

I would also like to thank my first supervisor from the Eindhoven University of Technology, ir. dr. S.D.P. Flapper. His guidance considering the structure of my research and thesis report provided me with useful insights and ultimately a better end result. Moreover, his knowledge and expertise on all maintenance-related aspects of this research helped raise the academic value of the results. Besides my first supervisor, I would like to thank my second supervisor from the Eindhoven University of Technology, dr. G.M. Walter, for the time he spent reviewing the results of my research.

Finally, I would like to thank my family and friends for their tremendous support during the last years of my study. Without their support, this master thesis would not have existed. Special thanks go to my parents for enabling me to study all these years.

René Gorisse
Eindhoven, February 2016
III. MANAGEMENT SUMMARY
This thesis research was executed at the BP Rotterdam Refinery in the context of improving equipment performance, measured as availability, by developing a data-driven decision support tool for the selection of equipment with the largest performance gap. Subsequently, a mathematical model was developed to determine the optimal maintenance policy for the selected equipment.
Research Methodology
Each of the defined research questions was approached with a specific methodology. For the first research question, a confrontation between theory and practice was used. For the second research question, a comparison of different theories was used to arrive at an aggregated procedure. Lastly, for the final research question, the general approach for operational research, consisting of four model development phases, was used.

HOW CAN EQUIPMENT PERFORMANCE IN RELATION TO MAINTENANCE BE MEASURED?
Based on existing literature, a framework was developed for BPRR in which all relevant indicators of equipment performance related to maintenance are incorporated. The confrontation between indicators used in literature and indicators used at BPRR led to the selection of six indicators, or criteria, for the decision problem at hand. Moreover, a suggestion was made on which indicators would be useful to add to the decision problem in the future.

HOW CAN EQUIPMENT WITH THE BIGGEST PERFORMANCE GAP BE SELECTED?
Given the input criteria for the selection of equipment with the largest performance gap, a procedure was developed in which alternatives (units and sub units) are compared to each other. This procedure examines the performance of a specific alternative on each of the six criteria and compares it with the performance of all other alternatives. Subsequently, one overall score enables the decision maker to rank all alternatives and select the equipment with the largest performance gap. In order for the procedure to perform this analysis on the sub unit level as well, the number of criteria was reduced to three, based on the available data. A sensitivity analysis on the result of this procedure, covering seven different situations in which the alternatives were ranked, revealed that the procedure is quite robust: it keeps selecting the same equipment as having the largest performance gap. Furthermore, validation and verification by experts at BPRR provided a concluding sense check.

HOW CAN EQUIPMENT PERFORMANCE BE IMPROVED BY EVALUATING AND ADJUSTING MAINTENANCE POLICIES?
After the equipment with the largest performance gap was selected by the developed decision support procedure, the current maintenance policy was analysed.
This analysis is based on a decision tree leading to a justification of the analysis, and on a mathematical model used to estimate the total relevant costs, both maintenance costs and production loss costs. To do so, this mathematical model first enables the decision maker to estimate the failure distribution of the equipment concerned based on lifetime data. Subsequently, the maintenance policy parameters are estimated, and a cost optimization leads to a decision on the optimal maintenance policy. Again, based on verification and validation by experts at BPRR, it was concluded that the mathematical model provides an accurate representation of reality.
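The kind of failure-distribution fitting and cost-based policy optimization described above can be sketched roughly as follows. This is a minimal, hedged illustration only: the lifetimes, cost figures, parameter grids and the standard age-replacement cost rate C(T) = (c_pm·R(T) + c_cm·(1 − R(T))) / ∫₀ᵀ R(t)dt are assumptions for illustration, not the thesis's actual model or data.

```python
import math

# Sketch: (1) fit a Weibull distribution to complete lifetime data by a
# coarse grid search over the log-likelihood, (2) choose the preventive
# replacement age T minimizing the long-run cost rate of an age-replacement
# policy. All data, costs and grids below are hypothetical.

def weibull_loglik(data, k, lam):
    # log-likelihood of complete (uncensored) lifetimes under Weibull(k, lam)
    return sum(math.log(k / lam) + (k - 1) * math.log(t / lam) - (t / lam) ** k
               for t in data)

def fit_weibull(data):
    # coarse grid search over shape k and scale lam (illustrative only)
    best = None
    for k in [0.5 + 0.1 * i for i in range(76)]:      # shape 0.5 .. 8.0
        for lam in [10.0 * j for j in range(1, 51)]:  # scale 10 .. 500
            ll = weibull_loglik(data, k, lam)
            if best is None or ll > best[0]:
                best = (ll, k, lam)
    return best[1], best[2]

def frange(a, b, step):
    t = a
    while t < b:
        yield t
        t += step

def cost_rate(T, k, lam, c_pm, c_cm, step=0.5):
    # expected cost per unit time when replacing preventively at age T:
    # (c_pm * P(survive to T) + c_cm * P(fail before T)) / expected cycle length
    R = lambda t: math.exp(-((t / lam) ** k))          # survival function
    cycle_len = sum(R(t) * step for t in frange(0.0, T, step))  # ~ integral of R
    return (c_pm * R(T) + c_cm * (1.0 - R(T))) / cycle_len

# Hypothetical pump-seal lifetimes in days
lifetimes = [110, 95, 130, 150, 80, 120, 105, 140]
k, lam = fit_weibull(lifetimes)
best_T = min(range(10, 301, 5), key=lambda T: cost_rate(T, k, lam, 1000.0, 8000.0))
```

With corrective replacement much more expensive than preventive replacement (here an invented 8:1 ratio) and an increasing failure rate, the optimal replacement age falls well below the mean lifetime, which is the qualitative behaviour the thesis's model exploits.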
CONCLUSIONS
The results of this thesis research have proven to be accurate and insightful with regard to the suggestions for selecting equipment with large performance gaps. Multiple applications of the developed procedure have arisen during this project. Based on objective, data-driven decision support, the developed selection procedure was able to accurately determine the units with the largest performance gap. The largest benefits of this procedure are that equipment performance is based on a combination of indicators, not solely on related costs, and that the procedure is easy to use. Subsequently, the developed mathematical model for optimal maintenance policy determination proved to be quite accurate in determining the expected lifetime of pump seals. Moreover, based on the estimated failure distribution, an optimal maintenance policy, given the case settings, was derived. Both the decision support tool for equipment selection and the mathematical model were constructed as Excel-based tools.

RECOMMENDATIONS
The decision support tool should be used in support of Asset Prioritization and could be adopted in the Global Reliability Improvement Program for production loss accounting and vulnerability studies. Besides, the procedure should be used by BPRR to compare similar equipment (for example, similar pumps at different units) to test best practices and improve overall equipment performance. A small remark was made on the data quality at BPRR. Given that the decision support tool for the selection of equipment with the largest performance gap is completely data-driven, the dependency on accurate and adequate data is high. This does not necessarily mean that the amount of data registration should be increased, but merely that the provided data must be reliable. A final recommendation is for BPRR to use these research outcomes in the continuous improvement of equipment performance.
TABLE OF CONTENTS
I. Abstract
II. Preface and acknowledgements
III. Management summary
List of abbreviations
1. Introduction
1.1 Company introduction
1.1.1 BP Rotterdam Refinery
1.2 Problem introduction
1.3 Research goal
1.4 Research deliverables
1.5 Research question(s)
1.6 Research methodology
2. Equipment performance measurement
2.1 Equipment performance and maintenance in literature
2.1.1 Maintenance performance indicators
2.1.2 Maintenance performance evaluation framework
2.1.3 Conclusion from literature
2.2 Equipment performance and maintenance at BPRR
2.2.1 Maintenance performance evaluation framework at BPRR
2.2.2 Conclusion from BPRR
2.3 Confrontation between literature and BPRR
2.4 Conclusion equipment performance and maintenance
3. Equipment selection
3.1 Identify the decision problem
3.1.1 Describe the decision problem
3.1.2 Structuring the decision problem
3.2 Select an appropriate evaluation model
3.2.1 Selection of an evaluation model for the BPRR case
3.3 Determine preferences for a decision
3.3.1 Construct the preferences for a decision
3.3.2 Evaluate the alternatives
3.4 Determine the best decision
3.4.1 Select the unit with the largest performance gap for BPRR
3.4.2 Select the sub unit with the largest performance gap for BPRR
3.5 Validation and verification of equipment selection
3.5.1 Sensitivity analysis of equipment selection
3.5.2 Verification of equipment selection
3.5.3 Validation of equipment selection
3.6 Conclusion equipment selection
4. Maintenance policy evaluation
4.1 Assessing improvement area for selected equipment
4.2 Maintenance policy evaluation, conceptualization
4.2.1 Relation between replacement before and after failure
4.2.2 Time to failure distribution
4.2.3 The renewal process
4.2.4 Maintenance policies
4.2.5 Input variables taken into account
4.3 Maintenance policy evaluation, modelling
4.3.1 Failure distribution modelling
4.3.2 Renewal process modelling
4.3.3 Total cost modelling
4.3.4 Model assumptions
4.4 Maintenance policy evaluation, model solving
4.4.1 Failure distribution
4.4.2 Renewal process modelling
4.4.3 Maintenance policy results
4.5 Maintenance policy evaluation, verification and validation
4.6 Maintenance policy evaluation, implementation
4.7 Conclusion maintenance policy evaluation
5. Conclusions and recommendations
5.1 Conclusions
5.1.1 How can equipment performance be measured?
5.1.2 How can equipment with the largest performance gap be selected?
5.1.3 How can equipment performance be improved by adjusting current maintenance policies?
5.2 Applications for BPRR
5.3 Recommendations for BPRR
5.4 Academic relevance of this research
5.5 Future research directions
References
Figures and tables
Appendix A, Overview of maintenance related KPI's
Appendix B, Maintenance strategy in literature
Appendix C, Selected KPI operational definition
Appendix D, Example of unit OSA
Appendix E, Solomon benchmark
Appendix F, Functional location separation
Appendix G, General MCDA procedure
Appendix H, Example of comparison matrix questionnaire
Appendix I, Determining criteria weights
Appendix J, Selection of evaluation method
Appendix K, Generalized criteria
Appendix L, User interface of decision support tool
Appendix M, Complete maintenance policy overview
Appendix N, Different types of censoring
Appendix O, Calculations for histograms
Appendix P, Calculations for probability plotting
Appendix Q, Weibull MLE considering grouped data
Appendix R, RF approximation calculations
Appendix S, Confidence intervals for parameter estimation
Appendix T, Data preparation for decision support tool
Appendix U, Excel based mathematical model
LIST OF ABBREVIATIONS
AHP: Analytical Hierarchy Process
BPRR: BP Rotterdam Refinery b.v.
CBM: Condition Based Maintenance
CDF: Cumulative distribution function
CDU: Crude Distillation Unit (Asset)
CFR: Constant failure rate
CI: Consistency Index
CM: Corrective Maintenance
Com: Commercial
CR: Consistency Ratio
DM: Decision maker
DMS: Decision maker situation
DFR: Decreasing failure rate
EDC: Equivalent Distillation Capacity factor
E&C: Engineering and Construction department within BPRR
GRIP: Global Reliability Improvement Program
IFR: Increasing failure rate
I&RE: Integrity and reliability engineering
KPI: Key performance indicator
MCAP: Multicriteria analysis procedure
MCDA: Multicriteria decision analysis
MCDM: Multicriteria decision method
MESA: Maintenance Engineering Society of Australia
MTBF: Mean time between failures
MTTR: Mean time to repair
OEE: Overall equipment effectiveness
OSA: On Stream Availability
PDF: Probability density function
PM: Preventive Maintenance
PROMOTHEE: Preference Ranking Organization METHod for Enrichment Evaluations
RF: Renewal function
RI: Random Index
S/D: Shutdown of equipment
TA(R): Turn Around
YTD: Year to Date (data from the 1st of January of a year to the current date)
1. INTRODUCTION
1.1 COMPANY INTRODUCTION
BP was founded by William Knox D'Arcy on April 14, 1908 and is headquartered in London, United Kingdom. BP is an integrated oil and gas company that provides fuel, energy for heat and light, lubricants and petrochemicals. It operates through two main segments: Upstream and Downstream. The Upstream segment markets and trades natural gas and is responsible for activities in oil and natural gas exploration, field development and production, storage and processing. The Downstream segment is responsible for refining, manufacturing, marketing, transportation, and supply and trading. This thesis research is conducted within the BP refinery located in Rotterdam, the Netherlands. The position of this refining location in the BP Global company is shown in Figure 1.
[Figure 1: Position of Refinery Rotterdam within BP Global. Organization chart: BP Global splits into Upstream and Downstream; Downstream comprises Lubricants, Petrochemicals and Global Refining (Business to Business and Business to Consumer); Global Refining comprises NA Refining, ESA Refining and AP Refining, with ESA Refining including the Lingen, Gelsenkirchen, Rotterdam and Castellon refineries.]
1.1.1 BP ROTTERDAM REFINERY
BP Rotterdam Refinery (BPRR) is, in terms of production capacity (400,000 barrels per day), the biggest refinery in Europe. Its location is strategically chosen for the ability to receive and ship raw materials by sea. Within BPRR, this thesis research is conducted under the supervision of the Integrity & Reliability Engineering (I&RE) department (see Figure 2). The focus of this department is on the prevention or elimination of equipment defects by developing maintenance programs, evaluating strategic maintenance decisions, and performing root-cause analyses of failures that have already occurred.

[Figure 2: BP Rotterdam Refinery organogram (BPRR, 2015). Organization chart: the Refinery Manager oversees, among others, Operations, HR&O, HSE, Engineering & Construction, Maintenance, Commercial Optimization, Process Engineering, PID, Integrity & Reliability Engineering and Finance; the I&RE department includes the Corrosion & Materials Team, Asset Maintenance Engineers, the Reliability Program Team, Asset Engineers and Discipline Engineers.]
1.2 PROBLEM INTRODUCTION
In process industries, raw materials are transformed to products in plants, mostly consisting of multiple assets called units. All these units are connected by pipelines, instrumentation systems and so on. Based on this close relation between different units or assets, it is more important to aim for maximizing overall effectiveness of a plant or refinery rather than to focus exclusively on the efficiency of the individual equipment items (Suzuki, 1994). Within BPRR, this overall plant effectiveness is defined as the BPRR On Stream Availability (OSA). This measure is a summation of all unit availability rates multiplied by a factor representing the complexity and capacity of that unit as part of the refining process at BPRR (EDC%).
BPRR OSA = Σ over all units i of (1 − downtime_i [hrs] / total time [hrs]) × EDC%_i    [1.1]
The 2015 BPRR OSA exceeded the preset annual availability target (97.3%) by a little over 0.14%. However, the BPRR OSA target is set at 96.0% on average over the last five years (one turnaround cycle on average). Despite this year's performance, the realized BPRR OSA over the last five years averages 95.22% and thus does not reach the target. To reach this goal in the future, BPRR has to further improve its annual OSA.
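As a hedged illustration of the OSA aggregation in Equation 1.1, the sketch below weights each unit's availability (one minus its downtime fraction) by its EDC% share of the refinery. The unit names other than CDU, the downtime hours and the EDC% weights are all invented for the example:

```python
# Minimal sketch of the OSA aggregation: each unit's availability
# (1 - downtime fraction) weighted by its EDC% share of the refinery.
# Unit names (other than CDU), downtime hours and weights are hypothetical.

HOURS_PER_YEAR = 8760.0

units = {
    # unit: (downtime hours over the year, EDC% weight; weights sum to 1)
    "CDU": (120.0, 0.40),
    "Unit X": (60.0, 0.35),
    "Unit Y": (200.0, 0.25),
}

osa = sum((1.0 - down / HOURS_PER_YEAR) * w for down, w in units.values())
print(f"BPRR OSA: {osa:.2%}")  # prints: BPRR OSA: 98.64%
```

Because the EDC% weights sum to one, a unit's downtime reduces the refinery OSA in proportion to that unit's complexity and capacity share, which is exactly why high-EDC% units matter most for the availability target.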
[Figure 3: BPRR OSA: target, monthly realization and yearly average (BPRR, July 2015)]

Increasing the OSA by 1%, measured over one year, will result in approximately $15 million of additional profit for BPRR. Based on this financial impact, it is essential for BPRR to take all possible actions to ensure minimal loss of availability and reach its five-year average goal. Based on this problem introduction, the resulting problem statement is:

The target set for the BPRR On Stream Availability has not been met over the last five years, resulting in missed profits of approximately $15 million per year.
1.3 RESEARCH GOAL
The difference between the realized OSA and the target OSA is caused by downtime of the refinery. This refinery downtime can be triggered by different causes (Suzuki, 1994; Wireman, 2005):
· Production scheduling: absence of or inadequate equipment planning;
· Operator error: human failure in the production process could lead to downtime;
· Quality control: lack of quality control resulting in rework or production interruptions;
· Raw materials usage: different types of raw materials could lead to changeover times or (un)expected production interruptions;
· Maintenance deficiency: absence of maintenance actions or lack of evaluation of existing maintenance actions.
Given that BPRR strives for excellence in operation and maintenance activities, with special care for safety considerations, much effort is put into decreasing these different forms of downtime. To do so, extensive procedures and methods exist to assess potential availability losses and to analyse equipment failures that have occurred, recently complemented by a procedure for finding equipment with recurring failures based on expert experience. In other words, BPRR's goal is to achieve maximum equipment performance (measured by OSA) within the boundaries of safety standards and financial resources.
Based on the limited time for conducting this research and the execution within the department of I&RE at BPRR, the choice was made to focus on downtime caused by maintenance deficiencies. This choice is supported by an initial evaluation of available data performed in the preparation of this research.
Maintenance should be applied to all deteriorating equipment in order to sustain optimal equipment conditions and thereby equipment availability (Suzuki, 1994). Next to this, Gits (1992) stated that maintenance should be both efficient and effective. Measuring and improving the effectiveness and efficiency of executed maintenance is an important issue for management in different industries, particularly in petrochemicals (Tsang, 1998). However, such an overall measurement framework of equipment performance, measured as refinery OSA, in relation to maintenance is currently absent within BPRR.
Increase the refinery OSA and thereby profits, by improving equipment performance related to maintenance taking into consideration safety standards and limited financial resources.
1.4 RESEARCH DELIVERABLES
In order for BPRR to achieve their goal, the limited resources for analysing failures and improving OSA must be allocated to the right equipment. In this case, the right equipment is defined as equipment with the largest performance gap. Given this description and the research goal, first a framework needs to be established which considers the relation between equipment performance and maintenance. This framework should provide an overview of relevant performance indicators, making it possible to determine equipment performance within BPRR in relation to maintenance.
Subsequently, equipment with the largest performance gap needs to be selected. However, an overall, objective procedure for equipment selection based on available data sources is currently missing at BPRR. Therefore, a procedure, accompanied by a decision support tool, processing all relevant performance indicator data will be the second research deliverable. Within such a decision support tool, a method to combine all available data needs to be incorporated as well.
Finally, in addition to both a framework with all relevant indicators and the developed decision support tool for selecting equipment with the largest performance gap, this research will also provide a tool for determining the optimal maintenance policy, thereby providing possibilities for improving equipment performance.
1.5 RESEARCH QUESTION(S)
The basis for evaluating the performance of anything is to define what is to be used for the evaluation, or, in the case of this research, which indicators are useful to determine equipment performance in relation to maintenance. This framework of relevant indicators is the first research deliverable. In order to arrive at such a framework, the first research question is formulated as follows:
1. How should equipment performance, in relation to maintenance, be measured?
1.1 How is equipment performance, in relation to maintenance, measured in literature?
1.2 How is equipment performance, in relation to maintenance, measured by BPRR?
1.3 Which criteria should be used to measure equipment performance in relation to maintenance?
After the establishment of important indicators used to determine equipment performance in relation to maintenance, equipment with the largest performance gap needs to be selected. Within BPRR, equipment is clustered into units, sub units and components. So first the unit with the largest performance gap needs to be selected, followed by the sub unit with the largest performance gap. Given the large number of sub units (over 100 sub units exist per unit) and their components, the developed procedure needs to be able to handle these amounts. Moreover, the developed method should be transformed into a decision support tool. This will be done by answering the second research question:
2. How can equipment with the largest performance gap be selected?
2.1 How can the performance of different pieces of equipment be compared and ranked?
2.2 How can units with the largest performance gap be selected?
2.3 How can sub units with the largest performance gap be selected?
Based on the procedure to select the right equipment, i.e. the equipment with the largest performance gap, the current maintenance strategy for that equipment has to be evaluated:
3. How can equipment performance be improved by adjusting current maintenance policies?
3.1 How can it be determined whether evaluating the current maintenance policy is useful?
3.2 How can the current maintenance policy be compared to other possible policies?
3.3 How is the best maintenance policy selected?
When combining the research deliverables with the accompanying research questions, a complete graphical overview of this research can be depicted (see Figure 4). Please note that in this figure the related chapters of this report are given.
[Figure 4 maps the deliverables of research questions I-III (the measurement framework; the selection procedure with an Excel based decision support tool; the policy evaluation procedures with an Excel based decision support tool) to the three research questions and to report chapters 2, 3 and 4.]
Figure 4 Relation between research deliverables, research questions and report structure
1.6
RESEARCH METHODOLOGY
The first research question is aimed at the development of a framework of relevant performance indicators. These indicators form the basis of the decision problem at hand: directing limited resources at the equipment with the largest performance gap. These indicators are selected based on a confrontation between literature and the operational definitions used at BPRR. The results of this first research question are used as input for the second research question (see Figure 5).
Figure 5 Method to develop a framework for performance indicators (van Aken et al., 2007)
In the second research question the aim is to define a selection procedure, which serves as a decision support tool. Based on a comparison of different general multi-criteria decision analysis approaches found in literature, an aggregate approach will be developed (see Figure 6). This approach is first used to select the unit with the largest performance gap. Subsequently the same approach is used to select the sub unit with the largest performance gap.
Figure 6 Methodology to develop a general multi-criteria decision analysis
Lastly, a mathematical model will be developed to determine the optimal maintenance policy for the equipment selected in the second research question. Mitroff et al. (1974) describe a model, based on the initial approach of operational research, consisting of conceptualization, modelling, model solving and implementation. Depending on the type of operational research, different loops can be created within the model depicted in Figure 7.
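The aggregate multi-criteria approach itself is developed later in this research. As a minimal sketch of the general idea, a weighted-sum ranking with min-max normalization could look as follows; the unit names, criteria and weights below are hypothetical and do not come from the thesis or from BPRR data.

```python
# Minimal weighted-sum sketch of a multi-criteria ranking; the aggregate
# approach actually developed in the thesis may differ. Unit names,
# criteria and weights are hypothetical.

def rank_units(scores, weights, higher_is_worse):
    """scores: {unit: {criterion: value}}. Returns units sorted by
    aggregate performance gap, largest gap first."""
    criteria = list(weights)
    norm = {}
    for c in criteria:
        values = [scores[u][c] for u in scores]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0           # avoid division by zero
        for u in scores:
            x = (scores[u][c] - lo) / span  # min-max normalise to [0, 1]
            # orient so that a higher normalised score = larger gap
            norm.setdefault(u, {})[c] = x if higher_is_worse[c] else 1.0 - x
    gap = {u: sum(weights[c] * norm[u][c] for c in criteria) for u in scores}
    return sorted(gap, key=gap.get, reverse=True)

units = {
    "Unit A": {"loss_cost": 1.2, "failures": 4},
    "Unit B": {"loss_cost": 3.5, "failures": 9},
    "Unit C": {"loss_cost": 0.4, "failures": 2},
}
order = rank_units(units,
                   weights={"loss_cost": 0.6, "failures": 0.4},
                   higher_is_worse={"loss_cost": True, "failures": True})
print(order)  # units ordered from largest to smallest aggregate gap
```

The same function could then be reapplied at sub-unit level, which is the two-step use the text describes.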
[Figure 7 depicts a cycle connecting reality / problem situation, conceptual model, scientific model and solution through the phases conceptualization, modelling, model solving and implementation, with validation and feedback loops.]
Figure 7 Operational research methodology as described by Mitroff et al. (1974)
In this figure the lines between the circles represent the research phases and the circles themselves represent the research results. Based on the characteristics of the research at hand, different loops within the research methodology can be created (Mitroff et al., 1974). This last research question can be classified as a normative axiomatic quantitative research question, since its main goal is to compare various strategies for addressing a specific problem in order to find an optimal solution (Bertrand & Fransoo, 2002). Accordingly, this type of research follows the cycle of "conceptualization, modelling, validation, implementation" in the research model described by Mitroff et al. (1974).
In the conceptualization phase, decisions have to be made about the variables to be included in the evaluation model and about the scope of the problem and model to be addressed. The conceptual model description should also include some literature to provide theoretical concepts and terms related to the subject studied (Bertrand & Fransoo, 2002). For this research this means that all relevant variables for determining the optimal maintenance policy need to be defined. The second phase described by Mitroff et al. (1974) is the modelling phase. This phase focuses on the causal relations between the variables described in the previous phase (Bertrand & Fransoo, 2002). The result of this phase should translate the input variables into an estimation of the average total maintenance cost for each policy. The decision variable in this case is the cycle time for a specific part.
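One classic way to translate a cycle time into an average total maintenance cost, in the spirit of this modelling phase, is the age-based replacement model under a Weibull lifetime. The sketch below is illustrative only: the thesis model may differ, and the shape, scale and cost parameters are invented.

```python
# Sketch of how a cycle time T maps to an average cost rate, using the
# classic age-based replacement model with a Weibull lifetime. All
# parameters (beta, eta, c_p, c_c) are invented for illustration.
import math

def weibull_reliability(t, beta, eta):
    # R(t) = exp(-(t/eta)^beta)
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, beta, eta, c_p, c_c, n_grid=2000):
    """Expected cost per time unit for preventive replacement at age T:
    C(T) = (c_p*R(T) + c_c*(1 - R(T))) / integral_0^T R(t) dt."""
    R_T = weibull_reliability(T, beta, eta)
    # trapezoidal integration of R(t) over [0, T] = expected cycle length
    h = T / n_grid
    area = sum(0.5 * h * (weibull_reliability(i * h, beta, eta)
                          + weibull_reliability((i + 1) * h, beta, eta))
               for i in range(n_grid))
    return (c_p * R_T + c_c * (1.0 - R_T)) / area

# simple grid search for the cost-minimising cycle time
candidates = [0.1 * k for k in range(1, 100)]
best_T = min(candidates,
             key=lambda T: cost_rate(T, beta=2.5, eta=5.0, c_p=1.0, c_c=10.0))
print(best_T)
```

With corrective cost well above preventive cost and an increasing failure rate (beta > 1), the cost rate has an interior minimum, which is exactly the kind of trade-off the evaluation model is meant to expose.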
[Figure 8 annotates the operational research cycle of Figure 7 with validation types: conceptual validity, logical validity, experimental validity, operational validity and data validity.]
Figure 8 Different types of validation for operational research (Landry et al., 1983)
Validation of the developed scientific model is done in the validation phase, which addresses the question whether the model presents an actual understanding or explanation of reality. More specifically, it answers the question whether the model actually provides understanding and explanation of the characteristics within the scientific model (Bertrand & Fransoo, 2002). The main concern raised by Mitroff et al. (1974) for this phase is an over-concern with validation, i.e. the pursuit of a perfect fit between the model and reality. This validation can be performed according to the different types of validation described by Landry et al. (1983), as shown in Figure 8. Lastly, in the implementation phase, a description is provided of which tools and procedures are delivered to BPRR and how these deliverables should be used. In this specific case this entails a short description of the developed Excel tool used to determine an optimal maintenance policy.
RESEARCH QUESTION 1
Abstract
In this part of the master thesis report the first research question, considering the measurement of equipment performance in relation to maintenance, will be discussed. As a result, an overview of important criteria for performance evaluation is obtained.
Keywords: Availability, Equipment effectiveness, Maintenance effectiveness, Maintenance evaluation, Maintenance performance, Performance indicator, Performance management, Reliability.
"When you can measure what you are speaking about, and express it in numbers, you know something about it. But when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind. It may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the stage of science." WILLIAM THOMSON (PHYSICIST, 1824-1907)
2. EQUIPMENT PERFORMANCE MEASUREMENT
As William Thomson formulates it, equipment performance related to maintenance first has to be made measurable before a decision can be made on which equipment to select based on a performance gap. In practice, the measurement of equipment performance related to maintenance tends to focus on evaluating direct costs, because maintenance is often seen as an expense (Tsang, 1998). However, equipment performance related to maintenance is based on more indicators than cost alone. In this chapter, equipment performance related to maintenance is first described from a theoretical point of view based on literature. Subsequently, equipment performance related to maintenance at BPRR is described, followed by a comparison of both these descriptions. In this way a complete list of indicators is developed as input for the decision problem at hand.
2.1
EQUIPMENT PERFORMANCE AND MAINTENANCE IN LITERATURE
The deterioration of equipment, and hence its production capability, starts whenever the technical system becomes operative. In addition to the normal wear and deterioration, other factors may influence the production capability such as operating errors or production settings different to the designed settings (Suzuki, 1994; Wireman, 2005; Muchiri et al., 2011). In order to ensure the capability of these technical systems (also described as equipment or assets), a complete strategy for maintenance is required. In literature this strategy is also described as the maintenance function. The Maintenance Engineering Society of Australia (MESA) defined this function as follows: "The maintenance function can be described as all engineering decisions and associated actions necessary and sufficient for the optimization of a specified capability." Capability in this definition relates to the ability to perform a specific action. Characteristics of capability include availability, production rate, quality and degradation (Murray et al., 1996).
Because the performance and competitiveness of manufacturing companies depend on the reliability and productivity of their production facilities, these companies need to keep good track of equipment performance and its relation to the maintenance function (Muchiri et al., 2011). In order to keep track of performance, indicators have to be defined to measure progress and gaps.
2.1.1 MAINTENANCE PERFORMANCE INDICATORS
Performance indicators in general are used to identify gaps between current and desired performance and provide an indication of progress toward closing or minimizing these gaps. Additionally, performance measures provide an important link between strategy and management action, thus supporting the implementation and execution of improvement projects (Neely et al., 2005).
When looking specifically at maintenance indicators, Campbell and Reyes-Picknell (2006) distinguish three main categories of performance measures:
1. Measures of equipment performance (availability, reliability and OEE);
2. Measures of cost performance (operations & maintenance labour, material costs);
3. Measures of process performance (ratio planned/unplanned work, schedule compliance).
All of these measures are diagnostic in nature (Simons, 1995) and determine whether the maintenance function remains in control or compares favourably with its counterparts elsewhere within or outside the organization (benchmark). As a result, these measures are frequently used to support operational control and benchmarking (Tsang, 1998). As described by Tsang (1998), these measurements alone make it difficult to determine the contribution of maintenance to business success. In order to achieve this, the performance measurements must be connected to the developed strategy of the maintenance function in a complete framework.
2.1.2 MAINTENANCE PERFORMANCE EVALUATION FRAMEWORK
Maintenance managers should gain insight into the relation between the input of the maintenance process and its outcome, in terms of total contribution to manufacturing performance and business strategic objectives, in order to make sound decisions concerning the maintenance function (Tsang, 1998; Muchiri et al., 2011). In order to gain such insight, the development and implementation of a rigorously defined performance measurement system and key performance indicators is essential. Moreover, Muchiri et al. (2011) state that maintenance performance measurements should not be viewed individually, but in relation to other organizational functions such as production and planning. Essential here is that an indicator is able to provide the right insight into progress and functioning. Good indicators should support monitoring and control of performance, help identify performance gaps, support continuous improvement and help allocate resources to the right place (Neely et al., 2005). Most performance measures are defined as outcome measures because they reflect the outcome of past decisions; this type of measure can also be defined as a lag indicator. Performance drivers are often used to predict future outcomes and are also referred to as lead indicators. In order to form complete and accurate measures, both types of indicators need to be used (Tsang, 1998).
Table 1 Relation between maintenance indicator type and category
Type of measure | Type of indicator | Indicator category
Result measure | Lagging indicators | Equipment performance; Cost performance
Process measure | Leading indicators | Process performance
Both maintenance process indicators (leading) and maintenance results indicators (lagging) are important for measuring the performance of the maintenance function. The cost and equipment performance indicators are instrumental in doing performance analysis of the maintenance function and identifying the performance gaps that should trigger management actions. They provide a good basis of conducting a root cause analysis to find causes for performance gaps, which lead to improvement of the maintenance function (Muchiri et al., 2011).
[Figure 9 shows the maintenance loop of work identification, planning, scheduling and execution, linking maintenance strategy formulation (driven by corporate strategy and manufacturing performance requirements) to maintenance results (equipment performance and maintenance cost); leading indicators measure the process and lagging indicators the results, feeding performance analysis against targets and benchmarks.]
Figure 9 Maintenance function framework with performance indicators (Muchiri et al., 2011)
2.1.2.1 LEADING PERFORMANCE INDICATORS
These types of indicators measure how well the maintenance process itself is executed, rather than whether the desired results have been realized. In the framework of Muchiri et al. (2011) four different steps are identified in the maintenance process, and for each of these process steps different performance indicators are developed. Not all of these indicators are relevant for this research, however. In the table below, the indicators relevant for this research, i.e. connected to the decision problem at hand, are given. As described in Table 1, these indicators are maintenance process measures (Campbell and Reyes-Picknell, 2006).
Table 2 Leading performance indicators (Muchiri et al., 2011)
Indicator category: Process performance
Indicator | Unit | Description
Percentage of preventive work orders | % | # preventive work orders / total work orders
Percentage of corrective work orders | % | # corrective work orders / total work orders
Mean Time To Repair (MTTR) | Hours | Total downtime / # of failures
Schedule Compliance | % | Percentage of work orders completed in the scheduled period before the deadline
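As an illustration of how these process measures can be computed, the following sketch implements the four formulas from Table 2. The work-order counts and downtime figures are invented for illustration and are not BPRR data.

```python
# Illustrative sketch (not BPRR's actual tooling): computing the leading
# indicators from Table 2 for a hypothetical set of work orders.

def leading_indicators(n_preventive, n_corrective, total_downtime_h,
                       n_failures, n_on_schedule, n_scheduled):
    """Return the four leading indicators as a dict. All counts and the
    downtime figure are assumed to cover the same reporting period."""
    total_orders = n_preventive + n_corrective
    return {
        # % preventive = # preventive work orders / total work orders
        "pct_preventive": 100.0 * n_preventive / total_orders,
        # % corrective = # corrective work orders / total work orders
        "pct_corrective": 100.0 * n_corrective / total_orders,
        # MTTR = total downtime / # of failures
        "mttr_hours": total_downtime_h / n_failures,
        # schedule compliance = on-schedule completions / scheduled orders
        "schedule_compliance": 100.0 * n_on_schedule / n_scheduled,
    }

kpis = leading_indicators(n_preventive=80, n_corrective=20,
                          total_downtime_h=50.0, n_failures=10,
                          n_on_schedule=72, n_scheduled=80)
print(kpis)
```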
2.1.2.2 LAGGING PERFORMANCE INDICATORS
The results of the maintenance process can be summarized by the reliability, availability and utilization of all technical systems or equipment. One key goal for maintenance is to be cost efficient, making it almost impossible not to look at the cost effectiveness of maintenance activities. Measuring these results is divided into equipment performance and maintenance costs (Muchiri et al., 2011).
Maintenance cost is in many instances influenced by the effectiveness and efficiency with which maintenance is performed. Effectiveness, as defined by Gits (1992), is based on maintenance actions being carried out at the right time, thereby minimizing corrective maintenance cost. Efficiency considers cost minimization through a focus on planning and resource allocation (Muchiri et al., 2011).
Table 3 Lagging performance indicators (Suzuki, 1994; Wireman, 2005; Muchiri et al., 2011)
Indicator category | Indicator | Unit | Description
Equipment performance | Availability rate | % | 1 - (Total downtime / Total calendar time)
Equipment performance | Performance rate | % | (Operating hours x actual output per hour) / Operating time
Equipment performance | Quality rate | % | (Total production amount - defect amount) / Total production amount
Equipment performance | OEE | % | Availability rate x Performance rate x Quality rate
Equipment performance | # of equipment failures | # | # of failures caused by equipment failure
Equipment performance | # of process failures | # | # of failures caused by operations
Equipment performance | Availability | % | MTBF / (MTBF + MTTR)
Equipment performance | Failure frequency | # / time unit | # of failures in a certain time period
Equipment performance | MTBF | Time unit | Mean time between failures
Equipment performance | Reduction in number of shutdowns | # | Comparison between the number of failures in two different time periods
Cost performance | Production loss cost | $ | Lost sales costs
Cost performance | Overall maintenance cost | $ | Total of corrective and preventive maintenance costs
Cost performance | Maintenance intensity | $ / production unit | Maintenance cost per unit of product produced in a specific time period
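Several of the equipment-performance formulas above can likewise be sketched in code. The figures below are invented; the performance rate is taken as a given fraction here, since its formulation varies between sources.

```python
# Illustrative sketch of a few lagging indicators from Table 3;
# all input figures are invented, not BPRR data.

def availability_rate(total_downtime, calendar_time):
    # 1 - (total downtime / total calendar time)
    return 1.0 - total_downtime / calendar_time

def quality_rate(total_production, defect_amount):
    # (total production - defects) / total production
    return (total_production - defect_amount) / total_production

def availability_from_mtbf(mtbf, mttr):
    # MTBF / (MTBF + MTTR)
    return mtbf / (mtbf + mttr)

def oee(availability, performance, quality):
    # Overall Equipment Effectiveness = product of the three rates
    return availability * performance * quality

a = availability_rate(total_downtime=36.5, calendar_time=365.0)
q = quality_rate(total_production=1000, defect_amount=50)
print(oee(a, performance=0.92, quality=q))
```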
Both cost and equipment performance indicators are instrumental in performing performance analysis of the maintenance function and identifying the performance gaps that should trigger management actions. They provide a good basis for establishing the reasons for performance gaps, leading to improvement of the maintenance function performance (Muchiri et al., 2011). Please refer to Appendix A, Overview of maintenance related KPIs, for a full overview of all maintenance related performance indicators found in literature, both leading and lagging.
2.1.2.3 PERFORMANCE TARGETS AND BENCHMARK
Good maintenance assumes that the maintenance objective and strategies are not determined in isolation, but derived from factors such as company policy, manufacturing policy and other, potentially conflicting, demands or constraints in the company (Swanson, 1997; Swanson, 2001; Pinjala et al., 2006). Moreover, maintenance objectives are related to the realization of production targets, through the influence on availability, at the required quality and within the constraints of safety and designed capabilities (Tsang et al., 1999). Due to time constraints this aspect of the framework could not be examined thoroughly; however, a complete overview of the relation between corporate strategy and maintenance strategy is provided in Appendix B, Maintenance strategy in literature.
Based on a periodic review of the performance measures in relation to the performance targets, progress towards realizing these targets can be tracked (Tsang, 1998). These targets are often a mixture of internal targets and external targets, the latter also defined as a benchmark (Muchiri et al., 2011). Together with the indicators used to measure performance, these targets form a complete framework for evaluating the maintenance function.
2.1.3 CONCLUSION FROM LITERATURE
The relation between equipment performance and maintenance can be described according to the framework for evaluating the maintenance function described by Muchiri et al. (2011). Moreover, by combining different sources of literature (Suzuki, 1994; Wireman, 2005; Campbell & Reyes-Picknell, 2006; Muchiri et al., 2011) a complete overview of maintenance related performance indicators has been identified. These indicators are grouped according to their type and category. In Table 4 the combined overview of all relevant performance indicators is provided, based on their related category and type. This list provides the necessary theoretical framework for the comparison with the indicators used at BPRR.
Table 4 Overview of maintenance related performance indicators
Indicator type | Indicator category | Indicator
Lagging | Equipment performance | Availability rate
Lagging | Equipment performance | Performance rate
Lagging | Equipment performance | Quality rate
Lagging | Equipment performance | Overall Equipment Effectiveness
Lagging | Equipment performance | Number of equipment failures
Lagging | Equipment performance | Number of process failures
Lagging | Equipment performance | Failure frequency
Lagging | Equipment performance | Mean Time Between Failures
Lagging | Equipment performance | Reduction in number of shutdowns
Lagging | Cost performance | Production loss cost
Lagging | Cost performance | Overall maintenance cost
Lagging | Cost performance | Maintenance intensity
Leading | Process performance | Planned / Unplanned ratio
Leading | Process performance | Mean Time To Repair
Leading | Process performance | Scheduling compliance
Note that the lagging indicators outnumber the leading indicators. This is because they provide a good basis for conducting a root cause analysis to find the causes of performance gaps (Tsang, 1998; Muchiri et al., 2011). Given the aim of finding equipment with the largest performance gaps related to maintenance, more focus is put on these lagging indicators than on the leading indicators.
2.2
EQUIPMENT PERFORMANCE AND MAINTENANCE AT BPRR
In the previous paragraph, maintenance evaluation and equipment performance as described in literature were presented. The goal of this paragraph is to compare the framework and accompanying indicators found in literature with those used by BPRR.
2.2.1 MAINTENANCE PERFORMANCE EVALUATION FRAMEWORK AT BPRR
The three most important levels of targets are: BP Global objectives, BPRR objectives and the Solomon benchmark objectives. In most cases the BP Global and BPRR objectives overlap, given that the BP Global objectives are mandatory for each BP refinery, whilst the refinery objectives are additional to these. An application of the maintenance performance evaluation framework described in the previous paragraph is depicted in Figure 10.
[Figure 10 applies the framework of Figure 9 to BPRR: BP Global and BPRR strategy set equipment performance targets related to maintenance; the maintenance loop of work identification, planning, scheduling and realization delivers maintenance results in terms of equipment performance, production loss cost and maintenance cost, analysed via leading and lagging performance indicators against the performance targets and the Solomon benchmark.]
Figure 10 Maintenance function framework applied to BPRR
2.2.1.1 LEADING PERFORMANCE INDICATORS USED AT BPRR
Ratio between planned and unplanned work orders
BPRR makes use of a ratio between planned (PM02 and PM03) and unplanned (PM01) work orders. Next to these, BPRR has three more types of work orders (see Table 5). These other types are not taken into consideration due to the scope of this research.
Table 5 Types of work orders at BPRR
Order type | Description | Used
PM01 | General (corrective and servicing) | Yes
PM02 | Preventive / predictive (non-compliance) | Yes
PM03 | Inspection (compliance) | Yes
PM04 | Refurbishment | No
PM05 | Investment | No
PM06 | Annual / Standing | No
The difference between PM02 and PM03 work orders is the trigger of the inspection or other preventive maintenance action. Work orders of type PM03 are bound by a legal restriction on the time between consecutive actions, whereas PM02 orders are not. Planned is in this case defined as preventive actions known more than one week before execution. Unplanned is defined as all corrective maintenance actions carried out after equipment failure or after a failure is found during an inspection.
Planned / Unplanned ratio = # corrective work orders / total # work orders   [2.1]
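With hypothetical work-order counts, equation [2.1] can be illustrated as follows (PM01 taken as corrective, per Table 5; the counts are invented).

```python
# Illustrative calculation of equation [2.1] from hypothetical
# work-order counts (PM01 = corrective, PM02/PM03 = planned).
orders = {"PM01": 35, "PM02": 90, "PM03": 25}   # counts are invented
total = sum(orders.values())
unplanned_ratio = orders["PM01"] / total         # corrective / total
print(round(unplanned_ratio, 3))
```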
Scheduling compliance
The maintenance support department provides a monthly compliance report based on the scheduling compliance, including an overview of the backlog of all open maintenance orders that were planned in advance (more than one week) but not carried out. Based on these compliance numbers, BPRR can see how many planned work orders are carried out within their time frame.
2.2.1.2 LAGGING PERFORMANCE INDICATORS USED AT BPRR
Availability rate or BPRR OSA
Based on an analysis of the operational definitions used at BPRR, it was found that the On Stream Availability (OSA) is the theoretical equivalent of the availability rate. This measure defines the fraction of time that the equipment, in this case the units, is available for operation. As can be seen from the calculations below, an additional factor is used to relate the units and their availability in order to calculate the refinery availability.
BPRR Unit OSA per year = 1 - Production loss duration [days / year] / Calendar time [days / year]   [2.2]
BPRR unit impact on refinery OSA = (1 - BPRR Unit OSA) × EDC %   [2.3]
EDC_Unit = Production capacity × EDC Factor × On Stream Factor   [2.4]
EDC_Unit % = EDC_Unit / Σ over all units of EDC_Unit   [2.5]
Given these descriptions, there is one difference between the availability rates defined in literature and the one used by BPRR. The EDC of a unit is a factor used to determine the criticality of that specific unit within the refinery as a whole. Moreover, through normalization this factor is converted to the percentage a specific unit represents within the refinery. In order to determine equipment performance for further research purposes, the unit OSA will be used rather than the refinery OSA as a whole. Therefore the EDC_Unit % can be seen as a unique performance indicator representing the importance of a unit in the refinery as a whole and needs to be incorporated as well. For an example calculation of the unit OSA please refer to Appendix D, Example of unit OSA.
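Equations [2.2]-[2.5] can be sketched as follows; all unit names, capacities, EDC factors and loss durations below are invented for illustration and are not BPRR figures.

```python
# Sketch of equations [2.2]-[2.5] with invented figures for three units;
# names, EDC factors and capacities are hypothetical, not BPRR data.

def unit_osa(loss_days, calendar_days=365.0):
    # [2.2] BPRR unit OSA = 1 - production loss duration / calendar time
    return 1.0 - loss_days / calendar_days

units = {
    # name: (production capacity, EDC factor, on-stream factor, loss days)
    "CDU": (100_000, 1.0, 0.98, 4.0),
    "FCC": (40_000, 3.0, 0.95, 9.0),
    "HDS": (30_000, 1.5, 0.97, 2.0),
}

# [2.4] EDC per unit and [2.5] its normalised share of the refinery total
edc = {u: cap * f * osf for u, (cap, f, osf, _) in units.items()}
total_edc = sum(edc.values())
edc_pct = {u: edc[u] / total_edc for u in edc}

# [2.3] impact of each unit's unavailability on the refinery OSA
impact = {u: (1.0 - unit_osa(units[u][3])) * edc_pct[u] for u in units}
print(max(impact, key=impact.get))  # unit with the largest OSA impact
```

This also illustrates why EDC % matters: a unit with moderate downtime but a high EDC share can impact refinery OSA more than a unit with more downtime but a small share.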
Failure frequency
At BPRR, different forms of production loss are registered in order to calculate the availability rate of equipment (units) or the refinery as a whole. The main distinction between production losses is based on the intent of the production loss: is it planned or unplanned? Subsequently, a type of production loss cause is assigned. In literature, failure frequency is defined as the number of complete equipment breakdowns during a certain time period. For BPRR this means that failure frequency relates to all production loss registered as unplanned with the effect of a shutdown (S/D). This can be due either to an equipment breakdown of the unit itself or to an equipment breakdown of a directly connected unit (defined as a Domino).
Production loss cost
Strongly connected to the availability rate, the production loss costs following from availability losses are carefully reported every month by the performance management department. This data is defined as the RAID production loss cost. The production loss costs are calculated based on a marginal factor and an estimation of the possible production had the equipment not been slowed down or broken down. For this research it is assumed that over the course of one year these marginal factors do not vary; in practice these margins are highly dependent on commercial strategy and on factors outside the influence of decision makers at BPRR.
Overall maintenance cost
Next to the actual equipment performance, measured by the BPRR OSA, maintenance results are represented by the accompanying maintenance costs and production loss costs. In Figure 11 the current maintenance costs (material and contracting costs) are divided into reactive and proactive maintenance costs. From this figure it can quickly be concluded that the costs for reactive maintenance are significantly higher than the costs for proactive maintenance.
[Figure 11 plots reactive, proactive and total maintenance cost per month over January-August 2015.]
Figure 11 Maintenance Cost over 2015 (BPRR, September 2015)
According to the rule of thumb used at BPRR, reactive maintenance costs are about a factor ten higher than the proactive maintenance costs. Based on this small analysis it makes sense to evaluate whether the maintenance performance could be improved.
2.2.1.3 PERFORMANCE TARGETS AND BENCHMARK
As stated in the problem introduction, the performance target considering equipment performance in relation to maintenance is defined by the OSA target. This target is set at an average of 96,7% over the last five years, meaning that compensation between years is allowed. The periodic review of this target is executed on a monthly basis, in order to keep track of progress during the year. However, this equipment performance target is not the only relevant target. Cost performance is also compared to a target set in advance, leading to the financial restrictions mentioned earlier. Additionally, every two years a benchmark study is carried out by an external consultancy firm, Solomon Associates, comparing refineries worldwide with each other. Based on this benchmark, external targets are set for BPRR, complementing the internal targets. Calculations and indicators used by the Solomon benchmark deviate from those used internally at BPRR. For this research the internal indicators and calculations are leading and therefore used further on in this research. For completeness, calculations of the Solomon availability rate are presented in Appendix E, Solomon benchmark.
2.2.2 CONCLUSION FROM BPRR
Based on a complete analysis of all performance indicators used at BPRR, and their operational definitions, Table 6 was developed. As can be seen from this table, the focus of BPRR indicators lies, as in literature, more on lagging indicators than on leading indicators. All indicators used at BPRR are grouped according to the theoretical framework of maintenance performance indicators defined in 2.1.3. This input is used in the confrontation in the next section.
Table 6 Performance indicators used at BPRR

Indicator type | Indicator category    | KPI
Lagging        | Equipment performance | Availability rate
               |                       | EDC
               |                       | Number of equipment failures
               |                       | Number of process failures
Lagging        | Cost performance      | Production loss cost
               |                       | Overall maintenance cost
               |                       | Maintenance intensity
Leading        | Process performance   | Planned / Unplanned ratio
               |                       | Scheduling compliance
2.3 CONFRONTATION BETWEEN LITERATURE AND BPRR
Based on a comparison between the performance indicators found in literature and those used by BPRR, it was found that a large part of the most commonly used indicators are covered by available data. However, from Table 7 it can be observed that some widely used and acknowledged indicators from literature are absent at BPRR. The most important of these are the performance rate, the mean time before failure and the mean time to repair. Moreover, Table 7 provides an overview of the criteria selected for use further on in the research, combined with the argumentation for these choices.
Table 7 Overview of performance indicator comparison (indicators grouped by type and category as in Table 6)

Indicator                        | Used | Argumentation
Availability rate                | Yes  | Measured by OSA
EDC %                            | Yes  | Part of OSA calculations
Performance rate                 | No   | Not measured at BPRR, suggestion
Quality rate                     | No   | Not measured at BPRR, suggestion
Overall Equipment Effectiveness  | No   | Not measured at BPRR, suggestion
Number of equipment failures     | No   | Similar to failure frequency
Number of process failures       | No   | Outside of research scope
Failure frequency                | Yes  | Number of shutdowns in a time period
Mean Time Before Failure         | No   | Not measured directly, suggestion
Reduction in number of S/D       | No   | Similar to failure frequency
Production loss cost             | Yes  | Lost sales cost due to downtime
Overall maintenance cost         | Yes  | Preventive + corrective maintenance cost
Planned / Unplanned ratio        | Yes  | Unplanned work orders / total work orders
Mean Time To Repair              | No   | Not measured at BPRR, suggestion
Scheduling compliance            | No   | Outside of research scope
As can be seen from this comparison, some indicators are currently missing at BPRR. In all four literature sources used for the development of this framework, these criteria are mentioned and considered important; a total of six important indicators are currently missing. Based on this confrontation, the recommendation is to introduce these indicators as part of the framework. This addition should not be a big issue, given that some of the essential information is already gathered (for example, the data needed to establish the MTBF). The applications of these additional indicators have also proven useful for other companies in the process industry (Suzuki, 1994). Table 8 gives an overview of the suggested additional indicators.
Table 8 Addition of performance indicators suggested for BPRR

Indicator type | Indicator category    | KPI
Lagging        | Equipment performance | Performance rate
               |                       | Quality rate
               |                       | Overall Equipment Effectiveness
               |                       | Mean Time Between Failures
               |                       | Reduction in number of shutdowns
Leading        | Process performance   | Mean Time To Repair
A final remark can be made on this recommendation when looking at the additionally suggested equipment performance indicators. Considering the calculations or operational definitions of these indicators, the Overall Equipment Effectiveness (OEE) itself contains three indicators (availability rate, performance rate and quality rate). Adding OEE would thereby reduce the number of additional KPIs to four.
2.4 CONCLUSION EQUIPMENT PERFORMANCE AND MAINTENANCE
Based on a confrontation between literature and practice from BPRR, six criteria or key performance indicators are defined for the measurement of maintenance performance with special attention to equipment performance and the availability rate used at BPRR (OSA).
Table 9 Selected indicators for measurement of equipment performance related to maintenance

Indicator type | Indicator category    | KPI / Criteria            | Unit of measure
Lagging        | Equipment performance | EDC unit %                | [%]
               |                       | OSA (Availability rate)   | [%]
               |                       | Failure frequency         | [#/t]
Lagging        | Cost performance      | Production loss cost      | [$]
               |                       | Overall maintenance cost  | []
Leading        | Process performance   | Planned / Unplanned ratio | [%]
The selected indicators, for further research purposes defined as criteria, are presented in Table 9 and cover all types of measurement from the theoretical framework described by Muchiri et al. (2011). Five of these measurements or criteria follow from the direct confrontation between literature and practice. The criterion EDC unit % is selected not based on its compliance between literature and practice, but rather to cover the importance of a certain unit within the refinery as a whole.
Table 10 Ideal criteria for measurement of equipment performance related to maintenance

Indicator type | Indicator category    | KPI / Criteria            | Unit of measure
Lagging        | Equipment performance | OEE                       | [%]
               |                       | Failure frequency         | [#/t]
               |                       | MTBF                      | [t]
               |                       | Reduction in # shutdowns  | [%]
Lagging        | Cost performance      | Production loss cost      | [$]
               |                       | Overall maintenance cost  | []
Leading        | Process performance   | Planned / Unplanned ratio | [%]
               |                       | MTTR                      | [t]
A second result from the confrontation between theory and practice is the suggestion to add six more important indicators to the framework of indicators used at BPRR. Following the accompanying calculations, this would mean that EDC unit % and OSA would be incorporated in the OEE, together with the performance rate and quality rate. The ideal indicators, or criteria, used as input for the decision problem analysed in the next research question are presented in Table 10.
RESEARCH QUESTION 2
Abstract
In this part of the master thesis report the second research question, concerning the selection of the equipment with the largest performance gap, is discussed. The results are a multicriteria decision-making procedure for the selection of the equipment, and an Excel-based tool to execute this procedure.
Keywords Analytical Hierarchy Process, Decision Support Tool, Multicriteria decision making approach, Outranking method, Pairwise comparison, Preference function, PROMETHEE.
Deliverables question I: Framework consisting of important criteria for the measurement of equipment performance in relation to maintenance.
Deliverables question II: Procedure for the selection of the best unit and sub unit for further analysis; Excel-based decision support tool for the selection of the best unit and sub unit based on the defined input data.
Deliverables question III: Procedure to assess whether maintenance policy evaluation is useful; procedure for the evaluation and adjustment of current maintenance policies; Excel-based decision support tool for choosing the best maintenance policy.

Research question I (Chapter 2): 1. How should equipment performance related to maintenance be measured? 1.1 How is equipment performance, related to maintenance, measured in literature? 1.2 How is equipment performance, related to maintenance, measured by BPRR? 1.3 Which criteria should be used to measure equipment performance related to maintenance?
Research question II (Chapter 3): 2. How can equipment with the largest performance gap be selected? 2.1 How can the performance of equipment be compared? 2.2 How can units with the largest performance gap be selected? 2.3 How can sub units with the largest performance gap be selected?
Research question III (Chapter 4): 3. How can equipment performance be improved by adjusting current maintenance policies? 3.1 How can it be determined whether evaluating the current maintenance policy is useful? 3.2 How is the current maintenance policy compared to other possible policies? 3.3 How is the best maintenance policy selected?
"In a moment of decision the best thing you can do is the right thing to do; The next best thing is the wrong thing; And the worst thing you can do is nothing." THEODORE ROOSEVELT (26TH PRESIDENT OF THE UNITED STATES, 18581919)
3. EQUIPMENT SELECTION
In the previous research question, six criteria were selected to represent equipment performance in relation to maintenance. Based on these criteria, the equipment with the biggest performance gap needs to be selected in order to allocate resources to the right equipment. The goal of this chapter is therefore to describe a multicriteria decision making (MCDM) or multiple attribute decision making (MADM) method to select this equipment. In order to arrive at such a method, first an overview is presented of the process steps as described in literature, followed by an application to the BPRR case. The selection is done in two steps: initially for BPRR units and subsequently for BPRR sub units. The reason that this procedure has to be executed twice is that, as described by Suzuki (1994), equipment at BPRR is clustered by a breakdown structure (see Appendix F, Functional location separation).
In general, an MCDM procedure can be described by four main phases: identification of the decision problem, selection of the evaluation model, determination of the preferences and determination of the best decision. Different authors in literature describe this procedure in more detail (Dubois & Prade, 1980; Roy & Vincke, 1981; Huang & Tzeng, 2011). Based on a comparison of these different descriptions (see Appendix G, General MCDA procedure), six different steps over four different phases are defined (Table 11).
Table 11 Phases and steps in the MCDA procedure

MCDA Phase                               | MCDA Step
1. Identify the decision problem         | 1. Describe the decision problem
                                         | 2. Structure the decision problem
2. Select evaluation model               | 3. Select the appropriate evaluation model
3. Determine preferences for a decision  | 4. Construct the preferences for a decision
                                         | 5. Evaluate the alternatives
4. Determine best decision               | 6. Determine the best alternatives
3.1 IDENTIFY THE DECISION PROBLEM
For the identification of the decision problem the Analytic Hierarchy Process (AHP) is used. This multicriteria decision-making approach is focused on structuring multiple elements in a hierarchical structure (Saaty, 1990). The main goal of this approach is to assist in arriving at a decision which is invariant to internal politics and the behaviour of people. This goal is realized by deriving criteria weights according to the appropriate hierarchical system (Huang & Tzeng, 2011). In order to structure the decision problem according to the AHP, Saaty (1990) described four steps to be performed. Table 12 depicts the relation between these steps and the MCDA steps from Table 11.
Table 12 Process steps of the AHP method in relation to the MCDA steps

MCDA Step                          | AHP Step
1. Describe the decision problem   | 1. Describe all related criteria
                                   | 2. Select the most important criteria
2. Structure the decision problem  | 3. Arrange the criteria hierarchically
                                   | 4. Derive criteria weights
3.1.1 DESCRIBE THE DECISION PROBLEM
The first step defined in literature (Dubois & Prade, 1980; Roy & Vincke, 1981; Huang & Tzeng, 2011) is a description of the decision problem at hand. In the case of this research, this is the decision of selecting the equipment with the largest performance gap. To do so, all criteria and elements that characterize this decision problem need to be described.
3.1.1.1 DESCRIBE ALL RELATED CRITERIA
A general MCDA problem is defined mathematically as (Huang & Tzeng, 2011; de Almeida et al., 2015):

    {g1(ai), g2(ai), ..., gn(ai) | ai ∈ A}        [3.1]

In this description A = {ai | i = 1, 2, ..., m} represents the set of possible alternatives or actions for which to make a decision. Furthermore, G = {gj | j = 1, 2, ..., n} is the set of criteria used to compare the alternatives from set A. Then gj(ai) represents the performance of alternative ai on criterion gj, and combined these performances form the performance table (see Table 13).
Table 13 Performance table

        g1       ...   gn
a1      g1(a1)   ...   gn(a1)
...     ...      ...   ...
am      g1(am)   ...   gn(am)
Given this notation, a decision problem can be identified if three necessary elements are present: a set of possible alternatives, a set of criteria, and the set of performances of the alternatives on the criteria. These three elements provide solely a description of the situation, not a structure for the decision problem. For the latter, a clear goal needs to be determined against which the decision maker can evaluate the alternatives based on their performance (Saaty, 1990). This is done in the second step of the AHP.
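As a minimal illustration of these three elements, they can be represented directly in code. The names and values below are invented for the sketch and are not BPRR data:

```python
# The three elements of an MCDA problem from [3.1], with illustrative values:
# alternatives A, criteria G, and the performance table g_j(a_i).
alternatives = ["a1", "a2", "a3"]          # A = {a_i | i = 1, ..., m}
criteria = ["g1", "g2"]                    # G = {g_j | j = 1, ..., n}
performance = {                            # the performance table of Table 13
    "a1": {"g1": 4.27, "g2": 16.33},
    "a2": {"g1": 2.00, "g2": 0.00},
    "a3": {"g1": 12.83, "g2": 4.97},
}
# Performance of alternative a1 on criterion g2:
print(performance["a1"]["g2"])  # → 16.33
```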
3.1.1.2 SELECTING THE MOST IMPORTANT CRITERIA
Not all criteria related to the decision problem are needed. Based on the principles for criteria defined by Keeney & Raiffa (1976), criteria are correct if five conditions are met (see Table 14).
Table 14 Conditions for the selection of criteria

Condition          | Description
1. Completeness    | All criteria contributing to the overall objective should be incorporated
2. Operationality  | Selected criteria should be easy to use and understand
3. Decomposability | Selected criteria should be able to be separated into components or variables
4. Non-redundancy  | Dependency or overlap between criteria should be avoided as much as possible
5. Minimum size    | The number of selected criteria should be kept as low as possible
Furthermore, the total number of criteria for a decision situation should be seven at most: this number (plus or minus two) has been shown to be the maximum for making preference judgments based on pairwise comparison (Saaty & Ozdemir, 2003).
3.1.1.3 DESCRIBE THE DECISION PROBLEM FOR THE BPRR CASE
Three basic elements need to be described for the decision problem, which for the BPRR case is the selection of the equipment with the largest performance gap. Based on the mathematical description provided in 3.1.1.1, the situation for unit selection can be described by A = {ai | i = 1, 2, ..., 22}, where A represents the total set of units to be considered and forms the first element.
Subsequently, the decision criteria are represented by G = {gj | j = 1, 2, ..., 6}. Note that these criteria come from the previous chapter, are shown in Table 15, and form the second element.
Table 15 Criteria used in the BPRR decision problem

Criterion (indicator)     | Unit of measure | Label
EDC %                     | [%]             | g1
BPRR Unit OSA             | [%]             | g2
Production loss cost      | [$]             | g3
Maintenance cost          | []              | g4
Planned / Unplanned ratio | [%]             | g5
Failure frequency         | [#]             | g6
The third and last element of the decision problem is the set P = {g1(a1), g2(a1), ..., g6(a22)}, representing the performances of the different units on the six criteria. A complete overview of these performances, based on input data from a complete year, is given in Table 16.
Table 16 Performance table for the BPRR case

Unit | g1 (EDC %) | g2 (Unit OSA) | g3 (Prod. loss cost) | g4 (Maint. cost) | g5 (P/U ratio) | g6 (Failures)
1    | 4,27%      | 16,33%        | $15,00               | 22,00            | 35,04%         | 4
2    | 2,00%      | 0,00%         | $ 4,00               |  1,00            | 42,31%         | 0
3    | 12,83%     | 4,97%         | $17,00               | 20,00            | 46,49%         | 0
4    | 12,83%     | 3,14%         | $14,00               | 15,00            | 28,72%         | 2
5    | 7,78%      | 1,73%         | $13,00               | 10,00            | 29,64%         | 0
6    | 6,50%      | 5,84%         | $21,00               |  9,00            | 43,75%         | 3
7    | 6,50%      | 1,95%         | $20,00               | 12,00            | 17,31%         | 0
8    | 20,74%     | 0,03%         | $19,00               | 21,00            | 33,51%         | 0
9    | 2,57%      | 7,23%         | $ 4,00               | 19,00            | 42,31%         | 0
10   | 1,28%      | 4,01%         | $10,00               | 16,00            | 42,37%         | 1
11   | 5,49%      | 0,56%         | $16,00               |  8,00            | 27,36%         | 0
12   | 0,82%      | 0,01%         | $ 4,00               | 11,00            | 47,15%         | 0
13   | 0,85%      | 1,27%         | $ 4,00               |  5,00            | 10,34%         | 1
14   | 0,41%      | 7,30%         | $ 4,00               |  6,00            | 16,67%         | 2
15   | 0,28%      | 13,33%        | $ 4,00               | 13,00            | 32,26%         | 3
16   | 1,57%      | 0,28%         | $11,00               |  3,00            | 15,79%         | 0
17   | 1,82%      | 0,11%         | $ 9,00               |  2,00            | 5,88%          | 0
18   | 1,09%      | 0,11%         | $ 4,00               |  4,00            | 21,21%         | 0
19   | 0,45%      | 0,00%         | $ 4,00               | 14,00            | 26,16%         | 0
20   | 1,13%      | 0,01%         | $12,00               |  7,00            | 40,74%         | 0
21   | 4,78%      | 1,33%         | $22,00               | 18,00            | 28,95%         | 0
22   | 4,01%      | 1,05%         | $18,00               | 17,00            | 31,25%         | 0

* Financial numbers in this table are fictitious and not absolute.
3.1.1.4 SELECTING THE MOST IMPORTANT CRITERIA FOR THE BPRR CASE
As described by Keeney & Raiffa (1976), criteria are correct if five conditions are met. The six criteria used in this decision problem are therefore tested against these conditions (see Table 17).
Table 17 Results of checking criteria conditions

Condition          | Result
1. Completeness    | As concluded from the previous research question, all indicator categories are represented by the six criteria, making the measurement of equipment performance complete
2. Operationality  | All selected criteria are directly derived from existing data sources, thereby meeting this condition
3. Decomposability | All selected criteria were defined as operational formulas
4. Non-redundancy  | In order to test this, a sensitivity analysis will be carried out later on
5. Minimum size    | In order to test this, a sensitivity analysis will be carried out later on
The final condition, as described by Saaty & Ozdemir (2003), was that the total number of criteria used must not surpass seven; this condition is also met.
3.1.2 STRUCTURING THE DECISION PROBLEM
Now that the decision problem is identified and described by the three necessary elements, a clear structure needs to be established. The AHP method prescribes a hierarchical system to structure all related factors in one framework.

3.1.2.1 ARRANGE CRITERIA HIERARCHICALLY
Saaty (1990) argues that arrangement in a hierarchy serves two purposes. The first is to provide an overall view of the complex relationships inherent in the situation; the second is to aid the decision makers in assessing whether the issues at each level are of the same order of magnitude.
[Figure 12 Hierarchical system for MCDA (Huang & Tzeng, 2011): the goal or overall objective at the top, the dimensions or aspects below it, the criteria g11 ... gch per dimension, and the alternatives a1 ... am at the bottom.]
Figure 12 depicts a general hierarchical system based on the AHP. In this figure, the goal or overall objective is at the top of the hierarchy. According to Saaty (1990), one should provide enough detail to identify the attributes or criteria that contribute to the solution or realization of the overall objective; these are represented by the dimension or aspect level of the hierarchy. The third level of the hierarchy consists of the criteria selected in the first two steps of the AHP method. Lastly, all possible alternatives are represented, with their connections to all criteria.
3.1.2.2 DERIVE CRITERIA WEIGHTS
The fourth step in the AHP method entails the determination of weights for each criterion. These weights are determined by pairwise comparisons (Saaty, 1977; 1980; 1990; 1994; 2008). In order to establish measurement scales in the AHP process, a paired comparison between elements is used: the most effective way to concentrate judgment is to take a pair of elements and compare them on a single property, without concern for other properties or other elements (Saaty, 1990). As a result, a pairwise comparison matrix can be obtained, as shown in Table 18.
Table 18 Pairwise comparison matrix based on Saaty (1990)

       g1      g2      ...   gn
g1   w1/w1   w1/w2   ...   w1/wn
g2   w2/w1   w2/w2   ...   w2/wn
...    ...     ...   ...    ...
gn   wn/w1   wn/w2   ...   wn/wn
The entries of this pairwise comparison matrix are based on the perception of the stakeholders involved in the decision problem. Using a survey, stakeholders can express their preference for one criterion over another, also described as the importance of that criterion. Furthermore, a stakeholder can express the extent of this preference using the fundamental scale developed by Saaty (1990). Please refer to Appendix I, Determining criteria weights, for an example of the calculation of criterion weights using the eigenvalue method (Saaty, 1977; 1980; 1990).
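The eigenvalue method can be sketched as follows. This is a minimal power-iteration approximation of the principal eigenvector, using an illustrative, perfectly consistent comparison matrix rather than BPRR survey data:

```python
import numpy as np

def ahp_weights(matrix, iterations=100):
    """Approximate the principal eigenvector of a pairwise comparison
    matrix by power iteration, normalised so the weights sum to one."""
    a = np.asarray(matrix, dtype=float)
    v = np.ones(a.shape[0])
    for _ in range(iterations):
        v = a @ v
        v /= v.sum()  # keep the weight vector normalised
    return v

# A perfectly consistent matrix built from known weights (0.5, 0.3, 0.2):
# entry (i, j) equals w_i / w_j, exactly as in Table 18.
true_w = [0.5, 0.3, 0.2]
A = [[wi / wj for wj in true_w] for wi in true_w]
print(ahp_weights(A))  # recovers approximately [0.5, 0.3, 0.2]
```

For a consistent matrix the principal eigenvector equals the original weight vector; for real survey data the result is an approximation whose consistency should be checked, as Saaty describes.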
As a result of this pairwise comparison, Table 13 can be extended by adding the criterion weights, as seen in Table 19. Now that the decision problem is structured and the criterion weights are added, the first phase of the MCDA procedure is finished. Subsequently, the most appropriate MCDA method for the evaluation of the alternatives needs to be selected.
Table 19 Results of factor weights using the AHP

Criterion   g1       ...   gn
Weight      w1       ...   wn
a1          g1(a1)   ...   gn(a1)
...         ...      ...   ...
am          g1(am)   ...   gn(am)
3.1.2.3 ARRANGE FACTORS HIERARCHICALLY FOR THE BPRR CASE
Based on the steps defined in 3.1.2, the decision problem is further structured. The selection of important factors or criteria was already done in the previous chapters; therefore, the criteria used in the identification of the decision problem are adopted directly, without another selection. All relevant criteria defined in G are arranged hierarchically according to the description provided in section 3.1.2.1. The result of this step is shown in Figure 13. The goal or general objective is to maximize equipment performance considering the aspects relevant to maintenance. These aspects are based on the types of performance indicators in the framework from research question one (see 2.2.2).
[Figure 13 Hierarchy structure of the decision problem: the goal (maximum equipment performance) at the top; the aspects equipment performance, process performance and cost performance below it; the criteria Unit OSA, failure frequency, EDC %, planned/unplanned ratio, maintenance costs and production loss costs at the third level; and the alternatives Unit 1 through Unit 22 at the bottom.]
3.1.2.4 DERIVE FACTOR WEIGHTS FOR THE BPRR CASE
Based on the expert opinion of two stakeholders at BPRR, it was decided that each of the criteria used in the evaluation of the alternatives should receive the same weight. The weight of each criterion is therefore 1/6, based on the simple calculation:

    wj = 1/n        [3.2]
Note that this means that the effect of the weight factor is negligible in this case, because all criteria are multiplied by the same factor. However, the weight factor is not omitted from the calculation: if it were excluded, φ(a) could not be defined within the specified range. If BPRR decides in the future that the weight factors should differ per criterion, please refer to Appendix H, Example of comparison matrix questionnaire, for an example of how to determine the weights.
3.2 SELECT AN APPROPRIATE EVALUATION MODEL
The second phase of the MCDA procedure entails the selection of the right evaluation method. In order to select the right MCDA method, the guidelines set by Guitouni & Martel (1998) are used. The relation between these guidelines and the MCDA step is depicted in Table 20.
Table 20 Process steps in selecting an evaluation method in relation to the MCDA steps

MCDA Step                                   | Selection guidelines
3. Select the appropriate evaluation model  | 1. Preference elucidation and modelling
                                            | 2. Decision problematic addressed
                                            | 3. Input information and features
                                            | 4. Discrimination power of criteria
                                            | 5. Method hypothesis
3.2.1 SELECTION OF AN EVALUATION MODEL FOR THE BPRR CASE
In Appendix J, the complete BPRR case is assessed against these guidelines. As a result, an outranking method is needed which can deal with pairwise comparison between two alternatives. Furthermore, the complete performance of equipment is considered, so compensation between criteria is allowed. Finally, it was found that the criteria might not be independent. Based on these characteristics, the PROMETHEE outranking method appears to be the best evaluation model.
The AHP is not used for expressing and forming the preference, since this method assumes inner and outer independence between the different hierarchical levels and elements in the decision structure (Saaty, 1990; Guitouni & Martel, 1998). Given that the criteria selected for this decision problem are not independent, the PROMETHEE II method is selected for its ability to cope with interdependent criteria. This method can also handle the available input information, and it provides a complete preorder instead of the partial preorder of the PROMETHEE I method.
The PROMETHEE I method is aimed at finding a partial preorder, where some of the alternatives might remain unordered because of possible incomparability. The PROMETHEE II method, however, is aimed at finding a complete preorder, distinguishing only outranking and indifference. Based on this difference, the PROMETHEE II method is preferred over PROMETHEE I (Brans et al., 1984; 1985; Guitouni & Martel, 1998; Huang & Tzeng, 2011).
The PROMETHEE method is, as stated before, an outranking method. This type of evaluation method generally consists of three steps (Huang & Tzeng, 2011), as depicted in Table 21. These steps are further described in 3.3.
Table 21 Process steps of the PROMETHEE II method in relation to the MCDA steps

MCDA step                                    | Outranking method step
4. Construct the preferences for a decision  | 1. Construction of generalized criteria
                                             | 2. Determination of an outranking relation
5. Evaluate the alternatives                 | 3. Evaluation of the ranking results
3.3 DETERMINE PREFERENCES FOR A DECISION
Constructing the preferences entails defining when one alternative is chosen over another. To do so, first the generalized criterion is defined, which makes it possible to define a ranking relation between all alternatives.
3.3.1 CONSTRUCT THE PREFERENCES FOR A DECISION
In the previous paragraph, PROMETHEE II was selected for the evaluation of the alternatives. As defined in Table 21, for this type of method the preferences need to be constructed based on the construction of generalized criteria and the determination of an outranking relation.
Table 22 PROMETHEE II process steps for constructing the preferences

MCDA step                     | Outranking method step
4. Construct the preferences  | 1. Construction of generalized criteria
                              | 2. Determination of an outranking relation
3.3.1.1 CONSTRUCTION OF GENERALIZED CRITERIA
In order to define a preference function, the type of criterion must be selected from six different types of generalized criteria (see Appendix K, Generalized criteria, for a full overview). These generalized criteria are used to determine when one alternative is preferred over another. In the basic form, the usual criterion, neither indifference nor preference thresholds are used to determine the best alternative. This type is proposed here, based on the preorder structure of the PROMETHEE II method. The preference function for the usual criterion is defined as (Brans et al., 1984):

    P(a, b) = { 0  if d ≤ 0
                1  if d > 0 }        [3.3]

The preference function is defined for the comparison of two alternatives a, b ∈ A, where a ≠ b, based on criterion gj. The function is then given by Pj(a, b) = Pj[d] and indicates the degree to which alternative a is preferred to alternative b (a over b). In this function, d is the distance between the performances of the two alternatives: d = gj(a) − gj(b).
3.3.1.2 CONSTRUCTION OF GENERALIZED CRITERIA FOR THE BPRR CASE
In the case of BPRR, we are only interested in whether a certain unit or sub unit outranks another unit or sub unit. The extent of the outranking and cases of incomparability are not of interest for this thesis research; the addition of extents or preference thresholds could provide possibilities for further research. As a result, the following preference function can be used:

    Pj(a, b) = Pj[gj(a) − gj(b)] = { 0  if gj(a) − gj(b) ≤ 0
                                     1  if gj(a) − gj(b) > 0 }        [3.4]
3.3.1.3 DETERMINATION OF A RANKING RELATION
Now that the generalized criterion and the corresponding preference function are defined, the outranking relation between two alternatives can be constructed, after which the alternatives can be evaluated based on the constructed preferences. The PROMETHEE method provides a preference index based on the preference function described in the previous paragraph. Considering all criteria, π(a, b) describes the preference index over all n criteria:

    π(a, b) = Σ_{j=1..n} wj · Pj(a, b)        [3.5]

Note that in this equation the outcomes of the preference function are multiplied by the weight factors derived using the AHP method.
3.3.1.4 DETERMINATION OF A RANKING RELATION FOR THE BPRR CASE
The calculation of the preference index defined by formula [3.5] is based on the input of two alternatives considering all related criteria. An example of this calculation is given below, in which alternatives a1 and a2 are compared on the six criteria. Note that alternative a1 is preferred over alternative a2 on every criterion except criterion five.

    π(a1, a2) = Σ_{j=1..6} Pj[gj(a1) − gj(a2)] · 1/6
              = P1[g1(a1) − g1(a2)] · 1/6 + P2[g2(a1) − g2(a2)] · 1/6 + P3[g3(a1) − g3(a2)] · 1/6
                + P4[g4(a1) − g4(a2)] · 1/6 + P5[g5(a1) − g5(a2)] · 1/6 + P6[g6(a1) − g6(a2)] · 1/6
              = P1[4,27% − 2,00%] · 1/6 + P2[16,33% − 0%] · 1/6 + P3[$15 − $4] · 1/6
                + P4[22 − 1] · 1/6 + P5[35,04% − 42,31%] · 1/6 + P6[4 − 0] · 1/6
              = 1 · 1/6 + 1 · 1/6 + 1 · 1/6 + 1 · 1/6 + 0 · 1/6 + 1 · 1/6 = 5/6        [3.5]
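This worked example can be reproduced with a short sketch of formulas [3.4] and [3.5], using the fictitious Table 16 figures for units 1 and 2:

```python
# Usual-criterion preference function [3.4] and preference index [3.5],
# applied to the (fictitious) performances of units 1 and 2 from Table 16.

def preference(d):
    """P_j[d] for the usual criterion: 1 if d > 0, else 0."""
    return 1.0 if d > 0 else 0.0

def pref_index(a, b, weights):
    """pi(a, b): weighted sum of pairwise preferences over all criteria."""
    return sum(w * preference(ga - gb) for w, ga, gb in zip(weights, a, b))

# Criteria order: EDC %, unit OSA, production loss cost, maintenance cost,
# planned/unplanned ratio, failure frequency (larger value = larger gap).
unit1 = [4.27, 16.33, 15.0, 22.0, 35.04, 4]
unit2 = [2.00,  0.00,  4.0,  1.0, 42.31, 0]
weights = [1 / 6] * 6

# Unit 1 is preferred on five of the six criteria (not on the
# planned/unplanned ratio), so pi(a1, a2) = 5/6.
print(pref_index(unit1, unit2, weights))  # → 0.8333...
```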
3.3.2 EVALUATE THE ALTERNATIVES
Once all preference indices are calculated for each pair of alternatives from A, the outranking flows from the PROMETHEE method can be defined. The net outranking flow is mathematically defined by:

    φ(a) = φ+(a) − φ−(a)        [3.6]
In this equation φ+(a) represents the positive outranking flow and φ−(a) represents the negative outranking flow. These two outranking flows are defined by [3.7] and [3.8]:

    φ+(a) = 1/(m − 1) · Σ_{b ∈ A\{a}} π(a, b)        [3.7]

    φ−(a) = 1/(m − 1) · Σ_{b ∈ A\{a}} π(b, a)        [3.8]
Note that in [3.7] and [3.8] the sums of preference indices are multiplied by 1/(m − 1), where m − 1 is the number of alternatives each alternative is compared against. The net outranking flow, defined on the range −1 ≤ φ(a) ≤ 1, is then used to assess which alternative is preferred over the other:

    a P b (a outranks b)            if φ(a) > φ(b)
    a I b (a is indifferent to b)   if φ(a) = φ(b)        [3.9]
As a final result, another column can be added to Table 19, forming Table 23.
Table 23 Results of adding net outranking flows to the MCDA

Criterion   g1       ...   gn       φ(a)
Weight      w1       ...   wn
a1          g1(a1)   ...   gn(a1)   φ(a1)
...         ...      ...   ...      ...
am          g1(am)   ...   gn(am)   φ(am)
3.3.2.1 EVALUATE UNITS FOR THE BPRR CASE
As described in the previous section, the outranking flows need to be calculated from a positive and a negative flow. In the formula below, an example calculation for the BPRR case is given for alternative a1:

    φ+(a1) = 1/(22 − 1) · Σ_{b ∈ A\{a1}} π(a1, b)
           = 1/21 · (π(a1, a2) + π(a1, a3) + ... + π(a1, a21) + π(a1, a22))
           = 1/21 · (5/6 + 3/6 + ... + 2/6 + 4/6) = 0,8667        [3.7]
The positive outranking flow calculated for alternative a1 is 0,8667. Note that this value represents the degree to which alternative a1 outperforms the other 21 alternatives on all six criteria. In a similar way the negative flow can be calculated for alternative a1:

    φ−(a1) = 1/(22 − 1) · Σ_{b ∈ A\{a1}} π(b, a1) = 0,1333        [3.8]

Now that both flows are calculated, the net outranking flow for alternative a1 follows from [3.6]:

    φ(a1) = φ+(a1) − φ−(a1) = 0,8667 − 0,1333 = 0,7333        [3.6]
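The flow calculations of [3.6] through [3.8] can be sketched end to end. The subset below uses only the first three units of Table 16, so the resulting flows differ from the 22-unit values of the thesis, but the structural properties hold: each flow lies in [−1, 1] and the net flows sum to zero.

```python
# PROMETHEE II outranking flows (formulas [3.6]-[3.8]) for a three-unit
# subset of Table 16, with the usual criterion and equal weights.

def preference(d):
    return 1.0 if d > 0 else 0.0

def pref_index(a, b, weights):
    return sum(w * preference(ga - gb) for w, ga, gb in zip(weights, a, b))

def net_flows(perf, weights):
    """phi(a) = phi_plus(a) - phi_minus(a) for every alternative."""
    m = len(perf)
    flows = []
    for i, a in enumerate(perf):
        plus = sum(pref_index(a, b, weights)
                   for j, b in enumerate(perf) if j != i) / (m - 1)
        minus = sum(pref_index(b, a, weights)
                    for j, b in enumerate(perf) if j != i) / (m - 1)
        flows.append(plus - minus)
    return flows

units = [
    [4.27, 16.33, 15.0, 22.0, 35.04, 4],   # unit 1
    [2.00,  0.00,  4.0,  1.0, 42.31, 0],   # unit 2
    [12.83, 4.97, 17.0, 20.0, 46.49, 0],   # unit 3
]
phi = net_flows(units, [1 / 6] * 6)
print(phi)  # three net flows summing to (approximately) zero
```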
These calculations are done for all 22 alternatives and added to Table 16, resulting in the table below. Please note that only the first three rows of the complete table are given.
Table 24 First three alternatives with added net outranking flows for the BPRR case

Criterion   g1        g2        g3        g4       g5        g6    φ(a)
Weight      1/6       1/6       1/6       1/6      1/6       1/6
1           4,27%     16,33%    $15,00    22,00    35,04%    4     0,7333
2           2,00%     0,00%     $ 4,00     1,00    42,31%    0     −0,4857
3           12,83%    4,97%     $17,00    20,00    46,49%    0     0,4857
3.4 DETERMINE THE BEST DECISION
This last step of the MCDA procedure shows the result in terms of the most preferred alternative based on the outranking results. The magnitude of the net outranking flow is used to determine the best alternative, i.e. the alternative with the highest net outranking flow is the best. To do so, the net outranking flows are ranked by value, largest on top, followed by the second largest, and so on. Assuming that the net outranking flow of alternative a is higher than that of b, Table 25 is derived.
Table 25 Results of adding the net outranking flow to the MCDA

Criterion   g1      ...   gn      φ
Weight      w1      ...   wn
a           g1(a)   ...   gn(a)   φ(a)
b           g1(b)   ...   gn(b)   φ(b)
3.4.1 SELECT THE UNIT WITH THE LARGEST PERFORMANCE GAP FOR BPRR
Sorting the net outranking flows from Table 24 in descending order yields Table 26. Based on discussions with experts at BPRR, it was concluded that for each of the two assets, the two major clusters of units at BPRR, the highest-ranked unit is selected, in order to perform the same analysis for their corresponding sub units.
Table 26 Top three units with the largest performance gap for the BPRR case

Unit   g1        g2        g3        g4       g5        g6    φ(a)
(weights 1/6 for every criterion)
1      4,27%     16,33%    $15,00    22,00    35,04%    4     0,7333
6      6,50%     5,84%     $21,00     9,00    43,75%    3     0,5905
3      12,83%    4,97%     $17,00    20,00    46,49%    0     0,4857
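The descending sort that produces Table 26 can be sketched directly from the net outranking flows. The flow values below are the three published in Table 26; the remaining units are omitted:

```python
# Rank alternatives by net outranking flow, largest first, as in Table 26.
flows = {"unit 1": 0.7333, "unit 6": 0.5905, "unit 3": 0.4857}
ranking = sorted(flows.items(), key=lambda kv: kv[1], reverse=True)
for unit, phi in ranking:
    print(unit, phi)
# unit 1 has the largest performance gap and is ranked first
```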
Based on a combination of expert opinions from two reliability engineers, a maintenance engineer and the reliability program manager, together with a more extensive data analysis, this research will focus on alternative three for the following research steps and questions. The argument for this selection is based on a more elaborate performance analysis: the first two alternatives had specific problems during the time frame used for these example calculations, whereas alternative three was found to be more prominent when looking at multiple years.
3.4.2 SELECT THE SUB UNIT WITH THE LARGEST PERFORMANCE GAP FOR BPRR

Following from the selected unit within BPRR, the complete MCDA procedure is repeated to select the sub unit with the largest performance gap. However, the decision problem has to be adjusted for this subsequent selection based on the data availability of the selected criteria.
Table 27 Criteria used for sub unit selection
Criterion | Unit of measure | Number label
Maintenance cost | [] | 4
Planned / Unplanned ratio | [%] | 5
Failure frequency | [#] | 6
Given the adjustment of the decision problem at hand, the number of criteria used for sub unit selection is reduced to three. Based on this reduction, the weights of the criteria as defined in step two of the MCDA method need to be adjusted as well to 1/3 instead of the previous weights of 1/6.
Figure 14 Overview of sub unit ranking (net outranking flow per sub-unit alternative)
Given the increasing number of alternatives for the selection of sub units, a complete overview of the problem description is omitted. Figure 14 provides the graphical overview of the ranking; please note that only the positive part of the ranking (net outranking flow > 0) is shown due to the large number of alternatives for unit 3.
Table 28 Selected sub unit for further analysis
Alternative | Type | Number
3.1 | H | 2101
3.2 | E | 2102
3.3 | P | 2101
3.4 | P | 2106
3.5 | TP | 2103
3.6 | E | 2121
3.7 | E | 2112
Based on a combination of expert opinions from two reliability engineers, a maintenance engineer and the reliability program manager, together with a more extensive data analysis, this research will focus on alternative three (crude feed pump 2101) for the following research question. The argument for this selection is that the top two sub units rank high due to process-related problems. Although resulting in a performance gap, these unplanned maintenance actions are considered part of the operational process. Moreover, given that this research is aimed at equipment performance in relation to maintenance rather than operational planning, alternative three is selected.
3.5 VALIDATION AND VERIFICATION OF EQUIPMENT SELECTION

Decision makers within BPRR will be using information obtained from the results provided by this model. As stated by Sargent (2013), individuals affected by decisions based on model results might be concerned with the correctness of those results. Therefore the verification and validation of the developed model have to be tested. Firstly, the two remaining conditions for criteria selection are tested by analysing criteria weights. Secondly, the outcome and the selection procedure itself are verified and validated.

3.5.1 SENSITIVITY ANALYSIS OF EQUIPMENT SELECTION

In section 3.1.1.4 two criteria conditions were left unchecked. To determine whether the selected criteria meet these two conditions, non-redundancy and minimum size, a sensitivity analysis is performed on the criteria weights and on the omission of criteria. The aim is to check whether certain criteria can be omitted from the selection procedure without changing the results of the selection.
To do so, seven different situations are tested, changing the number of criteria used as well as the criteria weights and even the input data. A complete overview of the situation descriptions is given in Table 29. The first step was to change criteria weights (difference between situation one and situation two), followed by the omission of a criterion (difference between situations two and three) and lastly by the omission of yet another criterion (difference between situations three and four).

Table 29 Description of the different situations tested

Sit. 1: Initial situation, all six criteria are accounted for with the same criteria weights
Sit. 2: Equal weights on a higher hierarchy, based on indicator category (cost, process and equipment)
Sit. 3: EDC% is omitted from the analysis, all other criteria are given the same weights
Sit. 4: Both EDC% and Pl. / Unpl. ratio are omitted, all other criteria are given equal weights
Sit. 5: EDC% and OSA are used as one, equal weights for all criteria on the lowest hierarchy
Sit. 6: EDC% and OSA are used as one, equal weights for all criteria on the higher hierarchy
Sit. 7: EDC% and OSA are used as one, Pl. / Unpl. ratio is omitted, other criteria have equal weights
The first transition is based on the fact that criteria weights are divided equally on a certain hierarchy. Initially (situation 1) this is done on the lowest hierarchy; changing the equal weights to a higher hierarchy (recall Figure 13) results in situation two. The second change is based on the operational definitions of two of the criteria: on a unit level these criteria should be used separately, but on refinery level they might be combined into one criterion representing refinery OSA. The last step is to omit the only leading indicator, based on the suggestion of the reliability engineer. A corresponding overview of criteria weights for each situation is given in Table 30.
Table 30 Criteria weight corresponding to the situations tested (criterion column headers illegible in the source)

Situation 1: 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6
Situation 2: 1/9 | 1/9 | 1/6 | 1/6 | 1/3 | 1/9
Situation 3: — | 1/5 | 1/5 | 1/5 | 1/5 | 1/5
Situation 4: — | 1/4 | 1/4 | 1/4 | — | 1/4
The results of this sensitivity analysis are given in Table 31, presenting the resulting ranks of the units with the largest performance gap. As can be concluded from this table, the outcome of the selection procedure while altering criteria weights and omitting two criteria is considered robust. In three out of four situations the same unit is presented as the best decision and considered to have the largest performance gap. More interestingly, all four situations result in the same runner-up. Even when examining the three units with the largest performance gap, all four situations result in the same decision for equipment selection.
Table 31 Results of weight sensitivity analysis over four situations
Situation                       | 1 | 2 | 3  | 4
Largest performance gap         | 1 | 3 | 1  | 1
Second largest performance gap  | 6 | 6 | 6  | 6
Third largest performance gap   | 3 | 1 | 3  | 4
Fourth largest performance gap  | 4 | 8 | 10 | 3
Fifth largest performance gap   | 8 | 10 | 14 | 21
From this analysis it can be concluded that in order to meet the conditions set by Keeney & Raiffa (1976) and further simplify the selection procedure, two criteria could be omitted from the analysis when the remaining criteria are presented with equal criteria weights (situation four). This also provides an initial confirmation that the reduced selection procedure for sub unit selection is valid.
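The omission-and-reweighting check above can be mimicked as follows. The scores and the weighted-sum scoring are illustrative stand-ins (the thesis uses PROMETHEE II flows); the point is only the mechanics of dropping a criterion and renormalizing the remaining weights to equal shares.

```python
def rank(scores, weights):
    # Rank alternatives by a simple weighted sum (illustrative stand-in
    # for the net outranking flow used in the thesis).
    totals = {a: sum(w * x for w, x in zip(weights, xs)) for a, xs in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

scores = {"U1": [0.9, 0.8, 0.7], "U2": [0.4, 0.9, 0.2], "U3": [0.2, 0.1, 0.3]}

full_ranking = rank(scores, [1 / 3] * 3)  # all criteria, equal weights
# Omit the third criterion and renormalize to equal weights (1/2 each),
# analogous to situations three and four of the sensitivity analysis.
reduced_ranking = rank({a: xs[:2] for a, xs in scores.items()}, [1 / 2] * 2)

stable = full_ranking[0] == reduced_ranking[0]  # robustness check on the top unit
```

If the top-ranked unit is unchanged after omission, the dropped criterion was redundant for the selection, which is the conclusion drawn for situation four.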
3.5.2 VERIFICATION OF EQUIPMENT SELECTION

Verification is defined here as ensuring that the developed decision support tool and its implementation are correct. Given the acknowledgement and confirmation of different experts within BPRR, the procedure developed for this research question was found to provide a ranking of units as well as sub units by performance gap, thereby confirming model verification.
3.5.3 VALIDATION OF EQUIPMENT SELECTION

Based on the definitions provided in literature (Kirk & Miller, 1986; Finlay & Wilson, 1997; Sargent, 2013), validity is used to measure the goodness of the output produced by the developed model. Different forms of validity exist and not all of them are relevant for this case. For the decision support tool, data validity, conceptual validity and face validity are tested.
The data validation is done as defined by Sargent (2013). Before the data was used, the input was checked for systematic bias. During this check, some of the input data from the OSA database was omitted due to extreme values or outliers. These outliers included higher calendar time loss than possible within one month (for example, 31 days of OSA loss in a month with only 30 days). This covers the accuracy of the input data. The precision of the input data was determined with a simple test: the exactitude for a criterion, for example the number of decimal places for the OSA criterion, was tested by trial and error to find a value at which no changes occurred in the initial results based on the most extensive input.
The results from the model were checked by three BPRR experts (maintenance head, maintenance engineer and reliability engineer) and, based on their confirmation, the system was found to be conceptually valid. The generated ranking of all relevant equipment was found to be logical and presents no clearly illogical outcomes.
As mentioned with the conceptual validity, experts were consulted on the results of the model to determine its added value. Moreover, during a session within BPRR a bigger audience was presented with the results, and based on their input more applications for the model were found. A comparison with the current asset prioritization list showed that most of the "top" equipment with the largest performance gap found with this model is already incorporated in that list. This leads to two conclusions: first, the face validity of the model is confirmed by the fact that the results seem reasonable; second, current BPRR processes seem to be accurate in allocating resources to the right equipment, although not complete.
3.6 CONCLUSION EQUIPMENT SELECTION

The aggregate MCDA procedure, based on three different general MCDA procedures from literature, consists of six steps spread out over four phases. It was found that the structuring of the problem can best be done using the Analytical Hierarchy Process suggested by Saaty (1990). This method was chosen for its ability to establish weight factors for criteria based on pairwise comparison and its ability to present the decision problem in a hierarchy, allowing an extra check of whether all criteria found in the previous chapter should be incorporated into the decision problem.
In the second MCDA phase the seven guidelines of Guitouni & Martel (1998) were used to determine which evaluation model is most appropriate for the decision problem at hand. Based on the input information, the desired outranking outcome and the degree of compensation, the PROMETHEE II method was selected to perform steps four through six. During these steps, preferences are constructed based on the distance in performance between two alternatives. The alternatives are then evaluated based on their outranking flows, and the alternative with the highest net outranking flow is selected as the best alternative.
Table 32 Overview of aggregated MCDA procedure

MCDA phase                 | MCDA step                                  | Method       | Result
1. Identify the problem    | 1. Identify the decision problem           | AHP          | General MCDA description
                           | 2. Structuring the decision problem        | AHP          | Hierarchy and criteria weights
2. Select evaluation model | 3. Select the appropriate evaluation model | 7 Guidelines | Outranking method
3. Determine preferences   | 4. Construct the preferences               | PROMETHEE II | Performance gap determination
                           | 5. Evaluate the alternatives               | PROMETHEE II | Outranking table of alternatives
4. Determine best decision | 6. Determine the best alternatives         | PROMETHEE II | Selection of best alternative
Table 32 provides a clear overview of all the steps performed in the MCDA procedure and the methods used to perform them. The main results from this procedure are described by Figure 12, representing the hierarchical structure of the decision problem, and Table 23, representing the evaluation of all possible alternatives in order to select the equipment with the largest performance gap. After execution of the selection procedure, alternative three was selected.
Table 33 Selected unit with the largest performance gap (criterion labels and weights illegible in the source)

Selected unit: 12,83% | 0,6444% | $ 20 | 20 | 23,4184% | 14 | φ = 0,5873
After the selection of the appropriate unit, the same procedure was used to select the sub unit with the largest performance gap. To do so, the procedure defined for the selection of the unit with the largest performance gap had to be adjusted considering the absence of data related to the criteria of equipment importance (EDC sub unit %, availability loss and production loss costs). As a result crude pump P2101 was selected as input for maintenance policy evaluation in the third and last research question. The complete procedure was modelled in an Excel based tool (please refer to Appendix L, User interface of decision support tool).
Table 34 Selected sub unit with the largest performance gap
Selected sub unit (alternative 3.3, P2101): Maintenance cost 143 | Planned / Unplanned ratio 56,89% | Failure frequency 2 | φ = 0,3896
RESEARCH QUESTION 3
Abstract
In this part of the master thesis report the third research question, considering the redesign of the maintenance policies for the selected equipment, is discussed. As a result of this research question a mathematical model is developed to determine the optimal replacement policy.
Keywords Age based policy, Block policy, Clustering activities, Corrective maintenance, Greedy heuristic, Maintenance policy, Preventive maintenance, Run to failure, Total maintenance cost.
Deliverables question I:
- Framework consisting of important criteria for the measurement of equipment performance in relation to maintenance

Deliverables question II:
- Procedure for the selection of the best unit and sub unit for further analysis
- Excel based decision support tool for the selection of the best unit and sub unit based on the defined input data

Deliverables question III:
- Procedure to assess whether maintenance policy evaluation is useful
- Procedure for the evaluation and adjustment of current maintenance policies
- Excel based decision support tool for choosing the best maintenance policy

Research question I (Chapter 2):
1. How should equipment performance related to maintenance be measured?
1.1 How is equipment performance, related to maintenance, measured in literature?
1.2 How is equipment performance, related to maintenance, measured by BPRR?
1.3 Which criteria should be used to measure equipment performance related to maintenance?

Research question II (Chapter 3):
2. How can equipment with the largest performance gap be selected?
2.1 How can the performance of equipment be compared to each other?
2.2 How can units with the largest performance gap be selected?
2.3 How can sub units with the largest performance gap be selected?

Research question III (Chapter 4):
3. How can equipment performance be improved by adjusting current maintenance policies?
3.1 How can it be determined whether evaluating the current maintenance policy is useful?
3.2 How is the current maintenance policy compared to other possible policies?
3.3 How is the best maintenance policy selected?
"Any fool can know; The point is to understand." ALBERT EINSTEIN (THEORETICAL PHYSICIST, 18791955)
4. MAINTENANCE POLICY EVALUATION

In the previous chapter, the selection of equipment with the largest performance gap was discussed. The result of this equipment selection was alternative three of the units and alternative three of the sub units. This sub unit alternative is defined as pump 2101 (P2101) and is used as an example to show how the maintenance policy evaluation model can be used to determine the right maintenance policy based on expected maintenance costs over a certain period of time.
4.1 ASSESSING IMPROVEMENT AREA FOR SELECTED EQUIPMENT
First a small root cause analysis is performed to provide insight into why these alternatives were selected. To do so, a quick downtime analysis was performed considering the availability data and the different production loss types. Figure 15 shows the result of this downtime analysis, depicting the different types of downtime. As can be seen from this figure, most of the downtime is related to commercial margin and is therefore caused by market situations. This type of downtime was already considered outside of the scope in chapter 2 and is therefore not further analysed. Given the characteristics of the selected sub unit P2101, unplanned slowdown caused by equipment and unplanned slowdown caused by pumps are of interest. Given that these two types form the second major part of unplanned downtime, it makes sense to proceed with the selected sub unit.
Figure 15 Downtime analysis based on OSA data for the CDU3 (downtime percentage per planned and unplanned loss type, dominated by planned commercial margin losses)
Figure 16 Related maintenance cost for sub unit P2101 (monthly corrective versus preventive cost, January to December 2014)
A quick analysis of the maintenance cost related to unit 3 reveals that most of these costs consist of corrective rather than preventive maintenance cost (see Figure 16). A similar figure resulted from the analysis of P2101. This indicates that proceeding with these alternatives is justified for improving equipment performance related to maintenance.
Figure 17 Decision tree to determine if the maintenance policy should be adjusted. Starting from the equipment with the largest performance gap from the selection procedure: if the downtime is not caused by equipment failure, the maintenance policy analysis is stopped. Otherwise, if a preventive maintenance (PM) program is present, it is evaluated; if no PM is present and the corrective maintenance cost exceeds the preventive maintenance cost, an (adjusted) PM program for the equipment is introduced.
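The decision tree of Figure 17 can be written as a small function. The branch order and flag names here are assumptions read from the figure; the thesis presents this logic only graphically.

```python
def maintenance_policy_decision(downtime_by_failure, pm_present, cm_cost, pm_cost):
    """Return the next step for a selected piece of equipment,
    following one reading of the Figure 17 decision tree."""
    if not downtime_by_failure:
        # Downtime is not caused by equipment failure: nothing for
        # maintenance policy analysis to improve.
        return "stop maintenance policy analysis"
    if pm_present:
        # An existing preventive maintenance program is evaluated.
        return "evaluate existing PM"
    if cm_cost > pm_cost:
        # Corrective cost dominates: introducing (adjusted) PM pays off.
        return "introduce (adjusted) PM for equipment"
    return "stop maintenance policy analysis"

# Example: a P2101-like case, failure-driven downtime and no PM in place
decision = maintenance_policy_decision(True, False, cm_cost=100.0, pm_cost=40.0)
```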
4.2 MAINTENANCE POLICY EVALUATION, CONCEPTUALIZATION

As defined in the research methodology, first a conceptual model has to be formed for the evaluation and selection of the best maintenance policy. Based on theoretical concepts, the variables included in the mathematical model and the relations between these variables need to be defined. For the BPRR case, a clear description is needed of the replacement decisions for equipment. To do so, this section provides information on replacement options, failure distributions and maintenance policies considered for the development of the mathematical model.
4.2.1 RELATION BETWEEN REPLACEMENT BEFORE AND AFTER FAILURE

As Jardine & Tsang (2013) state, the biggest problem for maintenance and reliability engineers is the uncertainty in predicting when equipment will fail. Equipment might have different failure modes and multiple failure causes. In this research, only two equipment states are assumed, described as good or failed. This implies that inspection of equipment is not considered, based on the fact that the delay time is close to zero and therefore no possibility exists to detect degradation.
Figure 18 Delay time (Christer, 1999): a timeline from (as good as) new, via the point where degradation becomes detectable, to failure; the delay time is the degraded interval between these last two points.
With delay time, failures would follow a degradation process rather than the immediate failure assumed in this research. This degradation process can be simplified by defining transitions between the states good, degraded and failed (see Figure 18). In the degraded state, a failure can be prevented by replacing the component before it reaches the failed state. The delay time can then be described as the time between the first possibility to notice a degraded state and the time the equipment actually fails (Christer, 1999). Given that equipment is either good or failed, a relation between the costs of the two possible states can be defined.
Figure 19 Graphical determination of the optimal maintenance policy (Jardine & Tsang, 2013): cost per unit time versus maintenance policy (frequency of replacement), showing the maintenance policy cost, the production loss costs and the total cost, whose minimum marks the optimal policy.

As mentioned by Jardine & Tsang (2013), a direct relation exists between the input of the maintenance policy (maintenance cost) and its output (reduction in production loss costs caused by downtime). Figure 19 depicts this relation; an optimal policy is determined by looking at the minimal total cost, consisting of maintenance cost (related to replacement before failure) and production loss costs (related to replacement after failure). The optimal maintenance policy in Figure 19 is based only on minimizing total cost; in practice other factors (for example safety or availability targets) might be considered as well, at an acceptable financial impact (Jardine & Tsang, 2013).
4.2.2 TIME TO FAILURE DISTRIBUTION

Given the two possible states of equipment, the transition between states depends on the expected time to failure of the equipment. This time to failure is also described in literature as the lifetime of equipment. As Lewis (1996) describes, the expected time to failure is not a single deterministic value but rather a random variable following a certain probability distribution. This probability distribution of the time to failure is therefore defined as the failure distribution. In general, the failure distribution is described by a probability density function (PDF) f(t), giving the probability that equipment fails at a certain point in time t. The probability that equipment fails at or before a certain point in time is then calculated with the cumulative distribution function (CDF) F(t). The relation between these two functions is given by (Montgomery & Runger, 2006):

F(t) = P(T ≤ t) = ∫₀ᵗ f(t') dt'    [4.1]
4.2.2.1 FAILURE RATE

The behaviour of equipment failures through time can prove to be quite revealing. Based on both the failure PDF and the accompanying CDF, the failure rate or hazard rate is defined by (Lewis, 1996):

h(t) = f(t) / (1 − F(t)), with t ≥ 0    [4.2]
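As a quick numerical check of [4.2], the sketch below evaluates h(t) for an exponential time to failure; the rate value 0,2 is an arbitrary illustrative choice.

```python
import math

lam = 0.2  # illustrative failure rate

f = lambda t: lam * math.exp(-lam * t)  # PDF
F = lambda t: 1 - math.exp(-lam * t)    # CDF
h = lambda t: f(t) / (1 - F(t))         # hazard rate, equation [4.2]

# For the exponential distribution h(t) reduces to the constant lam,
# which is the constant failure rate (CFR) case discussed below.
```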
This failure rate is usually described in literature by the general characteristic of a bathtub curve (see Figure 20). Based on this figure, three different forms of hazard or failure rate are distinguished: increasing (IFR), decreasing (DFR) and constant (CFR). Most mechanical equipment is characterized by an initial wear-in period in which the failure rate is decreasing, followed by a span of increasing failure rate (Lewis, 1996). Although components might exhibit moderate early failures or aging effects leading to increasing failure rates, the constant failure rate is often a quite adequate approximation of reality (Lewis, 1996; Ben-Daya et al., 2000).
Figure 20 The bathtub curve (Lewis, 1996): hazard rate h(t) over time, with successive DFR, CFR and IFR regions.

Based on the characteristics of this failure rate, preventive replacement actions need to satisfy two necessary conditions in order to be useful (Jardine & Tsang, 2013):
- The total cost of replacement must be greater after failure than before failure. This may be caused by production loss costs due to the unplanned nature of the replacement action;
- The failure rate of the component must be increasing. In the case of CFR, replacing a component preventively is not logical, because it does not change the probability that the component fails in the next period. Similarly, under DFR, replacing the component preventively only increases the probability that the component fails in the following period.

4.2.2.2 DISTRIBUTION TYPES

The most used failure distributions in reliability engineering are the normal distribution, the Weibull distribution and the exponential distribution (Mann et al., 1974; Lawless, 1982; Lewis, 1996). For this research the Weibull distribution is used to model the failure distribution. As explained in the following paragraph, this type of distribution takes multiple characteristic probability density function shapes depending on the values of its parameters. An example is the exponential distribution, which is obtained when one of the two Weibull parameters is set equal to one.
Weibull Distribution

For this research the two-parameter Weibull distribution is considered. This type of distribution is defined by a shape parameter β and a scale parameter θ. The CDF of this distribution is given by [4.3] (Lewis, 1996; Dodson, 2006; Rinne, 2008):

F(t) = 1 − exp[−(t/θ)^β], with t ≥ 0    [4.3]

The expected value (E[T], also denoted by μ) and the variance (Var[T], also denoted by σ²) are given by:

E[T] = μ = θ Γ(1 + 1/β)  and  Var[T] = σ² = θ² [Γ(1 + 2/β) − Γ²(1 + 1/β)]    [4.4]
The Weibull distribution is often used in reliability engineering because it can describe all three failure rate forms and behaves like other distributions when the shape parameter is changed (Dodson, 2006; Rinne, 2008). Table 35 gives an overview of these behaviours based on the range of the shape parameter.
Table 35 Characteristic behaviour of the Weibull distribution
Range of shape parameter β | Corresponding distribution behaviour | Corresponding failure rate
0 < β < 1   | Asymptotic shape             | Decreasing failure rate
β = 1       | Exponential asymptotic shape | Constant failure rate
1 < β < 3.6 | Lognormal shape              | Increasing failure rate
β > 3.6     | Normal shape                 | Increasing failure rate
A corresponding graphical example of these different failure distribution characteristics and the corresponding failure rate characteristics is given in Figure 21.

Figure 21 Characteristic behaviour of the Weibull distribution: probability density (left) and failure rate (right) over time for β = 0,8; 1; 2,5 and 4, each with θ = 60.
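The Weibull quantities in [4.3] and [4.4] can be evaluated with the standard gamma function; the sketch below uses θ = 60, one of the illustrative values from Figure 21. With β = 1 the expressions reduce to the exponential case, as noted in the text.

```python
import math

def weibull_cdf(t, beta, theta):
    # Equation [4.3]
    return 1.0 - math.exp(-((t / theta) ** beta))

def weibull_mean(beta, theta):
    # E[T] = theta * Gamma(1 + 1/beta), equation [4.4]
    return theta * math.gamma(1 + 1 / beta)

def weibull_var(beta, theta):
    # Var[T] = theta^2 * (Gamma(1 + 2/beta) - Gamma(1 + 1/beta)^2)
    g1 = math.gamma(1 + 1 / beta)
    g2 = math.gamma(1 + 2 / beta)
    return theta ** 2 * (g2 - g1 ** 2)

# beta = 1 reduces to the exponential distribution with rate 1/theta:
# mean theta, variance theta^2, and F(theta) = 1 - exp(-1)
```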
Exponential Distribution

The exponential distribution is, as described in the previous section, a special case of the Weibull distribution and has the unique characteristic of memorylessness. In other words, this type of distribution has a constant failure rate, independent of time. Also in contrast to the Weibull distribution, this distribution has only one parameter, λ. These characteristics make it the most widely used distribution in reliability engineering, representing random failures (Lewis, 1996). The corresponding CDF is a simplification of [4.3] and is given by:

F(t) = 1 − exp[−λt], with t ≥ 0    [4.5]

The corresponding expected value and variance are given by:

E[T] = μ = 1/λ  and  Var[T] = σ² = 1/λ²    [4.6]
Even though equipment may be subject to moderate early failures or aging effects, the constant failure rate approximation based on the exponential distribution is often quite adequate. The magnitude of early-failure effects is often limited by strict quality control, and similarly aging effects can be minimized by the right preventive maintenance (Lewis, 1996).
4.2.3 THE RENEWAL PROCESS

In this research two states of equipment are considered, good and failed. In the case of preventive maintenance the goal is to prevent equipment failure by either replacing a component with a new one or repairing the component to an as-good-as-new state. In contrast, if equipment has already failed, the failed component is replaced by a new component, returning it to the good state. These renewals can be described by the renewal process, which consists of estimating how many replacements will occur in a certain time interval. The concept of renewal theory is depicted in Figure 22.

Figure 22 Depiction of the renewal theory approach (Jardine & Tsang, 2013): the counting process N(t) over time, with S_r the time of the r-th renewal.
In this figure N(t) represents the number of failures in the interval (0, t) and S_r represents the time up to the r-th failure. The expected number of renewals in a certain time period, M(t), is calculated with the following formula, based on the failure probability density function and the corresponding cumulative distribution function (Lewis, 1996):

M(t) = E[N(t)] = F(t) + ∫₀ᵗ M(t − t') f(t') dt', with t ≥ 0    [4.7]
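Equation [4.7] can be solved numerically by discretizing the convolution. The sketch below uses an exponential time to failure with rate 0,5, for which the renewal function is exactly M(t) = λt, so the result can be checked; the grid size and rectangle rule are implementation choices, not from the thesis.

```python
import math

def renewal_function(F, f, t_max, n):
    """Approximate M(t_max) from [4.7] on an n-point time grid:
    M(t) = F(t) + integral_0^t M(t - x) f(x) dx."""
    dt = t_max / n
    M = [0.0] * (n + 1)
    for i in range(1, n + 1):
        t = i * dt
        # Rectangle-rule approximation of the convolution integral;
        # M at earlier grid points has already been computed.
        conv = sum(M[i - j] * f(j * dt) * dt for j in range(1, i + 1))
        M[i] = F(t) + conv
    return M[-1]

lam = 0.5
F = lambda t: 1 - math.exp(-lam * t)
f = lambda t: lam * math.exp(-lam * t)

m10 = renewal_function(F, f, t_max=10.0, n=1000)  # exact value is lam * 10 = 5
```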
4.2.4 MAINTENANCE POLICIES

A common approach to improving the availability of equipment is preventive replacement. Components subject to deterioration are, ideally, replaced before they fail. Thus it is necessary to identify components worth considering for preventive replacement. Subsequently, if a component is identified for preventive replacement, the optimal replacement strategy must be determined.
Table 36 Maintenance policies used by BPRR

Category | Type | Trigger | Description
Corrective | — | Run to failure | No preventive actions are executed for equipment
Preventive | Time based | Calendar time (Age based) | Based on a fixed time interval
Preventive | Time based | Key date (Block policy) | Based on a specific date, independent of equipment performance
Preventive | Performance based | Fixed measure threshold | Preventive maintenance actions are executed based on a fixed measure of performance (for example running time)
Preventive | Condition based | Equipment condition | Preventive maintenance actions are executed based on the monitored condition of equipment, through measurement or inspection; this can be done by continuous monitoring or by periodic inspection
A complete overview of possible maintenance policies and their descriptions based on different literature sources is provided in Appendix M, Complete maintenance policy overview. The maintenance policies used in this research are based on the policies used at BPRR, described in Table 36. From this table a sub selection was made, considering the availability of data at BPRR. This sub selection is used as input for the mathematical model at hand and consists of:
- Replace only on failure (run to failure policy);
- Constant interval (block policy);
- Lifespan of component (age based policy).
4.2.5 INPUT VARIABLES TAKEN INTO ACCOUNT

Based on the relations of all elements in the conceptual model, the following input variables are taken into account for the mathematical model:
- Time to failure distribution (based on estimates and historical data);
- Expected cost of renewal preceding failure;
- Expected cost of renewal after failure;
- Renewal interval (a legal time limit on the interval is taken into account);
- Time horizon (time period over which costs are considered).
In order to provide mathematical descriptions of all these input variables, the variables given in Table 37 are introduced.
Table 37 Variables taken into account for the selected maintenance policies (symbols for the cost and cycle variables are illegible in the source)

Description | Variable
Expected number of failures during interval (0, t), or renewal function (RF) | M(t) = E[N(t)]
Cumulative distribution function | F(t)
Probability density function | f(t)
Hazard function or failure rate function | h(t)
Total cost of replacement in interval (0, T) | C(T)
Cost of preventive maintenance | —
Cost of corrective maintenance | —
Cost of downtime | —
Cost of preventive maintenance action | —
Cost of corrective maintenance action | —
Cost of production loss | —
Mean time to failure (MTTF), mean of f(t) | —
Time between two consecutive renewals | —
Average total cost per time unit | —
(Expected) cycle cost | —
(Expected) cycle length | —
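The cycle-cost and cycle-length variables above combine into the standard textbook age-based policy cost rate, C(tp) = [cp·(1 − F(tp)) + cf·F(tp)] / ∫₀^tp (1 − F(x)) dx. The sketch below evaluates this for a Weibull time to failure; the parameter values (β = 2,5, θ = 60, cf/cp = 10) are purely illustrative, and the formula is the generic model rather than a transcript of the thesis's implementation.

```python
import math

def weibull_cdf(t, beta, theta):
    return 1.0 - math.exp(-((t / theta) ** beta))

def cost_rate(tp, cp, cf, beta, theta, n=2000):
    """Expected cost per unit time when replacing preventively at age tp:
    expected cycle cost divided by expected cycle length."""
    survive = lambda t: 1.0 - weibull_cdf(t, beta, theta)
    dx = tp / n
    cycle_length = sum(survive((j + 0.5) * dx) * dx for j in range(n))  # midpoint rule
    cycle_cost = cp * survive(tp) + cf * weibull_cdf(tp, beta, theta)
    return cycle_cost / cycle_length

# Scan candidate preventive ages for the cheapest policy
candidates = range(5, 101, 5)
best_tp = min(candidates, key=lambda tp: cost_rate(tp, cp=1.0, cf=10.0, beta=2.5, theta=60.0))
```

For an increasing failure rate and cf > cp the minimum lies at an interior age, reproducing the optimum of Figure 19; with β = 1 (constant failure rate) no interior optimum exists and run to failure is cheapest, matching the two conditions stated in section 4.2.2.1.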
4.3 MAINTENANCE POLICY EVALUATION, MODELLING

The second phase of the research model adopted for this research question consists of actually modelling the relations described in the conceptual model. For the BPRR case this means that the failure distribution needs to be modelled with all its different aspects and methods. The other two aspects of the conceptual model, the renewal process and the maintenance policy (cost), are modelled as well. To conclude, the restrictions of the developed model are presented as model assumptions.
4.3.1 FAILURE DISTRIBUTION MODELLING

As previously stated, the biggest problem with preventive maintenance is the uncertainty of equipment failure. In most cases, large data sets to conduct analyses and accurately estimate failures are lacking. However, the literature (Lewis, 1996; Ben-Daya et al., 2000) provides some procedures to find these failure distributions based on the gathered data. For this research three methods are considered. The first method, a qualitative approach, is probability plotting, used to gain initial insight into the collected data. Subsequently a quantitative approach is used to estimate the distribution parameters. Finally, three goodness-of-fit tests are used to determine how accurately the proposed estimations represent, or fit, the sample data. Before the estimations can be made and their accuracy tested, lifetime data needs to be collected.

4.3.1.1 LIFE TIME DATA

The most common way of obtaining data about the life length of equipment is to collect historical data based on the actual running times of the equipment, denoted in literature as field data. A second method of collecting the data is to run specially designed experiments to obtain a sample, denoted in literature as laboratory data (Rinne, 2008).
Next to the way of collecting life time data, different forms of life time data exist. Most data sets consist of complete, or uncensored, data, meaning that the value of each sample is known. Data considered to be incomplete is denoted as censored data, either based on incomplete tests (the tested equipment did not fail yet, or the starting point of its life time is missing) or due to preventive replacement (Lewis, 1996). Following from this distinction, different forms of censoring exist. For a complete overview of the different types of censoring refer to Appendix N, Different types of censoring.
Besides being complete or incomplete, the data can be grouped or ungrouped. In this setting grouped data is considered to be data for which the value of a sample is not known in detail, but assigned to a specified interval. For example, a certain pump is subject to failure, but rather than the month of failure, the time period (0-5 months) is used to assign the failure to. 4.3.1.2 PARAMETER ESTIMATION VIA PROBABILITY PLOTTING The first step in probability plotting is to provide an initial guess of the possible failure distribution based on the development of a histogram. Often the number of data points is too small to construct a histogram with enough resolution. These situations occur most often when large, expensive pieces of equipment must be tested to failure for each data point (Lewis, 1996). The calculations for the development of a histogram are provided in Appendix O, Calculations for histograms.
After an initial guess is made, the method of median rank statistics is used to determine an estimation for the cumulative distribution function. The first step of this method is to rank or sort the sampling data in ascending order. Secondly, the cumulative distribution function is approximated by (correcting for data sets of less than 15 to 20 data points):

F(t_i) = (i - 0,3) / (n + 0,4)   [4.8]

where i is the rank of an observation and n the total number of observations.
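As an illustration, the median-rank approximation can be computed directly from ranked sample data; the sketch below (plain Python, with an illustrative sample size of eleven) mirrors formula [4.8].

```python
def median_rank(i: int, n: int) -> float:
    """Approximate CDF value for the i-th (1-based) ordered failure
    out of n samples, using the median-rank correction of [4.8]."""
    return (i - 0.3) / (n + 0.4)

# Example: eleven ordered life time samples (as in the BPRR case)
n = 11
ranks = [median_rank(i, n) for i in range(1, n + 1)]
print([round(r, 3) for r in ranks])  # first value: 0.061
```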
This method is aimed at deriving a graphical estimate of how well the estimated cumulative distribution function (CDF) based on the data points fits the estimated distribution. Basically this means that the equation for the CDF is transformed into the form of a linear regression by [4.9] (Lewis, 1996).

y = a·x + b   [4.9]
This transformation is based on the least squares method, in which the mean of the squared differences between the data points and the estimated function, given by [4.10], is minimized.

S = (1/n) Σ_{i=1}^{n} (y_i - (a·x_i + b))^2   [4.10]
The last step is to plot the estimation against the data points to provide a visual goodness-of-fit test. Given this complete procedure of probability plotting, failure distributions can be fitted to data points even when limited data is available. A final remark is that r^2 can be used to determine whether the data points include any extreme values, which would result in a decrease of this measure. Please refer to Appendix P, Calculations for probability plotting for additional calculations used to estimate parameters using probability plotting.
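To make the procedure concrete, the sketch below fits an exponential distribution by linearizing its CDF (y = ln(1/(1 - F)) = λt) and regressing y on t through the origin. The sample values are the pooled BPRR seal lifetimes used later in this chapter, so this is an illustrative reconstruction rather than the thesis tool itself.

```python
import math

# Pooled pump seal lifetimes in months (from Table 40)
t = [3, 8, 10, 11, 25, 29, 29, 31, 58, 80, 115]
n = len(t)

# Median-rank CDF estimate [4.8], then linearization of the exponential CDF:
# F(t) = 1 - exp(-lam * t)  =>  ln(1/(1 - F)) = lam * t
F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
y = [math.log(1.0 / (1.0 - Fi)) for Fi in F]

# Least squares slope for a line through the origin: lam = sum(t*y) / sum(t^2)
lam = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
print(f"{lam:.4f}")  # close to the plotted estimate lambda = 0,024
```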
4.3.1.3 PARAMETER ESTIMATION VIA MAXIMUM LIKELIHOOD ESTIMATION
Besides the estimation approach of probability plotting, many statistical approaches exist to estimate distribution parameters. For this research, the maximum likelihood approach is considered. This approach is based on the establishment of a likelihood function L(θ), which is maximized by taking the derivative with respect to the parameters to be estimated and setting it equal to zero [4.12]. This method is defined as Maximum Likelihood Estimation (MLE) and is widely described in literature (Mann et al., 1974; Lawless, 1982; Deshpande & Purohit, 2005). In general this approach consists of two elements, the likelihood function [4.11] and the set of likelihood equations [4.12]:

L(θ) = Π_{i=1}^{n} f(t_i; θ),  with log-likelihood ln L(θ) = Σ_{i=1}^{n} ln f(t_i; θ)   [4.11]

∂ ln L(θ) / ∂θ = 0   [4.12]
When looking specifically at the distributions considered for this research, Rinne (2008) and Dodson (2006) provide good descriptions of MLE for the Weibull distribution. From the Weibull MLE, the exponential MLE can be derived as well, given their clear relation described earlier. The log-likelihood function for the two-parameter Weibull distribution, considering complete data, is given by:

ln L(β, η) = n ln(β) - n·β ln(η) + (β - 1) Σ_{i=1}^{n} ln(t_i) - Σ_{i=1}^{n} (t_i/η)^β   [4.13]
Taking the first derivative with respect to each of the parameters individually provides [4.14] and [4.15]. Estimations for both parameters are found by solving this set of equations. The first step is to solve [4.15] for β and then use the found estimation to estimate η from [4.14]. In the case of grouped data, other equations must be solved. Please refer to Appendix Q, Weibull MLE considering grouped data for these equations.

∂ ln L(β, η) / ∂η = -(n·β)/η + (β/η) Σ_{i=1}^{n} (t_i/η)^β = 0   [4.14]

∂ ln L(β, η) / ∂β = n/β - n ln(η) + Σ_{i=1}^{n} ln(t_i) - Σ_{i=1}^{n} (t_i/η)^β ln(t_i/η) = 0   [4.15]
In a similar way the maximum likelihood equation for the exponential distribution is given by:

∂ ln L(λ) / ∂λ = n/λ - Σ_{i=1}^{n} t_i = 0   [4.16]
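The likelihood equations [4.14]-[4.16] can be solved numerically. The sketch below uses the profile-likelihood form of [4.15] (β only) with simple bisection, then recovers η from [4.14] and the exponential rate from [4.16]. The data are the pooled BPRR lifetimes; treat this as an illustrative reimplementation, not the thesis' Excel module.

```python
import math

t = [3, 8, 10, 11, 25, 29, 29, 31, 58, 80, 115]  # lifetimes in months
n = len(t)

def profile_eq(beta: float) -> float:
    """Profile form of [4.15] after substituting eta from [4.14]:
    sum(t^b ln t)/sum(t^b) - 1/b - mean(ln t) = 0."""
    tb = [x ** beta for x in t]
    return (sum(a * math.log(x) for a, x in zip(tb, t)) / sum(tb)
            - 1.0 / beta - sum(math.log(x) for x in t) / n)

# Bisection for beta on a bracket where the profile equation changes sign
lo, hi = 0.1, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if profile_eq(lo) * profile_eq(mid) <= 0:
        hi = mid
    else:
        lo = mid
beta = 0.5 * (lo + hi)
eta = (sum(x ** beta for x in t) / n) ** (1.0 / beta)  # from [4.14]

lam = n / sum(t)  # exponential MLE from [4.16]
print(round(beta, 3), round(eta, 2), round(lam, 4))
```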
4.3.1.4 DETERMINE GOODNESS OF FIT FOR SPECIFIED PARAMETERS The use of graphical plotting procedures to test the goodness of fit of the data to the assumed distribution usually works well if the plotted data falls nearly perfectly onto a straight line. Since subjective judgement must be used to determine whether or not the plot seems to be a straight line, it is often difficult in less clear-cut cases to decide whether or not to reject the hypothesized distribution (Mann et al., 1974). The fundamental assumption, or H0, is here that the population being sampled is represented by the fitted distribution. The statistical test will then determine whether the assumption that the used data can actually be represented by the specified distribution holds or not (reject H0 and adopt H1) (Lewis, 1996).

H0: the sample data are represented by the specified distribution
H1: the sample data are not represented by the specified distribution

The cut-off value used to determine when to reject H0 is the significance level, mostly denoted by α, and the relation 100(1 - α)% describes the confidence of the estimation or specified distribution. The p-value is the probability, under H0, of finding a value at least as extreme as the one observed. For example, with a significance level of 0,05 there is a 5% chance of obtaining an observation or sample which does not fit the specified distribution even though H0 is true. When a smaller significance level is selected, stronger evidence is required before H0 is rejected (Montgomery & Runger, 2006).
For this research three different tests are used to determine the goodness-of-fit. For the probability plotting the r^2 test will be used, and for the MLE both the Mann-Scheuer-Fertig test and the Anderson-Darling test will be used. Two popular and often used tests, the Kolmogorov-Smirnov and chi-squared tests, are omitted based on critical remarks on these tests given small sample sizes, and on the proven better accuracy of the chosen tests (Littell et al., 1979; Lockhart et al., 1986; Dodson, 2006; Rinne, 2008).
Test for probability plotting
The r^2 method is used to test how accurately the estimated regression line is able to represent the data. This is done by calculating the coefficient of determination [4.17]. This measure is equal to one if all data points fall exactly on the estimated line. Following from this, r^2 should be as close to one as possible in order to find a good fit between the data and the estimated failure distribution.

r^2 = [Σ_{i=1}^{n} (x_i - x̄)(y_i - ȳ)]^2 / [Σ_{i=1}^{n} (x_i - x̄)^2 · Σ_{i=1}^{n} (y_i - ȳ)^2]   [4.17]
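A sketch of the coefficient-of-determination computation for the exponential probability plot (pooled BPRR data, median ranks as before); under these assumptions it comes close to the r^2 value reported later for the exponential fit.

```python
import math

t = [3, 8, 10, 11, 25, 29, 29, 31, 58, 80, 115]
n = len(t)
F = [(i - 0.3) / (n + 0.4) for i in range(1, n + 1)]
y = [math.log(1.0 / (1.0 - Fi)) for Fi in F]  # exponential linearization

# Coefficient of determination [4.17]
tbar, ybar = sum(t) / n, sum(y) / n
sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
sxx = sum((ti - tbar) ** 2 for ti in t)
syy = sum((yi - ybar) ** 2 for yi in y)
r2 = sxy ** 2 / (sxx * syy)
print(round(r2, 4))  # close to the 0,9763 reported for the exponential fit
```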
Mann-Scheuer-Fertig test
Leaps from the extreme-value distribution, that is, differences of adjacent ordered observations divided by their expectations, have distributions that are approximately exponential with expectations exactly equal to one, variances approximately equal to one and covariances essentially zero (Mann et al., 1974). Mann et al. (1973) observed that, therefore, these leaps, when multiplied by two, have approximately independent chi-square distributions with two degrees of freedom each.

S = Σ_{i=[n/2]+1}^{n-1} l_i / Σ_{i=1}^{n-1} l_i,  with l_i = [ln(t_(i+1)) - ln(t_(i))] / M_i   [4.18]

where M_i is the expected value of the i-th leap.
Here t_(i) represents the i-th ordered observation (t_1, t_2, ..., t_n being the first n failure times for a test consisting of n observations). The test is one-sided, and values of S greater than the critical value, provided by Mann et al. (1973), result in rejecting H0. The test is valid for complete data samples as well as right-censored samples (Mann et al., 1973). The performance of S compared with several other test statistics has been studied by different authors (Littell et al., 1979; Lockhart et al., 1986), who concluded that S has good power and is more accurate than the traditional chi-squared or Kolmogorov-Smirnov tests (Rinne, 2008).
Anderson-Darling test A goodness-of-fit test using a member of the quadratic class is usually more powerful than a test based on a member of the supremum class, such as the chi-squared or Kolmogorov-Smirnov tests. The Anderson-Darling test, A^2, is a member of this quadratic class and is based on the approach that weights are given to observations in the tails. This approach tends to detect alternatives for which more such observations will arise (Anderson, 1982; Rinne, 2008). The null hypothesis for this test is that the population follows the Weibull distribution with the specified parameters. Let t_(1), t_(2), ..., t_(n) be the ordered failure times for a test consisting of n observations. The A^2 test statistic is then given by:

A^2 = -Σ_{i=1}^{n} [(2i - 1)/n]·[ln F(t_(i)) + ln(1 - F(t_(n-i+1)))] - n   [4.19]

In this test F is defined as the fitted cumulative distribution function, evaluated at the ordered observations.
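The A^2 statistic of [4.19] is straightforward to compute once a candidate CDF is fixed. The sketch below evaluates it for an exponential fit (a special case of the Weibull) to the pooled seal data; it is illustrative only, and the critical values used in the thesis come from published tables, not from this code.

```python
import math

t = sorted([3, 8, 10, 11, 25, 29, 29, 31, 58, 80, 115])
n = len(t)
lam = n / sum(t)  # exponential MLE rate

def F(x: float) -> float:
    """Fitted exponential CDF."""
    return 1.0 - math.exp(-lam * x)

# Anderson-Darling statistic [4.19] on the ordered sample
A2 = -n - sum(
    (2 * i - 1) / n * (math.log(F(t[i - 1])) + math.log(1.0 - F(t[n - i])))
    for i in range(1, n + 1)
)
print(round(A2, 4))  # close to the 0,2404 reported in Table 44
```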
4.3.2 RENEWAL PROCESS MODELLING In some situations the renewal function (RF) is required to be accurate for a sufficiently large time horizon. For such situations, a numerical method (Xie, 1989) or an approximation (Jiang, 2008) can be used to directly compute the RF. In other situations, such as the analysis and optimization of various replacement policies, the RF is required to be accurate for a relatively small time horizon (e.g., the interval from zero to a time close to the expected life time), and a simple approximation is desired to simplify the optimization algorithm and solving process. The following requirements for such approximations are desired (Jiang, 2010):
· Simplicity: the approximation has a closed-form expression and can be directly used without the need for further numerical computation(s);
· Accuracy: the approximation is accurate from an engineering perspective for the potential range of the decision variable (e.g., a preventive replacement age);
· Applicability: this requirement is twofold. One is that the range over which it is accurate should be as large as possible; the other is that it is applicable to more distribution families rather than one specific distribution.
Some efforts have been made to develop such approximations for the RF, and for the Weibull distribution specifically (Spearman, 1989; From, 2001). However, it appears that there is no approximation that is both sufficiently accurate and relatively simple. This may be due to the complexity of the shape of the RF (Jiang, 2008; Jiang, 2009; Jiang, 2010). Initially Jiang (2008) presents an approximation based on a combination of the Gamma and normal distributions. In later research by the same author, this approximation is changed into a combination of the cumulative distribution function (CDF) and the cumulative hazard function (CHF) of the specified Weibull distribution.
M(t) ≈ F(t) + Σ_{k=2}^{∞} [p·G(t; k·α, β) + (1 - p)·Φ((t - k·μ)/(σ·√k))]   [4.20]

where G is the Gamma CDF with shape k·α and scale β, Φ is the standard normal CDF, μ and σ are the mean and standard deviation of the fitted distribution, and p is a weighting factor.
Although the newly proposed approximation is based solely on the specified Weibull distribution instead of the Gamma-normal combination, this latest approximation was not found to be supported by other literature sources. The choice was thereby made to use the initial approximation, also described and confirmed by Cui & Xie (2003), calculated as [4.20]. For the complete calculations of this approximation please refer to Appendix R, RF approximation calculations. This appendix also contains a short description of the newly proposed approximation.
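Since closed-form renewal functions are rare, a simulation can serve as a numerical cross-check on any RF approximation. The sketch below estimates M(t) = E[N(t)] by sampling renewal cycles (Weibull inter-renewal times via the inverse transform; all parameters are illustrative, not from the thesis tool).

```python
import math
import random

def simulate_renewals(beta, eta, horizon, runs=20000, seed=42):
    """Monte Carlo estimate of the renewal function M(horizon) = E[N(horizon)]
    for Weibull(beta, eta) inter-renewal times."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        elapsed, count = 0.0, 0
        while True:
            # Inverse-transform sample: t = eta * (-ln U)^(1/beta), U in (0, 1]
            u = 1.0 - rng.random()
            elapsed += eta * (-math.log(u)) ** (1.0 / beta)
            if elapsed > horizon:
                break
            count += 1
        total += count
    return total / runs

# Sanity check: beta = 1 is the exponential case, where M(t) = t / eta exactly
m = simulate_renewals(beta=1.0, eta=1 / 0.024, horizon=100.0)
print(round(m, 2))  # should be near 0.024 * 100 = 2.4
```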
4.3.3 TOTAL COST MODELLING By combining all related costs and the failure distribution of the components considered for analysis, a general cost function can be defined for a specific time period (Jardine & Tsang, 2013):

C(T) = E[CC] / E[CL]   [4.21]
In this formula the total expected cost per cycle consists of both the preventive maintenance cost and the corrective maintenance cost, in which the production loss costs are incorporated. This general cost formula differs per maintenance policy, based on the trigger for costs to occur. For example, if the run-to-failure maintenance policy is chosen, the preventive maintenance costs are nonexistent and the total expected cost per cycle equals the corrective maintenance cost. In the following paragraphs the total cycle cost will be determined for each of the three maintenance policies selected for BPRR.
4.3.3.1 TOTAL COST CALCULATIONS FOR DIFFERENT MAINTENANCE POLICIES When considering the run-to-failure policy, replacement actions are only executed upon equipment failure, as depicted in Figure 23. The time between two failure-triggered replacements is defined as one replacement cycle.

Figure 23 Replacement cycle in case of run to failure policy

The corresponding expected maintenance cost can then be defined by transforming [4.21] to this specific strategy. In [4.22], the MTBF represents the average time between two consecutive failures. Note that no preventive maintenance costs are incorporated.

E[C(T)] = E[CC] / E[CL] = C_f / MTBF   [4.22]
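For the run-to-failure policy, [4.22] reduces to a one-line computation. Using the corrective cost and MTTF reported later in this chapter (21.958 and 36,35 months respectively), it gives a per-month cost close to the failure-based column of Table 46.

```python
c_f = 21958          # corrective replacement cost (incl. production loss)
mttf = 36.35         # mean time to failure in months (Weibull MLE fit)

cost_per_month = c_f / mttf  # [4.22]: E[CC]/E[CL] = C_f / MTBF
print(round(cost_per_month, 2))  # about 604, cf. Table 46
```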
In a similar way, the concept of the block policy can be defined according to Figure 24. This policy does not take into consideration how many failures have already occurred during one replacement cycle, which is defined by the period of time between two consecutive preventive replacements. This specific replacement cycle is fixed (Jardine & Tsang, 2013).

Figure 24 Replacement cycle in case of the constant interval policy (block policy)

For this policy the expected cost during a time interval T can be described by [4.23]. The costs for one replacement cycle are given by one preventive replacement at the end of the cycle plus the expected number of failures during the replacement cycle multiplied by the cost of a corrective replacement.

E[C(T)] = E[CC] / E[CL] = [C_p + C_f · M(T)] / T   [4.23]
The last considered maintenance policy, the age-based policy, can be graphically represented by Figure 26. In this case two possible scenarios or replacement cycles exist. Either the component reaches the fixed age T at which it is preventively replaced, or the component fails before reaching this fixed age. In that case the component is replaced correctively and the replacement cycle starts again; see Figure 25 for a representation of these two scenarios.
Figure 25 Possible replacement cycles in case of age based policy
The accompanying expected replacement cycle costs are defined by the cost of a preventive cycle multiplied by the probability of having a preventive cycle, plus the cost of a failure cycle multiplied by the probability of having a corrective cycle.

Figure 26 Replacement cycle in the case of age based policy
This expected cost is then divided by the expected replacement cycle length. This length is calculated as the length of a preventive cycle multiplied by the probability of a preventive cycle, plus the expected length of a corrective cycle multiplied by the probability of a corrective cycle. The result is given by [4.24], in which R(T) denotes the survival function and M(T) the expected time to failure given that the component fails before T.

E[C(T)] = E[CC] / E[CL] = [C_p · R(T) + C_f · (1 - R(T))] / [T · R(T) + M(T) · (1 - R(T))]   [4.24]
4.3.4 MODEL ASSUMPTIONS The selected maintenance policies are subject to some policy-specific assumptions, presented in Table 38. In general, the time to replace a component is neglected.

Table 38 Assumptions for selected maintenance policies

Policy | Assumption(s)
Failure based | Time to replace is negligible
Age policy | Time to replace is negligible; attractive for components with an IFR compared to no preventive maintenance
Block policy | Time to replace is negligible; attractive for components with an IFR compared to no preventive maintenance
Next to the policy-specific assumptions, some general model assumptions need to be defined in order to support the model solving:
· Spare parts are always available;
· There are no restrictions on manpower;
· The facility operates 24 hours per day, seven days per week;
· Single sub unit failure is assumed and sub units are not considered to be redundant;
· Cost for renewal after failure > cost for renewal preceding failure (approximately 3 times);
· Only seal leakage is considered as pump failure mode;
· Upon the time of a maintenance action, the component (pump seal) is replaced with a new identical component;
· Planned maintenance actions are always planned during scheduled downtime and therefore do not lead to production loss costs;
· The replacement interval might be bound by a maximum legally set interval.
4.4 MAINTENANCE POLICY EVALUATION, MODEL SOLVING In the previous chapter the equipment was selected based on a decision support tool. In the last two paragraphs the mathematical model was described for the maintenance policy evaluation. All variables and their relations were defined, and in this model solving step the mathematical model will provide some insights into possible adjustments. 4.4.1 FAILURE DISTRIBUTION By combining the information from the OSA data with the information found in the work orders and maintenance costs, a complete analysis of P2101 and its components was performed. As a result of this data combination, failure frequencies were found for the pump seals. For the period 2001 till 2015 substantial data could be retrieved to solve the described model. This will be done in the same order as proposed in the previous modelling paragraph. 4.4.1.1 LIFE TIME DATA In the case of BPRR, historical data or field data was used to determine the failure distribution. This field data is provided by the SAP system and contains all the information considered necessary for failure distribution modelling. Furthermore, the collected life time data is considered to be complete and ungrouped, meaning that only data points registered after an actual failure are used, not data assigned to a specified interval. The failure data from all P2101 pumps (A, B and C) are provided in Figure 27. As can be seen, most data points were retrieved for P2101C. This pump provides the most stable, however short, life time samples compared to pumps A and B. From a technical point of view these pumps are identical, being initially built as redundant pumps. In this case 'initially' means that in the current operation the pumps are no longer treated as redundant but used for increased capacity.
Based on the assumption that these three pumps are identical, the failure frequencies are pooled together, as suggested in literature, to increase the sample size (Lawless, 1982; Lewis, 1996; Deshpande & Purohit, 2005; Rinne, 2008).
Figure 27 Life time data collected at BPRR for P2101 (lifespan of a pump seal, in months, per specific pump seal)

To summarize, in total eleven uncensored, ungrouped life time samples were collected from the SAP system at BPRR. These samples will be used for the failure distribution parameter estimation in the next section.
4.4.1.2 PARAMETER ESTIMATION VIA PROBABILITY PLOTTING The first step in probability plotting, as defined in the modelling phase, is gaining initial insight into the possible PDF and CDF of the sampled data. Based on the approximation for a reasonable interval, the data was grouped into intervals with a length of 25 months. Given these intervals, the frequencies found are grouped and a histogram was made. In Table 39 these grouped frequencies are presented and the corresponding histogram is depicted in Figure 28.
Table 39 Interval frequencies

Time | Interval | Frequency
0-25 months | 1 | 5
26-50 months | 2 | 3
51-75 months | 3 | 1
76-100 months | 4 | 1
>100 months | 5 | 1

Figure 28 Frequency histogram of pump seal lifespan
The next step is to transform these frequencies into a probability distribution function accordingly. The results of this step are given in Figure 29 (the calculations used to determine this are found in Appendix O, Calculations for histograms). From Figure 29, both the Weibull and the exponential failure distribution might provide the right distribution in this case. The particular bell shape of the normal distribution is not found, and therefore the normal distribution will not be tested in the probability plotting step.
Figure 29 Normalized histogram for PDF and CDF
The previous step provided some initial insight into possible distributions for the lifespan of pump seals. Subsequently, the approximation of the CDF (F(t_i)) is calculated using formula [4.8] in order to develop a probability plot. For the median rank to be calculated, the samples are ranked in ascending order and subsequently the values for the CDF are calculated. Using Figure 29 as starting point, first the exponential distribution is tested. Based on the procedure described in the modelling step, an approximation is provided for F(t_i) and y_i = ln(1/(1 - F(t_i))). The complete results of estimating the CDF by the median rank and the approximation of F(t_i) and y_i are provided in Table 40.
Table 40 Approximation for F(t_i) and y_i

Item | i | F(t_i) | t_i (months) | y_i
C3 | 1 | 0,061 | 3 | 0,063
A2 | 2 | 0,149 | 8 | 0,161
C1 | 3 | 0,237 | 10 | 0,270
C6 | 4 | 0,325 | 11 | 0,392
C5 | 5 | 0,412 | 25 | 0,532
C2 | 6 | 0,500 | 29 | 0,693
A1 | 7 | 0,588 | 29 | 0,886
C4 | 8 | 0,675 | 31 | 1,125
B1 | 9 | 0,763 | 58 | 1,440
A3 | 10 | 0,851 | 80 | 1,903
B2 | 11 | 0,939 | 115 | 2,790
To support the least squares method, it is useful to provide a graphical representation based on this table, as described by Lewis (1996). Figure 30 provides this graphical representation.
Figure 30 Probability plot for the exponential distribution (y = ln(1/(1 - F)) against t, with fitted line y = 0,024t)
From the resulting calculations in Table 40 the parameters for the exponential distribution are approximated, and in a similar way the probability plotting was executed for a Weibull distribution. The results of the parameter approximation are given in Table 41.
Table 41 Results of parameter estimation using probability plotting

Distribution | Estimated parameters | MTTF
Exponential | λ = 0,024 | 42 months
Weibull | β = 1,022; η = 38,37 | 38 months
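The MTTF column can be recomputed from the estimated parameters: 1/λ for the exponential distribution and η·Γ(1 + 1/β) for the Weibull. A quick check using only the Python standard library:

```python
import math

mttf_exponential = 1 / 0.024                      # = 41.7, rounds to ~42 months
mttf_weibull = 38.37 * math.gamma(1 + 1 / 1.022)  # close to 38 months
print(round(mttf_exponential, 1), round(mttf_weibull, 1))
```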
4.4.1.3 PARAMETER ESTIMATION VIA MLE As described in the modelling phase, more accurate parameter estimations are derived by using MLE. Based on the likelihood equations presented in 4.3.1.3 Parameter estimation via maximum likelihood estimation, the parameters for both the Weibull distribution and the exponential distribution are calculated. The results of this estimation are given in the table below.
Table 42 Results of parameter estimation using MLE

Distribution | Estimated parameters | MTTF
Exponential | λ = 0,028 | 36,27 months
Weibull | β = 1,112; η = 37,39 | 36,35 months
The developed Excel based tool is used to calculate these parameter estimations. Next to the parameter estimations, this tool also provides confidence intervals for the estimated parameters, both quantitatively and graphically. Please refer to Appendix S, Confidence intervals for parameter estimation to see an example of these results.
4.4.1.4 DETERMINE GOODNESS OF FIT FOR SPECIFIED PARAMETERS The first test to determine the goodness-of-fit is given by the r^2 calculation. For this specific case of pump seals for P2101 the exponential function provides the better fit for the data points, as can be concluded from Table 43. This already implies that the optimal maintenance policy will be the run-to-failure policy, given the constant failure rate of the exponential distribution.
Table 43 Results of the r^2 test statistic

Distribution | Estimated parameters | r^2
Exponential | λ = 0,024 | 0,9763
Weibull | β = 1,022; η = 38,37 | 0,9681
The results from the other two goodness-of-fit tests are presented in the table below. As can be concluded from Table 44, the data can be represented by both the Weibull and the exponential distribution. Note that from these two tests it also follows that the exponential distribution is a better fit to the failure data (the Anderson-Darling statistic value is lower for the exponential distribution).
Table 44 Results of statistical tests to determine goodness-of-fit

Test | Element | Exponential | Weibull
Mann-Scheuer-Fertig test | Statistic value | 0,5049 | 0,5049
Mann-Scheuer-Fertig test | Critical value (α = 0,01) | 0,82 | 0,82
Mann-Scheuer-Fertig test | Conclusion | Accepted | Accepted
Anderson-Darling test | Statistic value | 0,2404 | 0,2578
Anderson-Darling test | Critical value (α = 0,01) | 0,998 | 0,998
Anderson-Darling test | Conclusion | Accepted | Accepted
4.4.2 RENEWAL PROCESS MODELLING Now that the failure distribution for the pump seals is assumed to be exponential, an estimation can be made of the expected number of failures in or before a certain point in time. This insight is provided by the renewal theory formula given by [4.7]. Solving this for the exponential distribution results in [4.25].

M(t) = E[N(t)] = 0,024 · t,  with t ≥ 0   [4.25]
As stated before, the exponential distribution already implies that the failure rate of the pump seal is constant, so that the optimal maintenance policy follows directly from it. Let us therefore assume now that the Weibull distribution would have been the best fit. Given the approximation of M(t) by Jiang (2008), the calculated expected number of failures for a specific time period is given by:

M(t) ≈ F(t) + Σ_{k=2}^{∞} [0,953 · G(t; 0,56·k, 64,6) + 0,047 · Φ((t - 36,35·k)/(48,72·√k))]   [4.20]
4.4.3 MAINTENANCE POLICY RESULTS The previous two paragraphs contained the establishment of the failure distribution and the renewal function for the pump seal of P2101, resulting in the exponential failure distribution. Because preventive maintenance is not optimal under a constant failure rate, for the following calculations it is assumed that the best fitted distribution was the Weibull distribution with the MLE parameters as specified. The next step is to calculate the expected maintenance cost for the three selected maintenance policies (run to failure, age based and block policy). From the maintenance cost data and the production loss data, estimations are made of the cost of replacement before failure and the cost of replacement after failure (Table 45).
Table 45 Overview of input parameters for the calculation

Time to failure distribution | Parameter(s) | Preventive maintenance cost | Corrective maintenance cost
Weibull | β = 1,022; η = 38,37 | 958 | 21.958
Based on the assumption that replacement before failure is done during planned downtime of the equipment, it does not result in production loss costs. These production loss costs only occur if replacement is done after failure, and they are based on an estimation combining production loss data and OSA downtime data. Using the maintenance cost estimations, the selected failure distribution parameters, the approximation of the expected number of replacements in cycle length T, and the maintenance policy cost formulas [4.22], [4.23] and [4.24], the results for the different maintenance policies are calculated through an Excel based model. The results of this Excel based model are shown in Table 46.
Table 46 Results for the optimal maintenance policy given the fitted Weibull distribution

T | M(T) | Failure based | Age based | Block
1 | 0,05 | 604,02* | 1322,35 | 2040,75
6 | 0,22 | 3624,12 | 613,29 | 964,23
12 | 0,44 | 7248,23 | 569,44 | 884,40
17 | 0,62 | 10268,33 | 564,35 | 861,30
18 | 0,66 | 10872,35 | 564,24* | 857,94
19 | 0,70 | 11476,37 | 564,30 | 854,57
24 | 0,87 | 14496,47 | 566,10 | 840,17
36 | 1,29 | 21744,70 | 573,65 | 811,04
48 | 1,67 | 28992,93 | 580,89 | 785,15
120 | 3,73 | 72482,34 | 600,73 | 690,56*
As can be concluded from this table, the optimal policy would be the age based policy with a replacement cycle of 18 months. These maintenance policy settings will provide the best expected equipment performance, given the fact that production loss costs are incorporated in the corrective maintenance costs. Thereby, finding the minimum of the total expected costs also results in the minimal expected downtime of P2101. Note however that initially the best strategy was run to failure, given that the best fitted distribution of the sample data was the exponential distribution. Furthermore, based on safety concerns and a service contract with the supplier of the pump seals, a preventive strategy has to be adopted, as proposed by the results in Table 46.
4.5 MAINTENANCE POLICY EVALUATION, VERIFICATION AND VALIDATION Based on a discussion with a pump seal expert, it was concluded that the mathematical model presents maintenance policy costs for the determination of an optimal maintenance policy, thereby verifying the developed model. Data validity was guaranteed by checking all relevant failure sample data from the work orders and the corresponding costs from SAP at BPRR. Based on the description of failure-related work orders, the failure mode of the crude pump was derived. Times between registered work orders concerning breakdowns are used as input for the failure distribution fitting. Moreover, based on historical costs corresponding to the corrective work orders, the corrective and preventive maintenance costs were estimated. To ensure usage of the developed mathematical model, the user interface and ease of use for the decision maker should be guaranteed for the Excel based tool. Based on several tutorials and feedback from end-users, the Excel tool was adjusted to meet expectations and ensure interface validity as defined by Finlay & Wilson (1997). Finally, the general validity of the mathematical model was checked by confirming that the model provides appropriate results for the selected decision problem at hand. When providing accurate equipment life time data and accurate cost estimations, optimal maintenance policies can be determined using the developed model. Furthermore, the outcome of the mathematical model considering the Weibull distribution calculations was tested with experts.
4.6 MAINTENANCE POLICY EVALUATION, IMPLEMENTATION
In order to implement the mathematical model developed in this research, a tool was built in Excel. Figure 31 provides an overview of the tool and its results (for more information on the built Excel tool please refer to Appendix U, Excel based mathematical model). The input required for the calculations performed by the tool consists of:
· Estimates of the cost parameters for preventive and corrective maintenance;
· A set of failure time data.

Figure 31 User interface of the developed Excel tool for the mathematical model

The estimation of cost parameters consists of providing an estimate for the cost of replacement prior to failure and the cost of replacement after failure. For future research and improvement of this mathematical model, more effort should be put into defining and estimating these costs. For example, the addition of a cost breakdown of materials and labour would provide useful insight and more accurate cost
parameter estimates. Moreover, the production loss costs should be defined more accurately, since these costs have a great influence given their magnitude compared to the preventive maintenance costs at BPRR. Incorporated in the Excel based tool is a module developed to convert time-to-failure sample data into Weibull distribution parameters by using maximum likelihood estimation (MLE). For this part of the tool it is necessary to collect failure time data from data sources such as SAP and the eChoke database. The estimated parameters are also tested for goodness-of-fit with the described tests from the mathematical model. Next to this, confidence intervals are calculated to provide extra insight.

4.7 CONCLUSION MAINTENANCE POLICY EVALUATION In this part of the thesis research a simple procedure was developed to assess whether improvement can be found in maintenance, and the corresponding mathematical model to determine optimal maintenance policies was constructed. It was furthermore concluded that, based on the currently available data, it is possible to derive a failure distribution for certain components. However, it should also be noted that this was somewhat of a unique example and that analyses on a large scale are not possible given the current data. For implementation on a larger scale, additional data is needed concerning accurate sub system failure data, including failure modes and the relation with the corresponding availability losses, to provide the right assessment of whether maintenance policies should be adjusted. The complete mathematical model was built into an Excel based tool, automating the constructed calculations and procedures. The model was lastly validated and verified as described in the research methodology.
5. CONCLUSIONS AND RECOMMENDATIONS This chapter presents the conclusions related to the three research questions, describes the resulting applications of the deliverables and the recommendations for BPRR, and provides suggestions for future research.
5.1 CONCLUSIONS During this thesis research project two Excel-based tools were developed. The first tool enables BPRR to select the equipment with the largest performance gap, based on a data-driven multi-criteria decision analysis. The second tool enables BPRR to estimate failure characteristics of equipment based on lifetime data and subsequently determine the optimal maintenance policy by minimizing the total relevant cost.
5.1.1 HOW CAN EQUIPMENT PERFORMANCE BE MEASURED? First, a framework was constructed to ensure a complete overview of the criteria used for the determination of equipment performance related to maintenance. This framework was constructed based on a comparison and confrontation between literature and BPRR. It considers the different characteristics of equipment performance and maintenance-related performance indicators as the basis for the categorization of criteria. Moreover, the framework covers all the necessary corporate elements and the relations between them, for example the alignment of corporate strategy with the targets set for equipment performance. As a result, six indicators were selected.
Table 47 Criteria for measurement of equipment performance and maintenance effectiveness

Measure category      | Type of measure | Indicator category              | KPI / Criteria
Equipment performance | Lagging         | Equipment effectiveness         | EDC unit % OSA (Availability rate)
Equipment performance | Lagging         | Reliability and Maintainability | Failure frequency
Cost Performance      | Lagging         | Reliability and Maintainability | Maintenance cost; Production loss cost; Overall maintenance cost
Process Performance   | Leading         | Work identification             | Planned / Unplanned ratio
5.1.2 HOW CAN EQUIPMENT WITH THE LARGEST PERFORMANCE GAP BE SELECTED? Equipment with the largest performance gap can be selected with a two-stage multi-criteria decision analysis. Before equipment could be selected, the decision problem at hand was first structured as a hierarchy, and the decision problem elements (criteria, alternatives and the performance of the alternatives on the criteria) were determined. By transforming the equipment performance on six different criteria into one measure, it became possible to compare equipment (units and sub units). Based on this calculated criterion, a complete ranking became available, enabling decision makers to select the equipment with the largest performance gap.
For the BPRR case specifically, first a unit with a large performance gap was selected (3). Subsequently, the procedure had to be adjusted to account for the absence of three of the six criteria for the selection of the sub unit with the largest performance gap. Based on the adjusted decision analysis, the sub unit with the largest performance gap was selected (P2101).
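The ranking step of the decision analysis can be illustrated with a small sketch of a PROMETHEE II net-flow calculation, which aggregates the performance on several criteria into the single measure used for ranking. The performance table, weights and criterion directions below are made-up examples (not BPRR data), and the simple "usual" preference function is assumed.

```python
import numpy as np

def promethee_ii_net_flow(perf, weights, maximize):
    """PROMETHEE II net outranking flows for a performance table
    perf of shape (alternatives x criteria), using the 'usual'
    preference function (1 if strictly better, else 0)."""
    perf = np.asarray(perf, dtype=float)
    n = perf.shape[0]
    # orient all criteria so that larger is always better
    oriented = np.where(maximize, perf, -perf)
    phi = np.zeros(n)
    for j, w in enumerate(weights):
        col = oriented[:, j]
        # pref[a, b] = 1 if alternative a beats b on criterion j
        pref = (col[:, None] > col[None, :]).astype(float)
        # leaving flow minus entering flow, weighted per criterion
        phi += w * (pref.sum(axis=1) - pref.sum(axis=0))
    return phi / (n - 1)

# hypothetical example: three sub units, two criteria
# (failure frequency is minimized, availability is maximized)
perf = [[12.0, 0.95],   # sub unit A
        [ 3.0, 0.99],   # sub unit B
        [ 8.0, 0.97]]   # sub unit C
phi = promethee_ii_net_flow(perf, weights=[0.6, 0.4],
                            maximize=np.array([False, True]))
```

Under this convention, the alternative with the lowest net flow is the worst performer and thus the one with the largest performance gap.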
The results of this developed selection procedure are all provided by an Excel-based tool that calculates the single criterion based on the specified input. Both the developed Excel model and the results following from it were verified and validated accordingly.
Figure 32 Selection procedure of equipment with the largest performance gap (set of units → selection of best unit for further analysis → set of sub units → selection of best sub unit for further analysis → components)
5.1.3 HOW CAN EQUIPMENT PERFORMANCE BE IMPROVED BY ADJUSTING CURRENT MAINTENANCE POLICIES? In answer to the third research question, a mathematical model was developed to determine the optimal maintenance policy based on total expected cost. The run-to-failure, age-based and block policies were selected, given the possibilities within BPRR to register maintenance policies in the SAP system. The first step of the mathematical model involves failure distribution modelling. Given the lifetime data of the selected sub unit (P2101), it became possible to fit a failure distribution to a specific component of the sub unit (a pump seal).
After the failure distribution was fitted, cost estimations were made based on the available data on maintenance cost and the production loss cost resulting from the sub unit failures. As a result, the expected maintenance costs for the three selected maintenance policies were calculated. Unfortunately, the results showed a constant failure rate for this case, implying that run to failure is the optimal policy. In order to illustrate the use of the mathematical model, results were also obtained assuming a Weibull failure distribution. In general, the developed model can be used to determine optimal maintenance policies and failure time characteristics for selected equipment. The complete mathematical model was constructed in an Excel-based tool enabling automated calculations.
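The expected-cost comparison behind this conclusion can be sketched numerically. Under the standard renewal-reward formulation of the age-based policy, the cost rate at preventive replacement age tp is (Cp·R(tp) + Cf·F(tp)) / ∫₀^tp R(t) dt, with R the Weibull reliability function. The parameter values below are illustrative, not BPRR data.

```python
import numpy as np

def age_policy_cost_rate(tp, beta, eta, cp, cf, n=2000):
    """Expected cost per unit time when replacing preventively at
    age tp (cost cp) or correctively on failure before tp (cost cf),
    for Weibull(beta, eta) lifetimes."""
    t = np.linspace(0.0, tp, n)
    surv = np.exp(-(t / eta) ** beta)        # reliability R(t)
    dt = tp / (n - 1)
    # trapezoidal rule for the expected cycle length, int_0^tp R(t) dt
    cycle = 0.5 * dt * (surv[:-1] + surv[1:]).sum()
    r_tp = surv[-1]
    return (cp * r_tp + cf * (1.0 - r_tp)) / cycle

def optimal_age(beta, eta, cp, cf):
    """Grid search over candidate preventive replacement ages."""
    ages = np.linspace(0.05 * eta, 3.0 * eta, 400)
    costs = np.array([age_policy_cost_rate(a, beta, eta, cp, cf)
                      for a in ages])
    i = int(costs.argmin())
    return ages[i], costs[i]
```

With a wear-out shape (beta > 1) the cost rate has an interior minimum, so a finite preventive replacement age pays off; with beta = 1 (a constant failure rate, as found for the pump seal) the minimum sits at the far edge of the search grid, which matches the run-to-failure conclusion above.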
5.2 APPLICATIONS FOR BPRR The developed decision support tool for the selection of equipment with the largest performance gap should be used to support the current Asset Prioritization processes. Currently, bad actors, or equipment with the largest performance gap, are selected based on expert opinions rather than on data. With a data-driven selection, results are more objective. It should be noted, however, that the developed tool is merely a support tool, not a decision-making tool, and that analysis of the results by experts is still needed.
In this research the selection procedure was used to select equipment with the largest performance gap in a specified time period. However, the results from the decision support tool should also be used to identify trends in equipment performance, using multiple time periods as input. This will lead to a more accurate resource allocation than decisions based on one specified time period. For example, the equipment performance during one complete turnaround cycle (approximately six years) might be compared with the six years separately to identify trends. The advantage for BPRR will be twofold: firstly, the results might prove more similar to those of the international Solomon benchmark (which also considers more than one year) and, secondly, the results are less incident driven.
Furthermore, the developed decision support tool should be used to compare identical or similar sub units (pumps, heaters, etc.) across the complete refinery instead of ranking equipment performance within the boundaries of a single unit (e.g. comparing the pumps of crude unit one with the pumps of crude unit two).
The big advantage for BPRR would be that this analysis provides insight into the performance of, for example, identical pumps on similar units. Moreover, based on this application, good practices could be identified and applied to equipment with large performance gaps. For example, if the strategy for crude feed pump one differs from that of crude feed pump two, but crude feed pump two performs better, the difference in ranking values would trigger an adjustment of the strategy for crude feed pump one towards that of crude feed pump two. Lastly, the results of the developed decision support tool provide input for continuous improvement. Regularly performing the analysis and checking the results makes it possible to determine whether implemented adjustments actually improve equipment performance, thereby enabling continuous improvement of equipment performance.

5.3 RECOMMENDATIONS FOR BPRR Given that the developed decision support tool is completely based on the input of current data, the first recommendation for BPRR is to increase data availability and data reliability. Work order information registered in SAP needs to be accurate to perform accurate analyses; for example, the assignment of maintenance cost to equipment identifiers or functional locations has to be improved. Also, by adding a relation between downtime causes and their effects, in the form of production loss cost, to the current eChoke or OSA data, it will become possible to determine equipment failure characteristics on a larger scale. The tools developed in this research provide useful input for different aspects of the Global Reliability Improvement Program. This program focuses on improving production loss accounting and vulnerability studies, and the procedure for equipment selection supports both of these aspects.
The deliverables from this research should be added to the tool package of BPRR to perform relatively quick analyses of equipment performance. Given the restricted time available for this thesis research, some assumptions were made in the developed mathematical model for maintenance policy evaluation. Further development of the provided mathematical model will allow a wider range of possible maintenance policy adjustments, as well as more realistic representations of both failure behaviour and expected maintenance cost.

5.4 ACADEMIC RELEVANCE OF THIS RESEARCH This research presented a framework for the measurement of equipment performance related to maintenance. As concluded from the literature, equipment performance related to maintenance is often evaluated only by financial results. Consequently, a combination of different types of performance indicators for the measurement and evaluation of equipment performance is lacking in the literature. Moreover, the combination of such a framework of indicators with a suggested method to rank the alternatives is absent as well. The results of this research present a framework to close this gap.

5.5 FUTURE RESEARCH DIRECTIONS The developed mathematical model is based on only three possible maintenance policies and a simplified cost calculation. In order to provide more accurate results for the determination of the optimal maintenance policy, future research should focus on extending the proposed model by adding more maintenance policies, such as performance-based or condition-based maintenance policies. A further suggestion would be the addition of delay time and inspection intervals to the proposed model.
REFERENCES

Aken, J.E. van, Berends, H., & Van der Bij, H. (2007). Problem Solving in Organizations. Cambridge: Cambridge University Press.
Anderson, P.K. (1982). Testing goodness of fit of Cox's regression model. Biometrics, 38, 67-77.
Belson, W.A. (1986). Validity in Survey Research. London: Gower.
Bertolini, M., Bevilacqua, M., Ciarapica, F.E., & Giacchetta, G. (2009). Development of Risk-Based Inspection and Maintenance procedures for an oil refinery. Journal of Loss Prevention in the Process Industries, 22, 244-253.
Bertrand, J.W.M., & Fransoo, J.C. (2002). Operations management research methodologies using quantitative modeling. International Journal of Operations & Production Management, 22(2), 241-264.
Bevilacqua, M., & Braglia, M. (2000). The analytic hierarchy process applied to maintenance strategy selection. Reliability Engineering and System Safety, 70, 71-83.
Brans, J.P., Mareschal, B., & Vincke, P.H. (1984). PROMETHEE: A new family of outranking methods in multicriteria analysis. In Operational Research. North-Holland: Elsevier Science Publishers.
Brans, J.P., & Vincke, P.H. (1985). A preference ranking organization method (The PROMETHEE method for multiple criteria decision-making). Management Science, 31(6), 647-656.
Campbell, J.D., & Reyes-Picknell, J.V. (2006). Uptime: Strategies for Excellence in Maintenance Management (2nd ed.). New York: Productivity Press.
Christer, A. (1999). Developments in delay time analysis for modelling plant maintenance. The Journal of the Operational Research Society, 50(11), 1120-1137.
Cui, L., & Xie, M. (2003). Some normal approximations for renewal function of large Weibull shape parameter. Communications in Statistics, 32(1), 1-16.
Dekker, R. (1996). Applications of maintenance optimization models: a review and analysis. Reliability Engineering and System Safety, 51, 229-240.
Deshpande, J.V., & Purohit, S.G. (2005). Life Time Data: Statistical Models and Methods. Singapore: World Scientific Publishing Co.
Dodson, B. (2006). The Weibull Analysis Handbook (2nd ed.). Wisconsin: ASQ Quality Press.
Dubois, D., & Prade, H. (1980). Fuzzy Sets and Systems. New York: Academic Press.
Dyer, J.S. (1990). Remarks on the analytic hierarchy process. Management Science, 36(3), 249-258.
Fernandez, O., Labib, A.W., Walmsley, R., & Petty, D.J. (2003). A decision support maintenance management system: development and implementation. International Journal of Quality & Reliability Management, 20, 965-979.
Finlay, P.N., & Wilson, J.M. (1997). Validity of Decision Support Systems: Towards a Validation Methodology. Systems Research and Behavioral Science, 14(3), 169-182.
Fleischer, J., Weismann, U., & Niggeschmidt, S. (2006). Calculation and optimization model for costs and effects of availability relevant service elements. Proceedings of LCE 2006.
From, S.G. (2001). Some new approximations for the renewal function. Communications in Statistics, 30(1), 113-128.
Garg, A., & Deshmukh, S.G. (2006). Maintenance management: literature review and directions. Journal of Quality in Maintenance Engineering, 12, 205-238.
Geraerds, W.M.J. (1985). The cost of downtime for maintenance: preliminary considerations. Maintenance Management International, 5, 13-21.
Gits, C.W. (1992). Design of maintenance concepts. International Journal of Production Economics, 24, 217-226.
Guitouni, A., & Martel, J.M. (1998). Tentative guidelines to help choosing an appropriate MCDA method. European Journal of Operational Research, 109(2), 501-521.
Huang, J., & Tzeng, G. (2011). Multiple Attribute Decision Making: Methods and Applications. Boca Raton: CRC Press.
Huang, S.H., Dismukes, J.P., Mousalem, A., Razzak, R.B., & Robinson, D.E. (2003). Manufacturing productivity improvement using effectiveness metrics and simulation analysis. International Journal of Production Research, 41, 513-527.
Jardine, A.K.S., & Tsang, A.H.C. (2013). Maintenance, Replacement, and Reliability: Theory and Applications (2nd ed.). Boca Raton: CRC Press.
Jiang, R. (2008). A Gamma-normal series truncation approximation for computing the Weibull renewal function. Reliability Engineering and System Safety, 93, 616-626.
Jiang, R. (2009). A simple approximation for the Weibull renewal function. International Conference on Industrial Engineering and Engineering Management.
Jiang, R. (2010). A simple approximation for the renewal function with an increasing failure rate. Reliability Engineering and System Safety, 95, 963-969.
Jonsson, P., & Lesshammar, M. (1999). Evaluation and improvement of manufacturing performance measurement systems: the role of OEE. International Journal of Operations & Production Management, 19, 55-78.
Keeney, R.L., & Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Trade-offs. New York: John Wiley and Sons.
Kirk, J., & Miller, M.L. (1986). Reliability and Validity in Qualitative Research. Qualitative Research Methods, 1.
Landry, M., Malouin, J.L., & Oral, M. (1983). Model validation in operations research. European Journal of Operational Research, 14, 207-220.
Lawless, J.F. (1982). Statistical Models and Methods for Lifetime Data. John Wiley & Sons, Ltd.
Lewis, E.E. (1996). Introduction to Reliability Engineering (2nd ed.). New York: Wiley.
Littell, R.C., McClave, J.T., & Offen, W.W. (1979). Goodness-of-fit tests for the two-parameter Weibull distribution. Communications in Statistics - Simulation and Computation, 8, 257-269.
Lockhart, R.A., O'Reilly, F., & Stephens, M.A. (1986). Tests for the extreme-value and Weibull distributions based on normalized spacings. Naval Research Logistics Quarterly, 33, 413-421.
Mann, N.R., Scheuer, E.M., & Fertig, K.W. (1973). A new goodness-of-fit test for the two-parameter Weibull or extreme-value distribution with unknown parameters. Communications in Statistics, 2, 383-400.
Mann, N.R., Schafer, R.E., & Singpurwalla, N.D. (1974). Methods for Statistical Analysis of Reliability and Life Data. London: Wiley.
Marler, R.T., & Arora, J.S. (2004). Survey of multi-objective optimization methods for engineering. Structural and Multidisciplinary Optimization, 26, 369-395.
Mitroff, I.I., Betz, F., Pondy, L.R., & Sagasti, F. (1974). On managing science in the systems age: two schemas for the study of science as a whole systems phenomenon. Interfaces, 4(3), 46-58.
Montgomery, D.C., & Runger, G.C. (2006). Applied Statistics and Probability for Engineers. Hoboken: Wiley.
Moubray, J. (1997). Reliability Centered Maintenance (2nd ed.). Oxford: Butterworth-Heinemann.
Muchiri, P., & Pintelon, L. (2008). Performance measurement using overall equipment effectiveness (OEE): Literature review and practical application discussion. International Journal of Production Research, 46(13), 3517-3535.
Muchiri, P., Pintelon, L.M., Gelders, L.F., & Martin, H. (2011). Development of maintenance function performance measurement framework and indicators. International Journal of Production Economics, 131, 295-302.
Murray, M., Fletcher, J.K., Kohler, P., Chambers, J., & Ledwigde, T. (1996). Capability Assurance: A Generic Model for Maintenance. ICOMS-96, Melbourne, Paper 72, 1-5.
Nakajima, S. (1988). Introduction to TPM: Total Productive Maintenance. Cambridge: Productivity Press.
Neely, A., Gregory, M., & Platts, K. (2005). Performance measurement system design. International Journal of Operations & Production Management, 25, 1228-1263.
Pintelon, L.M., & Gelders, L.F. (1992). Maintenance management decision making. European Journal of Operational Research, 58, 301-317.
Pintelon, L.M., Pinjala, S.K., & Vereecke, A. (2006). Evaluating the effectiveness of maintenance strategies. Journal of Quality in Maintenance Engineering, 12, 7-20.
Rinne, H. (2008). The Weibull Distribution: A Handbook. Boca Raton, FL: Chapman & Hall/CRC.
Roy, B., & Vincke, P. (1981). Multicriteria analysis: survey and new directions. European Journal of Operational Research, 8, 207-218.
Saaty, T.L. (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology, 15(3), 234-281.
Saaty, T.L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
Saaty, T.L. (1990). How to make a decision: The Analytic Hierarchy Process. European Journal of Operational Research, 48, 9-26.
Saaty, T.L. (1994). Highlights and critical points in the theory and application of the Analytic Hierarchy Process. European Journal of Operational Research, 74(3), 426-447.
Saaty, T.L. (2003). Decision-making with the AHP: why is the principal eigenvector necessary? European Journal of Operational Research, 145, 85-91.
Saaty, T.L., & Ozdemir, M.S. (2003). Why the magic number seven plus or minus two. Mathematical and Computer Modelling, 38(3-4), 233-244.
Sargent, R.G. (2013). Verification and validation of simulation models. Journal of Simulation, 7, 12-24.
Simons, R. (1995). Levers of Control: How Managers Use Innovative Control Systems to Drive Strategic Renewal. Boston, MA: Harvard Business School Press.
Solomon Associates (2014). BP Rotterdam Refinery Euro 2014 Solomon Benchmark Report.
Spearman, M.L. (1989). A simple approximation for IFR Weibull renewal functions. Microelectronics and Reliability, 29(1), 73-80.
Suzuki, T. (1994). TPM in Process Industries. Portland: Productivity Press.
Tsang, A.H.C. (1998). A strategic approach to managing maintenance performance. Journal of Quality in Maintenance Engineering, 4, 87-94.
Tsang, A.H.C., Jardine, A.K., & Kolodny, H. (1999). Measuring maintenance performance: a holistic approach. International Journal of Operations & Production Management, 19(7), 691-715.
Tsang, A.H.C. (2002). Strategic dimensions of maintenance management. Journal of Quality in Maintenance Engineering, 8, 7-39.
Tsang, A.H.C., Yeung, W.K., Jardine, A.K.S., & Leung, B.P.K. (2006). Data management for CBM optimization. Journal of Quality in Maintenance Engineering, 12, 37-51.
Van Montfort, M.A.J. (1970). On testing that the distribution of extremes is of type I when type II is the alternative. Journal of Hydrology, 11, 421-427.
Vanneste, S.G., & Wassenhove, L.N. (1995). An integrated and structured approach to improve maintenance. European Journal of Operational Research, 82, 241-257.
Waeyenbergh, G., & Pintelon, L.M. (2002). A framework for maintenance concept development. International Journal of Production Economics, 77, 299-313.
Waeyenbergh, G., & Pintelon, L.M. (2004). Maintenance concept development: A case study. International Journal of Production Economics, 89(3), 395-405.
Wang, H. (2002). A survey of maintenance policies of deteriorating systems. European Journal of Operational Research, 139, 469-489.
Williamson, R.M. (2006). Using Overall Equipment Effectiveness: the Metric and the Measures. Columbus: Strategic Work Systems.
Wireman, T. (2001). Climbing the ladder to world class maintenance status. Engineering and Mining Journal, 202(8), 26.
Wireman, T. (2005). Developing Performance Indicators for Managing Maintenance (2nd ed.). New York: Industrial Press.
Xie, M. (1989). On the solution of renewal-type integral equations. Communications in Statistics, 18(1), 281-293.
FIGURES AND TABLES

Figure 1 Position of Refinery Rotterdam within BP Global .... 5
Figure 2 BP Rotterdam Refinery organogram (BPRR, 2015) .... 5
Figure 3 BPRR OSA: target, monthly realization and yearly average (BPRR, July 2015) .... 6
Figure 4 Relation between research deliverables, research questions and report structure .... 9
Figure 5 Method to develop a framework for performance indicators (van Aken et al., 2007) .... 10
Figure 6 Methodology to develop a general multicriteria decision analysis .... 10
Figure 7 Operational research methodology as described by Mitroff et al. (1974) .... 10
Figure 8 Different types of validation for operational research (Landry et al., 1983) .... 11
Figure 9 Maintenance function framework with performance indicators (Muchiri et al., 2011) .... 13
Figure 10 Maintenance function framework applied to BPRR .... 16
Figure 11 Maintenance Cost over 2015 (BPRR, September 2015) .... 18
Figure 12 Hierarchical system for MCDA (Huang & Tzeng, 2011) .... 24
Figure 13 Hierarchy structure of the decision problem .... 25
Figure 14 Overview of sub unit ranking .... 30
Figure 15 Downtime analysis based on OSA data for the CDU3 .... 34
Figure 16 Related maintenance cost for sub unit P2101 .... 34
Figure 17 Decision tree to determine if maintenance policy should be adjusted .... 34
Figure 18 Delay time (Christer, 1999) .... 35
Figure 19 Graphical determination of optimal maintenance policy (Jardine & Tsang, 2013) .... 35
Figure 20 The bathtub curve (Lewis, 1996) .... 36
Figure 21 Characteristic behaviour of the Weibull distribution .... 37
Figure 22 Depiction of renewal theory approach (Jardine & Tsang, 2013) .... 38
Figure 23 Replacement cycle in case of run-to-failure policy .... 44
Figure 24 Replacement cycle in case of the constant interval policy (block policy) .... 44
Figure 25 Possible replacement cycles in case of age-based policy .... 45
Figure 26 Replacement cycle in the case of age-based policy .... 45
Figure 27 Lifetime data collected at BPRR for P2101 .... 46
Figure 28 Frequency histogram of pump seal lifespan .... 47
Figure 29 Normalized histogram for PDF and CDF .... 47
Figure 30 Probability plot for the exponential distribution .... 48
Figure 31 User interface of the developed Excel tool for the mathematical model .... 51
Figure 32 Selection procedure of equipment with the largest performance gap .... 53
Figure 33 Strategic maintenance performance management process (Tsang, 1998) .... 64
Figure 34 Overview of downtime grouping based on the Solomon benchmark .... 68
Figure 35 Equipment breakdown structure (Suzuki, 1994) .... 70
Figure 36 User interface of the developed decision support tool .... 76
Figure 37 Complete overview of possible policies .... 77
Figure 38 Different types of censoring .... 78
Figure 39 Visual representation of Weibull shape parameter confidence intervals over time .... 84
Figure 40 Maintenance data (Tsang et al., 2006) .... 85
Figure 41 Equipment breakdown structure (Suzuki, 1994) .... 85
Figure 42 Classification of different production losses within BPRR .... 86
Figure 43 Input form for mathematical model .... 88
Figure 44 User interface of probability plotting .... 88
Figure 45 User interface of MLE estimation of parameters .... 88
Figure 46 Anderson-Darling test for estimated parameters .... 89
Figure 47 Mann-Scheuer-Fertig test for estimated parameters .... 89
Table 1 Relation between maintenance indicator type and category .... 13
Table 2 Leading performance indicators (Muchiri et al., 2011) .... 14
Table 3 Lagging performance indicators (Suzuki, 1994; Wireman, 2005; Muchiri et al., 2011) .... 14
Table 4 Overview of maintenance related performance indicators .... 15
Table 5 Types of work orders at BPRR .... 16
Table 6 Performance indicators used at BPRR .... 18
Table 7 Overview of performance indicator comparison .... 19
Table 8 Addition of performance indicators suggested for BPRR .... 19
Table 9 Selected indicators for measurement of equipment performance related to maintenance .... 20
Table 10 Ideal criteria for measurement of equipment performance related to maintenance .... 20
Table 11 Phases and steps in the MCDA procedure .... 21
Table 12 Process steps of the AHP method in relation with the MCDA steps .... 21
Table 13 Performance table .... 22
Table 14 Conditions for the selection of criteria .... 22
Table 15 Criteria used in the BPRR decision problem .... 23
Table 16 Performance table for the BPRR case .... 23
Table 17 Results of checking criteria conditions .... 23
Table 18 Pairwise comparison matrix based on Saaty (1990) .... 24
Table 19 Results of factor weights using the AHP .... 25
Table 20 Process steps in selecting an evaluation method in relation with the MCDA steps .... 26
Table 21 Process steps of the PROMETHEE II method in relation with the MCDA steps .... 26
Table 22 PROMETHEE II process steps for constructing the preferences .... 27
Table 23 Results of adding net outranking flows to the MCDA .... 28
Table 24 First three alternatives with added net outranking flows for the BPRR case .... 29
Table 25 Results of adding the net outranking flow to the MCDA .... 29
Table 26 Top three units with the largest performance gap for the BPRR case .... 29
Table 27 Criteria used for sub unit selection .... 30
Table 28 Selected sub unit for further analysis .... 30
Table 29 Description of the different situations tested .... 31
Table 30 Criteria weights corresponding to the situations tested .... 31
Table 31 Results of weight sensitivity analysis over four situations .... 32
Table 32 Overview of aggregated MCDA procedure .... 33
Table 33 Selected unit with the largest performance gap .... 33
Table 34 Selected sub unit with the largest performance gap .... 33
Table 35 Characteristic behaviour of the Weibull distribution .... 37
Table 36 Maintenance policies used by BPRR .... 38
Table 37 Variables taken into account for selected maintenance policies .... 39
Table 38 Assumptions for selected maintenance policies .... 45
Table 39 Interval frequencies .... 47
Table 40 Approximation for yz and xz .... 47
Table 41 Results of parameter estimation using probability plotting .... 48
Table 42 Results of parameter estimation using MLE .... 48
Table 43 Results of the R2 test statistic .... 48
Table 44 Results of statistical tests to determine goodness-of-fit .... 49
Table 45 Overview of input parameters for calculation .... 49
Table 46 Results for the optimal maintenance policy given the fitted Weibull distribution .... 50
Table 47 Criteria for measurement of equipment performance and maintenance effectiveness .... 52
Table 48 Complete overview of indicators used in literature .... 63
Table 49 Overview of selected KPIs for decision problem .... 65
Table 50 Input variables needed to calculate selected KPIs .... 65
Table 51 Example calculation for BPRR OSA .... 66
Table 52 Example of functional location used at BPRR .... 70
Table 53 Comparison of MCDA procedures in literature .... 71
Table 54 The fundamental scale by Saaty (1990) .... 72
Table 55 Random index for the comparison matrix order .... 73
Table 56 Guidelines for the selection of the right evaluation method (Guitouni & Martel, 1998) .... 74
Table 57 Choices made for the decision problem of the BPRR case .... 74
Table 58 Different types of generalized criteria (Brans et al., 1984) .... 75
Table 59 Description of specified maintenance policies .... 77
Table 60 Definition of variables used for M(t) approximation (Jiang, 2008) .... 83
Table 61 Confidence intervals for the Weibull parameter estimations .... 84
Table 62 Different forms of downtime used at BPRR .... 86
Table 63 Different data sources used to measure the selected criteria .... 87
APPENDIX A, OVERVIEW OF MAINTENANCE RELATED KPI'S
In order to arrive at a complete overview of theoretical indicators, a list was constructed based on four different sources. In Table 48, this overview is presented as a complete list of the mentioned indicators (KPI's), classified by type (leading or lagging) and category (equipment, cost or process). The last four columns of the table indicate the sources in which each indicator is mentioned.
Table 48 Complete overview of indicators used in literature
[For each KPI, the table marks in which of the four sources it is mentioned: Suzuki (1994), Wireman (2005), Campbell & Reyes-Picknell (2006) and Muchiri et al. (2011). The KPIs listed are: Availability rate; Performance rate; Quality rate; Standard production rate; Number of equipment failures; Number of process failures; OEE; Failure frequency; Failure severity; Mean Time Before Failure; Reduction in number of S/D; PM achievement rate; CM trend; Production loss cost; Overall maintenance cost; Unit maintenance cost; Maintenance cost reduction; Unexpected failure repair cost; Trend in maintenance fees; Reduction in spare inventory; Maintenance cost rate; Planned / Unplanned ratio; Mean Time To Repair; Scheduling compliance.]
Not all of the listed indicators are incorporated in this thesis research. For an indicator to be used in this research, at least half of the sources need to mention or describe it. Moreover, if all four sources mention an indicator, it is considered highly important and must, in the ideal situation, be used to determine equipment performance related to maintenance.
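The selection rule above can be sketched as a simple filter; the mention counts below are illustrative placeholders, not the actual Table 48 markings.

```python
# Selection rule sketch: keep an indicator when at least half of the four
# sources mention it; flag it highly important when all four mention it.
# The mention counts are illustrative, not taken from Table 48.
mentions = {
    "Availability rate": 4,
    "Failure frequency": 3,
    "Scheduling compliance": 1,
}
N_SOURCES = 4

selected = [kpi for kpi, m in mentions.items() if m >= N_SOURCES / 2]
highly_important = [kpi for kpi, m in mentions.items() if m == N_SOURCES]
```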
APPENDIX B, MAINTENANCE STRATEGY IN LITERATURE

B.1 FORMULATE A CORPORATE STRATEGY
The corporate strategy forms the foundation of the maintenance strategy because it should support the corporate mission in achieving both
core values and predefined goals (Tsang, 1998). A good example of input from a corporate strategy for the maintenance strategy is the maximization of asset utilization in order to achieve maximum profit from this asset.
Figure 33 Strategic maintenance performance management process (Tsang, 1998)
[The figure depicts the cycle: corporate strategy → maintenance strategy → maintenance objectives with measures & targets → action plans with structural & infrastructural changes → implement plans → performance measurement → periodic review.]
B.2 OPERATIONALIZE THE STRATEGY
The second step is translating the input from the corporate strategy into long-term objectives and the development of performance indicators to measure the progress and status of these objectives. These objectives can be defined from past results by extrapolating, or by benchmarking best-in-industry organizations (Tsang, 1998). Performance indicators need to be evaluated for feasibility before they can be used. Tsang (1998) provides the following questions to assess this feasibility:
 What type of data will be required?
 What will the involved cost be?
 How should this data be captured?
 How to analyze the captured data?
 What will be the expected quality of the captured data?
B.3 DEVELOP ACTION PLANS TO ACHIEVE THE STRATEGIC OBJECTIVES
Development of an information system to capture the data required for the measurement is also a crucial part of this step. Impediments for the successful implementation of performance measures will occur if one of the following two situations arises (Tsang, 1998):
 The reward and recognition system is out of tune with the performance measures in use;
 The strategy is not linked to resource allocation.
The success of the maintenance concept depends on the maintenance work management cycle as described by Campbell and Reyes-Picknell (2006). This cycle consists of work identification, work planning, work scheduling, work assignment, work execution and closing a job. Maintenance work is generally identified according to the type of maintenance demand trigger, which can come from proactive maintenance (preventive, predictive or failure finding) or reactive maintenance (failure based). The planning and scheduling entail what actions should be performed and when. Finally, these actions are executed to accomplish the required equipment condition and performance.
B.4 PERIODIC REVIEW OF PERFORMANCE AND STRATEGY
Correlation analysis can be applied to validate causal relationships between measures. When the data fail to validate these relationships or progress is not satisfactory, the cause should be identified, and the implementation or design of the action plans should be re-examined (Tsang, 1998).
APPENDIX C, SELECTED KPI OPERATIONAL DEFINITION
For each of the performance indicators selected in the first research question, an operational definition is provided in this appendix. By operational definition, the calculation used by BPRR to determine these values is meant. Recall the selected indicators (see Table 49).
Table 49 Overview of selected KPI's for decision problem

Indicator category      #  Description of indicator     Unit of measure
Equipment performance   1  EDC %                        [%]
                        2  Availability rate / OSA      [%]
                        3  Failure frequency            [#/t]
Cost performance        4  Production loss cost         [$]
                        5  Overall maintenance cost     [$]
Process performance     6  Planned / Unplanned ratio    [-]

Each indicator is furthermore classified by indicator type (lagging or leading).
For each of these indicators, the following calculations were used to derive the values for each of the specified equipment. Note that the numbering of the formulas is in line with the numbering of the criteria. For example, indicator 1, the EDC %, is calculated with [C.1.1] and [C.1.2].

EDC_Unit = Production capacity × EDC Factor × On Stream Factor   [C.1.1]

EDC_Unit % = EDC_Unit / Σ(all units) EDC_Unit   [C.1.2]

BPRR Unit OSA per year = 1 − Production loss duration [days/year] / Calendar time [days/year]   [C.2.1]

BPRR unit impact on refinery OSA = (1 − BPRR Unit OSA) × EDC %   [C.2.2]

Failure frequency = # Failures / t [years]   [C.3]

Production loss cost = Production loss duration × Standard production rate × Product value   [C.4]

Overall maintenance cost = PM cost + CM cost   [C.5]

Planned / Unplanned ratio = # of CM orders / (# of CM orders + # of PM orders)   [C.6]
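As a sketch, the lagging indicators [C.3] through [C.6] can be computed directly from the input variables; every numeric value below is a hypothetical example, not BPRR data.

```python
# Hypothetical inputs; units follow the formula definitions above.
n_failures = 12            # number of failures over the observation period
t_years = 4.0              # observation period t [years]
loss_days = 0.39           # production loss duration [days]
std_rate = 13_680          # standard production rate [m3/day]
product_value = 50.0       # product value [$/m3] (assumed)
pm_cost, cm_cost = 1.2e6, 0.8e6   # PM and CM cost [$]
n_cm, n_pm = 150, 350      # number of CM and PM work orders

failure_frequency = n_failures / t_years                      # [C.3]
production_loss_cost = loss_days * std_rate * product_value   # [C.4]
overall_maintenance_cost = pm_cost + cm_cost                  # [C.5]
planned_unplanned_ratio = n_cm / (n_cm + n_pm)                # [C.6]
```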
Also, based on the ideal situation, the following variables would have been needed as input for the calculation of all relevant indicators (the full overview is presented in Table 50).
Table 50 Input variables needed to calculate selected KPI's

#     Input variable                    Unit of measure
I     Number of failures                [#]
II    Availability rate                 [%]
III   Performance rate                  [%]
IV    Quality rate                      [%]
V     Production loss duration          [t, days]
VI    Standard production rate          [weight/t, days]
VII   Product value                     [$/weight]
VIII  Preventive maintenance cost       [$]
IX    Corrective maintenance cost       [$]
X     Number of PM work orders          [#]
XI    Number of CM work orders          [#]
XII   Calendar time                     [t, days]
XIII  Average actual production rate    [weight/t, days]
XIV   Actual production amount          [weight]
XV    Quality defect                    [weight]
XVI   Amount of reprocessing            [weight]
APPENDIX D, EXAMPLE OF UNIT OSA
In order to calculate the OSA for unit A, a couple of key characteristics need to be known. These key
characteristics are given in the table below.
Unit name : A
Nameplate capacity : 570 m3/hour
Nameplate throughput : 13.680 m3/day
EDC % unit A : 6,5%
Year / Month : 
Days in month : 31
Besides the key characteristics of unit A, the production volumes per day for this unit need to be known (see the table below). Based on the total actual production volume per day (2nd column), the actual average throughput per hour can be calculated (3rd column). Subsequently, the difference between the actual average hourly throughput and the nameplate capacity of the unit provides insight into the amount of production loss for a specific day (columns 4 and 5). Note that in some cases the production loss is negative, which indicates over-capacity production for that day (columns 6 and 7). Based on the opinions of experts and operators who were active during specific time periods in this month, the production losses are allocated to causes.
Table 51 Example calculation for BPRR OSA
[The table lists, for each of the 31 days of the month, the total production volume (m3), the actual average throughput (m3/hour), the production loss (m3/hour and m3/day), any over-capacity (m3/hour and m3/day), and the allocation of the production losses to the causes Domino S/D, Slowdown, Commercial and Not allocated. The monthly totals are: 407.790 m3 produced, a production loss of 689 m3/hour (16.498 m3/day) and an over-capacity of 207 m3/day. The loss allocation totals are 220 m3/hour to Domino S/D, 326 m3/hour to Slowdown, 128 m3/hour to Commercial and 15 m3/hour to Not allocated.]
Based on the production losses allocation, the production loss volume (cubic meters per month) can be recalculated to production loss time (hours and days) by using formulas 1.9 and 1.10.
Production loss Domino S/D   m3/month     = 5.291
Production loss Domino S/D   hours        = 5.291 / 570 = 9,28
Production loss Domino S/D   days (24 h)  = 9,28 / 24 = 0,39
Impact Domino S/D                         = 0,39 / 31 × 100% = 1,25%
With the same calculations used for the OSA impact for unit A, given the Domino S/D production loss, the impact of other production losses can be calculated.
Production loss Slowdown   m3/month     = 7.827
Production loss Slowdown   hours        = 13,73
Production loss Slowdown   days (24 h)  = 0,57
Impact Slowdown                         = 1,85%

Production loss Com.   m3/month     = 3.072
Production loss Com.   hours        = 5,39
Production loss Com.   days (24 h)  = 0,22
Impact Com.                         = 0,72%

Not allocated production loss   m3/month     = 354
Not allocated production loss   hours        = 0,62
Not allocated production loss   days (24 h)  = 0,03
Impact not allocated                         = 0,08%
The last step required is to calculate the actual unit A OSA, described by formula 1.12. This is done by summing all production losses not related to TAR, COMMERCIAL or NOT ALLOCATED causes, and subtracting them from one.
Total days with production loss relevant for OSA   days (24 h)  = 0,96
Total impact on unit A OSA             = 1,25% + 1,85% = 3,09%
Total impact on unit A Utilization     = 0,72% + 0,08% = 0,81%
On Stream Availability unit A          = 1 − 3,09% = 96,91%
Impact of OSA unit A on Refinery OSA   = (1 − 96,91%) × 6,5% = 0,20%
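The conversion chain above (monthly loss volume to hours, days and OSA impact) can be sketched as a small helper; the figures reproduce the Domino S/D example.

```python
def osa_impact(loss_m3_per_month, nameplate_m3_per_hour, days_in_month):
    """Convert a monthly production-loss volume into downtime and its
    impact on OSA, following the calculation steps shown above."""
    hours = loss_m3_per_month / nameplate_m3_per_hour  # lost hours
    days = hours / 24                                  # lost days (24 h)
    impact = days / days_in_month                      # fraction of the month
    return hours, days, impact

# Domino S/D example: 5.291 m3 at a 570 m3/hour nameplate capacity, 31 days
hours, days, impact = osa_impact(5291, 570, 31)
```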
APPENDIX E, SOLOMON BENCHMARK
Different types of downtime according to the Solomon benchmark (Solomon Associates):
 Turnaround downtime = downtime caused by all maintenance activities classified as T/A;
 Non-Turnaround Mechanical downtime = downtime caused by all maintenance activities not classified as T/A;
 Regulatory/Process downtime = downtime caused by scheduled and unscheduled outages for inspections or process-related problems. Includes outages for regulated inspections and testing, furnace decokings and catalyst regenerations not reported as T/A;
 Other downtime = downtime from outages caused by circumstances outside the affected unit's boundaries. This includes the domino effect and market conditions;
 Slowdown downtime = downtime caused by reduced-speed operation.
Figure 34 Overview of downtime grouping based on the Solomon benchmark
[The figure splits calendar time into the downtime categories above: mechanical availability excludes T/A and non-T/A maintenance downtime, operational availability additionally excludes regulatory/process downtime, the on-stream factor additionally excludes other downtime, and the on-stream factor including slowdowns (OSF SD) additionally excludes slowdown.]
Turnaround Annualized Downdays (TADD) = Total hours down for T/A / (24 × T/A Interval)   [E.1]
In this equation T/A Interval is an interval, expressed in years, between the two most recent turnarounds computed from actual turnaround dates reported previously.
Non-Turnaround Maintenance Downdays (NTAMDD) = (Study year NTAMDD hours + Prior year NTAMDD hours) / (24 × 2)   [E.2]
In this equation Study year NTAMDD hours reflects the total number of hours down for maintenance in the year considered for study. Likewise the Prior year NTAMDD hours reflect the total number of hours down for maintenance in the year before the studied year.
Regulatory / Process Downdays = (Study year hours + Prior year hours) / (24 × 2)   [E.3]

Other Downdays = (Study year hours + Prior year hours) / (24 × 2)   [E.4]

Slowdown Days = (Study year hours + Prior year hours) / (24 × 2)   [E.5]
Mechanical availability (MA) is a measure for the refinery's reliability. The MA is calculated for a specific unit i or a group of units as follows:

Mechanical availability (MA) = [1 − Σ_i EDC_i × (TADD_i + NTavgDays_i) / (Σ_i EDC_i × 365,25)] × 100%   [E.6]

Where:
 EDC_i is the EDC determined for a specific unit i;
 TADD_i and NTavgDays_i are calculated as presented in formulae [E.1] and [E.2];
 NTavgDays_i is the average NTAMDD over the study year and the prior year;
 When calculating the refinery MA, the EDC becomes equal to the refinery EDC.

The operational availability and on-stream factors have the same structure, each subtracting additional downtime categories. Here RPavgDays_i, OTHavgDays_i and SDavgDays_i denote the average Regulatory/Process Downdays [E.3], Other Downdays [E.4] and Slowdown Days [E.5] for unit i:

Operational availability (OA) = [1 − Σ_i EDC_i × (TADD_i + NTavgDays_i + RPavgDays_i) / (Σ_i EDC_i × 365,25)] × 100%   [E.7]

On-Stream Factor (OSF) = [1 − Σ_i EDC_i × (TADD_i + NTavgDays_i + RPavgDays_i + OTHavgDays_i) / (Σ_i EDC_i × 365,25)] × 100%   [E.8]

On-Stream Factor including Slowdowns (OSF SD) = [1 − Σ_i EDC_i × (TADD_i + NTavgDays_i + RPavgDays_i + OTHavgDays_i + SDavgDays_i) / (Σ_i EDC_i × 365,25)] × 100%   [E.9]
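A minimal sketch of the EDC-weighted structure of [E.6]; the per-unit values below are invented for illustration, not Solomon benchmark data.

```python
def mechanical_availability(edc, tadd, nt_avg_days):
    """EDC-weighted mechanical availability [E.6] in percent.
    edc, tadd and nt_avg_days are per-unit lists (assumed inputs)."""
    lost = sum(e * (t + n) for e, t, n in zip(edc, tadd, nt_avg_days))
    return (1 - lost / (sum(edc) * 365.25)) * 100

# Two hypothetical units: unit EDCs, annualized T/A downdays, NTAMDD days
ma = mechanical_availability([100, 50], [3.6525, 0.0], [0.0, 7.305])
```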
APPENDIX F, FUNCTIONAL LOCATION SEPARATION
As described in T.1 Maintenance related data, the equipment identifier is the link between the different data sources used for the analysis of equipment performance. Table 52 provides an example of the result produced by the Excel-based VBA tool. In the application to the functional location database of BPRR, approximately 20.000 functional locations were converted for the purpose of data preparation.
Table 52 Example of functional location used at BPRR

Level  Name                   Example  Description
1      Plant                  1001     Europoort
2      Unit                   2100     CDU3
3      System                 40       Crude feed
4      Subsystem              01       To Desalter D2101
5      Equipment unit         1L       Pipeline
6      Lower equipment unit   P        Pump indication
7      Maintainable item      21001    Number of pump
8      Part                   A        Possible redundancy
Total                         1001210040011LP21001A
It should be noted that the difficult part of separating the functional location is the separation of combined functional locations. For example, P21001A represents pump 21001 part A, which in turn indicates that pump 21001 consists of more than one actual pump, as shown by the indicator A (redundancy in the form of P21001B and/or P21001C might exist).
The functional location is separated in a couple of steps, which are briefly described here:

1. Separate the different levels of the functional location defined by the "" markers;
2. Separate the levels consisting of both numbers and letters to form a clear functional location;
3. Rank the separated functional locations back into a clear structure to form a conversion reference.
The result of this data preparation is that a conversion label can now be added to existing data, making it possible to cluster and combine data from different types of sub units and to collect more accurate information than if the complete functional location had been used. The keen observer might note that without this separation of the functional location, the different data sources would not be compatible.
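The separation steps above can be sketched with a fixed-width pattern; the field widths are an assumption derived solely from the single example in Table 52 (1001 / 2100 / 40 / 01 / 1L / P / 21001 / A), not a documented BPRR format.

```python
import re

# Assumed field widths, based only on the Table 52 example.
FLOC_PATTERN = re.compile(
    r"(\d{4})(\d{4})(\d{2})(\d{2})(\w{2})([A-Z])(\d{5})([A-Z]?)"
)
LEVELS = ["plant", "unit", "system", "subsystem", "equipment_unit",
          "lower_equipment_unit", "maintainable_item", "part"]

def split_functional_location(floc):
    """Split a concatenated functional location into its eight levels."""
    m = FLOC_PATTERN.fullmatch(floc)
    if m is None:
        raise ValueError("unexpected functional location: " + floc)
    return dict(zip(LEVELS, m.groups()))

levels = split_functional_location("1001210040011LP21001A")
```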
Figure 35 Equipment breakdown structure (Suzuki, 1994)
[The figure breaks the BP Rotterdam Refinery (1001) down into assets, units (e.g. 2100), sub units (e.g. P2104A) and components, and attaches resources, cost elements, budget, schedule, scope of work and work schedule to the component level.]
APPENDIX G, GENERAL MCDA PROCEDURE
In literature, many authors (Huang & Tzeng, 2011) describe an approach for multi-criteria decision analysis or multi-criteria decision methods. In order to develop one aggregate procedure, three different approaches from literature are compared (see Table 53).
Table 53 Comparison of MCDA procedures in literature

Phases: 1. Identify the problem; 2. Select evaluation model; 3. Determine preferences; 4. Determine the best alternative.

Dubois & Prade (1980):
  Phase 1: Step 1. Define the nature of the problem; Step 2. Construct a hierarchy for evaluation
  Phase 2: Step 3. Select the appropriate evaluation model
  Phase 3: Step 4. Obtain the relative weights and performance scope
  Phase 4: Step 5. Determine the best alternative

Roy & Vincke (1981):
  Phase 1: Step 1. Structure the decision problem
  Phase 2: (not explicit)
  Phase 3: Step 2. Articulate and model the preferences; Step 3. Aggregate the alternative evaluation
  Phase 4: Step 4. Make recommendations

Huang & Tzeng (2011):
  Phase 1: Step 1. Identify the decision problem
  Phase 2: (not explicit)
  Phase 3: Step 2. Construct the preferences; Step 3. Evaluate the alternatives
  Phase 4: Step 4. Determine the best alternative
Looking at the first phase of identifying the problem, three different steps can be distinguished:
 Define / identify the problem (Dubois & Prade, 1980; Huang & Tzeng, 2011);
 Structure the decision problem (Roy & Vincke, 1981);
 Construct a hierarchy system for evaluation (Dubois & Prade, 1980).

Defining and identifying the problem can be seen as one and the same, together forming step 1 of the aggregate MCDA procedure. After the decision problem is identified, Roy & Vincke (1981) describe the best subsequent step as structuring the decision problem. This step is necessary before the second step described by Dubois & Prade (1980) can be performed. Constructing a hierarchy can be seen as part of structuring the decision problem, applied only in the Analytic Hierarchy Process described by Saaty (1984). Therefore, two steps are incorporated in the aggregate MCDA procedure.

Dubois & Prade (1980) provide a second phase in which the evaluation model is selected; this determines the way the preferences, later used in phase three, lead to the best decision in phase four. Although Roy & Vincke (1981) and Huang & Tzeng (2011) do not incorporate this phase or step explicitly in their MCDA procedures, they both perform this step. As a result, this phase and step are incorporated in the aggregate MCDA procedure.

Within the third phase, both Roy & Vincke (1981) and Huang & Tzeng (2011) define the same two steps of constructing / modelling the preferences and subsequently evaluating or aggregating these preferences. Although Dubois & Prade (1980) do not explicitly define this as two steps, their step four contains the same procedure as the other two MCDA methods. Concluding, two steps are incorporated in the third phase of the aggregate MCDA procedure.

Lastly, all literature sources define a final phase containing only one step. This last step is aimed at determining the best alternative or the best decision and is therefore directly incorporated in the aggregate MCDA procedure.
APPENDIX H, EXAMPLE OF COMPARISON MATRIX QUESTIONNAIRE
The questionnaire below can be used to derive, in this case, the comparison matrix for four different criteria. Based on the fundamental scale provided by Saaty (1990), seventeen possible values can be derived, eight of which follow from the reciprocal characteristic of the matrix.

Criterion pair             1  2  3  4  5  6  7  8  9
Criterion 1 / Criterion 2
Criterion 1 / Criterion 3
Criterion 1 / Criterion 4
Criterion 2 / Criterion 3
Criterion 2 / Criterion 4
Criterion 3 / Criterion 4
In Table 54, the fundamental scale by Saaty (1990) is shown to support the choice of a specific value. Notice that initially only the odd numbers are used to determine the preference of one criterion over the other. Only when the difference between two adjacent odd preference values proves too coarse might the even numbers be used.
Table 54 The fundamental scale by Saaty (1990)

Intensity of importance: 1
Definition: Equal importance
Explanation: Two criteria contribute equally to the objective.

Intensity of importance: 3
Definition: Moderate importance of one over another
Explanation: Experience and judgment favour one criterion over another.

Intensity of importance: 5
Definition: Essential or strong importance
Explanation: Experience and judgment strongly favour one criterion over another.

Intensity of importance: 7
Definition: Very strong importance
Explanation: A criterion is strongly favoured over another and its dominance demonstrated in practice.

Intensity of importance: 9
Definition: Extreme importance
Explanation: The evidence favouring one criterion over another is of the highest possible order of affirmation.

Intensity of importance: 2, 4, 6, 8
Definition: Intermediate values between two adjacent judgments
Explanation: When compromise is needed.
APPENDIX I, DETERMINING CRITERIA WEIGHTS
Saaty (1990) uses the eigenvalue of a matrix to determine criteria weights from pairwise comparisons. Multiplying the comparison matrix A (see [I.1]) with the vector of weights w results in nw:

[ w1/w1  w1/w2  ...  w1/wn ]   [ w1 ]        [ w1 ]
[ w2/w1  w2/w2  ...  w2/wn ] × [ w2 ]  = n × [ w2 ]      [I.1]
[  ...    ...   ...   ...  ]   [ .. ]        [ .. ]
[ wn/w1  wn/w2  ...  wn/wn ]   [ wn ]        [ wn ]

Here w = [w1 w2 ... wn]^T, and if n is an eigenvalue of A (the comparison matrix), then w is the eigenvector associated with it.
It is hardly possible for a subject to give perfectly consistent pairwise comparisons for an entire matrix A. Saaty (1990) notes that a small perturbation around a simple eigenvalue, as when A is consistent, leads to an eigenvalue problem of the form Aw = λ_max w, where λ_max is the largest, or principal, eigenvalue of A; here A may no longer be consistent but is still reciprocal. If w is obtained by solving Aw = λ_max w and a matrix is formed with the entries w_i / w_j, then an approximation of A by a consistent matrix is obtained.
This poses a new problem, also described by Saaty (1990): to what extent do the results of the pairwise comparison reflect the actual opinion of the stakeholder? Note that this inconsistency can be captured by a single number: λ_max − n. Saaty & Ozdemir (2003) use the consistency index (CI) to assess the extent to which a subject's actual opinion is captured:

CI = (λ_max − n) / (n − 1)   [I.2]
Next to the consistency index, Saaty & Ozdemir (2003) also defined a consistency ratio (CR). This ratio is calculated by dividing the CI by the RI. Here RI is the random index, derived by simulating the procedure of obtaining the CI with random entries above the main diagonal, chosen from all possible values provided by the fundamental scale (Saaty, 1990). The results of this simulation are presented by Saaty & Ozdemir (2003) in the form of a table based on the order of the matrix; these results are depicted in Table 55. The CR should be less than 0,1 (10%), in which case the calculated weight vector w is accepted. If this is not the case, consistency has to be improved.
Table 55 Random index for the comparison matrix order

Order  1  2  3     4     5     6     7     8     9     10    11    12    13    14    15
RI     0  0  0,52  0,89  1,11  1,25  1,35  1,40  1,45  1,49  1,52  1,54  1,56  1,58  1,59
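The eigenvector method and the consistency check of [I.2] can be sketched with NumPy; the 4×4 comparison matrix below is a hypothetical example, not the BPRR matrix of Table 18.

```python
import numpy as np

# Hypothetical reciprocal comparison matrix on Saaty's fundamental scale
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]        # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                  # normalized criteria weights

n = A.shape[0]
CI = (lam_max - n) / (n - 1)     # consistency index [I.2]
RI = 0.89                        # random index for order 4 (Table 55)
CR = CI / RI                     # accept the weights when CR < 0.1
```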
APPENDIX J, SELECTION OF EVALUATION METHOD
Three aspects of an MCDA method are considered when choosing the right evaluation method: input capabilities, preference elucidation and modelling, and the aggregation procedure. Guitouni & Martel (1998) defined seven guidelines for choosing the right evaluation method based on these aspects. The complete guidelines are given in Table 56.
Table 56 Guidelines for the selection of the right evaluation method (Guitouni & Martel, 1998)

G1 (stakeholder analysis): Determine the stakeholders of the decision process. In the case of many decision makers, one should think about group decision making methods.
G2 (preference elucidation and modelling): Consider the decision maker's cognition when choosing a particular preference elucidation mode. This guideline covers: (i) the preference elucidation mode; (ii) the moment of preference elucidation; (iii) the global DM preference structure; (iv) the resulting type of ordering of alternatives.
G3 (decision problematic addressed): Determine the decision problematic pursued by the decision maker.
G4 (input information and features): Choose the MCAP that can properly handle the available input information and for which the decision maker can easily provide the required information; the quality and quantity of the information are major factors in the choice of the method.
G5 (discrimination power of criteria): Determine the compensation degree acceptable to the decision maker. Look at compensation between criteria and inter-criteria information.
G6 (method hypothesis): The fundamental hypotheses of the method are to be met (verified); otherwise one should choose another method.
G7 (decision aid program): Consider the possible decision support system for the chosen method.
For the BPRR case and the decision problem at hand, selecting the equipment with the largest performance gap, the seven guidelines are used to determine which evaluation method serves best. The choices made for each of the aspects described by the seven guidelines are presented in Table 57. Based on these characteristics, an evaluation method was selected from the overview presented by Guitouni & Martel (1998). This comparison showed that the PROMETHEE method is the best evaluation method for the decision problem considered.
Table 57 Choices made for the decision problem of the BPRR case

Input capabilities: information accepted, the criteria, the inter-criteria information
Input information: cardinal (amount) and certain (known before analysis)
Type(s) of criteria: true criteria
Preference elucidation mode: rating
Moment of preference elucidation: a priori (direct)
Multicriteria aggregation procedure: outranking synthesizing approach
Discrimination power of criteria: some compensation between criteria performance is allowed
APPENDIX K, GENERALIZED CRITERIA
Not all criteria are evaluated in the same way. For example, choosing between two alternatives priced at 2,- and 3,- does not present the same decision problem as choosing between two alternatives priced at 2000,- and 3000,-. In other words, generalized criteria are used to define when a difference in criterion values leads to a difference in preference. These generalized criteria are all listed by Brans et al. (1984); a complete overview is presented in Table 58. In this table, P(d) is defined as the preference function of the difference d between two criterion values, and the possible parameters are defined as follows:
 q is an indifference threshold: the largest value of d below which the decision maker considers there is indifference;
 p is a strict preference threshold: the lowest value of d above which the decision maker considers there is strict preference;
 s is a well-known parameter directly connected with the standard deviation of a normal distribution.

Table 58 Different types of generalized criteria (Brans et al., 1984)
Type I: Usual criterion
  P(d) = 0 if d <= 0; P(d) = 1 if d > 0
  Parameters: none

Type II: Quasi-criterion
  P(d) = 0 if d <= q; P(d) = 1 if d > q
  Parameters: q

Type III: V-shape criterion
  P(d) = d/p if d <= p; P(d) = 1 if d > p
  Parameters: p

Type IV: Level-criterion
  P(d) = 0 if d <= q; P(d) = 1/2 if q < d <= p; P(d) = 1 if d > p
  Parameters: q, p

Type V: Linear criterion
  P(d) = 0 if d <= q; P(d) = (d − q)/(p − q) if q < d <= p; P(d) = 1 if d > p
  Parameters: q, p

Type VI: Gaussian criterion
  P(d) = 1 − exp(−d²/(2s²))
  Parameters: s
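As an illustration, the Type V linear criterion from Table 58 can be written as a small preference function (q and p as defined above):

```python
def linear_preference(d, q, p):
    """Type V linear generalized criterion (Brans et al., 1984):
    P(d) = 0 for d <= q, (d - q)/(p - q) for q < d <= p, 1 for d > p."""
    if d <= q:
        return 0.0
    if d <= p:
        return (d - q) / (p - q)
    return 1.0
```

For instance, with q = 0 and p = 10, a difference of 5 yields a preference of 0.5.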
APPENDIX L, USER INTERFACE OF DECISION SUPPORT TOOL
The developed decision support tool, which provides the ability to determine the total outranking performance of the different units, is presented in Figure 36. The performance table needs to be filled from the different input sources. The black fields are used to censor the actual units for the BPRR case. On the right-hand side, three buttons enable the decision maker to run the calculations, store the results and reset the tool to its empty state. After the calculations are done, the table below the green performance table provides the ranking of the different units, followed by a graphical representation of this ranking.

Figure 36 User interface of the developed decision support tool
APPENDIX M, COMPLETE MAINTENANCE POLICY OVERVIEW
Different sources in literature provide useful overviews of described maintenance policies (Wang, 2002; Jardine & Tsang, 2013). Based on a combination of these sources, Figure 37 was constructed. In this figure, the policies denoted in green are the preventive maintenance policies currently used at BPRR.
Figure 37 Complete overview of possible policies
[The figure splits maintenance into corrective maintenance (failure based maintenance: run to failure) and preventive maintenance. Preventive maintenance comprises time based maintenance (age-dependent, block and usage policies, with block policies triggered by calendar time, key date or factory calendar time), predictive maintenance and condition based maintenance (each with continuous monitoring, periodic inspection and sequential inspection policies), and opportunistic maintenance.]
In the table below a short description of each policy will be given.
Table 59 Description of specified maintenance policies

Run to failure: No preventive maintenance actions are carried out; actions are solely performed after failure.
Age-dependent policy: Preventive maintenance actions are carried out at a specified time interval or at failure.
Block policy: Preventive maintenance actions are carried out at a specified time interval, regardless of whether the equipment has already failed.
Usage policy: Preventive maintenance is carried out after a specified period of production.
Continuous monitoring (CBM & Predictive): The state of the equipment is continuously monitored and, on reaching a critical status, preventive actions are carried out.
Periodic inspection (CBM & Predictive): The state of the equipment is periodically checked and, when a critical status is observed, preventive actions are carried out.
Sequential inspection (CBM & Predictive): The component is periodically inspected. At the time of inspection it is determined whether the component needs replacement; if not, it is decided when the next inspection should be executed.
Opportunistic maintenance: Large overhauls (for BPRR defined as turnaround periods).
APPENDIX N, DIFFERENT TYPES OF CENSORING
First, a distinction is made between single and multiple censoring. As Lewis (1996) describes, single censoring occurs when tests are incomplete due to some type of time restriction. There are two types of single censoring: Type I (the test is ended after a fixed time period) and Type II (the test is ended after a fixed number of failures). In contrast, multiple censoring occurs when collected samples are not in line with the intent of the sample collection. Lewis (1996) provides a good example of this type: assume the transmission failures of a truck fleet are sampled during a test period; when one of the trucks fails due to a severe accident during that period, this sample point is not representative for the test at hand and thus needs to be censored. In general there are three possible censoring schemes: right censored data (also called suspended data), interval censored data, and left censored data. A short description is presented here. In Figure 38 the different forms of censoring are depicted.
[Figure 38: Different types of censoring, showing four sample timelines (1 to 4) plotted against time, with X marking a failure]

Right censored (suspended)
These are data samples for which only a minimum value is known. In reliability testing, for example, not all of the tested units will necessarily fail within the testing period. All that is then known is that the failure time exceeds the testing time. This type of data censoring is commonly called right-censored or suspended data and is represented by sample 3 in Figure 38.

Interval censored
These are data samples known only to lie between a certain minimum and maximum. Interval censoring commonly arises when measurements are assigned to categories or intervals. In reliability testing, for example, equipment might be inspected only every T hours; as a result, the only record may be that a unit failed between nT and (n+1)T hours. This is represented by sample 2 in Figure 38.

Left censored
These are data for which only a maximum value is known. In scientific experiments, for example, some quantity may not be measurable because it is below the threshold of detection (e.g. a chemical concentration). This situation is represented by sample 1 in Figure 38.
APPENDIX O, CALCULATIONS FOR HISTOGRAMS
In order to construct a histogram, the available data must be arranged within defined intervals. Let N be the number of data points and Δ the range of the data (maximum value minus minimum value). According to the rule of thumb described by Lewis (1996), a reasonable interval length Δx can then be calculated as:

Δx = Δ [1 + 3.3 log10(N)]^(-1)    [O.1]
After the construction of the frequency histogram, the subsequent step is to transform the frequency histogram into a probability failure distribution. As defined in literature (Lewis, 1996; Ben-Daya et al., 2000; Jardine & Tsang, 2013), condition [O.2] needs to be satisfied.
∫ f(t) dt = 1    [O.2]
Lewis (1996) provides a simple method for this transformation. Let n_1, n_2, ..., n_I be the frequencies with which the data appear in the various intervals and N = Σ_i n_i. By approximating f(t) by f_i in the i-th interval, f_i must be proportional to n_i (see [O.3]):

f_i = c n_i    [O.3]

For the histogram to satisfy the condition described by [O.2], this must yield [O.4]:

Σ_i f_i Δx = 1    [O.4]

Combining [O.3] and [O.4] results in [O.5], which makes it possible to transform the frequency histogram into a probability failure distribution f(t):

f_i = (1 / (N Δx)) n_i    [O.5]
APPENDIX P, CALCULATIONS FOR PROBABILITY PLOTTING
This method provides both graphical and quantitative estimates of how well the estimated cumulative distribution function (CDF) based on the data points fits existing distributions. In essence, the equation for the CDF is transformed into the form of [P.1] (Lewis, 1996):

y = a x + b    [P.1]
This transformation is based on the least squares method, in which the mean of the squared differences between the data points and the estimated function is minimized [P.2]. This is done by calculating:

S = (1/N) Σ_{i=1}^{N} (y_i - ŷ_i)²    [P.2]
In order to evaluate the dependence of S on a and b, [P.1] is substituted into [P.2], resulting in [P.3]:

S = (1/N) Σ_{i=1}^{N} (y_i - a x_i - b)²    [P.3]
To select the values of a and b that minimize S, the partial derivatives of S with respect to the slope (a) and intercept (b) have to vanish. Performing this on [P.3] results in [P.4]:

∂S/∂a = 0  and  ∂S/∂b = 0    [P.4]
Based on the description of estimate calculations provided by Lewis (1996), defined by [P.5], the values for the slope and intercept can be estimated:

x̄ = (1/N) Σ_{i=1}^{N} x_i
ȳ = (1/N) Σ_{i=1}^{N} y_i
x̄y = (1/N) Σ_{i=1}^{N} x_i y_i
x̄² = (1/N) Σ_{i=1}^{N} x_i²
ȳ² = (1/N) Σ_{i=1}^{N} y_i²    [P.5]

Here x̄y, x̄² and ȳ² denote the means of the products and squares, not products of the means.
By substituting all estimate calculations of [P.5] into [P.4], the following calculations may be used:

a = (x̄y - x̄ ȳ) / (x̄² - (x̄)²)  and  b = ȳ - a x̄    [P.6]
The last step is to measure how well the line is able to represent the data. This is done by calculating the coefficient of determination [P.7]. This measure equals one if all data points fall exactly on the estimated line; accordingly, it should be as close to one as possible in order to find a good fit between the data and the estimated failure distributions.

r² = (x̄y - x̄ ȳ)² / [(x̄² - (x̄)²)(ȳ² - (ȳ)²)]    [P.7]
Given this complete procedure of probability plotting, a failure distribution can be fitted to the data points even if the source of data is limited. A final remark is that r² can be used to determine whether the data points include any extreme values, which would decrease the measure. In the next sections two useful distribution types are discussed.
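The mean-value estimates [P.5] through [P.7] translate directly into code. A minimal sketch, with illustrative sample points:

```python
def least_squares(xs, ys):
    """Slope a, intercept b and r^2 from the mean-value formulas [P.5]-[P.7]."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    xybar = sum(x * y for x, y in zip(xs, ys)) / n
    x2bar = sum(x * x for x in xs) / n
    y2bar = sum(y * y for y in ys) / n
    a = (xybar - xbar * ybar) / (x2bar - xbar ** 2)            # [P.6]
    b = ybar - a * xbar                                        # [P.6]
    r2 = (xybar - xbar * ybar) ** 2 / (
        (x2bar - xbar ** 2) * (y2bar - ybar ** 2))             # [P.7]
    return a, b, r2

# Points taken exactly from y = 2x + 1, so the fit is exact and r^2 = 1.
a, b, r2 = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
```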
P.1 WEIBULL DISTRIBUTION
In general, for the estimation of a Weibull distribution the following transformations, [P.8], are used in order to determine estimates for the slope and intercept:

y = ln ln [1 / (1 - F(t))]  and  x = ln t    [P.8]
Based on the steps described in the previous paragraph, the estimates for the Weibull shape parameter β and scale parameter η can subsequently be described as follows:

β = a  and  η = exp(-b/a)    [P.9]
P.2 EXPONENTIAL DISTRIBUTION
Again, in general, for the estimation of the exponential distribution the following transformations are used:

y = ln [1 / (1 - F(t))]  and  x = t    [P.10]

In the same way as with the Weibull distribution, an estimate for the distribution parameter follows from the fitted slope. Since the distribution has a single parameter, the intercept b in [P.1] is omitted, resulting in [P.11]:

λ = a    [P.11]
APPENDIX Q, WEIBULL MLE CONSIDERING GROUPED DATA
For the estimation of the Weibull parameters, considering complete ungrouped data, the following log-likelihood function and corresponding likelihood equations were adopted from Rinne (2008):

ln L(β, η) = n ln(β) - n β ln(η) + (β - 1) Σ_{i=1}^{n} ln(t_i) - Σ_{i=1}^{n} (t_i/η)^β    [Q.1]

∂ ln L(β, η)/∂η = (β/η) [Σ_{i=1}^{n} (t_i/η)^β - n] = 0    [Q.2]

∂ ln L(β, η)/∂β = n/β - n ln(η) + Σ_{i=1}^{n} ln(t_i) - Σ_{i=1}^{n} (t_i/η)^β ln(t_i/η) = 0    [Q.3]
As described in 4.3.1.3 Parameter estimation via maximum likelihood estimation, the likelihood function and corresponding likelihood equations differ when dealing with grouped data. Introducing interval boundaries 0 = t_0 < t_1 < ... < t_k, there are k time intervals ([t_0, t_1), [t_1, t_2), ..., [t_{k-1}, t_k)) with n_j failed items in the j-th interval [t_{j-1}, t_j); j = 1, 2, ..., k; n = Σ_{j=1}^{k} n_j. Now assume that in the case of a two-parameter Weibull distribution both parameters are unknown and estimated by the maximum likelihood approach; the corresponding likelihood function is described as:

L(β, η) = Π_{j=1}^{k} [exp{-(t_{j-1}/η)^β} - exp{-(t_j/η)^β}]^{n_j}    [Q.4]
In this formula it holds that F(t_j; β, η) = 1 - exp{-(t_j/η)^β}, so each factor equals the interval probability F(t_j) - F(t_{j-1}). The log-likelihood function is then given by:

ln L(β, η) = Σ_{j=1}^{k} n_j ln[exp{-(t_{j-1}/η)^β} - exp{-(t_j/η)^β}]    [Q.5]
The last step is to transform the log-likelihood function into likelihood equations to be solved in order to find the parameter estimations. Based on the likelihood function defined in [Q.5] and following Rinne (2008), these equations can be written, with x_j = (t_j/η)^β and x_0 = 0, as:

∂ ln L(β, η)/∂η = (β/η) Σ_{j=1}^{k} n_j [x_{j-1} exp{-x_{j-1}} - x_j exp{-x_j}] / [exp{-x_{j-1}} - exp{-x_j}] = 0    [Q.6]

∂ ln L(β, η)/∂β = Σ_{j=1}^{k} n_j [x_j ln(t_j/η) exp{-x_j} - x_{j-1} ln(t_{j-1}/η) exp{-x_{j-1}}] / [exp{-x_{j-1}} - exp{-x_j}] = 0    [Q.7]

For j = 1 the terms involving t_0 = 0 vanish, since x_0 = 0.
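Instead of solving the likelihood equations [Q.6]/[Q.7] analytically, the grouped-data log-likelihood [Q.5] can be maximized numerically. The sketch below uses a crude shrinking-step hill climb as an illustrative stand-in for a proper solver; the interval bounds and counts are made up (generated from a Weibull with shape 2 and scale 10):

```python
import math

def grouped_loglik(beta, eta, bounds, counts):
    """Grouped-data Weibull log-likelihood [Q.5]; bounds = [t_1, ..., t_k], t_0 = 0."""
    ll = 0.0
    surv_prev = 1.0                      # exp(-(t_0/eta)^beta) with t_0 = 0
    for t, n_j in zip(bounds, counts):
        surv = math.exp(-((t / eta) ** beta))
        diff = surv_prev - surv
        if diff <= 0.0:                  # numerical underflow: treat as impossible
            return float("-inf")
        ll += n_j * math.log(diff)
        surv_prev = surv
    return ll

def fit_grouped_weibull(bounds, counts, beta0=1.0, eta0=None):
    """Crude shrinking-step hill climb maximizing [Q.5]; a stand-in for a
    real solver of the likelihood equations [Q.6]/[Q.7]."""
    beta = beta0
    eta = eta0 if eta0 is not None else bounds[len(bounds) // 2]
    best = grouped_loglik(beta, eta, bounds, counts)
    step_b, step_e = 0.5, eta / 2.0
    for _ in range(300):
        for db in (-step_b, 0.0, step_b):
            for de in (-step_e, 0.0, step_e):
                nb, ne = beta + db, eta + de
                if nb <= 0.05 or ne <= 1e-6:
                    continue
                cand = grouped_loglik(nb, ne, bounds, counts)
                if cand > best:
                    beta, eta, best = nb, ne, cand
        step_b *= 0.9
        step_e *= 0.9
    return beta, eta

# Interval counts for 1000 items drawn (in expectation) from Weibull(2, 10):
bounds = [5, 10, 15, 20, 40]
counts = [221, 411, 262, 87, 18]
beta_hat, eta_hat = fit_grouped_weibull(bounds, counts)
```

Because the counts match the true interval probabilities closely, the maximizer lands near the generating parameters.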
APPENDIX R, RF APPROXIMATION CALCULATIONS
Recall that the renewal function (RF) is in general defined as (Lewis, 1996):

M(t) = E[N(t)] = F(t) + ∫_0^t M(t - t') f(t') dt',  with M(0) = 0    [R.1]
For the Weibull distribution it is not possible to determine M(t) analytically (Spearman, 1989; Rinne, 2008). As mentioned in 4.4.2 Renewal process modelling, Jiang (2008) provides an approximation defined as follows:

M(t) ≈ F(t) + Σ_{k=2}^{K} [p Γ(t; kα, β_Γ) + q Φ(z_k)]    [R.2]
In this formula Γ(t; ·, ·) represents the cumulative Gamma distribution and Φ(·) the cumulative standard normal distribution. Furthermore, Jiang (2008) defines the truncation point K as:

K = inf{k : F_k(t) < ε}    [R.3]

where ε is normally chosen in literature as 10^-6 or 10^-7; in this research (similar to the conclusion of Jiang (2008)) ε = 10^-6 is chosen, with the remark that this provides enough accuracy for application.
K in this case describes the convergence speed of the series model. In order to find an approximation of M(t), Jiang (2008) uses the following relations between the variables:

p + q = 1
μ = η Γ(1 + 1/β)
σ² = η² Γ(1 + 2/β) - η² Γ²(1 + 1/β)
α = μ²/σ²,  β_Γ = σ²/μ
z_k = (t - kμ) / (σ √k)
... = 1 + 0.41149(1 - ...)    [R.4]
Corresponding variable definitions are given in Table 60.
Table 60 Definition of variables used for M(t) approximation (Jiang, 2008)

Variable    Definition
β           Weibull shape parameter from MLE estimation
η           Weibull scale parameter from MLE estimation
Γ(·)        Complete Gamma function
Γ(t; ·, ·)  Cumulative Gamma distribution
α           Gamma shape parameter
β_Γ         Gamma scale parameter
Φ(·)        Standard normal cumulative distribution
The same author (Jiang 2009; Jiang 2010) later provides a new approximation, replacing the approximation based on Gamma-normal truncations described above. This new approximation is worth mentioning because it depends solely on the specified distribution. Jiang (2009) defines it as:

M(t) ≈ F^a(t) / R^b(t),  with a, b > 0    [R.5]

The justification for this approximation is that F(t) provides a lower limit for M(t) and F(t)/R(t) provides a good upper limit for M(t). For future research it might prove useful to compare both approximations, to test whether this new approximation provides better results for the estimation of the expected number of renewals during a certain time period.
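As a cross-check on any such approximation, the renewal equation [R.1] can also be solved by direct numerical discretization. The uniform-grid Riemann scheme below is a generic method, not Jiang's approximation:

```python
import math

def weibull_F(t, beta, eta):
    """Weibull CDF F(t)."""
    return 1.0 - math.exp(-((t / eta) ** beta)) if t > 0 else 0.0

def weibull_f(t, beta, eta):
    """Weibull density f(t)."""
    if t <= 0:
        return 0.0
    x = (t / eta) ** beta
    return (beta / t) * x * math.exp(-x)

def renewal_function(t, beta, eta, steps=1000):
    """Solve M(t) = F(t) + integral_0^t M(t-u) f(u) du  [R.1] on a uniform grid."""
    h = t / steps
    f = [weibull_f(j * h, beta, eta) for j in range(steps + 1)]
    M = [0.0] * (steps + 1)          # M(0) = 0
    for k in range(1, steps + 1):
        conv = sum(f[j] * M[k - j] for j in range(1, k + 1))
        M[k] = weibull_F(k * h, beta, eta) + h * conv
    return M[steps]

# Sanity check: for beta = 1 the Weibull reduces to the exponential,
# for which M(t) = t/eta holds exactly.
m = renewal_function(5.0, 1.0, 2.0)
```

The O(steps²) cost is acceptable for single evaluations, which makes the scheme useful for validating series approximations like [R.2] and [R.5] on test cases.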
APPENDIX S, CONFIDENCE INTERVALS FOR PARAMETER ESTIMATION
From Dodson (2006), calculations are used to determine confidence intervals for the estimated parameters of both the exponential and the Weibull distribution. The following calculations are used to determine the upper and lower confidence limits, shown here for the shape parameter β:

β_L = β̂ exp[-K_α √((F^-1)_{2,2}) / β̂]    [S.1]

β_U = β̂ exp[K_α √((F^-1)_{2,2}) / β̂]    [S.2]

In these formulas K_α is the 100(1 - α/2)-th standard normal percentile for 100(1 - α) percent limits. For example, considering the 95% limits, K_α equals 1.96. Furthermore, (F^-1)_{2,2} represents the bottom-right element of the inverted local information matrix, defined as follows:
F = [ -∂² ln L/∂β²    -∂² ln L/∂β∂η
      -∂² ln L/∂η∂β   -∂² ln L/∂η²  ]    [S.3]
Note that these are the second derivatives of the likelihood equations, so these confidence intervals are based on the MLE. An example of the result for the case of the pump seal, with the parameter estimations given in chapter 4, is given below.
Table 61 Confidence intervals for the Weibull parameter estimations

Parameter  Limit  95% Confidence Interval  99% Confidence Interval
Shape      Lower  0,754278314              0,667134527
Shape      Upper  1,639231405              1,853354385
Scale      Lower  37,36921428              37,2344935
Scale      Upper  38,23235371              38,37068492
[Plot of the lower confidence limit, expected value and upper confidence limit of the shape parameter over 0 to 150 time units]
Figure 39 Visual representation of Weibull shape parameter confidence intervals over time
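The limits [S.1]/[S.2] form a multiplicative interval around the MLE. A minimal sketch, where the inputs are illustrative (var_beta stands for the (F^-1)_{2,2} element and K = 1.96 gives the 95% limits):

```python
import math

def shape_confidence_limits(beta_hat, var_beta, K=1.96):
    """Multiplicative confidence limits for the Weibull shape per [S.1]/[S.2].

    var_beta plays the role of the bottom-right element of the inverted
    local information matrix [S.3]; K = 1.96 for 95%, K = 2.576 for 99%.
    """
    factor = math.exp(K * math.sqrt(var_beta) / beta_hat)
    return beta_hat / factor, beta_hat * factor

# Illustrative values, not the thesis's pump-seal numbers:
lo, hi = shape_confidence_limits(1.112, 0.048)
```

A consequence of the multiplicative form is that the estimate is the geometric mean of the two limits, which matches the pattern visible in Table 61.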
APPENDIX T, DATA PREPARATION FOR DECISION SUPPORT TOOL

Now that the indicators are selected from the comparison between literature and practice, the last part of this first research question focuses on the combination of different data sources to collect the necessary input for the selected criteria. Moreover, this will provide the answer to the last sub-question of how the current data can be used to provide insights into the selected criteria.

T.1 MAINTENANCE RELATED DATA

Tsang et al. (2006) provide a clear overview of different types of maintenance related data (see Figure 40 for this overview). Again, a comparison between literature and practice seems reasonable to assess whether all needed input data are available. For this research, however, the maintenance data used must be scoped: condition data and installation data are kept outside the scope of this research. Condition data is kept at BPRR, but not in the SAP master data system, and the same goes for installation data of equipment. The only form of data used from the SAP system relates to maintenance action data. Failure data is provided in the form of the eChoke / OSA data files, giving an overview of equipment identifiers, event identifiers (production loss type and cause) and the date and time. Figure 40 distinguishes four types of maintenance data:
- Failure / replacement data: equipment identifier, event identifier, failure mode / suspension, date & time
- Condition data: equipment identifier, event identifier, covariates, date & time
- Maintenance action data: equipment identifier, event identifier, maintenance action, start date & time, finish date & time
- Installation data: equipment identifier, date & time
Figure 40 Maintenance data (Tsang et al., 2006)

Special attention is given to the equipment identifier, which forms the link between the different data sources. For example, the eChoke / OSA files used to provide failure data are based on a certain functional location (unit), which corresponds to many other functional locations used for the maintenance action data in SAP (sub-unit and further). Suzuki (1994) provides a good example of how this equipment breakdown structure, or functional location, can be used to create equipment identifiers. In order to make use of the link between the different data sources, the functional locations providing the most useful information considering sub-units (type, number and part) are split up by an Excel-based VBA tool, which provides the opportunity to connect work order data (maintenance action data), eChoke / OSA data files (failure data), maintenance cost data (SAP based) and production loss costs (RAID based). Please refer to Appendix F, Functional location separation, for an example of this functional location separation as produced by the VBA tool.
[Figure 41: Equipment breakdown structure (Suzuki, 1994), a hierarchy from total plant via systems, sub systems, work groups and work packages down to resources, with the cost elements budget, schedule, scope of work and work schedule]
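The functional-location separation performed by the VBA tool can be sketched as follows. The location format used here (unit-type-number-part, e.g. "U100-P-001-SEAL") is illustrative, not BPRR's actual SAP coding scheme:

```python
def split_functional_location(floc):
    """Split a functional location string into the fields used to join the
    different data sources (work orders, eChoke/OSA, cost and loss data)."""
    parts = floc.split("-")
    return {
        "unit": parts[0],
        "type": parts[1] if len(parts) > 1 else None,
        "number": parts[2] if len(parts) > 2 else None,
        "part": parts[3] if len(parts) > 3 else None,
    }

# Joining on the "unit" field links unit-level failure data (eChoke/OSA) to
# sub-unit-level maintenance action data (SAP work orders).
record = split_functional_location("U100-P-001-SEAL")
```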
T.2 EQUIPMENT PERFORMANCE RELATED DATA

Within BPRR, different classifications are used to assign production loss. Production losses are assigned to causes predefined by BPRR. These connections are shown in Table 62. The production loss category is given by the asset team leader.
Table 62 Different forms of downtime used at BPRR

Planned downtime causes:
- Com. logistics: delivery of materials is causing downtime, production planning related (not used)
- Com. margin: based on strategic decisions, production margins are not favourable to produce (not used)
- Commercial feed: based on strategic decisions, there is no material available for production (not used)
- Domino S/D: based on planned downtime from other equipment, the unit is shut down (used)
- Domino slowdown: based on planned downtime from other equipment, the unit cannot run at nameplate capacity (used)
- Planned S/D: the unit is shut down for preventive maintenance (used)
- Slowdown: the unit operates at reduced speed due to preventive maintenance (used)
- TAR: turnaround maintenance is causing downtime (not used)
- TAR domino: turnaround maintenance is causing downtime (not used)
- TAR domino slowd: turnaround maintenance is causing downtime (not used)
- TAR slowdown: turnaround maintenance is causing downtime (not used)

Unplanned downtime causes:
- Commercial feed: based on unforeseen strategic decisions, there is no material available for production (not used)
- Domino S/D: based on unplanned downtime from other equipment, the unit is shut down (used)
- Domino slowdown: based on unplanned downtime from other equipment, the unit cannot run at nameplate capacity (used)
- Slowdown: the unit operates at reduced speed due to corrective maintenance (used)
- Unplanned S/D: the unit is shut down for corrective maintenance (used)
These causes are important to assign the production loss to the right bucket for the OSA calculations. All commercial losses and TAR losses are not taken into account as production losses affecting the availability, but as affecting the utilization of the equipment: in both cases the plant was available, but due to strategic and planned actions it was chosen not to use the equipment. Based on these characteristics, these types of production loss are omitted from the downtime data for this research, since the research aims at equipment performance rather than strategic and operational planning.
[Figure 42 shows how the design capacity of a unit splits into unscheduled downtime and the reliability of the unit; reliability minus planned downtime gives the OSA of the unit, and OSA minus TAR and commercial downtime gives the utilisation of the unit. Unscheduled and planned downtime fall within the research scope; TAR and commercial downtime fall outside it.]
Figure 42. Classification of different production losses within BPRR
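The scope filter described above can be sketched as follows. The cause names follow Table 62; the record layout itself is illustrative:

```python
# Commercial and TAR losses count against utilisation, not availability,
# and are therefore dropped from the downtime data for this research.
IN_SCOPE = {
    ("Planned", "Domino S/D"), ("Planned", "Domino slowdown"),
    ("Planned", "Planned S/D"), ("Planned", "Slowdown"),
    ("Unplanned", "Domino S/D"), ("Unplanned", "Domino slowdown"),
    ("Unplanned", "Slowdown"), ("Unplanned", "Unplanned S/D"),
}

def availability_losses(records):
    """Keep only production-loss records that affect availability."""
    return [r for r in records if (r["category"], r["cause"]) in IN_SCOPE]

records = [
    {"category": "Planned", "cause": "TAR", "hours": 120},
    {"category": "Planned", "cause": "Planned S/D", "hours": 8},
    {"category": "Planned", "cause": "Com. margin", "hours": 30},
    {"category": "Unplanned", "cause": "Unplanned S/D", "hours": 5},
]
kept = availability_losses(records)
```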
T.3 CONCLUSION ON DATA PREPARATION

All related data for these six criteria was analysed and made compatible for further use in this research. The most important steps in this data conversion were the transformation of the current functional location or equipment breakdown structure, and the use of these newly acquired functional locations in the allocation of maintenance cost and work orders to certain units and their sub-units. Given this functional location separation, the different data sources (maintenance action data, failure data, maintenance cost and production loss data) became compatible.
Table 63 Different data sources used to measure the selected criteria

- Maintenance action data (SAP work orders): equipment identifiers, reference date and description of action
- Maintenance cost data (SAP cost registration): equipment identifiers, related cost for a specific time period (month)
- Production loss cost data (RAID loss cost): availability-loss-related production loss costs
- Availability data (eChoke / OSA): different registered production losses
- EDC data (slowdown files): calculation of the needed EDC unit %
APPENDIX U, EXCEL BASED MATHEMATICAL MODEL

The Excel-based model starts with providing the failure time data in the input form (see Figure 43). After this input is provided, the Excel tool automatically starts the calculations for parameter estimation, the goodness-of-fit tests, the approximation of the renewal function and the resulting calculations for the optimal maintenance policy. To do so, one other input is needed: the estimated corrective and preventive maintenance costs.

Figure 43 Input form for mathematical model

After the failure data samples are provided, Figure 44 is presented as the dashboard for the probability plotting. Next to the visual representation, the calculated values for the regression and the corresponding goodness of fit based on the r² are given.

Figure 44 User interface of probability plotting

The second dashboard provides the calculations and characteristics for the MLE approach. Calculations are made with a solver function triggered by a VBA-written macro. Both the confidence intervals and the estimated values are presented.

Figure 45 User interface of MLE estimation of parameters

For the MLE estimation, two statistical tests are calculated automatically. In Figure 46 the results for the Anderson-Darling test are provided. Note that the yellow block provides the decision maker the opportunity to select the p-value, ranging from 0,01 to 0,20 in this case.

Figure 46 Anderson-Darling test for estimated parameters

Similarly, the Mann-Scheuer-Fertig test is presented in a user interface (as depicted in Figure 47). Again, the decision maker is given the choice of adjusting the p-value within a range of 0,01 to 0,25.

Figure 47 Mann-Scheuer-Fertig test for estimated parameters

The last part of the Excel-based tool provides the results already discussed in chapter 4, considering the user interface for the optimal maintenance policy.
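The optimal-policy step combines the renewal function with the two cost inputs. The thesis's exact model is the one specified in chapter 4; as an illustration only, a standard block-replacement formulation minimizes the cost rate C(T) = (cp + cf * M(T)) / T over the replacement interval T:

```python
def optimal_block_interval(M, cp, cf, grid):
    """Return the T in `grid` minimizing the block-replacement cost rate
    C(T) = (cp + cf * M(T)) / T, where M(t) is a renewal function
    (e.g. the approximation from Appendix R), cp the preventive and cf
    the corrective maintenance cost."""
    return min(grid, key=lambda T: (cp + cf * M(T)) / T)

# Toy renewal function (illustrative, not a fitted model):
M = lambda t: t / 10 + 0.002 * t ** 2
best = optimal_block_interval(M, cp=100.0, cf=500.0, grid=list(range(5, 101, 5)))
```

With these toy numbers the cost rate reduces to 100/T + 50 + T, so the grid search settles at T = 10, matching the analytical minimum.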