
Research Article | Open Access

Volume 2020 | Article ID 2730691 | https://doi.org/10.1155/2020/2730691

Zhenya Wu, Jianping Hao, "A Maintenance Task Similarity-Based Prior Elicitation Method for Bayesian Maintainability Demonstration", Mathematical Problems in Engineering, vol. 2020, Article ID 2730691, 19 pages, 2020. https://doi.org/10.1155/2020/2730691

A Maintenance Task Similarity-Based Prior Elicitation Method for Bayesian Maintainability Demonstration

Academic Editor: Alessandro Lo Schiavo
Received: 09 Apr 2020
Revised: 05 Jun 2020
Accepted: 23 Jun 2020
Published: 11 Aug 2020

Abstract

Prior distribution elicitation is a challenging problem for a Bayesian inference-based mean time to repair (MTTR) demonstration because, if inaccurate prior information is introduced into the prior distribution, the results become unreliable. This paper proposes a novel maintenance task representation model based on attributed maintenance items. A novel similarity computation algorithm for maintenance tasks is then formulated on the basis of this model. Optimistic and pessimistic values are ascertained from the time data of similar maintenance tasks to obtain a prior distribution. The main idea is to separate maintenance tasks into distinct items and use attribute sets to extract key features. Each pair of items is then compared to quantify the differences between reference and candidate tasks. Candidate tasks with an acceptable difference from the reference task are taken as prior information sources for constructing the prior distribution. A case study involving a high-frequency (HF) transceiver MTTR Bayesian demonstration shows that the proposed method can effectively identify similar maintenance tasks to serve as information sources for prior distribution elicitation.

1. Introduction

Maintainability refers to the designed characteristics of systems or products that facilitate “the relative ease and economy of time and resources with which an item can be retained in, or restored to, a specified condition when maintenance is performed by personnel having specified skill levels, using prescribed procedures and resources, at each prescribed level of maintenance and repair” [1]. For many large-scale systems, the cost of system maintenance and support ranges from 60% to 75% of the total life-cycle cost [2]. Thus, ensuring a product has good maintainability is a key concern for product developers and users.

Maintainability demonstration is a formal process conducted by a product developer and an end customer to determine whether specified maintainability requirements have been achieved. The mean time to repair (MTTR) is one of the key metrics for describing system maintainability and the main index that is drawn upon during a maintainability demonstration. According to MIL-HDBK-470A, the number of samples for an MTTR demonstration should not be less than 30. However, it is almost impossible to obtain enough samples for a maintainability demonstration during operational tests and evaluations because the tests are expensive. Generally, the problem of insufficient samples can be dealt with by using probabilistic and statistical approaches, such as Bayesian techniques [3–6], bootstrap methods [7, 8], and Dempster–Shafer (D-S) evidence theory [9–11]. Of these, Bayesian methods have increasingly become the de facto option in reliability and maintainability engineering [12].

In a Bayesian MTTR demonstration, constructing an appropriate prior distribution is a key challenge. Wang and Zhou [13] divide this problem into two parts. The first part focuses on whether the prior information is accurate enough to describe the actual behavior. The second part is concerned with translating this prior information into an appropriate mathematical form. Existing research on MTTR Bayesian demonstrations has mainly focused on the latter problem; the accuracy analysis of prior information has not received the same attention. Zhang [14], Zhang [15], Zhu [16], Huang [17], Liu [18], and Wang [19] have all analyzed the accuracy of prior information from the perspective of data consistency. In their methods, when the prior maintenance time data come from the same type of product, the accuracy of the prior data depends on whether the prior data and field data come from the same distribution, which is judged with nonparametric and parametric tests. When the prior data come from similar products, the multilayer Bayes method and the Kullback information method are used to calculate the degree of similarity between the prior data and field data. However, consistency between the prior data and field data is a necessary but insufficient condition for the prior data to be accurate: the maintenance actions of two products may not be similar even though their maintenance times happen to follow the same distribution. In addition, these methods all depend upon a certain amount of data to ensure validity, and it is not always possible to obtain enough maintenance time data in practice. Hence, judging the accuracy of prior data through maintenance time data analysis alone is neither reliable nor feasible in some cases. Chen et al. [20] used the weighted sum of distances between product attributes to measure the similarity degrees between airplanes, which were then converted into fusion weights for the prior distribution.
However, the chosen attributes include passenger numbers, wingspan, airplane length, and load capacity; these are performance parameters that have only an indirect relationship to maintenance actions, so a similarity degree based on them cannot appropriately reflect the accuracy of prior information. In conclusion, the limitations of existing methods mean that analyzing prior information accuracy remains an important research topic for MTTR Bayesian demonstrations.

In this paper, we present a novel prior distribution elicitation method for an MTTR Bayesian demonstration. Rather than undertaking maintenance time data analysis, our method analyzes the accuracy of prior information based on maintenance task similarity. As the maintenance tasks directly reflect the maintainability characteristics of the product, a similarity analysis gives a comprehensive overview of the accuracy of the prior information, especially for cases with limited maintenance time data. Developing a practical prior elicitation method involves substantial challenges. First, there is the question of how to abstractly represent the original maintenance task while retaining the features necessary for similarity analysis. Second, to the best of our knowledge, there are no reported methods that can calculate a distance measurement to quantify the similarity between maintenance tasks. Last but not least, after obtaining similar maintenance tasks, it is not clear how to construct the prior distribution based on the time data of these tasks.

To tackle the above challenges, we first develop a novel representation model for maintenance tasks. This model is based on an attributed item sequence that uses the item entity attribute tuples and maintainability attribute value vectors to extract operations and maintainability features. To measure the similarity between maintenance tasks, a novel similarity computation algorithm is developed based on the representation model. This algorithm is able to quantify the difference between maintenance tasks. Next, an optimistic and pessimistic value method is used to construct the prior distribution.

The main contributions of this paper include the following:
(1) A novel representation model that can extract the key features of a maintenance task
(2) A novel similarity computation algorithm that can measure the similarity between maintenance tasks
(3) A novel method for constructing the prior distribution based on the maintenance time data of similar tasks

The remainder of the paper is organized as follows: in Section 2, the proposed methodology is discussed. Then, in Section 3, the application of this methodology is shown through a case study. Section 4 provides the conclusions.

2. The Structure of the Proposed Methodology

This section presents the methodology for constructing the prior distribution based on maintenance task similarity analysis. The overall structure of the methodology is illustrated in Figure 1. The methodology is based on three main tasks:
(1) Construction of the maintenance task representation models
(2) Maintenance task similarity analysis
(3) Elicitation of the prior distribution

2.1. Construction of the Maintenance Task Representation Model
2.1.1. Literature Review

Previous research regarding maintenance representation models mainly exists in the field of automated disassembly, which uses computers to simulate the disassembly process and calculate the most efficient sequence. All automated disassembly techniques rely on first developing a suitable product maintenance model [21]. Srinivasan and Gadh [22] used a “wave propagation” approach to establish the relationship between each component in an assembly. This approach uses tau and beta waves, which are created during the process to determine the sequence of operations necessary to remove a specific component. Homem de Mello and Sanderson [23] used an AND/OR hypergraph to give a compact representation model of all possible assembly plans. This model forms the basis for efficient planning algorithms, which enables the selection of the best assembly plan and opportunistic scheduling. Agu [21] proposed a graph-based method for representing product assemblies. This approach uses nodes to represent the individual components, while arcs represent the different types of connections between components. The node variables provide information regarding specific components, and the arc variables provide information on the physical connections between them. To find the best assembly/disassembly sequence, the main emphasis of the above methods is on the location of components within the overall assembly, which makes sense for repair/interchange tasks. However, these methods are not suitable for the analysis of fault confirmation and fault isolation tasks as these do not necessarily consist of assembly/disassembly operations. In addition, these methods do not take into account other concerns that can influence maintenance, such as environmental factors and human factors, which can have a significant influence on the maintenance process.

In our method, the maintenance task is seen as a series of operations concerning maintenance items. This process then forms the basis of a representation model referred to as an attributed item sequence that can be used to represent a maintenance task. This representation model consists of an item sequence, item entity attribute tuples, and item maintainability attribute value vectors.

2.1.2. Problem Definition

To express this idea clearly, some definitions are given below.

Definition 1. (maintenance item sequence). A maintenance item, denoted by m_i, is the specified level of an item that is the direct object of a maintenance operation, for example, a screw being removed or a plug being disconnected. A maintenance item sequence, MS = (m_1, m_2, …, m_n), is then a series of time-ordered maintenance items that represents the items in a maintenance task.

Definition 2. (item entity attribute tuple set). An item entity attribute tuple, e_i = (type_i, op_i), is a two-tuple whose elements are attribute pairs. The parameter type_i represents the type of item, and op_i represents the corresponding maintenance operation. For example, the tuple describing “open cap” is (cap, open), and that describing “screw nut” is (nut, screw). An item entity attribute tuple set, E = {e_1, e_2, …, e_n}, is then the set of item entity attribute tuples for the items in a maintenance task.

Definition 3. (item maintainability attribute value vector set). An item maintainability attribute set, A = {a_1, a_2, …, a_k}, is a set of maintainability attributes describing the maintainability characteristics of an item. Its corresponding value vector is denoted by v_i = (v_i1, v_i2, …, v_ik). A maintainability attribute value vector set, V = {v_1, v_2, …, v_n}, is then the set of maintainability attribute value vectors for the items in a maintenance task.

Definition 4. (attributed item sequence). An attributed item sequence, AIS = (MS, E, V), is a three-tuple representing a maintenance task.
For example, the maintenance task “Repair/interchange: replace the transceiver” for troubleshooting an airplane HF transceiver failure includes the following procedures [24]:
(1) Unscrew the nut
(2) Lower the nut
(3) Pull the HF transceiver from the shelf and disconnect the electrical plug
(4) Dismantle the transceiver
(5) Place the cap on the electrical plug
(6) Clean the interface and adjacent area
(7) Check the interface and adjacent area
(8) Remove the cap from the electrical plug
(9) Check the cleanliness and condition of the electrical plug
(10) Install the transceiver on the shelf
(11) Press the transceiver to connect the electrical plug
(12) Screw the nut
Then, according to the above definitions, the representation model for the task is constructed as shown in Figure 2 (assuming the maintainability attribute set includes six attributes).
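As a rough illustration, the attributed item sequence of Definition 4 can be encoded as a small data structure; the class and field names below are our own, and the attribute scores are made-up placeholders, not values from the paper:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class AttributedItemSequence:
    """A maintenance task as a time-ordered sequence of attributed items."""
    # entity attribute tuples: (item type, maintenance operation)
    entity_tuples: List[Tuple[str, str]] = field(default_factory=list)
    # maintainability attribute value vectors, one per item
    value_vectors: List[List[float]] = field(default_factory=list)

    def add_item(self, item_type: str, operation: str, values: List[float]) -> None:
        self.entity_tuples.append((item_type, operation))
        self.value_vectors.append(values)


# First two steps of the "replace the transceiver" task, with
# illustrative six-attribute scores:
task = AttributedItemSequence()
task.add_item("nut", "unscrew", [8, 7, 6, 9, 8, 7])
task.add_item("nut", "lower", [8, 8, 7, 9, 8, 7])
print(len(task.entity_tuples))  # 2
```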

2.1.3. Identification and Formulation of Maintainability Attributes

As numerous complex factors can affect the maintenance process, researchers have used a variety of attributes or indicators to reflect maintainability characteristics. Several researchers have made use of a comprehensive evaluation method [2, 25–30] to incorporate a range of attributes. The maintainability attribute sets in the above research are shown in Figure 3.

Although there are differences between the above lists of maintainability attributes, they all provide a comprehensive overview of the attributes required to understand maintainability. In practice, one of the above maintainability attribute sets can be chosen as required, or according to expert opinion.

2.2. Maintenance Task Similarity Analysis

After the maintenance task representation models are constructed, a similarity analysis can be performed on them. In this section, we propose a similarity computation algorithm for maintenance tasks, from which clusters of similar maintenance tasks can be obtained.

2.2.1. Literature Review

Similarity search methods have been used in a wide variety of application areas, such as data mining [31], face recognition [32–34], image classification [35], medical engineering [36], and human behavior analysis [37]. Maintenance tasks are usually performed by maintenance staff, so similarity searches for maintenance tasks fall under the remit of human behavior analysis. Human behavior can be represented from many perspectives, from a low level, e.g., individual motions, to an abstract level, e.g., business processes. Zhang et al. [37] proposed an extended semantic distance calculation method called linked data semantic distance (LDSD) for similarity searches in relation to human behavior. This method is based on a multilayered process model (MLPM), which decomposes behaviors into three layers: a process/task layer, an activity layer, and an action layer. However, it is difficult to employ this model for maintenance tasks because of the difficulty of obtaining enough detail regarding human behavior. Neumuth et al. [38] proposed using surgical process models (SPMs) to represent surgical interventions and introduced five similarity metrics for comparing SPMs. These metrics relate to the granularity, content, temporal aspects, transitional features, and frequency of transitions. However, no clear instructions are given as to how to combine these five metrics into a single similarity value. Obweger et al. [39] proposed a generic similarity model for time-stamped sequences in complex business events. This model calculates similarity on the basis of deviations between a query pattern and its representation in a candidate event. Additionally, this model assesses dissimilarities at the level of single events, their order, their timing, and the absence of events. However, the single-event similarity is derived from the semantic distance between mapped events, which is not suitable for maintenance task similarity analysis.

In this section, the similarity measurement between maintenance tasks is converted into a sequence matching problem. The difference between two item sequences is quantified from the perspective of maintenance time and maintainability characteristics. To tolerate acceptable differences, mapping cost functions between the attributed item sequences are introduced into the matching process. Similar maintenance tasks can then be obtained by specifying a mapping cost threshold value.

2.2.2. Problem Definition

To express this idea clearly, some definitions are given below.

Definition 5. (similar items). Two given items are defined to be similar if and only if their entity attribute tuples are exactly the same; that is, they have the same type and operation.
Two attributed item sequences, AIS_a and AIS_b, are given, assuming they both have five items and every item in one sequence has a similar item in the other. Then, to quantify the difference between the two sequences, a one-to-one correspondence is assigned to similar items between AIS_a and AIS_b, as shown in Figure 4 (items with the same color are similar).

Definition 6. (virtual item). A virtual item is a nonexistent item that is used for full-sequence matching when the existing items of two sequences cannot all be placed in one-to-one correspondence, for example, when there are different items (different types or operations) or redundant items between the two sequences.
Two given attributed item sequences, AIS_a and AIS_b, which have three similar items, a different item, and a redundant item, are shown in Figure 5.
Then, in order to quantify the impact of the different item and the redundant item on the similarity analysis, two virtual items are added to the sequences to enable full-sequence matching (see Figure 6). A sequence with virtual items added is called an extended attributed item sequence.

Definition 7. (cosine similarity). Cosine similarity is a measure of the similarity between two high-dimensional vectors. That is, given two vectors x and y,

sim(x, y) = (x · y)/(‖x‖ ‖y‖),

where “·” indicates the inner product of two vectors and “‖ ‖” indicates the norm of a vector.
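Definition 7 is the standard cosine similarity; as a quick sketch in code (not from the paper):

```python
import math


def cosine_similarity(x, y):
    """sim(x, y) = (x . y) / (||x|| ||y||) for two equal-length vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)


print(cosine_similarity([1, 0], [1, 0]))  # parallel vectors -> 1.0
print(cosine_similarity([1, 0], [0, 1]))  # orthogonal vectors -> 0.0
```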

Definition 8. (item mapping cost, IMC). For two given extended attributed item sequences, AIS_a and AIS_b, under the one-to-one correspondence, the item mapping cost between two similar items m_a and m_b is

IMC(m_a, m_b) = w [1 − sim(v_a, v_b)],

and the item mapping cost between one item and its corresponding virtual item is

IMC(m, virtual) = w,

where w is the item weight, which represents the relative length of the mean maintenance time spent on that type of item; v_a and v_b are the maintainability attribute value vectors of the two items; and 1 − sim(v_a, v_b) represents the difference between the maintainability characteristics of two similar items.
Equations (2) and (3) show that the greater the difference between the maintainability characteristics of two similar items, or the more maintenance time that type of item consumes, the greater the impact of the difference on the similarity analysis.

Definition 9. (sequence mapping cost, SMC). The sequence mapping cost, SMC(AIS_a, AIS_b), between the sequences AIS_a and AIS_b is

SMC(AIS_a, AIS_b) = Σ_{i=1}^{N} IMC_i,

where N is the number of items in each extended sequence and IMC_i is the mapping cost of the ith matched item pair.
The SMC reflects the difference between two maintenance tasks based on their representation models. In general, the larger the value of SMC(AIS_a, AIS_b) is, the larger the difference between the two maintenance tasks.
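Reading Definitions 8 and 9 as “weight times (1 − cosine similarity)” for a matched pair and the full weight for a virtual-item match, the two costs can be sketched as follows; this reading, and the function names, are our own:

```python
import math


def cos_sim(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))


def item_mapping_cost(weight, v_a, v_b):
    """IMC: weight-scaled maintainability difference; a virtual item
    (v_a or v_b is None) incurs the full item weight as cost."""
    if v_a is None or v_b is None:
        return weight
    return weight * (1.0 - cos_sim(v_a, v_b))


def sequence_mapping_cost(pairs):
    """SMC: total mapping cost over the one-to-one item correspondence.
    `pairs` is a list of (weight, v_a, v_b) for matched items."""
    return sum(item_mapping_cost(w, va, vb) for w, va, vb in pairs)


# Identical vectors cost ~0; the virtual-item match costs its full weight.
pairs = [(0.6, [7, 8, 9], [7, 8, 9]), (0.4, [5, 5, 5], None)]
print(sequence_mapping_cost(pairs))  # ~0.4
```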

Definition 10. (reference maintenance task). When the equipment whose MTTR is to be demonstrated is specified, the maintenance tasks for this equipment are defined as the reference maintenance tasks, denoted by R = {R_1, R_2, …, R_p}, where p represents the number of task types.

Definition 11. (candidate maintenance task set). A candidate maintenance task, C_j, is a task that is compared with a reference task. The candidate maintenance task set, denoted by C = {C_1, C_2, …, C_q}, is the task set for the similarity search of a reference task R_i, where q represents the number of tasks. Possible sources of candidate tasks include maintenance tasks relating to equipment or components in the same system that have similar functions or that occupy a similar location.

Definition 12. (similarity calculation). For a given reference maintenance task, R_i, with a corresponding candidate maintenance task set, C, and a user-specified SMC threshold, δ, a similarity search retrieves all maintenance tasks C_j ∈ C such that

SMC(R_i, C_j) ≤ δ,

where SMC(R_i, C_j) is the SMC between the maintenance tasks R_i and C_j. If equation (5) holds, R_i and C_j are said to be similar under this boundary. We can then obtain the cluster of similar candidate tasks for the reference task, denoted G_i. The SMC threshold δ is a user-specified value, and the larger δ is, the more candidate maintenance tasks will be judged similar to the reference task, so more data will be available for constructing the prior distribution. However, a larger δ will also admit candidate tasks that are only weakly similar to the reference task, which in turn makes the obtained prior distribution unreliable. Hence, it is important to balance the quantity and quality of the data when specifying the SMC threshold. The threshold can be determined through discussion with experts, based on the SMC calculation results, so as to obtain data from equipment or components that are as similar as possible to the reference while still providing enough data to construct a prior distribution.

2.2.3. Calculation of the Item Weights

In this study, expert experience is used to estimate the item weight w. As human judgments can be vague or ill-defined, a fuzzy analytic hierarchy process (FAHP) is used to calculate the weight coefficient of each item. This method is mature and easy to use in engineering practice, and combining it with the fuzzy judgments of experienced experts makes the weights more defensible. The implementation of this procedure is described below [40].

First, a priority matrix, F = (f_ij)_{n×n}, needs to be constructed, where the value of f_ij can be acquired through the priority matrix scale method shown in Table 1.


Scale | Definition | Illustration

1 | More time | m_i consumes more time than m_j
0.5 | Equal time | m_i and m_j consume equal time
0 | Less time | m_i consumes less time than m_j

According to the results of the comparisons between different items, a priority matrix can be constructed for each expert consulted, as shown in Table 2.


Table 2 gives the general form of a priority matrix, F = (f_ij)_{n×n}, over the items m_1, m_2, …, m_n.

Then, the overall priority matrix, F, is obtained by aggregating the individual priority matrices:

F = Σ_{l=1}^{L} F_l,

where L is the number of experts.

Now, a fuzzy consistent matrix, R = (r_ij)_{n×n}, can be constructed from F.

First, the fuzzy complementary matrix is summed row by row, for i = 1, 2, …, n:

f_i = Σ_{j=1}^{n} f_ij.

Then, the following transformation is implemented to construct the fuzzy consistent matrix, R:

r_ij = (f_i − f_j)/(2n) + 0.5.

The next set of calculations begins with the weight vector of R. This is given by the following:

w_i′ = Σ_{j=1}^{n} r_ij, i = 1, 2, …, n.

The weight vector is now normalized:

w_i = w_i′ / Σ_{k=1}^{n} w_k′.

Finally, the weight vector, w, can be constructed as follows:

w = (w_1, w_2, …, w_n)^T.
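The weight-calculation steps above can be sketched as follows. The consistent-matrix transform r_ij = (f_i − f_j)/(2n) + 0.5 and the row-sum normalization used here are standard FAHP choices, so the details may differ from the paper's own equations:

```python
def fahp_weights(F):
    """Item weights from a fuzzy complementary priority matrix F (n x n):
    build the fuzzy consistent matrix, then normalize its row sums."""
    n = len(F)
    row_sums = [sum(row) for row in F]                 # f_i, row-by-row sums
    R = [[(row_sums[i] - row_sums[j]) / (2 * n) + 0.5  # r_ij transform
          for j in range(n)] for i in range(n)]
    raw = [sum(R[i]) for i in range(n)]                # unnormalized weights
    total = sum(raw)
    return [w / total for w in raw]                    # normalized weights


# Two item types, where the first consumes more time than the second:
F = [[0.5, 1.0],
     [0.0, 0.5]]
print(fahp_weights(F))  # [0.625, 0.375]
```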

The similarity computation algorithm based on the above definitions is shown in Algorithm 1.

Input: reference maintenance task set R, candidate maintenance task set C, SMC threshold δ
Output: clusters of similar candidate tasks G_1, …, G_p
(1) for each R_i ∈ R do
(2)  G_i ← ∅;
(3)  Construct representation models for R_i and the maintenance tasks in C;
(4)  for each C_j ∈ C do
(5)   Perform sequence matching between R_i and C_j;
(6)   Calculate the item weights;
(7)   Calculate SMC(R_i, C_j);
(8)   if SMC(R_i, C_j) ≤ δ then
(9)    G_i ← G_i ∪ {C_j};
(10)   end if
(11)  end for
(12) end for
(13) Return G_1, …, G_p;
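The loop structure of Algorithm 1 can be sketched in code as follows; the SMC here is a toy stand-in that merely counts unmatched entity tuples, not the weighted cost defined in Section 2.2.2, and all task names are illustrative:

```python
def similarity_search(reference_tasks, candidate_tasks, smc, threshold):
    """Algorithm 1: for each reference task, collect the candidate tasks
    whose sequence mapping cost does not exceed the threshold."""
    clusters = {}
    for name, ref in reference_tasks.items():
        clusters[name] = [cand_name
                          for cand_name, cand in candidate_tasks.items()
                          if smc(ref, cand) <= threshold]
    return clusters


def toy_smc(ref, cand):
    """Stand-in SMC: unmatched (type, operation) tuples, weighted equally."""
    matched = sum(1 for item in ref if item in cand)
    return (len(ref) - matched) + (len(cand) - matched)


refs = {"fault_isolation": [("breaker", "check"), ("pin", "check")]}
cands = {"wiring": [("breaker", "check"), ("pin", "check"), ("pin", "check2")],
         "vhf": [("breaker", "check"), ("pin2", "check")]}
print(similarity_search(refs, cands, toy_smc, threshold=1))
# {'fault_isolation': ['wiring']}
```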

To illustrate the method, the maintenance task “Fault isolation” for the troubleshooting of the HF transceiver failure is taken as an example. The candidate tasks chosen after consulting the troubleshooting manual, together with their procedures, are shown in Table 3.


Reference task: HF transceiver
(1) Do a check of the circuit breaker status;
(2) Do a check for 115 VAC at pins AC/4, 5, and 6 of the transceiver.

Candidate task 1: the wiring between the transceiver pin and the ground terminal
(1) Do a check of the circuit breaker status;
(2) Do a check for 115 VAC at pins AC/4, 5, and 6 of the HF transceiver;
(3) Do a check for a ground signal at pin AC/8 of the HF transceiver.

Candidate task 2: VHF transceiver
(1) Do a check of the circuit breaker status;
(2) Do a check for 28 DC at pin AC/2 of the VHF transceiver.

Assume that the maintainability attribute set includes entity reachability, visibility, maintenance space, tools, technical level of the maintainers, maintenance position, and security. The indicators are scored with a number from 0 to 10: the higher the score, the better the maintainability. Then, the representation models for the reference and candidate maintenance tasks are constructed, as shown in Table 4.


Reference task | Candidate task

(The attributed item sequences for the reference and candidate tasks follow the format shown in Figure 2.)

∗To quantify the impact of different numbers of checks at pins on the SMC calculation, the checks at pins AC/4, AC/5, AC/6, and AC/8 are treated separately.

There are two types of items in the sequences: circuit breaker and pin. Using fuzzy AHP, the priority matrices for the two items are

Then, according to equations (7)∼(11), the item weights are obtained as

The sequence matching between the reference and two candidate maintenance tasks is shown in Figure 7.

Then, drawing upon equations (1)∼(4), the SMC between the candidate and reference tasks is ascertained (the result has been multiplied by 1000 for better comparison):

After discussions with the experts, the SMC threshold is specified as . Then, because , the maintenance task for the wiring between the transceiver pin and the ground terminal is determined to be similar to the reference task.

2.3. Elicitation of the Prior Distribution

Commonly used methods for constructing a prior distribution include elicited priors, conjugate priors, and noninformative priors [41]. As the similar candidate tasks in each cluster only contain maintenance time data for the corresponding reference task, not for the whole maintenance action, we use an optimistic and pessimistic value method to estimate the parameters of the prior distribution. A normal distribution is the commonly used form of the prior distribution in MTTR Bayesian demonstrations [14, 16, 20, 42], so in this study we also assume a normal prior distribution for the parameter of interest.

Let T ∼ LN(μ, σ²) denote the maintenance action time distribution of a specified product. The variance, σ², will either be known from prior information, or a reasonably precise estimate can be obtained. The prior distribution of μ is denoted as N(μ_0, τ²). According to the properties of the lognormal distribution,

E(T) = exp(μ + σ²/2),

where E(T) is the mean of the maintenance time distribution.

Then, μ can be calculated as follows:

μ = ln E(T) − σ²/2.

If t denotes the time spent on one performance of a maintenance task and D_i denotes the maintenance task time data set corresponding to cluster G_i, then

D_i = {t_1, t_2, …, t_n}.

Two predictions of the mean of the maintenance action time, the lower (optimistic) value, E_L, and the upper (pessimistic) value, E_U, can be obtained as follows:

E_L = min(D_i), E_U = max(D_i),

where min(D_i) and max(D_i) are the minimum and maximum values, respectively, of the time data set corresponding to cluster G_i.

According to equation (16), the two possible predictions of μ are

μ_L = ln E_L − σ²/2, μ_U = ln E_U − σ²/2.

It can then be assumed that the range [μ_L, μ_U] encompasses k percent of the total possible values of μ and that the best estimate is at the midpoint of the range. Therefore, the following prior distribution estimates can be used:

μ_0 = (μ_L + μ_U)/2, τ = (μ_U − μ_L)/(2z),

where z is the standard normal quantile corresponding to the chosen coverage k.
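The elicitation above can be sketched in code. Mapping the optimistic and pessimistic range to (μ_0, τ) here assumes the range covers a chosen fraction of the prior mass, with z the matching standard normal quantile; that reading of the final step, and the function name, are our own:

```python
import math
from statistics import NormalDist


def elicit_prior(times, sigma2, coverage=0.90):
    """Normal prior N(mu0, tau^2) for the lognormal log-mean mu, built from
    the maintenance time data of a cluster of similar tasks."""
    # optimistic and pessimistic predictions of the mean maintenance time
    e_lower, e_upper = min(times), max(times)
    # map them to the log-mean via mu = ln E - sigma^2 / 2
    mu_lower = math.log(e_lower) - sigma2 / 2
    mu_upper = math.log(e_upper) - sigma2 / 2
    # midpoint as best estimate; spread from the assumed coverage fraction
    mu0 = (mu_lower + mu_upper) / 2
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    tau = (mu_upper - mu_lower) / (2 * z)
    return mu0, tau


# Illustrative cluster time data (hours) and log-variance:
mu0, tau = elicit_prior([12.0, 18.0, 25.0, 30.0], sigma2=0.25)
print(round(mu0, 3), round(tau, 3))
```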

3. Case Study

In this section, the implementation of an MTTR demonstration for an HF transceiver is once again used to illustrate our method.

3.1. Selection of Candidate Maintenance Tasks

An HF transceiver is part of the HF system and is installed at the front of the electronics rack in a plane. After referring to the troubleshooting manual and the aircraft maintenance manual [24, 43], we established candidate tasks for each reference task. These relate to other components in the HF system or other equipment at the front of the electronics rack. A breakdown of the tasks is shown in Table 5.


Reference task: Fault confirmation/checkout

HF antenna coupler:
(1) Press the row key near the HF indicator;
(2) Press the column key near the test indicator;
(3) Press the mode key on the MCDU menu.

HF antenna

Very high-frequency (VHF) transceiver:
(1) Press the row key near the VHF indicator;
(2) Press the column key near the test indicator;
(3) Press the mode key on the MCDU menu.

Reference task: Fault isolation

The wiring between the transceiver pin AC/8 and the ground terminal:
(1) Do a check of the circuit breaker status;
(2) Do a check for 115 VAC at pins AC/4, 5, and 6 of the HF transceiver;
(3) Do a check for a ground signal at pin AC/8 of the HF transceiver.

(1) Do a check of the circuit breaker status;
(2) Do a check for 115 VAC at pins AC/4, 5, and 6 of the HF transceiver;
(3) Do a check for a ground signal at pin AC/8 of the HF transceiver;
(4) Do a check of the wiring from the circuit breaker to the HF transceiver pins AC/4, 5, and 6.

VHF transceiver:
(1) Do a check of the circuit breaker status;
(2) Do a check for 28 DC at pin AC/2 of the VHF transceiver.

Reference task: Repair/interchange

HF antenna coupler:
(1) Disconnect the electrical plug;
(2) Place the cap on the electrical plug;
(3) Unscrew the nut;
(4) Lower the nut;
(5) Dismantle the antenna coupler;
(6) Place the cap on the electrical plug;
(7) Clean the interface and adjacent area;
(8) Check the interface and adjacent area;
(9) Dismantle the cap from the electrical plug;
(10) Check the cleanliness and condition of the electrical plug;
(11) Install the antenna coupler on the shelf;
(12) Screw the nut;
(13) Dismantle the cap from the electrical plug;
(14) Check the cleanliness and condition of the electrical plug;
(15) Connect the electrical plug to the antenna coupler.

Audio management unit (AMU):
(1) Unscrew the nut;
(2) Lower the nut;
(3) Pull the AMU from the shelf and disconnect the electrical plug;
(4) Dismantle the AMU;
(5) Place the cap on the electrical plug;
(6) Clean the interface and adjacent area;
(7) Check the interface and adjacent area;
(8) Dismantle the cap from the electrical plug;
(9) Check the cleanliness and condition of the electrical plug;
(10) Install the AMU on the shelf;
(11) Press the AMU to connect the electrical plug;
(12) Screw the nut.

VHF transceiver:
(1) Unscrew the nut;
(2) Lower the nut;
(3) Pull the transceiver from the shelf and disconnect the electrical plug;
(4) Dismantle the transceiver;
(5) Place the cap on the electrical plug;
(6) Clean the interface and adjacent area;
(7) Check the interface and adjacent area;
(8) Dismantle the cap from the electrical plug;
(9) Check the cleanliness and condition of the electrical plug;
(10) Install the transceiver on the shelf;
(11) Press the transceiver to connect the electrical plug;
(12) Screw the nut.

3.2. Identification and Formulation of the Maintainability Attribute Set and Evaluation Rules

The maintainability attribute set developed by Jian et al. [26] was used for the similarity analysis of the maintenance tasks. The maintainability attributes were tailored to the characteristics of the different tasks, as shown in Table 6. The corresponding evaluation rules are shown in Table 7.


Maintainability attributes | Fault confirmation | Fault isolation | Repair/interchange | Checkout

Entity reachability
Visibility
Maintenance space
Tools
Technical level of the maintainers
Maintenance position
Security


Maintainability attribute | Evaluation rules

Entity reachability:
Good: can observe the maintenance component comfortably, and there is a wide observation angle; (7–10)
General: can generally see the outline of the maintenance component, though it can easily cause eye and body fatigue; (4–6)
Poor: prone to fatigue in this pose. (0–3)

Visibility:
Good: clear line of sight, and there is enough light; (7–10)
General: the line of sight is blocked, or the light is dark; (4–6)
Poor: the line of sight is seriously blocked, or the light is insufficient. (0–3)

Maintenance space:
Good: no restrictions on the maintenance space for the operating posture; (7–10)
General: maintenance space for the body is basically enough, but the operator's posture is abnormal; (4–6)
Poor: there is not enough maintenance space for the body. (0–3)

Tools:
Good: does not need the aid of auxiliary tools; (7–10)
General: needs auxiliary tools sometimes; (4–6)
Poor: dependent on auxiliary tools. (0–3)

Technical level of the maintainers:
Good: familiar with the relevant technical knowledge and can quickly determine the operation process and solve the problem; (7–10)
General: familiar with the relevant knowledge and can solve problems by referring to the operation manual; (4–6)
Poor: only a general understanding of the situation and of how to carry out the relevant work. (0–3)

Maintenance position:
Good: on the ground; (7–10)
General: has to climb onto the machine; (4–6)
Poor: needs to stand outside the machine or in a similarly awkward position. (0–3)

Security:
Good: no danger of being injured by heavy objects, and no sharp edges that may scratch or danger of electrical shock; (7–10)
General: there is a certain security threat; sometimes there are sharp edges that may cause bumps or scratches; (4–6)
Poor: there is a serious security threat. (0–3)

3.3. Similarity Analysis between the Maintenance Tasks
3.3.1. Construction of the Maintenance Task Representation Models

After referring to the maintenance manuals and the experts’ experience, representation models for the reference and candidate maintenance tasks were established, as shown in Table 8.


Reference task | No. | Candidate task

(a) Fault confirmation/checkout



(1)

(2)

(3)


(b) Fault isolation