Research Article | Open Access

# A Semigroup Approach to the System with Primary and Secondary Failures

**Academic Editor:** Irena Lasiecka

#### Abstract

We investigate the solution of a repairable parallel system with primary as well as secondary failures. By using methods of functional analysis, especially the spectral theory of linear operators and the theory of $C_0$-semigroups, we prove the well-posedness of the system and the existence of a positive solution. We then show that the time-dependent solution strongly converges to the steady-state solution; thus we obtain the asymptotic stability of the time-dependent solution.

#### 1. Introduction

As science and technology develop, reliability theory has infiltrated the basic sciences, technological sciences, applied sciences, and management sciences. It is well known that repairable parallel systems are among the most essential and important systems in reliability theory. In practical applications, repairable parallel systems consisting of three units are often used. Owing to the strong practical background of such systems, many researchers have studied them extensively under varying assumptions on the failures and repairs; see [1–5] and their references.

The mathematical model of a repairable parallel system with primary as well as secondary failures was first put forward by Gupta; see [1]. The system consists of three independent identical units connected in parallel. One unit operates while the other two act as warm standbys. If the operating unit fails, a warm standby unit is instantaneously switched into operation. The operating unit is subject to primary and secondary failures. Primary failures result from a deficiency in a unit while it operates within its design limits. Secondary failures result from causes that stem from a unit operating under conditions outside its design limits. Two important types of secondary failures are common cause failures and human error failures. A common cause failure refers to the situation where multiple units fail due to a single cause such as fire, earthquake, flood, explosion, design flaw, or poor maintenance; see [2, 3]. A human error failure is a failure of the system due to a human mistake caused by, for example, inadequate training, improper tools, or a poor working environment; see [4, 5]. One repairman is available to repair these units. Once repaired, a unit is as good as new. The failure rates of the units and the system are constant and mutually independent. While the system is operating, the repairman can repair only one unit at a time. If all units fail, the entire system is repaired and checked before operation resumes. Unlike [4, 5], the repair times in this system are arbitrarily distributed.
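The state logic described above can be illustrated numerically. The following is a minimal sketch that, purely for illustration, replaces the arbitrarily distributed repair times by exponential ones and uses assumed rate values (`lam_p`, `lam_s`, `lam_c`, `lam_h`, `beta`, `mu` are illustrative constants, not the paper's parameters). It builds the generator matrix of the resulting Markov chain and solves for the steady-state probabilities and the availability.

```python
import numpy as np

# Illustrative rates (assumed values, not the paper's constants)
lam_p, lam_s = 0.01, 0.005   # primary / standby failure rates
lam_c, lam_h = 0.001, 0.002  # common cause / human error rates
beta, mu = 0.5, 0.2          # repair rates (operating / failed system)

# States: 0, 1, 2 = operating with that many failed units;
# 3 = all units failed; 4 = common cause failure; 5 = human error failure
Q = np.zeros((6, 6))
Q[0, 1] = lam_p + 2 * lam_s  # operating unit or one of two standbys fails
Q[1, 2] = lam_p + lam_s      # operating unit or the remaining standby fails
Q[2, 3] = lam_p              # last operating unit fails
for i in (0, 1, 2):          # secondary failures hit any operating state
    Q[i, 4] = lam_c
    Q[i, 5] = lam_h
Q[1, 0] = Q[2, 1] = beta           # repair of one unit while operating
Q[3, 0] = Q[4, 0] = Q[5, 0] = mu   # full system repair
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: pi @ Q = 0 together with sum(pi) = 1
A = np.vstack([Q.T, np.ones(6)])
b = np.zeros(7); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[:3].sum()  # probability of being in an operating state
print(pi, availability)
```

With general (non-exponential) repair times the failed states carry an elapsed repair time as a supplementary variable, which is exactly why the paper needs the integrodifferential formulation instead of a finite matrix.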

The parallel repairable system with primary and secondary failures can be described by the following equations (see [1]): For , the boundary conditions are prescribed, and we consider the usual initial condition where . The most interesting initial condition is . Here represents the probability that the system is in state at time , ; represents the probability that at time the failed system is in state and has an elapsed repair time of , ; represents the failure rate of an operating unit; represents the common-cause failure rate from state to state 4, ; represents the human-error failure rate from state to state 5, ; represents the failure rate of a standby unit; represents the constant repair rate while the system is operating; represents the repair rate when the failed system is in state and has an elapsed repair time of , which satisfies ; and , , , , and are positive constants.

In [1] the author analyzed the system using the supplementary variable technique and, via the Laplace transform, obtained various expressions including the system availability, the reliability, and the mean time to failure. He then found that the time-dependent availability decreases as time increases for an exponential repair-time distribution, under the following hypotheses.

*Hypothesis 1.* The system has a unique positive time-dependent solution

*Hypothesis 2.* The time-dependent solution converges to the steady-state solution as time tends to infinity, where
The availability and the reliability depend on the time-dependent solution of the system. In fact, the author used the time-dependent solution in calculating the availability and the reliability. However, the author did not discuss the existence of the time-dependent solution or its asymptotic stability, that is, he did not prove the correctness of the above hypotheses. It is well known that such hypotheses do not always hold, so a proof is necessary. Motivated by this, in this paper we show the well-posedness of the system and study the asymptotic stability of the time-dependent solution by using the theory of strongly continuous operator semigroups; see [6–8]. First, we convert the model of the system into an abstract Cauchy problem in a Banach space. Next, we show that the operator corresponding to this model generates a positive contraction $C_0$-semigroup. Furthermore, we prove that the system is well-posed and has a positive solution for a given initial value. Finally, by studying the spectrum of the operator and the irreducibility of the corresponding semigroup, we prove that the time-dependent solution converges to the static solution in the sense of the norm; thus we obtain the asymptotic stability of the time-dependent solution of the system.

In this paper, we require the following assumption for the failure rate .

*Assumption 1.1 (general assumption).* The function is measurable and bounded such that exists and

#### 2. The Problem as an Abstract Cauchy Problem

In this section, we rewrite the underlying problem as an abstract Cauchy problem on a suitable space ; see [6, Definition II.6.1] and [7, Definition II.6.1]. As the state space for our problem we choose It is obvious that is a Banach space when endowed with the norm where .

For simplicity, let and we denote by the linear functionals Moreover, we define the operators on as respectively. To define the appropriate operator we introduce a “maximal operator” on given as

To model the boundary conditions () we use an abstract approach as in, for example, [9]. For this purpose we consider the “boundary space” and then define “boundary operators” and . As the operator we take and the operator is given by where .
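The abstract boundary approach of [9] cited above (Greiner's framework) cuts the domain of the system operator out of the maximal domain by equating the two boundary operators. A generic sketch of this construction, written with the schematic symbols $A_m$, $L$, $\Phi$ of that framework rather than the concrete (elided) operators of the text, is:

```latex
% Greiner-type boundary setup (generic sketch):
%   A_m  : D(A_m) \subset X \to X      the maximal operator,
%   L    : D(A_m) \to \partial X       the (surjective) boundary operator,
%   \Phi : D(A_m) \to \partial X       the boundary perturbation.
% The operator corresponding to the problem is the restriction of A_m
% to those elements whose boundary values satisfy the boundary condition:
\[
  A p := A_m p, \qquad
  D(A) := \bigl\{\, p \in D(A_m) \;:\; L p = \Phi p \,\bigr\}.
\]
```

The boundary conditions of the original system are then encoded once and for all in the single equation $Lp = \Phi p$.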

The operator on corresponding to our original problem is then defined as Let , , ; then the condition in is equivalent to (). The system of integrodifferential equations () can be written as the following equation: Let ; then (2.11) is equivalent to the following operator equation: Thus, the above equations (), (), and () can be equivalently formulated as the abstract Cauchy problem If is the generator of a strongly continuous semigroup and the initial value in () satisfies , then the unique solution of (), (), and () is given by . For this reason it suffices to study ().
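The reduction performed above follows the standard semigroup pattern; a generic sketch (with the concrete operator of the text abbreviated schematically as $A$ on the state space $X$ chosen in this section) is:

```latex
% Abstract Cauchy problem on the state space X:
%   p'(t) = A p(t),  t \in (0, \infty),
%   p(0)  = p_0.
% If A generates a C_0-semigroup (T(t))_{t \ge 0} and p_0 \in D(A),
% then the unique classical solution is the semigroup orbit of p_0:
\[
  \frac{\mathrm{d}p(t)}{\mathrm{d}t} = A\,p(t), \quad t \in (0,\infty),
  \qquad p(0) = p_0,
  \qquad p(t) = T(t)\,p_0 .
\]
```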

#### 3. Boundary Spectrum

In this section we investigate the boundary spectrum of . In order to characterise by the spectrum of a scalar -matrix, that is, or , on the boundary space , we apply techniques and results from [10]. We start from the operator defined by We give the representation of the resolvent of this operator, which is needed below to prove the irreducibility of the semigroup generated by the operator .

Lemma 3.1. *Let
**
and set . Then one has
**
Moreover, if then
**
where
**
The resolvent operators of the differential operators are given by
**
for .*

*Proof. *A combination of [11, Proposition ] and [12, Theorem ] yields that the resolvent set of satisfies
For we can compute the resolvent of explicitly by applying the formula for the inverse of operator matrices; see [12, Theorem ]. This leads to the representation (3.4) of the resolvent of .

Clearly, knowing the operator matrix in (3.4), we can directly compute that it represents the resolvent of .

The following consequence is useful to compute the boundary spectrum of .

Corollary 3.2. *The imaginary axis belongs to the resolvent set of , that is,
*

The eigenvectors in can be computed as follows.

Lemma 3.3. *For one has
*

*Proof. *If for , (3.11)–(3.14) are fulfilled, then we can easily compute that . Conversely, condition (3.9) gives a system of differential equations. Solving these differential equations, we see that (3.11)–(3.14) are indeed satisfied.

The domain of the maximal operator decomposes, using [10, Lemma ], as

Moreover, since is surjective, is invertible for each ; see [10, Lemma ]. We denote its inverse by and call it the "Dirichlet operator."
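In the notation of the framework of [10], the Dirichlet operator inverts the boundary operator restricted to the kernel of the spectrally shifted maximal operator. A generic sketch, again with the schematic symbols $A_m$, $L$, $\partial X$ standing in for the elided concrete operators of the text:

```latex
% For \gamma in the resolvent set of the operator with zero boundary
% conditions, the restriction of L to \ker(\gamma - A_m) is invertible;
% its inverse is the Dirichlet operator
\[
  D_\gamma := \bigl( L|_{\ker(\gamma - A_m)} \bigr)^{-1}
  : \partial X \longrightarrow \ker(\gamma - A_m) \subset D(A_m),
\]
% which maps a prescribed boundary value to the unique solution of the
% homogeneous equation (\gamma - A_m)p = 0 attaining that boundary value.
```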

We can give the explicit form of as follows.

Lemma 3.4. *For each the operator has the form
**
where
*

The operator can be computed explicitly for .

*Remark 3.5. *For the operator can be represented by the -matrix
where

The operators and allow us to characterise the spectrum and the point spectrum of . Before doing so, we extend the given operators to the product space as in [13, Section ].

*Definition 3.6. * (i) (ii), (iii)(iv), (v),

*Remark 3.7. *(i) Note that . For the resolvent of is
(ii) The part of in is

Hence, can be identified with the operator .

The spectrum of can be characterised by the spectrum of operators on the boundary space as follows.

*Characteristic Equation 3.8*

Let . Then (i) . (ii) If, in addition, there exists such that , then

*Proof. *Let us first show the equivalence
We can decompose as
We conclude from this that the invertibility of is equivalent to the invertibility of . From
one can easily see that is invertible if and only if . This proves (3.26). Since, by our assumption, , it follows that . Therefore, is not empty. Hence we obtain from [6, Proposition IV.2.17] that
since is the part of in . This shows (ii).

To prove (i) observe first that and have the same point spectrum, that is,
Suppose now that . Then there exists such that . Since , we can compute
This shows that .

Conversely, if we assume that , then there exists such that . From
we conclude that and thus
It follows from the decomposition (3.15) that and hence .
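The equivalence proved above is the characteristic equation of Greiner's boundary-perturbation theory. A generic sketch of its content, in the schematic notation $A$, $\Phi$, $D_\gamma$ used in the code comments above (the precise hypotheses, in particular for the spectrum rather than the point spectrum, are those stated in the Characteristic Equation):

```latex
% Characteristic equation (generic form, cf. [10]): for \gamma in the
% resolvent set of the operator with zero boundary conditions,
\[
  \gamma \in \sigma_p(A)
  \;\Longleftrightarrow\;
  1 \in \sigma_p\bigl( \Phi D_\gamma \bigr),
\]
% and, under the additional hypothesis of part (ii),
\[
  \gamma \in \sigma(A)
  \;\Longleftrightarrow\;
  1 \in \sigma\bigl( \Phi D_\gamma \bigr).
\]
```

The point is that spectral questions about the unbounded operator on the state space are reduced to spectral questions about a matrix on the finite-dimensional boundary space.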

Using the Characteristic Equation, we can show that 0 is in the point spectrum of .

Lemma 3.9. *For the operator one has .*

*Proof. *By the Characteristic Equation it suffices to prove that . Since
where
we can compute the th column sum of the -matrix as follows:
This shows that the matrix is column stochastic, so its transpose is row stochastic, and hence . Since , also holds. Therefore, by the Characteristic Equation we conclude that

Indeed, 0 is even the only spectral value of on the imaginary axis.
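The column-stochastic argument above can be checked numerically: if every column of a matrix sums to 1, then the all-ones vector is a left eigenvector for the eigenvalue 1, so 1 lies in the spectrum. A minimal sketch with an arbitrary illustrative matrix (the paper's actual boundary matrix is not reproduced here):

```python
import numpy as np

# Illustrative 4x4 column-stochastic matrix (assumed values, not the
# paper's boundary matrix); every column sums to 1.
M = np.array([
    [0.2, 0.5, 0.1, 0.3],
    [0.3, 0.1, 0.4, 0.2],
    [0.4, 0.2, 0.3, 0.1],
    [0.1, 0.2, 0.2, 0.4],
])

# Column sums equal 1 means ones @ M = ones, i.e. the all-ones vector
# is a left eigenvector of M for the eigenvalue 1.
assert np.allclose(M.sum(axis=0), 1.0)
assert np.allclose(np.ones(4) @ M, np.ones(4))

# Equivalently, 1 is an eigenvalue of M (and of its row-stochastic
# transpose, which has the same spectrum).
eigvals = np.linalg.eigvals(M)
print(np.isclose(eigvals, 1.0).any())
```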

Lemma 3.10. *Under Assumption 1.1, the spectrum of satisfies
*

*Proof. *For any , , we consider the resolvent equation
where . This equation is equivalent to the following system of equations: