Abstract

We study the expected discounted penalty at ruin under a stochastic discount rate for the compound Poisson risk model with a threshold dividend strategy. The discount rate is modeled by a Poisson process and a standard Brownian motion. By applying the differentiation method and the total expectation formula, we obtain an integrodifferential equation for the expected discounted penalty function. From this integrodifferential equation, a renewal equation and an asymptotic formula satisfied by the expected discounted penalty function are derived. To solve the integrodifferential equation, we use a physics-informed neural network (PINN), for the first time in risk theory, and obtain numerical solutions of the expected discounted penalty function in some special cases of the penalty at ruin.

1. Introduction

For the first time in actuarial science, Gerber and Shiu [1] introduced the expected discounted penalty function, which is also referred to as the Gerber–Shiu function. Since the seminal paper [1], the Gerber–Shiu function has been widely studied and has become one of the most representative research directions in risk theory (see [2–4]). For the Gerber–Shiu function, the main goal is to consider three important random variables simultaneously, namely, the time of ruin, the surplus immediately before ruin, and the deficit at ruin. Usually, the Gerber–Shiu function is used to evaluate the overall financial performance of an insurance company prior to ruin. The special issue in volume 46, 2010, of the journal Insurance: Mathematics and Economics contains a selection of papers focused on the Gerber–Shiu function, with many further references therein. The expected discounted penalty function has attracted the interest of many actuaries since its inception. Ramsden and Papaioannou [5] considered a Markov-modulated risk model and derived an integrodifferential equation for the expected discounted penalty function, the asymptotic behavior of which was investigated in terms of Laplace transforms. Under a Lévy insurance risk process, the joint Laplace transform of the ruin time and the position at ruin was presented in [6]; this Laplace transform can be used to compute the expected discounted penalty function via Laplace inversion. Preischl and Thonhauser [7] minimized expected discounted penalty functions in a Cramér–Lundberg model by choosing optimal reinsurance, showed the existence and uniqueness of the solution found by this method, and provided numerical examples involving light- and heavy-tailed claims.
Martin–González and Kolkovska [8] studied a generalization of the expected discounted penalty function for a class of two-sided jump Lévy processes whose positive jumps have a rational Laplace transform and provided an explicit expression for the generalized function in terms of functions depending only on the parameters of the Lévy process. The expected discounted penalty function provides a unified framework for the evaluation of various risk quantities. For a systematic study of the Gerber–Shiu theory, one can refer to [1–3, 7–12].

In recent years, with the rapid advancement of artificial intelligence and machine learning, a number of papers have focused on the numerical solution of differential equations and have proposed novel learning machines for this purpose (see [13–22]). Zhou et al. [13] constructed a neural network model in which a trigonometric function served as the activation function, incorporated into the solver the initial condition of the integrodifferential equation satisfied by the ruin probability, and obtained numerical solutions. Ma et al. [14, 15] proposed an initial-condition extreme learning machine and a structure-automatically-determined Fourier extreme learning machine, respectively, to compute numerical solutions of partial differential equations. In particular, Raissi et al. [16] introduced the physics-informed neural network (PINN), which is trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations, and thereby solved the problem of data-driven solutions to partial differential equations. Wu et al. [19] proposed a new PINN for solving Hausdorff-derivative Poisson equations on irregular domains, using the concept of the Hausdorff fractal derivative to transform the numerical solution of the partial differential equation into an optimization problem involving the governing equation and boundary conditions. Zhang et al. [20] presented a novel deep learning technique, the multidomain physics-informed neural network (MDPINN), to solve forward and inverse problems of steady-state heat conduction in multilayer media. Zhang et al. [21] numerically solved linear and nonlinear transient heat conduction problems in multilayer composite materials using multidomain physics-informed neural networks.
Compared with other existing approaches, the PINN is simple, straightforward, and easy to program, and it has recently been applied successfully in different fields. Numerical experiments indicate that the PINN methodology is accurate and effective, providing a new approach for solving certain differential equations (see [16, 19–22]).

Although the expected discounted penalty function was proposed more than two decades ago and continues to play an important role in actuarial science research, many problems remain open. Two of them are addressed here: generalizing the discount rate in this function from a constant to a random variable for a nonclassical risk model, and finding an effective numerical scheme for this function when no explicit solution is available (see [3, 5–12]).

The main goal of this paper is to partially address both of the above issues. The rest of this paper is organized as follows. In the remainder of this introduction, the risk model of interest with a threshold dividend strategy is introduced together with the expected discounted penalty function with a random discount factor. The main technical analysis is carried out in Section 2, where the integrodifferential equation for the expected discounted penalty function is derived. In Section 3, for a relatively large initial surplus, we obtain a renewal equation and, further, an asymptotic formula for the expected discounted penalty function. In Section 4, we describe the network structure, the training procedure, and other details of the physics-informed neural network methodology used to find numerical solutions of the integrodifferential equation of Section 2, and we illustrate the effectiveness of the method through numerical examples.

In the classical compound Poisson risk model, the insurance company is assumed to collect premiums continuously at a constant rate, while claims arrive at the jump times of a Poisson process. The successive individual claim amounts, independent of the claim arrival process, are independent and identically distributed (i.i.d.) positive random variables with a common cumulative distribution function (c.d.f.) that has a positive finite mean and a continuous probability density function. Consequently, the i.i.d. interclaim times, independent of the claim amounts, are exponentially distributed with mean equal to the reciprocal of the Poisson intensity. The aggregate claims process is the compound Poisson sum of the claim amounts up to each time point, and the insurer's surplus process equals the initial surplus plus premium income minus aggregate claims. For more on the classical compound Poisson risk model, see [23, 24], which serve as encyclopedic references for all matters concerning ruin theory.
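The classical dynamics just described are easy to simulate. The sketch below estimates the ruin probability by Monte Carlo over a (truncating) finite horizon; all parameter values and the choice of exponential claim sizes are illustrative assumptions, not values from this paper, and the closed-form expression used for comparison is the well-known exponential-claims formula for the classical model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumed for this sketch, not from the paper):
c, lam, mu = 1.5, 1.0, 1.0   # premium rate, claim intensity, mean claim size

def ruin_probability_mc(u, n_paths=5000, horizon=200.0):
    """Monte Carlo estimate of the ruin probability, truncated to a finite
    horizon.  In the classical model ruin can only occur at claim instants,
    so it suffices to inspect the surplus right after each claim."""
    ruined = 0
    for _ in range(n_paths):
        t, s = 0.0, float(u)
        while t < horizon:
            w = rng.exponential(1.0 / lam)   # next interclaim time
            t += w
            s += c * w                       # premium income since last claim
            s -= rng.exponential(mu)         # claim amount, Exp with mean mu
            if s < 0:
                ruined += 1
                break
    return ruined / n_paths

# For exponential claims the classical model has the closed form
# psi(u) = (lam * mu / c) * exp(-(1/mu - lam/c) * u).
def ruin_probability_exact(u):
    return (lam * mu / c) * np.exp(-(1.0 / mu - lam / c) * u)
```

With the parameters above, the estimate at initial surplus 2 lands within Monte Carlo error of the exact value, roughly 0.34; the finite-horizon truncation bias is negligible because of the positive drift.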

We now enrich the classical model. We assume that the insurance company is a stock company and that dividends are paid to shareholders according to a threshold dividend strategy: whenever the surplus is above a constant barrier level, dividends are paid out continuously at a fixed annual rate, so that the net premium rate above the barrier equals the original premium rate minus the dividend rate, while below the barrier the full premium rate applies. As usual, we impose the positive security loading condition, namely that the net premium rate exceeds the expected aggregate claims per unit time. Under such a strategy, the surplus process can be expressed as follows:

See [25, 26] and the references therein for this type of risk model with a threshold dividend strategy; Figure 1 shows a sample path of the surplus process.
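A sample path like the one in Figure 1 can be generated directly from the description above. The sketch below advances the surplus between claim instants, switching the premium rate when the path crosses the barrier; the barrier level, premium and dividend rates, and claim distribution are hypothetical choices for illustration only (the net rate exceeds the expected claims per unit time, so the loading condition holds).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical illustrative parameters (not taken from the paper):
c, alpha, b = 2.0, 0.5, 10.0   # premium rate, dividend rate, barrier level
lam, mu = 1.0, 1.0             # claim intensity, mean claim size

def simulate_threshold_path(u, horizon=50.0):
    """Simulate one surplus path under the threshold strategy.  Returns the
    terminal surplus and the total (undiscounted) dividends paid; stops
    early if ruin occurs."""
    t, s, dividends = 0.0, float(u), 0.0
    while t < horizon:
        w = min(rng.exponential(1.0 / lam), horizon - t)  # time to next claim
        if s < b:
            hit = (b - s) / c          # time needed to reach the barrier
            if w <= hit:
                s += c * w             # stays below the barrier: full rate c
            else:                      # crosses the barrier mid-interval
                s = b + (c - alpha) * (w - hit)
                dividends += alpha * (w - hit)
        else:
            s += (c - alpha) * w       # above the barrier: net rate c - alpha
            dividends += alpha * w
        t += w
        if t >= horizon:
            break                      # horizon reached before the next claim
        s -= rng.exponential(mu)       # the claim arrives
        if s < 0:
            break                      # ruin
    return s, dividends
```

Starting the path above the barrier, dividends accrue immediately at the assumed rate until the first claim pushes the surplus back below the threshold.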

Define the time of ruin as the first time at which the surplus becomes negative, with the convention that the ruin time is infinite if the surplus remains nonnegative forever. On the event of ruin, the surplus immediately before ruin and the deficit at ruin are well defined. Let the penalty be a bivariate nonnegative function of these two quantities satisfying some mild integrability conditions. We now introduce the expected discounted penalty function, defined as the expectation of the penalty discounted back to time zero, multiplied by the indicator of the event that ruin occurs in finite time, where the discounting is through the accumulated interest force function. We assume that the accumulated interest force is a linear combination, with nonnegative constant coefficients, of time, a Poisson process, and a standard Brownian motion, and that the claim arrival process, the interest-rate Poisson process, and the Brownian motion are mutually independent (see [27, 28]). Since the stochastic fluctuation of interest cannot be large in reality, and for simplicity, we additionally impose a smallness condition on the coefficients of the random components.
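The random discount factor can be sanity-checked by simulation. The sketch below assumes the accumulated interest force has the linear form delta(t) = r*t + sigma*W(t) + a*N2(t) that is common in this literature (the paper's own coefficients are not reproduced above, so every value here is a hypothetical placeholder) and compares a Monte Carlo estimate of the expected discount factor with the closed form obtained from independence of the three components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients of the accumulated interest force
# delta(t) = r*t + sigma*W(t) + a*N2(t):
r, sigma, a, lam2, t = 0.04, 0.02, 0.01, 1.0, 10.0

n = 200_000
W = rng.normal(0.0, np.sqrt(t), size=n)   # Brownian motion at time t
N2 = rng.poisson(lam2 * t, size=n)        # interest-rate Poisson process at t
mc = np.exp(-(r * t + sigma * W + a * N2)).mean()

# By independence, E[exp(-delta(t))] factorises into three terms:
# exp(-r*t) * exp(sigma^2 * t / 2) * exp(lam2 * t * (exp(-a) - 1)).
exact = np.exp(-r * t + 0.5 * sigma**2 * t + lam2 * t * (np.exp(-a) - 1.0))
```

Note that the Brownian component raises the expected discount factor relative to deterministic discounting while the jump component lowers it, which is why a smallness condition on these coefficients keeps the model realistic.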

In the setting of surplus processes without a dividend strategy, [28–30] and the references therein generalized the known results on the expected discounted penalty function with a constant discount rate in [1]. Li et al. [30] provided the first systematic numerical study, via the popular Fourier-cosine method, of finite-time expected discounted penalty functions with the risk process driven by a generic Lévy subordinator.

The expected discounted penalty function is a function of the initial surplus. Many recent studies of ruin-related quantities can be traced back to the expected discounted penalty function with a constant discount rate. For example, by setting the penalty identically equal to one and the discount rate to zero, the expected discounted penalty function reduces to the ruin probability as follows:

In particular, first-step analysis of the expected discounted penalty function was used in [1] to derive a defective renewal equation, from which explicit solutions can be obtained. In the classical compound Poisson risk model without a dividend strategy, Gerber and Shiu [1] gave the integrodifferential equation, the renewal equation, and the asymptotic formula for the expected discounted penalty function in the constant-interest setting, and Wang and Ling [28] gave the corresponding results for a stochastic interest force. In this paper, we derive the integrodifferential equation and the renewal equation for the expected discounted penalty function of (2) under the risk model (1) with a threshold dividend strategy and a random discount factor, and we also give a remark on the asymptotics. To obtain numerical results efficiently, a physics-informed neural network (PINN) is used, for the first time in risk theory, to compute numerical solutions of the expected discounted penalty function in some special cases of the penalty at ruin.

2. Integrodifferential Equation for the Expected Discounted Penalty Function

In this section, we derive an integrodifferential equation for the expected discounted penalty function by utilizing the strong Markov property of the Poisson process at claim instants.

Clearly, the expected discounted penalty function behaves differently depending on whether the initial surplus is below or above the barrier level. Hence, we write

Proposition 1. The expected discounted penalty function satisfies the following integrodifferential equation for :
where

Proof. For , we condition on the time and the amount of the first claim. Regarding the time of the first claim, there are two possibilities: either the first claim occurs before the surplus attains the barrier level, or it occurs after the barrier has been attained. Regarding the amount of the first claim, there are likewise two possibilities: either the surplus process starts afresh with a new initial surplus, or the first claim causes ruin.
We need to distinguish between two cases. First, the first claim occurs before the surplus has reached the barrier; in this case, the surplus immediately before the claim instant is still below the barrier. Second, no claim occurs before the surplus exceeds the barrier; in this case, the surplus immediately before the first claim is at or above the barrier, and there are three possibilities for the amount of the first claim: it may leave the surplus above the barrier, bring it below the barrier while keeping it nonnegative, or cause ruin.
In view of the strong Markov property of the surplus process at claim instants and the total expectation formula, for , we obtain
where
Letting leads to
We then get
where
where is a function which is independent of and .
We write the constant as . Changing variables in (11) by and results in
By differentiating both sides of equation (13) with respect to , we obtain
This establishes equation (5).
For , the surplus immediately before the time of the first claim is , and at time , the surplus may be less than , more than , or between and . In the same way as in the derivation of (5), we obtain
We then get
where
Note that . Substituting with in (16) yields
and then differentiating both sides of equation (18) with respect to yields the integrodifferential equation (6),
where
which concludes the proof.

2.1. Remarks
(1) The integrodifferential equation (5) for does not involve , but the integrodifferential equation (6) for incorporates .
(2) Equations (13) and (18) show that is continuous, and especially, for , , i.e., .
(3) We examine when . However, the same is not true for at . To see this, let in (6) and employ the integrodifferential form of in (5) afterwards. We then have

This results in , where is a left derivative and is a right derivative. Thus, the expected discounted penalty function is continuous but not differentiable at the barrier.

3. Renewal Equation for the Expected Discounted Penalty Function

In this section, we first obtain a renewal equation for , . An asymptotic formula for is then derived by means of this renewal equation.

Let
where is the unique nonnegative solution to the Lundberg equation of :

From
it is clear that equation (23) has a unique nonnegative root and a unique negative root, denoted by and , respectively, because of . See [28] and Figure 2 for the solutions of equation (23), which will be used later.
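Equation (23) itself is not reproduced in this excerpt; as a hedged stand-in, the sketch below solves the classical Lundberg (Gerber–Shiu) equation c*s - (lam + delta) + lam*beta/(beta + s) = 0 for exponential claims, which exhibits exactly the root structure described above: one nonnegative root and one negative root. All parameter values are assumptions for illustration.

```python
import numpy as np

# Assumed stand-in for equation (23): the classical Lundberg equation
#     c*s - (lam + delta) + lam*beta/(beta + s) = 0
# for Exp(beta) claims, premium rate c, and interest force delta.
c, lam, delta, beta = 1.0, 1.0, 0.1, 2.0

def lundberg(s):
    return c * s - (lam + delta) + lam * beta / (beta + s)

# Multiplying by (beta + s) gives the quadratic
#     c*s^2 + (c*beta - lam - delta)*s - delta*beta = 0.
B = c * beta - lam - delta
disc = B * B + 4.0 * c * delta * beta      # always positive: two real roots
rho = (-B + np.sqrt(disc)) / (2.0 * c)     # unique nonnegative root
neg = (-B - np.sqrt(disc)) / (2.0 * c)     # unique negative root, in (-beta, 0)
```

The nonnegative root enters the renewal equation, while the negative root governs the exponential decay rate in the asymptotic formula.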

Proposition 2. The expected discounted penalty function satisfies the following renewal equation for :
where .

Proof. For , multiplying both sides of (6) by and applying the product rule for differentiation, we get
where .
By (23), equation (26) reduces to
For , we integrate both sides of equation (27) from to , and then have
It follows that
By letting , the first term on both sides of (29) vanishes. Thus, we have
By (29) and (30), we obtain
Applying
and multiplying (31) by , it follows that
Changing variables in (33) yields
which is the required result.

3.1. Remarks
(1) We know consists of two parts: and . To determine , we consider , which has been incorporated in equation (25) for . It is known from [4] that the general solution of is of the following form:
Here, is an arbitrary constant to be determined; the function is the expected discounted penalty function with no dividend strategy, that is, the expected discounted penalty function with barrier level . As shown in [1, 28], the function is the solution to the following renewal equation, and its applications have been studied extensively:
Here, .
The second function is a nontrivial solution to the following homogeneous integrodifferential equation:
with initial condition defined (without loss of generality) to be
The constant , which we specify by substituting equation (35) back into (25), satisfies
Thus,
(2) Equation (25) may be restated as

Since equation (40) for has exactly the same form as equation (3.4) of [28], we may follow the approach by which [28] obtained its asymptotic formula (3.18), and we then have an asymptotic formula for the expected discounted penalty function, where and are the unique nonnegative root and the unique negative root of equation (23), respectively, and the notation , for , means .

4. Numerical Results

In this section, we use the physics-informed neural network (PINN) method to find numerical solutions of the integrodifferential equation (5) for the expected discounted penalty function in three special cases of the penalty at ruin (see [13–16, 19–21]). In addition, we also give asymptotic numerical solutions of (6) for using the asymptotic formula (41).

4.1. Introduction to the Algorithm

Let us start by concentrating on the calculation of numerical solutions of the integrodifferential equation:
where denotes the hidden solution, are constants, and and are given nonnegative functions.

The first integral on the right-hand side of (42) is the convolution . We assume that satisfies the integrodifferential equation (42) with initial and boundary conditions , where is a given function.
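The convolution term can be checked numerically before bringing in the network. Below, a simple trapezoidal quadrature evaluates the convolution on [0, u] for the illustrative (assumed) choice f(x) = exp(-x) and v identically one, whose convolution is 1 - exp(-u) in closed form.

```python
import numpy as np

def convolve_on_grid(f, v, u, n=2000):
    """Trapezoidal approximation of the convolution (f * v)(u),
    i.e. the integral of f(x) * v(u - x) over x in [0, u]."""
    x = np.linspace(0.0, u, n)
    y = f(x) * v(u - x)
    h = x[1] - x[0]
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Illustrative check: f(x) = exp(-x), v = 1  =>  (f * v)(u) = 1 - exp(-u).
u = 1.5
approx = convolve_on_grid(lambda x: np.exp(-x), lambda x: np.ones_like(x), u)
```

In the PINN below, this quadrature is replaced by a second network output that learns the convolution jointly with the solution.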

We define the function as follows:

We then choose to jointly approximate the latent function and the convolution by an M-layer fully connected neural network with N neurons per layer, which takes as input and has two outputs. Let and be the two outputs, used to approximate the true solution of equation (42) and the convolution , respectively, where denotes the randomly initialized parameters (weights and biases) of the network. This prior assumption, together with equation (43), results in a physics-informed neural network (PINN) that takes as input and outputs , which approximates via (43). The PINN is obtained by applying automatic differentiation to differentiate compositions of functions and by using the second output for the integral term; it shares its parameters with the fully connected network representing and , albeit with different activation functions due to the action of the integrodifferential operator. The shared parameters between the two networks can be learned by minimizing the following mean squared error loss function.

The mean squared error loss function is defined as follows:
where

Here, are the collocation points in the interior of the computational domain for , and denote the initial and boundary training data on , accounting for both the boundary and the initial condition. Calculating requires the derivatives of the output of the fully connected neural network as well as the output .

In the above PINN, an activation function must be employed in the fully connected neural network to capture the relationship between input and output; we use the hyperbolic tangent function. The principle of the physics-informed neural network methodology is to minimize the loss function by training the network so that its output approaches the true solution of equation (42). Once training is complete, the network is evaluated at the points of interest and the numerical solution of equation (42) is obtained. The structure of the proposed PINN is illustrated in Figure 3, and the workflow for obtaining a numerical solution of equation (42) with the PINN is as follows:
Step 1: Generate the initial and boundary training data on and the collocation points in the interior of the computational domain for . The total number of training data is relatively small (a few dozen up to a few hundred points).
Step 2: Specify the optimizer, the tolerance, and the number of iterations.
Step 3: Construct a fully connected neural network with randomly initialized parameters . The model consists of a series of fully connected operations with an activation operation between successive ones. Specify the number of fully connected operations and the number of neurons in each. The first fully connected operation has an input channel corresponding to the input , and the last one has two outputs and . Define the parameters for each of the operations, initializing those of the first, the intermediate, and the final fully connected operations, respectively.
Step 4: Construct a physics-informed neural network by substituting and into the governing equation via automatic differentiation and arithmetic operations.
Step 5: Create the loss function as shown in equation (44).
Step 6: Train the fully connected neural network to find the best parameters by minimizing the loss function .
Step 7: Obtain the numerical solution by substituting the resulting parameters into the neural network .
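Because the coefficients of equation (5) are not reproduced in this excerpt, the seven steps above can be illustrated on a deliberately simple stand-in problem: a one-hidden-layer tanh network trained with L-BFGS (via SciPy, with finite-difference gradients standing in for automatic differentiation) to solve the toy equation v'(u) = -v(u), v(0) = 1, whose exact solution is exp(-u). Everything here, network size included, is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy stand-in for Steps 1-7: learn v with v'(u) = -v(u), v(0) = 1 on [0, 2].
H = 10                                    # hidden neurons (Step 3)
u_f = np.linspace(0.0, 2.0, 40)           # collocation points (Step 1)

def network(p, u):
    """Return v(u) and v'(u); the derivative is exact via the chain rule,
    playing the role of automatic differentiation (Step 4)."""
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    z = np.tanh(np.outer(u, w1) + b1)     # hidden activations, shape (n, H)
    v = z @ w2 + b2
    dv = ((1.0 - z**2) * w1) @ w2         # chain rule through tanh
    return v, dv

def loss(p):                              # residual MSE + initial-condition MSE,
    v, dv = network(p, u_f)               # in the spirit of (44) (Step 5)
    v0, _ = network(p, np.array([0.0]))
    return np.mean((dv + v) ** 2) + (v0[0] - 1.0) ** 2

p0 = 0.1 * rng.standard_normal(3 * H + 1)           # random init (Step 3)
sol = minimize(loss, p0, method="L-BFGS-B",         # training (Step 6)
               options={"maxiter": 2000})
v_hat, _ = network(sol.x, np.array([1.0]))          # prediction (Step 7)
```

After training, the prediction at u = 1 should lie close to exp(-1); in the paper's setting, the same loop would use the deeper network of Step 3 and the residual of equation (42), with the convolution supplied by the second network output.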

The rest of this section is organized as follows. In the remainder of Section 4.1, we use a numerical example to demonstrate the effectiveness and accuracy of the proposed PINN method for obtaining the numerical solution of an integrodifferential equation. Section 4.2 presents two numerical examples solving equations (5) and (6), respectively, the first of which again uses the PINN method. In particular, since the integrodifferential equations in these examples have only an initial condition and no boundary condition, we impose the boundary condition for the PINN.

By the conditional probability and total probability formulas, Dickson [31] showed that the ruin probability in the Sparre Andersen risk model, with a given probability density function of the interclaim time, satisfies the integrodifferential equation with initial condition given by
where is the solution of the algebraic equation
and is the individual claim amount distribution function with positive finite mean . In the case of the exponential distribution , Dickson [31] obtained from equations (46) and (47) an explicit solution of the ruin probability , expressed as follows:

We use MATLAB R2022b to examine the effectiveness and accuracy of the PINN method for obtaining the numerical solution of equation (46). Suppose . By (49), the exact solution of the ruin probability is . Given the computational domain, we select 25 equally spaced points to enforce the initial condition , select 25 equally spaced points to enforce the boundary condition , and randomly select 10,000 points across the computational domain at which the network outputs are required to satisfy equation (46). This dataset is then used to train a 9-layer fully connected neural network with 20 neurons per hidden layer and the hyperbolic tangent activation function by minimizing the mean squared error loss function (44) with the L-BFGS optimizer (see [16]). We thereby obtain a numerical solution of equation (46) with the above PINN.

The evolution of the loss function value is shown in Figure 4. Figure 5 displays the results of our experiment, comparing the exact solution with the numerical solution of equation (46). Figure 6 and Table 1 give the errors of the approximate solution obtained with the PINN. The results show that the numerical solution agrees well with the exact solution for different values of the initial surplus, in particular when the initial surplus is greater than 10.

4.2. Numerical Solutions of the Integrodifferential Equation

Example 1. Suppose , and . Solving equation (23) gives and . We use the PINN method described above to obtain the numerical solution of equation (5) for , with the following results:
(1) Choose . In this case, the expected discounted penalty function is the extension of the Laplace transform of the time of ruin. By equations (35)–(39), the initial condition of equation (5) is . The evolution of the loss function is shown in Figure 7(a), and the numerical solution of is shown in Figure 8(a).
(2) Choose . In this case, . The evolution of the loss function is shown in Figure 7(b), and the numerical solution of is shown in Figure 8(b).
(3) Choose , which Gerber and Shiu [1] interpreted as the payoff at exercise of a perpetual American put option. In this case, . The evolution of the loss function is shown in Figure 7(c), and the numerical solution of is shown in Figure 8(c).

Example 2. Suppose and the individual claim amount distribution is with density function
where (see Chapter 1, Examples 18 and 19 of [24]).
Solving equation (23) gives and . These satisfy the conditions of Proposition 1. Hence,
For the asymptotics of , we test the following three cases:
(1) Choose . In this case, it follows from (41) that
(2) Choose . In this case, it follows from (41) that
(3) Choose . In this case, it follows from (41) that
We use Mathematica 12.0 to examine the asymptotic formula (41). The numerical results for the asymptotics of are reported in Table 2.

5. Conclusion

In this paper, we derive an integrodifferential equation and a defective renewal equation for the expected discounted penalty function with a threshold dividend strategy and a stochastic discount rate, and we then find the solutions of these equations. We propose, for the first time in risk theory, the PINN method for finding the numerical solution of the integrodifferential equation. Given initial and boundary conditions, this method quickly yields numerical results for the expected discounted penalty function. The examples in this paper demonstrate the effectiveness of the PINN method, with numerical solutions very close to the exact solution. In the future, we hope to apply this method to find numerical solutions of the expected discounted penalty function for arbitrary individual claim amount distributions.

Data Availability

The data used to support this study are included within the article.

Additional Points

Use of AI Tools Declaration. The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the Natural Science Research Project of Anhui Jianzhu University (Grant no. KJ0514).