Abstract

Handwritten signatures are among the most extensively used behavioral biometrics for authentication, and forgeries of this biometric are widespread. Biometric databases are also difficult to access for training purposes owing to privacy concerns, which severely limits the efficiency of automated authentication systems. High-efficiency verification of static handwritten signatures therefore remains an open research problem. This paper proposes an innovative introselect median filter for preprocessing and a novel Gaussian gated recurrent unit neural network (2GRUNN) as a classifier for designing an automatic verifier for handwritten signatures. The proposed classifier achieves an FPR of 1.82% and an FNR of 3.03%. The efficacy of the proposed method is compared with various existing neural network-based verifiers.

1. Introduction

For the authentication of an individual, signatures are the most extensively used biometric identity [1]. For a long time, signatures have been officially recognized as a mark of identification and authority in practically all economic, social, and legal documents [2]. As a result, various attempts have been made to authenticate handwritten signatures so as to protect the confidentiality of documents and the safety of business and legal transactions [3]. Signature verification, in general, is the process of determining whether a handwritten signature is authentic or forged [4]. However, signature verification is a very challenging task owing to several obstacles, such as variation in the different portions and forms of each signature signed by the same person, differences in the size and orientation of signature images, and noise present in the signature images. In addition, separating authentic and forged signature images is a difficult element of the verification procedure [5]. Several automatic verification technologies have been devised to reduce these hazards.

Automatic signature verification techniques are often divided into two groups [6]: online systems and offline systems [7]. In an online system, special equipment, such as a digital pen and a digitizer, is used to collect data. It generates dynamic data such as signature location, pen pressure, velocity, and speed, and the verification procedure is carried out in real time [8]. In offline systems, by contrast, data gathering is done by scanning individual handwritten signatures, and verification is performed using features taken from the signature image [9]. Because only static information, such as geometrical characteristics, is accessible in the scanned picture, all dynamic information vanishes [10]. As a result, offline signature verification is a more challenging process. Because forgers seek to imitate authentic signatures, distinguishing skilled forgeries from genuine signatures is complex. Each writer has his or her own signing style [11], which can be influenced by a variety of factors, including culture, handwriting ability, age, health, and physical and emotional condition [12]. As a result, signatures produced by the same writer will never be identical in appearance; this property is termed intrapersonal variability, and it makes it difficult to distinguish a writer's genuine variation from a skilled forgery [13].

Several approaches have been put forward in the offline handwritten signature verification field to mitigate such complications. However, most prior techniques were limited by sparse data: the scarcity of training samples is regarded as one of the most critical issues in the offline signature verification procedure [14]. When the ratio of training samples to feature dimensionality is low, the statistical model parameter estimates are inaccurate and the classification results are less than desirable [15, 16]. Several machine-learning methods have been created to improve the efficiency of the offline signature verification procedure. These techniques, however, are extremely susceptible to transformations such as rotation and occlusion. Furthermore, the visual elements increase the complexity of these networks, resulting in excessive computation [17]. As a result, this paper proposes an introselect median filter (IMF) for noise removal and a new Gaussian gated recurrent unit neural network (2GRUNN) classifier for signature verification to address these issues.

Automatic signature verification is a research topic that has already seen extensive exploration, but several recent developments remain promising candidates for intensive study.

1.1. Multiscript Variation Systems

Typically, signatures are written in a single script or in indistinguishable patterns, and most verifiers make use of feature sets that are unique to each script. As a result, these verifiers do not function well for signatures with numerous scripts or for scripts that were not included in the training set. Separate databases with various scripts are frequently utilized to avoid such situations [18]. An analysis of single-script and multiscript signatures was provided by Das et al. [19], which also suggested a fresh method for creating a multiscript signature training database. There is still work to be done on both online and offline signature databases covering several scripts.

1.2. Acquisition Methods

Typically, data are gathered in either online or offline mode for automatic signature verifiers [20]. In the offline mode, images of signatures are scanned or photographed from paper. In online systems, signatures are collected using digital tools such as digitizing pens or tablets. These digital images are then put through image preprocessing to remove noise, background effects, and distortions introduced by the data-collecting process. Many kinds of digital tablets can be utilized for this, including passive, active, optical, acoustic, and capacitive devices. Online signatures can also be obtained using digital pens. Certain criteria must be observed to preserve consistency among data collected using various devices.

1.3. Signatures in Medical Applications

Researchers and psychologists have long been interested in the use of handwriting and signatures to study human behavior and medical issues. A signature is a complex activity involving neuromotor control [21]. Moreover, it requires kinematic, perceptual-motor, and cognitive skills. Several activities, such as eye–hand coordination, visual motor planning, kinematic activities, muscle movement, and motor planning, are necessary for writing. As a result, psychiatric and neurodegenerative illnesses have an impact on handwriting [22–24]. According to recent studies, there is a direct link between poor handwriting and conditions such as Alzheimer's disease, Parkinson's disease, and dysgraphia. Handwriting can also be used to track how well patients respond to treatment and to detect such disorders in children at a young age.

1.4. Synthetic Signature Generation

The gathering of datasets for training purposes is one of the main challenges that automatic signature verification faces [25]. Data gathering for automatic signature verifiers is a particularly difficult process because of regulatory restrictions and people's reluctance to freely supply their signatures, and it consumes both time and money. Many researchers have suggested the use of synthetic signatures to overcome this limitation. Moreover, a number of artificial generator models can simulate both intrapersonal and interpersonal variability. The production of unrealistic data, however, remains an issue for synthetic signature generation. This is currently the most active and successful area of research for automatic verifiers.

The remaining paper is organized as follows: Section 2 surveys the related works regarding the proposed system. Section 3 explains the proposed signature authentication framework. Section 4 illustrates the results and discussions for the proposed method based on efficacy metrics. Finally, Section 5 concludes the paper with the future scope.

2. Related Works

A characteristic of a person's signature is that it is occasionally inconsistent; even when produced repeatedly by a single person, it differs to some extent. Offline signature verification is therefore one of the most difficult areas of pattern recognition. Because the signature is a behavioral biometric attribute that may be copied, the researcher faces a hurdle in building a system that overcomes both intrapersonal and interpersonal variances. The following summarizes some such studies and earlier publications; the literature survey is summarized in Table 1.

Liu et al. [26] introduced a region-based deep convolutional Siamese network with metric learning that could be used in both writer-dependent (WD) and writer-independent (WI) situations. By extracting features and learning the similarity measure from small regions rather than whole signature images, a Mutual Signature DenseNet (MSDN) was built to capture minute yet discriminative information. For the final verification decision based on local region comparison, the similarity scores of different regions were combined. In tests on the publicly available CEDAR and GPDS datasets, the approach achieved 6.74% and 8.24% EER in WI scenarios, respectively, and 1.67% and 1.65% EER in WD scenarios, respectively. However, in the face of uneven illumination, occlusion, and other adverse circumstances, the method failed to recognize the signature.

Parcham et al. [27] developed a signature verification model that captured the spatial features of signatures, improved the feature extraction phase, and reduced the network's complexity using a combination of a convolutional neural network (CNN) and a capsule neural network (CapsNet). Furthermore, a training mechanism was developed in which a single network was trained concurrently on two images at the same level, resulting in a 50% reduction in training parameters; this mechanism did not require two distinct networks to learn the features. Finally, CBCapsNet, an amalgam of the developed CNN–CapsNet systems, was presented as a composite backbone architecture. According to the evaluation results, the model improved accuracy and outperformed commonly used methods for signature verification. However, the technique had a flaw: it produced a high proportion of false positives.

Ruiz et al. [17] created an offline signature verification method that is writer-independent and resistant to fraud. The system was trained using a single authentic signature from each writer. The signature image dataset was first preprocessed, and the samples were divided into distinct training, validation, and test subsets. The signature pairings were created using four distinct schemes: the original GAVAB training set, the expanded GAVAB training set, the devised method of synthetic signatures, and the GPDS synthetic dataset. For all of the schemes, the signature pair-generating method was the same. The best verification results were achieved when the original and synthetic signatures were integrated and trained together. The precision of this framework, however, is not adequate.

Diaz et al. [4] proposed a cognitively inspired program that duplicated offline signatures. Throughout the signature process, the software employed a combination of nonlinear and linear alterations to simulate the intrapersonal variability of the human spatial cognitive map and motor system. For the distortion of the inked image, a piecewise sine wave function was used to induce intracomponent variability, which resulted in distinct duplications. In the binary image, each connected region was labeled independently, and intercomponent variability was addressed by applying separate horizontal and vertical displacements to each labeled component. After that, the overall inclination of the signature was changed and a duplicate signature was obtained. The findings showed that the strategy preserved the signer's intrapersonal variability and enhanced the classifier's identification of the user.

Al-Hmouz et al. [28] proposed a novel method for verifying dynamic signatures called probabilistic dynamic time warping. In the verification phase, the approach employed dynamic time warping to compute distances. The signatures were split into numerous segments, and the likelihood of each segment was calculated using a relative distance associated with two threshold levels. Bayes' rule was used to combine all segment probabilities to arrive at the final decision. Experiments revealed an improvement in the error rate for random forgeries. However, the approach had a flaw in that its feature selection was poor.

Ghosh [29] proposed an offline signature verification and recognition system using recurrent neural networks. The architecture is built around multiscript signatures, and Hindi, Bengali, and English signatures have all been tested. To evaluate the system's performance, a variety of datasets were employed, including MCYT, GPDS300, CEDAR, BHSig260 Hindi, BHSig260 Bengali, and GPDS synthetic. In this method, the change in direction is determined by dividing a signature image into eight octants. The features employed are change in trajectory, trajectory slope, trajectory waviness, and center of mass. All of the datasets demonstrate a high accuracy rate for the system: for the various datasets and feature sets, the average accuracy hovers around 94%.

Zheng et al. [30] suggested an offline, multiscript technique for signature verification based on micro-deformations. Hindi, Persian, and English datasets have all been used for testing; the four datasets utilized are BHSig260, UTSig, CEDAR, and SyntheticGPDS. The foundation of the system is the detection of micro-deformations, a feature that makes it possible to distinguish between authentic and fake signatures. Micro-deformations are extracted using CNNs, with max pooling applied for their detection. Both intrapersonal and interpersonal variability are taken into account in testing the system. When tested with the four publicly available databases, the solution performs well.

Tsourounis et al. [31] suggested a writer-dependent learning strategy for effectively developing offline signature verifiers. Instead of using handwritten signatures, it trains a CNN using handwritten text, thereby resolving the issues of data scarcity and privacy, since handwritten text mimics the characteristics of handwritten signatures. This initial training can be used as the feature extractor for the signature verifier or as the initialization for fine-tuning the CNN's parameters. The system verifies signatures using three publicly accessible databases: CEDAR, MCYT-75, and GPDS300 gray. The offline signature verifier's performance increases greatly with the addition of the feature mapping stage, which restructures the feature space based on metric learning.

Wei et al. [32] suggested a method that performs identification and signature verification through the use of spline interpolation and two neural networks. In the feature extraction stage, both global and local features are extracted; sixth-degree splines are found to be the most suitable in the process. The verifier, a CNN, achieves an accuracy of 87%. The system is resistant to counterfeiting since it is built around the identification of critical points.

The review of recent work suggests that deep learning algorithms are promising in the field of offline signature verification. In [33], GRUNN was used as the classifier network and showed high potential for this role. The Scopus database was used to identify recent works on signature verification based on long short-term memory (LSTM). The search string used was “TITLE-ABS-KEY (“Signature Verification” AND “LSTM”) AND (EXCLUDE (DOCTYPE, “cp”) OR EXCLUDE (DOCTYPE, “cr”))”. Initially, the result showed 13 documents, but after the exclusion of conference proceedings and conference reviews, it was reduced to six, since the remaining seven did not match the theme of the present study. The documents found to be related to the theme are listed in Table 2.

These data give the insight that LSTM is a potential choice for signature verification that has not yet been fully explored. Moreover, different variations of the basic LSTM network can yield new results in combination with different types of features, while also reducing the computational complexity identified as a significant problem in [39]. Thus, in this paper, this direction is explored further by using a Gaussian distribution in the baseline GRUNN together with various local and global features.

3. Proposed Methodology

The overall form of the signature is more essential than fine details when it comes to signature verification. As a result, a system that accommodates these variances and reliably separates fake and authentic signatures is necessary. Hence, for signature verification, this research presents an IMF technique for noise removal and a 2GRUNN classifier. Signature duplication, preprocessing, noise reduction, normalization, segmentation, feature extraction, feature selection, and classification are all parts of the proposed work. Figure 1 depicts the structural architecture of the proposed signature verification system.

3.1. Image Preprocessing

If the input signature photos are red, green, and blue (RGB) color images, they are transformed into gray-scale images for faster signature recognition. This conversion is necessary because a gray-scale picture is a basic form of an image that comprises only shades of gray. As a result, transforming the color image to gray scale leaves less pixel information to process.

3.2. Image Scaling

Because the signature in the image is often lengthy and takes up the majority of the image, it is scaled to a shorter length. The technique of resizing a picture to change its visual appearance is known as image scaling. By altering the number of pixels in the input picture, scaling enlarges or shrinks the image size.
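A minimal sketch of these two preprocessing steps is given below, assuming OpenCV is available; the target size is an illustrative value, since the exact dimensions used in this work are not specified here.

import cv2

def preprocess_signature(path, target_size=(256, 128)):
    # Load the signature image in color (OpenCV reads in BGR order).
    image = cv2.imread(path, cv2.IMREAD_COLOR)
    # Convert the color image to a single-channel gray-scale image.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Rescale the image; INTER_AREA interpolation is suited to shrinking.
    return cv2.resize(gray, target_size, interpolation=cv2.INTER_AREA)

The resized gray-scale image is then passed on to the noise-removal stage described next.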

3.3. Noise Removal Using IMF

After preprocessing, the noise in the images is eliminated using the IMF. The median filter (MF) is the most extensively used digital filtering technique for removing picture noise while preserving the image's edges. In this approach, every pixel value is replaced with the median of the pixel values in a square window around it: the window pixels are sorted and the center pixel is replaced with the median of the sorted series. The disadvantage of this filtering strategy is that the median calculation for each window takes additional time and processing effort. In the proposed work, the median calculation is therefore replaced by the introselect selection approach, which takes less time and minimizes the computational complexity of the process. The suggested approach, termed the IMF, proceeds as follows:
(i) Introselect is a mix of the quickselect and median-of-medians selection techniques. It begins with quickselect of the ith smallest picture pixel and then switches to the median-of-medians algorithm.
(ii) In the median-of-medians approach, the unsorted picture pixels are first separated into subgroups of fixed size.
(iii) The medians of these subsets of picture pixels are then computed, and the true median is estimated using quickselect once again.
(iv) The procedure is repeated until the smallest subset of pixels is found. Finally, the resulting pixel value replaces the noisy pixel and is used for further processing.

Algorithm 1 describes the working of the proposed IMF.

The inputs and outputs of gray conversion, scaling, and noise removal operation with IMF are shown in Figure 2.

Pseudocode of proposed IMF
Input: Preprocessed image I, window size w
Output: Noise-free image J
Begin
Initialize the output image J
For each pixel p in I
   Extract the w × w window W centered at p
   Partition the pixels of W into subsets of fixed size
   For each subset S in W do
      Estimate the median of S using quickselect
      Calculate the median of the subset medians
      Recognize the true median of W by recursing with quickselect
   End for
   Attain the smallest subset of pixels and its median m
   Replace p with m in J
End for
End

The median-of-medians method used by the IMF is renowned for its resistance to outliers, which makes it effective at reducing noise, especially impulse or salt-and-pepper noise. In comparison with conventional median filtering, the filter can offer improved noise reduction by estimating the true medians using the quickselect method. While the Gaussian filter can oversmooth details and the MF can blur or smooth image edges, the IMF strives to preserve edges and minute features. By combining the benefits of quickselect and the median-of-medians algorithm, it can lower noise while preserving the sharpness and clarity of edges in the filtered image. The IMF employs a fast selection process, which significantly decreases the processing time. These advantages make the IMF a better choice as the preprocessing filter in signature verification.
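As an illustration, a minimal NumPy sketch of such a selection-based median filter is given below. It assumes a square window over an edge-padded gray-scale image; np.partition, which NumPy documents as using an introselect-based selection, stands in for the introselect median step, and the window size is an illustrative choice rather than the value used in this work.

import numpy as np

def introselect_median_filter(image, window=3):
    # Median filtering in which the window median is obtained by selection
    # (np.partition uses introselect) instead of fully sorting each window.
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")     # replicate border pixels
    out = np.empty_like(image)
    k = (window * window) // 2                   # index of the median element
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            block = padded[i:i + window, j:j + window].ravel()
            out[i, j] = np.partition(block, k)[k]   # selection-based median
    return out

The window size acts as a tuning parameter: a 3 × 3 window removes isolated salt-and-pepper pixels while leaving pen strokes largely intact, whereas larger windows suppress more noise at the cost of thin strokes.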

3.4. Image Normalization with Min–Max Model

The height and breadth of the acquired image may vary depending on the camera or scanner used. As a result, the image normalization approach is employed to counteract these variances, hence improving the overall classification accuracy. Here, the picture pixels are adjusted using min–max normalization: the minimum pixel value is mapped to zero, the highest pixel value is mapped to one, and all other values between the minimum and maximum intensities are replaced with a decimal number between 0 and 1. Next, from this normalized image, the signature regions are segmented using the range Sauvola method (RSM).
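Assuming the standard form of min–max scaling, the normalized intensity of a pixel at position $(x, y)$ can be written as

$$I_{\text{norm}}(x, y) = \frac{I(x, y) - I_{\min}}{I_{\max} - I_{\min}},$$

where $I_{\min}$ and $I_{\max}$ denote the minimum and maximum pixel intensities of the image. In NumPy this corresponds to normalized = (gray - gray.min()) / (gray.max() - gray.min()), assuming gray is a floating-point array.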

To distinguish a fake signature from a genuine one, discriminative features are essential. Because each image has its own shape characteristics, classifying signatures effectively is difficult. As a result, a set of features needed to distinguish the signatures is extracted for classification. The signature characteristics are magnitudes that may be derived from the whole pen path, with each feature described as a vector of values. The features include corner detection using the Harris detector, the ORB feature (Oriented FAST and Rotated BRIEF), signature area, signature height-to-width ratio, slope and slope direction, skewness of the signature, texture features, center of mass, normalized area of the signature, horizontal length, number of pen-ups, curvature, average curvature per stroke, number of strokes, SIFT, and crest–trough parameter features. The extracted features $f_i$ are modeled as

$$F = \{f_1, f_2, \ldots, f_N\}, \qquad (3)$$

where, in Equation (3), $N$ specifies the number of extracted features.
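A minimal sketch of three of the global features listed above (normalized signature area, height-to-width ratio, and center of mass) is given below. It assumes the segmented signature is available as a binary NumPy array in which signature pixels are 1 and background pixels are 0; the exact definitions used in this work may differ.

import numpy as np

def global_signature_features(binary):
    # Compute a few of the listed global features from a binary signature image.
    ys, xs = np.nonzero(binary)                     # coordinates of signature pixels
    height = ys.max() - ys.min() + 1                # bounding-box height
    width = xs.max() - xs.min() + 1                 # bounding-box width
    area = binary.sum() / float(height * width)     # normalized signature area
    aspect = height / float(width)                  # height-to-width ratio
    center_of_mass = (ys.mean(), xs.mean())         # (row, column) centroid
    return {"normalized_area": area,
            "height_to_width": aspect,
            "center_of_mass": center_of_mass}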

3.5. Signature Classification Using 2GRUNN

GRUNN is a variant of the LSTM neural network that simplifies the LSTM structure by combining its three gates into two. In [33], GRUNN was used as the classifier network and showed high potential for this role. Each GRU node in GRUNN contains two gates: an update gate and a reset gate. The update gate calculates how much information is carried into the current state, while the reset gate is employed to ignore previous state information. The network has three layers: an input layer, an output layer, and a hidden layer, and the GRU neurons make up the hidden layer. In the traditional GRUNN, however, the typical initialization strategy controls the distribution of the forgetting gates, and such initialization may introduce learning dependence and time-scale volatility. As a result, the values of the forgetting gate activations are initialized using a Gaussian (G) distribution approach, which also requires only a few hyperparameters compared with typical gate initializations. 2GRUNN refers to the use of this Gaussian distribution in the baseline GRUNN.

In the proposed classifier, the selected features are fed to the 2GRUNN. At time $t$, the update gate, reset gate, and standard 2GRUNN unit are evaluated as follows:

$$z_t = G\left(W_z x_t + U_z h_{t-1}\right),$$
$$r_t = G\left(W_r x_t + U_r h_{t-1}\right),$$
$$\tilde{h}_t = \tanh\left(W_h x_t + U_h \left(r_t \odot h_{t-1}\right)\right),$$
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t.$$

Here $r_t$ is the reset gate output, $G(\cdot)$ is the Gaussian distribution function, $W_r$ and $U_r$ define the reset gate weight values, $h_{t-1}$ indicates the hidden state of the previous layer, $z_t$ is the update gate output, $W_z$ and $U_z$ are the weight values of the update gate, $\tilde{h}_t$ is the candidate hidden state produced by the activation, $W_h$ and $U_h$ specify the weight values between the input and the candidate state, and $h_t$ is the 2GRUNN output unit used to update the current state. The network's Gaussian distribution function is

$$G(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),$$

where $\sigma$ denotes the standard deviation and $\mu$ is the mean value of the function. Likewise, the $\tanh$ activation function is represented as

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}.$$
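To make the recurrence concrete, a minimal NumPy sketch of a single GRU step is given below. It uses the conventional logistic (sigmoid) gate activation and applies the Gaussian distribution, with assumed hyperparameters mu and sigma, to the initialization of the update ("forgetting") gate bias; this is one plausible reading of the Gaussian gate mechanism described in this section, and the paper's exact formulation may instead use the Gaussian function as the gate activation itself. All dimensions are illustrative.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianGRUCell:
    # One GRU step; the update-gate bias is drawn from N(mu, sigma^2).
    def __init__(self, input_dim, hidden_dim, mu=1.0, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        self.Wz = rng.uniform(-scale, scale, (hidden_dim, input_dim))
        self.Uz = rng.uniform(-scale, scale, (hidden_dim, hidden_dim))
        self.Wr = rng.uniform(-scale, scale, (hidden_dim, input_dim))
        self.Ur = rng.uniform(-scale, scale, (hidden_dim, hidden_dim))
        self.Wh = rng.uniform(-scale, scale, (hidden_dim, input_dim))
        self.Uh = rng.uniform(-scale, scale, (hidden_dim, hidden_dim))
        self.bz = rng.normal(mu, sigma, hidden_dim)   # Gaussian-initialized update-gate bias
        self.br = np.zeros(hidden_dim)
        self.bh = np.zeros(hidden_dim)

    def step(self, x, h_prev):
        z = sigmoid(self.Wz @ x + self.Uz @ h_prev + self.bz)            # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h_prev + self.br)            # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h_prev) + self.bh)  # candidate state
        return (1.0 - z) * h_prev + z * h_tilde                          # new hidden state

A full 2GRUNN classifier would unroll such a cell over the feature sequence and attach a dense output layer with two classes (genuine versus forged).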

Finally, the classifier generates two types of output: forgery signatures and legitimate signatures. By distinguishing the forged signature, this approach will be useful in identifying a person’s signature more efficiently and precisely.

4. Results and Discussion

In this section, an extensive study of the proposed framework's outcomes is carried out. An efficacy analysis and a comparison analysis are used to demonstrate the effectiveness of the work. The suggested approach is implemented in MATLAB, and the data are taken from the CEDAR dataset, which is freely accessible on the Internet.

4.1. Dataset Description

CEDAR is a signature verification database that stores offline signatures. Each of the 55 signers contributed 24 signatures, totaling 1,320 genuine signatures. Forgers were instructed to counterfeit the signatures of three different authors eight times each, again totaling 1,320 forgeries. Each signature was scanned in gray scale at 300 dpi and binarized using a gray-scale histogram. Image preprocessing included two steps: salt-and-pepper noise reduction and slant normalization. For each writer, the database therefore contains 24 genuine signatures and 24 forgeries.

4.2. Performance Analysis of Proposed IMF

To determine its impact, the performance of the proposed IMF is validated against various existing techniques, namely the MF, the bilateral filter, and Gaussian filtering, using performance metrics such as peak signal-to-noise ratio (PSNR), mean-square error (MSE), and structural similarity index (SSIM).
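For reference, the standard definitions of these metrics are recalled below for an $M \times N$ reference image $I$ and a filtered image $K$; the exact parameterization used in this work is assumed to follow these conventions:

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j) - K(i,j)\right]^2, \qquad \mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}}\right),$$

where $\mathrm{MAX}_I$ is the maximum possible pixel value (255 for 8-bit images). SSIM compares local luminance, contrast, and structure between the two images and takes values up to 1, with 1 indicating identical images.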

Table 3 compares the suggested IMF with several current approaches, namely the MF, bilateral filter, and Gaussian filtering, on the basis of PSNR, MSE, and SSIM. A greater PSNR value indicates higher image quality; the suggested IMF obtains a PSNR of 39.0921, whereas the conventional approaches achieve an average PSNR of 28.7246. The MSE metric estimates the image quality loss caused by data compression and other processing techniques, and a lower MSE value indicates a more effective model. The suggested IMF produces an MSE of 0.000565, whereas the existing approaches obtain an average value of 0.004445. In addition, the suggested work was evaluated in terms of the SSIM metric: the suggested IMF has an SSIM of 0.972293, whereas the current approaches have an average SSIM of 0.857702, which is poor in comparison. It is evident from the comparison that the suggested IMF is a low-error model that performs better within the signature verification framework. The numerical values of PSNR, MSE, and SSIM for all the filters under study are summarized in Table 3.

4.3. Performance Analysis of Proposed 2GRUNN

The proposed 2GRUNN is compared to existing techniques such as GRUNN, recurrent neural network (RNN), and deep neural network (DNN) in terms of sensitivity, specificity, accuracy, precision, recall, F-measure, false positive rate (FPR), false negative rate (FNR), and Matthews correlation coefficient (MCC).
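For clarity, the error-rate and correlation metrics are defined below in terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN); these are the standard confusion-matrix definitions and are assumed to match the ones used in this work:

$$\mathrm{FPR} = \frac{FP}{FP + TN}, \qquad \mathrm{FNR} = \frac{FN}{FN + TP}, \qquad \mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}.$$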

Figure 3 displays the performance of the proposed 2GRUNN and the other current approaches in terms of sensitivity, specificity, and accuracy. The proposed 2GRUNN achieves a sensitivity of 96.97%, a specificity of 98.18%, and an accuracy of 97.58%, whereas the existing techniques GRUNN, RNN, and DNN achieve average values of 92.1% sensitivity, 92.72% specificity, and 92.41% accuracy. The proposed 2GRUNN therefore attains better metric rates than the existing works, and the suggested technique determines whether a signature is real or counterfeit with high accuracy.

Table 4 lists the values of performance measures such as precision, recall, and F-measure for the proposed 2GRUNN and other current works such as GRUNN, RNN, and DNN. A model's merit is judged by higher precision, recall, and F-measure rates. The suggested approach achieves 98.16% precision, 96.97% recall, and a 97.56% F-measure, whereas the current works achieve average precision, recall, and F-measure rates of 92.64%, 92.1%, and 92.36%, respectively, which are lower than those of the proposed work. As a consequence, the suggested strategy outperforms previous state-of-the-art methods and produces more notable outcomes in a variety of challenging situations.

Table 5 compares the proposed 2GRUNN's FPR, FNR, and MCC rates with those of current works such as GRUNN, RNN, and DNN. Low FPR and FNR rates, together with a high MCC value, define a model's merit. The proposed work's FPR and FNR rates are low, at 1.82% and 3.03%, respectively, and it achieves an MCC rate of 95.16%, which is greater than that of the previous methodologies. The current methods yield FPR and FNR rates that vary from 4.24% to 9.7% and from 5.45% to 12.12%, respectively. As a result, the suggested technique beats existing modern methods in the offline signature verification process and produces superior results.

Table 6 presents a comparative analysis of various other methods and the proposed method; each of the listed works used the CEDAR dataset for training. From the FAR, FRR, and EER values, it can be concluded that the proposed method is promising for signature verification, which demonstrates the novelty of the research.

5. Conclusion

A novel offline signature verification scheme has been proposed on the basis of a 2GRUNN. The preprocessing phase consists of gray conversion, scaling, noise removal using the IMF, and image normalization. The classification process effectively determines whether a signature is authentic or counterfeit, and the proposed approach also reduces computational complexity, which has been identified as a significant problem in [39]. 2GRUNN is an excellent choice for offline signature verification owing to its ability to model sequential data effectively, capture temporal dependencies, and address the challenges posed by variable-length signature sequences. By maintaining a memory state, GRUs can retain information from earlier time steps, enabling them to capture the intricate relationships between different pen strokes, an ability that is essential for accurately distinguishing genuine signatures from forgeries. The gates control the flow of information within the network, allowing the model to update and forget information selectively. This gating mechanism is particularly valuable for signature verification, as it enables the model to focus on relevant pen strokes while filtering out noise or irrelevant details. Training this model is faster, allowing for quicker iterations during development, which is particularly valuable when dealing with large datasets and complex models; the prediction process of a trained model is also efficient, making it suitable for signature verification. The experimental analysis, comprising a performance analysis and a comparison study of the offered methodologies in terms of various performance metrics, validates the proposed algorithm's efficacy. The new method can deal with a variety of uncertainties and produce more promising outcomes. On the publicly accessible CEDAR dataset, the suggested technique obtained 96.97% sensitivity, 98.18% specificity, and 97.58% accuracy. Since the proposed method is a biometric authentication system, its efficiency is also assessed with false positive and false negative rates: the system shows FPR and FNR values of 1.82% and 3.03%, respectively. Overall, the suggested framework outperforms current state-of-the-art approaches while also being more dependable and resilient. The study will be expanded in the future with enhanced neural networks and a focus on different types of biometrics.

Data Availability

The data used to support the findings of the study can be obtained from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.