Abstract

A security violation is referred to as a personal data breach when it leads to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data that have been transmitted, stored, or otherwise processed. Based on the principles of information security, we can define a breach of confidentiality as the unauthorized or accidental disclosure of, or access to, personal data; a breach of integrity as the unauthorized or accidental alteration of personal data; and a breach of availability as the unauthorized or accidental loss of access to, or destruction of, personal data. This paper proposes the design of an intelligent consent policy management system based on the Markov chain approach. It is a novel system that analyzes the present state of the consent elements, forecasts their future development, and anticipates possible breaches of sensitive personal data. The proposed strategy is evaluated on a policy scenario involving a hypothetical consent configuration and a breach of sensitive personal data in music streaming services.

1. Introduction

Online streaming is replacing the purchase of compact discs (CDs), and podcasts are taking the place of radio. In recent years, more and more people have been changing their listening habits as the music industry undergoes rapid development [1]. The ability to listen to music directly on a mobile device via the Internet, wherever we are, surpasses any other method in convenience, even if the sound and listening quality do not always match those of traditional media. For this very reason, streaming services were created, through which we can listen to music legally [2]. Most of them are designed to help the user discover new music, with the software itself suggesting new tracks based on our musical preferences. Of course, a prerequisite for these services is payment: after a short trial period that requires a credit card, a small monthly fee gives access to a large catalog of songs.

However, the streaming era brings new issues, the most basic of which is the acquisition of sensitive personal data and its “processing” via automated means. Personal data are information about a living person who can be identified (e.g., name, home address, e-mail address, location data, and usage data) [3]. Different pieces of information that, when put together, can identify a particular person also constitute personal data. The term “processing,” whether by automated or nonautomated means, includes the following activities: collection and recording; organization and structuring; storage; adaptation and alteration; retrieval; consultation; use; disclosure by transmission, dissemination, or any other form of making available; alignment or combination; restriction; and erasure or destruction of personal data [4].

To be valid, consent to personal data processing must be provided clearly and concisely, in language that is easy to understand and distinguishable from other information, such as terms and conditions. In addition, consent must be freely given, specific, informed, and unambiguous [5]. The request must state the purpose for which the personal data will be used [6]. In certain circumstances, data subjects have the right to refuse to be subjected to a decision based exclusively on automated processing. Although this norm is generally followed, there are a few exceptions, such as when the data subject has explicitly consented to an automated decision-making mechanism. A data breach occurs when personal data are disclosed, accidentally or unlawfully, to unauthorized recipients, made temporarily unavailable, or altered [7, 8].

It is therefore essential to have an intelligent mechanism that can manage the consent granted when a profile is created, regulate the use of the data, and respond when data leakage occurs. Based on the adoption of the Markov chain methodology [9], it is proposed to implement an intelligent consent management system for music streaming services, which considers the current state of consent in order to forecast its future development and possible leaks of sensitive personal data [3, 10].

2. Related Work

This section explores relevant work on Markov chain implementations, privacy concerns, risk perception, and user behavior in response to security recommendations [11, 12]. Wiering et al. [13] investigated multiobjective Markov decision processes in 2007 by substituting a reward vector for the conventional scalar reward signal. Such a multiobjective Markov decision process can be transformed into a single-objective one when the weighting of the various reward components is known in advance. They assumed instead that the weighting function may be chosen arbitrarily by the actor or user after the algorithm has solved the problem and, to cope with this, kept track of the Pareto-optimal stationary policies.

Feinberg and Rothblum [14] investigated a Markov decision process with multiple reward criteria and a specified initial state distribution. Suppose the occupation measure of a stationary policy can be described as a convex combination of the occupation measures of other stationary policies; in that case, the policy may be split at certain states. They provided necessary conditions for splitting a stationary policy at a single state and sufficient criteria for splitting it over the entire state space. The findings were applied to constrained problems to compute an optimal policy by calculating and splitting an optimal stationary policy.

O’Connor et al. [15] investigated the optimal transport problem for pairs of stationary Markov chains, focusing on the computation of optimal transition couplings, a restricted family of transport plans that respect the dynamics of the Markov chains. The optimal transition coupling problem is solved by aligning the two chains so as to minimize the overall long-term cost. They established convergence results for both the regularized and the unregularized methods and, consequently, a consistency result. They tested their theoretical predictions in a simulation study, indicating that the approximation technique has a shorter total runtime and a low error rate. Finally, they extended their approach to hidden Markov models and demonstrated the practical use of the proposed algorithms in an application to computer-generated music.

Concerning privacy leaks, Yixin et al. [7] conducted semistructured interviews with consumers to ascertain their perceptions of the risks associated with data breaches, their willingness to take protective actions, and their reasons for inaction. They discovered that users’ mental models of credit bureaus were, to a certain extent, inadequate and erroneous. They also found that this behavior is driven by the costs of protective measures, an optimism bias in evaluating one’s likelihood of victimization, the sources of advice consulted, and a general tendency to delay action until harm has occurred. They reviewed the legal, technological, and educational ramifications and possible approaches for improving consumer protection in the credit reporting system. Finally, they suggested directions for future research.

Gwebu et al. [8] investigated the relative effectiveness of corporate reputation and post-breach response strategies in light of the considerable monetary losses associated with data breaches. The findings suggested that a firm’s reputation is a critical asset for preserving the firm’s value. Nevertheless, only particular response strategies are shown to lessen the financial impact on low-reputation organizations, whereas response strategies are less critical for high-reputation firms. These results provide practitioners with evidence-based guidance for preserving firm value after privacy violations and emphasize the need to establish more sophisticated breach management strategies. The theoretical arguments developed provide a conceptual foundation for evaluating the effectiveness of different data breach response strategies.

3. Methodology

The proposed methodology concerns the implementation of a consent rights management system in which each policy is executed as a stationary Markov policy [9]. The Markov policy is modeled as a Markov chain that is nondegradable (irreducible) or, more generally, has a unique closed communicating class (meaning that transient states outside it are allowed) that is positive (genuinely) recurrent and finite [16]. The modeling idea is that Markov processes are appropriate stochastic models for describing and studying stochastic systems whose future evolution depends solely on their present state at each time and not on their particular history. An example of a Markov chain and its mathematical modeling for predicting future situations is shown in Figure 1 [17–19].
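
As a minimal illustration of this modeling idea (not taken from the paper), the following Python sketch encodes a small, hypothetical consent chain as a transition matrix and projects the state distribution one step ahead; the state names and probabilities are assumptions made purely for demonstration.

```python
import numpy as np

# Hypothetical consent states and an illustrative one-step transition matrix:
# row i holds the probabilities of moving from state i to each state.
states = ["consent_granted", "consent_restricted", "consent_withdrawn"]
P = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.05, 0.20, 0.75],
])
assert np.allclose(P.sum(axis=1), 1.0), "each row of a stochastic matrix sums to 1"

pi_0 = np.array([1.0, 0.0, 0.0])   # the chain starts in "consent_granted"
pi_1 = pi_0 @ P                    # state distribution after one step
print(dict(zip(states, np.round(pi_1, 3))))
```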

Based on the adoption of the Markov chain methodology, it is proposed to implement an intelligent consent policy management system, which considers the current state of the consent data for the future development and forecasting of possible leaks of sensitive personal data [10]. A stochastic process [20–22] (or evolution) with time set T and state space S is a collection of random variables $\{X_t : t \in T\}$ defined on a probability space $(\Omega, F, P)$, where T is the time horizon (usually the set of time points) and S is the state space of the process, i.e., the set of values taken by $X_t$ for each $t \in T$ and $\omega \in \Omega$. Because S is countable or finite, we speak of a stochastic chain [23]. $\Omega$ is the sample space, i.e., the set of all possible outcomes of the random experiment under study; F is the $\sigma$-algebra of events, i.e., the collection of subsets of $\Omega$ on which probabilities are defined (there are sample spaces $\Omega$ for which not every subset can be regarded as an event); and P is the probability measure, i.e., a function $P: F \to [0, 1]$ such that [21, 22]

(1) $P(A) \ge 0$ for every $A \in F$,
(2) $P(\Omega) = 1$,
(3) $P\!\left(\bigcup_{k=1}^{\infty} A_k\right) = \sum_{k=1}^{\infty} P(A_k)$ for every sequence of pairwise disjoint events $A_1, A_2, \ldots \in F$.

The proposed process focuses on a system of Markov chains, i.e., processes with $T = \{0, 1, 2, \ldots\}$ taking values in a countable (or simply finite) space S, which have the Markov property [24–26]

$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i).$$

The above relation states that, given the value of the random variable $X_n$ (the present), the random variable $X_{n+1}$ (the future) is stochastically independent of the variables $X_0, X_1, \ldots, X_{n-1}$ (the past of the process) [27]. The interpretation of the Markov property in the proposed system is that the future of the Markov chain depends on the past only through the present [28]. So, the probability [22, 24]

$$p_{ij} = P(X_{n+1} = j \mid X_n = i)$$

is the first-order transition probability from state i to state j at the (n + 1)-th step. These probabilities do not depend on the time step n (they are stationary), i.e., [29, 30]

$$P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i) = p_{ij} \quad \text{for all } n,$$

so we have a homogeneous chain $(X_n)$.

Respectively, considering the first-order transition probability matrix of the chain [31–33],

$$P = (p_{ij})_{i, j \in S},$$

we have a stochastic matrix, so it is easy to find the probability that the chain visits the states $i_0, i_1, \ldots, i_n$ in succession:

$$P(X_0 = i_0, X_1 = i_1, \ldots, X_n = i_n) = \pi_0(i_0)\, p_{i_0 i_1} p_{i_1 i_2} \cdots p_{i_{n-1} i_n}.$$
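
A small illustrative sketch of this path probability, using hypothetical numbers (not from the paper): the probability of a given state sequence is the initial probability of the first state multiplied by the successive transition probabilities.

```python
import numpy as np

def path_probability(pi_0, P, path):
    """pi_0(i_0) * p_{i_0 i_1} * ... * p_{i_{n-1} i_n} for a list of state indices."""
    prob = pi_0[path[0]]
    for i, j in zip(path[:-1], path[1:]):
        prob *= P[i, j]
    return prob

P = np.array([[0.8, 0.2],
              [0.4, 0.6]])
pi_0 = np.array([0.5, 0.5])
# Probability of observing states 0, 0, 1, 1 in succession: 0.5 * 0.8 * 0.2 * 0.6
print(path_probability(pi_0, P, [0, 0, 1, 1]))
```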

Thus, for the distribution $\pi_n$ of the state of the chain at time n, it is valid that [28, 32]

$$\pi_{n+1} = \pi_n P.$$

So,

$$\pi_n = \pi_0 P^n,$$

or equivalently

$$\pi_n(j) = \sum_{i \in S} \pi_0(i)\, p^{(n)}_{ij},$$

where

$$P^{(n)} = P^n$$

is the n-th order (or n-step) transition matrix, i.e.,

$$p^{(n)}_{ij} = P(X_{m+n} = j \mid X_m = i).$$

If we choose $\pi_0 = \pi$ such that $\pi P = \pi$, we obtain $\pi_n = \pi$ for every n, i.e., the state distribution of the chain is stationary (independent of n). Then, the chain has an invariant or stationary distribution, and it is in statistical equilibrium. In queuing theory, systems are said to be in “statistical equilibrium” when the number of customers or objects waiting in the queue fluctuates in such a way that its mean and distribution remain the same over a prolonged period.

Therefore, to find the stationary distribution $\pi$, it is enough to solve the system [21, 28, 34, 35]:

$$\pi P = \pi, \qquad \sum_{j \in S} \pi(j) = 1.$$
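
A minimal sketch of how this system could be solved numerically, assuming an irreducible chain and the illustrative matrix used above (not part of the paper): one redundant balance equation is replaced by the normalization condition.

```python
import numpy as np

def stationary_distribution(P):
    n = P.shape[0]
    A = P.T - np.eye(n)   # pi P = pi  <=>  (P^T - I) pi^T = 0
    A[-1, :] = 1.0        # replace the last (redundant) equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.05, 0.20, 0.75],
])
print(np.round(stationary_distribution(P), 4))
```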

A chain will be nondegradable (irreducible) if the state space S is a single closed class, i.e.,

$$\text{for all } i, j \in S \text{ there exists } n \ge 0 \text{ such that } p^{(n)}_{ij} > 0.$$

The basic premise of the methodology is that the Markov chain $(X_n)$ is nondegradable (irreducible), positive recurrent, and aperiodic. Then, regardless of its initial distribution,

$$\lim_{n \to \infty} P(X_n = j) = \pi(j), \quad j \in S,$$

is valid, and the fraction of time spent by the chain in each state $i \in S$ is $\pi(i)$, so, according to the ergodic theorem, we have

$$\frac{1}{n} \sum_{k=1}^{n} \mathbf{1}\{X_k = i\} \xrightarrow[n \to \infty]{} \pi(i).$$
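
This ergodic statement can be checked empirically with a short simulation sketch (illustrative numbers only, not from the paper): the fraction of steps spent in each state converges to the stationary distribution computed above.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.05, 0.20, 0.75],
])
n_steps = 100_000
visits = np.zeros(P.shape[0])
state = 0
for _ in range(n_steps):
    visits[state] += 1
    state = rng.choice(P.shape[0], p=P[state])  # sample the next state

# Empirical fractions of time per state; they approach the stationary distribution.
print(np.round(visits / n_steps, 4))
```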

So, the transition from one state to another implies a fee, or some cost (a negative fee), of the form

$$f(X_{n-1}, X_n) = f(i, j)$$

for the n-th step, so the total reward in the first n steps is

$$S_n = \sum_{k=1}^{n} f(X_{k-1}, X_k).$$

The average fee of a transition out of state i (over all possible moves $i \to j$) is

$$\bar{f}(i) = \sum_{j \in S} p_{ij}\, f(i, j).$$

Ultimately, for the fee rate (the average fee per step), the following applies:

$$g = \lim_{n \to \infty} \frac{S_n}{n} = \sum_{i \in S} \pi(i)\, \bar{f}(i).$$
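
A small sketch of how this fee rate could be computed, with a purely hypothetical fee matrix f[i, j] (an assumption for illustration, not part of the paper):

```python
import numpy as np

P = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.05, 0.20, 0.75],
])
f = np.array([          # f[i, j]: fee (or cost, if negative) of the move i -> j
    [ 1.0,  0.5, -2.0],
    [ 0.5,  0.0, -1.0],
    [-0.5, -1.0, -3.0],
])

def stationary_distribution(P):
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

fbar = (P * f).sum(axis=1)            # expected fee out of each state
g = stationary_distribution(P) @ fbar # long-run average fee per step
print(round(float(g), 4))
```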

A fascinating question in the modeling of the proposed methodology concerns how the stationary policy will be chosen or, equivalently, by what criterion this choice will be made. Since the fee $f$ here expresses the exposure of sensitive data, we adopt the criterion of minimizing their propagation rate, which proves to be the most suitable in many applications.

Of course, if $f$ represents the nonconsented use of rights, the criterion will be the maximization of the consent rate, which is equivalent to the previous one [36]. A stationary policy R is applied with initial state $X_0 = i$, $i \in S$, and $p^{(k)}_{ij}(R)$ are the k-th order transition probabilities under R [37]. Then, the average (or expected) total amount of consent in the first n time points (or steps), when $X_0 = i$ and R is applied, is [18, 32, 38]

$$V_n(i, R) = E\!\left[\sum_{k=1}^{n} f(X_{k-1}, X_k) \,\middle|\, X_0 = i\right],$$

or, by the definition and the linearity of the (conditional) expected value,

$$V_n(i, R) = \sum_{k=1}^{n} E\!\left[f(X_{k-1}, X_k) \mid X_0 = i\right],$$

or else

$$V_n(i, R) = \sum_{k=1}^{n} \sum_{j \in S} p^{(k-1)}_{ij}(R)\, \bar{f}_j(R).$$

Our objective is to study the behavior of the average rate of consent in the long run, i.e., the limit

$$g(i, R) = \lim_{n \to \infty} \frac{V_n(i, R)}{n}$$

for each $i \in S$.

However, because the Markov chain has a unique closed communicating class (positive recurrent), the limit is unique and independent of the initial state i [39]:

$$g(R) = \sum_{j \in S} \pi_j(R)\, \bar{f}_j(R),$$

where $\pi_j(R)$, $j \in S$, is the stationary distribution of the chain under the policy R; g(R) is thus the rate of change of the average amount of consent or, in the long run, the average rate per unit time. Therefore, a stationary policy $R^*$ will be optimal for the average consent rate if, for each stationary policy R [40],

$$g(R^*) \ge g(R).$$

So, since the time horizon is unlimited and the space of situations is finite, it turns out that there will always be a stationary policy that is optimal in terms of the above criterion. Considering this and using the proposed methodology, we can, for each possible stationary policy, calculate the stationary distribution and the corresponding rate. The above result is significant because it guarantees that the search for an optimal policy can be restricted to stationary policies.
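
Since both spaces are finite, one straightforward (if costly) way to exploit this result is to enumerate the stationary policies, evaluate the rate g(R) of each, and keep the best. The sketch below does this with randomly generated, purely hypothetical per-decision transition matrices and consent rates; it is an illustration of the criterion, not the paper's implementation.

```python
import itertools
import numpy as np

n_states, n_actions = 3, 2
rng = np.random.default_rng(1)
# P_a[a, i, :]: transition probabilities out of state i under decision a (hypothetical)
P_a = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
# f_a[a, i]: expected consent amount per step in state i under decision a (hypothetical)
f_a = rng.normal(size=(n_actions, n_states))

def stationary_distribution(P):
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n); b[-1] = 1.0
    return np.linalg.solve(A, b)

best_policy, best_g = None, -np.inf
for policy in itertools.product(range(n_actions), repeat=n_states):
    P = np.stack([P_a[a, i] for i, a in enumerate(policy)])   # chain induced by the policy
    f = np.array([f_a[a, i] for i, a in enumerate(policy)])
    g = stationary_distribution(P) @ f                        # long-run rate g(R)
    if g > best_g:
        best_policy, best_g = policy, g
print(best_policy, round(float(best_g), 4))
```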

4. Privacy Leaks Protection Scenario

To protect individuals’ privacy and help restore trust and transparency in the activities between people and the entities that process their data, streaming services rely on consent. The data subject’s consent is any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, agrees to the processing of personal data relating to him or her. The requirements of consent are divided into “Free,” “Specific,” “Informed,” and “Indisputably indicated.” In the scenario tested by the proposed system, consents are implemented to process users’ data. The policy treats consent as valid only if all four of the following conditions apply [6, 41, 42]:

(1) Free means that the data subject has genuine choice and control. Consent is invalid if the data subject does not have a real option, is obliged to consent, or will suffer an undesirable consequence if they do not consent. If the consent is bundled as a nonnegotiable part of the terms and conditions, it is presumed not to have been freely given.
(2) Specific aims to ensure user control and transparency for the data subject.
(3) Informed aims to provide information to the persons to whom the data refer before their consent; it is necessary for them to be able to make informed decisions, to understand what they are agreeing to, and, for example, to exercise their right to withdraw their consent.
(4) Indisputably indicated means that there should be no doubt that the data subject has agreed to the data processing [29, 43].

With the policy improvement method, starting from any policy R, we check whether or not it is optimal. If not, we find a policy $R'$ with $g(R') \ge g(R)$ and check whether it is optimal. Keeping in mind that the number of stationary policies is finite (because both the state space and the decision space are finite), we can continue the procedure described above until we find the optimal policy.

The benefit of using this method is that, rather than examining all of the available stationary policies, we only have to consider a typically small fraction of them, progressing from one policy to a better one at each improvement step.

The starting point is the equation [44–46]

$$V_n(i, R) = \bar{f}_i(R) + \sum_{j \in S} p_{ij}(R)\, V_{n-1}(j, R),$$

which implies that asymptotically (i.e., for $n \to \infty$: large n) it holds that

$$V_n(i, R) \approx n\, g(R) + h_i(R).$$

The quantities $h_i(R)$ are the relative values of the states i when a stationary policy R is applied, and their difference is equal to

$$h_i(R) - h_j(R) \approx V_n(i, R) - V_n(j, R) \quad \text{for large } n.$$

The relative values express the transient effect of the initial state on the expected total amount of consent under the application of policy R.

Then, the quantity $h_i(R) - h_j(R)$ expresses the difference in the expected total amount of consent if the process starts from state i rather than from state j, when policy R is applied.

Equivalently, this difference is essentially the maximum amount of consent we would exchange for the system (the chain) to start from state i rather than from state j under policy R.

If we assume an aperiodic chain $X_n(R)$, then the following limit exists [9, 14, 21]:

$$\lim_{n \to \infty} \left[ V_n(i, R) - n\, g(R) \right] = h_i(R).$$

So, the limit

$$\lim_{n \to \infty} \left[ V_n(i, R) - V_n(j, R) \right]$$

also exists.

So,

$$\lim_{n \to \infty} \left[ V_n(i, R) - V_n(j, R) \right] = h_i(R) - h_j(R),$$

which expresses the long-term difference in the expected total amount of consent if the process starts from state i rather than from state j, under policy R.

Therefore [29],

$$V_n(i, R) = \bar{f}_i(R) + \sum_{j \in S} p_{ij}(R)\, V_{n-1}(j, R),$$

and so, for large n,

$$n\, g(R) + h_i(R) \approx \bar{f}_i(R) + \sum_{j \in S} p_{ij}(R) \left[ (n - 1)\, g(R) + h_j(R) \right],$$

or, respectively,

$$g(R) + h_i(R) = \bar{f}_i(R) + \sum_{j \in S} p_{ij}(R)\, h_j(R), \quad i \in S.$$
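
A minimal numerical sketch of this evaluation step, assuming a fixed policy with the illustrative transition matrix used earlier and a hypothetical expected-fee vector (both assumptions, not from the paper): the linear system g + h_i = f_i + sum_j p_ij h_j is solved with the relative value of a reference state pinned to zero.

```python
import numpy as np

def evaluate_policy(P, f, ref=0):
    """Solve g + h_i = f_i + sum_j p_ij h_j with h_ref = 0; return (g, h)."""
    n = P.shape[0]
    A = np.zeros((n + 1, n + 1))      # unknowns: (g, h_0, ..., h_{n-1})
    b = np.zeros(n + 1)
    A[:n, 0] = 1.0                    # coefficient of g in each state equation
    A[:n, 1:] = np.eye(n) - P         # h_i - sum_j p_ij h_j
    b[:n] = f
    A[n, 1 + ref] = 1.0               # pin h_ref = 0 to remove the additive freedom
    x = np.linalg.solve(A, b)
    return x[0], x[1:]

P = np.array([
    [0.80, 0.15, 0.05],
    [0.30, 0.60, 0.10],
    [0.05, 0.20, 0.75],
])
f = np.array([1.2, 0.4, -0.8])        # hypothetical expected fee per state
g, h = evaluate_policy(P, f)
print(round(float(g), 4), np.round(h, 4))
```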

So, for finding the rate and the relative values with reference to a positive recurrent state $r \in S$, it holds that [47]

$$h_i(R) = K_{ir}(R) - g(R)\, T_{ir}(R), \qquad h_r(R) = 0,$$

where $T_{ir}(R)$ is the expected time of the first visit to r when the chain starts from state i under policy R, and $K_{ir}(R)$ is the expected total amount of consent accumulated up to that first visit.

For the average rate $g(R)$, the following applies:

$$g(R) = \frac{K_{rr}(R)}{T_{rr}(R)},$$

that is, the expected amount of consent accumulated over a cycle between successive visits to r divided by the expected length of that cycle.

Combining the above two equations with the first-step decompositions of $K_{ir}(R)$ and $T_{ir}(R)$, we obtain

$$h_i(R) = \bar{f}_i(R) - g(R) + \sum_{j \in S} p_{ij}(R)\, h_j(R), \quad i \in S.$$

So, it finally holds that

$$g(R) + h_i(R) = \bar{f}_i(R) + \sum_{j \in S} p_{ij}(R)\, h_j(R), \quad i \in S,$$

together with the normalization $h_r(R) = 0$; namely, the quantities $g(R)$ and $h_i(R)$, $i \in S$, are a solution of the original system, and therefore the claim is proved.
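
Putting the evaluation and improvement steps together, the following sketch performs one policy improvement pass in the spirit of the method described above. It reuses the evaluate_policy routine sketched earlier, and P_a and f_a are hypothetical per-decision transition matrices and consent rates (assumptions for illustration only).

```python
import numpy as np

def evaluate_policy(P, f, ref=0):
    n = P.shape[0]
    A = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
    A[:n, 0] = 1.0
    A[:n, 1:] = np.eye(n) - P
    b[:n] = f
    A[n, 1 + ref] = 1.0
    x = np.linalg.solve(A, b)
    return x[0], x[1:]

def improve_policy(policy, P_a, f_a):
    """Return an improved policy (for maximization) and whether it changed."""
    n = len(policy)
    P = np.stack([P_a[a, i] for i, a in enumerate(policy)])
    f = np.array([f_a[a, i] for i, a in enumerate(policy)])
    g, h = evaluate_policy(P, f)
    new_policy = []
    for i in range(n):
        # Choose the decision with the largest test quantity f_a(i) + sum_j p_ij^a h_j.
        scores = [f_a[a, i] + P_a[a, i] @ h for a in range(P_a.shape[0])]
        new_policy.append(int(np.argmax(scores)))
    return tuple(new_policy), tuple(new_policy) != tuple(policy)

rng = np.random.default_rng(1)
P_a = rng.dirichlet(np.ones(3), size=(2, 3))   # hypothetical dynamics per decision
f_a = rng.normal(size=(2, 3))                  # hypothetical consent rates per decision
policy, changed = improve_policy((0, 0, 0), P_a, f_a)
print(policy, changed)
```

Iterating this improvement step until the policy no longer changes yields an optimal stationary policy while evaluating only a small fraction of the available policies.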

5. Conclusions

Within the scope of this study, we proposed a novel policy-based consent management system intended to prevent data breaches within music streaming services. The proposed methodology is founded on a Markov chain model. This yields an intelligent consent policy management framework that considers the current state of the consent data in order to predict potential leaks of sensitive personal data and to anticipate their future development. More specifically, we employ Markov processes with a discrete (finite or countable) state space and a discrete parameter space, and we use the policy improvement method to check whether or not a policy is optimal. It is a forward-looking and intelligent system that can model challenging scenarios, making it considerably easier to answer questions about dynamic situations involving uncertainty.

The provision of consent in streaming services helps to reestablish confidence and transparency in the interactions between individuals and the organizations responsible for processing their data, and thereby safeguards individuals’ privacy. The data subject’s consent is any freely given, specific, informed, and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, agrees to the processing of personal data relating to him or her. The terms “Free,” “Specific,” “Informed,” and “Indisputably indicated” are used to classify the requirements of consent. In the hypothetical scenario examined by the suggested system, such consents are successfully applied to govern the processing of users’ data.

The proposed approach, in addition to its apparent advantages, suffers from increased complexity even for a small state space S. For example, for a set of N states with two possible decisions each (the same two for every state), the simple multiplication principle gives 2^N different stationary policies; for N = 10, there are already 1024 stationary policies. We can overcome this obstacle with optimization methods and by selecting a predefined solution space. Therefore, a significant future development of the proposed system is the investigation of biologically inspired optimization methods for finding optimal solution spaces, which could significantly simplify the proposed methodology.

Data Availability

The data used in this study are available from the author upon request.

Conflicts of Interest

The author declares that there are no conflicts of interest.