Research Article | Open Access

Xinxing Yin, Zhi Xue, Bin Dai, "Capacity-Equivocation Regions of the DMBCs with Noiseless Feedback", *Mathematical Problems in Engineering*, vol. 2013, Article ID 102069, 13 pages, 2013. https://doi.org/10.1155/2013/102069

# Capacity-Equivocation Regions of the DMBCs with Noiseless Feedback

**Academic Editor:** Weihai Zhang

#### Abstract

The discrete memoryless broadcast channels (DMBCs) with noiseless feedback are studied. The entire capacity-equivocation regions of two models of the DMBCs with noiseless feedback are obtained. One is the degraded DMBCs with rate-limited feedback; the other is the *less* and *reversely less noisy* DMBCs with causal feedback. In both models, two kinds of messages are transmitted. The common message is to be decoded by both the legitimate receiver and the eavesdropper, while the confidential message is only for the legitimate receiver. Our results generalize the secrecy capacity of the degraded wiretap channel with rate-limited feedback (Ardestanizadeh et al., 2009) and the restricted wiretap channel with noiseless feedback (Dai et al., 2012). Furthermore, we use a simpler and more intuitive deduction to get the single-letter characterization of the capacity-equivocation region, instead of relying on the recursive argument which is complex and not intuitive.

#### 1. Introduction

Secure data transmission is an important requirement in wireless communication. Wyner first studied the degraded wiretap channel in [1] (the wiretap channel is said to be (physically) degraded if $X \rightarrow Y \rightarrow Z$ forms a Markov chain, where $X$ is the channel input and $Y$ and $Z$ are the channel outputs of the legitimate receiver and the wiretapper, resp.), where the output of the channel to the wiretapper is degraded with respect to the output of the channel to the legitimate receiver. In Wyner’s model, the transmitter aimed to send a confidential message to the legitimate receiver while keeping the wiretapper as ignorant of the message as possible. Wyner obtained the secrecy capacity (the best data transmission rate under perfect secrecy, i.e., when the equivocation at the wiretapper equals the rate of the confidential message; the formal definition of the secrecy capacity is given in Remark 3) and demonstrated that provably secure communication could be implemented by using information-theoretic methods. This model was extended to a more general case by Csiszár and Körner [2], who studied the broadcast channel with confidential messages; see Figure 1. They considered transmitting not only the confidential messages to the legitimate receiver, but also the common messages to both the legitimate receiver and the eavesdropper. The capacity-equivocation region for the extended model was determined in [2]. This region contains all the achievable rate triples $(R_0, R_1, R_e)$, where $R_0$ and $R_1$ are the rates of the common and confidential messages and $R_e$ is the rate of the confidential message’s equivocation. Nevertheless, neither Wyner’s model nor Csiszár’s model considered feedback.
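As a concrete numerical illustration of the degradedness just defined, the following Python sketch builds a physically degraded wiretap channel as a cascade of two binary symmetric channels and checks the data-processing consequence $I(X;Y) \geq I(X;Z)$. The crossover probabilities are illustrative choices of ours, not values from the paper.

```python
# Numerical check of degradedness: for a physically degraded wiretap
# channel X -> Y -> Z (a cascade of two binary symmetric channels),
# the data-processing inequality gives I(X;Y) >= I(X;Z).
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p, q = 0.1, 0.2                  # crossover probs of BSC(X->Y) and BSC(Y->Z)
r = p * (1 - q) + (1 - p) * q    # effective crossover of the cascade X->Z

I_XY = 1 - h(p)                  # mutual information for uniform X over a BSC
I_XZ = 1 - h(r)

print(f"I(X;Y) = {I_XY:.4f} bits, I(X;Z) = {I_XZ:.4f} bits")
assert I_XY >= I_XZ              # degraded => the wiretapper learns no more
```

For uniform input the BSC mutual information is $1 - h(\cdot)$, so the check reduces to comparing two binary entropies; the cascade's effective crossover $r$ is always closer to $1/2$ than $p$, which is why the inequality holds.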

To explore more ways of achieving secure data transmission, [3–5] studied the effects of feedback on the capacities of several channel models. They all showed that feedback could help enhance the secrecy of wireless transmission. In [3], Ahlswede and Cai presented both inner and outer bounds on the secrecy capacity of the wiretap channel with secure causal feedback from the decoder and showed that the outer bound was tight for the degraded case. It was proved that, by using feedback, the secrecy capacity of the (degraded) wiretap channel was increased. Following Ahlswede’s exploration, Ardestanizadeh et al. studied the wiretap channel with secure rate-limited feedback [4]. The main difference between Ardestanizadeh’s model and Ahlswede’s model is that the feedback in [4] is independent of the channel outputs, while the feedback in [3] originates causally from the outputs of the channel to the legitimate receiver. In [4], the authors obtained an outer bound for the wiretap channel with rate-limited feedback through a recursive argument, which was effective but not intuitive. They also showed that the outer bound was tight for the degraded case. In addition, Dai et al. investigated the secrecy capacity of the restricted wiretap channel with noiseless causal feedback under the assumption that the main channel is independent of the wiretap channel [5].

However, all of these explorations [3–5] focused on sending only the confidential messages; they did not consider sending both common and confidential messages. In fact, transmitting the two kinds of messages occurs in many systems with feedback. For example, in a satellite television service, some channels are available to all users for free, but some other channels are only for those who have paid for them. Recently, [6] studied the problem of transmitting both common and confidential messages in the degraded broadcast channels with feedback. Note that, as in [3], the feedback in [6] originates causally from the legitimate receiver’s channel outputs and is not rate limited. Besides, [7–9] studied the broadcast channel with feedback where no secrecy constraints were imposed.

To further investigate secure data transmission with both common and confidential messages and noiseless feedback, this paper determines the capacity-equivocation regions of the following two DMBCs with both common and confidential messages, which were left unsolved in the previous explorations. (i) Degraded DMBCs with rate-limited feedback, where the feedback rate is limited by $R_f$ and the feedback is independent of the channel outputs; see Figure 2. (ii) *Less* and *reversely less noisy* DMBCs with noiseless causal feedback, where the feedback originates causally from the legitimate receiver’s channel outputs; see Figure 3. (Let $X$ be the input of the DMBC, $Y$ the legitimate receiver’s channel output, and $Z$ the eavesdropper’s channel output. A DMBC is said to be *less noisy* if $I(U;Y) \geq I(U;Z)$ for all $p(u,x)$; a DMBC is said to be *reversely less noisy* if $I(U;Z) \geq I(U;Y)$ for all $p(u,x)$, where $U$ is an auxiliary random variable satisfying $U \rightarrow X \rightarrow (Y,Z)$.)
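The *less noisy* ordering above can also be checked numerically. The sketch below builds an auxiliary $U$ with $U \rightarrow X \rightarrow Y \rightarrow Z$ and verifies $I(U;Y) \geq I(U;Z)$, which must hold here since a degraded DMBC is a special case of a less noisy one. The test channel $p(x|u)$ and the BSC parameters are our own illustrative choices, not the paper's.

```python
# Less-noisy check: for U -> X -> Y -> Z (a degraded, hence less noisy,
# broadcast setup), I(U;Y) >= I(U;Z) for the auxiliary variable U.
from itertools import product
from math import log2

def mutual_information(pab):
    """I(A;B) in bits from a joint pmf given as {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in pab.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in pab.items() if p > 0)

p_u = {0: 0.5, 1: 0.5}
p_x_u = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # test channel p(x|u)
bsc = lambda eps: {0: {0: 1 - eps, 1: eps}, 1: {0: eps, 1: 1 - eps}}
p_y_x, p_z_y = bsc(0.1), bsc(0.15)                    # Y = BSC(X), Z = BSC(Y)

p_uy, p_uz = {}, {}
for u, x, y, z in product((0, 1), repeat=4):
    p = p_u[u] * p_x_u[u][x] * p_y_x[x][y] * p_z_y[y][z]
    p_uy[(u, y)] = p_uy.get((u, y), 0.0) + p
    p_uz[(u, z)] = p_uz.get((u, z), 0.0) + p

print(f"I(U;Y) = {mutual_information(p_uy):.4f}, "
      f"I(U;Z) = {mutual_information(p_uz):.4f}")
assert mutual_information(p_uy) >= mutual_information(p_uz)
```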

The two channel models are characterized in Section 2. The main results presented in Section 2 subsume some important previous findings on secure data transmission with feedback. By setting the auxiliary random variable $U$ to be constant in the secrecy capacity of the first model (see (9) in Remark 3), the secrecy capacity of the degraded wiretap channel with rate-limited feedback [4] is obtained. By eliminating the common message in the second model, the capacity-equivocation region of the restricted wiretap channel with noiseless feedback [5] is obtained. We utilize a simpler and more intuitive deduction to get the single-letter characterization of the capacity-equivocation region, instead of relying on the recursive argument (see [4]), which is complex and not intuitive. We find that even if the eavesdropper is in a better position than the legitimate receiver, provably secure communication can still be implemented in the DMBCs with both common and confidential messages.

The remainder of the paper is organized as follows. Section 2 gives the notations and main results, that is, the capacity-equivocation regions of the two channel models. Section 3 proves Theorem 2. Section 4 proves Theorems 4 and 5. Section 5 concludes the whole work.

#### 2. Channel Models and Main Results

##### 2.1. Notations

Throughout this paper, we use calligraphic letters, for example, $\mathcal{X}$, $\mathcal{Y}$, to denote finite sets and $|\mathcal{X}|$ to denote the cardinality of the set $\mathcal{X}$. Uppercase letters, for example, $X$, $Y$, are used to denote random variables taking values in finite sets, for example, $\mathcal{X}$, $\mathcal{Y}$. The value of the random variable $X$ is denoted by the lowercase letter $x$. We use $X_i^j$ to denote the vector $(X_i, X_{i+1}, \dots, X_j)$ of random variables for $1 \leq i \leq j$ and will always drop the subscript when $i = 1$. Moreover, we use $p_X(x)$ to denote the probability mass function of the random variable $X$. For $\epsilon > 0$ and a probability mass function $p_X$, the set $T_\epsilon^N(X)$ of the typical $N$-sequences is defined as the set of all $x^N$ such that $|\frac{1}{N} N(a \mid x^N) - p_X(a)| \leq \epsilon$ for all $a \in \mathcal{X}$, where $N(a \mid x^N)$ denotes the frequency of occurrences of letter $a$ in the sequence $x^N$ (for more details about typical sequences, please refer to [10, Chapter 2]). The set of the conditional typical sequences, for example, $T_\epsilon^N(Y \mid x^N)$, follows similarly.
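The typicality test described above can be sketched in a few lines. The pmf and the sequences below are toy examples of ours; `is_typical` implements the frequency condition with a plain (non-weighted) tolerance, one common variant of the definition.

```python
# A sketch of the typicality check: a sequence x^N is epsilon-typical
# if the empirical frequency of every letter is within epsilon of its
# probability under the pmf.
from collections import Counter

def is_typical(seq, pmf, eps):
    """Return True if seq lies in the typical set T_eps^N(X)."""
    n = len(seq)
    freq = Counter(seq)
    return all(abs(freq.get(a, 0) / n - pa) <= eps for a, pa in pmf.items())

pmf = {'a': 0.5, 'b': 0.25, 'c': 0.25}
print(is_typical('aabcaabc', pmf, eps=0.05))   # frequencies match the pmf
print(is_typical('aaaaaaaa', pmf, eps=0.05))   # far from the pmf
```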

##### 2.2. Channel Models and Main Results

This paper studies secure data transmission for two subclasses of DMBCs with noiseless feedback. One is the case where the feedback is rate limited and independent of the channel outputs (see Figure 2); the other is the case where the feedback originates causally from the channel outputs (see Figure 3). Both models consist of a transmitter and two receivers, named receiver 1 (legitimate receiver) and receiver 2 (eavesdropper). The transmitter aims to convey a common message $W_0$ to both receivers in addition to a confidential message $W_1$ intended only for receiver 1. The confidential message should be kept secret from receiver 2 as much as possible. We use the equivocation at receiver 2 to characterize the secrecy of the confidential message. $W_0$ and $W_1$ are mutually independent and uniformly distributed over $\mathcal{W}_0$ and $\mathcal{W}_1$.

###### 2.2.1. Degraded DMBCs with Rate-Limited Feedback

The degraded DMBCs with rate-limited feedback (see Figure 2) are under the condition that the channel to receiver 2 is physically degraded with respect to the channel to receiver 1; that is, $X \rightarrow Y \rightarrow Z$ forms a Markov chain, where $X$ is the channel input and $Y$, $Z$ are the observations of receivers 1 and 2. In this model, the encoder encodes the messages $(W_0, W_1)$ and the feedback into a codeword $X^N$, where $N$ is the length of the codeword, which is transmitted over a discrete memoryless channel (DMC) with transition probability $p(y,z \mid x)$. Receiver 1 obtains $Y^N$ and decodes the common and confidential messages $(W_0, W_1)$. Receiver 2 obtains $Z^N$ and decodes the common message $W_0$. More precisely, we define the encoder-decoder in Definition 1.

*Definition 1.* The encoder-decoder for the degraded DMBCs with rate-limited feedback (with rate limited by $R_f$) is defined as follows. (i) The feedback alphabet $\mathcal{K}$ satisfies $\frac{1}{N}\log|\mathcal{K}| \leq R_f$. The feedback is generated independently of the channel output symbols. (ii) The stochastic channel encoder is specified by a matrix of conditional probability distributions giving the probability that the messages $(w_0, w_1)$ and the feedback are encoded as the channel input $x^N$, where $w_0 \in \mathcal{W}_0$ and $w_1 \in \mathcal{W}_1$. Note that $\mathcal{W}_1$ and $\mathcal{W}_0$ are the confidential and common message sets. (iii) Decoder 1 is a mapping $\psi_1: \mathcal{Y}^N \rightarrow \mathcal{W}_0 \times \mathcal{W}_1$. The input of decoder 1 is $Y^N$, and the output is $(\hat{W}_0, \hat{W}_1)$. The decoding error probability of receiver 1 is defined as $P_{e1} = \Pr\{(\hat{W}_0, \hat{W}_1) \neq (W_0, W_1)\}$. Similarly, decoder 2 is a mapping $\psi_2: \mathcal{Z}^N \rightarrow \mathcal{W}_0$. The input of decoder 2 is $Z^N$, and the output is $\check{W}_0$. The decoding error probability of receiver 2 is defined as $P_{e2} = \Pr\{\check{W}_0 \neq W_0\}$. (iv) The equivocation at receiver 2 is defined as $\Delta = \frac{1}{N} H(W_1 \mid Z^N)$.

A rate triple $(R_0, R_1, R_e)$ is said to be *achievable* for the model in Figure 2 if there exists a channel encoder-decoder as defined in Definition 1 such that conditions (2)–(6) hold, where $\epsilon$ is an arbitrarily small positive real number, $R_0$, $R_1$, and $R_f$ are the rates of the common messages, the confidential messages, and the feedback, and $R_e$ is the equivocation rate of the confidential messages. Note that the feedback rate is limited by $R_f$. The capacity-equivocation region is defined as the convex closure of all achievable rate triples $(R_0, R_1, R_e)$. The capacity-equivocation region of the degraded DMBCs with rate-limited feedback is given in the following theorem.

Theorem 2. *For the degraded DMBCs with limited feedback rate $R_f$, the capacity-equivocation region is the set given in (7), where $U$ is an auxiliary random variable and $U \rightarrow X \rightarrow Y \rightarrow Z$ forms a Markov chain.*

The proof of Theorem 2 is given in Section 3. A remark on Theorem 2 follows.

*Remark 3.* (i) The secrecy capacity of the model in Figure 2 is defined as the maximum rate at which confidential messages can be sent to receiver 1 in perfect secrecy, as formalized in (8), where $\mathcal{R}$ denotes the capacity-equivocation region. Therefore, by the definition in (8), the secrecy capacity of the degraded DMBCs with limited feedback rate $R_f$ is given by (9). This result subsumes the secrecy capacity of the degraded wiretap channel with rate-limited feedback (see [4]), which is recovered by setting the auxiliary random variable $U$ to be constant in (9).

(ii) The capacity-equivocation region in (7) is larger than that in [2], where no feedback is available. This implies that feedback can be used to enhance the secrecy in the DMBCs. Note that this finding has already been verified in [3–6].
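To see quantitatively how feedback enlarges the secrecy rate, the sketch below evaluates the secrecy capacity of a degraded wiretap channel with feedback rate $R_f$, assuming the closed form $\min\{I(X;Y),\ I(X;Y) - I(X;Z) + R_f\}$ reported in [4] for the degraded case; the BSC cascade, its parameters, and the use of a uniform input (which attains the maximum in this symmetric example) are our own illustrative choices.

```python
# Illustration of how rate-limited feedback enlarges secrecy capacity,
# assuming the degraded-case closed form from [4]:
#   C_s(R_f) = min{ I(X;Y), I(X;Y) - I(X;Z) + R_f }  (uniform X here).
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

p, q = 0.1, 0.2                  # BSC(X->Y) and BSC(Y->Z) crossovers
r = p * (1 - q) + (1 - p) * q    # effective X->Z crossover
I_XY, I_XZ = 1 - h(p), 1 - h(r)

for Rf in (0.0, 0.1, 0.2, 0.5, 1.0):
    Cs = min(I_XY, I_XY - I_XZ + Rf)
    print(f"R_f = {Rf:.1f}:  C_s = {Cs:.4f} bits/use")
# C_s grows linearly with R_f until it saturates at the main-channel
# rate I(X;Y); beyond that point extra feedback rate is useless.
```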

###### 2.2.2. Less and Reversely Less Noisy DMBCs with Noiseless Causal Feedback

The model in Figure 3 is based on the assumption that the channel to receiver 1 is independent of the channel to receiver 2; that is, $p(y,z \mid x) = p(y \mid x)p(z \mid x)$. The definition of the encoder-decoder for this model is similar to Definition 1 except for the feedback and the encoder. Different from the model in Figure 2, the feedback in Figure 3 originates causally from the channel outputs of receiver 1 and is sent to the transmitter. The stochastic encoder for this model at time $i$, $1 \leq i \leq N$, is defined by the probability that the messages $(w_0, w_1)$ and the feedback $y^{i-1}$ (the channel outputs of receiver 1 before time $i$) are encoded as the channel input $x_i$.

A rate triple $(R_0, R_1, R_e)$ is said to be *achievable* for the model in Figure 3 if there exists a channel encoder-decoder such that (2), (3), (5), and (6) hold. Note that the definition of “*achievable*” here does not include (4), since the feedback in the model of Figure 3 is not rate limited. The definition of the secrecy capacity is the same as that in Remark 3. We present the capacity-equivocation regions of the *less* and *reversely less noisy* DMBCs with noiseless causal feedback in Theorems 4 and 5, respectively.

Theorem 4. *For the less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set given in (10), where $U \rightarrow X \rightarrow (Y, Z)$ forms a Markov chain.*

Theorem 5. *For the reversely less noisy DMBCs with noiseless causal feedback, the capacity-equivocation region is the set given in (11), where $U \rightarrow X \rightarrow (Y, Z)$ forms a Markov chain.*

The proofs of Theorems 4 and 5 are given in Section 4. A remark on Theorems 4 and 5 follows.

*Remark 6.* (i) By the definition in (8), the secrecy capacity of the *less noisy* DMBCs with noiseless causal feedback is given by (12), and the secrecy capacity of the *reversely less noisy* DMBCs with noiseless causal feedback is given by (13). Setting the auxiliary random variable $U$ to be constant in (12) and (13), the corresponding result for the model in [5] is obtained.

(ii) In the model of Figure 3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, $p(y,z \mid x) = p(y \mid x)p(z \mid x)$. Under the reversely less noisy assumption, the upper bound on the equivocation rate in (11) is smaller than that in (10) for the less noisy case. This shows that when the eavesdropper is in a better position than the legitimate receiver (the *reversely less noisy* case), the uncertainty about the confidential messages at the eavesdropper is decreased. Besides, from (13), we see that even if the eavesdropper is in a better position, the secrecy capacity is positive, which means that provably secure communication can still be implemented in such an adverse condition.

#### 3. Proof of Theorem 2

In this section, Theorem 2 is proved. The converse part of Theorem 2 gives the outer bound on the capacity-equivocation region of the degraded DMBCs with rate-limited feedback; its proof is shown in Section 3.1. The key tools used in the proof are the identification of the auxiliary random variables and Csiszár’s sum equality [2]. In Section 3.2, to prove the direct part of Theorem 2, a coding scheme is provided that achieves the rate triples in the region. The key ideas in the coding scheme are inspired by [4]. However, [4] considers only the transmission of confidential messages; our coding scheme considers both the confidential and common messages.

##### 3.1. The Converse Part of Theorem 2

In order to find the identification of the auxiliary random variables that yields the capacity-equivocation region of Theorem 2, we prove the converse part for the equivalent region (the fact that the two regions are equivalent follows similarly from [10, Chapter 5, Problem 5.8]) containing all the rate triples $(R_0, R_1, R_e)$ such that

Now we show that all *achievable* triples $(R_0, R_1, R_e)$ satisfy (14), (15), (16), and (17).

Condition (14) is proved as follows:

To prove condition (15), we calculate the chain of inequalities below, where the marked step follows from Fano’s inequality and the error term is a small positive number. Note that the channel input at each time is determined by the messages and the feedback symbols received up to that time.

To prove condition (16), we consider the bound below, where the error term is a small positive number and the marked steps follow from Fano’s inequality and Csiszár’s sum equality [2]; that is, $\sum_{i=1}^{N} I(Y_{i+1}^{N}; Z_i \mid Z^{i-1}) = \sum_{i=1}^{N} I(Z^{i-1}; Y_i \mid Y_{i+1}^{N})$.
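Csiszár's sum equality invoked above holds for an arbitrary joint distribution of $(Y^N, Z^N)$, which can be verified by brute force. The sketch below is our own construction, with $N = 3$ and a random binary joint pmf; it computes both sides of $\sum_{i=1}^{N} I(Y_{i+1}^{N}; Z_i \mid Z^{i-1}) = \sum_{i=1}^{N} I(Z^{i-1}; Y_i \mid Y_{i+1}^{N})$ and checks that they agree.

```python
# Brute-force verification of Csiszar's sum equality for N = 3 and a
# random joint pmf on (Y^3, Z^3); the identity is distribution-free.
import random
from itertools import product
from math import log2

N = 3
random.seed(0)
atoms = list(product((0, 1), repeat=2 * N))        # (y1, y2, y3, z1, z2, z3)
weights = [random.random() for _ in atoms]
total_w = sum(weights)
joint = {a: w / total_w for a, w in zip(atoms, weights)}

def marg(idx):
    """Marginal pmf of the coordinates listed in idx."""
    m = {}
    for a, p in joint.items():
        key = tuple(a[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

def cond_mi(A, B, C):
    """I(A;B|C) in bits, with A, B, C index lists into the atom tuples."""
    pabc, pac, pbc, pc = marg(A + B + C), marg(A + C), marg(B + C), marg(C)
    total = 0.0
    for a, p in joint.items():
        ka = tuple(a[i] for i in A)
        kb = tuple(a[i] for i in B)
        kc = tuple(a[i] for i in C)
        total += p * log2(pabc[ka + kb + kc] * pc[kc] / (pac[ka + kc] * pbc[kb + kc]))
    return total

# Coordinates: Y_i sits at index i-1, Z_i at index N + i - 1 (i = 1..N).
lhs = sum(cond_mi(list(range(i, N)), [N + i - 1], list(range(N, N + i - 1)))
          for i in range(1, N + 1))   # sum_i I(Y_{i+1}^N ; Z_i | Z^{i-1})
rhs = sum(cond_mi(list(range(N, N + i - 1)), [i - 1], list(range(i, N)))
          for i in range(1, N + 1))   # sum_i I(Z^{i-1} ; Y_i | Y_{i+1}^N)
print(f"lhs = {lhs:.10f}  rhs = {rhs:.10f}")
assert abs(lhs - rhs) < 1e-9
```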

To prove condition (17), we calculate

The last inequality in (21) follows from Fano’s inequality and the fact that the feedback rate is limited by $R_f$. The remaining conditional entropy term is then calculated, where the marked step follows from the Markov chain of the degraded channel. Then, we introduce a random variable $J$ which is independent of $(W_0, W_1, Y^N, Z^N)$ and uniformly distributed over $\{1, 2, \dots, N\}$, and define the auxiliary random variable $U$ by adjoining $J$ to the per-letter auxiliary variable. It is straightforward to see that $U \rightarrow X \rightarrow Y \rightarrow Z$ forms a Markov chain. After using the standard time-sharing argument [10, Section 5.4], (19), (20), and (22) simplify to (23), (24), and (25). Substituting (25) into (21) and utilizing (5), we get (26); the last equality in (26) follows from the Markov chain $U \rightarrow X \rightarrow Y \rightarrow Z$.

To finish the proof of (16) and (17), we need to bound the remaining information terms. We first establish (27), where the marked steps follow from the Markov chains induced by the degraded channel. Utilizing (27), we obtain (28).

From (28), condition (16) follows directly.

Then, we prove the bound needed for (17). Since the channel model in Figure 2 is (physically) degraded, the per-letter Markov chain $X_i \rightarrow Y_i \rightarrow Z_i$ holds for every $i$, which implies the bound in (30).

Therefore, utilizing (28), (29), and (30), we get (31), which proves condition (17).

The converse part of Theorem 2 is proved.

##### 3.2. A Coding Scheme Achieving

A coding scheme is provided that achieves the rate triples in the region. The key methods used in the scheme are superposition coding, rate splitting, and random binning. The confidential message is split into two parts: one part is reliably transmitted using superposition coding and random binning; the other part is securely transmitted with the help of the feedback. Note that Section 3.1 has already given the outer bound on the capacity-equivocation region. When the feedback rate is sufficiently large, it can be seen from (9) that the secrecy capacity of the degraded DMBCs with rate-limited feedback saturates at the rate supported by the main channel. Therefore, in order to investigate the effects of the feedback, only feedback rates below this saturation point are considered in this subsection.

We need to prove that all the triples in the region are *achievable* (see Definition 1) for the model of Figure 2 with any feedback rate limited by $R_f$. This subsection is organized as follows. The codebook generation and encoding scheme are given in Section 3.2.1. The decoding scheme is given in Section 3.2.2. The analyses of error probability and equivocation are shown in Sections 3.2.3 and 3.2.4, respectively.

###### 3.2.1. Codebook Generation and Encoding

Split the confidential message $W_1$ into two parts; that is, $W_1 = (W_{11}, W_{12})$. The corresponding variables $W_{11}$, $W_{12}$ are uniformly distributed over their respective message sets. (When the feedback rate is no smaller than the rate of the confidential message, the confidential message can be totally protected by using part of the feedback as the shared key between the transmitter and receiver 1, and the remaining part of the feedback is redundant. Therefore, in order to study the effects of the feedback on the capacity region, only the opposite case comes into our consideration.)

It is important to notice that is the rate of the private message , which consists of and . This means that

Define the index sets , , , and satisfying

We use , , , to index the codeword . Take such that (2) holds. Since , it is easy to see . Therefore, let , , where is an arbitrary set such that (3) holds. Let be a mapping of into partitioning into subsets of size ; that is, where .

For each common message, we generate a codeword accordingly. Then, for each such codeword, a codebook (see Figure 4) is constructed. The codewords of the codebook are put into bins so that each bin contains the same number of codewords, and each bin is indexed accordingly. Then, we divide each bin into subbins such that each subbin contains the same number of codewords. The codebook structure is presented in Figure 4.
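The bin/subbin indexing just described can be rendered as simple integer arithmetic; the sizes below are illustrative powers of two of our choosing, not the paper's rates.

```python
# A toy rendering of the bin/subbin structure: a list of "codewords"
# (here just integers) is partitioned into bins, and each bin into
# subbins, so a codeword is addressed by (bin, subbin, position).
n_codewords, n_bins, n_subbins = 64, 8, 4
per_bin = n_codewords // n_bins          # 8 codewords per bin
per_subbin = per_bin // n_subbins        # 2 codewords per subbin

def address(c):
    """Map a codeword index to its (bin, subbin, position) address."""
    b, r = divmod(c, per_bin)
    s, pos = divmod(r, per_subbin)
    return b, s, pos

def codeword(b, s, pos):
    """Inverse mapping: recover the codeword index from its address."""
    return b * per_bin + s * per_subbin + pos

assert all(codeword(*address(c)) == c for c in range(n_codewords))
print(address(37))   # -> (4, 2, 1)
```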

Let , where is the key sent to the transmitter from receiver 1 through the secure feedback link. It is kept secret from receiver 2. The corresponding variable is uniformly distributed over and independent of and .

In order to send the common and confidential messages, a codeword is chosen as follows. According to the common message, we first find the corresponding sequence; for the determined sequence, there is a corresponding codebook; see Figure 4. Then, the corresponding codeword is sent into the channel, where one index is chosen randomly and another is obtained by modulo addition of a part of the confidential message and the secret key. Figure 4 shows how to select the codeword in detail: according to the common message, we find the corresponding codebook; in that codebook, we choose the corresponding bin; then, in that bin, the subbin is found; finally, a codeword is randomly chosen from that subbin.
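The key step of the encoding above is the modulo addition of a confidential sub-message with the secret feedback key, which acts exactly as a one-time pad; the alphabet size and variable names in this sketch are our own, not the paper's notation.

```python
# One-time-pad view of the masked index: w is a confidential
# sub-message, key is fed back securely by receiver 1, and the codeword
# actually carries k = w + key (mod M).
import random

M = 16                        # size of the sub-message alphabet (illustrative)
random.seed(1)

key = random.randrange(M)     # secret key from the feedback link
w = random.randrange(M)       # confidential sub-message
k = (w + key) % M             # index carried by the transmitted codeword

# Receiver 1 knows the key, so it inverts the modulo addition:
w_hat = (k - key) % M
assert w_hat == w

# For the eavesdropper, k is uniform whatever w is: for each fixed w,
# k sweeps all of Z_M as the key varies, so k alone reveals nothing.
for w in range(M):
    images = {(w + key) % M for key in range(M)}
    assert images == set(range(M))
```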

###### 3.2.2. Decoding

Receiver 2 tries to find a unique sequence jointly typical with its channel output. If there exists such a unique sequence, decoder 2 outputs the corresponding common message; otherwise, an error is declared. Since the size of the common message set is smaller than the corresponding packing bound, the decoding error probability for receiver 2 approaches zero.

Receiver 1 can also decode the common message, since the output of channel 2 is a degraded version of the output of channel 1. Then, receiver 1 tries to find a unique codeword jointly typical with its channel output. If there exists such a unique codeword, receiver 1 recovers the protected message part by modulo subtraction (receiver 1 knows the secret key) and finds the remaining part from the bin and subbin indices. Decoder 1 outputs the decoded common and confidential messages. If no such codeword or more than one such codeword exists, an error is declared.

###### 3.2.3. Analysis of Error Probability

Since the number of common-message sequences is upper bounded as required and the DMBCs under discussion are degraded, both receivers can decode the common message with error probability approaching zero by applying the standard channel coding theorem [11]. Moreover, it can be calculated that, given the common-message codeword, the number of compatible codewords is

So, after determining the common-message codeword, receiver 1 can decode the transmitted codeword with error probability approaching zero by applying the standard channel coding theorem [11]. This proves (6).

###### 3.2.4. Analysis of Equivocation

The proof of (5) is given below, where the marked steps follow from the Markov chain, from the fact that the masked index is independent of the confidential message, and from the fact that the key is uniformly distributed over its alphabet. The proof of the fact that the masked index is independent of the confidential message is shown as follows (the proof can also be seen in [6]): the marked steps follow from the fact that the key is independent of the message part and from the fact that the key and the message part are both uniformly distributed. According to (38), (39) holds; therefore, the masked index is independent of the confidential message.

Next, we focus on the first term in (37). The method of the equivocation analysis in [2] will be used. Note that the random variable appearing in inequality (40) is the random variable of the common message. The four remaining terms will be bounded as follows.

Given , the number of is . By applying [12, Lemma 2.5], we obtain

Since and the channel to receiver 2 is discrete memoryless, it is easy to get

With the knowledge of and , the number of is

So, receiver 2 can decode the codeword with error probability approaching zero by using the standard channel coding theorem [11]. Therefore, using Fano’s inequality, we get

Moreover, using a deduction similar to that in [2, Section 4], we get

Substituting (41), (42), (44), and (45) into (40), we get (46), where the equality in (46) follows from the Markov chain.

Finally, (5) is verified by substituting (46) into (37). This completes the proof of Theorem 2.

#### 4. Proof of Theorems 4 and 5

In this section, Theorems 4 and 5 are proved. In the model of Figure 3, it is assumed that the channel to receiver 1 is independent of the channel to receiver 2; that is, $p(y,z \mid x) = p(y \mid x)p(z \mid x)$. To prove Theorem 4, we first give the outer bound on the capacity-equivocation region of the *less noisy* DMBCs with noiseless causal feedback in Section 4.1. Then, a coding scheme is provided to achieve the outer bound. Similarly, to prove Theorem 5, the outer bound on the capacity-equivocation region of the *reversely less noisy* DMBCs with noiseless causal feedback is given in Section 4.2, together with a coding scheme that achieves it. The methods used to prove the converse parts of the two theorems are from [5]. The coding schemes are inspired by [3, 5].

##### 4.1. Less Noisy DMBCs with Noiseless Causal Feedback

We first show the converse part of Theorem 4, and then we prove the direct part of Theorem 4 by providing a coding scheme.

In order to find the identification of the auxiliary random variables that yields the capacity-equivocation region of Theorem 4, we prove the converse part for the equivalent region containing all the rate triples $(R_0, R_1, R_e)$ such that

The proofs of (48), (49), and (50) follow exactly the same lines as those of (14), (15), and (16) in Section 3, except for the identification of the auxiliary random variable (which will be given subsequently), and are therefore omitted. We focus on proving (51):
where the error term is a small positive number. The last inequality in (52) follows from the fact that conditioning does not increase entropy, together with Fano’s inequality. To complete the proof of (51), define a time-sharing random variable $J$ which is uniformly distributed over $\{1, 2, \dots, N\}$ and independent of the messages and channel outputs. It is easy to see that the resulting auxiliary random variable satisfies the Markov chain $U \rightarrow X \rightarrow (Y, Z)$. After using the standard time-sharing argument [10, Section 5.4], (52) simplifies to (53).

Finally, utilizing (5) in the definition of “*achievable*” and (53), we obtain (51). This completes the proof of the converse part of Theorem 4.

Next, a coding scheme is presented that achieves the rate triples in the region; we prove that all such triples are *achievable*. Note that the noiseless feedback for the less noisy DMBCs is causally transmitted from receiver 1 to the transmitter. The scheme comprises codebook generation and encoding (Section 4.1.1), decoding (Section 4.1.2), analysis of error probability (Section 4.1.3), and equivocation analysis (Section 4.1.4). Techniques such as block Markov coding, superposition coding, and random binning are used.

To serve the block Markov coding, let the random vectors of the messages, channel inputs, and channel outputs consist of blocks of equal length. The common messages of the blocks are independent and identically distributed over the common message set, and the confidential messages of the blocks are independent and identically distributed over the confidential message set. Note that in the first block, there is no confidential message. The output vector at receiver 2 at the end of the $i$th block, the output vector at receiver 1 at the end of the $i$th block, and the input vector of the channel in the $i$th block are denoted accordingly. These notations coincide with [6].

###### 4.1.1. Codebook Generation and Encoding

Let the common message set and the confidential message set satisfy where and satisfy (10).

Fix the joint probability distributions. In the $i$th block, we generate independent and identically distributed (i.i.d.) sequences according to the distribution of the common-message codewords, where the common message is the one to be sent in the $i$th block. For each such sequence, generate codewords and put them into bins, so that each bin contains the same number of codewords. The codebook structure is shown in Figure 5. Reveal all the codebooks to the transmitter, receiver 1, and receiver 2.

Let a mapping from the output sequences into the key alphabet be fixed and revealed to the transmitter, receiver 1, and receiver 2. Define a random variable uniformly distributed over the key alphabet and independent of the confidential message. It can be proved, similarly to (39), that the masked index is independent of the confidential message. In the first block, to send the common message (note that there is no confidential message to be sent in the first block), the transmitter finds the corresponding sequence and randomly chooses a codeword from the corresponding codewords. In each subsequent block, to send the common message and the confidential message, the transmitter computes the masked index by modulo addition of the confidential message and the key generated from the previous block’s output at receiver 1, and randomly chooses a codeword from the corresponding bin.
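The block Markov feedback mechanism described above can be sketched schematically: in block $i$ the confidential message is masked with a key computed from the previous block's output at receiver 1, which the transmitter learned through the noiseless feedback. The toy channel, the key mapping `g`, and the alphabet size below are stand-ins of ours, not the paper's construction.

```python
# Schematic block Markov pipeline: block i's confidential message is
# masked with a key derived from block (i-1)'s output at receiver 1.
import random

M = 8                                 # confidential-message alphabet (toy)
random.seed(2)
g = lambda y: sum(y) % M              # public mapping from outputs to keys

def channel(x):
    """Toy noiseless main channel: receiver 1 observes the input directly."""
    return x

y_prev = (0, 0)                       # block 1 output; no confidential message yet
for block in range(2, 6):
    w = random.randrange(M)           # confidential message of this block
    k = (w + g(y_prev)) % M           # masked bin index chosen by the encoder
    y = channel((block, k))           # receiver 1's output in this block
    # Receiver 1 observed y_prev itself, so it can strip the key:
    w_hat = (k - g(y_prev)) % M
    assert w_hat == w
    y_prev = y                        # fed back noiselessly after the block
print("all blocks decoded correctly")
```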

###### 4.1.2. Decoding

In the first block, as there is no confidential message, only the common message needs to be decoded by both receivers. Receiver 2 tries to find a unique sequence jointly typical with its channel output; if such a unique sequence exists, decoder 2 outputs the corresponding common message; otherwise, an error is declared. Receiver 1 proceeds in the same way with its own channel output.

In each subsequent block, receiver 2 aims to decode the common message, and receiver 1 aims to decode both the confidential and common messages. The method of decoding the common message for both receivers is the same as in the first block. Then, receiver 1 tries to find a unique sequence jointly typical with its channel output. If there exists such a unique sequence in one bin, receiver 1 recovers the confidential message by modulo subtraction using the key (which receiver 1 already knows from its own previous outputs); otherwise, an error is declared.

###### 4.1.3. Analysis of Error Probability

Since the number of common-message sequences is upper bounded as required, receiver 2 can decode the common message with error probability approaching zero by applying the standard channel coding theorem [11]. Moreover, since the DMBCs under discussion in Section 4.1 are *less noisy*, receiver 1 can also decode the common message with error probability approaching zero. Given the decoded common-message sequence, the number of compatible codewords can be calculated, so receiver 1 can decode the codeword with error probability approaching zero by applying the standard channel coding theorem [11] and obtain the confidential message with the help of the feedback.

###### 4.1.4. Analysis of Equivocation

In this part, the bound on the equivocation is proved by utilizing the methods in [5, 6]:

In the above deduction, the marked steps follow from the chain rule and the Markov structure, from the fact that receiver 2 can choose a better way to intercept the secret key at will, from the fact that the key is independent of the confidential message and uniformly distributed over its alphabet, and from (10).

This completes the proof of Theorem 4.

##### 4.2. Reversely Less Noisy DMBCs with Noiseless Causal Feedback

In this subsection, Theorem 5 will be proved. The converse part will be shown first, and then a coding scheme is given for proving the direct part.

In order to find the identification of the auxiliary random variables that yields the capacity-equivocation region of Theorem 5, we prove the converse part for the equivalent region containing all the rate triples $(R_0, R_1, R_e)$ such that

The inequalities (56), (57), and (58) can be proved using deductions similar to those in the converse part of Theorem 2 in Section 3, except for the identification of the auxiliary random variables. We focus on (59):
where the marked steps follow from the Markov chain, from the assumption that the channel is *reversely less noisy* (by a suitable choice of the auxiliary random variable), from the fact that the feedback is a function of receiver 1’s channel outputs, and from the fact that conditioning does not increase entropy, together with Fano’s inequality. To complete the proof of (59), define a time-sharing random variable $J$ which is uniformly distributed over $\{1, 2, \dots, N\}$ and independent of the messages and channel outputs. It is easy to see that the resulting auxiliary random variable satisfies the Markov chain $U \rightarrow X \rightarrow (Y, Z)$. After using the standard time-sharing argument [10, Section 5.4], (60) simplifies to (61).

Finally, utilizing (5) in the definition of “*achievable*” and (61), we obtain (59). This completes the proof of the converse part of Theorem 5.

Next, a coding scheme is provided for achieving the rate triples in the region; we should prove that all such triples are *achievable*. The *codebook generation*, *encoding*, and *decoding* follow exactly the same lines as the coding scheme for the *less noisy* case in Section 4.1. We present the *analysis of error probability* and the *equivocation analysis* as follows.

###### 4.2.1. Analysis of Error Probability

Since the number of common-message sequences is upper bounded as required, receiver 1 can decode the common message with error probability approaching zero by applying the standard channel coding theorem [11]. Moreover, since the DMBCs under discussion in Section 4.2 are *reversely less noisy*, receiver 2 can also decode the common message with error probability approaching zero. Given the decoded common-message sequence, the number of compatible codewords can be calculated, so receiver 1 can decode the codeword with error probability approaching zero by applying the standard channel coding theorem [11] and obtain the confidential message with the help of the feedback.

###### 4.2.2. Analysis of Equivocation

In this part, the bound on the equivocation will be proved. Special attention should be paid to receiver 2, since the DMBCs are *reversely less noisy*; that is,