Abstract

Secret image sharing has been researched extensively. However, in the social network environment, shadow images are subject to compression or noise pollution during uploading and transmission, which makes it challenging to recover secrets losslessly. Texts are better suited than images for transmission in social networks as shadows because of their broad range of application scenarios and inherent robustness. Through a (k, n) threshold secret sharing technique, a secret is encrypted as n shadows, where any k or more shadows can recover the secret, while fewer than k reveal no information about it. In this article, we propose a generative text secret sharing scheme with topic-controlled shadows, which encrypts a secret message as n semantically natural shadow texts and controls the topics of the shadow texts using bag-of-words models during text generation by the language model. This study also proposes two goal programming models to improve the shadow texts' topic relevance and fluency, respectively. The shadow texts of the proposed scheme satisfy loss tolerance, semantic comprehensibility, topic controllability, and robustness. An ablation study, a comparative test, and an anti-detection experiment verify the effectiveness of the proposed scheme.

1. Introduction

Protecting sensitive information from malicious interference is essential when it is transmitted over public channels. Shannon [1] summarizes three basic information security systems: the encryption system, the privacy system, and the concealment system. By creating a secret key, the encryption system protects the confidentiality of the message itself. The purpose of privacy systems is to prevent unauthorized users from accessing confidential messages. To protect the existence of confidential messages, the concealment system transmits them through open channels using different types of carriers. The (k, n) threshold secret sharing (SS) [2, 3] satisfies both encryption and privacy requirements. It encrypts a secret message as n shares (shadows), which are distributed to different participants. Any k shadows can recover the secret message, while fewer than k shadows reveal nothing. A wide range of applications can be built on it, including access control, password transmission, distributed storage systems, blockchain security, and cloud computing security [4–7].

Secret image sharing (SIS) has been extensively studied [7–11] since digital images are a critical media type. As social networks develop, more and more digital media is distributed through them. Before being transmitted through social networks, digital media is compressed to reduce storage costs and increase transmission speed. Digital images are also often contaminated by multiple kinds of noise due to defects of transmission and storage media [12]. Secret sharing schemes rely on mathematical models and precise calculations, and slight changes may result in completely different recovery results. Moreover, secret sharing schemes often generate noise-like shadow images [8, 11] because they use random numbers during the sharing process, and such images can easily attract the attention of attackers during transmission. When the communication behavior is discovered, the person using ciphertext communication stands out and thus becomes a priority target for monitoring and analysis. All these realities pose significant challenges to SIS.

Since ancient times, text has served as the primary means of human communication. Generally, the channel neither compresses text nor pollutes it with noise, so text is robust when transmitted in a public channel. This suggests that text may be a more suitable form of data than images to transmit as shadows in public channels. Since the value range of an image's pixels is [0, 255], SIS can directly establish a mapping from the shared value to the pixel value, thus combining the shared values into image form. In contrast, text, as a sequence of words with semantic relevance and syntactic rules, admits no direct correspondence between shared values and words. Yang et al. [13] proposed a generative text steganography scheme in which candidate words are encoded during the text generation process of a language model (LM); the words corresponding to the secret bits to be embedded are then chosen for output, completing the mapping from binary bits to the word space. With the help of this mapping method, this article proposes to encode the candidate words with a perfect binary tree in the generation process of shadow text, then determine the output words according to the shared values. Since the generation process is constrained by the language model, each shadow text is a fluent and natural utterance, which means the scheme also satisfies the defining characteristic of the concealment system, namely imperceptibility.

The complex and open character of social networks provides an excellent camouflage environment for transmitting shadow texts. The language characteristics of each social account differ according to fields of interest, professional direction, and so on. The concealment and security of shadow text transmission through social networks can be further enhanced if the semantics of shadow texts can be effectively controlled to match the social accounts' characteristics. Controllable text generation (CTG) can control text characteristics, such as emotion and style, while preserving the content [14–16]. CTG involves modeling p(x | a), where a is the target attribute and x is the sample to be generated. A Plug and Play Language Model (PPLM) for controllable language generation was proposed by Dathathri et al. [17], which combines a pretrained generative model with attribute models. We propose using this CTG method to control the topic of each shadow text through the bag of words (BoW) associated with different topic words, thus making the shadow texts more suitable for social network scenarios.

In this article, we propose a generative text secret sharing with topic-controlled shadows (GTSS) scheme, which shares a secret message as n shadow texts, each of which can have a different topic, and any k shadow texts can recover the secret message. This article's motivations and contributions are summarized as follows:

(i) In response to the problems faced by secret image sharing, namely that shadow images easily suffer from compression and noise pollution during transmission and are susceptible to suspicion, this article proposes to use texts as shadows, which are more suitable for robust and covert transmission in the social network environment.

(ii) To address the problem that shared values cannot be directly linked to the word space, this article proposes to encode the candidate words with a perfect binary tree in the text generation process, establishing the mapping from the shared value space to the word space.

(iii) Because social network users' speech characteristics differ, we propose to control the generated shadow texts' topics with BoW so that they are more concealable in social network scenarios.

(iv) Most importantly, we propose two goal programming models, which deeply integrate secret sharing, encoding, and controllable text generation techniques and can enhance topic relevance and text fluency, respectively. Compared with existing generative text steganography schemes, GTSS has clear advantages in both generated text quality and detection resistance.

2. Preliminaries and Related Work

Preliminaries and related work regarding the proposed scheme are presented here. First, we introduce the definition of SS and the SS scheme based on matrix theory used by GTSS. Then, we introduce the method of mapping binary bits to the word space. Finally, we introduce the transformer principle and the transformer-based controllable text generation method.

2.1. Secret Sharing Based on Matrix Theory

Secret sharing can be defined as follows [18].

Share: a randomized algorithm that outputs a sequence (share_1, …, share_n) of shares based on the input message s.

Reconstruct: a deterministic algorithm that outputs a message based on a collection of k or more shares.

Here M is the message space, s ∈ M, and k is the threshold. The users can be numbered as 1, 2, …, n, and user i holds share_i. Denote by U a subset of users. The set of shares belonging to the users in U is {share_i : i ∈ U}. If |U| ≥ k, then U is authorized; otherwise it is unauthorized. Secret sharing aims to allow authorized sets of users/shares to recover the secret, while unauthorized sets cannot.

Yu et al. [19] proposed an SIS scheme modulo 256, which removes the restriction that the modulus (denoted by m) of the traditional SS scheme must be a prime number, making the shared value space correspond perfectly to the range of grayscale image pixel values. They first construct an n × k sharing matrix A, every k × k submatrix of which has an odd determinant. After that, the secret pixel value to be shared is put into the first element of the vector X = (x_1, x_2, …, x_k)^T, and the remaining elements of X take values randomly from [0, 255]. Then, n shared values are obtained by matrix multiplication as shown in the following equations:

\[ SV = (sv_1, sv_2, \ldots, sv_n)^{T} = A \cdot X \bmod 256, \tag{1} \]

\[ sv_i = a_i \cdot X \bmod 256, \quad i = 1, 2, \ldots, n, \tag{2} \]

where a_i denotes the i-th row vector of A.

Each shared value sv_i corresponds to one row vector a_i, and recovery can be performed when k pairs (sv_i, a_i) are obtained. During the recovery process, the recovery matrix M is constructed from the k collected row vectors, which form a k × k submatrix of A. The vector X can be recovered by (3), and its first element is the secret pixel value:

\[ X = M^{-1} \cdot SV' \bmod 256, \tag{3} \]

where M^{-1} is the inverse matrix of the recovery matrix M modulo 256, obtained by dividing the adjoint matrix adj(M) by the determinant det(M) (which is odd and hence invertible modulo 256), and SV' consists of the k collected shared values.

In this article, we choose to encode the candidate pool using a perfect binary tree with tree height h. The integer form of the codewords ranges over [0, 2^h − 1], so if we take m = 2^h, the shared value space corresponds perfectly to the codeword space. The size of the candidate pool (CPS) is 2^h. GTSS therefore generalizes the SS scheme above and chooses the modulus m to be 2^h, so that the range of shared values never exceeds CPS, which ensures the feasibility of information embedding and the correctness of extraction. This scheme also shares u secret values at a time, which means that the first u elements of X are secret values and the remaining k − u elements are chosen randomly from [0, 2^h − 1]. Sharing u secret values at a time significantly improves the efficiency of the scheme.
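A minimal sketch of one round of this modulo-2^h sharing and its adjugate-based recovery, written by us for illustration: the 3 × 2 matrix A, the parameter values, and all names are assumptions, not the paper's implementation.

```python
import numpy as np

h = 4                  # perfect binary tree height
m = 2 ** h             # modulus = candidate pool size (2^h)
k, n = 2, 3            # (k, n) threshold (illustrative values)

# Hypothetical n x k sharing matrix: every 2 x 2 submatrix has an odd
# determinant, so each is invertible modulo 2^h.
A = np.array([[1, 2],
              [1, 3],
              [2, 3]])

def share(secret, rng):
    """One sharing round: the secret goes into X[0], the rest of X is random fill."""
    x = np.array([secret] + list(rng.integers(m, size=k - 1)))
    return A @ x % m   # n shared values, one per shadow text

def recover(shares, rows):
    """Recover X from any k shared values via the adjugate-based inverse mod m."""
    M = A[rows]                                    # k x k recovery matrix
    adj = np.array([[M[1, 1], -M[0, 1]],
                    [-M[1, 0], M[0, 0]]])          # adjugate of a 2 x 2 matrix
    det = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]    # odd by construction
    det_inv = pow(int(det) % m, -1, m)             # exists because det is odd
    x = adj @ np.array(shares) * det_inv % m
    return int(x[0])                               # first element is the secret

rng = np.random.default_rng(42)
sv = share(9, rng)                        # share the secret value 9 in [0, 2^h)
print(recover([sv[0], sv[2]], [0, 2]))    # -> 9, recovered from shadows 1 and 3
```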

2.2. Mapping Method of Binary Bits to Word Space

The natural language processing field typically considers text to be a sequence of words organized according to their semantic associations and syntactic properties. The chain rule of probability [20, 21] can be used to describe the joint probability distribution of a word sequence, which is expressed as follows:

\[ p(w_1, w_2, \ldots, w_L) = \prod_{i=1}^{L} p(w_i \mid w_1, \ldots, w_{i-1}). \tag{4} \]

Here p(w_1, w_2, …, w_L) is the generation probability of the word sequence, and p(w_i | w_1, …, w_{i−1}) represents the conditional probability of generating the word w_i when w_1, …, w_{i−1} are given. The conditional probability measures the degree of fit between w_i and the preceding text; generally, the generated text is more reasonable if the conditional probability is higher. In general, multiple candidate words are available for a given prefix w_1, …, w_{i−1} that keep the generated text grammatically and semantically well formed.

To achieve the mapping of secret bits to words, Yang et al. [13] proposed a fixed-length coding (FLC) based on the perfect binary tree and a variable-length coding (VLC) based on the Huffman tree. The FLC scheme is simpler to implement and more time-efficient [22]. Figure 1 illustrates FLC schematically; for illustration we choose a candidate pool size of 4 and a perfect binary tree of height 2. At the t-th time step, the FLC scheme first inputs the prefix text into the LM to get the candidate words and their probability distribution, then intercepts a candidate pool of fixed size in descending probability order and encodes its words using a perfect binary tree. The four candidate words then receive the codewords 00, 01, 10, and 11, respectively, so the corresponding candidate word can be chosen based on the secret bits (e.g., 01) to be embedded. The output word is appended to the prefix before embedding the secret bits of the next time step.

Using the method discussed above, a certain number of secret bits can be carried by each word in the generated text. We first encrypt the secret value into shared values by SS, then complete the mapping from the shared values to words in shadow texts using perfect binary tree encoding.
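A small sketch of the FLC mapping and its inverse (our illustration, not the authors' code); the function names and the toy distribution are assumptions:

```python
import numpy as np

def flc_select(probs, shared_value, h):
    """Map a value in [0, 2^h) to a token: take the 2^h most probable tokens and
    read the fixed-length codeword as the position in the sorted pool."""
    pool = np.argsort(probs)[::-1][: 2 ** h]   # candidate pool, most probable first
    return pool[shared_value]

def flc_extract(probs, token_id, h):
    """Inverse mapping used at recovery: the observed token's rank is the codeword."""
    pool = np.argsort(probs)[::-1][: 2 ** h]
    return int(np.where(pool == token_id)[0][0])

# Toy distribution over a 10-word vocabulary (stand-in for an LM output).
p = np.array([0.30, 0.22, 0.15, 0.10, 0.08, 0.06, 0.04, 0.03, 0.01, 0.01])
tok = flc_select(p, shared_value=2, h=2)   # codeword 10 -> third-ranked word
assert flc_extract(p, tok, h=2) == 2
```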

2.3. Transformer-Based Controllable Text Generation

On the basis of traditional text generation, controllable text generation adds control over key information, style, attributes, etc., making the generated text meet given expectations.

To create a conditional generative model, Dathathri et al. used a transformer [23] to model the natural language distribution p(x) and proposed PPLM [17] to sample from p(x | a). Equation (5) summarizes the recurrent interpretation of transformers [24]:

\[ o_{t+1}, H_{t+1} = \mathrm{LM}(x_t, H_t). \tag{5} \]

The history matrix H_t consists of the key-value pairs from time steps 0 to t − 1. The next token x_{t+1} is then sampled according to p_{t+1} = Softmax(W o_{t+1}), where W is a linear transformation that maps the logit vector o_{t+1} to a vector of vocabulary size.

In the next time step, H_t can be adjusted to increase the probability of the more relevant words in the candidate pool. Based on the conditioned attribute model p(a | x), the history H_t can be shifted toward both a higher log-likelihood (LL) of the attribute a and the distribution of the unmodified language model p(x). With ΔH_t being the update to H_t, (H_t + ΔH_t) increases the likelihood that the generated text possesses the target attribute. The initial value of ΔH_t is zero, and the attribute model is rewritten as p(a | H_t + ΔH_t) by PPLM. Gradient updates are then made to ΔH_t as follows:

\[ \Delta H_t \leftarrow \Delta H_t + \alpha \frac{\nabla_{\Delta H_t} \log p(a \mid H_t + \Delta H_t)}{\lVert \nabla_{\Delta H_t} \log p(a \mid H_t + \Delta H_t) \rVert^{\gamma}}, \tag{6} \]

where α indicates the update step size and γ represents the scaling coefficient for the normalization term. This updating step can be repeated q times, where q is usually between 3 and 10. Afterward, the updated logits õ_{t+1} are obtained by a forward pass through the LM using the modified history H̃_t = H_t + ΔH_t. Using the modified õ_{t+1} at time step t + 1, a new probability distribution p̃_{t+1} can be generated.
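To make the update of (6) concrete, here is a self-contained toy sketch of the gradient ascent on a stand-in history vector; the linear head W, the dimensions, and the BoW indices are illustrative assumptions, not the actual GPT-2 machinery.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
V, D = 50, 16                              # toy vocabulary and hidden sizes
W = rng.normal(size=(V, D)) / np.sqrt(D)   # stand-in for the LM output head
h_t = rng.normal(size=D)                   # stand-in for the history H_t
bow = [3, 7, 19]                           # token ids of the topic's bag of words

alpha, gamma, q_steps = 0.5, 1.0, 5        # step size, scaling power, 3-10 updates
delta = np.zeros_like(h_t)                 # Delta H_t, initialized to zero
for _ in range(q_steps):
    p = softmax(W @ (h_t + delta))
    # Analytic gradient of log p(a|x) = log(sum of BoW probabilities) w.r.t. Delta H_t.
    grad = (p[bow] / p[bow].sum()) @ W[bow] - p @ W
    delta += alpha * grad / (np.linalg.norm(grad) ** gamma + 1e-12)

p_tilde = softmax(W @ (h_t + delta))
print(p_tilde[bow].sum())   # BoW mass grows relative to softmax(W @ h_t)
```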

GTSS chooses BoW as the attribute model to modify p_{t+1}. The modified probability distribution p̃_{t+1} is arranged in descending order, and the first 2^h words are encoded by a perfect binary tree. The corresponding shadow words are then selected for output according to the shared values, completing the generation of shadow texts that satisfy specific topics. The specific method and detailed algorithm are described in the next section.

3. GTSS Methodology

In this section, we first define text secret sharing and introduce the basic idea of GTSS, followed by a detailed description of the sharing algorithm and the recovery algorithm, after which we analyze the applicability of GTSS.

3.1. The Basic Idea

Table 1 illustrates the main notations used in this article.

This scheme uses BoW models corresponding to specific topics as attribute models. A BoW is a set of keywords that specify a topic. Equation (7) can be used to represent the attribute model log p(a | x):

\[ \log p(a \mid x) = \log \Bigl( \sum_{w \in \mathrm{BoW}} p_{t+1}(w) \Bigr), \tag{7} \]

where p_{t+1} represents the conditional probability distribution output by the language model at time t + 1. By (6), we can calculate ΔH_t and modify H_t to obtain the modified conditional probability distribution p̃_{t+1}.

We define text secret sharing as follows: a secret message is shared as n shadow texts; each shadow text is a natural, fluent sentence and can have a specific topic. The original secret message can be recovered from any k shadow texts, while fewer than k shadow texts cannot complete the recovery.

To achieve this, we propose using SS based on matrix theory to share the secret message, controlling the topic of each shadow text by BoW, and completing the mapping of shared values to the word space using perfect binary tree coding. GTSS consists of two parts: the sharing algorithm and the recovery algorithm. The sharing algorithm includes three modules: the secret sharing module, the mapping module, and the goal programming model. The recovery algorithm includes the reconstruct module and the inverse mapping module. We explain them separately below.

3.2. The Sharing Algorithm

A schematic diagram of the sharing phase is shown in Figure 2, where we consider a (k, n) threshold scheme. We assume that the secret message is a piece of secret text, although this scheme can share any binary bits. The secret text is encoded into a binary bitstream B, sliced into units of h bits each, and converted into secret integer values. An n × k sharing matrix A, every k × k submatrix of which has an odd determinant, is generated before sharing; then u secret values at a time are put into the first u positions of X, and the remaining k − u elements take values in [0, 2^h − 1]. The secret sharing module performs the matrix multiplication of (1) to obtain the shared values sv_1, …, sv_n. The mapping module continuously generates text using the language model. Using the BoW corresponding to a specific topic, the mapping module modifies the probability distribution to increase the probability of the more topic-compatible words in the candidate pool. The mapping module then encodes the candidate words with a perfect binary tree, and the corresponding words are chosen based on the shared values. The goal programming model (GPM) guides all the above processes.

For different applications, we propose the goal programming model GPM-topic to optimize topic relevance and another goal programming model GPM-ppl to improve text quality. Equation (8) expresses GPM-topic:

\[ \max_{x_{u+1}, \ldots, x_k \in [0,\, 2^h - 1]} \; \prod_{i=1}^{n} \tilde{p}_i\bigl(w_{sv_i}\bigr) \quad \text{s.t.} \quad SV = A \cdot X \bmod 2^h, \tag{8} \]

where w_{sv_i} is the candidate word selected in the i-th shadow text by the shared value sv_i.

The conditional probability distribution p̃_i represents the likelihood of the next word given the preceding words of the i-th shadow text, after modification by BoW_i to increase relevance to topic T_i. The constraints in GPM-topic are the operations of the secret sharing module and the mapping module. Using the perfect binary tree, the mapping module maps each shared value into the word space generated by the LM. For one set of secret values, the combination of shared values is not unique, because the remaining k − u elements of X take values in [0, 2^h − 1]. By continuously adjusting these last k − u elements of X, we can produce different shadow word combinations. To generate more appropriate shadow texts, GPM-topic exploits this freedom to find the combination of shadow words with the largest product of modified probabilities, which is the combination most relevant to the topics.
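A minimal brute-force sketch of this search (our illustration; the paper does not specify the optimization procedure at this level of detail) enumerates the free elements of X and scores each resulting shadow-word combination:

```python
import numpy as np
from itertools import product

def gpm_search(secrets, A, probs_tilde, h, u):
    """Enumerate the k - u free elements of X and keep the fill whose shadow
    words have the largest probability product. probs_tilde[i] is the (modified,
    for GPM-topic) distribution over shadow i's pool of 2^h words, sorted
    in descending order."""
    m = 2 ** h
    n, k = A.shape
    best = (-1.0, None, None)
    for fill in product(range(m), repeat=k - u):
        x = np.array(list(secrets) + list(fill))
        sv = A @ x % m                                  # one value per shadow text
        score = np.prod([probs_tilde[i][sv[i]] for i in range(n)])
        if score > best[0]:
            best = (score, sv, x)
    return best                                         # (product, shared values, X)

# Toy example: (k, n) = (2, 3), u = 1 secret per round, pool size 2^2.
A = np.array([[1, 2], [1, 3], [2, 3]])
pools = [np.sort(np.random.default_rng(i).dirichlet(np.ones(4)))[::-1]
         for i in range(3)]
score, sv, x = gpm_search([2], A, pools, h=2, u=1)
```

The search space has 2^{h(k−u)} fills per round, which stays small for the (2, n) and (3, n) thresholds used here.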

The mapping module modifies the original probability distribution p with BoW to get p̃, which has a higher likelihood of fitting the topic. The language model is trained on large amounts of natural text to fit the natural language probability distribution. Therefore, modifying the probability distribution affects the fluency of the generated text; this is the price of topic control.

Equation (9) shows the perplexity (ppl) used as a metric for evaluating generated text [25–27]:

\[ \mathrm{ppl} = \exp\Bigl( -\frac{1}{L} \sum_{i=1}^{L} \log p(w_i \mid w_1, \ldots, w_{i-1}) \Bigr). \tag{9} \]
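For reference, a one-function sketch of equation (9), assuming the per-word conditional probabilities are already available:

```python
import numpy as np

def perplexity(cond_probs):
    """Equation (9): exp of the negative mean log conditional word probability."""
    return float(np.exp(-np.mean(np.log(cond_probs))))

print(perplexity([0.2, 0.05, 0.5]))   # higher word probabilities -> lower ppl
```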

We can see that increasing the conditional probabilities decreases the perplexity and improves the quality of the word sequence. As shown in (10), we propose GPM-ppl as a method for improving shadow text quality:

\[ \max_{x_{u+1}, \ldots, x_k \in [0,\, 2^h - 1]} \; \prod_{i=1}^{n} p_i\bigl(w_{sv_i}\bigr) \quad \text{s.t.} \quad SV = A \cdot X \bmod 2^h. \tag{10} \]

The conditional probability distribution p_i represents the likelihood of the next word given the preceding words of the i-th shadow text, and it is the original probability distribution obtained from the LM. The rest is the same as GPM-topic. To reduce the ppl and improve the text quality, we look for the combination of shadow words with the largest product of original probabilities. In this way, each shadow word matches the original distribution over its preceding words more closely, resulting in a reduced ppl of the shadow texts. Meanwhile, this weakens the tendency to select topic-related words and ultimately reduces the shadow texts' topic relevance. Therefore, the choice of GPM should be based on real-world application requirements.
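Under the assumptions of the gpm_search sketch above, GPM-ppl changes only the scoring distributions: pass the unmodified LM distributions over the candidate pools as probs_tilde instead of the BoW-shifted ones, and the same brute-force search then maximizes the product of original probabilities.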

Algorithm 1 shows the details of the sharing method, by which we can generate natural and topic-controlled shadow texts from the input secret bitstream. The generated shadow texts can then be sent through open channels, carrying the confidential message with high robustness and concealment.

Input:
 Secret bitstream B; (k, n) threshold; the number of secret units to be shared at one time u; perfect binary tree's height h (then CPS = 2^h and m = 2^h); the topics of each shadow text T_1, …, T_n; bags of words BoW_1, …, BoW_n related to T_1, …, T_n; initial words (prefixes) P_1, …, P_n for each shadow text.
Output:
 n shadow texts S_1, …, S_n.
(1) Slice B per h bits and transform each unit into integer form to get the secret values SVs;
(2) Construct the n × k sharing matrix A, every k × k submatrix of which has an odd determinant;
(3) for each i ∈ {1, …, n} do
(4)  Input P_i into the LM to get the history H_i of S_i;
(5)  S_i ← P_i;
(6) j ← 0;
(7) while j < |SVs| do
(8)  if at least u values of SVs remain unshared then
(9)   Create a vector X with the first u values being the secret values SVs_{j+1}, …, SVs_{j+u} and the remaining k − u values coming from [0, m − 1];
(10)   SV ← A · X mod m;
(11)   for each i in {1, …, n} do
(12)    Based on BoW_i, ΔH_i can be obtained through (6);
(13)    H̃_i ← H_i + ΔH_i;
(14)    Input the last word of S_i and H̃_i into the LM, then obtain õ_i and H_i;
(15)    p̃_i ← Softmax(W õ_i);
(16)    Arrange p̃_i in descending order, encode the top 2^h words using the perfect binary tree, determine the output word by the shared value sv_i, and append it to S_i;
(17)  else
(18)   Append random values from [0, m − 1] to SVs as padding;
(19)  j ← j + u;
(20) return S_1, …, S_n
3.3. The Recovery Algorithm

The secret message can be recovered when k or more shadow texts are obtained. Figure 3 shows the schematic diagram of the recovery phase. Based on the same text generation process as in the sharing phase, the inverse mapping module calculates the conditional probability distribution for the next time step, and a perfect binary tree is used to encode the candidate pool. To obtain the shared values, we do not need a sampling strategy like that of the sharing stage to select shadow words; instead, we find the corresponding codewords from the already determined shadow words. After that, the reconstruct module puts the obtained shared values into the vector SV' and multiplies it with the inverse matrix M^{-1} of the recovery matrix to get the vector X, whose first u elements are the secret values. The recovery process is shown in detail in Algorithm 2. For convenience of presentation, the k shadow texts obtained are assumed to be the first k of the n shadow texts.

Input:
 k shadow texts S_1, …, S_k; row vectors a_1, …, a_k corresponding to S_1, …, S_k; the number of secret units to be shared at one time u; height of the perfect binary tree h; the topics of each shadow text T_1, …, T_k; bags of words BoW_1, …, BoW_k related to T_1, …, T_k.
Output:
 Original secret bitstream B.
(1) Combine the row vectors a_1, …, a_k to obtain the recovery matrix M, and calculate the inverse matrix M^{-1} according to (3);
(2) for each shadow text S_i do
(3)  Input the prefix of S_i into the LM to get H_i;
(4)  while not the end of S_i do
(5)   Based on BoW_i, ΔH_i can be obtained through (6);
(6)   H̃_i ← H_i + ΔH_i;
(7)   Input the last processed word of S_i and H̃_i into the LM, then obtain õ_i and H_i;
(8)   p̃_i ← Softmax(W õ_i);
(9)   Arrange p̃_i in descending order and encode the top 2^h words by a perfect binary tree;
(10)   Extract the codeword corresponding to the next word of S_i and transform it into an integer, which is the shared value sv_i; then add sv_i to the list SVL_i;
(11) for each group of shared values (sv_1, …, sv_k) taken from the same position of SVL_1, …, SVL_k do
(12)  Combine (sv_1, …, sv_k) into the vector SV', and calculate X = M^{-1} · SV' mod m according to (3); the first u elements of X are secret values, which are appended to SVs;
(13) Transform each value in SVs into the binary form of h bits; then B is obtained;
(14) return B

In contrast to images or videos, texts are generally not compressed or distorted during transmission. Texts therefore have excellent robustness, making shadow texts well suited for transmission in many scenarios. For example, shadow texts can be transmitted by instant messaging software such as Telegram and Skype, or by posting them on social media platforms such as Twitter and Facebook. The receiver can then obtain the shadow texts by browsing and copying from these platforms and use the recovery algorithm to get the secret message.

The topic of each shadow text does not need to be transmitted but can be obtained from the shadow text itself. The topic can be identified manually from the semantics of the shadow text, or it can be determined as the BoW with the highest number of matching words in the text.

3.4. Theoretical Analysis

First, we analyze the algorithm’s input and output text length.

Since GTSS uses GPT-2 [28] as the generation model, the longest sequence it can process at one time is 1024 tokens. Therefore, the maximum length of both the input prefix text and the output shadow text is 1024.

The design idea of GTSS is that u consecutive secret units are shared into n shared values, each corresponding to one word in a shadow text. When all the secret units have been shared, the generation of shadow text ends. Assuming that the length of the secret bitstream to be shared is len(B), B is divided into len(B)/h secret units after slicing; GTSS shares u secret units at a time, so a total of len(B)/(h · u) sharing processes are performed, and each shadow text generates len(B)/(h · u) words after its prefix. The actual length of a final shadow text is therefore the length of its prefix text plus the number of subsequently generated words, that is, len(P_i) + len(B)/(h · u).
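As an illustrative calculation (the numbers are ours, not from the experiments): with h = 4, u = 2, and a 256-bit secret stream, there are 256/4 = 64 secret units and 64/2 = 32 sharing rounds, so every shadow text consists of its prefix followed by 32 generated words.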

Then we discuss the computational complexity of the algorithm. Since this article proposes a secret sharing scheme, the analysis focuses on the secret sharing module and reconstruction module for the complexity calculation.

Each sharing process multiplies an n × k matrix by a k × 1 vector, as shown in (2), and a total of len(B)/(h · u) sharing processes are performed, so the time complexity of the secret sharing module in GTSS is O(nk · len(B)/(hu)). In GTSS, k = 2 or k = 3, so the time complexity is O(n · len(B)/(hu)).

In the part of a shadow text that excludes the prefix, one word corresponds to one shared value, so the shared values can be extracted from the words at the corresponding positions in the shadow texts, after which the units of the secret message can be recovered. The number of words corresponding to shared values in a shadow text is len(B)/(h · u), and GTSS uses matrix multiplication in the recovery phase: a k × k matrix is multiplied by a k × 1 vector, as shown in (3), so the time complexity of the reconstruction module is O(k² · len(B)/(hu)).

3.5. Application Analysis

The GTSS scheme proposed in this article has two main application scenarios: multi-channel transmission and access control for the secret message. The application scenario diagrams are shown in Figure 4, where S_i represents the i-th shadow text.

Considering that social networks are public channels and that each social platform has staff monitoring it, a suspicious account might be deleted or banned, and the secret message would then be lost in transit. Therefore, we can use GTSS to share the secret message as multiple shadow texts with different topics and transmit them through different social accounts or even different platforms. Thanks to the loss tolerance of secret sharing, even if some shadow texts are lost for such abnormal reasons, the secret message can be reconstructed as long as the receiver obtains k shadow texts.

In a traditional secret image sharing scheme, n participants hold shadow images, and k or more participants can use the shadow images in hand to recover the secret message. Since an image is digital media, a storage device is needed to keep it. In contrast, a GTSS shadow is text, whose carrier can be a simple piece of paper or even a participant's memory; it is therefore not limited by storage devices and is easy to remember and manage.

4. Experiments

This section evaluates the proposed GTSS scheme regarding shadow text quality, topic relevance, and anti-detection capability, conducts an ablation study to verify each module's effectiveness, and compares it with text steganography schemes regarding embedding rate and perplexity.

4.1. Experimental Setup
4.1.1. Dataset

The Microsoft COCO dataset [29] for object recognition, captioning, and segmentation is used to evaluate GTSS's performance. Our corpus is the image-caption portion of the dataset, which contains 591,753 sentences.

4.1.2. Evaluation Indicators

Perplexity, as shown in (9), is used to evaluate the fluency of shadow text. A smaller ppl indicates that the statistical distribution of the generated shadow text is closer to that of natural text and hence that the text quality is higher. Since we use BoW as the attribute model to control the topic by adjusting the conditional probability distribution at each time step, we evaluate the topic relevance (TR) of S_i to T_i as the percentage of words in the shadow text that belong to BoW_i, as shown in the following equation:

\[ \mathrm{TR}(S_i, T_i) = \frac{N_{\mathrm{BoW}_i}}{L_{S_i}}, \tag{11} \]

where TR(S_i, T_i) describes the topic relevance between S_i and T_i, L_{S_i} is the length of the shadow text, and N_{BoW_i} is the number of words in the shadow text that belong to BoW_i.
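A direct sketch of equation (11), with illustrative inputs:

```python
def topic_relevance(shadow_words, bow):
    """TR of equation (11): fraction of shadow-text words belonging to the BoW."""
    bow = {w.lower() for w in bow}
    hits = sum(1 for w in shadow_words if w.lower() in bow)
    return hits / len(shadow_words)

# Illustrative values, not from the paper.
print(topic_relevance("the rocket reached orbit above the planet".split(),
                      {"rocket", "orbit", "planet", "satellite"}))  # -> 3/7
```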

4.1.3. Language Model

A transformer-based GPT-2 model [28] with 345M parameters is used for text generation.

4.2. Some Examples

The hyperparameters of GTSS include k, n, u, and h, as well as the choice of topics. Below we show some examples of shadow texts when these parameters take different values (Tables 2–5). The secret text to share is "Secret message." The target topics of the shadow texts are colored and bracketed (e.g., [science]). The words of the BoW are highlighted brightly (e.g., evolution). A softer highlight is used for words related to the topic but not in the BoW (e.g., brain). The prefix of every sentence is underlined (e.g., It has been shown).

To further demonstrate the scalability of the shadow texts generated by GTSS in terms of topic control, we add a sample with a (2, 2) threshold, where the topics of the first shadow text are restricted to "space" and "military," and the topics of the second shadow text are restricted to "technology" and "science." As shown in Table 6, the shadow texts generated by GTSS can satisfy multiple topics at the same time.

4.3. Ablation Study

An ablation study was conducted on five variants: B, the baseline with no topic control, where the free elements of X are randomly selected; BP, the variant with no topic control under the constraint of GPM-ppl; BT, the variant with topic control, where the free elements of X are randomly selected; BTP, the GTSS scheme with topic control under the constraint of GPM-ppl; and BTT, the GTSS scheme with topic control under the constraint of GPM-topic.

To measure the average perplexity and topic relevance of the shadow texts, we randomly select sentences from Microsoft COCO as the secret texts. Tables 7 and 8 show the experimental results.

The following conclusions can be drawn from the above experimental results:

(i) Through GTSS's topic control mechanism, words that match a specific topic are more likely to be selected when creating shadow texts, resulting in shadow texts that satisfy the topic.

(ii) Because of the topic control, the modified probability distribution no longer matches the training samples, so the BT variant, which lacks the optimization of a goal programming model, has the poorest text quality.

(iii) The shadow texts of the BP variant, optimized by GPM-ppl, possess the smallest perplexity and the highest quality. Compared with both BT and BTT, the GPM-ppl-optimized BTP variant has a lower perplexity.

(iv) The shadow texts of the BTT variant, optimized by GPM-topic, possess the highest topic relevance.

4.4. Comparative Experiment

Although we design a text secret sharing scheme in this article, the shared values are mapped to the word space while generating shadow text, which inevitably perturbs the normal text generation process and thus affects the concealment of the shadow text. In this section, to examine the concealment of shadow text, we compare GTSS with two classical text steganography schemes, Bins [25] and FLC [13], in terms of embedding rate (ER) and perplexity (ppl).

The embedding rate is the average number of effective secret bits carried by each word of the text. The Bins scheme divides the word space into blocks and then encodes the blocks; in the text generation process, the block is determined by the secret bitstream and an appropriate word is selected from it for output, thereby embedding the secret bits. The ER of the Bins scheme is therefore determined by the number of blocks: the more blocks, the more secret bits each word can carry, and ER = log2(number of blocks). The FLC scheme performs perfect binary tree coding on the candidate pool and outputs the word matching the secret bits, so ER = h. GTSS shares u units of the secret message (a total of u · h bits) into n shared values at a time, where each unit of the secret message is h bits, and then maps the shared values to the word space through the perfect binary tree. Each word in the shadow texts corresponds to one shared value, so ER = u · h / n in GTSS.
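The three ER formulas can be summarized as follows (illustrative helper names; the printed settings are examples, not the paper's parameter tables):

```python
from math import log2

def er_bins(num_blocks):   # Bins: each word selects one of num_blocks blocks
    return log2(num_blocks)

def er_flc(h):             # FLC: each word carries h bits (pool size 2^h)
    return h

def er_gtss(u, h, n):      # GTSS: u*h secret bits become n words (one per shadow)
    return u * h / n

print(er_bins(8), er_flc(3), er_gtss(u=2, h=3, n=2))   # -> 3.0 3 3.0
```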

To reflect the effect of the parameters k, n, u, and h in GTSS, we conducted three sets of experiments on the two variants of GTSS, GTSS-BTP and GTSS-BTT, using three different combinations of (k, n, u, h). For the Bins and FLC schemes, we chose parameter values yielding comparable embedding rates. The experimental results are shown in Tables 9–11. Since the embedding rates of the schemes cannot be matched exactly, we draw two line graphs, shown in Figure 5, for a more intuitive display.

It can be concluded from the tables and figures that, for the same scheme, perplexity tends to increase with the embedding rate, and the quality of the text correspondingly declines. For GTSS and the FLC scheme, as h increases, the candidate word space becomes larger and the chance of selecting words with small conditional probability rises, so the overall perplexity increases. For the Bins scheme, as the number of blocks increases, the number of words in each block decreases, so sometimes no word matching the preceding text can be selected, and the text quality drops.

For the two variants GTSS-BTP and GTSS-BTT, under the same embedding rate, the higher the threshold, the better the text quality. This is because ER = u · h / n in GTSS; we choose u = k in the experiments, and k ≤ n, so the higher the threshold, the closer the embedding rate is to h. When h is equal, that is, when the candidate word spaces are of equal size, the high-threshold GTSS scheme has a higher embedding rate, so the text generated by the high-threshold scheme is of higher quality at the same ER.

The quality of the text generated by GTSS-BTP is better than that of the Bins scheme under all tested thresholds. Under two of the thresholds, when the ER is relatively large, the text quality of GTSS-BTP is also better than that of the FLC scheme. For GTSS-BTT at the lowest threshold, the text quality drops sharply when the ER exceeds 4; at the high threshold, the text quality of GTSS-BTT still outperforms the Bins scheme and is not much different from the FLC scheme.

GTSS has an advantage over existing text steganography schemes in terms of the ppl of the generated text at higher thresholds under the same embedding rate.

4.5. Evaluation of Anti-Detection

In addition, we tested the anti-detection capability of GTSS. The text steganalysis algorithm TS-CSW [30], which uses convolutional sliding windows of multiple sizes to extract correlation features, is used to classify the generated steganographic sentences; subtle variations in the distribution of these features can be exploited for text steganalysis. We conducted experiments for Bins, FLC, and several variants of GTSS, comparing the anti-detection capability of each scheme at approximately the same embedding rate (ER ≈ 3) by choosing the number of blocks for the Bins scheme, the tree height for the FLC scheme, and (k, n, u, h) for the three GTSS thresholds accordingly. At this setting, Bins, FLC, and two of the GTSS threshold schemes all have an embedding rate of 3 bits/word, and the remaining GTSS threshold scheme has an embedding rate of 3.33 bits/word.

We choose Accuracy, Precision, Recall, and F1-score, standard evaluation metrics for binary classification models, to evaluate the detection resistance of the generated text, as shown in equations (12)–(15):

\[ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{12} \]

\[ \mathrm{Precision} = \frac{TP}{TP + FP}, \tag{13} \]

\[ \mathrm{Recall} = \frac{TP}{TP + FN}, \tag{14} \]

\[ \mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}, \tag{15} \]

where TP is the number of positive samples predicted correctly, TN is the number of negative samples predicted correctly, FP is the number of negative samples predicted incorrectly, and FN is the number of positive samples predicted incorrectly. The closer a scheme's Accuracy is to 0.5, or the smaller its Precision, Recall, and F1-score, the more resistant the generated steganographic or shadow text is to detection. The related results are shown in Table 12.
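A compact sketch of equations (12)-(15), with made-up confusion counts:

```python
def detection_metrics(tp, tn, fp, fn):
    """Equations (12)-(15) for a binary steganalysis classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative confusion counts, not results from Table 12.
print(detection_metrics(tp=48, tn=52, fp=48, fn=52))   # accuracy -> 0.5
```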

We can see that the Accuracy for Bins, FLC, and GTSS is close to 50%, indicating that all these schemes have some resistance to detection. However, on the remaining three metrics, Precision, Recall, and F1-score, the variants of GTSS are better than Bins and FLC, which indicates that the shadow text generated by GTSS resists detection better than the Bins and FLC schemes.

5. Conclusions

A text secret sharing scheme is proposed in this article, in which the secret message is shared as topic-controlled, fluent shadow texts, and any k shadow texts can reconstruct the secret message. First, we encrypt the secret message using matrix theory to get the shared values. Then we use BoW to modify the conditional probability distribution in order to increase the probability of words matching the topic. Shadow texts are generated by mapping the shared values into the word space using the perfect binary tree. Most importantly, we propose two goal programming models that deeply integrate secret sharing, encoding, and controllable text generation techniques; the two GPMs enhance the fluency and topic relevance of the shadow texts, respectively. We discuss two application scenarios of this scheme: multi-channel transmission and access control of the secret message. Our experimental section illustrates the effectiveness of GTSS through examples and an ablation study. Comparative and anti-detection experiments show that the text generated by GTSS has good quality and anti-detection ability. Meanwhile, the proposed scheme still has some deficiencies, which need to be addressed in the future:

(i) The SS scheme used in GTSS can only satisfy the (2, n) and (3, n) thresholds, which limits the values of the threshold parameters of the current GTSS scheme. The operations in the finite field GF(2^h) are polynomial operations, which avoid the restriction that the modulus must be prime as in the prime field GF(p); using them in GTSS may extend the range of threshold parameters.

(ii) A modified or deleted word in a shadow text will cause GTSS to fail to find the corresponding word in its candidate pool at some time step, which further corrupts the extraction of shared values and the recovery of the secret message. There is an urgent need to improve the shadow texts' ability to withstand word modification or deletion attacks.

Data Availability

The BoW we used is available at https://github.com/yuxiaoxiaochun/GTSS.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Grant No. 62271496). This article is an extension of the conference paper [31]; the authors recast it in terms of text secret sharing and add substantial theoretical analysis and experiments.