Abstract

Tor is vulnerable to flow correlation attacks: adversaries who can observe traffic metadata (e.g., packet timings and sizes) both between the client and the entry relay and between the exit relay and the destination server can deanonymize users by computing the degree of association between the two flows. A recent study has shown that a deep-learning-based approach called DeepCorr achieves flow correlation accuracy of over 96%. The escalating threat of this attack calls for timely and effective countermeasures. In this paper, we propose a novel defense mechanism that injects dummy packets into flow traces by precomputing adversarial examples. It breaks the flow patterns that the CNN model has learned and achieves a protection success rate of over 97%. Moreover, our defense requires only 20% bandwidth overhead, outperforming the state-of-the-art defense. We further consider implementing our defense in the real world. We find that, unlike traditional adversarial example scenarios, a traffic flow is "fixed" only as its packets arrive, which means we must know the next packet's features in advance. In addition, websites are not immutable: the characteristics of the transmitted packets change irregularly, rendering precomputed adversarial samples ineffective. To solve these problems, we design a system that adapts our defense to the real world and further reduces bandwidth overhead.

1. Introduction

Tor is the most popular low-latency anonymity network, providing anonymous communication services for more than two million people [1]. It comprises over 3000 relays that transmit massive volumes of encrypted packets and conceal clients' information. Each relay knows only the addresses of its immediate predecessor and successor.

Flow correlation attacks break this security model. Network-level adversaries, i.e., autonomous systems (ASes), can observe traffic characteristics between the client and the entry relay and between the exit relay and the destination server. They can link these observations (in particular, packet timings and packet sizes) to deanonymize users, as shown in Figure 1. Early studies typically used traditional correlation algorithms such as Pearson correlation or cosine similarity. Recent research leverages deep learning models to correlate traffic characteristics with significantly higher accuracy than existing algorithms.

Existing defenses that detect or mitigate traffic analysis attacks mainly focus on obfuscating encrypted packets, morphing traffic, or changing network-level characteristics, none of which hinders the deep-learning-based attack. To the best of our knowledge, existing defenses are all designed to mitigate traffic analysis attacks such as website fingerprinting or BGP hijacking; there is no effective defense against flow correlation attacks.

Against this strong deep-learning-based attack, adversarial examples are a natural choice for confusing the CNN model. We therefore explore how effectively adversarial examples defend against flow correlation attacks and how to implement such a defense in the real world.

First, we reconstruct the targeted model, which represents the state-of-the-art attack, and obtain accuracy similar to that reported by Nasr et al. [10]. Second, we evaluate the effectiveness of several adversarial example methods, including FGSM, C&W, Deepfool, and BIM. The experimental results show that the success rate of applying adversarial examples to defeat the flow correlation model is more than 97% with only 20% bandwidth overhead.

Third, we attempt to implement our defense in the real world and encounter several challenges. (1) Websites are not immutable, so the characteristics of the transmitted packets are not immutable either. (2) A traffic flow is "fixed" only as its packets arrive, which means we must know the next packet's features in advance. (3) The dummy packets we add traverse the entire circuit (client → entry relay → middle relay → exit relay → server), which increases bandwidth overhead. How can we remove these extra dummy packets once they have done their job?

To solve the first and second problems, we design a center server that periodically collects the traffic characteristics of websites and generates the corresponding adversarial examples. To solve the third problem, we design a mechanism that drops redundant dummy packets at the entry relay, further reducing bandwidth overhead.

The key contributions of this work are as follows:
(1) We propose a novel defense mechanism against deep-learning-based flow correlation attacks that injects dummy packets into flow traces by precomputing adversarial examples.
(2) We evaluate the effectiveness of several adversarial example methods; the experimental results show that even the weakest method we used (FGSM) achieves a protection success rate of over 90% with an acceptable bandwidth overhead (30%).
(3) We analyze the challenges of applying our defense in the real world and design a system that addresses them, including a center server, a full-duplex mode, and a drop-dummy-packets mechanism.

The rest of the paper is organized as follows: Section 2 introduces related work, including the development of traffic analysis and adversarial examples. Section 3 describes our proposed method in detail. Section 4 presents the details and results of our experiments. Section 5 points out our limitations and gives future directions. Section 6 concludes our work.

2. Related Work

2.1. Flow Correlation Attack and Defense

A flow correlation attack is a type of traffic analysis attack, which is in turn a type of side-channel attack. Side-channel attacks exploit indirect channels to infer sensitive information from well-protected systems, for example, by observing traces of timing, power, or resource usage. Diao et al. [2] launched permission-free inference attacks on Android by applying timing analysis to interrupt logs. Liu et al. [3] presented a side-channel attack that infers user inputs on keyboards by exploiting smartwatch sensors. Schuster et al. [4] identified video information by using a deep learning model to classify encrypted video streams.

Flow correlation attacks, a significant class of side-channel attacks, have been applied in many settings. Shmatikov et al. [5] investigated an active attack called the watermark attack: they modified packet flows to "fingerprint" them and analyzed the tradeoffs among the amount of cover traffic, extra latency, and other factors. They also proposed a defense based on adaptive padding. Paxson and Zhang [6] modeled traffic as a series of ON and OFF patterns and used these data to correlate network flows. Murdoch and Zieliński [8] developed and evaluated Bayesian traffic analysis techniques to process sampled data. Blum et al. [7] correlated the aggregate sizes of network packets over time. Sun et al. [9] further combined asymmetric traffic analysis with BGP hijacking to deanonymize users.

All of the above works used standard statistical correlation metrics to correlate the vectors of flow timings and sizes, and to attain high accuracy, they needed to observe the associated flows for five minutes or more, far too long to correlate large numbers of short-lived connections. Nasr et al. [10] were the first to use a CNN model to learn a flow correlation function and achieved drastically higher accuracy.

There is still a large gap in defenses against flow correlation attacks. Sun et al. [11] proposed a defense that mainly counters BGP hijacking and reduces the chance that an adversary observes network traffic. obfs4 [12], an official Tor defense, can randomly obfuscate packet timings and sizes but achieves a poor protection success rate at an unacceptable bandwidth overhead. ScrambleSuit [43] is a thin protocol layer above TCP that obfuscates the transported application data using morphing techniques and a secret exchanged out-of-band. It also has some effect against flow correlation attacks but suffers from the same problems as obfs4.

There are also methods that improve a classification model's robustness to noisy labels. Liu et al. [47] proved that any surrogate loss function can be used for classification with noisy labels via importance reweighting. Yu et al. [45] considered the influence of noisy labels in transfer learning and proposed a novel denoising conditional invariant component (DCIC) framework. Xia et al. [46] presented granular-ball sampling, which reduces the data size and improves data quality under label noise while matching the classification accuracy obtained on the original data sets. Noise filtering is an effective way of dealing with label noise, but most filters target binary classification; Xia et al. [44] presented a novel label noise filtering method for multiclass classification. These methods focus on the noisy-label scenario and could help an adversary improve the robustness of their correlation model, whereas our method aims to defend against flow correlation attacks using adversarial examples. Numan et al. [48] carried out a systematic review of clone detection techniques in static WSNs, providing a comprehensive survey of existing centralized and distributed schemes along with their drawbacks and challenges. Guo et al. [49] proposed a deep graph neural network-based spammer detection (DeG-Spam) model that outperforms baselines and could be a strong choice for correlating flow data.

2.2. Website Fingerprint Attack and Defense

The scenario and challenges of website fingerprinting attacks are very similar to those of our work. Adversaries obtain sensitive information about websites, such as domains or page content, by analyzing network characteristics. Such attacks were originally realized with traditional machine learning methods, but deep learning methods are now gradually taking over.

Many studies have been proposed to defeat website fingerprinting attacks. Some focus on the application layer [13–15]: defenders change the routing algorithm or obfuscate HTTP requests so that the adversary observes as little real traffic as possible. Application-layer defenses are often difficult to deploy because their prerequisites are restrictive (e.g., target websites supporting only HTTP), and they cannot defend against deep-learning-based attacks (less than a 60% protection success rate). Other work focuses on the network layer, aiming to fool the classification model by inserting dummy packets. Earlier studies [16–18] used constant-rate padding to reduce the information leaked by time intervals and traffic volume, but these methods typically require a bandwidth overhead as high as 150%. A recent study [19] found that inserting packets between pairs of packets with a large time gap reduces the bandwidth overhead, but such methods are also ineffective against deep-learning-based attacks (achieving only 9% and 28% protection success rates). Finally, the supersequence defense Walkie-Talkie [20] searches for a longer packet trace that contains the subsequences of different website traces, but it achieves only about a 50% protection success rate against DNN attacks. In general, no method maintains a high success rate with a small bandwidth overhead. All related works are summarized in Table 1.

2.3. Adversarial Examples

Adversarial examples are inputs crafted to fool machine learning models such as deep neural networks: a perturbation is added to a clean input so that the classifier returns an unexpected result. How to generate adversarial perturbations has become a hot topic in computer vision, natural language processing, and other fields. Many prior works have shown first-order gradient-based attacks to be fairly effective at fooling DNN-based models in the image [21–27], audio [28–30], and text [31–33] domains. The idea of such attacks is to find a trajectory that maximally changes the model's output and pushes the sample toward a low-density region. However, to the best of our knowledge, no prior work applies adversarial examples to defending against deep-learning-based flow correlation attacks.

3. Method

In this section, we introduce the target model and the specific details of defending against deep-learning-based flow correlation attacks with adversarial examples. We then present the system we designed to implement the defense in the real world.

3.1. Target Model

We reconstruct the attack of Nasr et al. [10] to perform traffic correlation. They use a convolutional neural network (CNN) to learn a correlation function for Tor's noisy network, composed of two convolutional layers and three fully connected layers. The input is a flow pair $F_{i,j} = \{F_i, F_j\}$, which represents two bidirectional network flows $F_i$ and $F_j$. Each flow can be described as follows:

$$F_i = [T_i^u; S_i^u; T_i^d; S_i^d]$$

where $T_i$ is the vector of interpacket delays, $S_i$ is the vector of packet sizes, and the superscripts $u$ and $d$ stand for "upstream" and "downstream," respectively (e.g., $S_i^u$ represents the upstream packet sizes of $F_i$).

The model hyperparameters we choose are consistent with Nasr et al. [10] and are presented in Table 2. As a first look at its performance, we train our model on the data set published with the paper [10], which includes both associated and nonassociated flow pairs, and we obtain performance similar to that described in the paper.
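For concreteness, the following is a minimal PyTorch sketch of a DeepCorr-style correlation network; the kernel counts, kernel shapes, and layer widths are our reading of the architecture in [10] and Table 2, not an official implementation.

```python
import torch
import torch.nn as nn

class CorrelationCNN(nn.Module):
    """DeepCorr-style correlator: two convolutional layers followed by
    fully connected layers, scoring a flow pair as associated or not."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            # Input is 1 x 8 x 300: the eight rows stack T^u, S^u, T^d, S^d
            # for both flows of the pair; the first layer mixes row pairs.
            nn.Conv2d(1, 2000, kernel_size=(2, 30), stride=(2, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 5)),
            nn.Conv2d(2000, 800, kernel_size=(4, 10), stride=(4, 1)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 5)),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(800 * 1 * 9, 3000), nn.ReLU(),
            nn.Linear(3000, 800), nn.ReLU(),
            nn.Linear(800, 100), nn.ReLU(),
            nn.Linear(100, 1),
        )

    def forward(self, pair: torch.Tensor) -> torch.Tensor:
        # Sigmoid score in (0, 1): probability the two flows are associated.
        return torch.sigmoid(self.fc(self.conv(pair)))

scores = CorrelationCNN()(torch.randn(4, 1, 8, 300))  # batch of 4 flow pairs
```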

DeepCorr achieves such high accuracy using only 300 packets of each flow. This tells us that we must act to prevent AS/ISP-level adversaries from compromising the anonymity and privacy of Tor users. In the next section, we introduce the defensive effect of adversarial samples against the flow correlation attack model and the system we designed to make the defense applicable in the real world.

3.2. Adversarial Samples against Flow Correlation Attack

With the rise of artificial intelligence and deep learning, adversarial samples have appeared in many scenarios and practical applications, usually as an attack technique for evading detection models. In our experiments, adversarial samples are instead a means of defense against adversaries who eavesdrop on or analyze users' traffic. Our defense strategy therefore focuses on raising the protection success rate, where every small increase imposes a large cost on the adversary: traffic volumes are so large that if the attack success rate falls below roughly 95%, the adversary must spend an enormous amount of manual analysis time, which renders the attack practically meaningless. This is the first difference between applying adversarial samples to defend against flow correlation attacks and applying them in traditional fields. In addition, a clean image or text is "fixed" before the perturbation is added, whereas a traffic flow is "fixed" only as its packets arrive; we must therefore know the next packet's features in advance and add the corresponding adversarial perturbation. This is the second difference from traditional applications of adversarial samples.

To generate adversarial examples, we use four methods: FGSM [34], C&W [36], Deepfool [37], and BIM [35]. We chose these four to obtain a comprehensive evaluation covering both gradient-based and optimization-based methods.

The fast gradient sign method (FGSM) was proposed by Goodfellow et al. [34] in 2015. The algorithm performs a single gradient ascent step:

$$x' = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y))$$

where $x$ is the original input sample, $\theta$ parameterizes the model, $y$ is the label corresponding to $x$, and $J(\theta, x, y)$ is the loss function of the classifier. $\nabla_x J(\theta, x, y)$ is the gradient of the loss with respect to the input, i.e., the direction in which the loss increases the most.

We can scale the bandwidth overhead from small to large by adjusting the parameter $\epsilon$.
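As a concrete illustration, here is a minimal FGSM sketch in PyTorch against a correlation model like the one sketched in Section 3.1; the binary cross-entropy loss and the input layout are our assumptions, not the exact setup of our experiments. The positivity and region constraints introduced later in this section are applied to the returned raw perturbation afterward.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, pair, label, epsilon):
    """One-step FGSM: move the flow-pair input in the direction that
    most increases the correlator's loss. `epsilon` directly controls
    the perturbation magnitude and hence the bandwidth overhead."""
    pair = pair.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy(model(pair), label)
    loss.backward()
    return (epsilon * pair.grad.sign()).detach()  # the raw perturbation
```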

The optimization-based attack C&W was proposed by Carlini and Wagner [36] in 2017. The algorithm generates an adversarial perturbation by solving a constrained optimization problem:

$$\min_{\delta} \ \|\delta\|_p + c \cdot f(x + \delta)$$

where $x$ is again the original input sample, the added perturbation $\delta$ is constrained by its $\ell_p$ norm to remain small, and $x + \delta$ is the adversarial example obtained under the constraints.
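The following sketch shows the flavor of this optimization in PyTorch, using an $\ell_2$ penalty and Adam; the margin term standing in for $f(\cdot)$ and all hyperparameters are illustrative simplifications of the full C&W formulation.

```python
import torch

def cw_perturbation(model, pair, c=1.0, steps=200, lr=0.01):
    """C&W-style attack: jointly minimize the perturbation norm and a
    term that pushes an associated pair's score below the 0.5 decision
    threshold (a simplified sketch of the optimization in [36])."""
    delta = torch.zeros_like(pair, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        score = model(pair + delta)
        adv_loss = torch.clamp(score - 0.5, min=0).sum()  # stands in for f(x + delta)
        loss = delta.pow(2).sum() + c * adv_loss          # ||delta||_2^2 + c * f
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()
```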

The basic iterative method (BIM) was proposed by Kurakin et al. [35] in 2016. It increases the loss of the classifier by taking repeated small steps, readjusting the direction after each step. It iteratively computes

$$x'_{N+1} = \mathrm{Clip}_{x,\epsilon}\{x'_N + \alpha \cdot \mathrm{sign}(\nabla_x J(\theta, x'_N, y))\}$$

where $x'_N$ denotes the perturbed input at the $N$-th iteration, $\mathrm{Clip}_{x,\epsilon}\{\cdot\}$ clips its argument into the $\epsilon$-neighborhood of $x$, and $\alpha$ determines the step size. The algorithm starts with $x'_0 = x$ and runs for a number of iterations determined by $\min(\epsilon + 4, 1.25\,\epsilon)$.
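A minimal BIM sketch follows, again assuming a binary cross-entropy loss; it is simply FGSM applied iteratively with a clip back into the $\epsilon$-ball after each step.

```python
import torch
import torch.nn.functional as F

def bim_perturbation(model, pair, label, epsilon, alpha, iters):
    """BIM: repeated FGSM steps of size alpha, clipping the accumulated
    change into the epsilon-ball around the original input each step."""
    x_adv = pair.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.binary_cross_entropy(model(x_adv), label)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Clip_{x,eps}: keep the accumulated change within epsilon of pair.
            x_adv = torch.min(torch.max(x_adv, pair - epsilon), pair + epsilon)
        x_adv = x_adv.detach()
    return (x_adv - pair).detach()  # the accumulated raw perturbation
```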

Deepfool was proposed by Moosavi-Dezfooli et al. [37]. At each iteration, it perturbs the input by a small vector computed to move the current point onto the boundary of the polyhedron that approximates the classifier's decision regions. The final perturbation is the accumulation of the perturbations added at each iteration, and the process stops once the network changes the label of the perturbed input.

All of these adversarial sample methods were designed to perturb the entire image, but in our setting we can only change the traffic characteristics between the client and the entry relay, i.e., only part of the input matrix. Moreover, we add perturbations by padding packets (changing packet sizes) and inserting dummy packets (changing interpacket delays), so the perturbation values must always be positive. To satisfy these requirements, we impose extra constraints:

$$\delta \geq 0, \qquad \delta_i = 0 \ \ \text{for} \ i \notin M$$

where $\delta$ denotes the perturbation we add, $x$ denotes the input, and $M$ denotes the region of $x$ we are allowed to change.
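A sketch of how these constraints can be enforced on a raw perturbation follows; `mask` is a hypothetical 0/1 tensor marking the client-to-entry-relay region of the input matrix. One natural way to use it is to apply this projection after each step of the methods above.

```python
import torch

def project_perturbation(delta, mask):
    """Project a raw adversarial perturbation onto our feasible set:
    padding and dummy packets can only add bytes and delay, so values
    must be non-negative, and only the masked (client <-> entry relay)
    region of the flow matrix may change."""
    return torch.clamp(delta, min=0.0) * mask
```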

3.3. Implementing the Defense in the Real World

When we consider actually deploying our defense in the real world, we face further challenges. First, websites are not immutable: managers deploy new functionality, update index pages, launch new activities, and so on, so the characteristics of the transmitted packets change irregularly, rendering precomputed adversarial samples ineffective. Second, as discussed in Section 3.2, only the traffic between the client and the entry relay can be changed; under this constraint, two modes naturally arise, full-duplex and simplex, and we must determine which is better. Third, owing to network fluctuations, packets may be delayed or arrive early, which can invalidate the precomputed adversarial examples. To meet these requirements, we design a system consisting of several components, as shown in Figure 2.

To solve the first problem, we introduce the concept of a "traffic consensus," derived from the Tor consensus [38] and stored on a center server. Users fetch this traffic consensus before connecting to the destination server and add perturbations to live traffic according to its contents. The traffic consensus maps each website domain (the key) to its corresponding traffic characteristics (the value).
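As an illustration, one entry of the traffic consensus might look like the following; the field names are hypothetical, since the paper fixes only the domain-to-characteristics mapping, not a wire format.

```python
# Hypothetical traffic-consensus entry: domain -> precomputed perturbation
consensus_entry = {
    "domain": "example.com",
    "generated_at": "2022-01-01T00:00:00Z",   # refreshed every cycle
    "method": "FGSM",
    # Perturbations for the first 300 packets (values illustrative):
    "ipd_perturbation": [0.004, 0.0, 0.013],  # extra delays in seconds
    "size_perturbation": [64, 0, 128],        # padding bytes per packet
}
```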

Behind this traffic consensus is an automatic crawler system that periodically collects traffic characteristics. Our center server runs a Tor client that requests the websites users most frequently access (e.g., the top Alexa sites) and checks their live status via the HTTP status code. In addition, we tunnel our exit Tor traffic through our own SOCKS proxy server, so we can capture both the ingress and the egress Tor flows. If a monitored website is live, we dump its traffic to a file, then process it to extract traffic characteristics, including the sizes and delays of the first 300 packets. Finally, we use these data to generate adversarial samples and write them to the traffic consensus under the website's domain. The specific details are shown in Algorithm 1.

Input: Cycle C, Time T, Website Groups W, Traffic File F, Adversarial Samples Method A
Output: Traffic consensus
(1)while T mod C == 0 do
(2) for each website w in W do
(3)  if w is live then
(4)   Dump F from ingress and egress Tor flows
(5)   Extract traffic characteristics X from F
(6)   Adversarial samples S = A(X)
(7)   Write S to the traffic consensus {w : S}
(8)  else
(9)   continue
(10)  end if
(11) end for
(12)end while
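A compact Python rendering of Algorithm 1 is sketched below; `is_live`, `dump_tor_flows`, `extract_features`, and the in-memory consensus dictionary are hypothetical placeholders for the crawler, capture, and storage components.

```python
import time

def run_consensus_crawler(websites, cycle_seconds, attack_method, consensus):
    """Periodically rebuild the traffic consensus (Algorithm 1, sketch)."""
    while True:
        for site in websites:
            if not is_live(site):                   # HTTP status check via Tor
                continue
            trace = dump_tor_flows(site)            # ingress + egress capture
            feats = extract_features(trace, n_packets=300)
            consensus[site] = attack_method(feats)  # adversarial samples S
        time.sleep(cycle_seconds)                   # wait for the next cycle C
```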

To solve the second problem, we need to consider the differences between full-duplex and simplex modes. Simplex mode inserts dummy packets only into flows from the client to the entry relay; it is easier to implement because we can add the perturbation directly at the Tor client. But it brings other problems: the region where we can add perturbation is further limited, and the bandwidth overhead becomes too large to bear.

Full-duplex mode adds perturbation both from the client to the entry relay and from the entry relay to the client, giving more room for perturbation than simplex mode. However, the dummy packets we add would traverse the entire circuit (client → entry relay → middle relay → exit relay → server), which clearly increases bandwidth overhead. We therefore design a drop-dummy-packets mechanism to reduce it. The goal is to let adversaries observe the dummy packets while the rest of the circuit never carries them, so we need a reliable way to drop dummy packets at the entry relay. We introduce a new control cell, following [13]. This cell records the order of the transmitted packets and is sent to the entry relay before communication begins. Once the entry relay receives the cell, it drops the extra dummy packets added at the Tor client according to the cell's information and forwards only the packets that truly participate in the communication to the middle relay.
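The entry-relay side of this mechanism can be sketched as follows; the control-cell format and the forwarding primitive are hypothetical, since the real change would live inside the Tor relay code.

```python
def forward_stream(packets, dummy_positions, send_to_middle):
    """Drop-dummy-packets mechanism at the entry relay (sketch): the
    control cell received before the stream lists which positions in
    the packet order are dummies; the relay strips those so that the
    adversary on the client side sees them but the rest of the circuit
    never carries them."""
    for i, pkt in enumerate(packets):
        if i in dummy_positions:   # announced by the control cell
            continue               # drop: observed upstream, never relayed
        send_to_middle(pkt)        # only real cells reach the middle relay
```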

In addition, the flow from the entry relay to the client is controlled by the entry relay, meaning the entry relay performs that direction's perturbation. We must nonetheless preserve anonymity: the entry relay should not learn which websites the user visits. Our solution is inspired by the rendezvous cookie used by Tor onion services [39] to establish a connection between a user and an onion service. The user sends the center server a cell containing the domain of the website to be visited, a cookie (a 20-byte cryptographic nonce chosen randomly by the user), and the entry relay's IP address. Upon receiving this cell, the center server sends the entry relay the perturbation looked up in the traffic consensus together with the user-generated cookie, and the entry relay stores both. When the user connects to the entry relay with the cookie, the relay compares the stored cookie with the one the user presents; if they match, the entry relay adds the perturbation to the flows toward the client. As for the third problem, because we already have the drop-dummy-packets mechanism and full-duplex mode, all we must do is buffer subsequent cells until the missing cell arrives at the entry relay.
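The cookie check at the entry relay might look like the sketch below; the storage layout is our assumption, and `hmac.compare_digest` is used so the comparison itself does not leak timing information.

```python
import hmac

def lookup_perturbation(client_cookie, stored):
    """Entry-relay handshake (sketch): `stored` maps each 20-byte nonce
    delivered by the center server to the precomputed downstream
    perturbation. If the client's cookie matches one of them, the relay
    learns which perturbation to apply without learning the website."""
    for cookie, perturbation in stored.items():
        if hmac.compare_digest(client_cookie, cookie):
            return perturbation    # apply to the entry -> client flow
    return None                    # unknown client: add no perturbation
```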

Implementation: we did not implement every component, because doing so is a large project that requires the entire Tor community's help to modify the Tor source code. However, we have designed the full set of plans described above and performed extensive experiments to demonstrate the feasibility of our defense, including the effectiveness of various adversarial sample methods, the time required to build the traffic consensus, and the bandwidth savings of full-duplex mode. We present the results in detail in the next section.

4. Results

In this section, we perform a systematic evaluation of our work. Specifically, we compare the effectiveness and efficiency of various adversarial example methods against the flow correlation attack model. Having discussed the advantages and challenges of full-duplex mode in our system, we further demonstrate its high performance. In addition, we compare our defense with the state-of-the-art method and test it against traditional flow correlation attack methods.

4.1. Data Set

Tor flow correlation data set: in our experiments, we use the public DeepCorr data set [40]. It contains a large number of Tor flows captured by visiting top Alexa websites, stored as pickle files that record packet sizes and interpacket delays. A flow pair belonging to the same Tor connection (an associated flow) is labeled 1, and a flow pair from arbitrary Tor connections (a nonassociated flow) is labeled 0. We evaluate our system's performance with 9000 flows.
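For reference, loading such a pickle and assembling a model input might look like the following; the file name and per-flow field names are hypothetical, as the exact layout is documented with the data set's release [40].

```python
import pickle
import numpy as np

with open("deepcorr_flows.pickle", "rb") as f:   # illustrative file name
    flows = pickle.load(f)

def pair_matrix(fi, fj, n=300):
    """Stack the IPD and size vectors (up/down) of two flows into the
    8 x n input matrix described in Section 3.1, zero-padding short rows."""
    rows = [fi["ipd_up"], fi["size_up"], fi["ipd_down"], fi["size_down"],
            fj["ipd_up"], fj["size_up"], fj["ipd_down"], fj["size_down"]]
    rows = [np.asarray(r[:n], dtype=np.float32) for r in rows]
    return np.stack([np.pad(r, (0, n - len(r))) for r in rows])
```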

Sirinam and Rimmer data sets: to the best of our knowledge, the only public flow correlation attack data set is the one released by Nasr et al. [10]. However, as noted earlier, our scenario and challenges are very similar to those of website fingerprinting attacks, so we also use two well-known WF data sets, from Sirinam et al. [41] and Rimmer et al. [42], to evaluate our system. Both contain Tor users' flow traces and the corresponding websites. The specific details of these two data sets are presented in Table 3.

4.2. Experiment Results

We test FGSM, C&W, Deepfool, and BIM on the same test set of 9000 flows and compare their protection success rates at the same bandwidth overhead (all using the same perturbation norm). Besides the DeepCorr flow correlation attack, we also test our defense against traditional flow correlation attacks, including RAPTOR, Pearson, and Cosine. Table 4 shows the results: even the weakest method, FGSM, attains a 71.2% protection success rate with only 25% bandwidth overhead, and it is also effective against the traditional attacks. We must point out that because the Pearson and Cosine methods use static metrics to measure correlation, any slight perturbation has a large impact on their results; although our method is oriented toward the deep-learning-based attack, the perturbation we add also breaks the patterns that Pearson and Cosine capture.

We also evaluate FGSM, C&W, Deepfool, and BIM against website fingerprinting attacks, including the deep-learning-based attack Var-CNN and the non-DNN attacks k-NN and k-FP, on the Sirinam and Rimmer data sets. Table 5 shows the protection success rates of our method; adversarial examples are effective at defending against WF attacks that extract sensitive information with a classification model.

In Section 3.3, we discussed the differences between full-duplex and simplex modes: full-duplex has more room to add perturbation and, thanks to the drop-dummy-packets mechanism, lower bandwidth overhead. Figure 3 shows the effectiveness of the two modes with FGSM. Full-duplex mode achieves a higher protection success rate than simplex mode at the same bandwidth overhead with the same adversarial example generation method. In addition, our system updates the traffic consensus periodically, so this process must complete within a tolerable time frame. We evaluate our system's efficiency on a PC with an i7-11700K CPU and four RTX 2080 Ti GPUs, measuring the total time to generate 500 websites' traffic consensus and adversarial examples on our test data set. Table 6 shows the results: our system is very practical, and the FGSM method can update the adversarial perturbations in 1575 seconds. Note that our hardware is limited; anyone can extend the hardware environment to further reduce the time consumption.

Table 4 shows that FGSM has the worst protection success rate of the four methods, but as a one-step method it has the highest efficiency. In our system, time consumption is an important indicator: as the number of websites we monitor grows, even small per-site time costs are magnified many times over. As for the protection success rate, FGSM achieves 71.2% with 20% bandwidth overhead. This may look low, but traffic volumes are very large, and an adversary eavesdropping on traffic must spend an enormous amount of manual analysis time if the attack success rate cannot reach 95%, which renders the attack practically meaningless. Figure 4 shows FGSM's protection success rate as the bandwidth overhead varies; it reaches a 95% protection success rate at 35% bandwidth overhead, which is lower overhead than the state-of-the-art defense.

4.3. Comparison
4.3.1. Obfs4

To the best of our knowledge, obfs4 is the state-of-the-art official defense. It is a Tor pluggable transport designed to defeat censorship by nation-states that block all Tor traffic. obfs4 modifies packet timings and packet sizes, padding or splitting packets and delaying them to perturb their timing characteristics, which also hinders flow correlation. Table 7 compares our defense's protection success rate with obfs4's, and Table 8 compares the bandwidth overheads. Our defense has advantages in both protection success rate and bandwidth overhead.

4.3.2. ScrambleSuit

ScrambleSuit [43] is a thin protocol layer above TCP that obfuscates the transported application data using morphing techniques and a secret exchanged out-of-band. It also has some effect against flow correlation attacks. Table 7 compares our defense's protection success rate with ScrambleSuit's, and Table 8 compares the bandwidth overheads.

4.3.3. Blind Adversary

Blind Adversary [50] creates universal adversarial perturbations with GANs (generative adversarial networks). This approach protects against both flow correlation and website fingerprinting attacks but requires significant additional resources and bandwidth overhead. Table 7 compares our defense's protection success rate with Blind Adversary's, and Table 8 compares the bandwidth overheads.

5. Limitations and Future Directions

As mentioned earlier, this work focuses on defeating CNN-based flow correlation attacks with adversarial examples. There is now substantial research on defending against adversarial examples, with adversarial training among the most effective approaches: an adversary could compute our adversarial perturbations and retrain their model on them to improve its robustness. Future work can extend our system to defeat adversarial training and other methods that aim to reduce the effect of adversarial examples.

6. Conclusion

In this paper, we evaluate the use of adversarial samples to defend against flow correlation attacks, and the experimental results show good performance. We further consider implementing our defense in the real world and identify the challenges it must face. To solve these problems, we design a system comprising a traffic consensus, a full-duplex mode, and a drop-dummy-packets mechanism. Our system not only makes adding adversarial perturbations practical but also further reduces bandwidth overhead.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (NSFC) (grant nos. 62072343 and U1736211) and the National Key Research and Development Program of China (grant no. 2019QY(Y)0206).