Abstract

The rapid development of Global Positioning System (GPS) devices and location-based services (LBSs) facilitates the collection of huge amounts of personal information by untrusted or unknown LBS providers. This phenomenon raises serious privacy concerns. However, most existing solutions aim at location obfuscation in static scenes or at a single timestamp, without considering the correlation between the location transfer and the time of moving users. Consequently, these solutions are vulnerable to various inference attacks. Traditional privacy protection methods rely on trusted third-party service providers, but in reality we cannot be sure whether the third party is trustworthy. In this paper, we propose a systematic solution to preserve location information. The protection provides a rigorous privacy guarantee without assuming the credibility of third parties. The user’s historical trajectory information is used as the basis of a hidden Markov model prediction, and the user’s possible prospective locations are used as the model output to protect the user’s trajectory privacy. To formalize the privacy-protecting guarantee, we propose a new definition, the L&A-location region, based on k-anonymity and differential privacy. Based on the proposed privacy definition, we design a novel mechanism to provide a privacy protection guarantee for users’ identities and trajectories. We simulate the proposed mechanism on a dataset collected in real practice. The simulation results show that the proposed algorithm provides privacy protection to a high standard.

1. Introduction

In recent years, the booming number of personal mobile devices with location services has promoted the development of location-based systems in wireless networks [1]. The widespread use of mobile smart devices has laid the foundation for massive data collection based on mobile sensing. In these data collection systems, location-based services (LBSs) provide real-time services related to the user’s current location information. Various useful applications depend on LBSs. For example, Google Maps provides navigation services such as route suggestions and road traffic condition notifications. Groupon and Yelp [2] provide business service information based on the distance from the user’s location. Although LBSs (as shown in Figure 1) are very useful and convenient for users, this convenience comes at the expense of users’ private information. Service providers can infer an individual’s residence, work location, and other private information by observing the user’s temporally correlated data [3–6].

Many methods have been proposed for personal information protection. One solution is Private Information Retrieval (PIR) [7]. In modern cryptography, the main purpose of PIR is to allow a user to retrieve items from a server without disclosing any private information; in other words, the server learns neither the user’s specific query nor the retrieved data. However, one major disadvantage of this technique is the enormous amount of computation required to redesign the scheme for different query types. Most other methods are based on location obfuscation, which uses a cloaking area or a perturbed location. These solutions rely on syntactic privacy models, which cannot provide a strict privacy guarantee. Unfortunately, most of them consider only a stationary scenario and perturb the location at a single timestamp while neglecting the temporal correlation of the user’s movement. Hence, adversaries can effortlessly obtain more private information through linking inference attacks. The most typical methods to protect users’ private information use differential privacy and k-anonymity. k-Anonymity, as one of the principal approaches [8–10], ensures that the probability of success of any linking attack is lower than 1/k. However, it provides only a weak privacy guarantee and limited data utility.

Differential privacy [11] was originally proposed by Dwork in 2006 and has since been regarded as a standard for private information preservation. Although applications of differential privacy have gradually become practical, some challenges remain in the problem of continuous location sharing. First, in the standard privacy protection setting, only user-level privacy (whether a user appears in a dataset or not) is protected, whereas in our setting the trajectories of a single user must be protected over a period of time. Second, the released trajectory can be identified from road networks even without temporal correlation. Furthermore, the adversary can identify the user from captured moving patterns. Finally, no effective trajectory release mechanism combines k-anonymity and differential privacy.

In this paper, we propose a new solution to preserve the user’s trajectory privacy with k-anonymity and differential privacy. As shown in Figure 2, a moving user needs to continuously share locations with untrusted service providers or other third parties over a period of time. In our solution, a user’s accurate location information is known only to the user. We regard all service providers and third parties as adversaries and assume the adversaries possess as much side knowledge as they can obtain. We propose a new privacy protection system that enables private location sharing without disclosing users’ accurate locations to these adversaries and that protects users’ trajectories over a continuous time period.

The proposed system is noted as UGIS (User and Geographic Space-Indistinguishable System) and consists of two parts. One part is the KD-location region (KD stands for k-anonymity and differential privacy), referred to as a special region in this context. The other part is the mechanism for processing users’ accurate trajectories. In the KD-location region, adversaries cannot recognize the target user; the trajectory processing mechanism then performs well in protecting users’ trajectories. To our knowledge, UGIS is an improved privacy-preserving mechanism that combines k-anonymity and differential privacy to protect the location and trajectory information of users. The contributions of this paper are summarized as follows:
(i) To protect the user’s accurate location, we only need to “hide” it in a special region set in which the adversaries cannot distinguish the locations or users. Accordingly, we propose a special region set based on k-anonymity and differential privacy to protect the accurate location at each timestamp.
(ii) We show that the user’s movement is associative and temporally correlated. In our problem, the user’s location transfer is time-related, and we use a Markov model to represent the user’s location changes between consecutive timestamps [12]. Hence, from the perspective of adversaries, the user’s location transfer model is a hidden Markov model (HMM).
(iii) We focus on the transfer mechanism of the Markov model and utilize the concept of differential privacy to add noise to this transfer mechanism, making the users’ trajectories indistinguishable.

The rest of this paper is organized as follows. Section 2 reviews related work. In Section 3, we discuss notions of location privacy from the literature, analyze the weaknesses and strengths of state-of-the-art algorithms, and introduce the coordinate system and the location transition model. We then describe the components and definitions of our UG-indistinguishable system in Section 4. Section 5 presents the framework and the implementation of our location release algorithm. The experiments and evaluation are presented in Section 6.

2. Related Works

In this section, we summarize the previous literature. A few recent works [3–15] provide an overview of location privacy protection mechanisms (LPPMs) and methods. These mechanisms mainly use obfuscation technology to achieve an anonymity area. The most widely used approach to constructing LPPMs is k-anonymity, a notion commonly used to protect privacy in location-based systems. These systems mainly focus on protecting the users’ identities and preventing the adversary from inferring accurate information about individual users from the published user datasets. One way to implement this method is to use dummy locations, as mentioned in [16, 17]. However, since the output of the dummy locations is controlled by the server, adversaries can easily identify dummy locations that are not logically generated. Another method to achieve k-anonymity is through a cloaking region [18–20]. The disadvantage of this method is the high risk of producing an excessively large cloaking area in order to satisfy the k value in scenes with few users. A different approach adds certain quality constraints to provide better privacy protection [21], while [22] additionally uses bandwidth constraints. Literature [7] also proposed a location privacy mechanism focusing on evaluation based on location-based range queries; this method evaluates the degree of privacy according to the size of the cloaking area and the coverage of the sensitive area. Two methods have been proposed to deal with the adversary’s background knowledge, namely, expanding the anonymous area or delaying the sending of requests, but both may lead to a decline in service quality. Methods based on k-anonymity have been improved, but the definition of differential privacy provides a more rigorous guarantee.

Several privacy-protecting methods in recent works use the differential privacy approach [23, 24]. For instance, [25] presents a way to statistically simulate location data from a database while providing privacy guarantees; the authors designed an information perturbation mechanism to generate aggregated information from a large number of locations, trajectories, and spatiotemporal data [26–29]. [30] proposed a differentially private data mining algorithm that uses a spatial quadtree decomposition technique to preprocess the locations. The work closest to ours is [31]. A large part of the existing research relies on cloaking areas to enforce location obfuscation mechanisms, which reduces the utility of the published data. [32] proposed a data sanitization method that collectively manipulates users’ profiles and friend relationships; this method is not suitable for our framework setting, and it does not solve the problem of users’ movement trajectories. In this paper, our system protects the users’ accurate locations with a rigorous privacy guarantee and makes the users’ trajectories indistinguishable at each timestamp.

3. Preliminary

In this section, we discuss various notions of location privacy-preserving methods, such as k-anonymity, differential privacy, and the location transfer model. We consider a scenario in which a user wants to post a query about points of interest at the current location by using a personal device (e.g., a smartphone) to query a public service provider. The user expects the accurate location to remain private throughout the querying process. Our goal is to develop a real-time privacy mechanism that provides privacy protection under a formal notion and achieves the user’s expected privacy protection level. A list of frequently used symbols in this paper is given in Table 1.

3.1. k-Anonymity

k-Anonymity is one of the privacy protection methods widely used in location-based systems. These systems focus on protecting the user’s identity, making the adversary unable to infer which user is the true target among k users. One way is to generate plausible pseudo locations and use the actual location together with the pseudo locations to perform queries to the service provider. Another way to achieve k-anonymity is through a cloaking area: a cloaking area that includes k users sharing some points of interest is created, and the server is queried with this cloaking area instead of the accurate location. Unfortunately, the adversaries can identify the target user when adequate side knowledge is available, and pseudo locations are useful only if they are sufficiently similar to the real locations from the adversaries’ point of view.

As a result, notions that abstract away from the adversary’s knowledge, such as differential privacy, later gained more popularity than k-anonymity approaches.

3.2. Differential Privacy and Laplace Mechanism

Differential privacy (DP) [11] is a privacy notion rooted in statistics. DP aims to maximize the accuracy of queries against a statistical database while minimizing the chance of identifying individual records. It removes individual characteristics while preserving statistical characteristics, thereby protecting the user’s privacy. DP has gradually become the de facto standard in data privacy because of its strong privacy guarantees in statistical analysis. Moreover, differential privacy is a semantic model that does not rely on the adversary’s background knowledge and provides a higher level of semantic security for private information. Differential privacy ensures that adversaries cannot infer whether a particular user is present in the original data; releasing data according to differential privacy ensures that adversaries cannot infer any personal information from the “sanitized” data. The definition of differential privacy is given as follows.

Definition 1. (differential privacy). A mechanism $\mathcal{M}$ satisfies $\epsilon$-differential privacy if, for any output set $S$ and any database $D$ and its neighboring database $D'$ (obtained by adding or removing a single record), the following holds:
$$\Pr[\mathcal{M}(D) \in S] \le e^{\epsilon}\,\Pr[\mathcal{M}(D') \in S].$$

The Laplace mechanism [33] is commonly used to achieve $\epsilon$-differential privacy. It is built on the notion of sensitivity, defined as follows.

Definition 2. (sensitivity). For any query $f$, the $L_1$-norm sensitivity $\Delta f$ is the maximum $L_1$-norm of the difference of $f$ over any two neighboring databases $D$ and $D'$; it captures the sensitivity of two neighboring databases as follows:
$$\Delta f = \max_{D, D'} \| f(D) - f(D') \|_{1}.$$

The Laplace mechanism achieves differentially private protection by adding noise drawn from the Laplace distribution to the query result, i.e., releasing $f(D) + \mathrm{Lap}(\Delta f / \epsilon)$, where $\mathrm{Lap}(b)$ denotes a random variable with probability density $\frac{1}{2b} e^{-|x|/b}$. As shown above, the concept of differential privacy is generally applied to the joint publication of aggregate data; the standard concept is therefore unsuitable for applications that involve only one person. In this paper, we propose a more rigorous privacy guarantee combining k-anonymity and differential privacy methods.
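To make the mechanism concrete, the following is a minimal Python sketch of Laplace perturbation of a numeric query result. The function name and the example parameters are our own illustrative choices; the paper does not prescribe a particular implementation.

```python
import numpy as np

def laplace_mechanism(query_result, sensitivity, epsilon, rng=None):
    """Perturb a numeric query result with Laplace noise of scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=np.shape(query_result))
    return np.asarray(query_result, dtype=float) + noise

# Example: a count query with sensitivity 1 released under epsilon = 0.5
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

Smaller values of epsilon yield larger noise and therefore a stronger privacy guarantee at the cost of utility.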

3.3. Coordinate System

We divide a map into grids, where each grid is a state in the Markov model, so the users’ real locations can be denoted by state grids. The whole map is the area that includes all the state grids and can be divided into many unit grids. We set up a spatial coordinate system in which the user’s accurate longitude and latitude are represented as x-axis and y-axis coordinates. Vector coordinates represent each grid unit, which clearly shows the user’s current position and the corresponding state grid in the Markov model. In Figure 3, all grids have the same size, but in the real world the sizes of the regions are not necessarily equal.

The following example illustrates how these grids denote the user’s current location. If the user is located in a particular unit grid, we map that grid into the state coordinate system by its column and row indices. As time goes by, the trajectory of a user’s movement is represented by a series of states in the map coordinate system.
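A minimal sketch of how such a grid mapping might be implemented is given below; the function name, the reference corner, and the cell size are illustrative assumptions, since the paper only fixes the idea of a uniform grid over the map.

```python
def location_to_grid(lat, lon, lat_min, lon_min, cell_size_deg):
    """Map a (latitude, longitude) pair to integer grid indices (row, col)
    on a uniform grid whose south-west corner is (lat_min, lon_min)."""
    row = int((lat - lat_min) // cell_size_deg)
    col = int((lon - lon_min) // cell_size_deg)
    return row, col

# Example: a point near central Beijing mapped onto a grid of 0.01-degree cells
state = location_to_grid(39.9075, 116.3972, lat_min=39.8, lon_min=116.2, cell_size_deg=0.01)
```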

3.4. Location Transition Model and HMM

This paper studies the trajectories of moving users, so we propose to use a Markov chain [34–36] to simulate the movement of the user from one point to another under temporal correlations. Other constraints, such as road networks, can also be captured by the Markov model. The kernel of the Markov model is the state transition matrix: the current state depends only on the transition matrix and the state at the previous moment. As mentioned before, the user’s real locations are unobservable and known only to the user; hence, from the adversaries’ perspective, the user’s movement process is a hidden Markov model.

We use $x_t$ to represent the location of the user at timestamp $t$, and $\Pr[x_t]$ represents the probability that the user appears in a given grid at timestamp $t$. Therefore, we construct the Markov process as follows:
$$\Pr[x_t \mid x_{t-1}, x_{t-2}, \ldots, x_1].$$

In the first-order Markov model, the transition probability of a state and the output probability of an observation depend only on the current state, so the Markov process can be simplified to
$$\Pr[x_t \mid x_{t-1}, x_{t-2}, \ldots, x_1] = \Pr[x_t \mid x_{t-1}].$$

The transition probability $\Pr[x_{t+1} \mid x_t]$ is the one-step transition probability from timestamp $t$ to $t+1$. The transition probabilities satisfy the following properties:
$$\Pr[x_{t+1} \mid x_t] \ge 0, \qquad \sum_{x_{t+1}} \Pr[x_{t+1} \mid x_t] = 1.$$

That is, the sum of the transition probabilities over all possible locations of the user from timestamp $t$ to $t+1$ is 1. We implement the Markov model on the trajectories of moving users and obtain a transition matrix that describes the probability transfer at each timestamp. The transition matrix is assumed to be given in our system.
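The sketch below illustrates a first-order Markov prediction step in Python. The 3-state transition matrix is invented purely for illustration; in the paper the transition matrix is assumed to be given (and, in the experiments, trained from check-in data).

```python
import numpy as np

# Illustrative 3-state transition matrix; row i gives P(next state | current state i).
TM = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
])
assert np.allclose(TM.sum(axis=1), 1.0)  # each row sums to 1

def predict_next_distribution(current_distribution, transition_matrix):
    """One-step Markov prediction: current state distribution times the transition matrix."""
    return np.asarray(current_distribution, dtype=float) @ transition_matrix

# The user is in state 0 at timestamp t; predict where he may be at t+1.
p_next = predict_next_distribution([1.0, 0.0, 0.0], TM)   # -> [0.7, 0.2, 0.1]
```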

4. UG-Indistinguishable System Model

To apply k-anonymity and differential privacy in the setting where moving users share locations at consecutive timestamps, we conduct a rigorous privacy analysis and construct a special region (containing the actual user and pseudo users). Even if the adversaries capture this special region, they still cannot identify the target user.

4.1. BB Cloaking Region

The essence of applying differential privacy in location sharing is to “hide” a real record in a database from which a “neighboring database” can be obtained by adding or removing one record. The special region can be regarded as a set of locations from which the adversaries cannot infer whether the target user is present, no matter what queries they issue. However, such a dataset is not completely suitable for our problem, so we propose a new notion, the black-box (BB) cloaking region, also referred to as the special region that hides the user’s accurate location at every timestamp. To decide whether the cloaking region must be generated at timestamp $t$, we compute the amount of prior information that the user is in a given region at timestamp $t$, and we use the posterior information to represent what the adversaries can infer about the user’s location from their existing background knowledge.

In summary, the amount of the user’s location information disclosed to the adversary is the gap between the posterior information and the prior information.

Thus, we can obtain a definition of generating a special location set.

Definition 3. (special location set). Let the probability that the adversary can infer the user’s current location be the posterior probability, and let the probability of the user being at the current location be the prior probability; the privacy requirement is that the location information disclosed to the adversary (the gap between the posterior and the prior) does not exceed the privacy threshold.

The privacy threshold applies to the user’s current location at timestamp $t$ and lies between 0 and 1; in this article, we set it to be no greater than 0.3, and we assume the threshold is given in our framework. When the user’s location information exposed to the adversary is greater than the privacy threshold, the cloaking region needs to be generated to protect the real location at timestamp $t$.
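The decision rule can be sketched as follows. Treating the disclosed information as the gap between the posterior and the prior probability is only one plausible reading of the definition above, and the helper name is our own; the threshold value 0.3 follows the bound stated in the text.

```python
def needs_cloaking(prior, posterior, threshold=0.3):
    """Return True when a cloaking region must be generated at this timestamp.

    `prior` is the model's probability for the user's current location; `posterior`
    is the probability the adversary assigns after exploiting background knowledge.
    """
    disclosed = posterior - prior   # one plausible measure of the information disclosed
    return disclosed > threshold

if needs_cloaking(prior=0.10, posterior=0.55):
    print("generate the special cloaking region at this timestamp")
```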

We define a special region based on k-anonymity and differential privacy, with the intuition that the released area will not help an adversary distinguish any instance in the region. According to Definition 1, we adapt the definition to a special region in our article. The new definition is as follows.

Definition 4. (BB cloaking region differential privacy). At any timestamp, let the cloaking region be generated by a mechanism $\mathcal{M}$ and let $f$ be a query function. The query result of $f$ on the cloaking region satisfies $\epsilon$-differential privacy if, for any two cloaking regions $R$ and $R'$ that differ in a single location and any output set $S$, the following holds:
$$\Pr[f(R) \in S] \le e^{\epsilon}\,\Pr[f(R') \in S].$$

These definitions guarantee that the accurate location is always protected within a location set at each timestamp, and the released region is differentially private at timestamp $t$ for continuous location sharing under temporal correlation. We use the following scenario to explain how the special region works. In the beginning, a user moves to a new location where he may send a query (e.g., find a nearby restaurant) [37, 38]. At each timestamp, we record the user’s individual information, and the user is treated as the target user. Then, we assume a mechanism in our system that can obtain a set of nearest neighbor users issuing the same query. This set allows dummy users to exist, and our system can release the dummy users. Our anonymity generation process is more complicated, and experiments verify that the best effect is achieved with four nearest neighbors. We regard these four nearest neighbor users as new target users, respectively; hence, we obtain a set of users, as shown in Figure 4.
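A minimal sketch of selecting the nearest neighbor users around the target is shown below. The use of Euclidean distance on grid coordinates and the function name are our own simplifying assumptions; the choice of four neighbors mirrors the example above.

```python
import numpy as np

def nearest_neighbor_set(target, candidates, k=4):
    """Return the k candidate users closest to the target location.

    `target` is an (x, y) grid coordinate; `candidates` is a list of (x, y)
    coordinates of users (real or dummy) issuing the same query.
    """
    target = np.asarray(target, dtype=float)
    cand = np.asarray(candidates, dtype=float)
    order = np.argsort(np.linalg.norm(cand - target, axis=1))[:k]
    return [tuple(cand[i]) for i in order]

neighbors = nearest_neighbor_set((5, 7), [(4, 7), (9, 2), (5, 8), (6, 6), (1, 1), (5, 5)])
anonymity_set = [(5, 7)] + neighbors   # the target user plus its four nearest neighbors
```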

4.2. Razor Mechanism

After obtaining the set of nearest neighbor users, we propose a new method, the Razor Mechanism, to filter similar items. We use this mechanism to eliminate users whom we do not want to appear in the nearest neighbor user set.

However much side knowledge the adversaries may have, they still cannot know which nearest neighbor users were generated by the first anonymity. The Razor Mechanism uses the principle of similarity measurement [36] to filter out the pseudo users generated in the first anonymity. In location-based services, we usually take the distance between locations as the measure of similarity, and the similarity of users in the anonymous area is measured accordingly (the Jaccard similarity is used in our experiments; see Section 6).

As shown in Figure 4(a), the data generated in the anonymous data preprocessing stage are removed as noisy data by the Razor Mechanism. Figure 4(b) shows all the remaining data in the special cloaking region, excluding the location coordinate data of yellow dots.
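The filtering step can be sketched in Python as follows. Representing each user by a set of visited grid cells, using the Jaccard coefficient (the “Jaccard Razor” mentioned in Section 6), and the particular similarity threshold are all illustrative assumptions.

```python
def jaccard_similarity(set_a, set_b):
    """Jaccard similarity between two sets of grid cells associated with two users."""
    set_a, set_b = set(set_a), set(set_b)
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

def razor_filter(reference_cells, candidates, min_similarity=0.5):
    """Keep only candidates whose similarity to the reference profile reaches the threshold."""
    return [c for c in candidates
            if jaccard_similarity(reference_cells, c["cells"]) >= min_similarity]

users = [
    {"id": "u1", "cells": {(5, 7), (5, 8), (6, 7)}},   # similar to the reference -> kept
    {"id": "u2", "cells": {(1, 1), (2, 1)}},           # dissimilar -> removed by the razor
]
kept = razor_filter({(5, 7), (6, 7), (6, 8)}, users)
```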

4.3. Drift and Puppet

We use the Razor Mechanism to filter out noisy points that we do not want to appear, because the special region contains almost all similar users and possible location information. The target user’s information may also be eliminated, with a small probability. This phenomenon is referred to as “drift” and can be handled with the puppet approach in the special region. In the following, we distinguish the user’s accurate location from its puppet; the puppet mechanism is defined as follows.

Definition 5. (puppet). A puppet is the cell in the special location set $S_t$ that has the closest distance to the target user’s true location $x_t$:
$$\mathrm{puppet} = \arg\min_{s \in S_t} \mathrm{dist}(s, x_t).$$

In this equation, $S_t$ represents the special location set, and $\mathrm{dist}(\cdot,\cdot)$ denotes the distance between two locations in the special region. Note that the puppet approach does not leak any information about the target user: if the target user is in the special set, we protect the target user in the region; otherwise, the puppet is protected in the special set. Using a puppet does not disclose whether the user’s true location is in the set or not. As mentioned before, our location release mechanism is treated as a black box, and it remains a black box after the accurate location is replaced with a puppet.
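A sketch of the puppet selection is given below; using the Euclidean distance on grid coordinates is an illustrative assumption, and any distance function consistent with the special region could be substituted.

```python
import math

def choose_puppet(true_location, special_set):
    """Return the cell in the special location set closest to the user's true location.

    Invoked when the "drift" phenomenon removes the true location from the set."""
    return min(special_set, key=lambda cell: math.dist(cell, true_location))

puppet = choose_puppet((5.0, 7.0), [(4, 6), (5, 8), (9, 2)])   # -> (5, 8)
```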

5. Location and Trajectory Release Algorithm

5.1. Framework

The framework of the special location region release algorithm is shown in Algorithm 1. We generate a special location set at every timestamp to protect a single user’s accurate location continuously. The procedure for generating the special location set at timestamp $t$ is as follows. First, in lines 1 to 6 of Algorithm 1, the model makes a prediction based on the hidden Markov model, and at each timestamp $t$ we compute the probability of the user’s current location. If the current location at timestamp $t$ satisfies the privacy threshold, the procedure moves to the next timestamp; otherwise, the special cloaking region is generated for the current location. The construction of the special location set is shown in lines 8 to 19. If the target user is filtered out of the special location set, we use a puppet at timestamp $t$ as if it were the “target” user in the release mechanism. Our proposed algorithm uses the $L_1$-norm to capture the sensitivity of the special location set. After all these steps, we obtain a special location set, and from it we generate a special region for the single user at timestamp $t$. In this region, no matter how much side knowledge the adversaries may have, they can no longer distinguish the target user from the other users.

Framework.
Input: the user's current location, the transition matrix TM, the privacy threshold
Output: the released special region at the current timestamp
1: predict the user's location distribution with the hidden Markov model;
2: compute the probability of the user's current location;
3: if the disclosed location information exceeds the privacy threshold then
4:  Construct a special set of this location;
5: else
6:  Go to the next timestamp;
7: end if
8: Construct a special location set:
9: Run k-anonymity;
10: initialize the special location set;
11: for each candidate user do
12:  compute the similarity between the candidate and the target;
13:  if the candidate satisfies the similarity requirement then
14:   add the candidate to the special location set;
15:   algorithm goes on;
16:  else
17:   go to line 11;
18:  end if
19: end for
20:  Razor Mechanism;
21:  filter the special location set by similarity;
22: while checking whether the target user remains in the special set do
23:  if the target user is in the set then
24:   algorithm goes on;
25:  else
26:   replace the target user with a puppet;
27:  end if
28: end while
29:  Obtain the sensitivity of the special set;
30:  add Laplace noise to the special set;
31:  Release this region;
32: end;
33: return;
Go to the next timestamp
Algorithm 1
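For illustration, the following Python sketch strings the earlier helper sketches (needs_cloaking, nearest_neighbor_set, choose_puppet) into one timestamp of the framework. It is a simplified reading of Algorithm 1 under our own assumptions (the similarity-based Razor filtering is omitted, and the sensitivity estimate is a crude L1 proxy), not the authoritative implementation.

```python
import numpy as np

def release_region(true_location, candidates, prior, posterior,
                   epsilon=1.0, threshold=0.3, rng=None):
    """Sketch of one timestamp of Algorithm 1 under simplifying assumptions."""
    rng = rng or np.random.default_rng()

    # lines 1-7: check the privacy threshold
    if not needs_cloaking(prior, posterior, threshold):
        return None  # threshold satisfied; go to the next timestamp

    # lines 8-19: build the special location set from the nearest neighbors
    special_set = nearest_neighbor_set(true_location, candidates, k=4)

    # lines 20-28: if the Razor step dropped the true location ("drift"),
    # a puppet (the closest remaining cell) stands in for it
    if tuple(true_location) not in {tuple(c) for c in special_set}:
        special_set.append(choose_puppet(true_location, special_set))

    # lines 29-31: perturb the released cells with Laplace noise and release
    cells = np.asarray(special_set, dtype=float)
    sensitivity = np.abs(cells - cells.mean(axis=0)).sum(axis=1).max()  # crude L1 proxy
    return cells + rng.laplace(scale=sensitivity / epsilon, size=cells.shape)
```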
5.2. Linking Differential Privacy to Trajectory

A user’s location trajectory is a moving path or trace reported by a moving object in geographical space. The trajectory is represented by a set of time-ordered points, where each point consists of a geospatial coordinate pair and a timestamp. Such temporal and spatial attributes of a location trajectory can be considered powerful quasi-identifiers that can be linked to various other kinds of physical data objects [39, 40]. From the adversaries’ point of view, these trajectories may disclose users’ individual information, such as their workplaces, homes, and points of interest (POIs). Although such trajectories can be anonymized by replacing users’ identifiers with random identifiers, the users may still suffer from privacy threats.

In this paper, our approach uses the Markov model to describe a user’s movement from one special region to another: a single user moves from one region at timestamp $t$ to another region at timestamp $t+1$, and the transition matrix serves as the transfer mechanism of the user’s movement. The transfer mechanism uses Laplace noise to make users’ trajectories indistinguishable. As shown in Figure 5, we add Laplace noise to the Markov transfer mechanism so that the user’s transition probabilities become basically the same. For example, when a user moves from one region to neighboring regions, the transfer probability to each region differs according to his habits; after adding Laplace noise to the transfer mechanism, the transition probabilities to these regions become approximately equal. In the following section, we show the performance through experimental results.
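A sketch of the noisy transfer mechanism is shown below. Clipping the noisy probabilities to non-negative values and renormalizing each row so it still sums to 1 are our own implementation choices; the paper only states that Laplace noise is added to the transfer mechanism so that the outgoing transition probabilities become nearly indistinguishable.

```python
import numpy as np

def perturb_transition_row(row, epsilon=1.0, rng=None):
    """Add Laplace noise to one row of the transition matrix and renormalize it."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(row, dtype=float) + rng.laplace(scale=1.0 / epsilon, size=len(row))
    noisy = np.clip(noisy, 0.0, None)            # probabilities cannot be negative
    if noisy.sum() == 0.0:                       # degenerate case: fall back to uniform
        return np.full(len(row), 1.0 / len(row))
    return noisy / noisy.sum()                   # keep the row a valid distribution

# A user's habitual transfer probabilities out of one region ...
habitual = [0.7, 0.2, 0.1]
# ... become much closer to uniform after the noisy transfer mechanism is applied.
released = perturb_transition_row(habitual, epsilon=0.5)
```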

6. Experiment and Evaluation

In this section, we present the evaluation of our method. All algorithms are implemented in Python on macOS with the real-world datasets GeoLife and Gowalla [41–43]. The GeoLife dataset was collected in the GeoLife project (Microsoft Research Asia) from 182 users between April 2007 and August 2012. The GPS trajectories in this dataset are represented as timestamped sequences of points, where each point contains latitude, longitude, and altitude. The dataset has 17,621 trajectories with a total distance of 1,292,951 kilometers and a total duration of 50,176 hours. The trajectories are updated at a frequency of every 160 seconds. The Gowalla dataset, distributed by Stanford University, comes from a location-based social networking site where users share their locations by checking in; it contains a total of 6,442,890 check-ins from 196,591 users. The check-in data are used to train the Markov model. We implement the proposed model in the following steps.

Step 1. Input the training dataset (Gowalla) to train the Markov model and output the prediction results.

Step 2. If the prediction results obtained from Step 1 do not satisfy the privacy threshold, we generate the special location set with our mechanism at the current timestamp. Otherwise, we move to the next timestamp and repeat Step 1.

Step 3. Check whether the real location is in the special region. In the process of generating the special region, there is a small probability of filtering out the true location; in that case, we use the nearest and most similar location in the set as a puppet instead.

Step 4. Use the special region at each timestamp to divide the real-world map into several neighboring grids. In each grid, the adversaries cannot distinguish between the target user and the pseudo users.

Step 5. Finally, we add noise to the Markov model: at each timestamp, we add noise to the transfer mechanism to make the trajectories indistinguishable.

The performance of the release mechanism as a user moves over time is evaluated as follows. We run our release mechanism at each timestamp with a fixed privacy budget; each method is run over 20 times and shows outstanding performance. Figure 6(a) shows how a user’s accurate location is hidden by the first anonymity: the x-axes and y-axes represent longitude and latitude, respectively, and the user’s true location and the pseudo locations generated in the first anonymity are marked with different symbols. We use the SSE (sum of squared errors) as the core indicator for selecting the k value: as k increases, the sample division gets more refined, the degree of aggregation of each cloaking region gradually increases, and the SSE naturally becomes smaller. In our method, we set the parameter k according to this indicator; repeated experiments show that this setting outperforms the alternatives, as shown in Figure 7.
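The SSE-based selection can be sketched with a standard elbow-style computation; the use of scikit-learn's KMeans inertia as the SSE indicator, the random seed, and the candidate range of k are all our own illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def sse_curve(points, k_values):
    """Sum of squared errors (k-means inertia) for each candidate k."""
    points = np.asarray(points, dtype=float)
    return {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(points).inertia_
            for k in k_values}

# Candidate locations around the user; choose k where the SSE curve starts to flatten.
rng = np.random.default_rng(0)
candidate_locations = rng.normal(size=(200, 2))
curve = sse_curve(candidate_locations, k_values=range(2, 9))
```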

We choose the four users who are most similar to the real user and send the same query at timestamp $t$. Figure 6(b) shows the second k-anonymity performed after Figure 6(a). In the second anonymity, we consider each of the four users generated by the first anonymity as a “real” user, and the model generates more anonymous users around these four “real” users; the pseudo users generated in the second anonymity are marked in Figure 6(b). Through these two operations, we obtain a much larger anonymous area with many similar anonymous users. Next, through the Razor Mechanism of Section 4, the model filters out several pseudo users generated in the first k-anonymity, as shown in Figure 6(c); in this step, we make full use of the Jaccard Razor. While using the Razor Mechanism, the actual user may be filtered out with a very small probability, which is the “drift” phenomenon.

When a “drift” happens, with minuscule probability, we use a surrogate user (the puppet) in the special set to impersonate the target user. In the next step, the model adds Laplace noise to this special region at timestamp $t$, which provides a rigorous privacy guarantee, and we obtain a new cloaking region containing the pseudo locations and the true location. As shown in Figure 6(d), the area within the square is one of the grids of the real-world map, and the pseudo users generated in the second anonymity are marked in the figure. Finally, the model generates a noisy cloaking region. The added Laplace noise makes the new special region very stable: noisy users are always around the real user and the fake users, so in this special region the adversaries cannot distinguish the target user from the pseudo users. In Figure 8, the true trajectory is compared with the trajectories obtained after adding noise to the Markov transfer mechanism. Figure 8(a) shows a randomly selected accurate trajectory of a single user over a period of time. The most important step in the algorithm is the addition of Laplace noise to the Markov transfer mechanism, which makes the transfer probabilities stable. The noisy trajectories produced by the Laplace mechanism are presented in Figure 8(b). As shown in Figures 8(a) and 8(b), the released trajectory is still close to the accurate trajectory. The special regions at every timestamp are used to divide the real-world map into grids, as shown in Figure 8(c). In these regions, the adversaries’ side knowledge no longer undermines privacy protection: the adversaries can neither distinguish the accurate trajectory among the released trajectories nor recognize the target user in the special regions at each timestamp. Our mechanism outperforms the standard Laplace mechanism, as shown in Figure 9.

To show the practicality of the released location area, we measure the query precision and recall of nearest neighbors every 500 timestamps over 150 trajectories, as shown in Figure 10. Figure 10(a) shows that the precision declines as the parameter increases, because the nearest neighbors then have to be found in larger areas and a larger location set is returned. On the other hand, Figure 10(b) indicates that the recall ratio increases under the same growth. Figure 8 shows the comparison between the experimental results of the position release mechanism proposed in this paper and those of the Laplace mechanism; the results indicate that the usability of our method is better than that of the Laplace mechanism.

7. Conclusion and Future Work

In this paper, we proposed an L&A-indistinguishable system under temporal correlation. The system uses the Markov model to describe users’ movement on the road network and then generates a special user set with k-anonymity and differential privacy approaches. The proposed system provides rigorous privacy protection for a single moving user. The method is based on the hidden Markov model and learns from historical trajectories to obtain prediction results for future timestamps.

As future work, we are interested in instantiating the system with different and more advanced mobility models and studying the impact on the system’s performance. We aim to protect mobile users’ personal information with a more rigorous privacy guarantee and a smaller loss in data utility, and to conduct deeper research to enhance the usability of the region release mechanism based on the existing studies. We also plan to develop a model that recommends points of interest based on the user’s movement and position information and recommends the community to which the user may move.

Data Availability

The coordinate data used to support the findings of this study have been deposited in the GeoLife dataset repository [44]. The check-in data used to support the findings of this study have been deposited in the Gowalla dataset repository [45].

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant Nos. 61472096, 61272186, 61472095, and 61502410), the 2019 Industrial Internet Innovation and Development Engineering program (Industrial Internet Security Audit Technology and Product; On-site Emergency Detection Tools in the Field of Industrial Internet Security; KY10600200008, KY10600200021), the International Governance Research Center of Industrial Internet (3072020CFP0601), and the Fundamental Research Funds for the Central Universities (3072020CF0604).