Privacy-Preserving Synthetic Location Data in the Real World

Sharing sensitive data is vital in enabling many modern data analysis and machine learning tasks. However, current methods for data release are insufficiently accurate or granular to provide meaningful utility, and they carry a high risk of deanonymization or membership inference attacks. In this paper, we propose a differentially private synthetic data generation solution with a focus on the compelling domain of location data. We present two methods with high practical utility for generating synthetic location data from real locations, both of which protect the existence and true location of each individual in the original dataset. Our first, partitioning-based approach introduces a novel method for privately generating point data using kernel density estimation, in addition to employing private adaptations of classic statistical techniques, such as clustering, for private partitioning. Our second, network-based approach incorporates public geographic information, such as the road network of a city, to constrain the bounds of synthetic data points and hence improve the accuracy of the synthetic data. Both methods satisfy the requirements of differential privacy, while also enabling accurate generation of synthetic data that aims to preserve the distribution of the real locations. We conduct experiments using three large-scale location datasets to show that the proposed solutions generate synthetic location data with high utility and strong similarity to the real datasets. We highlight some practical applications for our work by applying our synthetic data to a range of location analytics queries, and we demonstrate that our synthetic data produces near-identical answers to the same queries compared to when real data is used. Our results show that the proposed approaches are practical solutions for sharing and analyzing sensitive location data privately.


INTRODUCTION
People's locations are collected at a large scale by a wide range of entities (e.g., Uber and Google Maps), typically through mobile technologies. Such data is extremely private, for numerous personal, social, and financial reasons. However, being able to analyze and model location patterns is highly valuable to other businesses and researchers (and society as a whole) to enable a vast range of location-based applications, from tracking disease spread to reducing traffic congestion. The exponential growth in popularity of (open) data science has seen an ever-growing demand for the publication of a variety of location datasets (e.g., geotagged Tweets, taxi journey origins and destinations, social media check-ins). However, the risks concerning the violation of individuals' privacy present a major impediment to the free sharing of such data. Instead, the raw data has to be significantly sanitized before it can be published. This can involve aggregation into predefined regions, location perturbation, or truncation of longitude-latitude data. In this setting, the sanitization operation is controlled and performed by the data owner, whose primary concern is to minimize the privacy risk to the data subjects and their consequent liability. In many cases, this considerably limits the utility of the published data.
In contrast to crude sanitization, releasing a synthetic dataset in the same format as the original data can give more flexibility in how clients can use the published data. In many practical scenarios, the recipient of a dataset will want to use their in-house data analytics tools without any restrictions from the data provider on the way in which the data can be used, or the type of queries that can be asked. In this paper, we develop approaches for generating realistic synthetic data from real location data, while also satisfying the strict requirements of differential privacy (DP). The aim is to maximize the similarity between the original and synthetic datasets, whilst protecting the existence and location of any individual.
Existing approaches to synthetic location data generation (surveyed in Section 2) are unsatisfying for a number of reasons. They tend to adopt relatively simplistic ways to represent the data, such as fixed grids, and only materialize the population of cells within such grids. They make crude uniformity assumptions within such basic regions that do not capture realistic location distribution patterns. They also tend to be oblivious of real-world conditions, such as straits of water or uninhabitable terrain, leading to nonsensical outputs that 'locate' people in the middle of the ocean. In this paper, we propose novel solutions that overcome these limitations.
Our first approach for synthetic data generation (SDG) targets the first of these weaknesses, by considering a richer set of ways with which to model the input location data. We introduce a differentially private partitioning-based framework in which we restrict SDG to be within small private regions. We introduce grid- and clustering-based methods, where we generate synthetic points within private regions using a novel adaptation of kernel density estimation that is specifically suited to our setting of multiple point generation and maintains privacy. In all steps, privacy is provided by using DP mechanisms to add noise to counts, and it is maintained through the post-processing properties of DP.
In our second approach, we incorporate 'common knowledge' about the world within the data generation process. Traditionally, DP approaches make very restrictive assumptions regarding what outside knowledge is known beyond the data itself (e.g., provenance, structure, or hierarchy). However, it is common for a dataset to be strongly restricted or influenced by an underlying structure, the nature or behavior of which is known to all. For example, location data is heavily influenced by the underlying road network, which is public knowledge. Our work is the first, to our knowledge, to exploit this underlying structure in order to generate differentially private synthetic location data. We first match the data to the given features (e.g., road segments) and materialize summary statistics using DP mechanisms. From this, we generate synthetic points along each segment using privacy-preserving micro-histograms to maintain the underlying distribution.
We perform an extensive set of experiments using real datasets with varying degrees of underlying structure. Our solutions perform significantly better than alternative approaches (up to 28x more accurate, and 3.7x faster). The proposed partitioning-based approach is preferred when the data is less well-aligned with the underlying network, or when network data is unavailable. The proposed network-based approach is extremely effective, especially when the location data is well-aligned with the underlying road network. It is also up to 37x faster than partitioning-based approaches.
Our methods further improve the real-life accuracy and utility of the generated data by incorporating public knowledge, such as streets, coastlines, and rivers. We also evaluate the practical utility of the synthetic data in answering range, hotspot, and facility location queries. The experimental results show that the synthetic data produces high quality results for these queries, thus highlighting both the strength of our approaches and the potential for widespread, real-world deployment of DP. Visualization of the real and synthetic data also improves explainability and trust in DP results.
Our main contributions are: • a novel methodology and two robust methods for generating private synthetic location data with excellent performance in a range of location analytics tasks; • a new approach for incorporating public graph data (e.g., the road network) to enhance utility of private synthetic data; • a novel mechanism for differentially private kernel density estimation that is designed for multiple point sampling for synthetic data generation; and • an extensive evaluation of privacy-preserving data generation yielding several practical insights.
The rest of the paper is organized as follows. After reviewing the literature (Section 2), Section 3 introduces the problem, discusses its privacy and utility trade-off, and gives a brief overview of DP and its properties. We explain our synthetic data generation solutions in Sections 4 and 5, and evaluate them in Section 6. In Section 6, we also use the generated data to answer various location analytics queries. We conclude our work with Section 7.

RELATED WORK
Since DP has become the state-of-the-art privacy model, it has been applied to many domains, including medical, financial, and social network data. Using DP for spatial data is a continued area of focus given the significance and sensitivity of location data. For example, previous work has developed differentially private spatial decompositions [5], released spatial histograms [10], and protected temporally correlated location data [24].
There is an increasingly large body of work on private trajectory publication [e.g., 11] and synthesis [e.g., 12,14]. Although these appear to be complex variants of the location privacy problem, the solutions therein all produce outputs that correspond to arbitrary grid cells (which is not concordant with the format of the original data), whereas we generate co-ordinate data (i.e., the same form as the input data). While one could extend these solutions to generate individual points (e.g., by using uniform sampling), we show in our work that achieving high-quality results by synthesizing exact locations (while preserving the underlying characteristics of the real data) is a significant challenge. Furthermore, almost all existing works fail to fully utilize publicly-known information to boost utility at no cost to privacy. Although the work of Naghizade et al. [17] is 'context-aware', it lacks privacy guarantees, and there remains a high risk of reidentification. Other context-aware work [e.g., 1,4] uses the local setting of DP, as well as relaxed privacy definitions, which makes them incompatible with our objectives.
Notwithstanding the above differences, the problem we study is a core issue of spatial data publication with many important applications, such as advertising and better provision of public services. Our methodology addresses several practical challenges for real-life use of DP and private location data generation that are not considered in (or the focus of) previous works. Our work uniquely combines all of the following: a) satisfying the strict requirements of DP under all circumstances; b) generating synthetic datasets in the same format as the input datasets; c) contextualizing in the real world by incorporating real-world knowledge (e.g., road networks); and d) evaluating the methods with popular location analytics tasks.

PROBLEM SETTING
Given a dataset containing the real locations of individuals, we aim to generate synthetic spatial point data that satisfies ε-DP and preserves as much of the underlying distribution of the real data as possible. Specifically, our objective is to protect the existence and location of each individual in the dataset by using differential privacy. We use p and s to denote real and synthetic locations (in coordinate form), and P and S to denote the sets of real and synthetic locations, respectively. In this section, we outline how we seek to balance privacy and utility. We also briefly outline the setting of our problem with respect to adversaries and assumed knowledge.

Privacy
Even when a strong social motivation for data sharing or release exists (e.g., in contact tracing to help track disease spread), there remains a need for strong privacy protections. The absence of a sufficiently strong privacy model can result in deanonymization [22] or inference attacks [16]. We use DP as it provides a strong level of protection, through a guarantee of plausible deniability, to all members of a dataset.
Definition 1 (ε-differential privacy [6,7]). A randomized mechanism A is ε-differentially private if, for any two datasets D and D′ differing by one element, and for all O ∈ Range(A), we have:

Pr[A(D) ∈ O] ≤ e^ε · Pr[A(D′) ∈ O]    (1)

In other words, a mechanism that satisfies ε-DP should return approximately similar results, even if a tuple, t, is added to or removed from a dataset (i.e., D′ = D ± t). The Laplace mechanism is used to release the values of numeric functions of data [7]. For a function f acting on D, it adds random noise to the value of f(D) such that:

A(D) = f(D) + Lap(Δf / ε)    (2)

where Lap(·) denotes the Laplace distribution, and the scale of the noise is set by the sensitivity of f, Δf = max_{D,D′} |f(D) − f(D′)|. The privacy properties of multiple mechanisms can be analyzed via a composition theorem [8]. Multiple mechanisms A_i, each with a privacy parameter ε_i, can be combined to form one ε-differentially private mechanism with ε = Σ_i ε_i. Thus, we refer to ε as the privacy budget for a specific task (i.e., synthetic location data generation), and apportion it into pieces. In our work, we add noise in at most three places and divide our privacy budget across these steps, where step i has a privacy budget of ε_i. That is, ε = ε₁ + ε₂ + ε₃.
Another property of DP is its robustness to post-processing [8]. That is, we can transform the output from a DP mechanism without further privacy loss, unless we use extra knowledge about the input. When we use the Laplace mechanism for a count query, post-processing permits rounding all values to the nearest integer, and all negative values to zero, with no adverse privacy implications.
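As a concrete illustration, the Laplace mechanism for a count query, combined with the post-processing steps described above, can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def laplace_count(true_count, epsilon, rng=None):
    """Release a count under epsilon-DP using the Laplace mechanism.
    A count query has sensitivity 1 (adding or removing one individual
    changes the count by at most 1), so the noise scale is 1/epsilon."""
    rng = rng or np.random.default_rng()
    noisy = float(true_count + rng.laplace(scale=1.0 / epsilon))
    # Post-processing (no extra privacy cost): round to the nearest
    # integer and clamp negative values to zero.
    return max(0, round(noisy))
```

For example, `laplace_count(100, 1.0)` returns a non-negative integer whose expected value is 100, with noise of expected magnitude 1.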

Utility
Our aim is to generate synthetic data that maximizes utility, while meeting the above privacy guarantees. We initially assess this through two measures: normalized cell error (NCE) and mean edge distance difference (MEDD).
For NCE, we divide the region into cells (giving the set L), and obtain c_real^ℓ and c_synth^ℓ, the number of points in cell ℓ for the real and synthetic datasets, respectively. NCE is then defined as:

NCE = (1/|P|) · Σ_{ℓ∈L} |c_real^ℓ − c_synth^ℓ|    (3)

While NCE quantifies the error between just the synthetic and real datasets, MEDD quantifies the error between the two datasets with respect to a graph (here, the road network). We use MEDD to quantify the preservation of network alignment of the synthetic points. We define d(p, e) to be the shortest distance from a point p to its nearest edge e (explained more in Section 5). MEDD is hence defined as:

MEDD = | (1/|P|) Σ_{p∈P} d(p, e_p) − (1/|S|) Σ_{s∈S} d(s, e_s) |    (4)

As we seek to establish practical data sharing mechanisms, we also assess utility through a range of location analytics tasks like range, hotspot, and facility location queries. These utility measures are described more in Section 6.
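A minimal sketch of a grid-based cell-error computation in the spirit of NCE, assuming the error is the total absolute cell-count difference normalized by the real dataset size |P|:

```python
import numpy as np

def normalized_cell_error(real_pts, synth_pts, bins, bounds):
    """Histogram both point sets on the same grid and compare cell
    counts, normalized by the number of real points.
    real_pts/synth_pts: (n, 2) arrays of (x, y) coordinates.
    bounds: ((xmin, xmax), (ymin, ymax)) covering the region."""
    h_real, _, _ = np.histogram2d(
        real_pts[:, 0], real_pts[:, 1], bins=bins, range=bounds)
    h_synth, _, _ = np.histogram2d(
        synth_pts[:, 0], synth_pts[:, 1], bins=bins, range=bounds)
    return np.abs(h_real - h_synth).sum() / len(real_pts)
```

Identical real and synthetic datasets give an error of zero, and the measure grows as the synthetic cell counts drift from the real ones.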

Adversaries and Assumed Knowledge
We assume that the aim of an adversary is to identify the true location of a certain individual. As our proposed methods make use of external knowledge (e.g., the road network), which is public knowledge, we assume it can also be utilized by any adversary. Given this aim, there are two primary adversary targets: membership inference and location identification. To provide protection in both regards, we use differential privacy -a widely-used, 'road-tested' technique with strong, demonstrable privacy guarantees. Through its definition (see Definition 1), each individual has a degree of plausible deniability with respect to their inclusion in the synthetic dataset (governed by a probabilistic bound; see Equation 1). This assures us that the output S does not provide the adversary with an advantage in determining the true location of an individual in the input. Adopting synthesis of location data (as opposed to publication) further weakens the relationship between real and synthetic points. As we treat each point independently, each point has its own (composable) DP guarantee. As such, our methods can be applied to trajectory data without adverse downstream consequences. That is, it would not be possible to link individual points in the synthetic data and re-identify a real trajectory.

PARTITIONING-BASED DATA GENERATION
This section details our two-stage partitioning-based approach. We first restrict data generation to be within small regions, and then generate a noisy number of points, while preserving a distributional measure of the real data. We propose a private version of kernel density estimation (KDE) to obtain representative probability distributions of point data. For the kernel function to be well-defined, it requires access to points in the database, which makes satisfying DP requirements difficult while maintaining high utility. Privatizing KDE is further complicated by our need to repeatedly sample from the private KDE to generate multiple synthetic points, a process that would ordinarily lead to high levels of privacy leakage. Hence, we develop a kernel density estimate that satisfies ε-DP, achieves high utility, and is robust to multiple sampling.

Private Data Partitioning
Before introducing our solution for generating data, we outline how we partition our space by using differentially private grid- and clustering-based approaches from the literature.

Grid-Based Partitioning.
A simple method to privately partition data is to use a uniform grid (UGrid) that is independent of the data, thus maintaining privacy. Choosing the correct granularity, however, is important as too coarse or too fine a grid can lead to poor results. Consequently, to determine the dimensions of the grid, we utilize a guideline proposed in Qardaji et al. [19]. For an m × m uniform grid, we set the number of cells in each direction to be:

m = ⌈√(Nε₁ / 10)⌉    (5)

where N is the number of points in the real dataset, P, and ε₁ is the privacy budget assigned to this task. This ensures that the average number of points per cell is suitably larger than the noise magnitude, and it follows the composition property of DP introduced in Section 3.1. Consequently, the total number of cells, or regions, into which the data is partitioned is m² = Nε₁/10. We add noise to the number of points n in each region using the Laplace mechanism to obtain: n′ = n + Lap(1/ε₁). In many situations (e.g., non-uniform distribution of points), a uniform grid would be unsuitable as it would likely fail to capture the distribution accurately and/or add noise to the dataset in a biased manner. Therefore, we also implement an adaptive grid (AGrid) method (from Qardaji et al. [19]) whereby denser regions have more grid cells, and sparser regions have fewer cells. We follow their recommendation by first dividing the data region into an m₁ × m₁ uniform grid where:

m₁ = max(10, ⌈(1/4)·√(Nε₁ / 10)⌉)    (6)

We add Laplace noise, controlled by ε₁, to the count in each cell and then divide each cell into an m₂ × m₂ grid where:

m₂ = ⌈√(n′ε₂ / 5)⌉    (7)

with n′ the noisy count of the cell. We conclude the partitioning phase by adding Laplace noise, controlled by ε₂, to the count in each of the new smaller cells.
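The UGrid step can be sketched as follows. This is an illustrative implementation assuming the region bounds are public knowledge (here taken from the data extent for brevity), with the grid size chosen per the Qardaji et al. guideline:

```python
import numpy as np

def ugrid_partition(points, eps1, rng=None):
    """UGrid partitioning sketch: grid size m = ceil(sqrt(N*eps1/10)),
    then Laplace-noised counts per cell (count sensitivity is 1)."""
    rng = rng or np.random.default_rng()
    n = len(points)
    m = max(1, int(np.ceil(np.sqrt(n * eps1 / 10.0))))
    # NOTE: in a real deployment the bounds should be public, not
    # derived from the sensitive data as done here for brevity.
    (xmin, ymin), (xmax, ymax) = points.min(axis=0), points.max(axis=0)
    counts, _, _ = np.histogram2d(
        points[:, 0], points[:, 1], bins=m,
        range=((xmin, xmax), (ymin, ymax)))
    noisy = counts + rng.laplace(scale=1.0 / eps1, size=counts.shape)
    # Post-processing: round and clamp to non-negative integers.
    return np.clip(np.rint(noisy), 0, None).astype(int)
```

With 1,000 points and ε₁ = 1, the guideline yields a 10 × 10 grid.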

Cluster-Based Partitioning.
We also implement a private clustering-based approach to generate regions. We adapt the expanded uniform grid k-means (EUGkM) method [20,21], which has been shown to perform well while satisfying ε-DP. In short, EUGkM consists of two steps: initial cluster centroid generation and k-means-style clustering. To generate the locations of an initial set of k centroids, EUGkM uses the concept of sphere packing to randomly generate points within the bounds of the dataset, which ensures that all centroids are evenly (but not necessarily equally) spaced across the data space. The main advantage of this method is that it can be done without access to individual data records, thus maintaining privacy. A uniform grid is then generated using Equation 5, and ε₁ is used to control the grid size. Data points are assigned to a grid cell, the total number for each cell is calculated, and Laplace noise of Lap(1/ε₁) is added to the count in each cell. Grid cells are then 'allocated' to their nearest centroid and a weighted k-means-style procedure for optimization is initiated, where the cell-centroid distances are weighted by the (noisy) number of points in each cell. We use these centroid locations to generate Voronoi regions to which each real data point is assigned. For each cluster region, we obtain the number of points n and, as we have interacted with the real data again, we need to add noise to each Voronoi region's count. Hence, our final step is to add Laplace noise to get a noisy count: n′ = n + Lap(1/ε₂). Once again, this is in accordance with DP's composition property (Section 3.1).
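A compact sketch of the clustering step follows. The sphere-packing rejection radius and the iteration count are our illustrative choices; the key privacy point is that the loop only ever touches grid-cell centers and noisy counts (post-processing under DP):

```python
import numpy as np

def eugkm_sketch(cell_centers, noisy_counts, k, bounds, rng=None, iters=20):
    """EUGkM-style clustering sketch: data-independent centroid
    initialization, then weighted k-means over grid cells.
    cell_centers: (m, 2) grid-cell centers; noisy_counts: (m,) noisy
    count per cell; bounds: ((xmin, xmax), (ymin, ymax))."""
    rng = rng or np.random.default_rng()
    (xmin, xmax), (ymin, ymax) = bounds
    # Sphere-packing-style init: reject candidates too close to
    # existing centroids so centroids spread over the data space.
    area = (xmax - xmin) * (ymax - ymin)
    r = np.sqrt(area / (np.pi * k)) / 2  # heuristic packing radius
    centroids = []
    while len(centroids) < k:
        cand = rng.uniform((xmin, ymin), (xmax, ymax))
        if all(np.linalg.norm(cand - c) > r for c in centroids):
            centroids.append(cand)
    centroids = np.array(centroids)
    for _ in range(iters):
        # Allocate each cell to its nearest centroid.
        d = np.linalg.norm(
            cell_centers[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Weighted centroid update, weights = noisy cell counts.
        for j in range(k):
            w = noisy_counts[assign == j]
            if w.sum() > 0:
                centroids[j] = np.average(
                    cell_centers[assign == j], axis=0, weights=w)
    return centroids
```

The returned centroids then induce the Voronoi regions to which real points are assigned.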
In summary, the main difference between the two partitioning methods is that clustering is (in theory) more sensitive to non-uniform point distributions (i.e., using Voronoi regions allows small clusters to form easily in dense regions). We examine this empirically in our experiments.

Private Data Generation
Generating synthetic data from a domain without imposing any constraints can be done in many ways. For example, sampling from a uniform distribution over the entire domain will maximize the entropy. However, we aim to generate synthetic data that preserves some underlying characteristics or properties of the real data.
Our task is made more difficult as we try to match more complex features of the data while imposing the strict requirements of ε-DP.
In this section, we introduce differentially private SDG methods for use in conjunction with any partitioning method. Note that, when generating synthetic points with any method, we can ensure that points are not generated in regions that are unlikely to contain points, such as seas and rivers. We do this by specifying 'out-of-bounds' regions and filtering out any synthetic data points that lie within these regions. More explanation of this process is given in Section 6.1.

Uniform Distribution.
As private partitioning already approximately captures an overall distribution of the points, a simple method for synthetic point generation is to sample at random from a uniform distribution. As uniform random sampling is data independent, no further noise is needed at this stage to preserve privacy (i.e., ε₃ = 0). We further reduce the size of the sampled region by dividing each region into triangles, where each triangle consists of the region's centroid and two adjacent vertices of the region. We generate points randomly within each triangle in proportion to each triangle's area, using the triangle point picking method [23].
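The triangle point picking step can be sketched as below. The fan triangulation about the centroid assumes convex regions; names are ours:

```python
import numpy as np

def sample_in_triangle(a, b, c, rng=None):
    """Triangle point picking: draw one point uniformly at random
    from the triangle with vertices a, b, c."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(2)
    s1 = np.sqrt(r1)  # square-root reparameterization gives uniformity
    return ((1 - s1) * np.asarray(a)
            + s1 * (1 - r2) * np.asarray(b)
            + s1 * r2 * np.asarray(c))

def sample_in_region(vertices, n, rng=None):
    """Fan-triangulate a convex region about its centroid and draw n
    points, picking each triangle with probability proportional to
    its area, as in our uniform generation step."""
    rng = rng or np.random.default_rng()
    verts = np.asarray(vertices, dtype=float)
    centroid = verts.mean(axis=0)
    tris = [(centroid, verts[i], verts[(i + 1) % len(verts)])
            for i in range(len(verts))]
    cross = lambda u, v: u[0] * v[1] - u[1] * v[0]
    areas = np.array([abs(cross(t[1] - t[0], t[2] - t[0])) / 2.0
                      for t in tris])
    idx = rng.choice(len(tris), size=n, p=areas / areas.sum())
    return np.array([sample_in_triangle(*tris[i], rng=rng) for i in idx])
```

Every generated point is a convex combination of a triangle's vertices, so it always lies inside the region.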

Weighted Uniform Distribution.
A more nuanced approach is to use information from neighboring regions to define the point distribution. The weighted uniform distribution (WUD) approach subdivides each region and distributes points uniformly across each sub-region. The number of points in each sub-region is influenced by characteristics of the sub-region and neighboring region [25].
We split each region R into sub-regions. The number of points n′_{R,i} in sub-region R_i is based on its area and the noisy number of points in the neighboring region(s). It is defined as:

n′_{R,i} = n′_R · ( w · A_i/A_R + (1 − w) · n′_{N(i)}/n′_{N(R)} )    (8)

where A_i and A_R are the areas of R_i and R, respectively; n′_{N(i)} and n′_{N(R)} are the noisy number of points in the neighboring region(s) to R_i and R, respectively; and 0 ≤ w ≤ 1 is a weighting factor. By definition, n′_R = Σ_i n′_{R,i}. We set w = 0.5 to give equal weight between the areas and populations of (sub-)regions. Once the number of points in each sub-region is determined, we generate points using the triangle method (Section 4.2.1). As the boundary regions are private (due to the partitioning method) and we only ever use the noisy number of points in any region, the post-processing property of DP negates further noise addition. Hence, ε₃ = 0 here.
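The allocation step can be sketched as follows. The mixing of an area share with a neighbor-population share, and the remainder handling, are our illustrative reading of the weighting rule:

```python
import numpy as np

def wud_allocate(n_region, areas, neighbor_counts, w=0.5):
    """Weighted-uniform-distribution sketch: split a region's noisy
    count n_region across sub-regions by mixing each sub-region's
    area share with its (noisy) neighbor-population share."""
    areas = np.asarray(areas, dtype=float)
    nb = np.asarray(neighbor_counts, dtype=float)
    share = w * areas / areas.sum() + (1 - w) * nb / nb.sum()
    alloc = np.floor(n_region * share).astype(int)
    # Give any remainder to the largest shares so the allocations
    # sum exactly to n_region.
    for i in np.argsort(-share)[: n_region - alloc.sum()]:
        alloc[i] += 1
    return alloc
```

For example, with two equal-area sub-regions and noisy neighbor counts of 3 and 1, ten points split as 7 and 3.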

Kernel Density Estimation.
Kernel density estimation is a statistical approach to estimate the density function of a distribution. Using KDE as a basis for synthetic data generation can better preserve the underlying characteristics of the original data.
The kernel density estimator, f̂(x), is defined as:

f̂(x) = (1/n) · Σ_{i=1}^{n} K(x − x_i)    (9)

where x is a two-dimensional vector consisting of x- and y-coordinates, n is the number of points in the dataset (that is the basis for the kernel), and K is the kernel function.
Kernel Density Estimator Construction. While there have been numerous attempts to privatize KDE [2,13,15], these methods are not well-suited to our setting (i.e., sampling multiple times from a private KDE). Prior efforts adopt relaxed privacy definitions, such as (ε, δ)-DP [13], or perform post-hoc testing of KDE samples for privacy [15]. Aldà and Rubinstein [2] use the Gaussian kernel, which results in oversmoothing in our setting, leading to poor quality synthetic data. We instead use a two-dimensional Laplace kernel, owing to the widespread use of its one-dimensional counterpart in other DP work. Specifically, we use the polar Laplace distribution, which has the probability density function:

K(r, θ) = (1/(2πh)) · e^{−r/h}    (10)

where r = ∥x − x_i∥, θ is the angle between x and x_i, and h is a normalization (or smoothing) factor. To ensure we obtain a differentially private kernel for region R, it is necessary to tune the kernel function in each region such that the probability ratio between the two most distal points in R is no more than e^ε, as required by Definition 1. Hence, we set the smoothing parameter for R to be:

h_R = ∥D_R∥ / ε₃    (11)

where ∥D_R∥ is the maximum distance between any two locations (not necessarily in P) in R. Consequently, proving that this kernel function satisfies DP can be easily done by examining the probability ratio between K(0, θ) and K(∥D_R∥, θ).

Synthetic Data Generation. We now outline how to generate a synthetic point s. To do so, we utilize a convenient property of kernel density estimation: sampling from the full KDE is equivalent to first sampling one of the points x_i, then sampling from the kernel around x_i. From Equation 10, we see that r and θ can be sampled independently, that is, P(r, θ) = P(r)·P(θ). To this end, we first sample r from P(r) = h⁻¹ exp(−r/h), and then sample θ from P(θ) = 1/2π (equivalent to sampling randomly from the uniform distribution with bounds (0, 2π]).
Once we obtain values for r and θ, we convert to Cartesian co-ordinates and add this displacement to the sampled real point p to give s (i.e., x_s = x_p + r cos θ, y_s = y_p + r sin θ). There is a risk that real points are sampled many times, which would lead to privacy leakage that could reveal the true location of an individual. To avoid this, we modify the sampling procedure slightly. We set ε* = ε₃/k, which allows each real point to be sampled at most k times (using sequential composition), meaning we achieve our target level of privacy protection. If we reach this limit, or if n = 0 and n′ > 0, we simply generate a point uniformly at random, which has no negative privacy consequences. We set k = 2 as n′ ≤ 2n in most cases. We repeat this sampling process until n′ points are generated in each region R. Finally, as a sample generated this way has the same distribution as the KDE and the KDE satisfies DP, it follows that the synthetic data satisfies DP.
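The sampling procedure can be sketched as below. The way the smoothing h is derived from the region diameter and the per-sample budget is our assumption, and the uniform fallback once budgets are spent is omitted for brevity:

```python
import numpy as np

def sample_polar_laplace(point, h, rng=None):
    """Draw one offset from the polar Laplace kernel centred on a
    real point: r ~ P(r) = (1/h) exp(-r/h) and theta uniform on
    (0, 2*pi], converted back to Cartesian form."""
    rng = rng or np.random.default_rng()
    r = rng.exponential(scale=h)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    return np.array([point[0] + r * np.cos(theta),
                     point[1] + r * np.sin(theta)])

def kde_generate(points, diam, eps3, n_target, k=2, rng=None):
    """Per-region generation loop sketch: each real point may be
    sampled at most k times (per-sample budget eps3/k by sequential
    composition); once all budgets are spent, remaining points would
    be drawn uniformly at random (not shown)."""
    rng = rng or np.random.default_rng()
    # Smoothing from region diameter and per-sample budget (assumed).
    h = diam / (eps3 / k)
    remaining = {i: k for i in range(len(points))}
    out = []
    while len(out) < n_target and remaining:
        i = int(rng.choice(list(remaining)))  # pick a real point
        out.append(sample_polar_laplace(points[i], h, rng))
        remaining[i] -= 1
        if remaining[i] == 0:
            del remaining[i]
    return np.array(out)
```

Because r and θ are drawn independently, each synthetic point is a noisy displacement of one real point, with displacement magnitude governed by h.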

ROAD NETWORK-AND GEOGRAPHY-AWARE DATA GENERATION
The methods presented thus far follow the common assumption that there is limited knowledge of the underlying geography. In many cases, however, more significant information is available both to the data owner and to the public. For example, for a dataset of vehicle trajectories, it is reasonable to assume that all points in the dataset will correspond to points on (or very close to) segments of a city's road network. Therefore, when generating points, one should ensure that all synthetic points are similarly aligned to road segments. We can also use outside knowledge to infer where individuals may be unlikely to be located (e.g., in seas, rivers, military bases). Importantly, enforcing these constraints does not use any information not already in the public domain, and can therefore be done without using any of the privacy budget. For example, the location of roads and boundaries of seas are available (often to a high level of detail) through a range of mapping platforms and government open data repositories.
Notation. Consider the graph G(V, E) that represents the road network. E and V represent the road segments and road intersections, respectively. For each individual location p ∈ P, there exists an edge e_p ∈ E that is the closest edge (distance-wise) to p. Two distance functions help us map p onto e_p (see Figure 1). The first, d⊥(p, e), gives the perpendicular distance from p to e. The projection of p onto e is denoted by proj(p, e). The second function, d∥(p, e), gives the distance along e between an endpoint of e and proj(p, e).
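The two distance functions reduce to standard point-to-segment geometry, sketched here for a straight edge between two nodes (names are ours):

```python
import numpy as np

def edge_distances(p, v1, v2):
    """Compute the two mapping distances for a point p and an edge
    from node v1 to node v2: the perpendicular distance from p to the
    segment, and the distance along the segment from v1 to the
    projection of p."""
    p, v1, v2 = map(np.asarray, (p, v1, v2))
    seg = v2 - v1
    # Clamp the projection parameter to [0, 1] so the projected
    # point stays on the segment.
    t = np.clip(np.dot(p - v1, seg) / np.dot(seg, seg), 0.0, 1.0)
    proj = v1 + t * seg
    perp_dist = np.linalg.norm(p - proj)  # d_perp(p, e)
    along_dist = t * np.linalg.norm(seg)  # d_along(p, e)
    return perp_dist, along_dist
```

For example, the point (0.5, 1.0) relative to the edge from (0, 0) to (2, 0) has perpendicular distance 1.0 and along-edge distance 0.5.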
Noise Addition. If the real data points are not perfectly aligned with the assumed road network, it is necessary to map-match them to edges in the graph (i.e., obtain e_p for all p ∈ P). For each edge e, we count the number of points for which that edge is the nearest, and denote it as n_e. We now use this count to determine the noisy number of points that will be generated along each edge by first adding Laplace noise to n_e. The privacy budget is represented as ε₁, using the composition property of DP (see Section 3.1). A simple approach would be to use these values as the noisy counts. However, this would result in a large amount of additional noise throughout the dataset, especially when a large proportion of edges have low/zero counts. Therefore, we reduce the influence of the noise by denoting this 'intermediate' count as n*_e, and performing a post-processing step to obtain n′_e = N · n*_e / N*, where N* = Σ_e n*_e is the sum of intermediate noisy counts for all edges. Furthermore, we set n′_e = 0 for all edges where n′_e ≤ T, where T is a threshold value. Imposing this threshold also reduces the impact of the added Laplace noise. DP is still satisfied as these are post-processing operations.
Determining the threshold value. The value of T can impact the quality of the synthetic data and may vary dynamically with ε₁ (as the magnitude of added noise depends on ε₁). The optimal value for T will balance the number of points added to edges where n_e = 0, and the number of points 'lost' for edges where n′_e ≤ T and n_e ≠ 0. However, trying to find this equilibrium directly requires knowing the true number of points on each edge, which would violate DP. To obtain a good approximation for T, we use the inverse cumulative distribution function of the Laplace distribution, defined as:

F⁻¹(q) = μ − (1/ε₁) · sgn(q − 0.5) · ln(1 − 2|q − 0.5|)    (12)

where F⁻¹(q) is the q-quantile of the Laplace distribution, μ is the mean of the distribution (i.e., n_e), and q is the value of the cumulative distribution function. The intuition is that setting q = 0.95 seeks to remove approximately 95% of the added noise, for example. When n_e = 0, then μ = 0, and F⁻¹(q) = 0 when q ≤ 0.5 (disbarring negative counts), so we only need the second term. Furthermore, when ε₁ is very small, the above term can be very large, which also causes adverse distortion to the dataset. Thus, we impose an upper limit on the value T can take, which we set to be 10. Hence, T is defined as:

T = min( 10, (1/ε₁) · ln(1 / (2(1 − q))) )    (13)

Experimentally, we find q = 0.9 (i.e., removing about 90% of the noise) to be satisfactory, so this is our default choice.

Synthetic Data Generation. To generate a synthetic point s along an edge, we must fix (i) the distance along e that s is, (ii) the perpendicular distance from e that s is, and (iii) the 'side' of the edge that s is on in relation to e. For (i), we could assign a distance at random from a uniform distribution. However, for very long roads, this could result in synthetic points being far from the real point locations, which would possibly reduce the synthetic data's utility. Instead, we summarize each edge with a micro-histogram. For each edge, we create a histogram (with b bins) using the values of d∥(·, e) and, to preserve privacy, we add noise (= Lap(1/ε₂)) to the count of each bin.
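The threshold computation and the edge-count post-processing can be sketched as follows (the cap of 10 and q = 0.9 follow the defaults discussed above; function names are ours):

```python
import numpy as np

def edge_threshold(eps1, q=0.9, cap=10.0):
    """Threshold T for suppressing noisy edge counts, from the inverse
    Laplace CDF with mean 0 and scale 1/eps1; for q > 0.5 only the
    second term of the inverse CDF survives. T is capped to avoid
    excessive distortion when eps1 is small."""
    t = (1.0 / eps1) * np.log(1.0 / (2.0 * (1.0 - q)))
    return min(cap, t)

def normalize_edge_counts(noisy, total, threshold):
    """Post-process intermediate noisy edge counts: rescale so they
    sum to the (public) total, then zero-out counts at or below the
    threshold. Both steps are DP post-processing."""
    noisy = np.asarray(noisy, dtype=float)
    scaled = total * noisy / noisy.sum()
    scaled[scaled <= threshold] = 0.0
    return scaled
```

With ε₁ = 1 and q = 0.9, the threshold is ln 5 ≈ 1.61, so rescaled counts at or below that value are suppressed.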
We sample from this noisy histogram to determine the bin in which s lies, and the exact value for d∥(s, e) is determined by sampling from a uniform distribution with bounds corresponding to the bounds of the histogram bin. We sample from the histogram n′_e times to generate the necessary values for d∥(s, e); note that e_s ≡ e_p. A pictorial example of this process is shown in Figure 2. For (ii), we use the same approach to determine the values of d⊥(s, e), with ε₃ as the privacy budget when adding noise to the histogram. When the values for d∥(s, e) and d⊥(s, e) are set, there are two possible locations for s. For (iii), we select between these two locations with equal probability to determine the final location of s. When n_e = 0, we define the range of histogram values such

Histogram bin choice. We now discuss how to choose b, which affects the downstream utility of the synthetic data. We aim to balance the amount of overall noise added to an edge with the location accuracy along an edge. For example, having a high number of bins will be beneficial for describing locations accurately, but will involve high noise addition, which will negatively affect the accuracy during the histogram sampling stage. The converse is true for low b.

Proof. Consider a road segment with n points that is divided into b histogram bins. Suppose we have a range count query that covers a proportion ρ of the road (i.e., ρb bins). The error in answering a range query has two components: privacy noise error and non-uniformity error. Knowing that the expected magnitude of the ε-DP noise error per bin is √2/ε, we know that the noise error will be proportional to ρb√2/ε. There are, on average, n/b points in each bucket. When answering a query that partially intersects a bin, we do not know whether points in a bin will be included in the query response (owing to non-uniformity of the data distribution), and this uncertainty is n_i, the number of points in the i-th bin.
Summing over queries touching each of the b bins, the total error, averaged per query, is given by:

E(b) = (√2 · qb)/ε + (1/b) Σᵢ nᵢ = (√2 · qb)/ε + n/b

Minimizing E(b) with respect to b gives b proportional to √(nε). In this proof, n corresponds to n′_e, the noisy count of the edge, and ε corresponds to either ε₂ or ε₃. The value for b can also be chosen empirically, and we find that setting

b = ⌈√(n′_e · ε)⌉

gives effective results, as demonstrated in the experiments.
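Putting the pieces together, a minimal sketch of the micro-histogram mechanism for along-edge distances might look as follows (all function and parameter names are ours; the bin-count heuristic b ≈ √(n′_e · ε) follows the preceding analysis, and the same routine would apply to perpendicular distances with budget ε₃):

```python
import math
import random

def sample_laplace(rng, scale):
    """Inverse-CDF sample from Laplace(0, scale)."""
    u = max(rng.random(), 1e-12)  # avoid log(0)
    if u < 0.5:
        return scale * math.log(2.0 * u)
    return -scale * math.log(2.0 * (1.0 - u))

def synthesize_offsets(offsets, edge_len, n_synth, eps2, n_bins, rng=random):
    """Sketch of the micro-histogram mechanism for one edge.

    offsets: real along-edge distances in [0, edge_len]; eps2: histogram
    budget; n_bins would be set to about sqrt(n'_e * eps2) per the analysis.
    """
    width = edge_len / n_bins
    counts = [0.0] * n_bins
    for d in offsets:
        counts[min(int(d / width), n_bins - 1)] += 1.0
    # Add Laplace(1/eps2) noise to each bin count; clamp negatives to zero.
    noisy = [max(c + sample_laplace(rng, 1.0 / eps2), 0.0) for c in counts]
    total = sum(noisy)
    if total == 0:
        noisy, total = [1.0] * n_bins, float(n_bins)  # fall back to uniform
    out = []
    for _ in range(n_synth):
        # Pick a bin with probability proportional to its noisy count,
        # then a uniform position within that bin.
        r, acc, chosen = rng.random() * total, 0.0, n_bins - 1
        for i, c in enumerate(noisy):
            acc += c
            if r <= acc:
                chosen = i
                break
        out.append((chosen + rng.random()) * width)
    return out
```

The synthetic offsets always fall within the edge's length, and their distribution follows the noisy histogram rather than a single uniform distribution over the whole edge.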

EXPERIMENTAL EVALUATION
In this section, we assess the accuracy and efficiency of our methods using the utility measures from Section 3.2. We also evaluate our synthetic datasets on a range of common location analytics tasks. We outline our experiments in Section 6.1, before comparing our synthetic data generation methods in Section 6.2. We then consider our application-focused queries: range and hotspot queries in Section 6.3, and facility location queries in Section 6.4. We finish the section with discussion and recommendations (Section 6.5).

Experiment Outline
Datasets. We generate synthetic data using real location data from three cities with different topographies and sizes, detailed in Table 1.
We extract only the longitude-latitude pairs of each record. Although taxi trajectory points are correlated, we consider each point to represent an independent individual in the dataset. We ignore any temporal information connected to the experiment data. We extract coastline data from OpenStreetMap and use this to define 'out-of-bounds' regions that represent major bodies of water, such as seas and rivers. We remove any points in the original data located in these out-of-bounds regions, and ensure that no synthetic points are created in these regions. The same technique can be used to add further geographical restrictions (e.g., forests, military bases) on the presence of real or synthetic individuals.
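For illustration, the out-of-bounds filtering step can be sketched without GIS dependencies using a standard ray-casting point-in-polygon test (function names are hypothetical; in practice, a library such as shapely would be used for robustness):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count edge crossings of a horizontal ray going right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_out_of_bounds(points, water_polygons):
    """Drop any point that falls inside an out-of-bounds region."""
    return [p for p in points
            if not any(point_in_polygon(p[0], p[1], poly)
                       for poly in water_polygons)]
```

The same predicate is applied twice: once to clean the real data and once to reject synthetic candidates that land in water.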
We extract the 'driveable' road network data as a graph from OpenStreetMap, using the osmnx Python package [3], with boundaries matching those detailed in Table 1. As pre-processing steps, we map-match each point in the cleaned datasets to the corresponding road network, remove any edges that are within the out-of-bounds areas, and calculate the values d∥(x, e) and d⊥(x, e) for each point. The final number of points and edges for each city is shown in Table 1. To examine the real-world suitability of our methods, we do not correct the map-matched data to enforce alignment with the road network; we discuss this further in Section 6.2.3.
Baselines. As discussed in Section 2, most existing work only publishes count data for grid cells/clusters, as opposed to generating coordinate data. Coordinate data can be generated within these partitions using simple uniform sampling (Section 4.2.1), and so we use these extensions of existing methods as baselines. We use the terms 'UGrid-Uni', 'AGrid-Uni', and 'Clust-Uni' to refer to the uniform-sampling extensions of the uniform grid, adaptive grid, and clustering-based partitioning methods, respectively.
Parameter Selection. For each dataset, we set the number of data points to n = 20|E|, where |E| is the number of edges. We do this so that the number of grid cells is approximately equal to the number of edges for the road network-based solution. (We refer to this method simply as 'Road'.) This allows for a fairer comparison between the methods, as the amount of added noise is more comparable. However, for clustering-based methods, having a number of clusters close to |E| would result in the regions exhibiting a grid-like structure, and so we set the number of clusters to 1,000. By default, ε = 1, but we evaluate the impact of varying ε and splitting the privacy budget in Section 6.2.2.
Utility Measures. We use the two measures detailed in Section 3.2: normalized cell error (NCE) and mean edge distance difference (MEDD). To calculate the NCE, we divide the entire region into a uniform grid where each individual grid cell has approximate real-life dimensions of 100m × 100m.

Figure 3 shows the visual similarity between the real and synthetic data. Although all methods preserve the underlying structure to some degree, we see that utilizing the geographical constraints explicitly in the SDG stage produces synthetic data with much stronger visual similarity to the real data than the partitioning-based methods achieve. Quantitatively, Table 2 shows the NCE and MEDD values for the four SDG methods, as well as the three approaches for generating synthetic data points within defined regions (Section 4.2), and the runtimes for each.
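As an illustration of the NCE computation (whose exact definition appears in Section 3.2 and is not reproduced in this excerpt), one plausible form buckets points into 100m × 100m cells and totals the per-cell count differences, normalized by the dataset size:

```python
def normalized_cell_error(real_pts, synth_pts, cell_size=100.0):
    """Illustrative NCE: bucket points into cell_size x cell_size cells
    (coordinates assumed to be in meters) and compare per-cell counts.
    This is a plausible form, not necessarily the paper's exact definition.
    """
    def counts(points):
        grid = {}
        for x, y in points:
            key = (int(x // cell_size), int(y // cell_size))
            grid[key] = grid.get(key, 0) + 1
        return grid

    real, synth = counts(real_pts), counts(synth_pts)
    cells = set(real) | set(synth)
    err = sum(abs(real.get(c, 0) - synth.get(c, 0)) for c in cells)
    return err / max(len(real_pts), 1)
```

A value of 0 indicates identical cell-level distributions; larger values indicate greater divergence.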

Summary.
Adopting KDE for grid-based partitioning methods improves data quality compared to extensions of existing methods. As KDE almost always outperforms WUD in accuracy terms, we adopt it as the default choice for data generation. AGrid performs similarly to UGrid, unless the city's network is more structured (e.g., New York), in which case it is markedly better. We note that it (generally) takes longer to run, which may make UGrid preferable. For clustering-based partitioning, KDE offers notable improvements compared to other approaches, although it fails to match the grid-based approaches in accuracy terms. This is primarily because larger regions lead to flatter kernels due to the requirements of DP (i.e., the regions' extents are larger, meaning the KDE bandwidth h is larger).
Using Road offers even greater improvements, as we observe improvements of up to 28x over the baselines (vs. Clust-Uni, MEDD, New York). Furthermore, Road is up to 3.9x faster than the baselines (vs. AGrid-Uni, Porto), and up to 37x faster than KDE approaches (vs. AGrid-KDE, New York). This highlights its suitability for generating large city-scale synthetic datasets of high utility.
In Porto and Beijing, where many points are not closely aligned with the road network and the road network is less ordered, grid-based approaches are generally superior in accuracy terms. In New York, however, the real data adheres more tightly to the road network, which means Road is much better at creating high-quality synthetic data, and it achieves better MEDD values.

Varying Parameters.
We also examine the effects that varying the key parameters have on the quality of the data. Owing to space limitations, we omit some corresponding plots.
Privacy Budget. Figure 4 shows the effect of changing ε on the NCE and runtime (for Porto, although other cities exhibit similar profiles). In terms of accuracy, all methods behave as expected: accuracy decreases as ε decreases, due to the increase in the amount of added noise. For low ε, runtime is higher for partitioning-based methods as it is more likely that generated points are 'out-of-bounds' or outside the boundaries of the data domain. Runtimes for grid-based methods increase as ε increases beyond 5, as the number of cells grows in proportion to ε (cf. Equations 5-7). The runtimes for Road are consistently low for all ε, which further highlights its general suitability.
Privacy Budget Distribution. To examine the effect of varying the distribution of ε, we consider the following apportionments. First, note that, for all UGrid methods, ε₂ = 0 as noise is only added once during the partitioning phase. Likewise, for data generation methods that do not use KDE, recall that ε₃ = 0. Hence, for UGrid methods that do not use KDE, ε₁ = ε. For UGrid methods with KDE-based data generation, we consider the following percentage splits between ε₁ and ε₃: 10-90; 20-80; 30-70; 40-60; 50-50; and their reverses. We find that, empirically, the best privacy budget split is ε₁ = 0.6ε and ε₃ = 0.4ε. This is intuitive as it achieves approximate balance in noise addition between the partitioning and data generation phases. For AGrid partitioning, we follow the guidance in Qardaji et al. [19] and set ε₁ = ε₂. For KDE-based generation with AGrid partitioning, we consider the following percentage splits: 12.5-12.5-75; 20-20-60; 25-25-50; 33-33-33; and 40-40-20. We find that ε₁ = ε₂ = 0.4ε and ε₃ = 0.2ε gives good results. For cluster-based partitioning without KDE, we find that ε₁ = 2ε₂ is the best setting. For cluster-based partitioning with KDE, we find that setting ε₃ = 0.25ε is best, which leaves ε₁ = 0.25ε and ε₂ = 0.5ε. As noted previously, cluster-based partitioning generally leads to flatter kernels as regions tend to be larger, and so a slightly higher ε₃ value helps to keep the bandwidth h at a value that prevents the kernel from becoming too flat. For Road, equal division of ε balances the noise added to edge counts with the noise added to the micro-histograms. We use these allocations as the default settings throughout.

Number of Clusters. When Clust-KDE is used, NCE values decrease as the number of initial clusters increases. This is intuitive as regions are smaller, which allows the kernel density estimate to be better tailored to the characteristics of each region.

Real World Considerations.
We next evaluate how well our methods model characteristics of real world data, which is often messy and can exhibit high non-uniformity or skew.
Road Network Alignment. For Road, we assume that data points are well-aligned with the underlying road network. However, this is not always the case with real datasets, and there can be high error when map-matching raw data points to edges in the road network. This may be due to GPS sampling errors, map projection errors, and multi-lane roads being modeled as single lines of zero width.
Whereas we use 'uncorrected' data in the main experiments, we now perform experiments where we use the map-matched data as the input datasets (i.e., d⊥(x, e) = 0 for every point). In this new setting, we find that Road is far superior to the other methods, which perform up to 18% worse. Hence, when the data is corrected, Road is up to 10%, 10%, and 120% more accurate than UGrid-KDE, AGrid-KDE, and Clust-KDE, respectively.

Uneven Population Densities. Population density in cities is rarely uniform, either across an area or along individual roads. In urban centers, point density may be somewhat uniform along edges, while rural and suburban areas may experience more varied densities. To examine how our methods are affected by uneven densities, we create a dataset focused on a larger area of Beijing, which includes more suburban areas. We set the expanded bounds of the studied region to the bounding box between (116.33, 39.97) and (116.48, 39.85). In the new road network, |E| = 13,862 and so we set n = 20|E| = 277,240. We find that both UGrid-KDE and Road are relatively robust, but AGrid-KDE and Clust-KDE perform worse.

Range and Hotspot Queries
6.3.1 Range Queries. Range queries are important in location analytics to quickly assess how many customers are potentially available to a business, measure accessibility to key services within a certain time, etc. To assess this, we specify a set, L, of 100 arbitrary locations in each city (selected from the set of nodes in each city's road network), and specify a circular region defined by a radius, r. To quantify the error, we use the mean absolute error (MAE), in which real_ℓ and synth_ℓ respectively denote the number of real and synthetic points within r meters of location ℓ:

MAE = (1/|L|) · Σ_{ℓ∈L} |real_ℓ − synth_ℓ|

Figure 5 shows how the radius of the range query influences the error, for each method and city. For small r, all partitioning-based methods outperform their respective baselines. Interestingly, although Clust-KDE is generally less competitive, it performs better in the less-ordered Porto. Road is a viable alternative when r is small, although, as r increases, its error increases rapidly. Likewise, AGrid methods perform notably worse for large r values. However, when one considers the error in relation to the dataset size, as well as the proportion of the query range to the entire dataset domain, this behavior is acceptable. Despite this, UGrid methods offer strong alternatives, depending on the degree of road network alignment.
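A sketch of this evaluation, assuming planar coordinates in meters (i.e., a projected CRS), might be:

```python
import math

def count_within(points, center, r):
    """Number of points within Euclidean distance r of center."""
    cx, cy = center
    return sum(1 for x, y in points if math.hypot(x - cx, y - cy) <= r)

def range_query_mae(real_pts, synth_pts, locations, r):
    """MAE over query locations L: mean |real_l - synth_l| within radius r."""
    errs = [abs(count_within(real_pts, l, r) - count_within(synth_pts, l, r))
            for l in locations]
    return sum(errs) / len(errs)
```

For large point sets, a spatial index (e.g., a k-d tree) would replace the linear scan, but the metric itself is unchanged.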
6.3.2 Hotspot Queries. Hotspot queries are also fundamental in location analytics for businesses to identify popular regions for advertising, for city agencies to help manage congestion and traffic flow, etc. Here, we obtain kernel density estimates for the real and synthetic datasets at varying granularities. We use a Gaussian kernel over a g × g uniform grid, where g denotes the grid granularity; we use granularities g ∈ {2⁶, 2⁷, 2⁸, 2⁹, 2¹⁰}. Note that our kernel function can be non-private (i.e., the kernel is tuned to the data) here as we are simply assessing the utility of the output data. We define hotspots to be locations with a density greater than the 95th percentile. To assess query response similarity between the two datasets, we use the Sørensen-Dice coefficient (SDC), defined as:

SDC = 2|H_real ∩ H_synth| / (|H_real| + |H_synth|)

where H_real and H_synth denote the sets of hotspots for the real and synthetic data, respectively. Figure 6 shows that similarity decreases as granularity increases, as the kernel density estimates are more sensitive to small changes in the location of individual points. All partitioning-based methods outperform their respective baselines, and Road performs especially well when the original data is well-aligned with the road network (e.g., New York, Figure 6c). However, Road performs less well with dense road networks or poorly aligned data (e.g., Porto, Figure 6b). Conversely, grid-based methods perform better in less-structured environments, but perform worse when data is well-aligned with the roads.
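The SDC computation, together with percentile-based hotspot extraction, can be sketched as follows (the density input is assumed to be a mapping from grid cell to estimated density; names are illustrative):

```python
def hotspots(density, q=0.95):
    """Cells whose density exceeds the q-th percentile.

    density: dict mapping a grid cell identifier to its density estimate.
    """
    vals = sorted(density.values())
    thresh = vals[int(q * (len(vals) - 1))]
    return {cell for cell, v in density.items() if v > thresh}

def sorensen_dice(h_real, h_synth):
    """Sorensen-Dice coefficient between two hotspot sets."""
    h_real, h_synth = set(h_real), set(h_synth)
    if not h_real and not h_synth:
        return 1.0
    return 2 * len(h_real & h_synth) / (len(h_real) + len(h_synth))
```

An SDC of 1 means the synthetic data identifies exactly the same hotspots as the real data; 0 means no overlap.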

Facility Location Queries
Facility location is a common analytics task for which individual location data is necessary, and it is one possible application for our methods. Given a set F of existing facilities and a set C of candidate facilities, a facility location query (FLQ) aims to find the k best locations that satisfy a stated objective function. We consider the two most common FLQ variants. In the Max-Inf case, we seek to identify the k most influential candidate facilities, where influence is commonly defined as the total number of customers that the facilities attract. In the Min-Dist case, we find the k facilities that minimize the total distance between customers and the facilities.
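To make the two objectives concrete, a simple sketch follows (a naive greedy formulation under the assumption that each customer patronizes its nearest facility; this is illustrative, not necessarily the exact formulation used in our experiments):

```python
import math

def nearest(point, facilities):
    """Index of the facility closest to point."""
    return min(range(len(facilities)),
               key=lambda i: math.dist(point, facilities[i]))

def max_inf(customers, candidates, k):
    """Max-Inf: pick the k candidates attracting the most customers."""
    counts = [0] * len(candidates)
    for c in customers:
        counts[nearest(c, candidates)] += 1
    order = sorted(range(len(candidates)), key=lambda i: -counts[i])
    return set(order[:k])

def min_dist(customers, candidates, k):
    """Min-Dist (greedy): iteratively add the candidate that most reduces
    the total customer-to-nearest-facility distance."""
    chosen, remaining = [], list(range(len(candidates)))
    while len(chosen) < k and remaining:
        def cost(sel):
            pts = [candidates[i] for i in sel]
            return sum(min(math.dist(c, p) for p in pts) for c in customers)
        best = min(remaining, key=lambda i: cost(chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return set(chosen)
```

Running both on real and synthetic customer sets, and comparing the selected facility sets, is exactly the evaluation performed below.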
6.4.1 Outline. Consider the case where a food stand company wishes to locate a number of outlets in the center of Beijing. We intuit that a lot of business would be generated if the outlets were located at the intersections of busy roads, and so we use the location set, L, from Section 6.3.1, where each ℓ ∈ L now represents a candidate facility. For the real datasets, we use those described earlier in this section and, for the synthetic datasets, we use those generated under the default conditions. We assume that there are no existing facilities currently in the city (i.e., F = Ø). We also define B_real and B_synth to be the sets of selected facilities when the real and synthetic datasets are used, respectively. We use the SDC to assess the accuracy of FLQs when using synthetic data; in this setting, the SDC captures the extent to which the synthetic data identifies the same top-k facilities as the real data. We use B_real and B_synth in place of H_real and H_synth from Equation 15.

Table 3 shows the SDC values (for k = 20) for both FLQs. We see that, irrespective of the data generation method, both variants of FLQs are answered almost identically compared to when the real data is used. This is because the optimal locations are quite robust to the noise added to achieve DP. The SDC values indicate that at least 19 of the 'true' top 20 candidate facilities are selected when using the synthetic data, which further highlights its suitability for answering FLQs. We also explore the effect that changing k has on the SDC values. Our methods are robust and perform equally well for all values of k. In particular, they produce optimal results for the Min-Dist FLQ for all values of k.

Results.
There may be some cases in which using synthetic data does not obtain similar results for FLQs. For example, when candidate facilities are close to one another, customers may be assigned to different facilities if their locations are perturbed even slightly. Another example arises in the capacitated facility location problem (when capacity constraints are strict), where 'additional' customers generated through additive noise cannot be accommodated at their nearest facility. However, overall, our methods generate synthetic data that exhibits high levels of accuracy for FLQs compared to using real data. In practice, this means that researchers and companies do not need to use real data for facility location. Instead, private synthetic data can be used without compromising the accuracy of the facility location analysis.

Discussion
While both partitioning-based and road network-based approaches are effective in practice, different methods are more appropriate in different circumstances. We summarize our findings here. All methods scale well in accuracy terms. In particular, Road accommodates large datasets easily, and its error decreases with input size. Hence, Road should be the default data generation method, especially when the raw data is well-aligned with the road network. Where road network data is unavailable, or the data is poorly aligned with the road network, partitioning-based approaches should be considered. UGrid-KDE and AGrid-KDE are generally comparable, although AGrid methods are particularly strong in more structured environments. For very large datasets, the difference in runtime costs between clustering- and grid-based methods is larger (cf. Equation 5 and Figure 4b: n and ε have similar effects on runtime), and so clustering-based methods should be considered in this case.
For facility location analytics tasks, all methods perform very well and can be recommended as general-purpose solutions. For range queries, all methods are highly effective, especially when the range query radius is small. If the range query radius is large, UGrid approaches are recommended (with consideration of the degree of network alignment). For hotspot queries, we advise using Road for datasets that are well-aligned with the road network, which is the case for most applications. UGrid-KDE and AGrid-KDE are more effective when the datasets are less well-aligned, or when the road network is less well-structured.

CONCLUSION
In this paper, we introduced novel approaches for generating synthetic location data that satisfy the requirements of ε-differential privacy. The proposed methods ensure that the generated data preserves the underlying characteristics of the real data, while ensuring that the existence and location of every individual remains private. An extensive series of experiments confirms that the generated synthetic data has a high degree of similarity to the real data upon which it is based. We achieve further practical utility by incorporating public knowledge, such as road networks, coastlines, and rivers, within our methods. We have also applied our data generation methods to a range of location analytics queries and shown that the synthetic data obtains excellent results compared to those obtained with real data. These strong results pave the way for everyday practical use of differential privacy in the real world.