Consistent Manifold Representation for Topological Data Analysis

For data sampled from an arbitrary density on a manifold embedded in Euclidean space, the Continuous k-Nearest Neighbors (CkNN) graph construction is introduced. It is shown that CkNN is geometrically consistent in the sense that under certain conditions, the unnormalized graph Laplacian converges to the Laplace-Beltrami operator, spectrally as well as pointwise. It is proved for compact (and conjectured for noncompact) manifolds that CkNN is the unique unweighted construction that yields a geometry consistent with the connected components of the underlying manifold in the limit of large data. Thus CkNN produces a single graph that captures all topological features simultaneously, in contrast to persistent homology, which represents each homology generator at a separate scale. As applications we derive a new fast clustering algorithm and a method to identify patterns in natural images topologically. Finally, we conjecture that CkNN is topologically consistent, meaning that the homology of the Vietoris-Rips complex (implied by the graph Laplacian) converges to the homology of the underlying manifold (implied by the Laplace-de Rham operators) in the limit of large data.


Introduction
Building a discrete representation of a manifold from a finite data set is a fundamental problem in machine learning. Particular interest pertains to the case where a set of data points in a possibly high-dimensional Euclidean space is assumed to lie on a relatively low-dimensional manifold. The field of topological data analysis (TDA) concerns the extraction of topological invariants such as homology from discrete measurements.
Currently, there are two major methodologies for representing manifolds from data sets. One approach is an outgrowth of Kernel PCA [27], using graphs with weighted edges formed by localized kernels to produce an operator that converges to the Laplace-de Rham operator of the manifold. These methods include versions of diffusion maps [1,8,3,2] that reconstruct the geometry of the manifold with respect to a desired metric. Convergence of the weighted graph to the Laplace-de Rham operator in the large data limit is called consistency of the graph construction. Unfortunately, while such constructions implicitly contain all topological information about the manifold, it is not yet clear how to use a weighted graph to build a simplicial complex from which simple information like the Betti numbers can be extracted.
A second approach, known as persistent homology [4,11,13], produces a series of unweighted graphs that reconstruct the topology one scale at a time, tracking homology generators as a scale parameter is varied. The great advantage of an unweighted graph is that the connection between the graph and a simplicial complex is immediate, since the Vietoris-Rips construction can be used to build an abstract simplicial complex from the graph. However, the persistent homology approach customarily creates a family of graphs, none of which is guaranteed to contain all topological information. A consistent theory is not possible in this setting, since there is no single unified homology in the large data limit.
In this article we propose replacing persistent homology with consistent homology in data analysis applications. In other words, our goal is to show that it is possible to construct a single unweighted graph from which the underlying manifold's topological information can be extracted. We introduce a specific graph construction from a set of data points, called continuous k-nearest neighbors (CkNN), that achieves this goal for any compact Riemannian manifold. Theorem 1 states that the CkNN is the unique unweighted graph construction for which the (unnormalized) graph Laplacian converges spectrally to a Laplace-de Rham operator on the manifold in the large data limit. Furthermore, this is true even when the manifold is not compact, with some mild added conditions. The proof of consistency for the CkNN graph construction is carried out in Appendix A for both weighted and unweighted graphs. There we complete the theory of graphs constructed from variable bandwidth kernels, computing for the first time the bias and variance of both pointwise and spectral estimators. This analysis reveals the surprising fact that the optimal bandwidth for spectral estimation is significantly smaller than the optimal choice for pointwise estimation (see Fig. A.12). This is crucial because existing statistical estimates [28,2] imply very different parameter choices that are not optimal for the spectral convergence desired in most applications. Moreover, requiring the spectral variance to be finite allows us to specify which geometries are accessible on non-compact manifolds. Finally, combining our statistical estimates with the theory of [34], we provide the first proofs of spectral convergence for graph Laplacians on non-compact manifolds. Details on the relationship of our new results to previous work are given in Appendix A.
As mentioned above, the reason for focusing on unweighted graphs is their relative simplicity for topological investigation. There are many weighted graph constructions that converge to the Laplace-de Rham operator with respect to various geometries [1,8,3,2]. Although these methods are very powerful, they are not convenient for extracting topological information. For example, to determine the zero homology from a weighted graph requires numerically estimating the dimension of the zero eigenspace of the graph Laplacian. This has two drawbacks: first, for an unweighted graph one can instead apply the depth-first search algorithm to determine the zero homology, which is significantly more efficient than the eigensolver approach; second, determining the number of zero eigenvalues requires setting a nuisance parameter as a numerical threshold. For higher-order homology generators, the problem is even worse, as weighted graphs require the construction of higher-order Laplace-de Rham operators on differential forms. In contrast, the unweighted graph construction allows the manifold to be studied using topological data analysis methods that are based on simplicial homology (e.g. computed from the Vietoris-Rips complex).
The practical advantages of the CkNN are: (1) a single graph representation of the manifold that captures topological features at multiple scales simultaneously (see Fig. 5) and (2) identification of the correct topology even for non-compact manifolds (see Fig. 6). In CkNN, the length parameter is eliminated, and replaced with a unitless scale parameter δ. Our consistent homology in terms of δ uses the same efficient computational homology algorithms as conventional persistent homology. Of course, for some applications, the unitless parameter δ may be a disadvantage; for example, if the distance scale of a particular feature is explicitly sought. However, in most cases, we feel this potential disadvantage is outweighed by the increased efficiency and accuracy for determination of the homology generators. Finally, for a fixed data set, the consistent homology approach requires choosing the parameter δ (which determines the CkNN graph) and we re-interpret the classical persistence diagram as a tool for selecting δ.
We introduce the CkNN in Section 2, and demonstrate its advantages in topological data analysis by considering a simple but illustrative example. In Section 3 we show that consistent spectral estimation of the Laplace-de Rham operator is the key to consistent estimation of topological features. In Section 4, these results are used to show that the CkNN is the unique unweighted graph construction which yields a consistent geometry and topology via a Laplace-de Rham operator. We give several examples that demonstrate the consistency of the CkNN construction in Section 5, including a fast and consistent clustering algorithm that allows more general sampling densities than existing theories. We conclude in Section 6 by discussing the relationship of CkNN to classical persistence. Theoretical results are given in Appendix A. In this article, we focus on applications to TDA, but the theoretical results will be of independent interest to those studying the geometry as well as topology of data.

Continuous scaling for unweighted graphs
We begin by describing the CkNN graph construction and comparing it to other approaches. Then we discuss the main issues of this article as applied to a simple example of data points arranged into three rectangles with nonuniform sampling.

Continuous k-Nearest Neighbors
Our goal is to create an unweighted, undirected graph from a point set with interpoint distances given by a metric d. Since the data points naturally form the vertices of a graph representation, for each pair of points we only need to decide whether or not to connect these points with an edge. There are two standard approaches for constructing the graph:
1. Fixed ε-balls: For a fixed ε, connect the points x, y if d(x, y) < ε.
2. k-Nearest Neighbors (kNN): For a fixed integer k, connect the points x, y if either d(x, y) ≤ d(x, x_k) or d(x, y) ≤ d(y, y_k), where x_k, y_k are the k-th nearest neighbors of x, y respectively.
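The two standard constructions can be sketched in a few lines; this is a minimal illustration with our own function names, not code from the paper:

```python
import numpy as np

def epsilon_ball_graph(X, eps):
    """Adjacency matrix of the fixed eps-ball graph: connect x, y when d(x, y) < eps."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = (D < eps) & ~np.eye(len(X), dtype=bool)  # no self-loops
    return A

def knn_graph(X, k):
    """Standard kNN ('OR' rule): connect x, y when d(x, y) <= d(x, x_k)
    or d(x, y) <= d(y, y_k)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # distance to the k-th nearest neighbor (index 0 is the point itself)
    kth = np.sort(D, axis=1)[:, k]
    A = (D <= kth[:, None]) | (D <= kth[None, :])
    np.fill_diagonal(A, False)
    return A
```

On four collinear points {0, 1, 2, 10}, the ε-ball graph at ε = 1.5 leaves the outlier isolated, while kNN with k = 1 bridges it to the rest, already hinting at the density-sensitivity discussed next.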
The fixed ε-balls choice works best when the data is uniformly distributed on the manifold, whereas the kNN approach adapts to the local sampling density of points. However, we will see that even when answering the simplest topological questions, both standard approaches have severe limitations. For example, when clustering a data set into connected components, they may underconnect one part of the graph and overestimate the number of components, while overconnecting another part of the graph and bridging parts of the data set that should not be connected. Despite these drawbacks, the simplicity of these two graph constructions has led to their widespread use in manifold learning and topological data analysis methods [4]. Our main point is that a less discrete version of kNN sidesteps these problems, and can be proved to lead to a consistent theory in the large data limit. Define the Continuous k-Nearest Neighbors (CkNN) graph construction by: connect the points x, y if

d(x, y) < δ √( d(x, x_k) d(y, y_k) ),

where the parameter δ is allowed to vary continuously. Of course, the discrete nature of the (finite) data set implies that the graph will change at only finitely many values of δ. The continuous parameter δ has two uses. First, it allows asymptotic analysis of the graph Laplacian in terms of δ, where we interpret the CkNN graph construction as a kernel method. Second, it allows the parameter k to be fixed for each data set, which allows us to interpret d(x, x_k) as a local density estimate.
The CkNN construction is closely related to the "self-tuning" kernel introduced in [35] for the purposes of spectral clustering, which was defined as

K(x, y) = exp( −||x − y||² / ( d(x, x_k) d(y, y_k) ) ).     (1)

The kernel (1) leads to a weighted graph, but replacing the exponential kernel with the indicator function and introducing the continuous parameter δ yields the CkNN unweighted graph construction

K(x, y) = 1_{ ||x−y|| < δ √( d(x, x_k) d(y, y_k) ) }.     (2)

The limiting operator of the graph Laplacian based on the kernels (1) and (2) was first analyzed pointwise in [32,2]. In Appendix A we provide the first complete analysis of the spectral convergence of these graph Laplacians, along with the bias and variance of the spectral estimates. The CkNN is an instance of a broader class of multi-scale graph constructions: connect the points x, y if

d(x, y) < δ √( ρ(x) ρ(y) ),     (3)

where ρ(x) defines the local scaling near the point x. In Section 4 we will show that ρ(x) = q(x)^{−1/m} is the unique choice of scaling that yields a consistent limiting geometry, where q(x) is the sampling density and m is the intrinsic dimension of the data. Note that if we are given a data set, neither q(x) nor m may be known beforehand. Fortunately, for points on a manifold embedded in Euclidean space, the kNN itself provides a very simple density estimator, which for k sufficiently small approximately satisfies

d(x, x_k) ∝ q(x)^{−1/m} (k/N)^{1/m},     (4)

where x_k is the k-th nearest neighbor of x and m is the dimension of the underlying manifold [19]. Although more sophisticated kernel density estimators could be used (see for example [31]), a significant advantage of (4) is that it implicitly incorporates the exponent −1/m without the need to explicitly estimate the intrinsic dimension m of the underlying manifold.
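The CkNN rule together with its kNN-based scale estimate ρ(x) = d(x, x_k) fits in a few lines of Python; a minimal sketch (our own function, not from the paper):

```python
import numpy as np

def cknn_graph(X, k, delta):
    """CkNN: connect x, y when d(x, y) < delta * sqrt(d(x, x_k) d(y, y_k)),
    using rho(x) = d(x, x_k) as the local, density-driven scale."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.sort(D, axis=1)[:, k]   # distance to the k-th nearest neighbor
    A = D < delta * np.sqrt(rho[:, None] * rho[None, :])
    np.fill_diagonal(A, False)       # no self-loops
    return A
```

On a dense cluster {0, 0.1, 0.2} plus a sparse pair {5, 7} in one dimension, k = 1 and δ = 1.5 connect the sparse pair without bridging the gap to the dense cluster, which is exactly the behavior the fixed-ε and kNN graphs fail to achieve.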
In the next section, we demonstrate the advantages of the CkNN on a simple example before turning to the theory of consistent topological estimation.

Example: Representing non-uniform data
In this section we start with a very simple example where both the fixed ε-ball and simple kNN graph constructions fail to identify the correct topology. Fig. 1 shows a simple "cartoon" of a non-uniformity that is common in real data, and reveals the weakness of standard graph constructions. All the data points lie in one of the three rectangular connected components outlined in Fig. 1(a). The left and middle components are densely sampled and the right component is more sparsely sampled. Consider the radius ε indicated by the circles around the data points in Fig. 1(a). This radius is too small for the connectivity of the sparse component to be realized (its points are not connected to any other points in that component), but at the same time is too large to distinguish the two densely sampled components. A graph built by connecting all points within the radius ε, shown in Fig. 1(b), would find many spurious components in the sparse region while simultaneously improperly connecting the two dense components. We are left with a serious failure: the graph cannot be tuned, with any fixed ε, to identify the "correct" three boxes as components.
The kNN approach to local scaling is to replace the fixed-ε approach with the establishment of edges between each point and its k nearest neighbors. While this is an improvement, in Fig. 2 we show that it still fails to recover the topology even for the very simple data set considered in Fig. 1. Notice that in Fig. 2(a) the graph built based on the nearest neighbor (k = 1) leaves all regions disconnected, while using two nearest neighbors (k = 2) incorrectly bridges the sparse region with a dense region, as shown in Fig. 2(b). Of course, using kNN with k > 2 will have the same problem as k = 2. Fig. 2 shows that simple kNN is not a panacea for nonuniformly sampled data.
Finally, we demonstrate the CkNN graph construction in Fig. 3. An edge is added between points x and y when d(x, y) < δ √( d(x, x_k) d(y, y_k) ), where x_k (resp., y_k) denotes the k-th nearest neighbor of x (resp., y). The coloring in Fig. 3(a) exhibits the varying density between boxes. Assigning edges according to CkNN with k = 10 and δ = 1.15, shown in Fig. 3(b), yields an unweighted graph whose connected components reflect the manifold in the large data limit. Theorem 1 guarantees the existence of such a δ that yields an unweighted CkNN graph with the correct topology. Although we have focused on the connected components of the point set, the CkNN graph in Fig. 3 fully triangulates all regions, which implies that the 1-homology is correctly identified as trivial. Clearly, the graph constructions in Figures 1 and 2 are very far from identifying the correct 1-homology.
To complete our analysis of the three-box example, we compare CkNN to a further alternative. Two crucial features of the CkNN graph construction are (1) symmetry in x and y, which implies an undirected graph construction, and (2) introduction of the continuous parameter δ, which allows k to be fixed so that ρ(x) = ||x − x_k|| is an estimator of q(x)^{−1/m}. There are many alternative ways of combining the local scaling function ρ(x) with the continuous parameter δ. Our detailed theoretical analysis in Appendix A shows that the geometric average δ √( ρ(x) ρ(y) ) is consistent, but it does not discount all alternatives.
For example, we briefly consider the much less common 'AND' construction for kNN, where points are connected when d(x, y) ≤ min{d(x, x_k), d(y, y_k)} (as opposed to standard kNN, which uses the max). Intuitively, the advantage of the 'AND' construction is that it will not incorrectly connect dense regions to sparse regions, because it takes the smaller of the two kNN distances. However, on a non-compact domain, shown in Fig. 4, this construction does not identify the correct homology, whereas the CkNN does. We conclude that the 'AND' version of kNN is not generally superior to CkNN. Moreover, our analysis in Appendix A does not apply to it, because the max and min functions are not differentiable, making their analysis more difficult than the geometric average used by CkNN.
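The three rules differ only in how they combine the two local scales d(x, x_k) and d(y, y_k); a toy comparison (our own illustration, with hypothetical scale values):

```python
import numpy as np

def edge_rule(d_xy, dx_k, dy_k, rule, delta=1.0):
    """Decide whether to connect x and y from the distance d_xy and the
    local kNN scales dx_k = d(x, x_k), dy_k = d(y, y_k)."""
    if rule == "or":      # standard kNN: the larger local scale decides
        return d_xy <= max(dx_k, dy_k)
    if rule == "and":     # 'AND' kNN: the smaller local scale decides
        return d_xy <= min(dx_k, dy_k)
    if rule == "cknn":    # CkNN: geometric mean of the two scales
        return d_xy < delta * np.sqrt(dx_k * dy_k)
    raise ValueError(rule)
```

With a dense point (local scale 1) next to a sparse point (local scale 10), the 'OR' rule bridges them at distance 4, while the 'AND' rule and CkNN (δ = 1) do not; CkNN still connects them at the shorter distance 2, interpolating smoothly between the two extremes.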

Multiscale Homology
In Section 4 we will see that the geometry represented by the CkNN graph construction captures the true topology with relatively little data by implicitly choosing a geometry adapted to the sampling measure. The CkNN construction yields a natural multi-scale geometry, which is very smooth in regions of low density and can have finer features in regions of dense sampling. Since all geometries yield the same topology, this multi-scale geometry is a natural choice for studying the topology of the underlying manifold, and this advantage is magnified for small data sets. In Fig. 5 we demonstrate the effect of this geometry on the persistent homology for a small data set with multiple scales. Following that, in Fig. 6 we show how the CkNN graph construction can capture the homology even for a non-compact manifold.
Example 1. To form the data set in Fig. 5 we sampled 60 uniformly random points on a large annulus in the plane centered at (−1, 0) with radii in [2/3, 1] and another 60 uniformly random points on a much smaller annulus centered at (1/5, 0) with radii in [1/5, 3/10]. Together, these 120 points form a "figure eight" with a sparsely sampled large hole and a densely sampled small hole, as shown in Fig. 5(b),(d). We then used the JavaPlex package [30] to compute the H_1 persistence diagram for the VR complex based on the standard ε-ball graph construction, shown in Fig. 5(a). Note that two major generators of H_1 are found along with some "topological noise", and the generators do not exist for a common ε.
Since JavaPlex can build the VR complex persistence diagram for any distance matrix, we could easily compute the CkNN persistence diagram, shown in Fig. 5.
Example 2. The CkNN construction has a marked advantage over the fixed-ε construction for non-compact data. To form the data set in Fig. 6(a) we sampled 150 points from a 2-dimensional Gaussian distribution and then removed the points with radius in [1/4, 3/4], leaving 120 points lying on two connected components with a single noncontractible hole. In this case the standard ε-ball persistence does not even capture the correct 0-homology for the manifold, due to the decreasing density near the outlying points, as shown in Fig. 6(c)-(d). Furthermore, since the true manifold is non-compact, there is no reason to expect the ε-ball construction to converge to the correct topology even in the limit of large data. In fact, as the amount of data is increased, the outlying points will become increasingly spaced out, leading to worse performance for the fixed ε-ball construction, even for the H_0 homology. In contrast, the CkNN construction is able to capture all the correct topological features for a large range of δ values, as shown in Fig. 6(e)-(f).
In Sections 3 and 4 we prove that the CkNN is the unique graph construction that provides a consistent representation of the true topology of the underlying manifold in the limit of large data.

Manifold topology from graph topology
In this section we delineate our notion of graph consistency. Assume that the vertices of the graph are data points that lie on a manifold embedded in Euclidean space. Our goal is to access the true topology of the underlying manifold using an abstract simplicial complex on the finite data set. The Vietoris-Rips (VR) complex is constructed inductively: first a triangle is added whenever all three of its edges are in the graph, and then a higher-order simplex is added whenever all the faces of the simplex are included. We say that a graph construction on these vertices is topologically consistent if the homology computed from the VR complex converges to the homology of the underlying manifold in the limit of large data.
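Because the graph determines the whole VR complex, its low-order skeleton can be read off directly from the adjacency matrix; a minimal sketch of the 2-skeleton (our own helper, not an API of any TDA package):

```python
import numpy as np
from itertools import combinations

def vr_two_skeleton(A):
    """2-skeleton of the Vietoris-Rips complex of an unweighted graph:
    the graph's edges, plus a triangle for every 3-clique."""
    n = len(A)
    edges = [(i, j) for i, j in combinations(range(n), 2) if A[i, j]]
    triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                 if A[i, j] and A[j, k] and A[i, k]]
    return edges, triangles
```

For a 4-cycle with one chord, the chord splits the square into the two triangles {0,1,2} and {0,2,3}, so the 1-homology of the filled complex is trivial, in line with the triangulation remark about Fig. 3.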
Topological consistency has been shown directly for the ε-ball graph construction on compact manifolds without boundary [18]. This result is analogous to a result of [8], which shows that the graph Laplacian associated to the ε-ball graph construction converges to the Laplace-de Rham operator on compact manifolds when the data points are sampled uniformly. In fact, [8] proves pointwise convergence on compact manifolds for any smooth sampling density, but their construction requires a weighted graph in general. In the special case of uniform sampling, their result reduces to the unweighted ε-ball graph construction (using the kernel K_ε(x, y) = 1_{||x−y||<ε}). In Appendix A, combined with a result of [34], we prove spectral convergence of this graph Laplacian to the Laplace-de Rham operator, which we refer to as spectral consistency of the graph Laplacian. This is the bottom arrow in the following diagram:

    VR homology of graph  ------->  homology of manifold
           ^                               ^
           |                               |
    graph Laplacian L_un  ------->  Laplace-de Rham operator

The left vertical arrow corresponds to the fact that the graph Laplacian L_un = D − W can trivially be used to reconstruct the entire graph, since the off-diagonal entries are simply the negatives of the adjacency matrix entries. Since the graph determines the entire VR complex, the graph Laplacian completely determines the VR homology of the graph. This connection is explicitly spectral for the zero homology, since the zero-homology of a graph corresponds exactly to the zero-eigenspace of the graph Laplacian [33].
The right vertical arrow follows from the fact that from the Laplace-de Rham operator, it is possible to analytically reconstruct the metric on a Riemannian manifold, which completely determines the homology of the manifold, via the higher-order Laplace-de Rham operators [26]. In perfect analogy to the discrete case, the zero-homology of the manifold corresponds to the zero-eigenspace of the Laplace-de Rham operator. In other words, just as the discrete Laplacian completely determines the graph VR homology, the Laplace-de Rham operator completely determines the manifold homology. By establishing spectral convergence of the graph Laplacian to the Laplace-de Rham operator we complete the connection between the graph homology and the limiting manifold homology (the top horizontal arrow). This is topological consistency.
The goal of this paper is to provide a general unweighted graph construction that is spectrally consistent (and therefore topologically consistent) for any smooth sampling density. Additionally, we wish to avoid the restriction to compact manifolds. Using more complicated weighted graph constructions, recent results show that for a large class of non-compact manifolds [14] with smooth sampling densities that are allowed to be arbitrarily close to zero [2], the graph Laplacian can converge pointwise to the Laplace-de Rham operator. These results require weighted graph constructions due to several normalizations which are meant to remove the influence of the sampling density on the limiting operator. In Appendix A we analyze a construction similar to [2], but without using any normalizations. We extend the results of [2] to include spectral convergence using [34].
In the next section we summarize the results of Appendix A and use them to show that the continuous k-nearest neighbors (CkNN) construction is the unique unweighted graph construction that yields a consistent unweighted graph Laplacian for any smooth sampling density on manifolds of the class defined in [14], including many non-compact manifolds.

The unique consistent unweighted graph construction
Consider a data set {x_i}_{i=1}^N of independent samples from a probability distribution q(x) that is supported on an m-dimensional manifold M embedded in Euclidean space. For a smooth function ρ(x) on M, we will consider the multi-scale graph construction, where two data points x_i and x_j are connected by an edge if

||x_i − x_j|| < δ √( ρ(x_i) ρ(x_j) ).     (5)

This construction leads to the N × N weight matrix W whose ij-th entry is 1 if x_i and x_j share an edge, and 0 otherwise. Let D be the diagonal matrix of row sums of W, and define the "unnormalized" graph Laplacian L_un = D − W. In Appendix A we show that for an appropriate factor c depending only on δ and N, in the limit of large data, c^{−1} L_un converges both pointwise and spectrally to the operator defined by

L_{q,ρ} f ≡ q ρ^{m+2} ( ∆f − ∇ log(q² ρ^{m+2}) · ∇f ),     (6)

where ∆ is the positive definite Laplace-de Rham operator and ∇ is the gradient, both with respect to the Riemannian metric g inherited from the ambient space. (Note that (6) can also be expressed in terms of the Laplace-Beltrami operator, which differs from the Laplace-de Rham operator on functions only by sign.) In fact, pointwise convergence follows from a theorem of [32], and the pointwise bias and a high probability estimate of the variance were first computed in [2]. Both of these results hold for a larger class of kernels than we consider here. Although the operator in (6) appears complicated, we now show that it is essentially a Laplace-de Rham operator on the same manifold M, but with respect to a different metric. A conformal change of metric corresponds to a new Riemannian metric g̃ ≡ ϕg, where ϕ(x) > 0, which has Laplace-de Rham operator

∆_g̃ f = ϕ^{−1} ( ∆f − ((m−2)/2) ∇ log ϕ · ∇f ).     (7)

For expressions (6) and (7) to match, the function ϕ must satisfy

ϕ^{−1} = q ρ^{m+2}  and  ϕ^{(m−2)/2} = q² ρ^{m+2}.     (8)

Eliminating ϕ from the two equations results in q^{−(m+2)} = ρ^{m(m+2)}, which implies ρ ≡ q^{−1/m} as the only choice that makes the operator (6) equal to a Laplace-de Rham operator ∆_g̃ (up to the constant c). The new metric is g̃ = q^{2/m} g. Computing the volume form of the new metric g̃ we find

dṼ = ϕ^{m/2} dV = q dV,

which is precisely the sampling measure.
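The exponent algebra behind ρ = q^{−1/m} can be checked symbolically. The sketch below is our own verification, assuming the matching conditions ϕ^{−1} = qρ^{m+2} and ϕ^{(m−2)/2} = q²ρ^{m+2} read off from (6) and (7); writing q = e^a, ρ = e^b, ϕ = e^c turns them into linear equations in the exponents:

```python
import sympy as sp

# Matching conditions in log variables:
#   phi^(-1)      = q   * rho^(m+2)  ->  -c          = a  + (m+2) b
#   phi^((m-2)/2) = q^2 * rho^(m+2)  ->  (m-2)/2 * c = 2a + (m+2) b
a, b, c, m = sp.symbols('a b c m')
eqs = [sp.Eq(-c, a + (m + 2) * b),
       sp.Eq((m - 2) * c / 2, 2 * a + (m + 2) * b)]
sol = sp.solve(eqs, [b, c], dict=True)[0]
assert sp.simplify(sol[b] + a / m) == 0      # b = -a/m, i.e. rho = q^(-1/m)
assert sp.simplify(sol[c] - 2 * a / m) == 0  # c = 2a/m, i.e. g~ = q^(2/m) g
```

The unique solution reproduces both the bandwidth ρ = q^{−1/m} and the conformal factor ϕ = q^{2/m} stated in the text.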
Moreover, the volume form dṼ is exactly consistent with the discrete inner product

⟨f, g⟩_N ≡ (1/N) Σ_{i=1}^N f(x_i) g(x_i) → ∫_M f g q dV = ∫_M f g dṼ.

This consistency is crucial since the eigenvectors of L_un are the minimizers of the functional

Λ(f) = (fᵀ L_un f) / (fᵀ f) → c ∫_M ||∇f||²_g̃ dṼ / ∫_M f² dṼ,

where c is a constant (see Theorem 2 in the Appendix). If the Hilbert space norm implied by fᵀ f were not the volume form of the Riemannian metric g̃, then the eigenvectors of L_un would not minimize the correct functional in the limit of large data. This shows why it is important that the Hilbert space norm is consistent with the Laplace-de Rham operator estimated by L_un. Another advantage of the geometry g̃ concerns the spectral convergence of L_un to L_{q,ρ} shown in Theorem 6 in Appendix A, which requires the spectrum to be discrete. Assuming a smooth boundary, the Laplace-de Rham operator on any manifold with finite volume will have a discrete spectrum [7]. This ensures that spectral convergence always holds for the Riemannian metric g̃ = q^{2/m} g, since the volume form is dṼ = q dV, and therefore the volume of the manifold is exactly

vol_g̃(M) = ∫_M dṼ = ∫_M q dV = 1,

since q is a probability density. Since all geometries on a manifold have the same topology, this shows once again that the metric g̃ is extremely natural for topological investigations: spectral convergence is guaranteed, and spectral convergence is crucial to determining the homology.
The fact that ρ = q −1/m is the unique solution to (8) along with the spectral consistency implies the following result.
Theorem 1 (Unique consistent geometry). Consider data sampled from a Riemannian manifold, not necessarily compact, having either a smooth boundary or no boundary. Among unweighted graph constructions (5), ρ = q −1/m is the unique choice which yields a consistent geometry in the sense that the unnormalized graph Laplacian converges spectrally to a Laplace-de Rham operator. Thus CkNN yields a consistent topology in the limit of large data.
This theorem has practical ramifications since (as shown in the previous section) a consistent graph construction will have the same limiting topology as the underlying manifold. In Examples 4 and 5 we will empirically illustrate the consistency of the CkNN choice ρ = q −1/m as well as the failure of alternative constructions.
The unnormalized graph Laplacian is not the only graph Laplacian. With the same notation as above, the "normalized", or "random-walk", graph Laplacian is often defined as L_rw = I − D^{−1}W = D^{−1}L_un, and has the limiting operator

c^{−1} L_rw → ρ² ( ∆f − ∇ log(q² ρ^{m+2}) · ∇f )

(see for example [32,2]; the constant c is different from the unnormalized case). Note that again ρ = q^{−1/m} is the unique choice leading to a Laplace-de Rham operator. This choice implies that to leading order Df ≈ qρ^m f = f (up to the constant c), so the corresponding Hilbert space has norm fᵀ D f → ⟨f, f⟩ = ∫_M f² dṼ as N → ∞. This implies spectral consistency, since the eigenvalue problem for L_rw is exactly the generalized eigenvalue problem L_un f = λ D f, whose solutions minimize the same limiting functional with respect to the dṼ inner product. Therefore, for the choice ρ = q^{−1/m}, both the unnormalized and normalized graph Laplacians are consistent with the same underlying Laplace-de Rham operator with respect to the metric g̃. The consistency of graph Laplacian constructions in terms of spectral convergence to operators was explored in a very general context in [34]. Their goal was to justify spectral clustering algorithms that use the first nontrivial eigenfunction of the graph Laplacian to segment data sets. In Appendix A we apply the theory of [34] to show spectral convergence of kernel-based graph Laplacians to Laplace-de Rham operators for a large class of manifolds and geometries. A significant restriction of the theory of [34] is that the desired eigenvalue must be isolated from the essential spectrum, which includes the range of the degree function. This limitation appears to suggest superior convergence for the normalized Laplacian, where the range of the degree function is always a single value. In Appendix A we show that the range of the degree function is {∞} for all the kernel-based constructions of [1,8,14,2,3], including the unnormalized graph Laplacians used here. This removes a significant barrier to spectral convergence for a large class of graph Laplacian constructions, and for this class the normalized and unnormalized graph Laplacians have the same spectral convergence properties.
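The relation between the two discrete spectra can be checked numerically on any small graph; a self-contained sketch (our own illustration):

```python
import numpy as np
from scipy.linalg import eig, eigh

# Small unweighted graph: a 6-cycle with one chord.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]:
    W[i, j] = W[j, i] = 1.0
D = np.diag(W.sum(axis=1))
L_un = D - W                      # unnormalized Laplacian
L_rw = np.linalg.solve(D, L_un)   # normalized (random-walk) Laplacian

# The spectrum of L_rw coincides with the generalized spectrum of
# (L_un, D): the two Laplacians discretize the same operator, differing
# only in which inner product (f^T f versus f^T D f) is used.
ev_rw = np.sort(eig(L_rw)[0].real)
ev_gen = np.sort(eigh(L_un, D)[0])
assert np.allclose(ev_rw, ev_gen)
```

The zero eigenvalue is simple here because the graph is connected, matching the correspondence between the zero-eigenspace and the zero-homology.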
Finally, we show that (assuming a smooth boundary or no boundary) the geometry implied by the CkNN graph construction has a Laplace-de Rham operator with a discrete spectrum, even when the manifold is not compact. These results allow us to make a broad statement of spectral convergence, especially for the special geometry implied by the CkNN graph construction.
We emphasize that we are not using either graph Laplacian directly for computations. Instead, we are using the convergence of the graph Laplacian to show convergence of the graph topology to that of the underlying manifold topology. Since this consistency holds for an unweighted graph construction, we can make use of more computationally efficient methods to find the topology of the graph, such as depth-first search to compute the zero-level homology. A wider class of geometries are accessible via weighted graph constructions (see for example [8,2,3]), but fast algorithms for analyzing graph topology only apply to unweighted graphs.

Further applications to Topological Data Analysis
The fundamental idea of extracting topology from a point cloud by building unweighted graphs relies on determining what is considered an edge in the graph as a function of a parameter, and then considering the graph as a VR complex. In the ε-ball, kNN and CkNN procedures, edges are added as a parameter is increased, from no edges for extremely small values of the parameter to full connectivity for sufficiently large values. From this point of view, the procedures differ mainly by the order in which the edges are added.
Classical persistence orders the addition of possible edges by ||x − y||, whereas the CkNN orders the edges by ||x − y|| / √( ||x − x_k|| ||y − y_k|| ). More generally, a multi-scale graph construction with bandwidth function ρ(x) orders the edges by ||x − y|| / √( ρ(x) ρ(y) ). Our claim is that CkNN gives an order that allows graph consistency to be proved. In addition, we have seen in Figs. 5 and 6 that the CkNN ordering is more efficient. In this section we show further examples illustrating this fact. We will quantify the persistence or stability of a feature by the percentage of edges (out of the total N(N − 1)/2 possible in the given ordering) for which the feature persists. This measure is an objective way to compare different orderings of the possible edges.
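The two orderings can be computed side by side; a minimal sketch (our own helper, not from the paper):

```python
import numpy as np
from itertools import combinations

def edge_order(X, k=None):
    """Order all N(N-1)/2 possible edges either by raw distance
    (classical persistence, k=None) or by the CkNN ratio
    d(x, y) / sqrt(d(x, x_k) d(y, y_k))."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    if k is None:
        score = D                                          # classical ordering
    else:
        rho = np.sort(D, axis=1)[:, k]
        score = D / np.sqrt(rho[:, None] * rho[None, :])   # CkNN ordering
    pairs = list(combinations(range(len(X)), 2))
    return sorted(pairs, key=lambda e: score[e])
```

On the dense-plus-sparse example {0, 0.1, 0.2, 5, 7}, the CkNN ordering promotes the sparse edge (3,4) ahead of any bridge between the regions, whereas classical ordering ranks it purely by its raw length.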
The consistent homology approach differs from the persistent homology approach by using a single graph construction to simultaneously represent all the topological features of the underlying manifold. This requires selecting the parameter δ which determines the CkNN graph. The asymptotically optimal choice of δ in terms of the number of data points is derived in Appendix A; however, the constants depend on the geometry of the unknown manifold. There are many existing methods of tuning δ for learning the geometry of data [10,2]. As a practical method of tuning δ for topological data analysis, we can use the classical persistence diagram to find the longest range of δ values over which all of the homological generators are unchanged. In a sense we are using the classical persistence diagram in reverse, looking for a single value of δ where all the homology classes are stable. In Examples 4 and 5 below we validate this approach by showing that the percentage of edges which capture the true homology is greatest when using the ordering defined by ρ = q^{−1/m}, which is equivalent to the CkNN.
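The "persistence diagram in reverse" selection just described can be sketched as a scan for the longest constant run of Betti numbers over a grid of δ values (a hypothetical helper; the Betti numbers are assumed to be precomputed, e.g. by JavaPlex):

```python
def most_persistent_delta_range(deltas, betti):
    """Return (delta_lo, delta_hi, betti_value) for the longest interval
    of the (sorted) deltas over which the Betti numbers stay constant."""
    best = (deltas[0], deltas[0], betti[0])
    i = 0
    while i < len(betti):
        j = i
        while j + 1 < len(betti) and betti[j + 1] == betti[i]:
            j += 1                      # extend the constant run
        if deltas[j] - deltas[i] > best[1] - best[0]:
            best = (deltas[i], deltas[j], betti[i])
        i = j + 1
    return best
```

Any δ inside the returned interval can then be used to build the single CkNN graph.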

A fast graph-based clustering algorithm
We consider the problem of identifying the connected components of a manifold from a data set, using the connected components of the CkNN graph construction. While clustering connected components is generally less difficult than segmenting a connected domain, outliers can easily confuse many clustering algorithms. Many rigorous methods, including any results based on existing kernel methods [20,35,34,22], require the sampling density to be bounded away from zero; in effect, these rigorous clustering results require the underlying manifold to be compact. A common work-around for this problem is to estimate the density of the data points and then remove points of low density. However, this leaves the removed points unclustered [6,5,29,21]. We have shown that the CkNN method is applicable to a wide class of non-compact manifolds, and in particular the connected components of the CkNN graph will converge to the connected components of the underlying manifold.
Here we use the CkNN to put an ordering on the potential edges of the graph. While the full persistent homology of a large data set can be computationally very expensive, the 0-homology is easily accessible using fast graph-theoretic algorithms. First, the connected components of a graph can be quickly identified by the depth-first search algorithm. Second, unlike the other homology classes, the 0-homology is monotonic: as edges are added, the number of connected components can only decrease or stay the same. This monotonicity allows us to identify the entire 0-homology δ-sequence by finding only the transitions, meaning the numbers of edges at which the 0-th Betti number changes. We can quickly identify these transitions using a binary search algorithm, as outlined below. When the goal is to find all of the transition points, Algorithm 1 can easily be improved by storing all the numbers of clusters from previous computations and using these to find the best available left and right endpoints for the binary search.

Example 3. In Fig. 7, we illustrate the use of Algorithm 1 on a point set consisting of the union of three spiral-shaped subsets with nonuniform sampling. In fact, the density of points falls off exponentially in the radial direction. Fig. 7(a) shows the original set, and panel (b) shows the number of components as a function of the proportion of edges. When the number of edges is between one and two percent of the possible pairs of points, the persistence diagram in (b) detects three components, shown in (c) along with the edges needed. A three-dimensional version of three spiral-shaped subsets is depicted in Fig. 7(d)-(f), with similar results.
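A minimal sketch of the idea behind this procedure (our own illustration; the paper's actual Algorithm 1 listing is not reproduced here) combines union-find component counting with a binary search over the number of edges in a fixed ordering:

```python
def components_after(n, sorted_edges, num_edges):
    """Number of connected components after inserting the first num_edges
    edges of the given ordering (union-find with path halving); the count
    is monotone nonincreasing in num_edges."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    count = n
    for u, v in sorted_edges[:num_edges]:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            count -= 1
    return count

def first_edge_count_reaching(n, sorted_edges, target):
    """Binary search for the smallest number of edges at which the graph
    has at most `target` components (valid because b_0 is monotone)."""
    lo, hi = 0, len(sorted_edges)
    while lo < hi:
        mid = (lo + hi) // 2
        if components_after(n, sorted_edges, mid) <= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

# three well-separated 1D clusters, with edges ordered by Euclidean length
xs = [0, 0.1, 0.2, 5, 5.1, 5.2, 10, 10.1, 10.2]
pairs = [(i, j) for i in range(9) for j in range(i + 1, 9)]
sorted_edges = sorted(pairs, key=lambda e: abs(xs[e[0]] - xs[e[1]]))
```

Each call to components_after is near-linear in the number of edges inserted, and the binary search needs only a logarithmic number of such calls per transition.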
In the next two examples, we illustrate the theoretical result that the choice of β = −1/m in the bandwidth function ρ = q^β is optimal, where m is the dimension of the data.
Example 4. We begin with the zero-order homology. We will demonstrate empirically that the choice β = −1/m maximizes the persistence of the correct clustering, for 1 ≤ m ≤ 4. Consider data sampled from an m-dimensional Gaussian distribution with a gap of radial width w = 0.1^{1/m} centered at radius w + 3m/10. (The dependence on the dimension m is necessary to ensure that there are two connected components for small data sets.) The radial gap separates R^m into two connected components: the compact interior m-ball, and the non-compact shell extending to infinity with density decaying exponentially to zero.
Given a data set sampled from this density, we can construct a graph using the multi-scale graph construction, which connects two points x, y whenever ||x − y|| < δ√(ρ(x)ρ(y)). Since the true density is known, we consider the bandwidth functions ρ = q^β for β ∈ [−3/2, −1/8]. For each value of β we used the fast clustering algorithm to identify the minimum and maximum numbers of edges which would identify the correct clusters. We measured the persistence of the correct clustering as the difference between the minimum and maximum numbers of edges which identified the correct clusters, divided by the total number of possible edges N(N − 1)/2. We then repeated this experiment for 500 random samples of the distribution and averaged the persistence of the correct clustering for each value of β; the results are shown in Fig. 8. Notice that for each dimension m = 1, ..., 4 the persistence has a distinctive peak centered near β = −1/m, which indicates that the true clustering is most persistent using the multi-scale graph construction that is equivalent to the CkNN.

[Figure 8: Persistence of the clustering of a cut-Gaussian distribution in dimensions (a) 1, (b) 2, (c) 3, (d) 4, using the multi-scale graph construction with ρ = q^β. Persistence is measured as a percentage of the total number of possible edges in the graph, N(N − 1)/2, as a function of β. Each data point is an average over 500 data sets; each data set starts with sufficiently many points sampled from an m-dimensional Gaussian so that after points in the gap region are rejected, N points remain. The correct homology (two connected components) is most persistent when β is near −1/m, implying the optimal multi-scale graph construction is the CkNN construction, where ρ ∝ q^{−1/m}.]
Example 5. Next, we examine the discovery of the full homology in a 1-dimensional and a 2-dimensional example using JavaPlex [30]. To obtain a one-dimensional example with two connected components, we generated a set of points from a standard Gaussian on the t-axis, removed the points with 0.4 < t < 0.8, and then mapped the remaining points into the plane via t ↦ (t³ − t, 1/(t² + 1)). The embedding induces a loop, and so there is non-trivial 1-homology in the 1-dimensional example. The correct homology for this example has Betti numbers β_0 = 2 and β_1 = 1, which is exactly the same as the true homology of the 2-dimensional cut Gaussian from Fig. 8, which is our 2-dimensional example. In Fig. 9 we show the persistence of the correct homology, in terms of the percentage of edges, as a function of the parameter β that defines the multi-scale graph construction. As with the clustering example, the results clearly show that the correct homology is most persistent when β is near −1/m, which corresponds to the CkNN graph construction.

Identifying patterns in images with homology
In this section we consider the identification of periodic patterns or textures from image data. We take a topological approach to the problem, and attempt to classify the orbifold (the quotient of the plane by the group of symmetries) by its topological signature. Note that to achieve this, we will not need to learn the symmetry group; instead we directly analyze the orbifold by processing the point cloud of small s × s pixel subimages of the complete image, considered as points in R^{s²}, without regard to the original locations of the subimages.

Example 6. In Fig. 10(a) we show four simple patterns that can be distinguished by homology. To make the problem more difficult, the patterns are corrupted by a 'brightness' gradient which makes identifying the correct homology difficult. From left to right the patterns are: first, stripes, which have a single periodicity, so that β_1 = 1; second, a pattern that is periodic in both the vertical and horizontal directions, implying β_1 = 2; third, a checkerboard pattern, which also has only two periodicities, β_1 = 2, but with different periods than the previous pattern; fourth, a hexagonal pattern, which has three periodicities, so that β_1 = 3. To see the three periodicities in the fourth pattern, notice that the pattern repeats when moving right two blocks, or down three blocks, or right one block and down two blocks; each of these periodicities yields a distinct homology class.
To identify the pattern in each image, we cut each full image into 9-by-9 sub-images, yielding 121 points in R^{81}. In Fig. 10(b) we show the results of applying the fixed ε-ball graph construction to each of the four sets of sub-images. In order to choose ε, we used JavaPlex [30] to compute the persistent homology and then chose the region of ε with the longest persistence (meaning the region of ε where the homology went the longest without changing). In the title of each plot we show the first two Betti numbers for the graph constructed with this value of ε, along with the length of the persistence in terms of the percentage of edges for which the homology is unchanged. In Fig. 10(c) we repeated this experiment using the CkNN construction, choosing δ from the region with the longest unchanging homology. In this set of examples, the CkNN construction is more efficient, and finds the correct orbifold homology. When the 'brightness' gradient is removed, both the fixed ε-ball and CkNN constructions identify the correct homology in the most persistent region. However, the 'brightness' gradient means that the patterns do not exactly meet (see the leftmost panels in Figs. 10(b,c)). For the simple stripe pattern, the ε-ball construction can still bridge the gap and identify the correct homology; however, for the more complex patterns, the ε-ball construction finds many spurious homology classes which obscure the correct homology.
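The subimage point cloud itself is easy to construct. The sketch below uses a hypothetical 99 × 99 synthetic stripe image, so that non-overlapping 9 × 9 tiles give 121 points in R^{81} as in the text; the image content and size are our own illustrative choices:

```python
import numpy as np

def tile_points(image, s):
    """Cut an image into non-overlapping s-by-s tiles and flatten each
    tile into a point in R^(s*s)."""
    h, w = image.shape
    assert h % s == 0 and w % s == 0
    tiles = image.reshape(h // s, s, w // s, s).swapaxes(1, 2)
    return tiles.reshape(-1, s * s)

# hypothetical vertically striped test image, period 9 in the j direction
img = np.fromfunction(lambda i, j: np.sin(2 * np.pi * j / 9), (99, 99))
pts = tile_points(img, 9)        # 121 points in R^81
```

The graph constructions of the text are then applied to `pts` exactly as to any other point cloud.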
Example 7. We applied the CkNN graph construction to identify patterns in real images of zebra stripes and fish scales in Fig. 11. The images shown in Fig. 11 were taken from larger images [12,17]. In order to analyze the subimage spaces, we first decimated the images to reduce the resolution (by a factor of 2 for the stripes and a factor of 25 for the scales), in each case yielding a 40 × 40 pixel image. We then formed the set of all 23-pixel by 23-pixel subimages, shifting by two pixels in the vertical and horizontal directions, to obtain 136 subimages, considered as points in R^{529}. We built the rescaled distance matrix ||x − y|| / √(||x − x_k|| ||y − y_k||) using k = 5. In Fig. 11(b,e) we show the persistent homology of the VR complex associated to the respective distance matrices in terms of the parameter δ of the CkNN. Using this diagram, we chose the maximal interval of δ for which the homology was stable. In Fig. 11(c,f) we show the CkNN graph for δ chosen from this region, along with the correct Betti numbers.

Conclusion
We have introduced a new method, called continuous k-nearest neighbors, as a way to construct a single graph from a point cloud that is provably topologically consistent, so that the correct topology can be extracted in the large data limit. For many finite-data examples from compact Riemannian manifolds, we have shown that CkNN compares favorably to persistent homology approaches. Moreover, noncompact manifolds and nonuniform sampling are handled seamlessly.
The proposed method replaces a small radius, or the parameter k in the k-nearest neighbors method, with a unitless continuous parameter δ. The theory proves the existence of a correct choice of δ, but it must still be tuned in specific examples. While the difference between the CkNN and the kNN constructions is fairly simple, the crucial difference is that the approximation (4) only holds for k small relative to N. By varying the parameter δ and holding k constant at a small value, we can construct multi-scale approximations of our manifold that are still consistent with the underlying manifold. This contrasts with the standard kNN approach, where the parameter k is varied and both the coarseness of the manifold approximation and the underlying manifold geometry change simultaneously (because the scaling function is changing).
Surprisingly, a careful analysis of the bias and variance of the graph Laplacian as a spectral estimator of the Laplace-de Rham operator (found in the Appendix) is a key element of the proof of consistency. The variance can be infinite on non-compact manifolds, depending on the geometry, creating a previously unknown barrier to spectral convergence. The variance calculation also allows us to explain why the relative error of an eigenvalue estimate increases with the eigenvalue, and we determine the part of the spectrum that can be estimated with constant relative error, as a function of the data size N. These results lend further support to the choice of CkNN as a graph construction, since in this case spectral convergence holds for any Riemannian manifold, compact or non-compact, with smooth boundary or no boundary, and for any smooth sampling density.
We would like to thank D. Giannakis for helpful conversations. This research was partially supported by NSF grants DMS-1216568, DMS-1250936, and CMMI-130007.

Appendix A. Convergence of Graph Laplacians
We approach the problem of consistent graph representations of manifolds by assuming we have data points which are sampled from a smooth probability distribution defined on the manifold. We view this assumption as establishing a "geometric prior" for the problem. Our main goal is to approximate the Laplace-de Rham operator on the manifold, independent of the sampling distribution, and using as little data as possible. This is a natural extension of ideas developed by [1] and Coifman and collaborators [8,25,9,24,23].
The proof of consistency for any graph construction has three parts, two statistical and one analytic: (1) showing that the (discrete) graph Laplacian is a consistent statistical estimator of a (continuous) integral operator (either pointwise or spectrally), (2) showing that this estimator has finite variance, and (3) an asymptotic expansion of the integral operator that reveals the Laplace-de Rham operator as the leading order term. The theory of convergence of kernel weighted graph Laplacians to their continuous counterparts (Laplace-Beltrami and Laplace-de Rham operators) was initiated with [1], which proved parts (1) and (3) for uniform sampling on compact manifolds; part (2) was later completed in [28]. Parts (1) and (3) were then extended to non-uniform sampling in [8], and part (2) was completed in [2]. The extension to noncompact manifolds was similarly divided. First, [14] provided the proof of parts (1) and (3), and introduced the additional geometric assumptions required for the asymptotic analysis on non-compact manifolds. However, [2] showed that the pointwise errors on non-compact manifolds could be unbounded, so additional restrictions had to be imposed on the kernel used to construct the graph Laplacian. In order to construct the desired operators on non-compact manifolds, [2] showed that variable bandwidth kernels were required (part (1) for variable bandwidth kernels was previously achieved in [32]). In all of this previous work, parts (1) and (2) are always proven pointwise, despite the fact that most applications require spectral convergence.
The geometric prior assumes that the set of points that have positive sampling density is a smooth manifold M ≡ {x ∈ R n : q(x) > 0} where q is a smooth sampling density. Some weak assumptions on the manifold M are required. The theory of [1,8] assumes that the manifold is compact, implying that the density q must be bounded away from zero on M. The theory of [14,15,16] showed that this assumption could be relaxed, and together with the statistical analysis in [2] allows a large class of noncompact manifolds with densities that are not bounded below. In this article we require M to have injectivity radius bounded below and curvature bounded above; these technical assumptions hold for all compact manifolds and were introduced to allow application to noncompact manifolds in [14].
There are several algorithms for estimating the Laplace-de Rham operator; however, a particularly powerful construction in [2] is currently the only estimator which allows the sampling density q to be arbitrarily close to zero. Since we are interested in addressing the problem of non-uniform sampling, we will apply the result of [2] for variable bandwidth kernels. However, the method of [2] used a special weighted graph Laplacian in order to approximate the Laplace-de Rham operator. The weighted graph Laplacian uses special normalizations, first introduced in the diffusion maps algorithm [8], in order to reverse the effect of the sampling density. The goal of this paper is to use an unweighted graph Laplacian to approximate the Laplace-de Rham operator, since this allows us to compute the topology of the graph using fast combinatorial algorithms. In fact, the unweighted graph Laplacian will converge to the Laplace-de Rham operator of the embedded manifold in the special case of uniform sampling. The power of our new graph construction is that we can recover a Laplace-de Rham operator for the manifold from an unweighted graph Laplacian, even when the sampling is not uniform.
Let q(x) represent the sampling density of a data set {x_i}_{i=1}^N ⊂ M. We combine a global scaling parameter δ with a local scaling function ρ(x) to define the combined bandwidth δρ(x). Then, for any shape function h : [0, ∞) → [0, ∞) with exponential decay, consider the symmetric variable bandwidth kernel

K_δ(x, y) = h( ||x − y||² / (δ² ρ(x)ρ(y)) ).   (A.1)

For a function f and any point x ∈ M we can form a Monte-Carlo estimate of the integral operator

G_δ f(x) ≡ ∫_M K_δ(x, y) f(y) q(y) dV(y) ≈ (1/N) Σ_{j=1}^N K_δ(x, x_j) f(x_j).   (A.2)

From [2], the expression in (A.2) has the asymptotic expansion

G_δ f = δ^m ( m_0 ρ^m f q + δ² (m_2/2) ρ^{m+2} ( ω f q − ∆(f q) ) ) + O(δ^{m+4}),   (A.3)

where ∆ is the positive definite Laplace-de Rham operator, and ∇ is the gradient operator, both with respect to the Riemannian metric that M inherits from the ambient space R^n. (Note that in [2] the expansion (A.3) is written with respect to the negative definite Laplace-Beltrami operator, which is the negative of the Laplace-de Rham operator.) In the right-hand side of (A.3), all functions are evaluated at x. The function ω = ω(x) depends on the shape function h and the curvature of the manifold at x.

Appendix A.1. Pointwise and integrated bias and variance
The standard graph Laplacian construction starts with a symmetric affinity matrix W whose ij entry quantifies similarity between nodes x_i and x_j. We assume in the following that W_{ij} = K_δ(x_i, x_j) from (A.1), which includes the CkNN construction as a special case. Define the diagonal normalization matrix D as the row sums of W, i.e. D_{ii} = Σ_j W_{ij}. The unnormalized graph Laplacian matrix is then

L_un = D − W.   (A.4)

We interpret this matrix as the discrete analog of an operator. Given a function f, by forming a vector f_j = f(x_j), the matrix-vector product is

(L_un f)_i = Σ_j W_{ij} ( f(x_i) − f(x_j) ),   (A.5)

so that L_un f is also a vector that represents a function on the manifold. Theorem 2 below makes this connection rigorous by showing that for an appropriate factor c that depends on N and δ, the vector c^{−1}(L_un f)_i is a consistent statistical estimator of the differential operator

L_{q,ρ} f ≡ ρ^{m+2} ( ∆(f q) − f ∆q ) = ρ^{m+2} ( q ∆f − 2 ∇f · ∇q ),   (A.6)

where all functions are evaluated at the point x_i. The last equality in (A.6) follows from applying the product rules for the positive definite Laplacian, ∆(f g) = f ∆g + g ∆f − 2∇f · ∇g, and the gradient, ∇(f g) = f ∇g + g ∇f. Theorem 2 shows that L_un is a pointwise consistent statistical estimator of a differential operator. In fact, consistency was first shown in [32], and the bias of this estimator was first computed in [2]. Here we include the variance of this estimator as well.
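For concreteness, here is a sketch of the construction in the indicator-kernel special case, with an arbitrary bandwidth function passed in (illustrative code only; the edge condition ||x_i − x_j||² < δ² ρ(x_i)ρ(x_j) is our reading of the indicator kernel applied to (A.1)):

```python
import numpy as np

def unnormalized_laplacian(X, delta, rho):
    """Unnormalized graph Laplacian L = D - W for the multi-scale graph
    connecting x_i, x_j when ||x_i - x_j||^2 < delta^2 rho(x_i) rho(x_j),
    i.e. the indicator-kernel special case of (A.1)."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    r = rho(X)
    W = (D2 < delta ** 2 * np.outer(r, r)).astype(float)
    np.fill_diagonal(W, 0.0)          # no self-edges: the i = j term vanishes
    return np.diag(W.sum(axis=1)) - W
```

By construction L is symmetric, annihilates the constant vector, and is positive semidefinite, which are the discrete counterparts of the basic properties of the positive definite Laplace-de Rham operator.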
Theorem 2 (Pointwise bias and variance of L_un). Let M be an m-dimensional Riemannian manifold embedded in R^n, let q : M → R be a smooth density function, and define L_un as in (A.4). Then, with c ≡ (m_2/2)(N − 1)δ^{m+2},

E[ c^{−1}(L_un f)_i ] = L_{q,ρ} f(x_i) + O(δ²),   (A.7)

and the variance of c^{−1}(L_un f)_i is of leading order a δ^{−m−2}/(N − 1), where the constant a is determined by the kernel moments m_2 and m_{2,2} (A.8).

Before proving Theorem 2, we comment on the constants such as m_0, m_2 and m_{2,2}, which depend on the shape of the kernel function. It was originally shown in [8] that the expansion (A.3) holds for any kernel with exponential decay at infinity. A common kernel used in geometric applications is the Gaussian h(||z||²) = exp(−||z||²/2), due to its smoothness. For topological applications, we are interested in building an unnormalized graph, whose construction corresponds to a kernel defined by the indicator function 1_{||z||² < 1}. The sharp cutoff enforces the fact that each pair of points is either connected or not. (The indicator kernel satisfies (A.3) since it has compact support and thus has exponential decay.) In the table below we give formulas for all the constants which appear in the results, along with their values for the Gaussian and indicator functions (note that B_m is the unit ball in R^m).

[Table: the kernel constants (m_0, m_2, m_{2,2}, a) with their defining formulas and their values for the Gaussian and indicator kernels.]

It turns out that the variances of the statistical estimators considered below are all proportional to the constant a. As a function of the intrinsic dimension m, the constant a decreases exponentially for the Gaussian kernel. On the other hand, since the volume of the unit ball decays like m^{−m/2−1/2} for large m, the constant a increases exponentially for the indicator kernel.
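The dimensional behavior of the indicator-kernel constants traces back to the volume of the unit ball, which is easy to check numerically (a standalone illustration, not part of the paper's computations):

```python
import math

def ball_volume(m):
    """Volume of the unit ball B_m in R^m: pi^(m/2) / Gamma(m/2 + 1),
    which grows up to m = 5 and then decays super-exponentially."""
    return math.pi ** (m / 2) / math.gamma(m / 2 + 1)
```

Since the moments of the indicator kernel are integrals over B_m, constants such as a that scale inversely with these moments grow rapidly with the intrinsic dimension m, consistent with the discussion above.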
Proof of Theorem 2. Notice that the i = j term is zero and can be left out of the summation; this allows us to consider the expectation of L_un f only over the terms with i ≠ j, which are identically distributed. From (A.3), the i-th entry of the vector L_un f has expected value E[(L_un f)_i] = (N − 1) E[ W_{ij}(f(x_i) − f(x_j)) ] = c L_{q,ρ} f(x_i) + O(c δ²), where by (A.6) we have L_{q,ρ} f = ρ^{m+2}(∆(fq) − f ∆q). Dividing by the constant c yields (A.7), which shows that L_un is a pointwise consistent estimator with bias of order δ². We can also compute the variance of this estimator, var[c^{−1}(L_un f)_i] = c^{−2}(N − 1) var[ W_{ij}(f(x_i) − f(x_j)) ] plus covariance terms; the covariance terms are higher order, so we summarize them as O(N^{−1}, δ²). To compute the remaining term, note that W_{ij}² is also a local kernel, with moments m_{0,2} and m_{2,2}, so the asymptotic expansions above apply, and expanding E[ W_{ij}²(f(x_i) − f(x_j))² ] via the product rule verifies (A.8).

Theorem 2 is a complete description of the pointwise convergence of the discrete operator L_un. While the expansion (A.7) shows that the matrix c^{−1}L_un approximates the operator L_{q,ρ}, it does not tell us how the eigenvectors of c^{−1}L_un approximate the eigenfunctions of L_{q,ρ}. That relationship is the subject of Theorem 3 below.
Since c^{−1}L_un is a symmetric matrix, we can interpret its eigenvectors f as the sequential orthogonal minimizers of the functional

Λ(f) = (f⊤ c^{−1} L_un f) / (f⊤ f).

By dividing the numerator and denominator by N, we can interpret the inner product N^{−1} f⊤f as a Monte-Carlo estimate of an integral over the manifold, since N^{−1} f⊤f = N^{−1} Σ_i f(x_i)² ≈ ∫_M f² q dV. It is easy to see that this estimator has variance of order N^{−1}. Similarly, in Theorem 3 we will interpret N^{−1} f⊤ c^{−1} L_un f as approximating an integral over the manifold.

Theorem 3 (Integrated bias and variance of L_un). Let M be an m-dimensional Riemannian manifold embedded in R^n and let q : M → R be a smooth density function. For {x_i}_{i=1}^N independent samples of q and f ∈ C³(M), the quantity N^{−1} f⊤ c^{−1} L_un f estimates ∫_M f q L_{q,ρ} f dV with bias of order δ² and variance of leading order a δ^{−m−2} N^{−2}.

Proof. By definition, N^{−1} f⊤ L_un f = N^{−1} Σ_{i,j} W_{ij} f(x_i)(f(x_i) − f(x_j)), where the i = j term is zero, so the sum runs over the N(N − 1) terms with i ≠ j. Since the x_i are sampled according to the density q, taking expectations and dividing by c yields the stated bias. We can now compute the variance of the estimator (cN)^{−1} f⊤ L_un f, which estimates ∫ f q L_{q,ρ} f dV. To find the variance we must expand the square of the double sum, which produces terms indexed by i, j, k, l. When i, j, k, l are all distinct, independence lets us factor the expectations; the constant a_1 ≡ (N − 2)(N − 3)/(N(N − 1)) accounts for the fact that of the N²(N − 1)² total terms, only N(N − 1)(N − 2)(N − 3) have distinct indices. Since i ≠ j and k ≠ l, we next consider the terms where either i ∈ {k, l} or j ∈ {k, l}, but not both. Using the symmetry of L_un, by renaming indices we can rewrite all four combinations as the case i = k. Finally, we consider the terms where both i ∈ {k, l} and j ∈ {k, l}; by symmetry we rewrite the two possibilities as i = k and j = l, and these terms involve the constant a_2 ≡ m_{2,2}/m_2, the ratio of the second moment m_{2,2} of the squared kernel W² to the second moment m_2 of the kernel W.
Combining these terms, we can now compute the variance of the estimator (cN)^{−1} f⊤ L_un f. In particular, the variance consists of a leading term of order a δ^{−m−2} N^{−2} and a second term of order N^{−1}, and is finite provided that all of the inner products involved are finite.
We should note that determining whether the inner product is finite is nontrivial when the manifold in question is unbounded and the sampling density q is not bounded away from zero. We will return to this issue below. First, we compute the bias and variance of the spectral estimates. For wider applicability, we consider the generalized eigenvalue problem c^{−1} L_un f = λ M f for any diagonal matrix M_{ii} = µ(x_i), which corresponds to the functional Λ(f) = (f⊤ c^{−1} L_un f) / (f⊤ M f).
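Since M is diagonal with positive entries, the generalized problem reduces to an ordinary symmetric one; a minimal sketch (illustrative only, assuming µ > 0 at every sample point):

```python
import numpy as np

def generalized_eigs(L, mu):
    """Solve L f = lambda M f with diagonal M_ii = mu_i > 0 via the
    symmetric reduction M^(-1/2) L M^(-1/2) g = lambda g, f = M^(-1/2) g."""
    s = 1.0 / np.sqrt(mu)
    A = s[:, None] * L * s[None, :]    # M^(-1/2) L M^(-1/2), still symmetric
    lam, G = np.linalg.eigh(A)
    return lam, s[:, None] * G         # columns are the generalized eigenvectors
```

The reduction preserves symmetry, so the eigenvalues remain real and the constant-in-f null direction of an unnormalized Laplacian still appears as a zero generalized eigenvalue.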

Theorem 4 (Spectral bias and variance). Under the same assumptions as Theorem 3, the generalized Rayleigh quotient Λ(f) estimates the corresponding ratio of integrals with bias of leading order δ². Assuming the inner products involved are finite (for example, if M is a compact manifold), its variance satisfies var(Λ(f)) = O(δ^{−m−2}N^{−2}) + O(N^{−1}).   (A.20)

Proof. We consider Λ(f) to be a ratio estimator of the form Λ(f) = a/b, where a = (cN)^{−1} f⊤ L_un f and b = N^{−1} f⊤ M f.

The correlation of a and b is of order N^{−1}, since the sum of the terms with k = i or k = j is clearly of order N^{−1}, and when both k ≠ i and k ≠ j the expectation factors as E[a]E[b] by independence. Since the variance of b and the correlation are both of order N^{−1}, the standard ratio-estimator expansion gives var(Λ(f)) = var(a)/E[b]² + O(N^{−1}). Combined with Theorem 3, these equations yield the desired result.
Comparing Theorems 2 and 4 we find a surprising result, namely that the optimal δ for spectral approximation of the operator L_{q,ρ} is different from the optimal δ for pointwise approximation. To our knowledge this has not been noted before in the literature. We find the optimal choice of δ by balancing the squared bias against the variance of the respective estimators. For optimal pointwise approximation we need δ⁴ = N^{−1}δ^{−m−2}, so the optimal choice is δ ∝ N^{−1/(m+6)}, and the combined error of the estimator is then O(N^{−2/(m+6)}). In contrast, for optimal spectral approximation we need δ⁴ = N^{−2}δ^{−m−2}, so the optimal choice is δ ∝ N^{−2/(m+6)}, and the combined error (bias and standard deviation) is O(N^{−4/(m+6)}). The one exception to this rule is the case m = 1, where the second term in the spectral variance dominates, so that the optimal choice is obtained by setting the two terms in the spectral variance equal, which yields δ ∝ N^{−1/3}, and the combined error is of order N^{−1}. Since graph Laplacians are most often used spectrally (for example, to find low-dimensional coordinates with the diffusion map embedding [8]), this implies that the choice of bandwidth δ should be significantly smaller than suggested in the literature [28,2]. We demonstrate this for the cutoff kernel on a circle in the example below.

Example 8 (Pointwise and spectral estimation on the unit circle). We illustrate by example the difference in bandwidth required for pointwise versus spectral approximation implied by Theorems 2 and 4. Consider data sets consisting of N ∈ {1000, 5000, 20000} points {x_i = (sin(θ_i), cos(θ_i))}_{i=1}^N sampled uniformly from the unit circle, and the unnormalized graph Laplacian c^{−1}L_un constructed using the cutoff kernel. We first chose δ = 3N^{−1/(m+6)} = 3N^{−1/7} (the constant 3 was selected by hand tuning), which is optimal for pointwise estimation of the Laplace-de Rham operator ∆, as shown in Figure A.12(a)(b).
Next, we set δ = 3N^{−1/3}, which is optimal for spectral estimation, as shown in Figure A.12(c)(d). This demonstrates that spectral estimation is optimal at a much smaller value of δ than pointwise estimation. Despite the extremely poor pointwise estimation when δ = 3N^{−1/3}, the spectral estimation, which involves an integrated quantity, is superior at this significantly smaller value of δ. Notice that the relative error in the eigenvalues increases as the eigenvalues increase; this phenomenon is explained at the end of the next section.
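A simplified numerical companion to Example 8 (simplified in two ways that we flag explicitly: a deterministic uniform grid on the circle replaces random samples, and the normalization constant c is omitted, since eigenvalue ratios are scale-free):

```python
import numpy as np

# cutoff-kernel graph Laplacian on a uniform grid on the unit circle;
# the limiting operator has eigenvalues 0, 1, 1, 4, 4, ... (k^2, doubled)
N, delta = 500, 0.2
theta = 2 * np.pi * np.arange(N) / N
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = (D < delta).astype(float)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W
evals = np.linalg.eigvalsh(L)
# up to one overall constant, evals[1] = evals[2] play the role of 1 and
# evals[3] = evals[4] the role of 4, so the scale-free ratio is near 4
ratio = evals[3] / evals[1]
```

With these parameters the ratio comes out within a few percent of 4, mirroring the comparison against the true eigenvalues [0, 1, 1, 4, 4] in the text without having to estimate c.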
In general we only know that the optimal choice is δ ∝ N^{−2/(m+6)} for m > 1 and δ ∝ N^{−1/3} for m = 1, and the constants in the optimal choice are quite complex. However, this does indicate the correct order of magnitude for δ, especially for large data sets. Figure A.12 dramatically illustrates the differences between optimal pointwise and optimal spectral estimation. As mentioned above, graph Laplacians are often used for spectral approximation, so previous analyses which tune δ for optimal pointwise estimation [28,2] are misleading for many applications.
In Figure A.13 we verify the power laws for the optimal choice of δ for pointwise and spectral estimation. For N ∈ {250, 500, 1000, 2000, 4000, 8000} we compute the pointwise and spectral root mean squared errors (RMSE) for a wide range of values of δ and then plot the optimal value of δ as a function of N. To estimate the pointwise error for fixed N and δ, we generated N uniformly random points on a circle, {x_i = (sin(θ_i), cos(θ_i))}_{i=1}^N, and constructed the unnormalized graph Laplacian c^{−1}L_un using the cutoff kernel. We then applied this Laplacian matrix to a vector of test-function values and computed the RMSE between the estimator and the limiting expectation. We repeated this numerical experiment 10 times for each value of N and δ, and the average RMSE is shown in Figure A.13(a). To estimate the spectral error, we computed the smallest 5 eigenvalues of c^{−1}L_un and found the RMSE between these eigenvalues and the true eigenvalues [0, 1, 1, 4, 4] of the limiting operator ∆. This numerical experiment was repeated 10 times for each value of N and δ, and the average RMSE is shown in Figure A.13(b). Finally, in Figure A.13(c) we plot the value of δ which minimized the pointwise error (black) and the spectral error (red) for each value of N, and we compare these data points to the theoretical power laws for the pointwise error, δ ∝ N^{−1/7} (black, dashed line), and the spectral error, δ ∝ N^{−1/3} (red, dashed line), respectively.

This immediately shows that when m ≠ 2 we can always choose ρ, µ to estimate the operator ∆_g̃ for any conformally isometric geometry g̃ = ϕg. In the special case m = 2 this can also be achieved by setting ρ = q^{−1/2} and µ = qϕ^{−1}. However, if we assume that ρ = q^β for some power β, then there are three choices of ρ, µ which yield the same geometry in every dimension m; we summarize these in the table below.
Geometry                      g̃             dṼ      ρ               µ
Sampling measure geometry     q^{2/m} g      q dV    q^{−1/m}        1
Embedding geometry            g              dV      q^{−2/(m+2)}    —
Inverse sampling geometry     —              —       q^{−1/2}        q^{m/2+1}

For the choice β = −1/m we find µ = 1, which is explored in the main body of the paper. The choice µ = 1 is the only one allowing an unweighted graph construction. To reconstruct the embedding geometry, one can estimate the Laplace-de Rham operator ∆_g using β = −2/(m+2). Finally, if we select β = −1/2 we find µ = q^{m/2+1}, which is closely related to the results of [2].

Our construction above shows that an appropriately chosen graph Laplacian is a consistent estimator of the normalized Dirichlet energy ∫_M ||∇f||² dṼ / ∫_M f² dṼ for any conformally isometric geometry g̃ = ϕg. The minimizers of this normalized Dirichlet energy are exactly the eigenfunctions of the Laplace-de Rham operator ∆_g̃, so Theorem 4 is the first step towards spectral convergence. However, in the details of Theorem 4 there is a significant barrier to spectral convergence. This barrier is a very subtle effect of the second term in the variance of the estimator. Rewriting (A.20) from Theorem 4 in terms of the new geometry, the variance var(Λ(f)) has a leading term of order a δ^{−m−2}/(N(N − 1)) and a second term of order N^{−1} whose constant involves the integral ∫ q^{−1} ϕ^{m/2} dṼ (A.24). The first term in (A.24) normally controls the bias-variance trade-off, since it diverges as δ → 0, and the second term is typically higher order. However, the integral which defines the constant in the second error term has the potential to be infinite when q is not bounded below. Ignoring the factors which depend on f, we need ∫ q^{−1} ϕ^{m/2} dṼ = ∫ q^{−1} ϕ^m dV to be finite in order for the variance of the estimator to be well-defined, which proves the following result.
Theorem 5 (Spectral Convergence, Part 1). Let {x_i}_{i=1}^N be sampled from a density q. Consider a conformally equivalent metric g̃ = ϕg such that q^{-1} ϕ^m is integrable with respect to dV. Define the unnormalized Laplacian L_un using a kernel K_δ(x, y) = h(||x − y||^2 / (δ^2 ρ(x)ρ(y))) for any h with exponential decay. Then the estimator Λ(f) has bias of leading order δ^2 and finite variance of leading order δ^{-m-2} N^{-2}. The optimal choice δ ∝ N^{-2/(m+6)} yields a total error of order N^{-4/(m+6)}.
In particular, for the choice ρ = q^{-2/(m+2)} we find q^{-1} ϕ^m = q^{-1}, which will often fail to be integrable on a non-compact manifold when q is not bounded away from zero. This implies that we cannot spectrally approximate the Laplace-de Rham operator with respect to the embedding metric for many non-compact manifolds. Notice that this variance barrier only applies to spectral convergence, so we can still approximate the operator pointwise using Theorem 2. Similarly, when β = −1/2 we find q^{-1} ϕ^m = q^{-1-m}; the exponent of q is negative, again allowing divergence on non-compact manifolds. This reveals yet another advantage of the choice β = −1/m, since then q^{-1} ϕ^m = q, which is simply the sampling density and is always integrable with respect to dV by definition.
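The integrand exponents quoted above can be checked symbolically. The following sketch (the use of sympy and the variable names are incidental choices) verifies q^{-1} ϕ^m for the conformal factors ϕ implied by the text: ϕ = 1 for the embedding geometry, ϕ = q^{-1} for β = −1/2, and ϕ = q^{2/m} for β = −1/m.

```python
import sympy as sp

# Verify the integrand q^{-1} phi^m for the three bandwidth choices:
#   embedding geometry:        phi = 1       ->  q^{-1}
#   beta = -1/2:               phi = q^{-1}  ->  q^{-1-m}
#   beta = -1/m (CkNN choice): phi = q^{2/m} ->  q
q, m = sp.symbols('q m', positive=True)
cases = [(sp.Integer(1), 1 / q),
         (1 / q, q ** (-1 - m)),
         (q ** (2 / m), q)]
for phi, expected in cases:
    assert sp.simplify(q ** -1 * phi ** m - expected) == 0
print("all exponents verified")
```

Only the last case yields a positive power of q, matching the claim that β = −1/m is the only choice whose variance integral is automatically finite.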
Theorem 5 shows that the functional Λ(f) converges to the normalized Dirichlet energy functional, whose minimizers are the eigenfunctions of the Laplace-de Rham operator ∆g̃. It remains only to show that the vectors which minimize the discrete functional Λ(f) converge to the minimizers of the normalized Dirichlet energy functional. In fact, this has already been shown in [34] under the assumption that the spectrum of ∆g̃ is discrete.
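As a concrete illustration of the construction in Theorem 5, the sketch below builds L_un with the variable-bandwidth kernel, taking h(t) = e^{−t} (one admissible exponentially decaying shape) and estimating ρ(x) by the distance to the k-th nearest neighbor, a common stand-in for q(x)^{-1/m}. The function name, the proxy for ρ, and the defaults are illustrative assumptions, not fixed by the text.

```python
import numpy as np

# Illustrative sketch of the variable-bandwidth kernel of Theorem 5:
#   K_delta(x, y) = h(||x - y||^2 / (delta^2 rho(x) rho(y)))
# with h(t) = exp(-t) and rho(x) approximated by the k-th
# nearest-neighbor distance (a common proxy for q(x)^{-1/m}).
def unnormalized_laplacian(X, delta, k=10):
    # Pairwise squared distances between the sample points.
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Column k of the sorted distances skips the point itself (column 0).
    rho = np.sqrt(np.sort(D2, axis=1)[:, k])
    # Variable-bandwidth kernel weights W_ij, with no self-edges.
    W = np.exp(-D2 / (delta**2 * np.outer(rho, rho)))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W  # L_un = D - W, symmetric PSD
```

Because W is symmetric, L_un is symmetric positive semidefinite with the constant vector in its kernel, which is what the spectral results above rely on.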
Finally, we address an issue of spectral convergence that was raised in [34], which suggests that unnormalized graph Laplacians have worse spectral convergence properties than normalized Laplacians. The theory in [34] is quite powerful, and indeed is the basis of our spectral convergence result below. However, a subtle detail reveals that the unnormalized Laplacian c^{-1} L_un does not suffer from the spectral convergence issue considered there. For an unnormalized Laplacian L_un = D − W, where D_ii = Σ_{j=1}^N W_ij is the degree function, "Result 2" of [34] states that the first r eigenvalues of the graph Laplacian converge to those of the limiting operator L_{q,ρ} provided that these eigenvalues do not lie in the range of the degree function. Since δ → 0 as N → ∞, the rescaled degree function c^{-1} d(x) diverges to infinity, so in the limit of large data no fixed eigenvalue lies in its range. The reason this special class of unnormalized Laplacians avoids the difficulty of [34] is that the first-order term of the degree function is exactly cancelled by the first-order term of the kernel W_ij in the limit of large data, which is why the constant c is of higher order than the degree function in terms of δ. This completely removes the only barrier to spectral convergence, and so by the results of [34] we have the following result.
Theorem 6 (Spectral Convergence, Part 2). Under the assumptions of Theorem 5, in the limit of large data the eigenvalues of c^{-1} M^{-1} L_un converge to those of the limiting operator ∆g̃, assuming the spectrum is discrete.
In particular, for a manifold without boundary or with a smooth boundary, spectral convergence is guaranteed for the unnormalized Laplacian when ρ = q −1/m and µ = 1.
Proof. Notice that the solutions of the eigenvalue problem c^{-1} M^{-1} L_un f = λf exactly correspond to the minimizers of the functional Λ(f) in Theorem 5. The first claim follows from the convergence of the functional Λ(f) to the normalized Dirichlet energy, combined with the convergence results of [34].
The guarantee of spectral convergence for ρ = q^{-1/m} and µ = 1 follows from the fact that q^{-1} ϕ^m = q is always integrable. Moreover, the volume of the manifold with respect to dṼ = q dV is always vol_{dṼ}(M) = ∫_M q dV = 1, and for any manifold of finite volume with a smooth boundary the spectrum of the Laplace-de Rham operator is discrete [7].
Since the matrix c^{-1} M^{-1} L_un is not symmetric, it is numerically preferable to solve the generalized eigenvalue problem c^{-1} L_un f = λ M f, since both c^{-1} L_un and M are symmetric.
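The symmetric formulation can be sketched as follows; scipy.linalg.eigh solves the symmetric generalized problem directly. The Gaussian weights, the value c = 1, and the diagonal form of the mass matrix M below are placeholder choices for illustration only, not the exact construction of the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Solve c^{-1} L_un f = lambda M f as a symmetric generalized eigenproblem
# rather than forming the nonsymmetric matrix c^{-1} M^{-1} L_un.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2))
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-D2 / 0.5)                      # placeholder Gaussian weights
np.fill_diagonal(W, 0.0)
L_un = np.diag(W.sum(axis=1)) - W          # symmetric unnormalized Laplacian
c = 1.0                                    # placeholder normalization constant
M = np.diag(W.sum(axis=1) / W.sum())       # illustrative diagonal mass matrix
lam, V = eigh(L_un / c, M)                 # eigenvalues in ascending order
# The constant vector lies in the kernel of L_un, so lam[0] is ~0.
```

Both matrices passed to eigh are symmetric (and M is positive definite), so the solver can exploit this structure, which is exactly the numerical advantage noted above.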
Finally, it is often observed empirically that the error in the spectral approximations increases with the eigenvalue. To understand this phenomenon we use the variance formula (A.24). If f is an eigenfunction with ∆g̃ f = λf, the two inner products appearing in (A.24) can be expressed in terms of λ. Although the first term (which is typically the leading-order term, especially on compact manifolds) can be tuned to give a constant relative error, the second term grows proportionally to the eigenvalue λ. We should therefore expect the spectrum to be fairly accurate until the order of the second term matches that of the first, at which point the relative errors in the eigenvalues grow rapidly. When ϕ = q^{2/m}, as in the CkNN construction, the inner products in the two terms are the same, so the spectrum remains accurate up to a threshold determined by the ratio of the first term to the second; for the optimal choice δ ∝ N^{-2/(m+6)} this threshold becomes an explicit power of N. Notice that this constraint does not apply in the case m = 1, where the second term in (A.24) dominates, or in the case m = 2, where the two terms are of equal order for the optimal choice of δ. In all cases, the relative error increases as the eigenvalues increase.
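The growth of the relative spectral error with the eigenvalue can be observed directly in the circle example. The sketch below (with arbitrary N, δ, and seed, not the paper's exact experiment) compares eigenvalue ratios of a cutoff-kernel graph Laplacian to the exact circle ratios; on the unit circle the Laplace-Beltrami eigenvalues are k^2 with multiplicity two, so the ratios λ_k/λ_1 should approximate 1, 1, 4, 4, 9, 9, and taking ratios lets the normalization constant c cancel.

```python
import numpy as np

# Sample N points uniformly on the unit circle and build the unnormalized
# graph Laplacian with a hard cutoff kernel of radius delta.
rng = np.random.default_rng(0)
N, delta = 1500, 0.12
theta = rng.uniform(0.0, 2.0 * np.pi, N)
X = np.column_stack([np.sin(theta), np.cos(theta)])

D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = (D2 < delta**2).astype(float)   # cutoff kernel, no self-edges
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W      # unnormalized Laplacian L_un = D - W

# Ratios of the small eigenvalues are independent of the constant c.
lam = np.sort(np.linalg.eigvalsh(L))[:7]
ratios = lam[1:] / lam[1]           # expect approximately [1, 1, 4, 4, 9, 9]
exact = np.array([1, 1, 4, 4, 9, 9])
rel_err = np.abs(ratios - exact) / exact
print(ratios, rel_err)
```

Printing rel_err for larger k (or more eigenvalues) illustrates the trend discussed above: the higher modes are resolved with noticeably larger relative error than the lowest ones.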