Article Contents

# Reconstructing linearly embedded graphs: A first step to stratified space learning

• Corresponding author: Yossi Bokor

Abstract: In this paper, we consider the simplest class of stratified spaces: linearly embedded graphs. We present an algorithm that learns the abstract structure of an embedded graph and models its specific embedding from a point cloud sampled from it. We use tools and inspiration from computational geometry, algebraic topology, and topological data analysis, and we prove the correctness of the identified abstract structure under assumptions on the embedding. The algorithm is implemented in the Julia package Skyler, which we used for the numerical simulations in this paper.
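As an illustration of the sampling model the abstract describes, the sketch below draws a noisy point cloud from a small linearly embedded graph. The vertex positions, edge set, sample size, and noise level here are hypothetical choices for demonstration, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearly embedded graph in R^3: three vertices, two edges.
vertices = np.array([[0.0, 0.0, 0.0],
                     [4.6, 6.24, 0.0],
                     [4.86, 0.51, 3.47]])
edges = [(0, 1), (0, 2)]
noise = 0.1  # assumed standard deviation of isotropic Gaussian noise

def sample_graph(vertices, edges, n_per_edge, noise, rng):
    """Sample points uniformly along each edge, perturbed by Gaussian noise."""
    points = []
    for u, v in edges:
        t = rng.uniform(0.0, 1.0, size=(n_per_edge, 1))
        segment = (1 - t) * vertices[u] + t * vertices[v]
        points.append(segment + rng.normal(0.0, noise, segment.shape))
    return np.vstack(points)

cloud = sample_graph(vertices, edges, 200, noise, rng)
print(cloud.shape)  # (400, 3)
```

A point cloud of this form (noisy samples concentrated near the edges of an embedded graph) is the input the reconstruction algorithm operates on.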

Erratum: Nanda et al. should be Stolz et al. in both the PDF (page 2, line 6) and HTML versions of this paper. We apologize for any inconvenience this may cause.

Mathematics Subject Classification: Primary: 57Z25, 51-08; Secondary: 55N31.


Figure 2.1.  An example of scenario 1

Figure 3.2.  Graph of $\Psi\left(\frac{R}{\varepsilon}, 1\right)$

Figure 3.3.  Graph of $\Phi\left(\frac{R}{\varepsilon}, 1\right)$

Figure 3.4.  Both $q_1$ and $q_2$ are in the same half-space generated by the hyper-plane through $p$ perpendicular to $\overline{uv}$

Figure 3.6.  The case where $\angle uvw \leq \frac{\pi}{2}$

Figure 3.7.  The case where $\|\widetilde{p}-\widetilde{q}\|<\|\widetilde{p}-n\|$

Figure 3.8.  The case where $\|\widetilde{p}-\widetilde{q}\|>\|\widetilde{p}-n\|$

Figure 5.9.

Table 1.  Summary of the output of the algorithm for various ratios $\frac{R}{\varepsilon}$. Recall we wish to maximise Equation 9. The last 5 columns are the vertex locations obtained; the true vertex locations are shown in the header row.

| Ratio $R/\varepsilon$ | Correct structure | Log likelihood (Equation 9) | $v_1 = (0, 0, 0)$ | $v_2 = (4.6, 6.24, 0)$ | $v_3 = (4.86, 0.51, 3.47)$ | $v_4 = (-1.32, 6.29, 4)$ | $v_5 = (-4.23, -3.48, -3)$ |
|---|---|---|---|---|---|---|---|
| $4$ | No | – | – | – | – | – | – |
| $6$ | Yes | $-33.183$ | $(0.00, 0.03, 0.01)$ | $(4.59, 6.23, -0.02)$ | $(4.56, 0.56, 3.43)$ | $(-1.30, 6.26, 3.96)$ | $(-4.24, -3.46, -3.02)$ |
| $8$ | Yes | $-32.97$ | $(0.00, 0.02, 0.01)$ | $(4.59, 6.23, -0.02)$ | $(4.86, 0.56, 3.42)$ | $(-1.29, 6.26, 3.96)$ | $(-4.22, -3.45, -3.01)$ |
| $10$ | Yes | $-33.33$ | $(0.01, 0.03, 0.01)$ | $(4.59, 6.22, -0.02)$ | $(4.86, 0.56, 3.42)$ | $(-1.26, 6.24, 3.95)$ | $(-4.19, 3.42, -2.99)$ |
| $12$ | Yes | $-33.84$ | $(0.01, 0.03, 0.01)$ | $(4.59, 6.23, -0.02)$ | $(4.86, 0.56, 3.42)$ | $(-1.26, 6.24, 3.95)$ | $(-4.14, -3.38, -2.96)$ |
| $14$ | Yes | $-36.61$ | $(0.01, 0.03, 0.01)$ | $(4.59, 6.26, -0.03)$ | $(4.86, 0.56, 3.43)$ | $(-1.26, 6.23, 3.95)$ | $(-4.00, -3.27, -2.56)$ |
| $16$ | Yes | $-45.30$ | $(0.02, 0.03, 0.01)$ | $(4.58, 6.27, -0.05)$ | $(4.56, 0.56, 3.43)$ | $(-0.70, 3.70, 2.33)$ | $(-3.96, -3.22, -2.81)$ |
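The model selection summarised in Table 1 compares candidate embeddings by their log likelihood, choosing the maximiser. Equation 9 is not reproduced on this page, so the sketch below uses a generic isotropic Gaussian noise model around the nearest edge as an illustrative stand-in; the function names and the noise scale `sigma` are assumptions, and the paper's actual likelihood may differ.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment [a, b]."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def gaussian_log_likelihood(points, vertices, edges, sigma):
    """Log likelihood of a point cloud under isotropic Gaussian noise
    about the nearest edge of a candidate embedding (illustrative
    stand-in for the paper's Equation 9)."""
    ll = 0.0
    for p in points:
        d = min(point_segment_dist(p, vertices[u], vertices[v])
                for u, v in edges)
        ll += -0.5 * (d / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    return ll
```

Under a model of this shape, an embedding whose edges pass closer to the sampled points scores a higher (less negative) log likelihood, which is how the rows of Table 1 can be compared.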
[1] M. Aanjaneya, F. Chazal, D. Chen, M. Glisse, L. Guibas and D. Morozov, Metric graph reconstruction from noisy data, Internat. J. Comput. Geom. Appl., 22 (2012), 305-325. doi: 10.1142/S0218195912600072.
[2] P. Bendich, B. Wang and S. Mukherjee, Local homology transfer and stratification learning, in Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, 2012, 1355-1370.
[3] P. Bendich, B. Wang and S. Mukherjee, Towards stratification learning through homology inference, AAAI Fall Symposium on Manifold Learning and its Applications (AAAI), 2010. Available from: https://www.aaai.org/ocs/index.php/FSS/FSS10/paper/download/2273/2714.
[4] Y. Bokor, D. Grixti-Cheng, M. Hegland, S. Roberts and K. Turner, Stratified space learning: Reconstructing embedded graphs, 23rd International Congress on Modelling and Simulation, Australia, 2019. Available from: https://mssanz.org.au/modsim2019/A3/bokor.pdf.
[5] P. Breiding, S. Kališnik, B. Sturmfels and M. Weinstein, Learning algebraic varieties from samples, Rev. Mat. Complut., 31 (2018), 545-593. doi: 10.1007/s13163-018-0273-6.
[6] S.-W. Cheng, T. K. Dey and E. A. Ramos, Manifold reconstruction from point samples, in Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, 2005, 1018-1027.
[7] A. P. Dempster, N. M. Laird and D. B. Rubin, Maximum likelihood from incomplete data via the EM algorithm, J. Roy. Statist. Soc. Ser. B, 39 (1977), 1-38. doi: 10.1111/j.2517-6161.1977.tb01600.x.
[8] T. K. Dey, Curve and Surface Reconstruction: Algorithms with Mathematical Analysis, Cambridge Monographs on Applied and Computational Mathematics, 23, Cambridge University Press, Cambridge, 2007.
[9] T. K. Dey, F. Fan and Y. Wang, Dimension detection with local homology, 26th Canadian Conference on Computational Geometry, Nova Scotia, 2014. Available from: http://www.cccg.ca/proceedings/2014/papers/paper40.pdf.
[10] E. M. Stein and R. Shakarchi, Real Analysis: Measure Theory, Integration, and Hilbert Spaces, Princeton Lectures in Analysis, 3, Princeton University Press, Princeton, NJ, 2005.
[11] B. J. Stolz, J. Tanner, H. A. Harrington and V. Nanda, Geometric anomaly detection in data, Proc. Natl. Acad. Sci. USA, 117 (2020), 19664-19669. doi: 10.1073/pnas.2001741117.
