PLLay: Efficient Topological Layer based on Persistent Landscapes

Authors: Kwangho Kim, Jisu Kim, Manzil Zaheer, Joon Sik Kim, Frederic Chazal, Larry Wasserman

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of our approach by classification experiments on various datasets."
Researcher Affiliation | Collaboration | Kwangho Kim, Carnegie Mellon University, Pittsburgh, USA (kwanghk@cmu.edu); Jisu Kim, Inria, Palaiseau, France (jisu.kim@inria.fr); Manzil Zaheer, Google Research, Mountain View, USA (manzilzaheer@google.com); Joon Sik Kim, Carnegie Mellon University, Pittsburgh, USA (joonsikk@cs.cmu.edu); Frederic Chazal, Inria, Palaiseau, France (frederic.chazal@inria.fr); Larry Wasserman, Carnegie Mellon University, Pittsburgh, USA (larry@stat.cmu.edu)
Pseudocode | Yes | "Algorithm 1 Implementation of single structure element for PLLay" (a hedged landscape-evaluation sketch appears below the table)
Open Source Code | Yes | "Reproducibility. The code for PLLay is available at https://github.com/jisuk1/pllay/."
Open Datasets | Yes | "To demonstrate the effectiveness of the proposed approach, we study classification problems on two different datasets: MNIST handwritten digits and ORBIT5K. ... ORBIT5K dataset [Adams et al., 2017, Carrière et al., 2020]." (an ORBIT5K generation sketch appears below the table)
Dataset Splits | No | The paper specifies training and test set sizes (e.g., 'standard training set consists of 60,000 examples, and test set of 10,000 examples' for MNIST; 'We used 400 instances for training and 100 for testing' for ORBIT5K) but does not explicitly mention a separate validation split or its size.
Hardware Specification | No | The paper does not provide specific hardware details (CPU or GPU models, memory, etc.) used for running the experiments.
Software Dependencies | Yes | "The GUDHI Project. GUDHI User and Reference Manual. GUDHI Editorial Board, 3.3.0 edition, 2020. URL https://gudhi.inria.fr/doc/3.3.0/." (a minimal GUDHI usage sketch appears below the table)
Experiment Setup | Yes | "We refer to Appendix G for details about each simulation setup and our model architectures. ... MLP model has 2 hidden layers with 100 neurons each. CNN model has two convolutional layers (32 filters, 5x5 kernel size, 2x2 pooling) followed by two fully connected layers (100 neurons each). ... Adam optimizer with a batch size of 32 and learning rate of 0.001." (a hedged CNN architecture sketch appears below the table)
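
The pseudocode row refers to Algorithm 1, the single structure element of PLLay, which is built on persistence landscapes. As a rough illustration of the kind of computation involved, the NumPy sketch below evaluates the first K landscape functions of a persistence diagram on a fixed grid and averages them; the function name, the uniform default weights, and the grid are illustrative assumptions, not the authors' reference implementation (see the repository linked above for that).

    # Minimal sketch: evaluate the first K persistence-landscape functions of a
    # diagram on a fixed grid and form their weighted average. Assumed names and
    # parametrization; not the paper's Algorithm 1.
    import numpy as np

    def landscape_features(diagram, grid, K=3, weights=None):
        """diagram: (n, 2) array of (birth, death) pairs; grid: (m,) sample points."""
        births, deaths = diagram[:, 0][:, None], diagram[:, 1][:, None]
        # Tent functions max(0, min(t - b, d - t)) for every diagram point and grid value.
        tents = np.maximum(0.0, np.minimum(grid[None, :] - births, deaths - grid[None, :]))
        # k-th landscape = k-th largest tent value at each grid point.
        order = np.sort(tents, axis=0)[::-1]
        k = min(K, order.shape[0])
        lambdas = np.zeros((K, grid.size))
        lambdas[:k] = order[:k]
        # Weighted average over the first K landscapes (uniform weights by default).
        w = np.ones(K) / K if weights is None else np.asarray(weights)
        return w @ lambdas  # shape (m,): a fixed-size vector usable as layer input

    # Toy example: a diagram with two features evaluated on 5 grid points.
    diag = np.array([[0.1, 0.6], [0.2, 0.9]])
    print(landscape_features(diag, np.linspace(0.0, 1.0, 5)))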
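
The ORBIT5K dataset in the open-datasets row consists of point clouds generated by the discrete dynamical system studied in Adams et al. [2017] and Carrière et al. [2020]. The sketch below shows one plausible way to generate such orbits; the orbit length, the five parameter values, and the random initial condition are assumptions about that setup rather than details quoted from the PLLay paper.

    # Hedged sketch of ORBIT5K-style point-cloud generation (assumed parameters).
    import numpy as np

    def generate_orbit(r, n_points=1000, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        pts = np.empty((n_points, 2))
        x, y = rng.uniform(0.0, 1.0, size=2)  # random initial condition in [0, 1]^2
        for i in range(n_points):
            x = (x + r * y * (1.0 - y)) % 1.0
            y = (y + r * x * (1.0 - x)) % 1.0
            pts[i] = (x, y)
        return pts

    # One orbit per parameter value; the classification label is the parameter r.
    for r in (2.5, 3.5, 4.0, 4.1, 4.3):
        print(r, generate_orbit(r).shape)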
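
The software-dependencies row cites GUDHI 3.3.0. For context, the minimal example below computes a persistence diagram from a point cloud with GUDHI's Python interface (Rips filtration); it only illustrates the dependency and is not the paper's pipeline.

    # Minimal GUDHI usage: Rips filtration on a toy point cloud, then persistence.
    import numpy as np
    import gudhi

    points = np.random.rand(100, 2)  # toy point cloud
    rips = gudhi.RipsComplex(points=points, max_edge_length=0.5)
    st = rips.create_simplex_tree(max_dimension=2)
    diag = st.persistence()  # list of (dimension, (birth, death)) pairs
    h1 = [pair for dim, pair in diag if dim == 1]  # 1-dimensional features
    print(len(diag), len(h1))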
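
The experiment-setup row describes the baseline models. Below is a hedged PyTorch sketch of the CNN baseline as described (two 5x5 convolutional layers with 32 filters and 2x2 pooling, two 100-unit fully connected layers, Adam with learning rate 0.001 and batch size 32); the MNIST input shape (1, 28, 28), ReLU activations, and same-padding are assumptions not stated in the excerpt.

    # Hedged sketch of the described CNN baseline; input shape and activations assumed.
    import torch
    import torch.nn as nn

    class CNNBaseline(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 7 * 7, 100), nn.ReLU(),  # 28x28 input pooled twice -> 7x7
                nn.Linear(100, 100), nn.ReLU(),
                nn.Linear(100, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = CNNBaseline()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # batch size 32 set in the data loader
    print(model(torch.zeros(32, 1, 28, 28)).shape)  # torch.Size([32, 10])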