Learning Globally Smooth Functions on Manifolds

Authors: Juan Cervino, Luiz F. O. Chamon, Benjamin David Haeffele, René Vidal, Alejandro Ribeiro

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on real world data illustrate the advantages of the proposed method relative to existing alternatives. To demonstrate the effectiveness of our method, we conduct two real world experiments with physical systems."
Researcher Affiliation | Academia | "1 Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA; 2 Excellence Cluster for Simulation Technology, University of Stuttgart, Germany; 3 Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, USA; 4 Innovation in Data Engineering and Science (IDEAS), University of Pennsylvania, Philadelphia, USA."
Pseudocode | Yes | "Algorithm 1: Gradient Based Smooth Learning on Data Manifold"
Open Source Code | Yes | "Our code is available here."
Open Datasets | Yes | "For more details on the data acquisition, please refer to the original paper (Koppel et al., 2016). To generate the data we utilized sklearn library, and we utilize 1 labeled, and 200 unlabeled samples per class (i.e. moon), and we added noise σ = {0.05, 0.1}."
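The two-moons setup quoted above (1 labeled and 200 unlabeled samples per class, noise σ ∈ {0.05, 0.1}) can be sketched with scikit-learn's `make_moons`. This is a hypothetical reconstruction, not the authors' code: the helper name, seeding, and the choice of which samples are labeled are assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons

def make_semisup_moons(n_unlabeled_per_class=200, n_labeled_per_class=1,
                       noise=0.05, seed=0):
    """Sketch of the semi-supervised two-moons data described in the paper:
    1 labeled + 200 unlabeled samples per class (moon), Gaussian noise
    sigma in {0.05, 0.1}. Labeled samples are chosen at random here,
    which is an assumption."""
    n_per_class = n_labeled_per_class + n_unlabeled_per_class
    X, y = make_moons(n_samples=2 * n_per_class, noise=noise,
                      random_state=seed)
    rng = np.random.default_rng(seed)
    labeled_mask = np.zeros(len(y), dtype=bool)
    for c in (0, 1):
        cls_idx = np.flatnonzero(y == c)
        chosen = rng.choice(cls_idx, size=n_labeled_per_class, replace=False)
        labeled_mask[chosen] = True
    return X, y, labeled_mask

X, y, mask = make_semisup_moons(noise=0.05)  # 402 points, 2 labeled
```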
Dataset Splits | No | The paper only mentions training and testing splits (e.g., "170 samples for training and 25 samples for testing"); no explicit mention of, or quantitative information about, a validation split was found.
Hardware Specification | Yes | "We run experiments on both NVIDIA 2080, as well as 3090 GPUs."
Software Dependencies | No | The paper mentions the sklearn library but does not provide a specific version number; no other software dependencies are listed with version numbers.
Experiment Setup | Yes | "Regarding the optimization, we utilize the mean square error as a loss, and we train a 2 layer fully connected neural network with 256 hidden dimensions and hyperbolic tangent as the non-linearity. We train for 10000 epochs, with a learning rate of 0.0015 in the case of pavement, and 0.00015 in the case of grass. For batch size, we utilize the whole training set."
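The setup quoted above (2-layer fully connected network, 256 hidden units, tanh, MSE loss, full-batch gradient descent at learning rate 0.0015 or 0.00015 for 10000 epochs) can be sketched in plain NumPy. This is a minimal illustration of the supervised objective only: the paper's manifold-smoothness regularizer (Algorithm 1) is omitted, the toy data and initialization scheme are assumptions, and fewer epochs are used here for brevity.

```python
import numpy as np

def train_mlp(X, y, hidden=256, lr=0.0015, epochs=2000, seed=0):
    """Full-batch gradient descent on a 2-layer tanh MLP with MSE loss,
    mirroring the quoted setup (256 hidden units, tanh non-linearity,
    whole-training-set batches). The paper's smoothness penalty is not
    included in this sketch."""
    rng = np.random.default_rng(seed)
    d_in, d_out = X.shape[1], y.shape[1]
    W1 = rng.normal(0, 1 / np.sqrt(d_in), (d_in, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, d_out))
    b2 = np.zeros(d_out)
    n = len(X)
    losses = []
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden layer, tanh activation
        pred = h @ W2 + b2                  # linear output layer
        err = pred - y
        losses.append(float(np.mean(err ** 2)))
        g = 2 * err / (n * d_out)           # dL/dpred for MSE
        dW2, db2 = h.T @ g, g.sum(0)
        dh = (g @ W2.T) * (1 - h ** 2)      # backprop through tanh
        dW1, db1 = X.T @ dh, dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1      # full-batch GD step
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), losses

# Toy usage on a hypothetical 1-D regression task (not the paper's data):
X_toy = np.linspace(-1.0, 1.0, 64)[:, None]
y_toy = np.sin(np.pi * X_toy)
params, losses = train_mlp(X_toy, y_toy, epochs=2000)
```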