Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

A Closer Look at Embedding Propagation for Manifold Smoothing

Authors: Diego Velazquez, Pau Rodriguez, Josep M. Gonfaus, F. Xavier Roca, Jordi Gonzalez

JMLR 2022

Reproducibility variables (classification result and supporting LLM response per variable):
Research Type: Experimental. LLM Response: "Next we present additional evidence of how EP smooths the classification surface and adapt it to different settings: adversarial attacks, self- and semi-supervised learning, and few-shot learning. Although EP is applied at different stages of the machine learning pipeline for each of the following experiments (see Figure 2), the EP algorithm will remain unchanged across all experiments."
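The EP operation referenced in this row (and detailed in the Experiment Setup row: an RBF similarity graph with radius σ, propagation weighted by α) can be sketched in NumPy. This is an illustrative reconstruction from the quoted description, not the authors' released code; the function name and defaults are assumptions.

```python
import numpy as np

def embedding_propagation(feats, alpha=0.6, sigma=1.0):
    """Sketch of embedding propagation (EP) over a batch of embeddings.

    feats: (n, d) array. Hyperparameter names follow the paper's
    description: alpha weights the propagation (0.6 in the paper,
    found via random search), sigma is the RBF radius.
    """
    n = feats.shape[0]
    # Pairwise squared distances -> RBF similarity matrix
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / sigma ** 2)
    np.fill_diagonal(A, 0.0)  # no self-similarity
    # Symmetric normalization of the adjacency matrix
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1) + 1e-8)
    L = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # Propagator P = (I - alpha * L)^{-1}; propagated embeddings z = P @ feats
    P = np.linalg.inv(np.eye(n) - alpha * L)
    return P @ feats
```

With `alpha=0` the propagator reduces to the identity and the embeddings pass through unchanged, which is a quick sanity check on the implementation.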
Researcher Affiliation: Collaboration. LLM Response: "Diego Velazquez (EMAIL), Visual Tagging Services and Computer Vision Center, Barcelona, Spain; Pau Rodríguez (EMAIL), ServiceNow Research, Montreal, Canada; Josep M. Gonfaus (EMAIL), Visual Tagging Services, Barcelona, Spain; F. Xavier Roca (EMAIL), Computer Vision Center and Univ. Autònoma de Barcelona, Barcelona, Spain; Jordi Gonzàlez (EMAIL), Computer Vision Center and Univ. Autònoma de Barcelona, Barcelona, Spain"
Pseudocode: No. LLM Response: "The paper describes the proposed method using mathematical equations and descriptive text in Section 3, but does not include a distinct pseudocode block or algorithm listing."
Open Source Code: No. LLM Response: "The paper mentions 'License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v23/21-0468.html.' This refers to the paper's license and attribution, not the release of source code for the methodology. No other specific statement or link to source code is provided."
Open Datasets: Yes. LLM Response: "4.1 Datasets. miniImagenet (Ravi and Larochelle, 2016) consists of a subset of the Imagenet dataset (Russakovsky et al., 2015). tieredImagenet (Ren et al., 2018) is a more challenging subset of the Imagenet dataset (Russakovsky et al., 2015). CIFAR10 (Krizhevsky et al., 2009) is comprised of 60,000 32×32 colour images divided into 10 classes. CIFAR100 (Krizhevsky et al., 2009) is just like the CIFAR10 dataset, except it has 100 classes containing 600 images each. MNIST (LeCun and Cortes, 2010) is a dataset of 70,000 small 28×28-pixel grayscale images of handwritten single digits between 0 and 9 (10 classes). Fashion-MNIST (Xiao et al., 2017) is a dataset of Zalando's article images consisting of a training set of 60,000 examples and a test set of 10,000 examples. STL-10 (Coates et al., 2011) is a dataset of 96×96 color images, categorized into 10 classes."
Dataset Splits: Yes. LLM Response: "miniImagenet ... Classes are divided in three disjoint sets of 64 base classes, 16 for validation and 20 novel classes. tieredImagenet ... divided into 20 base (351 classes), 6 validation (97 classes) and 8 novel (160 classes) categories. CIFAR10 ... 50,000 training images and 10,000 test images. CIFAR100 ... 500 training images and 100 testing images per class. MNIST ... 60,000 examples in the training dataset and 10,000 in the test dataset. Fashion-MNIST ... training set of 60,000 examples and a test set of 10,000 examples. STL-10 ... 500 training images and 800 test images per class. The dataset also has 100,000 unlabeled images for unsupervised learning. Each episode consists of n classes sampled uniformly without replacement from the set of all classes, a support set S (k examples per class) and a query set Q (q examples per class). This is referred to as n-way k-shot learning."
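The n-way k-shot episode construction quoted at the end of this row can be sketched directly. This is a generic illustration of the sampling protocol, not the authors' data loader; `sample_episode` and its argument names are assumptions.

```python
import random

def sample_episode(labels_to_indices, n_way, k_shot, q_query, rng=random):
    """Sample one n-way k-shot episode.

    labels_to_indices: dict mapping class label -> list of example indices.
    Returns (support, query) lists of (example_index, episode_label) pairs:
    n_way classes drawn uniformly without replacement, then k_shot support
    and q_query query examples per class, disjoint within the episode.
    """
    classes = rng.sample(sorted(labels_to_indices), n_way)
    support, query = [], []
    for ep_label, c in enumerate(classes):
        picked = rng.sample(labels_to_indices[c], k_shot + q_query)
        support += [(i, ep_label) for i in picked[:k_shot]]
        query += [(i, ep_label) for i in picked[k_shot:]]
    return support, query
```

For a 5-way 5-shot episode with 15 queries per class this yields a support set of 25 examples and a query set of 75, with no example appearing in both.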
Hardware Specification: No. LLM Response: "The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. It only mentions the architecture used for attacks, but not the hardware it ran on."
Software Dependencies: No. LLM Response: "The paper mentions the use of the AdamW algorithm in Appendix C, but this refers to an optimization algorithm, not a specific software dependency with a version number. No other specific software versions are provided."
Experiment Setup: Yes. LLM Response: "In EP (Eq. 2), the hyperparameter α controls the amount of propagation performed in the graph and σ is the radius of the RBF function used to calculate the similarity matrix. ... where α is a weighting hyper-parameter set to 0.6 (found through random search) for all experiments. In contrast to label propagation (Zhou et al., 2004), embedding propagation is completely unsupervised, which makes it possible to apply it during MoCo's pre-training phase. For comparison, we also provide results with manifold mixup (Verma et al., 2019a) applied to q and k in the same way as EP. The main difference between the two methods is that manifold mixup considers random pairs of samples while EP takes into account the topology of the data. The hyperparameters of manifold mixup's Dirichlet distribution are the best found through random search. ... we used three common feature extractors: (i) a 4-layer convnet ... with 64 channels per layer, (ii) a 12-layer resnet ..., and (iii) a wide residual network (WRN-28-10). For mini and tieredImagenet, images are resized to 84×84. ... we use a perturbation ϵ of 0.03 for 40 and 100 steps, respectively. For FGSM (Goodfellow et al., 2014) we use a higher value of ϵ = 0.3 since it is not an iterative attack. All methods use ℓ as a distance measure. For the manifold mixup experiments we follow the setting proposed in (Verma et al., 2019a) with a mixing coefficient α of 2. All of the attacks were conducted on a Resnet-18 architecture trained for 20 epochs using the AdamW algorithm (Loshchilov and Hutter, 2017) with a learning rate of 0.01 and weight decay of 5 × 10^-4."
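The FGSM attack cited in this row (Goodfellow et al., 2014) is a single gradient-sign step of size ϵ. A minimal NumPy sketch on a toy logistic model illustrates the mechanics; the paper's attacks run against a Resnet-18, so the model, function names, and the [0, 1] clipping here are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """One-step FGSM on a logistic model p = sigmoid(w.x + b).

    For binary cross-entropy, d(loss)/dx = (p - y) * w, so the
    adversarial example is x + eps * sign(grad), clipped to the
    assumed valid input range [0, 1].
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # gradient of the loss w.r.t. the input
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

A single step with any positive ϵ already increases the loss on this toy model, which is why the row notes a larger ϵ = 0.3 for FGSM than the 0.03 used by the iterative attacks.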