Pooling by Sliced-Wasserstein Embedding

Authors: Navid Naderializadeh, Joseph F. Comer, Reed W. Andrews, Heiko Hoffmann, Soheil Kolouri

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our proposed pooling method on a wide variety of set-structured data, including point-cloud, graph, and image classification tasks, and demonstrate that our proposed method provides superior performance over existing set representation learning approaches.
Researcher Affiliation | Collaboration | Navid Naderializadeh, Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, nnaderi@seas.upenn.edu; Joseph F. Comer, Reed W. Andrews, Heiko Hoffmann, HRL Laboratories, LLC, Malibu, CA 90265, {jfcomer, rwandrews, hhoffmann}@hrl.com; Soheil Kolouri, Computer Science Department, Vanderbilt University, Nashville, TN 37235, soheil.kolouri@vanderbilt.edu
Pseudocode | Yes | Algorithm 1 (Pooling by Sliced-Wasserstein Embedding):

    procedure PSWE(X_i = {x_n^i ∈ R^d}_{n=1}^{N_i})
        Trainable parameters: slicer parameters Θ_L ∈ R^{d_θ × L}, reference elements X_0 ∈ R^{N × d}
        for l = 1 to L do
            Calculate g_{θ_l}(X_i) := {g_{θ_l}(x_n^i)}_{n=1}^{N_i} and g_{θ_l}(X_0) = {g_{θ_l}(x_n^0)}_{n=1}^{N}
            Calculate π_i = argsort(g_{θ_l}(X_i)), π_0 = argsort(g_{θ_l}(X_0)), and π_0^{-1}
            if N_i = N then
                Calculate ν_i^{θ_l} according to (10)
            else
                Calculate ν_i^{θ_l} according to (11)
        return φ(X_i) = [ν_i^{θ_1}, ..., ν_i^{θ_L}] ∈ R^{N × L}
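To make Algorithm 1 concrete, below is a minimal PyTorch sketch of PSWE under stated assumptions: linear slicers g_θ(x) = θᵀx (the paper allows more general parametric slicers), reference elements learned directly in the slicer output space (as the experiment-setup row below describes), and a simple linear-interpolation quantile estimate standing in for Eq. (11) when N_i ≠ N. Class and argument names are illustrative, not the authors' implementation; see the repository linked below for that.

    import torch
    import torch.nn as nn

    class PSWE(nn.Module):
        # Sketch of Pooling by Sliced-Wasserstein Embedding: project each set
        # element onto L learned slices, sort the values per slice, and read
        # them off in the order induced by a learned reference set.
        def __init__(self, d: int, num_ref: int, num_slices: int):
            super().__init__()
            self.theta = nn.Parameter(torch.randn(d, num_slices))      # linear slicer parameters (assumption)
            self.ref = nn.Parameter(torch.randn(num_ref, num_slices))  # reference, already in slicer output space

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (N_i, d), one input set; returns (num_ref, num_slices)
            proj = x @ self.theta                      # g_theta(X_i), shape (N_i, L)
            proj_sorted, _ = torch.sort(proj, dim=0)   # sort each slice independently
            n_i, n_ref = proj.shape[0], self.ref.shape[0]
            if n_i != n_ref:
                # N_i != N case (Eq. (11)): sample the empirical quantile
                # function at num_ref points by linear interpolation.
                pos = torch.linspace(0, n_i - 1, n_ref, device=x.device)
                lo = pos.floor().long()
                hi = torch.clamp(lo + 1, max=n_i - 1)
                frac = (pos - lo.float()).unsqueeze(1)
                proj_sorted = (1 - frac) * proj_sorted[lo] + frac * proj_sorted[hi]
            # N_i = N case (Eq. (10)): arrange the sorted input values by the
            # inverse permutation pi_0^{-1} that ranks the reference per slice.
            pi0_inv = torch.argsort(torch.argsort(self.ref, dim=0), dim=0)
            return torch.gather(proj_sorted, 0, pi0_inv)

A usage sketch, pooling a 500-point cloud in R^3 into an embedding whose size is independent of N_i:

    pool = PSWE(d=3, num_ref=64, num_slices=128)
    emb = pool(torch.randn(500, 3))   # shape (64, 128)
    vec = emb.flatten()               # fixed-size vector for a classifier head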
Open Source Code | Yes | Our code is available at https://github.com/navid-naderi/PSWE.
Open Datasets | Yes | We evaluate the proposed PSWE method on a variety of point cloud, graph, and image datasets as depicted in Figure 2. We consider the ModelNet40 dataset [44]... Next, we consider the prominent TUD benchmark [45]... NWPU-RESISC45 [46]... and Places-Extra69 [47]
Dataset Splits | Yes | ModelNet40 dataset... use the official split, with 9,843 training samples and 2,468 test samples. ...Table 2 shows the resulting 10-fold cross-validation accuracies on different datasets...
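The graph results are reported via 10-fold cross-validation. As a hedged sketch of such a protocol in scikit-learn (the labels, training step, and accuracy values below are placeholders; the authors' actual fold code lives in their repository):

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    labels = np.random.randint(0, 3, size=1000)  # placeholder per-graph class labels
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    accs = []
    for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
        # train on train_idx, evaluate on test_idx; stand-in accuracy here
        accs.append(np.random.rand())
    print(f"10-fold accuracy: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")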
Hardware Specification | No | The paper mentions 'wall-clock training and testing times' but does not specify the hardware used to run the experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper mentions software components like 'PyTorch implementation', 'multi-layer perceptron (MLP)', 'GCN', 'GAT', 'GIN', and 'ResNet18' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | In all the PSWE experiments, to ease the optimization process of reference elements, we optimize the reference elements at the output of the slicers rather than in the input space of the slicers. Moreover, for both PMA and PSWE, to reduce the embedding size, we use a similar weighting approach to that of FSPool... For PSWE, we set the number of slices to L = 1024 for the 16×16 patches + MLP backbone, and L = 1000 for the ResNet18 backbone.
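As a concrete illustration of these slice counts, a hypothetical instantiation of the PSWE sketch above (the feature dimensions and reference-set sizes are placeholders; only L = 1024 and L = 1000 come from the quoted setup):

    # Placeholder dims; 512 matches ResNet18's penultimate features, but
    # neither d nor num_ref here is taken from the paper.
    pswe_patches = PSWE(d=256, num_ref=32, num_slices=1024)  # 16x16 patches + MLP backbone
    pswe_resnet = PSWE(d=512, num_ref=32, num_slices=1000)   # ResNet18 backbone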