See without looking: joint visualization of sensitive multi-site datasets

Authors: Debbrata K. Saha, Vince D. Calhoun, Sandeep R. Panta, Sergey M. Plis

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Based on the MNIST dataset we introduce metrics for measuring the embedding quality and use them to compare dSNE to its centralized counterpart. We also apply dSNE to a multi-site neuroimaging dataset with encouraging results." "We base our experiments in this section on two datasets: MNIST (LeCun et al., 1998) for handwritten images of all digits in the 0 to 9 range, and Autism Brain Imaging Data Exchange (ABIDE) for fMRI data (Di Martino et al., 2014)."
Researcher Affiliation | Collaboration | Debbrata K. Saha (1,2), Vince D. Calhoun (1,2), Sandeep R. Panta (2), Sergey M. Plis (1,2); (1) University of New Mexico, (2) The Mind Research Network
Pseudocode | Yes | Algorithm 1 (Pairwise Affinities), Algorithm 2 (t-SNE), Algorithm 3 (single-shot dSNE), Algorithm 4 (Grad Step), Algorithm 5 (Update Step), Algorithm 6 (multi-shot dSNE)
Open Source Code | No | The paper does not provide an explicit statement or a link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "We base our experiments in this section on two datasets: MNIST (LeCun et al., 1998) for handwritten images of all digits in the 0 to 9 range, and Autism Brain Imaging Data Exchange (ABIDE) for fMRI data (Di Martino et al., 2014)."
Dataset Splits | No | The paper mentions how samples were picked (e.g., "randomly (but preserving class balance) pick 5,000 different samples") and how data was split for simulation ("randomly split these data into ten local and one reference datasets"). However, it does not specify explicit training, validation, or test splits (e.g., percentages or exact counts) of the kind typically needed to reproduce model training and evaluation.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., CPU, GPU models, memory).
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | Input: data X = [x_1, x_2, ..., x_N], x_i ∈ R^n; objective parameter: ρ (perplexity); optimization parameters: T (number of iterations), η (learning rate), α (momentum). Output: Y = {y_1, y_2, ..., y_N}, y_i ∈ R^m, m ≪ n.
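To make the experiment-setup row concrete, the sketch below implements the standard t-SNE building blocks that the paper's pseudocode names (Pairwise Affinities, Grad Step, Update Step) with the listed parameters T, η, and α. This is a minimal single-site illustration written for this report, not the authors' dSNE code; the fixed-bandwidth affinity (instead of a per-point bandwidth tuned to the perplexity ρ) is a simplifying assumption.

```python
import numpy as np

def pairwise_affinities(X, sigma=1.0):
    # Gaussian affinities in the high-dimensional space. A fixed bandwidth
    # sigma is used here for brevity; standard t-SNE tunes a per-point
    # bandwidth so each row's entropy matches the perplexity rho.
    D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    P = np.exp(-D / (2.0 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()
    return (P + P.T) / 2.0  # symmetrize, as in standard t-SNE

def grad_step(P, Y):
    # Student-t affinities Q in the low-dimensional embedding and the
    # gradient of KL(P || Q) with respect to Y.
    D = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    W = 1.0 / (1.0 + D)
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    diff = Y[:, None, :] - Y[None, :, :]
    return 4.0 * np.einsum("ij,ijk->ik", (P - Q) * W, diff)

def tsne(X, m=2, T=200, eta=100.0, alpha=0.5, seed=0):
    # Update Step: gradient descent with momentum for T iterations,
    # mapping N points in R^n to R^m with m << n.
    rng = np.random.default_rng(seed)
    P = pairwise_affinities(X)
    Y = rng.normal(scale=1e-2, size=(X.shape[0], m))
    V = np.zeros_like(Y)  # momentum buffer
    for _ in range(T):
        V = alpha * V - eta * grad_step(P, Y)
        Y += V
    return Y

X = np.random.default_rng(1).normal(size=(60, 10))  # N=60 samples, n=10
Y = tsne(X)
print(Y.shape)  # (60, 2)
```

The decentralized variants in the paper (single-shot and multi-shot dSNE) build on these same steps but exchange only embeddings of a shared reference dataset between sites, so the raw multi-site data X never leaves its site.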