ACE: Explaining cluster from an adversarial perspective

Authors: Yang Young Lu, Timothy C Yu, Giancarlo Bonora, William Stafford Noble

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we demonstrate that ACE is able to identify gene panels that are both highly discriminative and nonredundant, and we demonstrate the applicability of ACE to an image recognition task." Our experiments demonstrate that ACE identifies gene panels that are highly discriminative and exhibit low redundancy. We further provide results suggesting that ACE is useful in domains beyond biology, such as image recognition.
Researcher Affiliation | Academia | 1 Department of Genome Sciences, University of Washington, Seattle, WA; 2 Graduate Program in Molecular and Cellular Biology, University of Washington, Seattle, WA; 3 Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA.
Pseudocode | No | The paper describes its approach conceptually and through mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | "The Apache licensed source code of ACE will be available at bitbucket.org/noblelab/ace."
Open Datasets | Yes | To compare ACE to each of the baseline methods, we used a recently reported simulation method, SymSim (Zhang et al., 2019), to generate two synthetic scRNA-seq datasets: one clean dataset and one complex dataset. We next applied ACE to a real dataset of peripheral blood mononuclear cells (PBMCs) (Zheng et al., 2017), and we applied ACE to the MNIST handwritten digits dataset (LeCun, 1998).
Dataset Splits | Yes | The classification performance, in terms of area under the receiver operating characteristic curve (AUROC), is evaluated by 3-fold stratified cross-validation, and an additional 3-fold cross-validation is applied within each training split to determine the optimal C hyperparameter.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions software such as Scikit-learn but does not provide specific version numbers for any software dependencies required to reproduce the experiments.
Experiment Setup | Yes | The SVM training involves two hyperparameters, the regularization coefficient C and the bandwidth parameter σ... The C parameter is selected by grid search from {5^-5, 5^-4, ..., 5^0, ..., 5^4, 5^5}. We used a simple convolutional neural network architecture containing two convolution layers, each with a modest filter size (5, 5), a modest number of filters (32), and ReLU activation, followed by a max pooling layer with a pool size (2, 2), a fully connected layer, and a softmax layer. The model was trained on the MNIST training set (60,000 examples) for 10 epochs, using Adam (Kingma & Ba, 2015) with an initial learning rate of 0.001.
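The nested evaluation quoted above (outer 3-fold stratified cross-validation reporting AUROC, with an inner 3-fold grid search over C on each training split) can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the toy dataset is synthetic, and only the C grid {5^-5, ..., 5^5} is taken from the quoted setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in data (the paper uses scRNA-seq and MNIST features).
X, y = make_classification(n_samples=120, n_features=20, random_state=0)

# C grid from the quoted setup: {5^-5, 5^-4, ..., 5^4, 5^5}.
param_grid = {"C": [5.0 ** k for k in range(-5, 6)]}

# Inner 3-fold CV selects C on each outer training split.
inner_cv = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=StratifiedKFold(3))

# Outer 3-fold stratified CV reports AUROC.
outer_scores = cross_val_score(
    inner_cv, X, y, cv=StratifiedKFold(3), scoring="roc_auc"
)
print(outer_scores.mean())
```

Passing the `GridSearchCV` object itself as the estimator to `cross_val_score` is what makes the hyperparameter search happen independently inside each outer training split, avoiding optimistic bias from tuning on the test folds.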
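The CNN described in the Experiment Setup row can also be written out concretely. The sketch below (in PyTorch, which the paper does not specify) follows the quoted description: two 5x5 convolution layers with 32 filters and ReLU, a 2x2 max pool, a fully connected layer, and a softmax output. Details the quote leaves open, such as whether pooling follows each convolution or only the second, are assumptions here.

```python
import torch
import torch.nn as nn

# Hypothetical reconstruction of the described architecture:
# conv(5x5, 32) -> ReLU -> conv(5x5, 32) -> ReLU -> maxpool(2x2)
# -> fully connected -> softmax over 10 digit classes.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5),   # 28x28 -> 24x24
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=5),  # 24x24 -> 20x20
    nn.ReLU(),
    nn.MaxPool2d(2),                   # 20x20 -> 10x10
    nn.Flatten(),
    nn.Linear(32 * 10 * 10, 10),
    nn.Softmax(dim=1),
)

# Optimizer matching the quoted training setup (Adam, lr = 0.001).
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# One MNIST-sized grayscale image through the untrained network.
x = torch.randn(1, 1, 28, 28)
probs = model(x)
print(probs.shape)  # torch.Size([1, 10])
```

The forward pass yields a length-10 probability vector per image; training for 10 epochs on the 60,000-example MNIST training set, as quoted, would simply wrap this in a standard cross-entropy loop.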