Neural Networks Classify through the Class-Wise Means of Their Representations

Authors: Mohamed El Amine Seddik, Mohamed Tamaazousti

AAAI 2022, pp. 8204-8211

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments are performed on four datasets: MNIST, Fashion MNIST, Cifar10, and Cifar100, using a standard CNN architecture.
Researcher Affiliation | Collaboration | (1) Huawei Paris Research Center, 92012 Boulogne-Billancourt, France; (2) Université Paris-Saclay, CEA, List, F-91120 Palaiseau, France
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions 'open source pretrained models' with a link to https://modelzoo.co/, but this refers to external resources rather than the authors' own code for the methodology described in the paper. There is no explicit statement or link providing access to their source code.
Open Datasets | Yes | All experiments are performed on four datasets: MNIST (LeCun 1998), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), Cifar10 (Krizhevsky and Hinton 2010), and Cifar100 (Krizhevsky, Nair, and Hinton 2009).
Dataset Splits | No | The paper describes the number of training and test images for each dataset (e.g., 'n = 60000 training images and 10000 test images' for MNIST), but does not explicitly mention a separate validation split or how it was derived.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions using 'Keras (Géron 2019)' for implementation, but 'Géron 2019' is a book citation, not a specific software version number. No other software dependencies with version numbers are provided.
Experiment Setup | Yes | Four CNN networks having the above architecture are trained on the four considered datasets for 50 epochs (except Cifar100, with 100 epochs), using a batch size of 1000 images and the Adam optimizer (Kingma and Ba 2014).
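
As a rough illustration of the reported setup, here is a minimal Keras sketch for the MNIST case. The optimizer, batch size, and epoch count follow the description quoted above; the small CNN itself is a hypothetical stand-in, since the paper's exact "standard CNN architecture" is not reproduced in this summary.

```python
# Minimal sketch of the reported training setup on MNIST.
# Assumptions: the CNN below is illustrative only; the paper's
# exact architecture is not detailed in this reproducibility summary.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# MNIST: 60000 training and 10000 test images, as stated in the paper.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Hypothetical small CNN standing in for the paper's architecture.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Settings quoted from the paper: Adam optimizer, batch size 1000,
# 50 epochs (100 epochs for Cifar100).
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=1000, epochs=50,
          validation_data=(x_test, y_test))
```

Note that the paper reports no separate validation split, so the test set is passed to `validation_data` here purely for monitoring; a reproduction attempting model selection would need to carve its own validation set out of the 60000 training images.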