A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs
Authors: Ido Nachum, Jan Hazla, Michael Gastpar, Anatoly Khina
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | For these datasets, clearly ρout ≈ ρin, meaning that the relation of ReLU FNNs (2), represented in Figure 1 by the yellow curve, breaks down. That said, for inputs consisting of i.i.d. zero-mean Gaussians (and filters comprising i.i.d. zero-mean Gaussian weights as before) with a Pearson correlation coefficient ρ between corresponding entries (and independent otherwise), the relation in (2) between ρout and ρin of ReLU FNNs does hold for ReLU CNNs as well, as illustrated in Figure 1a. Figure 1: Input and output cosine similarities of a single randomly initialized convolutional layer with 100 filters. Each red circle in the figures represents a random pair of images chosen from the corresponding dataset; 200 pairs were sampled in each figure. |
| Researcher Affiliation | Academia | Ido Nachum, Jan Hazła, Michael Gastpar, School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland, forename.surname@epfl.ch; Anatoly Khina, School of Electrical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel, anatolyk@eng.tau.ac.il |
| Pseudocode | No | No structured pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | No | No explicit statement or link providing access to open-source code for the described methodology is found in the paper. |
| Open Datasets | Yes | For these datasets, clearly ρout ≈ ρin, meaning that the relation of ReLU FNNs (2), represented in Figure 1 by the yellow curve, breaks down. That said, for inputs consisting of i.i.d. zero-mean Gaussians (and filters comprising i.i.d. zero-mean Gaussian weights as before) with a Pearson correlation coefficient ρ between corresponding entries (and independent otherwise), the relation in (2) between ρout and ρin of ReLU FNNs does hold for ReLU CNNs as well, as illustrated in Figure 1a. Figure 1: Input and output cosine similarities of a single randomly initialized convolutional layer with 100 filters. Each red circle in the figures represents a random pair of images chosen from the corresponding dataset; 200 pairs were sampled in each figure. Panels: (b) F-MNIST, filter size 3×3; (c) CIFAR-10, filter size 3×3; (d) ImageNet, filter size 11×11×3. |
| Dataset Splits | No | The paper focuses on analyzing the geometric representation changes after applying randomly initialized layers, rather than training models. It does not provide specific training, validation, or test dataset splits or cross-validation details for model training and evaluation. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, processors, or memory specifications) used for running the experiments or computations are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks) needed to replicate the work. |
| Experiment Setup | Yes | Figure 1: Input and output cosine similarities of a single randomly initialized convolutional layer with 100 filters. Each red circle in the figures represents a random pair of images chosen from the corresponding dataset; 200 pairs were sampled in each figure. Weights are initialized by independent identically distributed (i.i.d.) Gaussian weights with mean zero and variance 1/N, where N is the number of neurons in that layer. Filter sizes: 3×3 / 11×11×3. |
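The setup described above (a single randomly initialized convolutional layer with 100 filters, i.i.d. zero-mean Gaussian weights, and input pairs with a prescribed Pearson correlation ρ) can be sketched as follows. This is a minimal illustration, not the authors' code: the image size, the fan-in-based variance scaling, and the naive "valid" convolution are assumptions made for the sketch.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened signals."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def conv2d_valid(img, filt):
    """Naive 'valid' 2-D cross-correlation of a single-channel image."""
    H, W = img.shape
    k = filt.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * filt)
    return out

rng = np.random.default_rng(0)
n_filters, k, H = 100, 3, 16  # 100 filters of size 3x3; 16x16 inputs (assumed size)
# Zero-mean Gaussian weights; variance 1/N with N taken as the filter fan-in
# (k*k) -- one plausible reading of "number of neurons in that layer".
filters = rng.normal(0.0, np.sqrt(1.0 / (k * k)), size=(n_filters, k, k))

def relu_conv_layer(img):
    """Apply all filters, then ReLU, and flatten the feature maps."""
    return np.maximum(0.0, np.stack([conv2d_valid(img, f) for f in filters])).ravel()

# A pair of i.i.d. zero-mean Gaussian inputs whose corresponding entries
# have Pearson correlation rho (independent otherwise), as in the paper.
rho = 0.5
z1, z2 = rng.standard_normal((2, H, H))
x1 = z1
x2 = rho * z1 + np.sqrt(1.0 - rho**2) * z2

rho_in = cosine(x1.ravel(), x2.ravel())
rho_out = cosine(relu_conv_layer(x1), relu_conv_layer(x2))
print(f"rho_in  = {rho_in:.3f}")
print(f"rho_out = {rho_out:.3f}")
```

Repeating this over many sampled pairs (200 per figure in the paper) and scattering (ρin, ρout) reproduces the kind of plot shown in Figure 1a.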