Understanding Representation Learnability of Nonlinear Self-Supervised Learning
Authors: Ruofeng Yang, Xiangyuan Li, Bo Jiang, Shuai Li
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also present the learning processes and results of the nonlinear SSL and SL model via simulation experiments. |
| Researcher Affiliation | Academia | Shanghai Jiao Tong University wanshuiyin@sjtu.edu.cn, lixiangyuan19@sjtu.edu.cn, bjiang@sjtu.edu.cn, shuaili8@sjtu.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. It describes methods and proofs in prose and mathematical notation. |
| Open Source Code | Yes | The code for this section is available at https://github.com/wanshuiyin/AAAI-2023-The-Learnability-of-Nonlinear-SSL. |
| Open Datasets | No | The paper designs a "toy data distribution" (Section 3.1) for its experiments and does not use a publicly available dataset with a specific link, DOI, repository name, or formal citation. |
| Dataset Splits | No | The paper describes the construction of its custom data distribution but does not specify any training, validation, or test dataset splits. |
| Hardware Specification | Yes | All experiments are conducted on a desktop with an AMD Ryzen 7 5800H with Radeon Graphics (3.20 GHz) and 16 GB memory. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., specific PyTorch or TensorFlow versions) used for the experiments. |
| Experiment Setup | Yes | In this section, we choose τ = 7, d = 10, ρ = 1/d1.5, α = 1/800, n = d2 and learning rate η = 0.001 if we do not specify otherwise. Experiments are averaged over 20 random seeds, and we show the average results with 95% confidence interval for learning curves. |
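The hyperparameters quoted in the experiment-setup row can be collected into a small configuration sketch. This is a hypothetical illustration (the variable names `tau`, `rho`, `alpha`, etc. mirror the paper's symbols; the dictionary structure is our own assumption, not code from the authors' repository):

```python
# Hypothetical configuration mirroring the reported setup:
# tau = 7, d = 10, rho = 1/d^1.5, alpha = 1/800, n = d^2, eta = 0.001.
d = 10

config = {
    "tau": 7,                  # temperature-like parameter in the paper
    "d": d,                    # data dimension
    "rho": 1 / d**1.5,         # rho = 1/d^1.5
    "alpha": 1 / 800,          # alpha = 1/800
    "n": d**2,                 # number of samples, n = d^2
    "learning_rate": 0.001,    # eta = 0.001
    "num_seeds": 20,           # results averaged over 20 random seeds
    "ci": 0.95,                # 95% confidence interval for learning curves
}

print(config["n"])  # → 100
```

With `d = 10`, this gives `n = 100` samples and `rho ≈ 0.0316`, which is what the formulas `n = d^2` and `rho = 1/d^1.5` imply for the reported dimension.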