Information Competing Process for Learning Diversified Representations
Authors: Jie Hu, Rongrong Ji, ShengChuan Zhang, Xiaoshuai Sun, Qixiang Ye, Chia-Wen Lin, Qi Tian
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on image classification and image reconstruction tasks demonstrate the great potential of ICP to learn discriminative and disentangled representations in both supervised and self-supervised learning settings. |
| Researcher Affiliation | Collaboration | Jie Hu (1,2), Rongrong Ji (1,2,3), ShengChuan Zhang (1), Xiaoshuai Sun (1), Qixiang Ye (4), Chia-Wen Lin (5), Qi Tian (6). 1: Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University. 2: National Institute for Data Science in Health and Medicine, Xiamen University. 3: Peng Cheng Laboratory. 4: University of Chinese Academy of Sciences. 5: National Tsing Hua University. 6: Noah's Ark Lab, Huawei. |
| Pseudocode | Yes | Algorithm 1: Optimization of Information Competing Process |
| Open Source Code | Yes | Codes, models and experimental results are all available at https://github.com/hujiecpp/InformationCompetingProcess/ |
| Open Datasets | Yes | CIFAR-10 and CIFAR-100 [21] are used to evaluate the performance of ICP in the image classification task. These datasets contain natural images belonging to 10 and 100 classes respectively; CIFAR-100 comes with finer labels than CIFAR-10. The raw images are 32×32 pixels and are normalized using the channel means and standard deviations. Standard data augmentation by random cropping and mirroring is applied to the training set (a minimal preprocessing sketch follows the table). |
| Dataset Splits | Yes | Standard data augmentation by random cropping and mirroring is applied to the training set. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions that components are "implemented by neural networks" and use "backpropagation" but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers. |
| Experiment Setup | Yes | In experiments, all the probabilistic feature extractors, task solvers, predictor and discriminator are implemented by neural networks. Q(z), Q(t\|r), Q(t\|z), Q(t\|y) are assumed to be standard Gaussian distributions, and the reparameterization trick is used following VAE [19]. The objectives are differentiable and trained using backpropagation. In the classification task (supervised setting), one fully-connected layer is used as the classifier; in the reconstruction task (self-supervised setting), multiple deconvolution layers are used as the decoder to reconstruct the inputs (see the sketch after this table). |
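
The dataset preprocessing quoted in the Open Datasets row (channel-wise normalization plus random cropping and mirroring on the training set) is standard for CIFAR. Below is a minimal sketch assuming PyTorch/torchvision, since the paper does not name a framework; the padding size and the channel statistics are common CIFAR-10 conventions, not values taken from the paper.

```python
# Minimal CIFAR-10 preprocessing sketch, assuming PyTorch/torchvision.
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

# Widely used CIFAR-10 channel means and standard deviations (assumed,
# not quoted from the paper).
MEAN = (0.4914, 0.4822, 0.4465)
STD = (0.2470, 0.2435, 0.2616)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),   # standard random cropping
    T.RandomHorizontalFlip(),      # mirroring
    T.ToTensor(),
    T.Normalize(MEAN, STD),        # channel-wise normalization
])
test_transform = T.Compose([T.ToTensor(), T.Normalize(MEAN, STD)])

train_set = CIFAR10(root="./data", train=True, download=True,
                    transform=train_transform)
test_set = CIFAR10(root="./data", train=False, download=True,
                   transform=test_transform)
```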
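
The Experiment Setup row describes probabilistic feature extractors with Gaussian outputs sampled via the reparameterization trick, a single fully-connected classifier for the supervised setting, and a deconvolutional decoder for the self-supervised setting. The following sketch illustrates these pieces, again assuming PyTorch; all layer sizes and channel widths are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the quoted setup, assuming PyTorch (framework unspecified in
# the paper). Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Maps backbone features to (mu, logvar) and samples z by reparameterization."""
    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)              # eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps  # reparameterization trick (VAE [19])
        return z, mu, logvar

z_dim, num_classes = 128, 10  # illustrative sizes (CIFAR-10 has 10 classes)

# Supervised setting: one fully-connected layer as the classifier.
classifier = nn.Linear(z_dim, num_classes)

# Self-supervised setting: multiple deconvolution layers as the decoder,
# mapping z back to a 32x32 RGB image.
decoder = nn.Sequential(
    nn.Linear(z_dim, 128 * 4 * 4),
    nn.Unflatten(1, (128, 4, 4)),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 4x4 -> 8x8
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 8x8 -> 16x16
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 16x16 -> 32x32
    nn.Sigmoid(),                                          # pixels in [0, 1]
)

head = GaussianHead(in_dim=512, z_dim=z_dim)
h = torch.randn(8, 512)   # stand-in for backbone features
z, mu, logvar = head(h)
logits = classifier(z)    # classification branch
recon = decoder(z)        # reconstruction branch; both train by backpropagation
```

Both branches are differentiable end to end, matching the paper's statement that the objectives are trained using backpropagation.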