Adversarial Learning for Robust Deep Clustering
Authors: Xu Yang, Cheng Deng, Kun Wei, Junchi Yan, Wei Liu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two popular datasets show that the proposed adversarial learning method can significantly enhance the robustness and further improve the overall clustering performance. |
| Researcher Affiliation | Collaboration | 1 School of Electronic Engineering, Xidian University, Xi'an 710071, China; 2 Department of CSE and MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University; 3 Tencent AI Lab, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1 Adversarial Learning for Robust Deep Clustering |
| Open Source Code | Yes | The source code is available at https://github.com/xdxuyang/ALRDC. |
| Open Datasets | Yes | MNIST [18]: containing a total of 70,000 handwritten digits with 60,000 training and 10,000 testing samples, each being a 28 × 28 monochrome image. Fashion MNIST [31]: having the same number of images with the same image size as MNIST, but fairly more complicated. |
| Dataset Splits | Yes | MNIST [18]: containing a total of 70,000 handwritten digits with 60,000 training and 10,000 testing samples, each being a 28 × 28 monochrome image. Fashion MNIST [31]: having the same number of images with the same image size as MNIST, but fairly more complicated. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper does not provide specific version numbers for ancillary software dependencies (e.g., Python, PyTorch, TensorFlow, or other libraries). |
| Experiment Setup | Yes | In our experiments, we set λ = 1. The hyper-parameters β and γ are determined by different networks and datasets... For MNIST, the channel numbers and kernel sizes of the autoencoder network are the same as those in [37], and we employ one convolutional layer and three following residual blocks in the encoder for Fashion-MNIST. The clustering layers consist of four fully-connected layers, and ReLU is employed as nonlinear activation. |
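The quoted setup specifies that the clustering layers are four fully-connected layers with ReLU activation. A minimal NumPy sketch of such a head is given below; the layer widths, the 32-dimensional latent input, and the softmax over 10 cluster assignments are illustrative assumptions, not values stated in the paper.

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear unit, as named in the paper's setup.
    return np.maximum(x, 0.0)

def softmax(x):
    # Row-wise softmax with the usual max-shift for numerical stability.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def clustering_head(z, weights, biases):
    """Four fully-connected layers: ReLU between hidden layers,
    softmax on the output to yield soft cluster assignments (assumed)."""
    h = z
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return softmax(h @ weights[-1] + biases[-1])

# Hypothetical sizes: 32-d latent code -> 10 clusters (MNIST has 10 classes).
rng = np.random.default_rng(0)
dims = [32, 256, 128, 64, 10]  # four weight matrices = four FC layers
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims[:-1], dims[1:])]
biases = [np.zeros(b) for b in dims[1:]]

probs = clustering_head(rng.standard_normal((5, 32)), weights, biases)
# probs has shape (5, 10); each row sums to 1.
```

In the released PyTorch code this head would naturally be a stack of `nn.Linear` modules; the NumPy version above only illustrates the layer structure described in the quote.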