Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework

Authors: Haonan Huang, Guoxu Zhou, Yanghang Zheng, Yuning Qiu, Andong Wang, Qibin Zhao

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on multi-view datasets confirmed that our attack framework effectively reduces the clustering performance of the target model. Furthermore, our proposed adversarially robust method is also demonstrated to be an effective defense against such attacks.
Researcher Affiliation | Academia | (1) School of Automation, Guangdong University of Technology, Guangzhou, China; (2) RIKEN AIP, Tokyo, Japan; (3) Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou, China.
Pseudocode | Yes | Algorithm 1 (Algorithm for Attacking DMVC) and Algorithm 2 (Algorithm of Training AR-DMVC-AM).
Open Source Code | Yes | Code is available at https://github.com/libertyhhn/AR-DMVC.
Open Datasets | Yes | For evaluation, we utilize the following four benchmark multi-view datasets: RegDB (Nguyen et al., 2017), Noisy Fashion, Noisy MNIST, and Patched MNIST (Trosten et al., 2023).
Dataset Splits | No | The paper states, 'For all datasets, we randomly split 50% of the data for training and the remaining 50% for testing.' It does not explicitly mention a separate validation split. (A minimal split sketch appears after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as CPU or GPU models or memory specifications.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow x.x, PyTorch x.x, or specific library versions) that would be necessary for reproducibility.
Experiment Setup | Yes | Concerning the adversarial attack hyperparameters in Eq. 6, we adhere to the configuration outlined in (Chhabra et al., 2022), and the values of µ1, µ2, and µ3 are set to 5, 5, and 1, respectively. Regarding ε, the assigned values are 0.2 for RegDB, 0.15 for Noisy Fashion, 0.3 for Noisy MNIST, and 0.3 for Patched MNIST. In Eq. 14, the trade-off coefficient λ is introduced to regulate varying levels of the strength of adversarial training, while γ is incorporated to govern the contribution of predictive consistency in our framework... we consistently set it to 1 in all experiments. From Figure 4(b), it can be observed that AR-DMVC-AM reaches a stable state at epoch 30. Consequently, we set the number of epochs to 30 in our experiments under attack. (These values are collected in the hedged configuration sketch after the table.)
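The 50/50 split quoted under Dataset Splits can be reproduced with a few lines of index shuffling. A minimal sketch, assuming NumPy; the seed, the `split_half` helper, and the `views` variable are illustrative assumptions, not details reported in the paper:

```python
# Minimal sketch of a random 50%/50% train/test split (no validation split),
# matching the setup quoted above. The seed is an illustrative assumption;
# the paper does not report one.
import numpy as np

def split_half(n_samples, seed=0):
    """Return index arrays for a random 50% train / 50% test split."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    half = n_samples // 2
    return perm[:half], perm[half:]

# Hypothetical usage on a two-view dataset with 10,000 samples:
train_idx, test_idx = split_half(10_000)
# views = [X_view1, X_view2]                   # per-view feature matrices
# train_views = [X[train_idx] for X in views]  # same indices across all views
# test_views  = [X[test_idx]  for X in views]
```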
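The experiment-setup values quoted above fit into one small configuration block. A hedged sketch, assuming plain Python dictionaries: the key names and grouping are my own, only the numeric values come from the quoted text, and λ is left unset because no single value is quoted in this section:

```python
# Hedged configuration sketch; key names are assumptions, values are quoted above.
ATTACK_WEIGHTS = {"mu1": 5, "mu2": 5, "mu3": 1}  # Eq. 6, following Chhabra et al. (2022)

EPSILON = {            # per-dataset attack value quoted above
    "RegDB": 0.2,
    "Noisy Fashion": 0.15,
    "Noisy MNIST": 0.3,
    "Patched MNIST": 0.3,
}

DEFENSE = {
    "gamma": 1,    # Eq. 14 predictive-consistency weight; the quote's "set it to 1" is read here as gamma
    "epochs": 30,  # AR-DMVC-AM reported stable at epoch 30 (Figure 4(b))
    # "lambda": adversarial-training strength in Eq. 14; varied in the paper, no single value quoted here
}
```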