Unlearning from Weakly Supervised Learning
Authors: Yi Tang, Yi Gao, Yong-gang Luo, Ju-Cheng Yang, Miao Xu, Min-Ling Zhang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies show the superiority of the proposed approach. We conduct experimental studies on three widely-used datasets: MNIST, Fashion-MNIST (Fashion), and CIFAR10. |
| Researcher Affiliation | Collaboration | Yi Tang¹, Yi Gao²,³, Yong-Gang Luo⁴, Ju-Cheng Yang⁴, Miao Xu⁵, Min-Ling Zhang⁶,³; ¹School of Automation, Southeast University, China; ²School of Cyber Science and Engineering, Southeast University, China; ³Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; ⁴AI LAB, Chongqing Changan Automobile Co., Ltd.; ⁵University of Queensland, Australia; ⁶School of Computer Science and Engineering, Southeast University, China |
| Pseudocode | No | The paper describes its methods mathematically and with text, but does not provide pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is released at https://github.com/Ehwartz/udru. |
| Open Datasets | Yes | We conduct experimental studies on three widely-used datasets: MNIST, Fashion-MNIST (Fashion), and CIFAR10. |
| Dataset Splits | No | The paper provides details on training parameters and how unlearning data is sampled, but it does not specify explicit training/validation/test dataset splits (e.g., percentages or counts for each). |
| Hardware Specification | Yes | We implement our experiments using PyTorch on NVIDIA RTX 4090. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The partial rate of PLL is set as 0.2, and the noise rate of NLL is 0.2. Specifically, MLP, CNN, and ResNet50 are used to identify MNIST, Fashion, and CIFAR10, respectively. For the selection of loss functions, Cross-Entropy loss is used for SL, and Classifier-Consistent Loss [Feng et al., 2020] and Generalized Cross Entropy Loss [Zhang and Sabuncu, 2018] are used to train models in PLL and NLL, respectively. We train models using SGD with a learning rate of 10⁻⁴ and a weight decay of 10⁻³. The batch size and epoch are set as 64 and 256, respectively. Batch size for Fashion and CIFAR10 is set as 64, while we do not divide MNIST into batches. The learning rate for all three datasets is set as 10⁻⁴; δ in UDRU is set as 1 for MNIST and Fashion, and 10⁻² for CIFAR10. |
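
For concreteness, the sketch below shows one way the reported training setup could be wired up in PyTorch. It is not taken from the paper or its released code (https://github.com/Ehwartz/udru): the GCE loss hyperparameter `q` is an assumption (0.7 is the common default from Zhang and Sabuncu, 2018), and the model and data choices simply follow the CIFAR10/ResNet50 configuration quoted above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Hypothetical reconstruction of the reported setup; the authors' released
# code is the authoritative reference for UDRU itself.

class GCELoss(nn.Module):
    """Generalized Cross Entropy loss (Zhang & Sabuncu, 2018) for noisy labels.
    The paper does not report q; 0.7 is the commonly used default."""
    def __init__(self, q: float = 0.7):
        super().__init__()
        self.q = q

    def forward(self, logits, targets):
        probs = F.softmax(logits, dim=1)
        p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        # L_q = (1 - p_y^q) / q, averaged over the batch
        return ((1.0 - p_true.clamp(min=1e-7) ** self.q) / self.q).mean()

# ResNet50 for CIFAR10, as stated in the setup (MLP/CNN for MNIST/Fashion).
model = models.resnet50(num_classes=10)

# Reported optimizer settings: SGD, learning rate 1e-4, weight decay 1e-3.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, weight_decay=1e-3)

# Reported batch size 64 for Fashion/CIFAR10 and 256 training epochs.
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

criterion = GCELoss()  # NLL setting; Cross-Entropy loss would be used for SL

for epoch in range(256):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```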