Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need
Authors: Xianlong Wang, Minghui Li, Wei Liu, Hangtao Zhang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Hai Jin
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both theoretical and empirical results (including 6 datasets, 16 models, and 2 tasks) demonstrate the effectiveness of our proposed unlearnable framework. |
| Researcher Affiliation | Academia | National Engineering Research Center for Big Data Technology and System; Services Computing Technology and System Lab; Cluster and Grid Computing Lab; Hubei Engineering Research Center on Big Data Security; Hubei Key Laboratory of Distributed System Security; School of Cyber Science and Engineering, Huazhong University of Science and Technology; School of Software Engineering, Huazhong University of Science and Technology; School of Computer Science and Technology, Huazhong University of Science and Technology. {wxl99,minghuili,weiliu73,hangt_zhang,hushengshan,ycz,zhouziqi,hjin}@hust.edu.cn |
| Pseudocode | Yes | Algorithm 1 Our proposed UMT scheme |
| Open Source Code | Yes | Our code is available at https://github.com/CGCL-codes/UnlearnablePC. |
| Open Datasets | Yes | Three synthetic 3D point cloud datasets, ModelNet40 [50], ModelNet10 [50], ShapeNetPart [4], and three real-world datasets including autonomous driving dataset KITTI [32] and indoor datasets ScanObjectNN [41], S3DIS [2] are used. |
| Dataset Splits | No | ModelNet40 [50]: ... comprising 9843 training and 2468 test point cloud data. ...ShapeNetPart [4]: ... comprising 12137 training and 2874 test point cloud samples. ...ScanObjectNN [41]: ... comprising 2309 training samples and 581 test samples. |
| Hardware Specification | Yes | Our experiments are conducted on a server running a 64-bit Ubuntu 20.04.1 system with an Intel Xeon Silver 4210R CPU @ 2.40GHz, 125GB memory, and four NVIDIA GeForce RTX 3090 GPUs, each with 24GB memory. |
| Software Dependencies | Yes | The experiments are performed using the Python language, version 3.8.19, and the PyTorch library, version 1.12.1. |
| Experiment Setup | Yes | The training process involves the Adam optimizer [22], CosineAnnealingLR scheduler [30], an initial learning rate of 0.001, and a weight decay of 0.0001. We empirically set r_s, r_p, b_l, b_u, ω_l, ω_u, h_l, and h_u to 15, 120, 0.6, 0.8, 0, 20, 0, and 0.4, respectively. The model training process on the unlearnable dataset and the clean dataset remains consistent, using the Adam optimizer [22], CosineAnnealingLR scheduler [30], an initial learning rate of 0.001, a weight decay of 0.0001, a batch size of 16 (due to insufficient GPU memory, the batch size is set to 8 when training 3DGCN on the ModelNet40 dataset), and training for 80 epochs. Due to the longer training process required by PCT [16], the training epochs for PCT in Tab. 1 on the ModelNet10, ModelNet40, and ScanObjectNN datasets are all set to 240. (A minimal sketch of this training configuration follows the table.) |
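
To make the quoted setup concrete, below is a minimal PyTorch training-loop sketch wiring up the reported settings: Adam, initial learning rate 0.001, weight decay 0.0001, cosine annealing, and 80 epochs. The `model` and `train_loader` arguments are hypothetical stand-ins for the paper's point cloud classifiers and data loaders, and `T_max=epochs` is an assumption, since the report quotes the scheduler but not its period.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import CosineAnnealingLR

def train(model: nn.Module, train_loader, epochs: int = 80, device: str = "cuda"):
    """Train a point cloud classifier with the reported recipe.

    `model` and `train_loader` are hypothetical stand-ins: any classifier that
    maps a batch of point clouds to class logits fits this loop.
    """
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    # Quoted settings: Adam, initial lr 0.001, weight decay 0.0001.
    optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    # Assumption: anneal over the full run; the report does not state T_max.
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)

    for _ in range(epochs):
        model.train()
        for points, labels in train_loader:
            points, labels = points.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(points), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()  # advance the cosine schedule once per epoch
    return model
```

The quoted batch size of 16 (8 for 3DGCN on ModelNet40) would be set on the `DataLoader` that produces `train_loader`, and PCT's longer schedule corresponds to calling `train(..., epochs=240)`.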