Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Enhancing Learning with Label Differential Privacy by Vector Approximation
Authors: Puning Zhao, Jiafei Wu, Zhe Liu, Li Shen, Zhikun Zhang, Rongfei Fan, Le Sun, Qingming Li
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct experiments on both synthesized and real datasets. Numerical results on synthesized data validate the theoretical analysis, showing that the performance of our method decays only slightly with K. Experiments on real datasets also validate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Puning Zhao, Jiafei Wu, Zhe Liu (Zhejiang Lab, Hangzhou, Zhejiang, China); Li Shen (Shenzhen Campus of Sun Yat-sen University, Shenzhen, Guangdong, China); Zhikun Zhang (Zhejiang University, Hangzhou, Zhejiang, China); Rongfei Fan (Beijing Institute of Technology); Le Sun (Nanjing University of Information Science and Technology); Qingming Li (Zhejiang University) |
| Pseudocode | No | The paper describes methods and concepts in paragraph text and uses a figure (Figure 1) to illustrate a comparison, but it does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | Now we evaluate our new method on standard benchmark datasets that have been widely used in previous works on differentially private machine learning, including MNIST (Le Cun, 1998), Fashion MNIST (Xiao et al., 2017), CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions using standard benchmark datasets (MNIST, Fashion MNIST, CIFAR-10, CIFAR-100) but does not explicitly state the training/test/validation split ratios, sample counts, or specific methodology for creating these splits. It implies the use of standard splits for these datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and types of neural networks ('simple convolutional neural network', 'ResNet-18'), but it does not specify any software names with version numbers (e.g., Python version, PyTorch/TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | For the MNIST and Fashion MNIST datasets, we use a simple convolutional neural network composed of two convolution and pooling layers with dropout rate 0.5... In our experiments, we set the batch size to be 400, and use the Adam optimizer with learning rate 0.001. |
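The reported MNIST/Fashion MNIST setup (a simple CNN with two convolution-and-pooling layers, dropout rate 0.5, batch size 400, and the Adam optimizer with learning rate 0.001) could be sketched as below. This is a minimal illustration, not the authors' code: channel counts, kernel sizes, and the single linear head are assumptions, since the excerpt does not specify them.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Two conv+pool blocks with dropout 0.5, as described for MNIST/Fashion MNIST.
    Channel counts and kernel sizes are illustrative assumptions."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                     # dropout rate 0.5, per the quoted setup
            nn.Linear(64 * 7 * 7, num_classes),  # 28x28 input -> 7x7 after two 2x2 pools
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SimpleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr 0.001 per the quoted setup
batch = torch.randn(400, 1, 28, 28)                         # batch size 400 per the quoted setup
logits = model(batch)
```

A real training run would additionally need the label-DP mechanism the paper proposes, which the quoted excerpt does not describe in enough detail to reproduce here.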