Positive-Unlabeled Compression on the Cloud
Authors: Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, Chang Xu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The superiority of the proposed method is verified through experiments conducted on the benchmark models and datasets. |
| Researcher Affiliation | Collaboration | Huawei Noah's Ark Lab; Key Laboratory of Machine Perception (MOE), CMIC, School of EECS, Peking University, China; The University of Sydney, Darlington, NSW 2008, Australia. {yixing.xu, yunhe.wang, kai.han, xuchunjing}@huawei.com; htchen@pku.edu.cn; {dacheng.tao, c.xu}@sydney.edu.au |
| Pseudocode | Yes | Algorithm 1: PU classifier for more data; Algorithm 2: Robust Knowledge distillation (a hedged PU-risk sketch follows the table). |
| Open Source Code | No | The paper does not include any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The widely used CIFAR-10 benchmark is first selected as the original dataset... Benchmark dataset ImageNet contains over 1.2M images... The EMNIST dataset is used as the unlabeled dataset... https://www.westernsydney.edu.au/bens/home/reproducible_research/emnist |
| Dataset Splits | No | The paper describes training procedures, including epoch numbers and learning rate schedules, but it does not explicitly state specific validation dataset splits (e.g., percentages or counts for a validation set) or a cross-validation methodology. |
| Hardware Specification | No | The paper discusses the importance of GPUs for deep learning and mentions various devices (smart phones, cell phones, autonomous driving) as target environments for model deployment. However, it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to conduct its own experiments. |
| Software Dependencies | No | The paper mentions optimization algorithms like SGD and SGA, and model architectures such as ResNet-34, but it does not specify any software libraries (e.g., TensorFlow, PyTorch) or their version numbers used in the experiments. |
| Experiment Setup | Yes | The network is trained for 200 epochs using SGD. We use a weight decay of 0.005 and momentum of 0.9. We start with a learning rate of 0.001 and divide it by 10 every 50 epochs. ... A weight decay of 0.0005 and momentum of 0.9 is used. We optimized the student network using SGD, starting with a learning rate of 0.1 and dividing it by 10 every 50 epochs. πp = 0.21 is used in the following experiments. (Both schedules appear in the training-setup sketch after the table.) |
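
The pseudocode row points to Algorithm 1, a PU classifier used to pull additional training samples out of an unlabeled pool. The paper's exact risk estimator is not quoted in the table, so the snippet below is only a minimal sketch of a standard non-negative PU risk (Kiryo et al., 2017) with a sigmoid surrogate loss, plugged with the class prior πp = 0.21 quoted in the experiment setup; the function name and loss choice are assumptions for illustration, not necessarily the paper's formulation.

```python
import torch

def nn_pu_risk(scores_p: torch.Tensor, scores_u: torch.Tensor, pi_p: float = 0.21) -> torch.Tensor:
    """Non-negative PU risk with a sigmoid surrogate loss (illustrative sketch).

    scores_p: raw classifier scores on labeled positive (original-task) samples
    scores_u: raw classifier scores on unlabeled samples
    pi_p:     assumed positive-class prior; 0.21 matches the value quoted above
    """
    loss = lambda z, y: torch.sigmoid(-y * z)        # sigmoid loss l(z, y)
    risk_p_pos = loss(scores_p, 1.0).mean()          # positives treated as class +1
    risk_p_neg = loss(scores_p, -1.0).mean()         # positives treated as class -1
    risk_u_neg = loss(scores_u, -1.0).mean()         # unlabeled treated as class -1
    # Clamp the estimated negative-class risk at zero so it cannot go negative.
    neg_part = torch.clamp(risk_u_neg - pi_p * risk_p_neg, min=0.0)
    return pi_p * risk_p_pos + neg_part
```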
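
The experiment-setup row describes two SGD schedules: one for the network trained on the PU-augmented data and one for the distilled student. The sketch below wires those quoted hyperparameters into PyTorch together with a plain Hinton-style distillation loss; PyTorch, the torchvision architectures, and the temperature/weighting values are assumptions, since the paper names no framework and Algorithm 2's robust distillation term is not reproduced here.

```python
import torch.nn.functional as F
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR
from torchvision import models

# Architectures are illustrative: the paper mentions ResNet-34; the student is assumed.
teacher = models.resnet34(num_classes=10)   # CIFAR-10 has 10 classes
student = models.resnet18(num_classes=10)

# "200 epochs using SGD ... weight decay of 0.005 and momentum of 0.9 ...
#  learning rate of 0.001 and divide it by 10 every 50 epochs."
opt_teacher = SGD(teacher.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-3)
sched_teacher = StepLR(opt_teacher, step_size=50, gamma=0.1)

# "A weight decay of 0.0005 and momentum of 0.9 ... learning rate of 0.1
#  and dividing it by 10 every 50 epochs." (student / distillation phase)
opt_student = SGD(student.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
sched_student = StepLR(opt_student, step_size=50, gamma=0.1)

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Standard soft-target distillation loss (not the paper's robust variant)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```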