Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
Authors: Weiqi Peng, Jinghui Chen
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically verify the effectiveness of our methods for learnability control. Following Huang et al. (2021), we then examine the strength and robustness of the generated perturbations in multiple settings and against potential counter-measures. Tasks are evaluated on three publicly available datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and IMAGENET (Deng et al., 2009). |
| Researcher Affiliation | Academia | Weiqi Peng (Yale University, weiqi.peng@yale.edu); Jinghui Chen (Pennsylvania State University, jzc5917@psu.edu) |
| Pseudocode | Yes | Algorithm 1 Learnability Locking (Linear) ... Algorithm 2 Learnability Unlocking (Linear) |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | Tasks are evaluated on three publicly available datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and IMAGENET (Deng et al., 2009). |
| Dataset Splits | No | The paper states: 'We train each model on CIFAR-10 for 60 epochs, CIFAR-100 for 100 epochs, and IMAGENET-10 for 60 epochs.' and 'Specifically, for IMAGENET experiments, we picked a subset of 10 classes, each with 700 training and 300 testing samples'. Training and test usage is described, but no explicit validation split (as percentages or sample counts) is given for any of the datasets. [See the subset-construction sketch after the table.] |
| Hardware Specification | No | The paper mentions 'due to memory limitation' when discussing batch sizes, but it does not provide any specific details about the hardware used for experiments, such as GPU models, CPU types, or other system specifications. |
| Software Dependencies | No | The paper mentions general techniques and algorithms like 'Stochastic Gradient Descent (SGD)', 'cosine annealing scheduler', and 'Cross-entropy' as a loss function, but it does not specify any software libraries or frameworks with their version numbers (e.g., 'PyTorch 1.9', 'TensorFlow 2.x'). |
| Experiment Setup | Yes | For model training during the learnability locking process (updating fθ), we set initial learning rate of SGD as 0.1 with momentum as 0.9 and cosine annealing scheduler without restart. Cross-entropy is always used as the loss function if not mentioned otherwise. The batch size is set as 256 for CIFAR-10 and CIFAR-100, 128 for the IMAGENET due to memory limitation. (...) for CIFAR-10, we set I = 20 and J = 1 with a learning rate over ψ of 0.1 (...) For the unlocking process, we set the number of fixed-point iterations m = 5. [See the training-configuration and fixed-point inversion sketches after the table.] |
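
Based on the Experiment Setup row above, a minimal PyTorch sketch of the quoted training configuration (SGD with learning rate 0.1 and momentum 0.9, cosine annealing without restart, cross-entropy loss, batch size 256, 60 epochs for CIFAR-10) follows. The backbone architecture, data augmentation, and weight decay are not specified in the excerpts, so the ResNet-18 model and their omission here are assumptions for illustration only.

```python
# Minimal sketch of the quoted training configuration (CIFAR-10 case).
# Assumptions not stated in the excerpts: backbone architecture (ResNet-18 here),
# weight decay (omitted), and data augmentation (omitted).
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader

EPOCHS = 60          # CIFAR-10: 60, CIFAR-100: 100, IMAGENET-10: 60 (per the excerpts)
BATCH_SIZE = 256     # 256 for CIFAR-10/100, 128 for IMAGENET

device = "cuda" if torch.cuda.is_available() else "cpu"

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=torchvision.transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)

model = torchvision.models.resnet18(num_classes=10).to(device)   # assumed backbone
criterion = nn.CrossEntropyLoss()                                # "Cross-entropy is always used"
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Cosine annealing without restart: a single cosine cycle over all epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```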
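
The Experiment Setup row also states that unlocking uses m = 5 fixed-point iterations, and the Pseudocode row names Algorithm 2, Learnability Unlocking (Linear). The excerpts do not give the transform's exact parameterization, so the sketch below only illustrates the generic fixed-point inversion technique under an assumed additive form x' = x + g(x) with a roughly contractive g; it is not a reconstruction of the paper's algorithm.

```python
# Generic fixed-point inversion of an additive transform x' = x + g(x).
# The Perturbation module is an arbitrary stand-in for g, NOT the paper's
# class-wise transform (whose exact form is not given in the excerpts).
import torch
import torch.nn as nn

class Perturbation(nn.Module):
    """Stand-in perturbation; the 0.1 * tanh scaling keeps it roughly contractive."""
    def __init__(self, channels=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return 0.1 * torch.tanh(self.conv(x))

def lock(x, g):
    """Apply the (assumed) additive locking transform."""
    return x + g(x)

def unlock(x_locked, g, m=5):
    """Approximate the inverse with m fixed-point iterations (m = 5 in the paper)."""
    x = x_locked.clone()
    for _ in range(m):
        x = x_locked - g(x)
    return x

if __name__ == "__main__":
    with torch.no_grad():
        g = Perturbation()
        x = torch.rand(4, 3, 32, 32)              # a batch of CIFAR-sized images
        x_locked = lock(x, g)
        x_recovered = unlock(x_locked, g, m=5)
        print((x - x_recovered).abs().max())      # small residual when g is contractive
```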
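
The Dataset Splits row quotes a 10-class IMAGENET subset with 700 training and 300 testing samples per class. A sketch of how such a class-balanced subset could be assembled with torchvision is below; the data path, the choice of the first 10 class folders, and the take-the-first-N ordering are all hypothetical, since the excerpts do not specify them.

```python
# Illustrative construction of a 10-class, 700-train / 300-test-per-class subset.
# The directory path, class selection, and per-class sample ordering are assumptions
# made for illustration only; the paper's excerpts do not specify them.
import torchvision
from torch.utils.data import Subset

dataset = torchvision.datasets.ImageFolder(
    root="/path/to/imagenet/train",                 # hypothetical path
    transform=torchvision.transforms.Compose([
        torchvision.transforms.Resize((224, 224)),
        torchvision.transforms.ToTensor(),
    ]))

train_idx, test_idx = [], []
for cls in range(10):                               # first 10 classes, by assumption
    cls_idx = [i for i, (_, y) in enumerate(dataset.samples) if y == cls]
    train_idx.extend(cls_idx[:700])                 # 700 training samples per class
    test_idx.extend(cls_idx[700:1000])              # 300 testing samples per class

train_set = Subset(dataset, train_idx)
test_set = Subset(dataset, test_idx)
```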