Towards Robust Model Reuse in the Presence of Latent Domains

Authors: Jie-Jing Shao, Zhanzhan Cheng, Yu-Feng Li, Shiliang Pu

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results on diverse real-world data sets clearly validate the effectiveness of proposed algorithms." and "4 Empirical Study: To validate our method, we perform experiments on diverse tasks, including Digital Recognition, Attribute Classification and Face Recognition."
Researcher Affiliation | Collaboration | Jie-Jing Shao (1), Zhanzhan Cheng (2), Yu-Feng Li (1) and Shiliang Pu (2); (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (2) Hikvision Research Institute, Hangzhou, China; {shaojj, liyf}@lamda.nju.edu.cn, {chengzhanzhan, pushiliang.hri}@hikvision.com
Pseudocode | Yes | "Algorithm 1: The proposed MRL method"
Open Source Code | No | The footnote "https://pytorch.org/" refers to a third-party library the authors used, not to source code for the proposed method. No statement about releasing their code, and no link to a repository for their method, was found.
Open Datasets | Yes | Digital Recognition uses MNIST, SVHN and USPS. "Our second set of experiments is based on the Animals with Attributes 2 dataset, which contains 37,322 images of 50 animal classes." (https://cvml.ist.ac.at/AwA2/) "Finally, we evaluate our method on the CMU Multi-PIE dataset [Sim et al., 2002]." "These subsets are based on SURF features and the dimension of features is 1024." (https://github.com/jindongwang/transferlearning/blob/master/data/dataset.md)
Dataset Splits | Yes | "During reuse, we take 10% samples for validation and 40% samples for testing." and "For the remaining 10,000 samples, we use 50% as training set, 10% examples as validation set and take the others for testing."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for the experiments were provided; the paper only mentions that the methods are implemented in PyTorch.
Software Dependencies | No | The paper cites PyTorch in a footnote but does not specify its version number, nor any other software dependencies with versions.
Experiment Setup | No | The paper states "The hyper-parameters are adjusted by the validation set for all methods." but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer settings) or detailed training configurations.