Data Fine-Tuning

Authors: Saheb Chhabra, Puspita Majumdar, Mayank Vatsa, Richa Singh

AAAI 2019, pp. 8223-8230

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments performed on three publicly available datasets (LFW, CelebA, and MUCT) demonstrate the effectiveness of the proposed concept.
Researcher Affiliation | Academia | IIIT-Delhi, India; {sahebc, pushpitam, mayank, rsingh}@iiitd.ac.in
Pseudocode | No | No explicit pseudocode or algorithm block was found. Figure 3 is a "Block diagram illustrating the steps of the proposed algorithm", not a pseudocode listing.
Open Source Code | No | No explicit statement or link providing access to the source code for the described methodology was found.
Open Datasets | Yes | Experiments are performed on three publicly available datasets, and the results showcase enhanced performance of black-box systems using data fine-tuning.
Dataset Splits | Yes | LFW and MUCT: each dataset is partitioned into a 60% training set, 20% validation set, and 20% testing set. CelebA: 162,770 images in the training set, 19,867 in the validation set, and 19,962 in the testing set. (A splitting sketch follows this table.)
Hardware Specification | No | No specific hardware (e.g., GPU model, CPU type) used for running the experiments was explicitly mentioned.
Software Dependencies | No | The paper mentions the "Adam optimizer" and "VGGFace + NNET" but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | Perturbation learning: the learning rate is set to 0.001 and the batch size is 800; the number of iterations used for processing each batch is 16, and the number of epochs is 5. Model fine-tuning: the attribute classification model is fine-tuned with the Adam optimizer, learning rate 0.005, for 20 epochs. (A training-loop sketch follows this table.)
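
The 60/20/20 partitioning reported for LFW and MUCT could be reproduced along the lines of the minimal Python sketch below. The index-based random splitting, the fixed seed, and the function name split_indices are assumptions for illustration, not details taken from the paper; CelebA uses its published fixed partition (162,770 / 19,867 / 19,962 images) rather than a random split.

    import numpy as np

    def split_indices(n_samples, train_frac=0.6, val_frac=0.2, seed=0):
        """Randomly partition sample indices into train/val/test subsets.

        The 60/20/20 ratios match the split reported for LFW and MUCT;
        the shuffling and fixed seed here are illustrative assumptions.
        """
        rng = np.random.default_rng(seed)
        idx = rng.permutation(n_samples)
        n_train = int(train_frac * n_samples)
        n_val = int(val_frac * n_samples)
        return (idx[:n_train],                 # 60% training
                idx[n_train:n_train + n_val],  # 20% validation
                idx[n_train + n_val:])         # 20% testing

    # CelebA ships with a fixed partition (162,770 / 19,867 / 19,962 images),
    # so its published evaluation-partition list would be used instead.
    train_idx, val_idx, test_idx = split_indices(13_233)  # LFW contains 13,233 images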
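
The reported experiment settings translate into the nested loop structure sketched below. Only the numeric hyper-parameters come from the paper; the loop skeleton, the config names, and the placeholder helper perturbation_update are hypothetical, since no code is released, and the 20-epoch Adam fine-tuning of the attribute classifier is recorded in the config but not expanded here.

    # Hyper-parameter values are taken from the paper; everything else in this
    # sketch (helper names, loop skeleton) is an assumption for illustration.
    PERTURBATION_CFG = {"learning_rate": 1e-3, "batch_size": 800,
                        "iterations_per_batch": 16, "epochs": 5}
    FINETUNE_CFG = {"optimizer": "Adam", "learning_rate": 5e-3, "epochs": 20}

    def perturbation_update(batch, perturbation, lr):
        """Hypothetical single update of the perturbation (placeholder only)."""
        return perturbation  # a real step would use feedback from the black-box model

    def learn_perturbation(batches, perturbation, cfg=PERTURBATION_CFG):
        """Sweep the data for 5 epochs, with 16 inner iterations per batch of 800 images."""
        for _ in range(cfg["epochs"]):
            for batch in batches:
                for _ in range(cfg["iterations_per_batch"]):
                    perturbation = perturbation_update(batch, perturbation,
                                                       cfg["learning_rate"])
        return perturbation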