Batch Decorrelation for Active Metric Learning
Authors: Priyadarshini Kumari, Ritesh Goru, Siddhartha Chaudhuri, Subhasis Chaudhuri
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on two challenging real-world datasets, as well as on synthetic data. |
| Researcher Affiliation | Collaboration | 1IIT Bombay 2Adobe Research |
| Pseudocode | Yes | Algorithm 1: Batch-Mode Active Learning |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Yummly Food Data. This dataset has 72148 triplets defined over 73 images of food items [Wilber et al., 2014]. Haptic Texture Data. This dataset has 108 surface materials (metals, paper, fabrics, etc.), each represented by 10 texture signals [Strese et al., 2017] produced by tracing a haptic stylus over the physical surface. The triplets are generated from the ground truth perceptual dissimilarity metric gathered in user experiments [Priyadarshini K et al., 2019]. |
| Dataset Splits | Yes | To generate a train/test split we randomly subsample 20K training and 20K test triplets. As before, we randomly subsample 20K training and 20K test triplets. From X, we randomly sample 20K training and 20K test triplets. |
| Hardware Specification | No | The paper does not specify any hardware components such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam' for training and 'ReLU nonlinearities' for activation functions but does not specify version numbers for any software, libraries, or frameworks used (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | Synthetic: 3 FC layers with 10, 20, 10 neurons resp.; Food: 3 FC layers with 6, 12, 12 neurons resp.; Haptic: 4 FC layers with 32, 32, 64, 32 neurons resp. All layers have ReLU activation. We train using Adam [Kingma and Ba, 2015] with learning rate 10^-4. Each training round is budgeted 200 epochs for synthetic data and 1000 for food and haptic data. For all our experiments, the size of the overcomplete set (S) of informative triplets is twice the budget (b). |
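The experiment-setup row fully specifies the embedding-network shapes, so they can be sketched directly. Below is a minimal numpy forward pass for the Food embedder (3 FC layers of 6, 12, 12 neurons, ReLU throughout). The input dimensionality `IN_DIM`, the random initialization, and the batch size are assumptions for illustration; the paper does not state them, and the framework used is unknown.

```python
import numpy as np

IN_DIM = 6  # hypothetical input dimensionality; not stated in the paper
rng = np.random.default_rng(0)

def relu(x):
    # All layers use ReLU activation, per the reported setup.
    return np.maximum(x, 0.0)

# Food embedder: 3 fully connected layers with 6, 12, 12 neurons resp.
widths = [IN_DIM, 6, 12, 12]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(widths[:-1], widths[1:])]
biases = [np.zeros(n) for n in widths[1:]]

def embed(x):
    """Map a batch of inputs to the 12-dimensional embedding space."""
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

batch = rng.standard_normal((4, IN_DIM))
print(embed(batch).shape)  # (4, 12)
```

The Synthetic (10, 20, 10) and Haptic (32, 32, 64, 32) variants follow by changing `widths`; training itself would additionally need Adam with learning rate 10^-4 as reported.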