Efficient Optimization for Average Precision SVM

Authors: Pritish Mohapatra, C.V. Jawahar, M. Pawan Kumar

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Using the PASCAL VOC action classification and object detection datasets, we show that our approaches provide significant speed-ups during training without degrading the test accuracy of AP-SVM."
Researcher Affiliation | Academia | Pritish Mohapatra, IIIT Hyderabad (pritish.mohapatra@research.iiit.ac.in); C.V. Jawahar, IIIT Hyderabad (jawahar@iiit.ac.in); M. Pawan Kumar, Ecole Centrale Paris & INRIA Saclay (pawan.kumar@ecp.fr)
Pseudocode | Yes | "Algorithm 1: The optimal greedy algorithm for loss-augmented inference for training AP-SVM. Input: training samples X containing positive samples P and negative samples N, parameters w." (see the sketch after this table)
Open Source Code | No | The paper does not include any explicit statement about releasing code, nor does it provide a link to a code repository.
Open Datasets | Yes | "We use the PASCAL VOC 2011 [7] action classification dataset for our experiments."
Dataset Splits | Yes | "The dataset is divided into two parts: 3347 trainval person bounding boxes and 3363 test person bounding boxes. We use the trainval bounding boxes for training since their ground-truth action classes are known. We evaluate the accuracy of the different instances of SSVM on the test bounding boxes using the PASCAL evaluation server. The hyperparameters of all five methods are fixed using 5-fold cross-validation on the trainval set."
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions using a Convolutional Neural Network (CNN) for features but does not specify any software dependencies with version numbers, such as particular frameworks or libraries.
Experiment Setup | No | The paper states that "The hyperparameters of all five methods are fixed using 5-fold cross-validation on the trainval set" and "In our experiments, we determine the value of the hyperparameters using 5-fold cross-validation," but it does not report concrete hyperparameter values (e.g., the regularization constant or the cross-validation grid) in the main text. (see the cross-validation sketch after this table)
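
The Pseudocode row quotes Algorithm 1, the paper's optimal greedy algorithm for loss-augmented inference. The sketch below does not reproduce that algorithm; it only evaluates the objective the algorithm maximises, Delta_AP(y) + w.Psi(x, y), for one candidate interleaving of score-sorted positives and negatives. All names here (average_precision, loss_augmented_objective, pos_above, the toy scores) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def average_precision(pos_above, n_pos):
    """AP of a ranking in which positives and negatives are each kept in
    descending-score order and pos_above[j] positives sit above negative j."""
    pos_above = np.asarray(pos_above)
    ap = 0.0
    for i in range(1, n_pos + 1):
        neg_above = int(np.sum(pos_above < i))   # negatives ranked above positive i
        ap += i / (i + neg_above)                # precision at the rank of positive i
    return ap / n_pos

def loss_augmented_objective(scores_pos, scores_neg, pos_above):
    """Delta_AP(y) + w.Psi(x, y) for one candidate ranking, the quantity that
    loss-augmented inference maximises; scores_* hold w.phi(x) per sample,
    sorted in descending order."""
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    delta_ap = 1.0 - average_precision(pos_above, n_pos)
    psi = 0.0
    for i, sp in enumerate(scores_pos, start=1):
        for j, sn in enumerate(scores_neg):
            sign = 1.0 if pos_above[j] >= i else -1.0  # +1 iff positive i outranks negative j
            psi += sign * (sp - sn)
    return delta_ap + psi / (n_pos * n_neg)

# Toy usage: three positives and two negatives, already sorted by score.
# pos_above=[1, 3] puts one positive above the first negative and all three
# positives above the second negative.
print(loss_augmented_objective(np.array([2.1, 0.7, 0.3]),
                               np.array([1.5, 0.2]),
                               pos_above=[1, 3]))
```

The paper's contribution is to search over such interleavings far more efficiently than direct enumeration; the function above only defines the quantity being searched over.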
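
The Dataset Splits and Experiment Setup rows note that hyperparameters were fixed by 5-fold cross-validation on the trainval set, without reporting the grid or the chosen values. Below is a minimal sketch of that protocol, assuming a standard binary linear SVM (scikit-learn's LinearSVC) as a stand-in for the paper's AP-SVM/SSVM solvers, a hypothetical grid of C values, and average precision as the per-fold selection criterion.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def select_C(features, labels, grid=(1e-3, 1e-2, 1e-1, 1.0, 10.0)):
    """Pick the regularisation constant C by 5-fold cross-validation,
    scoring each fold by average precision (the paper's evaluation metric)."""
    search = GridSearchCV(
        LinearSVC(),                     # stand-in classifier, not the paper's AP-SVM
        param_grid={"C": list(grid)},    # hypothetical grid; the paper's grid is unreported
        scoring="average_precision",
        cv=5,
    )
    search.fit(features, labels)
    return search.best_params_["C"], search.best_estimator_
```

Average precision is used as the fold score here only to match the evaluation metric; the criterion and grid actually used by the authors are not reported.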