Image Privacy Prediction Using Deep Features
Authors: Ashwini Tonge, Cornelia Caragea
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results of our experiments show that models trained on deep visual features outperform those trained on SIFT and GIST. The results also show that deep image tags combined with user tags perform best among all tested features. |
| Researcher Affiliation | Academia | Ashwini Tonge and Cornelia Caragea, Computer Science and Engineering, University of North Texas. AshwiniTonge@my.unt.edu, ccaragea@unt.edu |
| Pseudocode | No | The paper describes the methods used in narrative text but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using 'CAFFE (Jia et al. 2014), which is an open-source framework for deep neural networks' but does not provide a link or statement indicating that the authors' specific implementation code for this paper is open-source or available. |
| Open Datasets | Yes | We evaluate our approach on Flickr images sampled from the PicAlert dataset (Zerr et al. 2012). ... CAFFE implements an eight-layer network pre-trained on a subset of the ImageNet dataset (Russakovsky et al. 2015), which consists of more than one million images annotated with 1000 object categories. |
| Dataset Splits | Yes | We sampled 4,700 images from PicAlert and used them for our privacy prediction task. The public and private images are in the ratio of 3:1. We divided these images into two subsets, Train and Test, using 6-fold stratified sampling. Train consists of five folds randomly selected, whereas Test consists of the remaining fold. The number of images in Train and Test are 3,917 and 783, respectively. In all experiments, we used the Support Vector Machine (SVM) classifier implemented in Weka and chose the hyper-parameters (i.e., the C parameter and the kernel in SVM) using 5-fold cross-validation on Train. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Support Vector Machine (SVM) classifier implemented in Weka' and 'CAFFE (Jia et al. 2014)' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | In all experiments, we used the Support Vector Machine (SVM) classifier implemented in Weka and chose the hyper-parameters (i.e., the C parameter and the kernel in SVM) using 5-fold cross-validation on Train. We experimented with different C values, and two kernels, linear and RBF. ... We resized images in both Train and Test to the CAFFE convolutional neural net compatible size of 227 × 227 and encoded each image using the two deep feature representations corresponding to the output of the layers FC7 and FC8. |
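The 6-fold stratified split reported above (4,700 images at a 3:1 public-to-private ratio, with five folds as Train and one as Test) can be sketched as follows. This is a hedged illustration: the authors do not release code, so scikit-learn's `StratifiedKFold` stands in for whatever tooling they used, and the label array is synthetic.

```python
# Sketch of a 6-fold stratified Train/Test split matching the paper's
# reported numbers (4,700 images, 3:1 public:private, 5 folds Train,
# 1 fold Test). scikit-learn is an assumed stand-in, not the authors' code.
import numpy as np
from sklearn.model_selection import StratifiedKFold

n_images = 4700
# Synthetic labels reproducing the 3:1 public (0) to private (1) ratio.
labels = np.array([0] * 3525 + [1] * 1175)

skf = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
# split() yields (train, test) pairs where the test set is one fold;
# taking the first pair gives five folds for Train and one for Test.
train_idx, test_idx = next(iter(skf.split(np.zeros(n_images), labels)))

print(len(train_idx), len(test_idx))  # roughly 3,917 and 783
```

With 4,700 images, each fold holds 783 or 784 images, which matches the paper's Test size of 783 up to rounding of the fold boundaries.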
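The hyper-parameter search described in the setup row (an SVM whose C value and kernel, linear vs. RBF, are chosen by 5-fold cross-validation on Train) can be sketched as below. The authors used Weka's SVM; here scikit-learn's `GridSearchCV` and `SVC` are assumed substitutes, the feature matrix is random stand-in data rather than CAFFE FC7/FC8 features, and the candidate C values are illustrative since the paper does not list them.

```python
# Sketch of SVM model selection by 5-fold CV over C and kernel choice.
# Weka was the original tool; scikit-learn and the random features here
# are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X_train = rng.normal(size=(200, 16))    # stand-in for FC7/FC8 features
y_train = rng.integers(0, 2, size=200)  # stand-in public/private labels

param_grid = {"C": [0.1, 1, 10, 100],   # illustrative C grid
              "kernel": ["linear", "rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

best = search.best_params_  # e.g. {"C": ..., "kernel": "linear" or "rbf"}
```

The selected model (`search.best_estimator_`) would then be evaluated once on the held-out Test fold, mirroring the paper's Train/Test protocol.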