Automatic Classifiers as Scientific Instruments: One Step Further Away from Ground-Truth

Authors: Jacob Whitehill, Anand Ramakrishnan

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To put these theoretical results into perspective, we conducted simulations based on two recent affective computing studies that used automated detectors as scientific instruments. We explored the coverage of T_n for two automatic face analysis problems: student engagement recognition and age estimation, using the HBCU (Whitehill et al., 2014) (Engagement) and GENKI (Lab) (Age) datasets, respectively."
Researcher Affiliation | Academia | "Jacob Whitehill (1), Anand Ramakrishnan (1); (1) Department of Computer Science, Worcester Polytechnic Institute (WPI), MA, USA."
Pseudocode | No | "We can then visualize how these vectors cover the (n−3)-sphere T_n using the following procedure: 1. Normalize u, as well as each û(j), to have zero mean and unit length. 2. Compute an orthonormal basis B... 3. Project each û(j) onto the new basis B. 4. Define each x(j) to be the last n−2 components of the vector B û(j). 5. Project the {x(j)} onto the two principal axes obtained from principal component analysis (PCA)."
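The five-step procedure quoted above can be sketched numerically. This is a minimal sketch, not the paper's implementation: the quote elides how the orthonormal basis B is completed, so the code assumes its first two directions span the constant vector and the normalized ground truth u, with the remainder filled in via QR on random vectors; the function name `coverage_projection` is hypothetical.

```python
import numpy as np

def normalize(v):
    """Shift to zero mean and scale to unit length (step 1)."""
    v = v - v.mean()
    return v / np.linalg.norm(v)

def coverage_projection(u, preds, seed=0):
    """Project predicted label vectors preds (each of shape (n,)) into 2-D.

    Assumption: basis B leads with the constant direction and the
    normalized ground truth u; its completion (elided in step 2 of the
    quote) is obtained by QR on random vectors.
    """
    n = len(u)
    rng = np.random.default_rng(seed)
    ones = np.ones(n) / np.sqrt(n)
    u_hat = normalize(np.asarray(u, dtype=float))
    # Step 2: orthonormal basis with ones and u_hat as leading directions.
    M = np.column_stack([ones, u_hat, rng.standard_normal((n, n - 2))])
    Q, _ = np.linalg.qr(M)
    B = Q.T  # rows of B are the basis vectors
    # Steps 3-4: project each normalized prediction, keep last n-2 coords.
    X = np.array([(B @ normalize(np.asarray(p, dtype=float)))[2:] for p in preds])
    # Step 5: PCA via SVD of the centered coordinates.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T  # (num_preds, 2) points, ready for scatter plotting
```

The returned 2-D coordinates can be scatter-plotted to visualize how densely the predictions cover the sphere.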
Open Source Code | No | The paper contains no statements or links indicating that source code for the described methodology was released.
Open Datasets | Yes | "We explored the coverage of T_n for two automatic face analysis problems: student engagement recognition and age estimation using the HBCU (Whitehill et al., 2014) (Engagement) and GENKI (Lab) (Age) datasets, respectively"; Lab, M. P. The MPLab GENKI Database. http://mplab.ucsd.edu; Whitehill, J., Serpell, Z., Lin, Y.-C., Foster, A., and Movellan, J. R. The faces of engagement: Automatic recognition of student engagement from facial expressions. IEEE Transactions on Affective Computing, 5(1):86-98, 2014.
Dataset Splits | No | "Training data consisted of 7629 face images from 15 subjects of HBCU (Whitehill et al., 2014), and testing data were 500 images from the remaining 5 subjects. (This corresponds to just one cross-validation fold from the original study (Whitehill et al., 2014).)"; "Training data consisted of 31040 face images of the GENKI dataset (Lab), and testing data consisted of 500 face images."
Hardware Specification | No | "We examined two modern deep learning-based visual recognition architectures, VGG-16 (Simonyan & Zisserman, 2014) and ResNet-50 (He et al., 2016)."
Software Dependencies | No | "Optimization was performed using SGD for 10000 iterations"; "We examined two modern deep learning-based visual recognition architectures, VGG-16 (Simonyan & Zisserman, 2014) and ResNet-50 (He et al., 2016)."
Experiment Setup | Yes | "Optimization was performed using SGD for 10000 iterations, and the network weights were saved every 1000 iterations. Inspired by (Huang et al., 2017), we also tried both cosine and triangular (Smith, 2017) learning rates."
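The cosine and triangular learning-rate schedules named in the quote can be sketched as simple functions of the iteration counter. The `lr_max`, `lr_min`, and `step_size` values below are illustrative assumptions, not settings reported in the paper:

```python
import math

def cosine_lr(it, total_iters, lr_max=0.1, lr_min=0.0):
    """Cosine-annealed learning rate: decays from lr_max to lr_min
    over total_iters iterations."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * it / total_iters))

def triangular_lr(it, step_size=1000, lr_min=0.001, lr_max=0.1):
    """Triangular cyclical learning rate (Smith, 2017): rises linearly
    from lr_min to lr_max over step_size iterations, then falls back."""
    cycle = math.floor(1 + it / (2 * step_size))
    x = abs(it / step_size - 2 * cycle + 1)
    return lr_min + (lr_max - lr_min) * max(0.0, 1 - x)
```

In a training loop matching the quoted setup, one would call the chosen schedule once per SGD step over 10000 iterations and checkpoint the weights every 1000 iterations.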