Improving the Performance-Compatibility Tradeoff with Personalized Objective Functions

Authors: Jonathan Martinez, Kobi Gal, Ece Kamar, Levi H. S. Lelis

AAAI 2021 (pp. 5967-5974) | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility variables (each entry gives the assessed result, followed by the LLM response):
Research Type: Experimental. "We apply this approach to three supervised learning tasks commonly used in the human-computer decision-making literature. We show that using our approach leads to significant improvements in the performance-compatibility tradeoff over the non-personalized approach of Bansal et al., achieving up to 300% improvement for certain users."
Researcher Affiliation: Collaboration. Jonathan Martinez (1), Kobi Gal (1, 2), Ece Kamar (3), Levi H. S. Lelis (4, 5); affiliations: (1) Ben-Gurion University, (2) University of Edinburgh, (3) Microsoft Research, (4) University of Alberta, (5) Alberta Machine Intelligence Institute (Amii).
Pseudocode: No. The paper provides mathematical equations for its method but does not include pseudocode or an algorithm block.
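
Since the method is given only as equations, here is a minimal sketch of how a dissonance-style personalized objective could be realized as per-example sample weights for a tree learner. The helper name `personalized_sample_weights`, the weighting scheme (weight 1 + λ on examples the pre-update model h1 classified correctly, applied only to users whose binary weight W[u] is 1), and all parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def personalized_sample_weights(y_true, h1_pred, user_ids, W, lam):
    """Hypothetical helper: emphasize compatibility with the pre-update
    model h1, but only for the users selected by the binary weights W.

    Examples that h1 classified correctly receive weight 1 + lam when the
    example's user has W[u] == 1; every other example keeps weight 1.
    """
    weights = np.ones(len(y_true), dtype=float)
    h1_correct = np.asarray(h1_pred) == np.asarray(y_true)
    for i, u in enumerate(user_ids):
        if W.get(u, 0) == 1 and h1_correct[i]:
            # up-weight h1's correct examples so h2 is penalized for new errors
            weights[i] += lam
    return weights
```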
Open Source Code: Yes. Code and data are publicly available at https://github.com/jonmartz/CompatibilityPersonalization.
Open Datasets: Yes. 1. ASSISTments (available from PSLC DataShop, https://pslcdatashop.web.cmu.edu/). 2. Galaxy Zoo (GZ) (available from https://data.galaxyzoo.org/). 3. The Stanford MOOCPosts (MOOC) dataset (available from https://datastage.stanford.edu/StanfordMoocPosts/).
Dataset Splits: Yes. "In each inner fold $(p, q)$, a timestamp is randomly selected for every user $i$ and used to split $D_i^p$ into a disjoint, time-contiguous training set $T_i^{p,q}$ and validation set $V_i^{p,q}$."
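
To make the split procedure concrete, the following is a small sketch of a per-user temporal split, assuming the data sit in a pandas DataFrame with `user_id` and `timestamp` columns; the column names and function name are guesses, not taken from the released code.

```python
import numpy as np
import pandas as pd

def per_user_temporal_split(df, user_col="user_id", time_col="timestamp", seed=0):
    """Sketch of the described split: for each user, pick a random cut point
    in time and divide that user's records into a time-contiguous training
    prefix and a validation suffix."""
    rng = np.random.default_rng(seed)
    train_parts, val_parts = [], []
    for _, user_df in df.groupby(user_col):
        user_df = user_df.sort_values(time_col)
        if len(user_df) < 2:
            train_parts.append(user_df)  # too few records to split
            continue
        cut = rng.integers(1, len(user_df))  # keeps both sides non-empty
        train_parts.append(user_df.iloc[:cut])
        val_parts.append(user_df.iloc[cut:])
    return pd.concat(train_parts), pd.concat(val_parts)
```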
Hardware Specification: No. The paper states that models were trained using decision tree classifiers but does not specify hardware details such as GPU/CPU models, memory, or cloud instances used for the experiments.
Software Dependencies: No. The paper mentions the scikit-learn Python package but does not provide version numbers for it or for any other software dependency.
Experiment Setup: No. The paper notes that models are trained with various values of λ and with binary weights (0 or 1) for W, but it does not provide specific hyperparameter values for the decision tree classifier (e.g., max_depth, min_samples_split) or other system-level training settings.
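
As an illustration of the kind of setup the paper leaves unspecified, the sketch below sweeps λ over a grid, retrains a scikit-learn decision tree using the hypothetical `personalized_sample_weights` helper from the earlier sketch, and records (λ, accuracy, compatibility) triples, with compatibility taken as the fraction of h1's correct predictions that h2 preserves, following Bansal et al. The `max_depth` and `random_state` values are placeholders, not settings reported in the paper.

```python
from sklearn.tree import DecisionTreeClassifier

def sweep_lambda(X_train, y_train, X_val, y_val, h1, user_ids, W,
                 lams=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Retrain the updated model h2 for several tradeoff values and
    record (lambda, validation accuracy, compatibility) triples."""
    h1_train_pred = h1.predict(X_train)
    h1_val_correct = h1.predict(X_val) == y_val
    results = []
    for lam in lams:
        sw = personalized_sample_weights(y_train, h1_train_pred, user_ids, W, lam)
        h2 = DecisionTreeClassifier(max_depth=5, random_state=0)  # placeholder settings
        h2.fit(X_train, y_train, sample_weight=sw)
        h2_val_correct = h2.predict(X_val) == y_val
        acc = h2_val_correct.mean()
        # compatibility: share of h1's correct predictions that h2 keeps correct
        compat = (h1_val_correct & h2_val_correct).sum() / max(h1_val_correct.sum(), 1)
        results.append((lam, acc, compat))
    return results
```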