Controlling Directions Orthogonal to a Classifier

Authors: Yilun Xu, Hao He, Tianxiao Shen, Tommi S. Jaakkola

ICLR 2022

Reproducibility assessment: each variable below lists its result and the supporting LLM response (quoted excerpts are taken from the paper).
Research Type: Experimental
LLM Response: "Empirically, we present three use cases where controlling orthogonal variation is important: style transfer, domain adaptation, and fairness." [...] 4.1 EXPERIMENTS [...] 5.1 EXPERIMENTS [...] 6.1 EXPERIMENTS
Researcher Affiliation: Academia
LLM Response: "Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology. {ylxu, haohe, tianxiao}@mit.edu; tommi@csail.mit.edu"
Pseudocode: Yes
LLM Response: "Algorithm 1 Classifier Orthogonalization"
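The paper's Algorithm 1 combines a trained full classifier with a trained principal classifier to obtain the orthogonal classifier through a density-ratio division. Below is a minimal sketch of that combination step, assuming binary classifiers trained with the logistic loss; the function name and the illustrative usage are assumptions for exposition, not the authors' released code.

```python
import torch

def orthogonalize_logits(logit_full: torch.Tensor,
                         logit_principal: torch.Tensor) -> torch.Tensor:
    """Combine full and principal classifiers into an orthogonal one.

    For a binary classifier trained with the logistic loss, the odds
    sigmoid(logit) / (1 - sigmoid(logit)) approximate a density ratio,
    so dividing the full ratio by the principal ratio reduces to a
    subtraction of logits.
    """
    return logit_full - logit_principal

# Illustrative usage with two trained binary classifiers f_full, f_principal:
# p_orthogonal = torch.sigmoid(orthogonalize_logits(f_full(x), f_principal(x)))
```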
Open Source Code: Yes
LLM Response: "The code is available at https://github.com/Newbeeer/orthogonal_classifier."
Open Datasets: Yes
LLM Response: "CMNIST: We construct the C(olors)MNIST dataset based on MNIST digits (LeCun & Cortes, 2005). [...] CelebA-GH: We construct the CelebA-G(ender)H(air) dataset based on the gender and hair color attributes in CelebA (Liu et al., 2015). [...] UCI Adult dataset [...] UCI German credit dataset"
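The excerpt confirms the datasets are derived from public sources, but not the exact construction recipe. For orientation, here is a generic colored-MNIST construction in the same spirit; the coloring rule, the coarse binary relabeling, and the correlation strength are illustrative assumptions, not the paper's exact CMNIST recipe.

```python
import torch
from torchvision import datasets

def colorize(images, labels, corr=0.9):
    """Illustrative colored-MNIST construction: paint each digit red or
    green so that color agrees with a coarse binary label with
    probability `corr` (all three choices are assumptions)."""
    binary = (labels < 5).float()                    # coarse binary label
    flip = (torch.rand(len(labels)) > corr).float()  # decorrelate a fraction
    color = (binary + flip) % 2                      # 0 -> red, 1 -> green
    rgb = images.unsqueeze(1).repeat(1, 3, 1, 1).float() / 255.0
    rgb[color == 0, 1] = 0.0  # red digits: zero the green channel
    rgb[color == 1, 0] = 0.0  # green digits: zero the red channel
    return rgb, binary

mnist = datasets.MNIST(root="data", train=True, download=True)
x, y = colorize(mnist.data, mnist.targets)
```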
Dataset Splits: No
LLM Response: "For all datasets, we use 0.8/0.2 proportions to split the train/test set." The paper specifies train/test splits but does not explicitly mention validation splits.
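The stated proportions pin down the split sizes but not how the split is drawn. A minimal sketch of an 80/20 random split follows; the seed and the absence of stratification are assumptions the paper does not specify.

```python
import torch
from torch.utils.data import random_split

def train_test_split(dataset, train_frac=0.8, seed=0):
    """80/20 random split; the seed is an assumption, since the paper
    only states the 0.8/0.2 proportions."""
    n_train = int(train_frac * len(dataset))
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, len(dataset) - n_train],
                        generator=generator)
```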
Hardware Specification: No
LLM Response: The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies: No
LLM Response: The paper mentions optimizers (Adam) and other general software components, but it does not give version numbers for libraries or frameworks such as Python, PyTorch, or TensorFlow, which are needed for exact reproducibility.
Experiment Setup: Yes
LLM Response: "We adopt Adam with learning rate 2e-4 as the optimizer and batch size 128/32 for CMNIST/CelebA. [...] We use Adam with learning rate 1e-3 as the optimizer, and a batch size of 64."
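The quoted hyperparameters translate directly into a standard PyTorch training setup. A sketch using the stated values; the model and dataset objects are placeholders, and unstated settings such as Adam's betas, weight decay, and shuffling fall back to PyTorch defaults as an assumption.

```python
import torch
from torch.utils.data import DataLoader

def make_training_setup(model, train_set, dataset_name="cmnist"):
    """Optimizer and loader using the paper's stated hyperparameters:
    Adam with learning rate 2e-4, batch size 128 for CMNIST or 32 for
    CelebA."""
    batch_size = 128 if dataset_name == "cmnist" else 32
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    return optimizer, loader
```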