Heterogeneous Model Reuse via Optimizing Multiparty Multiclass Margin

Authors: Xi-Zhu Wu, Song Liu, Zhi-Hua Zhou

ICML 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on synthetic and real-world data covering different multiparty scenarios show the effectiveness of our proposal." |
| Researcher Affiliation | Academia | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; (2) University of Bristol, Bristol, United Kingdom; (3) The Alan Turing Institute, London, United Kingdom. |
| Pseudocode | Yes | Algorithm 1 (HMR) is given in the paper. |
| Open Source Code | No | The paper provides no statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | "Fashion-MNIST (Xiao et al., 2017), which is a widely used benchmarking dataset." Available at https://github.com/zalandoresearch/fashion-mnist |
| Dataset Splits | No | "Fashion-MNIST contains 70,000 28×28 grayscale fashion product images, each associated with a label from 10 classes. 10,000 out of 70,000 are used for testing. To simulate the multiparty setting, we separate the training data into different parties according to Figure 3." The paper describes training and testing sets but no validation split. |
| Hardware Specification | No | "Each party is equipped with a simple neural network with 3 conv-layers as the same structure in Google Colab." (Google Colab is a platform; no specific hardware details are given.) |
| Software Dependencies | No | "Implementations in scikit-learn (Pedregosa et al., 2011) with default parameters are used for easy reproduction." (No scikit-learn version is specified.) |
| Experiment Setup | No | "Each party is equipped with a simple neural network with 3 conv-layers as the same structure in Google Colab. ... At calibration operation, each local model will be retrained with augmented data for one epoch." (No specific hyperparameters such as learning rate, batch size, or optimizer are given.) |
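To make the split described above concrete, the sketch below reproduces the stated 60,000/10,000 train/test division and a class-based partition of the training data among parties. This is an illustration only: the `party_classes` assignment scheme is a hypothetical stand-in for the paper's Figure 3, and `split_train_test` / `partition_by_class` are names invented here, not from the paper.

```python
from collections import defaultdict
import random

def split_train_test(n_total=70_000, n_test=10_000, seed=0):
    """Shuffle example indices and hold out n_test for testing.
    No validation split, matching the paper's description."""
    rng = random.Random(seed)
    idx = list(range(n_total))
    rng.shuffle(idx)
    return idx[n_test:], idx[:n_test]  # (train indices, test indices)

def partition_by_class(labels, party_classes):
    """Assign each training example to every party whose class set
    contains its label (hypothetical scheme; the actual multiparty
    layout is defined by Figure 3 in the paper)."""
    parties = defaultdict(list)
    for i, y in enumerate(labels):
        for p, classes in enumerate(party_classes):
            if y in classes:
                parties[p].append(i)
    return dict(parties)

# Toy usage: 100 examples over 10 classes, three overlapping parties.
train_idx, test_idx = split_train_test()
labels = [i % 10 for i in range(100)]
parts = partition_by_class(labels, [{0, 1, 2, 3}, {3, 4, 5, 6}, {6, 7, 8, 9}])
```

Overlapping class sets (classes 3 and 6 here) mimic the heterogeneous-parties setting in which local models share some labels but not others.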