Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization
Authors: Haoliang Li, Yufei Wang, Renjie Wan, Shiqi Wang, Tie-Qiang Li, Alex Kot
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two challenging medical imaging classification tasks indicate that our method can achieve better cross-domain generalization capability compared with state-of-the-art baselines. |
| Researcher Affiliation | Academia | (1) Rapid-Rich Object Search Lab, Nanyang Technological University, Singapore; (2) Department of Computer Science, City University of Hong Kong, China; (3) Department of Clinical Science, Intervention, and Technology, Karolinska Institute, Sweden; (4) Department of Medical Radiation and Nuclear Medicine, Karolinska University Hospital, Sweden |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/wyf0912/LDDG. |
| Open Datasets | Yes | We adopt seven public skin lesion datasets, including HAM10000 [33], Dermofit (DMF) [2], Derm7pt (D7P) [17], MSK [6], PH2 [26], SONIC (SON) [6], and UDA [6] |
| Dataset Splits | Yes | Each dataset is randomly divided into 50% training set, 20% validation set and 30% testing set, where the relative class proportions are maintained across dataset partitions. |
| Hardware Specification | No | No specific hardware details such as GPU models, CPU models, or detailed computer specifications used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers, such as programming language versions, library versions, or specific solver versions. |
| Experiment Setup | No | The paper states 'The detail of architectures and experimental settings can be found in supplementary materials.' and does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or a detailed experimental setup within the main text. While it mentions the backbone models and loss components, concrete values for parameters like λ1 and λ2 are absent. |
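The stratified 50%/20%/30% split described in the Dataset Splits row can be sketched as follows. The paper does not publish its split code, so the function name `stratified_split` and the seed handling below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a 50/20/30 per-class split that preserves
# relative class proportions, as described in the paper's split protocol.
import random
from collections import defaultdict

def stratified_split(labels, fractions=(0.5, 0.2, 0.3), seed=0):
    """Return index lists (train, val, test) preserving class ratios."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train, val, test = [], [], []
    # Split each class independently so every partition keeps the
    # original class balance.
    for idxs in by_class.values():
        rng.shuffle(idxs)
        n = len(idxs)
        n_train = round(fractions[0] * n)
        n_val = round(fractions[1] * n)
        train += idxs[:n_train]
        val += idxs[n_train:n_train + n_val]
        test += idxs[n_train + n_val:]
    return train, val, test

# Example: 100 samples with a 70/30 class imbalance.
labels = ["benign"] * 70 + ["malignant"] * 30
tr, va, te = stratified_split(labels)
print(len(tr), len(va), len(te))  # 50 20 30
```

Splitting per class (rather than over the pooled indices) is what guarantees the "relative class proportions are maintained" property quoted above.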