Test-Time Adaptation via Style and Structure Guidance for Histological Image Registration
Authors: Shenglong Zhou, Zhiwei Xiong, Feng Wu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments with several representative learning-based backbones on the public histological dataset, demonstrating the superior registration performance of our test-time adaptation method. |
| Researcher Affiliation | Academia | Shenglong Zhou¹, Zhiwei Xiong¹,²*, Feng Wu¹,². ¹ University of Science and Technology of China; ² Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper describes the proposed methods in detail but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on the public histological dataset ANHIR for comparison. SGTTA focuses on the test-time stage and is complementary to learning-based methods in the training stage, so it is feasible for us to choose the dataset for evaluation. For fair comparison and efficient experiments, we use the images containing public landmarks in the raw ANHIR dataset as our dataset, and we split them into 115 pairs for training and 115 pairs for testing. |
| Dataset Splits | No | The paper states: "we split them into 115 pairs for training and 115 pairs for testing." It does not mention a distinct validation split. |
| Hardware Specification | Yes | All the learning-based methods are implemented on PyTorch on 4 cards of NVIDIA TITAN XP. |
| Software Dependencies | No | The paper mentions "implemented on PyTorch" but does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | Specifically, during the training stage, we set the batch size as 1 and the number of epochs as 100. We set the regularization parameter λ as 30 following (Wodzinski and Müller 2021). For other hyperparameters, we set α as 0.2, w as 0.99, and k as 0.8 empirically. |
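
To make the reported setup concrete, below is a minimal sketch that collects the quoted hyperparameters into a Python configuration. The field names (`reg_lambda`, `alpha`, `momentum_w`, `threshold_k`) are hypothetical labels of ours; the paper only gives the symbols and values (λ = 30, α = 0.2, w = 0.99, k = 0.8). The loss shown is the standard similarity-plus-weighted-regularization form common in learning-based registration, not a confirmed reproduction of SGTTA's objective.

```python
# Hypothetical sketch of the training setup reported in the paper.
# Only the numeric values come from the text; all names are our own labels.
config = {
    "batch_size": 1,      # batch size during training (paper)
    "epochs": 100,        # number of training epochs (paper)
    "reg_lambda": 30.0,   # λ, following Wodzinski and Müller (2021)
    "alpha": 0.2,         # α, set empirically (paper)
    "momentum_w": 0.99,   # w, set empirically (paper)
    "threshold_k": 0.8,   # k, set empirically (paper)
}

def registration_loss(similarity: float, regularity: float,
                      reg_lambda: float = config["reg_lambda"]) -> float:
    """Illustrative composite objective: a similarity term plus a
    λ-weighted regularization term, the usual form in learning-based
    registration. The paper's exact loss terms are not reproduced here."""
    return similarity + reg_lambda * regularity
```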