Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Supportive Negatives Spectral Augmentation for Source-Free Cross-Domain Segmentation

Authors: Kexin Zheng, Haifeng Xia, Siyu Xia, Ming Shao, Zhengming Ding

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Considerable experimental studies demonstrate that annotating merely 4%-5% of negative instances from the target domain significantly improves the segmentation performance over previous methods. Table 1 presents the quantitative evaluation results of our method under the source-free domain adaptation (SFDA) setting.
Researcher Affiliation | Academia | Kexin Zheng (1,2), Haifeng Xia (2)*, Siyu Xia (1,2)*, Ming Shao (3), Zhengming Ding (4). (1) Advanced Ocean Institute, Southeast University, Nantong, China; (2) School of Automation, Southeast University, Nanjing, China; (3) Department of Computer and Information Science, University of Massachusetts Dartmouth, USA; (4) Department of Computer Science, Tulane University, USA. EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Supportive Negatives Spectral Augmentation.
Notation: feature encoder E(·) trained on source data; unlabeled target data X_U^t = {x_U^t}; source model M_s; feature of the k-th class for the i-th target sample, f_i^k; centroid feature of the k-th class, f_p^k; number of target inputs, n_t.
Stage 1: Active Hard Negative Discovery
1. Pseudo-mask generation: generate pseudo-labels from the pre-trained source model, Ŷ_i^k = M_s(X_U^t), and acquire segmented masks X_i^k = X_i ⊙ Ŷ_i^k for the k-th class.
2. Class-level target mask feature extraction: for each target sample x_U^t, extract features f_i^k = E(X_i^k) and compute the centroid f_p^k = (1/n_t) Σ_{i=1}^{n_t} E(X_i^k).
3. Supportive hard-negative mining: compute the similarity of each sample's feature with the centroid feature of each class to obtain a score, rank the scores in descending order, and select the n_L lowest-scoring samples across the k classes as hard negatives to be labeled.
Stage 2: Spectral Augmentation
For each selected hard negative x_{n_L}, apply the augmentation function A_mfn(x_{n_L}) for n rounds.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It does not contain a specific repository link, an explicit code release statement, or code in supplementary materials.
Open Datasets | Yes | We assess our approach on widely-utilized datasets for optic disc and cup segmentation across various clinical facilities. Following previous studies, we opted for the REFUGE (Orlando et al. 2020) dataset as the source domain and fine-tuned the model for evaluation on two target domains: the RIM-ONE-r3 (Fumero et al. 2011) and Drishti GS (Sivaswamy et al. 2015) datasets.
Dataset Splits | Yes | The source domain comprises 320/80 fundus images for training/test, accompanied by pixel-wise annotations for optic disc and cup segmentation. For the RIM-ONE-r3 dataset, 99/60 images are included, with 5% (5 images) of samples selected as hard negative instances. For the Drishti GS dataset, 50/51 images are included, with 4% (2 images) of samples selected.
Hardware Specification | Yes | We implemented our method using PyTorch on an NVIDIA 3080Ti GPU.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or the versions of any other key software dependencies.
Experiment Setup | Yes | The output probability threshold γ was set to 0.75. The source model was trained using the Adam optimizer with a learning rate of 2e-3. In the source-free domain adaptation step, data augmentation was carried out for 5 rounds with a hyper-parameter λ of 0.95 for the RIM-ONE-r3 and 0.5 for the Drishti GS datasets, for a total of 20 epochs with a learning rate of 5e-4. The parameter η was set to 1e-4.
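The mining step of Algorithm 1 (Stage 1, step 3) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: it assumes per-class mask features and class centroids are already extracted, uses cosine similarity as the score, and all names are ours.

```python
import numpy as np

def mine_hard_negatives(features, centroids, n_label):
    """Rank target samples by similarity to the class centroids and return
    the indices of the n_label least-similar (hardest negative) samples.

    features  : (n_t, K, d) per-sample, per-class mask features f_i^k
    centroids : (K, d) class centroid features f_p^k
    n_label   : number of hard negatives to select for annotation
    """
    # Cosine similarity between each sample's class feature and its centroid.
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    c = centroids / (np.linalg.norm(centroids, axis=-1, keepdims=True) + 1e-8)
    sim = np.einsum("ikd,kd->ik", f, c)   # (n_t, K) similarity scores
    # Score each sample by its mean similarity across the K classes;
    # the lowest-scoring samples are kept as hard negatives to be labeled.
    scores = sim.mean(axis=1)             # (n_t,)
    return np.argsort(scores)[:n_label]
```

Averaging over classes is one plausible way to turn the per-class scores into a single ranking; the paper's exact aggregation rule is not recoverable from the quoted pseudocode.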
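The reported hard-negative budgets follow directly from the training-set sizes in the dataset splits; a quick sanity check, assuming nearest-integer rounding (the rounding convention is our assumption):

```python
def hard_negative_budget(n_train, fraction):
    """Number of training images selected as hard negatives,
    rounding the fraction to the nearest integer (assumed convention)."""
    return round(n_train * fraction)

# RIM-ONE-r3: 99 training images at a 5% budget -> 5 images
# Drishti GS: 50 training images at a 4% budget -> 2 images
rim = hard_negative_budget(99, 0.05)  # 4.95 -> 5
dgs = hard_negative_budget(50, 0.04)  # 2.0  -> 2
```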
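The hyper-parameters quoted in the Experiment Setup row can be collected into a single configuration for reference. The dictionary structure below is illustrative (the paper prescribes no such layout); the values are as reported above.

```python
# Reported hyper-parameters for SNSA training; structure is illustrative.
CONFIG = {
    "pseudo_label_threshold": 0.75,                 # gamma: output probability threshold
    "source_training": {"optimizer": "Adam", "lr": 2e-3},
    "sfda": {                                       # source-free domain adaptation step
        "epochs": 20,
        "lr": 5e-4,
        "augmentation_rounds": 5,
        "lambda": {"RIM-ONE-r3": 0.95, "Drishti GS": 0.5},  # per target domain
    },
    "eta": 1e-4,
}
```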