Deep Neural Networks Learn Meta-Structures from Noisy Labels in Semantic Segmentation
Authors: Yaoru Luo, Guole Liu, Yuanhao Guo, Ge Yang (pp. 1908-1916)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To quantitatively analyze the segmentation performance of DNNs trained by these labels, we experiment on two representative segmentation models, U-Net (Ronneberger, Fischer, and Brox 2015) and DeepLabv3+ (Chen et al. 2018), with the same loss function (binary cross-entropy loss) and optimizer (stochastic gradient descent, SGD). |
| Researcher Affiliation | Academia | (1) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; (2) School of Artificial Intelligence, University of Chinese Academy of Sciences |
| Pseudocode | Yes | Algorithm 1: Unsupervised Iteration Strategy |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | For binary-class segmentation, we select fluorescence microscopy images of ER, MITO datasets (Luo, Guo, and Yang 2020) and the NUC dataset (Caicedo et al. 2019). For multi-class segmentation, we select natural images of Cityscapes dataset (Cordts et al. 2016). |
| Dataset Splits | No | The paper mentions 'testing dice scores during training' but does not explicitly provide specific training/validation/test dataset splits (e.g., percentages, counts, or detailed methodology) in the main text. It refers to Appendix C for 'Detailed information on the datasets and experimental configurations', but this information is not directly in the main body. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of U-Net and DeepLabv3+ models, binary cross-entropy loss, and the stochastic gradient descent (SGD) optimizer, but does not provide version numbers for any software dependencies or libraries. |
| Experiment Setup | No | The paper mentions using U-Net and DeepLabv3+ models, binary cross-entropy loss, and the SGD optimizer. It states that 'Detailed information on the datasets and experimental configurations are provided in Appendix C.', implying that hyperparameters are not detailed in the main text. |
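The training configuration the paper reports (binary cross-entropy loss minimized with stochastic gradient descent) can be sketched in miniature. The one-parameter logistic "segmenter" below is only a stand-in for U-Net / DeepLabv3+, and the learning rate, epoch count, and toy data are illustrative assumptions, not values from the paper:

```python
import math
import random

# Sketch of the reported setup: binary cross-entropy (BCE) loss
# minimized with plain SGD. A single-weight logistic classifier over
# scalar "pixel intensities" stands in for a segmentation network.
random.seed(0)
w, b = 0.0, 0.0   # model parameters
lr = 0.1          # SGD learning rate (assumed, not from the paper)

# Toy pixels: positive intensities labeled 1, negative labeled 0.
data = [(random.random(), 1) for _ in range(50)] + \
       [(-random.random(), 0) for _ in range(50)]

for epoch in range(100):
    random.shuffle(data)
    for x, y in data:                            # one sample per step
        p = 1 / (1 + math.exp(-(w * x + b)))     # sigmoid prediction
        grad = p - y                             # d(BCE)/d(logit)
        w -= lr * grad * x                       # SGD update
        b -= lr * grad

# Mean BCE after training; should be well below the chance level ~0.69.
eps = 1e-7
def prob(x):
    return min(max(1 / (1 + math.exp(-(w * x + b))), eps), 1 - eps)

bce = -sum(y * math.log(prob(x)) + (1 - y) * math.log(1 - prob(x))
           for x, y in data) / len(data)
print(round(bce, 3))
```

Swapping in a real network and per-image mini-batches (e.g. via a deep learning framework) changes only the model and update machinery; the loss/optimizer pairing stays as described in the quoted passage.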