Weakly Supervised Virus Capsid Detection with Image-Level Annotations in Electron Microscopy Images
Authors: Hannah Kniesel, Leon Sick, Tristan Payer, Tim Bergner, Kavitha Shaga Devan, Clarissa Read, Paul Walther, Timo Ropinski, Pedro Hermosilla
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through a set of extensive studies, we show how the proposed pseudo-labels are easier to obtain, and, more importantly, are able to outperform other existing weak labeling methods, and even ground truth labels, in cases where the time to obtain the annotation is limited. |
| Researcher Affiliation | Academia | Hannah Kniesel, Leon Sick, Tristan Payer, Tim Bergner, Kavitha Shaga Devan, Clarissa Read, Paul Walther, Timo Ropinski (Ulm University); Pedro Hermosilla (TU Vienna) |
| Pseudocode | No | The paper describes the steps of the algorithm but does not include formal pseudocode or an algorithm block. |
| Open Source Code | Yes | The source code associated with the experiments conducted in this paper is publicly available on GitHub at the following link: https://github.com/HannahKniesel/WSCD. |
| Open Datasets | Yes | Herpes virus... We use the data from Shaga Devan et al. (2021) which contains 359 EM images with 2860 annotated bounding boxes of the virus particles in total. We use 287 images for training, 36 for validation, and 36 for testing. ... Adeno virus... We use the data from Matuszewski & Sintorn (2021) containing 67 negative stain TEM images of the Adeno virus with location annotations. |
| Dataset Splits | Yes | Herpes virus... We use 287 images for training, 36 for validation, and 36 for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments. |
| Software Dependencies | Yes | We compute the CAM of the input image I using the Grad-CAM (Selvaraju et al., 2017) algorithm based on our pre-trained classifier (implementation from Gildenblat & contributors (2021)). |
| Experiment Setup | Yes | For all methods, we perform a parameter search to find the best hyper-parameters. To measure the performance of the object detection models, we use mean average precision with an overlap of 50% (mAP50). ... Mask standard deviation. ... We start with a large standard deviation σmax and then reduce it over the optimization process to σmin. ... We choose σmax such that the entire image will be visible if the mask is placed in the center of the image of the smallest magnification level. ... we define the standard deviation depending on the real-world virus size in nm. ... A.1.3 GAUSSIAN STANDARD DEVIATION We investigate the influence of σmin on the Gaussian mask. Given the virus radius r, we found that σmin = 0.5r gives the best results (see Table 4). |
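The experiment-setup quote describes a Gaussian mask whose standard deviation is annealed from σmax down to σmin = 0.5r over the optimization. The paper excerpt does not specify the decay shape, so the sketch below is a minimal illustration assuming a simple linear schedule; the function names `sigma_schedule` and `gaussian_mask` are hypothetical, not taken from the authors' code.

```python
import numpy as np

def sigma_schedule(step, total_steps, sigma_max, sigma_min):
    """Decay the mask standard deviation from sigma_max to sigma_min.

    A linear schedule is assumed here; the paper only states that the
    standard deviation is reduced over the optimization process.
    """
    t = step / max(total_steps - 1, 1)
    return sigma_max + t * (sigma_min - sigma_max)

def gaussian_mask(height, width, cy, cx, sigma):
    """2D Gaussian mask centered at (cy, cx), peak value 1 at the center."""
    ys = np.arange(height)[:, None]
    xs = np.arange(width)[None, :]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
```

With σmin tied to the real-world virus radius r (σmin = 0.5r in pixels at the given magnification), the mask starts wide enough to reveal the whole image and gradually tightens around the virus-sized region.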