Geo-SIC: Learning Deformable Geometric Shapes in Deep Image Classifiers
Authors: Jian Wang, Miaomiao Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our method on both simulated 2D images and real 3D brain magnetic resonance (MR) images. Experimental results show that our model substantially improves the image classification accuracy with an additional benefit of increased model interpretability. |
| Researcher Affiliation | Academia | Jian Wang Computer Science University of Virginia jw4hv@virginia.edu Miaomiao Zhang Computer Science & Electrical Computer Engineering University of Virginia mz8rr@virginia.edu |
| Pseudocode | Yes | Algorithm 1: Joint learning of Geo-SIC. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/jw4hv/Geo-SIC |
| Open Datasets | Yes | We choose 50000 images (covering five classes: circle, cloud, envelope, square, and triangle, shown in Fig 3) of the Google Quickdraw dataset (21), a collection of categorized drawings contributed by online players in a drawing game. ... For brain data, we include 373 public T1-weighted brain MRI scans from the Open Access Series of Imaging Studies (OASIS) (17). |
| Dataset Splits | Yes | For both 2D and 3D datasets, we split the images by using 70% as training images, 15% as validation images, and 15% as testing images. |
| Hardware Specification | Yes | All networks are trained with an Intel i7-9700K CPU with 32 GB internal memory. The training and prediction procedures of all learning-based methods are performed on four Nvidia GTX 2080Ti GPUs. |
| Software Dependencies | No | The paper mentions software components like 'UNet', 'Adam optimizer', but does not specify version numbers for these or other libraries/frameworks. |
| Experiment Setup | Yes | We set an optimal dimension of the low-dimensional shape representation as 16² for the 2D dataset and 32³ for the 3D dataset. We set parameter α = 3 for the operator L, the number of time steps for Euler integration in EPDiff (Eq. (3)) as 10. We set the noise variance σ = 0.02. We set the batch size as 16 and use the cosine annealing learning rate schedule that starts from a learning rate η = 1e−3 for network training. We run 1000 epochs with the Adam optimizer and save networks with the best validation performance for all models. |
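The reported 70%/15%/15% split can be reproduced with a short sketch like the one below. The function name and the fixed seed are illustrative assumptions, not from the paper; the authors do not state how the partition was randomized.

```python
import random

def split_dataset(items, seed=0):
    """Partition items into 70% train / 15% validation / 15% test,
    matching the split reported in the paper. The seed and helper
    name are hypothetical; the paper does not specify either."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(0.70 * n)
    n_val = int(0.15 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]  # remainder, roughly 15%
    return train, val, test
```

For the 50000-image Quickdraw subset this yields 35000 training, 7500 validation, and 7500 test images.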
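The cosine annealing schedule described in the setup (starting from η = 1e−3 over 1000 epochs) can be written out explicitly. The minimum learning rate of 0 is an assumption, since the paper does not state a floor value.

```python
import math

def cosine_annealing_lr(epoch, total_epochs=1000, eta_max=1e-3, eta_min=0.0):
    """Cosine annealing learning rate as used in the paper's training:
    decays from eta_max = 1e-3 at epoch 0 to eta_min at total_epochs.
    eta_min = 0 is an assumed default, not stated in the paper."""
    return eta_min + 0.5 * (eta_max - eta_min) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )
```

Halfway through training (epoch 500) the rate has decayed to 5e−4, and it reaches the floor at epoch 1000.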