Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers
Authors: Chau Pham, Bryan Plummer
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both satellite and cell microscopy datasets, CHAMMI, JUMP-CP, and So2Sat, report DiChaViT yields a 1.5–5.0% gain over the state-of-the-art. |
| Researcher Affiliation | Academia | Chau Pham, Boston University, Boston, MA, chaupham@bu.edu; Bryan A. Plummer, Boston University, Boston, MA, bplum@bu.edu |
| Pseudocode | Yes | Algorithm 1: Diverse Channel Sampling (DCS) (a hedged sketch appears below the table) |
| Open Source Code | Yes | Our code is publicly available at https://github.com/chaudatascience/diverse_channel_vit. |
| Open Datasets | Yes | Experiments on both satellite and cell microscopy datasets, CHAMMI, JUMP-CP, and So2Sat, report DiChaViT yields a 1.5–5.0% gain over the state-of-the-art. |
| Dataset Splits | Yes | JUMP-CP [12] comprises images and profiles of cells that were individually perturbed using chemical and genetic methods. Our experiments focus on the compound perturbation plate BR00116991, which contains 127K training images, 45K validation images, and 45K test images. |
| Hardware Specification | Yes | In this study, experiments were conducted on So2Sat and CHAMMI using a single NVIDIA RTX GPU (48GB RAM) and three Intel(R) Xeon(R) Gold 6226R CPUs @ 2.90GHz. For experiments on JUMP-CP, two NVIDIA RTX A6000 GPUs and six Intel(R) Xeon(R) Gold 6226R CPUs @ 2.90GHz were utilized. |
| Software Dependencies | No | The paper mentions software components like DINOv2 and AdamW optimizer, but does not specify their version numbers. |
| Experiment Setup | Yes | We train each model for 60 epochs with a learning rate of 0.00004, and a batch size of 64. For JUMP-CP and So2Sat, the learning rate is warmed up for the initial 10 epochs, peaking at 0.0005, after which it gradually decays to 10⁻⁶ following a cosine scheduler. We also apply a weight decay of 0.04... We train each model for 100 epochs, with a batch size of 64 on JUMP-CP, and 128 on So2Sat. (A sketch of this schedule appears below the table.) |
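
The Pseudocode row above cites Algorithm 1, Diverse Channel Sampling (DCS). The paper's exact procedure is not reproduced in this report; the following is a minimal sketch of one plausible greedy reading, assuming DCS starts from a uniformly random channel and repeatedly adds the channel whose learnable embedding is least cosine-similar to the channels already selected. All names (`diverse_channel_sampling`, `channel_embeddings`, `k`) are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def diverse_channel_sampling(channel_embeddings: torch.Tensor, k: int) -> torch.Tensor:
    """Greedy diverse channel sampling (hedged sketch, not the authors' exact Algorithm 1).

    channel_embeddings: (C, D) learnable per-channel embeddings.
    k: number of channels to sample for this training step (k <= C).
    Returns the indices of the k selected channels.
    """
    C = channel_embeddings.size(0)
    emb = F.normalize(channel_embeddings, dim=-1)  # unit vectors for cosine similarity
    sim = emb @ emb.t()                            # (C, C) pairwise cosine similarities

    # Start from one uniformly random channel.
    selected = [torch.randint(C, (1,)).item()]
    remaining = set(range(C)) - set(selected)

    # Greedily add the channel least similar to the current selection.
    while len(selected) < k:
        rem = sorted(remaining)
        # For each remaining channel, its highest similarity to any selected channel.
        max_sim = sim[rem][:, selected].max(dim=1).values
        nxt = rem[max_sim.argmin().item()]
        selected.append(nxt)
        remaining.remove(nxt)

    return torch.tensor(selected)

# Example: 8 input channels with 16-dim embeddings, sample 4 diverse ones.
emb = torch.randn(8, 16)
print(diverse_channel_sampling(emb, k=4))
```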
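
The Experiment Setup row describes a linear warmup over the first 10 epochs to a 0.0005 peak, followed by cosine decay to 10⁻⁶, with AdamW and weight decay 0.04. Below is a minimal sketch of such a schedule, assuming per-epoch learning-rate updates over a 100-epoch run; the model is a stand-in and the variable names are placeholders, not from the paper.

```python
import math
import torch

# Stand-in model; the paper reports AdamW with weight decay 0.04.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.04)

warmup_epochs, total_epochs = 10, 100
peak_lr, final_lr = 5e-4, 1e-6

def lr_at(epoch: int) -> float:
    """Learning rate for a given (0-indexed) epoch: linear warmup, then cosine decay."""
    if epoch < warmup_epochs:
        return peak_lr * (epoch + 1) / warmup_epochs  # linear warmup to the peak
    # Cosine decay from peak_lr down to final_lr over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))

for epoch in range(total_epochs):
    for group in optimizer.param_groups:
        group["lr"] = lr_at(epoch)
    # ... one training epoch here ...
```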