Amortized Inference for Heterogeneous Reconstruction in Cryo-EM

Authors: Axel Levy, Gordon Wetzstein, Julien N. P. Martel, Frédéric Poitevin, Ellen D. Zhong

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate cryoFIRE for ab initio heterogeneous reconstruction and compare it with the state-of-the-art method cryoDRGN2 [39]. We first validate that using an encoder to predict poses, instead of performing an exhaustive pose search, enables us to reduce the runtime of heterogeneous reconstruction on a synthetic dataset. We show that the encoder is able to accurately predict ϕ_i and z_i for images it has never processed during training, thereby validating the ability of an encoder-like architecture to amortize the runtime over the size of the dataset.
Researcher Affiliation | Academia | Axel Levy (Stanford University), Gordon Wetzstein (Stanford University), Julien Martel (Stanford University), Frédéric Poitevin (SLAC National Accelerator Laboratory), Ellen D. Zhong (Princeton University). Correspondence to: zhonge@princeton.edu
Pseudocode | No | The paper describes the architecture and training procedures in text, but it does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | "By providing an open-source implementation of cryoFIRE upon publication, together with benchmark metrics, we hope to make cryo-EM research accessible to a broader class of researchers." and "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We plan on making code available."
Open Datasets | Yes | We use the publicly available dataset EMPIAR-10180 [21] of a pre-catalytic spliceosome (Supplement C).
Dataset Splits | No | Table 1 reports training and test set sizes (e.g., 'Small (Train: 50k / Test: 10k)'), but a separate validation split and its size are not explicitly provided.
Hardware Specification | Yes | We train the models on a single NVIDIA A100 SXM4 40GB GPU.
Software Dependencies | No | The paper mentions using the ADAM optimizer, but it does not specify software dependencies such as programming languages, libraries, or frameworks with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | With cryoFIRE, we fix d = 8 and activate the conformation MLP after the model has seen 1.5M images... Images of size D = 128 are fed by batches of maximum sizes (128 for cryoFIRE, 32 for cryoDRGN2)... The model is optimized with the ADAM optimizer [10] and a learning rate of 10^-4.
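
The "Research Type" and "Experiment Setup" rows above describe an amortized setup in which an encoder predicts a pose ϕ_i and a conformation latent z_i for each particle image, trained with the quoted hyperparameters (d = 8, image size D = 128, batch size 128, ADAM with learning rate 10^-4). The following is a minimal, hypothetical PyTorch sketch of such a setup; the backbone, the 9-dimensional pose parameterization, and the dummy loss are placeholder assumptions, not the authors' cryoFIRE implementation.

```python
import torch
import torch.nn as nn

D = 128          # image side length quoted in the "Experiment Setup" row
LATENT_DIM = 8   # conformation latent dimension d = 8
BATCH_SIZE = 128 # maximum batch size quoted for cryoFIRE
LEARNING_RATE = 1e-4

class AmortizedEncoderSketch(nn.Module):
    """Hypothetical stand-in for an amortized encoder that maps each particle
    image to a pose estimate and a conformation latent z_i (not the paper's
    actual architecture)."""
    def __init__(self, d: int = LATENT_DIM):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(D * D, 256),
            nn.ReLU(),
        )
        self.pose_head = nn.Linear(256, 9)  # assumed 9-dim rotation parameterization
        self.conf_head = nn.Linear(256, d)  # conformation latent z_i

    def forward(self, images: torch.Tensor):
        h = self.backbone(images)
        return self.pose_head(h), self.conf_head(h)

encoder = AmortizedEncoderSketch()
optimizer = torch.optim.Adam(encoder.parameters(), lr=LEARNING_RATE)

# One illustrative optimization step on a random batch; a real pipeline would
# render images from the predicted poses/conformations and compare them with
# the observed particles instead of using this dummy loss.
images = torch.randn(BATCH_SIZE, 1, D, D)
poses, z = encoder(images)
loss = poses.pow(2).mean() + z.pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the encoder amortizes inference, predicting ϕ_i and z_i for a new image is a single forward pass rather than an exhaustive pose search, which is the runtime advantage described under "Research Type".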