Neural Interferometry: Image Reconstruction from Astronomical Interferometers Using Transformer-Conditioned Neural Fields

Authors: Benjamin Wu, Chao Liu, Benjamin Eckart, Jan Kautz

AAAI 2022, pp. 2685-2693 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Results on synthetically observed galaxies show that transformer-conditioned neural fields can successfully reconstruct astronomical observations even when the number of visibilities is very sparse. Table 1: Quantitative Metrics on Test Set Observations. Our proposed transformer outperforms both traditional methods (CLEAN) and a strong deep learning baseline (U-Net).
Researcher Affiliation | Collaboration | Benjamin Wu (1,2)*, Chao Liu (2), Benjamin Eckart (2), Jan Kautz (2); (1) National Astronomical Observatory of Japan, (2) NVIDIA
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks, nor does it include clearly labeled algorithm sections.
Open Source Code | Yes | Code is available at https://github.com/wubenjamin/neural-interferometry.
Open Datasets | Yes | To learn priors from a large amount of data, we synthesized interferometric observations of the Galaxy10 (SDSS) and Galaxy10 (DECals) datasets (Leung 2021).
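For intuition, the following is a minimal NumPy sketch of the general recipe behind such synthesis: an interferometer sparsely samples the 2D Fourier transform of the sky image, and each sample is a complex visibility. The function name, random (u, v) sampling pattern, and sparsity level here are illustrative assumptions, not the authors' eht-imaging-based pipeline.

```python
import numpy as np

def synthesize_visibilities(image: np.ndarray, num_samples: int, seed: int = 0):
    """Return sparse complex visibilities and their (u, v) pixel coordinates."""
    rng = np.random.default_rng(seed)
    # Full complex visibility plane: centered 2D FFT of the sky image.
    vis_plane = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
    h, w = vis_plane.shape
    # Illustrative sampling: random (u, v) points. A real array instead traces
    # elliptical uv-tracks set by baseline geometry and Earth rotation.
    u = rng.integers(0, w, size=num_samples)
    v = rng.integers(0, h, size=num_samples)
    visibilities = vis_plane[v, u]  # complex measurements at the sampled points
    return visibilities, np.stack([u, v], axis=-1)

# Example: a 64x64 toy Gaussian "galaxy" observed at 200 sparse (u, v) points.
toy_image = np.exp(-np.linspace(-3, 3, 64)[:, None] ** 2
                   - np.linspace(-3, 3, 64)[None, :] ** 2)
vis, uv = synthesize_visibilities(toy_image, num_samples=200)
```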
Dataset Splits | No | The paper mentions training on Galaxy10 (SDSS) and testing on Galaxy10 (DECals) but does not provide the specific training/validation/test splits (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning with a dedicated validation set.
Hardware Specification | No | The paper does not report the hardware used for its experiments (exact GPU/CPU models, processor types and speeds, or memory amounts).
Software Dependencies | No | The paper mentions the eht-imaging toolkit and NeRF-style methods, but it does not list ancillary software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1) needed to replicate the experiment.
Experiment Setup | Yes | In implementation, we use a NeRF-style (Mildenhall et al. 2020) positional encoding (axis-aligned, powers-of-two frequency sinusoidal encodings) to make it easier for the MLP to learn high-frequency information (Tancik et al. 2020a). In our experiments, we use an 8-layer MLP and 8 output tokens for the Transformer. Both γ(·) and β(·) are implemented as simple affine layers with non-linearities. The spectral coordinates are mapped into positional encodings (PEs) while the complex measurements are treated as 2D input and linearly embedded. The dimensions of the PE and linear embedding are both 512, with the PE being the Random Fourier Embedding (Tancik et al. 2020b). The embedded measurements and the PEs are concatenated to form the input tokens to the Transformer layers. The multi-headed self-attention layers all have five heads. The two-layer MLPs between neighboring self-attention layers share weights. The 1024-dimension output tokens are used as the conditioning variables in the following FiLM layers to condition the MLP layers.
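Pieced together from this description, the PyTorch sketch below (assuming PyTorch >= 1.9 for batch_first) illustrates the conditioning pipeline: Random-Fourier-embedded spectral coordinates concatenated with linearly embedded visibilities form 1024-dim tokens, a Transformer produces 8 output tokens, and those tokens FiLM-condition an 8-layer coordinate MLP with NeRF-style input encodings. The class names, the Transformer depth (4 layers), the hidden width, the use of learned query tokens to read out the 8 conditioning tokens, and the one-token-per-MLP-layer scheme are all assumptions; PyTorch's built-in encoder also requires d_model divisible by the head count, so the sketch uses 4 heads rather than the paper's 5, and it does not reproduce the weight sharing between per-layer MLPs. This is an illustration of the technique, not the authors' released code (see the repository linked above).

```python
import math
import torch
import torch.nn as nn

def nerf_encode(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    """NeRF-style axis-aligned, powers-of-two sinusoidal positional encoding."""
    freqs = (2.0 ** torch.arange(num_freqs, device=x.device)) * math.pi
    ang = x[..., None] * freqs                      # (..., dims, num_freqs)
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1).flatten(-2)

class RandomFourierEmbedding(nn.Module):
    """Random Fourier features (Tancik et al. 2020b) for spectral coordinates."""
    def __init__(self, in_dim: int = 2, out_dim: int = 512, scale: float = 10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, out_dim // 2) * scale)

    def forward(self, uv: torch.Tensor) -> torch.Tensor:
        ang = 2 * math.pi * uv @ self.B
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)  # (..., 512)

class FiLMLayer(nn.Module):
    """Hidden layer modulated by FiLM; per the quote, gamma and beta are
    affine maps of the conditioning token followed by non-linearities."""
    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.gamma = nn.Sequential(nn.Linear(cond_dim, dim), nn.ReLU())
        self.beta = nn.Sequential(nn.Linear(cond_dim, dim), nn.ReLU())

    def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.gamma(cond) * self.fc(h) + self.beta(cond))

class NeuralInterferometrySketch(nn.Module):
    def __init__(self, token_dim: int = 1024, num_mlp_layers: int = 8,
                 hidden: int = 256, num_freqs: int = 10):
        super().__init__()
        self.pe = RandomFourierEmbedding(2, 512)    # spectral (u, v) coords
        self.embed = nn.Linear(2, 512)              # Re/Im of each visibility
        # Paper states 5 heads; PyTorch needs d_model % nhead == 0, so use 4.
        enc_layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Learned query tokens whose encoder outputs become the 8 conditioning
        # tokens (how the output tokens are produced is an assumption here).
        self.queries = nn.Parameter(torch.randn(1, num_mlp_layers, token_dim))
        in_dim = 2 * 2 * num_freqs                  # NeRF PE of (x, y)
        self.input_proj = nn.Linear(in_dim, hidden)
        self.films = nn.ModuleList(
            [FiLMLayer(hidden, token_dim) for _ in range(num_mlp_layers)])
        self.head = nn.Linear(hidden, 1)            # predicted sky intensity

    def forward(self, uv, vis, coords):
        # uv: (B, N, 2) spectral coords; vis: (B, N, 2) Re/Im measurements;
        # coords: (B, P, 2) image-plane query points in [-1, 1].
        tokens = torch.cat([self.pe(uv), self.embed(vis)], dim=-1)  # (B,N,1024)
        seq = torch.cat([self.queries.expand(uv.shape[0], -1, -1), tokens], 1)
        cond = self.encoder(seq)[:, :self.queries.shape[1]]         # (B,8,1024)
        h = torch.relu(self.input_proj(nerf_encode(coords)))
        for i, film in enumerate(self.films):
            h = film(h, cond[:, i:i + 1])   # token i conditions MLP layer i
        return self.head(h)                 # (B, P, 1) intensities
```

As a usage example under the same assumptions, `model(uv, vis, xy)` with `uv` and `vis` of shape (2, 100, 2) and `xy` of shape (2, 4096, 2) returns a (2, 4096, 1) tensor of predicted intensities at the queried image-plane points.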