NeurOLight: A Physics-Agnostic Neural Operator Enabling Parametric Photonic Device Simulation

Authors: Jiaqi Gu, Zhengqi Gao, Chenghao Feng, Hanqing Zhu, Ray T. Chen, Duane S. Boning, David Z. Pan

NeurIPS 2022

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "4.1 Experiment setup... In Table 1, we compare four models... Datasets. We focus on widely applied multi-mode interference (MMI) photonic devices... Table 1: Comparison of parameter count, train error, and test error on two benchmarks among four different models."

Researcher Affiliation | Academia | "Jiaqi Gu¹, Zhengqi Gao², Chenghao Feng¹, Hanqing Zhu¹, Ray T. Chen¹, Duane S. Boning², David Z. Pan¹; ¹The University of Texas at Austin, ²Massachusetts Institute of Technology"

Pseudocode | No | The paper describes its model architecture and components in text and diagrams but does not include structured pseudocode or algorithm blocks.

Open Source Code | Yes | "Our code is available at link."

Open Datasets | No | "We generate our customized MMI device simulation dataset using an open-source FDFD simulator angler [16]. The tunable MMI dataset has 5.5 K single-source training data, 614 validation data, and 1.5 K multi-source test data. The etched MMI dataset has 12.4 K single-source training data, 1.4 K validation data, and 1.5 K multi-source test data." The paper describes a custom-generated dataset but does not provide concrete access information (link, DOI, repository, or formal citation with author/year for the dataset itself); see the data-generation sketch after the table.

Dataset Splits | Yes | "For the tunable MMI dataset, we split all 7,680 examples into 72% training data, 8% validation data, and 20% test data. For the etched MMI dataset, we split all 15,360 examples into 81% training data, 9% validation data, and 10% test data." (These ratios reproduce the stated example counts; see the split-size check after the table.)

Hardware Specification | Yes | "All experiments are conducted on a machine with Intel Core i7-9700 CPUs and an NVIDIA Quadro RTX 6000 GPU."

Software Dependencies | Yes | "We implement all models and training logic in PyTorch 1.10.2."

Experiment Setup | Yes | "For training from scratch, we set the number of epochs to 200 with an initial learning rate of 0.002, cosine learning rate decay, and a mini-batch size of 12." (See the training sketch after the table.)
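Since the datasets are generated rather than distributed, reproducing them means driving the angler FDFD solver directly. Below is a minimal sketch of producing one field-solution sample; the wavelength, grid resolution, MMI geometry, and point-source placement are illustrative assumptions rather than the paper's actual generation settings, and the call pattern follows angler's public examples.

```python
# Minimal sketch: one FDFD field solution with the open-source solver
# `angler` (github.com/fancompute/angler), which the paper uses for
# dataset generation. All geometry/grid numbers here are assumptions.
import numpy as np
from angler import Simulation

C0 = 3e8                              # speed of light (m/s)
wavelength = 1.55e-6                  # assumed telecom wavelength (m)
omega = 2 * np.pi * C0 / wavelength   # angular frequency (rad/s)
dl = 0.05                             # grid step in units of L0 = 1e-6 m (angler default)
NPML = [15, 15]                       # PML thickness in grid cells (x, y)

# Toy permittivity map: a rectangular silicon MMI body in silica cladding.
Nx, Ny = 240, 120
eps_r = np.ones((Nx, Ny)) * 1.44**2   # SiO2 background
eps_r[40:200, 30:90] = 3.48**2        # Si multi-mode region

sim = Simulation(omega, eps_r, dl, NPML, 'Ez')  # TM polarization, solve for Ez

# Point current source near the input port; a modal source would be
# closer to the paper's single-source excitation setup.
sim.src[30, Ny // 2] = 1.0

Hx, Hy, Ez = sim.solve_fields()       # FDFD solve; Ez is the field map to learn
sample = {"eps_r": eps_r, "source": sim.src, "field": Ez}
```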
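The split percentages above reproduce the stated example counts exactly once rounding is handled. A quick check, using torch.utils.data.random_split as an assumed splitting mechanism (the excerpt does not say how the split is drawn):

```python
# Check that the reported split ratios yield the stated example counts,
# then apply them with PyTorch's random_split (assumed mechanism).
import torch
from torch.utils.data import TensorDataset, random_split

def split_sizes(n_total, ratios):
    sizes = [int(n_total * r) for r in ratios]
    sizes[0] += n_total - sum(sizes)           # fold rounding remainder into train
    return sizes

print(split_sizes(7680, (0.72, 0.08, 0.20)))   # tunable MMI -> [5530, 614, 1536]
print(split_sizes(15360, (0.81, 0.09, 0.10)))  # etched MMI  -> [12442, 1382, 1536]

# Applying the split to a placeholder dataset with a fixed seed:
dataset = TensorDataset(torch.zeros(7680, 1))
train_set, val_set, test_set = random_split(
    dataset,
    split_sizes(7680, (0.72, 0.08, 0.20)),
    generator=torch.Generator().manual_seed(0),
)
```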
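Finally, the stated optimization hyperparameters map directly onto standard PyTorch components. A minimal sketch follows, assuming Adam as the optimizer and an MSE loss (neither is named in the quoted setup), with a placeholder model and data:

```python
# Training-loop sketch matching the quoted setup: 200 epochs, initial
# LR 0.002, cosine LR decay, mini-batch size 12. Optimizer, loss,
# model, and data are placeholders/assumptions.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(16, 16)   # stand-in for the NeurOLight model
loader = DataLoader(
    TensorDataset(torch.randn(7680, 16), torch.randn(7680, 16)),
    batch_size=12, shuffle=True,
)

optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)  # Adam assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
criterion = nn.MSELoss()                                   # loss assumed

for epoch in range(200):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()   # cosine decay stepped once per epoch
```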