ILILT: Implicit Learning of Inverse Lithography Technologies

Authors: Haoyu Yang, Haoxing Ren

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To demonstrate the effectiveness of the ILILT framework, we adopt the latest AI for computational lithography benchmark suite LithoBench (Zheng et al., 2023), where state-of-the-art mask optimization solutions are evaluated and will be used as comparison baselines."
Researcher Affiliation | Industry | "Design Automation Research Group, NVIDIA, Austin, TX. Correspondence to: Haoyu Yang <haoyuy@nvidia.com>."
Pseudocode | No | The paper describes the algorithm steps in text and equations but does not provide pseudocode or an algorithm block.
Open Source Code | No | The paper does not contain any statement about releasing open-source code, nor a link to a code repository for the described methodology.
Open Datasets | Yes | "To demonstrate the effectiveness of the ILILT framework, we adopt the latest AI for computational lithography benchmark suite LithoBench (Zheng et al., 2023)"
Dataset Splits | No | The paper mentions training data and evaluation metrics but does not specify explicit training/validation/test splits (e.g., percentages or counts) for the dataset used.
Hardware Specification | No | The paper mentions "GPU-ILT" as a comparison method but does not specify the hardware (e.g., specific GPU models or CPUs) used for its own experiments.
Software Dependencies | No | The paper mentions using a PyTorch layer for the lithography simulator and the Adam optimizer in Table 1, but it does not provide version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "Detailed configurations of the generator used to reproduce our results are listed in Table 1." Table 1 reports: Max Epoch 5; Initial Learning Rate 0.004; Learning Rate Decay Policy: step; Optimizer: Adam; Weight Decay 0.0001; Loss: Equation (13); Sequence Unrolling Depth (T): 4-8; Batch Size 4.
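
To make the reported configuration concrete, below is a minimal PyTorch sketch of how the Table 1 hyperparameters could be wired together. Only the quoted values (epochs, learning rate, decay policy, optimizer, weight decay, unrolling depth, batch size) come from the paper; the `Generator` placeholder, the `ilt_loss` stand-in for Equation (13), the StepLR step size and gamma, and the dummy data loader are assumptions made for illustration, not the authors' implementation.

```python
import torch
from torch import nn, optim

# Hypothetical stand-ins: the paper's generator architecture and its
# Equation (13) loss are not reproduced in this report, so placeholders
# are used here purely for illustration.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder network; the real generator is described in the paper.
        self.net = nn.Conv2d(2, 1, kernel_size=3, padding=1)

    def forward(self, mask, target):
        # One unrolled update step: refine the current mask given the target.
        return torch.sigmoid(self.net(torch.cat([mask, target], dim=1)))

def ilt_loss(mask, target):
    # Placeholder for the paper's Equation (13).
    return nn.functional.mse_loss(mask, target)

# Values quoted from Table 1.
max_epochs = 5   # Max Epoch
lr = 0.004       # Initial Learning Rate
wd = 1e-4        # Weight Decay
T = 8            # Sequence Unrolling Depth, reported range 4-8
batch_size = 4   # Batch Size

model = Generator()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
# "step" decay policy; step_size and gamma are assumptions (not reported).
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

# Dummy data standing in for a LithoBench loader of (initial mask, target) pairs.
loader = [(torch.rand(batch_size, 1, 64, 64), torch.rand(batch_size, 1, 64, 64))
          for _ in range(10)]

for epoch in range(max_epochs):
    for mask, target in loader:
        optimizer.zero_grad()
        for _ in range(T):  # unroll T update steps of the generator
            mask = model(mask, target)
        loss = ilt_loss(mask, target)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

Unrolling T generator steps before a single backward pass is one plausible reading of "Sequence Unrolling Depth (T)"; the paper's exact training procedure may differ.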