Unsupervised Deep Learning for Phase Retrieval via Teacher-Student Distillation

Authors: Yuhui Quan, Zhile Chen, Tongyao Pang, Hui Ji

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that our proposed approach outperforms existing unsupervised PR methods with higher computational efficiency and performs competitively against supervised methods."
Researcher Affiliation | Academia | "1 School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; 2 Pazhou Lab, Guangzhou 510320, China; 3 Department of Mathematics, National University of Singapore, 119076, Singapore"
Pseudocode | No | The paper describes the network architecture and mathematical formulations but does not include a pseudocode or algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the described methodology.
Open Datasets | Yes | "Our training set consists of the 400 images of the Berkeley segmentation dataset (BSD) (Martin et al. 2001) and the 5600 images selected randomly from the PASCAL VOC dataset (Everingham et al. 2015). We randomly select 6000 (100) images from the public fashion product image dataset (Aggarwal 2019) to form the training (test) set. We select 82 (20) cell images from the public microscope cell image dataset (Payyavula 2018) to form the training (test) set."
Dataset Splits | No | The paper mentions training and test sets, and details of the learning-rate decay, but it does not specify a separate validation split (e.g., percentages or image counts) needed to reproduce the experiments.
Hardware Specification | Yes | "Table 1 also lists the inference time of different methods in reconstructing a 128×128 image with a uniform mask, run on an RTX Titan GPU."
Software Dependencies | No | "The TFPnP, E2E, DDec, DMMSE and our model are all implemented with PyTorch." However, specific version numbers for PyTorch or other software dependencies are not provided.
Experiment Setup | Yes | "Through the experiments, we set K = 5 for both NNs and λT = λS = 1, N = 2 for training. The teacher and student models are jointly trained using the Adam optimizer with 200 epochs and a batch size of 8. The learning rate is initialized to 5×10^-4 when the measurement number is two times larger than the pixel number, and 1×10^-3 otherwise. It is then decayed every 100 epochs by a factor of 0.5."
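
To make the quoted training configuration concrete, below is a minimal PyTorch sketch of the joint Adam optimization and step-decay schedule. The teacher/student modules, the random train_loader, the oversampled flag, and the placeholder distillation loss are illustrative assumptions only; the paper's actual architectures, phase-retrieval measurement model, and loss terms are not reproduced here.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in teacher/student networks and dummy data for illustration.
teacher = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))
student = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1))
train_loader = DataLoader(TensorDataset(torch.randn(64, 1, 128, 128)), batch_size=8, shuffle=True)

# Learning rate as quoted: 5e-4 when the measurement number is at least twice
# the pixel number, 1e-3 otherwise (flag set arbitrarily here).
oversampled = True
lr = 5e-4 if oversampled else 1e-3

# Teacher and student are trained jointly: one Adam optimizer over both parameter sets.
optimizer = optim.Adam(list(teacher.parameters()) + list(student.parameters()), lr=lr)
# Decay every 100 epochs by a factor of 0.5, over 200 epochs total.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

for epoch in range(200):
    for (x,) in train_loader:
        optimizer.zero_grad()
        # Placeholder objective; the paper instead uses its teacher-student
        # distillation losses with weights lambda_T = lambda_S = 1.
        loss = ((teacher(x) - x) ** 2).mean() + ((student(x) - teacher(x).detach()) ** 2).mean()
        loss.backward()
        optimizer.step()
    scheduler.step()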