Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction
Authors: Qing Wu, Lixuan Chen, Ce Wang, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on multiple datasets show that our Polyner achieves performance comparable to or better than supervised methods on in-domain datasets, while demonstrating significant performance improvements on out-of-domain datasets. |
| Researcher Affiliation | Academia | ShanghaiTech University; Shanghai Jiao Tong University; University of Science and Technology of China; Institute of Computing Technology, Chinese Academy of Sciences. Emails: {wuqing, chenlx1, yujingyi, zhangyy8}@shanghaitech.edu.cn; wangce@ict.ac.cn; hongjiang.wei@sjtu.edu.cn; skevinzhou@ustc.edu.cn |
| Pseudocode | No | The paper describes the method textually and with a diagram (Figure 1), but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for this work is available at: https://github.com/iwuqing/Polyner. |
| Open Datasets | Yes | We conduct experiments on four datasets, including two simulation datasets and two real collected datasets. The first dataset is Deep Lesion [43]... The second dataset is the XCOM dataset [44]... CNN-MAR [7] and Score-MAR [9] are respectively trained on the XCOM [44] and LIDC [46] datasets... |
| Dataset Splits | No | Note that all data are solely utilized for testing purposes since our method is fully unsupervised. ... all hyper-parameters are tuned on 10 samples from the Deep Lesion dataset [43], and are then held constant across all other samples. (No explicit train/validation/test splits are provided for their method beyond the 10 samples used for hyper-parameter tuning.) |
| Hardware Specification | Yes | Technically, the optimization of the Polyner for a CT image of 256×256 size requires about 2 minutes on a single NVIDIA RTX TITAN GPU (24 GB). |
| Software Dependencies | No | The paper mentions using the Adam optimizer, the SPEKTR toolkit, and MATLAB's `ifanbeam` function, but does not provide version numbers for software dependencies such as the programming language, deep learning framework, or core libraries. |
| Experiment Setup | Yes | For our Polyner, we leverage hash encoding [47] in combination with two fully connected (FC) layers of width 128 to implement the MLP network. A ReLU activation function is then applied after the first FC layer. For the hash encoding [47], we configure its hyper-parameters as follows: L = 16, T = 2^19, F = 8, N_min = 2, and b = 2. To optimize the network, we randomly sample 80 X-rays, i.e., \|R\| = 80 in Eqs. (7) & (8), at each training iteration. We set the hyper-parameter λ to 0.2 in Eq. (8). We employ the Adam optimizer [48] with default hyper-parameters and set the initial learning rate to 1e-3, which decays by a factor of 0.5 per 1000 epochs. The total number of training epochs is 4000. (A hedged code sketch of this configuration follows the table.) |
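
The sketch below translates the quoted experiment setup into PyTorch, for readers attempting a reproduction. The paper does not name its hash-encoding implementation, so tiny-cuda-nn (`tcnn`) is assumed here; `PolynerMLP` and the `n_energy_bins=64` value are illustrative, not from the released code. The authors' repository (https://github.com/iwuqing/Polyner) is the definitive reference.

```python
# Minimal sketch of the reported Polyner network and optimizer configuration.
# Assumptions (not stated in the paper): tiny-cuda-nn provides the hash
# encoding, and the MLP output is one attenuation value per discretized
# energy level. All identifiers below are illustrative.
import torch
import torch.nn as nn
import tinycudann as tcnn  # assumed hash-encoding backend


class PolynerMLP(nn.Module):
    def __init__(self, n_energy_bins: int):
        super().__init__()
        # Hash encoding [47] with the reported hyper-parameters:
        # L = 16 levels, T = 2^19 table size, F = 8 features per level,
        # N_min = 2 base resolution, b = 2 per-level growth factor.
        self.encoding = tcnn.Encoding(
            n_input_dims=2,  # 2-D image coordinates
            encoding_config={
                "otype": "HashGrid",
                "n_levels": 16,
                "n_features_per_level": 8,
                "log2_hashmap_size": 19,
                "base_resolution": 2,
                "per_level_scale": 2.0,
            },
        )
        # Two FC layers of width 128, with a ReLU after the first.
        self.net = nn.Sequential(
            nn.Linear(self.encoding.n_output_dims, 128),
            nn.ReLU(),
            nn.Linear(128, n_energy_bins),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # tcnn returns half precision by default; cast before the FC layers.
        return self.net(self.encoding(coords).float())


# Optimization schedule as reported in the paper.
model = PolynerMLP(n_energy_bins=64).cuda()  # bin count is illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # default betas/eps
# Decay the learning rate by 0.5 every 1000 epochs; 4000 epochs in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.5)
# Per iteration, the paper samples |R| = 80 rays and weights the regularizer
# in Eq. (8) with lambda = 0.2; the loss itself is not reproduced here.
```

The `StepLR` scheduler would be stepped once per epoch so that the factor-of-0.5 decay lands at epochs 1000, 2000, and 3000, matching the reported schedule.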