Bat-G net: Bat-inspired High-Resolution 3D Image Reconstruction using Ultrasonic Echoes

Authors: Gunpil Hwang, Seohyeon Kim, Hyeon-Min Bae

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The Bat-G network shows uniform 3D reconstruction results and achieves precision, recall, and F1-score of 0.896, 0.899, and 0.895, respectively. The experimental results demonstrate the implementation feasibility of a high-resolution, non-optical, sound-based imaging system like that used by live bats.
Researcher Affiliation | Academia | School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea. {gphwang, dddokman, hmbae}@kaist.ac.kr
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states 'The project web page (https://sites.google.com/view/batgnet) contains additional content summarizing our research,' but that page does not provide a direct link to the source code for the described methodology.
Open Datasets | No | The paper states it 'created 4-channel ultrasound echo dataset, ECHO-4CH (49 k data for training and 2.6 k data for evaluation)' and 'We have chosen 16.2 k geometric object configuration... and created the objects using the building blocks and a 3D printer.' The dataset is custom-made, and no public access link, DOI, or formal citation is provided for it.
Dataset Splits | Yes | 'We have adopted a supervised learning algorithm and created 4-channel ultrasound echo dataset, ECHO-4CH (49 k data for training and 2.6 k data for evaluation).'
Hardware Specification | Yes | 'The network is iteratively trained with 500 k steps on a GTX 1080 Ti GPU and a Threadripper 1900X CPU.'
Software Dependencies | No | The paper mentions techniques such as batch normalization and ReLU activation, and the Adam optimization algorithm, but does not provide version numbers for any software dependencies such as libraries or frameworks (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | 'The loss function is implemented by employing L2-regularization loss (regularization strength λ = 10⁻⁶), and cross-entropy loss with softmax activation S... We adopted the Adam optimization algorithm [48] (β₁, β₂, and ε are 0.9, 0.999, and 10⁻⁸, respectively) with an exponential decay (learning rate, decay rate, and decay steps are 10⁻⁴, 0.9, and 5 k, respectively) for better convergence. To reduce overfitting, dropout with the probability of retention of 0.5 [49] is applied to the network during training. The network is iteratively trained with 500 k steps...' A hedged configuration sketch assembled from these values follows the table.
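
For concreteness, the hyperparameters quoted in the Experiment Setup row can be assembled into a minimal training-configuration sketch. The paper does not name its framework, so PyTorch is assumed here; the tiny placeholder model and the synthetic batches are illustrative stand-ins, not the authors' Bat-G network.

# Minimal sketch of the quoted training setup, assuming PyTorch.
import torch
import torch.nn as nn

# Placeholder model standing in for the Bat-G network; the real network
# maps 4-channel ultrasonic echoes to a 3D reconstruction.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # retention probability 0.5, as quoted above [49]
    nn.Linear(128, 2),
)

# Cross-entropy loss with softmax activation: CrossEntropyLoss applies
# log-softmax internally, so no explicit softmax layer is needed.
criterion = nn.CrossEntropyLoss()

# Adam [48] with β₁ = 0.9, β₂ = 0.999, ε = 1e-8; weight_decay realizes
# the L2-regularization term with strength λ = 1e-6.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=1e-6,
)

# Exponential decay: multiply the learning rate by 0.9 every 5 k steps.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5_000, gamma=0.9)

num_steps = 500_000  # 500 k steps in the paper; reduce for a quick test
for step in range(num_steps):
    inputs = torch.randn(32, 64)          # synthetic stand-in batch
    targets = torch.randint(0, 2, (32,))  # synthetic stand-in labels
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()

Note that per-step StepLR is one common way to realize the quoted staircase exponential decay; the paper does not specify whether the decay was applied in staircase or continuous form.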