Log-Polar Space Convolution Layers
Authors: Bing Su, Ji-Rong Wen
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC. |
| Researcher Affiliation | Academia | Bing Su, Ji-Rong Wen Beijing Key Laboratory of Big Data Management and Analysis Methods Gaoling School of Artificial Intelligence, Renmin University of China Beijing 100872, China subingats@gmail.com; jrwen@ruc.edu.cn |
| Pseudocode | No | The paper describes the calculation of LPSC using mathematical equations (Eq. 1, Eq. 3) and descriptive text, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is available at https://github.com/BingSu12/Log-Polar-Space-Convolution. |
| Open Datasets | Yes | For image classification, we evaluate the behaviors of LPSC integrated with different CNN architectures on three datasets: CIFAR-10, CIFAR-100 [52], and ImageNet [53]. |
| Dataset Splits | Yes | ImageNet [53] contains 1.28 million training images and 50k validation images from 1000 classes. |
| Hardware Specification | No | The paper mentions 'Due to the limitation of computing resources, we reduced the batch size and learning rate by 4 times,' but does not provide specific hardware details such as GPU models, CPU types, or cloud instance specifications used for the experiments. |
| Software Dependencies | No | We use the PyTorch [54] implementation of these architectures as our baseline. |
| Experiment Setup | Yes | To make a fair comparison, all experimental setup and details including the learning rate, batch size, number of filters per layer, hyper-parameters for the optimizer (e.g., γ, momentum, weight decay) remain exactly the same as in the baseline. |
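The paper defines LPSC through equations (Eq. 1, Eq. 3) rather than pseudocode. As a rough illustration of the general log-polar idea only (not the paper's exact formulation), the sketch below assigns each offset in a square kernel to a logarithmically spaced distance level and a uniform angle bin; the function name, signature, and binning formula are all illustrative assumptions, not the authors' code.

```python
import math

def log_polar_bins(radius, num_dist, num_angle):
    """Map each kernel offset (dy, dx) within `radius` of the center to a
    (distance-level, angle) bin on a log-polar grid.

    Distance levels are spaced logarithmically, so bins near the center are
    fine and bins far from the center are coarse -- the general log-polar
    property, not the paper's exact Eq. 1 / Eq. 3.
    """
    bins = {}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                bins[(dy, dx)] = ("center", None)  # center cell kept separate
                continue
            r = math.hypot(dy, dx)
            if r > radius:
                continue  # offset lies outside the circular support
            # log-scaled distance level in [0, num_dist)
            d = min(int(math.log(r + 1.0, radius + 1.0) * num_dist),
                    num_dist - 1)
            # uniform angle bin in [0, num_angle)
            theta = math.atan2(dy, dx) % (2.0 * math.pi)
            a = int(theta / (2.0 * math.pi) * num_angle) % num_angle
            bins[(dy, dx)] = (d, a)
    return bins

bins = log_polar_bins(radius=3, num_dist=2, num_angle=8)
```

In an actual layer, pixels sharing a bin would be aggregated and each bin would carry one learnable weight, so the parameter count depends on `num_dist * num_angle` rather than on the kernel's spatial extent.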