Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

FreeNet: Liberating Depth-Wise Separable Operations for Building Faster Mobile Vision Architectures

Authors: Hao Yu, Haoyu Chen, Wei Peng, Xu Cheng, Guoying Zhao

AAAI 2025 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | 'Extensive experiments demonstrate that FreeNet offers a superior accuracy-efficiency tradeoff compared to the latest efficient models. On ImageNet-1k, FreeNet-S2 outperforms the StarNet-S4 by 0.4% in top-1 accuracy, while running around 40% faster on desktop GPU and 15% faster on mobile GPU.' |
| Researcher Affiliation | Academia | 1 Center for Machine Vision and Signal Analysis, University of Oulu, Finland; 2 Department of Psychiatry and Behavioral Sciences, Stanford University, USA; 3 School of Computer Science, Nanjing University of Information Science and Technology, China; 4 Department of Computer Science, Aalto University, Finland |
| Pseudocode | No | The paper describes its methods with figures (Figures 2, 3, and 4) but includes no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured steps in a code-like format. |
| Open Source Code | No | The paper states 'Our models are implemented in PyTorch' but provides no explicit statement about releasing the source code, no repository link, and no mention of code in supplementary materials. |
| Open Datasets | Yes | 'We benchmark FreeNets on the ImageNet-1k dataset (Deng et al. 2009). ... We evaluate the ImageNet-1k pretrained FreeNet on the MSCOCO dataset (Lin et al. 2014) and ADE20K datasets (Zhou et al. 2017)...' |
| Dataset Splits | No | The paper mentions training on ImageNet-1k and evaluating on MSCOCO and ADE20K, and refers to 'basic settings following (Chen et al. 2023) and advanced settings with distillation (Touvron et al. 2021; Shaker et al. 2023)'. However, it does not explicitly provide the specific training/validation/test splits (e.g., percentages or sample counts) used for its experiments in the main text. |
| Hardware Specification | Yes | 'For the speed benchmark on NVIDIA GPU, we select the RTX-2080ti GPU... The on-mobile speed benchmark is conducted on the Snapdragon 8cx Gen 3 chip... Our models are implemented in PyTorch and trained using 8 AMD Instinct MI250X GPUs...' |
| Software Dependencies | No | The paper states 'Our models are implemented in PyTorch' and mentions the 'Tencent TNN Android inference framework' and 'OpenGL library', but does not give specific version numbers for any of these software components. |
| Experiment Setup | Yes | 'Briefly, all models are trained for 300 epochs using the AdamW optimizer, with a learning rate scaled as (Batch Size / 1024) × 1e-3. Our models are implemented in PyTorch and trained using 8 AMD Instinct MI250X GPUs with the MXNet-REC format data.' |
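The learning-rate rule quoted in the Experiment Setup row is the common linear scaling convention: a base rate of 1e-3 at a reference batch size of 1024, scaled proportionally for other batch sizes. A minimal sketch follows; the function name and defaults are illustrative, not taken from the paper.

```python
def scaled_lr(batch_size: int, base_lr: float = 1e-3, base_batch: int = 1024) -> float:
    """Linear learning-rate scaling: lr = base_lr * (batch_size / base_batch)."""
    return base_lr * batch_size / base_batch

# At the reference batch size, the rate equals the base rate.
print(scaled_lr(1024))  # 0.001
# Doubling the batch size doubles the rate.
print(scaled_lr(2048))  # 0.002
```

Under this convention, halving the batch size to 512 would similarly halve the rate to 5e-4.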