Crafting Efficient Neural Graph of Large Entropy

Authors: Minjing Dong, Hanting Chen, Yunhe Wang, Chang Xu

IJCAI 2019 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Excerpts: "Experimental results on ImageNet and CIFAR datasets [Krizhevsky, 2009; Deng et al., 2009] demonstrate that deep neural networks can be well compressed by investigating graph entropy while preserving the accuracy." And, from Section 3 (Experiments): "To evaluate the efficiency of our algorithms, we apply our RAIW algorithm to generate neural graphs based on different popular CNN architectures, such as VGG, ResNet and DenseNet." (A generic graph-entropy sketch follows this table.)
Researcher Affiliation | Collaboration | Minjing Dong (1), Hanting Chen (2,3), Yunhe Wang (3), Chang Xu (1). Affiliations: (1) School of Computer Science, Faculty of Engineering, University of Sydney, Australia; (2) Key Laboratory of Machine Perception (Ministry of Education), Peking University, China; (3) Huawei Noah's Ark Lab.
Pseudocode | Yes | Algorithm 1: Neural graph generation with greedy algorithm; Algorithm 2: Random regular neural graph generation; Algorithm 3: Regular neural graph generation with importance weights. (A hypothetical sketch of importance-weighted graph generation follows this table.)
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | "Experimental results on ImageNet and CIFAR datasets [Krizhevsky, 2009; Deng et al., 2009]"
Dataset Splits | No | The paper evaluates on standard benchmarks, e.g., "To evaluate the performance of RAIW, we evaluate on CIFAR-10 [Krizhevsky, 2009] with VGG-16 architecture," and compares against popular pruning techniques [Han et al., 2015; Li et al., 2016; Liu et al., 2017], but it does not state explicit train/validation/test split percentages or sample counts. While these datasets have standard splits, the paper does not confirm that it used them or any custom splits. (A standard CIFAR-10 loading setup, assumed rather than confirmed by the paper, follows this table.)
Hardware Specification | No | The paper does not describe the hardware used to run the experiments (e.g., GPU models, CPU models, memory).
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | No | The paper states, "To evaluate the efficiency of our algorithms, we apply our RAIW algorithm to generate neural graphs based on different popular CNN architectures, such as VGG, ResNet and DenseNet," but does not report specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text.
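
Note on graph entropy (Research Type row): the paper's exact entropy definition is not quoted on this page. As a generic illustration only, the sketch below computes the Shannon entropy of a graph's normalized degree distribution, one common notion of graph entropy; the function name and the use of networkx are assumptions, not the paper's method.

import numpy as np
import networkx as nx

def degree_distribution_entropy(graph):
    # Shannon entropy of the normalized degree distribution; one generic
    # notion of graph entropy, not necessarily the paper's definition.
    degrees = np.array([d for _, d in graph.degree()], dtype=float)
    p = degrees / degrees.sum()  # normalize degrees into a probability distribution
    p = p[p > 0]                 # drop zero-degree nodes so log(0) never occurs
    return float(-(p * np.log2(p)).sum())

# A regular graph has a uniform degree distribution, which maximizes this
# entropy for a fixed node count, in line with the paper's focus on
# regular neural graphs of large entropy.
g = nx.random_regular_graph(4, 64, seed=0)
print(degree_distribution_entropy(g))  # log2(64) = 6.0 bits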
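
Note on the Pseudocode row: Algorithms 1-3 themselves are not reproduced on this page, so the following is only a hypothetical sketch of what "regular neural graph generation with importance weights" could look like: keep the channels with the largest importance scores and wire them into a random regular graph. The function name, the top-k selection rule, and the stand-in importance scores are all assumptions, not the paper's Algorithm 3.

import numpy as np
import networkx as nx

def regular_graph_from_importance(importance, d, n_keep, seed=0):
    # Hypothetical sketch: keep the n_keep channels with the largest
    # importance scores, then connect them with a random d-regular graph.
    # networkx requires n_keep * d to be even.
    keep = np.argsort(importance)[::-1][:n_keep]  # top channels by score
    g = nx.random_regular_graph(d, n_keep, seed=seed)
    # Relabel nodes 0..n_keep-1 back to the original channel indices.
    return nx.relabel_nodes(g, {i: int(keep[i]) for i in range(n_keep)})

rng = np.random.default_rng(0)
importance = np.abs(rng.normal(size=64))  # stand-in scores, e.g. filter norms
g = regular_graph_from_importance(importance, d=4, n_keep=32)
print(g.number_of_nodes(), g.number_of_edges())  # 32 nodes, 64 edges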
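
Note on the Dataset Splits row: since the paper does not state its splits, the sketch below loads only the standard CIFAR-10 train/test split (50,000/10,000 images) as shipped by torchvision. The normalization statistics and batch size are conventional choices, not values confirmed by the paper.

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Conventional per-channel CIFAR-10 statistics (assumed, not from the paper).
normalize = transforms.Normalize((0.4914, 0.4822, 0.4465),
                                 (0.2470, 0.2435, 0.2616))
transform = transforms.Compose([transforms.ToTensor(), normalize])

# Standard torchvision split: train=True yields 50,000 images, train=False 10,000.
train_set = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=transform)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)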