Hypergraph Induced Convolutional Manifold Networks
Authors: Taisong Jin, Liujuan Cao, Baochang Zhang, Xiaoshuai Sun, Cheng Deng, Rongrong Ji
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the image classification task on large benchmarking datasets demonstrate that our model achieves much better performance than the state-of-the-art. |
| Researcher Affiliation | Academia | (1) Fujian Key Laboratory of Sensing and Computing for Smart City, School of Information Science and Engineering, Xiamen University, 361005, China; (2) Science and Technology on Electro-Optical Control Laboratory, Luoyang, 471023, China; (3) School of Automation Science and Electrical Engineering, Beihang University, 100083, China; (4) School of Electronic Engineering, Xidian University, 710071, China |
| Pseudocode | Yes | Algorithm 1 Training of convolutional manifold networks |
| Open Source Code | No | The paper does not contain any explicit statement that the authors are releasing their code or provide a link to a code repository for the methodology described. |
| Open Datasets | Yes | We conducted the classification experiments on the CIFAR-10 natural image dataset [Krizhevsky, 2009]; in our experiments, Gaussian noise is added to the SVHN digit dataset [Netzer et al., 2012] and the CIFAR-10 and CIFAR-100 natural image datasets [Krizhevsky, 2009] for the classification tasks; finally, we conducted the classification experiments on the large-scale ImageNet dataset. |
| Dataset Splits | No | The paper mentions training and testing but does not explicitly provide specific details about a validation dataset split (e.g., percentages, sample counts, or methodology). |
| Hardware Specification | Yes | The compared deep learning models were trained on 4 GPUs (Titan Xp) with 3.20 GHz and 12 GB memory. |
| Software Dependencies | No | The paper mentions deep learning frameworks and models used (e.g., VGG, GoogLeNet), but does not specify software versions for libraries, frameworks, or programming languages (e.g., Python version, TensorFlow/PyTorch version, CUDA version). |
| Experiment Setup | Yes | The feature buffer size is set to 100, the neighborhood size parameter is set to 10, the mini-batch size is set to 150, and the number of epochs is set to 180. The model has four essential parameters: (1) the ℓ2-norm regularization parameter λ of ridge regression, (2) the trade-off parameter ρ between the softmax loss and the manifold loss, (3) the weight decay coefficient γ, and (4) the threshold parameter τ of the hyperedge generation. The threshold is set as a function of the largest coefficient of the centroid sample, i.e., τ = τ_s(c_max), where c_max is the largest coefficient of the centroid sample. In addition, (1) the feature buffer size k0 and (2) the neighborhood size k are two further settings that affect performance (see the configuration sketch below the table). |
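
The following is a minimal, hypothetical sketch of the training configuration implied by the quoted experiment-setup details. The parameter names are assumptions made for illustration, and the values of λ, ρ, γ, and the threshold scale are not quoted in the response above, so they are left as placeholders.

```python
# Illustrative (hypothetical) configuration assembled from the quoted experiment setup.
# Names are assumptions; values marked None are not reported in the excerpt above.
config = {
    "feature_buffer_size": 100,   # k0: size of the feature buffer
    "neighborhood_size": 10,      # k: neighborhood size for hyperedge construction
    "mini_batch_size": 150,
    "num_epochs": 180,
    "ridge_lambda": None,         # λ: ℓ2-norm regularization of ridge regression (value not quoted)
    "manifold_loss_rho": None,    # ρ: trade-off between softmax loss and manifold loss (value not quoted)
    "weight_decay_gamma": None,   # γ: weight decay coefficient (value not quoted)
    "tau_scale": None,            # τ_s: threshold τ = τ_s(c_max), with c_max the largest centroid coefficient (value not quoted)
}
```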