CT-Net: Channel Tensorization Network for Video Classification
Authors: Kunchang Li, Xianhang Li, Yali Wang, Jun Wang, Yu Qiao
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on several challenging video benchmarks, e.g., Kinetics-400, Something-Something V1 and V2. |
| Researcher Affiliation | Academia | 1Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China 2University of Chinese Academy of Sciences 3SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society 4University of Central Florida |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. Figure 4 provides a diagram of the Tensor Excitation mechanism, but it is not pseudocode. |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code or include any links to code repositories. |
| Open Datasets | Yes | We conduct experiments on three large video benchmarks: Kinetics-400 (Carreira & Zisserman, 2017), Something-Something V1 and V2 (Goyal et al., 2017b)... To verify the generalization ability of our CT-Net on smaller datasets, we conduct transfer learning experiments from Kinetics-400 to UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011). |
| Dataset Splits | Yes | We conduct experiments on three large video benchmarks: Kinetics-400 (Carreira & Zisserman, 2017), Something-Something V1 and V2 (Goyal et al., 2017b)... We test CT-Net with 16 input frames and evaluate it over three splits and report the averaged results. |
| Hardware Specification | No | The paper does not specify the hardware used for running experiments, such as particular GPU or CPU models. |
| Software Dependencies | No | The paper mentions optimization techniques (SGD with momentum, cosine learning rate schedule) and references other models/methods (ResNet, Non-local), but does not provide specific version numbers for software dependencies or libraries like PyTorch or TensorFlow. |
| Experiment Setup | Yes | For Kinetics, the batch size, total epochs, initial learning rate, dropout and weight decay are set to 64, 110, 0.01, 0.5 and 1e-4, respectively. All these hyper-parameters are set to 64, 45, 0.02, 0.3 and 5e-4, respectively, for Something-Something. |
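The Experiment Setup row above can be collected into a minimal sketch. The hyper-parameter values are the ones reported in the paper; the cosine-schedule formula is the standard annealing form and is an assumption about the exact schedule the authors used (the paper states "cosine learning rate schedule" without a formula).

```python
import math

# Hyper-parameters as reported in the paper's Experiment Setup
# (batch size, total epochs, initial LR, dropout, weight decay).
KINETICS_400 = {"batch_size": 64, "epochs": 110, "base_lr": 0.01,
                "dropout": 0.5, "weight_decay": 1e-4}
SOMETHING_SOMETHING = {"batch_size": 64, "epochs": 45, "base_lr": 0.02,
                       "dropout": 0.3, "weight_decay": 5e-4}

def cosine_lr(base_lr: float, epoch: int, total_epochs: int) -> float:
    # Standard cosine annealing from base_lr down to 0 over total_epochs;
    # assumed form, since the paper does not give the exact schedule equation.
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

For example, `cosine_lr(KINETICS_400["base_lr"], 0, 110)` returns the initial rate 0.01, decaying smoothly toward 0 at epoch 110.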