NeRV: Neural Representations for Videos
Authors: Hao Chen, Bo He, Hanyu Wang, Yixuan Ren, Ser Nam Lim, Abhinav Shrivastava
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on the Big Buck Bunny sequence from scikit-video to compare our NeRV with pixel-wise implicit representations, which has 132 frames of 720×1080 resolution. To compare with state-of-the-art methods on the video compression task, we do experiments on the widely used UVG [7], consisting of 7 videos and 3900 frames at 1920×1080 in total. |
| Researcher Affiliation | Collaboration | University of Maryland, College Park; Facebook AI |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. Figure 3 illustrates a pipeline, but it is a diagram, not structured pseudocode. |
| Open Source Code | Yes | The source code and pre-trained model can be found at https://github.com/haochen-rye/NeRV.git. |
| Open Datasets | Yes | We perform experiments on the Big Buck Bunny sequence from scikit-video to compare our NeRV with pixel-wise implicit representations, which has 132 frames of 720×1080 resolution. To compare with state-of-the-art methods on the video compression task, we do experiments on the widely used UVG [7], consisting of 7 videos and 3900 frames at 1920×1080 in total. (A loading sketch follows the table.) |
| Dataset Splits | No | No explicit information on training/validation/test dataset splits, such as percentages or sample counts, was provided in the paper. The paper mentions 'training epochs' and 'batchsize' but does not define a separate validation set split. |
| Hardware Specification | Yes | All experiments are run with NVIDIA RTX2080ti. |
| Software Dependencies | No | The paper mentions 'PyTorch [54]' but does not provide a specific version number. Other cited tools, such as the 'Adam optimizer [51]' and the 'cosine annealing learning rate schedule [52]', are algorithms or schedules, not software dependencies with version numbers. |
| Experiment Setup | Yes | In our experiments, we train the network using the Adam optimizer [51] with a learning rate of 5e-4. For the ablation study on UVG, we use a cosine annealing learning rate schedule [52], a batch size of 1, 150 training epochs, and 30 warmup epochs unless otherwise denoted. When comparing with the state of the art, we run the model for 1500 epochs with a batch size of 6. For experiments on Big Buck Bunny, we train NeRV for 1200 epochs unless otherwise denoted. For the fine-tuning process after pruning, we use 50 epochs for both UVG and Big Buck Bunny. (A configuration sketch follows the table.) |
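The dataset rows above cite the Big Buck Bunny sequence bundled with scikit-video. As a minimal sketch of how that sequence can be loaded, the snippet below uses scikit-video's `skvideo.datasets.bigbuckbunny()` helper and `skvideo.io.vread` reader; the exact frame count and resolution come from the bundled file (the paper reports 132 frames), and FFmpeg must be available on the system.

```python
# Minimal sketch: load the Big Buck Bunny clip shipped with scikit-video.
# Requires scikit-video and an FFmpeg binary on PATH.
import skvideo.datasets
import skvideo.io

video_path = skvideo.datasets.bigbuckbunny()  # path to the bundled clip
frames = skvideo.io.vread(video_path)         # ndarray of shape (T, H, W, 3), dtype uint8
print(frames.shape)                           # paper reports 132 frames for this sequence
```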
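The experiment-setup row specifies Adam with learning rate 5e-4 and a cosine annealing schedule with 30 warmup epochs out of 150 for the UVG ablations. The sketch below is one plausible PyTorch realization of that configuration, not the authors' released code: the warmup-then-cosine `LambdaLR` and the placeholder `model` are assumptions introduced for illustration.

```python
# Hedged sketch of the reported optimization setup: Adam (lr 5e-4) with
# linear warmup followed by cosine annealing, as named in the paper.
import math
import torch

model = torch.nn.Linear(8, 8)  # placeholder; the real model is the NeRV network
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

total_epochs, warmup_epochs = 150, 30  # UVG ablation setting from the paper

def lr_lambda(epoch):
    # Linear warmup over the first 30 epochs, cosine decay to 0 afterwards.
    if epoch < warmup_epochs:
        return (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(total_epochs):
    # ... forward/backward passes over the video frames (batch size 1) go here ...
    optimizer.step()   # stand-in for the real per-batch training step
    scheduler.step()   # advance the warmup/cosine schedule once per epoch
```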