A Deep Learning Dataloader with Shared Data Preparation
Authors: Jian Xie, Jingwei Xu, Guochang Wang, Yuan Yao, Zenan Li, Chun Cao, Hanghang Tong
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed JOADER, showing a greater versatility and superiority of training speed improvement (up to 200% on ResNet18) without affecting the accuracy. |
| Researcher Affiliation | Academia | 1Nanjing University 2University of Illinois Urbana-Champaign |
| Pseudocode | No | The paper describes algorithms and data structures through textual descriptions and diagrams, but no formal pseudocode or algorithm blocks were found. |
| Open Source Code | No | The paper states that a prototype named JOADER was implemented, but it does not provide any concrete access information for its source code. |
| Open Datasets | Yes | In this section, we evaluate JOADER on ImageNet with the family of ResNet models. |
| Dataset Splits | No | The paper evaluates on ImageNet but does not explicitly provide specific training/validation/test dataset splits (percentages, counts, or detailed methodology for splitting). |
| Hardware Specification | Yes | The experiments were conducted on a GPU server with two Intel Xeon Gold 5118 CPUs @ 2.30GHz (24 physical cores and 48 threads), 500GB RAM, and 6 TITAN RTX GPUs. The server ran Ubuntu 18.04 with GNU/Linux kernel 4.15.0. The disk is Symbios Logic MegaRAID SAS-3 3316 of 1GB/s read speed. |
| Software Dependencies | Yes | The evaluated models are the basic models with their default settings in torchvision [1], and trained on top of the PyTorch 1.6.0 DL framework. The server ran Ubuntu 18.04 with GNU/Linux kernel 4.15.0. |
| Experiment Setup | Yes | The evaluated models are the basic models with their default settings in torchvision [1], and trained on top of the PyTorch 1.6.0 DL framework. In this experiment, we start training multiple ResNet18 models at the same time but with different hyper-parameters. |