Task-aware Distributed Source Coding under Dynamic Bandwidth

Authors: Po-han Li, Sravan Kumar Ankireddy, Ruihan (Philip) Zhao, Hossein Nourkhiz Mahjoub, Ehsan Moradi Pari, Ufuk Topcu, Sandeep Chinchali, Hyeji Kim

NeurIPS 2023

Reproducibility assessment (variable, result, and the LLM's supporting response):
Research Type: Experimental. "Experiments show that NDPCA improves the success rate of multi-view robotic arm manipulation by 9% and the accuracy of object detection tasks on satellite imagery by 14% compared to an autoencoder with uniform bandwidth allocation. ... We validate NDPCA with tasks of CIFAR-10 image denoising, multi-view robotic arm manipulation, and object detection of satellite imagery (Sec. 5). NDPCA results in a 1.2 dB increase in PSNR, a 9% increase in success rate, and a 14% increase in accuracy compared to an autoencoder with uniform bandwidth allocation, for the respective experiments mentioned above."
Researcher Affiliation: Collaboration. Po-han Li (1), Sravan Kumar Ankireddy (1), Ruihan Zhao (1), Hossein Nourkhiz Mahjoub (2), Ehsan Moradi-Pari (2), Ufuk Topcu (1), Sandeep Chinchali (1), Hyeji Kim (1); (1) The University of Texas at Austin, (2) Honda Research Institute USA.
Pseudocode: Yes. "Algorithm 1: Projection into a random low dimension using DPCA" (a hedged sketch of this projection appears after this list).
Open Source Code: Yes. https://github.com/UTAustin-SwarmLab/Task-aware-Distributed-Source-Coding
Open Datasets: Yes. "We consider three different tasks to test our framework: (a) the denoising of CIFAR-10 images [24], (b) multi-view robotic arm manipulation [25], which we refer to as the locate and lift task, and (c) object detection on satellite imagery [26]."
Dataset Splits: No. The paper mentions a 'testing set' but gives no percentages or counts for training, validation, and test splits, and cites no predefined splits for dataset partitioning.
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU or CPU models) used to run the experiments.
Software Dependencies: No. The paper mentions 'Yolo' [31] and a 'pre-trained denoising network' but does not list the programming languages, libraries, or frameworks, with version numbers, needed for reproducibility.
Experiment Setup: Yes. "During the training of NDPCA, the weights of the task model are always frozen because it is usually a large-scale pre-trained model that is expensive to re-train. We aim to learn the K neural encoders and the joint neural decoder which minimize the loss function:

\mathcal{L}_{\text{tot}} = \lambda_{\text{task}} \underbrace{\|\hat{Y} - Y\|_F^2}_{\text{task loss}} + \lambda_{\text{rec}} \underbrace{\left( \|\hat{X}_1 - X_1\|_F^2 + \|\hat{X}_2 - X_2\|_F^2 + \cdots + \|\hat{X}_K - X_K\|_F^2 \right)}_{\text{reconstruction loss}}

... NDPCA trained at (m_min, m_max) = (8, 64). ... We pre-train the task model with randomly cropped and augmented images to make the model less sensitive to noise in the input image space, namely, the model has a smaller Lipschitz constant." (a sketch of this loss follows below)
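The pseudocode row above points to Algorithm 1, which projects per-source latents into a low-dimensional subspace under a total bandwidth budget. Below is a minimal NumPy sketch of that idea, assuming the budget m_total is split across sources by ranking singular values and that each source's latents are projected onto its top principal directions; the function name dpca_project, the greedy allocation, and all variable names are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def dpca_project(latents, m_total):
    """Project K per-source latent matrices into m_total dimensions
    in total, splitting the budget by ranking singular values.

    latents: list of K arrays, each of shape (n_samples, d_k)
    m_total: total latent dimensions (bandwidth) to keep
    Returns (projected latents, per-source dimension allocation).
    """
    svds = []
    for Z in latents:
        Zc = Z - Z.mean(axis=0, keepdims=True)  # center each source
        _, S, Vt = np.linalg.svd(Zc, full_matrices=False)
        svds.append((Zc, S, Vt))

    # Greedy allocation (an assumption): keep the m_total largest
    # singular values over all sources, so sources carrying more
    # energy receive more of the bandwidth.
    ranked = sorted(((s, k) for k, (_, S, _) in enumerate(svds) for s in S),
                    reverse=True)
    alloc = [0] * len(latents)
    for _, k in ranked[:m_total]:
        alloc[k] += 1

    # Project each source onto its top principal directions.
    projected = [Zc @ Vt[:m].T for (Zc, _, Vt), m in zip(svds, alloc)]
    return projected, alloc

# Example: two sources, total budget of 8 dimensions.
rng = np.random.default_rng(0)
Z1, Z2 = rng.normal(size=(128, 32)), rng.normal(size=(128, 16))
proj, alloc = dpca_project([Z1, Z2], m_total=8)
print(alloc, [p.shape for p in proj])
```

Because the allocation depends only on singular values, the same trained encoders can serve any budget between m_min and m_max, which matches the paper's dynamic-bandwidth setting.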
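The experiment-setup row quotes the training objective. Here is a minimal PyTorch sketch of that loss under the stated assumptions: the pre-trained task model is frozen (its parameters get no gradients, though gradients still flow through it to the encoders and decoder), and each squared Frobenius norm is realized as a sum-reduced MSE. How the K reconstructions are fed to the task model (concatenation below) and all names are illustrative assumptions, not the authors' interface.

```python
import torch
import torch.nn.functional as F

def freeze(module: torch.nn.Module) -> torch.nn.Module:
    # Freeze the pre-trained task model: its weights receive no
    # gradient updates, but gradients still flow through it to the
    # encoders and the joint decoder.
    for p in module.parameters():
        p.requires_grad_(False)
    return module

def ndpca_loss(task_model, x_hats, xs, y,
               lam_task: float = 1.0, lam_rec: float = 1.0):
    """L_tot = lam_task * ||Y_hat - Y||_F^2
             + lam_rec  * sum_k ||X_hat_k - X_k||_F^2

    x_hats / xs: lists of the K reconstructed / original views.
    Feeding the task model the concatenated reconstructions is an
    assumption; the quoted text does not pin down this interface.
    """
    y_hat = task_model(torch.cat(x_hats, dim=1))
    task_loss = F.mse_loss(y_hat, y, reduction="sum")    # ||.||_F^2
    rec_loss = sum(F.mse_loss(xh, x, reduction="sum")
                   for xh, x in zip(x_hats, xs))
    return lam_task * task_loss + lam_rec * rec_loss
```

Freezing via requires_grad_(False) rather than torch.no_grad() matters here: the task loss must still backpropagate through the frozen task model to reach the learnable encoders and decoder.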