Measuring Acoustics with Collaborative Multiple Agents
Authors: Yinfeng Yu, Changan Chen, Lele Cao, Fangkai Yang, Fuchun Sun
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comparative and ablation experiments are performed on two datasets, Replica [Straub et al., 2019] and Matterport3D [Chang et al., 2017], verifying the effectiveness of the proposed solution. |
| Researcher Affiliation | Collaboration | Yinfeng Yu (1,6), Changan Chen (2), Lele Cao (1,3), Fangkai Yang (4), Fuchun Sun (1,5). Affiliations: 1) Beijing National Research Center for Information Science and Technology, State Key Lab on Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University; 2) University of Texas at Austin; 3) Motherbrain, EQT Group; 4) Microsoft Research; 5) THU-Bosch JCML Center; 6) College of Information Science and Engineering, Xinjiang University |
| Pseudocode | Yes | Algorithm 1 MACMA (Measuring Acoustics with Collaborative Multiple Agents) |
| Open Source Code | Yes | The full paper with appendix, together with source code, can be found at https://yyf17.github.io/MACMA. |
| Open Datasets | Yes | The datasets used are publicly available: Replica [Straub et al., 2019], Matterport3D [Chang et al., 2017], and SoundSpaces (audio) [Chen et al., 2020]. |
| Dataset Splits | No | The paper mentions a 'training and validation split' and a 'test split' but does not provide specific percentages or counts for them: 'b) we train and validate every baseline with the generator Dr fine-tune together in the training and validation split, c) we test every baseline in the test split.' |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions a gated recurrent unit (GRU), Proximal Policy Optimization (PPO), and specific parameters for the STFT transform, but these are models and algorithms; it does not list specific software libraries or their version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, NumPy 1.x). |
| Experiment Setup | Yes | Results are run on two datasets under the experimental settings: αξ=1.0, αζ=1.0, αψ=-1.0, αϕ=1.0, κ=2, λ=0.1, ρ=-1.0. [...] a) we pretrain generator Dr under the setting L = Lm (wm = 1.0 and wξ = 0.0) with random policy for both agent 0 and agent 1 in the training split, b) we train and validate every baseline with the generator Dr fine-tune together in the training and validation split, c) we test every baseline in the test split. |
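The experiment settings quoted above can be collected into a configuration sketch. The key names below are illustrative assumptions; only the numeric values come from the paper, and the weighted pretraining loss L = wm·Lm + wξ·Lξ is a hedged reading of the quoted setting.

```python
# Hypothetical configuration collecting the hyperparameters quoted from the
# paper's experiment setup. Key names are illustrative; values are from the paper.
MACMA_CONFIG = {
    # Reward/loss coefficients
    "alpha_xi": 1.0,
    "alpha_zeta": 1.0,
    "alpha_psi": -1.0,
    "alpha_phi": 1.0,
    # Other scalar settings
    "kappa": 2,
    "lambda": 0.1,
    "rho": -1.0,
    # Pretraining of generator Dr uses L = Lm, i.e. wm = 1.0 and w_xi = 0.0
    "pretrain_w_m": 1.0,
    "pretrain_w_xi": 0.0,
}


def pretrain_loss(l_m: float, l_xi: float, cfg: dict = MACMA_CONFIG) -> float:
    """Weighted pretraining loss L = wm * Lm + w_xi * L_xi (sketch).

    With the quoted setting (wm = 1.0, w_xi = 0.0) this reduces to L = Lm.
    """
    return cfg["pretrain_w_m"] * l_m + cfg["pretrain_w_xi"] * l_xi
```

With the quoted weights, the pretraining loss reduces to the measurement term alone, matching the stated setting L = Lm.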