A Universal Music Translation Network
Authors: Noam Mor, Lior Wolf, Adam Polyak, Yaniv Taigman
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations. |
| Researcher Affiliation | Collaboration | Noam Mor (Facebook AI Research) noam.mor@gmail.com; Lior Wolf & Adam Polyak (Facebook AI Research & Tel Aviv University) {wolf,adampolyak}@fb.com; Yaniv Taigman (Facebook AI Research) yaniv@fb.com |
| Pseudocode | No | No pseudocode or algorithm blocks found in the paper. |
| Open Source Code | No | "In the second phase, in order to allow reproducibility and sharing of the code and models, we train on audio data from MusicNet (Thickstun et al., 2017)" and "in order to freely share our trained models and allow for maximal reproducibility, we have retrained the network with data from MusicNet (Thickstun et al., 2017)." The paper states an intent to share code and models but provides no repository link. |
| Open Datasets | Yes | in order to allow reproducibility and sharing of the code and models, we train on audio data from MusicNet (Thickstun et al., 2017). |
| Dataset Splits | No | The training and test splits are strictly separated by dividing the tracks (or audio files) between the two sets. The split policy is described, but no concrete split sizes or file lists are given; the track-level policy is illustrated in the first sketch after the table. |
| Hardware Specification | Yes | The method was implemented in the PyTorch framework, and trained on eight Tesla V100 GPUs for a total of 6 days. |
| Software Dependencies | No | The method was implemented in the PyTorch framework, and trained on eight Tesla V100 GPUs for a total of 6 days. [...] using librosa (McFee et al., 2015). [...] the nv-wavenet CUDA kernels provided by NVIDIA (https://github.com/NVIDIA/nv-wavenet). Libraries are named, but no version numbers are given. |
| Experiment Setup | Yes | We used the ADAM optimization algorithm with a learning rate of 10⁻³ and a decay factor of 0.98 every 10,000 samples. We weighted the confusion loss with λ = 10⁻². See the optimizer sketch after the table. |
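
The track-level split policy quoted in the Dataset Splits row can be made concrete with a short sketch. This is not the authors' code: the function name, the 10% test fraction, and the seed are illustrative assumptions; the paper only states that whole tracks are divided strictly between the two sets.

```python
import random

def split_by_track(track_paths, test_fraction=0.1, seed=0):
    """Split whole tracks between train and test so that no single
    track contributes audio to both sets. The ratio and seed are
    illustrative; the paper does not state its exact values."""
    rng = random.Random(seed)
    tracks = list(track_paths)
    rng.shuffle(tracks)
    n_test = max(1, int(len(tracks) * test_fraction))
    return tracks[n_test:], tracks[:n_test]  # train, test

# Example usage:
# train_files, test_files = split_by_track(all_wav_paths)
```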
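The Experiment Setup row maps onto standard PyTorch components (PyTorch being the framework the paper names). The following is a minimal sketch under stated assumptions: the placeholder model, and the reading of "decay factor of 0.98 every 10,000 samples" as a StepLR schedule stepped once per sample, are our interpretation, not the authors' code.

```python
import torch

model = torch.nn.Linear(64, 64)  # placeholder for the actual translation network

# ADAM with a learning rate of 1e-3, as quoted in the table above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Decay the learning rate by a factor of 0.98 every 10,000 samples;
# stepping the scheduler once per processed sample is our assumption.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.98)

lambda_confusion = 1e-2  # weight of the domain-confusion loss term, as quoted

# Inside the training loop (sketch):
#   loss = reconstruction_loss + lambda_confusion * confusion_loss
#   loss.backward(); optimizer.step(); scheduler.step()
```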