Modelling heterogeneous distributions with an Uncountable Mixture of Asymmetric Laplacians

Authors: Axel Brando, Jose A. Rodríguez, Jordi Vitrià, Alberto Rubio Muñoz

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "All experiments are implemented in TensorFlow [33] and Keras [34], running on a workstation with Titan X (Pascal) GPU and GeForce RTX 2080 GPU." (Section 6, Experimental Results) |
| Researcher Affiliation | Collaboration | Axel Brando (BBVA Data & Analytics; Universitat de Barcelona), Jose A. Rodríguez-Serrano (BBVA Data & Analytics), Jordi Vitrià (Universitat de Barcelona), Alberto Rubio (BBVA Data & Analytics) |
| Pseudocode | Yes | "Algorithm 2: How to build UMAL model by using any deep learning architecture for regression" (a minimal Keras sketch of this construction follows the table) |
| Open Source Code | Yes | "The source code to reproduce the public results reported is published in https://github.com/BBVA/UMAL." |
| Open Datasets | Yes | "By using the publicly available information from the Inside Airbnb platform [17] we selected Barcelona (BCN) and Vancouver (YVC) as the cities to carry out the comparison of the models in a real situation." [17] Murray Cox. Inside Airbnb: adding data to the debate. Inside Airbnb [Internet]. [cited 16 May 2019]. Available: http://insideairbnb.com, 2019. |
| Dataset Splits | Yes | "A total of 50% of the random uniform generated data were considered as test data, 40% for training and 10% for validation." (a split sketch follows the table) |
| Hardware Specification | Yes | "All experiments are implemented in TensorFlow [33] and Keras [34], running on a workstation with Titan X (Pascal) GPU and GeForce RTX 2080 GPU." |
| Software Dependencies | No | Software used (TensorFlow, Keras) is mentioned, but specific version numbers are not provided. |
| Experiment Setup | Yes | "Regarding parameters, we use a common learning rate of 10⁻³. In addition, to restrict the value of the scale parameter, b, to strictly positive values, the respective output has a softplus function [35] as activation. We will refer to the number of parameters to be estimated as P. On the other hand, the Monte Carlo sampling number, Nτ, for Independent QR, ALD and UMAL models will always be fixed to 100 at training time. Furthermore, all public experiments are trained using an early stopping training policy with 200 epochs of patience for all compared methods." (a training-setup sketch follows the table) |
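
For readers who want to reproduce the construction quoted under Pseudocode, Algorithm 2 of the paper (whose reference implementation lives at https://github.com/BBVA/UMAL) can be sketched in Keras. The following is a minimal sketch, not the authors' code: the dense backbone, the helper names `build_umal_model` and `umal_negative_log_likelihood`, and the input-tiling convention are our assumptions; only the softplus-constrained scale b, the ALD likelihood, and the Monte Carlo count Nτ = 100 come from the paper.

```python
import tensorflow as tf

N_TAU = 100  # Monte Carlo sampling number N_tau, fixed to 100 at training time (from the paper)

def build_umal_model(input_dim, hidden=64):
    """Minimal sketch of Algorithm 2: wrap a generic regression backbone so it
    also receives the asymmetry level tau and outputs the ALD parameters (mu, b)."""
    x_in = tf.keras.Input(shape=(input_dim,), name="x")
    tau_in = tf.keras.Input(shape=(1,), name="tau")
    h = tf.keras.layers.Concatenate()([x_in, tau_in])
    h = tf.keras.layers.Dense(hidden, activation="relu")(h)  # stand-in backbone
    mu = tf.keras.layers.Dense(1, name="mu")(h)  # location of the ALD
    # Softplus keeps the scale parameter b strictly positive, as stated in the paper.
    b = tf.keras.layers.Dense(1, activation="softplus", name="b")(h)
    out = tf.keras.layers.Concatenate()([mu, b, tau_in])
    return tf.keras.Model([x_in, tau_in], out)

def umal_negative_log_likelihood(y_true, y_pred):
    """ALD log-density log[tau*(1-tau)/b] - rho_tau((y-mu)/b), marginalized over
    the N_TAU sampled taus with a log-sum-exp. Assumes each example was tiled
    N_TAU times (one contiguous copy per sampled tau) before the forward pass."""
    mu, b, tau = tf.unstack(tf.reshape(y_pred, (-1, N_TAU, 3)), axis=-1)
    y = tf.reshape(y_true, (-1, N_TAU))
    u = (y - mu) / b
    rho = u * (tau - tf.cast(u < 0.0, u.dtype))  # asymmetric (pinball) check function
    log_ald = tf.math.log(tau * (1.0 - tau) / b) - rho
    log_lik = tf.reduce_logsumexp(log_ald, axis=1) - tf.math.log(float(N_TAU))
    return -tf.reduce_mean(log_lik)
```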
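
The 50/40/10 test/train/validation split quoted under Dataset Splits amounts to a single random permutation. A minimal sketch; the seed and the placeholder synthetic data are assumptions, not the paper's generator:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an assumption; the paper does not state one
# Placeholder synthetic data; the paper's actual generating process differs.
X = rng.uniform(-1.0, 1.0, size=(10_000, 1)).astype("float32")
y = (X + rng.normal(scale=0.1, size=X.shape)).astype("float32")

idx = rng.permutation(len(X))
n_train, n_val = int(0.4 * len(X)), int(0.1 * len(X))
train_idx = idx[:n_train]               # 40% training
val_idx = idx[n_train:n_train + n_val]  # 10% validation
test_idx = idx[n_train + n_val:]        # remaining 50% test
```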
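
Finally, the hyper-parameters quoted under Experiment Setup map onto standard Keras calls. A hedged sketch reusing the names from the two sketches above: the choice of Adam is our assumption (the paper states only the learning rate), and `tile_with_taus` is a hypothetical helper that pairs each training row with N_TAU sampled values of τ.

```python
import numpy as np
import tensorflow as tf

def tile_with_taus(x, y, n_tau=100, seed=0):
    """Repeat each example n_tau times (contiguously) and pair each copy with a
    tau drawn uniformly, kept away from 0 and 1 for numerical stability."""
    rng = np.random.default_rng(seed)
    x_t = np.repeat(x, n_tau, axis=0)
    y_t = np.repeat(y, n_tau, axis=0)
    taus = rng.uniform(0.01, 0.99, size=(len(x_t), 1)).astype("float32")
    return x_t, taus, y_t

model = build_umal_model(input_dim=1)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # common learning rate of 10^-3
              loss=umal_negative_log_likelihood)

x_tr, tau_tr, y_tr = tile_with_taus(X[train_idx], y[train_idx])
x_va, tau_va, y_va = tile_with_taus(X[val_idx], y[val_idx])

# Early stopping with 200 epochs of patience, as stated in the paper.
stop = tf.keras.callbacks.EarlyStopping(patience=200, restore_best_weights=True)
model.fit([x_tr, tau_tr], y_tr,
          validation_data=([x_va, tau_va], y_va),
          epochs=10_000,
          batch_size=3_200,  # a multiple of N_TAU, so each batch holds whole tau-groups
          shuffle=False,     # keep the N_TAU copies of each example contiguous for the loss
          callbacks=[stop])
```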