Regression with Sensor Data Containing Incomplete Observations
Authors: Takayuki Katsuki, Takayuki Osogami
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate the advantages of our algorithm through numerical experiments." and "Extensive experiments on synthetic and six real-world regression tasks including a real use case for healthcare demonstrate the effectiveness of the proposed method." |
| Researcher Affiliation | Industry | 1IBM Research Tokyo, Tokyo, Japan. Correspondence to: Takayuki Katsuki <kats@jp.ibm.com>. |
| Pseudocode | Yes | "Algorithm 1: U2 regression based on stochastic gradient method." (see the training-loop sketch below the table) |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about the availability of its own source code for the methodology described. |
| Open Datasets | Yes | "We use three synthetic tasks, Low Noise, High Noise, and Breathing, collected from the Kaggle dataset (Sen, 2016)." and "We next apply the proposed method and baselines to five different real-world healthcare tasks from the UCI Machine Learning Repository (Velloso, 2013; Velloso et al., 2013)." |
| Dataset Splits | Yes | We conducted 5-fold cross-validation, each with a different randomly sampled training-testing split. For evaluation purposes, we do not include incomplete observations in these test sets. For each fold of the cross-validation, we use a randomly sampled 20% of the training set as a validation set to choose the best hyperparameters for each algorithm. (see the cross-validation sketch below the table) |
| Hardware Specification | Yes | All of the experiments were carried out with a Python and TensorFlow implementation on workstations having 80 GB of memory, a 4.0 GHz CPU, and an Nvidia Titan X GPU. |
| Software Dependencies | No | The paper mentions using a "Python and TensorFlow implementation" but does not specify version numbers for these software components or any other libraries. |
| Experiment Setup | Yes | "We set the candidates of the hyperparameters, ρ and λ, to {10⁻³, 10⁻², 10⁻¹, 10⁰}." and "We used Adam with the hyperparameters recommended in (Kingma & Ba, 2015), and the number of samples in the mini-batches was set to 32." and "We used a 6-layer multilayer perceptron with ReLU (Nair & Hinton, 2010) (more specifically, D-100-100-100-1) as f(x)." (see the setup sketch below the table) |
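
The Pseudocode row refers to Algorithm 1, which optimizes the U2-regression objective with a mini-batch stochastic gradient method. The sketch below only illustrates that generic training-loop structure in TensorFlow; `u2_loss` is a hypothetical placeholder (plain squared error here), not the objective derived in the paper.

```python
import tensorflow as tf

def u2_loss(y_pred, y_true):
    # Placeholder loss: plain squared error, NOT the paper's U2-regression
    # objective for data with incomplete observations.
    return tf.reduce_mean(tf.square(y_pred - y_true))

def train(model, dataset, epochs=10, lr=1e-3):
    """Generic mini-batch stochastic-gradient loop in the spirit of Algorithm 1."""
    opt = tf.keras.optimizers.Adam(learning_rate=lr)  # Adam, as used in the paper
    for _ in range(epochs):
        for x_batch, y_batch in dataset:  # mini-batches (size 32 in the paper)
            with tf.GradientTape() as tape:
                loss = u2_loss(model(x_batch, training=True), y_batch)
            grads = tape.gradient(loss, model.trainable_variables)
            opt.apply_gradients(zip(grads, model.trainable_variables))
    return model
```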
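
The Dataset Splits row describes 5-fold cross-validation with 20% of each training fold held out as a validation set for hyperparameter selection. A minimal sketch of that protocol follows, assuming scikit-learn (the paper does not name a splitting library); the paper's removal of incomplete observations from the test sets is noted in a comment but not implemented here.

```python
from sklearn.model_selection import KFold, train_test_split

def five_fold_splits(X, y, seed=0):
    """Yield (train, validation, test) splits per the reported protocol."""
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        X_tr, X_te = X[train_idx], X[test_idx]
        y_tr, y_te = y[train_idx], y[test_idx]
        # Note: the paper additionally excludes incomplete observations
        # from the test sets; that filtering step is omitted here.
        # 20% of the training fold becomes the validation set.
        X_tr, X_val, y_tr, y_val = train_test_split(
            X_tr, y_tr, test_size=0.2, random_state=seed)
        yield (X_tr, y_tr), (X_val, y_val), (X_te, y_te)
```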
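
The Experiment Setup row quotes a D-100-100-100-1 multilayer perceptron with ReLU, Adam with the hyperparameters recommended by Kingma & Ba (2015), mini-batches of 32, and a {10⁻³, 10⁻², 10⁻¹, 10⁰} grid for ρ and λ. The sketch below follows the explicitly quoted layer widths (it does not reconcile the "6-layer" count) and assumes Keras defaults match the recommended Adam settings.

```python
import itertools
import tensorflow as tf

def build_f(input_dim):
    # D-100-100-100-1 MLP with ReLU activations, as quoted from the paper.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(100, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

BATCH_SIZE = 32                       # mini-batch size reported in the paper
GRID = [1e-3, 1e-2, 1e-1, 1e0]        # candidate values for rho and lambda
# Candidate (rho, lambda) pairs, to be scored on the validation set.
candidates = list(itertools.product(GRID, GRID))
optimizer = tf.keras.optimizers.Adam()  # Keras defaults match Kingma & Ba (2015)
```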