BatteryML: An Open-source Platform for Machine Learning on Battery Degradation
Authors: Han Zhang, Xiaofan Gui, Shun Zheng, Ziheng Lu, Yuqi Li, Jiang Bian
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide an in-depth evaluation of model performance across various datasets to inform model selection. Through a comprehensive analysis, our intent is to offer a holistic perspective on the efficacy of each model, empowering researchers and practitioners to make informed decisions tailored to their specific goals. |
| Researcher Affiliation | Collaboration | Han Zhang¹, Xiaofan Gui², Shun Zheng², Ziheng Lu², Yuqi Li³, Jiang Bian². ¹Institute for Interdisciplinary Information Sciences, Tsinghua University; ²Microsoft Research; ³Department of Materials Science and Engineering, Stanford University |
| Pseudocode | No | The paper provides multiple code examples (Code 1 to Code 9) for using the BatteryML platform, but it does not present pseudocode or clearly labeled algorithm blocks for its core methodology. |
| Open Source Code | Yes | "We present BatteryML¹, a one-step, all-encompass, and open-source platform that integrates data preprocessing, feature extraction, and the implementation of both conventional and state-of-the-art models." and "¹Project repository: https://github.com/microsoft/BatteryML" |
| Open Datasets | Yes | We based our evaluation on several publicly accessible battery datasets: CALCE (Xing et al., 2013; He et al., 2011a), HNEI (Devie et al., 2018), HUST (Ma et al., 2022), MATR (Severson et al., 2019; Hong et al., 2020), RWTH (Li et al., 2021), SNL (Preger et al., 2020), and UL PUR (Juarez-Robles et al., 2020; 2021). |
| Dataset Splits | No | The paper describes using train-test splits, including standard data splits from external papers, but does not explicitly specify a validation dataset split or strategy for its own reported experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run its machine learning experiments. |
| Software Dependencies | No | The paper mentions using scikit-learn, PyTorch, XGBoost, and LightGBM but does not provide specific version numbers for these software dependencies (see the version-reporting sketch after the table). |
| Experiment Setup | Yes | "Following this, a configuration file is crafted to specify data locations, partitioning strategies, feature and label generation methods, as well as the associated model parameters. An elaborate sample of these settings is presented in Code 1." (A config-driven sketch follows the table.) |
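
The Software Dependencies row notes that the paper names scikit-learn, PyTorch, XGBoost, and LightGBM without pinning versions. The snippet below is a small, assumed helper (not part of BatteryML) that prints whichever of those libraries are installed, so a reproduction attempt can record the exact versions it ran against.

```python
# Hypothetical version-reporting helper; the library names come from the paper,
# but this script is an assumption added here, not part of BatteryML itself.
import importlib

for name in ("sklearn", "torch", "xgboost", "lightgbm"):
    try:
        module = importlib.import_module(name)
        # Most of these packages expose __version__; fall back gracefully if not.
        print(f"{name}: {getattr(module, '__version__', 'unknown')}")
    except ImportError:
        print(f"{name}: not installed")
```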
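
The Experiment Setup row quotes the paper's description of a configuration file that specifies data locations, partitioning strategy, feature and label generation methods, and model parameters (Code 1 in the paper). The sketch below is a minimal illustration of that config-driven pattern under assumed names; every key, path, and helper here is hypothetical and is not BatteryML's actual configuration schema or API.

```python
# A minimal sketch of a config-driven run in the spirit of the paper's Code 1.
# All names below (data_path, split, feature, label, run) are illustrative assumptions.
from sklearn.linear_model import Ridge
import numpy as np

config = {
    "data_path": "data/processed/MATR",         # data location (assumed layout)
    "split": {"train_ratio": 0.7, "seed": 42},   # partitioning strategy
    "feature": "discharge_capacity_curve",       # feature generation method
    "label": "cycle_life",                       # label generation method
    "model": {"name": "ridge", "alpha": 1.0},    # associated model parameters
}

def run(config, features: np.ndarray, labels: np.ndarray):
    """Fit the configured model on a config-driven train/test split of cells."""
    rng = np.random.default_rng(config["split"]["seed"])
    order = rng.permutation(len(labels))
    n_train = int(config["split"]["train_ratio"] * len(labels))
    train_idx, test_idx = order[:n_train], order[n_train:]
    model = Ridge(alpha=config["model"]["alpha"])
    model.fit(features[train_idx], labels[train_idx])
    preds = model.predict(features[test_idx])
    rmse = float(np.sqrt(np.mean((preds - labels[test_idx]) ** 2)))
    return model, rmse
```

A caller would load feature and label arrays from the configured data location and pass them to `run(config, features, labels)`; in BatteryML itself these steps are driven by the YAML configuration shown in the paper's Code 1.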