K-SHAP: Policy Clustering Algorithm for Anonymous Multi-Agent State-Action Pairs
Authors: Andrea Coletta, Svitlana Vyetrenko, Tucker Balch
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on simulated synthetic market data and a real-world financial dataset. We show that our proposal significantly and consistently outperforms the existing methods, identifying different agent strategies. |
| Researcher Affiliation | Industry | J.P. Morgan AI Research, London, UK; J.P. Morgan AI Research, New York, USA. |
| Pseudocode | Yes | Algorithm 1 K-SHAP Algorithm |
| Open Source Code | No | The paper refers to the public availability of the ABIDES simulator and the implementations of benchmark algorithms, but does not provide a link or explicit statement for the K-SHAP algorithm's own source code. |
| Open Datasets | Yes | We first use the state-of-art multi-agent market simulator ABIDES (Byrd et al., 2019) to simulate synthetic market data... then we consider real anonymous market data from NASDAQ stock exchange (NASDAQ, 2022). |
| Dataset Splits | No | The paper describes the datasets used (simulated and real-world) and evaluation metrics, but does not specify explicit training, validation, and test splits for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions software like scikit-learn, UMAP, PyTorch, and XGBoost, but does not specify their version numbers for reproducibility. |
| Experiment Setup | Yes | We fix the number of trees to 100 and we use mean squared error as objective. For UMAP we use the official implementation (McInnes et al., 2018b), and we fix the number of neighbor observations to 15. We consider a feedforward Neural Network with 2 linear hidden layers with LeakyReLU activation function, and respectively 64 and 32 neurons. After each hidden layer we consider a 0.1-dropout layer. |
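
The experiment setup quoted in the last row can be sketched roughly as follows, assuming the XGBoost, umap-learn, and PyTorch stack the paper names. The hyperparameter values (100 trees, MSE objective, 15 UMAP neighbors, 64/32 hidden units, LeakyReLU, 0.1 dropout) come from the paper; the input and output dimensions and all identifiers are illustrative assumptions, not details reported by the authors.

```python
# Minimal sketch of the reported experiment setup (hypothetical names/sizes).
import torch
import torch.nn as nn
import umap
from xgboost import XGBRegressor

# Gradient-boosted trees: 100 estimators, mean squared error objective.
trees = XGBRegressor(n_estimators=100, objective="reg:squarederror")

# UMAP dimensionality reduction with 15 neighbor observations.
reducer = umap.UMAP(n_neighbors=15)

# Feedforward network: two linear hidden layers (64 and 32 neurons) with
# LeakyReLU activations and a 0.1-dropout layer after each hidden layer.
class FeedForward(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):  # dimensions are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64),
            nn.LeakyReLU(),
            nn.Dropout(0.1),
            nn.Linear(64, 32),
            nn.LeakyReLU(),
            nn.Dropout(0.1),
            nn.Linear(32, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```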