Adaptive Sequential Recommendation Using Context Trees
Authors: Fei Mi, Boi Faltings
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results show that the CT recommender performs much better than other methods. We analyze the reasons for this and demonstrate that it is because of better adaptation to changes in the domain. The overall performances illustrated in Figure 1, for three MOOCs, show that CT recommender performs better than MF. |
| Researcher Affiliation | Academia | Fei Mi, Boi Faltings Artificial Intelligence Lab, EPFL Lausanne, Switzerland firstname.lastname@epfl.ch |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides no explicit statement of, or link to, open-source code for the described methodology. |
| Open Datasets | No | The paper mentions evaluating on 'three MOOCs' but gives no access information for these datasets (link, DOI, repository, or formal citation for a public dataset). |
| Dataset Splits | No | The paper does not specify exact train/validation/test split percentages, absolute sample counts, or reference predefined splits with citations. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers). |
| Experiment Setup | No | The paper describes general update strategies for the models ('MF model is only updated periodically (week-by-week)', 'One-shot CT version', 'Slow-update CT version'), but it does not report specific hyperparameters or system-level training settings such as learning rates, batch sizes, or optimizer choices. |
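The incremental ('one-shot') update strategy quoted above can be sketched as a minimal variable-order Markov recommender that updates its context counts after every observed interaction. This is an illustrative assumption for how such a sequential model operates, not the paper's exact context-tree algorithm; the class name, `max_depth` parameter, and longest-suffix back-off rule are all hypothetical.

```python
from collections import defaultdict


class ContextTreeRecommender:
    """Minimal variable-order Markov sketch of a sequential recommender.

    Counts how often each item follows each recent-history suffix (up to
    max_depth items), then recommends by backing off from the longest
    suffix that has been observed before.
    """

    def __init__(self, max_depth=3):
        self.max_depth = max_depth
        # counts[context_tuple][item] = observed frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, history, next_item):
        # One-shot update: register next_item under every suffix of the
        # history, including the empty context (popularity fallback).
        for d in range(min(self.max_depth, len(history)) + 1):
            ctx = tuple(history[-d:]) if d else ()
            self.counts[ctx][next_item] += 1

    def recommend(self, history, k=1):
        # Back off from the longest matched suffix to the empty context.
        for d in range(min(self.max_depth, len(history)), -1, -1):
            ctx = tuple(history[-d:]) if d else ()
            if self.counts[ctx]:
                ranked = sorted(self.counts[ctx].items(),
                                key=lambda kv: -kv[1])
                return [item for item, _ in ranked[:k]]
        return []
```

Feeding the model one interaction at a time mirrors the always-updating behavior that, per the paper's argument, lets a CT-style recommender adapt to domain changes faster than an MF model refreshed only week-by-week.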