Cognitive Master Teacher

Authors: Raghu Krishnapuram, Luis Lastras, Satya Nitta

Venue: AAAI 2015

Reproducibility assessment: each item below lists the variable, the result, and the supporting LLM response.
Research Type: Experimental. LLM Response: We have built prototypes of Web- as well as mobile-accessible assistants in collaboration with IBM Corporate Citizenship. Working together with the IBM Foundation and other philanthropic organizations, we plan to improve CMT by incorporating the ability to learn preferences of users and provide personalized answers over time. In the coming years, CMT will be initially piloted at several schools that have adopted CCSS and ultimately be made freely available to teachers everywhere.
Researcher Affiliation: Industry. LLM Response: Raghu Krishnapuram¹, Luis A. Lastras², Satya Nitta¹; ¹IBM TJ Watson Research Center, ²IBM Watson Group; krishnap@us.ibm.com, lastrasl@us.ibm.com, svn@us.ibm.com
Pseudocode: No. LLM Response: The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: No. LLM Response: The paper does not provide any statement or link regarding the release of open-source code for the described methodology.
Open Datasets: Yes. LLM Response: The text corpus of CMT consists of several text books, Web pages and other information that is available from open education resources such as Engage NY and OER Commons as well as select non-profit educational organizations such as Achieve and Student Achievement Partners.
Dataset Splits: No. LLM Response: The paper mentions a "training mode" for machine learning but does not provide specific details on training, validation, or test dataset splits.
Hardware Specification: No. LLM Response: The paper does not provide specific hardware details (such as CPU/GPU models or memory amounts) used for running its experiments.
Software Dependencies: No. LLM Response: The paper mentions software components such as "IBM's Watson machine" and "Watson Engagement Advisor (WEA)" but does not specify version numbers for these or any other software dependencies.
Experiment Setup: No. LLM Response: The paper mentions aspects of training the WEA component and adding domain-specific terms, but it does not provide specific experimental setup details such as hyperparameter values or detailed training configurations.