Never-Ending Learning
Authors: Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matthew Gardner, Bryan Kisiel, Jayant Krishnamurthy, Ni Lao, Kathryn Mazaitis, Thahir Mohamed, Ndapa Nakashole, Emmanouil Platanios, Alan Ritter, Mehdi Samadi, Burr Settles, Richard Wang, Derry Wijaya, Abhinav Gupta, Xinlei Chen, Abulhair Saparov, Malcolm Greaves, Joel Welling
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our primary goal in experimentally evaluating NELL is to understand the degree to which NELL improves over time through learning, both in its reading competence, and in the size and quality of its KB. First, consider the growth of NELL's KB over time, from its inception in January 2010 through November 2014, during which NELL has completed 886 iterations. The left panel of Figure 3 shows the number of beliefs in NELL's KB over time, and the right panel of this figure shows the number of beliefs for which NELL holds high confidence. |
| Researcher Affiliation | Collaboration | Carnegie Mellon University, USA; University of São Carlos, Brazil; Indian Institute of Science, India; Google Inc., USA (research carried out while at Carnegie Mellon University); Ohio State University, USA; Duolingo, USA; Alpine Data Labs, USA; Pittsburgh Supercomputing Center, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that source code for NELL or its components is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | the web (an initial 500 million web pages from the ClueWeb 2009 collection (Callan and Hoy 2009), and access to 100,000 Google API search queries each day) |
| Dataset Splits | No | The paper describes how different versions of NELL were trained on its evolving KB and unlabeled text, and then evaluated on a fixed set of web data. However, it does not specify a traditional training/validation/test split for the learning process itself; rather, it evaluates historical versions of the system over time. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions various systems like CML, CPL, OpenEval, SEAL, NEIL, PRA, and OntExt, but does not specify their software versions or any other specific software dependencies with version numbers. |
| Experiment Setup | No | The paper describes the general experimental process (training different NELL versions and evaluating them on web data) but does not provide specific hyperparameters (e.g., learning rate, batch size) or detailed system-level training settings. |