Fine-Grained Semantic Conceptualization of FrameNet
Authors: Jin-woo Park, Seung-won Hwang, Haixun Wang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive analysis with real-life data validates that our approach improves not only the quality of the identified concepts for FrameNet, but also that of applications such as selectional preference. |
| Researcher Affiliation | Collaboration | Jin-woo Park, POSTECH, Republic of Korea, jwpark85@postech.edu; Seung-won Hwang, Yonsei University, Republic of Korea, seungwonh@yonsei.ac.kr; Haixun Wang, Facebook Inc., USA, haixun@gmail.com |
| Pseudocode | Yes | Algorithm 1: Truss(G) |
| Open Source Code | No | The paper states: "The entire results are released at http://karok.postech.ac.kr/FEconceptualization.zip." This link is for results, not source code. No other explicit statement or link for the paper's source code is provided. |
| Open Datasets | Yes | Proposed method: "To overcome concept and instance sparsity of manually-built KBs, we utilize Probase (Wu et al. 2012), which contains millions of fine-grained concepts automatically harvested from billions of web documents." Footnote 2: Dataset publicly available at http://probase.msra.cn/dataset.aspx. FrameNet (Fillmore, Johnson, and Petruck 2003). Footnote 3: "We used the FrameNet 1.5 dataset." |
| Dataset Splits | No | The paper describes the SemEval-2010 dataset used for pseudo-disambiguation evaluation but does not specify training, validation, or test splits for its model's development and evaluation (e.g., percentages, sample counts, or references to standard splits). |
| Hardware Specification | Yes | All experiments were carried out on a machine with an Intel Core i3 processor at 3.07GHz and 4GB of DDR3 memory. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names like PyTorch or TensorFlow with their respective versions). |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs) or specific training configurations. |
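The pseudocode row refers to the paper's Algorithm 1: Truss(G). Assuming this follows the standard k-truss definition (the maximal subgraph in which every edge participates in at least k-2 triangles), a minimal peeling-style sketch looks like the following; this is an illustrative reconstruction, not the paper's implementation.

```python
from collections import defaultdict

def k_truss(edges, k):
    """Return the edge set of the k-truss of an undirected graph:
    the maximal subgraph in which every edge lies in >= k-2 triangles.
    Generic peeling sketch; the paper's Algorithm 1 may differ in detail."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                # Support = number of triangles containing edge (u, v),
                # i.e., common neighbors of u and v.
                if len(adj[u] & adj[v]) < k - 2:
                    adj[u].discard(v)
                    adj[v].discard(u)
                    changed = True
    # Emit each surviving undirected edge once.
    return {(u, v) for u in adj for v in adj[u] if u < v}

# Example: a 4-clique with one pendant edge attached at node 4.
# The 4-truss keeps the clique and peels away the pendant edge.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
print(k_truss(edges, 4))
```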