Functionality Discovery and Prediction of Physical Objects
Authors: Lei Ji, Botian Shi, Xianglin Guo, Xilin Chen
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show the effectiveness of our methods. |
| Researcher Affiliation | Collaboration | Lei Ji (1,2,3), Botian Shi (4), Xianglin Guo (4), Xilin Chen (1,2). Affiliations: 1 Institute of Computing Technology, CAS, Beijing, China; 2 University of Chinese Academy of Sciences, Beijing, China; 3 Microsoft Research Asia, Beijing, China; 4 Beijing Institute of Technology, Beijing, China |
| Pseudocode | Yes | Algorithm 1 Training algorithm |
| Open Source Code | No | The paper references third-party open-source tools (RnnOIE, a dependency parser, a path-ranking algorithm) but does not provide access to the authors' own implementation of their proposed methods or state that it is open source. |
| Open Datasets | No | The paper describes collecting its own dataset by crawling images from a commercial search engine, but it does not provide any concrete access information such as a specific link, DOI, repository name, or formal citation for public availability of this dataset. |
| Dataset Splits | Yes | We use 35,860 <head (image), action, tail> tuples as test data and the same number as validation, drawn from the full 678,900-tuple Head(image)-Action-Tail dataset. We also split the dataset into 3 parts, with 630 testing images, 630 validation images, and the rest as the training set (see the split arithmetic sketched after this table). |
| Hardware Specification | No | The paper mentions software frameworks like PyTorch and models like ResNet152 but does not specify any hardware details such as CPU/GPU models, memory, or specific computing environments used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'PyTorch framework', 'GloVe 100-dimension word embeddings', 'RnnOIE', and a 'dependency parser' but does not provide specific version numbers for any of these software components. |
| Experiment Setup | Yes | The bi-LSTM model has 1 layer and each LSTM cell uses 128 hidden units, followed by a ReLU activation function. We train the model for 500 epochs with a mini-batch size of 80 and a learning rate of 0.001. We use GloVe 100-dimension word embeddings. For DistMult, the dimension is set to 200, the learning rate to 0.05, the batch size to 128, and the negative sample number to 10. We adopted the AdaGrad optimizer and set L2 regularization to 0.0001. For the linear combination, α is 1.6 and β is 0.57 (see the configuration sketch after this table). |
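
The Dataset Splits row above quotes enough numbers to check the tuple arithmetic. A minimal sketch, assuming the test, validation, and training portions are disjoint and exhaust the full dataset (the quoted wording suggests this but does not state it outright):

```python
# Split sizes quoted in the Dataset Splits row.
TOTAL_TUPLES = 678_900          # full Head(image)-Action-Tail dataset
TEST_TUPLES = VAL_TUPLES = 35_860

# Under the disjoint-and-exhaustive assumption, the training portion is:
TRAIN_TUPLES = TOTAL_TUPLES - TEST_TUPLES - VAL_TUPLES
print(TRAIN_TUPLES)             # 607180 tuples left for training
```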
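
The Experiment Setup row pins down enough hyperparameters to sketch the reported configuration in PyTorch. This is a minimal reconstruction under stated assumptions, not the authors' implementation: the last-time-step pooling, the placeholder DistMult parameter table, and which of α and β attaches to which model's score are all guesses.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the Experiment Setup row.
EMBED_DIM = 100          # GloVe 100-dimension word embeddings
LSTM_HIDDEN = 128        # 1 layer, 128 hidden units per LSTM cell
LSTM_LR = 0.001          # bi-LSTM: 500 epochs, mini-batch size 80
DISTMULT_DIM = 200       # DistMult: batch 128, 10 negative samples
DISTMULT_LR = 0.05
L2_REG = 0.0001
ALPHA, BETA = 1.6, 0.57  # linear-combination weights

class BiLSTMEncoder(nn.Module):
    """One-layer bi-LSTM with 128 hidden units per cell, followed by ReLU.
    Pooling the last time step is an assumption, not from the paper."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(EMBED_DIM, LSTM_HIDDEN, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.act = nn.ReLU()

    def forward(self, x):               # x: (batch, seq_len, 100)
        out, _ = self.lstm(x)           # (batch, seq_len, 2 * 128)
        return self.act(out[:, -1, :])  # last time step, ReLU-activated

def combined_score(s_lstm: torch.Tensor, s_distmult: torch.Tensor) -> torch.Tensor:
    """Linear combination of the two scores with the reported weights;
    which weight goes with which score is an assumption."""
    return ALPHA * s_lstm + BETA * s_distmult

# DistMult side: AdaGrad with L2 regularization, per the quoted setup.
# The 1000-entity embedding table is a placeholder for illustration.
entity_table = nn.Parameter(torch.randn(1000, DISTMULT_DIM))
distmult_opt = torch.optim.Adagrad([entity_table], lr=DISTMULT_LR,
                                   weight_decay=L2_REG)
```

Note that, per the quoted text, the 0.001 learning rate belongs to the bi-LSTM training loop, while the AdaGrad settings (learning rate 0.05, L2 regularization 0.0001) belong to the DistMult side.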