Cross-Sentence Gloss Consistency for Continuous Sign Language Recognition
Authors: Qi Rao, Ke Sun, Xiaohan Wang, Qi Wang, Bang Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three CSLR datasets show that our proposed CSGC significantly boosts the performance of CSLR, surpassing existing state-of-the-art works by large margins (i.e., 1.6% on PHOENIX14, 2.4% on PHOENIX14-T, and 5.7% on CSL-Daily). |
| Researcher Affiliation | Collaboration | Qi Rao¹*, Ke Sun², Xiaohan Wang³, Qi Wang², Bang Zhang² (¹ReLER, AAII, University of Technology Sydney; ²Institute for Intelligent Computing, Alibaba Group; ³Stanford University) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is publicly available. |
| Open Datasets | Yes | We validate our proposed method on three datasets that are widely utilized in CSLR evaluation: PHOENIX14 (Koller, Forster, and Ney 2015), PHOENIX14-T (Camgoz et al. 2018) and CSL-Daily (Zhou et al. 2021). |
| Dataset Splits | Yes | PHOENIX14... 5672, 540, 629 sentences are used for training, validation (Dev) and testing (Test), respectively. PHOENIX14-T... divided into 7096 training instances, 519 validation instances (Dev) and 642 testing (Test) instances. CSL-Daily... The split of the dataset for training, validation (Dev) and testing (Test) is 18401, 1077 and 1176, respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments in the main text. |
| Software Dependencies | No | The paper mentions architectural components and loss functions (e.g., 'ResNet18', 'BiLSTM', 'CTC loss') but does not provide specific software dependencies like library names with version numbers (e.g., 'PyTorch 1.9'). |
| Experiment Setup | Yes | $\mathcal{L} = \mathcal{L}_{Seq} + \mathcal{L}_{VE} + \gamma_1 \mathcal{L}_c + \gamma_2 \mathcal{L}_f$ (Eq. 11), where $\gamma_1$ and $\gamma_2$ are both scaling factors... We empirically adopt the same setting in our experiments, i.e., $\gamma_1 = 0.3$, $\gamma_2 = 0.1$. ... Particularly, the momentum update approach performs reasonably well with a proper momentum value (i.e., $\beta = 0.9$). |
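
The Software Dependencies row names the paper's building blocks (ResNet18, BiLSTM, CTC loss) without pinning any library versions. For readers unfamiliar with this stack, the following is a minimal PyTorch/torchvision sketch of a generic CSLR backbone assembled from those components; the class name, layer sizes, and wiring are illustrative assumptions, not the paper's implementation, which is not released.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CSLRBackbone(nn.Module):
    """Generic CSLR stack: ResNet18 frame features -> BiLSTM -> gloss logits.

    A sketch of the components named in the table; hidden sizes and layer
    counts are assumptions, not the paper's configuration.
    """
    def __init__(self, num_glosses: int, hidden: int = 512):
        super().__init__()
        resnet = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])  # drop the FC head
        self.rnn = nn.LSTM(512, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_glosses + 1)  # +1 for the CTC blank

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) -> per-frame features -> temporal modeling
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)  # (B*T, 512)
        feats, _ = self.rnn(feats.view(b, t, -1))          # (B, T, 2*hidden)
        return self.classifier(feats)                       # (B, T, num_glosses+1)
```

Training such a backbone would typically pair the per-frame gloss logits with `nn.CTCLoss` over the target gloss sequence.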
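
The Experiment Setup row quotes the combined objective of Eq. (11) together with the reported weights and momentum value. Below is a minimal sketch, assuming a PyTorch setting, of how the weighted loss combination and an EMA-style momentum update with $\beta = 0.9$ could be expressed; every function and variable name here is hypothetical, since the authors provide no public code.

```python
import torch

# Hedged sketch of the training objective quoted in the table:
#   L = L_Seq + L_VE + gamma1 * L_c + gamma2 * L_f        (Eq. 11)
# with gamma1 = 0.3 and gamma2 = 0.1 as reported.

GAMMA1, GAMMA2 = 0.3, 0.1  # scaling factors from the paper
BETA = 0.9                 # momentum value the paper reports as working well

def total_loss(l_seq, l_ve, l_c, l_f):
    """Combine the four loss terms as in Eq. (11)."""
    return l_seq + l_ve + GAMMA1 * l_c + GAMMA2 * l_f

@torch.no_grad()
def momentum_update(memory: torch.Tensor, new_feat: torch.Tensor,
                    beta: float = BETA) -> torch.Tensor:
    """EMA-style update: keep beta of the stored entry, blend in the new feature.

    A generic momentum update; the paper only states that beta = 0.9
    'performs reasonably well', not the exact update rule used.
    """
    return beta * memory + (1.0 - beta) * new_feat
```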