Unsupervised Sequence Classification using Sequential Output Statistics

Authors: Yu Liu, Jianshu Chen, Li Deng

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on real-world datasets demonstrate that the new unsupervised learning method gives drastically lower errors than other baseline methods.
Researcher Affiliation | Industry | Yu Liu, Jianshu Chen, and Li Deng; Microsoft Research, Redmond, WA 98052, USA (jianshuc@microsoft.com); Citadel LLC, Seattle/Chicago, USA (Li.Deng@citadel.com)
Pseudocode | Yes | Algorithm 1: Stochastic Primal-Dual Gradient Method (an illustrative sketch of the primal-dual update appears below the table).
Open Source Code | No | The paper states only that 'the code will be released soon'; no repository link is provided.
Open Datasets | Yes | 'For the OCR task, we obtain our dataset from a public database UWIII English Document Image Database [27].'
Dataset Splits | No | The paper mentions a 'train set' and 'test set' but does not specify explicit percentages or counts for training, validation, and test splits. For example, it states '153,221 characters for our OCR task' and 'total of 83,567 characters' for Spell-Corr, but doesn't detail how these are split into training, validation, and test subsets.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models, or cloud instance types.
Software Dependencies | No | The paper mentions 'Tensor Flow' but does not specify a version number or other software dependencies with their versions.
Experiment Setup | No | The paper mentions mini-batch sizes (10 to 10,000) and that hyperparameters were tuned, but does not provide a comprehensive list of specific hyperparameter values (e.g., learning rate, optimizer settings, number of epochs) or detailed training configurations.
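The 'Pseudocode' row points to Algorithm 1, a stochastic primal-dual gradient (SPDG) method, but the code is unreleased. The following is a minimal NumPy sketch of the general primal-dual idea: a cost of the form -sum_i p_lm[i] * ln(pbar_theta[i]) is rewritten as a saddle-point problem via the conjugate identity -ln(u) = max_{v>0}(-u*v + ln(v) + 1), then optimized by stochastic gradient descent on the classifier parameters and ascent on the dual variables. This is an illustration under simplifying assumptions, not the authors' implementation: it uses a toy linear classifier and unigram output statistics, whereas the paper matches N-gram statistics of output sequences; all names (p_lm, mu_primal, mu_dual, batch_size) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: classifier p_theta(y|x) = softmax(W x) on unlabeled inputs,
# trained so that its average output distribution matches a given prior
# p_lm over output symbols (unigram case; the paper uses N-gram statistics).
num_classes, dim, num_samples = 5, 8, 2000
X = rng.normal(size=(num_samples, dim))        # unlabeled input features
p_lm = rng.dirichlet(np.ones(num_classes))     # output prior ("language model")

W = np.zeros((num_classes, dim))               # primal variables (classifier weights)
v = np.ones(num_classes)                       # dual variables, one per output symbol
mu_primal, mu_dual = 0.1, 0.01                 # step sizes; placeholders, not from the paper
batch_size = 100                               # within the 10-10,000 range noted above

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(500):
    batch = X[rng.choice(num_samples, size=batch_size, replace=False)]
    P = softmax(batch @ W.T)                   # (batch_size, num_classes)
    pbar = P.mean(axis=0)                      # expected output statistics on the batch

    # Saddle objective: L = sum_i p_lm[i] * (-pbar[i]*v[i] + ln(v[i]) + 1).
    # Dual ascent step: dL/dv[i] = p_lm[i] * (1/v[i] - pbar[i]).
    v += mu_dual * p_lm * (1.0 / v - pbar)
    v = np.clip(v, 1e-6, None)                 # duals must stay positive

    # Primal descent step: dL/dW = -d(sum_i p_lm[i]*v[i]*pbar[i])/dW,
    # computed through the softmax Jacobian and averaged over the batch.
    c = p_lm * v                               # (num_classes,)
    A = P * c - P * (P @ c)[:, None]           # per-example Jacobian contributions
    grad_W = -(A.T @ batch) / batch_size
    W -= mu_primal * grad_W
```

The batch size of 100 merely falls inside the 10-to-10,000 range reported in the 'Experiment Setup' row; the step sizes are arbitrary placeholders, since the paper does not list its tuned hyperparameter values.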