| A Generative Product-of-Filters Model of Audio | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| An empirical analysis of dropout in piecewise linear networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Auto-Encoding Variational Bayes | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅ | 4 |
| Bounding the Test Log-Likelihood of Generative Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Deep Convolutional Ranking for Multilabel Image Annotation | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Deep and Wide Multiscale Recursive Networks for Robust Image Labeling | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | 3 |
| EXMOVES: Classifier-based Features for Scalable Action Recognition | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| End-to-End Text Recognition with Hybrid HMM Maxout Models | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Exact solutions to the nonlinear dynamics of learning in deep linear neural networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Fast Training of Convolutional Networks through FFTs | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ✅ | 4 |
| Group-sparse Embeddings in Collective Matrix Factorization | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| How to Construct Deep Recurrent Neural Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Intriguing properties of neural networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Learned versus Hand-Designed Feature Representations for 3d Agglomeration | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | 1 |
| Learning Human Pose Estimation Features with Convolutional Networks | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | 2 |
| Learning Semantic Script Knowledge with Event Embeddings | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Learning Transformations for Classification Forests | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Learning to encode motion using spatio-temporal synchrony | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 4 |
| Multi-View Priors for Learning Detectors from Sparse Viewpoint Data | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Multilingual Distributed Representations without Word Alignment | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Network In Network | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | 2 |
| Neuronal Synchrony in Complex-Valued Deep Networks | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| On Fast Dropout and its Applicability to Recurrent Networks | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| On the number of inference regions of deep feed forward networks with piece-wise linear activations | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | 0 |
| OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Relaxations for inference in restricted Boltzmann machines | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 3 |
| Revisiting Natural Gradient for Deep Networks | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | 6 |
| Sequentially Generated Instance-Dependent Image Representations for Classification | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 4 |
| Some Improvements on Deep Convolutional Neural Network Based Image Classification | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| Sparse similarity-preserving hashing | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Spectral Networks and Locally Connected Networks on Graphs | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| The return of AdaBoost.MH: multi-class Hamming trees | ✅ | ❌ | ✅ | ✅ | ❌ | ✅ | ✅ | 5 |
| Unsupervised Feature Learning by Deep Sparse Coding | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ | ✅ | 3 |
| Zero-Shot Learning and Clustering for Semantic Utterance Classification | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ✅ | 3 |
| Zero-Shot Learning by Convex Combination of Semantic Embeddings | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ | ✅ | 2 |
| k-Sparse Autoencoders | ✅ | ❌ | ✅ | ✅ | ✅ | ❌ | ✅ | 5 |