1. Amodei D, Ananthanarayanan S, Anubhai R, Bai J, Battenberg E, Case C, Casper J, Catanzaro B, Cheng Q, Chen G, Chen J, Chen J, Chen Z, Chrzanowski M, Coates A, Diamos G, Ding K, Du N, Elsen E, Engel J, Fang W, Fan L, Fougner C, Gao L, Gong C, Hannun A, Han T, Johannes L, Jiang B, Ju C, Jun B, LeGresley P, Lin L, Liu J, Liu Y, Li W, Li X, Ma D, Narang S, Ng A, Ozair S, Peng Y, Prenger R, Qian S, Quan Z, Raiman J, Rao V, Satheesh S, Seetapun D, Sengupta S, Srinet K, Sriram A, Tang H, Tang L, Wang C, Wang J, Wang K, Wang Y, Wang Z, Wang Z, Wu S, Wei L, Xiao B, Xie W, Xie Y, Yogatama D, Yuan B, Zhan J, Zhu Z (2016) Deep speech 2: end-to-end speech recognition in English and Mandarin. In: Balcan MF, Weinberger KQ (eds) Proceedings of the 33rd international conference on machine learning, Proceedings of machine learning research, vol 48. PMLR, New York, NY, USA, pp 173–182
2. Anguita D, Ghio A, Oneto L, Parra X, Reyes-Ortiz JL (2013) A public domain dataset for human activity recognition using smartphones. In: ESANN
3. Avci A, Bosch S, Marin-Perianu M, Marin-Perianu R, Havinga P (2010) Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: a survey. In: 23rd international conference on architecture of computing systems (ARCS). VDE, pp 1–10
4. Bao L, Intille S (2004) Activity recognition from user-annotated acceleration data. In: Ferscha A, Mattern F (eds) Pervasive computing, Pervasive 2004. Lecture notes in computer science, vol 3001. Springer, Berlin, Heidelberg, pp 1–17
5. Bhattacharya S, Nurmi P, Hammerla N, Plötz T (2014) Using unlabeled data in a sparse-coding framework for human activity recognition. Pervasive Mob Comput 15:242–262