Speech Communication, Volume 68, April 2015
- Jinryoul Kim, Kyoung Won Nam, Sunhyun Yook, Sung Hwa Hong, Dong Pyo Jang, In-Young Kim: Effect of the degree of sensorineural hearing impairment on the results of subjective evaluations of a noise-reduction algorithm. 1-10
- Haris B. C., Rohit Sinha: Low-complexity speaker verification with decimated supervector representations. 11-22
- Ramya Rasipuram, Mathew Magimai-Doss: Acoustic and lexical resource constrained ASR using language-independent acoustic model and language-dependent probabilistic lexical model. 23-40
- Xiaoyi Chen, Wenwu Wang, Yingmin Wang, Xionghu Zhong, Atiyeh Alinaghi: Reverberant speech separation with probabilistic time-frequency masking for B-format recordings. 41-54
- Yu Tsao, Payton Lin, Ting-Yao Hu, Xugang Lu: Ensemble environment modeling using affine transform group. 55-68
- Amir Sadeghian, Hilmi R. Dajani, Adrian D. C. Chan: Classification of speech-evoked brainstem responses to English vowels. 69-84
- Nigel G. Ward, Steven D. Werner, Fernando García, Emilio Sanchis: A prosody-based vector-space model of dialog activity for information retrieval. 85-96
- Pasi Pertilä, Joonas Nikunen: Distant speech separation using predicted time-frequency masks from spatial features. 97-106