8. SSW 2013: Barcelona, Spain
- The Eighth ISCA Tutorial and Research Workshop on Speech Synthesis, Barcelona, Spain, August 31-September 2, 2013. ISCA 2013
Prosody and Pausing
- Norbert Braunschweiler, Langzhou Chen: Automatic detection of inhalation breath pauses for improved pause modelling in HMM-TTS. 1-6
- Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Alistair Conkie: Role of pausing in text-to-speech synthesis for simultaneous interpretation. 7-11
- Alok Parlikar, Alan W. Black: Minimum error rate training for phrasing in speech synthesis. 13-17
- Benjamin Picart, Sandrine Brognaux, Thomas Drugman: HMM-based speech synthesis of live sports commentaries: integration of a two-layer prosody annotation. 19-24
General Topics in Speech Synthesis (Poster Sessions)
- Àngel Calzada Defez, Joan Claudi Socoró Carrié, Robert A. J. Clark: Parametric model for vocal effort interpolation with harmonics plus noise models. 25-30
- Anh Tuan Dinh, Thanh Son Phan, Thang Tat Vu, Chi Mai Luong: Vietnamese HMM-based speech synthesis with prosody information. 31-34
- Hiroya Hashimoto, Keikichi Hirose, Nobuaki Minematsu: Context labels based on "bunsetsu" for HMM-based speech synthesis of Japanese. 35-39
- Yoshitaka Mamiya, Adriana Stan, Junichi Yamagishi, Peter Bell, Oliver Watts, Robert A. J. Clark, Simon King: Using adaptation to improve speech transcription alignment in noisy and reverberant environments. 41-46
- Nobuyuki Nishizawa, Tsuneo Kato: Speech synthesis using a maximally decimated pseudo QMF bank for embedded devices. 47-52
- Sathish Pammi, Marcela Charfuelan: HMM-based sCost quality control for unit selection speech synthesis. 53-57
- Lakshmi Saheer, Blaise Potard: Understanding factors in emotion perception. 59-64
- Rubén San-Segundo-Hernández, Juan Manuel Montero, Mircea Giurgiu, Ioana Muresan, Simon King: Multilingual number transcription for text-to-speech conversion. 65-69
- Ryoichi Takashima, Ryo Aihara, Tetsuya Takiguchi, Yasuo Ariki: Noise-robust voice conversion based on spectral mapping on sparse space. 71-75
- Markus Toman, Michael Pucher, Dietmar Schabus: Cross-variety speaker transformation in HSMM-based speech synthesis. 77-81
- Markus Toman, Michael Pucher, Dietmar Schabus: Multi-variety adaptive acoustic modeling in HSMM-based speech synthesis. 83-87
Open Challenges in Speech Synthesis
- Tatsuo Inukai, Tomoki Toda, Graham Neubig, Sakriani Sakti, Satoshi Nakamura: Investigation of intra-speaker spectral parameter variation and its prediction towards improvement of spectral conversion metric. 89-94
- Sunayana Sitaram, Gopala Krishna Anumanchipalli, Justin T. Chiu, Alok Parlikar, Alan W. Black: Text to speech in new languages without a standardized orthography. 95-100
- Oliver Watts, Adriana Stan, Robert A. J. Clark, Yoshitaka Mamiya, Mircea Giurgiu, Junichi Yamagishi, Simon King: Unsupervised and lightly-supervised learning for rapid construction of TTS systems in multiple languages from 'found' data: evaluation and analysis. 101-106
Robustness in Synthetic Speech
- Mauro Nicolao, Fabio Tesser, Roger K. Moore: A phonetic-contrast motivated adaptation to control the degree-of-articulation on Italian HMM-based synthetic voices. 107-112
- Cassia Valentini-Botinhao, Mirjam Wester, Junichi Yamagishi, Simon King: Using neighbourhood density and selective SNR boosting to increase the intelligibility of synthetic speech in noise. 113-118
- Kayoko Yanagisawa, Javier Latorre, Vincent Wan, Mark J. F. Gales, Simon King: Noise robustness in HMM-TTS speaker adaptation. 119-124
Issues in HMM-based Speech Synthesis
- Daniel Erro, Agustín Alonso, Luis Serrano, Eva Navas, Inma Hernáez: New method for rapid vocal tract length adaptation in HMM-based speech synthesis. 125-128
- Nobukatsu Hojo, Kota Yoshizato, Hirokazu Kameoka, Daisuke Saito, Shigeki Sagayama: Text-to-speech synthesizer based on combination of composite wavelet and hidden Markov models. 129-134
- Qiong Hu, Korin Richmond, Junichi Yamagishi, Javier Latorre: An experimental comparison of multiple vocoder types. 135-140
- Yusuke Ijima, Noboru Miyazaki, Hideyuki Mizuno: Statistical model training technique for speech synthesis based on speaker class. 141-145
General Topics in Speech Synthesis (Poster Sessions)
- Florian Hinterleitner, Christoph Norrenbrock, Sebastian Möller: Is intelligibility still the main problem? A review of perceptual quality dimensions of synthetic speech. 147-151
- Sébastien Le Maguer, Nelly Barbot, Olivier Boëffard: Evaluation of contextual descriptors for HMM-based speech synthesis in French. 153-158
- Jaime Lorenzo-Trueba, Roberto Barra-Chicote, Junichi Yamagishi, Oliver Watts, Juan Manuel Montero: Towards speaking style transplantation in speech synthesis. 159-163
- Thomas Merritt, Simon King: Investigating the shortcomings of HMM synthesis. 165-170
- Raúl Montaño, Francesc Alías, Josep Ferrer: Prosodic analysis of storytelling discourse modes and narrative situations oriented to text-to-speech synthesis. 171-176
- Ulpu Remes, Reima Karhila, Mikko Kurimo: Objective evaluation measures for speaker-adaptive HMM-TTS systems. 177-181
- Fabio Tesser, Giacomo Sommavilla, Giulio Paci, Piero Cosi: Experiments with signal-driven symbolic prosody for statistical parametric speech synthesis. 183-187
- Anandaswarup Vadapalli, Peri Bhaskararao, Kishore Prahallad: Significance of word-terminal syllables for prediction of phrase breaks in text-to-speech systems for Indian languages. 189-194
- Catherine Inez Watson, Wei Liu, Bruce A. MacDonald: The effect of age and native speaker status on synthetic speech intelligibility. 195-200
- Zhizheng Wu, Tuomas Virtanen, Tomi Kinnunen, Eng Siong Chng, Haizhou Li: Exemplar-based voice conversion using non-negative spectrogram deconvolution. 201-206
Synthetic Singing Voices
- Maria Astrinaki, Alexis Moinet, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Thierry Dutoit: Mage - reactive articulatory feature control of HMM-based parametric speech synthesis. 207-211
- Marti Umbert, Jordi Bonada, Merlijn Blaauw: Systematic database creation for expressive singing voice synthesis control. 213-216
Expressive Speech Synthesis
- Matthew P. Aylett, Blaise Potard, Christopher J. Pidcock: Expressive speech synthesis: synthesising ambiguity. 217-221
- Timo Baumann, David Schlangen: Interactional adequacy as a factor in the perception of synthesized speech. 223-227
- Tamás Gábor Csapó, Géza Németh: A novel irregular voice model for HMM-based speech synthesis. 229-234
- Kazuhiko Iwata, Tetsunori Kobayashi: Expression of speaker's intentions through sentence-final particle/intonation combinations in Japanese conversational speech synthesis. 235-240
Demo Session
- Oriol Guasch, Sten Ternström, Marc Arnela, Francesc Alías: Unified numerical simulation of the physics of voice: the EUNISON project. 241-242
- Maria Astrinaki, Alexis Moinet, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Thierry Dutoit: Mage - HMM-based speech synthesis reactively controlled by the articulators. 243
- Maria Astrinaki, Junichi Yamagishi, Simon King, Nicolas D'Alessandro, Thierry Dutoit: Reactive accent interpolation through an interactive map application. 245
- Christophe Veaux, Maria Astrinaki, Keiichiro Oura, Robert A. J. Clark, Junichi Yamagishi: Real-time control of expressive speech synthesis using Kinect body tracking. 247-248
General Topics in Speech Synthesis (Poster Sessions)
- Ibrahim Almosallam, Atheer Alkhalifa, Mansour Al-Ghamdi, Mohamed I. Alkanhal, Ashraf Alkhairy: SASSC: a standard Arabic single speaker corpus. 249-253
- Ladan Golipour, Alistair Conkie, Ann K. Syrdal: Prosodically modifying speech for unit selection speech synthesis databases. 255-259
- Heng Lu, Simon King, Oliver Watts: Combining a vector space representation of linguistic context with a deep neural network for text-to-speech synthesis. 261-265
- Jindrich Matousek, Daniel Tihelka, Milan Legát: Is unit selection aware of audible artifacts? 267-271
- Kenji Matsui, Kenta Kimura, Yoshihisa Nakatoh, Yumiko O. Kato: Development of electrolarynx with hands-free prosody control. 273-277
- Trung-Nghia Phung, Chi Mai Luong, Masato Akagi: A hybrid TTS between unit selection and HMM-based TTS under limited data conditions. 279-284
- Antti Suni, Daniel Aalto, Tuomo Raitio, Paavo Alku, Martti Vainio: Wavelets for intonation modeling in HMM speech synthesis. 285-290
- B. Ramani, S. Lilly Christina, Rachel G. Anushiya, V. Sherlin Solomi, Mahesh Kumar Nandwana, Anusha Prakash, S. Aswin Shanmugam, Raghava Krishnan, S. Kishore Prahalad, K. Samudravijaya, P. Vijayalakshmi, T. Nagarajan, Hema A. Murthy: A common attribute based unified HTS framework for speech synthesis in Indian languages. 291-296
- Takenori Yoshimura, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda: Cross-lingual speaker adaptation based on factor analysis using bilingual speech data for HMM-based speech synthesis. 297-302
- Yi-Chin Huang, Chung-Hsien Wu, Shih-Lun Lin: Residual compensation based on articulatory feature-based phone clustering for hybrid Mandarin speech synthesis. 303-307
Keynote Papers
- Heiga Zen: Deep learning in speech synthesis. 309
- Nigel Ward: Prosodic patterns in dialog. 311-312
- Xavier Serra: Singing voice synthesis in the context of music technology research. 313