International Journal of Speech Technology, Volume 27
Volume 27, Number 1, March 2024
- Tebbi Hanane, Maamar Hamadouche: Multi-agent based Arabic speech synthesis. 1-17
- Shrikala Deshmukh, Preeti Gupta: Application of probabilistic neural network for speech emotion recognition. 19-28
- Aparna Vyakaranam, Tomas Maul, Bavani Ramayah: A review on speech emotion recognition for late deafened educators in online education. 29-52
- Latifa Iben Nasr, Abir Masmoudi, Lamia Hadrich Belguith: Survey on Arabic speech emotion recognition. 53-68
- Kimiya Nourali, Elham Dolkhani: Scene text visual question answering by using YOLO and STN. 69-76
- B. G. Nagaraja, Thimmaraja Yadava G., Mohamed Anees: Advancements in encoded speech data by background noise suppression under uncontrolled environment. 77-84
- O. Homa Kesav, G. K. Rajini: Correction to: Automated detection system for texture feature based classification on different image datasets using S-transform. 85
- Mohit Dua, Bhavesh Bhagat, Shelza Dua: An amalgamation of integrated features with DeepSpeech2 architecture and improved spell corrector for improving Gujarati language ASR system. 87-99
- Purva Barche, Krishna Gurugubelli, Anil Kumar Vuppala: Stockwell-Transform based feature representation for detection and assessment of voice disorders. 101-119
- Merouane Bouzid, Nacèra Meziane, Salah Eddine Cheraitia: Multi-coder vector quantizer for transparent coding of wideband speech ISF parameters. 121-132
- Mohit Dua, Bhavesh Bhagat, Shelza Dua, Nidhi Chakravarty: A review on Gujarati language based automatic speech recognition (ASR) systems. 133-156
- Assal A. M. Alqudah, Mohammad A. M. Alshraideh, Mohammad A. M. Abushariah, Ahmad A. S. Sharieh: Modern Standard Arabic speech disorders corpus for digital speech processing applications. 157-170
- Albert Cryssiover, Amalia Zahra: Speech recognition model design for Sundanese language using WAV2VEC 2.0. 171-177
- Marek B. Trawicki: Automatic gender recognition and speaker identification of Rhesus Macaques (Macaca mulatta) using hidden Markov models (HMMs). 179-186
- P. Ashwini, S. H. Bharathi: Continuous feature learning representation to XGBoost classifier on the aggregation of discriminative Features using DenseNet-121 architecture and ResNet 18 architectures towards Apraxia Recognition in the Child Speech Therapy. 187-199
- Chengyong Yang, Xiukang Yu, Sheng Huang: Conditional Denoising Diffusion Implicit Model for Speech Enhancement. 201-209
- Omayma Mahmoudi, Mouncef Filali Bouami, Mohamed Benchat: Speech recognition based on the transformer's multi-head attention in Arabic. 211-223
- Nidhi Chakravarty, Mohit Dua: Feature extraction using GTCC spectrogram and ResNet50 based classification for audio spoof detection. 225-237
- N. Aishwarya, Kanwaljeet Kaur, Karthik Seemakurthy: A computationally efficient speech emotion recognition system employing machine learning classifiers and ensemble learning. 239-254
- Antor Mahamudul Hashan, Chaganov Roman Dmitrievich, Melnikov Alexander Valerievich, Dorokh Danila Vasilyevich, Khlebnikov Nikolai Alexandrovich, Boris Andreevich Bredikhin: Hyperkinetic Dysarthria voice abnormalities: a neural network solution for text translation. 255-265
- Haidy H. Mustafa, Nagy Ramadan Darwish, Hesham A. Hefny: Automatic Speech Emotion Recognition: a Systematic Literature Review. 267-285
- Hossam Boulal, Mohamed Hamidi, Mustapha Abarkan, Jamal Barkani: Amazigh CNN speech recognition system based on Mel spectrogram feature extraction method. 287-296
- Retraction Note: Computer vision for facial analysis using human-computer interaction models. 297
Volume 27, Number 2, June 2024
- Shilin Wang, Haixin Guan, Shuang Wei, Yanhua Long: Improving low-complexity and real-time DeepFilterNet2 for personalized speech enhancement. 299-306
- V. Karthikeyan, S. Suja Priyadharsini, K. Balamurugan, Manickam Ramasamy: Retraction Note: Speaker identification using hybrid neural network support vector machine classifier. 307
- B. G. Nagaraja, Thimmaraja Yadava G., Prashanth Kabballi, C. M. Patil: VAD system under uncontrolled environment: A solution for strengthening the noise robustness using MMSE-SPZC. 309-317
- Yongyan Yang: Feature fusion: research on emotion recognition in English speech. 319-327
- Abdelkbir Ouisaadane, Said Safi, Miloud Frikel: An experiment of Moroccan dialect speech recognition in noisy environments using PocketSphinx. 329-339
- Mohammed Hamzah Abed, Dávid Sztahó: Effect of identical twins on deep speaker embeddings based forensic voice comparison. 341-351
- Arijul Haque, Krothapalli Sreenivasa Rao: Speech emotion recognition with transfer learning and multi-condition training for noisy environments. 353-365
- Youyuan Zhang, Sashank Gondala, Thiago Fraga-Silva, Christophe Van Gysel: Server-side rescoring of spoken entity-centric knowledge queries for virtual assistants. 367-375
- Qian Shen, Mengxi Guo, YiDa Huang, Jianfen Ma: Attentional multi-feature fusion for spoofing-aware speaker verification. 377-387
- Maysa Khalil, Mohammad Azzeh: Fake news detection models using the largest social media ground-truth dataset (TruthSeeker). 389-404
- Mohamed Abdelkarim Remmide, Fatima Boumahdi, Imane Rebeh Ammar Aouchiche, Amina Guendouz, Narhimene Boustia: A robust approach to authorship verification using siamese deep learning: application in phishing email detection. 405-412
- Meriem Lounis, Bilal Dendani, Halima Bahi: Anomaly detection with a variational autoencoder for Arabic mispronunciation detection. 413-424
- Shaik Mulla Shabber, Mohan Bansal: Temporal feature-based approaches for enhancing phoneme boundary detection and masking in speech. 425-436
- Abderrahmane Louni, Leila Rizoug, Abderrahim Belmadani: A quantal model for Algerian vowel features identification using formants and subglottal resonances. 437-445
- Joan L. Imbwaga, Nagaratna B. Chittaragi, Shashidhar G. Koolagudi: Automatic hate speech detection in audio using machine learning algorithms. 447-469
- Abdul Malik Abbasi, Bisma Butt, Illahi Bux Gopang, Ahlam Khan, Kiran Naz, Dure Shehwar: Analyzing acoustic patterns of vowel sounds produced by native Rangri speakers. 471-481
- Soumeya Belabbas, Djamel Addou, Sid-Ahmed Selouani: Pathological voice classification system based on CNN-BiLSTM network using speech enhancement and multi-stream approach. 483-502
- Lei Jin: RETRACTED ARTICLE: Research on pronunciation accuracy detection of English Chinese consecutive interpretation in English intelligent speech translation terminal. 503
- Jiejie Cui, Xiang Li, Yang Wang: RETRACTED ARTICLE: Construction of voice access clustering model for online shopping user groups based on electronic communication data mining algorithm. 505
- Hailong Cui, Yu Zhao, Wenchao Dong: RETRACTED ARTICLE: Research on life prediction method of rolling bearing based on deep learning and voice interaction technology. 507
- Dongmei Li: RETRACTED ARTICLE: Speech fault recognition method of music intelligent player based on communication feature analysis. 509
- Jinxuan Wang, Zhanjun Tang, Peng Lu: RETRACTED ARTICLE: Ice detection and voice alarm of wind turbine blades based on belief network. 511
- Xuyang Wang, Shijian Liu: RETRACTED ARTICLE: Accurate recognition of heterogeneous features in super resolution image visualization based on voice remote control system. 513
- S. Shivaprasad, Sadanandam Manchala: RETRACTED ARTICLE: Dialect recognition from Telugu speech utterances using spectral and prosodic features. 515
- Kusum Yadav, Shamik Tiwari, Anurag Jain, Alaa Kamal Yousif Dafhalla: RETRACTED ARTICLE: Deep learning based cardiovascular disease diagnosis system from heartbeat sound. 517
- Sampath Dakshina Murthy Achanta, Thangavel Karthikeyan, R. Vinothkanna: RETRACTED ARTICLE: Wearable sensor based acoustic gait analysis using phase transition-based optimization algorithm on IoT. 519