Eita Nakamura
2020 – today
- 2024
- [c43] Fabian C. Moss, Eita Nakamura: Modeling the Evolution of Harmony in Popular Music from Different Cultural Contexts. CHR 2024: 137-152
- [i14] Francesco Foscarin, Emmanouil Karystinaios, Eita Nakamura, Gerhard Widmer: Cluster and Separate: a GNN Approach to Voice and Staff Prediction for Score Engraving. CoRR abs/2407.21030 (2024)
- 2023
- [c42] Daichi Kamakura, Eita Nakamura, Kazuyoshi Yoshii: CTC2: End-to-End Drum Transcription Based on Connectionist Temporal Classification With Constant Tempo Constraint. APSIPA ASC 2023: 158-164
- [c41] Eita Nakamura, Yasuyuki Saito: Evolutionary Analysis and Cultural Transmission Models of Color Style Distributions in Painting Arts. APSIPA ASC 2023: 506-513
- [c40] Tengyu Deng, Eita Nakamura, Kazuyoshi Yoshii: Audio-to-Score Singing Transcription Based on Joint Estimation of Pitches, Onsets, and Metrical Positions With Tatum-Level CTC Loss. APSIPA ASC 2023: 583-590
- [c39] Norihiro Kato, Eita Nakamura, Kyoko Mine, Orie Doeda, Masanao Yamada: Computational Analysis of Audio Recordings of Piano Performance for Automatic Evaluation. EC-TEL 2023: 586-592
- [c38] Moyu Terao, Eita Nakamura, Kazuyoshi Yoshii: Neural Band-to-Piano Score Arrangement with Stepless Difficulty Control. ICASSP 2023: 1-5
- 2022
- [c37] Tengyu Deng, Eita Nakamura, Kazuyoshi Yoshii: End-to-End Lyrics Transcription Informed by Pitch and Onset Estimation. ISMIR 2022: 633-639
- [c36] Florian Thalmann, Eita Nakamura, Kazuyoshi Yoshii: Tracking the Evolution of a Band's Live Performances over Decades. ISMIR 2022: 850-857
- [c35] Pedro Ramoneda, Dasaem Jeong, Eita Nakamura, Xavier Serra, Marius Miron: Automatic Piano Fingering from Partially Annotated Scores using Autoregressive Neural Networks. ACM Multimedia 2022: 6502-6510
- 2021
- [j11] Kentaro Shibata, Eita Nakamura, Kazuyoshi Yoshii: Non-local musical statistics as guides for audio-to-score piano transcription. Inf. Sci. 566: 262-280 (2021)
- [j10] Eita Nakamura, Kazuyoshi Yoshii: Musical rhythm transcription based on Bayesian piece-specific score models capturing repetitions. Inf. Sci. 572: 482-500 (2021)
- [c34] Yuki Hiramatsu, Go Shibata, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii: Statistical Correction of Transcribed Melody Notes Based on Probabilistic Integration of a Music Language Model and a Transcription Error Model. ICASSP 2021: 256-260
- [c33] Yuki Hiramatsu, Eita Nakamura, Kazuyoshi Yoshii: Joint Estimation of Note Values and Voices for Audio-to-Score Piano Transcription. ISMIR 2021: 278-284
- 2020
- [j9] Eita Nakamura, Yasuyuki Saito, Kazuyoshi Yoshii: Statistical learning and estimation of piano fingering. Inf. Sci. 517: 68-85 (2020)
- [j8] Hiroaki Tsushima, Eita Nakamura, Kazuyoshi Yoshii: Bayesian Melody Harmonization Based on a Tree-Structured Generative Model of Chord Sequences and Melodies. IEEE ACM Trans. Audio Speech Lang. Process. 28: 1644-1655 (2020)
- [j7] Ryo Nishikimi, Eita Nakamura, Masataka Goto, Katsutoshi Itoyama, Kazuyoshi Yoshii: Bayesian Singing Transcription Based on a Hierarchical Generative Model of Keys, Musical Notes, and F0 Trajectories. IEEE ACM Trans. Audio Speech Lang. Process. 28: 1678-1691 (2020)
- [j6] Yiming Wu, Tristan Carsault, Eita Nakamura, Kazuyoshi Yoshii: Semi-Supervised Neural Chord Estimation Based on a Variational Autoencoder With Latent Chord Labels and Features. IEEE ACM Trans. Audio Speech Lang. Process. 28: 2956-2966 (2020)
- [c32] Ryoto Ishizuka, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii: Tatum-Level Drum Transcription Based on a Convolutional Recurrent Neural Network with Language Model-Based Regularized Training. APSIPA 2020: 359-364
- [c31] Yiming Wu, Eita Nakamura, Kazuyoshi Yoshii: A Variational Autoencoder for Joint Chord and Key Estimation from Audio Chromagrams. APSIPA 2020: 500-506
- [c30] Yasuyuki Saito, Yasuji Sakai, Yuu Igarashi, Suguru Agata, Eita Nakamura, Shigeki Sagayama: Music Recreation in Nursing Home using Automatic Music Accompaniment System and Score of VLN. LifeTech 2020: 127-131
- [i13] Yiming Wu, Tristan Carsault, Eita Nakamura, Kazuyoshi Yoshii: Semi-supervised Neural Chord Estimation Based on a Variational Autoencoder with Discrete Labels and Continuous Textures of Chords. CoRR abs/2005.07091 (2020)
- [i12] Kentaro Shibata, Eita Nakamura, Kazuyoshi Yoshii: Non-Local Musical Statistics as Guides for Audio-to-Score Piano Transcription. CoRR abs/2008.12710 (2020)
- [i11] Ryoto Ishizuka, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii: Tatum-Level Drum Transcription Based on a Convolutional Recurrent Neural Network with Language Model-Based Regularized Training. CoRR abs/2010.03749 (2020)
2010 – 2019
- 2019
- [c29] Yui Uehara, Eita Nakamura, Satoshi Tojo: Chord Function Identification with Modulation Detection Based on HMM. CMMR 2019: 166-178
- [c28] Ryo Nishikimi, Eita Nakamura, Satoru Fukayama, Masataka Goto, Kazuyoshi Yoshii: Automatic Singing Transcription Based on Encoder-decoder Recurrent Neural Networks with a Weakly-supervised Attention Mechanism. ICASSP 2019: 161-165
- [c27] Andrew McLeod, Eita Nakamura, Kazuyoshi Yoshii: Improved Metrical Alignment of Midi Performance Based on a Repetition-aware Online-adapted Grammar. ICASSP 2019: 186-190
- [c26] Eita Nakamura, Kentaro Shibata, Ryo Nishikimi, Kazuyoshi Yoshii: Unsupervised Melody Style Conversion. ICASSP 2019: 196-200
- [c25] Kentaro Shibata, Ryo Nishikimi, Satoru Fukayama, Masataka Goto, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Joint Transcription of Lead, Bass, and Rhythm Guitars Based on a Factorial Hidden Semi-Markov Model. ICASSP 2019: 236-240
- [c24] Shun Ueda, Kentaro Shibata, Yusuke Wada, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii: Bayesian Drum Transcription Based on Nonnegative Matrix Factor Decomposition with a Deep Score Prior. ICASSP 2019: 456-460
- [c23] Go Shibata, Ryo Nishikimi, Eita Nakamura, Kazuyoshi Yoshii: Statistical Music Structure Analysis Based on a Homogeneity-, Repetitiveness-, and Regularity-Aware Hierarchical Hidden Semi-Markov Model. ISMIR 2019: 268-275
- [c22] Tristan Carsault, Andrew McLeod, Philippe Esling, Jérôme Nika, Eita Nakamura, Kazuyoshi Yoshii: Multi-Step Chord Sequence Prediction Based On Aggregated Multi-Scale Encoder-Decoder Networks. MLSP 2019: 1-6
- [c21] Ryo Nishikimi, Eita Nakamura, Masataka Goto, Kazuyoshi Yoshii: End-To-End Melody Note Transcription Based on a Beat-Synchronous Attention Mechanism. WASPAA 2019: 26-30
- [i10] Eita Nakamura, Yasuyuki Saito, Kazuyoshi Yoshii: Statistical Learning and Estimation of Piano Fingering. CoRR abs/1904.10237 (2019)
- [i9] Eita Nakamura, Kazuyoshi Yoshii: Music Transcription Based on Bayesian Piece-Specific Score Models Capturing Repetitions. CoRR abs/1908.06969 (2019)
- [i8] Tristan Carsault, Andrew McLeod, Philippe Esling, Jérôme Nika, Eita Nakamura, Kazuyoshi Yoshii: Multi-Step Chord Sequence Prediction Based on Aggregated Multi-Scale Encoder-Decoder Network. CoRR abs/1911.04972 (2019)
- 2018
- [j5] Kousuke Itakura, Yoshiaki Bando, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara: Bayesian Multichannel Audio Source Separation Based on Integrated Source and Spatial Models. IEEE ACM Trans. Audio Speech Lang. Process. 26(4): 831-846 (2018)
- [c20] Yusuke Wada, Ryo Nishikimi, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Sequential Generation of Singing F0 Contours from Musical Note Sequences Based on WaveNet. APSIPA 2018: 983-989
- [c19] Eita Nakamura, Ryo Nishikimi, Simon Dixon, Kazuyoshi Yoshii: Probabilistic Sequential Patterns for Singing Transcription. APSIPA 2018: 1905-1912
- [c18] Kazuyoshi Yoshii, Koichi Kitamura, Yoshiaki Bando, Eita Nakamura, Tatsuya Kawahara: Independent Low-Rank Tensor Analysis for Audio Source Separation. EUSIPCO 2018: 1657-1661
- [c17] Eita Nakamura, Emmanouil Benetos, Kazuyoshi Yoshii, Simon Dixon: Towards Complete Polyphonic Music Transcription: Integrating Multi-Pitch Detection and Rhythm Quantization. ICASSP 2018: 101-105
- [c16] Hiroaki Tsushima, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Interactive Arrangement of Chords and Melodies Based on a Tree-Structured Generative Model. ISMIR 2018: 145-151
- [i7] Eita Nakamura, Kazuyoshi Yoshii: Statistical Piano Reduction Controlling Performance Difficulty. CoRR abs/1808.05006 (2018)
- 2017
- [j4] Misato Ohkita, Yoshiaki Bando, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Audio-Visual Beat Tracking Based on a State-Space Model for a Robot Dancer Performing with a Human Dancer. J. Robotics Mechatronics 29(1): 125-136 (2017)
- [j3] Eita Nakamura, Kazuyoshi Yoshii, Shigeki Sagayama: Rhythm Transcription of Polyphonic Piano Music Based on Merged-Output HMM for Multiple Voices. IEEE ACM Trans. Audio Speech Lang. Process. 25(4): 794-806 (2017)
- [j2] Eita Nakamura, Kazuyoshi Yoshii, Simon Dixon: Note Value Recognition for Piano Transcription Using Markov Random Fields. IEEE ACM Trans. Audio Speech Lang. Process. 25(9): 1846-1858 (2017)
- [c15] Kousuke Itakura, Yoshiaki Bando, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii, Tatsuya Kawahara: Bayesian multichannel nonnegative matrix factorization for audio source separation and localization. ICASSP 2017: 551-555
- [c14] Eita Nakamura, Kazuyoshi Yoshii, Haruhiro Katayose: Performance Error Detection and Post-Processing for Fast and Accurate Symbolic Music Alignment. ISMIR 2017: 347-353
- [c13] Ryo Nishikimi, Eita Nakamura, Masataka Goto, Katsutoshi Itoyama, Kazuyoshi Yoshii: Scale- and Rhythm-Aware Musical Note Estimation for Vocal F0 Trajectories Based on a Semi-Tatum-Synchronous Hierarchical Hidden Semi-Markov Model. ISMIR 2017: 376-382
- [c12] Hiroaki Tsushima, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Function- and Rhythm-Aware Melody Harmonization Based on Tree-Structured Parsing and Split-Merge Sampling of Chord Sequences. ISMIR 2017: 502-508
- [c11] Kazuyoshi Yoshii, Eita Nakamura, Katsutoshi Itoyama, Masataka Goto: Infinite probabilistic latent component analysis for audio source separation. MLSP 2017: 1-6
- [i6] Eita Nakamura, Kazuyoshi Yoshii, Shigeki Sagayama: Rhythm Transcription of Polyphonic Piano Music Based on Merged-Output HMM for Multiple Voices. CoRR abs/1701.08343 (2017)
- [i5] Eita Nakamura, Kazuyoshi Yoshii, Simon Dixon: Note Value Recognition for Rhythm Transcription Using a Markov Random Field Model for Musical Scores and Performances of Piano Music. CoRR abs/1703.08144 (2017)
- [i4] Hiroaki Tsushima, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Generative Statistical Models with Self-Emergent Grammar of Chord Sequences. CoRR abs/1708.02255 (2017)
- 2016
- [j1] Tomohiko Nakamura, Eita Nakamura, Shigeki Sagayama: Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips. IEEE ACM Trans. Audio Speech Lang. Process. 24(2): 329-339 (2016)
- [c10] Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Rhythm transcription of MIDI performances based on hierarchical Bayesian modelling of repetition and modification of musical note patterns. EUSIPCO 2016: 1946-1950
- [c9] Kousuke Itakura, Yoshiaki Bando, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: A unified Bayesian model of time-frequency clustering and low-rank approximation for multi-channel source separation. EUSIPCO 2016: 2280-2284
- [c8] Eita Nakamura, Masatoshi Hamanaka, Keiji Hirata, Kazuyoshi Yoshii: Tree-structured probabilistic model of monophonic written music based on the generative theory of tonal music. ICASSP 2016: 276-280
- [c7] Yuta Ojima, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: A Hierarchical Bayesian Model of Chords, Pitches, and Spectrograms for Multipitch Analysis. ISMIR 2016: 309-315
- [c6] Ryo Nishikimi, Eita Nakamura, Katsutoshi Itoyama, Kazuyoshi Yoshii: Musical Note Estimation for F0 Trajectories of Singing Voices Based on a Bayesian Semi-Beat-Synchronous HMM. ISMIR 2016: 461-467
- 2015
- [c5] Eita Nakamura, Shigeki Sagayama: Automatic Piano Reduction from Ensemble Scores Based on Merged-Output Hidden Markov Model. ICMC 2015
- [c4] Eita Nakamura, Philippe Cuvillier, Arshia Cont, Nobutaka Ono, Shigeki Sagayama: Autoregressive Hidden Semi-Markov Model of Symbolic Music Performance for Score Following. ISMIR 2015: 392-398
- [c3] Eita Nakamura, Shinji Takaki: Characteristics of Polyphonic Music Style and Markov Model of Pitch-Class Intervals. MCM 2015: 109-114
- [i3] Tomohiko Nakamura, Eita Nakamura, Shigeki Sagayama: Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips. CoRR abs/1512.07748 (2015)
- 2014
- [c2] Eita Nakamura, Nobutaka Ono, Yasuyuki Saito, Shigeki Sagayama: Merged-Output Hidden Markov Model for Score Following of MIDI Performance with Ornaments, Desynchronized Voices, Repeats and Skips. ICMC 2014
- [c1] Eita Nakamura, Nobutaka Ono, Shigeki Sagayama: Merged-Output HMM for Piano Fingering of Both Hands. ISMIR 2014: 531-536
- [i2] Eita Nakamura, Tomohiko Nakamura, Yasuyuki Saito, Nobutaka Ono, Shigeki Sagayama: Outer-Product Hidden Markov Model and Polyphonic MIDI Score Following. CoRR abs/1404.2313 (2014)
- [i1] Eita Nakamura, Nobutaka Ono, Shigeki Sagayama, Kenji Watanabe: A Stochastic Temporal Model of Polyphonic MIDI Performance with Ornaments. CoRR abs/1404.2314 (2014)