Yasuhiro Oikawa
2020 – today
- 2024
  - [j12] Izumi Tsunokuni, Gen Sato, Yusuke Ikeda, Yasuhiro Oikawa: Spatial Extrapolation of Early Room Impulse Responses with Noise-Robust Physics-Informed Neural Network. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 107(9): 1556-1560 (2024)
  - [j11] Tomoro Tanaka, Kohei Yatabe, Yasuhiro Oikawa: PHAIN: Audio Inpainting via Phase-Aware Optimization With Instantaneous Frequency. IEEE ACM Trans. Audio Speech Lang. Process. 32: 4471-4485 (2024)
  - [c51] Aki Kishimoto, Yasuhiro Oikawa: The Relationship Between Sound of VR Concert and Motion Activity of Audience. HCI (58) 2024: 237-244
  - [c50] Rikuto Ito, Yasuhiro Oikawa, Kenji Ishikawa: Tomographic Reconstruction of Sound Field From Optical Projections Using Physics-Informed Neural Networks. MLSP 2024: 1-6
- 2023
  - [j10] Yoshiki Masuyama, Kohei Yatabe, Kento Nagatomo, Yasuhiro Oikawa: Online Phase Reconstruction via DNN-Based Phase Differences Estimation. IEEE ACM Trans. Audio Speech Lang. Process. 31: 163-176 (2023)
  - [c49] Rikuto Ito, Natsuki Akaishi, Kohei Yatabe, Yasuhiro Oikawa: On-Line Chord Recognition Using FifthNet with Synchrosqueezing Transform. EUSIPCO 2023: 141-145
  - [c48] Natsuki Akaishi, Kohei Yatabe, Yasuhiro Oikawa: Improving Phase-Vocoder-Based Time Stretching by Time-Directional Spectrogram Squeezing. ICASSP 2023: 1-5
  - [c47] Tomoro Tanaka, Kohei Yatabe, Yasuhiro Oikawa: UPGLADE: Unplugged Plug-and-Play Audio Declipper Based on Consensus Equilibrium of DNN and Sparse Optimization. ICASSP 2023: 1-5
  - [c46] Ayame Uchida, Izumi Tsunokuni, Yusuke Ikeda, Yasuhiro Oikawa: Mixed Reality Visualization of Room Impulse Response Map using Room Geometry and Physical Model of Sound Propagation. SIGGRAPH Posters 2023: 20:1-20:2
  - [c45] Masahiko Goto, Yasuhiro Oikawa, Atsuto Inoue, Wataru Teraoka, Takahiro Sato, Yasuyuki Iwane, Masahito Kobayashi: Utilizing LiDAR Data for 3D Sound Source Localization. SIGGRAPH Posters 2023: 38:1-38:2
- 2022
  - [j9] Tsubasa Kusano, Kohei Yatabe, Yasuhiro Oikawa: Window Functions With Minimum-Sidelobe Derivatives for Computing Instantaneous Frequency. IEEE Access 10: 32075-32092 (2022)
  - [c44] Kento Nagatomo, Masahiro Yasuda, Kohei Yatabe, Shoichiro Saito, Yasuhiro Oikawa: Wearable SELD Dataset: Dataset for Sound Event Localization and Detection Using Wearable Devices Around Head. ICASSP 2022: 156-160
  - [c43] Natsuki Akaishi, Kohei Yatabe, Yasuhiro Oikawa: Harmonic and Percussive Sound Separation Based on Mixed Partial Derivative of Phase Spectrogram. ICASSP 2022: 301-305
  - [c42] Tomoro Tanaka, Kohei Yatabe, Masahiro Yasuda, Yasuhiro Oikawa: APPLADE: Adjustable Plug-and-Play Audio Declipper Combining DNN with Sparse Optimization. ICASSP 2022: 1011-1015
  - [c41] Tomoki Kobayashi, Tomoro Tanaka, Kohei Yatabe, Yasuhiro Oikawa: Acoustic Application of Phase Reconstruction Algorithms in Optics. ICASSP 2022: 6212-6216
  - [i12] Tomoro Tanaka, Kohei Yatabe, Masahiro Yasuda, Yasuhiro Oikawa: APPLADE: Adjustable Plug-and-play Audio Declipper Combining DNN with Sparse Optimization. CoRR abs/2202.08028 (2022)
  - [i11] Kento Nagatomo, Masahiro Yasuda, Kohei Yatabe, Shoichiro Saito, Yasuhiro Oikawa: Wearable SELD dataset: Dataset for sound event localization and detection using wearable devices around head. CoRR abs/2202.08458 (2022)
  - [i10] Yoshiki Masuyama, Kohei Yatabe, Kento Nagatomo, Yasuhiro Oikawa: Online Phase Reconstruction via DNN-based Phase Differences Estimation. CoRR abs/2211.08246 (2022)
- 2021
  - [j8] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Deep Griffin-Lim Iteration: Trainable Iterative Phase Reconstruction Using Neural Network. IEEE J. Sel. Top. Signal Process. 15(1): 37-50 (2021)
  - [j7] Ryo Iijima, Shota Minami, Yunao Zhou, Tatsuya Takehisa, Takeshi Takahashi, Yasuhiro Oikawa, Tatsuya Mori: Audio Hotspot Attack: An Attack on Voice Assistance Systems Using Directional Sound Beams and its Feasibility. IEEE Trans. Emerg. Top. Comput. 9(4): 2004-2018 (2021)
  - [c40] Tomoro Tanaka, Kohei Yatabe, Yasuhiro Oikawa: Phase-aware Audio Inpainting Based on Instantaneous Frequency. APSIPA ASC 2021: 254-258
  - [c39] Yoshiki Masuyama, Tomoro Tanaka, Kohei Yatabe, Tsubasa Kusano, Yasuhiro Oikawa: Simultaneous Declipping and Beamforming via Alternating Direction Method of Multipliers. EUSIPCO 2021: 316-320
  - [c38] Tsubasa Kusano, Kohei Yatabe, Yasuhiro Oikawa: Sparse Time-Frequency Representation Via Atomic Norm Minimization. ICASSP 2021: 5075-5079
- 2020
  - [j6] Yoshiki Masuyama, Kohei Yatabe, Kento Nagatomo, Yasuhiro Oikawa: Joint Amplitude and Phase Refinement for Monaural Source Separation. IEEE Signal Process. Lett. 27: 1939-1943 (2020)
  - [c37] Yukiko Okawa, Yasuaki Watanabe, Yusuke Ikeda, Yuta Kataoka, Yasuhiro Oikawa, Naotoshi Osaka: Visualization System of Measured and Simulated Sound Intensities with Mixed Reality. GCCE 2020: 44-48
  - [c36] Kakeru Kurokawa, Izumi Tsunokuni, Yusuke Ikeda, Naotoshi Osaka, Yasuhiro Oikawa: Sound Localization Accuracy in 2.5 Dimensional Local Sound Field Synthesis. GCCE 2020: 49-53
  - [c35] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Phase Reconstruction Based On Recurrent Phase Unwrapping With Deep Neural Networks. ICASSP 2020: 826-830
  - [c34] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Real-Time Speech Enhancement Using Equilibriated RNN. ICASSP 2020: 851-855
  - [c33] Tsubasa Kusano, Kohei Yatabe, Yasuhiro Oikawa: Maximally Energy-Concentrated Differential Window for Phase-Aware Signal Processing Using Instantaneous Frequency. ICASSP 2020: 5825-5829
  - [c32] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Invertible DNN-Based Nonlinear Time-Frequency Transform for Speech Enhancement. ICASSP 2020: 6644-6648
  - [c31] Yoshiki Masuyama, Yoshiaki Bando, Kohei Yatabe, Yoko Sasaki, Masaki Onishi, Yasuhiro Oikawa: Self-supervised Neural Audio-Visual Sound Source Localization via Probabilistic Spatial Modeling. IROS 2020: 4848-4854
  - [c30] Yuta Kataoka, Yasuhiro Oikawa, Yasuaki Watanabe, Yusuke Ikeda: Mixed Reality Visualization of Instantaneous Sound Intensity with Moving 4-ch Microphone Array. SIGGRAPH Posters 2020: 57:1-57:2
  - [c29] Yasuaki Watanabe, Yusuke Ikeda, Yuta Kataoka, Yasuhiro Oikawa, Naotoshi Osaka: Visualization of Spatial Impulse Responses using Mixed Reality and Moving Microphone. SIGGRAPH Posters 2020: 58:1-58:2
  - [i9] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Phase reconstruction based on recurrent phase unwrapping with deep neural networks. CoRR abs/2002.05832 (2020)
  - [i8] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Real-time speech enhancement using equilibriated RNN. CoRR abs/2002.05843 (2020)
  - [i7] Yoshiki Masuyama, Yoshiaki Bando, Kohei Yatabe, Yoko Sasaki, Masaki Onishi, Yasuhiro Oikawa: Self-supervised Neural Audio-Visual Sound Source Localization via Probabilistic Spatial Modeling. CoRR abs/2007.13976 (2020)
2010 – 2019
- 2019
  - [j5] Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Griffin-Lim Like Phase Recovery via Alternating Direction Method of Multipliers. IEEE Signal Process. Lett. 26(1): 184-188 (2019)
  - [c28] Kakeru Kurokawa, Izumi Tsunokuni, Yusuke Ikeda, Naotoshi Osaka, Yasuhiro Oikawa: Effect of Switching Reproduction Area in Dynamic Local Sound Field Synthesis. GCCE 2019: 335-339
  - [c27] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Deep Griffin-Lim Iteration. ICASSP 2019: 61-65
  - [c26] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Data-driven Design of Perfect Reconstruction Filterbank for DNN-based Sound Source Enhancement. ICASSP 2019: 596-600
  - [c25] Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Low-rankness of Complex-valued Spectrogram and Its Application to Phase-aware Audio Processing. ICASSP 2019: 855-859
  - [c24] Risako Tanigawa, Kohei Yatabe, Yasuhiro Oikawa: Guided-spatio-temporal Filtering for Extracting Sound from Optically Measured Images Containing Occluding Objects. ICASSP 2019: 945-949
  - [c23] Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Phase-aware Harmonic/percussive Source Separation via Convex Optimization. ICASSP 2019: 985-989
  - [i6] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Deep Griffin-Lim Iteration. CoRR abs/1903.03971 (2019)
  - [i5] Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Phase-aware Harmonic/Percussive Source Separation via Convex Optimization. CoRR abs/1903.05600 (2019)
  - [i4] Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Low-rankness of Complex-valued Spectrogram and Its Application to Phase-aware Audio Processing. CoRR abs/1903.05603 (2019)
  - [i3] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Data-driven design of perfect reconstruction filterbank for DNN-based sound source enhancement. CoRR abs/1903.08876 (2019)
  - [i2] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Invertible DNN-based nonlinear time-frequency transform for speech enhancement. CoRR abs/1911.10764 (2019)
- 2018
  - [c22] Ryo Iijima, Shota Minami, Yunao Zhou, Tatsuya Takehisa, Takeshi Takahashi, Yasuhiro Oikawa, Tatsuya Mori: Audio Hotspot Attack: An Attack on Voice Assistance Systems Using Directional Sound Beams. CCS 2018: 2222-2224
  - [c21] Daiki Takeuchi, Kohei Yatabe, Yasuhiro Oikawa: Realizing Directional Sound Source in FDTD Method by Estimating Initial Value. ICASSP 2018: 461-465
  - [c20] Shota Minami, Jun Kuroda, Yasuhiro Oikawa: Individual Difference of Ultrasonic Transducers for Parametric Array Loudspeaker. ICASSP 2018: 486-490
  - [c19] Kenji Kobayashi, Daiki Takeuchi, Mio Iwamoto, Kohei Yatabe, Yasuhiro Oikawa: Parametric Approximation of Piano Sound Based on Kautz Model with Sparse Linear Prediction. ICASSP 2018: 626-630
  - [c18] Yoshiki Masuyama, Tsubasa Kusano, Kohei Yatabe, Yasuhiro Oikawa: Modal Decomposition of Musical Instrument Sound Via Alternating Direction Method of Multipliers. ICASSP 2018: 631-635
  - [c17] Kohei Yatabe, Yasuhiro Oikawa: Phase Corrected Total Variation for Audio Signals. ICASSP 2018: 656-660
  - [c16] Tsubasa Kusano, Kohei Yatabe, Yasuhiro Oikawa: Envelope Estimation by Tangentially Constrained Spline. ICASSP 2018: 4374-4378
  - [c15] Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Model-Based Phase Recovery of Spectrograms via Optimization on Riemannian Manifolds. IWAENC 2018: 126-130
  - [c14] Tomoya Tachikawa, Kohei Yatabe, Yasuhiro Oikawa: Underdetermined Source Separation with Simultaneous DOA Estimation Without Initial Value Dependency. IWAENC 2018: 161-165
  - [c13] Atsushi Hiruma, Kohei Yatabe, Yasuhiro Oikawa: Separating Stereo Audio Mixture Having No Phase Difference by Convex Clustering and Disjointness Map. IWAENC 2018: 266-270
  - [c12] Kohei Yatabe, Yoshiki Masuyama, Yasuhiro Oikawa: Rectified Linear Unit Can Assist Griffin-Lim Phase Recovery. IWAENC 2018: 555-559
  - [c11] Yuta Kataoka, Wataru Teraoka, Yasuhiro Oikawa, Yusuke Ikeda: Real-time measurement and display system of 3D sound intensity map using optical see-through head mounted display. SIGGRAPH ASIA Posters 2018: 71:1-71:2
  - [i1] Tsubasa Kusano, Yoshiki Masuyama, Kohei Yatabe, Yasuhiro Oikawa: Designing nearly tight window for improving time-frequency masking. CoRR abs/1811.08783 (2018)
- 2017
  - [c10] Yuji Koyano, Kohei Yatabe, Yasuhiro Oikawa: Infinite-dimensional SVD for analyzing microphone array. ICASSP 2017: 176-180
  - [c9] Tomoya Tachikawa, Kohei Yatabe, Yasuhiro Oikawa: Coherence-adjusted monopole dictionary and convex clustering for 3D localization of mixed near-field and far-field sources. ICASSP 2017: 3191-3195
  - [c8] Atsuto Inoue, Kohei Yatabe, Yasuhiro Oikawa, Yusuke Ikeda: Visualization of 3D sound field using see-through head mounted display. SIGGRAPH Posters 2017: 34:1-34:2
- 2016
  - [j4] Atsuto Inoue, Yusuke Ikeda, Kohei Yatabe, Yasuhiro Oikawa: Three-dimensional sound-field visualization system using head mounted display and stereo camera. Proc. Meet. Acoust. 29(1) (2016)
  - [j3] Kenji Ishikawa, Kohei Yatabe, Yusuke Ikeda, Yasuhiro Oikawa, Takashi Onuma, Hayato Niwa, Minoru Yoshii: Optical sensing of sound fields: non-contact, quantitative, and single-shot imaging of sound using high-speed polarization camera. Proc. Meet. Acoust. 29(1) (2016)
  - [j2] Ryouzi Saitou, Yusuke Ikeda, Yasuhiro Oikawa: Three-dimensional noise mapping system with aerial blimp robot. Proc. Meet. Acoust. 29(1) (2016)
  - [j1] Kohei Yatabe, Kenji Ishikawa, Yasuhiro Oikawa: Signal processing for optical sound field measurement and visualization. Proc. Meet. Acoust. 29(1) (2016)
  - [c7] Yuji Koyano, Kohei Yatabe, Yusuke Ikeda, Yasuhiro Oikawa: Physical-model based efficient data representation for many-channel microphone array. ICASSP 2016: 370-374
- 2015
  - [c6] Kohei Yatabe, Yasuhiro Oikawa: Optically visualized sound field reconstruction based on sparse selection of point sound sources. ICASSP 2015: 504-508
  - [c5] Nachanant Chitanont, Keita Yaginuma, Kohei Yatabe, Yasuhiro Oikawa: Visualization of sound field by means of Schlieren method with spatio-temporal filtering. ICASSP 2015: 509-513
- 2014
  - [c4] Kohei Yatabe, Yasuhiro Oikawa: PDE-based interpolation method for optically visualized sound field. ICASSP 2014: 4738-4742
- 2012
  - [c3] Mariko Akutsu, Yasuhiro Oikawa: Extraction of sound field information from flowing dust captured with high-speed camera. ICASSP 2012: 545-548
  - [c2] Tomoyasu Komori, Atsushi Imai, Nobumasa Seiyama, Reiko Takou, Tohru Takagi, Yasuhiro Oikawa: Development of a Broadcast Sound Receiver for Elderly Persons. ICCHP (1) 2012: 681-688
2000 – 2009
- 2005
  - [c1] Yasuhiro Oikawa, Makoto Goto, Yusuke Ikeda, Toshikazu Takizawa, Yoshio Yamasaki: Sound field measurements based on reconstruction from laser projections. ICASSP (4) 2005: 661-664