SIGGRAPH Asia 2024 Posters: Tokyo, Japan
- Takeo Igarashi, Ruizhen Hu:
SIGGRAPH Asia 2024 Posters, SA 2024, Tokyo, Japan, December 3-6, 2024. ACM 2024, ISBN 979-8-4007-1138-1
Human-Computer Interaction
- Junseo Park, Hanseob Kim, Gerard Jounghyun Kim:
A Study of 3D Character Control Methods: Keyboard, Speech, Hand Gesture, and Mixed Interfaces. 1:1-1:2
- Akira Nakayasu:
ChoreoSurf: Scalable Surface System with 8-DOF SMA Actuators. 2:1-2:2
- Momoka Nakayama, Risako Kawashima, Shintaro Murakami, Yuta Takeuchi, Tatsuya Mori, Dai Takanashi:
A Method for Generating Tactile Sensations from Textual Descriptions Using Generative AI. 3:1-3:2
- Yihao He:
Not Just a Gimmick: A Preliminary Study on Designing Interactive Media Art to Empower Embedded Culture's Practitioner. 4:1-4:2
- Daun Kim, Jin-Woo Jeong:
Designing LLM Response Layouts for XR Workspaces in Vehicles. 5:1-5:2
- Ruofan Liu, Yichen Peng, Takanori Oku, Erwin Wu, Shinichi Furuya, Hideki Koike:
PianoKeystroke-EMG: Piano Hand Muscle Electromyography Estimation from Easily Accessible Piano Keystroke. 6:1-6:2
- Wataru Kawabe, Yusuke Sugano:
A Multimodal LLM-based Assistant for User-Centric Interactive Machine Learning. 7:1-7:2
- Ting-Han Daniel Chen:
A Lava Well of Reflexivity: Exploring Speculative Ambient Media. 8:1-8:2
- Minjae Lee, Jiho Bae, Sang-Min Choi, Suwon Lee:
Finger-Pointing Interface for Human Gesture Recognition Based on Real-Time Geometric Comprehension. 9:1-9:2
- Ryunosuke Ise, Koji Tsukada:
HidEye: Proposal of HMD Interaction Method by Hiding One Eye. 10:1-10:2
- Pei-Hsin Huang, Shang-Ching Liu, Li-Yang Huang, Chuan-Meng Chiu, Jo-Chien Wang, Pu Ching, Hung-Kuo Chu, Min-Chun Hu:
IT3: Immersive Table Tennis Training Based on 3D Reconstruction of Broadcast Video. 11:1-11:2
- Hideaki Nii, Kazutoshi Kashimoto, Shozaburo Shimada:
V-Wire: A Single-Wire System for Simplified Hardware Prototyping and Enhanced Fault Detection in Education. 12:1-12:2
- Takahiro Kusabuka, Yuichi Maki, Kakagu Komazaki, Masafumi Suzuki, Hiroshi Chigira, Takayoshi Mochizuki:
Vibrotactile Invisible Presence: Conveying Remote Presence through Moving Vibrotactile Footstep Cues on a Haptic Floor. 13:1-13:2
- Pranshu Anand, Vishal Bharti, Anmol Srivastava:
Curtain UI: Augmenting Curtains for Tangible Interactions. 14:1-14:2
- Sawa Yoshioka, Shinichi Fukushige, Kohta Seki, Mizuki Kawakami:
An immersive interface for remote collaboration with multiple telepresence robots through digital twin spaces. 15:1-15:2
- Yuki Kubo, Buntarou Shizuki:
Signal2Hand: Sensor Modality Translation from Body-Worn Sensor Signals to Hand-Depth Images. 16:1-16:2
- Mingyang Xu, Yulan Ju, Yunkai Qi, Xiaru Meng, Qing Zhang, Matthias Hoppe, Kouta Minamizawa, Giulia Barbareschi, Kai Kunze:
Affective Wings: Exploring Affectionate Behaviors in Close-Proximity Interactions with Soft Floating Robots. 17:1-17:3
- Wenze Song, Takefumi Hayashi:
Time Light: An Interface for Comparing National Treasure Murals Across Time. 18:1-18:2
- Daniel Oswaldo Lopez Tassara, Naoto Wakatsuki, Keiichi Zempo:
Auditory AR System to Induce Pseudo-Haptic Force Feedback for Lateral Hand Movements Using Spatially Localized Sound Stimuli. 19:1-19:2
- Anji Fujiwara, Kodai Iwasaki, Tamami Watanabe, Hideaki Uchiyama:
Thermiapt: Sensory Perception of Quantitative Thermodynamics Concepts in Education. 20:1-20:3
- Yurun Chen, Xin Lyu, Tianzhao Li, Zihan Gao:
Li Bai the Youth: An LLM-Powered Virtual Agent for Children's Chinese Poetry Education. 21:1-21:2
- Yuyao Heng, Yingman Chen, Zihan Gao:
Echoes of Antiquity: An Interactive Installation for Guqin Culture Heritage Using Mid-Air Interaction and Generative AI. 22:1-22:2
- Sotaro Yokoi, Kaishi Amitani, Natsuki Hamanishi, Jun Rekimoto:
Jaku-in: A Cultural Skills Training System for Recording and Reproducing Three-dimensional Body, Eye, and Hand Movements. 23:1-23:2
- Shinichiro Terasawa, Oki Hasegawa, Toshiki Sato:
A Novel Projection Screen using the Crystalline Film of a Frozen Soap Bubble. 24:1-24:2
- Ryuhei Furuta, Hikari Kawaguchi, Kazuki Miyasaka, Mika Sai, Toshiki Sato:
Dynamically Reconfigurable Paper. 25:1-25:2
Virtual Reality, Augmented Reality, and Mixed Reality
- Wonok Kwon, Sanghoon Cheon, Kihong Choi, Keehoon Hong:
Real-time Holographic Media System Utilizing HBM-based Holography Processor. 26:1-26:3
- Yasunori Akashi, Changyo Han, Takeshi Naemura:
MMM: Mid-air image Moving in and out of the Mirror with backward glance in the mirror. 27:1-27:2
- Jaehong Lee, Duksu Kim:
Out-Of-Core Diffraction for Terascale Holography. 28:1-28:2
- Hidetaka Katsuyama, Shio Miyafuji, Hideki Koike:
DiskPlay: Dynamic Projection Mapping on Rotating Platforms for Extended Holographic Display. 29:1-29:2
- Kexin Nie, Mengyao Guo:
Flying Your Imagination: Integrating AI in VR for Kite Heritage. 30:1-30:2
- Hsueh-Han Wu, Kelvin Cheng, Koji Nishina, Jorge Chavez:
Engaging Racing Fans through Offline E-racing Spectator Experience in AR. 31:1-31:2
- Jun Miao, Alex Shin, Jeanne Vu, Takanori Miki, Guodong Rong, Joshua Davis, Zilong Li, Wenbin Wang, Jinglun Gao, Jiangtao Kuang:
Bridging Reality and the Virtual Environment: Perceptual Consistency and Visual Adaptation. 32:1-32:3
- Jieon Du, Heewon Lee, Jeongmin Lee, Gewon Kim:
Media Bus: XR-Based Immersive Cultural Heritage Tourism. 33:1-33:3
- Shigenori Mochizuki, Jonathan Duckworth, Ross Eldridge, James Hullick:
XR Avatar Prototype for Art Performance Supporting the Inclusion of Neurodiverse Artists. 34:1-34:2
- Ryu Nakagawa, Kenta Hidaka, Shimpei Biwata, Sho Kato, Taiki Shigeno:
Shadows Being Vacuumed Away: An MR Experience of Shadow Loss of Body with Spine-Chilling and Body-Trembling, and Shadow Loss of Thing. 35:1-35:2
- Ikuho Tani, Daisuke Iwai, Kosuke Sato:
High Spatial Resolution Projection Mapping for Visually Consistent Reproduction of Physical Surfaces. 36:1-36:2
- Xiaozhan Liang, Yu Wang, Fengyi Yan, Zehong Ouyang, Yong Hu, Siyu Luo:
Reborn of the White Bone Demon: Role-Playing Game Design Using Generative AI in XR. 37:1-37:3
- Junseo Choi, Hyeonji Kim, Haill An, Younhyun Jung:
Real-Time Transfer Function Editor for Direct Volume Rendering in Mixed Reality. 38:1-38:2
- Kinga Skiers, Danyang Peng, Giulia Barbareschi, Yun Suen Pai, Kouta Minamizawa:
NatureBlendVR: A Hybrid Space Experience for Enhancing Emotional Regulation and Cognitive Performance. 39:1-39:2
- Ching-Hua Chuan, Wan-Hsiu Sunny Tsai, Xueer Xia:
An Augmented Reality Experience for Climate Justice: Using Spatial Animation to Enhance Perceived Togetherness. 40:1-40:2
- Kezhou Yang, Sohei Wakisaka:
Co-play with Double Self: Exploring Bodily-Self Through Heautoscopy-Based XR Hide and Seek Game. 41:1-41:2
- Xinjun Li, Zhenhong Lei:
Mixed Reality Solutions for Tremor Disorders: Ergonomic Hand Motion and AR Rehabilitation. 42:1-42:2
- Xuechang Tu, Bernhard Kerbl, Fernando De la Torre:
Fast and Robust 3D Gaussian Splatting for Virtual Reality. 43:1-43:3
- Yuanlinxi Li, Mengyao Guo, Ze Gao:
Empathy Engine: Using Game Design and Real-time Technology to Cultivate Social Connection. 44:1-44:3
Image Processing and Understanding
- Jiho Bae, Minjae Lee, Ungsik Kim, Suwon Lee:
SLAM-Based Illegal Parking Detection System. 45:1-45:2
- Ziyang Chen, Mustafa Doga Dogan, Josef B. Spjut, Kaan Aksit:
SpecTrack: Learned Multi-Rotation Tracking via Speckle Imaging. 46:1-46:2
- Yu-Chiao Wang, Tung-Ju Hsieh, Pei-Ying Chiang:
Self-attention Handwriting Generative Model. 47:1-47:2
- Yusuke Takeuchi, Qi An, Atsushi Yamashita:
Boundary Conditioned Floor Layout Generation with Diffusion Model. 48:1-48:2
- Saki Kominato, Miyu Fukuoka, Naoya Koizumi:
Perceiving 3D from a 2D Mid-air Image. 49:1-49:2
- Oliver Richards, Chris Cook:
Efficient Space Variant Gaussian Blur Approximation. 50:1-50:2
- Yamato Miyatake, Parinya Punpongsanon:
An Exploratory Study on Fabricating of Unobtrusive Edible Tags. 51:1-51:2
- Chia-Chia Chen, Chi-Han Peng:
Shortest Path Speed-up Through Binary Image Downsampling. 52:1-52:2
- Tatsuki Arai, Mariko Isogawa, Kuniharu Sakurada, Maki Sugimoto:
3D Human Pose Estimation Using Ultra-low Resolution Thermal Images. 53:1-53:2
- Hao Jin, Zhengyang Wang, Xusheng Du, Xiaoxuan Xie, Haoran Xie:
Landscape Cinemagraph Synthesis with Sketch Guidance. 54:1-54:2
- Deinyon Lachlan Davies, Chris Cook:
Perceptually Uniform Hue Adjustment: Hue Distortion Cage. 55:1-55:2
- Sotaro Kanazawa, I-Chao Shen, Yuki Tatsukawa, Takeo Igarashi:
Generating Font Variations Using Latent Space Trajectory. 56:1-56:2
- Dongsik Yoon, Jongeun Kim, Seonggeun Song, Yejin Lee, Gunhee Lee:
OverallNet: Scale-Arbitrary Lightweight SR Model for handling 360° Panoramic Images. 57:1-57:2
- Ra Yun Boo, Jinhong Park, Jin-Woo Kim, Sunho Ki, Jeong-Ho Woo:
Disparity Map based Synthetic IR Pattern Augmentation for Active Stereo Matching. 58:1-58:2
- Daisuke Nanya, Kouki Yonezawa:
Anime line art colorization by region matching using region shape. 59:1-59:2
- Jiho Shin, Seungkyu Lee:
Strainer GAN: Filtering out Impurity Samples in GAN Training. 60:1-60:2
- Hao Sha, Tongtai Cao, Yue Liu:
Material and Colored Illumination Separation from Single Real Image via Self-Supervised Domain Adaptation. 61:1-61:2
- Wajahat Ali Khan, Seungkyu Lee:
Multimodal Learning for Autoencoders. 62:1-62:2
- Hyerin Cho, Jin-Woo Kim, Jinhong Park, Jeong-Ho Woo:
Heterogeneous Architecture for Asynchronous Seamless Image Stitching. 63:1-63:2
Reconstruction, Modeling and Processing
- Yuki Maegawa, Masanori Hashimoto, Ryo Shirai:
Development of Tiny Wireless Position Tracker Enabling Real-Time Intuitive 3D Modeling. 64:1-64:2
- Hanqin Wang, Alexei Sourin:
Sound Signatures for Geometrical Shapes. 65:1-65:2
- Kohei Miura, Daisuke Iwai, Kosuke Sato:
3D Reconstruction of a Soft Object Surface and Contact Areas in Hand-Object Interactions. 66:1-66:2
- Naoki Shitanda, Jun Rekimoto:
Gaussians in the City: Enhancing 3D Scene Reconstruction under distractors with Text-guided Segmentation and Inpainting. 67:1-67:2
- Shunsuke Hirata, Yuta Noma, Koya Narumi, Yoshihiro Kawahara:
ARAP-Based Shape Editing to Manipulate the Center of Mass. 68:1-68:2
- Nathan D. King, Steven J. Ruuth, Christopher Batty:
A Simple Heat Method for Computing Geodesic Paths on General Manifold Representations. 69:1-69:2
- Yi-Ju Pan, Pei-Chun Tsai, Kuan-Wen Chen:
Semantics-guided 3D Indoor Scene Reconstruction from a Single RGB Image with Implicit Representation. 70:1-70:2
- Vivica Wirth, Max Mühlhäuser, Alejandro Sánchez Guinea:
3D Scene Reconstruction of Point Cloud Data: A Lightweight Procedural Approach. 71:1-71:2
- Tianrun Chen, Xinyu Chen, Chaotao Ding, Ling Bai, Shangzhan Zhang, Lanyun Zhu, Ying Zang, Wenjun Hu, Zejian Li, Lingyun Sun:
New Fashion: Personalized 3D Design with a Single Sketch Input. 72:1-72:3
- Lidong Zhao, Xueyun Zhang, Lin Lu, Lifang Wu:
Hybrid Physical Model and Status Data-Driven Dynamic Control for Digital Light Processing 3D Printing. 73:1-73:2
- Seunghwan Kim, Sunha Park, Seungkyu Lee:
Neural Clustering for Prefractured Mesh Generation in Real-time Object Destruction. 74:1-74:2
- Jinsong Zhang, I-Chao Shen, Jotaro Sakamiya, Yu-Kun Lai, Takeo Igarashi, Kun Li:
DualAvatar: Robust Gaussian Splatting Avatar with Dual Representation. 75:1-75:3
- Keigo Minamida, Jun Rekimoto:
Incremental Gaussian Splatting: Gradual 3D Reconstruction from a Monocular Camera Following Physical World Changes. 76:1-76:2
- Joji Joseph, Bharadwaj Amrutur, Shalabh Bhatnagar:
Segmentation of 3D Gaussians using Masked Gradients. 77:1-77:2
Animation and Simulation
- Yuki Era, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama:
Generalizing Human Motion Style Transfer Method Based on Metadata-independent Learning. 78:1-78:3
- Shu-Ting Lin, Ming-Te Chi:
3D-to-2D Animation Smear Effect Technique Based on Japanese Hand-Drawn Animation Style. 79:1-79:2
- Eleni Tselepi, Spyridon Thermos, Georgios Albanis, Anargyros Chatzitofis:
Controlling Diversity in Single-shot Motion Synthesis. 80:1-80:2
- Théo Cheynel, Omar El Khalifi, Baptiste Bellot-Gurlet, Damien Rohmer, Marie-Paule Cani:
Rethinking motion keyframe extraction: a greedy procedural approach using a neural control rig. 81:1-81:2
- Usfita Kiftiyani, Seungkyu Lee:
Controlling Cross-Content Motion Style Transfer via Statistical Style Difference. 82:1-82:2
- Dayeon Lee, Seungkyu Lee:
Style Transfer with Gesture Style Generator. 83:1-83:2
- Yidi Wang, Frank Guan, Malcolm Yoke Hean Low, Zhengkui Wang, Aik Beng Ng, Simon See:
Towards Accelerating Physics Informed Graph Neural Network for Fluid Simulation. 84:1-84:3
- Yuki Kimura, Yoshinori Dobashi, Syuhei Sato:
Locally Editing Steady Fluid Flow via Controlling Repulsive Forces from Terrain. 85:1-85:2
- Atsuki Haruyama, Yuki Morimoto:
Fluid Highlights: Stylized Highlights for Anime-Style Food Rendering by Fluid Simulation. 86:1-86:2
Hardware System
- Mari Shiina, Naoki Hashimoto:
Transparent 360-Degree Display for High-Resolution Naked-Eye Stereoscopic Aerial Images. 87:1-87:2
- Pan-Pan Shiung, June-Hao Hou:
A Sentient Space Using Light Sensing with Particle Life. 88:1-88:2
- Jinsu Lee, Keehoon Hong, Minsik Park:
Automotive Holographic Head-Up Display. 89:1-89:2
- Asuka Fukubayashi, Mayu Ishii, Yu Nakayama, Shun Kaizu:
Cocktail-Party Communication from a Display to a Synchronized Camera. 90:1-90:2
- Shun-Han Chang, Chen-Chun Wu, Zi-Yun Lai, Tsung-Yen Lee, Cheng-En Ho, Min-Chun Hu:
Tingle Tennis: Menstrual Experience Sensory Simulation Sport Device. 91:1-91:2
- Shuyi Li, Yifan Ding, Zihan Gao:
Sensory Cravings: A Mixed Reality Installation Enhancing Psychological Experiences through Multisensory Interactions. 92:1-92:2
- Masaya Shimizu, Berend te Linde, Takatoshi Yoshida, Arata Horie, Nobuhisa Hanamitsu, Kouta Minamizawa:
GentlePoles: Designing Wooden Pole Actuators for Guiding People. 93:1-93:2
- Sunho Ki, Jinhong Park, Jin-Woo Kim, Ra Yun Boo, Hyerin Cho, Hanjun Choi, Sungmin Woo, Jeong-Ho Woo:
Deep Learning based Stereo Vision Camera System. 94:1-94:2
- Hyuma Auchi, Akito Fukuda, Yuta Yamauchi, Homura Kawamura, Keiichi Zempo:
Individual Diffusion Auralize Display Using an Array of Audio Source Position Tracking Ultrasonic Speakers. 95:1-95:2
Lighting, Rendering, and Material Representations
- Pascal Clausen, Li Ma, Mingming He, Ahmet Levent Tasel, Oliver Pilarski, Paul E. Debevec:
Fitting Spherical Gaussians to Dynamic HDRI Sequences. 96:1-96:3
- Shun Tatsukawa, Syuhei Sato:
A Relighting Method for Single Terrain Image based on Two-stage Albedo Estimation Model. 97:1-97:2
- Tomoya Sawada, Marie Katsurai:
MambaPainter: Neural Stroke-Based Rendering in a Single Step. 98:1-98:2
- Riel Suzuki, Yoshinori Dobashi:
Efficient visualization of appearance space of translucent objects using differential rendering. 99:1-99:2
- Ning Xia, Xiaofei Yin, Xuecong Feng:
Empowering CG Production: Cost-Effective Techniques for Voluminous Fur Rendering with Unreal Engine. 100:1-100:2
- Hayase Nishi, Daisuke Iwai, Kosuke Sato:
3D Texture Representation in Projection Mapping onto a Surface with Micro-Vibration. 101:1-101:2
- Takahiro Okamoto, Daisuke Iwai, Kosuke Sato:
Multidirectional Superimposed Projection for Delay-free Shadow Suppression on 3D Objects. 102:1-102:2
- Mehmet Oguz Derin, Takahiro Harada:
Gradient Traversal: Accelerating Real-Time Rendering of Unstructured Volumetric Data. 103:1-103:2
Creativity and Digital Art
- Joe Takayama:
Tracery Designer: A Metaball-Based Interactive Design Tool for Gothic Ornaments. 104:1-104:2
- Jung-Jae Yu, Dae-Young Song:
Latent Bias Correction in Outpainting Artworks. 105:1-105:2
- Zhiwei Wang, Yuzhe Xia, Kexin Nie, Mengyao Guo:
Alive Yi: Interactive Preservation of Yi Minority Embroidery Patterns through Digital Innovation. 106:1-106:2
- Xin-Wei Lin, Zhi-Yang Goh, Huiguang Huang, Dong-Yi Wu, Thi Ngoc Hanh Le, Tong-Yee Lee:
Design for Hypnotic Line Art Animation from a Still Image. 107:1-107:2
- Marina Nakagawa, Sohei Wakisaka:
[INDRA] Interactive Deep-dreaming Robotic Artist: Perceived artistic agency when collaborating with Embodied AI. 108:1-108:2
Visualization
- JungIn Lee, ChanKeun Park, Sooyeon Lim:
Analyzing and Visualizing the Correlation between Ecosystems and Environmental Sustainability: Focusing on Search API Data. 109:1-109:2
- Xu Han, Mina Shibasaki, Saki Sakaguchi, Asuka Mano, Tsuyoshi Nakayama, Yuji Higashi, Kumiko Kushiyama:
Visualization Methods for Manual Wheelchair Training: Impact on Communication Between Coaches and Users. 110:1-110:2
- Dong-Yi Wu, Li-Kuan Ou, Huiguang Huang, Yu Cao, Xin-Wei Lin, Thi Ngoc Hanh Le, Sheng-Yi Yao, Tong-Yee Lee:
Animated Pictorial Maps. 111:1-111:3
- Ryota Nakayama, Soshi Takeda, Gakuto Sekine, Yuichiro Katsumoto:
Design of Wall Art Utilizing Dynamic Color Changes through Photoelasticity. 112:1-112:2
- Kanyu Chen, Emiko Kamiyama, Ruiteng Li, Yichen Peng, Daichi Saito, Erwin Wu, Hideki Koike, Akira Kato:
Phantom Audition: Using the Visualization of Electromyography and Vocal Metrics as Tools in Singing Training. 113:1-113:2
- Li-Huan Shen, Joyce Sun, Jan-Yue Lin, Yi-Hsuan Chiu, Ssu-Hsuan Wu, Tai-Chen Tsai, Shun-Han Chang, Hung-Kuo Chu, Min-Chun Hu:
PnRInfo: Interactive Tactical Information Visualization for Pick and Roll Event. 114:1-114:2
'Colorblind Game' Can Enhance Awareness of Color Blindness. 115:1-115:2