SIGGRAPH Asia 2022 Posters: Daegu, Korea
- Soon Ki Jung, Neil A. Dodgson:
SIGGRAPH Asia 2022 Posters, SA 2022, Daegu, Republic of Korea, December 6-9, 2022. ACM 2022, ISBN 978-1-4503-9462-8
Material Simulation
- Shoon Komori, Tomokazu Ishikawa:
Representation of FRP material damage in 3DCG. 1:1-1:2
- Bohan Jing, Weiran Li, Qing Zhu:
A Non-Associated MSCC Model for Simulating Structured and Destructured Clays. 2:1-2:2
- Tomoya Tamagawa, Tomokazu Ishikawa:
Visual Simulation of Tire Smoke. 3:1-3:2
Animating Human Characters
- Ghazanfar Ali, Jae-In Hwang:
Improving Co-speech gesture rule-map generation via wild pose matching with gesture units. 4:1-4:2
- Jinhui Luan, Haiyong Jiang, Junqi Diao, Ying Wang, Jun Xiao:
MEMformer: Transformer-based 3D Human Motion Estimation from MoCap Markers. 5:1-5:2
- Mohamed Younes, Ewa Kijak, Simon Malinowski, Richard Kulpa, Franck Multon:
AIP: Adversarial Interaction Priors for Multi-Agent Physics-based Character Control. 6:1-6:2
- Deepak Gopinath, Hanbyul Joo, Jungdam Won:
Motion In-betweening for Physically Simulated Characters. 7:1-7:2
- Yui Koroku, Issei Fujishiro:
Anime-Like Motion Transfer with Optimal Viewpoints. 8:1-8:2
Computer Vision and Image Understanding
- Naoki Kita, Shiori Kawasaki, Takafumi Saito:
Palette-based Image Search with Color Weights. 9:1-9:2
- Luca Quartesan, Carlos Pereira Santos:
Neural Bidirectional Texture Function Compression and Rendering. 10:1-10:2
- Ming-Ru Xie, Shing-Yun Jung, Kuan-Wen Chen:
Efficient Drone Exploration in Real Unknown Environments. 11:1-11:2
- Ting-Hua Yang, Hsien-Yuan Hsieh, Tse-Han Lin, Wei-Zen Sun, Ming Ouhyoung:
A High Frame Rate Affordable Nystagmus Detection Method with Smartphones Used in Outpatient Clinic. 12:1-12:2
- Chun Wei Ooi, John Dingliana:
Color LightField: Estimation Of View-point Dependant Color Dispersion In Waveguide Display. 13:1-13:2
- Yohei Hanaoka, Suwichaya Suwanwimolkul, Satoshi Komorita:
Recursive Rendering of 2D Images for Accurate Pose Estimation in a 3D Mesh Map. 14:1-14:2
- Seungkyu Lee, Dongshen Han:
Internal-External Boundary Attentions for Transparent Object Segmentation. 15:1-15:2
Image and Video Processing Applications
- Shingo Hattori, Takefumi Hiraki:
Accelerated and Optimized Search of Imperceptible Color Vibration for Embedding Information into LCD images. 16:1-16:2
- Jae-Ho Nah, Hyeju Kim:
TexSR: Image Super-Resolution for High-Quality Texture Mapping. 17:1-17:2
- Ryoma Hashimoto, Yoshinori Dobashi:
Adjusting Level of Abstraction for Stylized Image Composition. 18:1-18:2
- Rui Wang, Nisha Huang, Fan Tang, Weiming Dong, Tong-Yee Lee:
Language-driven Diversified Image Retargeting. 19:1-19:2
- Xue Song, Jiawei Pan, Fuzhang Wu, Weiming Dong:
Optimal Composition Recommendation for Portrait Photography. 20:1-20:2
- Akihiro Sakai, Yasuhito Sawahata, Yamato Miyashita, Kazuteru Komine:
Cognition-aware automatic viewpoint selection in scenes with crowds of objects. 21:1-21:2
Geometry and Modeling
- Lakshmi Priya Muraleedharan, Vikas Kantha Gowda, Basavaraja Shanthappa Vandrotti:
Automatic Deformation-based animation of 3D mesh. 22:1-22:2
- Naoki Kita, Satoshi Tsukii, Miwako Tsuru:
Procedural Modeling of Crystal Clusters. 23:1-23:2
- Takashi Horiuchi, Ziyuan Cao, Yuto Kominami, Wataru Umezawa, Yuhao Dou, Daichi Ando, Tomohiko Mukai:
Artist-directed Modeling of Competitively Growing Corals. 24:1-24:2
- Chia Hung Yu, Kuan-Wen Chen:
Robust and Efficient Structure-from-Motion Method for Ambiguous Large-Scale Indoor Scene. 25:1-25:2
- Chuanchuan Wang, Yatong Xu, Minjie Tang, Jie Li, Hui Mao, Shiliang Pu:
Robust Vectorized Surface Reconstruction with 2D-3D Joint Optimization. 26:1-26:2
Rendering Techniques
- Rui Li:
Adaptive real-time interactive rendering of gigantic multi-resolution models. 27:1-27:2
- Paritosh Kulkarni, Sho Ikeda, Takahiro Harada:
Fused BVH to Ray Trace Level of Detail Meshes. 28:1-28:2
Machine Learning for Computer Graphics
- Sihi-Ting Huang, Tung-Ju Hsieh:
Ribbon Font Neural Style Transfer for OpenType-SVG Font. 29:1-29:2
- Samuel Giraud-Carrier, Seth Holladay, Parris K. Egbert:
Time-Dependent Machine Learning for Volumetric Simulation. 30:1-30:2
- Tejas Anvekar, Ramesh Ashok Tabib, Dikshit Hegde, Uma Mudenagudi:
Metric-KNN is All You Need. 31:1-31:2
Human-Computer Interaction
- Ryotaro Kimura, Jun Rekimoto:
Prometheus: A mobile telepresence system connecting the 1st person and 3rd person perspectives continuously. 32:1-32:2
- Yusuke Miura, Masaki Kuribayashi, Erwin Wu, Hideki Koike, Shigeo Morishima:
A Study on Sonification Method of Simulator-Based Ski Training for People with Visual Impairment. 33:1-33:2
- Hanseob Kim, Jieun Kim, Ghazanfar Ali, Jae-In Hwang:
No-code Digital Human for Conversational Behavior. 34:1-34:2
- Hidetaka Katsuyama, Erwin Wu, Hideki Koike:
Using Rhythm Game to Train Rhythmic Motion in Sports. 35:1-35:2
- Shintaro Fukumoto, Seiya Mitsuno, Kazuki Nakayama, Reiya Itatani, Genki Jogan, Hamed Mahzoon:
Investigating the Effects of Synchronized Visuo-Tactile Stimuli for Inducing Kinesthetic Illusion in Observational Learning of Whole-Body Movements. 36:1-36:2
- Dong-Geun Kim, Jungeun Lee, Seungmoon Choi:
MMGrip: A Handheld Multimodal Haptic Device Combining Vibration, Impact, and Shear for Realistic Expression of Contact. 37:1-37:2
- Yuto Umetsu, Parinya Punpongsanon, Takefumi Hiraki:
InfiniteShader: Color Changeable 3D Printed Objects using Bi-Stable Thermochromic Materials. 38:1-38:2
- Kinga Skiers, Yun Suen Pai, Kouta Minamizawa:
Transcendental Avatar: Experiencing Bioresponsive Avatar of the Self for Improved Cognition. 39:1-39:2
- Yuto Nakanishi, Yuya Kinzuka, Fumiaki Sato, Shigeki Nakauchi, Tetsuto Minami:
Pupillary oscillation induced by pseudo-isochromatic stimuli for objective color vision test. 40:1-40:2
Virtual Reality, Augmented Reality, and Mixed Reality
- Yuetong Zhao, Shuo Yan, Xukun Shen:
Cohand VR: Towards A Shareable Immersive Experience via Wearable Gesture Interface between VR Audiences and External Audiences. 41:1-41:2
- Seina Kobayashi, Kei Kanari, Mie Sato:
Effects of Font Type and Weight on Reading in VR. 42:1-42:2
- Suhyeon Kim, Dokyung Lee, Jaejun Park, Myungji Song, Younhyun Jung:
Codeless Content Creator System: Anyone Can Make Their Own Mixed Reality Content Without Relying on Software Developer Tools. 43:1-43:2
- Ho-San Kang, Jong-Won Lee, Soo-Mi Choi:
Combining Augmented and Virtual Reality Experiences for Immersive Fire Drills. 44:1-44:2
- Yixin Wang, Yun Suen Pai, Kouta Minamizawa:
It's Me: VR-based Journaling for Improved Cognitive Self-Regulation. 45:1-45:2
- Kaho Iwasaki, Yuji Sakamoto:
Method of Creating Video Content that Cause The Sensation of Falling. 46:1-46:2
- Takuma Kato, Tomosuke Nakano, Takanori Horibe, Miku Takemasa, Yusuke Yamazaki, Akihiko Shirai:
Cross-platforming "School life metaverse" user experience. 47:1-47:2
- Bin Han, Gerard Jounghyun Kim, Jae-In Hwang:
Real-Time Facial Animation Generation on Face Mask. 48:1-48:2
- Takashi Matsumoto, Erwin Wu, Hideki Koike:
Temporal and Spatial Distortion for VR Rhythmic Skill Training. 49:1-49:2
- Goksu Yamac, Carol O'Sullivan:
Eye on the Ball: The effect of visual cue on virtual throwing. 50:1-50:2
Display and Print Technologies
- Kazushi Nakamura, Kenta Yamamoto, Yoichi Ochiai:
Computer Generated Hologram Optimization for Lens Aberration. 51:1-51:2
- Ryota Koiso, Keisuke Nonaka, Tatsuya Kobayashi, Kyoji Matsushima:
Color Animated Full-parallax High-definition Computer-generated Hologram. 52:1-52:2
- Rina Kinoshita, Hiroya Tanaka:
Hanging Print: Plastic Extrusion for Catenary Weaving in Mid-Air. 53:1-53:2
- Tianqin Yang, Mingli Xiang, Lidong Zhao, Zhi Zhao, Lifang Wu:
A novel solution to manufacturing multi-color medical preoperative models with transparent shells. 54:1-54:2
- Soya Eguchi, Yasuo Nagura, Hiroya Tanaka:
Realistic Rendering Tool for Pseudo-Structural Coloring with Multi-Color Extrusion of FFF 3D Printing. 55:1-55:2