50th SIGGRAPH 2023: Los Angeles, CA, USA - Posters
- Erik Brunvand, James Tompkin:
ACM SIGGRAPH 2023 Posters, SIGGRAPH 2023, Los Angeles, CA, USA, August 6-10, 2023. ACM 2023
Animation & Simulation
- Jen-Chun Lin, Wen-Li Wei:
3D Character Motion Authoring From Temporally-Sparse Photos. 1:1-1:2
- Holly Bloom, Kevin Creamer, Lana Ismail:
Art Simulates Life: 3D Visualization Takes Pediatric Hospitalist Simulations to the Next Level. 2:1-2:2
- Chen-Chieh Liao, Jong-Hwan Kim, Hideki Koike, Dong-Hyun Hwang:
Content-Preserving Motion Stylization using Variational Autoencoder. 3:1-3:2
- Shaimaa Monem Abdelhafez, Peter Benner, Christian Lessig:
Improved Projective Dynamics Global Using Snapshots-based Reduced Bases. 4:1-4:2
- Minkwan Kim, Yoonsang Lee:
Learning Human-like Locomotion Based on Biological Actuation and Rewards. 5:1-5:2
- Bilas Talukdar, Yunhao Zhang, Tomer Weiss:
Learning to Simulate Crowds with Crowds. 6:1-6:2
Art & Design
- Naoki Agata, Anran Qi, Yuta Noma, I-Chao Shen, Takeo Igarashi:
Computational Design of Nebuta-like Paper-on-Wire Artworks. 7:1-7:2
- Maria Rita Nogueira, João Braz Simões, Paulo Menezes:
"F O R M S" - Creating new visual perceptions of dance movement through machine learning. 8:1-8:2
- Abdalla G. M. Ahmed:
Image Printing on Stones, Wood, and More. 9:1-9:2
- Bo Shui, Chufan Shi, Xiaomei Nie:
Metro Re-illustrated: Incremental Generation of Stylized Paintings Using Neural Networks. 10:1-10:2
- Miao Lin, I-Chao Shen, Hsiao-Yuan Chin, Ruo-Xi Chen, Bing-Yu Chen:
Palette-Based Colorization for Vector Icons. 11:1-11:2
- Keijiroh Nagano, Maya Georgieva, Harpreet Sareen, Clarinda Mac Low:
The Talk: Speculative Conversation with Everyday Objects. 12:1-12:2
Augmented & Virtual Reality
- Dorota Kaminska, Grzegorz Zwolinski, Dorota Merecz-Kot:
Acute Stress Disorder Therapy by Virtual Reality: a Case Study of Ukrainian Refugees. 13:1-13:2
- Kai Kohyama, Alexandre Berthault, Takuma Kato, Akihiko Shirai:
AI-Assisted Avatar Fashion Show: Word-to-Clothing Texture Exploration and Motion Synthesis for Metaverse UGC. 14:1-14:2
- Taiyo Taguchi, Yurie Watanabe, Tomokazu Ishikawa:
An investigation of changes in taste perception by varying polygon resolution of foods in virtual environments. 15:1-15:2
- Ginam Ko, Jaeseok Yoon, Byungseok Jung, SangHun Nam:
BStick: Hand-held Haptic Controller for Virtual Reality Applications. 16:1-16:2
- Madeline Verheydt, Evan Staben, Kyle Carncross, Meredith Minear:
Down the Rabbit Hole: Experiencing Alice in Wonderland Syndrome through Virtual Reality. 17:1-17:2
- Ke-Fan Lin, Yu-Chih Chou, Yu-Hsiang Weng, Yvone Tsai Chen, Tse-Yu Pan, Ping-Hsuan Han:
Exploring Multiple-Display Interaction for Live XR Performance. 18:1-18:2
- Hsuehhan Wu, Kelvin Cheng, Madoka Inoue, Soh Masuko:
Mixed Reality Racing: Combining Real and Virtual Motorsport Racing. 19:1-19:2
- Ayame Uchida, Izumi Tsunokuni, Yusuke Ikeda, Yasuhiro Oikawa:
Mixed Reality Visualization of Room Impulse Response Map using Room Geometry and Physical Model of Sound Propagation. 20:1-20:2
- Mathieu Lutfallah, Christian Hirt, Valentina Gorobets, Manuel Gregor, Andreas M. Kunz:
Redirected Walking in Overlapping Rooms. 21:1-21:2
- Nisho Matsushita, Shota Sugiura, Kohei Arai:
Towards Realistic Virtual Try-on for E-commerce by Sewing Pattern Estimation. 22:1-22:2
Geometry & Modeling
- Adilla Böhmer-Mzee, Lizeth Joseline Fuentes Perez, Renato Pajarola:
Automatic Architectural Floorplan Reconstruction. 23:1-23:2
- Jiwoo Kang, Juheon Hwang, Moonkyeong Choi, Sanghoon Lee:
High-resolution 3D Reconstruction with Neural Mesh Shading. 24:1-24:2
- Bruno Roy:
Neural Shape Diameter Function for Efficient Mesh Segmentation. 25:1-25:2
- Sakura Shinji, Issei Fujishiro:
Virtual Manipulation of Cultural Assets: An Initial Case Study with Single-Joint Articulated Models. 26:1-26:2
Images, Video, & Computer Vision
- Craig Reynolds:
Camouflage via Coevolution of Predator and Prey. 27:1-27:2
- Eunhee Kim, TaeHwa Park, JaeYoung Moon, Wonsang You, Taegwan Ha, Kyung-Joong Kim:
DAncing body, Speaking Hands (DASH): Sign Dance Generation System with Deep Learning. 28:1-28:2
- Kazuhito Sato, Shugo Yamaguchi, Tsukasa Takeda, Shigeo Morishima:
Deformable Neural Radiance Fields for Object Motion Blur Removal. 29:1-29:2
- Tsukasa Takeda, Shugo Yamaguchi, Kazuhito Sato, Kosuke Fukazawa, Shigeo Morishima:
Efficient 3D Reconstruction of NeRF using Camera Pose Interpolation and Photometric Bundle Adjustment. 30:1-30:2
- Taiki Watanabe, Seitaro Shinagawa, Takuya Funatomi, Akinobu Maejima, Yasuhiro Mukaigawa, Satoshi Nakamura, Hiroyuki Kubo:
Improved Automatic Colorization by Optimal Pre-colorization. 31:1-31:2
- Shaohui Jiao, Yuzhong Chen, Zhaoliang Liu, Danying Wang, Wen Zhou, Li Zhang, Yue Wang:
Photo-Realistic Streamable Free-Viewpoint Video. 32:1-32:2
- Nanami Kotani, Asako Kanezaki:
Point Anywhere: Directed Object Estimation from Omnidirectional Images. 33:1-33:2
- Daljit Singh Dhillon, Parisha Joshi, Jessica R. Baron, Eric K. Patterson:
Robust Color Correction for Preserving Spatial Variations within Photographs. 34:1-34:2
- Andy Yu-Hsiang Tseng, Wen-Fan Wang, Bing-Yu Chen:
SegAnimeChara: Segmenting Anime Characters Generated by AI. 35:1-35:2
- Elliot Dickman, Paul J. Diefenbach, Matthew Burlick, Mark Stockton:
Smart Scaling: A Hybrid Deep-Learning Approach to Content-Aware Image Retargeting. 36:1-36:2
- Ippei Otake, Kazuya Kitano, Takahiro Kushida, Hiroyuki Kubo, Akinobu Maejima, Yuki Fujimura, Takuya Funatomi, Yasuhiro Mukaigawa:
Updating Human Pose Estimation using Event-based Camera to Improve Its Accuracy. 37:1-37:2
- Masahiko Goto, Yasuhiro Oikawa, Atsuto Inoue, Wataru Teraoka, Takahiro Sato, Yasuyuki Iwane, Masahito Kobayashi:
Utilizing LiDAR Data for 3D Sound Source Localization. 38:1-38:2
Interactive Techniques
- Hikaru Hagura, Ryuta Yamaguchi, Tomoki Yoshihisa, Shinji Shimojo, Yukiko Kawai:
A Proposal of Acquiring and Analyzing Method for Distributed Litter on the Street using Smartphone Users as Passive Mobility Sensors. 39:1-39:2
- Anyi Rao, Xuekun Jiang, Yuwei Guo, Linning Xu, Lei Yang, Libiao Jin, Dahua Lin, Bo Dai:
Dynamic Storyboard Generation in an Engine-based Virtual Environment for Video Production. 40:1-40:2
- Shieru Suzuki, Kazuma Aoyama, Ryosei Kojima, Kazuya Izumi, Tatsuki Fushimi, Yoichi Ochiai:
ExudedVestibule: Enhancing Mid-air Haptics through Galvanic Vestibular Stimulation. 41:1-41:2
- Timothy Chen, Miguel Ying Jie Then, Jing-Yuan Huang, Yang-Sheng Chen, Ping-Hsuan Han, Yi-Ping Hung:
sPellorama: An Immersive Prototyping Tool using Generative Panorama and Voice-to-Prompts. 42:1-42:2
- Qin Wu, Wenlu Wang, Qianru Liu, Jiashuo Cao, Duo Xu, Suranga Nanayakkara:
Tidd: Augmented Tabletop Interaction Supports Children with Autism to Train Daily Living Skills. 43:1-43:2
Rendering & Displays
- Kensuke Katori, Kenta Yamamoto, Ippei Suzuki, Tatsuki Fushimi, Yoichi Ochiai:
Crossed half-silvered Mirror Array: Fabrication and Evaluation of a See-Through Capable DIY Crossed Mirror Array. 44:1-44:2
- Shunya Motegi, Takuya Funatomi, Yasuhiro Mukaigawa, Aki Takayanagi, Hayato Kikuta, Hiroyuki Kubo:
Efficient Rendering of Glossy Materials by Interpolating Prefiltered Environment Maps based on Primary Normals. 45:1-45:2
- Takegi Yoshimoto, Nobuhito Kasahara, Homei Miyashita:
Fabrication of Edible lenticular lens. 46:1-46:2
- Kaloian Petkov:
Guided Training of NeRFs for Medical Volume Rendering. 47:1-47:2
- Adrian Xuan Wei Lim, Lynnette Hui Xian Ng, Conor Griffin, Nicholas Kryer, Faraz Baghernezhad:
Reverse Projection: Real-Time Local Space Texture Mapping. 48:1-48:2
- Serguei A. Mokhov, Jonathan Llewellyn, Carlos Alarcon Meza, Tariq Daradkeh, Gillian Roper:
The use of Containers in OpenGL, ML and HPC for Teaching and Research Support. 49:1-49:2
- Jessica R. Baron, Eric K. Patterson, Jonathan Dupuy:
Toward Efficient Capture of Spatially Varying Material Properties. 50:1-50:2
- Jinyuan Yang, Abraham G. Campbell:
VirtualVoxel: Real-Time Large Scale Scene Visualization and Modification. 51:1-51:2