46th SIGGRAPH 2019: Los Angeles, CA, USA - Posters
Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2019, Los Angeles, CA, USA, July 28 - August 1, 2019, Posters. ACM 2019, ISBN 978-1-4503-6314-3
Adaptive / assistive technology
- Mark Murnane, Max Breitmeyer, Francis Ferraro, Cynthia Matuszek, Don Engel: Learning from human-robot interactions in modeled scenes. 1:1-1:2
- Noriyasu Obushi, Sohei Wakisaka, Shunichi Kasahara, Atsushi Hiyama, Masahiko Inami: MagniFinger: fingertip-mounted microscope for augmenting human perception. 2:1-2:2
- Vineet Batra, Ankit Phogat, Tarun Beri: Massively parallel layout generation in real time. 3:1-3:2
- Huiyi Fang, Kenji Funahashi, Shinji Mizuno, Yuji Iwahori: Partial zoom on small display for people suffering from presbyopia. 4:1-4:2
- Chia-Yu Chen, I-Chen Lin: Rapid 3D building modeling by sketching. 5:1-5:2
- Yi-Ching Kang, Hiroki Nishino: Scented graphics: exploration in inkjet scented-printing. 6:1-6:2
- Levan Sulimanov, Marc Olano: Virtual reality mirror therapy rehabilitation for post-stroke patients. 7:1-7:2
Art & design
- Angela Wang, Anthony Dalton Eason, Ergun Akleman: A formal process to design visual archetypes based on character taxonomies. 8:1-8:2
- Sou Tabata, Hiroki Yoshihara, Haruka Maeda, Kei Yokoyama: Automatic layout generation for graphical design magazines. 9:1-9:2
- Sayan Ghosh, Jose Echevarria, Vineet Batra, Ankit Phogat: Exploring color variations for vector graphics. 10:1-10:2
- Kenta Akita, Yuki Morimoto, Reiji Tsuruno: Fully automatic colorization for anime character considering accurate eye colors. 11:1-11:2
- Der-Lor Way, Weng-Kei Lau, Tzu Ying Huang: Glove puppetry cloud theater through a virtual reality network. 12:1-12:2
- Akinobu Maejima, Hiroyuki Kubo, Takuya Funatomi, Tatsuo Yotsukura, Satoshi Nakamura, Yasuhiro Mukaigawa: Graph matching based anime colorization with multiple references. 13:1-13:2
- YanXiang Zhang, Yirun Shen, Weiwei Zhang, Ziqiang Zhu, Pengfei Ma: Interactive spatial augmented reality system for Chinese opera. 14:1-14:2
- Yanxiang Zhang, Li Tao, Yirun Shen, Clayton Elieisar, Fangbemi Abassin: Interactive virtual reality orchestral music. 15:1-15:2
- Serguei A. Mokhov, Deschanel Li, Haotao Lai, Jashanjot Singh, Yiran Shen, Jonathan Llewellyn, Miao Song, Sudhir P. Mudur: ISSv2 and OpenISS distributed system for real-time interaction for performing arts. 16:1-16:2
- Amal Dev Parakkat, Pooran Memari, Marie-Paule Cani: Layered reconstruction of stippling art. 17:1-17:2
- Ye-Ning Jiang, Hiroki Nishino: Meet in rain: a serious game to help the better appreciation of Chinese poems. 18:1-18:2
- Youngsoo Kim, Shounan An, Youngbak Jo, Seungje Park, Shindong Kang, Insoo Oh, Duke Donghyun Kim: Multi-task audio-driven facial animation. 19:1-19:2
- Jiawei Huang: Pieces of the past, maya treasure hunt: a virtual reality game experience. 20:1-20:2
- Praveen Kumar Dhanuka, Nirmal Kumawat, Nipun Jindal: Vector based glyph style transfer. 21:1-21:2
Augmented & virtual realities
- Laura Mann, Oleg Fryazinov: 3D printing for mixed reality hands-on museum exhibit interaction. 22:1-22:2
- Naoki Kawai: A method for rectifying inclination of panoramic images. 23:1-23:2
- Keisuke Hattori, Tatsunori Hirai: An intuitive and educational programming tool with tangible blocks and AR. 24:1-24:2
- Takahito Ito, César A. Hidalgo: Biodigital: transform data to experience, beyond data visualization. 25:1-25:2
- Yoonjung Park, Hyocheol Ro, Tack-Don Han: Deep-ChildAR bot: educational activities and safety care augmented reality system with deep-learning for preschool. 26:1-26:2
- Yusuf Sermet, Ibrahim Demir: Flood action VR: a virtual reality framework for disaster awareness and emergency response training. 27:1-27:2
- Tamás Matuszka, Ferenc Czuczor, Zoltán Sóstai: HeroMirror interactive: a gesture controlled augmented reality gaming experience. 28:1-28:2
- Margaret E. Cook, Amber Ackley, Karla I. Chang Gonzalez, Austin Payne, Jinsil Hwaryoung Seo, Caleb Kicklighter, Michelle Pine, Timothy McLaughlin: InNervate immersion: case study of dynamic simulations in AR/VR environments for learning muscular innervation. 29:1-29:2
- Rowan T. Hughes, Campbell W. Strong, John B. McGhee: Vox-cells: voxel-based visualization of volume data for enhanced understanding and exploration in virtual reality (VR). 30:1-30:2
- Catherine Taylor, Robin McNicholas, Darren Cosker: VRProp-net: real-time interaction with virtual props. 31:1-31:2
Display & rendering
- Nahomi Maki, Toshiaki Yamanouchi, Kazuhisa Yanaka: 3D aerial display with micro mirror array plate and reversed depth integral photography. 32:1-32:2
- Yun Liang, Mingqin Chen, Zesheng Huang, Diego Gutierrez, Adolfo Muñoz, Julio Marco: A data-driven compression method for transient rendering. 33:1-33:2
- Fangcheng Zhong, George Alex Koulieris, George Drettakis, Martin S. Banks, Mathieu Chambe, Frédo Durand, Rafal Mantiuk: DiCE: dichoptic contrast enhancement for binocular displays. 36:1-36:2
- Xingyu Pan, Mengya Zheng, Abraham G. Campbell: Exploration of using face tracking to reduce GPU rendering on current and future auto-stereoscopic displays. 39:1-39:2
- Daniel Bird, Stephen D. Laycock: GPGPU acceleration of environmental and movement datasets. 41:1-41:2
- SeokYeong Lee, Hwasup Lim, Sang Chul Ahn, SeungKyu Lee: IR surface reflectance estimation and material type recognition using two-stream net and kinect camera. 43:1-43:2
- Tzu-Chieh Chang, Ming Ouhyoung: Photon: a modular, research-oriented rendering system. 45:1-45:2
- Yuliya Gitlina, Daljit Singh Dhillon, Giuseppe Claudio Guarnera, Abhijeet Ghosh: Practical measurement and modeling of spectral skin reflectance. 47:1-47:2
- Shingo Kagami, Kotone Higuchi, Koichi Hashimoto: Puppeteered rain: interactive illusion of levitating water drops by position-dependent strobe projection. 48:1-48:2
- Namo Podee, Nelson Max, Kei Iwasaki, Yoshinori Dobashi: Temporal and spatial anti-aliasing for rendering reflection on a water surface. 50:1-50:2
- Wei Sen Loi, Kenneth J. Chau: Visualization of ultra-thin semi-transparent metallic films by wave simulations and ray-tracing rendering. 51:1-51:2
Hardware interfaces
- Junichi Nabeshima, M. H. D. Yamen Saraiji, Kouta Minamizawa: Arque: artificial biomimicry-inspired tail for extending innate body functions. 52:1-52:2
- Maria Mannone, Eri Kitamura, Jiawei Huang, Ryo Sugawara, Pascal Chiu, Yoshifumi Kitamura: CubeHarmonic: a new musical instrument based on Rubik's cube with embedded motion sensor. 53:1-53:2
- Taichi Furukawa, Nobuhisa Hanamitsu, Yoichi Kamiyama, Hideaki Nii, Charalampos Krekoukiotis, Kouta Minamizawa, Akihito Noda, Junko Yamada, Keiichi Kitamura, Daisuke Niwa, Yoshiaki Hirano: Designing a full-body customizable haptic interface using two-dimensional signal transmission. 54:1-54:2
- Hyocheol Ro, Yoonjung Park, Jung-Hyun Byun, Tack-Don Han: Display methods of projection augmented reality based on deep learning pose estimation. 55:1-55:2
- Shogo Yamashita, Takaaki Kasuga, Shunichi Suwa, Takashi Miyaki, Masaya Nogi, Jun Rekimoto: Fluid-measurement technology using flow birefringence of nanocellulose. 56:1-56:2
- Matheus Alberto de Oliveira Ribeiro, Allan Amaral Tori, Romero Tori, Fátima L. S. Nunes: Immersive game for dental anesthesia training with haptic feedback. 57:1-57:2
- Qin Wu, Jiayuan Wang, Sirui Wang, Tong Su, Chenmei Yu: MagicPAPER: tabletop interactive projection device based on tangible interaction. 58:1-58:2
- Alexander Lattas, Mingqian Wang, Stefanos Zafeiriou, Abhijeet Ghosh: Multi-view facial capture using binary spherical gradient illumination. 59:1-59:2
- Yusuke Yamazaki, Shoichi Hasegawa, Hironori Mitake, Akihiko Shirai: Neck strap haptics: an algorithm for non-visible VR information using haptic perception on the neck. 60:1-60:2
- Takahiro Shitara, Vibol Yem, Hiroyuki Kajimoto: Reconsideration of ouija board motion in terms of haptic illusions (IV): effect of haptic cue and another player. 61:1-61:2
- Shio Miyafuji, Soichiro Toyohara, Toshiki Sato, Hideki Koike: Remote control experiment with displaybowl and 360-degree video. 62:1-62:2
- Takuro Nakao, Stevanus Kevin Santana, Megumi Isogai, Shinya Shimizu, Hideaki Kimata, Kai Kunze, Yun Suen Pai: ShareHaptics: a modular haptic feedback system using shape memory alloy for mixed reality shared space applications. 63:1-63:2
- Takayuki Nozawa, Erwin Wu, Florian Perteneder, Hideki Koike: Visualizing expert motion for guidance in a VR ski simulator. 64:1-64:2
- Aishwari Talhan, Hwangil Kim, Seokhee Jeon: Wearable soft pneumatic ring with multi-mode controlling for rich haptic effects. 65:1-65:2
Production
- Madison Kramer, Ergun Akleman: A procedural approach to creating second empire houses. 66:1-66:2
- Matthew DuVall, John Flynn, Michael Broxton, Paul E. Debevec: Compositing light field video using multiplane images. 67:1-67:2
- Alexandre Derouet-Jourdan, Marc Salvati, Xiaoxiong Xing, Takuro Nishikawa: Efficient mask expansion for green-screen keying using color distributions. 68:1-68:2
- Jasmine Y. Shih, Kalina Borkiewicz, A. J. Christensen, Donna J. Cox: Interactive cinematic scientific visualization in unity. 69:1-69:2
- Ankit Phogat, Matthew Fisher, Danny M. Kaufman, Vineet Batra: Skinning vector graphics with GANs. 70:1-70:2
- Mikiko Amano, Takayuki Itoh: Visual simulation of ice and frost with sketch input. 71:1-71:2
- Tor Robinson, William Furneaux: Voxel printing using procedural art-directable technologies. 72:1-72:2
Research
- Miguel Galindo, Julio Marco, Matthew O'Toole, Gordon Wetzstein, Diego Gutierrez, Adrián Jarabo: A dataset for benchmarking time-resolved non-line-of-sight imaging. 73:1-73:2
- Xiaokun Wang, Sinuo Liu, Xiaojuan Ban, Yanrui Xu, Jing Zhou, Cong Wang: Convergent turbulence refinement toward irrotational vortex. 80:1-80:2
- Sai Ganesh Subramanian, Mathew Eng, Vinayak R. Krishnamurthy, Ergun Akleman: Delaunay lofts: a new class of space-filling shapes. 81:1-81:2
- Rex Hsieh, Akihiko Shirai, Hisashi Sato: Effectiveness of facial animated avatar and voice transformer in elearning programming course. 82:1-82:2
- Anam Mehmood, Ishtiaq Rasool Khan, Hassan Dawood, Hussain Dawood: Enhancement of CT images for visualization. 83:1-83:2
- Alexander Naitsat, Yehoshua Y. Zeevi: Multi-resolution approach to computing locally injective maps on meshes. 87:1-87:2
- Jing Ke, Junwei Deng, Yizhou Lu: Noise reduction with image inpainting: an application in clinical data diagnosis. 88:1-88:2
- Christopher Ratto, Mimi Szeto, David Slocum, Kevin Del Bene: OceanGAN: a deep learning alternative to physics-based ocean rendering. 89:1-89:2
- Lei Ma, Hong Deng, Beibei Wang, Yanyun Chen, Tamy Boubekeur: Real-time structure aware color stippling. 92:1-92:2
- Pratik Kalshetti, Parag Chaudhuri: Unsupervised incremental learning for hand shape and pose estimation. 96:1-96:2
- Juraj Tomori: VFX fractal toolkit: integrating fractals into VFX pipeline. 97:1-97:2