SIGGRAPH Asia 2023 Conference Papers: Sydney, NSW, Australia
- June Kim, Ming C. Lin, Bernd Bickel:
SIGGRAPH Asia 2023 Conference Papers, SA 2023, Sydney, NSW, Australia, December 12-15, 2023. ACM 2023
Shells
- Xuan Li, Yu Fang, Lei Lan, Huamin Wang, Yin Yang, Minchen Li, Chenfanfu Jiang:
Subspace-Preconditioned GPU Projective Dynamics with Contact for Cloth Simulation. 1:1-1:12
Character and Rigid Body Control
- Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang:
C·ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters. 2:1-2:11
- Yusen Feng, Xiyan Xu, Libin Liu:
MuscleVAE: Model-Based Controllers of Muscle-Actuated Characters. 3:1-3:11
- Doug L. James, David I. W. Levin:
ViCMA: Visual Control of Multibody Animations. 4:1-4:11
Computational Design
- Dylan Rowe, Albert Chern:
Sparse Stress Structures from Optimal Geometric Measures. 5:1-5:9
Fluid Simulation
- Yanrui Xu, Xiaokun Wang, Jiamin Wang, Chongming Song, Tiancheng Wang, Yalan Zhang, Jian Chang, Jian-Jun Zhang, Jirí Kosinka, Alexandru C. Telea, Xiaojuan Ban:
An Implicitly Stable Mixture Model for Dynamic Multi-fluid Simulations. 6:1-6:11
Robots & Characters
- Deok-Kyeong Jang, Yuting Ye, Jungdam Won, Sung-Hee Lee:
MOCHA: Real-Time Motion Characterization via Context Matching. 7:1-7:11
Rendering
- Zhihua Zhong, Jingsen Zhu, Yuxin Dai, Chuankun Zheng, Guanlin Chen, Yuchi Huo, Hujun Bao, Rui Wang:
FuseSR: Super Resolution for Real-time Rendering through Efficient Multi-resolution Fusion. 8:1-8:10
- Jonghee Back, Binh-Son Hua, Toshiya Hachisuka, Bochang Moon:
Input-Dependent Uncorrelated Weighting for Monte Carlo Denoising. 9:1-9:10
- Zhizhen Wu, Chenyu Zuo, Yuchi Huo, Yazhen Yuan, Yifan Peng, Guiyang Pu, Rui Wang, Hujun Bao:
Adaptive Recurrent Frame Prediction with Learnable Motion Vectors. 10:1-10:11
- Enrique Rosales, Fatemeh Teimury, Joshua Horacsek, Aria Salari, Xuebin Qin, Adi Bar-Lev, Xiaoqiang Zhe, Ligang Liu:
Fast-MSX: Fast Multiple Scattering Approximation. 11:1-11:9
See Details
- Theo Thonat, Iliyan Georgiev, François Beaune, Tamy Boubekeur:
RMIP: Displacement ray tracing via inversion and oblong bounding. 12:1-12:11
View Synthesis
- Guo Pu, Peng-Shuai Wang, Zhouhui Lian:
SinMPI: Novel View Synthesis from a Single Image with Expanded Multiplane Images. 13:1-13:10
- Mathias Harrer, Linus Franke, Laura Fink, Marc Stamminger, Tim Weyrich:
Inovis: Instant Novel-View Synthesis. 14:1-14:12
- Haotong Lin, Sida Peng, Zhen Xu, Tao Xie, Xingyi He, Hujun Bao, Xiaowei Zhou:
High-Fidelity and Real-Time Novel View Synthesis for Dynamic Scenes. 15:1-15:9
- Yash Kant, Aliaksandr Siarohin, Michael Vasilkovsky, Riza Alp Güler, Jian Ren, Sergey Tulyakov, Igor Gilitschenski:
Repurposing Diffusion Inpainters for Novel View Synthesis. 16:1-16:12
- Yuan-Chen Guo, Yan-Pei Cao, Chen Wang, Yu He, Ying Shan, Song-Hai Zhang:
VMesh: Hybrid Volume-Mesh Representation for Efficient View Synthesis. 17:1-17:11
Motion Synthesis with Awareness
- Yifeng Jiang, Jungdam Won, Yuting Ye, C. Karen Liu:
DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics. 18:1-18:11
- Kai Wang, Xiaoyu Xu, Yinping Zheng, Da Zhou, Shihui Guo, Yipeng Qin, Xiaohu Guo:
Computational Design of Wiring Layout on Tight Suits with Minimal Motion Resistance. 19:1-19:12
Holography
- Koray Kavakli, Liang Shi, Hakan Urey, Wojciech Matusik, Kaan Aksit:
Multi-color Holograms Improve Brightness in Holographic Displays. 20:1-20:11
- Antonin Gilles, Pierre Le Gargasson, Grégory Hocquet, Patrick Gioia:
Holographic Near-eye Display with Real-time Embedded Rendering. 21:1-21:10
- Eric Markley, Nathan Matsuda, Florian Schiffers, Oliver Cossairt, Grace Kuo:
Simultaneous Color Computer Generated Holography. 22:1-22:11
Full-Body Avatar
- Haotian Yang, Mingwu Zheng, Wanquan Feng, Haibin Huang, Yu-Kun Lai, Pengfei Wan, Zhongyuan Wang, Chongyang Ma:
Towards Practical Capture of High-Fidelity Relightable Avatars. 23:1-23:11
- Donglai Xiang, Fabian Prada, Zhe Cao, Kaiwen Guo, Chenglei Wu, Jessica K. Hodgins, Timur M. Bagautdinov:
Drivable Avatar Clothing: Faithful Full-Body Telepresence with Dynamic Clothing Driven by Sparse RGB-D Input. 24:1-24:11
How To Deal With NERF?
- Nagabhushan Somraj, Adithyan Karanayil, Rajiv Soundararajan:
SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions. 25:1-25:11
- Jingyu Zhuang, Chen Wang, Liang Lin, Lingjie Liu, Guanbin Li:
DreamEditor: Text-Driven 3D Scene Editing with Neural Fields. 26:1-26:10
Smooth-Parametric-LINE
- Siqi Wang, Chenxi Liu, Daniele Panozzo, Denis Zorin, Alec Jacobson:
Bézier Spline Simplification Using Locally Integrated Error Metrics. 27:1-27:11
- Kai Yan, Fujun Luan, Milos Hasan, Thibault Groueix, Valentin Deschaintre, Shuang Zhao:
PSDR-Room: Single Photo to Scene using Differentiable Rendering. 28:1-28:11
Applications & Innovations
- Martin Bálint, Karol Myszkowski, Hans-Peter Seidel, Gurprit Singh:
Joint Sampling and Optimisation for Inverse Rendering. 29:1-29:10
- Jiankai Xing, Xuejun Hu, Fujun Luan, Ling-Qi Yan, Kun Xu:
Extended Path Space Manifolds for Physically Based Differentiable Rendering. 30:1-30:11
Light, Shadows & Curves
- Jia-Mu Sun, Tong Wu, Yong-Liang Yang, Yu-Kun Lai, Lin Gao:
SOL-NeRF: Sunlight Modeling for Outdoor Scene Decomposition and Relighting. 31:1-31:11
- Lucas Valença, Jinsong Zhang, Michaël Gharbi, Yannick Hold-Geoffroy, Jean-François Lalonde:
Shadow Harmonization for Realistic Compositing. 32:1-32:12
Rendering, Neural Fields & Neural Caches
- Bingchen Gong, Yuehao Wang, Xiaoguang Han, Qi Dou:
SeamlessNeRF: Stitching Part NeRFs with Gradient Propagation. 33:1-33:10
- Zilu Li, Guandao Yang, Xi Deng, Christopher De Sa, Bharath Hariharan, Steve Marschner:
Neural Caches for Monte Carlo Partial Differential Equation Solvers. 34:1-34:10
TechScape
- Phillip Guan, Eric Penner, Joel Hegland, Benjamin Letham, Douglas Lanman:
Perceptual Requirements for World-Locked Rendering in AR and VR. 35:1-35:10
- Taimoor Tariq, Nathan Matsuda, Eric Penner, Jerry Jia, Douglas Lanman, Ajit Ninan, Alexandre Chapiro:
Perceptually Adaptive Real-Time Tone Mapping. 36:1-36:10
Anything Can be Neural
- Laura Fink, Darius Rückert, Linus Franke, Joachim Keinert, Marc Stamminger:
LiveNVS: Neural View Synthesis on Live RGB-D Streams. 37:1-37:11
- Linus Franke, Darius Rückert, Laura Fink, Matthias Innmann, Marc Stamminger:
VET: Visual Error Tomography for Point Cloud Completion and High-Quality Neural Rendering. 38:1-38:12
Materials
- Yuang Cui, Gaole Pan, Jian Yang, Lei Zhang, Lingqi Yan, Beibei Wang:
Multiple-bounce Smith Microfacet BRDFs using the Invariance Principle. 39:1-39:10
- Simon Lucas, Mickaël Ribardière, Romain Pacanowski, Pascal Barla:
A Micrograin BSDF Model for the Rendering of Porous Layers. 40:1-40:10
Beyond Skin Deep
- Radek Danecek, Kiran Chhatre, Shashank Tripathi, Yandong Wen, Michael J. Black, Timo Bolkart:
Emotional Speech-Driven Animation with Content-Emotion Disentanglement. 41:1-41:13
- Kripasindhu Sarkar, Marcel C. Bühler, Gengyan Li, Daoye Wang, Delio Vicini, Jérémy Riviere, Yinda Zhang, Sergio Orts-Escolano, Paulo F. U. Gotardo, Thabo Beeler, Abhimitra Meka:
LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces. 42:1-42:11
Technoscape
- Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal, Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaz Bozic, Dahua Lin, Michael Zollhöfer, Christian Richardt:
VR-NeRF: High-Fidelity Virtualized Walkable Spaces. 43:1-43:12
- Jordan Voas, Yili Wang, Qixing Huang, Raymond Mooney:
What is the Best Automated Metric for Text to Motion Generation? 44:1-44:11
Motion Synthesis With Awareness, Part II
- Sunmin Lee, Taeho Kang, Jungnam Park, Jehee Lee, Jungdam Won:
SAME: Skeleton-Agnostic Motion Embedding for Character Animation. 45:1-45:11
- Tianyu Li, Jungdam Won, Alexander Clegg, Jeonghwan Kim, Akshara Rai, Sehoon Ha:
ACE: Adversarial Correspondence Embedding for Cross Morphology Motion Retargeting from Human to Nonhuman Characters. 46:1-46:11
- Noshaba Cheema, Rui Xu, Nam Hee Kim, Perttu Hämäläinen, Vladislav Golyanik, Marc Habermann, Christian Theobalt, Philipp Slusallek:
Discovering Fatigued Movements for Virtual Character Animation. 47:1-47:12
All About Animation
- Xingjian Han, Benjamin Senderling, Stanley To, Deepak Kumar, Emily Whiting, Jun Saito:
GroundLink: A Dataset Unifying Human Body Movement and Ground Reaction Dynamics. 48:1-48:10
- Dhruv Agrawal, Martin Guay, Jakob Buhmann, Dominik Borer, Robert W. Sumner:
Pose and Skeleton-aware Neural IK for Pose and Motion Editing. 49:1-49:10
Avatar Portrait
- Cong Wang, Di Kang, Yan-Pei Cao, Linchao Bao, Ying Shan, Song-Hai Zhang:
Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar. 50:1-50:12
- Yue Wu, Sicheng Xu, Jianfeng Xiang, Fangyun Wei, Qifeng Chen, Jiaolong Yang, Xin Tong:
AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections. 51:1-51:9
Texture Magic
- Yuzhe Luo, Xiaogang Jin, Zherong Pan, Kui Wu, Qilong Kou, Xiajun Yang, Xifeng Gao:
Texture Atlas Compression Based on Repeated Content Removal. 52:1-52:11
- Tong Wu, Zhibing Li, Shuai Yang, Pan Zhang, Xingang Pan, Jiaqi Wang, Dahua Lin, Ziwei Liu:
HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image. 53:1-53:10
Creative Expression
- Peiying Zhang, Nanxuan Zhao, Jing Liao:
Text-Guided Vector Graphics Customization. 54:1-54:11
- Changshuo Wang, Lei Wu, Xiaole Liu, Xiang Li, Lei Meng, Xiangxu Meng:
Anything to Glyph: Artistic Font Synthesis via Text-to-Image Diffusion Model. 55:1-55:11
- Cheng Zheng, Guangyuan Zhao, Peter T. C. So:
Close the Design-to-Manufacturing Gap in Computational Optics with a 'Real2Sim' Learned Two-Photon Neural Lithography Simulator. 56:1-56:9
From Pixels to Gradients
- Fangzhou Gao, Lianghao Zhang, Li Wang, Jiamin Cheng, Jiawan Zhang:
Transparent Object Reconstruction via Implicit Differentiable Refraction Rendering. 57:1-57:11
- Qimin Chen, Zhiqin Chen, Hang Zhou, Hao Zhang:
ShaDDR: Interactive Example-Based Geometry and Texture Generation via 3D Shape Detailization and Differentiable Rendering. 58:1-58:11
Embed to a Different Space
- Ahmed Abdelreheem, Abdelrahman Eldesokey, Maks Ovsjanikov, Peter Wonka:
Zero-Shot 3D Shape Correspondence. 59:1-59:11
- Nima Fathollahi, Sean Chester:
Lock-free Vertex Clustering for Multicore Mesh Reduction. 60:1-60:10
Magic Diffusion Model
- Nir Zabari, Aharon Azulay, Alexey Gorkor, Tavi Halperin, Ohad Fried:
Diffusing Colors: Image Colorization with Text Guided Diffusion. 61:1-61:11
- Badour Albahar, Shunsuke Saito, Hung-Yu Tseng, Changil Kim, Johannes Kopf, Jia-Bin Huang:
Single-Image 3D Human Digitization with Shape-guided Diffusion. 62:1-62:11
- Bastien Doignies, Nicolas Bonneel, David Coeurjolly, Julie Digne, Loïs Paulin, Jean-Claude Iehl, Victor Ostromoukhov:
Example-Based Sampling with Diffusion Models. 63:1-63:11
Simulation and Animation of Natural Phenomena
- Mengyi Shan, Brian Curless, Ira Kemelmacher-Shlizerman, Steven M. Seitz:
Animating Street View. 64:1-64:12
- Haozhe Su, Siyu Zhang, Zherong Pan, Mridul Aanjaneya, Xifeng Gao, Kui Wu:
Real-time Height-field Simulation of Sand and Water Mixtures. 65:1-65:10
- Filippo Maggioli, Jonathan Klein, Torsten Hädrich, Emanuele Rodolà, Wojtek Palubicki, Sören Pirk, Dominik L. Michels:
A Physically-inspired Approach to the Simulation of Plant Wilting. 66:1-66:8
Navigating Shape Spaces
- Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Hao Zhang, Chi-Wing Fu:
CLIPXPlore: Coupled CLIP and Shape Spaces for 3D Shape Exploration. 67:1-67:12
- Arman Maesumi, Paul Guerrero, Noam Aigerman, Vladimir G. Kim, Matthew Fisher, Siddhartha Chaudhuri, Daniel Ritchie:
Explorable Mesh Deformation Subspaces from Unstructured 3D Generative Models. 68:1-68:11
- Milin Kodnongbua, Benjamin T. Jones, Maaz Bin Safeer Ahmad, Vladimir G. Kim, Adriana Schulz:
ReparamCAD: Zero-shot CAD Re-Parameterization for Interactive Manipulation. 69:1-69:12
Personalized Generative Models
- Libing Zeng, Lele Chen, Yi Xu, Nima Khademi Kalantari:
MyStyle++: A Controllable Personalized Generative Prior. 70:1-70:11
- Daohan Lu, Sheng-Yu Wang, Nupur Kumari, Rohan Agarwal, Mia Tang, David Bau, Jun-Yan Zhu:
Content-based Search for Deep Generative Models. 71:1-71:12
- Moab Arar, Rinon Gal, Yuval Atzmon, Gal Chechik, Daniel Cohen-Or, Ariel Shamir, Amit H. Bermano:
Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models. 72:1-72:10
Reconstruction
- Silvia Sellán, Christopher Batty, Oded Stein:
Reach For the Spheres: Tangency-aware surface reconstruction of SDFs. 73:1-73:11
- Silvia Sellán, Alec Jacobson:
Neural Stochastic Poisson Surface Reconstruction. 74:1-74:9
- Nuri Ryu, Minsu Gong, Geonung Kim, Joo-Haeng Lee, Sunghyun Cho:
360° Reconstruction From a Single Image Using Space Carved Outpainting. 75:1-75:11
Neural Physics
- Ryan S. Zesch, Vismay Modi, Shinjiro Sueda, David I. W. Levin:
Neural Collision Fields for Triangle Primitives. 76:1-76:10
- Cristian Romero, Dan Casas, Maurizio M. Chiaramonte, Miguel A. Otaduy:
Learning Contact Deformations with General Collider Descriptors. 77:1-77:10
- Zeshun Zong, Xuan Li, Minchen Li, Maurizio M. Chiaramonte, Wojciech Matusik, Eitan Grinspun, Kevin Carlberg, Chenfanfu Jiang, Peter Yichen Chen:
Neural Stress Fields for Reduced-order Elastoplasticity and Fracture. 78:1-78:11
Multidisciplinary Fusion
- Qiang Zhang, Yuanqiao Lin, Yubin Lin, Szymon Rusinkiewicz:
Hand Pose Estimation with Mems-Ultrasonic Sensors. 79:1-79:11
- Jialin Huang, Alexa F. Siu, Rana Hanocka, Yotam I. Gingold:
ShapeSonic: Sonifying Fingertip Interactions for Non-Visual Virtual Shape Perception. 80:1-80:9
- Eunjae Kim, Sukwon Choi, Jiyoung Kim, Jae-Ho Nah, Woonam Jung, Tae-Hyeong Lee, Yeon-Kug Moon, Woo-Chan Park:
An Architecture and Implementation of Real-Time Sound Propagation Hardware for Mobile Devices. 81:1-81:9
- Taeho Kang, Kyungjin Lee, Jinrui Zhang, Youngki Lee:
Ego3DPose: Capturing 3D Cues from Binocular Egocentric Views. 82:1-82:10
Flesh & Bones
- Pablo Ramón, Cristian Romero, Javier Tapia, Miguel A. Otaduy:
SFLSH: Shape-Dependent Soft-Flesh Avatars. 83:1-83:9
- Hongyu Tao, Shuaiying Hou, Changqing Zou, Hujun Bao, Weiwei Xu:
Neural Motion Graph. 84:1-84:11
Visualizing the Future
- Li Wang, Lianghao Zhang, Fangzhou Gao, Jiawan Zhang:
DeepBasis: Hand-Held Single-Image SVBRDF Capture via Two-Level Basis Material Model. 85:1-85:11
- Sam Sartor, Pieter Peers:
MatFusion: A Generative Diffusion Model for SVBRDF Capture. 86:1-86:10
- Guoqing Hao, Satoshi Iizuka, Kensho Hara, Edgar Simo-Serra, Hirokatsu Kataoka, Kazuhiro Fukui:
Diffusion-based Holistic Texture Rectification and Synthesis. 87:1-87:11
Visual Perception
- Jakob Andreas Bærentzen, Jeppe Revall Frisvad, Jonàs Martínez:
Curl Noise Jittering. 88:1-88:11
- Misa Korac, Corentin Salaün, Iliyan Georgiev, Pascal Grittmann, Philipp Slusallek, Karol Myszkowski, Gurprit Singh:
Perceptual error optimization for Monte Carlo animation rendering. 89:1-89:10
- Bin Chen, Akshay Jindal, Michal Piovarci, Chao Wang, Hans-Peter Seidel, Piotr Didyk, Karol Myszkowski, Ana Serrano, Rafal K. Mantiuk:
The effect of display capabilities on the gloss consistency between real and virtual objects. 90:1-90:11
What're Your Points?
- Markus Kettunen, Daqi Lin, Ravi Ramamoorthi, Thomas Bashford-Rogers, Chris Wyman:
Conditional Resampled Importance Sampling and ReSTIR. 91:1-91:11
- Songyin Wu, Sungye Kim, Zheng Zeng, Deepak Vembar, Sangeeta Jha, Anton Kaplanyan, Lingqi Yan:
ExtraSS: A Framework for Joint Spatial Super Sampling and Frame Extrapolation. 92:1-92:11
- Shinji Ogaki:
Nonlinear Ray Tracing for Displacement and Shell Mapping. 93:1-93:10
Text To Anything
- Dani Valevski, Danny Lumen, Yossi Matias, Yaniv Leviathan:
Face0: Instantaneously Conditioning a Text-to-Image Model on a Face. 94:1-94:10
- Shuai Yang, Yifan Zhou, Ziwei Liu, Chen Change Loy:
Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation. 95:1-95:11
- Omri Avrahami, Kfir Aberman, Ohad Fried, Daniel Cohen-Or, Dani Lischinski:
Break-A-Scene: Extracting Multiple Concepts from a Single Image. 96:1-96:12
See Through The Field
- Jiangkai Wu, Liming Liu, Yunpeng Tan, Quanlu Jia, Haodan Zhang, Xinggong Zhang:
ActRay: Online Active Ray Sampling for Radiance Fields. 97:1-97:10
- Kunal Gupta, Milos Hasan, Zexiang Xu, Fujun Luan, Kalyan Sunkavalli, Xin Sun, Manmohan Chandraker, Sai Bi:
MCNeRF: Monte Carlo Rendering and Denoising for Real-Time NeRFs. 98:1-98:11
- Zixi Shu, Ran Yi, Yuqi Meng, Yutong Wu, Lizhuang Ma:
RT-Octree: Accelerate PlenOctree Rendering with Batched Regular Tracking and Neural Denoising for Real-time Neural Radiance Fields. 99:1-99:11
Humans & Characters
- Chris Careaga, S. Mahdi H. Miangoleh, Yagiz Aksoy:
Intrinsic Harmonization for Illumination-Aware Image Compositing. 100:1-100:10
- Yuan Gong, Youxin Pang, Xiaodong Cun, Menghan Xia, Yingqing He, Haoxin Chen, Longyue Wang, Yong Zhang, Xintao Wang, Ying Shan, Yujiu Yang:
Interactive Story Visualization with Multiple Characters. 101:1-101:10
∇f = ?
- Kaizhang Kang, Zoubin Bi, Xiang Feng, Yican Dong, Kun Zhou, Hongzhi Wu:
Differentiable Dynamic Visible-Light Tomography. 102:1-102:12
- Logan Mosier, Morgan McGuire, Toshiya Hachisuka:
Quantum Ray Marching: Reformulating Light Transport for Quantum Computers. 103:1-103:9
- Sayantan Datta, Carl S. Marshall, Zhao Dong, Zhengqin Li, Derek Nowrouzezahrai:
Efficient Graphics Representation with Differentiable Indirection. 104:1-104:10
Put Things Together
- Tianyang Xue, Mingdong Wu, Lin Lu, Haoxuan Wang, Hao Dong, Baoquan Chen:
Learning Gradient Fields for Scalable and Generalizable Irregular Packing. 105:1-105:11
Head & Face
- Lingchen Yang, Gaspard Zoss, Prashanth Chandran, Paulo F. U. Gotardo, Markus Gross, Barbara Solenthaler, Eftychios Sifakis, Derek Bradley:
An Implicit Physical Face Model Driven by Expression and Style. 106:1-106:12
Computer Vision
- Dorian Chan, Matthew O'Toole:
Light-Efficient Holographic Illumination for Continuous-Wave Time-of-Flight Imaging. 107:1-107:10
- Kiseok Choi, Inchul Kim, Dongyoung Choi, Julio Marco, Diego Gutierrez, Min H. Kim:
Self-Calibrating, Fully Differentiable NLOS Inverse Rendering. 108:1-108:11
- Youngchan Kim, Wonjoon Jin, Sunghyun Cho, Seung-Hwan Baek:
Neural Spectro-polarimetric Fields. 109:1-109:11
- Floor Verhoeven, Tanguy Magne, Olga Sorkine-Hornung:
UVDoc: Neural Grid-based Document Unwarping. 110:1-110:11
Deformable Solids
- Yue Chang, Peter Yichen Chen, Zhecheng Wang, Maurizio M. Chiaramonte, Kevin Carlberg, Eitan Grinspun:
LiCROM: Linear-Subspace Continuous Reduced Order Modeling with Neural Fields. 111:1-111:12
- Ty Trusty, Otman Benchekroun, Eitan Grinspun, Danny M. Kaufman, David I. W. Levin:
Subspace Mixed Finite Elements for Real-Time Heterogeneous Elastodynamics. 112:1-112:10
- Qiqin Le, Yitong Deng, Jiamu Bu, Bo Zhu, Tao Du:
Second-Order Finite Elements for Deformable Surfaces. 113:1-113:10
- Yunxiao Zhang, Zixiong Wang, Zihan Zhao, Rui Xu, Shuang-Min Chen, Shiqing Xin, Wenping Wang, Changhe Tu:
A Hessian-Based Field Deformer for Real-Time Topology-Aware Shape Editing. 114:1-114:11
Motion Capture and Reconstruction
- Lin Cong, Philipp Ruppel, Yizhou Wang, Xiang Pan, Norman Hendrich, Jianwei Zhang:
Efficient Human Motion Reconstruction from Monocular Videos with Physical Consistency Loss. 115:1-115:9
- Shaohua Pan, Qi Ma, Xinyu Yi, Weifeng Hu, Xiong Wang, Xingkang Zhou, Jijunnan Li, Feng Xu:
Fusing Monocular Images and Sparse IMU Signals for Real-time Human Motion Capture. 116:1-116:11
- Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Guanglong Xu, Xianli Gu, Jingxiang Li, Qilong Kou, He Wang, Tianjia Shao, Kun Zhou, Xiaogang Jin:
A Locality-based Neural Solver for Optical Motion Capture. 117:1-117:11
- Taesoo Kwon, Taehong Gu, Jaewon Ahn, Yoonsang Lee:
Adaptive Tracking of a Single-Rigid-Body Character in Various Environments. 118:1-118:11
Neural Shape Representation
- Yiyu Zhuang, Qi Zhang, Ying Feng, Hao Zhu, Yao Yao, Xiaoyu Li, Yan-Pei Cao, Ying Shan, Xun Cao:
Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail. 119:1-119:10
- Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller:
Compact Neural Graphics Primitives with Learned Hash Probing. 120:1-120:10
- Zoë Marschner, Silvia Sellán, Hsueh-Ti Derek Liu, Alec Jacobson:
Constructive Solid Geometry on Neural Signed Distance Fields. 121:1-121:12
- Qing Li, Huifang Feng, Kanle Shi, Yi Fang, Yu-Shen Liu, Zhizhong Han:
Neural Gradient Learning and Optimization for Oriented Point Normal Estimation. 122:1-122:9