SIGGRAPH Asia 2024 Conference Papers: Tokyo, Japan
- Takeo Igarashi, Ariel Shamir, Hao (Richard) Zhang:
SIGGRAPH Asia 2024 Conference Papers, SA 2024, Tokyo, Japan, December 3-6, 2024. ACM 2024, ISBN 979-8-4007-1131-2
Going Big in Rendering
- Jiabin Liang, Lanqing Zhang, Zhuoran Zhao, Xiangyu Xu:
InfNeRF: Towards Infinite Scale NeRF Rendering with O(log n) Space Complexity. 1:1-1:11
- Saswat Subhajyoti Mallick, Rahul Goel, Bernhard Kerbl, Markus Steinberger, Francisco Vicente Carrasco, Fernando De la Torre:
Taming 3DGS: High-Quality Radiance Fields with Limited Resources. 2:1-2:11
Make It Yours - Customizing Image Generation
- Kuan-Chieh Wang, Daniil Ostashev, Yuwei Fang, Sergey Tulyakov, Kfir Aberman:
MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation. 3:1-3:12
- Ziqi Huang, Tianxing Wu, Yuming Jiang, Kelvin C. K. Chan, Ziwei Liu:
ReVersion: Diffusion-Based Relation Inversion from Images. 4:1-4:11
- Moab Arar, Andrey Voynov, Amir Hertz, Omri Avrahami, Shlomi Fruchter, Yael Pritch, Daniel Cohen-Or, Ariel Shamir:
PALP: Prompt Aligned Personalization of Text-to-Image Models. 5:1-5:11
- Maxwell Jones, Sheng-Yu Wang, Nupur Kumari, David Bau, Jun-Yan Zhu:
Customizing Text-to-Image Models with a Single Image Pair. 6:1-6:13
- Nupur Kumari, Grace Su, Richard Zhang, Taesung Park, Eli Shechtman, Jun-Yan Zhu:
Customizing Text-to-Image Diffusion with Object Viewpoint Control. 7:1-7:13
Modeling and PDEs
- Ryusuke Sugimoto, Nathan D. King, Toshiya Hachisuka, Christopher Batty:
Projected Walk on Spheres: A Monte Carlo Closest Point Method for Surface PDEs. 8:1-8:10
- Haocheng Yuan, Adrien Bousseau, Hao Pan, Quancheng Zhang, Niloy J. Mitra, Changjian Li:
DiffCSG: Differentiable CSG via Rasterization. 9:1-9:10
- Yun-Chun Chen, Selena Ling, Zhiqin Chen, Vladimir G. Kim, Matheus Gadelha, Alec Jacobson:
Text-guided Controllable Mesh Refinement for Interactive 3D Modeling. 10:1-10:11
Neural Relighting and Reflection
- Mingming He, Pascal Clausen, Ahmet Levent Tasel, Li Ma, Oliver Pilarski, Wenqi Xian, Laszlo Rikker, Xueming Yu, Ryan Burgert, Ning Yu, Paul E. Debevec:
DifFRelight: Diffusion-Based Facial Performance Relighting. 11:1-11:12
- Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, Hongzhi Wu:
GS3: Efficient Relighting with Triple Gaussian Splatting. 12:1-12:12
- Zhiyi Kuang, Yanchao Yang, Siyan Dong, Jiayue Ma, Hongbo Fu, Youyi Zheng:
OLAT Gaussians for Generic Relightable Appearance Acquisition. 13:1-13:11
- Chen Gao, Yipeng Wang, Changil Kim, Jia-Bin Huang, Johannes Kopf:
Planar Reflection-Aware Neural Radiance Fields. 14:1-14:10
- Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, Jonathan T. Barron:
NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections. 15:1-15:10
- Xiuchao Wu, Jiamin Xu, Chi Wang, Yifan Peng, Qixing Huang, James Tompkin, Weiwei Xu:
Local Gaussian Density Mixtures for Unstructured Lumigraph Rendering. 16:1-16:11
Design It All: Font, Paint, and Colors
- Alexandre Binninger, Olga Sorkine-Hornung:
SD-πXL: Generating Low-Resolution Quantized Imagery via Score Distillation. 17:1-17:12
- Yiren Song, Shijie Huang, Chen Yao, Hai Ci, Xiaojun Ye, Jiaming Liu, Yuxuan Zhang, Mike Zheng Shou:
ProcessPainter: Learning to draw from sequence data. 18:1-18:10
- Bowei Chen, Yifan Wang, Brian Curless, Ira Kemelmacher-Shlizerman, Steven M. Seitz:
Inverse Painting: Reconstructing The Painting Process. 19:1-19:11
Path Guiding, Scattering
- Joshua Meyer, Alexander Rath, Ömercan Yazici, Philipp Slusallek:
MARS: Multi-sample Allocation through Russian roulette and Splitting. 20:1-20:10
- Honghao Dong, Rui Su, Guoping Wang, Sheng Li:
Efficient Neural Path Guiding with 4D Modeling. 21:1-21:11
- Jiaxiong Qiu, Ruihong Cen, Zhong Li, Han Yan, Ming-Ming Cheng, Bo Ren:
NeuSmoke: Efficient Smoke Reconstruction and View Synthesis with Neural Transportation Fields. 22:1-22:12
- Rui Su, Honghao Dong, Jierui Ren, Haojie Jin, Yisong Chen, Guoping Wang, Sheng Li:
Dynamic Neural Radiosity with Multi-grid Decomposition. 23:1-23:12
- Chuankun Zheng, Yuchi Huo, Hongxiang Huang, Hongtao Sheng, Junrong Huang, Rui Tang, Hao Zhu, Rui Wang, Hujun Bao:
Neural Global Illumination via Superposed Deformable Feature Fields. 24:1-24:11
Color and Display
- Robert Wanat, Michael D. Smith, Junwoo Jang, Sally Hattori:
COMFI: A Calibrated Observer Metameric Failure Index for Color Critical Tasks. 25:1-25:9
- Brian Chao, Manu Gopakumar, Suyeon Choi, Jonghyun Kim, Liang Shi, Gordon Wetzstein:
Large Étendue 3D Holographic Display with Content-adaptive Dynamic Fourier Modulation. 26:1-26:12
- Yancheng Cai, Ali Bozorgian, Maliha Ashraf, Robert Wanat, Rafal K. Mantiuk:
elaTCSF: A Temporal Contrast Sensitivity Function for Flicker Detection and Modeling Variable Refresh Rate Flicker. 27:1-27:11
Look at it Differently: Novel View Synthesis
- Guo Pu, Yiming Zhao, Zhouhui Lian:
Pano2Room: Novel View Synthesis from a Single Indoor Panorama. 28:1-28:11
- Marcel C. Bühler, Gengyan Li, Erroll Wood, Leonhard Helminger, Xu Chen, Tanmay Shah, Daoye Wang, Stephan J. Garbin, Sergio Orts-Escolano, Otmar Hilliges, Dmitry Lagun, Jérémy Riviere, Paulo F. U. Gotardo, Thabo Beeler, Abhimitra Meka, Kripasindhu Sarkar:
Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures. 29:1-29:12
- Colton Stearns, Adam W. Harley, Mikaela Uy, Florian Dubost, Federico Tombari, Gordon Wetzstein, Leonidas J. Guibas:
Dynamic Gaussian Marbles for Novel View Synthesis of Casual Monocular Videos. 30:1-30:11
- Ilya Chugunov, Amogh Joshi, Kiran Murthy, Francois Bleibel, Felix Heide:
Neural Light Spheres for Implicit Image Stitching and View Synthesis. 31:1-31:11
Your Wish is my Command: Generate, Edit, Rearrange
- Wenhao Li, Zhiyuan Yu, Qijin She, Zhinan Yu, Yuqing Lan, Chenyang Zhu, Ruizhen Hu, Kai Xu:
LLM-enhanced Scene Graph Learning for Household Rearrangement. 32:1-32:11
- Nan Jiang, Zimo He, Zi Wang, Hongjie Li, Yixin Chen, Siyuan Huang, Yixin Zhu:
Autonomous Character-Scene Interaction Synthesis from Text Instruction. 33:1-33:11
- Yunxin Li, Haoyuan Shi, Baotian Hu, Longyue Wang, Jiashun Zhu, Jinyi Xu, Zhen Zhao, Min Zhang:
Anim-Director: A Large Multimodal Model Powered Agent for Controllable Animation Video Generation. 34:1-34:11
Splats and Blobs: Generate, Deform, Diffuse
- Qi-Yuan Feng, Geng-Chen Cao, Hao-Xiang Chen, Qun-Ce Xu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu:
EVSplitting: An Efficient and Visually Consistent Splitting Algorithm for 3D Gaussian Splatting. 35:1-35:11
- Chao Liu, Weili Nie, Sifei Liu, Abhishek Badki, Hang Su, Morteza Mardani, Benjamin Eckart, Arash Vahdat:
BlobGEN-3D: Compositional 3D-Consistent Freeview Image Generation with 3D Blobs. 36:1-36:11
- Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Angela Dai, Matthias Nießner:
L3DG: Latent 3D Gaussian Diffusion. 37:1-37:11
It's All About Change: Image Editing
- Omri Avrahami, Rinon Gal, Gal Chechik, Ohad Fried, Dani Lischinski, Arash Vahdat, Weili Nie:
DiffUHaul: A Training-Free Method for Object Dragging in Images. 38:1-38:12
- Joonghyuk Shin, Daehyeon Choi, Jaesik Park:
InstantDrag: Improving Interactivity in Drag-based Image Editing. 39:1-39:10
- Or Patashnik, Rinon Gal, Daniel Cohen-Or, Jun-Yan Zhu, Fernando De la Torre:
Consolidating Attention Features for Multi-view Image Editing. 40:1-40:12
- Gilad Deutch, Rinon Gal, Daniel Garibi, Or Patashnik, Daniel Cohen-Or:
TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models. 41:1-41:12
Animating Humans
- Feng Qiu, Wei Zhang, Chen Liu, Rudong An, Lincheng Li, Yu Ding, Changjie Fan, Zhipeng Hu, Xin Yu:
FreeAvatar: Robust 3D Facial Animation Transfer by Learning an Expression Foundation Model. 42:1-42:11
- Junfeng Lyu, Feng Xu:
High-quality Animatable Eyelid Shapes from Lightweight Captures. 43:1-43:11
- Nikos Athanasiou, Alpár Cseke, Markos Diomataris, Michael J. Black, Gül Varol:
MotionFix: Text-Driven 3D Human Motion Editing. 44:1-44:11
- Stevo Rackovic, Dusan Jakovetic, Cláudia Soares:
Refined Inverse Rigging: A Balanced Approach to High-fidelity Blendshape Animation. 45:1-45:9
Threads of Reality: Garments & Knitting
- Cheng Zhang, Yuanhao Wang, Francisco Vicente, Chenglei Wu, Jinlong Yang, Thabo Beeler, Fernando De la Torre:
FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Images. 46:1-46:12
Beyond RGB
- Xinge Yang, Matheus Souza, Kunyi Wang, Praneeth Chakravarthula, Qiang Fu, Wolfgang Heidrich:
End-to-End Hybrid Refractive-Diffractive Lens Design with Differentiable Ray-Wave Model. 47:1-47:11
- Parsa Mirdehghan, Brandon Buscaino, Maxx Wu, Doug Charlton, Mohammad E. Mousa-Pasandi, Kiriakos N. Kutulakos, David B. Lindell:
Coherent Optical Modems for Full-Wavefield Lidar. 48:1-48:10
- Oscar Pueyo-Ciutad, Julio Marco, Stephane Schertzer, Frank Christnacher, Martin Laurenzis, Diego Gutierrez, Albert Redo-Sanchez:
Time-Gated Polarization for Active Non-Line-Of-Sight Imaging. 49:1-49:11
Domo Arigato, Mr. Roboto / Robots and Characters
- Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, Moritz Bächer:
Robot Motion Diffusion Model: Motion Generation for Robotic Characters. 50:1-50:9
- Xujie Shen, Haocheng Peng, Zesong Yang, Juzhan Xu, Hujun Bao, Ruizhen Hu, Zhaopeng Cui:
PC-Planner: Physics-Constrained Self-Supervised Learning for Robust Neural Motion Planning with Shape-Aware Distance Function. 51:1-51:11
- Agastya Kalra, Vage Taamazyan, Alberto Dall'olio, Raghav Khanna, Tomas Gerlich, Georgia Giannopolou, Guy Stoppi, Daniel Baxter, Abhijit Ghosh, Rick Szeliski, Kartik Venkataraman:
A Plentoptic 3D Vision System. 52:1-52:12
- Otman Benchekroun, Kaixiang Xie, Hsueh-Ti Derek Liu, Eitan Grinspun, Sheldon Andrews, Victor B. Zordan:
Actuators A La Mode: Modal Actuations for Soft Body Locomotion. 53:1-53:10
- Xiangjun Tang, Linjun Wu, He Wang, Yiqian Wu, Bo Hu, Songnan Li, Xu Gong, Yuchen Liao, Qilong Kou, Xiaogang Jin:
Decoupling Contact for Fine-Grained Motion Style Transfer. 54:1-54:11
Computational Design
- Mulun Na, Hector A. Jimenez Romero, Xinge Yang, Jonathan Klein, Dominik L. Michels, Wolfgang Heidrich:
End-to-end Optimization of Fluidic Lenses. 55:1-55:10
- Qibing Wu, Zhihao Zhang, Xin Yan, Fanchao Zhong, Yueze Zhu, Xurong Lu, Runze Xue, Rui Li, Changhe Tu, Haisen Zhao:
Tune-It: Optimizing Wire Reconfiguration for Sculpture Manufacturing. 56:1-56:11
- Xingjian Han, Yu Jiang, Weiming Wang, Guoxin Fang, Simeon Gill, Zhiqiang Zhang, Shengfa Wang, Jun Saito, Deepak Kumar, Zhongxuan Luo, Emily Whiting, Charlie C. L. Wang:
Motion-Driven Neural Optimizer for Prophylactic Braces Made by Distributed Microstructures. 57:1-57:11
- Qun-Ce Xu, Hao-Xiang Chen, Jiacheng Hua, Xiaohua Zhan, Yong-Liang Yang, Tai-Jiang Mu:
FragmentDiff: A Diffusion Model for Fractured Object Assembly. 58:1-58:12
Text, Texturing, and Stylization
- Mingxin Yang, Jianwei Guo, Yuzhi Chen, Lan Chen, Pu Li, Zhanglin Cheng, Xiaopeng Zhang, Hui Huang:
InstanceTex: Instance-level Controllable Texture Synthesis for 3D Scenes via Diffusion Priors. 59:1-59:11
- Yuxin Liu, Minshan Xie, Hanyuan Liu, Tien-Tsin Wong:
Text-Guided Texturing by Synchronized Multi-View Diffusion. 60:1-60:11
- Peihan Tu, Li-Yi Wei, Matthias Zwicker:
Compositional Neural Textures. 61:1-61:11
- I-Sheng Fang, Yue-Hua Han, Jun-Cheng Chen:
Camera Settings as Tokens: Modeling Photography on Latent Diffusion Models. 62:1-62:11
To Bend or not to Bend?
- Juan Sebastian Montes Maestre, Stelian Coros, Bernhard Thomaszewski:
Q3T Prisms: A Linear-Quadratic Solid Shell Element for Elastoplastic Surfaces. 63:1-63:9
- Antoine Chan-Lock, Miguel A. Otaduy:
Polar Interpolants for Thin-Shell Microstructure Homogenization. 64:1-64:10
- Chang Yu, Xuan Li, Lei Lan, Yin Yang, Chenfanfu Jiang:
XPBI: Position-Based Dynamics with Smoothing Kernels Handles Continuum Inelasticity. 65:1-65:12
Deform Your Axis: Skeletons and Cages
- Qijia Huang, Pierre Kraemer, Sylvain Thery, Dominique Bechmann:
Dynamic Skeletonization via Variational Medial Axis Sampling. 66:1-66:11
- Zhehui Lin, Renjie Chen:
Polynomial Cauchy Coordinates for Curved Cages. 67:1-67:8
(Don't) Make Some Noise: Denoising
- Hiroyuki Sakai, Christian Freude, Thomas Auzinger, David Hahn, Michael Wimmer:
A Statistical Approach to Monte Carlo Denoising. 68:1-68:11
- Difei Yan, Shaokun Zheng, Ling-Qi Yan, Kun Xu:
Filtering-Based Reconstruction for Gradient-Domain Rendering. 69:1-69:10
- Wesley Chang, Xuanda Yang, Yash Belhe, Ravi Ramamoorthi, Tzu-Mao Li:
Spatiotemporal Bilateral Gradient Filtering for Inverse Rendering. 70:1-70:11
Keep in Touch / No Touching
- Peng Fan, Wei Wang, Ruofeng Tong, Hailong Li, Min Tang:
gDist: Efficient Distance Computation between 3D Meshes on GPU. 71:1-71:11
3D Printing, Manufacturing
- Emiliano Luci, Fabio Pellacini, Vahid Babaei:
Differentiable Modeling of Material Spreading in Inkjet Printing for Appearance Prediction. 72:1-72:10
Going Fast: Accelerated Rendering
- Xinzhe Wang, Ran Yi, Lizhuang Ma:
AdR-Gaussian: Accelerating Gaussian Splatting with Adaptive Radius. 73:1-73:10
- Luc Guy Rosenzweig, Brennan Shacklett, Warren Xia, Kayvon Fatahalian:
High-Throughput Batch Rendering for Embodied AI. 74:1-74:9
Capture Me If You Can
- Briac Toussaint, Laurence Boissieux, Diego Thomas, Edmond Boyer, Jean-Sébastien Franco:
Millimetric Human Surface Capture in Minutes. 75:1-75:12
- Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Zijiao Zeng, Qilong Kou, He Wang, Xiaogang Jin:
RoMo: A Robust Solver for Full-body Unlabeled Optical Motion Capture. 76:1-76:11
- Ruocheng Wang, Pei Xu, Haochen Shi, Elizabeth Schumann, C. Karen Liu:
FürElise: Capturing and Physically Synthesizing Hand Motion of Piano Performance. 77:1-77:11
Neural Shapes
- Tianchang Shen, Zhaoshuo Li, Marc T. Law, Matan Atzmon, Sanja Fidler, James Lucas, Jun Gao, Nicholas Sharp:
SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes. 78:1-78:11
- Hongbo Li, Haikuan Zhu, Sikai Zhong, Ningna Wang, Cheng Lin, Xiaohu Guo, Shiqing Xin, Wenping Wang, Jing Hua, Zichun Zhong:
NASM: Neural Anisotropic Surface Meshing. 79:1-79:12
- Xiangyu Zhu, Zhiqin Chen, Ruizhen Hu, Xiaoguang Han:
Controllable Shape Modeling with Neural Generalized Cylinder. 80:1-80:11
- Mingze Sun, Chen Guo, Puhua Jiang, Shiwei Mao, Yurun Chen, Ruqi Huang:
SRIF: Semantic Shape Registration Empowered by Diffusion-based Image Morphing and Flow Estimation. 81:1-81:11
Sampling and Light Transport
- Yusuke Tokuyoshi, Sho Ikeda, Paritosh Kulkarni, Takahiro Harada:
Hierarchical Light Sampling with Accurate Spherical Gaussian Lighting. 82:1-82:11
- Ziyang Fu, Yash Belhe, Haolin Lu, Liwen Wu, Bing Xu, Tzu-Mao Li:
BSDF importance sampling using a diffusion model. 83:1-83:11
- Joey Litalien, Milos Hasan, Fujun Luan, Krishna Mullia, Iliyan Georgiev:
Neural Product Importance Sampling via Warp Composition. 84:1-84:11
- Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael Zollhöfer, Thomas Leimkühler, Christian Theobalt:
Manifold Sampling for Differentiable Uncertainty in Radiance Fields. 85:1-85:11
Characters and Crowds
- Takara Everest Truong, Michael Piseno, Zhaoming Xie, C. Karen Liu:
PDP: Physics-Based Character Animation via Diffusion Policy. 86:1-86:10
- Sigal Raab, Inbar Gat, Nathan Sala, Guy Tevet, Rotem Shalev-Arkushin, Ohad Fried, Amit Haim Bermano, Daniel Cohen-Or:
Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer. 87:1-87:13
- Sunwoo Kim, Minwook Chang, Yoonhee Kim, Jehee Lee:
Body Gesture Generation for Multimodal Conversational Agents. 88:1-88:11
Mesh Processing Unleashed
- Haoran Sun, Jingkai Wang, Hujun Bao, Jin Huang:
GauWN: Gaussian-smoothed Winding Number and its Derivatives. 89:1-89:9
- Zhe Su, Yiying Tong, Guowei Wei:
Hodge decomposition of vector fields in Cartesian grids. 90:1-90:10
- Jihyeon Je, Jiayi Liu, Guandao Yang, Boyang Deng, Shengqu Cai, Gordon Wetzstein, Or Litany, Leonidas J. Guibas:
Robust Symmetry Detection via Riemannian Langevin Dynamics. 91:1-91:11
- Dylan Rowe, Alec Jacobson, Oded Stein:
Sharpening and Sparsifying with Surface Hessians. 92:1-92:12
Diffusing Your Videos
- Johanna Karras, Yingwei Li, Nan Liu, Luyang Zhu, Innfarn Yoo, Andreas Lugmayr, Chris Lee, Ira Kemelmacher-Shlizerman:
Fashion-VDM: Video Diffusion Model for Virtual Try-On. 93:1-93:11
- Omer Bar-Tal, Hila Chefer, Omer Tov, Charles Herrmann, Roni Paiss, Shiran Zada, Ariel Ephrat, Junhwa Hur, Guanghui Liu, Amit Raj, Yuanzhen Li, Michael Rubinstein, Tomer Michaeli, Oliver Wang, Deqing Sun, Tali Dekel, Inbar Mosseri:
Lumiere: A Space-Time Diffusion Model for Video Generation. 94:1-94:11
- Wenqi Ouyang, Yi Dong, Lei Yang, Jianlou Si, Xingang Pan:
I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models. 95:1-95:11
- Jingwei Ma, Erika Lu, Roni Paiss, Shiran Zada, Aleksander Holynski, Tali Dekel, Brian Curless, Michael Rubinstein, Forrester Cole:
VidPanos: Generative Panoramic Videos from Casual Panning Videos. 96:1-96:11
- Wan-Duo Kurt Ma, John P. Lewis, W. Bastiaan Kleijn:
TrailBlazer: Trajectory Control for Diffusion-Based Video Generation. 97:1-97:11
Fill the Gap: What Happened In-between?
- Jie Zhou, Chufeng Xiao, Miu-Ling Lam, Hongbo Fu:
DrawingSpinUp: 3D Animation from Single Character Drawings. 98:1-98:10
- Ziran Zhang, Yongrui Ma, Yueting Chen, Feng Zhang, Jinwei Gu, Tianfan Xue, Shi Guo:
From Sim-to-Real: Toward General Event-based Low-light Frame Interpolation with Per-scene Optimization. 99:1-99:10
Generate It All: Scenes, Humans, LEGOs
- Han Yan, Yang Li, Zhennan Wu, Shenzhou Chen, Weixuan Sun, Taizhang Shang, Weizhe Liu, Tian Chen, Xiaqiang Dai, Chao Ma, Hongdong Li, Pan Ji:
Frankenstein: Generating Semantic-Compositional 3D Scenes in One Tri-Plane. 100:1-100:11
- Xiao-Lei Li, Haodong Li, Hao-Xiang Chen, Tai-Jiang Mu, Shi-Min Hu:
DIScene: Object Decoupling and Interaction Modeling for Complex Scene Generation. 101:1-101:12
- Jianchun Chen, Jian Wang, Yinda Zhang, Rohit Pandey, Thabo Beeler, Marc Habermann, Christian Theobalt:
EgoAvatar: Egocentric View-Driven and Photorealistic Full-body Avatars. 102:1-102:11
- Amir Barda, Vladimir G. Kim, Noam Aigerman, Amit Haim Bermano, Thibault Groueix:
MagicClay: Sculpting Meshes With Generative Neural Fields. 103:1-103:10
Diffuse and Conquer
- Xuan Gao, Haiyao Xiao, Chenglai Zhong, Shimin Hu, Yudong Guo, Juyong Zhang:
Portrait Video Editing Empowered by Multimodal Generative Priors. 104:1-104:11
- Abdul Basit Anees, Ahmet Canberk Baykal, Muhammed Burak Kizil, Duygu Ceylan, Erkut Erdem, Aykut Erdem:
HyperGAN-CLIP: A Unified Framework for Domain Adaptation, Image Synthesis and Manipulation. 105:1-105:12
- Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, Robin Rombach:
Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation. 106:1-106:11
Talking Heads and Moving Faces
- Luchuan Song, Lele Chen, Celong Liu, Pinxin Liu, Chenliang Xu:
TextToon: Real-Time Text Toonify Head Avatar from Single Video. 107:1-107:11
- Longhao Zhang, Shuang Liang, Zhipeng Ge, Tianshu Hu:
PersonaTalk: Bring Attention to Your Persona in Visual Dubbing. 108:1-108:9
- Jiazhi Guan, Quanwei Yang, Kaisiyuan Wang, Hang Zhou, Shengyi He, Zhiliang Xu, Haocheng Feng, Errui Ding, Jingdong Wang, Hongtao Xie, Youjian Zhao, Ziwei Liu:
TALK-Act: Enhance Textural-Awareness for 2D Speaking Avatar Reenactment with Diffusion Model. 109:1-109:11
- Yue Ma, Hongyu Liu, Hongfa Wang, Heng Pan, Yingqing He, Junkun Yuan, Ailing Zeng, Chengfei Cai, Heung-Yeung Shum, Wei Liu, Qifeng Chen:
Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation. 110:1-110:12
- Changan Zhu, Chris Joslin:
Fabrig: A Cloth-Simulated Transferable 3D Face Parameterization. 111:1-111:10
Beauty Salon: Hair, Face, Lips, and Teeth
- Haomiao Wu, Alvin Shi, A. M. Darke, Theodore Kim:
Curly-Cue: Geometric Methods for Highly Coiled Hair. 112:1-112:11
- Kelian Baert, Shrisha Bharadwaj, Fabien Castan, Benoit Maujean, Marc Christie, Victoria Fernández Abrevaya, Adnane Boukhayma:
SPARK: Self-supervised Personalized Real-time Monocular Face Capture. 113:1-113:12
- Yujian Zheng, Yuda Qiu, Leyang Jin, Chongyang Ma, Haibin Huang, Di Zhang, Pengfei Wan, Xiaoguang Han:
Towards Unified 3D Hair Reconstruction from Single-View Portraits. 114:1-114:11
- Feisal Rasras, Stanislav Pidhorskyi, Tomas Simon, Hallison Paz, He Wen, Jason M. Saragih, Javier Romero:
The Lips, the Teeth, the tip of the Tongue: LTT Tracking. 115:1-115:11
- Givi Meishvili, James Clemoes, Charlie Hewitt, Zafiirah Hosenie, Xian Xiao, Martin de La Gorce, Tibor Takács, Tadas Baltrusaitis, Antonio Criminisi, Chyna McRae, Nina Jablonski, Marta Wilczkowiak:
Hairmony: Fairness-aware hairstyle classification. 116:1-116:11
Differentiable Rendering
- Kai Yan, Vincent Pegoraro, Marc Droske, Jirí Vorba, Shuang Zhao:
Differentiating Variance for Variance-Aware Inverse Rendering. 117:1-117:10
- Peiyu Xu, Sai Praveen Bangaru, Tzu-Mao Li, Shuang Zhao:
Markov-Chain Monte Carlo Sampling of Visibility Boundaries for Differentiable Rendering. 118:1-118:11
- Zichen Wang, Xi Deng, Ziyi Zhang, Wenzel Jakob, Steve Marschner:
A Simple Approach to Differentiable Rendering of SDFs. 119:1-119:11
Elastics / Solvers / Neural Physics
- Honglin Chen, Hsueh-Ti Derek Liu, Alec Jacobson, David I. W. Levin, Changxi Zheng:
Trust-Region Eigenvalue Filtering for Projected Newton. 120:1-120:10
- Yuanyuan Tao, Ivan Puhachov, Derek Nowrouzezahrai, Paul G. Kry:
Neural Implicit Reduced Fluid Simulation. 121:1-121:11
- Meng Zhang, Jun Li:
Neural Garment Dynamic Super-Resolution. 122:1-122:11
Modeling and Reconstruction
- Aleksander Plocharski, Jan Swidzinski, Joanna Porter-Sobieraj, Przemyslaw Musialski:
FaçAID: A Transformer Model for Neuro-Symbolic Facade Reconstruction. 123:1-123:11
- Xi Deng, Lifan Wu, Bruce Walter, Ravi Ramamoorthi, Eugene d'Eon, Steve Marschner, Andrea Weidlich:
Reconstructing translucent thin objects from photos. 124:1-124:11
- Haruo Fujiwara, Yusuke Mukuta, Tatsuya Harada:
Style-NeRF2NeRF: 3D Style Transfer from Style-Aligned Multi-View Images. 125:1-125:10
My Name is Carl: Gaussian Humans
- Tobias Kirschstein, Simon Giebenhain, Jiapeng Tang, Markos Georgopoulos, Matthias Nießner:
GGHead: Fast and Generalizable 3D Gaussian Heads. 126:1-126:11
- Simon Giebenhain, Tobias Kirschstein, Martin Rünz, Lourdes Agapito, Matthias Nießner:
NPGA: Neural Parametric Gaussian Avatars. 127:1-127:11
- Junxuan Li, Chen Cao, Gabriel Schwartz, Rawal Khirodkar, Christian Richardt, Tomas Simon, Yaser Sheikh, Shunsuke Saito:
URAvatar: Universal Relightable Gaussian Codec Avatars. 128:1-128:11
Points, Graphs, Surfaces, and Fields
- Jisung Hwang, Minhyuk Sung:
Occupancy-Based Dual Contouring. 129:1-129:11
- Diana Marin, Amal Dev Parakkat, Stefan Ohrhallinger, Michael Wimmer, Steve Oudot, Pooran Memari:
SING: Stability-Incorporated Neighborhood Graph. 130:1-130:10
Appearance Modeling
- Di Luo, Hanxiao Sun, Lei Ma, Jian Yang, Beibei Wang:
Correlation-aware Encoder-Decoder with Adapters for SVBRDF Acquisition. 131:1-131:10
- Zilin Xu, Zahra Montazeri, Beibei Wang, Ling-Qi Yan:
A Dynamic By-example BTF Synthesis Scheme. 132:1-132:10
(Do) Make Some Noise
- Qingrong Cheng, Xu Li, Xinghui Fu:
SIGGesture: Generalized Co-Speech Gesture Synthesis via Semantic Injection with Large-Scale Pre-Training Diffusion Models. 133:1-133:11
- Kangrui Xue, Jui-Hsien Wang, Timothy R. Langlois, Doug L. James:
WaveBlender: Practical Sound-Source Animation in Blended Domains. 134:1-134:10
- Sifei Li, Weiming Dong, Yuxin Zhang, Fan Tang, Chongyang Ma, Oliver Deussen, Tong-Yee Lee, Changsheng Xu:
Dance-to-Music Generation with Encoder-based Textual Inversion. 135:1-135:11
- Matthew Caren, Kartik Chandra, Joshua B. Tenenbaum, Jonathan Ragan-Kelley, Karima Ma:
Sketching With Your Voice: "Non-Phonorealistic" Rendering of Sounds via Vocal Imitation. 136:1-136:11
Interactive Methods and VR/AR
- Yotam Erel, Or Kozlovsky-Mordenfeld, Daisuke Iwai, Kosuke Sato, Amit H. Bermano:
Casper DPM: Cascaded Perceptual Dynamic Projection Mapping onto Hands. 137:1-137:10
- Haichen Gao, Shaoyu Cai, Yuhong Wu, Kening Zhu:
ThermOuch: A Wearable Thermo-Haptic Device for Inducing Pain Sensation in Virtual Reality through Thermal Grill Illusion. 138:1-138:12
- Itai Lang, Fei Xu, Dale Decatur, Sudarshan Babu, Rana Hanocka:
iSeg: Interactive 3D Segmentation via Interactive Attention. 139:1-139:11
Enhancing, Saliency
- Yitong Wang, Xudong Xu, Li Ma, Haoran Wang, Bo Dai:
Boosting 3D object generation through PBR materials. 140:1-140:11
- Zhongshi Jiang, Kishore Venkateshan, Giljoo Nam, Meixu Chen, Romain Bachy, Jean-Charles Bazin, Alexandre Chapiro:
FaceMap: Distortion-Driven Perceptual Facial Saliency Maps. 141:1-141:11
- Mark van de Ruit, Elmar Eisemann:
Controlled Spectral Uplifting for Indirect-Light-Metamerism. 142:1-142:9
- Pei Xu, Ruocheng Wang:
Synchronize Dual Hands for Physics-Based Dexterous Guitar Playing. 143:1-143:11
- Zehong Shen, Huaijin Pi, Yan Xia, Zhi Cen, Sida Peng, Zechen Hu, Hujun Bao, Ruizhen Hu, Xiaowei Zhou:
World-Grounded Human Motion Recovery via Gravity-View Coordinates. 144:1-144:11
- Sammy Christen, Shreyas Hampali, Fadime Sener, Edoardo Remelli, Tomas Hodan, Eric Sauser, Shugao Ma, Bugra Tekin:
DiffH2O: Diffusion-Based Synthesis of Hand-Object Interactions from Textual Descriptions. 145:1-145:11
- Luis Bolanos, Pearson Wyder-Hodge, Xindong Lin, Dinesh K. Pai:
Measuring Human Motion Under Clothing. 146:1-146:10