High Fidelity 3D Reconstructions with Limited Physical Views
In collaboration with Carnegie Mellon University
Authors: Mosam Dabhi, Chaoyang Wang, Kunal Saluja, Laszlo A. Jeni, Ian Fasel, Simon Lucey
Multi-view triangulation is the gold standard for 3D reconstruction from 2D correspondences, given known calibration and sufficient views. In practice, however, obtaining the high-fidelity 3D reconstructions required by modern applications demands expensive multi-view setups involving tens, sometimes hundreds, of cameras. In this work we present a novel approach that leverages recent advances in 2D-3D lifting using neural shape priors while also enforcing multi-view equivariance. We show that our method can achieve fidelity comparable to expensive calibrated multi-view rigs using a limited number (2-3) of uncalibrated camera views.
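For context, the classical multi-view triangulation that the abstract calls the gold standard is typically solved with the Direct Linear Transform (DLT). The sketch below is a minimal two-view illustration of that baseline, not the paper's method; the function name and setup are our own.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Recover a 3D point from two calibrated views via the Direct
    Linear Transform.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) observed 2D image points (u, v) in each view
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P row 3) - (P row 1) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With exact correspondences and known projection matrices this recovers the point to numerical precision; the paper's contribution is achieving comparable fidelity when only 2-3 uncalibrated views are available.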
Large-Scale High-Quality 3D Gaussian Head Reconstruction from Multi-View Captures
May 8, 2026 · research area: Computer Vision
We propose HeadsUp, a scalable feed-forward method for reconstructing high-quality 3D Gaussian heads from large-scale multi-camera setups. Our method employs an efficient encoder-decoder architecture that compresses input views into a compact latent representation. This latent representation is then decoded into a set of UV-parameterized 3D Gaussians anchored to a neutral head template. This UV representation decouples the number of 3D Gaussians…
Direct2.5: Diverse 3D Content Creation via Multi-view 2.5D Diffusion
April 29, 2024 · research area: Computer Vision · conference: CVPR
Recent advances in generative AI have unveiled significant potential for the creation of 3D content. However, current methods either apply a pre-trained 2D diffusion model with time-consuming score distillation sampling (SDS), or train a direct 3D diffusion model on limited 3D data, losing generation diversity. In this work, we approach the problem by employing a multi-view 2.5D diffusion model fine-tuned from a pre-trained 2D diffusion model. The…