
We study the problem of novel view synthesis of a scene comprised of 3D objects. We propose a simple yet effective approach that is neither continuous nor implicit, challenging recent trends in view synthesis. We demonstrate that although continuous radiance field representations have gained considerable attention due to their expressive power, our simple approach achieves novel view reconstruction quality comparable to or better than state-of-the-art baselines while increasing rendering speed by over 400x. Our model is trained in a category-agnostic manner and does not require scene-specific optimization; it therefore generalizes novel view synthesis to object categories not seen during training. In addition, we show that with our simple formulation, we can use view synthesis as a self-supervision signal for efficient learning of 3D geometry without explicit 3D supervision.
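
To make the last claim concrete, here is a minimal sketch of what a view-synthesis self-supervision loss can look like, assuming a differentiable renderer; `render_fn`, `geometry`, and `target_views` are hypothetical placeholders for illustration, not the paper's actual interface:

```python
import torch

def view_synthesis_loss(render_fn, geometry, target_views):
    """Photometric self-supervision sketch (hypothetical names, not the
    paper's API): gradients from an image-space loss on rendered views flow
    back into the 3D geometry, so no explicit 3D ground truth is needed."""
    loss = 0.0
    for camera, image in target_views:
        rendered = render_fn(geometry, camera)  # differentiable rendering step
        loss = loss + torch.mean(torch.abs(rendered - image))  # L1 photometric loss
    return loss / len(target_views)
```

Minimizing such a loss over many posed images is what allows 3D geometry to be learned from 2D observations alone.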

Related readings and updates.

High Fidelity 3D Reconstructions with Limited Physical Views

Multi-view triangulation is the gold standard for 3D reconstruction from 2D correspondences, given known calibration and sufficient views. In practice, however, expensive multi-view setups involving tens or sometimes hundreds of cameras are required to obtain the high-fidelity 3D reconstructions needed for modern applications. In this work, we present a novel approach that leverages recent advances in 2D-3D lifting using neural shape priors…
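
For context, here is a minimal sketch of the classical linear (DLT) multi-view triangulation this paragraph refers to, assuming known 3x4 projection matrices; the function name and two-camera setup below are illustrative, not from the paper:

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Triangulate one 3D point from N >= 2 calibrated views via linear DLT.

    proj_mats: list of 3x4 camera projection matrices (known calibration).
    points_2d: list of corresponding (x, y) pixel observations.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each observation contributes two linear constraints on X:
        # x * (P[2] @ X) - (P[0] @ X) = 0, and similarly for y.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least-squares solution: the right singular vector of A
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two-view example with identity intrinsics and a 1-unit baseline.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
obs = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_dlt([P1, P2], obs))  # ~ [0.5, 0.2, 4.0]
```

With noisy correspondences or too few views, this linear estimate degrades, which is the gap that learned shape priors aim to close.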

Learning to Generate Radiance Fields of Indoor Scenes

People have an innate capability to understand the 3D visual world and make predictions about how the world could look from different points of view, even when relying on only a few visual observations. We have this spatial reasoning ability because of the rich mental models of the visual world we develop over time. These mental models can be interpreted as a prior belief over which configurations of the visual world are most likely to be observed. In this sense, a prior is a probability distribution over the 3D visual world.
