Velox: Learning Representations of 4D Geometry and Appearance
Authors: Anagh Malik†, Dorian Chan, Xiaoming Zhao, David B. Lindell†, Oncel Tuzel, Jen-Hao Rick Chang
We introduce a framework for learning latent representations of 4D objects that are descriptive, faithfully capturing object geometry and appearance; compressive, aiding downstream efficiency; and accessible, requiring minimal input, i.e., an unstructured dynamic point cloud, to construct. Specifically, Velox trains an encoder to compress spatiotemporal color point clouds into a set of dynamic shape tokens. These tokens are supervised using two complementary decoders: a 4D surface decoder, which models the time-varying surface distribution capturing the geometry, and a Gaussian decoder, which maps the tokens to 3D Gaussians, helping learn appearance. To demonstrate the utility of our representation, we evaluate it across three downstream tasks (video-to-4D generation, 3D tracking, and cloth simulation via image-to-4D generation) and observe strong performance in all settings.
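The encoder-plus-two-decoders layout described in the abstract can be sketched schematically. Everything below is an illustrative assumption, not the paper's architecture: the random linear maps, the grouped-average pooling, the token count, and the function names are all placeholders chosen only to make the data flow (dynamic point cloud → shape tokens → surface scores / Gaussian parameters) concrete.

```python
import numpy as np

# Hypothetical sketch of the pipeline shape described in the abstract.
# Real encoders/decoders would be learned networks; here, random linear
# maps and simple pooling stand in so the tensor flow is visible.

rng = np.random.default_rng(0)
N, T, K, D = 1024, 8, 16, 64   # points, frames, tokens, token dimension

# Unstructured dynamic color point cloud: per frame, N points of xyz + rgb.
cloud = rng.standard_normal((T, N, 6))

def encode(cloud, K, D):
    """Compress per-point features into K dynamic shape tokens (placeholder)."""
    T, N, _ = cloud.shape
    W = rng.standard_normal((6, D)) / np.sqrt(6)
    feats = cloud @ W                           # (T, N, D) per-point features
    groups = np.array_split(np.arange(N), K)    # crude pooling into K groups
    return np.stack([feats[:, g].mean(axis=1) for g in groups], axis=1)  # (T, K, D)

def decode_surface(tokens, queries):
    """Surface head: score how near each (x, y, z, t) query is to the surface."""
    W = rng.standard_normal((tokens.shape[-1], 4)) / np.sqrt(tokens.shape[-1])
    proto = tokens.mean(axis=(0, 1)) @ W        # (4,) placeholder surface summary
    return -np.linalg.norm(queries - proto, axis=-1)  # higher = closer to surface

def decode_gaussians(tokens):
    """Gaussian head: one 3D Gaussian (mean, scale, color) per token per frame."""
    D = tokens.shape[-1]
    W = rng.standard_normal((D, 3 + 3 + 3)) / np.sqrt(D)
    return tokens @ W                           # (T, K, 9)

tokens = encode(cloud, K, D)
surf = decode_surface(tokens, rng.standard_normal((32, 4)))
gauss = decode_gaussians(tokens)
print(tokens.shape, surf.shape, gauss.shape)    # (8, 16, 64) (32,) (8, 16, 9)
```

The point of the sketch is the supervision structure: a single token set feeds both heads, so geometry (surface queries) and appearance (Gaussians) constrain the same latent representation.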
DeepPRO: Deep Partial Point Cloud Registration of Objects
October 8, 2021 · Research areas: Computer Vision, Methods and Algorithms · Conference: ICCV
We consider the problem of online, real-time registration of partial point clouds obtained from an unseen real-world rigid object without knowing its 3D model. The point cloud is partial because a depth sensor captures only the visible part of the object from a given viewpoint. This introduces two main challenges: 1) two partial point clouds do not fully overlap, and 2) keypoints tend to be less reliable when the visible part of…
VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection
November 13, 2017 · Research area: Computer Vision
Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's-eye-view projection. In this work, we remove the need for manual feature engineering for 3D…