Geometry Distributions

¹KAUST, ²ETH Zurich

Abstract

Neural representations of 3D data have been widely adopted across various applications, particularly in recent work leveraging coordinate-based networks to model scalar or vector fields. However, these approaches face inherent challenges, such as handling thin structures and non-watertight geometries, which limit their flexibility and accuracy. In contrast, we propose a novel geometric data representation that models geometry as distributions: a powerful representation that makes no assumptions about surface genus, connectivity, or boundary conditions. Our approach uses diffusion models with a novel network architecture to learn surface point distributions, capturing fine-grained geometric details. We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity. Additionally, we explore applications of our representation, such as textured mesh representation, neural surface compression, dynamic object modeling, and rendering, highlighting its potential to advance 3D geometric learning.

Figure: Mapping from Gaussian distributions to the surface.

Figure: Textured geometry. The proposed representation extends to textured geometry.

Figure: Dynamic object modeling.

Figure: Combination with a color field network. Results for different numbers of sampled points (left to right: 250K, 500K, 1M, and 2M).

Figure: Inversion sampling and forward sampling.
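To make the forward-sampling direction concrete, here is a minimal NumPy sketch of the underlying idea: integrating a velocity field carries Gaussian samples onto a target surface. The learned diffusion network from the paper is not available here, so a hand-crafted field standing in for it pulls points toward a toy 2D "surface" (the unit circle); the names `toy_velocity` and `forward_sample` are hypothetical and illustrative only.

```python
import numpy as np

def toy_velocity(x):
    # Hypothetical stand-in for the learned network: pulls each point
    # toward its nearest point on the unit circle (our toy "surface").
    nearest = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    return nearest - x

def forward_sample(n_points, n_steps=100, dt=0.05, seed=0):
    # Forward sampling: start from 2D Gaussian noise and integrate the
    # velocity field with Euler steps, driving samples onto the surface.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_points, 2))
    for _ in range(n_steps):
        x = x + dt * toy_velocity(x)
    return x

pts = forward_sample(1000)
radii = np.linalg.norm(pts, axis=1)  # after integration, radii are near 1
```

Inversion sampling runs the analogous map in the opposite direction, recovering latent Gaussian samples from surface points; in the actual method both directions are realized by the trained diffusion model rather than a closed-form field like the one above.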