Monday, September 7, 2009

Meshing Point Clouds

One of the most requested tasks when managing 3D scanning data is the conversion of point clouds into more practical triangular meshes. Here is a step-by-step guide for transforming a raw point cloud into a colored mesh.

Let's start from a colored point cloud (the typical output of many 3D scanning devices): each point has just a color and no normal information. The example dataset we will use is a medium-sized dataset of 9 million points. Typical issues of such a dataset: it is non-uniform (it comes from an integration of different datasets), it has some strongly biased error (alignment error, some problems during data integration), and it comes without normals (so it is hard to shade).

  1. Subsampling


    As a first step we reduce the dataset a bit in order to have a more manageable one. There are many options here. A nicely spaced subsampling is a good way to make subsequent computations faster. The Sampling->Poisson Disk Sampling filter is a good option: while it was designed to create Poisson disk samples over a mesh, it can also compute a Poisson disk subsampling of a given point cloud (remember to check the 'subsampling' boolean flag). For the curious ones, it uses an algorithm very similar to the dart throwing paper presented at EGSR 2009 (except that we released code for such an algorithm long before the publication of that article :) ). The side figure shows a Poisson disk subsampling of just 66k vertices.
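The dart throwing idea behind Poisson disk subsampling is simple enough to sketch in a few lines. This is a minimal plain-Python illustration, not MeshLab's actual implementation; the function name and the uniform-grid acceleration are my own choices:

```python
import random
from math import dist

def poisson_disk_subsample(points, radius):
    """Dart throwing: visit points in random order and keep one only if
    no already-kept point lies within `radius`.  A uniform grid with
    cell size equal to `radius` makes each test local instead of O(n)."""
    grid = {}  # (i, j, k) cell -> kept points falling in that cell

    def cell_of(p):
        return (int(p[0] // radius), int(p[1] // radius), int(p[2] // radius))

    kept = []
    order = list(points)
    random.shuffle(order)          # darts are thrown in random order
    for p in order:
        i, j, k = cell_of(p)
        conflict = False
        # a point closer than `radius` can only be in the 27 nearby cells
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    for q in grid.get((i + di, j + dj, k + dk), ()):
                        if dist(p, q) < radius:
                            conflict = True
        if not conflict:
            kept.append(p)
            grid.setdefault((i, j, k), []).append(p)
    return kept
```

Every pair of surviving points is guaranteed to be at least `radius` apart, which is exactly the nicely spaced distribution we want.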
  2. Normal Reconstruction


    Currently inside MeshLab the construction of normals for a point cloud is not particularly optimized (I would not apply it to a 9M point cloud), so starting from a smaller cloud gives better, faster results. You can use this small point cloud to run a fast surface reconstruction (using Remeshing->Poisson surface reconstruction) and then transfer the normals of this small rough surface to the original point cloud. Obviously, in this way the full point cloud will have a normal field that is by far smoother than necessary, but this is not an issue for most surface reconstruction algorithms (it is an issue, however, if you want to use these normals for shading!).
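The normal transfer step boils down to a closest-point lookup: each point of the dense cloud copies the normal of the nearest vertex of the rough reconstructed surface. A small sketch with scipy's KD-tree (again, an illustration of the idea, not MeshLab's code):

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_normals(cloud_pts, mesh_verts, mesh_normals):
    """Give each dense-cloud point the normal of the closest vertex of
    the coarse reconstructed surface (a crude stand-in for MeshLab's
    vertex attribute transfer)."""
    tree = cKDTree(mesh_verts)
    _, idx = tree.query(cloud_pts)        # nearest mesh vertex per point
    n = mesh_normals[idx]
    # re-normalise, in case the source normals were not unit length
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```

Because many cloud points map to the same coarse vertex, the resulting normal field is piecewise constant and very smooth, which matches the caveat above.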
  3. Surface reconstruction


    Once rough normals are available, Poisson surface reconstruction is a good choice. Using the original point cloud with the computed normals, we build a surface at the highest resolution (recursion level 11). Then we roughly clean it by removing the largest faces, and optionally simplify it a bit (removing 30% of the faces) using the classical Remeshing->Quadric edge collapse simplification filter (many implicit surface filters rely on marching-cube-like algorithms and leave useless tiny triangles).
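The "remove large faces" cleanup can be expressed as a simple edge-length test: Poisson reconstruction closes unsampled regions with a stretched "bubble" surface made of abnormally large triangles, and dropping every triangle whose longest edge exceeds a threshold cuts most of it away. A hedged numpy sketch (threshold and function name are illustrative):

```python
import numpy as np

def remove_large_faces(verts, faces, max_edge):
    """Keep only the triangles whose longest edge is at most `max_edge`.
    verts: (V, 3) float array; faces: (F, 3) int array of vertex indices."""
    tri = verts[faces]                         # (F, 3, 3) triangle corners
    e0 = np.linalg.norm(tri[:, 0] - tri[:, 1], axis=1)
    e1 = np.linalg.norm(tri[:, 1] - tri[:, 2], axis=1)
    e2 = np.linalg.norm(tri[:, 2] - tri[:, 0], axis=1)
    keep = np.maximum(np.maximum(e0, e1), e2) <= max_edge
    return faces[keep]
```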
  4. Recovering original color


    Here we have two options: recovering color as a texture, or recovering color as per-vertex color. Here we go for the latter, leaving the former to a future post where we will go into more detail on the new automatic parametrization features we are adding to MeshLab. Obviously, if you store color on vertices you need a very dense mesh, more or less of the same magnitude as the original point cloud, so refining the largest faces a bit can be useful. After refining the mesh, you simply transfer the color attribute from the original point cloud to the reconstructed surface using the vertex attribute transfer filter.
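In spirit, the color transfer is the same closest-point matching as the normal transfer above, just in the opposite direction: each mesh vertex looks up its nearest point in the original cloud. This sketch also returns the matching distance, since (as the next step explains) that distance is useful for cleanup; names and signature are mine, not MeshLab's:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_color(mesh_verts, cloud_pts, cloud_colors):
    """Closest-point colour transfer: each mesh vertex takes the colour
    of its nearest cloud point.  The matching distance is returned as a
    per-vertex scalar, playing the role of MeshLab's vertex quality."""
    tree = cKDTree(cloud_pts)
    d, idx = tree.query(mesh_verts)
    return cloud_colors[idx], d
```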

  5. Cleaning up and assessing


    The vertex attribute transfer filter uses a simple closest-point heuristic to match the points between the two meshes. As a side product, it can store the distance of the matching points in the all-purpose per-vertex scalar quality. By selecting the faces having vertices whose distance is larger than a given threshold, we can easily remove the redundant faces created by the Poisson surface reconstruction.
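Step 5's selection rule is easy to state as code: drop every face that has at least one vertex farther from the original cloud than the threshold. A minimal sketch, assuming the per-vertex distances produced by the attribute transfer step are available as an array:

```python
import numpy as np

def drop_distant_faces(faces, vertex_dist, threshold):
    """Remove faces having any vertex whose closest-point distance to
    the original cloud exceeds `threshold`.
    faces: (F, 3) int array; vertex_dist: (V,) float array."""
    keep = (vertex_dist[faces] <= threshold).all(axis=1)
    return faces[keep]
```

A sensible threshold is a small multiple of the cloud's average point spacing; faces beyond it lie on surface that Poisson invented rather than measured.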

This pipeline is only one of the many possible ways of ending up with a nice mesh. For example, different choices could have been made for steps 2 and 3. There are reconstruction algorithms that do not need surface normals, like "Voronoi Filtering", which is an interpolating reconstruction algorithm (i.e., it builds triangles only on the given input points), but these algorithms usually work better on very clean datasets, without noise or alignment errors; on noisy datasets they easily create a lot of non-manifold situations. Final thanks to Denis Pitzalis for providing me this nice dataset of a Cypriot theater.