SIGGRAPH 2003: Meshes papers


The final paper session at SIGGRAPH 2003 was on Meshes. Frankly, I don't know if I will do this set of talks justice, as I took very few notes through most of them, partially because the speakers were getting a little dry and partially because my brain was completely saturated.

Out-of-Core Compression for Gigantic Meshes

This paper described a technique for compressing extremely large meshes, like the digitized Saint Matthew statue from the Digital Michelangelo Project at Stanford. For those unfamiliar with this mesh, it contains about 186M vertices and 372M triangles, which works out to a mere 6.7GB of raw data (roughly 12 bytes of position per vertex plus 12 bytes of indices per triangle).

Standard stream compressors are almost useless here because there is very little redundancy in the raw vertex data, so techniques that know about the geometric and topological structure of the mesh must be used instead. Unfortunately, most of those techniques require random access to every vertex and thus require that the model fit in main memory, which isn't the case for this model.

This paper describes the technique the researchers used to compress the mesh from 6.7GB to 344MB, with a streaming decompression time of only 174 seconds. Their method takes seven passes through the data (note: compression is much slower than decompression), building a temporary, ordered copy on disk that exposes an addressable list of half-edges between connected vertices, and then coding each vertex position with parallelogram prediction, storing only a small per-point correction.
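The parallelogram rule at the heart of that kind of coder is easy to show. The sketch below is only an illustration of the rule in Python, not the paper's out-of-core coder; the function names and quantization step are mine:

```python
import numpy as np

def parallelogram_predict(a, b, c):
    """Predict the next vertex as the fourth corner of the parallelogram
    spanned by triangle (a, b, c), across the edge b-c."""
    return b + c - a

def encode_vertex(actual, a, b, c, step=1e-3):
    """Store only the quantized correction between the actual position and
    its prediction; small corrections entropy-code well."""
    correction = actual - parallelogram_predict(a, b, c)
    return np.round(correction / step).astype(np.int32)

def decode_vertex(stored, a, b, c, step=1e-3):
    """Recover the (quantized) vertex position from the stored correction."""
    return parallelogram_predict(a, b, c) + stored * step

# Example: a decoded triangle and the next vertex to encode.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
v = np.array([1.02, 0.98, 0.01])       # close to the prediction b + c - a

stored = encode_vertex(v, a, b, c)      # small integers: [20, -20, 10]
restored = decode_vertex(stored, a, b, c)
```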

Non-Iterative, Feature-Preserving Mesh Smoothing

This technique smooths polygon soups, such as those obtained by range-scanning physical objects. These scans often contain a significant amount of noise, which ideally would be filtered out before the data is put to use. However, many denoising techniques also soften hard-edged features, which is undesirable.

This paper describes a method for denoising while preserving those features: it extends bilateral filtering, familiar from image processing, to 3D surfaces, and treats points whose predictions disagree strongly with the filter (points that don't belong) as outliers whose influence is down-weighted.
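The paper's filter works on predictions from neighboring triangles of a mesh; as a rough illustration of the same bilateral idea, here is a simplified point-based sketch in Python. The tangent-plane predictions, the sigma_s/sigma_r parameters, and the brute-force O(n²) neighbor loop are my simplifications, not the authors' implementation:

```python
import numpy as np

def bilateral_denoise(points, normals, sigma_s=1.0, sigma_r=0.2):
    """Move each point toward a weighted average of its neighbors'
    predictions. Neighbors that are far away (spatial weight) or whose
    predictions disagree strongly (range weight) contribute little, so
    points across a sharp feature act like outliers and barely pull."""
    denoised = np.empty_like(points)
    for i, p in enumerate(points):
        total, weight_sum = np.zeros(3), 0.0
        for q, n in zip(points, normals):
            # Predict p by projecting it onto the tangent plane at q.
            prediction = p - np.dot(p - q, n) * n
            d_spatial = np.linalg.norm(p - q)           # closeness
            d_range = np.linalg.norm(prediction - p)    # prediction offset
            w = (np.exp(-d_spatial**2 / (2 * sigma_s**2)) *
                 np.exp(-d_range**2 / (2 * sigma_r**2)))
            total += w * prediction
            weight_sum += w
        denoised[i] = total / weight_sum
    return denoised

# Toy usage: noisy samples of a flat sheet, all normals pointing up.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 3))
pts[:, 2] = 0.02 * rng.standard_normal(200)      # noise in z only
nrm = np.tile(np.array([0.0, 0.0, 1.0]), (200, 1))
smoothed = bilateral_denoise(pts, nrm)           # z-noise is damped
```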