Friday, March 26, 2010

Assessing open source software as a scholarly contribution

A post that is not strictly related to Computer Graphics, 3D or Cultural Heritage.
Just a small note/rant to point out a recent paper:

Lou Hafer and Arthur E. Kirkpatrick
"Assessing open source software as a scholarly contribution"
Communications of the ACM, Volume 52, Issue 12 (December 2009)

It is an interesting discussion on the fact that "Academic computer science has an odd relationship with software: Publishing papers about software is considered a distinctly stronger contribution than publishing the software".
 
Being a senior researcher before becoming the lead developer of MeshLab, I have to say that I totally agree with these feelings. I have often thought that devoting a significant portion of my time to the MeshLab project is not an entirely wise move from a career point of view; writing a bunch of easy, minor-variation papers would probably be much more rewarding and would be evaluated better when applying for higher positions.


The sad thing is that some people think that the citations collected by the paper you have written about your software are more than enough to reward you for the effort of writing it. Usually these considerations come from computer scientists who do not fully grasp what writing and maintaining real software tools entails.
Some bare facts:
  1. If you write and maintain significant software tools/libraries, then you are serving the research community in a way that is more significant than writing a paper.
  2. The time required to develop and maintain software tools is much larger than the time required to write a paper.
  3. Assessing the importance/significance of software is more difficult than assessing the value of papers: there are no common bibliometric tools for it (and download count is obviously not a good metric).
  4. Committees evaluating careers usually ignore software and concentrate on other, more standard, research products (papers, editorial boards, committees, teaching, prizes, etc.).
As a simple consequence of 2, 3 and 4, and despite 1, with current career evaluation habits, developing and maintaining software tools that are significantly useful for the research community is NOT a career-maximizing move. And this is, in my humble opinion, definitely, completely, utterly WRONG.

Now, when you stumble upon a discontinued piece of code that you would have loved to see maintained, you have a hint of why the original author abandoned it.

Tuesday, March 23, 2010

Mean Curvature, Cavity Map, ZBrush and nice tricks for enhancing surface shading

There are many techniques for enhancing the look of a surface by means of smart shaders. Even without touching non-photorealistic rendering techniques, there are many tricks that can help the perception of the features and fine details of an object's surface. ZBrush has popularized one of these techniques under the name of cavity mapping. The main idea is that you detect 'pits' on the surface and make them a different color and, very importantly, very dull. In practice it roughly simulates all those materials where dust/rust/oxide accumulates in the low-accessibility regions, while use makes the most exposed parts shiny. You can achieve such effects in MeshLab, and they can be very useful for making quick, nice renderings of scanned objects; let's work through a practical example using a 3D scanned model of an ancient statuette of the Assyrian demon Pazuzu (courtesy of Denis Pitzalis, C2RMF/Louvre). The plain model (shown on the right) is rather dull and not very readable, and at first glance you cannot appreciate the scale of the scanned details.
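
To make the idea concrete, here is a minimal per-vertex sketch of the cavity blend in Python/numpy. It assumes a per-vertex 'cavity' scalar in [0,1] (1 = deep pit) has already been computed, e.g. from occlusion or curvature; all names are illustrative and none of this is MeshLab's actual API.

    import numpy as np

    def cavity_shade(base_color, cavity, pit_color=(0.25, 0.15, 0.10)):
        # base_color: (N,3) RGB in [0,1]; cavity: (N,), 0 = exposed, 1 = deep pit.
        cavity = np.clip(np.asarray(cavity, dtype=float), 0.0, 1.0)[:, None]
        pit = np.asarray(pit_color)[None, :]
        # Blend the surface color toward a dull, dark tone inside the pits.
        color = (1.0 - cavity) * np.asarray(base_color) + cavity * pit
        # Exposed parts keep their shininess; pits lose the specular highlight.
        shininess = 1.0 - cavity[:, 0]
        return color, shininess

In the real shader the same blend happens per fragment, and killing the specular term inside the pits is what gives the dusty, unpolished look.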



You can improve it a bit by adding some ambient occlusion. In MeshLab you can do this in a couple of ways: either by computing an actual ambient-occlusion term (i.e., the portion of the sky, as a solid angle, visible from a given point) for each vertex (Filter->Color->Vertex ambient occlusion), or by resorting to a quick-and-dirty Screen Space Ambient Occlusion (SSAO) approximation (Render->Screen Space Ambient Occlusion). In the first case, as a bonus, you can tweak the computed value as you like (by playing with the quality-mapping tools); in the second you get just an approximation, but the nice fact is that it is a decoration, i.e., it is blended over the current frame buffer and therefore mixes with whatever rendering you have. The two pictures compare per-vertex AO and SSAO.
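
For reference, the per-vertex term is conceptually just the fraction of the hemisphere above each vertex from which the sky is visible. Here is a sketch of that idea; `hits_mesh(origin, direction)` is a hypothetical ray-vs-mesh test, and MeshLab's actual implementation is different (and much faster).

    import numpy as np

    def vertex_ambient_occlusion(vertices, normals, hits_mesh, n_dirs=64):
        rng = np.random.default_rng(0)
        ao = np.zeros(len(vertices))
        for i, (v, n) in enumerate(zip(vertices, normals)):
            # Random directions on the hemisphere around the normal.
            d = rng.normal(size=(n_dirs, 3))
            d /= np.linalg.norm(d, axis=1, keepdims=True)
            d[np.dot(d, n) < 0] *= -1.0
            # Visibility = the portion of sky not blocked by the mesh itself.
            visible = [not hits_mesh(v + 1e-4 * n, di) for di in d]
            ao[i] = np.mean(visible)
        return ao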

Not enough. Let's try to find and enhance the pits. Mean curvature is what you need: you can think of it as the divergence of the normals over the surface, and it captures well the local variation of the surface. There is a vast literature on computing curvatures on discrete triangulated surfaces, and MeshLab exposes a few different methods (it also has a couple of methods for computing them on point clouds). The fastest one (though somewhat sensitive to the quality of the mesh) is Filter->Color->Discrete Curvature. On the right you can see the result of this filter mapped onto a red-green-blue colormap.
If you want to experiment, you can try the various options under Filter->Normal->Compute curvature principal direction (be careful: some of these filters can be VERY SLOW).
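
For the curious, here is a deliberately crude sketch of the underlying idea using the 'umbrella' (uniform Laplacian) operator: the Laplacian of the vertex positions approximates the mean-curvature normal, and its projection onto the vertex normal separates pits from bumps. MeshLab's filters use proper cotangent-weighted, area-corrected formulations; this is only the skeleton of the idea.

    import numpy as np

    def vertex_normals(vertices, faces):
        vn = np.zeros_like(vertices)
        fn = np.cross(vertices[faces[:, 1]] - vertices[faces[:, 0]],
                      vertices[faces[:, 2]] - vertices[faces[:, 0]])
        for f, n in zip(faces, fn):
            vn[f] += n                          # area-weighted accumulation
        return vn / np.linalg.norm(vn, axis=1, keepdims=True)

    def mean_curvature_umbrella(vertices, faces):
        neighbors = [set() for _ in range(len(vertices))]
        for a, b, c in faces:
            neighbors[a].update((b, c))
            neighbors[b].update((a, c))
            neighbors[c].update((a, b))
        normals = vertex_normals(vertices, faces)
        H = np.zeros(len(vertices))
        for i, ring in enumerate(neighbors):
            idx = np.fromiter(ring, dtype=int)
            lap = vertices[idx].mean(axis=0) - vertices[i]   # umbrella Laplacian
            H[i] = np.dot(lap, normals[i])      # signed: pits vs bumps
        return H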




Now the final step is just to use this value for the shading. Start the shader called ZBrush and play a bit with the parameters; hopefully you will get the desired result. A few notes: curvature often has outliers, so clamping the quality values beforehand can be a good idea (Filter->Quality->Clamp). Similarly, the range can be very large, so playing with the "transition_speed" parameter of the shader can be quite useful. To vary the amount of "pits", use the "transition center" slider.
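
The exact formulas inside that shader are not spelled out here, but a plausible reading of those two parameters is a smoothstep ramp over the per-vertex quality: the center picks the curvature value where the material turns from shiny to dull, and the speed controls how sharp that transition is. A hypothetical sketch:

    import numpy as np

    def cavity_mask(quality, center=0.0, speed=1.0):
        # Ramp around `center`; larger `speed` gives a sharper transition.
        t = np.clip((np.asarray(quality) - center) * speed + 0.5, 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)          # smoothstep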

Tuesday, March 2, 2010

Measuring the distance between two meshes (2)

Second part of the "metro" tutorial, the first part is here.

Remember that MeshLab uses a sampling approach to compute the Hausdorff distance: it takes a set of points over a mesh X and, for each point x on X, searches for the closest point y on the other mesh Y. That means the result is strongly affected by how many points you take over X. Now assume that we want to color mesh X (e.g. the low-resolution one) with its distance from Y.
In this case the previous trick of using per-vertex color would yield poor results, given the low resolution of the mesh.
Let's start again with our two HappyBuddha models, the full-resolution one and the one simplified to 50k faces.
Therefore, first of all, we need a denser sampling over the low-res mesh: when computing the Hausdorff distance we set the simplified mesh as the sampled mesh and the original HappyBuddha as the target, choose face sampling with a reasonably high number of sample points (10^6 is OK) and, very important, ask to save the computed samples by checking the appropriate option.
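
To see what is going on under the hood, here is a minimal sketch of the one-sided, sampled distance. Note two simplifications: MeshLab searches for the closest point on Y's actual triangles, while here a KD-tree over a dense point sampling of Y stands in for that, and the true Hausdorff distance is the maximum of the two one-sided distances (X to Y and Y to X).

    import numpy as np
    from scipy.spatial import cKDTree

    def one_sided_distance(samples_x, points_y):
        # For every sample on X, the distance to the closest point of Y.
        d, _ = cKDTree(points_y).query(samples_x)
        return d.max(), d.mean(), d             # max, mean, per-sample values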

After a few seconds you will see in the layer window two new layers containing the point clouds that represent, respectively, the samples taken over the simplified mesh and the corresponding closest points on the original mesh.


To see and inspect these point clouds you have to manually switch to point visualization and turn off the other layers. Below are two snapshots of the point cloud (in this case 2,000,000 point samples) at different zoom levels, to give a hint of the cloud density.

Then, just as in the previous post, use the Color->Colorize by quality filter to map the point cloud of sample points (the one over the simplified mesh) with the standard red-green-blue colormap.
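
Conceptually the filter just normalizes the per-point quality (here, the distance) and sends it through a red-green-blue ramp; the sketch below shows one such ramp, though MeshLab's exact colormap may differ in the details.

    import numpy as np

    def quality_to_rgb(q):
        q = np.asarray(q, dtype=float)
        t = (q - q.min()) / (q.max() - q.min() + 1e-12)   # normalize to [0,1]
        r = np.clip(1.0 - 2.0 * t, 0.0, 1.0)              # red for low values
        b = np.clip(2.0 * t - 1.0, 0.0, 1.0)              # blue for high values
        g = 1.0 - r - b                                   # green in between
        return np.stack([r, g, b], axis=1)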

Now, to better visualize these colors, we use a texture map. As a first step we need a simple parametrization of the simplified mesh. MeshLab offers a couple of parametrization tools: the first is a rather simple independent right-triangle packing approach, while the other is a state-of-the-art almost-isometric approach. Let's use the first one (more on the latter in a future post...) by simply starting Texture->Trivial Per-Triangle Parametrization. This kind of parametrization is quite trivial and has a lot of problems (distortion, fragmentation, etc.), but on the other hand it is fast, simple and robust, and in a few cases it can even be useful.
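
The basic scheme is easy to picture: each triangle gets half of its own small square in a regular grid over the unit texture square, with a small border to limit bleeding between charts. A sketch of that idea (the exact packing and border handling in MeshLab's filter may differ):

    import numpy as np

    def trivial_per_triangle_uv(n_faces, border=0.1):
        cells = int(np.ceil(np.sqrt(n_faces)))   # one grid cell per triangle
        size = 1.0 / cells
        uv = np.zeros((n_faces, 3, 2))
        for f in range(n_faces):
            u0, v0 = (f % cells) * size, (f // cells) * size
            pad = border * size
            # A right triangle filling the lower-left half of its cell.
            uv[f] = [(u0 + pad,        v0 + pad),
                     (u0 + size - pad, v0 + pad),
                     (u0 + pad,        v0 + size - pad)]
        return uv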

Now you just have to fill the texture with the colors of the sampled point cloud; you can do this with the filter Texture->Transfer color to texture, choosing an adequate texture size (be bold and use a large one...). Below is the result, comparing the color-coded error rendered with a texture against simple per-vertex color.
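
In essence, each colored sample point is mapped to its UV position (via the face it lies on and its barycentric coordinates, which are assumed to be known here) and splatted into the nearest texel; MeshLab's filter also fills the gaps between splats, which this sketch skips.

    import numpy as np

    def splat_colors(uv_per_face, face_idx, bary, colors, tex_size=2048):
        tex = np.zeros((tex_size, tex_size, 3))
        # UV of each sample: barycentric mix of its triangle's UV corners.
        uv = np.einsum('nk,nkc->nc', bary, uv_per_face[face_idx])
        px = np.clip((uv * tex_size).astype(int), 0, tex_size - 1)
        tex[px[:, 1], px[:, 0]] = colors        # one color per texel, no gap filling
        return tex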