tag:blogger.com,1999:blog-53339577517697558092024-02-19T07:14:18.465-08:00MeshLab StuffPractical Mesh Processing ExperimentsALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comBlogger34125tag:blogger.com,1999:blog-5333957751769755809.post-33812651188508410712018-06-28T01:07:00.000-07:002018-06-28T01:07:02.968-07:00HexaLab.net: a new online tool for visualization and evaluation of hexahedral mesh<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div style="text-align: left;">
We are proud to present <a href="http://hexalab.net/"><b>HexaLab.net</b></a> our new free <i>online</i> tool for inspecting hex meshes.</div>
<br /><div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvYo0kKASUyMFTkAkRjVat51zGbKBr2a_mcQgFxyuKG4ImeIuSaQt6ly9jcW3gFgECZQB0ayEqFcPnVIm-UjCdDIQ7y3OWOS_jbboN2gKlJkga9ntGY2m2lv-DsKo4imhTnIlFeebnw30/s1600/HexaLab+Slicing.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="336" data-original-width="400" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvYo0kKASUyMFTkAkRjVat51zGbKBr2a_mcQgFxyuKG4ImeIuSaQt6ly9jcW3gFgECZQB0ayEqFcPnVIm-UjCdDIQ7y3OWOS_jbboN2gKlJkga9ntGY2m2lv-DsKo4imhTnIlFeebnw30/s320/HexaLab+Slicing.gif" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: justify;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
<a href="http://www.hexalab.net/">HexaLab</a> is a WebGL online tool for real time visualization, exploration and assessment of hexahedral meshes that runs directly in your web browser. This visualization tool targets both users and scholars who employ hexmeshes for Finite Element Analysis, can readily check mesh quality and assess its usability for simulations. You can use HexaLab to perform a detailed analysis of the mesh structure, isolating weak points and generate high quality images. </div>
<div class="separator" style="clear: both; text-align: justify;">
To this end, we support a wide variety of visualization and volume inspection tools. The system also offers immediate access to a repository containing all the publicly available meshes produced with the most recent techniques for hex mesh generation. </div>
<div class="separator" style="clear: both; text-align: left;">
The system supports hexahedral models in the popular .<span style="font-family: Courier New, Courier, monospace;"><b>mesh</b></span> and .<span style="font-family: Courier New, Courier, monospace;"><b>vtk</b></span> ASCII formats. </div>
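<div class="separator" style="clear: both; text-align: justify;">
As a quick reference, here is a minimal hand-written hexahedral model in the ASCII MEDIT <span style="font-family: Courier New, Courier, monospace;"><b>.mesh</b></span> flavor: a single unit-cube cell. The keyword names and field order follow the MEDIT convention as we recall it (vertex records are <i>x y z ref</i>, hexahedra records are the eight 1-based vertex indices plus a reference tag), so double-check against the files produced by your own meshing tool. </div>
<pre>
MeshVersionFormatted 1
Dimension 3
Vertices
8
0 0 0 0
1 0 0 0
1 1 0 0
0 1 0 0
0 0 1 0
1 0 1 0
1 1 1 0
0 1 1 0
Hexahedra
1
1 2 3 4 5 6 7 8 0
End
</pre>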
<div class="separator" style="clear: both; text-align: justify;">
So follow the <a href="http://www.hexalab.net/">link</a> and just drop a mesh on the page; please note that <b><i>meshes are NOT uploaded anywhere</i></b>: no 3D data leaves your browser, everything stays local. </div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: justify;">
Besides classical slicing (pictured above), HexaLab offers many other visualization techniques. For example, there is a <i>minecraft</i>-like interactive digging and undigging of individual cells that lets you pick exactly which cells to hide or reveal.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEuXm3B3gh0XJHWJIkrDT2AwRvpzTLrx6deQPLC98o5gsHqX-cvLb_4NzGB2oFlDbiy0PuSpInVCNJuiEkvDO6BBW2yUwH_inb07Rb6btp536y8KWMox4FNPmYXCLRQWJyaziS3EDLCKQ/s1600/HexaLab+Digging+and+Undigging.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="358" data-original-width="400" height="286" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiEuXm3B3gh0XJHWJIkrDT2AwRvpzTLrx6deQPLC98o5gsHqX-cvLb_4NzGB2oFlDbiy0PuSpInVCNJuiEkvDO6BBW2yUwH_inb07Rb6btp536y8KWMox4FNPmYXCLRQWJyaziS3EDLCKQ/s320/HexaLab+Digging+and+Undigging.gif" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"><br /></span></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;">Or you can reveal the interior by a interactive peeling that progressively hides the cells from the external boundaries:</span></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"><br /></span></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvYo0kKASUyMFTkAkRjVat51zGbKBr2a_mcQgFxyuKG4ImeIuSaQt6ly9jcW3gFgECZQB0ayEqFcPnVIm-UjCdDIQ7y3OWOS_jbboN2gKlJkga9ntGY2m2lv-DsKo4imhTnIlFeebnw30/s1600/HexaLab+Slicing.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvYo0kKASUyMFTkAkRjVat51zGbKBr2a_mcQgFxyuKG4ImeIuSaQt6ly9jcW3gFgECZQB0ayEqFcPnVIm-UjCdDIQ7y3OWOS_jbboN2gKlJkga9ntGY2m2lv-DsKo4imhTnIlFeebnw30/s1600/HexaLab+Slicing.gif" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a><div style="margin-left: 1em; margin-right: 1em; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMTbHZdsKeqoFM1cfjL0Ro1TpzVFkqGd4FEkclxUCO-BW0V3bK8nZUpbalFUWZTesRC0OoLFwofpLRLjyraoA51se9vYZfE0IDZa7lnhuVU5B2XIeVxxfVdNqO78Vriuy-7em5limKTvg/s1600/HexaLab+Filtering+by+Peeling.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="362" data-original-width="400" height="289" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMTbHZdsKeqoFM1cfjL0Ro1TpzVFkqGd4FEkclxUCO-BW0V3bK8nZUpbalFUWZTesRC0OoLFwofpLRLjyraoA51se9vYZfE0IDZa7lnhuVU5B2XIeVxxfVdNqO78Vriuy-7em5limKTvg/s320/HexaLab+Filtering+by+Peeling.gif" width="320" /></a></div>
<div>
<br /></div>
<div>
Or you can interactively hide the well-shaped cells to reveal only where the bad ones are. The quality of the meshing can be assessed using a variety of measures (indeed all the well-known <i>Verdict</i> measures, like Scaled Jacobian, distortion, edge ratio, volume, etc.), as in the sketch after the image:<br /><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWNFfDtntA036vr9e7Y1hgIGDemUG6Zw6vnkyt6SpIZUXysgk98qUR2lEBYlhDl-aIiNq_OxcNJcad-0eI84AKKPDiYusdSeoTfMkWAqfCDfCM-5AKecACXC6vnH4sC7VZnbzV02iqVmk/s1600/Hexalab+Filtering+by+quality.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="373" data-original-width="400" height="298" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWNFfDtntA036vr9e7Y1hgIGDemUG6Zw6vnkyt6SpIZUXysgk98qUR2lEBYlhDl-aIiNq_OxcNJcad-0eI84AKKPDiYusdSeoTfMkWAqfCDfCM-5AKecACXC6vnH4sC7VZnbzV02iqVmk/s320/Hexalab+Filtering+by+quality.gif" width="320" /></a></div>
<br />
<br />
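<div class="separator" style="clear: both; text-align: justify;">
As a concrete example of what one of these measures computes, below is a small Python sketch of the <i>Scaled Jacobian</i> of a single hexahedron: our own simplified reading of the Verdict definition, not HexaLab's actual code. It assumes the usual VTK-style node numbering (nodes 0-3 on the bottom face, 4-7 directly above them), and the function name is made up for illustration.</div>
<pre>
import numpy as np

# Assumed VTK-style hex ordering: at each corner, the three outgoing edges are
# listed in an order that yields a determinant of +1 for a unit cube.
CORNER_EDGES = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
                (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

def scaled_jacobian(v):
    """v: (8,3) array of hex corner positions. Returns the minimum, over the
    8 corners, of the determinant of the three normalized edge vectors."""
    v = np.asarray(v, dtype=float)
    dets = []
    for c, (a, b, d) in enumerate(CORNER_EDGES):
        e = np.stack([v[a] - v[c], v[b] - v[c], v[d] - v[c]])
        e /= np.linalg.norm(e, axis=1, keepdims=True)
        dets.append(np.linalg.det(e))
    return min(dets)

# a perfect cube scores 1.0; badly distorted or inverted cells drop toward 0 or below
cube = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
print(scaled_jacobian(cube))
</pre>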
Finally, remember that HexaLab is free to use, but it is always kind to acknowledge it by citing the companion paper:<br />
<br />
<b><span style="font-family: Arial, Helvetica, sans-serif; font-size: large;">"<a href="https://arxiv.org/abs/1806.06639">HexaLab.net: an online viewer for hexahedral meshes</a>"</span></b><br />
<i><span style="font-family: Arial, Helvetica, sans-serif; font-size: large;">Matteo Bracci, Marco Tarini, Nico Pietroni, Marco Livesu, Paolo Cignoni</span></i><br />
<span style="font-family: Arial, Helvetica, sans-serif; font-size: large;">(<a href="https://arxiv.org/pdf/1806.06639">PDF</a> freely available on <a href="https://arxiv.org/abs/1806.06639">arxiv</a>)</span><br />
<br />
<br />
<br /></div>
</div>
ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-82765393615837091262016-02-02T02:14:00.003-08:002016-02-02T02:14:59.481-08:00MeshLab JS 16.01 Tutorial<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: left;">
A new release of MeshLabJS, the JavaScript version of MeshLab, is out.</div>
Obviously, being a totally client-based, run-in-browser application, it is enough to open its web page to get the latest version. :) For a mesh processing system that runs inside your browser, a new version is just the deployment of the html+js code on the server.<br />
<br />
Here is a very simple tutorial of what you can do with MeshLabJS v16.01: remeshing, comparing two meshes, and showing the results.<br />
<br />
Start it by simply opening the following web page:<br />
<br />
<div style="text-align: center;">
<a href="http://www.meshlabjs.net/"><b><span style="font-family: Arial, Helvetica, sans-serif;">http://www.meshlabjs.net</span></b></a></div>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrocSo9bfhDqoQVhPYiC_weUxdFyPCOTgGE533yJmS6m_uZcpyWMu27nwDnS41THr8Bx7CvguYezeoxSdrEHrLvLKyMWNVN37NqTAc-oy0qZEqeWQDKFm3qs8R64De9QbTy6ap1mGU2Qw/s1600/Screen+Shot+2016-02-02+at+10.13.17+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="230" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrocSo9bfhDqoQVhPYiC_weUxdFyPCOTgGE533yJmS6m_uZcpyWMu27nwDnS41THr8Bx7CvguYezeoxSdrEHrLvLKyMWNVN37NqTAc-oy0qZEqeWQDKFm3qs8R64De9QbTy6ap1mGU2Qw/s320/Screen+Shot+2016-02-02+at+10.13.17+am.jpg" width="320" /></a><br />Press <b>CTRL+f</b> (or <b>⌘+f</b> on OSX) to jump to the find box and type '<b><span style="font-family: "courier new" , "courier" , monospace;">torus</span></b>'. While you type the long list of available filters will reduce to only the ones matching with the typed text (in this case just one).<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFMLZOuQJZ6-wvQCasQZrnstOLKZrp-TXMN2xqykLoNv0vooh7gmsyCE0Pcr7Y6nHOTU_tWd8ihmmZCxbKB2A2YHM7KqMOstwpu0Xih9gDZaohmI-TVHHBh5ce_4U00aK4Lib5tCo_fGI/s1600/Screen+Shot+2016-02-02+at+10.13.50+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFMLZOuQJZ6-wvQCasQZrnstOLKZrp-TXMN2xqykLoNv0vooh7gmsyCE0Pcr7Y6nHOTU_tWd8ihmmZCxbKB2A2YHM7KqMOstwpu0Xih9gDZaohmI-TVHHBh5ce_4U00aK4Lib5tCo_fGI/s200/Screen+Shot+2016-02-02+at+10.13.50+am.jpg" width="200" /></a>Click on the '<span style="font-family: "arial" , "helvetica" , sans-serif;">Create Torus</span>' filter box and it will open to reveal the parameters. Just increase the '<span style="font-family: "arial" , "helvetica" , sans-serif;">Subdivision</span>' parameter to 64 and press the '▶︎' (apply filter) button and you should see a torus appear on the right.<br />
<br />
<br />
Click on the <i><span style="font-family: "arial" , "helvetica" , sans-serif;">Rendering</span></i> tab to access all the different rendering modes. Click on the wireframe icon to enable, for the current layer, the display of the edges of the mesh. In the space below you should see the parameters of the wireframe rendering (<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">color, thickness </span>etc.).<br />
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEnBTHZJdZRFRTxP14qh-xPy3blgnstvD-vSnpX1HH65iUgGfZ8Eba2MDxKm76sWfpoeNHLOmn_T-BQ12B8Y_419qUPl3zB9RYNuXQ5aHshsK19FjHBimCvf4kTkSOpT671mOmvlEzVJY/s1600/Screen+Shot+2016-02-02+at+10.14.16+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEnBTHZJdZRFRTxP14qh-xPy3blgnstvD-vSnpX1HH65iUgGfZ8Eba2MDxKm76sWfpoeNHLOmn_T-BQ12B8Y_419qUPl3zB9RYNuXQ5aHshsK19FjHBimCvf4kTkSOpT671mOmvlEzVJY/s200/Screen+Shot+2016-02-02+at+10.14.16+am.jpg" width="200" /></a>Now press <b>CTRL+f</b> (<b>⌘+f</b> on OSX) again and type '<b><span style="font-family: "courier new" , "courier" , monospace;">remesh</span></b>', open the parameters of the '<span style="font-family: "arial" , "helvetica" , sans-serif;">Voronoi Remeshing</span>' filter, raise to 2 the '<span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">Refine Step</span>' param, and turn off the '<span style="font-family: "arial" , "helvetica" , sans-serif; font-size: x-small;">Voronoi Coloring</span>' option; press the apply filter '▶︎' button. After one sec you should see a new layer appear in the list of the meshes, named something like '<span style="font-family: "arial" , "helvetica" , sans-serif;">Voronoi Remeshing of Torus</span>'. There are two meshes superimposed on the right, to see clearly both of them just click on the 'eye' icon to disable/enable the visualization of each layer. The new mesh is a remeshing done using a simple sampling plus relaxation strategy followed by a Delaunay triangulation of these samples<br />
done in the geodesic metric. The result is a base mesh that is refined and adapted over the original mesh. Enable wireframe for this mesh too and switch between the two layers to see the difference in meshing.<br />
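For the curious, the "sampling plus relaxation" part of the strategy can be sketched in a few lines of Python. This is only a Euclidean stand-in working on a dense point sampling of the surface (the real filter uses geodesic distances on the mesh and then builds the Delaunay triangulation of the relaxed seeds); the function and parameter names are made up for illustration.<br />
<pre>
import numpy as np
from scipy.spatial import cKDTree

def lloyd_relax(surface_points, n_seeds=100, iters=10):
    """Scatter some seeds among a dense point sampling of the surface, then
    repeatedly move every seed to the centroid of the surface points that are
    closest to it (Lloyd iterations). Euclidean only: a rough stand-in for the
    geodesic relaxation performed by the actual remeshing filter."""
    surface_points = np.asarray(surface_points, dtype=float)
    seeds = surface_points[np.random.choice(len(surface_points), n_seeds, replace=False)]
    for _ in range(iters):
        _, owner = cKDTree(seeds).query(surface_points)  # nearest seed for every point
        for s in range(n_seeds):
            members = surface_points[owner == s]
            if len(members):
                seeds[s] = members.mean(axis=0)
    return seeds
</pre>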
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie5zg-W6ML2-pVOSfa8r4Fpg76BUaJKFZMcv8qEAvKbR-4xjXN69VSugg1bUKNFQmRW91SNl_m_3C4wES3o3xDCeiM-ow6CVLVJotSIQVm0KcKy5FtVkhNF2TKJ4qyUA6CKqdwEk9g4wM/s1600/Screen+Shot+2016-02-02+at+10.15.31+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie5zg-W6ML2-pVOSfa8r4Fpg76BUaJKFZMcv8qEAvKbR-4xjXN69VSugg1bUKNFQmRW91SNl_m_3C4wES3o3xDCeiM-ow6CVLVJotSIQVm0KcKy5FtVkhNF2TKJ4qyUA6CKqdwEk9g4wM/s200/Screen+Shot+2016-02-02+at+10.15.31+am.jpg" width="200" /></a>Now we want just to compute the difference between these two mesh. As it is well known basic differnce between 3D surfaces is well captured by <a href="https://en.wikipedia.org/wiki/Hausdorff_distance#Applications">Hausdorff Distance</a>. So again <b>CTRL+f</b> /<b>⌘+f </b>and type '<span style="font-family: Courier New, Courier, monospace;">diff</span>' , in the filter list should appear 'Compute Hausdorff Distance', open the parameter list and set: target mesh as voronoi remeshing of Torus', sample Num as 1.000.000, and check the 'Save Sample' flag'. Then just start the filter ( ▶︎) in a few secs (two secs on my laptop) you will have the (one sided) Hausdorff distance computed. In the lower left log window you should see numerical info about the computed distance, something like:<br />
<br />
<span style="font-family: Courier New, Courier, monospace; font-size: xx-small;">Hausdorff Distance computed</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: xx-small;"> Sampled 1008192 pts (rng: 0) on Torus searched closest on Voronoi Remeshing of Torus </span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: xx-small;"> min : 0.000000 max 0.005136 mean : 0.001373 RMS : 0.001600 </span><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFypRj-rVII7kB4_DjJPO3gnj0fDTec2GbO9LFRMpgFDezYjxJFQXvCImsCsW7CNOAQ3ZdLjwTvo8AMvI-Bic_UsoWHaL0jFVSXv8bBrAIXxCu9C88FLSoFFb6R-yXz24eCEPqLrB3zys/s1600/Screen+Shot+2016-02-02+at+10.15.56+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFypRj-rVII7kB4_DjJPO3gnj0fDTec2GbO9LFRMpgFDezYjxJFQXvCImsCsW7CNOAQ3ZdLjwTvo8AMvI-Bic_UsoWHaL0jFVSXv8bBrAIXxCu9C88FLSoFFb6R-yXz24eCEPqLrB3zys/s200/Screen+Shot+2016-02-02+at+10.15.56+am.jpg" width="200" /></a><span style="font-family: Courier New, Courier, monospace; font-size: xx-small;">Values w.r.t. BBox Diag (3.464102)</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: xx-small;"> min : 0.000000 max 0.001483 mean : 0.000396 RMS : 0.000462</span><br />
<span style="font-family: Courier New, Courier, monospace; font-size: xx-small;"><br /></span>
You will also notice another layer, '<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Hausdorff Samples</span>': a 1M point cloud with all the computed samples. For each of these samples the computed distance is also stored as a scalar value called, for lazy traditional reasons, <i>quality</i>.<br />
<br />
Let's color this point cloud according to the computed distance.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvHg7A3VO_KirF2D7bFkZXI9Y0Qk1mflqDMtBMCtQjAy7T0PEN-gGSQvZWCTbU-04xaMWfVQnAW_sMtWACRFFggl3kK1RIVJ8Ta-2XFbFSwhiXIiOl7hqHhO7V5BMgztELqaH0ItifKFg/s1600/Screen+Shot+2016-02-02+at+10.16.28+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="143" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvHg7A3VO_KirF2D7bFkZXI9Y0Qk1mflqDMtBMCtQjAy7T0PEN-gGSQvZWCTbU-04xaMWfVQnAW_sMtWACRFFggl3kK1RIVJ8Ta-2XFbFSwhiXIiOl7hqHhO7V5BMgztELqaH0ItifKFg/s200/Screen+Shot+2016-02-02+at+10.16.28+am.jpg" width="200" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieiijr1aRJwR2R4WuHmh8wmwxgN23FYH9sCM1MWuZitwa9Z0x6hGd0RcjuVyZNnw2pj7WdKtAbmCqPbwe5wNWHwBkNuUA4wHss0ts3eZKiZK1wc0Gp977ICYQO5-M0Q89JTv6hPnFI3ok/s1600/Screen+Shot+2016-02-02+at+10.17.07+am.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieiijr1aRJwR2R4WuHmh8wmwxgN23FYH9sCM1MWuZitwa9Z0x6hGd0RcjuVyZNnw2pj7WdKtAbmCqPbwe5wNWHwBkNuUA4wHss0ts3eZKiZK1wc0Gp977ICYQO5-M0Q89JTv6hPnFI3ok/s320/Screen+Shot+2016-02-02+at+10.17.07+am.jpg" width="320" /></a> <b>CTRL+f</b> /<b>⌘+f </b>and type '<span style="font-family: Courier New, Courier, monospace;">quality</span>' , in the filter list should appear '<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Generate Color from Vertex Quality</span>'; apply it to the <span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Hausdorff Samples</span> layer, switch to the rendering tab and access to the parameter of the point rendering (just click on the small down arrow ▾<span style="font-size: large;"> </span>below the point rendering icon): Choose '<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Per Vertex</span>' as '<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Color Source</span>' and '<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Flat</span>' as '<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">Shading</span>'. Now you have your nicely colored samples showing the difference between the original torus and the remeshed one.<br />
<br />
<br />
Finally, click on the histogram icon to get some insight into the distribution of the error and a more precise meaning of the color mapping used.<br />
<br />
<br /></div>
ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-65824924430510390882016-01-07T10:05:00.002-08:002016-01-07T10:22:36.718-08:00MeshLab in javascript<div dir="ltr" style="text-align: left;" trbidi="on">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7k_dzQDfhIyVOYS_9yq_SwIIVSVEjvLwruaHMZXCfV39SADhymVyaCD1VZqcrKHl6igpU9C4wfO3KTRJOMIxf9z7OhxtBVt-Eoqtuh_4JRLNV5oHtrXsUYAKhp1ZGgWsVOZruWkSTB1M/s1600/meshlabjs.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7k_dzQDfhIyVOYS_9yq_SwIIVSVEjvLwruaHMZXCfV39SADhymVyaCD1VZqcrKHl6igpU9C4wfO3KTRJOMIxf9z7OhxtBVt-Eoqtuh_4JRLNV5oHtrXsUYAKhp1ZGgWsVOZruWkSTB1M/s400/meshlabjs.png" width="400" /></a><span style="background-color: white; color: #141823; font-family: helvetica, arial, sans-serif; font-size: 14px; line-height: 18px;">We are proud to present the first beta, experimental, buggy, incomplete version of <b>MeshLabJS</b>, the client-side, run-in-browser port of MeshLab. </span><br />
<span style="background-color: white; color: #141823; font-family: helvetica, arial, sans-serif; font-size: 14px; line-height: 18px;">Yes, a version of MeshLab that runs directly inside the browser.</span><br />
<br />
<div style="text-align: center;">
<a href="http://www.meshlabjs.net/" rel="nofollow nofollow" style="background-color: white; color: #3b5998; cursor: pointer; font-family: helvetica, arial, sans-serif; font-size: 14px; line-height: 18px; text-decoration: none;" target="_blank"><b>http://www.meshlabjs.net/</b></a></div>
<br style="background-color: white; color: #141823; font-family: helvetica, arial, sans-serif; font-size: 14px; line-height: 18px;" />
<span style="background-color: white;"><span style="color: #141823; font-family: helvetica, arial, sans-serif;"><span style="font-size: 14px; line-height: 18px;">It is still rudimental, very minimal, but yet it is a nice example of how current browsers are able to run C++ code compiled into a javascript (thanks to emscripten) at a pretty decent speed. WebGL (via three.js) is used for the rendering. </span></span></span><br />
<span style="background-color: white;"><span style="color: #141823; font-family: helvetica, arial, sans-serif;"><span style="font-size: 14px; line-height: 18px;">Just to clarify it totally runs inside your browser, no 3D data is transferred to a server for processing, all the computation are done (in javascript) locally. Your data is safe as in a classical desktop app. </span></span></span><br />
<span style="background-color: white;"><span style="color: #141823; font-family: helvetica, arial, sans-serif;"><span style="font-size: 14px; line-height: 18px;">It is a bit more than an experiment, there are only a few tens of filters (more to come!), and no fancy tools, but some classics like the renowned quadric simplifier and <i>radiance scaling </i>rendering mode, are available.</span></span></span><br />
<span style="background-color: white;"><span style="color: #141823; font-family: helvetica, arial, sans-serif;"><span style="font-size: 14px; line-height: 18px;"><br /></span></span></span>
<span style="color: #141823; font-family: helvetica, arial, sans-serif;"><span style="background-color: white; font-size: 14px; line-height: 18px;">As usual everything is opensource, this time on <a href="http://github.com/cnr-isti-vclab/meshlabjs">github</a>. If you like it star it on github and if you need some specific meshlab filter, just ask for it on the <a href="http://github.com/cnr-isti-vclab/meshlabjs/issues">github issue page</a>.</span></span><br />
<span style="color: #141823; font-family: helvetica, arial, sans-serif;"><span style="background-color: white; font-size: 14px; line-height: 18px;"><br /></span></span>
<br /></div>
ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-8719112887459287222011-09-26T00:46:00.000-07:002011-09-26T00:46:24.124-07:00MeshLab for iOS<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjxfCwoMtXKqE4tg-qWMnK8nQj9Zs__PdPobB_Dj2S-qKzsW2nzWBtNRhu63W06JJY6FhZsk6ifb_OhuPWQLlloRMGP4LzuLAj9_4ZJOQdGPp2FjlTEjBNofeZ4DyJ4TN1lSdVK4vD-6Q/s1600/sshot07.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjxfCwoMtXKqE4tg-qWMnK8nQj9Zs__PdPobB_Dj2S-qKzsW2nzWBtNRhu63W06JJY6FhZsk6ifb_OhuPWQLlloRMGP4LzuLAj9_4ZJOQdGPp2FjlTEjBNofeZ4DyJ4TN1lSdVK4vD-6Q/s320/sshot07.png" width="213" /></a></div>
<div style="text-align: left;">
2 Big News:</div>
<div style="text-align: left;">
</div>
<ol style="text-align: left;">
<li>MeshPad has changed name: now its official name is <a href="http://www.meshpad.org/">MeshLab for iOS</a> </li>
<li>MeshLab for iOS is available on the <a href="http://itunes.apple.com/app/meshlab-for-ios/id451944013?mt=8">App Store</a>!<br />And it is free :)</li>
</ol>
If you have an iPad or an iPhone you can't miss it: go download it and share the news...<br />
<br />
We are investing in it, so expect frequent updates. We feel that this kind of device (i.e. a tablet) is really great for showing off results to a broad spectrum of non-technically-skilled people. Every time I hand a Cultural Heritage person an iPad with a gorgeous model ready to be browsed, well, it pays off <b>much more</b> than asking them to sit down in front of a PC and handing them a mouse...<br />
<div>
<div>
<br /></div>
<div>
<br />
<div style="text-align: left;">
</div>
<div>
<br /></div>
</div>
</div>
</div>
ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-47119158597920604682011-08-03T16:08:00.000-07:002011-08-03T16:08:37.459-07:00MeshPad<div dir="ltr" style="text-align: left;" trbidi="on"><a href="http://www.meshpad.org/img/sshot01.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="http://www.meshpad.org/img/sshot01.png" width="150" /></a>If you liked MeshLab and you have an iPad or an iPhone, you cannot miss this: an intuitive, cool 3D viewer to show your models. It is able to sustain the interactive browsing of detailed models (usable up to 2M triangles). Perfect for boldly showing high-quality 3D scanned stuff to non-technical people. Soon to be released.<br />
<br />
More info can be found both on <a href="http://www.meshpad.org/">MeshPad official web page</a> or on the <a href="http://www.facebook.com/pages/MeshPad/124026831024220">facebook MeshPad page</a>.<br />
<br />
The viewer is well integrated in iOS, so it is automatically started whenever you encounter a 3D model in a recognized format (currently just ply, stl, obj, off). It works with models on the web (see the second video) or with cloud storage services like Dropbox.<br />
<br />
So, for example, it is easy to put a bunch of models on your Dropbox account and boldly show them off on your iPad whenever you need to.<br />
<br />
Here are two videos showing MeshPad in action:<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/36Ujy17QAsk?feature=player_embedded' frameborder='0'></iframe></div><br />
<div class="separator" style="clear: both; text-align: center;"><object width="320" height="266" class="BLOGGER-youtube-video" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0" data-thumbnail-src="http://i.ytimg.com/vi/tn_3ROW-vJI/0.jpg"><param name="movie" value="http://www.youtube.com/v/tn_3ROW-vJI?f=user_uploads&c=google-webdrive-0&app=youtube_gdata" /><param name="bgcolor" value="#FFFFFF" /><embed width="320" height="266" src="http://www.youtube.com/v/tn_3ROW-vJI?f=user_uploads&c=google-webdrive-0&app=youtube_gdata" type="application/x-shockwave-flash"></embed></object></div><br />
Stay tuned for the official release of the app!</div>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-53659612233303528002011-03-15T10:45:00.000-07:002011-03-15T10:46:14.029-07:00MeshLab Video Tutorial<div dir="ltr" style="text-align: left;" trbidi="on">This blog has been quite lazy recently. But now great news!<br />
We are proud to announce the birth of a dedicated YouTube channel for MeshLab tutorials.<br />
<div style="text-align: center;"><b><i> <a href="http://www.youtube.com/user/MrPMeshLabTutorials#g/p" style="font-family: Georgia,"Times New Roman",serif;">Mr P.'s MeshLab Tutorials</a></i></b></div>We will upload some new tutorials in the next days. The first one is already online, and it's a basic one about navigation.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/Sl0vJfmj5LQ?feature=player_embedded' frameborder='0'></iframe></div><br />
<br />
Stay in touch for news, and if you want to collaborate, you are welcome!</div>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-19797026630175816762010-09-07T06:37:00.000-07:002010-09-07T06:37:01.796-07:00VAST 2010 MeshLab Tutorial<a href="http://www.vast2010.org/sites/default/files/acquia_slate_logo.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://www.vast2010.org/sites/default/files/acquia_slate_logo.png" /></a>At <a href="http://www.vast2010.org/">VAST 2010</a>, the 11<sup><small>th</small></sup> International Symposium on <b><i>Virtual Reality, Archaeology and Cultural Heritage</i></b> (Louvre, Paris, 21-24 Sept. 2010), there will be <a href="http://www.vast2010.org/workshop/meshlab">a full-day tutorial on MeshLab</a>.<br />
<br />
It will be held by Marco Callieri and Guido Ranzuglia and will cover almost everything in MeshLab, from basic navigation hints to advanced remeshing, measuring and processing tasks. Obviously with a bit of Cultural Heritage pepper here and there. <br />
<br />
<i><b>Target Audience</b></i><br />
<ul><li>People interested in a simple but powerful opensource tool for mesh processing.</li>
<li>People who need to visualize, edit and convert 3D models.</li>
<li>People who need small editing, batch process and mesh cleaning.</li>
<li>People trying to integrate/replace an existing mesh processing pipeline.</li>
<li>People interested in advanced, custom measuring/processing of 3D models, exploiting state-of-the-art algorithms.</li>
</ul>Participants will be given the latest build of the tool plus some test datasets to experiment with the presented features. Bring your own laptop!<br />
<br />
RSVP at the <a href="http://www.facebook.com/MeshLab?v=app_2344061033#%21/event.php?eid=129053717141860&index=1">FaceBook Event Page</a>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-54387319732040778432010-07-20T16:57:00.000-07:002010-07-20T16:57:41.798-07:00Remeshing and Texturing (1)<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglj4BFfvdxkMiotEBja9nM56wL5HZ5g_mU7q-S1H56ElxWpBNuEGPIJiFiVNEwQ_1JZ6exPhEc7TCk7WkI-PnPF3vJwVuDdXmznSTxorSAZuWzkmH3LSeZaN1x8ihWeVTzrjKQF-d8ltM/s1600/snap_1_skull_1M_color_snap01.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglj4BFfvdxkMiotEBja9nM56wL5HZ5g_mU7q-S1H56ElxWpBNuEGPIJiFiVNEwQ_1JZ6exPhEc7TCk7WkI-PnPF3vJwVuDdXmznSTxorSAZuWzkmH3LSeZaN1x8ihWeVTzrjKQF-d8ltM/s200/snap_1_skull_1M_color_snap01.png" width="200" /></a><br />
In the pipeline for processing 3D data, after you have aligned and merged your range maps, you often need to obtain a <i><b>nice clean textured</b></i> mesh. In the latest release of MeshLab we included our state-of-the-art parametrization/remeshing algorithm based on abstract parametrization. Here is the first half of a two-part tutorial on its practical usage. <br />
Let's start from a medium complexity mesh of a skull (kindly provided and scanned for the VCG Lab by Marco Callieri). You can see it depicted in the two small figures on the right. <br />
The mesh of the skull is composed of 1,000,000 triangles, it has meaningful per-vertex color (recovered from a set of photos) and, as often happens, it is topologically dirty. <br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZLALjQZcNZ7IlkE7fKvHSU30SD5R73ex8Th4Zin9Mwij9DI-9pzHaa2TO52VtUjNshn8fA5TUFbII0vzc0bwV1gY-xCJsRYasDA0YU2qbYTBWQFn2Qhwh-8yDzRC19YkYhTA-jtHevdA/s1600/snap_0_skull_1M_color_snap00.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZLALjQZcNZ7IlkE7fKvHSU30SD5R73ex8Th4Zin9Mwij9DI-9pzHaa2TO52VtUjNshn8fA5TUFbII0vzc0bwV1gY-xCJsRYasDA0YU2qbYTBWQFn2Qhwh-8yDzRC19YkYhTA-jtHevdA/s200/snap_0_skull_1M_color_snap00.png" width="200" /></a>First of all it is non 2-manifold (there are 7 edges where more than two face are incident) than there are many small holes and handles that make difficult any kind of parametrization. <br />
<br />
So the first step is to build a watertight, coarser but topologically sound model. Poisson surface reconstruction is a perfect filter for this task. A reconstruction at depth 9 is usually good; it generates a mesh of about 1.3M faces. For this kind of processing a very faithful geometric representation is not needed, but it is essential that the overall topology is the right one. In this case some portions of the skull are remarkably thin, and at low resolutions the Poisson surface reconstruction can create unwanted holes. <br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-OmCNW5zA418L1Z77bYdHjT8KOqbKbwYLgY2rQ36RM1D-W8gO65ZpcZnjNomXsrOPgUuulPg7TKPX4ps5AkpuZ_G7hpIZdtwQwXWkTtoDbw8wmrGGO4mK3nIRMSV_3UAS9TZN5eYJ_9U/s1600/snap_2_skull_poisson_snap05.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-OmCNW5zA418L1Z77bYdHjT8KOqbKbwYLgY2rQ36RM1D-W8gO65ZpcZnjNomXsrOPgUuulPg7TKPX4ps5AkpuZ_G7hpIZdtwQwXWkTtoDbw8wmrGGO4mK3nIRMSV_3UAS9TZN5eYJ_9U/s200/snap_2_skull_poisson_snap05.png" width="200" /></a></div>After that a further simplification step is needed to bring the model size to a number reasonable for the Isoparametrization engine. Remember that the when building an abstract parametrization you do not need the full accuracy model but just a model that shares the overall shape and the same topology. For the purpose of the parametrization small details have a very small influence on the overall quality of the parametrization. Side figure depict the watertight Poisson reconstructed surface, note how the nostril cavity was filled (as expected because it was a hole with boundary). <br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjINPELH8cFtVBV88UO5vfFdGbhl7cJ20GyLnRET5B3s0C7_mP5d_P7y_Ukgvc5ZK__ZbeUIJrvUtHE-XCKOG6rAoEjsMWBgDiotONVKM9xIzrA5ukS3pA_FSYVBvfLhm6aMLTRuXXLRdc/s1600/snap_3_skull_poisson_para60k_snap.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjINPELH8cFtVBV88UO5vfFdGbhl7cJ20GyLnRET5B3s0C7_mP5d_P7y_Ukgvc5ZK__ZbeUIJrvUtHE-XCKOG6rAoEjsMWBgDiotONVKM9xIzrA5ukS3pA_FSYVBvfLhm6aMLTRuXXLRdc/s200/snap_3_skull_poisson_para60k_snap.png" width="200" /></a></div>So simplify it our watertight skull up to 50000 triangles. Take care to check Normal Preservation and Topology preservation Flag. The second one is particularly important, infact the basic edge collapse simplification algorithms can during simplification change the topology of the mesh, and while this is usually a nice feature (it allows for example the closure of very small holes) when you start from a mesh that is surely clean (a 2-manifold watertight model) it is better to be sure that such properties are preserved. <br />
<br />
After that you can start creating the <i>Abstract Isoparametrization</i>, a technique we introduced in:<br />
<span style="font-size: small;"><br />
</span><br />
<span style="font-size: small;"><i>Nico Pietroni, Marco Tarini, Paolo Cignoni</i></span><br />
<span style="font-size: small;"><b><a href="http://www.blogger.com/post-edit.g?blogID=5333957751769755809&postID=5438731973204077843">Almost isometric mesh parameterization through abstract domains</a> </b></span><br />
<span style="font-size: small;">IEEE Transaction on Visualization and Computer Graphics, Volume 16, Number 4, page 621-635 - July/August 2010</span><br />
<br />
Without going into the details, which you will find in the above paper, the main idea is rather simple. Usually textures are defined in a domain that is just the (0,0)-(1,1) square on the plane. In our approach, the domain of the parametrization is a different 2-dimensional domain: the surface of a very coarse simplicial complex that has the same topology as the original mesh and is composed of just a few hundred triangles. Such an approach is interesting because this abstract parametrization can be used for a number of things, like for example remeshing, texturing, tangent-space smoothing, etc.<br />
<br />
To build the abstract isoparametrization just start the corresponding filter, called "<i>Isoparametrization</i>" (default params are ok; you can lower the convergence precision to '1' to speed it up a bit, and try changing the targeted size of the abstract domain). It is a bit slow, so wait a few minutes for the processing. At the end of the process you do not see anything directly, but the structure is attached to the mesh and you can use it in the other filters. If you want to re-use it later you have to save both the processed mesh and, as a separate step, the isoparametrization using the "<i>Isoparametrization Save Abstract Domain</i>" filter.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpuqsPIK60vl0X4WRgi-aSi1krQvp8aZmtuV8mIqPbVtx-kgZAunBuh2yn2rT5DwjgFxG32CDbJLnAJ-eOEesmES1ewaQGXW_h9llgs_PG9oqRVF0bWmBvgZpBN3bMPTM5RCULpFWhnB8/s1600/snap_4_skull_10k.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpuqsPIK60vl0X4WRgi-aSi1krQvp8aZmtuV8mIqPbVtx-kgZAunBuh2yn2rT5DwjgFxG32CDbJLnAJ-eOEesmES1ewaQGXW_h9llgs_PG9oqRVF0bWmBvgZpBN3bMPTM5RCULpFWhnB8/s200/snap_4_skull_10k.png" width="200" /></a>The created isoparametrization can be used to build a standard parametrization over any mesh that is reasonably close to the original one.<br />
In our example we take a simplified version of the original mesh, composed of just 10,000 triangles ("Skull_10k.ply"). We transfer the just-built isoparametrization onto this simplified mesh<br />
using the filter "<i>Iso Parametrization transfer between meshes</i>", setting as source mesh the one with the abstract parametrization (skull_60k_isoparam.ply) and skull_10k.ply as target.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4FkmRfQEYh9_IKxZYnyPx_VUe7fs5Xj23xGbyUQ8DCtgWV56OmFtxqNdmrWvH2vKwKz3MGPUADYxUxULifUTrkI6_Neie9baIVW2_sO54E0snM1cgXEzLfuNdqzV6FoTBBmfyxt1QdAA/s1600/snap_5_skull_10k_param.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4FkmRfQEYh9_IKxZYnyPx_VUe7fs5Xj23xGbyUQ8DCtgWV56OmFtxqNdmrWvH2vKwKz3MGPUADYxUxULifUTrkI6_Neie9baIVW2_sO54E0snM1cgXEzLfuNdqzV6FoTBBmfyxt1QdAA/s200/snap_5_skull_10k_param.png" width="200" /></a>Now we can transform the transferred isoparametrization into a standard atlased parametrization using the "<i>Isoparametrization Build Atlased Mesh</i>" filter. The two image on the right seems equal but you can see that in the lower one the triangles of the mesh have been cut along the triangles of the abstract parametrization in order to get proper atlas regions. At this point your mesh has a standard texture parametrization and it is ready for use it for a variety of operation.<br />
<br />
The first thing we can do is simply transfer the per-vertex color of the original 1M-vertex model onto a texture according to this parametrization. This can be done using the filter <i>"Transfer color to texture (between 2 meshes)"</i>: choose a reasonable texture size (2048x2048 is good) and you will obtain a simplified textured mesh that looks strikingly similar to the original heavy 1M-triangle model (try comparing the first and last snapshots).<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqYaoriwfxsbe3aclSrCnVo4COjLaQuMNTIFhVwqTE6ZNMTBd24gysTWhcCZxPxDakQrYRQWJoGX2zCfdHqfECSqtUAWnJJ9oU9W2EsQWxV6PKUaMT0UiMNsi6jrHGPN66cXpNkesmOME/s1600/snap_6_skull_10k_param_tex.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="181" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqYaoriwfxsbe3aclSrCnVo4COjLaQuMNTIFhVwqTE6ZNMTBd24gysTWhcCZxPxDakQrYRQWJoGX2zCfdHqfECSqtUAWnJJ9oU9W2EsQWxV6PKUaMT0UiMNsi6jrHGPN66cXpNkesmOME/s200/snap_6_skull_10k_param_tex.png" width="200" /></a></div><br />
<b> Summarized Recipe</b><br />
<ol><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEnPV_wuohyphenhyphenIm3MkrhAyzjWyQfbk3UMmcixZnGpE0OCHlaTn-oKTKvj4inZ-rOfwEryAJ8OPdIoPvC-0lNd2iIeRQXx2qk68Qd9xP7-9Ug2iEfK8UoMYMVP9bqNqKdv2ANrypVbYToatA/s1600/teschio_color_simp_10k_param_color.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEnPV_wuohyphenhyphenIm3MkrhAyzjWyQfbk3UMmcixZnGpE0OCHlaTn-oKTKvj4inZ-rOfwEryAJ8OPdIoPvC-0lNd2iIeRQXx2qk68Qd9xP7-9Ug2iEfK8UoMYMVP9bqNqKdv2ANrypVbYToatA/s200/teschio_color_simp_10k_param_color.png" width="100" /></a>
<li>take a 1M tri colored model </li>
<li>make the model watertight using Poisson</li>
<li>Simplify it to a 50k model (preserving topology)</li>
<li>Build the Isoparametrization</li>
<li>build another very simple 10k model from the original 1M model</li>
<li>transfer the isoparametrization over the very simple model</li>
<li>convert the isoparametrization into a standard atlased texture</li>
<li>generate a texture with the color from the original 1M model</li>
</ol>Next part of the tutorial with remeshing and other hints in a few days... <br />
<ol></ol>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-12619664234184077612010-07-16T01:27:00.000-07:002010-07-16T13:51:38.656-07:00First MeshLab 1.3.0 beta out!<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8J9bxmaj_oe36iCEq2zFjKy4xmcRdeu6mGinEjAflE-tFaJpFGMTamHUQixXzs_vsK8fJW6ukDpmNL-t7UWTRpLTAuBW0O0q2a6RFfZ0z5mInFg7sY2d8PHeUbH340J8t6sBkgJSLU1M/s1600/radiancescaling.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8J9bxmaj_oe36iCEq2zFjKy4xmcRdeu6mGinEjAflE-tFaJpFGMTamHUQixXzs_vsK8fJW6ukDpmNL-t7UWTRpLTAuBW0O0q2a6RFfZ0z5mInFg7sY2d8PHeUbH340J8t6sBkgJSLU1M/s400/radiancescaling.png" width="191" /></a>The first Beta version of MeshLab 1.3.0 is out. A lot of work has been done under the hood and many new features have been added. In the followings some of the notable improvements: <br />
<ul><li>Totally restructured view/window mechanism. Now you can have: <br />
<br />
<br />
<ul><li> multiple views of the same mesh. </li>
<li> standard orthographic viewing directions (up/down etc) </li>
<li> copy/paste of current viewing parameters (you can even save them for later re-use...); </li>
</ul></li>
<li> The Isoparametrization works. Really! A detailed tutorial on how to practically use it will appear in a day or two!</li>
<li> new <a href="http://iparla.labri.fr/publications/2010/VPBGS10/"><i>Radiance Scaling</i></a> rendering mode, (thanks to Romain Vergne, Romain Pacanowski, Pascal Barla, Xavier Granier and Christophe Schlick for providing the code and to Gaël Guennebaud for helping out!). More on this new rendering mode on another post, as it deserves more space, for now just look at the side image... </li>
</ul>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-13624789237148202072010-04-28T11:04:00.000-07:002010-04-28T13:59:18.213-07:00MeshLab on Facebook<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiP0kMgG2s9tJmJhSuGdUQl8T67gkR0qzNd67Kvw4Doc4wrX211nvyM35kb2NRkM46pzaoT-jkEinpbvaXX3HFRP4b4_sR4gC4R1IgXZ8qs2KGV2vJya5kA4JhdfPCqM0Gk4eQB5l29VjI/s1600/facebook-icon.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="128" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiP0kMgG2s9tJmJhSuGdUQl8T67gkR0qzNd67Kvw4Doc4wrX211nvyM35kb2NRkM46pzaoT-jkEinpbvaXX3HFRP4b4_sR4gC4R1IgXZ8qs2KGV2vJya5kA4JhdfPCqM0Gk4eQB5l29VjI/s200/facebook-icon.png" width="128" /></a></div>Just a short shameless plug to the <a href="http://www.facebook.com/pages/MeshLab/323420321688">MeshLab</a> page on <a href="http://www.facebook.com/pages/MeshLab/323420321688">facebook</a> (thanks to Marco Callieri who had the idea and set up the page!). Yet another place for disseminating news and bits on MeshLab (like the MeshLab tutorial at the forthcoming <a href="http://www.archeologiadigitale.it/archeofoss/2010.html">ArcheoFoss</a> workshop in Foggia).<br />
<br />
Still on the social side I have been happy to discover that the old-style, web 1.0, IRC channel <a href="irc://freenode/meshlab">#meshlab</a> on freenode.net is still alive and kicking, with a few generous developers hanging on it...ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-23075377115180245952010-03-26T00:13:00.000-07:002010-03-26T00:13:00.558-07:00Assessing open source software as a scholarly contributionA post that is not strictly related to Computer Graphics, 3D or Cultural Heritage.<br />
Just a small note/rant to point out a recent paper:<br />
<br />
<a href="http://www.blogger.com/goog_473044561">Lou Hafer and Arthur E. Kirkpatrick</a><br />
<i><a href="http://www.blogger.com/goog_473044561">"Assessing open source software as a scholarly contribution"</a></i><br />
<a href="http://www.blogger.com/goog_473044561">Communications of the ACM, Volume 52 , Issue 12 (December 2009) </a><br />
<br />
It is an interesting discussion on the fact that <i>"Academic computer science has an odd relationship with software: Publishing papers about software is considered a distinctly stronger contribution than publishing the software".</i><br />
<i> </i> <br />
Being a senior researcher before being the lead developer of MeshLab, I have to say that I totally agree with those feelings. I have often thought that devoting a significant portion of my time to the MeshLab project is not a 100% wise move from a career point of view; probably writing a bunch of easy minor-variation papers is much more rewarding and is evaluated better when running for higher positions.<br />
<br />
<br />
The sad thing is that there are people who think that the citations coming from the paper you have written about your software are more than enough to reward you for the effort of writing it. Usually these considerations come from computer scientists who do not have a clear idea of what writing and maintaining real software tools means. <br />
Some bare facts:<br />
<ol><li>If you write and maintain significant software tools/libraries then you are serving the research community in a way that is more significant than writing a paper.</li>
<li>The time required to develop and maintain sw tools is much larger than the time required to write one paper.</li>
<li>Assessing the importance/significance of software is more difficult than assessing the value of papers; there are no common bibliometric tools for it (obviously download count is not a good metric).</li>
<li>Commissions evaluating people's careers usually ignore sw and concentrate on other, more standard, research products (papers, editorial boards, committees, teaching, prizes, etc.).</li>
</ol>As a simple consequence of 2, 3 and 4, and despite 1, with current career evaluation habits, developing and maintaining sw tools that are significantly useful for the research community is NOT a career-maximizing move. And this is, in my humble opinion, definitely, completely, utterly <b>WRONG</b>. <br />
<br />
Now when you stumble upon a discontinued piece of code that you would have loved to see maintained, you have a hint of why the original author abandoned it.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-65420228565280259032010-03-23T17:56:00.000-07:002010-03-23T17:56:18.072-07:00Mean Curvature, Cavity Map, ZBrush and nice tricks for enhancing surface shading<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvihIlASK6jAfA8McnmDrty405sDcu1hrAlV2MDf4v6Og2aUCXMYntHbQ5NClxyH6GjvprEBXy57Dbgl9Li5VmEB77ndt7wqiZxfNv5STBf3aM733a4xcB5K7lifUkbqDG9f9LFtgPQUA/s1600-h/pazuzu_poiss_1Snap_plain.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvihIlASK6jAfA8McnmDrty405sDcu1hrAlV2MDf4v6Og2aUCXMYntHbQ5NClxyH6GjvprEBXy57Dbgl9Li5VmEB77ndt7wqiZxfNv5STBf3aM733a4xcB5K7lifUkbqDG9f9LFtgPQUA/s200/pazuzu_poiss_1Snap_plain.png" width="125" /></a>There are many, many techniques for enhancing the look of a surface by means of smart shaders. Without touching <i>Non photo-realistic</i> rendering techniques, there are many tricks that can help the perception of the features and the fine details of the surface of an object. ZBrush has popularized one of these techniques under the name of <b><i>cavity mapping</i></b>. The main idea is that you detect 'pits' on the surface and you make them a different color and, very important, very dull. In practice it roughly simulates all those materials where dust/rust/oxide accumulates in low-accessibility regions while use makes the most exposed parts shiny. You can do such effects in MeshLab and they can be very useful for making quick nice renderings of scanned objects; let's work through <br />
a practical example using a 3D-scanned model of an ancient statuette of the Assyrian demon <a href="http://en.wikipedia.org/wiki/Pazuzu">Pazuzu</a> (courtesy of <a href="http://denics.free.fr/">Denis Pitzalis</a>, C2RMF/Louvre). The plain model (shown on the right) is rather dull and not very readable, and at first glance you cannot appreciate the scale of the scanned details.<br />
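Stripped down to its essence, the blend that cavity mapping performs can be sketched in a few lines of Python. This assumes some per-vertex "pit" value (a curvature, or an inverted accessibility/occlusion term) is already available, and the names and constants are made up for illustration; it is not the actual ZBrush or MeshLab shader.<br />
<pre>
import numpy as np

def cavity_shade(base_color, pit_value, center=0.0, speed=1.0):
    """Per-vertex blend between a shiny 'exposed' look and a dull, darker
    'cavity' look, driven by a pit/curvature value; 'center' and 'speed'
    mimic the threshold/softness knobs of a cavity-mapping shader."""
    t = np.clip((np.asarray(pit_value) - center) * speed + 0.5, 0.0, 1.0)  # 0 = exposed, 1 = pit
    dull = base_color * np.array([0.45, 0.40, 0.35])   # darker, slightly warm tint for the pits
    color = (1 - t)[:, None] * base_color + t[:, None] * dull
    specular = (1 - t) * 0.8                           # kill the highlights inside the pits
    return color, specular
</pre>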
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsZWE2gtaOh_PxajhXr9aFbiUTLdyBMYAoOQU3vOSZj0oOetzwkwF3-HnyyNgqoXNtDnLO1WpYkuHq0CUmWiqSID-VzPQIVauC_P_qYow8l11VgIyQY9EtQfYGsdJajoAQubLEpHuShrI/s1600/pazuzu_poiss_1Snap_AO.png" imageanchor="1" style="float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsZWE2gtaOh_PxajhXr9aFbiUTLdyBMYAoOQU3vOSZj0oOetzwkwF3-HnyyNgqoXNtDnLO1WpYkuHq0CUmWiqSID-VzPQIVauC_P_qYow8l11VgIyQY9EtQfYGsdJajoAQubLEpHuShrI/s200/pazuzu_poiss_1Snap_AO.png" width="125" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGv9A5BhJW8F9lGzIzDTJJqbsvO9koMQGBEkwBTYws9XWf96AOig4ruWU4GAs0UPooVCaEQRqHb6qB91kUpDjeIU-lbYBRelbLNMElHjLGhA1vSRQZmfywFc2xgQ5sGJW2WXEMwSHO2f8/s1600-h/pazuzu_poiss_1Snap_SSAO.png" imageanchor="1" style="float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhGv9A5BhJW8F9lGzIzDTJJqbsvO9koMQGBEkwBTYws9XWf96AOig4ruWU4GAs0UPooVCaEQRqHb6qB91kUpDjeIU-lbYBRelbLNMElHjLGhA1vSRQZmfywFc2xgQ5sGJW2WXEMwSHO2f8/s200/pazuzu_poiss_1Snap_SSAO.png" width="125" /></a></div>You can improve it a bit by adding a bit of ambient occlusion. In MeshLab you can do it in a couple of ways, either computing an actual ambient occlusion term (e.g. a <a href="http://en.wikipedia.org/wiki/Steradian">steradian</a> denoting the portion of sky that you can see from a given point) for each vertex (<i>Filter->Color->Vertex ambient occlusion</i>) or just resort to a quick and dirty <i>Screen Space Ambient Occlusion</i> (SSAO) approximation (<i>render->Screen Space Ambient Occlusion</i>).In the first case as a bonus you are able to tweak the computed value as you want (by playing with the quality mapping tools), in the second you have just an approximation, but the nice fact is that it is a <i>decoration</i> e.g. it is blended over the current frame buffer and therefore it mix up with whatever rendering you have. In the two pictures per vertex AO vs SSAO.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDmV9qx5uc5iY3WkTcsMM8Wa7to4wb-LA9Zez5NOHs8_4_B5c6ajOleM7HV-hLrMjoonjc5m4wuCtn9X_gVQjDEZLVJvkwKtYdBLxxtfY_79Z9Cl11gGnl-0V4wu74wu1XGxjqssEbLjE/s1600-h/pazuzu_poiss_1Snap_curv.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDmV9qx5uc5iY3WkTcsMM8Wa7to4wb-LA9Zez5NOHs8_4_B5c6ajOleM7HV-hLrMjoonjc5m4wuCtn9X_gVQjDEZLVJvkwKtYdBLxxtfY_79Z9Cl11gGnl-0V4wu74wu1XGxjqssEbLjE/s200/pazuzu_poiss_1Snap_curv.png" width="125" /></a></div>Not enough. Lets try to find out and enhance the pits. <a href="http://en.wikipedia.org/wiki/Mean_curvature">Mean Curvature</a> is what you need, you can think it as the divergence of the normals over the surface and it captures well the concept of characterizing local variation of the surface. There is a vast literature on computing curvatures on discrete triangulated surface and MeshLab exposed a few different methods (and it has also a couple of methods for computing them on Point Clouds). The fastest one (a bit resilient on the quality of the mesh) is <i>filter->color->Discrete Curvature. </i>On the right you can see the result of such filter mapped into a red-green-blu colormap. <br />
If you want to experiment you can try the various options under <i>filter->normal->compute curvature principal direction</i> (be careful: some of these filters can be VERY SLOW). <br />
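If you want to play with the idea outside MeshLab, a very crude per-vertex "pit detector" can be built with the umbrella (uniform Laplacian) operator, as in the Python sketch below. It is not the cotangent-based discrete curvature used by the filter above, just a rough, scale-dependent proxy, and the function name is ours.<br />
<pre>
import numpy as np

def pit_value(verts, faces):
    """Umbrella-operator proxy: the Laplacian vector of each vertex projected
    onto its (area-weighted) vertex normal. Positive in concavities ('pits'),
    negative on convex bumps. Rough and scale-dependent, for illustration only."""
    verts = np.asarray(verts, dtype=float)
    nsum = np.zeros_like(verts)
    ncnt = np.zeros(len(verts))
    nrm = np.zeros_like(verts)
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            nsum[a] += verts[b]; ncnt[a] += 1   # interior edges counted twice: fine for an average
            nsum[b] += verts[a]; ncnt[b] += 1
        fn = np.cross(verts[j] - verts[i], verts[k] - verts[i])
        nrm[i] += fn; nrm[j] += fn; nrm[k] += fn
    lap = nsum / np.maximum(ncnt, 1)[:, None] - verts
    nrm /= np.maximum(np.linalg.norm(nrm, axis=1, keepdims=True), 1e-12)
    return np.einsum('ij,ij->i', lap, nrm)
</pre>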
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid4DYE3SfvM__2VJaUaJbBBHHB8us9t5g20WcxqJfyEAQKyz3RUCcjouCiJnBTIO5IvvimLIurzYWp7DygqsrRdF_3N17TXkuKnemnG4nqctypAu1TWGu2pcy5aSHSJMDjDOSQsvkePpA/s1600-h/pazuzu_poiss_1Snap_Zb2.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid4DYE3SfvM__2VJaUaJbBBHHB8us9t5g20WcxqJfyEAQKyz3RUCcjouCiJnBTIO5IvvimLIurzYWp7DygqsrRdF_3N17TXkuKnemnG4nqctypAu1TWGu2pcy5aSHSJMDjDOSQsvkePpA/s320/pazuzu_poiss_1Snap_Zb2.png" /></a></div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4hAT1mJuQaowXUVjroL1qkL_ry8v5a3PJCVVlNDcYG8yrcR8oQaDwbHZeKZJODTUU8E-Anr6l6gF1kYag6prjABmoUkNUpHunegiKsnJOSRyd5QR2tNi7wOi_qOMogi2CKHjHp0RT-vc/s1600-h/pazuzu_poiss_1Snap_Zb.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4hAT1mJuQaowXUVjroL1qkL_ry8v5a3PJCVVlNDcYG8yrcR8oQaDwbHZeKZJODTUU8E-Anr6l6gF1kYag6prjABmoUkNUpHunegiKsnJOSRyd5QR2tNi7wOi_qOMogi2CKHjHp0RT-vc/s320/pazuzu_poiss_1Snap_Zb.png" /></a><br />
<br />
Now the final step is just to use this value for the shading. Just start the shader called ZBrush and play a bit with the parameters and then, hopefully, you will get the desired result. Some notes: curvature often has some outliers, so clamping the quality values before starting the filter can be a good idea (filter->quality->clamp). Similarly, the range can be very large, so playing with the "transition_speed" parameter of the shader can be quite useful. To vary the amount of "pits" use the "transition center" slider.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-25249287769425487752010-03-02T16:18:00.000-08:002010-07-06T01:31:10.463-07:00Measuring the distance between two meshes (2)Second part of the "metro" tutorial; the first part is <a href="http://meshlabstuff.blogspot.com/2010/01/measuring-difference-between-two-meshes.html">here</a>.<br />
<br />
Remember that MeshLab uses a sampling approach to compute the Hausdorff distance: it takes a set of points over a mesh X and, for each point x on X, it searches for the closest point y on the other mesh Y. That means that the result is strongly affected by how many points you take over X. Now assume that we want to color the mesh X (e.g. the low resolution one) with the distance from Y. <br />
In this case the previous trick of using per-vertex color will yield poor results, given the low resolution of the mesh. <br />
Let's start again with our two Happy Buddha meshes, the full resolution one and the one simplified to 50k faces. <br />
So, first of all, we need a denser sampling over the low-res mesh. That means that when we compute the Hausdorff distance we set the simplified mesh as the sample mesh and the original Happy Buddha as the target, we choose face sampling with a reasonably high number of sampling points (10^6 is fine) and, very important, we ask to save the computed samples by checking the appropriate option.<br />
<br />
After a few seconds you will see in the layer window two new layers containing the point clouds that represent, respectively, the samples taken over the simplified mesh and the corresponding closest points on the original mesh. <br />
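If you are curious about what those two layers actually contain, here is a hedged Python sketch of the idea (not MeshLab's code: MeshLab searches the closest point on the <i>surface</i> of the target, while for brevity this sketch only queries the target vertices with a k-d tree, and all the names are illustrative):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
# Area-weighted face sampling of mesh X plus, for each sample, an approximate
# closest point on mesh Y (closest Y vertex only, as a simplification).
import numpy as np
from scipy.spatial import cKDTree

def sample_faces(verts, faces, n_samples):
    tri = verts[faces]                                             # (m,3,3) triangle corners
    areas = 0.5 * np.linalg.norm(np.cross(tri[:, 1] - tri[:, 0],
                                          tri[:, 2] - tri[:, 0]), axis=1)
    picked = np.random.choice(len(faces), n_samples, p=areas / areas.sum())
    r1 = np.sqrt(np.random.rand(n_samples, 1))                     # uniform barycentric coords
    r2 = np.random.rand(n_samples, 1)
    bary = np.hstack([1 - r1, r1 * (1 - r2), r1 * r2])
    return np.einsum('ij,ijk->ik', bary, tri[picked])              # first layer: the samples

def closest_points(samples, target_verts):
    dist, idx = cKDTree(target_verts).query(samples)
    return target_verts[idx], dist                                 # second layer + distances
</pre>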
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg951jyCic8zfHkC96WCa881kagQQZ9SyGvjECchUAVKYOadeMK9R3kT5nfmqKBwPfb5APcQUv4DFmYUo8mjFiW6Ob5dx83iTYGO9vW1EAy_uXo7ALbs4sE3E8KSzEDwe-LZVvNswuV2wI/s1600-h/happy_vripSnap200.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg951jyCic8zfHkC96WCa881kagQQZ9SyGvjECchUAVKYOadeMK9R3kT5nfmqKBwPfb5APcQUv4DFmYUo8mjFiW6Ob5dx83iTYGO9vW1EAy_uXo7ALbs4sE3E8KSzEDwe-LZVvNswuV2wI/s320/happy_vripSnap200.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg0vTkclIr_u0SNFuz4r2wBMe6HHEo0hQeocnowtKzq2jWxc2_d8xCOQFvCcCrwG5Lj2EowLSDzCrEZw_KM4LqojZBOHTgQ56ilW474shW6gkkBvG41pqBCfJaH2CzahnOqTurnP9xeRE/s1600-h/happy_vripSnap201.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg0vTkclIr_u0SNFuz4r2wBMe6HHEo0hQeocnowtKzq2jWxc2_d8xCOQFvCcCrwG5Lj2EowLSDzCrEZw_KM4LqojZBOHTgQ56ilW474shW6gkkBvG41pqBCfJaH2CzahnOqTurnP9xeRE/s320/happy_vripSnap201.png" /></a></div><br />
To see and inspect these point clouds you have to manually switch to point visualization and turn off the other layers. Below, two snapshots of the point cloud (in this case 2,000,000 point samples) at different zoom levels, to give a hint of the cloud density.<br />
<br />
Then, just like in the previous post, use the <i>Color->Colorize by quality</i> filter to color the point cloud of the sample points (the one over the simplified mesh) with the standard red-green-blue colormap. <br />
<br />
Now, to better visualize these colors, we use a texture map. As a first step we need a simple parameterization of the simplified mesh. MeshLab offers a couple of parametrization tools: the first one is a rather simple, trivial independent right-triangle packing approach, while the other one is the state-of-the-art <a href="http://meshlabstuff.blogspot.com/2009/07/almost-isometric-mesh-parameterization.html">almost isometric approach</a>. Let's use the first one (more on the latter in a future post...) simply by starting <i>Texture->Trivial Per-Triangle Parametrization</i>. This kind of parametrization is indeed trivial, with a lot of problems (distortion, fragmentation, etc.), but on the other hand it is quite fast, simple and robust, and in a few cases it can even be useful.<br />
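To give an idea of how trivial this parametrization really is, here is a hedged Python sketch of the concept (not the actual MeshLab filter; here every triangle simply gets its own inset right triangle in a regular grid of texture cells, which is enough to see where the fragmentation and the wasted texture space come from):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
# One right triangle per grid cell: no overlaps, lots of seams and wasted space.
import math
import numpy as np

def trivial_per_triangle_uv(n_faces, border=0.05):
    cells = math.ceil(math.sqrt(n_faces))           # cells per side of the texture
    cell = 1.0 / cells
    uvs = np.zeros((n_faces, 3, 2))
    for f in range(n_faces):
        cx, cy = (f % cells) * cell, (f // cells) * cell
        pad = border * cell                          # small inset to limit color bleeding
        uvs[f] = [(cx + pad,        cy + pad),
                  (cx + cell - pad, cy + pad),
                  (cx + pad,        cy + cell - pad)]
    return uvs                                       # per-wedge UVs, one triple per face
</pre>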
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZkSBh43NRwN0S3o8hZPbLFd0C4ZbnLXNlfT9TzPad-MeC142kmF1by4GEfhvfaDHkCjuI2NKmi43c4y1HVDjG4eo0b9NUwiMQq8ywYMxukSUxyv5pA3IHJ5RwSQB_g1aJpRSNapJsK-Y/s1600-h/happy_vripSnap202.png" imageanchor="1" style="margin-left: 0.3em; margin-right: 0.3em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZkSBh43NRwN0S3o8hZPbLFd0C4ZbnLXNlfT9TzPad-MeC142kmF1by4GEfhvfaDHkCjuI2NKmi43c4y1HVDjG4eo0b9NUwiMQq8ywYMxukSUxyv5pA3IHJ5RwSQB_g1aJpRSNapJsK-Y/s320/happy_vripSnap202.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgz6QDbylJGgaDzgbG-t9ngjAJPxCFE6VK20Q-75IYLN41pW-9crjYiokUCwEuSFEXJzVZBb8siVd2UP5aKbfRoVYLJGz54nOuWKEH9byKH6Qa7ravmgatU313v0d7MLculLlgwPfwNuVc/s1600-h/happy_vripSnap203.png" imageanchor="1" style="margin-left: 0.3em; margin-right: 0.3em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgz6QDbylJGgaDzgbG-t9ngjAJPxCFE6VK20Q-75IYLN41pW-9crjYiokUCwEuSFEXJzVZBb8siVd2UP5aKbfRoVYLJGz54nOuWKEH9byKH6Qa7ravmgatU313v0d7MLculLlgwPfwNuVc/s320/happy_vripSnap203.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7Ym_mjlYC8s5S00gs9HKfIvYyJSGc3eYBU6ol1IK8Rc_3hsoJFFnxTjWMwBaFkXi1Rs8eJqFCHw5imNnCl_FqXs9If-mq0hmyFnbUGlQ9W6yvvJhVqoSV5hrU216eJZHgRSHO8_jreJo/s1600-h/happy_vripSnap204.png" imageanchor="1" style="margin-left: 0.3em; margin-right: 0.3em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7Ym_mjlYC8s5S00gs9HKfIvYyJSGc3eYBU6ol1IK8Rc_3hsoJFFnxTjWMwBaFkXi1Rs8eJqFCHw5imNnCl_FqXs9If-mq0hmyFnbUGlQ9W6yvvJhVqoSV5hrU216eJZHgRSHO8_jreJo/s320/happy_vripSnap204.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPez3lcqaZY-rn5T7GraNT0rpXMQFPx7QcF_PJSJsq1tEBHb22trXUZEgEiegmKQF4PrHtqzfBj2kCQC4Ez5t7zP_OUSYg2ksUKLeLxpFK6VdYY-us330vl43_DuFiQz6d2JbT7Qql2QU/s1600-h/happy_vripSnap205.png" imageanchor="1" style="margin-left: 0.3em; margin-right: 0.3em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPez3lcqaZY-rn5T7GraNT0rpXMQFPx7QcF_PJSJsq1tEBHb22trXUZEgEiegmKQF4PrHtqzfBj2kCQC4Ez5t7zP_OUSYg2ksUKLeLxpFK6VdYY-us330vl43_DuFiQz6d2JbT7Qql2QU/s320/happy_vripSnap205.png" /></a></div>Now you have just to fill the texture with the color of the sampled point cloud; you can do this with the filter <i>texture->Transfer color to texture</i> and choosing an adequate texture size (be bold and use a large texture size...). Below the result with a comparison about error color coded using a texture or simply the color-per-vertex.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-5712613272400158102010-01-10T17:09:00.000-08:002010-07-06T01:29:20.599-07:00Measuring the difference between two meshesComputing the geometric difference between two 3D models is a quite common task in mesh processing. In our lab, many years ago (11 !), we developed and freely distributed the standard tool for such task, <i><b>Metro</b></i>, whose <a href="http://www3.interscience.wiley.com/journal/119117847/abstract">paper</a> has been cited <a href="http://scholar.google.com/scholar?q=metro">more than 500 times </a>. 
While Metro is still a small open source standalone command line program <a href="http://vcg.sourceforge.net/index.php/Metro">available at our web site</a>, its functionality has been integrated into MeshLab in the filter <i>Sampling->Hausdorff Distance</i>, and it can be used in a variety of ways.<br />
So here is a short basic tutorial. <br />
<br />
Start with a mesh (in the following, the well known Stanford Happy Buddha, 1087716 triangles). Aggressively simplify it down to just 50k triangles (i.e. 1/20 of the original size). Reload the original mesh as a new layer. At this point you should have two approximations of the same shape, well aligned in the same space. By toggling the visibility of each mesh on and off you should easily see the difference between the two meshes (tip: <i>ctrl+click</i> over the eye icon in the <i>layers</i> window turns off all the other layers).<br />
<br />
Now you are ready to start the Hausdorff distance filter. First of all remember that the <a href="http://en.wikipedia.org/wiki/Hausdorff_distance">Hausdorff Distance</a> between two meshes is the maximum of the two so-called one-sided Hausdorff distances (which, technically, are not distances): <br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTRzGn2IVU4lyVAnBAVeOnvXnzeH1RKTnMB2_OwUMPPGs4GXlLVAnABG-dTN35BEKiDBvhNe7BVGNDci7NVQ2s6486VP67Yjxr_ITsQNK23EBEW3eHF11YDHnshExE4EzM9_AwqMFAur8/s1600-h/Screen+shot+2010-01-11+at+12.30.43+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTRzGn2IVU4lyVAnBAVeOnvXnzeH1RKTnMB2_OwUMPPGs4GXlLVAnABG-dTN35BEKiDBvhNe7BVGNDci7NVQ2s6486VP67Yjxr_ITsQNK23EBEW3eHF11YDHnshExE4EzM9_AwqMFAur8/s640/Screen+shot+2010-01-11+at+12.30.43+AM.png" /></a></div>These two measures are not symmetric (e.g. the results depends on what mesh you set as X or Y).<br />
In its Hausdorff Distance filter, MeshLab computes only the one-sided version
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMbC5sYfP3GrmzpAGsxfSMlm50EbwHRnIy1Ny5KbkAld6DIDe0Nxa9QeqzEWfdSpYqNrKEw_aXZJUNkjkFCpBfYBfLnAaqonfZFaAR2f3YMcoKRq7ZcZcCXwYF3MGpPDAVbye1y15lDAg/s1600-h/Screen+shot+2010-01-11+at+12.31.39+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMbC5sYfP3GrmzpAGsxfSMlm50EbwHRnIy1Ny5KbkAld6DIDe0Nxa9QeqzEWfdSpYqNrKEw_aXZJUNkjkFCpBfYBfLnAaqonfZFaAR2f3YMcoKRq7ZcZcCXwYF3MGpPDAVbye1y15lDAg/s200/Screen+shot+2010-01-11+at+12.31.39+AM.png" /></a></div>leaving the task of getting the maximum of the two to the user.<br />
<br />
Now, on the practical side. MeshLab uses a sampling approach to compute the above formula, taking a set of points over a mesh <i>X</i> and searching, for each <i>x</i>, the closest point <i>y</i> on the mesh <i>Y</i>. That means that the result is strongly affected by how many points you take over X, and there are a lot of options for that. A common, very simple approach is just to use the vertices of the densest mesh as sampling points (e.g. the original Buddha vertices): to do this simply leave only the "<i>vertex sampling</i>" option checked in the filter dialog and be sure that the number of samples is greater than or equal to the vertex count. After a few seconds the filter ends, writing the collected info in the layer log window. Something like:<br />
<br />
<div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: Hausdorff Distance computed</span></div><div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: Sample 543652</span></div><div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: min : 0.000000 max 0.001862</span></div><div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: mean : 0.000029 RMS : 0.000083</span></div><div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: Values w.r.t. BBox Diag (0.229031)</span></div><div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: min : 0.000000 max 0.008128</span></div><div style="font-family: "Courier New",Courier,monospace;"><span style="font-size: x-small;">: mean : 0.000126 RMS : 0.000361</span></div><br />
For the sake of human readability the filter reports the values both in the mesh units (whatever they are) and with respect to the diagonal of the bounding box of the mesh, which is something you can always interpret without knowing anything about the model units. For example, in this case you can see that the maximum error between the two meshes is approximately 1% of the bbox diagonal, but on average the two meshes are roughly in the 1/10000 range.<br />
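If you prefer to see the idea in code, here is a hedged Python sketch of how numbers of this kind can be obtained (again, not MeshLab's implementation: MeshLab finds the closest point on the target <i>surface</i>, while this sketch, for brevity, only queries the target vertices with a k-d tree, so it slightly overestimates the distances):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
# Sampled one-sided distance from the vertices of X to (the vertices of) Y,
# reported both in mesh units and normalized by the bounding box diagonal.
import numpy as np
from scipy.spatial import cKDTree

def one_sided_distance_stats(x_verts, y_verts):
    d, _ = cKDTree(y_verts).query(x_verts)
    diag = np.linalg.norm(x_verts.max(axis=0) - x_verts.min(axis=0))   # bbox diagonal
    stats = dict(samples=len(d), min=d.min(), max=d.max(),
                 mean=d.mean(), rms=np.sqrt((d ** 2).mean()))
    stats_wrt_diag = {k: v / diag for k, v in stats.items() if k != 'samples'}
    return stats, stats_wrt_diag
</pre>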
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoFN4tGFY12pGaShR65bsPc20rmwN0YTrDFEKBd-YnDjEDiwud8nmJVzF1lqMp-UrtcW_9qIPwbEQnfNTmmG9RiR4ZS5IXYWBGRa-zjR6tATydkect6WZ4aQ-vZBFFViRBLEVUU4HqtDg/s1600-h/happy_vripSnap00.png" imageanchor="1" style="margin-left: 0.25em; margin-right: 0.25em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoFN4tGFY12pGaShR65bsPc20rmwN0YTrDFEKBd-YnDjEDiwud8nmJVzF1lqMp-UrtcW_9qIPwbEQnfNTmmG9RiR4ZS5IXYWBGRa-zjR6tATydkect6WZ4aQ-vZBFFViRBLEVUU4HqtDg/s320/happy_vripSnap00.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxrHKV3OUEQyizwvyvsCnnt9gh3JvlaHEhU-pjhI3VVrAqFUak8oXotGqPvBdvuvGyAFGAUK_w80TsBAMw3E6ZKX-axb_00wIEAyuMBnred5V6f6qKU1cGVwgR2Pbtva_ahqVl4DiZAnw/s1600-h/happy_vripSnap01.png" imageanchor="1" style="margin-left: 0.25em; margin-right: 0.25em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxrHKV3OUEQyizwvyvsCnnt9gh3JvlaHEhU-pjhI3VVrAqFUak8oXotGqPvBdvuvGyAFGAUK_w80TsBAMw3E6ZKX-axb_00wIEAyuMBnred5V6f6qKU1cGVwgR2Pbtva_ahqVl4DiZAnw/s320/happy_vripSnap01.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqMTX6B7Ywjbq3iGTqOovffGM9PvB9xND-Sa18i7N5oas_9Kqkmc1LetjnXyrfisXsFiD-uLdkdTz9Cmgc1B0fzb8e6mXAUA_mBdmKdBV69dDDXh0QgVH-0Hq-SRBh9qb2IJCp1vn-79s/s1600-h/happy_vripSnap03.png" imageanchor="1" style="margin-left: 0.25em; margin-right: 0.25em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqMTX6B7Ywjbq3iGTqOovffGM9PvB9xND-Sa18i7N5oas_9Kqkmc1LetjnXyrfisXsFiD-uLdkdTz9Cmgc1B0fzb8e6mXAUA_mBdmKdBV69dDDXh0QgVH-0Hq-SRBh9qb2IJCp1vn-79s/s320/happy_vripSnap03.png" /></a></div><br />
The filter saves the computed distance values in the all-purpose <i>quality</i> field of the vertices of the sampled mesh. To better visualize the error you can simply convert these values (for the high resolution mesh) into colors using the <i>Color->Colorize by quality</i> filter, which maps them onto a red-green-blue colormap. Usually, given the non-uniform distribution of the values, you have to play a bit with the filter parameters, clamping the mapping range to something meaningful (only a few points reach the maximum, so a linear mapping of the values over the whole range would result in an almost uniformly red mesh). Note that it is a red-green-blue map, so red is min and blue is max: in our case red means zero error (good) and blue means high error (bad).<br />
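A minimal Python sketch of this clamp-then-colorize step (just the concept, not the MeshLab filter; the percentile clamping defaults are illustrative assumptions):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
# Clamp the per-vertex quality (here, the distance) and map it onto a
# red -> green -> blue ramp, so a few outliers do not flatten all the colors.
import numpy as np

def quality_to_rgb(quality, lo_pct=0.0, hi_pct=95.0):
    lo, hi = np.percentile(quality, [lo_pct, hi_pct])
    t = np.clip((quality - lo) / max(hi - lo, 1e-12), 0.0, 1.0)   # 0 = min, 1 = max
    r = np.clip(1.0 - 2.0 * t, 0.0, 1.0)                          # red   = low error
    b = np.clip(2.0 * t - 1.0, 0.0, 1.0)                          # blue  = high error
    g = 1.0 - r - b                                               # green = in between
    return np.stack([r, g, b], axis=1)                            # (n,3) colors in [0,1]
</pre>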
The next image sequence shows just a small detail of one of the regions with higher error. During the simplification we removed some <i>topological noise</i> (the thin tubes connecting the two sides of the hole); from a Hausdorff point of view this is a rather large error: the points in the middle of the thin tubes have nothing in the simplified mesh that is close to them, so they bring up the maximum error significantly. Luckily they represent only a small portion of the whole mesh, so the average error remains low.<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl09y-raxAzD-7AzAbgYk7rtKHg5Mv461qwxEkBsEsj_3rLIQvQeCk3dFGFne_BejM-JBc1runjelULRAqTKrjDFizEwTJL63V15GMjtjIJ4NLFWrJgwY5DZXN2erCgsKoPoMhbscZHYI/s1600-h/happy_vripSnap04.png" imageanchor="1" style="margin-left: 0.15em; margin-right: 0.15em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjl09y-raxAzD-7AzAbgYk7rtKHg5Mv461qwxEkBsEsj_3rLIQvQeCk3dFGFne_BejM-JBc1runjelULRAqTKrjDFizEwTJL63V15GMjtjIJ4NLFWrJgwY5DZXN2erCgsKoPoMhbscZHYI/s200/happy_vripSnap04.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOJXC8detPTEbG3MLV8-chrekHI-Oeryaxtv0xBwNl_wCstvOzHv_Yyjff0RmBKBCmY5Uf6YpNuvjjzYuJNDe3iYkfaGbhX9wQZvMArbe9XkIzSgJ99MWozClyaf_ZPW35z14jYeQE1kw/s1600-h/happy_vripSnap05.png" imageanchor="1" style="margin-left: 0.15em; margin-right: 0.15em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOJXC8detPTEbG3MLV8-chrekHI-Oeryaxtv0xBwNl_wCstvOzHv_Yyjff0RmBKBCmY5Uf6YpNuvjjzYuJNDe3iYkfaGbhX9wQZvMArbe9XkIzSgJ99MWozClyaf_ZPW35z14jYeQE1kw/s200/happy_vripSnap05.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0-IW9C9zZKys5eRfkFkCD96Xj3bfFZIN6fVspWbysL0Jyfe6T-EZF2qj2qASQ9n8GsAEVVxCxu9RGrrI5fxjcubhsugvRHnwurP-1FVPd_3O-w5Lpj5rcczQGkHVOJOzaNfCNmiitBHg/s1600-h/happy_vripSnap06.png" imageanchor="1" style="margin-left: 0.15em; margin-right: 0.15em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0-IW9C9zZKys5eRfkFkCD96Xj3bfFZIN6fVspWbysL0Jyfe6T-EZF2qj2qASQ9n8GsAEVVxCxu9RGrrI5fxjcubhsugvRHnwurP-1FVPd_3O-w5Lpj5rcczQGkHVOJOzaNfCNmiitBHg/s200/happy_vripSnap06.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiA25fgkOkVjPY57GNjo9z6VDDiSAFBb-AXRXotnxMGvtwkMq4YD9W5keaEbc9P8yQFYATU_1Wogy-RZj3sYDbJomfGmN8h-a7YNZBtkmkJsUG6Knu2XlRMFO8yDHDpra_sp7kzn-_is00/s1600-h/happy_vripSnap07.png" imageanchor="1" style="margin-left: 0.15em; margin-right: 0.15em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiA25fgkOkVjPY57GNjo9z6VDDiSAFBb-AXRXotnxMGvtwkMq4YD9W5keaEbc9P8yQFYATU_1Wogy-RZj3sYDbJomfGmN8h-a7YNZBtkmkJsUG6Knu2XlRMFO8yDHDpra_sp7kzn-_is00/s200/happy_vripSnap07.png" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5ZPjjmPX8GKgNo-7nusdGVkfRVZkf3eeVQCFy4y2ZaN6AHMm8UGvZ3bb8LttLSocDFbY1bbp-RKTpiocnL9j7VF1A0-kU9jadVbGEc8rREzvHBAQ7U_S8L_eOQsTrMQ3LUhYYrJacrbg/s1600-h/happy_vripSnap08.png" imageanchor="1" style="margin-left: 0.15em; margin-right: 0.15em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5ZPjjmPX8GKgNo-7nusdGVkfRVZkf3eeVQCFy4y2ZaN6AHMm8UGvZ3bb8LttLSocDFbY1bbp-RKTpiocnL9j7VF1A0-kU9jadVbGEc8rREzvHBAQ7U_S8L_eOQsTrMQ3LUhYYrJacrbg/s200/happy_vripSnap08.png" /></a></div><br />
Note that if you measure the other one-sided Hausdorff distance, that specific mesh portion will not show any particular error, because in that case you sample the simplified mesh, and for each point of the simplified mesh there are points of the original mesh that are quite close to it. In other words, in this case the simplified mesh is <i>close</i> to the original one, but the original one is <i>not close</i> to the simplified one.<br />
<br />
The next post will discuss some remaining issues, including the sampling of the surface, looking at all the taken samples and the found closest points, and how to colorize the low resolution mesh...<br />
Second part of the tutorial <i><b><a href="http://meshlabstuff.blogspot.com/2010/03/measuring-distance-between-two-meshes-2.html">here.</a></b></i>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-42536138998580549752010-01-06T10:53:00.000-08:002010-01-06T10:53:39.620-08:00Desktop Manufacturing<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHcuszdqV2zk83ioIb7c6G6Dttarg5dmX18EZ9KPSJV110w9zdzJJQDHak4RMHBdVpWAIQ7mxJJHrXAh2qxy2MGkucmxiyAU9ozLd-SI1Sme9isC9Bq5xM8CF2n0znfUFpPOXrvBA9Tfo/s1600-h/Screen+shot+2010-01-06+at+7.46.42+PM.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHcuszdqV2zk83ioIb7c6G6Dttarg5dmX18EZ9KPSJV110w9zdzJJQDHak4RMHBdVpWAIQ7mxJJHrXAh2qxy2MGkucmxiyAU9ozLd-SI1Sme9isC9Bq5xM8CF2n0znfUFpPOXrvBA9Tfo/s200/Screen+shot+2010-01-06+at+7.46.42+PM.png" /></a>The January 2010 issue of <a href="http://makezine.com/magazine/">Make</a> will contain a lot of stuff about Desktop Manufacturing, a field where MeshLab has always been useful as an all-purpose repair tool (and it is often cited as a handy free STL viewer...). In particular, in the "3D Fabbing state of the art" piece in Make, they refer to MeshLab as <i><a href="http://www.make-digital.com/make/vol21/?pg=75&search=meshlab&per_page=5&results_page=1&doc_id=-1">"really high quality free software"</a></i>. That's flattering :).ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-13930514600271550022009-12-22T16:41:00.000-08:002010-01-02T00:50:50.736-08:00Practical Quad Mesh SimplificationJust a shameless plug for our latest <a href="http://www.eurographics2010.se/">EG</a> paper that will find its way into MeshLab:<br />
<br />
<i> Marco Tarini, Nico Pietroni, Paolo Cignoni, Daniele Panozzo, Enrico Puppo</i><br />
<a href="http://vcg.isti.cnr.it/Publications/2010/TPCPP10"><b>Practical Quad Mesh Simplification</b></a><br />
Computer Graphics Forum, Volume 29, Number 2, EuroGraphics 2010<br />
<br />
In our community the old religious war between quad and triangle meshes is well known; each approach has its own merits and I will not discuss them here. <br />
Moving back and forth between the two approaches is often useful but the issue of getting a good quad mesh from a highly irregular tri mesh is a tough one. <br />
<br />
In the above paper we present a novel approach to the problem of quad mesh simplification, striving to use practical local operations while maintaining the goal of maximizing tessellation quality. We aim to progressively generate a mesh made of convex, right-angled, flat, equally-sided quads, with a uniform distribution of vertices (or, depending on the application, a controlled/adaptive sample density) and with regular valency wherever appropriate. <br />
<br />
In simple words, we start from a tri mesh, convert it into a dense quad mesh using a new triangle-to-quad conversion algorithm, and then simplify it using a new progressive quad simplification algorithm. The nice part is that the quad simplification algorithm actually improves the quality of the quad mesh. Below, a small example.<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5gsIMnSlz_9PNXFGFzjPni7lAqt2Ppyney2b6Pi8uV5BXaqFtXATiIBrquQM_TiRaBiM2LjaZm014h1yWBHHzJKaPynXD76b2ELwoz0edRT8VxI-VLfSGnDasN36S0GChuM8tMQ0ogRk/s1600-h/ultimo_dei_moai.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5gsIMnSlz_9PNXFGFzjPni7lAqt2Ppyney2b6Pi8uV5BXaqFtXATiIBrquQM_TiRaBiM2LjaZm014h1yWBHHzJKaPynXD76b2ELwoz0edRT8VxI-VLfSGnDasN36S0GChuM8tMQ0ogRk/s400/ultimo_dei_moai.png" /></a><br />
</div>We are currently adding this stuff inside MeshLab. The first things that will appear are the triangle to quad conversion algorithms and some functions for measuring the quality of a quad mesh according to some metrics. More info in the next posts....<br />
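Just to give a rough idea of what a triangle-to-quad conversion has to decide, here is a deliberately naive Python sketch (this is <b>not</b> the algorithm of the paper, which cares much more about the quality of the resulting quads; it merely pairs adjacent triangles greedily across their shared edges):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
# Naive tri-to-quad pairing: merge two triangles sharing an edge into one quad,
# visiting the longest shared edges first; leftover triangles stay as they are.
import numpy as np
from collections import defaultdict

def greedy_tri_to_quad(verts, faces):
    """verts: (n,3) numpy array, faces: (m,3) integer triangles."""
    edge_to_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_to_faces[tuple(sorted((a, b)))].append(fi)
    interior = [(e, fs) for e, fs in edge_to_faces.items() if len(fs) == 2]
    interior.sort(key=lambda ef: -np.linalg.norm(verts[ef[0][0]] - verts[ef[0][1]]))
    used, quads = set(), []
    for (a, b), (f0, f1) in interior:
        if f0 in used or f1 in used:
            continue
        c = [v for v in faces[f0] if v not in (a, b)][0]   # vertex opposite the edge in f0
        d = [v for v in faces[f1] if v not in (a, b)][0]   # vertex opposite the edge in f1
        quads.append((c, a, d, b))                         # walk around the shared edge
        used.update((f0, f1))
    tris = [tuple(f) for fi, f in enumerate(faces) if fi not in used]
    return quads, tris
</pre>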
<span style="font-size: x-small;"><i><br />
</i></span><br />
<span style="font-size: small;"><i>(2/1/10 edit: if the above link for the paper does not work try this: <b><a href="http://www.cignoni.org/PracticalQuadMeshSimplification.pdf"><b>Practical Quad Mesh Simplification</b></a></b>)</i></span>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-57789762989430119022009-12-03T00:34:00.000-08:002010-02-05T05:16:24.819-08:00MeshLab on YouTubeJust a short post of a video created by Nicolò dell'Unto (<a href="http://www.imtlucca.it/phd_programs/alumni.php">a PhD student at IMT</a>)<br />
about the use of MeshLab and Arc3D for building up a 3D model of an archeological excavation and showing it inside a <a href="http://en.wikipedia.org/wiki/Cave_Automatic_Virtual_Environment">cave</a>.<br />
<br />
<object width="320" height="265"><param name="movie" value="http://www.youtube.com/v/xvTfg2Vxx88&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/xvTfg2Vxx88&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="320" height="265"></embed></object><br />
<br />
Side note: the data was collected during a workshop of the <a href="http://www.3d-coform.eu/">3DCOFORM</a> training series on “<a href="http://www.cyi.ac.cy/node/578">3D acquisition and post-processing</a>” that took place at The Cyprus Institute in Nicosia on 2-6 November 2009 and that, among other things, focused on the use of MeshLab for Cultural Heritage related activities.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-40629969128015564412009-11-03T01:09:00.000-08:002009-11-04T05:29:24.996-08:003D scanning and unrolling an ancient seal<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8yerXxigf29kiLOv9vEcOAOI3cWWvC4o_M9ahRIBzA5KThBWvN5ndlGkSovK8Xd2gsHpmZr_qSg4jIAbE3xEpPKAnOHIpI4nWpDcR5JSkv8QM_NN0OgjFZRJIrkVx1BUZG46z9nU1oMA/s1600-h/sceau.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8yerXxigf29kiLOv9vEcOAOI3cWWvC4o_M9ahRIBzA5KThBWvN5ndlGkSovK8Xd2gsHpmZr_qSg4jIAbE3xEpPKAnOHIpI4nWpDcR5JSkv8QM_NN0OgjFZRJIrkVx1BUZG46z9nU1oMA/s200/sceau.jpg" /></a>A few lines on an interesting recent project I participated in, and that exploited MeshLab's processing abilities. <br />
The project, whose results are now shown in an exhibition at the Louvre, involved scanning, with non-traditional technologies, the very small and wonderful ancient <a href="http://www.louvre.fr/llv/oeuvres/detail_notice.jsp?CONTENT%3C%3Ecnt_id=10134198673225280&CURRENT_LLV_NOTICE%3C%3Ecnt_id=10134198673225280&FOLDER%3C%3Efolder_id=9852723696500800&baseIndex=37&bmLocale=en">Cylinder Seal of Ibni-Sharrum</a> (photo © CRMF / D. Pitzalis), a precious antique Mesopotamian artifact that is considered one of the absolute masterpieces of <a href="http://en.wikipedia.org/wiki/Ancient_glyptic_art">glyptic art</a>.<br />
<br />
This small seal was digitally acquired at <a href="http://www.c2rmf.fr/">CRMF</a> at a very high resolution and with a variety of 3D scanning techniques (microprofilometry, x-ray Tomography, photogrammetric techniques) and, obviously, the results were processed and integrated entirely with MeshLab.<br />
<br />
Among the nice things that we did inside MeshLab was the <i>virtual unrolling</i> of the seal, i.e. getting the inverse shape that you obtain when you roll the seal over a soft substance like clay or wax. It was quite easy from a technical point of view, but very appreciated by the restorers, who avoid invasive plaster-based techniques that can often leave small residues on the precious artifacts. You can find more details on the whole acquisition and processing of the seal in this VAST conference <a href="http://vcg.isti.cnr.it/Publications/2008/PCMA08/">paper</a>.<br />
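For the curious ones, here is a hedged Python sketch of the basic idea behind the unrolling (the processing described in the paper is more careful than this; the cylinder axis is assumed to be known, and the small helper below is purely illustrative):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
# Cylinder-to-plane unrolling: arc length and height become the plane coordinates,
# and the radial offset from the mean radius is negated to get the impression.
import numpy as np

def unroll_cylinder(verts, axis_point, axis_dir):
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = verts - axis_point
    z = rel @ d                                     # height along the cylinder axis
    radial = rel - np.outer(z, d)
    r = np.linalg.norm(radial, axis=1)
    u, v = perp_basis(d)
    theta = np.arctan2(radial @ v, radial @ u)
    r0 = r.mean()                                   # nominal cylinder radius
    return np.column_stack([theta * r0, z, -(r - r0)])   # negated relief = impression

def perp_basis(d):
    """Two unit vectors orthogonal to d (illustrative helper)."""
    u = np.cross(d, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(d, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    return u, np.cross(d, u)
</pre>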
<br />
<br />
<div class="separator" style="clear: both; text-align: center;"><br />
</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOZt3SQdrVvvktCejtfzDcbSQ4BdWBx-GgIBG8Qk4Qrs6oGGgJHXiPyq1ne_Nqq_UIE2Ymt1wm2OKajN_JUdiVaYH7tlM_jOZ70_j_kZsNUwTXS2dz-Jvf7XCo4VaUKwdBHAqZDgjkv6Y/s1600-h/sceau_unwrap_shader.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOZt3SQdrVvvktCejtfzDcbSQ4BdWBx-GgIBG8Qk4Qrs6oGGgJHXiPyq1ne_Nqq_UIE2Ymt1wm2OKajN_JUdiVaYH7tlM_jOZ70_j_kZsNUwTXS2dz-Jvf7XCo4VaUKwdBHAqZDgjkv6Y/s320/sceau_unwrap_shader.jpg" /></a><br />
</div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtdM2JIw1b7p7f2uuifzXzZjxTkAANworJ6HVTFyiNRLIcF6E42qW78C1hWillsToaprJ1A2mpNVZ6Lk77GDsvH8Mbs95Vex6lgKAHI_4v2LiavCb5Q0LYCmjLy7b6bI6tUuhry_h8D2M/s1600-h/sceau_unwrap_flat.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtdM2JIw1b7p7f2uuifzXzZjxTkAANworJ6HVTFyiNRLIcF6E42qW78C1hWillsToaprJ1A2mpNVZ6Lk77GDsvH8Mbs95Vex6lgKAHI_4v2LiavCb5Q0LYCmjLy7b6bI6tUuhry_h8D2M/s320/sceau_unwrap_flat.jpg" /></a><br />
<br />
<br />
On the side you can see a couple of renderings of the 2-million-triangle model of the unrolled seal; the renderings were done inside MeshLab. The first one is a simple flat shaded rendering, while the second one exploits a nice shader that I have recently added to the MeshLab shading arsenal: it shamelessly mimics the ZBrush technique of varying shininess and color according to the "cavities" of the geometric model (they use it for the famous ZBrush wax and bronze materials). It is nice to see how the shading vastly improves the shape perception of the 3D model.<br />
I have not seen many correct discussions of how to perform this kind of shading, so expect a post on that... <br />
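In the meantime, here is a tiny hedged Python sketch of the blending idea alone (not the actual GLSL shader: just how a per-vertex "cavity" value, e.g. a clamped curvature or an ambient occlusion term, can drive the choice between a dull dark material and a bright shiny one):<br />
<pre style="font-family: 'Courier New',Courier,monospace;">
import numpy as np

def cavity_blend(cavity, dark=(0.25, 0.2, 0.15), bright=(0.9, 0.85, 0.7)):
    t = np.clip(cavity, 0.0, 1.0)[:, None]    # 0 = deep pit, 1 = fully exposed surface
    return (1.0 - t) * np.asarray(dark) + t * np.asarray(bright)   # (n,3) blended colors
</pre>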
<br />
<br />
A massive physical reproduction (4 meters long!) of the unrolled seal is at the center of "<i><b>OnLab</b></i>" a thematic <a href="http://www.louvre.fr/llv/exposition/detail_exposition_print.jsp?CONTENT%3C%3Ecnt_id=10134198674147803&CURRENT_LLV_EXPO%3C%3Ecnt_id=10134198674147803&pageId=1">exhibition of Michel Paysant</a>, that will open in the next days at Louvre, Denis Pitzalis worked a lot on this project and you can find more details and photos in his <a href="http://www.pitzalis.org/index.php/2009/10/30/michel-paysant-onlab-thematic-exhibition-at-the-louvre-museum-26-11-2009-01-03-2010/">blog.</a>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-10969445513822866212009-09-08T08:09:00.001-07:002009-09-08T08:09:37.043-07:00MeshLab V1.2.2 Released!<div xmlns='http://www.w3.org/1999/xhtml'>Yet another <a href='https://sourceforge.net/projects/meshlab/files/' target='_blank'>minor release of MeshLab</a>. This time a lot of large internal changes (we redesigned the parameter mechanism of the filters for a better previewing mechanism)<img width='200' height='167' src='https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjS1oa5rRnw0Q3pJ_iF4aEDai_Q8yDkXzYXM5WxD1mRtd9fb9tb_yrioeBWGq0ALxEZC9lvgfo8JEsMYLBzVWsayM0pAi3HSlSoirGv_0_UXtFmRQklCcDSxmWe9GQZcXHQo3fUDJQUTPw/?imgmax=800' style='max-width: 800px; float: right; margin-top: 10px; margin-bottom: 10px; margin-left: 10px;'/> and we added a few new features:<br/>* <a href='http://en.wikipedia.org/wiki/Protein_Data_Bank' target='_blank'>pdb</a> molecular importing to build up meshes from molecular description. It feature various ways of building meshes from pdb description. <br/>* Weighted simplification; you can now weight the simplification process with a generic scalar value (e.g. simplify more the internal regions, preserve better the face of a character, etc, etc.).<br/>* Improved the vertex attribute transfer filter (the filter that allows you to transfer color, vertex, position, quality from a mesh to another one) to support the management<br />of point cloud data and to limit the attribute transfer to a limited<br />distance.<br/>*<img width='200' height='174' src='https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8GopRZWD3vdnoLBNpFWNaP7IbyT_cs4ATeFpiKrFYtqO2EGyIVyVWZ9gc0-q02xm1ioO2zf6KAUVsu5QxXpnAdaKMk9JrH3TxEVnm0lNTrue_F9yU3qp8BdiPiMcupEWdh1sSUdSsHWw/?imgmax=800' style='max-width: 800px; float: right; margin-top: 10px; margin-bottom: 10px; margin-left: 10px;'/> The new <a href='http://vcg.isti.cnr.it/Publications/2009/PTC09/' target='_blank'>abstract surface parametrization algorithm</a> in now inside MeshLab; currently it is a bit slow and buggy (well it is the first release) so sometime it can crash. The current version of the filter support only the remeshing side of the technique, e.g. you can create an abstract texture and then use it to remesh your model in a very nice way. Full texture parametrization of meshes ahead in the next version. 
<br/>* And obviously a lot of small bug issues....<br/>As usual release notes are <a href='http://meshlab.sourceforge.net/wiki/index.php/Release_Notes_1.2.2' target='_blank'>here</a> in the wiki.<br/><br/><br/><br/><br/><br/></div>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-81683115249994558872009-09-07T06:11:00.000-07:002010-03-12T15:32:23.110-08:00Meshing Point Clouds<div xmlns="http://www.w3.org/1999/xhtml"><div style="text-align: justify;">One of the most requested tasks when managing 3D scanning data is the conversion of point clouds into more practical triangular meshes. Here is a step-by-step guide for transforming a raw point cloud into a colored mesh.</div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMRRQNPxG-Li69TkQwS3RKoQdvHNqsOoUkqfel0ce9OCbibKk8NSnEvFdQidQupokKG-HvlXLF_LOKK9C-vSRsqdHdcws_4fTMtLSKC4VV_uePGgkKJ-0-dkwzB15ZYwLHxQRlwYrQ2gw/s1600-h/Teatro00.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5378522187757813138" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMRRQNPxG-Li69TkQwS3RKoQdvHNqsOoUkqfel0ce9OCbibKk8NSnEvFdQidQupokKG-HvlXLF_LOKK9C-vSRsqdHdcws_4fTMtLSKC4VV_uePGgkKJ-0-dkwzB15ZYwLHxQRlwYrQ2gw/s200/Teatro00.png" style="cursor: pointer; float: right; height: 104px; margin: 0pt 0pt 10px 10px; width: 200px;" /></a><br />
<div style="text-align: justify;">Let's start from a colored point cloud (typical output of many 3D scanning devices), each point has just color and no normal information. The example dataset that we will use is a medium sized dataset of 9 millions of points. Typical issues of such a dataset dataset: it is non uniform (comes from an integration of different datasets), has some strongly biased error (alignment error, some problem during data integration), it comes without normals (hard to be shaded).</div><br />
<ol><li><span style="font-weight: bold;">Subsampling</span><br />
<br />
<br />
<div style="text-align: justify;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVdGNDokcBkzaQfRMjBzvg3cOPMphV_IvvRjsvDRWp_pnCLKghx0T1CJtHZ_vRuRpLIXIAoaDI8XMj0nKwEmK4kZW3TpRUXT9D02V4-NJDnvZjUxhXOEnjusvOOirihD9CJuVvL-fU5No/s1600-h/Teatro06.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5378526276153391458" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVdGNDokcBkzaQfRMjBzvg3cOPMphV_IvvRjsvDRWp_pnCLKghx0T1CJtHZ_vRuRpLIXIAoaDI8XMj0nKwEmK4kZW3TpRUXT9D02V4-NJDnvZjUxhXOEnjusvOOirihD9CJuVvL-fU5No/s200/Teatro06.png" style="cursor: pointer; float: right; height: 104px; margin: 0pt 0pt 10px 10px; width: 200px;" /></a> As a first step we reduce a bit the dataset in order to have amore manageable dataset. Many different options here. Having a nicely spaced subsampling is a good way to make some computation in a faster way. The <i>Sampling->Poisson Disk Sampling</i> filter is a good option. While it was designed to create Poisson disk samples over a mesh, it is able to also compute Poisson disk subsampling of a given point cloud (remember to check the 'subsampling' boolean flag). For the curious ones, it uses an algorithm very similar to the dart throwing paper presented at <a href="http://kesen.huang.googlepages.com/egsr2009Papers.htm">EGSR2009</a> (except that we have released code for such an algorith long before the publication of this article :) ). In the invisible side figure a Poisson disk subsampling of just 66k vertices.</div></li>
<li><span style="font-weight: bold;">Normal Reconstruction</span><br />
<br />
<br />
<div style="text-align: justify;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN6q-cPBgvh0Qmka6-hj3gdyiA215B8Vjd4YagsNOEcOKis7nIGkpGe1MBX01tnR9dU5Gg4Dtv-Luh51lOeemVLNiKg__TJXeo8IwDnSDgNyIeWhk6N4XAWtepceNvcmABVV8q0PxXDEU/s1600-h/Teatro02.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5378522209097645106" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiN6q-cPBgvh0Qmka6-hj3gdyiA215B8Vjd4YagsNOEcOKis7nIGkpGe1MBX01tnR9dU5Gg4Dtv-Luh51lOeemVLNiKg__TJXeo8IwDnSDgNyIeWhk6N4XAWtepceNvcmABVV8q0PxXDEU/s200/Teatro02.png" style="cursor: pointer; float: right; height: 104px; margin: 0pt 0pt 10px 10px; width: 200px;" /></a> Currently inside MeshLab the construction of normals for a point cloud is not particularly optimized (I would not apply it over 9M point cloud) so starting from smaller mesh can give better, faster results. You can use this small point cloud to issue a fast surface reconstruction (using<i> Remeshing->Poisson surface reconstruction</i>) and then transfer the normals of this small rough surface to the original point cloud. Obviously in this way the full point cloud will have a normal field that is by far smoother than necessary, but this is not an issue for most surface reconstruction algorithms (but it is an issue if you want to use these normals for shading!).</div></li>
<li><span style="font-weight: bold;">Surface reconstruction</span><br />
<br />
<br />
<div style="text-align: justify;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMFupkJPDF_5ueDIQ3BGBNq0K0WCf6XN1QcuDG_lM2Q_QGLvXwJOubbx3lyC70kBs3hJwMclXO2unn7AwKuI_ewoZiOakvQdenehn0ZYnDmGUseL5xHDouiDmQeQewd4TupBhyphenhyphen3d8Xagk/s1600-h/Teatro03.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5378522214083469234" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMFupkJPDF_5ueDIQ3BGBNq0K0WCf6XN1QcuDG_lM2Q_QGLvXwJOubbx3lyC70kBs3hJwMclXO2unn7AwKuI_ewoZiOakvQdenehn0ZYnDmGUseL5xHDouiDmQeQewd4TupBhyphenhyphen3d8Xagk/s200/Teatro03.png" style="cursor: pointer; float: right; height: 104px; margin: 0pt 0pt 10px 10px; width: 200px;" /></a>Once rough normals are available Poisson surface reconstruction is a good choice. Using the original point cloud with the computed normals we build a surface at the highest resolution (recursion level 11). Roughly clean it removing large faces filter, and eventually simplify it a bit (remove 30% of the faces) using classical <i>Remeshing->Quadric edge collapse simplification</i> filter (many implicit surface filters rely on marching cube like algorithms and leave useless tiny triangles).</div></li>
<li><span style="font-weight: bold;">Recovering original color</span><br />
<br />
<br />
<div style="text-align: justify;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_jnKHw79F4bs8gSQwwU0J6abM9MV5Z786yNb8oEGsUOERZS1aGBqMhUX0stnwUKV-G52E60M2vHR36sQuSH4zBotEdSDXF9E6tqVz9vjoYLwX2t4zoJejkguP3IopSQrEy3YaiJw3TjI/s1600-h/Teatro04.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5378522229767796066" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_jnKHw79F4bs8gSQwwU0J6abM9MV5Z786yNb8oEGsUOERZS1aGBqMhUX0stnwUKV-G52E60M2vHR36sQuSH4zBotEdSDXF9E6tqVz9vjoYLwX2t4zoJejkguP3IopSQrEy3YaiJw3TjI/s200/Teatro04.png" style="cursor: pointer; float: right; height: 104px; margin: 0pt 0pt 10px 10px; width: 200px;" /></a>Here we have two options, recovering color as a texture or recovering color as per-vertex color. Here we go for the latter, leaving the former to a next post where we will go in more details on the new automatic parametrization stuff that we are adding in MeshLab. Obviously if you store color onto vertexes you need to have a very dense mesh, more or less of the same magnitudo of the original point cloud, so probably refining large faces a bit could be useful. After refining the mesh you simply transfer the color attribute from the original point cloud to the reconstructed surface using the <i>vertex attribute transfer</i> filter.</div><br />
</li>
<li><span style="font-weight: bold;">Cleaning up and assessing</span><br />
<br />
<br />
<div style="text-align: justify;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiskztmHUJpObecoenjyQ7qhiGIsNTpx3OdNJmNwq9tq8xizZEv35HngRjH9Lqj1Sc80vON_SFNxy3jV09KOg7MnZ9vUTswGd04FsnH5117mOkL9cQDK6MlZCf8d5VS36fkJLIBHWQc_w/s1600-h/Teatro07.png" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5378528012919758562" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiiskztmHUJpObecoenjyQ7qhiGIsNTpx3OdNJmNwq9tq8xizZEv35HngRjH9Lqj1Sc80vON_SFNxy3jV09KOg7MnZ9vUTswGd04FsnH5117mOkL9cQDK6MlZCf8d5VS36fkJLIBHWQc_w/s200/Teatro07.png" style="cursor: pointer; float: right; height: 104px; margin: 0pt 0pt 10px 10px; width: 200px;" /></a>The <i>vertex attribute transfer</i> filter uses a simple closest point heuristic to match the points between the two meshes. As a side product it can store (in the all-purpose per-vertex scalar quality) the distance of the matching points. Now just selecting the faces having vertices whose distance is larger than a given threshold we can easily remove the redundant faces created by the Poisson Surface Reconstruction.</div></li>
</ol><br />
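<div style="text-align: justify;">As promised in step 4, here is a minimal Python sketch of the closest-point transfer used in steps 4 and 5 (it is not MeshLab's <i>vertex attribute transfer</i> code, just the concept: match each vertex of the reconstructed surface to the nearest original point with a k-d tree, copy its color, and keep the matching distance as a per-vertex quality to threshold):</div>
<pre style="font-family: 'Courier New',Courier,monospace;">
import numpy as np
from scipy.spatial import cKDTree

def transfer_color_and_flag(mesh_verts, mesh_faces, cloud_pts, cloud_rgb, max_dist):
    dist, idx = cKDTree(cloud_pts).query(mesh_verts)
    mesh_rgb = cloud_rgb[idx]                    # per-vertex color from the closest sample
    far_vertex = dist > max_dist                 # the "quality" thresholding of step 5
    # a face is redundant if any of its vertices is too far from the original cloud
    redundant_faces = far_vertex[mesh_faces].any(axis=1)
    return mesh_rgb, dist, redundant_faces
</pre>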
<div style="text-align: justify;">This pipeline is only one of the many possible way of ending up into a nice mesh. For example different choices could have been done for step 2/3. There are reconstruction algorithms that do not need surface normals, like for example the "Voronoi Filtering" that is an interpolating reconstruction algorithm (e.g. it build up only triangles on the given input points) but usually these filters works better on very clean datasets, without noise or alignment errors. Otherwise on noisy datasets it is easy that they create a lot of non manifold situations. Final thanks to <a href="http://www.cyi.ac.cy/user/1">Denis Pitzalis</a> for providing me this nice dataset of a Cypriot theater.</div></div>ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-40623470258521932592009-08-18T16:43:00.000-07:002009-08-19T18:01:12.728-07:00Computation & Cultural Heritage Siggraph Course<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnEfY85heYSCIW033koY2-44PGNZxgqlBZ_5B1TQkh67IBMfEvB3-3WNZymN2LZwUYQfLB3J6HIB3Js94timoEyPZJtiQsngc3ED5-WsJF75ga3KCdd_SdhK4u8Tx7zD7bAK5uIm_uMRI/s1600-h/Ripoll_poisson_10full.jpg"><img style="margin: 0pt 0pt 1px 1px; float: right; cursor: pointer; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnEfY85heYSCIW033koY2-44PGNZxgqlBZ_5B1TQkh67IBMfEvB3-3WNZymN2LZwUYQfLB3J6HIB3Js94timoEyPZJtiQsngc3ED5-WsJF75ga3KCdd_SdhK4u8Tx7zD7bAK5uIm_uMRI/s400/Ripoll_poisson_10full.jpg" alt="" id="BLOGGER_PHOTO_ID_5371832523382867698" border="0" /></a><br /><br />Shameless linking of the <a href="http://vcg.isti.cnr.it/%7Ecignoni/CHCourse/">Computation & Cultural Heritage Siggraph Course</a> where, a week ago, I gave my contribution. The course surveyed several practical CG techniques for applications in cultural heritage, archeology, and art history. Topics include: efficient/advanced/cheap techniques for 2D/3D digital capture of heritage objects, appropriate uses in the heritage field, an end-to-end pipeline for processing archeological reconstructions (with special attention to incorporating archeological data and review throughout the process), how digital techniques are actually used in cultural heritage projects, and an honest evaluation of progress and challenges in this field.<br /><br />Specifically to this blog in my first presentation I described a free <span style="font-weight: bold; font-style: italic;">photo to 3D</span> pipeline that relies on the free web-based service <a style="font-style: italic;" href="http://www.arc3d.be/"><span style="font-weight: bold;">Arc3D</span></a> (<span style="font-size:small;">developed during the <a href="http://www.epoch-net.org/">Epoch</a> EU project by <a href="http://www.esat.kuleuven.be/psi/visics/">Visic</a> of KUL</span>) for <a href="http://en.wikipedia.org/wiki/Structure_from_motion">Structure-from-Motion</a> reconstruction and (obviously) on <span style="font-weight: bold; font-style: italic;"><a href="http://www.meshlab.org">MeshLab</a></span> for the processing of the generated 3D range maps. In practice it is a pipeline that allows to cheaply reconstruct nice accurate 3D models from just a set of high resolution photos. 
Obviously not all the subject fit with this kind of approaches (forget moving subjects and glassy, shiny, fluffy, iridescent stuff), but for stable, dull, textured objects, it works surprisingly well, giving results with a quality not far from traditional laser based 3D scanning. More info on the process in the <a href="http://vcg.isti.cnr.it/%7Ecignoni/CHCourse/">slides</a> (and eventually in other posts here). In the top right picture a typical example of the results that you can obtain when starting from a reasonable set of photos of a detail of a weathered stone romanesque high relief (<span style="font-size:small;"><a href="http://commons.wikimedia.org/wiki/Category:P%C3%B3rtico_de_Ripoll">Monasterio de Santa María de Ripoll</a></span>). The model is untextured, with just a bit of ambient occlusion: all you see is geometry.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-78361444441581884592009-07-31T04:48:00.000-07:002009-07-31T05:50:00.986-07:00Almost isometric mesh parameterizationA short post after a long inactivity just before going to Siggraph.<br><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4XNEXSuQQehXkEDM-_aMsm9DHy21ixU3aaQw7Z3A3ogdS6dY-BRhmmUqnUWYTofvEhytm9qwy_lXtkFpLG7Dm3nbhPoFKKS_kWb8W2opUUmDM6ibZFeUuyqmNH3e9dqNcguDqDiEOr7U/s1600-h/total.png"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 196px; height: 200px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg4XNEXSuQQehXkEDM-_aMsm9DHy21ixU3aaQw7Z3A3ogdS6dY-BRhmmUqnUWYTofvEhytm9qwy_lXtkFpLG7Dm3nbhPoFKKS_kWb8W2opUUmDM6ibZFeUuyqmNH3e9dqNcguDqDiEOr7U/s200/total.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5364594340434538034" /></a><br />Many users of MeshLab complained the lack of texturing tools. As you probably know perfect, nice, clean, robust, automatic texture parametrization is a kind of 'holy grail' in CG. There are many many solutions around and a huge literature on that, but no silver bullet.<br />We (mostly <a href="http://vcg.isti.cnr.it/~pietroni/">Nico</a> and <a href="http://vcg.isti.cnr.it/~tarini/">Marco</a>) added our 5 cents to the literature with yet another approach [1] that is able to produce parametrizations that exhibit a very low distortion and are composed by a small number of large regular patches. The parametrization domain is a collection of equilateral triangular 2D regions enriched with explicit adjacency relationships (we call it abstract because no explicit 3D embedding is necessary). It is tailored in order to minimize the distortion, resulting in excellent parametrization qualities, even when meshes with complex shapes and topology are mapped into domains composed of a small number of large contiguous regions. 
<br /><br /><object style="float:right; margin:0 0 10px 10px"><param name="movie" value="http://www.youtube.com/v/t4RC3H3Ab0Y&hl=en&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/t4RC3H3Ab0Y&hl=en&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="212" height="172"></embed></object> An interesting consequence of having a texturing domain that is composed by 'abstract' equilateral triangles is that you can exploit this parametrization to build high quality remeshing that are better that the current state of the art. Look at the top figures to get an idea of the quality of the produced meshes. As usual all the gory details of the technique in the below paper preprint and a working open source implementation in the next versions of MeshLab. <br /><br /><br /><br><br />[1] Nico Pietroni, Marco Tarini, Paolo Cignoni, <a href="http://vcg.isti.cnr.it/Publications/2009/PTC09/">Almost isometric mesh parameterization through abstract domains</a>, IEEE Transaction on Visualization and Computer Graphics, Volume In press - 2009ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-69475026903494450642009-06-02T01:31:00.000-07:002009-06-02T01:57:56.702-07:00MeshLab V1.2.1 Released!<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOnuyRMFfiO421yq4h6eo9PC4wmodljFxeSNIb-5H4MzxpEP9R4Qqj0Cy5bieq27dtixqM6r89HQO5TKsZTdQouH70uOef4Dp7VZ3Rsrewca-0TDrQcA0S30Xw0NFQLr0MGB78_hgVYQA/s1600-h/MeshLab_AlphaShape.png"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 200px; height: 187px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOnuyRMFfiO421yq4h6eo9PC4wmodljFxeSNIb-5H4MzxpEP9R4Qqj0Cy5bieq27dtixqM6r89HQO5TKsZTdQouH70uOef4Dp7VZ3Rsrewca-0TDrQcA0S30Xw0NFQLr0MGB78_hgVYQA/s200/MeshLab_AlphaShape.png" alt="" id="BLOGGER_PHOTO_ID_5342651747255273186" border="0" /></a><br />Initially this release was planned just as a bug fixing release (a really needed one!): a couple of really annoying bugs infiltrated the 1.2.0 release, causing crashes for all the tools that involved a <a href="http://en.wikipedia.org/wiki/Marching_cubes">marching cube</a> processing and malfunctioning of the <a href="http://en.wikipedia.org/wiki/U3D">U3D</a> exporting. Now they should work well.<br /><br />In practice it is a feature rich release: as a bonus we have added some new nice functionalities (thanks to M. Sottile for implementing them): <a href="http://meshlab.sourceforge.net/wiki/index.php/Qhull_Filter" title="Qhull Filter">Convex Hull, Alpha shape, Voronoi Filtering, and Visible points</a> filters. These filters rely on the well known <a href="http://www.qhull.org/">Qhull</a> convex hull library.<br />Convex hulls and Alpha shapes do not need extensive introduction, but a few notes on the two other filters are probably needed.<br /><br /><span style="font-weight: bold;">Voronoi filtering</span> implements the homonym surface reconstruction algorithm by <a href="http://portal.acm.org/citation.cfm?id=276889">Nina Amenta and Marshall Bern</a> that is able to reconstruct a nice interpolating triangulated mesh from a point clouds. 
It requires nicely sampled, low noise point clouds, but it works well.<br /><br />The <span style="font-weight: bold;">Visible Points</span> filter implements a nice algorithm of <a href="http://portal.acm.org/citation.cfm?id=1276407">Sagi Katz, Ayellet Talfor and Ronen Basri</a> for computing direct visibility of point clouds. It is a really really simple and smart trick that works well and it is really easy to be implemented (once you have a convex hull implementation).<br /><br />As usual, release notes are <a href="http://meshlab.sourceforge.net/wiki/index.php/Release_Notes_1.2.1">here</a> in the wiki.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-38125600640452405182009-04-30T14:35:00.001-07:002009-04-30T14:40:13.782-07:00MeshLab V1.2.0 Released!After more than one year from version 1.1.1, the long, long waited MeshLab v.1.2.0 has<br />been released! Jump over the main page and download it.<br /><br /><a href="http://www.meshlab.org/">http://www.meshlab.org</a><br /><br />A sincere thank-you to every contributor and, in particular, to Guido Ranzuglia<br />that has willingly taken the demanding and onerous task of coordinating<br />(e.g. actually performing) the whole release process.<br />Next release cycles, in particular for bug fixing releases, will be much<br />shorter...<br />With respect to v1.1.1 the list of new features is very very long, now more than 100 different filtering actions are provided. In the next post I will spot some of the most interseting algorithm that have been added. In the meantime just download and try it!ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.comtag:blogger.com,1999:blog-5333957751769755809.post-60843985347684313802009-04-29T01:25:00.000-07:002009-04-29T03:32:19.189-07:00MeshLab at Archeo-Foss (2)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsqMJiVBS1Os_BZvQn_I4nmJ2IvNyq4Ixe4bmXRlrkVpgFuXX35_auemtCs4qfLS7AGB-Ok8gj5naFC8OxiIurzxid5tPEYzISrbg3i4WbeseonZqPVj_adQ6hAklTvFt8uoc3zS2_wUM/s1600-h/ArcheoFOSS_sala.jpg"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 200px; height: 139px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjsqMJiVBS1Os_BZvQn_I4nmJ2IvNyq4Ixe4bmXRlrkVpgFuXX35_auemtCs4qfLS7AGB-Ok8gj5naFC8OxiIurzxid5tPEYzISrbg3i4WbeseonZqPVj_adQ6hAklTvFt8uoc3zS2_wUM/s200/ArcheoFOSS_sala.jpg" alt="" id="BLOGGER_PHOTO_ID_5330046748499758098" border="0" /></a><br />Yet another non technical post :)<br />I have just returned from the Rome <a href="http://www.archeo-foss.org/">ArcheoFoss workshop</a>. Being one of the organizers I can be proud of the success of the event, more than 150 people from the archeological field attended to the event crowding the main room of the <a href="http://www.cnr.it">CNR</a> central building. 
I did not think that such a strictly focused event could attract such a wide audience; it seems that the intersection of people who have a genuine interest in Archeology, believe in open solutions, and live in Italy is a significant set :).<br />We (<a href="http://vcg.isti.cnr.it/joomla/index.php?option=com_content&task=view&id=167&Itemid=29">Guido Ranzuglia</a> was the speaker) gave a short (40 min) tutorial on MeshLab to a very interested, non-computer-scientist audience; hopefully a video should be available shortly.<br />Pleasant discoveries: MeshLab is already well known in the field as a low-cost alternative to the well-known big names in 3D scanning processing tools. I also discovered that MeshLab was included in <a href="http://www.arc-team.com/archeos/wiki/doku.php">ArcheOS</a>, a Linux distribution targeted at archeologists.ALoopingIconhttp://www.blogger.com/profile/10223359091507522354noreply@blogger.com