Unsupervised Conversion of 3D Models for Interactive Metaverses
IEEE International Conference on Multimedia and Expo, 2012

A virtual-world environment becomes a truly engaging platform when users have the ability to insert 3D content into the world. However, arbitrary 3D content is often not optimized for real-time rendering, limiting the ability of clients to display large scenes consisting of hundreds or thousands of objects. We present the design and implementation of an automatic, unsupervised conversion process that transforms 3D content into a format suitable for real-time rendering while minimizing loss of quality. The resulting progressive format includes a base mesh, allowing clients to quickly display the model, and a progressive portion for streaming additional detail as desired. An open, public virtual world platform is currently using this method and has processed over 700 models.