3D scans of cultural heritage monuments and artifacts can be, and often are, extremely detailed, consisting of hundreds of thousands, or even millions, of vertices. Such models, when integrated into the Web3D domain, can provide detailed reconstructions of objects and locations for both laypeople and experts to observe, manipulate and study remotely, over the web. However, transmitting such models over the network and displaying them within a browser context becomes problematic as their volume increases.
A number of approaches address multimedia content streaming. They offer progressive transmission and display of a 3D model, ensuring that the user is provided with a rough sketch of the final model early on, which is then progressively refined until the full model has been sent and displayed. Such approaches tend to rely on pre-processing the given model, extracting multiple Levels of Detail (LOD) from it, and transmitting these levels serially, from the coarsest to the finest.
In our approach, we opted for a face-by-face scheme: the given model is not pre-processed but is instead separated into subsets on-the-fly. Any model, of any size, can be fed to the server. The model is split into "chunks" of customizable size, which are progressively transmitted to the client and displayed as an X3DOM scene. The order in which the initial model's faces are transmitted is not fixed but can be derived from a number of algorithms (such as order of appearance, largest-area faces first, or random ordering). This, combined with the transmission and display technologies used, allows for the immediate, smooth, progressive and seamless transmission of any model, without any pre-processing step.
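The server-side chunking and ordering step described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function names (`order_faces`, `chunk_faces`) and the exact data layout (faces as vertex-index triples) are assumptions made for clarity.

```python
import random

def triangle_area(v0, v1, v2):
    # Area of a triangle from three 3D vertices, via half the cross-product magnitude.
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    cx = ay * bz - az * by
    cy = az * bx - ax * bz
    cz = ax * by - ay * bx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def order_faces(faces, vertices, strategy="appearance", seed=None):
    # faces: list of (i, j, k) vertex-index triples; vertices: list of (x, y, z).
    # Returns the faces reordered by the chosen transmission strategy.
    if strategy == "appearance":
        return list(faces)
    if strategy == "largest_area":
        return sorted(faces,
                      key=lambda f: triangle_area(*(vertices[i] for i in f)),
                      reverse=True)
    if strategy == "random":
        shuffled = list(faces)
        random.Random(seed).shuffle(shuffled)
        return shuffled
    raise ValueError(f"unknown strategy: {strategy}")

def chunk_faces(ordered_faces, chunk_size):
    # Split the ordered face list into fixed-size chunks, each of which
    # would be sent to the client as a separate progressive update.
    return [ordered_faces[i:i + chunk_size]
            for i in range(0, len(ordered_faces), chunk_size)]
```

For example, with a `largest_area` ordering and a chunk size of 1, the largest face is transmitted in the first chunk, so the client's early display is dominated by the faces that cover the most surface.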