In this paper we use PointNet, a deep learning algorithm for the classification of unordered point clouds, which makes it well suited to models generated by sensors. For the model description we use the X3D IndexedFaceSet node, whose coord attribute provides the point cloud. Since PointNet requires no knowledge of the surfaces of the 3D models, the coordIndex field of the IndexedFaceSet is consulted only when resampling of the point cloud is required to increase efficiency. The problem that arises when applying a point-based algorithm to mesh classification is that meshes usually do not provide enough points for the algorithm to work effectively. We solve this problem by raising the number of points to the required count, interpolating new points inside the faces of the mesh. We evaluate our approach on ModelNet40, a dataset with 40 classes of 3D objects. Note that only the testing set needs to be in X3D format; the training set does not. The deep learning implementation uses Google™ TensorFlow with the Keras API; after training, the model is exported in JSON format and runs client-side inside a web page using the TensorFlow.js runtime library.
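The paper does not spell out the interpolation procedure; a minimal sketch of one common approach, area-weighted barycentric sampling over the triangulated faces, is given below. The function and variable names are ours, and the inputs correspond to the X3D coord points and coordIndex triangles:

```python
import numpy as np

def sample_points(vertices, faces, n_points):
    """Upsample a triangle mesh to n_points by interpolating points
    inside its faces (area-weighted uniform barycentric sampling).

    vertices: (V, 3) array of coordinates (X3D `coord` points)
    faces:    (F, 3) array of vertex indices (from `coordIndex`)
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Larger triangles should receive proportionally more samples,
    # so faces are drawn with probability proportional to their area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    chosen = np.random.choice(len(faces), size=n_points, p=areas / areas.sum())

    # Uniform barycentric coordinates inside each chosen triangle:
    # P = (1 - sqrt(r1)) A + sqrt(r1)(1 - r2) B + sqrt(r1) r2 C
    r1 = np.sqrt(np.random.rand(n_points, 1))
    r2 = np.random.rand(n_points, 1)
    return ((1 - r1) * v0[chosen]
            + r1 * (1 - r2) * v1[chosen]
            + r1 * r2 * v2[chosen])
```

The square root on the first random coordinate is what makes the distribution uniform over each triangle rather than clustered near one vertex; the resulting cloud can then be fed directly to PointNet regardless of how sparse the original mesh was.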
Our results show that X3D models, enhanced with some resampling of the vertices (especially in models with large polygons), are fully capable of working with deep learning algorithms. The major contribution of this work is the demonstration that the PointNet algorithm can be applied to meshes as well as to point clouds, with performance similar to one of the latest mesh classification algorithms. Moreover, we show that even sparse meshes, such as those common in Web3D, can be classified successfully. This conclusion creates new opportunities for the use of X3D as a modeling language for scenes created and annotated by real-time scanning. In future work we will focus on the segmentation of point sets and on the adoption of the X3D format as a point-set declaration format.