PointNet

As mentioned in the "Why ML?" section, although our data looks like an image, it is not arranged as a stacked matrix; instead, each point carries the same four properties (x, y, z, charge). This format is referred to as a point cloud.
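To make the format concrete, here is a minimal sketch of how such a point cloud might be represented in NumPy: an unordered set of N points, each a row of (x, y, z, charge). The variable names and sizes are illustrative assumptions, not the actual AT-TPC data pipeline.

```python
import numpy as np

# Hypothetical example: N points, each with (x, y, z, charge) -> shape (N, 4).
rng = np.random.default_rng(0)
n_points = 512
positions = rng.normal(size=(n_points, 3))           # x, y, z coordinates
charge = rng.uniform(0.0, 1.0, size=(n_points, 1))   # charge deposited per point
point_cloud = np.hstack([positions, charge])

print(point_cloud.shape)  # (512, 4)
```

Unlike an image, the rows here have no meaningful order: shuffling them describes the same physical event, which is exactly the property a point-cloud model must respect.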

This prevents us from using a convolutional neural network, since CNNs work best on stacked matrices, where different filters (themselves matrices) are applied as transformations. However, there are neural-network architectures built specifically for the point cloud format; we have chosen one called PointNet. The model was introduced by Qi et al. at Stanford University in 2017 (see here for the publication).
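The key idea behind PointNet is to apply the same small network to every point independently and then aggregate with a symmetric function (max pooling), so the result does not depend on the order of the points. The sketch below is a toy NumPy illustration of that idea under assumed shapes, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def shared_mlp(points, w, b):
    # The SAME weights are applied to every point: (N, 4) @ (4, 64) -> (N, 64).
    return np.maximum(points @ w + b, 0.0)  # ReLU activation

# Illustrative sizes: 4 input features (x, y, z, charge), 64 learned features.
w = rng.normal(size=(4, 64))
b = rng.normal(size=(64,))
cloud = rng.normal(size=(128, 4))  # a toy cloud of 128 points

features = shared_mlp(cloud, w, b)
global_feature = features.max(axis=0)  # symmetric aggregation over points

# Shuffling the points leaves the global feature unchanged,
# which is what makes the architecture order-invariant.
shuffled = rng.permutation(cloud, axis=0)
same = np.allclose(shared_mlp(shuffled, w, b).max(axis=0), global_feature)
print(same)  # True
```

In the full model, this global feature vector is what the classification head consumes to produce a class score for the whole cloud.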

Figure 6: PointNet architecture

Fig. 6 shows the different tasks the PointNet model supports. The part-segmentation and semantic-segmentation branches, though useful, are beyond our needs; we only require the classification branch. It is important to note that the point clouds used by Qi et al. are everyday objects, unlike the tracks seen in the AT-TPC data; we can already foresee a possible source of trouble.

If you are interested in getting started with this model, there is a GitHub repository maintained by the authors of the model here.

If you would like to learn more about the model architecture (and how PointNet works), below are some YouTube resources that provide an in-depth explanation.

The version of the model I am using was adapted from a reference implementation by Emilio Villasana, Andrew Rice, Raghu Ramanujan, and Dylan Sparks of the ALPhA group at Davidson College.