Efficiency parametrization with Neural Networks

Event weighting is like resizing the Lego bricks: we can use all the bricks (both the red and the green ones) and the output is still the same!


In HEP we often encounter situations where we are interested in events occurring in a very restricted region of phase space. The simplest approach in such situations is to use event selection to keep only the events of interest. But this usually results in low statistics, because the region of interest is narrow.
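To make this concrete, here is a toy sketch (plain NumPy, not ATLAS code) of what a hard selection does. All variable names, distributions and cut values below are made up for illustration; they simply show how few events survive a cut on a narrow region of phase space.

```python
import numpy as np

rng = np.random.default_rng(0)
n_events = 100_000
pt = rng.exponential(scale=50.0, size=n_events)   # toy transverse momentum [GeV]
eta = rng.normal(scale=2.0, size=n_events)        # toy pseudorapidity

# Hard event selection: keep only events inside a narrow region of interest.
selected = (pt > 200.0) & (np.abs(eta) < 1.0)
print(f"kept {selected.sum()} of {n_events} events ({100 * selected.mean():.2f}%)")
```

The surviving sample is tiny, so any quantity measured on it carries a large statistical uncertainty.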


An alternative solution is the event weighting technique, where we replace the selection with weights. The most common way to obtain these event weights is to use binned efficiency maps. But this way of parameterising the efficiency is problematic: very often the efficiency is influenced by a large set of parameters (and we are typically not aware of the full set), binned maps do not work well in higher dimensions, and low statistics further degrade the estimation of the efficiency.
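As a rough sketch of this standard approach (again with toy data and made-up binning, not any ATLAS map), a binned efficiency map counts how often events pass in each bin of a few chosen variables, and the per-event weight is then the efficiency of the bin the event falls into:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
pt = rng.exponential(scale=50.0, size=n)           # toy variables again
eta = rng.normal(scale=2.0, size=n)
# "passed" marks whether an event survived the selection we want to replace.
passed = rng.random(n) < np.clip(pt / 400.0, 0.0, 1.0) * np.exp(-np.abs(eta) / 4.0)

pt_bins = np.linspace(0.0, 400.0, 21)
eta_bins = np.linspace(-5.0, 5.0, 11)

# Efficiency per (pt, eta) bin = passing events / all events in that bin.
n_all, _, _ = np.histogram2d(pt, eta, bins=[pt_bins, eta_bins])
n_pass, _, _ = np.histogram2d(pt[passed], eta[passed], bins=[pt_bins, eta_bins])
eff_map = np.divide(n_pass, n_all, out=np.zeros_like(n_pass), where=n_all > 0)

# Per-event weight: look up the efficiency of the bin each event falls into.
i = np.clip(np.digitize(pt, pt_bins) - 1, 0, len(pt_bins) - 2)
j = np.clip(np.digitize(eta, eta_bins) - 1, 0, len(eta_bins) - 2)
weights = eff_map[i, j]
```

The weakness is visible directly in the code: every extra variable multiplies the number of bins, so each bin holds fewer events and its efficiency estimate gets noisier, while any variable left out of the map is simply ignored.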


The situation can be significantly improved by using a graph neural network (GNN) to estimate the efficiencies. The GNN can infer the relevant set of parameters from the given input and construct a very rich, high-dimensional parameterisation of the efficiencies. The magical ingredient here is the message passing among the nodes in the graph. The efficiency of a node is often influenced by the presence of other nodes in the graph (e.g. in flavour tagging, the efficiency of a jet depends on the presence of a nearby jet), and the GNN allows us to model this well, resulting in an improved estimation. This approach is also universal, meaning it generalises well to samples with properties that the model did not see during training.
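A minimal message-passing sketch in plain PyTorch illustrates the idea (this is not the actual ATLAS model; the layer sizes, input features and fully connected graph are all assumptions for illustration): each object, e.g. a jet, receives messages from every other object in the event, so its predicted efficiency can depend on its neighbours.

```python
import torch
import torch.nn as nn

class EfficiencyGNN(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        # message: built from the receiving and sending node features
        self.message = nn.Sequential(
            nn.Linear(2 * n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # update: combines a node with the aggregated messages it received
        self.update = nn.Sequential(
            nn.Linear(n_features + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # x: (n_nodes, n_features) -- all objects of one event,
        # treated as a fully connected graph (self-edges kept for simplicity)
        n = x.shape[0]
        senders = x.unsqueeze(0).expand(n, n, -1)    # features of node j
        receivers = x.unsqueeze(1).expand(n, n, -1)  # features of node i
        msgs = self.message(torch.cat([receivers, senders], dim=-1))
        agg = msgs.sum(dim=1)                        # sum messages into each node
        # per-node efficiency estimate in (0, 1)
        return torch.sigmoid(self.update(torch.cat([x, agg], dim=-1))).squeeze(-1)

model = EfficiencyGNN()
jets = torch.randn(3, 4)   # toy event with three jets
print(model(jets))         # one efficiency estimate per jet
```

Such a model would typically be trained with a binary cross-entropy loss on per-object pass/fail labels, so that its output converges to the conditional efficiency given the whole event.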


Currently we are studying this approach within the ATLAS experiment and working on implementing these ideas in practice.