Interpolation methods are commonly used to determine precise detection times of high-energy particles in detectors coupled to fast sampling readout. Under undersampled conditions, linear interpolation, often preferred for its computational simplicity, introduces systematic errors that underestimate detection times and, more importantly, increase the variance of the measured times.
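The effect can be reproduced with a short Monte Carlo sketch. The code below is purely illustrative and is not the simulation used in this work: it assumes a Gaussian pulse shape, a 50% leading-edge threshold, and a few example sampling periods, and prints the bias and spread of the linearly interpolated crossing time as the sampling becomes coarser.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse(t, sigma=1.0):
    """Illustrative Gaussian pulse, peak amplitude 1 at t = 0 (an assumption)."""
    return np.exp(-0.5 * (t / sigma) ** 2)

def crossing_linear(ts, vs, thr):
    """Leading-edge threshold crossing via linear interpolation."""
    i = np.argmax(vs >= thr)                    # first sample at or above threshold
    f = (thr - vs[i - 1]) / (vs[i] - vs[i - 1]) # interpolation fraction in [0, 1)
    return ts[i - 1] + f * (ts[i] - ts[i - 1])

THR = 0.5
T_TRUE = -np.sqrt(2 * np.log(2))                # exact 50% crossing of the rising edge

for dt in (0.2, 1.0, 2.0):                      # sampling period; large = undersampled
    errs = []
    for _ in range(10_000):
        ts = np.arange(-6.0, 6.0, dt) + rng.uniform(0.0, dt)  # random sampling phase
        errs.append(crossing_linear(ts, pulse(ts), THR) - T_TRUE)
    errs = np.asarray(errs)
    print(f"dt = {dt:3.1f}: bias = {errs.mean():+.4f}, std = {errs.std():.4f}")
```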
In this thesis I present and evaluate a machine-learning method that corrects for these errors. I further show how a suitable parametrization of the error distribution yields good results even with very limited learning ensembles.
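To illustrate the idea, though not the actual method developed in the thesis, the sketch below parametrizes the timing error by a low-order polynomial in the interpolation fraction f, so that only a handful of coefficients must be learned from a deliberately small training ensemble. The pulse model and all parameters are the same illustrative assumptions as above.

```python
import numpy as np

rng = np.random.default_rng(1)
THR, DT = 0.5, 1.5                              # threshold and coarse sampling period

def pulse(t):                                   # same illustrative Gaussian edge as above
    return np.exp(-0.5 * t ** 2)

T_TRUE = -np.sqrt(2 * np.log(2))                # exact 50% crossing time

def one_event():
    ts = np.arange(-6.0, 6.0, DT) + rng.uniform(0.0, DT)  # random sampling phase
    vs = pulse(ts)
    i = np.argmax(vs >= THR)
    f = (THR - vs[i - 1]) / (vs[i] - vs[i - 1])  # interpolation fraction in [0, 1)
    return f, ts[i - 1] + f * DT - T_TRUE        # feature and timing error

# Tiny "learning ensemble": 50 events suffice here because the error model
# has only four polynomial coefficients to fit.
train = np.array([one_event() for _ in range(50)])
coeffs = np.polyfit(train[:, 0], train[:, 1], deg=3)

test = np.array([one_event() for _ in range(5_000)])
raw = test[:, 1]
corrected = raw - np.polyval(coeffs, test[:, 0])
print(f"raw:       bias = {raw.mean():+.4f}, std = {raw.std():.4f}")
print(f"corrected: bias = {corrected.mean():+.4f}, std = {corrected.std():.4f}")
```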
In a similar way, I improve on the pulse-shape method for discriminating neutrons from gamma rays in organic scintillation detectors.
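For context, the baseline pulse-shape technique is commonly realized as the charge-comparison method, in which the ratio of the tail integral to the total integral of a pulse separates the larger slow-light component of neutron events from gamma-ray events. The sketch below illustrates that baseline only, not the improvement developed in the thesis; the decay constants, slow-light fractions, integration split, and noise level are illustrative assumptions rather than measured values.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 200.0, 2.0)                  # sample times in ns (illustrative)

def scint_pulse(slow_frac, tau_f=3.2, tau_s=32.0):
    """Two-component scintillation decay; neutrons carry a larger slow fraction."""
    fast = (1 - slow_frac) * np.exp(-t / tau_f) / tau_f
    slow = slow_frac * np.exp(-t / tau_s) / tau_s
    return fast + slow + rng.normal(0.0, 2e-4, t.size)  # small electronic noise

def psd_ratio(v, t_split=20.0):
    """Charge comparison: tail integral divided by total integral."""
    return v[t >= t_split].sum() / v.sum()

gammas   = [psd_ratio(scint_pulse(slow_frac=0.05)) for _ in range(1_000)]
neutrons = [psd_ratio(scint_pulse(slow_frac=0.25)) for _ in range(1_000)]
print(f"gamma   tail fraction: {np.mean(gammas):.3f} +/- {np.std(gammas):.3f}")
print(f"neutron tail fraction: {np.mean(neutrons):.3f} +/- {np.std(neutrons):.3f}")
```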