We introduce the reader to the bootstrap, a simple and flexible resampling-based method for quantifying uncertainty. We describe the basic characteristics of the non-parametric bootstrap and illustrate its practical behaviour with simulations in the context of a typical machine-learning task: estimating and comparing the performance of different prediction models. We also discuss some of the method's weaknesses. We introduce and compare three standard intervals: the normal interval based on the bootstrap standard error, and two more typical bootstrap confidence intervals, the percentile interval and the BCa interval. As theory suggests, the BCa interval performs best over a wide range of situations.
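
To make the non-parametric bootstrap concrete, the following is a minimal sketch (not taken from the paper) of the resampling loop and the percentile interval for a univariate statistic; the function name, resample count, and example data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_percentile_ci(data, statistic, n_boot=2000, alpha=0.05):
    """Non-parametric bootstrap: resample the data with replacement,
    recompute the statistic on each resample, and take empirical
    quantiles of the bootstrap distribution as the interval."""
    stats = np.array([
        statistic(rng.choice(data, size=len(data), replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Illustrative data: a 95% percentile interval for the mean of a
# skewed sample, where normal-theory intervals can be less accurate.
sample = rng.exponential(scale=1.0, size=100)
lo, hi = bootstrap_percentile_ci(sample, np.mean)
```

The same loop underlies the other intervals discussed in the paper: the normal interval uses the standard deviation of the bootstrap replicates as a standard error, while the BCa interval adjusts the percentile endpoints for bias and skewness.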