Estimating time complexity is one of the principal tasks of algorithm analysis; the goal is to obtain a function which - for a given problem size - estimates how much time the algorithm's execution will take. Theoretical analysis is often cumbersome and has other drawbacks as well. Thus, the empirical analysis of time complexity can be used, which is also the primary focus of this paper. We have developed procedures that allow us to analyse measurements - i.e. data - obtained by running algorithms on problems of different sizes. The analysis provides us with an estimated time complexity class and a function in symbolic form. We have developed a new method for detecting bad measurements, based on the analysis of consecutive points, and introduced new metrics for algorithm comparison. These new approaches were combined with existing methods and, all together, integrated into the existing system for automatic algorithm analysis.
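The abstract does not describe the fitting procedure itself; the following is only a minimal sketch of one possible way to estimate a complexity class from empirical (size, time) measurements, assuming a one-parameter least-squares fit over a small set of candidate classes. All names and the candidate set are illustrative assumptions, not the system described in the paper.

```python
# Illustrative sketch (not the paper's exact procedure): fit each candidate
# complexity class to measured running times and keep the best-fitting one.
import numpy as np

# Candidate complexity classes as functions of the problem size n (assumed set).
CANDIDATES = {
    "n":       lambda n: n,
    "n log n": lambda n: n * np.log(n),
    "n^2":     lambda n: n ** 2,
    "n^3":     lambda n: n ** 3,
}

def estimate_complexity(sizes, times):
    """Return (label, coefficient, residual) of the best-fitting candidate.

    sizes, times: 1-D arrays of problem sizes and measured running times.
    """
    sizes = np.asarray(sizes, dtype=float)
    times = np.asarray(times, dtype=float)
    best = None
    for label, f in CANDIDATES.items():
        basis = f(sizes)
        # Least-squares solution of times ~= c * f(n) for a single coefficient c.
        c = float(basis @ times) / float(basis @ basis)
        residual = float(np.sum((times - c * basis) ** 2))
        if best is None or residual < best[2]:
            best = (label, c, residual)
    return best

if __name__ == "__main__":
    # Synthetic measurements that grow roughly as n log n, with small noise.
    ns = np.array([1000, 2000, 4000, 8000, 16000])
    ts = 2e-7 * ns * np.log(ns) * (1 + 0.02 * np.random.randn(ns.size))
    print(estimate_complexity(ns, ts))
```

In practice such a fit would be combined with the kind of bad-measurement filtering mentioned above, so that outlier timings do not skew the selected class.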