Truth in advertising: Reporting performance of computer programs, algorithms and the impact of architecture

Authors

  • Scott Hazelhurst

DOI:

https://doi.org/10.18489/sacj.v46i0.50

Keywords:

comparison, experiments, reproducibility

Abstract

The level of detail and precision that appears in the experimental methodology section of computer science papers is usually much less than in the natural science disciplines. This is partially justified by the different nature of the experiments. The experimental evidence presented here shows that the time taken by the same algorithm varies so significantly on different CPUs that, without knowing the exact model of CPU, it is difficult to compare the results. This is placed in context by analysing a cross-section of experimental results reported in the literature. The reporting of experimental results is sometimes insufficient to allow experiments to be replicated, and in some cases is insufficient to support the claims made for the algorithms. New standards for reporting on algorithm results are suggested.

Published

2010-11-18

Section

Research Papers (general)