Truth in advertising: Reporting performance of computer programs, algorithms and the impact of architecture

Scott Hazelhurst

Abstract


The level of detail and precision that appears in the experimental
methodology sections of computer science papers is usually much
lower than in the natural science disciplines. This is partially
justified by the different nature of the experiments. The
experimental evidence presented here shows that the time taken by
the same algorithm varies so significantly on different CPUs that,
without knowing the exact model of CPU, it is difficult to compare
the results. This is placed in context by analysing a cross-section
of experimental results reported in the literature. The reporting of
experimental results is sometimes insufficient to allow the
experiments to be replicated, and in some cases is insufficient to
support the claims made for the algorithms. New standards for
reporting algorithmic results are suggested.
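
As a minimal sketch of the reporting practice the abstract argues
for (this is illustrative only, not code from the paper), the Python
fragment below records the CPU model alongside a timing measurement
so that results remain comparable across machines; the helper name
report_timing and the specific output format are assumptions.

    import platform
    import time

    def report_timing(func, *args, repeats=5):
        # Run func several times and keep the best wall-clock time,
        # reported together with the CPU details needed to compare
        # results across machines.
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            func(*args)
            times.append(time.perf_counter() - start)
        # platform.processor() may be empty on some systems.
        print("CPU model :", platform.processor() or "unknown")
        print("Platform  :", platform.machine(), platform.system())
        print("Best time : %.6f s over %d runs" % (min(times), repeats))

    if __name__ == "__main__":
        # Example: time sorting one million integers.
        data = list(range(1_000_000, 0, -1))
        report_timing(sorted, data)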

Keywords


comparison, experiments, reproducibility


DOI: http://dx.doi.org/10.18489/sacj.v46i0.50
