microbenchmark {microbenchmark}        R Documentation
Description

microbenchmark serves as a more accurate replacement of the
often-seen system.time(replicate(1000, expr)) expression. It
tries hard to accurately measure only the time it takes to
evaluate expr. To achieve this, the sub-millisecond (supposedly
nanosecond) accurate timing functions that most modern operating
systems provide are used. Additionally, all evaluations of the
expressions are done in C code to minimize any overhead.
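To make the contrast concrete, here is a hedged sketch comparing the two approaches (it assumes the microbenchmark package is installed; f is a placeholder workload, not part of the package):

```r
library(microbenchmark)

f <- function() sqrt(1:100)  # placeholder workload

## Coarse: a single aggregate wall-clock figure for 1000 replications,
## which also includes replicate()'s own looping overhead
system.time(replicate(1000, f()))

## Fine-grained: 1000 individual timings taken in C, in nanoseconds
mb <- microbenchmark(f(), times = 1000L)
summary(mb)
```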
Usage

microbenchmark(..., list = NULL, times = 100L, unit,
               control = list())
Arguments

...       Expressions to benchmark.

list      List of unevaluated expressions to benchmark.

times     Number of times to evaluate each expression.

control   List of control arguments. See Details.

unit      Default unit used when printing the results.
Details

This function is only meant for micro-benchmarking small pieces of source code and to compare their relative performance characteristics. You should generally avoid benchmarking larger chunks of your code using this function. Instead, try using the R profiler to detect hot spots and consider rewriting them in C/C++ or FORTRAN.
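For larger chunks, a minimal sketch of the profiler workflow mentioned above, using Rprof and summaryRprof from the bundled utils package (slow_task is a hypothetical stand-in for your own code):

```r
## Hypothetical larger workload -- replace with your own code
slow_task <- function() {
  invisible(replicate(200, sort(runif(1e4))))
}

out <- tempfile()
Rprof(out, interval = 0.01)  # start the sampling profiler
slow_task()
Rprof(NULL)                  # stop profiling
summaryRprof(out)$by.self    # hot spots, ordered by self time
```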
The control list can contain the following entries:

order
    the order in which the expressions are evaluated. "random"
    (the default) randomizes the execution order, "inorder"
    executes each expression in order and "block" executes all
    repetitions of each expression as one block.

warmup
    the number of warm-up iterations performed before the actual
    benchmark. These are used to estimate the timing overhead as
    well as to spin up the processor from any sleep or idle
    states it might be in. The default value is 2.
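As a sketch, both control entries can be set at once (assumes the microbenchmark package is installed):

```r
library(microbenchmark)

res <- microbenchmark(
  mean(1:1e4),
  sum(1:1e4) / 1e4,
  times   = 50L,
  control = list(order = "block", warmup = 10)
)
summary(res)
```

Note that the default randomized order guards against systematic drift (e.g. thermal throttling) favoring whichever expression happens to run later.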
Value

Object of class 'microbenchmark', a matrix with one column per expression. Each row contains the time it took to evaluate the respective expression once, in nanoseconds.
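Since the raw measurements are plain nanosecond values, they can be inspected and rescaled directly. A sketch, assuming the package is installed and that the result exposes a time column (as it does in current versions, where it behaves like a data frame with expr and time columns):

```r
library(microbenchmark)

res <- microbenchmark(sqrt(1:1000), times = 100L)
head(res$time)          # raw per-evaluation times, in nanoseconds
median(res$time) / 1e3  # converted to microseconds
median(res$time) / 1e6  # converted to milliseconds
```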
Depending on the underlying operating system, different methods
are used for timing. On Windows the QueryPerformanceCounter
interface is used to measure the time passed. For Linux the
clock_gettime API is used and on Solaris the gethrtime function.
Finally, on Mac OS X the undocumented mach_absolute_time
function is used to avoid a dependency on the CoreServices
framework.
Before evaluating each expression times times, the
overhead of calling the timing functions and the C
function call overhead are estimated. This estimated
overhead is subtracted from each measured evaluation
time. Should the resulting timing be negative, a warning
is thrown and the respective value is replaced by
NA.
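A defensive sketch of checking for such NA values before summarizing (assumes the package is installed and that the result exposes a time column, as in current versions):

```r
library(microbenchmark)

## A very cheap expression, where timings below the estimated
## overhead -- and hence NA values -- are most likely
res <- microbenchmark(NULL, times = 1000L)
if (anyNA(res$time)) {
  message(sum(is.na(res$time)),
          " measurement(s) fell below the estimated overhead")
  res <- res[!is.na(res$time), ]
}
summary(res)
```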
Author(s)

Olaf Mersmann olafm@p-value.net
See Also

print.microbenchmark to display and boxplot.microbenchmark to
plot the results.
Examples

## Measure the time it takes to dispatch a simple function call
## compared to simply evaluating the constant NULL
f <- function() NULL
res <- microbenchmark(NULL, f(), times = 1000L)

## Print results:
print(res)

## Plot results:
boxplot(res)

## Pretty plot:
if (require("ggplot2")) {
  plt <- ggplot2::qplot(y = time, data = res, colour = expr)
  plt <- plt + ggplot2::scale_y_log10()
  print(plt)
}
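Beyond the plots above, summary() can also report the results in a fixed unit; a short sketch (unit handling as in current versions of the package):

```r
library(microbenchmark)

f <- function() NULL
res <- microbenchmark(NULL, f(), times = 1000L)
summary(res, unit = "ns")  # force nanoseconds
summary(res, unit = "us")  # force microseconds
```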