The go1 benchmark suite does a lot of work at package init time, which
makes it take quite a while to run even if you're not running any of
the benchmarks, or if you're only running a subset of them. This leads
to an awkward workaround in dist test to compile but not run the
package, unlike nearly all other packages. It also reduces isolation
between benchmarks by affecting the starting heap size of all
benchmarks.
Fix this by initializing all data required by a benchmark when that
benchmark runs, and keeping it local so it gets freed by the GC and
doesn't leak between benchmarks. Now, none of the benchmarks depend on
global state.
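
As a rough sketch of the pattern (the helper makeBenchData, the data it
builds, and the benchmark body are illustrative, not the actual go1
suite code): each benchmark constructs its input locally and keeps the
setup out of the timed region, so the data is garbage-collected when
the benchmark returns and never lives in a package-level variable.

    package go1

    import (
        "bytes"
        "testing"
    )

    // makeBenchData builds the benchmark input from scratch on each call.
    func makeBenchData() []byte {
        var buf bytes.Buffer
        for i := 0; i < 1<<14; i++ {
            buf.WriteString("some representative input line\n")
        }
        return buf.Bytes()
    }

    func BenchmarkExample(b *testing.B) {
        data := makeBenchData() // local, so it is freed after the benchmark returns
        b.SetBytes(int64(len(data)))
        b.ResetTimer() // exclude the setup from the measurement
        for i := 0; i < b.N; i++ {
            if bytes.Count(data, []byte("\n")) == 0 {
                b.Fatal("unexpected input")
            }
        }
    }
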
Re-initializing the data on each benchmark run does add overhead to an
actual benchmark run, as each benchmark function is called several
times with different values of b.N. A full run of all benchmarks at
the default -benchtime=1s now takes ~10% longer; the relative overhead
shrinks at higher -benchtime values. It would be quite difficult to cache this data between
invocations of the same benchmark function without leaking between
different benchmarks and affecting GC overheads, as the testing
package doesn't provide any mechanism for this.
This reduces the time to run the binary with no benchmarks from 1.5
seconds to 10 ms, and also reduces the memory required to do this from
342 MiB to 17 MiB.
To make sure data was not leaking between different benchmarks, I ran
the benchmarks with -shuffle=on. The variance remained low: mostly
under 3%. A few benchmarks had higher variance, but in all cases it
was comparable to the variance observed without this change.
This CL naturally changes the measured performance of several of the
benchmarks because it dramatically changes the heap size and hence GC
overheads. However, going forward the benchmarks should be much better
isolated.
For #37486.
Change-Id: I252ebea703a9560706cc1990dc5ad22d1927c7a0
Reviewed-on: https://go-review.googlesource.com/c/go/+/443336
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Austin Clements <austin@google.com>

Update references missed in CL 263142.
For #41190
Change-Id: I778760a6a69bd0440fec0848bdef539c9ccb4ee1
GitHub-Last-Rev: dda42b09ff
GitHub-Pull-Request: golang/go#42874
Reviewed-on: https://go-review.googlesource.com/c/go/+/273946
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: Cherry Zhang <cherryyz@google.com>

We can't depend on init() order, and we certainly don't want to require
every future benchmark that uses jsonbytes or jsondata to register
itself in init() in json_test.go, so we use a more general solution:
move the generation of jsonbytes and jsondata into their own functions
so that the compiler takes care of the initialization order.
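
A minimal sketch of that idea (the helper names makeJSONBytes and
makeJSONData and the tiny jsonsource constant are illustrative, not the
actual suite code): package-level variables initialized by function
calls are ordered by the compiler's dependency analysis, so jsondata is
always built after jsonbytes no matter which file's init() runs first.

    package go1

    import "encoding/json"

    // jsonsource stands in for the embedded test data in jsondata.go.
    const jsonsource = `{"author": "agl", "repos": ["example/one", "example/two"]}`

    // The compiler orders package-level variable initialization by
    // dependency, so jsondata is built from jsonbytes regardless of
    // file order or init() functions.
    var (
        jsonbytes = makeJSONBytes()
        jsondata  = makeJSONData(jsonbytes)
    )

    func makeJSONBytes() []byte {
        return []byte(jsonsource)
    }

    func makeJSONData(b []byte) (v interface{}) {
        if err := json.Unmarshal(b, &v); err != nil {
            panic(err)
        }
        return v
    }
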
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/6282046

I have included a few important microbenchmarks,
but the overall intent is to have mostly end-to-end
benchmarks timing real world operations.
The jsondata.go file is a summary of agl's
activity in various open source repositories.
It gets used as test data for many of the benchmarks.
Everything links into one binary (even the test data)
so that it is easy to run the benchmarks on many
computers: there is just one file to copy around.
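
To illustrate the end-to-end style (this benchmark and its sample data
are a hypothetical stand-in, not the actual suite code), a benchmark
times a complete operation, such as a JSON encode/decode round trip of
the embedded data, rather than a narrow inner loop.

    package go1

    import (
        "bytes"
        "encoding/json"
        "testing"
    )

    func BenchmarkJSONRoundTrip(b *testing.B) {
        // Small stand-in for the embedded jsondata; the real suite works
        // on a much larger data set.
        sample := map[string]interface{}{
            "author": "agl",
            "repos":  []string{"example/one", "example/two"},
        }
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
            var buf bytes.Buffer
            if err := json.NewEncoder(&buf).Encode(sample); err != nil {
                b.Fatal(err)
            }
            var out map[string]interface{}
            if err := json.NewDecoder(&buf).Decode(&out); err != nil {
                b.Fatal(err)
            }
        }
    }
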
R=golang-dev, r, bradfitz, adg, r
CC=golang-dev
https://golang.org/cl/5484071