// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// This benchmark tests gzip and gunzip performance.
package go1
import (
"bytes"
gz "compress/gzip"
"io"
"testing"
)
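// makeGunzip returns the uncompressed benchmark payload: the JSON test data
// repeated ten times.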
func makeGunzip(jsonbytes []byte) []byte {
return bytes.Repeat(jsonbytes, 10)
}
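// makeGzip returns jsongunz compressed with gzip.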
func makeGzip(jsongunz []byte) []byte {
var buf bytes.Buffer
c := gz.NewWriter(&buf)
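	// Errors are ignored here: the gzip.Writer writes into an in-memory
	// bytes.Buffer, which cannot fail.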
c.Write(jsongunz)
c.Close()
return buf.Bytes()
}
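// gzip compresses jsongunz once, discarding the output. It is the unit of
// work timed by BenchmarkGzip.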
func gzip(jsongunz []byte) {
c := gz.NewWriter(io.Discard)
if _, err := c.Write(jsongunz); err != nil {
panic(err)
}
if err := c.Close(); err != nil {
panic(err)
}
}
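// gunzip decompresses jsongz once, discarding the output. It is the unit of
// work timed by BenchmarkGunzip.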
func gunzip(jsongz []byte) {
r, err := gz.NewReader(bytes.NewBuffer(jsongz))
if err != nil {
panic(err)
}
if _, err := io.Copy(io.Discard, r); err != nil {
panic(err)
}
	if err := r.Close(); err != nil {
		panic(err)
	}
}
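// BenchmarkGzip measures gzip compression throughput over the repeated JSON
// payload. The input is built inside the benchmark rather than at package
// init so that it does not inflate start-up time or leak into other
// benchmarks.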
func BenchmarkGzip(b *testing.B) {
jsonbytes := makeJsonBytes()
jsongunz := makeGunzip(jsonbytes)
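	// Exclude the setup above from the timed region.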
b.ResetTimer()
b.SetBytes(int64(len(jsongunz)))
for i := 0; i < b.N; i++ {
gzip(jsongunz)
}
}
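// BenchmarkGunzip measures gzip decompression throughput; like BenchmarkGzip,
// it builds its input locally rather than at package init.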
func BenchmarkGunzip(b *testing.B) {
jsonbytes := makeJsonBytes()
jsongunz := makeGunzip(jsonbytes)
jsongz := makeGzip(jsongunz)
b.ResetTimer()
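	// Throughput is reported in terms of the uncompressed payload size.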
b.SetBytes(int64(len(jsongunz)))
for i := 0; i < b.N; i++ {
gunzip(jsongz)
}
}