Reintroduce an optimization discarded during the initial conversion
from 4-bit heap bitmaps to 2-bit heap bitmaps: when we reach the
place in the bitmap where there are no more pointers, mark that position
for the GC so that it can avoid scanning past that place.
During heapBitsSetType we can also avoid initializing the heap bitmap
beyond that location, which gives a bit of a win compared to Go 1.4.
This particular optimization (not initializing the heap bitmap) may not last:
we might change typedmemmove to use the heap bitmap, in which
case it would all need to be initialized. The early stop in the GC scan
will stay no matter what.
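As a concrete illustration of the early stop, here is a minimal sketch in
ordinary Go (the constant names and bit layout are illustrative, not the
runtime's actual encoding):

    package main

    import "fmt"

    const (
        bitPointer = 1 << 0 // this word holds a pointer
        bitDead    = 1 << 1 // no pointers at or beyond this word
    )

    // scanObject walks a per-word descriptor and returns at the dead
    // marker instead of scanning to the end of the object.
    func scanObject(bits []byte) (pointers int) {
        for _, b := range bits {
            if b&bitDead != 0 {
                return pointers // early stop: nothing past here is a pointer
            }
            if b&bitPointer != 0 {
                pointers++
            }
        }
        return pointers
    }

    func main() {
        // pointer, scalar, pointer, dead marker; the tail is never read,
        // which is also why it need not be initialized.
        fmt.Println(scanObject([]byte{bitPointer, 0, bitPointer, bitDead, 0xFF}))
    }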
Compared to Go 1.4 (github.com/rsc/go, branch go14bench):
name old mean new mean delta
SetTypeNode64 80.7ns × (1.00,1.01) 57.4ns × (1.00,1.01) -28.83% (p=0.000)
SetTypeNode64Dead 80.5ns × (1.00,1.01) 13.1ns × (0.99,1.02) -83.77% (p=0.000)
SetTypeNode64Slice 2.16µs × (1.00,1.01) 1.54µs × (1.00,1.01) -28.75% (p=0.000)
SetTypeNode64DeadSlice 2.16µs × (1.00,1.01) 1.52µs × (1.00,1.00) -29.74% (p=0.000)
Compared to previous CL:
name old mean new mean delta
SetTypeNode64 56.7ns × (1.00,1.00) 57.4ns × (1.00,1.01) +1.19% (p=0.000)
SetTypeNode64Dead 57.2ns × (1.00,1.00) 13.1ns × (0.99,1.02) -77.15% (p=0.000)
SetTypeNode64Slice 1.56µs × (1.00,1.01) 1.54µs × (1.00,1.01) -0.89% (p=0.000)
SetTypeNode64DeadSlice 1.55µs × (1.00,1.01) 1.52µs × (1.00,1.00) -2.23% (p=0.000)
This is the last CL in the sequence converting from the 4-bit heap
to the 2-bit heap, with all the same optimizations reenabled.
Compared to before that process began (compared to CL 9701 patch set 1):
name old mean new mean delta
BinaryTree17 5.87s × (0.94,1.09) 5.91s × (0.96,1.06) ~ (p=0.578)
Fannkuch11 4.32s × (1.00,1.00) 4.32s × (1.00,1.00) ~ (p=0.474)
FmtFprintfEmpty 89.1ns × (0.95,1.16) 89.0ns × (0.93,1.10) ~ (p=0.942)
FmtFprintfString 283ns × (0.98,1.02) 298ns × (0.98,1.06) +5.33% (p=0.000)
FmtFprintfInt 284ns × (0.98,1.04) 286ns × (0.98,1.03) ~ (p=0.208)
FmtFprintfIntInt 486ns × (0.98,1.03) 498ns × (0.97,1.06) +2.48% (p=0.000)
FmtFprintfPrefixedInt 400ns × (0.99,1.02) 408ns × (0.98,1.02) +2.23% (p=0.000)
FmtFprintfFloat 566ns × (0.99,1.01) 587ns × (0.98,1.01) +3.69% (p=0.000)
FmtManyArgs 1.91µs × (0.99,1.02) 1.94µs × (0.99,1.02) +1.81% (p=0.000)
GobDecode 15.5ms × (0.98,1.05) 15.8ms × (0.98,1.03) +1.94% (p=0.002)
GobEncode 11.9ms × (0.97,1.03) 12.0ms × (0.96,1.09) ~ (p=0.263)
Gzip 648ms × (0.99,1.01) 648ms × (0.99,1.01) ~ (p=0.992)
Gunzip 143ms × (1.00,1.00) 143ms × (1.00,1.01) ~ (p=0.585)
HTTPClientServer 89.2µs × (0.99,1.02) 90.3µs × (0.98,1.01) +1.24% (p=0.000)
JSONEncode 32.3ms × (0.97,1.06) 31.6ms × (0.99,1.01) -2.29% (p=0.000)
JSONDecode 106ms × (0.99,1.01) 107ms × (1.00,1.01) +0.62% (p=0.000)
Mandelbrot200 6.02ms × (1.00,1.00) 6.03ms × (1.00,1.01) ~ (p=0.250)
GoParse 6.57ms × (0.97,1.06) 6.53ms × (0.99,1.03) ~ (p=0.243)
RegexpMatchEasy0_32 162ns × (1.00,1.00) 161ns × (1.00,1.01) -0.80% (p=0.000)
RegexpMatchEasy0_1K 561ns × (0.99,1.02) 541ns × (0.99,1.01) -3.67% (p=0.000)
RegexpMatchEasy1_32 145ns × (0.95,1.04) 138ns × (1.00,1.00) -5.04% (p=0.000)
RegexpMatchEasy1_1K 864ns × (0.99,1.04) 887ns × (0.99,1.01) +2.57% (p=0.000)
RegexpMatchMedium_32 255ns × (0.99,1.04) 253ns × (0.99,1.01) -1.05% (p=0.012)
RegexpMatchMedium_1K 73.9µs × (0.98,1.04) 72.8µs × (1.00,1.00) -1.51% (p=0.005)
RegexpMatchHard_32 3.92µs × (0.98,1.04) 3.85µs × (1.00,1.01) -1.88% (p=0.002)
RegexpMatchHard_1K 120µs × (0.98,1.04) 117µs × (1.00,1.01) -2.02% (p=0.001)
Revcomp 936ms × (0.95,1.08) 922ms × (0.97,1.08) ~ (p=0.234)
Template 130ms × (0.98,1.04) 126ms × (0.99,1.01) -2.99% (p=0.000)
TimeParse 638ns × (0.98,1.05) 628ns × (0.99,1.01) -1.54% (p=0.004)
TimeFormat 674ns × (0.99,1.01) 668ns × (0.99,1.01) -0.80% (p=0.001)
The slowdown of the first few benchmarks seems to be due to the new
atomic operations for certain small size allocations. But the larger
benchmarks mostly improve, probably due to the decreased memory
pressure from having half as much heap bitmap.
CL 9706, which removes the (no longer used) wbshadow mode,
gets back what is lost in the early microbenchmarks.
Change-Id: I37423a209e8ec2a2e92538b45cac5422a6acd32d
Reviewed-on: https://go-review.googlesource.com/9705
Reviewed-by: Rick Hudson <rlh@golang.org>
For the conversion of the heap bitmap from 4-bit to 2-bit fields,
I replaced heapBitsSetType with the dumbest thing that could possibly work:
two atomic operations (atomicand8+atomicor8) per 2-bit field.
This CL replaces that code with a proper implementation that
avoids the atomics whenever possible. Benchmarks vs base CL
(before the conversion to 2-bit heap bitmap) and vs Go 1.4 below.
Compared to Go 1.4, SetTypePtr (a 1-pointer allocation)
is 10ns slower because a race against the concurrent GC requires the
use of an atomicor8 that used to be an ordinary write. This slowdown
was present even in the base CL.
Compared to both Go 1.4 and base, SetTypeNode8 (a 10-word allocation)
is 10ns slower because it too needs a new atomic: with the
denser representation, the byte at the end of the allocation is now shared
with the object next to it; this was not true with the 4-bit representation.
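To sketch why the shared byte needs an atomic (simplified; the real code
operates on the heap bitmap itself), an update that owns only part of a
word must use an atomic read-modify-write so a concurrent update to the
neighbor's bits is not lost:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // setLowBits ORs bits into the low half of *addr, emulating an atomic
    // OR with a CAS loop. A plain *addr |= bits could silently drop a
    // concurrent write to the high half made for the neighboring object.
    func setLowBits(addr *uint32, bits uint32) {
        for {
            old := atomic.LoadUint32(addr)
            if atomic.CompareAndSwapUint32(addr, old, old|(bits&0xFFFF)) {
                return
            }
        }
    }

    func main() {
        var shared uint32 // stands in for a bitmap word shared by two objects
        setLowBits(&shared, 0x5)
        fmt.Printf("%#x\n", shared)
    }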
Excluding these two (fundamental) slowdowns due to the use of atomics,
the new code is noticeably faster than both Go 1.4 and the base CL.
The next CL will reintroduce the "typeDead" optimization.
Stats are from 5 runs on a MacBookPro10,2 (late 2012 Core i5).
Compared to base CL (** = new atomic)
name old mean new mean delta
SetTypePtr 14.1ns × (0.99,1.02) 14.7ns × (0.93,1.10) ~ (p=0.175)
SetTypePtr8 18.4ns × (1.00,1.01) 18.6ns × (0.81,1.21) ~ (p=0.866)
SetTypePtr16 28.7ns × (1.00,1.00) 22.4ns × (0.90,1.27) -21.88% (p=0.015)
SetTypePtr32 52.3ns × (1.00,1.00) 33.8ns × (0.93,1.24) -35.37% (p=0.001)
SetTypePtr64 79.2ns × (1.00,1.00) 55.1ns × (1.00,1.01) -30.43% (p=0.000)
SetTypePtr126 118ns × (1.00,1.00) 100ns × (1.00,1.00) -15.97% (p=0.000)
SetTypePtr128 130ns × (0.92,1.19) 98ns × (1.00,1.00) -24.36% (p=0.008)
SetTypePtrSlice 726ns × (0.96,1.08) 760ns × (1.00,1.00) ~ (p=0.152)
SetTypeNode1 14.1ns × (0.94,1.15) 12.0ns × (1.00,1.01) -14.60% (p=0.020)
SetTypeNode1Slice 135ns × (0.96,1.07) 88ns × (1.00,1.00) -34.53% (p=0.000)
SetTypeNode8 20.9ns × (1.00,1.01) 32.6ns × (1.00,1.00) +55.37% (p=0.000) **
SetTypeNode8Slice 414ns × (0.99,1.02) 244ns × (1.00,1.00) -41.09% (p=0.000)
SetTypeNode64 80.0ns × (1.00,1.00) 57.4ns × (1.00,1.00) -28.23% (p=0.000)
SetTypeNode64Slice 2.15µs × (1.00,1.01) 1.56µs × (1.00,1.00) -27.43% (p=0.000)
SetTypeNode124 119ns × (0.99,1.00) 100ns × (1.00,1.00) -16.11% (p=0.000)
SetTypeNode124Slice 3.40µs × (1.00,1.00) 2.93µs × (1.00,1.00) -13.80% (p=0.000)
SetTypeNode126 120ns × (1.00,1.01) 98ns × (1.00,1.00) -18.19% (p=0.000)
SetTypeNode126Slice 3.53µs × (0.98,1.08) 3.02µs × (1.00,1.00) -14.49% (p=0.002)
SetTypeNode1024 726ns × (0.97,1.09) 740ns × (1.00,1.00) ~ (p=0.451)
SetTypeNode1024Slice 24.9µs × (0.89,1.37) 23.1µs × (1.00,1.00) ~ (p=0.476)
Compared to Go 1.4 (** = new atomic)
name old mean new mean delta
SetTypePtr 5.71ns × (0.89,1.19) 14.68ns × (0.93,1.10) +157.24% (p=0.000) **
SetTypePtr8 19.3ns × (0.96,1.10) 18.6ns × (0.81,1.21) ~ (p=0.638)
SetTypePtr16 30.7ns × (0.99,1.03) 22.4ns × (0.90,1.27) -26.88% (p=0.005)
SetTypePtr32 51.5ns × (1.00,1.00) 33.8ns × (0.93,1.24) -34.40% (p=0.001)
SetTypePtr64 83.6ns × (0.94,1.12) 55.1ns × (1.00,1.01) -34.12% (p=0.001)
SetTypePtr126 137ns × (0.87,1.26) 100ns × (1.00,1.00) -27.10% (p=0.028)
SetTypePtrSlice 865ns × (0.80,1.23) 760ns × (1.00,1.00) ~ (p=0.243)
SetTypeNode1 15.2ns × (0.88,1.12) 12.0ns × (1.00,1.01) -20.89% (p=0.014)
SetTypeNode1Slice 156ns × (0.93,1.16) 88ns × (1.00,1.00) -43.57% (p=0.001)
SetTypeNode8 23.8ns × (0.90,1.18) 32.6ns × (1.00,1.00) +36.76% (p=0.003) **
SetTypeNode8Slice 502ns × (0.92,1.10) 244ns × (1.00,1.00) -51.46% (p=0.000)
SetTypeNode64 85.6ns × (0.94,1.11) 57.4ns × (1.00,1.00) -32.89% (p=0.001)
SetTypeNode64Slice 2.36µs × (0.91,1.14) 1.56µs × (1.00,1.00) -33.96% (p=0.002)
SetTypeNode124 130ns × (0.91,1.12) 100ns × (1.00,1.00) -23.49% (p=0.004)
SetTypeNode124Slice 3.81µs × (0.90,1.22) 2.93µs × (1.00,1.00) -23.09% (p=0.025)
There are fewer benchmarks vs Go 1.4 because unrolling directly
into the heap bitmap is not yet implemented, so those would not
be meaningful comparisons.
These benchmarks were not present in Go 1.4 as distributed.
The backport to Go 1.4 is in github.com/rsc/go's go14bench branch,
commit 71d5ee5.
Change-Id: I95ed05a22bf484b0fc9efad549279e766c98d2b6
Reviewed-on: https://go-review.googlesource.com/9704
Reviewed-by: Rick Hudson <rlh@golang.org>
Previous CLs changed the representation of the non-heap type bitmaps
to be 1-bit bitmaps (pointer or not). Before this CL, the heap bitmap
stored a 2-bit type for each word and a mark bit and checkmark bit
for the first word of the object. (There used to be additional per-word bits.)
Reduce the heap bitmap to 2 bits per word, with one bit dedicated to
pointer-or-not and the other used for mark, checkmark, and "keep scanning
forward to find pointers in this object." See comments for details.
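One plausible packing, shown only to make the 2-bit scheme concrete (the
actual byte layout is defined in the runtime's comments), pairs a pointer
bit with a mark/scan bit for each of four words per bitmap byte:

    package main

    import "fmt"

    // wordBits extracts the (pointer, scan) pair for word i (0..3) of the
    // four words a byte describes, assuming pointer bits in the low nibble
    // and mark/scan bits in the high nibble.
    func wordBits(b byte, i uint) (pointer, scan bool) {
        return b&(1<<i) != 0, b&(1<<(4+i)) != 0
    }

    func main() {
        var b byte = 0x31 // word 0 is a pointer; words 0 and 1 say keep scanning
        for i := uint(0); i < 4; i++ {
            p, s := wordBits(b, i)
            fmt.Printf("word %d: pointer=%v scan=%v\n", i, p, s)
        }
    }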
This CL replaces heapBitsSetType with very slow but obviously correct code.
A followup CL will optimize it. (Spoiler: the new code is faster than Go 1.4 was.)
Change-Id: I999577a133f3cfecacebdec9cdc3573c235c7fb9
Reviewed-on: https://go-review.googlesource.com/9703
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The type information in reflect.Type and the GC programs is now
1 bit per word, down from 2 bits.
The in-memory unrolled type bitmap representation is now
1 bit per word, down from 4 bits.
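A minimal sketch of consuming a 1-bit-per-word pointer mask (the helper
and bit order are assumptions for illustration): bit i says whether word i
of the type holds a pointer.

    package main

    import "fmt"

    // pointerWords returns the indexes of the words that hold pointers,
    // reading one bit per word, least significant bit first.
    func pointerWords(mask []byte, nwords int) []int {
        var ptrs []int
        for i := 0; i < nwords; i++ {
            if mask[i/8]&(1<<uint(i%8)) != 0 {
                ptrs = append(ptrs, i)
            }
        }
        return ptrs
    }

    func main() {
        // 10 words; words 0, 3, and 9 hold pointers.
        fmt.Println(pointerWords([]byte{0b0000_1001, 0b0000_0010}, 10))
    }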
The conversion from the unrolled (now 1-bit) bitmap to the
heap bitmap (still 4-bit) is not optimized. A followup CL will
work on that, after the heap bitmap has been converted to 2-bit.
The typeDead optimization, in which a special value denotes
that there are no more pointers anywhere in the object, is lost
in this CL. A followup CL will bring it back in the final form of
heapBitsSetType.
Change-Id: If61e67950c16a293b0b516a6fd9a1c755b6d5549
Reviewed-on: https://go-review.googlesource.com/9702
Reviewed-by: Austin Clements <austin@google.com>
There was an old benchmark that measured this indirectly
via allocation, but I don't understand how to factor out the
allocation cost when interpreting the numbers.
Replace with a benchmark that only calls heapBitsSetType,
that does not allocate. This was not possible when the
benchmark was first written, because heapBitsSetType had
not been factored out of mallocgc.
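The shape of such a benchmark, sketched with a stand-in function (the real
benchmark lives in the runtime and reaches heapBitsSetType through an
internal hook):

    package mypkg

    import "testing"

    // setTypeBits is a stand-in for the heapBitsSetType call under test.
    func setTypeBits(buf []byte) {
        for i := range buf {
            buf[i] = byte(i)
        }
    }

    func BenchmarkSetType(b *testing.B) {
        buf := make([]byte, 64) // reused buffer: no per-iteration allocation
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            setTypeBits(buf)
        }
    }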
Change-Id: I30f0f02362efab3465a50769398be859832e6640
Reviewed-on: https://go-review.googlesource.com/9701
Reviewed-by: Austin Clements <austin@google.com>
Since we now have stack information for code running on the
systemstack, we can traceback over it. To make CPU profiles useful,
add a case in gentraceback to jump over systemstack switches.
Fixes #10609.
Change-Id: I21f47fcc802c07c5d4a1ada56374314e388a6dc7
Reviewed-on: https://go-review.googlesource.com/9506
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
I have around twenty such values on a Windows 7 development machine.
regedit displays (translated): "invalid 32-bit DWORD value".
Change-Id: Ib37a414ee4c85e891b0a25fed2ddad9e105f5f4e
Reviewed-on: https://go-review.googlesource.com/9901
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
The trace-viewer doesn't use the Go license, so it makes sense
to include the license text in the README.md file.
While we're here, reformat the existing text using real Markdown
syntax.
Change-Id: I13e42d3cc6a0ca7e64e3d46ad460dc0460f7ed09
Reviewed-on: https://go-review.googlesource.com/9882
Reviewed-by: Rob Pike <r@golang.org>
On darwin/arm, the test sometimes fails with:
Process 557 resuming
--- FAIL: TestWriteTimeoutFluctuation (1.64s)
timeout_test.go:706: Write took over 1s; expected 0.1s
FAIL
Process 557 exited with status = 1 (0x00000001)
go_darwin_arm_exec: timeout running tests
This change increases the timeout on iOS builders from 1s to 3s as a
temporary fix.
Updates #10775.
Change-Id: Ifdaf99cf5b8582c1a636a0f7d5cc66bb276efd72
Reviewed-on: https://go-review.googlesource.com/9915
Reviewed-by: Minux Ma <minux@golang.org>
ExpandString correctly loops on the syscall until the buffer reaches the
required size, but then truncates the result before converting it back
to a string. The truncation limit is increased to 2^15 bytes, which is
the documented maximum output size of ExpandEnvironmentStrings.
This fixes TestExpandString on systems where len($PATH) > 1024.
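A portable sketch of the grow-and-retry pattern (the real code calls the
Windows ExpandEnvironmentStrings syscall; expand here is a stand-in that
reports the required length):

    package main

    import "fmt"

    // expand fills dst when it is large enough and always reports the
    // required buffer length, like the underlying syscall.
    func expand(src string, dst []byte) int {
        if len(dst) >= len(src) {
            copy(dst, src)
        }
        return len(src)
    }

    func expandString(src string) string {
        buf := make([]byte, 32)
        for {
            n := expand(src, buf)
            if n <= len(buf) {
                return string(buf[:n]) // truncate to what was written, not capacity
            }
            buf = make([]byte, n) // grow to the reported requirement and retry
        }
    }

    func main() {
        fmt.Println(expandString("a fairly long expanded environment string value"))
    }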
Change-Id: I2a6f184eeca939121b458bcffe1a436a50f3298e
Reviewed-on: https://go-review.googlesource.com/9805
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
There is no SYS_INOTIFY_INIT on linux/arm64, only SYS_INOTIFY_INIT1.
Change-Id: I97f430f2c2b910fb19dce495ff1adf591b8634fc
Reviewed-on: https://go-review.googlesource.com/9870
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
The html package uses hand-rolled code to escape special characters.
A strings.Replacer can be used instead and is much more
efficient. The converse operation is more complex but can still be
slightly optimized.
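The escaping side reduces to a package-level strings.Replacer, built once
and reused (the entity set below mirrors what the html package escapes;
treat the exact spellings as illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    var htmlEscaper = strings.NewReplacer(
        "&", "&amp;",
        "'", "&#39;",
        "<", "&lt;",
        ">", "&gt;",
        `"`, "&#34;",
    )

    func main() {
        fmt.Println(htmlEscaper.Replace(`<a href="x">'fish' & chips</a>`))
    }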
Credits to Ken Bloom (kabloom@google.com), who first submitted a
similar patch at https://codereview.appspot.com/141930043
Added benchmarks and slightly optimized UnescapeString.
benchmark old ns/op new ns/op delta
BenchmarkEscape-4 118713 19825 -83.30%
BenchmarkEscapeNone-4 87653 3784 -95.68%
BenchmarkUnescape-4 24888 23417 -5.91%
BenchmarkUnescapeNone-4 14423 157 -98.91%
benchmark old allocs new allocs delta
BenchmarkEscape-4 9 2 -77.78%
BenchmarkEscapeNone-4 0 0 +0.00%
BenchmarkUnescape-4 2 2 +0.00%
BenchmarkUnescapeNone-4 0 0 +0.00%
benchmark old bytes new bytes delta
BenchmarkEscape-4 24800 12288 -50.45%
BenchmarkEscapeNone-4 0 0 +0.00%
BenchmarkUnescape-4 10240 10240 +0.00%
BenchmarkUnescapeNone-4 0 0 +0.00%
Fixes #8697
Change-Id: I208261ed7cbe9b3dee6317851f8c0cf15528bce4
Reviewed-on: https://go-review.googlesource.com/9808
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Delete the colon from RUN: for examples, since it's not there for tests.
Add spaces to line up RUN and PASS: lines.
Before:
=== RUN TestCount
--- PASS: TestCount (0.00s)
=== RUN: ExampleFields
--- PASS: ExampleFields (0.00s)
After:
=== RUN TestCount
--- PASS: TestCount (0.00s)
=== RUN ExampleFields
--- PASS: ExampleFields (0.00s)
Fixes #10594.
Change-Id: I189c80a5d99101ee72d8c9c3a4639c07e640cbd8
Reviewed-on: https://go-review.googlesource.com/9846
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Pipelines are altered by inserting sanitizers if they are not
already present. The code assumes that the first
operand of each command is a function identifier.
This is wrong, since it can also be a method, and results in
a panic with templates such as {{1|print 2|.f 3}}.
Add an extra type assertion to make sure only identifiers
are compared with sanitizers.
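A sketch of the guard using the exported text/template/parse API (firstIdent
is an illustrative helper, not the fix's actual code):

    package main

    import (
        "fmt"
        "text/template/parse"
    )

    // firstIdent reports the identifier at the head of a command, or false
    // when the head is something else, such as the field node for .f.
    func firstIdent(cmd *parse.CommandNode) (string, bool) {
        ident, ok := cmd.Args[0].(*parse.IdentifierNode) // the added assertion
        if !ok {
            return "", false
        }
        return ident.Ident, true
    }

    func main() {
        funcs := map[string]interface{}{"print": fmt.Sprint}
        trees, err := parse.Parse("t", "{{1|print 2|.f 3}}", "{{", "}}", funcs)
        if err != nil {
            panic(err)
        }
        pipe := trees["t"].Root.Nodes[0].(*parse.ActionNode).Pipe
        for _, cmd := range pipe.Cmds {
            fmt.Println(firstIdent(cmd)) // only the middle command is an identifier
        }
    }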
Fixes #10673
Change-Id: I3eb820982675231dbfa970f197abc5ef335ce86b
Reviewed-on: https://go-review.googlesource.com/9801
Reviewed-by: Rob Pike <r@golang.org>
In the Slices section of Effective Go, the os package's File.Read
function is used as an example. Unfortunately the function signature
does not match the function's code in the example, nor the os package's
documentation. This change updates the function signature to match
the os package and the pre-existing function code.
Change-Id: Iae9f30c898d3a1ff8d47558ca104dfb3ff07112c
Reviewed-on: https://go-review.googlesource.com/9845
Reviewed-by: Rob Pike <r@golang.org>
This will make it possible for C++ code to #include the export header
file and see the correct declarations.
The preamble remains the user's responsibility. It would not be
appropriate to wrap the preamble in extern "C", because it might
include header files that work with both C and C++. Putting those
header files in an extern "C" block would break them.
Change-Id: Ifb40879d709d26596d5c80b1307a49f1bd70932a
Reviewed-on: https://go-review.googlesource.com/9850
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
With this patch, gdb seems to be able to correctly backtrace Go
processes on at least linux/{arm,arm64,ppc64}.
Change-Id: Ic40a2a70e71a19c4a92e4655710f38a807b67e9a
Reviewed-on: https://go-review.googlesource.com/9822
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
It was testing the mark bits on what roots pointed at,
but not the remainder of the live heap, because in
CL 2991 I accidentally inverted this check during
refactoring.
The next CL will turn it back off by default again,
but I want one run on the builders with the full
checkmark checks.
Change-Id: Ic166458cea25c0a56e5387fc527cb166ff2e5ada
Reviewed-on: https://go-review.googlesource.com/9824
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently the heap minimum is set to 4MB, which prevents collecting
at every allocation by setting GOGC=0. This adjusts the heap minimum
to 4MB*GOGC/100, re-enabling collection at every allocation.
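The adjusted floor, as a sketch (mirroring the description above rather
than the runtime's exact expression):

    package main

    import "fmt"

    // heapMinimum scales the 4MB floor by GOGC/100, so GOGC=0 gives a zero
    // floor and a collection at every allocation.
    func heapMinimum(gogc int) uint64 {
        return (4 << 20) * uint64(gogc) / 100
    }

    func main() {
        fmt.Println(heapMinimum(100)) // default GOGC: 4MB
        fmt.Println(heapMinimum(0))   // GOGC=0: collect at every allocation
    }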
Fixes #10681
Change-Id: I912d027dac4b14ae535597e8beefa9ac3fb8ad94
Reviewed-on: https://go-review.googlesource.com/9814
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
This was added during testing but is unnecessary.
Thanks to gravis on GitHub for catching it.
See #10574.
Change-Id: I4a8f76d237e67f5a0ea189a0f3cadddbf426778a
Reviewed-on: https://go-review.googlesource.com/9841
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The code already handled high widths but not high precisions.
Also make sure it handles the harder cases of %U.
Fixes #10745.
Change-Id: Ib4d394d49a9941eeeaff866dc59d80483e312a98
Reviewed-on: https://go-review.googlesource.com/9769
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When
using -buildmode=c-archive or c-shared, and
when installing packages that use cgo, and
when those packages export some functions via //export comments,
then
for each such package, install a pkg.h header file that declares the
functions.
This permits C code to #include the header when calling the Go
functions.
This is a little awkward to use when there are multiple packages that
export functions, as you have to "go install" your c-archive/c-shared
object and then pull it out of the package directory. When compiling
your C code you have to -I pkg/$GOOS_$GOARCH. I haven't thought of
any more convenient approach. It's simpler when only the main package
has exported functions.
When using c-shared you currently have to use a _shared suffix in the
-I option; it would be nice to fix that somehow.
Change-Id: I5d8cf08914b7d3c2b194120c77791d2732ffd26e
Reviewed-on: https://go-review.googlesource.com/9798
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Try to provide hints for common errors: *interface
where interface would have been better, and incorrect
capitalization (but don't be more ambitious than that, at
least not today).
Added code and test for cases
ptrInterface.ExistingMethod
ptrInterface.unexportedMethod
ptrInterface.MissingMethod
ptrInterface.withwRongcASEdMethod
interface.withwRongcASEdMethod
ptrStruct.withwRongcASEdMethod
struct.withwRongcASEdMethod
also included tests for related errors to check for
unintentional changes and consistent wording.
Somewhat simplified from previous versions to avoid second-
guessing user errors, yet also biased to point out the most likely
root cause.
Fixes #10700
Change-Id: I16693e93cc8d8ca195e7742a222d640c262105b4
Reviewed-on: https://go-review.googlesource.com/9731
Reviewed-by: Russ Cox <rsc@golang.org>
The current code checks only the consistency (that the functab is
correctly sorted by PC, etc.) of the moduledata object that the runtime
belongs to. Change it to check all of them.
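The shape of the change, sketched with illustrative stand-ins for the
runtime's internal types:

    package main

    type moduledata struct {
        next *moduledata
        // functab, pclntable, etc. elided
    }

    // moduledataverify walks every module, not just the first.
    func moduledataverify(first *moduledata) {
        for datap := first; datap != nil; datap = datap.next {
            moduledataverify1(datap)
        }
    }

    func moduledataverify1(datap *moduledata) {
        // check that the functab is correctly sorted by PC, etc.
    }

    func main() {
        a, b := &moduledata{}, &moduledata{}
        a.next = b
        moduledataverify(a)
    }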
Change-Id: I544a44c5de7445fff87d3cdb4840ff04c5e2bf75
Reviewed-on: https://go-review.googlesource.com/9773
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When AttrByteSize is not present for a type, we can still determine the
size in two more cases: when the type is a Typedef referring to another
type, and when the type is a pointer and we know the default address
size.
entry.go: return after setting an error if the offset is out of range.
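The two recoverable cases, restated as a sketch against the exported
debug/dwarf types (the CL itself changes the package's internal sizing;
typeSize here is only an illustration of the rule):

    package main

    import (
        "debug/dwarf"
        "fmt"
    )

    // typeSize falls back when ByteSize is unknown (-1): a typedef takes
    // the size of the type it refers to, and a pointer is one address wide.
    func typeSize(t dwarf.Type, addrSize int64) int64 {
        if sz := t.Size(); sz >= 0 {
            return sz
        }
        switch tt := t.(type) {
        case *dwarf.TypedefType:
            return typeSize(tt.Type, addrSize)
        case *dwarf.PtrType:
            return addrSize
        }
        return -1 // still unknown
    }

    func main() {
        ptr := &dwarf.PtrType{}
        ptr.ByteSize = -1
        td := &dwarf.TypedefType{}
        td.ByteSize = -1
        td.Type = ptr
        fmt.Println(typeSize(td, 8)) // typedef -> pointer -> address size
    }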
Change-Id: I63a922ca4e4ad2fc9e9be3e5b47f59fae7d0eb5c
Reviewed-on: https://go-review.googlesource.com/9663
Reviewed-by: Austin Clements <austin@google.com>
No code changes, but the test passes here.
And TryBots are happy.
Fixes #8662 maybe
Change-Id: Id37380f72a951c9ad7cf96c0db153c05167e62ed
Reviewed-on: https://go-review.googlesource.com/9778
Reviewed-by: Minux Ma <minux@golang.org>
The -exportheader option tells cgo to generate a header file declaring
exported functions. The header file is only created if there are, in
fact, some exported functions, so it also serves as a signal as to
whether there were any.
In future CLs the go tool will use this option to install header files
for packages that use cgo and export functions.
Change-Id: I5b04357d453a9a8f0e70d37f8f18274cf40d74c9
Reviewed-on: https://go-review.googlesource.com/9796
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Similar for linux/386 with GO386=387.
Change-Id: If8b6f8a0659a1b3e078d87a43fcfe8a38af20308
Reviewed-on: https://go-review.googlesource.com/9821
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change doesn't work perfectly on IPv6-only kernels, including
CLAT-enabled kernels, but works well enough on IPv4-only kernels.
Fixes #10721.
Updates #10729.
Change-Id: I7db0e572e252aa0a9f9f54c8e557955077b72e44
Reviewed-on: https://go-review.googlesource.com/9777
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Also adds missing temporary file deletion.
Change-Id: Ia644b0898022e05d2f5232af38f51d55e40c6fb5
Reviewed-on: https://go-review.googlesource.com/9772
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change-Id: I70dfc2bad13c513c376c7c41058774b40af73dce
Reviewed-on: https://go-review.googlesource.com/9775
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
Makes searching in source code easier.
Change-Id: Ie2e85934d23920ac0bc01d28168bcfbbdc465580
Reviewed-on: https://go-review.googlesource.com/9774
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
Also copy doc comments from Go code to _cgo_export.h.
This is a step toward installing this generated file when using
-buildmode=c-archive or c-shared, so that C code can #include it.
Change-Id: I3a243f7b386b58ec5c5ddb9a246bb9f9eddc5fb8
Reviewed-on: https://go-review.googlesource.com/9790
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Already done for constants and funcs, but I didn't realize that some
global vars were also not in the global list. This fixes
go doc build.Default
Change-Id: I768bde13a400259df3e46dddc9f58c8f0e993c72
Reviewed-on: https://go-review.googlesource.com/9764
Reviewed-by: Andrew Gerrand <adg@golang.org>
Currently, the GC uses a moving average of recent scan work ratios to
estimate the total scan work required by this cycle. This is in turn
used to compute how much scan work should be done by mutators when
they allocate in order to perform all expected scan work by the time
the allocated heap reaches the heap goal.
However, our current scan work estimate can be arbitrarily wrong if
the heap topography changes significantly from one cycle to the
next. For example, in the go1 benchmarks, at the beginning of each
benchmark, the heap is dominated by a 256MB no-scan object, so the GC
learns that the scan density of the heap is very low. In benchmarks
that then rapidly allocate pointer-dense objects, by the time of the
next GC cycle, our estimate of the scan work can be too low by a large
factor. This in turn lets the mutator allocate faster than the GC can
collect, allowing it to get arbitrarily far ahead of the scan work
estimate, which leads to very long GC cycles with very little mutator
assist that can overshoot the heap goal by large margins. This is
particularly easy to demonstrate with BinaryTree17:
$ GODEBUG=gctrace=1 ./go1.test -test.bench BinaryTree17
gc #1 @0.017s 2%: 0+0+0+0+0 ms clock, 0+0+0+0/0/0+0 ms cpu, 4->262->262 MB, 4 MB goal, 1 P
gc #2 @0.026s 3%: 0+0+0+0+0 ms clock, 0+0+0+0/0/0+0 ms cpu, 262->262->262 MB, 524 MB goal, 1 P
testing: warning: no tests to run
PASS
BenchmarkBinaryTree17 gc #3 @1.906s 0%: 0+0+0+0+7 ms clock, 0+0+0+0/0/0+7 ms cpu, 325->325->287 MB, 325 MB goal, 1 P (forced)
gc #4 @12.203s 20%: 0+0+0+10067+10 ms clock, 0+0+0+0/2523/852+10 ms cpu, 430->2092->1950 MB, 574 MB goal, 1 P
1 9150447353 ns/op
Change this estimate to instead use the *current* scannable heap
size. This has the advantage of being based solely on the current
state of the heap, not on past densities or reachable heap sizes, so
it isn't susceptible to falling behind during these sorts of phase
changes. This is strictly an over-estimate, but it's better to
over-estimate and get more assist than necessary than it is to
under-estimate and potentially spiral out of control. Experiments with
scaling this estimate back showed no obvious benefit for mutator
utilization, heap size, or assist time.
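The contrast between the two estimators, as a sketch with illustrative
field names:

    package main

    import "fmt"

    type gcController struct {
        scanRatio     float64 // scan work per heap byte, learned from past cycles
        heapReachable uint64  // last cycle's reachable heap size
        heapScannable uint64  // current scannable bytes in the allocated heap
    }

    // old: extrapolate from a density observed in earlier cycles; wrong by
    // a large factor if the heap's topography has changed since.
    func (c *gcController) expectedScanWorkOld() uint64 {
        return uint64(c.scanRatio * float64(c.heapReachable))
    }

    // new: use the current scannable heap size, a deliberate over-estimate
    // that cannot fall behind a phase change.
    func (c *gcController) expectedScanWorkNew() uint64 {
        return c.heapScannable
    }

    func main() {
        c := &gcController{scanRatio: 0.01, heapReachable: 300 << 20, heapScannable: 200 << 20}
        fmt.Println(c.expectedScanWorkOld(), c.expectedScanWorkNew())
    }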
This new estimate has little effect for most benchmarks, including
most go1 benchmarks, x/benchmarks, and the 6g benchmark. It has a huge
effect for benchmarks that triggered the bad pacer behavior:
name old mean new mean delta
BinaryTree17 10.0s × (1.00,1.00) 3.5s × (0.98,1.01) -64.93% (p=0.000)
Fannkuch11 2.74s × (1.00,1.01) 2.65s × (1.00,1.00) -3.52% (p=0.000)
FmtFprintfEmpty 56.4ns × (0.99,1.00) 57.8ns × (1.00,1.01) +2.43% (p=0.000)
FmtFprintfString 187ns × (0.99,1.00) 185ns × (0.99,1.01) -1.19% (p=0.010)
FmtFprintfInt 184ns × (1.00,1.00) 183ns × (1.00,1.00) (no variance)
FmtFprintfIntInt 321ns × (1.00,1.00) 315ns × (1.00,1.00) -1.80% (p=0.000)
FmtFprintfPrefixedInt 266ns × (1.00,1.00) 263ns × (1.00,1.00) -1.22% (p=0.000)
FmtFprintfFloat 353ns × (1.00,1.00) 353ns × (1.00,1.00) -0.13% (p=0.035)
FmtManyArgs 1.21µs × (1.00,1.00) 1.19µs × (1.00,1.00) -1.33% (p=0.000)
GobDecode 9.69ms × (1.00,1.00) 9.59ms × (1.00,1.00) -1.07% (p=0.000)
GobEncode 7.89ms × (0.99,1.01) 7.74ms × (1.00,1.00) -1.92% (p=0.000)
Gzip 391ms × (1.00,1.00) 392ms × (1.00,1.00) ~ (p=0.522)
Gunzip 97.1ms × (1.00,1.00) 97.0ms × (1.00,1.00) -0.10% (p=0.000)
HTTPClientServer 55.7µs × (0.99,1.01) 56.7µs × (0.99,1.01) +1.81% (p=0.001)
JSONEncode 19.1ms × (1.00,1.00) 19.0ms × (1.00,1.00) -0.85% (p=0.000)
JSONDecode 66.8ms × (1.00,1.00) 66.9ms × (1.00,1.00) ~ (p=0.288)
Mandelbrot200 4.13ms × (1.00,1.00) 4.12ms × (1.00,1.00) -0.08% (p=0.000)
GoParse 3.97ms × (1.00,1.01) 4.01ms × (1.00,1.00) +0.99% (p=0.000)
RegexpMatchEasy0_32 114ns × (1.00,1.00) 115ns × (0.99,1.00) ~ (p=0.070)
RegexpMatchEasy0_1K 376ns × (1.00,1.00) 376ns × (1.00,1.00) ~ (p=0.900)
RegexpMatchEasy1_32 94.9ns × (1.00,1.00) 96.3ns × (1.00,1.01) +1.53% (p=0.001)
RegexpMatchEasy1_1K 568ns × (1.00,1.00) 567ns × (1.00,1.00) -0.22% (p=0.001)
RegexpMatchMedium_32 159ns × (1.00,1.00) 159ns × (1.00,1.00) ~ (p=0.178)
RegexpMatchMedium_1K 46.4µs × (1.00,1.00) 46.6µs × (1.00,1.00) +0.29% (p=0.000)
RegexpMatchHard_32 2.37µs × (1.00,1.00) 2.37µs × (1.00,1.00) ~ (p=0.722)
RegexpMatchHard_1K 71.1µs × (1.00,1.00) 71.2µs × (1.00,1.00) ~ (p=0.229)
Revcomp 565ms × (1.00,1.00) 562ms × (1.00,1.00) -0.52% (p=0.000)
Template 81.0ms × (1.00,1.00) 80.2ms × (1.00,1.00) -0.97% (p=0.000)
TimeParse 380ns × (1.00,1.00) 380ns × (1.00,1.00) ~ (p=0.148)
TimeFormat 405ns × (0.99,1.00) 385ns × (0.99,1.00) -5.00% (p=0.000)
Change-Id: I11274158bf3affaf62662e02de7af12d5fb789e4
Reviewed-on: https://go-review.googlesource.com/9696
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
This tracks the number of scannable bytes in the allocated heap. That
is, bytes that the garbage collector must scan before reaching the
last pointer field in each object.
This will be used to compute a more robust estimate of the GC scan
work.
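The bookkeeping amounts to a running total adjusted at allocation and
sweep time; a sketch with illustrative names:

    package main

    import "fmt"

    // heapScan counts, per object, the bytes up to and including the last
    // pointer field: what the GC must scan.
    var heapScan uint64

    func noteAlloc(scannableBytes uint64) { heapScan += scannableBytes }
    func noteFree(scannableBytes uint64)  { heapScan -= scannableBytes }

    func main() {
        noteAlloc(64) // object whose last pointer field ends at byte 64
        noteAlloc(0)  // no-scan object contributes nothing
        fmt.Println(heapScan)
        noteFree(64)
        fmt.Println(heapScan)
    }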
Change-Id: I1eecd45ef9cdd65b69d2afb5db5da885c80086bb
Reviewed-on: https://go-review.googlesource.com/9695
Reviewed-by: Russ Cox <rsc@golang.org>
The garbage collector predicts how much "scan work" must be done in a
cycle to determine how much work should be done by mutators when they
allocate. Most code doesn't care what units the scan work is in: it
simply knows that a certain amount of scan work has to be done in the
cycle. Currently, the GC uses the number of pointer slots scanned as
the scan work on the theory that this is the bulk of the time spent in
the garbage collector and hence reflects real CPU resource usage.
However, this metric is difficult to estimate at the beginning of a
cycle.
Switch to counting the total number of bytes scanned, including both
pointer and scalar slots. This is still less than the total marked
heap since it omits no-scan objects and no-scan tails of objects. This
metric may not reflect absolute performance as well as the count of
scanned pointer slots (though it still takes time to scan scalar
fields), but it will be much easier to estimate robustly, which is
more important.
Change-Id: Ie3a5eeeb0384a1ca566f61b2f11e9ff3a75ca121
Reviewed-on: https://go-review.googlesource.com/9694
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, we only flush the per-P gcWork caches in gcMark, at the
beginning of mark termination. This is necessary to ensure that no
work is held up in these caches.
However, this flush happens after we update the GC controller state,
which depends on statistics about marked heap size and scan work that
are only updated by this flush. Hence, the controller is missing the
bulk of heap marking and scan work. This bug was introduced in commit
1b4025f, which added the per-P gcWork caches.
Fix this by flushing these caches before we update the GC controller
state. We continue to flush them at the beginning of mark termination
as well to be robust in case any write barriers happened between the
previous flush and entering mark termination, but this should be a
no-op.
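The ordering fix, reduced to a sketch with simplified types: flush every
per-P cache into the global totals before the controller reads them.

    package main

    import "fmt"

    type p struct{ cachedScanWork int64 }

    var (
        allp          []*p
        totalScanWork int64
    )

    // flushAllCaches disposes each per-P cache into the global statistics.
    func flushAllCaches() {
        for _, pp := range allp {
            totalScanWork += pp.cachedScanWork
            pp.cachedScanWork = 0
        }
    }

    func endCycle() {
        flushAllCaches() // the fix: flush first...
        fmt.Println("controller sees scan work:", totalScanWork) // ...then update
    }

    func main() {
        allp = []*p{{cachedScanWork: 100}, {cachedScanWork: 250}}
        endCycle()
    }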
Change-Id: I8f0f91024df967ebf0c616d1c4f0c339c304ebaa
Reviewed-on: https://go-review.googlesource.com/9646
Reviewed-by: Russ Cox <rsc@golang.org>
Ramp up the delay on subsequent attempts. Fast builders have the same delay.
Not a perfect fix, but it should make things better. And it's easy.
Fixes #9903 maybe
Fixes #10680 maybe
Change-Id: I967380c2cb8196e6da9a71116961229d37b36335
Reviewed-on: https://go-review.googlesource.com/9795
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Android has (had?) its own local DNS resolver daemon, also my fault:
007e987fee
And you access that via libc, not DNS.
Fixes #10714
Change-Id: Iaff752872ce19bb5c7771ab048fd50e3f72cb73c
Reviewed-on: https://go-review.googlesource.com/9793
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>