Fixes #49315
Change-Id: I0887bad1059b25ae0749bfa1ed6ddccbecca7951
Reviewed-on: https://go-review.googlesource.com/c/go/+/361074
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
Change-Id: I5ebfc6a89323cc086ea0e0b619370dc45da1f3a3
Reviewed-on: https://go-review.googlesource.com/c/go/+/345437
Reviewed-by: Damien Neil <dneil@google.com>
Trust: Damien Neil <dneil@google.com>
Trust: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Damien Neil <dneil@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Currently, newstack loads gp.stackguard0 twice to check for different
poison values. The race window between these two checks can lead to
unintentional stack doubling, and ultimately to stack overflows.
Specifically, newstack checks if stackguard0 is stackPreempt first,
then it checks if it's stackForceMove. If stackguard0 is set to
stackForceMove on entry, but changes to stackPreempt between the two
checks, newstack will incorrectly double the stack allocation.
Fix this by loading stackguard0 exactly once and then checking it
against different poison values.
The effect of this is relatively minor because stackForceMove is only
used by a small number of runtime tests. I found this because
mayMoreStackMove uses stackForceMove aggressively, which makes this
failure mode much more likely.
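For illustration, a minimal sketch of the pattern (gp, stackPreempt,
and stackForceMove stand in for the real runtime internals):

    // Buggy: two separate loads of gp.stackguard0 race with concurrent
    // writers, so the two checks can observe two different poison values.
    //   preempt := gp.stackguard0 == stackPreempt
    //   forceMove := gp.stackguard0 == stackForceMove
    //
    // Fixed: snapshot the guard once and compare the snapshot.
    stackguard0 := gp.stackguard0
    preempt := stackguard0 == stackPreempt
    forceMove := stackguard0 == stackForceMove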
Change-Id: I1f8b6a6744e45533580a3f45d7030ec2ec65a5fb
Reviewed-on: https://go-review.googlesource.com/c/go/+/361775
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
For #30432
Change-Id: I84f208705483018559b425b3669e724e7d5627ee
Reviewed-on: https://go-review.googlesource.com/c/go/+/361814
Trust: Bryan C. Mills <bcmills@google.com>
Run-TryBot: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This does not change any code, just reformats the comments in
the asm code.
Change-Id: I70fbfa77db164898d25b59b589d3e85b8399b0fc
Reviewed-on: https://go-review.googlesource.com/c/go/+/361694
Reviewed-by: Cherry Mui <cherryyz@google.com>
Trust: Lynn Boger <laboger@linux.vnet.ibm.com>
The amd64/arm64 relocation processing is used as a template
and updated for ppc64le.
This requires updating the TOC relocation handling code to
support Linux-style TOC relocations too (note: AIX uses
TOC-indirect accesses).
Notably, the shared flag of Go functions is used as a proxy
for the local entry point offset encoded in ELF objects. Functions
in Go ppc64le shared objects always[1] insert two instructions to
regenerate the TOC pointer.
[1] except for a couple of special runtime functions; see preprocess
in obj9.go for the specific details of this behavior.
Change-Id: I3646e6dc8a0a0ffe712771a976983315eae5c418
Reviewed-on: https://go-review.googlesource.com/c/go/+/352829
Run-TryBot: Paul Murphy <murp@ibm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Lynn Boger <laboger@linux.vnet.ibm.com>
This test appears to deadlock frequently on the only netbsd-arm64
builder we have (netbsd-arm64-bsiegert). Skip the test to provide
more useful test coverage for other failures.
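The skip itself is the usual pattern, something like (exact wording
assumed):

    if runtime.GOOS == "netbsd" && runtime.GOARCH == "arm64" {
        t.Skip("skipping: test frequently deadlocks on netbsd-arm64 (golang.org/issue/49382)")
    }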
For #49382
Change-Id: I3be32f58ce1e396f7c69163e70cf58f779f57ac6
Reviewed-on: https://go-review.googlesource.com/c/go/+/361615
Trust: Bryan C. Mills <bcmills@google.com>
Trust: Benny Siegert <bsiegert@gmail.com>
Run-TryBot: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Benny Siegert <bsiegert@gmail.com>
consistentHeapStats is updated during a stack allocation, so a stack
growth during an acquire or release could cause another acquire to
happen before the operation completes fully. This may lead to an invalid
sequence number.
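A sketch of the shape of the fix, assuming the critical sections are
simply forbidden from growing the stack (the usual tool for that being
//go:nosplit; names below mirror the description, not the exact code):

    // With nosplit, no stack growth (and hence no re-entrant acquire)
    // can occur while the sequence number is mid-update.
    //go:nosplit
    func (m *consistentHeapStats) acquire() *heapStatsDelta {
        // ... bump the sequence number and return the active delta ...
    }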
Fixes #49395.
Change-Id: I41ce3393dff80201793e053d4d6394d7b211a5b7
Reviewed-on: https://go-review.googlesource.com/c/go/+/361158
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Currently the scavenger is paced to 1% of 1 CPU because it had
scalability problems. As of the last few CLs, that should be largely
resolved. This change resolves the TODO and paces the scavenger
according to 1% of overall CPU time.
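Back-of-the-envelope, the new duty cycle becomes a function of
GOMAXPROCS (illustrative arithmetic, not the runtime's actual code):

    // Target: the scavenger uses 1% of total CPU. Running on a single
    // thread, that is a duty cycle of min(1, 0.01*GOMAXPROCS) of wall
    // time.
    f := 0.01 * float64(runtime.GOMAXPROCS(0))
    if f > 1 {
        f = 1
    }
    // work is how long the scavenger just ran; sleep so that
    // work/(work+sleep) == f.
    sleep := time.Duration(float64(work) * (1 - f) / f)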
This change is made separately to allow it to be more easily rolled
back.
Change-Id: I1ab4de24ba41c564960701634a128a813c55ece9
Reviewed-on: https://go-review.googlesource.com/c/go/+/358675
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Currently the scavenge rate is determined by a bunch of ad-hoc
mechanisms. Just use a controller instead, now that we have one.
To facilitate this, the scavenger now attempts to scavenge for at least
1 ms at a time, because with anything less the timer system is too
imprecise to give useful feedback to the controller. Also increase the
amount that we
scavenge at once, to try to reduce the overheads involved (at the
expense of a little bit of latency).
This change also modifies the controller to accept an update period,
because it's useful to allow that to be variable.
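For reference, the general shape of such a proportional-integral
controller (gains, clamping, and naming are illustrative; the runtime's
controller differs in detail):

    type piController struct {
        kp, ti      float64 // proportional gain, integral time
        errIntegral float64 // accumulated, pre-scaled error
        min, max    float64 // output clamp
    }

    // next computes the next control output given the latest measured
    // input, the desired setpoint, and the length of the update period.
    func (c *piController) next(input, setpoint, period float64) float64 {
        err := setpoint - input
        c.errIntegral += c.kp * period / c.ti * err
        out := c.kp*err + c.errIntegral
        if out < c.min {
            out = c.min
        } else if out > c.max {
            out = c.max
        }
        return out
    }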
Change-Id: I8a15b2355d0a7c6cbac68c957082d5819618f7d7
Reviewed-on: https://go-review.googlesource.com/c/go/+/353975
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
This change adds a new debug flag that makes the runtime map pages
PROT_NONE in sysUnused on Linux, in addition to the usual madvise calls.
This behavior mimics the behavior of decommit on Windows, and is helpful
in debugging the scavenger. sysUsed is also updated to re-map the pages
as PROT_READ|PROT_WRITE, mimicking Windows' explicit commit behavior.
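Conceptually, sysUnused on Linux gains something like the following
(simplified sketch; harddecommit is the GODEBUG name):

    madvise(v, n, _MADV_DONTNEED) // the usual path
    if debug.harddecommit > 0 {
        // Mimic Windows decommit: any use faults until sysUsed re-maps
        // the range PROT_READ|PROT_WRITE.
        mmap(v, n, _PROT_NONE, _MAP_ANON|_MAP_FIXED|_MAP_PRIVATE, -1, 0)
    }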
Change-Id: Iaac5fcd0e6920bd1d0e753dd4e7f0c0b128fe842
Reviewed-on: https://go-review.googlesource.com/c/go/+/356612
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
This change modifies the scavenger to no longer hold the heap lock while
actively scavenging pages. To achieve this, the change also:
* Reverses the locking behavior of the (*pageAlloc).scavenge API, to
only acquire the heap lock when necessary.
* Introduces a new lock on the scavenger-related fields in a pageAlloc
so that access to those fields doesn't require the heap lock. There
are a few places in the scavenge path, notably reservation, that
require synchronization. The heap lock is far too heavy-handed for
this case.
* Changes the scavenger to mark pages that are actively being scavenged
as allocated, and to "free" them back to the page allocator the usual
way.
* Lifts the heap-growth scavenging code out of mheap.grow, where the
heap lock is held, and into allocSpan, just after the lock is
released. Releasing the lock during mheap.grow is not feasible if we
want to ensure that allocation always makes progress (post-growth,
another allocator could come in and take all that space, forcing the
goroutine that just grew the heap to do so again).
This change means that the scavenger now must do more work for each
scavenge, but it is also much more scalable. Although in theory it's not
ideal that it always takes the locked paths in the page allocator, it
takes advantage of some properties of the allocator:
* Most of the time, the scavenger will be working with one page at a
time. The page allocator's locked path is optimized for this case.
* On the allocation path, it doesn't need to do the find operation at
all; it can go straight to setting bits for the range and updating the
summary structure.
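Put together, the scavenge path now looks roughly like this
(hypothetical names, heavily simplified):

    addr := p.scavengeReserve()               // scavenge lock, not heap lock
    base, npages := p.allocForScavenge(addr)  // heap lock, briefly
    sysUnused(unsafe.Pointer(base), npages*pageSize) // no locks held
    p.free(base, npages)                      // return pages the usual way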
Change-Id: Ie941d5e7c05dcc96476795c63fef74bcafc2a0f1
Reviewed-on: https://go-review.googlesource.com/c/go/+/353974
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
CL 287654 converted the syscall package on openbsd/386 to use libc.
However, the mksyscall.pl invocation wasn't adjusted. Do so now to use
syscall_openbsd_libc.go like the other libc-based openbsd ports.
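The adjusted invocation presumably mirrors the other libc-based ports,
i.e. something like (flag spelling assumed from openbsd/amd64):

    mksyscall="./mksyscall.pl -l32 -openbsd -libc"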
Change-Id: I48a7bd6ce4c25eca5222f560ed584e412b466111
Reviewed-on: https://go-review.googlesource.com/c/go/+/361481
Trust: Tobias Klauser <tobias.klauser@gmail.com>
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
The first stack trace in #49361 shows that traceBuf must precede fin in
lockrank ordering, since traceBuf is acquired in StartTrace(), which
eventually leads to getting fin in queueFinalizer(). It is fine to move
traceBuf above fin, since there are no other conflicting dependencies.
The second stack trace shows that there is an edge between reflectOffs
and fin, since reflectOffs is acquired in addReflectOff, and map
operations can lead to an allocation that eventually causes fin to be
acquired in queueFinalizer().
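In the style of the runtime's lock-rank partial order, the fix amounts
to listing both ranks among those that may be held when fin is acquired
(entry illustrative, not the full list):

    lockRankFin: {..., lockRankTraceBuf, lockRankReflectOffs},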
Fixes #49361
Change-Id: I8e857ef9ecdff37fdd229e4dba22e15bc71d4ba5
Reviewed-on: https://go-review.googlesource.com/c/go/+/361407
Trust: Dan Scales <danscales@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
The argument liveness tests expect outputs where a dead stack slot
has a poisoned value. If the test function is preempted at the
prologue, it will go through the morestack code path, which spills
all the argument registers. Mark the test functions nosplit to avoid that.
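A sketch of the resulting annotation on a test function (shape
assumed):

    //go:registerparams
    //go:nosplit
    func live(x, y int) (int, int) { // nosplit: never enters morestack,
        return x, y // so dead argument slots keep their poison values
    }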
Should fix #49354.
Change-Id: I3b13e72e925748687a53c494bfaa70f07d9496fa
Reviewed-on: https://go-review.googlesource.com/c/go/+/361211
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
CL 360057 fixed a missing update of the source type in storeArgOrLoad.
However, we should only update the type when processing a struct/array.
If we update the type right before calling storeArgOrLoad, we may
generate a value with an invalid type, e.g., an OpStructSelect with a
non-struct type.
Fixes #49378
Change-Id: Ib7e10f72f818880f550aae5c9f653db463ce29b0
Reviewed-on: https://go-review.googlesource.com/c/go/+/361594
Trust: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Currently, we rely on a "crawling" step during export to identify
function and method bodies that need to be exported or re-exported so
we can trim out unnecessary ones and reduce build artifact sizes. To
catch cases where we expect a function to be inlinable but we failed
to export its body, we made this condition a fatal compiler error.
However, with generics, it's much harder to perfectly identify all
function bodies that need to be exported; and several attempts at
tweaking the algorithm have resulted in still having failure cases.
So for now, this CL changes a missing inline body into a graceful
failure instead.
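A sketch of the downgrade, with hypothetical names:

    if fn.Inl != nil && fn.Inl.Body == nil {
        // Previously: base.Fatalf("missing inline body for %v", fn)
        // Now: gracefully treat fn as simply not inlinable.
        return false
    }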
Change-Id: I04b0872d0dcaae9c3de473e92ce584e4ec6fd782
Reviewed-on: https://go-review.googlesource.com/c/go/+/361403
Trust: Matthew Dempsky <mdempsky@google.com>
Trust: Dan Scales <danscales@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Dan Scales <danscales@google.com>
The current code, introduced in CL 2422, mixes K bits of entropy with
the private key and message digest to generate the signature nonce,
where K is half the bit size of the curve. While the ECDLP complexity
(and hence security level) of a curve is half its bit size, the birthday
bound on K bits is only K/2. For P-224, this means we should expect a
collision after 2^56 signatures over the same message with the same key.
A collision, which is unlikely, would still not be a major practical
concern, because the scheme would fall back to a secure deterministic
signature scheme, and simply leak the fact that the two signed messages
are the same (which is presumably already public).
Still, we can simplify the code and remove this possibility entirely by
always drawing 256 bits of entropy.
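Roughly (previously the length was half the curve's bit size, capped at
32 bytes; now it is a constant):

    // Draw a fixed 256 bits of entropy to mix into the nonce
    // derivation, independent of the curve size.
    entropy := make([]byte, 32)
    if _, err := io.ReadFull(rand, entropy); err != nil {
        return nil, err
    }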
Change-Id: I58097bd3cfc9283503e38751c924c53d271af92b
Reviewed-on: https://go-review.googlesource.com/c/go/+/352530
Trust: Filippo Valsorda <filippo@golang.org>
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Roland Shoemaker <roland@golang.org>
This adds a maymorestack hook that moves the stack at every
cooperative preemption point.
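A sketch of what the hook does, assuming simplified internals:

    // Called (when enabled) from the prologue of every function that
    // might call morestack.
    //go:nosplit
    func mayMoreStackMove() {
        gp := getg()
        // Ask for a stack move at the next stack check by poisoning the
        // guard, taking care not to clobber a pending preemption.
        if gp.stackguard0 != stackPreempt {
            gp.stackguard0 = stackForceMove
        }
    }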
For #48297.
Change-Id: Ic15f9bcbc163345e6422586302d57fda4744caec
Reviewed-on: https://go-review.googlesource.com/c/go/+/359797
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: David Chase <drchase@google.com>
This adds a maymorestack hook that forces a preemption at every
possible cooperative preemption point. This would have helped us catch
several recent preemption-related bugs earlier, including #47302,
#47304, and #47441.
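Enabling the hook looks something like this (flag per the maymorestack
CL later in this log):

    go test -gcflags=all=-d=maymorestack=runtime.mayMoreStackPreempt ./...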
For #48297.
Change-Id: Ib82c973589c8a7223900e1842913b8591938fb9f
Reviewed-on: https://go-review.googlesource.com/c/go/+/359796
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: David Chase <drchase@google.com>
This adds a debugging hook for optionally calling a "maymorestack"
function in the prologue of any function that might call morestack
(whether it does at run time or not). The maymorestack function will
let us improve lock checking and add debugging modes that stress
function preemption and stack growth.
Passes toolstash-check -all (except on js/wasm, where toolstash
appears to be broken)
Fixes #48297.
Change-Id: I27197947482b329af75dafb9971fc0d3a52eaf31
Reviewed-on: https://go-review.googlesource.com/c/go/+/359795
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
This moves and slightly generalizes the -d debug flag parser from
cmd/compile/internal/base to cmd/internal/objabi so that we can use
the same debug flag syntax in other tools.
This makes a few minor tweaks to implementation details. The flag
itself is now just a flag.Value that gets constructed explicitly,
rather than at init time, and we've cleaned up the implementation a
little (e.g., using a map instead of a linear search of a slice). The
help text is now automatically alphabetized. Rather than describing
the values of some flags in the help text footer, we simply include
them in each flag's help text and make sure multi-line help text renders
sensibly.
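Construction now looks something like this (names assumed from the
description above):

    // Built explicitly from a struct whose fields carry help text,
    // rather than registered as a side effect at init time.
    flag.Var(objabi.NewDebugFlag(&base.Debug, base.DebugSSA), "d",
        "enable debugging settings; try -d help")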
For #48297.
Change-Id: Id373ee3b767e456be483fb28c110d025149be532
Reviewed-on: https://go-review.googlesource.com/c/go/+/359956
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
A code comment on amd64 for windows and plan9 contained a snippet for
splitting apart the sec and nsec components of a Unix timestamp, with
the produced assembly below it, which was then cleaned up by hand. When
arm64 was ported, that code snippet in the comment went through the
compiler to produce some code that was then pasted and cleaned up.
Unfortunately, the comment had a typo in it, containing 8 zeros instead
of 9.
This resulted in the constant used in the assembly being wrong, spotted
by @bufflig's eagle eyes. So, this commit fixes the comment on all three
platforms, and the assembly on windows/arm64.
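For the record, the corrected snippet (a second holds 1e9 nanoseconds,
nine zeros):

    // Correct: both constants have nine zeros.
    sec := ns / 1000000000
    nsec := ns % 1000000000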
Fixes #48072.
Change-Id: I786fe89147328b0d25544f52c927ddfdb9f6f1cf
Reviewed-on: https://go-review.googlesource.com/c/go/+/361474
Trust: Jason A. Donenfeld <Jason@zx2c4.com>
Run-TryBot: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Patrik Nyblom <pnyb@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
TestHammerStoreLoad involves a stress test of StorePointer, which has a
write barrier. The "pointer" being written is not a real pointer value,
which is generally fine (though not *really* safe) on 64-bit systems,
where such a value never points to an actual object.
On 32-bit systems, however, that is much more likely. Because I can't
figure out how to rewrite the test so that it still tests the same
conditions while using real pointers, just disable the GC during the
test, and make sure there isn't a cycle currently in progress.
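The mitigation is the standard pattern (sketch):

    // Keep the GC disabled for the duration of the test...
    defer debug.SetGCPercent(debug.SetGCPercent(-1))
    // ...and flush out any cycle that is already in progress.
    runtime.GC()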
Fixes #49362.
Change-Id: If81883fedf06568132e6484f40c820aa69027a9c
Reviewed-on: https://go-review.googlesource.com/c/go/+/361455
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
We were allocating a new RawSockaddrAny on every UDP read/write.
We can reuse a single one instead.
This reduces the number of allocs for UDP read/write on windows to zero.
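The shape of the change, roughly (field placement assumed):

    // One RawSockaddrAny per pending operation, reused across calls,
    // rather than a fresh allocation in every ReadFrom/WriteTo.
    type operation struct {
        // ...
        rsa *syscall.RawSockaddrAny
    }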
Co-authored-by: David Crawshaw <crawshaw@golang.org>
Change-Id: I2f05c974e2e7b4f67937ae4e1c99583e81d140af
Reviewed-on: https://go-review.googlesource.com/c/go/+/361404
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
...instead of the structs themselves.
Escape analysis can handle this,
and it'll avoid a bunch of large struct copies.
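An illustrative example of the pattern:

    // Passing the pointer avoids copying a large struct at each call;
    // escape analysis still keeps the value on the stack when the
    // pointer does not escape.
    func process(h *header) int { return h.n } // was: func process(h header) int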
Change-Id: Ia9c6064ed32a4c26d5a96dae2ed7d7ece6d38704
Reviewed-on: https://go-review.googlesource.com/c/go/+/361264
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The new GC pacer has a bug where the hard goal isn't set in relation to
the original heap goal, but rather to the one already extrapolated for
overshoot.
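Conceptually (names illustrative):

    // Buggy: hardGoal := extrapolatedGoal * maxOvershoot
    // Fixed: hardGoal := originalGoal * maxOvershoot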
In practice, I have never once seen this case arise because the
extrapolated goal used for overshoot is conservative. There is no test
because writing one for this case is impossible in the idealized model
the pacer tests create. It is possible to simulate, but that will take more work.
For now, just leave a TODO.
Change-Id: I24ff710016cd8100fad54f71b2c8cdea0f7dfa79
Reviewed-on: https://go-review.googlesource.com/c/go/+/361435
Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
These error messages contain the expected shortened revision in braces,
but don't explicitly tell the user that this is the expected one.
Unify them with the "does not match version-control timestamp" error,
which does the same.
Change-Id: I8e07df7bd776fd1b39c4c90c4788cb3d626ea00b
GitHub-Last-Rev: d14681ad08
GitHub-Pull-Request: golang/go#42578
Reviewed-on: https://go-review.googlesource.com/c/go/+/269877
Trust: Bryan C. Mills <bcmills@google.com>
Trust: Jay Conrod <jayconrod@google.com>
Reviewed-by: Jay Conrod <jayconrod@google.com>
Fixes #49317
Change-Id: I4038fd4c1d845d54ecbbf82bf73060db1b44c9bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/361095
Trust: Bryan C. Mills <bcmills@google.com>
Run-TryBot: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The help text for the go test -shuffle flag is not indented like that
of the other flags. This patch brings it into alignment.
Fixes #49357
Change-Id: I3f18dc7cd84d5f23099262acf6e2fedccb11379c
Reviewed-on: https://go-review.googlesource.com/c/go/+/361395
Reviewed-by: Bryan C. Mills <bcmills@google.com>
Trust: Bryan C. Mills <bcmills@google.com>
Trust: Michael Matloob <matloob@golang.org>
Run-TryBot: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
In modules that specify 'go 1.17' or higher, the go.mod file
explicitly requires modules for all packages transitively imported by
the main module. Users tend to use 'go mod download' to prepare for
testing the main module itself, so we should only download those
relevant modules.
In 'go 1.16' and earlier modules, we continue to download all modules
in the module graph (because we cannot in general tell which ones are
relevant without loading the full package import graph).
'go mod download all' continues to download every module in
'go list all', as it did before.
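In short, for a module that declares go 1.17 or later:

    go mod download        # only modules needed by the main module's packages
    go mod download all    # unchanged: every module in 'go list all'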
Fixes #44435
Change-Id: I3f286c0e2549d6688b3832ff116e6cd77a19401c
Reviewed-on: https://go-review.googlesource.com/c/go/+/357310
Trust: Bryan C. Mills <bcmills@google.com>
Run-TryBot: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
The behavior of all Curve methods and package functions when provided an
off-curve point is undefined, except for IsOnCurve, which should
really always return false, not panic.
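A minimal check of the contract:

    package main

    import (
        "crypto/elliptic"
        "fmt"
        "math/big"
    )

    func main() {
        // (1, 1) is not on P-256: IsOnCurve must report false, not panic.
        fmt.Println(elliptic.P256().IsOnCurve(big.NewInt(1), big.NewInt(1)))
    }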
Change-Id: I52f65df25c5af0314fef2c63d0778db72c0f1313
Reviewed-on: https://go-review.googlesource.com/c/go/+/361402
Trust: Filippo Valsorda <filippo@golang.org>
Run-TryBot: Filippo Valsorda <filippo@golang.org>
Reviewed-by: Roland Shoemaker <roland@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Adjust TypeDefn(), which is used by reportTypeLoop(), to work for nodes
with no Ntype set (which is all nodes in -G=3 mode). Normally,
reportTypeLoop() would not be called, because the types2 typechecker
would have already caught the loop. This makes it possible to report an
unusual type loop involving type params that is not caught by the
types2 type checker.
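For orientation, a garden-variety type loop looks like the following
(types2 does catch simple cases such as this one; the change concerns
rarer variants involving type parameters):

    type S[P any] struct{ next S[P] } // invalid: infinite-size recursive type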
Updates #48962
Change-Id: I55edee46026eece2e8647c5b5b4d8dfb39eeb5f8
Reviewed-on: https://go-review.googlesource.com/c/go/+/361398
Trust: Dan Scales <danscales@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Currently allocToCache ham-handedly calls pageAlloc.allocRange on the
full size of the cache. This is fine as long as scavenged bits are never
set when alloc bits are set. That is true right now, but won't be true
as of the next CL.
This change makes allocToCache set the bits more carefully. Note that on
the allocToCache path we were also erroneously calling update *twice*,
the first time with contig=true! Luckily there's no correctness error
there today, because the page cache is small enough that the contig=true
logic doesn't matter, but this should at least improve allocation
performance a little bit.
Change-Id: I3ff9590ac86d251e4c5063cfd633570238b0cdbf
Reviewed-on: https://go-review.googlesource.com/c/go/+/356609
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
The first step toward acquiring the heap lock less frequently in the
scavenger.
Change-Id: Idc69fd8602be2c83268c155951230d60e20b42fe
Reviewed-on: https://go-review.googlesource.com/c/go/+/353973
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
This doesn't handle every possible scenario,
but improves the one we can control. For example,
if the worker panics for some reason, we have no
way of knowing whether the panic occurred in an
expected way (while executing the fuzz target) or
due to an internal error in the worker. So any
panic will still be treated as a crash.
However, if it fails due to some internal bug that
we know how to catch, then the error should be
reported to the user without a new crasher being
written to testdata.
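A sketch of the distinction, with hypothetical names:

    if err := worker.runFuzzTarget(); err != nil {
        var ie *internalError // a known, catchable internal bug
        if errors.As(err, &ie) {
            // Report to the user; do not write a crasher to testdata.
            return fmt.Errorf("internal error in fuzzing worker: %w", ie)
        }
        // Anything else is still treated as a crash, as before.
    }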
This is very difficult to test. The reason an internal error would
occur is that something went very wrong and we have a bug in our own
code (which is why these were previously panics). So simulating a
problem like this in a test is not really feasible.
Fixes #48804
Change-Id: I334618f84eb4a994a8d17419551a510b1fdef071
Reviewed-on: https://go-review.googlesource.com/c/go/+/361115
Trust: Katie Hockman <katie@golang.org>
Run-TryBot: Katie Hockman <katie@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Roland Shoemaker <roland@golang.org>
Change-Id: Ifb90d5482cb0cedee6cb4d6297853ac7913d14ee
Reviewed-on: https://go-review.googlesource.com/c/go/+/358674
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
This change implements the GC pacer redesign outlined in #44167 and the
accompanying design document, behind a GOEXPERIMENT flag that is on by
default.
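The old pacer remains available by turning the experiment off
(experiment name as in this series):

    GOEXPERIMENT=nopacerredesign go build ./...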
In addition to adding the new pacer, this CL also includes code to track
and account for stack and globals scan work in the pacer and in the
assist credit system.
The new pacer also deviates slightly from the document in that it
increases the bound on the minimum trigger ratio from 0.6 (scaled by
GOGC) to 0.7. The logic behind this change is that the new pacer much
more consistently hits the goal (good!), leading to slightly less
frequent GC cycles, but _longer_ ones (in this case, bad!). It turns out
that the cost of having the GC on hurts throughput significantly (per
byte of memory used), though tail latencies can improve by up to 10%! To
be conservative, this change moves the value to 0.7 where there is a
small improvement to both throughput and latency, given the memory use.
Because the new pacer accounts for the two most significant sources of
scan work after heap objects, it is now also safer to reduce the minimum
heap size without leading to very poor amortization. This change thus
decreases the minimum heap size to 512 KiB, which corresponds to the
fact that the runtime has around 200 KiB of scannable globals always
there, up-front, providing a baseline.
Benchmark results: https://perf.golang.org/search?q=upload:20211001.6
tile38's KNearest benchmark shows a memory increase, but throughput (and
latency) per byte of memory used is better.
gopher-lua showed an increase in both CPU time and memory usage, but
subsequent attempts to reproduce this behavior are inconsistent.
Sometimes the overall performance is better, sometimes it's worse. This
suggests that the benchmark is fairly noisy in a way not captured by the
benchmarking framework itself.
biogo-igor is the only benchmark to show a significant performance loss.
This benchmark exhibits a very high GC rate, with relatively little work
to do in each cycle. The idle mark workers are quite active. In the new
pacer, mark phases are longer, mark assists are fewer, and some of that
time in mark assists has shifted to idle workers. Linux perf indicates
that the difference in CPU time can be mostly attributed to write-barrier
slow path related calls, which in turn indicates that the write barrier
being on for longer is the primary culprit. This also explains the memory
increase, as a longer mark phase leads to more memory allocated black,
surviving an extra cycle and contributing to the heap goal.
For #44167.
Change-Id: I8ac7cfef7d593e4a642c9b2be43fb3591a8ec9c4
Reviewed-on: https://go-review.googlesource.com/c/go/+/309869
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
On ARM64 PE, when external linking, the PE relocation does not
have an explicit addend, and instead has the addend encoded in
the instruction or data. An instruction (e.g. ADRP, ADD) has
limited width for the addend, so when the addend is large we use
a label symbol, which points to the middle of the original target
symbol, and a smaller addend. But for an absolute address
relocation in the data section, we have the full width to encode
the addend and we should not use the label symbol. Also, since we
do not adjust the addend in the data, using the label symbol will
actually make it point to the wrong address. E.g for an R_ADDR
relocation targeting x+0x123456, we should emit 0x123456 in the
data with an IMAGE_REL_ARM64_ADDR64 relocation pointing to x,
whereas the current code emits 0x123456 in the data with an
IMAGE_REL_ARM64_ADDR64 relocation pointing to the label symbol
x+1MB, so it will actually be resolved to x+0x223456. This CL
fixes this.
Fixes #47557.
Change-Id: I64e02b56f1d792f8c20ca61b78623ef5c3e34d7e
Reviewed-on: https://go-review.googlesource.com/c/go/+/360895
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
In order to know the actual number of bytes
of the entire corpus entry, the coordinator
would likely need to unmarshal the bytes and
tally up the length. That's more work than it
is worth, so this change just clarifies that
the printed # of bytes is the length of the
entire file, not just the entry itself.
Fixes #48989
Change-Id: I6fa0c0206a249cefdf6335040c560ec0c5a55b4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/361414
Trust: Katie Hockman <katie@golang.org>
Run-TryBot: Katie Hockman <katie@golang.org>
Reviewed-by: Roland Shoemaker <roland@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Change-Id: I02724dadacd9b3f23ca7e6bda581cba62ceff828
Reviewed-on: https://go-review.googlesource.com/c/go/+/361274
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Damien Neil <dneil@google.com>
Trust: Damien Neil <dneil@google.com>
Run-TryBot: Damien Neil <dneil@google.com>
TryBot-Result: Go Bot <gobot@golang.org>