Otherwise we may delay the delivery of these signals for an arbitrary
length of time. We are already careful to not block signals that the
program has asked to see.
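As an illustration (not part of this CL), a program asks to see a
signal with os/signal; the runtime must keep that signal unblocked or
its delivery could stall:

package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGUSR1) // runtime must not block SIGUSR1
	<-c                               // delivery should be prompt
}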
Also make sure that we don't miss a signal delivery if a thread
decides to stop for a while during execution of the signal handler.
Also clean up the TestAtomicStop output a little bit.
Fixes #21433
Change-Id: Ic0c1a4eaf7eba80d1abc1e9537570bf4687c2434
Reviewed-on: https://go-review.googlesource.com/79581
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Compiler and linker changes to support DWARF inlined instances; see
https://go.googlesource.com/proposal/+/HEAD/design/22080-dwarf-inlining.md
for design details.
This functionality is gated via the cmd/compile option -gendwarfinl=N,
where N={0,1,2}: a value of 0 disables dwarf inline generation, a
value of 1 turns on dwarf generation without tracking of formal/local
vars from inlined routines, and a value of 2 enables inlines with
variable tracking.
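For example (illustrative invocation), building with
go build -gcflags=-gendwarfinl=2 enables inlined-instance DWARF
generation with variable tracking.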
Updates #22080
Change-Id: I69309b3b815d9fed04aebddc0b8d33d0dbbfad6e
Reviewed-on: https://go-review.googlesource.com/75550
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
This CL is a simple doc typo fix, uncovered while reviewing the go-wasm
port.
Change-Id: I0fce915c341aaaea3a7cc365819abbc5f2c468c3
Reviewed-on: https://go-review.googlesource.com/80715
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Thanks to coypoop for noticing at:
https://github.com/golang/go/issues/22914#issuecomment-347761838
FreeBSD/386 and NetBSD/386 diverged between Go 1.4 and Go 1.5 when
Russ sent https://golang.org/cl/135830043 (git rev 25f6b02ab0)
to change the calling convention of the C compilers to match Go.
But NetBSD wasn't updated.
Tested on a NetBSD/386 VM, since the builders aren't back up yet (due
to this bug).
Fixes #22914
Updates #19339
Updates #20852
Updates #16511
Change-Id: Id76ebe8f29bcc85e39b1c11090639d906cd6cf04
Reviewed-on: https://go-review.googlesource.com/80515
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Benny Siegert <bsiegert@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TestGdbAutotmpTypes times out for unknown reasons on NetBSD. Skip the
gdb tests on NetBSD for now.
Updates #22893
Change-Id: Ibb05b7260eabb74d805d374b25a43770939fa5f2
Reviewed-on: https://go-review.googlesource.com/80136
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
exitsyscall should be recursively nosplit, but we don't have a way to
annotate that right now (see #21314). There's exactly one remaining
place where this is violated right now: exitsyscall -> casgstatus ->
print. The other prints in casgstatus are wrapped in systemstack
calls. This fixes the remaining print.
Updates #21431 (in theory could fix it, but that would just indicate
that we have a different G status-related crash and we've *never* seen
that failure on the dashboard.)
Change-Id: I9a5e8d942adce4a5c78cfc6b306ea5bda90dbd33
Reviewed-on: https://go-review.googlesource.com/79815
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Use the singular form of panic and remove the unnecessary 'however'
when comparing Goexit's behavior to 'a panic', and when describing
what happens to deferred recovers with Goexit.
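A short illustration of the documented behavior (not part of this CL):

package main

import "runtime"

func main() {
	done := make(chan struct{})
	go func() {
		defer close(done)
		defer func() {
			// Goexit is not a panic: recover returns nil,
			// but deferred functions still run.
			println(recover() == nil) // true
		}()
		runtime.Goexit()
	}()
	<-done
}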
Change-Id: I3116df3336fa135198f6a39cf93dbb88a0e2f46e
Reviewed-on: https://go-review.googlesource.com/79755
Reviewed-by: Rob Pike <r@golang.org>
Add an explanation of why sigtrampgo is nosplit.
Updates #21314.
Change-Id: I3f5909d2b2c180f9fa74d53df13e501826fd4316
Reviewed-on: https://go-review.googlesource.com/79615
Reviewed-by: Ian Lance Taylor <iant@golang.org>
newstack manually prints the stack trace if we try to grow the stack
when throwsplit is set. However, the default behavior is to omit
runtime frames. Since runtime frames can be critical to understanding
this crash, this change fixes this traceback to include them.
Updates #21431.
Change-Id: I5aa43f43aa2f10a8de7d67bcec743427be3a3b5d
Reviewed-on: https://go-review.googlesource.com/79518
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
If exitsyscall tries to grow the stack it will panic, but throw calls
print, which can grow the stack. Move the two bare throws in
exitsyscall to the system stack.
Updates #21431.
Change-Id: I5b29da5d34ade908af648a12075ed327a864476c
Reviewed-on: https://go-review.googlesource.com/79517
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, SetGCPercent(-1) disables GC, but doesn't wait for any
currently running concurrent GC to finish, so GC can still be running
when it returns. This is a change in behavior from Go 1.8, probably
defies user expectations, and can break various runtime tests that
depend on SetGCPercent(-1) to disable garbage collection in order to
prevent preemption deadlocks.
Fix this by making SetGCPercent(-1) block until any concurrently
running GC cycle finishes.
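A minimal sketch of the usage this guarantees (illustrative, not from
the CL):

package main

import "runtime/debug"

func main() {
	// After this fix, SetGCPercent(-1) does not return until any
	// concurrently running GC cycle has finished.
	old := debug.SetGCPercent(-1)
	// ... work that must not be preempted by GC ...
	debug.SetGCPercent(old)
}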
Fixes #22443.
Change-Id: I904133a34acf97a7942ef4531ace0647b13930ef
Reviewed-on: https://go-review.googlesource.com/79195
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The signatures of the mapassign_fast* routines need to distinguish
the pointerness of their key argument. If the affected routines
suspend partway through, the object pointed to by the key might
get garbage collected because the key is typed as a uint{32,64}.
This is not a problem for mapaccess or mapdelete because the key
in those situations does not live beyond the call involved. If the
object referenced by the key is garbage collected prematurely, the
code still works fine. Even if that object is subsequently reallocated,
it can't be written to the map in time to affect the lookup/delete.
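As an illustration (not from this CL), a pointer-keyed map reaches
these fast routines with a key the GC must be able to see:

package main

type T struct{ x int }

func main() {
	m := make(map[*T]int)
	// The key goes through a mapassign_fast* routine; if it were
	// typed there as a plain uint64, a suspension inside the routine
	// could let the GC collect the *T the key points to.
	m[&T{x: 1}] = 1
	println(len(m))
}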
Fixes #22781
Change-Id: I0bbbc5e9883d5ce702faf4e655348be1191ee439
Reviewed-on: https://go-review.googlesource.com/79018
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
CL 78538 was updated after running TryBots to depend on
syscall.NanoSleep, which isn't available on all non-Linux platforms.
Change-Id: I1fa615232b3920453431861310c108b208628441
Reviewed-on: https://go-review.googlesource.com/79175
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Add s390x to the list of architectures that support c-shared and
c-archive. This required adding load-time initialization (via
_rt0_s390x_linux_lib) and adding s390x to the c-shared and c-archive
tests.
Change-Id: I75883b2891c310fe8ce7f08c27b06895c074e123
Reviewed-on: https://go-review.googlesource.com/74910
Reviewed-by: Michael Munday <mike.munday@ibm.com>
I experimented with changing the write barrier to take the value in SI
rather than AX to improve register allocation. It had no effect on
performance and only made the "hello world" text 0.07% smaller, so
let's just remove the comment.
Change-Id: I6a261d14139b7a02a8467b31e74951dfb927ffb4
Reviewed-on: https://go-review.googlesource.com/78033
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The CPU time reported in the gctrace for STW phases is simply
work.stwprocs times the wall-clock duration of these phases. However,
work.stwprocs is set to gcprocs(), which is wrong for multiple
reasons:
1. gcprocs is intended to limit the number of Ms used for mark
termination based on how well the garbage collector actually
scales, but the gctrace wants to report how much CPU time is being
stolen from the application. During STW, that's *all* of the CPU,
regardless of how many the garbage collector can actually use.
2. gcprocs assumes it's being called during STW, so it limits its
result to sched.nmidle+1. However, we're not calling it during STW,
so sched.nmidle is typically quite small, even if GOMAXPROCS is
quite large.
Fix this by setting work.stwprocs to min(ncpu, GOMAXPROCS). This also
fixes the overall GC CPU fraction, which is based on the computed CPU
times.
Fixes #22725.
Change-Id: I64b5ce87e28dbec6870aa068ce7aecdd28c058d1
Reviewed-on: https://go-review.googlesource.com/77710
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Change the hash/crc32 package to use the cpu package instead of
runtime internal variables to check for the crc32 instruction.
Change-Id: I8f88d2351bde8ed4e256f9adf822a08b9a00f532
Reviewed-on: https://go-review.googlesource.com/76490
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Just copy some code to make TestWindowsStackMemory build
when CGO_ENABLED is set to 0.
Fixes #22680
Change-Id: I63f9b409a3a97b7718f5d37837ab706d8ed92e81
Reviewed-on: https://go-review.googlesource.com/77430
Reviewed-by: Chris Hines <chris.cs.guy@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
CL 45412 started hiding autogenerated wrapper functions from call
stacks so that call stack semantics better matched language semantics.
This is based on the theory that the wrapper function will call the
"real" function and all the programmer knows about is the real
function.
However, this theory breaks down in two cases:
1. If the wrapper is at the top of the stack, then it didn't call
anything. This can happen, for example, if the "stack" was actually
synthesized by the user.
2. If the wrapper panics, for example by calling panicwrap or by
dereferencing a nil pointer, then it didn't call the wrapped
function and the user needs to see what panicked, even if we can't
attribute it nicely.
This commit modifies the traceback logic to include the wrapper
function in both of these cases.
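For example (illustrative), case 2 can be reproduced with a value
method called through a nil pointer, which panics inside the
autogenerated wrapper:

package main

type T struct{}

func (T) M() {}

type I interface{ M() }

func main() {
	var p *T
	var i I = p
	// Panics via panicwrap ("value method main.T.M called using nil
	// *T pointer"); the (*T).M wrapper now appears in the traceback.
	i.M()
}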
Fixes #22231.
Change-Id: I6e4339a652f73038bd8331884320f0b8edd86eb1
Reviewed-on: https://go-review.googlesource.com/76770
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
It has always been problematic that there was no way to specify
tool flags that applied only to the build of certain packages;
it was only possible to specify flags for all packages being built.
The usual workaround was to install all dependencies of something,
then build just that one thing with different flags. Since the
dependencies appeared to be up-to-date, they were not rebuilt
with the different flags. The new content-based staleness
(up-to-date) checks see through this trick, because they detect
changes in flags. This forces us to address the underlying problem
of providing a way to specify per-package flags.
The solution is to allow -gcflags=pattern=flags, which means
that flags apply to packages matching pattern, in addition to the
usual -gcflags=flags, which is now redefined to apply only to
the packages named on the command line.
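For example (illustrative), go build -gcflags='math/...=-m' ./mycmd
applies -m only when compiling packages under math/, while plain
go build -gcflags=-m ./mycmd now applies -m only to the package named
on the command line.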
See #22527 for discussion and rationale.
Fixes #22527.
Change-Id: I6716bed69edc324767f707b5bbf3aaa90e8e7302
Reviewed-on: https://go-review.googlesource.com/76551
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Currently dead goroutines retain their assist credit. This credit can
be used if the goroutine gets recycled, but in general this can make
assist pacing over-aggressive by hiding an amount of credit
proportional to the number of exited (and not reused) goroutines.
Fix this "hidden credit" by flushing assist credit to the global
credit pool when a goroutine exits.
Updates #14812.
Change-Id: I65f7f75907ab6395c04aacea2c97aea963b60344
Reviewed-on: https://go-review.googlesource.com/24703
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This fixes a race on old Linux kernels, in which we might temporarily
set epfd to an invalid value other than -1. It's also the right thing
to do. No test because the problem only occurs on old kernels.
Fixes #22606
Change-Id: Id84bdd6ae6d7c5d47c39e97b74da27576cb51a54
Reviewed-on: https://go-review.googlesource.com/76319
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
A couple of the CPU profiling testpoints make calls to helper
functions (cpuHog1, for example) where the computed value is always
thrown away by the caller without being used. A smart compiler back
end (in this case LLVM) can detect this fact and delete the contents
of the called function, which can cause tests to fail. Harden the test
slightly by passing in a value read from a global and ensuring that the
caller stores the value back to a global; this prevents any optimizer
mischief.
Change-Id: Icbd6e3e32ff299c68a6397dc1404a52b21eaeaab
Reviewed-on: https://go-review.googlesource.com/76230
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
This CL adds an automatic, limited "go vet" to "go test".
If the building of a test package fails, vet is not run.
If vet fails, the test is not run.
The goal is that users don't notice vet as part of the "go test"
process at all, until vet speaks up and says something important.
This should help users find real problems in their code faster
(vet can just point to them instead of needing to debug a
test failure) and expands the scope of what kinds of things
vet can help with.
The "go vet" runs in parallel with the linking of the test binary,
so for incremental builds it typically does not slow the overall
"go test" at all: there's spare machine capacity during the link.
all.bash has less spare machine capacity. This CL increases
the time for all.bash on my laptop from 4m41s to 4m48s (+2.5%).
To opt out for a given run, use "go test -vet=off".
The vet checks used during "go test" are a subset of the full set,
restricted to ones that are 100% correct and therefore acceptable
to make mandatory. In this CL, that set is atomic, bool, buildtags,
nilfunc, and printf. Including printf is debatable, but I want to
include it for now and find out what needs to be scaled back.
(It already found one real problem in package os's tests that
previous go vet os had not turned up.)
Now that we can rely on type information it may be that printf
should make its function-name-based heuristic less aggressive
and have a whitelist of known print/printf functions.
Determining the exact set for Go 1.10 is #18085.
Running vet also means that programs now have to type-check
with both cmd/compile and go/types in order to pass "go test".
We don't start vet until cmd/compile has built the test package,
so normally the added go/types check doesn't find anything.
However, there is at least one instance where go/types is more
precise than cmd/compile: declared and not used errors involving
variables captured into closures.
This CL includes a printf fix to os/os_test.go and many declared
and not used fixes in the race detector tests.
Fixes #18084.
Change-Id: I353e00b9d1f9fec540c7557db5653e7501f5e1c9
Reviewed-on: https://go-review.googlesource.com/74356
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Unlike the legacy text format that outputs the count and the number of
cycles, the pprof tool expects contention profiles to include the count
and the delay time measured in nanoseconds. printCountCycleProfile
performs the conversion from cycles to nanoseconds.
(See parseContention function in
cmd/vendor/github.com/google/pprof/profile/legacy_profile.go)
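A sketch of the conversion (assumed helper name; the real work happens
inside printCountCycleProfile):

// cyclesToNanoseconds converts a cycle count into nanoseconds,
// given the sampled CPU frequency in cycles per second.
func cyclesToNanoseconds(cycles, cyclesPerSecond int64) int64 {
	return int64(float64(cycles) / float64(cyclesPerSecond) * 1e9)
}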
Fixes #21474
Change-Id: I8e8fb6ea803822d7eaaf9ecf1df3e236ad225a7b
Reviewed-on: https://go-review.googlesource.com/64410
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The current GOROOT documentation could suggest that changing the
environment variable at runtime would affect the return value of
GOROOT. This is false: the returned value is the one used for the
build. This CL aims to clarify the confusion.
Fixes #22302
Change-Id: Ib68c30567ac864f152d2da31f001a98531fc9757
Reviewed-on: https://go-review.googlesource.com/75751
Reviewed-by: Russ Cox <rsc@golang.org>
The current code can potentially return a smaller processor count on a
linux kernel when its cpumask_size (controlled by both kernel config and
boot parameter) is not a multiple of the pointer size, because
r/sys.PtrSize will be rounded down. Since sched_getaffinity returns the
size in bytes, we can just allocate the buf as a byte array to avoid the
extra calculation with the pointer size and roundups.
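A sketch of the counting side under that layout (assumed names, not
the actual runtime code):

// countProcs counts the set bits in the r mask bytes that
// sched_getaffinity reported as written.
func countProcs(buf []byte, r int) int32 {
	n := int32(0)
	for _, v := range buf[:r] {
		for v != 0 {
			n += int32(v & 1)
			v >>= 1
		}
	}
	return n
}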
Change-Id: I0c21046012b88d8a56b5dd3dde1d158d94f8eea9
Reviewed-on: https://go-review.googlesource.com/75591
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
To improve readability when exported fields are removed,
forbid the printer from emitting an empty line before the first comment
in a const, var, or type block.
Also, when printing the "Has filtered or unexported fields." message,
add an empty line before it to separate the message from the struct
or interface contents.
Before the change:
<<<
type NamedArg struct {

	// Name is the name of the parameter placeholder.
	//
	// If empty, the ordinal position in the argument list will be
	// used.
	//
	// Name must omit any symbol prefix.
	Name string

	// Value is the value of the parameter.
	// It may be assigned the same value types as the query
	// arguments.
	Value interface{}
	// contains filtered or unexported fields
}
>>>
After the change:
<<<
type NamedArg struct {
	// Name is the name of the parameter placeholder.
	//
	// If empty, the ordinal position in the argument list will be
	// used.
	//
	// Name must omit any symbol prefix.
	Name string

	// Value is the value of the parameter.
	// It may be assigned the same value types as the query
	// arguments.
	Value interface{}

	// contains filtered or unexported fields
}
>>>
Fixes #18264
Change-Id: I9fe17ca39cf92fcdfea55064bd2eaa784ce48c88
Reviewed-on: https://go-review.googlesource.com/71990
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
* Avoid calculating insertk until needed.
* Avoid a pointer into b.tophash and just track the insertion index.
This avoids b.tophash being marked as escaping to heap.
* Calculate val only once at the end of the mapassign functions.
Function sizes decrease slightly, e.g. for mapassign_faststr:
before "".mapassign_faststr STEXT size=1166 args=0x28 locals=0x78
after "".mapassign_faststr STEXT size=1080 args=0x28 locals=0x68
name old time/op new time/op delta
MapAssign/Int32/256-4 19.4ns ± 4% 19.5ns ±11% ~ (p=0.973 n=20+20)
MapAssign/Int32/65536-4 32.5ns ± 2% 32.4ns ± 3% ~ (p=0.078 n=20+19)
MapAssign/Int64/256-4 20.3ns ± 6% 17.6ns ± 5% -13.01% (p=0.000 n=20+20)
MapAssign/Int64/65536-4 33.3ns ± 2% 33.3ns ± 1% ~ (p=0.444 n=20+20)
MapAssign/Str/256-4 22.3ns ± 3% 22.4ns ± 3% ~ (p=0.343 n=20+20)
MapAssign/Str/65536-4 44.9ns ± 1% 43.9ns ± 1% -2.39% (p=0.000 n=20+19)
Change-Id: I2627bb8a961d366d9473b5922fa129176319eb22
Reviewed-on: https://go-review.googlesource.com/74870
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Handle make(map[any]any) and make(map[any]any, hint) where
hint <= BUCKETSIZE specially, to allow for faster map initialization
and to improve binary size by using runtime calls with fewer arguments.
If the hint is smaller than or equal to BUCKETSIZE, then
overLoadFactor(hint, 0) is false and no buckets would be allocated by makemap:
* If hmap needs to be allocated on the stack then only hmap's hash0
field needs to be initialized and no call to makemap is needed.
* If hmap needs to be allocated on the heap then a new special
makehmap function will allocate hmap and initialize hmap's
hash0 field.
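For example (illustrative), both forms below can now use the cheaper
initialization:

package main

func main() {
	m1 := make(map[string]int)    // no hint
	m2 := make(map[string]int, 8) // hint <= BUCKETSIZE
	m1["a"], m2["b"] = 1, 2
	println(len(m1), len(m2))
}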
Reduces the size of the godoc binary by ~36kb.
AMD64
name old time/op new time/op delta
NewEmptyMap 16.6ns ± 2% 5.5ns ± 2% -66.72% (p=0.000 n=10+10)
NewSmallMap 64.8ns ± 1% 56.5ns ± 1% -12.75% (p=0.000 n=9+10)
Updates #6853
Change-Id: I624e90da6775afaa061178e95db8aca674f44e9b
Reviewed-on: https://go-review.googlesource.com/61190
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Since CL 33071, testCPUProfile is the only user of the badOS map.
Replace it by the corresponding switch, with the "plan9" case removed
because it is already checked earlier in the same function.
Change-Id: Id647b8ee1fd37516bb702b35b3c9296a4f56b61b
Reviewed-on: https://go-review.googlesource.com/75110
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The check of uintptr(newcap) > maxSliceCap(et.size) in addition
to capmem > _MaxMem is needed to prevent a reproducible overflow
on 32bit architectures.
On 64bit platforms this problem is less likely to occur, as allocation
of a sufficiently large array or slice to be appended is likely to
already exhaust available memory before the call to append can be made.
Example program that without the fix in this CL does segfault on 386:
type T [1<<27 + 1]int64

var d T
var s []T

func main() {
	s = append(s, d, d, d, d)
	print(len(s), "\n")
}
Fixes #21586
Change-Id: Ib4185435826ef43df71ba0f789e19f5bf9a347e6
Reviewed-on: https://go-review.googlesource.com/55133
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
testing.Skip{,f} will exit the test via runtime.Goexit. Thus, the
subsequent return is never reached and can be removed.
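As an illustration (hypothetical test, not from this CL):

package p

import (
	"runtime"
	"testing"
)

func TestSomething(t *testing.T) {
	if runtime.GOOS == "netbsd" {
		t.Skip("not supported on netbsd")
		// return // unreachable: Skip exits via runtime.Goexit
	}
}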
Change-Id: I1e399f3d5db753ece1ffba648850427e1b4be300
Reviewed-on: https://go-review.googlesource.com/74990
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Otherwise the new numbered directories like b028/ appear in the objects,
and they can change from run to run.
Fixes #22514.
Change-Id: I8d0cf65f3622e48b2547d5757febe0ee1301e2ed
Reviewed-on: https://go-review.googlesource.com/74791
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Since Go 1.8, different types of GC mark workers were annotated and the
annotation strings were recorded during StartTrace. This change fixes
two issues around the use of traceString from StartTrace here.
1) "failed to parse trace: no consistent ordering of events possible"
This issue is a result of a missing 'batch' event entry. For efficient
tracing, the tracer maintains system-allocated buffers and, once a
buffer is full, it is flushed out for writing. Moreover, tracing
assumes all the records in the same buffer (batch) are already ordered,
applies further optimization in encoding, and defers complete order
reconstruction until trace parsing time. Thus, when a flush happens
and a new buffer is used, the new buffer should contain an event to
indicate the start of a new batch. Before this CL, the batch entry was
written by traceEvent only when the buffer position is 0, and
wasn't written when a flush occurred during traceString.
This CL fixes it by moving the batch entry write to the traceFlush.
2) crash during tracing due to invalid memory access, or during parsing
due to duplicate string entries
This issue is a result of memory allocation during traceString calls.
The execution tracer traces some memory allocation activities. Before
this CL, traceString took the buffer address (*traceBuf) and mutated
the buffer. If memory tracing occurred in the meantime from the same P,
the allocation tracing (traceEvent) would take the same buffer address
through the pointer to the buffer address (**traceBuf), and mutate the
buffer.
As a result, one of the following can happen:
- the allocation record is overwritten by the following trace string
record (data loss)
- if a buffer flush occurs during the allocation tracing, traceString
will attempt to write the string record to the old buffer and
eventually cause an invalid memory access crash.
- or a flush of the same buffer can occur twice (once from the memory
allocation, and once from the string record write), in which case
the trace can contain the same data twice and the parser will complain
about duplicate string record entries.
This CL fixes the second issue by making the traceString take
**traceBuf (*traceBufPtr).
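A toy model of the difference (illustrative names, not the runtime
code):

package main

type traceBuf struct{ events []byte }

// flush retires a full buffer and hands back a fresh one, as the
// real tracer does.
func flush(old *traceBuf) *traceBuf { return &traceBuf{} }

// write models the fixed traceString: because it holds **traceBuf, a
// nested flush updates the caller's pointer and the record lands in
// the live buffer rather than the retired one.
func write(bufp **traceBuf) {
	*bufp = flush(*bufp)
	(*bufp).events = append((*bufp).events, 's')
}

func main() {
	b := &traceBuf{}
	write(&b)
	println(len(b.events)) // 1: b points at the live buffer
}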
Change-Id: I24f629758625b38e1916fbfc7d7be6ea210586af
Reviewed-on: https://go-review.googlesource.com/50873
Run-TryBot: Austin Clements <austin@google.com>
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, the background mark worker and the goal GC CPU are
both fixed at 25%. The trigger controller's goal is to achieve the
goal CPU usage, and with the previous commit it can actually achieve
this. But this means there are *no* assists, which sounds ideal but
actually causes problems for the trigger controller. Since the
controller can't lower CPU usage below the background mark worker CPU,
it saturates at the CPU goal and no longer gets feedback, which
translates into higher variability in heap growth.
This commit fixes this by allowing assists 5% CPU beyond the 25% fixed
background mark. This avoids saturating the trigger controller, since
it can now get feedback from both sides of the CPU goal. This leads to
low variability in both CPU usage and heap growth, at the cost of
reintroducing a low rate of mark assists.
We also experimented with 20% background plus 5% assist, but 25%+5%
clearly performed better in benchmarks.
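In terms of constants (a sketch; names assumed to match the runtime):

const (
	gcBackgroundUtilization = 0.25 // fixed background mark CPU
	gcGoalUtilization       = 0.30 // total goal; the 5% gap is assist headroom
)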
Updates #14951.
Updates #14812.
Updates #18534.
Combined with the previous CL, this significantly improves tail
mutator utilization in the x/benchmarks garbage benchmark. On a sample
trace, it increased the 99.9%ile mutator utilization at 10ms from 26%
to 59%, and at 5ms from 17% to 52%. It reduced the 99.9%ile zero
utilization window from 2ms to 700µs. It also helps the mean mutator
utilization: it increased the 10s mutator utilization from 83% to 94%.
The minimum mutator utilization is also somewhat improved, though
there is still some unknown artifact that causes a minuscule fraction
of mutator assists to take 5--10ms (in fact, there was exactly one
10ms mutator assist in my sample trace).
This has no significant effect on the throughput of the
github.com/dr2chase/bent benchmarks-50.
This has little effect on the go1 benchmarks (and the slight overall
improvement makes up for the slight overall slowdown from the previous
commit):
name old time/op new time/op delta
BinaryTree17-12 2.40s ± 0% 2.41s ± 1% +0.26% (p=0.010 n=18+19)
Fannkuch11-12 2.95s ± 0% 2.93s ± 0% -0.62% (p=0.000 n=18+15)
FmtFprintfEmpty-12 42.2ns ± 0% 42.3ns ± 1% +0.37% (p=0.001 n=15+14)
FmtFprintfString-12 67.9ns ± 2% 67.2ns ± 3% -1.03% (p=0.002 n=20+18)
FmtFprintfInt-12 75.6ns ± 3% 76.8ns ± 2% +1.59% (p=0.000 n=19+17)
FmtFprintfIntInt-12 123ns ± 1% 124ns ± 1% +0.77% (p=0.000 n=17+14)
FmtFprintfPrefixedInt-12 148ns ± 1% 150ns ± 1% +1.28% (p=0.000 n=20+20)
FmtFprintfFloat-12 212ns ± 0% 211ns ± 1% -0.67% (p=0.000 n=16+17)
FmtManyArgs-12 499ns ± 1% 500ns ± 0% +0.23% (p=0.004 n=19+16)
GobDecode-12 6.49ms ± 1% 6.51ms ± 1% +0.32% (p=0.008 n=19+19)
GobEncode-12 5.47ms ± 0% 5.43ms ± 1% -0.68% (p=0.000 n=19+20)
Gzip-12 220ms ± 1% 216ms ± 1% -1.66% (p=0.000 n=20+19)
Gunzip-12 38.8ms ± 0% 38.5ms ± 0% -0.80% (p=0.000 n=19+20)
HTTPClientServer-12 78.5µs ± 1% 78.1µs ± 1% -0.53% (p=0.008 n=20+19)
JSONEncode-12 12.2ms ± 0% 11.9ms ± 0% -2.38% (p=0.000 n=17+19)
JSONDecode-12 52.3ms ± 0% 53.3ms ± 0% +1.84% (p=0.000 n=19+20)
Mandelbrot200-12 3.69ms ± 0% 3.69ms ± 0% -0.19% (p=0.000 n=19+19)
GoParse-12 3.17ms ± 1% 3.19ms ± 1% +0.61% (p=0.000 n=20+20)
RegexpMatchEasy0_32-12 73.7ns ± 0% 73.2ns ± 1% -0.66% (p=0.000 n=17+20)
RegexpMatchEasy0_1K-12 238ns ± 0% 239ns ± 0% +0.32% (p=0.000 n=17+16)
RegexpMatchEasy1_32-12 69.1ns ± 1% 69.2ns ± 1% ~ (p=0.669 n=19+13)
RegexpMatchEasy1_1K-12 365ns ± 1% 367ns ± 1% +0.49% (p=0.000 n=19+19)
RegexpMatchMedium_32-12 104ns ± 1% 105ns ± 1% +1.33% (p=0.000 n=16+20)
RegexpMatchMedium_1K-12 33.6µs ± 3% 34.1µs ± 4% +1.67% (p=0.001 n=20+20)
RegexpMatchHard_32-12 1.67µs ± 1% 1.62µs ± 1% -2.78% (p=0.000 n=18+17)
RegexpMatchHard_1K-12 50.3µs ± 2% 48.7µs ± 1% -3.09% (p=0.000 n=19+18)
Revcomp-12 384ms ± 0% 386ms ± 0% +0.59% (p=0.000 n=19+19)
Template-12 61.1ms ± 1% 60.5ms ± 1% -1.02% (p=0.000 n=19+20)
TimeParse-12 307ns ± 0% 303ns ± 1% -1.23% (p=0.000 n=19+15)
TimeFormat-12 323ns ± 0% 323ns ± 0% -0.12% (p=0.011 n=15+20)
[Geo mean] 47.1µs 47.0µs -0.20%
https://perf.golang.org/search?q=upload:20171030.4
It slightly improves the performance of the x/benchmarks:
name old time/op new time/op delta
Garbage/benchmem-MB=1024-12 2.29ms ± 3% 2.22ms ± 2% -2.97% (p=0.000 n=18+18)
Garbage/benchmem-MB=64-12 2.24ms ± 2% 2.21ms ± 2% -1.64% (p=0.000 n=18+18)
HTTP-12 12.6µs ± 1% 12.6µs ± 1% ~ (p=0.690 n=19+17)
JSON-12 11.3ms ± 2% 11.3ms ± 1% ~ (p=0.163 n=17+18)
and fixes some of the heap size bloat caused by the previous commit:
name old peak-RSS-bytes new peak-RSS-bytes delta
Garbage/benchmem-MB=1024-12 1.88G ± 2% 1.77G ± 2% -5.52% (p=0.000 n=20+18)
Garbage/benchmem-MB=64-12 248M ± 8% 226M ± 5% -8.93% (p=0.000 n=20+20)
HTTP-12 47.0M ±27% 47.2M ±12% ~ (p=0.512 n=20+20)
JSON-12 206M ±11% 206M ±10% ~ (p=0.841 n=20+20)
https://perf.golang.org/search?q=upload:20171030.5
Combined with the change to add a soft goal in the previous commit,
this achieves a decent performance improvement on the garbage
benchmark:
name old time/op new time/op delta
Garbage/benchmem-MB=1024-12 2.40ms ± 4% 2.22ms ± 2% -7.40% (p=0.000 n=19+18)
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.21ms ± 2% -1.06% (p=0.000 n=19+18)
HTTP-12 12.5µs ± 1% 12.6µs ± 1% ~ (p=0.330 n=20+17)
JSON-12 11.1ms ± 1% 11.3ms ± 1% +1.87% (p=0.000 n=16+18)
https://perf.golang.org/search?q=upload:20171030.6
Change-Id: If04ddb57e1e58ef2fb9eec54c290eb4ae4bea121
Reviewed-on: https://go-review.googlesource.com/59971
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, GC pacing is based on a single hard heap limit computed
based on GOGC. In order to achieve this hard limit, assist pacing
makes the conservative assumption that the entire heap is live.
However, in the steady state (with GOGC=100), only half of the heap is
live. As a result, the garbage collector works twice as hard as
necessary and finishes half way between the trigger and the goal.
Since this is a stable state for the trigger controller, this repeats
from cycle to cycle. Matters are even worse if GOGC is higher. For
example, if GOGC=200, only a third of the heap is live in steady
state, so the GC will work three times harder than necessary and
finish only a third of the way between the trigger and the goal.
Since this causes the garbage collector to consume ~50% of the
available CPU during marking instead of the intended 25%, about 25% of
the CPU goes to mutator assists. This high mutator assist cost causes
high mutator latency variability.
This commit improves the situation by separating the heap goal into
two goals: a soft goal and a hard goal. The soft goal is set based on
GOGC, just like the current goal is, and the hard goal is set at a 10%
larger heap than the soft goal. Prior to the soft goal, assist pacing
assumes the heap is in steady state (e.g., only half of it is live).
Between the soft goal and the hard goal, assist pacing switches to the
current conservative assumption that the entire heap is live.
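A sketch of the two goals (assumed names; the 10% margin is from the
text above):

// goalHeaps computes an illustrative soft goal from GOGC and a hard
// goal 10% above it.
func goalHeaps(heapMarked, gcpercent uint64) (soft, hard uint64) {
	soft = heapMarked + heapMarked*gcpercent/100
	hard = soft + soft/10
	return
}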
In benchmarks, this nearly eliminates mutator assists. However, since
background marking is fixed at 25% CPU, this causes the trigger
controller to saturate, which leads to somewhat higher variability in
heap size. The next commit will address this.
The lower CPU usage of course leads to longer mark cycles, though
really it means the mark cycles are as long as they should have been
in the first place. This does, however, lead to two potential
downsides compared to the current pacing policy: 1. the total
overhead of the write barrier is higher because it's enabled more of
the time and 2. the heap size may be larger because there's more
floating garbage. We addressed 1 by significantly improving the
performance of the write barrier in the preceding commits. 2 can be
demonstrated in intense GC benchmarks, but doesn't seem to be a
problem in any real applications.
Updates #14951.
Updates #14812 (fixes?).
Fixes #18534.
This has no significant effect on the throughput of the
github.com/dr2chase/bent benchmarks-50.
This has little overall throughput effect on the go1 benchmarks:
name old time/op new time/op delta
BinaryTree17-12 2.41s ± 0% 2.40s ± 0% -0.22% (p=0.007 n=20+18)
Fannkuch11-12 2.95s ± 0% 2.95s ± 0% +0.07% (p=0.003 n=17+18)
FmtFprintfEmpty-12 41.7ns ± 3% 42.2ns ± 0% +1.17% (p=0.002 n=20+15)
FmtFprintfString-12 66.5ns ± 0% 67.9ns ± 2% +2.16% (p=0.000 n=16+20)
FmtFprintfInt-12 77.6ns ± 2% 75.6ns ± 3% -2.55% (p=0.000 n=19+19)
FmtFprintfIntInt-12 124ns ± 1% 123ns ± 1% -0.98% (p=0.000 n=18+17)
FmtFprintfPrefixedInt-12 151ns ± 1% 148ns ± 1% -1.75% (p=0.000 n=19+20)
FmtFprintfFloat-12 210ns ± 1% 212ns ± 0% +0.75% (p=0.000 n=19+16)
FmtManyArgs-12 501ns ± 1% 499ns ± 1% -0.30% (p=0.041 n=17+19)
GobDecode-12 6.50ms ± 1% 6.49ms ± 1% ~ (p=0.234 n=19+19)
GobEncode-12 5.43ms ± 0% 5.47ms ± 0% +0.75% (p=0.000 n=20+19)
Gzip-12 216ms ± 1% 220ms ± 1% +1.71% (p=0.000 n=19+20)
Gunzip-12 38.6ms ± 0% 38.8ms ± 0% +0.66% (p=0.000 n=18+19)
HTTPClientServer-12 78.1µs ± 1% 78.5µs ± 1% +0.49% (p=0.035 n=20+20)
JSONEncode-12 12.1ms ± 0% 12.2ms ± 0% +1.05% (p=0.000 n=18+17)
JSONDecode-12 53.0ms ± 0% 52.3ms ± 0% -1.27% (p=0.000 n=19+19)
Mandelbrot200-12 3.74ms ± 0% 3.69ms ± 0% -1.17% (p=0.000 n=18+19)
GoParse-12 3.17ms ± 1% 3.17ms ± 1% ~ (p=0.569 n=19+20)
RegexpMatchEasy0_32-12 73.2ns ± 1% 73.7ns ± 0% +0.76% (p=0.000 n=18+17)
RegexpMatchEasy0_1K-12 239ns ± 0% 238ns ± 0% -0.27% (p=0.000 n=13+17)
RegexpMatchEasy1_32-12 69.0ns ± 2% 69.1ns ± 1% ~ (p=0.404 n=19+19)
RegexpMatchEasy1_1K-12 367ns ± 1% 365ns ± 1% -0.60% (p=0.000 n=19+19)
RegexpMatchMedium_32-12 105ns ± 1% 104ns ± 1% -1.24% (p=0.000 n=19+16)
RegexpMatchMedium_1K-12 34.1µs ± 2% 33.6µs ± 3% -1.60% (p=0.000 n=20+20)
RegexpMatchHard_32-12 1.62µs ± 1% 1.67µs ± 1% +2.75% (p=0.000 n=18+18)
RegexpMatchHard_1K-12 48.8µs ± 1% 50.3µs ± 2% +3.07% (p=0.000 n=20+19)
Revcomp-12 386ms ± 0% 384ms ± 0% -0.57% (p=0.000 n=20+19)
Template-12 59.9ms ± 1% 61.1ms ± 1% +2.01% (p=0.000 n=20+19)
TimeParse-12 301ns ± 2% 307ns ± 0% +2.11% (p=0.000 n=20+19)
TimeFormat-12 323ns ± 0% 323ns ± 0% ~ (all samples are equal)
[Geo mean] 47.0µs 47.1µs +0.23%
https://perf.golang.org/search?q=upload:20171030.1
Likewise, the throughput effect on the x/benchmarks is minimal (and
reasonably positive on the garbage benchmark with a large heap):
name old time/op new time/op delta
Garbage/benchmem-MB=1024-12 2.40ms ± 4% 2.29ms ± 3% -4.57% (p=0.000 n=19+18)
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.24ms ± 2% +0.59% (p=0.016 n=19+18)
HTTP-12 12.5µs ± 1% 12.6µs ± 1% ~ (p=0.326 n=20+19)
JSON-12 11.1ms ± 1% 11.3ms ± 2% +2.15% (p=0.000 n=16+17)
It does increase the heap size of the garbage benchmarks, but seems to
have relatively little impact on more realistic programs. Also, we'll
gain some of this back with the next commit.
name old peak-RSS-bytes new peak-RSS-bytes delta
Garbage/benchmem-MB=1024-12 1.21G ± 1% 1.88G ± 2% +55.59% (p=0.000 n=19+20)
Garbage/benchmem-MB=64-12 168M ± 3% 248M ± 8% +48.08% (p=0.000 n=18+20)
HTTP-12 45.6M ± 9% 47.0M ±27% ~ (p=0.925 n=20+20)
JSON-12 193M ±11% 206M ±11% +7.06% (p=0.001 n=20+20)
https://perf.golang.org/search?q=upload:20171030.2
Change-Id: Ic78904135f832b4d64056cbe734ab979f5ad9736
Reviewed-on: https://go-review.googlesource.com/59970
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>