Since Go 1.8, the different types of GC mark workers have been annotated and the
annotation strings are recorded during StartTrace. This change fixes
two issues around the use of traceString from StartTrace.
1) "failed to parse trace: no consistent ordering of events possible"
This issue is the result of a missing 'batch' event entry. For efficient
tracing, the tracer maintains system-allocated buffers, and once a buffer
is full, it is flushed out for writing. Tracing assumes all the records in
the same buffer (batch) are already ordered; this allows further
optimization of the encoding and defers the complete order reconstruction
until trace parsing time. Thus, when a flush happens and a new buffer is
used, the new buffer should contain an event to indicate the start of a
new batch. Before this CL, the batch entry was written only by traceEvent,
and only when the buffer position was 0; it wasn't written when a flush
occurred during traceString.
This CL fixes it by moving the batch entry write into traceFlush.
2) crash during tracing due to invalid memory access, or during parsing
due to duplicate string entries
This issue is the result of memory allocation during traceString calls.
The execution tracer traces some memory allocation activities. Before this
CL, traceString took the buffer address (*traceBuf) and mutated the buffer.
If memory allocation tracing occurred in the meantime on the same P, the
allocation tracing (traceEvent) would take the same buffer through the
pointer to the buffer address (**traceBuf) and mutate the buffer.
As a result, one of the following can happen:
- the allocation record is overwritten by the following trace string
record (data loss)
- if a buffer flush occurs during the allocation tracing, traceString
will attempt to write the string record to the old buffer and
eventually cause an invalid memory access crash.
- or a flush of the same buffer can occur twice (once from the memory
allocation, and once from the string record write), in which case
the trace can contain the same data twice and the parser will complain
about duplicate string record entries.
This CL fixes the second issue by making traceString take
**traceBuf (*traceBufPtr).
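To illustrate the pointer-level difference, here is a minimal sketch with
invented, simplified types (not the runtime's real trace structures): taking
the extra level of indirection lets the callee keep writing to the current
buffer even if an interleaved flush swaps it out.

    package main

    // Simplified stand-ins for the runtime's trace buffer types; the names,
    // fields, and sizes below are illustrative only.
    type traceBuf struct {
        pos  int
        data [64]byte
    }

    // traceBufPtr models the per-P slot that always points at the current
    // buffer; a flush replaces the buffer this slot points to.
    type traceBufPtr struct {
        buf *traceBuf
    }

    // flush retires the full buffer and installs a fresh one in the slot.
    func flush(bp *traceBufPtr) {
        bp.buf = &traceBuf{}
    }

    // writeString takes the slot (**traceBuf, via traceBufPtr) rather than a
    // plain *traceBuf, so if a flush happens mid-call it writes to the new
    // buffer instead of the retired one.
    func writeString(bp *traceBufPtr, s string) {
        if bp.buf.pos+len(s) > len(bp.buf.data) {
            flush(bp)
        }
        copy(bp.buf.data[bp.buf.pos:], s)
        bp.buf.pos += len(s)
    }

    func main() {
        slot := &traceBufPtr{buf: &traceBuf{}}
        writeString(slot, "GC (dedicated)")
    }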
Change-Id: I24f629758625b38e1916fbfc7d7be6ea210586af
Reviewed-on: https://go-review.googlesource.com/50873
Run-TryBot: Austin Clements <austin@google.com>
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, the background mark worker CPU and the goal GC CPU are
both fixed at 25%. The trigger controller's goal is to achieve the
goal CPU usage, and with the previous commit it can actually achieve
this. But this means there are *no* assists, which sounds ideal but
actually causes problems for the trigger controller. Since the
controller can't lower CPU usage below the background mark worker CPU,
it saturates at the CPU goal and no longer gets feedback, which
translates into higher variability in heap growth.
This commit fixes this by allowing assists to use 5% CPU beyond the fixed
25% background mark. This avoids saturating the trigger controller, since
it can now get feedback from both sides of the CPU goal. This leads to
low variability in both CPU usage and heap growth, at the cost of
reintroducing a low rate of mark assists.
We also experimented with 20% background plus 5% assist, but 25%+5%
clearly performed better in benchmarks.
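As a back-of-the-envelope illustration of the resulting split (the constants
mirror the numbers above; the names are invented, not the pacer's real
fields):

    package main

    import "fmt"

    // Illustrative constants taken from the description above.
    const (
        backgroundUtilization = 0.25 // CPU fraction reserved for background mark workers
        assistHeadroom        = 0.05 // extra CPU the controller may demand from assists
        utilizationGoal       = backgroundUtilization + assistHeadroom
    )

    func main() {
        // With GOMAXPROCS=8, background workers account for 2 of 8 Ps and
        // assists absorb the remaining 0.4 P-worth of the 30% goal.
        gomaxprocs := 8.0
        fmt.Printf("goal: %.0f%% of %v Ps = %.1f Ps (%.1f background + %.1f assist)\n",
            utilizationGoal*100, gomaxprocs,
            utilizationGoal*gomaxprocs,
            backgroundUtilization*gomaxprocs,
            assistHeadroom*gomaxprocs)
    }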
Updates #14951.
Updates #14812.
Updates #18534.
Combined with the previous CL, this significantly improves tail
mutator utilization in the x/benchmarks garbage benchmark. On a sample
trace, it increased the 99.9%ile mutator utilization at 10ms from 26%
to 59%, and at 5ms from 17% to 52%. It reduced the 99.9%ile zero
utilization window from 2ms to 700µs. It also helps the mean mutator
utilization: it increased the 10s mutator utilization from 83% to 94%.
The minimum mutator utilization is also somewhat improved, though
there is still some unknown artifact that causes a minuscule fraction
of mutator assists to take 5-10ms (in fact, there was exactly one
10ms mutator assist in my sample trace).
This has no significant effect on the throughput of the
github.com/dr2chase/bent benchmarks-50.
This has little effect on the go1 benchmarks (and the slight overall
improvement makes up for the slight overall slowdown from the previous
commit):
name old time/op new time/op delta
BinaryTree17-12 2.40s ± 0% 2.41s ± 1% +0.26% (p=0.010 n=18+19)
Fannkuch11-12 2.95s ± 0% 2.93s ± 0% -0.62% (p=0.000 n=18+15)
FmtFprintfEmpty-12 42.2ns ± 0% 42.3ns ± 1% +0.37% (p=0.001 n=15+14)
FmtFprintfString-12 67.9ns ± 2% 67.2ns ± 3% -1.03% (p=0.002 n=20+18)
FmtFprintfInt-12 75.6ns ± 3% 76.8ns ± 2% +1.59% (p=0.000 n=19+17)
FmtFprintfIntInt-12 123ns ± 1% 124ns ± 1% +0.77% (p=0.000 n=17+14)
FmtFprintfPrefixedInt-12 148ns ± 1% 150ns ± 1% +1.28% (p=0.000 n=20+20)
FmtFprintfFloat-12 212ns ± 0% 211ns ± 1% -0.67% (p=0.000 n=16+17)
FmtManyArgs-12 499ns ± 1% 500ns ± 0% +0.23% (p=0.004 n=19+16)
GobDecode-12 6.49ms ± 1% 6.51ms ± 1% +0.32% (p=0.008 n=19+19)
GobEncode-12 5.47ms ± 0% 5.43ms ± 1% -0.68% (p=0.000 n=19+20)
Gzip-12 220ms ± 1% 216ms ± 1% -1.66% (p=0.000 n=20+19)
Gunzip-12 38.8ms ± 0% 38.5ms ± 0% -0.80% (p=0.000 n=19+20)
HTTPClientServer-12 78.5µs ± 1% 78.1µs ± 1% -0.53% (p=0.008 n=20+19)
JSONEncode-12 12.2ms ± 0% 11.9ms ± 0% -2.38% (p=0.000 n=17+19)
JSONDecode-12 52.3ms ± 0% 53.3ms ± 0% +1.84% (p=0.000 n=19+20)
Mandelbrot200-12 3.69ms ± 0% 3.69ms ± 0% -0.19% (p=0.000 n=19+19)
GoParse-12 3.17ms ± 1% 3.19ms ± 1% +0.61% (p=0.000 n=20+20)
RegexpMatchEasy0_32-12 73.7ns ± 0% 73.2ns ± 1% -0.66% (p=0.000 n=17+20)
RegexpMatchEasy0_1K-12 238ns ± 0% 239ns ± 0% +0.32% (p=0.000 n=17+16)
RegexpMatchEasy1_32-12 69.1ns ± 1% 69.2ns ± 1% ~ (p=0.669 n=19+13)
RegexpMatchEasy1_1K-12 365ns ± 1% 367ns ± 1% +0.49% (p=0.000 n=19+19)
RegexpMatchMedium_32-12 104ns ± 1% 105ns ± 1% +1.33% (p=0.000 n=16+20)
RegexpMatchMedium_1K-12 33.6µs ± 3% 34.1µs ± 4% +1.67% (p=0.001 n=20+20)
RegexpMatchHard_32-12 1.67µs ± 1% 1.62µs ± 1% -2.78% (p=0.000 n=18+17)
RegexpMatchHard_1K-12 50.3µs ± 2% 48.7µs ± 1% -3.09% (p=0.000 n=19+18)
Revcomp-12 384ms ± 0% 386ms ± 0% +0.59% (p=0.000 n=19+19)
Template-12 61.1ms ± 1% 60.5ms ± 1% -1.02% (p=0.000 n=19+20)
TimeParse-12 307ns ± 0% 303ns ± 1% -1.23% (p=0.000 n=19+15)
TimeFormat-12 323ns ± 0% 323ns ± 0% -0.12% (p=0.011 n=15+20)
[Geo mean] 47.1µs 47.0µs -0.20%
https://perf.golang.org/search?q=upload:20171030.4
It slightly improves the performance of the x/benchmarks:
name old time/op new time/op delta
Garbage/benchmem-MB=1024-12 2.29ms ± 3% 2.22ms ± 2% -2.97% (p=0.000 n=18+18)
Garbage/benchmem-MB=64-12 2.24ms ± 2% 2.21ms ± 2% -1.64% (p=0.000 n=18+18)
HTTP-12 12.6µs ± 1% 12.6µs ± 1% ~ (p=0.690 n=19+17)
JSON-12 11.3ms ± 2% 11.3ms ± 1% ~ (p=0.163 n=17+18)
and fixes some of the heap size bloat caused by the previous commit:
name old peak-RSS-bytes new peak-RSS-bytes delta
Garbage/benchmem-MB=1024-12 1.88G ± 2% 1.77G ± 2% -5.52% (p=0.000 n=20+18)
Garbage/benchmem-MB=64-12 248M ± 8% 226M ± 5% -8.93% (p=0.000 n=20+20)
HTTP-12 47.0M ±27% 47.2M ±12% ~ (p=0.512 n=20+20)
JSON-12 206M ±11% 206M ±10% ~ (p=0.841 n=20+20)
https://perf.golang.org/search?q=upload:20171030.5
Combined with the change to add a soft goal in the previous commit,
this achieves a decent performance improvement on the garbage
benchmark:
name old time/op new time/op delta
Garbage/benchmem-MB=1024-12 2.40ms ± 4% 2.22ms ± 2% -7.40% (p=0.000 n=19+18)
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.21ms ± 2% -1.06% (p=0.000 n=19+18)
HTTP-12 12.5µs ± 1% 12.6µs ± 1% ~ (p=0.330 n=20+17)
JSON-12 11.1ms ± 1% 11.3ms ± 1% +1.87% (p=0.000 n=16+18)
https://perf.golang.org/search?q=upload:20171030.6
Change-Id: If04ddb57e1e58ef2fb9eec54c290eb4ae4bea121
Reviewed-on: https://go-review.googlesource.com/59971
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, GC pacing is based on a single hard heap limit computed
based on GOGC. In order to achieve this hard limit, assist pacing
makes the conservative assumption that the entire heap is live.
However, in the steady state (with GOGC=100), only half of the heap is
live. As a result, the garbage collector works twice as hard as
necessary and finishes halfway between the trigger and the goal.
Since this is a stable state for the trigger controller, this repeats
from cycle to cycle. Matters are even worse if GOGC is higher. For
example, if GOGC=200, only a third of the heap is live in steady
state, so the GC will work three times harder than necessary and
finish only a third of the way between the trigger and the goal.
Since this causes the garbage collector to consume ~50% of the
available CPU during marking instead of the intended 25%, about 25% of
the CPU goes to mutator assists. This high mutator assist cost causes
high mutator latency variability.
This commit improves the situation by separating the heap goal into
two goals: a soft goal and a hard goal. The soft goal is set based on
GOGC, just like the current goal is, and the hard goal is set at a 10%
larger heap than the soft goal. Prior to the soft goal, assist pacing
assumes the heap is in steady state (e.g., only half of it is live).
Between the soft goal and the hard goal, assist pacing switches to the
current conservative assumption that the entire heap is live.
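A small sketch of the two assumptions (invented helper and parameter names;
the 10% margin is the one stated above):

    package main

    import "fmt"

    // assumedLiveHeap returns how much heap the assist pacer should assume is
    // live, given where the live heap currently is relative to the goals.
    func assumedLiveHeap(heapLive, softGoal, scannableHeap int64, gogc float64) int64 {
        if heapLive <= softGoal {
            // Up to the soft goal, assume the steady-state live fraction
            // (half the heap at GOGC=100, a third at GOGC=200, ...).
            return int64(float64(scannableHeap) / (1 + gogc/100))
        }
        // Past the soft goal (and up to the hard goal, 10% larger), use the
        // old conservative assumption that the entire heap is live.
        return scannableHeap
    }

    func main() {
        fmt.Println(assumedLiveHeap(90<<20, 100<<20, 120<<20, 100))
    }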
In benchmarks, this nearly eliminates mutator assists. However, since
background marking is fixed at 25% CPU, this causes the trigger
controller to saturate, which leads to somewhat higher variability in
heap size. The next commit will address this.
The lower CPU usage of course leads to longer mark cycles, though
really it means the mark cycles are as long as they should have been
in the first place. This does, however, lead to two potential
downsides compared to the current pacing policy: 1. the total
overhead of the write barrier is higher because it's enabled more of
the time and 2. the heap size may be larger because there's more
floating garbage. We addressed 1 by significantly improving the
performance of the write barrier in the preceding commits. 2 can be
demonstrated in intense GC benchmarks, but doesn't seem to be a
problem in any real applications.
Updates #14951.
Updates #14812 (fixes?).
Fixes #18534.
This has no significant effect on the throughput of the
github.com/dr2chase/bent benchmarks-50.
This has little overall throughput effect on the go1 benchmarks:
name old time/op new time/op delta
BinaryTree17-12 2.41s ± 0% 2.40s ± 0% -0.22% (p=0.007 n=20+18)
Fannkuch11-12 2.95s ± 0% 2.95s ± 0% +0.07% (p=0.003 n=17+18)
FmtFprintfEmpty-12 41.7ns ± 3% 42.2ns ± 0% +1.17% (p=0.002 n=20+15)
FmtFprintfString-12 66.5ns ± 0% 67.9ns ± 2% +2.16% (p=0.000 n=16+20)
FmtFprintfInt-12 77.6ns ± 2% 75.6ns ± 3% -2.55% (p=0.000 n=19+19)
FmtFprintfIntInt-12 124ns ± 1% 123ns ± 1% -0.98% (p=0.000 n=18+17)
FmtFprintfPrefixedInt-12 151ns ± 1% 148ns ± 1% -1.75% (p=0.000 n=19+20)
FmtFprintfFloat-12 210ns ± 1% 212ns ± 0% +0.75% (p=0.000 n=19+16)
FmtManyArgs-12 501ns ± 1% 499ns ± 1% -0.30% (p=0.041 n=17+19)
GobDecode-12 6.50ms ± 1% 6.49ms ± 1% ~ (p=0.234 n=19+19)
GobEncode-12 5.43ms ± 0% 5.47ms ± 0% +0.75% (p=0.000 n=20+19)
Gzip-12 216ms ± 1% 220ms ± 1% +1.71% (p=0.000 n=19+20)
Gunzip-12 38.6ms ± 0% 38.8ms ± 0% +0.66% (p=0.000 n=18+19)
HTTPClientServer-12 78.1µs ± 1% 78.5µs ± 1% +0.49% (p=0.035 n=20+20)
JSONEncode-12 12.1ms ± 0% 12.2ms ± 0% +1.05% (p=0.000 n=18+17)
JSONDecode-12 53.0ms ± 0% 52.3ms ± 0% -1.27% (p=0.000 n=19+19)
Mandelbrot200-12 3.74ms ± 0% 3.69ms ± 0% -1.17% (p=0.000 n=18+19)
GoParse-12 3.17ms ± 1% 3.17ms ± 1% ~ (p=0.569 n=19+20)
RegexpMatchEasy0_32-12 73.2ns ± 1% 73.7ns ± 0% +0.76% (p=0.000 n=18+17)
RegexpMatchEasy0_1K-12 239ns ± 0% 238ns ± 0% -0.27% (p=0.000 n=13+17)
RegexpMatchEasy1_32-12 69.0ns ± 2% 69.1ns ± 1% ~ (p=0.404 n=19+19)
RegexpMatchEasy1_1K-12 367ns ± 1% 365ns ± 1% -0.60% (p=0.000 n=19+19)
RegexpMatchMedium_32-12 105ns ± 1% 104ns ± 1% -1.24% (p=0.000 n=19+16)
RegexpMatchMedium_1K-12 34.1µs ± 2% 33.6µs ± 3% -1.60% (p=0.000 n=20+20)
RegexpMatchHard_32-12 1.62µs ± 1% 1.67µs ± 1% +2.75% (p=0.000 n=18+18)
RegexpMatchHard_1K-12 48.8µs ± 1% 50.3µs ± 2% +3.07% (p=0.000 n=20+19)
Revcomp-12 386ms ± 0% 384ms ± 0% -0.57% (p=0.000 n=20+19)
Template-12 59.9ms ± 1% 61.1ms ± 1% +2.01% (p=0.000 n=20+19)
TimeParse-12 301ns ± 2% 307ns ± 0% +2.11% (p=0.000 n=20+19)
TimeFormat-12 323ns ± 0% 323ns ± 0% ~ (all samples are equal)
[Geo mean] 47.0µs 47.1µs +0.23%
https://perf.golang.org/search?q=upload:20171030.1
Likewise, the throughput effect on the x/benchmarks is minimal (and
reasonably positive on the garbage benchmark with a large heap):
name old time/op new time/op delta
Garbage/benchmem-MB=1024-12 2.40ms ± 4% 2.29ms ± 3% -4.57% (p=0.000 n=19+18)
Garbage/benchmem-MB=64-12 2.23ms ± 1% 2.24ms ± 2% +0.59% (p=0.016 n=19+18)
HTTP-12 12.5µs ± 1% 12.6µs ± 1% ~ (p=0.326 n=20+19)
JSON-12 11.1ms ± 1% 11.3ms ± 2% +2.15% (p=0.000 n=16+17)
It does increase the heap size of the garbage benchmarks, but seems to
have relatively little impact on more realistic programs. Also, we'll
gain some of this back with the next commit.
name old peak-RSS-bytes new peak-RSS-bytes delta
Garbage/benchmem-MB=1024-12 1.21G ± 1% 1.88G ± 2% +55.59% (p=0.000 n=19+20)
Garbage/benchmem-MB=64-12 168M ± 3% 248M ± 8% +48.08% (p=0.000 n=18+20)
HTTP-12 45.6M ± 9% 47.0M ±27% ~ (p=0.925 n=20+20)
JSON-12 193M ±11% 206M ±11% +7.06% (p=0.001 n=20+20)
https://perf.golang.org/search?q=upload:20171030.2
Change-Id: Ic78904135f832b4d64056cbe734ab979f5ad9736
Reviewed-on: https://go-review.googlesource.com/59970
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The compiler's instrumentation pass has some out-of-date comments
about the write barrier and some confusing comments about
typedslicecopy. Update these comments and add a comment to
typedslicecopy explaining why it's manually instrumented while none of
the other operations are.
Change-Id: I024e5361d53f1c3c122db0c85155368a30cabd6b
Reviewed-on: https://go-review.googlesource.com/74430
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The content-based staleness code means that
go run -gcflags=-l helloworld.go
recompiles all of helloworld.go's dependencies with -gcflags=-l,
whereas before it would have assumed installed packages were
up-to-date. In this test, that means every race iteration rebuilds
the runtime and maybe a few other packages. Instead, install them
to a temporary location for reuse.
This speeds the test up from 17s to 9s on my MacBook Pro.
Change-Id: Ied136ce72650261083bb19cc7dee38dac0ad05ca
Reviewed-on: https://go-review.googlesource.com/73992
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This cuts 23 seconds from all.bash on my MacBook Pro.
Change-Id: Ibc4d7c01660b9e9ebd088dd55ba993f0d7ec6aa3
Reviewed-on: https://go-review.googlesource.com/73991
Reviewed-by: Ian Lance Taylor <iant@golang.org>
If 'go install' doesn't use the same flags as the main build,
it can overwrite the installed standard library, leading to
flakiness and slow future tests.
Force uses of 'go install' etc. to propagate $GO_GCFLAGS,
or disable them entirely, to avoid problems.
As I understand it, the main place this happens is the ssacheck builder.
If there are other uses that need to run some of the now-disabled
tests we can reenable fixed tests in followup CLs.
Change-Id: Ib860a253539f402f8a96a3c00ec34f0bbf137c9a
Reviewed-on: https://go-review.googlesource.com/74470
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Memory accesses on z are at least as ordered as they are on AMD64.
Change-Id: Ia515430e571ebd07e9314de05c54dc992ab76b95
Reviewed-on: https://go-review.googlesource.com/74010
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Munday <mike.munday@ibm.com>
This modifies bulkBarrierPreWrite to use the buffered write barrier
instead of the eager write barrier. This reduces the number of system
stack switches and sanity checks by a factor of the buffer size
(currently 256). This affects both typedmemmove and typedmemclr.
Since this is purely a runtime change, it applies to all arches
(unlike the pointer write barrier).
name old time/op new time/op delta
BulkWriteBarrier-12 7.33ns ± 6% 4.46ns ± 9% -39.10% (p=0.000 n=20+19)
Updates #22460.
Change-Id: I6a686a63bbf08be02b9b97250e37163c5a90cdd8
Reviewed-on: https://go-review.googlesource.com/73832
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, typedslicecopy meticulously performs a typedmemmove on
every element of the slice. This probably used to be necessary because
we only had an individual element's type, but now we use the heap
bitmap, so we only need to know whether the type has any pointers and
how big it is. Hence, this CL rewrites typedslicecopy to simply
perform one bulk barrier and one memmove.
This also has a side-effect of eliminating two unnecessary write
barriers per slice element that were coming from updates to dstp and
srcp, which were stored in the parent stack frame. However, most of
the win comes from eliminating the loops.
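The new shape, roughly (placeholder helper standing in for the runtime's
bulk barrier; not the actual implementation):

    package main

    import "fmt"

    // bulkBarrierPreWrite stands in for the runtime's bulk write barrier; here
    // it is a no-op so the sketch compiles on its own.
    func bulkBarrierPreWrite(dst, src []int) {}

    // typedslicecopySketch shows the new structure: one bulk barrier over the
    // destination range, then one bulk copy, instead of a typedmemmove per
    // element.
    func typedslicecopySketch(dst, src []int, hasPointers bool) int {
        n := len(src)
        if n > len(dst) {
            n = len(dst)
        }
        if hasPointers {
            bulkBarrierPreWrite(dst[:n], src[:n])
        }
        return copy(dst, src[:n]) // single memmove-style bulk copy
    }

    func main() {
        dst := make([]int, 4)
        fmt.Println(typedslicecopySketch(dst, []int{1, 2, 3}, false))
    }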
name old time/op new time/op delta
BulkWriteBarrier-12 7.83ns ±10% 7.33ns ± 6% -6.45% (p=0.000 n=20+20)
Updates #22460.
Change-Id: Id3450e9f36cc8e0892f268319b136f0d8f5464b8
Reviewed-on: https://go-review.googlesource.com/73831
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This adds a benchmark of typedslicecopy and its bulk write barriers.
For #22460.
Change-Id: I439ca3b130bb22944468095f8f18b464e5bb43ca
Reviewed-on: https://go-review.googlesource.com/74051
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This implements runtime support for buffered write barriers on amd64.
The buffered write barrier has a fast path that simply enqueues
pointers in a per-P buffer. Unlike the current write barrier, this
fast path is *not* a normal Go call and does not require the compiler
to spill general-purpose registers or put arguments on the stack. When
the buffer fills up, the write barrier takes the slow path, which
spills all general purpose registers and flushes the buffer. We don't
allow safe-points or stack splits while this frame is active, so it
doesn't matter that we have no type information for the spilled
registers in this frame.
One minor complication is cgocheck=2 mode, which uses the write
barrier to detect Go pointers being written to non-Go memory. We
obviously can't buffer this, so instead we set the buffer to its
minimum size, forcing the write barrier into the slow path on every
call. For this specific case, we pass additional information as
arguments to the flush function. This also requires enabling the cgo
write barrier slightly later during runtime initialization, after Ps
(and the per-P write barrier buffers) have been initialized.
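A Go-level sketch of the buffering idea (the real fast path is assembly with
special register conventions; the names and buffer size here are
illustrative):

    package main

    // wbBuf models the per-P write barrier buffer.
    type wbBuf struct {
        next int
        buf  [256]uintptr // pointers waiting to be handed to the GC
    }

    // put is the fast path: record the pointer and only take the slow path
    // (flush) when the buffer fills up.
    func (b *wbBuf) put(p uintptr, flush func([]uintptr)) {
        b.buf[b.next] = p
        b.next++
        if b.next == len(b.buf) {
            // Slow path: drain the batch (the real barrier also spills
            // registers and switches to the system stack here).
            flush(b.buf[:b.next])
            b.next = 0
        }
    }

    func main() {
        var b wbBuf
        for i := 0; i < 1000; i++ {
            b.put(uintptr(i), func(ps []uintptr) { /* enqueue ps for marking */ })
        }
    }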
The code in this CL is not yet active. The next CL will modify the
compiler to generate calls to the new write barrier.
This reduces the average cost of the write barrier by roughly a factor
of 4, which will pay for the cost of having it enabled more of the
time after we make the GC pacer less aggressive. (Benchmarks will be
in the next CL.)
Updates #14951.
Updates #22460.
Change-Id: I396b5b0e2c5e5c4acfd761a3235fd15abadc6cb1
Reviewed-on: https://go-review.googlesource.com/73711
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently systemstack always calls its argument, even if we're already
on the system stack. Unfortunately, traceback with _TraceJump stops at
the first systemstack it sees, which often cuts off runtime stacks
early in profiles.
Fix this by performing a tail call if we're already on the system
stack. This eliminates it from the traceback entirely, so it won't
stop prematurely (or all get mushed into a single node in the profile
graph).
Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4
Reviewed-on: https://go-review.googlesource.com/74050
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Errors occur in runtime test testCgoPprofPIE when the test
is built by passing -pie to the external linker with code
that was not built as PIC. This occurs on ppc64le because
non-PIC is the default, and fails only on newer distros
where the address range used for programs is high enough
to cause relocation overflow. This test should be built
with -buildmode=pie since that correctly generates PIC
with -pie.
Related issues are #21954 and #22126.
Updates #22459
Change-Id: Ib641440bc9f94ad2b97efcda14a4b482647be8f7
Reviewed-on: https://go-review.googlesource.com/73970
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
recordspan has two remaining write barriers from writing to the
pointer to the backing store of h.allspans. However, h.allspans is
always backed by off-heap memory, so let the compiler know this.
Unfortunately, this isn't quite as clean as most go:notinheap uses
because we can't directly name the backing store of a slice, but we
can get it done with some judicious casting.
For #22460.
Change-Id: I296f92fa41cf2cb6ae572b35749af23967533877
Reviewed-on: https://go-review.googlesource.com/73414
Reviewed-by: Rick Hudson <rlh@golang.org>
We're about to start tracking nowritebarrierrec through systemstack
calls, which detects that we're calling markroot (which has write
barriers) from gchelper, which is called from the scheduler during STW
apparently without a P.
But it turns out that func helpgc, which wakes up blocked Ms to run
gchelper, installs a P for gchelper to use. This means there *is* a P
when gchelper runs, so it is allowed to have write barriers. Tell the
compiler this by marking gchelper go:yeswritebarrierrec. Also,
document the call to gchelper so I don't have to spend another half a
day puzzling over how on earth this could possibly work before
discovering the spooky action-at-a-distance in helpgc.
Updates #22384.
For #22460.
Change-Id: I7394c9b4871745575f87a2d4fbbc5b8e54d669f7
Reviewed-on: https://go-review.googlesource.com/72772
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
We're about to start tracking nowritebarrierrec through systemstack
calls, which will reveal write barriers in persistentalloc prohibited
by various callers.
The pointers manipulated by persistentalloc are always to off-heap
memory, so this removes these write barriers statically by introducing
a new go:notinheap type to represent generic off-heap memory.
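The idea, sketched (the go:notinheap pragma is only usable inside the
runtime, so here it is merely noted in a comment):

    package main

    import "unsafe"

    // notInHeap represents memory that is never allocated from the Go heap.
    // In the runtime this type carries the go:notinheap pragma, which lets
    // the compiler drop write barriers for pointers to it.
    type notInHeap struct{ _ uint8 }

    // add offsets an off-heap pointer by bytes, the kind of arithmetic
    // persistentalloc does when carving allocations out of a large block.
    func (p *notInHeap) add(bytes uintptr) *notInHeap {
        return (*notInHeap)(unsafe.Pointer(uintptr(unsafe.Pointer(p)) + bytes))
    }

    func main() {}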
Updates #22384.
For #22460.
Change-Id: Id449d9ebf145b14d55476a833e7f076b0d261d57
Reviewed-on: https://go-review.googlesource.com/72771
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
We're about to start tracking nowritebarrierrec through systemstack
calls, which will reveal write barriers in startpanic_m prohibited by
various callers.
We actually can allow write barriers here because the write barrier is
a no-op when we're panicking. Let the compiler know.
Updates #22384.
For #22460.
Change-Id: Ifb3a38d3dd9a4125c278c3680f8648f987a5b0b8
Reviewed-on: https://go-review.googlesource.com/72770
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently most of these are marked go:nowritebarrier as a hint, but
it's actually important that these not invoke write barriers
recursively. The danger is that some gcWork method would invoke the
write barrier while the gcWork is in an inconsistent state and that
the write barrier would in turn invoke some other gcWork method, which
would crash or permanently corrupt the gcWork. Simply marking the
write barrier itself as go:nowritebarrierrec isn't sufficient to
prevent this if the write barrier doesn't use the outer method.
Thankfully, this doesn't cause any build failures, so we were getting
this right. :)
For #22460.
Change-Id: I35a7292a584200eb35a49507cd3fe359ba2206f6
Reviewed-on: https://go-review.googlesource.com/72554
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, newstack and gogo have write barriers for maintaining the
context register saved in g.sched.ctxt. This is troublesome, because
newstack can be called from go:nowritebarrierrec places that can't
allow write barriers. It happens to be benign because g.sched.ctxt
will always be nil on entry to newstack *and* it so happens the
incoming ctxt will also always be nil in these contexts (I
think/hope), but this is playing with fire. It's also desirable to
mark newstack go:nowritebarrierrec to prevent any other, non-benign
write barriers from creeping in, but we can't do that right now
because of this one write barrier.
Fix all of this by observing that g.sched.ctxt is really just a saved
live pointer register. Hence, we can shade it when we scan g's stack
and otherwise move it back and forth between the actual context
register and g.sched.ctxt without write barriers. This means we can
save it in morestack along with all of the other g.sched, eliminate
the save from newstack along with its troublesome write barrier, and
eliminate the shenanigans in gogo to invoke the write barrier when
restoring it.
Once we've done all of this, we can mark newstack
go:nowritebarrierrec.
Fixes #22385.
For #22460.
Change-Id: I43c24958e3f6785b53c1350e1e83c2844e0d1522
Reviewed-on: https://go-review.googlesource.com/72553
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
TestParallelRWMutexReaders has a non-preemptible loop in it that can
deadlock if GC triggers. "Fix" it like we've fixed similar tests.
Updates #10958.
Change-Id: I13618f522f5ef0c864e7171ad2f655edececacd7
Reviewed-on: https://go-review.googlesource.com/73710
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Otherwise low-res timers cause problems at call sites that expect to
be able to use 0 as meaning "no time set" and therefore expect that
nanotime never returns 0 itself. For example, sched.lastpoll == 0
means no last poll.
Fixes #22394.
Change-Id: Iea28acfddfff6f46bc90f041ec173e0fea591285
Reviewed-on: https://go-review.googlesource.com/73410
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The BigEndian constant is only used in boolean context so assign it
boolean constants.
Change-Id: If19d61dd71cdfbffede1d98b401f11e6535fba59
Reviewed-on: https://go-review.googlesource.com/73270
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Make netpollopen return what the Windows GetLastError API returns.
It was probably a copy/paste error from a long time ago.
Change-Id: I28f78718c15fef3e8b5f5d11a259533d7e9c6185
Reviewed-on: https://go-review.googlesource.com/72592
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Increasing the map size with the benchmark iteration count
introduced non-linearities and made benchmark runs slow when
increasing benchtime.
Rework the benchmark to use a map size independent of the
iteration count and instead re-fill it when it becomes empty.
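The reworked pattern looks roughly like this (a sketch in a _test.go file,
not the exact benchmark in the tree):

    package maps_test

    import "testing"

    func BenchmarkMapDelete(b *testing.B) {
        const size = 1000 // fixed map size, independent of b.N
        m := make(map[int]int, size)
        b.ResetTimer()
        k := 0
        for i := 0; i < b.N; i++ {
            if len(m) == 0 {
                // Re-fill the drained map outside the timed region so each
                // iteration still measures a single delete.
                b.StopTimer()
                for j := 0; j < size; j++ {
                    m[j] = j
                }
                k = 0
                b.StartTimer()
            }
            delete(m, k)
            k++
        }
    }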
Fixes #21546
Change-Id: Iafb6eb225e81830263f30b3aba0d449c361aec32
Reviewed-on: https://go-review.googlesource.com/57650
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
On systems that use kqueue, we always register descriptors for both
EVFILT_READ and EVFILT_WRITE. On at least FreeBSD and OpenBSD, when
the write end of a pipe is registered for EVFILT_READ and EVFILT_WRITE
events, and the read end of the pipe is closed, kqueue reports an
EVFILT_READ event with EV_EOF set, but does not report an EVFILT_WRITE
event. Since the write to the pipe is waiting for an EVFILT_WRITE
event, closing the read end of a pipe can cause the write end to hang
rather than attempt another write which will fail with EPIPE.
Fix this by treating EVFILT_READ with EV_EOF set as making both reads
and writes ready to proceed.
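Conceptually, the netpoll change looks like this (simplified event fields
and constants, not the real kevent plumbing):

    package main

    // Simplified kevent constants for illustration.
    const (
        evfiltRead  = -1
        evfiltWrite = -2
        evEOF       = 0x8000
    )

    // readyMode returns which directions ('r', 'w', or both) an event should
    // mark as ready.
    func readyMode(filter int16, flags uint16) (mode int32) {
        switch filter {
        case evfiltRead:
            mode += 'r'
            // EVFILT_READ with EV_EOF set also means writes can proceed:
            // the peer (e.g. the read end of a pipe) is gone, so a write
            // will fail with EPIPE instead of blocking forever.
            if flags&evEOF != 0 {
                mode += 'w'
            }
        case evfiltWrite:
            mode += 'w'
        }
        return mode
    }

    func main() {
        _ = readyMode(evfiltRead, evEOF)
    }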
The real test for this is in CL 71770, which tests using various
timeouts with pipes.
Updates #22114
Change-Id: Ib23fbaaddbccd8eee77bdf18f27a7f0aa50e2742
Reviewed-on: https://go-review.googlesource.com/71973
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently mmap returns an unsafe.Pointer that encodes OS errors as
values less than 4096. In practice this is okay, but it borders on
being really unsafe: for example, the value has to be checked
immediately after return and if stack copying were ever to observe
such a value, it would panic. It's also not remotely idiomatic.
Fix this by making mmap return a separate pointer value and error,
like a normal Go function.
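The new shape, roughly (a Unix-only sketch using the syscall package rather
than the runtime's raw wrapper, which returns the pointer plus an
errno-style value):

    package main

    import (
        "fmt"
        "syscall"
        "unsafe"
    )

    // mmapSketch shows the new calling convention: a pointer and an error,
    // instead of a pointer whose small values secretly encode errors.
    func mmapSketch(n uintptr) (unsafe.Pointer, error) {
        p, err := syscall.Mmap(-1, 0, int(n),
            syscall.PROT_READ|syscall.PROT_WRITE,
            syscall.MAP_ANON|syscall.MAP_PRIVATE)
        if err != nil {
            return nil, err
        }
        return unsafe.Pointer(&p[0]), nil
    }

    func main() {
        p, err := mmapSketch(1 << 16)
        fmt.Println(p, err)
    }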
Updates #22218.
Change-Id: Iefd965095ffc82cc91118872753a5d39d785c3a6
Reviewed-on: https://go-review.googlesource.com/71270
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Fixes the Darwin 386 build. It turns out that the Darwin pthread_create
function saves the SSE registers, and therefore requires an aligned stack.
This worked before https://golang.org/cl/70530 because the stack sizes
were chosen to leave the stack aligned.
Change-Id: I911a9e8dcde4e41e595d5ef9b9a1ca733e154de6
Reviewed-on: https://go-review.googlesource.com/71432
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
CL 46037 and CL 46038 implemented termination of
locked OS threads when the goroutine exits.
However, this behavior leads to crashes of Go programs
using runtime.LockOSThread on Plan 9. This is notably
the case of the os/exec and net packages.
This change disables termination of locked OS threads
on Plan 9.
Updates #22227.
Change-Id: If9fa241bff1c0b68e7e9e321e06e5203b3923212
Reviewed-on: https://go-review.googlesource.com/71230
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
CL 46033 added a "template thread" mechanism to
allow creation of a thread with a known-good state
from a thread of unknown state.
However, we are experiencing issues on Plan 9
with programs using the os/exec and net packages.
These packages rely on runtime.LockOSThread.
Updates #22227.
Change-Id: I85b71580a41df9fe8b24bd8623c064b6773288b0
Reviewed-on: https://go-review.googlesource.com/70231
Run-TryBot: David du Colombier <0intro@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Unify the 386 entry point code as much as possible.
The main function could not be unified because on Windows 386 it is
called _main. Putting main in asm_386.s caused multiple definition
errors when using the external linker.
Add the _lib entry point to various operating systems. A future CL
will enable c-archive/c-shared mode for those targets.
Fix _rt0_386_windows_lib_go: it was passing arguments as though it
were amd64.
Change-Id: Ic73f1c95cdbcbea87f633f4a29bbc218a5db4f58
Reviewed-on: https://go-review.googlesource.com/70530
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Use TEXT pseudo-instruction to adjust SP instead of a SUB instruction
so that the assembler knows how to fill in the pcsp table and the frame
description entry correctly.
Updates #21569
Change-Id: I436c840b2af99bbb3042ecd38a7d7c1ab4d7372a
Reviewed-on: https://go-review.googlesource.com/70937
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The code was commented out by https://golang.org/cl/13234050 in 2013.
Let's just remove it.
Change-Id: I46ae1f07386719e991458e782d236214c40bdce1
Reviewed-on: https://go-review.googlesource.com/70770
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
STREX does not permit using the same register for the value to store
and the place where the result is returned. Also the code was wrong
anyhow if the first store failed.
Fixes #22248
Change-Id: I96013497410058514ffcb771c76c86faa1ec559b
Reviewed-on: https://go-review.googlesource.com/70911
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently only a single P can run a fractional mark worker at a time.
This doesn't let us spread out the load, so it gets concentrated on
whatever unlucky P picks up the token to run a fractional worker. This
can significantly delay goroutines on that P.
This commit changes this scheduling rule so each P separately
schedules fractional workers. This can significantly reduce the load
on any individual P and allows workers to self-preempt earlier. It
does have the downside that it's possible for all Ps to be in
fractional workers simultaneously (in effect, a STW).
Updates #21698.
Change-Id: Ia1e300c422043fa62bb4e3dd23c6232d81e4419c
Reviewed-on: https://go-review.googlesource.com/68574
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently fractional workers run until preempted by the scheduler,
which means they typically run for 20ms. During this time, all other
goroutines on that P are blocked, which can introduce significant
latency variance.
This modifies fractional workers to self-preempt shortly after
achieving the fractional utilization goal. In practice this means they
preempt much sooner, and the scale of their preemption is on the order
of how often user goroutines block (so, if the application is
compute-bound, the fractional workers will also run for long times,
but if the application blocks frequently, the fractional workers will
also preempt quickly).
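The self-preemption check is, in spirit, a small ratio test (invented names;
the real code keeps these times on the P and the GC controller):

    package main

    import "fmt"

    // shouldYield reports whether this P's fractional worker has already met
    // its share of the fractional utilization goal for the mark phase so far,
    // in which case it preempts itself rather than waiting for the
    // scheduler's ~20ms preemption.
    func shouldYield(markStart, now, fractionalRunTime int64, fractionalGoal float64) bool {
        elapsed := now - markStart
        return elapsed > 0 && float64(fractionalRunTime) >= fractionalGoal*float64(elapsed)
    }

    func main() {
        // 2ms of fractional work over 10ms of marking against an 8% goal:
        // 0.2 >= 0.08, so the worker yields.
        fmt.Println(shouldYield(0, 10e6, 2e6, 0.08))
    }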
Fixes #21698.
Updates #18534.
Change-Id: I03a5ab195dae93154a46c32083c4bb52415d2017
Reviewed-on: https://go-review.googlesource.com/68573
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
We haven't used non-zero gcForcePreemptNS for ages. Remove it and
declutter the code.
Change-Id: Id5cc62f526d21ca394d2b6ca17d34a72959535da
Reviewed-on: https://go-review.googlesource.com/68572
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
When GOMAXPROCS is not small, fractional workers don't add much to
throughput, but they do add to the latency of individual goroutines.
In this case, it makes sense to just use dedicated workers, even if we
can't exactly hit the 25% CPU goal with dedicated workers.
This implements this logic by computing the number of dedicated mark
workers that will get us closest to the 25% target. We only fall back to
fractional workers if that would be more than 30% off of the target
(less than 17.5% or more than 32.5%, which in practice happens for
GOMAXPROCS <= 3 and GOMAXPROCS == 6).
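The decision described above, sketched (assumed constant and helper names;
roughly what the pacer would do at the start of a cycle):

    package main

    import (
        "fmt"
        "math"
    )

    // markWorkerPlan picks the whole number of dedicated mark workers closest
    // to the 25% utilization target, and only uses a fractional worker if the
    // rounding error would exceed 30% of the target.
    func markWorkerPlan(gomaxprocs int) (dedicated int, fractionalGoal float64) {
        const goalUtilization = 0.25
        target := goalUtilization * float64(gomaxprocs)
        dedicated = int(target + 0.5) // round to the nearest whole worker
        if math.Abs(float64(dedicated)-target) > 0.3*target {
            // Too far off target: drop a dedicated worker (if we rounded up)
            // and cover the remainder with a per-P fractional goal.
            if float64(dedicated) > target {
                dedicated--
            }
            fractionalGoal = (target - float64(dedicated)) / float64(gomaxprocs)
        }
        return dedicated, fractionalGoal
    }

    func main() {
        for _, n := range []int{1, 2, 3, 4, 6, 8, 12} {
            d, f := markWorkerPlan(n)
            fmt.Printf("GOMAXPROCS=%-2d dedicated=%d fractional=%.3f\n", n, d, f)
        }
    }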
Updates #21698.
Change-Id: I484063adeeaa1190200e4ef210193a20e635d552
Reviewed-on: https://go-review.googlesource.com/68571
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently these are the same constant, but are separate concepts.
Split them into two constants for easier experimentation and better
documentation.
Change-Id: I121854d4fd1a4a827f727c8e5153160c24aacda7
Reviewed-on: https://go-review.googlesource.com/68570
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This change adds support for accelerating time.Now by using
the __vdso_clock_gettime fast-path via the vDSO on linux/386
if it is available.
When the vDSO path to the clocks is available, it is typically
5x-10x faster than the syscall path (see benchmark extract
below). Two such calls are made for each time.Now() call
on most platforms as of Go 1.9.
- Add vdso_linux_386.go, containing the ELF32 definitions
for use by vdso_linux.go, the maximum array size, and
the symbols to be located in the vDSO.
- Modify runtime.walltime and runtime.nanotime to check for
and use the vDSO fast-path if available, or fall back to
the existing syscall path.
- Reduce the stack reservations for runtime.walltime and
runtime.nanotime from 32 to 16 bytes. It appears the syscall
path actually only needed 8 bytes, but 16 is now needed to
cover the syscall and vDSO paths.
- Remove clearing DX from the syscall paths as clock_gettime
only takes 2 args (BX, CX in syscall calling convention),
so there should be no need to clear DX.
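In outline, the walltime/nanotime dispatch described in the second item
above looks like this (a Go-level sketch; the actual fast path is assembly):

    package main

    import "fmt"

    // vdsoClockGettimeSym holds the address of __vdso_clock_gettime if the
    // symbol was located in the vDSO at startup, or 0 otherwise.
    var vdsoClockGettimeSym uintptr

    // clockGettimeViaVDSO and clockGettimeViaSyscall are stand-ins for the
    // two real paths; both would return nanoseconds.
    func clockGettimeViaVDSO(sym uintptr) int64 { return 0 }
    func clockGettimeViaSyscall() int64         { return 0 }

    // nanotimeSketch picks the vDSO fast path when available and otherwise
    // falls back to the ordinary clock_gettime syscall.
    func nanotimeSketch() int64 {
        if sym := vdsoClockGettimeSym; sym != 0 {
            return clockGettimeViaVDSO(sym) // no kernel entry
        }
        return clockGettimeViaSyscall()
    }

    func main() {
        fmt.Println(nanotimeSketch())
    }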
The included BenchmarkTimeNow was run with -cpu=1 -count=20
on an "Intel(R) Celeron(R) CPU J1900 @ 1.99GHz", comparing
released go 1.9.1 vs this change. This shows a gain in
performance on linux/386 (6.89x), and that no regression
occurred on linux/amd64 due to this change.
Kernel: linux/i686, GOOS=linux GOARCH=386
name old time/op new time/op delta
TimeNow 978ns ± 0% 142ns ± 0% -85.48% (p=0.000 n=16+20)
Kernel: linux/x86_64, GOOS=linux GOARCH=amd64
name old time/op new time/op delta
TimeNow 125ns ± 0% 125ns ± 0% ~ (all equal)
Gains are more dramatic in virtualized environments,
presumably due to the overhead of virtualizing the syscall.
Fixes #22190
Change-Id: I2f83ce60cb1b8b310c9ced0706bb463c1b3aedf8
Reviewed-on: https://go-review.googlesource.com/69390
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This is a preparation step for adding vDSO support on linux/386.
This change relocates the elf64 and amd64 specifics from
vdso_linux.go to a new vdso_linux_amd64.go.
This should enable vdso_linux.go to be used for vDSO
support on linux architectures other than amd64.
- Relocate the elf64X structure definitions appropriate to amd64,
and change their names to elfX so that the code in vdso_linux.go
is ELFnn-agnostic.
- Relocate the sym_keys and corresponding __vdso_* variables
appropriate to amd64.
- Provide an amd64-specific constant for the maximum byte size of
an array, and use this in vdso_linux.go to compute constants for
sizing the elf structure arrays traversed in the loaded vDSO.
Change-Id: I1edb4e4ec9f2d79b7533aa95fbd09f771fa4edef
Reviewed-on: https://go-review.googlesource.com/69391
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently we look to see if the main.main symbol address is in the
module data text range. This requires access to the main.main
symbol, which the runtime usually has, but does not when building
a plugin.
To avoid a dynamic relocation to main.main (which I haven't worked
out how to have the linker generate on darwin), stop using the
symbol. Instead record a boolean in the moduledata if the module
has the main function.
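Sketched with illustrative names, the check becomes a per-module flag rather
than an address comparison:

    package main

    // moduledataSketch models just the fields relevant here; in the runtime
    // the linker emits the real moduledata, including the new boolean.
    type moduledataSketch struct {
        text, etext uintptr
        hasmain     bool // true for the module that contains the main function
    }

    // modules stands in for the runtime's list of loaded modules.
    var modules []*moduledataSketch

    // mainModule finds the module containing the main function without ever
    // naming the main.main symbol (and so without a dynamic relocation to it).
    func mainModule() *moduledataSketch {
        for _, md := range modules {
            if md.hasmain {
                return md
            }
        }
        return nil
    }

    func main() {
        _ = mainModule()
    }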
Fixes #22175
Change-Id: If313a118f17ab499d0a760bbc2519771ed654530
Reviewed-on: https://go-review.googlesource.com/69370
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The .debug_aranges section is an odd vestige of DWARF, since its
contents are easy and efficient for a debugger to reconstruct from the
attributes of the top-level compilation unit DIEs. Neither GCC nor
clang emit it by default these days. GDB and Delve ignore it entirely.
LLDB will use it if present, but is happy to construct the index from
the compilation unit attributes (and, indeed, a remarkable variety of
other ways if those aren't available either).
We're about to split up the compilation units by package, which means
they'll have discontiguous PC ranges, which is going to make
.debug_aranges harder to construct (and larger).
Rather than try to maintain this essentially unused code, let's
simplify things and remove it.
Change-Id: I8e0ccc033b583b5b8908cbb2c879b2f2d5f9a50b
Reviewed-on: https://go-review.googlesource.com/69972
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
The alternative signal stack doesn't work on ios, so the setup of
the alternative stack was skipped. The corresponding unminitSignals
was effectively a no-op on ios until CL 70130. Skip unminitSignals
on ios to restore the previous behaviour.
For the ios builders.
Change-Id: I5692ca7f5997e6b9d10cc5f2383a5a37c42b133c
Reviewed-on: https://go-review.googlesource.com/70270
Run-TryBot: Elias Naur <elias.naur@gmail.com>
Reviewed-by: Austin Clements <austin@google.com>
CL 69292 unified the amd64 entry-points, but Dragonfly doesn't follow
the same entry-point argument conventions as most other amd64
platforms. Fix the Dragonfly entry point.
Change-Id: I0f84e2e4101ce68217af185ee9baaf455b8b6dad
Reviewed-on: https://go-review.googlesource.com/70212
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>