Currently, it's possible for the next_gc calculation to underflow.
Since next_gc is unsigned, this wraps around and effectively disables
GC for the rest of the program's execution. Besides being obviously
wrong, this is causing test failures on 32-bit because some tests are
running out of heap.
This underflow happens for two reasons, both having to do with how we
estimate the reachable heap size at the end of the GC cycle.
One reason is that this calculation depends on the value of heap_live
at the beginning of the GC cycle, but we currently only record that
value during a concurrent GC and not during a forced STW GC. Fix this
by moving the recorded value from gcController to work and recording
it on a common code path.
The other reason is that we use the amount of allocation during the GC
cycle as an approximation of the amount of floating garbage and
subtract it from the marked heap to estimate the reachable heap.
However, since this is only an approximation, it's possible for the
amount of allocation during the cycle to be *larger* than the marked
heap size (since the runtime allocates white and it's possible for
these allocations to never be made reachable from the heap). Currently
this causes wrap-around in our estimate of the reachable heap size,
which in turn causes wrap-around in next_gc. Fix this by bottoming out
the reachable heap estimate at 0, in which case we just fall back to
triggering GC at heapminimum (which is okay since this only happens on
small heaps).
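A minimal sketch of the clamped estimate, with invented names (the
real logic lives in the runtime's GC controller):

    // estimateReachable bottoms out at 0 instead of wrapping around.
    // marked is the heap marked by this cycle; allocDuringGC is the
    // allocation during the cycle, our proxy for floating garbage.
    func estimateReachable(marked, allocDuringGC uint64) uint64 {
            if allocDuringGC > marked {
                    // The runtime allocates white, so allocation during
                    // the cycle can exceed the marked heap on small heaps.
                    // Returning 0 makes the trigger fall back to heapminimum.
                    return 0
            }
            return marked - allocDuringGC
    }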
Fixes #10555, fixes #10556, and fixes #10559.
Change-Id: Iad07b529c03772356fede2ae557732f13ebfdb63
Reviewed-on: https://go-review.googlesource.com/9286
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
To achieve a 2% improvement in the garbage benchmark, this CL removes
an unneeded assert and avoids one hbits.next() call per object
being scanned.
Change-Id: Ibd542d01e9c23eace42228886f9edc488354df0d
Reviewed-on: https://go-review.googlesource.com/9244
Reviewed-by: Austin Clements <austin@google.com>
newosproc0 does not work on android/arm.
See issue #10548.
Change-Id: Ieaf6f5d0b77cddf5bf0b6c89fd12b1c1b8723f9b
Reviewed-on: https://go-review.googlesource.com/9293
Reviewed-by: David Crawshaw <crawshaw@golang.org>
github.com/kr/goven says it's deprecated, and in any case
it would be preferable to point users to a standard Go tool.
Change-Id: Iac4a0d13233604a36538748d498f5770b2afce19
Reviewed-on: https://go-review.googlesource.com/8969
Reviewed-by: Minux Ma <minux@golang.org>
If the machine's network configuration files (resolv.conf,
nsswitch.conf) don't have any unsupported options, prefer Go's DNS
resolver, which doesn't have the cgo & thread overhead.
It means users can have more than 500 DNS requests outstanding (our
current limit for cgo lookups) and not have one blocked thread per
outstanding request.
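As a hypothetical sketch of that selection policy (the option and
source names here are illustrative, not the net package's actual
lists):

    // canUseGoResolver reports whether the pure Go resolver can be
    // used, based on parsed resolv.conf options and nsswitch.conf
    // sources. Anything unrecognized forces the cgo resolver.
    func canUseGoResolver(resolvOpts, nsswitchSources []string) bool {
            supported := map[string]bool{
                    "ndots": true, "timeout": true, "attempts": true, "rotate": true,
            }
            for _, o := range resolvOpts {
                    if !supported[o] {
                            return false // unknown option: fall back to cgo
                    }
            }
            for _, s := range nsswitchSources {
                    if s != "files" && s != "dns" {
                            return false // e.g. "mdns" or "nis": fall back to cgo
                    }
            }
            return true
    }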
Discussed in thread https://groups.google.com/d/msg/golang-dev/2ZUi792oztM/Q0rg_DkF5HMJ
Change-Id: I3f685d70aff6b47bec30b63e9fba674b20507f95
Reviewed-on: https://go-review.googlesource.com/8945
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
They are vestiges of the c2go translation.
Change-Id: I9a10536f5986b751a35cc7d84b5ba69ae0c2ede7
Reviewed-on: https://go-review.googlesource.com/9262
Reviewed-by: Minux Ma <minux@golang.org>
There is an assumption that the function executed in child thread
created by runtime.clone should not return. And different systems
enforce that differently: some exit that thread, some exit the
whole process.
The test TestNewOSProc0 introduced in CL 9161 breaks that assumption,
so we need to adjust the code to only exit the thread should the
called function return.
Change-Id: Id631cb2f02ec6fbd765508377a79f3f96c6a2ed6
Reviewed-on: https://go-review.googlesource.com/9246
Reviewed-by: Dave Cheney <dave@cheney.net>
The note that log/syslog is not supported on Windows was only visible
when godoc was run with an explicit GOOS=windows, which is less
useful for people developing portable applications on non-Windows
platforms.
Also added a note that log/syslog is not supported on NaCl.
Change-Id: I81650445fb2a5ee161da7e0608c3d3547d5ac2a6
Reviewed-on: https://go-review.googlesource.com/9245
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The constants in cmd/internal/goobj had gone stale. (We had three
copies of these constants; working on reducing that is what got me to
notice this.)
Some of the changes to link.hello.darwin.amd64 are the change from absolute
to %rip-relative addressing, a change which happened quite a while ago...
Depends on http://golang.org/cl/9113.
Fixes #10501.
Change-Id: Iaa1511f458a32228c2df2ccd0076bb9ae212a035
Reviewed-on: https://go-review.googlesource.com/9105
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, we set the heap goal for the next GC cycle using the size
of the marked heap at the end of the current cycle. This can lead to a
bad feedback loop if the mutator is rapidly allocating and releasing
pointers that can significantly bloat heap size.
If the GC were STW, the marked heap size would be exactly the
reachable heap size (call it stwLive). However, in concurrent GC,
marked=stwLive+floatLive, where floatLive is the amount of "floating
garbage": objects that were reachable at some point during the cycle
and were marked, but which are no longer reachable by the end of the
cycle. If the GC cycle is short, then the mutator doesn't have much
time to create floating garbage, so marked≈stwLive. However, if the GC
cycle is long and the mutator is allocating and creating floating
garbage very rapidly, then it's possible that marked≫stwLive. Since
the runtime currently sets the heap goal based on marked, this will
cause it to set a high heap goal. This means that 1) the next GC cycle
will take longer because of the larger heap and 2) the assist ratio
will be low because of the large distance between the trigger and the
goal. The combination of these lets the mutator produce even more
floating garbage in the next cycle, which further exacerbates the
problem.
For example, on the garbage benchmark with GOMAXPROCS=1, this causes
the heap to grow to ~500MB and the garbage collector to retain upwards
of ~300MB of heap, while the true reachable heap size is ~32MB. This,
in turn, causes the GC cycle to take upwards of ~3 seconds.
Fix this bad feedback loop by estimating the true reachable heap size
(stwLive) and using this rather than the marked heap size
(stwLive+floatLive) as the basis for the GC trigger and heap goal.
This breaks the bad feedback loop and causes the mutator to assist
more, which decreases the rate at which it can create floating
garbage. On the same garbage benchmark, this reduces the maximum heap
size to ~73MB, the retained heap to ~40MB, and the duration of the GC
cycle to ~200ms.
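Concretely (illustrative numbers): with GOGC=100, if marked is 40MB
but 8MB of that is floating garbage, the new basis is stwLive≈32MB,
giving a 64MB goal instead of an 80MB one, so the next cycle triggers
sooner and assists push harder.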
Change-Id: I7712244c94240743b266f9eb720c03802799cdd1
Reviewed-on: https://go-review.googlesource.com/9177
Reviewed-by: Rick Hudson <rlh@golang.org>
Change-Id: I429402dd91243cd9415b054ee17bfebccc68ed57
Reviewed-on: https://go-review.googlesource.com/9197
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
This may or may not be useful to the end user, but it's incredibly
useful for us to understand the behavior of the pacer. Currently this
is fairly easy (though not trivial) to derive from the other heap
stats we print, but we're about to change how we compute the goal,
which will make it much harder to derive.
Change-Id: I796ef233d470c01f606bd9929820c01ece1f585a
Reviewed-on: https://go-review.googlesource.com/9176
Reviewed-by: Rick Hudson <rlh@golang.org>
The trigger controller computes GC CPU utilization by dividing by the
wall-clock time that's passed since concurrent mark began. Since this
delta is nanoseconds it's borderline impossible for it to be zero, but
if it is zero we'll currently divide by zero. Be robust to this
possibility by ignoring the utilization in the error term if no time
has elapsed.
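A sketch of the guard, with invented names:

    // utilizationError returns the utilization error term and whether
    // it should be included. If no wall-clock time has elapsed, the
    // term is skipped rather than dividing by zero.
    func utilizationError(gcCPUTime, elapsed int64, goalUtilization float64) (float64, bool) {
            if elapsed <= 0 {
                    return 0, false
            }
            return float64(gcCPUTime)/float64(elapsed) - goalUtilization, true
    }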
Change-Id: I93dfc9e84735682af3e637f6538d1e7602634f09
Reviewed-on: https://go-review.googlesource.com/9175
Reviewed-by: Rick Hudson <rlh@golang.org>
To make the gcprog for global data containing variables of types defined in other shared
libraries, we need to know a lot about those types. So read the value of any symbol with
a name starting with "type.". If a type uses a mask, the name of the symbol defining the
mask unfortunately cannot be predicted from the type name so I have to keep track of the
addresses of every such symbol and associate them with the type symbols after the fact.
I'm not very happy about this change, but something like this is needed and this is as
pleasant as I know how to make it.
Change-Id: I408d831b08b3b31e0610688c41367b23998e975c
Reviewed-on: https://go-review.googlesource.com/8334
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
There were 10 implementations of the trivial bool2int function, 9 of which
were the only thing in their file. Remove all of them in favor of one in
cmd/internal/obj.
Change-Id: I9c51d30716239df51186860b9842a5e9b27264d3
Reviewed-on: https://go-review.googlesource.com/9230
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The go/exact package has been renamed to go/constant, since the
"precision" parameter means constant arithmetic is not necessarily
exact.
As requested by gri, within go/types, the local import name 'exact'
has been kept, to reduce the diff with the x/tools branch. This may
be changed later.
Since the go/types.bash script was already obsolete, I added a comment
to this effect.
Tested with all.bash.
Change-Id: I45153688d9d8afa8384fb15229b0124c686059b4
Reviewed-on: https://go-review.googlesource.com/9242
Reviewed-by: Rob Pike <r@golang.org>
We initially added clone0 to handle the case when G or M don't exist, but
it turns out that we could have just modified clone. (It also helps that
the function we're invoking in clone0 no longer needs arguments.)
As a side-effect, newosproc0 is now supported on all linux archs.
Change-Id: Ie603af75d8f164310fc16446052d83743961f3ca
Reviewed-on: https://go-review.googlesource.com/9164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Added a prec parameter to MakeFromLiteral (which currently must
always be 0). This will permit go/types to provide an upper limit
for the precision of constant values, eventually. Overflows can be
returned with a special Overflow value (very much like the current
Unknown values).
This is a minimal change that should prevent the need for future
backward-incompatible API changes.
Change-Id: I6c9390d7cc4810375e26c53ed3bde5a383392330
Reviewed-on: https://go-review.googlesource.com/9168
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
In the brief window between getConn and persistConn.roundTrip,
a cancel could end up going missing.
Fix by making it possible to inspect if a cancel function was cleared
and checking if we were canceled before entering roundTrip.
Fixes #10511
Change-Id: If6513e63fbc2edb703e36d6356ccc95a1dc33144
Reviewed-on: https://go-review.googlesource.com/9181
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Previously all errors were 404 errors, even if the real error had
nothing to do with a file being non-existent.
Fixes #10283
Change-Id: I5b08b471a9064c347510cfcf8557373704eef7c0
Reviewed-on: https://go-review.googlesource.com/9200
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
There used to be a small window: if a server declared it would do a
keep-alive connection but then actually closed the connection before
the roundTrip goroutine was scheduled (after being sent a response
from the readLoop goroutine), the readLoop goroutine would loop
around and block forever reading from a channel, because the
numExpectedResponses accounting was done too late.
Fixes #10457
Change-Id: Icbae937ffe83c792c295b7f4fb929c6a24a4f759
Reviewed-on: https://go-review.googlesource.com/9169
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Unlike linux arm32, linux arm64 does not set the condition codes to indicate
whether a system call failed or not. We must check if the return value
is in the error code range (the same as amd64 does).
Fixes the runtime.TestBadOpen test.
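On Linux, a failed system call returns a value in [-4095, -1]. A Go
rendering of the range check (the actual fix is in the syscall
assembly; this is only a sketch):

    // sysRet splits a raw syscall return into a value and an errno.
    func sysRet(r0 uintptr) (val, errno uintptr) {
            if r0 > ^uintptr(4095) { // r0 is in [-4095, -1]: an error
                    return 0, -r0
            }
            return r0, 0
    }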
Change-Id: I97a8b0a17b5f002a3215c535efa91d199cee3309
Reviewed-on: https://go-review.googlesource.com/9220
Reviewed-by: Russ Cox <rsc@golang.org>
Several naming changes and a real issue in asmcgocall_errno.
Change-Id: Ieb0a328a168819fe233d74e0397358384d7e71b3
Reviewed-on: https://go-review.googlesource.com/9212
Reviewed-by: Minux Ma <minux@golang.org>
This change replaces server tests with new ones that require features
introduced after go1 release, such as runtime-integrated network poller,
Dialer, etc.
Change-Id: Icf1f94f08f33caacd499cfccbe74cda8d05eed30
Reviewed-on: https://go-review.googlesource.com/9195
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change deflakes zero byte read/write tests on datagram sockets, and
enables them by default.
Change-Id: I52f1a76f8ff379d90f40a07bb352fae9343ea41a
Reviewed-on: https://go-review.googlesource.com/9194
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change excludes internal UDP header size from a result of number of
bytes written on WriteTo.
Change-Id: I847d57f7f195657b6f14efdf1b4cfab13d4490dd
Reviewed-on: https://go-review.googlesource.com/9196
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
The purpose of this test is to make sure that -buildmode=c-shared
works even when the shared library can be built without invoking cgo.
Change-Id: Id6f95af755992b209aff770440ca9819b74113ab
Reviewed-on: https://go-review.googlesource.com/9166
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This reverts commit 8d7d02f145.
Reverted because it breaks go/build's "deps" test.
Change-Id: I61db6b2431b3ba0d2b3ece5bab7a04194239c34b
Reviewed-on: https://go-review.googlesource.com/9174
Reviewed-by: Alan Donovan <adonovan@google.com>
In external linking mode, the linker automatically imports
runtime/cgo. When the user uses non-standard compilation options,
they have to know to run go install runtime/cgo. When the go tool
adds non-standard compilation options itself, we can't force the user
to do that. So add the dependency ourselves.
Bad news: we don't currently have a clean way to know whether we are
going to use external linking mode. This CL duplicates logic split
between cmd/6l and cmd/internal/ld.
Good news: adding an unnecessary dependency on runtime/cgo does no
real harm. We aren't going to force the linker to pull it in, we're
just going to build it so that it's available if the linker wants it.
Change-Id: Ide676339d4e8b1c3d9792884a2cea921abb281b7
Reviewed-on: https://go-review.googlesource.com/9115
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
This change refactors reflect.Value to consistently use arrayAt when an element
of an array of bytes is indexed.
This effectively replaces:

    arr := unsafe.Pointer(...)
    arri := unsafe.Pointer(uintptr(arr) + uintptr(i)*elementSize)

with:

    arr := unsafe.Pointer(...)
    arri := arrayAt(arr, i, elementSize)
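For reference, such a helper can be as small as the following sketch
(the real reflect helper may differ in details):

    import "unsafe"

    // arrayAt returns a pointer to the i'th element of the array
    // starting at p, where each element occupies eltSize bytes.
    // Centralizing this keeps the unsafe arithmetic in one place.
    func arrayAt(p unsafe.Pointer, i int, eltSize uintptr) unsafe.Pointer {
            return unsafe.Pointer(uintptr(p) + uintptr(i)*eltSize)
    }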
Change-Id: I53ffd0d6de693b43d5c10c0aa4cd6d4f5e95a1e3
Reviewed-on: https://go-review.googlesource.com/9183
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This reduces the number of allocations in the compiler
while building the stdlib by 15.66%.
No functional changes. Passes toolstash -cmp.
Change-Id: Ia21b37134a8906a4e23d53fdc15235b4aa7bbb34
Reviewed-on: https://go-review.googlesource.com/9085
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, the GC controller computes the mutator assist ratio at the
beginning of the cycle by estimating that the marked heap size this
cycle will be the same as it was the previous cycle. It then uses that
assist ratio for the rest of the cycle. However, this means that if
the mutator is quickly growing its reachable heap, the heap size is
likely to exceed the heap goal and currently there's no additional
pressure on mutator assists when this happens. For example, 6g (with
GOMAXPROCS=1) frequently exceeds the goal heap size by ~25% because of
this.
This change makes GC revise its work estimate and the resulting assist
ratio every 10ms during the concurrent mark. Instead of
unconditionally using the marked heap size from the last cycle as an
estimate for this cycle, it takes the maximum of the previously marked
heap and the currently marked heap. As a result, as the cycle
approaches or exceeds its heap goal, this will increase the assist
ratio to put more pressure on the mutator assist to bring the cycle to
an end. For 6g, this causes the GC to always finish within 5% and
often within 1% of its heap goal.
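A sketch of the 10ms revision, with invented names:

    // reviseAssistRatio recomputes the assist ratio mid-cycle.
    func reviseAssistRatio(markedLast, markedSoFar, scanWorkDone,
            heapGoal, heapLive uint64, workPerByte float64) float64 {
            // Use the larger of last cycle's marked heap and what this
            // cycle has already marked as the total-work estimate.
            estimate := markedLast
            if markedSoFar > estimate {
                    estimate = markedSoFar
            }
            remaining := float64(estimate)*workPerByte - float64(scanWorkDone)
            if remaining < 0 {
                    remaining = 0
            }
            headroom := float64(heapGoal) - float64(heapLive)
            if headroom < 1 {
                    headroom = 1 // at or past the goal: maximum pressure
            }
            // Scan work each mutator owes per byte it allocates.
            return remaining / headroom
    }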
Change-Id: I4333b92ad0878c704964be42c655c38a862b4224
Reviewed-on: https://go-review.googlesource.com/9070
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently, in accordance with the GC pacing proposal, we schedule
background marking with a goal of achieving 25% utilization *total*
between mutator assists and background marking. This is stricter than
was set out in the Go 1.5 proposal, which suggests that the garbage
collector can use 25% just for itself and anything the mutator does to
help out is on top of that. It also has several technical
drawbacks. Because mutator assist time is constantly changing and we
can't have instantaneous information on background marking time, it
effectively requires hitting a moving target based on out-of-date
information. This works out in the long run, but works poorly for
short GC cycles and on short time scales. Also, this requires
time-multiplexing all Ps between the mutator and background GC since
the goal utilization of background GC constantly fluctuates. This
results in a complicated scheduling algorithm, poor affinity, and
extra overheads from context switching.
This change modifies the way we schedule and run background marking so
that background marking always consumes 25% of GOMAXPROCS and mutator
assist is in addition to this. This enables a much more robust
scheduling algorithm where we pre-determine the number of Ps we should
dedicate to background marking as well as the utilization goal for a
single floating "remainder" mark worker.
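A sketch of the split, with invented names (the actual scheduler
state differs):

    // markWorkerSplit divides the 25% utilization goal into whole
    // dedicated Ps plus one fractional "remainder" worker.
    func markWorkerSplit(gomaxprocs int) (dedicated int, fractionalUtil float64) {
            const utilGoal = 0.25
            total := utilGoal * float64(gomaxprocs)
            dedicated = int(total)
            fractionalUtil = total - float64(dedicated)
            return
    }

For example, with GOMAXPROCS=6 the total is 1.5 workers, so one P
runs mark full-time and one floating worker runs half-time.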
Change-Id: I187fa4c03ab6fe78012a84d95975167299eb9168
Reviewed-on: https://go-review.googlesource.com/9013
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the concurrent sweep follows a 1:1 rule: when allocation
needs a span, it sweeps a span (likewise, when a large allocation
needs N pages, it sweeps until it frees N pages). This rule worked
well for the STW collector (especially when GOGC==100) because it did
no more sweeping than necessary to keep the heap from growing, would
generally finish sweeping just before GC, and ensured good temporal
locality between sweeping a page and allocating from it.
It doesn't work well with concurrent GC. Since concurrent GC requires
starting GC earlier (sometimes much earlier), the sweep often won't be
done when GC starts. Unfortunately, the first thing GC has to do is
finish the sweep. In the meantime, the mutator can continue
allocating, pushing the heap size even closer to the goal size. This
worked okay with the 7/8ths trigger, but it gets into a vicious cycle
with the GC trigger controller: if the mutator is allocating quickly
and driving the trigger lower, more and more sweep work will be left
to GC; this both causes GC to take longer (allowing the mutator to
allocate more during GC) and delays the start of the concurrent mark
phase, which throws off the GC controller's statistics and generally
causes it to push the trigger even lower.
As an example of a particularly bad case, the garbage benchmark with
GOMAXPROCS=4 and -benchmem 512 (MB) spends the first 0.4-0.8 seconds
of each GC cycle sweeping, during which the heap grows by between
109MB and 252MB.
To fix this, this change replaces the 1:1 sweep rule with a
proportional sweep rule. At the end of GC, GC knows exactly how much
heap allocation will occur before the next concurrent GC as well as
how many span pages must be swept. This change computes this "sweep
ratio" and when the mallocgc asks for a span, the mcentral sweeps
enough spans to bring the swept span count into ratio with the
allocated byte count.
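A sketch of the pacing computation, with invented names:

    // sweepPagesPerByte computes the sweep ratio at the end of GC:
    // span pages that must be swept per byte of heap allocation
    // before the next GC trigger.
    func sweepPagesPerByte(pagesToSweep, heapLive, nextTrigger uint64) float64 {
            distance := int64(nextTrigger) - int64(heapLive)
            if distance <= 0 || pagesToSweep == 0 {
                    return 0
            }
            return float64(pagesToSweep) / float64(distance)
    }

    // On allocation, mcentral then sweeps until
    //   pagesSwept >= sweepPagesPerByte * bytesAllocatedSinceGC
    // before handing out a span.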
On the benchmark from above, this entirely eliminates sweeping at the
beginning of GC, which reduces the time between startGC readying the
GC goroutine and GC stopping the world for sweep termination to
~100µs, during which the heap grows by at most 134KB.
Change-Id: I35422d6bba0c2310d48bb1f8f30a72d29e98c1af
Reviewed-on: https://go-review.googlesource.com/8921
Reviewed-by: Rick Hudson <rlh@golang.org>
This field used to decrease with sweeps (and potentially go
negative). Now it is always zero or positive, so change it to a
uintptr so it meshes better with other memory stats.
Change-Id: I6a50a956ddc6077eeaf92011c51743cb69540a3c
Reviewed-on: https://go-review.googlesource.com/8899
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, concurrent GC triggers at a fixed 7/8*GOGC heap growth. For
mutators that allocate slowly, this means GC will trigger too early
and run too often, wasting CPU time on GC. For mutators that allocate
quickly, this means GC will trigger too late, causing the program to
exceed the GOGC heap growth goal and/or to exceed CPU goals because of
a high mutator assist ratio.
This change adds a feedback control loop to dynamically adjust the GC
trigger from cycle to cycle. By monitoring the heap growth and GC CPU
utilization from cycle to cycle, this adjusts the Go garbage collector
to target the GOGC heap growth goal and the 25% CPU utilization goal.
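A sketch of the feedback step, with invented names and an
illustrative gain:

    // nextTriggerRatio nudges the trigger (expressed as a fraction of
    // heap growth) based on last cycle's heap growth and utilization.
    func nextTriggerRatio(trigger, goalGrowth, actualGrowth, util, utilGoal float64) float64 {
            const gain = 0.5 // damping factor; illustrative
            err := goalGrowth - trigger - (util/utilGoal)*(actualGrowth-trigger)
            return trigger + gain*err
    }

A negative error moves the trigger earlier (GC starts sooner); a
positive one lets the heap grow further before triggering.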
Change-Id: Ic82eef288c1fa122f73b69fe604d32cbb219e293
Reviewed-on: https://go-review.googlesource.com/8851
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the concurrent mark phase is performed by the main GC
goroutine. Prior to the previous commit enabling preemption, this
caused marking to always consume 1/GOMAXPROCS of the available CPU
time. If GOMAXPROCS=1, this meant background GC would consume 100% of
the CPU (effectively a STW). If GOMAXPROCS>4, background GC would use
less than the goal of 25%. If GOMAXPROCS=4, background GC would use
the goal 25%, but if the mutator wasn't using the remaining 75%,
background marking wouldn't take advantage of the idle time. Enabling
preemption in the previous commit made GC miss CPU targets in
completely different ways, but set us up to bring everything back in
line.
This change replaces the fixed GC goroutine with per-P background mark
goroutines. Once started, these goroutines don't go in the standard
run queues; instead, they are scheduled specially such that the time
spent in mutator assists and the background mark goroutines totals 25%
of the CPU time available to the program. Furthermore, this lets
background marking take advantage of idle Ps, which significantly
boosts GC performance for applications that under-utilize the CPU.
This requires also changing how time is reported for gctrace, so this
change splits the concurrent mark CPU time into assist/background/idle
scanning.
This also requires increasing the size of the StackRecord slice used
in a GoroutineProfile test.
Change-Id: I0936ff907d2cee6cb687a208f2df47e8988e3157
Reviewed-on: https://go-review.googlesource.com/8850
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the entire GC process runs with g.m.preemptoff set. In the
concurrent phases, the parts that actually need preemption disabled
are run on a system stack and there's no overall need to stay on the
same M or P during the concurrent phases. Hence, move the setting of
g.m.preemptoff to when we start mark termination, at which point we
really do need preemption disabled.
This dramatically changes the scheduling behavior of the concurrent
mark phase. Currently, since this is non-preemptible, concurrent mark
gets one dedicated P (so 1/GOMAXPROCS utilization). With this change,
the GC goroutine is scheduled like any other goroutine during
concurrent mark, so it gets 1/<runnable goroutines> utilization.
You might think it's not even necessary to set g.m.preemptoff at that
point since the world is stopped, but stackalloc/stackfree use this as
a signal that the per-P pools are not safe to access without
synchronization.
Change-Id: I08aebe8179a7d304650fb8449ff36262b3771099
Reviewed-on: https://go-review.googlesource.com/8839
Reviewed-by: Rick Hudson <rlh@golang.org>
This time is tracked per P and periodically flushed to the global
controller state. This will be used to compute mutator assist
utilization in order to schedule background GC work.
Change-Id: Ib94f90903d426a02cf488bf0e2ef67a068eb3eec
Reviewed-on: https://go-review.googlesource.com/8837
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, mutator allocation periodically assists the garbage
collector by performing a small, fixed amount of scanning work.
However, to control heap growth, mutators need to perform scanning
work *proportional* to their allocation rate.
This change implements proportional mutator assists. This uses the
scan work estimate computed by the garbage collector at the beginning
of each cycle to compute how much scan work must be performed per
allocation byte to complete the estimated scan work by the time the
heap reaches the goal size. When allocation triggers an assist, it
uses this ratio and the amount allocated since the last assist to
compute the assist work, then attempts to steal as much of this work
as possible from the background collector's credit, and then performs
any remaining scan work itself.
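A sketch of the per-allocation assist, with invented names:

    // assist charges scan work for bytes allocated since the last
    // assist, paying from background credit first.
    func assist(bytesAllocated uint64, assistRatio float64,
            stealCredit func(want int64) int64, scan func(work int64)) {
            debt := int64(float64(bytesAllocated) * assistRatio)
            if debt <= 0 {
                    return
            }
            debt -= stealCredit(debt) // background GC may have banked work
            if debt > 0 {
                    scan(debt) // do the remaining scan work ourselves
            }
    }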
Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
Reviewed-on: https://go-review.googlesource.com/8836
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the "n" in gcDrainN is in terms of objects to scan. This is
used by gchelpwork to perform a limited amount of work on allocation,
but is a pretty arbitrary way to bound this amount of work since the
number of objects has little relation to how long they take to scan.
Modify gcDrainN to perform a fixed amount of scan work instead. For
now, gchelpwork still performs a fairly arbitrary amount of scan work,
but at least this is much more closely related to how long the work
will take. Shortly, we'll use this to precisely control the scan work
performed by mutator assists during allocation to achieve the heap
size goal.
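A sketch of the reworked loop, with invented names:

    // drainN scans objects until at least scanWorkTarget units of
    // scan work are done (roughly, bytes scanned), rather than a
    // fixed object count.
    func drainN(scanWorkTarget int64,
            tryGet func() (obj uintptr, ok bool),
            scanObject func(obj uintptr) (work int64)) int64 {
            var done int64
            for done < scanWorkTarget {
                    obj, ok := tryGet()
                    if !ok {
                            break // work queue is empty
                    }
                    done += scanObject(obj)
            }
            return done
    }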
Change-Id: I3cd07fe0516304298a0af188d0ccdf621d4651cc
Reviewed-on: https://go-review.googlesource.com/8835
Reviewed-by: Rick Hudson <rlh@golang.org>
This tracks scan work done by background GC in a global pool. Mutator
assists will draw on this credit to avoid doing work when background
GC is staying ahead.
Unlike the other GC controller tracking variables, this will be both
written and read throughout the cycle. Hence, we can't arbitrarily
delay updates like we can for scan work and bytes marked. However, we
still want to minimize contention, so this global credit pool is
allowed some error from the "true" amount of credit. Background GC
accumulates credit locally up to a limit and only then flushes to the
global pool. Similarly, mutator assists will draw from the credit pool
in batches.
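A sketch of the batching, with invented names and an illustrative
flush limit:

    import "sync/atomic"

    var bgScanCredit int64 // global credit pool (scan work units)

    const creditFlushLimit = 2048 // illustrative local batch size

    // flushCredit is called by background mark workers; credit is
    // accumulated locally and only published in batches.
    func flushCredit(local *int64, work int64) {
            *local += work
            if *local >= creditFlushLimit {
                    atomic.AddInt64(&bgScanCredit, *local)
                    *local = 0
            }
    }

    // stealCredit lets a mutator assist take up to want credit.
    func stealCredit(want int64) int64 {
            for {
                    have := atomic.LoadInt64(&bgScanCredit)
                    if have <= 0 {
                            return 0
                    }
                    take := want
                    if take > have {
                            take = have
                    }
                    if atomic.CompareAndSwapInt64(&bgScanCredit, have, have-take) {
                            return take
                    }
            }
    }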
Change-Id: I1aa4fc604b63bf53d1ee2a967694dffdfc3e255e
Reviewed-on: https://go-review.googlesource.com/8834
Reviewed-by: Rick Hudson <rlh@golang.org>
This implements tracking the scan work ratio of a GC cycle and using
this to estimate the scan work that will be required by the next GC
cycle. Currently this estimate is unused; it will be used to drive
mutator assists.
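A sketch of the estimate, with invented names:

    // estimateScanWork records this cycle's scan-work-per-marked-byte
    // ratio and scales it by the expected marked heap next cycle.
    func estimateScanWork(scanWorkDone int64, marked, expectedMarkedNext uint64) int64 {
            if marked == 0 {
                    return 0
            }
            workPerByte := float64(scanWorkDone) / float64(marked)
            return int64(workPerByte * float64(expectedMarkedNext))
    }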
Change-Id: I8685b59d89cf1d83eddfc9b30d84da4e3a7f4b72
Reviewed-on: https://go-review.googlesource.com/8833
Reviewed-by: Rick Hudson <rlh@golang.org>