Currently we harvest wbufs the moment we enter the mark phase, even
before starting the world again. Since cached wbufs are only filled
when we're in mark or mark termination, they should all be empty at
this point, making the harvest pointless. Remove the harvest.
We should, but currently do not, harvest at the end of the mark phase,
when we're running out of work to do.
Change-Id: I5f4ba874f14dd915b8dfbc4ee5bb526eecc2c0b4
Reviewed-on: https://go-review.googlesource.com/7669
Reviewed-by: Rick Hudson <rlh@golang.org>
Even though the world is stopped the GC may do pointer
writes that need to be protected by write barriers.
This means that the write barrier must be on
continuously from the time the mark phase starts until
the mark termination phase ends. Checks were added to
ensure that no allocation happens during a GC.
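A minimal sketch of one such check, assuming it sits at the top of
mallocgc (the exact placement and message are this sketch's, not
necessarily the change's):

	// Allocating during mark termination would create an object
	// the GC never sees, so refuse it outright.
	if gcphase == _GCmarktermination {
		throw("mallocgc called with gcphase == _GCmarktermination")
	}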
Hoist the logic that clears pools to the start of the GC
so that the memory can be reclaimed during this GC cycle.
Change-Id: I9d1551ac5db9bac7bac0cb5370d5b2b19a9e6a52
Reviewed-on: https://go-review.googlesource.com/6990
Reviewed-by: Austin Clements <austin@google.com>
Strip uninteresting bottom and top frames from trace stacks.
This makes both binary and json trace files smaller,
and also makes stacks shorter and more readable in the viewer.
Change-Id: Ib9c80ccc280504f0e235f867f53f1d2652c41583
Reviewed-on: https://go-review.googlesource.com/5523
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Starting it lazily causes a memory allocation (for the goroutine) during GC.
First use of channels for runtime implementation.
Change-Id: I9cd24dcadbbf0ee5070ee6d0ed7ea415504f316c
Reviewed-on: https://go-review.googlesource.com/6960
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The unbounded list-based defer pool can grow infinitely.
This can happen if a goroutine routinely allocates a defer,
then blocks on one P, and is then unblocked, scheduled, and
frees the defer on another P.
The scenario was reported on golang-nuts list.
We've been here several times: any unbounded local cache
is bad and grows without limit. This change introduces a
central defer pool; local pools become fixed-size, with the
sole purpose of amortizing accesses to the central pool.
Freedefer now executes on the system stack so as not to consume
nosplit stack space.
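Roughly, the scheme looks like this, modeled outside the runtime as a
compilable sketch (sync.Mutex stands in for the runtime's lock,
localCap is illustrative, and the real local pools are per-P):

	package deferpool

	import "sync"

	type item struct{ next *item }

	const localCap = 32

	// central is the shared pool, protected by one lock.
	var central struct {
		sync.Mutex
		free *item
	}

	// local models a single P's fixed-size cache.
	var local []*item

	// get touches the central lock only when the local cache is
	// empty, so the lock cost is amortized over many gets.
	func get() *item {
		if n := len(local); n > 0 {
			it := local[n-1]
			local = local[:n-1]
			return it
		}
		central.Lock()
		it := central.free
		if it != nil {
			central.free = it.next
		}
		central.Unlock()
		if it == nil {
			it = new(item)
		}
		return it
	}

	// put frees locally; overflow spills to the central pool, so
	// the local cache can never grow without bound.
	func put(it *item) {
		if len(local) < localCap {
			local = append(local, it)
			return
		}
		central.Lock()
		it.next = central.free
		central.free = it
		central.Unlock()
	}

In the steady state most gets and puts never take the lock.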
Change-Id: I1a27695838409259d1586a0adfa9f92bccf7ceba
Reviewed-on: https://go-review.googlesource.com/3967
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
The unbounded list-based sudog cache can grow infinitely.
This can happen if a goroutine is routinely blocked on one P
and then unblocked and scheduled on another P.
The scenario was reported on golang-nuts list.
We've been here several times: any unbounded local cache
is bad and grows without limit. This change introduces a
central sudog cache; local caches become fixed-size, with the
sole purpose of amortizing accesses to the central cache.
The change required moving the sudog cache from mcache to P,
because mcache is not scanned by the GC.
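The mcache point, as a sketch (types simplified; field placement per
this change): the cache must live somewhere the collector scans, or
the cached pointers are invisible to it.

	type sudog struct{ next *sudog }

	// mcache is not scanned by the GC, so a *sudog cached there
	// could be collected out from under the cache. p is scanned,
	// so the cache moves here.
	type p struct {
		sudogcache []*sudog
	}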
Change-Id: I3bb7b14710354c026dcba28b3d3c8936a8db4e90
Reviewed-on: https://go-review.googlesource.com/3742
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Since allglock is held in this function, there's no point in
tiptoeing around allgs. Just use a for-range loop.
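Roughly, with an illustrative loop body:

	lock(&allglock)
	for _, gp := range allgs {
		gp.gcscanvalid = false // whatever per-g work the loop does
	}
	unlock(&allglock)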
Change-Id: I1ee61c7e8cac8b8ebc8107c0c22f739db5db9840
Reviewed-on: https://go-review.googlesource.com/5882
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Previously, we had three loops in the garbage collector that all
cleared the per-G GC flags. Consolidate these into one function.
This one function is designed to work in a concurrent setting. As a
result, it's slightly more expensive than the loops it replaces during
STW phases, but these happen at most twice per GC.
Change-Id: Id1ec0074fd58865eb0112b8a0547b267802d0df1
Reviewed-on: https://go-review.googlesource.com/5881
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The loop in gcMark is redundant with the gcworkdone resetting
performed by markroot, which is called a few lines later in gcMark.
Change-Id: Ie0a826a614ecfa79e6e6b866e8d1de40ba515856
Reviewed-on: https://go-review.googlesource.com/5880
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
When GODEBUG=gctrace=2, two GCs are performed. During the first GC,
the stack scan sets each g's gcscanvalid and gcworkdone flags to true,
indicating that the stacks have been scanned and do not need to
be rescanned. These need to be reset to false for the second GC so the
stacks are rescanned; otherwise, if the only pointer to an object is
on the stack, it will not be discovered and the object will be freed.
Typically this will include the object that was just allocated in
the mallocgc call that initiated the GC.
Change-Id: Ic25163f4689905fd810c90abfca777324005c02f
Reviewed-on: https://go-review.googlesource.com/5861
Reviewed-by: Russ Cox <rsc@golang.org>
This is a nice split, but more importantly it provides a better
way to fit the checkmark phase into the sequencing.
Also factor out common span copying into gcSpanCopy.
Change-Id: Ia058644974e4ed4ac3cf4b017a3446eb2284d053
Reviewed-on: https://go-review.googlesource.com/5333
Reviewed-by: Austin Clements <austin@google.com>
The loop made more sense when gc_m was not its own function.
Change-Id: I71a7f21d777e69c1924e3b534c507476daa4dfdd
Reviewed-on: https://go-review.googlesource.com/5332
Reviewed-by: Austin Clements <austin@google.com>
That is, I accidentally dropped this change of Austin's
when preparing my CL. I blame Git.
Change-Id: I9dd772c84edefad96c4b16785fdd2dea04a4a0d6
Reviewed-on: https://go-review.googlesource.com/5320
Reviewed-by: Austin Clements <austin@google.com>
Move code from malloc1.go, malloc2.go, mem.go, mgc0.go into
appropriate locations.
Factor mgc.go into mgc.go, mgcmark.go, mgcsweep.go, mstats.go.
A lot of this code was in certain files because the right place was in
a C file but it was written in Go, or vice versa. This is one step toward
making things actually well-organized again.
Change-Id: I6741deb88a7cfb1c17ffe0bcca3989e10207968f
Reviewed-on: https://go-review.googlesource.com/5300
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
This converts the garbage collector from directly manipulating work
buffers to using the new gcWork abstraction.
The previous management of work buffers was rather ad hoc. As a
result, switching to the gcWork abstraction changes many details of
work buffer management.
If greyobject fills a work buffer, it can now pull from work.partial
in addition to work.empty.
Previously, gcDrain started with a partial or empty work buffer and
fetched an empty work buffer if it filled its current buffer (in
greyobject). Now, gcDrain starts with a full work buffer and fetches
a partial or empty work buffer if it fills its current buffer (in
greyobject). The original behavior was bad because gcDrain would
immediately drop the empty work buffer returned by greyobject and
fetch a full work buffer, which greyobject was likely to immediately
overflow, fetching another empty work buffer, etc. The new behavior
isn't great at the start because greyobject is likely to immediately
overflow the full buffer, but the steady-state behavior should be more
stable. Both before and after this change, gcDrain fetches a full
work buffer if it drains its current buffer. Basically all of these
choices are bad; the right answer is to use a dual work buffer scheme.
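The resulting drain loop, in outline (names follow the runtime's;
scanning and marking details are elided):

	// Start with a full buffer; swap buffers only at the
	// empty/full boundaries.
	for {
		if wbuf.nobj == 0 {
			putempty(wbuf)
			wbuf = getfull() // drained: fetch another full buffer
			if wbuf == nil {
				break // no work left anywhere
			}
		}
		wbuf.nobj--
		obj := wbuf.obj[wbuf.nobj]
		// Scanning obj may grey more objects. If that fills wbuf,
		// greyobject now parks the full buffer and continues with
		// a partial or empty one:
		//	putfull(wbuf); wbuf = getpartialorempty()
		scanobject(obj, &wbuf)
	}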
Previously, shade always fetched a work buffer (though usually from
m.currentwbuf), even if the object was already marked. Now it only
fetches a work buffer if it actually greys an object.
Change-Id: I8b880ed660eb63135236fa5d5678f0c1c041881f
Reviewed-on: https://go-review.googlesource.com/5232
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This introduces a producer/consumer abstraction for GC work pointers
that internally handles the details of filling, draining, and
shuffling work buffers.
In addition to simplifying the GC code, this should make it easy for
us to change how we use work buffers, including cleaning up how we use
the work.partial queue, reintroducing a FIFO lookahead cache, adding
prefetching, and using dual buffers to avoid flapping.
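Roughly, the surface of the abstraction (signatures sketched from
this description, not copied from the code):

	// gcWork hides workbuf plumbing behind produce/consume calls.
	type gcWork struct {
		wbuf *workbuf // current buffer, refilled and flushed on demand
	}

	// put enqueues a grey pointer, flushing a filled buffer to the
	// global work lists behind the scenes.
	func (w *gcWork) put(obj uintptr) { /* sketch */ }

	// tryGet dequeues a pointer, pulling a fresh buffer when the
	// current one runs dry; returns 0 when no work remains anywhere.
	func (w *gcWork) tryGet() uintptr { /* sketch */ return 0 }

	// dispose returns any cached buffer to the global lists, e.g.
	// before the worker blocks or the phase ends.
	func (w *gcWork) dispose() { /* sketch */ }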
This commit doesn't change any existing code. The following commit
will switch the garbage collector from explicit workbuf manipulation
to gcWork.
Change-Id: Ifbfe5fff45bf0362d6d7c3cecb061f0c9874077d
Reviewed-on: https://go-review.googlesource.com/5231
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Nit. There's no reason to take a uintptr, and doing so just requires
casts in annoying places.
Change-Id: Ifeb9638c6d94eae619c490930cf724cc315680ba
Reviewed-on: https://go-review.googlesource.com/5230
Reviewed-by: Russ Cox <rsc@golang.org>
drainworkbuf is now gcDrain, since it drains until there's
nothing left to drain. drainobjects is now gcDrainN because it's
the bounded equivalent to gcDrain.
The new names use the Go camel case convention because we have to
start somewhere. The "gc" prefix is because we don't have runtime
packages yet and just "drain" is too ambiguous.
Change-Id: I88dbdf32e8ce4ce6c3b7e1f234664be9b76cb8fd
Reviewed-on: https://go-review.googlesource.com/4785
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
All calls to drainworkbuf now pass true for this argument, so remove
the argument and update the documentation to reflect the simplified
interface.
At a higher level, there are no longer any situations where we drain
"one wbuf" (though drainworkbuf didn't guarantee this anyway). We
either drain everything, or we drain a specific number of objects.
Change-Id: Ib7ee0fde56577eff64232ee1e711ec57c4361335
Reviewed-on: https://go-review.googlesource.com/4784
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
scanblock is only called during _GCscan and _GCmarktermination.
During _GCscan, scanblock didn't call drainworkbufs anyway. During
_GCmarktermination, there's really no point in draining some (largely
arbitrary) amount of work during the scanblock, since the GC is about
to drain everything anyway, so simply eliminate this case.
Change-Id: I7f3c59ce9186a83037c6f9e9b143181acd04c597
Reviewed-on: https://go-review.googlesource.com/4783
Reviewed-by: Russ Cox <rsc@golang.org>
We no longer ever call scanblock with b == 0.
Change-Id: I9b01da39595e0cc251668c24d58748d88f5f0792
Reviewed-on: https://go-review.googlesource.com/4782
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
scanblock(0, 0, nil, nil) was just a confusing way of saying

	wbuf = getpartialorempty()
	drainworkbuf(wbuf, true)

Make drainworkbuf accept a nil workbuf and perform the
getpartialorempty itself, and replace all uses of scanblock(0, 0, nil,
nil) with direct calls to drainworkbuf(nil, true).
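That is, roughly (the flag's name here is a guess):

	func drainworkbuf(wbuf *workbuf, drainallwbufs bool) {
		if wbuf == nil {
			wbuf = getpartialorempty()
		}
		// ... drain as before ...
	}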
Change-Id: I7002a2f8f3eaf6aa85bbf17ccc81d7288acfef1c
Reviewed-on: https://go-review.googlesource.com/4781
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Previously, scanblock called checknocurrentwbuf() after
drainworkbuf(). Move this call into drainworkbuf so that every return
path from drainworkbuf calls checknocurrentwbuf(). This is equivalent
to the previous code because scanblock was the only caller of
drainworkbuf.
Change-Id: I96ef2168c8aa169bfc4d368f296342fa0fbeafb4
Reviewed-on: https://go-review.googlesource.com/4780
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
No code modifications.
This is in preparation for improving the wbuf abstraction.
Change-Id: I719543a345c34d079b7e39b251eccd5dd8a07826
Reviewed-on: https://go-review.googlesource.com/4710
Reviewed-by: Rick Hudson <rlh@golang.org>
Add local workbufs to the m struct in order to reduce contention.
Add consistency checks for workbuf ownership.
Chain workbufs through the call chain to avoid swapping them
to and from the m struct.
Adjust the size of the workbuf so that the mutators can
more frequently pass modifications to the GC, thus shifting
some work from the STW mark termination phase to the concurrent
mark phase.
Change-Id: I557b53af34ad9972265e0ed9f5996e52d548563d
Reviewed-on: https://go-review.googlesource.com/3972
Reviewed-by: Austin Clements <austin@google.com>
m.gcing has become overloaded to mean "don't preempt this g" in
general. Once the garbage collector is preemptible, the one thing it
*won't* mean is that we're in the garbage collector.
So, rename gcing to "preemptoff" and make it a string giving a reason
that preemption is disabled. gcing was never set to anything but 0 or
1, so we don't have to worry about there being a stack of reasons.
Change-Id: I4337c29e8e942e7aa4f106fc29597e1b5de4ef46
Reviewed-on: https://go-review.googlesource.com/3660
Reviewed-by: Russ Cox <rsc@golang.org>
Commit 656be31 replaced onM with systemstack, but missed updating a
few comments that still referred to onM. Update these.
Change-Id: I0efb017e9a66ea0adebb6e1da6e518ee11263f69
Reviewed-on: https://go-review.googlesource.com/3664
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Prior to the conversion of the runtime to Go, this void* was
necessary to get closure information in to C callbacks. There
are no more C callbacks and parfor is perfectly capable of
invoking a Go closure now, so eliminate ctx and all of its
unsafe-ness. (Plus, the runtime currently doesn't use ctx for
anything.)
Change-Id: I39fc53b7dd3d7f660710abc76b0d831bfc6296d8
Reviewed-on: https://go-review.googlesource.com/3395
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Set the minimum heap size to 4 Mbytes, except when the hash
table code wants to force a GC. In an unrelated change, when a
mutator is asked to assist the GC by marking pointer workbufs,
it will keep working until the requested number of pointers
is processed, even if that means asking for additional workbufs.
Change-Id: I661cfc0a7f2efcf6286b5d37d73e593d9ecd04d5
Reviewed-on: https://go-review.googlesource.com/3392
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
During a concurrent GC, stacks are scanned in
an initial scan phase, informing the GC of all
pointers on the stack. The GC only needs to rescan
a stack if it potentially changed, which can only
happen if the goroutine ran.
This CL tracks whether the goroutine has run
since it was last scanned and thus may have changed
its stack. If necessary, the stack is rescanned.
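In outline (exactly where the flag is cleared and checked is
simplified here):

	// When a goroutine is about to run, its stack may change:
	gp.gcscanvalid = false // must rescan before mark termination

	// At rescan time, unchanged stacks are skipped:
	if gp.gcscanvalid {
		return // not run since the last scan; stack still valid
	}
	scanstack(gp)
	gp.gcscanvalid = true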
Change-Id: I5fb1c4338d42e3f61ab56c9beb63b7b2da25f4f1
Reviewed-on: https://go-review.googlesource.com/3275
Reviewed-by: Russ Cox <rsc@golang.org>
Adjust triggergc so that we trigger when we have used 7/8
of the available heap memory. Do the first collection when we
exceed 4 Mbytes.
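The trigger condition, roughly, with the numbers worked through
(shouldtriggergc and the memstats field names are this sketch's
assumptions):

	// With next_gc = 4<<20 (4 Mbytes), this triggers the next
	// collection once 3.5 Mbytes are in use.
	func shouldtriggergc() bool {
		return memstats.heap_alloc >= memstats.next_gc/8*7
	}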
Change-Id: I467b4335e16dc9cd1521d687fc1f99a51cc7e54b
Reviewed-on: https://go-review.googlesource.com/3149
Reviewed-by: Austin Clements <austin@google.com>
Print out the object holding the reference to the object
that checkmark detects as not being properly marked.
Change-Id: Ieedbb6fddfaa65714504af9e7230bd9424cd0ae0
Reviewed-on: https://go-review.googlesource.com/2744
Reviewed-by: Austin Clements <austin@google.com>
The code in mfinal.go is moved from malloc*.go and mgc*.go
and substantially unchanged.
The code in mbitmap.go is also moved from those files, but
cleaned up so that it can be called from those files (in most cases
the code being moved was not already a standalone function).
I also renamed the constants and wrote comments describing
the format. The result is a significant cleanup and isolation of
the bitmap code, but, roughly speaking, it should be treated
and reviewed as new code.
The other files changed only as much as necessary to support
this code movement.
This CL does NOT change the semantics of the heap or type
bitmaps at all, although there are now some obvious opportunities
to do so in followup CLs.
Change-Id: I41b8d5de87ad1d3cd322709931ab25e659dbb21d
Reviewed-on: https://go-review.googlesource.com/2991
Reviewed-by: Keith Randall <khr@golang.org>
I also added new comments at the top of mbarrier.go,
but the rest of the code is just copy-and-paste.
Change-Id: Iaeb2b12f8b1eaa33dbff5c2de676ca902bfddf2e
Reviewed-on: https://go-review.googlesource.com/2990
Reviewed-by: Austin Clements <austin@google.com>
Delete printf, vprintf, snprintf, gc_m_ptr, gc_g_ptr, gc_itab_ptr, and gc_unixnanotime.
These were called from C.
There is no more C.
Now that vprintf is gone, delete roundup, which is unsafe (see CL 2814).
Change-Id: If8a7b727d497ffa13165c0d3a1ed62abc18f008c
Reviewed-on: https://go-review.googlesource.com/2824
Reviewed-by: Austin Clements <austin@google.com>
The old name was too ambiguous (is it a verb? is it a predicate? is
it a constant?) and too close to debug.gccheckmark. Hopefully the new
name conveys that this variable indicates that we are currently doing
mark checking.
Change-Id: I031cd48b0906cdc7774f5395281d3aeeb8ef3ec9
Reviewed-on: https://go-review.googlesource.com/2656
Reviewed-by: Rick Hudson <rlh@golang.org>
1) Move non-preemption check even earlier in newstack.
This avoids a few priority inversion problems.
2) Always use atomic operations to update the bitmap for 1-word objects.
This avoids lost mark bits during concurrent GC (see the sketch below).
3) Stop using work.nproc == 1 as a signal for being single-threaded.
The concurrent GC runs with work.nproc == 1 but other procs are
running mutator code.
The use of work.nproc == 1 in getfull *is* safe, but remove it anyway,
since it is saving only a single atomic operation per GC round.
Fixes #9225.
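For point 2, the shape of the fix (a sketch; computing the bitmap
address bitp from the object's address is elided):

	// A plain read-modify-write can lose a bit set concurrently
	// by another marker:
	//	*bitp |= bitMarked // racy under concurrent GC
	// An atomic OR cannot:
	atomicor8(bitp, bitMarked)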
Change-Id: I24134f100ad592ea8cb59efb6a54f5a1311093dc
Reviewed-on: https://go-review.googlesource.com/2745
Reviewed-by: Rick Hudson <rlh@golang.org>
Previously, gccheckmark could only be enabled or disabled by calling
runtime.GCcheckmarkenable/GCcheckmarkdisable. This was a necessary
hack because GODEBUG was broken.
Now that GODEBUG works again, move control over gccheckmark to a
GODEBUG variable and remove these runtime functions. Currently,
gccheckmark is enabled by default (and will probably remain so for
much of the 1.5 development cycle).
Change-Id: I2bc6f30c21b795264edf7dbb6bd7354b050673ab
Reviewed-on: https://go-review.googlesource.com/2603
Reviewed-by: Rick Hudson <rlh@golang.org>
Run GC in its own background goroutine, making the
caller runnable if resources are available. This is
critical in single-goroutine applications.
Allow goroutines that allocate a lot to help out
the GC and, in doing so, throttle their own allocation.
Adjust the test so that it only detects that a GC is run
during init calls, not whether the GC is memory
efficient. Memory efficiency work will happen later
in 1.5.
Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
First, call clearcheckmarks immediately after changing checkmark,
so that there is less time when the checkmark flag and the bitmap
are inconsistent. The tiny gap between the two lines is fine, because
the world is stopped. Before, the gap was much larger and included
such code as "go bgsweep()", which allocated.
Second, modify gcphase only when the world is stopped.
As written, gcscan_m was changing gcphase from 0 to GCscan
and back to 0 while other goroutines were running.
Another goroutine running at the same time might decide to
sleep, see GCscan, call gcphasework, and start "helping" by
scanning its stack. That's fine, except that if gcphase flips back
to 0 as the goroutine calls scanblock, it will start draining the
work buffers prematurely.
Both of these were found with wbshadow=2 (and a lot of hard work).
Eventually that will run automatically, but right now it still
doesn't quite work for all.bash, due to mmap conflicts with
pthread-created threads.
Change-Id: I99aa8210cff9c6e7d0a1b62c75be32a23321897b
Reviewed-on: https://go-review.googlesource.com/2340
Reviewed-by: Rick Hudson <rlh@golang.org>
This is the detection code. It works well enough that I know of
a handful of missing write barriers. However, those are subtle
enough that I'll address them in separate followup CLs.
GODEBUG=wbshadow=1 checks for a write that bypassed the
write barrier at the next write barrier of the same word.
If a bug can be detected in this mode it is typically easy to
understand, since the crash says quite clearly what kind of
word has missed a write barrier.
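Mode 1's check, in outline (shadowptr stands in here for the real
heap-to-shadow address translation):

	func writebarrierptr(dst *uintptr, src uintptr) {
		// If the heap word and its shadow disagree, some earlier
		// write to this word bypassed its write barrier.
		if *shadowptr(dst) != *dst {
			throw("missed write barrier")
		}
		*dst = src
		*shadowptr(dst) = src // keep the shadow in sync
	}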
GODEBUG=wbshadow=2 adds a check of the write barrier
shadow copy during garbage collection. Bugs detected at
garbage collection can be difficult to understand, because
there is no context for what the found word means.
Typically you have to reproduce the problem with allocfreetrace=1
in order to understand the type of the badly updated word.
Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
They are no longer needed now that C is gone.
goatoi -> atoi
gofuncname/funcname -> funcname/cfuncname
goroundupsize -> already existing roundupsize
Change-Id: I278bc33d279e1fdc5e8a2a04e961c4c1573b28c7
Reviewed-on: https://go-review.googlesource.com/2154
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
//go:nowritebarrier can only be used in package runtime.
It does not disable write barriers; it is an assertion, checked
by the compiler, that the following function needs no write
barriers.
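For example (the function and its body are illustrative; the
directive is only accepted in package runtime):

	//go:nowritebarrier
	func clearWord(p *uintptr) {
		// The compiler checks this assertion rather than trusting
		// it: if anything here required a write barrier,
		// compilation would fail. Storing a uintptr is not a
		// pointer write, so no barrier is needed.
		*p = 0
	}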
Change-Id: Id7978b779b66dc1feea39ee6bda9fd4d80280b7c
Reviewed-on: https://go-review.googlesource.com/1224
Reviewed-by: Rick Hudson <rlh@golang.org>
It could only handle one finalizer before it raised an out-of-bounds error.
Fixes issue #9172
Change-Id: Ibb4d0c8aff2d78a1396e248c7129a631176ab427
Reviewed-on: https://go-review.googlesource.com/1201
Reviewed-by: Russ Cox <rsc@golang.org>