A workaround for #10460.
Change-Id: I607a556561d509db6de047892f886fb565513895
Reviewed-on: https://go-review.googlesource.com/10819
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This avoids a race with gcmarkwb_m that was leading to faults.
Fixes #10212.
Change-Id: I6fcf8d09f2692227063ce29152cb57366ea22487
Reviewed-on: https://go-review.googlesource.com/10816
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This tracks the number of scannable bytes in the allocated heap. That
is, bytes that the garbage collector must scan before reaching the
last pointer field in each object.
This will be used to compute a more robust estimate of the GC scan
work.
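For illustration only (names here are hypothetical, not this CL's
actual fields), the bookkeeping amounts to a counter bumped by the
scannable portion of each allocation:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // heapScan models a global count of scannable bytes in the heap.
    var heapScan uint64

    // noteAlloc records an allocation whose pointer data ends scanSize
    // bytes into the object; pointer-free objects add no scan work.
    func noteAlloc(scanSize uint64) {
        if scanSize > 0 {
            atomic.AddUint64(&heapScan, scanSize)
        }
    }

    func main() {
        noteAlloc(24) // a struct whose last pointer field ends at byte 24
        noteAlloc(0)  // a pointer-free buffer
        fmt.Println("scannable bytes:", atomic.LoadUint64(&heapScan))
    }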
Change-Id: I1eecd45ef9cdd65b69d2afb5db5da885c80086bb
Reviewed-on: https://go-review.googlesource.com/9695
Reviewed-by: Russ Cox <rsc@golang.org>
shouldtriggergc is slightly expensive due to the call overhead
and the use of an atomic. This CL reduces the number of times
we check whether a GC should be started from once per allocation
to once per span allocation. Since shouldtriggergc is an
important abstraction, simply hand-inlining it (along with its
atomic instruction) would lose the abstraction.
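As a sketch of the new shape (simplified; these are not the real
function bodies), the check moves from the per-object fast path to
the per-span refill path:

    package main

    import "sync/atomic"

    var gcRequested uint32 // set when a GC has been requested

    func shouldTriggerGC() bool { return atomic.LoadUint32(&gcRequested) != 0 }

    // mallocFast models the per-object path: no trigger check here.
    func mallocFast() {
        // pop an object from the span's freelist
    }

    // refillSpan models the once-per-span slow path, where the check
    // now lives.
    func refillSpan() {
        // ... allocate a fresh span ...
        if shouldTriggerGC() {
            // start a GC cycle
        }
    }

    func main() {
        mallocFast()
        refillSpan()
    }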
Change-Id: Ia3210655b4b3d433f77064a21ecb54e4d9d435f7
Reviewed-on: https://go-review.googlesource.com/9403
Reviewed-by: Austin Clements <austin@google.com>
Currently, we use a full stop-the-world around enabling write
barriers. This is to ensure that all Gs have enabled write barriers
before any blackening occurs (either in gcBgMarkWorker() or in
gcAssistAlloc()).
However, there's no need to bring the whole world to a synchronous
stop to ensure this. This change replaces the STW with a ragged
barrier that ensures each P has individually observed that write
barriers should be enabled before GC performs any blackening.
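A toy model of a ragged barrier (the real code coordinates through
the scheduler; everything below is simplified):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        const nprocs = 4
        var wbEnabled uint32 // 1 once write barriers are on
        var acks sync.WaitGroup
        acks.Add(nprocs)

        for p := 0; p < nprocs; p++ {
            go func() {
                // Each "P" notices the flag at its own pace and acks.
                for atomic.LoadUint32(&wbEnabled) == 0 {
                    // a real P checks at safe points, not in a spin loop
                }
                acks.Done()
            }()
        }

        atomic.StoreUint32(&wbEnabled, 1) // enable write barriers
        acks.Wait()                       // every P has observed the flag
        fmt.Println("safe to begin blackening")
    }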
Change-Id: If2f129a6a55bd8bdd4308067af2b739f3fb41955
Reviewed-on: https://go-review.googlesource.com/8207
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The motivation is that sysAlloc/Free() currently aren't safe to be
called without a valid G, because arm's xadd64() uses locks that require
a valid G.
The solution here was proposed by Dmitry Vyukov: use xadduintptr()
instead of xadd64(), until arm can support xadd64 on all of its
architectures (not a trivial task for arm).
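In portable terms (using sync/atomic as a stand-in for the runtime's
internal helpers), the substitution looks like:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // sysStat models a memory-stat counter. uintptr is pointer-sized
    // everywhere, so 32-bit arm needs no 64-bit atomic (and no lock).
    var sysStat uintptr

    func main() {
        atomic.AddUintptr(&sysStat, 4096)             // account 4096 bytes
        atomic.AddUintptr(&sysStat, ^uintptr(4096-1)) // release them again
        fmt.Println(atomic.LoadUintptr(&sysStat))     // 0
    }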
Change-Id: I250252079357ea2e4360e1235958b1c22051498f
Reviewed-on: https://go-review.googlesource.com/9002
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
This field used to decrease with sweeps (and potentially go
negative). Now it is always zero or positive, so change it to a
uintptr so it meshes better with other memory stats.
Change-Id: I6a50a956ddc6077eeaf92011c51743cb69540a3c
Reviewed-on: https://go-review.googlesource.com/8899
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, mutator allocation periodically assists the garbage
collector by performing a small, fixed amount of scanning work.
However, to control heap growth, mutators need to perform scanning
work *proportional* to their allocation rate.
This change implements proportional mutator assists. This uses the
scan work estimate computed by the garbage collector at the beginning
of each cycle to compute how much scan work must be performed per
allocation byte to complete the estimated scan work by the time the
heap reaches the goal size. When allocation triggers an assist, it
uses this ratio and the amount allocated since the last assist to
compute the assist work, then attempts to steal as much of this work
as possible from the background collector's credit, and then performs
any remaining scan work itself.
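The arithmetic, as a rough sketch (all names hypothetical, and the
real computation is in scan-work units rather than plain bytes):

    package main

    import "fmt"

    func main() {
        var (
            estScanWork = int64(10 << 20) // estimated scan work this cycle
            heapLive    = int64(60 << 20) // heap in use at cycle start
            heapGoal    = int64(80 << 20) // heap size by which work must be done
        )
        // Scan work owed per byte allocated between now and the goal.
        assistRatio := float64(estScanWork) / float64(heapGoal-heapLive)

        allocated := int64(1 << 20) // bytes allocated since the last assist
        debt := int64(assistRatio * float64(allocated))

        // Steal as much as possible from background credit first.
        bgCredit := int64(200 << 10)
        steal := debt
        if steal > bgCredit {
            steal = bgCredit
        }
        fmt.Printf("ratio=%.2f debt=%d stolen=%d self-scan=%d\n",
            assistRatio, debt, steal, debt-steal)
    }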
Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
Reviewed-on: https://go-review.googlesource.com/8836
Reviewed-by: Rick Hudson <rlh@golang.org>
This CL revises CL 7504 to use explicitly uintptr types for the
struct fields that are going to be updated sometimes without
write barriers. The result is that the fields are now updated *always*
without write barriers.
This approach has two important properties:
1) Now the GC never looks at the field, so if the missing reference
could cause a problem, it will do so all the time, not just when the
write barrier is missed at just the right moment.
2) Now a write barrier never happens for the field, avoiding the
(correct) detection of inconsistent write barriers when GODEBUG=wbshadow=1.
Change-Id: Iebd3962c727c0046495cc08914a8dc0808460e0e
Reviewed-on: https://go-review.googlesource.com/9019
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
'themoduledata' doesn't really make sense now that we support multiple
moduledata objects.
Change-Id: I8263045d8f62a42cb523502b37289b0fba054f62
Reviewed-on: https://go-review.googlesource.com/8521
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This tracks the number of heap bytes marked by a GC cycle. We'll use
this information to precisely trigger the next GC cycle.
Currently each gcWork accumulates its own work counter, and dispose
atomically aggregates this into a global work counter. dispose happens
relatively infrequently, so the contention on the global counter
should be low. If this turns out to be an issue, we can reduce the
number of disposes, and if it's still a problem, we can switch to
per-P counters.
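The aggregation pattern, sketched (types simplified well past the
real gcWork):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    var bytesMarked uint64 // global: heap bytes marked this cycle

    // workSketch models a gcWork: the counter is local and uncontended.
    type workSketch struct {
        marked uint64
    }

    func (w *workSketch) mark(size uint64) { w.marked += size }

    // dispose aggregates into the global counter; this is the only
    // atomic, and it runs infrequently.
    func (w *workSketch) dispose() {
        atomic.AddUint64(&bytesMarked, w.marked)
        w.marked = 0
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                var w workSketch
                for j := 0; j < 1000; j++ {
                    w.mark(64)
                }
                w.dispose()
            }()
        }
        wg.Wait()
        fmt.Println("bytes marked:", atomic.LoadUint64(&bytesMarked))
    }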
Change-Id: I1bc377cb2e802ef61c2968602b63146d52e7f5db
Reviewed-on: https://go-review.googlesource.com/8388
Reviewed-by: Russ Cox <rsc@golang.org>
In preparation for being able to run a Go program that has code
in several objects, this changes from having several linker
symbols used by the runtime into having one linker symbol that
points at a structure containing the needed data. Multiple
object support will construct a linked list of such structures.
A follow-up will initialize the slices in the themoduledata
structure directly from the linker but I was aiming for a minimal
diff for now.
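The intended shape is an intrusive linked list hanging off one linker
symbol; a toy model (the real moduledata carries pclntab, ftab,
typelinks, and so on):

    package main

    import "fmt"

    type moduledataSketch struct {
        name string // stands in for the real slices and tables
        next *moduledataSketch
    }

    // firstmoduledata models the one symbol the linker emits.
    var firstmoduledata = moduledataSketch{name: "main binary"}

    func addModule(m *moduledataSketch) {
        last := &firstmoduledata
        for last.next != nil {
            last = last.next
        }
        last.next = m
    }

    func main() {
        addModule(&moduledataSketch{name: "extra object"})
        for m := &firstmoduledata; m != nil; m = m.next {
            fmt.Println(m.name)
        }
    }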
Change-Id: I613cce35309801cf265a1d5ae5aaca8d689c5cbf
Reviewed-on: https://go-review.googlesource.com/7441
Reviewed-by: Ian Lance Taylor <iant@golang.org>
To reduce lock contention in this mode, this change makes persistent
allocation state per-P, which means at most 64 kB overhead x $GOMAXPROCS,
which should be completely tolerable.
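Sketch of the shape (persistentalloc's real chunking and locking
differ; names here are illustrative):

    package main

    import "fmt"

    const chunkSize = 64 << 10 // 64 kB per-P chunk

    // perPAlloc models per-P persistent allocation state: a bump
    // allocator that takes the global path once per chunk, not once
    // per allocation.
    type perPAlloc struct {
        buf []byte
        off int
    }

    func (p *perPAlloc) alloc(n int) []byte {
        if p.off+n > len(p.buf) {
            p.buf = make([]byte, chunkSize) // real code locks + sysAllocs here
            p.off = 0
        }
        b := p.buf[p.off : p.off+n]
        p.off += n
        return b
    }

    func main() {
        var p perPAlloc
        fmt.Println(len(p.alloc(128)), len(p.alloc(4096)))
    }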
Change-Id: I34ca95e77d7e67130e30822e5a4aff6772b1a1c5
Reviewed-on: https://go-review.googlesource.com/7740
Reviewed-by: Rick Hudson <rlh@golang.org>
Everything has moved to Go, but comments still refer to .c/.h files.
Fix all of those up, at least for these three directories.
Fixes #10138
Change-Id: Ie5efe89b247841e0b3f82aac5256b2c606ef67dc
Reviewed-on: https://go-review.googlesource.com/7431
Reviewed-by: Russ Cox <rsc@golang.org>
Even though the world is stopped, the GC may do pointer
writes that need to be protected by write barriers.
This means that the write barrier must be on
continuously from the time the mark phase starts until
the mark termination phase ends. Checks were added to
ensure that no allocation happens during a GC.
Hoist the logic that clears pools to the start of the GC
so that the memory can be reclaimed during this GC cycle.
Change-Id: I9d1551ac5db9bac7bac0cb5370d5b2b19a9e6a52
Reviewed-on: https://go-review.googlesource.com/6990
Reviewed-by: Austin Clements <austin@google.com>
Available darwin/arm devices sporadically have trouble mapping 256M.
I would really appreciate it if anyone could check my working on
this, and make sure there aren't obviously bad consequences I
haven't considered.
Change-Id: Id1a8edae104d974fcf5f9333274f958625467f79
Reviewed-on: https://go-review.googlesource.com/5752
Reviewed-by: Keith Randall <khr@golang.org>
The routine mallocgc retrieves objects from freelists. Prefetch
the object that will be returned in the next call to mallocgc.
Benchmark numbers indicate a 1% improvement with prefetchnta over no
prefetch, and much less with prefetcht0, prefetcht1, or prefetcht2.
These numbers were for the garbage benchmark with MAXPROCS=4:
no prefetch >> 5.96 / 5.77 / 5.89
prefetcht0(uintptr(v.ptr().next)) >> 5.88 / 6.17 / 5.84
prefetcht1(uintptr(v.ptr().next)) >> 5.88 / 5.89 / 5.91
prefetcht2(uintptr(v.ptr().next)) >> 5.87 / 6.47 / 5.92
prefetchnta(uintptr(v.ptr().next)) >> 5.72 / 5.84 / 5.85
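The pattern, with a hypothetical stub standing in for the intrinsic
(the runtime's prefetchnta is a single non-temporal prefetch
instruction):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // prefetchnta is a stand-in stub; it only marks where the real
    // prefetch hint would go.
    func prefetchnta(addr uintptr) {}

    type gclink struct {
        next *gclink
    }

    // pop returns the head of a freelist and hints the CPU about the
    // object the *next* call will return.
    func pop(head **gclink) *gclink {
        v := *head
        *head = v.next
        if v.next != nil {
            prefetchnta(uintptr(unsafe.Pointer(v.next)))
        }
        return v
    }

    func main() {
        list := &gclink{next: &gclink{}}
        fmt.Println(pop(&list) != nil) // true
    }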
Change-Id: I54e07172081cccb097d5b5ce8789d74daa055ed9
Reviewed-on: https://go-review.googlesource.com/5350
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Move code from malloc1.go, malloc2.go, mem.go, mgc0.go into
appropriate locations.
Factor mgc.go into mgc.go, mgcmark.go, mgcsweep.go, mstats.go.
A lot of this code was in certain files because the right place was in
a C file but it was written in Go, or vice versa. This is one step toward
making things actually well-organized again.
Change-Id: I6741deb88a7cfb1c17ffe0bcca3989e10207968f
Reviewed-on: https://go-review.googlesource.com/5300
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Add local workbufs to the m struct in order to reduce contention.
Add consistency checks for workbuf ownership.
Pass workbufs through the call chain to avoid swapping them
to and from the m struct.
Adjust the size of the workbuf so that the mutators can
more frequently pass modifications to the GC, thus shifting
some work from the STW mark termination phase to the concurrent
mark phase.
Change-Id: I557b53af34ad9972265e0ed9f5996e52d548563d
Reviewed-on: https://go-review.googlesource.com/3972
Reviewed-by: Austin Clements <austin@google.com>
m.gcing has become overloaded to mean "don't preempt this g" in
general. Once the garbage collector is preemptible, the one thing it
*won't* mean is that we're in the garbage collector.
So, rename gcing to "preemptoff" and make it a string giving a reason
that preemption is disabled. gcing was never set to anything but 0 or
1, so we don't have to worry about there being a stack of reasons.
Change-Id: I4337c29e8e942e7aa4f106fc29597e1b5de4ef46
Reviewed-on: https://go-review.googlesource.com/3660
Reviewed-by: Russ Cox <rsc@golang.org>
During a concurrent GC, stacks are scanned in
an initial scan phase, informing the GC of all
pointers on the stack. The GC only needs to rescan
a stack if it potentially changed, which can only
happen if the goroutine ran.
This CL tracks whether the goroutine has run
since it was last scanned and thus may have changed
its stack. If necessary, the stack is rescanned.
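The bookkeeping in miniature (field name hypothetical; the real flag
lives on the g):

    package main

    import "fmt"

    type gSketch struct {
        scanValid bool // last stack scan still valid
    }

    func scanStack(gp *gSketch) { gp.scanValid = true }

    // execute models scheduling the goroutine: running may change the
    // stack, so the scan is invalidated.
    func execute(gp *gSketch) { gp.scanValid = false }

    func rescanIfNeeded(gp *gSketch) bool {
        if gp.scanValid {
            return false // initial scan still covers this stack
        }
        scanStack(gp)
        return true
    }

    func main() {
        g := &gSketch{}
        scanStack(g)
        fmt.Println(rescanIfNeeded(g)) // false: has not run since scan
        execute(g)
        fmt.Println(rescanIfNeeded(g)) // true: had to rescan
    }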
Change-Id: I5fb1c4338d42e3f61ab56c9beb63b7b2da25f4f1
Reviewed-on: https://go-review.googlesource.com/3275
Reviewed-by: Russ Cox <rsc@golang.org>
Adjust triggergc so that we trigger when we have used 7/8
of the available heap memory. Do the first collection when we
exceed 4 MB.
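In sketch form (constants illustrative), the condition is a single
comparison against the goal:

    package main

    import "fmt"

    const firstGoal = 4 << 20 // first collection at 4 MB

    // shouldTrigger reports whether 7/8 of the headroom up to the
    // next-GC goal has been consumed.
    func shouldTrigger(heapAlloc, nextGC uint64) bool {
        return heapAlloc >= nextGC-nextGC/8
    }

    func main() {
        fmt.Println(shouldTrigger(3<<20, firstGoal))    // false: 3 MB < 3.5 MB
        fmt.Println(shouldTrigger(3700<<10, firstGoal)) // true: past 3.5 MB
    }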
Change-Id: I467b4335e16dc9cd1521d687fc1f99a51cc7e54b
Reviewed-on: https://go-review.googlesource.com/3149
Reviewed-by: Austin Clements <austin@google.com>
Adjust triggergc so that we trigger when we have used 7/8
of the available memory.
Change-Id: I7ca02546d3084e6a04d60b09479e04a9a9837ae2
Reviewed-on: https://go-review.googlesource.com/3061
Reviewed-by: Russ Cox <rsc@golang.org>
The code in mfinal.go is moved from malloc*.go and mgc*.go
and substantially unchanged.
The code in mbitmap.go is also moved from those files, but
cleaned up so that it can be called from those files (in most cases
the code being moved was not already a standalone function).
I also renamed the constants and wrote comments describing
the format. The result is a significant cleanup and isolation of
the bitmap code, but, roughly speaking, it should be treated
and reviewed as new code.
The other files changed only as much as necessary to support
this code movement.
This CL does NOT change the semantics of the heap or type
bitmaps at all, although there are now some obvious opportunities
to do so in followup CLs.
Change-Id: I41b8d5de87ad1d3cd322709931ab25e659dbb21d
Reviewed-on: https://go-review.googlesource.com/2991
Reviewed-by: Keith Randall <khr@golang.org>
These were fixed in my local commit,
but I forgot that the web Submit button can't see that.
Change-Id: Iec3a70ce3ccd9db2a5394ae2da0b293e45ac2fb5
Reviewed-on: https://go-review.googlesource.com/2822
Reviewed-by: Russ Cox <rsc@golang.org>
During all.bash I got a crash in the GOMAXPROCS=2 runtime test reporting
that the write barrier in the assignment 'c.tiny = add(x, size)' had been
given a pointer pointing into an unexpected span. The problem is that
the tiny allocation was at the end of a span and c.tiny was now pointing
to the end of the allocation and therefore to the end of the span aka
the beginning of the next span.
Rewrite tinyalloc not to do that.
More generally, it's not okay to call add(p, size) unless you know that p
points at > (not just >=) size bytes. Similarly, pretty much any call to
roundup doesn't know how much space p points at, so those are all
broken.
Rewrite persistentalloc not to use add(p, totalsize) and not to use roundup.
There is only one use of roundup left, in vprintf, which is dead code.
I will remove that code and roundup itself in a followup CL.
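The safe pattern in miniature (sizes illustrative): only form
add(p, n) when n is strictly less than the number of bytes p points
at, so the result can never alias the start of the next span:

    package main

    import (
        "fmt"
        "unsafe"
    )

    func add(p unsafe.Pointer, x uintptr) unsafe.Pointer {
        return unsafe.Pointer(uintptr(p) + x)
    }

    func main() {
        span := make([]byte, 16) // pretend this is a span
        p := unsafe.Pointer(&span[0])
        const size = 8
        q := add(p, size) // safe: 8 < 16, still inside this span
        *(*byte)(q) = 1
        fmt.Println(span[8]) // 1
    }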
Change-Id: I211e307d1a656d29087b8fd40b2b71010722fb4a
Reviewed-on: https://go-review.googlesource.com/2814
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
1) Move non-preemption check even earlier in newstack.
This avoids a few priority inversion problems.
2) Always use atomic operations to update bitmap for 1-word objects.
This avoids lost mark bits during concurrent GC.
3) Stop using work.nproc == 1 as a signal for being single-threaded.
The concurrent GC runs with work.nproc == 1 but other procs are
running mutator code.
The use of work.nproc == 1 in getfull *is* safe, but remove it anyway,
since it is saving only a single atomic operation per GC round.
Fixes #9225.
Change-Id: I24134f100ad592ea8cb59efb6a54f5a1311093dc
Reviewed-on: https://go-review.googlesource.com/2745
Reviewed-by: Rick Hudson <rlh@golang.org>
Previously, gccheckmark could only be enabled or disabled by calling
runtime.GCcheckmarkenable/GCcheckmarkdisable. This was a necessary
hack because GODEBUG was broken.
Now that GODEBUG works again, move control over gccheckmark to a
GODEBUG variable and remove these runtime functions. Currently,
gccheckmark is enabled by default (and will probably remain so for
much of the 1.5 development cycle).
Change-Id: I2bc6f30c21b795264edf7dbb6bd7354b050673ab
Reviewed-on: https://go-review.googlesource.com/2603
Reviewed-by: Rick Hudson <rlh@golang.org>
Run GC in its own background goroutine, making the
caller runnable if resources are available. This is
critical in single-goroutine applications.
Allow goroutines that allocate a lot to help out
the GC and in doing so throttle their own allocation.
Adjust test so that it only detects that a GC is run
during init calls and not whether the GC is memory
efficient. Memory efficiency work will happen later
in 1.5.
Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This improves the printing of GC times to be both more human-friendly
and to provide enough information for the construction of MMU curves
and other statistics. The new times look like:
GC: #8 72413852ns @143036695895725 pause=622900 maxpause=427037 goroutines=11 gomaxprocs=4
GC: sweep term: 190584ns max=190584 total=275001 procs=4
GC: scan: 260397ns max=260397 total=902666 procs=1
GC: install wb: 5279ns max=5279 total=18642 procs=4
GC: mark: 71530555ns max=71530555 total=186694660 procs=1
GC: mark term: 427037ns max=427037 total=1691184 procs=4
This prints gomaxprocs and the number of procs used in each phase for
the benefit of analyzing mutator utilization during concurrent phases.
This also means the analysis doesn't have to hard-code which phases
are STW.
This prints the absolute start time only for the GC cycle. The other
start times can be derived from the phase durations. This declutters
the view for human readers and doesn't pose any additional complexity
for machine readers.
This removes the confusing "cycle" terminology. Instead, this places
the phase duration after the phase name and adds a "ns" unit, which
both makes it implicitly clear that this is the duration of that phase
and indicates the units of the times.
This adds a "GC:" prefix to all lines for easier identification.
Finally, this generally cleans up the code as well as the placement of
spaces in the output and adds print locking so the statistics blocks
are never interrupted by other prints.
Change-Id: Ifd056db83ed1b888de7dfa9a8fc5732b01ccc631
Reviewed-on: https://go-review.googlesource.com/2542
Reviewed-by: Rick Hudson <rlh@golang.org>
First, call clearcheckmarks immediately after changing checkmark,
so that there is less time when the checkmark flag and the bitmap
are inconsistent. The tiny gap between the two lines is fine, because
the world is stopped. Before, the gap was much larger and included
such code as "go bgsweep()", which allocated.
Second, modify gcphase only when the world is stopped.
As written, gcscan_m was changing gcphase from 0 to GCscan
and back to 0 while other goroutines were running.
Another goroutine running at the same time might decide to
sleep, see GCscan, call gcphasework, and start "helping" by
scanning its stack. That's fine, except that if gcphase flips back
to 0 as the goroutine calls scanblock, it will start draining the
work buffers prematurely.
Both of these were found with wbshadow=2 (and a lot of hard work).
Eventually that will run automatically, but right now it still
doesn't quite work for all.bash, due to mmap conflicts with
pthread-created threads.
Change-Id: I99aa8210cff9c6e7d0a1b62c75be32a23321897b
Reviewed-on: https://go-review.googlesource.com/2340
Reviewed-by: Rick Hudson <rlh@golang.org>
Use typedmemmove, typedslicecopy, and adjust reflect.call
to execute the necessary write barriers.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iec5b5b0c1be5589295e28e5228e37f1a92e07742
Reviewed-on: https://go-review.googlesource.com/2312
Reviewed-by: Keith Randall <khr@golang.org>
A side effect of this change is that when assertI2T writes to the
memory for the T being extracted, it can use typedmemmove
for write barriers.
There are other ways we could have done this, but this one
finishes a TODO in package runtime.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Icbc8aabfd8a9b1f00be2e421af0e3b29fa54d01e
Reviewed-on: https://go-review.googlesource.com/2279
Reviewed-by: Keith Randall <khr@golang.org>
This is the detection code. It works well enough that I know of
a handful of missing write barriers. However, those are subtle
enough that I'll address them in separate followup CLs.
GODEBUG=wbshadow=1 checks for a write that bypassed the
write barrier at the next write barrier of the same word.
If a bug can be detected in this mode it is typically easy to
understand, since the crash says quite clearly what kind of
word has missed a write barrier.
GODEBUG=wbshadow=2 adds a check of the write barrier
shadow copy during garbage collection. Bugs detected at
garbage collection can be difficult to understand, because
there is no context for what the found word means.
Typically you have to reproduce the problem with allocfreetrace=1
in order to understand the type of the badly updated word.
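For example (invocations only; myprog is a placeholder):

    GODEBUG=wbshadow=1 ./myprog   # catch a miss at the next barrier on that word
    GODEBUG=wbshadow=2 ./myprog   # also verify the shadow heap during GC
    GODEBUG=wbshadow=2,allocfreetrace=1 ./myprog   # recover the bad word's type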
Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
This reverts commit ab0535ae3f.
I think it will remain useful to distinguish code that must
run on a system stack from code that can run on either stack,
even if that distinction is no
longer based on the implementation language.
That is, I expect to add a //go:systemstack comment that,
in terms of the old implementation, tells the compiler
to pretend this function was written in C.
Change-Id: I33d2ebb2f99ae12496484c6ec8ed07233d693275
Reviewed-on: https://go-review.googlesource.com/2275
Reviewed-by: Russ Cox <rsc@golang.org>
They are no longer needed now that C is gone.
goatoi -> atoi
gofuncname/funcname -> funcname/cfuncname
goroundupsize -> already existing roundupsize
Change-Id: I278bc33d279e1fdc5e8a2a04e961c4c1573b28c7
Reviewed-on: https://go-review.googlesource.com/2154
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Now that we've removed all the C code in runtime and the C compilers,
there is no need to have a separate stackguard field to check for C
code on Go stack.
Remove field g.stackguard1 and rename g.stackguard0 to g.stackguard.
Adjust liblink and cmd/ld as necessary.
Change-Id: I54e75db5a93d783e86af5ff1a6cd497d669d8d33
Reviewed-on: https://go-review.googlesource.com/2144
Reviewed-by: Keith Randall <khr@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Replace with uses of //go:linkname in Go files, direct use of name in .s files.
The only one that really truly needs a jump is reflect.call; the jump is now
next to the runtime.reflectcall assembly implementations.
Change-Id: Ie7ff3020a8f60a8e4c8645fe236e7883a3f23f46
Reviewed-on: https://go-review.googlesource.com/1962
Reviewed-by: Austin Clements <austin@google.com>