If a sigprof happens during a cgo call, we traceback from the entry
point of the cgo call. However, if the SP is outside of the G's stack,
we'll then ignore this traceback, even if it was successful, and
overwrite it with just _ExternalCode.
Fix this by accepting any successful traceback, regardless of whether
we got it from a cgo entry point or from regular Go code.
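A minimal sketch of the revised acceptance rule (hypothetical names
standing in for the runtime's gentraceback plumbing, not the actual
code):

package sketch // illustrative only

var pcExternalCode uintptr // stand-in for the _ExternalCode marker

// profileStack keeps any traceback that succeeded, whether it
// started from a cgo entry point or from regular Go code, and falls
// back to _ExternalCode only when unwinding failed.
func profileStack(traceback func() []uintptr) []uintptr {
	if stk := traceback(); len(stk) > 0 {
		return stk
	}
	return []uintptr{pcExternalCode}
}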
Fixes #13466.
Change-Id: I5da9684361fc5964f44985d74a8cdf02ffefd213
Reviewed-on: https://go-review.googlesource.com/18327
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
We were setting the signal mask of a new m to the signal mask of the m
that created it. That failed when that m happened to be the one created
by ensureSigM, which sets its signal mask to only include the signals
being caught by os/signal.Notify.
Fixes #13164.
Update #9896.
Change-Id: I705c196fe9d11754e10bab9e9b2e7530ecdfa367
Reviewed-on: https://go-review.googlesource.com/18064
Reviewed-by: Russ Cox <rsc@golang.org>
Avoids an msan error when runtime/cgo is explicitly rebuilt with
-fsanitize=memory.
Fixes #13815.
Change-Id: I70308034011fb308b63585bcd40b0d1e62ec93ef
Reviewed-on: https://go-review.googlesource.com/18263
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, if sigprof determines that the G is in user code (not cgo
or libcall code), it will only traceback the G stack if it can acquire
the stack barrier lock. However, it has no such restriction if the G
is in cgo or libcall code. Because cgo calls count as syscalls, stack
scanning and stack barrier installation can occur during a cgo call,
which means sigprof could attempt to traceback a G in a cgo call while
scanstack is installing stack barriers in that G's stack. As a result,
the following sequence of events can cause the sigprof traceback to
panic with "missed stack barrier":
1. M1: G1 performs a Cgo call (which, on Windows, is any system call,
which could explain why this is easier to reproduce on Windows).
2. M1: The Cgo call puts G1 into _Gsyscall state.
3. M2: GC starts a scan of G1's stack. It puts G1 into _Gscansyscall
and acquires the stack barrier lock.
4. M3: A profiling signal comes in. On Windows this is a global
(though I don't think this matters), so the runtime stops M1 and
calls sigprof for G1.
5. M3: sigprof fails to acquire the stack barrier lock (because the
GC's stack scan holds it).
6. M3: sigprof observes that G1 is in a Cgo call, so it calls
gentraceback on G1 with its Cgo transition point.
7. M3: gentraceback on G1 grabs the currently empty g.stkbar slice.
8. M2: GC finishes scanning G1's stack and installing stack barriers.
9. M3: gentraceback encounters one of the just-installed stack
barriers and panics.
This commit fixes this by only allowing cgo tracebacks if sigprof can
acquire the stack barrier lock, just like in the regular user
traceback case.
For good measure, we put the same constraint on libcall tracebacks.
This case is probably already safe because, unlike cgo calls, libcalls
leave the G in _Grunning and prevent reaching a safe point, so
scanstack cannot run during a libcall. However, this also means that
sigprof will always acquire the stack barrier lock without contention,
so there's no cost to adding this constraint to libcall tracebacks.
Fixes #12528. For 1.5.3 (will require some backporting).
Change-Id: Ia5a4b8e3d66b23b02ffcd54c6315c81055c0cec2
Reviewed-on: https://go-review.googlesource.com/18023
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, sysmon triggers a forced GC solely based on
memstats.last_gc. However, memstats.last_gc isn't updated until mark
termination, so once sysmon starts triggering forced GC, it will keep
triggering them until GC finishes. The first of these actually starts
a GC; the ones in between print "GC forced" but gcStart returns
immediately because gcphase != _GCoff; and the last may start another
GC if the previous GC finishes (and sets last_gc) between sysmon
triggering it and gcStart checking the GC phase.
Fix this by expanding the condition for starting a forced GC to also
require that no GC is currently running. This, combined with the way
forcegchelper blocks until the GC cycle is started, ensures sysmon
only starts one GC when the time exceeds the forced GC threshold.
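A minimal sketch of the strengthened condition (names illustrative;
the real check lives in sysmon):

package sketch

// shouldForceGC: the forced-GC period must have expired AND no
// cycle may already be running.
func shouldForceGC(now, lastGC, period int64, gcRunning bool) bool {
	return lastGC != 0 && now-lastGC > period && !gcRunning
}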
Fixes #13458.
Change-Id: Ie6cf841927f6085136be3f45259956cd5cf10d23
Reviewed-on: https://go-review.googlesource.com/17819
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The addition of stack barrier locking to copystack subsumes the
partial fix from commit bbd1a1c for SIGPROF during copystack. With the
stack barrier locking, this commit simplifies the rule in sigprof to:
the user stack can be traced only if sigprof can acquire the stack
barrier lock.
Updates #12932, #13362.
Change-Id: I1c1f80015053d0ac7761e9e0c7437c2aba26663f
Reviewed-on: https://go-review.googlesource.com/17192
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we wake up new worker threads whenever we pass
through the scheduler with nmspinning==0. This leads to
lots of unnecessary thread wake ups.
Instead let only spinning threads wake up new spinning threads.
For the following program:
package main

import "runtime"

func main() {
	for i := 0; i < 1e7; i++ {
		runtime.Gosched()
	}
}
Before:
$ time ./test
real 0m4.278s
user 0m7.634s
sys 0m1.423s
$ strace -c ./test
% time     seconds  usecs/call     calls    errors syscall
 99.93    9.314936           3   2685009     17536 futex
After:
$ time ./test
real 0m1.200s
user 0m1.181s
sys 0m0.024s
$ strace -c ./test
% time     seconds  usecs/call     calls    errors syscall
  3.11    0.000049          25         2           futex
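At its core the new discipline is a compare-and-swap gate, much like
the runtime's wakep; a simplified sketch (startm is supplied by the
caller here, where the runtime would start an M in spinning mode):

package sketch

import "sync/atomic"

var nmspinning int32

// maybeWakep wakes a new spinning thread only when nothing is
// spinning already; non-spinning threads never wake anyone.
func maybeWakep(startm func()) {
	if !atomic.CompareAndSwapInt32(&nmspinning, 0, 1) {
		return
	}
	startm()
}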
Fixes #13527
Change-Id: Ia1f5bf8a896dcc25d8b04beb1f4317aa9ff16f74
Reviewed-on: https://go-review.googlesource.com/17540
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
debug.schedtrace is an int32. Convert it to int64 before
multiplying by the constant 1000000. Otherwise, schedtrace
values greater than 2147 result in int32 overflow, causing
incorrect delays between traces.
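A self-contained illustration of the overflow (values invented):

package main

import "fmt"

func main() {
	var schedtrace int32 = 3000 // as parsed from GODEBUG=schedtrace=3000
	bad := schedtrace * 1000000 // int32 multiply wraps: 3e9 > MaxInt32
	good := int64(schedtrace) * 1000000
	fmt.Println(bad, good) // prints -1294967296 3000000000
}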
Change-Id: I064e8d7b432c1e892a705ee1f31a2e8cdd2c3ea3
Reviewed-on: https://go-review.googlesource.com/17712
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
sysmon runs without a P. This means it can't interact with the garbage
collector, so write barriers are not allowed in anything that sysmon
does.
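The enforcement mechanism is the runtime-internal
//go:nowritebarrierrec annotation, sketched here on a stand-in
function (the real target is sysmon; the pragma is only honored
inside package runtime):

//go:nowritebarrierrec
func monitor() {
	// The compiler rejects any write barrier reachable from here,
	// transitively through everything monitor calls.
}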
Fixes #10600.
Change-Id: I9de1283900dadee4f72e2ebfc8787123e382ae88
Reviewed-on: https://go-review.googlesource.com/17006
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
allocm is a very unusual function: it is specifically designed to
allocate in contexts where m.p is nil by temporarily taking over a P.
Since allocm is used in many contexts where it would make sense to use
nowritebarrierrec, this commit teaches the nowritebarrierrec analysis
to stop at allocm.
Updates #10600.
Change-Id: I8499629461d4fe25712d861720dfe438df7ada9b
Reviewed-on: https://go-review.googlesource.com/17005
Reviewed-by: Russ Cox <rsc@golang.org>
A sigprof during stack barrier insertion or removal can crash if it
detects an inconsistency between the stkbar array and the stack
itself. Currently we protect against this when scanning another G's
stack using stackLock, but we don't protect against it when unwinding
stack barriers for a recover or a memmove to the stack.
This commit cleans up and improves the stack locking code. It
abstracts out the lock and unlock operations. It uses the lock
consistently everywhere we perform stack operations, and pushes the
lock/unlock down closer to where the stack barrier operations happen
to make it more obvious what it's protecting. Finally, it modifies
sigprof so that instead of spinning until it acquires the lock, it
simply doesn't perform a traceback if it can't acquire it. This is
necessary to prevent self-deadlock.
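Schematically, the non-blocking acquire looks like this (hypothetical
names; the runtime wraps this in its own lock/unlock helpers):

package sketch

import "sync/atomic"

type g struct{ stackLock uint32 }

// tryTraceback never spins: if a scan, copy, or barrier operation
// holds the stack lock, the profiling sample skips the traceback.
func tryTraceback(gp *g, walk func()) bool {
	if !atomic.CompareAndSwapUint32(&gp.stackLock, 0, 1) {
		return false
	}
	walk()
	atomic.StoreUint32(&gp.stackLock, 0)
	return true
}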
Updates #11863, which introduced stackLock to fix some of these
issues, but didn't go far enough.
Updates #12528.
Change-Id: I9d1fa88ae3744d31ba91500c96c6988ce1a3a349
Reviewed-on: https://go-review.googlesource.com/17036
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Cgo-created threads transition between having associated Go g's and m's and not.
A signal arriving during the transition could think it was safe and appropriate to
run Go signal handlers when it was in fact not.
Avoid the race by masking all signals during the transition.
Fixes #12277.
Change-Id: Ie9711bc1d098391d58362492197a7e0f5b497d14
Reviewed-on: https://go-review.googlesource.com/16915
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The PowerPC ISA does not have a PC-relative load instruction, which poses
obvious challenges when generating position-independent code. The way the ELFv2
ABI addresses this is to specify that r2 points to a per "module" (shared
library or executable) TOC pointer. Maintaining this pointer requires
cooperation between codegen and the system linker:
* Non-leaf functions leave space on the stack at r1+24 to save the TOC pointer.
* A call to a function that *might* have to go via a PLT stub must be followed
by a nop instruction that the system linker can replace with "ld r2, 24(r1)"
to restore the TOC pointer (only when dynamically linking Go code).
* When calling a function via a function pointer, the address of the function
must be in r12, and the first couple of instructions (the "global entry
point") of the called function use this to derive the address of the TOC
for the module it is in.
* When calling a function that is implemented in the same module, the system
linker adjusts the call to skip over the instructions mentioned above (the
"local entry point"), assuming that r2 is already correctly set.
So this changeset adds the global entry point instructions, sets the metadata so
the system linker knows where the local entry point is, inserts code to save the
TOC pointer at 24(r1), adds a nop after any call not known to be local and copes
with the odd non-local code transfer in the runtime (e.g. the stuff around
jmpdefer). It does not actually compile PIC yet.
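For concreteness, the standard ELFv2 shapes involved (generic ABI
convention, not Go's exact output):

	bl foo        # may be redirected through a PLT stub
	nop           # linker may rewrite to: ld r2, 24(r1)

foo:                  # global entry point; address arrives in r12
	addis r2, r12, .TOC.-foo@ha
	addi  r2, r2, .TOC.-foo@l
	# local entry point starts here; r2 assumed valid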
Change-Id: I7522e22bdfd2f891745a900c60254fe9e372c854
Reviewed-on: https://go-review.googlesource.com/15967
Reviewed-by: Russ Cox <rsc@golang.org>
sigprof tracebacks the stack across systemstack switches to make
profile tracebacks more complete. However, it does this even if the
user stack is currently being copied, which means it may be in an
inconsistent state that will cause the traceback to panic.
One specific way this can happen is during stack shrinking. Some
goroutine blocks for STW, then enters gchelper, which then assists
with root marking. If that root marking happens to pick the original
goroutine and its stack needs to be shrunk, it will begin to copy that
stack. During this copy, the stack is generally inconsistent and, in
particular, the actual locations of the stack barriers and their
recorded locations are temporarily out of sync. If a SIGPROF happens
during this inconsistency, it will walk the stack all the way back to
the blocked goroutine and panic when it fails to unwind the stack
barriers.
Fix this by disallowing jumping to the user stack during SIGPROF if
that user stack is in the process of being copied.
Fixes #12932.
Change-Id: I9ef694c2c01e3653e292ce22612418dd3daff1b4
Reviewed-on: https://go-review.googlesource.com/16819
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
runtime/internal/sys will hold system-, architecture- and config-
specific constants.
Updates #11647.
Change-Id: I6db29c312556087a42e8d2bdd9af40d157c56b54
Reviewed-on: https://go-review.googlesource.com/16817
Reviewed-by: Russ Cox <rsc@golang.org>
Applies to types fixAlloc, mCache, mCentral, mHeap, mSpan, and
mSpanList.
Two special cases:
1. mHeap_Scavenge() previously didn't take an *mheap parameter, so it
was specially handled in this CL.
2. mHeap_Free() would have collided with mheap's "free" field, so it's
been renamed to (*mheap).freeSpan to parallel its underlying
(*mheap).freeSpanLocked method.
Change-Id: I325938554cca432c166fe9d9d689af2bbd68de4b
Reviewed-on: https://go-review.googlesource.com/16221
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When we're jumping time forward, it means everyone is asleep, so there
should always be an M available. Furthermore, allocating an M here
causes both allocation and write barriers in contexts that may be
running without a P (such as sysmon).
Hence, replace this allocation with a throw.
Updates #10600.
Change-Id: I2cee70d5db828d0044082878995949edb25dda5f
Reviewed-on: https://go-review.googlesource.com/16815
Reviewed-by: Russ Cox <rsc@golang.org>
This change breaks out most of the atomics functions in the runtime
into package runtime/internal/atomic. It adds some basic support
in the toolchain for runtime packages, and also modifies linux/arm
atomics to remove the dependency on the runtime's mutex. The mutexes
have been replaced with spinlocks.
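The replacement spinlocks are of the usual compare-and-swap shape; an
illustrative sketch (the runtime's real one also yields the CPU while
waiting):

package sketch

import "sync/atomic"

type spinlock struct{ v uint32 }

func (l *spinlock) lock() {
	for !atomic.CompareAndSwapUint32(&l.v, 0, 1) {
		// spin until the holder releases
	}
}

func (l *spinlock) unlock() { atomic.StoreUint32(&l.v, 0) }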
all trybots are happy!
In addition to the trybots, I've tested on the darwin/arm64 builder,
on the darwin/arm builder, and on a ppc64le machine.
Change-Id: I6698c8e3cf3834f55ce5824059f44d00dc8e3c2f
Reviewed-on: https://go-review.googlesource.com/14204
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This moves all of the mark 1 to mark 2 transition and mark termination
to the mark done transition function. This means these transitions are
now handled on the goroutine that detected mark completion. This also
means that the GC coordinator and the background completion barriers
are no longer used and various workarounds to yield to the coordinator
are no longer necessary. These will be removed in follow-up commits.
One consequence of this is that mark workers now need to be
preemptible when performing the mark done transition. This allows them
to stop the world and to perform the final clean-up steps of GC after
restarting the world. They are only made preemptible while performing
this transition, so if the worker that findRunnableGCWorker would
have scheduled isn't available, we didn't want to schedule it anyway.
Fixes #11970.
Change-Id: I9203a2d6287eeff62d589ec02ad9cb1e29ddb837
Reviewed-on: https://go-review.googlesource.com/16391
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, we don't start dedicated or fractional mark workers unless
the mark 1 or mark 2 barriers have been cleared. One intended
consequence of this is that no background workers run between the
forEachP that disposes all gcWork caches and the beginning of mark 2.
However, we (unintentionally) did not apply this restriction to idle
mark workers. As a result, these can start in the interim between mark
1 completion and mark 2 starting. This explains why it was necessary
to reset the root marking jobs using carefully ordered atomic writes
when setting up mark 2. It also means that, even though we definitely
enqueue work before starting mark 2, it may be drained by the time we
reset the mark 2 barrier. If this happens, currently the only thing
preventing the runtime from deadlocking is that the scheduler itself
also checks for mark completion and will signal mark 2 completion.
Were it not for the odd behavior of idle workers, this check in the
scheduler would not be necessary.
Clean all of this up and prepare to remove this check in the scheduler
by applying the same restriction to starting idle mark workers.
Change-Id: Ic1b479e1591bd7773dc27b320ca399a215603b5a
Reviewed-on: https://go-review.googlesource.com/16631
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This begins the conversion of the centralized GC coordinator to a
decentralized state machine by introducing the internal API that
triggers the first state transition from _GCoff to _GCmark (or
_GCmarktermination).
This change introduces the transition lock, the off->mark transition
condition (which is very similar to shouldtriggergc()), and the
general structure of a state transition. Since we're doing this
conversion in stages, it then falls back to the GC coordinator to
actually execute the cycle. We'll start moving logic out of the GC
coordinator and in to transition functions next.
This fixes a minor bug in gcstoptheworld debug mode where passing the
heap trigger once could trigger multiple STW GCs.
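The general structure is: take the transition lock, re-check the
condition, transition at most once. A self-contained sketch (names
illustrative, not the runtime's code):

package sketch

import "sync"

var (
	transMu sync.Mutex
	gcOff   = true // stand-in for gcphase == _GCoff
)

// transition lets many goroutines race to start a cycle while
// guaranteeing that exactly one performs the transition.
func transition(shouldStart func() bool, start func()) {
	transMu.Lock()
	defer transMu.Unlock()
	if gcOff && shouldStart() {
		gcOff = false
		start()
	}
}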
Updates #11970.
Change-Id: I964087dd190a639eb5766398f8e1bbf8b352902f
Reviewed-on: https://go-review.googlesource.com/16355
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
This eliminates many write barriers in the scheduler code that are
unnecessary and will interfere with upcoming changes where the garbage
collector will have to invoke run queue functions in contexts that
must not have write barriers.
Change-Id: I702d0ac99cfd00ffff406e7362917db6a43e7e55
Reviewed-on: https://go-review.googlesource.com/16556
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Add explicit memory sanitizer instrumentation to the runtime and syscall
packages. The compiler does not instrument the runtime package. It
does instrument the syscall package, but we need to add a couple of
cases that it can't see.
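The added cases are paired read/write annotations of roughly this
flavor (msanread/msanwrite are the runtime's hooks into the sanitizer;
the call sites here are illustrative):

	msanwrite(unsafe.Pointer(&buf[0]), uintptr(len(buf))) // buf now initialized
	msanread(unsafe.Pointer(p), n)                        // check p before reading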
Change-Id: I2d66073f713fe67e33a6720460d2bb8f72f31394
Reviewed-on: https://go-review.googlesource.com/16164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
88e945f introduced a non-speculative double check of the heap trigger
before actually starting a concurrent GC. This was necessary to fix a
race for heap-triggered GC, but broke sysmon-triggered periodic GC,
since the heap check will of course fail for periodically triggered
GC.
Fix this by telling startGC whether this GC was triggered by heap
size or by a timer, and only doing the heap size double check for GCs
triggered by heap size.
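A sketch of the resulting shape (parameter and helper names
illustrative):

package sketch

// startGC double-checks the heap trigger only for heap-triggered
// cycles; timer-forced cycles skip the re-check.
func startGC(forceTrigger bool, heapOverTrigger func() bool, start func()) {
	if !forceTrigger && !heapOverTrigger() {
		return // speculative heap trigger lost the race; give up
	}
	start()
}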
Fixes #12026.
Change-Id: I7c3f6ec364545c36d619f2b4b3bf3b758e3bcbd6
Reviewed-on: https://go-review.googlesource.com/13168
Reviewed-by: Russ Cox <rsc@golang.org>
sysmon triggers a GC if there has been no GC for two minutes.
Currently, this is a STW GC. There is no reason for this to be STW, so
make it concurrent.
Fixes #10261.
Change-Id: I92f3ac37272d5c2a31480ff1fa897ebad08775a9
Reviewed-on: https://go-review.googlesource.com/11955
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
In preparation for renaming cgocall_errno to cgocall and
asmcgocall_errno to asmcgocall in the following CL.
rsc requested that CL 9387 be split into two parts. This is the first
part.
Change-Id: I7434f0e4b44dd37017540695834bfcb1eebf0b2f
Reviewed-on: https://go-review.googlesource.com/11166
Reviewed-by: Ian Lance Taylor <iant@golang.org>
There are several steps to stopping and starting the world and
currently they're open-coded in several places. The garbage collector
is the only thing that needs to stop and start the world in a
non-trivial pattern. Replace all other uses with calls to higher-level
functions that implement the entire pattern necessary to stop and
start the world.
This is a pure refactoring and should not change any code semantics.
In the following commits, we'll make changes that are easier to do
with this abstraction in place.
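Callers now follow a single pattern built from the new helpers:

	stopTheWorld("reason for stopping")
	// ... operate with the world stopped ...
	startTheWorld()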
This commit renames the old starttheworld to startTheWorldWithSema.
This is a slight misnomer right now because the callers release
worldsema just before calling this. However, a later commit will swap
these and I don't want to think of another name in the meantime.
Change-Id: I5dc97f87b44fb98963c49c777d7053653974c911
Reviewed-on: https://go-review.googlesource.com/10154
Reviewed-by: Russ Cox <rsc@golang.org>
This CL revises CL 7504 to use explicit uintptr types for the
struct fields that are going to be updated sometimes without
write barriers. The result is that the fields are now updated *always*
without write barriers.
This approach has two important properties:
1) Now the GC never looks at the field, so if the missing reference
could cause a problem, it will do so all the time, not just when the
write barrier is missed at just the right moment.
2) Now a write barrier never happens for the field, avoiding the
(correct) detection of inconsistent write barriers when GODEBUG=wbshadow=1.
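A self-contained sketch of the technique (types illustrative, not the
actual CL):

package sketch

import "unsafe"

type m struct{ id int }

// next is deliberately a uintptr: the compiler emits no write
// barrier for it and the GC never scans it, so a missed reference
// fails deterministically instead of only when a barrier is skipped
// at exactly the wrong moment.
type mlink struct{ next uintptr }

func (l *mlink) set(mp *m) { l.next = uintptr(unsafe.Pointer(mp)) }
func (l *mlink) get() *m   { return (*m)(unsafe.Pointer(l.next)) }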
Change-Id: Iebd3962c727c0046495cc08914a8dc0808460e0e
Reviewed-on: https://go-review.googlesource.com/9019
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change-Id: Ie7f85873978adf3fd5c739176f501ca219592824
Reviewed-on: https://go-review.googlesource.com/9011
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
With the new buildmodes c-archive and c-shared, it is possible for a
cgo call to come in early in the lifecycle of a Go program. Calls
before the runtime has been initialized are caught by
_cgo_wait_runtime_init_done. However a call can come in after the
runtime has initialized, but before the program's package init
functions have finished running.
To avoid this, cgocallback checks m.ncgo to see if we are on a thread
already running Go. If not, we may be a foreign thread, so it blocks
until main_init is complete.
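The gate has roughly this shape (a sketch; the runtime's channel is
main_init_done):

package sketch

var mainInitDone = make(chan struct{}) // closed once main_init returns

// onCallback: a callback on a thread with no Go history waits for
// package initialization before running Go code.
func onCallback(knownGoThread bool, call func()) {
	if !knownGoThread {
		<-mainInitDone
	}
	call()
}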
Change-Id: I7a9f137fa2a40c322a0b93764261f9aa17fcf5b8
Reviewed-on: https://go-review.googlesource.com/8897
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
According to Go execution modes, a Go program compiled with
-buildmode=c-archive has a main function, but it is ignored when the
program runs. This change gives the runtime the information it needs
not to run main.
I have this working with pending linker changes on darwin/amd64.
Change-Id: I49bd7d65aa619ec847c464a872afa5deea7d4d30
Reviewed-on: https://go-review.googlesource.com/8701
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is Part 2 of the change; see Part 1 at https://go-review.googlesource.com/#/c/7692/
Suggested by iant@, we use the library initialization entry point to:
- create a new OS thread and run the "regular" runtime init stack on
that thread
- return immediately from the main (i.e., loader) thread
- at the first CGO invocation, we wait for the runtime initialization
to complete.
The above mechanism is implemented only on linux_amd64. The next step
is to support it on linux_arm. Other platforms don't yet support
shared-library compiling/linking, but we intend to use the same
strategy there as well.
Change-Id: Ib2c81b1b83bee837134084b75a3beecfb8de6bf4
Reviewed-on: https://go-review.googlesource.com/8094
Run-TryBot: Srdjan Petrovic <spetrovic@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
GODEBUG=gctrace=1 turns on a per-GC cycle trace line. The current line
is left over from the STW garbage collector and includes a lot of
information that is no longer meaningful for the concurrent GC and
doesn't include a lot of information that is important.
Replace this line with a new line designed for the new garbage
collector.
This new line is focused more on helping the user understand the
impact of the garbage collector on their program and less on telling
us, the runtime developers, everything that's happening inside
GC. It's designed to fit in 80 columns and intentionally omit some
potentially useful things that were in the old line. We might want a
"verbose" mode that adds information for us.
We'll be able to further simplify the line once we eliminate the STW
around enabling the write barrier. Then we'll have just one STW phase,
one concurrent phase, and one more STW phase, so we'll be able to
reduce the number of times from five to three.
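An illustrative line in the new format (field values invented; note
the five clock/cpu phases described above):

	gc #4 @1.452s 3%: 0.069+0.12+0.24+3.4+0.10 ms clock, 0.27+0.12+0+2.2/3.4/0+0.40 ms cpu, 4->4->2 MB, 5 MB goal, 4 P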
Change-Id: Icc30939fe4576fb4491b4eac811649395727aa2a
Reviewed-on: https://go-review.googlesource.com/8208
Reviewed-by: Russ Cox <rsc@golang.org>
Racy tests do not currently fail; they just call os.Exit(0). So if
you run go test without -v, you won't even notice.
This was probably introduced with testing.TestMain.
Racy programs do not have the right to finish successfully.
Change-Id: Id133d7424f03d90d438bc3478528683dd02b8846
Reviewed-on: https://go-review.googlesource.com/4371
Reviewed-by: Russ Cox <rsc@golang.org>
Strip uninteresting bottom and top frames from trace stacks.
This makes both binary and json trace files smaller,
and also makes stacks shorter and more readable in the viewer.
Change-Id: Ib9c80ccc280504f0e235f867f53f1d2652c41583
Reviewed-on: https://go-review.googlesource.com/5523
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Starting it lazily causes a memory allocation (for the goroutine) during GC.
First use of channels for runtime implementation.
Change-Id: I9cd24dcadbbf0ee5070ee6d0ed7ea415504f316c
Reviewed-on: https://go-review.googlesource.com/6960
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
I asked for this in CL 3742 and it was ignored.
Change-Id: I30ad05f87c7d9eccb11df7e19288e3ed2c7e2e3f
Reviewed-on: https://go-review.googlesource.com/6930
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The unbounded list-based sudog cache can grow infinitely.
This can happen if a goroutine is routinely blocked on one P
and then unblocked and scheduled on another P.
The scenario was reported on golang-nuts list.
We've been here several times. Any unbounded local cache
is bad and grows to infinite size. This change introduces a
central sudog cache; local caches become fixed-size, with the
only purpose of amortizing accesses to the central cache.
The change required moving the sudog cache from mcache to P,
because mcache is not scanned by the GC.
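A self-contained sketch of the two-level shape (sizes and types
illustrative, not the runtime's code):

package sketch

import "sync"

type sudog struct{ next *sudog }

// central is the single shared cache; per-P caches are fixed-size
// and exist only to amortize trips to it.
type central struct {
	mu   sync.Mutex
	free *sudog
}

type perP struct {
	buf [128]*sudog
	n   int
}

func (p *perP) get(c *central) *sudog {
	if p.n == 0 { // refill half the local cache from central
		c.mu.Lock()
		for p.n < len(p.buf)/2 && c.free != nil {
			s := c.free
			c.free, s.next = s.next, nil
			p.buf[p.n] = s
			p.n++
		}
		c.mu.Unlock()
		if p.n == 0 {
			return new(sudog)
		}
	}
	p.n--
	return p.buf[p.n]
}

func (p *perP) put(c *central, s *sudog) {
	if p.n == len(p.buf) { // spill half the local cache to central
		c.mu.Lock()
		for p.n > len(p.buf)/2 {
			p.n--
			s2 := p.buf[p.n]
			s2.next = c.free
			c.free = s2
		}
		c.mu.Unlock()
	}
	p.buf[p.n] = s
	p.n++
}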
Change-Id: I3bb7b14710354c026dcba28b3d3c8936a8db4e90
Reviewed-on: https://go-review.googlesource.com/3742
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Fixes #9791
g.issystem flag setup races with other code wherever we set it.
Even if we set it both in the parent goroutine and in the system
goroutine, it is still possible that some other goroutine crashes
before the flag is set. We could pass an issystem flag to newproc1,
but we start all goroutines with plain go statements nowadays.
Instead, look at g.startpc to distinguish system goroutines (similar
to topofstack).
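Schematically (the set of start PCs is illustrative):

package sketch

var bgsweepPC, forcegchelperPC, runfinqPC uintptr // resolved at init

func isSystemGoroutine(startpc uintptr) bool {
	return startpc == bgsweepPC ||
		startpc == forcegchelperPC ||
		startpc == runfinqPC
}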
Change-Id: Ia3467968dee27fa07d9fecedd4c2b00928f26645
Reviewed-on: https://go-review.googlesource.com/4113
Reviewed-by: Keith Randall <khr@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Replace with uses of //go:linkname in Go files and direct use of the
name in .s files.
The only one that really truly needs a jump is reflect.call; the jump is now
next to the runtime.reflectcall assembly implementations.
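The directive itself looks like this (real syntax; treat the
particular symbol pairing as illustrative):

package sketch

import _ "unsafe" // required for go:linkname

//go:linkname nanotime runtime.nanotime
func nanotime() int64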
Change-Id: Ie7ff3020a8f60a8e4c8645fe236e7883a3f23f46
Reviewed-on: https://go-review.googlesource.com/1962
Reviewed-by: Austin Clements <austin@google.com>