Writing to selectdone on the stack of another goroutine meant a
pretty subtle dance between the select code and the stack copying
code. Instead move the selectdone variable into the g struct.
Change-Id: Id246aaf18077c625adef7ca2d62794afef1bdd1b
Reviewed-on: https://go-review.googlesource.com/53390
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently selectgo is just a wrapper around selectgoImpl. This keeps
the hard-coded frame skip counts for tracing the same between the
channel implementation and the select implementation.
However, this is fragile and confusing, so pass a skip parameter to
send and recv, join selectgo and selectgoImpl into one function, and
decrease all of the skips in selectgo by one.
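For context, here is a hedged, self-contained illustration of what a
skip count means when recording a call site. It uses the public
runtime.Caller API rather than the runtime-internal tracing hooks this
CL changes, and record/helper are invented names for the example.
	package main
	import (
		"fmt"
		"runtime"
	)
	// record reports the call site `skip` frames above its caller.
	// skip=0 records the direct caller of record; each wrapper that
	// sits between the interesting call site and record adds one.
	func record(skip int) {
		_, file, line, _ := runtime.Caller(skip + 1)
		fmt.Printf("recorded call site: %s:%d\n", file, line)
	}
	// helper is a wrapper, so it asks record to skip its own frame.
	func helper() { record(1) }
	func main() {
		record(0) // records this line in main
		helper()  // also records this line in main, thanks to skip=1
	}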
Change-Id: I11b8cbb7d805b55f5dc6ab4875ac7dde79412ff2
Reviewed-on: https://go-review.googlesource.com/37860
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
This commit reworks multiway select statements to use normal control
flow primitives instead of the previous setjmp/longjmp-like behavior.
This simplifies liveness analysis and should prevent issues around
"returns twice" function calls within SSA passes.
test/live.go is updated because liveness analysis's CFG is more
representative of actual control flow. The case bodies are the only
real successors of the selectgo call, but previously the selectsend,
selectrecv, etc. calls were included in the successors list too.
Updates #19331.
Change-Id: I7f879b103a4b85e62fc36a270d812f54c0aa3e83
Reviewed-on: https://go-review.googlesource.com/37661
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Extend the period of fastrand from (1<<31)-1 to (1<<32)-1 by
choosing a different polynomial and reacting to the high bit before
the shift. The polynomial is taken from
https://users.ece.cmu.edu/~koopman/lfsr/index.html (32.dat.gz). It is
referred to as F7711115 because that list of polynomials is for an
LFSR that shifts to the right (fastrand shifts to the left). (The old
polynomial is listed in 31.dat.gz as 7BB88888.)
There were a couple of places that converted fastrand to int, which
leads to negative values on 32-bit platforms. They are fixed.
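Below is a hedged, self-contained sketch of the technique, not the
runtime's actual code: a 32-bit LFSR that shifts left and reacts to
the high bit of the old state before the shift. The feedback constant
0xA8888EEF is simply the bit-reversed form of F7711115 and is shown
for illustration; the exact constant chosen in the CL may differ. The
negative-value pitfall mentioned above comes from converting such a
uint32 to int on a platform where int is 32 bits wide, so any value
at or above 1<<31 becomes negative.
	package main
	import "fmt"
	// fastrandSketch is an illustrative left-shifting 32-bit LFSR,
	// not the runtime's fastrand: it reads the high bit of the old
	// state, shifts left, and xors in the feedback mask only when
	// that bit was set.
	func fastrandSketch(state uint32) uint32 {
		high := uint32(int32(state) >> 31) // all ones if the high bit was set
		return state<<1 ^ high&0xA8888EEF  // illustrative constant (bit-reversed F7711115)
	}
	func main() {
		s := uint32(1)
		for i := 0; i < 5; i++ {
			s = fastrandSketch(s)
			fmt.Printf("%#08x\n", s)
		}
	}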
Change-Id: Ibee518a3f9103e0aea220ada494b3aec77babb72
Reviewed-on: https://go-review.googlesource.com/36875
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
To implement the blocking of a select, a goroutine builds a list of
offers to communicate (pseudo-g's, aka sudog), one for each case,
queues them on the corresponding channels, and waits for another
goroutine to complete one of those cases and wake it up. Obviously it
is not OK for two other goroutines to complete multiple cases and both
wake the goroutine blocked in select. To make sure that only one
branch of the select is chosen, all the sudogs contain a pointer to a
shared (single) 'done uint32', which is atomically cas'ed by any
interested goroutines. The goroutine that wins the cas race gets to
wake up the select. A complication is that 'done uint32' is stored on
the stack of the goroutine running the select, and that stack can move
during the select due to stack growth or stack shrinking.
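Stripped of runtime details, the single-winner rule can be shown with
a short, self-contained sketch; the waiter type and tryWin function
below are invented stand-ins for sudog and its selectdone handling,
not the real declarations.
	package main
	import (
		"fmt"
		"sync"
		"sync/atomic"
	)
	// waiter stands in for a sudog: every case queued by one select
	// points at the same done word.
	type waiter struct {
		selectdone *uint32
	}
	// tryWin is what a goroutine completing a case must do before it
	// may wake the selecting goroutine: only the CAS winner proceeds.
	func tryWin(w *waiter) bool {
		return atomic.CompareAndSwapUint32(w.selectdone, 0, 1)
	}
	func main() {
		var done uint32
		cases := []*waiter{{&done}, {&done}, {&done}}
		var wg sync.WaitGroup
		for i, w := range cases {
			wg.Add(1)
			go func(i int, w *waiter) {
				defer wg.Done()
				if tryWin(w) {
					fmt.Println("case", i, "wins and wakes the selecting goroutine")
				}
			}(i, w)
		}
		wg.Wait()
	}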
The relevant ordering to block and unblock in select is:
1. Lock all channels.
2. Create list of sudogs and queue sudogs on all channels.
3. Switch to system stack, mark goroutine as asleep,
unlock all channels.
4. Sleep until woken.
5. Wake up on goroutine stack.
6. Lock all channels.
7. Dequeue sudogs from all channels.
8. Free list of sudogs.
9. Unlock all channels.
There are two kinds of stack moves: stack growth and stack shrinking.
Stack growth happens while the original goroutine is running.
Stack shrinking happens asynchronously, during garbage collection.
While a channel listing a sudog is locked by select in this process,
no other goroutine can attempt to complete communication on that
channel, because that other goroutine doesn't hold the lock and can't
find the sudog. If the stack moves while all the channel locks are
held or when the sudogs are not yet or no longer queued in the
channels, no problem, because no goroutine can get to the sudogs and
therefore to selectdone. We only need to worry about the stack (and
'done uint32') moving with the sudogs queued in unlocked channels.
Stack shrinking can happen any time the goroutine is stopped.
That code already acquires all the channel locks before doing the
stack move, so it avoids this problem.
Stack growth can happen essentially any time the original goroutine is
running on its own stack (not the system stack). In the first half of
the select, all the channels are locked before any sudogs are queued,
and the channels are not unlocked until the goroutine has stopped
executing on its own stack and is asleep, so that part is OK. In the
second half of the select, the goroutine wakes up on its own goroutine
stack and immediately locks all channels. But the actual call to lock
might grow the stack, before acquiring any locks. In that case, the
stack is moving with the sudogs queued in unlocked channels. Not good.
One goroutine has already won a cas on the old stack (that goroutine
woke up the selecting goroutine, moving it out of step 4), and the
fact that done = 1 now should prevent any other goroutines from
completing any other select cases. During the stack move, however,
sudog.selectdone is moved from pointing to the old done variable on
the old stack to a new memory location on the new stack. Another
goroutine might observe the moved pointer before the new memory
location has been initialized. If the new memory word happens to be
zero, that goroutine might win a cas on the new location, thinking it
can now complete the select (again). It will then complete a second
communication (reading from or writing to the goroutine stack
incorrectly) and then attempt to wake up the selecting goroutine,
which is already awake.
The scribbling over the goroutine stack unexpectedly is already bad,
but likely to go unnoticed, at least immediately. As for the second
wakeup, there are a variety of ways it might play out.
* The goroutine might not be asleep.
That will produce a runtime crash (throw) like in #17007:
runtime: gp: gp=0xc0422dcb60, goid=2299, gp->atomicstatus=8
runtime: g: g=0xa5cfe0, goid=0, g->atomicstatus=0
fatal error: bad g->status in ready
Here, atomicstatus=8 is copystack; the second, incorrect wakeup is
observing that the selecting goroutine is in state "Gcopystack"
instead of "Gwaiting".
* The goroutine might be sleeping in a send on a nil chan.
If it wakes up, it will crash with 'fatal error: unreachable'.
* The goroutine might be sleeping in a send on a non-nil chan.
If it wakes up, it will crash with 'fatal error: chansend:
spurious wakeup'.
* The goroutine might be sleeping in a receive on a nil chan.
If it wakes up, it will crash with 'fatal error: unreachable'.
* The goroutine might be sleeping in a receive on a non-nil chan.
If it wakes up, it will silently (incorrectly!) continue as if it
received a zero value from a closed channel, leaving a sudog queued on
the channel pointing at that zero value on the goroutine's stack; that
space will be reused as the goroutine executes, and when some other
goroutine finally completes the receive, it will do a stray write into
the goroutine's stack memory, which may cause problems. Then it will
attempt the real wakeup of the goroutine, leading recursively to any
of the cases in this list.
* The goroutine might have been running a select in a finalizer
(I hope not!) and might now be sleeping waiting for more things to
finalize. If it wakes up, as long as it goes back to sleep quickly
(before the real GC code tries to wake it), the spurious wakeup does
no harm (but the stack was still scribbled on).
* The goroutine might be sleeping in gcParkAssist.
If it wakes up, that will let the goroutine continue executing a bit
earlier than we would have liked. Eventually the GC will attempt the
real wakeup of the goroutine, leading recursively to any of the cases
in this list.
* The goroutine cannot be sleeping in bgsweep, because the background
sweepers never use select.
* The goroutine might be sleeping in netpollblock.
If it wakes up, it will crash with 'fatal error: netpollblock:
corrupted state'.
* The goroutine might be sleeping in main as another thread crashes.
If it wakes up, it will exit(0) instead of letting the other thread
crash with a non-zero exit status.
* The goroutine cannot be sleeping in forcegchelper,
because forcegchelper never uses select.
* The goroutine might be sleeping in an empty select - select {}.
If it wakes up, it will return to the next line in the program!
* The goroutine might be sleeping in a non-empty select (again).
In this case, it will wake up spuriously, with gp.param == nil (no
reason for wakeup), but that was fortuitously overloaded for handling
wakeup due to a closing channel and the way it is handled is to rerun
the select, which (accidentally) handles the spurious wakeup
correctly:
	if cas == nil {
		// This can happen if we were woken up by a close().
		// TODO: figure that out explicitly so we don't need this loop.
		goto loop
	}
Before looping, it will dequeue all the sudogs on all the channels
involved, so that no other goroutine will attempt to wake it.
Since the goroutine was blocked in select before, being blocked in
select again when the spurious wakeup arrives may be quite likely.
In this case, the spurious wakeup does no harm (but the stack was
still scribbled on).
* The goroutine might be sleeping in semacquire (mutex slow path).
If it wakes up, that is taken as a signal to try for the semaphore
again, not a signal that the semaphore is now held, but the next
iteration around the loop will queue the sudog a second time, causing
a cycle in the wakeup list for the given address. If that sudog is the
only one in the list, when it is eventually dequeued, it will
(due to the precise way the code is written) leave the sudog on the
queue inactive with the sudog broken. But the sudog will also be in
the free list, and that will eventually cause confusion.
* The goroutine might be sleeping in notifyListWait, for sync.Cond.
If it wakes up, (*Cond).Wait returns. The docs say "Unlike in other
systems, Wait cannot return unless awoken by Broadcast or Signal,"
so the spurious wakeup is incorrect behavior, but most callers do not
depend on that fact. Eventually the condition will happen, attempting
the real wakeup of the goroutine and leading recursively to any of the
cases in this list.
* The goroutine might be sleeping in timeSleep aka time.Sleep.
If it wakes up, it will continue running, leaving a timer ticking.
When that time bomb goes off, it will try to ready the goroutine
again, leading to any one of the cases in this list.
* The goroutine cannot be sleeping in timerproc,
because timerproc never uses select.
* The goroutine might be sleeping in ReadTrace.
If it wakes up, it will print 'runtime: spurious wakeup of trace
reader' and return nil. All future calls to ReadTrace will print
'runtime: ReadTrace called from multiple goroutines simultaneously'.
Eventually, when trace data is available, a true wakeup will be
attempted, leading to any one of the cases in this list.
None of these fatal errors appear in any of the trybot or dashboard
logs. The 'bad g->status in ready' that happens if the goroutine is
running (the most likely scenario anyway) has happened once on the
dashboard and eight times in trybot logs. Of the eight, five were
atomicstatus=8 during net/http tests, so almost certainly this bug.
The other three were atomicstatus=2, all near code in select,
but in a draft CL by Dmitry that was rewriting select and may or may
not have had its own bugs.
This bug has existed since Go 1.4. Until then the select code was
implemented in C, 'done uint32' was a C stack variable 'uint32 done',
and C stacks never moved. I believe it has become more common recently
because of Brad's work to run more and more tests in net/http in
parallel, which lengthens race windows.
The fix is to run step 6 on the system stack,
avoiding any possibility of stack growth.
Fixes #17007 and possibly other mysterious failures.
Change-Id: I9d6575a51ac96ae9d67ec24da670426a4a45a317
Reviewed-on: https://go-review.googlesource.com/34835
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The hybrid barrier requires distinguishing typed and untyped memory
even when zeroing because the *current* contents of the memory matters
even when overwriting.
This commit introduces runtime.typedmemclr and runtime.memclrHasPointers
as a typed memory clearing functions parallel to runtime.typedmemmove.
Currently these simply call memclr, but with the hybrid barrier we'll
need to shade any pointers we're overwriting. These will provide us
with the necessary hooks to do so.
Updates #17503.
Change-Id: I74478619f8907825898092aaa204d6e4690f27e6
Reviewed-on: https://go-review.googlesource.com/31366
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
mallocgc can calculate noscan itself. The only remaining
flag argument is needzero, so we just make that a boolean arg.
Fixes #15379
Change-Id: I839a70790b2a0c9dbcee2600052bfbd6c8148e20
Reviewed-on: https://go-review.googlesource.com/22290
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
gopark calls the unlock function after setting the G to _Gwaiting.
This means it's generally unsafe to access the G's stack from the
unlock function because the G may start running on another P. Once we
start shrinking stacks concurrently, a stack shrink could also move
the stack the moment after it enters _Gwaiting and before the unlock
function is called.
Document this restriction and fix the two places where we currently
violate it.
This is unlikely to be a problem in practice for these two places
right now, but they're already skating on thin ice. For example, the
following sequence could in principle cause corruption, deadlock, or a
panic in the select code:
On M1/P1:
1. G1 selects on channels A and B.
2. selectgoImpl calls gopark.
3. gopark puts G1 in _Gwaiting.
4. gopark calls selparkcommit.
5. selparkcommit releases the lock on channel A.
On M2/P2:
6. G2 sends to channel A.
7. The send puts G1 in _Grunnable and puts it on P2's run queue.
8. The scheduler runs, selects G1, puts it in _Grunning, and resumes G1.
9. On G1, the sellock immediately following the gopark gets called.
10. sellock grows and moves the stack.
On M1/P1:
11. selparkcommit continues to scan the lock order for the next
channel to unlock, but it's now reading from a freed (and possibly
reused) stack.
This shouldn't happen in practice because step 10 isn't the first call
to sellock, so the stack should already be big enough. However, once
we start shrinking stacks concurrently, this reasoning won't work any
more.
For #12967.
Change-Id: I3660c5be37e5be9f87433cb8141bdfdf37fadc4c
Reviewed-on: https://go-review.googlesource.com/20038
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the g.waiting list created by a select is in poll order.
However, nothing depends on this, and we're going to need access to
the channel lock order in other places shortly, so modify select to
put the waiting list in channel lock order.
For #12967.
Change-Id: If0d38816216ecbb37a36624d9b25dd96e0a775ec
Reviewed-on: https://go-review.googlesource.com/20037
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently the select lock order is a []*hchan. We're going to need to
refer to things other than the channel itself in lock order shortly,
so switch this to a []uint16 of indexes into the select cases. This
parallels the existing representation for the poll order.
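As a hedged, self-contained illustration of the shape of that data
(the lockOrder function and its plain []chan int input are invented
for the example; the real code orders scase entries by their hchan
pointers), keeping the order as indexes lets it refer back to any
per-case data, not just the channel itself:
	package main
	import (
		"fmt"
		"reflect"
		"sort"
	)
	// lockOrder returns case indexes sorted by channel address,
	// analogous to the []uint16 lock order described above.
	func lockOrder(chans []chan int) []uint16 {
		order := make([]uint16, len(chans))
		for i := range order {
			order[i] = uint16(i)
		}
		sort.Slice(order, func(i, j int) bool {
			return reflect.ValueOf(chans[order[i]]).Pointer() <
				reflect.ValueOf(chans[order[j]]).Pointer()
		})
		return order
	}
	func main() {
		cs := []chan int{make(chan int), make(chan int), make(chan int)}
		fmt.Println(lockOrder(cs)) // e.g. [2 0 1], depending on allocation
	}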
Change-Id: I89262223fe20b4ddf5321592655ba9eac489cda1
Reviewed-on: https://go-review.googlesource.com/20036
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Given a G, there's currently no way to find the channel it's blocking
on. We'll need this information to fix a (probably theoretical) bug in
select and to implement concurrent stack shrinking, so record the
channel in the sudog.
For #12967.
Change-Id: If8fb63a140f1d07175818824d08c0ebeec2bdf66
Reviewed-on: https://go-review.googlesource.com/20035
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In particular, document that *sel is on the stack no matter what.
Change-Id: I1c264215e026c27721b13eedae73ec845066cdec
Reviewed-on: https://go-review.googlesource.com/20032
Reviewed-by: Rick Hudson <rlh@golang.org>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedence.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The channel code must not allow stack splits between when it assigns a
potential stack pointer to sudog.elem (or sudog.selectdone) and when
it makes the sudog visible to copystack by putting it on the g.waiting
list. We do get this right everywhere, but add a comment about this
subtlety for future eyes.
Change-Id: I941da150437167acff37b0e56983c793f40fcf79
Reviewed-on: https://go-review.googlesource.com/19632
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Also eliminates per-maptype hiter and hmap types, since they're not
really needed anyway. Update packages reflect and runtime
accordingly.
Reduces golang.org/x/tools/cmd/godoc's text segment by ~170kB:
text      data    bss     dec       hex     filename
13085702  140640  151520  13377862  cc2146  godoc.before
12915382  140640  151520  13207542  c987f6  godoc.after
Updates #6853.
Change-Id: I948b2bc1f22d477c1756204996b4e3e1fb568d81
Reviewed-on: https://go-review.googlesource.com/16610
Reviewed-by: Keith Randall <khr@golang.org>
runtime/internal/sys will hold system-, architecture- and config-
specific constants.
Updates #11647
Change-Id: I6db29c312556087a42e8d2bdd9af40d157c56b54
Reviewed-on: https://go-review.googlesource.com/16817
Reviewed-by: Russ Cox <rsc@golang.org>
This change is the same as CL #9345 which was reverted,
except for a small bug fix.
The only change is to the body of sendDirect and its callsite.
Also added a test.
The problem was during a channel send operation. The target
of the send was a sleeping goroutine waiting to receive. We
basically do:
1) Read the destination pointer out of the sudog structure
2) Copy the value we're sending to that destination pointer
Unfortunately, the previous change had a goroutine suspend
point between 1 & 2 (the call to sendDirect). At that point
the destination goroutine's stack could be copied (shrunk).
The pointer we read in step 1 is no longer valid for step 2.
Fixed by not allowing any suspension points between 1 & 2.
I suspect the old code worked correctly basically by accident.
Fixes #13169
The original 9345:
This change removes the retry mechanism we use for buffered channels.
Instead, any sender waking up a receiver or vice versa completes the
full protocol with its counterpart. This means the counterpart does
not need to relock the channel when it wakes up. (Currently
buffered channels need to relock on wakeup.)
For sends on a channel with waiting receivers, this change replaces
two copies (sender->queue, queue->receiver) with one (sender->receiver).
For receives on channels with a waiting sender, two copies are still required.
This change unifies to a large degree the algorithm for buffered
and unbuffered channels, simplifying the overall implementation.
Fixes #11506
Change-Id: I57dfa3fc219cffa4d48301ee15fe5479299efa09
Reviewed-on: https://go-review.googlesource.com/16740
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Revert for now until #13169 is understood.
This reverts commit 8e496f1d69.
Change-Id: Ib3eb2588824ef47a2b6eb9e377a24e5c817fcc81
Reviewed-on: https://go-review.googlesource.com/16716
Reviewed-by: Keith Randall <khr@golang.org>
This change removes the retry mechanism we use for buffered channels.
Instead, any sender waking up a receiver or vice versa completes the
full protocol with its counterpart. This means the counterpart does
not need to relock the channel when it wakes up. (Currently
buffered channels need to relock on wakeup.)
For sends on a channel with waiting receivers, this change replaces
two copies (sender->queue, queue->receiver) with one (sender->receiver).
For receives on channels with a waiting sender, two copies are still required.
This change unifies to a large degree the algorithm for buffered
and unbuffered channels, simplifying the overall implementation.
Fixes #11506
benchmark                      old ns/op  new ns/op  delta
BenchmarkChanProdCons10        125        110        -12.00%
BenchmarkChanProdCons0         303        284        -6.27%
BenchmarkChanProdCons100       75.5       71.3       -5.56%
BenchmarkChanContended         6452       6125       -5.07%
BenchmarkChanNonblocking       11.5       11.0       -4.35%
BenchmarkChanCreation          149        143        -4.03%
BenchmarkChanSem               63.6       61.6       -3.14%
BenchmarkChanUncontended       6390       6212       -2.79%
BenchmarkChanSync              282        276        -2.13%
BenchmarkChanProdConsWork10    516        506        -1.94%
BenchmarkChanProdConsWork0     696        685        -1.58%
BenchmarkChanProdConsWork100   470        469        -0.21%
BenchmarkChanPopular           660427     660012     -0.06%
Change-Id: I164113a56432fbc7cace0786e49c5a6e6a708ea4
Reviewed-on: https://go-review.googlesource.com/9345
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Add explicit memory sanitizer instrumentation to the runtime and syscall
packages. The compiler does not instrument the runtime package. It
does instrument the syscall package, but we need to add a couple of
cases that it can't see.
Change-Id: I2d66073f713fe67e33a6720460d2bb8f72f31394
Reviewed-on: https://go-review.googlesource.com/16164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Missed select case when adding the barrier last time.
All the more reason to refactor this code in Go 1.6.
Fixes #11643.
Change-Id: Ib0d19d6e0939296c0a3e06dda5e9b76f813bbc7e
Reviewed-on: https://go-review.googlesource.com/12086
Reviewed-by: Austin Clements <austin@google.com>
This is a follow-up to CL 9269, as suggested
by dvyukov.
There is probably even more that can be done
to speed up this shuffle. It will matter more
once CL 7570 (fine-grained locking in select)
is in and can be revisited then, with benchmarks.
Change-Id: Ic13a27d11cedd1e1f007951214b3bb56b1644f02
Reviewed-on: https://go-review.googlesource.com/9393
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
This is the optimization made to math/rand in CL 21030043.
Change-Id: I231b24fa77cac1fe74ba887db76313b5efaab3e8
Reviewed-on: https://go-review.googlesource.com/9269
Reviewed-by: Minux Ma <minux@golang.org>
By removing type slice, renaming type sliceStruct to type slice and
whacking until it compiles.
Has a pleasing net reduction of conversions.
Fixes #10188
Change-Id: I77202b8df637185b632fd7875a1fdd8d52c7a83c
Reviewed-on: https://go-review.googlesource.com/8770
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Channels and sync.Mutexes allow another goroutine to acquire a resource
ahead of an unblocked goroutine. This is good for performance, but
leads to futile wakeups (the unblocked goroutine needs to block again).
Futile wakeups caused user confusion during the very first evaluation
of tracing functionality on a real server (a goroutine appeared to
acquire a mutex in a loop, while there is no loop in the user code).
This change detects futile wakeups on channels and emits a special event
to denote the fact. Later, the parser finds entire wakeup sequences
(unblock->start->block) and removes them.
sync.Mutex will be supported in a separate change.
Change-Id: Iaaaee9d5c0921afc62b449a97447445030ac19d3
Reviewed-on: https://go-review.googlesource.com/7380
Reviewed-by: Keith Randall <khr@golang.org>
Everything has moved to Go, but comments still refer to .c/.h files.
Fix all of those up, at least for these three directories.
Fixes #10138
Change-Id: Ie5efe89b247841e0b3f82aac5256b2c606ef67dc
Reviewed-on: https://go-review.googlesource.com/7431
Reviewed-by: Russ Cox <rsc@golang.org>
Strip uninteresting bottom and top frames from trace stacks.
This makes both binary and json trace files smaller,
and also makes stacks shorter and more readable in the viewer.
Change-Id: Ib9c80ccc280504f0e235f867f53f1d2652c41583
Reviewed-on: https://go-review.googlesource.com/5523
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Move type definitions from chan1.go to chan.go and select.go.
Remove underscores from names.
Make c.buf unsafe.Pointer instead of *uint8.
Change-Id: I75cf8385bdb9f79eb5a7f7ad319495abbacbe942
Reviewed-on: https://go-review.googlesource.com/4900
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iea83d693480c2f3008b4e80d55821acff65970a6
Reviewed-on: https://go-review.googlesource.com/2277
Reviewed-by: Keith Randall <khr@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Replace with uses of //go:linkname in Go files, direct use of name in .s files.
The only one that really truly needs a jump is reflect.call; the jump is now
next to the runtime.reflectcall assembly implementations.
Change-Id: Ie7ff3020a8f60a8e4c8645fe236e7883a3f23f46
Reviewed-on: https://go-review.googlesource.com/1962
Reviewed-by: Austin Clements <austin@google.com>
Avoids a potential O(n^2) performance problem when dequeueing
from very popular channels.
benchmark             old ns/op  new ns/op  delta
BenchmarkChanPopular  2563782    627201     -75.54%
Change-Id: I231aaeafea0ecd93d27b268a0b2128530df3ddd6
Reviewed-on: https://go-review.googlesource.com/1200
Reviewed-by: Russ Cox <rsc@golang.org>
The SudoG used to sit on the stack, so it was cheap to allocate
and didn't need to be cleaned up when finished.
For the conversion to Go, we had to move sudog off the stack
for a few reasons, so we added a cache of recently used sudogs
to keep allocation cheap. But we didn't add any of the necessary
cleanup before adding a SudoG to the new cache, and so the cached
SudoGs had stale pointers inside them that have caused all sorts
of awful, hard to debug problems.
CL 155760043 made sure SudoG.elem is cleaned up.
CL 150520043 made sure SudoG.selectdone is cleaned up.
This CL makes sure SudoG.next, SudoG.prev, and SudoG.waitlink
are cleaned up. I should have done this when I did the other two
fields; instead I wasted a week tracking down a leak they caused.
A dangling SudoG.waitlink can point into a sudogcache list that
has been "forgotten" in order to let the GC collect it, but that
dangling .waitlink keeps the list from being collected.
And then the list holding the SudoG with the dangling waitlink
can find itself in the same situation, and so on. We end up
with lists of lists of unusable SudoGs that are still linked into
the object graph and never collected (given the right mix of
non-trivial selects and non-channel synchronization).
More details in golang.org/issue/9110.
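A hedged, generic sketch of the cleanup rule follows (node, cache, and
release are invented stand-ins, not the runtime's SudoG or
sudogcache): every pointer field is nilled before a node goes back
into a reuse cache, so the cache cannot keep otherwise-dead structures
reachable.
	package main
	import "fmt"
	// node stands in for a SudoG with the three link fields this CL
	// cleans up, plus the previously cleaned elem field.
	type node struct {
		next, prev, waitlink *node
		elem                 interface{}
	}
	var cache []*node
	// release clears every pointer field before caching the node, so
	// nothing the node used to link to stays reachable via the cache.
	func release(n *node) {
		n.next = nil
		n.prev = nil
		n.waitlink = nil
		n.elem = nil
		cache = append(cache, n)
	}
	func main() {
		a, b := &node{}, &node{}
		a.waitlink = b
		release(a)
		fmt.Println(cache[0].waitlink == nil) // true: b is no longer pinned
	}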
Fixes #9110.
LGTM=r
R=r
CC=dvyukov, golang-codereviews, iant, khr
https://golang.org/cl/177870043
The garbage collector is now written in Go.
There is plenty to clean up (just like on dev.cc).
all.bash passes on darwin/amd64, darwin/386, linux/amd64, linux/386.
TBR=rlh
R=austin, rlh, bradfitz
CC=golang-codereviews
https://golang.org/cl/173250043
The conversion was done with an automated tool and then
modified only as necessary to make it compile and run.
[This CL is part of the removal of C code from package runtime.
See golang.org/s/dev.cc for an overview.]
LGTM=r
R=r, daniel.morsing
CC=austin, dvyukov, golang-codereviews, iant, khr
https://golang.org/cl/172260043
	select {
	case <-c:
	case <-c:
	}
In this case, c.recvq lists two SudoGs which have the same G.
So we can't use the G as the key to dequeue the correct SudoG,
as that key is ambiguous. Dequeueing the wrong SudoG ends up
freeing a SudoG that is still in c.recvq.
The fix is to use the actual SudoG pointer as the key.
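A hedged, self-contained sketch of dequeueing by the waiter node
itself rather than by its goroutine (sudogStub and waitqStub are
simplified stand-ins for the runtime's sudog and waitq, not the real
types):
	package main
	import "fmt"
	type sudogStub struct {
		id         int // stands in for the owning G plus per-case data
		prev, next *sudogStub
	}
	type waitqStub struct {
		first, last *sudogStub
	}
	func (q *waitqStub) enqueue(sg *sudogStub) {
		sg.prev = q.last
		if q.last != nil {
			q.last.next = sg
		} else {
			q.first = sg
		}
		q.last = sg
	}
	// dequeue removes exactly the node passed in, so two waiters from
	// the same goroutine on the same channel can never be confused.
	func (q *waitqStub) dequeue(sg *sudogStub) {
		if sg.prev != nil {
			sg.prev.next = sg.next
		} else {
			q.first = sg.next
		}
		if sg.next != nil {
			sg.next.prev = sg.prev
		} else {
			q.last = sg.prev
		}
		sg.prev, sg.next = nil, nil
	}
	func main() {
		var q waitqStub
		a, b := &sudogStub{id: 1}, &sudogStub{id: 1} // same "G"
		q.enqueue(a)
		q.enqueue(b)
		q.dequeue(b) // removes b specifically, leaving a queued
		fmt.Println(q.first == a && q.last == a)
	}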
LGTM=dvyukov
R=rsc, bradfitz, dvyukov, khr
CC=austin, golang-codereviews
https://golang.org/cl/159040043
Removes another dangling pointer that might
cause a memory leak in 1.4 or crash the GC in 1.5.
LGTM=rlh
R=golang-codereviews
CC=golang-codereviews, iant, khr, r, rlh
https://golang.org/cl/150520043
This change was necessary on the dev.garbage branch
to keep the garbage collector from seeing pointers into
invalid heap areas.
On this default (Go 1.4) branch, the change removes
some possibility for memory leaks.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant, r, rlh
https://golang.org/cl/155760043
In old conservative Go, this could cause memory leaks.
A new pickier collector might reasonably crash when it saw one of these.
LGTM=rlh
R=rlh
CC=golang-codereviews
https://golang.org/cl/147480043
I assumed they were the same when I wrote
cgocallback.go earlier today. Merge them
to eliminate confusion.
I can't tell what gomallocgc did before with
a nil type but without FlagNoScan.
I created a call like that in cgocallback.go
this morning, translating from a C file.
It was supposed to do what the C version did,
namely treat the block conservatively.
Now it will.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/141810043