The issue was discovered during testing of a change to the runtime.
Even if it is unlikely to happen, the comment can save an hour
for the next person who hits it.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/116790043
The test prints an extra \n when /dev/null is not present.
R=golang-codereviews, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/54890043
What was happening is as follows:
Each writer goroutine always triggers GC during its scheduling quantum.
After GC, goroutines are shuffled so that the timer goroutine is always second in the queue.
This repeats indefinitely, causing timer goroutine starvation.
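A minimal reproducer sketch of the pattern (the goroutine count and allocation size are illustrative, not from the CL):

package main

import (
    "fmt"
    "time"
)

func main() {
    // Writer goroutines that allocate heavily enough trigger a GC
    // during every scheduling quantum.
    for i := 0; i < 4; i++ {
        go func() {
            for {
                _ = make([]byte, 1<<20)
            }
        }()
    }
    start := time.Now()
    time.Sleep(time.Millisecond) // serviced by the runtime timer goroutine
    // Before the fix, the post-GC reshuffling could starve the timer
    // goroutine, making this sleep take far longer than requested.
    fmt.Println("slept for", time.Since(start))
}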
Fixes #7126.
R=golang-codereviews, shanemhansen, khr
CC=golang-codereviews
https://golang.org/cl/53080043
We see timeouts in these tests on some platforms,
but not on others. The hypothesis is that
the problematic platforms are slow uniprocessors.
Stack traces do not suggest that the process
is completely hung, and it is able to schedule
the alarm goroutine. And if it actually hangs,
we will still be able to detect that.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12253043
If the stack frame size is larger than the known-unmapped region at the
bottom of the address space, then the stack split prologue cannot use the usual
condition:
SP - size >= stackguard
because SP - size may wrap around to a very large number.
Instead, if the stack frame is large, the prologue tests:
SP - stackguard >= size
(This ends up being a few instructions more expensive, so we don't do it always.)
Preemption requests are registered by setting stackguard to a very large value, so
that the first test (SP - size >= stackguard) cannot possibly succeed.
Unfortunately, that same very large value causes a wraparound in the
second test (SP - stackguard >= size), making it succeed incorrectly.
To avoid *that* wraparound, we have to amend the test:
stackguard != StackPreempt && SP - stackguard >= size
This test is only used for functions with large frames, which essentially
always split the stack, so the cost of the few instructions is noise.
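A sketch of both tests as ordinary Go; the compiler emits the equivalent comparisons in the function prologue, and the small-frame threshold here is illustrative:

package stackcheck

// stackPreempt is the "very large" guard value; ^uintptr(1313) is the
// unsigned form of -1314, mirroring the runtime's sentinel.
const stackPreempt = ^uintptr(1313)

// hasRoom reports whether a frame of the given size fits above the
// stack guard without a split.
func hasRoom(sp, stackguard, size uintptr) bool {
    if size < 4096 {
        // The usual cheap test; sp-size could wrap around for a frame
        // larger than the unmapped region at the bottom of the
        // address space.
        return sp-size >= stackguard
    }
    // Large frame: comparing the other way avoids wrapping sp-size,
    // but sp-stackguard wraps when stackguard is the preemption
    // sentinel, so that case must be excluded explicitly (this CL).
    return stackguard != stackPreempt && sp-stackguard >= size
}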
This CL and CL 11085043 together fix the known issues with preemption
at the beginning of a function, so we will be able to try turning it on again.
R=ken2
CC=golang-dev
https://golang.org/cl/11205043
runtime.newproc/ready are deliberately sloppy about waking new M's:
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work while there are no spinning M's, and if so wakes up another M.
This does not work if goroutines do not call schedule.
With this change, a spinning M wakes up another M when it finds work to do.
This is still not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.
This is a reincarnation of cl/9776044 with the bug fixed.
The bug was due to code added after cl/9776044 was created:
// Every 61st scheduler tick, poll the global runqueue for fairness.
if(tick - (((uint64)tick*0x4325c53fu)>>36)*61 == 0 && runtime·sched.runqsize > 0) {
    runtime·lock(&runtime·sched);
    gp = globrunqget(m->p, 1);
    runtime·unlock(&runtime·sched);
}
If an M gets gp from the global runqueue here, it does not reset m->spinning.
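A self-contained sketch of the fixed behavior; the variables model the scheduler state and are not the runtime's actual code:

package sched

import "sync/atomic"

var (
    nmspinning int32 // number of M's currently spinning in search of work
    runqsize   int32 // length of the global runqueue
)

func wakep() { /* start another M in the spinning state */ }

// resetSpinning is what a spinning M must do once it finds a runnable G.
// Skipping the decrement on the globrunqget path above was the bug: the
// M still counted as a spinner, so no replacement M was ever woken.
func resetSpinning(spinning *bool) {
    if *spinning {
        *spinning = false
        atomic.AddInt32(&nmspinning, -1)
    }
    // Having taken work for itself, the M wakes a replacement spinner.
    if atomic.LoadInt32(&nmspinning) == 0 && atomic.LoadInt32(&runqsize) > 0 {
        wakep()
    }
}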
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10743044
The last patch for the preemptive scheduler:
with this change stoptheworld issues preemption
requests every 100us.
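Roughly, the wait loop behaves like this sketch (the function names are stand-ins, not the runtime's):

package stw

import "time"

func preemptall() { /* request preemption of every running G */ }

func allstopped() bool { return true /* stub: report whether every P has parked */ }

// waitForStop models stoptheworld's retry: rather than requesting
// preemption once and waiting indefinitely, it re-issues the requests
// every 100us until all P's have actually stopped.
func waitForStop() {
    for !allstopped() {
        preemptall()
        time.Sleep(100 * time.Microsecond)
    }
}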
Update #543.
R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/10264044
Failure on bot:
http://build.golang.org/log/f4c648906e1289ec2237c1d0880fb1a8b1852a08
««« original CL description
runtime: fix CPU underutilization
runtime.newproc/ready are deliberately sloppy about waking new M's:
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work while there are no spinning M's, and if so wakes up another M.
This does not work if goroutines do not call schedule.
With this change, a spinning M wakes up another M when it finds work to do.
This is still not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.
R=rsc
TBR=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/10692043
runtime.newproc/ready are deliberately sloppy about waking new M's:
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work while there are no spinning M's, and if so wakes up another M.
This does not work if goroutines do not call schedule.
With this change, a spinning M wakes up another M when it finds work to do.
This is still not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.
R=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
Currently the global runqueue is starved if a group of goroutines
constantly respawn each other (the local runqueue never becomes empty).
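A minimal reproducer sketch of the pattern (the sleep duration is illustrative):

package main

import (
    "fmt"
    "time"
)

// respawn keeps the local runqueue non-empty forever: each goroutine
// spawns a successor before returning.
func respawn() {
    go respawn()
}

func main() {
    go respawn()
    start := time.Now()
    time.Sleep(10 * time.Millisecond)
    // Before the fix, a goroutine readied onto the global runqueue
    // could wait unboundedly, because the scheduler drew only from
    // the never-empty local runqueue.
    fmt.Println("woke after", time.Since(start))
}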
Fixes #5639.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10042044
The removed code leads to a situation where the M executes the same locked G again
and again.
This is https://golang.org/cl/7310096, but with return instead of break
in the nested switch.
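The affected pattern, sketched (think GOMAXPROCS=1): a goroutine locked to its OS thread calls Gosched in a loop, which must still let other goroutines run:

package main

import "runtime"

func main() {
    done := make(chan struct{})
    go func() { close(done) }()
    runtime.LockOSThread()
    for {
        select {
        case <-done:
            return
        default:
            // Before the fix, the M could pick this same locked G
            // right back up instead of handing off its P, so the
            // other goroutine never ran and this loop spun forever.
            runtime.Gosched()
        }
    }
}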
Fixes #4820.
R=golang-dev, alex.brainman, rsc
CC=golang-dev
https://golang.org/cl/7304102
Broke the Windows build.
««« original CL description
runtime: ensure forward progress of runtime.Gosched() for locked goroutines
The removed code leads to a situation where the M executes the same locked G again and again.
Fixes #4820.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7310096
»»»
TBR=dvyukov
CC=golang-dev
https://golang.org/cl/7343050
The removed code leads to a situation where the M executes the same locked G again and again.
Fixes #4820.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7310096