mirror of https://github.com/golang/go synced 2024-10-04 06:21:23 -06:00
Commit Graph

225 Commits

Author SHA1 Message Date
Dmitriy Vyukov
326ae8d14e runtime: fix traceback in cgo programs
Fixes #6061.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12609043
2013-08-08 00:31:52 +04:00
Dmitriy Vyukov
9c0500b466 runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

This is the same as reverted change 12250043
with save marked as textflag 7.
The problem was that if save calls morestack,
then the subsequent lessstack spoils g->sched.pc/sp,
and those bad values get remembered in g->syscallpc/sp.
Entersyscallblock had the same problem,
but it has never been triggered to date.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12478043
2013-08-06 13:38:44 +04:00
Russ Cox
d3066e47b1 runtime/pprof: test multithreaded profile, remove OS X workarounds
This means that pprof will no longer report profiles on OS X.
That's unfortunate, but the profiles were often wrong and, worse,
it was difficult to tell whether the profile was wrong or not.

The workarounds were making the scheduler more complex,
possibly caused a deadlock (see issue 5519), and did not actually
deliver reliable results.

It may be possible for adventurous users to apply a patch to
their kernels to get working results, or perhaps having no results
will encourage someone to do the work of creating a profiling
thread like on Windows. Issue 6047 has details.

Fixes #5519.
Fixes #6047.

R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/12429045
2013-08-05 19:49:02 -04:00
Russ Cox
10ebb84d48 runtime: remove debugging knob to turn off preemption
It's still easy to turn off, but the builders are happy.
Also document.

R=golang-dev, iant, dvyukov
CC=golang-dev
https://golang.org/cl/12371043
2013-08-05 16:06:24 -04:00
Dmitriy Vyukov
f38ff9e5ea undo CL 12250043 / e911f94c4902
Breaks all 386 builders.

««« original CL description
runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
»»»

R=rsc
CC=golang-dev
https://golang.org/cl/12424045
2013-08-05 23:33:50 +04:00
Dmitriy Vyukov
d5ab784611 runtime: remove singleproc var
It was needed for the old scheduler,
because there could temporarily be more threads than gomaxprocs.
In the new scheduler, gomaxprocs is always respected.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12438043
2013-08-05 22:58:02 +04:00
Dmitriy Vyukov
f73972fa33 runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
2013-08-05 22:55:54 +04:00
Keith Randall
9cd570680b runtime: reimplement reflect.call to not use stack splitting.
R=golang-dev, r, khr, rsc
CC=golang-dev
https://golang.org/cl/12053043
2013-08-02 13:03:14 -07:00
Dmitriy Vyukov
c33d490020 runtime: print "created by" for running goroutines in traceback
This allows one to at least determine the goroutine's "identity".
Now it looks like:
goroutine 12 [running]:
        goroutine running on other thread; stack unavailable
created by testing.RunTests
        src/pkg/testing/testing.go:440 +0x88e

R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/12248043
2013-08-01 19:28:38 +04:00
Dmitriy Vyukov
658d19a53f runtime: do not park sysmon thread if any goroutines are running
The sysmon thread parks if no goroutines are running
(runtime.sched.npidle == runtime.gomaxprocs).
Currently it is unparked when a goroutine enters a syscall, which was enough
to retake P's from blocking syscalls.
But that is not enough for reliable goroutine preemption: we need to ensure that
sysmon runs whenever any goroutines are running.

R=rsc
CC=golang-dev
https://golang.org/cl/12176043
2013-07-31 20:09:03 +04:00
Dmitriy Vyukov
6ee69a9726 undo CL 12167043 / 475e11851fc1
Submitted with some unrelated changes that were not intended to go in.

««« original CL description
runtime: do not park sysmon thread if any goroutines are running
The sysmon thread parks if no goroutines are running (runtime.sched.npidle == runtime.gomaxprocs).
Currently it is unparked when a goroutine enters a syscall, which was enough
to retake P's from blocking syscalls.
But that is not enough for reliable goroutine preemption: we need to ensure that
sysmon runs whenever any goroutines are running.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12167043
»»»

R=rsc
CC=golang-dev
https://golang.org/cl/12171044
2013-07-31 20:03:05 +04:00
Dmitriy Vyukov
156e8b306d runtime: do not park sysmon thread if any goroutines are running
The sysmon thread parks if no goroutines are running (runtime.sched.npidle == runtime.gomaxprocs).
Currently it is unparked when a goroutine enters a syscall, which was enough
to retake P's from blocking syscalls.
But that is not enough for reliable goroutine preemption: we need to ensure that
sysmon runs whenever any goroutines are running.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12167043
2013-07-31 19:59:27 +04:00
Dmitriy Vyukov
a20784bdaf runtime: enable goroutine preemption
All known issues with preemption have been fixed.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12008044
2013-07-30 22:17:38 +04:00
Dmitriy Vyukov
e84d9e1fb3 runtime: do not split stacks in syscall status
Split stack checks (morestack) corrupt g->sched,
but g->sched must stay consistent for GC/traceback.
The change implements the runtime.notetsleepg function,
which does entersyscall/exitsyscall and is carefully arranged
not to call any split functions in between.

R=rsc
CC=golang-dev
https://golang.org/cl/11575044
2013-07-29 22:22:34 +04:00
Russ Cox
14062efb16 runtime: handle runtime.Goexit during init
Fixes #5963.

R=golang-dev, dsymonds, dvyukov
CC=golang-dev
https://golang.org/cl/11879045
2013-07-26 13:54:44 -04:00
Dmitriy Vyukov
f8a850b250 runtime: refactor mallocgc
Make it accept the type and combine the flags.
Several reasons for the change:
1. mallocgc and settype must be atomic wrt GC
2. settype is called from only one place now
3. it will help performance (eventually settype
functionality must be combined with markallocated)
4. flags are easier to read now (no mallocgc(sz, 0, 1, 0) anymore)

R=golang-dev, iant, nightlyone, rsc, dave, khr, bradfitz, r
CC=golang-dev
https://golang.org/cl/10136043
2013-07-26 21:17:24 +04:00
Keith Randall
3453a2204b runtime: only define SEH when we need it.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/11769043
2013-07-24 09:59:47 -07:00
Russ Cox
f011282578 runtime: more cgocallback_gofunc work
Debugging the Windows breakage I noticed that SEH
only exists on 386, so we can balance the two stacks
a little more on amd64 and reclaim another word.

Now we're down to just one word consumed by
cgocallback_gofunc, having reclaimed 25% of the
overall budget (4 words out of 16).

Separately, fix windows/386 - the SEH must be on the
m0 stack, as must the saved SP, so we are forced to have
a three-word frame for 386. It matters much less for
386, because there 128 bytes gives 32 words to use.

R=dvyukov, alex.brainman
CC=golang-dev
https://golang.org/cl/11551044
2013-07-24 09:01:57 -04:00
Russ Cox
dba623b1c7 runtime: reduce frame size for runtime.cgocallback_gofunc
Tying preemption to stack splits means that we have to be able to
complete the call to exitsyscall (inside cgocallbackg at least for now)
without any stack split checks, meaning that the whole sequence
has to work within 128 bytes of stack, unless we increase the size
of the red zone. This CL frees up 24 bytes along that critical path
on amd64. (The 32-bit systems have plenty of space because all
their words are smaller.)

R=dvyukov
CC=golang-dev
https://golang.org/cl/11676043
2013-07-23 18:40:02 -04:00
David Symonds
ae5991695c runtime: add a missing newline in a debug printf.
Trivial, but annoying while debugging this code.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/11656043
2013-07-22 12:42:42 +10:00
Dmitriy Vyukov
6857264457 runtime: prevent sysmon from polling the network excessively
If the network is not polled for 10ms, sysmon starts polling the network
on every iteration (every 20us) until another thread blocks in netpoll.
Fixes #5922.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/11569043
2013-07-19 17:45:34 +04:00
Dmitriy Vyukov
bc31bcccd3 runtime: preempt long-running goroutines
If a goroutine runs for more than 10ms, preempt it.
Update #543.

R=rsc
CC=golang-dev
https://golang.org/cl/10796043
2013-07-19 01:22:26 +04:00
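
A minimal sketch of what the change above enables (the program, its 50ms sleep,
and GOMAXPROCS=1 are illustrative, not from the CL): the busy goroutine never
blocks voluntarily, but the fmt.Sprintf call in its loop has a stack-check
prologue, so once the goroutine has run for more than 10ms it can be asked to
yield and the main goroutine gets the single P back.

        package main

        import (
                "fmt"
                "runtime"
                "time"
        )

        func main() {
                runtime.GOMAXPROCS(1) // a single P makes the starvation visible
                go func() {
                        for i := 0; ; i++ {
                                // The prologue of this call checks stackguard0,
                                // which the runtime sets to request preemption.
                                _ = fmt.Sprintf("%d", i)
                        }
                }()
                time.Sleep(50 * time.Millisecond) // hand the only P to the busy loop
                // Without preemption of the busy goroutine, this line is never reached.
                fmt.Println("main goroutine scheduled again")
        }
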
Russ Cox
58f12ffd79 runtime: handle morestack/lessstack in stack trace
If we start a garbage collection on g0 during a
stack split or unsplit, we'll see morestack or lessstack
at the top of the stack. Record an argument frame size
for those, and record that they terminate the stack.

R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/11533043
2013-07-18 16:53:45 -04:00
Ian Lance Taylor
1da96a3039 runtime: disable preemption again to fix linux build
Otherwise the tests in pkg/runtime fail:

runtime: unknown argument frame size for runtime.deferreturn called from 0x48657b [runtime_test.func·022]
fatal error: invalid stack
...

R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/11483043
2013-07-17 16:15:46 -07:00
Russ Cox
b913cf84dc runtime: re-enable preemption
Update #543

I believe the runtime is strong enough now to reenable
preemption during the function prologue.
Assuming this is or can be made stable, it will be in Go 1.2.
More aggressive preemption is not planned for Go 1.2.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/11433045
2013-07-17 14:03:27 -04:00
Dmitriy Vyukov
5887f142a3 runtime: more reliable preemption
Currently preemption signal g->stackguard0==StackPreempt
can be lost if it is received when preemption is disabled
(e.g. m->lock!=0). This change duplicates the preemption
signal in g->preempt and restores g->stackguard0
when preemption is enabled.
Update #543.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10792043
2013-07-17 12:52:37 -04:00
Russ Cox
a83748596c runtime: use new frame argument size information
With this CL, I believe the runtime always knows
the frame size during the gc walk. There is no fallback
to "assume entire stack frame of caller" anymore.

R=golang-dev, khr, cshapiro, dvyukov
CC=golang-dev
https://golang.org/cl/11374044
2013-07-17 12:47:18 -04:00
Dmitriy Vyukov
32fef9908a runtime: fix CPU underutilization
runtime.newproc/ready are deliberately sloppy about waking new M's;
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work while there are no spinning M's, and if so wakes up another one.
That does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.
This is reincarnation of cl/9776044 with the bug fixed.
The bug was due to code added after cl/9776044 was created:
if(tick - (((uint64)tick*0x4325c53fu)>>36)*61 == 0 && runtime·sched.runqsize > 0) {
        runtime·lock(&runtime·sched);
        gp = globrunqget(m->p, 1);
        runtime·unlock(&runtime·sched);
}
If an M gets a gp from the global runq here, it does not reset m->spinning.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10743044
2013-07-11 15:57:36 -04:00
Russ Cox
08e064135d runtime: disable preemption
There are various problems, and both Dmitriy and I
will be away for the next week. Make the runtime a bit
more stable while we're gone.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10848043
2013-07-01 17:57:09 -04:00
Dmitriy Vyukov
4b536a550f runtime: introduce GODEBUG env var
Currently it replaces the GOGCTRACE env var (GODEBUG=gctrace=1).
The plan is to extend it with other types of debug tracing,
e.g. GODEBUG=gctrace=1,schedtrace=100.

R=rsc
CC=bradfitz, daniel.morsing, gobot, golang-dev
https://golang.org/cl/10026045
2013-06-28 18:37:06 +04:00
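
For example (the binary name is illustrative), GC tracing that used to be
requested with the old variable is now spelled with GODEBUG, and the planned
combined form would look like:

        GOGCTRACE=1 ./prog                        # old form
        GODEBUG=gctrace=1 ./prog                  # new form
        GODEBUG=gctrace=1,schedtrace=100 ./prog   # planned combined form
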
Dmitriy Vyukov
1e112cd59f runtime: preempt goroutines for GC
This is the last patch for the preemptive scheduler;
with this change stoptheworld issues preemption
requests every 100us.
Update #543.

R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/10264044
2013-06-28 17:52:17 +04:00
Ian Lance Taylor
ab1270bcfc runtime: remove declaration of function that does not exist
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/10730045
2013-06-27 22:43:30 -07:00
Dmitriy Vyukov
7ebb187e8e undo CL 9776044 / 1e280889f997
Failure on bot:
http://build.golang.org/log/f4c648906e1289ec2237c1d0880fb1a8b1852a08

««« original CL description
runtime: fix CPU underutilization
runtime.newproc/ready are deliberately sloppy about waking new M's;
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work while there are no spinning M's, and if so wakes up another one.
That does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.

R=rsc
TBR=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
»»»

R=golang-dev
CC=golang-dev
https://golang.org/cl/10692043
2013-06-27 21:03:35 +04:00
Dmitriy Vyukov
15a1c3d1e4 runtime: fix CPU underutilization
runtime.newproc/ready are deliberately sloppy about waking new M's;
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work while there are no spinning M's, and if so wakes up another one.
That does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but that is too expensive to maintain.
Fixes #5586.

R=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
2013-06-27 20:52:12 +04:00
Russ Cox
6fa3c89b77 runtime: record proper goroutine state during stack split
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.

For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:

        TEXT myfunc
                check for split
                jmpcond nosplit
                call morestack
        nosplit:
                sub $xxx, sp

Now an extra instruction is inserted after the call:

        TEXT myfunc
        start:
                check for split
                jmpcond nosplit
                call morestack
                jmp start
        nosplit:
                sub $xxx, sp

The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.

The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.

The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.

Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)

runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
        morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
        sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow

goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
        /Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
        /Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
        /Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
        /Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
        /Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8

And here is a seg fault during oldstack:

SIGSEGV: segmentation violation
PC=0x1b2a6

runtime.oldstack()
        /Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
        /Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22

goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
        /Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
        /Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
        /Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
        strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8

goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
        /Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
        /Users/rsc/g/go/src/pkg/runtime/proc.c:166

rax     0x23ccc0
rbx     0x23ccc0
rcx     0x0
rdx     0x38
rdi     0x2102c0170
rsi     0x221032cfe0
rbp     0x221032cfa0
rsp     0x7fff5fbff5b0
r8      0x2102c0120
r9      0x221032cfa0
r10     0x221032c000
r11     0x104ce8
r12     0xe5c80
r13     0x1be82baac718
r14     0x13091135f7d69200
r15     0x0
rip     0x1b2a6
rflags  0x10246
cs      0x2b
fs      0x0
gs      0x0

Fixes #5723.

R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048
2013-06-27 11:32:01 -04:00
Dmitriy Vyukov
4bb491b12e runtime: improve scheduler fairness
Currently the global runqueue is starved if a group of goroutines
constantly respawn each other (the local runqueue never becomes empty).
Fixes #5639.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10042044
2013-06-15 16:06:28 +04:00
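
A sketch of the workload shape described above (illustrative, not from the CL):
goroutines that constantly respawn each other, so a P's local run queue never
becomes empty; before this fix, work sitting on the global run queue could then
be starved.

        package main

        // respawn queues a successor goroutine and exits, so the local run
        // queue of the P running this chain never becomes empty.
        func respawn() {
                go respawn()
        }

        func main() {
                go respawn()
                select {} // park main; the respawn chain keeps a P busy forever
        }
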
Russ Cox
d67e7e3acf runtime: add lr, ctxt, ret to Gobuf
Add gostartcall and gostartcallfn.
The old gogocall = gostartcall + gogo.
The old gogocallfn = gostartcallfn + gogo.

R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/10036044
2013-06-12 15:22:26 -04:00
Dmitriy Vyukov
dbcfed93e7 runtime: fix scheduler race condition
In starttheworld() we assume that P's with local work
are situated at the beginning of the idle P list.
However, once we start the first M, it can execute all local G's
and steal G's from other P's.
That breaks the assumption above, so starttheworld() will fail
to start some P's with local work.
It seems that this cannot lead to very bad things, but it is still
wrong and breaks other assumptions
(e.g. we can have a spinning M with local work).
The fix is to collect all P's with local work first,
and only then start them.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10051045
2013-06-12 18:46:35 +04:00
Russ Cox
e58f798c0c runtime: adjust traceback / garbage collector boundary
The garbage collection routine addframeroots is duplicating
logic in the traceback routine that calls it, sometimes correctly,
sometimes incorrectly, sometimes incompletely.
Pass necessary information to addframeroots instead of
deriving it anew.

Should make addframeroots significantly more robust.
It's certainly smaller.

Also try to standardize on uintptr for saved pc, sp values.

Will make CL 10036044 trivial.

R=golang-dev, dave, dvyukov
CC=golang-dev
https://golang.org/cl/10169045
2013-06-12 08:49:38 -04:00
Ian Lance Taylor
b6e52ecffa runtime: remove unused mid function
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10036047
2013-06-06 18:10:42 -07:00
Dmitriy Vyukov
4a8ef1f65d runtime: disable preemption in several scheduler functions
Required for the preemptive scheduler; see the comments for details.

R=golang-dev, khr, iant, khr
CC=golang-dev
https://golang.org/cl/9740051
2013-06-03 14:40:38 +04:00
Dmitriy Vyukov
354ec51666 runtime: introduce preemption function (not used for now)
This is part of the preemptive scheduler.

R=golang-dev, cshapiro, iant
CC=golang-dev
https://golang.org/cl/9843046
2013-06-03 13:20:17 +04:00
Dmitriy Vyukov
f5becf4233 runtime: add stackguard0 to G
This is part of the preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is never set to StackPreempt (it holds the original value).

R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
2013-06-03 12:28:24 +04:00
Dmitriy Vyukov
573d25a423 runtime: mark runtime.goexit as nosplit
Required for the preemptive scheduler; see the comment.

R=golang-dev, daniel.morsing
CC=golang-dev
https://golang.org/cl/9841047
2013-05-30 14:11:49 +04:00
Dmitriy Vyukov
081129e286 runtime: allocate internal symbol table eagerly
We need it for GC anyway.

R=golang-dev, khr, dave, khr
CC=golang-dev
https://golang.org/cl/9728044
2013-05-28 21:10:10 +04:00
Dmitriy Vyukov
34c67eb24e runtime: detect deadlocks in programs using cgo
When cgo is used, the runtime creates an additional M to handle callbacks on threads not created by Go.
This effectively disabled deadlock detection, which is the right thing, because a Go program can be blocked
and only serve callbacks on external threads.
It also disables deadlock detection under the race detector, because the race detector happens to use cgo.
With this change the additional M is created lazily on the first cgo call. So the deadlock detector
works for programs that import "C", "net" or "net/http/pprof" but do not actually use them.
Also fixes the deadlock detector under the race detector.
It should be fine to create the M later, because C code cannot call into Go before the first cgo call,
since C code does not know when Go initialization has completed. So a Go program needs to call into C
first, either to create an external thread or to notify a thread created in a global ctor that Go
initialization has completed.
Fixes #4973.
Fixes #5475.

R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9303046
2013-05-22 22:57:47 +04:00
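
A small illustration of the new behavior (the program is hypothetical, not from
the CL): it links in cgo indirectly through the blank import but never calls
into C, so the extra M is no longer created up front and the deadlock is still
reported.

        package main

        import _ "net" // pulls in cgo on most platforms, but is never used

        func main() {
                ch := make(chan int)
                // No goroutine can ever send on ch. Previously the extra cgo M
                // suppressed deadlock detection; with this change the runtime
                // reports "fatal error: all goroutines are asleep - deadlock!".
                <-ch
        }
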
Dmitriy Vyukov
1308194204 runtime: zeroize g->fnstart to not prevent GC of the closure
Fixes #5493.

R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9557043
2013-05-20 08:17:21 +04:00
Dmitriy Vyukov
fee1d1cda0 runtime: properly set G status after syscall
R=golang-dev, r, dave
CC=golang-dev
https://golang.org/cl/9307045
2013-05-19 19:35:09 +04:00
Anthony Martin
b65271d008 runtime: fix newproc debugging print
R=golang-dev, remyoudompheng, r
CC=golang-dev
https://golang.org/cl/9249044
2013-05-18 15:47:15 -07:00
Dmitriy Vyukov
0b5d55984f runtime: fix deadlock in network poller
The invariant is that there must be at least one running P or a thread polling the network.
This invariant was broken.
Fixes #5216.

R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/8459043
2013-04-06 22:27:54 -07:00