mirror of https://github.com/golang/go synced 2024-10-05 04:21:22 -06:00
Commit Graph

271 Commits

Author SHA1 Message Date
Dmitriy Vyukov
4f2e382c9f runtime: dump scheduler state if GODEBUG=schedtrace is set
The schedtrace value sets the dump period in milliseconds.
In the default mode the trace looks as follows:
SCHED 0ms: gomaxprocs=4 idleprocs=0 threads=3 idlethreads=0 runqueue=0 [1 0 0 0]
SCHED 1001ms: gomaxprocs=4 idleprocs=3 threads=6 idlethreads=3 runqueue=0 [0 0 0 0]
SCHED 2008ms: gomaxprocs=4 idleprocs=1 threads=6 idlethreads=1 runqueue=0 [0 1 0 0]
If GODEBUG=scheddetail=1 is set as well, then the detailed trace is printed:
SCHED 0ms: gomaxprocs=4 idleprocs=0 threads=3 idlethreads=0 runqueue=0 singleproc=0 gcwaiting=1 mlocked=0 nmspinning=0 stopwait=0 sysmonwait=0
  P0: status=3 tick=1 m=0 runqsize=1/128 gfreecnt=0
  P1: status=3 tick=0 m=-1 runqsize=0/128 gfreecnt=0
  P2: status=3 tick=0 m=-1 runqsize=0/128 gfreecnt=0
  P3: status=3 tick=0 m=-1 runqsize=0/128 gfreecnt=0
  M2: p=-1 curg=-1 mallocing=0 throwing=0 gcing=0 locks=1 dying=0 helpgc=0 spinning=0 lockedg=-1
  M1: p=-1 curg=-1 mallocing=0 throwing=0 gcing=0 locks=1 dying=0 helpgc=0 spinning=0 lockedg=-1
  M0: p=0 curg=1 mallocing=0 throwing=0 gcing=0 locks=1 dying=0 helpgc=0 spinning=0 lockedg=1
  G1: status=2() m=0 lockedm=0
  G2: status=1() m=-1 lockedm=-1
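
Illustration (editor-added, not part of this CL): to see this output, run any long-running Go program with the new variables set, e.g. GODEBUG=schedtrace=1000 ./prog or GODEBUG=schedtrace=1000,scheddetail=1 ./prog. A minimal sketch:

        // prog.go: any long-running program works; this one just idles.
        package main

        import "time"

        func main() {
            // Sleep long enough for several SCHED lines (one per period)
            // to appear on standard error.
            time.Sleep(5 * time.Second)
        }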

R=golang-dev, raggi, rsc
CC=golang-dev
https://golang.org/cl/11435044
2013-08-14 00:30:55 +04:00
Dmitriy Vyukov
f9066fe1c0 runtime: more reliable preemption
Currently it is possible for a goroutine
that periodically executes non-blocking
cgo/syscalls to never be preempted.
This change splits the scheduler and syscall
ticks to prevent that situation.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12658045
2013-08-13 22:14:04 +04:00
Dmitriy Vyukov
fa4628346b runtime: remove unused m->racepc
The original plan was to collect allocation stacks
for all memory blocks. But that was never implemented,
it is not planned for the near future, and it is unclear how to do it at all.

R=golang-dev, dave, bradfitz
CC=golang-dev
https://golang.org/cl/12724044
2013-08-12 21:48:19 +04:00
Dmitriy Vyukov
23e15f7253 net: add special netFD mutex
The mutex, fdMutex, handles locking and lifetime of sysfd,
and serializes the Read and Write methods.
This allows stripping two sync.Mutex.Lock calls,
two sync.Mutex.Unlock calls, one defer, and some
miscellaneous overhead from every network operation.

On linux/amd64, Intel E5-2690:
benchmark                             old ns/op    new ns/op    delta
BenchmarkTCP4Persistent                    9595         9454   -1.47%
BenchmarkTCP4Persistent-2                  8978         8772   -2.29%
BenchmarkTCP4ConcurrentReadWrite           4900         4625   -5.61%
BenchmarkTCP4ConcurrentReadWrite-2         2603         2500   -3.96%

In general it strips 70-500 ns from every network operation, depending
on the processor model. On my relatively new E5-2690 it amounts to ~5%
of the cost of a network operation.

Fixes #6074.
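
Illustration (editor-added, not part of this CL): a much simplified sketch of the underlying idea, not the actual net.fdMutex (which also serializes readers and writers and parks waiters). A single atomic state word combines a closed flag with a count of in-flight operations, so each operation costs one CAS or atomic add instead of a mutex lock/unlock pair:

        package main

        import (
            "fmt"
            "sync/atomic"
        )

        // fdRef is a toy stand-in: bit 0 is the closed flag, the remaining
        // bits count operations currently using the fd.
        type fdRef struct{ state uint64 }

        const (
            closedBit = 1 << 0
            refUnit   = 1 << 1
        )

        // incref registers an in-flight operation; it fails once the fd is closed.
        func (m *fdRef) incref() bool {
            for {
                old := atomic.LoadUint64(&m.state)
                if old&closedBit != 0 {
                    return false
                }
                if atomic.CompareAndSwapUint64(&m.state, old, old+refUnit) {
                    return true
                }
            }
        }

        // decref drops a reference and reports whether the caller should now
        // release the underlying sysfd (closed and no references left).
        func (m *fdRef) decref() bool {
            v := atomic.AddUint64(&m.state, ^uint64(refUnit-1))
            return v&closedBit != 0 && v>>1 == 0
        }

        // close marks the fd closed so that no new operations can start.
        func (m *fdRef) close() {
            for {
                old := atomic.LoadUint64(&m.state)
                if old&closedBit != 0 || atomic.CompareAndSwapUint64(&m.state, old, old|closedBit) {
                    return
                }
            }
        }

        func main() {
            var m fdRef
            fmt.Println(m.incref()) // true: operation in flight
            m.close()
            fmt.Println(m.incref()) // false: fd already closed
            fmt.Println(m.decref()) // true: last reference gone after close
        }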

R=golang-dev, bradfitz, alex.brainman, iant, mikioh.mikioh
CC=golang-dev
https://golang.org/cl/12418043
2013-08-09 21:43:00 +04:00
Dmitriy Vyukov
01f1e3da48 runtime: traceback running goroutines
Introduce a freezetheworld function that makes a best-effort attempt to stop any concurrently running goroutines, and call it during a crash.
Fixes #5873.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12054044
2013-08-09 12:53:35 +04:00
Dmitriy Vyukov
65834685d3 runtime: use GetQueuedCompletionStatusEx on windows if available
GetQueuedCompletionStatusEx allows dequeuing a batch of completion
notifications, which is more efficient than dequeuing them one by one.

benchmark                           old ns/op    new ns/op    delta
BenchmarkClientServerParallel4         100605        90945   -9.60%
BenchmarkClientServerParallel4-2        90225        74504  -17.42%

R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12436044
2013-08-08 17:41:57 +04:00
Rob Pike
98a80b95b4 runtime: use correct types for maxstring and concatstring
Updates #6046.
This CL just does maxstring and concatstring. There are other functions
to fix, but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up on architectures I can't test
myself.

R=golang-dev, dave, iant
CC=golang-dev
https://golang.org/cl/12519044
2013-08-07 06:49:11 +10:00
Rob Pike
82f5ca1ef0 runtime: change int32 to intgo in findnull and findnullw
Update #6046.
This CL just does findnull and findnullw. There are other functions
to fix, but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up on architectures I can't test
myself.

R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12520043
2013-08-06 21:49:03 +10:00
Dmitriy Vyukov
9c0500b466 runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

This is the same as reverted change 12250043
with save marked as textflag 7.
The problem was that if save called morestack,
then the subsequent lessstack spoiled g->sched.pc/sp,
and those bad values were remembered in g->syscallpc/sp.
Entersyscallblock had the same problem,
but it has never been triggered to date.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12478043
2013-08-06 13:38:44 +04:00
Russ Cox
d3066e47b1 runtime/pprof: test multithreaded profile, remove OS X workarounds
This means that pprof will no longer report profiles on OS X.
That's unfortunate, but the profiles were often wrong and, worse,
it was difficult to tell whether the profile was wrong or not.

The workarounds made the scheduler more complex,
possibly caused a deadlock (see issue 5519), and did not actually
deliver reliable results.

It may be possible for adventurous users to apply a patch to
their kernels to get working results, or perhaps having no results
will encourage someone to do the work of creating a profiling
thread like on Windows. Issue 6047 has details.

Fixes #5519.
Fixes #6047.

R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/12429045
2013-08-05 19:49:02 -04:00
Dmitriy Vyukov
f38ff9e5ea undo CL 12250043 / e911f94c4902
It broke all 386 builders.

««« original CL description
runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
»»»

R=rsc
CC=golang-dev
https://golang.org/cl/12424045
2013-08-05 23:33:50 +04:00
Dmitriy Vyukov
d5ab784611 runtime: remove singleproc var
It was needed for the old scheduler,
because there could temporarily be more threads than gomaxprocs.
In the new scheduler gomaxprocs is always respected.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12438043
2013-08-05 22:58:02 +04:00
Dmitriy Vyukov
f73972fa33 runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
2013-08-05 22:55:54 +04:00
Dmitriy Vyukov
49217cf5fd runtime: remove unused scheduler knob
Blockingsyscall was used in the net package on Windows;
it is not used anymore.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12436043
2013-08-04 23:32:40 +04:00
Keith Randall
9cd570680b runtime: reimplement reflect.call to not use stack splitting.
R=golang-dev, r, khr, rsc
CC=golang-dev
https://golang.org/cl/12053043
2013-08-02 13:03:14 -07:00
Dmitriy Vyukov
c33d490020 runtime: print "created by" for running goroutines in traceback
This at least allows determining the goroutine's "identity".
Now it looks like:
goroutine 12 [running]:
        goroutine running on other thread; stack unavailable
created by testing.RunTests
        src/pkg/testing/testing.go:440 +0x88e

R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/12248043
2013-08-01 19:28:38 +04:00
Dmitriy Vyukov
3d6bce411c runtime: fix code formatting
This is mainly to force another build
with goroutine preemption.

R=rsc
CC=golang-dev
https://golang.org/cl/12006045
2013-07-30 23:48:18 +04:00
Dmitriy Vyukov
e84d9e1fb3 runtime: do not split stacks in syscall status
Split-stack checks (morestack) corrupt g->sched,
but g->sched must be kept consistent for GC/traceback.
The change implements the runtime.notetsleepg function,
which does entersyscall/exitsyscall and is carefully arranged
not to call any split-stack functions in between.

R=rsc
CC=golang-dev
https://golang.org/cl/11575044
2013-07-29 22:22:34 +04:00
Pieter Droogendijk
6350e45892 runtime: allow SetFinalizer with a func(interface{})
Fixes #5368.
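
Usage sketch (editor-added, not from the CL): the finalizer's argument can now be declared as interface{} instead of the object's concrete pointer type.

        package main

        import (
            "fmt"
            "runtime"
            "time"
        )

        type resource struct{ name string }

        func main() {
            r := &resource{name: "conn"}
            // Previously the finalizer had to take *resource; func(interface{}) is now accepted.
            runtime.SetFinalizer(r, func(obj interface{}) {
                fmt.Println("finalizing", obj.(*resource).name)
            })
            r = nil
            runtime.GC()
            time.Sleep(100 * time.Millisecond) // give the finalizer goroutine a chance to run
        }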

R=golang-dev, dvyukov
CC=golang-dev, rsc
https://golang.org/cl/11858043
2013-07-29 19:43:08 +04:00
Dmitriy Vyukov
e97d677b4e runtime: introduce notetsleepg function
notetsleepg is the same as notetsleep, but is called on a user g.
It includes entersyscall/exitsyscall and will help avoid
split-stack functions while in syscall status.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11681043
2013-07-22 23:02:27 +04:00
Dmitriy Vyukov
27134567fa runtime: clarify comment for m->locked
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11671043
2013-07-22 16:37:31 +04:00
Russ Cox
c758841853 cmd/ld, runtime: remove unused fields from Func
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11604043
2013-07-19 18:52:35 -04:00
Russ Cox
48769bf546 runtime: use funcdata to supply garbage collection information
This CL introduces a FUNCDATA number for runtime-specific
garbage collection metadata, changes the C and Go compilers
to emit that metadata, and changes the runtime to expect it.

The old pseudo-instructions that carried this information
are gone, as is the linker code to process them.

R=golang-dev, dvyukov, cshapiro
CC=golang-dev
https://golang.org/cl/11406044
2013-07-19 16:04:09 -04:00
Keith Randall
6fc49c1854 runtime: cleanup: use ArgsSizeUnknown to mark all functions
whose argument size is unknown (C vararg functions, and
assembly code without an explicit specification).

We used to use 0 to mean "unknown" and 1 to mean "zero".
Now we use ArgsSizeUnknown (0x80000000) to mean "unknown".

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11590043
2013-07-19 11:19:18 -07:00
Russ Cox
c3de91bb15 cmd/ld, runtime: use new contiguous pcln table
R=golang-dev, r, dave
CC=golang-dev
https://golang.org/cl/11494043
2013-07-18 10:43:22 -04:00
Dmitriy Vyukov
5887f142a3 runtime: more reliable preemption
Currently the preemption signal g->stackguard0==StackPreempt
can be lost if it is received while preemption is disabled
(e.g. m->lock!=0). This change duplicates the preemption
signal in g->preempt and restores g->stackguard0
when preemption is re-enabled.
Update #543.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10792043
2013-07-17 12:52:37 -04:00
Russ Cox
a83748596c runtime: use new frame argument size information
With this CL, I believe the runtime always knows
the frame size during the gc walk. There is no fallback
to "assume entire stack frame of caller" anymore.

R=golang-dev, khr, cshapiro, dvyukov
CC=golang-dev
https://golang.org/cl/11374044
2013-07-17 12:47:18 -04:00
Russ Cox
5d363c6357 cmd/ld, runtime: new in-memory symbol table format
Design at http://golang.org/s/go12symtab.

This enables some cleanup of the garbage collector metadata
that will be done in future CLs.

This CL does not move the old symtab and pclntab back into
an unmapped section of the file. That's a bit tricky and will be
done separately.

Fixes #4020.

R=golang-dev, dave, cshapiro, iant, r
CC=golang-dev, nigeltao
https://golang.org/cl/11085043
2013-07-16 09:41:38 -04:00
Russ Cox
fb63e4fefb runtime: make cas64 like cas32 and casp
The current cas64 definition hard-codes the x86 behavior
of updating *old with the value observed in memory when the cas fails.
This is inconsistent with cas32 and casp.
Make it consistent.

This means that uses of cas64 will be epsilon less efficient
than they might be, because they have to do an unnecessary
memory load on x86. But so be it. Code clarity and consistency
are more important.
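
Illustration (editor-added): the retry loop that callers of a CAS with these semantics write, shown here with sync/atomic. On failure the caller reloads the current value itself, which is the extra memory load the text above accepts for the sake of consistency:

        package main

        import (
            "fmt"
            "sync/atomic"
        )

        // addOne retries until its CAS succeeds, reloading *addr on every
        // attempt rather than expecting a failed CAS to update a local copy.
        func addOne(addr *uint64) {
            for {
                old := atomic.LoadUint64(addr)
                if atomic.CompareAndSwapUint64(addr, old, old+1) {
                    return
                }
            }
        }

        func main() {
            var x uint64
            addOne(&x)
            fmt.Println(x) // 1
        }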

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10909045
2013-07-12 00:03:32 -04:00
Dmitriy Vyukov
4b536a550f runtime: introduce GODEBUG env var
Currently it replaces the GOGCTRACE env var (GODEBUG=gctrace=1).
The plan is to extend it with other types of debug tracing,
e.g. GODEBUG=gctrace=1,schedtrace=100.
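
Example (editor-added, not from the CL): running a small allocating program as GODEBUG=gctrace=1 ./prog prints one summary line per collection, where GOGCTRACE=1 was needed before:

        package main

        import "runtime"

        func main() {
            for i := 0; i < 5; i++ {
                _ = make([]byte, 1<<20) // allocate so there is something to collect
                runtime.GC()            // force collections to make the trace visible
            }
        }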

R=rsc
CC=bradfitz, daniel.morsing, gobot, golang-dev
https://golang.org/cl/10026045
2013-06-28 18:37:06 +04:00
Dmitriy Vyukov
1e112cd59f runtime: preempt goroutines for GC
This is the last patch for the preemptive scheduler;
with this change stoptheworld issues preemption
requests every 100us.
Update #543.

R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/10264044
2013-06-28 17:52:17 +04:00
Russ Cox
6fa3c89b77 runtime: record proper goroutine state during stack split
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.

For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:

        TEXT myfunc
                check for split
                jmpcond nosplit
                call morestack
        nosplit:
                sub $xxx, sp

Now an extra instruction is inserted after the call:

        TEXT myfunc
        start:
                check for split
                jmpcond nosplit
                call morestack
                jmp start
        nosplit:
                sub $xxx, sp

The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.

The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.

The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.

Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)

runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
        morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
        sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow

goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
        /Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
        /Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
        /Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
        /Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
        /Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8

And here is a seg fault during oldstack:

SIGSEGV: segmentation violation
PC=0x1b2a6

runtime.oldstack()
        /Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
        /Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22

goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
        /Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
        /Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
        /Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
        strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8

goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
        /Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
        /Users/rsc/g/go/src/pkg/runtime/proc.c:166

rax     0x23ccc0
rbx     0x23ccc0
rcx     0x0
rdx     0x38
rdi     0x2102c0170
rsi     0x221032cfe0
rbp     0x221032cfa0
rsp     0x7fff5fbff5b0
r8      0x2102c0120
r9      0x221032cfa0
r10     0x221032c000
r11     0x104ce8
r12     0xe5c80
r13     0x1be82baac718
r14     0x13091135f7d69200
r15     0x0
rip     0x1b2a6
rflags  0x10246
cs      0x2b
fs      0x0
gs      0x0

Fixes #5723.

R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048
2013-06-27 11:32:01 -04:00
Ian Lance Taylor
8cd0689a63 runtime: remove unused typedef
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/10660044
2013-06-26 22:02:32 -07:00
Alex Brainman
05a5de30f0 runtime: do not generate code during runtime in windows NewCallback
Update #5494

R=golang-dev, minux.ma, rsc, iant
CC=golang-dev
https://golang.org/cl/10368043
2013-06-24 17:17:45 +10:00
Dmitriy Vyukov
5caf762457 runtime: remove unused moreframesize_minalloc field
It was used to request a large stack segment for the GC
when it was not running on g0.
Now the GC runs on g0 with a large stack,
and the field is not needed anymore.

R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/10242045
2013-06-15 16:02:39 +04:00
Russ Cox
d67e7e3acf runtime: add lr, ctxt, ret to Gobuf
Add gostartcall and gostartcallfn.
The old gogocall = gostartcall + gogo.
The old gogocallfn = gostartcallfn + gogo.

R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/10036044
2013-06-12 15:22:26 -04:00
Russ Cox
e58f798c0c runtime: adjust traceback / garbage collector boundary
The garbage collection routine addframeroots is duplicating
logic in the traceback routine that calls it, sometimes correctly,
sometimes incorrectly, sometimes incompletely.
Pass necessary information to addframeroots instead of
deriving it anew.

Should make addframeroots significantly more robust.
It's certainly smaller.

Also try to standardize on uintptr for saved pc, sp values.

Will make CL 10036044 trivial.

R=golang-dev, dave, dvyukov
CC=golang-dev
https://golang.org/cl/10169045
2013-06-12 08:49:38 -04:00
Dmitriy Vyukov
b36f2db12a runtime: use persistentalloc instead of mallocgc for itab
Reduces heap size.

R=golang-dev, remyoudompheng, bradfitz
CC=golang-dev
https://golang.org/cl/10139043
2013-06-09 21:58:35 +04:00
Dmitriy Vyukov
f5becf4233 runtime: add stackguard0 to G
This is part of preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (holds the original value).

R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
2013-06-03 12:28:24 +04:00
Dmitriy Vyukov
e932c2035f runtime: make notetsleep() return false if timeout happens
This is needed for the preemptive scheduler, because during
stoptheworld we want to wait with a timeout and re-preempt
M's when the timeout expires.

R=golang-dev, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/9375043
2013-05-29 11:49:45 +04:00
Carl Shapiro
4e0a51c210 cmd/5l, cmd/6l, cmd/8l, cmd/gc, runtime: generate and use bitmaps of argument pointer locations
With this change the compiler emits a bitmap for each function
covering its stack frame arguments area.  If an argument word
is known to contain a pointer, a bit is set.  The garbage
collector reads this information when scanning the stack frame by
frame and uses it to ignore locations known not to contain a
pointer.
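
Toy illustration (editor-added, not the compiler's actual encoding): bit i of a per-function bitmap is set when argument word i is known to hold a pointer, and a scanner consults the bitmap to decide which words to examine:

        package main

        import "fmt"

        // buildArgBitmap records which argument words hold pointers.
        func buildArgBitmap(isPointer []bool) uint32 {
            var bm uint32
            for i, p := range isPointer {
                if p {
                    bm |= 1 << uint(i)
                }
            }
            return bm
        }

        func main() {
            // For a hypothetical func f(p *int, n int, q *byte):
            // words 0 and 2 are pointers, word 1 is not.
            bm := buildArgBitmap([]bool{true, false, true})
            for i := 0; i < 3; i++ {
                if bm&(1<<uint(i)) != 0 {
                    fmt.Printf("scan argument word %d\n", i)
                } else {
                    fmt.Printf("ignore argument word %d\n", i)
                }
            }
        }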

R=golang-dev, bradfitz, daniel.morsing, dvyukov, khr, khr, iant, cshapiro
CC=golang-dev
https://golang.org/cl/9223046
2013-05-28 17:59:10 -07:00
Dmitriy Vyukov
081129e286 runtime: allocate internal symbol table eagerly
We need it for the GC anyway.

R=golang-dev, khr, dave, khr
CC=golang-dev
https://golang.org/cl/9728044
2013-05-28 21:10:10 +04:00
Dmitriy Vyukov
34c67eb24e runtime: detect deadlocks in programs using cgo
When cgo is used, the runtime creates an additional M to handle callbacks on threads not created by Go.
This effectively disabled deadlock detection, which is the right thing, because a Go program can be blocked
and only serve callbacks on external threads.
It also disabled deadlock detection under the race detector, because the race detector happens to use cgo.
With this change the additional M is created lazily on the first cgo call, so the deadlock detector
works for programs that import "C", "net" or "net/http/pprof" but do not actually use them.
This also fixes the deadlock detector under the race detector.
It should be fine to create the M later, because C code cannot call into Go before the first cgo call:
C code does not know when Go initialization has completed, so a Go program needs to call into C
first, either to create an external thread or to notify a thread created in a global ctor that Go
initialization has completed.
Fixes #4973.
Fixes #5475.
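
Example (editor-added, assuming a platform where importing "net" pulls in cgo): this program never calls into C, so with the lazily created M the runtime can again report the deadlock instead of staying silent:

        package main

        import (
            "fmt"
            _ "net" // linked in but never used; previously this alone disabled deadlock detection
        )

        func main() {
            ch := make(chan int)
            fmt.Println("waiting forever...")
            <-ch // no sender: expect "fatal error: all goroutines are asleep - deadlock!"
        }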

R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9303046
2013-05-22 22:57:47 +04:00
Alex Brainman
38abb09a2e runtime: change PollDesc.fd from int32 to uintptr
This is in preparation for the Windows version of netpoll.

R=golang-dev, bradfitz
CC=dvyukov, golang-dev, mikioh.mikioh
https://golang.org/cl/9569043
2013-05-20 12:55:50 +10:00
Ian Lance Taylor
6732ad94c7 runtime: make CgoMal alloc field void*
This makes it an unsafe.Pointer in Go so the garbage collector
will treat it as a pointer to untyped data, not a pointer to
bytes.

R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/8286045
2013-04-06 20:18:15 -07:00
Dmitriy Vyukov
58030c541b runtime: change Note from union to struct
Unions can break precise GC.
Update #5193.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8362046
2013-04-06 20:09:02 -07:00
Dmitriy Vyukov
d617454379 runtime: change Lock from union to struct
Unions can break precise GC.
Update #5193.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8457043
2013-04-06 20:07:07 -07:00
Dmitriy Vyukov
54340bf56f runtime: reset dangling typed pointer
Also untype it, because it can point to objects of different types.
Update #5193.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8454043
2013-04-06 20:01:28 -07:00
Keith Randall
3d5daa2319 runtime: Implement faster equals for strings and bytes.
(amd64)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            6  -63.15%
BenchmarkEqual9            22            7  -65.37%
BenchmarkEqual32           36            9  -74.91%
BenchmarkEqual4K         2187          120  -94.51%

benchmark            old MB/s     new MB/s  speedup
BenchmarkEqual9        392.22      1134.38    2.89x
BenchmarkEqual32       866.72      3457.39    3.99x
BenchmarkEqual4K      1872.73     33998.87   18.15x

(386)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            5  -63.85%
BenchmarkEqual9            22            7  -67.84%
BenchmarkEqual32           34           12  -64.94%
BenchmarkEqual4K         2196          113  -94.85%

benchmark            old MB/s     new MB/s  speedup
BenchmarkEqual9        405.81      1260.18    3.11x
BenchmarkEqual32       919.55      2631.21    2.86x
BenchmarkEqual4K      1864.85     36072.54   19.34x

Update #3751
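
The benchmark names above are assumed to follow the usual testing-package pattern; an editor-added skeleton of that kind of benchmark:

        package bytes_test

        import (
            "bytes"
            "testing"
        )

        func benchmarkEqual(b *testing.B, size int) {
            x := bytes.Repeat([]byte{'a'}, size)
            y := bytes.Repeat([]byte{'a'}, size)
            b.SetBytes(int64(size))
            for i := 0; i < b.N; i++ {
                if !bytes.Equal(x, y) {
                    b.Fatal("unexpected mismatch")
                }
            }
        }

        func BenchmarkEqual32(b *testing.B) { benchmarkEqual(b, 32) }
        func BenchmarkEqual4K(b *testing.B) { benchmarkEqual(b, 4096) }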

R=bradfitz, r, khr, dave, remyoudompheng, fullung, minux.ma, ality
CC=golang-dev
https://golang.org/cl/8056043
2013-04-02 16:26:15 -07:00
Carl Shapiro
c676b8b27b cmd/ld, runtime: restrict stack root scan to locals and arguments
Updates #5134

R=golang-dev, bradfitz, cshapiro, daniel.morsing, ality, iant
CC=golang-dev
https://golang.org/cl/8022044
2013-03-28 14:36:23 -07:00