mirror of https://github.com/golang/go synced 2024-10-05 02:21:22 -06:00
Commit Graph

302 Commits

Author SHA1 Message Date
Dmitriy Vyukov
e97d677b4e runtime: introduce notetsleepg function
notetsleepg is the same as notetsleep, but is called on a user g.
It includes entersyscall/exitsyscall and will help to avoid
split stack functions while in syscall status.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11681043
2013-07-22 23:02:27 +04:00
Dmitriy Vyukov
27134567fa runtime: clarify comment for m->locked
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11671043
2013-07-22 16:37:31 +04:00
Russ Cox
c758841853 cmd/ld, runtime: remove unused fields from Func
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11604043
2013-07-19 18:52:35 -04:00
Russ Cox
48769bf546 runtime: use funcdata to supply garbage collection information
This CL introduces a FUNCDATA number for runtime-specific
garbage collection metadata, changes the C and Go compilers
to emit that metadata, and changes the runtime to expect it.

The old pseudo-instructions that carried this information
are gone, as is the linker code to process them.

R=golang-dev, dvyukov, cshapiro
CC=golang-dev
https://golang.org/cl/11406044
2013-07-19 16:04:09 -04:00
Keith Randall
6fc49c1854 runtime: cleanup: use ArgsSizeUnknown to mark all functions
whose argument size is unknown (C vararg functions, and
assembly code without an explicit specification).

We used to use 0 to mean "unknown" and 1 to mean "zero".
Now we use ArgsSizeUnknown (0x80000000) to mean "unknown".
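
As a sketch, the new sentinel in Go-style declarations; the value comes from
the text above, while the surrounding helper is invented for the example:

        // ArgsSizeUnknown marks functions whose argument size is not known
        // (C vararg functions, assembly without an explicit specification).
        const ArgsSizeUnknown = 0x80000000

        // argsKnown reports whether a recorded argument size is meaningful.
        func argsKnown(size uint32) bool {
                return size != ArgsSizeUnknown
        }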

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11590043
2013-07-19 11:19:18 -07:00
Russ Cox
c3de91bb15 cmd/ld, runtime: use new contiguous pcln table
R=golang-dev, r, dave
CC=golang-dev
https://golang.org/cl/11494043
2013-07-18 10:43:22 -04:00
Dmitriy Vyukov
5887f142a3 runtime: more reliable preemption
Currently the preemption signal (g->stackguard0==StackPreempt)
can be lost if it is received while preemption is disabled
(e.g. m->lock!=0). This change duplicates the preemption
signal in g->preempt and restores g->stackguard0
when preemption is re-enabled.
Update #543.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10792043
2013-07-17 12:52:37 -04:00
Russ Cox
a83748596c runtime: use new frame argument size information
With this CL, I believe the runtime always knows
the frame size during the gc walk. There is no fallback
to "assume entire stack frame of caller" anymore.

R=golang-dev, khr, cshapiro, dvyukov
CC=golang-dev
https://golang.org/cl/11374044
2013-07-17 12:47:18 -04:00
Russ Cox
5d363c6357 cmd/ld, runtime: new in-memory symbol table format
Design at http://golang.org/s/go12symtab.

This enables some cleanup of the garbage collector metadata
that will be done in future CLs.

This CL does not move the old symtab and pclntab back into
an unmapped section of the file. That's a bit tricky and will be
done separately.

Fixes #4020.

R=golang-dev, dave, cshapiro, iant, r
CC=golang-dev, nigeltao
https://golang.org/cl/11085043
2013-07-16 09:41:38 -04:00
Russ Cox
fb63e4fefb runtime: make cas64 like cas32 and casp
The current cas64 definition hard-codes the x86 behavior
of updating *old with the value found in memory when the cas fails.
This is inconsistent with cas32 and casp.
Make it consistent.

This means that the uses of cas64 will be epsilon less efficient
than they might be, because they have to do an unnecessary
memory load on x86. But so be it. Code clarity and consistency
are more important.
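
A sketch of the consistent contract in Go-like pseudocode; the signature is
illustrative, not the runtime's C declaration:

        // cas64 atomically compares *addr with old and, if equal, stores new.
        // On failure it simply reports false; it does not write the observed
        // value back into the caller's 'old' (the behavior the x86 version
        // used to hard-code).
        func cas64(addr *uint64, old, new uint64) bool {
                // conceptually performed as one atomic step
                if *addr == old {
                        *addr = new
                        return true
                }
                return false
        }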

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10909045
2013-07-12 00:03:32 -04:00
Dmitriy Vyukov
4b536a550f runtime: introduce GODEBUG env var
Currently it replaces GOGCTRACE env var (GODEBUG=gctrace=1).
The plan is to extend it with other types of debug tracing,
e.g. GODEBUG=gctrace=1,schedtrace=100.
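
For example, running a trivial program (the binary name is a placeholder) as
GODEBUG=gctrace=1 ./myprog makes each collection print a gctrace summary line
to standard error:

        package main

        import "runtime"

        func main() {
                runtime.GC() // force a collection so a gctrace line appears immediately
        }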

R=rsc
CC=bradfitz, daniel.morsing, gobot, golang-dev
https://golang.org/cl/10026045
2013-06-28 18:37:06 +04:00
Dmitriy Vyukov
1e112cd59f runtime: preempt goroutines for GC
This is the last patch for the preemptive scheduler;
with this change, stoptheworld issues preemption
requests every 100us.
Update #543.

R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/10264044
2013-06-28 17:52:17 +04:00
Russ Cox
6fa3c89b77 runtime: record proper goroutine state during stack split
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.

For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:

        TEXT myfunc
                check for split
                jmpcond nosplit
                call morestack
        nosplit:
                sub $xxx, sp

Now an extra instruction is inserted after the call:

        TEXT myfunc
        start:
                check for split
                jmpcond nosplit
                call morestack
                jmp start
        nosplit:
                sub $xxx, sp

The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.

The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.

The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.

Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)

runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
        morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
        sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow

goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
        /Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
        /Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
        /Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
        /Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
        /Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8

And here is a seg fault during oldstack:

SIGSEGV: segmentation violation
PC=0x1b2a6

runtime.oldstack()
        /Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
        /Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22

goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
        /Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
        /Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
        /Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
        strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8

goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
        /Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
        /Users/rsc/g/go/src/pkg/runtime/proc.c:166

rax     0x23ccc0
rbx     0x23ccc0
rcx     0x0
rdx     0x38
rdi     0x2102c0170
rsi     0x221032cfe0
rbp     0x221032cfa0
rsp     0x7fff5fbff5b0
r8      0x2102c0120
r9      0x221032cfa0
r10     0x221032c000
r11     0x104ce8
r12     0xe5c80
r13     0x1be82baac718
r14     0x13091135f7d69200
r15     0x0
rip     0x1b2a6
rflags  0x10246
cs      0x2b
fs      0x0
gs      0x0

Fixes #5723.

R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048
2013-06-27 11:32:01 -04:00
Ian Lance Taylor
8cd0689a63 runtime: remove unused typedef
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/10660044
2013-06-26 22:02:32 -07:00
Alex Brainman
05a5de30f0 runtime: do not generate code during runtime in windows NewCallback
Update #5494

R=golang-dev, minux.ma, rsc, iant
CC=golang-dev
https://golang.org/cl/10368043
2013-06-24 17:17:45 +10:00
Dmitriy Vyukov
5caf762457 runtime: remove unused moreframesize_minalloc field
It was used to request a large stack segment for GC
when GC was not running on g0.
Now GC runs on g0 with a large stack,
and the field is not needed anymore.

R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/10242045
2013-06-15 16:02:39 +04:00
Russ Cox
d67e7e3acf runtime: add lr, ctxt, ret to Gobuf
Add gostartcall and gostartcallfn.
The old gogocall = gostartcall + gogo.
The old gogocallfn = gostartcallfn + gogo.

R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/10036044
2013-06-12 15:22:26 -04:00
Russ Cox
e58f798c0c runtime: adjust traceback / garbage collector boundary
The garbage collection routine addframeroots is duplicating
logic in the traceback routine that calls it, sometimes correctly,
sometimes incorrectly, sometimes incompletely.
Pass necessary information to addframeroots instead of
deriving it anew.

Should make addframeroots significantly more robust.
It's certainly smaller.

Also try to standardize on uintptr for saved pc, sp values.

Will make CL 10036044 trivial.

R=golang-dev, dave, dvyukov
CC=golang-dev
https://golang.org/cl/10169045
2013-06-12 08:49:38 -04:00
Dmitriy Vyukov
b36f2db12a runtime: use persistentalloc instead of mallocgc for itab
Reduces heap size.

R=golang-dev, remyoudompheng, bradfitz
CC=golang-dev
https://golang.org/cl/10139043
2013-06-09 21:58:35 +04:00
Dmitriy Vyukov
f5becf4233 runtime: add stackguard0 to G
This is part of preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (holds the original value).

R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
2013-06-03 12:28:24 +04:00
Dmitriy Vyukov
e932c2035f runtime: make notetsleep() return false if timeout happens
This is needed for the preemptive scheduler, because during
stoptheworld we want to wait with a timeout and re-preempt
M's on timeout.

R=golang-dev, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/9375043
2013-05-29 11:49:45 +04:00
Carl Shapiro
4e0a51c210 cmd/5l, cmd/6l, cmd/8l, cmd/gc, runtime: generate and use bitmaps of argument pointer locations
With this change the compiler emits a bitmap for each function
covering its stack frame arguments area.  If an argument word
is known to contain a pointer, a bit is set.  The garbage
collector reads this information when scanning the stack by
frames and uses it to ignore locations known not to contain a
pointer.
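
As a hedged illustration of consulting such a bitmap (the layout and names are
invented for the example, not the emitted format), bit i set means argument
word i may hold a pointer:

        // wordIsPointer reports whether bit i is set in the per-argument-word bitmap.
        func wordIsPointer(bitmap []byte, i uint) bool {
                return bitmap[i/8]>>(i%8)&1 != 0
        }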

R=golang-dev, bradfitz, daniel.morsing, dvyukov, khr, khr, iant, cshapiro
CC=golang-dev
https://golang.org/cl/9223046
2013-05-28 17:59:10 -07:00
Dmitriy Vyukov
081129e286 runtime: allocate internal symbol table eagerly
We need it for GC anyway.

R=golang-dev, khr, dave, khr
CC=golang-dev
https://golang.org/cl/9728044
2013-05-28 21:10:10 +04:00
Dmitriy Vyukov
34c67eb24e runtime: detect deadlocks in programs using cgo
When cgo is used, the runtime creates an additional M to handle callbacks on threads not created by Go.
This effectively disabled deadlock detection, which is the right thing, because a Go program can be blocked
and only serve callbacks on external threads.
It also disabled deadlock detection under the race detector, because it happens to use cgo.
With this change the additional M is created lazily on the first cgo call. So the deadlock detector
works for programs that import "C", "net" or "net/http/pprof" but do not actually use them.
Also fixes the deadlock detector under the race detector.
It should be fine to create the M later, because C code cannot call into Go before the first cgo call,
since C code does not know when Go initialization has completed. So a Go program needs to call into C
first, either to create an external thread or to notify a thread created in a global ctor that Go
initialization has completed.
Fixes #4973.
Fixes #5475.

R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9303046
2013-05-22 22:57:47 +04:00
Alex Brainman
38abb09a2e runtime: change PollDesc.fd from int32 to uintptr
This is in preparation for netpoll windows version.

R=golang-dev, bradfitz
CC=dvyukov, golang-dev, mikioh.mikioh
https://golang.org/cl/9569043
2013-05-20 12:55:50 +10:00
Ian Lance Taylor
6732ad94c7 runtime: make CgoMal alloc field void*
This makes it an unsafe.Pointer in Go so the garbage collector
will treat it as a pointer to untyped data, not a pointer to
bytes.

R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/8286045
2013-04-06 20:18:15 -07:00
Dmitriy Vyukov
58030c541b runtime: change Note from union to struct
Unions can break precise GC.
Update #5193.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8362046
2013-04-06 20:09:02 -07:00
Dmitriy Vyukov
d617454379 runtime: change Lock from union to struct
Unions can break precise GC.
Update #5193.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8457043
2013-04-06 20:07:07 -07:00
Dmitriy Vyukov
54340bf56f runtime: reset dangling typed pointer
+untype it because it can point to different types
Update #5193.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8454043
2013-04-06 20:01:28 -07:00
Keith Randall
3d5daa2319 runtime: Implement faster equals for strings and bytes.
(amd64)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            6  -63.15%
BenchmarkEqual9            22            7  -65.37%
BenchmarkEqual32           36            9  -74.91%
BenchmarkEqual4K         2187          120  -94.51%

benchmark            old MB/s     new MB/s  speedup
BenchmarkEqual9        392.22      1134.38    2.89x
BenchmarkEqual32       866.72      3457.39    3.99x
BenchmarkEqual4K      1872.73     33998.87   18.15x

(386)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            5  -63.85%
BenchmarkEqual9            22            7  -67.84%
BenchmarkEqual32           34           12  -64.94%
BenchmarkEqual4K         2196          113  -94.85%

benchmark            old MB/s     new MB/s  speedup
BenchmarkEqual9        405.81      1260.18    3.11x
BenchmarkEqual32       919.55      2631.21    2.86x
BenchmarkEqual4K      1864.85     36072.54   19.34x

Update #3751
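
For reference, a benchmark of roughly this shape (names and sizes are
illustrative) is what produces numbers like the rows above:

        package eq_test

        import (
                "bytes"
                "testing"
        )

        func BenchmarkEqual4K(b *testing.B) {
                x := bytes.Repeat([]byte("a"), 4096)
                y := bytes.Repeat([]byte("a"), 4096)
                b.SetBytes(4096) // lets the framework report MB/s
                for i := 0; i < b.N; i++ {
                        bytes.Equal(x, y)
                }
        }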

R=bradfitz, r, khr, dave, remyoudompheng, fullung, minux.ma, ality
CC=golang-dev
https://golang.org/cl/8056043
2013-04-02 16:26:15 -07:00
Carl Shapiro
c676b8b27b cmd/ld, runtime: restrict stack root scan to locals and arguments
Updates #5134

R=golang-dev, bradfitz, cshapiro, daniel.morsing, ality, iant
CC=golang-dev
https://golang.org/cl/8022044
2013-03-28 14:36:23 -07:00
Dmitriy Vyukov
ea15104110 net: band-aid for windows network poller
Fixes performance of the current windows network poller
with the new scheduler.
Gives the runtime a hint about when GetQueuedCompletionStatus() will block.
Fixes #5068.

benchmark                    old ns/op    new ns/op    delta
BenchmarkTCP4Persistent        4004000        33906  -99.15%
BenchmarkTCP4Persistent-2        21790        17513  -19.63%
BenchmarkTCP4Persistent-4        44760        34270  -23.44%
BenchmarkTCP4Persistent-6        45280        43000   -5.04%

R=golang-dev, alex.brainman, coocood, rsc
CC=golang-dev
https://golang.org/cl/7612045
2013-03-25 20:57:36 +04:00
Dmitriy Vyukov
44840786ae runtime: explicitly remove fd's from epoll waitset before close()
Fixes #5061.

The current code relies on the fact that fd's are automatically removed from the epoll set when closed. However, that is not true: the underlying file description is removed from the epoll set only when *all* fd's referring to it are closed.

There are 2 bad consequences:
1. Kernel delivers notifications on already closed fd's.
2. The following sequence of events leads to error:
   - add fd1 to epoll
   - dup fd1 = fd2
   - close fd1 (not removed from epoll since we've dup'ed the fd)
   - dup fd2 = fd1 (get the same fd as fd1)
   - add fd1 to epoll = EEXIST

So, if an fd can potentially be dup'ed or fork'ed, it's necessary to explicitly remove the fd from the epoll set.
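
A minimal Linux-only sketch of that failure sequence, using the syscall package
directly (error handling mostly omitted):

        package main

        import (
                "fmt"
                "syscall"
        )

        func main() {
                epfd, _ := syscall.EpollCreate1(0)
                fds := make([]int, 2)
                syscall.Pipe(fds)
                fd1 := fds[0]

                ev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(fd1)}
                syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, fd1, &ev) // add fd1 to epoll

                fd2, _ := syscall.Dup(fd1) // dup fd1 = fd2
                syscall.Close(fd1)         // fd1 closed, but the description stays in the epoll set

                fd1b, _ := syscall.Dup(fd2) // usually gets the same number fd1 had
                ev.Fd = int32(fd1b)
                err := syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, fd1b, &ev)
                fmt.Println(err) // EEXIST when fd1b reuses fd1's number
        }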

R=golang-dev, bradfitz, dave
CC=golang-dev
https://golang.org/cl/7870043
2013-03-21 12:54:19 +04:00
Keith Randall
00224a356a runtime: faster hashmap implementation.
The hashtable is arranged as an array of
8-entry buckets with chained overflow.
Each bucket has 8 extra hash bits
per key to provide quick lookup within
a bucket.  The table is grown incrementally.
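
A hedged sketch of that bucket layout in Go (field names and the key/value
types are illustrative; in the real table they are variably sized):

        const bucketCnt = 8

        type bucket struct {
                tophash  [bucketCnt]uint8 // 8 extra hash bits per key for quick in-bucket lookup
                keys     [bucketCnt]uint64
                values   [bucketCnt]uint64
                overflow *bucket // chained overflow bucket
        }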

Update #3885
Go time drops from 0.51s to 0.34s.

R=r, rsc, m3b, dave, bradfitz, khr, ugorji, remyoudompheng
CC=golang-dev
https://golang.org/cl/7504044
2013-03-20 13:51:29 -07:00
Russ Cox
5146a93e72 runtime: accept GOTRACEBACK=crash to mean 'crash after panic'
This provides a way to generate core dumps when people need them.
The settings are:

        GOTRACEBACK=0  no traceback on panic, just exit
        GOTRACEBACK=1  default - traceback on panic, then exit
        GOTRACEBACK=2  traceback including runtime frames on panic, then exit
        GOTRACEBACK=crash traceback including runtime frames on panic, then crash

Fixes #3257.

R=golang-dev, devon.odell, r, daniel.morsing, ality
CC=golang-dev
https://golang.org/cl/7666044
2013-03-15 01:11:03 -04:00
Russ Cox
cb4428e555 os/signal: add Stop, be careful about SIGHUP
Fixes #4268.
Fixes #4491.
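
A minimal usage sketch of the new Stop alongside Notify:

        package main

        import (
                "os"
                "os/signal"
                "syscall"
        )

        func main() {
                c := make(chan os.Signal, 1)
                signal.Notify(c, syscall.SIGHUP) // start relaying SIGHUP to c
                // ... handle <-c ...
                signal.Stop(c) // stop relaying SIGHUP to c
        }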

R=golang-dev, nightlyone, fullung, r
CC=golang-dev
https://golang.org/cl/7546048
2013-03-15 00:00:02 -04:00
Dmitriy Vyukov
5b79aa82ff runtime: revert UseSpanType back to 1
R=golang-dev
CC=golang-dev
https://golang.org/cl/7812043
2013-03-14 13:48:19 +04:00
Dmitriy Vyukov
0bee99ab3b runtime: integrated network poller for darwin
vs tip:
benchmark                           old ns/op    new ns/op    delta
BenchmarkTCP4Persistent                 67786        33175  -51.06%
BenchmarkTCP4Persistent-2               49085        31227  -36.38%
BenchmarkTCP4PersistentTimeout          69265        32565  -52.98%
BenchmarkTCP4PersistentTimeout-2        49217        32588  -33.79%

vs old scheduler:
benchmark                           old ns/op    new ns/op    delta
BenchmarkTCP4Persistent                 63517        33175  -47.77%
BenchmarkTCP4Persistent-2               54760        31227  -42.97%
BenchmarkTCP4PersistentTimeout          63234        32565  -48.50%
BenchmarkTCP4PersistentTimeout-2        56956        32588  -42.78%

R=golang-dev, bradfitz, devon.odell, mikioh.mikioh, iant, rsc
CC=golang-dev, pabuhr
https://golang.org/cl/7569043
2013-03-14 10:38:37 +04:00
Keith Randall
a5d4024139 runtime: faster & safer hash function
Uses AES hardware instructions on 386/amd64 to implement
a fast hash function.  Incorporates a random key to
thwart hash collision DOS attacks.
Depends on CL#7548043 for new assembly instructions.

Update #3885
Helps some by making hashing faster.  Go time drops from
0.65s to 0.51s.

R=rsc, r, bradfitz, remyoudompheng, khr, dsymonds, minux.ma, elias.naur
CC=golang-dev
https://golang.org/cl/7543043
2013-03-12 10:47:44 -07:00
Dmitriy Vyukov
c211884731 runtime: add network polling support into scheduler
This is a part of the bigger change that moves the network poller into the runtime:
https://golang.org/cl/7326051/

R=golang-dev, bradfitz, mikioh.mikioh, rsc
CC=golang-dev
https://golang.org/cl/7448048
2013-03-12 21:14:26 +04:00
Dmitriy Vyukov
6ee739d7e9 runtime: fix deadlock detector false negative
The issue was that scvg is assigned *after* the scavenger goroutine is started,
so when the scavenger calls entersyscall() the g==scvg check can fail.
Fixes #5025.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/7629045
2013-03-12 17:21:44 +04:00
Dmitriy Vyukov
433824d808 runtime: fix misaligned 64-bit atomic
Fixes #4869.
Fixes #5007.
Update #5005.

R=golang-dev, 0xe2.0x9a.0x9b, bradfitz
CC=golang-dev
https://golang.org/cl/7534044
2013-03-10 20:46:11 +04:00
Rémy Oudompheng
1b8f51c917 runtime: fix integer overflow in amd64 memmove.
Fixes #4981.

R=bradfitz, fullung, rsc, minux.ma
CC=golang-dev
https://golang.org/cl/7474047
2013-03-09 00:41:03 +01:00
Akshat Kumar
a566deace1 syscall: Plan 9: use lightweight errstr in entersyscall mode
Change 231af8ac63aa (CL 7314062) made runtime.entersyscall()
set m->mcache = nil, which means that we can no longer use
syscall.errstr in syscall.Syscall and syscall.Syscall6, since it
requires a new buffer to be allocated for holding the error string.
Instead, we use pre-allocated per-M storage to hold error strings
from syscalls made while in entersyscall mode, and call
runtime.findnull to calculate the lengths.

Fixes #4994.

R=rsc, rminnich, ality, dvyukov, rminnich, r
CC=golang-dev
https://golang.org/cl/7567043
2013-03-08 00:54:44 +01:00
Russ Cox
e0deb2ef7f undo CL 7301062 / 9742f722b558
broke arm garbage collector

traceback_arm fails with a missing pc. It needs CL 7494043.
But that only makes the build break later, this time with
"invalid freelist". Roll back until it can be fixed correctly.

««« original CL description
runtime: restrict stack root scan to locals and arguments

R=rsc
CC=golang-dev
https://golang.org/cl/7301062
»»»

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/7493044
2013-03-05 15:36:40 -05:00
Dmitriy Vyukov
add3349867 runtime: add atomic xchg64
It will be handy for the network poller.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7429048
2013-03-05 09:46:52 +02:00
Dmitriy Vyukov
d0c11d20b8 runtime: declare addtimer/deltimer in runtime.h
In preparation for integrated network poller
(https://golang.org/cl/7326051),
this is required to handle deadlines.

R=golang-dev, remyoudompheng, rsc
CC=golang-dev
https://golang.org/cl/7446047
2013-03-05 09:38:15 +02:00
Carl Shapiro
db018bf77c runtime: restrict stack root scan to locals and arguments
R=rsc
CC=golang-dev
https://golang.org/cl/7301062
2013-03-04 19:48:50 -08:00
Russ Cox
3611553c3b runtime: add atomics to fix arm
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/7429046
2013-03-01 14:57:05 -05:00
Russ Cox
e6a3e22c75 runtime: start all threads with runtime.mstart
Putting the M initialization in multiple places will not scale.
Various code assumes mstart is the start already. Make it so.

R=golang-dev, devon.odell
CC=golang-dev
https://golang.org/cl/7420048
2013-03-01 11:44:43 -05:00
Russ Cox
d0d7416d3f runtime: more build fixing
Move the mstartfn into its own field.
Simpler, more likely to be correct.

R=golang-dev, devon.odell
CC=golang-dev
https://golang.org/cl/7414046
2013-03-01 09:24:17 -05:00
Russ Cox
c5f694a5c9 runtime: fix new scheduler on freebsd, windows
R=devon.odell
CC=golang-dev
https://golang.org/cl/7443046
2013-03-01 08:30:11 -05:00
Dmitriy Vyukov
779c45a507 runtime: improved scheduler
Distribute runnable queues, memory cache
and cache of dead G's per processor.
Faster non-blocking syscall enter/exit.
More conservative worker thread blocking/unblocking.

R=dave, bradfitz, remyoudompheng, rsc
CC=golang-dev
https://golang.org/cl/7314062
2013-03-01 13:49:16 +02:00
Dmitriy Vyukov
6cdfb00f4e runtime: more changes in preparation to the new scheduler
add per-P cache of dead G's
add global runnable queue (not used for now)
add list of idle P's (not used for now)

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7397061
2013-02-27 21:17:53 +02:00
Jan Ziak
a656f82071 runtime: precise garbage collection of channels
This changeset adds a mostly-precise garbage collection of channels.
The garbage collection support code in the linker isn't recognizing
channel types yet.

Fixes issue http://stackoverflow.com/questions/14712586/memory-consumption-skyrocket

R=dvyukov, rsc, bradfitz
CC=dave, golang-dev, minux.ma, remyoudompheng
https://golang.org/cl/7307086
2013-02-25 15:58:23 -05:00
Dmitriy Vyukov
353ce60f6e runtime: implement local work queues (in preparation for new scheduler)
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7402047
2013-02-23 08:48:02 +04:00
Russ Cox
6066fdcf38 cmd/6g, cmd/8g: switch to DX for indirect call block
runtime: add context argument to gogocall

Too many other things use AX, and at least one
(stack zeroing) cannot be moved onto a different
register. Use the less special DX instead.

Preparation for step 2 of http://golang.org/s/go11func.
Nothing interesting here, just split out so that we can
see it's correct before moving on.

R=ken2
CC=golang-dev
https://golang.org/cl/7395050
2013-02-22 10:47:54 -05:00
Russ Cox
1903ad7189 cmd/gc, reflect, runtime: switch to indirect func value representation
Step 1 of http://golang.org/s/go11func.

R=golang-dev, r, daniel.morsing, remyoudompheng
CC=golang-dev
https://golang.org/cl/7393045
2013-02-21 17:01:13 -05:00
Carl Shapiro
f466617a62 cmd/5g, cmd/5l, cmd/6l, cmd/8l, cmd/gc, cmd/ld, runtime: accurate args and locals information
Previously, the func structure contained an inaccurate value for
the args member and a 0 value for the locals member.

This change populates the func structure with args and locals
values computed by the compiler.  The number of args was
already available in the ATEXT instruction.  The number of
locals is now passed through in the new ALOCALS instruction.

This change also switches the unit of args and locals to be
bytes, just like the frame member, instead of 32-bit words.

R=golang-dev, bradfitz, cshapiro, dave, rsc
CC=golang-dev
https://golang.org/cl/7399045
2013-02-21 12:52:26 -08:00
Dmitriy Vyukov
a0955a2aa2 runtime: split minit() to mpreinit() and minit()
mpreinit() is called on the parent thread with an mcache available (it can allocate memory);
minit() is called on the child thread and cannot allocate memory.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7389043
2013-02-21 16:24:38 +04:00
Russ Cox
6c976393ae runtime: allow cgo callbacks on non-Go threads
Fixes #4435.

R=golang-dev, iant, alex.brainman, minux.ma, dvyukov
CC=golang-dev
https://golang.org/cl/7304104
2013-02-20 17:48:23 -05:00
Dmitriy Vyukov
e25f19a638 runtime: introduce entersyscallblock()
In preparation for the new scheduler.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7386044
2013-02-20 20:21:45 +04:00
Carl Shapiro
7f9c02a10d runtime: add conversion specifier to printf for char values
R=r, golang-dev
CC=golang-dev
https://golang.org/cl/7327053
2013-02-19 18:05:44 -08:00
Russ Cox
c7f7bbbf03 runtime: fix build on linux
In addition to the compile failure fixed in signal*.c,
preserving the signal mask led to very strange crashes.
Testing shows that looking for SIG_IGN is all that
matters to get along with nohup, so reintroduce
sigset_zero instead of trying to preserve the signal mask.

TBR=iant
CC=golang-dev
https://golang.org/cl/7323067
2013-02-15 12:18:33 -05:00
Russ Cox
f3407f445d runtime: fix running under nohup
There are two ways nohup(1) might be implemented:
it might mask away the signal, or it might set the handler
to SIG_IGN, both of which are inherited across fork+exec.
So two fixes:

* Make sure to preserve the inherited signal mask at
minit instead of clearing it.

* If the SIGHUP handler is SIG_IGN, leave it that way.

Fixes #4491.

R=golang-dev, mikioh.mikioh, iant
CC=golang-dev
https://golang.org/cl/7308102
2013-02-15 11:18:55 -05:00
Dmitriy Vyukov
0a40cd2661 runtime/race: switch to explicit race context instead of goroutine id's
Removes the limit on the maximum number of goroutines that have ever existed.
code.google.com/p/goexecutor tests now pass successfully.
Also slightly improves performance.
Before: $ time ./flate.test -test.short
real	0m9.314s
After:  $ time ./flate.test -test.short
real	0m8.958s
Fixes #4286.
The runtime is built from llvm rev 174312.

R=rsc
CC=golang-dev
https://golang.org/cl/7218044
2013-02-06 11:40:54 +04:00
Russ Cox
b0a29f393b runtime: cgo-related fixes
* Separate internal and external LockOSThread, for cgo safety.
* Show goroutine that made faulting cgo call.
* Never start a panic due to a signal caused by a cgo call.

Fixes #3774.
Fixes #3775.
Fixes #3797.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/7228081
2013-02-01 08:34:41 -08:00
Jan Ziak
4da6b36fbf runtime: local allocation in mprof.goc
Binary data in mprof.goc may prevent the garbage collector from freeing
memory blocks. This patch replaces all calls to runtime·mallocgc() with
calls to an allocator private to mprof.goc, thus making the private
memory invisible to the garbage collector. The addrhash variable is
moved outside of the .bss section.

R=golang-dev, dvyukov, rsc, minux.ma
CC=dave, golang-dev, remyoudompheng
https://golang.org/cl/7135063
2013-01-30 09:01:31 -08:00
Akshat Kumar
c74f3c4576 runtime: add support for panic/recover in Plan 9 note handler
This change also resolves some issues with note handling: we now make
sure that there is enough room at the bottom of every goroutine to
execute the note handler, and the `exitstatus' is no longer a global
entity, which resolves some race conditions.

R=rminnich, npe, rsc, ality
CC=golang-dev
https://golang.org/cl/6569068
2013-01-30 02:53:56 -08:00
Dmitriy Vyukov
81221f512d runtime: dump the full stack of a throwing goroutine
Useful for debugging runtime bugs.
+ Do not print "stack segment boundary" unless GOTRACEBACK>1.
+ Do not traceback system goroutines unless GOTRACEBACK>1.

R=rsc, minux.ma
CC=golang-dev
https://golang.org/cl/7098050
2013-01-29 14:57:11 +04:00
Shenghou Ma
4019d0e424 runtime: avoid defining the same variable in more than one translation unit
For gccgo runtime and Darwin where -fno-common is the default.

R=iant, dave
CC=golang-dev
https://golang.org/cl/7094061
2013-01-26 09:57:06 +08:00
Dmitriy Vyukov
f82db7d9e4 runtime: less aggressive per-thread stack segment caching
Introduce global stack segment cache and limit per-thread cache size.
This greatly reduces StackSys memory on workloads that create lots of threads.

benchmark                      old ns/op    new ns/op    delta
BenchmarkStackGrowth                 665          656   -1.35%
BenchmarkStackGrowth-2               333          328   -1.50%
BenchmarkStackGrowth-4               224          172  -23.21%
BenchmarkStackGrowth-8               124           91  -26.13%
BenchmarkStackGrowth-16               82           47  -41.94%
BenchmarkStackGrowth-32               73           40  -44.79%

BenchmarkStackGrowthDeep           97231        94391   -2.92%
BenchmarkStackGrowthDeep-2         47230        58562  +23.99%
BenchmarkStackGrowthDeep-4         24993        49356  +97.48%
BenchmarkStackGrowthDeep-8         15105        30072  +99.09%
BenchmarkStackGrowthDeep-16        10005        15623  +56.15%
BenchmarkStackGrowthDeep-32        12517        13069   +4.41%

TestStackMem#1,MB                  310          12       -96.13%
TestStackMem#2,MB                  296          14       -95.27%
TestStackMem#3,MB                  479          14       -97.08%

TestStackMem#1,sec                 3.22         2.26     -29.81%
TestStackMem#2,sec                 2.43         2.15     -11.52%
TestStackMem#3,sec                 2.50         2.38      -4.80%

R=sougou, no.smile.face, rsc
CC=golang-dev, msolomon
https://golang.org/cl/7029044
2013-01-10 09:57:06 +04:00
Ian Lance Taylor
63bee953a2 runtime: always incorporate hash seed at start of hash computation
Otherwise we can get predictable collisions.
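
The general idea in a hedged Go sketch (FNV-style mixing is for the example
only; it is not the runtime's hash function):

        // hash folds the per-process random seed into the state before any data
        // is mixed in, so collisions cannot be precomputed by an attacker.
        func hash(seed uintptr, data []byte) uintptr {
                h := seed ^ 2166136261 // seed incorporated at the start
                for _, b := range data {
                        h = (h ^ uintptr(b)) * 16777619
                }
                return h
        }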

R=golang-dev, dave, patrick, rsc
CC=golang-dev
https://golang.org/cl/7051043
2013-01-04 07:53:42 -08:00
Russ Cox
0de71619ce runtime: aggregate defer allocations
benchmark             old ns/op    new ns/op    delta
BenchmarkDefer              165          113  -31.52%
BenchmarkDefer10            155          103  -33.55%
BenchmarkDeferMany          216          158  -26.85%

benchmark            old allocs   new allocs    delta
BenchmarkDefer                1            0  -100.00%
BenchmarkDefer10              1            0  -100.00%
BenchmarkDeferMany            1            0  -100.00%

benchmark             old bytes    new bytes    delta
BenchmarkDefer               64            0  -100.00%
BenchmarkDefer10             64            0  -100.00%
BenchmarkDeferMany           64           66    3.12%

Fixes #2364.

R=ken2
CC=golang-dev
https://golang.org/cl/7001051
2012-12-22 14:54:39 -05:00
Jingcheng Zhang
70e967b7bc runtime: use "mp" and "gp" instead of "m" and "g" for local variable names to avoid confusion with the global "m" and "g".
R=golang-dev, minux.ma, rsc
CC=bradfitz, golang-dev
https://golang.org/cl/6939064
2012-12-19 00:30:29 +08:00
Jan Ziak
51b8edcb37 runtime: use reflect·call() to enter the function gc()
Garbage collection code (to be merged later) is calling functions
which have many local variables. This increases the probability that
the stack capacity won't be big enough to hold the local variables.
So, start gc() on a bigger stack to eliminate a potentially large number
of calls to runtime·morestack().

R=rsc, remyoudompheng, dsymonds, minux.ma, iant, iant
CC=golang-dev
https://golang.org/cl/6846044
2012-11-27 13:04:59 -05:00
Ian Lance Taylor
e9a3087e29 runtime, runtime/cgo: track memory allocated by non-Go code
Otherwise a poorly timed GC can collect the memory before it
is returned to the Go program.

R=golang-dev, dave, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/6819119
2012-11-10 11:19:06 -08:00
Jan Ziak
5c1422afab runtime: move Itab to runtime.h
The 'type' field of Itab will be used by the garbage collector.

R=rsc
CC=golang-dev
https://golang.org/cl/6815059
2012-11-01 13:13:20 -04:00
Dmitriy Vyukov
320df44f04 runtime: switch to 64-bit goroutine ids
Fixes #4275.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6759053
2012-10-26 10:13:06 +04:00
Jan Ziak
4a191c2c1b runtime: store types of allocated objects
R=rsc
CC=golang-dev
https://golang.org/cl/6569057
2012-10-21 17:41:32 -04:00
Nigel Tao
90ad6a2d11 runtime: update comment for the "extern register" variables g and m.
R=rsc, minux.ma, ality
CC=dave, golang-dev
https://golang.org/cl/6620050
2012-10-19 11:02:32 +11:00
Dmitriy Vyukov
2f6cbc74f1 race: runtime changes
This is a part of a bigger change that adds the data race detection feature:
https://golang.org/cl/6456044

R=rsc
CC=gobot, golang-dev
https://golang.org/cl/6535050
2012-10-07 22:05:32 +04:00
Dmitriy Vyukov
4cc7bf326a pprof: add goroutine blocking profiling
The profiler collects goroutine blocking information similar to Google Perf Tools.
You may see an example of the profile (converted to svg) attached to
http://code.google.com/p/go/issues/detail?id=3946
The public API changes are:
+pkg runtime, func BlockProfile([]BlockProfileRecord) (int, bool)
+pkg runtime, func SetBlockProfileRate(int)
+pkg runtime, method (*BlockProfileRecord) Stack() []uintptr
+pkg runtime, type BlockProfileRecord struct
+pkg runtime, type BlockProfileRecord struct, Count int64
+pkg runtime, type BlockProfileRecord struct, Cycles int64
+pkg runtime, type BlockProfileRecord struct, embedded StackRecord
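
A minimal usage sketch of the new API: enable block profiling, block once, and
dump the "block" profile via runtime/pprof:

        package main

        import (
                "os"
                "runtime"
                "runtime/pprof"
                "time"
        )

        func main() {
                runtime.SetBlockProfileRate(1) // record every blocking event

                c := make(chan int)
                go func() { time.Sleep(10 * time.Millisecond); c <- 1 }()
                <-c // a blocking receive that shows up in the profile

                pprof.Lookup("block").WriteTo(os.Stdout, 1)
        }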

R=rsc, dave, minux.ma, r
CC=gobot, golang-dev, r, remyoudompheng
https://golang.org/cl/6443115
2012-10-06 12:56:04 +04:00
Russ Cox
10ea6519e4 build: make int 64 bits on amd64
The assembly offsets were converted mechanically using
code.google.com/p/rsc/cmd/asmlint. The instruction
changes were done by hand.

Fixes #2188.

R=iant, r, bradfitz, remyoudompheng
CC=golang-dev
https://golang.org/cl/6550058
2012-09-24 20:57:01 -04:00
Jan Ziak
f8c58373e5 runtime: add types to MSpan
R=rsc
CC=golang-dev
https://golang.org/cl/6554060
2012-09-24 20:08:05 -04:00
Russ Cox
0b08c9483f runtime: prepare for 64-bit ints
This CL makes the runtime understand that the type of
the len or cap of a map, slice, or string is 'int', not 'int32',
and it is also careful to distinguish between function arguments
and results of type 'int' vs type 'int32'.

In the runtime, the new typedefs 'intgo' and 'uintgo' refer
to Go int and uint. The C types int and uint continue to be
unavailable (cause intentional compile errors).

This CL does not change the meaning of int, but it should make
the eventual change of the meaning of int on amd64 a bit
smoother.

Update #2188.

R=iant, r, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6551067
2012-09-24 14:58:34 -04:00
Shenghou Ma
b151af1f36 runtime: fix mmap comments
We only pass the lower 32 bits of the file offset to the asm routine.

R=r, dave, rsc
CC=golang-dev
https://golang.org/cl/6499118
2012-09-21 13:50:02 +08:00
Dmitriy Vyukov
f20fd87384 runtime: refactor goroutine blocking
The change is a preparation for the new scheduler.
It introduces the runtime.park() function,
which atomically unlocks the mutex and parks the goroutine.
It will allow removing the racy readyonstop flag,
which is difficult to implement w/o the global scheduler mutex.
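
The "unlock and park as one atomic step" semantics are analogous to what
sync.Cond.Wait gives user code; this is only an analogy, not the runtime's
implementation:

        package main

        import "sync"

        func main() {
                var mu sync.Mutex
                cond := sync.NewCond(&mu)
                ready := false

                go func() {
                        mu.Lock()
                        ready = true
                        mu.Unlock()
                        cond.Signal()
                }()

                mu.Lock()
                for !ready {
                        cond.Wait() // atomically unlocks mu and parks; re-locks before returning
                }
                mu.Unlock()
        }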

R=rsc, remyoudompheng, dave
CC=golang-dev
https://golang.org/cl/6501077
2012-09-18 21:15:46 +04:00
Jan Ziak
54193689cc cmd/ld: fix compilation when GOARCH != GOHOSTARCH
R=rsc, dave, minux.ma
CC=golang-dev
https://golang.org/cl/6493123
2012-09-17 17:18:21 -04:00
Ivan Krasin
5287175ad9 runtime: add vdso support for linux/amd64. Fixes issue 1933.
R=iant, imkrasin, krasin, iant, minux.ma, rsc, nigeltao, r, fullung
CC=golang-dev
https://golang.org/cl/6454046
2012-08-31 18:07:04 -04:00
Shenghou Ma
0157c72d13 runtime: inline several float64 routines to speed up complex128 division
Depends on CL 6197045.

Result obtained on Core i7 620M, Darwin/amd64:
benchmark                       old ns/op    new ns/op    delta
BenchmarkComplex128DivNormal           57           28  -50.78%
BenchmarkComplex128DivNisNaN           49           15  -68.90%
BenchmarkComplex128DivDisNaN           49           15  -67.88%
BenchmarkComplex128DivNisInf           40           12  -68.50%
BenchmarkComplex128DivDisInf           33           13  -61.06%

Result obtained on Core i7 620M, Darwin/386:
benchmark                       old ns/op    new ns/op    delta
BenchmarkComplex128DivNormal           89           50  -44.05%
BenchmarkComplex128DivNisNaN          307          802  +161.24%
BenchmarkComplex128DivDisNaN          309          788  +155.02%
BenchmarkComplex128DivNisInf          278          237  -14.75%
BenchmarkComplex128DivDisInf           46           22  -52.46%

Result obtained on 700MHz OMAP4460, Linux/ARM:
benchmark                       old ns/op    new ns/op    delta
BenchmarkComplex128DivNormal         1557          465  -70.13%
BenchmarkComplex128DivNisNaN         1443          220  -84.75%
BenchmarkComplex128DivDisNaN         1481          218  -85.28%
BenchmarkComplex128DivNisInf          952          216  -77.31%
BenchmarkComplex128DivDisInf          861          231  -73.17%

The 386 version has a performance regression, but as we have
decided to use SSE2 instead of x87 FPU for 386 too (issue 3912),
I won't address this issue.

R=dsymonds, mchaten, iant, dave, mtj, rsc, r
CC=golang-dev
https://golang.org/cl/6024045
2012-08-07 23:45:50 +08:00
Dmitriy Vyukov
a54f920bfe runtime: move panic/defer/recover-related stuff to a separate file
Move panic/defer/recover-related stuff from proc.c/runtime.c to a new file panic.c.
No semantic changes.
proc.c is 1800+ LOC and is a bit difficult to work with.

R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/6343071
2012-07-04 14:52:51 +04:00
Dmitriy Vyukov
ed516df4e4 runtime: add freemcache() function
It will be required for a scheduler that maintains
GOMAXPROCS MCache's.

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/6350062
2012-07-01 13:10:01 +04:00
Jan Ziak
334bf95f9e runtime: update field types in preparation for GC changes
R=rsc, remyoudompheng, minux.ma, ality
CC=golang-dev
https://golang.org/cl/6242061
2012-05-30 13:07:52 -04:00
Alex Brainman
afe0e97aa6 runtime: handle windows exceptions, even in cgo programs
Fixes #3543.

R=golang-dev, kardianos, rsc
CC=golang-dev, hectorchu, vcc.163
https://golang.org/cl/6245063
2012-05-30 15:10:54 +10:00
Russ Cox
6dbaa206fb runtime: replace runtime·rnd function with ROUND macro
It's sad to introduce a new macro, but rnd shows up consistently
in profiles, and the function call overwhelms the two arithmetic
instructions it performs.
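
For illustration, the classic round-up arithmetic such a macro typically
expands to, assuming a power-of-two alignment; a sketch of the formula, not
necessarily the exact macro body:

        // round returns n rounded up to a multiple of a (a must be a power of two).
        func round(n, a uintptr) uintptr {
                return (n + a - 1) &^ (a - 1) // one add, one and-not
        }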

R=r
CC=golang-dev
https://golang.org/cl/6260051
2012-05-29 14:02:29 -04:00
Dmitriy Vyukov
01826280eb runtime: refactor helpgc functionality in preparation for parallel GC
Parallel GC needs to know in advance how many helper threads there will be.
Hopefully it's the last patch before I can tackle the parallel sweep phase.
The benchmarks are unaffected.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6200064
2012-05-15 19:10:16 +04:00
Dmitriy Vyukov
95643647ae runtime: add parallel for algorithm
This is a factored-out part of:
https://golang.org/cl/5279048/
(parallel GC)

R=bsiegert, mpimenov, rsc, minux.ma, r
CC=golang-dev
https://golang.org/cl/5986054
2012-05-11 10:50:03 +04:00
Dmitriy Vyukov
a5dc7793c0 runtime: add lock-free stack
This is a factored-out part of:
https://golang.org/cl/5279048/
(parallel GC)

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/5993043
2012-04-12 11:49:25 +04:00
Dmitriy Vyukov
d839a809b2 runtime: make GC stats per-M
This is a factored-out part of:
https://golang.org/cl/5279048/
(Parallel GC)

benchmark                             old ns/op    new ns/op    delta
garbage.BenchmarkParser              3999106750   3975026500   -0.60%
garbage.BenchmarkParser-2            3720553750   3719196500   -0.04%
garbage.BenchmarkParser-4            3502857000   3474980500   -0.80%
garbage.BenchmarkParser-8            3375448000   3341310500   -1.01%
garbage.BenchmarkParserLastPause      329401000    324097000   -1.61%
garbage.BenchmarkParserLastPause-2    208953000    214222000   +2.52%
garbage.BenchmarkParserLastPause-4    110933000    111656000   +0.65%
garbage.BenchmarkParserLastPause-8     71969000     78230000   +8.70%
garbage.BenchmarkParserPause          230808842    197237400  -14.55%
garbage.BenchmarkParserPause-2        123674365    125197595   +1.23%
garbage.BenchmarkParserPause-4         80518525     85710333   +6.45%
garbage.BenchmarkParserPause-8         58310243     56940512   -2.35%
garbage.BenchmarkTree2                 31471700     31289400   -0.58%
garbage.BenchmarkTree2-2               21536800     21086300   -2.09%
garbage.BenchmarkTree2-4               11074700     10880000   -1.76%
garbage.BenchmarkTree2-8                7568600      7351400   -2.87%
garbage.BenchmarkTree2LastPause       314664000    312840000   -0.58%
garbage.BenchmarkTree2LastPause-2     215319000    210815000   -2.09%
garbage.BenchmarkTree2LastPause-4     110698000    108751000   -1.76%
garbage.BenchmarkTree2LastPause-8      75635000     73463000   -2.87%
garbage.BenchmarkTree2Pause           174280857    173147571   -0.65%
garbage.BenchmarkTree2Pause-2         131332714    129665761   -1.27%
garbage.BenchmarkTree2Pause-4          93803095     93422904   -0.41%
garbage.BenchmarkTree2Pause-8          86242333     85146761   -1.27%

R=rsc
CC=golang-dev
https://golang.org/cl/5987045
2012-04-05 20:48:28 +04:00