mirror of https://github.com/golang/go synced 2024-10-05 00:21:21 -06:00
Commit Graph

291 Commits

Author SHA1 Message Date
Aram Hăvărneanu
a46b434931 runtime: add support for GOOS=solaris
R=alex.brainman, dave, jsing, gobot, rsc
CC=golang-codereviews
https://golang.org/cl/35990043
2014-01-17 17:58:10 +13:00
Dmitriy Vyukov
c0b9e6218c runtime: output how long goroutines are blocked
Example of output:

goroutine 4 [sleep for 3 min]:
time.Sleep(0x34630b8a000)
        src/pkg/runtime/time.goc:31 +0x31
main.func·002()
        block.go:16 +0x2c
created by main.main
        block.go:17 +0x33

Full program and output are here:
http://play.golang.org/p/NEZdADI3Td
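
A minimal sketch along these lines (not the exact linked program; run with
GOTRACEBACK=all so the crash output includes every goroutine):

        package main

        import "time"

        func main() {
            // A goroutine that stays blocked in time.Sleep. With this change,
            // a traceback labels it along the lines of "[sleep for 3 min]".
            go func() {
                time.Sleep(10 * time.Minute)
            }()

            // Let the goroutine accumulate blocked time, then crash so the
            // runtime prints tracebacks.
            time.Sleep(3 * time.Minute)
            panic("dump goroutine block durations")
        }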

Fixes #6809.

R=golang-codereviews, khr, kamil.kisiel, bradfitz, rsc
CC=golang-codereviews
https://golang.org/cl/50420043
2014-01-16 12:54:46 +04:00
Dmitriy Vyukov
4722b1cbd3 runtime: use lock-free ring for work queues
Use a lock-free, fixed-size ring for work queues
instead of an unbounded mutex-protected array.
The ring has a single producer and multiple consumers.
If the ring overflows, work is put onto the global queue.
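
For illustration only (the runtime's implementation is C inside the scheduler),
a simplified Go sketch of the idea, using the generic atomics from sync/atomic
(Go 1.19+): only the owner advances tail, so put needs no CAS; consumers race
on head; overflow spills to a mutex-protected global queue.

        package main

        import (
            "sync"
            "sync/atomic"
        )

        const ringSize = 256

        type workItem struct{ id int }

        // workRing has a single producer (its owner) and any number of
        // consumers that race on head with compare-and-swap.
        type workRing struct {
            head atomic.Uint32                      // next slot to consume
            tail atomic.Uint32                      // next slot to fill; advanced only by the owner
            buf  [ringSize]atomic.Pointer[workItem] // ring slots
        }

        // globalQueue is the unbounded, mutex-protected fallback.
        var globalQueue struct {
            sync.Mutex
            items []*workItem
        }

        // put is called only by the owner; on overflow the item goes global.
        func (r *workRing) put(w *workItem) {
            h := r.head.Load()
            t := r.tail.Load()
            if t-h < ringSize {
                r.buf[t%ringSize].Store(w)
                r.tail.Store(t + 1) // publish the slot to consumers
                return
            }
            globalQueue.Lock()
            globalQueue.items = append(globalQueue.items, w)
            globalQueue.Unlock()
        }

        // get may be called by any consumer.
        func (r *workRing) get() *workItem {
            for {
                h := r.head.Load()
                t := r.tail.Load()
                if h == t {
                    return nil // empty
                }
                w := r.buf[h%ringSize].Load()
                if r.head.CompareAndSwap(h, h+1) {
                    return w
                }
                // Lost a race with another consumer; retry.
            }
        }

        func main() {
            var r workRing
            r.put(&workItem{id: 1})
            if w := r.get(); w != nil {
                println("got item", w.id)
            }
        }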

benchmark              old ns/op    new ns/op    delta
BenchmarkMatmult               7            5  -18.12%
BenchmarkMatmult-4             2            2  -18.98%
BenchmarkMatmult-16            1            0  -12.84%

BenchmarkCreateGoroutines                     105           88  -16.10%
BenchmarkCreateGoroutines-4                   376          219  -41.76%
BenchmarkCreateGoroutines-16                  241          174  -27.80%
BenchmarkCreateGoroutinesParallel             103           87  -14.66%
BenchmarkCreateGoroutinesParallel-4           169          143  -15.38%
BenchmarkCreateGoroutinesParallel-16          158          151   -4.43%

R=golang-codereviews, rsc
CC=ddetlefs, devon.odell, golang-codereviews
https://golang.org/cl/46170044
2014-01-16 12:17:00 +04:00
Keith Randall
020b39c3f3 runtime: use special records hung off the MSpan to
record finalizers and heap profile info.  Enables
removing the special bit from the heap bitmap.  Also
provides a generic mechanism for annotating occasional
heap objects.

finalizers
        overhead      per obj
old	680 B	      80 B avg
new	16 B/span     48 B

profile
        overhead      per obj
old	32KB	      24 B + hash tables
new	16 B/span     24 B

R=cshapiro, khr, dvyukov, gobot
CC=golang-codereviews
https://golang.org/cl/13314053
2014-01-07 13:45:50 -08:00
Alex Brainman
7f8a5057dd syscall: add NewCallbackCDecl again
Fixes #6338
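
A hedged sketch of typical usage of the restored API (Windows-only; the
callback body here is made up):

        //go:build windows

        package main

        import (
            "fmt"
            "syscall"
        )

        func main() {
            // NewCallbackCDecl wraps a Go function as a cdecl function pointer
            // (syscall.NewCallback is the stdcall counterpart). Arguments and
            // the single result must each fit in a uintptr.
            cb := syscall.NewCallbackCDecl(func(a, b uintptr) uintptr {
                return a + b
            })
            fmt.Printf("callback pointer: %#x\n", cb)
            // cb would then be handed to a C library expecting a cdecl callback.
        }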

R=golang-dev, kin.wilson.za, rsc
CC=golang-dev
https://golang.org/cl/36180044
2013-12-19 14:38:50 +11:00
Carl Shapiro
76c54c1193 runtime: add GODEBUG option for an electric fence like heap mode
When enabled, this new debugging mode allocates objects on
their own page and never recycles memory addresses. This is an
essential tool for root-causing a broad class of heap corruption bugs.

R=golang-dev, dave, daniel.morsing, dvyukov, rsc, iant, cshapiro
CC=golang-dev
https://golang.org/cl/22060046
2013-12-06 14:40:45 -08:00
Carl Shapiro
48279bd567 runtime: add allocation and free tracing for gc debugging
Output for an allocation and free (sweep) follows

MProf_Malloc(p=0xc2100210a0, size=0x50, type=0x0 <single object>)
        #0 0x46ee15 runtime.mallocgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:141
        #1 0x47004f runtime.settype_flush /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:612
        #2 0x45f92c gc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2071
        #3 0x45f89e mgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2050
        #4 0x45258b runtime.mcall /usr/local/google/home/cshapiro/go/src/pkg/runtime/asm_amd64.s:179

MProf_Free(p=0xc2100210a0, size=0x50)
        #0 0x46ee15 runtime.mallocgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:141
        #1 0x47004f runtime.settype_flush /usr/local/google/home/cshapiro/go/src/pkg/runtime/malloc.goc:612
        #2 0x45f92c gc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2071
        #3 0x45f89e mgc /usr/local/google/home/cshapiro/go/src/pkg/runtime/mgc0.c:2050
        #4 0x45258b runtime.mcall /usr/local/google/home/cshapiro/go/src/pkg/runtime/asm_amd64.s:179

R=golang-dev, dvyukov, rsc, cshapiro
CC=golang-dev
https://golang.org/cl/21990045
2013-12-03 14:42:38 -08:00
Keith Randall
3278dc158e runtime: pass key/value to map accessors by reference, not by value.
This change is part of the plan to get rid of all vararg C calls,
which are a pain for exact stack scanning.

We allocate a chunk of zero memory to return a pointer to when a
map access doesn't find the key.  This is simpler than returning nil
and fixing things up in the caller.  Linker magic allocates a single
zero memory area that is shared by all (non-reflect-generated) map
types.

Passing things by reference gets rid of some copies, so it speeds
up code with big keys/values.
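
The user-visible contract being preserved is just the usual zero value for a
missing key; a small illustrative snippet (not the runtime internals):

        package main

        import "fmt"

        func main() {
            // A map with a large value type. A lookup of a missing key yields
            // the zero value; internally the runtime can return a pointer into
            // a shared all-zero area instead of allocating one per lookup.
            m := map[string][128]byte{}
            v := m["missing"]     // all zeros
            _, ok := m["missing"] // ok reports whether the key was present
            fmt.Println(v[0], ok) // 0 false
        }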

benchmark             old ns/op    new ns/op    delta
BenchmarkBigKeyMap           34           31   -8.48%
BenchmarkBigValMap           37           30  -18.62%
BenchmarkSmallKeyMap         26           23  -11.28%

R=golang-dev, dvyukov, khr, rsc
CC=golang-dev
https://golang.org/cl/14794043
2013-12-02 13:05:04 -08:00
Russ Cox
b88148b9a0 undo CL 19810043 / 352f3b7c9664
The CL causes misc/cgo/test to fail randomly.
I suspect that the problem is the use of a division instruction
in usleep, which can be called while trying to acquire an m
and therefore cannot store the denominator in m.
The solution to that would be to rewrite the code to use a
magic multiply instead of a divide, but now we're getting
pretty far off the original code.

Go back to the original in preparation for a different,
less efficient but simpler fix.

««« original CL description
cmd/5l, runtime: make ARM integer division profiler-friendly

The implementation of division constructed non-standard
stack frames that could not be handled by the traceback
routines.

CL 13239052 left the frames non-standard but fixed them
for the specific case of a divide-by-zero panic.
A profiling signal can arrive at any time, so that fix
is not sufficient.

Change the division to store the extra argument in the M struct
instead of in a new stack slot. That keeps the frames bog standard
at all times.

Also fix a related bug in the traceback code: when starting
a traceback, the LR register should be ignored if the current
function has already allocated its stack frame and saved the
original LR on the stack. The stack copy should be used, as the
LR register may have been modified.

Combined, these make the torture test from issue 6681 pass.

Fixes #6681.

R=golang-dev, r, josharian
CC=golang-dev
https://golang.org/cl/19810043
»»»

TBR=r
CC=golang-dev
https://golang.org/cl/20350043
2013-10-31 17:18:57 +00:00
Russ Cox
b0db472ea2 cmd/5l, runtime: make ARM integer division profiler-friendly
The implementation of division constructed non-standard
stack frames that could not be handled by the traceback
routines.

CL 13239052 left the frames non-standard but fixed them
for the specific case of a divide-by-zero panic.
A profiling signal can arrive at any time, so that fix
is not sufficient.

Change the division to store the extra argument in the M struct
instead of in a new stack slot. That keeps the frames bog standard
at all times.

Also fix a related bug in the traceback code: when starting
a traceback, the LR register should be ignored if the current
function has already allocated its stack frame and saved the
original LR on the stack. The stack copy should be used, as the
LR register may have been modified.

Combined, these make the torture test from issue 6681 pass.

Fixes #6681.

R=golang-dev, r, josharian
CC=golang-dev
https://golang.org/cl/19810043
2013-10-30 18:50:34 +00:00
Dmitriy Vyukov
f6329700ae runtime: remove nomemprof
Nomemprof seems to be unneeded now; there is no recursion.
If recursion is re-introduced, it will break loudly by deadlocking.
Fixes #6566.

R=golang-dev, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/14695044
2013-10-18 10:45:19 +04:00
Russ Cox
551ada4742 runtime: avoid allocation of internal panic values
If a fault happens in malloc, inevitably the next thing that happens
is a deadlock trying to allocate the panic value that says the fault
happened. Stop doing that, in two ways.

First, reject panic in malloc just as we reject panic in garbage collection.

Second, runtime.panicstring was using an error implementation
backed by a Go string, so the interface held an allocated *string.
Since the actual errors are C strings, define a new error
implementation backed by a C char*, which needs no indirection
and therefore no allocation.

This second fix will avoid allocation for errors like nil panic derefs
or division by zero, so it is worth doing even though the first fix
should take care of faults during malloc.

Update #6419

R=golang-dev, dvyukov, dave
CC=golang-dev
https://golang.org/cl/13774043
2013-09-20 15:15:25 -04:00
Carl Shapiro
16d6b6c771 runtime: export PCDATA value reader
This interface is required to use the PCDATA interface
implemented in Go 1.2.  While initially entirely private, the
FUNCDATA side of the interface has been made public.  This
change completes the FUNCDATA/PCDATA interface.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/13735043
2013-09-16 19:03:19 -07:00
Russ Cox
30ecb4cd05 build: disable precise collection of stack frames
The code for call-site-specific pointer bitmaps was not ready in time,
but the zeroing required without it is too expensive to use by default.
Precise collection of stack frames will have to wait until Go 1.3.

The precise collection can be re-enabled by

        GOEXPERIMENT=precisestack ./all.bash

but that will not be the default for a Go 1.2 build.

Fixes #6087.

R=golang-dev, jeremyjackins, dan.kortschak, r
CC=golang-dev
https://golang.org/cl/13677045
2013-09-16 20:26:10 -04:00
Russ Cox
7276c02b41 runtime, cmd/gc, cmd/ld: ignore method wrappers in recover
Bug #1:

Issue 5406 identified an interesting case:
        defer iface.M()
may end up calling a wrapper that copies an indirect receiver
from the iface value and then calls the real M method. That's
two calls down, not just one, and so recover() == nil always
in the real M method, even during a panic.

[For the purposes of this entire discussion, a wrapper's
implementation is a function containing an ordinary call, not
the optimized tail call form that is sometimes possible. The
tail call does not create a second frame, so it is already
handled correctly.]
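
A minimal sketch of the pattern from issue 5406 (type and method names are
illustrative):

        package main

        import "fmt"

        type T struct{ buf [64]byte } // stored indirectly in the interface

        // M has a value receiver, so a call through the interface goes via a
        // compiler-generated wrapper that copies the receiver out of the iface.
        func (t T) M() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            } else {
                fmt.Println("recover returned nil")
            }
        }

        type I interface{ M() }

        func main() {
            var i I = T{}
            defer i.M() // before this fix, recover inside M returned nil here
            panic("boom")
        }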

Fix this bug by introducing g->panicwrap, which counts the
number of bytes on the current stack segment that are due to
wrapper calls that should not count against the recover
check. All wrapper functions must now adjust g->panicwrap up
on entry and back down on exit. This adds slightly to their
expense; on the x86 it is a single instruction at entry and
exit; on the ARM it is three. However, the alternative is to
make a call to recover depend on being able to walk the stack,
which I very much want to avoid. We have enough problems
walking the stack for garbage collection and profiling.
Also, if performance is critical in a specific case, it is already
faster to use a pointer receiver and avoid this kind of wrapper
entirely.

Bug #2:

The old code, which did not consider the possibility of two
calls, already contained a check to see if the call had split
its stack and so the panic-created segment was one behind the
current segment. In the wrapper case, both of the two calls
might split their stacks, so the panic-created segment can be
two behind the current segment.

Fix this by propagating the Stktop.panic flag forward during
stack splits instead of looking backward during recover.

Fixes #5406.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13367052
2013-09-12 14:00:16 -04:00
Russ Cox
ce9ddd0eee runtime: keep args and frame in struct Func
args is useful for printing tracebacks.

frame is not necessary anymore, but we might some day
get back to functions where the frame size does not vary
by program counter, and if so we'll need it. Avoid needing
to introduce a new struct format later by keeping it now.

Fixes #5907.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13632051
2013-09-11 14:18:52 -04:00
Keith Randall
4487054751 runtime: clean up / align comment tabbing
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/13336046
2013-09-10 09:02:22 -07:00
Russ Cox
757e0de89f runtime: impose stack size limit
The goal is to stop only those programs that would keep
going and run the machine out of memory, but before they do that.
1 GB on 64-bit, 250 MB on 32-bit.
That seems implausibly large, and it can be adjusted.
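
In the released runtime this limit is adjustable from Go via
runtime/debug.SetMaxStack; a short sketch:

        package main

        import (
            "fmt"
            "runtime/debug"
        )

        func main() {
            // SetMaxStack bounds how large a single goroutine stack may grow
            // and returns the previous limit (the defaults described above).
            old := debug.SetMaxStack(64 << 20) // 64 MB
            fmt.Println("previous stack limit:", old)

            // A goroutine that recurses past the limit dies with a
            // "goroutine stack exceeds ...-byte limit" fatal error instead of
            // running the machine out of memory.
        }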

Fixes #2556.
Fixes #4494.
Fixes #5173.

R=khr, r, dvyukov
CC=golang-dev
https://golang.org/cl/12541052
2013-08-15 22:34:06 -04:00
Dmitriy Vyukov
f195ae94ca runtime: remove old preemption checks
runtime.gcwaiting checks are not needed anymore.

R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/12934043
2013-08-15 14:32:10 +04:00
Russ Cox
5822e7848a runtime: make SetFinalizer(x, f) accept any f for which f(x) is valid
Originally the requirement was f(x) where f's argument is
exactly x's type.

CL 11858043 relaxed the requirement in a non-standard
way: f's argument must be exactly x's type or interface{}.

If we're going to relax the requirement, it should be done
in a way consistent with the rest of Go. This CL allows f's
argument to have any type to which x is assignable;
that's the same requirement the compiler would impose
if compiling f(x) directly.
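
For example, after this change a finalizer may take an interface type that x
implements (illustrative):

        package main

        import (
            "fmt"
            "runtime"
        )

        type buffer struct{ data []byte }

        func (b *buffer) Close() error { b.data = nil; return nil }

        func main() {
            b := &buffer{data: make([]byte, 1<<20)}

            // The finalizer's parameter is not exactly *buffer, but *buffer is
            // assignable to it, so f(b) is valid and SetFinalizer accepts it.
            runtime.SetFinalizer(b, func(c interface{ Close() error }) {
                fmt.Println("finalizing")
                _ = c.Close()
            })
            _ = b
        }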

Fixes #5368.

R=dvyukov, bradfitz, pieter
CC=golang-dev
https://golang.org/cl/12895043
2013-08-14 14:54:31 -04:00
Dmitriy Vyukov
4f2e382c9f runtime: dump scheduler state if GODEBUG=schedtrace is set
The schedtrace value sets the dump period in milliseconds.
In default mode the trace looks as follows:
SCHED 0ms: gomaxprocs=4 idleprocs=0 threads=3 idlethreads=0 runqueue=0 [1 0 0 0]
SCHED 1001ms: gomaxprocs=4 idleprocs=3 threads=6 idlethreads=3 runqueue=0 [0 0 0 0]
SCHED 2008ms: gomaxprocs=4 idleprocs=1 threads=6 idlethreads=1 runqueue=0 [0 1 0 0]
If GODEBUG=scheddetail=1 is set as well, then the detailed trace is printed:
SCHED 0ms: gomaxprocs=4 idleprocs=0 threads=3 idlethreads=0 runqueue=0 singleproc=0 gcwaiting=1 mlocked=0 nmspinning=0 stopwait=0 sysmonwait=0
  P0: status=3 tick=1 m=0 runqsize=1/128 gfreecnt=0
  P1: status=3 tick=0 m=-1 runqsize=0/128 gfreecnt=0
  P2: status=3 tick=0 m=-1 runqsize=0/128 gfreecnt=0
  P3: status=3 tick=0 m=-1 runqsize=0/128 gfreecnt=0
  M2: p=-1 curg=-1 mallocing=0 throwing=0 gcing=0 locks=1 dying=0 helpgc=0 spinning=0 lockedg=-1
  M1: p=-1 curg=-1 mallocing=0 throwing=0 gcing=0 locks=1 dying=0 helpgc=0 spinning=0 lockedg=-1
  M0: p=0 curg=1 mallocing=0 throwing=0 gcing=0 locks=1 dying=0 helpgc=0 spinning=0 lockedg=1
  G1: status=2() m=0 lockedm=0
  G2: status=1() m=-1 lockedm=-1

R=golang-dev, raggi, rsc
CC=golang-dev
https://golang.org/cl/11435044
2013-08-14 00:30:55 +04:00
Dmitriy Vyukov
f9066fe1c0 runtime: more reliable preemption
Currently it's possible for a goroutine
that periodically executes non-blocking
cgo calls/syscalls to never be preempted.
This change splits scheduler and syscall
ticks to prevent such a situation.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12658045
2013-08-13 22:14:04 +04:00
Dmitriy Vyukov
fa4628346b runtime: remove unused m->racepc
The original plan was to collect allocation stacks
for all memory blocks. But it was never implemented,
it is not planned for the near future, and it's unclear how to do it at all.

R=golang-dev, dave, bradfitz
CC=golang-dev
https://golang.org/cl/12724044
2013-08-12 21:48:19 +04:00
Dmitriy Vyukov
23e15f7253 net: add special netFD mutex
The mutex, fdMutex, handles locking and lifetime of sysfd,
and serializes Read and Write methods.
This allows stripping 2 sync.Mutex.Lock calls,
2 sync.Mutex.Unlock calls, 1 defer, and some amount
of miscellaneous overhead from every network operation.

On linux/amd64, Intel E5-2690:
benchmark                             old ns/op    new ns/op    delta
BenchmarkTCP4Persistent                    9595         9454   -1.47%
BenchmarkTCP4Persistent-2                  8978         8772   -2.29%
BenchmarkTCP4ConcurrentReadWrite           4900         4625   -5.61%
BenchmarkTCP4ConcurrentReadWrite-2         2603         2500   -3.96%

In general it strips 70-500 ns from every network operation, depending
on the processor model. On my relatively new E5-2690 it accounts for ~5%
of network op cost.

Fixes #6074.

R=golang-dev, bradfitz, alex.brainman, iant, mikioh.mikioh
CC=golang-dev
https://golang.org/cl/12418043
2013-08-09 21:43:00 +04:00
Dmitriy Vyukov
01f1e3da48 runtime: traceback running goroutines
Introduce a freezetheworld function that makes a best-effort attempt to stop any concurrently running goroutines, and call it during crashes.
Fixes #5873.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12054044
2013-08-09 12:53:35 +04:00
Dmitriy Vyukov
65834685d3 runtime: use GetQueuedCompletionStatusEx on windows if available
GetQueuedCompletionStatusEx allows dequeueing a batch of completion
notifications, which is more efficient than dequeueing them one by one.

benchmark                           old ns/op    new ns/op    delta
BenchmarkClientServerParallel4         100605        90945   -9.60%
BenchmarkClientServerParallel4-2        90225        74504  -17.42%

R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12436044
2013-08-08 17:41:57 +04:00
Rob Pike
98a80b95b4 runtime: use correct types for maxstring and concatstring
Updates #6046.
This CL just does maxstring and concatstring. There are other functions
to fix but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up in architectures I can't test
myself.

R=golang-dev, dave, iant
CC=golang-dev
https://golang.org/cl/12519044
2013-08-07 06:49:11 +10:00
Rob Pike
82f5ca1ef0 runtime: change int32 to intgo in findnull and findnullw
Update #6046.
This CL just does findnull and findnullw. There are other functions
to fix but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up in architectures I can't test
myself.

R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12520043
2013-08-06 21:49:03 +10:00
Dmitriy Vyukov
9c0500b466 runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

This is the same as the reverted change 12250043,
with save marked as textflag 7.
The problem was that if save calls morestack,
then a subsequent lessstack spoils g->sched.pc/sp,
and those bad values were remembered in g->syscallpc/sp.
Entersyscallblock had the same problem,
but it has never been triggered to date.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12478043
2013-08-06 13:38:44 +04:00
Russ Cox
d3066e47b1 runtime/pprof: test multithreaded profile, remove OS X workarounds
This means that pprof will no longer report profiles on OS X.
That's unfortunate, but the profiles were often wrong and, worse,
it was difficult to tell whether the profile was wrong or not.

The workarounds were making the scheduler more complex,
possibly caused a deadlock (see issue 5519), and did not actually
deliver reliable results.

It may be possible for adventurous users to apply a patch to
their kernels to get working results, or perhaps having no results
will encourage someone to do the work of creating a profiling
thread like on Windows. Issue 6047 has details.

Fixes #5519.
Fixes #6047.

R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/12429045
2013-08-05 19:49:02 -04:00
Dmitriy Vyukov
f38ff9e5ea undo CL 12250043 / e911f94c4902
Breaks all 386 builders.

««« original CL description
runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
»»»

R=rsc
CC=golang-dev
https://golang.org/cl/12424045
2013-08-05 23:33:50 +04:00
Dmitriy Vyukov
d5ab784611 runtime: remove singleproc var
It was needed for the old scheduler,
because there could temporarily be more threads than gomaxprocs.
In the new scheduler, gomaxprocs is always respected.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12438043
2013-08-05 22:58:02 +04:00
Dmitriy Vyukov
f73972fa33 runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
2013-08-05 22:55:54 +04:00
Dmitriy Vyukov
49217cf5fd runtime: remove unused scheduler knob
Blockingsyscall was used by the net package on Windows;
it is not used anymore.

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12436043
2013-08-04 23:32:40 +04:00
Keith Randall
9cd570680b runtime: reimplement reflect.call to not use stack splitting.
R=golang-dev, r, khr, rsc
CC=golang-dev
https://golang.org/cl/12053043
2013-08-02 13:03:14 -07:00
Dmitriy Vyukov
c33d490020 runtime: print "created by" for running goroutines in traceback
This allows at least determining the goroutine's "identity".
Now it looks like:
goroutine 12 [running]:
        goroutine running on other thread; stack unavailable
created by testing.RunTests
        src/pkg/testing/testing.go:440 +0x88e

R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/12248043
2013-08-01 19:28:38 +04:00
Dmitriy Vyukov
3d6bce411c runtime: fix code formatting
This is mainly to force another build
with goroutine preemption.

R=rsc
CC=golang-dev
https://golang.org/cl/12006045
2013-07-30 23:48:18 +04:00
Dmitriy Vyukov
e84d9e1fb3 runtime: do not split stacks in syscall status
Split stack checks (morestack) corrupt g->sched,
but g->sched must be kept consistent for GC/traceback.
The change implements runtime.notetsleepg function,
which does entersyscall/exitsyscall and is carefully arranged
to not call any split functions in between.

R=rsc
CC=golang-dev
https://golang.org/cl/11575044
2013-07-29 22:22:34 +04:00
Pieter Droogendijk
6350e45892 runtime: allow SetFinalizer with a func(interface{})
Fixes #5368.
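
A short illustration of the newly accepted form (sketch):

        package main

        import (
            "fmt"
            "runtime"
        )

        func main() {
            p := new(int)
            *p = 42

            // The finalizer may now take interface{} instead of exactly *int.
            runtime.SetFinalizer(p, func(x interface{}) {
                fmt.Println("finalizer called for", x)
            })
            // The finalizer runs at some GC after p becomes unreachable.
        }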

R=golang-dev, dvyukov
CC=golang-dev, rsc
https://golang.org/cl/11858043
2013-07-29 19:43:08 +04:00
Dmitriy Vyukov
e97d677b4e runtime: introduce notetsleepg function
notetsleepg is the same as notetsleep, but is called on a user g.
It includes entersyscall/exitsyscall and will help avoid
calling split-stack functions while in syscall status.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11681043
2013-07-22 23:02:27 +04:00
Dmitriy Vyukov
27134567fa runtime: clarify comment for m->locked
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11671043
2013-07-22 16:37:31 +04:00
Russ Cox
c758841853 cmd/ld, runtime: remove unused fields from Func
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11604043
2013-07-19 18:52:35 -04:00
Russ Cox
48769bf546 runtime: use funcdata to supply garbage collection information
This CL introduces a FUNCDATA number for runtime-specific
garbage collection metadata, changes the C and Go compilers
to emit that metadata, and changes the runtime to expect it.

The old pseudo-instructions that carried this information
are gone, as is the linker code to process them.

R=golang-dev, dvyukov, cshapiro
CC=golang-dev
https://golang.org/cl/11406044
2013-07-19 16:04:09 -04:00
Keith Randall
6fc49c1854 runtime: cleanup: use ArgsSizeUnknown to mark all functions
whose argument size is unknown (C vararg functions, and
assembly code without an explicit specification).

We used to use 0 to mean "unknown" and 1 to mean "zero".
Now we use ArgsSizeUnknown (0x80000000) to mean "unknown".

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11590043
2013-07-19 11:19:18 -07:00
Russ Cox
c3de91bb15 cmd/ld, runtime: use new contiguous pcln table
R=golang-dev, r, dave
CC=golang-dev
https://golang.org/cl/11494043
2013-07-18 10:43:22 -04:00
Dmitriy Vyukov
5887f142a3 runtime: more reliable preemption
Currently the preemption signal g->stackguard0==StackPreempt
can be lost if it is received when preemption is disabled
(e.g. m->lock!=0). This change duplicates the preemption
signal in g->preempt and restores g->stackguard0
when preemption is enabled.
Update #543.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10792043
2013-07-17 12:52:37 -04:00
Russ Cox
a83748596c runtime: use new frame argument size information
With this CL, I believe the runtime always knows
the frame size during the gc walk. There is no fallback
to "assume entire stack frame of caller" anymore.

R=golang-dev, khr, cshapiro, dvyukov
CC=golang-dev
https://golang.org/cl/11374044
2013-07-17 12:47:18 -04:00
Russ Cox
5d363c6357 cmd/ld, runtime: new in-memory symbol table format
Design at http://golang.org/s/go12symtab.

This enables some cleanup of the garbage collector metadata
that will be done in future CLs.

This CL does not move the old symtab and pclntab back into
an unmapped section of the file. That's a bit tricky and will be
done separately.

Fixes #4020.

R=golang-dev, dave, cshapiro, iant, r
CC=golang-dev, nigeltao
https://golang.org/cl/11085043
2013-07-16 09:41:38 -04:00
Russ Cox
fb63e4fefb runtime: make cas64 like cas32 and casp
The current cas64 definition hard-codes the x86 behavior
of updating *old with the new value when the cas fails.
This is inconsistent with cas32 and casp.
Make it consistent.

This means that the cas64 uses will be epsilon less efficient
than they might be, because they have to do an unnecessary
memory load on x86. But so be it. Code clarity and consistency
is more important.
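
The resulting semantics match what sync/atomic exposes to Go code: a failed
compare-and-swap reports false and the caller reloads explicitly (illustrative):

        package main

        import (
            "fmt"
            "sync/atomic"
        )

        func main() {
            var x int64 = 10

            // CompareAndSwap takes the expected old value and reports success;
            // on failure it does not write the observed value back, so the
            // caller does its own load-and-retry loop.
            for {
                old := atomic.LoadInt64(&x)
                if atomic.CompareAndSwapInt64(&x, old, old+1) {
                    break
                }
                // x changed under us; reload and retry.
            }
            fmt.Println(atomic.LoadInt64(&x)) // 11
        }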

R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10909045
2013-07-12 00:03:32 -04:00
Dmitriy Vyukov
4b536a550f runtime: introduce GODEBUG env var
Currently it replaces the GOGCTRACE env var (GODEBUG=gctrace=1).
The plan is to extend it with other types of debug tracing,
e.g. GODEBUG=gctrace=1,schedtrace=100.

R=rsc
CC=bradfitz, daniel.morsing, gobot, golang-dev
https://golang.org/cl/10026045
2013-06-28 18:37:06 +04:00