mirror of https://github.com/golang/go synced 2024-11-19 17:44:43 -07:00
Commit Graph

75 Commits

Author SHA1 Message Date
Josh Bleecher Snyder
339a24da66 runtime: fix typo in comment
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/125500043
2014-08-19 08:50:35 -07:00
Keith Randall
7aa4e5ac5f runtime: convert equality functions to Go
LGTM=rsc
R=rsc, khr
CC=golang-codereviews
https://golang.org/cl/121330043
2014-08-07 14:52:55 -07:00
Keith Randall
a2a9768414 runtime: convert hash functions to Go calling convention.
Create proper closures so hash functions can be called
directly from Go.  Rearrange calling convention so return
value is directly accessible.
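
A minimal sketch of the shape of this change, using hypothetical names
(hashfn, makeHasher) rather than the runtime's actual signatures: per-type
hash functions become ordinary Go closures whose result is a plain return value.

        package main

        import "fmt"

        // hashfn is a hypothetical stand-in for the runtime's hash function shape:
        // it returns the hash directly instead of writing it through a pointer.
        type hashfn func(data []byte, seed uintptr) uintptr

        // makeHasher builds a closure that can be called directly from Go code.
        func makeHasher(key uintptr) hashfn {
                return func(data []byte, seed uintptr) uintptr {
                        h := key ^ seed
                        for _, b := range data {
                                h = h*31 + uintptr(b)
                        }
                        return h
                }
        }

        func main() {
                h := makeHasher(0x9747b28c)
                fmt.Printf("%#x\n", h([]byte("hello"), 0))
        }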

LGTM=dvyukov
R=golang-codereviews, dvyukov, dave, khr
CC=golang-codereviews
https://golang.org/cl/119360043
2014-07-31 15:07:05 -07:00
Rob Pike
aff7883d9a runtime: fix assembler macro definitions to be consistent in use of center-dot
The DISPATCH and CALLFN macro definitions depend on an inconsistency
between the internal cpp mini-implementation and the language proper in
whether center-dot is an identifier character. The macro depends on it not
being an identifier character, but the resulting code depends on it being one.

Remove the dependence on the inconsistency by placing the center-dot into
the macro invocation rather than the body.

No semantic change. This is just renaming macro arguments.

LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/119320043
2014-07-30 10:11:44 -07:00
Keith Randall
4aa50434e1 runtime: rewrite malloc in Go.
This change introduces gomallocgc, a Go clone of mallocgc.
Only a few uses have been moved over, so there are still
lots of uses from C. Many of these C uses will be moved
over to Go (e.g. in slice.goc), but probably not all.
What should remain of C's mallocgc is an open question.

LGTM=rsc, dvyukov
R=rsc, khr, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/108840046
2014-07-30 09:01:52 -07:00
Keith Randall
0c6b55e76b runtime: convert map implementation to Go.
It's a bit slower, but not painfully so.  There is still room for
improvement (saving space so we can use nosplit, and removing the
requirement for hash/eq stubs).

benchmark                              old ns/op     new ns/op     delta
BenchmarkMegMap                        23.5          24.2          +2.98%
BenchmarkMegOneMap                     14.9          15.7          +5.37%
BenchmarkMegEqMap                      71668         72234         +0.79%
BenchmarkMegEmptyMap                   4.05          4.93          +21.73%
BenchmarkSmallStrMap                   21.9          22.5          +2.74%
BenchmarkMapStringKeysEight_16         23.1          26.3          +13.85%
BenchmarkMapStringKeysEight_32         21.9          25.0          +14.16%
BenchmarkMapStringKeysEight_64         21.9          25.1          +14.61%
BenchmarkMapStringKeysEight_1M         21.9          25.0          +14.16%
BenchmarkIntMap                        21.8          12.5          -42.66%
BenchmarkRepeatedLookupStrMapKey32     39.3          30.2          -23.16%
BenchmarkRepeatedLookupStrMapKey1M     322353        322675        +0.10%
BenchmarkNewEmptyMap                   129           136           +5.43%
BenchmarkMapIter                       137           107           -21.90%
BenchmarkMapIterEmpty                  7.14          8.71          +21.99%
BenchmarkSameLengthMap                 5.24          6.82          +30.15%
BenchmarkBigKeyMap                     34.5          35.3          +2.32%
BenchmarkBigValMap                     36.1          36.1          +0.00%
BenchmarkSmallKeyMap                   26.9          26.7          -0.74%

LGTM=rsc
R=golang-codereviews, dave, dvyukov, rsc, gobot, khr
CC=golang-codereviews
https://golang.org/cl/99380043
2014-07-16 14:16:19 -07:00
Shenghou Ma
d1177ed40d runtime: nacl/arm support.
LGTM=rsc
R=rsc, iant, dave
CC=golang-codereviews
https://golang.org/cl/103680046
2014-07-10 15:14:49 -04:00
David Crawshaw
12b990ba7d cmd/go, cmd/ld, runtime, os/user: TLS emulation for android
Based on cl/69170045 by Elias Naur.

There are currently several schemes for acquiring a TLS
slot to save the g register. None of them appear to work
for android. The closest are linux and darwin.

Linux uses a linker TLS relocation. This is not supported
by the android linker.

Darwin uses a fixed offset, and calls pthread_key_create
until it gets the slot it wants. As the runtime loads
late in the android process lifecycle, after an
arbitrary number of other libraries, we cannot rely on
any particular slot being available.

So we call pthread_key_create, take the first slot we are
given, and put it in runtime.tlsg, which we turn into a
regular variable in cmd/ld.

Makes android/arm cgo binaries work.
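
A rough cgo sketch of the approach, illustrative only: the runtime does this
during startup in C/assembly, and the variable tlsg below merely stands in
for runtime.tlsg.

        package main

        /*
        #cgo LDFLAGS: -lpthread
        #include <pthread.h>
        */
        import "C"

        import "fmt"

        // tlsg stands in for runtime.tlsg: it records whichever TLS slot
        // pthread_key_create hands us first.
        var tlsg C.pthread_key_t

        func main() {
                if rc := C.pthread_key_create(&tlsg, nil); rc != 0 {
                        panic("pthread_key_create failed")
                }
                // The runtime would now save the current g in this slot on
                // every thread via pthread_setspecific.
                fmt.Println("using TLS key", tlsg)
        }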

LGTM=minux
R=elias.naur, minux, dave, josharian
CC=golang-codereviews
https://golang.org/cl/106380043
2014-07-03 16:14:34 -04:00
David Crawshaw
54951023cb runtime: update arm comments now register m is gone
LGTM=minux
R=golang-codereviews, minux
CC=golang-codereviews
https://golang.org/cl/109220046
2014-06-30 19:10:41 -04:00
Russ Cox
89f185fe8a all: remove 'extern register M *m' from runtime
The runtime has historically held two dedicated values g (current goroutine)
and m (current thread) in 'extern register' slots (TLS on x86, real registers
backed by TLS on ARM).

This CL removes the extern register m; code now uses g->m.

On ARM, this frees up the register that formerly held m (R9).
This is important for NaCl, because NaCl ARM code cannot use R9 at all.
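
A toy model of the relationship after this CL; the real g and m are
unexported runtime structs and getg is a compiler intrinsic, so this is
illustration only.

        package main

        import "fmt"

        type m struct{ id int } // per-thread state
        type g struct{ m *m }   // the current thread is now always reached as g.m

        func main() {
                mp := &m{id: 1}
                gp := &g{m: mp}
                fmt.Println("current m:", gp.m.id) // code that used the m register now does gp.m
        }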

The Go 1 macrobenchmarks (those with per-op times >= 10 µs) are unaffected:

BenchmarkBinaryTree17              5491374955     5471024381     -0.37%
BenchmarkFannkuch11                4357101311     4275174828     -1.88%
BenchmarkGobDecode                 11029957       11364184       +3.03%
BenchmarkGobEncode                 6852205        6784822        -0.98%
BenchmarkGzip                      650795967      650152275      -0.10%
BenchmarkGunzip                    140962363      141041670      +0.06%
BenchmarkHTTPClientServer          71581          73081          +2.10%
BenchmarkJSONEncode                31928079       31913356       -0.05%
BenchmarkJSONDecode                117470065      113689916      -3.22%
BenchmarkMandelbrot200             6008923        5998712        -0.17%
BenchmarkGoParse                   6310917        6327487        +0.26%
BenchmarkRegexpMatchMedium_1K      114568         114763         +0.17%
BenchmarkRegexpMatchHard_1K        168977         169244         +0.16%
BenchmarkRevcomp                   935294971      914060918      -2.27%
BenchmarkTemplate                  145917123      148186096      +1.55%

Minux previously reported larger variations, but these were caused by
run-to-run noise, not repeatable slowdowns.

Actual code changes by Minux.
I only did the docs and the benchmarking.

LGTM=dvyukov, iant, minux
R=minux, josharian, iant, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/109050043
2014-06-26 11:54:39 -04:00
Keith Randall
14c8143c31 runtime: fix gogetcallerpc.
Make assembly govet-clean.
Clean up fixes for CL 93380044.

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/107160047
2014-06-17 21:59:50 -07:00
Keith Randall
61dca94e10 runtime: implement string ops in Go
Also implement go:nosplit annotation.  Not really needed
for now, but we'll definitely need it for other conversions.
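
A minimal illustration of the go:nosplit directive; it is usable in any
package, though the runtime relies on it far more heavily than ordinary
code should.

        package main

        import "fmt"

        //go:nosplit
        func add(a, b int) int {
                // The compiler omits the stack-split check in this function's prologue.
                return a + b
        }

        func main() {
                fmt.Println(add(1, 2))
        }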

benchmark                 old ns/op     new ns/op     delta
BenchmarkRuneIterate      534           474           -11.24%
BenchmarkRuneIterate2     535           470           -12.15%

LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
2014-06-16 23:03:03 -07:00
Keith Randall
b36ed9056f runtime: implement eqstring in assembly.
BenchmarkCompareStringEqual               10.4          7.33          -29.52%
BenchmarkCompareStringIdentical           3.99          3.67          -8.02%
BenchmarkCompareStringSameLength          9.80          6.84          -30.20%
BenchmarkCompareStringDifferentLength     1.09          0.95          -12.84%
BenchmarkCompareStringBigUnaligned        75220         76071         +1.13%
BenchmarkCompareStringBig                 69843         74746         +7.02%

LGTM=bradfitz, josharian
R=golang-codereviews, bradfitz, josharian, dave, khr
CC=golang-codereviews
https://golang.org/cl/105280044
2014-06-16 21:00:37 -07:00
Russ Cox
597f87c997 runtime: do not trace past jmpdefer during pprof traceback on arm
jmpdefer modifies PC, SP, and LR, and not atomically,
so walking past jmpdefer will often end up in a state
where the three are not a consistent execution snapshot.
This was causing warning messages a few frames later
when the traceback realized it was confused, but given
the right memory it could easily crash instead.

Update #8153

LGTM=minux, iant
R=golang-codereviews, minux, iant
CC=golang-codereviews, r
https://golang.org/cl/107970043
2014-06-12 16:34:54 -04:00
Keith Randall
cee8bcabfa runtime: provide gc maps for the reflect.callXX frames.
Update #8030

LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/100620045
2014-05-21 14:28:34 -07:00
Keith Randall
51b72d94de runtime: use duff zero and copy to initialize memory
benchmark                 old ns/op     new ns/op     delta
BenchmarkCopyFat512       1307          329           -74.83%
BenchmarkCopyFat256       666           169           -74.62%
BenchmarkCopyFat1024      2617          671           -74.36%
BenchmarkCopyFat128       343           89.0          -74.05%
BenchmarkCopyFat64        182           48.9          -73.13%
BenchmarkCopyFat32        103           28.8          -72.04%
BenchmarkClearFat128      102           46.6          -54.31%
BenchmarkClearFat512      344           167           -51.45%
BenchmarkClearFat64       50.5          26.5          -47.52%
BenchmarkClearFat256      147           87.2          -40.68%
BenchmarkClearFat32       22.7          16.4          -27.75%
BenchmarkClearFat1024     511           662           +29.55%

Fixes #7624

LGTM=rsc
R=golang-codereviews, khr, bradfitz, josharian, dave, rsc
CC=golang-codereviews
https://golang.org/cl/92760044
2014-05-07 13:17:10 -07:00
Dmitriy Vyukov
350a8fcde1 runtime: make MemStats.LastGC Unix time again
The monotonic clock patch changed all runtime times
to abstract monotonic time. As a result, the user-visible
MemStats.LastGC became monotonic time as well.
Restore Unix time for LastGC.
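
A small sketch of how the restored semantics are consumed through today's
exported API: LastGC is nanoseconds since 1970, so it converts directly
with time.Unix.

        package main

        import (
                "fmt"
                "runtime"
                "time"
        )

        func main() {
                runtime.GC()
                var ms runtime.MemStats
                runtime.ReadMemStats(&ms)
                fmt.Println("last GC finished at", time.Unix(0, int64(ms.LastGC)))
        }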

This is the simplest way to expose time.now to runtime that I found.
Another option would be to change time.now to call a C function
int64 runtime.unixnanotime() and then express time.now in terms of it.
But that would require introducing two 64-bit divisions into time.now.
Yet another option would be to change time.now to call a C function
void runtime.unixnanotime1(struct {int64 sec, int32 nsec} *now)
and then express both time.now and runtime.unixnanotime in terms of it.

Fixes #7852.

LGTM=minux.ma, iant
R=minux.ma, rsc, iant
CC=golang-codereviews
https://golang.org/cl/93720045
2014-05-02 17:32:42 +01:00
Russ Cox
72c5d5e756 reflect, runtime: fix crash in GC due to reflect.call + precise GC
Given
        type Outer struct {
                *Inner
                ...
        }
the compiler generates the implementation of (*Outer).M dispatching to
the embedded Inner. The implementation is logically:
        func (p *Outer) M() {
                (p.Inner).M()
        }
but since the only change here is the replacement of one pointer
receiver with another, the actual generated code overwrites the
original receiver with the p.Inner pointer and then jumps to the M
method expecting the *Inner receiver.

During reflect.Value.Call, we create an argument frame and the
associated data structures to describe it to the garbage collector,
populate the frame, call reflect.call to run a function call using
that frame, and then copy the results back out of the frame. The
reflect.call function does a memmove of the frame structure onto the
stack (to set up the inputs), runs the call, and then memmoves the
stack back to the frame structure (to preserve the outputs).

Originally reflect.call did not distinguish inputs from outputs: both
memmoves were for the full stack frame. However, in the case where the
called function was one of these wrappers, the rewritten receiver is
almost certainly a different type than the original receiver. This is
not a problem on the stack, where we use the program counter to
determine the type information and understand that during (*Outer).M
the receiver is an *Outer while during (*Inner).M the receiver in the
same memory word is now an *Inner. But in the statically typed
argument frame created by reflect, the receiver is always an *Outer.
Copying the modified receiver pointer off the stack into the frame
will store an *Inner there, and then if a garbage collection happens
to scan that argument frame before it is discarded, it will scan the
*Inner memory as if it were an *Outer. If the two have different
memory layouts, the collection will interpret the memory incorrectly.

Fix by only copying back the results.
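
A self-contained sketch of the scenario, with hypothetical Outer/Inner
types: calling a promoted method through reflect.Value.Call exercises
exactly the wrapper-plus-reflect.call path described above.

        package main

        import (
                "fmt"
                "reflect"
        )

        type Inner struct{ x int }

        func (p *Inner) M() { fmt.Println("Inner.M, x =", p.x) }

        type Outer struct {
                *Inner
                extra [16]string // give Outer a layout different from Inner
        }

        func main() {
                o := &Outer{Inner: &Inner{x: 42}}
                m := reflect.ValueOf(o).MethodByName("M")
                m.Call(nil) // runs the compiler-generated (*Outer).M wrapper
        }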

Fixes #7725.

LGTM=khr
R=khr
CC=dave, golang-codereviews
https://golang.org/cl/85180043
2014-04-08 11:11:35 -04:00
Russ Cox
f884e15aab runtime: fix arm build (B not JMP)
TBR=dvyukov
CC=golang-codereviews
https://golang.org/cl/71060046
2014-03-04 14:03:39 -05:00
Russ Cox
c2dd33a46f cmd/ld: clear unused ctxt before morestack
For non-closure functions, the context register is uninitialized
on entry and will not be used, but morestack saves it and then the
garbage collector treats it as live. This can be a source of memory
leaks if the context register points at otherwise dead memory.
Avoid this by introducing a parallel set of morestack functions
that clear the context register, and use those for the non-closure functions.

I hope this will help with some of the finalizer flakiness, but it probably won't.

Fixes #7244.

LGTM=dvyukov
R=khr, dvyukov
CC=golang-codereviews
https://golang.org/cl/71030044
2014-03-04 13:53:08 -05:00
Russ Cox
b377c9c6a9 liblink, runtime: fix cgo on arm
The addition of TLS to ARM rewrote the MRC instruction
differently depending on whether we were using internal
or external linking mode. That's clearly not okay, since we
don't know that during compilation, which is when we now
generate the code. Also, because the change did not introduce
a real MRC instruction but instead just macro-expanded it
in the assembler, liblink is rewriting a WORD instruction that
may actually be looking for that specific constant, which would
lead to very unexpected results. It was also using one value
that happened to be 8 where a different value that also
happened to be 8 belonged. So the code was correct for those
values but not correct in general, and very confusing.

Throw it all away.

Replace with the following. There is a linker-provided symbol
runtime.tlsgm with a value (address) set to the offset from the
hardware-provided TLS base register to the g and m storage.
Any reference to that name emits an appropriate TLS relocation
to be resolved by either the internal linker or the external linker,
depending on the link mode. The relocation has exactly the
semantics of the R_ARM_TLS_LE32 relocation, which is what
the external linker provides.

This symbol is only used in two routines, runtime.load_gm and
runtime.save_gm. In both cases it is now used like this:

        MRC		15, 0, R0, C13, C0, 3 // fetch TLS base pointer
        MOVW	$runtime·tlsgm(SB), R2
        ADD	R2, R0 // now R0 points at thread-local g+m storage

It is likely that this change breaks the generation of shared libraries
on ARM, because the MOVW needs to be rewritten to use the global
offset table and a different relocation type. But let's get the supported
functionality working again before we worry about unsupported
functionality.

LGTM=dave, iant
R=iant, dave
CC=golang-codereviews
https://golang.org/cl/56120043
2014-01-23 22:51:39 -05:00
Russ Cox
dab127baf5 liblink: remove use of linkmode on ARM
Now that liblink is compiled into the compilers and assemblers,
it must not refer to the "linkmode", since that is not known until
link time. This CL makes the ARM support no longer use linkmode,
which fixes a bug with cgo binaries that contain their own TLS
variables.

The x86 code must also remove linkmode; that is issue 7164.

Fixes #6992.

R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/55160043
2014-01-21 19:46:34 -05:00
Dave Cheney
d2fe44d568 runtime: load runtime.goarm as a byte, not a word
Fixes #6952.

runtime.asminit was incorrectly loading runtime.goarm as a word, not a uint8, which made it subject to alignment issues on arm5 platforms.

Alignment aside, this also meant that the top 3 bytes in R11 would have been garbage and could not be assumed to be setting up the FPU reliably.

R=iant, minux.ma
CC=golang-codereviews
https://golang.org/cl/46240043
2013-12-29 15:25:34 +11:00
Russ Cox
4230044bb8 runtime: remove non-extern decls of runtime.goarm
The linker is in charge of providing the one true declaration.

R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/39560043
2013-12-09 19:35:07 -05:00
Russ Cox
7276c02b41 runtime, cmd/gc, cmd/ld: ignore method wrappers in recover
Bug #1:

Issue 5406 identified an interesting case:
        defer iface.M()
may end up calling a wrapper that copies an indirect receiver
from the iface value and then calls the real M method. That's
two calls down, not just one, and so recover() == nil always
in the real M method, even during a panic.
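
A small program in the shape issue 5406 describes (T and I are hypothetical
names): the deferred interface call lands in a compiler-generated
pointer-receiver wrapper, which in turn calls the real method, where
recover must still stop the panic.

        package main

        import "fmt"

        type I interface{ M() }

        type T struct{}

        // Value receiver: calling M on a *T stored in an interface goes
        // through a generated (*T).M wrapper before reaching this body.
        func (T) M() {
                if r := recover(); r != nil {
                        fmt.Println("recovered:", r)
                }
        }

        func main() {
                var i I = &T{}
                defer i.M()
                panic("boom")
        }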

[For the purposes of this entire discussion, a wrapper's
implementation is a function containing an ordinary call, not
the optimized tail call form that is sometimes possible. The
tail call does not create a second frame, so it is already
handled correctly.]

Fix this bug by introducing g->panicwrap, which counts the
number of bytes on the current stack segment that are due to
wrapper calls that should not count against the recover
check. All wrapper functions must now adjust g->panicwrap up
on entry and back down on exit. This adds slightly to their
expense; on the x86 it is a single instruction at entry and
exit; on the ARM it is three. However, the alternative is to
make a call to recover depend on being able to walk the stack,
which I very much want to avoid. We have enough problems
walking the stack for garbage collection and profiling.
Also, if performance is critical in a specific case, it is already
faster to use a pointer receiver and avoid this kind of wrapper
entirely.

Bug #2:

The old code, which did not consider the possibility of two
calls, already contained a check to see if the call had split
its stack and so the panic-created segment was one behind the
current segment. In the wrapper case, both of the two calls
might split their stacks, so the panic-created segment can be
two behind the current segment.

Fix this by propagating the Stktop.panic flag forward during
stack splits instead of looking backward during recover.

Fixes #5406.

R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13367052
2013-09-12 14:00:16 -04:00
Keith Randall
32b770b2c0 runtime: jump to badmcall instead of calling it.
This replaces the mcall frame with the badmcall frame instead of
leaving the mcall frame on the stack and adding the badmcall frame.
Because mcall is no longer on the stack, traceback will now report what
called mcall, which is what we would like to see in this situation.

R=golang-dev, cshapiro
CC=golang-dev
https://golang.org/cl/13012044
2013-08-29 15:53:34 -07:00
Elias Naur
45233734e2 runtime, cmd/ld: Add ARM external linking and implement -shared in terms of external linking
This CL is an aggregate of 10271047, 10499043, 9733044. Descriptions of each follow:

10499043
runtime,cmd/ld: Merge TLS symbols and teach 5l about ARM TLS

This CL prepares for external linking support to ARM.

The pseudo-symbols runtime.g and runtime.m are merged into a single
runtime.tlsgm symbol. When external linking, the offset of a thread local
variable is stored at a memory location instead of being embedded into a offset
of a ldr instruction. With a single runtime.tlsgm symbol for both g and m, only
one such offset is needed.

The larger part of this CL moves TLS code from gcc compiled to internally
compiled. The TLS code now uses the modern MRC instruction, and 5l is taught
about TLS fallbacks in case the instruction is not available or appropriate.

10271047
This CL adds support for -linkmode external to 5l.

For 5l itself, use addrel to allow for D_CALL relocations to be handled by the
host linker. Of the cases listed in rsc's comment in issue 4069, only cases 5 and
63 needed an update. One of the TODO: addrel cases has since been replaced, and the
rest of the cases are either covered by indirection through addpool (cases with
LTO or LFROM flags) or stubs (case 74). The addpool cases are covered because
addpool emits AWORD instructions, which in turn are handled by case 11.

In the runtime, change the argv argument in the rt0* functions slightly to be a
pointer to the argv list, instead of relying on a particular location of argv.

9733044
The -shared flag to 6l outputs a shared library, implemented in Go
and callable from non-Go programs such as C code.

The main part of this CL changes the thread local storage model.
Go uses the fastest and least general mode, local exec. TLS data in shared
libraries normally requires at least the local dynamic mode; however, this CL
instead opts for using the initial exec mode. Initial exec mode is faster than
local dynamic mode and can be used on Linux since the linker has reserved a
limited amount of TLS space for performance sensitive TLS code.

Initial exec mode requires an extra load from the GOT table to determine the
TLS offset. This penalty will not be paid if ld is not in -shared mode, since
TLS accesses will be reduced to local exec.

The ELF sections .init_array and .rela.init_array are added to register the Go
runtime entry with cgo at library load time.

The "hidden" attribute is added to Cgo functions called from Go, since Go
does not generate calls through the GOT table, and adding non-GOT relocations for
a global function is not supported by gcc. Cgo symbols don't need to be global
and avoiding the GOT table is also faster.

The changes to 8l only remove code relevant to the old -shared mode, where
internal linking was used.

This CL only addresses the low-level linker work. It can be submitted by itself,
but to be useful, the runtime changes in CL 9738047 are also needed.

Design discussion at
https://groups.google.com/forum/?fromgroups#!topic/golang-nuts/zmjXkGrEx6Q

Fixes #5590.

R=rsc
CC=golang-dev
https://golang.org/cl/12871044
2013-08-14 15:38:54 +00:00
Dmitriy Vyukov
92254d4463 runtime: fix ARM assembly formatting
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12702048
2013-08-12 21:36:33 +04:00
Keith Randall
a97a91de06 runtime: Record jmpdefer's argument size.
Fixes bug 6055.

R=golang-dev, bradfitz, dvyukov, khr
CC=golang-dev
https://golang.org/cl/12536045
2013-08-07 14:03:50 -07:00
Keith Randall
5a54696d78 cmd/ld: Put the textflag constants in a separate file.
We can then include this file in assembly to replace
cryptic constants like "7" with meaningful constants
like "(NOPROF|DUPOK|NOSPLIT)".

Converting just pkg/runtime/asm*.s for now.  Dropping NOPROF
and DUPOK from lots of places where they aren't needed.
More .s files to come in a subsequent changelist.

A nonzero number in the textflag field now means
"has not been converted yet".

R=golang-dev, daniel.morsing, rsc, khr
CC=golang-dev
https://golang.org/cl/12568043
2013-08-07 10:23:24 -07:00
Keith Randall
12e46e42ec runtime: don't mark the new call trampolines as NOSPLIT.
They may call other NOSPLIT routines, and that might
overflow the stack.

R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12563043
2013-08-06 14:33:55 -07:00
Brad Fitzpatrick
598c78967f strings: use runtime assembly for IndexByte
Fixes #3751

R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/12483043
2013-08-05 15:04:05 -07:00
Keith Randall
9cd570680b runtime: reimplement reflect.call to not use stack splitting.
R=golang-dev, r, khr, rsc
CC=golang-dev
https://golang.org/cl/12053043
2013-08-02 13:03:14 -07:00
Brad Fitzpatrick
e2a1bd68b3 bytes: move IndexByte assembly to pkg runtime
Per suggestion from Russ in February. Then strings.IndexByte
can be implemented in terms of the shared code in pkg runtime.
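
For callers the effect is simply that both packages expose the same
primitive (strings.IndexByte arrived with the strings CL listed above);
a short usage sketch:

        package main

        import (
                "bytes"
                "fmt"
                "strings"
        )

        func main() {
                fmt.Println(bytes.IndexByte([]byte("gopher"), 'p')) // 2
                fmt.Println(strings.IndexByte("gopher", 'z'))       // -1
        }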

Update #3751

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12289043
2013-08-01 16:11:19 -07:00
Russ Cox
13507e0697 runtime: fix traceback across morestack
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12287043
2013-08-01 18:51:55 -04:00
Russ Cox
dba623b1c7 runtime: reduce frame size for runtime.cgocallback_gofunc
Tying preemption to stack splits means that we have to be able to
complete the call to exitsyscall (inside cgocallbackg at least for now)
without any stack split checks, meaning that the whole sequence
has to work within 128 bytes of stack, unless we increase the size
of the red zone. This CL frees up 24 bytes along that critical path
on amd64. (The 32-bit systems have plenty of space because all
their words are smaller.)

R=dvyukov
CC=golang-dev
https://golang.org/cl/11676043
2013-07-23 18:40:02 -04:00
Russ Cox
58f12ffd79 runtime: handle morestack/lessstack in stack trace
If we start a garbage collection on g0 during a
stack split or unsplit, we'll see morestack or lessstack
at the top of the stack. Record an argument frame size
for those, and record that they terminate the stack.

R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/11533043
2013-07-18 16:53:45 -04:00
Russ Cox
9ddfb64365 runtime: record argument size in assembly functions
I have not done the system call stubs in sys_*.s.
I hope to avoid that, because those do not block, so those
frames will not appear in stack traces during garbage
collection.

R=golang-dev, dvyukov, khr
CC=golang-dev
https://golang.org/cl/11360043
2013-07-16 16:24:09 -04:00
Russ Cox
5d363c6357 cmd/ld, runtime: new in-memory symbol table format
Design at http://golang.org/s/go12symtab.

This enables some cleanup of the garbage collector metadata
that will be done in future CLs.

This CL does not move the old symtab and pclntab back into
an unmapped section of the file. That's a bit tricky and will be
done separately.

Fixes #4020.

R=golang-dev, dave, cshapiro, iant, r
CC=golang-dev, nigeltao
https://golang.org/cl/11085043
2013-07-16 09:41:38 -04:00
Russ Cox
f0d73fbc7c runtime: use gp->sched.sp for stack overflow check
On x86 it is a few words lower on the stack than m->morebuf.sp
so it is a more precise check. Enabling the check requires recording
a valid gp->sched in reflect.call too. This is a good thing in general,
since it will make stack traces during reflect.call work better, and it
may be useful for preemption too.

R=dvyukov
CC=golang-dev
https://golang.org/cl/10709043
2013-06-27 16:51:06 -04:00
Russ Cox
6fa3c89b77 runtime: record proper goroutine state during stack split
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.

For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:

        TEXT myfunc
                check for split
                jmpcond nosplit
                call morestack
        nosplit:
                sub $xxx, sp

Now an extra instruction is inserted after the call:

        TEXT myfunc
        start:
                check for split
                jmpcond nosplit
                call morestack
                jmp start
        nosplit:
                sub $xxx, sp

The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.

The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.

The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.

Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)

runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
        morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
        sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow

goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
        /Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
        /Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
        /Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
        /Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
        /Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
        /Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
        /Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
        /Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8

And here is a seg fault during oldstack:

SIGSEGV: segmentation violation
PC=0x1b2a6

runtime.oldstack()
        /Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
        /Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22

goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
        /Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
        /Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
        /Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
        /Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
        /Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
        strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8

goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
        /Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
        /Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
        /Users/rsc/g/go/src/pkg/runtime/proc.c:166

rax     0x23ccc0
rbx     0x23ccc0
rcx     0x0
rdx     0x38
rdi     0x2102c0170
rsi     0x221032cfe0
rbp     0x221032cfa0
rsp     0x7fff5fbff5b0
r8      0x2102c0120
r9      0x221032cfa0
r10     0x221032c000
r11     0x104ce8
r12     0xe5c80
r13     0x1be82baac718
r14     0x13091135f7d69200
r15     0x0
rip     0x1b2a6
rflags  0x10246
cs      0x2b
fs      0x0
gs      0x0

Fixes #5723.

R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048
2013-06-27 11:32:01 -04:00
Ian Lance Taylor
0627248a1f runtime: update runtime·gogo comment in asm files
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/10244043
2013-06-12 15:05:10 -07:00
Russ Cox
d67e7e3acf runtime: add lr, ctxt, ret to Gobuf
Add gostartcall and gostartcallfn.
The old gogocall = gostartcall + gogo.
The old gogocallfn = gostartcallfn + gogo.

R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/10036044
2013-06-12 15:22:26 -04:00
Russ Cox
6120ef0799 runtime: rename _rt0_$GOARCH to _rt0_go
There's no reason to use a different name on each architecture,
and doing so makes it impossible for portable code to refer to
the original Go runtime entry point. Rename it _rt0_go everywhere.

This is a global search and replace only.

R=golang-dev, bradfitz, minux.ma
CC=golang-dev
https://golang.org/cl/10196043
2013-06-11 16:49:24 -04:00
Russ Cox
528534c1d4 runtime: fix comments (g->gobuf became g->sched long ago)
Should reduce size of CL 9868044.

R=golang-dev, ality
CC=golang-dev
https://golang.org/cl/10045043
2013-06-05 07:16:53 -04:00
Dmitriy Vyukov
f5becf4233 runtime: add stackguard0 to G
This is part of the preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (holds the original value).
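
A toy sketch of the scheme (gToy and stackPreempt are made-up names; the
real check lives in the function prologue emitted by the compiler): setting
stackguard0 to a sentinel larger than any stack pointer makes the next
prologue check fail and divert into morestack, where preemption can be
handled, while stackguard keeps the true limit.

        package main

        import "fmt"

        const stackPreempt = ^uintptr(0) // hypothetical sentinel, above any real SP

        type gToy struct {
                stackguard0 uintptr // checked by prologues; may be set to stackPreempt
                stackguard  uintptr // always holds the original stack limit
        }

        func needsMorestack(sp uintptr, g *gToy) bool { return sp < g.stackguard0 }

        func main() {
                g := &gToy{stackguard0: 0x1000, stackguard: 0x1000}
                fmt.Println(needsMorestack(0x8000, g)) // false: plenty of stack left
                g.stackguard0 = stackPreempt           // request a preemption
                fmt.Println(needsMorestack(0x8000, g)) // true: diverted into morestack
        }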

R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
2013-06-03 12:28:24 +04:00
Shenghou Ma
5b78cee376 runtime: fix stack pointer corruption in runtime.cgocallback_gofunc()
runtime.setmg() calls another function (cgo_save_gm), so it must save
LR onto the stack.
Re-enabled TestCthread test in misc/cgo/test.

Fixes #4863.

R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9019043
2013-04-30 04:13:32 +08:00
Keith Randall
3d5daa2319 runtime: Implement faster equals for strings and bytes.
(amd64)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            6  -63.15%
BenchmarkEqual9            22            7  -65.37%
BenchmarkEqual32           36            9  -74.91%
BenchmarkEqual4K         2187          120  -94.51%

benchmark            old MB/s     new MB/s  speedup
BenchmarkEqual9        392.22      1134.38    2.89x
BenchmarkEqual32       866.72      3457.39    3.99x
BenchmarkEqual4K      1872.73     33998.87   18.15x

(386)
benchmark           old ns/op    new ns/op    delta
BenchmarkEqual0            16            5  -63.85%
BenchmarkEqual9            22            7  -67.84%
BenchmarkEqual32           34           12  -64.94%
BenchmarkEqual4K         2196          113  -94.85%

benchmark            old MB/s     new MB/s  speedup
BenchmarkEqual9        405.81      1260.18    3.11x
BenchmarkEqual32       919.55      2631.21    2.86x
BenchmarkEqual4K      1864.85     36072.54   19.34x

Update #3751

R=bradfitz, r, khr, dave, remyoudompheng, fullung, minux.ma, ality
CC=golang-dev
https://golang.org/cl/8056043
2013-04-02 16:26:15 -07:00
Carl Shapiro
4cb921bbf1 runtime: store asmcgocall return PC where the ARM unwind expects it
The ARM implementation of runtime.cgocallback_gofunc diverged
from the calling convention by leaving a word of garbage at
the top of the stack and storing the return PC above the
locals.  This change stores the return PC at the top of the
stack and removes the save area above the locals.

Update #5124
This CL fixes the first part of the ARM issues and adds the unwind test.

R=golang-dev, bradfitz, minux.ma, cshapiro, rsc
CC=golang-dev
https://golang.org/cl/7728045
2013-03-25 14:10:28 -07:00
Keith Randall
a5d4024139 runtime: faster & safer hash function
Uses AES hardware instructions on 386/amd64 to implement
a fast hash function.  Incorporates a random key to
thwart hash collision DOS attacks.
Depends on CL#7548043 for new assembly instructions.
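
The same random-key idea is visible today through hash/maphash, which
exposes the runtime hash with a per-seed key; shown here as a modern
illustration, not this CL's code.

        package main

        import (
                "fmt"
                "hash/maphash"
        )

        func main() {
                seed := maphash.MakeSeed() // random per process, so attackers cannot precompute collisions
                var h maphash.Hash
                h.SetSeed(seed)
                h.WriteString("hello")
                fmt.Printf("%#x\n", h.Sum64()) // differs from one program run to the next
        }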

Update #3885
Helps some by making hashing faster.  Go time drops from
0.65s to 0.51s.

R=rsc, r, bradfitz, remyoudompheng, khr, dsymonds, minux.ma, elias.naur
CC=golang-dev
https://golang.org/cl/7543043
2013-03-12 10:47:44 -07:00