Most of the test cases in the test directory use the new go:build syntax
already. Convert the rest. Where possible, also place the build
constraint line below the test directive comment.
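For example, a converted test file header now reads (with a
hypothetical !js constraint):

    // run

    //go:build !js

    package main

whereas the old syntax placed the constraint above the directive:

    // +build !js
    // run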
For #41184.
For #60268.
Change-Id: I11c41a0642a8a26dc2eda1406da908645bbc005b
Cq-Include-Trybots: luci.golang.try:gotip-linux-386-longtest,gotip-linux-amd64-longtest,gotip-windows-amd64-longtest
Reviewed-on: https://go-review.googlesource.com/c/go/+/536236
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.
Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.
This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.
                     before  after
stackNosplit            800    800
_StackGuard             928    928
stackNosplit -race     1728   1600
_StackGuard -race      1856   1728
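Schematically, the inversion looks like this (a simplified sketch;
the multiplier and system/small constant names are abbreviated from
the real source):

    // Before: the guard delta is primary, the NOSPLIT limit derived.
    _StackGuard = 928*stackGuardMultiplier + _StackSystem
    _StackLimit = _StackGuard - _StackSystem - _StackSmall

    // After: the NOSPLIT limit is primary, the guard delta derived.
    stackNosplit = abi.StackNosplitBase * stackGuardMultiplier // 800 by default
    _StackGuard  = stackNosplit + _StackSystem + _StackSmall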
For #59670.
Change-Id: Ia94094c5e47897e7c088d24b4a5e33f5c2768db5
Reviewed-on: https://go-review.googlesource.com/c/go/+/486976
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts CL 486380.
It was submitted out of order and breaks bootstrap.
Change-Id: I67bd225094b5c9713b97f70feba04d2c99b7da76
Reviewed-on: https://go-review.googlesource.com/c/go/+/486916
Reviewed-by: David Chase <drchase@google.com>
TryBot-Bypass: Austin Clements <austin@google.com>
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.
Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.
This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.
                     before  after
stackNosplit            800    800
_StackGuard             928    928
stackNosplit -race     1728   1600
_StackGuard -race      1856   1728
For #59670.
Change-Id: Ibe20825ebe0076bbd7b0b7501177b16c9dbcb79e
Reviewed-on: https://go-review.googlesource.com/c/go/+/486380
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Fixes #58141
Co-authored-by: Richard Musiol <neelance@gmail.com>
Co-authored-by: Achille Roussel <achille.roussel@gmail.com>
Co-authored-by: Julien Fabre <ju.pryz@gmail.com>
Co-authored-by: Evan Phoenix <evan@phx.io>
Change-Id: I49b66946acc90fdf09ed9223096bfec9a1e5b923
Reviewed-on: https://go-review.googlesource.com/c/go/+/479627
Run-TryBot: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Auto-Submit: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Bypass: Ian Lance Taylor <iant@golang.org>
Contributors to the loong64 port are:
Weining Lu <luweining@loongson.cn>
Lei Wang <wanglei@loongson.cn>
Lingqin Gong <gonglingqin@loongson.cn>
Xiaolin Zhao <zhaoxiaolin@loongson.cn>
Meidan Li <limeidan@loongson.cn>
Xiaojuan Zhai <zhaixiaojuan@loongson.cn>
Qiyuan Pu <puqiyuan@loongson.cn>
Guoqi Chen <chenguoqi@loongson.cn>
This port has been updated to Go 1.15.6:
https://github.com/loongson/go
Updates #46229
Change-Id: I6760b4a7e51646773cd0f48baa1baba01b213b7d
Reviewed-on: https://go-review.googlesource.com/c/go/+/342325
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
The linker performs a global analysis of all nosplit call chains to
check they fit in the stack space ensured by splittable functions.
That analysis has two problems right now:
1. It's inefficient. It performs a top-down analysis, starting with
every nosplit function and the nosplit stack limit and walking *down*
the call graph to compute how much stack remains at every call. As a
result, it visits the same functions over and over, often with
different remaining stack depths. This approach is historical: the
check was originally written in C, and that approach avoided the need
for any interesting data structures.
2. If some call chain is over the limit, it only reports a single call
chain. As a result, if the check does fail, you often wind up playing
whack-a-mole by guessing where the problem is in the one chain, trying
to reduce the stack size, and then seeing if the link works or reports
a different path.
This CL completely rewrites the nosplit stack check. It now uses a
bottom-up analysis, computing the maximum stack height required by
every function's call tree. This visits every function exactly once,
making it much more efficient. It uses slightly more heap space for
intermediate storage, but still very little in the scheme of the
overall link. For example, when linking cmd/go, the new algorithm
virtually eliminates the time spent in this pass, and reduces overall
link time:
             │   before    │               after                 │
             │   sec/op    │    sec/op     vs base               │
Dostkcheck     7.926m ± 4%    1.831m ± 6%  -76.90% (p=0.000 n=20)
TotalTime      301.3m ± 1%    296.4m ± 3%   -1.62% (p=0.040 n=20)

             │    before    │                 after                 │
             │     B/op     │      B/op       vs base               │
Dostkcheck     40.00Ki ± 0%   212.15Ki ± 0%  +430.37% (p=0.000 n=20)
Most of this time is spent analyzing the runtime, so for larger
binaries, the total time saved is roughly the same, and proportionally
less of the overall link.
If the new implementation finds an error, it redoes the analysis,
this time preferring quality of error reporting over performance.
For error reporting, it computes stack depths top-down (like the old
algorithm), and reports *all* paths that are over the stack limit,
presented as a tree for compactness. For example, this is the output
from a simple test case from test/nosplit with two over-limit paths
from f1:
main.f1: nosplit stack overflow
main.f1
    grows 768 bytes, calls main.f2
        grows 56 bytes, calls main.f4
            grows 48 bytes
            80 bytes over limit
    grows 768 bytes, calls main.f3
        grows 104 bytes
        80 bytes over limit
While we're here, we do a few nice cleanups:
- We add a debug output flag, which will be useful for understanding
what our nosplit chains look like and which ones are close to
running over.
- We move the implementation out of the fog of lib.go to its own file.
- The implementation is generally more Go-like and less C-like.
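The core of the bottom-up pass amounts to the following (a sketch
with hypothetical types and names, not the actual cmd/link code):

    type funcSym struct {
        frameSize int        // bytes this frame adds to the stack
        calls     []*funcSym // statically known callees
    }

    // maxHeight returns the maximum stack consumed by fn's call
    // tree. Memoization ensures each function is computed exactly
    // once, no matter how many call chains it appears on.
    func maxHeight(fn *funcSym, memo map[*funcSym]int) int {
        if h, ok := memo[fn]; ok {
            return h
        }
        h := 0
        for _, callee := range fn.calls {
            if ch := maxHeight(callee, memo); ch > h {
                h = ch
            }
        }
        h += fn.frameSize
        memo[fn] = h
        return h
    }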
Change-Id: If1ab31197f5215475559b93695c44a01bd16e276
Reviewed-on: https://go-review.googlesource.com/c/go/+/398176
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
The nosplit test was originally written when the stack limit was a
mere 128 bytes. Now it's much larger, but rather than rewriting all of
the tests, we apply a hack to just add the extra space into the stack
frames of the existing tests.
Unfortunately, we add it in the wrong place. The extra space should be
added just once per chain of nosplit functions, but instead we add it
to every frame that appears first on a line in the test's little
script language. This means that for tests like
    start 0 call f1
    f1 16 nosplit call f2
    f2 16 nosplit call f3
    f3 16 nosplit call f4
    f4 16 nosplit call f5
    f5 16 nosplit call f6
    f6 16 nosplit call f7
    f7 16 nosplit call f8
    f8 16 nosplit call end
    end 1000
    REJECT
we add 672 bytes to *every* frame, meaning that we wind up way over
the stack limit by the end of the stanza, rather than just a little as
originally intended.
Fix this by instead adding the extra space to the first nosplit
function in a stanza. This isn't perfect either, since we could have a
nosplit -> split -> nosplit chain, but it's the best we can do without
a graph analysis.
Change-Id: Ibf156c68fe3eb1b64a438115f4a17f1a6c7e2bd1
Reviewed-on: https://go-review.googlesource.com/c/go/+/398174
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Currently, the nosplit test disables ABI wrapper generation because it
generates a main.main in assembly, and so the ABI wrapper for calling
from runtime.main to main.main counts against the nosplit limit, which
causes some of the tests to fail.
Fix this by first entering ABI0 in a splittable context and then
calling from there into the test entry point, since this doesn't
introduce an ABI wrapper.
While we're here, this CL removes the test's check for the
framepointer experiment. That's now statically enabled, so it doesn't
appear in the experiment line, and enabling any other experiment
causes the test to think that the framepointer experiment *isn't*
enabled.
For #40724.
Change-Id: I6291eb9391f129779e726c5fc8c41b7b4a14eeb9
Reviewed-on: https://go-review.googlesource.com/c/go/+/302772
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Add compiler support for emitting ABI wrappers by creating real IR as
opposed to introducing ABI aliases. At the moment these are "no-op"
wrappers in the sense that they make a simple call (using the existing
ABI) to their target. The assumption here is that once late call
expansion can handle both ABI0 and the "new" ABIInternal (register
version), it can expand the call to do the right thing.
Note that the runtime contains functions that do not strictly follow
the rules of the current Go ABI0; this has been handled in most cases
by treating these as ABIInternal instead (these changes have been made
in previous patches).
Generation of ABI wrappers (as opposed to ABI aliases) is currently
gated by GOEXPERIMENT=regabi -- wrapper generation is on by default if
GOEXPERIMENT=regabi is set and off otherwise (but can be turned on
using "-gcflags=all=-abiwrap -ldflags=-abiwrap"). Wrapper generation
currently only workd on AMD64; explicitly enabling wrapper for other
architectures (via the command line) is not supported.
Also in this patch are a few other command line options for debugging
(tracing and/or limiting wrapper creation). These will presumably go
away at some point.
Updates #27539, #40724.
Change-Id: I1ee3226fc15a3c32ca2087b8ef8e41dbe6df4a75
Reviewed-on: https://go-review.googlesource.com/c/go/+/270863
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Trust: Than McIntosh <thanm@google.com>
Change the run.go driver to recognize the "gc" build tag.
Change existing tests to use the "gc" build tag if they use some
feature that seems specific to the gc compiler, such as passing specific
options to or expecting specific behavior from "go tool compile".
Change tests to use the "!gccgo" build tag if they use "go build" or
"go run", as while those might work with compilers other than gc, they
won't work with the way that gccgo runs its testsuite (which happens
independently of the go command).
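For example (hypothetical headers, using the +build syntax in use at
the time): a test that passes options to "go tool compile" would carry

    // +build gc
    // errorcheck -0 -m

while a test that shells out to "go run" would carry

    // +build !gccgo
    // run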
For #43252
Change-Id: I666e04b6d7255a77dfc256ee304094e3a6bb15ad
Reviewed-on: https://go-review.googlesource.com/c/go/+/279052
Trust: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
I took some of the infrastructure from Austin's lock logging CR
https://go-review.googlesource.com/c/go/+/192704 (with deadlock
detection from the logs), and developed a setup to give static lock
ranking for runtime locks.
Static lock ranking establishes a documented total ordering among locks,
and then reports an error if the total order is violated. This can
happen if a deadlock happens (by acquiring a sequence of locks in
different orders), or if just one side of a possible deadlock happens.
Lock ordering deadlocks cannot happen as long as the lock ordering is
followed.
Along the way, I found a deadlock involving the new timer code, which Ian fixed
via https://go-review.googlesource.com/c/go/+/207348, as well as two other
potential deadlocks.
See the constants at the top of runtime/lockrank.go to show the static
lock ranking that I ended up with, along with some comments. This is
great documentation of the current intended lock ordering when acquiring
multiple locks in the runtime.
I also added an array lockPartialOrder[] which shows and enforces the
current partial ordering among locks (which is embedded within the total
ordering). This is more specific about the dependencies among locks.
I don't try to check the ranking within a lock class with multiple locks
that can be acquired at the same time (i.e. check the ranking when
multiple hchan locks are acquired).
Currently, I am doing a lockInit() call to set the lock rank of most
locks. Any lock that is not otherwise initialized is assumed to be a
leaf lock (a very high rank lock), so that eliminates the need to do
anything for a bunch of locks (including all architecture-dependent
locks). For two locks, root.lock and notifyList.lock (only in the
runtime/sema.go file), it is not as easy to do lock initialization, so
instead, I am passing the lock rank with the lock calls.
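Schematically, the two initialization styles look like this (a
sketch; exact signatures may differ):

    // Common case: the rank is assigned once at init time, and
    // checked automatically on every subsequent lock().
    lockInit(&sched.lock, lockRankSched)

    // runtime/sema.go case: the rank is supplied at the call site.
    lockWithRank(&root.lock, lockRankRoot)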
For Windows compilation, I needed to increase the StackGuard size from
896 to 928 because of the new lock-rank checking functions.
Checking of the static lock ranking is enabled by setting
GOEXPERIMENT=staticlockranking before doing a run.
To make sure that the static lock ranking code has no overhead in memory
or CPU when not enabled by GOEXPERIMENT, I changed 'go build/install' so
that it defines a build tag (with the same name) whenever any experiment
has been baked into the toolchain (by checking Expstring()). This allows
me to avoid increasing the size of the 'mutex' type when static lock
ranking is not enabled.
Fixes #38029
Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This disables some tests that are unsupported on riscv64 and adds support
for riscv64 to test/nosplit.
Updates #27532, #36739 and #36765
Change-Id: I0a57797a05bc80236709fc240c0a0efb0ee0d16b
Reviewed-on: https://go-review.googlesource.com/c/go/+/216263
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
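Schematically (a hypothetical illustration, not actual compiler
output):

    func a() {}
    func b() {}

    func f(cond bool) {
        defer a()
        if cond {
            defer b()
        }
    }

    // f is lowered roughly into:
    func fLowered(cond bool) {
        var deferBits uint8 // one bit per defer site, kept on the stack
        deferBits |= 1      // defer a() reached; its args saved to stack slots
        if cond {
            deferBits |= 2 // defer b() reached
        }
        // Function exit: run the reached defers in reverse order.
        if deferBits&2 != 0 {
            b()
        }
        if deferBits&1 != 0 {
            a()
        }
    }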
When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.
In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).
I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().
The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).
Cost of defer statement [ go test -run NONE -bench BenchmarkDefer$ runtime ]
With normal (stack-allocated) defers only: 35.4 ns/op
With open-coded defers: 5.6 ns/op
Cost of function call alone (remove defer keyword): 4.4 ns/op
Text size increase (including funcdata) for go binary without/with open-coded defers: 0.09%
The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.
The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:
Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
Without open-coded defers: 62.0 ns/op
With open-coded defers: 255 ns/op
A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:
CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
Without open-coded defers: 443 ns/op
With open-coded defers: 347 ns/op
Updates #14939 (defer performance)
Updates #34481 (design doc)
Change-Id: I63b1a60d1ebf28126f55ee9fd7ecffe9cb23d1ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/202340
Reviewed-by: Austin Clements <austin@google.com>
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.
In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).
I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().
The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).
Cost of defer statement [ go test -run NONE -bench BenchmarkDefer$ runtime ]
With normal (stack-allocated) defers only: 35.4 ns/op
With open-coded defers: 5.6 ns/op
Cost of function call alone (remove defer keyword): 4.4 ns/op
Text size increase (including funcdata) for go cmd without/with open-coded defers: 0.09%
The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.
The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:
Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
Without open-coded defers: 62.0 ns/op
With open-coded defers: 255 ns/op
A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:
CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
Without open-coded defers: 443 ns/op
With open-coded defers: 347 ns/op
Updates #14939 (defer performance)
Updates #34481 (design doc)
Change-Id: I51a389860b9676cfa1b84722f5fb84d3c4ee9e28
Reviewed-on: https://go-review.googlesource.com/c/go/+/190098
Reviewed-by: Austin Clements <austin@google.com>
This test invokes 'go build', so in module mode it needs a module
cache to guard edits to go.mod.
Fixes #30776
Change-Id: I89ebef1fad718247e7f972cd830e31d6f4a83e4c
Reviewed-on: https://go-review.googlesource.com/c/go/+/167085
Run-TryBot: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Jay Conrod <jayconrod@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This commit skips tests which aren't yet supported on AIX.
nosplit.go is disabled because stackGuardMultiplier is increased for
syscalls.
Change-Id: Ib5ff9a4539c7646bcb6caee159f105ff8a160ad7
Reviewed-on: https://go-review.googlesource.com/c/146939
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Adds a new build tag "gcflags_noopt" that can be used in test/*.go
tests.
Fixes #27833
Change-Id: I4ea0ccd9e9e58c4639de18645fec81eb24a3a929
Reviewed-on: https://go-review.googlesource.com/136898
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The general policy for the current state of js/wasm is that it only
has to support tests that are also supported by nacl.
The test nilptr3.go makes assumptions about which nil checks can be
removed. Since WebAssembly does not signal on reading a null pointer,
all nil checks have to be explicit.
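In effect the compiler must guard every dereference itself; roughly
(a hypothetical illustration):

    func load(p *int) int {
        // On most targets a nil *p traps and the runtime's signal
        // handler raises the panic. Wasm has no trap, so an explicit
        // branch is emitted before the load instead:
        if p == nil {
            panic("runtime error: invalid memory address or nil pointer dereference")
        }
        return *p
    }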
Updates #18892
Change-Id: I06a687860b8d22ae26b1c391499c0f5183e4c485
Reviewed-on: https://go-review.googlesource.com/110096
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The test was skipped because it did not work on AMD64 with
frame pointer enabled, and was accidentally skipped on other
architectures. Now frame pointer is the default on AMD64.
Update the test to work with frame pointer. Now the test
is skipped only when frame pointer is NOT enabled on AMD64.
Fixes #18317.
Change-Id: I724cb6874e562f16e67ce5f389a1d032a2003115
Reviewed-on: https://go-review.googlesource.com/68610
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Framepointer is the default now. Only print an X: list
if the settings are _not_ the default.
Before:
$ go tool compile -V
compile version devel +a5f30d9508 Sun Jul 16 14:43:48 2017 -0400 X:framepointer
$ go1.8 tool compile -V
compile version go1.8 X:framepointer
$
After:
$ go tool compile -V
compile version devel +a5f30d9508 Sun Jul 16 14:43:48 2017 -0400
$ go1.9 tool compile -V # imagined
compile version go1.9
$
Perpetuates #18317.
Change-Id: I981ba5c62be32e650a166fc9740703122595639b
Reviewed-on: https://go-review.googlesource.com/49252
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Now only cmd/asm and cmd/compile depend on cmd/internal/obj. Changing
the assembler backends no longer requires reinstalling cmd/link or
cmd/addr2line.
There's also now one canonical definition of the object file format in
cmd/internal/objabi/doc.go, with a warning to update all three
implementations.
objabi is still something of a grab bag of unrelated code (e.g., flag
and environment variable handling probably belong in a separate "tool"
package), but this is still progress.
Fixes #15165.
Fixes #20026.
Change-Id: Ic4b92fac7d0d35438e0d20c9579aad4085c5534c
Reviewed-on: https://go-review.googlesource.com/40972
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
This time with the cherry-pick from the proper patch of
the old CL.
Stack size increased.
Corrected NaN-comparison glitches.
Marked g register as clobbered by calls.
Fixed shared libraries.
live_ssa.go still disabled because of differences.
Presumably turning on more optimization will fix
both the stack size and the live_ssa.go glitches.
Enhanced debugging output for shared libs test.
Rebased onto master.
Updates #16010.
Change-Id: I40864faf1ef32c118fb141b7ef8e854498e6b2c4
Reviewed-on: https://go-review.googlesource.com/27159
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Our stack frame sizes look pretty good now. Lower the stack
guard from 1024 to 720.
Tip is currently using 720.
We could go lower (to 640 at least) except PPC doesn't like that.
Change-Id: Ie5f96c0e822435638223f1e8a2bd1a1eed68e6aa
Reviewed-on: https://go-review.googlesource.com/19922
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Larger stack frames mean nosplit functions use more stack and so the limit
needs to increase.
The change to test/nosplit.go is a bit ugly but I can't really think of a
way to make it nicer.
Change-Id: I2616b58015f0b62abbd62951575fcd0d2d8643c2
Reviewed-on: https://go-review.googlesource.com/16504
Reviewed-by: Russ Cox <rsc@golang.org>
Skip fixedbugs/issue10607.go because external linking is not supported
yet.
Skip nilptr3.go because of issue #9058 (same as ppc64).
Change-Id: Ib3dfbd9a03ee4052871cf57c74b3cc5e745e1f80
Reviewed-on: https://go-review.googlesource.com/14461
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Modified the GOSSA{HASH,PKG} environment variable filters to
make it easier to build and run with SSA enabled everywhere for testing.
Disable attempts at SSA for architectures that are not
amd64 (to avoid spurious errors/unimplementeds).
Removed easy out for unimplemented features.
Add convert op for proper liveness in presence of uintptr
to/from unsafe.Pointer conversions.
Tweaked stack sizes to get a pass on Windows;
1024 instead of 768 was observed to pass at least once.
Change-Id: Ida3800afcda67d529e3b1cf48ca4a3f0fa48b2c5
Reviewed-on: https://go-review.googlesource.com/16201
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
For variables which get SSA'd, SSA keeps track of all the defs and kills.
It is only for on-stack variables that we need them.
This reduces stack frame sizes significantly because often the
only use of a variable was a varkill, and without that last use
the variable doesn't get allocated in the frame at all.
Fixes #12602
Change-Id: I3f00a768aa5ddd8d7772f375b25f846086a3e689
Reviewed-on: https://go-review.googlesource.com/14758
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
For now, we only use typedmemmove. This can be optimized
in future CLs.
Also add a feature to help with binary searching bad compilations.
Together with GOSSAPKG, GOSSAHASH specifies the last few binary digits
of the hash of function names that should be compiled. So
GOSSAHASH=0110 means compile only those functions whose last 4 bits
of hash are 0110. By adding digits to the front we can binary search
for the function whose SSA-generated code is causing a test to fail.
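A sketch of the matching logic (hypothetical code, not the exact
implementation):

    // ssaHashMatch reports whether fn's name hashes to the binary
    // suffix in GOSSAHASH (e.g. "0110" selects ~1/16 of functions).
    // Uses crypto/sha1, fmt, and strings.
    func ssaHashMatch(fnName, pattern string) bool {
        if pattern == "" {
            return true // no filter: SSA-compile everything
        }
        sum := sha1.Sum([]byte(fnName))
        var bits strings.Builder
        for _, b := range sum {
            fmt.Fprintf(&bits, "%08b", b)
        }
        return strings.HasSuffix(bits.String(), pattern)
    }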
Change-Id: I5a8b6b70c6f034f59e5753965234cd42ea36d524
Reviewed-on: https://go-review.googlesource.com/14530
Reviewed-by: Keith Randall <khr@golang.org>
The layout code has to date insisted on stack frames that are 16-aligned
including the saved LR, and it ensured this by growing the frame itself.
This breaks code that refers to values near the top of the frame by positive
offset from SP, and in general it's too magical: if you see TEXT xxx, $N,
you expect that the frame size is actually N, not sometimes N and sometimes N+8.
This led to a serious bug in the compiler where ambiguously live values
were not being zeroed correctly, which in turn triggered an assertion
in the GC about finding only valid pointers. The compiler has been
fixed to always emit aligned frames, and the hand-written assembly
has also been fixed.
Now that everything is aligned, make unaligned an error instead of
something to "fix" silently.
For #9880.
Change-Id: I05f01a9df174d64b37fa19b36a6b6c5f18d5ba2d
Reviewed-on: https://go-review.googlesource.com/12848
Reviewed-by: Austin Clements <austin@google.com>
This is dead code that was missed
during the 'go tool compile' migration.
Change-Id: Ice2af8a9ef72f8fd5f82225ee261854d93b659f1
Reviewed-on: https://go-review.googlesource.com/10430
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This CL fixes the build to use the newly created go tool compile
and go tool link in place of go tool 5g, go tool 5l, and so on.
See golang-dev thread titled "go tool compile, etc" for background.
Although it was not a primary motivation, this conversion does
reduce the wall clock time and cpu time required for make.bash
by about 10%.
Change-Id: I79cbbdb676cab029db8aeefb99a53178ff55f98d
Reviewed-on: https://go-review.googlesource.com/10288
Reviewed-by: Rob Pike <r@golang.org>
Noopt builds get a larger stack guard. This test must take that into account.
Change-Id: I1b5cbafdbbfee8c369ae1bebd0b900524ebf0d7d
Reviewed-on: https://go-review.googlesource.com/9610
Reviewed-by: Russ Cox <rsc@golang.org>
The main issue is that the misc/cgo/{stdio,life} tests are silently
getting skipped when invoked from run.bash.
run.go should ignore any build tags after the first blank line in the
source file. It already checks for test actions only up to the first
blank line. Build tags must be specified in the same comment block.
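That is, for run.go to honor a constraint, it must sit in the first
comment block together with the test action (hypothetical example):

    // +build cgo
    // run

    package main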
See http://golang.org/cl/3675 for background.
Change-Id: Id8abf000119e3335f7250d8ef34aac7811fc9dff
Reviewed-on: https://go-review.googlesource.com/3812
Reviewed-by: Minux Ma <minux@golang.org>
This adds a "framepointer" GOEXPERIMENT that makes the amd64
toolchain maintain base pointer chains in the same way that gcc
-fno-omit-frame-pointer does. Go doesn't use these saved base
pointers, but this does enable external tools like Linux perf and
VTune to unwind Go stacks when collecting system-wide profiles.
This requires support in the compilers to not clobber BP, support in
liblink for generating the BP-saving function prologue and unwinding
epilogue, and support in the runtime to save BPs across preemption, to
skip saved BPs during stack unwinding and, and to adjust saved BPs
during stack moving.
As with other GOEXPERIMENTs, everything from the toolchain to the
runtime must be compiled with this experiment enabled. To do this,
run make.bash (or all.bash) with GOEXPERIMENT=framepointer.
Change-Id: I4024853beefb9539949e5ca381adfdd9cfada544
Reviewed-on: https://go-review.googlesource.com/2992
Reviewed-by: Russ Cox <rsc@golang.org>
When we start work on Gerrit, ppc64 and garbage collection
work will continue in the master branch, not the dev branches.
(We may still use dev branches for other things later, but
these are ready to be merged, and doing it now, before moving
to Git means we don't have to have dev branches working
in the Gerrit workflow on day one.)
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/183140043
640 bytes ought to be enough for anybody.
We'll bring this back down before Go 1.5. That's issue 9214.
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/188730043