Turns out the summary information for the ... args was
already correctly computed; all that was lacking was to make
use of it and to correct tests that documented our prior
deficiencies.
Fixes #12006
Change-Id: Ie8adfab7547f179391d470679598f0904aabf9f7
Reviewed-on: https://go-review.googlesource.com/15200
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Modified esc.go to allow slice literals (before append)
to be non-escaping. Modified tests to account for changes
in escape behavior and to also test the two cases that
were previously not tested.
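For illustration, a sketch (mine, not from the CL) of the pattern that can now stay on the stack:

func sum() int {
    xs := []int{1, 2, 3} // slice literal: backing array may be stack-allocated
    xs = append(xs, 4)   // the append no longer forces the literal to escape
    total := 0
    for _, x := range xs {
        total += x
    }
    return total // xs never escapes, so no heap allocation is needed
}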
Also, minor cleanups to debug-printing within esc.go.
Allocation stats for running the compiler:
( cd src/html/template;
for i in {1..5} ; do
go tool 6g -memprofile=testzz.${i}.prof -memprofilerate=1 *.go ;
go tool pprof -alloc_objects -text testzz.${i}.prof ;
done ; )
before about 86k allocations
after about 83k allocations
Fixes #8972
Change-Id: Ib61dd70dc74adb40d6f6fdda6eaa4bf7d83481de
Reviewed-on: https://go-review.googlesource.com/10118
Reviewed-by: Russ Cox <rsc@golang.org>
This includes the following information in the per-function summary:
outK = paramJ    encoded in outK bits for paramJ
outK = *paramJ   encoded in outK bits for paramJ
heap = paramJ    EscHeap
heap = *paramJ   EscContentEscapes
Note that (currently) if the address of a parameter is taken and
returned, necessarily a heap allocation occurred to contain that
reference, and the heap can never refer to stack, therefore the
parameter and everything downstream from it escapes to the heap.
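As a hedged illustration (function names are mine), the two flavors of flow the summary distinguishes:

func direct(p *int) *int    { return p }  // out0 = param0
func indirect(p **int) *int { return *p } // out0 = *param0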
The per-function summary information now has a tuneable number of bits
(2 is probably noticeably better than 1, 3 is likely overkill, but it
is now easy to check, and the -m debugging output includes information
that allows you to figure out whether more would be better).
A new test was added to check pointer flow through struct-typed and
*struct-typed parameters and returns; some of these are sensitive to
the number of summary bits, and ought to yield better results with a
more competent escape analysis algorithm. Another new test checks
(some) correctness with array parameters, results, and operations.
The old analysis inferred that a piece of the plan9 runtime was non-escaping
by counteracting overconservative analysis with buggy analysis; with the
bug fixed, the result was too conservative (and it's not easy to fix
in this framework), so the source code was tweaked to get the desired
result. A test was added against the discovered bug.
The escape analysis was further improved by splitting the "level" into
three parts: one tracking the conventional "level", and the other two
computing the highest-level-suffix-from-copy, which is used to model
in general the cancelling effect of indirection applied to address-of.
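A rough sketch of the cancellation being modeled (example mine; "level" is informal here):

func f() int {
    x := 7
    p := &x   // address-of lowers the level
    return *p // the indirection cancels it: x need not escape
}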
With the improved escape analysis enabled, it was necessary to
modify one of the runtime tests because it now attempts to allocate
too much on the (small, fixed-size) G0 (system) stack and this
failed the test.
Compiling src/std after touching src/runtime/*.go with -m logging
turned on shows 420 fewer heap allocation sites (10538 vs 10968).
Profiling allocations in src/html/template with
for i in {1..5} ;
do go tool 6g -memprofile=mastx.${i}.prof -memprofilerate=1 *.go;
go tool pprof -alloc_objects -text mastx.${i}.prof ;
done
showed a 15% reduction in allocations performed by the compiler.
Update #3753
Update #4720
Fixes #10466
Change-Id: I0fd97d5f5ac527b45f49e2218d158a6e89951432
Reviewed-on: https://go-review.googlesource.com/8202
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The flag updates error annotations in test files from actual compiler output.
This is useful when doing compiler changes that add/remove/change lots of errors,
or when adding lots of new tests.
Also, I noticed at least two cases where annotations were suboptimal:
1. The annotation was "leaking param p" when the actual error is
"leaking param p to result ~r1".
2. The annotation was "leaking param m" when the actual errors
are "leaking param m" and "leaking param mv1".
For now it works only for errorcheck mode.
Also, apply the update to escape and liveness tests.
Some files have gccgo-specific errors of the form "gc error|gccgo error",
so it is risky to run the update on all files. A gccgo-specific error
does not necessarily contain '|'; it can be just truncated.
Change-Id: Iaaae767f859dcb8321a8cb4970b2b70969e8a345
Reviewed-on: https://go-review.googlesource.com/5310
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Extend escape analysis to convT2E and convT2I. If the interface value
does not escape, supply the runtime with a stack buffer for the object copy.
This is a straight port of Dmitry's patch from .c to .go.
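A minimal sketch (mine) of a conversion that can now use a stack buffer:

func sum2(a [2]int) int {
    var i interface{} = a // convT2E: i does not escape,
    v := i.([2]int)       // so the copy of a can go in a stack buffer
    return v[0] + v[1]
}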
Change-Id: Ic315dd50d144d94dd3324227099c116be5ca70b6
Reviewed-on: https://go-review.googlesource.com/8201
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The false positives (var incorrectly escapes) are marked with BAD.
Change-Id: If64fabb6ea96de44a1177d9ab12e2ccc579fe0c4
Reviewed-on: https://go-review.googlesource.com/5294
Reviewed-by: Keith Randall <khr@golang.org>
Support the following conversions in escape analysis:
[]rune("foo")
[]byte("foo")
string([]rune{})
If the result does not escape, allocate a temp buffer on the stack
and pass it to the runtime functions.
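For example (a sketch under the same assumption that the result stays local):

func runeCount() int {
    r := []rune("héllo") // result does not escape:
    return len(r)        // the temp buffer can live on the stack
}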
Change-Id: I1d075907eab8b0109ad7ad1878104b02b3d5c690
Reviewed-on: https://go-review.googlesource.com/3590
Reviewed-by: Russ Cox <rsc@golang.org>
The language specification says that variables are captured by reference,
and that is what the gc compiler does. However, in lots of cases it is
possible to capture variables by value under the hood without
affecting the visible behavior of programs. For example, consider
the following typical pattern:
func (o *Obj) requestMany(urls []string) []Result {
    wg := new(sync.WaitGroup)
    wg.Add(len(urls))
    res := make([]Result, len(urls))
    for i := range urls {
        i := i
        go func() {
            res[i] = o.requestOne(urls[i])
            wg.Done()
        }()
    }
    wg.Wait()
    return res
}
Currently o, wg, res, and i are captured by reference, causing 3+len(urls)
allocations (e.g. PPARAM o is promoted to PPARAMREF and moved to the heap).
But all of them can be captured by value without changing behavior.
This change implements a simple strategy for capturing by value:
if a captured variable is not addrtaken and is never assigned to,
then it is captured by value (it is effectively const).
This simple strategy turned out to be very effective:
~80% of all captures in std lib are turned into value captures.
The remaining 20% are mostly in defers and non-escaping closures,
that is, they do not cause allocations anyway.
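A hedged sketch (mine) of what does and does not qualify under this strategy:

func captures() {
    x := 0
    go func() { x++ }() // x is assigned to: still captured by reference
    y := 1
    go func() { println(y) }() // y is effectively const: captured by value
}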
benchmark old allocs new allocs delta
BenchmarkCompressedZipGarbage 153 126 -17.65%
BenchmarkEncodeDigitsSpeed1e4 91 69 -24.18%
BenchmarkEncodeDigitsSpeed1e5 178 129 -27.53%
BenchmarkEncodeDigitsSpeed1e6 1510 1051 -30.40%
BenchmarkEncodeDigitsDefault1e4 100 75 -25.00%
BenchmarkEncodeDigitsDefault1e5 193 139 -27.98%
BenchmarkEncodeDigitsDefault1e6 1420 985 -30.63%
BenchmarkEncodeDigitsCompress1e4 100 75 -25.00%
BenchmarkEncodeDigitsCompress1e5 193 139 -27.98%
BenchmarkEncodeDigitsCompress1e6 1420 985 -30.63%
BenchmarkEncodeTwainSpeed1e4 109 81 -25.69%
BenchmarkEncodeTwainSpeed1e5 211 151 -28.44%
BenchmarkEncodeTwainSpeed1e6 1588 1097 -30.92%
BenchmarkEncodeTwainDefault1e4 103 77 -25.24%
BenchmarkEncodeTwainDefault1e5 199 143 -28.14%
BenchmarkEncodeTwainDefault1e6 1324 917 -30.74%
BenchmarkEncodeTwainCompress1e4 103 77 -25.24%
BenchmarkEncodeTwainCompress1e5 190 137 -27.89%
BenchmarkEncodeTwainCompress1e6 1327 919 -30.75%
BenchmarkConcurrentDBExec 16223 16220 -0.02%
BenchmarkConcurrentStmtQuery 17687 16182 -8.51%
BenchmarkConcurrentStmtExec 5191 5186 -0.10%
BenchmarkConcurrentTxQuery 17665 17661 -0.02%
BenchmarkConcurrentTxExec 15154 15150 -0.03%
BenchmarkConcurrentTxStmtQuery 17661 16157 -8.52%
BenchmarkConcurrentTxStmtExec 3677 3673 -0.11%
BenchmarkConcurrentRandom 14000 13614 -2.76%
BenchmarkManyConcurrentQueries 25 22 -12.00%
BenchmarkDecodeComplex128Slice 318 252 -20.75%
BenchmarkDecodeFloat64Slice 318 252 -20.75%
BenchmarkDecodeInt32Slice 318 252 -20.75%
BenchmarkDecodeStringSlice 2318 2252 -2.85%
BenchmarkDecode 11 8 -27.27%
BenchmarkEncodeGray 64 56 -12.50%
BenchmarkEncodeNRGBOpaque 64 56 -12.50%
BenchmarkEncodeNRGBA 67 58 -13.43%
BenchmarkEncodePaletted 68 60 -11.76%
BenchmarkEncodeRGBOpaque 64 56 -12.50%
BenchmarkGoLookupIP 153 139 -9.15%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
BenchmarkGoLookupIPWithBrokenNameServer 245 226 -7.76%
BenchmarkClientServer 62 59 -4.84%
BenchmarkClientServerParallel4 62 59 -4.84%
BenchmarkClientServerParallel64 62 59 -4.84%
BenchmarkClientServerParallelTLS4 79 76 -3.80%
BenchmarkClientServerParallelTLS64 112 109 -2.68%
BenchmarkCreateGoroutinesCapture 10 6 -40.00%
BenchmarkAfterFunc 1006 1005 -0.10%
Fixes #6632.
Change-Id: I0cd51e4d356331d7f3c5f447669080cd19b0d2ca
Reviewed-on: https://go-review.googlesource.com/3166
Reviewed-by: Russ Cox <rsc@golang.org>
If the result of string(i) does not escape,
allocate a [4]byte temp on the stack for it.
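A sketch of the pattern (example mine):

func isComma(r rune) bool {
    return string(r) == "," // result does not escape: a [4]byte stack temp suffices
}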
Change-Id: If31ce9447982929d5b3b963fd0830efae4247c37
Reviewed-on: https://go-review.googlesource.com/3411
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we always allocate string buffers in the heap.
For example, in the following code we allocate a temp string
just for the comparison:
if string(byteSlice) == "abc" { ... }
This change extends escape analysis to cover []byte->string
conversions and string concatenation. If the result of the operation
does not escape, the compiler allocates a small buffer
on the stack and passes it to slicebytetostring and concatstrings.
The runtime then uses the buffer if the result fits into it.
The size of the buffer is 32 bytes. There is no fundamental theory
behind this number, just an observation that on std lib
tests/benchmarks the frequency of string allocation is inversely
proportional to string length, and there is a significant number
of allocations up to length 32.
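A sketch of the comparison case (example mine):

func isAbc(b []byte) bool {
    // the temporary string is built in the 32-byte stack buffer
    // when it fits, since the result of the conversion never escapes
    return string(b) == "abc"
}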
benchmark old allocs new allocs delta
BenchmarkFprintfBytes 2 1 -50.00%
BenchmarkDecodeComplex128Slice 318 316 -0.63%
BenchmarkDecodeFloat64Slice 318 316 -0.63%
BenchmarkDecodeInt32Slice 318 316 -0.63%
BenchmarkDecodeStringSlice 2318 2316 -0.09%
BenchmarkStripTags 11 5 -54.55%
BenchmarkDecodeGray 111 102 -8.11%
BenchmarkDecodeNRGBAGradient 200 188 -6.00%
BenchmarkDecodeNRGBAOpaque 165 152 -7.88%
BenchmarkDecodePaletted 319 309 -3.13%
BenchmarkDecodeRGB 166 157 -5.42%
BenchmarkDecodeInterlacing 279 268 -3.94%
BenchmarkGoLookupIP 153 135 -11.76%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
BenchmarkGoLookupIPWithBrokenNameServer 245 226 -7.76%
BenchmarkClientServerParallel4 62 61 -1.61%
BenchmarkClientServerParallel64 62 61 -1.61%
BenchmarkClientServerParallelTLS4 79 78 -1.27%
BenchmarkClientServerParallelTLS64 112 111 -0.89%
benchmark old ns/op new ns/op delta
BenchmarkFprintfBytes 381 311 -18.37%
BenchmarkStripTags 2615 2351 -10.10%
BenchmarkDecodeNRGBAGradient 3715887 3635096 -2.17%
BenchmarkDecodeNRGBAOpaque 3047645 2928644 -3.90%
BenchmarkGoLookupIP 153 135 -11.76%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
Change-Id: I9ec01da816945c3329d7be3c7794b520418c3f99
Reviewed-on: https://go-review.googlesource.com/3120
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Escape analysis treats everything assigned to OIND/ODOTPTR as escaping.
As a result, b escapes in the following code:
func (b *Buffer) Foo() {
    n, m := ...
    b.buf = b.buf[n:m]
}
This change recognizes such assignments and ignores them.
Update issue #9043.
Update issue #7921.
There are two similar cases in std lib that benefit from this optimization.
First is in archive/zip:
type readBuf []byte

func (b *readBuf) uint32() uint32 {
    v := binary.LittleEndian.Uint32(*b)
    *b = (*b)[4:]
    return v
}
Second is in time:
type data struct {
    p     []byte
    error bool
}

func (d *data) read(n int) []byte {
    if len(d.p) < n {
        d.p = nil
        d.error = true
        return nil
    }
    p := d.p[0:n]
    d.p = d.p[n:]
    return p
}
benchmark old ns/op new ns/op delta
BenchmarkCompressedZipGarbage 32431724 32217851 -0.66%
benchmark old allocs new allocs delta
BenchmarkCompressedZipGarbage 153 143 -6.54%
Change-Id: Ia6cd32744e02e36d6d8c19f402f8451101711626
Reviewed-on: https://go-review.googlesource.com/3162
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently all PTRLIT element initializers escape. There is no reason for that.
This change links STRUCTLIT to PTRLIT; STRUCTLIT element initializers are
already linked to the STRUCTLIT. As a result, PTRLIT element initializers
escape only when the PTRLIT itself escapes.
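A minimal illustration (type and names are mine):

type point struct{ x, y int }

func dist2() int {
    p := &point{x: 3, y: 4}  // if p does not escape, neither the literal
    return p.x*p.x + p.y*p.y // nor its element initializers need the heap
}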
Change-Id: I89ecd8677cbf81addcfd469cd2fd461c0e9bf7dd
Reviewed-on: https://go-review.googlesource.com/3031
Reviewed-by: Russ Cox <rsc@golang.org>
There is a hierarchy of locations defined by loop depth:
-1 = the heap
 0 = function results
 1 = local variables (and parameters)
 2 = a local variable declared inside a loop
 3 = a local variable declared inside a loop inside a loop
etc.
In general if an address from loopdepth n is assigned to
something in loop depth m < n, that indicates an extended
lifetime of some form that requires a heap allocation.
Function results can be local variables too, though, and so
they don't actually fit into the hierarchy very well.
Treat the address of a function result as level 1 so that
if it is written back into a result, the address is treated
as escaping.
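A sketch (mine) of the rule in action:

func lastSquare() *int {
    var saved *int // loop depth 1
    for i := 0; i < 10; i++ {
        x := i * i // loop depth 2
        saved = &x // depth-2 address stored at depth 1: x must be heap-allocated
    }
    return saved
}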
Fixes #8185.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/108870044
The analysis for &x was using the loop depth on x set
during x's declaration. A type switch creates a list of
implicit declarations that were not getting initialized
with loop depths.
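A hedged sketch of the affected shape (example mine):

func pick(v interface{}) *int {
    switch x := v.(type) { // each case implicitly declares its own x
    case int:
        return &x // &x needs x's loop depth, which was left uninitialized
    }
    return nil
}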
Fixes #8176.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/108860043
I introduced this bug when I changed the escape
analysis to run in phases based on call graph
dependency order, in order to be more precise about
inputs escaping back to outputs (functions returning
their arguments).
Given
func f(z **int) *int { return *z }
we were tagging the function as 'z does not escape
and is not returned', which is all true, but not
enough information.
If used as:
var x int
p := &x
q := &p
leak(f(q))
then the compiler might try to keep x, p, and q all
on the stack, since (according to the recorded
information) nothing interesting ends up being
passed to leak.
In fact since f returns *q = p, &x is passed to leak
and x needs to be heap allocated.
To trigger the bug, you need a chain that the
compiler wants to keep on the stack (like x, p, q
above), and you need a function that returns an
indirect of its argument, and you need to pass the
head of the chain to that function. This doesn't
come up very often: this bug has been present since
June 2012 (between Go 1 and Go 1.1) and we haven't
seen it until now. It helps that most functions that
return indirects are getters that are simple enough
to be inlined, avoiding the bug.
Earlier versions of Go also had the benefit that if
&x really wasn't used beyond x's lifetime, nothing
broke if you put &x in a heap-allocated structure
accidentally. With the new stack copying, though,
heap-allocated structures containing &x are not
updated when the stack is copied and x moves,
leading to crashes in Go 1.3 that were not crashes
in Go 1.2 or Go 1.1.
The fix is in two parts.
First, in the analysis of a function, recognize when
a value obtained via indirect of a parameter ends up
being returned. Mark those parameters as having
content escape back to the return results (but we
don't bother to write down which result).
Second, when using the analysis to analyze, say,
f(q), mark parameters with content escaping as
having any indirections escape to the heap. (We
don't bother trying to match the content to the
return value.)
The fix could be less precise (simpler).
In the first part we might mark all content-escaping
parameters as plain escaping, and then the second
part could be dropped. Or we might assume that when
calling f(q) all the things pointed at by q escape
always (for any f and q).
The fix could also be more precise (more complex).
We might record the specific mapping from parameter
to result along with the number of indirects from the
parameter to the thing being returned as the result,
and then at the call sites we could set up exactly the
right graph for the called function. That would make
notleaks(f(q)) be able to keep x on the stack, because
the result of f(q) isn't passed to anything that leaks it.
The less precise the fix, the more stack allocations
become heap allocations.
This fix is exactly as precise as it needs to be so that
none of the current stack allocations in the standard
library turn into heap allocations.
Fixes #8120.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, khr, r
https://golang.org/cl/102040046
If the ... element type contained no pointers,
then the escape analysis did not track the ... itself.
This manifested in an escaping ...byte being treated
as non-escaping.
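A sketch of the mishandled shape (names are mine):

var sink []byte

// save's ...byte slice escapes via sink; the old analysis missed
// this because the element type byte contains no pointers.
func save(b ...byte) {
    sink = b
}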
Fixes #7934.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100310043
Make the loop nesting depth of &x depend on where x is declared,
not on where the &x appears. The latter is only a conservative
estimate of the former. Being more careful can avoid some
variables escaping, and it is easier to reason about.
It would have avoided issue 7313, although that was still a bug
worth fixing.
Not much effect in the tree: one variable in the whole tree
is saved from a heap allocation (something in x509 parsing).
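A sketch of the improvement (example mine):

func f() {
    x := 0     // declared at depth 1
    var p *int // also depth 1
    for i := 0; i < 3; i++ {
        // &x now carries x's depth (1), not the loop's depth (2),
        // so this store no longer looks like an extended lifetime.
        p = &x
    }
    _ = p
}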
LGTM=daniel.morsing
R=daniel.morsing
CC=golang-codereviews
https://golang.org/cl/62380043
Logically, the init statement is in the enclosing scope's loopdepth, not inside the for loop.
Fixes #7313.
LGTM=rsc
R=golang-codereviews, gobot, rsc
CC=golang-codereviews
https://golang.org/cl/62430043
Functions without bodies were excluded from the ordering logic,
because when I wrote the ordering logic there was no reason to
analyze them.
But then we added //go:noescape tags that need analysis, and we
didn't update the ordering logic.
So in the absence of good ordering, //go:noescape only worked
if it appeared before the use in the source code.
Fixes #5773.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10570043
Escape analysis used to not mark a parameter as escaping if only
one of the fields it points to leaked out of the function. This caused
problems when importing from another package.
Fixes #4964.
R=rsc, lvd, dvyukov, daniel.morsing
CC=golang-dev
https://golang.org/cl/7648045
A new comment directive //go:noescape instructs the compiler
that the following external (no body) func declaration should be
treated as if none of its arguments escape to the heap.
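For example (a sketch; the function is hypothetical and its body would live in an assembly file):

//go:noescape
func crc32sum(data []byte) uint32 // treated as if data does not escape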
Fixes #4099.
R=golang-dev, dave, minux.ma, daniel.morsing, remyoudompheng, adg, agl, iant
CC=golang-dev
https://golang.org/cl/7289048
If the analysis reached a node twice, then the analysis was cut off.
However, if the second arrival is at a lower depth (closer to escaping)
then it is important to repeat the traversal.
The repeating must be cut off at some point to avoid the occasional
infinite recursion. This CL cuts it off as soon as possible while still
passing all tests.
Fixes #4751.
R=ken2
CC=golang-dev, lvd
https://golang.org/cl/7303043
A fatal error used to happen when escassign-ing a multiple
function return to a single node. However, the situation
naturally appears when using "go f(g())" or "defer f(g())",
because g() is escassign-ed to sink.
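A minimal reproduction of the shape (names are mine):

func g() (int, error) { return 0, nil }
func f(n int, err error) {}

func h() {
    go f(g())    // g's results are escassign-ed to a single sink node
    defer f(g()) // both forms used to hit the fatal error
}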
Fixes #4529.
R=golang-dev, lvd, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6920060
Includes step 0: synthesize outparams (from CL 6600044).
Includes steps 1 and 2: give outparams loopdepth 0 and verify unchanged results;
generate esc:$mask tags, but still tie to sink if a param has mask != 0
(from CL 6610054).
Adds the final steps:
- have esccall generate n->escretval, a list of nodes the function results flow to
- use these in esccall and ORETURN/OAS2FUNC and f(g())
- only tie parameters to sink if the tag is absent; otherwise tie them to escretval according to the mask
R=rsc, bradfitz
CC=dave, gobot, golang-dev, iant, rsc
https://golang.org/cl/6741044
Was not handling &x.y[0] and &x.y.z correctly where
y is an array or struct-valued field (not a pointer).
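The shapes in question look like this (types are mine):

type inner struct{ z int }

type outer struct {
    y inner // struct-valued field, not a pointer
}

func f(x *outer) *int {
    return &x.y.z // address taken through a value-typed field
}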
R=ken2
CC=golang-dev
https://golang.org/cl/6551059
Today, if run.go doesn't understand a test header line it just ignores
the test, making it too easy to write or edit tests that are not actually
being run.
- expand errorcheck to accept flags, so that bounds.go and escape*.go can run.
- create a whitelist of skippable tests in run.go; skipping others is an error.
- mark all skipped tests at top of file.
Update #4139.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6549054
The escape analysis code does not make a distinction between
scalar and pointer fields in structs. Non-pointer fields
that escape should not make the whole struct escape.
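A sketch of the distinction (types and names are mine):

type t struct {
    n int  // scalar field
    p *int // pointer field
}

var sink int

func f(v *t) {
    // only the scalar's value flows out here; that should not
    // force v (or whatever v.p points to) to escape.
    sink = v.n
}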
R=lvd, rsc
CC=golang-dev, remy
https://golang.org/cl/5489128
This avoids degraded performance caused by extra labels emitted by inlining
(which broke the strconv ftoa alloc-count unit test) and is better in any case.
R=rsc
CC=golang-dev
https://golang.org/cl/5483071
mark OADDR inserted by typecheck as implicit
OCOPY takes ->left and ->right, not ->list
OMAKE*'s can all have arguments
precedence for OIND was initialized twice
Fixes #2414.
R=rsc, dave
CC=golang-dev
https://golang.org/cl/5319065
string literals used as package qualifiers are now prefixed with '@'
which obviates the need for the extra ':' before tags.
R=rsc, gri, lvd
CC=golang-dev
https://golang.org/cl/5129057
Was keeping a pointer to the labeled statement in n->right,
which meant that generic traversals of the tree visited it twice.
That, combined with aggressive flattening of the block
structure during parsing where possible, meant that
the kinds of label: code label: code label: code sequences
generated by yacc were giving the recursion 2ⁿ paths
through the program.
Fixes #2212.
R=lvd
CC=golang-dev
https://golang.org/cl/4960050