The biggest change is that each test is now responsible for managing
the starting and stopping of its parallel subtests.
The "Main" test could be run as a tRunner as well. This shows that
the introduction of subtests is merely a generalization of and
consistent with the current semantics.
Change-Id: Ibf8388c08f85d4b2c0df69c069326762ed36a72e
Reviewed-on: https://go-review.googlesource.com/18893
Reviewed-by: Russ Cox <rsc@golang.org>
Make sure we don't generate write barriers in runtime
code that is marked to forbid write barriers.
Implement the optimization that if we're writing a sliced
slice back to the location it came from, we don't need a
write barrier.
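A hedged sketch of the pattern in question (variable names made up):

	s = s[:n] // the slice being stored back is derived from the old value
	          // of s, so the pointer component of the slice is unchanged
	          // and the store needs no write barrier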
Fixes #14784
Change-Id: I04b6a3b2ac303c19817e932a36a3b006de103aaa
Reviewed-on: https://go-review.googlesource.com/20791
Reviewed-by: Austin Clements <austin@google.com>
This should probably be considered "experimental" at this stage, but
what it needs is feedback from adventurous adopters. I think the data
structure used for describing escape reasons might be extendable to
allow a cleanup of the underlying algorithms, which suffer from
insufficiently separated concerns (the graph does not deal well with
escape level adjustments, so it is augmented by a second custom-walk
portion of the "flood" phase). It would be better to put it all,
including level adjustments, in a single graph structure, and then
simply flood the graph.
Tweaked to avoid allocations in the no-logging case.
Modified run.go to ignore lines with leading "#" in the output (since
it can never match a line), and in -update_errors to ignore leading
tabs in output lines and to normalize embedded filenames.
Currently requires -m -m because otherwise the noise/update
burden for the other escape tests is considerable.
There is a partial test. Existing escape analysis tests seem to
cover all except the panic case and what looks like it might be
unreachable code in escape analysis.
Fixes #10526.
Change-Id: I2524fdec54facae48b00b2548e25d9e46fcaf832
Reviewed-on: https://go-review.googlesource.com/18041
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Keep track of how many uses each Value has. Each appearance in
Value.Args and in Block.Control counts once.
The number of uses of a value is generically useful to
constrain rewrite rules. For instance, we might want to
prevent merging index operations into loads if the same
index expression is used lots of times.
But I have one use in particular for which the use count is required.
We must make sure we don't combine ops with loads if the load has
more than one use. Otherwise, we may split a single load
into multiple loads and that breaks perceived behavior in
the presence of races. In particular, the load of m.state
in sync/mutex.go:Lock can't be done twice. (I have a separate
CL which triggers the mutex failure. This CL has a test which
demonstrates a similar failure.)
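For illustration only (hypothetical names, not the sync/mutex code or
the new test), a value with two uses that must not be split back into
two loads:

	v := m.state              // a single load in the source
	if v&locked != 0 && v&woken != 0 {
		// Both uses must observe the same loaded value. If the load were
		// folded separately into each comparison, a concurrent writer
		// could make the duplicated loads disagree.
	}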
Change-Id: Icaafa479239f48632a069d0c3f624e6ebc6b1f0e
Reviewed-on: https://go-review.googlesource.com/20790
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
Receiver parameters generally aren't relevant to the function
signature type. In particular:
1. When checking whether a type's method implements an interface's
method, we specifically want to ignore the receiver parameters,
because they'll be different.
2. When checking interface type equality, interface methods always
use the same "fakethis" *struct{} type as their receiver.
3. Finally, method expressions and method values degenerate into
receiver-less function types.
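For example (hypothetical types, illustrating case 3), method
expressions and method values carry no receiver slot in their types:

	type T struct{}

	func (T) M(x int) bool { return x > 0 }

	var _ func(T, int) bool = T.M // method expression: the receiver becomes
	                              // an ordinary first parameter
	var t T
	var _ func(int) bool = t.M    // method value: plain function type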
The only case where we care about receiver types matching is in
addmethod, which is easily handled by adding an extra Eqtype check of
the receiver parameters. Also, added a test for this, since
(surprisingly) there weren't any.
As precedent, go/types.Identical ignores receiver parameters when
comparing go/types.Signature values.
Notably, this allows us to slightly simplify the "implements"
function, which is used for checking whether type/interface t
implements interface iface. Currently, cmd/compile actually works
around Eqtype's receiver parameter checking by creating new throwaway
TFUNC Types without the receiver parameter.
(Worse, the compiler currently only provides APIs to build TFUNC Types
from Nod syntax trees, so building those throwaway types also involves
first building throwaway syntax trees.)
Passes toolstash -cmp.
Change-Id: Ib07289c66feacee284e016bc312e8c5ff674714f
Reviewed-on: https://go-review.googlesource.com/20602
Reviewed-by: Robert Griesemer <gri@golang.org>
Currently we generate write barriers when the right side of an
assignment is a global function. This doesn't fall into the existing
case of storing an address of a global because we haven't lowered the
function to a pointer yet.
This write barrier is unnecessary, so eliminate it.
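A hedged sketch of the kind of assignment involved (made-up names, not
necessarily the code from the issue):

	func callback() {}

	var handler func()

	func install() {
		handler = callback // the right side is a global function, so the
	}                          // store needs no write barrier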
Fixes #13901.
Change-Id: Ibc10e00a8803db0fd75224b66ab94c3737842a79
Reviewed-on: https://go-review.googlesource.com/20772
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Step 2 of streamlining parameter parsing
- do parameter validity checks in parser
- two passes instead of multiple (and theoretically quadratic) passes
when checking parameters
- removes the need for OKEY and some ONONAME nodes in those passes
This removes allocation of ~123K OKEY (incl. some ONONAME) nodes
out of a total of ~10M allocated nodes when running make.bash,
a ~1.2% reduction in the number of allocated nodes.
Change-Id: I4a8ec578d0ee2a7b99892ac6b92e56f8e0415f03
Reviewed-on: https://go-review.googlesource.com/20748
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Robert Griesemer <gri@golang.org>
* Refactors the saving and restoring of parent restrictions a bit.
* Shaves ~100k from pkg/tool/linux_amd64,
but most of the savings come from the rewrite rules.
* Improves on the following artificial test case:
func f1(a4 bool, a6 bool) bool {
return a6 || (a6 || (a6 || a4)) || (a6 || (a4 || a6 || (false || a6)))
}
Change-Id: I714000f75a37a3a6617c6e6834c75bd23674215f
Reviewed-on: https://go-review.googlesource.com/20306
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Removes an intermediate layer of functions that was clogging up a
corner of the compiler's profile graph.
I can't measure a performance improvement running a large build
like jujud, but the profile reports less total time spent in
gc.(*lexer).getr.
Change-Id: I3000585cfcb0f9729d3a3859e9023690a6528591
Reviewed-on: https://go-review.googlesource.com/20565
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In addition to reflect.Value.Call, exported methods can be invoked
by the Func value in the reflect.Method struct. This CL has the
compiler track what functions get access to a legitimate reflect.Method
struct by looking for interface calls to either of:
Method(int) reflect.Method
MethodByName(string) (reflect.Method, bool)
This is a little overly conservative. If a user implements a type
with one of these methods without using the underlying calls on
reflect.Type, the linker will assume the worst and include all
exported methods. But it's cheap.
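For example (illustrative only; the method name is made up), code like
this obtains a legitimate reflect.Method and can call the method
through its Func field:

	t := reflect.TypeOf(v)
	if m, ok := t.MethodByName("Close"); ok {
		m.Func.Call([]reflect.Value{reflect.ValueOf(v)})
	}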
No change to any of the binary sizes reported in cl/20483.
For #14740
Change-Id: Ie17786395d0453ce0384d8b240ecb043b7726137
Reviewed-on: https://go-review.googlesource.com/20489
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The location of VARDEFs is incorrect for PPARAMOUT variables
which are also used as temporary locations. We put in VARDEFs
when setting the variable at return time, but when the location
is also used as a temporary the lifetime values are wrong.
Fix copyelim to update the names map properly. This is a
real name bug fix which, as a result, allows me to
write a reasonable test to trigger the PPARAMOUT bug.
This is kind of a band-aid fix for #14591. A more principled
fix (which allows values to be stored in the return variable
earlier than the return point) will be harder.
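A hedged sketch of the shape that exercises this (not the new test
itself): a named result whose slot is written, and used, well before
the return point:

	func f() (x [4]int) {
		for i := range x {
			x[i] = i // the result slot doubles as working storage
		}
		return
	}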
Fixes #14591
Change-Id: I7df8ae103a982d1f218ed704c080d7b83cdcfdd9
Reviewed-on: https://go-review.googlesource.com/20457
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Make sure we do any just-before-return cleanup on all paths out of a
function, including when recovering. Each exit path should include
deferreturn (if there are any defers) and then the exit
code (e.g. copying heap-escaping return values back to the stack).
Introduce a Defer SSA block type which has two outgoing edges - one the
fallthrough edge (the defer was queued successfully) and one which
immediately returns (the defer had a successful recover() call and
normal execution should resume at the return point).
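A minimal sketch of the control flow in question (hypothetical
function, not the new test):

	func f() (x *int) {
		x = new(int)
		defer func() {
			// A successful recover here resumes execution at f's return
			// point, which must still run the exit code: deferreturn and
			// copying x back if it has escaped to the heap.
			recover()
		}()
		panic("boom")
	}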
Fixes #14725
Change-Id: Iad035c9fd25ef8b7a74dafbd7461cf04833d981f
Reviewed-on: https://go-review.googlesource.com/20486
Reviewed-by: David Chase <drchase@google.com>
In increment and decrement statements, explicitly check earlier that
the type of the operand is numeric. This avoids emitting a related but
less clear error about converting "1".
So, when compiling
package main
func main() {
var x bool
x++
}
instead of emitting two errors
prog.go:5: cannot convert 1 to type bool
prog.go:5: invalid operation: x++ (non-numeric type bool)
the compiler just emits the second error.
Fixes #12525.
Change-Id: I6e81330703765bef0d6eb6c57098c1336af7c799
Reviewed-on: https://go-review.googlesource.com/20245
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The new check corresponds to the (etype != TANY || Debug['A'] != 0)
that was lost in golang.org/cl/19936.
Fixes #14652.
Change-Id: Iec3788ff02529b3b0f0d4dd92ec9f3ef20aec849
Reviewed-on: https://go-review.googlesource.com/20271
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When the linker was written in C, command line arguments were passed
around as null-terminated byte arrays which encouraged checking
characters one at a time. In Go, that can easily lead to
out-of-bounds panics.
Use the more idiomatic strings.HasPrefix when checking cmd/link's -B
argument to avoid the panic, and replace the manual hex decode with
use of the encoding/hex package.
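A hedged sketch of the new shape of the check (simplified, assuming
the usual strings and encoding/hex imports; not the exact cmd/link
code):

	if !strings.HasPrefix(arg, "0x") {
		return fmt.Errorf("-B argument must start with 0x")
	}
	id, err := hex.DecodeString(arg[2:])
	if err != nil {
		return fmt.Errorf("-B argument contains invalid hex: %v", err)
	}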
Fixes #14636
Change-Id: I45f765bbd8cf796fee1a9a3496178bf76b117827
Reviewed-on: https://go-review.googlesource.com/20211
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The changes to internal/big were generated automatically
by running vendor.bash in that directory.
Also added a corresponding test case.
For #14553.
Change-Id: I98b124bcc9ad9e9bd987943719be27864423cb5d
Reviewed-on: https://go-review.googlesource.com/20199
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Static branch predictions (which guide block ordering) are
adjusted based on:
loop/not-loop (favor looping)
abnormal-exit/not (avoid panic)
call/not-call (avoid call)
ret/default (treat returns as rare)
This appears to make no difference in performance of real
code, meaning the compiler itself. The earlier version of
this has been stripped down to help make the cost of this
only-aesthetic-on-Intel phase be as cheap as possible (we
probably want information about inner loops for improving
register allocation, but because register allocation follows
close behind this pass, conceivably the information could be
reused -- so we might do this anyway just to normalize
output).
For a ./make.bash that takes 200 user seconds, about 0.75
seconds is reported in likelyadjust (summing nanoseconds
reported with -d=ssa/likelyadjust/time).
Upstream predictions are respected.
Includes test, limited to build on amd64 only.
Did several iterations on the debugging output to allow
some rough checks on behavior.
Debug=1 logging notes agree/disagree with earlier passes,
allowing analysis like the following:
Run on make.bash:
GO_GCFLAGS=-d=ssa/likelyadjust/debug \
./make.bash >& lkly5.log
grep 'ranch prediction' lkly5.log | wc -l
78242 // 78k predictions
grep 'ranch predi' lkly5.log | egrep -v 'agrees with' | wc -l
29633 // 29k NEW predictions
grep 'disagrees' lkly5.log | wc -l
444 // contradicted 444 times
grep '< exit' lkly5.log | wc -l
10212 // 10k exit predictions
grep '< exit' lkly5.log | egrep 'disagrees' | wc -l
5 // 5 contradicted by previous prediction
grep '< exit' lkly5.log | egrep -v 'agrees' | wc -l
702 // 702-5 redundant with previous prediction
grep '< call' lkly5.log | egrep -v 'agrees' | wc -l
16699 // 16k new call predictions
grep 'stay in loop' lkly5.log | egrep -v 'agrees' | wc -l
3951 // 4k new "remain in loop" predictions
Fixes #11451.
Change-Id: Iafb0504f7030d304ef4b6dc1aba9a5789151a593
Reviewed-on: https://go-review.googlesource.com/19995
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
* It does very simple bounds checking elimination. E.g.
removes the second check in for i := range a { a[i]++; a[i]++ }
* Improves on the following redundant expression:
return a6 || (a6 || (a6 || a4)) || (a6 || (a4 || a6 || (false || a6)))
* Linear in the number of block edges.
I patched in CL 12960 that does bounds, nil and constant propagation
to make sure this CL is not just redundant. Size of pkg/tool/linux_amd64/*
(excluding compile which is affected by this change):
With IsInBounds and IsSliceInBounds
-this -12960 92285080
+this -12960 91947416
-this +12960 91978976
+this +12960 91923088
Gain is ~110% of 12960.
Without IsInBounds and IsSliceInBounds (older run)
-this -12960 95515512
+this -12960 95492536
-this +12960 95216920
+this +12960 95204440
Shaves 22k on its own.
* Can we handle IsInBounds better with this? In
for i := range a { a[i]++; } the bounds checking at a[i]
is not eliminated.
Change-Id: I98957427399145fb33693173fd4d5a8d71c7cc20
Reviewed-on: https://go-review.googlesource.com/19710
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
If a general comment contains multiple newline characters, we can't
simply unread one and then re-lex it via the general whitespace lexing
phase, because then we'll reset lineno to the line before the "*/"
marker, rather than keeping it where we found the "/*" marker.
Also, for processing imports, call importfile before advancing the
lexer with p.next(), so that lineno reflects the line where we found
the import path, and not the token afterwards.
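A hedged sketch of source that exposes the line-number problem (not
the test from the issue):

	package p

	/* a general comment
	   spanning several
	   lines */ import "does/not/exist" // the error for this import must point
	                                    // here, not at a line inside the comment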
Fixes #14520.
Change-Id: I785a2d83d632280113d4b757de0d57c88ba2caf4
Reviewed-on: https://go-review.googlesource.com/19934
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
non-SSA backends are all over the map as to whether nil checks
get removed or not. amd64, 386, 386/387, arm are all subtly different.
Remove these extra checks for now, they are in nilptr3_ssa.go so they
won't get lost.
Change-Id: I2e0051f488fb2cb7278c6fdd44cb9d68b5778345
Reviewed-on: https://go-review.googlesource.com/19961
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Just like we do for integer loads/stores.
Update #14511
Change-Id: Ic6ca6b54301438a5701ea5fb0be755451cb24d45
Reviewed-on: https://go-review.googlesource.com/19923
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Keith Randall <khr@golang.org>
Our stack frame sizes look pretty good now. Lower the stack
guard from 1024 to 720.
Tip is currently using 720.
We could go lower (to 640 at least) except PPC doesn't like that.
Change-Id: Ie5f96c0e822435638223f1e8a2bd1a1eed68e6aa
Reviewed-on: https://go-review.googlesource.com/19922
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Always reading runes (rather than bytes) has negligible overhead
(a simple if at the moment - it can be eliminated eventually) but
simplifies the lexer logic and opens up the door for speedups.
In the process remove many int conversions that are now not needed
anymore.
Also, because identifiers are now more easily recognized, remove
talph label and move identifier lexing "in place".
Also, instead of accepting all chars < 0x80 and then checking for
"frogs", only permit valid characters in the first place. Removes
an extra call for common simple tokens and leads to simpler logic.
`time go build -a net/http` (best of 5 runs) seems 1% faster.
Assuming this is in the noise, there is no noticeable performance
degradation with this change.
Change-Id: I3454c9bf8b91808188cf7a5f559341749da9a1eb
Reviewed-on: https://go-review.googlesource.com/19847
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Merge push_parser and pop_parser into a single parse_import function
and inline unimportfile. Shake out function boundaries a little bit so
that the symmetry is readily visible.
Move the import_package call into parse_import (and inline
import_there into import_package). This means importfile no longer
needs to provide fake import data to be needlessly lexed/parsed every
time it's called.
Also, instead of indicating import success/failure by whether the next
token is "package", import_spec can just check whether importpkg is
non-nil.
Tangentially, this somehow alters the diagnostics produced for
test/fixedbugs/issue11610.go. However, the new diagnostics are more
consistent with those produced when the empty import statement is
absent, which seems more desirable than maintaining the previous
errors.
Change-Id: I5cd1c22aa14da8a743ef569ff084711d137279d5
Reviewed-on: https://go-review.googlesource.com/19650
Reviewed-by: Robert Griesemer <gri@golang.org>
Walking the field name as if it were an expression
caused a call to haspointers with a TFIELD, which panics.
Trigger was a field at a large offset within a large struct,
combined with a struct literal expression mentioning that
field.
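A hedged sketch of the triggering shape (made-up types; the actual
test is in the CL):

	type Big struct {
		pad [1 << 16]byte
		F   *int             // a field at a large offset in a large struct
	}

	var p = Big{F: new(int)} // a struct literal mentioning that field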
Fixes #14405
Change-Id: I4589badae27cf3d7cf365f3a66c13447512f41f9
Reviewed-on: https://go-review.googlesource.com/19699
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The Go 1.6 release notes say we'll remove the “-X name value” form
(in favor of the “-X name=value” form) in Go 1.7.
Do that.
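For example (illustrative name and value), the removed and surviving
spellings are:

	-X main.version 1.7   (no longer accepted)
	-X main.version=1.7   (the remaining form)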
Also establish the doc/go1.7.txt file.
Change-Id: Ie4565a6bc5dbcf155181754d8d92bfbb23c75338
Reviewed-on: https://go-review.googlesource.com/19614
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This bug was introduced in golang.org/cl/18217,
while trying to fix #13777.
Originally I wanted to just disable inlining for the case
being handled incorrectly, but it's fairly difficult to detect
and much easier just to fix. Since the case being handled
incorrectly was inlined correctly in Go 1.5, not inlining it
would also be somewhat of a regression.
So just fix it.
Test case copied from Ian's CL 19520.
The mistake to worry about in this CL would be relaxing
the condition too much (we now print the note more often
than we did yesterday). To confirm that we'd catch this mistake,
I checked that changing (!fmtbody || !t.Funarg) to (true) does
cause fixedbugs/issue13777.go to fail. And putting it back
to what is written in this CL makes that test pass again
as well as the new fixedbugs/issue14331.go.
So I believe that the new condition is correct for both constraints.
Fixes #14331.
Change-Id: I91f75a4d5d07c53af5caea1855c780d9874b8df6
Reviewed-on: https://go-review.googlesource.com/19514
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Type switches need write barriers if the written-to
variable is heap allocated.
For the added needwritebarrier call, the right arg doesn't
really matter; I just pass something that will never disqualify
the write barrier. The left arg is the one that matters.
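A hedged sketch of a case where it matters (made-up code): the
variable declared by the type switch escapes, so it is heap allocated
and the implicit assignment to it is a pointer write into the heap:

	func f(i interface{}) func() *int {
		switch v := i.(type) {
		case *int:
			// v is captured by the closure, so it lives on the heap; the
			// implicit "v = i.(*int)" store needs a write barrier.
			return func() *int { return v }
		}
		return nil
	}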
Fixes #14306
Change-Id: Ic2754167cce062064ea2eeac2944ea4f77cc9c3b
Reviewed-on: https://go-review.googlesource.com/19481
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Semi-regular merge from tip to dev.ssa.
Two fixes:
1) Mark selectgo as not returning. This caused problems
because there are no VARKILL ops on the selectgo path,
causing things to be marked live that shouldn't be.
2) Tell the amd64 assembler that addressing modes like
name(SP)(AX*4) are ok.
Change-Id: I9ca81c76391b1a65cc47edc8610c70ff1a621913
It is one of the slowest compiler phases right now, and we
run two of them.
Instead of using a map to make the initial partition, use a sort.
It is much less memory intensive.
Do a few optimizations to avoid work for size-1 equivalence classes.
Implement -N.
Change-Id: I1d2d85d3771abc918db4dd7cc30b0b2d854b15e1
Reviewed-on: https://go-review.googlesource.com/19024
Reviewed-by: David Chase <drchase@google.com>
Empty blocks are introduced to remove critical edges.
After regalloc, we can remove any of the added blocks
that are still empty.
Change-Id: I0b40e95ac3a6cc1e632a479443479532b6c5ccd9
Reviewed-on: https://go-review.googlesource.com/18833
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Consider this code:
func f(*int)
func g() {
p := new(int)
f(p)
}
where f is an assembly function.
In general liveness analysis assumes that during the call to f, p is dead
in this frame. If f has retained p, p will be found alive in f's frame and keep
the new(int) from being garbage collected. This is all correct and works.
We use the Go func declaration for f to give the assembly function
liveness information (the arguments are assumed live for the entire call).
Now consider this code:
func h1() {
p := new(int)
syscall.Syscall(1, 2, 3, uintptr(unsafe.Pointer(p)))
}
Here syscall.Syscall is taking the place of f, but because its arguments
are uintptr, the liveness analysis and the garbage collector ignore them.
Since p is no longer live in h once the call starts, if the garbage collector
scans the stack while the system call is blocked, it will find no reference
to the new(int) and reclaim it. If the kernel is going to write to *p once
the call finishes, reclaiming the memory is a mistake.
We can't change the arguments or the liveness information for
syscall.Syscall itself, both for compatibility and because sometimes the
arguments really are integers, and the garbage collector will get quite upset
if it finds an integer where it expects a pointer. The problem is that
these arguments are fundamentally untyped.
The solution we have taken in the syscall package's wrappers in past
releases is to insert a call to a dummy function named "use", to make
it look like the argument is live during the call to syscall.Syscall:
func h2() {
p := new(int)
syscall.Syscall(1, 2, 3, uintptr(unsafe.Pointer(p)))
use(unsafe.Pointer(p))
}
Keeping p alive during the call means that if the garbage collector
scans the stack during the system call now, it will find the reference to p.
Unfortunately, this approach is not available to users outside syscall,
because 'use' is unexported, and people also have to realize they need
to use it and do so. There is much existing code using syscall.Syscall
without a 'use'-like function. That code will fail very occasionally in
mysterious ways (see #13372).
This CL fixes all that existing code by making the compiler do the right
thing automatically, without any code modifications. That is, it takes h1
above, which is incorrect code today, and makes it correct code.
Specifically, if the compiler sees a foreign func definition (one
without a body) that has uintptr arguments, it marks those arguments
as "unsafe uintptrs". If it later sees the function being called
with uintptr(unsafe.Pointer(x)) as an argument, it arranges to mark x
as having escaped, and it makes sure to hold x in a live temporary
variable until the call returns, so that the garbage collector cannot
reclaim whatever heap memory x points to.
For now I am leaving the explicit calls to use in package syscall,
but they can be removed early in a future cycle (likely Go 1.7).
The rule has no effect on escape analysis, only on liveness analysis.
Fixes #13372.
Change-Id: I2addb83f70d08db08c64d394f9d06ff0a063c500
Reviewed-on: https://go-review.googlesource.com/18584
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Brief background on "why heap allocate". Things can be
forced to the heap for the following reasons:
1) address published, hence lifetime unknown.
2) size unknown/too large, cannot be stack allocated
3) multiplicity unknown/too large, cannot be stack allocated
4) reachable from heap (not necessarily published)
The bug here is a case of failing to enforce 4) when an
object Y was reachable from a heap allocation X forced
because of 3). It was found in the case of a closure
allocated within a loop (X) and assigned to a variable
outside the loop (multiplicity unknown) where the closure
also captured a map (Y) declared outside the loop (reachable
from heap). Note the variable declared outside the loop (Y)
is not published, has known size, and known multiplicity
(one). The only reason for heap allocation is that it was
reached from a heap allocated item (X), but because that was
not forced by publication, it has to be tracked by loop
level, but escape-loop level was not tracked and thus a bug
results.
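A hedged sketch of the X/Y shape described above (not the exact new
test):

	func f() func() {
		m := make(map[int]int) // Y: declared outside the loop
		var fn func()
		for i := 0; i < 10; i++ {
			fn = func() { m[0]++ } // X: closure allocated inside the loop,
		}                          // assigned outside it, and capturing m
		return fn
	}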
The fix is that when a heap allocation is newly discovered,
use its looplevel as the minimum loop level for downstream
escape flooding.
Every attempt to generalize this bug to X-in-loop-
references-Y-outside loop succeeded, so the fix was aimed
to be general. Anywhere that loop level forces heap
allocation, the loop level is tracked. This is not yet
tested for all possible X and Y, but it is correctness-
conservative and because it caused only one trivial
regression in the escape tests, it is probably also
performance-conservative.
The new test checks the following:
1) in the map case, that if fn escapes, so does the map.
2) in the map case, if fn does not escape, neither does the map.
3) in the &x case, that if fn escapes, so does &x.
4) in the &x case, if fn does not escape, neither does &x.
Fixes #13799.
Change-Id: Ie280bef2bb86ec869c7c206789d0b68f080c3fdb
Reviewed-on: https://go-review.googlesource.com/18234
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Adding the evconst(n) call for OANDAND and OOROR in
golang.org/cl/18262 was originally just to parallel the above iscmp
branch, but upon further inspection it seemed odd that removing it
caused test/fixedbugs/issue6671.go's
var b mybool
// ...
b = bool(true) && true // ERROR "cannot use"
to start failing (i.e., by not emitting the expected "cannot use"
error).
The problem is that evconst(n)'s settrue and setfalse paths always
reset n.Type to idealbool, even for logical operators where n.Type
should preserve the operand type. Adding the evconst(n) call for
OANDAND/OOROR inadvertently worked around this by turning the later
evconst(n) call at line 2167 into a noop, so the "n.Type = t"
assignment at line 739 would preserve the operand type.
However, that means evconst(n) was still clobbering n.Type for ONOT,
so declarations like:
const _ bool = !mybool(true)
were erroneously accepted.
Update #13821.
Change-Id: I18e37287f05398fdaeecc0f0d23984e244f025da
Reviewed-on: https://go-review.googlesource.com/18362
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>