This enables the implementation for proposal #58671, which is
a likely accept. By enabling it early we get a bit of extra soak
time for this feature. The change can be reverted trivially, if
need be.
For #58671.
Change-Id: Id6c27515e45ff79f4f1d2fc1706f3f672ccdd1ab
Reviewed-on: https://go-review.googlesource.com/c/go/+/495955
Run-TryBot: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Griesemer <gri@google.com>
Auto-Submit: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Findley <rfindley@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
In the types1 universe, we only need to represent value types. For
interfaces, this means we only need to worry about pure interfaces. A
pure interface can embed a union type, but the overall union must be
equivalent to "any".
In go.dev/cl/458619, we changed the types1 reader to return "any",
and added a consistency check to make sure this is valid.
Unfortunately, a pure interface can actually still reference impure
interfaces, and in general this is hard to check precisely without
reimplementing a lot of types2 data structures and logic in types1.
We haven't had any other reports of this check failing since 1.20, so
it seems simplest to just suppress the check for now.
Fixes #60117.
Change-Id: I5053faafe2d1068c6d438b2193347546bf5330cd
Reviewed-on: https://go-review.googlesource.com/c/go/+/495455
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Be more liberal about expanding the OR tree. Handle any tree shape
instead of a fully left or right associative tree.
Also remove the tail feature; it isn't ever needed.
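As a rough sketch (hypothetical code, not from the CL), the combine
should now fire regardless of how the OR expression is grouped:

func load32(b []byte) uint32 {
    _ = b[3]
    // balanced grouping; previously only a fully left- or
    // right-associative tree was recognized
    return (uint32(b[0]) | uint32(b[1])<<8) |
        (uint32(b[2])<<16 | uint32(b[3])<<24)
}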
Change-Id: If16bebef94b952a604d6069e9be3d9129994cb6f
Reviewed-on: https://go-review.googlesource.com/c/go/+/494056
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Ryan Berger <ryanbberger@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
[This is a roll-forward of CL 458755, which was reverted due to make.bash
being broken on GOAMD64=v3. But it turned out that the problem was caused
by wrong bswap/load rewrite rules, and it was fixed in CL 492616.]
This CL enhances the tighten pass. Previously, if a value had a memory
arg, the tighten pass would not move it. However, if the memory state
is consistent between the definition block and the use block, we can
move the value. This CL optimizes that case. This is useful for the
following situation:
b1:
    x = load(...mem)
    if(...) goto b2 else b3
b2:
    use(x)
b3:
    some_op_not_use_x
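A rough Go-level sketch of that shape (hypothetical; the pass itself
operates on SSA):

func f(p *int, cond bool) int {
    x := *p // load with a memory arg, defined in b1
    if cond {
        return x + 1 // only use of x, in b2; the load can now sink here
    }
    return 0 // b3: x is not used
}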
For the micro-benchmark mentioned in #56620, the performance improvement
is about 15%.
There's no noticeable performance change in the go1 benchmark.
Fixes #56620
Change-Id: I36ea68bed384986cd3ae81cb9e6efe84bb213adc
Reviewed-on: https://go-review.googlesource.com/c/go/+/492895
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Eric Fang <eric.fang@arm.com>
This modifies some existing rules to allow more prefixed instructions
to be generated when using GOPPC64=power10. Some rules also check
if PCRel is available, which is currently supported for linux/ppc64le
and linux/ppc64 (internal linking only).
Prior to p10, DS-offset loads and stores had a 16-bit limit on the
offset field. If the offset of the data for a load or store was
beyond this range, then an indexed load or store would be selected by
the rules.
On p10 the assembler can generate prefixed instructions in this case,
but it does not if an indexed instruction was selected during the
lowering pass.
This allows many more cases to use prefixed loads or stores, reducing
function sizes and improving performance in some cases where the code
change happens in key loops.
For example, in strconv BenchmarkAppendQuoteRune, before:
    12c5e4: 15 00 10 06     pla r10,1425660
    12c5e8: fc c0 40 39
    12c5ec: 00 00 6a e8     ld r3,0(r10)
    12c5f0: 10 00 aa e8     ld r5,16(r10)
After this change:
    12a828: 15 00 10 04     pld r3,1433272
    12a82c: b8 de 60 e4
    12a830: 15 00 10 04     pld r5,1433280
    12a834: c0 de a0 e4
The second sequence performs better.
A testcase was added to verify that the rules correctly select a load
or store based on the offset and on whether the target is power10 or
earlier.
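As a rough Go-level illustration (hypothetical, unrelated to the
benchmark above), an access like the one below has a displacement far
beyond the 16-bit DS range and previously lowered to an indexed load:

var big [1 << 20]byte // hypothetical large global

func tailByte() byte {
    return big[len(big)-8] // offset too large for a 16-bit DS field
}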
Change-Id: I4335fed0bd9b8aba8a4f84d69b89f819cc464846
Reviewed-on: https://go-review.googlesource.com/c/go/+/477398
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Archana Ravindar <aravind5@in.ibm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Paul Murphy <murp@ibm.com>
With CL 408826 reflect.Value does not always escape. We need to make
sure Value operations do (or do not) escape the Value correctly. This
CL adds a test.
There are still a few unfortunate cases where some Value operations
escape more than necessary (compared to a non-reflect version of the
code), but they are hard to fix. These are mostly cases where a Value
would escape conditionally (mostly depending on the type of the
Value), but currently we don't have a good way to express that.
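A sketch of the kind of case the test covers (hypothetical shape, not
copied from the added test):

package p

import "reflect"

func intValue(x int) int64 {
    v := reflect.ValueOf(x) // should not force x to escape
    return v.Int()
}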
Change-Id: I9fdfc7584670aa09c5a01f6b2803f2043aaddb65
Reviewed-on: https://go-review.googlesource.com/c/go/+/441938
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Types are either static (for compiler-created types) or heap
allocated and always reachable (for reflection-created types, held
in the central map). So there is no need to escape types.
With CL 408826 reflect.Value does not always escape. Some functions
that escape Value.typ would make the Value escape without this CL.
We had to add a special case in the inliner to keep (*Value).Type
inlineable.
Change-Id: I7c14d35fd26328347b509a06eb5bd1534d40775f
Reviewed-on: https://go-review.googlesource.com/c/go/+/413474
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
The effect and motivation is that the test is selected when doing
'go test cmd' and not when doing 'go test std', since it's primarily
about testing the Go compiler and linker. Other than that, it's run
by all.bash and 'go test std cmd' as before.
For #56844.
Fixes #60059.
Change-Id: I2d499af013f9d9b8761fdf4573f8d27d80c1fccf
Reviewed-on: https://go-review.googlesource.com/c/go/+/493876
Run-TryBot: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Russ added test/bench/go1 in CL 5484071 to have a stable suite of
programs to use as benchmarks. For the compiler and runtime we had
back then, those were reasonable benchmarks, but the compiler and
runtime are now far more sophisticated and these benchmarks no longer
have good coverage. We also now have better benchmark suites
maintained outside the repo (e.g., golang.org/x/benchmarks). Keeping
test/bench/go1 at this point is actively misleading.
Indirectly related to #37486, as this also removes the last package
dist test runs outside of src/.
Change-Id: I2867ef303fe48a02acce58ace4ee682add8acdbf
Reviewed-on: https://go-review.googlesource.com/c/go/+/494193
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
The test directory driver currently sets the GOOS/GOARCH environment
variables if they aren't set. This appears to be in service of a
single test, test/env.go, which was introduced in September 2008 along
with os.Getenv. It's not entirely clear what that test is even trying
to check, since runtime.GOOS isn't necessarily the same as $GOOS. We
keep the test around because golang.org/x/tools/go/ssa/interp uses it
as a test case, but we simplify the test and eliminate the need for
the driver to set GOOS/GOARCH.
Change-Id: I5acc0093b557c95d1f0a526d031210256a68222d
Reviewed-on: https://go-review.googlesource.com/c/go/+/493601
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Adds rules that rewrite expressions such as ~P&~Q as ~(P|Q) and ~P|~Q
as ~(P&Q) (De Morgan's laws), removing an extraneous instruction.
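For example (a sketch, not the rule syntax itself), each function
below now saves one complement instruction:

func f1(p, q uint32) uint32 { return ^p & ^q } // rewritten as ^(p | q)
func f2(p, q uint32) uint32 { return ^p | ^q } // rewritten as ^(p & q)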
Change-Id: Icedb97df741680ddf9799df79df78657173aa500
GitHub-Last-Rev: f22e2350c9
GitHub-Pull-Request: golang/go#60018
Reviewed-on: https://go-review.googlesource.com/c/go/+/493175
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Stefan M <st3f4nm4d4@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Thanks to the recent addition of the memcombine pass, the
ppc64 ports now have the memcombine optimizations. Previously
in PPC64.rules, the memcombine rules were only added for
ppc64le targets due to the significant increase in size of
the rewritePPC64.go file when those rules were added. The
ppc64 and ppc64le rules had to be different because of the byte-order
(endianness) difference between the two targets.
This enables the memcombine tests to be run on ppc64 as well
as ppc64le.
Change-Id: I4081e2d94617a1b66541d536c0c2662e266c9c1e
Reviewed-on: https://go-review.googlesource.com/c/go/+/492615
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Lynn Boger <laboger@linux.vnet.ibm.com>
[This is a roll-forward of CL 479095, which was reverted due to a bad
interaction between inlining and escape analysis, then later fixed
first with an attempt in CL 482355, then again in CL 484859, and then
one more time with CL 492135.]
Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:
- If the closure contains a disallowed operation (e.g., a defer), then
the outer function can't be inlined. It makes sense that the
*closure* can't be inlined in this case, but it doesn't make sense
to punish the function that constructs the closure.
- The hairiness of the closure counts against the inlining budget of
the outer function. Since we currently copy the closure body when
inlining the outer function, this makes sense from the perspective
of export data size and binary size, but ultimately doesn't make
much sense from the perspective of what should be inlineable.
- Since the inliner walks into every closure created by an outer
function in addition to starting a walk at every closure, this adds
an n^2 factor to inlinability analysis.
This CL simply drops this behavior.
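As a hypothetical sketch of the first point above: the defer below
still disqualifies the closure itself, but no longer the function that
constructs it.

package p

import "sync"

func wrap(mu *sync.Mutex, f func()) {
    locked := func() {
        mu.Lock()
        defer mu.Unlock() // disallowed op: this closure stays non-inlineable
        f()
    }
    locked() // but wrap itself can now be considered for inlining
}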
In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).
This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.
The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:
                 │    before    │            after             │
                 │    bytes     │    bytes      vs base        │
Go/binary          15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text            5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary     22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text       8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)
Change-Id: I5f75fcceb177f05853996b75184a486528eafe96
Reviewed-on: https://go-review.googlesource.com/c/go/+/492017
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
If the function referenced by a closure expression is incorporated
into a static init, be sure to mark it as non-hidden, since otherwise
it will be live but no longer reachable from the init func, hence it
will be skipped during escape analysis, which can lead to
miscompilations.
Fixes#59680.
Change-Id: Ib858aee296efcc0b7655d25c23ab8a6a8dbdc5f9
Reviewed-on: https://go-review.googlesource.com/c/go/+/492135
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
[This is a roll-forward of CL 484859, this time including a fix for
issue #59709. The call to do dead function marking was taking place in
the wrong spot, causing it to run more than once if generics were
instantiated.]
This patch generalizes the code in the inliner that marks unreferenced
hidden closure functions as dead. Rather than doing the marking on the
fly (previous approach), this new approach does a single pass at the
end of inlining, which catches more dead functions.
Change-Id: I0e079ad755c21295477201acbd7e1a732a98fffd
Reviewed-on: https://go-review.googlesource.com/c/go/+/492016
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Previously, type arguments could only be inferred for generic
functions in call expressions, whereas with the reverse type inference
proposal they can now be inferred in assignment contexts too. As a
consequence, we now need to check Info.Instances to find the inferred
type in more cases.
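For example (a minimal sketch with hypothetical names), the
instantiation below no longer comes from a call expression:

func id[T any](x T) T { return x }

var f func(int) int = id // T inferred as int from the assignment context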
Updates #59338.
Fixes #59955.
Change-Id: I9b6465395869459c2387d0424febe7337b28b90e
Reviewed-on: https://go-review.googlesource.com/c/go/+/492455
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
This CL enhances the tighten pass. Previously, if a value had a memory
arg, the tighten pass would not move it. However, if the memory state
is consistent between the definition block and the use block, we can
move the value. This CL optimizes that case. This is useful for the
following situation:
b1:
    x = load(...mem)
    if(...) goto b2 else b3
b2:
    use(x)
b3:
    some_op_not_use_x
For the micro-benchmark mentioned in #56620, the performance improvement
is about 15%.
There's no noticeable performance change in the go1 benchmark.
Fixes #56620
Change-Id: I9b152754f27231f583a6995fc7cd8472aa7d390c
Reviewed-on: https://go-review.googlesource.com/c/go/+/458755
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
Put zero-sized data symbols at the same address as runtime.zerobase,
so zero-sized global variables have the same address as zero-sized
allocations.
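A hedged sketch of the observable effect (the spec leaves equality of
pointers to zero-sized objects unspecified, so this only illustrates
the intended layout):

package p

import "unsafe"

var zeroSized struct{}       // zero-sized global, now placed at runtime.zerobase
var heapZero = new(struct{}) // zero-sized allocation, also runtime.zerobase

func sameAddress() bool {
    return unsafe.Pointer(&zeroSized) == unsafe.Pointer(heapZero)
}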
Change-Id: Ib3145dc1b663a9794dfabc0e6abd2384960f2c49
Reviewed-on: https://go-review.googlesource.com/c/go/+/490435
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.
Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.
This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.
                     before  after
stackNosplit            800    800
_StackGuard             928    928
stackNosplit -race     1728   1600
_StackGuard -race      1856   1728
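A rough sketch of the inverted derivation (the 128-byte delta is read
off the table above, not taken from the source):

const stackNosplitBase = 800 // abi.StackNosplitBase

// After: the NOSPLIT limit is the base scaled by the multiplier, and
// the guard is derived from it (previously the guard was the base).
func stackNosplit(guardMultiplier int) int { return stackNosplitBase * guardMultiplier }
func stackGuard(guardMultiplier int) int   { return stackNosplit(guardMultiplier) + 128 }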
For #59670.
Change-Id: Ia94094c5e47897e7c088d24b4a5e33f5c2768db5
Reviewed-on: https://go-review.googlesource.com/c/go/+/486976
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts commit CL 486380.
Submitted out of order and breaks bootstrap.
Change-Id: I67bd225094b5c9713b97f70feba04d2c99b7da76
Reviewed-on: https://go-review.googlesource.com/c/go/+/486916
Reviewed-by: David Chase <drchase@google.com>
TryBot-Bypass: Austin Clements <austin@google.com>
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.
Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.
This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.
                     before  after
stackNosplit            800    800
_StackGuard             928    928
stackNosplit -race     1728   1600
_StackGuard -race      1856   1728
For #59670.
Change-Id: Ibe20825ebe0076bbd7b0b7501177b16c9dbcb79e
Reviewed-on: https://go-review.googlesource.com/c/go/+/486380
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Marking variables in erroneous variable declarations as used is
convenient for tests but doesn't necessarily hide follow-on errors
in real code: either the variable is not supposed to be declared in
the first place and then we should get an error if it is not used,
or it is there because it is intended to be used, and then we expect
an error if it is not used.
This brings types2 closer to go/types.
Change-Id: If7ee1298fc770f7ad0cefe7e968533fd50ec2343
Reviewed-on: https://go-review.googlesource.com/c/go/+/486175
Run-TryBot: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Griesemer <gri@google.com>
Auto-Submit: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Findley <rfindley@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Add more rules to ensure that order doesn't matter.
Add memequal 0 rule.
Try to use a constant argument to memequal when one is available.
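For example (a sketch, hypothetical), both orderings below involve a
constant argument to memequal and should now be handled the same way:

func hasFoo(s string) bool { return s == "foo" }
func hasFooRev(s string) bool { return "foo" == s }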
Fixes #59684
Change-Id: I36e85ffbd949396ed700ed6e8ec2bc3ae013f5d2
Reviewed-on: https://go-review.googlesource.com/c/go/+/485535
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts commit http://go.dev/cl/484859
Reason for revert: causes linker errors in a number of google-internal tests.
Change-Id: I322252f784a46d2b1d447ebcdca86ce14bc0cc91
Reviewed-on: https://go-review.googlesource.com/c/go/+/485755
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
[This is a roll-forward of CL 479095, which was reverted due to a bad
interaction between inlining and escape analysis, then later fixed
first with an attempt in CL 482355, then again in CL 484859.]
Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:
- If the closure contains a disallowed operation (e.g., a defer), then
the outer function can't be inlined. It makes sense that the
*closure* can't be inlined in this case, but it doesn't make sense
to punish the function that constructs the closure.
- The hairiness of the closure counts against the inlining budget of
the outer function. Since we currently copy the closure body when
inlining the outer function, this makes sense from the perspective
of export data size and binary size, but ultimately doesn't make
much sense from the perspective of what should be inlineable.
- Since the inliner walks into every closure created by an outer
function in addition to starting a walk at every closure, this adds
an n^2 factor to inlinability analysis.
This CL simply drops this behavior.
In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).
This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.
The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:
                 │    before    │            after             │
                 │    bytes     │    bytes      vs base        │
Go/binary          15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text            5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary     22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text       8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)
Updates #56102.
Change-Id: I6e938d596992ffb473cf51e7e598f372ce08deb0
Reviewed-on: https://go-review.googlesource.com/c/go/+/484860
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
This patch generalizes the code in the inliner that marks unreferenced
hidden closure functions as dead. Rather than doing the marking on the
fly (previous approach), this new approach does a single pass at the
end of inlining, which catches more dead functions.
Fixes #59638.
Updates #59404.
Updates #59547.
Change-Id: I54fd63e9e37c9123b08a3e7def7d1989919bba91
Reviewed-on: https://go-review.googlesource.com/c/go/+/484859
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
CL 399694 added constant folding for switch statements early in
compilation. So the function:
func f() string {
    switch intSize {
    case 32:
        return "32"
    case 64:
        return "64"
    default:
        panic("unreachable")
    }
}
will be constant-folded to:
func f() string {
    switch intSize {
    case 64:
        return "64"
    }
}
When this function gets inlined, there is a check whether we can delay
declaring the result parameter until the "return" statement. For the
original function, we can't delay the result, because there's more than
one return statement. However, the constant-folded one can, because
there's only one return statement in the body now. The result parameter
~R0 ends up being declared inside the switch statement scope.
Now, when walking the switch statement, it's re-written into an if-else
statement. Without typecheck.EvalConst, the if condition "if 64 == 64"
is passed as-is to the ssa generation pass. Because "64 == 64" is not a
constant, ssagen creates normal blocks for branching the results.
This confuses the liveness analysis, because ~R0 is only live inside the
if block. With typecheck.EvalConst, "64 == 64" is evaluated to "true",
so ssagen can branch the result without emitting conditional blocks.
Instead, the constant folding can re-write the switch as:
switch {
case true:
    // Body
}
That way it does not depend on the delay-results check during
inlining. Add a test which will fail when typecheck.EvalConst is
removed, so we can do the cleanup without breaking things.
Change-Id: I638730bb147140de84260653741431b807ff2f15
Reviewed-on: https://go-review.googlesource.com/c/go/+/484316
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
The static init inliner is using typecheck.EvalConst to handle string
concatenation expressions. But the static init inliner may reveal
constant expressions after substitution, and the compiler needs to
evaluate those expressions with non-constant semantics. Using
typecheck.EvalConst, which always evaluates expressions with constant
semantics, is not the right choice.
For safety, this CL folds the logic for handling string concatenation
into the static init inliner, so there won't be a regression in
handling constant expressions with non-constant semantics. This also
allows a future CL to simplify the typecheck.EvalConst logic.
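A hypothetical illustration (not the reproducer from the issues):
inlining the call into the initializer below reveals a string
concatenation that the static init inliner must now handle itself.

func withPrefix(s string) string { return "go" + s }

var name = withPrefix("pher") // becomes "go" + "pher" after substitution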
Updates #58293
Updates #58339
Fixes #58439
Change-Id: I74068d99c245938e576afe9460cbd2b39677bbff
Reviewed-on: https://go-review.googlesource.com/c/go/+/466277
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
(This is a retry of CL 462035, which was reverted in CL 474976.
The only change from that CL is the aix fix SRODATA->SNOPTRDATA
at inittask.go:141.)
As described here:
https://github.com/golang/go/issues/31636#issuecomment-493271830
"Find the lexically earliest package that is not initialized yet,
but has had all its dependencies initialized, initialize that package,
and repeat."
Simplify the runtime a bit, by just computing the ordering required
in the linker and giving a list to the runtime.
Update #31636
Fixes #57411
RELNOTE=yes
Change-Id: I28c09451d6aa677d7394c179d23c2c02c503fc56
Reviewed-on: https://go-review.googlesource.com/c/go/+/478916
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts commit http://go.dev/cl/482356.
Reason for revert: Reverting this change again, since it is causing additional failures in google-internal testing.
Change-Id: I9234946f62e5bb18c2f873a65e8b298d04af0809
Reviewed-on: https://go-review.googlesource.com/c/go/+/484735
Reviewed-by: Florian Zenker <floriank@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Auto-Submit: Than McIntosh <thanm@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
A range statement mutates the key and value, so we should treat them as reassigned.
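A sketch (hypothetical) of why this matters:

func last(xs []int) int {
    v := 0
    for _, v = range xs { // v is mutated by the range statement
    }
    return v // analyses must not assume v is still 0 here
}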
Fixes #59572
Change-Id: I9c6b67d938760a0c6a1d9739f2737c67af4a3a10
Reviewed-on: https://go-review.googlesource.com/c/go/+/483855
Run-TryBot: Wayne Zuo <wdvxdr@golangcn.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Fixes #58141
Co-authored-by: Richard Musiol <neelance@gmail.com>
Co-authored-by: Achille Roussel <achille.roussel@gmail.com>
Co-authored-by: Julien Fabre <ju.pryz@gmail.com>
Co-authored-by: Evan Phoenix <evan@phx.io>
Change-Id: I49b66946acc90fdf09ed9223096bfec9a1e5b923
Reviewed-on: https://go-review.googlesource.com/c/go/+/479627
Run-TryBot: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Auto-Submit: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Bypass: Ian Lance Taylor <iant@golang.org>
For now, only apply the rule if either of the arguments is a constant.
That catches a lot of real user code without slowing down the compiler
with code generated for string comparisons (based on experience in CL
410336).
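A sketch of the targeted shape (hypothetical): one operand is
constant, so the comparison can be expanded inline instead of calling
into the runtime.

func isGet(method string) bool { return method == "GET" }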
Updates #57959
Fixes #45928
Change-Id: Ie2e830d6d0d71cda3947818b22c2775bd94f7971
Reviewed-on: https://go-review.googlesource.com/c/go/+/483359
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
So that iterators that are in progress can know entries have been
deleted and terminate properly.
Update #55002
Update #56351
Fixes #59411
Change-Id: I924f16a00fe4ed6564f730a677348a6011d3fb67
Reviewed-on: https://go-review.googlesource.com/c/go/+/481935
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Use the type of the store for the byteswap, not the type of the
store's value argument.
Normally when we're storing a 16-bit value, the value being stored is
also typed as 16 bits. But sometimes it is typed as something smaller,
usually because it is the result of an upcast from a smaller value,
and that upcast needs no instructions.
If the type of the store's arg is thinner than the type being stored,
and the byteswap'd value uses that thinner type, and the byteswap'd
value needs to be spilled & restored, that spill/restore happens using
the thinner type, which causes us to lose some of the top bits of the
value.
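A rough sketch of the kind of pattern the byteswap/store rules combine
(hypothetical; the bug itself only shows up when the swapped value has
to be spilled and reloaded):

func put16(b []byte, v uint16) {
    _ = b[1]
    b[0] = byte(v >> 8) // on little-endian targets these two stores
    b[1] = byte(v)      // combine into one 16-bit byteswapped store
}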
Fixes #59367
Change-Id: If6ce1e8a76f18bf8e9d79871b6caa438bc3cce4d
Reviewed-on: https://go-review.googlesource.com/c/go/+/481395
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
[This is a roll-forward of CL 479095, which was reverted due to a bad
interaction between inlining and escape analysis, since fixed in
CL 482355.]
Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:
- If the closure contains a disallowed operation (e.g., a defer), then
the outer function can't be inlined. It makes sense that the
*closure* can't be inlined in this case, but it doesn't make sense
to punish the function that constructs the closure.
- The hairiness of the closure counts against the inlining budget of
the outer function. Since we currently copy the closure body when
inlining the outer function, this makes sense from the perspective
of export data size and binary size, but ultimately doesn't make
much sense from the perspective of what should be inlineable.
- Since the inliner walks into every closure created by an outer
function in addition to starting a walk at every closure, this adds
an n^2 factor to inlinability analysis.
This CL simply drops this behavior.
In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).
This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.
The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:
                 │    before    │            after             │
                 │    bytes     │    bytes      vs base        │
Go/binary          15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text            5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary     22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text       8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)
Updates #56102.
Change-Id: I1f4fc96c71609c8feb59fecdb92b69ba7e3b5b41
Reviewed-on: https://go-review.googlesource.com/c/go/+/482356
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
When a closure is inlined, it may contain other hidden closures, which
the inliner will duplicate, rendering the original nested closures as
unreachable. Because they are unreachable, they don't get processed in
escape analysis, meaning that go/defer statements don't get rewritten,
which can then in turn trigger errors in walk. This patch looks for
nested hidden closures and marks them as dead, so that they can be
skipped later on in the compilation flow. NB: if during escape
analysis we rediscover a hidden closure (due to an explicit reference)
that was previously marked dead, revive it at that point.
Fixes #59404.
Change-Id: I76db1e9cf1ee38bd1147aeae823f916dbbbf081b
Reviewed-on: https://go-review.googlesource.com/c/go/+/482355
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Currently, the prove pass can get knowledge from some specific logic
operators only before the CFG is explored, which means that the bounds
information of the branch will be ignored.
This CL updates the facts table from the logic operators in every
branch. Combined with the branch information, this will be helpful for
BCE in some circumstances.
Fixes #57243
Change-Id: I0bd164f1b47804ccfc37879abe9788740b016fd5
Reviewed-on: https://go-review.googlesource.com/c/go/+/419555
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Eric Fang <eric.fang@arm.com>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Unified IR already records the correct type for them.
Fixes #59378
Change-Id: I275c45b48f67bde55c8e2079d60b5868d0acde7f
Reviewed-on: https://go-review.googlesource.com/c/go/+/481555
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Casting to a *uintptr is not ok if there aren't at least 8 bytes of
data backing that pointer (on 64-bit archs).
So although we end up making a slice of 0 length with that pointer,
the cast itself doesn't know that.
Instead, bail early if the result is going to be 0 length.
Fixes #59334
Change-Id: Id3c0e09d341d838835c0382cccfb0f71dc3dc7e6
Reviewed-on: https://go-review.googlesource.com/c/go/+/480575
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:
- If the closure contains a disallowed operation (e.g., a defer), then
the outer function can't be inlined. It makes sense that the
*closure* can't be inlined in this case, but it doesn't make sense
to punish the function that constructs the closure.
- The hairiness of the closure counts against the inlining budget of
the outer function. Since we currently copy the closure body when
inlining the outer function, this makes sense from the perspective
of export data size and binary size, but ultimately doesn't make
much sense from the perspective of what should be inlineable.
- Since the inliner walks into every closure created by an outer
function in addition to starting a walk at every closure, this adds
an n^2 factor to inlinability analysis.
This CL simply drops this behavior.
In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).
This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.
The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:
                 │    before    │            after             │
                 │    bytes     │    bytes      vs base        │
Go/binary          15.12Mi ± 0%   15.14Mi ± 0%  +0.16% (n=1)
Go/text            5.220Mi ± 0%   5.237Mi ± 0%  +0.32% (n=1)
Compile/binary     22.92Mi ± 0%   22.94Mi ± 0%  +0.07% (n=1)
Compile/text       8.428Mi ± 0%   8.435Mi ± 0%  +0.08% (n=1)
Change-Id: Ie9e38104fed5689a94c368288653fd7cb4b7a35e
Reviewed-on: https://go-review.googlesource.com/c/go/+/479095
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
The cast is preceded by a bounds check. If the bounds check passes
then we know the pointer in the slice is non-nil.
... except casts to pointers of 0-sized arrays. They are strange, as
the bounds check can pass for a nil input.
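For example (a sketch, hypothetical):

func toArray(s []byte) *[4]byte {
    return (*[4]byte)(s) // panics unless len(s) >= 4, so s's data
                         // pointer is non-nil on the success path
}

func toZero(s []byte) *[0]byte {
    return (*[0]byte)(s) // len(s) >= 0 always holds, so s may be nil
}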
Change-Id: Ic01cf4a82d59fbe3071d4b271c94efca9cafaec1
Reviewed-on: https://go-review.googlesource.com/c/go/+/479335
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Change the Checker.use/useLHS functions to report if all "used"
expressions evaluated without error. Use that information to
control whether to report an assignment mismatch error or not.
This will reduce the number of errors reported per assignment,
where the assignment mismatch is only one of the errors.
Change-Id: Ia0fc3203253b002e4e1d5759d8d5644999af6884
Reviewed-on: https://go-review.googlesource.com/c/go/+/478756
Reviewed-by: Robert Findley <rfindley@google.com>
Reviewed-by: Robert Griesemer <gri@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Robert Griesemer <gri@google.com>
Auto-Submit: Robert Griesemer <gri@google.com>