Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
record that a particular defer statement was reached by setting a bit in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.
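As a rough illustration, a function like

	func work(cond bool) {
		defer fmt.Println("first")
		if cond {
			defer fmt.Println("second")
		}
	}

is conceptually lowered to the following hand-written sketch (deferBits, arg0,
and arg1 are illustrative names; the real compiler stores these in autotmp
stack slots and emits the exit sequence inline):

	func work(cond bool) {
		var deferBits uint8
		arg0 := "first"
		deferBits |= 1 << 0 // defer #0 reached
		var arg1 string
		if cond {
			arg1 = "second"
			deferBits |= 1 << 1 // defer #1 reached
		}
		// ...function body...
		// Exit code: run the reached defers in reverse order.
		if deferBits&(1<<1) != 0 {
			fmt.Println(arg1)
		}
		if deferBits&(1<<0) != 0 {
			fmt.Println(arg0)
		}
	}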
In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).
I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().
The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).
Cost of defer statement [ go test -run NONE -bench BenchmarkDefer$ runtime ]
With normal (stack-allocated) defers only: 35.4 ns/op
With open-coded defers: 5.6 ns/op
Cost of function call alone (remove defer keyword): 4.4 ns/op
Text size increase (including funcdata) for go cmd without/with open-coded defers: 0.09%
The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.
A panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required even if no frames are using open-coded defers:
Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
Without open-coded defers: 62.0 ns/op
With open-coded defers: 255 ns/op
A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:
CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
Without open-coded defers: 443 ns/op
With open-coded defers: 347 ns/op
Updates #14939 (defer performance)
Updates #34481 (design doc)
Change-Id: I51a389860b9676cfa1b84722f5fb84d3c4ee9e28
Reviewed-on: https://go-review.googlesource.com/c/go/+/190098
Reviewed-by: Austin Clements <austin@google.com>
The Go spec requires:
If a deferred function value evaluates to nil, execution
panics when the function is invoked, not when the "defer"
statement is executed.
On Wasm and AIX, we currently emit a nil check at the point of the defer
statement, which makes it panic too early.
This CL fixes this.
Also, on Wasm, now the nil function will be passed through
deferreturn to jmpdefer, which does an explicit nil check and
calls sigpanic if it is nil. This sigpanic, being called from
assembly, is ABI0. So change the assembler backend to also
handle sigpanic in ABI0.
Fixes #34926.
Updates #8047.
Change-Id: I28489a571cee36d2aef041f917b8cfdc31d557d4
Reviewed-on: https://go-review.googlesource.com/c/go/+/201297
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When a subsequent load/store of a pointer makes the nil check of that pointer
unnecessary, and their lines differ, change the line of the load/store
to that of the nil check, and attempt to rehome the load/store's original
position instead.
This fix makes profiling less accurate in order to make panics more
informative.
Fixes #33724
Change-Id: Ib9afaac12fe0d0320aea1bf493617facc34034b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/200197
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Add to the testcase originally created for issue 34577 so
as to also trigger the error condition for issue 34852 (the
two bugs are closely related).
Updates #34577.
Updates #34852.
Change-Id: I2347369652ce500184347606b2bb3e76d802b204
Reviewed-on: https://go-review.googlesource.com/c/go/+/201017
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When assessing whether A <= B, the poset's OrderedOrEqual has a passing
condition that reports A <= B but is not sufficient to infer that A <= B.
This CL removes that incorrect passing condition.
Having identified that A and B are in the poset, the method will report that
A <= B if any of these three conditions are true:
(1) A and B are the same node in the poset.
- This means we know that A == B.
(2) There is a directed path, strict or not, from A -> B.
- This means we know that, at least, A <= B, but A < B is possible.
(3) There is a directed path from B -> A, AND that path has no strict edges.
- This means we know that B <= A, but do not know that B < A.
In condition (3), we do not have enough information to say that A <= B, rather
we only know that B == A (which satisfies A <= B) is possible. The way I
understand it, a strict edge shows a known, strictly-ordered relation (<) but
the lack of a strict edge does not show the lack of a strictly-ordered relation.
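A minimal sketch of the corrected check (node and reachable are assumed,
simplified stand-ins for the real poset representation):

	func orderedOrEqual(a, b *node) bool {
		if a == b {
			return true // (1) same node: A == B
		}
		if reachable(a, b) {
			return true // (2) directed path A -> B: at least A <= B
		}
		// Former condition (3), now removed: a non-strict path from
		// B -> A only proves B <= A, which does not imply A <= B.
		return false
	}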
The difference is highlighted by the example in #34802, where a bounds check is
incorrectly removed by prove, such that negative indexes into a slice
succeed:
n := make([]int, 1)
for i := -1; i <= 0; i++ {
	fmt.Printf("i is %d\n", i)
	n[i] = 1 // No bounds check, program runs, assignment to n[-1] succeeds!!
}
When prove is checking the negative/failed branch from the bounds check at n[i],
in the signed domain we learn (0 > i || i >= len(n)). Because prove can't learn
the OR condition, we check whether we know that i is non-negative so we can
learn something, namely that i >= len(n). Prove uses the poset to check whether
we know that i is non-negative. At this point the poset holds the following
relations as a directed graph:
-1 <= i <= 0
-1 < 0
In poset.OrderedOrEqual, we are testing for 0 <= i. In this case, condition (3)
above is true because there is a non-strict path from i -> 0, and that path
does NOT have any strict edges. Because this condition is true, the poset
reports to prove that i is known to be >= 0. Knowing, incorrectly, that i >= 0,
prove learns from the failed bounds check that i >= len(n) in the signed domain.
When the slice, n, was created, prove learned that len(n) == 1. Because i is
also the induction variable for the loop, upon entering the loop, prove previously
learned that i is in [-1,0]. So when prove attempts to learn from the failed
bounds check, it finds the new fact, i >= len(n), unsatisfiable given that it
previously learned that i <= 0 and len(n) == 1.
Fixes #34802
Change-Id: I235f4224bef97700c3aa5c01edcc595eb9f13afc
Reviewed-on: https://go-review.googlesource.com/c/go/+/200759
Run-TryBot: Zach Jones <zachj1@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Giovanni Bajo <rasky@develer.com>
Reviewed-by: Keith Randall <khr@golang.org>
Increases the exec timeout from 5sec to 1min, and
also prints out the error value on any test failure.
Fixes #34836
Change-Id: Ida2b8bd460243491ef0f90dfe0f978dfe02a0703
Reviewed-on: https://go-review.googlesource.com/c/go/+/200519
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
Test case with code that caused a gccgo error while emitting export
data for an inlinable function.
Updates #34577.
Change-Id: I28b598c4c893c77f4a76bb4f2d27e5b42f702992
Reviewed-on: https://go-review.googlesource.com/c/go/+/198057
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, escape analysis is able to record at most one dereference
when a parameter leaks to the heap; that is, at call sites, it can't
distinguish between any of these three functions:
func x1(p ****int) { sink = *p }
func x2(p ****int) { sink = **p }
func x3(p ****int) { sink = ***p }
Similarly, it's limited to recording parameter leaks to only the first
4 parameters, and only up to 6 dereferences.
All of these limitations are due to the awkward encoding scheme used
at the moment.
This CL replaces the encoding scheme with a simple [8]uint8 array,
which can handle up to the first 7 parameters, and up to 254
dereferences, which ought to be enough for anyone. And if not, it's
much more easily increased.
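A minimal sketch of the idea (the slot layout and names are assumed, not the
exact cmd/compile definitions): a byte of 0 means "does not leak", and 1+d
means "leaks after d dereferences", which is what caps derefs at 254:

	type leaks [8]uint8

	func (l *leaks) add(slot, derefs int) {
		v := uint8(1 + derefs)
		if l[slot] == 0 || v < l[slot] {
			l[slot] = v // remember the shortest dereference path
		}
	}

	func (l *leaks) get(slot int) (derefs int, ok bool) {
		if l[slot] == 0 {
			return 0, false
		}
		return int(l[slot]) - 1, true
	}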
Shrinks export data size geometric mean for Kubernetes by 0.07%.
Fixes #33981.
Change-Id: I10a94b9accac9a0c91490e0d6d458316f5ca1e13
Reviewed-on: https://go-review.googlesource.com/c/go/+/197680
Reviewed-by: Cherry Zhang <cherryyz@google.com>
CL 170950 introduced a regression that makes the compiler produce
an invalid wasm binary if the data section is too large.
Loading such a binary gives the following error:
"LinkError: WebAssembly.instantiate(): data segment is out of bounds"
This change fixes the issue by ensuring that the minimum size of the
linear memory is larger than the end of the data section.
Fixes #34395.
Change-Id: I0c8629de7ffd0d85895ad31bf8c9d45fef197a57
Reviewed-on: https://go-review.googlesource.com/c/go/+/199358
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We're allowed to remove a write barrier when both the old
value in memory and the new value we're writing are not heap pointers.
Improve both those checks a little bit.
A pointer is known to not be a heap pointer if it is read from
read-only memory. This sometimes happens for loads of pointers
from string constants in read-only memory.
Do a better job of tracking which parts of memory are known to be
zero. Before, we just kept track of a range of offsets in the most
recently allocated object. For code that initializes the new object's
fields in a nonstandard order, that tracking is imprecise. Instead,
keep a bit map of the first 64 words of that object, so we can track
precisely what we know to be zeroed.
The new scheme is only precise up to the first 512 bytes of the object.
After that, we'll use write barriers unnecessarily. Hopefully most
initializers of large objects will use typedmemmove, which does only one
write barrier check for the whole initialization.
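A sketch of the bitmap idea (simplified; the real tracking operates on SSA
values inside the compiler's write barrier pass):

	// mask bit i set => word i (offset 8*i) of the most recently
	// allocated object is still known to be zero.
	type zeroedWords struct{ mask uint64 }

	func (z *zeroedWords) newAlloc() { z.mask = ^uint64(0) } // fresh allocations are fully zeroed

	// store records that words overlapping [off, off+size) were written.
	func (z *zeroedWords) store(off, size int64) {
		for w := off / 8; w*8 < off+size && w < 64; w++ {
			z.mask &^= 1 << uint(w) // word w is no longer known zero
		}
	}

	// knownZero reports whether the old value at off is still zero, so it
	// cannot be a heap pointer and needs no write barrier on its account.
	func (z *zeroedWords) knownZero(off int64) bool {
		w := off / 8
		return w >= 0 && w < 64 && z.mask&(1<<uint(w)) != 0
	}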
Fixes #34723
Updates #21561
Change-Id: Idf6e1b7d525042fb67961302d4fc6f941393cac8
Reviewed-on: https://go-review.googlesource.com/c/go/+/199558
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
For commuting ops, check whether the second argument is dead before
checking if the first argument is rematerializeable. Reusing the register
holding a dead value is always best.
Fixes #33580
Change-Id: I7372cfc03d514e6774d2d9cc727a3e6bf6ce2657
Reviewed-on: https://go-review.googlesource.com/c/go/+/199559
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
CL 188317 introduced a compiler crash during dwarf generation which
was reported as Issue #34520. After CL 188217, the issue appears to be
fixed. Add a testcase to avoid future regressions.
Fixes #34520
Change-Id: I73544a9e9baf8dbfb85c19eb6d202beea05affb6
Reviewed-on: https://go-review.googlesource.com/c/go/+/198546
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Nilcheck would move statements from NilCheck values to others that
turned out to be already dead, which led to lost statements. Better
to eliminate the dead code first.
One "error" is removed from test/prove.go because the code is
actually dead, and the additional deadcode pass removes it before
prove can run.
Change-Id: If75926ca1acbb59c7ab9c8ef14d60a02a0a94f8b
Reviewed-on: https://go-review.googlesource.com/c/go/+/198479
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
During package initialization, the compiler tries to optimize:
var A = "foo"
var B = A
into
var A = "foo"
var B = "foo"
so that we can statically initialize both A and B and skip emitting
dynamic initialization code to assign "B = A".
However, this isn't safe in the presence of cmd/link's -X flag, which
might overwrite an initialized string-typed variable at link time. In
particular, if cmd/link changes A's static initialization, it won't
know it also needs to change B's static initialization.
To address this, this CL disables this optimization for string-typed
variables.
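For example (a hypothetical demonstration of why the folding is unsafe):

	package main

	import "fmt"

	var A = "foo"
	var B = A // must remain a dynamic assignment

	func main() {
		// Built with: go build -ldflags="-X main.A=bar"
		fmt.Println(A, B) // must print "bar bar", not "bar foo"
	}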
Fixes #34675.
Change-Id: I1c18f3b855f6d7114aeb39f96aaaf1b452b88236
Reviewed-on: https://go-review.googlesource.com/c/go/+/198657
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
By lazily starting the signal watch loop only on Notify,
we are able to have deadlock detection even when
"os/signal" is imported.
Thanks to Ian Lance Taylor for the solution and discussion.
With this change in, fix a runtime goroutine count test that
assumed that os/signal.init would unconditionally start the
signal watching goroutine, but alas no more.
Fixes #21576.
Change-Id: I6eecf82a887f59f2ec8897f1bcd67ca311ca42ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/101036
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
ORUNESTR represents the special case of integer->string conversion. If
the integer is a constant, then the string is a constant too, so
evconst needs to perform constant folding here.
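For example, the conversion below must fold to a string constant at compile
time (an illustrative case, not taken from the test suite):

	const (
		r = 'A'
		s = string(r) + "BC" // evconst folds string('A') to "A", so s == "ABC"
	)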
Passes toolstash-check.
Fixes #34563.
Change-Id: Ieab3d76794d8ce570106b6b707a4bcd725d156e9
Reviewed-on: https://go-review.googlesource.com/c/go/+/197677
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
prove wasn't able to detect induction variables that were bounded
by another induction variable. This happened because an indvar
is a Phi, and thus in case of a dependency, the loop bounding
condition looked like Phi < Phi. This triggered an existing
codepath that checked whether the upper bound was a Phi, to
detect loop conditions written in reverse order with respect to
the idiomatic way (e.g.: for i:=0; len(n)>i; i++).
To fix this, we call the indvar pattern matching on both operands
of the loop condition, so that the first operand that matches
will be treated as the indvar.
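For illustration, a loop of the shape this enables (the inner bound is the
outer induction variable, and the condition is written in reversed order):

	func sum(n []int) (t int) {
		for i := 0; i < len(n); i++ {
			for j := 0; i > j; j++ { // j is bounded by the indvar i
				t += n[j] // j < i < len(n): the bounds check can be dropped
			}
		}
		return
	}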
Updates #24660 (removes a bounds check from Fannkuch)
Change-Id: Iade83d8deb54f14277ed3f2e37b190e1ed173d11
Reviewed-on: https://go-review.googlesource.com/c/go/+/195220
Reviewed-by: David Chase <drchase@google.com>
Previously, we would recognize &(T{...}) expressions during type
checking, rewrite them into (*T){...}, and then do a lot of extra work
to make sure the user doesn't write (*T){...} themselves, only to
resynthesize the OPTRLIT later on.
This CL simply handles &T{...} directly in the straightforward
manner, by changing OADDR directly to OPTRLIT when appropriate.
While here, match go/types's invalid composite literal type error
message.
Passes toolstash-check.
Change-Id: I902b14c7e2cd9fa93e6915dd58272d2352ba38f8
Reviewed-on: https://go-review.googlesource.com/c/go/+/197120
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Now that OpSliceMake is called by runtime.makeslice callers,
prove can see and record the actual length and cap of each
slice being constructed.
This small patch is enough to remove 260 additional bounds checks
from cmd+std.
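For example (an illustrative case; the length flows from the make site into
prove):

	func fill(n int) []int {
		s := make([]int, n)
		for i := 0; i < n; i++ {
			s[i] = i // prove now knows len(s) == n, so the check goes away
		}
		return s
	}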
Thanks to Martin Möhrmann for pointing me to CL 141822, which
I had missed.
Updates #24660
Change-Id: I14556850f285392051f3f07d13b456b608b64eb9
Reviewed-on: https://go-review.googlesource.com/c/go/+/196784
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Add a bunch of extra tests and benchmarks for defer, in preparation for the new
low-cost (open-coded) implementation of defers (see #34481):
- New file defer_test.go that tests a bunch more unusual defer scenarios,
including things that might have problems for open-coded defers.
- Additions to callers_test.go actually verifying what the stack trace looks like
for various panic or panic-recover scenarios.
- Additions to crash_test.go testing several more crash scenarios involving
recursive panics.
- New benchmark in runtime_test.go measuring speed of panic-recover.
- New CGo benchmark in cgo_test.go calling from Go to C back to Go that
shows defer overhead.
Updates #34481
Change-Id: I423523f3e05fc0229d4277dd00073289a5526188
Reviewed-on: https://go-review.googlesource.com/c/go/+/197017
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, when we create an OPTRLIT node, it defaults to the
OCOMPLIT's final element's position. But it improves error messages to
use the OCOMPLIT's own position instead.
Change-Id: Ibb031f543c7248d88d99fd0737685e01d86e2500
Reviewed-on: https://go-review.googlesource.com/c/go/+/197119
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This commit just adds a regress test for a few of the important corner
cases that I identified in #27557, which turn out to not be tested
anywhere.
While here, annotate a few of the existing test cases where we could
improve escape analysis.
Updates #27557.
Change-Id: Ie57792a538f7899bb17915485fabc86100f469a3
Reviewed-on: https://go-review.googlesource.com/c/go/+/197137
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
On modern 64-bit CPUs a SHR, SHL or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register will take 2 cycles
and needs to wait for the input register value to be available.
Large constants used to mask the high bits of a register with an AND
instruction cannot be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.
However, that MOV instruction is not dependent on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.
Using a pair of shifts to mask high bits instead of an AND to mask high
bits of a register has a shorter encoding and uses one less general
purpose register but is slower due to taking one clock cycle longer
if there is no register pressure that would make the AND variant need to
generate a spill.
For example, the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f SHRQ $0x3f, DX
48c1e23f SHLQ $0x3f, DX
After this CL, the instructions are the same as those GCC and LLVM use:
48b80000000000000080 MOVQ $0x8000000000000000, AX
4821d0 ANDQ DX, AX
Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.
Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:
var GlobalU uint

func BenchmarkAndHighBits(b *testing.B) {
	x := uint(0)
	for i := 0; i < b.N; i++ {
		x &= 1 << 63
	}
	GlobalU = x
}
amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name old time/op new time/op delta
AndHighBits-4 0.61ns ± 6% 0.42ns ± 6% -31.42% (p=0.000 n=25+25)
'go run run.go -all_codegen -v codegen' passes with the following adjustments:
ARM64: The BFXIL pattern ((x << lc) >> rc | y & ac) needed adjustment
since ORshiftRL generation fusing '>> rc' and '|' interferes
with matching ((x << lc) >> rc) to generate UBFX. Previously
ORshiftLL was created first using the shifts generated for (y & ac).
S390X: Add rules for abs and copysign to match use of AND instead of SHIFTs.
Updates #33826
Updates #32781
Change-Id: I5a59f6239660d53c029cd22dfb44ddf39f93a56c
Reviewed-on: https://go-review.googlesource.com/c/go/+/196810
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
On modern 64-bit CPUs a SHR, SHL or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register will take 2 cycles
and needs to wait for the input register value to be available.
Large constants used to mask the high bits of a register with an AND
instruction cannot be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.
However, that MOV instruction is not dependent on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.
Using a pair of shifts to mask high bits instead of an AND to mask high
bits of a register has a shorter encoding and uses one less general
purpose register but is slower due to taking one clock cycle longer
if there is no register pressure that would make the AND variant need to
generate a spill.
For example, the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f SHRQ $0x3f, DX
48c1e23f SHLQ $0x3f, DX
After this CL, the instructions are the same as those GCC and LLVM use:
48b80000000000000080 MOVQ $0x8000000000000000, AX
4821d0 ANDQ DX, AX
Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.
Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:
var GlobalU uint

func BenchmarkAndHighBits(b *testing.B) {
	x := uint(0)
	for i := 0; i < b.N; i++ {
		x &= 1 << 63
	}
	GlobalU = x
}
amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name old time/op new time/op delta
AndHighBits-4 0.61ns ± 6% 0.42ns ± 6% -31.42% (p=0.000 n=25+25)
'go run run.go -all_codegen -v codegen' passes with the following adjustments:
ARM64: The BFXIL pattern ((x << lc) >> rc | y & ac) needed adjustment
since ORshiftRL generation fusing '>> rc' and '|' interferes
with matching ((x << lc) >> rc) to generate UBFX. Previously
ORshiftLL was created first using the shifts generated for (y & ac).
S390X: Add rules for abs and copysign to match use of AND instead of SHIFTs.
Updates #33826
Updates #32781
Change-Id: I43227da76b625de03fbc51117162b23b9c678cdb
Reviewed-on: https://go-review.googlesource.com/c/go/+/194297
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
After CL 195204, i32.eqz instructions no longer appear in if conditions
unless needed. I forgot to run the codegen tests while submitting that CL.
Thanks to @martisch for catching it.
Fixes #34442
Change-Id: I177b064b389be48e39d564849714d7a8839be13e
Reviewed-on: https://go-review.googlesource.com/c/go/+/196580
Run-TryBot: Agniva De Sarker <agniva.quicksilver@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
When compiling expression switches, we try to optimize runs of
constants into binary searches. The ordering used isn't visible to the
application, so it's unimportant as long as we're consistent between
sorting and searching.
For strings, it's much cheaper to compare string lengths than strings
themselves, so instead of ordering strings by "si <= sj", we currently
order them by "len(si) < len(sj) || len(si) == len(sj) && si <= sj"
(i.e., the lexicographical ordering on the 2-tuple (len(s), s)).
However, it's also somewhat cheaper to compare strings for equality
(i.e., ==) than for ordering (i.e., <=). And if there were two or
three string constants of the same length in a switch statement, we
might unnecessarily emit ordering comparisons.
For example, given:
switch s {
case "", "1", "2", "3": // ordered by length then content
	goto L
}
we currently compile this as:
if len(s) < 1 || len(s) == 1 && s <= "1" {
	if s == "" { goto L }
	else if s == "1" { goto L }
} else {
	if s == "2" { goto L }
	else if s == "3" { goto L }
}
This CL switches to using a 2-level binary search---first on len(s),
then on s itself---so that string ordering comparisons are only needed
when there are 4 or more strings of the same length. (4 being the
cut-off for when using binary search is actually worthwhile.)
So the above switch instead now compiles to:
if len(s) == 0 {
	if s == "" { goto L }
} else if len(s) == 1 {
	if s == "1" { goto L }
	else if s == "2" { goto L }
	else if s == "3" { goto L }
}
which is better optimized by walk and SSA. (Notably, because there are
only two distinct lengths and no more than three strings of any
particular length, this example ends up falling back to simply using
linear search.)
Test case by khr@ from CL 195138.
Fixes #33934.
Change-Id: I8eeebcaf7e26343223be5f443d6a97a0daf84f07
Reviewed-on: https://go-review.googlesource.com/c/go/+/195340
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
golang.org/cl/109517 optimized the compiler to avoid the allocation for make in
append(x, make([]T, y)...). This was only implemented for the case that y has type int.
This change extends the optimization to trigger for all integer types where the value
is known at compile time to fit into an int.
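For example (a hypothetical shape of code that now benefits; the clamp makes
the value provably fit in an int):

	func extend(x []byte, y uint64) []byte {
		if y > 1024 {
			y = 1024
		}
		return append(x, make([]byte, y)...) // no temporary slice allocation
	}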
name old time/op new time/op delta
ExtendInt-12 106ns ± 4% 106ns ± 0% ~ (p=0.351 n=10+6)
ExtendUint64-12 1.03µs ± 5% 0.10µs ± 4% -90.01% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
ExtendInt-12 0.00B 0.00B ~ (all equal)
ExtendUint64-12 13.6kB ± 0% 0.0kB -100.00% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
ExtendInt-12 0.00 0.00 ~ (all equal)
ExtendUint64-12 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
Updates #29785
Change-Id: Ief7760097c285abd591712da98c5b02bc3961fcd
Reviewed-on: https://go-review.googlesource.com/c/go/+/182559
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This CL expands the test for #29612 to check that type switches also
work correctly when type hashes collide.
Change-Id: Ia153743e6ea0736c1a33191acfe4d8ba890be527
Reviewed-on: https://go-review.googlesource.com/c/go/+/195782
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Support for overlapping interfaces is a new (proposed) Go language
feature to be supported in Go 1.14, so it shouldn't be supported under
-lang=go1.13 or earlier.
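For example, this must be rejected under -lang=go1.13 but accepted under
-lang=go1.14:

	type A interface {
		M()
		N()
	}

	type B interface {
		M()
		O()
	}

	type AB interface { // M is in both A and B: overlapping embedding
		A
		B
	}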
Fixes #34329.
Change-Id: I5fea5716b7d135476980bc40b4f6e8c611b67735
Reviewed-on: https://go-review.googlesource.com/c/go/+/195678
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
This reverts CL 192101.
Reason for revert: The same paragraph was added 2 weeks ago
(look a few lines above).
Change-Id: I05efb2631d7b4966f66493f178f2a649c715a3cc
Reviewed-on: https://go-review.googlesource.com/c/go/+/195637
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This information is redundant with the position information already
provided. Also, no other -m diagnostics print out the function name.
While here, report parameter leak diagnostics against the parameter
declaration position rather than the function, and use Warnl for
"moved to heap" messages.
Test cases updated programmatically by removing the first word from
every "no match for" error emitted by run.go:
go run run.go |& \
sed -E -n 's/^(.*):(.*): no match for `([^ ]* (.*))` in:$/\1!\2!\3!\4/p' | \
while IFS='!' read -r fn line before after; do
before=$(echo "$before" | sed 's/[.[\*^$()+?{|]/\\&/g')
after=$(echo "$after" | sed -E 's/(\&|\\)/\\&/g')
fn=$(find . -name "${fn}" | head -1)
sed -i -E -e "${line}s/\"${before}\"/\"${after}\"/" "${fn}"
done
Passes toolstash-check.
Change-Id: I6e02486b1409e4a8dbb2b9b816d22095835426b5
Reviewed-on: https://go-review.googlesource.com/c/go/+/195040
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
It is useful to know about the -all_codegen option for running
codegen tests for all platforms. I was puzzled that some codegen
test was not failing on my local machine or on trybot, until I
found this option.
Change-Id: I062cf4d73f6a6c9ebc2258195779d2dab21bc36d
Reviewed-on: https://go-review.googlesource.com/c/go/+/192101
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change adds an intrinsic for Mul64 on s390x. To achieve that,
a new assembly instruction, MLGR, is introduced in s390x/asmz.go. This assembly
instruction directly uses an existing instruction on Z and supports multiplication
of two 64-bit unsigned integers, storing the result in two separate registers.
In this case, we require the multiplicand to be stored in register R3 and
the output result (the high and low 64 bits of the product) to be stored in
R2 and R3 respectively.
A test case is also added.
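The intrinsic targets math/bits.Mul64, which returns the 128-bit product of
its operands as a (hi, lo) pair:

	package main

	import (
		"fmt"
		"math/bits"
	)

	func main() {
		hi, lo := bits.Mul64(1<<63, 4) // 2^63 * 4 = 2^65
		fmt.Println(hi, lo)            // prints: 2 0
	}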
Benchmark:
name old time/op new time/op delta
Mul-18 11.1ns ± 0% 1.4ns ± 0% -87.39% (p=0.002 n=8+10)
Mul32-18 2.07ns ± 0% 2.07ns ± 0% ~ (all equal)
Mul64-18 11.1ns ± 1% 1.4ns ± 0% -87.42% (p=0.000 n=10+10)
Change-Id: Ieca6ad1f61fff9a48a31d50bbd3f3c6d9e6675c1
Reviewed-on: https://go-review.googlesource.com/c/go/+/194572
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Part of the general trend of moving yyerror calls out of walk and into
typecheck.
Notably, this requires splitting test/typeswitch2.go into two files,
because now some of the errors are reported during typecheck and
others are still reported during walk; and if there were any errors
during typecheck, then cmd/compile exits without invoking walk.
Passes toolstash-check.
Change-Id: I05ee0c00b99af659ee1eef098d342d0d736cf31e
Reviewed-on: https://go-review.googlesource.com/c/go/+/194659
Reviewed-by: Robert Griesemer <gri@golang.org>
This CL moves parameter tagging to before escape analysis is complete,
so we still have access to EscLocation. This will be useful once
EscLocation starts tracking higher-fidelity escape details.
Notably, this CL stops using n.Esc to record parameter escape analysis
details. Now escape analysis only ever sets n.Esc to EscNone or
EscHeap. (It still defaults to EscUnknown, and is set to EscNever in
some places though.)
Passes toolstash-check.
Updates #33981.
Change-Id: I50a91ea1e38c442092de6cd14e20b211f8f818c9
Reviewed-on: https://go-review.googlesource.com/c/go/+/193178
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This CL gets rid of the MOVDreg and MOVDnop SSA operations on
s390x. They were originally inserted to help avoid situations
where a sign/zero extension was elided but a spill invalidated
the optimization. It's not really clear we need to do this though
(amd64 doesn't have these ops for example) so long as we are
careful when removing sign/zero extensions. Also, the MOVDreg
technique doesn't work if the register is spilled before the
MOVDreg op (I haven't seen that in practice).
Removing these ops reduces the complexity of the rules and also
allows us to unblock optimizations. For example, the compiler can
now merge the loads in binary.{Big,Little}Endian.PutUint16 which
it wasn't able to do before. This CL reduces the size of the .text
section in the go tool by about 4.7KB (0.09%).
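For reference, binary.BigEndian.PutUint16 is implemented as two single-byte
memory accesses, which the rewrite rules can now combine into one wider
access on s390x (a sketch of the stdlib pattern, not of the compiler rules):

	func putUint16BE(b []byte, v uint16) {
		_ = b[1] // early bounds check hint to the compiler
		b[0] = byte(v >> 8)
		b[1] = byte(v) // the two accesses can now be merged
	}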
Change-Id: Icaddae7f2e4f9b2debb6fabae845adb3f73b41db
Reviewed-on: https://go-review.googlesource.com/c/go/+/173897
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Previously, we used a single "untyped number" type for all untyped
numeric constants. This led to vague error messages like "string(1.0)"
reporting that "1 (type untyped number)" can't be converted to string,
even though "string(1)" is valid.
This CL makes cmd/compile more like go/types by utilizing
types.Ideal{int,rune,float,complex} instead of types.Types[TIDEAL],
and keeping n.Type in sync with n.Val().Ctype() during constant
folding.
Thanks to K Heller for looking into this issue, and for the included
test case.
Fixes #21979.
Change-Id: Ibfea88c05704bc3c0a502a455d018a375589754d
Reviewed-on: https://go-review.googlesource.com/c/go/+/194019
Reviewed-by: Robert Griesemer <gri@golang.org>
On modern 64-bit CPUs a SHR, SHL or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register will take 2 cycles
and needs to wait for the input register value to be available.
Large constants used to mask the high bits of a register with an AND
instruction cannot be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.
However, that MOV instruction is not dependent on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.
Using a pair of shifts to mask high bits instead of an AND to mask high
bits of a register has a shorter encoding and uses one less general
purpose register but is slower due to taking one clock cycle longer
if there is no register pressure that would make the AND variant need to
generate a spill.
For example, the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f SHRQ $0x3f, DX
48c1e23f SHLQ $0x3f, DX
After this CL, the instructions are the same as those GCC and LLVM use:
48b80000000000000080 MOVQ $0x8000000000000000, AX
4821d0 ANDQ DX, AX
Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.
Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:
var GlobalU uint

func BenchmarkAndHighBits(b *testing.B) {
	x := uint(0)
	for i := 0; i < b.N; i++ {
		x &= 1 << 63
	}
	GlobalU = x
}
amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name old time/op new time/op delta
AndHighBits-4 0.61ns ± 6% 0.42ns ± 6% -31.42% (p=0.000 n=25+25)
Updates #33826
Updates #32781
Change-Id: I862d3587446410c447b9a7265196b57f85358633
Reviewed-on: https://go-review.googlesource.com/c/go/+/191780
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>