Increases the exec timeout from 5sec to 1min, and also prints out
the error value on any test failure.
Fixes #34836
Change-Id: Ida2b8bd460243491ef0f90dfe0f978dfe02a0703
Reviewed-on: https://go-review.googlesource.com/c/go/+/200519
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
Test case with code that caused a gccgo error while emitting export
data for an inlinable function.
Updates #34577.
Change-Id: I28b598c4c893c77f4a76bb4f2d27e5b42f702992
Reviewed-on: https://go-review.googlesource.com/c/go/+/198057
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, escape analysis is able to record at most one dereference
when a parameter leaks to the heap; that is, at call sites, it can't
distinguish between any of these three functions:
func x1(p ****int) { sink = *p }
func x2(p ****int) { sink = **p }
func x3(p ****int) { sink = ***p }
Similarly, it's limited to recording parameter leaks to only the first
4 parameters, and only up to 6 dereferences.
All of these limitations are due to the awkward encoding scheme used
at the moment.
This CL replaces the encoding scheme with a simple [8]uint8 array,
which can handle up to the first 7 parameters, and up to 254
dereferences, which ought to be enough for anyone. And if not, it's
much more easily increased.
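A rough sketch of the new representation (the identifiers below are
invented for illustration; the CL's actual names and details may differ):
// One record per parameter: index 0 tracks a direct leak to the heap and
// the remaining entries track leaks to the first 7 (result) parameters.
// A zero entry means "no leak"; otherwise the entry stores derefs+1,
// keeping the smallest (most conservative) dereference count seen.
type leaks [8]uint8

func (l *leaks) add(index, derefs int) {
    if index >= len(l) || derefs > 254 {
        return // beyond what the encoding can represent
    }
    if v := uint8(derefs + 1); l[index] == 0 || v < l[index] {
        l[index] = v
    }
}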
Shrinks export data size geometric mean for Kubernetes by 0.07%.
Fixes #33981.
Change-Id: I10a94b9accac9a0c91490e0d6d458316f5ca1e13
Reviewed-on: https://go-review.googlesource.com/c/go/+/197680
Reviewed-by: Cherry Zhang <cherryyz@google.com>
CL 170950 introduced a regression that makes the compiler produce
an invalid wasm binary if the data section is too large.
Loading such a binary gives the following error:
"LinkError: WebAssembly.instantiate(): data segment is out of bounds"
This change fixes the issue by ensuring that the minimum size of the
linear memory is larger than the end of the data section.
Fixes #34395.
Change-Id: I0c8629de7ffd0d85895ad31bf8c9d45fef197a57
Reviewed-on: https://go-review.googlesource.com/c/go/+/199358
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We're allowed to remove a write barrier when both the old
value in memory and the new value we're writing are not heap pointers.
Improve both those checks a little bit.
A pointer is known to not be a heap pointer if it is read from
read-only memory. This sometimes happens for loads of pointers
from string constants in read-only memory.
Do a better job of tracking which parts of memory are known to be
zero. Before, we just kept track of a range of offsets in the most
recently allocated object. For code that initializes the new object's
fields in a nonstandard order, that tracking is imprecise. Instead,
keep a bit map of the first 64 words of that object, so we can track
precisely what we know to be zeroed.
The new scheme is only precise up to the first 512 bytes of the object.
After that, we'll use write barriers unnecessarily. Hopefully most
initializers of large objects will use typedmemmove, which does only one
write barrier check for the whole initialization.
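As a hedged illustration (mine, not taken from this CL's tests), both
improvements can apply together to code like:
type T struct {
    s string
    n int
}

func f() *T {
    p := new(T)   // all of p's fields are known to be zero
    p.n = 1
    p.s = "hello" // old value is zero and the new string's data pointer is in
                  // read-only memory, so the write barrier can be dropped
    return p
}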
Fixes #34723
Updates #21561
Change-Id: Idf6e1b7d525042fb67961302d4fc6f941393cac8
Reviewed-on: https://go-review.googlesource.com/c/go/+/199558
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
For commuting ops, check whether the second argument is dead before
checking if the first argument is rematerializeable. Reusing the register
holding a dead value is always best.
Fixes #33580
Change-Id: I7372cfc03d514e6774d2d9cc727a3e6bf6ce2657
Reviewed-on: https://go-review.googlesource.com/c/go/+/199559
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
CL 188317 introduced a compiler crash during dwarf generation which
was reported as Issue #34520. After CL 188217, the issue appears to be
fixed. Add a testcase to avoid future regressions.
Fixes #34520
Change-Id: I73544a9e9baf8dbfb85c19eb6d202beea05affb6
Reviewed-on: https://go-review.googlesource.com/c/go/+/198546
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Nilcheck would move statements from NilCheck values to others that
turned out to be already dead, which led to lost statements. Better
to eliminate the dead code first.
One "error" is removed from test/prove.go because the code is
actually dead, and the additional deadcode pass removes it before
prove can run.
Change-Id: If75926ca1acbb59c7ab9c8ef14d60a02a0a94f8b
Reviewed-on: https://go-review.googlesource.com/c/go/+/198479
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
During package initialization, the compiler tries to optimize:
var A = "foo"
var B = A
into
var A = "foo"
var B = "foo"
so that we can statically initialize both A and B and skip emitting
dynamic initialization code to assign "B = A".
However, this isn't safe in the presence of cmd/link's -X flag, which
might overwrite an initialized string-typed variable at link time. In
particular, if cmd/link changes A's static initialization, it won't
know it also needs to change B's static initialization.
To address this, this CL disables this optimization for string-typed
variables.
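For example (illustrative; assuming A and B above live in package main),
building with
go build -ldflags="-X main.A=bar"
rewrites A's static data at link time, but with the old optimization B
would still be statically initialized to "foo".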
Fixes #34675.
Change-Id: I1c18f3b855f6d7114aeb39f96aaaf1b452b88236
Reviewed-on: https://go-review.googlesource.com/c/go/+/198657
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
By lazily starting the signal watch loop only on Notify,
we are able to have deadlock detection even when
"os/signal" is imported.
Thanks to Ian Lance Taylor for the solution and discussion.
With this change in, fix a runtime goroutine count test that
assumed that os/signal.init would unconditionally start the
signal-watching goroutine, which is no longer the case.
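A minimal sketch of the lazy start inside package signal (assuming a
sync.Once guard and reusing the existing loop function; the CL's exact
mechanism may differ):
var watchSignalLoopOnce sync.Once

func Notify(c chan<- os.Signal, sig ...os.Signal) {
    watchSignalLoopOnce.Do(func() {
        go loop() // previously started unconditionally from init
    })
    // ... existing registration logic ...
}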
Fixes #21576.
Change-Id: I6eecf82a887f59f2ec8897f1bcd67ca311ca42ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/101036
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
ORUNESTR represents the special case of integer->string conversion. If
the integer is a constant, then the string is a constant too, so
evconst needs to perform constant folding here.
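For example (illustrative; not necessarily the test case from the issue):
const s = string(65) // ORUNESTR with a constant operand, folded to the constant "A"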
Passes toolstash-check.
Fixes #34563.
Change-Id: Ieab3d76794d8ce570106b6b707a4bcd725d156e9
Reviewed-on: https://go-review.googlesource.com/c/go/+/197677
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
prove wasn't able to detect induction variables that were bounded
by another induction variable. This happened because an indvar
is a Phi, and thus in case of a dependency, the loop bounding
condition looked like Phi < Phi. This triggered an existing
codepath that checked whether the upper bound was a Phi, to
detect loop conditions written in reverse order with respect to the
idiomatic way (e.g.: for i := 0; len(n) > i; i++).
To fix this, we call the indvar pattern matching on both operands
of the loop condition, so that the first operand that matches
will be treated as the indvar.
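An illustrative example of the newly handled pattern (mine, not from the CL):
func triangular(a []int) (s int) {
    for i := 0; i < len(a); i++ {
        for j := 0; j < i; j++ { // j's upper bound is itself an induction variable
            s += a[j] // j < i < len(a), so the bounds check can be dropped
        }
    }
    return s
}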
Updates #24660 (removes a boundcheck from Fannkuch)
Change-Id: Iade83d8deb54f14277ed3f2e37b190e1ed173d11
Reviewed-on: https://go-review.googlesource.com/c/go/+/195220
Reviewed-by: David Chase <drchase@google.com>
Previously, we would recognize &(T{...}) expressions during type
checking, rewrite them into (*T){...}, and then do a lot of extra work
to make sure the user doesn't write (*T){...} themselves and to
resynthesize the OPTRLIT later on.
This CL simply handles &T{...} directly in the straightforward
manner, by changing OADDR directly to OPTRLIT when appropriate.
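For example (illustrative):
type point struct{ x, y int }

var p = &point{x: 1, y: 2} // the OADDR of the OCOMPLIT becomes an OPTRLIT directly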
While here, match go/types's invalid composite literal type error
message.
Passes toolstash-check.
Change-Id: I902b14c7e2cd9fa93e6915dd58272d2352ba38f8
Reviewed-on: https://go-review.googlesource.com/c/go/+/197120
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Now that OpSliceMake is called by runtime.makeslice callers,
prove can see and record the actual length and cap of each
slice being constructed.
This small patch is enough to remove 260 additional bound checks
from cmd+std.
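An illustrative example (not necessarily one of the checks removed):
func fill(n int) []int {
    s := make([]int, n) // prove now records len(s) == n and cap(s) == n
    for i := 0; i < n; i++ {
        s[i] = i // i < n == len(s), so the bounds check is redundant
    }
    return s
}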
Thanks to Martin Möhrmann for pointing me to CL 141822, which
I had missed.
Updates #24660
Change-Id: I14556850f285392051f3f07d13b456b608b64eb9
Reviewed-on: https://go-review.googlesource.com/c/go/+/196784
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Add a bunch of extra tests and benchmarks for defer, in preparation for the new
low-cost (open-coded) implementation of defers (see #34481):
- New file defer_test.go that tests a bunch more unusual defer scenarios,
including things that might have problems for open-coded defers.
- Additions to callers_test.go actually verifying what the stack trace looks like
for various panic or panic-recover scenarios.
- Additions to crash_test.go testing several more crash scenarios involving
recursive panics.
- New benchmark in runtime_test.go measuring the speed of panic-recover.
- New CGo benchmark in cgo_test.go calling from Go to C back to Go that
shows defer overhead.
Updates #34481
Change-Id: I423523f3e05fc0229d4277dd00073289a5526188
Reviewed-on: https://go-review.googlesource.com/c/go/+/197017
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, when we create an OPTRLIT node, it defaults to the
OCOMPLIT's final element's position. But error messages improve if we
use the OCOMPLIT's own position instead.
Change-Id: Ibb031f543c7248d88d99fd0737685e01d86e2500
Reviewed-on: https://go-review.googlesource.com/c/go/+/197119
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This commit just adds a regress test for a few of the important corner
cases that I identified in #27557, which turn out not to be tested
anywhere.
While here, annotate a few of the existing test cases where we could
improve escape analysis.
Updates #27557.
Change-Id: Ie57792a538f7899bb17915485fabc86100f469a3
Reviewed-on: https://go-review.googlesource.com/c/go/+/197137
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
On modern 64-bit CPUs a SHR, SHL or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register takes 2 cycles
and needs to wait for the input register value to be available.
Large constants used to mask the high bits of a register with an AND
instruction cannot be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.
However, that MOV instruction does not depend on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.
Using a pair of shifts instead of an AND to mask the high bits of a
register has a shorter encoding and uses one less general-purpose
register, but it is slower because it takes one clock cycle longer
if there is no register pressure that would make the AND variant
generate a spill.
For example, the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f SHRQ $0x3f, DX
48c1e23f SHLQ $0x3f, DX
After this CL, the instructions are the same as those used by GCC and LLVM:
48b80000000000000080 MOVQ $0x8000000000000000, AX
4821d0 ANDQ DX, AX
Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.
Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:
var GlobalU uint
func BenchmarkAndHighBits(b *testing.B) {
    x := uint(0)
    for i := 0; i < b.N; i++ {
        x &= 1 << 63
    }
    GlobalU = x
}
amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name old time/op new time/op delta
AndHighBits-4 0.61ns ± 6% 0.42ns ± 6% -31.42% (p=0.000 n=25+25)
'go run run.go -all_codegen -v codegen' passes with the following adjustments:
ARM64: The BFXIL pattern ((x << lc) >> rc | y & ac) needed adjustment
since ORshiftRL generation fusing '>> rc' and '|' interferes
with matching ((x << lc) >> rc) to generate UBFX. Previously
ORshiftLL was created first using the shifts generated for (y & ac).
S390X: Add rules for abs and copysign to match use of AND instead of SHIFTs.
Updates #33826
Updates #32781
Change-Id: I5a59f6239660d53c029cd22dfb44ddf39f93a56c
Reviewed-on: https://go-review.googlesource.com/c/go/+/196810
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
On modern 64-bit CPUs a SHR, SHL or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register takes 2 cycles
and needs to wait for the input register value to be available.
Large constants used to mask the high bits of a register with an AND
instruction cannot be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.
However, that MOV instruction does not depend on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.
Using a pair of shifts instead of an AND to mask the high bits of a
register has a shorter encoding and uses one less general-purpose
register, but it is slower because it takes one clock cycle longer
if there is no register pressure that would make the AND variant
generate a spill.
For example, the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f SHRQ $0x3f, DX
48c1e23f SHLQ $0x3f, DX
After this CL, the instructions are the same as those used by GCC and LLVM:
48b80000000000000080 MOVQ $0x8000000000000000, AX
4821d0 ANDQ DX, AX
Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.
Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:
var GlobalU uint
func BenchmarkAndHighBits(b *testing.B) {
    x := uint(0)
    for i := 0; i < b.N; i++ {
        x &= 1 << 63
    }
    GlobalU = x
}
amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name old time/op new time/op delta
AndHighBits-4 0.61ns ± 6% 0.42ns ± 6% -31.42% (p=0.000 n=25+25)
'go run run.go -all_codegen -v codegen' passes with the following adjustments:
ARM64: The BFXIL pattern ((x << lc) >> rc | y & ac) needed adjustment
since ORshiftRL generation fusing '>> rc' and '|' interferes
with matching ((x << lc) >> rc) to generate UBFX. Previously
ORshiftLL was created first using the shifts generated for (y & ac).
S390X: Add rules for abs and copysign to match use of AND instead of SHIFTs.
Updates #33826
Updates #32781
Change-Id: I43227da76b625de03fbc51117162b23b9c678cdb
Reviewed-on: https://go-review.googlesource.com/c/go/+/194297
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
After CL 195204, i32.eqz instructions no longer appear in if conditions
unless they are needed. I forgot to run the codegen tests while
submitting that CL. Thanks to @martisch for catching it.
Fixes #34442
Change-Id: I177b064b389be48e39d564849714d7a8839be13e
Reviewed-on: https://go-review.googlesource.com/c/go/+/196580
Run-TryBot: Agniva De Sarker <agniva.quicksilver@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
When compiling expression switches, we try to optimize runs of
constants into binary searches. The ordering used isn't visible to the
application, so it's unimportant as long as we're consistent between
sorting and searching.
For strings, it's much cheaper to compare string lengths than strings
themselves, so instead of ordering strings by "si <= sj", we currently
order them by "len(si) < len(sj) || len(si) == len(sj) && si <= sj"
(i.e., the lexicographical ordering on the 2-tuple (len(s), s)).
However, it's also somewhat cheaper to compare strings for equality
(i.e., ==) than for ordering (i.e., <=). And if there were two or
three string constants of the same length in a switch statement, we
might unnecessarily emit ordering comparisons.
For example, given:
switch s {
case "", "1", "2", "3": // ordered by length then content
    goto L
}
we currently compile this as:
if len(s) < 1 || len(s) == 1 && s <= "1" {
    if s == "" { goto L }
    else if s == "1" { goto L }
} else {
    if s == "2" { goto L }
    else if s == "3" { goto L }
}
This CL switches to using a 2-level binary search---first on len(s),
then on s itself---so that string ordering comparisons are only needed
when there are 4 or more strings of the same length. (4 being the
cut-off for when using binary search is actually worthwhile.)
So the above switch instead now compiles to:
if len(s) == 0 {
    if s == "" { goto L }
} else if len(s) == 1 {
    if s == "1" { goto L }
    else if s == "2" { goto L }
    else if s == "3" { goto L }
}
which is better optimized by walk and SSA. (Notably, because there are
only two distinct lengths and no more than three strings of any
particular length, this example ends up falling back to simply using
linear search.)
Test case by khr@ from CL 195138.
Fixes #33934.
Change-Id: I8eeebcaf7e26343223be5f443d6a97a0daf84f07
Reviewed-on: https://go-review.googlesource.com/c/go/+/195340
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
golang.org/cl/109517 optimized the compiler to avoid the allocation for make in
append(x, make([]T, y)...). This was only implemented for the case that y has type int.
This change extends the optimization to trigger for all integer types where the value
is known at compile time to fit into an int.
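An illustrative example (mine, not from the CL) of a call that now avoids
the temporary allocation:
const padding uint64 = 1024 // known at compile time to fit into an int

func pad(buf []byte) []byte {
    return append(buf, make([]byte, padding)...) // no slice is allocated for the make
}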
name old time/op new time/op delta
ExtendInt-12 106ns ± 4% 106ns ± 0% ~ (p=0.351 n=10+6)
ExtendUint64-12 1.03µs ± 5% 0.10µs ± 4% -90.01% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
ExtendInt-12 0.00B 0.00B ~ (all equal)
ExtendUint64-12 13.6kB ± 0% 0.0kB -100.00% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
ExtendInt-12 0.00 0.00 ~ (all equal)
ExtendUint64-12 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
Updates #29785
Change-Id: Ief7760097c285abd591712da98c5b02bc3961fcd
Reviewed-on: https://go-review.googlesource.com/c/go/+/182559
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This CL expands the test for #29612 to check that type switches also
work correctly when type hashes collide.
Change-Id: Ia153743e6ea0736c1a33191acfe4d8ba890be527
Reviewed-on: https://go-review.googlesource.com/c/go/+/195782
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Support for overlapping interfaces is a new (proposed) Go language
feature to be supported in Go 1.14, so it shouldn't be supported under
-lang=go1.13 or earlier.
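For example (illustrative), overlapping interfaces permit the duplicated
method below:
type ReadCloser interface {
    Read(p []byte) (n int, err error)
    Close() error
}

type ReadWriteCloser interface {
    ReadCloser
    Write(p []byte) (n int, err error)
    Close() error // duplicates the embedded Close: accepted in go1.14, rejected under -lang=go1.13
}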
Fixes #34329.
Change-Id: I5fea5716b7d135476980bc40b4f6e8c611b67735
Reviewed-on: https://go-review.googlesource.com/c/go/+/195678
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
This reverts CL 192101.
Reason for revert: The same paragraph was added 2 weeks ago
(look a few lines above)
Change-Id: I05efb2631d7b4966f66493f178f2a649c715a3cc
Reviewed-on: https://go-review.googlesource.com/c/go/+/195637
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This information is redundant with the position information already
provided. Also, no other -m diagnostics print out the function name.
While here, report parameter leak diagnostics against the parameter
declaration position rather than the function, and use Warnl for
"moved to heap" messages.
Test cases updated programmatically by removing the first word from
every "no match for" error emitted by run.go:
go run run.go |& \
sed -E -n 's/^(.*):(.*): no match for `([^ ]* (.*))` in:$/\1!\2!\3!\4/p' | \
while IFS='!' read -r fn line before after; do
before=$(echo "$before" | sed 's/[.[\*^$()+?{|]/\\&/g')
after=$(echo "$after" | sed -E 's/(\&|\\)/\\&/g')
fn=$(find . -name "${fn}" | head -1)
sed -i -E -e "${line}s/\"${before}\"/\"${after}\"/" "${fn}"
done
Passes toolstash-check.
Change-Id: I6e02486b1409e4a8dbb2b9b816d22095835426b5
Reviewed-on: https://go-review.googlesource.com/c/go/+/195040
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
It is useful to know about the -all_codegen option for running
codegen tests for all platforms. I was puzzled that some codegen
test was not failing on my local machine or on trybot, until I
found this option.
Change-Id: I062cf4d73f6a6c9ebc2258195779d2dab21bc36d
Reviewed-on: https://go-review.googlesource.com/c/go/+/192101
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change adds an intrinsic for Mul64 on s390x. To achieve that,
a new assembly instruction, MLGR, is introduced in s390x/asmz.go. This assembly
instruction directly uses an existing instruction on Z and supports multiplication
of two 64-bit unsigned integers, storing the result in two separate registers.
In this case, we require the multiplicand to be stored in register R3 and
the output result (the high and low 64 bits of the product) to be stored in
R2 and R3 respectively.
A test case is also added.
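From Go code the intrinsic is reached through math/bits, e.g.:
import "math/bits"

func mul128(x, y uint64) (hi, lo uint64) {
    return bits.Mul64(x, y) // now compiles down to MLGR on s390x
}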
Benchmark:
name old time/op new time/op delta
Mul-18 11.1ns ± 0% 1.4ns ± 0% -87.39% (p=0.002 n=8+10)
Mul32-18 2.07ns ± 0% 2.07ns ± 0% ~ (all equal)
Mul64-18 11.1ns ± 1% 1.4ns ± 0% -87.42% (p=0.000 n=10+10)
Change-Id: Ieca6ad1f61fff9a48a31d50bbd3f3c6d9e6675c1
Reviewed-on: https://go-review.googlesource.com/c/go/+/194572
Reviewed-by: Michael Munday <mike.munday@ibm.com>
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Part of the general trend of moving yyerror calls out of walk and into
typecheck.
Notably, this requires splitting test/typeswitch2.go into two files,
because now some of the errors are reported during typecheck and
others are still reported during walk; and if there were any errors
during typecheck, then cmd/compile exits without invoking walk.
Passes toolstash-check.
Change-Id: I05ee0c00b99af659ee1eef098d342d0d736cf31e
Reviewed-on: https://go-review.googlesource.com/c/go/+/194659
Reviewed-by: Robert Griesemer <gri@golang.org>
This CL moves parameter tagging to before escape analysis is complete,
so we still have access to EscLocation. This will be useful once
EscLocation starts tracking higher-fidelity escape details.
Notably, this CL stops using n.Esc to record parameter escape analysis
details. Now escape analysis only ever sets n.Esc to EscNone or
EscHeap. (It still defaults to EscUnknown, and is set to EscNever in
some places though.)
Passes toolstash-check.
Updates #33981.
Change-Id: I50a91ea1e38c442092de6cd14e20b211f8f818c9
Reviewed-on: https://go-review.googlesource.com/c/go/+/193178
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This CL gets rid of the MOVDreg and MOVDnop SSA operations on
s390x. They were originally inserted to help avoid situations
where a sign/zero extension was elided but a spill invalidated
the optimization. It's not really clear we need to do this though
(amd64 doesn't have these ops for example) so long as we are
careful when removing sign/zero extensions. Also, the MOVDreg
technique doesn't work if the register is spilled before the
MOVDreg op (I haven't seen that in practice).
Removing these ops reduces the complexity of the rules and also
allows us to unblock optimizations. For example, the compiler can
now merge the loads in binary.{Big,Little}Endian.PutUint16 which
it wasn't able to do before. This CL reduces the size of the .text
section in the go tool by about 4.7KB (0.09%).
Change-Id: Icaddae7f2e4f9b2debb6fabae845adb3f73b41db
Reviewed-on: https://go-review.googlesource.com/c/go/+/173897
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Previously, we used a single "untyped number" type for all untyped
numeric constants. This led to vague error messages like "string(1.0)"
reporting that "1 (type untyped number)" can't be converted to string,
even though "string(1)" is valid.
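For example (illustrative):
var _ = string(1)   // valid: yields "\x01"
var _ = string(1.0) // error; now reported in terms of an untyped float rather than "untyped number"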
This CL makes cmd/compile more like go/types by utilizing
types.Ideal{int,rune,float,complex} instead of types.Types[TIDEAL],
and keeping n.Type in sync with n.Val().Ctype() during constant
folding.
Thanks to K Heller for looking into this issue, and for the included
test case.
Fixes #21979.
Change-Id: Ibfea88c05704bc3c0a502a455d018a375589754d
Reviewed-on: https://go-review.googlesource.com/c/go/+/194019
Reviewed-by: Robert Griesemer <gri@golang.org>
On modern 64-bit CPUs a SHR, SHL or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register takes 2 cycles
and needs to wait for the input register value to be available.
Large constants used to mask the high bits of a register with an AND
instruction cannot be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.
However, that MOV instruction does not depend on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.
Using a pair of shifts instead of an AND to mask the high bits of a
register has a shorter encoding and uses one less general-purpose
register, but it is slower because it takes one clock cycle longer
if there is no register pressure that would make the AND variant
generate a spill.
For example, the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f SHRQ $0x3f, DX
48c1e23f SHLQ $0x3f, DX
After this CL, the instructions are the same as those used by GCC and LLVM:
48b80000000000000080 MOVQ $0x8000000000000000, AX
4821d0 ANDQ DX, AX
Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.
Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:
var GlobalU uint
func BenchmarkAndHighBits(b *testing.B) {
    x := uint(0)
    for i := 0; i < b.N; i++ {
        x &= 1 << 63
    }
    GlobalU = x
}
amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name old time/op new time/op delta
AndHighBits-4 0.61ns ± 6% 0.42ns ± 6% -31.42% (p=0.000 n=25+25)
Updates #33826
Updates #32781
Change-Id: I862d3587446410c447b9a7265196b57f85358633
Reviewed-on: https://go-review.googlesource.com/c/go/+/191780
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Use the following (suboptimal) script to obtain a list of possible
typos:
#!/usr/bin/env sh
set -x
git ls-files |\
grep -e '\.\(c\|cc\|go\)$' |\
xargs -n 1\
awk\
'/\/\// { gsub(/.*\/\//, ""); print; } /\/\*/, /\*\// { gsub(/.*\/\*/, ""); gsub(/\*\/.*/, ""); }' |\
hunspell -d en_US -l |\
grep '^[[:upper:]]\{0,1\}[[:lower:]]\{1,\}$' |\
grep -v -e '^.\{1,4\}$' -e '^.\{16,\}$' |\
sort -f |\
uniq -c |\
awk '$1 == 1 { print $2; }'
Then, go through the results manually and fix the most obvious typos in
the non-vendored code.
Change-Id: I3cb5830a176850e1a0584b8a40b47bde7b260eae
Reviewed-on: https://go-review.googlesource.com/c/go/+/193848
Reviewed-by: Robert Griesemer <gri@golang.org>
This CL detangles the hairy mess that was convlit+defaultlit. In
particular, it makes the following changes:
1. convlit1 now follows the standard typecheck behavior of setting
"n.Type = nil" if there's an error. Notably, this means for a lot of
test cases, we now avoid reporting useless follow-on error messages.
For example, after reporting that "1 << s + 1.0" has an invalid shift,
we no longer also report that it can't be assigned to string.
2. Previously, assignconvfn had some extra logic for trying to
suppress errors from convlit/defaultlit so that it could provide its
own errors with better context information. Instead, this extra
context information is now passed down into convlit1 directly.
3. Relatedly, this CL also removes redundant calls to defaultlit prior
to assignconv. As a consequence, when an expression doesn't make sense
for a particular assignment (e.g., assigning an untyped string to an
integer), the error messages now say "untyped string" instead of just
"string". This is more consistent with go/types behavior.
4. defaultlit2 is now smarter about only trying to convert pairs of
untyped constants when it's likely to succeed. This allows us to
report better error messages for things like 3+"x"; instead of "cannot
convert 3 to string" we now report "mismatched types untyped number
and untyped string".
Passes toolstash-check.
Change-Id: I26822a02dc35855bd0ac774907b1cf5737e91882
Reviewed-on: https://go-review.googlesource.com/c/go/+/187657
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
This reverts commit 2da9c3e0f9.
Reason for revert: while the new error messages are more informative,
they're not strictly correct. This CL also conflicts with CL 187657.
Change-Id: I1c36cf7e86c2f35ee83a4f98918ee38aa1f59965
Reviewed-on: https://go-review.googlesource.com/c/go/+/193977
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Follow-up to Change-Id: If6e52c59eab438599d641ecf6f110ebafca740a9
This addresses the remaining tech debt on issue 21979.
The aforementioned previous CL silenced one of two mostly redundant
compiler errors. However, the silenced error was the more expressive
error. This CL now imbues the surviving error with the same level
of expressiveness as the old semi-redundant error.
Fixes #21979
Change-Id: I3273d48c88bbab073fabe53421d801df621ce321
Reviewed-on: https://go-review.googlesource.com/c/go/+/191079
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Test with some code that triggered a compilation error bug in gccgo.
Updates #33866.
Change-Id: Ib2f226bbbebbfae33b41037438fe34dc5f2ad034
Reviewed-on: https://go-review.googlesource.com/c/go/+/193261
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Add a block method to preserve loop depth when evaluating statements in a
block, so escape analysis can handle looping labels more precisely.
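An illustrative example of a looping label (a label reached by a backward
goto; example mine, not from the CL):
var sink *int

func f(n int) {
    i := 0
loop:
    if i < n {
        p := new(int) // allocated inside the loop formed by the label, so the
        sink = p      // loop depth must be tracked for escape analysis
        i++
        goto loop
    }
}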
Updates #22438
Change-Id: I39b306544a6c0ee3fcbebbe0d0ee735cb71773e6
Reviewed-on: https://go-review.googlesource.com/c/go/+/193517
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Right now we generate hash functions for all types, just in case they
are used as map keys. That's a lot of wasted effort and binary size
for types which will never be used as a map key. Instead, generate
hash functions only for types that we know are map keys.
Just doing that is a bit too simple, since maps with an interface type
as a key might have to hash any concrete key type that implements that
interface. So for that case, implement hashing of such types at
runtime (instead of with generated code). It will be slower, but only
for maps with interface types as keys, and maybe only a bit slower as
the aeshash time probably dominates the dispatch time.
Reorg where we keep the equals and hash functions. Move the hash function
from the key type to the map type, saving a field in every non-map type.
That leaves only one function in the alg structure, so get rid of that and
just keep the equal function in the type descriptor itself.
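An illustrative example (mine, not from the CL):
type Key struct{ a, b int64 }

var perKey = map[Key]int{}        // Key is used directly as a map key: it gets a generated hash
var seen = map[interface{}]bool{} // concrete key types stored here are hashed by the runtime fallback

func record(k Key) {
    perKey[k]++
    seen[k] = true
}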
cmd/go now has 10 generated hash functions, instead of 504. Makes
cmd/go 1.0% smaller. Updates #6853.
Speed on non-interface keys is unchanged. Speed on interface keys
is ~20% slower:
name old time/op new time/op delta
MapInterfaceString-8 23.0ns ±21% 27.6ns ±14% +20.01% (p=0.002 n=10+10)
MapInterfacePtr-8 19.4ns ±16% 23.7ns ± 7% +22.48% (p=0.000 n=10+8)
Change-Id: I7c2e42292a46b5d4e288aaec4029bdbb01089263
Reviewed-on: https://go-review.googlesource.com/c/go/+/191198
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>