mirror of https://github.com/golang/go synced 2024-11-18 03:14:44 -07:00
Commit Graph

3494 Commits

Author SHA1 Message Date
Matthew Dempsky
b3bd7ab3d7 cmd/compile: fix //go:uintptrescapes for basic method calls
The logic for keeping arguments alive for calls to //go:uintptrescapes
functions was only applying to direct function calls. This CL changes
it to also apply to direct method calls, which should address most
uses of Proc.Call and LazyProc.Call.
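
An illustrative sketch (not part of this CL): a direct method call to a
//go:uintptrescapes method. The Proc type and its Call method below are
simplified stand-ins for the syscall types mentioned above; the point is
that the object behind each uintptr argument now stays alive for the
duration of such a method call, as was already the case for direct
function calls.

    package main

    import "unsafe"

    type Proc struct{ addr uintptr }

    //go:uintptrescapes
    func (p *Proc) Call(args ...uintptr) {
        // Stand-in body: a real implementation would hand args to a
        // system call that may treat them as pointers.
        _ = args
    }

    func main() {
        buf := make([]byte, 16)
        p := &Proc{}
        // buf must stay live across the call even though only a uintptr
        // is passed; //go:uintptrescapes on the method ensures that.
        p.Call(uintptr(unsafe.Pointer(&buf[0])))
    }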

It's still an open question (#34684) whether other call forms (e.g.,
method expressions, or indirect calls via function values, method
values, or interfaces) need the same handling.

Fixes #34474.

Change-Id: I874f97145972b0e237a4c9e8926156298f4d6ce0
Reviewed-on: https://go-review.googlesource.com/c/go/+/198043
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-05 00:26:30 +00:00
Cuong Manh Le
26d5f032e9 cmd/compile: add test for skipping empty init functions
CL 200958 added the skipping of empty init functions without any tests
for it. A codegen test sounds ideal, but it's unlikely that we can make
one for now, so use a program to manipulate runtime/proc.go:initTask
directly.

Updates #34869

Change-Id: I2683b9a1ace36af6861af02a3a9fb18b3110b282
Reviewed-on: https://go-review.googlesource.com/c/go/+/204217
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-11-04 20:19:15 +00:00
Dan Scales
7dcd343ed6 runtime: ensure that Goexit cannot be aborted by a recursive panic/recover
When we do a successful recover of a panic, we resume normal execution by
returning from the frame that had the deferred call that did the recover (after
executing any remaining deferred calls in that frame).

However, suppose we have called runtime.Goexit and there is a panic during one of the
deferred calls run by the Goexit. Further assume that there is a deferred call in
the frame of the Goexit or a parent frame that does a recover. Then the recovery
process will actually resume normal execution above the Goexit frame and hence
abort the Goexit.  We will not terminate the goroutine as expected, but continue
running in the frame above the Goexit.

To fix this, we explicitly create a _panic object for a Goexit call. We then
change the "abort" behavior for Goexits, but not panics. After a recovery, if the
top-level panic is actually a Goexit that is marked to be aborted, then we return
to the Goexit defer-processing loop, so that the Goexit is not actually aborted.
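
An illustrative sketch of the scenario (not part of the CL): a panic raised
in one of the deferred calls run by Goexit is recovered by an earlier defer
in the same frame. With this fix the recovery no longer aborts the Goexit,
and the goroutine still terminates.

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        done := make(chan struct{})
        go func() {
            defer close(done) // runs last, after Goexit finishes its defers
            defer func() {
                if r := recover(); r != nil {
                    fmt.Println("recovered:", r)
                }
            }()
            defer panic("panic raised while Goexit runs the deferred calls")
            runtime.Goexit()
        }()
        <-done
    }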

Actual code changes are just panic.go, runtime2.go, and funcid.go. Adjusted the
test related to the new Goexit behavior (TestRecoverBeforePanicAfterGoexit) and
added several new tests of aborted panics (whose behavior has not changed).

Fixes #29226

Change-Id: Ib13cb0074f5acc2567a28db7ca6912cfc47eecb5
Reviewed-on: https://go-review.googlesource.com/c/go/+/200081
Run-TryBot: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-04 16:32:38 +00:00
Cuong Manh Le
efd395f9fb cmd/compile: make duplicate index error distinguish arrays and slices
Fixes #35291

Change-Id: I11ae367b6e972cd9e7a22bbc2cb23d32f4d72b98
Reviewed-on: https://go-review.googlesource.com/c/go/+/204617
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2019-11-01 01:51:26 +00:00
Shenghou Ma
449b6abbac cmd/compile/internal/gc: reword "declared and not used" error message
"declared and not used" is technically correct, but might confuse
the user. Switching "and" to "but" will hopefully create the
contrast for the users: they did one thing (declaration), but
not the other --- actually using the variable.

This new message is still not ideal (specifically, "declared" is not
entirely precise here), but at least it matches the other parsers
and is one step in the right direction.

Change-Id: I725c7c663535f9ab9725c4b0bf35b4fa74b0eb20
Reviewed-on: https://go-review.googlesource.com/c/go/+/203282
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2019-10-28 23:34:13 +00:00
Giovanni Bajo
5d000a8b62 test: add test for fixed internal compiler error
Updates #35157 (the bug there was fixed by CL 200861)

Change-Id: I67069207b4cdc2ad4a475dd0bbc8555ecc5f534f
Reviewed-on: https://go-review.googlesource.com/c/go/+/203598
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Alberto Donizetti <alb.donizetti@gmail.com>
2019-10-26 08:29:23 +00:00
Dan Scales
be64a19d99 cmd/compile, cmd/link, runtime: make defers low-cost through inline code and extra funcdata
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
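
A sketch of which functions qualify (illustrative, not from the CL; work is
a hypothetical helper): f gets open-coded defers, while g defers inside a
loop and keeps using the runtime-managed defer path.

    package main

    import "sync"

    func work() {} // hypothetical helper

    // f has a fixed number of defer statements, none in a loop, so its
    // defer is open-coded: args are saved to stack slots, a bit is set in
    // deferBits, and the call is emitted inline at each exit.
    func f(mu *sync.Mutex) {
        mu.Lock()
        defer mu.Unlock()
        work()
    }

    // g defers inside a loop, so it falls back to non-open-coded defers.
    func g(mus []*sync.Mutex) {
        for _, mu := range mus {
            mu.Lock()
            defer mu.Unlock()
        }
        work()
    }

    func main() {
        f(new(sync.Mutex))
        g([]*sync.Mutex{new(sync.Mutex)})
    }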

When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.

In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).

I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().

The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).

Cost of defer statement  [ go test -run NONE -bench BenchmarkDefer$ runtime ]
  With normal (stack-allocated) defers only:         35.4  ns/op
  With open-coded defers:                             5.6  ns/op
  Cost of function call alone (remove defer keyword): 4.4  ns/op

Text size increase (including funcdata) for go binary without/with open-coded defers:  0.09%

The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.

The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:

Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
  Without open-coded defers:        62.0 ns/op
  With open-coded defers:           255  ns/op

A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:

CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
  Without open-coded defers:        443 ns/op
  With open-coded defers:           347 ns/op

Updates #14939 (defer performance)
Updates #34481 (design doc)

Change-Id: I63b1a60d1ebf28126f55ee9fd7ecffe9cb23d1ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/202340
Reviewed-by: Austin Clements <austin@google.com>
2019-10-24 13:54:11 +00:00
Matthew Dempsky
b282efa022 cmd/compile: recognize reflect.{Slice,String}Header for -d=checkptr
Avoids false positive pointer arithmetic panic.

Fixes #35027.

Change-Id: Idd008caaab25fcf739327ac50a021b835ef13def
Reviewed-on: https://go-review.googlesource.com/c/go/+/202560
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-10-21 20:51:06 +00:00
smasher164
58b031949b cmd/compile: add fma intrinsic for arm
This change introduces an arm intrinsic that generates the FMULAD
instruction for the fused-multiply-add operation on systems that
support it. System support is detected via cpu.ARM.HasVFPv4. A rewrite
rule translates the generic intrinsic to FMULAD.

Updates #25819.

Change-Id: I8459e5dd1cdbdca35f88a78dbeb7d387f1e20efa
Reviewed-on: https://go-review.googlesource.com/c/go/+/142117
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-21 17:42:47 +00:00
smasher164
7a6da218b1 cmd/compile: add fma intrinsic for amd64
To permit ssa-level optimization, this change introduces an amd64 intrinsic
that generates the VFMADD231SD instruction for the fused-multiply-add
operation on systems that support it. System support is detected via
cpu.X86.HasFMA. A rewrite rule can then translate the generic ssa intrinsic
("Fma") to VFMADD231SD.

The benchmark compares the software implementation (old) with the intrinsic
(new).

name   old time/op  new time/op  delta
Fma-4  27.2ns ± 1%   1.0ns ± 9%  -96.48%  (p=0.008 n=5+5)

Updates #25819.

Change-Id: I966655e5f96817a5d06dff5942418a3915b09584
Reviewed-on: https://go-review.googlesource.com/c/go/+/137156
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-21 16:42:10 +00:00
smasher164
33425ab8db cmd/compile: introduce generic ssa intrinsic for fused-multiply-add
In order to make math.FMA a compiler intrinsic for ISAs like ARM64,
PPC64[le], and S390X, a generic 3-argument opcode "Fma" is provided and
rewritten as

    ARM64: (Fma x y z) -> (FMADDD z x y)
    PPC64: (Fma x y z) -> (FMADD x y z)
    S390X: (Fma x y z) -> (FMADD z x y)
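
For reference, a minimal usage example (assumed, not part of this CL): the
generic Fma op is what an intrinsified math.FMA call lowers to, computing
x*y + z with a single rounding.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        fmt.Println(math.FMA(2.0, 3.0, 1.0)) // 2*3 + 1 = 7, rounded once
    }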

Updates #25819.

Change-Id: Ie5bc628311e6feeb28ddf9adaa6e702c8c291efa
Reviewed-on: https://go-review.googlesource.com/c/go/+/131959
Run-TryBot: Akhil Indurti <aindurti@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-21 16:24:15 +00:00
Matthew Dempsky
8c6876e9a4 cmd/compile: disable checkptr for //go:cgo_unsafe_args functions
Fixes #34968.

Change-Id: I538d653fab6cf7cf9b9b7022a1c2d4ae6ee497b6
Reviewed-on: https://go-review.googlesource.com/c/go/+/201823
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-10-17 22:27:31 +00:00
Matthew Dempsky
f9226454b9 cmd/compile: fix -d=checkptr for named unsafe.Pointer types
We need to explicitly convert pointers to unsafe.Pointer before
passing to the runtime checkptr instrumentation in case the user
declared their own type with underlying type unsafe.Pointer.
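
A minimal sketch of the case being handled (assumed, not from the CL):

    package main

    import "unsafe"

    // Pointer is a user-declared type whose underlying type is
    // unsafe.Pointer; -d=checkptr instrumentation must convert such values
    // to unsafe.Pointer explicitly before calling its runtime checks.
    type Pointer unsafe.Pointer

    func main() {
        var i int
        p := Pointer(unsafe.Pointer(&i))
        _ = (*int)(p)
    }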

Updates #22218.
Fixes #34966.

Change-Id: I3baa2809d77f8257167cd78f57156f819130baa8
Reviewed-on: https://go-review.googlesource.com/c/go/+/201782
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-17 21:10:22 +00:00
Bryan C. Mills
b76e6f8825 Revert "cmd/compile, cmd/link, runtime: make defers low-cost through inline code and extra funcdata"
This reverts CL 190098.

Reason for revert: broke several builders.

Change-Id: I69161352f9ded02537d8815f259c4d391edd9220
Reviewed-on: https://go-review.googlesource.com/c/go/+/201519
Run-TryBot: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Dan Scales <danscales@google.com>
2019-10-16 20:59:53 +00:00
Dan Scales
dad616375f cmd/compile, cmd/link, runtime: make defers low-cost through inline code and extra funcdata
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).

When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.

In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).

I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().

The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).

Cost of defer statement  [ go test -run NONE -bench BenchmarkDefer$ runtime ]
  With normal (stack-allocated) defers only:         35.4  ns/op
  With open-coded defers:                             5.6  ns/op
  Cost of function call alone (remove defer keyword): 4.4  ns/op

Text size increase (including funcdata) for go cmd without/with open-coded defers:  0.09%

The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.

The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:

Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
  Without open-coded defers:        62.0 ns/op
  With open-coded defers:           255  ns/op

A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:

CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
  Without open-coded defers:        443 ns/op
  With open-coded defers:           347 ns/op

Updates #14939 (defer performance)
Updates #34481 (design doc)

Change-Id: I51a389860b9676cfa1b84722f5fb84d3c4ee9e28
Reviewed-on: https://go-review.googlesource.com/c/go/+/190098
Reviewed-by: Austin Clements <austin@google.com>
2019-10-16 18:27:16 +00:00
Cherry Zhang
c4817f5d4f cmd/compile: on Wasm and AIX, let deferred nil function panic at invocation
The Go spec requires

	If a deferred function value evaluates to nil, execution
	panics when the function is invoked, not when the "defer"
	statement is executed.

On Wasm and AIX, we currently emit a nil check at the point of the
defer statement, which makes it panic too early. This CL fixes that.
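
An illustrative example of the required behavior (not part of the CL):

    package main

    import "fmt"

    func main() {
        defer func() {
            // The nil deferred function panics only when invoked at
            // function exit, and that panic is recoverable here.
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        var f func()
        defer f()                                 // must not panic here
        fmt.Println("after the defer statement")  // still runs
    }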

Also, on Wasm, now the nil function will be passed through
deferreturn to jmpdefer, which does an explicit nil check and
calls sigpanic if it is nil. This sigpanic, being called from
assembly, is ABI0. So change the assembler backend to also
handle sigpanic in ABI0.

Fixes #34926.
Updates #8047.

Change-Id: I28489a571cee36d2aef041f917b8cfdc31d557d4
Reviewed-on: https://go-review.googlesource.com/c/go/+/201297
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-10-16 00:05:37 +00:00
David Chase
6adaf17eaa cmd/compile: preserve statements in late nilcheckelim optimization
When a subsequent load/store of a ptr makes the nil check of that pointer
unnecessary, if their lines differ, change the line of the load/store
to that of the nilcheck, and attempt to rehome the load/store position
instead.

This fix makes profiling less accurate in order to make panics more
informative.

Fixes #33724

Change-Id: Ib9afaac12fe0d0320aea1bf493617facc34034b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/200197
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-15 16:43:44 +00:00
Meng Zhuo
50f1157760 cmd/compile: add math/bits.Mul64 intrinsic on mips64x
Benchmark:
name   old time/op  new time/op  delta
Mul    36.0ns ± 1%   2.8ns ± 0%  -92.31%  (p=0.000 n=10+10)
Mul32  4.37ns ± 0%  4.37ns ± 0%     ~     (p=0.429 n=6+10)
Mul64  36.4ns ± 0%   2.8ns ± 0%  -92.37%  (p=0.000 n=10+9)
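
For reference, a small usage example (assumed, not part of this CL):
math/bits.Mul64 returns the 128-bit product of its operands as a (hi, lo)
pair, and on mips64x it now compiles to the native multiply instructions.

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        hi, lo := bits.Mul64(1<<63, 4) // 2^63 * 4 = 2^65
        fmt.Println(hi, lo)            // prints: 2 0
    }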

Change-Id: Ic4f4e5958adbf24999abcee721d0180b5413fca7
Reviewed-on: https://go-review.googlesource.com/c/go/+/200582
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-10-14 21:23:34 +00:00
Matthew Dempsky
b649bdc7f3 cmd/compile: remove period from "not allowed in runtime" errors
We don't punctuate compiler diagnostics.

Change-Id: I19e1f30fbf04f0d1bfe6648fae26beaf3a06ee92
Reviewed-on: https://go-review.googlesource.com/c/go/+/201077
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-10-14 19:32:08 +00:00
Than McIntosh
4c9e757daf test: revise testcase for new gccgo compiler bug
Add to the testcase originally created for issue 34577 so
as to also trigger the error condition for issue 34852 (the
two bugs are closely related).

Updates #34577.
Updates #34852.

Change-Id: I2347369652ce500184347606b2bb3e76d802b204
Reviewed-on: https://go-review.googlesource.com/c/go/+/201017
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-10-14 16:13:27 +00:00
zdjones
3c56eb4083 cmd/compile: make poset use sufficient conditions for OrderedOrEqual
When assessing whether A <= B, the poset's OrderedOrEqual has a passing
condition which permits A <= B, but is not sufficient to infer that A <= B.
This CL removes that incorrect passing condition.

Having identified that A and B are in the poset, the method will report that
A <= B if any of these three conditions are true:
 (1) A and B are the same node in the poset.
 	- This means we know that A == B.
 (2) There is a directed path, strict or not, from A -> B
 	- This means we know that, at least, A <= B, but A < B is possible.
 (3) There is a directed path from B -> A, AND that path has no strict edges.
 	- This means we know that B <= A, but do not know that B < A.

In condition (3), we do not have enough information to say that A <= B; rather,
we only know that B == A (which satisfies A <= B) is possible. The way I
understand it, a strict edge shows a known, strictly-ordered relation (<) but
the lack of a strict edge does not show the lack of a strictly-ordered relation.

The difference is highlighted by the example in #34802, where a bounds check is
incorrectly removed by prove, such that negative indexes into a slice
succeed:

	n := make([]int, 1)
	for i := -1; i <= 0; i++ {
	    fmt.Printf("i is %d\n", i)
	    n[i] = 1  // No Bounds check, program runs, assignment to n[-1] succeeds!!
	}

When prove is checking the negative/failed branch from the bounds check at n[i],
in the signed domain we learn (0 > i || i >= len(n)). Because prove can't learn
the OR condition, we check whether we know that i is non-negative so we can
learn something, namely that i >= len(n). Prove uses the poset to check whether
we know that i is non-negative.  At this point the poset holds the following
relations as a directed graph:

	-1 <= i <= 0
	-1 < 0

In poset.OrderedOrEqual, we are testing for 0 <= i. In this case, condition (3)
above is true because there is a non-strict path from i -> 0, and that path
does NOT have any strict edges. Because this condition is true, the poset
reports to prove that i is known to be >= 0. Knowing, incorrectly, that i >= 0,
prove learns from the failed bounds check that i >= len(n) in the signed domain.

When the slice, n, was created, prove learned that len(n) == 1. Because i is
also the induction variable for the loop, upon entering the loop, prove previously
learned that i is in [-1,0]. So when prove attempts to learn from the failed
bounds check, it finds the new fact, i >= len(n), unsatisfiable given that it
previously learned that i <= 0 and len(n) = 1.

Fixes #34802

Change-Id: I235f4224bef97700c3aa5c01edcc595eb9f13afc
Reviewed-on: https://go-review.googlesource.com/c/go/+/200759
Run-TryBot: Zach Jones <zachj1@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Giovanni Bajo <rasky@develer.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-12 09:17:14 +00:00
Emmanuel T Odeke
1627714cd5 test/fixedbugs: bump issue21576.go's timeout to 1min
Increases the exec timeout from 5sec to 1min, and also prints out
the error value on any test failure.

Fixes #34836

Change-Id: Ida2b8bd460243491ef0f90dfe0f978dfe02a0703
Reviewed-on: https://go-review.googlesource.com/c/go/+/200519
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
2019-10-11 15:05:18 +00:00
Brad Fitzpatrick
6dc740f092 test: adjust a test to work with js/wasm's background goroutine
Fixes #34768

Change-Id: Ic73591f620cdee5bc7203483902e6ba98d2c442b
Reviewed-on: https://go-review.googlesource.com/c/go/+/200438
Reviewed-by: Bryan C. Mills <bcmills@google.com>
2019-10-10 19:38:06 +00:00
Than McIntosh
22d3da4781 test: new testcase for gccgo compiler problem
Test case with code that caused a gccgo error while emitting export
data for an inlinable function.

Updates #34577.

Change-Id: I28b598c4c893c77f4a76bb4f2d27e5b42f702992
Reviewed-on: https://go-review.googlesource.com/c/go/+/198057
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-10-09 17:26:20 +00:00
Michael Munday
6ec4c71eef cmd/compile: add SSA rules for s390x compare-and-branch instructions
This commit adds SSA rules for the s390x combined compare-and-branch
instructions. These have a shorter encoding than separate compare
and branch instructions, and they also don't clobber the condition
code (a.k.a. the flag register), reducing pressure on the flag allocator.

I have deleted the 'loop_test.go' file and replaced it with a new
codegen test which performs a wider range of checks.

Object sizes from compilebench:

name                      old object-bytes  new object-bytes  delta
Template                        562kB ± 0%        561kB ± 0%   -0.28%  (p=0.000 n=10+10)
Unicode                         217kB ± 0%        217kB ± 0%   -0.17%  (p=0.000 n=10+10)
GoTypes                        2.03MB ± 0%       2.02MB ± 0%   -0.59%  (p=0.000 n=10+10)
Compiler                       8.16MB ± 0%       8.11MB ± 0%   -0.62%  (p=0.000 n=10+10)
SSA                            27.4MB ± 0%       27.0MB ± 0%   -1.45%  (p=0.000 n=10+10)
Flate                           356kB ± 0%        356kB ± 0%   -0.12%  (p=0.000 n=10+10)
GoParser                        438kB ± 0%        436kB ± 0%   -0.51%  (p=0.000 n=10+10)
Reflect                        1.37MB ± 0%       1.37MB ± 0%   -0.42%  (p=0.000 n=10+10)
Tar                             485kB ± 0%        483kB ± 0%   -0.39%  (p=0.000 n=10+10)
XML                             630kB ± 0%        621kB ± 0%   -1.45%  (p=0.000 n=10+10)
[Geo mean]                     1.14MB            1.13MB        -0.60%

name                      old text-bytes    new text-bytes    delta
HelloSize                       763kB ± 0%        754kB ± 0%   -1.30%  (p=0.000 n=10+10)
CmdGoSize                      10.7MB ± 0%       10.6MB ± 0%   -0.91%  (p=0.000 n=10+10)
[Geo mean]                     2.86MB            2.82MB        -1.10%

Change-Id: Ibca55d9c0aa1254aee69433731ab5d26a43a7c18
Reviewed-on: https://go-review.googlesource.com/c/go/+/198037
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-08 10:03:04 +00:00
Cuong Manh Le
77f5adba55 cmd/compile: don't use statictmps for small object in slice literal
Fixes #21561

Change-Id: I89c59752060dd9570d17d73acbbaceaefce5d8ce
Reviewed-on: https://go-review.googlesource.com/c/go/+/197560
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-10-08 06:09:26 +00:00
Matthew Dempsky
a0894ea5b5 cmd/compile: reimplement parameter leak encoding
Currently, escape analysis is able to record at most one dereference
when a parameter leaks to the heap; that is, at call sites, it can't
distinguish between any of these three functions:

    func x1(p ****int) { sink = *p }
    func x2(p ****int) { sink = **p }
    func x3(p ****int) { sink = ***p }

Similarly, it's limited to recording parameter leaks to only the first
4 parameters, and only up to 6 dereferences.

All of these limitations are due to the awkward encoding scheme used
at the moment.

This CL replaces the encoding scheme with a simple [8]uint8 array,
which can handle up to the first 7 parameters, and up to 254
dereferences, which ought to be enough for anyone. And if not, it's
much more easily increased.

Shrinks export data size geometric mean for Kubernetes by 0.07%.

Fixes #33981.

Change-Id: I10a94b9accac9a0c91490e0d6d458316f5ca1e13
Reviewed-on: https://go-review.googlesource.com/c/go/+/197680
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-10-07 18:50:14 +00:00
Richard Musiol
30521d5126 cmd/link: produce valid binaries with large data section on wasm
CL 170950 introduced a regression that makes the linker produce
an invalid wasm binary if the data section is too large.
Loading such a binary gives the following error:
"LinkError: WebAssembly.instantiate(): data segment is out of bounds"

This change fixes the issue by ensuring that the minimum size of the
linear memory is larger than the end of the data section.

Fixes #34395.

Change-Id: I0c8629de7ffd0d85895ad31bf8c9d45fef197a57
Reviewed-on: https://go-review.googlesource.com/c/go/+/199358
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-10-07 18:09:29 +00:00
Keith Randall
30da79d958 cmd/compile: improve write barrier removal
We're allowed to remove a write barrier when both the old
value in memory and the new value we're writing are not heap pointers.

Improve both those checks a little bit.

A pointer is known to not be a heap pointer if it is read from
read-only memory. This sometimes happens for loads of pointers
from string constants in read-only memory.

Do a better job of tracking which parts of memory are known to be
zero.  Before, we just kept track of a range of offsets in the most
recently allocated object. For code that initializes the new object's
fields in a nonstandard order, that tracking is imprecise. Instead,
keep a bit map of the first 64 words of that object, so we can track
precisely what we know to be zeroed.

The new scheme is only precise up to the first 512 bytes of the object.
After that, we'll use write barriers unnecessarily. Hopefully most
initializers of large objects will use typedmemmove, which does only one
write barrier check for the whole initialization.
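
An illustrative sketch (assumed, not from the CL) combining both cases: the
new values stored below are pointers into read-only string data, and the
old values are words of a freshly zeroed allocation, so the write barriers
can be dropped even though the fields are initialized out of order.

    package main

    type T struct {
        a, b string
    }

    func newT() *T {
        t := new(T)   // freshly zeroed allocation
        t.b = "world" // out-of-order initialization; the per-word zero
        t.a = "hello" // bitmap still shows the old values are zero
        return t
    }

    func main() {
        _ = newT()
    }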

Fixes #34723
Update #21561

Change-Id: Idf6e1b7d525042fb67961302d4fc6f941393cac8
Reviewed-on: https://go-review.googlesource.com/c/go/+/199558
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-10-07 17:19:13 +00:00
Keith Randall
72dc9ab191 cmd/compile: reuse dead register before reusing register holding constant
For commuting ops, check whether the second argument is dead before
checking if the first argument is rematerializable. Reusing the register
holding a dead value is always best.

Fixes #33580

Change-Id: I7372cfc03d514e6774d2d9cc727a3e6bf6ce2657
Reviewed-on: https://go-review.googlesource.com/c/go/+/199559
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2019-10-07 15:16:26 +00:00
Cuong Manh Le
047141797c cmd/compile: lookup methods of base type for named pointer type
Passed toolstash-check.

Updates #21738
Fixes #21934

Change-Id: I59f0b2c9890146565ff913b04aeeeff7dc7a4499
Reviewed-on: https://go-review.googlesource.com/c/go/+/197561
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-10-04 18:34:18 +00:00
Alberto Donizetti
c1e46af62f test: add testcase for Issue 34520
CL 188317 introduced a compiler crash during dwarf generation which
was reported as Issue #34520. After CL 188217, the issue appears to be
fixed. Add a testcase to avoid future regressions.

Fixes #34520

Change-Id: I73544a9e9baf8dbfb85c19eb6d202beea05affb6
Reviewed-on: https://go-review.googlesource.com/c/go/+/198546
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2019-10-04 15:04:57 +00:00
David Chase
adc4d2cc2d cmd/compile: run deadcode before nilcheck for better statement relocation
Nilcheck would move statements from NilCheck values to other values that
turned out to be already dead, which leads to lost statements.  Better
to eliminate the dead code first.

One "error" is removed from test/prove.go because the code is
actually dead, and the additional deadcode pass removes it before
prove can run.

Change-Id: If75926ca1acbb59c7ab9c8ef14d60a02a0a94f8b
Reviewed-on: https://go-review.googlesource.com/c/go/+/198479
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
2019-10-03 21:12:13 +00:00
Matthew Dempsky
c33d45a898 cmd/compile: don't statically copy string-typed variables
During package initialization, the compiler tries to optimize:

    var A = "foo"
    var B = A

into

    var A = "foo"
    var B = "foo"

so that we can statically initialize both A and B and skip emitting
dynamic initialization code to assign "B = A".

However, this isn't safe in the presence of cmd/link's -X flag, which
might overwrite an initialized string-typed variable at link time. In
particular, if cmd/link changes A's static initialization, it won't
know it also needs to change B's static initialization.
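
For example (an assumed illustration, not from the CL), a link-time override
such as

    go build -ldflags="-X main.A=bar"

would rewrite A's static data but leave a statically copied B holding the
old contents.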

To address this, this CL disables this optimization for string-typed
variables.

Fixes #34675.

Change-Id: I1c18f3b855f6d7114aeb39f96aaaf1b452b88236
Reviewed-on: https://go-review.googlesource.com/c/go/+/198657
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-10-03 18:08:32 +00:00
Robert Griesemer
27fc32ff01 cmd/compile: better error message for language version errors
Fixes #33753.
Updates #31747.

Change-Id: Icc42b23405ead4f7f17b0ffa3611405454b6b271
Reviewed-on: https://go-review.googlesource.com/c/go/+/198491
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2019-10-03 04:48:44 +00:00
Emmanuel T Odeke
e79b57d6c4 os/signal: lazily start signal watch loop only on Notify
By lazily starting the signal watch loop only on Notify,
we are able to have deadlock detection even when
"os/signal" is imported.

Thanks to Ian Lance Taylor for the solution and discussion.

With this change, also fix a runtime goroutine count test that
assumed os/signal.init would unconditionally start the signal
watching goroutine; it no longer does.
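
A minimal usage sketch (for reference, not part of the CL): the watch loop
now starts only at the first Notify call, so a program that merely imports
os/signal keeps the runtime's deadlock detection.

    package main

    import (
        "fmt"
        "os"
        "os/signal"
    )

    func main() {
        c := make(chan os.Signal, 1)
        signal.Notify(c, os.Interrupt) // signal watch goroutine starts here
        fmt.Println("waiting for interrupt")
        <-c
    }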

Fixes #21576.

Change-Id: I6eecf82a887f59f2ec8897f1bcd67ca311ca42ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/101036
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2019-10-02 03:52:59 +00:00
Matthew Dempsky
ac1d440ea6 cmd/compile: apply constant folding to ORUNESTR
ORUNESTR represents the special case of integer->string conversion. If
the integer is a constant, then the string is a constant too, so
evconst needs to perform constant folding here.
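
A minimal example of the folding (assumed, not from the CL):

    package main

    import "fmt"

    func main() {
        // Integer->string conversion of a constant; now folded by evconst
        // to the constant "A".
        const s = string(rune(65))
        fmt.Println(s)
    }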

Passes toolstash-check.

Fixes #34563.

Change-Id: Ieab3d76794d8ce570106b6b707a4bcd725d156e9
Reviewed-on: https://go-review.googlesource.com/c/go/+/197677
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-09-26 23:54:29 +00:00
Giovanni Bajo
1658263bbf cmd/compile: detect indvars that are bound by other indvars
prove wasn't able to detect induction variables that were bounded
by another induction variable. This happened because an indvar
is a Phi, and thus, in case of a dependency, the loop bounding
condition looked like Phi < Phi. This triggered an existing
codepath that checked whether the upper bound was a Phi, to
detect loop conditions written in reversed order with respect to the
idiomatic way (eg: for i:=0; len(n)>i; i++).

To fix this, we call the indvar pattern matching on both operands
of the loop condition, so that the first operand that matches
will be treated as the indvar.
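
A sketch of the shape of loop this enables (illustrative; whether a given
bounds check is actually removed is an assumption, not verified here):

    package main

    func sumPrefixes(xs []int) (s int) {
        for i := 0; i < len(xs); i++ {
            for j := 0; j < i; j++ { // j's bound is itself an induction variable
                s += xs[j] // prove may now drop this bounds check
            }
        }
        return s
    }

    func main() {
        _ = sumPrefixes([]int{1, 2, 3})
    }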

Updates #24660 (removes a boundcheck from Fannkuch)

Change-Id: Iade83d8deb54f14277ed3f2e37b190e1ed173d11
Reviewed-on: https://go-review.googlesource.com/c/go/+/195220
Reviewed-by: David Chase <drchase@google.com>
2019-09-26 18:47:12 +00:00
Matthew Dempsky
00b773a4a9 cmd/compile: simplify OPTRLIT handling
Previously, we would recognize &(T{...}) expressions during type
checking, rewrite them into (*T){...}, and then do a lot of extra work
to make sure the user doesn't write (*T){...} themselves and to
resynthesize the OPTRLIT later on.

This CL simply handles &T{...} directly in the straightforward
manner, by changing OADDR directly to OPTRLIT when appropriate.

While here, match go/types's invalid composite literal type error
message.

Passes toolstash-check.

Change-Id: I902b14c7e2cd9fa93e6915dd58272d2352ba38f8
Reviewed-on: https://go-review.googlesource.com/c/go/+/197120
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
2019-09-26 18:45:53 +00:00
Giovanni Bajo
87e2b34f7b cmd/compile: in prove, learn facts from OpSliceMake
Now that runtime.makeslice callers construct the slice with OpSliceMake,
prove can see and record the actual length and cap of each
slice being constructed.
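
A small illustration (assumed, not from the CL): with the slice built via
OpSliceMake at the call site, prove can see the constant length below.

    package main

    func main() {
        s := make([]int, 4)
        s[3] = 1 // in bounds given len(s) == 4; the check can be dropped
        _ = s
    }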

This small patch is enough to remove 260 additional bound checks
from cmd+std.

Thanks to Martin Möhrmann for pointing me to CL 141822 that
I had missed.

Updates #24660

Change-Id: I14556850f285392051f3f07d13b456b608b64eb9
Reviewed-on: https://go-review.googlesource.com/c/go/+/196784
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2019-09-26 18:27:38 +00:00
Brad Fitzpatrick
cb418dd0d1 test: make -all_codegen default to true on linux-amd64 builder
Fixes #34297

Change-Id: I4584a97d4562d7af0412d683ba1c206e3c1d9edb
Reviewed-on: https://go-review.googlesource.com/c/go/+/197539
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-09-26 17:42:40 +00:00
Dan Scales
225f484c88 misc, runtime, test: extra tests and benchmarks for defer
Add a bunch of extra tests and benchmarks for defer, in preparation for the new
low-cost (open-coded) implementation of defers (see #34481):

 - New file defer_test.go that tests a bunch more unusual defer scenarios,
   including things that might have problems for open-coded defers.
 - Additions to callers_test.go actually verifying what the stack trace looks like
   for various panic or panic-recover scenarios.
 - Additions to crash_test.go testing several more crash scenarios involving
   recursive panics.
 - New benchmark in runtime_test.go measuring speed of panic-recover
 - New CGo benchmark in cgo_test.go calling from Go to C back to Go that
   shows defer overhead

Updates #34481

Change-Id: I423523f3e05fc0229d4277dd00073289a5526188
Reviewed-on: https://go-review.googlesource.com/c/go/+/197017
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-09-25 23:27:16 +00:00
Than McIntosh
fad0a14d92 test: add testcase for gccgo compiler buglet
New test containing code that caused a gccgo compiler failure.

Updates #34503.

Change-Id: Id895a1e1249062b7fb147e54bcaa657e774ed0d9
Reviewed-on: https://go-review.googlesource.com/c/go/+/197217
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-09-25 18:10:57 +00:00
Matthew Dempsky
efb9739203 cmd/compile: use underlying OCOMPLIT's position for OPTRLIT
Currently, when we create an OPTRLIT node, it defaults to the
OCOMPLIT's final element's position. But it improves error messages to
use the OCOMPLIT's own position instead.

Change-Id: Ibb031f543c7248d88d99fd0737685e01d86e2500
Reviewed-on: https://go-review.googlesource.com/c/go/+/197119
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-09-25 17:07:09 +00:00
Matthew Dempsky
f346a4c44c test: add regress test for #27557
This commit just adds a regress test for a few of the important corner
cases that I identified in #27557, which turn out to not be tested
anywhere.

While here, annotate a few of the existing test cases where we could
improve escape analysis.

Updates #27557.

Change-Id: Ie57792a538f7899bb17915485fabc86100f469a3
Reviewed-on: https://go-review.googlesource.com/c/go/+/197137
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-09-25 17:06:15 +00:00
Martin Möhrmann
f41451e7eb compile: prefer an AND instead of SHR+SHL instructions
On modern 64-bit CPUs a SHR, SHL, or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register takes 2 cycles
and needs to wait for the input register value to be available.

Large constants used to mask the high bits of a register with an AND
instruction can not be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.

However that MOV instruction is not dependent on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.

Using a pair of shifts to mask high bits instead of an AND to mask high
bits of a register has a shorter encoding and uses one less general
purpose register but is slower due to taking one clock cycle longer
if there is no register pressure that would make the AND variant need to
generate a spill.

For example the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f                SHRQ $0x3f, DX
48c1e23f                SHLQ $0x3f, DX

after this CL the instructions are the same as GCC and LLVM use:
48b80000000000000080    MOVQ $0x8000000000000000, AX
4821d0                  ANDQ DX, AX

Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.

Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:

    var GlobalU uint

    func BenchmarkAndHighBits(b *testing.B) {
        x := uint(0)
        for i := 0; i < b.N; i++ {
                x &= 1 << 63
        }
        GlobalU = x
    }

amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name           old time/op  new time/op  delta
AndHighBits-4  0.61ns ± 6%  0.42ns ± 6%  -31.42%  (p=0.000 n=25+25):

'go run run.go -all_codegen -v codegen' passes with the following adjustments:

ARM64: The BFXIL pattern ((x << lc) >> rc | y & ac) needed adjustment
       since ORshiftRL generation fusing '>> rc' and '|' interferes
       with matching ((x << lc) >> rc) to generate UBFX. Previously
       ORshiftLL was created first using the shifts generated for (y & ac).

S390X: Add rules for abs and copysign to match use of AND instead of SHIFTs.

Updates #33826
Updates #32781

Change-Id: I5a59f6239660d53c029cd22dfb44ddf39f93a56c
Reviewed-on: https://go-review.googlesource.com/c/go/+/196810
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-09-24 20:30:59 +00:00
Bryan C. Mills
34fe8295c5 Revert "compile: prefer an AND instead of SHR+SHL instructions"
This reverts CL 194297.

Reason for revert: introduced register allocation failures on PPC64LE builders.

Updates #33826
Updates #32781
Updates #34468

Change-Id: I7d0b55df8cdf8e7d2277f1814299b083c2692e48
Reviewed-on: https://go-review.googlesource.com/c/go/+/196957
Run-TryBot: Bryan C. Mills <bcmills@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2019-09-23 15:20:12 +00:00
Martin Möhrmann
4e2b84ffc5 compile: prefer an AND instead of SHR+SHL instructions
On modern 64-bit CPUs a SHR, SHL, or AND instruction takes 1 cycle to execute.
A pair of shifts that operate on the same register takes 2 cycles
and needs to wait for the input register value to be available.

Large constants used to mask the high bits of a register with an AND
instruction can not be encoded as an immediate in the AND instruction
on amd64 and therefore need to be loaded into a register with a MOV
instruction.

However that MOV instruction is not dependent on the output register and
on many CPUs does not compete with the AND or shift instructions for
execution ports.

Using a pair of shifts to mask high bits instead of an AND to mask high
bits of a register has a shorter encoding and uses one less general
purpose register but is slower due to taking one clock cycle longer
if there is no register pressure that would make the AND variant need to
generate a spill.

For example the instructions emitted for (x & 1 << 63) before this CL are:
48c1ea3f                SHRQ $0x3f, DX
48c1e23f                SHLQ $0x3f, DX

after this CL the instructions are the same as GCC and LLVM use:
48b80000000000000080    MOVQ $0x8000000000000000, AX
4821d0                  ANDQ DX, AX

Some platforms such as arm64 already have SSA optimization rules to fuse
two shift instructions back into an AND.

Removing the general rule to rewrite AND to SHR+SHL speeds up this benchmark:

    var GlobalU uint

    func BenchmarkAndHighBits(b *testing.B) {
        x := uint(0)
        for i := 0; i < b.N; i++ {
                x &= 1 << 63
        }
        GlobalU = x
    }

amd64/darwin on Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz:
name           old time/op  new time/op  delta
AndHighBits-4  0.61ns ± 6%  0.42ns ± 6%  -31.42%  (p=0.000 n=25+25):

'go run run.go -all_codegen -v codegen' passes with the following adjustments:

ARM64: The BFXIL pattern ((x << lc) >> rc | y & ac) needed adjustment
       since ORshiftRL generation fusing '>> rc' and '|' interferes
       with matching ((x << lc) >> rc) to generate UBFX. Previously
       ORshiftLL was created first using the shifts generated for (y & ac).

S390X: Add rules for abs and copysign to match use of AND instead of SHIFTs.

Updates #33826
Updates #32781

Change-Id: I43227da76b625de03fbc51117162b23b9c678cdb
Reviewed-on: https://go-review.googlesource.com/c/go/+/194297
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-09-21 18:00:13 +00:00
Agniva De Sarker
ecc7dd5469 test/codegen: fix wasm codegen breakage
After CL 195204, i32.eqz instructions no longer appear in if conditions
unless needed. I forgot to run the codegen tests when submitting that CL.

Thanks to @martisch for catching it.

Fixes #34442

Change-Id: I177b064b389be48e39d564849714d7a8839be13e
Reviewed-on: https://go-review.googlesource.com/c/go/+/196580
Run-TryBot: Agniva De Sarker <agniva.quicksilver@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2019-09-21 16:31:44 +00:00
Matthew Dempsky
85fc765341 cmd/compile: optimize switch on strings
When compiling expression switches, we try to optimize runs of
constants into binary searches. The ordering used isn't visible to the
application, so it's unimportant as long as we're consistent between
sorting and searching.

For strings, it's much cheaper to compare string lengths than strings
themselves, so instead of ordering strings by "si <= sj", we currently
order them by "len(si) < len(sj) || len(si) == len(sj) && si <= sj"
(i.e., the lexicographical ordering on the 2-tuple (len(s), s)).

However, it's also somewhat cheaper to compare strings for equality
(i.e., ==) than for ordering (i.e., <=). And if there were two or
three string constants of the same length in a switch statement, we
might unnecessarily emit ordering comparisons.

For example, given:

    switch s {
    case "", "1", "2", "3": // ordered by length then content
        goto L
    }

we currently compile this as:

    if len(s) < 1 || len(s) == 1 && s <= "1" {
        if s == "" { goto L }
        else if s == "1" { goto L }
    } else {
        if s == "2" { goto L }
        else if s == "3" { goto L }
    }

This CL switches to using a 2-level binary search---first on len(s),
then on s itself---so that string ordering comparisons are only needed
when there are 4 or more strings of the same length. (4 being the
cut-off for when using binary search is actually worthwhile.)

So the above switch instead now compiles to:

    if len(s) == 0 {
        if s == "" { goto L }
    } else if len(s) == 1 {
        if s == "1" { goto L }
        else if s == "2" { goto L }
        else if s == "3" { goto L }
    }

which is better optimized by walk and SSA. (Notably, because there are
only two distinct lengths and no more than three strings of any
particular length, this example ends up falling back to simply using
linear search.)

Test case by khr@ from CL 195138.

Fixes #33934.

Change-Id: I8eeebcaf7e26343223be5f443d6a97a0daf84f07
Reviewed-on: https://go-review.googlesource.com/c/go/+/195340
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-09-18 05:33:05 +00:00