Print numGC followed by numGC1, rather than printing numGC twice.
Change-Id: I8e7144b6a11d4ae9be0d82d88b86fed04b906e2f
Reviewed-on: https://go-review.googlesource.com/22087
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Map keys are currently validated in multiple locations but share
a common validation routine. The problem is that early validations
should be lenient enough to allow for forward types while the final
validations should not. The final validations should fail on forward
types since they've already settled.
This change also separates the key type checking from the creation
of the map via typMap. Instead of the mapqueue being populated in
copytype() by checking the map line number, it's populated in the
same block that validates the key type. This isolates key validation
logic during type checking.
Fixes #14988
Change-Id: Ia47cf6213585d6c63b3a35249104c0439feae658
Reviewed-on: https://go-review.googlesource.com/21830
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Removes 49 more bounds checks in make.bash. For example:
var a [100]int
for i := 0; i < 50; i++ {
    use a[i+25]
}
Change-Id: I85e0130ee5d07f0ece9b17044bba1a2047414ce7
Reviewed-on: https://go-review.googlesource.com/21379
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Be more careful about inserting instrumentation in racewalk.
If the node being instrumented is an OAS, and it has a non-
empty Ninit, then append instrumentation to the Ninit list
rather than letting it be inserted before the OAS (and the
compilation of its init list). This deals with the case that
the Ninit list defines a variable used in the RHS of the OAS.
Fixes #15091.
Change-Id: Iac91696d9104d07f0bf1bd3499bbf56b2e1ef073
Reviewed-on: https://go-review.googlesource.com/21771
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: David Chase <drchase@google.com>
Fails for the same reason as ppc64 and mips64 (incomplete
optimization).
Change-Id: Ieb4d997fc27d4f2b756e63dd7f588abe10c0213a
Reviewed-on: https://go-review.googlesource.com/20963
Reviewed-by: Bill O'Farrell <billotosyr@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Make sure the results of unsigned constant-folded
shifts are sign-extended into the AuxInt field.
Fixes #15175
Change-Id: I3490d1bc3d9b2e1578ed30964645508577894f58
Reviewed-on: https://go-review.googlesource.com/21586
Reviewed-by: Alexandru Moșoi <alexandru@mosoi.ro>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Fold the comparison when the SHR result is small.
Useful for:
- murmur-mix-like hashing where the higher bits are desirable, e.g. hash = uint32(i * C) >> 18
- integer log2 via DeBruijn sequence: http://graphics.stanford.edu/~seander/bithacks.html#IntegerLogDeBruijn
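For illustration (the constant and names below are made up, not from this CL), the shifted value is provably below 1<<14, so the comparison behind the bounds check on the table lookup can be folded away:
const mixC = 2654435761 // illustrative multiplier

var table [1 << 14]byte

func lookup(i uint32) byte {
    h := (i * mixC) >> 18 // always < 1<<14 for a 32-bit value
    return table[h]       // the bounds-check comparison folds away
}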
Change-Id: If70ae18cb86f4cc83ab6213f88ced03cc4986156
Reviewed-on: https://go-review.googlesource.com/21514
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Test goprint.go sometimes failed on a slow builder (plan9_arm)
because of a timing dependency. Instead of sleeping for a fixed
time to allow the child goroutine to finish, wait explicitly for
child termination by calling runtime.NumGoroutine until the
returned value is 1.
Fixes #15097
Change-Id: Ib3ef5ec3c8277083c774542f48bcd4ff2f79efde
Reviewed-on: https://go-review.googlesource.com/21603
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
No point in doing anything for x=x assignments.
In addition, skipping these assignments prevents generating:
VARDEF x
COPY x -> x
which is bad because x is incorrectly considered
dead before the vardef.
Fixes #14904
Change-Id: I6817055ec20bcc34a9648617e0439505ee355f82
Reviewed-on: https://go-review.googlesource.com/21470
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Missed a case for closure calls (OCALLFUNC && indirect) in
esc.go:esccall.
Cleanup to runtime code for windows to more thoroughly hide
a technical escape. Also made code pickier about failing
to load non-optional kernel32.dll.
Fixes #14409.
Change-Id: Ie75486a2c8626c4583224e02e4872c2875f7bca5
Reviewed-on: https://go-review.googlesource.com/20102
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Since BCE happens over several passes (opt, loopbce, prove)
it's easy to regress, especially with rewriting.
The pass is only activated with a special debug flag.
Change-Id: I46205982e7a2751156db8e875d69af6138068f59
Reviewed-on: https://go-review.googlesource.com/21510
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Identify this assignment case and instead of the more general error
prog.go:6: cannot assign to students["sally"].age
produce
prog.go:6: cannot directly assign to struct field students["sally"].age in map
that explains why the assignment is not possible.
Fixes #13779.
Change-Id: I90c10b445f907834fc1735aa66e44a0f447aa74f
Reviewed-on: https://go-review.googlesource.com/21462
Reviewed-by: David Chase <drchase@google.com>
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
Change-Id: I91873aaebf79bdf1c00d38aacc1a1fb8d79656a7
Reviewed-on: https://go-review.googlesource.com/21433
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Handle this case:
if 0 <= i && i < len(a) {
    use a[i]
}
Shaves about 5k from pkg/tool/linux_amd64/*.
Change-Id: I6675ff49aa306b0d241b074c5738e448204cd981
Reviewed-on: https://go-review.googlesource.com/21431
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The issue was seen when inlining an exported function that contained
a fallthrough statement.
Fixes #15071
Change-Id: I1e8215ad49d57673dba7e8f8bd2ed8ad290dc452
Reviewed-on: https://go-review.googlesource.com/21452
Reviewed-by: Dave Cheney <dave@cheney.net>
There are 5293 loops in the main Go repository.
A survey of the top most common for loops:
18 for __k__ := 0; i < len(sa.Addr); i++ {
19 for __k__ := 0; ; i++ {
19 for __k__ := 0; i < 16; i++ {
25 for __k__ := 0; i < length; i++ {
30 for __k__ := 0; i < 8; i++ {
49 for __k__ := 0; i < len(s); i++ {
67 for __k__ := 0; i < n; i++ {
376 for __k__ := range __slice__ {
685 for __k__, __v__ := range __slice__ {
2074 for __, __v__ := range __slice__ {
The algorithm to find induction variables handles all cases
with an upper limit. It currently doesn't find related induction
variables such as c * ind or c + ind.
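For illustration (a sketch, not code from this CL), the simplest shape this handles is a loop bounded by len, where the check on a[i] is proven redundant:
func sum(a []int) int {
    s := 0
    for i := 0; i < len(a); i++ {
        s += a[i] // bounds check removed: i is an induction variable < len(a)
    }
    return s
}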
842 out of 22954 bounds checks are removed for src/make.bash.
1957 out of 42952 bounds checks are removed for src/all.bash.
Things to do in follow-up CLs:
* Find the associated pointer for `for _, v := range a {}`
* Drop the NilChecks on the pointer.
* Replace the implicit induction variable by a loop over the pointer
Generated garbage can be reduced if we share the sdom between passes.
% benchstat old.txt new.txt
name old time/op new time/op delta
Template 337ms ± 3% 333ms ± 3% ~ (p=0.258 n=9+9)
GoTypes 1.11s ± 2% 1.10s ± 2% ~ (p=0.912 n=10+10)
Compiler 5.25s ± 1% 5.29s ± 2% ~ (p=0.077 n=9+9)
MakeBash 33.5s ± 1% 34.1s ± 2% +1.85% (p=0.011 n=9+9)
name old alloc/op new alloc/op delta
Template 63.6MB ± 0% 63.9MB ± 0% +0.52% (p=0.000 n=10+9)
GoTypes 218MB ± 0% 219MB ± 0% +0.59% (p=0.000 n=10+9)
Compiler 978MB ± 0% 985MB ± 0% +0.69% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Template 582k ± 0% 583k ± 0% +0.10% (p=0.000 n=10+10)
GoTypes 1.78M ± 0% 1.78M ± 0% +0.12% (p=0.000 n=10+10)
Compiler 7.68M ± 0% 7.69M ± 0% +0.05% (p=0.000 n=10+10)
name old text-bytes new text-bytes delta
HelloSize 581k ± 0% 581k ± 0% -0.08% (p=0.000 n=10+10)
CmdGoSize 6.40M ± 0% 6.39M ± 0% -0.08% (p=0.000 n=10+10)
name old data-bytes new data-bytes delta
HelloSize 3.66k ± 0% 3.66k ± 0% ~ (all samples are equal)
CmdGoSize 134k ± 0% 134k ± 0% ~ (all samples are equal)
name old bss-bytes new bss-bytes delta
HelloSize 126k ± 0% 126k ± 0% ~ (all samples are equal)
CmdGoSize 149k ± 0% 149k ± 0% ~ (all samples are equal)
name old exe-bytes new exe-bytes delta
HelloSize 947k ± 0% 946k ± 0% -0.01% (p=0.000 n=10+10)
CmdGoSize 9.92M ± 0% 9.91M ± 0% -0.06% (p=0.000 n=10+10)
Change-Id: Ie74bdff46fd602db41bb457333d3a762a0c3dc4d
Reviewed-on: https://go-review.googlesource.com/20517
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
Added a debug flag "-d closure" to explain compilation of
closures (should this be done some other way? Should we
rewrite the "-m" flag to "-d escapes"?) Used this to
discover that the cause was an OXXX node in the captured vars
list, and in turn noticed that OXXX nodes are explicitly
ignored in all other processing of captured variables.
Couldn't figure out a reproducer, but did verify that this OXXX
was not caused by an unnamed return value (which is one use
of these). Verified lack of heap allocation by examining -S
output.
Assembly:
(runtime/mgc.go:1371) PCDATA $0, $2
(runtime/mgc.go:1371) CALL "".notewakeup(SB)
(runtime/mgc.go:1377) LEAQ "".gcBgMarkWorker.func1·f(SB), AX
(runtime/mgc.go:1404) MOVQ AX, (SP)
(runtime/mgc.go:1404) MOVQ "".autotmp_2242+88(SP), CX
(runtime/mgc.go:1404) MOVQ CX, 8(SP)
(runtime/mgc.go:1404) LEAQ go.string."GC worker (idle)"(SB), AX
(runtime/mgc.go:1404) MOVQ AX, 16(SP)
(runtime/mgc.go:1404) MOVQ $16, 24(SP)
(runtime/mgc.go:1404) MOVB $20, 32(SP)
(runtime/mgc.go:1404) MOVQ $0, 40(SP)
(runtime/mgc.go:1404) PCDATA $0, $2
(runtime/mgc.go:1404) CALL "".gopark(SB)
Added a check for compiling_runtime to ensure that this is
caught in the future. Added a test to test the check.
Verified that 1.5.3 did NOT reject the test case when
compiled with -+ flag, so this is not a recently added bug.
Cause of bug is two-part -- there was no leaking closure
detection ever, and instead it relied on capture-of-variables
to trigger compiling_runtime test, but closures improved in
1.5.3 so that mere capture of a value did not also capture
the variable, which thus allowed closures to escape, as well
as this case where the escape was spurious. In
fixedbugs/issue14999.go, compare messages for f and g;
1.5.3 would reject g, but not f. 1.4 rejects both because
1.4 heap-allocates parameter x for both.
Fixes #14999.
Change-Id: I40bcdd27056810628e96763a44f2acddd503aee1
Reviewed-on: https://go-review.googlesource.com/21322
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The change in 20907 fixed varexpr but broke aliased. After that change,
a reference to a field in a struct would not be seen as aliasing itself.
Before that change, it would, but only because all fields in a struct
aliased everything.
This CL changes the compiler to consider all references to a field as
aliasing all other fields in that struct. This is imperfect--a
reference to one field does not alias another field--but is a simple fix
for the immediate problem. A better fix would require tracking the
specific fields as well.
Fixes #15042.
Change-Id: I5c95c0dd7b0699e53022fce9bae2e8f50d6d1d04
Reviewed-on: https://go-review.googlesource.com/21390
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Compound AUTO types weren't named previously. That was because live
variable analysis (plive.go) doesn't handle spilling to compound types.
It can't handle them because there is no valid place to put VARDEFs when
regalloc is spilling compound types.
Compound types = multiword builtin types: complex, string, slice, and
interface.
Instead, we split named AUTOs into individual one-word variables. For
example, a string s gets split into a byte ptr s.ptr and an integer
s.len. Those two variables can be spilled to / restored from
independently. As a result, live variable analysis can handle them
because they are one-word objects.
This CL will change how AUTOs are described in DWARF information.
Consider the code:
func f(s string, i int) int {
    x := s[i:i+5]
    g()
    return lookup(x)
}
The old compiler would spill x to two consecutive slots on the stack,
both named x (at offsets 0 and 8). The new compiler spills the pointer
of x to a slot named x.ptr. It doesn't spill x.len at all, as it is a
constant (5) and can be rematerialized for the call to lookup.
So compound objects may not be spilled in their entirety, and even if
they are they won't necessarily be contiguous. Such is the price of
optimization.
Re-enable live variable analysis tests. One test remains disabled; it
fails because of #14904.
Change-Id: I8ef2b5ab91e43a0d2136bfc231c05d100ec0b801
Reviewed-on: https://go-review.googlesource.com/21233
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Find comparisons to constants and propagate that information
down the dominator tree. Use it to resolve other constant
comparisons on the same variable.
So if we know x >= 7, then an x > 4 condition must evaluate to true.
This change allows us to use "_ = b[7]" hints to eliminate bounds checks.
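For illustration, the standard little-endian load idiom (my sketch, not code from this CL) now needs only the single hinted check:
func load32(b []byte) uint32 {
    _ = b[3] // hint: prove learns len(b) > 3
    return uint32(b[0]) | uint32(b[1])<<8 | uint32(b[2])<<16 | uint32(b[3])<<24
}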
Fixes #14900
Change-Id: Idbf230bd5b7da43de3ecb48706e21cf01bf812f7
Reviewed-on: https://go-review.googlesource.com/21008
Reviewed-by: Alexandru Moșoi <alexandru@mosoi.ro>
Pushed from an old client by mistake. These are the
missing changes.
Change-Id: Ia8d61c5c0bde907369366ea9ea98711823342803
Reviewed-on: https://go-review.googlesource.com/21349
Reviewed-by: Todd Neal <todd@tneal.org>
We need to make sure all the bounds checks pass before issuing
a load which combines several others. We do this by issuing the
combined load at the last load's block, where "last" = closest to
the leaf of the dominator tree.
Fixes #15002
Change-Id: I7358116db1e039a072c12c0a73d861f3815d72af
Reviewed-on: https://go-review.googlesource.com/21246
Reviewed-by: Todd Neal <todd@tneal.org>
Previously, cmd/compile rejected constant int->string conversions if
the integer value did not fit into an "int" value. Also, the runtime
incorrectly truncated 64-bit values to 32 bits before checking whether
they were valid Unicode code points. According to the Go spec, both of
these cases should instead yield "\uFFFD".
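For illustration (my example, not from the CL), a 64-bit value whose low 32 bits happen to form a valid code point now converts correctly:
var x int64 = 1 << 32 // old runtime truncated this to 0 and produced "\x00"
var s = string(x)     // now yields "\uFFFD", as the spec requires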
Fixes #15039.
Change-Id: I3c8a3ad9a0780c0a8dc1911386a523800fec9764
Reviewed-on: https://go-review.googlesource.com/21344
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This only tests amd64 because it's currently broken on non-SSA
backends.
Fixes #8613
Change-Id: I6bc501c81c395e533bb9c7335789750e0c6b7a8f
Reviewed-on: https://go-review.googlesource.com/21325
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
* This is an improved version of an earlier patch.
* Verified with gcc up to 100.
* Limited to two instructions based on costs from
https://gmplib.org/~tege/x86-timing.pdf
Change-Id: Ib7c37de6fd8e0ba554459b15c7409508cbcf6728
Reviewed-on: https://go-review.googlesource.com/21103
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
One intrinsic was needed to help get the very best
performance out of a future GC; as long as that one was
being added, I also added Bswap since that is sometimes
a handy thing to have. I had intended to fill out the
bit-scan intrinsic family, but the mismatch between the
"scan forward" instruction and "count leading zeroes"
was large enough to cause me to leave it out -- it poses
a dilemma that I'd rather dodge right now.
These intrinsics are not exposed for general use.
That's a separate issue requiring an API proposal change
( https://github.com/golang/proposal )
All intrinsics are tested, both that they are substituted
on the appropriate architecture, and that they produce the
expected result.
Change-Id: I5848037cfd97de4f75bdc33bdd89bba00af4a8ee
Reviewed-on: https://go-review.googlesource.com/20564
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This makes the rounding bug fix in math/big for issue 14651 available
to the compiler.
- changes to cmd/compile/internal/big fully automatic via script
- added test case for issue
- updated old test case with correct test data
Fixes #14651.
Change-Id: Iea37a2cd8d3a75f8c96193748b66156a987bbe40
Reviewed-on: https://go-review.googlesource.com/20818
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Don't write back parts of a slicing operation if they
are unchanged from the source of the slice. For example:
x.s = x.s[0:5] // don't write back pointer or cap
x.s = x.s[:5] // don't write back pointer or cap
x.s = x.s[:5:7] // don't write back pointer
There is more to be done here, for example:
x.s = x.s[:len(x.s):7] // don't write back ptr or len
This CL can't handle that one yet.
Fixes #14855
Change-Id: Id1e1a4fa7f3076dc1a76924a7f1cd791b81909bb
Reviewed-on: https://go-review.googlesource.com/20954
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
Allow inlining of functions with switch statements as long as they don't
contain a break or type switch.
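For example (an illustrative function, not from the CL), a small helper like this is now eligible for inlining because its switch has no break and is not a type switch:
func sign(n int) int {
    switch {
    case n < 0:
        return -1
    case n > 0:
        return 1
    }
    return 0
}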
Fixes #13071
Change-Id: I057be351ea4584def1a744ee87eafa5df47a7f6d
Reviewed-on: https://go-review.googlesource.com/20824
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
An instruction consisting of all 0s causes an illegal instruction
signal on s390x. Since 0s are the default in this test, this CL just
makes it explicit.
Change-Id: Id6e060eed1a588f4b10a4e4861709fcd19b434ac
Reviewed-on: https://go-review.googlesource.com/20962
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Consider functions with an ODCLCONST for inlining and modify exprfmt to
ignore those nodes when exporting. Don't add symbols to the export list
if there is no definition. This occurs when OLITERAL symbols are looked
up via Pkglookup for non-exported symbols.
Fixes #7655
Change-Id: I1de827850f4c69e58107447314fe7433e378e069
Reviewed-on: https://go-review.googlesource.com/20773
Run-TryBot: Todd Neal <todd@tneal.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
The biggest change is that each test is now responsible for managing
the starting and stopping of its parallel subtests.
The "Main" test could be run as a tRunner as well. This shows that
the introduction of subtests is merely a generalization of and
consistent with the current semantics.
Change-Id: Ibf8388c08f85d4b2c0df69c069326762ed36a72e
Reviewed-on: https://go-review.googlesource.com/18893
Reviewed-by: Russ Cox <rsc@golang.org>
Make sure we don't generate write barriers in runtime
code that is marked to forbid write barriers.
Implement the optimization that if we're writing a sliced
slice back to the location it came from, we don't need a
write barrier.
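A sketch of the second point (names are mine, not from the CL): re-slicing a field in place writes back the pointer already stored there, so the assignment needs no write barrier:
type ring struct{ buf []byte }

func (r *ring) trim(n int) {
    r.buf = r.buf[:n] // pointer (and cap) unchanged; no write barrier needed
}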
Fixes #14784
Change-Id: I04b6a3b2ac303c19817e932a36a3b006de103aaa
Reviewed-on: https://go-review.googlesource.com/20791
Reviewed-by: Austin Clements <austin@google.com>
This should probably be considered "experimental" at this stage, but
what it needs is feedback from adventurous adopters. I think the data
structure used for describing escape reasons might be extendable to
allow a cleanup of the underlying algorithms, which suffers from
insufficiently separated concerns (the graph does not deal well with
escape level adjustments, so it is augmented by a second custom-walk
portion of the "flood" phase). It would be better to put it all,
including level adjustments, in a single graph structure, and then
simply flood the graph.
Tweaked to avoid allocations in the no-logging case.
Modified run.go to ignore lines with leading "#" in the output (since
it can never match a line), and in -update_errors to ignore leading
tabs in output lines and to normalize embedded filenames.
Currently requires -m -m because otherwise the noise/update
burden for the other escape tests is considerable.
There is a partial test. Existing escape analysis tests seem to
cover all except the panic case and what looks like it might be
unreachable code in escape analysis.
Fixes #10526.
Change-Id: I2524fdec54facae48b00b2548e25d9e46fcaf832
Reviewed-on: https://go-review.googlesource.com/18041
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Keep track of how many uses each Value has. Each appearance in
Value.Args and in Block.Control counts once.
The number of uses of a value is generically useful to
constrain rewrite rules. For instance, we might want to
prevent merging index operations into loads if the same
index expression is used lots of times.
But I have one use in particular for which the use count is required.
We must make sure we don't combine ops with loads if the load has
more than one use. Otherwise, we may split a single load
into multiple loads and that breaks perceived behavior in
the presence of races. In particular, the load of m.state
in sync/mutex.go:Lock can't be done twice. (I have a separate
CL which triggers the mutex failure. This CL has a test which
demonstrates a similar failure.)
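A sketch of the hazard (illustrative code, not the mutex case itself): v is loaded from memory once, and folding that load into both of its uses would turn it into two loads, which a racing writer could observe:
type state struct{ flags int64 }

func check(s *state) (bool, bool) {
    v := s.flags              // a single load with two uses
    return v&1 != 0, v&2 != 0 // must not become two separate loads of s.flags
}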
Change-Id: Icaafa479239f48632a069d0c3f624e6ebc6b1f0e
Reviewed-on: https://go-review.googlesource.com/20790
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Todd Neal <todd@tneal.org>
Receiver parameters generally aren't relevant to the function
signature type. In particular:
1. When checking whether a type's method implements an interface's
method, we specifically want to ignore the receiver parameters,
because they'll be different.
2. When checking interface type equality, interface methods always
use the same "fakethis" *struct{} type as their receiver.
3. Finally, method expressions and method values degenerate into
receiver-less function types.
The only case where we care about receiver types matching is in
addmethod, which is easily handled by adding an extra Eqtype check of
the receiver parameters. Also, added a test for this, since
(surprisingly) there weren't any.
As precedent, go/types.Identical ignores receiver parameters when
comparing go/types.Signature values.
Notably, this allows us to slightly simplify the "implements"
function, which is used for checking whether type/interface t
implements interface iface. Currently, cmd/compile actually works
around Eqtype's receiver parameter checking by creating new throwaway
TFUNC Types without the receiver parameter.
(Worse, the compiler currently only provides APIs to build TFUNC Types
from Nod syntax trees, so building those throwaway types also involves
first building throwaway syntax trees.)
Passes toolstash -cmp.
Change-Id: Ib07289c66feacee284e016bc312e8c5ff674714f
Reviewed-on: https://go-review.googlesource.com/20602
Reviewed-by: Robert Griesemer <gri@golang.org>
Currently we generate write barriers when the right side of an
assignment is a global function. This doesn't fall into the existing
case of storing an address of a global because we haven't lowered the
function to a pointer yet.
This write barrier is unnecessary, so eliminate it.
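A sketch of the pattern (names are illustrative, not from the CL): the value stored is the address of a package-level function, which lives in static data, so no barrier is required:
type server struct{ handler func() }

func defaultHandler() {}

func reset(s *server) {
    s.handler = defaultHandler // store of a static address; no write barrier
}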
Fixes #13901.
Change-Id: Ibc10e00a8803db0fd75224b66ab94c3737842a79
Reviewed-on: https://go-review.googlesource.com/20772
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Step 2 of streamlining parameter parsing
- do parameter validity checks in parser
- two passes instead of multiple (and theoretically quadratic) passes
when checking parameters
- removes the need for OKEY and some ONONAME nodes in those passes
This removes allocation of ~123K OKEY (incl. some ONONAME) nodes
out of a total of ~10M allocated nodes when running make.bash, or
a reduction of the number of allocated nodes by ~1.2%.
Change-Id: I4a8ec578d0ee2a7b99892ac6b92e56f8e0415f03
Reviewed-on: https://go-review.googlesource.com/20748
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Robert Griesemer <gri@golang.org>
* Refactors a bit the saving and restoring of parent restrictions
* Shaves ~100k from pkg/tool/linux_amd64,
but most of the savings come from the rewrite rules.
* Improves on the following artificial test case:
func f1(a4 bool, a6 bool) bool {
    return a6 || (a6 || (a6 || a4)) || (a6 || (a4 || a6 || (false || a6)))
}
Change-Id: I714000f75a37a3a6617c6e6834c75bd23674215f
Reviewed-on: https://go-review.googlesource.com/20306
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Removes an intermediate layer of functions that was clogging up a
corner of the compiler's profile graph.
I can't measure a performance improvement running a large build
like jujud, but the profile reports less total time spent in
gc.(*lexer).getr.
Change-Id: I3000585cfcb0f9729d3a3859e9023690a6528591
Reviewed-on: https://go-review.googlesource.com/20565
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In addition to reflect.Value.Call, exported methods can be invoked
by the Func value in the reflect.Method struct. This CL has the
compiler track what functions get access to a legitimate reflect.Method
struct by looking for interface calls to either of:
Method(int) reflect.Method
MethodByName(string) (reflect.Method, bool)
This is a little overly conservative. If a user implements a type
with one of these methods without using the underlying calls on
reflect.Type, the linker will assume the worst and include all
exported methods. But it's cheap.
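For illustration (a made-up helper, not from the CL), this is the kind of call through reflect.Type that the linker now treats as requiring exported methods to be kept:
import "reflect"

// callRun assumes v has an exported, niladic Run method.
func callRun(v reflect.Value) {
    m, ok := v.Type().MethodByName("Run")
    if !ok {
        return
    }
    m.Func.Call([]reflect.Value{v}) // the receiver is the first argument
}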
No change to any of the binary sizes reported in cl/20483.
For #14740
Change-Id: Ie17786395d0453ce0384d8b240ecb043b7726137
Reviewed-on: https://go-review.googlesource.com/20489
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The location of VARDEFs is incorrect for PPARAMOUT variables
which are also used as temporary locations. We put in VARDEFs
when setting the variable at return time, but when the location
is also used as a temporary the lifetime values are wrong.
Fix copyelim to update the names map properly. This is a
real name bug fix which, as a result, allows me to
write a reasonable test to trigger the PPARAMOUT bug.
This is kind of a band-aid fix for #14591. A more principled
fix (which allows values to be stored in the return variable
earlier than the return point) will be harder.
Fixes #14591
Change-Id: I7df8ae103a982d1f218ed704c080d7b83cdcfdd9
Reviewed-on: https://go-review.googlesource.com/20457
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Make sure we do any just-before-return cleanup on all paths out of a
function, including when recovering. Each exit path should include
deferreturn (if there are any defers) and then the exit
code (e.g. copying heap-escaping return values back to the stack).
Introduce a Defer SSA block type which has two outgoing edges - one the
fallthrough edge (the defer was queued successfully) and one which
immediately returns (the defer had a successful recover() call and
normal execution should resume at the return point).
Fixes #14725
Change-Id: Iad035c9fd25ef8b7a74dafbd7461cf04833d981f
Reviewed-on: https://go-review.googlesource.com/20486
Reviewed-by: David Chase <drchase@google.com>
In increment and decrement statements, explicitly check earlier that
the type of the operand is numeric. This prevents a related but less
clear error about converting "1" from also being emitted.
So, when compiling
package main
func main() {
    var x bool
    x++
}
instead of emitting two errors
prog.go:5: cannot convert 1 to type bool
prog.go:5: invalid operation: x++ (non-numeric type bool)
just emits the second error.
Fixes #12525.
Change-Id: I6e81330703765bef0d6eb6c57098c1336af7c799
Reviewed-on: https://go-review.googlesource.com/20245
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The new check corresponds to the (etype != TANY || Debug['A'] != 0)
that was lost in golang.org/cl/19936.
Fixes #14652.
Change-Id: Iec3788ff02529b3b0f0d4dd92ec9f3ef20aec849
Reviewed-on: https://go-review.googlesource.com/20271
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When the linker was written in C, command line arguments were passed
around as null-terminated byte arrays which encouraged checking
characters one at a time. In Go, that can easily lead to
out-of-bounds panics.
Use the more idiomatic strings.HasPrefix when checking cmd/link's -B
argument to avoid the panic, and replace the manual hex decode with
use of the encoding/hex package.
Fixes #14636
Change-Id: I45f765bbd8cf796fee1a9a3496178bf76b117827
Reviewed-on: https://go-review.googlesource.com/20211
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The changes to internal/big are completely automatic
by running vendor.bash in that directory.
Also added a corresponding test case.
For #14553.
Change-Id: I98b124bcc9ad9e9bd987943719be27864423cb5d
Reviewed-on: https://go-review.googlesource.com/20199
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Static branch predictions (which guide block ordering) are
adjusted based on:
loop/not-loop (favor looping)
abnormal-exit/not (avoid panic)
call/not-call (avoid call)
ret/default (treat returns as rare)
This appears to make no difference in performance of real
code, meaning the compiler itself. The earlier version of
this has been stripped down to help make the cost of this
only-aesthetic-on-Intel phase be as cheap as possible (we
probably want information about inner loops for improving
register allocation, but because register allocation follows
close behind this pass, conceivably the information could be
reused -- so we might do this anyway just to normalize
output).
For a ./make.bash that takes 200 user seconds, about 0.75
seconds is reported in likelyadjust (summing nanoseconds
reported with -d=ssa/likelyadjust/time).
Upstream predictions are respected.
Includes test, limited to build on amd64 only.
Did several iterations on the debugging output to allow
some rough checks on behavior.
Debug=1 logging notes agree/disagree with earlier passes,
allowing analysis like the following:
Run on make.bash:
GO_GCFLAGS=-d=ssa/likelyadjust/debug \
./make.bash >& lkly5.log
grep 'ranch prediction' lkly5.log | wc -l
78242 // 78k predictions
grep 'ranch predi' lkly5.log | egrep -v 'agrees with' | wc -l
29633 // 29k NEW predictions
grep 'disagrees' lkly5.log | wc -l
444 // contradicted 444 times
grep '< exit' lkly5.log | wc -l
10212 // 10k exit predictions
grep '< exit' lkly5.log | egrep 'disagrees' | wc -l
5 // 5 contradicted by previous prediction
grep '< exit' lkly5.log | egrep -v 'agrees' | wc -l
702 // 702-5 redundant with previous prediction
grep '< call' lkly5.log | egrep -v 'agrees' | wc -l
16699 // 16k new call predictions
grep 'stay in loop' lkly5.log | egrep -v 'agrees' | wc -l
3951 // 4k new "remain in loop" predictions
Fixes #11451.
Change-Id: Iafb0504f7030d304ef4b6dc1aba9a5789151a593
Reviewed-on: https://go-review.googlesource.com/19995
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
* It does very simple bounds checking elimination. E.g.
removes the second check in for i := range a { a[i]++; a[i]++ }
* Improves on the following redundant expression:
return a6 || (a6 || (a6 || a4)) || (a6 || (a4 || a6 || (false || a6)))
* Linear in the number of block edges.
I patched in CL 12960 that does bounds, nil and constant propagation
to make sure this CL is not just redundant. Size of pkg/tool/linux_amd64/*
(excluding compile which is affected by this change):
With IsInBounds and IsSliceInBounds
-this -12960 92285080
+this -12960 91947416
-this +12960 91978976
+this +12960 91923088
Gain is ~110% of 12960.
Without IsInBounds and IsSliceInBounds (older run)
-this -12960 95515512
+this -12960 95492536
-this +12960 95216920
+this +12960 95204440
Shaves 22k on its own.
* Can we handle IsInBounds better with this? In
for i := range a { a[i]++; } the bounds checking at a[i]
is not eliminated.
Change-Id: I98957427399145fb33693173fd4d5a8d71c7cc20
Reviewed-on: https://go-review.googlesource.com/19710
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Alexandru Moșoi <alexandru@mosoi.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
If a general comment contains multiple newline characters, we can't
simply unread one and then re-lex it via the general whitespace lexing
phase, because then we'll reset lineno to the line before the "*/"
marker, rather than keeping it where we found the "/*" marker.
Also, for processing imports, call importfile before advancing the
lexer with p.next(), so that lineno reflects the line where we found
the import path, and not the token afterwards.
Fixes #14520.
Change-Id: I785a2d83d632280113d4b757de0d57c88ba2caf4
Reviewed-on: https://go-review.googlesource.com/19934
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Non-SSA backends are all over the map as to whether nil checks
get removed or not; amd64, 386, 386/387, and arm are all subtly different.
Remove these extra checks for now; they are in nilptr3_ssa.go so they
won't get lost.
Change-Id: I2e0051f488fb2cb7278c6fdd44cb9d68b5778345
Reviewed-on: https://go-review.googlesource.com/19961
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Just like we do for integer loads/stores.
Update #14511
Change-Id: Ic6ca6b54301438a5701ea5fb0be755451cb24d45
Reviewed-on: https://go-review.googlesource.com/19923
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Keith Randall <khr@golang.org>
Our stack frame sizes look pretty good now. Lower the stack
guard from 1024 to 720.
Tip is currently using 720.
We could go lower (to 640 at least) except PPC doesn't like that.
Change-Id: Ie5f96c0e822435638223f1e8a2bd1a1eed68e6aa
Reviewed-on: https://go-review.googlesource.com/19922
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Always reading runes (rather than bytes) has negligible overhead
(a simple if at the moment - it can be eliminated eventually) but
simplifies the lexer logic and opens up the door for speedups.
In the process remove many int conversions that are now not needed
anymore.
Also, because identifiers are now more easily recognized, remove
talph label and move identifier lexing "in place".
Also, instead of accepting all chars < 0x80 and then check for
"frogs", only permit valid characters in the first place. Removes
an extra call for common simple tokens and leads to simpler logic.
`time go build -a net/http` (best of 5 runs) seems 1% faster.
Assuming this is in the noise, there is no noticeable performance
degradation with this change.
Change-Id: I3454c9bf8b91808188cf7a5f559341749da9a1eb
Reviewed-on: https://go-review.googlesource.com/19847
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Merge push_parser and pop_parser into a single parse_import function
and inline unimportfile. Shake out function boundaries a little bit so
that the symmetry is readily visible.
Move the import_package call into parse_import (and inline
import_there into import_package). This means importfile no longer
needs to provide fake import data to be needlessly lexed/parsed every
time it's called.
Also, instead of indicating import success/failure by whether the next
token is "package", import_spec can just check whether importpkg is
non-nil.
Tangentially, this somehow alters the diagnostics produced for
test/fixedbugs/issue11610.go. However, the new diagnostics are more
consistent with those produced when the empty import statement is
absent, which seems more desirable than maintaining the previous
errors.
Change-Id: I5cd1c22aa14da8a743ef569ff084711d137279d5
Reviewed-on: https://go-review.googlesource.com/19650
Reviewed-by: Robert Griesemer <gri@golang.org>
Walking the field name as if it were an expression
caused a call to haspointers with a TFIELD, which panics.
Trigger was a field at a large offset within a large struct,
combined with a struct literal expression mentioning that
field.
Fixes #14405
Change-Id: I4589badae27cf3d7cf365f3a66c13447512f41f9
Reviewed-on: https://go-review.googlesource.com/19699
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The Go 1.6 release notes say we'll remove the “-X name value” form
(in favor of the “-X name=value” form) in Go 1.7.
Do that.
Also establish the doc/go1.7.txt file.
Change-Id: Ie4565a6bc5dbcf155181754d8d92bfbb23c75338
Reviewed-on: https://go-review.googlesource.com/19614
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This bug was introduced in golang.org/cl/18217,
while trying to fix #13777.
Originally I wanted to just disable inlining for the case
being handled incorrectly, but it's fairly difficult to detect
and much easier just to fix. Since the case being handled
incorrectly was inlined correctly in Go 1.5, not inlining it
would also be somewhat of a regression.
So just fix it.
Test case copied from Ian's CL 19520.
The mistake to worry about in this CL would be relaxing
the condition too much (we now print the note more often
than we did yesterday). To confirm that we'd catch this mistake,
I checked that changing (!fmtbody || !t.Funarg) to (true) does
cause fixedbugs/issue13777.go to fail. And putting it back
to what is written in this CL makes that test pass again
as well as the new fixedbugs/issue14331.go.
So I believe that the new condition is correct for both constraints.
Fixes #14331.
Change-Id: I91f75a4d5d07c53af5caea1855c780d9874b8df6
Reviewed-on: https://go-review.googlesource.com/19514
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Type switches need write barriers if the written-to
variable is heap allocated.
For the added needwritebarrier call, the right arg doesn't
really matter, I just pass something that will never disqualify
the write barrier. The left arg is the one that matters.
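An illustrative case (my example, not from the CL): the case variable escapes via the closure, so the type switch's implicit assignment to it needs a write barrier:
func capture(i interface{}) func() string {
    switch v := i.(type) {
    case string:
        return func() string { return v } // v is heap allocated because it escapes
    }
    return nil
}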
Fixes #14306
Change-Id: Ic2754167cce062064ea2eeac2944ea4f77cc9c3b
Reviewed-on: https://go-review.googlesource.com/19481
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Semi-regular merge from tip to dev.ssa.
Two fixes:
1) Mark selectgo as not returning. This caused problems
because there are no VARKILL ops on the selectgo path,
causing things to be marked live that shouldn't be.
2) Tell the amd64 assembler that addressing modes like
name(SP)(AX*4) are ok.
Change-Id: I9ca81c76391b1a65cc47edc8610c70ff1a621913
It is one of the slowest compiler phases right now, and we
run two of them.
Instead of using a map to make the initial partition, use a sort.
It is much less memory intensive.
Do a few optimizations to avoid work for size-1 equivalence classes.
Implement -N.
Change-Id: I1d2d85d3771abc918db4dd7cc30b0b2d854b15e1
Reviewed-on: https://go-review.googlesource.com/19024
Reviewed-by: David Chase <drchase@google.com>
Empty blocks are introduced to remove critical edges.
After regalloc, we can remove any of the added blocks
that are still empty.
Change-Id: I0b40e95ac3a6cc1e632a479443479532b6c5ccd9
Reviewed-on: https://go-review.googlesource.com/18833
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Consider this code:
func f(*int)
func g() {
    p := new(int)
    f(p)
}
where f is an assembly function.
In general liveness analysis assumes that during the call to f, p is dead
in this frame. If f has retained p, p will be found alive in f's frame and keep
the new(int) from being garbage collected. This is all correct and works.
We use the Go func declaration for f to give the assembly function
liveness information (the arguments are assumed live for the entire call).
Now consider this code:
func h1() {
    p := new(int)
    syscall.Syscall(1, 2, 3, uintptr(unsafe.Pointer(p)))
}
Here syscall.Syscall is taking the place of f, but because its arguments
are uintptr, the liveness analysis and the garbage collector ignore them.
Since p is no longer live in h1 once the call starts, if the garbage collector
scans the stack while the system call is blocked, it will find no reference
to the new(int) and reclaim it. If the kernel is going to write to *p once
the call finishes, reclaiming the memory is a mistake.
We can't change the arguments or the liveness information for
syscall.Syscall itself, both for compatibility and because sometimes the
arguments really are integers, and the garbage collector will get quite upset
if it finds an integer where it expects a pointer. The problem is that
these arguments are fundamentally untyped.
The solution we have taken in the syscall package's wrappers in past
releases is to insert a call to a dummy function named "use", to make
it look like the argument is live during the call to syscall.Syscall:
func h2() {
    p := new(int)
    syscall.Syscall(1, 2, 3, uintptr(unsafe.Pointer(p)))
    use(unsafe.Pointer(p))
}
Keeping p alive during the call means that if the garbage collector
scans the stack during the system call now, it will find the reference to p.
Unfortunately, this approach is not available to users outside syscall,
because 'use' is unexported, and people also have to realize they need
to use it and do so. There is much existing code using syscall.Syscall
without a 'use'-like function. That code will fail very occasionally in
mysterious ways (see #13372).
This CL fixes all that existing code by making the compiler do the right
thing automatically, without any code modifications. That is, it takes h1
above, which is incorrect code today, and makes it correct code.
Specifically, if the compiler sees a foreign func definition (one
without a body) that has uintptr arguments, it marks those arguments
as "unsafe uintptrs". If it later sees the function being called
with uintptr(unsafe.Pointer(x)) as an argument, it arranges to mark x
as having escaped, and it makes sure to hold x in a live temporary
variable until the call returns, so that the garbage collector cannot
reclaim whatever heap memory x points to.
For now I am leaving the explicit calls to use in package syscall,
but they can be removed early in a future cycle (likely Go 1.7).
The rule has no effect on escape analysis, only on liveness analysis.
Fixes #13372.
Change-Id: I2addb83f70d08db08c64d394f9d06ff0a063c500
Reviewed-on: https://go-review.googlesource.com/18584
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Brief background on "why heap allocate". Things can be
forced to the heap for the following reasons:
1) address published, hence lifetime unknown.
2) size unknown/too large, cannot be stack allocated
3) multiplicity unknown/too large, cannot be stack allocated
4) reachable from heap (not necessarily published)
The bug here is a case of failing to enforce 4) when an
object Y was reachable from a heap allocation X forced
because of 3). It was found in the case of a closure
allocated within a loop (X) and assigned to a variable
outside the loop (multiplicity unknown) where the closure
also captured a map (Y) declared outside the loop (reachable
from heap). Note the variable declared outside the loop (Y)
is not published, has known size, and known multiplicity
(one). The only reason for heap allocation is that it was
reached from a heap allocated item (X), but because that was
not forced by publication, it has to be tracked by loop
level, but escape-loop level was not tracked and thus a bug
results.
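A sketch of the triggering shape described above (my reconstruction, not the original reproducer):
func build() func() {
    m := map[int]int{} // Y: declared outside the loop, reachable from the closure
    var fn func()
    for i := 0; i < 10; i++ {
        fn = func() { m[i]++ } // X: allocated inside the loop, multiplicity unknown
    }
    return fn // fn escapes, so m must be heap allocated as well
}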
The fix is that when a heap allocation is newly discovered,
use its looplevel as the minimum loop level for downstream
escape flooding.
Every attempt to generalize this bug to X-in-loop-
references-Y-outside loop succeeded, so the fix was aimed
to be general. Anywhere that loop level forces heap
allocation, the loop level is tracked. This is not yet
tested for all possible X and Y, but it is correctness-
conservative and because it caused only one trivial
regression in the escape tests, it is probably also
performance-conservative.
The new test checks the following:
1) in the map case, that if fn escapes, so does the map.
2) in the map case, if fn does not escape, neither does the map.
3) in the &x case, that if fn escapes, so does &x.
4) in the &x case, if fn does not escape, neither does &x.
Fixes #13799.
Change-Id: Ie280bef2bb86ec869c7c206789d0b68f080c3fdb
Reviewed-on: https://go-review.googlesource.com/18234
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Adding the evconst(n) call for OANDAND and OOROR in
golang.org/cl/18262 was originally just to parallel the above iscmp
branch, but upon further inspection it seemed odd that removing it
caused test/fixedbugs/issue6671.go's
var b mybool
// ...
b = bool(true) && true // ERROR "cannot use"
to start failing (i.e., by not emitting the expected "cannot use"
error).
The problem is that evconst(n)'s settrue and setfalse paths always
reset n.Type to idealbool, even for logical operators where n.Type
should preserve the operand type. Adding the evconst(n) call for
OANDAND/OOROR inadvertently worked around this by turning the later
evconst(n) call at line 2167 into a noop, so the "n.Type = t"
assignment at line 739 would preserve the operand type.
However, that means evconst(n) was still clobbering n.Type for ONOT,
so declarations like:
const _ bool = !mybool(true)
were erroneously accepted.
Update #13821.
Change-Id: I18e37287f05398fdaeecc0f0d23984e244f025da
Reviewed-on: https://go-review.googlesource.com/18362
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
We don't use these for benchmarking anymore.
Now we have the go1 dir and the benchmarks subrepo.
Some have problematic copyright notices, so move them out of the main repo.
Preserved in golang.org/x/exp/shootout.
Fixes #12688.
Fixes #13584.
Change-Id: Ic0b71191ca1a286d33d7813aca94bab1617a1c82
Reviewed-on: https://go-review.googlesource.com/18320
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Added a format option to inhibit output of the .Note field when
printing, and enabled that option during export.
Added a test.
Fixes #13777.
Change-Id: I739f9785eb040f2fecbeb96d5a9ceb8c1ca0f772
Reviewed-on: https://go-review.googlesource.com/18217
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Chase <drchase@google.com>
The test for non-package main top-level inputs is done while parsing
the export data. Issue #13468 happened because we were not parsing
the export data when using compiler-generated archives
(that is, when using go tool compile -pack).
Fix this by parsing the export data even for archives.
However, that turns up a different problem: the export data check
reports (one assumes spurious) skew errors now, because it has
not been run since Go 1.2.
(Go 1.3 was the first release to use go tool compile -pack.)
Since the code hasn't run since Go 1.2, it can't be that important.
Since it doesn't work today, just delete it.
Figuring out how to make this code work with Robert's export
format was one of the largest remaining TODOs for that format.
Now we don't have to.
Fixes #13468 and makes the world a better place.
Change-Id: I40a4b284cf140d49d48b714bd80762d6889acdb9
Reviewed-on: https://go-review.googlesource.com/17976
Reviewed-by: Robert Griesemer <gri@golang.org>
Fixes #12411.
Change-Id: I2202a754c7750e3b2119e3744362c98ca0d2433e
Reviewed-on: https://go-review.googlesource.com/17818
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Another (historic) artifact due to partially resolving symbols too early.
Fixes #13539.
Change-Id: Ie720c491cfa399599454f384b3a9735e75d4e8f1
Reviewed-on: https://go-review.googlesource.com/17600
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Damien Neil <dneil@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Following an empty import, a declaration involving a ? symbol
generates an internal compiler error when the name of the
symbol is processed (in the newname function).
package a
import""
var?
go.go:2: import path is empty
go.go:3: internal compiler error: newname nil
Make sure dclname is not called when the symbol is nil.
The error message is now:
go.go:2: import path is empty
go.go:3: invalid declaration
go.go:4: syntax error: unexpected EOF
This CL was initially meant to be applied to the old parser,
and has been updated to apply to the new parser.
Fixes #11610
Change-Id: I75e07622fb3af1d104e3a38c89d9e128e3b94522
Reviewed-on: https://go-review.googlesource.com/15268
Reviewed-by: Russ Cox <rsc@golang.org>
The following code:
func n() {(interface{int})}
generates:
3: interface contains embedded non-interface int
3: type %!v(PANIC=runtime error: invalid memory address or nil pointer dereference) is not an expression
This is because the corresponding symbol (the Sym field in the Type object)
is nil, resulting in a panic in typefmt.
Just skip the symbol if it is nil, so that the error message becomes:
3: interface contains embedded non-interface int
3: type interface { int } is not an expression
Fixes #11614
Change-Id: I219ae7eb01edca264fad1d4a1bd261d026294b00
Reviewed-on: https://go-review.googlesource.com/14015
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
The build tags are necessary to keep "go build" in that directory
building only stdio.go, but we have to arrange for test/run.go to
treat them as satisfied.
Fixes #12625.
Change-Id: Iec0cb2fdc2c9b24a4e0530be25e940aa0cc9552e
Reviewed-on: https://go-review.googlesource.com/17454
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Move test for isblank into addmethod so that most of the type checking
for methods is also performed for blank methods.
Fixes #11366.
Change-Id: I13d554723bf96d906d0b3ff390d7b7c87c1a5020
Reviewed-on: https://go-review.googlesource.com/16866
Reviewed-by: Robert Griesemer <gri@golang.org>
- use same local variable name (lno) for line number for LCOLAS everywhere
- remove now unneeded assignment of line number to yylval.i in lexer
Fix per suggestion of mdempsky.
Fixes #13415.
Change-Id: Ie3c7f5681615042a12b81b26724b3a5d8a979c25
Reviewed-on: https://go-review.googlesource.com/17248
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This is a bit ugly but it's a useful test. Run go install -buildmode=shared std
and then go run run.go -linkshared (it passes on linux/amd64).
Change-Id: I5684c79cd03817fa1fc399788b7320f8535c08da
Reviewed-on: https://go-review.googlesource.com/16343
Reviewed-by: Russ Cox <rsc@golang.org>
- fix/check location of popdcl calls where questioned
- remove unnecessary handling of ... (LDDD) in ntype (couldn't be reached)
- inlined fnret_type and simplified fnres as a consequence
- leave handling of ... (LDDD) in arg_list alone (remove TODO)
- verify that parser requires a ';' after last statement in a case/default
(added test case)
Fixes #13243.
Change-Id: Iad94b498591a5e85f4cb15bbc01e8e101415560d
Reviewed-on: https://go-review.googlesource.com/17155
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Chris Manghane <cmang@golang.org>
Use a combination of follow- and stop-token lists and nesting levels
to better synchronize parser after a syntax error.
Fixes #13319.
Change-Id: I9592e0b5b3ba782fb9f9315fea16163328e204f7
Reviewed-on: https://go-review.googlesource.com/17080
Reviewed-by: Chris Manghane <cmang@golang.org>
Handling of &(T{}) assumed that the parser would not introduce ()'s.
Also: Better comments around handling of OPAREN syntax tree optimization.
Fixes #13261.
Change-Id: Ifc5047a0448f5e7d74cd42f6608b87dcc9c2f2fb
Reviewed-on: https://go-review.googlesource.com/17040
Reviewed-by: Chris Manghane <cmang@golang.org>
Also:
- better error messages in some cases
- factored out function to produce syntax error at given line number
Fixes #13273.
Change-Id: I0192a94731cc23444680a26bd0656ef663e6da0b
Reviewed-on: https://go-review.googlesource.com/16992
Reviewed-by: Chris Manghane <cmang@golang.org>
This is a translation of the yacc-based parser with adjustments
to make the grammar work for a recursive-descent parser followed
by cleanups and simplifications.
The yacc actions were mostly literally copied for correctness
with better temporary names.
A few of the syntax tests were adjusted for slightly different
error messages (it is very difficult to match the yacc-based
error messages in all cases, and sometimes the new parser could
produce better errors).
The new parser is enabled by default.
To switch back to the yacc-based parser, set -oldparser.
To hardwire the switch back, uncomment "oldparser = 1" in lex.go.
- passes all.bash
- ~18% reduced parse time per file on average for make.bash
- ~3% reduced compile time for building cmd/compile
Change-Id: Icb5651bb9d8b9f66261762d2c94a03793050d4ce
Reviewed-on: https://go-review.googlesource.com/16665
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Larger stack frames mean nosplit functions use more stack and so the limit
needs to increase.
The change to test/nosplit.go is a bit ugly but I can't really think of a
way to make it nicer.
Change-Id: I2616b58015f0b62abbd62951575fcd0d2d8643c2
Reviewed-on: https://go-review.googlesource.com/16504
Reviewed-by: Russ Cox <rsc@golang.org>
The heap profile is only guaranteed to be up-to-date after two GC
cycles, so force two GCs instead of just one.
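A minimal sketch of the idiom (mine, not the test's actual code):
import (
    "bytes"
    "runtime"
    "runtime/pprof"
)

func heapProfile() []byte {
    runtime.GC()
    runtime.GC() // profile is only guaranteed up to date after the second cycle
    var buf bytes.Buffer
    pprof.Lookup("heap").WriteTo(&buf, 1)
    return buf.Bytes()
}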
Updates #13098.
Change-Id: I4fb9287b698f4a3b90b8af9fc6a2efb3b082bfe5
Reviewed-on: https://go-review.googlesource.com/16848
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The heapsampling.go test occasionally fails on some architectures
because it finds zero heap samples in main.alloc. This happens because
the byte and object counts are only updated at a GC. Hence, if a GC
happens part way through allocInterleaved, but then doesn't happen
after we start calling main.alloc, checkAllocations will see buckets
for the lines in main.alloc (which are created eagerly), but the
object and byte counts will be zero.
Fix this by forcing a GC to update the profile before we collect it.
Fixes #13098.
Change-Id: Ia7a9918eea6399307f10499dd7abefd4f6d13cf6
Reviewed-on: https://go-review.googlesource.com/16846
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Skip fixedbugs/issue10607.go because external linking is not supported
yet.
Skip nilptr3.go because of issue #9058 (same as ppc64).
Change-Id: Ib3dfbd9a03ee4052871cf57c74b3cc5e745e1f80
Reviewed-on: https://go-review.googlesource.com/14461
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change is the same as CL #9345 which was reverted,
except for a small bug fix.
The only change is to the body of sendDirect and its callsite.
Also added a test.
The problem was during a channel send operation. The target
of the send was a sleeping goroutine waiting to receive. We
basically do:
1) Read the destination pointer out of the sudog structure
2) Copy the value we're sending to that destination pointer
Unfortunately, the previous change had a goroutine suspend
point between 1 & 2 (the call to sendDirect). At that point
the destination goroutine's stack could be copied (shrunk).
The pointer we read in step 1 is no longer valid for step 2.
Fixed by not allowing any suspension points between 1 & 2.
I suspect the old code worked correctly basically by accident.
Fixes #13169
The original 9345:
This change removes the retry mechanism we use for buffered channels.
Instead, any sender waking up a receiver or vice versa completes the
full protocol with its counterpart. This means the counterpart does
not need to relock the channel when it wakes up. (Currently
buffered channels need to relock on wakeup.)
For sends on a channel with waiting receivers, this change replaces
two copies (sender->queue, queue->receiver) with one (sender->receiver).
For receives on channels with a waiting sender, two copies are still required.
This change unifies to a large degree the algorithm for buffered
and unbuffered channels, simplifying the overall implementation.
Fixes#11506
Change-Id: I57dfa3fc219cffa4d48301ee15fe5479299efa09
Reviewed-on: https://go-review.googlesource.com/16740
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Make sure that we're moving or zeroing pointers atomically.
Anything that is a multiple of pointer size and at least
pointer aligned might have pointers in it. All the code looks
ok except for the 1-pointer-sized moves.
Fixes#13160
Update #12552
Change-Id: Ib97d9b918fa9f4cc5c56c67ed90255b7fdfb7b45
Reviewed-on: https://go-review.googlesource.com/16668
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Duffcopy now uses X0, as of 5cf281a. Teach the peephole
optimizer that duffcopy clobbers X0 so that it does not
rename registers to use X0 across the duffcopy instruction.
Fixes#13171
Change-Id: I389cbf1982cb6eb2f51e6152ac96736a8589f085
Reviewed-on: https://go-review.googlesource.com/16715
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Ilya Tocar <ilya.tocar@intel.com>
sradi and sradi. encode the top bit of their immediate argument separately from
the rest of it, but the code only handled the sradi case.
I'm pretty sure this is the only instruction missing (a couple of the rotate
instructions encode their immediate the same way but their handling looks OK).
This fixes the failure of "GOARCH=amd64 ~/go/bin/go install -v runtime" as
reported in the bug.
Fixes#11987
Change-Id: I0cdefcd7a04e0e8fce45827e7054ffde9a83f589
Reviewed-on: https://go-review.googlesource.com/16710
Reviewed-by: Minux Ma <minux@golang.org>
Handling of special records for tiny allocations has two problems:
1. Once we queue a finalizer we mark the object. As a result, any
subsequent finalizers for the same object will not be queued
during this GC cycle. If we have 16 finalizers set up (the worst case),
finalization will take 16 GC cycles. This is what caused the misbehavior
of tinyfin.go. The actual flakiness was caused by the fact that fing
is asynchronous and doesn't always run before the check.
2. If a tiny block has both finalizer and profile specials,
it is possible that we queue the finalizer, keep the object live,
and at the same time free the profile record. As a result the heap profile can be skewed.
Fix both issues by analyzing all special records for a single object at once.
Also, make tinyfin test stricter and remove reliance on real time.
Also, add a test for problem 2. Currently the heap profile misses about
half of the live memory.
Fixes#13100
Change-Id: I9ae4dc1c44893724138a4565ca5cae29f2e97544
Reviewed-on: https://go-review.googlesource.com/16591
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
* use new(int32) to be pedantic about documented SetFinalizer rules:
"The argument x must be a pointer to an object allocated by calling
new or by taking the address of a composite literal"
* remove the amd64-only restriction. The GC is fully precise everywhere
now, even on 32-bit. (keep the gccgo restriction, though)
* remove a data race (perhaps the actual bug) and use atomic.LoadInt32
for the final check. The race detector is now happy, too.
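A hedged sketch of the pattern the test follows (names made up):
package main
import (
	"runtime"
	"sync/atomic"
)
func main() {
	var done int32
	p := new(int32) // new(T) satisfies the documented SetFinalizer rules
	runtime.SetFinalizer(p, func(*int32) { atomic.StoreInt32(&done, 1) })
	p = nil
	runtime.GC()
	// Use an atomic load for the final check so the race detector stays happy.
	_ = atomic.LoadInt32(&done)
}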
Updates #13100
Change-Id: I8d05c0ac4f046af9ba05701ad709c57984b34893
Reviewed-on: https://go-review.googlesource.com/16535
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Some tests disabled, some bifurcated into _ssa and not,
with appropriate logging added to compiler.
"tests/live.go" in particular needs attention.
SSA-specific testing removed, since it's all SSA now.
Added "-run_skips" option to tests/run.go to simplify
checking whether a test still fails (or how it fails)
on a skipped platform.
The compiler now compiles with SSA by default.
If you don't want SSA, specify GOSSAHASH=n (or N) as
an environment variable. Function names ending in "_ssa"
are always SSA-compiled.
GOSSAFUNC=fname retains its "SSA for fname, log to ssa.html"
GOSSAPKG=pkg only has an effect when GOSSAHASH=n
GOSSAHASH=10101 etc retains its name-hash-matching behavior
for purposes of debugging.
See #13068
Change-Id: I8217bfeb34173533eaeb391b5f6935483c7d6b43
Reviewed-on: https://go-review.googlesource.com/16299
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
Some tests need to disable inlining of a function. It's currently done
in one of a few ways (adding a function call, an empty switch, or a
defer). Add support for a less fragile 'go:noinline' directive that
prevents inlining.
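For illustration, a minimal (made-up) use of the directive:
package main
//go:noinline
func add(a, b int) int { return a + b }
func main() {
	println(add(1, 2)) // this call is never inlined
}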
Fixes#12312
Change-Id: Ife444e13361b4a927709d81aa41e448f32eec8d4
Reviewed-on: https://go-review.googlesource.com/13911
Run-TryBot: Todd Neal <todd@tneal.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Modified GOSSA{HASH,PKG} environment variable filters to
make it easier to make/run with all SSA for testing.
Disable attempts at SSA for architectures that are not
amd64 (avoid spurious errors/unimplementeds.)
Removed easy out for unimplemented features.
Add convert op for proper liveness in presence of uintptr
to/from unsafe.Pointer conversions.
Tweaked stack sizes to get a pass on windows;
1024 instead of 768 was observed to pass at least once.
Change-Id: Ida3800afcda67d529e3b1cf48ca4a3f0fa48b2c5
Reviewed-on: https://go-review.googlesource.com/16201
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
Small fix: a short variable declaration in a type switch
checks that the variable used has a valid shape (ONAME, OTYPE, or
ONONAME) and rejects everything else; only then is a new variable declared.
If the symbol contained in the declaration was a named OLITERAL (still a
valid identifier obviously) it would be rejected, even though a new
variable would have been declared.
Fix adds this case to the check.
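A hedged sketch of the previously rejected case (the constant and variable names are made up):
package main
const v = 42 // a named constant (OLITERAL)
func main() {
	var i interface{} = "hello"
	switch v := i.(type) { // declares a new v, shadowing the constant; this used to be rejected
	case string:
		println(v)
	default:
		_ = v
	}
}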
Added a test case from issue12413.
Fixes#12413
Change-Id: I150dadafa8ee5612c867d58031027f2dca8c6ebc
Reviewed-on: https://go-review.googlesource.com/15760
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Apply static bounds checking logic during type checking even to
zero-element arrays, but skip synthesized OINDEX nodes that the
compiler has asserted are within bounds (such as the ones generated
while desugaring ORANGE nodes). This matches the logic in walkexpr
that also skips static bounds checking when Bounded is true.
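For example, the following (illustrative) program is now rejected at compile time instead of only panicking at run time:
package main
func main() {
	var a [0]int
	_ = a[0] // constant index out of range for a zero-element array: now a compile-time error
}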
Passes toolstash/buildall.
Fixes#12944.
Change-Id: I14ba03d71c002bf969d69783bec8d1a8e10e7d75
Reviewed-on: https://go-review.googlesource.com/15902
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Following the C to Go translation, some useless variables
were left in the code. In fmt.go, this was harmless.
In lex.go, it broke the error message related to
non-canonical import paths.
Fix it, and remove the useless variables.
The added test case is ignored in the go/types tests, since
the behavior of the non-canonical import path check seems
to be different.
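As an illustration (the import path is made up and the file is expected to fail to compile), the diagnostic should now name the offending path:
package p
import _ "fmt/" // non-canonical import path: the error should report that "fmt/" should be "fmt"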
Fixes#11362
Change-Id: Ic9129139ede90357dc79ebf167af638cf44536fa
Reviewed-on: https://go-review.googlesource.com/15580
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This lets us re-enable duffzero.
Fixes#12108
Change-Id: Iefd24d26eaa56067caa2c29ff99cd20a42d8714a
Reviewed-on: https://go-review.googlesource.com/14937
Reviewed-by: Keith Randall <khr@golang.org>
The current heap sampling introduces some bias that interferes
with unsampling, producing unexpected heap profiles.
The solution is to use a Poisson process to generate the
sampling points, using the formulas described at
https://en.wikipedia.org/wiki/Poisson_process
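A hedged sketch of the idea (not the runtime's actual code; the function name and the 512 KiB mean are made up): inter-sample distances of a Poisson process are exponentially distributed, so each next sampling point can be drawn by inverse transform sampling:
package main
import (
	"fmt"
	"math"
	"math/rand"
)
// nextSample returns the number of bytes to allocate before taking the
// next heap sample, drawn from an exponential distribution with the given
// mean, which makes the sampling points a Poisson process.
func nextSample(mean float64) int {
	return int(-math.Log(1-rand.Float64()) * mean)
}
func main() {
	fmt.Println(nextSample(512 * 1024))
}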
This fixes#12620
Change-Id: If2400809ed3c41de504dd6cff06be14e476ff96c
Reviewed-on: https://go-review.googlesource.com/14590
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Turns out the summary information for the ... args was
already correctly computed; all that was lacking was to make
use of it and to correct tests that documented our prior
deficiencies.
Fixes#12006
Change-Id: Ie8adfab7547f179391d470679598f0904aabf9f7
Reviewed-on: https://go-review.googlesource.com/15200
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The internal error arose from calling methodfunc on an invalid interface
field during the implements check: int obviously isn't a function,
and methodfunc errors in getinarg...
for im := iface.Type; im != nil; im = im.Down {
imtype = methodfunc(im.Type, nil)
// ...
}
The fix handles the internal compiler error, but does not throw an
additional error, i.e. the following code will error on the I
interface, but type A will pass the implements check since
'Read(string) string' is implemented and 'int' is skipped
type I interface {
Read(string) string
int
}
type A struct {
}
func (a *A) Read(s string) string {
return s
}
func New() I {
return new(A)
}
Fixes#10975
Change-Id: I4b54013afb2814db3f315515f0c742d8631ca500
Reviewed-on: https://go-review.googlesource.com/13747
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The existing test did not take into account the implicit
dereference of &fixedArray and thus heap-escaped when it
was not necessary.
Also added a detailed test for this and related cases.
Fixes#12588
Change-Id: I951e9684a093082ccdca47710f69f4366bd6b3cf
Reviewed-on: https://go-review.googlesource.com/15130
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
As detailed in #11910, the current implementation attempts to execute an area
of memory with unknown content. If the memory is executable, the result is
unpredictable - instead, make the test deterministic by attempting to execute
an instruction that is known to trigger a trap on the given architecture.
The new implementation is written by iant@ and provided via #11910.
Update issue #11910
Change-Id: Ia698c36e0dd98a9d9d16a701f60f6748c6faf896
Reviewed-on: https://go-review.googlesource.com/15058
Run-TryBot: Joel Sing <jsing@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
For variables which get SSA'd, SSA keeps track of all the def/kill itself;
explicit varkill operations are only needed for on-stack variables.
This reduces stack frame sizes significantly because often the
only use of a variable was a varkill, and without that last use
the variable doesn't get allocated in the frame at all.
Fixes#12602
Change-Id: I3f00a768aa5ddd8d7772f375b25f846086a3e689
Reviewed-on: https://go-review.googlesource.com/14758
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
recover4 allocates 16 pages of memory via mmap, makes a 4 page hole in it with
munmap, allocates another 16 pages of memory via normal allocation and then
tries to copy from one to the other. For some reason on arm64 (but no other
platform I have tested) the second allocation sometimes causes the runtime to
ask the kernel for 4 additional pages of memory -- which the kernel satisfies
by remapping the pages that were just unmapped!
Moving the second allocation before the munmap fixes this behaviour; with the fix
I can run recover4 tens of thousands of times without failure, versus a failure
rate of ~0.5% before.
Fixes#12549
Change-Id: I490b895b606897e4f7f25b1b51f5d485a366fffb
Reviewed-on: https://go-review.googlesource.com/14632
Reviewed-by: Dave Cheney <dave@cheney.net>
For now, we only use typedmemmove. This can be optimized
in future CLs.
Also add a feature to help with binary searching bad compilations.
Together with GOSSAPKG, GOSSAHASH specifies the last few binary digits
of the hash of function names that should be compiled. So
GOSSAHASH=0110 means compile only those functions whose last 4 bits
of hash are 0110. By adding digits to the front we can binary search
for the function whose SSA-generated code is causing a test to fail.
Change-Id: I5a8b6b70c6f034f59e5753965234cd42ea36d524
Reviewed-on: https://go-review.googlesource.com/14530
Reviewed-by: Keith Randall <khr@golang.org>
Add line that was inadvertently removed.
Change-Id: I99ebc1041e984e408ae5825836c28b9891d6043b
Reviewed-on: https://go-review.googlesource.com/14470
Run-TryBot: Todd Neal <todd@tneal.org>
Reviewed-by: Keith Randall <khr@golang.org>
This matches existing behavior, see issue #2196
Change-Id: Ifa9359b7c821115389f337a57de355c5ec23be8f
Reviewed-on: https://go-review.googlesource.com/14261
Reviewed-by: Keith Randall <khr@golang.org>
Evaluating args can overwrite arg area, so we can't write argsize and func
until args are evaluated.
Fixes test/recover.go, test/recover1.go, and test/fixedbugs/issue4066.go
Change-Id: I862e4934ccdb8661431bcc3e1e93817ea834ea3f
Reviewed-on: https://go-review.googlesource.com/14405
Reviewed-by: David Chase <drchase@google.com>
Now that the standard library tests
are all passing, add the
test directory tests.
These contain a number of edge case tests
that are of particular interest for compilers.
Some kinds of tests are not well-suited
for a new backend, such as errorcheck tests.
To start, use SSA only for run and runoutput.
There are three failing tests now.
Just mark them as such for now,
so that we can prevent regressions.
This code will all be unwound once SSA
codegen matures and becomes the default.
Change-Id: Ic51e6d0cc1cd48ef1e2fe2c9a743bf0cce275200
Reviewed-on: https://go-review.googlesource.com/14344
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is one of a set of changes to make the transition away from NodeList
easier by removing cases in which NodeList doesn't act semi-trivially like a
[]*Node.
This CL was originally prepared by Josh Bleecher Snyder <josharian@gmail.com>.
This change passes go build -toolexec 'toolstash -cmp' -a std.
Change-Id: I4d041b343952f4a31f3150fd70669e08fcaa74f8
Reviewed-on: https://go-review.googlesource.com/14305
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This is a first of a set of changes to make the transition away from NodeList
easier by removing cases in which NodeList doesn't act semi-trivially like a
[]*Node.
This CL was originally prepared by Josh Bleecher Snyder <josharian@gmail.com>.
This change passes go build -toolexec 'toolstash -cmp' -a std.
Change-Id: Iad10b75e42b5b24e1694407841282fa3bab2dc9f
Reviewed-on: https://go-review.googlesource.com/14232
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Instead of trying to delete dead code as soon as we find it, just
mark it as dead using a PlainAndDead block kind. The deadcode pass
will do the real removal.
This way is somewhat more efficient because we don't need to mess
with successor and predecessor lists of all the dead blocks.
Fixes#12347
Change-Id: Ia42d6b5f9cdb3215a51737b3eb117c00bd439b13
Reviewed-on: https://go-review.googlesource.com/14033
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Issue 12226 was caused by allocating the same register
for the equality check of two byte values: the code in cgen.go freed the
register for the second operand before allocating the register
for the first operand.
Fixes#12226
Change-Id: Ie4dc33a488bd48a17f8ae9b497fd63c1ae390555
Reviewed-on: https://go-review.googlesource.com/13771
Reviewed-by: Russ Cox <rsc@golang.org>
Currently an expression like
var v = 0 >> 1000
is rejected by gc with a "stupid shift" error, while gotype
compiles it successfully.
As suggested by gri on the issue tracker, allow an rsh right
operand to be any valid uint value.
Fixes#11328
Change-Id: I6ccb3b7f842338d91fd26ae37dd4fa279d7fc440
Reviewed-on: https://go-review.googlesource.com/13777
Reviewed-by: Robert Griesemer <gri@golang.org>
The reg[] array in .../gc is where truth lies. The copy in .../ARCH
is incorrect as it is mostly not updated to reflect regalloc decisions.
This bug was introduced in the rewrite
https://go-review.googlesource.com/#/c/7853/. The new reg[] array was
introduced in .../gc but not all of the uses were removed in the
.../ARCH directories.
Fixes#12133
Change-Id: I6364fc403cdab92d802d17f2913ba1607734037c
Reviewed-on: https://go-review.googlesource.com/13630
Reviewed-by: Russ Cox <rsc@golang.org>
ODOTTYPE should be treated a whole lot like ODOT,
but it was missing completely from the switch in
escwalk and thus escape status did not propagate
to fields.
Since interfaces are required to trigger this bug,
the test was added to escape_iface.go.
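A hedged sketch of the kind of case that was missed (type and variable names are made up): a pointer reached through a type assertion's field must propagate escape status to the asserted value.
package main
type pair struct{ p *int }
var sink *int
func keep(i interface{}) {
	sink = i.(pair).p // the field reached via the type assertion flows to the heap
}
func main() {
	x := 0
	keep(pair{&x})
}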
Fixes#11931.
Change-Id: Id0383981cc4b1a160f6ad447192a112eed084538
Reviewed-on: https://go-review.googlesource.com/12921
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Instead of pushing the denominator argument on the stack,
the denominator is now passed in m.
This fixes a variety of bugs related to trying to take stack traces
backwards from the middle of the software div/mod routines.
Some of those bugs have been kludged around in the past,
but others have not. Instead of trying to patch up after breaking
the stack, this CL stops breaking the stack.
This is an update of https://golang.org/cl/19810043,
which was rolled back in https://golang.org/cl/20350043.
The problem in the original CL was that there were divisions
at bad times, when m was not available. These were divisions
by constant denominators, either in C code or in assembly.
The Go compiler knows how to generate division by multiplication
for constant denominators, but the C compiler did not.
There is no longer any C code, so that's taken care of.
There was one problematic DIV in runtime.usleep (assembly)
but https://golang.org/cl/12898 took care of that one.
So now this approach is safe.
Reject DIV/MOD in NOSPLIT functions to keep them from
coming back.
Fixes#6681.
Fixes#6699.
Fixes#10486.
Change-Id: I09a13c76ad08ba75b3bd5d46a3eb78e66a84ab38
Reviewed-on: https://go-review.googlesource.com/12899
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The layout code has to date insisted on stack frames that are 16-aligned
including the saved LR, and it ensured this by growing the frame itself.
This breaks code that refers to values near the top of the frame by positive
offset from SP, and in general it's too magical: if you see TEXT xxx, $N,
you expect that the frame size is actually N, not sometimes N and sometimes N+8.
This led to a serious bug in the compiler where ambiguously live values
were not being zeroed correctly, which in turn triggered an assertion
in the GC about finding only valid pointers. The compiler has been
fixed to always emit aligned frames, and the hand-written assembly
has also been fixed.
Now that everything is aligned, make unaligned an error instead of
something to "fix" silently.
For #9880.
Change-Id: I05f01a9df174d64b37fa19b36a6b6c5f18d5ba2d
Reviewed-on: https://go-review.googlesource.com/12848
Reviewed-by: Austin Clements <austin@google.com>
Russ Cox fixed this issue for other systems
in CL 12026, but the Plan 9 part was forgotten.
Fixes#11656.
Change-Id: I91c033687987ba43d13ad8f42e3fe4c7a78e6075
Reviewed-on: https://go-review.googlesource.com/12762
Reviewed-by: Russ Cox <rsc@golang.org>
This is a reprise of https://golang.org/cl/12623. In that CL I made
a suggestion which forgot that the +build constraints in the test
directory are not the same as those supported by the go tool: in the
test directory, if a single +build line fails, the test is skipped.
(In my defense, the code I was commenting on was also wrong.)
Change-Id: I8f29392a80b1983027f9a33043c803578409d678
Reviewed-on: https://go-review.googlesource.com/12776
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The old code appended, which did not play well with a closure
with a ... param.
Fixes#11075.
Change-Id: Ib7c8590c5c4e576e798837e7499e00f3494efb4a
Reviewed-on: https://go-review.googlesource.com/12580
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Chase <drchase@google.com>
Add label and goto checks and improve test coverage.
Implement OSWITCH and OSELECT.
Implement OBREAK and OCONTINUE.
Allow generation of code in dead blocks.
Change-Id: Ibebb7c98b4b2344f46d38db7c9dce058c56beaac
Reviewed-on: https://go-review.googlesource.com/12445
Reviewed-by: Keith Randall <khr@golang.org>
There was already special code to recognize "?" in hidden_structdcl,
which is used for inlined types and variables. This recognizes "?" in
structdcl as well, a case that arises when a struct type appears
within an inlined function body.
Fixes#10219.
Change-Id: Ic5257ae54f817e0d4a189c2294dcd633c9f2101a
Reviewed-on: https://go-review.googlesource.com/12241
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The one in misc/makerelease/makerelease.go is particularly bad and
probably warrants rotating our keys.
I didn't update old weekly notes, and reverted some changes involving
test code for now, since we're late in the Go 1.5 freeze. Otherwise,
the rest are all auto-generated changes, and all manually reviewed.
Change-Id: Ia2753576ab5d64826a167d259f48a2f50508792d
Reviewed-on: https://go-review.googlesource.com/12048
Reviewed-by: Rob Pike <r@golang.org>
This avoids both a write barrier and dynamic initialization
for globals of the form
var x something
var xp = unsafe.Pointer(&x)
Using static initialization avoids emitting a relocation for &x,
which helps cgo.
Fixes#9411.
Change-Id: I0dbf480859cce6ab57ab805d1b8609c45b48f156
Reviewed-on: https://go-review.googlesource.com/11693
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
We can't address more than this on amd64 anyway.
Fixes#9862.
Change-Id: Ifb1abae558e2e1ee2dc953a76995f3f08c60b1df
Reviewed-on: https://go-review.googlesource.com/11715
Reviewed-by: Austin Clements <austin@google.com>
Currently the runtime fails to clear a G's stack barriers in gfput if
the G's stack allocation is _FixedStack bytes. This causes the runtime
to panic if the following sequence of events happens:
1) The runtime installs stack barriers on a G.
2) The G exits by calling runtime.Goexit. Since this does not
necessarily return through the stack barriers installed on the G,
there may still be untriggered stack barriers left on the G's stack
and recorded in g.stkbar.
3) The runtime calls gfput to add the exiting G to the free pool. If
the G's stack allocation is _FixedStack bytes, we fail to clear
g.stkbar.
4) A new G starts and allocates the G that was just added to the free
pool.
5) The new G begins to execute and overwrites the stack slots that had
stack barriers in them.
6) The garbage collector enters mark termination, attempts to remove
stack barriers from the new G, and finds that they've been
overwritten.
Fix this by clearing the stack barriers in gfput in the case where it
reuses the stack.
Fixes#11256.
Change-Id: I377c44258900e6bcc2d4b3451845814a8eeb2bcf
Reviewed-on: https://go-review.googlesource.com/11461
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
In walkdiv, an OMUL node was created and passed to typecheck,
before the op was changed back to OHMUL. In some instances,
the node that came back was an evaluated literal constant that
occurred with a full multiply. The end result was a literal node
with a non-shifted value and an OHMUL op. This change causes code
to be generated for the OHMUL.
Fixes#11358
Fixes#11369
Change-Id: If42a98c6830d07fe065d5ca57717704fb8cfbd33
Reviewed-on: https://go-review.googlesource.com/11400
Reviewed-by: Russ Cox <rsc@golang.org>
When heapBitsSetType repeats a source bitmap with a scalar tail
(typ.ptrdata < typ.size), it lays out the tail upon reaching the end
of the source bitmap by simply increasing the number of bits claimed
to be in the incoming bit buffer. This causes later iterations to read
the appropriate number of zeros out of the bit buffer before starting
on the next repeat of the source bitmap.
Currently, however, later iterations of the loop continue to read bits
from the source bitmap *regardless of the number of bits currently in
the bit buffer*. The bit buffer can only hold 32 or 64 bits, so if the
scalar tail is large and the padding bits exceed the size of the bit
buffer, the read from the source bitmap on the next iteration will
shift the incoming bits into oblivion when it attempts to put them in
the bit buffer. When the buffer does eventually shift down to where
these bits were supposed to be, it will contain zeros. As a result,
words that should be marked as pointers on later repetitions are
marked as scalars, so the garbage collector does not trace them. If
this is the only reference to an object, it will be incorrectly freed.
Fix this by adding logic to drain the bit buffer down if it is large
instead of reading more bits from the source bitmap.
Fixes#11286.
Change-Id: I964432c4b9f1cec334fc8c3da0ff16460203feb6
Reviewed-on: https://go-review.googlesource.com/11360
Reviewed-by: Russ Cox <rsc@golang.org>
When a method is called using the Type.Method(receiver, args...) syntax
without the receiver, or enough arguments, provide the more helpful
error message "not enough arguments in call to method expression
Type.Method" instead of the old message "not enough arguments in call
to Type.Method".
Fixes#8385
Change-Id: Id5037eb1ee5fa93687d4a6557b4a8233b29e9df2
Reviewed-on: https://go-review.googlesource.com/2193
Reviewed-by: Russ Cox <rsc@golang.org>
Also modified test/run.go to ignore messages prefixed <autogenerated>
because those cannot be described with "// ERROR ...", and backed out
patch from issue #9537 because it is no longer necessary. The reasons
described in the 9537 discussion for why escape analysis cannot run
late no longer hold, happily.
Fixes#11053.
Change-Id: Icb14eccdf2e8cde3d0f8fb8a216b765400a96385
Reviewed-on: https://go-review.googlesource.com/11088
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Chase <drchase@google.com>
Bool codegen was generating a temp for function calls
and other complex expressions, but was not using it.
This was a refactoring bug introduced by CL 7853.
The cmp code used to do (in short):
l, r := &n1, &n2
It was changed to:
l, r := nl, nr
But the requisite assignments:
nl, nr = &n1, &n2
were only introduced on one of two code paths.
Fixes#10654.
Change-Id: Ie8de0b3a333842a048d4308e02911bb10c6915ce
Reviewed-on: https://go-review.googlesource.com/10844
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
People invoking the linker directly already have to change their scripts
to use the new "go tool link", so this is a good time to make the -X flag
behave like all other Go flags and take just a single argument.
The old syntax will continue to be accepted (it is rewritten into the new
syntax before flag parsing). Maybe some day we will be able to retire it.
Even if we never retire the old syntax, having the new syntax at least
makes the rewriting much less of a kludge.
Change-Id: I91e8df94f4c22b2186e81d7f1016b8767d777eac
Reviewed-on: https://go-review.googlesource.com/10310
Reviewed-by: Rob Pike <r@golang.org>
Also modifies 'dist test' to use that sharding, and removes some old
temporary stuff from dist test which are no longer required.
'dist test' now also supports running a list of tests given in
arguments, mutually exclusive with the existing -run=REGEXP flag. The
hacky fast paths for avoiding the 1 second "go list" latency are now
removed and only apply to the case where partial tests are run via
args, instead of regex. The build coordinator will use both styles
for a while. (The statically-sharded ARM builders on scaleway will
continue to use regexps, but the dynamically-sharded builders on GCE
will use the list of tests.)
Updates #10029
Change-Id: I557800a54dfa6f3b5100ef4c26fe397ba5189813
Reviewed-on: https://go-review.googlesource.com/10688
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Memory usage has been reduced.
The tests are still slow,
but that is issue #10571.
/usr/bin/time shows significant variation
in the peak memory usage compiling with tip.
This is unsurprising, given GC.
Using Go 1.4.2, memory is stable at 410mb.
Using tip at d2ee09298,
memory ranges from 470mb (+15%) to 534mb (+30%),
with a mean of 504mb (+23%), with n=50.
Fixes#9933.
Change-Id: Id31f3ae086ec324abf70e8f1a8044c4a0c27e274
Reviewed-on: https://go-review.googlesource.com/10211
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Use pkgimport == nil (or not) to distinguish between
parsing .go source files where "p" exponent specifier
is not allowed and parsing .a or .o export data where
it is. Use that to control the error when a p-exponent is
seen.
Fixes#9036
Change-Id: I8924f09c91d4945ef3f20e80a6e544008a94a7e4
Reviewed-on: https://go-review.googlesource.com/10450
Reviewed-by: Russ Cox <rsc@golang.org>
This is dead code that was missed
during the 'go tool compile' migration.
Change-Id: Ice2af8a9ef72f8fd5f82225ee261854d93b659f1
Reviewed-on: https://go-review.googlesource.com/10430
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Added a lineno parameter to treecopy and listtreecopy
(ignored if = 0). When nodes are copied the copy is
assigned the non-zero lineno (normally this would be
the destination).
Fixes#8183
Change-Id: Iffb767a745093fb89aa08bf8a7692c2f0122be98
Reviewed-on: https://go-review.googlesource.com/10334
Reviewed-by: Russ Cox <rsc@golang.org>
Before this change, the check for too-large arrays (and other large
types) occurred after escape analysis. If the data moved off stack
and onto the heap contained any pointers, it would therefore escape,
but because the too-large check occurred after escape analysis this
would not be recorded and a stack pointer would leak to the heap
(see the modified escape_array.go for an example).
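A hedged sketch of the pattern (the size and names are made up): the array is too large for the stack, so the pointer stored into it must also be treated as escaping.
package main
func grow() {
	x := 0
	var a [1 << 21]*int // too large to live on the stack, so it is moved to the heap
	a[0] = &x           // &x is then reachable from the heap and must escape as well
	_ = a
}
func main() {
	grow()
}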
Some of these appear to remain, in calls to typecheck from within walk.
Also corrected a few comments in escape_array.go about "BAD"
analysis that is now done correctly.
Enhanced to move additional EscNone-but-large-so-heap checks into esc.c.
Change-Id: I770c111baff28a9ed5f8beb601cf09dacc561b83
Reviewed-on: https://go-review.googlesource.com/10268
Reviewed-by: Russ Cox <rsc@golang.org>
Indirect function and method calls should leak everything,
but they didn't.
This fix had no particular effect on the cost of running the
compiler on html/template/*.go and added a single new "escape"
to the standard library:
syscall/syscall_unix.go:85: &b[0] escapes to heap
in
if errno := m.munmap(uintptr(unsafe.Pointer(&b[0])),
uintptr(len(b))); errno != nil {
Added specific escape testing to escape_calls.go
(and verified that it fails without this patch)
I also did a little code cleanup around the changes in esc.c.
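A hedged sketch (names made up) of why indirect calls must leak their arguments:
package main
var callback func(*int)
func run() {
	x := 0
	callback(&x) // indirect call: escape analysis must assume &x leaks, so x is heap-allocated
}
func main() {
	callback = func(p *int) {}
	run()
}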
Fixes#10925
Change-Id: I9984b701621ad4c49caed35b01e359295c210033
Reviewed-on: https://go-review.googlesource.com/10295
Reviewed-by: Russ Cox <rsc@golang.org>
This CL removes the remaining visible uses of the "architecture letter" concept.
(They are no longer in tool names nor in source directory names.)
Because the architecture letter concept is now gone, delete GOCHAR
from "go env" output, and change go/build.ArchChar to return an
error always.
The architecture letter is still used in the compiler and linker sources
as a clumsy architecture enumeration, but that use is not visible to
Go users and can be cleaned up separately.
Change-Id: I4d97a38f372003fb610c9c5241bea440d9dbeb8d
Reviewed-on: https://go-review.googlesource.com/10289
Reviewed-by: Rob Pike <r@golang.org>
This CL fixes the build to use the newly created go tool compile
and go tool link in place of go tool 5g, go tool 5l, and so on.
See golang-dev thread titled "go tool compile, etc" for background.
Although it was not a primary motivation, this conversion does
reduce the wall clock time and cpu time required for make.bash
by about 10%.
Change-Id: I79cbbdb676cab029db8aeefb99a53178ff55f98d
Reviewed-on: https://go-review.googlesource.com/10288
Reviewed-by: Rob Pike <r@golang.org>
Modified esc.go to allow slice literals (before append)
to be non-escaping. Modified tests to account for changes
in escape behavior and to also test the two cases that
were previously not tested.
Also minor cleanups to debug-printing within esc.go
Allocation stats for running compiler
( cd src/html/template;
for i in {1..5} ; do
go tool 6g -memprofile=testzz.${i}.prof -memprofilerate=1 *.go ;
go tool pprof -alloc_objects -text testzz.${i}.prof ;
done ; )
before about 86k allocations
after about 83k allocations
Fixes#8972
Change-Id: Ib61dd70dc74adb40d6f6fdda6eaa4bf7d83481de
Reviewed-on: https://go-review.googlesource.com/10118
Reviewed-by: Russ Cox <rsc@golang.org>
This reverts commit 5726af54eb.
It broke all the builds.
Change-Id: I4b1dde86f9433717d303c1dabd6aa1a2bf97fab2
Reviewed-on: https://go-review.googlesource.com/10143
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Set overflowing integer constants to 1 rather than 0 to avoid
spurious div-zero errors in subsequent constant expressions.
Also: Exclude new test case from go/types test since it's
running too long (go/types doesn't have an upper constant
size limit at the moment).
Fixes#7746.
Change-Id: I3768488ad9909a3cf995247b81ee78a8eb5a1e41
Reviewed-on: https://go-review.googlesource.com/9165
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The code generated for x = append(x, v) is roughly:
t := x
if len(t)+1 > cap(t) {
t = grow(t)
}
t[len(t)] = v
len(t)++
x = t
We used to generate this code as Go pseudocode during walk.
Generate it instead as actual instructions during gen.
Doing so lets us apply a few optimizations. The most important
is that when, as in the above example, the source slice and the
destination slice are the same, the code can instead do:
t := x
if len(t)+1 > cap(t) {
t = grow(t)
x = {base(t), len(t)+1, cap(t)}
} else {
len(x)++
}
t[len(t)] = v
That is, in the fast path that does not reallocate the array,
only the updated length needs to be written back to x,
not the array pointer and not the capacity. This is more like
what you'd write by hand in C. It's faster in general, since
the fast path elides two of the three stores, but it's especially
faster when the form of x is such that the base pointer write
would turn into a write barrier. No write, no barrier.
name old mean new mean delta
BinaryTree17 5.68s × (0.97,1.04) 5.81s × (0.98,1.03) +2.35% (p=0.023)
Fannkuch11 4.41s × (0.98,1.03) 4.35s × (1.00,1.00) ~ (p=0.090)
FmtFprintfEmpty 92.7ns × (0.91,1.16) 86.0ns × (0.94,1.11) -7.31% (p=0.038)
FmtFprintfString 281ns × (0.96,1.08) 276ns × (0.98,1.04) ~ (p=0.219)
FmtFprintfInt 288ns × (0.97,1.06) 274ns × (0.98,1.06) -4.94% (p=0.002)
FmtFprintfIntInt 493ns × (0.97,1.04) 506ns × (0.99,1.01) +2.65% (p=0.009)
FmtFprintfPrefixedInt 423ns × (0.97,1.04) 391ns × (0.99,1.01) -7.52% (p=0.000)
FmtFprintfFloat 598ns × (0.99,1.01) 566ns × (0.99,1.01) -5.27% (p=0.000)
FmtManyArgs 1.89µs × (0.98,1.05) 1.91µs × (0.99,1.01) ~ (p=0.231)
GobDecode 14.8ms × (0.98,1.03) 15.3ms × (0.99,1.02) +3.01% (p=0.000)
GobEncode 12.3ms × (0.98,1.01) 11.5ms × (0.97,1.03) -5.93% (p=0.000)
Gzip 656ms × (0.99,1.05) 645ms × (0.99,1.01) ~ (p=0.055)
Gunzip 142ms × (1.00,1.00) 142ms × (1.00,1.00) -0.32% (p=0.034)
HTTPClientServer 91.2µs × (0.97,1.04) 90.5µs × (0.97,1.04) ~ (p=0.468)
JSONEncode 32.6ms × (0.97,1.08) 32.0ms × (0.98,1.03) ~ (p=0.190)
JSONDecode 114ms × (0.97,1.05) 114ms × (0.99,1.01) ~ (p=0.887)
Mandelbrot200 6.11ms × (0.98,1.04) 6.04ms × (1.00,1.01) ~ (p=0.167)
GoParse 6.66ms × (0.97,1.04) 6.47ms × (0.97,1.05) -2.81% (p=0.014)
RegexpMatchEasy0_32 159ns × (0.99,1.00) 171ns × (0.93,1.07) +7.19% (p=0.002)
RegexpMatchEasy0_1K 538ns × (1.00,1.01) 550ns × (0.98,1.01) +2.30% (p=0.000)
RegexpMatchEasy1_32 138ns × (1.00,1.00) 135ns × (0.99,1.02) -1.60% (p=0.000)
RegexpMatchEasy1_1K 869ns × (0.99,1.01) 879ns × (1.00,1.01) +1.08% (p=0.000)
RegexpMatchMedium_32 252ns × (0.99,1.01) 243ns × (1.00,1.00) -3.71% (p=0.000)
RegexpMatchMedium_1K 72.7µs × (1.00,1.00) 70.3µs × (1.00,1.00) -3.34% (p=0.000)
RegexpMatchHard_32 3.85µs × (1.00,1.00) 3.82µs × (1.00,1.01) -0.81% (p=0.000)
RegexpMatchHard_1K 118µs × (1.00,1.00) 117µs × (1.00,1.00) -0.56% (p=0.000)
Revcomp 920ms × (0.97,1.07) 917ms × (0.97,1.04) ~ (p=0.808)
Template 129ms × (0.98,1.03) 114ms × (0.99,1.01) -12.06% (p=0.000)
TimeParse 619ns × (0.99,1.01) 622ns × (0.99,1.01) ~ (p=0.062)
TimeFormat 661ns × (0.98,1.04) 665ns × (0.99,1.01) ~ (p=0.524)
See next CL for combination with a similar optimization for slice.
The benchmarks that are slower in this CL are still faster overall
with the combination of the two.
Change-Id: I2a7421658091b2488c64741b4db15ab6c3b4cb7e
Reviewed-on: https://go-review.googlesource.com/9812
Reviewed-by: David Chase <drchase@google.com>
Today's earlier fix can stay, but it's a band-aid over the real problem,
which is that bad code was slipping through the type checker
into the back end (and luckily causing a type error there).
I discovered this because my new append does not use the same
temporaries and failed the test as written.
Fixes#9521.
Change-Id: I7e33e2ea15743406e15c6f3fdf73e1edecda69bd
Reviewed-on: https://go-review.googlesource.com/9921
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Instead of errors like:
./blank2.go:15: cannot use ~b1 (type []int) as type int in assignment
we now have:
./blank2.go:15: cannot use _ (type []int) as type int in assignment
Less confusing for users.
Fixes#9521
Change-Id: Ieab9859040e8e0df95deeaee7eeb408d3be61c0f
Reviewed-on: https://go-review.googlesource.com/9902
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Try to provide hints for common areas, either *interface
where interface would have been better, and note incorrect
capitalization (but don't be more ambitious than that, at
least not today).
Added code and test for cases
ptrInterface.ExistingMethod
ptrInterface.unexportedMethod
ptrInterface.MissingMethod
ptrInterface.withwRongcASEdMethod
interface.withwRongcASEdMethod
ptrStruct.withwRongcASEdMethod
struct.withwRongcASEdMethod
also included tests for related errors to check for
unintentional changes and consistent wording.
Somewhat simplified from previous versions to avoid second-
guessing user errors, yet also biased to point out most-likely
root cause.
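A hedged sketch of the first case (made-up names; it intentionally does not compile):
package main
type Reader interface {
	Read(string) string
}
func use(p *Reader) string {
	return p.Read("x") // error: Read is undefined; the hint points out that *Reader is a pointer to an interface
}
func main() {}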
Fixes#10700
Change-Id: I16693e93cc8d8ca195e7742a222d640c262105b4
Reviewed-on: https://go-review.googlesource.com/9731
Reviewed-by: Russ Cox <rsc@golang.org>
Noopt builds get a larger stack guard. This test must take that into account.
Change-Id: I1b5cbafdbbfee8c369ae1bebd0b900524ebf0d7d
Reviewed-on: https://go-review.googlesource.com/9610
Reviewed-by: Russ Cox <rsc@golang.org>
This includes the following information in the per-function summary:
outK = paramJ encoded in outK bits for paramJ
outK = *paramJ encoded in outK bits for paramJ
heap = paramJ EscHeap
heap = *paramJ EscContentEscapes
Note that (currently) if the address of a parameter is taken and
returned, necessarily a heap allocation occurred to contain that
reference, and the heap can never refer to stack, therefore the
parameter and everything downstream from it escapes to the heap.
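A hedged sketch of that situation (made-up names):
package main
func addrOf(p int) *int {
	// Taking the parameter's address and returning it forces a heap
	// allocation; the summary records the parameter as escaping to the heap.
	return &p
}
func main() {
	_ = addrOf(1)
}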
The per-function summary information now has a tuneable number of bits
(2 is probably noticeably better than 1, 3 is likely overkill, but it
is now easy to check and the -m debugging output includes information
that allows you to figure out if more would be better.)
A new test was added to check pointer flow through struct-typed and
*struct-typed parameters and returns; some of these are sensitive to
the number of summary bits, and ought to yield better results with a
more competent escape analysis algorithm. Another new test checks
(some) correctness with array parameters, results, and operations.
The old analysis inferred a piece of plan9 runtime was non-escaping by
counteracting overconservative analysis with buggy analysis; with the
bug fixed, the result was too conservative (and it's not easy to fix
in this framework) so the source code was tweaked to get the desired
result. A test was added against the discovered bug.
The escape analysis was further improved splitting the "level" into
3 parts, one tracking the conventional "level" and the other two
computing the highest-level-suffix-from-copy, which is used to
generally model the cancelling effect of indirection applied to
address-of.
With the improved escape analysis enabled, it was necessary to
modify one of the runtime tests because it now attempts to allocate
too much on the (small, fixed-size) G0 (system) stack and this
failed the test.
Compiling src/std after touching src/runtime/*.go with -m logging
turned on shows 420 fewer heap allocation sites (10538 vs 10968).
Profiling allocations in src/html/template with
for i in {1..5} ;
do go tool 6g -memprofile=mastx.${i}.prof -memprofilerate=1 *.go;
go tool pprof -alloc_objects -text mastx.${i}.prof ;
done
showed a 15% reduction in allocations performed by the compiler.
Update #3753
Update #4720
Fixes#10466
Change-Id: I0fd97d5f5ac527b45f49e2218d158a6e89951432
Reviewed-on: https://go-review.googlesource.com/8202
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Clean up after CL 5310.
Change-Id: Ib870e7b9d26eb118eefdaa3e76dcec4a4d459584
Reviewed-on: https://go-review.googlesource.com/9398
Reviewed-by: Ian Lance Taylor <iant@golang.org>
With this fix,
GOMAXPROCS=8 ./all.bash
passes, at least on my machine.
Fixes#10216.
Change-Id: Ib5991950892a1399ec81aced0a52b435e6f83fdf
Reviewed-on: https://go-review.googlesource.com/9392
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
These were fixed a little while ago, but overlooked when reenabling
disabled tests.
Update #9968.
Change-Id: I301ef587e580c517a170ad08ff897118b58cedec
Reviewed-on: https://go-review.googlesource.com/9347
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
We can expand the test cases as we discover problems.
This is some basic tests plus all the things I got wrong
in some recent work.
Change-Id: Id875fcfaf74eb087ae42b441fe47a34c5b8ccb39
Reviewed-on: https://go-review.googlesource.com/9158
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
In https://golang.org/cl/7797 I attempted to use myimportpath to set the value
of the go.importpath.$foo. symbol for the module being compiled, but I messed
it up and only set the name (which the linker rewrites anyway). This led to
the importpath for the module being compiled being "". This was hard to notice,
because all modules that import another define the importpath for their
imported modules correctly -- but main is not imported, and this meant that the
reflect module saw all fields of all types defined in the main module as
exported.
The fix is to do what I meant to do the first time, add a test and change the
go tool to compile main packages with -p main and not -p
command-line-arguments.
Fixes#10332
Change-Id: I5fc6e9b1dc2b26f058641e382f9a56a526eca291
Reviewed-on: https://go-review.googlesource.com/8481
Reviewed-by: Russ Cox <rsc@golang.org>
The flag updates error annotations in test files from actual compiler output.
This is useful when doing compiler changes that add/remove/change lots of errors,
or when adding lots of new tests.
Also I noticed at least 2 cases where annotations were sub-optimal:
1. The annotation was "leaking param p" when the actual error is
"leaking param p to result ~r1".
2. The annotation was "leaking param m" when the actual errors
are "leaking param m" and "leaking param mv1".
For now it works only for errorcheck mode.
Also, apply the update to escape and liveness tests.
Some files have gccgo-specific errors of the form "gc error|gccgo error",
so it is risky to run the update on all files. A gccgo-specific error
does not necessarily contain '|'; it can be just truncated.
Change-Id: Iaaae767f859dcb8321a8cb4970b2b70969e8a345
Reviewed-on: https://go-review.googlesource.com/5310
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This CL extends cmd/yacc to expose a yyErrorVerbose variable that
changes the error messages from just "syntax error" to "syntax error:
unexpected ${tokname}".
It also moves the yyToknames table generation to after rules have been
processed so that entries can be generated for tokens that aren't
mentioned in the preamble (e.g., '.' in the case of go.y).
Lastly, it restores gc's old code for applying yytfix to yyToknames,
except that substituting "LLITERAL" with litbuf happens in Yyerror.
Fixes#9968.
Change-Id: Icec188d11fdabc1dae31b8a471c35b5c7f6deec7
Reviewed-on: https://go-review.googlesource.com/8432
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
These registers are not available for programs to use. Prior to this
change, the compiler would crash attempting to use ZR as a general
purpose register. Other programs would compile but on execution would
overwrite the G register and cause havoc.
Fixes linux/arm64 build.
Fixes#10304
Fixes#10320
Change-Id: I5cf51d3b77cfe3db7dd6377324950cafb02f8d8b
Reviewed-on: https://go-review.googlesource.com/8456
Reviewed-by: Minux Ma <minux@golang.org>
The original implementation used 16 int "words" but only 29 bits per word
for a total of 16*29 = 464 bits, with a space consumption of 16*64 = 1024
bits on a 64 bit machine. Switching to 512 bits increases precision while
still using (in the worst case) half the amount of memory per mp value on
a 64 bit machine.
Also: Decreased the permitted number of least-significant mantissa bits which
may be incorrect when deciding whether a precise floating-point constant is
an integer, from 29 to 16 bits.
Change-Id: Iee9287056f0e9aa4f06ceac0724ff4674f710c53
Reviewed-on: https://go-review.googlesource.com/8429
Reviewed-by: Russ Cox <rsc@golang.org>
All multi-precision arithmetic is now based on math/big.
- passes all.bash
- added test cases for fixed bugs
Fixes#7740.
Fixes#6866.
Change-Id: I67268b91766970ced3b928260053ccdce8753d58
Reviewed-on: https://go-review.googlesource.com/7912
Reviewed-by: Russ Cox <rsc@golang.org>
This restores go.errors from before 3af0d79 along with a fixed up
version of the bisonerrors AWK script, translated to Go.
However, this means Yyerror needs access to the yacc parser's state,
which is currently private. To workaround that, add a "state"
accessor method like the Lookahead method added in c7fa3c6.
Update issue #9968.
Change-Id: Ib868789e92fdb7d135442120a392457923e50121
Reviewed-on: https://go-review.googlesource.com/7270
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
On arm64, CMP $foo, R is encoded as from=$foo, reg=R, not as from=$foo,
to=R. The progtable entry for ACMP incorrectly described the latter
form. Because of this, the registerizer was not accounting the registers
used in CMP instructions and was incorrectly re-assigning those registers.
This was an old problem, but it only became apparent after b115c35
(cmd/internal/gc: move cgen, regalloc, et al to portable code). Previous
to this commit, the compiler used a slightly larger register set for the
temps than it used for register variables. Since it had plenty of registers
dedicated to temps, the registers used in CMP instruction never clashed
with registers assigned to register variables.
Fixes#10253
Change-Id: Iedf4bd882bd59440dff310ac0f81e0f53d80d7ed
Reviewed-on: https://go-review.googlesource.com/8387
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Extend escape analysis to convT2E and convT2I. If the interface value
does not escape, supply the runtime with a stack buffer for the object copy.
This is a straight port from .c to .go of Dmitry's patch
Change-Id: Ic315dd50d144d94dd3324227099c116be5ca70b6
Reviewed-on: https://go-review.googlesource.com/8201
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The false positives (var incorrectly escapes) are marked with BAD.
Change-Id: If64fabb6ea96de44a1177d9ab12e2ccc579fe0c4
Reviewed-on: https://go-review.googlesource.com/5294
Reviewed-by: Keith Randall <khr@golang.org>
10 false positives (var incorrectly escapes to heap) are marked with BAD.
Change-Id: I773b13a18ff55aaa499a2a28a979118422cc5322
Reviewed-on: https://go-review.googlesource.com/5293
Reviewed-by: Keith Randall <khr@golang.org>
Fix build after http://golang.org/cl/5297
The compiler was changed to not print implicit map capacity in error messages.
Change-Id: I852f668680c3c69c5eecc7964e46202a97014d6a
Reviewed-on: https://go-review.googlesource.com/8212
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The false positive (var incorrectly escapes to heap) is marked with BAD.
Change-Id: I11877fa8e976094b31a221abd88ae32d351c85ee
Reviewed-on: https://go-review.googlesource.com/5292
Reviewed-by: Keith Randall <khr@golang.org>
These can be implemented with just a compare and a move instruction.
Do so, avoiding the overhead of a call into the runtime.
These assertions are a significant cost in Go code that uses interface{}
as a safe alternative to C's void* (or unsafe.Pointer), such as the
current version of the Go compiler.
*T here includes pointer to T but also any Go type represented as
a single pointer (chan, func, map). It does not include [1]*T or struct{*int}.
That requires more work in other parts of the compiler; there is a TODO.
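A hedged example (made-up names) of an assertion that now needs no runtime call:
package main
import "fmt"
func main() {
	var i interface{} = new(int)
	p, ok := i.(*int) // type word compare plus a conditional move; no call into the runtime
	fmt.Println(p != nil, ok)
}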
Change-Id: I7ff681c20d2c3eb6ad11dd7b3a37b1f3dda23965
Reviewed-on: https://go-review.googlesource.com/7862
Reviewed-by: Rob Pike <r@golang.org>
Fix recover4.go to work on 64kb systems.
Change-Id: I211cb048de1268a8bbac77c6f3a1e0b8c8277594
Reviewed-on: https://go-review.googlesource.com/7673
Reviewed-by: Minux Ma <minux@golang.org>
Updates #10180
Temporarily disable this test on ppc64 systems as all our builders use 64k page size.
We need a portable way to get the page size of the host so we can correctly size the mmap hole.
Change-Id: Ibd36ebe2f54cf75a44667e2070c385f0daaca481
Reviewed-on: https://go-review.googlesource.com/7652
Reviewed-by: Andrew Gerrand <adg@golang.org>
This came up in private mail.
It works today and I want to make sure it stays working.
Change-Id: I13ebdc2dfadb3c72d7f179be89883137320c05d0
Reviewed-on: https://go-review.googlesource.com/7390
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Rob Pike <r@golang.org>
Fixes#10135.
Change-Id: Ic4c5ab15bcb7b9c3fcc685a788d3b59c60c26e1e
Signed-off-by: Shenghou Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/7400
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change-Id: Ia5115b15a79e1b2b53036646f1ed4b08225b220f
Reviewed-on: https://go-review.googlesource.com/7051
Run-TryBot: Chris Manghane <cmang@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Inlining refuses to inline bodies containing an actual function call, so that
if that call or a child uses runtime.Caller it cannot observe
the inlining.
However, inlining was also refusing to inline bodies that contained
function calls that were themselves inlined away. For example:
func f() int {
return f1()
}
func f1() int {
return f2()
}
func f2() int {
return 2
}
The f2 call in f1 would be inlined, but the f1 call in f would not,
because f1's call to f2 blocked the inlining, despite itself eventually
being inlined away.
Account properly for this kind of transitive inlining and enable.
Also bump the inlining budget a bit, so that the runtime's
heapBits.next is inlined.
This reduces the time for '6g *.go' in html/template by around 12% (!).
(For what it's worth, closing Chrome reduces the time by about 17%.)
Change-Id: If1aa673bf3e583082dcfb5f223e67355c984bfc1
Reviewed-on: https://go-review.googlesource.com/5952
Reviewed-by: Austin Clements <austin@google.com>
These don't work with the new compiler, because the
new compiler doesn't have the custom syntax errors
that I built for the old compiler. It will, just not yet.
(Issue #9968.)
Change-Id: I658f7dab2c7f855340a501f9ae4479c097b28cd3
Reviewed-on: https://go-review.googlesource.com/5632
Reviewed-by: Rob Pike <r@golang.org>
They use too much memory in the current Go compiler draft.
This should fix some builders.
Reenabling is #9933.
Change-Id: Ib5ef348b2c55d2012ffed765f2a6df99dec171f4
Reviewed-on: https://go-review.googlesource.com/5302
Reviewed-by: Russ Cox <rsc@golang.org>
Per the comment at top, this test is about whether the GC runs during
init, but it was testing more than that, and testing how much the GC
collected in a certain amount of time.
Instead, loosen this test to just see whether it ran at all and not
how well it did.
Fixes#9848
Change-Id: I31da7dd769140d7b49aa6c149a543fae6076aa5e
Reviewed-on: https://go-review.googlesource.com/4820
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we always create context objects for closures that capture variables.
However, it is completely unnecessary for direct calls of closures
(whether it is func()(), defer func()() or go func()()).
This change transforms any OCALLFUNC(OCLOSURE) into a normal function call.
Closed variables become function arguments.
This transformation is especially beneficial for go func(),
because we do not need to allocate context object on heap.
But it makes direct closure calls a bit faster as well (see BenchmarkClosureCall).
At the implementation level this required introducing yet another compiler pass.
However, the pass iterates only over xtop, so it should not be an issue.
Transformation consists of two parts: closure transformation and call site
transformation. We can't run these parts on different sides of escape analysis,
because the tree state is inconsistent. We can't do both parts during typecheck,
because we don't yet know how to capture variables and don't have the call site.
We can't do both parts during walk of OCALLFUNC, because we can walk
OCLOSURE body earlier.
So now capturevars pass only decides how to capture variables
(this info is required for escape analysis). New transformclosure
pass, that runs just before order/walk, does all transformations
of a closure. And later walk of OCALLFUNC(OCLOSURE) transforms call site.
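A hedged sketch (made-up names) of a directly called closure that benefits:
package main
func count() int {
	n := 0
	// A directly called closure: instead of building a heap-allocated
	// context holding n, the captured variable can be passed to the
	// generated function as an ordinary (possibly pointer) argument.
	func() { n++ }()
	return n
}
func main() {
	println(count())
}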
benchmark old ns/op new ns/op delta
BenchmarkClosureCall 4.89 3.09 -36.81%
BenchmarkCreateGoroutinesCapture 1634 1294 -20.81%
benchmark old allocs new allocs delta
BenchmarkCreateGoroutinesCapture 6 2 -66.67%
benchmark old bytes new bytes delta
BenchmarkCreateGoroutinesCapture 176 48 -72.73%
Change-Id: Ic85e1706e18c3235cc45b3c0c031a9c1cdb7a40e
Reviewed-on: https://go-review.googlesource.com/4050
Reviewed-by: Russ Cox <rsc@golang.org>
Consider an interface value i of type I and concrete value c of type C.
Prior to this CL, i==c was evaluated as
I(c) == i
Evaluating I(c) can allocate.
This CL changes the evaluation of i==c to
x, ok := i.(C); ok && x == c
The new generated code is shorter and does not allocate directly.
If C is small, as it is in every instance in the stdlib,
the new code also uses less stack space
and makes one runtime call instead of two.
If C is very large, the original implementation is used.
The cutoff for "very large" is 1<<16,
following the stack vs heap cutoff used elsewhere.
This kind of comparison occurs in 38 places in the stdlib,
mostly in the net and os packages.
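A hedged example (made-up names) of a comparison that is rewritten this way:
package main
import "fmt"
func isAnswer(i interface{}) bool {
	// Compiled roughly as: x, ok := i.(int); ok && x == 42
	// so no interface value needs to be allocated for the constant.
	return i == 42
}
func main() {
	fmt.Println(isAnswer(42), isAnswer("no"))
}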
benchmark old ns/op new ns/op delta
BenchmarkEqEfaceConcrete 29.5 7.92 -73.15%
BenchmarkEqIfaceConcrete 32.1 7.90 -75.39%
BenchmarkNeEfaceConcrete 29.9 7.90 -73.58%
BenchmarkNeIfaceConcrete 35.9 7.90 -77.99%
Fixes#9370.
Change-Id: I7c4555950bcd6406ee5c613be1f2128da2c9a2b7
Reviewed-on: https://go-review.googlesource.com/2096
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Support the following conversions in escape analysis:
[]rune("foo")
[]byte("foo")
string([]rune{})
If the result does not escape, allocate temp buffer on stack
and pass it to runtime functions.
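A sketch of a call where the conversion result does not escape (hypothetical function):
func countRunes() int {
	r := []rune("héllo") // if r does not escape, the conversion can use a stack temp buffer
	return len(r)
}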
Change-Id: I1d075907eab8b0109ad7ad1878104b02b3d5c690
Reviewed-on: https://go-review.googlesource.com/3590
Reviewed-by: Russ Cox <rsc@golang.org>
Ordinary switch statements are rewritten
into a sequence of if statements.
Statically dead cases were not being eliminated
because the rewrite introduced a temporary,
which hid the fact that the case was a constant.
Stop doing that.
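For example, a switch of this shape (a hypothetical sketch) can now drop its
statically dead case:
const wordBits = 64

func shift(x uint) uint {
	switch wordBits {
	case 32: // statically dead when wordBits is 64; now eliminated
		return x << 1
	case 64:
		return x << 2
	}
	return x
}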
This eliminates dead code in the standard library at:
runtime/cgocall.go:219
runtime/cgocall.go:269
debug/gosym/pclntab.go:175
debug/macho/file.go:208
math/big/nat.go:635
math/big/nat.go:850
math/big/nat.go:1058
cmd/pprof/internal/commands/commands.go:86
net/sock_bsd.go:19
cmd/go/build.go:2657
cmd/go/env.go:90
Fixes#9608.
Change-Id: Ic23a05dfbb1ad91d5f62a6506b35a13e51b33e38
Reviewed-on: https://go-review.googlesource.com/3980
Reviewed-by: Keith Randall <khr@golang.org>
Only documentation / comment changes. Update references to
point to golang.org permalinks or go.googlesource.com/go.
References in historical release notes under doc are left as is.
Change-Id: Icfc14e4998723e2c2d48f9877a91c5abef6794ea
Reviewed-on: https://go-review.googlesource.com/4060
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The main issue is that the misc/cgo/{stdio,life} tests are silently
getting skipped when invoked from run.bash.
run.go should ignore any build tags after the first blank line in the
source file. It already checks for test actions only up to the first
blank line. Build tags must be specified in the same block.
See http://golang.org/cl/3675 for background.
Change-Id: Id8abf000119e3335f7250d8ef34aac7811fc9dff
Reviewed-on: https://go-review.googlesource.com/3812
Reviewed-by: Minux Ma <minux@golang.org>
issue9355 generated a file a.[568] in test/ directory and left it there.
For tests like these, it is best to chdir to a test specific directory
before generating any temporary files, since the tests are running
in parallel and might otherwise race with each other for the same files.
Change-Id: I58d96256d4d8ee3fda70d81077f19006064a7425
Reviewed-on: https://go-review.googlesource.com/3813
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Kindly detected by race builders by failing TestRaceRange.
ORANGE typecheck does not increment decldepth around body.
Change-Id: I0df5f310cb3370a904c94d9647a9cf0f15729075
Reviewed-on: https://go-review.googlesource.com/3507
Reviewed-by: Russ Cox <rsc@golang.org>
Type switch variables were not typechecked.
Previously this led only to a minor consequence:
switch unsafe.Sizeof = x.(type) {
generated an inconsistent error message.
But the capturing-by-value functionality now requires typechecking of all ONAMEs.
Fixes#9731
Change-Id: If037883cba53d85028fb97b1328696091b3b7ddd
Reviewed-on: https://go-review.googlesource.com/3600
Reviewed-by: Russ Cox <rsc@golang.org>
This adds a "framepointer" GOEXPERIMENT that that makes the amd64
toolchain maintain base pointer chains in the same way that gcc
-fno-omit-frame-pointer does. Go doesn't use these saved base
pointers, but this does enable external tools like Linux perf and
VTune to unwind Go stacks when collecting system-wide profiles.
This requires support in the compilers to not clobber BP, support in
liblink for generating the BP-saving function prologue and unwinding
epilogue, and support in the runtime to save BPs across preemption, to
skip saved BPs during stack unwinding, and to adjust saved BPs
during stack moving.
As with other GOEXPERIMENTs, everything from the toolchain to the
runtime must be compiled with this experiment enabled. To do this,
run make.bash (or all.bash) with GOEXPERIMENT=framepointer.
Change-Id: I4024853beefb9539949e5ca381adfdd9cfada544
Reviewed-on: https://go-review.googlesource.com/2992
Reviewed-by: Russ Cox <rsc@golang.org>
cmd/go doesn't complain (this is an open issue), but go/types does
Change-Id: I2caec1f7aec991a9500d2c3504c29e4ab718c138
Reviewed-on: https://go-review.googlesource.com/3541
Reviewed-by: Alan Donovan <adonovan@google.com>
Language specification says that variables are captured by reference.
And that is what gc compiler does. However, in lots of cases it is
possible to capture variables by value under the hood without
affecting visible behavior of programs. For example, consider
the following typical pattern:
func (o *Obj) requestMany(urls []string) []Result {
	wg := new(sync.WaitGroup)
	wg.Add(len(urls))
	res := make([]Result, len(urls))
	for i := range urls {
		i := i
		go func() {
			res[i] = o.requestOne(urls[i])
			wg.Done()
		}()
	}
	wg.Wait()
	return res
}
Currently o, wg, res, and i are captured by reference, causing 3+len(urls)
allocations (e.g. PPARAM o is promoted to PPARAMREF and moved to the heap).
But all of them can be captured by value without changing behavior.
This change implements a simple strategy for capturing by value:
if a captured variable is not addrtaken and never assigned to,
then it is captured by value (it is effectively const).
This simple strategy turned out to be very effective:
~80% of all captures in std lib are turned into value captures.
The remaining 20% are mostly in defers and non-escaping closures,
that is, they do not cause allocations anyway.
benchmark old allocs new allocs delta
BenchmarkCompressedZipGarbage 153 126 -17.65%
BenchmarkEncodeDigitsSpeed1e4 91 69 -24.18%
BenchmarkEncodeDigitsSpeed1e5 178 129 -27.53%
BenchmarkEncodeDigitsSpeed1e6 1510 1051 -30.40%
BenchmarkEncodeDigitsDefault1e4 100 75 -25.00%
BenchmarkEncodeDigitsDefault1e5 193 139 -27.98%
BenchmarkEncodeDigitsDefault1e6 1420 985 -30.63%
BenchmarkEncodeDigitsCompress1e4 100 75 -25.00%
BenchmarkEncodeDigitsCompress1e5 193 139 -27.98%
BenchmarkEncodeDigitsCompress1e6 1420 985 -30.63%
BenchmarkEncodeTwainSpeed1e4 109 81 -25.69%
BenchmarkEncodeTwainSpeed1e5 211 151 -28.44%
BenchmarkEncodeTwainSpeed1e6 1588 1097 -30.92%
BenchmarkEncodeTwainDefault1e4 103 77 -25.24%
BenchmarkEncodeTwainDefault1e5 199 143 -28.14%
BenchmarkEncodeTwainDefault1e6 1324 917 -30.74%
BenchmarkEncodeTwainCompress1e4 103 77 -25.24%
BenchmarkEncodeTwainCompress1e5 190 137 -27.89%
BenchmarkEncodeTwainCompress1e6 1327 919 -30.75%
BenchmarkConcurrentDBExec 16223 16220 -0.02%
BenchmarkConcurrentStmtQuery 17687 16182 -8.51%
BenchmarkConcurrentStmtExec 5191 5186 -0.10%
BenchmarkConcurrentTxQuery 17665 17661 -0.02%
BenchmarkConcurrentTxExec 15154 15150 -0.03%
BenchmarkConcurrentTxStmtQuery 17661 16157 -8.52%
BenchmarkConcurrentTxStmtExec 3677 3673 -0.11%
BenchmarkConcurrentRandom 14000 13614 -2.76%
BenchmarkManyConcurrentQueries 25 22 -12.00%
BenchmarkDecodeComplex128Slice 318 252 -20.75%
BenchmarkDecodeFloat64Slice 318 252 -20.75%
BenchmarkDecodeInt32Slice 318 252 -20.75%
BenchmarkDecodeStringSlice 2318 2252 -2.85%
BenchmarkDecode 11 8 -27.27%
BenchmarkEncodeGray 64 56 -12.50%
BenchmarkEncodeNRGBOpaque 64 56 -12.50%
BenchmarkEncodeNRGBA 67 58 -13.43%
BenchmarkEncodePaletted 68 60 -11.76%
BenchmarkEncodeRGBOpaque 64 56 -12.50%
BenchmarkGoLookupIP 153 139 -9.15%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
BenchmarkGoLookupIPWithBrokenNameServer 245 226 -7.76%
BenchmarkClientServer 62 59 -4.84%
BenchmarkClientServerParallel4 62 59 -4.84%
BenchmarkClientServerParallel64 62 59 -4.84%
BenchmarkClientServerParallelTLS4 79 76 -3.80%
BenchmarkClientServerParallelTLS64 112 109 -2.68%
BenchmarkCreateGoroutinesCapture 10 6 -40.00%
BenchmarkAfterFunc 1006 1005 -0.10%
Fixes#6632.
Change-Id: I0cd51e4d356331d7f3c5f447669080cd19b0d2ca
Reviewed-on: https://go-review.googlesource.com/3166
Reviewed-by: Russ Cox <rsc@golang.org>
If result of string(i) does not escape,
allocate a [4]byte temp on stack for it.
Change-Id: If31ce9447982929d5b3b963fd0830efae4247c37
Reviewed-on: https://go-review.googlesource.com/3411
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we always allocate string buffers in the heap.
For example, in the following code we allocate a temp string
just for comparison:
if string(byteSlice) == "abc" { ... }
This change extends escape analysis to cover []byte->string
conversions and string concatenation. If the result of the operation
does not escape, the compiler allocates a small buffer
on the stack and passes it to slicebytetostring and concatstrings.
Then the runtime uses the buffer if the result fits into it.
The size of the buffer is 32 bytes. There is no fundamental theory
behind this number, just an observation that on std lib
tests/benchmarks the frequency of string allocation is inversely
proportional to string length, and there is a significant number
of allocations up to length 32.
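A user-level sketch of patterns that can now use the stack buffer (hypothetical code):
func hasKey(m map[string]bool, host, port string) bool {
	return m[host+":"+port] // the concatenated key does not escape: may use the stack buffer
}

func equal(b []byte, s string) bool {
	return string(b) == s // temporary string for comparison: may use the stack buffer
}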
benchmark old allocs new allocs delta
BenchmarkFprintfBytes 2 1 -50.00%
BenchmarkDecodeComplex128Slice 318 316 -0.63%
BenchmarkDecodeFloat64Slice 318 316 -0.63%
BenchmarkDecodeInt32Slice 318 316 -0.63%
BenchmarkDecodeStringSlice 2318 2316 -0.09%
BenchmarkStripTags 11 5 -54.55%
BenchmarkDecodeGray 111 102 -8.11%
BenchmarkDecodeNRGBAGradient 200 188 -6.00%
BenchmarkDecodeNRGBAOpaque 165 152 -7.88%
BenchmarkDecodePaletted 319 309 -3.13%
BenchmarkDecodeRGB 166 157 -5.42%
BenchmarkDecodeInterlacing 279 268 -3.94%
BenchmarkGoLookupIP 153 135 -11.76%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
BenchmarkGoLookupIPWithBrokenNameServer 245 226 -7.76%
BenchmarkClientServerParallel4 62 61 -1.61%
BenchmarkClientServerParallel64 62 61 -1.61%
BenchmarkClientServerParallelTLS4 79 78 -1.27%
BenchmarkClientServerParallelTLS64 112 111 -0.89%
benchmark old ns/op new ns/op delta
BenchmarkFprintfBytes 381 311 -18.37%
BenchmarkStripTags 2615 2351 -10.10%
BenchmarkDecodeNRGBAGradient 3715887 3635096 -2.17%
BenchmarkDecodeNRGBAOpaque 3047645 2928644 -3.90%
BenchmarkGoLookupIP 153 135 -11.76%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
Change-Id: I9ec01da816945c3329d7be3c7794b520418c3f99
Reviewed-on: https://go-review.googlesource.com/3120
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Escape analysis treats everything assigned to OIND/ODOTPTR as escaping.
As a result, b escapes in the following code:
func (b *Buffer) Foo() {
	n, m := ...
	b.buf = b.buf[n:m]
}
This change recognizes such assignments and ignores them.
Update issue #9043.
Update issue #7921.
There are two similar cases in std lib that benefit from this optimization.
First is in archive/zip:
type readBuf []byte
func (b *readBuf) uint32() uint32 {
	v := binary.LittleEndian.Uint32(*b)
	*b = (*b)[4:]
	return v
}
Second is in time:
type data struct {
	p     []byte
	error bool
}
func (d *data) read(n int) []byte {
	if len(d.p) < n {
		d.p = nil
		d.error = true
		return nil
	}
	p := d.p[0:n]
	d.p = d.p[n:]
	return p
}
benchmark old ns/op new ns/op delta
BenchmarkCompressedZipGarbage 32431724 32217851 -0.66%
benchmark old allocs new allocs delta
BenchmarkCompressedZipGarbage 153 143 -6.54%
Change-Id: Ia6cd32744e02e36d6d8c19f402f8451101711626
Reviewed-on: https://go-review.googlesource.com/3162
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently all PTRLIT element initializers escape. There is no reason for that.
This change links STRUCTLIT to PTRLIT; STRUCTLIT element initializers are
already linked to the STRUCTLIT. As a result, PTRLIT element initializers
escape when the PTRLIT itself escapes.
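A sketch of the pattern affected (hypothetical types):
type T struct{ p *int }

func deref(v int) int {
	t := &T{p: &v} // if t does not escape, &v no longer forces v to the heap
	return *t.p
}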
Change-Id: I89ecd8677cbf81addcfd469cd2fd461c0e9bf7dd
Reviewed-on: https://go-review.googlesource.com/3031
Reviewed-by: Russ Cox <rsc@golang.org>
The compiler has a phase ordering problem. Escape analysis runs
before wrapper generation. When a generated wrapper calls a method
defined in a different package, if that call is inlined, there will be
no escape information for the variables defined in the inlined call.
Those variables will be placed on the stack, which fails if they
actually do escape.
There are probably various complex ways to fix this. This is a simple
way to avoid it: when a generated wrapper calls a method defined in a
different package, treat all local variables as escaping.
Fixes#9537.
Change-Id: I530f39346de16ad173371c6c3f69cc189351a4e9
Reviewed-on: https://go-review.googlesource.com/3092
Reviewed-by: Russ Cox <rsc@golang.org>
We were failing ^uint16(0xffff) == 0, as we computed 0xffff0000 instead.
I could only trigger a failure for the above case; the other two tests
^uint16(0xfffe) == 1 and -uint16(0xffff) == 1 didn't seem to fail
previously. Somehow they get MOVHUs inserted for other reasons (used
by CMP instead of TST?). I fixed OMINUS anyway, better safe than
sorry.
Fixes#9604
Change-Id: I4c2d5bdc667742873ac029fdbe3db0cf12893c27
Reviewed-on: https://go-review.googlesource.com/2940
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
The various files are confusingly named and their operation
not easy to see. Add a comment to cmplxdivide.c, one of the few
C files that will endure in the repository, to explain how to build
and run the test.
Change-Id: I1fd5c564a14217e1b9815b09bc24cc43c54c096f
Reviewed-on: https://go-review.googlesource.com/2850
Reviewed-by: Russ Cox <rsc@golang.org>
Recognize loops of the form
for i := range a {
	a[i] = zero
}
in which the evaluation of a is free from side effects.
Replace these loops with calls to memclr.
This occurs in the stdlib in 18 places.
The motivating example is clearing a byte slice:
benchmark old ns/op new ns/op delta
BenchmarkGoMemclr5 3.31 3.26 -1.51%
BenchmarkGoMemclr16 13.7 3.28 -76.06%
BenchmarkGoMemclr64 50.8 4.14 -91.85%
BenchmarkGoMemclr256 157 6.02 -96.17%
Update #5373.
Change-Id: I99d3e6f5f268e8c6499b7e661df46403e5eb83e4
Reviewed-on: https://go-review.googlesource.com/2520
Reviewed-by: Keith Randall <khr@golang.org>
Run the GC in its own background goroutine, making the
caller runnable if resources are available. This is
critical in single-goroutine applications.
Allow goroutines that allocate a lot to help out
the GC and in doing so throttle their own allocation.
Adjust test so that it only detects that a GC is run
during init calls and not whether the GC is memory
efficient. Memory efficiency work will happen later
in 1.5.
Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This test was taking a long time, reduce its zealousness.
Change-Id: Ib824247b84b0039a9ec690f72336bef3738d4c44
Reviewed-on: https://go-review.googlesource.com/2502
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
The compiler converts 'val, ok = m[key]' to
tmp, ok = <runtime call>
val = *tmp
For lookups of the form '_, ok = m[key]',
the second statement is unnecessary.
By not generating it we save a nil check.
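The form that benefits (a minimal sketch):
func contains(m map[string]int, key string) bool {
	_, ok := m[key] // no value load is generated, so the nil check on the returned pointer is saved
	return ok
}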
Change-Id: I21346cc195cb3c62e041af8b18770c0940358695
Reviewed-on: https://go-review.googlesource.com/1975
Reviewed-by: Russ Cox <rsc@golang.org>
If the user provided a key but no value via -ldflag -X,
another linker flag was used as the value.
Placing the user's flags at the end avoids this problem.
It also provides the user the opportunity to
override existing linker flags.
Fixes#8810.
Change-Id: I96f4190713dc9a9c29142e56658446fba7fb6bc8
Reviewed-on: https://go-review.googlesource.com/2242
Reviewed-by: Minux Ma <minux@golang.org>
These tests were enabled as part of change 1774.
They depend on the errchk tool, which is a Perl
script. However, Perl is not available on Plan 9.
Change-Id: I82707aae16013acc9a3800d39b0084588b852b53
Reviewed-on: https://go-review.googlesource.com/2031
Reviewed-by: Minux Ma <minux@golang.org>
Broken by e7173dfdfd
Fix by simply disabling the relevant tests.
* bug248 and bug345 require errchk, but we can't
rely on perl being available.
* bug369 is disabled anyway.
Change-Id: Idf73ebccb066943e3fe17c2f662b37238ec74dfe
Reviewed-on: https://go-review.googlesource.com/2052
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Gccgo creates a struct to hold the arguments for the deferred
function. In this example the struct holds a type defined in a
different package. The bug was that gccgo tried to create an equality
function for this struct, and it implemented that function by calling
the equality function for the type defined in the other package.
Since that type is not exported, the reference to the equality
function failed at link time. Normally it is impossible for a struct
to directly contain a member that is an unexported type from another
package, but in this specific case it was possible. Fixed in gccgo
with https://codereview.appspot.com/183500043 .
Change-Id: I8ec3a33631225b9ac2a4ac060cb4d10b4635e60b
Reviewed-on: https://go-review.googlesource.com/1690
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
* bug248, bug345, bug369, and bug429 were ported from bash commands to run scripts. bug369 remains disabled.
* bug395 is a test for issue 1909, which is still open. It is marked as skip now and will be usable with compile with run.go when issue 1909 is fixed.
Fixes#4139
Updates #1909
Change-Id: Ibb5fbfb5cf72ddc285829245318eeacd3fb5a636
Reviewed-on: https://go-review.googlesource.com/1774
Reviewed-by: Russ Cox <rsc@golang.org>
When we do y = &x for global variables x and y, y gets initialized
at link time. Do the same for y = &x.f if x is a struct and y=&x[5]
if x is an array.
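A sketch of the global forms that can now be initialized at link time (hypothetical globals):
var x struct{ f int }
var a [10]int

var y = &x.f  // previously initialized by generated code at program start
var z = &a[5] // now both can be laid out by the linker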
Fixes#9217
Fixes#9355
Change-Id: Iea3c0ce2ce1b309e2b760e345608fd95460b5713
Reviewed-on: https://go-review.googlesource.com/1691
Reviewed-by: Minux Ma <minux@golang.org>
This test was added in CL 151000043.
It got lost in CL 144630044.
Change-Id: I318ab11be8e3e7489fc1395457c029c8bdb2aa41
Reviewed-on: https://go-review.googlesource.com/1773
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Gccgo incorrectly executed functions multiple times when they appeared
in a composite literal that required a conversion between different
interface types.
Change-Id: I7b40e76ed23fa8440ffa03b262041265c109adf7
Reviewed-on: https://go-review.googlesource.com/1710
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Gccgo failed to create the type descriptor for the type used to
allocate the nil value passed to append as the second argument when
append is called with only one argument. Calling append with only one
argument is unusual but obviously should not cause a compiler crash.
Change-Id: I530821847dfd68f0302de6ca6a84dfbc79653935
Reviewed-on: https://go-review.googlesource.com/1692
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
It shouldn't call semacquire() inside acquirem(); the runtime
thinks that means deadlock. It actually isn't a deadlock, but it
looks like it because acquirem() does m.locks++.
Candidate for inclusion in 1.4.1. runtime.Stack with all=true
is pretty unusable in a GOMAXPROCS>1 environment.
Fixes#9321
Change-Id: Iac6b664217d24763b9878c20e49229a1ecffc805
Reviewed-on: https://go-review.googlesource.com/1600
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Make sure dequeueing from a channel queue does not exhibit quadratic time behavior.
Change-Id: Ifb7c709b026f74c7e783610d4914dd92909a441b
Reviewed-on: https://go-review.googlesource.com/1212
Reviewed-by: Russ Cox <rsc@golang.org>
Usage:
fibo <n> compute fibonacci(n), n must be >= 0
fibo -bench benchmark fibonacci computation (takes about 1 min)
Additional flags:
-half add values using two half-digit additions
-opt optimize memory allocation through reuse
-short only print the first 10 digits of very large fibonacci numbers
This change was reviewed in detail as https://codereview.appspot.com/168480043 .
Change-Id: I7c86d49c5508532ea6206d00f424cf2117d2fe41
Reviewed-on: https://go-review.googlesource.com/1211
Reviewed-by: Russ Cox <rsc@golang.org>
When we start work on Gerrit, ppc64 and garbage collection
work will continue in the master branch, not the dev branches.
(We may still use dev branches for other things later, but
these are ready to be merged, and doing it now, before moving
to Git means we don't have to have dev branches working
in the Gerrit workflow on day one.)
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/183140043
640 bytes ought to be enough for anybody.
We'll bring this back down before Go 1.5. That's issue 9214.
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/188730043
The SudoG used to sit on the stack, so it was cheap to allocate
and didn't need to be cleaned up when finished.
For the conversion to Go, we had to move sudog off the stack
for a few reasons, so we added a cache of recently used sudogs
to keep allocation cheap. But we didn't add any of the necessary
cleanup before adding a SudoG to the new cache, and so the cached
SudoGs had stale pointers inside them that have caused all sorts
of awful, hard to debug problems.
CL 155760043 made sure SudoG.elem is cleaned up.
CL 150520043 made sure SudoG.selectdone is cleaned up.
This CL makes sure SudoG.next, SudoG.prev, and SudoG.waitlink
are cleaned up. I should have done this when I did the other two
fields; instead I wasted a week tracking down a leak they caused.
A dangling SudoG.waitlink can point into a sudogcache list that
has been "forgotten" in order to let the GC collect it, but that
dangling .waitlink keeps the list from being collected.
And then the list holding the SudoG with the dangling waitlink
can find itself in the same situation, and so on. We end up
with lists of lists of unusable SudoGs that are still linked into
the object graph and never collected (given the right mix of
non-trivial selects and non-channel synchronization).
More details in golang.org/issue/9110.
Fixes#9110.
LGTM=r
R=r
CC=dvyukov, golang-codereviews, iant, khr
https://golang.org/cl/177870043
This is to reduce the delta between dev.cc and dev.garbage to just garbage collector changes.
These are the files that had merge conflicts and have been edited by hand:
malloc.go
mem_linux.go
mgc.go
os1_linux.go
proc1.go
panic1.go
runtime1.go
LGTM=austin
R=austin
CC=golang-codereviews
https://golang.org/cl/174180043
Now the only difference between dev.cc and dev.garbage
is the runtime conversion on the one side and the
garbage collection on the other. They both have the
same set of changes from default and dev.power64.
LGTM=austin
R=austin
CC=golang-codereviews
https://golang.org/cl/172570043
The remaining run-only tests will be migrated to run.go in another CL.
This CL will break the build due to issues 8746 and 8806.
Update #4139
Update #8746
Update #8806
LGTM=rsc
R=rsc, bradfitz, iant
CC=golang-codereviews
https://golang.org/cl/144630044
Now each C printf, Go print, or Go println is guaranteed
not to be interleaved with other calls of those functions.
This should help when debugging concurrent failures.
LGTM=rlh
R=rlh
CC=golang-codereviews
https://golang.org/cl/169120043
One failing case this removes is:
var bytes = []byte("hello, world")
var copy_bytes = bytes
We could handle this in the compiler, but it requires special
case for a variable that is initialized to the value of a
variable that is initialized to a string literal converted to
[]byte. This seems an unlikely case--it never occurs in the
standard library--and it seems unnecessary to write the code to
handle it.
If we do want to support this case, one approach is
https://golang.org/cl/171840043.
The other failing cases are of the form
var bx bool
var copy_bx = bx
The compiler used to initialize copy_bx to false. However,
that led to issue 7665, since bx may be initialized in non-Go
code. The compiler no longer assumes that bx must be false,
so copy_bx can not be statically initialized.
We can fix these with https://golang.org/cl/169040043
if we also pass -complete to the compiler as part of this
test. This is OK but it's too late in the release cycle.
Fixes#8746.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/165400043
On power64x, this one line in live.go reports that t is live
because of missing optimization passes. This isn't what this
test is trying to test, so shuffle bad40 so that it still
accomplishes the intent of the test without also depending on
optimization.
LGTM=rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/167110043
The remaining failures in this test are because of incomplete
optimization support on power64x. Tracked in issue 9058.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/168130043
All three cases of clearfat were wrong on power64x.
The cases that handle 1032 bytes and up and 32 bytes and up
both use MOVDU (one directly generated in a loop and the other
via duffzero), which leaves the pointer register pointing at
the *last written* address. The generated code was not
accounting for this, so the byte fill loop was re-zeroing the
last zeroed dword, rather than the bytes following the last
zeroed dword. Fix this by simply adding an additional 8 byte
offset to the byte zeroing loop.
The case that handled under 32 bytes was also wrong. It
didn't update the pointer register at all, so the byte zeroing
loop was simply re-zeroing the beginning of the region. Again,
the fix is to add an offset to the byte zeroing loop to
account for this.
LGTM=dave, bradfitz
R=rsc, dave, bradfitz
CC=golang-codereviews
https://golang.org/cl/168870043
Originally traceback was only used for printing the stack
when an unexpected signal came in. In that case, the
initial PC is taken from the signal and should be used
unaltered. For the callers, the PC is the return address,
which might be on the line after the call; we subtract 1
to get to the CALL instruction.
Traceback is now used for a variety of things, and for
almost all of those the initial PC is a return address,
whether from getcallerpc, or gp->sched.pc, or gp->syscallpc.
In those cases, we need to subtract 1 from this initial PC,
but the traceback code had a hard rule "never subtract 1
from the initial PC", left over from the signal handling days.
Change gentraceback to take a flag that specifies whether
we are tracing a trap.
Change traceback to default to "starting with a return PC",
which is the overwhelmingly common case.
Add tracebacktrap, like traceback but starting with a trap PC.
Use tracebacktrap in signal handlers.
Fixes#7690.
LGTM=iant, r
R=r, iant
CC=golang-codereviews
https://golang.org/cl/167810044
The test just doubled a certain number of times
and then gave up. On a mostly fast but occasionally
slow machine this may never make the test run
long enough to see the linear growth.
Change test to keep doubling until the first round
takes at least a full second, to reduce the effect of
occasional scheduling or other jitter.
The failure we saw had a time for the first round
of around 100ms.
Note that this test still passes once it sees a linear
effect, even with a very small total time.
The timeout here only applies to how long the execution
must be to support a reported failure.
LGTM=khr
R=khr
CC=golang-codereviews, rlh
https://golang.org/cl/164070043
This brings dev.power64 up-to-date with the current tip of
default. go_bootstrap is still panicking with a bad defer
when initializing the runtime (even on amd64).
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/152570049
This also removes pkg/runtime/traceback_lr.c, which was ported
to Go in an earlier commit and then moved to
runtime/traceback.go.
Reviewer: rsc@golang.org
rsc: LGTM
test16 used to fail with gccgo. The withoutRecoverRecursive
test would have failed in some possible implementations.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/151630043
This brings cmd/gc in line with the spec on this question.
It might break existing code, but that code was not conformant
with the spec.
Credit to Rémy for finding the broken code.
Fixes#6366.
LGTM=r
R=golang-codereviews, r
CC=adonovan, golang-codereviews, gri
https://golang.org/cl/129550043
https://golang.org/cl/152700045/ made it possible for struct literals assigned to globals to use <N> as the RHS. Normally, this is to zero out variables on first use. Because globals are already zero (or their linker initialized value), we just ignored this.
Now that <N> can occur from non-initialization code, we need to emit this code. We don't use <N> for initialization of globals any more, so this shouldn't cause any excessive zeroing.
Fixes#8961.
LGTM=rsc
R=golang-codereviews, rsc
CC=bradfitz, golang-codereviews
https://golang.org/cl/154540044
This fixes the bug in which the linker reports "missing Go
type information" when a -X option refers to a symbol that is
not used.
Fixes#8821.
LGTM=rsc
R=rsc, r
CC=golang-codereviews
https://golang.org/cl/151000043
If there is a leading ·, assume there is a Go prototype and
attach the Go prototype information to the function.
If the function is not called from Go and does not need a
Go prototype, it can be made file-local instead (using name<>(SB)).
This fixes the current BSD build failures, by giving functions like
sync/atomic.StoreUint32 argument stack map information.
Fixes#8753.
LGTM=khr, iant
R=golang-codereviews, iant, khr, bradfitz
CC=golang-codereviews, r, rlh
https://golang.org/cl/142150043
iterdelete's run time varies; occasionally we get unlucky. To reduce spurious failures, average away some of the variation.
On my machine, 8 of 5000 runs (0.15%) failed before this CL. After this CL, there were no failures after 35,000 runs.
I confirmed that this adjusted test still fails before CL 141270043.
LGTM=khr
R=khr
CC=bradfitz, golang-codereviews
https://golang.org/cl/140610043
During anylit run, nodes such as SLICEARR(statictmp, [:])
may be generated and are expected to be found unchanged by
gen_as_init.
In some walks (in particular walkselect), the statement
may be walked again and lowered to its usual form, leading to a
crash.
Fixes#8017.
Fixes#8024.
Fixes#8058.
LGTM=rsc
R=golang-codereviews, dvyukov, gobot, rsc
CC=golang-codereviews
https://golang.org/cl/112080043
Previously it might happen before calling dowidth and
result in a compiler crash.
Fixes#8060.
LGTM=dvyukov, rsc
R=golang-codereviews, dvyukov, gobot, rsc
CC=golang-codereviews
https://golang.org/cl/110980044
A write *p = x that needs a write barrier (not all do)
now turns into runtime.writebarrierptr(p, x)
or one of the other variants.
The write barrier implementations are trivial.
The goal here is to emit the calls in the correct places
and to incur the cost of those function calls in the Go 1.4 cycle.
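A sketch of the kind of store that now gets a barrier call (hypothetical types;
the exact runtime entry point is one of the writebarrier* variants):
type node struct{ next *node }

func link(p, n *node) {
	// A pointer store through p: the compiler may now emit a call such as
	// runtime.writebarrierptr(&p.next, n) in place of the plain store.
	p.next = n
}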
Performance on the Go 1 benchmark suite below.
Remember, the goal is to slow things down (and be correct).
We will look into optimizations in separate CLs, as part of
the process of comparing Go 1.3 against tip in order to make
sure Go 1.4 runs at least as fast as Go 1.3.
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 3118336716 3452876110 +10.73%
BenchmarkFannkuch11 3184497677 3211552284 +0.85%
BenchmarkFmtFprintfEmpty 89.9 107 +19.02%
BenchmarkFmtFprintfString 236 287 +21.61%
BenchmarkFmtFprintfInt 246 278 +13.01%
BenchmarkFmtFprintfIntInt 395 458 +15.95%
BenchmarkFmtFprintfPrefixedInt 343 378 +10.20%
BenchmarkFmtFprintfFloat 477 525 +10.06%
BenchmarkFmtManyArgs 1446 1707 +18.05%
BenchmarkGobDecode 14398047 14685958 +2.00%
BenchmarkGobEncode 12557718 12947104 +3.10%
BenchmarkGzip 453462345 472413285 +4.18%
BenchmarkGunzip 114226016 115127398 +0.79%
BenchmarkHTTPClientServer 114689 112122 -2.24%
BenchmarkJSONEncode 24914536 26135942 +4.90%
BenchmarkJSONDecode 86832877 103620289 +19.33%
BenchmarkMandelbrot200 4833452 4898780 +1.35%
BenchmarkGoParse 4317976 4835474 +11.98%
BenchmarkRegexpMatchEasy0_32 150 166 +10.67%
BenchmarkRegexpMatchEasy0_1K 393 402 +2.29%
BenchmarkRegexpMatchEasy1_32 125 142 +13.60%
BenchmarkRegexpMatchEasy1_1K 1010 1236 +22.38%
BenchmarkRegexpMatchMedium_32 232 301 +29.74%
BenchmarkRegexpMatchMedium_1K 76963 102721 +33.47%
BenchmarkRegexpMatchHard_32 3833 5463 +42.53%
BenchmarkRegexpMatchHard_1K 119668 161614 +35.05%
BenchmarkRevcomp 763449047 706768534 -7.42%
BenchmarkTemplate 124954724 134834549 +7.91%
BenchmarkTimeParse 517 511 -1.16%
BenchmarkTimeFormat 501 514 +2.59%
benchmark old MB/s new MB/s speedup
BenchmarkGobDecode 53.31 52.26 0.98x
BenchmarkGobEncode 61.12 59.28 0.97x
BenchmarkGzip 42.79 41.08 0.96x
BenchmarkGunzip 169.88 168.55 0.99x
BenchmarkJSONEncode 77.89 74.25 0.95x
BenchmarkJSONDecode 22.35 18.73 0.84x
BenchmarkGoParse 13.41 11.98 0.89x
BenchmarkRegexpMatchEasy0_32 213.30 191.72 0.90x
BenchmarkRegexpMatchEasy0_1K 2603.92 2542.74 0.98x
BenchmarkRegexpMatchEasy1_32 254.00 224.93 0.89x
BenchmarkRegexpMatchEasy1_1K 1013.53 827.98 0.82x
BenchmarkRegexpMatchMedium_32 4.30 3.31 0.77x
BenchmarkRegexpMatchMedium_1K 13.30 9.97 0.75x
BenchmarkRegexpMatchHard_32 8.35 5.86 0.70x
BenchmarkRegexpMatchHard_1K 8.56 6.34 0.74x
BenchmarkRevcomp 332.92 359.62 1.08x
BenchmarkTemplate 15.53 14.39 0.93x
LGTM=rlh
R=rlh
CC=dvyukov, golang-codereviews, iant, khr, r
https://golang.org/cl/136380043
This CL adjusts code referring to src/pkg to refer to src.
Immediately after submitting this CL, I will submit
a change doing 'hg mv src/pkg/* src'.
That change will be too large to review with Rietveld
but will contain only the 'hg mv'.
This CL will break the build.
The followup 'hg mv' will fix it.
For more about the move, see golang.org/s/go14nopkg.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/134570043
Increase NOSPLIT reservation from 192 to 384 bytes.
The problem is that the non-Unix systems (Solaris and Windows)
just can't make system calls in a small amount of space,
and then worse they do things that are complex enough
to warrant calling runtime.throw on failure.
We don't have time to rewrite the code to use less stack.
I'm not happy about this, but it's still a small amount.
The good news is that we're doing this to get to only
using copying stacks for stack growth. Once that is true,
we can drop the default stack size from 8k to 4k, which
should more than make up for the bytes we're losing here.
LGTM=r
R=iant, r, bradfitz, aram.h
CC=golang-codereviews
https://golang.org/cl/140350043
The gp->panicwrap adjustment is just fatally flawed.
Now that there is a Panic.argp field, update that instead.
That can be done on entry only, so that unwinding doesn't
need to worry about undoing anything. The wrappers
emit a few more instructions in the prologue but everything
else in the system gets much simpler.
It also fixes (without trying) a broken test I never checked in.
Fixes#7491.
LGTM=khr
R=khr
CC=dvyukov, golang-codereviews, iant, r
https://golang.org/cl/135490044
newstackcall creates a new stack segment, and we want to
be able to throw away all that code.
LGTM=khr
R=khr, iant
CC=dvyukov, golang-codereviews, r
https://golang.org/cl/139270043
created panic1.go just so diffs were available.
After this CL is in, I'd like to move panic.go -> defer.go
and panic1.go -> panic.go.
LGTM=rsc
R=rsc, khr
CC=golang-codereviews
https://golang.org/cl/133530045
In CL 131450043, which raised it to 160,
I'd raise it to 192 if necessary.
Apparently it is necessary on windows/amd64.
One note for those concerned about the growth:
in the old segmented stack world, we wasted this much
space at the bottom of every stack segment.
In the new contiguous stack world, each goroutine has
only one stack segment, so we only waste this much space
once per goroutine. So even raising the limit further might
still be a net savings.
Fixes windows/amd64 build.
TBR=r
CC=golang-codereviews
https://golang.org/cl/132480043
The Go calling convention uses more stack space than C.
On 64-bit systems we've been right up against the limit
(128 bytes, so only 16 words) and doing awful things to
our source code to work around it. Instead of continuing
to do awful things, raise the limit to 160 bytes.
I am prepared to raise the limit to 192 bytes if necessary,
but I think this will be enough.
Should fix current link-time stack overflow errors on
- nacl/arm
- netbsd/amd64
- openbsd/amd64
- solaris/amd64
- windows/amd64
TBR=r
CC=golang-codereviews, iant
https://golang.org/cl/131450043
Before, a slice with cap=0 or a string with len=0 might have its
base pointer pointing beyond the actual slice/string data into
the next block. The collector had to ignore slices and strings with
cap=0 in order to avoid misinterpreting the base pointer.
Now, a slice with cap=0 or a string with len=0 still has a base
pointer pointing into the actual slice/string data, no matter what.
The collector can now always scan the pointer, which means
strings and slices are no longer special.
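A sketch of the case (hypothetical slices):
func tail() {
	s := make([]byte, 8)
	t := s[8:] // len(t) == 0 and cap(t) == 0: before this change the base pointer
	_ = t      // could point just past s's data, so the collector had to ignore it
}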
Fixes#8404.
LGTM=khr, josharian
R=josharian, khr, dvyukov
CC=golang-codereviews
https://golang.org/cl/112570044
Normally, an expression of the form x.f or *y can be reordered
with function calls and communications.
Select is stricter than normal: each channel expression is evaluated
in source order. If you have case <-x.f and case <-foo(), then if the
evaluation of x.f causes a panic, foo must not have been called.
(This is in contrast to an expression like x.f + foo().)
Enforce this stricter ordering.
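A sketch of the ordering now enforced (hypothetical types):
type T struct{ f chan int }

func wait(x *T, foo func() chan int) {
	select {
	case <-x.f: // x.f is evaluated first, in source order; if that evaluation
	case <-foo(): // panics (e.g. x is nil), foo must not have been called
	}
}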
Fixes#8336.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews, r
https://golang.org/cl/126570043
We need to change the interface value representation for
concurrent garbage collection, so that there is no ambiguity
about whether the data word holds a pointer or scalar.
This CL does NOT make any representation changes.
Instead, it removes representation assumptions from
various pieces of code throughout the tree.
The isdirectiface function in cmd/gc/subr.c is now
the only place that decides that policy.
The policy propagates out from there in the reflect
metadata, as a new flag in the internal kind value.
A follow-up CL will change the representation by
changing the isdirectiface function. If that CL causes
problems, it will be easy to roll back.
Update #8405.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, r
https://golang.org/cl/129090043
Credit to Rémy for finding and writing test case.
Fixes#8325.
LGTM=r
R=golang-codereviews, r
CC=dave, golang-codereviews, iant, remyoudompheng
https://golang.org/cl/124950043
This change introduces gomallocgc, a Go clone of mallocgc.
Only a few uses have been moved over, so there are still
lots of uses from C. Many of these C uses will be moved
over to Go (e.g. in slice.goc), but probably not all.
What should remain of C's mallocgc is an open question.
LGTM=rsc, dvyukov
R=rsc, khr, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/108840046
Following CL 68150047, the goos and goarch
variables are not currently set when the GOOS
and GOARCH environment variables are not set.
This made the content of the build tag to be
ignored in this case.
This CL sets goos and goarch to runtime.GOOS
and runtime.GOARCH when the GOOS and GOARCH
environment variables are not set.
LGTM=aram, bradfitz
R=golang-codereviews, aram, gobot, rsc, dave, bradfitz
CC=golang-codereviews, rsc
https://golang.org/cl/112490043
I'm improving gccgo's detection of variables that are only set
but not used, and it triggers additional errors on this code.
The new gccgo errors are correct; gc seems to suppress them
due to the other, expected, errors. This change uses the
variables so that no compiler will complain.
gccgo change is https://golang.org/cl/119920043 .
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/116050043
benchmark old ns/op new ns/op delta
BenchmarkSelectUncontended 220 165 -25.00%
BenchmarkSelectContended 209 161 -22.97%
BenchmarkSelectProdCons 1042 904 -13.24%
But more importantly, this change will allow
us to get rid of the free function in the runtime.
Fixes#6494.
LGTM=rsc, khr
R=golang-codereviews, rsc, dominik.honnef, khr
CC=golang-codereviews, remyoudompheng
https://golang.org/cl/107670043
There is no reason to generate different code for cap and len.
Fixes#8025.
Fixes#8026.
LGTM=rsc
R=rsc, iant, khr
CC=golang-codereviews
https://golang.org/cl/93570044
Fixes#8074.
The issue was not reproducible at revision
go version devel +e0ad7e329637 Thu Jun 19 22:19:56 2014 -0700 linux/arm
But include the original test case in case the issue reopens itself.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/107290043
No functional changes.
Generating shorter functions improves compilation time. On my laptop, this test's running time goes from 5.5s to 1.5s; the wall clock time to run all tests goes down 1s. On Raspberry Pi, this CL cuts 50s off the wall clock time to run all tests.
Fixes#7503.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/72590045
There is a hierarchy of location defined by loop depth:
-1 = the heap
0 = function results
1 = local variables (and parameters)
2 = local variable declared inside a loop
3 = local variable declared inside a loop inside a loop
etc
In general if an address from loopdepth n is assigned to
something in loop depth m < n, that indicates an extended
lifetime of some form that requires a heap allocation.
Function results can be local variables too, though, and so
they don't actually fit into the hierarchy very well.
Treat the address of a function result as level 1 so that
if it is written back into a result, the address is treated
as escaping.
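A sketch of the case being handled (hypothetical function):
func f() (x int, p *int) {
	p = &x // the address of result x is stored back into result p,
	return // so x must now be treated as escaping
}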
Fixes#8185.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/108870044
The analysis for &x was using the loop depth on x set
during x's declaration. A type switch creates a list of
implicit declarations that were not getting initialized
with loop depths.
Fixes#8176.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/108860043
A runtime.Goexit during a panic-invoked deferred call
left the panic stack intact even though all the stack frames
are gone when the goroutine is torn down.
The next goroutine to reuse that struct will have a
bogus panic stack and can cause the traceback routines
to walk into garbage.
Most likely to happen during tests, because t.Fatal might
be called during a deferred func and uses runtime.Goexit.
This "not enough cleared in Goexit" failure mode has
happened to us multiple times now. Clear all the pointers
that don't make sense to keep, not just gp->panic.
Fixes#8158.
LGTM=iant, dvyukov
R=iant, dvyukov
CC=golang-codereviews
https://golang.org/cl/102220043
I am not sure what the rounding here was
trying to do, but it was skipping the first
pointer on native client.
The code above the rounding already checks
that xoffset is widthptr-aligned, so the rnd
was a no-op everywhere but on Native Client.
And on Native Client it was wrong.
Perhaps it was supposed to be rounding down,
not up, but zerorange handles the extra 32 bits
correctly, so the rnd does not seem to be necessary
at all.
This wouldn't be worth doing for Go 1.3 except
that it can affect code on the playground.
Fixes#8155.
LGTM=r, iant
R=golang-codereviews, r, iant
CC=dvyukov, golang-codereviews, khr
https://golang.org/cl/108740047
I introduced this bug when I changed the escape
analysis to run in phases based on call graph
dependency order, in order to be more precise about
inputs escaping back to outputs (functions returning
their arguments).
Given
func f(z **int) *int { return *z }
we were tagging the function as 'z does not escape
and is not returned', which is all true, but not
enough information.
If used as:
	var x int
	p := &x
	q := &p
	leak(f(q))
then the compiler might try to keep x, p, and q all
on the stack, since (according to the recorded
information) nothing interesting ends up being
passed to leak.
In fact since f returns *q = p, &x is passed to leak
and x needs to be heap allocated.
To trigger the bug, you need a chain that the
compiler wants to keep on the stack (like x, p, q
above), and you need a function that returns an
indirect of its argument, and you need to pass the
head of the chain to that function. This doesn't
come up very often: this bug has been present since
June 2012 (between Go 1 and Go 1.1) and we haven't
seen it until now. It helps that most functions that
return indirects are getters that are simple enough
to be inlined, avoiding the bug.
Earlier versions of Go also had the benefit that if
&x really wasn't used beyond x's lifetime, nothing
broke if you put &x in a heap-allocated structure
accidentally. With the new stack copying, though,
heap-allocated structures containing &x are not
updated when the stack is copied and x moves,
leading to crashes in Go 1.3 that were not crashes
in Go 1.2 or Go 1.1.
The fix is in two parts.
First, in the analysis of a function, recognize when
a value obtained via indirect of a parameter ends up
being returned. Mark those parameters as having
content escape back to the return results (but we
don't bother to write down which result).
Second, when using the analysis to analyze, say,
f(q), mark parameters with content escaping as
having any indirections escape to the heap. (We
don't bother trying to match the content to the
return value.)
The fix could be less precise (simpler).
In the first part we might mark all content-escaping
parameters as plain escaping, and then the second
part could be dropped. Or we might assume that when
calling f(q) all the things pointed at by q escape
always (for any f and q).
The fix could also be more precise (more complex).
We might record the specific mapping from parameter
to result along with the number of indirects from the
parameter to the thing being returned as the result,
and then at the call sites we could set up exactly the
right graph for the called function. That would make
notleaks(f(q)) be able to keep x on the stack, because
the result of f(q) isn't passed to anything that leaks it.
The less precise the fix, the more stack allocations
become heap allocations.
This fix is exactly as precise as it needs to be so that
none of the current stack allocations in the standard
library turn into heap allocations.
Fixes#8120.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, khr, r
https://golang.org/cl/102040046
The 'address taken' bit in a function variable was not
propagating into the inlined copies, causing incorrect
liveness information.
LGTM=dsymonds, bradfitz
R=golang-codereviews, bradfitz
CC=dsymonds, golang-codereviews, iant, khr, r
https://golang.org/cl/96670046
The 1-byte write was silently clearing a byte on the stack.
If there was another function call with more arguments
in the same stack frame, no harm done.
Otherwise, if the variable at that location was already zero,
no harm done.
Otherwise, problems.
Fixes#8139.
LGTM=dsymonds
R=golang-codereviews, dsymonds
CC=golang-codereviews, iant, r
https://golang.org/cl/100940043
We were requiring that the defer stack and the panic stack
be completely processed, thinking that if any were left over
the stack scan and the defer stack/panic stack must be out
of sync. It turns out that the panic stack may well have
leftover entries in some situations, and that's okay.
Fixes#8132.
LGTM=minux, r
R=golang-codereviews, minux, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/100900044
The 'continuation pc' is where the frame will continue
execution, if anywhere. For a frame that stopped execution
due to a CALL instruction, the continuation pc is immediately
after the CALL. But for a frame that stopped execution due to
a fault, the continuation pc is the pc after the most recent CALL
to deferproc in that frame, or else 0. That is where execution
will continue, if anywhere.
The liveness information is only recorded for CALL instructions.
This change makes sure that we never look for liveness information
except for CALL instructions.
Using a valid PC fixes crashes when a garbage collection or
stack copying tries to process a stack frame that has faulted.
Record continuation pc in heapdump (format change).
Fixes#8048.
LGTM=iant, khr
R=khr, iant, dvyukov
CC=golang-codereviews, r
https://golang.org/cl/100870044
This CL forces the optimizer to preserve some memory stores
that would be redundant except that a stack scan due to garbage
collection or stack copying might look at them during a function call.
As such, it forces additional memory writes and therefore slows
down the execution of some programs, especially garbage-heavy
programs that are already limited by memory bandwidth.
The slowdown can be as much as 7% for end-to-end benchmarks.
These numbers are from running go1.test -test.benchtime=5s three times,
taking the best (lowest) ns/op for each benchmark. I am excluding
benchmarks with time/op < 10us to focus on macro effects.
All benchmarks are on amd64.
Comparing tip (a27f34c771cb) against this CL on an Intel Core i5 MacBook Pro:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 3876500413 3856337341 -0.52%
BenchmarkFannkuch11 2965104777 2991182127 +0.88%
BenchmarkGobDecode 8563026 8788340 +2.63%
BenchmarkGobEncode 5050608 5267394 +4.29%
BenchmarkGzip 431191816 434168065 +0.69%
BenchmarkGunzip 107873523 110563792 +2.49%
BenchmarkHTTPClientServer 85036 86131 +1.29%
BenchmarkJSONEncode 22143764 22501647 +1.62%
BenchmarkJSONDecode 79646916 85658808 +7.55%
BenchmarkMandelbrot200 4720421 4700108 -0.43%
BenchmarkGoParse 4651575 4712247 +1.30%
BenchmarkRegexpMatchMedium_1K 71986 73490 +2.09%
BenchmarkRegexpMatchHard_1K 111018 117495 +5.83%
BenchmarkRevcomp 648798723 659352759 +1.63%
BenchmarkTemplate 112673009 112819078 +0.13%
Comparing tip (a27f34c771cb) against this CL on an Intel Xeon E5520:
BenchmarkBinaryTree17 5461110720 5393104469 -1.25%
BenchmarkFannkuch11 4314677151 4327177615 +0.29%
BenchmarkGobDecode 11065853 11235272 +1.53%
BenchmarkGobEncode 6500065 6959837 +7.07%
BenchmarkGzip 647478596 671769097 +3.75%
BenchmarkGunzip 139348579 141096376 +1.25%
BenchmarkHTTPClientServer 69376 73610 +6.10%
BenchmarkJSONEncode 30172320 31796106 +5.38%
BenchmarkJSONDecode 113704905 114239137 +0.47%
BenchmarkMandelbrot200 6032730 6003077 -0.49%
BenchmarkGoParse 6775251 6405995 -5.45%
BenchmarkRegexpMatchMedium_1K 111832 113895 +1.84%
BenchmarkRegexpMatchHard_1K 161112 168420 +4.54%
BenchmarkRevcomp 876363406 892319935 +1.82%
BenchmarkTemplate 146273096 148998339 +1.86%
Just to get a sense of where we are compared to the previous release,
here are the same benchmarks comparing Go 1.2 to this CL.
Comparing Go 1.2 against this CL on an Intel Core i5 MacBook Pro:
BenchmarkBinaryTree17 4370077662 3856337341 -11.76%
BenchmarkFannkuch11 3347052657 2991182127 -10.63%
BenchmarkGobDecode 8791384 8788340 -0.03%
BenchmarkGobEncode 4968759 5267394 +6.01%
BenchmarkGzip 437815669 434168065 -0.83%
BenchmarkGunzip 94604099 110563792 +16.87%
BenchmarkHTTPClientServer 87798 86131 -1.90%
BenchmarkJSONEncode 22818243 22501647 -1.39%
BenchmarkJSONDecode 97182444 85658808 -11.86%
BenchmarkMandelbrot200 4733516 4700108 -0.71%
BenchmarkGoParse 5054384 4712247 -6.77%
BenchmarkRegexpMatchMedium_1K 67612 73490 +8.69%
BenchmarkRegexpMatchHard_1K 107321 117495 +9.48%
BenchmarkRevcomp 733270055 659352759 -10.08%
BenchmarkTemplate 109304977 112819078 +3.21%
Comparing Go 1.2 against this CL on an Intel Xeon E5520:
BenchmarkBinaryTree17 5986953594 5393104469 -9.92%
BenchmarkFannkuch11 4861139174 4327177615 -10.98%
BenchmarkGobDecode 11830997 11235272 -5.04%
BenchmarkGobEncode 6608722 6959837 +5.31%
BenchmarkGzip 661875826 671769097 +1.49%
BenchmarkGunzip 138630019 141096376 +1.78%
BenchmarkHTTPClientServer 71534 73610 +2.90%
BenchmarkJSONEncode 30393609 31796106 +4.61%
BenchmarkJSONDecode 139645860 114239137 -18.19%
BenchmarkMandelbrot200 5988660 6003077 +0.24%
BenchmarkGoParse 6974092 6405995 -8.15%
BenchmarkRegexpMatchMedium_1K 111331 113895 +2.30%
BenchmarkRegexpMatchHard_1K 165961 168420 +1.48%
BenchmarkRevcomp 995049292 892319935 -10.32%
BenchmarkTemplate 145623363 148998339 +2.32%
Fixes#8036.
LGTM=khr
R=golang-codereviews, josharian, khr
CC=golang-codereviews, iant, r
https://golang.org/cl/99660044
[Same as CL 102820043 except applied changes to 6g/gsubr.c
also to 5g/gsubr.c and 8g/gsubr.c. The problem I had last night
trying to do that was that 8g's copy of nodarg has different
(but equivalent) control flow and I was pasting the new code
into the wrong place.]
Description from CL 102820043:
The 'nodarg' function is used to obtain a Node*
representing a function argument or result.
It returned a brand new Node*, but that violates
the guarantee in most places in the compiler that
two Node*s refer to the same variable if and only if
they are the same Node* pointer. Reestablish that
invariant by making nodarg return a preexisting
named variable if present.
Having fixed that, avoid any copy during x=x in
componentgen, because the VARDEF we emit
before the copy marks the lhs x as dead incorrectly.
The change in walk.c avoids modifying the result
of nodarg. This was the only place in the compiler
that did so.
Fixes#8097.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant, khr, r
https://golang.org/cl/103750043
Breaks 386 and arm builds.
The obvious reason is that this CL only edited 6g/gsubr.c
and failed to edit 5g/gsubr.c and 8g/gsubr.c.
However, the obvious CL applying the same edit to those
files (CL 101900043) causes mysterious build failures
in various of the standard package tests, usually involving
reflect. Something deep and subtle is broken but only on
the 32-bit systems.
Undo this CL for now.
««« original CL description
cmd/gc: fix x=x crash
The 'nodarg' function is used to obtain a Node*
representing a function argument or result.
It returned a brand new Node*, but that violates
the guarantee in most places in the compiler that
two Node*s refer to the same variable if and only if
they are the same Node* pointer. Reestablish that
invariant by making nodarg return a preexisting
named variable if present.
Having fixed that, avoid any copy during x=x in
componentgen, because the VARDEF we emit
before the copy marks the lhs x as dead incorrectly.
The change in walk.c avoids modifying the result
of nodarg. This was the only place in the compiler
that did so.
Fixes#8097.
LGTM=r, khr
R=golang-codereviews, r, khr
CC=golang-codereviews, iant
https://golang.org/cl/102820043
»»»
TBR=r
CC=golang-codereviews, khr
https://golang.org/cl/95660043
The 'nodarg' function is used to obtain a Node*
representing a function argument or result.
It returned a brand new Node*, but that violates
the guarantee in most places in the compiler that
two Node*s refer to the same variable if and only if
they are the same Node* pointer. Reestablish that
invariant by making nodarg return a preexisting
named variable if present.
Having fixed that, avoid any copy during x=x in
componentgen, because the VARDEF we emit
before the copy marks the lhs x as dead incorrectly.
The change in walk.c avoids modifying the result
of nodarg. This was the only place in the compiler
that did so.
Fixes#8097.
LGTM=r, khr
R=golang-codereviews, r, khr
CC=golang-codereviews, iant
https://golang.org/cl/102820043
This matters for NaCl, which seems to swamp my 4-core MacBook Pro otherwise.
It's not a correctness problem, just a usability problem.
LGTM=bradfitz
R=bradfitz
CC=golang-codereviews
https://golang.org/cl/98600046
CL 51010045 fixed the first one of these:
cmd/gc: return canonical Node* from temp
For historical reasons, temp was returning a copy
of the created Node*, not the original Node*.
This meant that if analysis recorded information in the
returned node (for example, n->addrtaken = 1), the
analysis would not show up on the original Node*, the
one kept in fn->dcl and consulted during liveness
bitmap creation.
Correct this, and watch for it when setting addrtaken.
Fixes#7083.
R=khr, dave, minux.ma
CC=golang-codereviews
https://golang.org/cl/51010045
CL 53200043 fixed the second:
cmd/gc: fix race build
Missed this case in CL 51010045.
TBR=khr
CC=golang-codereviews
https://golang.org/cl/53200043
This CL fixes the third. There are only three nod(OXXX, ...)
calls in sinit.c, so maybe we're done. Embarrassing that it
took three CLs to find all three.
Fixes#8028.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant
https://golang.org/cl/100800046
In the first very rough draft of the reordering code
that was introduced in the Go 1.3 cycle, the pre-allocated
temporary for a ... argument was held in n->right.
It moved to n->alloc but the code avoiding n->right
was left behind in order.c. In copy(x, <-c), the receive
is in n->right and must be processed. Delete the special
case code, removing the bug.
Fixes#8039.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100820044
The code cannot have worked before, because it was
trying to use the old value in a range check for the new
type, which might have a different representation
(hence the 'internal compiler error').
Fixes#8073.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/98630045
Add nacl.bash, the NaCl version of all.bash.
It's a separate script because it builds a variant of package syscall
with a large zip file embedded in it, containing all the input files
needed for tests.
Disable various tests new since the last round, mostly the ones using os/exec.
Fixes#7945.
LGTM=dave
R=golang-codereviews, remyoudompheng, dave, bradfitz
CC=golang-codereviews
https://golang.org/cl/100590044
I don't know when the bug was fixed, but empirically it was.
Make sure it stays fixed by adding a test.
Fixes#7884.
LGTM=adg
R=golang-codereviews, adg
CC=golang-codereviews
https://golang.org/cl/93500043
The float32 const conversion used to round to float64
and then use the hardware to round to float32.
Even though there was a range check before this
conversion, the double rounding introduced inaccuracy:
the round to float64 might round the value further away
from the float32 range, reaching a float64 value that
could not actually be rounded to float32. The hardware
appears to give us 0 in that case, but it is probably undefined.
Double rounding also meant that the wrong value might
be used for certain border cases.
Do the rounding to float32 ourselves, just as we already
did the rounding to float64. This makes the conversion
precise and also makes the conversion match the range check.
Finally, add some code to print very large (bigger than float64)
floating point constants in decimal floating point notation instead
of falling back to the precise but human-unreadable binary floating
point notation.
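For reference, a hedged example of the float32 range boundary this conversion must respect (the constant is chosen for illustration and is not from this CL):
package demo

const huge = 1e39 // representable as float64, outside the float32 range

var d float64 = huge // ok

// var f float32 = huge // compile-time error: constant overflows float32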
Fixes#8015.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, r
https://golang.org/cl/100580044
The temporary-introducing pass was not recursing
into the argument of a receive operation.
Fixes#8011.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/91540043
The introduction of temporaries in order.c was not
quite right for two corner cases:
1) The rewrite that pushed new variables on the lhs of
a receive into the body of the case was dropping the
declaration of the variables. If the variables escape,
the declaration is what allocates them.
Caught by escape analysis sanity check.
In fact the declarations should move into the body
always, so that we only allocate if the corresponding
case is selected. Do that. (This is an optimization that
was already present in Go 1.2. The new order code just
made it stop working.)
Fixes#7997.
2) The optimization to turn a single-recv select into
an ordinary receive assumed it could take the address
of the destination; not so if the destination is _.
Fixes#7998.
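A hedged sketch of both corner cases (the channel types and the sink variable are hypothetical):
package demo

var sink *int // hypothetical global used only to make v escape

func corners(c1, c2 chan *int) {
	// Corner case 1: v's declaration moves into the case body, so its
	// allocation happens only when this case is selected.
	select {
	case v := <-c1:
		sink = v
	case <-c2:
	}
	// Corner case 2: a select with a single receive into _ compiles to an
	// ordinary receive and must not take the address of _.
	select {
	case _ = <-c1:
	}
}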
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100480043
The GC program describing a data structure sometimes trusts the
pointer base type and other times does not (if not, the garbage collector
must fall back on per-allocation type information stored in the heap).
Make the scanning of a pointer in an interface do the same.
This fixes a crash in a particular use of reflect.SliceHeader.
Fixes#8004.
LGTM=khr
R=golang-codereviews, khr
CC=0xe2.0x9a.0x9b, golang-codereviews, iant, r
https://golang.org/cl/100470045
Globals, function arguments, and results are special cases in
registerization.
Globals must be flushed aggressively, because nearly any
operation can cause a panic, and the recovery code must see
the latest values. Globals also must be loaded aggressively,
because nearly any store through a pointer might be updating a
global: the compiler cannot see all the "address of"
operations on globals, especially exported globals. To
accomplish this, mark all globals as having their address
taken, which effectively disables registerization.
If a function contains a defer statement, the function results
must be flushed aggressively, because nearly any operation can
cause a panic, and the deferred code may call recover, causing
the original function to return the current values of its
function results. To accomplish this, mark all function
results as having their address taken if the function contains
any defer statements. This causes not just aggressive flushing
but also aggressive loading. The aggressive loading is
overkill but the best we can do in the current code.
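A hedged Go-level illustration of why results need flushing when a defer is present (the helper and values are hypothetical): the recover path sees whatever the named result holds at the time of the panic.
package demo

func mightPanic() { panic("hypothetical failure") }

func f() (n int) {
	defer func() {
		if recover() != nil {
			// n must hold its latest value here; a stale register copy
			// would change the value the caller observes.
			n = -1
		}
	}()
	n = 42
	mightPanic()
	return n
}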
Function arguments must be considered live at all safe points
in a function, because garbage collection always preserves
them: they must be up-to-date in order to be preserved
correctly. Accomplish this by marking them live at all call
sites. An earlier attempt at this marked function arguments as
having their address taken, which disabled registerization
completely, making programs slower. This CL's solution allows
registerization while preserving safety. The benchmark speedup
is caused by being able to registerize again (the earlier CL
lost the same amount).
benchmark old ns/op new ns/op delta
BenchmarkEqualPort32 61.4 56.0 -8.79%
benchmark old MB/s new MB/s speedup
BenchmarkEqualPort32 521.56 570.97 1.09x
Fixes#1304. (again)
Fixes#7944. (again)
Fixes#7984.
Fixes#7995.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant, r
https://golang.org/cl/97500044
The inputs to a function are marked live at all times in the
liveness bitmaps, so that the garbage collector will not free
the things they point at and reuse the pointers, so that the
pointers shown in stack traces are guaranteed not to have
been recycled.
Unfortunately, no one told the register optimizer that the
inputs need to be preserved at all call sites. If a function
is done with a particular input value, the optimizer will stop
preserving it across calls. For single-word values this just
means that the value recorded might be stale. For multi-word
values like slices, the value recorded could be only partially stale:
it can happen that, say, the cap was updated but not the len,
or that the len was updated but not the base pointer.
Either of these possibilities (and others) would make the
garbage collector misinterpret memory, leading to memory
corruption.
This came up in a real program, in which the garbage collector's
'slice len ≤ slice cap' check caught the inconsistency.
Fixes#7944.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, khr
https://golang.org/cl/100370045
This is joint work with Daniel Morsing.
In order for the register allocator to alias two variables, they must have the same width, stack offset, and etype. Code generation was altering a variable's etype in a few places. This prevented the variable from being moved to a register, which in turn prevented peephole optimization. This failure to alias was very common, with almost 23,000 instances just running make.bash.
This phenomenon was not visible in the register allocation debug output because the variables that failed to alias had the same name. The debugging-only change to bits.c fixes this by printing the variable number with its name.
This CL fixes the source of all etype mismatches for 6g, all but one case for 8g, and depressingly few cases for 5g. (I believe that extending CL 6819083 to 5g is a prerequisite.) Fixing the remaining cases in 8g and 5g is work for the future.
The etype mismatch fixes are:
* [gc] Slicing changed the type of the base pointer into a uintptr in order to perform arithmetic on it. Instead, support addition directly on pointers.
* [*g] OSPTR was giving type uintptr to slice base pointers; undo that. This arose, for example, while compiling copy(dst, src).
* [8g] 64 bit float conversion was assigning int64 type during codegen, overwriting the existing uint64 type.
Note that some etype mismatches are appropriate, such as a struct with a single field or an array with a single element.
With these fixes, the number of registerizations that occur while running make.bash for 6g increases ~10%. Hello world binary size shrinks ~1.5%. Running all benchmarks in the standard library shows performance improvements ranging from nominal to substantive (>10%); a full comparison using 6g on my laptop is available at https://gist.github.com/josharian/8f9b5beb46667c272064. The microbenchmarks must be taken with a grain of salt; see issue 7920. The few benchmarks that show real regressions are likely due to issue 7920. I manually examined the generated code for the top few regressions and none had any assembly output changes. The few benchmarks that show extraordinary improvements are likely also due to issue 7920.
Performance results from 8g appear similar to 6g.
5g shows no performance improvements. This is not surprising, given the discussion above.
Update #7316
LGTM=rsc
R=rsc, daniel.morsing, bradfitz
CC=dave, golang-codereviews
https://golang.org/cl/91850043
Before we used line 1 of the first source file.
This should be clearer.
Fixes#4388.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/92250044
If the ... element type contained no pointers,
then the escape analysis did not track the ... itself.
This manifested in an escaping ...byte being treated
as non-escaping.
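A hedged sketch of the affected shape (names are hypothetical): the ... slice itself escapes even though its byte elements contain no pointers.
package demo

var saved [][]byte // hypothetical sink that forces the ... slice to escape

func keep(bs ...byte) {
	saved = append(saved, bs) // the ...byte slice itself escapes here
}

func caller() {
	keep(1, 2, 3) // the backing array for the ... argument must be heap-allocated
}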
Fixes#7934.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100310043
The register allocator decides which variables should be placed into registers by charging for each load/store and crediting for each use, and then selecting an allocation with minimal cost. NOPs will be eliminated, however, so using a variable in a NOP should not generate credit.
Issue 7867 arises from attempted registerization of multi-word variables because they are used in NOPs. By not crediting for that use, they will no longer be considered for registerization.
This fix could theoretically lead to better register allocation, but NOPs are rare relative to other instructions.
Fixes#7867.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/94810044
Variables declared with 'var' have no sym->def.
Fixes#7794.
LGTM=rsc
R=golang-codereviews, bradfitz, rsc
CC=golang-codereviews
https://golang.org/cl/88360043
The new code is adapted from the Go 1.2 nosplit code,
but it does not have the bug reported in issue 7623:
g% go run nosplit.go
g% go1.2 run nosplit.go
BUG
rejected incorrectly:
main 0 call f; f 120
linker output:
# _/tmp/go-test-nosplit021064539
main.main: nosplit stack overflow
120 guaranteed after split check in main.main
112 on entry to main.f
-8 after main.f uses 120
g%
Fixes#6931.
Fixes#7623.
LGTM=iant
R=golang-codereviews, iant, ality
CC=golang-codereviews, r
https://golang.org/cl/88190043
Trying to make GODEBUG=gcdead=1 work with liveness
and in particular ambiguously live variables.
1. In the liveness computation, mark all ambiguously live
variables as live for the entire function, except the entry.
They are zeroed directly after entry, and we need them not
to be poisoned thereafter.
2. In the liveness computation, compute liveness (and deadness)
for all parameters, not just pointer-containing parameters.
Otherwise gcdead poisons untracked scalar parameters and results.
3. Fix liveness debugging print for -live=2 to use correct bitmaps.
(Was not updated for compaction during compaction CL.)
4. Correct varkill during map literal initialization.
Was killing the map itself instead of the inserted value temp.
5. Disable aggressive varkill cleanup for call arguments if
the call appears in a defer or go statement.
6. In the garbage collector, avoid bug scanning empty
strings. An empty string is two zeros. The multiword
code only looked at the first zero and then interpreted
the next two bits in the bitmap as an ordinary word bitmap.
For a string the bits are 11 00, so if a live string was zero
length with a 0 base pointer, the poisoning code treated
the length as an ordinary word with code 00, meaning it
needed poisoning, turning the string into a poison-length
string with base pointer 0. By the same logic I believe that
a live nil slice (bits 11 01 00) will have its cap poisoned.
Always scan full multiword struct.
7. In the runtime, treat both poison words (PoisonGC and
PoisonStack) as invalid pointers that warrant crashes.
Manual testing as follows:
- Create a script called gcdead on your PATH containing:
#!/bin/bash
GODEBUG=gcdead=1 GOGC=10 GOTRACEBACK=2 exec "$@"
- Now you can build a test and then run 'gcdead ./foo.test'.
- More importantly, you can run 'go test -short -exec gcdead std'
to run all the tests.
Fixes#7676.
While here, enable the precise scanning of slices, since that was
disabled due to bugs like these. That now works, both with and
without gcdead.
Fixes#7549.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/83410044
1. Use n->alloc, not n->left, to hold the allocated temp being
passed from orderstmt/orderexpr to walk.
2. Treat method values the same as closures.
3. Use killed temporary for composite literal passed to
non-escaping function argument.
4. Clean temporaries promptly in if and for statements.
5. Clean temporaries promptly in select statements.
As part of this, move all the temporary-generating logic
out of select.c into order.c, so that the temporaries can
be reclaimed.
With the new temporaries, can re-enable the 1-entry
select optimization. Fixes issue 7672.
While we're here, fix a 1-line bug in select processing
turned up by the new liveness test (but unrelated; select.c:72).
Fixes#7686.
6. Clean temporaries (but not particularly promptly) in switch
and range statements.
7. Clean temporary used during convT2E/convT2I.
8. Clean temporaries promptly during && and || expressions.
---
CL 81940043 reduced the number of ambiguously live temps
in the godoc binary from 860 to 711.
CL 83090046 reduced the number from 711 to 121.
This CL reduces the number from 121 to 23.
15 of the 23 that remain are in fact ambiguously live.
The final 8 could be fixed but are not trivial and
not common enough to warrant work at this point
in the release cycle.
These numbers only count ambiguously live temps,
not ambiguously live user-declared variables.
There are 18 such variables in the godoc binary after this CL,
so a total of 41 ambiguously live temps or user-declared
variables.
The net effect is that zeroing anything on entry to a function
should now be a rare event, whereas earlier it was the
common case.
This is good enough for Go 1.3, and probably good
enough for future releases too.
Fixes#7345.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/83000048
1. In functions with heap-allocated result variables or with
defer statements, the return sequence requires more than
just a single RET instruction. There is an optimization that
arranges for all returns to jump to a single copy of the return
epilogue in this case. Unfortunately, that optimization is
fundamentally incompatible with PC-based liveness information:
it takes PCs at many different points in the function and makes
them all land at one PC, making the combined liveness information
at that target PC a mess. Disable this optimization, so that each
return site gets its own copy of the 'call deferreturn' and the
copying of result variables back from the heap.
This removes quite a few spurious 'ambiguously live' variables.
2. Let orderexpr allocate temporaries that are passed by address
to a function call and then die on return, so that we can arrange
an appropriate VARKILL.
2a. Do this for ... slices.
2b. Do this for closure structs.
2c. Do this for runtime.concatstring, which is the implementation
of large string additions. Change representation of OADDSTR to
an explicit list in typecheck to avoid reconstructing list in both
walk and order.
3. Let orderexpr allocate the temporary variable copies used for
range loops, so that they can be killed when the loop is over.
Similarly, let it allocate the temporary holding the map iterator.
CL 81940043 reduced the number of ambiguously live temps
in the godoc binary from 860 to 711.
This CL reduces the number to 121. Still more to do, but another
good checkpoint.
Update #7345
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/83090046
The new channel and map runtime routines take pointers
to values, typically temporaries. Without help, the compiler
cannot tell when those temporaries stop being needed,
because it isn't sure what happened to the pointer.
Arrange to insert explicit VARKILL instructions for these
temporaries so that the liveness analysis can avoid seeing
them as "ambiguously live".
The change is made in order.c, which was already in charge of
introducing temporaries to preserve the order-of-evaluation
guarantees. Now its job has expanded to include introducing
temporaries as needed by runtime routines, and then also
inserting the VARKILL annotations for all these temporaries,
so that their lifetimes can be shortened.
In order to do its job for the map runtime routines, order.c arranges
that all map lookups or map assignments have the form:
x = m[k]
x, y = m[k]
m[k] = x
where x, y, and k are simple variables (often temporaries).
Likewise, receiving from a channel is now always:
x = <-c
In order to provide the map guarantee, order.c is responsible for
rewriting x op= y into x = x op y, so that m[k] += z becomes
t = m[k]
t2 = t + z
m[k] = t2
While here, fix a few bugs in order.c's traversal: it was failing to
walk into select and switch case bodies, so order of evaluation
guarantees were not preserved in those situations.
Added tests to test/reorder2.go.
Fixes#7671.
In gc/popt's temporary-merging optimization, allow merging
of temporaries with their address taken as long as the liveness
ranges do not intersect. (There is a good chance of that now
that we have VARKILL annotations to limit the liveness range.)
Explicitly killing temporaries cuts the number of ambiguously
live temporaries that must be zeroed in the godoc binary from
860 to 711, or -17%. There is more work to be done, but this
is a good checkpoint.
Update #7345
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/81940043
1. On entry to a function, only zero the ambiguously live stack variables.
Before, we were zeroing all stack variables containing pointers.
The zeroing is pretty inefficient right now (issue 7624), but there are also
too many stack variables detected as ambiguously live (issue 7345),
and that must be addressed before deciding how to improve the zeroing code.
(Changes in 5g/ggen.c, 6g/ggen.c, 8g/ggen.c, gc/pgen.c)
Fixes#7647.
2. Make the regopt word-based liveness analysis preserve the
whole-variable liveness property expected by the garbage collection
bitmap liveness analysis. That is, if the regopt liveness decides that
one word in a struct needs to be preserved, make sure it preserves
the entire struct. This is particularly important for multiword values
such as strings, slices, and interfaces, in which all the words need
to be present in order to understand the meaning.
(Changes in 5g/reg.c, 6g/reg.c, 8g/reg.c.)
Fixes#7591.
3. Make the regopt word-based liveness analysis treat a variable
as having its address taken - which makes it preserved across
all future calls - whenever n->addrtaken is set, for consistency
with the gc bitmap liveness analysis, even if there is no machine
instruction actually taking the address. In this case n->addrtaken
is incorrect (a nicer way to put it is overconservative), and ideally
there would be no such cases, but they can happen and the two
analyses need to agree.
(Changes in 5g/reg.c, 6g/reg.c, 8g/reg.c; test in bug484.go.)
Fixes crashes found by turning off "zero everything" in step 1.
4. Remove spurious VARDEF annotations. As the comment in
gc/pgen.c explains, the VARDEF must immediately precede
the initialization. It cannot be too early, and it cannot be too late.
In particular, if a function call sits between the VARDEF and the
actual machine instructions doing the initialization, the variable
will be treated as live during that function call even though it is
uninitialized, leading to problems.
(Changes in gc/gen.c; test in live.go.)
Fixes crashes found by turning off "zero everything" in step 1.
5. Do not treat loading the address of a wide value as a signal
that the value must be initialized. Instead depend on the existence
of a VARDEF or the first actual read/write of a word in the value.
If the load is in order to pass the address to a function that does
the actual initialization, treating the load as an implicit VARDEF
causes the same problems as described in step 4.
The alternative is to arrange to zero every such value before
passing it to the real initialization function, but this is a much
easier and more efficient change.
(Changes in gc/plive.c.)
Fixes crashes found by turning off "zero everything" in step 1.
6. Treat wide input parameters with their address taken as
initialized on entry to the function. Otherwise they look
"ambiguously live" and we will try to emit code to zero them.
(Changes in gc/plive.c.)
Fixes crashes found by turning off "zero everything" in step 1.
7. An array of length 0 has no pointers, even if the element type does.
Without this change, the zeroing code complains when asked to
clear a 0-length array.
(Changes in gc/reflect.c.)
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/80160044
Revision 3ae4607a43ff introduced CONVNOP layers
to fix type checking issues arising from comparisons.
The added complexity made 8g run out of registers
when compiling an equality function in go.net/ipv6.
A similar issue occurred in test/sizeof.go on
amd64p32 with 6g.
Fixes#7405.
LGTM=khr
R=rsc, dave, iant, khr
CC=golang-codereviews
https://golang.org/cl/78100044
A too large float constant is an error.
A too small float constant is rounded to zero.
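A hedged example of both rules (the constants are chosen for illustration):
package demo

var tiny float64 = 1e-400 // below the smallest denormal float64: rounded to zero

// var big float64 = 1e+400 // compile-time error: constant overflows float64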
Fixes#7419.
Update #6902
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/76730046
The lowering to runtime calls introduces hidden pointers to the
arguments of select clauses. When implicit conversions were
involved it could end up with incompatible pointers. Since the
pointed-to types have the same representation, we can introduce a
forced conversion.
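A hedged sketch of the kind of clause involved (the types are hypothetical and not necessarily the shape of the original report):
package demo

type myBytes []byte // same representation as []byte

func send(ch chan []byte, b myBytes) {
	select {
	case ch <- b: // implicit conversion of b to []byte in the lowered runtime call
	default:
	}
}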
Fixes#6847.
LGTM=rsc
R=rsc, iant, khr
CC=golang-codereviews
https://golang.org/cl/72380043
The garbage collector uses type information to guide the
traversal of the heap. If it sees a field that should be a string,
it marks the object pointed at by the string data pointer as
visited but does not bother to look at the data, because
strings contain bytes, not pointers.
If you save s[len(s):] somewhere, though, the string data pointer
actually points just beyond the string data; if the string data
were exactly the size of an allocated block, the string data
pointer would actually point at the next block. It is incorrect
to mark that next block as visited and not bother to look at
the data, because the next block may be some other type
entirely.
The fix is to ignore strings with zero length during collection:
they are empty and can never become non-empty: the base
pointer will never be used again. The handling of slices already
does this (but using cap instead of len).
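A hedged illustration of the pointer-past-the-end shape (the size is hypothetical):
package demo

import "strings"

func tail() string {
	s := strings.Repeat("x", 1152) // hypothetical length that exactly fills an allocation
	t := s[len(s):]                // empty string; its data pointer sits just past s's bytes
	return t                       // keeping t keeps that past-the-end pointer reachable
}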
This was not a bug in Go 1.2, because until January all string
allocations included a trailing NUL byte not included in the
length, so s[len(s):] still pointed inside the string allocation
(at the NUL).
This bug was causing the crashes in test/run.go. Specifically,
the parsing of a regexp in package regexp/syntax allocated a
[]syntax.Inst with rounded size 1152 bytes. In fact it
allocated many such slices, because during the processing of
test/index2.go it creates thousands of regexps that are all
approximately the same complexity. That takes a long time, and
test/run works on other tests in other goroutines. One such
other test is chan/perm.go, which uses a 1152-byte source
file. test/run reads that file into a []byte and then calls
strings.Split(string(src), "\n"). The string(src) creates an
1152-byte string - and there's a very good chance of it
landing next to one of the many many regexp slices already
allocated - and then because the file ends in a \n,
strings.Split records the tail empty string as the final
element in the slice. A garbage collection happens at this
point, the collection finds that string before encountering
the []syntax.Inst data it now inadvertently points to, and the
[]syntax.Inst data is not scanned for the pointers that it
contains. Each syntax.Inst contains a []rune, those are
missed, and the backing rune arrays are freed for reuse. When
the regexp is later executed, the runes being searched for are
no longer runes at all, and there is no match, even on text
that should match.
On 64-bit machines the pointer in the []rune inside the
syntax.Inst is larger (along with a few other pointers),
pushing the []syntax.Inst backing array into a larger size
class, avoiding the collision with chan/perm.go's
inadvertently sized file.
I expect this was more prevalent on OS X than on Linux or
Windows because those managed to run faster or slower and
didn't overlap index2.go with chan/perm.go as often. On the
ARM systems, we only run one errorcheck test at a time, so
index2 and chan/perm would never overlap.
It is possible that this bug is the root cause of other crashes
as well. For now we only know it is the cause of the test/run crash.
Many thanks to Dmitriy for help debugging.
Fixes#7344.
Fixes#7455.
LGTM=r, dvyukov, dave, iant
R=golang-codereviews, dave, r, dvyukov, delpontej, iant
CC=golang-codereviews, khr
https://golang.org/cl/74250043
Some of the errorcheck tests have many many identical regexps.
Use a map to avoid storing the compiled form many many times
in memory. Change the filterRe to a simple string to avoid
the expense of those regexps as well.
Cuts the time for run.go on index2.go by almost 50x.
Noticed during debugging of issue 7344.
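A minimal sketch of the de-duplication idea (the names are hypothetical, not the actual run.go code): compile each distinct expression once and share the result through a map.
package demo

import "regexp"

var reCache = map[string]*regexp.Regexp{} // expression text -> compiled form

func cachedRegexp(expr string) *regexp.Regexp {
	if re, ok := reCache[expr]; ok {
		return re
	}
	re := regexp.MustCompile(expr)
	reCache[expr] = re
	return re
}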
LGTM=bradfitz
R=bradfitz, josharian
CC=golang-codereviews
https://golang.org/cl/74380043
The byte that r is or'd into is already 0x7, so the failure to zero r only
impacts the generated machine code if the register is > 7.
Fixes#7044.
LGTM=dave, minux.ma, rsc
R=dave, minux.ma, bradfitz, rsc
CC=golang-codereviews
https://golang.org/cl/73730043