Add missing file that should have been included in CL 6854063 / 5eac1a2d6fc3
R=remyoudompheng, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6891049
Fixes #4396.
For fixed arrays larger than the unmapped page, agenr would generate a nil check by loading the first word of the array. However, there is no requirement for the first element of a byte array to be word aligned, so this check causes a trap on ARMv5 hardware (ARMv6 has since relaxed that restriction, but the unaligned load probably still comes at a cost).
Switching the check to MOVB ensures alignment is not an issue. The check is only invoked in a few places in the code where large fixed arrays are embedded into structs; compress/lzw is the biggest offender, and switching to MOVB has no observable performance penalty.
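As a hedged illustration (the type and sizes are invented, not from the CL), the kind of code that reaches this check is a large fixed array embedded in a struct:

type dict struct {
	suffix [8192]byte // assumed larger than the unmapped page
}

func first(d *dict) byte {
	// Indexing through d forces a nil check. Loading a whole word
	// could fault on ARMv5 if the array is not word aligned, so the
	// check now loads a single byte (MOVB) instead.
	return d.suffix[0]
}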
Thanks to Rémy and Daniel Morsing for helping me debug this on IRC last night.
R=remyoudompheng, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6854063
This allows 5g and 8g to benefit from the rewrite of divisions as shifts
or magic multiplies. The 64-bit arithmetic is not handled there,
and is left in 6g.
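A minimal sketch, assuming the rewrite targets constant divisors (which is what "magic multiplies" usually refers to):

func div8(n uint32) uint32 { return n / 8 } // rewritten as n >> 3

func div10(n uint32) uint32 { return n / 10 } // rewritten as a magic multiply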
Update #2230.
R=golang-dev, dave, mtj, iant, rsc
CC=golang-dev
https://golang.org/cl/6819123
The patch adds more cases to agenr to allocate registers later,
and makes 6g generate addresses for sgen in registers other than
SI and DI. It avoids a complex save/restore sequence that
amounts to allocating a register before descending into subtrees.
Fixes #4207.
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/6817080
xtramodes' C_PBIT optimisation transforms:
MOVW 0(R3),R1
ADD $4,R3,R3
into:
MOVW.P 4(R3),R1
and the AADD optimisation transforms:
ADD R0,R1
MOVBU 0(R1),R0
into:
MOVBU R0<<0(R1),R0
5g does not appear to generate sequences that
can be transformed by xtramodes' AMOVW.
R=remyoudompheng, rsc
CC=golang-dev
https://golang.org/cl/6817085
This CL is a backport of 6012049 which improves code
generation for shift operations.
benchmark old ns/op new ns/op delta
BenchmarkLSL 9 5 -49.67%
BenchmarkLSR 9 4 -50.00%
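The benchmark bodies are not shown in this CL; a plausible reconstruction (the names match the table above, everything else is a guess) is:

import "testing"

var sink uint32

func BenchmarkLSL(b *testing.B) {
	x, s := uint32(1), uint32(3)
	for i := 0; i < b.N; i++ {
		x = x << s // variable left shift
	}
	sink = x // keep the loop from being optimized away
}

func BenchmarkLSR(b *testing.B) {
	x, s := uint32(1<<31), uint32(3)
	for i := 0; i < b.N; i++ {
		x = x >> s // variable right shift
	}
	sink = x
}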
R=golang-dev, minux.ma, r, rsc
CC=golang-dev
https://golang.org/cl/6813045
Compiling expressions like:
s[s[s[s[s[s[s[s[s[s[s[s[i]]]]]]]]]]]]
makes 5g and 6g run out of registers. Such expressions can arise
if a slice is used to represent a permutation and the user wants
to iterate it.
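For example (illustrative, not from the CL), applying a permutation to itself a few times by hand quickly produces such nesting:

func apply3(perm []int, i int) int {
	// Written out directly this is perm[perm[perm[i]]]; a few more
	// levels exhaust the registers held for the intermediate values.
	return perm[perm[perm[i]]]
}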
This is due to the usual problem of allocating registers before
going down the expression tree, instead of allocating them in a
postfix way.
The functions cgenr and agenr (which generate a value into a newly
allocated register instead of an existing location) are either
introduced or, where they already existed, modified to allocate
the new register as late as possible, and sudoaddable is disabled
for OINDEX nodes so that igen/agenr is used instead.
Update #4207.
R=dave, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/6733055
This is an experiment in static analysis of Go programs
to understand which struct fields a program might use.
It is not part of the Go language specification, it must
be enabled explicitly when building the toolchain,
and it may be removed at any time.
After building the toolchain with GOEXPERIMENT=fieldtrack,
a specific field can be marked for tracking by including
`go:"track"` in the field tag:
package pkg

type T struct {
	F int `go:"track"`
	G int // untracked
}
To simplify usage, only named struct types can have
tracked fields, and only exported fields can be tracked.
The implementation works by making each function begin
with a sequence of no-op USEFIELD instructions declaring
which tracked fields are accessed by that function.
After the linker's dead code elimination removes unused
functions, the fields referred to by the remaining
USEFIELD instructions are the ones reported as used by
the binary.
The -k option to the linker specifies the fully qualified
symbol name (such as my/pkg.list) of a string variable that
should be initialized with the field tracking information
for the program. The field tracking string is a sequence
of lines, each terminated by a \n and describing a single
tracked field referred to by the program. Each line is made
up of one or more tab-separated fields. The first field is
the name of the tracked field, fully qualified, as in
"my/pkg.T.F". Subsequent fields give a shortest path of
reverse references from that field to a global variable or
function, corresponding to one way in which the program
might reach that field.
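A hypothetical example of the resulting string (all names invented), with tab-separated fields on each line:

my/pkg.T.F	my/pkg.ReadT	main.main

Here my/pkg.T.F is the tracked field, reached from the function my/pkg.ReadT, which is in turn reached from main.main.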
A common source of false positives in field tracking is
types with large method sets, because a reference to the
type descriptor carries with it references to all methods.
To address this problem, the CL also introduces a comment
annotation
//go:nointerface
that marks an upcoming method declaration as unavailable
for use in satisfying interfaces, both statically and
dynamically. Such a method is also invisible to package
reflect.
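A short sketch of the annotation in use (the type and method are hypothetical, and the directive only has effect when the experiment is enabled):

package pkg

type T struct {
	F int `go:"track"`
}

//go:nointerface
func (t *T) Len() int { return t.F } // cannot satisfy any interface; hidden from reflect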
Again, all of this is disabled by default. It only turns on
if you have GOEXPERIMENT=fieldtrack set during make.bash.
R=iant, ken
CC=golang-dev
https://golang.org/cl/6749064
This patch is enough to fix compilation of
exp/types tests but only passes a stripped down
version of the appropriate torture test.
Update #4207.
R=dave, nigeltao, rsc, golang-dev
CC=golang-dev
https://golang.org/cl/6621061
This CL makes the compiler understand that the type of
the len or cap of a map, slice, or string is 'int', not 'int32'.
It does not change the meaning of int, but it should make
the eventual change of the meaning of int in 6g a bit smoother.
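For example:

func sizes(m map[string]int, s []byte, str string) int {
	// Each of these results has type int, even though the compiler
	// previously treated them internally as int32.
	return len(m) + cap(s) + len(str)
}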
Update #2188.
R=ken, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6542059
The width was not being set on the address, which meant
that the optimizer could not find variables that overlapped
with it and mark them as having had their address taken.
This led to the compiler believing variables had been set
but never used and then optimizing away the set.
Fixes #4129.
R=ken2
CC=golang-dev
https://golang.org/cl/6552059
There may be further savings if convT2I can avoid the function call
if the cache is good and T is uintptr-shaped, a la convT2E, but that
will be a follow-up CL.
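For reference, a sketch of the kind of conversion convT2I performs (the interface and concrete type are illustrative):

type reader interface{ read() }

type file struct{ fd int }

func (f file) read() {}

func toReader(f file) reader {
	return f // concrete type to non-empty interface: runtime convT2I
}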
src/pkg/runtime:
benchmark old ns/op new ns/op delta
BenchmarkConvT2ISmall 43 15 -64.01%
BenchmarkConvT2IUintptr 45 14 -67.48%
BenchmarkConvT2ILarge 130 101 -22.31%
test/bench/go1:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 8588997000 8499058000 -1.05%
BenchmarkFannkuch11 5300392000 5358093000 +1.09%
BenchmarkGobDecode 30295580 31040190 +2.46%
BenchmarkGobEncode 18102070 17675650 -2.36%
BenchmarkGzip 774191400 771591400 -0.34%
BenchmarkGunzip 245915100 247464100 +0.63%
BenchmarkJSONEncode 123577000 121423050 -1.74%
BenchmarkJSONDecode 451969800 596256200 +31.92%
BenchmarkMandelbrot200 10060050 10072880 +0.13%
BenchmarkParse 10989840 11037710 +0.44%
BenchmarkRevcomp 1782666000 1716864000 -3.69%
BenchmarkTemplate 798286600 723234400 -9.40%
R=rsc, bradfitz, go.peter.90, daniel.morsing, dave, uriel
CC=golang-dev
https://golang.org/cl/6337058
Fixes #3708.
The fix to allow 5{c,g,l} to compile under clang 3.1 broke cross
compilation on darwin using the Apple default compiler on 10.7.3.
This failure was introduced in 9b455eb64690.
This has been tested by cross compiling on darwin/amd64 to linux/arm using
* gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
* clang version 3.1 (branches/release_31)
As well as on linux/arm using
* gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
* Ubuntu clang version 3.0-6ubuntu3 (tags/RELEASE_30/final) (based on LLVM 3.0)
* Debian clang version 3.1-4 (branches/release_31) (based on LLVM 3.1)
R=consalus, rsc
CC=golang-dev
https://golang.org/cl/6307058
Drop expecttaken function in favor of extra argument
to gbranch and bgen. Mark loop condition as likely to
be true, so that loops are generated inline.
The main benefit here is contiguous code when trying
to read the generated assembly. It has only minor effects
on the timing, and they mostly cancel the minor effects
that aligning function entry points had. One exception:
both changes made Fannkuch faster.
Compared to before CL 6244066 (before aligned functions)
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4222117400 4201958800 -0.48%
BenchmarkFannkuch11 3462631800 3215908600 -7.13%
BenchmarkGobDecode 20887622 20899164 +0.06%
BenchmarkGobEncode 9548772 9439083 -1.15%
BenchmarkGzip 151687 152060 +0.25%
BenchmarkGunzip 8742 8711 -0.35%
BenchmarkJSONEncode 62730560 62686700 -0.07%
BenchmarkJSONDecode 252569180 252368960 -0.08%
BenchmarkMandelbrot200 5267599 5252531 -0.29%
BenchmarkRevcomp25M 980813500 985248400 +0.45%
BenchmarkTemplate 361259100 357414680 -1.06%
Compared to tip (aligned functions):
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4140739800 4201958800 +1.48%
BenchmarkFannkuch11 3259914400 3215908600 -1.35%
BenchmarkGobDecode 20620222 20899164 +1.35%
BenchmarkGobEncode 9384886 9439083 +0.58%
BenchmarkGzip 150333 152060 +1.15%
BenchmarkGunzip 8741 8711 -0.34%
BenchmarkJSONEncode 65210990 62686700 -3.87%
BenchmarkJSONDecode 249394860 252368960 +1.19%
BenchmarkMandelbrot200 5273394 5252531 -0.40%
BenchmarkRevcomp25M 996013800 985248400 -1.08%
BenchmarkTemplate 360620840 357414680 -0.89%
R=ken2
CC=golang-dev
https://golang.org/cl/6245069
* Eliminate bounds check on known small shifts.
* Rewrite x<<s | x>>(32-s) as a rotate (constant s).
* More aggressive (but still minimal) range analysis.
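A small example of the rotate rewrite, assuming a constant shift count:

func rotl7(x uint32) uint32 {
	// With s constant, x<<s | x>>(32-s) compiles to a single rotate
	// instead of two shifts and an OR.
	return x<<7 | x>>25
}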
R=ken, dave, iant
CC=golang-dev
https://golang.org/cl/6209077
Using reg as the flag word was unfortunate, since the
default value is not 0 but NREG (==16), which happens
to be the bit NOPTR now. Clear it.
If I say this will fix the build, it won't.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/5690072
cc: add #pragma textflag to set it
runtime: mark mheap to go into noptr-bss.
remove special case in garbage collector
Remove the ARM from.flag field created by CL 5687044.
The DUPOK flag was already in p->reg, so keep using that.
Otherwise test/nilptr.go creates a very large binary.
Should fix the arm build.
Diagnosed by minux.ma; replacement for CL 5690044.
R=golang-dev, minux.ma, r
CC=golang-dev
https://golang.org/cl/5686060
Such variables would be put at 0(SP), leading to serious
corruption during zero initialization.
Fixes #3084.
R=golang-dev, r
CC=golang-dev, remy
https://golang.org/cl/5683052
The alternative is to record enough information that the
trap handler knows which registers contain cached globals
and can flush the registers back to their original locations.
That's significantly more work.
This only affects globals that have been written to.
Code that reads from a global should continue to registerize
as well as before.
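An illustrative contrast (not from the CL) between reads and writes of a global:

var g int

func reads(n int) int {
	t := 0
	for i := 0; i < n; i++ {
		t += g // read-only: g may stay cached in a register
	}
	return t
}

func writes(n int) {
	for i := 0; i < n; i++ {
		g++ // written: g is flushed back to memory, not registerized
	}
}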
Fixes #1304.
R=ken2
CC=golang-dev
https://golang.org/cl/5687046
ARM doesn't have the concept of scale, so I renamed the field
Addr.scale to Addr.flag to better reflect its true meaning.
R=rsc
CC=golang-dev
https://golang.org/cl/5687044
The garbage collector can avoid scanning this section, which
reduces collection time as well as the number of false positives.
Helps a little bit with issue 909, but certainly does not solve it.
R=ken2
CC=golang-dev
https://golang.org/cl/5671099
If the values being compared have different concrete types,
then they're clearly unequal without needing to invoke the
actual interface compare routine. This speeds tests for
specific values, like if err == io.EOF, by about 3x.
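For example:

import "io"

func atEOF(err error) bool {
	// The concrete types stored in the two interface values are
	// compared first; the full compare routine runs only if they match.
	return err == io.EOF
}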
benchmark old ns/op new ns/op delta
BenchmarkIfaceCmp100 843 287 -65.95%
BenchmarkIfaceCmpNil100 184 182 -1.09%
Fixes #2591.
R=ken2
CC=golang-dev
https://golang.org/cl/5651073