This patch is enough to fix compilation of the
exp/types tests but only passes a stripped-down
version of the appropriate torture test.
Update #4207.
R=dave, nigeltao, rsc, golang-dev
CC=golang-dev
https://golang.org/cl/6621061
This CL makes the compiler understand that the type of
the len or cap of a map, slice, or string is 'int', not 'int32'.
It does not change the meaning of int, but it should make
the eventual change of the meaning of int in 6g a bit smoother.
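(Illustration only, not part of this CL.) The rule in practice: len and cap of a map, slice, or string have type int, so storing the result in an int32 needs an explicit conversion:
package main

func main() {
	s := make([]byte, 10)
	var n int = len(s)          // len(s) has type int
	var m int32 = int32(cap(s)) // explicit conversion required; cap(s) is not int32
	_, _ = n, m
}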
Update #2188.
R=ken, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6542059
The width was not being set on the address, which meant
that the optimizer could not find variables that overlapped
with it and mark them as having had their address taken.
This led to the compiler believing variables had been set
but never used, and then optimizing away the set.
Fixes #4129.
R=ken2
CC=golang-dev
https://golang.org/cl/6552059
There may be further savings if convT2I can avoid the function call
if the cache is good and T is uintptr-shaped, a la convT2E, but that
will be a follow-up CL.
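(Illustrative sketch, not part of this CL; the names below are made up.) A conversion of the kind convT2I handles: assigning a concrete value to a non-empty interface fills in the itab/value pair, and that lookup is what the cache speeds up:
package main

import "fmt"

type Stringer interface {
	String() string
}

// ID is uintptr-shaped, the case the follow-up CL mentions.
type ID uintptr

func (id ID) String() string { return fmt.Sprintf("id=%d", uintptr(id)) }

func main() {
	var s Stringer = ID(42) // a convT2I-style conversion: concrete value to non-empty interface
	fmt.Println(s)
}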
src/pkg/runtime:
benchmark old ns/op new ns/op delta
BenchmarkConvT2ISmall 43 15 -64.01%
BenchmarkConvT2IUintptr 45 14 -67.48%
BenchmarkConvT2ILarge 130 101 -22.31%
test/bench/go1:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 8588997000 8499058000 -1.05%
BenchmarkFannkuch11 5300392000 5358093000 +1.09%
BenchmarkGobDecode 30295580 31040190 +2.46%
BenchmarkGobEncode 18102070 17675650 -2.36%
BenchmarkGzip 774191400 771591400 -0.34%
BenchmarkGunzip 245915100 247464100 +0.63%
BenchmarkJSONEncode 123577000 121423050 -1.74%
BenchmarkJSONDecode 451969800 596256200 +31.92%
BenchmarkMandelbrot200 10060050 10072880 +0.13%
BenchmarkParse 10989840 11037710 +0.44%
BenchmarkRevcomp 1782666000 1716864000 -3.69%
BenchmarkTemplate 798286600 723234400 -9.40%
R=rsc, bradfitz, go.peter.90, daniel.morsing, dave, uriel
CC=golang-dev
https://golang.org/cl/6337058
Fixes #3708.
The fix to allow 5{c,g,l} to compile under clang 3.1 broke cross
compilation on darwin using the Apple default compiler on 10.7.3.
This failure was introduced in 9b455eb64690.
This has been tested by cross compiling on darwin/amd64 to linux/arm using
* gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
* clang version 3.1 (branches/release_31)
As well as on linux/arm using
* gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
* Ubuntu clang version 3.0-6ubuntu3 (tags/RELEASE_30/final) (based on LLVM 3.0)
* Debian clang version 3.1-4 (branches/release_31) (based on LLVM 3.1)
R=consalus, rsc
CC=golang-dev
https://golang.org/cl/6307058
Drop expecttaken function in favor of extra argument
to gbranch and bgen. Mark loop condition as likely to
be true, so that loops are generated inline.
The main benefit here is contiguous code when trying
to read the generated assembly. It has only minor effects
on the timing, and they mostly cancel the minor effects
that aligning function entry points had. One exception:
both changes made Fannkuch faster.
Compared to before CL 6244066 (before aligned functions)
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4222117400 4201958800 -0.48%
BenchmarkFannkuch11 3462631800 3215908600 -7.13%
BenchmarkGobDecode 20887622 20899164 +0.06%
BenchmarkGobEncode 9548772 9439083 -1.15%
BenchmarkGzip 151687 152060 +0.25%
BenchmarkGunzip 8742 8711 -0.35%
BenchmarkJSONEncode 62730560 62686700 -0.07%
BenchmarkJSONDecode 252569180 252368960 -0.08%
BenchmarkMandelbrot200 5267599 5252531 -0.29%
BenchmarkRevcomp25M 980813500 985248400 +0.45%
BenchmarkTemplate 361259100 357414680 -1.06%
Compared to tip (aligned functions):
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4140739800 4201958800 +1.48%
BenchmarkFannkuch11 3259914400 3215908600 -1.35%
BenchmarkGobDecode 20620222 20899164 +1.35%
BenchmarkGobEncode 9384886 9439083 +0.58%
BenchmarkGzip 150333 152060 +1.15%
BenchmarkGunzip 8741 8711 -0.34%
BenchmarkJSONEncode 65210990 62686700 -3.87%
BenchmarkJSONDecode 249394860 252368960 +1.19%
BenchmarkMandelbrot200 5273394 5252531 -0.40%
BenchmarkRevcomp25M 996013800 985248400 -1.08%
BenchmarkTemplate 360620840 357414680 -0.89%
R=ken2
CC=golang-dev
https://golang.org/cl/6245069
* Eliminate bounds check on known small shifts.
* Rewrite x<<s | x>>(32-s) as a rotate (constant s); see the sketch below.
* More aggressive (but still minimal) range analysis.
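(Illustration only.) The pattern from the second bullet: with a constant shift s, x<<s | x>>(32-s) on a uint32 can be compiled as a single rotate instead of two shifts and an OR:
package main

import "fmt"

func rotl7(x uint32) uint32 {
	const s = 7
	return x<<s | x>>(32-s) // recognized as a 32-bit rotate because s is constant
}

func main() {
	fmt.Printf("%#08x\n", rotl7(0x80000001))
}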
R=ken, dave, iant
CC=golang-dev
https://golang.org/cl/6209077
Using reg as the flag word was unfortunate, since the
default value is not 0 but NREG (==16), which happens
to be the bit NOPTR now. Clear it.
If I say this will fix the build, it won't.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/5690072
cc: add #pragma textflag to set it
runtime: mark mheap to go into noptr-bss.
remove special case in garbage collector
Remove the ARM from.flag field created by CL 5687044.
The DUPOK flag was already in p->reg, so keep using that.
Otherwise test/nilptr.go creates a very large binary.
Should fix the arm build.
Diagnosed by minux.ma; replacement for CL 5690044.
R=golang-dev, minux.ma, r
CC=golang-dev
https://golang.org/cl/5686060
Such variables would be put at 0(SP), leading to serious
corruption during zero initialization.
Fixes #3084.
R=golang-dev, r
CC=golang-dev, remy
https://golang.org/cl/5683052
The alternative is to record enough information that the
trap handler knows which registers contain cached globals
and can flush the registers back to their original locations.
That's significantly more work.
This only affects globals that have been written to.
Code that reads from a global should continue to registerize
as well as before.
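(Illustration only; the variables are made up.) Only globals that are written stop being registerized; read-only uses keep working as before:
package main

var counter int // written: its cached copy must be flushed back before calls

var limit = 100 // only read: can still be kept in a register as before

func main() {
	for i := 0; i < limit; i++ {
		counter++ // a write to a global; not kept registerized
	}
	println(counter)
}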
Fixes #1304.
R=ken2
CC=golang-dev
https://golang.org/cl/5687046
ARM doesn't have the concept of scale, so I renamed the field
Addr.scale to Addr.flag to better reflect its true meaning.
R=rsc
CC=golang-dev
https://golang.org/cl/5687044
The garbage collector can avoid scanning this section, which
reduces collection time as well as the number of false positives.
Helps a little bit with issue 909, but certainly does not solve it.
R=ken2
CC=golang-dev
https://golang.org/cl/5671099
If the values being compared have different concrete types,
then they're clearly unequal without needing to invoke the
actual interface compare routine. This speeds tests for
specific values, like if err == io.EOF, by about 3x.
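(Illustration only, not from this CL.) The kind of comparison that benefits: checking err against a specific value such as io.EOF compares the dynamic types first, so a mismatched concrete type is rejected without calling the interface-compare routine:
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	r := strings.NewReader("")
	_, err := r.ReadByte() // returns io.EOF on an empty reader
	if err == io.EOF {     // concrete-type check short-circuits the comparison
		fmt.Println("got EOF")
	}
}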
benchmark old ns/op new ns/op delta
BenchmarkIfaceCmp100 843 287 -65.95%
BenchmarkIfaceCmpNil100 184 182 -1.09%
Fixes #2591.
R=ken2
CC=golang-dev
https://golang.org/cl/5651073
As a convenience to people working on the tools,
leave Makefiles that invoke the go dist tool appropriately.
They are not used during the build.
R=golang-dev, bradfitz, n13m3y3r, gustavo
CC=golang-dev
https://golang.org/cl/5636050
This can happen on Plan 9 if we're building
with the 32-bit and 64-bit host compilers, one
after the other.
R=rsc
CC=golang-dev
https://golang.org/cl/5599053
Also delete gotest, since it's messy to fix and slated for deletion anyway.
A couple of things outside src can't be tested any more. "go test" will be
fixed and these tests will be re-enabled. They're noisy for now.
Fixes #284.
R=rsc
CC=golang-dev
https://golang.org/cl/5598049
This fixes issue 2444.
I'll leave a big cleanup of all 31/32-bit size boundaries for another CL, though (see also issue 1700).
R=rsc
CC=golang-dev
https://golang.org/cl/5484058
To allow these types as map keys, we must fill in
equal and hash functions in their algorithm tables.
Structs or arrays that are "just memory", like [2]int,
can and do continue to use the AMEM algorithm.
Structs or arrays that contain special values like
strings or interface values use generated functions
for both equal and hash.
The runtime helper func runtime.equal(t, x, y) bool handles
the general equality case for x == y and calls out to
the equal implementation in the algorithm table.
For short values (<= 4 struct fields or array elements),
the sequence of elementwise comparisons is inlined
instead of calling runtime.equal.
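(Illustration only, not from this CL.) A struct containing a string can now be a map key because equal and hash are generated for it, while a plain-memory array like [2]int keeps using AMEM:
package main

import "fmt"

type key struct {
	name string // a string field forces generated equal/hash rather than AMEM
	id   int
}

func main() {
	m := map[key]int{}
	m[key{"a", 1}] = 10
	fmt.Println(m[key{"a", 1}]) // 10: keys compare equal field by field

	n := map[[2]int]string{{1, 2}: "x"} // plain-memory array keys use AMEM
	fmt.Println(n[[2]int{1, 2}])
}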
R=ken, mpimenov
CC=golang-dev
https://golang.org/cl/5451105