Accomplished by synchronizing the formatting of conversion errors between typecheck.c and subr.c.
Fixes #3984.
R=golang-dev, remyoudompheng, rsc
CC=golang-dev
https://golang.org/cl/6500064
This fixes a spurious 'invalid recursive type' error, and stops the compiler from emitting errors on uses of the invalid type.
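For illustration only (a hedged sketch, not necessarily the issue's test case), a recursive type whose self-reference goes through a pointer is perfectly legal and must compile without any 'invalid recursive type' diagnostic:

    package p

    // Recursion through an indirection such as a pointer is legal;
    // the compiler must not report it as an invalid recursive type,
    // and uses of the type must not produce follow-on errors.
    type List struct {
        Val  int
        Next *List
    }

    var head = &List{Val: 1}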
Fixes #3766.
R=golang-dev, dave, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/6443100
The compiler is incorrectly rejecting switches on arrays of
comparable types. It also doesn't catch incomparable structs
when typechecking the switch, leading to unreadable errors
during typechecking of the generated code.
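As a hedged illustration (not the issue's exact test case), a switch on an array value whose element type is comparable is valid Go and must be accepted:

    package main

    func main() {
        a := [2]int{1, 2}
        // Arrays with comparable element types are themselves
        // comparable, so switching on an array value is legal.
        switch a {
        case [2]int{1, 2}:
            println("match")
        case [2]int{3, 4}:
            println("no match")
        }
    }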
Fixes #3894.
R=rsc
CC=gobot, golang-dev, r, remy
https://golang.org/cl/6442074
The receive operator was given incorrect precedence
resulting in incorrect deletion of parentheses.
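A hypothetical illustration of why the parentheses are significant: with a channel of functions, (<-c)() and <-c() parse differently, so dropping the parentheses changes the meaning of the program.

    package main

    func main() {
        c := make(chan func() int, 1)
        c <- func() int { return 42 }
        // (<-c)() receives a function from c and calls it.
        // Without the parentheses, <-c() would parse as <-(c()),
        // i.e. call c and receive from its result, which does not
        // even compile here because c is not callable.
        v := (<-c)()
        println(v)
    }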
Fixes #3843.
R=rsc
CC=golang-dev, remy
https://golang.org/cl/6442049
They were previously ignored when deciding order and
detecting dependency loops.
Fixes #3824.
R=rsc, golang-dev
CC=golang-dev, remy
https://golang.org/cl/6455055
The error was caused by a call to implements() even when
the type switch variable was not an interface.
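A hedged sketch, in the style of a compiler errorcheck test (the program below is intentionally invalid): a type switch requires an interface operand, so the compiler should reject this with an ordinary diagnostic instead of running implements() on the non-interface type.

    package p

    func f(x int) {
        // Invalid: x is not an interface value, so this type switch
        // must be rejected with a clear error message.
        switch x.(type) {
        case int:
        }
    }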
Fixes #3786.
R=golang-dev, r
CC=golang-dev, remy
https://golang.org/cl/6354102
There may be further savings if convT2I can avoid the function call
when the cache is good and T is uintptr-shaped, a la convT2E, but
that will be a follow-up CL.
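For context, a hedged sketch of the conversions these routines back: converting a concrete value to a non-empty interface type goes through convT2I, while converting to the empty interface goes through convT2E.

    package main

    import "fmt"

    type Celsius int

    func (c Celsius) String() string { return fmt.Sprintf("%d C", int(c)) }

    func main() {
        var s fmt.Stringer = Celsius(20) // concrete -> non-empty interface: convT2I
        var e interface{} = Celsius(20)  // concrete -> empty interface: convT2E
        fmt.Println(s, e)
    }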
src/pkg/runtime:
benchmark old ns/op new ns/op delta
BenchmarkConvT2ISmall 43 15 -64.01%
BenchmarkConvT2IUintptr 45 14 -67.48%
BenchmarkConvT2ILarge 130 101 -22.31%
test/bench/go1:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 8588997000 8499058000 -1.05%
BenchmarkFannkuch11 5300392000 5358093000 +1.09%
BenchmarkGobDecode 30295580 31040190 +2.46%
BenchmarkGobEncode 18102070 17675650 -2.36%
BenchmarkGzip 774191400 771591400 -0.34%
BenchmarkGunzip 245915100 247464100 +0.63%
BenchmarkJSONEncode 123577000 121423050 -1.74%
BenchmarkJSONDecode 451969800 596256200 +31.92%
BenchmarkMandelbrot200 10060050 10072880 +0.13%
BenchmarkParse 10989840 11037710 +0.44%
BenchmarkRevcomp 1782666000 1716864000 -3.69%
BenchmarkTemplate 798286600 723234400 -9.40%
R=rsc, bradfitz, go.peter.90, daniel.morsing, dave, uriel
CC=golang-dev
https://golang.org/cl/6337058
If there are mutually recursive functions, there is a cycle in
the dependency graph, so the order is actually dependency order
among the strongly connected components: mutually recursive
functions get put into the same batch and analyzed together.
(Until now the entire package was put in one batch.)
The non-recursive case (a single function, perhaps with some
closures inside) could be made more precise about inputs that
escape only back to outputs, but that is not implemented yet.
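As a hedged illustration of such a strongly connected component, two mutually recursive functions form a cycle in the call graph and are therefore analyzed together as one batch:

    package p

    // even and odd call each other, so they form a single strongly
    // connected component and must be escape-analyzed together.
    func even(n int) bool {
        if n == 0 {
            return true
        }
        return odd(n - 1)
    }

    func odd(n int) bool {
        if n == 0 {
            return false
        }
        return even(n - 1)
    }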
R=ken2
CC=golang-dev, lvd
https://golang.org/cl/6304050
CL 4313064 fixed its test case but did not address a
general enough problem:
type T1 struct { F *T2 }
type T2 T1
type T3 T2
could still end up copying the definition of T1 for T2
before T1 was done being evaluated, or T3 before T2
was done.
In order to propagate the updates correctly,
record a copy of an incomplete type for re-execution
once the type is completed. Roll back CL 4313064.
Fixes #3709.
R=ken2
CC=golang-dev, lstoakes
https://golang.org/cl/6301059
The original implementation of closures created the
underlying top-level function during walk, which is fairly
late in the compilation process and caused ordering complications
because earlier stages had to be repeated an arbitrary number of
times.
Create the underlying function during typecheck, much
earlier, so that later stages can be run just once.
The result is a simpler compilation sequence.
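For illustration (a sketch; the generated name shown is only indicative), a closure such as the one below is compiled by lifting the function literal into a hidden top-level function that carries the captured variable; this CL creates that hidden function during typecheck rather than walk.

    package p

    // The function literal is lifted into a hidden top-level function
    // (named along the lines of p.func·001 in this era's compiler)
    // that captures n; it is now created during typecheck.
    func counter() func() int {
        n := 0
        return func() int {
            n++
            return n
        }
    }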
R=ken2
CC=golang-dev
https://golang.org/cl/6279049
Drop expecttaken function in favor of extra argument
to gbranch and bgen. Mark loop condition as likely to
be true, so that loops are generated inline.
The main benefit here is contiguous code when trying
to read the generated assembly. It has only minor effects
on the timing, and they mostly cancel the minor effects
that aligning function entry points had. One exception:
both changes made Fannkuch faster.
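A hedged sketch of the effect at the source level: for a loop like the one below, marking the condition as likely taken lets the body fall through contiguously after the test, with only the loop exit needing a forward branch.

    package p

    func sum(a []int) int {
        s := 0
        // The condition i < len(a) is marked likely, so the body is
        // emitted inline right after the test; exiting the loop takes
        // the unlikely forward branch.
        for i := 0; i < len(a); i++ {
            s += a[i]
        }
        return s
    }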
Compared to before CL 6244066 (before aligned functions)
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4222117400 4201958800 -0.48%
BenchmarkFannkuch11 3462631800 3215908600 -7.13%
BenchmarkGobDecode 20887622 20899164 +0.06%
BenchmarkGobEncode 9548772 9439083 -1.15%
BenchmarkGzip 151687 152060 +0.25%
BenchmarkGunzip 8742 8711 -0.35%
BenchmarkJSONEncode 62730560 62686700 -0.07%
BenchmarkJSONDecode 252569180 252368960 -0.08%
BenchmarkMandelbrot200 5267599 5252531 -0.29%
BenchmarkRevcomp25M 980813500 985248400 +0.45%
BenchmarkTemplate 361259100 357414680 -1.06%
Compared to tip (aligned functions):
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4140739800 4201958800 +1.48%
BenchmarkFannkuch11 3259914400 3215908600 -1.35%
BenchmarkGobDecode 20620222 20899164 +1.35%
BenchmarkGobEncode 9384886 9439083 +0.58%
BenchmarkGzip 150333 152060 +1.15%
BenchmarkGunzip 8741 8711 -0.34%
BenchmarkJSONEncode 65210990 62686700 -3.87%
BenchmarkJSONDecode 249394860 252368960 +1.19%
BenchmarkMandelbrot200 5273394 5252531 -0.40%
BenchmarkRevcomp25M 996013800 985248400 -1.08%
BenchmarkTemplate 360620840 357414680 -0.89%
R=ken2
CC=golang-dev
https://golang.org/cl/6245069
It's sad to introduce a new macro, but rnd shows up consistently
in profiles, and the function call overwhelms the two arithmetic
instructions it performs.
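For reference, a hedged Go sketch of the rounding the macro performs (the real code is C and its exact form may differ): round an offset up to the next multiple of a power-of-two alignment.

    package p

    // roundUp mirrors what rnd does: the add and the and-not are the
    // two arithmetic instructions mentioned above. r is assumed to be
    // a power of two.
    func roundUp(o, r uintptr) uintptr {
        return (o + r - 1) &^ (r - 1)
    }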
R=r
CC=golang-dev
https://golang.org/cl/6260051
for expr1, expr2 = range slice
was assigning to expr1 and expr2 in sequence
instead of in parallel. Now it assigns in parallel,
as it should. This matters for things like
for i, x[i] = range slice.
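A hedged illustration of the parallel-assignment semantics: the index in x[i] is evaluated with the value i had before that iteration's assignment, exactly as in the ordinary tuple assignment i, x[i] = k, v.

    package main

    import "fmt"

    func main() {
        x := []int{0, 0, 0, 0}
        i := 0
        // Each iteration behaves like the parallel assignment
        //     i, x[i] = k, v
        // so x[i] uses the previous value of i, not the new key.
        for i, x[i] = range []int{10, 20, 30} {
        }
        fmt.Println(i, x)
    }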
Fixes #3464.
R=ken2
CC=golang-dev
https://golang.org/cl/6252048
* Eliminate bounds check on known small shifts.
* Rewrite x<<s | x>>(32-s) as a rotate (constant s); see the sketch below.
* More aggressive (but still minimal) range analysis.
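A hedged sketch of the pattern the rotate rewrite recognizes (constant shift count only):

    package p

    // With a constant shift count s, this shift-or pattern is
    // recognized and compiled as a single rotate instruction
    // instead of two shifts and an OR.
    func rotl7(x uint32) uint32 {
        const s = 7
        return x<<s | x>>(32-s)
    }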
R=ken, dave, iant
CC=golang-dev
https://golang.org/cl/6209077
Before:
./x.go:6: first argument to append must be slice; have nil
After:
./x.go:6: first argument to append must be typed slice; have untyped nil
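A hypothetical x.go that triggers this diagnostic (line numbers may differ):

    package main

    func main() {
        // nil is untyped, so it cannot be the slice argument to
        // append; the new message calls this out explicitly.
        _ = append(nil, 1)
    }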
Fixes #3616.
R=ken2
CC=golang-dev
https://golang.org/cl/6209067
The two optimizations for small structs and arrays
were missing the implicit cast from ideal bool.
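A hedged illustration (not necessarily the issue's test case): comparing small structs or arrays yields an untyped boolean, which must be implicitly converted when the context expects a named boolean type.

    package p

    type B bool

    type Point struct{ X, Y int }

    // The struct and array comparisons below go through the
    // special-cased small-struct/array optimizations; their untyped
    // boolean results must still be implicitly converted to B.
    func eqStruct(a, b Point) B { return a == b }

    func eqArray(a, b [2]int) B { return a == b }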
Fixes #3351.
R=rsc, lvd
CC=golang-dev
https://golang.org/cl/5848062
Windows has paths like C:/Users/ADMIN~1. Also, it so happens
that go/parser allows ~ in import paths. So does the spec.
Fixes the build too.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/5777073
1. consistent usage section (go tool xxx)
2. reformat cmd/ld document with minor corrections:
   - document which -H flags are valid on which ld
   - document that the -d flag can't be used on Windows
   - document -Hwindowsgui
R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/5782043
The spec is looser than the current implementation.
The spec edit was made in CL 4444050 (May 2011)
but I never implemented it.
Fixes #3244.
R=ken2
CC=golang-dev
https://golang.org/cl/5785049