This brings cmd/gc in line with the spec on this question.
It might break existing code, but that code was not conformant
with the spec.
Credit to Rémy for finding the broken code.
Fixes #6366.
LGTM=r
R=golang-codereviews, r
CC=adonovan, golang-codereviews, gri
https://golang.org/cl/129550043
https://golang.org/cl/152700045/ made it possible for struct literals assigned to globals to use <N> as the RHS. Normally, this is to zero out variables on first use. Because globals are already zero (or their linker initialized value), we just ignored this.
Now that <N> can occur from non-initialization code, we need to emit this code. We don't use <N> for initialization of globals any more, so this shouldn't cause any excessive zeroing.
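As a hedged illustration (hypothetical code, not the original test case), the kind of program affected looks like this: a zero struct literal assigned to a global outside of initialization must actually emit the zeroing stores.

    package main

    type T struct{ a, b int }

    var g = T{1, 2} // linker-initialized value

    func reset() {
        // Outside initialization the compiler may represent the RHS
        // as <N>; after this fix the stores that clear g are emitted
        // instead of being ignored.
        g = T{}
    }

    func main() {
        reset()
        if g.a != 0 || g.b != 0 {
            panic("global was not zeroed")
        }
    }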
Fixes #8961.
LGTM=rsc
R=golang-codereviews, rsc
CC=bradfitz, golang-codereviews
https://golang.org/cl/154540044
During an anylit run, nodes such as SLICEARR(statictmp, [:])
may be generated and are expected to be found unchanged by
gen_as_init.
In some walks (in particular walkselect), the statement
may be walked again and lowered to its usual form, leading to a
crash.
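A hedged guess at the triggering shape (hypothetical; the real reproducers are attached to the issues below): a composite literal inside a select case is lowered once by anylit, and a second walk must not lower the resulting node again.

    package main

    // Hypothetical reproducer shape: walkselect re-walks the case
    // statement, so the SLICEARR(statictmp, [:]) produced by anylit
    // for the slice literal could be lowered to its usual form and
    // no longer be recognized by gen_as_init.
    func send(c chan []int) {
        select {
        case c <- []int{1, 2, 3}:
        }
    }

    func main() {
        c := make(chan []int, 1)
        send(c)
        <-c
    }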
Fixes #8017.
Fixes #8024.
Fixes #8058.
LGTM=rsc
R=golang-codereviews, dvyukov, gobot, rsc
CC=golang-codereviews
https://golang.org/cl/112080043
Previously it might happen before calling dowidth and
result in a compiler crash.
Fixes #8060.
LGTM=dvyukov, rsc
R=golang-codereviews, dvyukov, gobot, rsc
CC=golang-codereviews
https://golang.org/cl/110980044
newstackcall creates a new stack segment, and we want to
be able to throw away all that code.
LGTM=khr
R=khr, iant
CC=dvyukov, golang-codereviews, r
https://golang.org/cl/139270043
Normally, an expression of the form x.f or *y can be reordered
with function calls and communications.
Select is stricter than normal: each channel expression is evaluated
in source order. If you have case <-x.f and case <-foo(), then if the
evaluation of x.f causes a panic, foo must not have been called.
(This is in contrast to an expression like x.f + foo().)
Enforce this stricter ordering.
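A small example of the ordering being enforced (hypothetical names):

    package main

    import "fmt"

    type S struct{ f chan int }

    var called bool

    func foo() chan int {
        called = true
        return make(chan int)
    }

    func main() {
        var x *S // nil, so evaluating x.f panics
        defer func() {
            recover()
            // Channel operands are evaluated in source order, so foo
            // must not have run when x.f panics.
            fmt.Println("foo called:", called) // foo called: false
        }()
        select {
        case <-x.f:
        case <-foo():
        }
    }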
Fixes #8336.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews, r
https://golang.org/cl/126570043
Credit to Rémy for finding and writing the test case.
Fixes #8325.
LGTM=r
R=golang-codereviews, r
CC=dave, golang-codereviews, iant, remyoudompheng
https://golang.org/cl/124950043
Fixes #8074.
The issue was not reproducible at revision
go version devel +e0ad7e329637 Thu Jun 19 22:19:56 2014 -0700 linux/arm
but include the original test case in case the issue resurfaces.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/107290043
A runtime.Goexit during a panic-invoked deferred call
left the panic stack intact even though all the stack frames
are gone when the goroutine is torn down.
The next goroutine to reuse that struct will have a
bogus panic stack and can cause the traceback routines
to walk into garbage.
Most likely to happen during tests, because t.Fatal might
be called during a deferred func and uses runtime.Goexit.
This "not enough cleared in Goexit" failure mode has
happened to us multiple times now. Clear all the pointers
that don't make sense to keep, not just gp->panic.
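A hedged sketch of the failure mode (hypothetical; the real trigger was t.Fatal inside a deferred func):

    package main

    import "runtime"

    func main() {
        done := make(chan bool)
        go func() {
            defer func() { done <- true }()
            defer func() {
                // Goexit during a panic-invoked deferred call: the
                // frames unwind, but gp->panic was left pointing at
                // the dead panic stack, poisoning the g struct for
                // the next goroutine that reuses it.
                runtime.Goexit()
            }()
            panic("boom")
        }()
        <-done
    }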
Fixes #8158.
LGTM=iant, dvyukov
R=iant, dvyukov
CC=golang-codereviews
https://golang.org/cl/102220043
I am not sure what the rounding here was
trying to do, but it was skipping the first
pointer on Native Client.
The code above the rounding already checks
that xoffset is widthptr-aligned, so the rnd
was a no-op everywhere but on Native Client.
And on Native Client it was wrong.
Perhaps it was supposed to be rounding down,
not up, but zerorange handles the extra 32 bits
correctly, so the rnd does not seem to be necessary
at all.
This wouldn't be worth doing for Go 1.3 except
that it can affect code on the playground.
Fixes #8155.
LGTM=r, iant
R=golang-codereviews, r, iant
CC=dvyukov, golang-codereviews, khr
https://golang.org/cl/108740047
The 1-byte write was silently clearing a byte on the stack.
If there was another function call with more arguments
in the same stack frame, no harm done.
Otherwise, if the variable at that location was already zero,
no harm done.
Otherwise, problems.
Fixes #8139.
LGTM=dsymonds
R=golang-codereviews, dsymonds
CC=golang-codereviews, iant, r
https://golang.org/cl/100940043
We were requiring that the defer stack and the panic stack
be completely processed, thinking that if any were left over
the stack scan and the defer stack/panic stack must be out
of sync. It turns out that the panic stack may well have
leftover entries in some situations, and that's okay.
Fixes #8132.
LGTM=minux, r
R=golang-codereviews, minux, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/100900044
The 'continuation pc' is where the frame will continue
execution, if anywhere. For a frame that stopped execution
due to a CALL instruction, the continuation pc is immediately
after the CALL. But for a frame that stopped execution due to
a fault, the continuation pc is the pc after the most recent CALL
to deferproc in that frame, or else 0.
The liveness information is only recorded for CALL instructions.
This change makes sure that we never look for liveness information
except for CALL instructions.
Using a valid PC fixes crashes when a garbage collection or
stack copying tries to process a stack frame that has faulted.
Record continuation pc in heapdump (format change).
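A hedged reconstruction of the scenario (hypothetical code): a frame that faulted is stopped at a non-CALL pc, and a stack copy running during its deferred call must use the liveness at the continuation pc instead.

    package main

    //go:noinline
    func grow(n int) {
        // Recurse to force stack growth, and hence stack copying,
        // while the faulted frame of f is still on the stack.
        if n > 0 {
            var pad [256]byte
            _ = pad
            grow(n - 1)
        }
    }

    func f(p *int) (x int) {
        defer func() {
            recover()
            grow(100) // copystack must walk f's faulted frame safely
        }()
        x = *p // p == nil: f stops at a fault, not at a CALL
        return x
    }

    func main() {
        f(nil)
    }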
Fixes #8048.
LGTM=iant, khr
R=khr, iant, dvyukov
CC=golang-codereviews, r
https://golang.org/cl/100870044
This CL forces the optimizer to preserve some memory stores
that would be redundant except that a stack scan due to garbage
collection or stack copying might look at them during a function call.
As such, it forces additional memory writes and therefore slows
down the execution of some programs, especially garbage-heavy
programs that are already limited by memory bandwidth.
The slowdown can be as much as 7% for end-to-end benchmarks.
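A hedged illustration of the class of store being preserved (names hypothetical):

    package main

    import "fmt"

    type node struct{ next *node }

    //go:noinline
    func visit(n *node) { fmt.Sprint(n) }

    func walk(head *node) {
        for n := head; n != nil; n = n.next {
            // Even if n is held in a register across this call, the
            // store of n to its stack slot must be kept so that a
            // stack scan during visit sees an up-to-date pointer.
            visit(n)
        }
    }

    func main() {
        walk(&node{next: &node{}})
    }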
These numbers are from running go1.test -test.benchtime=5s three times,
taking the best (lowest) ns/op for each benchmark. I am excluding
benchmarks with time/op < 10us to focus on macro effects.
All benchmarks are on amd64.
Comparing tip (a27f34c771cb) against this CL on an Intel Core i5 MacBook Pro:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 3876500413 3856337341 -0.52%
BenchmarkFannkuch11 2965104777 2991182127 +0.88%
BenchmarkGobDecode 8563026 8788340 +2.63%
BenchmarkGobEncode 5050608 5267394 +4.29%
BenchmarkGzip 431191816 434168065 +0.69%
BenchmarkGunzip 107873523 110563792 +2.49%
BenchmarkHTTPClientServer 85036 86131 +1.29%
BenchmarkJSONEncode 22143764 22501647 +1.62%
BenchmarkJSONDecode 79646916 85658808 +7.55%
BenchmarkMandelbrot200 4720421 4700108 -0.43%
BenchmarkGoParse 4651575 4712247 +1.30%
BenchmarkRegexpMatchMedium_1K 71986 73490 +2.09%
BenchmarkRegexpMatchHard_1K 111018 117495 +5.83%
BenchmarkRevcomp 648798723 659352759 +1.63%
BenchmarkTemplate 112673009 112819078 +0.13%
Comparing tip (a27f34c771cb) against this CL on an Intel Xeon E5520:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 5461110720 5393104469 -1.25%
BenchmarkFannkuch11 4314677151 4327177615 +0.29%
BenchmarkGobDecode 11065853 11235272 +1.53%
BenchmarkGobEncode 6500065 6959837 +7.07%
BenchmarkGzip 647478596 671769097 +3.75%
BenchmarkGunzip 139348579 141096376 +1.25%
BenchmarkHTTPClientServer 69376 73610 +6.10%
BenchmarkJSONEncode 30172320 31796106 +5.38%
BenchmarkJSONDecode 113704905 114239137 +0.47%
BenchmarkMandelbrot200 6032730 6003077 -0.49%
BenchmarkGoParse 6775251 6405995 -5.45%
BenchmarkRegexpMatchMedium_1K 111832 113895 +1.84%
BenchmarkRegexpMatchHard_1K 161112 168420 +4.54%
BenchmarkRevcomp 876363406 892319935 +1.82%
BenchmarkTemplate 146273096 148998339 +1.86%
Just to get a sense of where we are compared to the previous release,
here are the same benchmarks comparing Go 1.2 to this CL.
Comparing Go 1.2 against this CL on an Intel Core i5 MacBook Pro:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4370077662 3856337341 -11.76%
BenchmarkFannkuch11 3347052657 2991182127 -10.63%
BenchmarkGobDecode 8791384 8788340 -0.03%
BenchmarkGobEncode 4968759 5267394 +6.01%
BenchmarkGzip 437815669 434168065 -0.83%
BenchmarkGunzip 94604099 110563792 +16.87%
BenchmarkHTTPClientServer 87798 86131 -1.90%
BenchmarkJSONEncode 22818243 22501647 -1.39%
BenchmarkJSONDecode 97182444 85658808 -11.86%
BenchmarkMandelbrot200 4733516 4700108 -0.71%
BenchmarkGoParse 5054384 4712247 -6.77%
BenchmarkRegexpMatchMedium_1K 67612 73490 +8.69%
BenchmarkRegexpMatchHard_1K 107321 117495 +9.48%
BenchmarkRevcomp 733270055 659352759 -10.08%
BenchmarkTemplate 109304977 112819078 +3.21%
Comparing Go 1.2 against this CL on an Intel Xeon E5520:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 5986953594 5393104469 -9.92%
BenchmarkFannkuch11 4861139174 4327177615 -10.98%
BenchmarkGobDecode 11830997 11235272 -5.04%
BenchmarkGobEncode 6608722 6959837 +5.31%
BenchmarkGzip 661875826 671769097 +1.49%
BenchmarkGunzip 138630019 141096376 +1.78%
BenchmarkHTTPClientServer 71534 73610 +2.90%
BenchmarkJSONEncode 30393609 31796106 +4.61%
BenchmarkJSONDecode 139645860 114239137 -18.19%
BenchmarkMandelbrot200 5988660 6003077 +0.24%
BenchmarkGoParse 6974092 6405995 -8.15%
BenchmarkRegexpMatchMedium_1K 111331 113895 +2.30%
BenchmarkRegexpMatchHard_1K 165961 168420 +1.48%
BenchmarkRevcomp 995049292 892319935 -10.32%
BenchmarkTemplate 145623363 148998339 +2.32%
Fixes #8036.
LGTM=khr
R=golang-codereviews, josharian, khr
CC=golang-codereviews, iant, r
https://golang.org/cl/99660044
CL 51010045 fixed the first of these:
cmd/gc: return canonical Node* from temp
For historical reasons, temp was returning a copy
of the created Node*, not the original Node*.
This meant that if analysis recorded information in the
returned node (for example, n->addrtaken = 1), the
analysis would not show up on the original Node*, the
one kept in fn->dcl and consulted during liveness
bitmap creation.
Correct this, and watch for it when setting addrtaken.
Fixes #7083.
R=khr, dave, minux.ma
CC=golang-codereviews
https://golang.org/cl/51010045
CL 53200043 fixed the second:
cmd/gc: fix race build
Missed this case in CL 51010045.
TBR=khr
CC=golang-codereviews
https://golang.org/cl/53200043
This CL fixes the third. There are only three nod(OXXX, ...)
calls in sinit.c, so maybe we're done. Embarrassing that it
took three CLs to find all three.
Fixes #8028.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant
https://golang.org/cl/100800046
In the first very rough draft of the reordering code
that was introduced in the Go 1.3 cycle, the pre-allocated
temporary for a ... argument was held in n->right.
It moved to n->alloc but the code avoiding n->right
was left behind in order.c. In copy(x, <-c), the receive
is in n->right and must be processed. Delete the special
case code, removing the bug.
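A minimal sketch of the fixed case:

    package main

    func main() {
        c := make(chan []int, 1)
        c <- []int{1, 2, 3}
        x := make([]int, 3)
        // The receive sits in n->right; the order pass must process
        // it rather than skip n->right as the old ... handling did.
        if n := copy(x, <-c); n != 3 || x[0] != 1 {
            panic("copy from channel receive failed")
        }
    }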
Fixes #8039.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100820044
The code cannot have worked before, because it was
trying to use the old value in a range check for the new
type, which might have a different representation
(hence the 'internal compiler error').
Fixes #8073.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/98630045
I don't know when the bug was fixed, but empirically it was.
Make sure it stays fixed by adding a test.
Fixes #7884.
LGTM=adg
R=golang-codereviews, adg
CC=golang-codereviews
https://golang.org/cl/93500043
The temporary-introducing pass was not recursing
into the argument of a receive operation.
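A hedged guess at the triggering shape (hypothetical): a receive whose operand itself needs a temporary, such as a nested receive.

    package main

    func main() {
        c := make(chan chan int, 1)
        inner := make(chan int, 1)
        inner <- 42
        c <- inner
        // The operand of the outer receive is itself a receive; the
        // temporary-introducing pass must recurse into it.
        if v := <-<-c; v != 42 {
            panic("nested receive failed")
        }
    }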
Fixes #8011.
LGTM=r
R=golang-codereviews, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/91540043
The introduction of temporaries in order.c was not
quite right for two corner cases:
1) The rewrite that pushed new variables on the lhs of
a receive into the body of the case was dropping the
declaration of the variables. If the variables escape,
the declaration is what allocates them.
Caught by escape analysis sanity check.
In fact the declarations should move into the body
always, so that we only allocate if the corresponding
case is selected. Do that. (This is an optimization that
was already present in Go 1.2. The new order code just
made it stop working.)
Fixes #7997.
2) The optimization to turn a single-recv select into
an ordinary receive assumed it could take the address
of the destination; not so if the destination is _.
Fixes #7998.
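Minimal sketches of both corner cases (hypothetical names):

    package main

    var sink *int

    //go:noinline
    func keep(p *int) { sink = p }

    func main() {
        ch := make(chan int, 1)
        ch <- 1
        // Case 1: x escapes, so its declaration (its allocation) must
        // move into the case body and run only if this case fires.
        select {
        case x := <-ch:
            keep(&x)
        }
        ch <- 2
        // Case 2: a single-recv select with destination _; the
        // rewrite to a plain receive must not take the address of _.
        select {
        case _ = <-ch:
        }
    }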
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/100480043
The GC program describing a data structure sometimes trusts the
pointer base type and other times does not (if not, the garbage collector
must fall back on per-allocation type information stored in the heap).
Make the scanning of a pointer in an interface do the same.
This fixes a crash in a particular use of reflect.SliceHeader.
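A hedged sketch of the kind of use involved (hypothetical; the original reproducer is in issue 8004):

    package main

    import (
        "reflect"
        "runtime"
        "unsafe"
    )

    func main() {
        backing := []int{1, 2, 3}
        var sh reflect.SliceHeader
        sh.Data = uintptr(unsafe.Pointer(&backing[0]))
        sh.Len, sh.Cap = len(backing), len(backing)
        // Storing the header behind an interface exercises the
        // interface-pointer scanning path, which must apply the same
        // "trust the pointer base type?" rule as the GC program.
        var i interface{} = &sh
        _ = i
        runtime.GC()
    }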
Fixes #8004.
LGTM=khr
R=golang-codereviews, khr
CC=0xe2.0x9a.0x9b, golang-codereviews, iant, r
https://golang.org/cl/100470045
Globals, function arguments, and results are special cases in
registerization.
Globals must be flushed aggressively, because nearly any
operation can cause a panic, and the recovery code must see
the latest values. Globals also must be loaded aggressively,
because nearly any store through a pointer might be updating a
global: the compiler cannot see all the "address of"
operations on globals, especially exported globals. To
accomplish this, mark all globals as having their address
taken, which effectively disables registerization.
If a function contains a defer statement, the function results
must be flushed aggressively, because nearly any operation can
cause a panic, and the deferred code may call recover, causing
the original function to return the current values of its
function results. To accomplish this, mark all function
results as having their address taken if the function contains
any defer statements. This causes not just aggressive flushing
but also aggressive loading. The aggressive loading is
overkill but the best we can do in the current code.
Function arguments must be considered live at all safe points
in a function, because garbage collection always preserves
them: they must be up-to-date in order to be preserved
correctly. Accomplish this by marking them live at all call
sites. An earlier attempt at this marked function arguments as
having their address taken, which disabled registerization
completely, making programs slower. This CL's solution allows
registerization while preserving safety. The benchmark speedup
is caused by being able to registerize again (the earlier CL
lost the same amount).
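A short example of why results must be flushed when a function defers (ordinary Go semantics, not code from this CL):

    package main

    import (
        "errors"
        "fmt"
    )

    func div(a, b int) (result int, err error) {
        defer func() {
            if recover() != nil {
                // The deferred code reads and overwrites the current
                // values of the named results, so they cannot live
                // only in registers across instructions that might
                // panic.
                result, err = 0, errors.New("division by zero")
            }
        }()
        result = a / b // may panic
        return result, nil
    }

    func main() {
        fmt.Println(div(6, 3)) // 2 <nil>
        fmt.Println(div(1, 0)) // 0 division by zero
    }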
benchmark old ns/op new ns/op delta
BenchmarkEqualPort32 61.4 56.0 -8.79%
benchmark old MB/s new MB/s speedup
BenchmarkEqualPort32 521.56 570.97 1.09x
Fixes #1304. (again)
Fixes #7944. (again)
Fixes #7984.
Fixes #7995.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant, r
https://golang.org/cl/97500044