The GC info masks for slices and strings were changed in
commit caab29a25f68, but the reference masks used by
gcinfo_test for power64x hadn't caught up. They are now
identical to amd64's, so this CL fixes the test by combining
the reference masks for these platforms.
LGTM=rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/162620044
reflect/asm_power64x.s was missing changes made to other
platforms for stack maps. This CL ports those changes. With
this fix, the reflect test passes on power64x.
LGTM=rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/170870043
fastrand1 depends on testing the high bit of its uint32 state.
For efficiency, all of the architectures implement this as a
sign bit test. However, on power64, fastrand1 was using a
64-bit sign test on the zero-extended 32-bit state. This
always failed, causing fastrand1 to have very short periods
and often decay to 0 and get stuck.
Fix this by using a 32-bit signed compare instead of a 64-bit
compare. This fixes various tests for the randomization of
select and of map iteration.
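For illustration, here is the same step written in Go (a sketch only,
not the runtime's assembly; the feedback constant is shown just to make
the sketch concrete):

	func fastrandStep(x uint32) uint32 {
		x += x            // shift the 32-bit state left by one
		if int32(x) < 0 { // 32-bit sign test: sees the high bit
			x ^= 0x88888eef // conditional feedback XOR
		}
		return x
	}

The power64 code was effectively testing int64(uint64(x)) < 0, which is
never true for a zero-extended 32-bit value, so the XOR never ran and
repeated doubling drove the state to 0.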
LGTM=rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/166990043
All three cases of clearfat were wrong on power64x.
The case that handles 1032 bytes and up and the case that
handles 32 bytes and up both use MOVDU (one in a directly
generated loop and the other via duffzero), which leaves the
pointer register pointing at the *last written* address. The
generated code was not accounting for this, so the byte fill
loop was re-zeroing the last zeroed dword rather than the
bytes following it. Fix this by adding an additional 8-byte
offset to the byte zeroing loop.
The case that handled under 32 bytes was also wrong. It
didn't update the pointer register at all, so the byte zeroing
loop was simply re-zeroing the beginning of the region. Again,
the fix is to add an offset to the byte zeroing loop to
account for this.
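For clarity, the pointer arithmetic in Go terms (illustrative only; the
real code is generated power64 assembly and the thresholds are the ones
described above):

	// The pointer starts one dword before the region. Each MOVDU-style
	// store advances it first and then writes, so when the dword loop
	// finishes it points at the last dword written, not past it.
	func clearSketch(mem []byte) {
		p := -8
		for p+16 <= len(mem) {
			p += 8
			for i := 0; i < 8; i++ {
				mem[p+i] = 0
			}
		}
		tail := len(mem) % 8
		// Buggy: the byte loop started at p, re-zeroing part of the
		// last dword and leaving the real tail untouched.
		// Fixed: start 8 bytes past the last dword written.
		for i := 0; i < tail; i++ {
			mem[p+8+i] = 0
		}
	}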
LGTM=dave, bradfitz
R=rsc, dave, bradfitz
CC=golang-codereviews
https://golang.org/cl/168870043
Apparently I had already moved on to fixing another problem
when I submitted CL 169790043.
LGTM=dave
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/165210043
No real problems found. Just lots of argument names that
didn't quite match up.
LGTM=rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/169790043
This adds a test to runtime·check to ensure CAS of large
unsigned 32-bit numbers does not accidentally sign-extend its
arguments.
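For reference, the same idea expressed with sync/atomic (a sketch of the
check, not the actual runtime·check code, which uses the runtime's own cas):

	var x uint32 = 0xffffffff // high bit set; sign extension would break the compare
	if !atomic.CompareAndSwapUint32(&x, 0xffffffff, 0xfffffffe) {
		panic("cas failed on a large unsigned value")
	}
	if x != 0xfffffffe {
		panic("cas stored the wrong value")
	}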
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/162490044
Previously, the power64x runtime assembly was sloppy about
using sign-extending versus zero-extending moves of arguments
and return values. I think all of the cases that actually
mattered have been fixed in recent CLs; this CL fixes up the
few remaining mismatches.
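For reference, the distinction in Go terms (illustrative only; in the
assembly the difference is between sign-extending and zero-extending
move instructions, MOVW versus MOVWZ):

	var x uint32 = 0x80000000
	_ = int64(x)        // zero-extended: 2147483648
	_ = int64(int32(x)) // sign-extended: -2147483648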
LGTM=rsc
R=rsc, dave
CC=golang-codereviews
https://golang.org/cl/162480043
This CL implements the many multiword write barriers by calling
writebarrierptr, so that only writebarrierptr needs the actual barrier.
In lieu of an actual barrier, writebarrierptr checks that the value
being copied is not a small non-zero integer. This is enough to
shake out bugs where the barrier is being called when it should not
(for non-pointer values). It also found a few tests in sync/atomic
that were being too clever.
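The check is roughly of this shape (a sketch, not the actual runtime
source; the threshold is an assumption for illustration):

	func checkwb(v uintptr) {
		// A valid pointer is never a small non-zero integer, because
		// the low pages of the address space are never mapped.
		const minLegalPointer = 4096 // illustrative threshold
		if v != 0 && v < minLegalPointer {
			panic("bad pointer in write barrier")
		}
	}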
This CL adds a write barrier for the memory moved during the
builtin copy function, which I forgot when inserting barriers for Go 1.4.
This CL re-enables some write barriers that were disabled for Go 1.4.
Those were disabled because it is possible to change the generated
code so that they are unnecessary most of the time, but we have not
changed the generated code yet. For safety they must be enabled.
None of this is terribly efficient. We are aiming for correct first.
LGTM=rlh
R=rlh
CC=golang-codereviews
https://golang.org/cl/168770043
If you get a stack of PCs from Callers, it would be expected
that every PC is immediately after a call instruction, so to find
the line of the call, you look up the line for PC-1.
CL 163550043 now explicitly documents that.
The most common exception to this is the top-most return PC
on the stack, which is the entry address of the runtime.goexit
function. Subtracting 1 from that PC will end up in a different
function entirely.
To remove this special case, make the top-most return PC
goexit+PCQuantum and then implement goexit in assembly
so that the first instruction can be skipped.
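In terms of the exported API, the hazard is that pc-1 crosses a function
boundary whenever pc is exactly a function's entry address (illustrative
helper, not runtime code):

	// crossesEntry reports whether pc is exactly a function's entry
	// address, in which case pc-1 falls into whatever function the
	// linker laid out just before it.
	func crossesEntry(pc uintptr) bool {
		f := runtime.FuncForPC(pc)
		return f != nil && f.Entry() == pc
	}

Before this CL that was exactly the situation for the top-most return PC;
with goexit+PCQuantum recorded instead, pc-1 stays inside goexit.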
Fixes #7690.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/170720043
This removes a bunch of ugly duplicate code.
The end goal is to factor the disassembly code
into cmd/internal/objfile too, so that pprof can use it,
but one step at a time.
LGTM=r, iant
R=r, alex.brainman, iant
CC=golang-codereviews
https://golang.org/cl/149400043
Originally traceback was only used for printing the stack
when an unexpected signal came in. In that case, the
initial PC is taken from the signal and should be used
unaltered. For the callers, the PC is the return address,
which might be on the line after the call; we subtract 1
to get to the CALL instruction.
Traceback is now used for a variety of things, and for
almost all of those the initial PC is a return address,
whether from getcallerpc, or gp->sched.pc, or gp->syscallpc.
In those cases, we need to subtract 1 from this initial PC,
but the traceback code had a hard rule "never subtract 1
from the initial PC", left over from the signal handling days.
Change gentraceback to take a flag that specifies whether
we are tracing a trap.
Change traceback to default to "starting with a return PC",
which is the overwhelmingly common case.
Add tracebacktrap, like traceback but starting with a trap PC.
Use tracebacktrap in signal handlers.
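Reduced to a sketch (not the actual gentraceback signature), the
adjustment is:

	// For a return address, back up by one so the frame is attributed
	// to the call site; a trap PC already points at the faulting
	// instruction and must be used unaltered.
	func startPC(pc uintptr, trap bool) uintptr {
		if trap {
			return pc
		}
		return pc - 1
	}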
Fixes #7690.
LGTM=iant, r
R=r, iant
CC=golang-codereviews
https://golang.org/cl/167810044
Attempt to clear up confusion about how to turn
the PCs reported by Callers into the file and line
number people actually want.
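A hedged example of the kind of lookup being documented (runtime.Callers,
FuncForPC, and FileLine are the real APIs; fmt and runtime imports
assumed):

	pcs := make([]uintptr, 16)
	n := runtime.Callers(1, pcs)
	for _, pc := range pcs[:n] {
		f := runtime.FuncForPC(pc - 1) // pc is a return address; back up to the call
		if f == nil {
			continue
		}
		file, line := f.FileLine(pc - 1)
		fmt.Printf("%s\t%s:%d\n", f.Name(), file, line)
	}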
Fixes #7690.
LGTM=r, chris.cs.guy
R=r, chris.cs.guy
CC=golang-codereviews
https://golang.org/cl/163550043
The goal here is to get the big-endian fixes in now, so that
some upcoming code movement for write barriers
doesn't make them unmergeable.
LGTM=rlh
R=rlh
CC=golang-codereviews
https://golang.org/cl/166890043
goprintf is a printf-like print for Go.
It is used in the code generated by 'defer print(...)' and 'go print(...)'.
Normally print(1, 2, 3) turns into
	printint(1)
	printint(2)
	printint(3)
but defer and go need a single function call to give the runtime;
they give the runtime something like goprintf("%d%d%d", 1, 2, 3).
Variadic functions like goprintf cannot be described in the new
type information world, so goprintf has to be replaced.
Replace with a custom function, so that defer print(1, 2, 3) turns
into
	defer func(a1, a2, a3 int) {
		print(a1, a2, a3)
	}(1, 2, 3)
(and then the print becomes three different printints as usual).
Fixes #8614.
LGTM=austin
R=austin
CC=golang-codereviews, r
https://golang.org/cl/159700043
I removed support for jumping between functions years ago,
as part of doing the instruction layout for each function separately.
Given that, it makes sense to treat labels as function-scoped.
This lets each function have its own 'loop' label, for example.
Makes the assembly much cleaner and removes the last
reason anyone would reach for the 123(PC) form instead.
Note that this is on the dev.power64 branch, but it changes all
the assemblers. The change will ship in Go 1.5 (perhaps after
being ported into the new assembler).
Came up as part of CL 167730043.
LGTM=r
R=r
CC=austin, dave, golang-codereviews, minux
https://golang.org/cl/159670043
In CL 160670043, the write function was changed
so that a zero-length write is now allowed. This causes
the ExampleWriter_Init test to fail.
The reason is that Plan 9 preserves message
boundaries, while the os library expects systems
that don't preserve them. We have to ignore
zero-length writes so they will never turn into EOF.
This issue was previously discussed in CL 7406046.
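A sketch of the resulting guard (illustrative, not the exact os source;
file-descriptor plumbing simplified):

	func write(fd int, b []byte) (int, error) {
		// A zero-length write on Plan 9 is delivered as a zero-length
		// message, which the reader takes as EOF, so skip it.
		if len(b) == 0 {
			return 0, nil
		}
		return syscall.Write(fd, b)
	}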
LGTM=bradfitz
R=rsc, bradfitz
CC=golang-codereviews
https://golang.org/cl/163510043