Add Addr-checking for all Progs on input to liblink, in liblink/pass.c,
including requiring use of TYPE_ADDR, not TYPE_CONST.
Update compilers and assemblers to satisfy checks.
Change-Id: Idac36b9f6805f0451cb541d2338992ca5eaf3963
Reviewed-on: https://go-review.googlesource.com/3801
Reviewed-by: Austin Clements <austin@google.com>
Generating values of array types like [4]int would fail even though the
element type, int, is generatable. Allow generating values of array
types when the element type is generatable.
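For illustration, a minimal use via testing/quick's Value, assuming that
is the generator in question (the seed and the [4]int type are arbitrary):

package main

import (
    "fmt"
    "math/rand"
    "reflect"
    "testing/quick"
)

func main() {
    // With this change, [4]int is generatable because its element type is.
    rng := rand.New(rand.NewSource(1))
    v, ok := quick.Value(reflect.TypeOf([4]int{}), rng)
    fmt.Println(ok, v.Interface())
}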
Change-Id: I7d71b3c18edb3737e2fec1ddf5e36c9dc8401971
Reviewed-on: https://go-review.googlesource.com/3865
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
cc is no more.
Change-Id: I8d1bc0d2e471cd9357274204c9bc1fa67cbc272d
Reviewed-on: https://go-review.googlesource.com/3833
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
We don't need placeholders for the old built-in poll server any more.
Change-Id: I3a510aec6a30bc2ac97676c400177cdfe557b8dc
Reviewed-on: https://go-review.googlesource.com/3863
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Issue #8432 has been marked as an issue for golang.org/x/net.
Change-Id: Ia39abd99b685c820ea6169ee6505b16028e7e77f
Reviewed-on: https://go-review.googlesource.com/3836
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The unbounded, list-based defer pool can grow without limit.
This can happen if a goroutine routinely allocates a defer,
blocks on one P, and is then unblocked, scheduled, and
frees the defer on another P.
The scenario was reported on the golang-nuts list.
We've been here several times: any unbounded local cache
is bad and grows to an effectively infinite size. This change
introduces a central defer pool; local pools become fixed-size,
with the only purpose of amortizing accesses to the
central pool.
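As a rough, hypothetical sketch of the scheme (fixed-size local caches
that spill to and refill from a shared central free list; the names and
sizes are illustrative, not the runtime's actual code):

package pool

import "sync"

const localCap = 32 // illustrative fixed bound for the per-P cache

type item struct{ next *item }

// central is the shared pool; access is amortized by the local caches.
var central struct {
    sync.Mutex
    free *item
}

// local is a fixed-size per-P cache; it is never accessed concurrently.
type local struct {
    buf [localCap]*item
    n   int
}

// get returns an item, refilling half the local cache from the central
// pool only when the local cache is empty.
func (l *local) get() *item {
    if l.n == 0 {
        central.Lock()
        for l.n < localCap/2 && central.free != nil {
            it := central.free
            central.free = it.next
            l.buf[l.n] = it
            l.n++
        }
        central.Unlock()
        if l.n == 0 {
            return new(item)
        }
    }
    l.n--
    return l.buf[l.n]
}

// put caches an item locally, spilling half the cache to the central
// pool when the local cache is full, so the local cache stays bounded.
func (l *local) put(it *item) {
    if l.n == localCap {
        central.Lock()
        for l.n > localCap/2 {
            l.n--
            spill := l.buf[l.n]
            spill.next = central.free
            central.free = spill
        }
        central.Unlock()
    }
    l.buf[l.n] = it
    l.n++
}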
Change-Id: Iadcfb113ccecf912e1b64afc07926f0de9de2248
Reviewed-on: https://go-review.googlesource.com/3741
Reviewed-by: Keith Randall <khr@golang.org>
Using the benchmark from the issue:
benchmark                   old ns/op     new ns/op     delta
BenchmarkRangeStringCast    2162          1152          -46.72%

benchmark                   old allocs    new allocs    delta
BenchmarkRangeStringCast    1             0             -100.00%
Fixes #2204
Change-Id: I92c5edd2adca4a7b6fba00713a581bf49dc59afe
Reviewed-on: https://go-review.googlesource.com/3790
Reviewed-by: Keith Randall <khr@golang.org>
Before 3c0fee1, runtime.gogo was just long enough to align to 64 bytes
on OSs with short get_tls implementations and 80 bytes on OSs with
longer get_tls implementations (Windows, Solaris, and Plan 9).
3c0fee1 added a few instructions, which pushed it to 80 on most OSs,
including Windows and Plan 9, and 96 on Solaris.
Fixes #9770.
Change-Id: Ie84810657c14ab16dce9f0e0a932955251b0bf33
Reviewed-on: https://go-review.googlesource.com/3850
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
golang.org/cl/144110044 made _ consts be treated
as exported, as a small, safe fix for #5397.
It also introduced issue #9615.
golang.org/cl/2091 then fixed the underlying issue,
which was missing type information when the type
was specified only for _.
This CL reverts the original fix.
Fixes #9615.
Change-Id: I4815ad8292bb5bec18beb8c131b48949d9af8876
Reviewed-on: https://go-review.googlesource.com/3832
Reviewed-by: Robert Griesemer <gri@golang.org>
Generalizes the PRF calculation for TLS 1.2 to support arbitrary hashes (SHA-384 instead of SHA-256).
The testdata were all updated to correspond to the new cipher suites in the handshake.
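Not the crypto/tls code itself, but a rough sketch of a hash-parameterized
TLS 1.2 P_hash (RFC 5246), which is the shape of change that lets SHA-384
be supplied for the suites that need it:

package prf

import (
    "crypto/hmac"
    "crypto/sha512"
    "hash"
)

// pHash implements the TLS 1.2 P_hash expansion (RFC 5246, section 5)
// for an arbitrary hash function.
func pHash(result, secret, seed []byte, h func() hash.Hash) {
    mac := hmac.New(h, secret)
    mac.Write(seed)
    a := mac.Sum(nil) // A(1)

    j := 0
    for j < len(result) {
        mac.Reset()
        mac.Write(a)
        mac.Write(seed)
        j += copy(result[j:], mac.Sum(nil))

        mac.Reset()
        mac.Write(a)
        a = mac.Sum(nil) // A(i+1)
    }
}

// keyBlockSHA384 expands a secret with SHA-384, as *_SHA384 suites require.
func keyBlockSHA384(secret, labelAndSeed []byte, n int) []byte {
    out := make([]byte, n)
    pHash(out, secret, labelAndSeed, sha512.New384)
    return out
}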
Change-Id: I3d9fc48c19d1043899e38255a53c80dc952ee08f
Reviewed-on: https://go-review.googlesource.com/3265
Reviewed-by: Adam Langley <agl@golang.org>
Additional elements in a DN can be added via ExtraNames. This field can
also be used to sort DN elements in a custom order.
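A short usage sketch, assuming the field in question is pkix.Name.ExtraNames;
the OID and value below are only examples:

package main

import (
    "crypto/x509/pkix"
    "encoding/asn1"
    "fmt"
)

func main() {
    name := pkix.Name{
        CommonName: "example.org",
        // ExtraNames carries attributes that have no dedicated Name
        // field; here an emailAddress (OID 1.2.840.113549.1.9.1).
        ExtraNames: []pkix.AttributeTypeAndValue{{
            Type:  asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 9, 1},
            Value: "gopher@example.org",
        }},
    }
    fmt.Println(name.ToRDNSequence())
}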
Change-Id: Ie408d332de913dc2a33bdd86433be38abb7b55be
Reviewed-on: https://go-review.googlesource.com/2257
Reviewed-by: Adam Langley <agl@golang.org>
Use memprofilerate in GODEBUG instead of memprofrate to be
consistent with other uses.
Change-Id: Iaf6bd3b378b1fc45d36ecde32f3ad4e63ca1e86b
Reviewed-on: https://go-review.googlesource.com/3800
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This makes names like ANOP, ATEXT, AGLOBL, ACALL, AJMP, ARET
available for use by architecture-independent processing passes.
On arm and ppc64, the alternate names are now aliases for the
official ones (ABL for ACALL, AB or ABR for AJMP, ARETURN for ARET).
Change-Id: Id027771243795af2b3745199c645b6e1bedd7d18
Reviewed-on: https://go-review.googlesource.com/3577
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Austin Clements <austin@google.com>
Like the TEXT/GLOBL flags, this was split between from.scale and reg,
neither of which is appropriate.
Change-Id: I2a16ef066a53b6edb7afb16cce108c0d1d26389c
Reviewed-on: https://go-review.googlesource.com/3576
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Austin Clements <austin@google.com>
Use AXXX instead of AGOK (neither is a valid instruction but AXXX is zero)
for the initial setting of Prog.as, and now there are no non-zero default
field settings.
Remove the arch-specific zprog/zprg in favor of a single global zprog.
Remove the arch-specific prg constructor in favor of emallocz(sizeof(Prog)).
Change-Id: Ia73078726768333d7cdba296f548170c1bea9498
Reviewed-on: https://go-review.googlesource.com/3575
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Austin Clements <austin@google.com>
Originally, when this code was part of 6l/8l, every new
Prog was constructed starting with zprg, which set back=2,
and then this code walked over the list setting back=1 for
backward branches, back=0 otherwise. The initial back=2
setting was used to identify forward branches (the branched-to
instruction had back == 2 since it hadn't yet been set to 0 or 1).
When the code was extracted into liblink and linked directly
with 6a/6g/8a/8g, those programs created the Prog struct
and did not set back=2, breaking this backward branch detection.
No one noticed, because the next loop recomputes the information.
The only requirement for the next loop is that p->back == 0 or 1 for
each of the Progs in the list.
The initialization of the zprg with back=2 would cause problems
in this second loop, for the few liblink-internally-generated instructions
that are created by copying zprg, except that the first loop was
making sure that back == 0 or 1.
The first loop's manipulation of p->back can thus be deleted,
provided we also delete the zprg.back = 2 initializations.
This is awful and my fault. I apologize.
While we're here, remove the .scale = 1 from the zprg init too.
Anything that sets up a scaled index should set the scale itself.
(And mostly those come from outside liblink anyway.)
Tested by checking that all generated code is bit-for-bit
identical to before this CL.
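For reference, a sketch in Go of the marker-based detection that the
now-deleted first loop performed; the Prog fields here are invented
stand-ins for the C originals, and in the old code the back=2 sentinel
came from the zprg initialization rather than an explicit loop:

package sketch

// Prog is an invented stand-in for liblink's C Prog: a linked list of
// instructions where branch instructions carry a resolved target.
type Prog struct {
    Link   *Prog // next instruction in program order
    Target *Prog // branch target, or nil for non-branches
    Back   int   // 2 = not yet walked, 1 = backward branch, 0 = otherwise
}

// markBackBranches reproduces the old detection: a branch is backward
// exactly when its target has already been walked, i.e. its Back field
// is no longer the "not yet walked" sentinel 2.
func markBackBranches(list *Prog) {
    for p := list; p != nil; p = p.Link {
        p.Back = 2 // in the old 6l/8l code this came from zprg
    }
    for p := list; p != nil; p = p.Link {
        if p.Target != nil && p.Target.Back != 2 {
            p.Back = 1 // target already walked: backward branch
        } else {
            p.Back = 0 // forward branch, or not a branch at all
        }
    }
}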
Change-Id: I7f6e0b33ce9ccd5b7dc25e0f00429fedd0957c8c
Reviewed-on: https://go-review.googlesource.com/3574
Reviewed-by: Austin Clements <austin@google.com>
A step toward making the zero Prog useful.
Change-Id: I427b98b1ce9bd8f093da825aa4bb83244fc01903
Reviewed-on: https://go-review.googlesource.com/3573
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Austin Clements <austin@google.com>
Before, amd64 and 386 stored the flags in p->from.scale
and arm and ppc64 stored the flags in p->reg.
Both caused special cases in printing and in handling of the
addresses.
To avoid possible conflicts with the real meaning of p->from
and to avoid storing a non-register value in a reg field,
use from3 to hold a TYPE_CONST value giving the flags.
There is still a special case for printing, because the flags
are specified without a $, and normally a TYPE_CONST prints
with a $. But that's much less special than what came before.
This allows us to remove the textflag and settextflag methods
from LinkArch. They are no longer architecture-specific.
Change-Id: I931da8e1ecd92e127cd9aa44ef5a73c42e730110
Reviewed-on: https://go-review.googlesource.com/3572
Reviewed-by: Austin Clements <austin@google.com>
Because it was lumped in with the TEXT instruction,
the high 32 bits of the 64-bit constant holding the size
were always set to 0x80000000 (ArgsSizeUnknown).
This only worked because cmd/9l was reading the 64-bit
value into an int32.
While we're here, fix 5a.
It wasn't as much of a problem there because
the two values were being stored in two different fields.
But it was still wrong.
Change-Id: I69a2214c7be939530d499e29cfdc3b26720ac05a
Reviewed-on: https://go-review.googlesource.com/3570
Reviewed-by: Austin Clements <austin@google.com>
Kindly detected by the race builders via a failing TestRaceRange.
The ORANGE typecheck does not increment decldepth around the body.
Change-Id: I0df5f310cb3370a904c94d9647a9cf0f15729075
Reviewed-on: https://go-review.googlesource.com/3507
Reviewed-by: Russ Cox <rsc@golang.org>
Type switch variables were not typechecked.
Previously this led only to a minor consequence:
switch unsafe.Sizeof = x.(type) {
generated an inconsistent error message.
But the capture-by-value functionality now requires typechecking of all ONAMEs.
Fixes #9731
Change-Id: If037883cba53d85028fb97b1328696091b3b7ddd
Reviewed-on: https://go-review.googlesource.com/3600
Reviewed-by: Russ Cox <rsc@golang.org>
The overflow happens only with -gcflags="-N -l"
and can be reproduced with:
$ go test -gcflags="-N -l" -a -run=none net
runtime.cgocall: nosplit stack overflow
504 assumed on entry to runtime.cgocall
480 after runtime.cgocall uses 24
472 on entry to runtime.cgocall_errno
408 after runtime.cgocall_errno uses 64
400 on entry to runtime.exitsyscall
288 after runtime.exitsyscall uses 112
280 on entry to runtime.exitsyscallfast
152 after runtime.exitsyscallfast uses 128
144 on entry to runtime.writebarrierptr
88 after runtime.writebarrierptr uses 56
80 on entry to runtime.writebarrierptr_nostore1
24 after runtime.writebarrierptr_nostore1 uses 56
16 on entry to runtime.acquirem
-24 after runtime.acquirem uses 40
Move closure creation into a separate function so that
the frames of writebarrierptr_shadow and writebarrierptr_nostore1
overlap.
Fixes #9721
Change-Id: I40851f0786763ee964af34814edbc3e3d73cf4e7
Reviewed-on: https://go-review.googlesource.com/3418
Reviewed-by: Russ Cox <rsc@golang.org>
Currently the race detector produces the following reports on pprof tests:
WARNING: DATA RACE
Read by goroutine 4:
runtime/pprof_test.TestTraceStartStop()
src/runtime/pprof/trace_test.go:38 +0x1da
testing.tRunner()
src/testing/testing.go:448 +0x13a
Previous write by goroutine 5:
bytes.(*Buffer).grow()
src/bytes/buffer.go:102 +0x190
bytes.(*Buffer).Write()
src/bytes/buffer.go:127 +0x75
runtime/pprof.func·002()
src/runtime/pprof/pprof.go:633 +0xae
The trace writer goroutine synchronizes with StopTrace
using the trace.shutdownSema runtime semaphore.
But the race detector does not see that synchronization
and so produces false reports.
Teach the race detector about the synchronization.
Change-Id: I1219817325d4e16b423f29a0cbee94c929793881
Reviewed-on: https://go-review.googlesource.com/3746
Reviewed-by: Russ Cox <rsc@golang.org>
The test for the framepointer experiment flag is cheaper and more
branch-predictable than the other parts of this conditional, so move
it first. This is also more readable.
(Originally, the flag check required parsing the experiments string,
which is why it was done last. Now that flag is cached.)
Change-Id: I84e00fa7e939e9064f0fa0a4a6fe00576dd61457
Reviewed-on: https://go-review.googlesource.com/3782
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Previously, we checked for a saved frame pointer by looking for a
2*ptrSize gap between the argument pointer and the locals pointer.
The intent of this check was to look for a two stack slot gap (caller
IP and saved frame pointer), but stack slots are regSize, not ptrSize.
Correct this by checking instead for a 2*regSize gap.
On most platforms, this made no difference because ptrSize==regSize.
However, on amd64p32 (nacl), the saved frame pointer check incorrectly
fired when there was no saved frame pointer because the one stack slot
for the caller IP left an 8 byte gap, which is 2*ptrSize (but not
2*regSize) on amd64p32.
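A schematic of the corrected check, in illustrative Go with invented
names (the real code is in the runtime's stack handling):

package sketch

// hasSavedFramePointer reports whether the gap between a frame's
// argument pointer and its locals pointer has room for both the caller
// IP and a saved frame pointer. Stack slots are regSize, not ptrSize,
// and the two differ on amd64p32, which is why the old 2*ptrSize
// comparison misfired there.
func hasSavedFramePointer(argp, localsp, regSize uintptr) bool {
    return argp-localsp == 2*regSize
}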
Fixes #9760.
Change-Id: I6eedcf681fe5bf2bf924dde8a8f2d9860a4d758e
Reviewed-on: https://go-review.googlesource.com/3781
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change adds support for case-insensitive DNS labels to the
built-in DNS stub resolver, as described in RFC 4343.
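A sketch of the kind of comparison RFC 4343 calls for (ASCII-only case
folding of labels); this mirrors the idea, not the resolver's actual code:

package sketch

// equalASCIIFold reports whether two DNS labels match under the ASCII
// case-insensitivity of RFC 4343; non-ASCII bytes must be identical.
func equalASCIIFold(a, b string) bool {
    if len(a) != len(b) {
        return false
    }
    for i := 0; i < len(a); i++ {
        x, y := a[i], b[i]
        if 'A' <= x && x <= 'Z' {
            x += 'a' - 'A'
        }
        if 'A' <= y && y <= 'Z' {
            y += 'a' - 'A'
        }
        if x != y {
            return false
        }
    }
    return true
}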
Fixes #9215.
Change-Id: Ia752fe71866a3bfa3ea08371985b799d419ddea3
Reviewed-on: https://go-review.googlesource.com/3685
Reviewed-by: Ian Lance Taylor <iant@golang.org>
We already checked for the prefix with strings.HasPrefix.
Change-Id: I33852fd19ffa92aa33b75b94b4bb505f4043a54a
Reviewed-on: https://go-review.googlesource.com/3691
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Fourth time's the charm.
Actually this doesn't fix the build; there is a
crash after go_bootstrap is compiled that looks
like it is related to auxv parsing.
Change-Id: Id00e2dfbe7bae42856f996065d3fb90b820e29a8
Reviewed-on: https://go-review.googlesource.com/3610
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Add memprofrate as a value recognized in GODEBUG. The
value provided is used as the new setting for
runtime.MemProfileRate, allowing the user to
adjust memory profiling.
Change-Id: If129a247683263b11e2dd42473cf9b31280543d5
Reviewed-on: https://go-review.googlesource.com/3450
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This adds a "framepointer" GOEXPERIMENT that that makes the amd64
toolchain maintain base pointer chains in the same way that gcc
-fno-omit-frame-pointer does. Go doesn't use these saved base
pointers, but this does enable external tools like Linux perf and
VTune to unwind Go stacks when collecting system-wide profiles.
This requires support in the compilers to not clobber BP, support in
liblink for generating the BP-saving function prologue and unwinding
epilogue, and support in the runtime to save BPs across preemption, to
skip saved BPs during stack unwinding, and to adjust saved BPs
during stack moving.
As with other GOEXPERIMENTs, everything from the toolchain to the
runtime must be compiled with this experiment enabled. To do this,
run make.bash (or all.bash) with GOEXPERIMENT=framepointer.
Change-Id: I4024853beefb9539949e5ca381adfdd9cfada544
Reviewed-on: https://go-review.googlesource.com/2992
Reviewed-by: Russ Cox <rsc@golang.org>
Any place that clobbers BP in the runtime can potentially interfere
with frame pointer unwinding with GOEXPERIMENT=framepointer. This
change eliminates uses of BP in the runtime to address this problem.
We have spare registers everywhere this occurs, so there's no downside
to eliminating BP. Where possible, this uses the same new register as
the amd64p32 runtime, which doesn't use BP due to restrictions placed
on it by NaCl.
One nice side effect of this is that it will let perf/VTune unwind the
call stack even through a call to systemstack, which will let us get
really good call graphs from the garbage collector.
Change-Id: I0ffa14cb4dd2b613a7049b8ec59df37c52286212
Reviewed-on: https://go-review.googlesource.com/3390
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
m.gcing has become overloaded to mean "don't preempt this g" in
general. Once the garbage collector is preemptible, the one thing it
*won't* mean is that we're in the garbage collector.
So, rename gcing to "preemptoff" and make it a string giving a reason
that preemption is disabled. gcing was never set to anything but 0 or
1, so we don't have to worry about there being a stack of reasons.
Change-Id: I4337c29e8e942e7aa4f106fc29597e1b5de4ef46
Reviewed-on: https://go-review.googlesource.com/3660
Reviewed-by: Russ Cox <rsc@golang.org>
Commit 656be31 replaced onM with systemstack, but missed updating a
few comments that still referred to onM. Update these.
Change-Id: I0efb017e9a66ea0adebb6e1da6e518ee11263f69
Reviewed-on: https://go-review.googlesource.com/3664
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
9g generates needlessly complex code for small copies. There are a
few other things that need to be improved about the copy code, so for
now just note the problem.
Change-Id: I0f1de4b2f9197a2635e27cc4b91ecf7a6c11f457
Reviewed-on: https://go-review.googlesource.com/3665
Reviewed-by: Russ Cox <rsc@golang.org>
In stripCommonPrefix, the prefix was correctly calculated in all cases
except one. That unhandled case is when there are more than two lines,
but all lines are blank (other than the first and last lines,
which contain /* and */ respectively).
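A hypothetical minimal input of that shape (a general comment whose
interior lines are all blank):

package p

func f() {
    /*

    */
}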
This change detects that case and correctly sets the prefix calculated
from the last line. This is consistent with the (correct) behavior
that happens when there's at least one non-blank line.
This fixes issue #9751, which occurs for problematic input
where cmd/gofmt and go/source would insert extra indentation on
every format operation. It also allows go/printer itself to print
such parsed files as expected.
Fixes #9751.
Change-Id: Id3dfb945beb59ffad3705085a3c285fca30a5f87
Reviewed-on: https://go-review.googlesource.com/3684
Reviewed-by: Robert Griesemer <gri@golang.org>
Since CL 3676, the TestMkdirAllAtSlash test
depends on syscall.EROFS, which isn't defined
on Plan 9.
This change works around the issue by
defining a system-dependent isReadonlyError
function.
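A sketch of the shape of that workaround, with hypothetical file names
and the build-tag style of the time:

// error_unix_test.go: systems that define syscall.EROFS.

// +build !plan9

package os_test

import "syscall"

func isReadonlyError(err error) bool { return err == syscall.EROFS }

// error_plan9_test.go: Plan 9 supplies its own predicate, since it has
// no syscall.EROFS; returning false here is only a placeholder.

// +build plan9

package os_test

func isReadonlyError(err error) bool { return false }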
Change-Id: If972fd2fe4828ee3bcb8537ea7f4ba29f7a87619
Reviewed-on: https://go-review.googlesource.com/3696
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>