There is currently no way to ignore signals using the os/signal package.
It is possible to catch a signal and do nothing, but this is not the same
as ignoring it. The new function Ignore allows a set of signals to be
ignored. The new function Reset allows the initial handlers for a set of
signals to be restored.
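A minimal usage sketch of the new API:

    package main

    import (
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        // Ignore SIGINT and SIGHUP while doing work that must not be interrupted.
        signal.Ignore(os.Interrupt, syscall.SIGHUP)
        // ... critical work ...
        // Restore the initial handlers for both signals.
        signal.Reset(os.Interrupt, syscall.SIGHUP)
    }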
Fixes #5572
Change-Id: I5c0f07956971e3a9ff9b9d9631e6e3a08c20df15
Reviewed-on: https://go-review.googlesource.com/3580
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change 85e7bee introduced a bug:
it marks map buckets as noscan when key and val do not contain pointers.
However, buckets with large/outline key or val do contain pointers.
This change takes key/val size into consideration when
marking buckets as noscan.
Change-Id: I7172a0df482657be39faa59e2579dd9f209cb54d
Reviewed-on: https://go-review.googlesource.com/4901
Reviewed-by: Keith Randall <khr@golang.org>
MOVQ RARG0, 0(SP) smashes exactly what was saved by PUSHQ R15.
This code managed to work somehow with the current race runtime,
but corrupts caller arguments with the new race runtime that I am testing.
Change-Id: I9ffe8b5eee86451db36e99dbf4d11f320192e576
Reviewed-on: https://go-review.googlesource.com/4810
Reviewed-by: Keith Randall <khr@golang.org>
The new race runtime is more scrupulous about the format of env flags.
Change-Id: I2828bc737a8be3feae5288ccf034c52883f224d8
Reviewed-on: https://go-review.googlesource.com/4811
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
drainworkbuf is now gcDrain, since it drains until there's
nothing left to drain. drainobjects is now gcDrainN because it's
the bounded equivalent to gcDrain.
The new names use the Go camel case convention because we have to
start somewhere. The "gc" prefix is because we don't have runtime
packages yet and just "drain" is too ambiguous.
Change-Id: I88dbdf32e8ce4ce6c3b7e1f234664be9b76cb8fd
Reviewed-on: https://go-review.googlesource.com/4785
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
All calls to drainworkbuf now pass true for this argument, so remove
the argument and update the documentation to reflect the simplified
interface.
At a higher level, there are no longer any situations where we drain
"one wbuf" (though drainworkbuf didn't guarantee this anyway). We
either drain everything, or we drain a specific number of objects.
Change-Id: Ib7ee0fde56577eff64232ee1e711ec57c4361335
Reviewed-on: https://go-review.googlesource.com/4784
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
scanblock is only called during _GCscan and _GCmarktermination.
During _GCscan, scanblock didn't call drainworkbuf anyway. During
_GCmarktermination, there's really no point in draining some (largely
arbitrary) amount of work during scanblock, since the GC is about
to drain everything anyway, so simply eliminate this case.
Change-Id: I7f3c59ce9186a83037c6f9e9b143181acd04c597
Reviewed-on: https://go-review.googlesource.com/4783
Reviewed-by: Russ Cox <rsc@golang.org>
We no longer ever call scanblock with b == 0.
Change-Id: I9b01da39595e0cc251668c24d58748d88f5f0792
Reviewed-on: https://go-review.googlesource.com/4782
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
scanblock(0, 0, nil, nil) was just a confusing way of saying
wbuf = getpartialorempty()
drainworkbuf(wbuf, true)
Make drainworkbuf accept a nil workbuf and perform the
getpartialorempty itself and replace all uses of scanblock(0, 0, nil,
nil) with direct calls to drainworkbuf(nil, true).
Change-Id: I7002a2f8f3eaf6aa85bbf17ccc81d7288acfef1c
Reviewed-on: https://go-review.googlesource.com/4781
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Previously, scanblock called checknocurrentwbuf() after
drainworkbuf(). Move this call into drainworkbuf so that every return
path from drainworkbuf calls checknocurrentwbuf(). This is equivalent
to the previous code because scanblock was the only caller of
drainworkbuf.
Change-Id: I96ef2168c8aa169bfc4d368f296342fa0fbeafb4
Reviewed-on: https://go-review.googlesource.com/4780
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently we always create context objects for closures that capture variables.
However, it is completely unnecessary for direct calls of closures
(whether it is func()(), defer func()() or go func()()).
This change transforms any OCALLFUNC(OCLOSURE) into a normal function call.
Closed-over variables become function arguments.
This transformation is especially beneficial for go func(),
because we no longer need to allocate a context object on the heap.
But it makes direct closure calls a bit faster as well (see BenchmarkClosureCall).
At the implementation level this required introducing yet another compiler pass.
However, the pass iterates only over xtop, so it should not be an issue.
The transformation consists of two parts: closure transformation and call site
transformation. We can't run these parts on different sides of escape analysis,
because the tree state would be inconsistent. We can't do both parts during
typecheck, because we don't yet know how variables will be captured and we
don't have the call site. We can't do both parts during walk of OCALLFUNC,
because the OCLOSURE body may already have been walked.
So now the capturevars pass only decides how to capture variables
(this info is required for escape analysis). A new transformclosure
pass, which runs just before order/walk, does all transformations
of a closure. A later walk of OCALLFUNC(OCLOSURE) then transforms the call site.
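For illustration (example mine, not from the CL), a direct go call of a closure
that captures a variable:

    func use(int) {}

    func process(x int) {
        // Before: the closure captures x via a heap-allocated context object.
        // After: the closure body is compiled as an ordinary function and x
        // is passed to it as an argument, so nothing is heap-allocated for
        // the capture.
        go func() {
            use(x)
        }()
    }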
benchmark old ns/op new ns/op delta
BenchmarkClosureCall 4.89 3.09 -36.81%
BenchmarkCreateGoroutinesCapture 1634 1294 -20.81%
benchmark old allocs new allocs delta
BenchmarkCreateGoroutinesCapture 6 2 -66.67%
benchmark old bytes new bytes delta
BenchmarkCreateGoroutinesCapture 176 48 -72.73%
Change-Id: Ic85e1706e18c3235cc45b3c0c031a9c1cdb7a40e
Reviewed-on: https://go-review.googlesource.com/4050
Reviewed-by: Russ Cox <rsc@golang.org>
Consider an interface value i of type I and concrete value c of type C.
Prior to this CL, i==c was evaluated as
I(c) == i
Evaluating I(c) can allocate.
This CL changes the evaluation of i==c to
x, ok := i.(C); ok && x == c
The new generated code is shorter and does not allocate directly.
If C is small, as it is in every instance in the stdlib,
the new code also uses less stack space
and makes one runtime call instead of two.
If C is very large, the original implementation is used.
The cutoff for "very large" is 1<<16,
following the stack vs heap cutoff used elsewhere.
This kind of comparison occurs in 38 places in the stdlib,
mostly in the net and os packages.
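An illustrative example of the kind of comparison that benefits (mine, not from the CL):

    type point struct{ X, Y int }

    func hasValue(i interface{}, c point) bool {
        // Previously compiled roughly as I(c) == i, converting c to an
        // interface value (which can allocate). Now compiled roughly as
        //     x, ok := i.(point); ok && x == c
        // which is shorter and does not allocate.
        return i == c
    }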
benchmark old ns/op new ns/op delta
BenchmarkEqEfaceConcrete 29.5 7.92 -73.15%
BenchmarkEqIfaceConcrete 32.1 7.90 -75.39%
BenchmarkNeEfaceConcrete 29.9 7.90 -73.58%
BenchmarkNeIfaceConcrete 35.9 7.90 -77.99%
Fixes #9370.
Change-Id: I7c4555950bcd6406ee5c613be1f2128da2c9a2b7
Reviewed-on: https://go-review.googlesource.com/2096
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
No code modifications.
This is in preparation for improving the wbuf abstraction.
Change-Id: I719543a345c34d079b7e39b251eccd5dd8a07826
Reviewed-on: https://go-review.googlesource.com/4710
Reviewed-by: Rick Hudson <rlh@golang.org>
Plan 9's sysFree has an optimization where if the object being freed
is the last object allocated, it will roll back the brk to allow the
memory to be reused by sysAlloc. However, it does not zero this
"returned" memory, so as a result, sysAlloc can return non-zeroed
memory after a sysFree. This leads to corruption because the runtime
assumes sysAlloc returns zeroed memory.
Fix this by zeroing the memory returned by sysFree.
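A hedged sketch of the idea (names like brkEnd and zeroRange are illustrative,
not the actual mem_plan9.go code):

    var brkEnd uintptr // illustrative: current end of the program break

    func zeroRange(v, n uintptr) { /* clear n bytes starting at v */ }

    func sysFreeSketch(v, n uintptr) {
        if v+n == brkEnd {
            zeroRange(v, n) // ensure a later sysAlloc hands out zeroed memory
            brkEnd -= n     // roll the break back so the region can be reused
        }
        // Otherwise the region is simply left in place.
    }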
Fixes #9846.
Change-Id: Id328c58236eb7c464b31ac1da376a0b757a5dc6a
Reviewed-on: https://go-review.googlesource.com/4700
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
typedslicecopy is another write barrier that is not
understood by racewalk. It seems quite complex to handle it
in the compiler, so instead just instrument it in the runtime.
Update #9796
Change-Id: I0eb6abf3a2cd2491a338fab5f7da22f01bf7e89b
Reviewed-on: https://go-review.googlesource.com/4370
Reviewed-by: Russ Cox <rsc@golang.org>
Support the following conversions in escape analysis:
[]rune("foo")
[]byte("foo")
string([]rune{})
If the result does not escape, allocate a temporary buffer on the stack
and pass it to the runtime functions.
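For illustration (example mine, not from the CL), a conversion whose result does not escape:

    func reverse(s string) string {
        r := []rune(s) // r does not escape, so the temporary buffer may be stack-allocated
        for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
            r[i], r[j] = r[j], r[i]
        }
        return string(r)
    }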
Change-Id: I1d075907eab8b0109ad7ad1878104b02b3d5c690
Reviewed-on: https://go-review.googlesource.com/3590
Reviewed-by: Russ Cox <rsc@golang.org>
Add local workbufs to the m struct in order to reduce contention.
Add consistency checks for workbuf ownership.
Chain workbufs through the call chain to avoid swapping them
to and from the m struct.
Adjust the size of the workbuf so that the mutators can
more frequently pass modifications to the GC, thus shifting
some work from the STW mark termination phase to the concurrent
mark phase.
Change-Id: I557b53af34ad9972265e0ed9f5996e52d548563d
Reviewed-on: https://go-review.googlesource.com/3972
Reviewed-by: Austin Clements <austin@google.com>
Fixes #9791
Setting the g.issystem flag races with other code wherever we set it.
Even if we set it both in the parent goroutine and in the system goroutine,
it is still possible that some other goroutine crashes
before the flag is set. We could pass an issystem flag to newproc1,
but nowadays we start all goroutines with go.
Instead, look at g.startpc to distinguish system goroutines (similar to topofstack).
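A rough sketch of the check (illustrative; the variable names are not the runtime's):

    // Start PCs of the runtime's system goroutines, recorded once at startup.
    var bgsweepPC, forcegchelperPC, runfinqPC uintptr

    func isSystemGoroutine(startpc uintptr) bool {
        return startpc == bgsweepPC ||
            startpc == forcegchelperPC ||
            startpc == runfinqPC
    }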
Change-Id: Ia3467968dee27fa07d9fecedd4c2b00928f26645
Reviewed-on: https://go-review.googlesource.com/4113
Reviewed-by: Keith Randall <khr@golang.org>
Update #8832
This is probably not the root cause of the issue.
Resolve TODO about setting unusedsince on a wrong span.
Change-Id: I69c87e3d93cb025e3e6fa80a8cffba6ad6ad1395
Reviewed-on: https://go-review.googlesource.com/4390
Reviewed-by: Keith Randall <khr@golang.org>
Container symbols shouldn't be considered as functions in the functab.
Having them present probably messes up function lookup, as you might get
the descriptor of the container instead of the descriptor of the actual
function on the stack. It also messed up the findfunctab because these
entries caused off-by-one errors in how functab entries were counted.
Normal code is not affected; this only changes (and hopefully fixes) the
behavior for libraries linked as a unit, like:
net
runtime/cgo
runtime/race
Fixes #9804
Change-Id: I81e036e897571ac96567d59e1f1d7f058ca75e85
Reviewed-on: https://go-review.googlesource.com/4290
Reviewed-by: Russ Cox <rsc@golang.org>
This CL introduces new methods for the 'context' type, so we can
manipulate its values in an architecture-independent way.
Use the new methods to replace both the 386 and amd64 versions of
dosigprof with a single piece of code.
There is more similar code to be converted in the following CLs.
Also remove os_windows_386.go and os_windows_amd64.go; these
contain unused functions.
Change-Id: I28f76aeb97f6e4249843d30d3d0c33fb233d3f7f
Reviewed-on: https://go-review.googlesource.com/2790
Reviewed-by: Minux Ma <minux@golang.org>
CL 2118 makes the assumption that every reference to runtime.tlsg
is accompanied by a declaration of runtime.tlsg whenever it is meant
to be a normal variable rather than a placeholder for TLS
relocation.
If runtime.tlsg is not declared by the runtime package,
the type of runtime.tlsg will be zero, so fix the check in liblink
to look for 0 instead of STLSBSS (the type will be initialized by
cmd/ld, but cmd/ld doesn't run during assembly).
Change-Id: I691ac5c3faea902f8b9a0b963e781b22e7b269a7
Reviewed-on: https://go-review.googlesource.com/4030
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This change is an implementation of the runtime signal
support and the os/signal package on Plan 9.
Unlike on Unix, a signal on Plan 9 is called
a note and is represented by a string.
For this reason, the sigsend and signal_recv
functions had to be reimplemented specifically
for Plan 9.
In order to reuse most of the code and internal
interface of the os/signal package, the note
strings are mapped to integers.
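A rough sketch of the mapping idea (table contents illustrative, not the actual runtime tables):

    // Map Plan 9 note strings to small integer codes so that the generic
    // os/signal machinery, which works with integers, can be reused.
    var noteToSig = map[string]int{
        "hangup":    1,
        "interrupt": 2,
        "alarm":     14,
    }

    func sigFromNote(note string) (int, bool) {
        sig, ok := noteToSig[note]
        return sig, ok
    }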
Thanks to Russ Cox for the early review.
Change-Id: I95836645efe21942bb1939f43f87fb3c0eaaef1a
Reviewed-on: https://go-review.googlesource.com/2164
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
It turns out the -iex argument is not supported by all gdb versions;
since we need to add the auto-load safe path before loading the
inferior, test for -iex support first and skip the test if it's not
available.
We should still update our builders, though.
Change-Id: I355697de51baf12162ba6cb82f389dad93f93dc5
Reviewed-on: https://go-review.googlesource.com/4070
Reviewed-by: Ian Lance Taylor <iant@golang.org>
On some systems, gdb refuses to load Python plugins from arbitrary
paths, so we have to add $GOROOT/src/runtime to the auto-load-safe-path
in the gdb script test.
Change-Id: Icc44baab8d04a65bd21ceac2ab8ddb13c8d083e8
Reviewed-on: https://go-review.googlesource.com/2905
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
eqstring does not need to check the length of the strings.
Other architectures were done in a separate commit.
While we're here, add a pointer equality check.
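A Go-level sketch of the resulting logic (illustrative only; the real eqstring is
assembly, and unsafe.StringData is used here purely for exposition):

    import "unsafe"

    // The compiler only emits a call to eqstring after it has verified that
    // both strings have the same length, so no length check is needed here.
    func eqstringSketch(s, t string) bool {
        if unsafe.StringData(s) == unsafe.StringData(t) {
            return true // same backing array: equal without reading the bytes
        }
        for i := 0; i < len(s); i++ {
            if s[i] != t[i] {
                return false
            }
        }
        return true
    }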
Change-Id: Id2c8616a03a7da7037c1e9ccd56a549fc952bd98
Reviewed-on: https://go-review.googlesource.com/3956
Reviewed-by: Keith Randall <khr@golang.org>
eqstring does not need to check the length of the strings.
6g
benchmark old ns/op new ns/op delta
BenchmarkCompareStringEqual 7.03 6.14 -12.66%
BenchmarkCompareStringIdentical 3.36 3.04 -9.52%
5g
benchmark old ns/op new ns/op delta
BenchmarkCompareStringEqual 238 232 -2.52%
BenchmarkCompareStringIdentical 90.8 80.7 -11.12%
The equivalent PPC changes are in a separate commit
because I don't have the hardware to test them.
Change-Id: I292874324b9bbd9d24f57a390cfff8b550cdd53c
Reviewed-on: https://go-review.googlesource.com/3955
Reviewed-by: Keith Randall <khr@golang.org>
Only documentation / comment changes. Update references to
point to golang.org permalinks or go.googlesource.com/go.
References in historical release notes under doc are left as is.
Change-Id: Icfc14e4998723e2c2d48f9877a91c5abef6794ea
Reviewed-on: https://go-review.googlesource.com/4060
Reviewed-by: Ian Lance Taylor <iant@golang.org>
In the old code, liblink, cmd/ld and the runtime all have code to determine
whether runtime.tlsg is an actual variable or a placeholder for TLS
relocation. This change consolidates them into one: runtime/tls_arm.s
will ultimately determine the type of that variable.
Change-Id: I3b3f80791a1db4c2b7318f81a115972cd2237e43
Reviewed-on: https://go-review.googlesource.com/2118
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
In android-L, logging is done through the logd daemon.
If the logd daemon is available, send logging to logd.
Otherwise, fall back to the legacy mechanism (/dev/log files).
This change adds access/socket/connect calls to interact with logd.
Fixes golang/go#9398.
Change-Id: I3c52b81b451f5862107d7c675f799fc85548486d
Reviewed-on: https://go-review.googlesource.com/3350
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The unbounded list-based defer pool can grow without bound.
This can happen if a goroutine routinely allocates a defer;
then blocks on one P; and is then unblocked, scheduled, and
frees the defer on another P.
The scenario was reported on the golang-nuts list.
We've been here several times: any unbounded local cache
is bad and grows to an arbitrarily large size. This change introduces
a central defer pool; local pools become fixed-size,
with the only purpose of amortizing accesses to the
central pool.
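A sketch of the general pattern (illustrative, not the runtime's actual defer
pool): a bounded local cache backed by a mutex-protected central pool, so local
caches can no longer grow without bound.

    import "sync"

    type deferRec struct{ next *deferRec }

    var central struct {
        sync.Mutex
        free *deferRec
    }

    type local struct {
        cache []*deferRec // created with a fixed capacity, e.g. make([]*deferRec, 0, 32)
    }

    func (l *local) get() *deferRec {
        if n := len(l.cache); n > 0 {
            d := l.cache[n-1]
            l.cache = l.cache[:n-1]
            return d
        }
        central.Lock()
        d := central.free
        if d != nil {
            central.free = d.next
        }
        central.Unlock()
        if d == nil {
            d = new(deferRec)
        }
        return d
    }

    func (l *local) put(d *deferRec) {
        if len(l.cache) < cap(l.cache) {
            l.cache = append(l.cache, d)
            return
        }
        // Local cache is full: hand the record back to the central pool.
        central.Lock()
        d.next = central.free
        central.free = d
        central.Unlock()
    }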
Change-Id: Iadcfb113ccecf912e1b64afc07926f0de9de2248
Reviewed-on: https://go-review.googlesource.com/3741
Reviewed-by: Keith Randall <khr@golang.org>
Using the benchmark from the issue:
benchmark old ns/op new ns/op delta
BenchmarkRangeStringCast 2162 1152 -46.72%
benchmark old allocs new allocs delta
BenchmarkRangeStringCast 1 0 -100.00%
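For illustration (example mine), the pattern from the issue, which no longer
allocates when the temporary slice does not escape:

    func sum(s string) int {
        total := 0
        for _, b := range []byte(s) { // the conversion no longer forces a heap allocation
            total += int(b)
        }
        return total
    }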
Fixes #2204
Change-Id: I92c5edd2adca4a7b6fba00713a581bf49dc59afe
Reviewed-on: https://go-review.googlesource.com/3790
Reviewed-by: Keith Randall <khr@golang.org>
Before 3c0fee1, runtime.gogo was just long enough to align to 64 bytes
on OSs with short get_tls implementations and 80 bytes on OSs with
longer get_tls implementations (Windows, Solaris, and Plan 9).
3c0fee1 added a few instructions, which pushed it to 80 on most OSs,
including Windows and Plan 9, and 96 on Solaris.
Fixes #9770.
Change-Id: Ie84810657c14ab16dce9f0e0a932955251b0bf33
Reviewed-on: https://go-review.googlesource.com/3850
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Use memprofilerate in GODEBUG instead of memprofrate to be
consistent with other uses.
Change-Id: Iaf6bd3b378b1fc45d36ecde32f3ad4e63ca1e86b
Reviewed-on: https://go-review.googlesource.com/3800
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The overflow happens only with -gcflags="-N -l"
and can be reproduced with:
$ go test -gcflags="-N -l" -a -run=none net
runtime.cgocall: nosplit stack overflow
504 assumed on entry to runtime.cgocall
480 after runtime.cgocall uses 24
472 on entry to runtime.cgocall_errno
408 after runtime.cgocall_errno uses 64
400 on entry to runtime.exitsyscall
288 after runtime.exitsyscall uses 112
280 on entry to runtime.exitsyscallfast
152 after runtime.exitsyscallfast uses 128
144 on entry to runtime.writebarrierptr
88 after runtime.writebarrierptr uses 56
80 on entry to runtime.writebarrierptr_nostore1
24 after runtime.writebarrierptr_nostore1 uses 56
16 on entry to runtime.acquirem
-24 after runtime.acquirem uses 40
Move closure creation into a separate function so that
the frames of writebarrierptr_shadow and writebarrierptr_nostore1
are overlapped.
Fixes #9721
Change-Id: I40851f0786763ee964af34814edbc3e3d73cf4e7
Reviewed-on: https://go-review.googlesource.com/3418
Reviewed-by: Russ Cox <rsc@golang.org>
Currently the race detector produces the following reports on pprof tests:
WARNING: DATA RACE
Read by goroutine 4:
runtime/pprof_test.TestTraceStartStop()
src/runtime/pprof/trace_test.go:38 +0x1da
testing.tRunner()
src/testing/testing.go:448 +0x13a
Previous write by goroutine 5:
bytes.(*Buffer).grow()
src/bytes/buffer.go:102 +0x190
bytes.(*Buffer).Write()
src/bytes/buffer.go:127 +0x75
runtime/pprof.func·002()
src/runtime/pprof/pprof.go:633 +0xae
The trace writer goroutine synchronizes with StopTrace
using the trace.shutdownSema runtime semaphore.
But the race detector does not see that synchronization
and so produces false reports.
Teach the race detector about the synchronization.
Change-Id: I1219817325d4e16b423f29a0cbee94c929793881
Reviewed-on: https://go-review.googlesource.com/3746
Reviewed-by: Russ Cox <rsc@golang.org>
The test for the framepointer experiment flag is cheaper and more
branch-predictable than the other parts of this conditional, so move
it first. This is also more readable.
(Originally, the flag check required parsing the experiments string,
which is why it was done last. Now that flag is cached.)
Change-Id: I84e00fa7e939e9064f0fa0a4a6fe00576dd61457
Reviewed-on: https://go-review.googlesource.com/3782
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Previously, we checked for a saved frame pointer by looking for a
2*ptrSize gap between the argument pointer and the locals pointer.
The intent of this check was to look for a two stack slot gap (caller
IP and saved frame pointer), but stack slots are regSize, not ptrSize.
Correct this by checking instead for a 2*regSize gap.
On most platforms, this made no difference because ptrSize==regSize.
However, on amd64p32 (nacl), the saved frame pointer check incorrectly
fired when there was no saved frame pointer because the one stack slot
for the caller IP left an 8 byte gap, which is 2*ptrSize (but not
2*regSize) on amd64p32.
Fixes#9760.
Change-Id: I6eedcf681fe5bf2bf924dde8a8f2d9860a4d758e
Reviewed-on: https://go-review.googlesource.com/3781
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>