Plan 9's sysFree has an optimization where if the object being freed
is the last object allocated, it will roll back the brk to allow the
memory to be reused by sysAlloc. However, it does not zero this
"returned" memory, so as a result, sysAlloc can return non-zeroed
memory after a sysFree. This leads to corruption because the runtime
assumes sysAlloc returns zeroed memory.
Fix this by zeroing the memory returned by sysFree.
Fixes #9846.
Change-Id: Id328c58236eb7c464b31ac1da376a0b757a5dc6a
Reviewed-on: https://go-review.googlesource.com/4700
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
typedslicecopy is another write barrier that is not
understood by racewalk. It seems quite complex to handle it
in the compiler, so instead just instrument it in runtime.
Update #9796
Change-Id: I0eb6abf3a2cd2491a338fab5f7da22f01bf7e89b
Reviewed-on: https://go-review.googlesource.com/4370
Reviewed-by: Russ Cox <rsc@golang.org>
Support the following conversions in escape analysis:
[]rune("foo")
[]byte("foo")
string([]rune{})
If the result does not escape, allocate temp buffer on stack
and pass it to runtime functions.
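As an illustration (a made-up snippet, not part of this CL; n is a hypothetical counter), code of the following shape no longer needs to heap-allocate the converted value when it does not escape:
n := 0
for _, r := range []rune("foo") {
	if r == 'o' {
		n++ // the temporary []rune never escapes, so its buffer can live on the stack
	}
}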
Change-Id: I1d075907eab8b0109ad7ad1878104b02b3d5c690
Reviewed-on: https://go-review.googlesource.com/3590
Reviewed-by: Russ Cox <rsc@golang.org>
Add local workbufs to the m struct in order to reduce contention.
Add consistency checks for workbuf ownership.
Chain workbufs through call chains to avoid swapping them
to and from the m struct.
Adjust the size of the workbuf so that the mutators can
more frequently pass modifications to the GC thus shifting
some work from the STW mark termination phase to the concurrent
mark phase.
Change-Id: I557b53af34ad9972265e0ed9f5996e52d548563d
Reviewed-on: https://go-review.googlesource.com/3972
Reviewed-by: Austin Clements <austin@google.com>
Fixes #9791
Setting the g.issystem flag races with other code wherever we set it.
Even if we set it both in the parent goroutine and in the system goroutine,
it is still possible that some other goroutine crashes
before the flag is set. We could pass an issystem flag to newproc1,
but we start all goroutines with go nowadays.
Instead, look at g.startpc to distinguish system goroutines (similar to topofstack).
Change-Id: Ia3467968dee27fa07d9fecedd4c2b00928f26645
Reviewed-on: https://go-review.googlesource.com/4113
Reviewed-by: Keith Randall <khr@golang.org>
Update #8832
This is probably not the root cause of the issue.
Resolve TODO about setting unusedsince on a wrong span.
Change-Id: I69c87e3d93cb025e3e6fa80a8cffba6ad6ad1395
Reviewed-on: https://go-review.googlesource.com/4390
Reviewed-by: Keith Randall <khr@golang.org>
Container symbols shouldn't be considered as functions in the functab.
Having them present probably messes up function lookup, as you might get
the descriptor of the container instead of the descriptor of the actual
function on the stack. It also messed up the findfunctab because these
entries caused off-by-one errors in how functab entries were counted.
Normal code is not affected - it only changes (& hopefully fixes) the
behavior for libraries linked as a unit, like:
net
runtime/cgo
runtime/race
Fixes #9804
Change-Id: I81e036e897571ac96567d59e1f1d7f058ca75e85
Reviewed-on: https://go-review.googlesource.com/4290
Reviewed-by: Russ Cox <rsc@golang.org>
This CL introduces new methods for the 'context' type, so we can
manipulate its values in an architecture-independent way.
Use the new methods to replace both the 386 and amd64 versions of
dosigprof with a single piece of code.
There is more similar code to be converted in the following CLs.
Also remove os_windows_386.go and os_windows_amd64.go. These
contain unused functions.
Change-Id: I28f76aeb97f6e4249843d30d3d0c33fb233d3f7f
Reviewed-on: https://go-review.googlesource.com/2790
Reviewed-by: Minux Ma <minux@golang.org>
CL 2118 assumes that every reference to runtime.tlsg is accompanied
by a declaration of runtime.tlsg when it is meant to be a normal
variable rather than a placeholder for TLS relocation.
If runtime.tlsg is not declared by the runtime package, its type
will be zero, so fix the check in liblink to look for 0 instead of
STLSBSS (the type will be initialized by cmd/ld, but cmd/ld doesn't
run during assembly).
Change-Id: I691ac5c3faea902f8b9a0b963e781b22e7b269a7
Reviewed-on: https://go-review.googlesource.com/4030
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This change is an implementation of the signal
runtime and os/signal package on Plan 9.
Contrary to Unix, on Plan 9 a signal is called
a note and is represented by a string.
For this reason, the sigsend and signal_recv
functions had to be reimplemented specifically
for Plan 9.
In order to reuse most of the code and internal
interface of the os/signal package, the note
strings are mapped to integers.
Thanks to Russ Cox for the early review.
Change-Id: I95836645efe21942bb1939f43f87fb3c0eaaef1a
Reviewed-on: https://go-review.googlesource.com/2164
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
It turns out the -iex argument is not supported by all gdb versions,
but as we need to add the auto-load safe path before loading the
inferior, test -iex support first and skip the test if it's not
available.
We should still update our builders though.
Change-Id: I355697de51baf12162ba6cb82f389dad93f93dc5
Reviewed-on: https://go-review.googlesource.com/4070
Reviewed-by: Ian Lance Taylor <iant@golang.org>
On some systems, gdb refuses to load Python plugin from arbitrary
paths, so we have to add $GOROOT/src/runtime to auto-load-safe-path
in the gdb script test.
Change-Id: Icc44baab8d04a65bd21ceac2ab8ddb13c8d083e8
Reviewed-on: https://go-review.googlesource.com/2905
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
eqstring does not need to check the length of the strings.
Other architectures were done in a separate commit.
While we're here, add a pointer equality check.
Change-Id: Id2c8616a03a7da7037c1e9ccd56a549fc952bd98
Reviewed-on: https://go-review.googlesource.com/3956
Reviewed-by: Keith Randall <khr@golang.org>
eqstring does not need to check the length of the strings.
6g
benchmark old ns/op new ns/op delta
BenchmarkCompareStringEqual 7.03 6.14 -12.66%
BenchmarkCompareStringIdentical 3.36 3.04 -9.52%
5g
benchmark old ns/op new ns/op delta
BenchmarkCompareStringEqual 238 232 -2.52%
BenchmarkCompareStringIdentical 90.8 80.7 -11.12%
The equivalent PPC changes are in a separate commit
because I don't have the hardware to test them.
Change-Id: I292874324b9bbd9d24f57a390cfff8b550cdd53c
Reviewed-on: https://go-review.googlesource.com/3955
Reviewed-by: Keith Randall <khr@golang.org>
Only documentation / comment changes. Update references to
point to golang.org permalinks or go.googlesource.com/go.
References in historical release notes under doc are left as is.
Change-Id: Icfc14e4998723e2c2d48f9877a91c5abef6794ea
Reviewed-on: https://go-review.googlesource.com/4060
Reviewed-by: Ian Lance Taylor <iant@golang.org>
In the old code, liblink, cmd/ld, and the runtime all have code to determine
whether runtime.tlsg is an actual variable or a placeholder for TLS
relocation. This change consolidates them into one place: runtime/tls_arm.s
will ultimately determine the type of that variable.
Change-Id: I3b3f80791a1db4c2b7318f81a115972cd2237e43
Reviewed-on: https://go-review.googlesource.com/2118
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
In android-L, logging is done through the logd daemon.
If the logd daemon is available, send logging to logd.
Otherwise, fall back to the legacy mechanism (/dev/log files).
This change adds access/socket/connect calls to interact with logd.
Fixes golang/go#9398.
Change-Id: I3c52b81b451f5862107d7c675f799fc85548486d
Reviewed-on: https://go-review.googlesource.com/3350
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The unbounded list-based defer pool can grow infinitely.
This can happen if a goroutine routinely allocates a defer,
then blocks on one P, and is then unblocked, scheduled, and
frees the defer on another P.
The scenario was reported on golang-nuts list.
We've been here several times. Any unbounded local caches
are bad and grow to infinite size. This change introduces
central defer pool; local pools become fixed-size
with the only purpose of amortizing accesses to the
central pool.
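A minimal sketch of the pattern (made-up names; assumes only the standard sync package; the runtime's actual defer pool code differs in detail):
type deferRecord struct{}

const localCap = 32 // hypothetical fixed bound on the local cache

type deferPool struct {
	mu      sync.Mutex     // guards central
	central []*deferRecord // shared, unbounded
	local   []*deferRecord // per-P, at most localCap entries
}

func (p *deferPool) put(d *deferRecord) {
	if len(p.local) < localCap {
		p.local = append(p.local, d) // fast path: no locking
		return
	}
	// Local cache is full: spill the whole batch to the central pool,
	// amortizing one lock acquisition over localCap frees.
	p.mu.Lock()
	p.central = append(p.central, p.local...)
	p.mu.Unlock()
	p.local = append(p.local[:0], d)
}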
Change-Id: Iadcfb113ccecf912e1b64afc07926f0de9de2248
Reviewed-on: https://go-review.googlesource.com/3741
Reviewed-by: Keith Randall <khr@golang.org>
Using benchmark from the issue:
benchmark old ns/op new ns/op delta
BenchmarkRangeStringCast 2162 1152 -46.72%
benchmark old allocs new allocs delta
BenchmarkRangeStringCast 1 0 -100.00%
Fixes #2204
Change-Id: I92c5edd2adca4a7b6fba00713a581bf49dc59afe
Reviewed-on: https://go-review.googlesource.com/3790
Reviewed-by: Keith Randall <khr@golang.org>
Before 3c0fee1, runtime.gogo was just long enough to align to 64 bytes
on OSs with short get_tls implementations and 80 bytes on OSs with
longer get_tls implementations (Windows, Solaris, and Plan 9).
3c0fee1 added a few instructions, which pushed it to 80 on most OSs,
including Windows and Plan 9, and 96 on Solaris.
Fixes #9770.
Change-Id: Ie84810657c14ab16dce9f0e0a932955251b0bf33
Reviewed-on: https://go-review.googlesource.com/3850
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Use memprofilerate in GODEBUG instead of memprofrate to be
consistent with other uses.
Change-Id: Iaf6bd3b378b1fc45d36ecde32f3ad4e63ca1e86b
Reviewed-on: https://go-review.googlesource.com/3800
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The overflow happens only with -gcflags="-N -l"
and can be reproduced with:
$ go test -gcflags="-N -l" -a -run=none net
runtime.cgocall: nosplit stack overflow
504 assumed on entry to runtime.cgocall
480 after runtime.cgocall uses 24
472 on entry to runtime.cgocall_errno
408 after runtime.cgocall_errno uses 64
400 on entry to runtime.exitsyscall
288 after runtime.exitsyscall uses 112
280 on entry to runtime.exitsyscallfast
152 after runtime.exitsyscallfast uses 128
144 on entry to runtime.writebarrierptr
88 after runtime.writebarrierptr uses 56
80 on entry to runtime.writebarrierptr_nostore1
24 after runtime.writebarrierptr_nostore1 uses 56
16 on entry to runtime.acquirem
-24 after runtime.acquirem uses 40
Move closure creation into separate function so that
frames of writebarrierptr_shadow and writebarrierptr_nostore1
are overlapped.
Fixes #9721
Change-Id: I40851f0786763ee964af34814edbc3e3d73cf4e7
Reviewed-on: https://go-review.googlesource.com/3418
Reviewed-by: Russ Cox <rsc@golang.org>
Currently race detector produces the following reports on pprof tests:
WARNING: DATA RACE
Read by goroutine 4:
runtime/pprof_test.TestTraceStartStop()
src/runtime/pprof/trace_test.go:38 +0x1da
testing.tRunner()
src/testing/testing.go:448 +0x13a
Previous write by goroutine 5:
bytes.(*Buffer).grow()
src/bytes/buffer.go:102 +0x190
bytes.(*Buffer).Write()
src/bytes/buffer.go:127 +0x75
runtime/pprof.func·002()
src/runtime/pprof/pprof.go:633 +0xae
Trace writer goroutine synchronizes with StopTrace
using trace.shutdownSema runtime semaphore.
But race detector does not see that synchronization
and so produces false reports.
Teach race detector about the synchronization.
Change-Id: I1219817325d4e16b423f29a0cbee94c929793881
Reviewed-on: https://go-review.googlesource.com/3746
Reviewed-by: Russ Cox <rsc@golang.org>
The test for the framepointer experiment flag is cheaper and more
branch-predictable than the other parts of this conditional, so move
it first. This is also more readable.
(Originally, the flag check required parsing the experiments string,
which is why it was done last. Now that flag is cached.)
Change-Id: I84e00fa7e939e9064f0fa0a4a6fe00576dd61457
Reviewed-on: https://go-review.googlesource.com/3782
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Previously, we checked for a saved frame pointer by looking for a
2*ptrSize gap between the argument pointer and the locals pointer.
The intent of this check was to look for a two stack slot gap (caller
IP and saved frame pointer), but stack slots are regSize, not ptrSize.
Correct this by checking instead for a 2*regSize gap.
On most platforms, this made no difference because ptrSize==regSize.
However, on amd64p32 (nacl), the saved frame pointer check incorrectly
fired when there was no saved frame pointer because the one stack slot
for the caller IP left an 8 byte gap, which is 2*ptrSize (but not
2*regSize) on amd64p32.
Fixes #9760.
Change-Id: I6eedcf681fe5bf2bf924dde8a8f2d9860a4d758e
Reviewed-on: https://go-review.googlesource.com/3781
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Add memprofrate as a value recognized in GODEBUG. The
value provided is used as the new setting for
runtime.MemProfileRate, allowing the user to
adjust memory profiling.
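For example (hypothetical program name; the value is the average number of bytes allocated per profiling sample):
$ GODEBUG=memprofrate=4096 ./myprog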
Change-Id: If129a247683263b11e2dd42473cf9b31280543d5
Reviewed-on: https://go-review.googlesource.com/3450
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This adds a "framepointer" GOEXPERIMENT that makes the amd64
toolchain maintain base pointer chains in the same way that gcc
-fno-omit-frame-pointer does. Go doesn't use these saved base
pointers, but this does enable external tools like Linux perf and
VTune to unwind Go stacks when collecting system-wide profiles.
This requires support in the compilers to not clobber BP, support in
liblink for generating the BP-saving function prologue and unwinding
epilogue, and support in the runtime to save BPs across preemption, to
skip saved BPs during stack unwinding, and to adjust saved BPs
during stack moving.
As with other GOEXPERIMENTs, everything from the toolchain to the
runtime must be compiled with this experiment enabled. To do this,
run make.bash (or all.bash) with GOEXPERIMENT=framepointer.
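For example, from the src directory:
$ GOEXPERIMENT=framepointer ./make.bash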
Change-Id: I4024853beefb9539949e5ca381adfdd9cfada544
Reviewed-on: https://go-review.googlesource.com/2992
Reviewed-by: Russ Cox <rsc@golang.org>
Any place that clobbers BP in the runtime can potentially interfere
with frame pointer unwinding with GOEXPERIMENT=framepointer. This
change eliminates uses of BP in the runtime to address this problem.
We have spare registers everywhere this occurs, so there's no downside
to eliminating BP. Where possible, this uses the same new register as
the amd64p32 runtime, which doesn't use BP due to restrictions placed
on it by NaCL.
One nice side effect of this is that it will let perf/VTune unwind the
call stack even through a call to systemstack, which will let us get
really good call graphs from the garbage collector.
Change-Id: I0ffa14cb4dd2b613a7049b8ec59df37c52286212
Reviewed-on: https://go-review.googlesource.com/3390
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
m.gcing has become overloaded to mean "don't preempt this g" in
general. Once the garbage collector is preemptible, the one thing it
*won't* mean is that we're in the garbage collector.
So, rename gcing to "preemptoff" and make it a string giving a reason
that preemption is disabled. gcing was never set to anything but 0 or
1, so we don't have to worry about there being a stack of reasons.
Change-Id: I4337c29e8e942e7aa4f106fc29597e1b5de4ef46
Reviewed-on: https://go-review.googlesource.com/3660
Reviewed-by: Russ Cox <rsc@golang.org>
Commit 656be31 replaced onM with systemstack, but missed updating a
few comments that still referred to onM. Update these.
Change-Id: I0efb017e9a66ea0adebb6e1da6e518ee11263f69
Reviewed-on: https://go-review.googlesource.com/3664
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The following line in sysFree:
n += (n + memRound) &^ memRound
doubles the value of n (n += n),
which is wrong and can lead to memory corruption.
Fixes #9712
Change-Id: I3c141b71da11e38837c09408cf4f1d22e8f7f36e
Reviewed-on: https://go-review.googlesource.com/3602
Reviewed-by: David du Colombier <0intro@gmail.com>
Set gcscanvalid=false after you have CASed to _Grunning.
If you do it before the CAS and the atomicstatus races to a scan state,
the scan will set gcscanvalid=true and we will be _Grunning
with gcscanvalid==true, which is not a good thing.
Change-Id: Ie53ea744a5600392b47da91159d985fe6fe75961
Reviewed-on: https://go-review.googlesource.com/3510
Reviewed-by: Austin Clements <austin@google.com>
Yet another leftover from C: parfor took a func value for the
callback, cast it to an unsafe.Pointer for storage, and then cast
it back to a func value to call it. This is unnecessary, so just
store the body as a func value. Beyond general cleanup, this also
eliminates the last use of unsafe in parfor.
Change-Id: Ia904af7c6c443ba75e2699835aee8e9a39b26dd8
Reviewed-on: https://go-review.googlesource.com/3396
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Prior to the conversion of the runtime to Go, this void* was
necessary to get closure information in to C callbacks. There
are no more C callbacks and parfor is perfectly capable of
invoking a Go closure now, so eliminate ctx and all of its
unsafe-ness. (Plus, the runtime currently doesn't use ctx for
anything.)
Change-Id: I39fc53b7dd3d7f660710abc76b0d831bfc6296d8
Reviewed-on: https://go-review.googlesource.com/3395
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
parfor originally used a tail array for its thread array. This got
replaced with a slice allocation in the conversion to Go, but many of
its gnarlier effects remained. Instead of keeping track of the
pointer to the first element of the slice and using unsafe pointer
math to get at the ith element, just keep the slice around and use
regular slice indexing. There is no longer any need for padding to
64-bit align the tail array (there hasn't been since the Go
conversion), so remove this unnecessary padding from the parfor
struct. Finally, since the slice tracks its own length, replace the
nthrmax field with len(thr).
Change-Id: I0020a1815849bca53e3613a8fa46ae4fbae67576
Reviewed-on: https://go-review.googlesource.com/3394
Reviewed-by: Russ Cox <rsc@golang.org>
This cleanup was slated for after the conversion of the runtime to Go.
Also improve type and function documentation.
Change-Id: I55a16b09e00cf701f246deb69e7ce7e3e04b26e7
Reviewed-on: https://go-review.googlesource.com/3393
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Currently, if we do an atomic{load,store}64 of an unaligned address on
386, we'll simply get a non-atomic load/store. This has been the
source of myriad bugs, so add alignment checks to these two
operations. These checks parallel the equivalent checks in
sync/atomic.
The alignment check is not necessary in cas64 because it uses a locked
instruction. The CPU will either execute this atomically or raise an
alignment fault (#AC)---depending on the alignment check flag---either
of which is fine.
This also fixes the two places in the runtime that trip the new
checks. One is in the runtime self-test and shouldn't have caused
real problems. The other is in tickspersecond and could, in
principle, have caused a misread of the ticks per second during
initialization.
Change-Id: If1796667012a6154f64f5e71d043c7f5fb3dd050
Reviewed-on: https://go-review.googlesource.com/3521
Reviewed-by: Russ Cox <rsc@golang.org>
Language specification says that variables are captured by reference.
And that is what gc compiler does. However, in lots of cases it is
possible to capture variables by value under the hood without
affecting visible behavior of programs. For example, consider
the following typical pattern:
func (o *Obj) requestMany(urls []string) []Result {
	wg := new(sync.WaitGroup)
	wg.Add(len(urls))
	res := make([]Result, len(urls))
	for i := range urls {
		i := i
		go func() {
			res[i] = o.requestOne(urls[i])
			wg.Done()
		}()
	}
	wg.Wait()
	return res
}
Currently o, wg, res, and i are captured by reference causing 3+len(urls)
allocations (e.g. PPARAM o is promoted to PPARAMREF and moved to heap).
But all of them can be captured by value without changing behavior.
This change implements simple strategy for capturing by value:
if a captured variable is not addrtaken and never assigned to,
then it is captured by value (it is effectively const).
This simple strategy turned out to be very effective:
~80% of all captures in std lib are turned into value captures.
The remaining 20% are mostly in defers and non-escaping closures,
that is, they do not cause allocations anyway.
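For illustration (hypothetical snippet, not from the CL), the heuristic distinguishes the two captures below:
x := 42 // read-only and not addrtaken: captured by value (effectively const)
y := 0  // assigned to inside the closure: must still be captured by reference
go func() {
	println(x)
	y++
}()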
benchmark old allocs new allocs delta
BenchmarkCompressedZipGarbage 153 126 -17.65%
BenchmarkEncodeDigitsSpeed1e4 91 69 -24.18%
BenchmarkEncodeDigitsSpeed1e5 178 129 -27.53%
BenchmarkEncodeDigitsSpeed1e6 1510 1051 -30.40%
BenchmarkEncodeDigitsDefault1e4 100 75 -25.00%
BenchmarkEncodeDigitsDefault1e5 193 139 -27.98%
BenchmarkEncodeDigitsDefault1e6 1420 985 -30.63%
BenchmarkEncodeDigitsCompress1e4 100 75 -25.00%
BenchmarkEncodeDigitsCompress1e5 193 139 -27.98%
BenchmarkEncodeDigitsCompress1e6 1420 985 -30.63%
BenchmarkEncodeTwainSpeed1e4 109 81 -25.69%
BenchmarkEncodeTwainSpeed1e5 211 151 -28.44%
BenchmarkEncodeTwainSpeed1e6 1588 1097 -30.92%
BenchmarkEncodeTwainDefault1e4 103 77 -25.24%
BenchmarkEncodeTwainDefault1e5 199 143 -28.14%
BenchmarkEncodeTwainDefault1e6 1324 917 -30.74%
BenchmarkEncodeTwainCompress1e4 103 77 -25.24%
BenchmarkEncodeTwainCompress1e5 190 137 -27.89%
BenchmarkEncodeTwainCompress1e6 1327 919 -30.75%
BenchmarkConcurrentDBExec 16223 16220 -0.02%
BenchmarkConcurrentStmtQuery 17687 16182 -8.51%
BenchmarkConcurrentStmtExec 5191 5186 -0.10%
BenchmarkConcurrentTxQuery 17665 17661 -0.02%
BenchmarkConcurrentTxExec 15154 15150 -0.03%
BenchmarkConcurrentTxStmtQuery 17661 16157 -8.52%
BenchmarkConcurrentTxStmtExec 3677 3673 -0.11%
BenchmarkConcurrentRandom 14000 13614 -2.76%
BenchmarkManyConcurrentQueries 25 22 -12.00%
BenchmarkDecodeComplex128Slice 318 252 -20.75%
BenchmarkDecodeFloat64Slice 318 252 -20.75%
BenchmarkDecodeInt32Slice 318 252 -20.75%
BenchmarkDecodeStringSlice 2318 2252 -2.85%
BenchmarkDecode 11 8 -27.27%
BenchmarkEncodeGray 64 56 -12.50%
BenchmarkEncodeNRGBOpaque 64 56 -12.50%
BenchmarkEncodeNRGBA 67 58 -13.43%
BenchmarkEncodePaletted 68 60 -11.76%
BenchmarkEncodeRGBOpaque 64 56 -12.50%
BenchmarkGoLookupIP 153 139 -9.15%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
BenchmarkGoLookupIPWithBrokenNameServer 245 226 -7.76%
BenchmarkClientServer 62 59 -4.84%
BenchmarkClientServerParallel4 62 59 -4.84%
BenchmarkClientServerParallel64 62 59 -4.84%
BenchmarkClientServerParallelTLS4 79 76 -3.80%
BenchmarkClientServerParallelTLS64 112 109 -2.68%
BenchmarkCreateGoroutinesCapture 10 6 -40.00%
BenchmarkAfterFunc 1006 1005 -0.10%
Fixes #6632.
Change-Id: I0cd51e4d356331d7f3c5f447669080cd19b0d2ca
Reviewed-on: https://go-review.googlesource.com/3166
Reviewed-by: Russ Cox <rsc@golang.org>
Set the minimum heap size to 4Mbytes except when the hash
table code wants to force a GC. In an unrelated change, when a
mutator is asked to assist the GC by marking pointer workbufs,
it will keep working until the requested number of pointers
are processed, even if it means asking for additional workbufs.
Change-Id: I661cfc0a7f2efcf6286b5d37d73e593d9ecd04d5
Reviewed-on: https://go-review.googlesource.com/3392
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
If the result of string(i) does not escape,
allocate a [4]byte temp on the stack for it.
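For illustration (made-up snippet):
r := rune(65)  // hypothetical input
s := string(r) // "A"; if s does not escape, its backing [4]byte buffer can live on the stack
println(len(s))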
Change-Id: If31ce9447982929d5b3b963fd0830efae4247c37
Reviewed-on: https://go-review.googlesource.com/3411
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we always allocate string buffers in the heap.
For example, in the following code we allocate a temp string
just for the comparison:
if string(byteSlice) == "abc" { ... }
This change extends escape analysis to cover []byte->string
conversions and string concatenation. If the result of the operation
does not escape, the compiler allocates a small buffer
on the stack and passes it to slicebytetostring and concatstrings.
The runtime then uses the buffer if the result fits into it.
The size of the buffer is 32 bytes. There is no fundamental theory
behind this number, just an observation that on std lib
tests/benchmarks the frequency of string allocation is inversely
proportional to string length, and there is a significant number
of allocations up to length 32.
benchmark old allocs new allocs delta
BenchmarkFprintfBytes 2 1 -50.00%
BenchmarkDecodeComplex128Slice 318 316 -0.63%
BenchmarkDecodeFloat64Slice 318 316 -0.63%
BenchmarkDecodeInt32Slice 318 316 -0.63%
BenchmarkDecodeStringSlice 2318 2316 -0.09%
BenchmarkStripTags 11 5 -54.55%
BenchmarkDecodeGray 111 102 -8.11%
BenchmarkDecodeNRGBAGradient 200 188 -6.00%
BenchmarkDecodeNRGBAOpaque 165 152 -7.88%
BenchmarkDecodePaletted 319 309 -3.13%
BenchmarkDecodeRGB 166 157 -5.42%
BenchmarkDecodeInterlacing 279 268 -3.94%
BenchmarkGoLookupIP 153 135 -11.76%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
BenchmarkGoLookupIPWithBrokenNameServer 245 226 -7.76%
BenchmarkClientServerParallel4 62 61 -1.61%
BenchmarkClientServerParallel64 62 61 -1.61%
BenchmarkClientServerParallelTLS4 79 78 -1.27%
BenchmarkClientServerParallelTLS64 112 111 -0.89%
benchmark old ns/op new ns/op delta
BenchmarkFprintfBytes 381 311 -18.37%
BenchmarkStripTags 2615 2351 -10.10%
BenchmarkDecodeNRGBAGradient 3715887 3635096 -2.17%
BenchmarkDecodeNRGBAOpaque 3047645 2928644 -3.90%
BenchmarkGoLookupIP 153 135 -11.76%
BenchmarkGoLookupIPNoSuchHost 508 466 -8.27%
Change-Id: I9ec01da816945c3329d7be3c7794b520418c3f99
Reviewed-on: https://go-review.googlesource.com/3120
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
During a concurrent GC, stacks are scanned in
an initial scan phase, informing the GC of all
pointers on the stack. The GC only needs to rescan
the stack if it potentially changes, which can only
happen if the goroutine runs.
This CL tracks whether the goroutine has run
since it was last scanned and thus may have changed
its stack. If necessary, the stack is rescanned.
Change-Id: I5fb1c4338d42e3f61ab56c9beb63b7b2da25f4f1
Reviewed-on: https://go-review.googlesource.com/3275
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we allocate a new string during []byte->string conversion
in string comparison expressions. String allocation is unnecessary in
this case, because comparison does not memorize the strings for later use.
This change uses slicebytetostringtmp to construct a temp string directly
from the []byte buffer and passes it to runtime.eqstring.
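For example (illustrative snippet; buf is a made-up variable), a comparison like this one no longer allocates for the conversion:
buf := []byte("abc") // hypothetical input
if string(buf) == "abc" {
	// The temporary string aliases buf's bytes only for the duration
	// of the comparison, so no heap allocation is needed.
}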
Change-Id: If00f1faaee2076baa6f6724d245d5b5e0f59b563
Reviewed-on: https://go-review.googlesource.com/3410
Reviewed-by: Russ Cox <rsc@golang.org>
Coarse-grained test skips to fix bots.
Need to look closer at windows and nacl failures.
Change-Id: I767ef1707232918636b33f715459ee3c0349b45e
Reviewed-on: https://go-review.googlesource.com/3416
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Call frame allocations can account for significant portion
of all allocations in a program, if call is executed
in an inner loop (e.g. to process every line in a log).
On the other hand, the allocation is easy to remove
using sync.Pool since the allocation is strictly scoped.
benchmark old ns/op new ns/op delta
BenchmarkCall 634 338 -46.69%
BenchmarkCall-4 496 167 -66.33%
benchmark old allocs new allocs delta
BenchmarkCall 1 0 -100.00%
BenchmarkCall-4 1 0 -100.00%
Update #7818
Change-Id: Icf60cce0a9be82e6171f0c0bd80dee2393db54a7
Reviewed-on: https://go-review.googlesource.com/1954
Reviewed-by: Keith Randall <khr@golang.org>
The %61 hack was added when the runtime was written in C.
Now the Go compiler does the optimization.
Change-Id: I79c3302ec4b931eaaaaffe75e7101c92bf287fc7
Reviewed-on: https://go-review.googlesource.com/3289
Reviewed-by: Keith Randall <khr@golang.org>
Consider the following code:
s := "(" + string(byteSlice) + ")"
Currently we allocate a new string during []byte->string conversion,
and pass it to concatstrings. String allocation is unnecessary in
this case, because concatstrings does not memorize the strings for later use.
This change uses slicebytetostringtmp to construct a temp string directly
from the []byte buffer and passes it to concatstrings.
I've found a few such cases in the std lib:
s += string(msg[off:off+c]) + "."
buf.WriteString("Sec-WebSocket-Accept: " + string(c.accept) + "\r\n")
bw.WriteString("Sec-WebSocket-Key: " + string(nonce) + "\r\n")
err = xml.Unmarshal([]byte("<Top>"+string(data)+"</Top>"), &logStruct)
d.err = d.syntaxError("invalid XML name: " + string(b))
return m, ProtocolError("malformed MIME header line: " + string(kv))
But there are much more in our internal code base.
Change-Id: I42f401f317131237ddd0cb9786b0940213af16fb
Reviewed-on: https://go-review.googlesource.com/3163
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Half of the tests currently crash with GODEBUG=wbshadow.
_PageSize is set to 8192, so data can be extended outside
of the actually mapped region during rounding, which leads
to a crash during the initial copying to the shadow.
Use _PhysPageSize instead.
Change-Id: Iaa89992bd57f86dafa16b092b53fdc0606213acb
Reviewed-on: https://go-review.googlesource.com/3286
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we scan maps even if k/v does not contain pointers.
This is required because overflow buckets are hanging off the main table.
This change introduces a separate array that contains pointers to all
overflow buckets and keeps them alive. Buckets themselves are marked
as containing no pointers and are not scanned by GC (if k/v does not
contain pointers).
This brings maps in line with slices and chans -- GC does not scan
their contents if elements do not contain pointers.
Currently scanning of a map[int]int with 2e8 entries (~8GB heap)
takes ~8 seconds. With this change scanning takes negligible time.
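Conceptually (a rough sketch with invented field names, assuming package unsafe; not the actual hmap layout), the map header gains something like:
type hmapSketch struct {
	buckets  unsafe.Pointer   // bucket array; typed as pointer-free when k/v contain no pointers
	overflow []unsafe.Pointer // keeps every overflow bucket reachable so the GC need not scan buckets
}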
Update #9477.
Change-Id: Id8a04066a53d2f743474cad406afb9f30f00eaae
Reviewed-on: https://go-review.googlesource.com/3288
Reviewed-by: Keith Randall <khr@golang.org>
Adjust triggergc so that we trigger when we have used 7/8
of the available heap memory. Do first collection when we
exceed 4Mbytes.
Change-Id: I467b4335e16dc9cd1521d687fc1f99a51cc7e54b
Reviewed-on: https://go-review.googlesource.com/3149
Reviewed-by: Austin Clements <austin@google.com>
Adjust triggergc so that we trigger when we have used 7/8
of the available memory.
Change-Id: I7ca02546d3084e6a04d60b09479e04a9a9837ae2
Reviewed-on: https://go-review.googlesource.com/3061
Reviewed-by: Russ Cox <rsc@golang.org>
Print out the object holding the reference to the object
that checkmark detects as not being properly marked.
Change-Id: Ieedbb6fddfaa65714504af9e7230bd9424cd0ae0
Reviewed-on: https://go-review.googlesource.com/2744
Reviewed-by: Austin Clements <austin@google.com>
The code in mfinal.go is moved from malloc*.go and mgc*.go
and substantially unchanged.
The code in mbitmap.go is also moved from those files, but
cleaned up so that it can be called from those files (in most cases
the code being moved was not already a standalone function).
I also renamed the constants and wrote comments describing
the format. The result is a significant cleanup and isolation of
the bitmap code, but, roughly speaking, it should be treated
and reviewed as new code.
The other files changed only as much as necessary to support
this code movement.
This CL does NOT change the semantics of the heap or type
bitmaps at all, although there are now some obvious opportunities
to do so in followup CLs.
Change-Id: I41b8d5de87ad1d3cd322709931ab25e659dbb21d
Reviewed-on: https://go-review.googlesource.com/2991
Reviewed-by: Keith Randall <khr@golang.org>
I also added new comments at the top of mbarrier.go,
but the rest of the code is just copy-and-paste.
Change-Id: Iaeb2b12f8b1eaa33dbff5c2de676ca902bfddf2e
Reviewed-on: https://go-review.googlesource.com/2990
Reviewed-by: Austin Clements <austin@google.com>
Otherwise, if you mistakenly refer to an undeclared 'shift' variable, you get 52.
Change-Id: I845fb29f23baee1d8e17b37bde0239872eb54316
Reviewed-on: https://go-review.googlesource.com/2909
Reviewed-by: Austin Clements <austin@google.com>
The function is here ONLY for symmetry with package bytes.
This function should be used ONLY if it makes code clearer.
It is not here for performance. Remove any performance benefit.
If performance becomes an issue, the compiler should be fixed to
recognize the three-way compare (for all comparable types)
rather than encourage people to micro-optimize by using this function.
Change-Id: I71f4130bce853f7aef724c6044d15def7987b457
Reviewed-on: https://go-review.googlesource.com/3012
Reviewed-by: Rob Pike <r@golang.org>
This manually reverts 555da73 from #6372 which implies a
minimum FreeBSD version of 8-STABLE.
Updates docs to mention new minimum requirement.
Fixes #9627
Change-Id: I40ae64be3682d79dd55024e32581e3e5e2be8aa7
Reviewed-on: https://go-review.googlesource.com/3020
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The implementation is the same assembly (or Go) routine.
Change-Id: Ib937c461c24ad2d5be9b692b4eed40d9eb031412
Reviewed-on: https://go-review.googlesource.com/2828
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
runtime.rtype was a copy of reflect.rtype - update the script to use that directly.
Introduces a basic test which will skip on systems without an appropriate GDB.
Fixes #9326
Change-Id: I6ec74e947bd2e1295492ca34b3a8c1b49315a8cb
Reviewed-on: https://go-review.googlesource.com/2821
Reviewed-by: Ian Lance Taylor <iant@golang.org>
6g does not implement dead code elimination for const switches like it
does for const if statements, so the undefined raiseproc() function
was resulting in a link-time failure.
Change-Id: Ie4fcb3716cb4fe6e618033071df9de545ab3e0af
Reviewed-on: https://go-review.googlesource.com/2830
Reviewed-by: Russ Cox <rsc@golang.org>
printf, vprintf, snprintf, gc_m_ptr, gc_g_ptr, gc_itab_ptr, gc_unixnanotime.
These were called from C.
There is no more C.
Now that vprintf is gone, delete roundup, which is unsafe (see CL 2814).
Change-Id: If8a7b727d497ffa13165c0d3a1ed62abc18f008c
Reviewed-on: https://go-review.googlesource.com/2824
Reviewed-by: Austin Clements <austin@google.com>
Moving the "don't really preempt" check up earlier in the function
introduced a race where gp.stackguard0 might change between
the early check and the later one. Since the later one is missing the
"don't really preempt" logic, it could decide to preempt incorrectly.
Pull the result of the check into a local variable and use an atomic
to access stackguard0, to eliminate the race.
I believe this will fix the broken OS X and Solaris builders.
Change-Id: I238350dd76560282b0c15a3306549cbcf390dbff
Reviewed-on: https://go-review.googlesource.com/2823
Reviewed-by: Austin Clements <austin@google.com>
Since CL 2750, the build is broken on Plan 9,
because a new function netpollinited was added
and called from findrunnable in proc1.go.
However, netpoll is not implemented on Plan 9.
Thus, we define netpollinited in netpoll_stub.go.
Fixes #9590
Change-Id: I0895607b86cbc7e94c1bfb2def2b1a368a8efbe6
Reviewed-on: https://go-review.googlesource.com/2759
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
These were fixed in my local commit,
but I forgot that the web Submit button can't see that.
Change-Id: Iec3a70ce3ccd9db2a5394ae2da0b293e45ac2fb5
Reviewed-on: https://go-review.googlesource.com/2822
Reviewed-by: Russ Cox <rsc@golang.org>
During all.bash I got a crash in the GOMAXPROCS=2 runtime test reporting
that the write barrier in the assignment 'c.tiny = add(x, size)' had been
given a pointer pointing into an unexpected span. The problem is that
the tiny allocation was at the end of a span and c.tiny was now pointing
to the end of the allocation and therefore to the end of the span aka
the beginning of the next span.
Rewrite tinyalloc not to do that.
More generally, it's not okay to call add(p, size) unless you know that p
points at > (not just >=) size bytes. Similarly, pretty much any call to
roundup doesn't know how much space p points at, so those are all
broken.
Rewrite persistentalloc not to use add(p, totalsize) and not to use roundup.
There is only one use of roundup left, in vprintf, which is dead code.
I will remove that code and roundup itself in a followup CL.
Change-Id: I211e307d1a656d29087b8fd40b2b71010722fb4a
Reviewed-on: https://go-review.googlesource.com/2814
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
It could happen that mp.printlock++ happens, then on entry to lock,
the goroutine is preempted and then rescheduled onto another m
for the actual call to lock. Now the lock and the printlock++ have
happened on different m's. This can lead to printlock not being
unlocked, which either gives a printing deadlock or a crash when
the goroutine reschedules, because m.locks > 0.
Change-Id: Ib0c08740e1b53de3a93f7ebf9b05f3dceff48b9f
Reviewed-on: https://go-review.googlesource.com/2819
Reviewed-by: Rick Hudson <rlh@golang.org>
Mostly this is using uint32 instead of int32 for unsigned values
like instruction encodings or float32 bit representations,
removal of ternary operations, and removal of #defines.
Delete sched9.c, because it is not compiled (it is still in the history
if we ever need it).
Change-Id: I68579cfea679438a27a80416727a9af932b088ae
Reviewed-on: https://go-review.googlesource.com/2658
Reviewed-by: Austin Clements <austin@google.com>
Normally, a panic/throw only shows the thread stack for the current thread
and all paused goroutines. Goroutines running on other threads, or other threads
running on their system stacks, are opaque. Change that when GODEBUG=crash,
by passing a SIGQUIT around to all the threads when GODEBUG=crash.
If this works out reasonably well, we might make the SIGQUIT relay part of
the standard panic/throw death, perhaps eliding idle m's.
Change-Id: If7dd354f7f3a6e326d17c254afcf4f7681af2f8b
Reviewed-on: https://go-review.googlesource.com/2811
Reviewed-by: Rick Hudson <rlh@golang.org>
There is a small possibility that runtime deadlocks when netpoll is just activated.
Consider the following scenario:
GOMAXPROCS=1
epfd=-1 (netpoll is not activated yet)
A thread is in findrunnable, sets sched.lastpoll=0, calls netpoll(true),
which returns nil. Now the thread is descheduled for some time.
Then sysmon retakes a P from syscall and calls handoffp.
The "If this is the last running P and nobody is polling network" check in handoffp fails,
since the first thread set sched.lastpoll=0. So handoffp decides that there is already
a thread that polls network and so it calls pidleput.
Now the first thread is scheduled again, finds no work and calls stopm.
There is no thread that polls network and so checkdead reports deadlock.
To fix this, don't set sched.lastpoll=0 when netpoll is not activated.
The deadlock can happen if cgo is disabled (-tag=netgo) and only on program startup
(when netpoll is just activated).
The test is from issue 5216 that led to the addition of the
"If this is the last running P and nobody is polling network" check in handoffp.
Update issue 9576.
Change-Id: I9405f627a4d37bd6b99d5670d4328744aeebfc7a
Reviewed-on: https://go-review.googlesource.com/2750
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The old name was too ambiguous (is it a verb? is it a predicate? is
it a constant?) and too close to debug.gccheckmark. Hopefully the new
name conveys that this variable indicates that we are currently doing
mark checking.
Change-Id: I031cd48b0906cdc7774f5395281d3aeeb8ef3ec9
Reviewed-on: https://go-review.googlesource.com/2656
Reviewed-by: Rick Hudson <rlh@golang.org>
1) Move non-preemption check even earlier in newstack.
This avoids a few priority inversion problems.
2) Always use atomic operations to update bitmap for 1-word objects.
This avoids lost mark bits during concurrent GC.
3) Stop using work.nproc == 1 as a signal for being single-threaded.
The concurrent GC runs with work.nproc == 1 but other procs are
running mutator code.
The use of work.nproc == 1 in getfull *is* safe, but remove it anyway,
since it is saving only a single atomic operation per GC round.
Fixes #9225.
Change-Id: I24134f100ad592ea8cb59efb6a54f5a1311093dc
Reviewed-on: https://go-review.googlesource.com/2745
Reviewed-by: Rick Hudson <rlh@golang.org>
Make auxv parsing in linux/arm less of a special case.
* rename setup_auxv to sysargs
* exclude linux/arm from vdso_none.go
* move runtime.checkarm after runtime.sysargs so arm specific
values are properly initialised
Change-Id: I1ca7f5844ad5a162337ff061a83933fc9a2b5ff6
Reviewed-on: https://go-review.googlesource.com/2681
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
In the previous sandbox implementation we read all sandboxed output
from standard output, and so all fake time writes were made to
standard output. Now we have a more sophisticated sandbox server
(see golang.org/x/playground/sandbox) that is capable of recording
both standard output and standard error, so allow fake time writes to
go to either file descriptor.
Change-Id: I79737deb06fd8e0f28910f21f41bd3dc1726781e
Reviewed-on: https://go-review.googlesource.com/2713
Reviewed-by: Minux Ma <minux@golang.org>
Previously, gccheckmark could only be enabled or disabled by calling
runtime.GCcheckmarkenable/GCcheckmarkdisable. This was a necessary
hack because GODEBUG was broken.
Now that GODEBUG works again, move control over gccheckmark to a
GODEBUG variable and remove these runtime functions. Currently,
gccheckmark is enabled by default (and will probably remain so for
much of the 1.5 development cycle).
Change-Id: I2bc6f30c21b795264edf7dbb6bd7354b050673ab
Reviewed-on: https://go-review.googlesource.com/2603
Reviewed-by: Rick Hudson <rlh@golang.org>
Also fix one unaligned stack size for nacl that is caught
by this change.
Fixes #9539.
Change-Id: Ib696a573d3f1f9bac7724f3a719aab65a11e04d3
Reviewed-on: https://go-review.googlesource.com/2600
Reviewed-by: Keith Randall <khr@golang.org>
Recognize loops of the form
for i := range a {
	a[i] = zero
}
in which the evaluation of a is free from side effects.
Replace these loops with calls to memclr.
This occurs in the stdlib in 18 places.
The motivating example is clearing a byte slice:
benchmark old ns/op new ns/op delta
BenchmarkGoMemclr5 3.31 3.26 -1.51%
BenchmarkGoMemclr16 13.7 3.28 -76.06%
BenchmarkGoMemclr64 50.8 4.14 -91.85%
BenchmarkGoMemclr256 157 6.02 -96.17%
Update #5373.
Change-Id: I99d3e6f5f268e8c6499b7e661df46403e5eb83e4
Reviewed-on: https://go-review.googlesource.com/2520
Reviewed-by: Keith Randall <khr@golang.org>
Fixes #9541.
Change-Id: I5d659ad50d7c3d1c92ed9feb86cda4c1a6e62054
Reviewed-on: https://go-review.googlesource.com/2584
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Random is bad, it can block and prevent binaries from starting.
Use urandom instead. We'd rather have bad random bits than no
random bits.
Change-Id: I360e1cb90ace5518a1b51708822a1dae27071ebd
Reviewed-on: https://go-review.googlesource.com/2582
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Minux Ma <minux@golang.org>
In 32-bit worlds, 8-byte objects are only aligned to 4-byte boundaries.
Change-Id: I91469a9a67b1ee31dd508a4e105c39c815ecde58
Reviewed-on: https://go-review.googlesource.com/2581
Reviewed-by: Keith Randall <khr@golang.org>
For a non-zero-sized struct with a final zero-sized field,
add a byte to the size (before rounding to alignment). This
change ensures that taking the address of the zero-sized field
will not incorrectly leak the following object in memory.
reflect.funcLayout also needs this treatment.
Fixes #9401
Change-Id: I1dc503dc5af4ca22c8f8c048fb7b4541cc957e0f
Reviewed-on: https://go-review.googlesource.com/2452
Reviewed-by: Russ Cox <rsc@golang.org>
Run GC in its own background goroutine, making the
caller runnable if resources are available. This is
critical in single-goroutine applications.
Allow goroutines that allocate a lot to help out
the GC and in doing so throttle their own allocation.
Adjust test so that it only detects that a GC is run
during init calls and not whether the GC is memory
efficient. Memory efficiency work will happen later
in 1.5.
Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This improves the printing of GC times to be both more human-friendly
and to provide enough information for the construction of MMU curves
and other statistics. The new times look like:
GC: #8 72413852ns @143036695895725 pause=622900 maxpause=427037 goroutines=11 gomaxprocs=4
GC: sweep term: 190584ns max=190584 total=275001 procs=4
GC: scan: 260397ns max=260397 total=902666 procs=1
GC: install wb: 5279ns max=5279 total=18642 procs=4
GC: mark: 71530555ns max=71530555 total=186694660 procs=1
GC: mark term: 427037ns max=427037 total=1691184 procs=4
This prints gomaxprocs and the number of procs used in each phase for
the benefit of analyzing mutator utilization during concurrent phases.
This also means the analysis doesn't have to hard-code which phases
are STW.
This prints the absolute start time only for the GC cycle. The other
start times can be derived from the phase durations. This declutters
the view for humans readers and doesn't pose any additional complexity
for machine readers.
This removes the confusing "cycle" terminology. Instead, this places
the phase duration after the phase name and adds a "ns" unit, which
both makes it implicitly clear that this is the duration of that phase
and indicates the units of the times.
This adds a "GC:" prefix to all lines for easier identification.
Finally, this generally cleans up the code as well as the placement of
spaces in the output and adds print locking so the statistics blocks
are never interrupted by other prints.
Change-Id: Ifd056db83ed1b888de7dfa9a8fc5732b01ccc631
Reviewed-on: https://go-review.googlesource.com/2542
Reviewed-by: Rick Hudson <rlh@golang.org>
The equal algorithm used to take the size:
	equal(p, q *T, size uintptr) bool
With this change, it does not:
	equal(p, q *T) bool
Similarly for the hash algorithm.
The size is rarely used, as most equal functions know the size
of the thing they are comparing. For instance f32equal already
knows its inputs are 4 bytes in size.
For cases where the size is not known, we allocate a closure
(one for each size needed) that points to an assembly stub that
reads the size out of the closure and calls generic code that
has a size argument.
Reduces the size of the go binary by 0.07%. Performance impact
is not measurable.
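A rough Go-level sketch of the closure idea (the real stub is assembly; the names here are invented and package unsafe is assumed):
type sizedEq struct {
	fn   func(p, q unsafe.Pointer, size uintptr) bool // generic code that takes a size argument
	size uintptr                                       // one such closure is allocated per size needed
}

func (c *sizedEq) equal(p, q unsafe.Pointer) bool {
	return c.fn(p, q, c.size) // the stub reads the size out of the closure
}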
Change-Id: I6e00adf3dde7ad2974adbcff0ee91e86d2194fec
Reviewed-on: https://go-review.googlesource.com/2392
Reviewed-by: Russ Cox <rsc@golang.org>
Use a lookup table to find the function which contains a pc. It is
faster than the old binary search. findfunc is used primarily for
stack copying and garbage collection.
benchmark old ns/op new ns/op delta
BenchmarkStackCopy 294746596 255400980 -13.35%
(findfunc is one of several tasks done by stack copy, the findfunc
time itself is about 2.5x faster.)
The lookup table is built at link time. The table grows the binary
size by about 0.5% of the text segment.
We impose a lower limit of 16 bytes on any function, which should not
have much of an impact. (The real constraint required is <=256
functions in every 4096 bytes, but 16 bytes/function is easier to
implement.)
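A hedged sketch of how such a lookup can work (index layout and names invented for illustration, not necessarily the exact table the linker emits):
func findfuncSketch(pc, textStart uintptr, chunkBase []int, subIndex []byte) int {
	// With at most 256 functions per 4096-byte chunk of text (guaranteed by
	// the 16-byte minimum function size), a per-chunk base index plus a
	// one-byte delta per 256-byte sub-chunk selects a candidate entry.
	base := chunkBase[(pc-textStart)/4096]
	delta := int(subIndex[(pc-textStart)/256])
	return base + delta // the caller compares entry PCs to settle on the exact function
}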
Change-Id: Ic315b7a2c83e1f7203cd2a50e5d21a822e18fdca
Reviewed-on: https://go-review.googlesource.com/2097
Reviewed-by: Russ Cox <rsc@golang.org>
This implements support for calls to and from C in the ppc64 C ABI, as
well as supporting functionality such as an entry point from the
dynamic linker.
Change-Id: I68da6df50d5638cb1a3d3fef773fb412d7bf631a
Reviewed-on: https://go-review.googlesource.com/2009
Reviewed-by: Russ Cox <rsc@golang.org>
Cgo will need this for calls from C to Go and for handling signals
that may occur in C code.
Change-Id: I50cc4caf17cd142bff501e7180a1e27721463ada
Reviewed-on: https://go-review.googlesource.com/2008
Reviewed-by: Russ Cox <rsc@golang.org>
Cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
will be allocated directly. There is no point in caching
32KB+ stacks as we ask for and return 32KB at a time
from the allocator.
Note that the minimum stack is 8K on windows/64bit and 4K on
windows/32bit and plan9. For these os/arch combinations,
the number of stack orders is less so that we have the same
maximum cached size.
Fixes #9045
Change-Id: Ia4195dd1858fb79fc0e6a91ae29c374d28839e44
Reviewed-on: https://go-review.googlesource.com/2098
Reviewed-by: Russ Cox <rsc@golang.org>
The ones at the end of M and G are just used to compute
their size for use in assembly. Generate the size explicitly.
The one at the end of itab is variable-sized, and at least one.
The ones at the end of interfacetype and uncommontype are not
needed, as the preceding slice references them (the slice was
originally added for use by reflect?).
The one at the end of stackmap is already accessed correctly,
and the runtime never allocates one.
Update #9401
Change-Id: Ia75e3aaee38425f038c506868a17105bd64c712f
Reviewed-on: https://go-review.googlesource.com/2420
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Fold in some startup randomness to make the hash vary across
different runs. This helps prevent attackers from choosing
keys that all map to the same bucket.
Also, reorganize the hash a bit. Move the *m1 multiply to after
the xor of the current hash and the message. For hash quality
it doesn't really matter, but for DDOS resistance it helps a lot
(any processing done to the message before it is merged with the
random seed is useless, as it is easily inverted by an attacker).
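Schematically (illustrative lines only, not the exact hash; h is the seeded running hash, m the next message word, m1 the multiplier):
h = (h * m1) ^ m // before: the multiply is applied before the message word is mixed in
h = (h ^ m) * m1 // after: xor the running hash with the message first, then multiply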
Update #9365
Change-Id: Ib19968168e1bbc541d1d28be2701bb83e53f1e24
Reviewed-on: https://go-review.googlesource.com/2344
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This CL only fixes the build; there are two failing tests,
RaceMapBigValAccess1 and RaceMapBigValAccess2,
in the runtime/race tests. I haven't investigated why yet.
Updates #9516.
Change-Id: If5bd2f0bee1ee45b1977990ab71e2917aada505f
Reviewed-on: https://go-review.googlesource.com/2401
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
sysReserve doesn't actually reserve the full amount requested on
64-bit systems, because of problems with ulimit. Instead it checks
that it can get the first 64 kB and assumes it can grab the rest as
needed. This doesn't work well with the "let the kernel pick an address"
mode, so don't do that. Pick a high address instead.
Change-Id: I4de143a0e6fdeb467fa6ecf63dcd0c1c1618a31c
Reviewed-on: https://go-review.googlesource.com/2345
Reviewed-by: Rick Hudson <rlh@golang.org>
The line 'mp.schedlink = mnext' has an implicit write barrier call,
which needs a valid g. Move it above the setg(nil).
Change-Id: If3e86c948e856e10032ad89f038bf569659300e0
Reviewed-on: https://go-review.googlesource.com/2347
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
First, call clearcheckmarks immediately after changing checkmark,
so that there is less time when the checkmark flag and the bitmap
are inconsistent. The tiny gap between the two lines is fine, because
the world is stopped. Before, the gap was much larger and included
such code as "go bgsweep()", which allocated.
Second, modify gcphase only when the world is stopped.
As written, gcscan_m was changing gcphase from 0 to GCscan
and back to 0 while other goroutines were running.
Another goroutine running at the same time might decide to
sleep, see GCscan, call gcphasework, and start "helping" by
scanning its stack. That's fine, except that if gcphase flips back
to 0 as the goroutine calls scanblock, it will start draining the
work buffers prematurely.
Both of these were found with wbshadow=2 (and a lot of hard work).
Eventually that will run automatically, but right now it still
doesn't quite work for all.bash, due to mmap conflicts with
pthread-created threads.
Change-Id: I99aa8210cff9c6e7d0a1b62c75be32a23321897b
Reviewed-on: https://go-review.googlesource.com/2340
Reviewed-by: Rick Hudson <rlh@golang.org>
Use typedmemmove, typedslicecopy, and adjust reflect.call
to execute the necessary write barriers.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iec5b5b0c1be5589295e28e5228e37f1a92e07742
Reviewed-on: https://go-review.googlesource.com/2312
Reviewed-by: Keith Randall <khr@golang.org>
A side effect of this change is that when assertI2T writes to the
memory for the T being extracted, it can use typedmemmove
for write barriers.
There are other ways we could have done this, but this one
finishes a TODO in package runtime.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Icbc8aabfd8a9b1f00be2e421af0e3b29fa54d01e
Reviewed-on: https://go-review.googlesource.com/2279
Reviewed-by: Keith Randall <khr@golang.org>
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iea83d693480c2f3008b4e80d55821acff65970a6
Reviewed-on: https://go-review.googlesource.com/2277
Reviewed-by: Keith Randall <khr@golang.org>
Preparation for replacing many memmove calls in runtime
with typedmemmove, which is a clearer description of what
the routine is doing.
For the same reason, rename writebarriercopy to typedslicecopy.
Change-Id: I6f23bef2c2215509fefba175b16908f76dc7538c
Reviewed-on: https://go-review.googlesource.com/2276
Reviewed-by: Rick Hudson <rlh@golang.org>
Add write barrier to atomic operations manipulating pointers.
In general an atomic write of a pointer word may indicate racy accesses,
so there is no strictly safe way to attempt to keep the shadow copy
in sync with the real one. Instead, mark the shadow copy as not used.
Redirect sync/atomic pointer routines back to the runtime ones,
so that there is only one copy of the write barrier and shadow logic.
In time we might consider doing this for most of the sync/atomic
functions, but for now only the pointer routines need that treatment.
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: I852936b9a111a6cb9079cfaf6bd78b43016c0242
Reviewed-on: https://go-review.googlesource.com/2066
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The Gobuf.g goroutine pointer is almost always updated by assembly code.
In one of the few places it is updated by Go code - func save - it must be
treated as a uintptr to avoid a write barrier being emitted at a bad time.
Instead of figuring out how to emit the write barriers missing in the
assembly manipulation, change the type of the field to uintptr, so that
it does not require write barriers at all.
Goroutine structs are published in the allg list and never freed.
That will keep the goroutine structs from being collected.
There is never a time that Gobuf.g's contain the only references
to a goroutine: the publishing of the goroutine in allg comes first.
Goroutine pointers are also kept in non-GC-visible places like TLS,
so I can't see them ever moving. If we did want to start moving data
in the GC, we'd need to allocate the goroutine structs from an
alternate arena. This CL doesn't make that problem any worse.
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
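Schematically (illustrative declarations only, assuming package unsafe; the runtime's actual code differs):
type gobufSketch struct {
	g uintptr // really a *g, stored as uintptr so ordinary stores need no write barrier
}

func saveSketch(buf *gobufSketch, gp *g) {
	// Safe because gp is also reachable from allg, which keeps it alive.
	buf.g = uintptr(unsafe.Pointer(gp))
}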
Change-Id: I85f91312ec3e0ef69ead0fff1a560b0cfb095e1a
Reviewed-on: https://go-review.googlesource.com/2065
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Ic8624401d7c8225a935f719f96f2675c6f5c0d7c
Reviewed-on: https://go-review.googlesource.com/2064
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This is the detection code. It works well enough that I know of
a handful of missing write barriers. However, those are subtle
enough that I'll address them in separate followup CLs.
GODEBUG=wbshadow=1 checks for a write that bypassed the
write barrier at the next write barrier of the same word.
If a bug can be detected in this mode it is typically easy to
understand, since the crash says quite clearly what kind of
word has missed a write barrier.
GODEBUG=wbshadow=2 adds a check of the write barrier
shadow copy during garbage collection. Bugs detected at
garbage collection can be difficult to understand, because
there is no context for what the found word means.
Typically you have to reproduce the problem with allocfreetrace=1
in order to understand the type of the badly updated word.
Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
Noticed while investigating the speed of the runtime tests, as part
of debugging why Plan 9's runtime tests are timing out on GCE.
Change-Id: I95f5a3d967a0b45ec1ebf10067e193f51db84e26
Reviewed-on: https://go-review.googlesource.com/2283
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This reverts commit ab0535ae3f.
I think it will remain useful to distinguish code that must
run on a system stack from code that can run on either stack,
even if that distinction is no
longer based on the implementation language.
That is, I expect to add a //go:systemstack comment that,
in terms of the old implementation, tells the compiler
to pretend this function was written in C.
Change-Id: I33d2ebb2f99ae12496484c6ec8ed07233d693275
Reviewed-on: https://go-review.googlesource.com/2275
Reviewed-by: Russ Cox <rsc@golang.org>
Shell out to `uname -r` this time, so that the test will compile
even if the platform doesn't have syscall.Sysctl.
Change-Id: I3a19ab5d820bdb94586a97f4507b3837d7040525
Reviewed-on: https://go-review.googlesource.com/2271
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The test program requires static constructor, which in turn needs
external linking to work, but external linking never works on 10.6.
This should fix the darwin-{386,amd64} builders.
Change-Id: I714fdd3e35f9a7e5f5659cf26367feec9412444f
Reviewed-on: https://go-review.googlesource.com/2235
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Fixes build on plan9 and windows.
Change-Id: Ic9b02c641ab84e4f6d8149de71b9eb495e3343b2
Reviewed-on: https://go-review.googlesource.com/2233
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
I missed this one in golang.org/cl/2232 and only tested the patch
on openbsd/amd64.
Change-Id: I4ff437ae0bfc61c989896c01904b6d33f9bdf0ec
Reviewed-on: https://go-review.googlesource.com/2234
Reviewed-by: Minux Ma <minux@golang.org>
This is a genuine bug exposed by our test for issue 9456: our wrapper
for pthread_create is not initialized until we initialize cgo itself,
but it is possible that a static constructor could call pthread_create,
and in that case, it will be calling a nil function pointer.
Fix that by also initializing the sys_pthread_create function pointer
inside our pthread_create wrapper function, and use a pthread_once to
make sure it is only initialized once.
Fix build for openbsd.
Change-Id: Ica4da2c21fcaec186fdd3379128ef46f0e767ed7
Reviewed-on: https://go-review.googlesource.com/2232
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Some libraries, for example, OpenBLAS, create work threads in a global constructor.
If we're doing cpu profiling, it's possible that SIGPROF might come to some of the
worker threads before we make our first cgo call. Cgocallback used to terminate the
process when that happens, but it's better to miss a couple profiling signals than
to abort in this case.
Fixes #9456.
Change-Id: I112b8e1a6e10e6cc8ac695a4b518c0f577309b6b
Reviewed-on: https://go-review.googlesource.com/2141
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Following change 2154, the goatoi function
was renamed atoi.
However, this definition conflicts with the
atoi function defined in the Plan 9 runtime,
which takes a []byte instead of a string.
This change fixes the build on Plan 9.
Change-Id: Ia0f7ca2f965bd5e3cce3177bba9c806f64db05eb
Reviewed-on: https://go-review.googlesource.com/2165
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
They are no longer needed now that C is gone.
goatoi -> atoi
gofuncname/funcname -> funcname/cfuncname
goroundupsize -> already existing roundupsize
Change-Id: I278bc33d279e1fdc5e8a2a04e961c4c1573b28c7
Reviewed-on: https://go-review.googlesource.com/2154
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Now that we've removed all the C code in runtime and the C compilers,
there is no need to have a separate stackguard field to check for C
code on Go stack.
Remove field g.stackguard1 and rename g.stackguard0 to g.stackguard.
Adjust liblink and cmd/ld as necessary.
Change-Id: I54e75db5a93d783e86af5ff1a6cd497d669d8d33
Reviewed-on: https://go-review.googlesource.com/2144
Reviewed-by: Keith Randall <khr@golang.org>
The goalg function was a holdover from when we had algorithm
tables in both C and Go. It is no longer needed.
Change-Id: Ia0c1af35bef3497a899f22084a1a7b42daae72a0
Reviewed-on: https://go-review.googlesource.com/2099
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Currently we do a very complex rebalancing of runnable goroutines
between queues, which tries to preserve scheduling fairness.
Besides being complex and error-prone, it also destroys all locality
of scheduling.
This change uses simpler scheme: leave runnable goroutines where
they are, during starttheworld start all Ps with local work,
plus start one additional P in case we have excessive runnable
goroutines in local queues or in the global queue.
The scheduler must be able to operate efficiently w/o the rebalancing,
because garbage collections do not have to happen frequently.
The immediate need is execution tracing support: handling of
garbage collection which does stoptheworld/starttheworld several
times becomes exceedingly complex if the current execution can
jump between Ps during starttheworld.
Change-Id: I4fdb7a6d80ca4bd08900d0c6a0a252a95b1a2c90
Reviewed-on: https://go-review.googlesource.com/1951
Reviewed-by: Rick Hudson <rlh@golang.org>