mirror of https://github.com/golang/go synced 2024-11-20 01:04:40 -07:00
Commit Graph

1582 Commits

Author SHA1 Message Date
Ian Lance Taylor
68c6aad58b runtime: change SIGEMT on linux/mips64 to throw
This matches SIGEMT on other systems that use it (SIGEMT is not used
for most linux systems).

Change-Id: If394c06c9ed1cb3ea2564385a8edfbed8b5566d1
Reviewed-on: https://go-review.googlesource.com/17874
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
2015-12-16 02:08:02 +00:00
Austin Clements
01baf13ba5 runtime: only trigger forced GC if GC is not running
Currently, sysmon triggers a forced GC solely based on
memstats.last_gc. However, memstats.last_gc isn't updated until mark
termination, so once sysmon starts triggering forced GCs, it keeps
triggering them until GC finishes. The first of these actually starts
a GC; the rest, up to the last, print "GC forced" but gcStart returns
immediately because gcphase != _GCoff; the last one may start another
GC if the previous GC finishes (and sets last_gc) between sysmon
triggering it and gcStart checking the GC phase.

Fix this by expanding the condition for starting a forced GC to also
require that no GC is currently running. This, combined with the way
forcegchelper blocks until the GC cycle is started, ensures sysmon
only starts one GC when the time exceeds the forced GC threshold.
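
A minimal sketch of the strengthened condition, using made-up names
(shouldForceGC, gcRunning, forcegcPeriod) rather than the actual sysmon
code:

package sketch

import "time"

const forcegcPeriod = 2 * time.Minute // assumed interval, for illustration only

// shouldForceGC fires only when the forced-GC interval has elapsed AND no
// GC cycle is currently running, so sysmon starts at most one forced GC.
func shouldForceGC(now, lastGC time.Time, gcRunning bool) bool {
	return !gcRunning && now.Sub(lastGC) > forcegcPeriod
}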

Fixes #13458.

Change-Id: Ie6cf841927f6085136be3f45259956cd5cf10d23
Reviewed-on: https://go-review.googlesource.com/17819
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 20:13:19 +00:00
Austin Clements
50d8d4e834 runtime: simplify sigprof traceback interlocking
The addition of stack barrier locking to copystack subsumes the
partial fix from commit bbd1a1c for SIGPROF during copystack. With the
stack barrier locking, this commit simplifies the rule in sigprof to:
the user stack can be traced only if sigprof can acquire the stack
barrier lock.

Updates #12932, #13362.

Change-Id: I1c1f80015053d0ac7761e9e0c7437c2aba26663f
Reviewed-on: https://go-review.googlesource.com/17192
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 20:12:17 +00:00
Austin Clements
2bacae815b runtime: update triggerRatio in setGCPercent
Currently, runtime/debug.SetGCPercent does not adjust the controller
trigger ratio. As a result, runtime reductions of GOGC don't take full
effect until after one more concurrent cycle has happened, which
adjusts the trigger ratio to account for the new gcpercent.

Fix this by lowering the trigger ratio if necessary in setGCPercent.
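
A rough sketch of the idea, with assumed names (gcpercent, triggerRatio)
standing in for the pacer's real state:

package sketch

var (
	gcpercent    int32   // assumed stand-in for the GOGC value
	triggerRatio float64 // assumed stand-in for the pacer's trigger ratio
)

// setGCPercent lowers the trigger ratio immediately if the new GOGC no
// longer allows the current one, instead of waiting a full GC cycle.
func setGCPercent(in int32) {
	gcpercent = in
	if limit := float64(gcpercent) / 100; triggerRatio > limit {
		triggerRatio = limit
	}
}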

Change-Id: I4d23e0c58d91939b86ac60fa5d53ef91d0d89e0c
Reviewed-on: https://go-review.googlesource.com/17813
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 17:58:38 +00:00
Austin Clements
1e1ea66991 runtime: print gctrace before releasing worldsema
Currently we drop worldsema and then print the gctrace. We did this so
that if stderr is a pipe or a blocked terminal, blocking on printing
the gctrace would not block another GC from starting. However, this is
a bit of a fool's errand because a blocked runtime print will block
the whole M/P, so after GOMAXPROCS GC cycles, the whole system will
freeze. Furthermore, now this is much less of an issue because
allocation will block indefinitely if it can't start a GC (whereas it
used to be that allocation could run away). Finally, this allows
another GC cycle to start while the previous cycle is printing the
gctrace, which leads to races on reading various statistics to print
them and the next GC cycle overwriting those statistics.

Fix this by moving the release of worldsema after the gctrace print.

Change-Id: I3d044ea0f77d80f3b4050af6b771e7912258662a
Reviewed-on: https://go-review.googlesource.com/17812
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 17:58:30 +00:00
Austin Clements
ff5c945382 runtime: reset sweep stats before starting the world
Currently we reset the sweep stats just after gcMarkTermination starts
the world and releases worldsema. However, background sweeping can
start the moment we start the world and, in fact, pause sweeping can
start the moment we release worldsema (because another GC cycle can
start up), so these need to be cleared before starting the world.

Change-Id: I95701e3de6af76bb3fbf2ee65719985bf57d20b2
Reviewed-on: https://go-review.googlesource.com/17811
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 17:58:22 +00:00
Austin Clements
87d939dee8 runtime: fix (sometimes major) underestimation of heap_live
Currently, we update memstats.heap_live from mcache.local_cachealloc
whenever we lock the heap (e.g., to obtain a fresh span or to release
an unused span). However, under the right circumstances,
local_cachealloc can accumulate allocations up to the size of
the *entire heap* without flushing them to heap_live. Specifically,
since span allocations from an mcentral don't lock the heap, if a
large number of pages are held in an mcentral and the application
continues to use and free objects of that size class (e.g., the
BinaryTree17 benchmark), local_cachealloc won't be flushed until the
mcentral runs out of spans.

This is a problem because, unlike many of the memory statistics that
are purely informative, heap_live is used to determine when the
garbage collector should start and how hard it should work.

This commit eliminates local_cachealloc, instead atomically updating
heap_live directly. To control contention, we do this only when
obtaining a span from an mcentral. Furthermore, we make heap_live
conservative: allocating a span assumes that all free slots in that
span will be used and accounts for these when the span is
allocated, *before* the objects themselves are. This is important
because 1) this triggers the GC earlier than necessary rather than
potentially too late and 2) this leads to a conservative GC rate
rather than a GC rate that is potentially too low.

Alternatively, we could have flushed local_cachealloc when it passed
some threshold, but this would require determining a threshold and
would cause heap_live to underestimate the true value rather than
overestimate.
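
A sketch of the conservative accounting, with invented names (heapLive,
span, cacheSpan); the real code differs, but the shape is:

package sketch

import "sync/atomic"

var heapLive uint64 // stand-in for memstats.heap_live

type span struct {
	nfree    int     // free object slots in the span
	elemSize uintptr // size of each object
}

// cacheSpan credits every free slot to heapLive the moment the span is
// handed out, so the pacer errs toward starting GC early, never late.
func cacheSpan(s *span) {
	atomic.AddUint64(&heapLive, uint64(s.nfree)*uint64(s.elemSize))
}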

Fixes #12199.

name                      old time/op    new time/op    delta
BinaryTree17-12              2.88s ± 4%     2.88s ± 1%    ~     (p=0.470 n=19+19)
Fannkuch11-12                2.48s ± 1%     2.48s ± 1%    ~     (p=0.243 n=16+19)
FmtFprintfEmpty-12          50.9ns ± 2%    50.7ns ± 1%    ~     (p=0.238 n=15+14)
FmtFprintfString-12          175ns ± 1%     171ns ± 1%  -2.48%  (p=0.000 n=18+18)
FmtFprintfInt-12             159ns ± 1%     158ns ± 1%  -0.78%  (p=0.000 n=19+18)
FmtFprintfIntInt-12          270ns ± 1%     265ns ± 2%  -1.67%  (p=0.000 n=18+18)
FmtFprintfPrefixedInt-12     235ns ± 1%     234ns ± 0%    ~     (p=0.362 n=18+19)
FmtFprintfFloat-12           309ns ± 1%     308ns ± 1%  -0.41%  (p=0.001 n=18+19)
FmtManyArgs-12              1.10µs ± 1%    1.08µs ± 0%  -1.96%  (p=0.000 n=19+18)
GobDecode-12                7.81ms ± 1%    7.80ms ± 1%    ~     (p=0.425 n=18+19)
GobEncode-12                6.53ms ± 1%    6.53ms ± 1%    ~     (p=0.817 n=19+19)
Gzip-12                      312ms ± 1%     312ms ± 2%    ~     (p=0.967 n=19+20)
Gunzip-12                   42.0ms ± 1%    41.9ms ± 1%    ~     (p=0.172 n=19+19)
HTTPClientServer-12         63.7µs ± 1%    63.8µs ± 1%    ~     (p=0.639 n=19+19)
JSONEncode-12               16.4ms ± 1%    16.4ms ± 1%    ~     (p=0.954 n=19+19)
JSONDecode-12               58.5ms ± 1%    57.8ms ± 1%  -1.27%  (p=0.000 n=18+19)
Mandelbrot200-12            3.86ms ± 1%    3.88ms ± 0%  +0.44%  (p=0.000 n=18+18)
GoParse-12                  3.67ms ± 2%    3.66ms ± 1%  -0.52%  (p=0.001 n=18+19)
RegexpMatchEasy0_32-12       100ns ± 1%     100ns ± 0%    ~     (p=0.257 n=19+18)
RegexpMatchEasy0_1K-12       347ns ± 1%     347ns ± 1%    ~     (p=0.527 n=18+18)
RegexpMatchEasy1_32-12      83.7ns ± 2%    83.1ns ± 2%    ~     (p=0.096 n=18+19)
RegexpMatchEasy1_1K-12       509ns ± 1%     505ns ± 1%  -0.75%  (p=0.000 n=18+19)
RegexpMatchMedium_32-12      130ns ± 2%     129ns ± 1%    ~     (p=0.962 n=20+20)
RegexpMatchMedium_1K-12     39.5µs ± 2%    39.4µs ± 1%    ~     (p=0.376 n=20+19)
RegexpMatchHard_32-12       2.04µs ± 0%    2.04µs ± 1%    ~     (p=0.195 n=18+17)
RegexpMatchHard_1K-12       61.4µs ± 1%    61.4µs ± 1%    ~     (p=0.885 n=19+19)
Revcomp-12                   540ms ± 2%     542ms ± 4%    ~     (p=0.552 n=19+17)
Template-12                 69.6ms ± 1%    71.2ms ± 1%  +2.39%  (p=0.000 n=20+20)
TimeParse-12                 357ns ± 1%     357ns ± 1%    ~     (p=0.883 n=18+20)
TimeFormat-12                379ns ± 1%     362ns ± 1%  -4.53%  (p=0.000 n=18+19)
[Geo mean]                  62.0µs         61.8µs       -0.44%

name              old time/op  new time/op  delta
XBenchGarbage-12  5.89ms ± 2%  5.81ms ± 2%  -1.41%  (p=0.000 n=19+18)

Change-Id: I96b31cca6ae77c30693a891cff3fe663fa2447a0
Reviewed-on: https://go-review.googlesource.com/17748
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 16:16:08 +00:00
Austin Clements
4ad64cadf8 runtime: trace sweep completion in gcpacertrace mode
Change-Id: I7991612e4d064c15492a39c19f753df1db926203
Reviewed-on: https://go-review.googlesource.com/17747
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 16:15:59 +00:00
Austin Clements
c1cbe5b577 runtime: check for spanBytesAlloc underflow
Change-Id: I5e6739ff0c6c561195ed9891fb90f933b81e7750
Reviewed-on: https://go-review.googlesource.com/17746
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 16:15:47 +00:00
Austin Clements
6383fb6152 runtime: deduct correct sweep credit
deductSweepCredit expects the size in bytes of the span being
allocated, but mCentral_CacheSpan passes the size of a single object
in the span. As a result, we don't sweep enough on that call and when
mCentral_CacheSpan later calls reimburseSweepCredit, it's very likely
to underflow mheap_.spanBytesAlloc, which causes the next call to
deductSweepCredit to think it owes a huge number of pages and finish
off the whole sweep.

In addition to causing the occasional allocation that triggers the
full sweep to be potentially extremely expensive relative to other
allocations, this can indirectly slow down many other allocations.
deductSweepCredit uses sweepone to sweep spans, which returns
fully-unused spans to the heap, where these spans are freed and
coalesced with neighboring free spans. On the other hand, when
mCentral_CacheSpan sweeps a span, it does so with the intent to
immediately reuse that span and, as a result, will not return the span
to the heap even if it is fully unused. This saves on the cost of
locking the heap, finding a span, and initializing that span. For
example, before this change, with GOMAXPROCS=1 (or the background
sweeper disabled) BinaryTree17 returned roughly 220K spans to the heap
and allocated new spans from the heap roughly 232K times. After this
change, it returns 1.3K spans to the heap and allocates new spans from
the heap 39K times. (With background sweeping these numbers are
effectively unchanged because the background sweeper sweeps almost all
of the spans with sweepone; however, parallel sweeping saves more than
the cost of allocating spans from the heap.)
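
A sketch of the distinction the fix makes, with a hypothetical signature
and made-up state; the point is only that the argument must be the whole
span's byte size:

package sketch

const pageSize = 8192 // assumed page size, for illustration

var sweepDebtPages uintptr // made-up stand-in for the runtime's sweep credit

// deductSweepCredit expects the byte size of the span being allocated.
// Passing a single object's size (the bug) sweeps far too little and
// lets the later reimbursement underflow the credit counter.
func deductSweepCredit(spanBytes uintptr) {
	sweepDebtPages += (spanBytes + pageSize - 1) / pageSize
}

// Buggy call (sketch):  deductSweepCredit(elemSize)
// Fixed call (sketch):  deductSweepCredit(npages * pageSize)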

Fixes #13535.
Fixes #13589.

name                      old time/op    new time/op    delta
BinaryTree17-12              3.03s ± 1%     2.86s ± 4%  -5.61%  (p=0.000 n=18+20)
Fannkuch11-12                2.48s ± 1%     2.49s ± 1%    ~     (p=0.060 n=17+20)
FmtFprintfEmpty-12          50.7ns ± 1%    50.9ns ± 1%  +0.43%  (p=0.025 n=15+16)
FmtFprintfString-12          174ns ± 2%     174ns ± 2%    ~     (p=0.539 n=19+20)
FmtFprintfInt-12             158ns ± 1%     158ns ± 1%    ~     (p=0.300 n=18+20)
FmtFprintfIntInt-12          269ns ± 2%     269ns ± 2%    ~     (p=0.784 n=20+18)
FmtFprintfPrefixedInt-12     233ns ± 1%     234ns ± 1%    ~     (p=0.389 n=18+18)
FmtFprintfFloat-12           309ns ± 1%     310ns ± 1%  +0.25%  (p=0.048 n=18+18)
FmtManyArgs-12              1.10µs ± 1%    1.10µs ± 1%    ~     (p=0.259 n=18+19)
GobDecode-12                7.81ms ± 1%    7.72ms ± 1%  -1.17%  (p=0.000 n=19+19)
GobEncode-12                6.56ms ± 0%    6.55ms ± 1%    ~     (p=0.433 n=17+19)
Gzip-12                      318ms ± 2%     317ms ± 1%    ~     (p=0.578 n=19+18)
Gunzip-12                   42.1ms ± 2%    42.0ms ± 0%  -0.45%  (p=0.007 n=18+16)
HTTPClientServer-12         63.9µs ± 1%    64.0µs ± 1%    ~     (p=0.146 n=17+19)
JSONEncode-12               16.4ms ± 1%    16.4ms ± 1%    ~     (p=0.271 n=19+19)
JSONDecode-12               58.1ms ± 1%    58.0ms ± 1%    ~     (p=0.152 n=18+18)
Mandelbrot200-12            3.85ms ± 0%    3.85ms ± 0%    ~     (p=0.126 n=19+18)
GoParse-12                  3.71ms ± 1%    3.64ms ± 1%  -1.86%  (p=0.000 n=20+18)
RegexpMatchEasy0_32-12       100ns ± 2%     100ns ± 1%    ~     (p=0.588 n=20+20)
RegexpMatchEasy0_1K-12       346ns ± 1%     347ns ± 1%  +0.27%  (p=0.014 n=17+20)
RegexpMatchEasy1_32-12      82.9ns ± 3%    83.5ns ± 3%    ~     (p=0.096 n=19+20)
RegexpMatchEasy1_1K-12       506ns ± 1%     506ns ± 1%    ~     (p=0.530 n=19+19)
RegexpMatchMedium_32-12      129ns ± 2%     129ns ± 1%    ~     (p=0.566 n=20+19)
RegexpMatchMedium_1K-12     39.4µs ± 1%    39.4µs ± 1%    ~     (p=0.713 n=19+20)
RegexpMatchHard_32-12       2.05µs ± 1%    2.06µs ± 1%  +0.36%  (p=0.008 n=18+20)
RegexpMatchHard_1K-12       61.6µs ± 1%    61.7µs ± 1%    ~     (p=0.286 n=19+20)
Revcomp-12                   538ms ± 1%     541ms ± 2%    ~     (p=0.081 n=18+19)
Template-12                 71.5ms ± 2%    71.6ms ± 1%    ~     (p=0.513 n=20+19)
TimeParse-12                 357ns ± 1%     357ns ± 1%    ~     (p=0.935 n=19+18)
TimeFormat-12                352ns ± 1%     352ns ± 1%    ~     (p=0.293 n=19+20)
[Geo mean]                  62.0µs         61.9µs       -0.21%

name              old time/op  new time/op  delta
XBenchGarbage-12  5.83ms ± 2%  5.86ms ± 3%    ~     (p=0.247 n=19+20)

Change-Id: I790bb530adace27ccf25d372f24a11954b88443c
Reviewed-on: https://go-review.googlesource.com/17745
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-15 16:15:38 +00:00
Péter Szilágyi
d26a0952a8 runtime: init argc/argv for android/arm64 c-shared
Analogous to https://go-review.googlesource.com/#/c/8457/, this
code synthesizes a set of program arguments for Android on the
arm64 architecture.

Change-Id: I851958b4b0944ec79d7a1426a3bb2cfc31746797
Reviewed-on: https://go-review.googlesource.com/17782
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-12-15 13:49:47 +00:00
Austin Clements
0cbf8d13a7 runtime: recycle large stack spans
To prevent races with the garbage collector, stack spans cannot be
reused as heap spans during a GC. We deal with this by caching stack
spans during GC and releasing them at the end of mark termination.
However, while our cache lets us reuse small stack spans, currently
large stack spans are *not* reused. This can cause significant memory
growth in programs that allocate large stacks rapidly, but grow the
heap slowly (such as in issue #13552).

Fix this by adding logic to reuse large stack spans for other stacks.

Fixes #11466.

Fixes #13552. Without this change, the program in this issue creeps to
over 1GB of memory over the course of a few hours. With this change,
it stays rock solid at around 30MB.

Change-Id: If8b2d85464aa80c96230a1990715e39aa803904f
Reviewed-on: https://go-review.googlesource.com/17814
Reviewed-by: Keith Randall <khr@golang.org>
2015-12-14 21:57:34 +00:00
Dmitry Vyukov
fb6f8a96f2 runtime: remove unnecessary wakeups of worker threads
Currently we wake up new worker threads whenever we pass
through the scheduler with nmspinning==0. This leads to
lots of unnecessary thread wake ups.
Instead let only spinning threads wake up new spinning threads.

For the following program:

package main
import "runtime"
func main() {
	for i := 0; i < 1e7; i++ {
		runtime.Gosched()
	}
}

Before:
$ time ./test
real	0m4.278s
user	0m7.634s
sys	0m1.423s

$ strace -c ./test
% time     seconds  usecs/call     calls    errors syscall
 99.93    9.314936           3   2685009     17536 futex

After:
$ time ./test
real	0m1.200s
user	0m1.181s
sys	0m0.024s

$ strace -c ./test
% time     seconds  usecs/call     calls    errors syscall
  3.11    0.000049          25         2           futex

Fixes #13527

Change-Id: Ia1f5bf8a896dcc25d8b04beb1f4317aa9ff16f74
Reviewed-on: https://go-review.googlesource.com/17540
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-12-11 11:31:12 +00:00
Rahul Chaudhry
f939ee13ae runtime: fix GODEBUG=schedtrace=X delay handling.
debug.schedtrace is an int32. Convert it to int64 before
multiplying by the constant 1000000. Otherwise, schedtrace
values greater than 2147 overflow int32, causing
incorrect delays between traces.
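
The overflow in miniature, as a sketch (the real variable lives in the
runtime's debug struct):

package sketch

// delayNanos widens the int32 GODEBUG value before multiplying; doing
// the multiplication in int32 wraps once schedtrace exceeds 2147 (ms).
func delayNanos(schedtrace int32) int64 {
	return int64(schedtrace) * 1000000 // widen first, then multiply
}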

Change-Id: I064e8d7b432c1e892a705ee1f31a2e8cdd2c3ea3
Reviewed-on: https://go-review.googlesource.com/17712
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
2015-12-11 03:59:20 +00:00
Brad Fitzpatrick
371e44e5a1 runtime/race: update two stale references
Fixes #13550

Change-Id: I407daad8b94f6773d7949ba27981d26cbfd2cdf4
Reviewed-on: https://go-review.googlesource.com/17682
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-09 18:11:42 +00:00
Russ Cox
50c5042047 runtime: best-effort detection of concurrent misuse of maps
If reports like #13062 are really concurrent misuse of maps,
we can detect that, at least some of the time, with a cheap check.

There is an extra pair of memory writes for writing to a map,
but to the same cache line as h.count, which is often being modified anyway,
and there is an extra memory read for reading from a map,
but to the same cache line as h.count, which is always being read anyway.
So the check should be basically invisible and may help reduce the
number of "mysterious runtime crash due to map misuse" reports.

Change-Id: I0e71b0d92eaa3b7bef48bf41b0f5ab790092487e
Reviewed-on: https://go-review.googlesource.com/17501
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-12-07 20:19:52 +00:00
Russ Cox
b9fde8605e runtime: fix integer comparison in signal handling
(sig is unsigned, so sig-1 >= 0 is always true.)
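
The bug class in miniature (a sketch, not the handler's actual code):

package sketch

// validSig shows why the original test was vacuous: with an unsigned
// sig, sig-1 >= 0 is always true, so both bounds must be checked
// explicitly.
func validSig(sig, nsig uint32) bool {
	return sig >= 1 && sig <= nsig
}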

Fixes #11281.

Change-Id: I4b9d784da6e3cc80816f2d2f7228d5d8a237e2d5
Reviewed-on: https://go-review.googlesource.com/17457
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
2015-12-05 21:23:35 +00:00
Russ Cox
c5a94ba24f runtime: document that NumCPU does not change
Fixes #11609.

Change-Id: I3cf64164fde28ebf739706728b84d8ef5b6dc90e
Reviewed-on: https://go-review.googlesource.com/17456
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-12-05 20:16:35 +00:00
Shenghou Ma
35e84546d7 runtime: check and fail early with a message if MMX is not available on 386
Fixes #12970.

Change-Id: Id0026e8274e071d65d47df63d65a93110abbec5d
Reviewed-on: https://go-review.googlesource.com/15998
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-12-05 17:43:51 +00:00
Russ Cox
cd58f44b20 runtime/cgo: assume Solaris thread stack is at least 1 MB
When run with "ulimit -s unlimited", the misc/cgo/test test binary
finds a stack size of 0x3000 returned by getcontext, causing the
runtime to try to stay within those bounds and then fault when
called back in the test after 64 kB has been used by C.

I suspect that Solaris is doing something clever like reporting the
current stack size and growing the stack as faults happen.
On all the other systems, getcontext reports the maximum stack size.
And when the ulimit is not unlimited, even Solaris reports the
maximum stack size.

Work around this by assuming that any stack on Solaris must be at least 1 MB.
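
A minimal sketch of the workaround, with assumed names:

package sketch

const minSolarisStack = 1 << 20 // 1 MB

// stackBounds clamps whatever getcontext reported so the runtime never
// believes a Solaris thread stack is smaller than 1 MB.
func stackBounds(reported uintptr) uintptr {
	if reported < minSolarisStack {
		return minSolarisStack
	}
	return reported
}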

Fixes #12210.

Change-Id: I0a6ed0afb8a8f50aa1b2486f32b4ae470ab47dbf
Reviewed-on: https://go-review.googlesource.com/17452
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-12-05 03:56:06 +00:00
Austin Clements
bb6fb929d6 runtime: fix sanity check in stackBarrier
stackBarrier on amd64 sanity checks that it's unwinding the correct
entry in the stack barrier array. However, this check is wrong in two
ways that make it unlikely to catch anything, right or wrong:

1) It checks that savedLRPtr == SP, but, in fact, it should be that
   savedLRPtr+8 == SP because the RET that returned to stackBarrier
   popped the saved LR. However, we didn't notice this check was wrong
   because,

2) the sense of the conditional branch is also wrong.

Fix both of these.

Change-Id: I38ba1f652b0168b5b2c11b81637656241262af7c
Reviewed-on: https://go-review.googlesource.com/17039
Reviewed-by: Russ Cox <rsc@golang.org>
2015-12-03 03:53:35 +00:00
Shenghou Ma
08b80ca880 runtime: note interactions between GC and MemProfile
Change-Id: Icce28fc4937cc73c0712c054161222f034381c2f
Reviewed-on: https://go-review.googlesource.com/16876
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-12-03 02:11:52 +00:00
Rahul Chaudhry
c091d4cd25 runtime: set TLSG_IS_VARIABLE for android/arm64.
On android, runtime.tls_g is a normal variable.
TLS offset is computed in x_cgo_inittls.

Change-Id: I18bc9a736d5fb2a89d0f798956c754e3c10d10e2
Reviewed-on: https://go-review.googlesource.com/17246
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-12-02 22:00:04 +00:00
Rahul Chaudhry
4480d6a927 runtime/cgo: define x_cgo_inittls() for android/arm64.
On android, runtime.tls_g is a normal variable.
TLS offset is computed in x_cgo_inittls.

Change-Id: I64cfd3543040776dcdf73cad8dba54fc6aaf6f35
Reviewed-on: https://go-review.googlesource.com/17245
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-12-02 21:59:52 +00:00
Michael Hudson-Doyle
f000523018 runtime: set r12 to sigpanic before jumping to it in sighandler
The ppc64le shared library ABI demands that r12 be set to a function's global
entrypoint before jumping to that entrypoint. Not doing so means that
handling signals that would normally panic instead crashes (and so, e.g.,
can't be recovered). Fixes several failures of "cd test; go run run.go -linkshared".

Change-Id: Ia4d0da4c13efda68340d38c045a52b37c2f90796
Reviewed-on: https://go-review.googlesource.com/17280
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-12-01 04:51:35 +00:00
Michael Hudson-Doyle
fd2bc8681d runtime: fix conflict resolution in golang.org/cl/14207
Fixes testshared on arm64 and ppc64le.

Change-Id: Ie94bc0c85c7666fbb5ab6fc6d3dbb180407a9955
Reviewed-on: https://go-review.googlesource.com/17212
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-11-25 01:32:55 +00:00
Shenghou Ma
3583a44ed2 runtime: check that masks and shifts are correctly aligned
We need a runtime check because the original issue is encountered
when running a cross-compiled Windows program from Linux. It's better
to give a meaningful crash message earlier than to segfault later.

The added test should not impose any measurable overhead to Go
programs.

For #12415.

Change-Id: Ib4a24ef560c09c0585b351d62eefd157b6b7f04c
Reviewed-on: https://go-review.googlesource.com/14207
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-25 00:46:57 +00:00
Austin Clements
ab9d5f38be runtime: make gcFlushBgCredit go:nowritebarrierrec
Write barriers in gcFlushBgCredit lead to very subtle bugs because it
executes after the getfull barrier. I tracked some bugs of this form
down before go:nowritebarrierrec was implemented. Ensure that they
don't reappear by making gcFlushBgCredit go:nowritebarrierrec.
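
For reference, the directive looks like this (it is only honored when
compiling the runtime package itself, so this fragment is a sketch of
runtime-internal code, not standalone Go):

//go:nowritebarrierrec
func gcFlushBgCredit(scanWork int64) {
	// Neither this function nor anything it calls may contain a write
	// barrier; the compiler now enforces that statically.
}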

Change-Id: Ia5ca2dc59e6268bce8d8b4c87055bd0f6e19bed2
Reviewed-on: https://go-review.googlesource.com/17052
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-24 19:37:03 +00:00
Austin Clements
e126f30a66 runtime: recursively disallow write barriers in sighandler
sighandler may run during STW, so write barriers are not allowed.

Change-Id: Icdf46be10ea296fd87e73ab56ebb718c5d3c97ac
Reviewed-on: https://go-review.googlesource.com/17007
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-24 19:36:55 +00:00
Elias Naur
a7383fc467 runtime: use a proper type, sigset, for m.sigmask
Replace the cross platform but unsafe [4]uintptr type with a OS
specific type, sigset. Most OSes already define sigset, and this
change defines a suitable sigset for the OSes that don't (darwin,
openbsd). The OSes that don't use m.sigmask (windows, plan9, nacl)
now defines sigset as the empty type, struct{}.

The gain is strongly typed access to m.sigmask, saving a dynamic
size sanity check and unsafe.Pointer casting. Also, some storage is
saved for each M, since [4]uinptr was conservative for most OSes.

The cost is that OSes that don't need m.sigmask has to define sigset.
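
The two shapes described above, sketched with assumed layouts (the exact
per-OS definitions vary):

package sketch

// For OSes that never touch m.sigmask (windows, plan9, nacl):
type sigset struct{}

// A Unix-like OS instead defines sigset as its native mask layout,
// e.g. something along the lines of:
//
//	type sigset [2]uint32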

Completes ./all.bash with GOOS=linux on amd64.
Completes ./make.bash with GOOS set to openbsd, android, plan9, windows,
darwin, solaris, netbsd, freebsd, and dragonfly, all on amd64.

With GOOS=nacl ./make.bash failed with a seemingly unrelated error.

[Replay of CL 16942 by Elias Naur.]

Change-Id: I98f144d626033ae5318576115ed635415ac71b2c
Reviewed-on: https://go-review.googlesource.com/17033
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
2015-11-24 17:16:47 +00:00
dvyukov
5526d95019 runtime/race: add tests for channels
These tests were failing on one of the versions of cl/9345
("runtime: simplify buffered channels").

Change-Id: I920ffcd28de428bcb7c2d5a300068644260e1017
Reviewed-on: https://go-review.googlesource.com/16416
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-24 17:00:43 +00:00
Ilya Tocar
b597e1ed54 runtime: speed up memclr with avx2 on amd64
Results are a bit noisy, but show good improvement (haswell)

name            old time/op    new time/op     delta
Memclr5-48        6.06ns ± 8%     5.65ns ± 8%    -6.81%  (p=0.000 n=20+20)
Memclr16-48       5.75ns ± 6%     5.71ns ± 6%      ~     (p=0.545 n=20+19)
Memclr64-48       6.54ns ± 5%     6.14ns ± 9%    -6.12%  (p=0.000 n=18+20)
Memclr256-48      10.1ns ±12%      9.9ns ±14%      ~     (p=0.285 n=20+19)
Memclr4096-48      104ns ± 8%       57ns ±15%   -44.98%  (p=0.000 n=20+20)
Memclr65536-48    2.45µs ± 5%     2.43µs ± 8%      ~     (p=0.665 n=16+20)
Memclr1M-48       58.7µs ±13%     56.4µs ±11%    -3.92%  (p=0.033 n=20+19)
Memclr4M-48        233µs ± 9%      234µs ± 9%      ~     (p=0.728 n=20+19)
Memclr8M-48        469µs ±11%      472µs ±16%      ~     (p=0.947 n=20+20)
Memclr16M-48       947µs ±10%      916µs ±10%      ~     (p=0.050 n=20+19)
Memclr64M-48      10.9ms ±10%      4.5ms ± 9%   -58.43%  (p=0.000 n=20+20)
GoMemclr5-48      3.80ns ±13%     3.38ns ± 6%   -11.02%  (p=0.000 n=20+20)
GoMemclr16-48     3.34ns ±15%     3.40ns ± 9%      ~     (p=0.351 n=20+20)
GoMemclr64-48     4.10ns ±15%     4.04ns ±10%      ~     (p=1.000 n=20+19)
GoMemclr256-48    7.75ns ±20%     7.88ns ± 9%      ~     (p=0.227 n=20+19)

name            old speed      new speed       delta
Memclr5-48       826MB/s ± 7%    886MB/s ± 8%    +7.32%  (p=0.000 n=20+20)
Memclr16-48     2.78GB/s ± 5%   2.81GB/s ± 6%      ~     (p=0.550 n=20+19)
Memclr64-48     9.79GB/s ± 5%  10.44GB/s ±10%    +6.64%  (p=0.000 n=18+20)
Memclr256-48    25.4GB/s ±14%   25.6GB/s ±12%      ~     (p=0.647 n=20+19)
Memclr4096-48   39.4GB/s ± 8%   72.0GB/s ±13%   +82.81%  (p=0.000 n=20+20)
Memclr65536-48  26.6GB/s ± 6%   27.0GB/s ± 9%      ~     (p=0.517 n=17+20)
Memclr1M-48     17.9GB/s ±12%   18.5GB/s ±11%      ~     (p=0.068 n=20+20)
Memclr4M-48     18.0GB/s ± 9%   17.8GB/s ±14%      ~     (p=0.547 n=20+20)
Memclr8M-48     17.9GB/s ±10%   17.8GB/s ±14%      ~     (p=0.947 n=20+20)
Memclr16M-48    17.8GB/s ± 9%   18.4GB/s ± 9%      ~     (p=0.050 n=20+19)
Memclr64M-48    6.19GB/s ±10%  14.87GB/s ± 9%  +140.11%  (p=0.000 n=20+20)
GoMemclr5-48    1.31GB/s ±10%   1.48GB/s ± 6%   +13.06%  (p=0.000 n=19+20)
GoMemclr16-48   4.81GB/s ±14%   4.71GB/s ± 8%      ~     (p=0.341 n=20+20)
GoMemclr64-48   15.7GB/s ±13%   15.8GB/s ±11%      ~     (p=0.967 n=20+19)

Change-Id: I393f3f20e2f31538d1b1dd70d6e5c201c106a095
Reviewed-on: https://go-review.googlesource.com/16773
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Klaus Post <klauspost@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
2015-11-24 16:49:30 +00:00
Dave Cheney
984004263d runtime: skip CallbackGC test in short mode on linux/arm
Fixes #11959
Fixes #12035

Skip the CallbackGC test on linux/arm. This test takes between 30 and 60
seconds to run by itself, and is run 4 times over the course of ./run.bash
(once during the runtime test, three times more later in the build).

Change-Id: I4e7d3046031cd8c08f39634bdd91da6e00054caf
Reviewed-on: https://go-review.googlesource.com/14485
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-24 16:48:32 +00:00
Alex Brainman
1cb53ce36b runtime: fix handling VirtualAlloc failure in sysUsed
The original code mistakenly panics on VirtualAlloc failure - we want
it to go looking for a smaller memory region that VirtualAlloc will
succeed in allocating. Also return immediately if VirtualAlloc succeeds.
See rsc comment on issue #12587 for details.
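
Very roughly, and with made-up helper names, the intended shape is:

package sketch

// commitRegion retries with smaller pieces on failure instead of
// panicking, and moves on as soon as a piece is committed.
func commitRegion(v, n uintptr) {
	for n > 0 {
		small := n
		for !tryCommit(v, small) {
			small /= 2
			if small == 0 {
				panic("runtime: cannot commit memory")
			}
		}
		v += small
		n -= small
	}
}

// tryCommit is a hypothetical wrapper around VirtualAlloc(MEM_COMMIT).
func tryCommit(v, n uintptr) bool { return true }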

I still don't have a test for this. So I can only hope that this

Fixes #12587

Change-Id: I052068ec627fdcb466c94ae997ad112016f734b7
Reviewed-on: https://go-review.googlesource.com/17169
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-24 14:50:23 +00:00
Michael Hudson-Doyle
239273d963 runtime: mark {g,m,p}uintptr methods as nosplit
These are methods that are "obviously" going to get inlined -- until you build
with -l, when they can trigger a stack split at a bad time.
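
The pattern in question, sketched with a placeholder m type:

package sketch

import "unsafe"

type m struct{} // placeholder for the runtime's m

// muintptr holds an *m as a bare uintptr, invisible to the GC.
type muintptr uintptr

// ptr is "obviously" inlinable, but with -l it becomes a real call, so
// nosplit keeps it from splitting the stack at a bad moment.
//go:nosplit
func (mp muintptr) ptr() *m { return (*m)(unsafe.Pointer(mp)) }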

Fixes #11482

Change-Id: Ia065c385978a2e7fe9f587811991d088c4d68325
Reviewed-on: https://go-review.googlesource.com/17165
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-24 02:33:04 +00:00
Austin Clements
ecf388f3a4 runtime: take stack barrier lock during copystack
Commit bbd1a1c prevented SIGPROF from scanning stacks that were being
copied, but it didn't prevent a stack copy (specifically a stack
shrink) from happening while SIGPROF is scanning the stack. As a
result, a stack copy may adjust stack barriers while SIGPROF is in the
middle of scanning a stack, causing SIGPROF to panic when it detects
an inconsistent stack barrier.

Fix this by taking the stack barrier lock while adjusting the stack.
In addition to preventing SIGPROF from scanning this stack, this will
block until any in-progress SIGPROF is done scanning the stack.

For 1.5.2.

Fixes #13362.
Updates #12932.

Change-Id: I422219c363054410dfa56381f7b917e04690e5dd
Reviewed-on: https://go-review.googlesource.com/17191
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-24 01:50:52 +00:00
Alex Brainman
ab4c9298b8 runtime: do not call timeBeginPeriod on windows
Calling timeBeginPeriod changes Windows global timer resolution
from 15ms to 1ms. This used to improve Go runtime scheduler
performance, but not anymore. Thanks to @aclements, the scheduler now
behaves the same whether or not we call timeBeginPeriod.

Remove the call to timeBeginPeriod, since it is a machine-global
resource, and there are downsides to using a low timer resolution.
See issue #8687 for details.

Fixes #8687

Change-Id: Ib7e41aa4a81861b62a900e0e62776c9ef19bfb73
Reviewed-on: https://go-review.googlesource.com/17164
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
2015-11-24 01:11:58 +00:00
Austin Clements
b890333998 runtime: clean up gcMarkDone
This improves the documentation comment on gcMarkDone, replaces a
recursive call with a simple goto, and disables preemption before
stopping the world in accordance with the documentation comment on
stopTheWorldWithSema.

Updates #13363, but, sadly, doesn't fix it.

Change-Id: I6cb2a5836b35685bf82f7b1ce7e48a7625906656
Reviewed-on: https://go-review.googlesource.com/17149
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-23 20:59:35 +00:00
Austin Clements
a7c09ad4fb runtime: improve stack barrier debugging
This improves stack barrier debugging messages in various ways:

1) Rather than printing only the remaining stack barriers (of which
   there may be none, which isn't very useful), print all of the G's
   stack barriers with a marker at the position the stack itself has
   unwound to and a marker at the problematic stack barrier (where
   applicable).

2) Rather than crashing if we encounter a stack barrier when there are
   no more stkbar entries, print the same debug message we would if we
   had encountered a stack barrier at an unexpected location.

Hopefully this will help with debugging #12528.

Change-Id: I2e6fe6a778e0d36dd8ef30afd4c33d5d94731262
Reviewed-on: https://go-review.googlesource.com/17147
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-23 19:17:52 +00:00
Austin Clements
22e57c6655 runtime: make stack barrier locking more robust
The stack barrier locking functions use a simple cas lock because they
need to support trylock, but currently don't increment g.m.locks. This
is okay right now because they always run on the system stack or the
signal stack and are hence non-preemptible, but this could lead to
difficult-to-reproduce deadlocks if these conditions change in the
future.

Make these functions more robust by incrementing g.m.locks and making
them nosplit to enforce non-preemptibility.

Change-Id: I73d60a35bd2ad2d81c73aeb20dbd37665730eb1b
Reviewed-on: https://go-review.googlesource.com/17058
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ingo Oeser <nightlyone@googlemail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-23 19:13:15 +00:00
Austin Clements
624d798a41 runtime/pprof: disable TestStackBarrierProfiling on ppc64
This test depends on GODEBUG=gcstackbarrierall, which doesn't work on
ppc64.

Updates #13334.

Change-Id: Ie554117b783c4e999387f97dd660484488499d85
Reviewed-on: https://go-review.googlesource.com/17120
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-20 19:39:36 +00:00
Russ Cox
cb859021d1 runtime: fix new stack barrier check
During a crash showing goroutine stacks of all threads
(with GOTRACEBACK=crash), it can be that f == nil.

Only happens on Solaris; not sure why.

Change-Id: Iee2c394a0cf19fa0a24f6befbc70776b9e42d25a
Reviewed-on: https://go-review.googlesource.com/17110
Reviewed-by: Austin Clements <austin@google.com>
2015-11-20 19:08:21 +00:00
David Crawshaw
2fa64c4182 runtime/pprof: check if test can fork
(TestStackBarrierProfiling is failing on darwin/arm.)

Change-Id: I8006d6222ccafc213821e02105896440079caa37
Reviewed-on: https://go-review.googlesource.com/17091
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-20 17:52:33 +00:00
Shenghou Ma
35a5bd6431 runtime: make it possible to call syscall on solaris without g
The nosplit stack is now much bigger, so we can afford to allocate
libcall on the stack.

Fix asmsysvicall6 to not update errno if g == nil.

These two changes fix TestCgoCallbackGC on solaris, which used to get
stuck in a loop.

Change-Id: Id1b13be992dae9f059aa3d47ffffd37785300933
Reviewed-on: https://go-review.googlesource.com/17076
Run-TryBot: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-20 08:11:35 +00:00
Russ Cox
3af29fb858 runtime: make asmcgocall work without a g
Solaris needs to make system calls without a g,
and Solaris uses asmcgocall to make system calls.
I know, I know.

I hope this makes CL 16915, fixing #12277, work on Solaris.

Change-Id: If988dfd37f418b302da9c7096f598e5113ecea87
Reviewed-on: https://go-review.googlesource.com/17072
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Run-TryBot: Russ Cox <rsc@golang.org>
2015-11-20 02:51:09 +00:00
Austin Clements
d2c81ad847 runtime: recursively disallow write barriers in sysmon
sysmon runs without a P. This means it can't interact with the garbage
collector, so write barriers are not allowed in anything that sysmon does.

Fixes #10600.

Change-Id: I9de1283900dadee4f72e2ebfc8787123e382ae88
Reviewed-on: https://go-review.googlesource.com/17006
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-19 21:17:25 +00:00
Austin Clements
402e37d4a9 cmd/compile: special case nowritebarrierrec for allocm
allocm is a very unusual function: it is specifically designed to
allocate in contexts where m.p is nil by temporarily taking over a P.
Since allocm is used in many contexts where it would make sense to use
nowritebarrierrec, this commit teaches the nowritebarrierrec analysis
to stop at allocm.

Updates #10600.

Change-Id: I8499629461d4fe25712d861720dfe438df7ada9b
Reviewed-on: https://go-review.googlesource.com/17005
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-19 21:17:19 +00:00
Austin Clements
c84ae1c499 runtime: eliminate write barriers from mem_plan9.go
This replaces *memHdr with memHdrPtr.

Updates #10600.

Change-Id: I673aa2cd20f29abec8ab91ed7e783718c8479ce1
Reviewed-on: https://go-review.googlesource.com/17009
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
2015-11-19 21:17:14 +00:00
Austin Clements
e9aef43d87 runtime: eliminate traceAllocBlock write barriers
This replaces *traceAllocBlock with traceAllocBlockPtr.

Updates #10600.

Change-Id: I94a20d90f04cca7c457b29062427748e315e4857
Reviewed-on: https://go-review.googlesource.com/17004
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-11-19 21:17:09 +00:00
Austin Clements
b43b375c6c runtime: eliminate write barriers from gentraceback
gentraceback is used in many contexts where write barriers are
disallowed. This currently works because the only write barrier is in
assigning frame.argmap in setArgInfo and in practice frame is always
on the stack, so this write barrier is a no-op.

However, we can easily eliminate this write barrier, which will let us
statically disallow write barriers (using go:nowritebarrierrec
annotations) in many more situations. As a bonus, this makes the code
a little more idiomatic.

Updates #10600.

Change-Id: I45ba5cece83697ff79f8537ee6e43eadf1c18c6d
Reviewed-on: https://go-review.googlesource.com/17003
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-11-19 21:17:04 +00:00