This makes traces self-contained and simplifies the tracing workflow
in modern cloud environments, where it is simpler to reach
a service via HTTP than to obtain the binary.
Change-Id: I6ff3ca694dc698270f1e29da37d5efaf4e843a0d
Reviewed-on: https://go-review.googlesource.com/21732
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
This broke solaris, which apparently does use the upper 17 bits of the address space.
This reverts commit 3b02c5b1b6.
Change-Id: Iedfe54abd0384960845468205f20191a97751c0b
Reviewed-on: https://go-review.googlesource.com/21652
Reviewed-by: Dave Cheney <dave@cheney.net>
The test for profiling of channel blocking is timing dependent,
and in particular the blockSelectRecvAsync case can fail on a
slow builder (plan9_arm) when many tests are run in parallel.
The child goroutine sleeps for a fixed period so the parent
can be observed to block in a select call reading from the
child; but if the OS process running the parent goroutine is
delayed long enough, the child may wake again before the
parent has reached the blocking point. By repeating the test
three times, the likelihood of a blocking event is increased.
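The shape of the retried test, sketched (durations and names are
illustrative, not the test's exact code; assumes "time" is imported):

    func blockSelectRecvAsync() {
        const tries = 3
        c := make(chan bool, 1)
        for i := 0; i < tries; i++ {
            go func() {
                time.Sleep(20 * time.Millisecond) // child sleeps a fixed period
                c <- true
            }()
            select { // parent should be seen blocking here at least once
            case <-c:
            }
        }
    }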
Fixes #15096
Change-Id: I2ddb9576a83408d06b51ded682bf8e71e53ce59e
Reviewed-on: https://go-review.googlesource.com/21604
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Merge the remaining lfstack{Pack,Unpack} implementations into one file.
unsafe.Sizeof(uintptr(0)) == 4 is a constant comparison, so this branch
folds away at compile time.
Dmitry confirmed that the upper 17 bits of an address will be zero for a
user mode pointer, so there is no need to sign extend on amd64 during
unpack, and we can reuse the same implementation as all other 64-bit
archs.
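A sketch of the merged shape (shift amounts and count widths are
illustrative, not the exact runtime constants):

    import "unsafe"

    func lfstackPack(node unsafe.Pointer, cnt uintptr) uint64 {
        if unsafe.Sizeof(uintptr(0)) == 4 {
            // 32-bit: pointer in the high half, count in the low half.
            // The condition is a compile-time constant, so on 64-bit
            // platforms this whole branch folds away.
            return uint64(uintptr(node))<<32 | uint64(cnt)
        }
        // 64-bit: the upper bits of a user-mode pointer are zero, so no
        // sign extension is needed when unpacking, and every 64-bit arch
        // can share this code.
        return uint64(uintptr(node))<<16 | uint64(cnt&(1<<19-1))
    }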
Change-Id: I99f589416d8b181ccde5364c9c2e78e4a5efc7f1
Reviewed-on: https://go-review.googlesource.com/21597
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
This ensures that Go processes do not die on startup on a system with >256 CPUs.
I tested this by hacking osinit to set ncpu to 1000.
Updates #15131
Change-Id: I52e061a0de97be41d684dd8b748fa9087d6f1aef
Reviewed-on: https://go-review.googlesource.com/21599
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Neither of the two places that call lfstackUnpack uses the second
argument.
This simplifies a followup CL that merges the lfstack{Pack,Unpack}
implementations.
Change-Id: I3c93f6259da99e113d94f8c8027584da79c1ac2c
Reviewed-on: https://go-review.googlesource.com/21595
Run-TryBot: Dave Cheney <dave@cheney.net>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Flaky tests are a distraction and cover up real problems.
File bugs instead and mark them as flaky.
This moves the net/http flaky test flagging mechanism to internal/testenv.
Updates #15156
Updates #15157
Updates #15158
Change-Id: I0e561cd2a09c0dec369cd4ed93bc5a2b40233dfe
Reviewed-on: https://go-review.googlesource.com/21614
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Merge all the 64-bit lfstack implementations into one file and adjust
the build tags to match.
Merge all the comments on the various lfstack implementations for
posterity.
lfstack_amd64.go can probably be merged too, but it is slightly
different, so that will happen in a followup.
Change-Id: I5362d5e127daa81c9cb9d4fa8a0cc5c5e5c2707c
Reviewed-on: https://go-review.googlesource.com/21591
Run-TryBot: Dave Cheney <dave@cheney.net>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
A future CL will rename os1_windows.go to os_windows.go.
Change-Id: I223e76002dd1e9c9d1798fb0beac02c7d3bf4812
Reviewed-on: https://go-review.googlesource.com/21564
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Missed a case for closure calls (OCALLFUNC && indirect) in
esc.go:esccall.
Cleaned up the runtime code for Windows to more thoroughly hide
a technical escape. Also made the code pickier about failing
to load the non-optional kernel32.dll.
Fixes #14409.
Change-Id: Ie75486a2c8626c4583224e02e4872c2875f7bca5
Reviewed-on: https://go-review.googlesource.com/20102
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Usleep(100) in runqgrab negatively affects latency and throughput
of parallel applications. We are sleeping instead of doing useful work.
This effect is particularly visible on windows, where the minimal
sleep duration is 1-15ms.
Reduce the sleep from 100us to 3us and use osyield on windows.
Sync chan send/recv takes ~50ns, so 3us gives us ~50x overshoot.
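In runtime-internal pseudocode, the backoff in runqgrab becomes roughly
(osyield and usleep are unexported runtime functions, sketched here):

    // was: usleep(100)
    if GOOS == "windows" {
        // the minimal sleep is a full 1-15ms timer tick; yield instead
        osyield()
    } else {
        usleep(3)
    }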
benchmark old ns/op new ns/op delta
BenchmarkChanSync-12 216 217 +0.46%
BenchmarkChanSyncWork-12 27213 25816 -5.13%
CPU consumption goes up from 106% to 108% in the first case,
and from 107% to 125% in the second case.
Test case from #14790 on windows:
BenchmarkDefaultResolution-8 4583372 29720 -99.35%
Benchmark1ms-8 992056 30701 -96.91%
99-th latency percentile for HTTP request serving is improved by up to 15%
(see http://golang.org/cl/20835 for details).
The following benchmarks are from the change that originally added this sleep
(see https://golang.org/s/go15gomaxprocs):
name old time/op new time/op delta
Chain 22.6µs ± 2% 22.7µs ± 6% ~ (p=0.905 n=9+10)
ChainBuf 22.4µs ± 3% 22.5µs ± 4% ~ (p=0.780 n=9+10)
Chain-2 23.5µs ± 4% 24.9µs ± 1% +5.66% (p=0.000 n=10+9)
ChainBuf-2 23.7µs ± 1% 24.4µs ± 1% +3.31% (p=0.000 n=9+10)
Chain-4 24.2µs ± 2% 25.1µs ± 3% +3.70% (p=0.000 n=9+10)
ChainBuf-4 24.4µs ± 5% 25.0µs ± 2% +2.37% (p=0.023 n=10+10)
Powser 2.37s ± 1% 2.37s ± 1% ~ (p=0.423 n=8+9)
Powser-2 2.48s ± 2% 2.57s ± 2% +3.74% (p=0.000 n=10+9)
Powser-4 2.66s ± 1% 2.75s ± 1% +3.40% (p=0.000 n=10+10)
Sieve 13.3s ± 2% 13.3s ± 2% ~ (p=1.000 n=10+9)
Sieve-2 7.00s ± 2% 7.44s ±16% ~ (p=0.408 n=8+10)
Sieve-4 4.13s ±21% 3.85s ±22% ~ (p=0.113 n=9+9)
Fixes #14790
Change-Id: Ie7c6a1c4f9c8eb2f5d65ab127a3845386d6f8b5d
Reviewed-on: https://go-review.googlesource.com/20835
Reviewed-by: Austin Clements <austin@google.com>
When we grow the heap, we create a temporary "in use" span for the
memory acquired from the OS and then free that span to link it into
the heap. Hence, we (1) increase pagesInUse when we make the temporary
span so that (2) freeing the span will correctly decrease it.
However, currently step (1) increases pagesInUse by the number of
pages requested from the heap, while step (2) decreases it by the
number of pages requested from the OS (the size of the temporary
span). These aren't necessarily the same, since we round up the number
of pages we request from the OS, so steps 1 and 2 don't necessarily
cancel out like they're supposed to. Over time, this can add up and
cause pagesInUse to underflow and wrap around to 2^64. The garbage
collector computes the sweep ratio from this, so if this happens, the
sweep ratio becomes effectively infinite, causing the first allocation
on each P in a sweep cycle to sweep the entire heap. This makes
sweeping effectively STW.
Fix this by increasing pagesInUse in step 1 by the number of pages
requested from the OS, so that the two steps correctly cancel out. We
add a test that checks that the running total matches the actual state
of the heap.
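A tiny self-contained illustration of the underflow, with made-up page
counts:

    package main

    import "fmt"

    func main() {
        var pagesInUse uint64
        requested := uint64(5) // pages asked of the heap (buggy step 1)
        rounded := uint64(8)   // pages actually obtained from the OS

        pagesInUse += requested // step 1 adds the smaller count
        pagesInUse -= rounded   // step 2 frees the whole temporary span
        fmt.Println(pagesInUse) // 18446744073709551613: wrapped past zero
    }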
Fixes #15022. For 1.6.x.
Change-Id: Iefd9d6abe37d0d447cbdbdf9941662e4f18eeffc
Reviewed-on: https://go-review.googlesource.com/21280
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
It appears that windows osyield is just a 15ms sleep on my computer
(see benchmarks below). Replace NtWaitForSingleObject in osyield
with SwitchToThread (as suggested by Dmitry).
Also add issue #14790 related benchmarks, so we can track performance
changes in CL 20834 and CL 20835 and beyond.
Update #14790
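A user-level analogue using golang.org/x/sys/windows (SwitchToThread is
the real Win32 API; the runtime itself makes the call from its own
stubs, so this is only a sketch of the idea):

    package main

    import "golang.org/x/sys/windows"

    var (
        kernel32       = windows.NewLazySystemDLL("kernel32.dll")
        switchToThread = kernel32.NewProc("SwitchToThread")
    )

    // osyield gives up the rest of the time slice to another ready
    // thread instead of sleeping for a full 1-15ms timer tick.
    func osyield() {
        switchToThread.Call()
    }

    func main() { osyield() }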
benchmark old ns/op new ns/op delta
BenchmarkChanToSyscallPing1ms 1953200 1953000 -0.01%
BenchmarkChanToSyscallPing15ms 31562904 31248400 -1.00%
BenchmarkSyscallToSyscallPing1ms 5247 4202 -19.92%
BenchmarkSyscallToSyscallPing15ms 5260 4374 -16.84%
BenchmarkChanToChanPing1ms 474 494 +4.22%
BenchmarkChanToChanPing15ms 468 489 +4.49%
BenchmarkOsYield1ms 980018 75.5 -99.99%
BenchmarkOsYield15ms 15625200 75.8 -100.00%
Change-Id: I1b4cc7caca784e2548ee3c846ca07ef152ebedce
Reviewed-on: https://go-review.googlesource.com/21294
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Add supporting code for runtime initialization, including both
32- and 64-bit x86 architectures.
Add .ctors section on Windows to PE .o files, and INITENTRY to .ctors
section to plug in to the GCC C/C++ startup initialization mechanism.
This allows the Go runtime to initialize itself. Add .text section
symbol for .ctor relocations. Note: This is unlikely to be useful for
MSVC-based toolchains.
Fixes #13494
Change-Id: I4286a96f70e5f5228acae88eef46e2bed95813f3
Reviewed-on: https://go-review.googlesource.com/18057
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Signed-off-by: Eric Engestrom <eric@engestrom.ch>
Change-Id: I91873aaebf79bdf1c00d38aacc1a1fb8d79656a7
Reviewed-on: https://go-review.googlesource.com/21433
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Make sure that for any DLL that Go uses itself, we only look for the
DLL in the Windows System32 directory, guarding against DLL preloading
attacks.
(Unless the Windows version is ancient and LoadLibraryEx is
unavailable, in which case the user probably has bigger security
problems anyway.)
This does not change the behavior of syscall.LoadLibrary or NewLazyDLL
if the DLL name is something unused by Go itself.
This change also intentionally does not add any new API surface. Instead,
x/sys is updated with a LoadLibraryEx function and LazyDLL.Flags in:
https://golang.org/cl/21388
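For code outside the runtime, the x/sys equivalent of this policy looks
like the following sketch (dbghelp.dll is just an example):

    package main

    import (
        "log"

        "golang.org/x/sys/windows"
    )

    func main() {
        // NewLazySystemDLL searches only the Windows System32 directory,
        // so a planted dbghelp.dll in the current directory is never loaded.
        d := windows.NewLazySystemDLL("dbghelp.dll")
        if err := d.Load(); err != nil {
            log.Fatal(err)
        }
    }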
Updates #14959
Change-Id: I8d29200559cc19edf8dcf41dbdd39a389cd6aeb9
Reviewed-on: https://go-review.googlesource.com/21140
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Fixes a problem when using the external linker on Solaris. The Solaris
external linker still doesn't work due to issue #14957.
The problem is, for example, with `go test cmd/objdump`:
objdump_test.go:71: go build fmthello.go: exit status 2
# command-line-arguments
/var/gcc/iant/go/pkg/tool/solaris_amd64/link: running gcc failed: exit status 1
Undefined first referenced
symbol in file
x_cgo_callers /tmp/go-link-355600608/go.o
ld: fatal: symbol referencing errors
collect2: error: ld returned 1 exit status
Change-Id: I54917cfd5c288ee77ea25c439489bd2c9124fe73
Reviewed-on: https://go-review.googlesource.com/21392
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The new function runtime.SetCgoTraceback may be used to register stack
traceback and symbolizer functions, written in C, to do a stack
traceback from cgo code.
There is a sample implementation of runtime.SetCgoSymbolizer at
github.com/ianlancetaylor/cgosymbolizer. Just importing that package is
sufficient to get symbolic C backtraces.
Currently only supported on linux/amd64.
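Per the above, enabling symbolic C backtraces takes nothing more than
the import:

    package main

    import (
        _ "github.com/ianlancetaylor/cgosymbolizer" // init registers the cgo traceback/symbolizer
    )

    func main() {}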
Change-Id: If96ee2eb41c6c7379d407b9561b87557bfe47341
Reviewed-on: https://go-review.googlesource.com/17761
Reviewed-by: Austin Clements <austin@google.com>
Previously, cmd/compile rejected constant int->string conversions if
the integer value did not fit into an "int" value. Also, the runtime
incorrectly truncated 64-bit values to 32 bits before checking whether
they're a valid Unicode code point. According to the Go spec, both of
these cases should instead yield "\uFFFD".
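For example (runnable; the first conversion is exactly the 64-bit
runtime case described above, and is the form modern go vet flags):

    package main

    import "fmt"

    func main() {
        var i int64 = 1 << 40                      // does not fit in 32 bits
        fmt.Printf("%q\n", string(i))              // "\uFFFD", not a truncated code point
        fmt.Printf("%q\n", string(rune(0x110000))) // "\uFFFD": beyond U+10FFFF
    }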
Fixes #15039.
Change-Id: I3c8a3ad9a0780c0a8dc1911386a523800fec9764
Reviewed-on: https://go-review.googlesource.com/21344
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Only use REP;MOVSB if:
1) The CPUID flag says it is fast, and
2) The pointers are unaligned
Otherwise, use REP;MOVSQ.
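Expressed as a predicate (names are illustrative; the real check lives
in memmove's assembly):

    // useREPMOVSB reports whether the REP;MOVSB path should be taken:
    // only when the CPU advertises fast MOVSB (ERMS in CPUID) and the
    // pointers are unaligned; otherwise REP;MOVSQ is used.
    func useREPMOVSB(hasERMS bool, src, dst uintptr) bool {
        return hasERMS && (src%8 != 0 || dst%8 != 0)
    }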
Update #14630
Change-Id: I946b28b87880c08e5eed1ce2945016466c89db66
Reviewed-on: https://go-review.googlesource.com/21300
Reviewed-by: Nigel Tao <nigeltao@golang.org>
See #14874
This change tells the linker to collect all the itablink symbols so
that moduledata can have a slice of all compiler-generated itabs.
The logic is shamelessly adapted from what is done with typelink symbols.
Change-Id: Ie93b59acf0fcba908a876d506afbf796f222dbac
Reviewed-on: https://go-review.googlesource.com/20889
Reviewed-by: Keith Randall <khr@golang.org>
See #14874
This change tells the compiler to emit itab and itablink symbols in
situations where they could be useful; however the compiled code does
not actually make use of the new symbols yet.
Change-Id: I0db3e6ec0cb1f3b7cebd4c60229e4a48372fe586
Reviewed-on: https://go-review.googlesource.com/20888
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Michel Lespinasse <walken@google.com>
See #14874
This change makes the runtime register all compiler generated itabs
(as obtained from the moduledata) during init.
Change-Id: I9969a0985b99b8bda820a631f7fe4c78f1174cdf
Reviewed-on: https://go-review.googlesource.com/20900
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Michel Lespinasse <walken@google.com>
linux/386 depends on the modify_ldt system call, but recent Linux
kernels can disable it. Any Go program built for linux/386 then
crashes with the message 'Trace/breakpoint trap'.
The kernel config CONFIG_MODIFY_LDT_SYSCALL, which controls whether
modify_ldt is enabled, is disabled on Amazon Linux 2016.03.
Fix this problem by using set_thread_area instead of modify_ldt
on linux/386.
Fixes #14795.
Change-Id: I0cc5139e40e9e5591945164156a77b6bdff2c7f1
Reviewed-on: https://go-review.googlesource.com/21190
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
One intrinsic was needed to help get the very best
performance out of a future GC; as long as that one was
being added, I also added Bswap since that is sometimes
a handy thing to have. I had intended to fill out the
bit-scan intrinsic family, but the mismatch between the
"scan forward" instruction and "count leading zeroes"
was large enough to cause me to leave it out -- it poses
a dilemma that I'd rather dodge right now.
These intrinsics are not exposed for general use.
That's a separate issue requiring an API proposal change
( https://github.com/golang/proposal )
All intrinsics are tested, both that they are substituted
on the appropriate architecture, and that they produce the
expected result.
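For reference, the semantics of the 32-bit Bswap in plain Go (where the
intrinsic is substituted, this whole computation becomes one machine
instruction):

    func bswap32(x uint32) uint32 {
        return x>>24 | (x>>8)&0xff00 | (x&0xff00)<<8 | x<<24
    }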
Change-Id: I5848037cfd97de4f75bdc33bdd89bba00af4a8ee
Reviewed-on: https://go-review.googlesource.com/20564
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
For #14876.
Change-Id: I0992859264cbaf9c9b691fad53345bbb01b4cf3b
Reviewed-on: https://go-review.googlesource.com/21085
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
For #14876.
Change-Id: I33947f74e8058437a784862f1f064974afc99250
Reviewed-on: https://go-review.googlesource.com/21084
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This is a follow-up of https://go-review.googlesource.com/#/c/20653/.
Special-case the computation for slices with elements of byte size or
pointer size; a sketch follows the numbers below.
name old time/op new time/op delta
GrowSliceBytes-4 86.2ns ± 3% 75.4ns ± 2% -12.50% (p=0.000 n=20+20)
GrowSliceInts-4 161ns ± 3% 136ns ± 3% -15.59% (p=0.000 n=19+19)
GrowSlicePtr-4 239ns ± 2% 233ns ± 2% -2.52% (p=0.000 n=20+20)
GrowSliceStruct24Bytes-4 258ns ± 3% 256ns ± 3% ~ (p=0.134 n=20+20)
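A sketch of the special-casing (identifiers are illustrative; the real
growslice also rounds the sizes up to allocator size classes):

    import "unsafe"

    func sliceMem(elemSize, oldLen, newcap uintptr) (lenmem, capmem uintptr) {
        const ptrSize = unsafe.Sizeof(uintptr(0))
        switch elemSize {
        case 1: // byte-sized elements: no multiplication at all
            return oldLen, newcap
        case ptrSize: // pointer-sized elements: strength-reduced multiply
            return oldLen * ptrSize, newcap * ptrSize
        default:
            return oldLen * elemSize, newcap * elemSize
        }
    }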
Change-Id: Ice5fa648058fe9d7fa89dee97ca359966f671128
Reviewed-on: https://go-review.googlesource.com/21101
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
There's a race between runtime.goexitsall, which kills all OS processes
of a Go program in order to exit, and runtime.newosproc, which forks a
new one. If the new process has been created but has not yet stored
its pid in m.procid, it will not be killed by goexitsall and
deadlock results.
This CL prevents the race by making the newly forked process
check whether the program is exiting. It also prevents a
potential "shoot-out" if multiple goroutines call Exit at
the same time, which could possibly lead to two processes
killing each other and leaving the rest deadlocked.
Change-Id: I3170b4a62d2461f6b029b3d6aad70373714ed53e
Reviewed-on: https://go-review.googlesource.com/21135
Run-TryBot: David du Colombier <0intro@gmail.com>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
During random stealing we steal 4*GOMAXPROCS times from random procs.
One would expect that most of the time we check all procs this way,
but due to the low quality of the PRNG we actually miss procs with
frightening probability. Below are modelling experiment results for 1e6 tries:
GOMAXPROCS = 2 : missed 1 procs 7944 times
GOMAXPROCS = 3 : missed 1 procs 101620 times
GOMAXPROCS = 3 : missed 2 procs 3571 times
GOMAXPROCS = 4 : missed 1 procs 63916 times
GOMAXPROCS = 4 : missed 2 procs 61 times
GOMAXPROCS = 4 : missed 3 procs 16 times
GOMAXPROCS = 5 : missed 1 procs 133136 times
GOMAXPROCS = 5 : missed 2 procs 1025 times
GOMAXPROCS = 5 : missed 3 procs 101 times
GOMAXPROCS = 5 : missed 4 procs 15 times
GOMAXPROCS = 8 : missed 1 procs 151765 times
GOMAXPROCS = 8 : missed 2 procs 5057 times
GOMAXPROCS = 8 : missed 3 procs 1726 times
GOMAXPROCS = 8 : missed 4 procs 68 times
GOMAXPROCS = 12 : missed 1 procs 199081 times
GOMAXPROCS = 12 : missed 2 procs 27489 times
GOMAXPROCS = 12 : missed 3 procs 3113 times
GOMAXPROCS = 12 : missed 4 procs 233 times
GOMAXPROCS = 12 : missed 5 procs 9 times
GOMAXPROCS = 16 : missed 1 procs 237477 times
GOMAXPROCS = 16 : missed 2 procs 30037 times
GOMAXPROCS = 16 : missed 3 procs 9466 times
GOMAXPROCS = 16 : missed 4 procs 1334 times
GOMAXPROCS = 16 : missed 5 procs 192 times
GOMAXPROCS = 16 : missed 6 procs 5 times
GOMAXPROCS = 16 : missed 7 procs 1 times
GOMAXPROCS = 16 : missed 8 procs 1 times
A missed proc won't lead to underutilization because we check all procs
again after dropping P. But it can lead to an unpleasant situation
when we miss a proc, drop P, check all procs, discover work, acquire P,
miss the proc again, repeat.
Improve stealing logic to cover all procs.
Also don't enter spinning mode and try to steal when there is nobody around.
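The key idea of the new stealing order, sketched (the runtime's actual
implementation differs in detail, e.g. it precomputes candidate
strides):

    package main

    import "fmt"

    func gcd(a, b uint32) uint32 {
        for b != 0 {
            a, b = b, a%b
        }
        return a
    }

    // visitAll enumerates 0..n-1 exactly once in pseudo-random order:
    // start at a random position and step by a stride coprime to n, so
    // coverage no longer depends on PRNG quality.
    func visitAll(n, r1, r2 uint32, visit func(uint32)) {
        inc := r2%n + 1
        for gcd(n, inc) != 1 {
            inc = inc%n + 1
        }
        pos := r1 % n
        for i := uint32(0); i < n; i++ {
            visit(pos)
            pos = (pos + inc) % n
        }
    }

    func main() {
        visitAll(12, 7, 5, func(p uint32) { fmt.Print(p, " ") }) // hits all 12 procs
    }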
Change-Id: Ibb6b122cc7fb836991bad7d0639b77c807aab4c2
Reviewed-on: https://go-review.googlesource.com/20836
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Create a byte encoding designed for static Go names.
It is intended to be a compact representation of a name
and optional tag data that can be turned into a Go string
without allocating, and that describes whether or not the name
is exported without consulting the Unicode tables.
The encoding is described in reflect/type.go:
// The first byte is a bit field containing:
//
// 1<<0 the name is exported
// 1<<1 tag data follows the name
// 1<<2 pkgPath *string follow the name and tag
//
// The next two bytes are the data length:
//
// l := uint16(data[1])<<8 | uint16(data[2])
//
// Bytes [3:3+l] are the string data.
//
// If tag data follows then bytes 3+l and 3+l+1 are the tag length,
// with the data following.
//
// If the import path follows, then ptrSize bytes at the end of
// the data form a *string. The import path is only set for concrete
// methods that are defined in a different package than their type.
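A decoding sketch for that layout (the real accessor avoids the string
allocation shown here):

    // parseName decodes the flag byte, the two length bytes, and the name.
    func parseName(data []byte) (name string, exported, hasTag bool) {
        exported = data[0]&(1<<0) != 0
        hasTag = data[0]&(1<<1) != 0
        l := int(data[1])<<8 | int(data[2])
        return string(data[3 : 3+l]), exported, hasTag
    }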
Shrinks binary sizes:
cmd/go: 164KB (1.6%)
jujud: 1.0MB (1.5%)
For #6853.
Change-Id: I46b6591015b17936a443c9efb5009de8dfe8b609
Reviewed-on: https://go-review.googlesource.com/20968
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The current runtime attempts to forward signals generated by non-Go
code to the original signal handler. If it can't call the original
handler directly, it currently attempts to re-raise the signal after
resetting the handler. In this case, the original context is lost.
This fix prevents that problem by simply returning from the go signal
handler after resetting the original handler. It only does this when
the original handler is the system default handler, which in all cases
is known to not recover. The signal is not reset, so it is retriggered
and the original handler takes over with the proper context.
Fixes #14899
Change-Id: Ib1c19dfa4b50d9732d7a453de3784c8141e1cbb3
Reviewed-on: https://go-review.googlesource.com/21006
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Fixes #14938.
Additionally, some simplifications along the way.
Change-Id: I2c5fb7e32dcc6fab68fff36a49cb72e715756abe
Reviewed-on: https://go-review.googlesource.com/21046
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
For darwin/arm{,64} a non-Go thread is created to convert
EXC_BAD_ACCESS to panics. However, the Go signal handler refuses to
handle signals that would otherwise be ignored if they arrive at
non-Go threads.
Block all (POSIX) signals to that thread, making sure that
no unexpected signals arrive at it. At least one test, TestStop in
os/signal, depends on signals not arriving on any non-Go threads.
For #14318
Change-Id: I901467fb53bdadb0d03b0f1a537116c7f4754423
Reviewed-on: https://go-review.googlesource.com/21047
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The existing implementations of Equal and similar
functions in the bytes package operate on one byte at
a time. This performs poorly on ppc64/ppc64le, especially
when the byte buffers are large. This change improves
those functions by loading and comparing doublewords where
possible. The common code has been moved to a function
that can be shared by the other functions in this
file which perform the same type of comparison.
Further optimizations are done for the case where
>= 32 bytes are being compared. The new function
memeqbody is used by memeq_varlen, Equal, and eqstring.
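In Go terms, the doubleword loop does the equivalent of the following
(sketch; memeqbody itself is PPC64 assembly, and for equality the byte
order does not matter as long as both sides use the same):

    import "encoding/binary"

    func equal(a, b []byte) bool {
        if len(a) != len(b) {
            return false
        }
        for len(a) >= 8 { // compare a doubleword at a time
            if binary.LittleEndian.Uint64(a) != binary.LittleEndian.Uint64(b) {
                return false
            }
            a, b = a[8:], b[8:]
        }
        for i := range a { // at most 7 trailing bytes
            if a[i] != b[i] {
                return false
            }
        }
        return true
    }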
When running the bytes test with -test.bench=Equal
benchmark old MB/s new MB/s speedup
BenchmarkEqual1 164.83 129.49 0.79x
BenchmarkEqual6 563.51 445.47 0.79x
BenchmarkEqual9 656.15 1099.00 1.67x
BenchmarkEqual15 591.93 1024.30 1.73x
BenchmarkEqual16 613.25 1914.12 3.12x
BenchmarkEqual20 682.37 1687.04 2.47x
BenchmarkEqual32 807.96 3843.29 4.76x
BenchmarkEqual4K 1076.25 23280.51 21.63x
BenchmarkEqual4M 1079.30 13120.14 12.16x
BenchmarkEqual64M 1073.28 10876.92 10.13x
It was determined that the degradation in the smaller byte tests
was due to unfavorable code alignment of the single byte loop.
Fixes #14368
Change-Id: I0dd87382c28887c70f4fbe80877a8ba03c31d7cd
Reviewed-on: https://go-review.googlesource.com/20249
Reviewed-by: Minux Ma <minux@golang.org>
MOVSB is quite a bit faster for unaligned moves.
Possibly we should use MOVSB all of the time, but Intel folks
say it might be a bit faster to use MOVSQ on some processors
(but not any I have access to at the moment).
benchmark old ns/op new ns/op delta
BenchmarkMemmove4096-8 93.9 93.2 -0.75%
BenchmarkMemmoveUnalignedDst4096-8 256 151 -41.02%
BenchmarkMemmoveUnalignedSrc4096-8 175 90.5 -48.29%
Fixes #14630
Change-Id: I568e6d6590eb3615e6a699fb474020596be665ff
Reviewed-on: https://go-review.googlesource.com/20293
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Missed this in review of 20812
Change-Id: I01e220499dcd58e1a7205e2a577dd9630a8b7174
Reviewed-on: https://go-review.googlesource.com/20819
Reviewed-by: Keith Randall <khr@golang.org>
We're about to add another root marking job that needs to happen only
during the first markroot pass (whether that's concurrent or STW),
just like finalizer scanning. Rather than introducing another flag
that has the same value as finalizersDone, just rename finalizersDone
to markrootDone.
Change-Id: I535356c6ea1f3734cb5b6add264cb7bf48de95e8
Reviewed-on: https://go-review.googlesource.com/20043
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently shrinkstack is only safe during STW because it adjusts
channel-related stack pointers and moves send/receive stack slots
without synchronizing with the channel code. Make it safe to use when
the world isn't stopped by:
1) Locking all channels the G is blocked on while adjusting the sudogs
and copying the area of the stack that may contain send/receive
slots.
2) For any stack frames that may contain send/receive slots, using an
atomic CAS to adjust pointers to prevent races between adjusting a
pointer in a receive slot and a concurrent send writing to that
receive slot.
In principle, the synchronization could be finer-grained. For example,
we considered synchronizing around the sudogs, which would allow
channel operations involving other Gs to continue if the G being
shrunk was far enough down the send/receive queue. However, using the
channel lock means no additional locks are necessary in the channel
code. Furthermore, the stack shrinking code holds the channel lock for
a very short time (much less than the time required to shrink the
stack).
This does not yet make stack shrinking concurrent; it merely makes
doing so safe.
This has negligible effect on the go1 and garbage benchmarks.
For #12967.
Change-Id: Ia49df3a8a7be4b36e365aac4155a2416b94b988c
Reviewed-on: https://go-review.googlesource.com/20042
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently, locking a G's stack by setting its status to _Gcopystack or
_Gscan is unordered with respect to channel locks. However, when we
make stack shrinking concurrent, stack shrinking will need to lock the
G and then acquire channel locks, which imposes an order on these.
Document this lock ordering and fix closechan to respect it.
Everything else already happens to respect it.
For #12967.
Change-Id: I4dd02675efffb3e7daa5285cf75bf24f987d90d4
Reviewed-on: https://go-review.googlesource.com/20041
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently sudog.elem is never accessed concurrently, so in several
cases we drop the channel lock just before reading/writing the
sent/received value from/to sudog.elem. However, concurrent stack
shrinking is going to have to adjust sudog.elem to point to the new
stack, which means it needs a way to synchronize with accesses to
sudog.elem. Hence, add sudog.elem to the fields protected by
hchan.lock and scoot the unlocks down past the uses of sudog.elem.
While we're here, better document the channel synchronization rules.
For #12967.
Change-Id: I3ad0ca71f0a74b0716c261aef21b2f7f13f74917
Reviewed-on: https://go-review.googlesource.com/20040
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
With concurrent stack shrinking, the stack can move the instant after
a G enters _Gwaiting. There are only two places that put a G into
_Gwaiting: gopark and newstack. We fixed uses of gopark. This commit
fixes newstack by simplifying its G transitions and, in particular,
eliminating or narrowing the transient _Gwaiting states it passes
through so it's clear nothing in the G is accessed while in _Gwaiting.
For #12967.
Change-Id: I2440ead411d2bc61beb1e2ab020ebe3cb3481af9
Reviewed-on: https://go-review.googlesource.com/20039
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
gopark calls the unlock function after setting the G to _Gwaiting.
This means it's generally unsafe to access the G's stack from the
unlock function because the G may start running on another P. Once we
start shrinking stacks concurrently, a stack shrink could also move
the stack the moment after it enters _Gwaiting and before the unlock
function is called.
Document this restriction and fix the two places where we currently
violate it.
This is unlikely to be a problem in practice for these two places
right now, but they're already skating on thin ice. For example, the
following sequence could in principle cause corruption, deadlock, or a
panic in the select code:
On M1/P1:
1. G1 selects on channels A and B.
2. selectgoImpl calls gopark.
3. gopark puts G1 in _Gwaiting.
4. gopark calls selparkcommit.
5. selparkcommit releases the lock on channel A.
On M2/P2:
6. G2 sends to channel A.
7. The send puts G1 in _Grunnable and puts it on P2's run queue.
8. The scheduler runs, selects G1, puts it in _Grunning, and resumes G1.
9. On G1, the sellock immediately following the gopark gets called.
10. sellock grows and moves the stack.
On M1/P1:
11. selparkcommit continues to scan the lock order for the next
channel to unlock, but it's now reading from a freed (and possibly
reused) stack.
This shouldn't happen in practice because step 10 isn't the first call
to sellock, so the stack should already be big enough. However, once
we start shrinking stacks concurrently, this reasoning won't work any
more.
For #12967.
Change-Id: I3660c5be37e5be9f87433cb8141bdfdf37fadc4c
Reviewed-on: https://go-review.googlesource.com/20038
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the g.waiting list created by a select is in poll order.
However, nothing depends on this, and we're going to need access to
the channel lock order in other places shortly, so modify select to
put the waiting list in channel lock order.
For #12967.
Change-Id: If0d38816216ecbb37a36624d9b25dd96e0a775ec
Reviewed-on: https://go-review.googlesource.com/20037
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently the select lock order is a []*hchan. We're going to need to
refer to things other than the channel itself in lock order shortly,
so switch this to a []uint16 of indexes into the select cases. This
parallels the existing representation for the poll order.
Change-Id: I89262223fe20b4ddf5321592655ba9eac489cda1
Reviewed-on: https://go-review.googlesource.com/20036
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Given a G, there's currently no way to find the channel it's blocking
on. We'll need this information to fix a (probably theoretical) bug in
select and to implement concurrent stack shrinking, so record the
channel in the sudog.
For #12967.
Change-Id: If8fb63a140f1d07175818824d08c0ebeec2bdf66
Reviewed-on: https://go-review.googlesource.com/20035
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
gcMarkRootCheck is too expensive to do during mark termination.
However, since it's a useful check and it complements checkmark mode
nicely, enable it during mark termination if checkmark is enabled.
Change-Id: Icd9039e85e6e9d22747454441b50f1cdd1412202
Reviewed-on: https://go-review.googlesource.com/20663
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change the Cond implementation to use a notification list such that
waiters can first register for a notification, release the lock, then
actually wait. Signalers never have to park anymore.
This is intended to address an issue in the previous implementation
where Broadcast could fail to signal all waiters.
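A conceptual sketch of such a notify list, with channels standing in
for the runtime's goroutine parking (an illustration of the scheme, not
the actual runtime-backed implementation):

    import "sync"

    type notifyList struct {
        mu      sync.Mutex
        next    uint32 // ticket handed to the next waiter that registers
        notify  uint32 // every ticket below this has already been signaled
        waiters map[uint32]chan struct{}
    }

    func newNotifyList() *notifyList {
        return &notifyList{waiters: make(map[uint32]chan struct{})}
    }

    // add registers for a notification; Cond calls it while holding c.L.
    func (l *notifyList) add() uint32 {
        l.mu.Lock()
        defer l.mu.Unlock()
        t := l.next
        l.next++
        return t
    }

    // wait blocks until ticket t is signaled; Cond calls it after
    // releasing c.L, so a Signal between add and wait is never lost.
    func (l *notifyList) wait(t uint32) {
        l.mu.Lock()
        if t < l.notify { // signaled before we got around to parking
            l.mu.Unlock()
            return
        }
        ch := make(chan struct{})
        l.waiters[t] = ch
        l.mu.Unlock()
        <-ch
    }

    // signalOne wakes the oldest registered waiter; it never parks.
    func (l *notifyList) signalOne() {
        l.mu.Lock()
        defer l.mu.Unlock()
        if l.notify == l.next {
            return // nobody is waiting
        }
        if ch, ok := l.waiters[l.notify]; ok {
            close(ch)
            delete(l.waiters, l.notify)
        }
        l.notify++
    }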
Results of the existing benchmark are below.
Original New Diff
BenchmarkCond1-48 2000000 745 ns/op 755 +1.3%
BenchmarkCond2-48 1000000 1545 ns/op 1532 -0.8%
BenchmarkCond4-48 300000 3833 ns/op 3896 +1.6%
BenchmarkCond8-48 200000 10049 ns/op 10257 +2.1%
BenchmarkCond16-48 100000 21123 ns/op 21236 +0.5%
BenchmarkCond32-48 30000 40393 ns/op 41097 +1.7%
Fixes #14064
Change-Id: I083466d61593a791a034df61f5305adfb8f1c7f9
Reviewed-on: https://go-review.googlesource.com/18892
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Caleb Spare <cespare@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The type information for a method includes two variants: a func
without the receiver, and a func with the receiver as the first
parameter. The former is used as part of the dynamic interface
checks, but the latter is only returned as a type in the
reflect.Method struct.
Instead of computing it at compile time, construct it at run time
with reflect.FuncOf.
Using cl/20701 as a baseline,
cmd/go: -480KB, (4.4%)
jujud: -5.6MB, (7.8%)
For #6853.
Change-Id: I1b8c73f3ab894735f53d00cb9c0b506d84d54e92
Reviewed-on: https://go-review.googlesource.com/20709
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
CL 14603 attempted to preserve the callee-save registers for
the darwin/arm runtime initialization routine, but I believe it
wasn't sufficient and resulted in the crash reported in issue #14778.
Saving and restoring the registers on the stack the same way
linux/arm does seems more obvious and fixes #14778, so do that.
Even though #14778 is not reproducible on darwin/arm64, I applied
a similar change there, and to linux/arm64 which obeys the same
calling convention.
Finally, this CL is a candidate for a 1.6 minor release for the same
reason CL 14603 was in a 1.5 minor release (as CL 16968). It is
small and only touches the iOS platforms and gomobile on darwin/arm
is currently useless without it.
Fixes #14778
Fixes #12590 (again)
Change-Id: I7401daf0bbd7c579a7e84761384a7b763651752a
Reviewed-on: https://go-review.googlesource.com/20621
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In particular, write down the rules for stack ownership because the
details of this are about to get very important with concurrent stack
shrinking. (Interestingly, the details don't actually change, but
anything that's currently skating on thin ice is likely to fall
through.)
For #12967.
Change-Id: I561e2610e864295e9faba07717a934aabefcaab9
Reviewed-on: https://go-review.googlesource.com/20034
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently copystack adjusts pointers in the old stack and then copies
the adjusted stack to the new stack. In addition to being generally
confusing, this is going to make concurrent stack shrinking harder.
Switch this around so that we first copy the stack and then adjust
pointers on the new stack (never writing to the old stack).
This reprises CL 15996, but takes a different and simpler approach. CL
15996 still walked the old stack while adjusting pointers on the new
stack. In this CL, we adjust auxiliary structures before walking the
stack, so we can just walk the new stack.
For #12967.
Change-Id: I94fa86f823ba9ee478e73b2ba509eed3361c43df
Reviewed-on: https://go-review.googlesource.com/20033
Reviewed-by: Rick Hudson <rlh@golang.org>
In particular, document that *sel is on the stack no matter what.
Change-Id: I1c264215e026c27721b13eedae73ec845066cdec
Reviewed-on: https://go-review.googlesource.com/20032
Reviewed-by: Rick Hudson <rlh@golang.org>
Only compute the number of maximum allowed elements per slice once.
Special case newcap computation for slices with byte sized elements.
name old time/op new time/op delta
GrowSliceBytes-2 61.1ns ± 1% 43.4ns ± 1% -29.00% (p=0.000 n=20+20)
GrowSliceInts-2 85.9ns ± 1% 75.7ns ± 1% -11.80% (p=0.000 n=20+20)
Change-Id: I5d9c0d5987cdd108ac29dc32e31912dcefa2324d
Reviewed-on: https://go-review.googlesource.com/20653
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Move the functions testSchedLocalQueueLocal and testSchedLocalQueueSteal
from proc.go to export_test.go, the only place they are used.
Fixes #14796
Change-Id: I16b6fa4a13835eab33f66a2c2e87a5f5c79b7bd3
Reviewed-on: https://go-review.googlesource.com/20640
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
In addition to reflect.Value.Call, exported methods can be invoked
by the Func value in the reflect.Method struct. This CL has the
compiler track what functions get access to a legitimate reflect.Method
struct by looking for interface calls to either of:
Method(int) reflect.Method
MethodByName(string) (reflect.Method, bool)
This is a little overly conservative. If a user implements a type
with one of these methods without using the underlying calls on
reflect.Type, the linker will assume the worst and include all
exported methods. But it's cheap.
No change to any of the binary sizes reported in cl/20483.
For #14740
Change-Id: Ie17786395d0453ce0384d8b240ecb043b7726137
Reviewed-on: https://go-review.googlesource.com/20489
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Still fails about 20% of the time on my laptop.
Fixes #14766.
Change-Id: I169ab728c6022dceeb91188f5ad466ed6413c062
Reviewed-on: https://go-review.googlesource.com/20590
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
On Plan 9, there's no "kill all threads" system call, so exit is done
by sending a "go: exit" note to each OS process. If concurrent GC
occurs during this loop, deadlock sometimes results. Prevent this by
incrementing m.locks before sending notes.
Change-Id: I31aa15134ff6e42d9a82f9f8a308620b3ad1b1b1
Reviewed-on: https://go-review.googlesource.com/20477
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Alternative to golang.org/cl/19852. This memory layout doesn't have
an easy type representation, but it is noticeably smaller than the
current funcType, and saves significant extra space.
Some notes on the layout are in reflect/type.go:
// A *rtype for each in and out parameter is stored in an array that
// directly follows the funcType (and possibly its uncommonType). So
// a function type with one method, one input, and one output is:
//
// struct {
// funcType
// uncommonType
// [2]*rtype // [0] is in, [1] is out
// uncommonTypeSliceContents
// }
There are three arbitrary limits introduced by this CL:
1. No more than 65535 function input parameters.
2. No more than 32767 function output parameters.
3. reflect.FuncOf is limited to 128 parameters.
I don't think these are limits in practice, but are worth noting.
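Those limits fall out of how the parameter counts are packed; a sketch
of the header (field names are assumptions matching the description
above, with rtype elided):

    type rtype struct{ /* common type header, elided */ }

    type funcType struct {
        rtype           // embedded common header
        inCount  uint16 // at most 65535 inputs
        outCount uint16 // top bit marks variadic, leaving at most 32767 outputs
    }

    // The in/out *rtype values are stored in an array that directly
    // follows this struct (and its uncommonType, when present).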
Reduces godoc binary size by 2.4%, 330KB.
For #6853.
Change-Id: I225c0a0516ebdbe92d41dfdf43f716da42dfe347
Reviewed-on: https://go-review.googlesource.com/19916
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Instead of a pointer on every rtype, use a bit flag to indicate that
the contents of uncommonType directly follows the rtype value when it
is needed.
This requires a bit of juggling in the compiler's rtype encoder. The
backing arrays for fields in the rtype are presently encoded directly
after the slice header. This packing requires separating the encoding
of the uncommonType slice headers from their backing arrays.
Reduces binary size of godoc by ~180KB (1.5%).
No measurable change in all.bash time.
For #6853.
Change-Id: I60205948ceb5c0abba76fdf619652da9c465a597
Reviewed-on: https://go-review.googlesource.com/19790
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Before, those C files might have been intended for the Plan 9 C compiler,
but that option was removed in Go 1.5. We can simplify the maintenance
of cgo packages now if we assume C files (and C++ and M and SWIG files)
should only be considered when cgo is enabled.
Also remove newly unnecessary build tags in runtime/cgo's C files.
Fixes #14123
Change-Id: Ia5a7fe62b9469965aa7c3547fe43c6c9292b8205
Reviewed-on: https://go-review.googlesource.com/19613
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently work.finalizersDone is reset only at the beginning of
gcStart. As a result, it will be set when checkmark runs, so checkmark
will skip scanning finalizers. Hence, if there are any bugs that cause
the regular scan of finalizers to miss pointers, checkmark will also
miss them and fail to detect the missed pointer.
Fix this by resetting finalizersDone in gcResetMarkState. This way it
gets reset before any full mark, which is exactly what we want.
Change-Id: I4ddb5eba5b3b97e52aaf3e08fd9aa692bda32b20
Reviewed-on: https://go-review.googlesource.com/20332
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Implementation more or less follows plan9_386 version.
Revised 7 March to correct a bug in runtime.seek and
tidy whitespace for 8-column tabs.
Change-Id: I2e921558b5816502e8aafe330530c5a48a6c7537
Reviewed-on: https://go-review.googlesource.com/18966
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Plan 9 trap/signal handling differs on ARM from other architectures
because ARM has a link register. Also trap message syntax varies
between different architectures (historical accident?).
Revised 7 March to clarify a comment.
Change-Id: Ib6485f82857a2f9a0d6b2c375cf0aaa230b83656
Reviewed-on: https://go-review.googlesource.com/18969
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
We used to start background mark workers and assists at different
times, so we needed to keep track of these separately. They're now set
to exactly the same time, so clean things up by merging them in to one
value, markStartTime.
Change-Id: I17c9843c3ed2d6f07b4c8cd0b2c438fc6de23b53
Reviewed-on: https://go-review.googlesource.com/20143
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently runtime.WriteHeapProfile() doesn't print the correct
EnableGC value. Even if GOGC=off, the result file has the line below:
# EnableGC = true
It is hard to print the correct status of the variable because of
corner cases, e.g. initialization. To avoid confusion, this commit
removes the print.
Change-Id: Ia792454a6c650bdc50a06fbaff4df7b6330ae08a
Reviewed-on: https://go-review.googlesource.com/18600
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
gcMarkRootCheck takes ~10ns per goroutine. This is just a debugging
check, so disable it (plus, if something is going to go wrong, it's
more likely to go wrong during concurrent mark).
We may be able to re-enable this later, or move it to after we've
started the world again. (But not for 1.6.x.)
For 1.6.x.
Fixes #14419.
name / 95%ile-time/markTerm old new delta
500kIdleGs-12 24.0ms ± 0% 18.9ms ± 6% -21.46% (p=0.000 n=15+20)
Change-Id: Idb2a2b1771449de772c159ef95920d6df1090666
Reviewed-on: https://go-review.googlesource.com/20148
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently we reset the mark state during STW sweep termination. This
involves looping over all of the goroutines. Each iteration of this
loop takes ~25ns, so at around 400k goroutines, we'll exceed our 10ms
pause goal.
However, it's safe to do this before we stop the world for sweep
termination because nothing is consuming this state yet. Hence, move
the reset to just before STW.
This isn't perfect: a long reset can still delay allocating goroutines
that block on GC starting. But it's certainly better to block some
things eventually than to block everything immediately.
For 1.6.x.
Fixes #14420.
name \ 95%ile-time/sweepTerm old new delta
500kIdleGs-12 11312µs ± 6% 18.9µs ± 6% -99.83% (p=0.000 n=16+20)
Change-Id: I9815c4d8d9b0d3c3e94dfdab78049cefe0dcc93c
Reviewed-on: https://go-review.googlesource.com/20147
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Also fix compiler-invoked panics to avoid a confusing "malloc deadlock"
crash if they are invoked while executing the runtime.
Fixes #14599.
Change-Id: I89436abcbf3587901909abbdca1973301654a76e
Reviewed-on: https://go-review.googlesource.com/20219
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
go test github.com/onsi/gomega/gbytes now passes at tip, and tests
have been added to the reflect package.
Fixes #14645
Change-Id: I16216c1a86211a1103d913237fe6bca5000cf885
Reviewed-on: https://go-review.googlesource.com/20221
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change consolidates functions and methods related to TCPAddr,
TCPConn and TCPListener for maintenance purpose, especially for
documentation. Also refactors Dial error code paths.
The followup changes will update comments and examples.
Updates #10624.
Change-Id: I3333ee218ebcd08928f9e2826cd1984d15ea153e
Reviewed-on: https://go-review.googlesource.com/20009
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedence.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is a subset of https://golang.org/cl/20022 with only the copyright
header lines, so the next CL will be smaller and more reviewable.
Go policy has been single space after periods in comments for some time.
The copyright header template at:
https://golang.org/doc/contribute.html#copyright
also uses a single space.
Make them all consistent.
Change-Id: Icc26c6b8495c3820da6b171ca96a74701b4a01b0
Reviewed-on: https://go-review.googlesource.com/20111
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Avoid targeting a partial register with load;
ensure source of load (writebarrier) is aligned.
Better yet would be "CMPB $1,writebarrier" but that requires
wrestling with flagalloc (mem operand complicates moving
instruction around).
Didn't see a change in time for
benchcmd -n 10 Build go build net/http
Verified that we clean the code up properly:
0x20a8 <main.main+104>: mov 0xc30a2(%rip),%eax
# 0xc5150 <runtime.writeBarrier>
0x20ae <main.main+110>: test %al,%al
Change-Id: Id5fb8c260eaec27bd727cb0ae1476c60343b0986
Reviewed-on: https://go-review.googlesource.com/19998
Reviewed-by: Keith Randall <khr@golang.org>
Commit a5c3bbe modified adjustpointers to use *uintptrs instead of
*unsafe.Pointers for manipulating stack pointers for clarity and to
eliminate the unnecessary write barrier when writing the updated stack
pointer.
This commit makes the equivalent change to adjustpointer.
Change-Id: I6dc309590b298bdd86ecdc9737db848d6786c3f7
Reviewed-on: https://go-review.googlesource.com/17148
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently dropg does not unwire locked g/m.
This is an unnecessary distinction between locked and non-locked g/m.
We always restart goroutines with execute, which re-wires g/m.
First, this produces the false sense that this distinction is necessary.
Second, it can confuse some sanity and cross checks. For example,
if we check that g/m are unwired before we wire them in execute,
the check will fail for locked g/m. I've hit this while doing some
race detector changes: when we deschedule a goroutine and run
scheduler code, m.curg is generally nil, but not for locked ms.
Remove the distinction.
Change-Id: I3b87a28ff343baa1d564aab1f821b582a84dee07
Reviewed-on: https://go-review.googlesource.com/19950
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
REP-prefixed instructions have a large startup cost.
Avoid them like the plague.
benchmark old ns/op new ns/op delta
BenchmarkIndexByte10-8 22.4 5.34 -76.16%
Fixes #13983
Change-Id: I857e956e240fc9681d053f2584ccf24c1b272bb3
Reviewed-on: https://go-review.googlesource.com/18703
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
_Genqueue and _Gscanenqueue were introduced as part of the GC quiesce
code. The quiesce code was removed by 197aa9e, but these states and
some associated code stuck around. Remove them.
Change-Id: I69df81881602d4a431556513dac2959668d27c20
Reviewed-on: https://go-review.googlesource.com/19638
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently most uses of gcWork use the per-P gcWork, but there are two
places that still use a stack-based gcWork. Simplify things by making
these instead use the per-P gcWork.
Change-Id: I712d012cce9dd5757c8541824e9641ac1c2a329c
Reviewed-on: https://go-review.googlesource.com/19636
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently markroot uses a gcWork on the stack and disposes of it
immediately after marking one root. This used to be necessary because
markroot was called from the depths of parfor, but now that we call it
directly and have ready access to a gcWork at the call site, pass the
gcWork in, use it directly in markroot, and share it across calls to
markroot from the same P.
Change-Id: Id7c3b811bfb944153760e01873c07c8d18909be1
Reviewed-on: https://go-review.googlesource.com/19635
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
When gcWork was first introduced, the compiler's escape analysis
wasn't good enough to detect that the method receiver didn't escape,
so we had to hack around this.
Now that the compiler can figure out this for itself, remove these
hacks.
Change-Id: I9f73fab721e272410b8b6905b564e7abc03c0dfe
Reviewed-on: https://go-review.googlesource.com/19634
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The channel code must not allow stack splits between when it assigns a
potential stack pointer to sudog.elem (or sudog.selectdone) and when
it makes the sudog visible to copystack by putting it on the g.waiting
list. We do get this right everywhere, but add a comment about this
subtlety for future eyes.
Change-Id: I941da150437167acff37b0e56983c793f40fcf79
Reviewed-on: https://go-review.googlesource.com/19632
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the heapBitsSweepSpan comment claims that heapBitsSweepSpan
sets the heap bitmap for the first two words to dead. In fact, it sets
the first *four* words to scalar/dead. This is important because the
first two words don't actually have a dead bit, so for objects larger
than two words it *must* set a dead bit in the third word to reset the
object to a "noscan" state. For example, we use this in
heapBits.hasPointers to detect that an object larger than two words is
noscan.
Change-Id: Ie166a628bed5060851db083475c7377adb349d6c
Reviewed-on: https://go-review.googlesource.com/19630
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Our stack frame sizes look pretty good now. Lower the stack
guard from 1024 to 720.
Tip is currently using 720.
We could go lower (to 640 at least) except PPC doesn't like that.
Change-Id: Ie5f96c0e822435638223f1e8a2bd1a1eed68e6aa
Reviewed-on: https://go-review.googlesource.com/19922
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
This indirectly implements a small fix for runtime/pprof: it used to
look for runtime.gopanic when it should have been looking for
runtime.sigpanic.
Update #11432.
Change-Id: I5e3f5203b2ac5463efd85adf6636e64174aacb1d
Reviewed-on: https://go-review.googlesource.com/19869
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Simplifies some code as ptrToThis was unreliable under dynamic
linking. Now the same type lookup is used regardless of execution
mode.
A synthetic relocation, R_USETYPE, is introduced to make sure the
linker includes *T on use of T, if *T is carrying methods.
Changes the heap dump format. Anything reading the format needs to
look at the last bool of a type of an interface value to determine
if the type should be the pointer-to type.
Reduces binary size of cmd/go by 0.2%.
For #6853.
Change-Id: I79fcb19a97402bdb0193f3c7f6d94ddf061ee7b2
Reviewed-on: https://go-review.googlesource.com/19695
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Like bionic, musl also doesn't provide a vsyscall helper in %gs:0x10,
and as int $0x80 is as fast as calling %gs:0x10, just use int $0x80
always.
Because we're no longer using vsyscall in the VDSO, get rid of the
VDSO code for linux/386 too.
Fixes #14476.
Change-Id: I00ec8652060700e0a3c9b524bfe3c16a810263f6
Reviewed-on: https://go-review.googlesource.com/19833
Run-TryBot: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Bump up the multiplier to 20. Also run the fast version first, so that
the slow version is likely to start up faster.
Change-Id: Ia0654cc1212ab03a45da1904d3e4b57d6a8d02a0
Reviewed-on: https://go-review.googlesource.com/19835
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
They do the same thing, except memequal also has the short-circuit
check if the two pointers are equal.
A) We might as well always do the short-circuit check; it is only 2 instructions.
B) The extra function call (memequal->memeq) is expensive.
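Shaped in Go (sketch; the real functions are assembly, and the modern
unsafe.Slice helper stands in here for the byte-by-byte compare):

    import (
        "bytes"
        "unsafe"
    )

    func memequal(a, b unsafe.Pointer, size uintptr) bool {
        if a == b {
            return true // the 2-instruction short circuit, now always present
        }
        return bytes.Equal(unsafe.Slice((*byte)(a), size),
            unsafe.Slice((*byte)(b), size))
    }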
benchmark old ns/op new ns/op delta
BenchmarkArrayEqual-8 8.56 5.31 -37.97%
No noticeable effect on the former memeq user (maps).
Fixes #14302
Change-Id: I85d1ada59ed11e64dd6c54667f79d32cc5f81948
Reviewed-on: https://go-review.googlesource.com/19843
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
It's not needed on other OSes.
Change-Id: Ia6b13510585392a7062374806527d33876beba2a
Reviewed-on: https://go-review.googlesource.com/19818
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Also eliminates per-maptype hiter and hmap types, since they're not
really needed anyway. Update packages reflect and runtime
accordingly.
Reduces golang.org/x/tools/cmd/godoc's text segment by ~170kB:
text data bss dec hex filename
13085702 140640 151520 13377862 cc2146 godoc.before
12915382 140640 151520 13207542 c987f6 godoc.after
Updates #6853.
Change-Id: I948b2bc1f22d477c1756204996b4e3e1fb568d81
Reviewed-on: https://go-review.googlesource.com/16610
Reviewed-by: Keith Randall <khr@golang.org>
These functions are really simple, the overhead of calling
them (in both time and code size) is larger than the inlined versions.
Reorganize how the nil case in a type switch is handled, as we have
to check for nil explicitly now anyway.
Saves about 0.8% in the binary size of the go tool.
Change-Id: I8501b62d72fde43650b79f52b5f699f1fbd0e7e7
Reviewed-on: https://go-review.googlesource.com/19814
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Change-Id: I6cb8ac7b59812e82111ab3b0f8303ab8194a5129
Reviewed-on: https://go-review.googlesource.com/19791
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
There's no need for 8 different ways to represent that a type is
non-comparable.
While here, move AMEM out of the runtime-known algorithm values since
it's not needed at run-time, and get rid of the unused AUNK constant.
Change-Id: Ie23972b692c6f27fc5f1a908561b3e26ef5a50e9
Reviewed-on: https://go-review.googlesource.com/19779
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
You can not use cannot, but you cannot spell cannot can not.
Change-Id: I2f0971481a460804de96fd8c9e46a9cc62a3fc5b
Reviewed-on: https://go-review.googlesource.com/19772
Reviewed-by: Rob Pike <r@golang.org>
Tested it 1000x on OS X and Linux amd64, no failures.
Updated TODO.
Change-Id: Ia60c8d90962f6e5f7c3ed1ded6ba1b25eee983e1
Reviewed-on: https://go-review.googlesource.com/19662
Reviewed-by: Todd Neal <todd@tneal.org>
TestCrashDumpsAllThreads carefully sets the number of Ps to one
greater than the number of non-preemptible loops it starts so that the
main goroutine can continue to run (necessary because of #10958).
However, if GC starts, it can take over that one spare P and lock up
the system while waiting for the non-preemptible loops, causing the
test to eventually time out. This deadlock is easily reproducible if
you run the runtime test with GOGC=1.
Fix this by forcing GOGC=off when running this test.
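In sketch form, the harness change amounts to the following (simplified;
the real test goes through the testenv helpers):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Re-exec the test binary with GC disabled so a background GC
		// cannot take the one spare P reserved for the main goroutine.
		cmd := exec.Command(os.Args[0], "-test.run=TestCrashDumpsAllThreads")
		cmd.Env = append(os.Environ(), "GOGC=off")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s (err=%v)\n", out, err)
	}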
Change-Id: Ifb22da5ce33f9a61700a326ea92fcf4b049721d1
Reviewed-on: https://go-review.googlesource.com/19516
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
We used to include panic calls in tracebacks; however, when
runtime.panic was renamed to runtime.gopanic in the conversion of the
runtime to Go, we missed the special case in showframe that includes
panic calls even though they're in package runtime.
Fix the function name check in showframe (and, while we're here, fix
the other check for "runtime.panic" in runtime/pprof). Since the
"runtime.gopanic" name doesn't match what users call panic and hence
isn't very user-friendly, make traceback rewrite it to just "panic".
Updates #5832, #13857. Fixes #14315.
Change-Id: I8059621b41ec043e63d5cfb4cbee479f47f64973
Reviewed-on: https://go-review.googlesource.com/19492
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The code in mem_bsd.go expects that when mmap fails it will return a
positive errno value. This fixes the Solaris implementation of mmap to
work as expected.
Change-Id: Id1c34a9b916e8dc955ced90ea2f4af8321d92265
Reviewed-on: https://go-review.googlesource.com/19477
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The caller of mmap expects it to return a positive errno value, but the
linux-arm64 and nacl-386 system calls returned a negative errno value.
Correct them to negate the errno value.
The caller of mincore expects it to return a negative errno value (yes,
this is inconsistent), but the linux-mips64x and linux-ppc64x system
calls returned a positive errno value. Correct them to negate the errno
value.
Add a test that mmap returns errno with the correct sign. Brad added a
test for mincore's errno value in https://golang.org/cl/19457.
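From the caller's side, the convention is visible through the syscall
package; a small illustration (provoking EBADF with an invalid fd):

	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		// MAP_PRIVATE without MAP_ANON needs a valid fd, so -1 must fail;
		// the failure has to surface as an errno, not a garbage address.
		_, err := syscall.Mmap(-1, 0, 4096, syscall.PROT_READ, syscall.MAP_PRIVATE)
		fmt.Println(err) // e.g. "bad file descriptor"
	}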
Fixes #14297.
Change-Id: I2b93f32e679bd1eae1c9aef9ae7bcf0ba39521b5
Reviewed-on: https://go-review.googlesource.com/19455
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Semi-regular merge from tip to dev.ssa.
Two fixes:
1) Mark selectgo as not returning. This caused problems
because there are no VARKILL ops on the selectgo path,
causing things to be marked live that shouldn't be.
2) Tell the amd64 assembler that addressing modes like
name(SP)(AX*4) are ok.
Change-Id: I9ca81c76391b1a65cc47edc8610c70ff1a621913
When using a stack-allocated buffer for the result, don't
expose the uninitialized portion of it by restricting its
capacity to its length.
The other option is to zero the portion between len and cap.
That seems like more work, but might be worth it if the caller
then appends some stuff to the result. But this close to 1.6,
I'm inclined to do the simplest fix possible.
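The fix is a full slice expression; a sketch with a hypothetical
formatting helper:

	package main

	import "fmt"

	func format() []byte {
		var buf [64]byte            // scratch buffer, mostly uninitialized
		n := copy(buf[:], "result") // only the first n bytes are meaningful
		// cap == len, so a caller that appends gets a fresh array instead
		// of seeing (or publishing) the uninitialized tail of buf.
		return buf[:n:n]
	}

	func main() {
		out := format()
		fmt.Println(string(out), len(out), cap(out)) // result 6 6
	}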
Fixes #14232
Change-Id: I21c50d3cda02fd2df4d60ba5e2cfe2efe272f333
Reviewed-on: https://go-review.googlesource.com/19231
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently it's possible for the scheduler to deadlock with the right
confluence of locked Gs, assists, and scheduling of background mark
workers. Broadly, this happens because handoffp is stricter than
findrunnable, and if the only work for a P is GC work, handoffp will
put the P into idle, rather than starting an M to execute that P. One
way this can happen is as follows:
0. There is only one user G, which we'll call G 1. There is more than
one P, but they're all idle except the one running G 1.
1. G 1 locks itself to an M using runtime.LockOSThread.
2. GC starts up and enters mark 1.
3. G 1 performs a GC assist, which completes mark 1 without being
fully satisfied. Completing mark 1 causes all background mark
workers to park. And since the assist isn't fully satisfied, it
parks as well, waiting for a background mark worker to satisfy its
remaining assist debt.
4. The assist park enters the scheduler. Since G 1 is locked to the M,
the scheduler releases the P and calls handoffp to hand the P to
another M.
5. handoffp checks the local and global run queues, which are empty,
and sees that there are idle Ps, so rather than start an M, it puts
the P into idle.
At this point, all of the Gs are waiting and all of the Ps are idle.
In particular, none of the GC workers are running, so no mark work
gets done and the assist on the main G is never satisfied, so the
whole process soft locks up.
Fix this by making handoffp start an M if there is GC work. This
reintroduces a key invariant: that in any situation where findrunnable
would return a G to run on a P, handoffp for that P will start an M to
run work on that P.
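The shape of a program that could trip this, as a hypothetical,
timing-dependent sketch (not a guaranteed reproducer):

	package main

	import "runtime"

	func main() {
		runtime.GOMAXPROCS(4)  // more than one P; all but one sit idle
		runtime.LockOSThread() // the only user G is locked to its M
		var sink [][]byte
		for { // allocate hard so GC runs and this G must assist
			sink = append(sink, make([]byte, 1<<20))
			if len(sink) > 64 {
				sink = nil
			}
		}
	}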
Fixes #13645.
Tested by running 2,689 iterations of `go tool dist test -no-rebuild
runtime:cpu124` across 10 linux-amd64-noopt VMs with no failures.
Without this change, the failure rate was somewhere around 1%.
Performance change is negligible.
name old time/op new time/op delta
XBenchGarbage-12 2.48ms ± 2% 2.48ms ± 1% -0.24% (p=0.000 n=92+93)
name old time/op new time/op delta
BinaryTree17-12 2.86s ± 2% 2.87s ± 2% ~ (p=0.667 n=19+20)
Fannkuch11-12 2.52s ± 1% 2.47s ± 1% -2.05% (p=0.000 n=18+20)
FmtFprintfEmpty-12 51.7ns ± 1% 51.5ns ± 3% ~ (p=0.931 n=16+20)
FmtFprintfString-12 170ns ± 1% 168ns ± 1% -0.65% (p=0.000 n=19+19)
FmtFprintfInt-12 160ns ± 0% 160ns ± 0% +0.18% (p=0.033 n=17+19)
FmtFprintfIntInt-12 265ns ± 1% 273ns ± 1% +2.98% (p=0.000 n=17+19)
FmtFprintfPrefixedInt-12 235ns ± 1% 239ns ± 1% +1.99% (p=0.000 n=16+19)
FmtFprintfFloat-12 315ns ± 0% 315ns ± 1% ~ (p=0.250 n=17+19)
FmtManyArgs-12 1.04µs ± 1% 1.05µs ± 0% +0.87% (p=0.000 n=17+19)
GobDecode-12 7.93ms ± 0% 7.85ms ± 1% -1.03% (p=0.000 n=16+18)
GobEncode-12 6.62ms ± 1% 6.58ms ± 1% -0.60% (p=0.000 n=18+19)
Gzip-12 322ms ± 1% 320ms ± 1% -0.46% (p=0.009 n=20+20)
Gunzip-12 42.5ms ± 1% 42.5ms ± 0% ~ (p=0.751 n=19+19)
HTTPClientServer-12 69.7µs ± 1% 70.0µs ± 2% ~ (p=0.056 n=19+19)
JSONEncode-12 16.9ms ± 1% 16.7ms ± 1% -1.13% (p=0.000 n=19+19)
JSONDecode-12 61.5ms ± 1% 61.3ms ± 1% -0.35% (p=0.001 n=20+17)
Mandelbrot200-12 3.94ms ± 0% 3.91ms ± 0% -0.67% (p=0.000 n=20+18)
GoParse-12 3.71ms ± 1% 3.70ms ± 1% ~ (p=0.244 n=17+19)
RegexpMatchEasy0_32-12 101ns ± 1% 102ns ± 2% +0.54% (p=0.037 n=19+20)
RegexpMatchEasy0_1K-12 349ns ± 0% 350ns ± 0% +0.33% (p=0.000 n=17+18)
RegexpMatchEasy1_32-12 84.5ns ± 2% 84.2ns ± 1% -0.43% (p=0.048 n=19+20)
RegexpMatchEasy1_1K-12 510ns ± 1% 513ns ± 2% +0.58% (p=0.002 n=18+20)
RegexpMatchMedium_32-12 132ns ± 1% 134ns ± 1% +0.95% (p=0.000 n=20+20)
RegexpMatchMedium_1K-12 40.1µs ± 1% 39.6µs ± 1% -1.39% (p=0.000 n=20+20)
RegexpMatchHard_32-12 2.08µs ± 0% 2.06µs ± 1% -0.95% (p=0.000 n=18+18)
RegexpMatchHard_1K-12 62.2µs ± 1% 61.9µs ± 1% -0.42% (p=0.001 n=19+20)
Revcomp-12 537ms ± 0% 536ms ± 0% ~ (p=0.076 n=20+20)
Template-12 71.3ms ± 1% 69.3ms ± 1% -2.75% (p=0.000 n=20+20)
TimeParse-12 361ns ± 0% 360ns ± 1% ~ (p=0.056 n=19+19)
TimeFormat-12 353ns ± 0% 352ns ± 0% -0.23% (p=0.000 n=17+18)
[Geo mean] 62.6µs 62.5µs -0.17%
Change-Id: I0fbbbe4d7d99653ba5600ffb4394fa03558bc4e9
Reviewed-on: https://go-review.googlesource.com/19107
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Tested by hand with a runtime/cgo modified to return an mmap failure
after 10 calls.
This is an interim patch. For 1.7 we should fix mmap properly to avoid
using the same value as both a pointer and an errno value.
Fixes #14149.
Change-Id: I8f2bbd47d711e283001ba73296f1c34a26c59241
Reviewed-on: https://go-review.googlesource.com/19084
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The old write barriers used _nostore versions, which
don't work for Ian's cgo checker. Instead, we adopt the
same write barrier pattern as the default compiler.
It's a bit trickier to code up but should be more efficient.
Change-Id: I6696c3656cf179e28f800b0e096b7259bd5f3bb7
Reviewed-on: https://go-review.googlesource.com/18941
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
We might be forwarding to a C signal handler. C code expects the stack
to be aligned. Should fix darwin/386 build: the testcarchive tests were
hanging as the program got an endless series of SIGSEGV signals.
Change-Id: Ia02485d3736a3c40e12259f02d25f842cf8e4d29
Reviewed-on: https://go-review.googlesource.com/19025
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
It's awkward to get a string value in cgoCheckArg, but SWIG testing
revealed that it is possible. The new handling of extra files in the
ptr.go test emulates what SWIG does with an exported function that
returns a string.
Change-Id: I453717f867b8a49499576c28550e7c93053a0cf8
Reviewed-on: https://go-review.googlesource.com/19020
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This makes "CGO_ENABLED=0 go list runtime/cgo" work,
which fixes the current cmd/go test failure.
Change-Id: Ia55ce3ba1dbb09f618ae5f4c8547722670360f59
Reviewed-on: https://go-review.googlesource.com/19001
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
We set GOMAXPROCS=1 to prevent test flakiness.
There are two sources of flakiness:
1. Some tests rely on particular execution order.
If the order is different, race does not happen at all.
2. Ironically, ThreadSanitizer runtime contains a logical race condition
that can lead to false negatives if racy accesses happen literally at the same time.
Tests used to work reliably in the good old days of GOMAXPROCS=1.
So let's set it for now. A more reliable solution is to explicitly annotate tests
with required execution order by means of a special "invisible" synchronization primitive
(that's what is done for C++ ThreadSanitizer tests). This is issue #14119.
This reduces flakes on RaceAsFunc3 test from 60/3000 to 1/3000.
Fixes #14086
Fixes #14079
Fixes #14035
Change-Id: Ibaec6b2b21e27b62563bffbb28473a854722cf41
Reviewed-on: https://go-review.googlesource.com/18968
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
CL 18964 included an extra patch (sorry, my first experience of
git-codereview) which defined the conventional breakpoint instruction
used by Plan 9 on arm, but also introduced a benign but unneeded
call to runtime.emptyfunc. This CL removes the redundant call again.
This completes the series of CLs which add support for Plan 9 on arm.
Change-Id: Id293cfd40557c9d79b4b6cb164ed7ed49295b178
Reviewed-on: https://go-review.googlesource.com/19010
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Fields in Plan 9 object headers are big-endian, on all architectures.
Change-Id: If95ad29750b776338178d660646568bf26a4abda
Reviewed-on: https://go-review.googlesource.com/18964
Reviewed-by: Russ Cox <rsc@golang.org>
It's possible for arena_start+MaxArena32 to wrap.
We do the right thing in the bounds check but not in the print.
For #13992 (to fix the print there, not the bug).
Change-Id: I4df845d0c03f0f35461b128e4f6765d3ccb71c6d
Reviewed-on: https://go-review.googlesource.com/18975
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The previous CL is the real fix. This one is just insurance.
Fixes #14046 again.
Change-Id: I553349504bb1789e4b66c888dbe4034568918ad6
Reviewed-on: https://go-review.googlesource.com/18977
Reviewed-by: Austin Clements <austin@google.com>
It was just completely broken if you gave it the number
of records it asked for. Make it impossible for that particular
inconsistency to happen again.
Also make it exclude system goroutines, to match both
NumGoroutine and Stack.
Fixes #14046.
Change-Id: Ic238c6b89934ba7b47cccd3440dd347ed11e4c3d
Reviewed-on: https://go-review.googlesource.com/18976
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently p.gcBgMarkWorker is a *g. Change it to a guintptr. This
eliminates a write barrier during the subtle mark worker parking dance
(which isn't known to be causing problems, but may).
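The trick, sketched with a stand-in type (guintptr itself is internal
to the runtime): a pointer stored as a plain integer word takes no
write barrier on assignment, which is only safe because the worker G
is kept reachable elsewhere.

	package main

	import "unsafe"

	type worker struct{ id int }

	// workerUintptr mimics guintptr: stores to it emit no GC write
	// barrier because, to the compiler, it is just an integer.
	type workerUintptr uintptr

	func (p *workerUintptr) set(w *worker) { *p = workerUintptr(unsafe.Pointer(w)) }
	func (p workerUintptr) ptr() *worker   { return (*worker)(unsafe.Pointer(p)) }

	func main() {
		w := &worker{id: 1} // kept alive by this ordinary reference
		var slot workerUintptr
		slot.set(w)
		println(slot.ptr().id)
	}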
Change-Id: Ibf12c05ac910820448059e69a68e5b882c993ed8
Reviewed-on: https://go-review.googlesource.com/18970
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
traceEvent records system call events after a G has already entered
_Gsyscall, which means the garbage collector could be installing stack
barriers in the G's stack during the traceEvent. If traceEvent
attempts to capture the user stack during this, it may observe a
inconsistent stack barriers and panic. Fix this by acquiring the stack
lock around the stack walk in traceEvent.
Fixes #14101.
Change-Id: I15f0ab0c70c04c6e182221f65a6f761c5a896459
Reviewed-on: https://go-review.googlesource.com/18973
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently mark workers attach to their designated Ps before parking,
either during initialization or after performing a phase transition.
However, in both of these cases, it's possible that the mark worker is
running on a different P than the one it attaches to. This is a
problem, because as soon as the worker attaches to a P, that P's
scheduler can execute the worker. If the worker hasn't yet parked on
the P it's actually running on, this means the worker G will be
running in two places at once. The most visible consequence of this is
that once the first instance of the worker does park, it will clear
g.m and the second instance will crash shortly when it tries to use
g.m.
Fix this by moving the attach to the gopark callback. At this point,
the G is genuinely stopped and the callback is running on the system
stack, so it's safe for another P's scheduler to pick up the worker G.
Fixes #13363. Fixes #13978.
Change-Id: If2f7c4a4174f9511f6227e14a27c56fb842d1cc8
Reviewed-on: https://go-review.googlesource.com/18761
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently we run profiling tests for around 200ms in short mode.
However, even on platforms with good profiling, these tests are
inherently flaky, especially on loaded systems like the builders.
To mitigate this, modify the profiling test harness so that if a test
fails in a way that could indicate there just weren't enough samples,
it retries with a longer duration.
This requires some adjustment to the profile checker to distinguish
"fatal" and "retryable" errors. In particular, we no longer consider
it a fatal error to get a profile with zero samples (which we
previously treated as a parse error). We replace this with a retryable
check that the total number of samples is reasonable.
Fixes #13943. Fixes #13871. Fixes #13223.
Change-Id: I9a08664a7e1734c5334b1f3792a56184fe314c4d
Reviewed-on: https://go-review.googlesource.com/18683
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Use of the alternate signal stack on darwin/{arm,arm64} is reportedly
buggy, and the runtime function sigaltstack does nothing. So don't
check the sigaltstack result to decide how to handle the signal stack.
Fixes #14070.
Change-Id: Ie97ede8895fad721e3acc79225f2cafcbe1f3a81
Reviewed-on: https://go-review.googlesource.com/18940
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
When using c-archive/c-shared, the signal handler for SIGPROF will not
be installed, which means that runtime/pprof.StartCPUProfile won't work.
There is no really good solution here, as the main program may want to
do its own profiling. For now, just document that runtime/pprof doesn't
work as expected, but that it will work if you use Notify to install the
Go signal handler.
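The documented workaround, from the Go side of a c-archive, looks
roughly like this (assuming the main program is willing to let Go take
SIGPROF):

	package main

	import (
		"os"
		"os/signal"
		"runtime/pprof"
		"syscall"
	)

	func main() {
		// Notify installs the Go handler for SIGPROF, which the
		// c-archive startup skipped; profiling then works as usual.
		signal.Notify(make(chan os.Signal, 1), syscall.SIGPROF)
		f, _ := os.Create("cpu.prof")
		if err := pprof.StartCPUProfile(f); err != nil {
			return
		}
		defer pprof.StopCPUProfile()
		// ... work to be profiled ...
	}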
Fixes #14043.
Change-Id: I7ff7a01df6ef7f63a7f050aac3674d640a246fb4
Reviewed-on: https://go-review.googlesource.com/18911
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
On NetBSD and DragonFly a newly created thread inherits the signal stack
of the creating thread. That means that in a cgo program a C thread
created using pthread_create will get the signal stack of the creating
thread, most likely a Go thread. This will then lead to chaos if two
signals occur simultaneously.
We can't fix the general case. But we can fix the case of a C thread
that calls a Go function, by installing a new signal stack and then
dropping it when we return to C. That will break the case of a C thread
that calls sigaltstack and then calls Go, because we will drop the C
thread's alternate signal stack as we return from Go. Still, this is
the 1.5 behavior. And what else can we do?
Fixes #14051.
Fixes #14052.
Fixes #14067.
Change-Id: Iee286ca50b50ec712a4d929c7121c35e2383a7b9
Reviewed-on: https://go-review.googlesource.com/18835
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Use the standard names, for discoverability.
Use the standard register arguments, for correctness.
Implement all possible arguments, for completeness.
Enable the corresponding tests now that everything is standard.
Update the uses in package runtime.
Fixes #14068.
Change-Id: I8e1af9a41e7d02d98c2a82af3d4cdb3e9204824f
Reviewed-on: https://go-review.googlesource.com/18852
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
It doesn't work and I don't know why.
Update #14063.
Change-Id: I42735012cf6247eca5336f29fcf713e08c8477f8
Reviewed-on: https://go-review.googlesource.com/18817
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
On NetBSD a signal handler returns to the kernel by calling the
setcontext system call with the context passed to the signal handler.
The implementation of runtime·sigreturn_tramp for amd64, copied from the
NetBSD libc, expects that context address to be in r15. That works in
the NetBSD libc because r15 is preserved across the call to the signal
handler. It fails in the Go library because r15 is not preserved.
There are various ways to fix this; this one uses the simple approach,
essentially identical to the one in the NetBSD libc, of preserving r15
across the signal handler proper.
Looking at the code for 386 and arm suggests that they are OK. However,
I have not actually tested them.
Update #14052.
Change-Id: I2b516b1d05fe5d3b8911e65ca761d621dc37fa1b
Reviewed-on: https://go-review.googlesource.com/18815
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
On NetBSD and DragonFly a newly created thread inherits the signal stack
of the creating thread. This breaks horribly if both threads get a
signal at the same time. Fix this by dropping the signal stack in the
newly created thread. The right signal stack will then get installed
later.
Note that cgo code that calls pthread_create will have the wrong,
duplicated, signal stack in the newly created thread. I don't see any
way to fix that in Go. People using cgo to call pthread_create will
have to be aware of the problem.
Fixes #13945.
Fixes #13947.
Change-Id: I0c7bd2cdf9ada575d57182ca5e9523060de34931
Reviewed-on: https://go-review.googlesource.com/18814
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
I'm not sure what the convert function was intended to be.
Fixes #14011
Change-Id: I29d905bc1827936b9433b20b13b7a0b0ac5f502e
Reviewed-on: https://go-review.googlesource.com/18712
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is testing code in asm_GOARCH.s, so it's not necessary to run the
test on systems where it doesn't build.
Fixes #13991.
Change-Id: Ia7a2d3a34b32e6987dc67428c1e09e63baf0518a
Reviewed-on: https://go-review.googlesource.com/18707
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
GC assists check gcBlackenEnabled under the assist queue lock to avoid
going to sleep after gcWakeAllAssists has already woken all assists.
However, currently we clear gcBlackenEnabled shortly *after* waking
all assists, which opens a window where this exact race can happen.
Fix this by clearing gcBlackenEnabled before waking blocked assists.
However, it's unlikely this actually matters because the world is
stopped between waking assists and clearing gcBlackenEnabled and there
aren't any obvious allocations during this window, so I don't think an
assist could actually slip into this race window.
Updates #13645.
Change-Id: I7571f059530481dc781d8fd96a1a40aadebecb0d
Reviewed-on: https://go-review.googlesource.com/18682
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Also adds missing nosplit to unminit.
Fixes #13964.
Change-Id: I07d93a8c872a255a89f91f808b66c889f0a6a69c
Reviewed-on: https://go-review.googlesource.com/18658
Reviewed-by: Ian Lance Taylor <iant@golang.org>
While the default behavior of eliding runtime frames from tracebacks
usually makes sense, this is not the case when you're trying to test
the runtime itself. Fix this by forcing the traceback level to at
least "system" in the runtime tests.
This will specifically help with debugging issue #13645, which has
proven remarkably resistant to reproduction outside of the build
dashboard itself.
Change-Id: I2a8356ba6c3c5badba8bb3330fc527357ec0d296
Reviewed-on: https://go-review.googlesource.com/18648
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
This doesn't fix a bug, but may improve performance in programs that
have many concurrent calls from C to Go. The old code made several
system calls between lockextra and unlockextra. That could be happening
while another thread is spinning acquiring lockextra. This changes the
code to not make any system calls while holding the lock.
Change-Id: I50576478e478670c3d6429ad4e1b7d80f98a19d8
Reviewed-on: https://go-review.googlesource.com/18548
Reviewed-by: Russ Cox <rsc@golang.org>
TestFutexsleep is supposed to clean up before returning by waking up
the goroutines it started and left blocked in futex sleeps. However,
it currently fails at this in several ways:
1. Both the sleep and wakeup are done on the address of tt.mtx, but in
both cases tt is a *local copy* of the futexsleepTest created by a
loop, so the sleep and wakeup happen on completely different
addresses. Fix this by making them both use the address of the
global tt.mtx.
2. If the sleep happens after the wakeup (not likely, but not
impossible), it won't wake up. Fix this by using the futex protocol
properly: sleep if the mutex's value is 0, and set the mutex's
value to non-zero before doing the wakeup.
3. If TestFutexsleep runs more than once, channels and mutex values
left over from the first run will interfere with later runs. Fix
this by clearing the mutex value and creating a new channel for
each test and waiting for goroutines to finish before returning
(lest they send their completion to the channel for the next run).
As an added bonus, this test now actually tests that futex
sleep/wakeup work. Previously this test would have been satisfied if
futexsleep was an infinite loop and futexwakeup was a no-op.
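The corrected protocol, illustrated with raw futex calls on
linux/amd64 (the syscall number and constants are assumptions of this
sketch):

	package main

	import (
		"sync/atomic"
		"syscall"
		"unsafe"
	)

	const (
		sysFutex  = 202 // SYS_FUTEX on linux/amd64
		futexWait = 0
		futexWake = 1
	)

	func main() {
		var mtx uint32
		done := make(chan struct{})
		go func() {
			// Sleep only while mtx is still 0: if the waker already
			// stored 1, FUTEX_WAIT returns immediately instead of hanging.
			syscall.Syscall6(sysFutex, uintptr(unsafe.Pointer(&mtx)), futexWait, 0, 0, 0, 0)
			close(done)
		}()
		atomic.StoreUint32(&mtx, 1) // set the value *before* waking,
		syscall.Syscall6(sysFutex, uintptr(unsafe.Pointer(&mtx)), futexWake, 1, 0, 0, 0)
		<-done // so the wakeup cannot be lost
	}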
Change-Id: I1cbc6871cc9dcb8f4601b3621913bec2b79b0fc3
Reviewed-on: https://go-review.googlesource.com/18617
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
CMOVs were not introduced until P6. We need 386 to run on
Pentium MMX.
Fixes #13923
Change-Id: Iee9572cd83e64c3a1336bc1e6b300b048fbcc996
Reviewed-on: https://go-review.googlesource.com/18621
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Add several instructions that were used via BYTE and use them.
Instructions added: PEXTRB, PEXTRD, PEXTRQ, PINSRB, XGETBV, POPCNT.
Change-Id: I5a80cd390dc01f3555dbbe856a475f74b5e6df65
Reviewed-on: https://go-review.googlesource.com/18593
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
A bit cleanuppy for 1.6 maybe, but something I happened to notice.
Change-Id: I70f3b48445f4f527d67f7b202b6171195440b09f
Reviewed-on: https://go-review.googlesource.com/18550
Reviewed-by: Russ Cox <rsc@golang.org>
[Repeat of CL 18343 with build fixes.]
Before, NumGoroutine counted system goroutines and Stack (usually) didn't show them,
which was inconsistent and confusing.
To resolve which way they should be consistent, it seems like
package main
import "runtime"
func main() { println(runtime.NumGoroutine()) }
should print 1 regardless of internal runtime details. Make it so.
Fixes #11706.
Change-Id: If26749fec06aa0ff84311f7941b88d140552e81d
Reviewed-on: https://go-review.googlesource.com/18432
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
It would certainly be a mistake to invoke a write barrier while
greying an object.
Change-Id: I34445a15ab09655ea8a3628a507df56aea61e618
Reviewed-on: https://go-review.googlesource.com/18533
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It used to be the case that repeatedly getting one GC pointer and
enqueuing one GC pointer could cause contention on the work buffers as
each operation passed over the boundary of a work buffer. As of
b6c0934, we use a two buffer cache that prevents this sort of
contention.
Change-Id: I4f1111623f76df9c5493dd9124dec1e0bfaf53b7
Reviewed-on: https://go-review.googlesource.com/18532
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This comment is probably a hold-over from when the heap bitmap was
interleaved and the shift was 0, 2, 4, or 6. Now the shift is 0, 1, 2,
or 3.
Change-Id: I096ec729e1ca31b708455c98b573dd961d16aaee
Reviewed-on: https://go-review.googlesource.com/18531
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Fixes #13881.
Change-Id: Idff77db381640184ddd2b65022133bb226168800
Reviewed-on: https://go-review.googlesource.com/18449
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Currently, due to an oversight, we only balance work buffers
in background and idle workers and not in assists. As a
result, in assist-heavy workloads, assists are likely to tie
up large work buffers in per-P caches increasing the
likelihood that the global list will be empty. This increases
the likelihood that other GC workers will exit and assists
will block, slowing down the system as a whole. Fix this by
eagerly balancing work buffers as soon as the assists notice
that the global buffers are empty. This makes it much more
likely that work will be immediately available to other
workers and assists.
This change reduces the garbage benchmark time by 39% and
fixes the regression seen at CL 15893 (golang.org/cl/15893).
Garbage benchmark times before and after this CL.
Before GOPERF-METRIC:time=4427020
After GOPERF-METRIC:time=2721645
Fixes #13827
Change-Id: I9cb531fb873bab4b69ce9c1617e30df6c49cdcfe
Reviewed-on: https://go-review.googlesource.com/18341
Reviewed-by: Austin Clements <austin@google.com>
The previous behaviour of installing the signal handlers in a separate
thread meant that Go initialization raced with non-Go initialization if
the non-Go initialization also wanted to install signal handlers. Make
installing signal handlers synchronous so that the process-wide behavior
is predictable.
Update #9896.
Change-Id: Ice24299877ec46f8518b072a381932d273096a32
Reviewed-on: https://go-review.googlesource.com/18150
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Go 1.6 simplified the GC phases. The "synchronize Ps" phase no longer
exists and "root scan" and "mark" phases have been combined.
Update the gctrace line implementation and documentation to remove the
unused phases.
Fixes #13536.
Change-Id: I4fc37a3ce1ae3a99d48c0be2df64cbda3e05dee6
Reviewed-on: https://go-review.googlesource.com/18458
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Sigh. Sleeps on FreeBSD also yield the rest of the time slice and
profiling signals are only delivered when a process completes a time
slice (worse, itimer time is only accounted to the process that
completes a time slice). It's less noticeable than the other BSDs
because the default tick rate is 1000Hz, but it's still failing
regularly.
Fixes #13846.
Change-Id: I41bf116bffe46682433b677183f86944d0944ed4
Reviewed-on: https://go-review.googlesource.com/18455
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
We're only getting away with it today by luck.
Change-Id: I24d1cceee4d20c5181ca64fceda152e875f6ad81
Reviewed-on: https://go-review.googlesource.com/18440
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Today, signal.Ignore(syscall.SIGTRAP) does nothing
while signal.Notify(make(chan os.Signal), syscall.SIGTRAP)
correctly discards user-generated SIGTRAPs.
The same applies to any signal that we throw on.
Make signal.Ignore work for these signals.
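For example, after this change the Ignore call below is no longer a
silent no-op:

	package main

	import (
		"os/signal"
		"syscall"
		"time"
	)

	func main() {
		signal.Ignore(syscall.SIGTRAP)                  // discard user SIGTRAPs
		syscall.Kill(syscall.Getpid(), syscall.SIGTRAP) // previously: crash
		time.Sleep(100 * time.Millisecond)              // still alive here
	}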
Fixes #12906.
Change-Id: Iba244813051e0ce23fa32fbad3e3fa596a941094
Reviewed-on: https://go-review.googlesource.com/18348
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
OS X unconditionally sets si_code = TRAP_BRKPT when sending SIGTRAP,
even if it was generated by kill -TRAP and not a breakpoint.
Correct the si_code by looking to see if the PC is after a breakpoint.
For #12906.
Change-Id: I998c2499f7f12b338e607282a325b045f1f4f690
Reviewed-on: https://go-review.googlesource.com/18347
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Before, NumGoroutine counted system goroutines and Stack (usually) didn't show them,
which was inconsistent and confusing.
To resolve which way they should be consistent, it seems like
package main
import "runtime"
func main() { println(runtime.NumGoroutine()) }
should print 1 regardless of internal runtime details. Make it so.
Fixes #11706.
Change-Id: I6bfe26a901de517728192cfb26a5568c4ef4fe47
Reviewed-on: https://go-review.googlesource.com/18343
Reviewed-by: Austin Clements <austin@google.com>
It's fairly common to call cgo functions with conversions to
unsafe.Pointer or other C types. Apply the simpler checking of address
expressions when possible when the address expression occurs within a
type conversion.
Change-Id: I5187d4eb4d27a6542621c396cad9ee4b8647d1cd
Reviewed-on: https://go-review.googlesource.com/18391
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
f90b48e intended to require the stack barrier lock in all cases of
sigprof that walked the user stack, but got it wrong. In particular,
if sp < gp.stack.lo || gp.stack.hi < sp, tracebackUser would be true,
but we wouldn't acquire the stack lock. If it then turned out that we
were in a cgo call, it would walk the stack without the lock.
In fact, the whole structure of stack locking in sigprof is somewhat
wrong because it assumes the G to lock is gp.m.curg, but all three
gentraceback calls start from potentially different Gs.
To fix this, we lower the gcTryLockStackBarriers calls much closer to
the gentraceback calls. There are now three separate trylock calls,
each clearly associated with a gentraceback and the locked G clearly
matches the G from which the gentraceback starts. This actually brings
the sigprof logic closer to what it originally was before stack
barrier locking.
This depends on "runtime: increase assumed stack size in
externalthreadhandler" because it very slightly increases the stack
used by sigprof; without this other commit, this is enough to blow the
profiler thread's assumed stack size.
Fixes #12528 (hopefully for real this time!).
For the 1.5 branch, though it will require some backporting. On the
1.5 branch, this will *not* require the "runtime: increase assumed
stack size in externalthreadhandler" commit: there's no pcvalue cache,
so the used stack is smaller.
Change-Id: Id2f6446ac276848f6fc158bee550cccd03186b83
Reviewed-on: https://go-review.googlesource.com/18328
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
On Windows, externalthreadhandler currently sets the assumed stack
size for the profiler thread and the ctrlhandler threads to 8KB. The
actual stack size is determined by the SizeOfStackReserve field in the
binary set by the linker, which is currently at least 64KB (and
typically 128KB).
It turns out the profiler thread is running within a few words of the
8KB-(stack guard) bound set by externalthreadhandler. If it overflows
this bound, morestack crashes unceremoniously with an access
violation, which we then fail to handle, causing the whole process to
exit without explanation.
To avoid this problem and give us some breathing room, increase the
assumed stack size in externalthreadhandler to 32KB (there's some
unknown amount of stack already in use, so it's not safe to increase
this all the way to the reserve size).
We also document the relationships between externalthreadhandler and
SizeOfStackReserve to make this more obvious in the future.
Change-Id: I2f9f9c0892076d78e09827022ff0f2bedd9680a9
Reviewed-on: https://go-review.googlesource.com/18304
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
If a sigprof happens during a cgo call, we traceback from the entry
point of the cgo call. However, if the SP is outside of the G's stack,
we'll then ignore this traceback, even if it was successful, and
overwrite it with just _ExternalCode.
Fix this by accepting any successful traceback, regardless of whether
we got it from a cgo entry point or from regular Go code.
Fixes #13466.
Change-Id: I5da9684361fc5964f44985d74a8cdf02ffefd213
Reviewed-on: https://go-review.googlesource.com/18327
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
net has GODEBUG text already.
net/http still needs it (leaving for Brad).
For #13611.
Change-Id: Icea1027924a23a687cbbe4001985e8c6384629d7
Reviewed-on: https://go-review.googlesource.com/18346
Reviewed-by: Ian Lance Taylor <iant@golang.org>
We were setting the signal mask of a new m to the signal mask of the m
that created it. That failed when that m happened to be the one created
by ensureSigM, which sets its signal mask to only include the signals
being caught by os/signal.Notify.
Fixes #13164.
Update #9896.
Change-Id: I705c196fe9d11754e10bab9e9b2e7530ecdfa367
Reviewed-on: https://go-review.googlesource.com/18064
Reviewed-by: Russ Cox <rsc@golang.org>
When calling a Go function on a C thread, if the C thread already has an
alternate signal stack, use that signal stack instead of installing a
new one.
Update #9896.
Change-Id: I62aa3a6a4a1dc4040fca050757299c8e6736987c
Reviewed-on: https://go-review.googlesource.com/18108
Reviewed-by: Russ Cox <rsc@golang.org>
Just saw a few dragonfly failures here.
I'm tempted to preemptively add plan9 here too, but I'll wait until
I see it fail.
Change-Id: Ic99fc088dbfd1aa21f509148aee98ccfe7f640bf
Reviewed-on: https://go-review.googlesource.com/18306
Reviewed-by: Russ Cox <rsc@golang.org>
Since Stop was introduced, it would revert to the system default for the
signal, rather than to the default Go behavior. Change it to revert to
the default Go behavior.
Change-Id: I345467ece0e49e31b2806d6fce2f1937b17905a6
Reviewed-on: https://go-review.googlesource.com/18229
Reviewed-by: Russ Cox <rsc@golang.org>
Avoids an msan error when runtime/cgo is explicitly rebuilt with
-fsanitize=memory.
Fixes #13815.
Change-Id: I70308034011fb308b63585bcd40b0d1e62ec93ef
Reviewed-on: https://go-review.googlesource.com/18263
Reviewed-by: Russ Cox <rsc@golang.org>
This test triggers a large number of usleep(100)s. linux/arm, openbsd,
and solaris have very poor timer resolution on the builders, so
usleep(100) actually gives up the whole scheduling quantum. On Linux
and OpenBSD (and probably Solaris), profiling signals are only
generated when a process completes a whole scheduling quantum, so this
test often gets zero profiling signals and fails.
Until we figure out what to do about this, skip this test on these
platforms.
Updates #13405.
Change-Id: Ica94e4a8ae7a8df3e5a840504f83ee2ec08727df
Reviewed-on: https://go-review.googlesource.com/18252
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Previously, when a program died because of a SIGHUP, SIGINT, or SIGTERM
signal it would exit with status 2. This CL fixes the runtime to exit
with a status indicating that the program was killed by a signal.
Change-Id: Ic2982a2562857edfdccaf68856e0e4df532af136
Reviewed-on: https://go-review.googlesource.com/18156
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Use the current ability to say that we don't do anything with SIGCONT by
default, but programs can catch it using signal.Notify if they want.
Fixes #8953.
Change-Id: I67d40ce36a029cbc58a235cbe957335f4a58e1c5
Reviewed-on: https://go-review.googlesource.com/18185
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Old behavior: 10 consecutive EPIPE errors on any descriptor cause the
program to exit with a SIGPIPE signal.
New behavior: an EPIPE error on file descriptor 1 or 2 causes the
program to raise a SIGPIPE signal. If os/signal.Notify was not used to
catch SIGPIPE signals, this will cause the program to exit with SIGPIPE.
An EPIPE error on a file descriptor other than 1 or 2 will simply be
returned from Write.
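A small demonstration of the fd 1 case (run it with its stdout piped
to a command that exits immediately, e.g. head -0):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		time.Sleep(100 * time.Millisecond)       // let the pipe reader exit
		_, err := os.Stdout.Write([]byte("x\n")) // EPIPE on fd 1 raises SIGPIPE
		fmt.Fprintln(os.Stderr, err)             // not reached without Notify
	}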
Fixes #11845.
Update #9896.
Change-Id: Ic85d77e386a8bb0255dc4be1e4b3f55875d10f18
Reviewed-on: https://go-review.googlesource.com/18151
Reviewed-by: Russ Cox <rsc@golang.org>
This CL changes the source file information in the
standard library's .a files to say "$GOROOT/src/runtime/chan.go"
(with a literal "$GOROOT") instead of spelling out the actual directory.
The linker then substitutes the actual $GOROOT (or $GOROOT_FINAL)
as appropriate.
If people download a binary distribution to an alternate location,
following the instructions at https://golang.org/doc/install#install,
the code before this CL would end up with source paths pointing to
/usr/local/go no matter where the actual sources were.
Now the source paths for built binaries will point to the actual sources
(hopefully).
The source line information in distributed binaries is not affected:
those will still say /usr/local/go. But binaries people build themselves
(their own programs, not the go distribution programs) will be correct.
Fixing this path also fixes the lookup of the runtime-gdb.py file.
Fixes #5533.
Change-Id: I03729baae3fbd8cd636e016275ee5ad2606e4663
Reviewed-on: https://go-review.googlesource.com/18200
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Now there are just three programs to compile instead of many,
and repeated tests can reuse the compilation result instead of
rebuilding it.
Combined, these changes reduce the time spent testing runtime
during all.bash on my laptop from about 60 to about 30 seconds.
(All.bash itself runs in 5½ minutes.)
For #10571.
Change-Id: Ie2c1798b847f1a635a860d11dcdab14375319ae9
Reviewed-on: https://go-review.googlesource.com/18085
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Currently goroutineheader goes through some convolutions to *almost*
print the scan state of a G. However, the code path that would print
the scan state of the G refers to gStatusStrings where it almost
certainly meant to refer to gScanStatusStrings (which is unused), so
it winds up printing the regular status string without the scan state
either way. Furthermore, if the G is in _Gwaiting, we override the
status string and lose where this would indicate the scan state if it
worked.
This commit fixes this so the runtime prints the scan state. However,
rather than using a parallel list of status strings, this simply adds
a conditional print if the scan bit is set. This lets us remove the
string list, prints the scan state even in _Gwaiting, and lets us
strip off the scan bit at the beginning of the function, which
simplifies the rest of it.
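In sketch form, with stand-in values (the real bit and string table
live in the runtime):

	package main

	import "fmt"

	const gScan = 0x1000 // stand-in for the runtime's _Gscan bit

	var statusStrings = [...]string{2: "runnable", 4: "waiting"}

	func header(status uint32) string {
		scanning := status&gScan != 0
		status &^= gScan // strip the scan bit once, up front
		s := statusStrings[status]
		if scanning {
			s += " (scan)" // one conditional print, no parallel table
		}
		return s
	}

	func main() {
		fmt.Println(header(4 | gScan)) // waiting (scan)
	}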
Change-Id: Ic0adbe5c05abf4adda93da59f93b578172b28e3d
Reviewed-on: https://go-review.googlesource.com/18092
Reviewed-by: Keith Randall <khr@golang.org>
If non-Go code calls sigaltstack before a signal is received, use
sigaltstack to determine the current signal stack and set the gsignal
stack to use it. This makes the Go runtime more robust in the face of
non-Go code. We still can't handle a disabled signal stack or a signal
triggered with SA_ONSTACK clear, but we now give clear errors for those
cases.
Fixes #7227.
Update #9896.
Change-Id: Icb1607e01fd6461019b6d77d940e59b3aed4d258
Reviewed-on: https://go-review.googlesource.com/18102
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Only install signal handlers for synchronous signals that become
run-time panics. Set the SA_ONSTACK flag for other signal handlers as
needed.
Fixes #13028.
Update #12465.
Update #13034.
Update #13042.
Change-Id: I28375e70641f60630e10f3c86e24b6e4f8a35cc9
Reviewed-on: https://go-review.googlesource.com/17903
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It turns out that the second argument for sigaction on Darwin has a
different type than the first argument. The second argument is the user
visible sigaction struct, and does not have the sa_tramp field.
I base this on
http://www.opensource.apple.com/source/Libc/Libc-1081.1.3/sys/sigaction.c
not to mention actual testing.
While I was at it I removed a useless memclr in setsig, a relic of the C
code.
This CL is Darwin-specific changes. The tests for this CL are in
https://golang.org/cl/17903 .
Change-Id: I61fe305c72311df6a589b49ad7b6e49b6960ca24
Reviewed-on: https://go-review.googlesource.com/18015
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Programs that call panic to crash after detecting a serious problem
may wish to use SetTraceback to force printing of all goroutines first.
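For example:

	package main

	import "runtime/debug"

	func main() {
		// Raise the traceback level so the crash below prints every
		// goroutine, even if GOTRACEBACK is at its default.
		debug.SetTraceback("all")
		panic("unrecoverable state detected")
	}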
Change-Id: Ib23ad9336f405485aabb642ca73f454a14c8baf3
Reviewed-on: https://go-review.googlesource.com/18043
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, if sigprof determines that the G is in user code (not cgo
or libcall code), it will only traceback the G stack if it can acquire
the stack barrier lock. However, it has no such restriction if the G
is in cgo or libcall code. Because cgo calls count as syscalls, stack
scanning and stack barrier installation can occur during a cgo call,
which means sigprof could attempt to traceback a G in a cgo call while
scanstack is installing stack barriers in that G's stack. As a result,
the following sequence of events can cause the sigprof traceback to
panic with "missed stack barrier":
1. M1: G1 performs a Cgo call (which, on Windows, is any system call,
which could explain why this is easier to reproduce on Windows).
2. M1: The Cgo call puts G1 into _Gsyscall state.
3. M2: GC starts a scan of G1's stack. It puts G1 in to _Gscansyscall
and acquires the stack barrier lock.
4. M3: A profiling signal comes in. On Windows this is a global
(though I don't think this matters), so the runtime stops M1 and
calls sigprof for G1.
5. M3: sigprof fails to acquire the stack barrier lock (because the
GC's stack scan holds it).
6. M3: sigprof observes that G1 is in a Cgo call, so it calls
gentraceback on G1 with its Cgo transition point.
7. M3: gentraceback on G1 grabs the currently empty g.stkbar slice.
8. M2: GC finishes scanning G1's stack and installing stack barriers.
9. M3: gentraceback encounters one of the just-installed stack
barriers and panics.
This commit fixes this by only allowing cgo tracebacks if sigprof can
acquire the stack barrier lock, just like in the regular user
traceback case.
For good measure, we put the same constraint on libcall tracebacks.
This case is probably already safe because, unlike cgo calls, libcalls
leave the G in _Grunning and prevent reaching a safe point, so
scanstack cannot run during a libcall. However, this also means that
sigprof will always acquire the stack barrier lock without contention,
so there's no cost to adding this constraint to libcall tracebacks.
Fixes #12528. For 1.5.3 (will require some backporting).
Change-Id: Ia5a4b8e3d66b23b02ffcd54c6315c81055c0cec2
Reviewed-on: https://go-review.googlesource.com/18023
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, setNextBarrierPC manipulates the stack barriers without
acquiring the stack barrier lock. This is mostly okay because
setNextBarrierPC also runs synchronously on the G and prevents safe
points, but this doesn't prevent a sigprof from occurring during a
setNextBarrierPC and performing a traceback.
Given that setNextBarrierPC simply sets one entry in the stack barrier
array, this is almost certainly safe in reality. However, given that
this depends on a subtle argument, which may not hold in the future,
and that setNextBarrierPC almost never happens, making it nowhere near
performance-critical, we can simply acquire the stack barrier lock and
be sure that the synchronization will work.
Updates #12528. For 1.5.3.
Change-Id: Ife696e10d969f190157eb1cbe762a2de2ebce079
Reviewed-on: https://go-review.googlesource.com/18022
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
I've already turned away one attempt to remove this field.
As the comment above the struct says, many tools know the layout.
The field cannot simply be removed.
It was one thing to remove the field's name, but the TODO should
not have been added.
Change-Id: If40eacf0eb35835082055e129e2b88333a0731b9
Reviewed-on: https://go-review.googlesource.com/17741
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TestMemStats currently requires that NumGC != 0, but GC may
legitimately not have run (for example, if this test runs first, or
GOGC is set high, etc). Accept NumGC == 0 and instead sanity check
NumGC by making sure that all pause times after NumGC are 0.
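In sketch form, the relaxed check is:

	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		var ms runtime.MemStats
		runtime.ReadMemStats(&ms)
		// NumGC == 0 is legitimate; rather than demand NumGC != 0,
		// verify that no pause was recorded past the NumGC-th slot.
		for i := ms.NumGC; i < uint32(len(ms.PauseNs)); i++ {
			if ms.PauseNs[i] != 0 {
				fmt.Println("inconsistent: pause recorded past NumGC")
			}
		}
		fmt.Println("NumGC =", ms.NumGC)
	}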
Fixes #11989.
Change-Id: I4203859fbb83292d59a509f2eeb24d6033e7aabc
Reviewed-on: https://go-review.googlesource.com/17830
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
This matches SIGEMT on other systems that use it (SIGEMT is not used
for most linux systems).
Change-Id: If394c06c9ed1cb3ea2564385a8edfbed8b5566d1
Reviewed-on: https://go-review.googlesource.com/17874
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Currently, sysmon triggers a forced GC solely based on
memstats.last_gc. However, memstats.last_gc isn't updated until mark
termination, so once sysmon starts triggering forced GC, it will keep
triggering them until GC finishes. The first of these actually starts
a GC; the rest, up to the last, print "GC forced" but return
immediately from gcStart because gcphase != _GCoff; and the last may
start yet another GC if the previous one finishes (and sets last_gc)
between sysmon triggering it and gcStart checking the GC phase.
Fix this by expanding the condition for starting a forced GC to also
require that no GC is currently running. This, combined with the way
forcegchelper blocks until the GC cycle is started, ensures sysmon
only starts one GC when the time exceeds the forced GC threshold.
Fixes #13458.
Change-Id: Ie6cf841927f6085136be3f45259956cd5cf10d23
Reviewed-on: https://go-review.googlesource.com/17819
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The addition of stack barrier locking to copystack subsumes the
partial fix from commit bbd1a1c for SIGPROF during copystack. With the
stack barrier locking, this commit simplifies the rule in sigprof to:
the user stack can be traced only if sigprof can acquire the stack
barrier lock.
Updates #12932, #13362.
Change-Id: I1c1f80015053d0ac7761e9e0c7437c2aba26663f
Reviewed-on: https://go-review.googlesource.com/17192
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, runtime/debug.SetGCPercent does not adjust the controller
trigger ratio. As a result, runtime reductions of GOGC don't take full
effect until after one more concurrent cycle has happened, which
adjusts the trigger ratio to account for the new gcpercent.
Fix this by lowering the trigger ratio if necessary in setGCPercent.
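For example, lowering GOGC at run time now takes effect on the
current cycle rather than the next one:

	package main

	import (
		"fmt"
		"runtime/debug"
	)

	func main() {
		old := debug.SetGCPercent(50) // trigger ratio adjusts immediately
		fmt.Println("previous GOGC:", old)
	}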
Change-Id: I4d23e0c58d91939b86ac60fa5d53ef91d0d89e0c
Reviewed-on: https://go-review.googlesource.com/17813
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we drop worldsema and then print the gctrace. We did this so
that if stderr is a pipe or a blocked terminal, blocking on printing
the gctrace would not block another GC from starting. However, this is
a bit of a fool's errand because a blocked runtime print will block
the whole M/P, so after GOMAXPROCS GC cycles, the whole system will
freeze. Furthermore, now this is much less of an issue because
allocation will block indefinitely if it can't start a GC (whereas it
used to be that allocation could run away). Finally, this allows
another GC cycle to start while the previous cycle is printing the
gctrace, which leads to races on reading various statistics to print
them and the next GC cycle overwriting those statistics.
Fix this by moving the release of worldsema after the gctrace print.
Change-Id: I3d044ea0f77d80f3b4050af6b771e7912258662a
Reviewed-on: https://go-review.googlesource.com/17812
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we reset the sweep stats just after gcMarkTermination starts
the world and releases worldsema. However, background sweeping can
start the moment we start the world and, in fact, pause sweeping can
start the moment we release worldsema (because another GC cycle can
start up), so these need to be cleared before starting the world.
Change-Id: I95701e3de6af76bb3fbf2ee65719985bf57d20b2
Reviewed-on: https://go-review.googlesource.com/17811
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, we update memstats.heap_live from mcache.local_cachealloc
whenever we lock the heap (e.g., to obtain a fresh span or to release
an unused span). However, under the right circumstances,
local_cachealloc can accumulate allocations up to the size of
the *entire heap* without flushing them to heap_live. Specifically,
since span allocations from an mcentral don't lock the heap, if a
large number of pages are held in an mcentral and the application
continues to use and free objects of that size class (e.g., the
BinaryTree17 benchmark), local_cachealloc won't be flushed until the
mcentral runs out of spans.
This is a problem because, unlike many of the memory statistics that
are purely informative, heap_live is used to determine when the
garbage collector should start and how hard it should work.
This commit eliminates local_cachealloc, instead atomically updating
heap_live directly. To control contention, we do this only when
obtaining a span from an mcentral. Furthermore, we make heap_live
conservative: allocating a span assumes that all free slots in that
span will be used and accounts for these when the span is
allocated, *before* the objects themselves are. This is important
because 1) this triggers the GC earlier than necessary rather than
potentially too late and 2) this leads to a conservative GC rate
rather than a GC rate that is potentially too low.
Alternatively, we could have flushed local_cachealloc when it passed
some threshold, but this would require determining a threshold and
would cause heap_live to underestimate the true value rather than
overestimate.
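The accounting, in miniature (names are stand-ins for the runtime's
internals):

	package main

	import "sync/atomic"

	var heapLive uint64 // stand-in for memstats.heap_live

	// cacheSpan models an mcentral handing a span to a P: charge every
	// byte of it up front, before objects are allocated from it, so GC
	// can only trigger earlier than necessary, never later.
	func cacheSpan(spanBytes uint64) {
		atomic.AddUint64(&heapLive, spanBytes)
	}

	func main() {
		cacheSpan(8192)
		println(atomic.LoadUint64(&heapLive))
	}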
Fixes #12199.
name old time/op new time/op delta
BinaryTree17-12 2.88s ± 4% 2.88s ± 1% ~ (p=0.470 n=19+19)
Fannkuch11-12 2.48s ± 1% 2.48s ± 1% ~ (p=0.243 n=16+19)
FmtFprintfEmpty-12 50.9ns ± 2% 50.7ns ± 1% ~ (p=0.238 n=15+14)
FmtFprintfString-12 175ns ± 1% 171ns ± 1% -2.48% (p=0.000 n=18+18)
FmtFprintfInt-12 159ns ± 1% 158ns ± 1% -0.78% (p=0.000 n=19+18)
FmtFprintfIntInt-12 270ns ± 1% 265ns ± 2% -1.67% (p=0.000 n=18+18)
FmtFprintfPrefixedInt-12 235ns ± 1% 234ns ± 0% ~ (p=0.362 n=18+19)
FmtFprintfFloat-12 309ns ± 1% 308ns ± 1% -0.41% (p=0.001 n=18+19)
FmtManyArgs-12 1.10µs ± 1% 1.08µs ± 0% -1.96% (p=0.000 n=19+18)
GobDecode-12 7.81ms ± 1% 7.80ms ± 1% ~ (p=0.425 n=18+19)
GobEncode-12 6.53ms ± 1% 6.53ms ± 1% ~ (p=0.817 n=19+19)
Gzip-12 312ms ± 1% 312ms ± 2% ~ (p=0.967 n=19+20)
Gunzip-12 42.0ms ± 1% 41.9ms ± 1% ~ (p=0.172 n=19+19)
HTTPClientServer-12 63.7µs ± 1% 63.8µs ± 1% ~ (p=0.639 n=19+19)
JSONEncode-12 16.4ms ± 1% 16.4ms ± 1% ~ (p=0.954 n=19+19)
JSONDecode-12 58.5ms ± 1% 57.8ms ± 1% -1.27% (p=0.000 n=18+19)
Mandelbrot200-12 3.86ms ± 1% 3.88ms ± 0% +0.44% (p=0.000 n=18+18)
GoParse-12 3.67ms ± 2% 3.66ms ± 1% -0.52% (p=0.001 n=18+19)
RegexpMatchEasy0_32-12 100ns ± 1% 100ns ± 0% ~ (p=0.257 n=19+18)
RegexpMatchEasy0_1K-12 347ns ± 1% 347ns ± 1% ~ (p=0.527 n=18+18)
RegexpMatchEasy1_32-12 83.7ns ± 2% 83.1ns ± 2% ~ (p=0.096 n=18+19)
RegexpMatchEasy1_1K-12 509ns ± 1% 505ns ± 1% -0.75% (p=0.000 n=18+19)
RegexpMatchMedium_32-12 130ns ± 2% 129ns ± 1% ~ (p=0.962 n=20+20)
RegexpMatchMedium_1K-12 39.5µs ± 2% 39.4µs ± 1% ~ (p=0.376 n=20+19)
RegexpMatchHard_32-12 2.04µs ± 0% 2.04µs ± 1% ~ (p=0.195 n=18+17)
RegexpMatchHard_1K-12 61.4µs ± 1% 61.4µs ± 1% ~ (p=0.885 n=19+19)
Revcomp-12 540ms ± 2% 542ms ± 4% ~ (p=0.552 n=19+17)
Template-12 69.6ms ± 1% 71.2ms ± 1% +2.39% (p=0.000 n=20+20)
TimeParse-12 357ns ± 1% 357ns ± 1% ~ (p=0.883 n=18+20)
TimeFormat-12 379ns ± 1% 362ns ± 1% -4.53% (p=0.000 n=18+19)
[Geo mean] 62.0µs 61.8µs -0.44%
name old time/op new time/op delta
XBenchGarbage-12 5.89ms ± 2% 5.81ms ± 2% -1.41% (p=0.000 n=19+18)
Change-Id: I96b31cca6ae77c30693a891cff3fe663fa2447a0
Reviewed-on: https://go-review.googlesource.com/17748
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
deductSweepCredit expects the size in bytes of the span being
allocated, but mCentral_CacheSpan passes the size of a single object
in the span. As a result, we don't sweep enough on that call and when
mCentral_CacheSpan later calls reimburseSweepCredit, it's very likely
to underflow mheap_.spanBytesAlloc, which causes the next call to
deductSweepCredit to think it owes a huge number of pages and finish
off the whole sweep.
In addition to causing the occasional allocation that triggers the
full sweep to be potentially extremely expensive relative to other
allocations, this can indirectly slow down many other allocations.
deductSweepCredit uses sweepone to sweep spans, which returns
fully-unused spans to the heap, where these spans are freed and
coalesced with neighboring free spans. On the other hand, when
mCentral_CacheSpan sweeps a span, it does so with the intent to
immediately reuse that span and, as a result, will not return the span
to the heap even if it is fully unused. This saves on the cost of
locking the heap, finding a span, and initializing that span. For
example, before this change, with GOMAXPROCS=1 (or the background
sweeper disabled) BinaryTree17 returned roughly 220K spans to the heap
and allocated new spans from the heap roughly 232K times. After this
change, it returns 1.3K spans to the heap and allocates new spans from
the heap 39K times. (With background sweeping these numbers are
effectively unchanged because the background sweeper sweeps almost all
of the spans with sweepone; however, parallel sweeping saves more than
the cost of allocating spans from the heap.)
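The unit mismatch is easy to see in a simplified sketch (hypothetical
signatures; the real functions take more context):

package main

var spanBytesAlloc uint64 // global credit counter, in bytes of spans

func deductSweepCredit(nbytes uint64)    { spanBytesAlloc += nbytes }
func reimburseSweepCredit(nbytes uint64) { spanBytesAlloc -= nbytes }

func main() {
	const objSize, spanSize = 64, 8192
	// Buggy: deductSweepCredit(objSize) charged only one object, so a
	// later reimbursement of the span's unused bytes underflowed the
	// counter. Fixed: charge the full span size up front.
	deductSweepCredit(spanSize)
	reimburseSweepCredit(spanSize - 100*objSize) // return the unused part
	println(spanBytesAlloc)
}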
Fixes#13535.
Fixes#13589.
name old time/op new time/op delta
BinaryTree17-12 3.03s ± 1% 2.86s ± 4% -5.61% (p=0.000 n=18+20)
Fannkuch11-12 2.48s ± 1% 2.49s ± 1% ~ (p=0.060 n=17+20)
FmtFprintfEmpty-12 50.7ns ± 1% 50.9ns ± 1% +0.43% (p=0.025 n=15+16)
FmtFprintfString-12 174ns ± 2% 174ns ± 2% ~ (p=0.539 n=19+20)
FmtFprintfInt-12 158ns ± 1% 158ns ± 1% ~ (p=0.300 n=18+20)
FmtFprintfIntInt-12 269ns ± 2% 269ns ± 2% ~ (p=0.784 n=20+18)
FmtFprintfPrefixedInt-12 233ns ± 1% 234ns ± 1% ~ (p=0.389 n=18+18)
FmtFprintfFloat-12 309ns ± 1% 310ns ± 1% +0.25% (p=0.048 n=18+18)
FmtManyArgs-12 1.10µs ± 1% 1.10µs ± 1% ~ (p=0.259 n=18+19)
GobDecode-12 7.81ms ± 1% 7.72ms ± 1% -1.17% (p=0.000 n=19+19)
GobEncode-12 6.56ms ± 0% 6.55ms ± 1% ~ (p=0.433 n=17+19)
Gzip-12 318ms ± 2% 317ms ± 1% ~ (p=0.578 n=19+18)
Gunzip-12 42.1ms ± 2% 42.0ms ± 0% -0.45% (p=0.007 n=18+16)
HTTPClientServer-12 63.9µs ± 1% 64.0µs ± 1% ~ (p=0.146 n=17+19)
JSONEncode-12 16.4ms ± 1% 16.4ms ± 1% ~ (p=0.271 n=19+19)
JSONDecode-12 58.1ms ± 1% 58.0ms ± 1% ~ (p=0.152 n=18+18)
Mandelbrot200-12 3.85ms ± 0% 3.85ms ± 0% ~ (p=0.126 n=19+18)
GoParse-12 3.71ms ± 1% 3.64ms ± 1% -1.86% (p=0.000 n=20+18)
RegexpMatchEasy0_32-12 100ns ± 2% 100ns ± 1% ~ (p=0.588 n=20+20)
RegexpMatchEasy0_1K-12 346ns ± 1% 347ns ± 1% +0.27% (p=0.014 n=17+20)
RegexpMatchEasy1_32-12 82.9ns ± 3% 83.5ns ± 3% ~ (p=0.096 n=19+20)
RegexpMatchEasy1_1K-12 506ns ± 1% 506ns ± 1% ~ (p=0.530 n=19+19)
RegexpMatchMedium_32-12 129ns ± 2% 129ns ± 1% ~ (p=0.566 n=20+19)
RegexpMatchMedium_1K-12 39.4µs ± 1% 39.4µs ± 1% ~ (p=0.713 n=19+20)
RegexpMatchHard_32-12 2.05µs ± 1% 2.06µs ± 1% +0.36% (p=0.008 n=18+20)
RegexpMatchHard_1K-12 61.6µs ± 1% 61.7µs ± 1% ~ (p=0.286 n=19+20)
Revcomp-12 538ms ± 1% 541ms ± 2% ~ (p=0.081 n=18+19)
Template-12 71.5ms ± 2% 71.6ms ± 1% ~ (p=0.513 n=20+19)
TimeParse-12 357ns ± 1% 357ns ± 1% ~ (p=0.935 n=19+18)
TimeFormat-12 352ns ± 1% 352ns ± 1% ~ (p=0.293 n=19+20)
[Geo mean] 62.0µs 61.9µs -0.21%
name old time/op new time/op delta
XBenchGarbage-12 5.83ms ± 2% 5.86ms ± 3% ~ (p=0.247 n=19+20)
Change-Id: I790bb530adace27ccf25d372f24a11954b88443c
Reviewed-on: https://go-review.googlesource.com/17745
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Analogous to https://go-review.googlesource.com/#/c/8457/, this
code synthesizes a set of program arguments for Android on the
arm64 architecture.
Change-Id: I851958b4b0944ec79d7a1426a3bb2cfc31746797
Reviewed-on: https://go-review.googlesource.com/17782
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
To prevent races with the garbage collector, stack spans cannot be
reused as heap spans during a GC. We deal with this by caching stack
spans during GC and releasing them at the end of mark termination.
However, while our cache lets us reuse small stack spans, currently
large stack spans are *not* reused. This can cause significant memory
growth in programs that allocate large stacks rapidly, but grow the
heap slowly (such as in issue #13552).
Fix this by adding logic to reuse large stack spans for other stacks.
Fixes#11466.
Fixes#13552. Without this change, the program in this issue creeps to
over 1GB of memory over the course of a few hours. With this change,
it stays rock solid at around 30MB.
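A sketch of the reuse logic with invented names; the real change
extends the runtime's existing stack span cache:

package main

type span struct{ npages int }

var (
	gcRunning       bool
	largeStackCache []*span // freed large stack spans, held until GC ends
)

func freeToHeap(s *span) { /* return the span to the heap */ }

// stackFreeLarge parks large stack spans during GC instead of
// returning them to the heap, where reuse could race with the GC.
func stackFreeLarge(s *span) {
	if gcRunning {
		largeStackCache = append(largeStackCache, s)
		return
	}
	freeToHeap(s)
}

// stackAllocLarge reuses a cached span when possible, so rapid stack
// allocation during GC no longer grows memory without bound.
func stackAllocLarge(npages int) *span {
	for i, s := range largeStackCache {
		if s.npages >= npages {
			largeStackCache = append(largeStackCache[:i], largeStackCache[i+1:]...)
			return s
		}
	}
	return &span{npages: npages} // fall back to fresh memory
}

func main() {
	gcRunning = true
	stackFreeLarge(&span{npages: 4})
	_ = stackAllocLarge(4) // reuses the cached span
}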
Change-Id: If8b2d85464aa80c96230a1990715e39aa803904f
Reviewed-on: https://go-review.googlesource.com/17814
Reviewed-by: Keith Randall <khr@golang.org>
Currently we wake up new worker threads whenever we pass
through the scheduler with nmspinning==0. This leads to
lots of unnecessary thread wake ups.
Instead let only spinning threads wake up new spinning threads.
For the following program:
package main

import "runtime"

func main() {
	for i := 0; i < 1e7; i++ {
		runtime.Gosched()
	}
}
Before:
$ time ./test
real 0m4.278s
user 0m7.634s
sys 0m1.423s
$ strace -c ./test
% time seconds usecs/call calls errors syscall
99.93 9.314936 3 2685009 17536 futex
After:
$ time ./test
real 0m1.200s
user 0m1.181s
sys 0m0.024s
$ strace -c ./test
% time seconds usecs/call calls errors syscall
3.11 0.000049 25 2 futex
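A sketch of the new wake-up discipline (simplified; the real scheduler
also considers idle Ps and other conditions):

package main

import "sync/atomic"

var nmspinning int32 // number of Ms spinning in search of work

// wakep starts a new spinning M only if no M is spinning already.
func wakep() {
	if atomic.CompareAndSwapInt32(&nmspinning, 0, 1) {
		// startm(nil, true) in the real runtime
	}
}

// resetspinning runs when a spinning M finds work: it stops spinning
// and, as the possibly-last searcher, wakes a single successor. Ms
// that were never spinning no longer wake anyone, which eliminates
// the storm of futex calls shown above.
func resetspinning() {
	if atomic.AddInt32(&nmspinning, -1) == 0 {
		wakep()
	}
}

func main() {
	wakep()
	resetspinning()
}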
Fixes#13527
Change-Id: Ia1f5bf8a896dcc25d8b04beb1f4317aa9ff16f74
Reviewed-on: https://go-review.googlesource.com/17540
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
debug.schedtrace is an int32. Convert it to int64 before
multiplying by the constant 1000000. Otherwise, schedtrace
values greater than 2147 overflow int32, causing
incorrect delays between traces.
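The overflow is easy to reproduce in isolation (2147 ms is the
largest value whose product with 1e6 fits in an int32):

package main

import "fmt"

func main() {
	var schedtrace int32 = 5000 // trace period in milliseconds
	fmt.Println(schedtrace * 1000000)        // wraps: 705032704
	fmt.Println(int64(schedtrace) * 1000000) // correct: 5000000000 ns
}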
Change-Id: I064e8d7b432c1e892a705ee1f31a2e8cdd2c3ea3
Reviewed-on: https://go-review.googlesource.com/17712
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
If reports like #13062 are really concurrent misuse of maps,
we can detect that, at least some of the time, with a cheap check.
There is an extra pair of memory writes for writing to a map,
but to the same cache line as h.count, which is often being modified anyway,
and there is an extra memory read for reading from a map,
but to the same cache line as h.count, which is always being read anyway.
So the check should be basically invisible and may help reduce the
number of "mysterious runtime crash due to map misuse" reports.
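A sketch of the check, with simplified names; the flag sits next to
fields the map operations touch anyway, so the extra accesses stay
within already-hot cache lines:

package main

const hashWriting = 1 // flag: a write to the map is in progress

type hmap struct {
	count int
	flags uint8 // shares a cache line with count
}

func mapassign(h *hmap) {
	if h.flags&hashWriting != 0 {
		panic("concurrent map writes")
	}
	h.flags |= hashWriting
	h.count++ // ... the actual write happens here ...
	h.flags &^= hashWriting
}

func mapaccess(h *hmap) int {
	if h.flags&hashWriting != 0 {
		panic("concurrent map read and map write")
	}
	return h.count // ... the actual read happens here ...
}

func main() {
	h := new(hmap)
	mapassign(h)
	println(mapaccess(h))
}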
Change-Id: I0e71b0d92eaa3b7bef48bf41b0f5ab790092487e
Reviewed-on: https://go-review.googlesource.com/17501
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
(sig is unsigned, so sig-1 >= 0 is always true.)
Fixes#11281.
Change-Id: I4b9d784da6e3cc80816f2d2f7228d5d8a237e2d5
Reviewed-on: https://go-review.googlesource.com/17457
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
When run with "ulimit -s unlimited", the misc/cgo/test test binary
finds a stack size of 0x3000 returned by getcontext, causing the
runtime to try to stay within those bounds and then fault when
called back in the test after 64 kB has been used by C.
I suspect that Solaris is doing something clever like reporting the
current stack size and growing the stack as faults happen.
On all the other systems, getcontext reports the maximum stack size.
And when the ulimit is not unlimited, even Solaris reports the
maximum stack size.
Work around this by assuming that any stack on Solaris must be at least 1 MB.
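The workaround amounts to clamping the reported size (a sketch):

package main

func main() {
	reported := 0x3000 // what getcontext reports with "ulimit -s unlimited"
	if reported < 1<<20 {
		reported = 1 << 20 // assume any Solaris stack is at least 1 MB
	}
	println(reported)
}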
Fixes#12210.
Change-Id: I0a6ed0afb8a8f50aa1b2486f32b4ae470ab47dbf
Reviewed-on: https://go-review.googlesource.com/17452
Reviewed-by: Ian Lance Taylor <iant@golang.org>
stackBarrier on amd64 sanity checks that it's unwinding the correct
entry in the stack barrier array. However, this check is wrong in two
ways that make it unlikely to catch anything, right or wrong:
1) It checks that savedLRPtr == SP, but, in fact, it should be that
savedLRPtr+8 == SP because the RET that returned to stackBarrier
popped the saved LR. However, we didn't notice this check was wrong
because,
2) the sense of the conditional branch is also wrong.
Fix both of these.
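A sketch of the corrected check; names are schematic:

package main

// By the time stackBarrier runs, the RET that jumped to it has popped
// the saved LR, so SP sits 8 bytes above the slot that held it.
func barrierOK(savedLRPtr, sp uintptr) bool {
	return savedLRPtr+8 == sp // old code tested savedLRPtr == sp,
	// and then branched on the wrong sense of the comparison
}

func main() {
	println(barrierOK(0x7ff0, 0x7ff8)) // true
}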
Change-Id: I38ba1f652b0168b5b2c11b81637656241262af7c
Reviewed-on: https://go-review.googlesource.com/17039
Reviewed-by: Russ Cox <rsc@golang.org>
On android, runtime.tls_g is a normal variable.
TLS offset is computed in x_cgo_inittls.
Change-Id: I18bc9a736d5fb2a89d0f798956c754e3c10d10e2
Reviewed-on: https://go-review.googlesource.com/17246
Reviewed-by: David Crawshaw <crawshaw@golang.org>
On android, runtime.tls_g is a normal variable.
TLS offset is computed in x_cgo_inittls.
Change-Id: I64cfd3543040776dcdf73cad8dba54fc6aaf6f35
Reviewed-on: https://go-review.googlesource.com/17245
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The ppc64le shared library ABI demands that r12 is set to a function's
global entrypoint before jumping to that entrypoint. Not doing so means
that handling signals that usually cause a panic actually crashes (and
so, e.g., cannot be recovered). Fixes several failures of
"cd test; go run run.go -linkshared".
Change-Id: Ia4d0da4c13efda68340d38c045a52b37c2f90796
Reviewed-on: https://go-review.googlesource.com/17280
Reviewed-by: Ian Lance Taylor <iant@golang.org>
We need a runtime check because the original issue is encountered
when running a cross-compiled Windows program from Linux. It's better
to give a meaningful crash message earlier than to segfault later.
The added check should not impose any measurable overhead on Go
programs.
For #12415.
Change-Id: Ib4a24ef560c09c0585b351d62eefd157b6b7f04c
Reviewed-on: https://go-review.googlesource.com/14207
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Write barriers in gcFlushBgCredit lead to very subtle bugs because it
executes after the getfull barrier. I tracked some bugs of this form
down before go:nowritebarrierrec was implemented. Ensure that they
don't reappear by making gcFlushBgCredit go:nowritebarrierrec.
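Schematically, the annotated function looks like this (the directive
is only honored when compiling the runtime package, so this is an
excerpt, not a standalone program):

// Neither gcFlushBgCredit nor anything it calls may contain a write
// barrier; the compiler reports a build error on any violation.
//
//go:nowritebarrierrec
func gcFlushBgCredit(scanWork int64) {
	// ... flush background scan credit without any pointer writes
	// that would require a write barrier ...
}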
Change-Id: Ia5ca2dc59e6268bce8d8b4c87055bd0f6e19bed2
Reviewed-on: https://go-review.googlesource.com/17052
Reviewed-by: Russ Cox <rsc@golang.org>
sighandler may run during STW, so write barriers are not allowed.
Change-Id: Icdf46be10ea296fd87e73ab56ebb718c5d3c97ac
Reviewed-on: https://go-review.googlesource.com/17007
Reviewed-by: Russ Cox <rsc@golang.org>
Replace the cross platform but unsafe [4]uintptr type with an OS
specific type, sigset. Most OSes already define sigset, and this
change defines a suitable sigset for the OSes that don't (darwin,
openbsd). The OSes that don't use m.sigmask (windows, plan9, nacl)
now define sigset as the empty type, struct{}.
The gain is strongly typed access to m.sigmask, saving a dynamic
size sanity check and unsafe.Pointer casting. Also, some storage is
saved for each M, since [4]uintptr was conservative for most OSes.
The cost is that OSes that don't need m.sigmask have to define sigset.
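A sketch with simplified layouts (the real definitions live in the
per-OS runtime files):

package main

// On most Unix-like OSes the runtime can mirror the system layout,
// e.g. something like this on Linux (sizes are OS-specific):
type sigset [2]uint32

// On windows, plan9 and nacl, which never touch m.sigmask, the change
// instead defines the empty type: type sigset struct{}

type m struct {
	sigmask sigset // strongly typed; no unsafe.Pointer casts needed
}

func main() {
	var mp m
	mp.sigmask[0] |= 1 << 10 // block a hypothetical signal
	_ = mp
}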
completes ./all.bash with GOOS linux, on amd64
completes ./make.bash with GOOSes openbsd, android, plan9, windows,
darwin, solaris, netbsd, freebsd, dragonfly, all amd64.
With GOOS=nacl ./make.bash failed with a seemingly unrelated error.
[Replay of CL 16942 by Elias Naur.]
Change-Id: I98f144d626033ae5318576115ed635415ac71b2c
Reviewed-on: https://go-review.googlesource.com/17033
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
These tests were failing on one of the versions of cl/9345
("runtime: simplify buffered channels").
Change-Id: I920ffcd28de428bcb7c2d5a300068644260e1017
Reviewed-on: https://go-review.googlesource.com/16416
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Fixes#11959
Fixes#12035
Skip the CallbackGC test on linux/arm. This test takes between 30 and 60
seconds to run by itself, and is run 4 times over the course of ./run.bash
(once during the runtime test, three times more later in the build).
Change-Id: I4e7d3046031cd8c08f39634bdd91da6e00054caf
Reviewed-on: https://go-review.googlesource.com/14485
Reviewed-by: Russ Cox <rsc@golang.org>
The original code mistakenly panics on VirtualAlloc failure - we want
it to go looking for a smaller memory region that VirtualAlloc will
succeed in allocating. Also return immediately if VirtualAlloc succeeds.
See rsc comment on issue #12587 for details.
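The corrected shape of the loop, with a stub standing in for
VirtualAlloc:

package main

// virtualAlloc stands in for the Windows VirtualAlloc call; this stub
// refuses anything over 1 MB so the retry path is exercised.
func virtualAlloc(n uintptr) uintptr {
	if n > 1<<20 {
		return 0
	}
	return 0x10000 // pretend base address
}

func sysReserve(n uintptr) uintptr {
	for n > 0 {
		if p := virtualAlloc(n); p != 0 {
			return p // success: return immediately, don't keep looking
		}
		n /= 2 // failure: try a smaller region instead of panicking
	}
	return 0
}

func main() {
	println(sysReserve(64 << 20))
}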
I still don't have a test for this. So I can only hope that this
Fixes#12587
Change-Id: I052068ec627fdcb466c94ae997ad112016f734b7
Reviewed-on: https://go-review.googlesource.com/17169
Reviewed-by: Russ Cox <rsc@golang.org>
These are methods that are "obviously" going to get inlined -- until you build
with -l, when they can trigger a stack split at a bad time.
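A self-contained sketch of the pattern; that the fix was precisely
marking such methods //go:nosplit is an assumption here:

package main

import "unsafe"

type g struct{ id int }

// guintptr holds a *g as a uintptr so the GC does not trace it.
type guintptr uintptr

// "Obviously" inlinable, but with -l it becomes a real call;
// //go:nosplit keeps that call from triggering a stack split at a
// point where a split is unsafe.
//
//go:nosplit
func (gp guintptr) ptr() *g { return (*g)(unsafe.Pointer(gp)) }

func main() {
	gg := &g{id: 1}
	gp := guintptr(unsafe.Pointer(gg))
	println(gp.ptr().id)
}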
Fixes#11482
Change-Id: Ia065c385978a2e7fe9f587811991d088c4d68325
Reviewed-on: https://go-review.googlesource.com/17165
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Commit bbd1a1c prevented SIGPROF from scanning stacks that were being
copied, but it didn't prevent a stack copy (specifically a stack
shrink) from happening while SIGPROF is scanning the stack. As a
result, a stack copy may adjust stack barriers while SIGPROF is in the
middle of scanning a stack, causing SIGPROF to panic when it detects
an inconsistent stack barrier.
Fix this by taking the stack barrier lock while adjusting the stack.
In addition to preventing SIGPROF from scanning this stack, this will
block until any in-progress SIGPROF is done scanning the stack.
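A sketch of the discipline using a mutex; the real runtime uses a
lock-free flag on the G, since SIGPROF runs in a signal handler:

package main

import "sync"

type g struct {
	stackLock sync.Mutex // serializes stack scans and stack copies
}

// shrinkstack now takes the stack barrier lock, so it both excludes
// new SIGPROF scans and waits for any in-progress scan to finish.
func shrinkstack(gp *g) {
	gp.stackLock.Lock()
	defer gp.stackLock.Unlock()
	// ... copy the stack and adjust stack barriers ...
}

// profileScan models SIGPROF's stack walk.
func profileScan(gp *g) {
	gp.stackLock.Lock()
	defer gp.stackLock.Unlock()
	// ... walk the stack, checking stack barriers ...
}

func main() {
	gp := new(g)
	shrinkstack(gp)
	profileScan(gp)
}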
For 1.5.2.
Fixes#13362.
Updates #12932.
Change-Id: I422219c363054410dfa56381f7b917e04690e5dd
Reviewed-on: https://go-review.googlesource.com/17191
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Calling timeBeginPeriod changes the Windows global timer resolution
from 15ms to 1ms. This used to improve Go runtime scheduler
performance, but not anymore. Thanks to @aclements, the scheduler now
behaves the same way whether we call timeBeginPeriod or not.
Remove the call to timeBeginPeriod, since it is a machine-global
resource and there are downsides to using a low timer resolution.
See issue #8687 for details.
See issue #8687 for details.
Fixes#8687
Change-Id: Ib7e41aa4a81861b62a900e0e62776c9ef19bfb73
Reviewed-on: https://go-review.googlesource.com/17164
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
This improves the documentation comment on gcMarkDone, replaces a
recursive call with a simple goto, and disables preemption before
stopping the world in accordance with the documentation comment on
stopTheWorldWithSema.
Updates #13363, but, sadly, doesn't fix it.
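A sketch of the control-flow change, with an invented moreWork counter
standing in for the real completion check:

package main

// gcMarkDone retries via a goto instead of calling itself, and keeps
// preemption conceptually disabled across the stop-the-world.
func gcMarkDone(moreWork int) {
top:
	// ... disable preemption, flush work buffers ...
	if moreWork > 0 { // work reappeared: try again
		moreWork--
		goto top // was: a recursive gcMarkDone() call
	}
	// ... stop the world and enter mark termination ...
}

func main() { gcMarkDone(2) }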
Change-Id: I6cb2a5836b35685bf82f7b1ce7e48a7625906656
Reviewed-on: https://go-review.googlesource.com/17149
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This improves stack barrier debugging messages in various ways:
1) Rather than printing only the remaining stack barriers (of which
there may be none, which isn't very useful), print all of the G's
stack barriers with a marker at the position the stack itself has
unwound to and a marker at the problematic stack barrier (where
applicable).
2) Rather than crashing if we encounter a stack barrier when there are
no more stkbar entries, print the same debug message we would if we
had encountered a stack barrier at an unexpected location.
Hopefully this will help with debugging #12528.
Change-Id: I2e6fe6a778e0d36dd8ef30afd4c33d5d94731262
Reviewed-on: https://go-review.googlesource.com/17147
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>