Also runs 100X faster on average, because it takes so many
fewer attempts to trigger the failure.
Fixes #11443.
Change-Id: I8c39ee48bb3ff6c36fa63083e04076771b65a80d
Reviewed-on: https://go-review.googlesource.com/36841
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
It was a little tricky to get from the documentation to the best way
to implement a Pool, so I thought I'd try to provide a simple example.
The implementation is mostly taken from the fmt package.
I'm not happy with the verbosity of the calls to WriteString() etc,
but I wanted to provide a non-trivial example.
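For reference, a sketch along the lines of the example being added,
pooling bytes.Buffers as fmt does internally (the Log helper and the
fixed timeNow are illustrative):

    package main

    import (
        "bytes"
        "io"
        "os"
        "sync"
        "time"
    )

    // bufPool reuses bytes.Buffers across Log calls.
    var bufPool = sync.Pool{
        New: func() interface{} {
            // Pool must allocate a fresh value when empty.
            return new(bytes.Buffer)
        },
    }

    // timeNow is fixed so the example's output is deterministic.
    func timeNow() time.Time { return time.Unix(1136214245, 0) }

    func Log(w io.Writer, key, val string) {
        b := bufPool.Get().(*bytes.Buffer)
        b.Reset() // pooled values may hold stale data
        // The verbose WriteString/WriteByte calls avoid the
        // allocations Sprintf would make.
        b.WriteString(timeNow().UTC().Format(time.RFC3339))
        b.WriteByte(' ')
        b.WriteString(key)
        b.WriteByte('=')
        b.WriteString(val)
        w.Write(b.Bytes())
        bufPool.Put(b) // return the buffer for reuse
    }

    func main() {
        Log(os.Stdout, "path", "/search?q=flowers")
    }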
Change-Id: Id33a8b6cbf8eb278f71e1f78e20205b436578606
Reviewed-on: https://go-review.googlesource.com/24371
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
runtime.SetMutexProfileFraction(n int) will capture 1/n-th of stack
traces of goroutines holding contended mutexes if n > 0. From runtime/pprof,
pprof.Lookup("mutex").WriteTo writes the accumulated
stack traces to w (in essentially the same format that blocking
profiling uses).
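A minimal sketch of how the two calls fit together (the contended
workload here is illustrative):

    package main

    import (
        "os"
        "runtime"
        "runtime/pprof"
        "sync"
    )

    func main() {
        // Sample roughly 1 of every 5 mutex contention events.
        runtime.SetMutexProfileFraction(5)

        var mu sync.Mutex
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock()
                mu.Unlock()
            }()
        }
        wg.Wait()

        // Write the accumulated contention stacks, as described above.
        pprof.Lookup("mutex").WriteTo(os.Stdout, 1)
    }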
Change-Id: Ie0b54fa4226853d99aa42c14cb529ae586a8335a
Reviewed-on: https://go-review.googlesource.com/29650
Reviewed-by: Austin Clements <austin@google.com>
The panic leaves the lock in an unusable state.
Trying to panic with a usable state makes the lock significantly
less efficient and scalable (see early CL patch sets and discussion).
Instead, use runtime.throw, which will crash the program directly.
In general throw is reserved for when the runtime detects truly
serious, unrecoverable problems. This problem is certainly serious,
and, without a significant performance hit, is unrecoverable.
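A minimal sketch of the misuse in question; after this change the
deferred recover never runs, because the process dies via
runtime.throw instead of panicking:

    package main

    import "sync"

    func main() {
        var mu sync.Mutex
        defer func() {
            // With runtime.throw this never runs: the process dies
            // with "fatal error: sync: unlock of unlocked mutex".
            recover()
        }()
        mu.Unlock() // misuse: mu was never locked
    }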
Fixes #13879.
Change-Id: I41920d9e2317270c6f909957d195bd8b68177f8d
Reviewed-on: https://go-review.googlesource.com/31359
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Uses the same implementation as runtime/internal/atomic.
Reorganize the intrinsic detector to make it more table-driven.
Also works on amd64p32.
Change-Id: I7a5238951d6018d7d5d1bc01f339f6ee9282b2d0
Reviewed-on: https://go-review.googlesource.com/28076
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Add missing function prototypes.
Fix function prototypes.
Use FP references instead of SP references.
Fix variable names.
Update comments.
Clean up whitespace. (Not for vet.)
All fairly minor fixes to make vet happy.
Updates #11041
Change-Id: Ifab2cdf235ff61cdc226ab1d84b8467b5ac9446c
Reviewed-on: https://go-review.googlesource.com/27713
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The following performance improvements have been made to the
low-level atomic functions for ppc64le & ppc64:
- For those cases containing a lwarx and stwcx (or other sizes),
the old sequence was:
sync, lwarx, maybe something, stwcx, loop to sync, sync, isync
The leading sync is moved before (outside) the lwarx/stwcx loop
and the trailing sync is removed, so it becomes:
sync, lwarx, maybe something, stwcx, loop to lwarx, isync
- For Or8 and And8, the shifting and manipulation of the address
to the word-aligned version were removed, and the instructions
were changed to use lbarx, stbcx instead of register shifting,
xor, then lwarx, stwcx.
- New instructions LWSYNC, LBAR, STBCC were tested and added.
runtime/atomic_ppc64x.s was changed to use the LWSYNC opcode
instead of the WORD encoding.
Fixes #15469
Ran some of the benchmarks in the runtime and sync directories.
Some results varied from run to run but the trend was improvement
based on best times for base and new:
runtime.test:
BenchmarkChanNonblocking-128 0.88 0.89 +1.14%
BenchmarkChanUncontended-128 569 511 -10.19%
BenchmarkChanContended-128 63110 53231 -15.65%
BenchmarkChanSync-128 691 598 -13.46%
BenchmarkChanSyncWork-128 11355 11649 +2.59%
BenchmarkChanProdCons0-128 2402 2090 -12.99%
BenchmarkChanProdCons10-128 1348 1363 +1.11%
BenchmarkChanProdCons100-128 1002 746 -25.55%
BenchmarkChanProdConsWork0-128 2554 2720 +6.50%
BenchmarkChanProdConsWork10-128 1909 1804 -5.50%
BenchmarkChanProdConsWork100-128 1624 1580 -2.71%
BenchmarkChanCreation-128 237 212 -10.55%
BenchmarkChanSem-128 705 667 -5.39%
BenchmarkChanPopular-128 5081190 4497566 -11.49%
BenchmarkCreateGoroutines-128 532 473 -11.09%
BenchmarkCreateGoroutinesParallel-128 35.0 34.7 -0.86%
BenchmarkCreateGoroutinesCapture-128 4923 4200 -14.69%
sync.test:
BenchmarkUncontendedSemaphore-128 112 94.2 -15.89%
BenchmarkContendedSemaphore-128 133 128 -3.76%
BenchmarkMutexUncontended-128 1.90 1.67 -12.11%
BenchmarkMutex-128 353 310 -12.18%
BenchmarkMutexSlack-128 304 283 -6.91%
BenchmarkMutexWork-128 554 541 -2.35%
BenchmarkMutexWorkSlack-128 567 556 -1.94%
BenchmarkMutexNoSpin-128 275 242 -12.00%
BenchmarkMutexSpin-128 1129 1030 -8.77%
BenchmarkOnce-128 1.08 0.96 -11.11%
BenchmarkPool-128 29.8 27.4 -8.05%
BenchmarkPoolOverflow-128 40564 36583 -9.81%
BenchmarkSemaUncontended-128 3.14 2.63 -16.24%
BenchmarkSemaSyntNonblock-128 1087 1069 -1.66%
BenchmarkSemaSyntBlock-128 897 893 -0.45%
BenchmarkSemaWorkNonblock-128 1034 1028 -0.58%
BenchmarkSemaWorkBlock-128 949 886 -6.64%
Change-Id: I4403fb29d3cd5254b7b1ce87a216bd11b391079e
Reviewed-on: https://go-review.googlesource.com/22549
Reviewed-by: Michael Munday <munday@ca.ibm.com>
Reviewed-by: Minux Ma <minux@golang.org>
cmd and runtime were handled separately, and I intentionally skipped
syscall. This is the rest of the standard library.
CL generated mechanically with github.com/mdempsky/unconvert.
Change-Id: I9e0eff886974dedc37adb93f602064b83e469122
Reviewed-on: https://go-review.googlesource.com/22104
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change Cond implementation to use a notification list such that waiters
can first register for a notification, release the lock, then actually
wait. Signalers never have to park anymore.
This is intended to address an issue in the previous implementation
where Broadcast could fail to signal all waiters.
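For context, the Wait/Broadcast pattern the new notification list
must support (a generic sketch, not code from the CL):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu    sync.Mutex
            cond  = sync.NewCond(&mu)
            ready bool
        )
        done := make(chan struct{})

        go func() {
            mu.Lock()
            for !ready { // always re-check the condition in a loop
                cond.Wait() // registers, unlocks mu, waits, relocks
            }
            mu.Unlock()
            fmt.Println("woke up")
            close(done)
        }()

        mu.Lock()
        ready = true
        mu.Unlock()
        cond.Broadcast() // must wake every waiter; the property at issue
        <-done
    }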
Results of the existing benchmark are below.
Benchmark          Iterations  Old ns/op  New ns/op  Diff
BenchmarkCond1-48 2000000 745 ns/op 755 +1.3%
BenchmarkCond2-48 1000000 1545 ns/op 1532 -0.8%
BenchmarkCond4-48 300000 3833 ns/op 3896 +1.6%
BenchmarkCond8-48 200000 10049 ns/op 10257 +2.1%
BenchmarkCond16-48 100000 21123 ns/op 21236 +0.5%
BenchmarkCond32-48 30000 40393 ns/op 41097 +1.7%
Fixes #14064
Change-Id: I083466d61593a791a034df61f5305adfb8f1c7f9
Reviewed-on: https://go-review.googlesource.com/18892
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Caleb Spare <cespare@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is a subset of https://golang.org/cl/20022 with only the copyright
header lines, so the next CL will be smaller and more reviewable.
Go policy has been single space after periods in comments for some time.
The copyright header template at:
https://golang.org/doc/contribute.html#copyright
also uses a single space.
Make them all consistent.
Change-Id: Icc26c6b8495c3820da6b171ca96a74701b4a01b0
Reviewed-on: https://go-review.googlesource.com/20111
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Atomic load/store/add/swap routines, as for other ARM platforms, but with DMB inserted
for load/store (assuming that "atomic" also implies acquire/release memory ordering).
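A publication sketch that relies on that acquire/release assumption
(illustrative, not code from the CL):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var (
        data  int   // written before the releasing store
        ready int32 // publication flag
    )

    func producer() {
        data = 42
        atomic.StoreInt32(&ready, 1) // release: data must be visible first
    }

    func main() {
        go producer()
        for atomic.LoadInt32(&ready) == 0 { // acquire: flag, then data
        }
        fmt.Println(data) // prints 42 under the stated ordering
    }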
Change-Id: I70a283d8f0ae61a66432998ce59eac76fd940c67
Reviewed-on: https://go-review.googlesource.com/18965
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In normal mode the test runs for 9+ seconds on my machine (48 cores).
But the real problem is race mode: there the test hits the 10m timeout.
Reduce test size in short mode. Now it runs for 100ms without race.
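The reduction presumably follows the usual testing.Short pattern; a
generic sketch (the test name and counts are illustrative, not from
the CL):

    package sync_test

    import "testing"

    func TestStress(t *testing.T) {
        n := 100000
        if testing.Short() {
            n = 1000 // keep -short (and race-mode) runs fast
        }
        for i := 0; i < n; i++ {
            _ = i // ... exercise the code under test ...
        }
    }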
Change-Id: I9493a0e84f630b930af8f958e2920025df37c268
Reviewed-on: https://go-review.googlesource.com/19956
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
You can not use cannot, but you cannot spell cannot can not.
Change-Id: I2f0971481a460804de96fd8c9e46a9cc62a3fc5b
Reviewed-on: https://go-review.googlesource.com/19772
Reviewed-by: Rob Pike <r@golang.org>
Previous flakes:
https://build.golang.org/log/223365dedb6b6aa0cfdf5afd0a50fd433a16bade
https://build.golang.org/log/edbea4cd3f24e707ef2ae8378559bb0fcc453c22
Dmitry says in email about this:
> The stack trace points to it pretty clearly. Done can indeed unblock
> Wait first and then panic. I guess we need to recover after first
> Done as well.
And it looks like TestWaitGroupMisuse2 was already hardened against
this. Do the same in TestWaitGroupMisuse3.
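The hardening pattern, sketched (expectPanic is an illustrative
helper, not the test's actual code):

    package main

    import "sync"

    // expectPanic runs f and swallows the panic it should raise,
    // so the goroutine survives a Done that panics after it has
    // already unblocked Wait.
    func expectPanic(f func()) {
        defer func() { _ = recover() }()
        f()
    }

    func main() {
        var wg sync.WaitGroup
        done := make(chan struct{})
        wg.Add(1)
        go func() {
            defer close(done)
            wg.Done()
            expectPanic(wg.Done) // extra Done: counter goes negative
        }()
        wg.Wait() // may be unblocked before the misuse panics
        <-done
    }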
Change-Id: I317800c7e46f13c97873f0873c759a489dd5f47d
Reviewed-on: https://go-review.googlesource.com/19183
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Factor out the race thunks duplicated across the sync, syscall, net,
and fmt packages into a separate package and use it.
Fixes #8593
Change-Id: I156869c50946277809f6b509463752e7f7d28cdb
Reviewed-on: https://go-review.googlesource.com/14870
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change breaks out most of the atomics functions in the runtime
into package runtime/internal/atomic. It adds some basic support
in the toolchain for runtime packages, and also modifies linux/arm
atomics to remove the dependency on the runtime's mutex. The mutexes
have been replaced with spinlocks.
All trybots are happy!
In addition to the trybots, I've tested on the darwin/arm64 builder,
on the darwin/arm builder, and on a ppc64le machine.
Change-Id: I6698c8e3cf3834f55ce5824059f44d00dc8e3c2f
Reviewed-on: https://go-review.googlesource.com/14204
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This only triggers on ARMv7+.
If there are important SMP ARMv6 machines we can reconsider.
Makes TestLFStress tests pass and sync/atomic tests not time out
on Apple iPad Mini 3.
Fixes #7977.
Fixes #10189.
Change-Id: Ie424dea3765176a377d39746be9aa8265d11bec4
Reviewed-on: https://go-review.googlesource.com/12950
Reviewed-by: David Crawshaw <crawshaw@golang.org>
There is absolutely no information about how this was failing.
If we reenable the test then at least we can get a build log from
darwin/arm.
There are not even freebsd/arm or netbsd/arm builders,
so not too worried about those. (That is another problem.)
Change-Id: I0e739a4dd2897adbe110aa400d720d8fa02ae65f
Reviewed-on: https://go-review.googlesource.com/12920
Reviewed-by: Russ Cox <rsc@golang.org>
A comment in waitgroup.go describes the following scenario
as the reason to have dynamically created semaphores:
// G1: Add(1)
// G1: go G2()
// G1: Wait() // Context switch after Unlock() and before Semacquire().
// G2: Done() // Release semaphore: sema == 1, waiters == 0. G1 doesn't run yet.
// G3: Wait() // Finds counter == 0, waiters == 0, doesn't block.
// G3: Add(1) // Makes counter == 1, waiters == 0.
// G3: go G4()
// G3: Wait() // G1 still hasn't run, G3 finds sema == 1, unblocked! Bug.
However, the scenario is incorrect:
G3: Add(1) happens concurrently with G1: Wait(),
and so there is no reasonable behavior of the program
(G1: Wait() may or may not wait for G3: Add(1) which
can't be the intended behavior).
With this conclusion we can:
1. Remove dynamic allocation of semaphores.
2. Remove the mutex entirely and instead pack counter and waiters
into a single uint64.
This makes the logic significantly simpler: both Add and Wait
do only a single atomic RMW to update the state.
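A sketch of the packing described, with the counter in the high 32
bits and the waiter count in the low 32 bits so one atomic RMW
updates both (layout per the description above; names are
illustrative):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // state holds the counter in the high 32 bits and the waiter
    // count in the low 32 bits, so Add updates both with one RMW.
    var state uint64

    func add(delta int32) (counter int32, waiters uint32) {
        s := atomic.AddUint64(&state, uint64(delta)<<32)
        return int32(s >> 32), uint32(s)
    }

    func main() {
        add(1)
        c, w := add(1)
        fmt.Println(c, w) // 2 0
    }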
benchmark old ns/op new ns/op delta
BenchmarkWaitGroupUncontended 30.6 32.7 +6.86%
BenchmarkWaitGroupActuallyWait 722 595 -17.59%
BenchmarkWaitGroupActuallyWait-2 396 319 -19.44%
BenchmarkWaitGroupActuallyWait-4 224 183 -18.30%
BenchmarkWaitGroupActuallyWait-8 134 106 -20.90%
benchmark old allocs new allocs delta
BenchmarkWaitGroupActuallyWait 2 1 -50.00%
benchmark old bytes new bytes delta
BenchmarkWaitGroupActuallyWait 48 16 -66.67%
Change-Id: I28911f3243aa16544e99ac8f1f5af31944c7ea3a
Reviewed-on: https://go-review.googlesource.com/4117
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
All of the architectures except ppc64 have only "RET" for the return
mnemonic. ppc64 used to have only "RETURN", but commit cf06ea6
introduced RET as a synonym for RETURN to make ppc64 consistent with
the other architectures. However, that commit was never followed up to
make the code itself consistent by eliminating uses of RETURN.
This commit replaces all uses of RETURN in the ppc64 assembly with
RET.
This was done with
sed -i 's/\<RETURN\>/RET/' **/*_ppc64x.s
plus one manual change to syscall/asm.s.
Change-Id: I3f6c8d2be157df8841d48de988ee43f3e3087995
Reviewed-on: https://go-review.googlesource.com/10672
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Similar to darwin/arm. This issue is quite worrying and I hope it
can be addressed for Go 1.5.
Change-Id: Ic095281d6a2e9a38a59973f58d464471db5a2edc
Reviewed-on: https://go-review.googlesource.com/8811
Reviewed-by: Minux Ma <minux@golang.org>
Updates #7338.
Change-Id: I859a73543352dbdd13ec05efb23a95aecbcc628a
Reviewed-on: https://go-review.googlesource.com/7164
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently sync.Mutex is fully cooperative. That is, once contention is discovered,
the goroutine calls into the scheduler. This is suboptimal, as the resource can become
free soon after (especially if critical sections are short). Server software
usually runs at ~50% CPU utilization, that is, switching to other goroutines
is not necessarily profitable.
This change adds limited active spinning to sync.Mutex if:
1. running on a multicore machine and
2. GOMAXPROCS>1 and
3. there is at least one other running P and
4. local runq is empty.
As opposed to the runtime mutex, we don't do passive spinning,
because there can be work on the global runq or on other Ps.
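For illustration, an analogous spin-then-yield lock showing the idea
(spinMutex and maxSpins are made up for this sketch; the real
conditions live behind runtime hooks, not NumCPU checks):

    package main

    import (
        "fmt"
        "runtime"
        "sync/atomic"
    )

    // spinMutex is an analogy for the change, not sync.Mutex's code:
    // spin briefly while the lock is held, then fall back to yielding.
    type spinMutex struct{ state int32 }

    const maxSpins = 4 // sync.Mutex uses a similarly small bound

    func (m *spinMutex) Lock() {
        for spins := 0; ; {
            if atomic.CompareAndSwapInt32(&m.state, 0, 1) {
                return
            }
            if spins < maxSpins && runtime.NumCPU() > 1 {
                spins++ // active spin: the lock may free up soon
                continue
            }
            runtime.Gosched() // give up the P instead of burning it
        }
    }

    func (m *spinMutex) Unlock() { atomic.StoreInt32(&m.state, 0) }

    func main() {
        var m spinMutex
        m.Lock()
        m.Unlock()
        fmt.Println("ok")
    }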
benchmark old ns/op new ns/op delta
BenchmarkMutexNoSpin 1271 1272 +0.08%
BenchmarkMutexNoSpin-2 702 683 -2.71%
BenchmarkMutexNoSpin-4 377 372 -1.33%
BenchmarkMutexNoSpin-8 197 190 -3.55%
BenchmarkMutexNoSpin-16 131 122 -6.87%
BenchmarkMutexNoSpin-32 170 164 -3.53%
BenchmarkMutexSpin 4724 4728 +0.08%
BenchmarkMutexSpin-2 2501 2491 -0.40%
BenchmarkMutexSpin-4 1330 1325 -0.38%
BenchmarkMutexSpin-8 684 684 +0.00%
BenchmarkMutexSpin-16 414 372 -10.14%
BenchmarkMutexSpin-32 559 469 -16.10%
BenchmarkMutex 19.1 19.1 +0.00%
BenchmarkMutex-2 81.6 54.3 -33.46%
BenchmarkMutex-4 143 100 -30.07%
BenchmarkMutex-8 154 156 +1.30%
BenchmarkMutex-16 140 159 +13.57%
BenchmarkMutex-32 141 163 +15.60%
BenchmarkMutexSlack 33.3 31.2 -6.31%
BenchmarkMutexSlack-2 122 97.7 -19.92%
BenchmarkMutexSlack-4 168 158 -5.95%
BenchmarkMutexSlack-8 152 158 +3.95%
BenchmarkMutexSlack-16 140 159 +13.57%
BenchmarkMutexSlack-32 146 162 +10.96%
BenchmarkMutexWork 154 154 +0.00%
BenchmarkMutexWork-2 89.2 89.9 +0.78%
BenchmarkMutexWork-4 139 86.1 -38.06%
BenchmarkMutexWork-8 177 162 -8.47%
BenchmarkMutexWork-16 170 173 +1.76%
BenchmarkMutexWork-32 176 176 +0.00%
BenchmarkMutexWorkSlack 160 160 +0.00%
BenchmarkMutexWorkSlack-2 103 99.1 -3.79%
BenchmarkMutexWorkSlack-4 155 148 -4.52%
BenchmarkMutexWorkSlack-8 176 170 -3.41%
BenchmarkMutexWorkSlack-16 170 173 +1.76%
BenchmarkMutexWorkSlack-32 175 176 +0.57%
"No work" benchmarks are not very interesting (BenchmarkMutex and
BenchmarkMutexSlack), as they are absolutely not realistic.
Fixes #8889
Change-Id: I6f14f42af1fa48f73a776fdd11f0af6dd2bb428b
Reviewed-on: https://go-review.googlesource.com/5430
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Require a name to be specified when referencing the pseudo-stack.
If you want a real stack offset, use the hardware stack pointer (e.g.,
R13 on arm), not SP.
Fix affected assembly files.
Change-Id: If3545f187a43cdda4acc892000038ec25901132a
Reviewed-on: https://go-review.googlesource.com/5120
Run-TryBot: Rob Pike <r@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Several .s files for ARM had several properties the new assembler will not support.
These include:
- mentioning SP or PC as a hardware register
These are always pseudo-registers except that in some contexts
they're not, and it's confusing because the context should not affect
which register you mean. Change the references to the hardware
registers to be explicit: R13 for SP, R15 for PC.
- constant creation using assignment
The files say a=b when they could instead say #define a b.
There is no reason to have both mechanisms.
- R(0) to refer to R0.
Some macros use this to a great extent. Again, it's easy just to
use a #define to rename a register.
Change-Id: I002335ace8e876c5b63c71c2560533eb835346d2
Reviewed-on: https://go-review.googlesource.com/4822
Reviewed-by: Dave Cheney <dave@cheney.net>
These depend on storing arbitrary integer values using
pointer atomics, and we can't support that anymore.
Change-Id: I8cadd6d462c3eebdbe7078f43fe7c779fa8f52b3
Reviewed-on: https://go-review.googlesource.com/2311
Reviewed-by: Rick Hudson <rlh@golang.org>
Add write barrier to atomic operations manipulating pointers.
In general an atomic write of a pointer word may indicate racy accesses,
so there is no strictly safe way to attempt to keep the shadow copy
in sync with the real one. Instead, mark the shadow copy as not used.
Redirect sync/atomic pointer routines back to the runtime ones,
so that there is only one copy of the write barrier and shadow logic.
In time we might consider doing this for most of the sync/atomic
functions, but for now only the pointer routines need that treatment.
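For reference, the sync/atomic pointer routines being redirected (a
usage sketch; the write barrier itself is internal to the runtime):

    package main

    import (
        "fmt"
        "sync/atomic"
        "unsafe"
    )

    func main() {
        var p unsafe.Pointer
        v := 42
        // Stores through StorePointer now share the runtime's
        // barriered implementation.
        atomic.StorePointer(&p, unsafe.Pointer(&v))
        got := (*int)(atomic.LoadPointer(&p))
        fmt.Println(*got) // 42
    }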
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: I852936b9a111a6cb9079cfaf6bd78b43016c0242
Reviewed-on: https://go-review.googlesource.com/2066
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
When we start work on Gerrit, ppc64 and garbage collection
work will continue in the master branch, not the dev branches.
(We may still use dev branches for other things later, but
these are ready to be merged, and doing it now, before moving
to Git means we don't have to have dev branches working
in the Gerrit workflow on day one.)
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/183140043