Each URL was manually verified to ensure it did not serve up incorrect
content.
Change-Id: I4dc846227af95a73ee9a3074d0c379ff0fa955df
Reviewed-on: https://go-review.googlesource.com/115798
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
The assembly is broken: it does `MOVQ g(R12), R14`, expecting
that R12 contains the TLS address, but it does not do get_tls(R12) first.
This magically works on Linux: `MOVQ g(R12), R14` is compiled to
`mov %fs:0xfffffffffffffff8,%r14`, which does not use R12.
But it crashes on Windows.
Add an explicit `get_tls(R12)`.
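Roughly, the fixed sequence (using the standard runtime TLS macros):

    get_tls(R12)            // load the TLS base into R12 first
    MOVQ    g(R12), R14     // now this really reads g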
Fixes #16206
Change-Id: Ic1f21a6fef2473bcf9147de6646929781c9c1e98
Reviewed-on: https://go-review.googlesource.com/24590
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Race runtime also needs local malloc caches and currently uses
a mix of per-OS-thread and per-goroutine caches. This leads to
increased memory consumption. But more importantly, the cache
of synchronization objects is per-goroutine, and we don't always
have goroutine context when freeing memory in GC. As a result,
synchronization object descriptors leak (more precisely, they
can be reused if another synchronization object is recreated
at the same address, but that does not always help). For example,
the added BenchmarkSyncLeak has effectively runaway memory
consumption (based on a real long running server).
This change updates race runtime with support for per-P contexts.
BenchmarkSyncLeak now stabilizes at ~1GB memory consumption.
Long term, this will allow us to remove race runtime dependency
on glibc (as malloc is the main cornerstone).
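For a feel of the leak, here is a hypothetical sketch of the kind of
pattern BenchmarkSyncLeak exercises (not the actual benchmark source):
every iteration creates mutexes at fresh heap addresses, so the race
runtime keeps allocating new sync object descriptors that GC cannot
free for lack of goroutine context:

    package race_test

    import (
        "sync"
        "testing"
    )

    func BenchmarkSyncLeak(b *testing.B) {
        for i := 0; i < b.N; i++ {
            mus := make([]*sync.Mutex, 1000)
            for j := range mus {
                mus[j] = new(sync.Mutex) // fresh address each time
                mus[j].Lock()
                mus[j].Unlock()
            }
        }
    }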
I also implemented a different scheme for passing the P context
to the race runtime: the scheduler notified the race runtime about
the association between G and P by calling procwire(g, p)/procunwire(g, p).
But it turned out to be very messy, as we have lots of places
where the association changes (e.g. syscalls). So I dropped it
in favor of the current scheme: the race runtime asks the scheduler
about the current P.
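Schematically, the current scheme is along these lines (runtime-internal
pseudocode; field and function names are illustrative, not the exact
ones in the CL):

    // Each P carries its own race context; the race runtime asks the
    // scheduler for the context of the P the goroutine is running on.
    func raceproccontext() uintptr {
        return getg().m.p.ptr().raceprocctx
    }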
Fixes #14533
Change-Id: Iad10d2f816a44affae1b9fed446b3580eafd8c69
Reviewed-on: https://go-review.googlesource.com/19970
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is a subset of https://golang.org/cl/20022 with only the copyright
header lines, so the next CL will be smaller and more reviewable.
Go policy has been a single space after periods in comments for some time.
The copyright header template at:
https://golang.org/doc/contribute.html#copyright
also uses a single space.
Make them all consistent.
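For reference, the header form with a single space:

    // Copyright 2016 The Go Authors. All rights reserved.
    // Use of this source code is governed by a BSD-style
    // license that can be found in the LICENSE file.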
Change-Id: Icc26c6b8495c3820da6b171ca96a74701b4a01b0
Reviewed-on: https://go-review.googlesource.com/20111
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Changed racewalk/race detector to use FP in a more
sensible way.
Relaxed checks for CONVNOP when race detecting.
Modified tighten to ensure that GetClosurePtr cannot float
out of the entry block (it turns out this cannot be relaxed;
DX is sometimes stomped by other code accompanying race
detection).
Added a case for addr(CONVNOP).
Modified addr to take a "bounded" flag to suppress nil checks
where it is set (usually by the race detector).
Cannot leave the unimplemented-complainer enabled because it
turns out we are optimistically running SSA on every platform.
Change-Id: Ife021654ee4065b3ffac62326d09b4b317b9f2e0
Reviewed-on: https://go-review.googlesource.com/15710
Reviewed-by: Keith Randall <khr@golang.org>
In most cases we pass a return PC to the race detector,
and the race runtime subtracts one from it.
However, in manual instrumentation in the runtime
we pass the function start PC to the race runtime.
The race runtime can't distinguish these cases
and so does not subtract one from the top PC.
This leads to bogus line numbers in some cases.
Make it consistent and always pass what looks
like a return PC, so that the race runtime can
subtract one and still get a PC in the same function.
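Schematically (helper names illustrative, not the exact ones in the CL):

    // The race runtime reports races at pc-1, so when only a function
    // entry PC is available, bump it so it looks like a return PC and
    // pc-1 still resolves inside the same function.
    reportPC := funcEntryPC + 1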
Also delete two unused functions.
Update #8053
Change-Id: I4242dec5e055e460c9a8990eaca1d085ae240ed2
Reviewed-on: https://go-review.googlesource.com/4902
Reviewed-by: Ian Lance Taylor <iant@golang.org>
MOVQ RARG0, 0(SP) smashes exactly what was saved by PUSHQ R15.
This code managed to work somehow with the current race runtime,
but corrupts caller arguments with the new race runtime that I am testing.
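The clash, spelled out:

    PUSHQ   R15             // saves R15 at 0(SP)
    MOVQ    RARG0, 0(SP)    // ...and immediately overwrites that save slot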
Change-Id: I9ffe8b5eee86451db36e99dbf4d11f320192e576
Reviewed-on: https://go-review.googlesource.com/4810
Reviewed-by: Keith Randall <khr@golang.org>
This CL only fixes the build; there are two failing tests,
RaceMapBigValAccess1 and RaceMapBigValAccess2,
in the runtime/race tests. I haven't investigated why yet.
Updates #9516.
Change-Id: If5bd2f0bee1ee45b1977990ab71e2917aada505f
Reviewed-on: https://go-review.googlesource.com/2401
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Add write barrier to atomic operations manipulating pointers.
In general an atomic write of a pointer word may indicate racy accesses,
so there is no strictly safe way to attempt to keep the shadow copy
in sync with the real one. Instead, mark the shadow copy as not used.
Redirect sync/atomic pointer routines back to the runtime ones,
so that there is only one copy of the write barrier and shadow logic.
In time we might consider doing this for most of the sync/atomic
functions, but for now only the pointer routines need that treatment.
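The redirection follows the usual linkname pattern; roughly (a sketch
in package runtime, details may differ from the CL):

    //go:linkname sync_atomic_StorePointer sync/atomic.StorePointer
    //go:nosplit
    func sync_atomic_StorePointer(ptr *unsafe.Pointer, new unsafe.Pointer) {
        // The single copy of the write barrier and shadow logic lives here.
        atomicstorep(unsafe.Pointer(ptr), new)
    }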
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: I852936b9a111a6cb9079cfaf6bd78b43016c0242
Reviewed-on: https://go-review.googlesource.com/2066
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Race detector runtime does not tolerate operations on addresses
that were not previously declared with __tsan_map_shadow
(namely, data, bss and heap). The corresponding address
checks for atomic operations were removed in
https://golang.org/cl/111310044
Restore these checks.
It's trickier than just not calling into the race runtime,
because it is the race runtime that performs the atomic
operations themselves (if we do not call into the race runtime,
we skip the atomic operation itself as well). So instead we call
__tsan_go_ignore_sync_start/end around the atomic operation.
This forces the race runtime to skip all other processing
except performing the atomic operation itself.
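In pseudocode, the pattern is roughly this (names illustrative; the
real code lives in the runtime's race call assembly):

    if !isShadowMapped(addr) {                // addr outside data/bss/heap
        __tsan_go_ignore_sync_start(racectx)
        callAtomicOp()                        // race runtime still does the atomic op
        __tsan_go_ignore_sync_end(racectx)
    }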
Fixes #9136.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/179030043
This is to reduce the delta between dev.cc and dev.garbage to just garbage collector changes.
These are the files that had merge conflicts and have been edited by hand:
malloc.go
mem_linux.go
mgc.go
os1_linux.go
proc1.go
panic1.go
runtime1.go
LGTM=austin
R=austin
CC=golang-codereviews
https://golang.org/cl/174180043
The main change is that #include "zasm_GOOS_GOARCH.h"
is now #include "go_asm.h" and/or #include "go_tls.h".
Also, because C StackGuard is now Go _StackGuard,
the assembly name changes from const_StackGuard to
const__StackGuard.
In asm_$GOARCH.s, add a new function getg, formerly
implemented in C.
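For amd64, getg looks roughly like this (a sketch using the go_tls.h
macros):

    TEXT runtime·getg(SB), NOSPLIT, $0-8
        get_tls(CX)
        MOVQ    g(CX), AX
        MOVQ    AX, ret+0(FP)
        RET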
The renamed atomics now have Go wrappers, to get
escape analysis annotations right. Those wrappers
are in CL 174860043.
LGTM=r, aram
R=r, aram
CC=austin, dvyukov, golang-codereviews, iant, khr
https://golang.org/cl/168510043
I removed support for jumping between functions years ago,
as part of doing the instruction layout for each function separately.
Given that, it makes sense to treat labels as function-scoped.
This lets each function have its own 'loop' label, for example.
Makes the assembly much cleaner and removes the last
reason anyone would reach for the 123(PC) form instead.
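For example, both functions below can now use a 'loop' label without
clashing (illustrative):

    TEXT ·count1(SB), NOSPLIT, $0
    loop:
        DECQ    AX
        JNZ     loop    // refers to count1's loop

    TEXT ·count2(SB), NOSPLIT, $0
    loop:
        DECQ    BX
        JNZ     loop    // a separate, function-scoped label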
Note that this is on the dev.power64 branch, but it changes all
the assemblers. The change will ship in Go 1.5 (perhaps after
being ported into the new assembler).
Came up as part of CL 167730043.
LGTM=r
R=r
CC=austin, dave, golang-codereviews, minux
https://golang.org/cl/159670043