Handling of special records for tiny allocations has two problems:
1. Once we queue a finalizer we mark the object. As a result, any
subsequent finalizers for the same object will not be queued
during this GC cycle. If we have 16 finalizers set up (the worst case),
finalization takes 16 GC cycles. This is what caused the misbehavior
of tinyfin.go. The actual flakiness was caused by the fact that fing
is asynchronous and doesn't always run before the check.
2. If a tiny block has both finalizer and profile specials,
it is possible that we queue the finalizer and preserve the object live,
yet free the profile record. As a result, the heap profile can be skewed.
Fix both issues by analyzing all special records for a single object at once.
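Condensed, the sweeper's specials walk now handles all records for one
unmarked object together (a sketch only; hbits and endOffset come from
the surrounding sweep code):

    // Pass 1: if the object has at least one finalizer, mark it so
    // the object and its remaining specials survive this cycle.
    hasFin := false
    for tmp := special; tmp != nil && uintptr(tmp.offset) < endOffset; tmp = tmp.next {
        if tmp.kind == _KindSpecialFinalizer {
            hbits.setMarkedNonAtomic() // stop freeing of the object
            hasFin = true
            break
        }
    }
    // Pass 2: queue *all* finalizers for the object now; free the
    // profile record only if the object has no finalizer (!hasFin).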
Also, make tinyfin test stricter and remove reliance on real time.
Also, add a test for problem 2. Currently the heap profile misses
about half of the live memory.
Fixes #13100
Change-Id: I9ae4dc1c44893724138a4565ca5cae29f2e97544
Reviewed-on: https://go-review.googlesource.com/16591
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Currently the GC work buffers are only 256 bytes and hence can record
only 24 64-bit pointers. They were reduced from 4K in commits db7fd1c
and a15818f as a way to minimize the amount of work the per-P workbuf
caches could "hide" from the mark phase and carry in to the mark
termination phase. However, this approach wasn't very robust and we
later added a "mark 2" phase to address this problem head-on.
Because of mark 2, there's now no benefit to having very small work
buffers. But there are plenty of downsides: small work buffers
increase contention on the work lists, increase the frequency and
hence net overhead of acquiring and releasing work buffers, and
somewhat increase memory overhead of the GC.
This commit expands work buffers back to 4K (504 64-bit pointers).
This reduces the rate of writes to work.full in the garbage benchmark
from a peak of ~780,000 writes/sec to a peak of ~32,000 writes/sec.
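For reference, the buffer is sized by a constant, with the pointer
array absorbing whatever the header doesn't use (a sketch of the
CL-era layout; helper spellings varied across versions):

    const _WorkbufSize = 4096 // in bytes; ~504 pointers after the header

    type workbuf struct {
        workbufhdr
        // account for the above fields
        obj [(_WorkbufSize - unsafe.Sizeof(workbufhdr{})) / ptrSize]uintptr
    }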
This has negligible effect on the go1 benchmarks. It slightly slows
down the garbage benchmark.
name old time/op new time/op delta
XBenchGarbage-12 5.37ms ± 5% 5.60ms ± 2% +4.37% (p=0.000 n=20+20)
Change-Id: Ic9cc28e7a125d23d9faf4f5e690fb8aa9bcdfb28
Reviewed-on: https://go-review.googlesource.com/15893
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, assists are non-preemptible, which means a heavily
assisting G can block other Gs from running. At the beginning of a GC
cycle, it can also delay scang, which will spin until the assist is
done. Since scanning is currently done sequentially, this can
seriously extend the length of the scan phase.
Fix this by making assists preemptible. Since the assist holds work
buffers and runs on the system stack, this must be done cooperatively:
we make gcDrainN return on preemption, and make the assist return from
the system stack and voluntarily Gosched.
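Sketch of the resulting shape of gcAssistAlloc (heavily simplified; the
real control flow uses a retry label and more bookkeeping, and
"satisfied" here is a hypothetical stand-in for the real exit
condition):

    var workDone int64
    for {
        systemstack(func() {
            // gcDrainN returns early if this G is preempted.
            workDone = gcDrainN(gcw, scanWork)
        })
        if satisfied(workDone) { // hypothetical: debt paid or cycle over
            break
        }
        // Off the system stack now, so it is safe to yield.
        Gosched()
    }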
This is a prerequisite to enlarging the work buffers. Without this
change, the delays and spinning in scang increase significantly.
This has no effect on the go1 benchmarks.
name old time/op new time/op delta
XBenchGarbage-12 5.72ms ± 4% 5.37ms ± 5% -6.11% (p=0.000 n=20+20)
Change-Id: I829e732a0f23b126da633516a1a9ec1a508fdbf1
Reviewed-on: https://go-review.googlesource.com/15894
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
GC assists must block until the assist can be satisfied (either
through stealing credit or doing work) or the GC cycle ends.
Currently, this is implemented as a retry loop with a 100 µs delay.
This obviously isn't ideal, as it wastes CPU and delays mutator
execution. It also has the somewhat peculiar downside that sleeping a
G requires allocation, and this requires working around recursive
allocation.
Replace this timed delay with a proper scheduling queue. When an
assist can't be satisfied immediately, it adds the allocating G to a
queue and parks it. Any time background scan credit is flushed, it
consults this queue, directly satisfies the debt of queued assists,
and wakes up satisfied assists before flushing any remaining credit to
the background credit pool.
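Condensed sketch of the flush path (workToBytes is a hypothetical
stand-in for the real credit conversion, and the queue walk is
elided):

    func gcFlushBgCredit(scanWork int64) {
        if work.assistQueue.head == 0 {
            // Fast path: no blocked assists.
            xaddint64(&gcController.bgScanCredit, scanWork)
            return
        }
        scanBytes := workToBytes(scanWork)
        lock(&work.assistQueue.lock)
        // Pay down the debt of each queued G, waking those whose
        // debt reaches zero, until the credit runs out.
        // ...
        unlock(&work.assistQueue.lock)
        // Any remaining credit goes to the background pool.
    }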
No effect on the go1 benchmarks. Slightly speeds up the garbage
benchmark.
name old time/op new time/op delta
XBenchGarbage-12 5.81ms ± 1% 5.72ms ± 4% -1.65% (p=0.011 n=20+20)
Updates #12041.
Change-Id: I8ee3b6274dd097b12b10a8030796a958a4b0e7b7
Reviewed-on: https://go-review.googlesource.com/15890
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This eliminates many write barriers in the scheduler code that are
unnecessary and will interfere with upcoming changes where the garbage
collector will have to invoke run queue functions in contexts that
must not have write barriers.
Change-Id: I702d0ac99cfd00ffff406e7362917db6a43e7e55
Reviewed-on: https://go-review.googlesource.com/16556
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
To avoid collisions with what existing code may already be doing.
Change-Id: Ice639440aafc0724714c25333d90a49954372230
Reviewed-on: https://go-review.googlesource.com/16503
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently the concurrent root scan is performed in its entirety by the
GC coordinator before entering concurrent mark (which enables GC
workers). This scan is done sequentially, which can prolong the scan
phase, delay the mark phase, and means that the scan phase does not
obey the 25% CPU goal. Furthermore, there's no need to complete the
root scan before starting marking (in fact, we already allow GC
assists to happen during the scan phase), so this acts as an
unnecessary barrier between root scanning and marking.
This change shifts the root scan work out of the GC coordinator and
into the GC workers. The coordinator simply sets up the scan state and
enqueues the right number of root scan jobs. The GC workers then drain
the root scan jobs prior to draining heap scan jobs.
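In gcDrain this looks roughly like the following, using the
markrootNext/markrootJobs counters introduced here (simplified; the
real loop also honors preemption):

    for {
        job := xadd(&work.markrootNext, +1) - 1 // claim the next root job
        if job >= work.markrootJobs {
            break // no root jobs left; fall through to heap marking
        }
        markroot(job)
    }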
This parallelizes the root scan process, makes it obey the 25% CPU
goal, and effectively eliminates root scanning as an isolated phase,
allowing the system to smoothly transition from root scanning to heap
marking. This also eliminates a major non-STW responsibility of the GC
coordinator, which will make it easier to switch to a decentralized
state machine. Finally, it puts us in a good position to perform root
scanning in assists as well, which will help satisfy assists at the
beginning of the GC cycle.
This is mostly straightforward. One tricky aspect is that we have to
deal with preemption deadlock, where two non-preemptible goroutines
are trying to preempt each other to perform a stack scan. Given the
context where this happens, the only instance of this is two
background workers trying to scan each other. We avoid this by simply
not scanning the stacks of background workers during the concurrent
phase; this is safe because we'll scan them during mark termination
(and their stacks are *very* small and should not contain any new
pointers).
This change also switches the root marking during mark termination to
use the same gcDrain-based code path as concurrent mark. This
shouldn't affect performance because STW root marking was already
parallel and tasks switched to heap marking immediately when no more
root marking tasks were available. However, it simplifies the code and
unifies these code paths.
This has negligible effect on the go1 benchmarks. It slightly slows
down the garbage benchmark, possibly by making GC run slightly more
frequently.
name old time/op new time/op delta
XBenchGarbage-12 5.10ms ± 1% 5.24ms ± 1% +2.87% (p=0.000 n=18+18)
name old time/op new time/op delta
BinaryTree17-12 3.25s ± 3% 3.20s ± 5% -1.57% (p=0.013 n=20+20)
Fannkuch11-12 2.45s ± 1% 2.46s ± 1% +0.38% (p=0.019 n=20+18)
FmtFprintfEmpty-12 49.7ns ± 3% 49.9ns ± 4% ~ (p=0.851 n=19+20)
FmtFprintfString-12 170ns ± 2% 170ns ± 1% ~ (p=0.775 n=20+19)
FmtFprintfInt-12 161ns ± 1% 160ns ± 1% -0.78% (p=0.000 n=19+18)
FmtFprintfIntInt-12 267ns ± 1% 270ns ± 1% +1.04% (p=0.000 n=19+19)
FmtFprintfPrefixedInt-12 238ns ± 2% 238ns ± 1% ~ (p=0.133 n=18+19)
FmtFprintfFloat-12 311ns ± 1% 310ns ± 2% -0.35% (p=0.023 n=20+19)
FmtManyArgs-12 1.08µs ± 1% 1.06µs ± 1% -2.31% (p=0.000 n=20+20)
GobDecode-12 8.65ms ± 1% 8.63ms ± 1% ~ (p=0.377 n=18+20)
GobEncode-12 6.49ms ± 1% 6.52ms ± 1% +0.37% (p=0.015 n=20+20)
Gzip-12 319ms ± 3% 318ms ± 1% ~ (p=0.975 n=19+17)
Gunzip-12 41.9ms ± 1% 42.1ms ± 2% +0.65% (p=0.004 n=19+20)
HTTPClientServer-12 61.7µs ± 1% 62.6µs ± 1% +1.40% (p=0.000 n=18+20)
JSONEncode-12 16.8ms ± 1% 16.9ms ± 1% ~ (p=0.239 n=20+18)
JSONDecode-12 58.4ms ± 1% 60.7ms ± 1% +3.85% (p=0.000 n=19+20)
Mandelbrot200-12 3.86ms ± 0% 3.86ms ± 1% ~ (p=0.092 n=18+19)
GoParse-12 3.75ms ± 2% 3.75ms ± 2% ~ (p=0.708 n=19+20)
RegexpMatchEasy0_32-12 100ns ± 1% 100ns ± 2% +0.60% (p=0.010 n=17+20)
RegexpMatchEasy0_1K-12 341ns ± 1% 342ns ± 2% ~ (p=0.203 n=20+19)
RegexpMatchEasy1_32-12 82.5ns ± 2% 83.2ns ± 2% +0.83% (p=0.007 n=19+19)
RegexpMatchEasy1_1K-12 495ns ± 1% 495ns ± 2% ~ (p=0.970 n=19+18)
RegexpMatchMedium_32-12 130ns ± 2% 130ns ± 2% +0.59% (p=0.039 n=19+20)
RegexpMatchMedium_1K-12 39.2µs ± 1% 39.3µs ± 1% ~ (p=0.214 n=18+18)
RegexpMatchHard_32-12 2.03µs ± 2% 2.02µs ± 1% ~ (p=0.166 n=18+19)
RegexpMatchHard_1K-12 61.0µs ± 1% 60.9µs ± 1% ~ (p=0.169 n=20+18)
Revcomp-12 533ms ± 1% 535ms ± 1% ~ (p=0.071 n=19+17)
Template-12 68.1ms ± 2% 73.0ms ± 1% +7.26% (p=0.000 n=19+20)
TimeParse-12 355ns ± 2% 356ns ± 2% ~ (p=0.530 n=19+20)
TimeFormat-12 357ns ± 2% 347ns ± 1% -2.59% (p=0.000 n=20+19)
[Geo mean] 62.1µs 62.3µs +0.31%
name old speed new speed delta
GobDecode-12 88.7MB/s ± 1% 88.9MB/s ± 1% ~ (p=0.377 n=18+20)
GobEncode-12 118MB/s ± 1% 118MB/s ± 1% -0.37% (p=0.015 n=20+20)
Gzip-12 60.9MB/s ± 3% 60.9MB/s ± 1% ~ (p=0.944 n=19+17)
Gunzip-12 464MB/s ± 1% 461MB/s ± 2% -0.64% (p=0.004 n=19+20)
JSONEncode-12 115MB/s ± 1% 115MB/s ± 1% ~ (p=0.236 n=20+18)
JSONDecode-12 33.2MB/s ± 1% 32.0MB/s ± 1% -3.71% (p=0.000 n=19+20)
GoParse-12 15.5MB/s ± 2% 15.5MB/s ± 2% ~ (p=0.702 n=19+20)
RegexpMatchEasy0_32-12 320MB/s ± 1% 318MB/s ± 2% ~ (p=0.094 n=18+20)
RegexpMatchEasy0_1K-12 3.00GB/s ± 1% 2.99GB/s ± 1% ~ (p=0.194 n=20+19)
RegexpMatchEasy1_32-12 388MB/s ± 2% 385MB/s ± 2% -0.83% (p=0.008 n=19+19)
RegexpMatchEasy1_1K-12 2.07GB/s ± 1% 2.07GB/s ± 1% ~ (p=0.964 n=19+18)
RegexpMatchMedium_32-12 7.68MB/s ± 1% 7.64MB/s ± 2% -0.57% (p=0.020 n=19+20)
RegexpMatchMedium_1K-12 26.1MB/s ± 1% 26.1MB/s ± 1% ~ (p=0.211 n=18+18)
RegexpMatchHard_32-12 15.8MB/s ± 1% 15.8MB/s ± 1% ~ (p=0.180 n=18+19)
RegexpMatchHard_1K-12 16.8MB/s ± 1% 16.8MB/s ± 2% ~ (p=0.236 n=20+19)
Revcomp-12 477MB/s ± 1% 475MB/s ± 1% ~ (p=0.071 n=19+17)
Template-12 28.5MB/s ± 2% 26.6MB/s ± 1% -6.77% (p=0.000 n=19+20)
[Geo mean] 100MB/s 99.0MB/s -0.82%
Change-Id: I875bf6ceb306d1ee2f470cabf88aa6ede27c47a0
Reviewed-on: https://go-review.googlesource.com/16059
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We already have gcMarkWorkAvailable, but the check for GC mark work is
open-coded in several places. Generalize gcMarkWorkAvailable slightly
and replace these open-coded checks with calls to gcMarkWorkAvailable.
In addition to cleaning up the code, this puts us in a better position
to make this check slightly more complicated.
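For reference, the generalized check is roughly as follows; p may be
nil to ask only about globally visible work:

    func gcMarkWorkAvailable(p *p) bool {
        if p != nil && !p.gcw.empty() {
            return true
        }
        return atomicload64(&work.full) != 0 // global work available
    }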
Change-Id: I1b29883300ecd82a1bf6be193e9b4ee96582a860
Reviewed-on: https://go-review.googlesource.com/16058
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Abandon (but still support) the old numbering system.
GOTRACEBACK=none   is old 0
GOTRACEBACK=single is the new behavior
GOTRACEBACK=all    is old 1
GOTRACEBACK=system is old 2
GOTRACEBACK=crash  is unchanged
See doc comment change in runtime1.go for details.
Filed #13107 to decide whether to change default back to GOTRACEBACK=all for Go 1.6 release.
If you run into programs where printing only the current goroutine omits
needed information, please add details in a comment on that issue.
Fixes #12366.
Change-Id: I82ca8b99b5d86dceb3f7102d38d2659d45dbe0db
Reviewed-on: https://go-review.googlesource.com/16512
Reviewed-by: Austin Clements <austin@google.com>
On ppc64x, the thread pointer, held in R13, points 0x7000 bytes past where
thread-local storage begins (presumably to maximize the amount of storage that
can be accessed with a 16-bit signed displacement). The relocations used to
indicate thread-local storage to the platform linker account for this, so to be
able to support external linking we need to change things so the linker applies
this offset instead of the runtime assembly.
Change-Id: I2556c249ab2d802cae62c44b2b4c5b44787d7059
Reviewed-on: https://go-review.googlesource.com/14233
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This is needed to make external linking work.
Change-Id: I4cf7edb4ea318849cab92a697952f8745eed40c4
Reviewed-on: https://go-review.googlesource.com/14237
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The Android linker does not handle TLS for us. We set up the TLS slot
for g ourselves, as we do on darwin/386 and darwin/amd64. This is
disgusting and fragile. We will eventually fix this ugly hack by taking
advantage of the recent TLS IE model implementation. (Instead of
referencing a GOT entry, make the code sequence look into the TLS
variable that holds the offset.)
The TLS slot for g in android/amd64 assumes a fixed offset from %fs.
See runtime/cgo/gcc_android_amd64.c for details.
For golang/go#10743
Change-Id: I1a3fc207946c665515f79026a56ea19134ede2dd
Reviewed-on: https://go-review.googlesource.com/15991
Reviewed-by: David Crawshaw <crawshaw@golang.org>
In the Go signal handler on Plan 9, when a signal with
the _SigThrow flag is received, we call startpanic before
printing the stack trace.
The startpanic function calls systemstack which calls
startpanic_m. In the startpanic_m function, we call
allocmcache to allocate _g_.m.mcache. The problem is
that allocmcache calls nextSample, which does a floating
point operation to return a sampling point for heap profiling.
However, Plan 9 doesn't support floating point in the
signal handler.
This change adds a new function nextSampleNoFP, only
called when in the Plan 9 signal handler, which is
similar to nextSample, but avoids floating point.
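For reference, the fallback is roughly the following: it samples
uniformly in [0, 2*rate), which has the same mean as the exponential
distribution used by nextSample, without any floating point:

    func nextSampleNoFP() int32 {
        rate := MemProfileRate
        if rate > 0x3fffffff { // make 2*rate not overflow
            rate = 0x3fffffff
        }
        if rate != 0 {
            return int32(int(fastrand1()) % (2 * rate))
        }
        return 0
    }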
Change-Id: Iaa30437aa0f7c8c84d40afbab7567ad3bd5ea2de
Reviewed-on: https://go-review.googlesource.com/16307
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The dynamic linker on linux/386 stores the address of the vsyscall
helper at a fixed offset from the %gs register, for easy access from
PIC code.
Change-Id: I635305cfecceef2289985d62e676e16810ed6b94
Reviewed-on: https://go-review.googlesource.com/16346
Reviewed-by: Ian Lance Taylor <iant@golang.org>
arena_{start,used,end} are already uintptr, so no need to convert them
to uintptr, much less to convert them to unsafe.Pointer and then to
uintptr. No binary change to pkg/linux_amd64/runtime.a.
Change-Id: Ia4232ed2a724c44fde7eba403c5fe8e6dccaa879
Reviewed-on: https://go-review.googlesource.com/16339
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Implement an abort note on Plan 9, as an
equivalent of the SIGABRT signal on other
operating systems.
Updates #11975.
Change-Id: I010c9b10f2fbd2471aacd1d073368d975a2f0592
Reviewed-on: https://go-review.googlesource.com/16300
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: David du Colombier <0intro@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When a new tiny block is allocated because we're allocating an object
that won't fit into the current block, mallocgc saves the new block if
it has more space leftover than the old block. However, the logic for
this was subtly broken in golang.org/cl/2814, resulting in never
saving (or consequently reusing) a tiny block.
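The corrected comparison in mallocgc is essentially (a sketch; x is
the freshly allocated maxTinySize block):

    // The new block has maxTinySize-size bytes free, the old one
    // maxTinySize-tinyoffset; keep whichever has more room left.
    if size < c.tinyoffset {
        c.tiny = x
        c.tinyoffset = size
    }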
Change-Id: Ib5f6769451fb82877ddeefe75dfe79ed4a04fd40
Reviewed-on: https://go-review.googlesource.com/16330
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I went looking for an arm system whose stacks are by default smaller
than 64KB. In fact the smallest common linux target I could find was
Android, which like iOS uses 1MB stacks.
Fixes #11873
Change-Id: Ieeb66ad095b3da18d47ba21360ea75152a4107c6
Reviewed-on: https://go-review.googlesource.com/14602
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Minux Ma <minux@golang.org>
This copies the change from CL 16158 (applied as
22d4c8bf13).
Updates #13013
Change-Id: Id7d02e63d92806f06a4e064a91b2fb6574fe385f
Reviewed-on: https://go-review.googlesource.com/16291
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This CL introduces a new mSpanList type to replace the empty mspan
variables that were previously used as list heads.
To be type safe, the previous circular linked list data structure is
now a tail queue instead. One complication of this is
mSpanList_Remove needs to know the list a span is being removed from,
but this appears to be computable in all circumstances.
As a temporary sanity check, mSpanList_Insert and mSpanList_InsertBack
record the list that an mspan has been inserted into so that
mSpanList_Remove can verify that the correct list was specified.
Whereas mspan is 112 bytes on amd64, mSpanList is only 16 bytes. This
shrinks the size of mheap from 50216 bytes to 12584 bytes.
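Sketch of the new list head; the two-pointer layout matches the
16-byte size, though the exact type of the second field here is an
assumption:

    type mSpanList struct {
        first *mspan  // first span in list, or nil if empty
        last  **mspan // last span's next field (O(1) insert at back)
    }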
Change-Id: I8146364753dbc3b4ab120afbb9c7b8740653c216
Reviewed-on: https://go-review.googlesource.com/15906
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Austin Clements <austin@google.com>
It's never used as a *byte anyway, so might as well just make it an
unsafe.Pointer instead.
Change-Id: I68ee418781ab2fc574eeac0498f2515b5561b7a8
Reviewed-on: https://go-review.googlesource.com/16175
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reduces the size of m by ~8% on linux/amd64 (1040 bytes -> 960 bytes).
There are also windows-specific fields, but they're currently
referenced in OS-independent source files (but only when
GOOS=="windows").
Change-Id: I13e1471ff585ccced1271f74209f8ed6df14c202
Reviewed-on: https://go-review.googlesource.com/16173
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change compiler-invoked interface functions to directly take
iface/eface parameters instead of fInterface/interface{} to avoid
needing to always convert.
For the handful of functions that legitimately need to take an
interface{} parameter, add efaceOf to type-safely convert *interface{}
to *eface.
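The helper is a one-line pointer conversion:

    func efaceOf(ep *interface{}) *eface {
        return (*eface)(unsafe.Pointer(ep))
    }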
Change-Id: I8928761a12fd3c771394f36adf93d3006a9fcf39
Reviewed-on: https://go-review.googlesource.com/16166
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Add explicit memory sanitizer instrumentation to the runtime and syscall
packages. The compiler does not instrument the runtime package. It
does instrument the syscall package, but we need to add a couple of
cases that it can't see.
Change-Id: I2d66073f713fe67e33a6720460d2bb8f72f31394
Reviewed-on: https://go-review.googlesource.com/16164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Allows removing a few gratuitous unsafe.Pointer conversions and
parallels the type of reflect.funcType's in and out fields ([]*rtype).
Change-Id: Ie5ca230a94407301a854dfd8782a3180d5054bc4
Reviewed-on: https://go-review.googlesource.com/16163
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
These are the runtime support functions for letting Go code interoperate
with the C/C++ memory sanitizer. Calls to msanread/msanwrite are now
inserted by the compiler with the -msan option. Calls to
msanmalloc/msanfree will be from other runtime functions in a subsequent
CL.
Change-Id: I64fb061b38cc6519153face242eccd291c07d1f2
Reviewed-on: https://go-review.googlesource.com/16162
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The ragged barrier after entering the concurrent mark phase is
vestigial. This used to be the point where we enabled write barriers,
so it was necessary to synchronize all Ps to ensure write barriers
were enabled before any marking occurred. However, we've long since
switched to enabling write barriers during the concurrent scan phase,
so the start-the-world at the beginning of the concurrent scan phase
ensures that all Ps have enabled the write barrier.
Hence, we can eliminate the old "install write barrier" phase.
Fixes #11971.
Change-Id: I8cdcb84b5525cef19927d51ea11ba0a4db991ea8
Reviewed-on: https://go-review.googlesource.com/16044
Reviewed-by: Rick Hudson <rlh@golang.org>
Instead of open-coding conversions from *string to unsafe.Pointer then
to *stringStruct, add a helper function to add some type safety.
Bonus: This caught two **string values being converted to
*stringStruct in heapdump.go.
While here, get rid of the redundant _string type, but add in a
stringStructDWARF type used for generating DWARF debug info.
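The helper is a one-line conversion:

    func stringStructOf(sp *string) *stringStruct {
        return (*stringStruct)(unsafe.Pointer(sp))
    }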
Change-Id: I8882f8cca66ac45190270f82019a5d85db023bd2
Reviewed-on: https://go-review.googlesource.com/16131
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The '1' part is left over from the C conversion, but no longer makes
sense given that print1.go no longer exists.
Change-Id: Iec171251370d740f234afdbd6fb1a4009fde6696
Reviewed-on: https://go-review.googlesource.com/16036
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
access, connect, socket.
In Android-L, logging is done by writing the log messages to the logd
process through a unix domain socket.
Also, changed the arg types of those syscall stubs to match linux
programming APIs.
For golang/go#10743
Change-Id: I66368a03316e253561e9e76aadd180c2cd2e48f3
Reviewed-on: https://go-review.googlesource.com/15993
Reviewed-by: David Crawshaw <crawshaw@golang.org>
When I saw that it was labelled "legacy", I went looking for users of it
to see how it was still used. But there aren't any. Save the next person
the trouble.
Change-Id: I921dd6c57b60331c9816542272555153ac133c02
Reviewed-on: https://go-review.googlesource.com/16035
Reviewed-by: Dave Cheney <dave@cheney.net>
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Building Go shared libraries requires that all functions that have declarations
without bodies have implementations and vice versa, so remove the
implementation of call16 and add a stub implementation of sigreturn.
Change-Id: I4d5a30c8637a5da7991054e151a536611d5bea46
Reviewed-on: https://go-review.googlesource.com/15966
Reviewed-by: Ian Lance Taylor <iant@golang.org>
These functions are always called together and perform logically
related state resets, so combine them into just gcResetMarkState.
Fixes #11427.
Change-Id: I06c17ef65f66186494887a767b3993126955b5fe
Reviewed-on: https://go-review.googlesource.com/16041
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently gcResetGState is called by func gcscan_m for concurrent GC
and directly by func gc for STW GC. Simplify this by consolidating
these two calls into one call by func gc, above where it splits for
concurrent and STW GC.
As a consequence, gcResetGState and gcResetMarkState are always called
together, so the next commit will consolidate these.
Change-Id: Ib62d404c7b32b28f7d3080d26ecf3966cbc4aca0
Reviewed-on: https://go-review.googlesource.com/16040
Reviewed-by: Rick Hudson <rlh@golang.org>
This work queue is no longer used (there are many reads of
work.partial, but the only write is in putpartial, which is never
called).
Fixes #11922.
Change-Id: I08b76c0c02a0867a9cdcb94783e1f7629d44249a
Reviewed-on: https://go-review.googlesource.com/15892
Reviewed-by: Rick Hudson <rlh@golang.org>
It appears this was made possible by commit 89f185f; before that, g was
not dereferenced above.
Change-Id: I70bc571d924b36351392fd4c13d681e938cfb573
Reviewed-on: https://go-review.googlesource.com/16033
Reviewed-by: Andrew Gerrand <adg@golang.org>
PIC code on ppc64le uses R2 as a TOC pointer and when calling a function
through a function pointer must ensure the function pointer is in R12. These
rules are easy enough to follow unconditionally in our assembly, so do that.
Change-Id: Icfc4e47ae5dfbe15f581cbdd785cdeed6e40bc32
Reviewed-on: https://go-review.googlesource.com/15526
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). Part 3 of that
is using a #define in the ppc64 assembly to refer to the size of the fixed
part of the stack (finding all these took me about a week!).
Change-Id: I50f22fe1c47af1ec59da1bd7ea8f84a4750df9b7
Reviewed-on: https://go-review.googlesource.com/15525
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). So to prepare
for this, make a constant for the fixed part of a stack and use that where
necessary.
Change-Id: I447949f4d725003bb82e7d2cf7991c1bca5aa887
Reviewed-on: https://go-review.googlesource.com/15523
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Replace the confusing game where a frame size of $-8 would suppress the
implicit setting up of a stack frame with a nice explicit flag.
The code to set up the function prologue is still a little confusing but better
than it was.
Change-Id: I1d49278ff42c6bc734ebfb079998b32bc53f8d9a
Reviewed-on: https://go-review.googlesource.com/15670
Reviewed-by: Minux Ma <minux@golang.org>
A TODO to merge is removed from panic1.go.
The rest is appended to panic.go.
Updates #12952
Change-Id: Ied4382a455abc20bc2938e34d031802e6b4baf8b
Reviewed-on: https://go-review.googlesource.com/15905
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
os/signal depends on a few unexported runtime functions. This removes the
assembly stubs it used to get access to these in favour of using
//go:linkname in runtime to make the functions accessible to os/signal.
This is motivated by ppc64le shared libraries, where you cannot BR to a symbol
defined in a shared library (only BL), but it seems like an improvement anyway.
Change-Id: I09361203ce38070bd3f132f6dc5ac212f2dc6f58
Reviewed-on: https://go-review.googlesource.com/15871
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
This isn't C anymore. No binary change to pkg/linux_amd64/runtime.a.
Change-Id: I24d66b0f5ac888f432b874aac684b1395e7c8345
Reviewed-on: https://go-review.googlesource.com/15903
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The current fastlog2 testing checks all 64M values in the domain of
interest, which is too much for platforms with no native floating point.
Reduce testing under testing.Short() to speed up builds for those platforms.
Related to #12620
Change-Id: Ie5dcd408724ba91c3b3fcf9ba0dddedb34706cd1
Reviewed-on: https://go-review.googlesource.com/15830
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Joel Sing <jsing@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The duplication of _Kind and kind constants is a legacy of the
conversion from C.
Change-Id: I368b35a41f215cf91ac4b09dac59699edb414a0e
Reviewed-on: https://go-review.googlesource.com/15800
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, when the mutator allocates, the runtime first allocates the
memory and then, if that G has done "enough" allocation, the runtime
checks whether the G has assist debt to pay off and, if so, pays it
off. This approach leads to under-assisting, where a G can allocate a
large region (or many small regions) before paying for it, or can even
exit with outstanding debt.
This commit flips this around so that a G always acquires enough
credit for an allocation before it can perform that allocation. We
continue to amortize the cost of assists by requiring that they
over-assist when triggered to build up credit for many allocations.
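Sketch of the new ordering in mallocgc (simplified):

    var assistG *g
    if gcBlackenEnabled != 0 {
        // Charge the current user G for this allocation up front.
        assistG = getg()
        assistG.gcAssistBytes -= int64(size)
        if assistG.gcAssistBytes < 0 {
            // This G is in debt: assist the GC before allocating.
            gcAssistAlloc(assistG)
        }
    }
    // ...only then perform the allocation...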
Fixes #11967.
Change-Id: Idac9f11133b328535667674d837be72c23ebd899
Reviewed-on: https://go-review.googlesource.com/15409
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently we track the per-G GC assist balance as two monotonically
increasing values: the bytes allocated by the G this cycle (gcalloc)
and the scan work performed by the G this cycle (gcscanwork). The
assist balance is hence assistRatio*gcalloc - gcscanwork.
This works, but has two important downsides:
1) It requires floating-point math to figure out if a G is in debt or
not. This makes it inappropriate to check for assist debt in the
hot path of mallocgc, so we only do this when a G allocates a new
span. As a result, Gs can operate "in the red", leading to
under-assist and extended GC cycle length.
2) Revising the assist ratio during a GC cycle can lead to an "assist
burst". If you think of plotting the scan work performed versus
heap size, the assist ratio controls the slope of this line.
However, in the current system, the target line always passes
through 0 at the heap size that triggered GC, so if the runtime
increases the assist ratio, there has to be a potentially large
assist to jump from the current amount of scan work up to the new
target scan work for the current heap size.
This commit replaces this approach with directly tracking the GC
assist balance in terms of allocation credit bytes. Allocating N bytes
simply decreases this by N and assisting raises it by the amount of
scan work performed divided by the assist ratio (to get back to
bytes).
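In code, the balance updates amount to the following sketch
(assistRatio here is scan work per byte, per the description above;
workDone is the scan work performed by the assist):

    gp.gcAssistBytes -= int64(size) // allocating N bytes consumes N credit
    // Assisting earns credit back, converting scan work to bytes:
    gp.gcAssistBytes += int64(float64(workDone) / assistRatio)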
This will make it cheap to figure out if a G is in debt, which will
let us efficiently check if an assist is necessary *before* performing
an allocation and hence keep Gs "in the black".
This also fixes assist bursts because the assist ratio is now in terms
of *remaining* work, rather than work from the beginning of the GC
cycle. Hence, the plot of scan work versus heap size becomes
continuous: we can revise the slope, but this slope always starts from
where we are right now, rather than where we were at the beginning of
the cycle.
Change-Id: Ia821c5f07f8a433e8da7f195b52adfedd58bdf2c
Reviewed-on: https://go-review.googlesource.com/15408
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently we ensure a minimum heap distance of 1MB when computing the
assist ratio. Rather than enforcing this minimum on the heap distance,
it makes more sense to enforce that the heap goal itself is at least
1MB over the live heap size at the beginning of GC. Currently the two
approaches are semantically equivalent, but this will let us switch to
basing the assist ratio on current heap distance rather than the
initial heap distance, since we can't enforce this minimum on the
current heap distance (the GC may never finish because the goal posts
will always be 1MB away).
Change-Id: I0027b1c26a41a0152b01e5b67bdb1140d43ee903
Reviewed-on: https://go-review.googlesource.com/15604
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, gcController.scanWork is updated as lazily as possible
since it is only read at the end of the GC cycle. We're about to read
it during the GC cycle to improve the assist ratio revisions, so
modify gcDrain* to regularly flush to gcController.scanWork in much
the same way as we regularly flush to gcController.bgScanCredit.
One consequence of this is that it's difficult to keep gcw.scanWork
monotonic, so we give up on that and simply return the amount of scan
work done by gcDrainN rather than calculating it in the caller.
Change-Id: I7b50acdc39602f843eed0b5c6d2dacd7e762b81d
Reviewed-on: https://go-review.googlesource.com/15407
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently callers of gcDrain control whether it flushes scan work
credit to gcController.bgScanCredit by passing a value other than -1
for the flush threshold. Shortly we're going to make this always flush
scan work to gcController.scanWork and optionally also flush scan work
to gcController.bgScanCredit. This will be much easier if the flush
threshold is simply a constant (which it is in practice) and callers
merely control whether or not the flush includes the background
credit. Hence, replace the flush threshold argument with a flag.
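After this change a caller asks for the flush explicitly, e.g. (flag
names as in this CL series):

    gcDrain(gcw, gcDrainUntilPreempt|gcDrainFlushBgCredit)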
Change-Id: Ia27db17de8a3f1e462a5d7137d4b5dc72f99a04e
Reviewed-on: https://go-review.googlesource.com/15406
Reviewed-by: Rick Hudson <rlh@golang.org>
These functions were nearly identical. Consolidate them by adding a
flags argument. In addition to cleaning up this code, this makes
further changes that affect both functions easier.
Change-Id: I6ec5c947603bbbd3ff4040113b2fbc240e99745f
Reviewed-on: https://go-review.googlesource.com/15405
Reviewed-by: Rick Hudson <rlh@golang.org>
The comment for assistRatio claimed it to be the reciprocal of what it
actually is.
Change-Id: If7f9bb853d75d0097facff3aa6704b224d9108b8
Reviewed-on: https://go-review.googlesource.com/15402
Reviewed-by: Russ Cox <rsc@golang.org>
(*T)(unsafe.Pointer(&t)) === &t
for t of type T
Change-Id: I43c1aa436747dfa0bf4cb0d615da1647633f9536
Reviewed-on: https://go-review.googlesource.com/15656
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Improve the aeshash implementation to make it harder to engineer collisions.
1) Scramble the seed before xoring with the input string. This
makes it harder to cancel known portions of the seed (like the size)
because it mixes the per-table seed into those other parts.
2) Use table-dependent seeds for all stripes when hashing >16 byte strings.
For small strings this change uses 4 aesenc ops instead of 3, so it
is somewhat slower. The first two can run in parallel, though, so
it isn't 33% slower.
benchmark old ns/op new ns/op delta
BenchmarkHash64-12 10.2 11.2 +9.80%
BenchmarkHash16-12 5.71 6.13 +7.36%
BenchmarkHash5-12 6.64 7.01 +5.57%
BenchmarkHashBytesSpeed-12 30.3 31.9 +5.28%
BenchmarkHash65536-12 2785 2882 +3.48%
BenchmarkHash1024-12 53.6 55.4 +3.36%
BenchmarkHashStringArraySpeed-12 54.9 56.5 +2.91%
BenchmarkHashStringSpeed-12 18.7 19.2 +2.67%
BenchmarkHashInt32Speed-12 14.8 15.1 +2.03%
BenchmarkHashInt64Speed-12 14.5 14.5 +0.00%
Change-Id: I59ea124b5cb92b1c7e8584008257347f9049996c
Reviewed-on: https://go-review.googlesource.com/14124
Reviewed-by: jcd . <jcd@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It's particularly nice to get rid of the android special cases in the linker.
Change-Id: I516363af7ce8a6b2f196fe49cb8887ac787a6dad
Reviewed-on: https://go-review.googlesource.com/14197
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The current heap sampling introduces some bias that interferes
with unsampling, producing unexpected heap profiles.
The solution is to use a Poisson process to generate the
sampling points, using the formulas described at
https://en.wikipedia.org/wiki/Poisson_process
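A self-contained illustration of the idea (not the runtime's internal
code, which must avoid math/rand): the distance between samples of a
Poisson process is exponentially distributed, with the profiling rate
as its mean.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // Mean distance between samples, in bytes (a stand-in for the
    // runtime's MemProfileRate).
    const rate = 512 * 1024

    // nextSample draws the distance to the next sampling point from
    // an exponential distribution with mean rate, making the sample
    // points a Poisson process over the stream of allocated bytes.
    func nextSample(r *rand.Rand) int64 {
        return int64(rate * r.ExpFloat64())
    }

    func main() {
        r := rand.New(rand.NewSource(1))
        for i := 0; i < 3; i++ {
            fmt.Println(nextSample(r))
        }
    }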
Fixes #12620
Change-Id: If2400809ed3c41de504dd6cff06be14e476ff96c
Reviewed-on: https://go-review.googlesource.com/14590
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, amd64p32's memmove and memclr use 8 byte writes as much as
possible and 1 byte writes for the tail of the object. However, if an
object ends with a 4 byte pointer at an 8 byte aligned offset, this
may copy/zero the pointer field one byte at a time, allowing the
garbage collector to observe a partially copied pointer.
Fix this by using 4 byte writes instead of 8 byte writes.
Updates #12552.
Change-Id: I13324fd05756fb25ae57e812e836f0a975b5595c
Reviewed-on: https://go-review.googlesource.com/15370
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This fixes an issue where the runtime panics with "out of memory" or
"cannot allocate memory" even though there's ample memory by reducing
the number of memory mappings created by the memory allocator.
Commit 7e1b61c worked around issue #8832 where Linux's transparent
huge page support could dramatically increase the RSS of a Go process
by setting the MADV_NOHUGEPAGE flag on any regions of pages released
to the OS with MADV_DONTNEED. This had the side effect of also
increasing the number of VMAs (memory mappings) in a Go address space
because a separate VMA is needed for every region of the virtual
address space with different flags. Unfortunately, by default, Linux
limits the number of VMAs in an address space to 65530, and a large
heap can quickly reach this limit when the runtime starts scavenging
memory.
This commit dramatically reduces the number of VMAs. It does this
primarily by only adjusting the huge page flag at huge page
granularity. With this change, on amd64, even a pessimal heap that
alternates between MADV_NOHUGEPAGE and MADV_HUGEPAGE must grow to 128GB
before hitting the VMA limit. Because of this rounding to huge page
granularity, this change is also careful to leave large used and
unused regions huge page-enabled.
This change reduces the maximum number of VMAs during the runtime
benchmarks with GODEBUG=scavenge=1 from 692 to 49.
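Condensed sketch from sysUnused, where s is the huge page size:

    // Round v up and v+n down to huge page boundaries so only whole
    // huge pages inside the released region change flags; the huge
    // pages straddling the end points keep their current setting.
    beg := (uintptr(v) + (s - 1)) &^ (s - 1)
    end := (uintptr(v) + n) &^ (s - 1)
    if beg < end {
        madvise(unsafe.Pointer(beg), end-beg, _MADV_NOHUGEPAGE)
    }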
Fixes #12233.
Change-Id: Ic397776d042f20d53783a1cacf122e2e2db00584
Reviewed-on: https://go-review.googlesource.com/15191
Reviewed-by: Keith Randall <khr@golang.org>
In general, finishsweep_m must block until any spans that are
concurrently being swept have been swept. It accomplishes this by
looping over all spans, which, as in the previous commit, takes
~1ms/heap GB. Unfortunately, we do this during the STW sweep
termination phase, so multi-gigabyte heaps can push our STW time past
10ms.
However, there's no need to do this wait if the world is stopped
because, in effect, stopping the world already had to wait for
anything that was sweeping (and if it didn't, the wait in
finishsweep_m would deadlock). Hence, we can simply skip this loop if
the world is stopped, such as during sweep termination. In fact,
currently all calls to finishsweep_m are STW, but this hasn't always
been the case and may not be the case in the future, so we keep the
logic around.
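Condensed, the resulting structure is (a sketch):

    func finishsweep_m(stw bool) {
        // Sweep any unswept spans; in a concurrent GC there should
        // be almost none left.
        for sweepone() != ^uintptr(0) {
            sweep.npausesweep++
        }
        // Wait for in-flight sweeps only if the world is running;
        // stopping the world already waited for active sweepers.
        if !stw {
            sg := mheap_.sweepgen
            for _, s := range work.spans {
                if s.sweepgen != sg && s.state == _MSpanInUse {
                    mSpan_EnsureSwept(s)
                }
            }
        }
    }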
For 24GB heaps, this reduces max pause time by 75% relative to tip and
by 90% relative to Go 1.5. Notably, all pauses are now well under
10ms. Here are the results for the garbage benchmark:
        ------------- max pause ------------
Heap   Procs   after change   before change   1.5.1
24GB   12      3.8ms          16ms            37ms
24GB   4       3.7ms          16ms            37ms
4GB    4       3.7ms          3ms             6.9ms
In the 4GB/4P case, it seems the "before change" run got lucky: the
max went up, but the 99%ile pause time went down from 3ms to 2.04ms.
Change-Id: Ica22189559f231d408ef2815019c9dbb5f38bf31
Reviewed-on: https://go-review.googlesource.com/15071
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In order to compute the sweep ratio, the runtime needs to know how
many pages belong to spans in state _MSpanInUse. Currently it finds
this out by looping over all spans during mark termination. However,
this takes ~1ms/heap GB, so multi-gigabyte heaps can quickly push our
STW time past 10ms.
Replace the loop with an actively maintained count of in-use pages.
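Sketch of the bookkeeping (the field name follows the CL):

    h.pagesInUse += uint64(npage)    // span transitions into _MSpanInUse
    h.pagesInUse -= uint64(s.npages) // span leaves _MSpanInUse
    // Mark termination then reads h.pagesInUse directly instead of
    // looping over all spans.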
For multi-gigabyte heaps, this reduces max mark termination pause time
by 75%–90% relative to tip and by 85%–95% relative to Go 1.5.1. This
shifts the longest pause time for large heaps to the sweep termination
phase, so it only slightly decreases max pause time, though it roughly
halves mean pause time. Here are the results for the garbage
benchmark:
        ---- max mark termination pause ----
Heap   Procs   after change   before change   1.5.1
24GB   12      1.9ms          18ms            37ms
24GB   4       3.7ms          18ms            37ms
4GB    4       920µs          3.8ms           6.9ms
Fixes #11484.
Change-Id: Ia2d28bb8a1e4f1c3b8ebf79fb203f12b9bf114ac
Reviewed-on: https://go-review.googlesource.com/15070
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This reduces pause time by ~25% relative to tip and by ~50% relative
to Go 1.5.1.
Currently one of the steps of STW mark termination is to loop (in
parallel) over all spans to find objects with finalizers in order to
mark all objects reachable from these objects and to treat the
finalizer special as a root. Unfortunately, even if there are no
finalizers at all, this loop takes roughly 1 ms/heap GB/core, so
multi-gigabyte heaps can quickly push our STW time past 10ms.
Fix this by moving this scan from mark termination to concurrent scan,
where it can run in parallel with mutators. The loop itself could also
be optimized, but this cost is small compared to concurrent marking.
Making this scan concurrent introduces two complications:
1) The scan currently walks the specials list of each span without
locking it, which is safe only with the world stopped. We fix this by
speculatively checking if a span has any specials (the vast majority
won't) and then locking the specials list only if there are specials
to check; see the sketch after this list.
2) An object can have a finalizer set after concurrent scan, in which
case it won't have been marked appropriately by concurrent scan. If
the finalizer is a closure and is only reachable from the special, it
could be swept before it is run. Likewise, if the object is not marked
yet when the finalizer is set and then becomes unreachable before it
is marked, other objects reachable only from it may be swept before
the finalizer function is run. We fix this issue by making
addfinalizer ensure the same marking invariants as markroot does.
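For complication 1, the speculative check looks roughly like this in
the span-scanning root job (simplified):

    // Racy read: we may miss a special added concurrently, but then
    // addfinalizer observes that GC is active and establishes the
    // marking invariants itself (see complication 2).
    if s.specials == nil {
        continue
    }
    lock(&s.speciallock)
    for sp := s.specials; sp != nil; sp = sp.next {
        if sp.kind == _KindSpecialFinalizer {
            // Mark the object and scan the finalizer as a root.
        }
    }
    unlock(&s.speciallock)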
For multi-gigabyte heaps, this reduces max pause time by 20%–30%
relative to tip (depending on GOMAXPROCS) and by ~50% relative to Go
1.5.1 (where this loop was neither concurrent nor parallel). Here are
the results for the garbage benchmark:
        ---------------- max pause ----------------
Heap   Procs   Concurrent scan   STW parallel scan   1.5.1
24GB   12      18ms              23ms                37ms
24GB   4       18ms              25ms                37ms
4GB    4       3.8ms             4.9ms               6.9ms
In all cases, 95%ile pause time is similar to the max pause time. This
also improves mean STW time by 10%–30%.
Fixes #11485.
Change-Id: I9359d8c3d120a51d23d924b52bf853a1299b1dfd
Reviewed-on: https://go-review.googlesource.com/14982
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, the GC mode constants are untyped and functions pass them
around as ints. Clean this up by introducing a proper type for these
constants.
Change-Id: Ibc022447bdfa203644921fbb548312d7e2272e8d
Reviewed-on: https://go-review.googlesource.com/14981
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change splits signal_unix.go into signal_unix.go and
signal2_unix.go and, for clarity, removes the fake sigfwd symbol from
platforms that do not support signal forwarding.
Change-Id: I205eab5cf1930fda8a68659b35cfa9f3a0e67ca6
Reviewed-on: https://go-review.googlesource.com/12062
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reduce allocation to avoid running out of memory on the openbsd/arm
builder, until issue #12032 is resolved.
Updates #12032
Change-Id: Ibd513829ffdbd0db6cd86a0a5409934336131156
Reviewed-on: https://go-review.googlesource.com/15242
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
sysReserve will return nil on failure; correctly handle this case and
return nil to the caller. Currently, a failure results in h.arena_end
being set to psize, h.arena_used being set to zero, and fun times ensue.
On the openbsd/arm builder this has resulted in:
runtime: address space conflict: map(0x0) = 0x40946000
fatal error: runtime: address space conflict
When it should be reporting out of memory instead.
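Sketch of the corrected handling in the heap-growth path (simplified):

    p := uintptr(sysReserve(nil, p_size, &reserved))
    if p == 0 {
        return nil // reservation failed: report out of memory
    }
    // Only now is it safe to update h.arena_end and h.arena_used.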
Change-Id: Iba828d5ee48ee1946de75eba409e0cfb04f089d4
Reviewed-on: https://go-review.googlesource.com/15056
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The memory sanitizer (msan) is a nice compiler feature that can
dynamically check for memory errors in C code. It's not useful for Go
code, since Go is memory safe. But it is useful to be able to use the
memory sanitizer on C code that is linked into a Go program via cgo.
Without this change it does not work, as msan considers memory passed
from Go to C as uninitialized.
To make this work, change the runtime to call the C mmap function when
using cgo. When using msan the mmap call will be intercepted and marked
as returning initialized memory.
Work around what appears to be an msan bug by calling malloc before we
call mmap.
Change-Id: I8ab7286d7595ae84782f68a98bef6d3688b946f9
Reviewed-on: https://go-review.googlesource.com/15170
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
We've broken periodic GC a few times without noticing because there's
no test for it, partly because you have to wait two minutes to see if
it happens. This exposes control of the periodic GC timeout to runtime
tests and adds a test that cranks it down to zero and sleeps for a bit
to make sure periodic GCs happen.
Change-Id: I3ec44e967e99f4eda752f85c329eebd18b87709e
Reviewed-on: https://go-review.googlesource.com/13169
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>