We need a runtime check because the original issue is encountered
when running a cross-compiled Windows program from Linux. It's better
to give a meaningful crash message early than to segfault later.
The added test should not impose any measurable overhead on Go
programs.
For #12415.
Change-Id: Ib4a24ef560c09c0585b351d62eefd157b6b7f04c
Reviewed-on: https://go-review.googlesource.com/14207
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Write barriers in gcFlushBgCredit lead to very subtle bugs because the
function executes after the getfull barrier. I tracked down some bugs
of this form before go:nowritebarrierrec was implemented. Ensure that
they don't reappear by making gcFlushBgCredit go:nowritebarrierrec.
Change-Id: Ia5ca2dc59e6268bce8d8b4c87055bd0f6e19bed2
Reviewed-on: https://go-review.googlesource.com/17052
Reviewed-by: Russ Cox <rsc@golang.org>
sighandler may run during STW, so write barriers are not allowed.
Change-Id: Icdf46be10ea296fd87e73ab56ebb718c5d3c97ac
Reviewed-on: https://go-review.googlesource.com/17007
Reviewed-by: Russ Cox <rsc@golang.org>
Replace the cross-platform but unsafe [4]uintptr type with an
OS-specific type, sigset. Most OSes already define sigset, and this
change defines a suitable sigset for the OSes that don't (darwin,
openbsd). The OSes that don't use m.sigmask (windows, plan9, nacl)
now define sigset as the empty type, struct{}.
The gain is strongly typed access to m.sigmask, saving a dynamic
size sanity check and unsafe.Pointer casting. Also, some storage is
saved for each M, since [4]uintptr was conservative for most OSes.
The cost is that OSes that don't need m.sigmask have to define sigset.
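A rough sketch of the resulting definitions (simplified; the real ones
live in the per-OS os*.go files, and the m layout is elided):

	// darwin and openbsd, which lack a usable system definition here,
	// get a mask matching the kernel's 32-bit sigset:
	type sigset uint32

	// windows, plan9, and nacl don't use m.sigmask, so there the
	// definition is simply: type sigset struct{}

	type m struct {
		// ...
		sigmask sigset // strongly typed; replaces the unsafe [4]uintptr
	}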
Completes ./all.bash with GOOS=linux on amd64.
Completes ./make.bash with GOOS set to each of openbsd, android, plan9,
windows, darwin, solaris, netbsd, freebsd, and dragonfly, all on amd64.
With GOOS=nacl, ./make.bash failed with a seemingly unrelated error.
[Replay of CL 16942 by Elias Naur.]
Change-Id: I98f144d626033ae5318576115ed635415ac71b2c
Reviewed-on: https://go-review.googlesource.com/17033
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
These tests were failing on one of the versions of cl/9345
("runtime: simplify buffered channels").
Change-Id: I920ffcd28de428bcb7c2d5a300068644260e1017
Reviewed-on: https://go-review.googlesource.com/16416
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Fixes #11959
Fixes #12035
Skip the CallbackGC test on linux/arm. This test takes between 30 and 60
seconds to run by itself, and is run 4 times over the course of ./run.bash
(once during the runtime test, three times more later in the build).
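A sketch of the skip, assuming the usual Go test shape (the skip
message is illustrative, not the exact text added):

	package runtime_test

	import (
		"runtime"
		"testing"
	)

	func TestCgoCallbackGC(t *testing.T) {
		if runtime.GOOS == "linux" && runtime.GOARCH == "arm" {
			t.Skip("takes 30 to 60 seconds; too slow for linux/arm builders")
		}
		// ... the actual cgo callback GC stress test ...
	}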
Change-Id: I4e7d3046031cd8c08f39634bdd91da6e00054caf
Reviewed-on: https://go-review.googlesource.com/14485
Reviewed-by: Russ Cox <rsc@golang.org>
The original code mistakenly panics on VirtualAlloc failure - we want
it to go looking for a smaller memory region that VirtualAlloc can
allocate. Also return immediately if VirtualAlloc succeeds.
See rsc's comment on issue #12587 for details.
I still don't have a test for this, so I can only hope that this
Fixes #12587
Change-Id: I052068ec627fdcb466c94ae997ad112016f734b7
Reviewed-on: https://go-review.googlesource.com/17169
Reviewed-by: Russ Cox <rsc@golang.org>
These are methods that are "obviously" going to get inlined -- until you build
with -l, when they can trigger a stack split at a bad time.
Fixes #11482
Change-Id: Ia065c385978a2e7fe9f587811991d088c4d68325
Reviewed-on: https://go-review.googlesource.com/17165
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Commit bbd1a1c prevented SIGPROF from scanning stacks that were being
copied, but it didn't prevent a stack copy (specifically a stack
shrink) from happening while SIGPROF is scanning the stack. As a
result, a stack copy may adjust stack barriers while SIGPROF is in the
middle of scanning a stack, causing SIGPROF to panic when it detects
an inconsistent stack barrier.
Fix this by taking the stack barrier lock while adjusting the stack.
In addition to preventing SIGPROF from scanning this stack, this will
block until any in-progress SIGPROF is done scanning the stack.
For 1.5.2.
Fixes #13362.
Updates #12932.
Change-Id: I422219c363054410dfa56381f7b917e04690e5dd
Reviewed-on: https://go-review.googlesource.com/17191
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Calling timeBeginPeriod changes the Windows global timer resolution
from 15ms to 1ms. This used to improve Go runtime scheduler
performance, but not anymore. Thanks to @aclements, the scheduler now
behaves the same whether we call timeBeginPeriod or not.
Remove the call to timeBeginPeriod, since timer resolution is a
machine-global resource and there are downsides to using a low timer
resolution. See issue #8687 for details.
Fixes #8687
Change-Id: Ib7e41aa4a81861b62a900e0e62776c9ef19bfb73
Reviewed-on: https://go-review.googlesource.com/17164
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
This improves the documentation comment on gcMarkDone, replaces a
recursive call with a simple goto, and disables preemption before
stopping the world in accordance with the documentation comment on
stopTheWorldWithSema.
Updates #13363, but, sadly, doesn't fix it.
Change-Id: I6cb2a5836b35685bf82f7b1ce7e48a7625906656
Reviewed-on: https://go-review.googlesource.com/17149
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This improves stack barrier debugging messages in various ways:
1) Rather than printing only the remaining stack barriers (of which
there may be none, which isn't very useful), print all of the G's
stack barriers with a marker at the position the stack itself has
unwound to and a marker at the problematic stack barrier (where
applicable).
2) Rather than crashing if we encounter a stack barrier when there are
no more stkbar entries, print the same debug message we would if we
had encountered a stack barrier at an unexpected location.
Hopefully this will help with debugging #12528.
Change-Id: I2e6fe6a778e0d36dd8ef30afd4c33d5d94731262
Reviewed-on: https://go-review.googlesource.com/17147
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The stack barrier locking functions use a simple cas lock because they
need to support trylock, but currently don't increment g.m.locks. This
is okay right now because they always run on the system stack or the
signal stack and are hence non-preemptible, but this could lead to
difficult-to-reproduce deadlocks if these conditions change in the
future.
Make these functions more robust by incrementing g.m.locks and making
them nosplit to enforce non-preemptibility.
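A sketch of the resulting shape (close to the runtime's stack barrier
locking code, but simplified; atomic is runtime/internal/atomic):

	//go:nosplit
	func gcTryLockStackBarriers(gp *g) bool {
		mp := acquirem() // increments g.m.locks, disabling preemption
		if !atomic.Cas(&gp.stackLock, 0, 1) {
			releasem(mp) // lock not acquired; re-enable preemption
			return false
		}
		return true
	}

	//go:nosplit
	func gcUnlockStackBarriers(gp *g) {
		atomic.Store(&gp.stackLock, 0)
		releasem(getg().m)
	}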
Change-Id: I73d60a35bd2ad2d81c73aeb20dbd37665730eb1b
Reviewed-on: https://go-review.googlesource.com/17058
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ingo Oeser <nightlyone@googlemail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
This test depends on GODEBUG=gcstackbarrierall, which doesn't work on
ppc64.
Updates #13334.
Change-Id: Ie554117b783c4e999387f97dd660484488499d85
Reviewed-on: https://go-review.googlesource.com/17120
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
During a crash showing goroutine stacks of all threads
(with GOTRACEBACK=crash), it can be that f == nil.
Only happens on Solaris; not sure why.
Change-Id: Iee2c394a0cf19fa0a24f6befbc70776b9e42d25a
Reviewed-on: https://go-review.googlesource.com/17110
Reviewed-by: Austin Clements <austin@google.com>
The nosplit stack is now much bigger, so we can afford to allocate
libcall on the stack.
Also fix asmsysvicall6 to not update errno if g == nil.
These two fixes make TestCgoCallbackGC pass on solaris; it used to get
stuck in a loop.
Change-Id: Id1b13be992dae9f059aa3d47ffffd37785300933
Reviewed-on: https://go-review.googlesource.com/17076
Run-TryBot: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Solaris needs to make system calls without a g,
and Solaris uses asmcgocall to make system calls.
I know, I know.
I hope this makes CL 16915, fixing #12277, work on Solaris.
Change-Id: If988dfd37f418b302da9c7096f598e5113ecea87
Reviewed-on: https://go-review.googlesource.com/17072
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Run-TryBot: Russ Cox <rsc@golang.org>
sysmon runs without a P. This means it can't interact with the garbage
collector, so write barriers are not allowed in anything that sysmon does.
Fixes #10600.
Change-Id: I9de1283900dadee4f72e2ebfc8787123e382ae88
Reviewed-on: https://go-review.googlesource.com/17006
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
allocm is a very unusual function: it is specifically designed to
allocate in contexts where m.p is nil by temporarily taking over a P.
Since allocm is used in many contexts where it would make sense to use
nowritebarrierrec, this commit teaches the nowritebarrierrec analysis
to stop at allocm.
Updates #10600.
Change-Id: I8499629461d4fe25712d861720dfe438df7ada9b
Reviewed-on: https://go-review.googlesource.com/17005
Reviewed-by: Russ Cox <rsc@golang.org>
gentraceback is used in many contexts where write barriers are
disallowed. This currently works because the only write barrier is in
assigning frame.argmap in setArgInfo and in practice frame is always
on the stack, so this write barrier is a no-op.
However, we can easily eliminate this write barrier, which will let us
statically disallow write barriers (using go:nowritebarrierrec
annotations) in many more situations. As a bonus, this makes the code
a little more idiomatic.
Updates #10600.
Change-Id: I45ba5cece83697ff79f8537ee6e43eadf1c18c6d
Reviewed-on: https://go-review.googlesource.com/17003
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The assumption is that there are no nested function calls in complex expressions.
For the most part that assumption is true. It wasn't for these calls inserted
during walk. Fix that.
I looked through all the calls to mkcall in walk and these were the only cases
that emitted calls that could be part of larger expressions (unlike, say,
delete) and that were not already handled.
Fixes #12225.
Change-Id: Iad380683fe2e054d480e7ae4e8faf1078cdd744c
Reviewed-on: https://go-review.googlesource.com/17034
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This adds a test that runs CPU profiling with a high load of stack
barriers and stack barrier insertion/removal operations and checks
that both 1) the runtime doesn't crash and 2) stackBarrier itself
never appears in a profile. Prior to the fix for gentraceback starting
in the middle of stackBarrier, condition 2 often failed.
Change-Id: Ic28860448859029779844c4bf3bb28ca84611e2c
Reviewed-on: https://go-review.googlesource.com/17037
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
A sigprof during stack barrier insertion or removal can crash if it
detects an inconsistency between the stkbar array and the stack
itself. Currently we protect against this when scanning another G's
stack using stackLock, but we don't protect against it when unwinding
stack barriers for a recover or a memmove to the stack.
This commit cleans up and improves the stack locking code. It
abstracts out the lock and unlock operations. It uses the lock
consistently everywhere we perform stack operations, and pushes the
lock/unlock down closer to where the stack barrier operations happen
to make it more obvious what it's protecting. Finally, it modifies
sigprof so that instead of spinning until it acquires the lock, it
simply doesn't perform a traceback if it can't acquire it. This is
necessary to prevent self-deadlock.
Updates #11863, which introduced stackLock to fix some of these
issues, but didn't go far enough.
Updates #12528.
Change-Id: I9d1fa88ae3744d31ba91500c96c6988ce1a3a349
Reviewed-on: https://go-review.googlesource.com/17036
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, if a profiling signal happens in the middle of
stackBarrier, gentraceback may see inconsistencies between stkbar and
the barriers on the stack and it will certainly get the wrong return
PC for stackBarrier. In most cases, the return PC won't be a PC at all
and this will immediately abort the traceback (which is considered
okay for a sigprof), but if it happens to be a valid PC this may send
gentraceback down a rabbit hole.
Fix this by detecting when gentraceback starts in stackBarrier and
simulating the completion of the barrier to get the correct initial
frame.
Change-Id: Ib11f705ac9194925f63fe5dfbfc84013a38333e6
Reviewed-on: https://go-review.googlesource.com/17035
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Cgo-created threads transition between having associated Go g's and m's and not.
A signal arriving during the transition could think it was safe and appropriate to
run Go signal handlers when it was in fact not.
Avoid the race by masking all signals during the transition.
Fixes #12277.
Change-Id: Ie9711bc1d098391d58362492197a7e0f5b497d14
Reviewed-on: https://go-review.googlesource.com/16915
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Mostly by avoiding CX entirely, sometimes by reloading it.
I also vetted the assembly in other packages; it's all fine.
Change-Id: I50059669aaaa04efa303cf22ac228f9d14d83db0
Reviewed-on: https://go-review.googlesource.com/16386
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Replace the cross-platform but unsafe [4]uintptr type with an
OS-specific type, sigset. Most OSes already define sigset, and this
change defines a suitable sigset for the OSes that don't (darwin,
openbsd). The OSes that don't use m.sigmask (windows, plan9, nacl)
now define sigset as the empty type, struct{}.
The gain is strongly typed access to m.sigmask, saving a dynamic
size sanity check and unsafe.Pointer casting. Also, some storage is
saved for each M, since [4]uintptr was conservative for most OSes.
The cost is that OSes that don't need m.sigmask have to define sigset.
Completes ./all.bash with GOOS=linux on amd64.
Completes ./make.bash with GOOS set to each of openbsd, android, plan9,
windows, darwin, solaris, netbsd, freebsd, and dragonfly, all on amd64.
With GOOS=nacl, ./make.bash failed with a seemingly unrelated error.
R=go1.7
Change-Id: Ib460379f063eb83d393e1c5efe7333a643c1595e
Reviewed-on: https://go-review.googlesource.com/16942
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
golang.org/cl/16796 broke android/386 by assuming behaviour specific to glibc's
dynamic linker. Copy bionic by using int $0x80 to invoke syscalls on
android/386 as the old alternative (CALL *runtime_vdso(SB)) cannot be compiled
without text relocations, which we want to get rid of on android.
Also remove "CALL *runtime_vdso(SB)" variant from the syscall package.
Change-Id: I6c01849f8dcbd073d000ddc8f13948a836b8b261
Reviewed-on: https://go-review.googlesource.com/16996
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Not all tests are passing yet, but a good chunk are.
Change-Id: I5daebaeabf3aecb380674ece8830a86751a8d139
Reviewed-on: https://go-review.googlesource.com/16458
Reviewed-by: Rahul Chaudhry <rahulchaudhry@google.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Currently, if an allocation is large enough that arena_end + size
overflows (which is not hard to do on 32-bit), we go ahead and call
sysReserve with the impossible base and length and depend on this to
either directly fail because the kernel can't possibly fulfill the
requested mapping (causing mheap.sysAlloc to return nil) or to succeed
with a mapping at some other address which will then be rejected as
outside the arena.
In order to make this less subtle, less dependent on the kernel
getting all of this right, and to eliminate the hopeless system call,
add an explicit overflow check.
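A standalone illustration of the wraparound being guarded against
(uint32 stands in for a 32-bit uintptr; names follow the text):

	package main

	import "fmt"

	// reserveOK reports whether arenaEnd+size is representable at all;
	// if the sum wraps, the requested mapping is impossible.
	func reserveOK(arenaEnd, size uint32) bool {
		newEnd := arenaEnd + size // wraps mod 2^32, like a 32-bit uintptr
		return newEnd >= arenaEnd
	}

	func main() {
		fmt.Println(reserveOK(0x10000000, 0x1000000)) // true: fits
		fmt.Println(reserveOK(0xffff0000, 0x20000))   // false: overflows
	}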
Updates #13143. The real issue has been fixed by 0de59c2, but this is
a belt-and-suspenders improvement on top of that. It was uncovered by
my symbolic modeling of that bug.
Change-Id: I85fa868a33286fdcc23cdd7cdf86b19abf1cb2d1
Reviewed-on: https://go-review.googlesource.com/16961
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
mcache.tiny is in non-GC'd memory, but points to heap memory. As a
result, there may or may not be write barriers when writing to
mcache.tiny. Make it clearer that funny things are going on by making
mcache.tiny a uintptr instead of an unsafe.Pointer.
Change-Id: I732a5b7ea17162f196a9155154bbaff8d4d00eac
Reviewed-on: https://go-review.googlesource.com/16963
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The tiny alloc cache is maintained in a pointer from non-GC'd memory
(mcache) to heap memory and hence must be handled carefully.
Currently we clear the tiny alloc cache during sweep termination and,
if it is assigned to a non-nil value during concurrent marking, we
depend on a write barrier to keep the new value alive. However, while
the compiler currently always generates this write barrier, we're
treading on thin ice because write barriers may not happen for writes
to non-heap memory (e.g., typedmemmove). Without this lucky write
barrier, the GC may free a current tiny block while it's still
reachable by the tiny allocator, leading to later memory corruption.
Change this code so that, rather than depending on the write barrier,
we simply clear the tiny cache during mark termination when we're
clearing all of the other mcaches. If the current tiny block is
reachable from regular pointers, it will be retained; if it isn't
reachable from regular pointers, it may be freed, but that's okay
because there won't be any pointers in non-GC'd memory to it.
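A sketch of the clearing, assuming it joins the existing per-P mcache
flush in mark termination (and the uintptr representation of
mcache.tiny from the commit above):

	for _, pp := range &allp {
		if pp == nil || pp.mcache == nil {
			continue
		}
		// Drop the reference to the current tiny block. If the block
		// is reachable from regular pointers it survives; if not, it
		// may be freed, which is fine because no pointer in non-GC'd
		// memory refers to it any longer.
		pp.mcache.tiny = 0
		pp.mcache.tinyoffset = 0
	}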
Change-Id: I8230980d8612c35c2997b9705641a1f9f865f879
Reviewed-on: https://go-review.googlesource.com/16962
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
If you set GODEBUG=cgocheck=2 the runtime package will use the write
barrier to detect cases where a Go program writes a Go pointer into
non-Go memory. In conjunction with the existing cgo checks, and the
not-yet-implemented cgo check for exported functions, this should
reliably detect all cases (that do not import the unsafe package) in
which a Go pointer is incorrectly shared with C code. This check is
optional because it turns on the write barrier at all times, which is
known to be expensive.
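A hypothetical program that this mode catches (the C shim is invented
for the example; run with GODEBUG=cgocheck=2):

	package main

	/*
	#include <stdlib.h>
	void **slot;
	void allocSlot() { slot = malloc(sizeof(void*)); }
	*/
	import "C"
	import "unsafe"

	func main() {
		C.allocSlot()
		x := new(int)
		// A Go pointer stored into C memory: the cgocheck=2 write
		// barrier should detect this write and throw.
		*(*unsafe.Pointer)(unsafe.Pointer(C.slot)) = unsafe.Pointer(x)
	}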
Update #12416.
Change-Id: I549d8b2956daa76eac853928e9280e615d6365f4
Reviewed-on: https://go-review.googlesource.com/16899
Reviewed-by: Russ Cox <rsc@golang.org>
In mheap.sysAlloc, if an allocation at arena_used would exceed
arena_end (but wouldn't yet push us past arena_start+_MaxArena32), it
tries to extend the arena reservation by another 256 MB. It extends the
arena by calling sysReserve, which, on 32-bit, calls mmap without
MAP_FIXED, which means the address is just a hint and the kernel can
put the mapping wherever it wants. In particular, mmap may choose an
address below arena_start (the kernel also chose arena_start, so there
could be lots of space below it). Currently, we don't detect this case
and, if it happens, mheap.sysAlloc will corrupt arena_end and
arena_used and then return the low pointer to mheap.grow, which will
crash when it attempts to index into h_spans with an underflowed index.
Fix this by checking not only that p+p_size isn't too high, but also
that p isn't too low.
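A sketch of the strengthened check in mheap.sysAlloc (simplified;
names follow the text):

	if p < h.arena_start || p+p_size-h.arena_start >= _MaxArena32 {
		// The kernel ignored our hint: the mapping is outside
		// (possibly below) the usable arena. Reject it rather than
		// corrupting arena_end and arena_used.
		return nil // the stray reservation is freed; see the next commit
	}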
Fixes #13143.
Change-Id: I8d0f42bd1484460282a83c6f1a6f8f0df7fb2048
Reviewed-on: https://go-review.googlesource.com/16927
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
If the area returned by sysReserve in mheap.sysAlloc is outside the
usable arena, we sysFree it. We pass a fake stat pointer to sysFree
because we haven't added the allocation to any stat at that point.
However, we pass a 0 stat, so sysFree panics when it decrements the
stat because the fake stat underflows.
Fix this by setting the fake stat to the allocation size.
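A sketch of the fix (a fragment of mheap.sysAlloc; names follow the
text):

	// The fake stat must start at the allocation size: sysFree will
	// subtract p_size from it, and a zero stat would underflow.
	stat := uint64(p_size)
	sysFree(unsafe.Pointer(p), p_size, &stat)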
Updates #13143 (this is a prerequisite to fixing that bug).
Change-Id: I61a6c9be19ac1c95863cf6a8435e19790c8bfc9a
Reviewed-on: https://go-review.googlesource.com/16926
Reviewed-by: Ian Lance Taylor <iant@golang.org>
golang.org/cl/16346 changed the runtime on linux/386 to invoke the vsyscall
helper via a PIC sequence (CALL 0x10(GS)) when dynamically linking. But it's
actually quite easy to make that code sequence work all the time, so do that,
and remove the ugly machinery that passed the buildmode from the go tool to the
assembly.
This means enlarging m.tls so that we can safely access 0x10(GS) (GS is set to
&m.tls + 4, so 0x10(GS) accesses m_tls[5]).
Change-Id: I1345c34029b149cb5f25320bf19a3cdd73a056fa
Reviewed-on: https://go-review.googlesource.com/16796
Reviewed-by: Ian Lance Taylor <iant@golang.org>
A nosplit comment was added to reflect.typelinks accidentally in
https://golang.org/cl/98510044. There is only one caller of
reflect.typelinks, reflect.typesByString, and that function is not
nosplit. There is no reason for reflect.typelinks to be nosplit.
Change-Id: I0fd3cc66fafcd92643e38e53fa586d6b2f868a0a
Reviewed-on: https://go-review.googlesource.com/16932
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
These now live in runtime/internal/sys.
Change-Id: I270597142516512bfc1395419e51d8083ba1663f
Reviewed-on: https://go-review.googlesource.com/16891
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Allows removing fields that aren't relevant to a particular OS or
changing their types to match the underlying OS system calls they'll
be used for.
Change-Id: I5cea89ee77b4e7b985bff41337e561887c3272ff
Reviewed-on: https://go-review.googlesource.com/16176
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
We're allocating TLS storage for m0 anyway, so might as well use it.
Change-Id: I7dc20bbea5320c8ab8a367f18a9540706751e771
Reviewed-on: https://go-review.googlesource.com/16890
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This requires changing the tls access code to match the patterns documented in
the ABI documentation or the system linker will "optimize" it into ridiculousness.
With this change, -buildmode=pie works, although as it is tested in testshared,
the tests are not run yet.
Change-Id: I1efa6687af0a5b8db3385b10f6542a49056b2eb3
Reviewed-on: https://go-review.googlesource.com/15971
Reviewed-by: Russ Cox <rsc@golang.org>
The PowerPC ISA does not have a PC-relative load instruction, which poses
obvious challenges when generating position-independent code. The way the ELFv2
ABI addresses this is to specify that r2 points to a per "module" (shared
library or executable) TOC pointer. Maintaining this pointer requires
cooperation between codegen and the system linker:
* Non-leaf functions leave space on the stack at r1+24 to save the TOC pointer.
* A call to a function that *might* have to go via a PLT stub must be followed
by a nop instruction that the system linker can replace with "ld r2, 24(r1)"
to restore the TOC pointer (only when dynamically linking Go code).
* When calling a function via a function pointer, the address of the function
must be in r12, and the first couple of instructions (the "global entry
point") of the called function use this to derive the address of the TOC
for the module it is in.
* When calling a function that is implemented in the same module, the system
linker adjusts the call to skip over the instructions mentioned above (the
"local entry point"), assuming that r2 is already correctly set.
So this changeset adds the global entry point instructions, sets the metadata so
the system linker knows where the local entry point is, inserts code to save the
TOC pointer at 24(r1), adds a nop after any call not known to be local and copes
with the odd non-local code transfer in the runtime (e.g. the stuff around
jmpdefer). It does not actually compile PIC yet.
Change-Id: I7522e22bdfd2f891745a900c60254fe9e372c854
Reviewed-on: https://go-review.googlesource.com/15967
Reviewed-by: Russ Cox <rsc@golang.org>
darwin/386, freebsd/386, and linux/386 use a setldt system call to
setup each M's thread-local storage area, and they need access to the
M's id for this. The current code copies m.id into m.tls[0] (and this
logic has been cargo culted to OSes like NetBSD and OpenBSD, which
don't even need m.id to configure TLS), and then the 386 assembly
loads m.tls[0]... but since the assembly code already has a pointer to
the M, it might as well just load m.id directly.
Change-Id: I1a7278f1ec8ebda8d1de3aa3a61993070e3a8cdf
Reviewed-on: https://go-review.googlesource.com/16881
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The larger stack frames cause the nosplit stack to overflow, so the next change
increases the stackguard.
Change-Id: Ib2b4f24f0649eb1d13e3a58d265f13d1b6cc9bf9
Reviewed-on: https://go-review.googlesource.com/15964
Reviewed-by: Russ Cox <rsc@golang.org>
Larger stack frames mean nosplit functions use more stack and so the limit
needs to increase.
The change to test/nosplit.go is a bit ugly but I can't really think of a
way to make it nicer.
Change-Id: I2616b58015f0b62abbd62951575fcd0d2d8643c2
Reviewed-on: https://go-review.googlesource.com/16504
Reviewed-by: Russ Cox <rsc@golang.org>
Apparently its last use was removed in CL 8899.
Change-Id: I4f3a789b3cc4c249582e81463af62b576a281e40
Reviewed-on: https://go-review.googlesource.com/16880
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The new revision is 389d49d4943780efbfcd2a434f4462b6d0f23c44 (Nov 13, 2015).
The runtimes are built using the new x/build/cmd/racebuild utility.
This update fixes a bug in the race detection algorithm that could
lead to occasional false negatives (#10589), but generally it just
brings in an up-to-date runtime.
Updates #8653
Fixes #10589
Change-Id: I7ac9614d014ee89c2302ce5e096d326ef293f367
Reviewed-on: https://go-review.googlesource.com/16827
Reviewed-by: Keith Randall <khr@golang.org>
As per mdempsky's comment on golang.org/cl/14204, textflag.h is
copied to the includes dir by cmd/dist, and the copy in
runtime/internal/atomic is not actually being used.
Updates #11647
Change-Id: Ie95c08903a9df54cea4c70ee9d5291176f7b5609
Reviewed-on: https://go-review.googlesource.com/16871
Run-TryBot: Michael Matloob <matloob@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Somehow these were left out of the original CL.
Updates #11647
Change-Id: I058a30eaa25fbb72d60e7fb6bc9ff0a3b54fdb2a
Reviewed-on: https://go-review.googlesource.com/16870
Reviewed-by: Minux Ma <minux@golang.org>
I made a copy of the per-arch _CacheLineSize definitions when checking in
runtime/internal/atomic. Now that runtime/internal/sys is checked in,
we can use the definition there.
Change-Id: I7242f6b633e4164f033b67ff471416b9d71c64d2
Reviewed-on: https://go-review.googlesource.com/16847
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
sigprof tracebacks the stack across systemstack switches to make
profile tracebacks more complete. However, it does this even if the
user stack is currently being copied, which means it may be in an
inconsistent state that will cause the traceback to panic.
One specific way this can happen is during stack shrinking. Some
goroutine blocks for STW, then enters gchelper, which then assists
with root marking. If that root marking happens to pick the original
goroutine and its stack needs to be shrunk, it will begin to copy that
stack. During this copy, the stack is generally inconsistent and, in
particular, the actual locations of the stack barriers and their
recorded locations are temporarily out of sync. If a SIGPROF happens
during this inconsistency, it will walk the stack all the way back to
the blocked goroutine and panic when it fails to unwind the stack
barriers.
Fix this by disallowing jumping to the user stack during SIGPROF if
that user stack is in the process of being copied.
Fixes #12932.
Change-Id: I9ef694c2c01e3653e292ce22612418dd3daff1b4
Reviewed-on: https://go-review.googlesource.com/16819
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
runtime/internal/sys will hold system-, architecture- and config-
specific constants.
Updates #11647
Change-Id: I6db29c312556087a42e8d2bdd9af40d157c56b54
Reviewed-on: https://go-review.googlesource.com/16817
Reviewed-by: Russ Cox <rsc@golang.org>
cgo_ppc64x.go:7: +build comment must appear before package clause and be followed by a blank line
Change-Id: Ib6dedddae70cc75dc3f137eb37ea338a64f8b595
Reviewed-on: https://go-review.googlesource.com/16835
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change-Id: I419f3b8bf1bddffd4a775b0cd7b98f0239fe19cb
Reviewed-on: https://go-review.googlesource.com/14458
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Linux/mips64 uses a different signal table. To avoid copying code, the
signal table is factored out of signal_linux.go into
sigtab_linux_generic.go, and a mips64-specific version is added.
Change-Id: I842d7a7467c330bf772855fde01aecc77a42316b
Reviewed-on: https://go-review.googlesource.com/14993
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Linux/mips64 has a different sigset type and some different constants.
os2_linux.go is renamed to os2_linux_generic.go and is not used on mips64.
The corresponding file os2_linux_mips64x.go is added.
Change-Id: Ief83845a2779f7fe048d236d3c7da52b627ab533
Reviewed-on: https://go-review.googlesource.com/14992
Reviewed-by: Minux Ma <minux@golang.org>
Linux/mips64 uses a different type of sigset. To deal with it, the related
functions in os1_linux.go are refactored into os1_linux_generic.go
(used for non-mips64 architectures) and os1_linux_mips64x.go (only used
on mips64{,le}), to avoid copying code.
Change-Id: I5cadfccd86bfc4b30bf97e12607c3c614903ea4c
Reviewed-on: https://go-review.googlesource.com/14991
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change-Id: I381c03d957a0dccae5f655f02e92760e5c0e9629
Reviewed-on: https://go-review.googlesource.com/14929
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Files for unsupported architectures are deleted, as keeping them would
require changing cmd/dist to recognize their names as build tags (which
probably needs a separate CL).
Change-Id: Ifd164b014867d39b4924d1b859fb84317dce4ab0
Reviewed-on: https://go-review.googlesource.com/14928
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Applies to types fixAlloc, mCache, mCentral, mHeap, mSpan, and
mSpanList.
Two special cases:
1. mHeap_Scavenge() previously didn't take an *mheap parameter, so it
was specially handled in this CL.
2. mHeap_Free() would have collided with mheap's "free" field, so it's
been renamed to (*mheap).freeSpan to parallel its underlying
(*mheap).freeSpanLocked method.
Change-Id: I325938554cca432c166fe9d9d689af2bbd68de4b
Reviewed-on: https://go-review.googlesource.com/16221
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It was disabled because of the lack of external linking.
Change-Id: Iccb4a4ef8c57d048d53deabe4e0f4e6b9dccce33
Reviewed-on: https://go-review.googlesource.com/16797
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The GC now handles the root marking jobs as part of general marking,
so work.markfor is no longer used.
Change-Id: I6c3b23fed27e4e7ea6430d6ca7ba25ae4d04ed14
Reviewed-on: https://go-review.googlesource.com/16811
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When we're jumping time forward, it means everyone is asleep, so there
should always be an M available. Furthermore, this causes both
allocation and write barriers in contexts that may be running without
a P (such as in sysmon).
Hence, replace this allocation with a throw.
Updates #10600.
Change-Id: I2cee70d5db828d0044082878995949edb25dda5f
Reviewed-on: https://go-review.googlesource.com/16815
Reviewed-by: Russ Cox <rsc@golang.org>
Currently traceBuf keeps track of where it is in the trace buffer by
also maintaining a slice that points in to this buffer with an initial
length of 0 and a cap of the length of the array. All writes to this
buffer are done by appending to the slice (as long as the bounds
checks are right, it will never overflow and the append won't allocate
a new slice).
Each of these appends generates a write barrier. As long as we never
overflow the buffer, this write barrier won't fire, but this wreaks
havoc with eliminating write barriers from the tracing code. If we
were to overflow the buffer, this would both allocate and invoke a
write barrier, both things that are dicey at best to do in many of the
contexts tracing happens. It also wastes space in the traceBuf and
leads to more complex code and more complex generated code.
Replace this slice trick with keeping track of a simple array
position.
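A sketch of the before/after (sizes and field names assumed):

	// Before: the position is an aliasing slice, so every append
	// stores a slice header whose pointer field needs a write barrier.
	type traceBufOld struct {
		arr [64 << 10]byte
		buf []byte // buf = arr[:0]; appends never reallocate, but
		// assigning append's result back is still a pointer write
	}

	// After: a bare position; no pointer is ever written.
	type traceBufNew struct {
		arr [64 << 10]byte
		pos int
	}

	func (b *traceBufNew) putByte(v byte) {
		b.arr[b.pos] = v
		b.pos++
	}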
Updates #10600.
Change-Id: I0a63eecec1992e195449f414ed47653f66318d0e
Reviewed-on: https://go-review.googlesource.com/16814
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The tracing code is currently called from contexts such as sysmon and
the scheduler where write barriers are not allowed. Unfortunately,
while the common paths through the tracing code do not have write
barriers, many of the less common paths dealing with buffer overflow
and recycling do.
This change replaces all *traceBufs with traceBufPtrs. In the style of
guintptr, etc., the GC does not trace traceBufPtrs and write barriers
do not apply when these pointers are written. Since traceBufs are
allocated from non-GC'd memory and manually managed, this is always
safe.
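The pattern, in the style of guintptr (a simplified sketch):

	// traceBufPtr is a *traceBuf stored as a uintptr. The GC does not
	// trace it and writes to it need no write barrier, which is safe
	// only because traceBufs live in manually managed, non-GC'd memory.
	type traceBufPtr uintptr

	func (tp traceBufPtr) ptr() *traceBuf {
		return (*traceBuf)(unsafe.Pointer(tp))
	}

	func (tp *traceBufPtr) set(b *traceBuf) {
		*tp = traceBufPtr(unsafe.Pointer(b))
	}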
Updates #10600.
Change-Id: I52b992d36d1b634ebd855c8cde27947ec14f59ba
Reviewed-on: https://go-review.googlesource.com/16812
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Commit 7407d8e was rebased over the switch to runtime/internal/atomic
and introduced a call to xadd64, which no longer exists. Fix that
call.
Change-Id: I99c93469794c16504ae4a8ffe3066ac382c66a3a
Reviewed-on: https://go-review.googlesource.com/16816
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, sweeping is performed before allocating a span by charging
for the entire size of the span requested, rather than the number of
bytes actually available for allocation from the returned span. That
is, if the returned span is 8K, but already has 6K in use, the mutator
is charged for 8K of heap allocation even though it can only allocate
2K more from the span. As a result, proportional sweep is
over-aggressive and tends to finish much earlier than it needs to.
This effect is more amplified by fragmented heaps.
Fix this by reimbursing the mutator for the used space in a span once
it has allocated that span. We still have to charge up-front for the
worst-case because we don't know which span the mutator will get, but
at least we can correct the over-charge once it has a span, which will
go toward later span allocations.
This has negligible effect on the throughput of the go1 benchmarks and
the garbage benchmark.
Fixes #12040.
Change-Id: I0e23e7a4ccf126cca000fed5067b20017028dd6b
Reviewed-on: https://go-review.googlesource.com/16515
Reviewed-by: Rick Hudson <rlh@golang.org>
The runtime is not instrumented, but the calls to msanread in the
runtime can sometimes refer to the system stack. An example is the call
to copy in stkbucket in mprof.go. Depending on what C code has done,
the system stack may appear uninitialized to msan.
Change-Id: Ic21705b9ac504ae5cf7601a59189302f072e7db1
Reviewed-on: https://go-review.googlesource.com/16660
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This is a fix for the -msan option when using cgo callbacks. A cgo
callback works by writing out C code that puts a struct on the stack and
passes the address of that struct into Go. The result parameters are
fields of the struct. The Go code will write to the result parameters,
but the Go code thinks it is just writing into the Go stack, and
therefore won't call msanwrite. This CL adds a call to msanwrite in the
cgo callback code so that the C code knows that results were written.
Change-Id: I80438dbd4561502bdee97fad3f02893a06880ee1
Reviewed-on: https://go-review.googlesource.com/16611
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This changes "mark worker (idle)" to "GC worker (idle)" so it's more
clear to users that these goroutines are GC-related. It changes "GC
assist" to "GC assist wait" to make it clear that the assist is
blocked.
Change-Id: Iafbc0903c84f9250ff6bee14baac6fcd4ed5ef76
Reviewed-on: https://go-review.googlesource.com/16511
Reviewed-by: Rick Hudson <rlh@golang.org>
We couldn't do this before this point because it must be done before
the next GC cycle starts. Hence, if it delayed the start of the next
cycle, that would widen the window between reaching the heap trigger
of the next cycle and starting the next GC cycle, during which the
mutator could over-allocate. With the decentralized GC, any mutators
that reach the heap trigger will block on the GC starting, so it's
safe to widen the time between starting the world and being able to
start the next GC cycle.
Fixes #11465.
Change-Id: Ic7ea7e9eba5b66fc050299f843a9c9001ad814aa
Reviewed-on: https://go-review.googlesource.com/16394
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This implements part of the proposal in issue 12416 by adding dynamic
checks for passing pointers from Go to C. This code is intended to be
on at all times. It does not try to catch every case. It does not
implement checks on calling Go functions from C.
The new cgo checks may be disabled using GODEBUG=cgocheck=0.
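A hypothetical example of a call the dynamic checks reject (the C
function and wrapper type are invented for illustration):

	package main

	/*
	void use(void *p) {}
	*/
	import "C"
	import "unsafe"

	type wrapper struct{ p *int }

	func main() {
		w := &wrapper{p: new(int)}
		// w points to Go memory that itself contains a Go pointer;
		// with the checks on (the default) this call should panic.
		// GODEBUG=cgocheck=0 disables the check.
		C.use(unsafe.Pointer(w))
	}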
Update #12416.
Change-Id: I48de130e7e2e83fb99a1e176b2c856be38a4d3c8
Reviewed-on: https://go-review.googlesource.com/16003
Reviewed-by: Russ Cox <rsc@golang.org>
This change breaks out most of the atomics functions in the runtime
into package runtime/internal/atomic. It adds some basic support
in the toolchain for runtime packages, and also modifies linux/arm
atomics to remove the dependency on the runtime's mutex. The mutexes
have been replaced with spinlocks.
all trybots are happy!
In addition to the trybots, I've tested on the darwin/arm64 builder,
on the darwin/arm builder, and on a ppc64le machine.
Change-Id: I6698c8e3cf3834f55ce5824059f44d00dc8e3c2f
Reviewed-on: https://go-review.googlesource.com/14204
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This change is the same as CL 9345, which was reverted,
except for a small bug fix.
The only change is to the body of sendDirect and its callsite.
Also added a test.
The problem was during a channel send operation. The target
of the send was a sleeping goroutine waiting to receive. We
basically do:
1) Read the destination pointer out of the sudog structure
2) Copy the value we're sending to that destination pointer
Unfortunately, the previous change had a goroutine suspend
point between 1 & 2 (the call to sendDirect). At that point
the destination goroutine's stack could be copied (shrunk).
The pointer we read in step 1 is no longer valid for step 2.
Fixed by not allowing any suspension points between 1 & 2.
I suspect the old code worked correctly basically by accident.
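A sketch of the fixed sendDirect (simplified from chan.go; barrier
bookkeeping elided):

	func sendDirect(t *_type, sg *sudog, src unsafe.Pointer) {
		// 1) read the destination out of the sudog, then 2) copy into
		// it, with no goroutine suspension point in between, so the
		// receiver's stack cannot be shrunk while we hold the raw
		// pointer.
		dst := sg.elem
		memmove(dst, src, t.size)
	}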
Fixes #13169
The original 9345:
This change removes the retry mechanism we use for buffered channels.
Instead, any sender waking up a receiver or vice versa completes the
full protocol with its counterpart. This means the counterpart does
not need to relock the channel when it wakes up. (Currently
buffered channels need to relock on wakeup.)
For sends on a channel with waiting receivers, this change replaces
two copies (sender->queue, queue->receiver) with one (sender->receiver).
For receives on channels with a waiting sender, two copies are still required.
This change unifies to a large degree the algorithm for buffered
and unbuffered channels, simplifying the overall implementation.
Fixes #11506
Change-Id: I57dfa3fc219cffa4d48301ee15fe5479299efa09
Reviewed-on: https://go-review.googlesource.com/16740
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Other systems use pthread_sigmask. It was a mistake to use sigprocmask
here.
Change-Id: Ie045aa3f09cf035fcf807b7543b96fa5b847958a
Reviewed-on: https://go-review.googlesource.com/16720
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Make sure that we're moving or zeroing pointers atomically.
Anything that is a multiple of pointer size and at least
pointer aligned might have pointers in it. All the code looks
ok except for the 1-pointer-sized moves.
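An illustrative fragment of the distinction (not the runtime's actual
memmove, which is assembly):

	// Unsafe if the word may hold a pointer: the GC could observe a
	// half-written value.
	//
	//	for i := uintptr(0); i < ptrSize; i++ {
	//		*(*byte)(unsafe.Pointer(dst + i)) = *(*byte)(unsafe.Pointer(src + i))
	//	}

	// Safe: one aligned, pointer-sized store is atomic w.r.t. the GC.
	func moveWord(dst, src unsafe.Pointer) {
		*(*uintptr)(dst) = *(*uintptr)(src)
	}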
Fixes #13160
Updates #12552
Change-Id: Ib97d9b918fa9f4cc5c56c67ed90255b7fdfb7b45
Reviewed-on: https://go-review.googlesource.com/16668
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Revert for now until #13169 is understood.
This reverts commit 8e496f1d69.
Change-Id: Ib3eb2588824ef47a2b6eb9e377a24e5c817fcc81
Reviewed-on: https://go-review.googlesource.com/16716
Reviewed-by: Keith Randall <khr@golang.org>
Currently mallocgc detects if the GC is in a state where it can't
assist, but also can't allocate uncontrolled and yields to help out
the GC. This was a workaround for periods when we were trying to
schedule the GC coordinator. It is no longer necessary because there
is no GC coordinator and malloc can always assist with any GC
transitions that are necessary.
Updates #11970.
Change-Id: I4f7beb7013e85e50ae99a3a8b0bb708ba49cbcd4
Reviewed-on: https://go-review.googlesource.com/16392
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This moves all of the mark 1 to mark 2 transition and mark termination
to the mark done transition function. This means these transitions are
now handled on the goroutine that detected mark completion. This also
means that the GC coordinator and the background completion barriers
are no longer used and various workarounds to yield to the coordinator
are no longer necessary. These will be removed in follow-up commits.
One consequence of this is that mark workers now need to be
preemptible when performing the mark done transition. This allows them
to stop the world and to perform the final clean-up steps of GC after
restarting the world. They are only made preemptible while performing
this transition, so if the worker that findRunnableGCWorker would schedule
isn't available, we didn't want to schedule it anyway.
Fixes #11970.
Change-Id: I9203a2d6287eeff62d589ec02ad9cb1e29ddb837
Reviewed-on: https://go-review.googlesource.com/16391
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently gcMarkDone takes basically no time, so it's okay to account
the worker time after calling it. However, gcMarkDone is about to take
potentially *much* longer because it may perform all of mark
termination. Prepare for this by swapping the order so we account the
time before calling gcMarkDone.
Change-Id: I90c7df68192acfc4fd02a7254dae739dda4e2fcb
Reviewed-on: https://go-review.googlesource.com/16390
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the code for completion of mark 1/mark 2 is duplicated in
background workers and assists. Factor this into a single function
that will serve as the transition function for concurrent mark.
Change-Id: I4d9f697a15da0d349db3b34d56f3a220dd41d41b
Reviewed-on: https://go-review.googlesource.com/16359
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, findRunnableGCWorker will perform mark completion if there
is no remaining work and no running workers. This used to be necessary
to resolve a race in the transition from mark 1 to mark 2 where we
would enter mark 2 with no mark work (and no dedicated workers), so no
workers would run, so no worker would signal mark completion.
However, we're about to make mark completion also perform the entire
follow-on process, which includes mark termination. We really don't
want to do that in the scheduler if it happens to detect completion.
Conveniently, this hack is no longer necessary because we always
enqueue root scanning work at the beginning of both mark 1 and mark 2,
so a mark worker will always run. Hence, we can simply eliminate it.
Change-Id: I3fc8f27c8da632f0fb732c9f6425e1f457f5652e
Reviewed-on: https://go-review.googlesource.com/16358
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, we don't start dedicated or fractional mark workers unless
the mark 1 or mark 2 barriers have been cleared. One intended
consequence of this is that no background workers run between the
forEachP that disposes all gcWork caches and the beginning of mark 2.
However, we (unintentionally) did not apply this restriction to idle
mark workers. As a result, these can start in the interim between mark
1 completion and mark 2 starting. This explains why it was necessary
to reset the root marking jobs using carefully ordered atomic writes
when setting up mark 2. It also means that, even though we definitely
enqueue work before starting mark 2, it may be drained by the time we
reset the mark 2 barrier. If this happens, currently the only thing
preventing the runtime from deadlocking is that the scheduler itself
also checks for mark completion and will signal mark 2 completion.
Were it not for the odd behavior of idle workers, this check in the
scheduler would not be necessary.
Clean all of this up and prepare to remove this check in the scheduler
by applying the same restriction to starting idle mark workers.
Change-Id: Ic1b479e1591bd7773dc27b320ca399a215603b5a
Reviewed-on: https://go-review.googlesource.com/16631
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This moves all of GC initialization, sweep termination, and the
transition to concurrent marking into the off->mark transition
function. This means it's now handled on the goroutine that detected
the state exit condition.
As a result, malloc no longer needs to Gosched() at the beginning of
the GC cycle to prevent over-allocation while the GC is starting up
because it will now *help* the GC to start up. The Gosched hack is
still necessary during GC shutdown (this is easy to test by enabling
gctrace and hitting Ctrl-S to block the gctrace output).
At this point, the GC coordinator still handles later phases. This
requires a small tweak to how we start the GC coordinator. Currently,
starting the GC coordinator is best-effort and may fail if the
coordinator is about to park from the previous cycle but hasn't yet.
We fix this by replacing the park/ready to wake up the coordinator
with a semaphore. This is temporary since the coordinator will be
going away in a few commits.
Updates #11970.
Change-Id: I2c6a11c91e72dfbc59c2d8e7c66146dee9a444fe
Reviewed-on: https://go-review.googlesource.com/16357
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This moves concurrent sweep termination from the coordinator to the
off->mark transition. This allows it to be performed by all Gs
attempting to start the GC.
Updates #11970.
Change-Id: I24428e8599a759398c2ef7ec996ba755a448f947
Reviewed-on: https://go-review.googlesource.com/16356
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This begins the conversion of the centralized GC coordinator to a
decentralized state machine by introducing the internal API that
triggers the first state transition from _GCoff to _GCmark (or
_GCmarktermination).
This change introduces the transition lock, the off->mark transition
condition (which is very similar to shouldtriggergc()), and the
general structure of a state transition. Since we're doing this
conversion in stages, it then falls back to the GC coordinator to
actually execute the cycle. We'll start moving logic out of the GC
coordinator and into transition functions next.
This fixes a minor bug in gcstoptheworld debug mode where passing the
heap trigger once could trigger multiple STW GCs.
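A sketch of the transition shape (names and signatures assumed; at
this stage the body still defers to the coordinator):

	func gcStart() {
		semacquire(&work.startSema) // the transition lock
		// Re-check the trigger condition under the lock; this is what
		// keeps a single crossing of the heap trigger from starting
		// multiple STW GCs in gcstoptheworld mode.
		if !gcShouldStart() {
			semrelease(&work.startSema)
			return
		}
		// ... for now, wake the coordinator to execute the cycle ...
		semrelease(&work.startSema)
	}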
Updates #11970.
Change-Id: I964087dd190a639eb5766398f8e1bbf8b352902f
Reviewed-on: https://go-review.googlesource.com/16355
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
For historical reasons we currently do a lot of the concurrent mark
setup on the system stack. In fact, at this point the one and only
thing that needs to happen on the system stack is the start-the-world.
Clean up this code by lifting everything other than the
start-the-world off the system stack.
The diff for this change looks large, but the only code change is to
narrow the systemstack call. Everything else is re-indentation.
Change-Id: I1e03b8afc759fad726f2397b05a17d183c2713ce
Reviewed-on: https://go-review.googlesource.com/16354
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We're about to split func gc across several functions, so lift the
local variables it uses for tracking statistics and state across the
cycle into the global "work" variable.
Change-Id: Ie955f2f1758c7f5a5543ea1f3f33b222bc4b1d37
Reviewed-on: https://go-review.googlesource.com/16353
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change removes the retry mechanism we use for buffered channels.
Instead, any sender waking up a receiver or vice versa completes the
full protocol with its counterpart. This means the counterpart does
not need to relock the channel when it wakes up. (Currently
buffered channels need to relock on wakeup.)
For sends on a channel with waiting receivers, this change replaces
two copies (sender->queue, queue->receiver) with one (sender->receiver).
For receives on channels with a waiting sender, two copies are still required.
This change unifies to a large degree the algorithm for buffered
and unbuffered channels, simplifying the overall implementation.
Fixes #11506
benchmark old ns/op new ns/op delta
BenchmarkChanProdCons10 125 110 -12.00%
BenchmarkChanProdCons0 303 284 -6.27%
BenchmarkChanProdCons100 75.5 71.3 -5.56%
BenchmarkChanContended 6452 6125 -5.07%
BenchmarkChanNonblocking 11.5 11.0 -4.35%
BenchmarkChanCreation 149 143 -4.03%
BenchmarkChanSem 63.6 61.6 -3.14%
BenchmarkChanUncontended 6390 6212 -2.79%
BenchmarkChanSync 282 276 -2.13%
BenchmarkChanProdConsWork10 516 506 -1.94%
BenchmarkChanProdConsWork0 696 685 -1.58%
BenchmarkChanProdConsWork100 470 469 -0.21%
BenchmarkChanPopular 660427 660012 -0.06%
Change-Id: I164113a56432fbc7cace0786e49c5a6e6a708ea4
Reviewed-on: https://go-review.googlesource.com/9345
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Currently dedicated mark workers participate in the getfull barrier
during concurrent mark. However, the getfull barrier wasn't designed
for concurrent work and this causes no end of headaches.
In the concurrent setting, participants come and go. This makes mark
completion susceptible to live-lock: since dedicated workers are only
periodically polling for completion, it's possible for the program to
be in some transient worker each time one of the dedicated workers
wakes up to check if it can exit the getfull barrier. It also
complicates reasoning about the system because dedicated workers
participate directly in the getfull barrier, but transient workers
must instead use trygetfull because they have exit conditions that
aren't captured by getfull (e.g., fractional workers exit when
preempted). The complexity of implementing these exit conditions
contributed to #11677. Furthermore, the getfull barrier is inefficient
because we could be running user code instead of spinning on a P. In
effect, we're dedicating 25% of the CPU to marking even if that means
we have to spin to make that 25%. It also causes issues on Windows
because we can't actually sleep for 100µs (#8687).
Fix this by making dedicated workers no longer participate in the
getfull barrier. Instead, dedicated workers simply return to the
scheduler when they fail to get more work, regardless of what other
workers are doing, and the scheduler only starts new dedicated workers
if there's work available. Everything that needs to be handled by this
barrier is already handled by detection of mark completion.
This makes the system much more symmetric because all workers and
assists now use trygetfull during concurrent mark. It also loosens the
25% CPU target so that we can give some of that 25% back to user code
if there isn't enough work to keep the mark worker busy. And it
eliminates the problematic 100µs sleep on Windows during concurrent
mark (though not during mark termination).
The downside of this is that if we hit a bottleneck in the heap graph
that then expands back out, the system may shut down dedicated workers
and take a while to start them back up. We'll address this in the next
commit.
Updates #12041 and #8687.
No effect on the go1 benchmarks. This slows down the garbage benchmark
by 9%, but we'll more than make it up in the next commit.
name old time/op new time/op delta
XBenchGarbage-12 5.80ms ± 2% 6.32ms ± 4% +9.03% (p=0.000 n=20+20)
Change-Id: I65100a9ba005a8b5cf97940798918672ea9dd09b
Reviewed-on: https://go-review.googlesource.com/16297
Reviewed-by: Rick Hudson <rlh@golang.org>
This introduces a recursive variant of the go:nowritebarrier
annotation that prohibits write barriers not only in the annotated
function, but in all functions it calls, recursively. The error
message gives the shortest call stack from the annotated function to
the function containing the prohibited write barrier, including the
names of the functions and the line numbers of the calls.
To demonstrate the annotation, we apply it to gcmarkwb_m, the write
barrier itself.
This is a new annotation rather than a modification of the existing
go:nowritebarrier annotation because, for better or worse, there are
many go:nowritebarrier functions that do call functions with write
barriers. In most of these cases this is benign because the annotation
was conservative, but it prohibits simply coopting the existing
annotation.
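Roughly, the annotated write barrier looks like this (a sketch: the pragma
is only honored when compiling the runtime itself, and the body below is
illustrative rather than the real barrier):

    package runtime // sketch; go:nowritebarrierrec is runtime-only

    //go:nowritebarrierrec
    func gcmarkwb_m(slot *uintptr, ptr uintptr) {
        // Neither this function nor anything it transitively calls may
        // contain a write barrier; a violation is reported along with
        // the shortest call chain to the offending function.
        *slot = ptr // a uintptr store compiles to a plain write
    }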
Change-Id: I225ca483c8f699e8436373ed96349e80ca2c2479
Reviewed-on: https://go-review.googlesource.com/16554
Reviewed-by: Keith Randall <khr@golang.org>
Handling of special records for tiny allocations has two problems:
1. Once we queue a finalizer we mark the object. As a result, any
subsequent finalizers for the same object will not be queued
during this GC cycle. If we have 16 finalizers set up (the worst case),
finalization will take 16 GC cycles. This is what caused the misbehavior
of tinyfin.go. The actual flakiness was caused by the fact that fing
is asynchronous and doesn't always run before the check.
2. If a tiny block has both finalizer and profile specials,
it is possible that we queue the finalizer, preserve the object as live,
and yet free the profile record. As a result, the heap profile can be skewed.
Fix both issues by analyzing all special records for a single object at once.
Also, make tinyfin test stricter and remove reliance on real time.
Also, add a test for problem 2; previously the heap profile missed about
half of live memory.
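A standalone sketch in the spirit of the tinyfin test (simplified, with an
arbitrary timeout) shows the shape of the worst case described above:

    package main

    import (
        "runtime"
        "time"
    )

    type tiny byte // 1-byte, pointer-free: served by the tiny allocator

    func main() {
        const n = 16 // worst case: 16 one-byte objects share one 16-byte block
        done := make(chan struct{}, n)
        for i := 0; i < n; i++ {
            x := new(tiny)
            runtime.SetFinalizer(x, func(*tiny) { done <- struct{}{} })
        }
        runtime.GC() // with the fix, one cycle can queue all n finalizers
        for i := 0; i < n; i++ {
            select {
            case <-done:
            case <-time.After(5 * time.Second):
                println("finalizer did not run")
                return
            }
        }
        println("all finalizers ran")
    }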
Fixes #13100
Change-Id: I9ae4dc1c44893724138a4565ca5cae29f2e97544
Reviewed-on: https://go-review.googlesource.com/16591
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Declare a function's arguments as having already been
spilled so their use just requires a restore.
Allow spill locations to be portions of larger objects on the stack.
Required to load portions of compound input arguments.
Rename the memory input to InputMem. Use Arg for the
pre-spilled argument values.
Change-Id: I8fe2a03ffbba1022d98bfae2052b376b96d32dda
Reviewed-on: https://go-review.googlesource.com/16536
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Currently the GC work buffers are only 256 bytes and hence can record
only 24 64-bit pointers. They were reduced from 4K in commits db7fd1c
and a15818f as a way to minimize the amount of work the per-P workbuf
caches could "hide" from the mark phase and carry in to the mark
termination phase. However, this approach wasn't very robust and we
later added a "mark 2" phase to address this problem head-on.
Because of mark 2, there's now no benefit to having very small work
buffers. But there are plenty of downsides: small work buffers
increase contention on the work lists, increase the frequency and
hence net overhead of acquiring and releasing work buffers, and
somewhat increase memory overhead of the GC.
This commit expands work buffers back to 4K (504 64-bit pointers).
This reduces the rate of writes to work.full in the garbage benchmark
from a peak of ~780,000 writes/sec to a peak of ~32,000 writes/sec.
This has negligible effect on the go1 benchmarks. It slightly slows
down the garbage benchmark.
name old time/op new time/op delta
XBenchGarbage-12 5.37ms ± 5% 5.60ms ± 2% +4.37% (p=0.000 n=20+20)
Change-Id: Ic9cc28e7a125d23d9faf4f5e690fb8aa9bcdfb28
Reviewed-on: https://go-review.googlesource.com/15893
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, assists are non-preemptible, which means a heavily
assisting G can block other Gs from running. At the beginning of a GC
cycle, it can also delay scang, which will spin until the assist is
done. Since scanning is currently done sequentially, this can
seriously extend the length of the scan phase.
Fix this by making assists preemptible. Since the assist holds work
buffers and runs on the system stack, this must be done cooperatively:
we make gcDrainN return on preemption, and make the assist return from
the system stack and voluntarily Gosched.
This is a prerequisite to enlarging the work buffers. Without this
change, the delays and spinning in scang increase significantly.
This has no effect on the go1 benchmarks.
name old time/op new time/op delta
XBenchGarbage-12 5.72ms ± 4% 5.37ms ± 5% -6.11% (p=0.000 n=20+20)
Change-Id: I829e732a0f23b126da633516a1a9ec1a508fdbf1
Reviewed-on: https://go-review.googlesource.com/15894
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
GC assists must block until the assist can be satisfied (either
through stealing credit or doing work) or the GC cycle ends.
Currently, this is implemented as a retry loop with a 100 µs delay.
This obviously isn't ideal, as it wastes CPU and delays mutator
execution. It also has the somewhat peculiar downside that sleeping a
G requires allocation, and this requires working around recursive
allocation.
Replace this timed delay with a proper scheduling queue. When an
assist can't be satisfied immediately, it adds the allocating G to a
queue and parks it. Any time background scan credit is flushed, it
consults this queue, directly satisfies the debt of queued assists,
and wakes up satisfied assists before flushing any remaining credit to
the background credit pool.
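A userland analogy of the new scheme (hypothetical types; the runtime parks
the G itself rather than blocking on a channel):

    package sketch

    import "sync"

    type assist struct {
        debt int64         // scan-work bytes still owed
        done chan struct{} // closed when the debt has been paid
    }

    type assistQueue struct {
        mu    sync.Mutex
        queue []*assist
    }

    // park blocks the caller until background credit covers its debt.
    func (q *assistQueue) park(debt int64) {
        a := &assist{debt: debt, done: make(chan struct{})}
        q.mu.Lock()
        q.queue = append(q.queue, a)
        q.mu.Unlock()
        <-a.done
    }

    // flushCredit satisfies queued assists first and returns whatever
    // credit is left over for the background credit pool.
    func (q *assistQueue) flushCredit(credit int64) int64 {
        q.mu.Lock()
        defer q.mu.Unlock()
        for len(q.queue) > 0 && credit >= q.queue[0].debt {
            a := q.queue[0]
            q.queue = q.queue[1:]
            credit -= a.debt
            close(a.done) // wake the satisfied assist
        }
        return credit
    }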
No effect on the go1 benchmarks. Slightly speeds up the garbage
benchmark.
name old time/op new time/op delta
XBenchGarbage-12 5.81ms ± 1% 5.72ms ± 4% -1.65% (p=0.011 n=20+20)
Updates #12041.
Change-Id: I8ee3b6274dd097b12b10a8030796a958a4b0e7b7
Reviewed-on: https://go-review.googlesource.com/15890
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This eliminates many write barriers in the scheduler code that are
unnecessary and will interfere with upcoming changes where the garbage
collector will have to invoke run queue functions in contexts that
must not have write barriers.
Change-Id: I702d0ac99cfd00ffff406e7362917db6a43e7e55
Reviewed-on: https://go-review.googlesource.com/16556
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
To avoid collisions with what existing code may already be doing.
Change-Id: Ice639440aafc0724714c25333d90a49954372230
Reviewed-on: https://go-review.googlesource.com/16503
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently the concurrent root scan is performed in its entirety by the
GC coordinator before entering concurrent mark (which enables GC
workers). This scan is done sequentially, which can prolong the scan
phase, delay the mark phase, and means that the scan phase does not
obey the 25% CPU goal. Furthermore, there's no need to complete the
root scan before starting marking (in fact, we already allow GC
assists to happen during the scan phase), so this acts as an
unnecessary barrier between root scanning and marking.
This change shifts the root scan work out of the GC coordinator into
the GC workers. The coordinator simply sets up the scan state and
enqueues the right number of root scan jobs. The GC workers then drain
the root scan jobs prior to draining heap scan jobs.
This parallelizes the root scan process, makes it obey the 25% CPU
goal, and effectively eliminates root scanning as an isolated phase,
allowing the system to smoothly transition from root scanning to heap
marking. This also eliminates a major non-STW responsibility of the GC
coordinator, which will make it easier to switch to a decentralized
state machine. Finally, it puts us in a good position to perform root
scanning in assists as well, which will help satisfy assists at the
beginning of the GC cycle.
This is mostly straightforward. One tricky aspect is that we have to
deal with preemption deadlock, where two non-preemptible goroutines
are trying to preempt each other to perform a stack scan. Given the
context where this happens, the only instance of this is two
background workers trying to scan each other. We avoid this by simply
not scanning the stacks of background workers during the concurrent
phase; this is safe because we'll scan them during mark termination
(and their stacks are *very* small and should not contain any new
pointers).
This change also switches the root marking during mark termination to
use the same gcDrain-based code path as concurrent mark. This
shouldn't affect performance because STW root marking was already
parallel and tasks switched to heap marking immediately when no more
root marking tasks were available. However, it simplifies the code and
unifies these code paths.
This has negligible effect on the go1 benchmarks. It slightly slows
down the garbage benchmark, possibly by making GC run slightly more
frequently.
name old time/op new time/op delta
XBenchGarbage-12 5.10ms ± 1% 5.24ms ± 1% +2.87% (p=0.000 n=18+18)
name old time/op new time/op delta
BinaryTree17-12 3.25s ± 3% 3.20s ± 5% -1.57% (p=0.013 n=20+20)
Fannkuch11-12 2.45s ± 1% 2.46s ± 1% +0.38% (p=0.019 n=20+18)
FmtFprintfEmpty-12 49.7ns ± 3% 49.9ns ± 4% ~ (p=0.851 n=19+20)
FmtFprintfString-12 170ns ± 2% 170ns ± 1% ~ (p=0.775 n=20+19)
FmtFprintfInt-12 161ns ± 1% 160ns ± 1% -0.78% (p=0.000 n=19+18)
FmtFprintfIntInt-12 267ns ± 1% 270ns ± 1% +1.04% (p=0.000 n=19+19)
FmtFprintfPrefixedInt-12 238ns ± 2% 238ns ± 1% ~ (p=0.133 n=18+19)
FmtFprintfFloat-12 311ns ± 1% 310ns ± 2% -0.35% (p=0.023 n=20+19)
FmtManyArgs-12 1.08µs ± 1% 1.06µs ± 1% -2.31% (p=0.000 n=20+20)
GobDecode-12 8.65ms ± 1% 8.63ms ± 1% ~ (p=0.377 n=18+20)
GobEncode-12 6.49ms ± 1% 6.52ms ± 1% +0.37% (p=0.015 n=20+20)
Gzip-12 319ms ± 3% 318ms ± 1% ~ (p=0.975 n=19+17)
Gunzip-12 41.9ms ± 1% 42.1ms ± 2% +0.65% (p=0.004 n=19+20)
HTTPClientServer-12 61.7µs ± 1% 62.6µs ± 1% +1.40% (p=0.000 n=18+20)
JSONEncode-12 16.8ms ± 1% 16.9ms ± 1% ~ (p=0.239 n=20+18)
JSONDecode-12 58.4ms ± 1% 60.7ms ± 1% +3.85% (p=0.000 n=19+20)
Mandelbrot200-12 3.86ms ± 0% 3.86ms ± 1% ~ (p=0.092 n=18+19)
GoParse-12 3.75ms ± 2% 3.75ms ± 2% ~ (p=0.708 n=19+20)
RegexpMatchEasy0_32-12 100ns ± 1% 100ns ± 2% +0.60% (p=0.010 n=17+20)
RegexpMatchEasy0_1K-12 341ns ± 1% 342ns ± 2% ~ (p=0.203 n=20+19)
RegexpMatchEasy1_32-12 82.5ns ± 2% 83.2ns ± 2% +0.83% (p=0.007 n=19+19)
RegexpMatchEasy1_1K-12 495ns ± 1% 495ns ± 2% ~ (p=0.970 n=19+18)
RegexpMatchMedium_32-12 130ns ± 2% 130ns ± 2% +0.59% (p=0.039 n=19+20)
RegexpMatchMedium_1K-12 39.2µs ± 1% 39.3µs ± 1% ~ (p=0.214 n=18+18)
RegexpMatchHard_32-12 2.03µs ± 2% 2.02µs ± 1% ~ (p=0.166 n=18+19)
RegexpMatchHard_1K-12 61.0µs ± 1% 60.9µs ± 1% ~ (p=0.169 n=20+18)
Revcomp-12 533ms ± 1% 535ms ± 1% ~ (p=0.071 n=19+17)
Template-12 68.1ms ± 2% 73.0ms ± 1% +7.26% (p=0.000 n=19+20)
TimeParse-12 355ns ± 2% 356ns ± 2% ~ (p=0.530 n=19+20)
TimeFormat-12 357ns ± 2% 347ns ± 1% -2.59% (p=0.000 n=20+19)
[Geo mean] 62.1µs 62.3µs +0.31%
name old speed new speed delta
GobDecode-12 88.7MB/s ± 1% 88.9MB/s ± 1% ~ (p=0.377 n=18+20)
GobEncode-12 118MB/s ± 1% 118MB/s ± 1% -0.37% (p=0.015 n=20+20)
Gzip-12 60.9MB/s ± 3% 60.9MB/s ± 1% ~ (p=0.944 n=19+17)
Gunzip-12 464MB/s ± 1% 461MB/s ± 2% -0.64% (p=0.004 n=19+20)
JSONEncode-12 115MB/s ± 1% 115MB/s ± 1% ~ (p=0.236 n=20+18)
JSONDecode-12 33.2MB/s ± 1% 32.0MB/s ± 1% -3.71% (p=0.000 n=19+20)
GoParse-12 15.5MB/s ± 2% 15.5MB/s ± 2% ~ (p=0.702 n=19+20)
RegexpMatchEasy0_32-12 320MB/s ± 1% 318MB/s ± 2% ~ (p=0.094 n=18+20)
RegexpMatchEasy0_1K-12 3.00GB/s ± 1% 2.99GB/s ± 1% ~ (p=0.194 n=20+19)
RegexpMatchEasy1_32-12 388MB/s ± 2% 385MB/s ± 2% -0.83% (p=0.008 n=19+19)
RegexpMatchEasy1_1K-12 2.07GB/s ± 1% 2.07GB/s ± 1% ~ (p=0.964 n=19+18)
RegexpMatchMedium_32-12 7.68MB/s ± 1% 7.64MB/s ± 2% -0.57% (p=0.020 n=19+20)
RegexpMatchMedium_1K-12 26.1MB/s ± 1% 26.1MB/s ± 1% ~ (p=0.211 n=18+18)
RegexpMatchHard_32-12 15.8MB/s ± 1% 15.8MB/s ± 1% ~ (p=0.180 n=18+19)
RegexpMatchHard_1K-12 16.8MB/s ± 1% 16.8MB/s ± 2% ~ (p=0.236 n=20+19)
Revcomp-12 477MB/s ± 1% 475MB/s ± 1% ~ (p=0.071 n=19+17)
Template-12 28.5MB/s ± 2% 26.6MB/s ± 1% -6.77% (p=0.000 n=19+20)
[Geo mean] 100MB/s 99.0MB/s -0.82%
Change-Id: I875bf6ceb306d1ee2f470cabf88aa6ede27c47a0
Reviewed-on: https://go-review.googlesource.com/16059
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We already have gcMarkWorkAvailable, but the check for GC mark work is
open-coded in several places. Generalize gcMarkWorkAvailable slightly
and replace these open-coded checks with calls to gcMarkWorkAvailable.
In addition to cleaning up the code, this puts us in a better position
to make this check slightly more complicated.
Change-Id: I1b29883300ecd82a1bf6be193e9b4ee96582a860
Reviewed-on: https://go-review.googlesource.com/16058
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Abandon (but still support) the old numbering system.
GOTRACEBACK=none is old 0
GOTRACEBACK=single is the new behavior
GOTRACEBACK=all is old 1
GOTRACEBACK=system is old 2
GOTRACEBACK=crash is unchanged
See doc comment change in runtime1.go for details.
Filed #13107 to decide whether to change default back to GOTRACEBACK=all for Go 1.6 release.
If you run into programs where printing only the current goroutine omits
needed information, please add details in a comment on that issue.
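For instance, under the new default (equivalent to GOTRACEBACK=single) the
panic output of this program shows only the crashing goroutine, while
GOTRACEBACK=all also dumps the parked one:

    package main

    func main() {
        go func() { select {} }() // a parked background goroutine
        panic("boom")
    }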
Fixes #12366.
Change-Id: I82ca8b99b5d86dceb3f7102d38d2659d45dbe0db
Reviewed-on: https://go-review.googlesource.com/16512
Reviewed-by: Austin Clements <austin@google.com>
On ppc64x, the thread pointer, held in R13, points 0x7000 bytes past where
thread-local storage begins (presumably to maximize the amount of storage that
can be accessed with a 16-bit signed displacement). The relocations used to
indicate thread-local storage to the platform linker account for this, so to be
able to support external linking we need to change things so the linker applies
this offset instead of the runtime assembly.
Change-Id: I2556c249ab2d802cae62c44b2b4c5b44787d7059
Reviewed-on: https://go-review.googlesource.com/14233
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This is needed to make external linking work.
Change-Id: I4cf7edb4ea318849cab92a697952f8745eed40c4
Reviewed-on: https://go-review.googlesource.com/14237
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The Android linker does not handle TLS for us. We set up the TLS slot
for g ourselves, as darwin/386 and darwin/amd64 do. This is disgusting and
fragile. We will eventually fix this ugly hack by taking advantage
of the recent TLS IE model implementation. (Instead of referencing
a GOT entry, make the code sequence look into the TLS variable that
holds the offset.)
The TLS slot for g in android/amd64 assumes a fixed offset from %fs.
See runtime/cgo/gcc_android_amd64.c for details.
For golang/go#10743
Change-Id: I1a3fc207946c665515f79026a56ea19134ede2dd
Reviewed-on: https://go-review.googlesource.com/15991
Reviewed-by: David Crawshaw <crawshaw@golang.org>
In the Go signal handler on Plan 9, when a signal with
the _SigThrow flag is received, we call startpanic before
printing the stack trace.
The startpanic function calls systemstack which calls
startpanic_m. In the startpanic_m function, we call
allocmcache to allocate _g_.m.mcache. The problem is
that allocmcache calls nextSample, which does a floating
point operation to return a sampling point for heap profiling.
However, Plan 9 doesn't support floating point in the
signal handler.
This change adds a new function nextSampleNoFP, only
called when in the Plan 9 signal handler, which is
similar to nextSample, but avoids floating point.
Change-Id: Iaa30437aa0f7c8c84d40afbab7567ad3bd5ea2de
Reviewed-on: https://go-review.googlesource.com/16307
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The dynamic linker on linux/386 stores the address of the vsyscall helper at a
fixed offset from the %gs register for easy access from PIC code.
Change-Id: I635305cfecceef2289985d62e676e16810ed6b94
Reviewed-on: https://go-review.googlesource.com/16346
Reviewed-by: Ian Lance Taylor <iant@golang.org>
arena_{start,used,end} are already uintptr, so no need to convert them
to uintptr, much less to convert them to unsafe.Pointer and then to
uintptr. No binary change to pkg/linux_amd64/runtime.a.
Change-Id: Ia4232ed2a724c44fde7eba403c5fe8e6dccaa879
Reviewed-on: https://go-review.googlesource.com/16339
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Implement an abort note on Plan 9, as an
equivalent of the SIGABRT signal on other
operating systems.
Updates #11975.
Change-Id: I010c9b10f2fbd2471aacd1d073368d975a2f0592
Reviewed-on: https://go-review.googlesource.com/16300
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: David du Colombier <0intro@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When a new tiny block is allocated because we're allocating an object
that won't fit into the current block, mallocgc saves the new block if
it has more space leftover than the old block. However, the logic for
this was subtly broken in golang.org/cl/2814, resulting in never
saving (or consequently reusing) a tiny block.
Change-Id: Ib5f6769451fb82877ddeefe75dfe79ed4a04fd40
Reviewed-on: https://go-review.googlesource.com/16330
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I went looking for an arm system whose stacks are by default smaller
than 64KB. In fact the smallest common linux target I could find was
Android, which like iOS uses 1MB stacks.
Fixes #11873
Change-Id: Ieeb66ad095b3da18d47ba21360ea75152a4107c6
Reviewed-on: https://go-review.googlesource.com/14602
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Minux Ma <minux@golang.org>
This copies the change from CL 16158 (applied as
22d4c8bf13).
Updates #13013
Change-Id: Id7d02e63d92806f06a4e064a91b2fb6574fe385f
Reviewed-on: https://go-review.googlesource.com/16291
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Modified GOSSA{HASH,PKG} environment variable filters to
make it easier to make/run with all SSA for testing.
Disable attempts at SSA for architectures that are not
amd64 (avoid spurious errors/unimplementeds.)
Removed easy out for unimplemented features.
Add convert op for proper liveness in presence of uintptr
to/from unsafe.Pointer conversions.
Tweaked stack sizes to get a pass on windows;
1024 instead of 768, which was observed to pass at least once.
Change-Id: Ida3800afcda67d529e3b1cf48ca4a3f0fa48b2c5
Reviewed-on: https://go-review.googlesource.com/16201
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
This CL introduces a new mSpanList type to replace the empty mspan
variables that were previously used as list heads.
To be type safe, the previous circular linked list data structure is
now a tail queue instead. One complication of this is that
mSpanList_Remove needs to know which list a span is being removed from,
but this appears to be computable in all circumstances.
As a temporary sanity check, mSpanList_Insert and mSpanList_InsertBack
record the list that an mspan has been inserted into so that
mSpanList_Remove can verify that the correct list was specified.
Whereas mspan is 112 bytes on amd64, mSpanList is only 16 bytes. This
shrinks the size of mheap from 50216 bytes to 12584 bytes.
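The shape of the change, as a hedged sketch with stub types (the real
definitions differ in detail):

    package sketch

    type mspan struct {
        next *mspan
        list *mSpanList // temporary sanity check: which list holds this span
    }

    // mSpanList is a small list head (16 bytes on amd64) replacing the
    // dummy 112-byte mspan previously embedded in mheap; the tail
    // pointer is what makes this a tail queue rather than a circular list.
    type mSpanList struct {
        first *mspan
        last  *mspan
    }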
Change-Id: I8146364753dbc3b4ab120afbb9c7b8740653c216
Reviewed-on: https://go-review.googlesource.com/15906
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Austin Clements <austin@google.com>
It's never used as a *byte anyway, so might as well just make it an
unsafe.Pointer instead.
Change-Id: I68ee418781ab2fc574eeac0498f2515b5561b7a8
Reviewed-on: https://go-review.googlesource.com/16175
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reduces the size of m by ~8% on linux/amd64 (1040 bytes -> 960 bytes).
There are also windows-specific fields, but they're currently
referenced in OS-independent source files (but only when
GOOS=="windows").
Change-Id: I13e1471ff585ccced1271f74209f8ed6df14c202
Reviewed-on: https://go-review.googlesource.com/16173
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change compiler-invoked interface functions to directly take
iface/eface parameters instead of fInterface/interface{} to avoid
needing to always convert.
For the handful of functions that legitimately need to take an
interface{} parameter, add efaceOf to type-safely convert *interface{}
to *eface.
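In sketch form, the helper is a single typed cast over the runtime's
empty-interface layout (field types simplified; the runtime uses its own
*_type):

    package sketch

    import "unsafe"

    // eface mirrors the runtime's layout of an empty interface value.
    type eface struct {
        _type unsafe.Pointer // the dynamic type
        data  unsafe.Pointer // the value
    }

    // efaceOf converts *interface{} to *eface without an open-coded
    // unsafe.Pointer dance at every call site.
    func efaceOf(ep *interface{}) *eface {
        return (*eface)(unsafe.Pointer(ep))
    }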
Change-Id: I8928761a12fd3c771394f36adf93d3006a9fcf39
Reviewed-on: https://go-review.googlesource.com/16166
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Add explicit memory sanitizer instrumentation to the runtime and syscall
packages. The compiler does not instrument the runtime package. It
does instrument the syscall package, but we need to add a couple of
cases that it can't see.
Change-Id: I2d66073f713fe67e33a6720460d2bb8f72f31394
Reviewed-on: https://go-review.googlesource.com/16164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Allows removing a few gratuitous unsafe.Pointer conversions and
parallels the type of reflect.funcType's in and out fields ([]*rtype).
Change-Id: Ie5ca230a94407301a854dfd8782a3180d5054bc4
Reviewed-on: https://go-review.googlesource.com/16163
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
These are the runtime support functions for letting Go code interoperate
with the C/C++ memory sanitizer. Calls to msanread/msanwrite are now
inserted by the compiler with the -msan option. Calls to
msanmalloc/msanfree will be from other runtime functions in a subsequent
CL.
Change-Id: I64fb061b38cc6519153face242eccd291c07d1f2
Reviewed-on: https://go-review.googlesource.com/16162
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The ragged barrier after entering the concurrent mark phase is
vestigial. This used to be the point where we enabled write barriers,
so it was necessary to synchronize all Ps to ensure write barriers
were enabled before any marking occurred. However, we've long since
switched to enabling write barriers during the concurrent scan phase,
so the start-the-world at the beginning of the concurrent scan phase
ensures that all Ps have enabled the write barrier.
Hence, we can eliminate the old "install write barrier" phase.
Fixes #11971.
Change-Id: I8cdcb84b5525cef19927d51ea11ba0a4db991ea8
Reviewed-on: https://go-review.googlesource.com/16044
Reviewed-by: Rick Hudson <rlh@golang.org>
Instead of open-coding conversions from *string to unsafe.Pointer then
to *stringStruct, add a helper function to add some type safety.
Bonus: This caught two **string values being converted to
*stringStruct in heapdump.go.
While here, get rid of the redundant _string type, but add in a
stringStructDWARF type used for generating DWARF debug info.
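Roughly, the helper is a one-line typed cast over the runtime's string
layout:

    package sketch

    import "unsafe"

    // stringStruct mirrors the runtime's representation of a string.
    type stringStruct struct {
        str unsafe.Pointer
        len int
    }

    // stringStructOf only accepts *string, which is exactly the type
    // safety that caught the **string conversions in heapdump.go.
    func stringStructOf(sp *string) *stringStruct {
        return (*stringStruct)(unsafe.Pointer(sp))
    }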
Change-Id: I8882f8cca66ac45190270f82019a5d85db023bd2
Reviewed-on: https://go-review.googlesource.com/16131
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The '1' part is left over from the C conversion, but no longer makes
sense given that print1.go no longer exists.
Change-Id: Iec171251370d740f234afdbd6fb1a4009fde6696
Reviewed-on: https://go-review.googlesource.com/16036
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
access, connect, socket.
In Android-L, logging is done by writing the log messages to the logd
process through a unix domain socket.
Also, changed the arg types of those syscall stubs to match linux
programming APIs.
For golang/go#10743
Change-Id: I66368a03316e253561e9e76aadd180c2cd2e48f3
Reviewed-on: https://go-review.googlesource.com/15993
Reviewed-by: David Crawshaw <crawshaw@golang.org>
When I saw that it was labelled "legacy", I went looking for users of it
to see how it was still used. But there aren't any. Save the next person
the trouble.
Change-Id: I921dd6c57b60331c9816542272555153ac133c02
Reviewed-on: https://go-review.googlesource.com/16035
Reviewed-by: Dave Cheney <dave@cheney.net>
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Building Go shared libraries requires that all functions that have declarations
without bodies have implementations and vice versa, so remove the
implementation of call16 and add a stub implementation of sigreturn.
Change-Id: I4d5a30c8637a5da7991054e151a536611d5bea46
Reviewed-on: https://go-review.googlesource.com/15966
Reviewed-by: Ian Lance Taylor <iant@golang.org>
These functions are always called together and perform logically
related state resets, so combine them into just gcResetMarkState.
Fixes #11427.
Change-Id: I06c17ef65f66186494887a767b3993126955b5fe
Reviewed-on: https://go-review.googlesource.com/16041
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently gcResetGState is called by func gcscan_m for concurrent GC
and directly by func gc for STW GC. Simplify this by consolidating
these two calls into one call by func gc above where it splits for
concurrent and STW GC.
As a consequence, gcResetGState and gcResetMarkState are always called
together, so the next commit will consolidate these.
Change-Id: Ib62d404c7b32b28f7d3080d26ecf3966cbc4aca0
Reviewed-on: https://go-review.googlesource.com/16040
Reviewed-by: Rick Hudson <rlh@golang.org>
This work queue is no longer used (there are many reads of
work.partial, but the only write is in putpartial, which is never
called).
Fixes #11922.
Change-Id: I08b76c0c02a0867a9cdcb94783e1f7629d44249a
Reviewed-on: https://go-review.googlesource.com/15892
Reviewed-by: Rick Hudson <rlh@golang.org>
It appears this was made possible by commit 89f185f; before that, g was
not dereferenced above.
Change-Id: I70bc571d924b36351392fd4c13d681e938cfb573
Reviewed-on: https://go-review.googlesource.com/16033
Reviewed-by: Andrew Gerrand <adg@golang.org>
PIC code on ppc64le uses R2 as a TOC pointer and when calling a function
through a function pointer must ensure the function pointer is in R12. These
rules are easy enough to follow unconditionally in our assembly, so do that.
Change-Id: Icfc4e47ae5dfbe15f581cbdd785cdeed6e40bc32
Reviewed-on: https://go-review.googlesource.com/15526
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). Part 3 of that
is using a #define in the ppc64 assembly to refer to the size of the fixed
part of the stack (finding all these took me about a week!).
Change-Id: I50f22fe1c47af1ec59da1bd7ea8f84a4750df9b7
Reviewed-on: https://go-review.googlesource.com/15525
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). So to prepare
for this, make a constant for the fixed part of a stack and use that where
necessary.
Change-Id: I447949f4d725003bb82e7d2cf7991c1bca5aa887
Reviewed-on: https://go-review.googlesource.com/15523
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Replace the confusing game where a frame size of $-8 would suppress the
implicit setting up of a stack frame with a nice explicit flag.
The code to set up the function prologue is still a little confusing but better
than it was.
Change-Id: I1d49278ff42c6bc734ebfb079998b32bc53f8d9a
Reviewed-on: https://go-review.googlesource.com/15670
Reviewed-by: Minux Ma <minux@golang.org>
Changed racewalk/race detector to use FP in a more
sensible way.
Relaxed checks for CONVNOP when race detecting.
Modified tighten to ensure that GetClosurePtr cannot float
out of entry block (turns out this cannot be relaxed, DX is
sometimes stomped by other code accompanying race detection).
Added case for addr(CONVNOP)
Modified addr to take "bounded" flag to suppress nilchecks
where it is set (usually, by race detector).
Cannot leave unimplemented-complainer enabled because it
turns out we are optimistically running SSA on every platform.
Change-Id: Ife021654ee4065b3ffac62326d09b4b317b9f2e0
Reviewed-on: https://go-review.googlesource.com/15710
Reviewed-by: Keith Randall <khr@golang.org>
A TODO to merge is removed from panic1.go.
The rest is appended to panic.go.
Updates #12952
Change-Id: Ied4382a455abc20bc2938e34d031802e6b4baf8b
Reviewed-on: https://go-review.googlesource.com/15905
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
os/signal depends on a few unexported runtime functions. This removes the
assembly stubs it used to get access to these in favour of using
//go:linkname in runtime to make the functions accessible to os/signal.
This is motivated by ppc64le shared libraries, where you cannot BR to a symbol
defined in a shared library (only BL), but it seems like an improvement anyway.
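On the runtime side the pattern looks roughly like this (a sketch; the real
declarations live in the runtime's signal code):

    package runtime // sketch: the directive below is written in runtime

    import _ "unsafe" // for go:linkname

    // signal_enable stays unexported, but os/signal can now call it
    // directly instead of jumping through an assembly stub.
    //
    //go:linkname signal_enable os/signal.signal_enable
    func signal_enable(s uint32) {
        // ...
    }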
Change-Id: I09361203ce38070bd3f132f6dc5ac212f2dc6f58
Reviewed-on: https://go-review.googlesource.com/15871
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
This isn't C anymore. No binary change to pkg/linux_amd64/runtime.a.
Change-Id: I24d66b0f5ac888f432b874aac684b1395e7c8345
Reviewed-on: https://go-review.googlesource.com/15903
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The current fastlog2 testing checks all 64M values in the domain of
interest, which is too much for platforms with no native floating point.
Reduce testing under testing.Short() to speed up builds for those platforms.
Related to #12620
Change-Id: Ie5dcd408724ba91c3b3fcf9ba0dddedb34706cd1
Reviewed-on: https://go-review.googlesource.com/15830
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Joel Sing <jsing@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The duplication of _Kind and kind constants is a legacy of the
conversion from C.
Change-Id: I368b35a41f215cf91ac4b09dac59699edb414a0e
Reviewed-on: https://go-review.googlesource.com/15800
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, when the mutator allocates, the runtime first allocates the
memory and then, if that G has done "enough" allocation, the runtime
checks whether the G has assist debt to pay off and, if so, pays it
off. This approach leads to under-assisting, where a G can allocate a
large region (or many small regions) before paying for it, or can even
exit with outstanding debt.
This commit flips this around so that a G always acquires enough
credit for an allocation before it can perform that allocation. We
continue to amortize the cost of assists by requiring that they
over-assist when triggered to build up credit for many allocations.
Fixes #11967.
Change-Id: Idac9f11133b328535667674d837be72c23ebd899
Reviewed-on: https://go-review.googlesource.com/15409
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently we track the per-G GC assist balance as two monotonically
increasing values: the bytes allocated by the G this cycle (gcalloc)
and the scan work performed by the G this cycle (gcscanwork). The
assist balance is hence assistRatio*gcalloc - gcscanwork.
This works, but has two important downsides:
1) It requires floating-point math to figure out if a G is in debt or
not. This makes it inappropriate to check for assist debt in the
hot path of mallocgc, so we only do this when a G allocates a new
span. As a result, Gs can operate "in the red", leading to
under-assist and extended GC cycle length.
2) Revising the assist ratio during a GC cycle can lead to an "assist
burst". If you think of plotting the scan work performed versus
heap size, the assist ratio controls the slope of this line.
However, in the current system, the target line always passes
through 0 at the heap size that triggered GC, so if the runtime
increases the assist ratio, there has to be a potentially large
assist to jump from the current amount of scan work up to the new
target scan work for the current heap size.
This commit replaces this approach with directly tracking the GC
assist balance in terms of allocation credit bytes. Allocating N bytes
simply decreases this by N and assisting raises it by the amount of
scan work performed divided by the assist ratio (to get back to
bytes).
This will make it cheap to figure out if a G is in debt, which will
let us efficiently check if an assist is necessary *before* performing
an allocation and hence keep Gs "in the black".
This also fixes assist bursts because the assist ratio is now in terms
of *remaining* work, rather than work from the beginning of the GC
cycle. Hence, the plot of scan work versus heap size becomes
continuous: we can revise the slope, but this slope always starts from
where we are right now, rather than where we were at the beginning of
the cycle.
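In pseudocode terms (hypothetical names; the real fields live on the g),
the bookkeeping becomes:

    package sketch

    // gcAssistBytes is the per-G allocation credit in bytes; negative
    // means the G is in debt and must assist before allocating.
    type gState struct {
        gcAssistBytes int64
    }

    func (g *gState) noteAllocation(bytes int64) {
        g.gcAssistBytes -= bytes
    }

    // noteScanWork converts scan work back to bytes of credit via the
    // assist ratio (expressed here as bytes per unit of scan work).
    func (g *gState) noteScanWork(work int64, assistBytesPerWork float64) {
        g.gcAssistBytes += int64(float64(work) * assistBytesPerWork)
    }

    // inDebt is now a cheap integer comparison, suitable for the
    // mallocgc hot path.
    func (g *gState) inDebt() bool { return g.gcAssistBytes < 0 }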
Change-Id: Ia821c5f07f8a433e8da7f195b52adfedd58bdf2c
Reviewed-on: https://go-review.googlesource.com/15408
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently we ensure a minimum heap distance of 1MB when computing the
assist ratio. Rather than enforcing this minimum on the heap distance,
it makes more sense to enforce that the heap goal itself is at least
1MB over the live heap size at the beginning of GC. Currently the two
approaches are semantically equivalent, but this will let us switch to
basing the assist ratio on current heap distance rather than the
initial heap distance, since we can't enforce this minimum on the
current heap distance (the GC may never finish because the goal posts
will always be 1MB away).
Change-Id: I0027b1c26a41a0152b01e5b67bdb1140d43ee903
Reviewed-on: https://go-review.googlesource.com/15604
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, gcController.scanWork is updated as lazily as possible
since it is only read at the end of the GC cycle. We're about to read
it during the GC cycle to improve the assist ratio revisions, so
modify gcDrain* to regularly flush to gcController.scanWork in much
the same way as we regularly flush to gcController.bgScanCredit.
One consequence of this is that it's difficult to keep gcw.scanWork
monotonic, so we give up on that and simply return the amount of scan
work done by gcDrainN rather than calculating it in the caller.
Change-Id: I7b50acdc39602f843eed0b5c6d2dacd7e762b81d
Reviewed-on: https://go-review.googlesource.com/15407
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently callers of gcDrain control whether it flushes scan work
credit to gcController.bgScanCredit by passing a value other than -1
for the flush threshold. Shortly we're going to make this always flush
scan work to gcController.scanWork and optionally also flush scan work
to gcController.bgScanCredit. This will be much easier if the flush
threshold is simply a constant (which it is in practice) and callers
merely control whether or not the flush includes the background
credit. Hence, replace the flush threshold argument with a flag.
Change-Id: Ia27db17de8a3f1e462a5d7137d4b5dc72f99a04e
Reviewed-on: https://go-review.googlesource.com/15406
Reviewed-by: Rick Hudson <rlh@golang.org>
These functions were nearly identical. Consolidate them by adding a
flags argument. In addition to cleaning up this code, this makes
further changes that affect both functions easier.
Change-Id: I6ec5c947603bbbd3ff4040113b2fbc240e99745f
Reviewed-on: https://go-review.googlesource.com/15405
Reviewed-by: Rick Hudson <rlh@golang.org>
The comment for assistRatio claimed it to be the reciprocal of what it
actually is.
Change-Id: If7f9bb853d75d0097facff3aa6704b224d9108b8
Reviewed-on: https://go-review.googlesource.com/15402
Reviewed-by: Russ Cox <rsc@golang.org>
(*T)(unsafe.Pointer(&t)) === &t
for t of type T
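That is, for any addressable t the roundabout form is pointer-identical to
the direct one:

    package main

    import "unsafe"

    type T struct{ x int }

    func main() {
        var t T
        p1 := (*T)(unsafe.Pointer(&t)) // form removed by this cleanup
        p2 := &t                       // equivalent, no unsafe required
        println(p1 == p2)              // always true
    }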
Change-Id: I43c1aa436747dfa0bf4cb0d615da1647633f9536
Reviewed-on: https://go-review.googlesource.com/15656
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Improve the aeshash implementation to make it harder to engineer collisions.
1) Scramble the seed before xoring with the input string. This
makes it harder to cancel known portions of the seed (like the size)
because it mixes the per-table seed into those other parts.
2) Use table-dependent seeds for all stripes when hashing >16 byte strings.
For small strings this change uses 4 aesenc ops instead of 3, so it
is somewhat slower. The first two can run in parallel, though, so
it isn't 33% slower.
benchmark old ns/op new ns/op delta
BenchmarkHash64-12 10.2 11.2 +9.80%
BenchmarkHash16-12 5.71 6.13 +7.36%
BenchmarkHash5-12 6.64 7.01 +5.57%
BenchmarkHashBytesSpeed-12 30.3 31.9 +5.28%
BenchmarkHash65536-12 2785 2882 +3.48%
BenchmarkHash1024-12 53.6 55.4 +3.36%
BenchmarkHashStringArraySpeed-12 54.9 56.5 +2.91%
BenchmarkHashStringSpeed-12 18.7 19.2 +2.67%
BenchmarkHashInt32Speed-12 14.8 15.1 +2.03%
BenchmarkHashInt64Speed-12 14.5 14.5 +0.00%
Change-Id: I59ea124b5cb92b1c7e8584008257347f9049996c
Reviewed-on: https://go-review.googlesource.com/14124
Reviewed-by: jcd . <jcd@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It's particularly nice to get rid of the android special cases in the linker.
Change-Id: I516363af7ce8a6b2f196fe49cb8887ac787a6dad
Reviewed-on: https://go-review.googlesource.com/14197
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The current heap sampling introduces some bias that interferes
with unsampling, producing unexpected heap profiles.
The solution is to use a Poisson process to generate the
sampling points, using the formulas described at
https://en.wikipedia.org/wiki/Poisson_process
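The key property is that inter-sample distances are drawn from an
exponential distribution; a standalone sketch (not the runtime's internal
implementation, which avoids math and math/rand):

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // nextSample returns the number of bytes to the next sample point.
    // Exponentially distributed gaps make the sample points a Poisson
    // process, which removes the bias that broke unsampling.
    func nextSample(meanBytes float64) int {
        return int(math.Ceil(-math.Log(1-rand.Float64()) * meanBytes))
    }

    func main() {
        const n, mean = 100000, 512 * 1024
        sum := 0
        for i := 0; i < n; i++ {
            sum += nextSample(mean)
        }
        fmt.Println("mean gap:", sum/n, "target:", mean) // should be close
    }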
This fixes #12620
Change-Id: If2400809ed3c41de504dd6cff06be14e476ff96c
Reviewed-on: https://go-review.googlesource.com/14590
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, amd64p32's memmove and memclr use 8 byte writes as much as
possible and 1 byte writes for the tail of the object. However, if an
object ends with a 4 byte pointer at an 8 byte aligned offset, this
may copy/zero the pointer field one byte at a time, allowing the
garbage collector to observe a partially copied pointer.
Fix this by using 4 byte writes instead of 8 byte writes.
Updates #12552.
Change-Id: I13324fd05756fb25ae57e812e836f0a975b5595c
Reviewed-on: https://go-review.googlesource.com/15370
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This fixes an issue where the runtime panics with "out of memory" or
"cannot allocate memory" even though there's ample memory by reducing
the number of memory mappings created by the memory allocator.
Commit 7e1b61c worked around issue #8832 where Linux's transparent
huge page support could dramatically increase the RSS of a Go process
by setting the MADV_NOHUGEPAGE flag on any regions of pages released
to the OS with MADV_DONTNEED. This had the side effect of also
increasing the number of VMAs (memory mappings) in a Go address space
because a separate VMA is needed for every region of the virtual
address space with different flags. Unfortunately, by default, Linux
limits the number of VMAs in an address space to 65530, and a large
heap can quickly reach this limit when the runtime starts scavenging
memory.
This commit dramatically reduces the number of VMAs. It does this
primarily by only adjusting the huge page flag at huge page
granularity. With this change, on amd64, even a pessimal heap that
alternates between MADV_NOHUGEPAGE and MADV_HUGEPAGE must reach 128GB
to reach the VMA limit. Because of this rounding to huge page
granularity, this change is also careful to leave large used and
unused regions huge page-enabled.
This change reduces the maximum number of VMAs during the runtime
benchmarks with GODEBUG=scavenge=1 from 692 to 49.
Fixes #12233.
Change-Id: Ic397776d042f20d53783a1cacf122e2e2db00584
Reviewed-on: https://go-review.googlesource.com/15191
Reviewed-by: Keith Randall <khr@golang.org>
In general, finishsweep_m must block until any spans that are
concurrently being swept have been swept. It accomplishes this by
looping over all spans, which, as in the previous commit, takes
~1ms/heap GB. Unfortunately, we do this during the STW sweep
termination phase, so multi-gigabyte heaps can push our STW time past
10ms.
However, there's no need to do this wait if the world is stopped
because, in effect, stopping the world already had to wait for
anything that was sweeping (and if it didn't, the wait in
finishsweep_m would deadlock). Hence, we can simply skip this loop if
the world is stopped, such as during sweep termination. In fact,
currently all calls to finishsweep_m are STW, but this hasn't always
been the case and may not be the case in the future, so we keep the
logic around.
For 24GB heaps, this reduces max pause time by 75% relative to tip and
by 90% relative to Go 1.5. Notably, all pauses are now well under
10ms. Here are the results for the garbage benchmark:
------------- max pause ------------
Heap Procs after change before change 1.5.1
24GB 12 3.8ms 16ms 37ms
24GB 4 3.7ms 16ms 37ms
4GB 4 3.7ms 3ms 6.9ms
In the 4GB/4P case, it seems the "before change" run got lucky: the
max went up, but the 99%ile pause time went down from 3ms to 2.04ms.
Change-Id: Ica22189559f231d408ef2815019c9dbb5f38bf31
Reviewed-on: https://go-review.googlesource.com/15071
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In order to compute the sweep ratio, the runtime needs to know how
many pages belong to spans in state _MSpanInUse. Currently it finds
this out by looping over all spans during mark termination. However,
this takes ~1ms/heap GB, so multi-gigabyte heaps can quickly push our
STW time past 10ms.
Replace the loop with an actively maintained count of in-use pages.
For multi-gigabyte heaps, this reduces max mark termination pause time
by 75%–90% relative to tip and by 85%–95% relative to Go 1.5.1. This
shifts the longest pause time for large heaps to the sweep termination
phase, so it only slightly decreases max pause time, though it roughly
halves mean pause time. Here are the results for the garbage
benchmark:
---- max mark termination pause ----
Heap Procs after change before change 1.5.1
24GB 12 1.9ms 18ms 37ms
24GB 4 3.7ms 18ms 37ms
4GB 4 920µs 3.8ms 6.9ms
Fixes #11484.
Change-Id: Ia2d28bb8a1e4f1c3b8ebf79fb203f12b9bf114ac
Reviewed-on: https://go-review.googlesource.com/15070
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This reduces pause time by ~25% relative to tip and by ~50% relative
to Go 1.5.1.
Currently one of the steps of STW mark termination is to loop (in
parallel) over all spans to find objects with finalizers in order to
mark all objects reachable from these objects and to treat the
finalizer special as a root. Unfortunately, even if there are no
finalizers at all, this loop takes roughly 1 ms/heap GB/core, so
multi-gigabyte heaps can quickly push our STW time past 10ms.
Fix this by moving this scan from mark termination to concurrent scan,
where it can run in parallel with mutators. The loop itself could also
be optimized, but this cost is small compared to concurrent marking.
Making this scan concurrent introduces two complications:
1) The scan currently walks the specials list of each span without
locking it, which is safe only with the world stopped. We fix this by
speculatively checking if a span has any specials (the vast majority
won't) and then locking the specials list only if there are specials
to check.
2) An object can have a finalizer set after concurrent scan, in which
case it won't have been marked appropriately by concurrent scan. If
the finalizer is a closure and is only reachable from the special, it
could be swept before it is run. Likewise, if the object is not marked
yet when the finalizer is set and then becomes unreachable before it
is marked, other objects reachable only from it may be swept before
the finalizer function is run. We fix this issue by making
addfinalizer ensure the same marking invariants as markroot does.
For multi-gigabyte heaps, this reduces max pause time by 20%–30%
relative to tip (depending on GOMAXPROCS) and by ~50% relative to Go
1.5.1 (where this loop was neither concurrent nor parallel). Here are
the results for the garbage benchmark:
---------------- max pause ----------------
Heap Procs Concurrent scan STW parallel scan 1.5.1
24GB 12 18ms 23ms 37ms
24GB 4 18ms 25ms 37ms
4GB 4 3.8ms 4.9ms 6.9ms
In all cases, 95%ile pause time is similar to the max pause time. This
also improves mean STW time by 10%–30%.
Fixes #11485.
Change-Id: I9359d8c3d120a51d23d924b52bf853a1299b1dfd
Reviewed-on: https://go-review.googlesource.com/14982
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, the GC modes constants are untyped and functions pass them
around as ints. Clean this up by introducing a proper type for these
constants.
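The new type is a simple named int with the three modes as typed
constants, roughly:

    package sketch

    // gcMode replaces the untyped ints previously threaded through the
    // GC entry points.
    type gcMode int

    const (
        gcBackgroundMode gcMode = iota // concurrent GC and sweep
        gcForceMode                    // stop-the-world GC now, concurrent sweep
        gcForceBlockMode               // stop-the-world GC now and STW sweep
    )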
Change-Id: Ibc022447bdfa203644921fbb548312d7e2272e8d
Reviewed-on: https://go-review.googlesource.com/14981
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change splits signal_unix.go into signal_unix.go and
signal2_unix.go and removes the fake symbol sigfwd from platforms
that do not support signal forwarding, for clarity.
Change-Id: I205eab5cf1930fda8a68659b35cfa9f3a0e67ca6
Reviewed-on: https://go-review.googlesource.com/12062
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reduce allocation to avoid running out of memory on the openbsd/arm builder,
until issue/12032 is resolved.
Update issue #12032
Change-Id: Ibd513829ffdbd0db6cd86a0a5409934336131156
Reviewed-on: https://go-review.googlesource.com/15242
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
sysReserve will return nil on failure - correctly handle this case and return
nil to the caller. Currently, a failure will result in h.arena_end being set
to psize, h.arena_used being set to zero and fun times ensue.
On the openbsd/arm builder this has resulted in:
runtime: address space conflict: map(0x0) = 0x40946000
fatal error: runtime: address space conflict
When it should be reporting out of memory instead.
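The fix is essentially an early nil check; a sketch with hypothetical
signatures standing in for the runtime's internals:

    package sketch

    import "unsafe"

    // sysReserve is a stand-in for the runtime's reservation primitive,
    // which returns nil when the OS refuses the mapping.
    func sysReserve(v unsafe.Pointer, n uintptr, reserved *bool) unsafe.Pointer {
        return nil // pretend the OS is out of address space
    }

    func growArena(pSize uintptr) unsafe.Pointer {
        var reserved bool
        p := sysReserve(nil, pSize, &reserved)
        if p == nil {
            return nil // caller reports out of memory, not a bogus conflict
        }
        return p // previously, bogus arena bounds were set even when p was nil
    }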
Change-Id: Iba828d5ee48ee1946de75eba409e0cfb04f089d4
Reviewed-on: https://go-review.googlesource.com/15056
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The memory sanitizer (msan) is a nice compiler feature that can
dynamically check for memory errors in C code. It's not useful for Go
code, since Go is memory safe. But it is useful to be able to use the
memory sanitizer on C code that is linked into a Go program via cgo.
Without this change it does not work, as msan considers memory passed
from Go to C as uninitialized.
To make this work, change the runtime to call the C mmap function when
using cgo. When using msan the mmap call will be intercepted and marked
as returning initialized memory.
Work around what appears to be an msan bug by calling malloc before we
call mmap.
Change-Id: I8ab7286d7595ae84782f68a98bef6d3688b946f9
Reviewed-on: https://go-review.googlesource.com/15170
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
We've broken periodic GC a few times without noticing because there's
no test for it, partly because you have to wait two minutes to see if
it happens. This exposes control of the periodic GC timeout to runtime
tests and adds a test that cranks it down to zero and sleeps for a bit
to make sure periodic GCs happen.
Change-Id: I3ec44e967e99f4eda752f85c329eebd18b87709e
Reviewed-on: https://go-review.googlesource.com/13169
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
For variables which get SSA'd, SSA keeps track of all the def/kill.
It is only for on-stack variables that we need them.
This reduces stack frame sizes significantly because often the
only use of a variable was a varkill, and without that last use
the variable doesn't get allocated in the frame at all.
Fixes #12602
Change-Id: I3f00a768aa5ddd8d7772f375b25f846086a3e689
Reviewed-on: https://go-review.googlesource.com/14758
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Sometimes this read is instrumented by the compiler when it creates
a temp to take an address, but sometimes it is not (e.g. for global vars
the compiler takes the address of the global directly).
Instrument convT2E/I similarly to chansend and mapaccess.
Fixes #12664
Change-Id: Ia7807f15d735483996426c5f3aed60a33b279579
Reviewed-on: https://go-review.googlesource.com/14752
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This test fails on arm64 and some amd64 OSs and fails on Linux/amd64
if you remove the first runtime.GC(), which should be unnecessary, and
run it in all.bash (but not if you run it in isolation). I don't
understand any of these failures, so for now just remove this test.
TBR=rlh
Change-Id: Ibed00671126000ed7dc5b5d4af1f86fe4a1e30e1
Reviewed-on: https://go-review.googlesource.com/14767
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently when the GC prints an object for debugging (e.g., for a
failed invalidptr or checkmark check), it dumps the entire object. To
avoid inundating the user with output for really large objects, limit
this to printing just the first 128 words (which are most likely to be
useful in identifying the type of an object) and the 32 words around
the problematic field.
Change-Id: Id94a5c9d8162f8bd9b2a63bf0b1bfb0adde83c68
Reviewed-on: https://go-review.googlesource.com/14764
Reviewed-by: Rick Hudson <rlh@golang.org>
By default, the runtime panics if it detects a pointer to an
unallocated span. At this point, this usually catches bad uses of
unsafe or cgo in user code (though it could also catch runtime bugs).
Unfortunately, the rather cryptic error misleads users, offers users
little help with debugging their own problem, and offers the Go
developers little help with root-causing.
Improve the error message in various ways. First, the wording is
improved to make it clearer what condition was detected and to suggest
that this may be the result of incorrect use of unsafe or cgo. Second,
we add a dump of the object containing the bad pointer so that there's
at least some hope of figuring out why a bad pointer was stored in the
Go heap.
Change-Id: I57b91b12bc3cb04476399d7706679e096ce594b9
Reviewed-on: https://go-review.googlesource.com/14763
Reviewed-by: Rick Hudson <rlh@golang.org>
The placement and invocation of traceGoSysCall when using
entersyscallblock() instead of entersyscall() differs enough that the
TestTraceSymbolize test can fail on some platforms.
This change moves the invocation of traceGoSysCall for entersyscall() so
that the same number of "frames to skip" are present in the trace as when
entersyscallblock() is used, ensuring system call traces remain identical
regardless of internal implementation choices.
Fixes golang/go#12056
Change-Id: I8361e91aa3708f5053f98263dfe9feb8c5d1d969
Reviewed-on: https://go-review.googlesource.com/13861
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
As per iant's suggestion during the issue #12587 crash investigation.
Also adjust incorrect throw message in sysUsed while we are here.
Change-Id: Ice07904fdd6e0980308cb445965a696d26a1b92e
Reviewed-on: https://go-review.googlesource.com/14633
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The _rt0_arm_darwin_lib entrypoint has to conform to the darwin ARMv7
calling convention, which requires functions to preserve the value of
R11. Go uses R11 as the liblink REGTMP register, so save it manually.
Also avoid using R4, which is also callee-save.
Fixes #12590
Change-Id: I9c3b374e330f81ff8fc9c01fa20505a33ddcf39a
Reviewed-on: https://go-review.googlesource.com/14603
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The current code prints an error message and then tries to carry on.
This is not helpful for Go users: they see a message that means
nothing and that they can do nothing about. In the only known case of
this message, in issue 11498, the best guess is that the netpoll code
went into an infinite loop. Instead of doing that, crash the program.
Fixes #11498.
Change-Id: Idda3456c5b708f0df6a6b56c5bb4e796bbc39d7c
Reviewed-on: https://go-review.googlesource.com/12047
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Aeshash currently computes the hash of the empty string as
hash("", seed) = seed. This is bad because the hash of a compound
object with empty strings in it doesn't include information about
where those empty strings were. For instance [2]string{"", "foo"}
and [2]string{"foo", ""} might get the same hash.
Fix this by returning a scrambled seed instead of the seed itself.
With this fix, we can remove the scrambling done by the generated
array hash routines.
The test also rejects hash("", seed) = 0, if we ever thought
it would be a good idea to try that.
The fallback hash is already OK in this regard.
Change-Id: Iaedbaa5be8d6a246dc7e9383d795000e0f562037
Reviewed-on: https://go-review.googlesource.com/14129
Reviewed-by: jcd . <jcd@golang.org>
Windows amd64 requires all syscall callers to provide room for first
4 parameters on stack. We do that for all our syscalls, except inside
of usleep2. In https://codereview.appspot.com/7563043#msg3 rsc says:
"We don't need the stack alignment and first 4 parameters on amd64
because it's just a system call, not an ordinary function call."
He seems to be wrong on both counts. But alignment is already fixed.
Fix parameter space now too.
Fixes #12444
Change-Id: I66a2a18d2f2c3846e3aa556cc3acc8ec6240bea0
Reviewed-on: https://go-review.googlesource.com/14282
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Glibc uses some special signals for special thread operations. These
signals will be used in programs that use cgo and invoke certain glibc
functions, such as setgid. In order for this to work, these signals
need to not be masked by any thread. Before this change, they were
being masked by programs that used os/signal.Notify, because it
carefully masks all non-thread-specific signals in all threads so that a
dedicated thread will collect and report those signals (see ensureSigM
in signal1_unix.go).
This change adds the two glibc special signals to the set of signals
that are unmasked in each thread.
Fixes #12498.
Change-Id: I797d71a099a2169c186f024185d44a2e1972d4ad
Reviewed-on: https://go-review.googlesource.com/14297
Reviewed-by: David Crawshaw <crawshaw@golang.org>
This puts the _Root* indexes in a more friendly order and tweaks
markrootSpans to use a for-range loop instead of its own indexing.
Change-Id: I2c18d55c9a673ea396b6424d51ef4997a1a74825
Reviewed-on: https://go-review.googlesource.com/14548
Reviewed-by: Rick Hudson <rlh@golang.org>
Commit 0e6a6c5 removed readyExecute a long time ago, but left behind
the g.readyg field that was used by readyExecute. Remove this
now-unused field.
Change-Id: I41b87ad2b427974d256ec7a7f6d4bdc2ce8a13bb
Reviewed-on: https://go-review.googlesource.com/13111
Reviewed-by: Rick Hudson <rlh@golang.org>
This is a cleanup following cc8f544, which was a minimal change to fix
issue #11617. This consolidates the two places in mSpan_Sweep that
update sweepgen. Previously this was necessary because sweepgen must
be updated before freeing the span, but we freed large spans early.
Now we free large spans later, so there's no need to duplicate the
sweepgen update. This also means large spans can take advantage of the
sweepgen sanity checking performed for other spans.
Change-Id: I23b79dbd9ec81d08575cd307cdc0fa6b20831768
Reviewed-on: https://go-review.googlesource.com/12451
Reviewed-by: Rick Hudson <rlh@golang.org>
Marking of span roots can represent a significant fraction of the time
spent in mark termination. Simply traversing the span list takes about
1ms per GB of heap and if there are a large number of finalizers (for
example, for network connections), it may take much longer.
Improve the situation by splitting the span scan into 128 subtasks
that can be executed in parallel and load balanced by the markroots
parallel for. This lets the GC balance this job across the Ps.
A better solution is to do this during concurrent mark, or to improve
it algorithmically, but this is a simple change with a lot of bang for
the buck.
This was suggested by Rhys Hiltner.
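A sketch of the split (constant and helper names are illustrative, and
the real code may carve contiguous chunks rather than interleave):

    const rootSpansShards = 128

    // markroot treats each shard as an independent root job, so the
    // parallel-for can hand shards to whichever Ps are idle.
    func markrootSpansShard(shard int) {
        spans := work.spans
        for j := shard; j < len(spans); j += rootSpansShards {
            scanSpanSpecials(spans[j]) // finalizers and other specials
        }
    }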
Updates #11485.
Change-Id: I8b281adf0ba827064e154a1b6cc32d4d8031c03c
Reviewed-on: https://go-review.googlesource.com/13112
Reviewed-by: Keith Randall <khr@golang.org>
The call to hash the trace stack reversed the "seed" and "size"
arguments to memhash and, hence, always called memhash with a 0 size,
which dutifully returned a hash value that depended only on the number
of PCs in the stack and not their values. As a result, all stacks were
put into a very small subset of the 8,192 buckets.
Fix this by passing these arguments in the correct order.
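The shape of the bug, given that memhash's signature is
memhash(p unsafe.Pointer, seed, size uintptr) (expressions simplified):

    // Before: the two uintptr arguments were swapped. The seed here
    // was 0, so memhash hashed zero bytes and the result depended only
    // on size, i.e. on the number of PCs:
    //	id := memhash(p, size, seed)
    // After:
    //	id := memhash(p, seed, size)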
Change-Id: I67cd29312f5615c7ffa23e205008dd72c6b8af62
Reviewed-on: https://go-review.googlesource.com/13613
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
For now, we only use typedmemmove. This can be optimized
in future CLs.
Also add a feature to help with binary searching bad compilations.
Together with GOSSAPKG, GOSSAHASH specifies the last few binary digits
of the hash of function names that should be compiled. So
GOSSAHASH=0110 means compile only those functions whose last 4 bits
of hash are 0110. By adding digits to the front we can binary search
for the function whose SSA-generated code is causing a test to fail.
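A sketch of the matching logic (the hash function and helper name are
illustrative, not the compiler's exact code; needs hash/crc32):

    // Compile a function with SSA only when the low bits of a hash of
    // its name spell out the GOSSAHASH pattern, read right to left.
    func gossaMatch(name, pattern string) bool {
        h := crc32.ChecksumIEEE([]byte(name))
        for i := 0; i < len(pattern); i++ {
            want := pattern[len(pattern)-1-i] == '1'
            if (h>>uint(i)&1 == 1) != want {
                return false
            }
        }
        return true
    }

Each added digit halves the surviving set, so one failing function
among N is isolated in about log2(N) rebuilds.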
Change-Id: I5a8b6b70c6f034f59e5753965234cd42ea36d524
Reviewed-on: https://go-review.googlesource.com/14530
Reviewed-by: Keith Randall <khr@golang.org>
Fixes #11959
This test runs 100 concurrent callbacks from C to Go consuming 100
operating system threads, which at 8MB apiece (the default on linux/arm)
would reserve over 800MB of address space. This would frequently
cause the test to fail on platforms with ~1gb of ram, such as the
raspberry pi.
This change reduces the thread stack allocation to 256KB, a number picked
at random; at 1/32nd of the previous size, it should allow the test to
pass successfully on all platforms.
Change-Id: I8b8bbab30ea7b2972b3269a6ff91e6fe5bc717af
Reviewed-on: https://go-review.googlesource.com/13731
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Capitanio <capnm9@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
This is because the runtime links to ntdll, and ntdll exports a couple of
incompatible libc functions. We must link to msvcrt first and
then try ntdll.
Fixes #12030.
Change-Id: I0105417bada108da55f5ae4482c2423ac7a92957
Reviewed-on: https://go-review.googlesource.com/14472
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Simplify slice/map literal expressions.
Caught with gofmt -d -s, fixed with gofmt -w -s
Checked that the result can still be compiled with Go 1.4.
Change-Id: I06bce110bb5f46ee2f45113681294475aa6968bc
Reviewed-on: https://go-review.googlesource.com/13839
Reviewed-by: Andrew Gerrand <adg@golang.org>
This leaves lots of cruft behind; will delete that soon.
Change-Id: I12d6b6192f89bcdd89b2b0873774bd3458373b8a
Reviewed-on: https://go-review.googlesource.com/14196
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Keep track of which types of keys need an update and which don't.
Strings need an update because the new key might pin a smaller backing store.
Floats need an update because it might be +0/-0.
Interfaces need an update because they may contain strings or floats.
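Two concrete cases (illustrative; needs strings and math):

    // Strings: an equal key may pin a much smaller backing array.
    big := strings.Repeat("x", 1<<20)
    m := map[string]int{}
    m[big[:3]] = 1 // stored key "xxx" shares big's 1 MB buffer
    m["xxx"] = 2   // equal key; updating the stored key unpins the buffer

    // Floats: +0 and -0 compare equal but are distinct bit patterns.
    fm := map[float64]int{}
    fm[0.0] = 1
    fm[math.Copysign(0, -1)] = 2 // stored key must be updated to -0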
Fixes #11088
Change-Id: I9ade53c1dfb3c1a2870d68d07201bc8128e9f217
Reviewed-on: https://go-review.googlesource.com/10843
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently the stack barrier code is mixed in with the mark and scan
code. Move all of the stack barrier related functions and variables to
a new dedicated source file. There are no code modifications.
Change-Id: I604603045465ef8573b9f88915d28ab6b5910903
Reviewed-on: https://go-review.googlesource.com/14050
Reviewed-by: Rick Hudson <rlh@golang.org>
When building a shared library, all functions that are declared must actually
be defined.
Change-Id: I1488690cecfb66e62d9fdb3b8d257a4dc31d202a
Reviewed-on: https://go-review.googlesource.com/14187
Reviewed-by: Dave Cheney <dave@cheney.net>
This is generated during fp code when -shared is active.
Change-Id: Ia1092299b9c3b63ff771ca4842158b42c34bd008
Reviewed-on: https://go-review.googlesource.com/14286
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Also simplifies some silliness around making the .tbss section wrt internal
vs external linking. The "make TLS make sense" project has quite a few more
steps to go.
Issue #11270
Change-Id: Ia4fa135cb22d916728ead95bdbc0ebc1ae06f05c
Reviewed-on: https://go-review.googlesource.com/13990
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Building for shared libraries requires that all functions that are declared
have an implementation and vice versa, so make that so on arm64.
It would be nicer to not require the stub sigreturn (it will never be called)
but that seems a bit awkward.
Change-Id: I3cec81697161b452af81fa35939f748bd1acf7fd
Reviewed-on: https://go-review.googlesource.com/13995
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The hash tests generate occasional failures; quiet them some more.
In particular we can get 1 collision when the expected number is
.001 or so. That shouldn't be a dealbreaker.
Fixes #12311
Change-Id: I784e91b5d21f4f1f166dc51bde2d1cd3a7a3bfea
Reviewed-on: https://go-review.googlesource.com/13902
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Currently the stack barrier stub blindly unwinds the next stack
barrier from the G's stack barrier array without checking that it's
the right stack barrier. If through some bug the stack barrier array
position gets out of sync with where we actually are on the stack,
this could return to the wrong PC, which would lead to
difficult-to-debug crashes. To address this, this commit adds a check to the amd64
stack barrier stub that it's unwinding the correct stack barrier.
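A Go rendering of what the stub now verifies (the real check is in
amd64 assembly; field names follow the runtime's stkbar bookkeeping,
simplified):

    stkbar := gp.stkbar[gp.stkbarPos]
    if stkbar.savedLRPtr != sp {
        // Not the barrier that belongs to this frame: the array and
        // the stack are out of sync, so crash loudly here instead of
        // returning to a wrong PC later.
        throw("stack barrier mismatch")
    }
    // Otherwise resume at the PC the barrier displaced:
    retPC := stkbar.savedLRVal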
Updates #12238.
Change-Id: If824d95191d07e2512dc5dba0d9978cfd9f54e02
Reviewed-on: https://go-review.googlesource.com/13948
Reviewed-by: Russ Cox <rsc@golang.org>
Currently enabling the debugging mode where stack barriers are
installed at every frame requires recompiling the runtime. However,
this is potentially useful for field debugging and for runtime tests,
so make this mode a GODEBUG.
Updates #12238.
Change-Id: I6fb128f598b19568ae723a612e099c0ed96917f5
Reviewed-on: https://go-review.googlesource.com/13947
Reviewed-by: Russ Cox <rsc@golang.org>
Currently the runtime can install stack barriers in any frame.
However, the frame of cgocallback_gofunc is special: it's the one
function that switches from a regular G stack to the system stack on
return. Hence, the return PC slot in its frame on the G stack is
actually used to save getg().sched.pc (so tracebacks appear to unwind
to the last Go function running on that G), and not as an actual
return PC for cgocallback_gofunc.
Because of this, if we install a stack barrier in cgocallback_gofunc's
return PC slot, when cgocallback_gofunc does return, it will move the
stack barrier stub PC into getg().sched.pc and switch back to the
system stack. The rest of the runtime doesn't know how to deal with a
stack barrier stub in sched.pc: nothing knows how to match it up with
the G's stack barrier array and, when the runtime removes stack
barriers, it doesn't know to undo the one in sched.pc. Hence, if the C
code later returns back into Go code, it will attempt to return
through the stack barrier saved in sched.pc, which may no longer have
correct unwinding information.
Fix this by blacklisting cgocallback_gofunc's frame so the runtime
won't install a stack barrier in its return PC slot.
Fixes #12238.
Change-Id: I46aa2155df2fd050dd50de3434b62987dc4947b8
Reviewed-on: https://go-review.googlesource.com/13944
Reviewed-by: Russ Cox <rsc@golang.org>
Right now we only find out implicitly whether stack barriers or defers
are in place. This change makes sure we always find out about short
unwinds.
Change-Id: Ibdde1ba9c79eb792660dcb7aa6f186e4e4d559b3
Reviewed-on: https://go-review.googlesource.com/13966
Reviewed-by: Austin Clements <austin@google.com>
Change-Id: I63bf6d2fdf41b49ff8783052d5d6c53b20e2f050
Reviewed-on: https://go-review.googlesource.com/13760
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
I noticed that they were unimplemented on arm64 but then that they were
in fact not used at all.
Change-Id: Iee579feda2a5e374fa571bcc8c89e4ef607d50f6
Reviewed-on: https://go-review.googlesource.com/13951
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
s is a uint32 and can never be negative. Its max value is already tested
against sig.wanted, whose size is derived from _NSIG. This also
matches the test in signal_enable.
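The pattern in general terms (illustrative):

    var s uint32
    if s < 0 { // always false: an unsigned value is never negative
        // dead code of this shape is what the change removes
    }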
Fixes #11282
Change-Id: I8eec9c7df8eb8682433616462fe51b264c092475
Reviewed-on: https://go-review.googlesource.com/13940
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
No longer used after previous hashmap change.
Change-Id: I558470f872281e84a78406132df4e391d077b833
Reviewed-on: https://go-review.googlesource.com/13785
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Previously t.zero always pointed to runtime.zerovalue. Change the hashmap code
to always return a runtime pointer directly, and change that pointer to point
to a larger buffer if one is needed.
(It might be better to only copy from the pointer returned by the mapaccess
functions when the value type is small enough and have the compiler insert
explicit zeroing for larger value types, but I tried and failed to do this).
This removes all uses of the zero field of the type data; the field itself can
be removed in a separate change.
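A sketch of the miss path after the change (names illustrative):

    // On a miss, mapaccess returns a pointer to zeroed memory at least
    // as large as the value type, instead of the fixed runtime.zerovalue.
    func mapMissValue(t *maptype) unsafe.Pointer {
        if t.elem.size > zeroBufSize {
            growZeroBuf(t.elem.size) // enlarge the shared zeroed buffer
        }
        return zeroBufPtr
    }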
Fixes #11491
Change-Id: I5b81752ff4067d74a5a281c41e88f151bae0171e
Reviewed-on: https://go-review.googlesource.com/13784
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
A comparison of the form l == r where l is an interface and r is
concrete performs a type assertion on l to convert it to r's type.
However, the compiler fails to zero the temporary where the result of
the type assertion is written, so if the type is a pointer type and a
stack scan occurs while in the type assertion, it may see an invalid
pointer on the stack.
Fix this by zeroing the temporary. This is equivalent to the fix for
type switches from c4092ac.
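The pattern that tripped it, in source terms (illustrative):

    type T struct{ x int }

    func eq(i interface{}, r *T) bool {
        // Compiled roughly as:
        //	tmp, ok := i.(*T) // tmp's slot must be zeroed before the
        //	                  // assertion, or a stack scan during it
        //	                  // can see a stale, invalid pointer there
        //	return ok && tmp == r
        return i == r
    }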
Fixes #12253.
Change-Id: Iaf205d456b856c056b317b4e888ce892f0c555b9
Reviewed-on: https://go-review.googlesource.com/13872
Reviewed-by: Russ Cox <rsc@golang.org>
nmspinning has a value range of [0, 2^31-1]. Update the comment to
indicate this and fix the comparison so it's not always false.
Fixes #11280
Change-Id: Iedaf0654dcba5e2c800645f26b26a1a781ea1991
Reviewed-on: https://go-review.googlesource.com/13877
Reviewed-by: Minux Ma <minux@golang.org>
gobuf.g is a guintptr, so without hex(), it will be printed as
a decimal, which is not very helpful and inconsistent with how
other pointers are printed.
Change-Id: I7c0432e9709e90a5c3b3e22ce799551a6242d017
Reviewed-on: https://go-review.googlesource.com/13879
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I cannot find where it's being used.
This addresses a duplicate symbol issue encountered in golang/go#9327.
Change-Id: I8efda45a006ad3e19423748210c78bd5831215e0
Reviewed-on: https://go-review.googlesource.com/13615
Reviewed-by: Ian Lance Taylor <iant@golang.org>
CL 9184 changed the runtime and syscall packages to link Solaris binaries
directly instead of using dlopen/dlsym but did not remove the unused (and
now broken) references to dlopen, dlclose, and dlsym.
Fixes #11923
Change-Id: I36345ce5e7b371bd601b7d48af000f4ccacd62c0
Reviewed-on: https://go-review.googlesource.com/13410
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Changes the torture test in #12068 from failing in about 1 of 10 runs
to not failing in almost 2,000 runs.
This was only happening in -race mode because functions are
bigger in -race mode, so a few of the helpers for heapBitsBulkBarrier
were not being inlined, and they were not marked nosplit,
so (only in -race mode) the write barrier was being preempted by GC,
causing missed pointer updates.
Filed issue #12069 for diagnosis of any other similar errors.
Fixes #12068.
Change-Id: Ic174d9b050ba278b18b08ab0d85a73c33bd5b175
Reviewed-on: https://go-review.googlesource.com/13364
Reviewed-by: Austin Clements <austin@google.com>
Also, crash early on non-Linux SMP ARM systems when GOARM < 7;
without the proper synchronization, SMP cannot work.
Linux is okay because we call kernel-provided routines for
synchronization and barriers, and the kernel takes care of
providing the right routines for the current system.
On non-Linux systems we are left to fend for ourselves.
It is possible to use different synchronization on GOARM=6,
but it's too late to do that in the Go 1.5 cycle.
We don't believe there are any non-Linux SMP GOARM=6 systems anyway.
Fixes #12067.
Change-Id: I771a556e47893ed540ec2cd33d23c06720157ea3
Reviewed-on: https://go-review.googlesource.com/13363
Reviewed-by: Austin Clements <austin@google.com>
Currently, runtime.Goexit() calls goexit()—the goroutine exit stub—to
terminate the goroutine. This *mostly* works, but can cause a
"leftover stack barriers" panic if the following happens:
1. Goroutine A has a reasonably large stack.
2. The garbage collector scan phase runs and installs stack barriers
in A's stack. The top-most stack barrier happens to fall at address X.
3. Goroutine A unwinds the stack far enough to be a candidate for
stack shrinking, but not past X.
4. Goroutine A calls runtime.Goexit(), which calls goexit(), which
calls goexit1().
5. The garbage collector enters mark termination.
6. Goroutine A is preempted right at the prologue of goexit1() and
performs a stack shrink, which calls gentraceback.
gentraceback stops as soon as it sees goexit on the stack, which is
only two frames up at this point, even though there may really be many
frames above it. More to the point, the stack barrier at X is above
the goexit frame, so gentraceback never sees that stack barrier. At
the end of gentraceback, it checks that it saw all of the stack
barriers and panics because it didn't see the one at X.
The fix is simple: call goexit1, which actually implements the process
of exiting a goroutine, rather than goexit, the exit stub.
To make sure this doesn't happen again in the future, we also add an
argument to the stub prototype of goexit so you really, really have to
want to call it in order to call it. We were able to reliably
reproduce the above sequence with a fair amount of awful code inserted
at the right places in the runtime, but chose to change the goexit
prototype to ensure this wouldn't happen again rather than pollute the
runtime with ugly testing code.
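The shape of the guard (a sketch; the exact declaration in the runtime
may differ):

    // Before: a stub anyone could call by accident.
    //	func goexit()
    // After: the dummy argument makes a call impossible to write
    // casually. The stub itself is implemented in assembly.
    type neverCallThisFunction struct{}

    func goexit(neverCallThisFunction)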
Change-Id: Ifb6fb53087e09a252baddadc36eebf954468f2a8
Reviewed-on: https://go-review.googlesource.com/13323
Reviewed-by: Russ Cox <rsc@golang.org>
This makes TestTraceStressStartStop much less flaky.
Running under stress, it changes the failure rate from
above 1/100 to under 1/50000. That very unlikely
failure happens when an unexpected GoSysExit is
written. Not sure how that happens yet, but it is much
less important.
Fixes #11953.
Change-Id: I034671936334b4f3ab733614ef239aa121d20247
Reviewed-on: https://go-review.googlesource.com/13321
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
88e945f introduced a non-speculative double check of the heap trigger
before actually starting a concurrent GC. This was necessary to fix a
race for heap-triggered GC, but broke sysmon-triggered periodic GC,
since the heap check will of course fail for periodically triggered
GC.
Fix this by telling startGC whether or not this GC was triggered by
heap size or a timer and only doing the heap size double check for GCs
triggered by heap size.
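A sketch of the plumbing (signature illustrative):

    func startGC(mode gcMode, forceTrigger bool) {
        // Heap-triggered GC re-verifies the trigger; periodic GC from
        // sysmon passes forceTrigger=true and proceeds regardless.
        if !forceTrigger && memstats.heap_live < memstats.next_gc {
            return // lost a race with a completed cycle; nothing to do
        }
        // ... kick off the cycle ...
    }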
Fixes #12026.
Change-Id: I7c3f6ec364545c36d619f2b4b3bf3b758e3bcbd6
Reviewed-on: https://go-review.googlesource.com/13168
Reviewed-by: Russ Cox <rsc@golang.org>
This is what is causing freebsd/arm to crash mysteriously when using cgo.
The bug was introduced in golang.org/cl/4030, which moved this code out
of rt0_go and into its own function. The ARM ABI says that calls must
be made with the stack pointer at an 8-byte boundary, but only FreeBSD
seems to crash when this is violated.
Fixes #10119.
Change-Id: Ibdbe76b2c7b80943ab66b8abbb38b47acb70b1e5
Reviewed-on: https://go-review.googlesource.com/13161
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
When commit 510fd13 enabled assists during the scan phase, it failed
to also update the code in the GC controller that computed the assist
CPU utilization and adjusted the trigger based on it. Fix that code so
it uses the start of the scan phase as the wall-clock time when
assists were enabled rather than the start of the mark phase.
Change-Id: I05013734b4448c3e2c730dc7b0b5ee28c86ed8cf
Reviewed-on: https://go-review.googlesource.com/13048
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
At the start of a GC cycle, the garbage collector computes the assist
ratio based on the total scannable heap size. This was intended to be
conservative; after all, this assumes the entire heap may be reachable
and hence needs to be scanned. But it only assumes that the *current*
entire heap may be reachable. It fails to account for heap allocated
during the GC cycle. If the trigger ratio is very low (near zero), and
most of the heap is reachable when GC starts (which is likely if the
trigger ratio is near zero), then it's possible for the mutator to
create new, reachable heap fast enough that the assists won't keep up
based on the assist ratio computed at the beginning of the cycle. As a
result, the heap can grow beyond the heap goal (by hundreds of megs in
stress tests like in issue #11911).
We already have some vestigial logic for dealing with situations like
this; it just doesn't run often enough. Currently, every 10 ms during
the GC cycle, the GC revises the assist ratio. This was put in before
we switched to a conservative assist ratio (when we really were using
estimates of scannable heap), and it turns out to be exactly what we
need now. However, every 10 ms is far too infrequent for a rapidly
allocating mutator.
This commit reuses this logic, but replaces the 10 ms timer with
revising the assist ratio every time the heap is locked, which
coincides precisely with when the statistics used to compute the
assist ratio are updated.
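In simplified form, the revision recomputes the ratio from live
statistics each time the heap is locked (the real accounting has more
terms):

    scanRemaining := scanWorkExpected - scanWorkDone
    heapRemaining := heapGoal - heapLive // bytes mutators may still allocate
    assistRatio = float64(scanRemaining) / float64(heapRemaining)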
Fixes #11911.
Change-Id: I377b231ab064946228378fa10422a46d1b50f4c5
Reviewed-on: https://go-review.googlesource.com/13047
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This was useful in debugging the mutator assist behavior for #11911,
and it fits with the other gcpacertrace output.
Change-Id: I1e25590bb4098223a160de796578bd11086309c7
Reviewed-on: https://go-review.googlesource.com/13046
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Proportional concurrent sweep is currently based on a ratio of spans
to be swept per bytes of object allocation. However, proportional
sweeping is performed during span allocation, not object allocation,
in order to minimize contention and overhead. Since objects are
allocated from spans after those spans are allocated, the system tends
to operate in debt, which means when the next GC cycle starts, there
is often sweep debt remaining, so GC has to finish the sweep, which
delays the start of the cycle and delays enabling mutator assists.
For example, it's quite likely that many Ps will simultaneously refill
their span caches immediately after a GC cycle (because GC flushes the
span caches), but at this point, there has been very little object
allocation since the end of GC, so very little sweeping is done. The
Ps then allocate objects from these cached spans, which drives up the
bytes of object allocation, but since these allocations are coming
from cached spans, nothing considers whether more sweeping has to
happen. If the sweep ratio is high enough (which can happen if the
next GC trigger is very close to the retained heap size), this can
easily represent a sweep debt of thousands of pages.
Fix this by making proportional sweep proportional to the number of
bytes of spans allocated, rather than the number of bytes of objects
allocated. Prior to allocating a span, both the small object path and
the large object path ensure credit for allocating that span, so the
system operates in the black, rather than in the red.
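A sketch of the pay-as-you-go check performed before span allocation
(names simplified from the runtime's sweep credit):

    func deductSweepCredit(spanBytes uintptr) {
        // Sweep until the pages swept cover the spans allocated so far
        // plus the span we are about to allocate.
        for float64(pagesSwept) < sweepPagesPerByte*float64(spanBytesAlloc+spanBytes) {
            if sweepone() == ^uintptr(0) {
                break // sweep for this cycle is done
            }
        }
    }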
Combined with the previous commit, this should eliminate all sweeping
from GC start up. On the stress test in issue #11911, this reduces the
time spent sweeping during GC (and delaying start up) by several
orders of magnitude:
          mean    99%ile  max
pre fix   1 ms    11 ms   144 ms
post fix  270 ns  735 ns  916 ns
Updates #11911.
Change-Id: I89223712883954c9d6ec2a7a51ecb97172097df3
Reviewed-on: https://go-review.googlesource.com/13044
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently it's possible for the next_gc heap size trigger computed for
the next GC cycle to be less than the current allocated heap size.
This means the next cycle will start immediately, which means there's
no time to perform the concurrent sweep between GC cycles. This places
responsibility for finishing the sweep on GC itself, which delays GC
start-up and hence delays mutator assist.
Fix this by ensuring that next_gc is always at least a little higher
than the allocated heap size, so we won't trigger the next cycle
instantly.
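In sketch form (the size of the gap is the tuning constant; the name is
illustrative):

    const minTriggerGap = 1 << 20 // leave room for concurrent sweep
    if next_gc < memstats.heap_live+minTriggerGap {
        next_gc = memstats.heap_live + minTriggerGap
    }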
Updates #11911.
Change-Id: I74f0b887bf187518d5fedffc7989817cbcf30592
Reviewed-on: https://go-review.googlesource.com/13043
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently there are two sensitive periods during which a mutator can
allocate past the heap goal but mutator assists can't be enabled: 1)
at the beginning of GC between when the heap first passes the heap
trigger and sweep termination and 2) at the end of GC between mark
termination and when the background GC goroutine parks. During these
periods there's no back-pressure or safety net, so a rapidly
allocating mutator can allocate past the heap goal. This is
exacerbated if there are many goroutines because the GC coordinator is
scheduled as any other goroutine, so if it gets preempted during one
of these periods, it may stay preempted for a long period (10s or 100s
of milliseconds).
Normally the mutator does scan work to create back-pressure against
allocation, but there is no scan work during these periods. Hence, as
a fall back, if a mutator would assist but can't yet, simply yield the
CPU. This delays the mutator somewhat, but more importantly gives more
CPU time to the GC coordinator for it to complete the transition.
This is obviously a workaround. Issue #11970 suggests a far better but
far more invasive way to fix this.
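A sketch of the fallback in the assist path (Gosched stands in for the
runtime-internal yield):

    if gcBlackenEnabled == 0 {
        // Assists should be running but aren't wired up yet; hand the
        // CPU to the GC coordinator rather than allocating further ahead.
        Gosched()
        return
    }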
Updates #11911. (This very nearly fixes the issue, but about once
every 15 minutes I get a GC cycle where the assists are enabled but
don't do enough work.)
Change-Id: I9768b79e3778abd3e06d306596c3bd77f65bf3f1
Reviewed-on: https://go-review.googlesource.com/13026
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently allocation checks the GC trigger speculatively during
allocation and then triggers the GC without rechecking. As a result,
it's possible for G 1 and G 2 to detect the trigger simultaneously,
both enter startGC, G 1 actually starts GC while G 2 gets preempted
until after the whole GC cycle, then G 2 immediately starts another GC
cycle even though the heap is now well under the trigger.
Fix this by re-checking the GC trigger non-speculatively just before
actually kicking off a new GC cycle.
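In sketch form, using the runtime's memstats fields:

    // After winning the race to start GC, confirm the trigger still
    // holds; another cycle may have completed while this goroutine was
    // preempted.
    if memstats.heap_live < memstats.next_gc {
        return // heap is back under the trigger; don't start a new cycle
    }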
This contributes to #11911 because when this happens, we definitely
don't finish the background sweep before starting the next GC cycle,
which can significantly delay the start of concurrent scan.
Change-Id: I560ab79ba5684ba435084410a9765d28f5745976
Reviewed-on: https://go-review.googlesource.com/13025
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
On most systems, a pointer is the worst case alignment, so adding
a pointer field at the end of a struct guarantees there will be no
padding added after that field (to satisfy overall struct alignment
due to some more-aligned field also present).
In the runtime, the map implementation needs a quick way to
get to the overflow pointer, which is last in the bucket struct,
so it uses size - sizeof(pointer) as the offset.
NaCl/amd64p32 is the exception, as always.
The worst case alignment is 64 bits but pointers are 32 bits.
There's a long history that is not worth going into, but when
we moved the overflow pointer to the end of the struct,
we didn't get the padding computation right.
The compiler computed the regular struct size and then
on amd64p32 added another 32-bit field.
And the runtime assumed it could step back two 32-bit fields
(one 64-bit register size) to get to the overflow pointer.
But in fact if the struct needed 64-bit alignment, the computation
of the regular struct size would have added a 32-bit pad already,
and then the code unconditionally added a second 32-bit pad.
This placed the overflow pointer three words from the end, not two.
The last two were padding, and since the runtime was consistent
about using the second-to-last word as the overflow pointer,
no harm done in the sense of overwriting useful memory.
But writing the overflow pointer to a non-pointer word of memory
means that the GC can't see the overflow blocks, so it will
collect them prematurely. Then bad things happen.
Correct all this in a few steps:
1. Add an explicit check at the end of the bucket layout in the
compiler that the overflow field is last in the struct, never
followed by padding.
2. When padding is needed on nacl (not always, just when needed),
insert it before the overflow pointer, to preserve the "last in the struct"
property.
3. Let the compiler have the final word on the width of the struct,
by inserting an explicit padding field instead of overwriting the
results of the width computation it does.
4. For the same reason (tell the truth to the compiler), set the type
of the overflow field when we're trying to pretend it's not a pointer
(in this case the runtime maintains a list of the overflow blocks
elsewhere).
5. Make the runtime use "last in the struct" as its location algorithm.
This fixes TestTraceStress on nacl/amd64p32.
The 'bad map state' and 'invalid free list' failures no longer occur.
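A sketch of the resulting invariant (simplified bucket; key/value
arrays elided, names illustrative):

    // On amd64p32, pointers are 4 bytes but worst-case alignment is 8.
    type bmap struct {
        tophash [8]uint8
        // keys [8]K; values [8]V ...
        pad      uint32 // step 2: inserted before overflow, only when needed
        overflow *bmap  // step 1: always last, never followed by padding
    }

    // Step 5: the runtime's location algorithm is now simply
    //	overflow = *(**bmap)(add(bucket, bucketsize - ptrSize))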
Fixes #11838.
Change-Id: If918887f8f252d988db0a35159944d2b36512f92
Reviewed-on: https://go-review.googlesource.com/12971
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Dangling pointer error. Unlikely to trigger in practice, but still.
Found by running GODEBUG=efence=1 GOGC=1 trace.test.
Change-Id: Ice474dedcf62dd33ab77526287a023ba3b166db9
Reviewed-on: https://go-review.googlesource.com/12991
Reviewed-by: Austin Clements <austin@google.com>