Mallocgc must be atomic wrt GC, but for performance reasons
we don't acquirem/releasem on the fast path. The code does not have
split stack checks, so it can't be preempted by GC.
Functions like roundup/add are inlined. And onM/racemalloc are nosplit.
Also add debug code that checks these assumptions.
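For illustration, a minimal sketch of the kind of assumption check meant here (hypothetical names and layout, not the actual C malloc code):
type mDesc struct {
    mallocing bool // set while this M is inside mallocgc
}

// checkMallocAssumptions sketches a debug guard: mallocgc must not be
// entered recursively and must not be preempted by GC mid-allocation.
func checkMallocAssumptions(m *mDesc) {
    if m.mallocing {
        panic("mallocgc called recursively (preempted during allocation?)")
    }
    m.mallocing = true
}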
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 20.5 17.2 -16.10%
BenchmarkMalloc16 29.5 27.0 -8.47%
BenchmarkMallocTypeInfo8 31.5 27.6 -12.38%
BenchmarkMallocTypeInfo16 34.7 30.9 -10.95%
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/123100043
bv.data is an array of uint32s but the code was using
offsets computed for an array of bytes.
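For concreteness, a hedged sketch of the indexing difference (not the actual runtime code):
// bv.data is []uint32, so bit i lives in word i/32 at bit position i%32.
func bvbit(data []uint32, i int) uint32 {
    return (data[i/32] >> uint(i%32)) & 1
}

// The bug: computing offsets as for []byte, i.e. data[i/8] >> uint(i%8),
// which indexes the wrong uint32 element and the wrong bit within it.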
Add a test for stack GC info.
Fixes #8531.
LGTM=rsc
R=golang-codereviews
CC=golang-codereviews, khr, rsc
https://golang.org/cl/124450043
Restore https://golang.org/cl/41040043 after GC rewrite.
Original description:
On the plus side, we don't need to change the bits on malloc and free.
On the downside, we need to mark objects in the free lists during GC.
But the free lists are small at GC time, so it should be a net win.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 21.9 20.4 -6.85%
BenchmarkMalloc16 31.1 29.6 -4.82%
LGTM=khr
R=khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/122280043
This allows changing the addressing mode for constant
global addresses to use pc-relative addressing.
LGTM=rminnich, iant
R=golang-codereviews, rminnich, iant
CC=golang-codereviews
https://golang.org/cl/129830043
It's unclear why we do this broken double-checked locking.
The mutex is not held for the whole duration of CPU profiling.
Fixes #8365.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/116290043
Eliminating use of this extension makes it easier to port the Go runtime
to other compilers. This CL also disables the extension in cc to prevent
accidental use.
LGTM=rsc, khr
R=rsc, aram, khr, dvyukov
CC=axwalk, golang-codereviews
https://golang.org/cl/106790044
FlagNoGC is unused now.
FlagNoInvokeGC is unneeded, as we don't invoke GC
on g0 and when holding locks anyway.
mal/malloc have very few uses, and you never remember
the exact set of flags they use or the difference between them.
Moreover, eventually we need to give exact types to all allocations,
something that mal/malloc do not support.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/117580043
Shrinkstack does not touch normal heap anymore,
so we can shrink stacks concurrently with marking.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, khr, rlh, rsc
https://golang.org/cl/122130043
Introduce the mFunction type to represent an mcall/onM-able function.
Name such functions using _m.
LGTM=bradfitz
R=bradfitz
CC=golang-codereviews
https://golang.org/cl/121320043
Hashing on the bytes instead of the words does
a (much) better job of using all the bits, so that
maps of floats have linear performance.
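To illustrate why byte-level hashing helps float keys (a sketch using FNV-1a, not the runtime's own memhash): consecutive float64 values differ in only a few bits, and consuming all eight bytes spreads them across buckets.
package main

import (
    "encoding/binary"
    "fmt"
    "math"
)

// hashFloat hashes the full byte representation of a float64 key.
func hashFloat(f float64) uint64 {
    var b [8]byte
    binary.LittleEndian.PutUint64(b[:], math.Float64bits(f))
    h := uint64(14695981039346656037) // FNV offset basis
    for _, c := range b {
        h ^= uint64(c)
        h *= 1099511628211 // FNV prime
    }
    return h
}

func main() {
    for _, f := range []float64{1, 2, 3, 4} {
        fmt.Printf("%g -> %#016x\n", f, hashFloat(f))
    }
}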
LGTM=khr
R=golang-codereviews, khr
CC=adonovan, golang-codereviews
https://golang.org/cl/126720044
The implementation 'return 0' results in too many collisions.
LGTM=khr
R=golang-codereviews, adonovan, khr
CC=golang-codereviews, iant, khr, r
https://golang.org/cl/125720044
It can happen legitimately if a profiling signal arrives at just the wrong moment.
It's harmless.
Fixes #8153.
LGTM=minux
R=golang-codereviews, minux
CC=golang-codereviews, iant, r
https://golang.org/cl/118670043
Full spans can't be passed to UncacheSpan now that we have gotten rid of free.
LGTM=rsc
R=golang-codereviews
CC=golang-codereviews, khr, rsc
https://golang.org/cl/119490044
Instead of including <sys/types.h> to get size_t, include
the ISO C standard <stddef.h> header, which defines fewer additional
types at risk of colliding with user code. In particular, this
prevents collisions between <sys/types.h>'s userspace definitions and
the kernel definitions needed by defs_linux.go.
Also, -cdefs mode uses #pragma pack, so we can keep misaligned fields.
Fixes #8477.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/120610043
We call scanblock for lots of small root pieces,
e.g. for every stack frame's args and locals area.
Every scanblock invocation calls getempty/putempty,
which access a lock-free stack shared among all worker threads.
A one-element local cache allows most scanblock calls
to proceed without accessing the shared stack.
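A sketch of the one-element cache idea (hypothetical names; the real code manages workbufs in mgc0.c):
type workbuf struct {
    obj  [512]uintptr
    nobj int
}

// wbCache is a per-worker one-element cache. Only when the cache misses
// do we fall back to the shared lock-free stack (stubbed out here).
type wbCache struct {
    buf *workbuf
}

func (c *wbCache) getempty() *workbuf {
    if b := c.buf; b != nil {
        c.buf = nil
        return b
    }
    return getemptyShared()
}

func (c *wbCache) putempty(b *workbuf) {
    if c.buf == nil {
        c.buf = b
        return
    }
    putemptyShared(b)
}

func getemptyShared() *workbuf  { return new(workbuf) } // stand-in for the shared stack
func putemptyShared(b *workbuf) {}                      // stand-in for the shared stack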
LGTM=rsc
R=golang-codereviews, rlh
CC=golang-codereviews, khr, rsc
https://golang.org/cl/121250043
We have an autogenerated version in zruntime_defs.
I am not sure what the consequences are, as gdb never printed any values for me.
But it looks unnecessary to duplicate it manually.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews, iant, khr
https://golang.org/cl/115660043
For consistency with other code, as that was the only use of
memcopy outside of alg.goc.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/122030044
The gccgo version of USED only accepts a single variable, so
this simplifies merging.
LGTM=minux, dave
R=golang-codereviews, minux, dave
CC=golang-codereviews
https://golang.org/cl/115630043
A good cleanup anyway, and it makes some room for an additional
field needed for issue 8412.
Update #8412
LGTM=iant
R=iant, khr
CC=golang-codereviews
https://golang.org/cl/112700043
6a and 8a rearrange memmove such that the fallthrough from move_1or2 to move_0 ends up being a JMP to a RET. Insert an explicit RET to prevent such silliness.
Do the same for memclr as prophylaxis.
benchmark old ns/op new ns/op delta
BenchmarkMemmove1 4.59 4.13 -10.02%
BenchmarkMemmove2 4.58 4.13 -9.83%
LGTM=khr
R=golang-codereviews, dvyukov, minux, ruiu, bradfitz, khr
CC=golang-codereviews
https://golang.org/cl/120930043
Create proper closures so hash functions can be called
directly from Go. Rearrange calling convention so return
value is directly accessible.
LGTM=dvyukov
R=golang-codereviews, dvyukov, dave, khr
CC=golang-codereviews
https://golang.org/cl/119360043
Several reasons:
1. Significantly simplifies runtime.
2. This code proved to be buggy.
3. Free is incompatible with bump-the-pointer allocation.
4. We want to write runtime in Go, Go does not have free.
5. Too much code to free env strings on startup.
LGTM=khr
R=golang-codereviews, josharian, tracey.brendan, khr
CC=bradfitz, golang-codereviews, r, rlh, rsc
https://golang.org/cl/116390043
Stand-alone this test is fine. Run together with
others, however, the stack used can actually go
negative because other tests are freeing stack
during its execution.
This behavior is new with the new stack allocator.
The old allocator never returned (min-sized) stacks.
This test is fairly poor - it needs to run in
isolation to be accurate. Maybe we should delete it.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/119330044
The DISPATCH and CALLFN macro definitions depend on an inconsistency
between the internal cpp mini-implementation and the language proper in
whether center-dot is an identifier character. The macro depends on it not
being an identifier character, but the resulting code depends on it being one.
Remove the dependence on the inconsistency by placing the center-dot into
the macro invocation rather than the body.
No semantic change. This is just renaming macro arguments.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/119320043
This change introduces gomallocgc, a Go clone of mallocgc.
Only a few uses have been moved over, so there are still
lots of uses from C. Many of these C uses will be moved
over to Go (e.g. in slice.goc), but probably not all.
What should remain of C's mallocgc is an open question.
LGTM=rsc, dvyukov
R=rsc, khr, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/108840046
Implement the design described in:
https://docs.google.com/document/d/1v4Oqa0WwHunqlb8C3ObL_uNQw3DfSY-ztoA-4wWbKcg/pub
Summary of the changes:
GC uses "2 bits per word" pointer type info embedded directly into the bitmap (sketched below).
Scanning of stacks/data/heap is unified.
The old span types go away.
Compiler generates "sparse" 4-bit type info for GC (directly for the GC bitmap).
Linker generates "dense" 2-bit type info for data/bss (the same as stacks use).
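As a rough sketch of the 2-bit encoding (constant names and packing as commonly described for mgc0 of that era; treat as illustrative, not the exact layout):
const (
    BitsDead      = 0 // dead or unused word
    BitsScalar    = 1 // non-pointer word
    BitsPointer   = 2 // pointer word
    BitsMultiWord = 3 // first word of a multi-word element (e.g. string, iface)
)

// typeBits extracts the 2-bit class for word i, with four entries
// packed per bitmap byte.
func typeBits(bitmap []byte, i int) int {
    return int(bitmap[i/4]>>(uint(i%4)*2)) & 3
}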
Summary of results:
-1680 lines of code total (-1000+ in mgc0.c only)
-25% memory consumption
-3-7% binary size
-15% GC pause reduction
-7% run time reduction
LGTM=khr
R=golang-codereviews, rsc, christoph, khr
CC=golang-codereviews, rlh
https://golang.org/cl/106260045
Sweepone may be running while a new span is allocating. It
must not see the state updated while the sweepgen is unset.
Fixes #8399
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/118050043
This is bad for 2 reasons:
1. If the code under the lock ever grows the stack,
it will deadlock, as stack growing acquires the mheap lock.
2. It currently deadlocks with SetCPUProfileRate:
scavenger locks mheap, receives prof signal and tries to lock prof lock;
meanwhile SetCPUProfileRate locks prof lock and tries to grow stack
(presumably in runtime.unlock->futexwakeup). Boom.
Let's assume that it
Fixes #8407.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews, khr
https://golang.org/cl/112640043
With cl/112640043 TestCgoDeadlockCrash episodically prints:
unexpected return pc for runtime.newstackcall
After adding debug output I see the following trace:
runtime: unexpected return pc for runtime.newstackcall called from 0xc208011b00
runtime.throw(0x414da86)
src/pkg/runtime/panic.c:523 +0x77
runtime.gentraceback(0x40165fc, 0xba440c28, 0x0, 0xc208d15200, 0xc200000000, 0xc208ddfd20, 0x20, 0x0, 0x0, 0x300)
src/pkg/runtime/traceback_x86.c:185 +0xca4
runtime.callers(0x1, 0xc208ddfd20, 0x20)
src/pkg/runtime/traceback_x86.c:438 +0x98
mcommoninit(0xc208ddfc00)
src/pkg/runtime/proc.c:369 +0x5c
runtime.allocm(0xc208052000)
src/pkg/runtime/proc.c:686 +0xa6
newm(0x4017850, 0xc208052000)
src/pkg/runtime/proc.c:933 +0x27
startm(0xc208052000, 0x100000001)
src/pkg/runtime/proc.c:1011 +0xba
wakep()
src/pkg/runtime/proc.c:1071 +0x57
resetspinning()
src/pkg/runtime/proc.c:1297 +0xa1
schedule()
src/pkg/runtime/proc.c:1366 +0x14b
runtime.gosched0(0xc20808e240)
src/pkg/runtime/proc.c:1465 +0x5b
runtime.newstack()
src/pkg/runtime/stack.c:891 +0x44d
runtime: unexpected return pc for runtime.newstackcall called from 0xc208011b00
runtime.newstackcall(0x4000cbd, 0x4000b80)
src/pkg/runtime/asm_amd64.s:278 +0x6f
I suspect that it can happen on any stack split.
So don't unwind g0 stack.
Also, that comment is lying: we can traceback w/o mcache;
the CPU profiler does that.
LGTM=rsc
R=golang-codereviews
CC=golang-codereviews, khr, rsc
https://golang.org/cl/120040043
So we can tell from a binary which version of
Go built it.
LGTM=minux, rsc
R=golang-codereviews, minux, khr, rsc, dave
CC=golang-codereviews
https://golang.org/cl/117040043
In both cases we lie to malloc about the actual size that we need.
In panic we ask for less memory than we are going to use.
In slice we ask for more memory than we are going to use
(potentially asking for a fractional number of elements).
This breaks the new GC.
LGTM=khr
R=golang-codereviews, dave, khr
CC=golang-codereviews, rsc
https://golang.org/cl/116940043
Even though pointers are 4 bytes, the stack frame should be kept
a multiple of 8 bytes so that return addresses pushed on the stack
are properly aligned.
Fixes #8379.
LGTM=dvyukov, minux
R=minux, bradfitz, dvyukov, dave
CC=golang-codereviews
https://golang.org/cl/115840048
These correspond to 2 and 3 word fat copies/clears on 8g, which dominate usage in the stdlib. (70% of copies and 46% of clears are for 2 or 3 words.) I missed these in CL 111350043, which added 2 and 3 word benchmarks for 6g. A follow-up CL will optimize these cases.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/115160043
benchmark old ns/op new ns/op delta
BenchmarkSelectUncontended 220 165 -25.00%
BenchmarkSelectContended 209 161 -22.97%
BenchmarkSelectProdCons 1042 904 -13.24%
But more importantly, this change will allow us
to get rid of the free function in the runtime.
Fixes #6494.
LGTM=rsc, khr
R=golang-codereviews, rsc, dominik.honnef, khr
CC=golang-codereviews, remyoudompheng
https://golang.org/cl/107670043
The CopyFat benchmarks were changed in CL 92760044. See CL 111350043 for discussion.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/116000043
These benchmarks are important for performance. When compiling the stdlib:
* 77.1% of the calls to sgen (copyfat) are for 16 bytes; another 8.7% are for 24 bytes. (The next most common is 32 bytes, at 5.7%.)
* Over half the calls to clearfat are for 16 or 24 bytes.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews
https://golang.org/cl/111350043
updatememstats is called on both the m and g stacks.
Call into flushallmcaches correctly; flushallmcaches
can only run on the M stack.
This is somewhat temporary. Once ReadMemStats is in
Go we can make all of this code M-only.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/116880043
Redo stack allocation. This is mostly the same as
the original CL with a few bug fixes.
1. add racemalloc() for stack allocations
2. fix poolalloc/poolfree to terminate free lists correctly.
3. adjust span ref count correctly.
4. don't use cache for sizes >= StackCacheSize.
Should fix bugs and memory leaks in original changelist.
««« original CL description
undo CL 104200047 / 318b04f28372
Breaks windows and race detector.
TBR=rsc
««« original CL description
runtime: stack allocator, separate from mallocgc
In order to move malloc to Go, we need to have a
separate stack allocator. If we run out of stack
during malloc, malloc will not be available
to allocate a new stack.
Stacks are the last remaining FlagNoGC objects in the
GC heap. Once they are out, we can get rid of the
distinction between the allocated/blockboundary bits.
(This will be in a separate change.)
Fixes #7468
Fixes #7424
LGTM=rsc, dvyukov
R=golang-codereviews, dvyukov, khr, dave, rsc
CC=golang-codereviews
https://golang.org/cl/104200047
»»»
TBR=rsc
CC=golang-codereviews
https://golang.org/cl/101570044
»»»
LGTM=dvyukov
R=dvyukov, dave, khr, alex.brainman
CC=golang-codereviews
https://golang.org/cl/112240044
Resolves TODO for not walking all goroutines in NumGoroutines.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/107290044
1. Add select on sync channels benchmark.
2. Make channels in BenchmarkSelectNonblock shared.
With GOMAXPROCS=1 it is the same, but with GOMAXPROCS>1
it becomes a more interesting benchmark.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews
https://golang.org/cl/115780043
The issue was discovered during testing of a change to the runtime.
Even if it is unlikely to happen, the comment can save the
next person who hits it an hour.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rlh, rsc
https://golang.org/cl/116790043
I don't see how it can lead to bad things today.
But it's better to kill it before it does.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/111130045
This CL adds 'dropg', which is called to drop the association
between m and its current goroutine, and it makes schedule
handle locked goroutines correctly, instead of requiring all
callers of schedule to do that.
The effect is that if you want to take over an m for, say,
garbage collection work while still allowing the current g
to run on some other m, you can do an mcall to a function
that is:
    // dissociate gp
    dropg();
    gp->status = Gwaiting; // for ready
    // put gp on run queue for others to find
    runtime·ready(gp);
    /* ... do other work here ... */
    // done with m, let it run goroutines again
    schedule();
Before this CL, the dropg() body had to be written explicitly,
and the check for lockedg before schedule had to be
written explicitly too, both of which make the code a bit
more fragile than it needs to be.
LGTM=iant
R=dvyukov, iant
CC=golang-codereviews, rlh
https://golang.org/cl/113110043
When we switched to 8K pages,
the heap started to grow by 128K instead of 64K,
because the code implicitly assumed that pages are 4K.
Fix that and make the code more robust.
LGTM=khr
R=golang-codereviews, dave, khr
CC=golang-codereviews, rsc
https://golang.org/cl/106450044
Maxstring is not updated in the new string routines,
which makes the runtime think that long strings are bogus.
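The invariant being restored, sketched with sync/atomic in place of the runtime's internal atomics: any routine that creates a string must raise maxstring to at least its length.
package sketch

import "sync/atomic"

var maxstring uint64 = 256 // assumed starting bound

// bumpMaxstring raises maxstring to at least n using a CAS loop.
func bumpMaxstring(n uint64) {
    for {
        ms := atomic.LoadUint64(&maxstring)
        if n <= ms || atomic.CompareAndSwapUint64(&maxstring, ms, n) {
            return
        }
    }
}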
Fixes #8339.
LGTM=crawshaw, iant
R=golang-codereviews, crawshaw, iant
CC=golang-codereviews, khr, rsc
https://golang.org/cl/110930043
Both stdout and stderr are sent to /dev/null in Android
apps. Introducing fatalf allows Android to implement its
own copy that sends fatal errors to __android_log_print.
LGTM=minux, dave
R=minux, dave
CC=golang-codereviews
https://golang.org/cl/108400045
Based on cl/69170045 by Elias Naur.
There are currently several schemes for acquiring a TLS
slot to save the g register. None of them appear to work
for android. The closest are linux and darwin.
Linux uses a linker TLS relocation. This is not supported
by the android linker.
Darwin uses a fixed offset, and calls pthread_key_create
until it gets the slot it wants. As the runtime loads
late in the Android process lifecycle, after an
arbitrary number of other libraries, we cannot rely on
any particular slot being available.
So we call pthread_key_create, take the first slot we are
given, and put it in runtime.tlsg, which we turn into a
regular variable in cmd/ld.
Makes android/arm cgo binaries work.
LGTM=minux
R=elias.naur, minux, dave, josharian
CC=golang-codereviews
https://golang.org/cl/106380043
The code in GC that handles gp->gobuf.ctxt is wrong,
because it does not mark the ctxt object itself;
it just queues the ctxt object for scanning.
So the ctxt object can be collected as garbage.
However, Gobuf.ctxt is void*, so it's always marked and
scanned through G.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, khr, rsc
https://golang.org/cl/105490044
runtime·usleep and runtime·osyield fall back to calling an
assembly wrapper for the libc functions in the absence of an m,
so they can be called in cgo callback context.
LGTM=rsc
R=minux.ma, rsc
CC=dave, golang-codereviews
https://golang.org/cl/102620044
The main changes fall into a few patterns:
1. Replace #define with enum.
2. Add /*c2go */ comment giving effect of #define.
This is necessary for function-like #defines and
non-enum-able #defined constants.
(Not all compilers handle negative or large enums.)
3. Add extra braces in struct initializer.
(c2go does not implement the full rules.)
This is enough to let c2go typecheck the source tree.
There may be more changes once it is doing
other semantic analyses.
LGTM=minux, iant
R=minux, dave, iant
CC=golang-codereviews
https://golang.org/cl/106860045
A TLS slot is reserved by _rt0_.*_plan9 as an automatic and
its address (which is static on Plan 9) is saved in the
global _privates symbol. The startup linkage now is exactly
like that from Plan 9 libc, and the way we access g is
exactly as if we'd have used privalloc(2).
Aside from making the code more standard, this change
drastically simplifies it, both for 386 and for amd64, and
makes the Plan 9 code in liblink common for both 386 and
amd64.
The amd64 runtime code was cleared of nxm assumptions, and
now runs on the standard Plan 9 kernel.
Note handling fixes will follow in a separate CL.
LGTM=rsc
R=golang-codereviews, rsc, bradfitz, dave
CC=0intro, ality, golang-codereviews, jas, minux.ma, mischief
https://golang.org/cl/101510049
We restored registers correctly in the usual case where the thread
is a Go-managed thread and called runtime·sighandler, but we
failed to do so when runtime·sigtramp was called on a cgo-created
thread. In that case, runtime·sigtramp called runtime·badsignal,
a Go function, and did not restore registers after it returned.
LGTM=rsc, dave
R=rsc, dave
CC=golang-codereviews, minux.ma
https://golang.org/cl/105280050
Breaks windows and race detector.
TBR=rsc
««« original CL description
runtime: stack allocator, separate from mallocgc
In order to move malloc to Go, we need to have a
separate stack allocator. If we run out of stack
during malloc, malloc will not be available
to allocate a new stack.
Stacks are the last remaining FlagNoGC objects in the
GC heap. Once they are out, we can get rid of the
distinction between the allocated/blockboundary bits.
(This will be in a separate change.)
Fixes #7468
Fixes #7424
LGTM=rsc, dvyukov
R=golang-codereviews, dvyukov, khr, dave, rsc
CC=golang-codereviews
https://golang.org/cl/104200047
»»»
TBR=rsc
CC=golang-codereviews
https://golang.org/cl/101570044
In order to move malloc to Go, we need to have a
separate stack allocator. If we run out of stack
during malloc, malloc will not be available
to allocate a new stack.
Stacks are the last remaining FlagNoGC objects in the
GC heap. Once they are out, we can get rid of the
distinction between the allocated/blockboundary bits.
(This will be in a separate change.)
Fixes #7468
Fixes #7424
LGTM=rsc, dvyukov
R=golang-codereviews, dvyukov, khr, dave, rsc
CC=golang-codereviews
https://golang.org/cl/104200047
Remove GC bitmap backward scanning.
This was already done once in https://golang.org/cl/5530074/
Still makes GC a bit faster.
On the garbage benchmark, before:
gc-pause-one=237345195
gc-pause-total=4746903
cputime=32427775
time=32458208
after:
gc-pause-one=235484019
gc-pause-total=4709680
cputime=31861965
time=31877772
Also prepares mgc0.c for future changes.
R=golang-codereviews, khr, khr
CC=golang-codereviews, rsc
https://golang.org/cl/105380043
newproc takes two extra pointers, not two extra registers.
On amd64p32 (nacl) they are different.
We diagnosed this before the 1.3 cut but the tree was frozen.
I believe this is causing the random problems on the builder.
Fixes #8199.
TBR=r
CC=golang-codereviews
https://golang.org/cl/102710043
Output the number of spinning threads;
this is useful for understanding whether the scheduler
is in a steady state or not.
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/103540045
Say when a goroutine is locked to an OS thread in crash reports
and goroutine profiles.
It can be useful to understand which goroutines consume OS threads
(syscall and locked), e.g. if you forget to call UnlockOSThread
or leak locked goroutines.
R=golang-codereviews
CC=golang-codereviews, rsc
https://golang.org/cl/94170043
The runtime has historically held two dedicated values g (current goroutine)
and m (current thread) in 'extern register' slots (TLS on x86, real registers
backed by TLS on ARM).
This CL removes the extern register m; code now uses g->m.
On ARM, this frees up the register that formerly held m (R9).
This is important for NaCl, because NaCl ARM code cannot use R9 at all.
The Go 1 macrobenchmarks (those with per-op times >= 10 µs) are unaffected:
BenchmarkBinaryTree17 5491374955 5471024381 -0.37%
BenchmarkFannkuch11 4357101311 4275174828 -1.88%
BenchmarkGobDecode 11029957 11364184 +3.03%
BenchmarkGobEncode 6852205 6784822 -0.98%
BenchmarkGzip 650795967 650152275 -0.10%
BenchmarkGunzip 140962363 141041670 +0.06%
BenchmarkHTTPClientServer 71581 73081 +2.10%
BenchmarkJSONEncode 31928079 31913356 -0.05%
BenchmarkJSONDecode 117470065 113689916 -3.22%
BenchmarkMandelbrot200 6008923 5998712 -0.17%
BenchmarkGoParse 6310917 6327487 +0.26%
BenchmarkRegexpMatchMedium_1K 114568 114763 +0.17%
BenchmarkRegexpMatchHard_1K 168977 169244 +0.16%
BenchmarkRevcomp 935294971 914060918 -2.27%
BenchmarkTemplate 145917123 148186096 +1.55%
Minux previously reported larger variations, but these were caused by
run-to-run noise, not repeatable slowdowns.
Actual code changes by Minux.
I only did the docs and the benchmarking.
LGTM=dvyukov, iant, minux
R=minux, josharian, iant, dave, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/109050043
MOV with SSE registers seems faster than REP MOVSQ if the
size being copied is less than about 2K. Previously we
didn't use MOV if the memory region was larger than 256
bytes. This patch improves the performance of 257 ~ 2048
byte non-overlapping copies by using MOV.
Here is the benchmark result on Intel Xeon 3.5GHz (Nehalem).
benchmark old ns/op new ns/op delta
BenchmarkMemmove16 4 4 +0.42%
BenchmarkMemmove32 5 5 -0.20%
BenchmarkMemmove64 6 6 -0.81%
BenchmarkMemmove128 7 7 -0.82%
BenchmarkMemmove256 10 10 +1.92%
BenchmarkMemmove512 29 16 -44.90%
BenchmarkMemmove1024 37 25 -31.55%
BenchmarkMemmove2048 55 44 -19.46%
BenchmarkMemmove4096 92 91 -0.76%
benchmark old MB/s new MB/s speedup
BenchmarkMemmove16 3370.61 3356.88 1.00x
BenchmarkMemmove32 6368.68 6386.99 1.00x
BenchmarkMemmove64 10367.37 10462.62 1.01x
BenchmarkMemmove128 17551.16 17713.48 1.01x
BenchmarkMemmove256 24692.81 24142.99 0.98x
BenchmarkMemmove512 17428.70 31687.72 1.82x
BenchmarkMemmove1024 27401.82 40009.45 1.46x
BenchmarkMemmove2048 36884.86 45766.98 1.24x
BenchmarkMemmove4096 44295.91 44627.86 1.01x
LGTM=khr
R=golang-codereviews, gobot, khr
CC=golang-codereviews
https://golang.org/cl/90500043
This requires minimal changes to the runtime hooks. In particular,
synchronization events must be done only on valid addresses now,
so I've added the additional checks to race.c.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/101000046
The Afterprologue check was required when we did not know
about return arguments of functions and/or they were not zeroed.
Now 100% precision is required for stacks due to stack copying,
so it must work without Afterprologue one way or another.
I can limit this change for 1.3 to merely adding a TODO,
but this check is super confusing, so I don't want this knowledge to get lost.
LGTM=rsc
R=golang-codereviews, gobot, rsc, khr
CC=golang-codereviews, khr, rsc
https://golang.org/cl/96580045
Also implement the go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
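For reference, the annotation form (a sketch modeled on the runtime's add helper; the directive must immediately precede the declaration it applies to):
package sketch

import "unsafe"

// add is nosplit: the compiler will not insert a stack-split check in
// its prologue, so it is safe to call where the stack must not grow.
//go:nosplit
func add(p unsafe.Pointer, x uintptr) unsafe.Pointer {
    return unsafe.Pointer(uintptr(p) + x)
}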
benchmark old ns/op new ns/op delta
BenchmarkRuneIterate 534 474 -11.24%
BenchmarkRuneIterate2 535 470 -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
Reportedly in the Linux 3.16 kernel the VDSO will not have
section headers or a normal symbol table.
Too late for 1.3 but perhaps for 1.3.1, if there is one.
Fixes #8197.
LGTM=rsc
R=golang-codereviews, mattn.jp, rsc
CC=golang-codereviews
https://golang.org/cl/101260044
It appears that something about Go on Windows
cannot handle the fault caused by a jump to address 0.
The way Go represents and calls functions, this
never happened at all, until CL 105140044.
This CL changes the code added in CL 105140044
to make jump to 0 impossible once again.
Fixes #8047 (again, on Windows).
TBR=bradfitz
R=golang-codereviews, dave
CC=adg, golang-codereviews, iant, r
https://golang.org/cl/105120044
jmpdefer modifies PC, SP, and LR, and not atomically,
so walking past jmpdefer will often end up in a state
where the three are not a consistent execution snapshot.
This was causing warning messages a few frames later
when the traceback realized it was confused, but given
the right memory it could easily crash instead.
Update #8153
LGTM=minux, iant
R=golang-codereviews, minux, iant
CC=golang-codereviews, r
https://golang.org/cl/107970043
A runtime.Goexit during a panic-invoked deferred call
left the panic stack intact even though all the stack frames
are gone when the goroutine is torn down.
The next goroutine to reuse that struct will have a
bogus panic stack and can cause the traceback routines
to walk into garbage.
Most likely to happen during tests, because t.Fatal might
be called during a deferred func and uses runtime.Goexit.
This "not enough cleared in Goexit" failure mode has
happened to us multiple times now. Clear all the pointers
that don't make sense to keep, not just gp->panic.
Fixes #8158.
LGTM=iant, dvyukov
R=iant, dvyukov
CC=golang-codereviews
https://golang.org/cl/102220043
The 1-byte write was silently clearing a byte on the stack.
If there was another function call with more arguments
in the same stack frame, no harm done.
Otherwise, if the variable at that location was already zero,
no harm done.
Otherwise, problems.
Fixes #8139.
LGTM=dsymonds
R=golang-codereviews, dsymonds
CC=golang-codereviews, iant, r
https://golang.org/cl/100940043
We were requiring that the defer stack and the panic stack
be completely processed, thinking that if any were left over
the stack scan and the defer stack/panic stack must be out
of sync. It turns out that the panic stack may well have
leftover entries in some situations, and that's okay.
Fixes #8132.
LGTM=minux, r
R=golang-codereviews, minux, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/100900044
C globals are conservatively scanned. This helps
avoid false retention, especially on 32-bit systems.
LGTM=rsc
R=golang-codereviews, khr, rsc
CC=golang-codereviews
https://golang.org/cl/102040043
The 'continuation pc' is where the frame will continue
execution, if anywhere. For a frame that stopped execution
due to a CALL instruction, the continuation pc is immediately
after the CALL. But for a frame that stopped execution due to
a fault, the continuation pc is the pc after the most recent CALL
to deferproc in that frame, or else 0. That is where execution
will continue, if anywhere.
The liveness information is only recorded for CALL instructions.
This change makes sure that we never look for liveness information
except for CALL instructions.
Using a valid PC fixes crashes when a garbage collection or
stack copying tries to process a stack frame that has faulted.
Record continuation pc in heapdump (format change).
Fixes #8048.
LGTM=iant, khr
R=khr, iant, dvyukov
CC=golang-codereviews, r
https://golang.org/cl/100870044
Update #2675
The code here was using the error check for Linux/386,
not the one for FreeBSD/386. Most of the time it worked.
Thanks to Neel Natu (FreeBSD developer) for finding this.
The s/JCC/JAE/ a few lines later is a no-op but makes the
test match the rest of the file. Why we write JAE instead of JCC
I don't know, but the two are equivalent and the file might
as well be consistent.
LGTM=bradfitz, minux
R=golang-codereviews, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/99680044
The rtype struct is meant to be a copy of reflect.rtype. The
zero field was added to reflect.rtype in 18495:6e50725ac753.
LGTM=rsc
R=khr, rsc
CC=golang-codereviews
https://golang.org/cl/93660045
Add nacl.bash, the NaCl version of all.bash.
It's a separate script because it builds a variant of package syscall
with a large zip file embedded in it, containing all the input files
needed for tests.
Disable various tests new since the last round, mostly the ones using os/exec.
Fixes #7945.
LGTM=dave
R=golang-codereviews, remyoudompheng, dave, bradfitz
CC=golang-codereviews
https://golang.org/cl/100590044
The move from 4kB to 8kB in Go 1.2 was to eliminate many stack split hot spots.
The move back to 4kB was predicated on copying stacks eliminating
the potential for hot spots.
Unfortunately, the fact that stacks do not copy 100% of the time means
that hot spots can still happen under the right conditions, and the slowdown
is worse now than it was in Go 1.2. There is a real program in issue 8030 that
sees about a 30x slowdown: it has a reflect call near the top of the stack
which inhibits any stack copying on that segment.
Go back to 8kB until stack copying can be used 100% of the time.
Fixes #8030.
LGTM=khr, dave, iant
R=iant, khr, r, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/92540043
Currently freeOSMemory runs only the marking phase of GC, but not the sweeping phase,
so memory freed recently is not released after freeOSMemory.
Do both marking and sweeping during freeOSMemory.
Fixes #8019.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, rsc
https://golang.org/cl/97550043
The GC program describing a data structure sometimes trusts the
pointer base type and other times does not (if not, the garbage collector
must fall back on per-allocation type information stored in the heap).
Make the scanning of a pointer in an interface do the same.
This fixes a crash in a particular use of reflect.SliceHeader.
Fixes #8004.
LGTM=khr
R=golang-codereviews, khr
CC=0xe2.0x9a.0x9b, golang-codereviews, iant, r
https://golang.org/cl/100470045
mstats.last_gc is Unix time now, but it is compared with abstract monotonic time.
On my machine GC is forced every 5 mins regardless of last_gc.
LGTM=rsc
R=golang-codereviews
CC=golang-codereviews, iant, rsc
https://golang.org/cl/91350045
I have no test case for this at tip.
The original report included a program crashing at revision 88ac7297d2fa.
I tested this code at that revision and it does fix the crash.
However, at tip the reported code no longer crashes, presumably
because some allocation patterns have changed. I believe the
bug is still present at tip and that this code still fixes it.
Fixes #7143.
LGTM=alex.brainman
R=golang-codereviews, alex.brainman
CC=dvyukov, golang-codereviews
https://golang.org/cl/96300046
If it's not used (such as on other systems or if softfloat
is disabled) the linker will discard it.
The alternative is to teach cmd/go that every binary
depends on math implicitly on arm. I started down that
path but it's too scary. If we're going to get dependencies
right we should get dependencies right.
Fixes #6994.
LGTM=bradfitz, dave
R=golang-codereviews, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/95290043
<enter reason for undo>
««« original CL description
runtime/race: fix the link for the race detector.
LGTM=bradfitz
R=golang-dev, bradfitz
CC=golang-codereviews
https://golang.org/cl/100330043
»»»
TBR=minux
R=minux.ma
CC=golang-codereviews
https://golang.org/cl/96200044
Number of lost samples was overcounted (never reset).
Also remove unused variable (it's trivial to restore it for debugging if needed).
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews, rsc
https://golang.org/cl/96060043
Where the spelling changed from British to
US norm (e.g., optimise -> optimize) it follows
the style in that file.
LGTM=adonovan
R=golang-codereviews, adonovan
CC=golang-codereviews
https://golang.org/cl/96980043
Because gotraceback is called early and often, its cache commits to the value of getenv("GOTRACEBACK") before getenv is even ready. So now we reset its cache once getenv becomes ready. Panicking programs now dump core again.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/97800045
If slice append is the only place where a program allocates,
then it will consume all available memory w/o triggering GC.
This was demonstrated in the issue.
Fixes #7922.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews, iant, khr
https://golang.org/cl/91010048
The monotonic clock patch changed all runtime times
to abstract monotonic time. As a result, the user-visible
MemStats.LastGC became monotonic time as well.
Restore Unix time for LastGC.
This is the simplest way to expose time.now to the runtime that I found.
Another option would be to change time.now to a C-called
int64 runtime.unixnanotime() and then express time.now in terms of it.
But this would require introducing two 64-bit divisions into time.now.
Another option would be to change time.now to a C-called
void runtime.unixnanotime1(struct {int64 sec, int32 nsec} *now)
and then express both time.now and runtime.unixnanotime in terms of it.
Fixes #7852.
LGTM=minux.ma, iant
R=minux.ma, rsc, iant
CC=golang-codereviews
https://golang.org/cl/93720045
The backing memory for >1 word interfaces was being scanned
conservatively.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/94000043
Use a real type for Gs instead of scanning them conservatively.
Zero the schedlink pointer when it is dead.
Update #7820
LGTM=rsc
R=rsc, dvyukov
CC=golang-codereviews
https://golang.org/cl/89360043
This has typically crashed in the past, although usually with
an 'all goroutines are asleep - deadlock!' message that shows
no goroutines (because there aren't any).
Previous discussion at:
https://groups.google.com/d/msg/golang-nuts/uCT_7WxxopQ/BoSBlLFzUTkJ
https://groups.google.com/d/msg/golang-dev/KUojayEr20I/u4fp_Ej5PdUJ
http://golang.org/issue/7711
There is general agreement that runtime.Goexit terminates the
main goroutine, so that main cannot return, so the program does
not exit.
The interpretation that all other goroutines exiting causes an
exit(0) is relatively new and was not part of those discussions.
That is what this CL changes.
Thankfully, even though the exit(0) has been there for a while,
some other accounting bugs made it very difficult to trigger,
so it is reasonable to replace. In particular, see golang.org/issue/7711#c10
for an examination of the behavior across past releases.
Fixes #7711.
LGTM=iant, r
R=golang-codereviews, iant, dvyukov, r
CC=golang-codereviews
https://golang.org/cl/88210044
Having the pointers means you can grub around in the
binary finding out more about them.
This helped with issue 7748.
LGTM=minux.ma, bradfitz
R=golang-codereviews, minux.ma, bradfitz
CC=golang-codereviews
https://golang.org/cl/88090045
When I did the original 386 ports on Linux and OS X, I chose to
define GS-relative expressions like 4(GS) as relative to the actual
thread-local storage base, which was usually GS but might not be
(it might be FS, or it might be a different constant offset from GS or FS).
The original scope was limited but since then the rewrites have
gotten out of control. Sometimes GS is rewritten, sometimes FS.
Some ports do other rewrites to enable shared libraries and
other linking. At no point in the code is it clear whether you are
looking at the real GS/FS or some synthesized thing that will be
rewritten. The code manipulating all these is duplicated in many
places.
The first step to fixing issue 7719 is to make the code intelligible
again.
This CL adds an explicit TLS pseudo-register to the 386 and amd64.
As a register, TLS refers to the thread-local storage base, and it
can only be loaded into another register:
    MOVQ TLS, AX
An offset from the thread-local storage base is written off(reg)(TLS*1).
Semantically it is off(reg), but the (TLS*1) annotation marks this as
indexing from the loaded TLS base. This emits a relocation so that
if the linker needs to adjust the offset, it can. For example:
    MOVQ TLS, AX
    MOVQ 8(AX)(TLS*1), CX // load m into CX
On systems that support direct access to the TLS memory, this
pair of instructions can be reduced to a direct TLS memory reference:
    MOVQ 8(TLS), CX // load m into CX
The 2-instruction and 1-instruction forms correspond roughly to
ELF TLS initial exec mode and ELF TLS local exec mode, respectively.
Liblink applies this rewrite on systems that support the 1-instruction form.
The decision is made using only the operating system (and probably
the -shared flag, eventually), not the link mode. If some link modes
on a particular operating system require the 2-instruction form,
then all builds for that operating system will use the 2-instruction
form, so that the link mode decision can be delayed to link time.
Obviously it is late to be making changes like this, but I despair
of correcting issue 7719 and issue 7164 without it. To make sure
I am not changing existing behavior, I built a "hello world" program
for every GOOS/GOARCH combination we have and then worked
to make sure that the rewrite generates exactly the same binaries,
byte for byte. There are a handful of TODOs in the code marking
kludges to get the byte-for-byte property, but at least now I can
explain exactly how each binary is handled.
The targets I tested this way are:
darwin-386
darwin-amd64
dragonfly-386
dragonfly-amd64
freebsd-386
freebsd-amd64
freebsd-arm
linux-386
linux-amd64
linux-arm
nacl-386
nacl-amd64p32
netbsd-386
netbsd-amd64
openbsd-386
openbsd-amd64
plan9-386
plan9-amd64
solaris-amd64
windows-386
windows-amd64
There were four exceptions to the byte-for-byte goal:
windows-386 and windows-amd64 have a time stamp
at bytes 137 and 138 of the header.
darwin-386 and plan9-386 have five or six modified
bytes in the middle of the Go symbol table, caused by
editing comments in runtime/sys_{darwin,plan9}_386.s.
Fixes #7164.
LGTM=iant
R=iant, aram, minux.ma, dave
CC=golang-codereviews
https://golang.org/cl/87920043
Do not consider idle finalizer/bgsweep/timer goroutines as doing something useful.
We can't simply set isbackground for the whole lifetime of the goroutines,
because when the finalizer goroutine calls a user function, we do want to consider it
as doing something useful.
This has been broken due to timers for quite some time.
With background sweep it has become even more broken.
Fixes #7784.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/87960044
Currently Pool can cache up to 15 elements per P, and these elements are not accessible to other Ps.
If a Pool caches large objects, say 2MB, and GOMAXPROCS is set to a large value, say 32,
then the Pool can waste up to 960MB.
The new caching policy caches at most 1 per-P element, the rest is shared between Ps.
Get/Put performance is unchanged. Nested Get/Put performance is 57% worse.
However, overall scalability of nested Get/Put is significantly improved,
so the new policy starts winning under contention.
benchmark old ns/op new ns/op delta
BenchmarkPool 27.4 26.7 -2.55%
BenchmarkPool-4 6.63 6.59 -0.60%
BenchmarkPool-16 1.98 1.87 -5.56%
BenchmarkPool-64 1.93 1.86 -3.63%
BenchmarkPoolOverlflow 3970 6235 +57.05%
BenchmarkPoolOverlflow-4 10935 1668 -84.75%
BenchmarkPoolOverlflow-16 13419 520 -96.12%
BenchmarkPoolOverlflow-64 10295 380 -96.31%
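A sketch of the resulting layout (assumed field names, not the actual sync/pool.go):
package sketch

import "sync"

// poolLocal is the per-P slot: private is owner-only and lock-free,
// shared is the overflow that any P may take from under the mutex.
type poolLocal struct {
    private interface{}
    shared  []interface{}
    mu      sync.Mutex
}

func (l *poolLocal) get() (x interface{}) {
    if l.private != nil { // fast path: no synchronization
        x, l.private = l.private, nil
        return x
    }
    l.mu.Lock() // slow path: shared, possibly contended part
    if n := len(l.shared); n > 0 {
        x = l.shared[n-1]
        l.shared = l.shared[:n-1]
    }
    l.mu.Unlock()
    return x
}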
LGTM=rsc
R=rsc
CC=golang-codereviews, khr
https://golang.org/cl/86020043
It looks like maybe on slower builders 4 seconds is not enough.
Trying to get rid of the flaky failures.
TBR=iant
CC=golang-codereviews
https://golang.org/cl/86870044
It runs too long in -short mode.
Disable the one in init, because it doesn't respect -short.
Make the part that claims to test execution in a finalizer
actually execute the test in the finalizer.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=aram.h, golang-codereviews, iant, khr
https://golang.org/cl/86550045
We originally decided to skip this test in short mode
to prevent the parallel runtime test to timeout on the
Plan 9 builder. This should no longer be required since
the issue was fixed in CL 86210043.
LGTM=dave, bradfitz
R=dvyukov, dave, bradfitz
CC=golang-codereviews, rsc
https://golang.org/cl/84790044
If you pass ns = 100,000 to this function, timediv will
return ms = 0. tsemacquire in /sys/src/9/port/sysproc.c
will return immediately when ms == 0 and the semaphore
cannot be acquired immediately - it doesn't sleep - so
notetsleep will spin, chewing cpu and repeatedly reading
the time, until the 100us have passed.
Thanks to the time reads it won't take too many iterations,
but whatever we are waiting for does not get a chance to
run. Eventually the notetsleep spin loop returns and we
end up in the stoptheworld spin loop - actually a sleep
loop but we're not doing a good job of sleeping.
After 100ms or so of this, the kernel says enough and
schedules a different thread. That thread manages to do
whatever we're waiting for, and the spinning in the other
thread stops. If tsemacquire had actually slept, this
would have happened much quicker.
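One way to avoid the spin is to round the conversion up so a positive timeout never becomes 0 ms (a hedged sketch; the actual CL may differ in detail):
// nsToMs converts a sleep duration for tsemacquire, rounding up so that
// a short positive timeout actually sleeps instead of spinning.
func nsToMs(ns int64) int32 {
    ms := int32(ns / 1000000)
    if ms == 0 && ns > 0 {
        ms = 1
    }
    return ms
}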
Many thanks to Russ Cox for help debugging.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/86210043
Cuts the number of calls from 6 to 2 in the non-debug case.
LGTM=iant
R=golang-codereviews, iant
CC=0intro, aram, golang-codereviews, khr
https://golang.org/cl/86040043
Getenv() should not call malloc when called from
gotraceback(). Instead, we return a static buffer
in this case, with enough room to hold the longest
value.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/85680043
On Plan 9 gotraceback calls getenv calls malloc, and we gotraceback
on every call to gentraceback, which happens during garbage collection.
Honestly I don't even know how this works on Plan 9.
I suspect it does not, and that we are getting by because
no one has tried to run with $GOTRACEBACK set at all.
This will speed up all the other systems by epsilon, since they
won't call getenv and atoi repeatedly.
LGTM=bradfitz
R=golang-codereviews, bradfitz, 0intro
CC=golang-codereviews
https://golang.org/cl/85430046
Given
type Outer struct {
    *Inner
    ...
}
the compiler generates the implementation of (*Outer).M dispatching to
the embedded Inner. The implementation is logically:
func (p *Outer) M() {
    (p.Inner).M()
}
but since the only change here is the replacement of one pointer
receiver with another, the actual generated code overwrites the
original receiver with the p.Inner pointer and then jumps to the M
method expecting the *Inner receiver.
During reflect.Value.Call, we create an argument frame and the
associated data structures to describe it to the garbage collector,
populate the frame, call reflect.call to run a function call using
that frame, and then copy the results back out of the frame. The
reflect.call function does a memmove of the frame structure onto the
stack (to set up the inputs), runs the call, and the memmoves the
stack back to the frame structure (to preserve the outputs).
Originally reflect.call did not distinguish inputs from outputs: both
memmoves were for the full stack frame. However, in the case where the
called function was one of these wrappers, the rewritten receiver is
almost certainly a different type than the original receiver. This is
not a problem on the stack, where we use the program counter to
determine the type information and understand that during (*Outer).M
the receiver is an *Outer while during (*Inner).M the receiver in the
same memory word is now an *Inner. But in the statically typed
argument frame created by reflect, the receiver is always an *Outer.
Copying the modified receiver pointer off the stack into the frame
will store an *Inner there, and then if a garbage collection happens
to scan that argument frame before it is discarded, it will scan the
*Inner memory as if it were an *Outer. If the two have different
memory layouts, the collection will interpret the memory incorrectly.
Fix by only copying back the results.
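A runnable shape of the trigger, for concreteness (field layout chosen for illustration only):
package main

import (
    "fmt"
    "reflect"
)

type Inner struct{ x int }

func (i *Inner) M() int { return i.x }

type Outer struct {
    *Inner
    pad [4]uint64 // make *Outer's pointee layout differ from *Inner's
}

func main() {
    o := &Outer{Inner: &Inner{x: 42}}
    // (*Outer).M is a compiler-generated wrapper: it replaces the *Outer
    // receiver with o.Inner and jumps to (*Inner).M. Before the fix,
    // reflect.call copied that rewritten receiver back into the
    // *Outer-typed argument frame, confusing the garbage collector.
    out := reflect.ValueOf(o).MethodByName("M").Call(nil)
    fmt.Println(out[0].Int()) // 42
}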
Fixes #7725.
LGTM=khr
R=khr
CC=dave, golang-codereviews
https://golang.org/cl/85180043
It turns out there is a relatively common pattern that relies on
an inverted channel semaphore:
gate := make(chan bool, N)
for ... {
    // limit concurrency
    gate <- true
    go func() {
        foo(...)
        <-gate
    }()
}
// join all goroutines
for i := 0; i < N; i++ {
    gate <- true
}
So handle synchronization on inverted semaphores with cap>1.
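A complete, runnable version of the pattern above:
package main

import "fmt"

func main() {
    const N = 4
    gate := make(chan bool, N)
    for job := 0; job < 10; job++ {
        gate <- true // blocks while N goroutines are already running
        go func(j int) {
            fmt.Println("working on job", j)
            <-gate // release the slot
        }(job)
    }
    // join: once all N slots can be refilled, every worker has finished
    for i := 0; i < N; i++ {
        gate <- true
    }
}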
Fixes #7718.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/84880046
Defers generated from cgo lie to us about their argument layout.
Mark those defers as not copyable.
CL 83820043 contains an additional test for this code and should be
checked in (and enabled) after this change is in.
Fixes bug 7695.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews
https://golang.org/cl/84740043
Iterate the right number of times in arrays and channels.
Handle channels with zero-sized objects in them.
Output longer type names if we have them.
Compute argument offset correctly.
LGTM=rsc
R=golang-codereviews, rsc
CC=golang-codereviews
https://golang.org/cl/82980043
I have no idea what this code is for, but it pretty
clearly needs to be uint64, not uint32.
LGTM=aram
R=0intro, aram
CC=golang-codereviews
https://golang.org/cl/84410043
Trying to make GODEBUG=gcdead=1 work with liveness
and in particular ambiguously live variables.
1. In the liveness computation, mark all ambiguously live
variables as live for the entire function, except the entry.
They are zeroed directly after entry, and we need them not
to be poisoned thereafter.
2. In the liveness computation, compute liveness (and deadness)
for all parameters, not just pointer-containing parameters.
Otherwise gcdead poisons untracked scalar parameters and results.
3. Fix liveness debugging print for -live=2 to use correct bitmaps.
(Was not updated for compaction during compaction CL.)
4. Correct varkill during map literal initialization.
Was killing the map itself instead of the inserted value temp.
5. Disable aggressive varkill cleanup for call arguments if
the call appears in a defer or go statement.
6. In the garbage collector, avoid bug scanning empty
strings. An empty string is two zeros. The multiword
code only looked at the first zero and then interpreted
the next two bits in the bitmap as an ordinary word bitmap.
For a string the bits are 11 00, so if a live string was zero
length with a 0 base pointer, the poisoning code treated
the length as an ordinary word with code 00, meaning it
needed poisoning, turning the string into a poison-length
string with base pointer 0. By the same logic I believe that
a live nil slice (bits 11 01 00) will have its cap poisoned.
Always scan full multiword struct.
7. In the runtime, treat both poison words (PoisonGC and
PoisonStack) as invalid pointers that warrant crashes.
Manual testing as follows:
- Create a script called gcdead on your PATH containing:
#!/bin/bash
GODEBUG=gcdead=1 GOGC=10 GOTRACEBACK=2 exec "$@"
- Now you can build a test and then run 'gcdead ./foo.test'.
- More importantly, you can run 'go test -short -exec gcdead std'
to run all the tests.
Fixes #7676.
While here, enable the precise scanning of slices, since that was
disabled due to bugs like these. That now works, both with and
without gcdead.
Fixes #7549.
LGTM=khr
R=khr
CC=golang-codereviews
https://golang.org/cl/83410044
The garbage collector's poison pointers
(0x6969696969696969 and 0x6868686868686868)
are malformed addresses on amd64.
That is, they are not 48-bit addresses sign extended
to 64 bits. This causes a different kind of hardware fault
than the usual 'unmapped page' when accessing such
an address, and OS X 10.9.2 sends the resulting SIGSEGV
incorrectly, making it look like it was user-generated
rather than kernel-generated and does not include the
faulting address. This means that in GODEBUG=gcdead=1
mode, if there is a bug and something tries to dereference
a poisoned pointer, the runtime delivers the SIGSEGV to
os/signal and returns to the faulting code, which faults
again, causing the process to hang instead of crashing.
Fix by rewriting "user-generated" SIGSEGV on OS X to
look like a kernel-generated SIGSEGV with fault address
0xb01dfacedebac1e.
I chose that address because (1) when printed in hex
during a crash, it is obviously spelling out English text,
(2) there are no current Google hits for that pointer,
which will make its origin easy to find once this CL
is indexed, and (3) it is not an altogether inaccurate
description of the situation.
Add a test. Maybe other systems will break too.
LGTM=khr
R=golang-codereviews, khr
CC=golang-codereviews, iant, ken
https://golang.org/cl/83270049
Delaying the runtime.throw until here will print more information.
In particular it will print the signal and code values, which means
it will show the fault address.
The canpanic checks were added recently, in CL 75320043.
They were just not added in exactly the right place.
LGTM=iant
R=dvyukov, iant
CC=golang-codereviews
https://golang.org/cl/83980043
Brad has been asking for this for a while.
I have resisted because I wanted to find a more general way to
do this, one that would keep the performance of code introducing
variables the same as the performance of code that did not.
(See golang.org/issue/3512#c20).
I have not found the more general way, and recent changes to
remove ambiguously live temporaries have blown away the
property I was trying to preserve, so that's no longer a reason
not to make the change.
Fixes #3512.
LGTM=iant
R=iant
CC=bradfitz, golang-codereviews, khr, r
https://golang.org/cl/83740044