Do not synchronize Add(1) with Wait().
Imitate a read on the first Add(1) and a write on Wait();
this allows the race detector to catch common misuses of WaitGroup:
- Add() called in the additional goroutine itself (see the sketch below)
- incorrect reuse of WaitGroup with multiple waiters
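For illustration, a hedged minimal example of the first misuse, which the
new annotations let the race detector flag:

    package main

    import "sync"

    func main() {
        var wg sync.WaitGroup
        go func() {
            wg.Add(1) // misuse: Add must happen before the goroutine starts
            defer wg.Done()
        }()
        wg.Wait() // may return before the goroutine's Add(1) runs
    }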
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10093044
Also reduce FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCache's and MSpan's.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
Especially important for Windows because it reserves VM
only in multiples of 64k.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/10082048
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
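A one-line sketch of the derivation (package and names are illustrative,
not the runtime's): given cumulative frees and the number of currently
live objects, the cumulative malloc count follows.

    package mstats // hypothetical package, for illustration only

    // derivedMallocs reconstructs the cumulative malloc count:
    // every object ever allocated was either freed or is still live,
    // so only frees need to be counted on the hot path.
    func derivedMallocs(nfree, live uint64) uint64 {
        return nfree + live
    }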
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 66 -3.07%
BenchmarkMalloc16 75 70 -6.48%
BenchmarkMallocTypeInfo8 102 97 -4.80%
BenchmarkMallocTypeInfo16 108 105 -2.78%
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043
Remove unnecessary ( ) around == in && clause.
Add { } around multiline if body, even though it's one statement.
Add runtime: prefix to printed errors.
R=cshapiro, iant
CC=golang-dev
https://golang.org/cl/9685047
This is part of the preemptive scheduler.
stackguard0 is checked in split-stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (it holds the original value).
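A hedged sketch of the idea (the real check is emitted in the assembly
prologue; gSketch, needsSlowPath, and the stackPreempt value are assumptions):

    package main

    import "fmt"

    const stackPreempt = ^uintptr(0) // poison value, above any real stack address

    type gSketch struct {
        stackguard0 uintptr // consulted by split-stack checks; may be stackPreempt
        stackguard  uintptr // always holds the original stack limit
    }

    // needsSlowPath mirrors the prologue comparison: poisoning stackguard0
    // makes every function entry take the slow path, where the runtime
    // can notice the preemption request.
    func (g *gSketch) needsSlowPath(sp uintptr) bool {
        return sp < g.stackguard0
    }

    func main() {
        g := &gSketch{stackguard0: 4096, stackguard: 4096}
        fmt.Println(g.needsSlowPath(8192)) // false: plenty of stack
        g.stackguard0 = stackPreempt
        fmt.Println(g.needsSlowPath(8192)) // true: preemption requested
    }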
R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
Before this change, grow work was done only
during map writes to ensure multithreaded safety.
This can lead to maps remaining in a partially
grown state for a long time, potentially forever.
This change allows grow work to happen during reads,
which will lead to grow work finishing sooner, making
the resulting map smaller and faster.
Grow work is not done in parallel. Reads can
happen in parallel while grow work is happening.
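To make the mechanism concrete, a hedged toy sketch (not the runtime's
bucket-based implementation; hmapSketch and its methods are invented):

    package main

    type hmapSketch struct {
        buckets    map[string]string // current table
        oldbuckets map[string]string // non-empty while a grow is in progress
    }

    func (h *hmapSketch) growing() bool { return len(h.oldbuckets) > 0 }

    // growWork evacuates a bounded number of entries, mirroring the
    // incremental and strictly sequential nature of grow work.
    func (h *hmapSketch) growWork() {
        n := 0
        for k, v := range h.oldbuckets {
            h.buckets[k] = v
            delete(h.oldbuckets, k)
            if n++; n == 2 {
                break
            }
        }
    }

    func (h *hmapSketch) access(key string) (string, bool) {
        if h.growing() {
            h.growWork() // with this change, reads also help finish the grow
        }
        if v, ok := h.buckets[key]; ok {
            return v, true
        }
        v, ok := h.oldbuckets[key]
        return v, ok
    }

    func main() {
        h := &hmapSketch{
            buckets:    map[string]string{},
            oldbuckets: map[string]string{"a": "1", "b": "2", "c": "3"},
        }
        for h.growing() {
            h.access("a") // each read drains a little more of the old table
        }
    }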
R=golang-dev, dvyukov, khr, iant
CC=golang-dev
https://golang.org/cl/8852047
instead of the regular g stack. We do this so that the g stack
we're currently running on is no longer changing. Cuts
the root set down a bit (g0 stacks are not scanned, and
we don't need to scan gc's internal state). Also an
enabler for copyable stacks.
R=golang-dev, cshapiro, khr, 0xe2.0x9a.0x9b, dvyukov, rsc, iant
CC=golang-dev
https://golang.org/cl/9754044
mheap.map became a pointer, so nelem(h->map) returns 1 rather than the map size.
As a result, coalescing with subsequent spans does not happen.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9649046
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces the number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
It can also be used for type info allocation and itab allocation.
There are also several places in the GC where we do the same thing;
they can be changed to use persistentalloc().
It can also be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
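A hedged sketch of the wrapper's shape (the real implementation lives in the
runtime's C code; persistentallocSketch and the chunk size are assumptions):

    package main

    const persistentChunk = 64 << 10 // assumed block size for the sketch

    // persistent carves small, never-freed allocations out of larger
    // blocks, so callers do not pay the SysAlloc granularity per call.
    var persistent struct {
        free []byte // unused tail of the most recent block
    }

    // persistentallocSketch assumes size <= persistentChunk.
    func persistentallocSketch(size int) []byte {
        if size > len(persistent.free) {
            // Stand-in for SysAlloc: grab a fresh block from the system.
            persistent.free = make([]byte, persistentChunk)
        }
        p := persistent.free[:size:size]
        persistent.free = persistent.free[size:]
        return p
    }

    func main() {
        a := persistentallocSketch(128) // served from one 64K block
        b := persistentallocSketch(256) // no second system allocation
        _, _ = a, b
    }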
Reincarnation of https://golang.org/cl/9805043, which was committed and rolled back.
The latent bugs that it revealed are fixed:
https://golang.org/cl/9837049
https://golang.org/cl/9778048
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9778049
Then use the limit to make sure MHeap_LookupMaybe and its inlined
copies don't return a span if the pointer is beyond the limit.
Use this fact to optimize all call sites.
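A hedged sketch of the check (field names are assumed, not the runtime's):

    package main

    const pageSizeSketch = 4096

    type spanSketch struct {
        start  uintptr
        npages uintptr
        limit  uintptr // start + npages*pageSizeSketch, recorded once
    }

    // lookupMaybeSketch returns nil for pointers past the span's limit,
    // so call sites no longer need their own bounds checks.
    func lookupMaybeSketch(s *spanSketch, p uintptr) *spanSketch {
        if p < s.start || p >= s.limit {
            return nil
        }
        return s
    }

    func main() {
        s := &spanSketch{start: 0x1000, npages: 1}
        s.limit = s.start + s.npages*pageSizeSketch
        _ = lookupMaybeSketch(s, 0x1800) // within the span
        _ = lookupMaybeSketch(s, 0x2000) // beyond the limit: returns nil
    }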
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/9869045
A nosplit was assumed to have no argument information and no
pointer map. However, nosplits created by the linker often
have both. This change uses the pointer map size as an
alternate source of argument size when processing a nosplit.
In addition, the consistency check between pointer map size
and argument size during symbol table construction is
strengthened. If nptrs is greater than 0, it must be equal
to the number of argument words.
R=golang-dev, khr, khr
CC=golang-dev
https://golang.org/cl/9666047
to avoid unintentionally clobbering R9/R10.
Thanks to Lucio for the suggestion.
PS: yes, this could be considered a big change (though not an API change),
but as it turns out even temporarily changing R9/R10 in user code is unsafe
and leads to problems that are very hard to diagnose later, so it is better
to disable the use of R9/R10 from the start.
See CL 6300043 and CL 6305100 for two problems caused by misusing R9/R10.
R=golang-dev, khr, rsc
CC=golang-dev
https://golang.org/cl/9840043
This is needed for the preemptive scheduler, because during
stoptheworld we want to wait with a timeout and re-preempt
M's on timeout.
R=golang-dev, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/9375043
With this change the compiler emits a bitmap for each function
covering its stack frame arguments area. If an argument word
is known to contain a pointer, a bit is set. The garbage
collector reads this information when scanning the stack by
frames and uses it to ignore locations known not to contain a
pointer.
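A hedged sketch of how such a bitmap might be consumed (the layout is an
assumption: one bit per argument word, set meaning "holds a pointer"):

    package main

    import "fmt"

    // scanArgsSketch visits only the argument words whose bitmap bit is
    // set; clear bits are ignored as known non-pointers.
    func scanArgsSketch(args []uintptr, bitmap []byte, visit func(uintptr)) {
        for i, w := range args {
            if bitmap[i/8]&(1<<uint(i%8)) != 0 {
                visit(w) // known pointer: hand it to the collector
            }
        }
    }

    func main() {
        args := []uintptr{0x1000, 42, 0x2000}
        bitmap := []byte{0x5} // bits 0 and 2 set: words 0 and 2 hold pointers
        scanArgsSketch(args, bitmap, func(p uintptr) { fmt.Printf("%#x\n", p) })
    }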
R=golang-dev, bradfitz, daniel.morsing, dvyukov, khr, khr, iant, cshapiro
CC=golang-dev
https://golang.org/cl/9223046
This depends on 9791044 (runtime: allocate page table lazily).
Once the page table is moved out of the heap, the heap becomes small.
This removes unnecessary dereferences during heap access.
No logical changes.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9802043
This removes the 256MB memory allocation at startup,
which conflicts with ulimit.
It will also allow eliminating an unnecessary memory dereference in the GC,
because the page table is usually mapped at a known address.
Update #5049.
Update #5236.
R=golang-dev, khr, r, khr, rsc
CC=golang-dev
https://golang.org/cl/9791044
The 'n' variable is used during rescan initiation in the GC_END case,
but it's overwritten with the chan capacity in the GC_CHAN case.
As a result, the rescan is done with the wrong object size.
Fixes #5554.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9831043
multiple failures on amd64
««« original CL description
runtime: introduce helper persistentalloc() function
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces the number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
It can also be used for type info allocation and itab allocation.
There are also several places in the GC where we do the same thing;
they can be changed to use persistentalloc().
It can also be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/9822043
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces the number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
It can also be used for type info allocation and itab allocation.
There are also several places in the GC where we do the same thing;
they can be changed to use persistentalloc().
It can also be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
Variables in data sections of 32-bit executables interfere with
the garbage collector's ability to free objects and/or unnecessarily
slow down the garbage collector.
This changeset moves some static variables to .noptr sections.
'files' in symtab.c is now allocated dynamically.
R=golang-dev, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/9786044
This is needed for the preemptive scheduler, because goroutines
can be preempted at surprising points.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/9376043
When cgo is used, the runtime creates an additional M to handle callbacks on threads not created by Go.
This effectively disabled deadlock detection, which is the right thing to do, because a Go program can be blocked
and only serve callbacks on external threads.
It also disabled deadlock detection under the race detector, because the race detector happens to use cgo.
With this change the additional M is created lazily on the first cgo call. So the deadlock detector
works for programs that import "C", "net" or "net/http/pprof" but do not actually use them.
This also fixes the deadlock detector under the race detector.
It should be fine to create the M later, because C code cannot call into Go before the first cgo call:
C code does not know when Go initialization has completed, so a Go program needs to call into C
first, either to create an external thread or to notify a thread created in a global ctor that Go
initialization has completed.
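A hedged sketch of the lazy-creation pattern (sync.Once stands in for the
runtime's internal once-guard; newExtraM and cgocallSketch are invented names):

    package main

    import (
        "fmt"
        "sync"
    )

    var extraMOnce sync.Once

    // newExtraM stands in for creating the M that services callbacks
    // arriving on threads the Go runtime did not create.
    func newExtraM() { fmt.Println("extra M created") }

    // cgocallSketch models the change: the extra M appears on the first
    // cgo call, so programs that merely link cgo never create it and
    // deadlock detection keeps working for them.
    func cgocallSketch(fn func()) {
        extraMOnce.Do(newExtraM)
        fn()
    }

    func main() {
        cgocallSketch(func() {}) // first call: the extra M is created here
        cgocallSketch(func() {}) // later calls: the once-guard is a no-op
    }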
Fixes #4973.
Fixes #5475.
R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9303046
Currently, per-size-class stats are lost when an MCache is destroyed. This patch fixes that.
Also, only update mstats.heap_alloc on heap operations, because that is the only
stat that needs to be promptly updated; everything else needs to be up to date only in ReadMemStats().
R=golang-dev, remyoudompheng, dave, iant
CC=golang-dev
https://golang.org/cl/9207047
The nlistmin/size thresholds are copied from tcmalloc,
but are unnecessary for Go's malloc. We do not do explicit
frees into the MCache. For the rare cases when we do (mainly hashmap),
simpler logic will do.
R=rsc, dave, iant
CC=gobot, golang-dev, r, remyoudompheng
https://golang.org/cl/9373043
It contains the LHS of the range clause and gets
instrumented by racewalk, but it doesn't have any meaning.
Fixes #5446.
R=golang-dev, dvyukov, daniel.morsing, r
CC=golang-dev
https://golang.org/cl/9560044
The stack scanner ignored the arguments area of goroutines
that had not yet started when its size was unknown. With this change,
the distance between the stack pointer and the stack base is used instead.
Fixes #5486.
R=golang-dev, bradfitz, iant, dvyukov
CC=golang-dev
https://golang.org/cl/9440043
If a slice points to an array embedded in a struct,
the whole struct can be incorrectly scanned as the slice buffer.
Fixes #5443.
R=cshapiro, iant, r, cshapiro, minux.ma
CC=bradfitz, gobot, golang-dev
https://golang.org/cl/9372044
Allocs of size 16 can bypass the atomic set of the allocated bit, while allocs of size 8 cannot.
Allocs with and without type info hit different paths inside malloc.
Current results on linux/amd64:
BenchmarkMalloc8 50000000 43.6 ns/op
BenchmarkMalloc16 50000000 46.7 ns/op
BenchmarkMallocTypeInfo8 50000000 61.3 ns/op
BenchmarkMallocTypeInfo16 50000000 63.5 ns/op
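For reference, a hedged sketch of the shape these benchmarks presumably
take (the package-level sink forces a real heap allocation per iteration):

    package malloc_test

    import "testing"

    var sink *[16]byte

    func BenchmarkMalloc16Sketch(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = new([16]byte) // escapes via sink, so each iteration allocates
        }
    }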
R=golang-dev, remyoudompheng, minux.ma, bradfitz, iant
CC=golang-dev
https://golang.org/cl/9090045
for checking for a page boundary. Also avoid the boundary check
when >=16 bytes are hashed.
benchmark old ns/op new ns/op delta
BenchmarkHashStringSpeed 23 22 -0.43%
BenchmarkHashBytesSpeed 44 42 -3.61%
BenchmarkHashStringArraySpeed 71 68 -4.05%
R=iant, khr
CC=gobot, golang-dev, google
https://golang.org/cl/9123046
Finer-grained transfers were relevant with per-M caches;
with per-P caches they are no longer relevant and are harmful for performance.
For the few small size classes where it makes a difference,
it's fine to grab the whole span (4K).
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.45%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/9374043
This is needed for the preemptive scheduler:
it will preempt only when m->locks==0,
and we do not want to be preempted before
we have completely unlocked the lock.
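A hedged sketch of the ordering this enforces (mSketch and unlockSketch
are invented names):

    package main

    type mSketch struct {
        locks int32 // the scheduler preempts only when this is zero
    }

    // unlockSketch drops m.locks only after the lock is fully released,
    // so preemption can never interrupt a half-finished unlock.
    func unlockSketch(mp *mSketch, release func()) {
        release()  // complete all unlock work first...
        mp.locks-- // ...then re-enable preemption
    }

    func main() {
        m := &mSketch{locks: 1}
        unlockSketch(m, func() {})
        _ = m.locks // now 0: safe to preempt
    }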
R=golang-dev, khr, iant
CC=golang-dev
https://golang.org/cl/9196047
Also change table type from int32[] to int8[] to save space in L1$.
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.68%
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/9199044
Move the documentation from race.go to doc.go, because
race.go uses +build race, so it's not normally parsed by go doc.
Rephrase the documentation for end users and provide a link to the race
detector manual.
Fixes #5444.
R=golang-dev, minux.ma, adg, r
CC=golang-dev
https://golang.org/cl/9144050
runtime.park() can access a freed select descriptor
due to a racing free in another thread.
See the comment for details.
Slightly modified version of dvyukov's CL 9259045.
No test yet. Before this CL, the test described in issue 5422
would fail about every 40 times for me. With this CL, I ran
the test 5900 times with no failures.
Fixes #5422.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9311043
The linker can generate a split-stack prologue when a textflag 7 function
makes an indirect function call. If that happens, badsignal() crashes
trying to dereference g.
Fixes #5337.
R=bradfitz, dave, adg, iant, r, minux.ma
CC=adonovan, golang-dev
https://golang.org/cl/9226043
runtime.setmg() calls another function (cgo_save_gm), so it must save
LR onto the stack.
Also re-enable the TestCthread test in misc/cgo/test.
Fixes #4863.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9019043
It works on i386, but fails on amd64 and arm.
««« original CL description
runtime: prevent the GC from seeing the content of a frame in runfinq()
Fixes #5348.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/8954044
»»»
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8695051
This will let us ask people to rebuild the Go system without
precise GC, and then rebuild and retest their program, to see
if precise GC is causing whatever problem they are having.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8700043
UMTX_OP_WAIT expects that the address points to a uintptr, but
the code in lock_futex.c uses a uint32. UMTX_OP_WAIT_UINT is
just like UMTX_OP_WAIT, but the address points to a uint32.
This almost certainly makes no difference on a little-endian
system, but since the kernel supports it we should do the
right thing. And, who knows, maybe it matters.
R=golang-dev, bradfitz, r, ality
CC=golang-dev
https://golang.org/cl/8699043
The race detector uses a global lock to analyze atomic
operations. A panic in the middle of the code leaves the
lock acquired.
Similarly, the sync package may leave the race detector
inconsistent when methods are called on nil pointers.
R=golang-dev, r, minux.ma, dvyukov, rsc, adg
CC=golang-dev
https://golang.org/cl/7981043
It's not trivial to make a comprehensive check
due to interior pointers, reflect, gob, etc.
But this is essentially what I've used to debug
the GC issues.
Update #5193.
R=golang-dev, iant, 0xe2.0x9a.0x9b, r
CC=golang-dev
https://golang.org/cl/8455043
Use atomic operations on the flags field to make sure we aren't
losing a flag update during parallel map operations.
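A hedged sketch of the pattern (the real code is the runtime's C hashmap;
here a CAS loop on a uint32 flags word, with an invented flag bit):

    package main

    import "sync/atomic"

    const flagIterator = 1 << 0 // hypothetical flag bit

    type hdrSketch struct {
        flags uint32
    }

    // setFlag ORs in a bit via a CAS loop, so two goroutines setting
    // different flags concurrently cannot lose each other's update.
    func (h *hdrSketch) setFlag(bit uint32) {
        for {
            old := atomic.LoadUint32(&h.flags)
            if atomic.CompareAndSwapUint32(&h.flags, old, old|bit) {
                return
            }
        }
    }

    func main() {
        var h hdrSketch
        h.setFlag(flagIterator)
    }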
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/8377046
The invariant is that there must be at least one running P or a thread polling the network.
It was broken.
Fixes #5216.
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/8459043
This makes it an unsafe.Pointer in Go so the garbage collector
will treat it as a pointer to untyped data, not a pointer to
bytes.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/8286045