The original plan was to collect allocation stacks
for all memory blocks. But it was never implemented,
there are no near-term plans to do so, and it's unclear how to do it at all.
R=golang-dev, dave, bradfitz
CC=golang-dev
https://golang.org/cl/12724044
Prior to this change, pointer maps encoded the disposition of
a word using a single bit. A zero signaled a non-pointer
value and a one signaled a pointer value. Interface values,
which are effectively a union type, were conservatively
labeled as a pointer.
This change widens the logical element size of the pointer map
to two bits per word. As before, zero signals a non-pointer
value and one signals a pointer value. Additionally, a two
signals an iface pointer and a three signals an eface pointer.
Following other changes to the runtime, values two and three
will allow type information to drive interpretation of the
subsequent word, so only those interface values containing a
pointer value will be scanned.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12689046
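To make the encoding concrete, here is a minimal sketch of packing and reading a two-bit-per-word pointer map (plain Go with illustrative names, not the runtime's actual data structures):
package main

import "fmt"

const (
    bitsScalar  = 0 // non-pointer word
    bitsPointer = 1 // pointer word
    bitsIface   = 2 // first word of an iface value
    bitsEface   = 3 // first word of an eface value
)

// set stores the 2-bit code for word i; it assumes the slot is still zero.
func set(bitmap []byte, i int, code byte) {
    bitmap[i/4] |= (code & 3) << uint(2*(i%4))
}

// get extracts the 2-bit code for word i.
func get(bitmap []byte, i int) byte {
    return (bitmap[i/4] >> uint(2*(i%4))) & 3
}

func main() {
    // A 6-word frame: scalar, pointer, eface (2 words), iface (2 words).
    bitmap := make([]byte, 2) // 2 bits per word, 4 words per byte
    for i, c := range []byte{bitsScalar, bitsPointer, bitsEface, bitsScalar, bitsIface, bitsScalar} {
        set(bitmap, i, c)
    }
    for i := 0; i < 6; i++ {
        fmt.Printf("word %d: code %d\n", i, get(bitmap, i))
    }
}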
I've placed net.runtime_Semacquire into netpoll.goc,
but netbsd does not yet use netpoll.goc.
R=golang-dev, bradfitz, iant
CC=golang-dev
https://golang.org/cl/12699045
The mutex, fdMutex, handles locking and lifetime of sysfd,
and serializes Read and Write methods.
This makes it possible to strip 2 sync.Mutex.Lock calls,
2 sync.Mutex.Unlock calls, 1 defer and some amount
of misc overhead from every network operation.
On linux/amd64, Intel E5-2690:
benchmark old ns/op new ns/op delta
BenchmarkTCP4Persistent 9595 9454 -1.47%
BenchmarkTCP4Persistent-2 8978 8772 -2.29%
BenchmarkTCP4ConcurrentReadWrite 4900 4625 -5.61%
BenchmarkTCP4ConcurrentReadWrite-2 2603 2500 -3.96%
In general it strips 70-500 ns from every network operation depending
on the processor model. On my relatively new E5-2690 it accounts for ~5%
of the network op cost.
Fixes #6074.
R=golang-dev, bradfitz, alex.brainman, iant, mikioh.mikioh
CC=golang-dev
https://golang.org/cl/12418043
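The essence of the trick, sketched in plain Go (illustrative; the real fdMutex also tracks a closed bit and parks waiters instead of spinning): the lock bit and the reference count share one atomically updated word, so lock+incref and unlock+decref each cost a single CAS:
package main

import (
    "fmt"
    "runtime"
    "sync/atomic"
)

const (
    locked  uint64 = 1 << 0 // low bit: mutex held
    refUnit uint64 = 1 << 1 // remaining bits: reference count
)

type fdState struct{ state uint64 }

// lock acquires the mutex and takes a reference in one CAS.
func (m *fdState) lock() {
    for {
        old := atomic.LoadUint64(&m.state)
        if old&locked != 0 {
            runtime.Gosched() // the real code parks the goroutine instead
            continue
        }
        if atomic.CompareAndSwapUint64(&m.state, old, (old+refUnit)|locked) {
            return
        }
    }
}

// unlock releases the mutex and drops the reference taken by lock.
func (m *fdState) unlock() {
    for {
        old := atomic.LoadUint64(&m.state)
        if atomic.CompareAndSwapUint64(&m.state, old, (old&^locked)-refUnit) {
            return
        }
    }
}

func main() {
    var m fdState
    m.lock()
    m.unlock()
    fmt.Println("final state:", m.state) // 0: unlocked, no references
}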
Introduce a freezetheworld function that makes a best-effort attempt to stop any concurrently running goroutines. Call it during crashes.
Fixes #5873.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12054044
GetQueuedCompletionStatusEx allows dequeueing a batch of completion
notifications, which is more efficient than dequeueing them one by one.
benchmark old ns/op new ns/op delta
BenchmarkClientServerParallel4 100605 90945 -9.60%
BenchmarkClientServerParallel4-2 90225 74504 -17.42%
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12436044
The test takes up to 64 seconds on the Windows builders.
I've tried reducing the number of iterations in the test,
but it does not affect the run time.
Fixes #6054.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12531043
Previously, all word aligned locations in the local variables
area were scanned as conservative roots. With this change, a
bitmap is generated describing the locations of pointer values
in local variables.
With this change, the argument bitmap information has been
changed to store only information about arguments. The locals
member has been removed. In its place, the bitmap data for
local variables is now used to store the size of locals. If
the size is negative, the magnitude indicates the size of the
local variables area.
R=rsc
CC=golang-dev
https://golang.org/cl/12328044
Remove NOPROF/DUPOK from everything.
Edits done with a script, except pclinetest.asm which depended
on the DUPOK flag on main().
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12613044
We can then include this file in assembly to replace
cryptic constants like "7" with meaningful constants
like "(NOPROF|DUPOK|NOSPLIT)".
Converting just pkg/runtime/asm*.s for now. Dropping NOPROF
and DUPOK from lots of places where they aren't needed.
More .s files to come in a subsequent changelist.
A nonzero number in the textflag field now means
"has not been converted yet".
R=golang-dev, daniel.morsing, rsc, khr
CC=golang-dev
https://golang.org/cl/12568043
Updates #6046.
This CL just does maxstring and concatstring. There are other functions
to fix but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up in architectures I can't test
myself.
R=golang-dev, dave, iant
CC=golang-dev
https://golang.org/cl/12519044
NetBSD and OpenBSD are broken like OS X is. Good to know.
Drop the required count from avg/2 to avg/3, because the
Plan 9 builder just barely missed avg/2 in one of its runs.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/12548043
Update #6046.
This CL just does findnull and findnullw. There are other functions
to fix but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up in architectures I can't test
myself.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12520043
Embed all data necessary for read/write operations directly into netFD.
benchmark old ns/op new ns/op delta
BenchmarkTCP4Persistent 27669 23341 -15.64%
BenchmarkTCP4Persistent-2 18173 12558 -30.90%
BenchmarkTCP4Persistent-4 10390 7319 -29.56%
This change will intentionally break all builders to see
how many allocations they do per read/write.
This will be fixed soon afterwards.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12413043
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.
This is the same as reverted change 12250043
with save marked as textflag 7.
The problem was that if save calls morestack,
then subsequent lessstack spoils g->sched.pc/sp.
Those bad values were then remembered in g->syscallpc/sp.
Entersyscallblock had the same problem,
but it had never been triggered to date.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12478043
Basically a partial rollback of 12053043 until I can
figure out what is really going on.
Fixes bug 6051.
R=golang-dev
CC=golang-dev
https://golang.org/cl/12496043
This means that pprof will no longer report profiles on OS X.
That's unfortunate, but the profiles were often wrong and, worse,
it was difficult to tell whether the profile was wrong or not.
The workarounds were making the scheduler more complex,
possibly caused a deadlock (see issue 5519), and did not actually
deliver reliable results.
It may be possible for adventurous users to apply a patch to
their kernels to get working results, or perhaps having no results
will encourage someone to do the work of creating a profiling
thread like on Windows. Issue 6047 has details.
Fixes #5519.
Fixes #6047.
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/12429045
Break all 386 builders.
««« original CL description
runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
»»»
R=rsc
CC=golang-dev
https://golang.org/cl/12424045
It was needed for the old scheduler,
because there could temporarily be more threads than gomaxprocs.
In the new scheduler gomaxprocs is always respected.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12438043
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
Windows dynamic priority boosting assumes that a process has different types
of dedicated threads -- GUI, IO, computational, etc. Go processes use
equivalent threads that all do a mix of GUI, IO, computation, etc.
In such a context, dynamic priority boosting does nothing but harm, so turn it off.
In particular, if 2 goroutines do heavy IO on a uniprocessor server machine,
Windows refuses to schedule the timer thread for 2+ seconds when priority boosting is enabled.
Fixes #5971.
R=alex.brainman
CC=golang-dev
https://golang.org/cl/12406043
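For reference, the same knob can be flipped from user code through the Win32 SetProcessPriorityBoost API; a sketch (Windows-only, via raw syscall, not anything the runtime exports):
package main

import (
    "fmt"
    "syscall"
)

func main() {
    kernel32 := syscall.NewLazyDLL("kernel32.dll")
    getCurrentProcess := kernel32.NewProc("GetCurrentProcess")
    setBoost := kernel32.NewProc("SetProcessPriorityBoost")

    h, _, _ := getCurrentProcess.Call() // pseudo-handle for this process
    // Second argument is bDisablePriorityBoost: 1 turns boosting off.
    if r, _, err := setBoost.Call(h, 1); r == 0 {
        fmt.Println("SetProcessPriorityBoost failed:", err)
        return
    }
    fmt.Println("dynamic priority boosting disabled")
}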
It's okay to preempt at ordinary function calls because
compilers arrange that there are no live registers to save
on entry to the function call.
The software floating point routines are function calls
masquerading as individual machine instructions. They are
expected to keep all the registers intact. In particular,
they are expected not to clobber all the floating point
registers.
The floating point registers are kept per-M, because they
are not live at non-preemptive goroutine scheduling events,
and so keeping them per-M reduces the number of 132-byte
register blocks we are keeping in memory.
Because they are per-M, allowing the goroutine to be
rescheduled during software floating point simulation
would mean some other goroutine could overwrite the registers
or perhaps the goroutine would continue running on a different
M entirely.
Disallow preemption during the software floating point
routines to make sure that a function full of floating point
instructions has the same floating point registers throughout
its execution.
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/12298043
Per suggestion from Russ in February. Then strings.IndexByte
can be implemented in terms of the shared code in pkg runtime.
Update #3751
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12289043
This makes it possible to at least determine goroutine "identity".
Now it looks like:
goroutine 12 [running]:
goroutine running on other thread; stack unavailable
created by testing.RunTests
src/pkg/testing/testing.go:440 +0x88e
R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/12248043
We see timeouts in these tests on some platforms,
but not on the others. The hypothesis is that
the problematic platforms are slow uniprocessors.
Stack traces do not suggest that the process
is completely hung, and it is able to schedule
the alarm goroutine. And if it actually hangs,
we will still be able to detect that.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12253043
Preemption during the software floating point code
could cause m (R9) to change, so that when the
original registers were restored at the end of the
floating point handler, the changed and correct m
would be replaced by the old and incorrect m.
TBR=dvyukov
CC=golang-dev
https://golang.org/cl/11883045
Operations on int64 are very stack-consuming with 5c.
Fixes netbsd/arm build.
Before: TEXT runtime.timediv+0(SB),7,$52-16
After: TEXT runtime.timediv+0(SB),7,$44-16
The stack usage is unchanged on 386:
TEXT runtime.timediv+0(SB),7,$8-16
R=golang-dev, dvyukov, bradfitz
CC=golang-dev
https://golang.org/cl/12182044
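The shape of the algorithm, sketched in Go: shift-and-subtract division of an int64 by a positive int32, so no 64-bit divide or mod is needed (a sketch assuming the quotient fits in an int32; the real routine also guards against overflow):
package main

import "fmt"

// timediv returns v/div and stores v%div in *rem, for v >= 0 and div > 0.
func timediv(v int64, div int32, rem *int32) int32 {
    res := int32(0)
    for bit := 30; bit >= 0; bit-- {
        if v >= int64(div)<<uint(bit) {
            v -= int64(div) << uint(bit)
            res += 1 << uint(bit)
        }
    }
    *rem = int32(v)
    return res
}

func main() {
    var ns int32
    sec := timediv(1500000123, 1000000000, &ns) // split ns into sec + remainder
    fmt.Println(sec, ns)                        // 1 500000123
}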
Update #5139.
A double wakeup on Note was reported several times,
but there is no reliable reproducer.
There was also a strange report about a weird value of the epoll fd.
Maybe it's corruption of global data...
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12182043
Sysmon thread parks if no goroutines are running (runtime.sched.npidle ==
runtime.gomaxprocs).
Currently it's unparked when a goroutine enters a syscall, which was enough
to retake P's from blocking syscalls.
But it's not enough for reliable goroutine preemption. We need to ensure that
sysmon runs if any goroutines are running.
R=rsc
CC=golang-dev
https://golang.org/cl/12176043
Submitted with some unrelated changes that were not intended to go in.
««« original CL description
runtime: do not park sysmon thread if any goroutines are running
Sysmon thread parks if no goroutines are running (runtime.sched.npidle == runtime.gomaxprocs).
Currently it's unparked when a goroutine enters a syscall, which was enough
to retake P's from blocking syscalls.
But it's not enough for reliable goroutine preemption. We need to ensure that
sysmon runs if any goroutines are running.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12167043
»»»
R=rsc
CC=golang-dev
https://golang.org/cl/12171044
This is required to properly unwind reflect.methodValueCall/makeFuncStub.
Fixes #5954.
Stats for 'go install std':
61849 total INSTCALL
24655 currently have ArgSize metadata
27278 have ArgSize metadata with this change
godoc size before: 11351888, after: 11364288
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12163043
Sysmon thread parks if no goroutines are running (runtime.sched.npidle == runtime.gomaxprocs).
Currently it's unparked when a goroutine enters a syscall, which was enough
to retake P's from blocking syscalls.
But it's not enough for reliable goroutine preemption. We need to ensure that
sysmon runs if any goroutines are running.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12167043
When comparing strings, check these (in order):
- length mismatch => not equal
- string pointer equal => equal
- if length is short:
- memeq on body
- if length is long:
- compare first&last few bytes, if different => not equal
- save entry as a possible match
- after checking every entry, if there is only one possible
match, use memeq on that entry. Otherwise, fall back to hashing.
benchmark old ns/op new ns/op delta
BenchmarkSameLengthMap 43 4 -89.77%
Fixes #5194.
Update #3885.
R=golang-dev, bradfitz, khr, rsc
CC=golang-dev
https://golang.org/cl/12128044
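A runnable sketch of the quick-reject checks described above (it mirrors the strategy, not the runtime's code; unsafe.StringData requires Go 1.20+):
package main

import (
    "fmt"
    "strings"
    "unsafe"
)

// quickMatch reports -1 (definitely not equal), 1 (definitely equal), or
// 0 (possible match: remember the entry, memeq it later if it is the only
// candidate, otherwise fall back to hashing).
func quickMatch(k, entry string) int {
    if len(k) != len(entry) {
        return -1 // length mismatch => not equal
    }
    if unsafe.StringData(k) == unsafe.StringData(entry) {
        return 1 // same backing pointer => equal
    }
    if len(k) <= 32 { // short: just compare the bodies (memeq)
        if k == entry {
            return 1
        }
        return -1
    }
    // Long: compare only the first & last few bytes.
    if k[:4] != entry[:4] || k[len(k)-4:] != entry[len(entry)-4:] {
        return -1
    }
    return 0 // save as a possible match
}

func main() {
    a := "head" + strings.Repeat("x", 32) + "tail"
    b := "head" + strings.Repeat("y", 32) + "tail"
    fmt.Println(quickMatch("go", "gopher")) // -1: lengths differ
    fmt.Println(quickMatch(a, a))           // 1: same backing array
    fmt.Println(quickMatch(a, b))           // 0: needs memeq or hashing
}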
struct Hmap is the header for a map value.
CL 8377046 made flags a uint32 so that it could be updated atomically,
but that bumped the struct to 56 bytes, which allocates as 64 bytes (on amd64).
hash0 is initialized from runtime.fastrand1, which returns a uint32,
so the top 32 bits were always zero anyway. Declare it as a uint32
to reclaim 4 bytes and bring the Hmap size back down to a 48-byte allocation.
Fixes #5237.
R=golang-dev, khr, khr
CC=bradfitz, dvyukov, golang-dev
https://golang.org/cl/12034047
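The size-class effect is easy to see with unsafe.Sizeof; a small illustration (field names are made up, not the actual Hmap layout):
package main

import (
    "fmt"
    "unsafe"
)

// hash0 as a pointer-sized word: 56 bytes on amd64, which malloc rounds up
// to the 64-byte size class.
type hmapWide struct {
    count      int64
    flags      uint32
    _          uint32
    hash0      uintptr
    buckets    unsafe.Pointer
    oldbuckets unsafe.Pointer
    nevacuate  uintptr
    extra      unsafe.Pointer
}

// hash0 as a uint32 packs into the padding: 48 bytes, matching the
// 48-byte size class exactly.
type hmapNarrow struct {
    count      int64
    flags      uint32
    hash0      uint32
    buckets    unsafe.Pointer
    oldbuckets unsafe.Pointer
    nevacuate  uintptr
    extra      unsafe.Pointer
}

func main() {
    fmt.Println(unsafe.Sizeof(hmapWide{}), unsafe.Sizeof(hmapNarrow{})) // 56 48 on amd64
}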
notetsleep: nosplit stack overflow
120 assumed on entry to notetsleep
96 after notetsleep uses 24
88 on entry to runtime.semasleep
32 after runtime.semasleep uses 56
24 on entry to runtime.nanotime
-8 after runtime.nanotime uses 32
Nanotime seems to be using only 24 bytes of stack space,
unless I am missing something.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12041044
notetsleep: nosplit stack overflow
120 assumed on entry to notetsleep
80 after notetsleep uses 40
72 on entry to runtime.futexsleep
16 after runtime.futexsleep uses 56
8 on entry to runtime.printf
-16 after runtime.printf uses 24
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12047043
Split stack checks (morestack) corrupt g->sched,
but g->sched must be kept consistent for GC/traceback.
The change implements runtime.notetsleepg function,
which does entersyscall/exitsyscall and is carefully arranged
to not call any split functions in between.
R=rsc
CC=golang-dev
https://golang.org/cl/11575044
If netpoll has been told to block, it must not return with nil,
otherwise the scheduler assumes that netpoll is disabled.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/11920044
Make it accept a type, and combine the flags.
Several reasons for the change:
1. mallocgc and settype must be atomic wrt GC
2. settype is called from only one place now
3. it will help performance (eventually settype
functionality must be combined with markallocated)
4. flags are easier to read now (no mallocgc(sz, 0, 1, 0) anymore)
R=golang-dev, iant, nightlyone, rsc, dave, khr, bradfitz, r
CC=golang-dev
https://golang.org/cl/10136043
Currently Darwin and FreeBSD support the EV_RECEIPT flag and NetBSD and
OpenBSD do not. We will drop the use of EV_RECEIPT for now.
This also enables building the runtime-integrated network pollster on
freebsd/amd64,386 and openbsd/amd64,386. It just builds; the pollster
stuff never runs yet.
This is in preparation for runtime-integrated network pollster for BSD
variants.
Update #5199
R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/11759044
Debugging the Windows breakage I noticed that SEH
only exists on 386, so we can balance the two stacks
a little more on amd64 and reclaim another word.
Now we're down to just one word consumed by
cgocallback_gofunc, having reclaimed 25% of the
overall budget (4 words out of 16).
Separately, fix windows/386 - the SEH must be on the
m0 stack, as must the saved SP, so we are forced to have
a three-word frame for 386. It matters much less for
386, because there 128 bytes gives 32 words to use.
R=dvyukov, alex.brainman
CC=golang-dev
https://golang.org/cl/11551044
Tying preemption to stack splits means that we have to be able to
complete the call to exitsyscall (inside cgocallbackg at least for now)
without any stack split checks, meaning that the whole sequence
has to work within 128 bytes of stack, unless we increase the size
of the red zone. This CL frees up 24 bytes along that critical path
on amd64. (The 32-bit systems have plenty of space because all
their words are smaller.)
R=dvyukov
CC=golang-dev
https://golang.org/cl/11676043
Change uses of x+(SP) to access the stack frame into x-(SP).
Fixes #5925.
R=golang-dev, bradfitz, dave, remyoudompheng, nick, rsc
CC=dave cheney <dave, golang-dev
https://golang.org/cl/11647043
notetsleepg is the same as notetsleep, but is called on user g.
It includes entersyscall/exitsyscall and will help avoid
split stack functions while in syscall status.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11681043
This CL introduces a FUNCDATA number for runtime-specific
garbage collection metadata, changes the C and Go compilers
to emit that metadata, and changes the runtime to expect it.
The old pseudo-instructions that carried this information
are gone, as is the linker code to process them.
R=golang-dev, dvyukov, cshapiro
CC=golang-dev
https://golang.org/cl/11406044
Mark all functions whose argument size is unknown (C vararg
functions, and assembly code without an explicit specification).
We used to use 0 to mean "unknown" and 1 to mean "zero".
Now we use ArgsSizeUnknown (0x80000000) to mean "unknown".
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/11590043
If the network is not polled for 10ms, sysmon starts polling the network
on every iteration (every 20us) until another thread blocks in netpoll.
Fixes #5922.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/11569043
It assumes that the m will not change, and the m may
change if the goroutine is preempted.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/11560043
If we start a garbage collection on g0 during a
stack split or unsplit, we'll see morestack or lessstack
at the top of the stack. Record an argument frame size
for those, and record that they terminate the stack.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/11533043
Deferreturn is synthesizing a new call frame.
It must not be interrupted between copying the args there
and fixing up the program counter, or else the stack will
be in an inconsistent state, one that will confuse the
garbage collector.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/11522043
With preemption, _sfloat2 can show up in stack traces.
Write the function prototype in a way that accurately
shows the frame size and the fact that it might contain
pointers.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/11523043
Windows was the only one seeing this bug reliably in the builder,
but it was easy to reproduce using 'GOGC=1 go test strconv'.
concatstring looked like it took only one string, but in fact it
takes a long list of strings. Add an explicit ... so that the traceback
will not use the "fixed" frame size and instead look at the
frame size metadata recorded by the caller.
R=golang-dev
TBR=golang-dev
CC=golang-dev
https://golang.org/cl/11531043
Otherwise the tests in pkg/runtime fail:
runtime: unknown argument frame size for runtime.deferreturn called from 0x48657b [runtime_test.func·022]
fatal error: invalid stack
...
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/11483043
Update #543
I believe the runtime is strong enough now to reenable
preemption during the function prologue.
Assuming this is or can be made stable, it will be in Go 1.2.
More aggressive preemption is not planned for Go 1.2.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/11433045
Currently the preemption signal g->stackguard0==StackPreempt
can be lost if it is received while preemption is disabled
(e.g. m->locks!=0). This change duplicates the preemption
signal in g->preempt and restores g->stackguard0
when preemption is enabled.
Update #543.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10792043
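A toy model of the fix in plain Go (not the runtime's code; names and the sentinel value are illustrative):
package main

import "fmt"

const stackPreempt = ^uintptr(0) // stand-in for the runtime's StackPreempt

type gt struct {
    stackguard0 uintptr
    preempt     bool
}

type mt struct {
    locks int
    curg  *gt
}

// preemptRequest delivers a preemption request to gp running on mp.
func preemptRequest(gp *gt, mp *mt) {
    gp.preempt = true // duplicated flag: survives while preemption is off
    if mp.locks == 0 {
        gp.stackguard0 = stackPreempt // seen by the split-stack check
    }
}

// unlock releases one lock and restores a pending preemption request.
func (mp *mt) unlock() {
    mp.locks--
    if mp.locks == 0 && mp.curg.preempt {
        mp.curg.stackguard0 = stackPreempt
    }
}

func main() {
    gp := &gt{stackguard0: 0x1000}
    mp := &mt{locks: 1, curg: gp}
    preemptRequest(gp, mp) // arrives while m.locks != 0: no longer lost
    mp.unlock()
    fmt.Printf("stackguard0 after unlock: %#x\n", gp.stackguard0)
}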
With this CL, I believe the runtime always knows
the frame size during the gc walk. There is no fallback
to "assume entire stack frame of caller" anymore.
R=golang-dev, khr, cshapiro, dvyukov
CC=golang-dev
https://golang.org/cl/11374044
I have not done the system call stubs in sys_*.s.
I hope to avoid that, because those do not block, so those
frames will not appear in stack traces during garbage
collection.
R=golang-dev, dvyukov, khr
CC=golang-dev
https://golang.org/cl/11360043
Design at http://golang.org/s/go12symtab.
This enables some cleanup of the garbage collector metadata
that will be done in future CLs.
This CL does not move the old symtab and pclntab back into
an unmapped section of the file. That's a bit tricky and will be
done separately.
Fixes #4020.
R=golang-dev, dave, cshapiro, iant, r
CC=golang-dev, nigeltao
https://golang.org/cl/11085043
A type switch on a value with map index expressions
could get spurious instrumentation from an OTYPESW node.
These nodes do not need instrumentation because after
walk the type switch has been turned into a sequence
of ifs.
Fixes #5890.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/11308043
If the stack frame size is larger than the known-unmapped region at the
bottom of the address space, then the stack split prologue cannot use the usual
condition:
SP - size >= stackguard
because SP - size may wrap around to a very large number.
Instead, if the stack frame is large, the prologue tests:
SP - stackguard >= size
(This ends up being a few instructions more expensive, so we don't do it always.)
Preemption requests are registered by setting stackguard to a very large value, so
that the first test (SP - size >= stackguard) cannot possibly succeed.
Unfortunately, that same very large value causes a wraparound in the
second test (SP - stackguard >= size), making it succeed incorrectly.
To avoid *that* wraparound, we have to amend the test:
stackguard != StackPreempt && SP - stackguard >= size
This test is only used for functions with large frames, which essentially
always split the stack, so the cost of the few instructions is noise.
This CL and CL 11085043 together fix the known issues with preemption,
at the beginning of a function, so we will be able to try turning it on again.
R=ken2
CC=golang-dev
https://golang.org/cl/11205043
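A worked example of both wraparounds with made-up numbers (uintptr arithmetic is modular, which is the whole problem):
package main

import "fmt"

func main() {
    const preempt = ^uintptr(0) - 1313 // stand-in for StackPreempt

    sp := uintptr(0x2000)    // stack pointer near the bottom of the stack
    guard := uintptr(0x1800) // ordinary stackguard
    size := uintptr(0x4000)  // large frame

    // Usual test: SP - size >= stackguard. The subtraction wraps around
    // to a huge value, so the check passes despite the missing stack.
    fmt.Println(sp-size >= guard) // true, incorrectly

    // Large-frame test: SP - stackguard >= size. No wraparound here:
    fmt.Println(sp-guard >= size) // false => stack split taken, as desired

    // But with stackguard set to the preemption sentinel, SP - stackguard
    // wraps to SP + 1314, which exceeds size for any realistic SP:
    sp = uintptr(0xc0046000)
    fmt.Println(sp-preempt >= size) // true, incorrectly skips the split

    // Hence the amended test, which filters out the sentinel first:
    guard = preempt
    fmt.Println(guard != preempt && sp-guard >= size) // false => split taken
}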
The current cas64 definition hard-codes the x86 behavior
of updating *old with the new value when the cas fails.
This is inconsistent with cas32 and casp.
Make it consistent.
This means that the cas64 uses will be epsilon less efficient
than they might be, because they have to do an unnecessary
memory load on x86. But so be it. Code clarity and consistency
are more important.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10909045
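For illustration, sync/atomic's CompareAndSwapUint64 has the same consistent shape: the callee only reports success, so the caller re-loads on each retry (the "epsilon" extra load mentioned above):
package main

import (
    "fmt"
    "sync/atomic"
)

// addSlow atomically adds delta to *addr with a CAS loop. On failure the
// callee does not write the observed value back into old, so the caller
// performs an explicit re-load.
func addSlow(addr *uint64, delta uint64) uint64 {
    for {
        old := atomic.LoadUint64(addr)
        if atomic.CompareAndSwapUint64(addr, old, old+delta) {
            return old + delta
        }
    }
}

func main() {
    var x uint64 = 40
    fmt.Println(addSlow(&x, 2)) // 42
}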
runtime.newproc/ready are deliberately sloppy about waking new M's;
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work and there are no spinning M's, and if so it wakes up another M.
This does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but it's too expensive to maintain.
Fixes #5586.
This is reincarnation of cl/9776044 with the bug fixed.
The bug was due to code added after cl/9776044 was created:
if(tick - (((uint64)tick*0x4325c53fu)>>36)*61 == 0 && runtime·sched.runqsize > 0) {
runtime·lock(&runtime·sched);
gp = globrunqget(m->p, 1);
runtime·unlock(&runtime·sched);
}
If an M gets a gp from the global runq here, it does not reset m->spinning.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10743044
Currently it crashes as follows:
fatal error: unknown pc
...
goroutine 71698 [runnable]:
runtime.racegoend()
src/pkg/runtime/race.c:171
runtime.goexit()
src/pkg/runtime/proc.c:1276 +0x9
created by runtime_test.testConcurrentReadsAfterGrowth
src/pkg/runtime/map_test.go:264 +0x332
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10674047
Using m->tls[0] to save the ucontext pointer is not re-entry safe, and
the old code didn't set it before the early return when a signal is
received on non-Go threads.
So misc/cgo/test used to hang when testing issue 5337.
R=golang-dev, bradfitz, rsc
CC=golang-dev
https://golang.org/cl/10076045
When deleting a timer, a panic due to nil deref
would leave a lock held, possibly leading to a deadlock
in a defer. Instead return false on a nil timer.
Fixes #5745.
R=golang-dev, daniel.morsing, dvyukov, rsc, iant
CC=golang-dev
https://golang.org/cl/10373047
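A toy model of the failure mode and the fix (plain Go, not the actual runtime timer code):
package main

import (
    "fmt"
    "sync"
)

type timer struct{ when int64 }

var timers struct {
    sync.Mutex
    heap []*timer
}

// deltimer removes t from the heap, reporting whether it was found.
// Previously a nil t was dereferenced inside the locked region, so the
// panic propagated with the lock still held and a later defer deadlocked.
func deltimer(t *timer) bool {
    if t == nil {
        return false // check before taking the lock
    }
    timers.Lock()
    defer timers.Unlock()
    for i, u := range timers.heap {
        if u == t {
            timers.heap = append(timers.heap[:i], timers.heap[i+1:]...)
            return true
        }
    }
    return false
}

func main() {
    fmt.Println(deltimer(nil)) // false, and no lock is left held
}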
There are various problems, and both Dmitriy and I
will be away for the next week. Make the runtime a bit
more stable while we're gone.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/10848043
fn can clearly hold a closure in memory.
argp/pc point into the stack and so can hold
in memory a block that was previously
a large stack segment.
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/10784043
On amd64 the frames are very close to the limit for a
nosplit (textflag 7) function, in part because the C compiler
does not make any attempt to reclaim space allocated for
completely registerized variables. Avoid a few short-lived
variables to reclaim two words.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10758043
Currently it replaces GOGCTRACE env var (GODEBUG=gctrace=1).
The plan is to extend it with other types of debug tracing,
e.g. GODEBUG=gctrace=1,schedtrace=100.
R=rsc
CC=bradfitz, daniel.morsing, gobot, golang-dev
https://golang.org/cl/10026045
This is the last patch for the preemptive scheduler;
with this change stoptheworld issues preemption
requests every 100us.
Update #543.
R=golang-dev, daniel.morsing, rsc
CC=golang-dev
https://golang.org/cl/10264044
On x86 it is a few words lower on the stack than m->morebuf.sp
so it is a more precise check. Enabling the check requires recording
a valid gp->sched in reflect.call too. This is a good thing in general,
since it will make stack traces during reflect.call work better, and it
may be useful for preemption too.
R=dvyukov
CC=golang-dev
https://golang.org/cl/10709043
runtime.entersyscall() sets g->status = Gsyscall,
then calls runtime.lock() which causes stack split.
runtime.newstack() resets g->status to Grunning.
This can lead to a crash during GC (the world is not stopped), or the GC will scan the stack incorrectly.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10696043
Failure on bot:
http://build.golang.org/log/f4c648906e1289ec2237c1d0880fb1a8b1852a08
««« original CL description
runtime: fix CPU underutilization
runtime.newproc/ready are deliberately sloppy about waking new M's;
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work and there are no spinning M's, and if so it wakes up another M.
This does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but it's too expensive to maintain.
Fixes #5586.
R=rsc
TBR=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/10692043
runtime.newproc/ready are deliberately sloppy about waking new M's;
they only ensure that there is at least 1 spinning M.
Currently, to compensate for that, schedule() checks whether the current P
has local work and there are no spinning M's, and if so it wakes up another M.
This does not work if goroutines do not call schedule.
With this change a spinning M wakes up another M when it finds work to do.
It's also not ideal, but it fixes the underutilization.
A proper check would require knowing the exact number of runnable G's,
but it's too expensive to maintain.
Fixes #5586.
R=rsc
CC=gobot, golang-dev
https://golang.org/cl/9776044
The current code can print more arguments than necessary
and also incorrectly prints "...".
Update #5723.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10689043
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.
For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:
TEXT myfunc
check for split
jmpcond nosplit
call morestack
nosplit:
sub $xxx, sp
Now an extra instruction is inserted after the call:
TEXT myfunc
start:
check for split
jmpcond nosplit
call morestack
jmp start
nosplit:
sub $xxx, sp
The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.
The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.
The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.
Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)
runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow
goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
/Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
/Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
/Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
/Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
/Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
/Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
/Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
/Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
/Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
/Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
/Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
/Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8
And here is a seg fault during oldstack:
SIGSEGV: segmentation violation
PC=0x1b2a6
runtime.oldstack()
/Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
/Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22
goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
/Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
/Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
/Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
/Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8
goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
/Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
/Users/rsc/g/go/src/pkg/runtime/proc.c:166
rax 0x23ccc0
rbx 0x23ccc0
rcx 0x0
rdx 0x38
rdi 0x2102c0170
rsi 0x221032cfe0
rbp 0x221032cfa0
rsp 0x7fff5fbff5b0
r8 0x2102c0120
r9 0x221032cfa0
r10 0x221032c000
r11 0x104ce8
r12 0xe5c80
r13 0x1be82baac718
r14 0x13091135f7d69200
r15 0x0
rip 0x1b2a6
rflags 0x10246
cs 0x2b
fs 0x0
gs 0x0
Fixes #5723.
R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048
Resubmit 3c2cddfbdaec now that windows callbacks
are not generated during runtime.
Fixes #5494
R=golang-dev, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/10487043
- change runtime_pollWait so it does not return
closed or timeout if IO is ready - windows must
know if IO has completed or not even after
interruption;
- add (*pollDesc).Prepare(mode int) that can be
used for both read and write, same for Wait;
- introduce runtime_pollWaitCanceled and expose
it in net as (*pollDesc).WaitCanceled(mode int);
Full Windows netpoll changes are
here: https://golang.org/cl/8670044/.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/10485043
until we decide what to do with issues 5659/5736.
Profiling with the race detector is not very useful in general,
and now it makes race builders red.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10523043
If the first GC runs concurrently with setGCPercent,
it can overwrite the gcpercent value with the default.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10242047
Currently the global run queue is starved if a group of goroutines
constantly respawn each other (the local run queue never becomes empty).
Fixes #5639.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10042044
It was used to request a large stack segment for GC
when the GC was running not on g0.
Now the GC runs on g0 with a large stack,
and this is not needed anymore.
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/10242045
No need to change to Grunnable state.
Add some more checks for Grunning state.
R=golang-dev, rsc, khr, dvyukov
CC=golang-dev
https://golang.org/cl/10186045
The previous implementation would only record an access to
the address of the array, but the access to the whole
memory range must be recorded instead.
R=golang-dev, dvyukov, r
CC=golang-dev
https://golang.org/cl/8053044
Instrumentation of ntest expression should go to ntest->init.
Same for nincr.
Fixes #5340.
R=golang-dev, daniel.morsing
CC=golang-dev
https://golang.org/cl/10026046
Add gostartcall and gostartcallfn.
The old gogocall = gostartcall + gogo.
The old gogocallfn = gostartcallfn + gogo.
R=dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/10036044
In starttheworld() we assume that P's with local work
are situated in the beginning of idle P list.
However, once we start the first M, it can execute all local G's
and steal G's from other P's.
That breaks the assumption above. Thus starttheworld() will fail
to start some P's with local work.
It seems that it cannot lead to very bad things, but still
it's wrong and breaks other assumptions
(e.g. we can have a spinning M with local work).
The fix is to collect all P's with local work first,
and only then start them.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10051045
The garbage collection routine addframeroots is duplicating
logic in the traceback routine that calls it, sometimes correctly,
sometimes incorrectly, sometimes incompletely.
Pass necessary information to addframeroots instead of
deriving it anew.
Should make addframeroots significantly more robust.
It's certainly smaller.
Also try to standardize on uintptr for saved pc, sp values.
Will make CL 10036044 trivial.
R=golang-dev, dave, dvyukov
CC=golang-dev
https://golang.org/cl/10169045
There's no reason to use a different name on each architecture,
and doing so makes it impossible for portable code to refer to
the original Go runtime entry point. Rename it _rt0_go everywhere.
This is a global search and replace only.
R=golang-dev, bradfitz, minux.ma
CC=golang-dev
https://golang.org/cl/10196043
Do not synchronize Add(1) with Wait().
Imitate a read on the first Add(1) and a write on Wait();
this allows catching common misuses of WaitGroup (one is sketched below):
- Add() called in the additional goroutine itself
- incorrect reuse of WaitGroup with multiple waiters
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10093044
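For instance, the first misuse looks like this (an intentionally buggy sketch; with the new annotations, go test -race can flag it):
package main

import "sync"

func main() {
    var wg sync.WaitGroup
    go func() {
        wg.Add(1) // WRONG: Add must happen before starting the goroutine
        defer wg.Done()
    }()
    wg.Wait() // may return before the goroutine has run Add at all
}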
Also reduce the FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCache's and MSpan's.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
Especially important for Windows because it reserves VM
only in multiples of 64k.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/10082048
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
benchmark old ns/op new ns/op delta
BenchmarkMalloc8 68 66 -3.07%
BenchmarkMalloc16 75 70 -6.48%
BenchmarkMallocTypeInfo8 102 97 -4.80%
BenchmarkMallocTypeInfo16 108 105 -2.78%
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043
Remove unnecessary ( ) around == in && clause.
Add { } around multiline if body, even though it's one statement.
Add runtime: prefix to printed errors.
R=cshapiro, iant
CC=golang-dev
https://golang.org/cl/9685047
This is part of the preemptive scheduler.
stackguard0 is checked in split stack checks and can be set to StackPreempt.
stackguard is not set to StackPreempt (holds the original value).
R=golang-dev, daniel.morsing, iant
CC=golang-dev
https://golang.org/cl/9875043
Before this change, grow work was done only
during map writes to ensure multithreaded safety.
This can lead to maps remaining in a partially
grown state for a long time, potentially forever.
This change allows grow work to happen during reads,
which will lead to grow work finishing sooner, making
the resulting map smaller and faster.
Grow work is not done in parallel. Reads can
happen in parallel while grow work is happening.
R=golang-dev, dvyukov, khr, iant
CC=golang-dev
https://golang.org/cl/8852047
instead of regular g stack. We do this so that the g stack
we're currently running on is no longer changing. Cuts
the root set down a bit (g0 stacks are not scanned, and
we don't need to scan gc's internal state). Also an
enabler for copyable stacks.
R=golang-dev, cshapiro, khr, 0xe2.0x9a.0x9b, dvyukov, rsc, iant
CC=golang-dev
https://golang.org/cl/9754044
mheap.map became a pointer, so nelem(h->map) returns 1 rather than the map size.
As a result, coalescing with subsequent spans does not happen.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9649046
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
Reincarnation of committed and rolled back https://golang.org/cl/9805043
The latent bugs that it revealed are fixed:
https://golang.org/cl/9837049 and https://golang.org/cl/9778048
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9778049
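The core idea as a minimal sketch in plain Go (the real version sits on SysAlloc, takes a lock, and handles alignment; this stand-in ignores all three):
package main

import "fmt"

const chunkSize = 256 << 10 // grab big chunks rarely, carve them up cheaply

var persistent struct {
    chunk []byte
    off   int
}

// persistentalloc returns an n-byte buffer that is never freed (n <= chunkSize).
func persistentalloc(n int) []byte {
    if persistent.off+n > len(persistent.chunk) {
        persistent.chunk = make([]byte, chunkSize) // stand-in for SysAlloc
        persistent.off = 0
    }
    p := persistent.chunk[persistent.off : persistent.off+n : persistent.off+n]
    persistent.off += n
    return p
}

func main() {
    a := persistentalloc(100)
    b := persistentalloc(200)
    fmt.Println(len(a), len(b)) // 100 200, both from the same 256K chunk
}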
Then use the limit to make sure MHeap_LookupMaybe & inlined
copies don't return a span if the pointer is beyond the limit.
Use this fact to optimize all call sites.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/9869045
A nosplit was assumed to have no argument information and no
pointer map. However, nosplits created by the linker often
have both. This change uses the pointer map size as an
alternate source of argument size when processing a nosplit.
In addition, the symbol table construction pointer map size
and argument size consistency check is strengthened. If
nptrs is greater than 0, it must be equal to the number of
argument words.
R=golang-dev, khr, khr
CC=golang-dev
https://golang.org/cl/9666047
to avoid unintentionally clobbering R9/R10.
Thanks Lucio for the suggestion.
PS: yes, this could be considered a big change (but not an API change), but
as it turns out even temporarily changing R9/R10 in user code is unsafe and
leads to very hard-to-diagnose problems later, so it is better to disable the use
of R9/R10 when the user first uses them.
See CL 6300043 and CL 6305100 for two problems caused by misusing R9/R10.
R=golang-dev, khr, rsc
CC=golang-dev
https://golang.org/cl/9840043
This is needed for the preemptive scheduler, because during
stoptheworld we want to wait with a timeout and re-preempt
M's on timeout.
R=golang-dev, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/9375043
With this change the compiler emits a bitmap for each function
covering its stack frame arguments area. If an argument word
is known to contain a pointer, a bit is set. The garbage
collector reads this information when scanning the stack by
frames and uses it to ignore locations known not to contain a
pointer.
R=golang-dev, bradfitz, daniel.morsing, dvyukov, khr, khr, iant, cshapiro
CC=golang-dev
https://golang.org/cl/9223046
This depends on: 9791044: runtime: allocate page table lazily
Once the page table is moved out of the heap, the heap becomes small.
This removes unnecessary dereferences during heap access.
No logical changes.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9802043
This removes the 256MB memory allocation at startup,
which conflicts with ulimit.
This will also allow eliminating an unnecessary memory dereference in the GC,
because the page table is usually mapped at a known address.
Update #5049.
Update #5236.
R=golang-dev, khr, r, khr, rsc
CC=golang-dev
https://golang.org/cl/9791044
The 'n' variable is used during rescan initiation in the GC_END case,
but it's overwritten with the chan capacity in the GC_CHAN case.
As a result, the rescan is done with the wrong object size.
Fixes #5554.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/9831043
multiple failures on amd64
««« original CL description
runtime: introduce helper persistentalloc() function
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
»»»
R=golang-dev
CC=golang-dev
https://golang.org/cl/9822043
It is a caching wrapper around SysAlloc() that can allocate small chunks.
Use it for symtab allocations. Reduces number of symtab walks from 4 to 3
(reduces buildfuncs time from 10ms to 7.5ms on a large binary,
reduces initial heap size by 680K on the same binary).
Also can be used for type info allocation, itab allocation.
There are also several places in GC where we do the same thing,
they can be changed to use persistentalloc().
Also can be used in FixAlloc, because each instance of FixAlloc allocates
in 128K regions, which is too eager.
R=golang-dev, daniel.morsing, khr
CC=golang-dev
https://golang.org/cl/9805043
Variables in data sections of 32-bit executables interfere with
the garbage collector's ability to free objects and/or unnecessarily
slow down the garbage collector.
This changeset moves some static variables to .noptr sections.
'files' in symtab.c is now allocated dynamically.
R=golang-dev, dvyukov, minux.ma
CC=golang-dev
https://golang.org/cl/9786044
This is needed for the preemptive scheduler, because the goroutine
can be preempted at surprising points.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/9376043
When cgo is used, the runtime creates an additional M to handle callbacks on threads not created by Go.
This effectively disabled deadlock detection, which is the right thing, because a Go program can be blocked
and only serve callbacks on external threads.
This also disables deadlock detection under the race detector, because it happens to use cgo.
With this change the additional M is created lazily on the first cgo call. So the deadlock detector
works for programs that import "C", "net" or "net/http/pprof" but do not actually use them.
Also fixes the deadlock detector under the race detector.
It should be fine to create the M later, because C code cannot call into Go before the first cgo call,
since C code does not know when Go initialization has completed. So a Go program needs to call into C
first either to create an external thread, or to notify a thread created in a global ctor that Go
initialization has completed.
Fixes #4973.
Fixes #5475.
R=golang-dev, minux.ma, iant
CC=golang-dev
https://golang.org/cl/9303046
Currently per-sizeclass stats are lost for destroyed MCache's; this patch fixes that.
Also, only update mstats.heap_alloc on heap operations, because that's the only
stat that needs to be promptly updated. Everything else needs to be up-to-date only in ReadMemStats().
R=golang-dev, remyoudompheng, dave, iant
CC=golang-dev
https://golang.org/cl/9207047
The nlistmin/size thresholds are copied from tcmalloc,
but are unnecessary for Go malloc. We do not do explicit
frees into MCache. For the rare cases when we do (mainly hashmap),
simpler logic will do.
R=rsc, dave, iant
CC=gobot, golang-dev, r, remyoudompheng
https://golang.org/cl/9373043
It contains the LHS of the range clause and gets
instrumented by racewalk, but it doesn't have any meaning.
Fixes #5446.
R=golang-dev, dvyukov, daniel.morsing, r
CC=golang-dev
https://golang.org/cl/9560044
The stack scanner for not-yet-started goroutines ignored the arguments
area when its size was unknown. With this change, the distance
between the stack pointer and the stack base will be used instead.
Fixes #5486
R=golang-dev, bradfitz, iant, dvyukov
CC=golang-dev
https://golang.org/cl/9440043
If a slice points to an array embedded in a struct,
the whole struct can be incorrectly scanned as the slice buffer.
Fixes #5443.
R=cshapiro, iant, r, cshapiro, minux.ma
CC=bradfitz, gobot, golang-dev
https://golang.org/cl/9372044
Allocs of size 16 can bypass the atomic set of the allocated bit, while allocs of size 8 cannot.
Allocs with and without type info hit different paths inside malloc.
Current results on linux/amd64:
BenchmarkMalloc8 50000000 43.6 ns/op
BenchmarkMalloc16 50000000 46.7 ns/op
BenchmarkMallocTypeInfo8 50000000 61.3 ns/op
BenchmarkMallocTypeInfo16 50000000 63.5 ns/op
R=golang-dev, remyoudompheng, minux.ma, bradfitz, iant
CC=golang-dev
https://golang.org/cl/9090045
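Benchmarks in this spirit can be reproduced with a stand-alone test file (illustrative, not the runtime's own benchmarks):
package main

import "testing"

type scalar16 struct{ a, b int64 } // 16 bytes, no pointers: skips type-info scanning
type ptr16 struct{ p, q *int64 }   // 16 bytes of pointers: takes the type-info path

var sink interface{}

func BenchmarkMalloc16(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = &scalar16{} // escapes via sink, so it really allocates
    }
}

func BenchmarkMallocTypeInfo16(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = &ptr16{}
    }
}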
for checking for a page boundary. Also avoid the boundary check
when >=16 bytes are hashed.
benchmark old ns/op new ns/op delta
BenchmarkHashStringSpeed 23 22 -0.43%
BenchmarkHashBytesSpeed 44 42 -3.61%
BenchmarkHashStringArraySpeed 71 68 -4.05%
R=iant, khr
CC=gobot, golang-dev, google
https://golang.org/cl/9123046
Finer-grained transfers were relevant with per-M caches;
with per-P caches they are not relevant and are harmful for performance.
For few small size classes where it makes difference,
it's fine to grab the whole span (4K).
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.45%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/9374043
This is needed for the preemptive scheduler:
it will preempt only when m->locks==0,
and we do not want to be preempted while
we have not completely unlocked the lock.
R=golang-dev, khr, iant
CC=golang-dev
https://golang.org/cl/9196047
Also change the table type from int32[] to int8[] to save space in the L1 cache.
benchmark old ns/op new ns/op delta
BenchmarkMalloc 42 40 -4.68%
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/9199044
Move the documentation from race.go to doc.go, because
race.go uses +build race, so it's not normally parsed by go doc.
Rephrase the documentation for end users, provide link to race
detector manual.
Fixes #5444.
R=golang-dev, minux.ma, adg, r
CC=golang-dev
https://golang.org/cl/9144050
runtime.park() can access a freed select descriptor
due to a racing free in another thread.
See the comment for details.
Slightly modified version of dvyukov's CL 9259045.
No test yet. Before this CL, the test described in issue 5422
would fail about every 40 times for me. With this CL, I ran
the test 5900 times with no failures.
Fixes #5422.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9311043
The linker can generate a split-stack prologue when a textflag 7 function
makes an indirect function call. If that happens, badsignal() crashes
trying to dereference g.
Fixes #5337.
R=bradfitz, dave, adg, iant, r, minux.ma
CC=adonovan, golang-dev
https://golang.org/cl/9226043
runtime.setmg() calls another function (cgo_save_gm), so it must save
LR onto the stack.
Re-enabled TestCthread test in misc/cgo/test.
Fixes #4863.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/9019043
It works on i386, but fails on amd64 and arm.
««« original CL description
runtime: prevent the GC from seeing the content of a frame in runfinq()
Fixes #5348.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/8954044
»»»
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8695051
This will let us ask people to rebuild the Go system without
precise GC, and then rebuild and retest their program, to see
if precise GC is causing whatever problem they are having.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8700043