In starttheworld() we assume that P's with local work
are at the beginning of the idle P list.
However, once we start the first M, it can execute all local G's
and steal G's from other P's.
That breaks the assumption above. Thus starttheworld() will fail
to start some P's with local work.
It seems that this cannot lead to anything very bad, but it is still
wrong and breaks other assumptions
(e.g. we can have a spinning M with local work).
The fix is to collect all P's with local work first,
and only then start them.
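A minimal Go-flavored sketch of the idea (the actual change is in the C runtime scheduler; the names and types here are illustrative only):

	// Collect every P with local work before starting any M, so a started M
	// cannot drain or steal that work while the scan is still in progress.
	package main

	import "fmt"

	type p struct {
		id   int
		runq []int // stand-in for the P's local G's
	}

	func startTheWorld(idle []*p) {
		var withWork []*p
		for _, pp := range idle {
			if len(pp.runq) > 0 {
				withWork = append(withWork, pp)
			}
		}
		// Only now hand the collected P's to M's.
		for _, pp := range withWork {
			fmt.Printf("start M for P%d (%d local G's)\n", pp.id, len(pp.runq))
		}
	}

	func main() {
		startTheWorld([]*p{{id: 0, runq: []int{1, 2}}, {id: 1}, {id: 2, runq: []int{3}}})
	}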
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/10051045
The garbage collection routine addframeroots is duplicating
logic in the traceback routine that calls it, sometimes correctly,
sometimes incorrectly, sometimes incompletely.
Pass necessary information to addframeroots instead of
deriving it anew.
Should make addframeroots significantly more robust.
It's certainly smaller.
Also try to standardize on uintptr for saved pc, sp values.
Will make CL 10036044 trivial.
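A rough Go-style sketch of the shape of the change (the real code is the C garbage collector and traceback routine; the names below are made up for illustration):

	// The traceback loop hands each frame's already-computed pc/sp values
	// (as uintptr) to the per-frame callback instead of the callback
	// re-deriving them from the stack.
	package main

	import "fmt"

	type stkframe struct {
		pc, sp uintptr
	}

	func walkFrames(frames []stkframe, addFrameRoots func(pc, sp uintptr)) {
		for _, f := range frames {
			addFrameRoots(f.pc, f.sp)
		}
	}

	func main() {
		walkFrames([]stkframe{{pc: 0x1000, sp: 0x7ff0}, {pc: 0x2000, sp: 0x7fc0}},
			func(pc, sp uintptr) { fmt.Printf("scan frame pc=%#x sp=%#x\n", pc, sp) })
	}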
R=golang-dev, dave, dvyukov
CC=golang-dev
https://golang.org/cl/10169045
This avoids problems with inlining in genwrappers, which
occurs after functions have been compiled. Compiling a
function may cause some unused local vars to be removed from
the list. Since a local var may be unused due to
optimization, it is possible that a removed local var winds up
being used in the inlined version, in which case hilarity
ensues.
Fixes #5515.
R=golang-dev, khr, dave
CC=golang-dev
https://golang.org/cl/10210043
It was off in the old implementation (because there was no high-level
description of the function at all). Maybe some day the race detector
should be fixed to handle the wrapper and then enabled for it, but there's
no reason that has to be today.
R=golang-dev
TBR=dvyukov
CC=golang-dev
https://golang.org/cl/10037045
There's no reason to use a different name on each architecture,
and doing so makes it impossible for portable code to refer to
the original Go runtime entry point. Rename it _rt0_go everywhere.
This is a global search and replace only.
R=golang-dev, bradfitz, minux.ma
CC=golang-dev
https://golang.org/cl/10196043
This feature is not yet ready for real use. The CL marks a bite-sized
piece that is ready for review. TODOs that remain:
- provide control over output
- produce output without setting -v
- make work on reflect, sync and time packages
  (fail now due to link errors caused by inlining)
- better documentation
Almost all packages work now, though, if clumsily; try:
go test -v -cover=count encoding/binary
R=rsc
CC=gobot, golang-dev, remyoudompheng
https://golang.org/cl/10050045
Requires adding a new linker instruction
RET f(SB)
meaning return but then immediately call f.
This is what you'd use to implement a tail call after
fiddling with the arguments, but the compiler only
uses it in genwrapper.
This CL eliminates the copy-and-paste genembedtramp
functions from 5g/8g/6g and makes the code run on ARM
for the first time. It removes a small special case for function
generation, which should help Carl a bit, but at the same time
it does not bother to implement general tail call optimization,
which we do not want anyway.
Fixes #5627.
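For context, genwrapper emits the promoted-method wrappers for embedded types; a plain Go example of where such a wrapper arises (illustrative only, not taken from the CL):

	// (*Outer).Name is a compiler-generated wrapper: it adjusts the receiver
	// to the embedded *Inner and then forwards to (*Inner).Name -- exactly
	// the adjust-arguments-then-jump pattern that RET f(SB) expresses.
	package main

	import "fmt"

	type Inner struct{ name string }

	func (i *Inner) Name() string { return i.name }

	type Outer struct {
		Inner // embedding promotes Name via a generated wrapper
	}

	type Namer interface{ Name() string }

	func main() {
		var n Namer = &Outer{Inner{name: "embedded"}}
		fmt.Println(n.Name()) // goes through the wrapper, then (*Inner).Name
	}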
R=ken2
CC=golang-dev
https://golang.org/cl/10057044
The first identifier in an Object Identifier must be between 0 and 2
inclusive. The range of values that the second one can take depends
on the value of the first one.
The first two identifiers are not necessarily encoded in a single octet,
but as a varint.
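A small Go sketch of the rule (assumed helper name; this is not the encoding/asn1 code):

	// The first two components are packed into one base-128 varint holding
	// 40*first + second, with first capped at 2, so second can exceed 39
	// only when first == 2 and the combined value may need more than one octet.
	package main

	import "fmt"

	func splitFirstTwo(v int) (first, second int) {
		switch {
		case v < 40:
			return 0, v
		case v < 80:
			return 1, v - 40
		default:
			return 2, v - 80
		}
	}

	func main() {
		// 2.999.x: the combined value 40*2+999 = 1079 encodes as the
		// two-byte varint 0x88 0x37, so a single-octet assumption is wrong.
		fmt.Println(splitFirstTwo(1079)) // 2 999
		fmt.Println(splitFirstTwo(43))   // 1 3
	}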
R=golang-dev, agl
CC=golang-dev
https://golang.org/cl/10140046
The new code matches the code in cc/lex.c and the #define GETC.
This was causing problems scanning runtime·foo if the leading
byte of · was returned by the buffer fill.
R=ken2
CC=golang-dev
https://golang.org/cl/10167043
Do not synchronize Add(1) with Wait().
Imitate a read on the first Add(1) and a write on Wait();
this allows catching common misuses of WaitGroup:
- Add() called in the additional goroutine itself
- incorrect reuse of WaitGroup with multiple waiters
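An example of the first misuse above that the annotations can now flag (illustrative snippet, not from the CL):

	package main

	import "sync"

	func main() {
		var wg sync.WaitGroup
		go func() {
			wg.Add(1) // WRONG: Add must happen before starting the goroutine
			defer wg.Done()
			// ... work ...
		}()
		wg.Wait() // may return before Add(1) even runs
	}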
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/10093044
Also reduce FixAlloc allocation granularity from 128k to 16k;
small programs do not need that much memory for MCache's and MSpan's.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/10140044
Especially important for Windows because it reserves VM
only in multiples of 64k.
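A hedged arithmetic sketch of what that granularity implies (not the runtime's code; the request size is made up):

	package main

	import "fmt"

	func main() {
		// Windows reserves virtual memory with 64k granularity, so any
		// reservation is effectively rounded up to a multiple of 64k.
		const granularity = 64 << 10
		n := uintptr(100 << 10) // request 100k...
		reserved := (n + granularity - 1) &^ (granularity - 1)
		fmt.Printf("%d KB requested -> %d KB reserved\n", n>>10, reserved>>10) // 100 -> 128
	}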
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/10082048
Count only the number of frees; everything else is derivable
and does not need to be counted on every malloc.
benchmark                    old ns/op    new ns/op    delta
BenchmarkMalloc8                     68           66   -3.07%
BenchmarkMalloc16                    75           70   -6.48%
BenchmarkMallocTypeInfo8            102           97   -4.80%
BenchmarkMallocTypeInfo16           108          105   -2.78%
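To make "derivable" concrete, a hedged sketch of the arithmetic (variable names invented for illustration): the allocation count can be recomputed lazily when stats are read instead of being updated on the malloc fast path.

	package main

	import "fmt"

	func main() {
		// Assumed bookkeeping: frees are counted eagerly and live objects
		// are known from span occupancy, so allocs need no hot-path counter.
		nfree, live := uint64(1200), uint64(300)
		nmalloc := nfree + live // objects ever allocated = freed + still live
		fmt.Println("nmalloc =", nmalloc)
	}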
R=golang-dev, dave, rsc
CC=golang-dev
https://golang.org/cl/9776043