There is a chance that the SIGQUIT will make the test process
dump its stacks as part of exiting, which would be nice for
finding out what it is doing.
Right now the builders are occasionally timing out running
the runtime test. I hope this will give us some information
about the state of the runtime.
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/12041051
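A hedged sketch of the effect (the command, timeout, and harness shape here are illustrative, not the actual builder change): sending SIGQUIT to a hung Go process makes the runtime dump all goroutine stacks as it exits, so the log shows what the test was doing.

    package main

    import (
        "log"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        // Stand-in for a hanging test binary.
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            log.Println("test finished:", err)
        case <-time.After(2 * time.Second):
            // On timeout, SIGQUIT asks the Go runtime to dump all
            // goroutine stacks while the process exits.
            cmd.Process.Signal(syscall.SIGQUIT)
            log.Println("timed out:", <-done)
        }
    }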
struct Hmap is the header for a map value.
CL 8377046 made flags a uint32 so that it could be updated atomically,
but that bumped the struct to 56 bytes, which allocates as 64 bytes (on amd64).
hash0 is initialized from runtime.fastrand1, which returns a uint32,
so the top 32 bits were always zero anyway. Declare it as a uint32
to reclaim 4 bytes and bring the Hmap size back down to a 48-byte allocation.
Fixes #5237.
R=golang-dev, khr, khr
CC=bradfitz, dvyukov, golang-dev
https://golang.org/cl/12034047
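A rough size sketch of the change above, assuming hypothetical field names and a stand-in for the trailing small fields (this is not the runtime's actual Hmap definition): a uintptr hash0 pushes the header to 56 bytes, while a uint32 packs next to flags and keeps it at 48.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Hypothetical layouts that only mirror the size arithmetic.
    type headerBefore struct {
        count      uintptr        // 8 bytes on amd64
        flags      uint32         // 4 bytes + 4 bytes padding to align hash0
        hash0      uintptr        // 8 bytes, but only the low 32 bits are ever set
        buckets    unsafe.Pointer // 8
        oldbuckets unsafe.Pointer // 8
        nevacuate  uintptr        // 8
        rest       uint64         // 8, stand-in for the remaining small fields
    }

    type headerAfter struct {
        count      uintptr        // 8
        flags      uint32         // 4
        hash0      uint32         // 4, packs next to flags with no padding
        buckets    unsafe.Pointer // 8
        oldbuckets unsafe.Pointer // 8
        nevacuate  uintptr        // 8
        rest       uint64         // 8
    }

    func main() {
        // On amd64: 56 bytes rounds up to the 64-byte allocation size
        // class, while 48 bytes fits the 48-byte class exactly.
        fmt.Println(unsafe.Sizeof(headerBefore{}), unsafe.Sizeof(headerAfter{}))
    }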
If a netFD is closed by its finalizer, the runtime netpoll descriptor is not freed.
R=golang-dev, dave, alex.brainman
CC=golang-dev
https://golang.org/cl/12037043
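A minimal sketch of the fix above, with stand-in types (pollDesc, sysfd, and the Close body are illustrative, not the net package's real code): whatever path closes the netFD, including the finalizer, must free the runtime netpoll descriptor as well as the OS socket.

    package main

    import "fmt"

    type pollDesc struct{ open bool }

    func (pd *pollDesc) Close() { pd.open = false }

    type netFD struct {
        sysfd int
        pd    pollDesc
    }

    // Close releases the runtime netpoll descriptor in addition to the
    // OS socket; the finalizer path previously freed only the socket.
    func (fd *netFD) Close() error {
        fd.pd.Close()
        fmt.Println("closed socket", fd.sysfd, "netpoll open:", fd.pd.open)
        return nil
    }

    func main() {
        fd := &netFD{sysfd: 3, pd: pollDesc{open: true}}
        fd.Close()
    }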
When DecodeRune returns 0xFFFD for invalid input, EscapeText now emits 0xFFFD rather than passing the original byte through.
Fixes #5880.
R=golang-dev, r, bradfitz, adg
CC=golang-dev
https://golang.org/cl/11975043
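A small runnable illustration of what DecodeRune reports for an invalid byte (the input here is just an example):

    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    func main() {
        // 0xC3 alone is an invalid UTF-8 sequence; DecodeRune reports it
        // as the replacement rune U+FFFD with size 1.
        b := []byte{0xC3, '<', 'b', '>'}
        r, size := utf8.DecodeRune(b)
        fmt.Printf("%U size=%d\n", r, size) // U+FFFD size=1
        // After the fix the escaper writes U+FFFD for such bytes instead
        // of copying the original invalid byte through to the output.
    }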
I want to see the timing information in build logs,
and we can't see the logs for "ok" builds.
So make the build fail everywhere.
Will roll back immediately.
TBR=dvyukov
CC=golang-dev
https://golang.org/cl/12058046
notetsleep: nosplit stack overflow
120 assumed on entry to notetsleep
96 after notetsleep uses 24
88 on entry to runtime.semasleep
32 after runtime.semasleep uses 56
24 on entry to runtime.nanotime
-8 after runtime.nanotime uses 32
Nanotime seems to be using only 24 bytes of stack space, unless I am missing something.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12041044
notetsleep: nosplit stack overflow
120 assumed on entry to notetsleep
80 after notetsleep uses 40
72 on entry to runtime.futexsleep
16 after runtime.futexsleep uses 56
8 on entry to runtime.printf
-16 after runtime.printf uses 24
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12047043
Split stack checks (morestack) corrupt g->sched,
but g->sched must be kept consistent for GC/traceback.
The change implements the runtime.notetsleepg function,
which does entersyscall/exitsyscall and is carefully arranged
not to call any split-stack functions in between.
R=rsc
CC=golang-dev
https://golang.org/cl/11575044
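A rough sketch of the arrangement, with stand-in helpers (entersyscall, exitsyscall, and the note type below are placeholders; the real versions are runtime-internal and written so that nothing between the two calls can trigger a stack split):

    package main

    // Stand-ins for runtime internals; they only mark where the real
    // calls sit relative to the blocking sleep.
    func entersyscall() {} // scheduler: this goroutine is about to block
    func exitsyscall()  {} // scheduler: re-acquire a P before running Go code

    type note struct{ done chan struct{} }

    // notetsleepg-style wrapper: the blocking wait is bracketed by
    // entersyscall/exitsyscall, and nothing in between may call a
    // split-stack function, so g->sched stays consistent for GC/traceback.
    func notetsleepg(n *note) {
        entersyscall()
        <-n.done
        exitsyscall()
    }

    func main() {
        n := &note{done: make(chan struct{})}
        close(n.done)
        notetsleepg(n)
    }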
Close the netpoll descriptor along with the socket.
Ensure that error paths close the descriptor as well.
R=golang-dev, mikioh.mikioh, alex.brainman
CC=golang-dev
https://golang.org/cl/11987043
This is in preparation for runtime-integrated network pollster for BSD
variants.
Update #5199
R=golang-dev, fvbommel, dave
CC=golang-dev
https://golang.org/cl/11984043
This will mean that sub-repositories won't get built against the
release branch. They are often not compatible because the subrepos
often run ahead of the current release (e.g. go.tools is using
new additions to go/ast, and go.net is using new things in syscall),
so there's little point in checking them against cherry-pick commits
when they'll be tested against those commits on tip anyway.
R=golang-dev, adg
CC=golang-dev
https://golang.org/cl/12001043
This is in preparation for runtime-integrated network pollster for BSD
variants.
Update #5199
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/11932044
This CL extends the existing sockaddr interface to accommodate not only
internet protocol family endpoint addresses but also unix network family
endpoint addresses.
This is in preparation for runtime-integrated network pollster for BSD
variants.
Update #5199
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/11979043
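A hedged sketch of the idea: one interface that both IP and Unix endpoint address types can satisfy, so dial/listen paths can be written once per family. The method set shown is illustrative, not necessarily the net package's internal one.

    // Package sketch illustrates a single sockaddr interface covering
    // both internet protocol and unix network family endpoint addresses.
    package sketch

    import (
        "net"
        "syscall"
    )

    type sockaddr interface {
        net.Addr

        // family returns the platform address family: AF_INET,
        // AF_INET6, AF_UNIX, and so on.
        family() int

        // sockaddr converts the endpoint into a syscall-level socket
        // address for the dial/listen and pollster plumbing.
        sockaddr(family int) (syscall.Sockaddr, error)
    }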
If netpoll has been told to block, it must not return nil;
otherwise the scheduler assumes that netpoll is disabled.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/11920044
Add missing single quotation and backslash marks.
Change the keyword type of the dot and underscore characters:
"_" is a predeclared identifier, not an operator, and
"." is a selector, so x.f should be highlighted as a single identifier.
Fixes #5775.
Fixes #5788.
Fixes #5798.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/10480044
Make mallocgc accept a type argument and combine the flags.
Several reasons for the change:
1. mallocgc and settype must be atomic wrt GC
2. settype is called from only one place now
3. it will help performance (eventually settype
functionality must be combined with markallocated)
4. flags are easier to read now (no mallocgc(sz, 0, 1, 0) anymore)
R=golang-dev, iant, nightlyone, rsc, dave, khr, bradfitz, r
CC=golang-dev
https://golang.org/cl/10136043
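A hedged illustration of point 4, using made-up flag names rather than the runtime's actual constants: combinable bit flags read better than positional numeric arguments.

    package main

    import "fmt"

    // Illustrative flag names; the runtime's own constants may differ.
    const (
        flagNoScan = 1 << iota // object contains no pointers, GC need not scan it
        flagNoZero             // caller knows the memory is already zeroed
    )

    // mallocgc here is a stand-in showing the combined-signature shape:
    // size, type metadata, and a single flags word instead of several
    // positional numeric arguments.
    func mallocgc(size, typ uintptr, flags uint32) {
        fmt.Printf("alloc %d bytes (type %#x): noscan=%v nozero=%v\n",
            size, typ, flags&flagNoScan != 0, flags&flagNoZero != 0)
    }

    func main() {
        // A call like mallocgc(sz, 0, 1, 0) becomes self-describing:
        mallocgc(64, 0, flagNoScan)
    }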