The equal algorithm used to take the size:

	equal(p, q *T, size uintptr) bool

With this change, it does not:

	equal(p, q *T) bool

Similarly for the hash algorithm.
The size is rarely used, as most equal functions know the size
of the thing they are comparing. For instance f32equal already
knows its inputs are 4 bytes in size.
For cases where the size is not known, we allocate a closure
(one for each size needed) that points to an assembly stub that
reads the size out of the closure and calls generic code that
has a size argument.
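As a rough sketch of the closure approach in ordinary Go (hypothetical names; the real stub is assembly and lives inside the runtime):

package main

import (
	"bytes"
	"fmt"
	"unsafe"
)

// genericEqual stands in for the generic, size-parameterized compare
// kept for types without a specialized equal function.
func genericEqual(p, q unsafe.Pointer, size uintptr) bool {
	return bytes.Equal(unsafe.Slice((*byte)(p), size), unsafe.Slice((*byte)(q), size))
}

// equalForSize captures the size in a closure, yielding a
// two-argument equal like the specialized ones.
func equalForSize(size uintptr) func(p, q unsafe.Pointer) bool {
	return func(p, q unsafe.Pointer) bool {
		return genericEqual(p, q, size)
	}
}

func main() {
	x, y := [3]byte{1, 2, 3}, [3]byte{1, 2, 3}
	eq3 := equalForSize(3)
	fmt.Println(eq3(unsafe.Pointer(&x), unsafe.Pointer(&y))) // true
}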
Reduces the size of the go binary by 0.07%. Performance impact
is not measurable.
Change-Id: I6e00adf3dde7ad2974adbcff0ee91e86d2194fec
Reviewed-on: https://go-review.googlesource.com/2392
Reviewed-by: Russ Cox <rsc@golang.org>
Use a lookup table to find the function which contains a pc. It is
faster than the old binary search. findfunc is used primarily for
stack copying and garbage collection.
benchmark            old ns/op    new ns/op    delta
BenchmarkStackCopy   294746596    255400980    -13.35%
(findfunc is one of several tasks done by stack copy, the findfunc
time itself is about 2.5x faster.)
The lookup table is built at link time. The table grows the binary
size by about 0.5% of the text segment.
We impose a lower limit of 16 bytes on any function, which should not
have much of an impact. (The real constraint required is <=256
functions in every 4096 bytes, but 16 bytes/function is easier to
implement.)
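A simplified, runnable sketch of the scheme (coarse 4096-byte buckets only; names and layout are illustrative, not the linker's actual table):

package main

import (
	"fmt"
	"sort"
)

const bucketSize = 4096

type funcTable struct {
	textStart uintptr
	funcPC    []uintptr // sorted function start pcs
	bucket    []int     // last function starting at or before each bucket
}

func buildTable(textStart, textSize uintptr, pcs []uintptr) *funcTable {
	sort.Slice(pcs, func(i, j int) bool { return pcs[i] < pcs[j] })
	buckets := make([]int, textSize/bucketSize+1)
	j := 0
	for i := range buckets {
		base := textStart + uintptr(i)*bucketSize
		for j+1 < len(pcs) && pcs[j+1] <= base {
			j++
		}
		buckets[i] = j
	}
	return &funcTable{textStart, pcs, buckets}
}

func (t *funcTable) findfunc(pc uintptr) int {
	i := t.bucket[(pc-t.textStart)/bucketSize]
	// Scan forward within the bucket; the 16-byte minimum function
	// size bounds the scan at 256 steps, usually far fewer.
	for i+1 < len(t.funcPC) && t.funcPC[i+1] <= pc {
		i++
	}
	return i
}

func main() {
	t := buildTable(0x1000, 0x3000, []uintptr{0x1000, 0x1200, 0x2000, 0x2800})
	fmt.Println(t.findfunc(0x1500)) // 1: the function starting at 0x1200
}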
Change-Id: Ic315b7a2c83e1f7203cd2a50e5d21a822e18fdca
Reviewed-on: https://go-review.googlesource.com/2097
Reviewed-by: Russ Cox <rsc@golang.org>
This implements support for calls to and from C in the ppc64 C ABI, as
well as supporting functionality such as an entry point from the
dynamic linker.
Change-Id: I68da6df50d5638cb1a3d3fef773fb412d7bf631a
Reviewed-on: https://go-review.googlesource.com/2009
Reviewed-by: Russ Cox <rsc@golang.org>
Cgo will need this for calls from C to Go and for handling signals
that may occur in C code.
Change-Id: I50cc4caf17cd142bff501e7180a1e27721463ada
Reviewed-on: https://go-review.googlesource.com/2008
Reviewed-by: Russ Cox <rsc@golang.org>
Cache 2KB, 4KB, 8KB, and 16KB stacks. Larger stacks
will be allocated directly. There is no point in caching
32KB+ stacks, as we ask for and return 32KB at a time
from the allocator.
Note that the minimum stack is 8K on windows/64bit and 4K on
windows/32bit and plan9. For these os/arch combinations,
the number of stack orders is smaller, so that we have the same
maximum cached size.
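A small sketch of the size-to-order mapping this implies (constants illustrative, not the runtime's):

package main

import "fmt"

const (
	minCachedStack = 2048 // order 0
	numStackOrders = 4    // 2K, 4K, 8K, 16K
)

// stackOrder reports which cache order serves a request of n bytes,
// or cached=false for sizes that go straight to the allocator.
func stackOrder(n int) (order int, cached bool) {
	size := minCachedStack
	for order = 0; order < numStackOrders; order++ {
		if n <= size {
			return order, true
		}
		size *= 2
	}
	return 0, false // 32K and up: allocate directly
}

func main() {
	for _, n := range []int{2048, 5000, 16384, 32768} {
		o, ok := stackOrder(n)
		fmt.Printf("%6d bytes -> order %d, cached %v\n", n, o, ok)
	}
}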
Fixes #9045
Change-Id: Ia4195dd1858fb79fc0e6a91ae29c374d28839e44
Reviewed-on: https://go-review.googlesource.com/2098
Reviewed-by: Russ Cox <rsc@golang.org>
The ones at the end of M and G are just used to compute
their size for use in assembly. Generate the size explicitly.
The one at the end of itab is variable-sized, with at least one entry.
The ones at the end of interfacetype and uncommontype are not
needed, as the preceding slice references them (the slice was
originally added for use by reflect?).
The one at the end of stackmap is already accessed correctly,
and the runtime never allocates one.
Update #9401
Change-Id: Ia75e3aaee38425f038c506868a17105bd64c712f
Reviewed-on: https://go-review.googlesource.com/2420
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Fold in some startup randomness to make the hash vary across
different runs. This helps prevent attackers from choosing
keys that all map to the same bucket.
Also, reorganize the hash a bit. Move the *m1 multiply to after
the xor of the current hash and the message. For hash quality
it doesn't really matter, but for DDoS resistance it helps a lot
(any processing done to the message before it is merged with the
random seed is useless, as it is easily inverted by an attacker).
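A toy illustration of that ordering argument (arbitrary constants, not the runtime's hash):

package main

import "fmt"

const m1 uint64 = 0x9ddfea08eb382d69 // arbitrary odd multiplier, for illustration

// mix xors the message word into the seeded state first and multiplies
// afterwards; any work done on word before it meets the random seed
// could be inverted offline by an attacker choosing keys.
func mix(seed, word uint64) uint64 {
	h := (seed ^ word) * m1
	h ^= h >> 29
	return h
}

func main() {
	seed := uint64(0x1234abcd) // startup randomness in the real scheme
	fmt.Printf("%#x\n", mix(seed, 42))
}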
Update #9365
Change-Id: Ib19968168e1bbc541d1d28be2701bb83e53f1e24
Reviewed-on: https://go-review.googlesource.com/2344
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This CL only fixes the build; there are two failing tests:
RaceMapBigValAccess1 and RaceMapBigValAccess2
in the runtime/race tests. I haven't investigated why yet.
Updates #9516.
Change-Id: If5bd2f0bee1ee45b1977990ab71e2917aada505f
Reviewed-on: https://go-review.googlesource.com/2401
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
sysReserve doesn't actually reserve the full amount requested on
64-bit systems, because of problems with ulimit. Instead it checks
that it can get the first 64 kB and assumes it can grab the rest as
needed. This doesn't work well with the "let the kernel pick an address"
mode, so don't do that. Pick a high address instead.
Change-Id: I4de143a0e6fdeb467fa6ecf63dcd0c1c1618a31c
Reviewed-on: https://go-review.googlesource.com/2345
Reviewed-by: Rick Hudson <rlh@golang.org>
The line 'mp.schedlink = mnext' has an implicit write barrier call,
which needs a valid g. Move it above the setg(nil).
Change-Id: If3e86c948e856e10032ad89f038bf569659300e0
Reviewed-on: https://go-review.googlesource.com/2347
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
First, call clearcheckmarks immediately after changing checkmark,
so that there is less time when the checkmark flag and the bitmap
are inconsistent. The tiny gap between the two lines is fine, because
the world is stopped. Before, the gap was much larger and included
such code as "go bgsweep()", which allocated.
Second, modify gcphase only when the world is stopped.
As written, gcscan_m was changing gcphase from 0 to GCscan
and back to 0 while other goroutines were running.
Another goroutine running at the same time might decide to
sleep, see GCscan, call gcphasework, and start "helping" by
scanning its stack. That's fine, except that if gcphase flips back
to 0 as the goroutine calls scanblock, it will start draining the
work buffers prematurely.
Both of these were found with wbshadow=2 (and a lot of hard work).
Eventually that will run automatically, but right now it still
doesn't quite work for all.bash, due to mmap conflicts with
pthread-created threads.
Change-Id: I99aa8210cff9c6e7d0a1b62c75be32a23321897b
Reviewed-on: https://go-review.googlesource.com/2340
Reviewed-by: Rick Hudson <rlh@golang.org>
Use typedmemmove, typedslicecopy, and adjust reflect.call
to execute the necessary write barriers.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iec5b5b0c1be5589295e28e5228e37f1a92e07742
Reviewed-on: https://go-review.googlesource.com/2312
Reviewed-by: Keith Randall <khr@golang.org>
A side effect of this change is that when assertI2T writes to the
memory for the T being extracted, it can use typedmemmove
for write barriers.
There are other ways we could have done this, but this one
finishes a TODO in package runtime.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Icbc8aabfd8a9b1f00be2e421af0e3b29fa54d01e
Reviewed-on: https://go-review.googlesource.com/2279
Reviewed-by: Keith Randall <khr@golang.org>
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iea83d693480c2f3008b4e80d55821acff65970a6
Reviewed-on: https://go-review.googlesource.com/2277
Reviewed-by: Keith Randall <khr@golang.org>
Preparation for replacing many memmove calls in runtime
with typedmemmove, which is a clearer description of what
the routine is doing.
For the same reason, rename writebarriercopy to typedslicecopy.
Change-Id: I6f23bef2c2215509fefba175b16908f76dc7538c
Reviewed-on: https://go-review.googlesource.com/2276
Reviewed-by: Rick Hudson <rlh@golang.org>
Add write barrier to atomic operations manipulating pointers.
In general an atomic write of a pointer word may indicate racy accesses,
so there is no strictly safe way to attempt to keep the shadow copy
in sync with the real one. Instead, mark the shadow copy as not used.
Redirect sync/atomic pointer routines back to the runtime ones,
so that there is only one copy of the write barrier and shadow logic.
In time we might consider doing this for most of the sync/atomic
functions, but for now only the pointer routines need that treatment.
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: I852936b9a111a6cb9079cfaf6bd78b43016c0242
Reviewed-on: https://go-review.googlesource.com/2066
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The Gobuf.g goroutine pointer is almost always updated by assembly code.
In one of the few places it is updated by Go code - func save - it must be
treated as a uintptr to avoid a write barrier being emitted at a bad time.
Instead of figuring out how to emit the write barriers missing in the
assembly manipulation, change the type of the field to uintptr, so that
it does not require write barriers at all.
Goroutine structs are published in the allg list and never freed.
That will keep the goroutine structs from being collected.
There is never a time that Gobuf.g's contain the only references
to a goroutine: the publishing of the goroutine in allg comes first.
Goroutine pointers are also kept in non-GC-visible places like TLS,
so I can't see them ever moving. If we did want to start moving data
in the GC, we'd need to allocate the goroutine structs from an
alternate arena. This CL doesn't make that problem any worse.
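A trimmed, compileable sketch of the change (illustrative struct; the conversion is safe only because, as argued above, allg keeps every goroutine alive):

package main

import (
	"fmt"
	"unsafe"
)

type g struct{ id int64 }

type gobuf struct {
	sp uintptr
	pc uintptr
	g  uintptr // really a *g; uintptr so stores emit no write barrier
}

func main() {
	gp := &g{id: 1}
	var buf gobuf
	buf.g = uintptr(unsafe.Pointer(gp)) // ok only while something else keeps gp alive
	fmt.Println((*g)(unsafe.Pointer(buf.g)).id)
}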
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: I85f91312ec3e0ef69ead0fff1a560b0cfb095e1a
Reviewed-on: https://go-review.googlesource.com/2065
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Found with GODEBUG=wbshadow=1 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Ic8624401d7c8225a935f719f96f2675c6f5c0d7c
Reviewed-on: https://go-review.googlesource.com/2064
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This is the detection code. It works well enough that I know of
a handful of missing write barriers. However, those are subtle
enough that I'll address them in separate followup CLs.
GODEBUG=wbshadow=1 checks for a write that bypassed the
write barrier at the next write barrier of the same word.
If a bug can be detected in this mode it is typically easy to
understand, since the crash says quite clearly what kind of
word has missed a write barrier.
GODEBUG=wbshadow=2 adds a check of the write barrier
shadow copy during garbage collection. Bugs detected at
garbage collection can be difficult to understand, because
there is no context for what the found word means.
Typically you have to reproduce the problem with allocfreetrace=1
in order to understand the type of the badly updated word.
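A self-contained sketch of the shadow idea (hypothetical names; the real shadow is a second mapped memory region, not a Go map):

package main

// Every barriered pointer write also updates a shadow copy; a write
// that bypasses the barrier leaves the two out of sync, which the
// next barrier (wbshadow=1) or a GC-time sweep (wbshadow=2) detects.
var shadow = map[*uintptr]uintptr{}

func barrierWrite(slot *uintptr, val uintptr) {
	*slot = val
	shadow[slot] = val
}

func checkShadow(slot *uintptr) {
	if v, ok := shadow[slot]; ok && v != *slot {
		panic("missed write barrier: slot and shadow differ")
	}
}

func main() {
	var p uintptr
	barrierWrite(&p, 1)
	p = 2           // a bare write that skips the barrier
	checkShadow(&p) // panics: shadow still holds 1
}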
Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
Noticed while investigating the speed of the runtime tests, as part
of debugging why Plan 9's runtime tests are timing out on GCE.
Change-Id: I95f5a3d967a0b45ec1ebf10067e193f51db84e26
Reviewed-on: https://go-review.googlesource.com/2283
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This reverts commit ab0535ae3f.
I think it will remain useful to distinguish code that must
run on a system stack from code that can run on either stack,
even if that distinction is no
longer based on the implementation language.
That is, I expect to add a //go:systemstack comment that,
in terms of the old implementation, tells the compiler to
pretend this function was written in C.
Change-Id: I33d2ebb2f99ae12496484c6ec8ed07233d693275
Reviewed-on: https://go-review.googlesource.com/2275
Reviewed-by: Russ Cox <rsc@golang.org>
Shell out to `uname -r` this time, so that the test will compile
even if the platform doesn't have syscall.Sysctl.
Change-Id: I3a19ab5d820bdb94586a97f4507b3837d7040525
Reviewed-on: https://go-review.googlesource.com/2271
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The test program requires a static constructor, which in turn needs
external linking to work, but external linking never works on 10.6.
This should fix the darwin-{386,amd64} builders.
Change-Id: I714fdd3e35f9a7e5f5659cf26367feec9412444f
Reviewed-on: https://go-review.googlesource.com/2235
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Fixes build on plan9 and windows.
Change-Id: Ic9b02c641ab84e4f6d8149de71b9eb495e3343b2
Reviewed-on: https://go-review.googlesource.com/2233
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
I missed this one in golang.org/cl/2232 and only tested the patch
on openbsd/amd64.
Change-Id: I4ff437ae0bfc61c989896c01904b6d33f9bdf0ec
Reviewed-on: https://go-review.googlesource.com/2234
Reviewed-by: Minux Ma <minux@golang.org>
This is a genuine bug exposed by our test for issue 9456: our wrapper
for pthread_create is not initialized until we initialize cgo itself,
but it is possible that a static constructor could call pthread_create,
and in that case, it will be calling a nil function pointer.
Fix that by also initializing the sys_pthread_create function pointer
inside our pthread_create wrapper function, and use a pthread_once to
make sure it is only initialized once.
Fix build for openbsd.
Change-Id: Ica4da2c21fcaec186fdd3379128ef46f0e767ed7
Reviewed-on: https://go-review.googlesource.com/2232
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Some libraries, for example, OpenBLAS, create work threads in a global constructor.
If we're doing cpu profiling, it's possible that SIGPROF might come to some of the
worker threads before we make our first cgo call. Cgocallback used to terminate the
process when that happens, but it's better to miss a couple profiling signals than
to abort in this case.
Fixes #9456.
Change-Id: I112b8e1a6e10e6cc8ac695a4b518c0f577309b6b
Reviewed-on: https://go-review.googlesource.com/2141
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Following change 2154, the goatoi function
was renamed atoi.
However, this definition conflicts with the
atoi function defined in the Plan 9 runtime,
which takes a []byte instead of a string.
This change fixes the build on Plan 9.
Change-Id: Ia0f7ca2f965bd5e3cce3177bba9c806f64db05eb
Reviewed-on: https://go-review.googlesource.com/2165
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
They are no longer needed now that C is gone.
goatoi -> atoi
gofuncname/funcname -> funcname/cfuncname
goroundupsize -> already existing roundupsize
Change-Id: I278bc33d279e1fdc5e8a2a04e961c4c1573b28c7
Reviewed-on: https://go-review.googlesource.com/2154
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Now that we've removed all the C code in runtime and the C compilers,
there is no need to have a separate stackguard field to check for C
code on Go stack.
Remove field g.stackguard1 and rename g.stackguard0 to g.stackguard.
Adjust liblink and cmd/ld as necessary.
Change-Id: I54e75db5a93d783e86af5ff1a6cd497d669d8d33
Reviewed-on: https://go-review.googlesource.com/2144
Reviewed-by: Keith Randall <khr@golang.org>
The goalg function was a holdover from when we had algorithm
tables in both C and Go. It is no longer needed.
Change-Id: Ia0c1af35bef3497a899f22084a1a7b42daae72a0
Reviewed-on: https://go-review.googlesource.com/2099
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Currently we do a very complex rebalancing of runnable goroutines
between queues, which tries to preserve scheduling fairness.
Besides being complex and error-prone, it also destroys all locality
of scheduling.
This change uses a simpler scheme: leave runnable goroutines where
they are; during starttheworld, start all Ps with local work,
plus one additional P in case we have excessive runnable
goroutines in local queues or in the global queue.
The scheduler must be able to operate efficiently without the
rebalancing, because garbage collections do not have to happen
frequently.
The immediate need is execution tracing support: handling of
garbage collection, which does stoptheworld/starttheworld several
times, becomes exceedingly complex if the current execution can
jump between Ps during starttheworld.
Change-Id: I4fdb7a6d80ca4bd08900d0c6a0a252a95b1a2c90
Reviewed-on: https://go-review.googlesource.com/1951
Reviewed-by: Rick Hudson <rlh@golang.org>
Add a nil byte at the end of the itoa buffer,
before calling gostringnocopy. This prevents
gostringnocopy from reading past the end of the buffer.
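A runnable sketch of the pattern (hypothetical helper, not the actual runtime itoa):

package main

import "fmt"

// itoa formats v at the end of buf and reserves the final byte for a
// NUL, so a C-style scan (as in gostringnocopy) stops inside buf.
func itoa(buf []byte, v uint64) []byte {
	i := len(buf) - 1
	buf[i] = 0 // terminating nil byte
	i--
	if v == 0 {
		buf[i] = '0'
		i--
	}
	for v > 0 {
		buf[i] = byte(v%10) + '0'
		i--
		v /= 10
	}
	return buf[i+1:]
}

func main() {
	var buf [20]byte
	fmt.Printf("%q\n", itoa(buf[:], 9186)) // "9186\x00"
}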
Change-Id: I87494a8dd6ea45263882536bf6c0f294eda6866d
Reviewed-on: https://go-review.googlesource.com/2033
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Replace with uses of //go:linkname in Go files and direct use of
the name in .s files.
The only one that really truly needs a jump is reflect.call; the jump is now
next to the runtime.reflectcall assembly implementations.
Change-Id: Ie7ff3020a8f60a8e4c8645fe236e7883a3f23f46
Reviewed-on: https://go-review.googlesource.com/1962
Reviewed-by: Austin Clements <austin@google.com>
These signals are used by glibc to broadcast setuid/setgid to all
threads and to send pthread cancellations. Unlike other signals, the
Go runtime does not intercept these because they must invoke the libc
handlers (see issues #3871 and #6997). However, because 1) these
signals may be issued asynchronously by a thread running C code to
another thread running Go code and 2) glibc does not set SA_ONSTACK
for its handlers, glibc's signal handler may be run on a Go stack.
Signal frames range from 1.5K on amd64 to many kilobytes on ppc64, so
this may overflow the Go stack and corrupt heap (or other stack) data.
Fix this by ensuring that these signal handlers have the SA_ONSTACK
flag (but not otherwise taking over the handler).
This has been a problem since Go 1.1, but it's likely that people
haven't encountered it because it only affects setuid/setgid and
pthread_cancel.
Fixes #9600.
Change-Id: I6cf5f5c2d3aa48998d632f61f1ddc2778dcfd300
Reviewed-on: https://go-review.googlesource.com/1887
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Calls to goproc/deferproc used to push & pop two extra arguments,
the argument size and the function to call. Now, we allocate space
for those arguments in the outargs section so we don't have to
modify the SP.
Defers now use the stack pointer (instead of the argument pointer)
to identify which frame they are associated with.
A follow-on CL might simplify funcspdelta and some of the stack
walking code.
Fixes issue #8641
Change-Id: I835ec2f42f0392c5dec7cb0fe6bba6f2aed1dad8
Reviewed-on: https://go-review.googlesource.com/1601
Reviewed-by: Russ Cox <rsc@golang.org>
For arm and powerpc, as well as x86 without aes instructions.
Contains a mixture of ideas from cityhash and xxhash.
Compared to our old fallback on ARM, it's ~no slower on
small objects and up to ~50% faster on large objects. More
importantly, it is a much better hash function and thus has
less chance of bad behavior.
Fixes #8737
benchmark                       old ns/op  new ns/op  delta
BenchmarkHash5                  173        181        +4.62%
BenchmarkHash16                 252        212        -15.87%
BenchmarkHash64                 575        419        -27.13%
BenchmarkHash1024               7173       3995       -44.31%
BenchmarkHash65536              516940     313173     -39.42%
BenchmarkHashStringSpeed        300        279        -7.00%
BenchmarkHashBytesSpeed         478        424        -11.30%
BenchmarkHashInt32Speed         217        207        -4.61%
BenchmarkHashInt64Speed         262        231        -11.83%
BenchmarkHashStringArraySpeed   609        631        +3.61%
Change-Id: I0a9335028f32b10ad484966e3019987973afd3eb
Reviewed-on: https://go-review.googlesource.com/1360
Reviewed-by: Russ Cox <rsc@golang.org>
Pointers to zero-sized values may end up pointing to the next
object in memory, and possibly off the end of a span. This
can cause memory leaks and/or confuse the garbage collector.
By putting the overflow pointer at the end of the bucket, we
make sure that pointers to any zero-sized keys or values don't
accidentally point to the next object in memory.
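An illustrative layout (not the real declaration; keys and values are really arrays of the map's own key and value types):

package main

import (
	"fmt"
	"unsafe"
)

const bucketCnt = 8

type bmap struct {
	tophash  [bucketCnt]uint8
	keys     [bucketCnt]uint64
	values   [bucketCnt]uint64
	overflow unsafe.Pointer // last: a pointer one past keys or values stays in the bucket
}

func main() {
	// The overflow word begins before the bucket ends, so "one past
	// the values array" is still inside this object.
	fmt.Println(unsafe.Offsetof(bmap{}.overflow) < unsafe.Sizeof(bmap{}))
}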
Fixes #9384
Change-Id: I5d434df176984cb0210b4d0195dd106d6eb28f73
Reviewed-on: https://go-review.googlesource.com/1869
Reviewed-by: Russ Cox <rsc@golang.org>
With uintptr, the check for < 0 will never succeed in mem_plan9.go's
sbrk(), because the brk_ syscall returns -1 on failure. Fixes the
plan9/amd64 build.
This failed on plan9/amd64 because of the attempt to allocate 136GB
in mallocinit(), which failed. It was just by chance that on plan9/386
allocations never failed.
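A minimal demonstration of the pitfall (hypothetical stand-in for the syscall wrapper):

package main

import "fmt"

// brkFail mimics a wrapper that reports failure as -1; stored in a
// uintptr, that is the largest unsigned value, never negative.
func brkFail() uintptr {
	return ^uintptr(0) // the bit pattern of -1
}

func main() {
	r := brkFail()
	fmt.Println(r < 0)      // always false: uintptr is unsigned
	fmt.Println(int(r) < 0) // true: convert (or compare to ^uintptr(0)) to detect failure
}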
Change-Id: Ia3059cf5eb752e20d9e60c9619e591b80e8fb03c
Reviewed-on: https://go-review.googlesource.com/1590
Reviewed-by: Anthony Martin <ality@pbrane.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
"x*41" computes the same value as "x*31 + x*7 + x*3" and (when
compiled by gc) requires just one multiply instruction instead of
three.
Alternatively, the expression could be written as "(x<<2+x)<<3 + x" to
use shifts instead of multiplies (which is how GCC optimizes "x*41").
But gc currently emits suboptimal instructions for this expression
anyway (e.g., separate SHL+ADD instructions rather than LEA on
386/amd64). Also, if such an optimization were worthwhile, it would
seem better to implement it as part of gc's strength reduction logic.
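A quick check of the arithmetic:

package main

import "fmt"

func main() {
	x := uint32(123456789)
	fmt.Println(x*31+x*7+x*3 == x*41)    // true: 31 + 7 + 3 = 41
	fmt.Println(((x<<2+x)<<3)+x == x*41) // true: (4x + x)*8 + x = 41x
}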
Change-Id: I7156b793229d723bbc9a52aa9ed6111291335277
Reviewed-on: https://go-review.googlesource.com/1830
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
It shouldn't semacquire() inside an acquirem(); the runtime
thinks that means deadlock. It actually isn't a deadlock, but it
looks like one because acquirem() does m.locks++.
Candidate for inclusion in 1.4.1. runtime.Stack with all=true
is pretty unusable in a GOMAXPROCS>1 environment.
Fixes #9321
Change-Id: Iac6b664217d24763b9878c20e49229a1ecffc805
Reviewed-on: https://go-review.googlesource.com/1600
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Most types are reflexive (k == k for all k of type t), so don't
bother calling equal(k, k) when the key type is reflexive.
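Floating-point keys are the classic non-reflexive case, since NaN != NaN (toy check, not runtime code):

package main

import (
	"fmt"
	"math"
)

func selfEqual(k float64) bool {
	return k == k // false only for NaN
}

func main() {
	fmt.Println(selfEqual(1.5))        // true
	fmt.Println(selfEqual(math.NaN())) // false: float64 keys are not reflexive
}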
Change-Id: Ia716b4198b8b298687843b94b878dbc5e8fc2c65
Reviewed-on: https://go-review.googlesource.com/1480
Reviewed-by: Russ Cox <rsc@golang.org>
//go:nowritebarrier can only be used in package runtime.
It does not disable write barriers; it is an assertion, checked
by the compiler, that the following function needs no write
barriers.
Change-Id: Id7978b779b66dc1feea39ee6bda9fd4d80280b7c
Reviewed-on: https://go-review.googlesource.com/1224
Reviewed-by: Rick Hudson <rlh@golang.org>
I tried to submit this in Go 1.4 as cl/107540044 but tripped over the
changes for getting C off the G stack. This is a rewritten version that
avoids cgo and works directly with the underlying log device.
Change-Id: I14c227dbb4202690c2c67c5a613d6c6689a6662a
Reviewed-on: https://go-review.googlesource.com/1285
Reviewed-by: Keith Randall <khr@golang.org>
It could only handle one finalizer before it raised an out-of-bounds error.
Fixes issue #9172
Change-Id: Ibb4d0c8aff2d78a1396e248c7129a631176ab427
Reviewed-on: https://go-review.googlesource.com/1201
Reviewed-by: Russ Cox <rsc@golang.org>
needm used to print an error before exiting when it was called too
early, but this error was lost in the transition to Go. Bring back
the error so we don't silently exit(1) when this happens.
Change-Id: I8086932783fd29a337d7dea31b9d6facb64cb5c1
Reviewed-on: https://go-review.googlesource.com/1226
Reviewed-by: Russ Cox <rsc@golang.org>
Avoids a potential O(n^2) performance problem when dequeueing
from very popular channels.
benchmark              old ns/op    new ns/op    delta
BenchmarkChanPopular   2563782      627201       -75.54%
Change-Id: I231aaeafea0ecd93d27b268a0b2128530df3ddd6
Reviewed-on: https://go-review.googlesource.com/1200
Reviewed-by: Russ Cox <rsc@golang.org>
If the symbol table isn't sorted, we print it and abort. However, we
were missing the line break after each symbol, resulting in one
gigantic line instead of a nicely formatted table.
Change-Id: Ie5c6f3c256d0e648277cb3db4496512a79d266dd
Reviewed-on: https://go-review.googlesource.com/1182
Reviewed-by: Russ Cox <rsc@golang.org>
When we start work on Gerrit, ppc64 and garbage collection
work will continue in the master branch, not the dev branches.
(We may still use dev branches for other things later, but
these are ready to be merged, and doing it now, before moving
to Git, means we don't have to have dev branches working
in the Gerrit workflow on day one.)
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/183140043
640 bytes ought to be enough for anybody.
We'll bring this back down before Go 1.5. That's issue 9214.
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/188730043
This is going to hurt a bit but we'll make it better later.
Now the race detector can be run again.
I added the write barrier optimizations from
CL 183020043 to try to make it hurt a little less.
TBR=rlh
CC=golang-codereviews
https://golang.org/cl/185070043
This is the last system-dependent file written by cmd/dist.
They are all now written by go generate.
cmd/dist is not needed to start building package runtime
for a different system anymore.
Now all the generated files can be assumed generated, so
delete the clumsy hacks in cmd/api.
Re-enable api check in run.bash.
LGTM=bradfitz
R=bradfitz
CC=golang-codereviews
https://golang.org/cl/185040044
During garbage collection, after scanning a stack, we think about
shrinking it to reclaim some memory. The shrinking code (called
while the world is stopped) checked that the status was Gwaiting
or Grunnable and then changed the state to Gcopystack, to essentially
lock the stack so that no other GC thread is scanning it.
The same locking happens for stack growth (and is more necessary there).
	oldstatus = runtime·readgstatus(gp);
	oldstatus &= ~Gscan;
	if(oldstatus == Gwaiting || oldstatus == Grunnable)
		runtime·casgstatus(gp, oldstatus, Gcopystack); // oldstatus is Gwaiting or Grunnable
	else
		runtime·throw("copystack: bad status, not Gwaiting or Grunnable");
Unfortunately, "stop the world" doesn't stop everything. It stops all
normal goroutine execution, but the network polling thread is still
blocked in epoll and may wake up. If it does, and it chooses a goroutine
to mark runnable, and that goroutine is the one whose stack is shrinking,
then it can happen that between readgstatus and casgstatus, the status
changes from Gwaiting to Grunnable.
casgstatus assumes that if the status is not what is expected, it is a
transient change (like from Gwaiting to Gscanwaiting and back, or like
from Gwaiting to Gcopystack and back), and it loops until the status
has been restored to the expected value. In this case, the status has
changed semi-permanently from Gwaiting to Grunnable - it won't
change again until the GC is done and the world can continue, but the
GC is waiting for the status to change back. This wedges the program.
To fix, call a special variant of casgstatus that accepts either Gwaiting
or Grunnable as valid statuses.
Without the fix but with the extra check+throw in casgstatus, the
program below dies in a few seconds (2-10) with GOMAXPROCS=8
on a 2012 Retina MacBook Pro. With the fix, it runs for minutes
and minutes.
package main

import (
	"io"
	"log"
	"net"
	"runtime"
)

func main() {
	const N = 100
	for i := 0; i < N; i++ {
		l, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		ch := make(chan net.Conn, 1)
		go func() {
			var err error
			c1, err := net.Dial("tcp", l.Addr().String())
			if err != nil {
				log.Fatal(err)
			}
			ch <- c1
		}()
		c2, err := l.Accept()
		if err != nil {
			log.Fatal(err)
		}
		c1 := <-ch
		l.Close()
		go netguy(c1, c2)
		go netguy(c2, c1)
		c1.Write(make([]byte, 100))
	}

	for {
		runtime.GC()
	}
}

func netguy(r, w net.Conn) {
	buf := make([]byte, 100)
	for {
		bigstack(1000)
		_, err := io.ReadFull(r, buf)
		if err != nil {
			log.Fatal(err)
		}
		w.Write(buf)
	}
}

var g int

func bigstack(n int) {
	var buf [100]byte
	if n > 0 {
		bigstack(n - 1)
	}
	g = int(buf[0]) + int(buf[99])
}
Fixes #9186.
LGTM=rlh
R=austin, rlh
CC=dvyukov, golang-codereviews, iant, khr, r
https://golang.org/cl/179680043
Otherwise both zgoos_linux.go and zgoos_android.go will be compiled
for GOOS=android.
LGTM=crawshaw, rsc
R=rsc, crawshaw
CC=golang-codereviews
https://golang.org/cl/178110043
We don't know what we need yet, so add them all.
Add them even on x86 architectures (as no-ops) so that
the GC can refer to them unconditionally.
Eventually we'll know what we want and probably
have just one 'prefetch' with an appropriate meaning
on each architecture.
LGTM=rlh
R=rlh
CC=golang-codereviews
https://golang.org/cl/179160043
Thanks to Aram Hăvărneanu, Nick Owens
and Russ Cox for the early reviews.
LGTM=aram, rsc
R=rsc, lucio.dere, aram, ality
CC=golang-codereviews, mischief
https://golang.org/cl/175370043
The race detector runtime does not tolerate operations on addresses
that were not previously declared with __tsan_map_shadow
(namely, data, bss and heap). The corresponding address
checks for atomic operations were removed in
https://golang.org/cl/111310044
Restore these checks.
It's trickier than just not calling into the race runtime,
because it is the race runtime that performs the atomic
operations themselves (if we do not call into the race runtime,
we skip the atomic operation itself as well). So instead we call
__tsan_go_ignore_sync_start/end around the atomic operation.
This forces the race runtime to skip all other processing
except the atomic operation itself.
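A runnable sketch of that control flow, with the tsan entry points stubbed out (all names hypothetical):

package main

import (
	"fmt"
	"sync/atomic"
)

func tsanIgnoreSyncStart() { fmt.Println("__tsan_go_ignore_sync_start") }
func tsanIgnoreSyncEnd()   { fmt.Println("__tsan_go_ignore_sync_end") }

// isShadowMapped reports whether addr lies in data, bss or heap;
// other addresses have no shadow and must not be processed by tsan.
func isShadowMapped(addr *uint64) bool { return false } // stub

func raceLoad(addr *uint64) uint64 {
	if !isShadowMapped(addr) {
		// Do the atomic operation itself, but make the race runtime
		// skip all other processing around it.
		tsanIgnoreSyncStart()
		defer tsanIgnoreSyncEnd()
		return atomic.LoadUint64(addr)
	}
	return atomic.LoadUint64(addr) // plus normal race processing
}

func main() {
	var x uint64 = 42
	fmt.Println(raceLoad(&x))
}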
Fixes #9136.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/179030043