mirror of https://github.com/golang/go synced 2024-11-20 03:04:40 -07:00
Commit Graph

1487 Commits

Author SHA1 Message Date
Austin Clements
3cd56b4dca runtime: combine gcResetGState and gcResetMarkState
These functions are always called together and perform logically
related state resets, so combine them into just gcResetMarkState.

Fixes #11427.

Change-Id: I06c17ef65f66186494887a767b3993126955b5fe
Reviewed-on: https://go-review.googlesource.com/16041
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-19 18:38:07 +00:00
Austin Clements
b0d5e5c500 runtime: consolidate gcResetGState calls
Currently gcResetGState is called by func gcscan_m for concurrent GC
and directly by func gc for STW GC. Simplify this by consolidating
these two calls into one call by func gc above where it splits for
concurrent and STW GC.

As a consequence, gcResetGState and gcResetMarkState are always called
together, so the next commit will consolidate these.

Change-Id: Ib62d404c7b32b28f7d3080d26ecf3966cbc4aca0
Reviewed-on: https://go-review.googlesource.com/16040
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-19 18:38:00 +00:00
Austin Clements
feb92a8e8c runtime: remove work.partial queue
This work queue is no longer used (there are many reads of
work.partial, but the only write is in putpartial, which is never
called).

Fixes #11922.

Change-Id: I08b76c0c02a0867a9cdcb94783e1f7629d44249a
Reviewed-on: https://go-review.googlesource.com/15892
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-19 18:37:54 +00:00
Aaron Jacobs
5d88323fa6 runtime: remove a redundant nil pointer check.
It appears this was made possible by commit 89f185f; before that, g was
not dereferenced above.

Change-Id: I70bc571d924b36351392fd4c13d681e938cfb573
Reviewed-on: https://go-review.googlesource.com/16033
Reviewed-by: Andrew Gerrand <adg@golang.org>
2015-10-19 09:58:15 +00:00
Nodir Turakulov
386fa03609 runtime: merge proc1.go -> proc.go
from proc1.go to proc.go:
* prepend header comment explaining "Goroutine scheduler"
* insert m0 and g0 var defs after the comment
* append the rest

Updates #12952

Change-Id: I35ee9ae3287675cde0c1b6aeaca0a460393f2354
Reviewed-on: https://go-review.googlesource.com/16024
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-19 01:11:00 +00:00
Nodir Turakulov
243757576d runtime: merge race1.go -> race.go
* append contents of race1.go to race.go
* delete "Implementation of the race detector API." comment
  from race1.go

Updates #12952

Change-Id: Ibdd9c4dc79a63c3bef69eade9525578063c86c1c
Reviewed-on: https://go-review.googlesource.com/16023
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-18 23:48:22 +00:00
Michael Hudson-Doyle
6deb3c0619 runtime, runtime/cgo: conform to PIC register use rules in ppc64 asm
PIC code on ppc64le uses R2 as a TOC pointer and when calling a function
through a function pointer must ensure the function pointer is in R12.  These
rules are easy enough to follow unconditionally in our assembly, so do that.

Change-Id: Icfc4e47ae5dfbe15f581cbdd785cdeed6e40bc32
Reviewed-on: https://go-review.googlesource.com/15526
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-10-18 23:36:39 +00:00
Michael Hudson-Doyle
b8f8969fbd reflect, runtime, runtime/cgo: use ppc64 asm constant for fixed frame size
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). Part 3 of that
is using a #define in the ppc64 assembly to refer to the size of the fixed
part of the stack (finding all these took me about a week!).

Change-Id: I50f22fe1c47af1ec59da1bd7ea8f84a4750df9b7
Reviewed-on: https://go-review.googlesource.com/15525
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-10-18 23:15:26 +00:00
Michael Hudson-Doyle
a4855812e2 runtime: add a constant for the smallest possible stack frame
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). So to prepare
for this, make a constant for the fixed part of a stack and use that where
necessary.

Change-Id: I447949f4d725003bb82e7d2cf7991c1bca5aa887
Reviewed-on: https://go-review.googlesource.com/15523
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-18 22:14:00 +00:00
Michael Hudson-Doyle
45c06b27a4 cmd/internal/obj, runtime: add NOFRAME flag to suppress stack frame set up on ppc64x
Replace the confusing game where a frame size of $-8 would suppress the
implicit setting up of a stack frame with a nice explicit flag.

The code to set up the function prologue is still a little confusing but better
than it was.

Change-Id: I1d49278ff42c6bc734ebfb079998b32bc53f8d9a
Reviewed-on: https://go-review.googlesource.com/15670
Reviewed-by: Minux Ma <minux@golang.org>
2015-10-18 22:13:30 +00:00
Nodir Turakulov
db2e73faeb runtime: merge stack{1,2}.go -> stack.go
* rename stack1.go -> stack.go
* prepend contents of stack2.go to stack.go

Updates #12952

Change-Id: I60d409af37162a5a7596c678dfebc2cea89564ff
Reviewed-on: https://go-review.googlesource.com/16008
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-17 20:52:22 +00:00
Matthew Dempsky
4562784bae runtime: remove some unnecessary unsafe code in mfixalloc
Change-Id: Ie9ea4af4315a4d0eb69d0569726bb3eca2b397af
Reviewed-on: https://go-review.googlesource.com/16005
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-17 00:26:26 +00:00
Nodir Turakulov
9358f7fa61 runtime: merge panic1.go into panic.go
A TODO to merge is removed from panic1.go.
The rest is appended to panic.go.

Updates #12952

Change-Id: Ied4382a455abc20bc2938e34d031802e6b4baf8b
Reviewed-on: https://go-review.googlesource.com/15905
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-16 15:51:49 +00:00
Nodir Turakulov
d72d299f3e runtime: rename print1.go -> print.go
It seems that it was called print1.go mistakenly: print.go was deleted
in the same commit:
https://go.googlesource.com/go/+/597b266eafe7d63e9be8da1c1b4813bd2998a11c

Updates #12952

Change-Id: I371e59d6cebc8824857df3f3ee89101147dfffc0
Reviewed-on: https://go-review.googlesource.com/15950
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-16 15:51:30 +00:00
Nodir Turakulov
881b0e7880 runtime: merge string1.go into string.go
string1.go contents are appended to string.go as-is.

Updates #12952

Change-Id: I30083ba7fdd362d4421e964a494c76ca865bedc2
Reviewed-on: https://go-review.googlesource.com/15951
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-16 15:46:02 +00:00
Michael Hudson-Doyle
42c7929c04 runtime, runtime/debug: access unexported runtime functions with //go:linkname, not assembly stubs
Change-Id: I88f80f5914d6e4c179f3d28aa59fc29b7ef0cc66
Reviewed-on: https://go-review.googlesource.com/15960
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-16 09:14:25 +00:00
Michael Hudson-Doyle
0b8d583320 runtime, os/signal: use //go:linkname instead of assembly stubs to get access to runtime functions
os/signal depends on a few unexported runtime functions. This removes the
assembly stubs it used to get access to these in favour of using
//go:linkname in runtime to make the functions accessible to os/signal.

This is motivated by ppc64le shared libraries, where you cannot BR to a symbol
defined in a shared library (only BL), but it seems like an improvement anyway.
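
A minimal, hedged sketch of the //go:linkname pattern (not this CL's exact
code): a package can pull an unexported runtime function by declaring it
without a body. runtime.nanotime is used as the target here, and the
pulling package must also contain an empty .s file so the compiler
accepts the body-less declaration.

    package main

    import (
        _ "unsafe" // required for go:linkname
    )

    // nanotime is declared without a body and linked against the
    // unexported runtime.nanotime.
    //
    //go:linkname nanotime runtime.nanotime
    func nanotime() int64

    func main() {
        println(nanotime()) // monotonic clock reading, in nanoseconds
    }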

Change-Id: I09361203ce38070bd3f132f6dc5ac212f2dc6f58
Reviewed-on: https://go-review.googlesource.com/15871
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-10-16 07:11:04 +00:00
Matthew Dempsky
4c2465d47d runtime: use unsafe.Pointer(x) instead of (unsafe.Pointer)(x)
This isn't C anymore.  No binary change to pkg/linux_amd64/runtime.a.
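
The two spellings, side by side (illustrative):

    package main

    import "unsafe"

    func main() {
        x := 42
        p1 := unsafe.Pointer(&x)   // idiomatic Go conversion syntax
        p2 := (unsafe.Pointer)(&x) // equivalent C-style parenthesized form
        println(p1 == p2)          // true
    }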

Change-Id: I24d66b0f5ac888f432b874aac684b1395e7c8345
Reviewed-on: https://go-review.googlesource.com/15903
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-15 21:48:37 +00:00
Raul Silvera
1d765b77a0 runtime: Reduce testing for fastlog2 implementation
The current fastlog2 testing checks all 64M values in the domain of
interest, which is too much for platforms with no native floating point.

Reduce testing under testing.Short() to speed up builds for those platforms.
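
A sketch of the pattern (the stride, tolerance, and stub are hypothetical;
the real test exercises the runtime's fastlog2 directly):

    package fastlog2_test

    import (
        "math"
        "testing"
    )

    // fastlog2Stub stands in for the runtime's fastlog2 approximation.
    func fastlog2Stub(x float64) float64 { return math.Log2(x) }

    func TestFastLog2(t *testing.T) {
        inc := 1
        if testing.Short() {
            inc = 1 << 12 // visit only a sparse subset of the 64M-value domain
        }
        for i := 1; i < 1<<26; i += inc {
            x := float64(i)
            if math.Abs(fastlog2Stub(x)-math.Log2(x)) > 0.01 {
                t.Fatalf("fastlog2(%g) diverges from log2", x)
            }
        }
    }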

Related to #12620

Change-Id: Ie5dcd408724ba91c3b3fcf9ba0dddedb34706cd1
Reviewed-on: https://go-review.googlesource.com/15830
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Joel Sing <jsing@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-14 04:54:33 +00:00
Ian Lance Taylor
2961cab965 runtime: remove _Kind constants
The duplication of _Kind and kind constants is a legacy of the
conversion from C.

Change-Id: I368b35a41f215cf91ac4b09dac59699edb414a0e
Reviewed-on: https://go-review.googlesource.com/15800
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-13 00:15:36 +00:00
Austin Clements
65aa2da617 runtime: assist before allocating
Currently, when the mutator allocates, the runtime first allocates the
memory and then, if that G has done "enough" allocation, the runtime
checks whether the G has assist debt to pay off and, if so, pays it
off. This approach leads to under-assisting, where a G can allocate a
large region (or many small regions) before paying for it, or can even
exit with outstanding debt.

This commit flips this around so that a G always acquires enough
credit for an allocation before it can perform that allocation. We
continue to amortize the cost of assists by requiring that they
over-assist when triggered to build up credit for many allocations.
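
A minimal sketch of the flipped ordering, assuming a hypothetical gSketch
type for the per-goroutine state (the field name gcAssistBytes follows
the next commit; everything else is illustrative):

    package main

    type gSketch struct {
        gcAssistBytes int64 // allocation credit in bytes; negative means in debt
    }

    // gcAssistAlloc earns credit by doing scan work (elided here),
    // over-assisting to amortize the cost across many allocations.
    func gcAssistAlloc(g *gSketch, need int64) {
        const overAssist = 64 << 10 // hypothetical over-assist amount
        g.gcAssistBytes += need + overAssist
    }

    // mallocSketch acquires enough credit *before* allocating, so a G
    // can never operate "in the red".
    func mallocSketch(g *gSketch, size int64) {
        if g.gcAssistBytes < size {
            gcAssistAlloc(g, size)
        }
        g.gcAssistBytes -= size
        // ... perform the actual allocation here ...
    }

    func main() {
        g := &gSketch{}
        mallocSketch(g, 4096)
        println(g.gcAssistBytes) // leftover credit from over-assisting
    }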

Fixes #11967.

Change-Id: Idac9f11133b328535667674d837be72c23ebd899
Reviewed-on: https://go-review.googlesource.com/15409
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
2015-10-09 19:39:03 +00:00
Austin Clements
89c341c5e9 runtime: directly track GC assist balance
Currently we track the per-G GC assist balance as two monotonically
increasing values: the bytes allocated by the G this cycle (gcalloc)
and the scan work performed by the G this cycle (gcscanwork). The
assist balance is hence assistRatio*gcalloc - gcscanwork.

This works, but has two important downsides:

1) It requires floating-point math to figure out if a G is in debt or
   not. This makes it inappropriate to check for assist debt in the
   hot path of mallocgc, so we only do this when a G allocates a new
   span. As a result, Gs can operate "in the red", leading to
   under-assist and extended GC cycle length.

2) Revising the assist ratio during a GC cycle can lead to an "assist
   burst". If you think of plotting the scan work performed versus
   heap size, the assist ratio controls the slope of this line.
   However, in the current system, the target line always passes
   through 0 at the heap size that triggered GC, so if the runtime
   increases the assist ratio, there has to be a potentially large
   assist to jump from the current amount of scan work up to the new
   target scan work for the current heap size.

This commit replaces this approach with directly tracking the GC
assist balance in terms of allocation credit bytes. Allocating N bytes
simply decreases this by N and assisting raises it by the amount of
scan work performed divided by the assist ratio (to get back to
bytes).

This will make it cheap to figure out if a G is in debt, which will
let us efficiently check if an assist is necessary *before* performing
an allocation and hence keep Gs "in the black".

This also fixes assist bursts because the assist ratio is now in terms
of *remaining* work, rather than work from the beginning of the GC
cycle. Hence, the plot of scan work versus heap size becomes
continuous: we can revise the slope, but this slope always starts from
where we are right now, rather than where we were at the beginning of
the cycle.
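
In sketch form (method names hypothetical), the bookkeeping reduces to
integer updates, with floating point needed only when converting
completed scan work back into byte credit:

    package main

    type gCredit struct{ gcAssistBytes int64 }

    // charge spends allocation credit; settle converts scan work back
    // into byte credit via the assist ratio (scan work per byte).
    func (g *gCredit) charge(n int64) { g.gcAssistBytes -= n }

    func (g *gCredit) settle(scanWork int64, ratio float64) {
        g.gcAssistBytes += int64(float64(scanWork) / ratio)
    }

    func main() {
        g := &gCredit{}
        g.charge(1 << 20)     // allocate 1MiB: go 1MiB into debt
        g.settle(524288, 0.5) // 512KiB of scan work at ratio 0.5 repays it
        println(g.gcAssistBytes) // 0; the debt check itself is integer-only
    }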

Change-Id: Ia821c5f07f8a433e8da7f195b52adfedd58bdf2c
Reviewed-on: https://go-review.googlesource.com/15408
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-09 19:38:52 +00:00
Austin Clements
9e77c89868 runtime: ensure minimum heap distance via heap goal
Currently we ensure a minimum heap distance of 1MB when computing the
assist ratio. Rather than enforcing this minimum on the heap distance,
it makes more sense to enforce that the heap goal itself is at least
1MB over the live heap size at the beginning of GC. Currently the two
approaches are semantically equivalent, but this will let us switch to
basing the assist ratio on current heap distance rather than the
initial heap distance, since we can't enforce this minimum on the
current heap distance (the GC may never finish because the goal posts
will always be 1MB away).

Change-Id: I0027b1c26a41a0152b01e5b67bdb1140d43ee903
Reviewed-on: https://go-review.googlesource.com/15604
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-09 19:38:39 +00:00
Austin Clements
8e8219deb5 runtime: update gcController.scanWork regularly
Currently, gcController.scanWork is updated as lazily as possible
since it is only read at the end of the GC cycle. We're about to read
it during the GC cycle to improve the assist ratio revisions, so
modify gcDrain* to regularly flush to gcController.scanWork in much
the same way as we regularly flush to gcController.bgScanCredit.

One consequence of this is that it's difficult to keep gcw.scanWork
monotonic, so we give up on that and simply return the amount of scan
work done by gcDrainN rather than calculating it in the caller.

Change-Id: I7b50acdc39602f843eed0b5c6d2dacd7e762b81d
Reviewed-on: https://go-review.googlesource.com/15407
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-09 19:38:29 +00:00
Austin Clements
c18b163c15 runtime: control background scan credit flushing with flag
Currently callers of gcDrain control whether it flushes scan work
credit to gcController.bgScanCredit by passing a value other than -1
for the flush threshold. Shortly we're going to make this always flush
scan work to gcController.scanWork and optionally also flush scan work
to gcController.bgScanCredit. This will be much easier if the flush
threshold is simply a constant (which it is in practice) and callers
merely control whether or not the flush includes the background
credit. Hence, replace the flush threshold argument with a flag.
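
A sketch of the resulting flag-based API; the flag names mirror the
runtime of this era, but the body is an illustrative stub:

    package main

    type gcDrainFlags int

    const (
        gcDrainUntilPreempt gcDrainFlags = 1 << iota
        gcDrainFlushBgCredit
    )

    func gcDrain(flags gcDrainFlags) {
        flushBgCredit := flags&gcDrainFlushBgCredit != 0
        // ... drain scan work, flushing to gcController.scanWork at a
        // fixed threshold ...
        if flushBgCredit {
            // ... also flush the excess to gcController.bgScanCredit ...
        }
    }

    func main() { gcDrain(gcDrainUntilPreempt | gcDrainFlushBgCredit) }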

Change-Id: Ia27db17de8a3f1e462a5d7137d4b5dc72f99a04e
Reviewed-on: https://go-review.googlesource.com/15406
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-09 19:38:16 +00:00
Austin Clements
9b3cdaf0a3 runtime: consolidate gcDrain and gcDrainUntilPreempt
These functions were nearly identical. Consolidate them by adding a
flags argument. In addition to cleaning up this code, this makes
further changes that affect both functions easier.

Change-Id: I6ec5c947603bbbd3ff4040113b2fbc240e99745f
Reviewed-on: https://go-review.googlesource.com/15405
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-09 19:38:03 +00:00
Austin Clements
39ed682206 runtime: explain why continuous assist revising is necessary
Change-Id: I950af8d80433b3ae8a1da0aa7a8d2d0b295dd313
Reviewed-on: https://go-review.googlesource.com/15404
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-10-09 19:37:53 +00:00
Austin Clements
3271250ec4 runtime: fix comment for gcAssistAlloc
Change-Id: I312e56e95d8ef8ae036d16444ab1e2df1285845d
Reviewed-on: https://go-review.googlesource.com/15403
Reviewed-by: Russ Cox <rsc@golang.org>
2015-10-09 19:37:41 +00:00
Austin Clements
3e57b17dc3 runtime: fix comment for assistRatio
The comment for assistRatio claimed it to be the reciprocal of what it
actually is.

Change-Id: If7f9bb853d75d0097facff3aa6704b224d9108b8
Reviewed-on: https://go-review.googlesource.com/15402
Reviewed-by: Russ Cox <rsc@golang.org>
2015-10-09 19:37:23 +00:00
Nodir Turakulov
3be4d59820 runtime: remove redundant type cast
(*T)(unsafe.Pointer(&t)) === &t
for t of type T
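
For illustration:

    package main

    import "unsafe"

    type T struct{ x int }

    func main() {
        var t T
        p1 := (*T)(unsafe.Pointer(&t)) // redundant round-trip through unsafe.Pointer
        p2 := &t                       // equivalent and clearer
        println(p1 == p2)              // true
    }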

Change-Id: I43c1aa436747dfa0bf4cb0d615da1647633f9536
Reviewed-on: https://go-review.googlesource.com/15656
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-09 18:48:36 +00:00
Keith Randall
91059de095 runtime: make aeshash more DOS-proof
Improve the aeshash implementation to make it harder to engineer collisions.

1) Scramble the seed before xoring with the input string.  This
   makes it harder to cancel known portions of the seed (like the size)
   because it mixes the per-table seed into those other parts.

2) Use table-dependent seeds for all stripes when hashing >16 byte strings.

For small strings this change uses 4 aesenc ops instead of 3, so it
is somewhat slower.  The first two can run in parallel, though, so
it isn't 33% slower.
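
Illustrative only (the real aeshash is amd64 assembly built on AESENC
rounds): a stand-in mix function shows the ordering change, with the
per-table seed scrambled together with the size before any input is
folded in.

    package main

    func mix(a, b uint64) uint64 { return (a ^ b) * 0x9E3779B97F4A7C15 }

    // hashSketch scrambles the seed with the size first, so known
    // portions of the seed (like the size) cannot be cancelled.
    func hashSketch(data []byte, seed, size uint64) uint64 {
        h := mix(seed, size)
        for _, c := range data {
            h = mix(h, uint64(c))
        }
        return h
    }

    func main() {
        println(hashSketch([]byte("hello"), 1, 5))
    }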

benchmark                            old ns/op     new ns/op     delta
BenchmarkHash64-12                   10.2          11.2          +9.80%
BenchmarkHash16-12                   5.71          6.13          +7.36%
BenchmarkHash5-12                    6.64          7.01          +5.57%
BenchmarkHashBytesSpeed-12           30.3          31.9          +5.28%
BenchmarkHash65536-12                2785          2882          +3.48%
BenchmarkHash1024-12                 53.6          55.4          +3.36%
BenchmarkHashStringArraySpeed-12     54.9          56.5          +2.91%
BenchmarkHashStringSpeed-12          18.7          19.2          +2.67%
BenchmarkHashInt32Speed-12           14.8          15.1          +2.03%
BenchmarkHashInt64Speed-12           14.5          14.5          +0.00%

Change-Id: I59ea124b5cb92b1c7e8584008257347f9049996c
Reviewed-on: https://go-review.googlesource.com/14124
Reviewed-by: jcd . <jcd@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-08 16:43:03 +00:00
Michael Hudson-Doyle
168a51b3a1 runtime: adjust the arm64 memmove and memclr to operate by word as much as they can
Not only is this an obvious optimization:

benchmark                           old MB/s     new MB/s     speedup
BenchmarkMemmove1-4                 35.35        29.65        0.84x
BenchmarkMemmove2-4                 63.78        52.53        0.82x
BenchmarkMemmove3-4                 89.72        73.96        0.82x
BenchmarkMemmove4-4                 109.94       95.73        0.87x
BenchmarkMemmove5-4                 127.60       112.80       0.88x
BenchmarkMemmove6-4                 143.59       126.67       0.88x
BenchmarkMemmove7-4                 157.90       138.92       0.88x
BenchmarkMemmove8-4                 167.18       231.81       1.39x
BenchmarkMemmove9-4                 175.23       252.07       1.44x
BenchmarkMemmove10-4                165.68       261.10       1.58x
BenchmarkMemmove11-4                174.43       263.31       1.51x
BenchmarkMemmove12-4                180.76       267.56       1.48x
BenchmarkMemmove13-4                189.06       284.93       1.51x
BenchmarkMemmove14-4                186.31       284.72       1.53x
BenchmarkMemmove15-4                195.75       281.62       1.44x
BenchmarkMemmove16-4                202.96       439.23       2.16x
BenchmarkMemmove32-4                264.77       775.77       2.93x
BenchmarkMemmove64-4                306.81       1209.64      3.94x
BenchmarkMemmove128-4               357.03       1515.41      4.24x
BenchmarkMemmove256-4               380.77       2066.01      5.43x
BenchmarkMemmove512-4               385.05       2556.45      6.64x
BenchmarkMemmove1024-4              381.23       2804.10      7.36x
BenchmarkMemmove2048-4              379.06       2814.83      7.43x
BenchmarkMemmove4096-4              387.43       3064.96      7.91x
BenchmarkMemmoveUnaligned1-4        28.91        25.40        0.88x
BenchmarkMemmoveUnaligned2-4        56.13        47.56        0.85x
BenchmarkMemmoveUnaligned3-4        74.32        69.31        0.93x
BenchmarkMemmoveUnaligned4-4        97.02        83.58        0.86x
BenchmarkMemmoveUnaligned5-4        110.17       103.62       0.94x
BenchmarkMemmoveUnaligned6-4        124.95       113.26       0.91x
BenchmarkMemmoveUnaligned7-4        142.37       130.82       0.92x
BenchmarkMemmoveUnaligned8-4        151.20       205.64       1.36x
BenchmarkMemmoveUnaligned9-4        166.97       215.42       1.29x
BenchmarkMemmoveUnaligned10-4       148.49       221.22       1.49x
BenchmarkMemmoveUnaligned11-4       159.47       239.57       1.50x
BenchmarkMemmoveUnaligned12-4       163.52       247.32       1.51x
BenchmarkMemmoveUnaligned13-4       167.55       256.54       1.53x
BenchmarkMemmoveUnaligned14-4       175.12       251.03       1.43x
BenchmarkMemmoveUnaligned15-4       192.10       267.13       1.39x
BenchmarkMemmoveUnaligned16-4       190.76       378.87       1.99x
BenchmarkMemmoveUnaligned32-4       259.02       562.98       2.17x
BenchmarkMemmoveUnaligned64-4       317.72       842.44       2.65x
BenchmarkMemmoveUnaligned128-4      355.43       1274.49      3.59x
BenchmarkMemmoveUnaligned256-4      378.17       1815.74      4.80x
BenchmarkMemmoveUnaligned512-4      362.15       2180.81      6.02x
BenchmarkMemmoveUnaligned1024-4     376.07       2453.58      6.52x
BenchmarkMemmoveUnaligned2048-4     381.66       2568.32      6.73x
BenchmarkMemmoveUnaligned4096-4     398.51       2669.36      6.70x
BenchmarkMemclr5-4                  113.83       107.93       0.95x
BenchmarkMemclr16-4                 223.84       389.63       1.74x
BenchmarkMemclr64-4                 421.99       1209.58      2.87x
BenchmarkMemclr256-4                525.94       2411.58      4.59x
BenchmarkMemclr4096-4               581.66       4372.20      7.52x
BenchmarkMemclr65536-4              565.84       4747.48      8.39x
BenchmarkGoMemclr5-4                194.63       160.31       0.82x
BenchmarkGoMemclr16-4               295.30       630.07       2.13x
BenchmarkGoMemclr64-4               480.24       1884.03      3.92x
BenchmarkGoMemclr256-4              540.23       2926.49      5.42x

but it turns out that it's necessary to avoid the GC seeing partially written
pointers.

It's of course possible to be more sophisticated (using ldp/stp to move 16
bytes at a time in the core loop and unrolling the tail copying loops being
the obvious ideas) but I wanted something simple and (reasonably) obviously
correct.

Fixes #12552

Change-Id: Iaeaf8a812cd06f4747ba2f792de1ded738890735
Reviewed-on: https://go-review.googlesource.com/14813
Reviewed-by: Austin Clements <austin@google.com>
2015-10-08 07:49:35 +00:00
Michael Hudson-Doyle
a5cb76243a cmd/internal/obj, cmd/link, runtime: lots of TLS cleanup
It's particularly nice to get rid of the android special cases in the linker.

Change-Id: I516363af7ce8a6b2f196fe49cb8887ac787a6dad
Reviewed-on: https://go-review.googlesource.com/14197
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-10-08 00:21:30 +00:00
Raul Silvera
27ee719fb3 pprof: improve sampling for heap profiling
The current heap sampling introduces some bias that interferes
with unsampling, producing unexpected heap profiles.
The solution is to use a Poisson process to generate the
sampling points, using the formulas described at
https://en.wikipedia.org/wiki/Poisson_process
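
A sketch of the idea, with math.Log and math/rand standing in for the
runtime's fast fixed-point approximation:

    package main

    import (
        "math"
        "math/rand"
    )

    // nextSample draws the number of bytes to allocate before the next
    // heap-profile sample from an exponential distribution, so sample
    // points form a Poisson process with the given mean (bytes/sample).
    func nextSample(mean float64) int64 {
        return int64(-math.Log(1-rand.Float64()) * mean)
    }

    func main() {
        for i := 0; i < 3; i++ {
            println(nextSample(512 * 1024)) // MemProfileRate-sized mean
        }
    }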

This fixes #12620

Change-Id: If2400809ed3c41de504dd6cff06be14e476ff96c
Reviewed-on: https://go-review.googlesource.com/14590
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-05 08:15:09 +00:00
Austin Clements
9f6df6c940 runtime: use 4 byte writes in amd64p32 memmove/memclr
Currently, amd64p32's memmove and memclr use 8 byte writes as much as
possible and 1 byte writes for the tail of the object. However, if an
object ends with a 4 byte pointer at an 8 byte aligned offset, this
may copy/zero the pointer field one byte at a time, allowing the
garbage collector to observe a partially copied pointer.

Fix this by using 4 byte writes instead of 8 byte writes.

Updates #12552.

Change-Id: I13324fd05756fb25ae57e812e836f0a975b5595c
Reviewed-on: https://go-review.googlesource.com/15370
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2015-10-02 22:49:15 +00:00
Austin Clements
44078a3228 runtime: adjust huge page flags only on huge page granularity
This fixes an issue where the runtime panics with "out of memory" or
"cannot allocate memory" even though there's ample memory by reducing
the number of memory mappings created by the memory allocator.

Commit 7e1b61c worked around issue #8832 where Linux's transparent
huge page support could dramatically increase the RSS of a Go process
by setting the MADV_NOHUGEPAGE flag on any regions of pages released
to the OS with MADV_DONTNEED. This had the side effect of also
increasing the number of VMAs (memory mappings) in a Go address space
because a separate VMA is needed for every region of the virtual
address space with different flags. Unfortunately, by default, Linux
limits the number of VMAs in an address space to 65530, and a large
heap can quickly reach this limit when the runtime starts scavenging
memory.

This commit dramatically reduces the number of VMAs. It does this
primarily by only adjusting the huge page flag at huge page
granularity. With this change, on amd64, even a pessimal heap that
alternates between MADV_NOHUGEPAGE and MADV_HUGEPAGE must grow to 128GB
before hitting the VMA limit. Because of this rounding to huge page
granularity, this change is also careful to leave large used and
unused regions huge page-enabled.
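
A sketch of the rounding, assuming 2MiB huge pages:

    package main

    const hugePageSize = 2 << 20 // assumed x86 huge page size

    // hugePageRange rounds a released region [v, v+n) inward so that
    // only fully contained huge pages have their flag adjusted; the
    // partial pages at the edges stay huge page-enabled.
    func hugePageRange(v, n uintptr) (start, end uintptr) {
        start = (v + hugePageSize - 1) &^ (hugePageSize - 1) // round start up
        end = (v + n) &^ (hugePageSize - 1)                  // round end down
        return
    }

    func main() {
        s, e := hugePageRange(0x1ff000, 8<<20)
        println(s, e) // only [s, e) would get MADV_NOHUGEPAGE
    }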

This change reduces the maximum number of VMAs during the runtime
benchmarks with GODEBUG=scavenge=1 from 692 to 49.

Fixes #12233.

Change-Id: Ic397776d042f20d53783a1cacf122e2e2db00584
Reviewed-on: https://go-review.googlesource.com/15191
Reviewed-by: Keith Randall <khr@golang.org>
2015-10-02 20:20:43 +00:00
Austin Clements
9a31d38f65 runtime: remove sweep wait loop in finishsweep_m
In general, finishsweep_m must block until any spans that are
concurrently being swept have been swept. It accomplishes this by
looping over all spans, which, as in the previous commit, takes
~1ms/heap GB. Unfortunately, we do this during the STW sweep
termination phase, so multi-gigabyte heaps can push our STW time past
10ms.

However, there's no need to do this wait if the world is stopped
because, in effect, stopping the world already had to wait for
anything that was sweeping (and if it didn't, the wait in
finishsweep_m would deadlock). Hence, we can simply skip this loop if
the world is stopped, such as during sweep termination. In fact,
currently all calls to finishsweep_m are STW, but this hasn't always
been the case and may not be the case in the future, so we keep the
logic around.

For 24GB heaps, this reduces max pause time by 75% relative to tip and
by 90% relative to Go 1.5. Notably, all pauses are now well under
10ms. Here are the results for the garbage benchmark:

               ------------- max pause ------------
Heap   Procs   after change   before change   1.5.1
24GB     12        3.8ms          16ms         37ms
24GB      4        3.7ms          16ms         37ms
 4GB      4        3.7ms           3ms        6.9ms

In the 4GB/4P case, it seems the "before change" run got lucky: the
max went up, but the 99%ile pause time went down from 3ms to 2.04ms.

Change-Id: Ica22189559f231d408ef2815019c9dbb5f38bf31
Reviewed-on: https://go-review.googlesource.com/15071
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-02 19:56:01 +00:00
Austin Clements
dac220b0a9 runtime: remove in-use page count loop from STW
In order to compute the sweep ratio, the runtime needs to know how
many pages belong to spans in state _MSpanInUse. Currently it finds
this out by looping over all spans during mark termination. However,
this takes ~1ms/heap GB, so multi-gigabyte heaps can quickly push our
STW time past 10ms.

Replace the loop with an actively maintained count of in-use pages.
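
In sketch form (names hypothetical): the counter is updated wherever a
span enters or leaves _MSpanInUse and read in O(1) at mark termination.

    package main

    type heapSketch struct {
        pagesInUse uint64 // pages in spans currently in state _MSpanInUse
    }

    func (h *heapSketch) spanAlloc(npages uint64) { h.pagesInUse += npages }
    func (h *heapSketch) spanFree(npages uint64)  { h.pagesInUse -= npages }

    func main() {
        h := &heapSketch{}
        h.spanAlloc(8)
        h.spanFree(3)
        println(h.pagesInUse) // 5, with no loop over the span list
    }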

For multi-gigabyte heaps, this reduces max mark termination pause time
by 75%–90% relative to tip and by 85%–95% relative to Go 1.5.1. This
shifts the longest pause time for large heaps to the sweep termination
phase, so it only slightly decreases max pause time, though it roughly
halves mean pause time. Here are the results for the garbage
benchmark:

               ---- max mark termination pause ----
Heap   Procs   after change   before change   1.5.1
24GB     12        1.9ms          18ms         37ms
24GB      4        3.7ms          18ms         37ms
 4GB      4        920µs         3.8ms        6.9ms

Fixes #11484.

Change-Id: Ia2d28bb8a1e4f1c3b8ebf79fb203f12b9bf114ac
Reviewed-on: https://go-review.googlesource.com/15070
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-02 19:55:55 +00:00
Austin Clements
608c1b0d56 runtime: scan objects with finalizers concurrently
This reduces pause time by ~25% relative to tip and by ~50% relative
to Go 1.5.1.

Currently one of the steps of STW mark termination is to loop (in
parallel) over all spans to find objects with finalizers in order to
mark all objects reachable from these objects and to treat the
finalizer special as a root. Unfortunately, even if there are no
finalizers at all, this loop takes roughly 1 ms/heap GB/core, so
multi-gigabyte heaps can quickly push our STW time past 10ms.

Fix this by moving this scan from mark termination to concurrent scan,
where it can run in parallel with mutators. The loop itself could also
be optimized, but this cost is small compared to concurrent marking.

Making this scan concurrent introduces two complications:

1) The scan currently walks the specials list of each span without
locking it, which is safe only with the world stopped. We fix this by
speculatively checking if a span has any specials (the vast majority
won't) and then locking the specials list only if there are specials
to check. (A sketch of this check follows item 2 below.)

2) An object can have a finalizer set after concurrent scan, in which
case it won't have been marked appropriately by concurrent scan. If
the finalizer is a closure and is only reachable from the special, it
could be swept before it is run. Likewise, if the object is not marked
yet when the finalizer is set and then becomes unreachable before it
is marked, other objects reachable only from it may be swept before
the finalizer function is run. We fix this issue by making
addfinalizer ensure the same marking invariants as markroot does.
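
A sketch of the speculative check from (1), with hypothetical names:

    package main

    import "sync"

    type special struct{ next *special }

    type spanSketch struct {
        specialLock sync.Mutex
        specials    *special // nil for the vast majority of spans
    }

    // scanSpecials takes the lock only after a speculative unlocked
    // check, so spans without specials pay nothing during the
    // concurrent scan.
    func scanSpecials(s *spanSketch) {
        if s.specials == nil { // speculative fast path
            return
        }
        s.specialLock.Lock()
        for sp := s.specials; sp != nil; sp = sp.next {
            // ... mark objects reachable from this special ...
        }
        s.specialLock.Unlock()
    }

    func main() { scanSpecials(&spanSketch{}) }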

For multi-gigabyte heaps, this reduces max pause time by 20%–30%
relative to tip (depending on GOMAXPROCS) and by ~50% relative to Go
1.5.1 (where this loop was neither concurrent nor parallel). Here are
the results for the garbage benchmark:

               ---------------- max pause ----------------
Heap   Procs   Concurrent scan   STW parallel scan   1.5.1
24GB     12         18ms              23ms            37ms
24GB      4         18ms              25ms            37ms
 4GB      4         3.8ms            4.9ms           6.9ms

In all cases, 95%ile pause time is similar to the max pause time. This
also improves mean STW time by 10%–30%.

Fixes #11485.

Change-Id: I9359d8c3d120a51d23d924b52bf853a1299b1dfd
Reviewed-on: https://go-review.googlesource.com/14982
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-02 19:55:48 +00:00
Austin Clements
fbd2660af3 runtime: introduce gcMode type for GC modes
Currently, the GC modes constants are untyped and functions pass them
around as ints. Clean this up by introducing a proper type for these
constants.
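
A sketch of the typed constants (the names match the runtime of this
era; the gc stub is illustrative):

    package main

    type gcMode int

    const (
        gcBackgroundMode gcMode = iota // concurrent GC and sweep
        gcForceMode                    // stop-the-world GC now, concurrent sweep
        gcForceBlockMode               // stop-the-world GC now and STW sweep
    )

    func gc(mode gcMode) {
        if mode != gcBackgroundMode {
            // ... stop the world ...
        }
    }

    func main() { gc(gcBackgroundMode) }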

Change-Id: Ibc022447bdfa203644921fbb548312d7e2272e8d
Reviewed-on: https://go-review.googlesource.com/14981
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-10-02 19:55:41 +00:00
Austin Clements
1b84bb8c7c runtime: fix out-of-date comment on gcWork usage
Change-Id: I3c21ffa80a5c14911e07238b1f64bec686ed7b72
Reviewed-on: https://go-review.googlesource.com/14980
Reviewed-by: Minux Ma <minux@golang.org>
2015-10-02 19:55:34 +00:00
David Crawshaw
47ccf96a95 runtime: darwin/386 entrypoint for c-archive
Change-Id: Ic22597b5e2824cffe9598cb9b506af3426c285fd
Reviewed-on: https://go-review.googlesource.com/12412
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-10-02 11:45:52 +00:00
Michael Hudson-Doyle
2c911143fd runtime: adjust the ppc64x memmove and memclr to copy by word as much as it can
Issue #12552 can happen on ppc64 too, although much less frequently in my
testing. I'm fairly sure this fixes it (2 out of 200 runs of oracle.test failed
without this change and 0 of 200 failed with it). It's also a lot faster for
large moves/clears:

name           old speed      new speed       delta
Memmove1-6      157MB/s ± 9%    144MB/s ± 0%    -8.20%         (p=0.004 n=10+9)
Memmove2-6      281MB/s ± 1%    249MB/s ± 1%   -11.53%        (p=0.000 n=10+10)
Memmove3-6      376MB/s ± 1%    328MB/s ± 1%   -12.64%        (p=0.000 n=10+10)
Memmove4-6      475MB/s ± 4%    345MB/s ± 1%   -27.28%         (p=0.000 n=10+8)
Memmove5-6      540MB/s ± 1%    393MB/s ± 0%   -27.21%        (p=0.000 n=10+10)
Memmove6-6      609MB/s ± 0%    423MB/s ± 0%   -30.56%         (p=0.000 n=9+10)
Memmove7-6      659MB/s ± 0%    468MB/s ± 0%   -28.99%         (p=0.000 n=8+10)
Memmove8-6      705MB/s ± 0%   1295MB/s ± 1%   +83.73%          (p=0.000 n=9+9)
Memmove9-6      740MB/s ± 1%   1241MB/s ± 1%   +67.61%         (p=0.000 n=10+8)
Memmove10-6     780MB/s ± 0%   1162MB/s ± 1%   +48.95%         (p=0.000 n=10+9)
Memmove11-6     811MB/s ± 0%   1180MB/s ± 0%   +45.58%          (p=0.000 n=8+9)
Memmove12-6     820MB/s ± 1%   1073MB/s ± 1%   +30.83%         (p=0.000 n=10+9)
Memmove13-6     849MB/s ± 0%   1068MB/s ± 1%   +25.87%        (p=0.000 n=10+10)
Memmove14-6     877MB/s ± 0%    911MB/s ± 0%    +3.83%        (p=0.000 n=10+10)
Memmove15-6     893MB/s ± 0%    922MB/s ± 0%    +3.25%         (p=0.000 n=10+9)
Memmove16-6     897MB/s ± 1%   2418MB/s ± 1%  +169.67%         (p=0.000 n=10+9)
Memmove32-6     908MB/s ± 0%   3927MB/s ± 2%  +332.64%         (p=0.000 n=10+8)
Memmove64-6    1.11GB/s ± 0%   5.59GB/s ± 0%  +404.64%          (p=0.000 n=9+9)
Memmove128-6   1.25GB/s ± 0%   6.71GB/s ± 2%  +437.49%         (p=0.000 n=9+10)
Memmove256-6   1.33GB/s ± 0%   7.25GB/s ± 1%  +445.06%        (p=0.000 n=10+10)
Memmove512-6   1.38GB/s ± 0%   8.87GB/s ± 0%  +544.43%        (p=0.000 n=10+10)
Memmove1024-6  1.40GB/s ± 0%  10.00GB/s ± 0%  +613.80%        (p=0.000 n=10+10)
Memmove2048-6  1.41GB/s ± 0%  10.65GB/s ± 0%  +652.95%         (p=0.000 n=9+10)
Memmove4096-6  1.42GB/s ± 0%  11.01GB/s ± 0%  +675.37%         (p=0.000 n=8+10)
Memclr5-6       269MB/s ± 1%    264MB/s ± 0%    -1.80%        (p=0.000 n=10+10)
Memclr16-6      600MB/s ± 0%    887MB/s ± 1%   +47.83%        (p=0.000 n=10+10)
Memclr64-6     1.06GB/s ± 0%   2.91GB/s ± 1%  +174.58%         (p=0.000 n=8+10)
Memclr256-6    1.32GB/s ± 0%   6.58GB/s ± 0%  +399.86%         (p=0.000 n=9+10)
Memclr4096-6   1.42GB/s ± 0%  10.90GB/s ± 0%  +668.03%         (p=0.000 n=8+10)
Memclr65536-6  1.43GB/s ± 0%  11.37GB/s ± 0%  +697.83%          (p=0.000 n=9+8)
GoMemclr5-6     359MB/s ± 0%    360MB/s ± 0%    +0.46%        (p=0.000 n=10+10)
GoMemclr16-6    750MB/s ± 0%   1264MB/s ± 1%   +68.45%        (p=0.000 n=10+10)
GoMemclr64-6   1.17GB/s ± 0%   3.78GB/s ± 1%  +223.58%         (p=0.000 n=10+9)
GoMemclr256-6  1.35GB/s ± 0%   7.47GB/s ± 0%  +452.44%        (p=0.000 n=10+10)

Update #12552

Change-Id: I7192e9deb9684a843aed37f58a16a4e29970e893
Reviewed-on: https://go-review.googlesource.com/14840
Reviewed-by: Minux Ma <minux@golang.org>
2015-10-02 07:50:52 +00:00
Mikio Hara
9fb79380f0 runtime: drop sigfwd from signal forwarding unsupported platforms
This change splits signal_unix.go into signal_unix.go and
signal2_unix.go and removes the fake symbol sigfwd from platforms that
do not support signal forwarding, for clarity.

Change-Id: I205eab5cf1930fda8a68659b35cfa9f3a0e67ca6
Reviewed-on: https://go-review.googlesource.com/12062
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-10-02 01:07:44 +00:00
Joel Sing
db70c019d7 runtime/trace: reduce memory usage for trace stress tests on openbsd/arm
Reduce allocation to avoid running out of memory on the openbsd/arm builder,
until issue #12032 is resolved.

Update issue #12032

Change-Id: Ibd513829ffdbd0db6cd86a0a5409934336131156
Reviewed-on: https://go-review.googlesource.com/15242
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-10-01 18:00:55 +00:00
Joel Sing
1d5251f707 runtime: handle sysReserve failure in mHeap_SysAlloc
sysReserve will return nil on failure - correctly handle this case and return
nil to the caller. Currently, a failure will result in h.arena_end being set
to psize, h.arena_used being set to zero and fun times ensue.

On the openbsd/arm builder this has resulted in:

  runtime: address space conflict: map(0x0) = 0x40946000
  fatal error: runtime: address space conflict

When it should be reporting out of memory instead.
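
A sketch of the fixed control flow, with a stub standing in for
sysReserve:

    package main

    import "unsafe"

    // sysReserveStub stands in for sysReserve, which returns nil when
    // the OS refuses the reservation.
    func sysReserveStub(v unsafe.Pointer, n uintptr) unsafe.Pointer { return nil }

    // sysAllocSketch propagates a nil reservation to the caller as out
    // of memory instead of corrupting the arena bookkeeping.
    func sysAllocSketch(psize uintptr) unsafe.Pointer {
        p := sysReserveStub(nil, psize)
        if p == nil {
            return nil
        }
        // ... update h.arena_end and h.arena_used only on success ...
        return p
    }

    func main() {
        println(sysAllocSketch(1<<20) == nil) // true: failure is reported, not used
    }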

Change-Id: Iba828d5ee48ee1946de75eba409e0cfb04f089d4
Reviewed-on: https://go-review.googlesource.com/15056
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-10-01 14:40:02 +00:00
Jeremy Schlatter
59bacb285c runtime: update comment to match function name
Change-Id: I8f22434ade576cc7e3e6d9f357bba12c1296e3d1
Reviewed-on: https://go-review.googlesource.com/15250
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-01 13:12:50 +00:00
Ian Lance Taylor
0c1f0549b8 runtime, runtime/cgo: support using msan on cgo code
The memory sanitizer (msan) is a nice compiler feature that can
dynamically check for memory errors in C code.  It's not useful for Go
code, since Go is memory safe.  But it is useful to be able to use the
memory sanitizer on C code that is linked into a Go program via cgo.
Without this change it does not work, as msan considers memory passed
from Go to C as uninitialized.

To make this work, change the runtime to call the C mmap function when
using cgo.  When using msan the mmap call will be intercepted and marked
as returning initialized memory.

Work around what appears to be an msan bug by calling malloc before we
call mmap.

Change-Id: I8ab7286d7595ae84782f68a98bef6d3688b946f9
Reviewed-on: https://go-review.googlesource.com/15170
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-09-30 22:17:55 +00:00
Austin Clements
e01be84149 runtime: test that periodic GC works
We've broken periodic GC a few times without noticing because there's
no test for it, partly because you have to wait two minutes to see if
it happens. This exposes control of the periodic GC timeout to runtime
tests and adds a test that cranks it down to zero and sleeps for a bit
to make sure periodic GCs happen.
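
A sketch of the test shape, with the runtime hook stubbed out (the real
test exposes the timeout through export_test.go):

    package runtime_test

    import (
        "runtime"
        "testing"
        "time"
    )

    // setForceGCPeriod is a no-op stand-in for the unexported runtime
    // hook that controls the periodic-GC timeout.
    func setForceGCPeriod(d time.Duration) {}

    func TestPeriodicGCSketch(t *testing.T) {
        var ms runtime.MemStats
        runtime.ReadMemStats(&ms)
        before := ms.NumGC

        setForceGCPeriod(0) // crank the two-minute timeout down to zero
        defer setForceGCPeriod(2 * time.Minute)

        time.Sleep(100 * time.Millisecond) // give the forced GC a chance to run
        runtime.ReadMemStats(&ms)
        if ms.NumGC == before {
            t.Skip("no periodic GC observed (hook is stubbed in this sketch)")
        }
    }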

Change-Id: I3ec44e967e99f4eda752f85c329eebd18b87709e
Reviewed-on: https://go-review.googlesource.com/13169
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
2015-09-30 19:24:07 +00:00
Shenghou Ma
604fbab3f1 runtime: fix incomplete sentence in comment
Fixes #12709.

Change-Id: If5a2536458fcd26d6f003dde1bfc02f86b09fa94
Reviewed-on: https://go-review.googlesource.com/14793
Reviewed-by: Andrew Gerrand <adg@golang.org>
2015-09-23 17:05:39 +00:00
Alex Brainman
d02a4c1d60 runtime: test that timeBeginPeriod succeeds
Change-Id: I5183f767dadb6d24a34d2460d02e97ddbaab129a
Reviewed-on: https://go-review.googlesource.com/12546
Run-TryBot: Minux Ma <minux@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-23 09:01:08 +00:00
Austin Clements
b307910b6e runtime: fix offset in invalidptr panic message
Change-Id: I00e1eebbf5e1a01c8fad5ca5324aa8eec1e4d731
Reviewed-on: https://go-review.googlesource.com/14792
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-22 16:55:17 +00:00
Ilya Tocar
5cf281a9b7 runtime: optimize duffcopy on amd64
Use movups to copy 16 bytes at a time.
Results (haswell):

name            old time/op  new time/op  delta
CopyFat8-48     0.62ns ± 3%  0.63ns ± 3%     ~     (p=0.535 n=20+20)
CopyFat12-48    0.92ns ± 2%  0.93ns ± 3%     ~     (p=0.594 n=17+18)
CopyFat16-48    1.23ns ± 2%  1.23ns ± 2%     ~     (p=0.839 n=20+19)
CopyFat24-48    1.85ns ± 2%  1.84ns ± 0%   -0.48%  (p=0.014 n=19+20)
CopyFat32-48    2.45ns ± 0%  2.45ns ± 1%     ~     (p=1.000 n=16+16)
CopyFat64-48    3.30ns ± 2%  2.14ns ± 1%  -35.00%  (p=0.000 n=20+18)
CopyFat128-48   6.05ns ± 0%  3.98ns ± 0%  -34.22%  (p=0.000 n=18+17)
CopyFat256-48   11.9ns ± 3%   7.7ns ± 0%  -35.87%  (p=0.000 n=20+17)
CopyFat512-48   23.0ns ± 2%  15.1ns ± 2%  -34.52%  (p=0.000 n=20+18)
CopyFat1024-48  44.8ns ± 1%  29.8ns ± 2%  -33.48%  (p=0.000 n=17+19)

Change-Id: I8a78773c656d400726a020894461e00c59f896bf
Reviewed-on: https://go-review.googlesource.com/14836
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2015-09-22 15:02:37 +00:00
Dmitry Vyukov
9172a1b573 runtime: race instrument read of convT2E/I arg
Sometimes this read is instrumented by compiler when it creates
a temp to take address, but sometimes it is not (e.g. for global vars
compiler takes address of the global directly).

Instrument convT2E/I similarly to chansend and mapaccess.

Fixes #12664

Change-Id: Ia7807f15d735483996426c5f3aed60a33b279579
Reviewed-on: https://go-review.googlesource.com/14752
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-19 10:26:36 +00:00
Austin Clements
c742ff6adc runtime: remove flaky TestInvalidptrCrash to fix build
This test fails on arm64 and some amd64 OSs and fails on Linux/amd64
if you remove the first runtime.GC(), which should be unnecessary, and
run it in all.bash (but not if you run it in isolation). I don't
understand any of these failures, so for now just remove this test.

TBR=rlh

Change-Id: Ibed00671126000ed7dc5b5d4af1f86fe4a1e30e1
Reviewed-on: https://go-review.googlesource.com/14767
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-19 01:43:00 +00:00
Austin Clements
97b64d88eb runtime: avoid debug prints of huge objects
Currently when the GC prints an object for debugging (e.g., for a
failed invalidptr or checkmark check), it dumps the entire object. To
avoid inundating the user with output for really large objects, limit
this to printing just the first 128 words (which are most likely to be
useful in identifying the type of an object) and the 32 words around
the problematic field.

Change-Id: Id94a5c9d8162f8bd9b2a63bf0b1bfb0adde83c68
Reviewed-on: https://go-review.googlesource.com/14764
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-18 22:23:18 +00:00
Austin Clements
b7c55ba496 runtime: improve invalid pointer error message
By default, the runtime panics if it detects a pointer to an
unallocated span. At this point, this usually catches bad uses of
unsafe or cgo in user code (though it could also catch runtime bugs).
Unfortunately, the rather cryptic error misleads users, offers users
little help with debugging their own problem, and offers the Go
developers little help with root-causing.

Improve the error message in various ways. First, the wording is
improved to make it clearer what condition was detected and to suggest
that this may be the result of incorrect use of unsafe or cgo. Second,
we add a dump of the object containing the bad pointer so that there's
at least some hope of figuring out why a bad pointer was stored in the
Go heap.

Change-Id: I57b91b12bc3cb04476399d7706679e096ce594b9
Reviewed-on: https://go-review.googlesource.com/14763
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-18 22:23:11 +00:00
Shawn Walker-Salas
001a75a74c runtime/trace: fix tracing of blocking system calls
The placement and invocation of traceGoSysCall when using
entersyscallblock() instead of entersyscall() differ enough that the
TestTraceSymbolize test can fail on some platforms.

This change moves the invocation of traceGoSysCall for entersyscall() so
that the same number of "frames to skip" are present in the trace as when
entersyscallblock() is used ensuring system call traces remain identical
regardless of internal implementation choices.

Fixes golang/go#12056

Change-Id: I8361e91aa3708f5053f98263dfe9feb8c5d1d969
Reviewed-on: https://go-review.googlesource.com/13861
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-09-17 09:06:20 +00:00
Alex Brainman
3d1f8c2379 runtime: print errno and byte count before crashing in mem_windows.go
As per iant's suggestion during the issue #12587 crash investigation.

Also adjust incorrect throw message in sysUsed while we are here.

Change-Id: Ice07904fdd6e0980308cb445965a696d26a1b92e
Reviewed-on: https://go-review.googlesource.com/14633
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-17 07:06:42 +00:00
David Crawshaw
9337dc9b5e runtime/debug: more explicit Stack docs
Change-Id: I81a7f22be827519b5290b4acbcba357680cad3c4
Reviewed-on: https://go-review.googlesource.com/14605
Reviewed-by: Rob Pike <r@golang.org>
2015-09-16 22:25:11 +00:00
Ilya Tocar
2421c6e3df runtime: optimize duffzero for amd64.
Use MOVUPS to zero 16 bytes at a time.

results (haswell):

name             old time/op  new time/op  delta
ClearFat8-48     0.62ns ± 2%  0.62ns ± 1%     ~     (p=0.085 n=20+15)
ClearFat12-48    0.93ns ± 2%  0.93ns ± 2%     ~     (p=0.757 n=19+19)
ClearFat16-48    1.23ns ± 1%  1.23ns ± 1%     ~     (p=0.896 n=19+17)
ClearFat24-48    1.85ns ± 2%  1.84ns ± 0%   -0.51%  (p=0.023 n=20+15)
ClearFat32-48    2.45ns ± 0%  2.46ns ± 2%     ~     (p=0.053 n=17+18)
ClearFat40-48    1.99ns ± 0%  0.92ns ± 2%  -53.54%  (p=0.000 n=19+20)
ClearFat48-48    2.15ns ± 1%  0.92ns ± 2%  -56.93%  (p=0.000 n=19+20)
ClearFat56-48    2.46ns ± 1%  1.23ns ± 0%  -49.98%  (p=0.000 n=19+14)
ClearFat64-48    2.76ns ± 0%  2.14ns ± 1%  -22.21%  (p=0.000 n=17+17)
ClearFat128-48   5.21ns ± 0%  3.99ns ± 0%  -23.46%  (p=0.000 n=17+19)
ClearFat256-48   10.3ns ± 4%   7.7ns ± 0%  -25.37%  (p=0.000 n=20+17)
ClearFat512-48   20.2ns ± 4%  15.0ns ± 1%  -25.58%  (p=0.000 n=20+17)
ClearFat1024-48  39.7ns ± 2%  29.7ns ± 0%  -25.05%  (p=0.000 n=19+19)

Change-Id: I200401eec971b2dd2450c0651c51e378bd982405
Reviewed-on: https://go-review.googlesource.com/14408
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-16 16:07:44 +00:00
David Crawshaw
2d697b2401 runtime/debug: implement Stack using runtime.Stack
Fixes #12363

Change-Id: I1a025ab6a1cbd5a58f5c2bce5416788387495428
Reviewed-on: https://go-review.googlesource.com/14604
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-16 11:36:21 +00:00
David Crawshaw
fb30270037 runtime: preserve R11 in darwin/arm entrypoint
The _rt0_arm_darwin_lib entrypoint has to conform to the darwin ARMv7
calling convention, which requires functions to preserve the value of
R11. Go uses R11 as the liblink REGTMP register, so save it manually.

Also avoid using R4, which is also callee-save.

Fixes #12590

Change-Id: I9c3b374e330f81ff8fc9c01fa20505a33ddcf39a
Reviewed-on: https://go-review.googlesource.com/14603
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-09-16 11:23:32 +00:00
Ian Lance Taylor
b6d115a583 runtime: on unexpected netpoll error, throw instead of looping
The current code prints an error message and then tries to carry on.
This is not helpful for Go users: they see a message that means
nothing and that they can do nothing about.  In the only known case of
this message, in issue 11498, the best guess is that the netpoll code
went into an infinite loop.  Instead of doing that, crash the program.

Fixes #11498.

Change-Id: Idda3456c5b708f0df6a6b56c5bb4e796bbc39d7c
Reviewed-on: https://go-review.googlesource.com/12047
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-09-15 17:56:56 +00:00
Keith Randall
731bdc5115 runtime: fix aeshash of empty string
Aeshash currently computes the hash of the empty string as
hash("", seed) = seed.  This is bad because the hash of a compound
object with empty strings in it doesn't include information about
where those empty strings were.  For instance [2]string{"", "foo"}
and [2]string{"foo", ""} might get the same hash.

Fix this by returning a scrambled seed instead of the seed itself.
With this fix, we can remove the scrambling done by the generated
array hash routines.

The test also rejects hash("", seed) = 0, if we ever thought
it would be a good idea to try that.

The fallback hash is already OK in this regard.
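
A sketch of the property the fix establishes, with stand-in constants
(aeshash does the scrambling with AESENC rounds):

    package main

    // hashEmptySketch returns a scrambling of the seed for the empty
    // string: never the seed itself, and never 0.
    func hashEmptySketch(seed uint64) uint64 {
        return (seed ^ 0x9E3779B97F4A7C15) * 0xFF51AFD7ED558CCD
    }

    func main() {
        println(hashEmptySketch(0) != 0)   // true
        println(hashEmptySketch(42) != 42) // true: the hash is not the seed
    }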

Change-Id: Iaedbaa5be8d6a246dc7e9383d795000e0f562037
Reviewed-on: https://go-review.googlesource.com/14129
Reviewed-by: jcd . <jcd@golang.org>
2015-09-15 17:51:23 +00:00
Alex Brainman
d7c12042bf runtime: provide room for first 4 syscall parameters in windows usleep2
Windows amd64 requires all syscall callers to provide room for first
4 parameters on stack. We do that for all our syscalls, except inside
of usleep2. In https://codereview.appspot.com/7563043#msg3 rsc says:

"We don't need the stack alignment and first 4 parameters on amd64
because it's just a system call, not an ordinary function call."

He seems to be wrong on both counts. But alignment is already fixed.
Fix parameter space now too.

Fixes #12444

Change-Id: I66a2a18d2f2c3846e3aa556cc3acc8ec6240bea0
Reviewed-on: https://go-review.googlesource.com/14282
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-15 01:12:32 +00:00
Ian Lance Taylor
ffd7d31787 runtime: unblock special glibc signals on each thread
Glibc uses some special signals for special thread operations.  These
signals will be used in programs that use cgo and invoke certain glibc
functions, such as setgid.  In order for this to work, these signals
need to not be masked by any thread.  Before this change, they were
being masked by programs that used os/signal.Notify, because it
carefully masks all non-thread-specific signals in all threads so that a
dedicated thread will collect and report those signals (see ensureSigM
in signal1_unix.go).

This change adds the two glibc special signals to the set of signals
that are unmasked in each thread.

Fixes #12498.

Change-Id: I797d71a099a2169c186f024185d44a2e1972d4ad
Reviewed-on: https://go-review.googlesource.com/14297
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-09-14 21:59:54 +00:00
Austin Clements
4ac4085f8e runtime: minor clarifications of markroot
This puts the _Root* indexes in a more friendly order and tweaks
markrootSpans to use a for-range loop instead of its own indexing.

Change-Id: I2c18d55c9a673ea396b6424d51ef4997a1a74825
Reviewed-on: https://go-review.googlesource.com/14548
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-14 19:37:44 +00:00
Austin Clements
a1cad70a2f runtime: remove unused g.readyg field
Commit 0e6a6c5 removed readyExecute a long time ago, but left behind
the g.readyg field that was used by readyExecute. Remove this now
unused field.

Change-Id: I41b87ad2b427974d256ec7a7f6d4bdc2ce8a13bb
Reviewed-on: https://go-review.googlesource.com/13111
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-14 18:40:22 +00:00
Austin Clements
70462f90ec runtime: simplify mSpan_Sweep
This is a cleanup following cc8f544, which was a minimal change to fix
issue #11617. This consolidates the two places in mSpan_Sweep that
update sweepgen. Previously this was necessary because sweepgen must
be updated before freeing the span, but we freed large spans early.
Now we free large spans later, so there's no need to duplicate the
sweepgen update. This also means large spans can take advantage of the
sweepgen sanity checking performed for other spans.

Change-Id: I23b79dbd9ec81d08575cd307cdc0fa6b20831768
Reviewed-on: https://go-review.googlesource.com/12451
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-14 18:29:58 +00:00
Austin Clements
572f08a064 runtime: split marking of span roots into 128 subtasks
Marking of span roots can represent a significant fraction of the time
spent in mark termination. Simply traversing the span list takes about
1ms per GB of heap and if there are a large number of finalizers (for
example, for network connections), it may take much longer.

Improve the situation by splitting the span scan into 128 subtasks
that can be executed in parallel and load balanced by the markroots
parallel for. This lets the GC balance this job across the Ps.
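
A sketch of the sharding arithmetic (only the shard count of 128 comes
from the commit; the rest is illustrative):

    package main

    const rootSpansShards = 128

    // spanShard returns the slice of spans handled by one subtask; the
    // standard even-partition idiom covers every span exactly once.
    func spanShard(spans []uintptr, shard int) []uintptr {
        n := len(spans)
        return spans[shard*n/rootSpansShards : (shard+1)*n/rootSpansShards]
    }

    func main() {
        spans := make([]uintptr, 1000)
        total := 0
        for i := 0; i < rootSpansShards; i++ {
            total += len(spanShard(spans, i))
        }
        println(total) // 1000: the shards partition the span list
    }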

A better solution is to do this during concurrent mark, or to improve
it algorithmically, but this is a simple change with a lot of bang for
the buck.

This was suggested by Rhys Hiltner.

Updates #11485.

Change-Id: I8b281adf0ba827064e154a1b6cc32d4d8031c03c
Reviewed-on: https://go-review.googlesource.com/13112
Reviewed-by: Keith Randall <khr@golang.org>
2015-09-14 18:15:40 +00:00
Austin Clements
739f133837 runtime: fix hashing of trace stacks
The call to hash the trace stack reversed the "seed" and "size"
arguments to memhash and, hence, always called memhash with a 0 size,
which dutifully returned a hash value that depended only on the number
of PCs in the stack and not their values. As a result, all stacks were
put into a very small subset of the 8,192 buckets.

Fix this by passing these arguments in the correct order.
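
To illustrate the bug shape (memhashStub mirrors the runtime's
memhash(p, seed, size) argument order; its body is a placeholder):

    package main

    import "unsafe"

    func memhashStub(p unsafe.Pointer, seed, size uintptr) uintptr {
        return seed ^ size // placeholder mixing
    }

    func main() {
        stk := []uintptr{1, 2, 3}
        size := uintptr(len(stk)) * unsafe.Sizeof(stk[0])

        h := memhashStub(unsafe.Pointer(&stk[0]), 0, size) // fixed: seed, then size
        // The bug swapped the last two arguments, hashing 0 bytes, so
        // the result depended only on len(stk) and not on the PCs:
        bad := memhashStub(unsafe.Pointer(&stk[0]), size, 0)
        println(h, bad)
    }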

Change-Id: I67cd29312f5615c7ffa23e205008dd72c6b8af62
Reviewed-on: https://go-review.googlesource.com/13613
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-09-14 18:14:14 +00:00
Dave Cheney
4f48507d90 runtime: reduce pthread stack size in TestCgoCallbackGC
Fixes #11959

This test runs 100 concurrent callbacks from C to Go consuming 100
operating system threads, which at 8MB apiece (the default on linux/arm)
would reserve over 800mb of address space. This would frequently
cause the test to fail on platforms with ~1gb of ram, such as the
raspberry pi.

This change reduces the thread stack allocation to 256KB, a number picked
at random, but at 1/32nd of the previous size it should allow the test to
pass successfully on all platforms.

Change-Id: I8b8bbab30ea7b2972b3269a6ff91e6fe5bc717af
Reviewed-on: https://go-review.googlesource.com/13731
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Capitanio <capnm9@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
2015-09-13 23:46:55 +00:00
Shenghou Ma
0b5bcf53ee runtime/cgo: explicitly link msvcrt on windows
This is because the runtime links to ntdll, and ntdll exports a couple
of incompatible libc functions. We must link to msvcrt first and then
try ntdll.

Fixes #12030.

Change-Id: I0105417bada108da55f5ae4482c2423ac7a92957
Reviewed-on: https://go-review.googlesource.com/14472
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-09-12 08:34:52 +00:00
Rob Pike
67ddae87b9 all: use one 'l' when cancelling everywhere except Solaris
Fixes #11626.

Change-Id: I1b70c0844473c3b57a53d7cca747ea5cdc68d232
Reviewed-on: https://go-review.googlesource.com/14526
Run-TryBot: Rob Pike <r@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-11 18:31:51 +00:00
Didier Spezia
4f33436004 runtime,internal/trace: map/slice literals janitoring
Simplify slice/map literal expressions.
Caught with gofmt -d -s, fixed with gofmt -w -s
Checked that the result can still be compiled with Go 1.4.

Change-Id: I06bce110bb5f46ee2f45113681294475aa6968bc
Reviewed-on: https://go-review.googlesource.com/13839
Reviewed-by: Andrew Gerrand <adg@golang.org>
2015-09-11 14:03:43 +00:00
Michael Hudson-Doyle
b0344e9fd5 cmd/internal/obj, cmd/link, runtime: a saner model for TLS on arm
This leaves lots of cruft behind; I will delete that soon.

Change-Id: I12d6b6192f89bcdd89b2b0873774bd3458373b8a
Reviewed-on: https://go-review.googlesource.com/14196
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-10 19:49:13 +00:00
Shenghou Ma
1cf05ee612 runtime: move arch1_$GOARCH.go into arch_$GOARCH.go
Update #12563.

Change-Id: Id87f8e53586accd662575c31961c39787268df7a
Reviewed-on: https://go-review.googlesource.com/14471
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-10 04:52:24 +00:00
Keith Randall
00c638d243 runtime: on map update, don't overwrite key if we don't need to.
Keep track of which types of keys need an update and which don't.

Strings need an update because the new key might pin a smaller backing store.
Floats need an update because it might be +0/-0.
Interfaces need an update because they may contain strings or floats.
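
A self-contained illustration of the float case: +0 and -0 compare
equal, so they collide as map keys, but their bit patterns differ, so
which zero the map retains is observable:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        negZero := math.Copysign(0, -1)
        fmt.Println(negZero == 0) // true: the two zeros are equal keys
        m := map[float64]int{negZero: 1}
        m[0] = 2 // same entry; this is where the key update happens
        for k := range m {
            fmt.Println(math.Signbit(k)) // reports which zero was kept
        }
    }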

Fixes #11088

Change-Id: I9ade53c1dfb3c1a2870d68d07201bc8128e9f217
Reviewed-on: https://go-review.googlesource.com/10843
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-09 21:06:49 +00:00
Austin Clements
e9089e4ab6 runtime: add high-level description of how stack barriers work
Change-Id: I6affe75b5fa9dbf513c16200bff4fd7aa5f3a985
Reviewed-on: https://go-review.googlesource.com/14051
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-09 01:18:56 +00:00
Austin Clements
92e68c89f6 runtime: move stack barrier code to its own file
Currently the stack barrier code is mixed in with the mark and scan
code. Move all of the stack barrier related functions and variables to
a new dedicated source file. There are no code modifications.

Change-Id: I604603045465ef8573b9f88915d28ab6b5910903
Reviewed-on: https://go-review.googlesource.com/14050
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-09-09 01:18:50 +00:00
Michael Hudson-Doyle
0388d4303f runtime: remove unused FUNCDATA_DeadValueMaps
Change-Id: Iccb0221bd9aef062d20798b952eaa09d9e60b902
Reviewed-on: https://go-review.googlesource.com/14345
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-07 21:02:11 +00:00
Michael Hudson-Doyle
31322996fd runtime: add stub sigreturn on arm
When building a shared library, all functions that are declared must actually
be defined.

Change-Id: I1488690cecfb66e62d9fdb3b8d257a4dc31d202a
Reviewed-on: https://go-review.googlesource.com/14187
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-09-07 07:49:09 +00:00
Michael Hudson-Doyle
40af15f28e runtime: teach softfloat interpreter about "add r11, pc, r11"
This is generated in floating-point code when -shared is active.

Change-Id: Ia1092299b9c3b63ff771ca4842158b42c34bd008
Reviewed-on: https://go-review.googlesource.com/14286
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-09-04 06:43:35 +00:00
Michael Hudson-Doyle
9e6ba37b86 cmd/internal/obj: some platform independent bits of proper toolchain support for thread local storage
Also simplifies some silliness around making the .tbss section wrt internal
vs external linking. The "make TLS make sense" project has quite a few more
steps to go.

Issue #11270

Change-Id: Ia4fa135cb22d916728ead95bdbc0ebc1ae06f05c
Reviewed-on: https://go-review.googlesource.com/13990
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-09-03 14:06:07 +00:00
Michael Hudson-Doyle
9f0baca505 runtime: fixes for arm64 shared libraries
Building shared libraries requires that all functions that are declared
have an implementation and vice versa, so make that so on arm64.

It would be nicer to not require the stub sigreturn (it will never be called)
but that seems a bit awkward.

Change-Id: I3cec81697161b452af81fa35939f748bd1acf7fd
Reviewed-on: https://go-review.googlesource.com/13995
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-09-03 01:07:40 +00:00
Keith Randall
a088f1b76c runtime: soften up hash checks a bit
The hash tests generate occasional failures, quiet them some more.

In particular we can get 1 collision when the expected number is
.001 or so. That shouldn't be a dealbreaker.

Fixes #12311

Change-Id: I784e91b5d21f4f1f166dc51bde2d1cd3a7a3bfea
Reviewed-on: https://go-review.googlesource.com/13902
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2015-08-31 19:38:24 +00:00
Shenghou Ma
32d3b96e8b runtime: implement cmpstring and bytes.Compare in assembly for ppc64
Change-Id: I15bf55aa5ac3588c05f0a253f583c52bab209892
Reviewed-on: https://go-review.googlesource.com/14041
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-08-31 18:41:58 +00:00
Austin Clements
77e528293b runtime: check that stack barrier unwind is in sync
Currently the stack barrier stub blindly unwinds the next stack
barrier from the G's stack barrier array without checking that it's
the right stack barrier. If through some bug the stack barrier array
position gets out of sync with where we actually are on the stack,
this could return to the wrong PC, which would lead to
difficult-to-debug crashes. To address this, this commit adds a check to the amd64
stack barrier stub that it's unwinding the correct stack barrier.

Updates #12238.

Change-Id: If824d95191d07e2512dc5dba0d9978cfd9f54e02
Reviewed-on: https://go-review.googlesource.com/13948
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-30 16:07:02 +00:00
Austin Clements
3bfc9df21a runtime: add GODEBUG for stack barriers at every frame
Currently enabling the debugging mode where stack barriers are
installed at every frame requires recompiling the runtime. However,
this is potentially useful for field debugging and for runtime tests,
so make this mode a GODEBUG.

Updates #12238.

Change-Id: I6fb128f598b19568ae723a612e099c0ed96917f5
Reviewed-on: https://go-review.googlesource.com/13947
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-30 16:06:55 +00:00
Austin Clements
e2bb03f175 runtime: don't install a stack barrier in cgocallback_gofunc's frame
Currently the runtime can install stack barriers in any frame.
However, the frame of cgocallback_gofunc is special: it's the one
function that switches from a regular G stack to the system stack on
return. Hence, the return PC slot in its frame on the G stack is
actually used to save getg().sched.pc (so tracebacks appear to unwind
to the last Go function running on that G), and not as an actual
return PC for cgocallback_gofunc.

Because of this, if we install a stack barrier in cgocallback_gofunc's
return PC slot, when cgocallback_gofunc does return, it will move the
stack barrier stub PC in to getg().sched.pc and switch back to the
system stack. The rest of the runtime doesn't know how to deal with a
stack barrier stub in sched.pc: nothing knows how to match it up with
the G's stack barrier array and, when the runtime removes stack
barriers, it doesn't know to undo the one in sched.pc. Hence, if the C
code later returns back in to Go code, it will attempt to return
through the stack barrier saved in sched.pc, which may no longer have
correct unwinding information.

Fix this by blacklisting cgocallback_gofunc's frame so the runtime
won't install a stack barrier in its return PC slot.

Fixes #12238.

Change-Id: I46aa2155df2fd050dd50de3434b62987dc4947b8
Reviewed-on: https://go-review.googlesource.com/13944
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-30 16:06:47 +00:00
Keith Randall
805e56ef47 runtime: short-circuit bytes.Compare if src and dst are the same slice
Should only matter on ppc64 and ppc64le.

Fixes #11336

Change-Id: Id4b0ac28b573648e1aa98e87bf010f00d006b146
Reviewed-on: https://go-review.googlesource.com/13901
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-08-29 02:43:57 +00:00
Russ Cox
9c04d00214 runtime: check explicitly for short unwinding of stacks
Right now we only find out implicitly whether stack barriers or
defers are in place. This change makes sure we always find out
about short unwinds.

Change-Id: Ibdde1ba9c79eb792660dcb7aa6f186e4e4d559b3
Reviewed-on: https://go-review.googlesource.com/13966
Reviewed-by: Austin Clements <austin@google.com>
2015-08-28 16:05:59 +00:00
Tim Cooijmans
34db31d5f5 src/runtime: Add missing defs for android/386.
Change-Id: I63bf6d2fdf41b49ff8783052d5d6c53b20e2f050
Reviewed-on: https://go-review.googlesource.com/13760
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
2015-08-27 15:14:41 +00:00
Michael Hudson-Doyle
d497eeb005 runtime: remove unused xchgp/xchgp1
I noticed that they were unimplemented on arm64, and then that they
were in fact not used at all.

Change-Id: Iee579feda2a5e374fa571bcc8c89e4ef607d50f6
Reviewed-on: https://go-review.googlesource.com/13951
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-08-27 00:28:35 +00:00
Uttam C Pawar
32add8d7c8 bytes: improve Compare function on amd64 for large byte arrays
This patch contains only a loop-unrolling change for sizes > 63B.

Following are the performance numbers for various sizes on a
Haswell-based system: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz.

benchcmp go.head.8.25.15.txt go.head.8.25.15.opt.txt
benchmark                       old ns/op     new ns/op     delta
BenchmarkBytesCompare1-4        5.37          5.37          +0.00%
BenchmarkBytesCompare2-4        5.37          5.38          +0.19%
BenchmarkBytesCompare4-4        5.37          5.37          +0.00%
BenchmarkBytesCompare8-4        4.42          4.38          -0.90%
BenchmarkBytesCompare16-4       4.27          4.45          +4.22%
BenchmarkBytesCompare32-4       5.30          5.36          +1.13%
BenchmarkBytesCompare64-4       6.93          6.78          -2.16%
BenchmarkBytesCompare128-4      10.3          9.50          -7.77%
BenchmarkBytesCompare256-4      17.1          13.8          -19.30%
BenchmarkBytesCompare512-4      31.3          22.1          -29.39%
BenchmarkBytesCompare1024-4     62.5          39.0          -37.60%
BenchmarkBytesCompare2048-4     112           73.2          -34.64%

Change-Id: I4eeb1c22732fd62cbac97ba757b0d29f648d4ef1
Reviewed-on: https://go-review.googlesource.com/11871
Reviewed-by: Keith Randall <khr@golang.org>
2015-08-26 03:52:20 +00:00
Todd Neal
a94e906c41 runtime: remove always false comparison in sigsend
s is a uint32 and can never be negative. Its maximum value is already
tested against sig.wanted, whose size is derived from _NSIG. This also
matches the test in signal_enable.

Fixes #11282

Change-Id: I8eec9c7df8eb8682433616462fe51b264c092475
Reviewed-on: https://go-review.googlesource.com/13940
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-08-26 01:02:55 +00:00
Michael Hudson-Doyle
af78482d6b cmd/compile, cmd/link, reflect, runtime: remove type.zero field
No longer used after previous hashmap change.

Change-Id: I558470f872281e84a78406132df4e391d077b833
Reviewed-on: https://go-review.googlesource.com/13785
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-08-26 00:28:17 +00:00
Michael Hudson-Doyle
38519e69d0 cmd/compile, runtime: stop returning t.zero on hashmap miss
Previously t.zero always pointed to runtime.zerovalue. Change the hashmap code
to always return a runtime pointer directly, and change that pointer to point
to a larger buffer if one is needed.

(It might be better to only copy from the pointer returned by the mapaccess
functions when the value type is small enough and have the compiler insert
explicit zeroing for larger value types, but I tried and failed to do this).

This removes all uses of the zero field of the type data; the field itself can
be removed in a separate change.
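
A hedged sketch of the resulting miss path (zeroBuf and lookup are
placeholder names, not the runtime's identifiers):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Shared always-zero memory; a larger buffer would be substituted
    // if a bigger value type ever needed one.
    var zeroBuf [128]uint64

    func lookup(m map[string]*int64, k string) unsafe.Pointer {
        if v, ok := m[k]; ok {
            return unsafe.Pointer(v)
        }
        return unsafe.Pointer(&zeroBuf[0]) // miss: point at zero memory
    }

    func main() {
        p := (*int64)(lookup(map[string]*int64{}, "absent"))
        fmt.Println(*p) // 0, read from the shared zero buffer
    }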

Fixes #11491

Change-Id: I5b81752ff4067d74a5a281c41e88f151bae0171e
Reviewed-on: https://go-review.googlesource.com/13784
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-08-26 00:03:21 +00:00
Austin Clements
05a3b1fce5 cmd/compile: fix uninitialized memory in compare of interface value
A comparison of the form l == r where l is an interface and r is
concrete performs a type assertion on l to convert it to r's type.
However, the compiler fails to zero the temporary where the result of
the type assertion is written, so if the type is a pointer type and a
stack scan occurs while in the type assertion, it may see an invalid
pointer on the stack.

Fix this by zeroing the temporary. This is equivalent to the fix for
type switches from c4092ac.
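
The comparison shape in question, as a runnable example (the hidden
temporary is compiler-generated and not visible here):

    package main

    import "fmt"

    type T struct{ p *int }

    func main() {
        var l interface{} = T{}
        r := T{}
        // l == r performs an implicit l.(T) assertion into a temporary;
        // that temporary is what this change zeroes before use.
        fmt.Println(l == r) // true
    }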

Fixes #12253.

Change-Id: Iaf205d456b856c056b317b4e888ce892f0c555b9
Reviewed-on: https://go-review.googlesource.com/13872
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-25 14:37:08 +00:00
Dave Cheney
686d44d9e0 runtime: check pointer equality in arm64 cmpbody
Updates #11336

Follow the lead of amd64 by doing a pointer equality check
before comparing string/byte contents on arm64.
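
A Go rendering of the fast path (the actual change is arm64 assembly in
cmpbody; this only sketches the logic):

    package main

    import (
        "bytes"
        "fmt"
    )

    // Same base pointer and same length: the contents cannot differ.
    func sameSlice(a, b []byte) bool {
        return len(a) == len(b) && (len(a) == 0 || &a[0] == &b[0])
    }

    func main() {
        s := []byte("hello")
        fmt.Println(sameSlice(s, s))     // true: return 0 without reading bytes
        fmt.Println(bytes.Compare(s, s)) // 0, via the fast path on arm64
    }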

BenchmarkCompareBytesEqual-8               25.8           26.3           +1.94%
BenchmarkCompareBytesToNil-8               9.59           9.59           +0.00%
BenchmarkCompareBytesEmpty-8               9.59           9.17           -4.38%
BenchmarkCompareBytesIdentical-8           26.3           9.17           -65.13%
BenchmarkCompareBytesSameLength-8          16.3           16.3           +0.00%
BenchmarkCompareBytesDifferentLength-8     16.3           16.3           +0.00%
BenchmarkCompareBytesBigUnaligned-8        1132038        1131409        -0.06%
BenchmarkCompareBytesBig-8                 1126758        1128470        +0.15%
BenchmarkCompareBytesBigIdentical-8        1084366        9.17           -100.00%

Change-Id: Id7125c31957eff1ddb78897d4511bd50e79af3f7
Reviewed-on: https://go-review.googlesource.com/13885
Reviewed-by: Keith Randall <khr@golang.org>
2015-08-25 03:29:47 +00:00
Todd Neal
3efe36d4c4 runtime: fix nmspinning comparison
nmspinning has a value range of [0, 2^31-1].  Update the comment to
indicate this and fix the comparison so it's not always false.

Fixes #11280

Change-Id: Iedaf0654dcba5e2c800645f26b26a1a781ea1991
Reviewed-on: https://go-review.googlesource.com/13877
Reviewed-by: Minux Ma <minux@golang.org>
2015-08-25 02:44:11 +00:00
Shenghou Ma
24be0997a2 runtime: add a missing hex conversion
gobuf.g is a guintptr, so without hex(), it will be printed as
a decimal, which is not very helpful and inconsistent with how
other pointers are printed.
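
The difference in ordinary Go, for reference (the runtime uses its
internal print and hex helpers rather than fmt):

    package main

    import "fmt"

    func main() {
        g := uint64(0xc820001080) // a guintptr-like value
        fmt.Println(g)            // long decimal, useless for a pointer
        fmt.Printf("%#x\n", g)    // 0xc820001080, matches other pointer output
    }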

Change-Id: I7c0432e9709e90a5c3b3e22ce799551a6242d017
Reviewed-on: https://go-review.googlesource.com/13879
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-08-25 01:37:54 +00:00
Dave Cheney
1135b9d671 runtime: check pointer equality in arm cmpbody
Updates #11336

Follow the lead of amd64 by doing a pointer equality check
before comparing string/byte contents on arm.

BenchmarkCompareBytesEqual-4               208             211             +1.44%
BenchmarkCompareBytesToNil-4               83.6            81.8            -2.15%
BenchmarkCompareBytesEmpty-4               80.2            75.2            -6.23%
BenchmarkCompareBytesIdentical-4           208             75.2            -63.85%
BenchmarkCompareBytesSameLength-4          126             128             +1.59%
BenchmarkCompareBytesDifferentLength-4     128             130             +1.56%
BenchmarkCompareBytesBigUnaligned-4        14192804        14060971        -0.93%
BenchmarkCompareBytesBig-4                 12277313        12128193        -1.21%
BenchmarkCompareBytesBigIdentical-4        9385046         78.5            -100.00%

Change-Id: I5b24620018688c5fe04b6ff6743a24c4ce225788
Reviewed-on: https://go-review.googlesource.com/13881
Reviewed-by: Keith Randall <khr@golang.org>
2015-08-24 21:18:33 +00:00
Hyang-Ah (Hana) Kim
db5eb2a2c3 runtime/cgo: remove __stack_chk_fail_local
I cannot find where it's being used.

This addresses a duplicate symbol issue encountered in golang/go#9327.

Change-Id: I8efda45a006ad3e19423748210c78bd5831215e0
Reviewed-on: https://go-review.googlesource.com/13615
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-08-21 15:56:36 +00:00
Shawn Walker-Salas
d9e3d16796 runtime, syscall: remove unused bits from Solaris implementation
CL 9184 changed the runtime and syscall packages to link Solaris binaries
directly instead of using dlopen/dlsym but did not remove the unused (and
now broken) references to dlopen, dlclose, and dlsym.

Fixes #11923

Change-Id: I36345ce5e7b371bd601b7d48af000f4ccacd62c0
Reviewed-on: https://go-review.googlesource.com/13410
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
2015-08-21 11:39:24 +00:00
Russ Cox
3ae17043f7 runtime: make sure heapBitsBulkBarrier cannot be preempted
Changes the torture test in #12068 from failing about 1 run in 10
to not failing in almost 2,000 runs.

This was only happening in -race mode because functions are
bigger in -race mode, so a few of the helpers for heapBitsBulkBarrier
were not being inlined, and they were not marked nosplit,
so (only in -race mode) the write barrier was being preempted by GC,
causing missed pointer updates.
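
A sketch of the shape of the fix (illustrative only; the real helpers
are in the heap bitmap code):

    package main

    // nosplit removes the stack-split check from the prologue, so the
    // function has no preemption point and GC cannot interrupt it
    // between reading and writing the pointer.
    //
    //go:nosplit
    func setPointer(dst *uintptr, src uintptr) {
        *dst = src
    }

    func main() {
        var x uintptr
        setPointer(&x, 42)
        println(x) // 42
    }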

Filed issue #12069 for diagnosis of any other similar errors.

Fixes #12068.

Change-Id: Ic174d9b050ba278b18b08ab0d85a73c33bd5b175
Reviewed-on: https://go-review.googlesource.com/13364
Reviewed-by: Austin Clements <austin@google.com>
2015-08-07 17:55:26 +00:00
Russ Cox
4a19081358 runtime: run on GOARM=5 and GOARM=6 uniprocessor freebsd/arm systems
Also, crash early on non-Linux SMP ARM systems when GOARM < 7;
without the proper synchronization, SMP cannot work.

Linux is okay because we call kernel-provided routines for
synchronization and barriers, and the kernel takes care of
providing the right routines for the current system.
On non-Linux systems we are left to fend for ourselves.

It is possible to use different synchronization on GOARM=6,
but it's too late to do that in the Go 1.5 cycle.
We don't believe there are any non-Linux SMP GOARM=6 systems anyway.

Fixes #12067.

Change-Id: I771a556e47893ed540ec2cd33d23c06720157ea3
Reviewed-on: https://go-review.googlesource.com/13363
Reviewed-by: Austin Clements <austin@google.com>
2015-08-07 17:39:07 +00:00
Austin Clements
ad731887a7 runtime: call goexit1 instead of goexit
Currently, runtime.Goexit() calls goexit()—the goroutine exit stub—to
terminate the goroutine. This *mostly* works, but can cause a
"leftover stack barriers" panic if the following happens:

1. Goroutine A has a reasonably large stack.

2. The garbage collector scan phase runs and installs stack barriers
   in A's stack. The top-most stack barrier happens to fall at address X.

3. Goroutine A unwinds the stack far enough to be a candidate for
   stack shrinking, but not past X.

4. Goroutine A calls runtime.Goexit(), which calls goexit(), which
   calls goexit1().

5. The garbage collector enters mark termination.

6. Goroutine A is preempted right at the prologue of goexit1() and
   performs a stack shrink, which calls gentraceback.

gentraceback stops as soon as it sees goexit on the stack, which is
only two frames up at this point, even though there may really be many
frames above it. More to the point, the stack barrier at X is above
the goexit frame, so gentraceback never sees that stack barrier. At
the end of gentraceback, it checks that it saw all of the stack
barriers and panics because it didn't see the one at X.

The fix is simple: call goexit1, which actually implements the process
of exiting a goroutine, rather than goexit, the exit stub.
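
A compilable sketch of the control flow (stand-ins for the real runtime
symbols, which live in panic.go and the assembly stubs):

    package main

    func goexit1() { println("goroutine exit runs here") }

    // goexit is normally an assembly stub that gentraceback treats as
    // the top of a goroutine's stack; the struct{} parameter mirrors
    // the prototype change that makes accidental calls impossible.
    func goexit(neverCallThisFunction struct{}) {}

    func Goexit() {
        // ...deferred functions run here...
        goexit1() // call the implementation, not the stub
    }

    func main() { Goexit() }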

To make sure this doesn't happen again in the future, we also add an
argument to the stub prototype of goexit so you really, really have to
want to call it in order to call it. We were able to reliably
reproduce the above sequence with a fair amount of awful code inserted
at the right places in the runtime, but chose to change the goexit
prototype to ensure this wouldn't happen again rather than pollute the
runtime with ugly testing code.

Change-Id: Ifb6fb53087e09a252baddadc36eebf954468f2a8
Reviewed-on: https://go-review.googlesource.com/13323
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-06 20:21:05 +00:00
Russ Cox
26baed6af7 runtime: fix race that dropped GoSysExit events from trace
This makes TestTraceStressStartStop much less flaky.
Running under stress, it changes the failure rate from
above 1/100 to under 1/50000. That very unlikely
failure happens when an unexpected GoSysExit is
written. Not sure how that happens yet, but it is much
less important.

Fixes #11953.

Change-Id: I034671936334b4f3ab733614ef239aa121d20247
Reviewed-on: https://go-review.googlesource.com/13321
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
2015-08-06 19:29:09 +00:00
Austin Clements
d57f037302 runtime: don't recheck heap trigger for periodic GC
88e945f introduced a non-speculative double check of the heap trigger
before actually starting a concurrent GC. This was necessary to fix a
race for heap-triggered GC, but broke sysmon-triggered periodic GC,
since the heap check will of course fail for periodically triggered
GC.

Fix this by telling startGC whether this GC was triggered by heap
size or by a timer, and only doing the heap size double check for GCs
triggered by heap size.
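
A sketch of the resulting logic, with simplified names (the real
signature and fields differ):

    package main

    var heapLive, nextGC uint64 = 100 << 20, 200 << 20

    func startGC(forceTrigger bool) {
        // Periodic GC passes forceTrigger=true and skips the heap-size
        // double check, which would always fail for it.
        if !forceTrigger && heapLive < nextGC {
            return // heap-triggered: the trigger no longer holds
        }
        println("starting GC cycle")
    }

    func main() {
        startGC(false) // bails: the heap is back under the trigger
        startGC(true)  // periodic: starts regardless of heap size
    }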

Fixes #12026.

Change-Id: I7c3f6ec364545c36d619f2b4b3bf3b758e3bcbd6
Reviewed-on: https://go-review.googlesource.com/13168
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-05 17:28:56 +00:00
Russ Cox
2a60d77059 runtime: align stack pointer during initcgo call on arm
This is what is causing freebsd/arm to crash mysteriously when using cgo.
The bug was introduced in golang.org/cl/4030, which moved this code out
of rt0_go and into its own function. The ARM ABI says that calls must
be made with the stack pointer at an 8-byte boundary, but only FreeBSD
seems to crash when this is violated.

Fixes #10119.

Change-Id: Ibdbe76b2c7b80943ab66b8abbb38b47acb70b1e5
Reviewed-on: https://go-review.googlesource.com/13161
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-08-05 05:31:34 +00:00
Austin Clements
be39a42920 runtime: fix typos in comments
Change-Id: I66f7937b22bb6e05c3f2f0f2a057151020ad9699
Reviewed-on: https://go-review.googlesource.com/13049
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-04 18:54:56 +00:00
Austin Clements
e3870aa6f3 runtime: fix assist utilization computation
When commit 510fd13 enabled assists during the scan phase, it failed
to also update the code in the GC controller that computed the assist
CPU utilization and adjusted the trigger based on it. Fix that code so
it uses the start of the scan phase as the wall-clock time when
assists were enabled rather than the start of the mark phase.

Change-Id: I05013734b4448c3e2c730dc7b0b5ee28c86ed8cf
Reviewed-on: https://go-review.googlesource.com/13048
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-04 18:54:53 +00:00
Austin Clements
1fb01a88f9 runtime: revise assist ratio aggressively
At the start of a GC cycle, the garbage collector computes the assist
ratio based on the total scannable heap size. This was intended to be
conservative; after all, this assumes the entire heap may be reachable
and hence needs to be scanned. But it only assumes that the *current*
entire heap may be reachable. It fails to account for heap allocated
during the GC cycle. If the trigger ratio is very low (near zero), and
most of the heap is reachable when GC starts (which is likely if the
trigger ratio is near zero), then it's possible for the mutator to
create new, reachable heap fast enough that the assists won't keep up
based on the assist ratio computed at the beginning of the cycle. As a
result, the heap can grow beyond the heap goal (by hundreds of megs in
stress tests like in issue #11911).

We already have some vestigial logic for dealing with situations like
this; it just doesn't run often enough. Currently, every 10 ms during
the GC cycle, the GC revises the assist ratio. This was put in before
we switched to a conservative assist ratio (when we really were using
estimates of scannable heap), and it turns out to be exactly what we
need now. However, every 10 ms is far too infrequent for a rapidly
allocating mutator.

This commit reuses this logic, but replaces the 10 ms timer with
revising the assist ratio every time the heap is locked, which
coincides precisely with when the statistics used to compute the
assist ratio are updated.
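
A much-simplified sketch of a revision step (field names are made up;
the runtime's controller tracks more state than this):

    package main

    import "fmt"

    type controller struct {
        scanWorkExpected int64 // estimated scan work remaining
        heapDistance     int64 // bytes left until the heap goal
    }

    // revise recomputes how much scan work each allocated byte must pay
    // for; it runs whenever the heap is locked, which is exactly when
    // its inputs change.
    func (c *controller) revise() float64 {
        if c.heapDistance <= 0 {
            c.heapDistance = 1 // avoid dividing by zero near the goal
        }
        return float64(c.scanWorkExpected) / float64(c.heapDistance)
    }

    func main() {
        c := controller{scanWorkExpected: 64 << 20, heapDistance: 16 << 20}
        fmt.Printf("assist ratio: %.2f scan bytes per allocated byte\n", c.revise())
    }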

Fixes #11911.

Change-Id: I377b231ab064946228378fa10422a46d1b50f4c5
Reviewed-on: https://go-review.googlesource.com/13047
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-04 18:54:48 +00:00
Austin Clements
f9dc3382ad runtime: when gcpacertrace > 0, print information about assist ratio
This was useful in debugging the mutator assist behavior for #11911,
and it fits with the other gcpacertrace output.

Change-Id: I1e25590bb4098223a160de796578bd11086309c7
Reviewed-on: https://go-review.googlesource.com/13046
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-04 18:54:46 +00:00
Austin Clements
fc9ca85f4c runtime: make sweep proportional to spans bytes allocated
Proportional concurrent sweep is currently based on a ratio of spans
to be swept per bytes of object allocation. However, proportional
sweeping is performed during span allocation, not object allocation,
in order to minimize contention and overhead. Since objects are
allocated from spans after those spans are allocated, the system tends
to operate in debt, which means when the next GC cycle starts, there
is often sweep debt remaining, so GC has to finish the sweep, which
delays the start of the cycle and delays enabling mutator assists.

For example, it's quite likely that many Ps will simultaneously refill
their span caches immediately after a GC cycle (because GC flushes the
span caches), but at this point, there has been very little object
allocation since the end of GC, so very little sweeping is done. The
Ps then allocate objects from these cached spans, which drives up the
bytes of object allocation, but since these allocations are coming
from cached spans, nothing considers whether more sweeping has to
happen. If the sweep ratio is high enough (which can happen if the
next GC trigger is very close to the retained heap size), this can
easily represent a sweep debt of thousands of pages.

Fix this by making proportional sweep proportional to the number of
bytes of spans allocated, rather than the number of bytes of objects
allocated. Prior to allocating a span, both the small object path and
the large object path ensure credit for allocating that span, so the
system operates in the black, rather than in the red.
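
A sketch of span-bytes-based sweep credit (hypothetical names; the real
accounting sits in the span allocation paths):

    package main

    var (
        sweepPagesPerByte = 1.0 / 8192 // pages owed per span byte (made up)
        pagesSwept        int64
        spanBytesAlloced  int64
    )

    // Called before a span allocation of the given size, so sweeping is
    // paid for up front and the system stays in the black.
    func deductSweepCredit(bytes int64) {
        spanBytesAlloced += bytes
        owed := int64(sweepPagesPerByte * float64(spanBytesAlloced))
        for pagesSwept < owed {
            pagesSwept++ // stand-in for sweeping one page
        }
    }

    func main() {
        deductSweepCredit(1 << 20) // a 1MB span sweeps 128 pages first
        println(pagesSwept)        // 128
    }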

Combined with the previous commit, this should eliminate all sweeping
from GC start up. On the stress test in issue #11911, this reduces the
time spent sweeping during GC (and delaying start up) by several
orders of magnitude:

                mean    99%ile     max
    pre fix      1 ms    11 ms   144 ms
    post fix   270 ns   735 ns   916 ns

Updates #11911.

Change-Id: I89223712883954c9d6ec2a7a51ecb97172097df3
Reviewed-on: https://go-review.googlesource.com/13044
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-04 18:54:44 +00:00
Austin Clements
e30c6d64ba runtime: always give concurrent sweep some heap distance
Currently it's possible for the next_gc heap size trigger computed for
the next GC cycle to be less than the current allocated heap size.
This means the next cycle will start immediately, which means there's
no time to perform the concurrent sweep between GC cycles. This places
responsibility for finishing the sweep on GC itself, which delays GC
start-up and hence delays mutator assist.

Fix this by ensuring that next_gc is always at least a little higher
than the allocated heap size, so we won't trigger the next cycle
instantly.
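
The guard, roughly as described (the 1MB margin is this sketch's
reading of "a little higher"):

    package main

    func main() {
        heapLive := uint64(512 << 20)
        nextGC := uint64(500 << 20) // computed trigger fell below live heap
        const margin = 1 << 20
        if nextGC < heapLive+margin {
            nextGC = heapLive + margin // never start the next cycle instantly
        }
        println(nextGC >> 20) // 513 (MB)
    }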

Updates #11911.

Change-Id: I74f0b887bf187518d5fedffc7989817cbcf30592
Reviewed-on: https://go-review.googlesource.com/13043
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-04 18:54:41 +00:00
Austin Clements
fb5230af8a runtime: assist the GC during GC startup and shutdown
Currently there are two sensitive periods during which a mutator can
allocate past the heap goal but mutator assists can't be enabled: 1)
at the beginning of GC between when the heap first passes the heap
trigger and sweep termination and 2) at the end of GC between mark
termination and when the background GC goroutine parks. During these
periods there's no back-pressure or safety net, so a rapidly
allocating mutator can allocate past the heap goal. This is
exacerbated if there are many goroutines because the GC coordinator is
scheduled as any other goroutine, so if it gets preempted during one
of these periods, it may stay preempted for a long period (10s or 100s
of milliseconds).

Normally the mutator does scan work to create back-pressure against
allocation, but there is no scan work during these periods. Hence, as
a fall back, if a mutator would assist but can't yet, simply yield the
CPU. This delays the mutator somewhat, but more importantly gives more
CPU time to the GC coordinator for it to complete the transition.
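
A simplified sketch of the fallback in the allocation-time assist path
(the flag name is a stand-in):

    package main

    import "runtime"

    var assistsEnabled bool // stand-in for the runtime's internal state

    func maybeAssist() {
        if !assistsEnabled {
            runtime.Gosched() // yield: let the coordinator finish the transition
            return
        }
        // ...do scan work proportional to the allocation...
    }

    func main() {
        maybeAssist()
        println("allocation proceeds")
    }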

This is obviously a workaround. Issue #11970 suggests a far better but
far more invasive way to fix this.

Updates #11911. (This very nearly fixes the issue, but about once
every 15 minutes I get a GC cycle where the assists are enabled but
don't do enough work.)

Change-Id: I9768b79e3778abd3e06d306596c3bd77f65bf3f1
Reviewed-on: https://go-review.googlesource.com/13026
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-08-04 18:54:38 +00:00
Austin Clements
88e945fd23 runtime: recheck GC trigger before actually starting GC
Currently allocation checks the GC trigger speculatively during
allocation and then triggers the GC without rechecking. As a result,
it's possible for G 1 and G 2 to detect the trigger simultaneously
and both enter startGC; G 1 actually starts GC while G 2 is preempted
until after the whole GC cycle, and then G 2 immediately starts another
GC cycle even though the heap is now well under the trigger.

Fix this by re-checking the GC trigger non-speculatively just before
actually kicking off a new GC cycle.

This contributes to #11911 because when this happens, we definitely
don't finish the background sweep before starting the next GC cycle,
which can significantly delay the start of concurrent scan.

Change-Id: I560ab79ba5684ba435084410a9765d28f5745976
Reviewed-on: https://go-review.googlesource.com/13025
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-08-04 18:54:32 +00:00
Mikio Hara
5e15e28e0e runtime: skip TestCgoCallbackGC on dragonfly
Updates #11990.

Change-Id: I6c58923a1b5a3805acfb6e333e3c9e87f4edf4ba
Reviewed-on: https://go-review.googlesource.com/13050
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-08-03 04:41:48 +00:00
Russ Cox
c5dff7282e cmd/compile, runtime: fix placement of map bucket overflow pointer on nacl
On most systems, a pointer is the worst case alignment, so adding
a pointer field at the end of a struct guarantees there will be no
padding added after that field (to satisfy overall struct alignment
due to some more-aligned field also present).

In the runtime, the map implementation needs a quick way to
get to the overflow pointer, which is last in the bucket struct,
so it uses size - sizeof(pointer) as the offset.

NaCl/amd64p32 is the exception, as always.
The worst case alignment is 64 bits but pointers are 32 bits.
There's a long history that is not worth going into, but when
we moved the overflow pointer to the end of the struct,
we didn't get the padding computation right.
The compiler computed the regular struct size and then
on amd64p32 added another 32-bit field.
And the runtime assumed it could step back two 32-bit fields
(one 64-bit register size) to get to the overflow pointer.
But in fact if the struct needed 64-bit alignment, the computation
of the regular struct size would have added a 32-bit pad already,
and then the code unconditionally added a second 32-bit pad.
This placed the overflow pointer three words from the end, not two.
The last two were padding, and since the runtime was consistent
about using the second-to-last word as the overflow pointer,
no harm done in the sense of overwriting useful memory.
But writing the overflow pointer to a non-pointer word of memory
means that the GC can't see the overflow blocks, so it will
collect them prematurely. Then bad things happen.

Correct all this in a few steps:

1. Add an explicit check at the end of the bucket layout in the
compiler that the overflow field is last in the struct, never
followed by padding.

2. When padding is needed on nacl (not always, just when needed),
insert it before the overflow pointer, to preserve the "last in the struct"
property.

3. Let the compiler have the final word on the width of the struct,
by inserting an explicit padding field instead of overwriting the
results of the width computation it does.

4. For the same reason (tell the truth to the compiler), set the type
of the overflow field when we're trying to pretend it's not a pointer
(in this case the runtime maintains a list of the overflow blocks
elsewhere).

5. Make the runtime use "last in the struct" as its location algorithm.
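
Step 5 in miniature (hypothetical types; the real bucket layout is
compiler-generated):

    package main

    import (
        "fmt"
        "unsafe"
    )

    type bucket struct {
        data     [24]byte // keys, values, and any padding live here
        overflow *bucket  // always the final word of the struct
    }

    // overflowOf reads the last pointer-sized word of the bucket,
    // wherever padding ended up.
    func overflowOf(b *bucket, size uintptr) **bucket {
        return (**bucket)(unsafe.Pointer(uintptr(unsafe.Pointer(b)) + size - unsafe.Sizeof(uintptr(0))))
    }

    func main() {
        var b, o bucket
        b.overflow = &o
        fmt.Println(*overflowOf(&b, unsafe.Sizeof(b)) == &o) // true
    }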

This fixes TestTraceStress on nacl/amd64p32.
The 'bad map state' and 'invalid free list' failures no longer occur.

Fixes #11838.

Change-Id: If918887f8f252d988db0a35159944d2b36512f92
Reviewed-on: https://go-review.googlesource.com/12971
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-31 18:49:32 +00:00
Russ Cox
108ec5f75a runtime: fix systemstack tracebacks on nacl/arm
For #11956.

Change-Id: Ic9b57cafa197953cc7f435941e44d42b60b3ddf0
Reviewed-on: https://go-review.googlesource.com/13011
Reviewed-by: Dave Cheney <dave@cheney.net>
2015-07-31 04:35:38 +00:00
Russ Cox
abdc77a288 runtime: avoid reference to stale stack after GC shrinkstack
Dangling pointer error. Unlikely to trigger in practice, but still.
Found by running GODEBUG=efence=1 GOGC=1 trace.test.

Change-Id: Ice474dedcf62dd33ab77526287a023ba3b166db9
Reviewed-on: https://go-review.googlesource.com/12991
Reviewed-by: Austin Clements <austin@google.com>
2015-07-31 02:18:42 +00:00
Russ Cox
4bd8040d47 runtime, sync/atomic: add memory barriers in arm cas routines
This only triggers on ARMv7+.
If there are important SMP ARMv6 machines we can reconsider.

Makes TestLFStress tests pass and sync/atomic tests not time out
on Apple iPad Mini 3.

Fixes #7977.
Fixes #10189.

Change-Id: Ie424dea3765176a377d39746be9aa8265d11bec4
Reviewed-on: https://go-review.googlesource.com/12950
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-07-30 20:11:11 +00:00
Russ Cox
e0c180c44f runtime/cgo: fix darwin/amd64 signal handling setup
The setup code was not allocating space for the frame above sigpanic,
nor was it pushing the LR into the right place.
Because traceback past sigpanic only needs the
LR for faulting leaves, this mostly went unnoticed.
But it did break the sync/atomic nil deref tests.

Change-Id: Icba53fffa193423aab744c37f21ee893ce2ee3ac
Reviewed-on: https://go-review.googlesource.com/12926
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-07-30 19:18:45 +00:00
Russ Cox
b2dfacf35e runtime: change arm software div/mod call sequence not to modify stack
Instead of pushing the denominator argument on the stack,
the denominator is now passed in m.

This fixes a variety of bugs related to trying to take stack traces
backwards from the middle of the software div/mod routines.
Some of those bugs have been kludged around in the past,
but others have not. Instead of trying to patch up after breaking
the stack, this CL stops breaking the stack.

This is an update of https://golang.org/cl/19810043,
which was rolled back in https://golang.org/cl/20350043.

The problem in the original CL was that there were divisions
at bad times, when m was not available. These were divisions
by constant denominators, either in C code or in assembly.
The Go compiler knows how to generate division by multiplication
for constant denominators, but the C compiler did not.
There is no longer any C code, so that's taken care of.
There was one problematic DIV in runtime.usleep (assembly)
but https://golang.org/cl/12898 took care of that one.
So now this approach is safe.

Reject DIV/MOD in NOSPLIT functions to keep them from
coming back.

Fixes #6681.
Fixes #6699.
Fixes #10486.

Change-Id: I09a13c76ad08ba75b3bd5d46a3eb78e66a84ab38
Reviewed-on: https://go-review.googlesource.com/12899
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-30 16:14:05 +00:00
Russ Cox
c9d2c7f0d2 runtime: replace divide with multiply in runtime.usleep on arm
We want to adjust the DIV calling convention to use m,
and usleep can be called without an m, so switch to a
multiplication by the reciprocal (and test).
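
The trick as a runnable sketch (this magic constant is one valid
choice; the assembly may use a different multiplier and shift):

    package main

    // 1125899907 == ceil(2^50 / 1e6); because 1125899907*1e6 - 2^50 is
    // at most 2^18, the multiply-and-shift is exact for all uint32 inputs.
    func div1e6(n uint32) uint32 {
        return uint32(uint64(n) * 1125899907 >> 50)
    }

    func main() {
        for _, n := range []uint32{0, 999999, 1000000, 123456789, 1<<32 - 1} {
            if div1e6(n) != n/1000000 {
                panic("reciprocal multiply disagrees with DIV")
            }
        }
        println("exact for all tested inputs")
    }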

Step toward a fix for #6699 and #10486.

Change-Id: Iccf76a18432d835e48ec64a2fa34a0e4d6d4b955
Reviewed-on: https://go-review.googlesource.com/12898
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-30 15:48:29 +00:00
David Crawshaw
b7205b92c0 runtime/trace: test requires 'go tool addr2line'
For the android/arm builder.

Change-Id: Iad4881689223cd6479870da9541524a8cc458cce
Reviewed-on: https://go-review.googlesource.com/12859
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
2015-07-30 05:57:37 +00:00
Russ Cox
c4092ac398 cmd/compile: fix uninitialized memory during type switch assertE2I2
Fixes arm64 builder crash.

The bug is possible on all architectures; you just have to get lucky
and hit a preemption or a stack growth on entry to assertE2I2.
The test stacks the deck.

Change-Id: I8419da909b06249b1ad15830cbb64e386b6aa5f6
Reviewed-on: https://go-review.googlesource.com/12890
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2015-07-30 05:21:56 +00:00
Russ Cox
bfac8623d5 runtime: enable TestEmptySlice
It says to disable until #7564 is fixed. It was fixed in April 2014.

Change-Id: I9bebfe96802bafdd2d1a0a47591df346d91b000c
Reviewed-on: https://go-review.googlesource.com/12858
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-30 04:47:16 +00:00
Russ Cox
d3ffc975f3 runtime: set invalidptr=1 by default, as documented
Also make invalidptr control the recently added GC pointer check,
as documented.

Change-Id: Iccfdf49480219d12be8b33b8f03d8312d8ceabed
Reviewed-on: https://go-review.googlesource.com/12857
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2015-07-29 23:50:20 +00:00
Russ Cox
bd5ca22232 runtime/trace: remove existing Skips
The skips added in CL 12579, based on incorrect time stamps,
should be sufficient to identify and exclude all the time-related
flakiness on these systems.

If there is other flakiness, we want to find out.

For #10512.

Change-Id: I5b588ac1585b2e9d1d18143520d2d51686b563e3
Reviewed-on: https://go-review.googlesource.com/12746
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 22:32:23 +00:00
Russ Cox
80c98fa901 runtime/trace: record event sequence numbers explicitly
Nearly all the flaky failures we've seen in trace tests have been
due to the use of time stamps to determine relative event ordering.
This is tricky for many reasons, including:
 - different cores might not have exactly synchronized clocks
 - VMs are worse than real hardware
 - non-x86 chips have different timer resolution than x86 chips
 - on fast systems two events can end up with the same time stamp

Stop trying to make time reliable. It's clearly not going to be for Go 1.5.
Instead, record an explicit event sequence number for ordering.
Using our own counter solves all of the above problems.

The trace still contains time stamps, of course. The sequence number
is just used for ordering.

Should alleviate #10554 somewhat. Then tickDiv can be chosen to
be a useful time unit instead of having to be exact for ordering.

Separating ordering and time stamps lets the trace parser diagnose
systems where the time stamp order and actual order do not match
for one reason or another. This CL adds that check to the end of
trace.Parse, after all other sequence order-based checking.
If that error is found, we skip the test instead of failing it.
Putting the check in trace.Parse means that cmd/trace will pick
up the same check, refusing to display a trace where the time stamps
do not match actual ordering.

Using net/http's BenchmarkClientServerParallel4 on various CPU counts,
not tracing vs tracing:

name                      old time/op    new time/op    delta
ClientServerParallel4       50.4µs ± 4%    80.2µs ± 4%  +59.06%        (p=0.000 n=10+10)
ClientServerParallel4-2     33.1µs ± 7%    57.8µs ± 5%  +74.53%        (p=0.000 n=10+10)
ClientServerParallel4-4     18.5µs ± 4%    32.6µs ± 3%  +75.77%        (p=0.000 n=10+10)
ClientServerParallel4-6     12.9µs ± 5%    24.4µs ± 2%  +89.33%        (p=0.000 n=10+10)
ClientServerParallel4-8     11.4µs ± 6%    21.0µs ± 3%  +83.40%        (p=0.000 n=10+10)
ClientServerParallel4-12    14.4µs ± 4%    23.8µs ± 4%  +65.67%        (p=0.000 n=10+10)

Fixes #10512.

Change-Id: I173eecf8191e86feefd728a5aad25bf1bc094b12
Reviewed-on: https://go-review.googlesource.com/12579
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 22:32:14 +00:00
Russ Cox
fde392623a runtime: ignore arguments in cgocallback_gofunc frame
Otherwise the GC may see uninitialized memory there,
which might be old pointers that are retained, or it might
trigger the invalid pointer check.

Fixes #11907.

Change-Id: I67e306384a68468eef45da1a8eb5c9df216a77c0
Reviewed-on: https://go-review.googlesource.com/12852
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 22:30:46 +00:00
Russ Cox
f6dfe16798 runtime: fix darwin/amd64 assembly frame sizes
Change-Id: I2f0ecdc02ce275feadf07e402b54f988513e9b49
Reviewed-on: https://go-review.googlesource.com/12855
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-29 22:26:02 +00:00
Russ Cox
4addec3aaa runtime: reenable bad pointer check in GC
The last time we tried this, linux/arm64 broke.
The series of CLs leading to this one fixes that problem.
Let's try again.

Fixes #9880.

Change-Id: I67bc1d959175ec972d4dcbe4aa6f153790f74251
Reviewed-on: https://go-review.googlesource.com/12849
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 21:37:55 +00:00
Russ Cox
421220571d runtime, reflect: use correctly aligned stack frame sizes on arm64
arm64 requires either no stack frame or a frame with a size that is 8 mod 16
(adding the saved LR will make it 16-aligned).

The cmd/internal/obj/arm64 has been silently aligning frames, but it led to
a terrible bug when the compiler and obj disagreed on the frame size,
and it's just generally confusing, so we're going to make misaligned frames
an error instead of something that is silently changed.

This CL prepares by updating assembly files.
Note that the changes in this CL are already being done silently by
cmd/internal/obj/arm64, so there is no semantic effect here,
just a clarity effect.

For #9880.

Change-Id: Ibd6928dc5fdcd896c2bacd0291bf26b364591e28
Reviewed-on: https://go-review.googlesource.com/12845
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 21:35:35 +00:00
Austin Clements
23e4744c07 runtime: report GC CPU utilization in MemStats
This adds a GCCPUFraction field to MemStats that reports the
cumulative fraction of the program's execution time spent in the
garbage collector. This is equivalent to the utilization percent shown
in the gctrace output and makes this available programmatically.
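
Reading the new statistic looks like this (GCCPUFraction is the field
added by this change):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        var ms runtime.MemStats
        runtime.ReadMemStats(&ms)
        fmt.Printf("GC CPU fraction: %.4f%%\n", ms.GCCPUFraction*100)
    }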

This does have one small effect on the gctrace output: we now report
the duration of mark termination up to just before the final
start-the-world, rather than up to just after. However, unlike
stop-the-world, I don't believe there's any way that start-the-world
can block, so it should take negligible time.

While there are many statistics one might want to expose via MemStats,
this is one of the few that will undoubtedly remain meaningful
regardless of future changes to the memory system.

The diff for this change is larger than the actual change. Mostly it
lifts the code for computing the GC CPU utilization out of the
debug.gctrace path.

Updates #10323.

Change-Id: I0f7dc3fdcafe95e8d1233ceb79de606b48acd989
Reviewed-on: https://go-review.googlesource.com/12844
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-29 20:23:34 +00:00
Austin Clements
4b71660c5b runtime: always capture GC phase transition times
Currently we only capture GC phase transition times if
debug.gctrace>0, but we're about to compute GC CPU utilization
regardless of whether debug.gctrace is set, so we need these
regardless of debug.gctrace.

Change-Id: If3acf16505a43d416e9a99753206f03287180660
Reviewed-on: https://go-review.googlesource.com/12843
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-07-29 20:23:25 +00:00
Austin Clements
87f97c73d3 runtime: avoid race between SIGPROF traceback and stack barriers
The following sequence of events can lead to the runtime attempting an
out-of-bounds access on a stack barrier slice:

1. A SIGPROF comes in on a thread while the G on that thread is in
   _Gsyscall. The sigprof handler calls gentraceback, which saves a
   local copy of the G's stkbar slice. Currently the G has no stack
   barriers, so this slice is empty.

2. On another thread, the GC concurrently scans the stack of the
   goroutine being profiled (it considers it stopped because it's in
   _Gsyscall) and installs stack barriers.

3. Back on the sigprof thread, gentraceback comes across a stack
   barrier in the stack and attempts to look it up in its (zero
   length) copy of G's old stkbar slice, which causes an out-of-bounds
   access.

This commit fixes this by adding a simple cas spin to synchronize the
SIGPROF handler with stack barrier insertion.
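
The synchronization in miniature (the field and helper names are
illustrative, not the runtime's):

    package main

    import (
        "runtime"
        "sync/atomic"
    )

    type g struct {
        stackLock uint32
        // stkbar entries would live here
    }

    // Both the SIGPROF handler and the stack scanner take this one-word
    // spin lock before touching stkbar.
    func lockStkbar(gp *g) {
        for !atomic.CompareAndSwapUint32(&gp.stackLock, 0, 1) {
            runtime.Gosched() // the runtime would osyield() instead
        }
    }

    func unlockStkbar(gp *g) { atomic.StoreUint32(&gp.stackLock, 0) }

    func main() {
        gp := new(g)
        lockStkbar(gp)   // profiler side: snapshot stkbar safely
        unlockStkbar(gp) // scanner side does the same around insertion
    }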

In general I would prefer that this synchronization be done through
the G status, since that's how stack scans are otherwise synchronized,
but adding a new lock is a much smaller change and G statuses are full
of subtlety.

Fixes #11863.

Change-Id: Ie89614a6238bb9c6a5b1190499b0b48ec759eaf7
Reviewed-on: https://go-review.googlesource.com/12748
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-29 19:31:46 +00:00
Rick Hudson
e95bc5fef7 runtime: force mutator to give work buffer to GC
The scheduler, the work buffer's dispose, and write barriers
can conspire to hide a pointer from the GC's concurrent
mark phase. If this pointer is the only path to a large
amount of marking, the STW mark termination phase may take
a lot of time.

Consider the following:
1) dispose places a work buffer on the partial queue
2) the GC is busy so it does not immediately remove and
   process the work buffer
3) the scheduler runs a mutator whose write barrier dequeues the
   work buffer from the partial queue so the GC won't see it
This repeats until the GC reaches the mark termination
phase where the GC finally discovers the pointer along
with a lot of work to do.

This CL fixes the problem by having the mutator
dispose of the buffer to the full queue instead of
the partial queue. Since the write barrier never asks for full
buffers, the conspiracy described above is not possible.
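
The essence of the fix with stand-in types (the real code is the
gcWork dispose path):

    package main

    type workbuf struct{ obj []uintptr }

    var fullQueue []*workbuf // write barriers never take buffers from here

    func dispose(b *workbuf) {
        // Previously this pushed onto the partial queue, where a mutator's
        // write barrier could dequeue the buffer and hide it from the GC.
        fullQueue = append(fullQueue, b)
    }

    func main() {
        dispose(&workbuf{})
        println(len(fullQueue)) // 1
    }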

Updates #11694.

Change-Id: I2ce832f9657a7570f800e8ce4459cd9e304ef43b
Reviewed-on: https://go-review.googlesource.com/12840
Reviewed-by: Austin Clements <austin@google.com>
2015-07-29 18:56:11 +00:00
Dmitry Vyukov
0c22a74e85 runtime: fix out-of-bounds in stack debugging
Currently stackDebug=4 crashes as:

panic: runtime error: index out of range
fatal error: panic on system stack
runtime stack:
runtime.throw(0x607470, 0x15)
	src/runtime/panic.go:527 +0x96
runtime.gopanic(0x5ada00, 0xc82000a1d0)
	src/runtime/panic.go:354 +0xb9
runtime.panicindex()
	src/runtime/panic.go:12 +0x49
runtime.adjustpointers(0xc820065ac8, 0x7ffe58b56100, 0x7ffe58b56318, 0x0)
	src/runtime/stack1.go:428 +0x5fb
runtime.adjustframe(0x7ffe58b56200, 0x7ffe58b56318, 0x1)
	src/runtime/stack1.go:542 +0x780
runtime.gentraceback(0x487760, 0xc820065ac0, 0x0, 0xc820001080, 0x0, 0x0, 0x7fffffff, 0x6341b8, 0x7ffe58b56318, 0x0, ...)
	src/runtime/traceback.go:336 +0xa7e
runtime.copystack(0xc820001080, 0x1000)
	src/runtime/stack1.go:616 +0x3b1
runtime.newstack()
	src/runtime/stack1.go:801 +0xdde

Change-Id: If2d60960231480a9dbe545d87385fe650d6db808
Reviewed-on: https://go-review.googlesource.com/12763
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-28 20:11:19 +00:00
Russ Cox
7a63ab1a65 runtime: use 64k page rounding on arm64
Fixes #11886.

Change-Id: I9392fd2ef5951173ae275b3ab42db4f8bd2e1d7a
Reviewed-on: https://go-review.googlesource.com/12747
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-07-28 19:59:00 +00:00
David du Colombier
68117a91ae runtime: fix x86 stack trace for call to heap memory on Plan 9
Russ Cox fixed this issue for other systems
in CL 12026, but the Plan 9 part was forgotten.

Fixes #11656.

Change-Id: I91c033687987ba43d13ad8f42e3fe4c7a78e6075
Reviewed-on: https://go-review.googlesource.com/12762
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-28 19:01:41 +00:00
Ian Lance Taylor
0229317d76 runtime: don't define libc_getpid in os3_solaris.go
The function is already defined between syscall_solaris.go and
syscall2_solaris.go.

Change-Id: I034baf7c8531566bebfdbc5a4061352cbcc31449
Reviewed-on: https://go-review.googlesource.com/12773
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-28 14:07:17 +00:00
Ian Lance Taylor
deaf0333df runtime: fix definitions of getpid and kill on Solaris
A further attempt to fix raiseproc on Solaris.

Change-Id: I8d8000d6ccd0cd9f029ebe1f211b76ecee230cd0
Reviewed-on: https://go-review.googlesource.com/12771
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-07-28 06:21:08 +00:00
Ian Lance Taylor
d7223c6cc1 runtime: correct implementation of raiseproc on Solaris
I forgot that the libc raise function only sends the signal to the
current thread.  We need to actually use kill and getpid here, as we
do on other systems.

Change-Id: Iac34af822c93468bf68cab8879db3ee20891caaf
Reviewed-on: https://go-review.googlesource.com/12704
Reviewed-by: Russ Cox <rsc@golang.org>
2015-07-28 05:41:27 +00:00
David Crawshaw
249894ab6c runtime/cgo: remove TMPDIR logic for iOS
Seems like the simplest solution for 1.5. All the parts of the test
suite I can run on my current device (for which my exception handler
fix no longer works, apparently) pass without this code. I'll move it
into x/mobile/app.

Fixes #11884

Change-Id: I2da40c8c7b48a4c6970c4d709dd7c148a22e8727
Reviewed-on: https://go-review.googlesource.com/12721
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-07-27 21:28:31 +00:00
Austin Clements
c1f7a56fc0 runtime: close window that hides GC work from concurrent mark
Currently we enter mark 2 by first flushing all existing gcWork caches
and then setting gcBlackenPromptly, which disables further gcWork
caching. However, if a worker or assist pulls a work buffer in to its
gcWork cache after that cache has been flushed but before caching is
disabled, that work may remain in that cache until mark termination.
If that work represents a heap bottleneck (e.g., a single pointer that
is the only way to reach a large amount of the heap), this can force
mark termination to do a large amount of work, resulting in a long
STW.

Fix this by reversing the order of these steps: first disable caching,
then flush all existing caches.

Rick Hudson <rlh> did the hard work of tracking this down. This CL
combined with CL 12672 and CL 12646 distills the critical parts of his
fix from CL 12539.

Fixes #11694.

Change-Id: Ib10d0a21e3f6170a80727d0286f9990df049fed2
Reviewed-on: https://go-review.googlesource.com/12688
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-07-27 20:00:25 +00:00