mirror of https://github.com/golang/go synced 2024-11-23 20:50:04 -07:00
Commit Graph

41843 Commits

Author SHA1 Message Date
Michael Anthony Knyszek
a5a6f61043 runtime: fix (*gcSweepBuf).block guarantees
Currently gcSweepBuf guarantees that push operations may be performed
concurrently with each other and that block operations may be performed
concurrently with push operations as well.

Unfortunately, this isn't quite true. The existing code allows push
operations to happen concurrently with each other, but block operations
may return blocks with nil entries. The way this can happen is if two
concurrent pushers grab a slot to push to, and the first one (the one
with the earlier slot in the buffer) has not yet written its span value
by the time block is called. The existing code in block only checks
whether the very last value in the block is nil, when in fact an
arbitrary number of the trailing values may or may not be nil.

Today, this case can't actually happen because when push operations
happen concurrently during a GC (which is the only time block is
called), they only ever happen during an allocation with the heap lock
held, effectively serializing them. A block operation may happen
concurrently with one of these pushes, but its callers will never see a
nil mspan. Outside of a GC, this isn't a problem because although push
operations from allocations can run concurrently with push operations
from sweeping, block operations will never run.

In essence, the real concurrency guarantees provided by gcSweepBuf are
that block operations may happen concurrently with push operations, but
that push operations may not be concurrent with each other if there are
any block operations.

To fix this, and to prepare for push operations happening without the
heap lock held in a future CL, we update the documentation for block to
correctly state that there may be nil entries in the returned slice.
While we're here, make the mspan writes into the buffer atomic to avoid
a block user racing on a nil check, and document that the user should
load mspan values from the returned slice atomically. Finally, we make
all callers of block adhere to the new rules.

We choose to allow nil values rather than filter them out because the
only caller of block is markrootSpans, and if it catches a nil entry,
then there wasn't anything to mark in there anyway since the span is
just being created.
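
As a rough illustration of the resulting contract (a minimal sketch
with made-up names, using sync/atomic rather than the runtime's
internal atomics; this is not gcSweepBuf itself): pushers claim a slot
with an atomic counter and publish their span with an atomic store,
and block readers load atomically and tolerate nil slots.

    package sketch

    import "sync/atomic"

    type span struct{ id int } // stand-in for the runtime's mspan

    // spanBuf is a toy stand-in for gcSweepBuf.
    type spanBuf struct {
        used  atomic.Uint32
        slots []atomic.Pointer[span]
    }

    func (b *spanBuf) push(s *span) {
        i := b.used.Add(1) - 1 // claim slot i
        b.slots[i].Store(s)    // publish atomically
    }

    // block returns the non-nil entries among the n slots starting at i;
    // a nil slot means a racing push claimed it but hasn't stored its span
    // yet, and there is nothing to do for it (as markrootSpans now does).
    func (b *spanBuf) block(i, n int) []*span {
        out := make([]*span, 0, n)
        for j := i; j < i+n; j++ {
            if s := b.slots[j].Load(); s != nil { // atomic load, per the new rules
                out = append(out, s)
            }
        }
        return out
    }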

Updates #35112.

Change-Id: I6450aab15f51690d7a000ba5b3d529cf2ca5da1e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203318
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 17:01:05 +00:00
Michael Anthony Knyszek
dac936a4ab runtime: make more page sweeper operations atomic
This change makes it so that allocation- and free-related page sweeper
metadata operations (e.g. pageInUse and pagesInUse) are atomic rather
than protected by the heap lock. This will help in reducing the length
of the critical path with the heap lock held in future changes.

Updates #35112.

Change-Id: Ie82bff024204dd17c4c671af63350a7a41add354
Reviewed-on: https://go-review.googlesource.com/c/go/+/196640
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 17:00:57 +00:00
Cherry Zhang
47232f0d92 cmd/internal/obj/arm64: make function epilogue async-signal safe
When the frame size is large, we generate

MOVD.P	0xf0(SP), LR
ADD	$(framesize-0xf0), SP

This is problematic: after the first instruction, we have a
partial frame of size (framesize-0xf0). If we try to unwind the
stack at this point, we'll try to read the LR from the stack at
0(SP) (the new SP) as the frame size is not 0. But this slot does
not contain a valid LR.

Fix this by not changing SP in two instructions. Instead,
generate

MOVD	(SP), LR
ADD	$framesize, SP

This affects not only async preemption but also profiling, so we
change the generated instructions instead of marking the sequence as
an unsafe point.

Change-Id: I4e78c62d50ffc4acff70ccfbfec16a5ccae17f24
Reviewed-on: https://go-review.googlesource.com/c/go/+/206057
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2019-11-08 16:46:24 +00:00
Cherry Zhang
374c2847f9 runtime: add async preemption support on PPC64
This CL adds support of call injection and async preemption on
PPC64.

For the injected call to return to the preempted PC, we have to
clobber either LR or CTR. For reasons mentioned in previous CLs,
we choose CTR. Previous CLs have marked code sequences that use
CTR async-nonpreemptible.

Change-Id: Ia642b5f06a890dd52476f45023b2a830c522eee0
Reviewed-on: https://go-review.googlesource.com/c/go/+/203824
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-08 16:44:48 +00:00
Michael Anthony Knyszek
7f574e476a runtime: remove unnecessary large parameter to mheap_.alloc
mheap_.alloc currently accepts both a spanClass and a "large" parameter
indicating whether the allocation is large. These are redundant, since
spanClass.sizeclass() == 0 is an equivalent way to determine this and is
already used in mheap_.alloc. There are no places in the runtime where
the size class could be non-zero and large == true.
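
A hedged sketch of the encoding being relied on here (in the spirit of
the runtime's spanClass; the exact layout below is illustrative):

    package sketch

    // spanClass packs a size class and a noscan bit into one byte.
    // Size class 0 denotes a large allocation, so a separate "large"
    // parameter carries no extra information.
    type spanClass uint8

    func makeSpanClass(sizeclass uint8, noscan bool) spanClass {
        sc := spanClass(sizeclass) << 1
        if noscan {
            sc |= 1
        }
        return sc
    }

    func (sc spanClass) sizeclass() int8 { return int8(sc >> 1) }

    // isLarge replaces the redundant boolean parameter.
    func isLarge(sc spanClass) bool { return sc.sizeclass() == 0 }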

Updates #35112.

Change-Id: Ie66facf8f0faca6f4cd3d20a8ac4bc259e11823d
Reviewed-on: https://go-review.googlesource.com/c/go/+/196639
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 16:44:33 +00:00
Michael Anthony Knyszek
ffb5646fe0 runtime: define maximum supported physical page and huge page sizes
This change defines a maximum supported physical and huge page size in
the runtime based on the new page allocator's implementation, and uses
them where appropriate.

Furthermore, if the system's huge page size exceeds the maximum we
support, we simply ignore it silently.

It also fixes a huge-page-related test that was only triggered by a
condition which was plainly wrong.

Finally, it adds a few TODOs related to code clean-up and supporting
larger huge page sizes.

Updates #35112.
Fixes #35431.

Change-Id: Ie4348afb6bf047cce2c1433576d1514720d8230f
Reviewed-on: https://go-review.googlesource.com/c/go/+/205937
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-11-08 16:35:48 +00:00
Bryan C. Mills
a782472dcd cmd/go: delete flaky TestQEMUUserMode
If QEMU user-mode is actually a supported configuration, then per
http://golang.org/wiki/PortingPolicy it needs to have a builder
running tests for all packages, not just a simple “hello world”
program.

Updates #1508
Updates #13024
Fixes #35457

Change-Id: Ib6122b06ad1d265550a0e92131506266495893cc
Reviewed-on: https://go-review.googlesource.com/c/go/+/206137
Run-TryBot: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-11-08 16:31:00 +00:00
Michael Anthony Knyszek
ae4534e659 runtime: ensure heap memstats are updated atomically
For the most part, heap memstats are already updated atomically when
passed down to OS-level memory functions (e.g. sysMap). Elsewhere,
however, they're updated with the heap lock.

In order to facilitate holding the heap lock for less time during
allocation paths, this change more consistently makes the update of
these statistics atomic by calling mSysStat{Inc,Dec} appropriately
instead of simply adding or subtracting. It also ensures these values
are loaded atomically.

Furthermore, an undocumented but safe update condition for these
memstats is during STW, at which point using atomics is unnecessary.
This change also documents this condition in mstats.go.
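
A minimal sketch of this style of update (the names are mine, not the
runtime's mSysStat helpers), using the standard two's-complement idiom
to subtract with an atomic add:

    package sketch

    import "sync/atomic"

    // Stats that may be read without the heap lock are updated and loaded
    // with atomics. During STW there are no concurrent readers or writers,
    // so plain accesses would also be safe there.
    func sysStatInc(stat *uint64, n uintptr) { atomic.AddUint64(stat, uint64(n)) }
    func sysStatDec(stat *uint64, n uintptr) { atomic.AddUint64(stat, ^uint64(n-1)) } // subtract n
    func sysStatLoad(stat *uint64) uint64    { return atomic.LoadUint64(stat) }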

Updates #35112.

Change-Id: I87d0b6c27b98c88099acd2563ea23f8da1239b66
Reviewed-on: https://go-review.googlesource.com/c/go/+/196638
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 16:21:04 +00:00
Michael Anthony Knyszek
814c5058bb runtime: remove useless heap_objects accounting
This change removes useless additional heap_objects accounting for large
objects. heap_objects is computed from scratch at ReadMemStats time
(which stops the world) by using nlargealloc and nlargefree, so mutating
heap_objects turns out to be pointless.

As a result, the "large" parameter on "mheap_.freeSpan" is no longer
necessary and so this change cleans that up too.

Change-Id: I7d6b486d9b57c018e3db46221d81b55fe4c1b021
Reviewed-on: https://go-review.googlesource.com/c/go/+/196637
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 16:20:27 +00:00
Michael Anthony Knyszek
4208dbef16 runtime: make allocNeedsZero lock-free
In preparation for a lockless fast path in the page allocator, this
change makes it so that checking if an allocation needs to be zeroed may
be done atomically.

Unfortunately, this means there is a CAS-loop to ensure monotonicity of
the zeroedBase value in heapArena. This CAS-loop exits either when it
succeeds or when an allocator acquiring memory further on in the arena
wins the race. The CAS-loop should see relatively little contention
because of this monotonicity, though it would be ideal if we could just
have CAS-ers with the greatest value always win. The CAS-loop is
unnecessary in the steady state, but it should bring some start-up
performance gains, as it's likely cheaper than the additional zeroing
that would otherwise be required, especially for large allocations.

For very large allocations that span arenas, the CAS-loop should be
completely uncontended for most of the arenas it touches; it may only
encounter contention on the first and last arena.
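
A rough sketch of that CAS-loop, simplified to a single arena (the
names and structure are illustrative, not the runtime's code):

    package sketch

    import "sync/atomic"

    // needsZero reports whether [base, base+size) may contain non-zero
    // memory. zeroedBase is a monotonically increasing watermark: memory at
    // or above it has never been handed out and is still zero from the OS.
    func needsZero(zeroedBase *uintptr, base, size uintptr) bool {
        zb := atomic.LoadUintptr(zeroedBase)
        needZero := base < zb // part of the range was allocated before
        limit := base + size
        // Raise the watermark to at least limit. The loop exits when our CAS
        // succeeds or when an allocator further on in the arena has already
        // raised zeroedBase past limit.
        for limit > zb {
            if atomic.CompareAndSwapUintptr(zeroedBase, zb, limit) {
                break
            }
            zb = atomic.LoadUintptr(zeroedBase)
        }
        return needZero
    }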

Updates #35112.

Change-Id: If3d19198b33f1b1387b71e1ce5902d39a5c0f98e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203859
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 16:20:17 +00:00
Bryan C. Mills
52aebe8d21 runtime: skip TestPingPongHog on builders
This test is failing consistently in the longtest builders,
potentially masking regressions in other packages.

Updates #35271

Change-Id: Idc03171c0109b5c8d4913e0af2078c1115666897
Reviewed-on: https://go-review.googlesource.com/c/go/+/206098
Reviewed-by: Carlos Amedee <carlos@golang.org>
2019-11-08 15:10:39 +00:00
Hana (Hyang-Ah) Kim
45b4ed7577 runtime/pprof: delete unused locForPC
Change-Id: Ie4754fefba6057b1cf558d0096fe0e83355f8eff
Reviewed-on: https://go-review.googlesource.com/c/go/+/205098
TryBot-Result: Gobot Gobot <gobot@golang.org>
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-08 04:44:53 +00:00
Hana (Hyang-Ah) Kim
e25de44ef2 runtime/pprof: correct inlined function location encoding for non-CPU profiles
Change-Id: Id270a3477bf1a581755c4311eb12f990aa2260b5
Reviewed-on: https://go-review.googlesource.com/c/go/+/205097
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-08 04:08:53 +00:00
Ian Lance Taylor
f5e89c2214 Revert "math/cmplx: handle special cases"
This reverts CL 169501.

Reason for revert: The new tests fail at least on s390x and MIPS.
This is likely a minor bug in the compiler or runtime. But this point
in the release cycle is not the time to debug these details, which are
unlikely to be new. Let's try again for 1.15.

Updates #29320
Fixes #35443

Change-Id: I2218b2083f8974b57d528e3742524393fc72b355
Reviewed-on: https://go-review.googlesource.com/c/go/+/206037
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-11-08 03:07:30 +00:00
Hana (Hyang-Ah) Kim
e038c7e418 runtime/pprof: correctly encode inlined functions in CPU profile
The pprof profile proto message expects inlined functions of a PC
to be encoded in one Location entry using multiple Line entries.
https://github.com/google/pprof/blob/5e96527/proto/profile.proto#L177-L184

runtime/pprof has encoded the symbolization information by creating
a Location for each PC found in the stack trace and including info
from all the frames expanded from the PC using runtime.CallersFrames.
This assumes inlined functions are represented as a single PC in the
stack trace. (https://go-review.googlesource.com/41256)

In recent years, the behavior around inlining and traceback changed
significantly (e.g. https://golang.org/cl/152537,
https://golang.org/issue/29582, and many other changes). The PCs in the
stack trace now represent user frames, including the fake PCs inserted
as inline marks. As a result, the profile proto started to allocate a
Location entry for each user frame, losing the inline information (so
pprof presented incorrect results when inlined functions were involved)
and confusing the pprof tool with those made-up inline-mark PCs.

This CL attempts to detect inlined call frames from the stack traces
of CPU profiles, and organize the Location information as intended.
Currently, the runtime does not provide a reliable and convenient way to
detect inlined call frames and expand user frames from given, externally
recognizable PCs. So we use heuristics to recover the groups:
  - inlined call frames have nil Func field
  - inlined call frames will have the same Entry point
  - but must be careful with recursive functions that have the
    same Entry point by definition, and non-Go functions that
    may lack most of the fields of Frame.

The followup CL will address the issue with other profile types.
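
A hedged sketch of what grouping by those heuristics can look like on
top of runtime.CallersFrames (illustrative only; the CL's logic in
runtime/pprof handles more cases than this):

    package sketch

    import "runtime"

    // groupInlined splits expanded frames into one group per Location.
    // Heuristic: a frame expanded from an inline mark has a nil Func and
    // shares its Entry with the frame it was inlined into. Recursion and
    // non-Go frames defeat this, so the real code must be more careful.
    func groupInlined(pcs []uintptr) [][]runtime.Frame {
        if len(pcs) == 0 {
            return nil
        }
        var groups [][]runtime.Frame
        frames := runtime.CallersFrames(pcs)
        for {
            f, more := frames.Next()
            joined := false
            if n := len(groups); n > 0 {
                g := groups[n-1]
                last := g[len(g)-1]
                joined = last.Func == nil && last.Entry == f.Entry
            }
            if joined {
                groups[len(groups)-1] = append(groups[len(groups)-1], f)
            } else {
                groups = append(groups, []runtime.Frame{f})
            }
            if !more {
                return groups
            }
        }
    }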

Change-Id: I0c9667ab016a3e898d648f31c3f82d84c15398db
Reviewed-on: https://go-review.googlesource.com/c/go/+/204636
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-08 02:40:04 +00:00
Michael Anthony Knyszek
33dfd3529b runtime: remove old page allocator
This change removes the old page allocator from the runtime.

Updates #35112.

Change-Id: Ib20e1c030f869b6318cd6f4288a9befdbae1b771
Reviewed-on: https://go-review.googlesource.com/c/go/+/195700
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 00:07:43 +00:00
Michael Anthony Knyszek
e6135c2768 runtime: switch to new page allocator
This change flips the oldPageAllocator constant enabling the new page
allocator in the Go runtime.

Updates #35112.

Change-Id: I7fc8332af9fd0e43ce28dd5ebc1c1ce519ce6d0c
Reviewed-on: https://go-review.googlesource.com/c/go/+/201765
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 23:55:56 +00:00
Cherry Zhang
52d5e76b39 runtime: disable async preemption on darwin/arm(64) for now
Enabling async preemption on darwin/arm and darwin/arm64 causes
the builder to fail, e.g.
https://build.golang.org/log/03f727b8f91b0c75bf54ff508d7d2f00b5cad4bf

Due to limited resources, I haven't been able to get access to
those devices to debug. Disable async preemption for now.

Updates #35439.

Change-Id: I5a31ad6962c2bae8e6e9b8303c494610a8a4e50a
Reviewed-on: https://go-review.googlesource.com/c/go/+/205842
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-11-07 22:17:54 +00:00
Keith Randall
953cc7490a doc: document new math.Fma function
This accidentally got committed - please review the whole paragraph
as if it was new.

Change-Id: I98e1db4670634c6e792d26201ce0cd329a6928b6
Reviewed-on: https://go-review.googlesource.com/c/go/+/202579
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2019-11-07 21:59:39 +00:00
Matthew Dempsky
4cde749f63 cmd/compile: restore more missing -m=2 escape analysis details
This CL also restores analysis details for (1) expressions that are
directly heap allocated because they are too large for the stack or
non-constant in size, and (2) assignments that we short-circuit
because we flow their address to another escaping object.

No change to normal compilation behavior. Only adds additional Printfs
guarded by -m=2.
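
For instance (a toy example of my own, not taken from the CL), building
the following with go build -gcflags=-m=2 exercises case (1): a make
whose size is not constant, so the slice is directly heap allocated.

    package main

    // With -m=2 the compiler now explains why this allocation escapes:
    // its size is non-constant, so it cannot live on the stack.
    func grow(n int) []byte {
        return make([]byte, n)
    }

    func main() {
        _ = grow(64)
    }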

Updates #31489.

Change-Id: I43682195d389398d75ced2054e29d9907bb966e7
Reviewed-on: https://go-review.googlesource.com/c/go/+/205917
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-11-07 21:59:16 +00:00
Cherry Zhang
a930fede73 runtime: add async preemption support on MIPS and MIPS64
This CL adds support of call injection and async preemption on
MIPS and MIPS64.

Like ARM64, we need to clobber one register (REGTMP) for
returning from the injected call. Previous CLs have marked code
sequences that use REGTMP async-nonpreemptible.

It seems that on MIPS/MIPS64, a CALL instruction is not "atomic" (!).
If a signal is delivered right at the CALL instruction, we may
see an updated LR with a not-yet-updated PC. In some cases this
may lead to failed stack unwinding. Don't preempt in this case.

Change-Id: I99437b2d05869ded5c0c8cb55265dbfc933aedab
Reviewed-on: https://go-review.googlesource.com/c/go/+/203720
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 20:59:14 +00:00
Michael Anthony Knyszek
a120cc8b36 runtime: compute whether a span needs zeroing in the new page allocator
This change adds the allocNeedsZero method to mheap, which uses the new
heapArena field zeroedBase to determine whether a new allocation needs
zeroing. The purpose of this work is to avoid zeroing memory that is
fresh from the OS in the context of the new allocator, where we no
longer have the concept of a free span to track this information.

The new field in heapArena, zeroedBase, is small, which runs counter to
the advice in the doc comment for heapArena. Since heapArenas are
already not a multiple of the system page size, this advice seems stale,
and we're OK with using an extra physical page for a heapArena. So, this
change also deletes the comment with that advice.

Updates #35112.

Change-Id: I688cd9fd3c57a98a6d43c45cf699543ce16697e2
Reviewed-on: https://go-review.googlesource.com/c/go/+/203858
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 20:52:05 +00:00
Cherry Zhang
933bf75eda runtime: add async preemption support on S390X
This CL adds support of call injection and async preemption on
S390X.

Like ARM64, we need to clobber one register (REGTMP) for
returning from the injected call. Previous CLs have marked code
sequences that use REGTMP async-nonpreemptible.

Change-Id: I78adbc5fd70ca245da390f6266623385b45c9dfc
Reviewed-on: https://go-review.googlesource.com/c/go/+/204106
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 20:45:45 +00:00
Cherry Zhang
4751db93ef cmd/internal/obj/s390x: mark unsafe points
For async preemption, we will be using REGTMP as a temporary
register in the injected call on S390X, which will clobber it. So any
code that uses REGTMP is not safe for async preemption.

In the assembler backend, we expand a Prog to multiple machine
instructions and use REGTMP as a temporary register if necessary.
These need to be marked unsafe. Unlike ARM64 and MIPS,
instructions on S390X are variable length so we don't use the
length as a condition. Instead, we set a bit on the Prog whenever
REGTMP is used.

Change-Id: Ie5d14068a950f4c7cea51dff2c4a8bdc19ec9348
Reviewed-on: https://go-review.googlesource.com/c/go/+/204105
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 20:34:27 +00:00
Michael Anthony Knyszek
689f6f77f0 runtime: integrate new page allocator into runtime
This change integrates all the bits and pieces of the new page allocator
into the runtime, behind a global constant.

Updates #35112.

Change-Id: I6696bde7bab098a498ab37ed2a2caad2a05d30ec
Reviewed-on: https://go-review.googlesource.com/c/go/+/201764
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 20:14:02 +00:00
Michael Anthony Knyszek
21445b091e runtime: make the scavenger self-paced
Currently the runtime background scavenger is paced externally,
controlled by a collection of variables which together describe a line
that we'd like to stay under.

However, the line to stay under is computed as a function of the number
of free and unscavenged huge pages in the heap at the end of the last
GC. Aside from this number being inaccurate (which is still acceptable),
the scavenging system also makes an order-of-magnitude assumption as to
how expensive scavenging a single page actually is.

This change simplifies the scavenger in preparation for making it
operate on bitmaps. It makes it so that the scavenger paces itself, by
measuring the amount of time it takes to scavenge a single page. The
scavenging methods on mheap already avoid breaking huge pages, so if we
scavenge a real huge page, then we'll have paced correctly; otherwise
we'll sleep for longer to avoid using more than scavengePercent of wall
clock time.

Unfortunately, all this involves measuring time, which is quite tricky.
Currently we don't directly account for long process sleeps or OS-level
context switches (which is quite difficult to do in general), but we do
account for Go scheduler overhead and variations in it by maintaining an
EWMA of the ratio of time spent scavenging to the time spent sleeping.
This ratio, as well as the sleep time, are bounded in order to deal with
the aforementioned OS-related anomalies.
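
A much-simplified sketch of the pacing arithmetic (my own construction
of the idea with illustrative constants; the runtime's actual pacing
and bounds differ):

    package sketch

    import "time"

    const scavengePercent = 1 // illustrative: spend ~1% of wall-clock time scavenging

    // critFraction is an EWMA of the fraction of time actually spent
    // scavenging; it absorbs Go scheduler overhead and jitter. The real
    // implementation also bounds the EWMA and the sleep duration.
    var critFraction = float64(scavengePercent) / 100

    // nextSleep returns how long to sleep after scavenging one page took
    // critTime, given how long the previous sleep actually lasted.
    func nextSleep(critTime, slept time.Duration) time.Duration {
        if critTime+slept > 0 {
            measured := float64(critTime) / float64(critTime+slept)
            critFraction = 0.5*critFraction + 0.5*measured
        }
        // The base sleep keeps critTime near scavengePercent of wall-clock
        // time; if we've been running hotter than that, sleep longer.
        sleep := critTime * (100 - scavengePercent) / scavengePercent
        return time.Duration(float64(sleep) * critFraction * 100 / scavengePercent)
    }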

Updates #35112.

Change-Id: Ieca8b088fdfca2bebb06bcde25ef14a42fd5216b
Reviewed-on: https://go-review.googlesource.com/c/go/+/201763
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 20:12:18 +00:00
Brian Kessler
68dce4296e math/cmplx: handle special cases
Implement special case handling and testing to ensure
conformance with the C99 standard annex G.6 Complex arithmetic.

Fixes #29320

Change-Id: Ieb0527191dd7fdea5b1aecb42b9e23aae3f74260
Reviewed-on: https://go-review.googlesource.com/c/go/+/169501
Run-TryBot: Brian Kessler <brian.m.kessler@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
2019-11-07 19:48:30 +00:00
Cherry Zhang
3c47eada3f cmd/internal/obj/ppc64: handle MOVDU for SP delta
If a MOVDU instruction is used with an offset of SP, the
instruction changes SP and therefore needs an SP delta, which is used
for generating the PC-SP table for stack unwinding. MOVDU is
frequently used for allocating the frame and saving the LR in the
same instruction, so this is particularly useful.
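
For illustration, in the style of the snippets earlier in this log (the
frame size and scratch register are hypothetical), the prologue can
save the LR and allocate the frame with a single store-with-update on
R1, the PPC64 stack pointer, and the PC-SP table now records the SP
delta at that instruction:

    MOVD	LR, R0
    MOVDU	R0, -32(R1)	// save LR and move SP down by 32 in one instruction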

Change-Id: Icb63eb55aa01c3dc350ac4e4cff6371f4c3c5867
Reviewed-on: https://go-review.googlesource.com/c/go/+/205279
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 19:20:57 +00:00
Cherry Zhang
ceca99bdeb cmd/compile, cmd/internal/obj/ppc64: mark unsafe points
We'll use CTR as a scratch register for call injection. Mark code
sequences that use CTR as unsafe for async preemption. Currently
it is only used in LoweredZero and LoweredMove. It is unfortunate
that they are nonpreemptible. But I think it is still better than
using LR for call injection and marking all leaf functions
nonpreemptible.

Also mark the prologue of large frame functions nonpreemptible,
as we write below SP.

Change-Id: I05a75431499f3f4b2f23651a7b17f7fcf2afbe06
Reviewed-on: https://go-review.googlesource.com/c/go/+/203823
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 19:20:35 +00:00
Cherry Zhang
24e549a717 cmd/compile, cmd/internal/obj/ppc64: use LR for indirect calls
On PPC64, indirect calls can be made through LR or CTR. Currently
both are used. This CL changes it to always use LR.

For async preemption, to return from the injected call, we need
an indirect jump back to the PC we preempted. This jump can be
made through LR or CTR. So we'll have to clobber either LR or CTR.
Currently, LR is used more frequently. In particular, for a leaf
function, LR is live throughout the function. We don't want to
make leaf functions nonpreemptible. So we choose CTR for the call
injection. For code sequences that use CTR, if it is ok to use
another register, change them to do so.

Plus, it is a call so it will clobber LR anyway. It doesn't need
to also clobber CTR (even without preemption).

Change-Id: I07bd0e93b94a1a3aa2be2cd465801136165d8ab8
Reviewed-on: https://go-review.googlesource.com/c/go/+/203822
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 19:20:03 +00:00
Cherry Zhang
a96cfa75c6 cmd/compile: mark unsafe points for MIPS and MIPS64
Mark atomic LL/SC loops as unsafe for async preemption, as they
use REGTMP.

Change-Id: I5be7f93ad3ee337049ec7c3efd6fdc30eef87d97
Reviewed-on: https://go-review.googlesource.com/c/go/+/203719
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 19:18:59 +00:00
Cherry Zhang
69dcdbd2ba cmd/internal/obj/mips: mark unsafe points
For async preemption, we will be using REGTMP as a temporary
register in the injected call on MIPS, which will clobber it. So any
code that uses REGTMP is not safe for async preemption.

In the assembler backend, we expand a Prog to multiple machine
instructions and use REGTMP as a temporary register if necessary.
These need to be marked unsafe. In fact, most of the
multi-instruction Progs use REGTMP, so we mark all of them,
except ones that are whitelisted.

Change-Id: Ic00ae5589683c2c9525abdaee076d884df6b0d1e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203718
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 19:18:49 +00:00
Cherry Zhang
1b0b980904 runtime: add async preemption support on ARM64
This CL adds support of call injection and async preemption on
ARM64.

There seems to be no way to return from the injected call without
clobbering *any* register. So we have to clobber one, which is
chosen to be REGTMP. Previous CLs have marked code sequences
that use REGTMP async-nonpreemptible.

Change-Id: Ieca4e3ba5557adf3d0f5d923bce5f1769b58e30b
Reviewed-on: https://go-review.googlesource.com/c/go/+/203461
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 19:18:12 +00:00
Cherry Zhang
4736088463 cmd/internal/obj/arm64: mark unsafe points
For async preemption, we will be using REGTMP as a temporary
register in the injected call on ARM64, which will clobber it. So any
code that uses REGTMP is not safe for async preemption.

In the assembler backend, we expand a Prog to multiple machine
instructions and use REGTMP as a temporary register if necessary.
These need to be marked unsafe. In fact, most of the
multi-instruction Progs use REGTMP, so we mark all of them,
except ones that are whitelisted.

Change-Id: I6e97805a13950e3b693fb606d77834940ac3722e
Reviewed-on: https://go-review.googlesource.com/c/go/+/203460
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 19:17:50 +00:00
Michael Anthony Knyszek
e5ce13c178 runtime: add option to scavenge with lock held throughout
This change adds a "locked" parameter to scavenge() and scavengeone()
which allows these methods to be run with the heap lock acquired, and
synchronously with respect to other operations that acquire the heap lock.

This mode is necessary for both heap-growth scavenging (multiple
asynchronous scavengers here could be problematic) and
debug.FreeOSMemory.

Updates #35112.

Change-Id: I24eea8e40f971760999c980981893676b4c9b666
Reviewed-on: https://go-review.googlesource.com/c/go/+/195699
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 19:14:47 +00:00
Michael Anthony Knyszek
e1ddf0507c runtime: count scavenged bits for new allocation for new page allocator
This change makes it so that the new page allocator returns the number
of pages that are scavenged in a new allocation so that mheap can update
memstats appropriately.

The accounting could be embedded into pageAlloc, but that would make
the new allocator more difficult to test.

Updates #35112.

Change-Id: I0f94f563d7af2458e6d534f589d2e7dd6af26d12
Reviewed-on: https://go-review.googlesource.com/c/go/+/195698
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 19:14:38 +00:00
Michael Anthony Knyszek
73317080e1 runtime: add scavenging code for new page allocator
This change adds a scavenger for the new page allocator along with
tests. The scavenger walks over the heap backwards once per GC, looking
for memory to scavenge. It walks across the heap without any lock held,
searching optimistically. If it finds what appears to be a scavenging
candidate it acquires the heap lock and attempts to verify it. Upon
verification it then scavenges.

Notably, unlike the old scavenger, it doesn't show any preference for
huge pages and instead follows a more strict last-page-first policy.

Updates #35112.

Change-Id: I0621ef73c999a471843eab2d1307ae5679dd18d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/195697
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 19:14:27 +00:00
Michael Anthony Knyszek
39e8cb0faa runtime: add new page allocator core
This change adds a new bitmap-based allocator to the runtime with tests.
It does not yet integrate the page allocator into the runtime and thus
this change is almost purely additive.

Updates #35112.

Change-Id: Ic3d024c28abee8be8797d3918116a80f901cc2bf
Reviewed-on: https://go-review.googlesource.com/c/go/+/190622
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 19:11:26 +00:00
Ian Lance Taylor
05aa4a7b74 runtime: set GODEBUG=asyncpreemptoff=1 in TestCrashDumpsAllThreads
Fixes #35356

Change-Id: I67b9e57b88d00ed98cbc3aa0aeb26b5f2d75a3f7
Reviewed-on: https://go-review.googlesource.com/c/go/+/205720
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-11-07 18:39:03 +00:00
Bryan C. Mills
ee2268c6bc cmd/go: add math/bits to runtime packages in TestNewReleaseRebuildsStalePackagesInGOPATH
This fixes a test failure introduced in CL 190620.

Updates #35112

Change-Id: I568ae85a456ccd8103563b0ce2e42b7348776a5c
Reviewed-on: https://go-review.googlesource.com/c/go/+/205877
Run-TryBot: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2019-11-07 18:06:59 +00:00
Michael Anthony Knyszek
763d3ac75c runtime: make sysReserve return page-aligned memory on js-wasm
This change ensures js-wasm returns page-aligned memory. While the lack
of alignment doesn't cause problems today, page alignment is an invariant
of sysAlloc which is documented in HACKING.md but isn't upheld by js-wasm.

Any code that calls sysAlloc directly for small structures expects a
certain alignment (e.g. debuglog, tracebufs) but this is not maintained
by js-wasm's sysAlloc.

Where sysReserve comes into play is that sysAlloc is implemented in
terms of sysReserve on js-wasm. Also, the documentation of sysReserve
says that the returned memory is "OS-aligned" which on most platforms
means page-aligned, but the "OS-alignment" on js-wasm is effectively 1,
which doesn't seem right either.

The expected impact of this change is increased memory use on wasm,
since there's no way to decommit memory, and any small structures
allocated with sysAlloc won't be packed quite as tightly. However, any
memory increase should be minimal. Most callers of sysReserve and sysAlloc
already align their request to physPageSize before the call; there
are only a few circumstances where this is not true, and they involve
allocating an amount of memory returned by unsafe.Sizeof, where it's
actually quite important that we get the alignment right.
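
A minimal sketch of the rounding this implies (alignUp and
reserveAligned are illustrative helpers, not the runtime's code;
physPageSize is the 64 KiB WebAssembly page size):

    package sketch

    const physPageSize = 64 << 10 // WebAssembly page size

    // alignUp rounds n up to a multiple of align, a power of two.
    func alignUp(n, align uintptr) uintptr { return (n + align - 1) &^ (align - 1) }

    // A sysReserve-style path rounds both the returned base and the size,
    // so small sysAlloc callers get the alignment they expect.
    func reserveAligned(end, n uintptr) (base, size uintptr) {
        return alignUp(end, physPageSize), alignUp(n, physPageSize)
    }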

Updates #35112.

Change-Id: I9ca171e507ff3bd186326ccf611b35b9ebea1bfe
Reviewed-on: https://go-review.googlesource.com/c/go/+/205277
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Richard Musiol <neelance@gmail.com>
2019-11-07 17:45:27 +00:00
Michael Anthony Knyszek
cec01395c5 runtime: add packed bitmap summaries
This change adds the concept of summaries and of summarizing a set of
pallocBits, a core concept in the new page allocator. These summaries
are really just three integers packed into a uint64. This change also
adds tests and a benchmark for generating these summaries.
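
A rough sketch of what packing three integers into a uint64 can look
like (field widths and layout here are illustrative, not the exact
runtime encoding):

    package sketch

    // summary packs three page counts into one uint64: free pages at the
    // start of a chunk, the largest free run anywhere in it, and free pages
    // at the end. 21 bits per field is used here for illustration.
    type summary uint64

    const countMask = 1<<21 - 1

    func packSummary(start, max, end uint64) summary {
        return summary(start | max<<21 | end<<42)
    }

    func (s summary) start() uint64 { return uint64(s) & countMask }
    func (s summary) max() uint64   { return uint64(s>>21) & countMask }
    func (s summary) end() uint64   { return uint64(s>>42) & countMask }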

Updates #35112.

Change-Id: I69686316086c820c792b7a54235859c2105e5fee
Reviewed-on: https://go-review.googlesource.com/c/go/+/190621
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-11-07 17:45:15 +00:00
Michael Anthony Knyszek
b3a361337c runtime: add pallocbits and tests
This change adds a per-chunk bitmap for page allocation called
pallocBits with algorithms for allocating and freeing pages out of the
bitmap. This change also adds tests for pallocBits, but does not yet
integrate it into the runtime.
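
A toy version of the bitmap operations involved, to make the idea
concrete (this is not pallocBits, which is considerably more efficient
than a linear scan):

    package sketch

    const chunkPages = 512 // pages per chunk, illustrative value

    // pageBits is a per-chunk allocation bitmap: bit i set means page i is
    // in use.
    type pageBits [chunkPages / 64]uint64

    // alloc finds npages consecutive free pages, marks them allocated, and
    // returns the index of the first one, or -1 if no such run exists.
    func (b *pageBits) alloc(npages int) int {
        run := 0
        for i := 0; i < chunkPages; i++ {
            if b[i/64]&(1<<(i%64)) != 0 {
                run = 0
                continue
            }
            if run++; run == npages {
                start := i - npages + 1
                for j := start; j <= i; j++ {
                    b[j/64] |= 1 << (j % 64)
                }
                return start
            }
        }
        return -1
    }

    // free clears npages pages starting at page i.
    func (b *pageBits) free(i, npages int) {
        for j := i; j < i+npages; j++ {
            b[j/64] &^= 1 << (j % 64)
        }
    }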

Updates #35112.

Change-Id: I479006ed9f1609c80eedfff0580d5426b064b0ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/190620
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 16:35:53 +00:00
Brian Kessler
6b1d5471b9 cmd/compile: add signed indivisibility by power of 2 rules
Commit 44343c777c (CL 173557) added rules for handling
divisibility checks by powers of 2 for signed integers, x%c == 0.
This change adds the complementary indivisibility rules, x%c != 0.
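
As a concrete instance of what these rules recognize (a toy example,
not the SSA rule syntax): for a power-of-two constant c = 1<<k,
x%c == 0 exactly when the low k bits of x are zero, and that holds for
negative x in two's complement as well, so both forms reduce to a mask
test.

    package sketch

    func divisibleBy8(x int64) bool   { return x%8 == 0 } // equivalent to x&7 == 0
    func indivisibleBy8(x int64) bool { return x%8 != 0 } // equivalent to x&7 != 0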

Fixes #34166

Change-Id: I87379e30af7aff633371acca82db2397da9b2c07
Reviewed-on: https://go-review.googlesource.com/c/go/+/194219
Run-TryBot: Brian Kessler <brian.m.kessler@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 16:30:46 +00:00
Michael Anthony Knyszek
14849f0fa5 runtime: add new page allocator constants and description
This change is the first of a series of changes which replace the
current page allocator (which is based on the contents of mgclarge.go
and some of mheap.go) with one based on free/used bitmaps.

It adds in the key constants for the page allocator as well as a comment
describing the implementation.

Updates #35112.

Change-Id: I839d3a07f46842ad379701d27aa691885afdba63
Reviewed-on: https://go-review.googlesource.com/c/go/+/190619
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 16:20:25 +00:00
Jay Conrod
0bf2eb5d4d cmd/go/internal/modfetch: switch to golang.org/x/mod/zip
zip.Create is now used to filter and translate zip files from VCS tools.

zip.Unzip is now used instead of Unzip.
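
A hedged sketch of the corresponding call sites (signatures as in
golang.org/x/mod/zip; the paths and module version below are made up,
and CreateFromDir stands in for the lower-level Create):

    package sketch

    import (
        "os"

        "golang.org/x/mod/module"
        "golang.org/x/mod/zip"
    )

    func example() error {
        m := module.Version{Path: "example.com/m", Version: "v1.0.0"}

        // Package a checked-out tree as a module zip, applying the same
        // filtering and path translation cmd/go needs.
        f, err := os.Create("m.zip")
        if err != nil {
            return err
        }
        defer f.Close()
        if err := zip.CreateFromDir(f, m, "checkout"); err != nil {
            return err
        }

        // Extract a module zip into a directory, as the module cache does.
        return zip.Unzip("extracted", m, "m.zip")
    }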

Fixes #35290

Change-Id: I4aa41b2e96bf147c09db43d1d189b8393cafb06f
Reviewed-on: https://go-review.googlesource.com/c/go/+/204917
Run-TryBot: Jay Conrod <jayconrod@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bryan C. Mills <bcmills@google.com>
2019-11-07 16:13:31 +00:00
Michael Anthony Knyszek
947f8504d9 runtime: map reserved memory as NORESERVE on solaris
This change makes it so that sysReserve, which creates a PROT_NONE
mapping, maps that memory as NORESERVE. Before this change, relatively
large PROT_NONE mappings could cause fork to fail with ENOMEM, reported
as "not enough space". Presumably this refers to swap space, since
adding this flag causes the failures to go away.

This helps unblock page allocator work, since it allows us to make large
PROT_NONE mappings on solaris safely.

Updates #35112.

Change-Id: Ic3cba310c626e93d5db0f27269e2569bb7bc393e
Reviewed-on: https://go-review.googlesource.com/c/go/+/205759
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-11-07 15:51:45 +00:00
Jay Conrod
f7d0a42121 cmd/go/internal/modfetch/zip_sum_test: long test for zip sum stability
This CL adds a new test package which downloads specific versions of
~1000 modules in direct mode and verifies that modules have the same
sums and the zip files have the same SHA-256 hashes.

This test takes a long time to run and depends heavily on external
data that may disappear. It must be enabled manually with -zipsum.

Fixes #35290

Change-Id: Ic6959e685096e8b09cea291f19d5bd0255432284
Reviewed-on: https://go-review.googlesource.com/c/go/+/204838
Reviewed-by: Bryan C. Mills <bcmills@google.com>
2019-11-07 15:10:59 +00:00
Bryan C. Mills
73d57bf80f Revert "sync: yield to the waiter when unlocking a starving mutex"
This reverts CL 200577.

Reason for revert: broke linux-arm64-packet and solaris-amd64-oraclerel builders

Fixes #35424
Updates #33747

Change-Id: I2575fd84d37995d458183caae54704f15d8b8426
Reviewed-on: https://go-review.googlesource.com/c/go/+/205817
Run-TryBot: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-11-07 15:04:03 +00:00
Russ Cox
e8f01d591f math: test portable FMA even on system with hardware FMA
This makes it a little less likely that the portable FMA will be
broken without anyone realizing it.

Change-Id: I7f7f4509b35160a9709f8b8a0e494c09ea6e410a
Reviewed-on: https://go-review.googlesource.com/c/go/+/205337
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-11-07 14:53:38 +00:00