No point in computing this info on startup.
Compute it at build time.
This lets us spend more time computing & checking the size classes.
Improve the div magic for rounding to the start of an object.
We can now use 32-bit multiplies & shifts, which should help
32-bit platforms.
The static data is <1KB.
The actual size classes are not changed by this CL.
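To make the arithmetic concrete, here is a small, self-contained sketch
(not the runtime's actual mksizeclasses.go code; names are illustrative)
of deriving a multiply-and-shift pair for a fixed divisor and verifying
it exhaustively — the kind of checking that build-time generation makes
affordable:

	package main

	import "fmt"

	// computeDivMagic searches for (m, k) such that n/size == n*m>>k
	// for all n <= max: pick a shift k, derive m = ceil(2^k/size),
	// then verify the pair exhaustively over the range of interest.
	func computeDivMagic(size, max uint32) (m, k uint32, ok bool) {
		for k = 0; k <= 32; k++ {
			m64 := ((uint64(1) << k) + uint64(size) - 1) / uint64(size)
			if m64 >= 1<<32 {
				continue // multiplier must fit in 32 bits
			}
			m = uint32(m64)
			good := true
			for n := uint32(0); n <= max; n++ {
				if uint64(n)*uint64(m)>>k != uint64(n/size) {
					good = false
					break
				}
			}
			if good {
				return m, k, true
			}
		}
		return 0, 0, false
	}

	func main() {
		if m, k, ok := computeDivMagic(48, 1<<15); ok {
			fmt.Printf("n/48 == n*%d>>%d for all n <= %d\n", m, k, 1<<15)
		}
	}
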
Change-Id: I6450cec7d1b2b4ad31fd3f945f504ed2ec6570e7
Reviewed-on: https://go-review.googlesource.com/32219
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This fixes systems for which ccache is the default compiler.
Also remove a couple of temporary files created by TestImportMain.
Fixes #17668.
Change-Id: I1edefdcec5f417be0533c146253c35ff4928c1c0
Reviewed-on: https://go-review.googlesource.com/32328
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Provides redirect support for the 307 and 308 server statuses.
Provides redirect support for the DELETE method.
Updates old tests that assumed all redirects were processed the
way 301, 302, and 303 are.
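A minimal sketch of the new behavior (using net/http/httptest; written
against a current toolchain): a POST answered with a 307 is re-sent to
the redirect target with its method and body preserved:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"net/http/httptest"
		"strings"
	)

	func main() {
		mux := http.NewServeMux()
		mux.HandleFunc("/old", func(w http.ResponseWriter, r *http.Request) {
			// 307 Temporary Redirect: the method must not change.
			http.Redirect(w, r, "/new", http.StatusTemporaryRedirect)
		})
		mux.HandleFunc("/new", func(w http.ResponseWriter, r *http.Request) {
			body, _ := io.ReadAll(r.Body)
			fmt.Fprintf(w, "%s %s", r.Method, body)
		})
		srv := httptest.NewServer(mux)
		defer srv.Close()

		// The body can be replayed on redirect because NewRequest
		// sets Request.GetBody for a *strings.Reader.
		resp, err := http.Post(srv.URL+"/old", "text/plain", strings.NewReader("hi"))
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		out, _ := io.ReadAll(resp.Body)
		fmt.Println(string(out)) // POST hi
	}
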
Fixes #9348
Fixes #10767
Fixes #13994
Change-Id: Iffa8dbe0ff28a1afa8da59869290ec805b1dd2c4
Reviewed-on: https://go-review.googlesource.com/29852
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The GZIP format records the ModTime as a uint32 counting seconds since
the Unix epoch. The zero value is explicitly defined in section 2.3.1
as meaning no timestamp is available.
Currently, the Writer always encodes the ModTime even if it is the zero
time.Time value, which causes the Writer to try to encode the value
-62135596800 into the uint32 MTIME field. This causes an overflow and
results in our GZIP files having MTIME fields indicating a date of 2042-07-13.
We alter the Writer to only encode ModTime if the value does not underflow
the MTIME field (i.e., it is newer than the Unix epoch). We do not attempt
to fix what happens when the timestamp overflows in the year 2106.
We alter the Reader to only decode ModTime if the value is non-zero.
There is no risk of overflowing time.Time when decoding.
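A small runnable sketch of the resulting round trip:

	package main

	import (
		"bytes"
		"compress/gzip"
		"fmt"
	)

	func main() {
		var buf bytes.Buffer
		zw := gzip.NewWriter(&buf)
		// Header.ModTime is left as the zero time.Time, so the Writer
		// now emits MTIME=0 ("no timestamp") instead of a wrapped value.
		zw.Write([]byte("hello"))
		zw.Close()

		zr, err := gzip.NewReader(bytes.NewReader(buf.Bytes()))
		if err != nil {
			panic(err)
		}
		fmt.Println(zr.ModTime.IsZero()) // true: MTIME=0 decodes to the zero time
	}
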
Fixes #17663
Change-Id: Ie1b65770c6342cd7b14aeebe10e5a49e6c9eb730
Reviewed-on: https://go-review.googlesource.com/32325
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
After the final slash, dots are %-escaped when constructing a symbol name,
so that in the actual symbol table, the import path githost.com/my.git
becomes githost.com/my%2egit. In this case, -X githost.com/my.git.Value=foo
needs to set githost.com/my%2egit.Value. This is a detail of the object format
and not something users should know or depend on, so apply the escaping
as needed.
People who have already run across this, figured it out, and started
using the escaped forms with -X will find those forms not working anymore.
That is, -X githost.com/my%2egit.Value=foo is the Go 1.7 workaround but
will stop working in Go 1.8 once this proper fix is in place.
People who need to keep scripts working with older and newer versions of Go
can safely pass both forms, and one will be ignored:
-X githost.com/my%2egit.Value=foo -X githost.com/my.git.Value=foo
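For reference, a simplified sketch of the escaping (modeled on the
toolchain's internal path-to-symbol-prefix helper; the function name
here is illustrative): dots after the final slash, '%', '"', control
bytes, and non-ASCII bytes become %xx.

	package main

	import (
		"fmt"
		"strings"
	)

	func pathToPrefix(s string) string {
		slash := strings.LastIndex(s, "/")
		var b strings.Builder
		for i := 0; i < len(s); i++ {
			c := s[i]
			if c <= ' ' || (c == '.' && i > slash) || c == '%' || c == '"' || c >= 0x7f {
				fmt.Fprintf(&b, "%%%02x", c) // e.g. '.' -> %2e
			} else {
				b.WriteByte(c)
			}
		}
		return b.String()
	}

	func main() {
		fmt.Println(pathToPrefix("githost.com/my.git")) // githost.com/my%2egit
	}
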
Fixes #16710.
Change-Id: I0e994ccdd412a4eb8349fefce9aeb3bfc9a83cd8
Reviewed-on: https://go-review.googlesource.com/31970
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The filepath.Abs function on Windows did not call Clean as the
documentation claimed. This change not only fixes that behavior but
also adjusts TestAbs to verify that Abs calls Clean as documented.
Fixes #17210
Change-Id: I20c5f5026042fd7bd9d929ff5b17c8b2653f8afe
Reviewed-on: https://go-review.googlesource.com/32292
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Improves the error message for an unknown field in a struct literal
by naming the unknown field before the struct's body instead of after it.
* Exhibit:
Given program:

	package main

	import "time"

	func main() {
		_ = struct {
			about      string
			before     map[string]uint
			update     map[string]int
			updateTime time.Time
			expect     map[string]int
		}{
			about:   "this one",
			updates: map[string]int{"gopher": 10},
		}
	}
* Before:

	./issue17631.go:20: unknown struct { about string; before map[string]uint;
	update map[string]int; updateTime time.Time; expect map[string]int } field
	'updates' in struct literal

* After:

	./issue17631.go:20: unknown field 'updates' in struct literal of type { about string;
	before map[string]uint; update map[string]int; updateTime time.Time;
	expect map[string]int }
Fixes #17631
Change-Id: I76842616411b931b5ad7a76bd42860dfde7739f4
Reviewed-on: https://go-review.googlesource.com/32240
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
I did 'export GORACE=atexit_sleep_ms=0' in a console
and then was puzzled as to why the race tests were failing.
An existing GORACE env var may (or may not) override
the one that we set up.
Filter out GORACE as we do for other important env vars.
Change-Id: I29be86b0cbb9b5dc7f9efb15729ade86fc79b0e0
Reviewed-on: https://go-review.googlesource.com/32163
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change-Id: Iec35f9b62982da40de400397bc456149216303dc
Reviewed-on: https://go-review.googlesource.com/32297
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The docs used to imply that using == would compare Locations, but of
course it just compares Location pointers, which will have unpredictable
results depending on how the pointers are loaded.
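A short illustration of the pitfall the new wording warns about:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Two Locations loaded separately are distinct pointers even
		// though they describe the same zone, so == compares identity.
		a, _ := time.LoadLocation("America/New_York")
		b, _ := time.LoadLocation("America/New_York")
		fmt.Println(a == b)                   // typically false
		fmt.Println(a.String() == b.String()) // true: compare names instead
	}
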
Change-Id: I783c1309e476a9616a1c1c290eac713aba3b0b57
Reviewed-on: https://go-review.googlesource.com/32332
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
On solaris/amd64 the reported cycle count is sometimes negative.
Replace it with 0.
Change-Id: I364eea5ca072281245c7ab3afb0bf69adc3a8eae
Reviewed-on: https://go-review.googlesource.com/32258
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Use "have" and "want" and multiple lines like other similar error
messages. Also, fix handling of ... and multi-value function calls.
Fixes#17650.
Change-Id: I4850e79c080eac8df3b92a4accf9e470dff63c9a
Reviewed-on: https://go-review.googlesource.com/32261
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
On amd64p32, rt0_go attempts to reserve 128 bytes of scratch space on
the stack, but due to a register mixup this ends up being a no-op. Fix
this so we actually reserve the stack space.
Change-Id: I04dbfbeb44f3109528c8ec74e1136bc00d7e1faa
Reviewed-on: https://go-review.googlesource.com/32331
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
With the hybrid barrier in place, we can now disable stack rescanning
by default. This commit adds a "gcrescanstacks" GODEBUG variable that
is off by default but can be set to re-enable STW stack rescanning.
The plan is to leave this off but available in Go 1.8 for debugging
and as a fallback.
With this change, worst-case mark termination time at GOMAXPROCS=12
*not* including time spent stopping the world (which is still
unbounded) is reliably under 100 µs, with a 95%ile around 50 µs in
every benchmark I tried (the go1 benchmarks, the x/benchmarks garbage
benchmark, and the gcbench activegs and rpc benchmarks). Including
time spent stopping the world usually adds about 20 µs to total STW
time at GOMAXPROCS=12, but I've seen it add around 150 µs in these
benchmarks when a goroutine takes time to reach a safe point (see
issue #10958) or when stopping the world races with goroutine
switches. At GOMAXPROCS=1, where this isn't an issue, worst case STW
is typically 30 µs.
The go-gcbench activegs benchmark is designed to stress large numbers
of dirty stacks. This commit reduces 95%ile STW time for 500k dirty
stacks by nearly three orders of magnitude, from 150ms to 195µs.
This has little effect on the throughput of the go1 benchmarks or the
x/benchmarks benchmarks.
name         old time/op  new time/op  delta
XGarbage-12  2.31ms ± 0%  2.32ms ± 1%  +0.28%  (p=0.001 n=17+16)
XJSON-12     12.4ms ± 0%  12.4ms ± 0%  +0.41%  (p=0.000 n=18+18)
XHTTP-12     11.8µs ± 0%  11.8µs ± 1%     ~    (p=0.492 n=20+18)
It reduces the tail latency of the x/benchmarks HTTP benchmark:
name      old p50-time  new p50-time  delta
XHTTP-12    489µs ± 0%    491µs ± 1%  +0.54%  (p=0.000 n=20+18)

name      old p95-time  new p95-time  delta
XHTTP-12    957µs ± 1%    960µs ± 1%  +0.28%  (p=0.002 n=20+17)

name      old p99-time  new p99-time  delta
XHTTP-12   1.76ms ± 1%   1.64ms ± 1%  -7.20%  (p=0.000 n=20+18)
Comparing to the beginning of the hybrid barrier implementation
("runtime: parallelize STW mcache flushing") shows that the hybrid
barrier trades a small performance impact for much better STW latency,
as expected. The magnitude of the performance impact is generally
small:
name                      old time/op  new time/op  delta
BinaryTree17-12            2.37s ± 1%   2.42s ± 1%  +2.04%  (p=0.000 n=19+18)
Fannkuch11-12              2.84s ± 0%   2.72s ± 0%  -4.00%  (p=0.000 n=19+19)
FmtFprintfEmpty-12        44.2ns ± 1%  45.2ns ± 1%  +2.20%  (p=0.000 n=17+19)
FmtFprintfString-12        130ns ± 1%   134ns ± 0%  +2.94%  (p=0.000 n=18+16)
FmtFprintfInt-12           114ns ± 1%   117ns ± 0%  +3.01%  (p=0.000 n=19+15)
FmtFprintfIntInt-12        176ns ± 1%   182ns ± 0%  +3.17%  (p=0.000 n=20+15)
FmtFprintfPrefixedInt-12   186ns ± 1%   187ns ± 1%  +1.04%  (p=0.000 n=20+19)
FmtFprintfFloat-12         251ns ± 1%   250ns ± 1%  -0.74%  (p=0.000 n=17+18)
FmtManyArgs-12             746ns ± 1%   761ns ± 0%  +2.08%  (p=0.000 n=19+20)
GobDecode-12              6.57ms ± 1%  6.65ms ± 1%  +1.11%  (p=0.000 n=19+20)
GobEncode-12              5.59ms ± 1%  5.65ms ± 0%  +1.08%  (p=0.000 n=17+17)
Gzip-12                    223ms ± 1%   223ms ± 1%  -0.31%  (p=0.006 n=20+20)
Gunzip-12                 38.0ms ± 0%  37.9ms ± 1%  -0.25%  (p=0.009 n=19+20)
HTTPClientServer-12       77.5µs ± 1%  78.9µs ± 2%  +1.89%  (p=0.000 n=20+20)
JSONEncode-12             14.7ms ± 1%  14.9ms ± 0%  +0.75%  (p=0.000 n=20+20)
JSONDecode-12             53.0ms ± 1%  55.9ms ± 1%  +5.54%  (p=0.000 n=19+19)
Mandelbrot200-12          3.81ms ± 0%  3.81ms ± 1%  +0.20%  (p=0.023 n=17+19)
GoParse-12                3.17ms ± 1%  3.18ms ± 1%     ~    (p=0.057 n=20+19)
RegexpMatchEasy0_32-12    71.7ns ± 1%  70.4ns ± 1%  -1.77%  (p=0.000 n=19+20)
RegexpMatchEasy0_1K-12     946ns ± 0%   946ns ± 0%     ~    (p=0.405 n=18+18)
RegexpMatchEasy1_32-12    67.2ns ± 2%  67.3ns ± 2%     ~    (p=0.732 n=20+20)
RegexpMatchEasy1_1K-12     374ns ± 1%   378ns ± 1%  +1.14%  (p=0.000 n=18+19)
RegexpMatchMedium_32-12    107ns ± 1%   107ns ± 1%     ~    (p=0.259 n=18+20)
RegexpMatchMedium_1K-12   34.2µs ± 1%  34.5µs ± 1%  +1.03%  (p=0.000 n=18+18)
RegexpMatchHard_32-12     1.77µs ± 1%  1.79µs ± 1%  +0.73%  (p=0.000 n=19+18)
RegexpMatchHard_1K-12     53.6µs ± 1%  54.2µs ± 1%  +1.10%  (p=0.000 n=19+19)
Template-12               61.5ms ± 1%  63.9ms ± 0%  +3.96%  (p=0.000 n=18+18)
TimeParse-12               303ns ± 1%   300ns ± 1%  -1.08%  (p=0.000 n=19+20)
TimeFormat-12              318ns ± 1%   320ns ± 0%  +0.79%  (p=0.000 n=19+19)
Revcomp-12 (*)             509ms ± 3%   504ms ± 0%     ~    (p=0.967 n=7+12)
[Geo mean]                 54.3µs       54.8µs      +0.88%
(*) Revcomp is highly non-linear, so I only took samples with 2
iterations.
name         old time/op  new time/op  delta
XGarbage-12  2.25ms ± 0%  2.32ms ± 1%  +2.74%  (p=0.000 n=16+16)
XJSON-12     11.6ms ± 0%  12.4ms ± 0%  +6.81%  (p=0.000 n=18+18)
XHTTP-12     11.6µs ± 1%  11.8µs ± 1%  +1.62%  (p=0.000 n=17+18)
Updates #17503.
Updates #17099, since you can't have a rescan list bug if there's no
rescan list. I'm not marking it as fixed, since gcrescanstacks can
still be set to re-enable the rescan lists.
Change-Id: I6e926b4c2dbd4cd56721869d4f817bdbb330b851
Reviewed-on: https://go-review.googlesource.com/31766
Reviewed-by: Rick Hudson <rlh@golang.org>
This implements the unconditional version of the hybrid deletion write
barrier, which always shades both the old and new pointer. It's
unconditional for now because barriers on channel operations require
checking both the source and destination stacks and we don't have a
way to funnel this information into the write barrier at the moment.
As part of this change, we modify the typed memclr operations
introduced earlier to invoke the write barrier.
This has basically no overall effect on benchmark performance. This is
good, since it indicates that neither the extra shade nor the new bulk
clear barriers have much effect. It also has little effect on latency.
This is expected, since we haven't yet modified mark termination to
take advantage of the hybrid barrier.
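In Go-flavored pseudocode, the unconditional form implemented here is
(shade and the surrounding names are illustrative, not the runtime's
actual code):

	// writePointer performs *slot = ptr under the hybrid barrier.
	func writePointer(slot *unsafe.Pointer, ptr unsafe.Pointer) {
		shade(*slot) // deletion half: old value stays marked this cycle
		shade(ptr)   // insertion half: unconditional for now (see above)
		*slot = ptr
	}
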
Updates #17503.
Change-Id: Iebedf84af2f0e857bd5d3a2d525f760b5cf7224b
Reviewed-on: https://go-review.googlesource.com/31765
Reviewed-by: Rick Hudson <rlh@golang.org>
With the hybrid barrier, unless we're doing a STW GC or hit a very
rare race (~once per all.bash) that can start mark termination before
all of the work is drained, we don't need to drain the work queue at
all. Even draining an empty work queue is rather expensive since we
have to enter the getfull() barrier, so it's worth avoiding this.
Conveniently, it's quite easy to detect whether or not we actually
need the getfull() barrier: since the world is stopped when we enter
mark termination, everything must have flushed its work to the work
queue, so we can just check the queue. If the queue is empty and we
haven't queued up any jobs that may create more work (which should
always be the case with the hybrid barrier), we can simply have all GC
workers perform non-blocking drains.
Also conveniently, this solution is quite safe. If we do somehow screw
something up and there's work on the work queue, some worker will
still process it, it just may not happen in parallel.
This is not the "right" solution, but it's simple, expedient,
low-risk, and maintains compatibility with debug.gcrescanstacks. When
we remove the gcrescanstacks fallback in Go 1.9, we should also fix
the race that starts mark termination early, and then we can eliminate
work draining from mark termination.
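Sketched in simplified form (the flag and field names follow the
runtime of this era, but this is an illustration, not the exact change):

	// At mark termination, with the world stopped:
	if work.full == 0 && work.markrootNext >= work.markrootJobs {
		// No queued work and no root jobs that could create more:
		// every worker drains without entering the getfull() barrier.
		gcDrain(gcw, gcDrainNoBlock)
	} else {
		// Rare case (e.g. gcrescanstacks): drain with the full barrier.
		gcDrain(gcw, gcDrainBlock)
	}
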
Updates #17503.
Change-Id: I7b3cd5de6a248ab29d78c2b42aed8b7443641361
Reviewed-on: https://go-review.googlesource.com/32186
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently bulkBarrierPreWrite calls writebarrierptr_prewrite, but this
means that we check writeBarrier.needed twice and perform cgo checks
twice.
Change bulkBarrierPreWrite to call writebarrierptr_prewrite1 to skip
over these duplicate checks.
This may speed up bulkBarrierPreWrite slightly, but mostly this will
save us from running out of nosplit stack space on ppc64x in the near
future.
Updates #17503.
Change-Id: I1cea1a2207e884ab1a279c6a5e378dcdc048b63e
Reviewed-on: https://go-review.googlesource.com/31890
Reviewed-by: Rick Hudson <rlh@golang.org>
gobuf.ctxt is set to nil from many places in assembly code and these
assignments require write barriers with the hybrid barrier.
Conveniently, in most of these places ctxt should already be nil, in
which case we don't need the barrier. This commit changes these places
to assert that ctxt is already nil.
gogo is more complicated, since ctxt may not already be nil. For gogo,
we manually perform the write barrier if ctxt is not nil.
Updates #17503.
Change-Id: I9d75e27c75a1b7f8b715ad112fc5d45ffa856d30
Reviewed-on: https://go-review.googlesource.com/31764
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Currently, we perform write barriers after performing pointer writes.
At the moment, it simply doesn't matter what order this happens in, as
long as they appear atomic to GC. But both the hybrid barrier and ROC
are going to require a pre-write write barrier.
For the hybrid barrier, this is important because the barrier needs to
observe both the current value of the slot and the value that will be
written to it. (Alternatively, the caller could do the write and pass
in the old value, but it seems easier and more useful to just swap the
order of the barrier and the write.)
For ROC, this is necessary because, if the pointer write is going to
make the pointer reachable to some goroutine that it currently is not
visible to, the garbage collector must take some special action before
that pointer becomes more broadly visible.
This commit swaps pointer writes around so the write barrier occurs
before the pointer write.
The main subtlety here is bulk memory writes. Currently, these copy to
the destination first and then use the pointer bitmap of the
destination to find the copied pointers and invoke the write barrier.
This is necessary because the source may not have a pointer bitmap. To
handle these, we pass both the source and the destination to the bulk
memory barrier, which uses the pointer bitmap of the destination, but
reads the pointer values from the source.
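For the bulk case, the resulting shape is roughly (a simplified sketch;
kind flags and helper names follow the runtime of this era):

	func typedmemmove(typ *_type, dst, src unsafe.Pointer) {
		if typ.kind&kindNoPointers == 0 {
			// Pre-write barrier: walks dst's pointer bitmap but reads
			// the pointer slots from src, which may have no bitmap.
			bulkBarrierPreWrite(uintptr(dst), uintptr(src), typ.size)
		}
		memmove(dst, src, typ.size)
	}
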
Updates #17503.
Change-Id: I78ecc0c5c94ee81c29019c305b3d232069294a55
Reviewed-on: https://go-review.googlesource.com/31763
Reviewed-by: Rick Hudson <rlh@golang.org>
We reject imports of main packages, but we missed tests.
Reject them in all tests except the test of that main package.
We reject local (relative) imports from code with a
non-local import path, but again we missed tests.
Reject those too.
Fixes #14811.
Fixes #15795.
Fixes #17475.
Change-Id: I535ff26889520276a891904f54f1a85b2c40207d
Reviewed-on: https://go-review.googlesource.com/31821
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Quentin Smith <quentin@golang.org>
Materialize the float constant 0 from the integer zero register,
instead of loading it from the constant pool.
Also fix assembling FMOV from the zero register to an FP register.
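For example, for a float64 zero the compiler can now emit (a sketch in
Go arm64 assembly syntax):

	FMOVD	ZR, F0	// materialize 0.0 from the integer zero register

instead of a load from the constant pool.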
Change-Id: Ie413dd342cedebdb95ba8cfc220e23ed2a39e885
Reviewed-on: https://go-review.googlesource.com/32250
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Apparently on macOS Sierra LLDB thinks /usr/lib/dyld is mapped
at address 0, even if Go code starts at 0x1000, and it looks up
addresses from dyld, which shadows Go symbols. Move the Go binary
to a higher address to avoid the clash.
Fixes #17463. Re-enable TestLldbPython.
Change-Id: I89ca6f3ee48aa6da9862bfa0c2da91477cc93255
Reviewed-on: https://go-review.googlesource.com/32185
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Quentin Smith <quentin@golang.org>
As with dropg, save is writing a nil pointer that will generate a write
barrier with the hybrid barrier. However, in this case, ctxt should
always already be nil, so replace the write with an assertion that this
is the case.
At this point, we're ready to disable the write barrier elision
optimizations that interfere with the hybrid barrier.
Updates #17503.
Change-Id: I83208e65aa33403d442401f355b2e013ab9a50e9
Reviewed-on: https://go-review.googlesource.com/31571
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently this contains no write barriers because it's writing nil
pointers, but with the hybrid barrier, even these will produce write
barriers. However, since these are *gs and *ms, they don't need write
barriers, so we can simply eliminate them.
Updates #17503.
Change-Id: Ib188a60492c5cfb352814bf9b2bcb2941fb7d6c0
Reviewed-on: https://go-review.googlesource.com/31570
Reviewed-by: Rick Hudson <rlh@golang.org>
The hybrid barrier requires allocate-black, but there's one case where
we don't currently allocate black: the tiny allocator. If we allocate
a *new* tiny alloc block during GC, it will be allocated black, but if
we allocated the current block before GC, it won't be black, and the
further allocations from it won't mark it, which means we may free a
reachable tiny block during sweeping.
Fix this by passing over all mcaches at the beginning of mark, while
the world is still stopped, and greying their tiny blocks.
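Sketched (illustrative; the real code shades through each P's gcWork,
and greyobject takes more arguments than shown):

	// With the world stopped at the start of mark:
	for _, p := range allp {
		c := p.mcache
		if c == nil || c.tiny == 0 {
			continue
		}
		greyobject(c.tiny) // mark the in-use tiny block so it stays live
	}
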
Updates #17503.
Change-Id: I04d4df7cc2f553f8f7b1e4cb0b52e2946588111a
Reviewed-on: https://go-review.googlesource.com/31456
Reviewed-by: Rick Hudson <rlh@golang.org>
The hybrid barrier requires barriers on stack-to-stack copies if
either stack is grey. There are only two instances of this in the
runtime: channel sends and starting a goroutine. Channel sends already
use typedmemmove and hence have the necessary barriers. This commit
adds barriers for the stack-to-stack copy when starting a goroutine.
Updates #17503.
Change-Id: Ibb55e08127ca4d021ac54be61cb96732efa5df5b
Reviewed-on: https://go-review.googlesource.com/31455
Reviewed-by: Rick Hudson <rlh@golang.org>
Original change by Daria Kolistratova <daria.kolistratova@intel.com>.
Added functions with the suffix proto and supporting code from the
pprof tool to translate profiles to protobuf. Done because the profile
proto is more extensible than the legacy pprof format and is pprof's
preferred profile format. A large part was taken from the
https://github.com/google/pprof tool. Tested by hand: the result was
compared with the profile translated by the pprof tool, and the
profiles are identical.
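A minimal usage sketch of the standard API whose output the new code
translates:

	package main

	import (
		"os"
		"runtime/pprof"
	)

	func main() {
		f, err := os.Create("cpu.pprof")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if err := pprof.StartCPUProfile(f); err != nil {
			panic(err)
		}
		defer pprof.StopCPUProfile()
		// ... workload to profile ...
	}
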
Fixes #16093
Change-Id: I2751345b09a66ee2b6aa64be76cba4cd1c326aa6
Reviewed-on: https://go-review.googlesource.com/32257
Run-TryBot: Michael Matloob <matloob@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
Waiting 2ms for all the kicked-off goroutines to run and block
seems a little optimistic. No harm done by waiting for 200ms instead.
Fixes #17238.
Change-Id: I827532ea2f5f1f3ed04179f8957dd2c563946ed0
Reviewed-on: https://go-review.googlesource.com/32103
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently we initialize LR on a new stack by writing nil to it. But
this is an initializing write since the newly allocated stack is not
zeroed, so this is unsafe with the hybrid barrier. Change this to a
uintptr write to avoid a bad write barrier.
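The pattern, sketched (illustrative, not the exact diff):

	// Before: a typed pointer store into unzeroed stack memory, an
	// initializing write that the hybrid barrier could misread:
	//	*(*unsafe.Pointer)(unsafe.Pointer(sp)) = nil
	// After: an untyped word store, which takes no write barrier:
	*(*uintptr)(unsafe.Pointer(sp)) = 0
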
Updates #17503.
Change-Id: I062ac352e35df7da4644c1f2a5aaab87049d1f60
Reviewed-on: https://go-review.googlesource.com/32093
Reviewed-by: Rick Hudson <rlh@golang.org>
We reuse finalizers in finblocks, which are allocated off-heap. This
means they have to be zero-initialized before becoming visible to the
garbage collector. We actually already do this by clearing the
finalizer before returning it to the pool, but we're not careful to
enforce correct memory ordering. Fix this by manipulating the
finalizer count atomically so these writes synchronize properly with
the garbage collector.
Updates #17503.
Change-Id: I7797d31df3c656c9fe654bc6da287f66a9e2037d
Reviewed-on: https://go-review.googlesource.com/31454
Reviewed-by: Rick Hudson <rlh@golang.org>
runfinq allocates a stack frame on the heap for constructing the
finalizer function calls and reuses it for each call. However, because
the type of this frame is constantly shifting, it tells mallocgc there
are no pointers in it and it acts essentially like uninitialized
memory between uses. But runfinq uses pointer writes with write
barriers to "initialize" this memory, which is not going to be safe
with the hybrid barrier, since the hybrid barrier may see a stale
pointer left in the "uninitialized" frame.
Fix this by zero-initializing the argument values in the frame before
writing the argument pointers.
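The pattern, sketched (illustrative; the frame layout is simplified):

	// The slot may hold junk from a previous, differently-typed use.
	// Clear it with an untyped store (no barrier), then do the pointer
	// write, which is now a safe initializing write.
	*(*uintptr)(argSlot) = 0
	*(*unsafe.Pointer)(argSlot) = arg
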
Updates #17503.
Change-Id: I951c0a2be427eb9082a32d65c4410e6fdef041be
Reviewed-on: https://go-review.googlesource.com/31453
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, zeroing generates an ssa.OpZero, which never has write
barriers, even if the assignment is an OASWB. The hybrid barrier
requires write barriers on zeroing, so change OASWB to generate an
ssa.OpZeroWB when assigning the zero value, which turns into a
typedmemclr.
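The resulting typedmemclr is roughly (a simplified sketch; helper names
follow the runtime of this era):

	func typedmemclr(typ *_type, ptr unsafe.Pointer) {
		if typ.kind&kindNoPointers == 0 {
			// Shade the old pointer values before they are clobbered.
			bulkBarrierPreWrite(uintptr(ptr), 0, typ.size)
		}
		memclrNoHeapPointers(ptr, typ.size)
	}
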
Updates #17503.
Change-Id: Ib37ac5e39f578447dbd6b36a6a54117d5624784d
Reviewed-on: https://go-review.googlesource.com/31451
Reviewed-by: Cherry Zhang <cherryyz@google.com>