This modifies bulkBarrierPreWrite to use the buffered write barrier
instead of the eager write barrier. This reduces the number of system
stack switches and sanity checks by a factor of the buffer size
(currently 256). This affects both typedmemmove and typedmemclr.
Since this is purely a runtime change, it applies to all arches
(unlike the pointer write barrier).
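The mechanism, modeled outside the runtime as a toy sketch (buffer size and
names are illustrative, not the runtime's actual code):

    // Toy model of the buffered barrier: enqueue pointers into a
    // fixed-size per-P buffer and do the expensive work (system stack
    // switch, sanity checks, marking) once per 256 pointers, not once
    // per pointer as the eager barrier does.
    type wbBuf struct {
        buf  [256]uintptr
        next int
    }

    func (b *wbBuf) put(p uintptr) {
        b.buf[b.next] = p
        b.next++
        if b.next == len(b.buf) {
            b.flush() // slow path; the real runtime runs this on the system stack
        }
    }

    func (b *wbBuf) flush() {
        // ... shade the enqueued pointers ...
        b.next = 0
    }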
name old time/op new time/op delta
BulkWriteBarrier-12 7.33ns ± 6% 4.46ns ± 9% -39.10% (p=0.000 n=20+19)
Updates #22460.
Change-Id: I6a686a63bbf08be02b9b97250e37163c5a90cdd8
Reviewed-on: https://go-review.googlesource.com/73832
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, typedslicecopy meticulously performs a typedmemmove on
every element of the slice. This probably used to be necessary because
we only had an individual element's type, but now we use the heap
bitmap, so we only need to know whether the type has any pointers and
how big it is. Hence, this CL rewrites typedslicecopy to simply
perform one bulk barrier and one memmove.
This also has a side-effect of eliminating two unnecessary write
barriers per slice element that were coming from updates to dstp and
srcp, which were stored in the parent stack frame. However, most of
the win comes from eliminating the loops.
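Schematically, the new typedslicecopy body reduces to something like this
(a sketch, not the literal runtime code):

    // One bulk pre-write barrier over the destination, then a single
    // memmove, instead of n typedmemmove calls.
    size := uintptr(n) * typ.size
    bulkBarrierPreWrite(uintptr(dstp), uintptr(srcp), size)
    memmove(dstp, srcp, size)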
name old time/op new time/op delta
BulkWriteBarrier-12 7.83ns ±10% 7.33ns ± 6% -6.45% (p=0.000 n=20+20)
Updates #22460.
Change-Id: Id3450e9f36cc8e0892f268319b136f0d8f5464b8
Reviewed-on: https://go-review.googlesource.com/73831
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This adds a benchmark of typedslicecopy and its bulk write barriers.
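The benchmark boils down to copying a slice of pointers in a loop, roughly
like the sketch below (the actual test may differ in sizes and setup;
assumes the standard testing package):

    func BenchmarkBulkWriteBarrier(b *testing.B) {
        // copy on a []*int goes through typedslicecopy, so every element
        // takes the bulk write barrier path.
        src := make([]*int, 1024)
        for i := range src {
            src[i] = new(int)
        }
        dst := make([]*int, len(src))
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            copy(dst, src)
        }
    }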
For #22460.
Change-Id: I439ca3b130bb22944468095f8f18b464e5bb43ca
Reviewed-on: https://go-review.googlesource.com/74051
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This implements runtime support for buffered write barriers on amd64.
The buffered write barrier has a fast path that simply enqueues
pointers in a per-P buffer. Unlike the current write barrier, this
fast path is *not* a normal Go call and does not require the compiler
to spill general-purpose registers or put arguments on the stack. When
the buffer fills up, the write barrier takes the slow path, which
spills all general purpose registers and flushes the buffer. We don't
allow safe-points or stack splits while this frame is active, so it
doesn't matter that we have no type information for the spilled
registers in this frame.
One minor complication is cgocheck=2 mode, which uses the write
barrier to detect Go pointers being written to non-Go memory. We
obviously can't buffer this, so instead we set the buffer to its
minimum size, forcing the write barrier into the slow path on every
call. For this specific case, we pass additional information as
arguments to the flush function. This also requires enabling the cgo
write barrier slightly later during runtime initialization, after Ps
(and the per-P write barrier buffers) have been initialized.
The code in this CL is not yet active. The next CL will modify the
compiler to generate calls to the new write barrier.
This reduces the average cost of the write barrier by roughly a factor
of 4, which will pay for the cost of having it enabled more of the
time after we make the GC pacer less aggressive. (Benchmarks will be
in the next CL.)
Updates #14951.
Updates #22460.
Change-Id: I396b5b0e2c5e5c4acfd761a3235fd15abadc6cb1
Reviewed-on: https://go-review.googlesource.com/73711
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently systemstack always calls its argument, even if we're already
on the system stack. Unfortunately, traceback with _TraceJump stops at
the first systemstack it sees, which often cuts off runtime stacks
early in profiles.
Fix this by performing a tail call if we're already on the system
stack. This eliminates it from the traceback entirely, so it won't
stop prematurely (or all get mushed into a single node in the profile
graph).
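In Go-like pseudocode the new behavior is roughly the following (the real
systemstack is assembly; this only sketches the control flow):

    func systemstack(fn func()) {
        gp := getg()
        if gp == gp.m.g0 || gp == gp.m.gsignal {
            // Already on the system stack: jump straight to fn (a tail
            // call in the assembly), so systemstack never shows up in
            // the traceback.
            fn()
            return
        }
        // ... otherwise switch to g0, call fn, switch back ...
    }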
Change-Id: Ibc69e8765e899f8d3806078517b8c7314da196f4
Reviewed-on: https://go-review.googlesource.com/74050
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
The upcoming CL 73212 will see through mtime modifications.
Change the underlying file too.
Change-Id: Ib23b4136a62ee87bce408b76bb0385451ae7dcd2
Reviewed-on: https://go-review.googlesource.com/74130
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This adds support for math Abs and Copysign to be intrinsics on ppc64x.
New instruction FCPSGN is added to generate fcpsgn. Some new
rules are added to improve the int<->float conversions that are
generated mainly due to the Float64bits and Float64frombits in
the math package. PPC64.rules is also modified as suggested
in the review for CL 63290.
Improvements:
benchmark old ns/op new ns/op delta
BenchmarkAbs-16 1.12 0.69 -38.39%
BenchmarkCopysign-16 1.30 0.93 -28.46%
BenchmarkNextafter32-16 9.34 8.05 -13.81%
BenchmarkFrexp-16 8.81 7.60 -13.73%
Others that used Copysign also saw smaller improvements.
I attempted to make this work using rules since that
seems to be preferred, but due to the use of Float64bits and
Float64frombits in these functions, several rules had to be added and
even then not all cases were matched. Using rules became too
complicated and seemed too fragile for these.
Updates #21390
Change-Id: Ia265da9a18355e08000818a4fba1a40e9e031995
Reviewed-on: https://go-review.googlesource.com/67130
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Keith Randall <khr@golang.org>
Errors occur in runtime test testCgoPprofPIE when the test
is built by passing -pie to the external linker with code
that was not built as PIC. This occurs on ppc64le because
non-PIC is the default, and fails only on newer distros
where the address range used for programs is high enough
to cause relocation overflow. This test should be built
with -buildmode=pie since that correctly generates PIC
with -pie.
Related issues are #21954 and #22126.
Updates #22459
Change-Id: Ib641440bc9f94ad2b97efcda14a4b482647be8f7
Reviewed-on: https://go-review.googlesource.com/73970
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This pragma is not actually honored by the compiler.
The tests implicitly relied on the inliner being unable
to inline closures with captured variables, which will
soon change.
Fixes #22208
Change-Id: I13abc9c930b9156d43ec216f8efb768952a29439
Reviewed-on: https://go-review.googlesource.com/73211
Reviewed-by: Michael Munday <mike.munday@ibm.com>
If runtime.GOROOT() and the os.Executable method for finding GOROOT
find the same directory but with different spellings, prefer the spelling
returned by runtime.GOROOT().
This avoids an inconsistency if "pwd" returns one spelling but a
different spelling is used in $PATH (and therefore in os.Executable()).
make.bash runs with GOROOT=$(cd .. && pwd); the goal is to allow
the resulting toolchain to use that default setting (unless moved)
even if the directory spelling is different in $PATH.
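A minimal sketch of the preference, assuming a helper like the one below
(the name and exact comparison are illustrative, not the actual cmd/go code):

    // preferRuntimeGOROOT returns the runtime.GOROOT() spelling when both
    // candidates name the same directory, and the executable-derived path
    // otherwise.
    func preferRuntimeGOROOT(fromRuntime, fromExecutable string) string {
        fi1, err1 := os.Stat(fromRuntime)
        fi2, err2 := os.Stat(fromExecutable)
        if err1 == nil && err2 == nil && os.SameFile(fi1, fi2) {
            return fromRuntime
        }
        return fromExecutable
    }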
Change-Id: If96b28b9e8697f4888f153a400b40bbf58a9128b
Reviewed-on: https://go-review.googlesource.com/74250
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
recordspan has two remaining write barriers from writing to the
pointer to the backing store of h.allspans. However, h.allspans is
always backed by off-heap memory, so let the compiler know this.
Unfortunately, this isn't quite as clean as most go:notinheap uses
because we can't directly name the backing store of a slice, but we
can get it done with some judicious casting.
For #22460.
Change-Id: I296f92fa41cf2cb6ae572b35749af23967533877
Reviewed-on: https://go-review.googlesource.com/73414
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently copy and append for types containing only scalars and
notinheap pointers still get compiled to have write barriers, even
though those write barriers are unnecessary. Fix these to use
HasHeapPointer instead of just Haspointer so that they elide write
barriers when possible.
This fixes the unnecessary write barrier in runtime.recordspan when it
grows the h.allspans slice. This is important because recordspan gets
called (*very* indirectly) from (*gcWork).tryGet, which is
go:nowritebarrierrec. Unfortunately, the compiler's analysis has no
hope of seeing this because it goes through the indirect call
fixalloc.first, but I saw it happen.
Change-Id: Ieba3abc555a45f573705eab780debcfe5c4f5dd1
Reviewed-on: https://go-review.googlesource.com/73413
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently (*Type).HasHeapPointer only ignores pointers to go:notinheap
types if the type itself is a pointer to a go:notinheap type. However,
if it's some other type that contains pointers where all of those
pointers are go:notinheap, it will conservatively return true. As a
result, we'll use write barriers where they aren't needed, for example
calling typedmemmove instead of just memmove on structs that contain
only go:notinheap pointers.
Fix this by making HasHeapPointer walk the whole type looking for
pointers that aren't marked go:notinheap.
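The effect, modeled on a deliberately simplified type representation (an
illustration only, not the compiler's types.Type API):

    // A type has a heap pointer only if it contains a pointer whose
    // pointee is not marked go:notinheap; the walk must recurse into
    // structs and arrays rather than giving up at the first pointer.
    type Type struct {
        IsPtr     bool    // pointer type?
        NotInHeap bool    // pointee marked go:notinheap?
        Elems     []*Type // struct fields, array element, etc.
    }

    func HasHeapPointer(t *Type) bool {
        if t.IsPtr {
            return !t.NotInHeap
        }
        for _, e := range t.Elems {
            if HasHeapPointer(e) {
                return true
            }
        }
        return false
    }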
Change-Id: Ib8c6abf6f7a20f34969d1d402c5498e0b990be59
Reviewed-on: https://go-review.googlesource.com/73412
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Most write barrier calls are inserted by SSA, but copy and append are
lowered to runtime.typedslicecopy during walk. Fix these to set
Func.WBPos and emit the "write barrier" warning, as done for the write
barriers inserted by SSA. As part of this, we refactor setting WBPos
and emitting this warning into the frontend so it can be shared by
both walk and SSA.
Change-Id: I5fe9997d9bdb55e03e01dd58aee28908c35f606b
Reviewed-on: https://go-review.googlesource.com/73411
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The crypto.Signer interface takes pre-hashed messages for ECDSA and RSA,
but the argument in the implementations was called “msg”, not “digest”,
which is confusing.
This change renames them to help clarify the intended use.
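For reference, a caller is expected to hash first and pass the digest,
along the lines of this sketch (signer is any crypto.Signer, message a
[]byte; uses crypto, crypto/rand, and crypto/sha256):

    // Hash the message; ECDSA and RSA Signer implementations expect the
    // digest of the message, not the message itself.
    digest := sha256.Sum256(message)
    signature, err := signer.Sign(rand.Reader, digest[:], crypto.SHA256)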
Change-Id: Ie2fb8753ca5280e493810d211c7c66223f94af88
Reviewed-on: https://go-review.googlesource.com/70950
Reviewed-by: Filippo Valsorda <hi@filippo.io>
The current go:nowritebarrierrec checker has two problems that limit
its coverage:
1. It doesn't understand that systemstack calls its argument, which
means there are several cases where we fail to detect prohibited write
barriers.
2. It only observes calls in the AST, so calls constructed during
lowering by SSA aren't followed.
This CL completely rewrites this checker to address these issues.
The current checker runs entirely after walk and uses visitBottomUp,
which introduces several problems for checking across systemstack.
First, visitBottomUp itself doesn't understand systemstack calls, so
the callee may be ordered after the caller, causing the checker to
fail to propagate constraints. Second, many systemstack calls are
passed a closure, which is quite difficult to resolve back to the
function definition after transformclosure and walk have run. Third,
visitBottomUp works exclusively on the AST, so it can't observe calls
created by SSA.
To address these problems, this commit splits the check into two
phases and rewrites it to use a call graph generated during SSA
lowering. The first phase runs before transformclosure/walk and simply
records systemstack arguments when they're easy to get. Then, it
modifies genssa to record static call edges at the point where we're
lowering to Progs (which is the latest point at which position
information is conveniently available). Finally, the second phase runs
after all functions have been lowered and uses a direct BFS walk of
the call graph (combining systemstack calls with static calls) to find
prohibited write barriers and construct nice error messages.
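The second phase is essentially a breadth-first walk of that graph, along
the lines of this simplified sketch (data structures reduced to maps; the
real checker also tracks call positions for the error messages):

    // check reports every function reachable from a go:nowritebarrierrec
    // root, via static calls and systemstack edges alike, that contains a
    // write barrier.
    func check(calls map[string][]string, hasWB map[string]bool, roots []string) []string {
        var bad []string
        seen := make(map[string]bool)
        queue := append([]string(nil), roots...)
        for len(queue) > 0 {
            fn := queue[0]
            queue = queue[1:]
            if seen[fn] {
                continue
            }
            seen[fn] = true
            if hasWB[fn] {
                bad = append(bad, fn)
            }
            queue = append(queue, calls[fn]...)
        }
        return bad
    }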
Fixes #22384.
For #22460.
Change-Id: I39668f7f2366ab3c1ab1a71eaf25484d25349540
Reviewed-on: https://go-review.googlesource.com/72773
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
We're about to start tracking nowritebarrierrec through systemstack
calls, which detects that we're calling markroot (which has write
barriers) from gchelper, which is called from the scheduler during STW
apparently without a P.
But it turns out that func helpgc, which wakes up blocked Ms to run
gchelper, installs a P for gchelper to use. This means there *is* a P
when gchelper runs, so it is allowed to have write barriers. Tell the
compiler this by marking gchelper go:yeswritebarrierrec. Also,
document the call to gchelper so I don't have to spend another half a
day puzzling over how on earth this could possibly work before
discovering the spooky action-at-a-distance in helpgc.
Updates #22384.
For #22460.
Change-Id: I7394c9b4871745575f87a2d4fbbc5b8e54d669f7
Reviewed-on: https://go-review.googlesource.com/72772
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
We're about to start tracking nowritebarrierrec through systemstack
calls, which will reveal write barriers in persistentalloc prohibited
by various callers.
The pointers manipulated by persistentalloc are always to off-heap
memory, so this removes these write barriers statically by introducing
a new go:notinheap type to represent generic off-heap memory.
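The new type amounts to something like the following sketch (details of
the actual runtime declaration may differ; uses unsafe):

    // notInHeap is a dummy go:notinheap type; pointers to it describe
    // off-heap memory, so writes of such pointers need no write barrier.
    //
    //go:notinheap
    type notInHeap struct{}

    func (p *notInHeap) add(bytes uintptr) *notInHeap {
        return (*notInHeap)(unsafe.Pointer(uintptr(unsafe.Pointer(p)) + bytes))
    }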
Updates #22384.
For #22460.
Change-Id: Id449d9ebf145b14d55476a833e7f076b0d261d57
Reviewed-on: https://go-review.googlesource.com/72771
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
We're about to start tracking nowritebarrierrec through systemstack
calls, which will reveal write barriers in startpanic_m prohibited by
various callers.
We actually can allow write barriers here because the write barrier is
a no-op when we're panicking. Let the compiler know.
Updates #22384.
For #22460.
Change-Id: Ifb3a38d3dd9a4125c278c3680f8648f987a5b0b8
Reviewed-on: https://go-review.googlesource.com/72770
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently most of these are marked go:nowritebarrier as a hint, but
it's actually important that these not invoke write barriers
recursively. The danger is that some gcWork method would invoke the
write barrier while the gcWork is in an inconsistent state and that
the write barrier would in turn invoke some other gcWork method, which
would crash or permanently corrupt the gcWork. Simply marking the
write barrier itself as go:nowritebarrierrec isn't sufficient to
prevent this if the write barrier doesn't use the outer method.
Thankfully, this doesn't cause any build failures, so we were getting
this right. :)
For #22460.
Change-Id: I35a7292a584200eb35a49507cd3fe359ba2206f6
Reviewed-on: https://go-review.googlesource.com/72554
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, newstack and gogo have write barriers for maintaining the
context register saved in g.sched.ctxt. This is troublesome, because
newstack can be called from go:nowritebarrierrec places that can't
allow write barriers. It happens to be benign because g.sched.ctxt
will always be nil on entry to newstack *and* it so happens the
incoming ctxt will also always be nil in these contexts (I
think/hope), but this is playing with fire. It's also desirable to
mark newstack go:nowritebarrierrec to prevent any other, non-benign
write barriers from creeping in, but we can't do that right now
because of this one write barrier.
Fix all of this by observing that g.sched.ctxt is really just a saved
live pointer register. Hence, we can shade it when we scan g's stack
and otherwise move it back and forth between the actual context
register and g.sched.ctxt without write barriers. This means we can
save it in morestack along with all of the other g.sched, eliminate
the save from newstack along with its troublesome write barrier, and
eliminate the shenanigans in gogo to invoke the write barrier when
restoring it.
Once we've done all of this, we can mark newstack
go:nowritebarrierrec.
Fixes #22385.
For #22460.
Change-Id: I43c24958e3f6785b53c1350e1e83c2844e0d1522
Reviewed-on: https://go-review.googlesource.com/72553
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
In case of a failed/cancelled build, src/cmd/dist/dist might be left in
place.
Change-Id: Id81b5d663476a880101a2eed54fa051c40b0b0bc
Reviewed-on: https://go-review.googlesource.com/74150
Reviewed-by: Ian Lance Taylor <iant@golang.org>
On linux/amd64, for now.
Updates #16706
Change-Id: Ib8c89b6edc73fb88042c06873ff815d387491504
Reviewed-on: https://go-review.googlesource.com/69117
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This is ugly but needed on the builders, because they do not set
PWD/GOROOT consistently, and the new content-based staleness
understands that the setting of GOROOT influences the content in
the linker outputs.
Change-Id: I0606f2c70b719619b188864ad3ae1b34432140cb
Reviewed-on: https://go-review.googlesource.com/74070
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Header.WriteSubset uses a sync.Pool but wouldn't Put the sorter back in
the pool if there was an error writing to the io.Writer.
I'm not really sure why the sorter is returned to begin with. The
comment says "for possible return to headerSorterCache".
This also doesn't address potential panics that might occur, but the
overhead of doing the Put in a defer would likely be too great.
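The fixed pattern looks roughly like this (a sketch; writeKV is a
hypothetical helper standing in for the actual header-writing code):

    // Return the sorter to the pool on every exit path, including the
    // error path, instead of only after all writes succeed.
    sorter := headerSorterPool.Get().(*headerSorter)
    for _, kv := range sorter.kvs {
        if err := writeKV(w, kv); err != nil {
            headerSorterPool.Put(sorter) // this Put was previously skipped
            return err
        }
    }
    headerSorterPool.Put(sorter)
    return nil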
Change-Id: If3c45a4c3e11f6ec65d187e25b63455b0142d4e3
Reviewed-on: https://go-review.googlesource.com/73910
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Tom Bergan <tombergan@google.com>
New relocation flavor R_DWARFFILEREF, to be applied to DWARF attribute
values that correspond to file references (ex: DW_AT_decl_file,
DW_AT_call_file). The LSym for this relocation is the file itself; the
linker replaces the relocation target with the index of the specified
file in the line table's file section.
Note: for testing purposes this patch changes the DWARF function
subprogram DIE abbrev to include DW_AT_decl_file (allowed by DWARF
but not especially useful) so as to have a way to test this
functionality. This attribute will be removed once there are other
file reference attributes (coming as part of inlining support).
Change-Id: Icf676beb60fcc33f06d78e747ef717532daaa3ba
Reviewed-on: https://go-review.googlesource.com/73330
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The compiler depends on the way heap and sort break ties
in some cases. Instead of trying to find them all, bundle
those packages into the bootstrap compiler builds.
The overall goal is that Go1.4 building cmd/compile during the
bootstrap process produces a semantically equivalent compiler
to cmd/compile compiling itself. After this CL, that property is true,
at least for the compiler compiling itself and the other tools.
A test for this property will be in CL 73212.
Change-Id: Icc1ba7cbe828f5673e8198ebacb18c7c01f3a735
Reviewed-on: https://go-review.googlesource.com/73952
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This will let us build the latest sort when bootstrapping the compiler.
The compiler depends on the precise tie-breaks used by sort in
some cases, and it's easier to bring sort along than require checking
every sort call ever added to the compiler.
Change-Id: Idc622f89aedbb40d848708c76650fc28779d0c3c
Reviewed-on: https://go-review.googlesource.com/73951
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Don't pass -gdwarf-2 to the external linker when external linkage is
requested. The Go compiler is now emitting DWARF version 4, so this
doesn't seem needed any more.
Fixes #22455
Change-Id: Ic4122c55e946619a266430f2d26f06d6803dd232
Reviewed-on: https://go-review.googlesource.com/73672
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
env.MkEnv was computing the full gcc command line to report as
$GOGCCFLAGS in "go env" output, which meant running gcc (or clang)
multiple times to discern which flags are available.
We also set $GOGCCFLAGS in the environment, but nothing actually uses
that as far as I can tell - it was always intended only for debugging.
Move GOGCCFLAGS to env.ExtraEnvVars, which is displayed in "go env"
output but not set in child processes and not computed nearly as
often.
The effect is that trivial commands like "go help" or "go env GOARCH"
or "go tool -n compile" now run in about 0.01s instead of 0.1s,
because they no longer run gcc 4 times each.
go test -short cmd/go drops from 81s to 44s (and needs more trimming).
The $GOROOT/test suite drops from 92s to 33s, because the number of
gcc invocations drops from 13,336 to 0.
Overall, all.bash drops from 5m53s to 4m07s wall time.
Change-Id: Ia85abc89e1e2bb126b933aff3bf7c5f6c0984cd5
Reviewed-on: https://go-review.googlesource.com/73850
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Directly return error instead of assigning to err and then returning.
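That is, a change of this shape (f is just a placeholder):

    // before
    err := f()
    return err

    // after
    return f()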
Change-Id: Ie5c466cac70cc6d52ee72ebba3e497e0da8a5797
Reviewed-on: https://go-review.googlesource.com/73531
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Function sqDiff is called multiple times in the hot path (x, y loops) of
drawPaletted from the image/draw package; the number of sqDiff calls is
between 4×width×height and 4×width×height×len(palette) for each
drawPaletted call.
Simplify this function by removing the argument comparison and relying
instead on signed-to-unsigned integer conversion rules and the properties
of unsigned integer operations guaranteed by the spec:
> For unsigned integer values, the operations +, -, *, and << are
> computed modulo 2^n, where n is the bit width of the unsigned integer's
> type. Loosely speaking, these unsigned integer operations discard high
> bits upon overflow, and programs may rely on ``wrap around''.
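With that, sqDiff reduces to roughly the following (a sketch consistent
with the description above; x and y are 16-bit color components stored in
int32 values):

    // sqDiff returns the squared difference of x and y, shifted right by 2
    // so that summing four of them still fits in the int32 range. The
    // subtraction is done on uint32 values and relies on the guaranteed
    // wraparound, so no explicit x >= y comparison is needed.
    func sqDiff(x, y int32) uint32 {
        d := uint32(x) - uint32(y)
        return (d * d) >> 2
    }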
An image/gif package benchmark that depends on the updated code shows
throughput improvements:
name old time/op new time/op delta
QuantizedEncode-4 788ms ± 2% 468ms ± 9% -40.58% (p=0.000 n=9+10)
name old speed new speed delta
QuantizedEncode-4 1.56MB/s ± 2% 2.63MB/s ± 8% +68.47% (p=0.000 n=9+10)
Closes #22375.
Change-Id: Ic9a540e39ceb21e7741d308af1cfbe61b4ac347b
Reviewed-on: https://go-review.googlesource.com/72373
Reviewed-by: Nigel Tao <nigeltao@golang.org>
1. Move AXXX constants (A-enumeration) from "a.out.go" to "aenum.go"
2. Move VEX-encoded optabs from "asm6.go" to "vex_optabs.go"
Also run "go generate" over aenum.go. This explains diff in "anames.go".
Initialization of opindex is split into 2 loops:
one for `vexOptab`, second for `optab`.
Rationale:
when VEX instructions are generated with current structure,
asm6.go is modified, which can lead to merge conflicts and
larger diffs than desired. Same for a.out.go.
This change makes x86avxgen usage possible:
https://go-review.googlesource.com/c/arch/+/66972
Change-Id: Id9eefcf5ccf0a89440e5d01bcb80926a8163b41d
Reviewed-on: https://go-review.googlesource.com/70630
Run-TryBot: Iskander Sharipov <iskander.sharipov@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
TestParallelRWMutexReaders has a non-preemptible loop in it that can
deadlock if GC triggers. "Fix" it like we've fixed similar tests.
Updates #10958.
Change-Id: I13618f522f5ef0c864e7171ad2f655edececacd7
Reviewed-on: https://go-review.googlesource.com/73710
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This picks up a few changes and should stop the macOS crashes.
Fixes #22456.
Change-Id: I7e0aae119a5564fcfaa16eeab7422bdd5ff0497b
Reviewed-on: https://go-review.googlesource.com/73691
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change adds a square root method to the big.Float type, with
signature
(z *Float) Sqrt(x *Float) *Float
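Example use (a usage sketch only; uses fmt and math/big):

    // Compute an approximation of the square root of 2 at 200 bits of
    // precision.
    two := new(big.Float).SetPrec(200).SetInt64(2)
    z := new(big.Float).SetPrec(200).Sqrt(two)
    fmt.Println(z)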
Fixes #20460
Change-Id: I050aaed0615fe0894e11c800744600648343c223
Reviewed-on: https://go-review.googlesource.com/67830
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Both the linker and the plugin package were inconsistent
about when they applied the path encoding defined in
objabi.PathToPrefix. As a result, only some symbols from
a package path that required encoding were being found.
So always encode the path.
Fixes #22295
Change-Id: Ife86c79ca20b2e9307008ed83885e193d32b7dc4
Reviewed-on: https://go-review.googlesource.com/72390
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
...because that's an illegal addressing mode.
I double-checked handling of this code, and 387 is the only
place where this check is missing.
Fixes #22429
Change-Id: I2284fe729ea86251c6af2f04076ddf7a5e66367c
Reviewed-on: https://go-review.googlesource.com/73551
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
drawPaletted has to discover R,G,B,A color values of each source image
pixel in a given rectangle. Doing that by calling the image.Image.At()
method, which returns a color.Color interface value, is quite taxing
allocation-wise since interface values go through the heap. Introduce
special cases for some
concrete source types by fetching color values using type-specific
methods.
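In sketch form, the inner loop gains a type switch like the following (the
exact set of special-cased types in the patch may differ):

    // Fetch 16-bit premultiplied R, G, B, A without boxing a color.Color.
    var sr, sg, sb, sa uint32
    switch img := src.(type) {
    case *image.RGBA:
        c := img.RGBAAt(sx, sy)
        sr, sg, sb, sa = uint32(c.R)*0x101, uint32(c.G)*0x101, uint32(c.B)*0x101, uint32(c.A)*0x101
    default:
        sr, sg, sb, sa = src.At(sx, sy).RGBA() // generic path; boxes the color in an interface
    }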
name old time/op new time/op delta
Paletted-4 7.62ms ± 4% 3.72ms ± 3% -51.20% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
Paletted-4 480kB ± 0% 0kB ± 0% -99.99% (p=0.000 n=4+5)
name old allocs/op new allocs/op delta
Paletted-4 120k ± 0% 0k ± 0% -100.00% (p=0.008 n=5+5)
Updates #15759.
Change-Id: I0ce1770ff600ac80599541aaad4c2c826855c8fb
Reviewed-on: https://go-review.googlesource.com/72370
Reviewed-by: Nigel Tao <nigeltao@golang.org>