On 386, the code below triggered an infinite loop in growslice:

    x = make([]byte, 1<<30-1, 1<<30-1)
    x = append(x, x...)
Check for overflow when calculating the new slice capacity, and
set the new capacity to the requested capacity when an overflow is
detected, to avoid an infinite loop.
No automatic test is added: triggering the overflow check through
append would first require allocating 1GB of memory on a 32-bit
platform.
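A minimal sketch of the guarded growth, assuming a simplified version
of growslice's capacity loop (illustrative, not the actual runtime
code):

    // newcapFor computes a grown capacity, falling back to the
    // requested cap if the computation overflows a 32-bit int.
    func newcapFor(oldcap, cap int) int {
        newcap := oldcap
        if cap > newcap+newcap {
            return cap
        }
        // The 0 < newcap condition exits the loop once the addition
        // overflows and wraps negative.
        for 0 < newcap && newcap < cap {
            newcap += newcap / 4
        }
        // Overflow detected: use the requested capacity directly.
        if newcap <= 0 {
            newcap = cap
        }
        return newcap
    }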
Fixes #21441
Change-Id: Ia871cc9f88479dacf2c7044531b233f83d2fcedf
Reviewed-on: https://go-review.googlesource.com/57950
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
This slightly improves the generated code on x86 architectures,
including on many hot paths.
It is a no-op on other architectures.
Change-Id: I86336fd846bc5805a27bbec572e8c73dcbd0d567
Reviewed-on: https://go-review.googlesource.com/57411
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This is necessary when you aren't actively changing the runtime. Oops.
Also, run the tests on the builders, to avoid silent failures (#17472).
Change-Id: I1fc03790cdbddddb07026a772137a79919dcaac7
Reviewed-on: https://go-review.googlesource.com/58050
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When deleting entries from a map, only clear the key and value
if they contain pointers. And use memclrHasPointers to do so.
While we're here, specialize key clearing in mapdelete_faststr,
and fix another missed usage of add in mapdelete.
Benchmarking impeded by #21546.
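A sketch of the key-clearing path, using the runtime-internal helpers
of this era (kindNoPointers, memclrHasPointers); illustrative rather
than the exact diff:

    if t.indirectkey {
        // The slot holds a pointer to the key: just nil it out.
        *(*unsafe.Pointer)(k) = nil
    } else if t.key.kind&kindNoPointers == 0 {
        // The key itself contains pointers: clear them for the GC.
        memclrHasPointers(k, t.key.size)
    }
    // Keys without pointers are left untouched: the memory is dead
    // and clearing it would not help the GC.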
Change-Id: I3f6f924f738d6b899b722d6438e9e63f52359b84
Reviewed-on: https://go-review.googlesource.com/57630
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Move the tophash checks after the equality/length checks.
For fast32/fast64, since we've done a full equality check already,
just check whether the tophash entry is empty instead of comparing
it against a computed tophash.
This is cheaper and allows us to skip calculating tophash.
These changes are modeled on the changes in CL 57590,
which were polished based on benchmarking.
Benchmarking directly is impeded by #21546.
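For illustration, the reordered fast32 loop looks roughly like this
(simplified; bucketCnt, dataOffset, add, and the empty sentinel are
runtime-internal):

    for i := uintptr(0); i < bucketCnt; i++ {
        k := *(*uint32)(add(unsafe.Pointer(b), dataOffset+i*4))
        if k != key || b.tophash[i] == empty {
            // Full-width equality runs first; the emptiness test
            // replaces computing and comparing tophash(hash).
            continue
        }
        // entry found at slot i
    }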
Change-Id: I0e17163028e34720310d1bf8f95c5ef42d223e00
Reviewed-on: https://go-review.googlesource.com/57611
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This better matches the style of the rest of the runtime.
Change-Id: I6abb755df50eb3d9086678629c0d184177e1981f
Reviewed-on: https://go-review.googlesource.com/57610
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
During rebase of golang.org/cl/55152 the bucket argument
which was removed in golang.org/cl/56290 from makemap
was not removed from the argument list of makemap64.
This led to "pointer in unallocated span" errors on 32-bit platforms,
since the compiler only generated calls to makemap64 without the
bucket argument.
Fixes #21568
Change-Id: Ia964a3c285837cd901297f4e16e40402148f8c1c
Reviewed-on: https://go-review.googlesource.com/57990
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The intent is to allow more aggressive refactoring
in the runtime without silent performance changes.
The test would be useful for many functions.
I've seeded it with the runtime functions tophash and add;
it will grow organically (or wither!) from here.
Updates #21536 and #17566
Change-Id: Ib26d9cfd395e7a8844150224da0856add7bedc42
Reviewed-on: https://go-review.googlesource.com/57410
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Where possible, generate calls to the runtime makemap that takes an
int hint argument at compile time, instead of the variant that takes
an int64 hint.
This eliminates converting the hint argument to int64 on platforms
where int64 values do not fit into an argument of type int.
A similar optimization for makeslice was introduced in CL
golang.org/cl/27851.
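On the runtime side, makemap64 validates and narrows the hint before
delegating; roughly (a sketch of the variant pair assumed by this
change):

    func makemap64(t *maptype, hint int64, h *hmap) *hmap {
        if int64(int(hint)) != hint {
            hint = 0 // hint does not fit in int: treat as no hint
        }
        return makemap(t, int(hint), h)
    }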
386:
name old time/op new time/op delta
NewEmptyMap 53.5ns ± 5% 41.9ns ± 5% -21.56% (p=0.000 n=10+10)
NewSmallMap 182ns ± 1% 165ns ± 1% -8.92% (p=0.000 n=10+10)
Change-Id: Ibd2b4c57b36f171b173bf7a0602b3a59771e6e44
Reviewed-on: https://go-review.googlesource.com/55142
Reviewed-by: Keith Randall <khr@golang.org>
eqstring is only called for strings with equal lengths.
Instead of pushing a pointer and length for each argument string on
the stack, we can omit pushing one of the (equal) lengths.
Changing eqstring's signature to eqstring(*uint8, *uint8, int) bool
to implement the above optimization would make it very similar to the
existing memequal(*any, *any, uintptr) bool function.
Since string lengths are never negative, we can avoid that code
redundancy and use memequal instead of an eqstring with an optimized
signature.
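Conceptually, string equality is now lowered like this (a sketch
using the runtime's internal stringStruct layout and memequal; not
the literal compiler output):

    type stringStruct struct {
        str unsafe.Pointer
        len int
    }

    func strEq(s, t string) bool {
        if len(s) != len(t) {
            return false // length check stays in the caller
        }
        sp := (*stringStruct)(unsafe.Pointer(&s))
        tp := (*stringStruct)(unsafe.Pointer(&t))
        if sp.str == tp.str {
            return true // identical backing memory
        }
        return memequal(sp.str, tp.str, uintptr(len(s)))
    }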
go command binary size reduced by 4128 bytes on amd64.
name old time/op new time/op delta
CompareStringEqual 6.03ns ± 1% 5.71ns ± 1% -5.23% (p=0.000 n=19+18)
CompareStringIdentical 2.88ns ± 1% 3.22ns ± 7% +11.86% (p=0.000 n=20+20)
CompareStringSameLength 4.31ns ± 1% 4.01ns ± 1% -7.17% (p=0.000 n=19+19)
CompareStringDifferentLength 0.29ns ± 2% 0.29ns ± 2% ~ (p=1.000 n=20+20)
CompareStringBigUnaligned 64.3µs ± 2% 64.1µs ± 3% ~ (p=0.164 n=20+19)
CompareStringBig 61.9µs ± 1% 61.6µs ± 2% -0.46% (p=0.033 n=20+19)
Change-Id: Ice15f3b937c981f0d3bc8479a9ea0d10658ac8df
Reviewed-on: https://go-review.googlesource.com/53650
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
If there are no pointers, then clearing memory doesn't help GC,
and the memory is otherwise dead, so don't bother clearing it.
Change-Id: I953f4a3264939f2825e82292030eda2e835cbb97
Reviewed-on: https://go-review.googlesource.com/57350
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Because profile labels are copied from the goroutine into the tag
buffer by the signal handler, there's a carefully-crafted set of race
detector annotations to create the necessary happens-before edges
between setting a goroutine's profile label and retrieving it from the
profile tag buffer.
Given the constraints of the signal handler, we have to approximate
the true synchronization behavior. Currently, that approximation is
too weak.
Ideally, runtime_setProfLabel would perform a store-release on
&getg().labels and copying each label into the profile would perform a
load-acquire on &getg().labels. This would create the necessary
happens-before edges through each individual g.labels object.
Since we can't do this in the signal handler, we instead synchronize
on a "labelSync" global. The problem occurs with the following
sequence:
1. Goroutine 1 calls setProfLabel, which does a store-release on
labelSync.
2. Goroutine 2 calls setProfLabel, which does a store-release on
labelSync.
3. Goroutine 3 reads the profile, which does a load-acquire on
labelSync.
The problem is that the load-acquire only synchronizes with the *most
recent* store-release to labelSync, and the two store-releases don't
synchronize with each other. So, once goroutine 3 touches the label
set by goroutine 1, we report a race.
The solution is to use racereleasemerge. This is like a
read-modify-write, rather than just a store-release. Each RMW of
labelSync in runtime_setProfLabel synchronizes with the previous RMW
of labelSync, and this ultimately carries forward to the load-acquire,
so it synchronizes with *all* setProfLabel operations, not just the
most recent.
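The fix is essentially (simplified from the runtime's profile label
code):

    func runtime_setProfLabel(labels unsafe.Pointer) {
        if raceenabled {
            // Read-modify-write: synchronizes with every previous
            // setProfLabel, not only the most recent one.
            racereleasemerge(unsafe.Pointer(&labelSync))
        }
        getg().labels = labels
    }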
Change-Id: Iab58329b156122002fff12cfe64fbeacb31c9613
Reviewed-on: https://go-review.googlesource.com/56670
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Minor refactoring. This is a step towards specializing evacuate
for mapfast key types.
Change-Id: Icffe2759b7d38e5c008d03941918d5a912ce62f6
Reviewed-on: https://go-review.googlesource.com/56933
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Since oldbucket == h.nevacuate, we can just increment h.nevacuate here.
This removes oldbucket from scope, which will be useful shortly.
Change-Id: I70f81ec3995f17845ebf5d77ccd20ea4338f23e6
Reviewed-on: https://go-review.googlesource.com/56932
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Avelino <t@avelino.xxx>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The number of times that alg has to be spilled
and restored makes it better to just reload it.
Change-Id: I2674752a889ecad59dab54da1d68fad03db1ca85
Reviewed-on: https://go-review.googlesource.com/56931
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The new code is not quite equivalent to the old,
in that if newbit was very large it might have altered the new tophash.
The old behavior is unnecessary and probably undesirable.
Change-Id: I7fb3222520cb61081a857adcddfbb9078ead7122
Reviewed-on: https://go-review.googlesource.com/56930
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
After the key and value arrays, we have an overflow pointer.
So there's no way a past-the-end key or value pointer could point
past the end of the containing bucket.
So we don't need this additional protection.
Update #21459
Change-Id: I7726140033b06b187f7a7d566b3af8cdcaeab0b0
Reviewed-on: https://go-review.googlesource.com/56772
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Avelino <t@avelino.xxx>
Found with mvdan.cc/unindent. Prioritized the ones with the biggest wins
for now.
Change-Id: I2b032e45cdd559fc9ed5b1ee4c4de42c4c92e07b
Reviewed-on: https://go-review.googlesource.com/56470
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Use mallocgc instead of newarray to save some overhead, since
makechan already checks for _MaxMem constraints.
Flatten the if/else construct that determines whether buf and the
hchan struct should be allocated in one mallocgc call and where buf
should point.
Use maxSliceCap to avoid divisions, similar to makeslice.
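The resulting allocation logic, roughly (simplified from makechan;
mallocgc, hchanSize, add, and kindNoPointers are runtime-internal):

    switch {
    case size == 0 || elem.size == 0:
        // Unbuffered or zero-sized elements: buf is unused, so
        // allocate only the hchan.
        c = (*hchan)(mallocgc(hchanSize, nil, true))
    case elem.kind&kindNoPointers != 0:
        // Elements contain no pointers: allocate hchan and buf
        // together in a single call.
        c = (*hchan)(mallocgc(hchanSize+uintptr(size)*elem.size, nil, true))
        c.buf = add(unsafe.Pointer(c), hchanSize)
    default:
        // Elements contain pointers: buf gets its own allocation,
        // typed so the GC scans it.
        c = new(hchan)
        c.buf = mallocgc(uintptr(size)*elem.size, elem, true)
    }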
name old time/op new time/op delta
MakeChan/Byte 82.0ns ± 8% 81.4ns ± 7% ~ (p=0.643 n=10+10)
MakeChan/Int 97.9ns ± 2% 96.6ns ± 2% -1.40% (p=0.009 n=10+10)
MakeChan/Ptr 128ns ± 3% 120ns ± 1% -6.63% (p=0.000 n=10+10)
MakeChan/Struct/0 66.7ns ± 4% 66.4ns ± 2% ~ (p=0.697 n=10+10)
MakeChan/Struct/32 136ns ± 1% 130ns ± 0% -4.42% (p=0.000 n=10+10)
MakeChan/Struct/40 150ns ± 1% 150ns ± 1% ~ (p=0.725 n=10+10)
Change-Id: Ibb5675d0843a072aae2bfa58ecd39cf4cd926533
Reviewed-on: https://go-review.googlesource.com/55132
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Stack allocated hmap structs are explicitly zeroed before being
passed by pointer to makemap.
Heap allocated hmap structs are created with newobject
which also zeroes on allocation.
Therefore, setting the hmap fields to 0 or nil is redundant
since they will have been zeroed when hmap was allocated.
Change-Id: I5fc55b75e9dc5ba69f5e3588d6c746f53b45ba66
Reviewed-on: https://go-review.googlesource.com/56291
Reviewed-by: Keith Randall <khr@golang.org>
We use a call to strncpy to work around a TSAN bug (wherein TSAN only
delivers asynchronous signals when the thread receiving the signal
calls a libc function). Unfortunately, GCC 7 inlines the call,
avoiding the TSAN libc trap entirely.
Per Ian's suggestion, use global variables as strncpy arguments: that
way, the compiler can't make any assumptions about the concrete values
and can't inline the call away.
Fixes #21196
Change-Id: Ie95f1feaf9af1a8056f924f49c29cfc8515385d7
Reviewed-on: https://go-review.googlesource.com/55872
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The code was adding race.Errors to t.raceErrors before checking
Failed, but Failed was using t.raceErrors+race.Errors. We don't want
to change Failed, since that would affect tests themselves, so modify
the harness to not unnecessarily change t.raceErrors.
Updates #19851
Fixes #21338
Change-Id: I7bfdf281f90e045146c92444f1370d55c45221d4
Reviewed-on: https://go-review.googlesource.com/54050
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Run-TryBot: Joe Tsai <thebrokentoaster@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Rather than emitting spaces and newlines for println
as we walk the expression, construct it all up front.
This enables further optimizations.
This requires using printstring instead of print in
the implementation of printsp and printnl,
on pain of infinite recursion.
That's ok; it's more efficient anyway, and just as simple.
While we're here, do it for other print routines as well.
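Concretely, the two helpers reduce to (as described above):

    func printsp() {
        printstring(" ")
    }

    func printnl() {
        printstring("\n")
    }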
Change-Id: I61d7df143810e00710c4d4d948d904007a7fd190
Reviewed-on: https://go-review.googlesource.com/55097
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Writing to selectdone on the stack of another goroutine meant a
pretty subtle dance between the select code and the stack copying
code. Instead move the selectdone variable into the g struct.
Change-Id: Id246aaf18077c625adef7ca2d62794afef1bdd1b
Reviewed-on: https://go-review.googlesource.com/53390
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
I noticed that we don't set an itab's function pointers at compile
time. Instead, we currently do it at executable startup.
Set the function pointers at compile time instead. This shortens
startup time. It has no effect on normal binary size. Object files
will have more relocations, but that isn't a big deal.
For PIE there are additional pointers that will need to be adjusted at
load time. There are already other pointers in an itab that need to be
adjusted, so the cache line will already be paged in. There might be
some binary size overhead to mark these pointers. The "go test -c
-buildmode=pie net/http" binary is 0.18% bigger.
Update #20505
Change-Id: I267c82489915b509ff66e512fc7319b2dd79b8f7
Reviewed-on: https://go-review.googlesource.com/44341
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Currently, GC captures the start-the-world time stamp after
startTheWorldWithSema returns. This is problematic for two reasons:
1. It's possible to get preempted between startTheWorldWithSema
starting the world and calling nanotime.
2. startTheWorldWithSema does several clean-up tasks after the world
is up and running that on rare occasions can take upwards of 10ms.
Since the runtime uses the start-the-world time stamp to compute the
STW duration, both of these can significantly inflate the reported STW
duration.
Fix this by having startTheWorldWithSema itself call nanotime once the
world is started.
Change-Id: I114630234fb73c9dabae50a2ef1884661f2459db
Reviewed-on: https://go-review.googlesource.com/55410
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
The struct32 and struct40 structs are already declared; remove the
duplicates to make the runtime tests build.
Change-Id: I3814f2b850dcb15c4002a3aa22e2a9326e5a5e53
Reviewed-on: https://go-review.googlesource.com/55614
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Instead of checking whether the number of elements will fit into
memory, check whether the memory size of the slice's backing memory
is larger than the memory limit.
This avoids a division or a maxElems lookup.
With et.size > 0:
uintptr(newcap) > maxSliceCap(et.size)
-> uintptr(int(capmem / et.size)) > _MaxMem / et.size
-> capmem / et.size > _MaxMem / et.size
-> capmem > _MaxMem
Note that, due to integer division, capmem > _MaxMem does not imply
uintptr(newcap) > maxSliceCap(et.size).
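In code, the new check is roughly (a sketch; roundupsize and _MaxMem
are runtime-internal):

    capmem := roundupsize(uintptr(newcap) * et.size)
    if capmem > _MaxMem {
        panic(errorString("growslice: cap out of range"))
    }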
Consolidated runtime GrowSlice benchmarks by using sub-benchmarks and
added more struct sizes to show performance improvement when division
is avoided for element sizes larger than 32 bytes.
AMD64:
GrowSlice/Byte 38.9ns ± 2% 38.9ns ± 1% ~ (p=0.974 n=20+20)
GrowSlice/Int 58.3ns ± 3% 58.0ns ± 2% ~ (p=0.154 n=20+19)
GrowSlice/Ptr 95.7ns ± 2% 95.1ns ± 2% -0.60% (p=0.034 n=20+20)
GrowSlice/Struct/24 95.4ns ± 1% 93.9ns ± 1% -1.54% (p=0.000 n=19+19)
GrowSlice/Struct/32 110ns ± 1% 108ns ± 1% -1.76% (p=0.000 n=19+20)
GrowSlice/Struct/40 138ns ± 1% 128ns ± 1% -7.09% (p=0.000 n=20+20)
Change-Id: I1c37857c74ea809da373e668791caffb6a5cbbd3
Reviewed-on: https://go-review.googlesource.com/53471
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
We weren't initializing this field for dynamically-generated itabs.
Turns out it doesn't matter, as any time we use this field we also
generate a static itab for the interface type / concrete type pair.
But we should initialize it anyway, just to be safe.
Performance on the benchmarks in CL 44339:
benchmark old ns/op new ns/op delta
BenchmarkItabFew-12 1040585 26466 -97.46%
BenchmarkItabAll-12 228873499 4287696 -98.13%
Change-Id: I58ed2b31e6c98b584122bdaf844fee7268b58295
Reviewed-on: https://go-review.googlesource.com/44475
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
We don't use it any more, remove it.
Change-Id: I76ce1a4c2e7048fdd13a37d3718b5abf39ed9d26
Reviewed-on: https://go-review.googlesource.com/44474
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Just use fun[0]==0 to indicate a bad itab.
Change-Id: I28ecb2d2d857090c1ecc40b1d1866ac24a844848
Reviewed-on: https://go-review.googlesource.com/44473
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Keep itabs in a growable hash table.
Use a simple open-addressing hash table with quadratic probing,
sized as a power of two.
Synchronization gets a bit more tricky. The common read path now
has two atomic reads, one to get the table pointer and one to read
the entry out of the table.
I set the max load factor to 75%, kind of arbitrarily. There's a
space-speed tradeoff here, and I'm not sure where we should land.
Because we use open addressing the itab.link field is no longer needed.
I'll remove it in a separate CL.
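A self-contained sketch of the lookup probe sequence (itabKey is a
hypothetical stand-in for comparing the interface/concrete type pair;
the real table also uses atomic loads):

    // find probes a power-of-two-sized table; the triangular-number
    // step sequence visits every slot exactly once.
    func find(table []*itab, key uintptr) *itab {
        mask := uintptr(len(table) - 1)
        h := key & mask
        for i := uintptr(1); ; i++ {
            t := table[h]
            if t == nil {
                return nil // empty slot: key is not present
            }
            if itabKey(t) == key {
                return t
            }
            h = (h + i) & mask
        }
    }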
Fixes #20505
Change-Id: Ifb3d9a337512d6cf968c1fceb1eeaf89559afebf
Reviewed-on: https://go-review.googlesource.com/44472
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
The last runtime use was removed in https://golang.org/cl/133700043,
September 2014.
Replace the plan9 syscall uses with a plan9-specific variable.
Change-Id: Ifb910c021c1419a7c782959f90b054ed600d9e19
Reviewed-on: https://go-review.googlesource.com/55450
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The preceding cleanup made it clear that two cases
(have golden data, unreachable key) are handled identically.
Simplify the control flow to reflect that.
Simplifies the code and generates shorter machine code.
Change-Id: Id612e0da6679813e855506f47222c58ea6497d70
Reviewed-on: https://go-review.googlesource.com/55093
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This change unifies the x and y cases.
It shrinks evacuate's machine code by ~25% and its stack size by ~15%.
It also eliminates a critical branch.
Whether an entry should go to x or y is designed to be unpredictable.
As a result, half of the branch predictions for useX were wrong.
Mispredicting that branch can easily incur an expensive cache miss.
Switching to an xy array allows elimination of that branch,
which in turn reduces cache misses.
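The pattern, roughly (simplified; evacDst is the new destination
struct in evacuate):

    var xy [2]evacDst
    xy[0] = evacDst{b: x} // entries that keep their bucket index
    xy[1] = evacDst{b: y} // entries that move to the new half
    var useY uint8
    if hash&newbit != 0 {
        useY = 1
    }
    dst := &xy[useY] // indexing replaces the unpredictable branch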
Change-Id: Ie9cef53744b96c724c377ac0985b487fc50b49b1
Reviewed-on: https://go-review.googlesource.com/54653
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Make the calculation of k and v a bit lazier.
None of the following code cares about indirect-vs-direct k,
and it happens on all code paths, so check t.indirectkey earlier.
Simplifies the code and reduces both machine code and stack size.
Change-Id: I5ea4c0772848d7a4b15383baedb9a1f7feb47201
Reviewed-on: https://go-review.googlesource.com/55092
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This avoids division and multiplication.
Instrumentation suggests that this is a very common case.
Change-Id: I2d5d5012d4f4df4c4af1f9f85ca9c323c9889c0e
Reviewed-on: https://go-review.googlesource.com/54657
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This avoids the never-triggered capacity checks in newarray.
Change-Id: Ib72b204adcb9e3fd3ab963defe0cd40e22d5d492
Reviewed-on: https://go-review.googlesource.com/54731
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This makes sure that its argument is marked live on entry.
We need its arg to be live so defers of KeepAlive get
scanned correctly by the GC.
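For reference, the pattern this protects (resource and read are
hypothetical; only runtime.KeepAlive is real):

    import (
        "runtime"
        "syscall"
    )

    // resource is a hypothetical wrapper whose finalizer closes fd.
    type resource struct{ fd int }

    // read keeps r live for the duration of the syscall via a
    // deferred KeepAlive; this change makes the GC scan that
    // deferred call's argument.
    func read(r *resource, buf []byte) (int, error) {
        defer runtime.KeepAlive(r)
        return syscall.Read(r.fd, buf)
    }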
Fixes #21402
Change-Id: I906813e433d0e9726ca46483723303338da5b4d7
Reviewed-on: https://go-review.googlesource.com/55150
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>