Currently we have lots of (s.start << _PageShift) and variants. We now
have an s.base() function that returns this. It's faster and more
readable, so use it.
Change-Id: I888060a9dae15ea75ca8cc1c2b31c905e71b452b
Reviewed-on: https://go-review.googlesource.com/22559
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
These used to be used for the list of newly freed objects, but that's
no longer a thing.
Change-Id: I5a4503137b74ec0eae5372ca271b1aa0b32df074
Reviewed-on: https://go-review.googlesource.com/22557
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Our compilers now provide intrinsics, including
sys.Ctz64, that map to CTZ (count trailing zeros)
instructions. This CL replaces the Go versions
of CTZ with the compiler intrinsic.
Count trailing zeros (CTZ) finds the least
significant 1 in a word and returns the number
of less significant 0s in the word.
Allocation uses the bitmap created by the garbage
collector to locate an unmarked object. The logic
takes a word of the bitmap, complements it, and then
caches it. It then uses CTZ to locate an available
unmarked object and shifts the consumed bits out of
the bitmap word, preparing it for the next search.
Once all the unmarked objects in the cached word
have been used, the next word of the bitmap is
fetched and the process repeats.
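For illustration only, a minimal sketch of that search (hypothetical code, not the runtime's; math/bits.TrailingZeros64 stands in for the sys.Ctz64 intrinsic):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // nextUnmarked complements a word of the GC mark bitmap so unmarked
    // objects become 1s, then uses CTZ to find the lowest one. The returned
    // cache has the consumed bits shifted out, ready for the next search.
    func nextUnmarked(markWord uint64) (index int, cache uint64, ok bool) {
        cache = ^markWord
        if cache == 0 {
            return 0, 0, false // no unmarked object here; fetch the next word
        }
        index = bits.TrailingZeros64(cache)
        cache >>= uint(index) + 1
        return index, cache, true
    }

    func main() {
        // Objects 0 and 1 are marked (bits 0x3); object 2 is the first unmarked one.
        idx, _, ok := nextUnmarked(0x3)
        fmt.Println(idx, ok) // 2 true
    }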
Change-Id: Id2fc42d1d4b9893efaa2e1bd01896985b7e42f82
Reviewed-on: https://go-review.googlesource.com/21366
Reviewed-by: Austin Clements <austin@google.com>
Two changes that depend on each other are included here.
The first is that allocBits and gcmarkBits are changed to
a *uint8 which points to the first byte of that span's
mark and alloc bits. Several places were altered to
perform pointer arithmetic to locate the byte corresponding
to an object in the span. The actual bit corresponding
to an object is indexed within the byte using the lower three
bits of the object's index.
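A hypothetical sketch of that indexing (illustrative names, not the runtime's code):

    package bitsdemo

    import "unsafe"

    // bitFor locates the alloc/mark bit for object i: pointer arithmetic on
    // the span's *uint8 selects the byte covering the object, and the low
    // three bits of i select the bit within that byte.
    func bitFor(bits *uint8, i uintptr) (bytep *uint8, mask uint8) {
        bytep = (*uint8)(unsafe.Add(unsafe.Pointer(bits), i/8))
        mask = uint8(1) << (i & 7) // i&7 == the low three bits of the index
        return bytep, mask
    }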
The second change avoids the redundant calculation of an
object's index. The index is returned from heapBitsForObject
and then used by the functions indexing allocBits
and gcmarkBits.
Finally, we no longer allocate the GC bits in the span
structures. Instead we use an arena-based allocation scheme
that allows for a more compact bitmap as well as recycling
and bulk clearing of the mark bits.
Change-Id: If4d04b2021c092ec39a4caef5937a8182c64dfef
Reviewed-on: https://go-review.googlesource.com/20705
Reviewed-by: Austin Clements <austin@google.com>
The complexity of the GC work buffers' put and tryGet
prevented them from being inlined. This CL simplifies
the fast path, thus enabling inlining. If the fast
path does not succeed, the previous put and tryGet
functions are called.
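A minimal sketch of the shape of that change (hypothetical types; only the fast-path/slow-path split is taken from this description):

    package gcworkdemo

    // workbuf and gcWork are stand-ins for the runtime's work-buffer types.
    type workbuf struct{ obj []uintptr }

    type gcWork struct{ wbuf *workbuf }

    // tryGet keeps the common case small enough for the compiler to inline;
    // when the local buffer is empty it falls back to the previous,
    // non-inlined routine.
    func (w *gcWork) tryGet() uintptr {
        if b := w.wbuf; b != nil && len(b.obj) > 0 {
            p := b.obj[len(b.obj)-1]
            b.obj = b.obj[:len(b.obj)-1]
            return p
        }
        return w.tryGetSlow()
    }

    // tryGetSlow stands in for the original, more complex tryGet.
    func (w *gcWork) tryGetSlow() uintptr { return 0 }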
Change-Id: I6da6495d0dadf42bd0377c110b502274cc01acf5
Reviewed-on: https://go-review.googlesource.com/20704
Reviewed-by: Austin Clements <austin@google.com>
Prior to this CL the base of a span was calculated in various
places using shifts or calls to base(). This CL now
always calls base(), which has been optimized to calculate the
base of the span when the span is initialized and store that
value in the span structure.
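Roughly, and with illustrative names only (the real field and shift differ):

    package spandemo

    const pageShift = 13 // illustrative page shift

    // mspan is a stand-in holding just the cached base address.
    type mspan struct {
        startAddr uintptr
    }

    // init computes the base once, when the span is set up...
    func (s *mspan) init(startPage uintptr) {
        s.startAddr = startPage << pageShift
    }

    // ...so base() is a plain field load rather than a shift at every call site.
    func (s *mspan) base() uintptr { return s.startAddr }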
Change-Id: I661f2bfa21e3748a249cdf049ef9062db6e78100
Reviewed-on: https://go-review.googlesource.com/20703
Reviewed-by: Austin Clements <austin@google.com>
Prior to this CL the sweep phase was responsible for locating
all objects that were about to be freed and calling a function
to process the object. This was done by the function
heapBitsSweepSpan. Part of processing included calls to
tracefree and msanfree as well as counting how many objects
were freed.
The calls to tracefree and msanfree have been moved into the
gcmalloc routine and are made when the object is about to be
reallocated. The counting of free objects has been optimized
using an array-based popcnt algorithm, and if all the objects
in a span are free then the span is freed.
Similarly, the code to locate the next free object has been
optimized to use an array-based ctz (count trailing zeros).
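As a rough sketch of the counting step (hypothetical code; math/bits stands in for the array-based popcnt):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // countLive sums the set bits of a span's mark bytes, giving the number
    // of live objects; nobj minus that is the number freed this cycle. Bits
    // past the last object are assumed to be zero.
    func countLive(markBits []byte) int {
        live := 0
        for _, b := range markBits {
            live += bits.OnesCount8(b)
        }
        return live
    }

    func main() {
        nobj := 10
        live := countLive([]byte{0x0f, 0x01}) // objects 0-3 and 8 are marked
        fmt.Println(live, nobj-live)          // 5 live, 5 freed
    }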
Various hot paths in the allocation logic have been optimized.
At this point the garbage benchmark is within 3% of the 1.6
release.
Change-Id: I00643c442e2ada1685c010c3447e4ea8537d2dfa
Reviewed-on: https://go-review.googlesource.com/20201
Reviewed-by: Austin Clements <austin@google.com>
Add to each span a 64-bit cache (allocCache) of the allocBits
at freeindex. allocCache is shifted such that its lowest bit
corresponds to the bit at freeindex. allocBits uses a 0 to
indicate that an object is free; allocCache, on the other hand,
uses a 1 to indicate that an object is free. This facilitates
ctz64 (count trailing zeros), which counts the number of 0s
trailing the least significant 1; this count is also the index of
the least significant 1.
Each span maintains a freeindex indicating the boundary
between allocated objects and unallocated objects. allocCache
is shifted as freeindex is incremented such that the low bit
in allocCache corresponds to the bit at freeindex in the
allocBits array.
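A hypothetical sketch of that relationship (simplified; the real code also refills allocCache from allocBits when the cache runs out):

    package allocdemo

    import "math/bits"

    // span carries just the fields discussed above. allocCache is the
    // complement of the allocBits covering freeindex, shifted so its low bit
    // lines up with freeindex; a 1 therefore means "free".
    type span struct {
        allocCache uint64
        freeindex  uintptr
        nelems     uintptr
    }

    // nextFreeFromCache returns the index of the next free object covered by
    // the current allocCache, or false when the cache is exhausted (the
    // caller would then load the next 64 alloc bits, complemented).
    func (s *span) nextFreeFromCache() (uintptr, bool) {
        bit := uintptr(bits.TrailingZeros64(s.allocCache)) // 64 if allocCache == 0
        idx := s.freeindex + bit
        if bit == 64 || idx >= s.nelems {
            return 0, false
        }
        s.allocCache >>= bit + 1 // keep the low bit lined up with the new freeindex
        s.freeindex = idx + 1
        return idx, true
    }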
Currently ctz64 is written in Go using a for loop, so it is
not very efficient. Use of the hardware instruction will
follow. With this in mind, comparisons of the garbage
benchmark are as follows.
1.6 release         2.8 seconds
dev:garbage branch  3.1 seconds
Profiling shows the Go implementation of ctz64 takes up
1% of the total time.
Change-Id: If084ed9c3b1eda9f3c6ab2e794625cb870b8167f
Reviewed-on: https://go-review.googlesource.com/20200
Reviewed-by: Austin Clements <austin@google.com>
Most (all?) processors that Go supports supply a hardware
instruction that takes a byte and returns the number
of zeros trailing the first 1 encountered, or 8
if no ones are found. This is the index within the
byte of the first 1 encountered. CTZ should improve the
performance of the nextFreeIndex function.
Since nextFreeIndex wants the next unmarked (0) bit,
a bit-wise complement is needed before calling ctz.
Furthermore, unmarked bits associated with previously
allocated objects need to be ignored. Instead of writing
a 1 as we allocate, the code masks off all bits less than
the freeindex after loading the byte.
While this CL does not actually execute a CTZ instruction,
it supplies a ctz function with the appropriate signature
along with the logic to use it.
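A minimal sketch of such a function and the masking logic (hypothetical names; the loop is the placeholder a hardware CTZ would later replace):

    package ctzdemo

    // ctz8 returns the number of trailing zeros in a byte, or 8 if the byte
    // is zero.
    func ctz8(x uint8) int {
        for n := 0; n < 8; n++ {
            if x&(uint8(1)<<uint(n)) != 0 {
                return n
            }
        }
        return 8
    }

    // nextFreeInByte complements the alloc-bits byte so free objects become
    // 1s and masks off bits below the byte-relative freeindex, instead of
    // writing a 1 into the byte as each object is allocated. A result of 8
    // means no free object remains in this byte.
    func nextFreeInByte(allocByte uint8, freeidx uint) int {
        free := ^allocByte
        free &= uint8(0xff) << freeidx
        return ctz8(free)
    }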
Change-Id: I5c55ce0ed48ca22c21c4dd9f969b0819b4eadaa7
Reviewed-on: https://go-review.googlesource.com/20169
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This is a renaming of the field ref to the
more appropriate allocCount. The field
holds the number of objects in the span
that are currently allocated. Some throw
strings were adjusted to more accurately
convey the meaning of allocCount.
Change-Id: I10daf44e3e9cc24a10912638c7de3c1984ef8efe
Reviewed-on: https://go-review.googlesource.com/19518
Reviewed-by: Austin Clements <austin@google.com>
Instead of building a freelist from the mark bits generated
by the GC, this CL allocates directly from the mark bits.
The approach moves the mark bits from the pointer/no-pointer
heap structures into their own per-span data structures. The
mark/allocation vectors consist of a single mark bit per
object. Two vectors are maintained, one for allocation and
one for the GC's mark phase. During the GC cycle's sweep
phase the interpretation of the vectors is swapped. The
mark vector becomes the allocation vector and the old
allocation vector is cleared and becomes the mark vector that
the next GC cycle will use.
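Schematically (a hypothetical sketch; the real runtime recycles arena-allocated bit vectors rather than allocating a new one per sweep):

    package sweepdemo

    // span is a stand-in for the per-span mark/allocation vectors.
    type span struct {
        allocBits  []uint8
        gcmarkBits []uint8
        freeindex  uintptr
        nelems     uintptr
    }

    // sweep swaps the vectors: last cycle's mark bits become the allocation
    // bits, and a freshly cleared vector becomes the mark bits for the next
    // GC cycle. Allocation then restarts from index 0.
    func (s *span) sweep() {
        s.allocBits = s.gcmarkBits
        s.gcmarkBits = make([]uint8, (s.nelems+7)/8) // cleared mark vector
        s.freeindex = 0
    }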
Marked entries in the allocation vector indicate that the
object is not free. Each allocation vector maintains a boundary
between areas of the span already allocated from and areas
not yet allocated from. As objects are allocated this boundary
is moved until it reaches the end of the span. At this point
further allocations will be done from another span.
Since we no longer sweep a span inspecting each freed object,
maintaining the pointer/scalar bits in the heapBitmap is now
the responsibility of the routines doing the actual allocation.
This CL is functionally complete and ready for performance
tuning.
Change-Id: I336e0fc21eef1066e0b68c7067cc71b9f3d50e04
Reviewed-on: https://go-review.googlesource.com/19470
Reviewed-by: Austin Clements <austin@google.com>
gcmarkBits is a bit vector used by the GC to mark
reachable objects. Once a GC cycle is complete, gcmarkBits
swaps places with allocBits. allocBits is then used directly
by malloc to locate free objects, thus avoiding the
construction of a linked free list. This CL introduces a set
of helper functions for manipulating gcmarkBits and allocBits
that will be used by later CLs to realize the actual
algorithm. Minimal attempts have been made to optimize these
helper routines.
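The helpers are in the spirit of the following sketch (hypothetical; field and method names are illustrative):

    package markbitsdemo

    // markBits bundles the byte and bit mask covering one object's mark or
    // alloc bit so callers can test and set it directly.
    type markBits struct {
        bytep *uint8
        mask  uint8
    }

    func (m markBits) isMarked() bool { return *m.bytep&m.mask != 0 }
    func (m markBits) setMarked()     { *m.bytep |= m.mask }
    func (m markBits) clearMarked()   { *m.bytep &^= m.mask }

    // markBitsForIndex locates the bit for object i within a bit vector.
    func markBitsForIndex(bits []uint8, i uintptr) markBits {
        return markBits{bytep: &bits[i/8], mask: uint8(1) << (i % 8)}
    }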
Change-Id: I55ad6240ca32cd456e8ed4973c6970b3b882dd34
Reviewed-on: https://go-review.googlesource.com/19420
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In preparation for changing how the next free object is chosen,
refactor and consolidate the code into a single function.
Change-Id: I6836cd88ed7cbf0b2df87abd7c1c3b9fabc1cbd8
Reviewed-on: https://go-review.googlesource.com/19317
Reviewed-by: Austin Clements <austin@google.com>
The freelist for normal objects and the freelist
for stacks share the same mspan field for holding
the list head but are operated on by different code
sequences. This overloading complicates the use of bit
vectors for allocation of normal objects. This change
separates the use of the stackfreelist from the use
of freelist.
Change-Id: I5b155b5b8a1fcd8e24c12ee1eb0800ad9b6b4fa0
Reviewed-on: https://go-review.googlesource.com/19315
Reviewed-by: Austin Clements <austin@google.com>
This adds prototypes of the bitmap allocation data structures.
Before this is released, these underlying data structures need
to be more performant, but the signatures of the helper
functions utilizing these structures will remain stable.
Change-Id: I5ace12f2fb512a7038a52bbde2bfb7e98783bcbe
Reviewed-on: https://go-review.googlesource.com/19221
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Updates #15462
Automatic refactor with sed -e.
Replace all oconv(op, 0) string conversions with the raw op value,
which fmt's %v verb can print directly.
The remaining oconv(op, FmtSharp) will be replaced with op.GoString and
%#v in the next CL.
Change-Id: I5e2f7ee0bd35caa65c6dd6cb1a866b5e4519e641
Reviewed-on: https://go-review.googlesource.com/22499
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
After CL 22461, the c-archive build on darwin/arm is compiled with
-shared by default, so update the install path.
Fix build.
Change-Id: Ie93dbd226ed416b834da0234210f4b98bc0e3606
Reviewed-on: https://go-review.googlesource.com/22507
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
These are used at the bottom level of various GC operations that must
not be preempted. To be on the safe side, mark them all nosplit.
Change-Id: I8f7360e79c9852bd044df71413b8581ad764380c
Reviewed-on: https://go-review.googlesource.com/22504
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
The new type was inheriting the tflagExtraStar from its prototype.
Fixes #15467
Change-Id: Ic22c2a55cee7580cb59228d52b97e1c0a1e60220
Reviewed-on: https://go-review.googlesource.com/22501
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Fixes #15468
Change-Id: I8723171f87774a98d5e80e7832ebb96dd1fbea74
Reviewed-on: https://go-review.googlesource.com/22524
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
builtin.go was auto-generated via go generate; all other
changes were manual.
The new format reduces the export data size by ~65% on average
for the std library packages (and there is still quite a bit of
room for improvement).
The average time to write export data is reduced by (at least)
62% as measured in one run over the std lib; it is likely more.
The average time to read import data is reduced by (at least)
37% as measured in one run over the std lib; it is likely more.
There is also room to improve this time.
The compiler transparently handles both packages using the old
and the new format.
Comparing the -S output of the go build for each package via
the cmp.bash script (added) shows identical assembly code for
all packages, but 6 files show file:line differences:
The following files have differences because they use cgo
and cgo uses different temp. directories for different builds.
Harmless.
src/crypto/x509
src/net
src/os/user
src/runtime/cgo
The following files have file:line differences that are not yet
fully explained; however, the differences exist w/ and w/o the new
export format (pre-existing condition). See issue #15453.
src/go/internal/gccgoimporter
src/go/internal/gcimporter
In summary, switching to the new export format produces the same
package files as before for all practical purposes.
How can you tell which one you have (if you care): Open a package
(.a) file in an editor. Textual export data starts with a $$ after
the header and is more or less legible; binary export data starts
with a $$B after the header and is mostly unreadable. A stand-alone
decoder (for debugging) is in the works.
In case of a problem, please first try reverting back to the old
textual format to determine if the cause is the new export format:
For a stand-alone compiler invocation:
- go tool compile -newexport=0 <files>
For a single package:
- go build -gcflags="-newexport=0" <pkg>
For make/all.bash:
- (export GO_GCFLAGS="-newexport=0"; sh make.bash)
Fixes #13241.
Change-Id: I2588cb463be80af22446bf80c225e92ab79878b8
Reviewed-on: https://go-review.googlesource.com/22123
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Now it is possible to build a c-archive as PIC on darwin/arm (this is
now the default). Then the system linker can link the binary using
the archive as PIE.
Fixes#12896.
Change-Id: Iad84131572422190f5fa036e7d71910dc155f155
Reviewed-on: https://go-review.googlesource.com/22461
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TestBuiltin will fail if run on Windows and builtin.go was generated
on a non-Windows machine (or vice versa) because path names have
different separators. Avoid the problem altogether by not writing
pos info for builtin packages. It's not needed.
Affects -newexport only.
Change-Id: I8944f343452faebaea9a08b5fb62829bed77c148
Reviewed-on: https://go-review.googlesource.com/22498
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The line numbers of ONAMEs are the location of their
declaration, not their use.
The line numbers of named OLITERALs are also the location
of their declaration.
Ignore both of these. Instead, we will inherit the line number from
the containing syntactic item.
Fixes #14742
Fixes #15430
Change-Id: Ie43b5b9f6321cbf8cead56e37ccc9364d0702f2f
Reviewed-on: https://go-review.googlesource.com/22479
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Current V-register range is V32~V63 on arm64. This patch changes it to
V0~V31.
Fixes #15465.
Change-Id: I90dab42dea46825ec5d7a8321ec4f6550735feb8
Reviewed-on: https://go-review.googlesource.com/22520
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Run-TryBot: Aram Hăvărneanu <aram@mgk.ro>
TryBot-Result: Gobot Gobot <gobot@golang.org>
TestNoRaceIOHttp does all kinds of bad things:
1. Binds to a fixed port, so concurrent tests fail.
2. Registers HTTP handler multiple times, so repeated tests fail.
3. Relies on sleep to wait for listen.
Fix all of that.
Change-Id: I1210b7797ef5e92465b37dc407246d92a2a24fe8
Reviewed-on: https://go-review.googlesource.com/19953
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Updates #15462
Semi automatic change with gofmt -r and hand fixups for callers outside
internal/gc.
All the uses of gc.Oconv outside cmd/compile/internal/gc were for the
Oconv(op, 0) form, which is already handled by the Op.String method.
Replace the use of gc.Oconv(op, 0) with op itself, which will call
Op.String via the %v or %s verb. Unexport Oconv.
Change-Id: I84da2a2e4381b35f52efce427b2d6a3bccdf2526
Reviewed-on: https://go-review.googlesource.com/22496
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Previously the Transport would cache idle connections
for later reuse, but if a peer server disconnected
(e.g. idle timeout), we would not proactively remove the *persistConn
from the Transport's idle list, leading to a waste of memory
(potentially forever).
Instead, when the persistConn's readLoop terminates, remove it from
the idle list, if present.
This also adds the beginning of accounting for the total number of
idle connections, which will be needed for Transport.MaxIdleConns
later.
Updates #15461
Change-Id: Iab091f180f8dd1ee0d78f34b9705d68743b5557b
Reviewed-on: https://go-review.googlesource.com/22492
Reviewed-by: Andrew Gerrand <adg@golang.org>
It comes up every few months that we can't understand why
the go command is rebuilding some package.
Add diagnostics so that the go command can explain itself
if asked.
For #2775, #3506, #12074.
Change-Id: I1c73b492589b49886bf31a8f9d05514adbd6ed70
Reviewed-on: https://go-review.googlesource.com/22432
Reviewed-by: Rob Pike <r@golang.org>
Renames block to blockGeneric so that it can be called when the
assembly feature check fails. This means making block a var on
platforms without an assembly implementation (similar to the sha1
package).
Also adds a test to check that the fallback path works correctly
when the feature check fails.
name        old speed      new speed        delta
Hash8Bytes  6.42MB/s ± 1%   27.14MB/s ± 0%  +323.01%  (p=0.000 n=10+10)
Hash1K      53.9MB/s ± 0%  511.1MB/s ± 0%   +847.57%  (p=0.000 n=10+9)
Hash8K      57.1MB/s ± 1%  609.7MB/s ± 0%   +967.04%  (p=0.000 n=10+10)
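For illustration, a pure-Go sketch of the fallback arrangement (hypothetical; in the real package the dispatch and feature check live in assembly and build-tagged files):

    package sha256demo

    type digest struct{ h [8]uint32 }

    var useAsm = hasSHA256Instructions() // hypothetical feature check

    // block dispatches to the assembly path when the feature check passes
    // and to the renamed pure-Go routine otherwise.
    func block(d *digest, p []byte) {
        if useAsm {
            blockAsm(d, p) // stand-in for the assembly implementation
            return
        }
        blockGeneric(d, p)
    }

    // blockGeneric stands in for the portable Go implementation.
    func blockGeneric(d *digest, p []byte) {}

    // blockAsm and hasSHA256Instructions are stubs for this sketch.
    func blockAsm(d *digest, p []byte) {}

    func hasSHA256Instructions() bool { return false }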
Change-Id: If962b2a5c9160b3a0b76ccee53b2fd809468ed3d
Reviewed-on: https://go-review.googlesource.com/22460
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bill O'Farrell <billotosyr@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently we clear gcscanvalid in both casgstatus and
casfrom_Gscanstatus if the new status is _Grunning. This is very
important to do in casgstatus. However, this is potentially wrong in
casfrom_Gscanstatus because in this case the caller doesn't own gp and
hence the write is racy. Unlike the other _Gscan statuses, during
_Gscanrunning, the G is still running. This does not indicate that
it's transitioning into a running state. The scan simply hasn't
happened yet, so it's neither valid nor invalid.
Conveniently, this also means clearing gcscanvalid is unnecessary in
this case because the G was already in _Grunning, so we can simply
remove this code. What will happen instead is that the G will be
preempted to scan itself, that scan will set gcscanvalid to true, and
then the G will return to _Grunning via casgstatus, clearing
gcscanvalid.
This fix will become necessary shortly when we start keeping track of
the set of G's with dirty stacks, since it will no longer be
idempotent to simply set gcscanvalid to false.
Change-Id: I688c82e6fbf00d5dbbbff49efa66acb99ee86785
Reviewed-on: https://go-review.googlesource.com/20669
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This adds a best-effort pass to remove stack barriers immediately
after the end of mark termination. This isn't necessary for the Go
runtime, but should help external tools that perform stack walks but
aren't aware of Go's stack barriers such as GDB, perf, and VTune.
(Though clearly they'll still have trouble unwinding stacks during
mark.)
Change-Id: I66600fae1f03ee36b5459d2b00dcc376269af18e
Reviewed-on: https://go-review.googlesource.com/20668
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently we remove stack barriers during STW mark termination, which
has a non-trivial per-goroutine cost and means that we have to touch
even clean stacks during mark termination. However, there's no problem
with leaving them in during the sweep phase. They just have to be out
by the time we install new stack barriers immediately prior to
scanning the stack such as during the mark phase of the next GC cycle
or during mark termination in a STW GC.
Hence, move the gcRemoveStackBarriers from STW mark termination to
just before we install new stack barriers during concurrent mark. This
removes the cost from STW. Furthermore, this combined with concurrent
stack shrinking means that the mark termination scan of a clean stack
is a complete no-op, which will make it possible to skip clean stacks
entirely during mark termination.
This has the downside that it will mess up anything outside of Go that
tries to walk Go stacks all the time instead of just some of the time.
This includes tools like GDB, perf, and VTune. We'll improve the
situation shortly.
Change-Id: Ia40baad8f8c16aeefac05425e00b0cf478137097
Reviewed-on: https://go-review.googlesource.com/20667
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently we enqueue span root mark jobs during both concurrent mark
and mark termination, but we make the job a no-op during mark
termination.
This is silly. Instead of queueing them up just to not do them, don't
queue them up in the first place.
Change-Id: Ie1d36de884abfb17dd0db6f0449a2b7c997affab
Reviewed-on: https://go-review.googlesource.com/20666
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently we free cached stacks of dead Gs during STW stack root
marking. We do this during STW because there's no way to take
ownership of a particular dead G, so attempting to free a dead G's
stack during concurrent stack root marking could race with reusing
that G.
However, we can do this concurrently if we take a completely different
approach. One way to prevent reuse of a dead G is to remove it from
the free G list. Hence, this adds a new fixed root marking task that
simply removes all Gs from the list of dead Gs with cached stacks,
frees their stacks, and then adds them to the list of dead Gs without
cached stacks.
This is also a necessary step toward rescanning only dirty stacks,
since it eliminates another task from STW stack marking.
Change-Id: Iefbad03078b284a2e7bf30fba397da4ca87fe095
Reviewed-on: https://go-review.googlesource.com/20665
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently all free Gs are added to one list. Split this into two
lists: one for free Gs with cached stacks and one for Gs without
cached stacks.
This lets us preferentially allocate Gs that already have a stack, but
more importantly, it sets us up to free cached G stacks concurrently.
Change-Id: Idbe486f708997e1c9d166662995283f02d1eeb3c
Reviewed-on: https://go-review.googlesource.com/20664
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Let's define the line number of a multiline rule as the line
number on which the -> appears. This helps make the rule
cover analysis look a bit nicer.
Change-Id: I4ac4c09f2240285976590ecfd416bc4c05e78946
Reviewed-on: https://go-review.googlesource.com/22473
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Instead of eagerly creating strings like "literal 2.01" for every
lexed number in case we need to mention it in an error message, defer
this work to (*parser).syntax_error.
name      old allocs/op  new allocs/op  delta
Template   482k ± 0%      482k ± 0%     -0.12%  (p=0.000 n=9+10)
GoTypes   1.35M ± 0%     1.35M ± 0%     -0.04%  (p=0.015 n=10+10)
Compiler  5.45M ± 0%     5.44M ± 0%     -0.12%  (p=0.000 n=9+8)
Change-Id: I333b3c80e583864914412fb38f8c0b7f1d8c8821
Reviewed-on: https://go-review.googlesource.com/22480
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>