Currently, the concurrent sweep follows a 1:1 rule: when allocation
needs a span, it sweeps a span (likewise, when a large allocation
needs N pages, it sweeps until it frees N pages). This rule worked
well for the STW collector (especially when GOGC==100) because it did
no more sweeping than necessary to keep the heap from growing, would
generally finish sweeping just before GC, and ensured good temporal
locality between sweeping a page and allocating from it.
It doesn't work well with concurrent GC. Since concurrent GC requires
starting GC earlier (sometimes much earlier), the sweep often won't be
done when GC starts. Unfortunately, the first thing GC has to do is
finish the sweep. In the meantime, the mutator can continue
allocating, pushing the heap size even closer to the goal size. This
worked okay with the 7/8ths trigger, but it gets into a vicious cycle
with the GC trigger controller: if the mutator is allocating quickly
and driving the trigger lower, more and more sweep work will be left
to GC; this both causes GC to take longer (allowing the mutator to
allocate more during GC) and delays the start of the concurrent mark
phase, which throws off the GC controller's statistics and generally
causes it to push the trigger even lower.
As an example of a particularly bad case, the garbage benchmark with
GOMAXPROCS=4 and -benchmem 512 (MB) spends the first 0.4-0.8 seconds
of each GC cycle sweeping, during which the heap grows by between
109MB and 252MB.
To fix this, this change replaces the 1:1 sweep rule with a
proportional sweep rule. At the end of GC, GC knows exactly how much
heap allocation will occur before the next concurrent GC as well as
how many span pages must be swept. This change computes this "sweep
ratio" and, when mallocgc asks for a span, mcentral sweeps
enough spans to bring the swept span count into ratio with the
allocated byte count.
On the benchmark from above, this entirely eliminates sweeping at the
beginning of GC, which reduces the time between startGC readying the
GC goroutine and GC stopping the world for sweep termination to ~100µs,
during which the heap grows by at most 134KB.
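The mechanism reduces to simple arithmetic. A minimal sketch, with
invented names (the runtime's real bookkeeping lives in mheap and
mcentral and differs in detail):

	package main

	import "fmt"

	// sweepRatio is computed once, at the end of a GC cycle: span pages
	// still to sweep divided by the bytes that will be allocated before
	// the next cycle triggers. All names here are invented for the sketch.
	func sweepRatio(pagesToSweep, heapLive, nextTrigger uint64) float64 {
		return float64(pagesToSweep) / float64(nextTrigger-heapLive)
	}

	func main() {
		// 10,000 pages left to sweep; the trigger is 64MB of allocation away.
		ratio := sweepRatio(10000, 192<<20, 256<<20)
		// After another 1MB of allocation, mallocgc owes this many pages:
		fmt.Printf("sweep %.1f pages per MB allocated\n", ratio*float64(1<<20))
	}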
Change-Id: I35422d6bba0c2310d48bb1f8f30a72d29e98c1af
Reviewed-on: https://go-review.googlesource.com/8921
Reviewed-by: Rick Hudson <rlh@golang.org>
This field used to decrease with sweeps (and potentially go
negative). Now it is always zero or positive, so change it to a
uintptr so it meshes better with other memory stats.
Change-Id: I6a50a956ddc6077eeaf92011c51743cb69540a3c
Reviewed-on: https://go-review.googlesource.com/8899
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, concurrent GC triggers at a fixed 7/8*GOGC heap growth. For
mutators that allocate slowly, this means GC will trigger too early
and run too often, wasting CPU time on GC. For mutators that allocate
quickly, this means GC will trigger too late, causing the program to
exceed the GOGC heap growth goal and/or to exceed CPU goals because of
a high mutator assist ratio.
This change adds a feedback control loop to dynamically adjust the GC
trigger from cycle to cycle. By monitoring the heap growth and GC CPU
utilization from cycle to cycle, this adjusts the Go garbage collector
to target the GOGC heap growth goal and the 25% CPU utilization goal.
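A heavily simplified sketch of the controller's shape (the gain and
names are illustrative only; the real error term also folds in assist
CPU utilization):

	package sketch

	const triggerGain = 0.5 // damping factor per cycle (illustrative)

	// nextTriggerRatio nudges the trigger (heap growth at which the next
	// cycle starts) toward the value that would have made the last cycle
	// end exactly at the growth goal (GOGC/100).
	func nextTriggerRatio(triggerRatio, actualGrowth, goalRatio float64) float64 {
		err := goalRatio - actualGrowth // >0: finished early, can trigger later
		return triggerRatio + triggerGain*err
	}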
Change-Id: Ic82eef288c1fa122f73b69fe604d32cbb219e293
Reviewed-on: https://go-review.googlesource.com/8851
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the concurrent mark phase is performed by the main GC
goroutine. Prior to the previous commit enabling preemption, this
caused marking to always consume 1/GOMAXPROCS of the available CPU
time. If GOMAXPROCS=1, this meant background GC would consume 100% of
the CPU (effectively a STW). If GOMAXPROCS>4, background GC would use
less than the goal of 25%. If GOMAXPROCS=4, background GC would use
the goal 25%, but if the mutator wasn't using the remaining 75%,
background marking wouldn't take advantage of the idle time. Enabling
preemption in the previous commit made GC miss CPU targets in
completely different ways, but set us up to bring everything back in
line.
This change replaces the fixed GC goroutine with per-P background mark
goroutines. Once started, these goroutines don't go in the standard
run queues; instead, they are scheduled specially such that the time
spent in mutator assists and the background mark goroutines totals 25%
of the CPU time available to the program. Furthermore, this lets
background marking take advantage of idle Ps, which significantly
boosts GC performance for applications that under-utilize the CPU.
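Roughly, the 25% goal is split across Ps as in this sketch (invented
names; the real rounding and scheduling are more involved):

	package sketch

	const gcGoalUtilization = 0.25 // target fraction of CPU for background mark

	// markWorkerPlan splits the goal into whole Ps that run mark workers
	// full-time plus a fractional share covered by one part-time worker.
	func markWorkerPlan(gomaxprocs int) (dedicated int, fractional float64) {
		total := gcGoalUtilization * float64(gomaxprocs)
		dedicated = int(total)                  // GOMAXPROCS=8 -> 2 dedicated workers
		fractional = total - float64(dedicated) // GOMAXPROCS=1 -> one worker at 25%
		return
	}

Idle Ps additionally run mark workers outside this budget, which is
where the idle-time win comes from.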
This requires also changing how time is reported for gctrace, so this
change splits the concurrent mark CPU time into assist/background/idle
scanning.
This also requires increasing the size of the StackRecord slice used
in a GoroutineProfile test.
Change-Id: I0936ff907d2cee6cb687a208f2df47e8988e3157
Reviewed-on: https://go-review.googlesource.com/8850
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the entire GC process runs with g.m.preemptoff set. In the
concurrent phases, the parts that actually need preemption disabled
are run on a system stack and there's no overall need to stay on the
same M or P during the concurrent phases. Hence, move the setting of
g.m.preemptoff to when we start mark termination, at which point we
really do need preemption disabled.
This dramatically changes the scheduling behavior of the concurrent
mark phase. Currently, since this is non-preemptible, concurrent mark
gets one dedicated P (so 1/GOMAXPROCS utilization). With this change,
the GC goroutine is scheduled like any other goroutine during
concurrent mark, so it gets 1/<runnable goroutines> utilization.
You might think it's not even necessary to set g.m.preemptoff at that
point since the world is stopped, but stackalloc/stackfree use this as
a signal that the per-P pools are not safe to access without
synchronization.
Change-Id: I08aebe8179a7d304650fb8449ff36262b3771099
Reviewed-on: https://go-review.googlesource.com/8839
Reviewed-by: Rick Hudson <rlh@golang.org>
This time is tracked per P and periodically flushed to the global
controller state. This will be used to compute mutator assist
utilization in order to schedule background GC work.
Change-Id: Ib94f90903d426a02cf488bf0e2ef67a068eb3eec
Reviewed-on: https://go-review.googlesource.com/8837
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, mutator allocation periodically assists the garbage
collector by performing a small, fixed amount of scanning work.
However, to control heap growth, mutators need to perform scanning
work *proportional* to their allocation rate.
This change implements proportional mutator assists. This uses the
scan work estimate computed by the garbage collector at the beginning
of each cycle to compute how much scan work must be performed per
allocation byte to complete the estimated scan work by the time the
heap reaches the goal size. When allocation triggers an assist, it
uses this ratio and the amount allocated since the last assist to
compute the assist work, then attempts to steal as much of this work
as possible from the background collector's credit, and then performs
any remaining scan work itself.
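In outline, each assist does something like this sketch (invented
names; the real code batches its draws from the credit pool):

	package sketch

	import "sync/atomic"

	// bgScanCredit is the global pool of scan work banked by background GC.
	var bgScanCredit int64

	// assistWork returns the scan work the allocating goroutine must do
	// itself after stealing what it can from background credit.
	func assistWork(bytesAllocated, scanWorkExpected, heapDistance int64) int64 {
		// Scan work owed per byte allocated, fixed for the cycle.
		assistRatio := float64(scanWorkExpected) / float64(heapDistance)
		debt := int64(assistRatio * float64(bytesAllocated))
		left := atomic.AddInt64(&bgScanCredit, -debt)
		if left >= 0 {
			return 0 // background GC is staying ahead; no assist needed
		}
		// Overdrew the pool: give the overdraft back and scan that much.
		atomic.AddInt64(&bgScanCredit, -left)
		return -left
	}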
Change-Id: I98b2078147a60d01d6228b99afd414ef857e4fba
Reviewed-on: https://go-review.googlesource.com/8836
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently, the "n" in gcDrainN is in terms of objects to scan. This is
used by gchelpwork to perform a limited amount of work on allocation,
but is a pretty arbitrary way to bound this amount of work since the
number of objects has little relation to how long they take to scan.
Modify gcDrainN to perform a fixed amount of scan work instead. For
now, gchelpwork still performs a fairly arbitrary amount of scan work,
but at least this is much more closely related to how long the work
will take. Shortly, we'll use this to precisely control the scan work
performed by mutator assists during allocation to achieve the heap
size goal.
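A sketch of the difference, using a toy queue in place of gcWork and
object size as the unit of scan work:

	package sketch

	// object stands in for a heap object; scanning one costs about its size.
	type object struct{ size int64 }

	type workQueue struct{ objs []object }

	func (q *workQueue) tryGet() (object, bool) {
		if len(q.objs) == 0 {
			return object{}, false
		}
		o := q.objs[len(q.objs)-1]
		q.objs = q.objs[:len(q.objs)-1]
		return o, true
	}

	// drainN performs roughly n units of scan work: each object is
	// credited by its size, not counted as 1 as the old bound did.
	func drainN(q *workQueue, n int64) {
		for n > 0 {
			o, ok := q.tryGet()
			if !ok {
				return // no work available
			}
			n -= o.size
		}
	}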
Change-Id: I3cd07fe0516304298a0af188d0ccdf621d4651cc
Reviewed-on: https://go-review.googlesource.com/8835
Reviewed-by: Rick Hudson <rlh@golang.org>
This tracks scan work done by background GC in a global pool. Mutator
assists will draw on this credit to avoid doing work when background
GC is staying ahead.
Unlike the other GC controller tracking variables, this will be both
written and read throughout the cycle. Hence, we can't arbitrarily
delay updates like we can for scan work and bytes marked. However, we
still want to minimize contention, so this global credit pool is
allowed some error from the "true" amount of credit. Background GC
accumulates credit locally up to a limit and only then flushes to the
global pool. Similarly, mutator assists will draw from the credit pool
in batches.
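The producer side of the batching might look like this sketch (the
flush limit and names are invented):

	package sketch

	import "sync/atomic"

	// bgScanCredit is the global pool; it may lag the true credit by up
	// to creditFlushLimit per background worker.
	var bgScanCredit int64

	const creditFlushLimit = 1024 // local accumulation bound (invented)

	type bgWorker struct{ local int64 }

	// noteScanWork records credit locally and flushes to the global pool
	// only when enough has built up, keeping contention low.
	func (w *bgWorker) noteScanWork(work int64) {
		w.local += work
		if w.local >= creditFlushLimit {
			atomic.AddInt64(&bgScanCredit, w.local)
			w.local = 0
		}
	}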
Change-Id: I1aa4fc604b63bf53d1ee2a967694dffdfc3e255e
Reviewed-on: https://go-review.googlesource.com/8834
Reviewed-by: Rick Hudson <rlh@golang.org>
This implements tracking the scan work ratio of a GC cycle and using
this to estimate the scan work that will be required by the next GC
cycle. Currently this estimate is unused; it will be used to drive
mutator assists.
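The estimate itself is simple arithmetic; a sketch with invented names:

	package sketch

	// estimateScanWork carries the observed scan-work-per-marked-byte
	// ratio of the finished cycle forward to size the next cycle's work.
	func estimateScanWork(scanWorkDone, heapMarked, nextHeapGoal int64) int64 {
		ratio := float64(scanWorkDone) / float64(heapMarked)
		return int64(ratio * float64(nextHeapGoal))
	}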
Change-Id: I8685b59d89cf1d83eddfc9b30d84da4e3a7f4b72
Reviewed-on: https://go-review.googlesource.com/8833
Reviewed-by: Rick Hudson <rlh@golang.org>
This tracks the amount of scan work in terms of scanned pointers
during the concurrent mark phase. We'll use this information to
estimate scan work for the next cycle.
Currently, the work counter is accumulated in gcWork, and dispose
atomically adds it to a global work counter. dispose happens
relatively infrequently, so the contention on the global counter
should be low. If this turns out to be an issue, we can reduce the
number of disposes, and if it's still a problem, we can switch to
per-P counters.
Change-Id: Iac0364c466ee35fab781dbbbe7970a5f3c4e1fc1
Reviewed-on: https://go-review.googlesource.com/8832
Reviewed-by: Rick Hudson <rlh@golang.org>
These currently use portable implementations in terms of their uint64
counterparts.
Change-Id: Icba5f7134cfcf9d0429edabcdd73091d97e5e905
Reviewed-on: https://go-review.googlesource.com/8831
Reviewed-by: Rick Hudson <rlh@golang.org>
This change exposes reflect.ArrayOf to create new reflect.Type array
types at runtime, when given a reflect.Type element.
- reflect: implement ArrayOf
- reflect: tests for ArrayOf
- runtime: document that typeAlg is used by reflect and must be kept
synchronized
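For example, with this change the following now works:

	package main

	import (
		"fmt"
		"reflect"
	)

	func main() {
		// Construct the type [4]int32 at runtime from its element type.
		t := reflect.ArrayOf(4, reflect.TypeOf(int32(0)))
		v := reflect.New(t).Elem() // an addressable zero [4]int32
		v.Index(2).SetInt(7)
		fmt.Println(t, v.Interface()) // prints: [4]int32 [0 0 7 0]
	}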
Fixes #5996.
Change-Id: I5d07213364ca915c25612deea390507c19461758
Reviewed-on: https://go-review.googlesource.com/4111
Reviewed-by: Keith Randall <khr@golang.org>
This change fixes inconsistent error values on
Lookup{Addr,CNAME,Host,IP,MX,NS,Port,SRV,TXT}.
Updates #4856.
Change-Id: I059bc8ffb96ee74dff8a8c4e8e6ae3e4a462a7ef
Reviewed-on: https://go-review.googlesource.com/9108
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change fixes inconsistent error values on Interfaces,
InterfaceAddrs, InterfaceBy{Index,Name}, and Addrs and MulticastAddrs
methods of Interface.
Updates #4856.
Change-Id: I09e65522a22f45c641792d774ebf7a0081b874ad
Reviewed-on: https://go-review.googlesource.com/9140
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change fixes inconsistent error values on
Set{Deadline,ReadDeadline,WriteDeadline,ReadBuffer,WriteBuffer} for
Conn, Listener and PacketConn, and
Set{KeepAlive,KeepAlivePeriod,Linger,NoDelay} for TCPConn.
Updates #4856.
Change-Id: I34ca5e98f6de72863f85b2527478b20d8d5394dd
Reviewed-on: https://go-review.googlesource.com/9109
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change fixes inconsistent error values on
File{Conn,Listener,PacketConn} and the File method of Conn and Listener.
Updates #4856.
Change-Id: I3197b9277bef0e034427e3a44fa77523acaa2520
Reviewed-on: https://go-review.googlesource.com/9101
Reviewed-by: Ian Lance Taylor <iant@golang.org>
They don't really make any sense on this side of the compiler/linker divide.
Some of the code touching these fields was the support for R_TLS when
thechar=='6', which turns out to be dead, so I just removed all of that.
Change-Id: I4e265613c4e7fcc30a965fffb7fd5f45017f06f3
Reviewed-on: https://go-review.googlesource.com/9107
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Also moves a few server test helpers into mockserver_test.go.
Change-Id: I5a95c9bc6f0c4683751bcca77e26a8586a377466
Reviewed-on: https://go-review.googlesource.com/9106
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change-Id: I1ea4175466c9113c1f41b012ba8266ee2b06e3a3
Reviewed-on: https://go-review.googlesource.com/8522
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Thanks to Russ for the hints.
Change-Id: Ie35a71d432b9d68bd30c7a364b4dce1bd3db806e
Reviewed-on: https://go-review.googlesource.com/9102
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Optimized heapBitsForObject by special-casing objects whose size is a
power of two. When a span holding such objects is initialized, I added
a mask that, when ANDed with an interior pointer, yields the base of
the object. For the garbage benchmark this reduced CPU_CLK_UNHALTED
in heapBitsForObject from 7.7% to 5.9% of the total, and INST_RETIRED
from 12.2% to 8.7%.
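The core of the trick as a standalone sketch (constants invented; the
runtime computes the mask when the span is initialized):

	package main

	import "fmt"

	func main() {
		const spanBase = uintptr(0x1000) // page-aligned span start (illustrative)
		const elemSize = uintptr(64)     // object size, a power of two

		// Because spanBase is aligned to elemSize, rounding any interior
		// pointer down to its object's base is one AND with this mask,
		// replacing a division by the element size.
		baseMask := ^(elemSize - 1)

		p := spanBase + 3*elemSize + 17 // interior pointer into the 4th object
		fmt.Printf("%#x -> object base %#x\n", p, p&baseMask)
	}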
Here are the benchmarks that changed by at least 1% in either direction.
benchmark old ns/op new ns/op delta
BenchmarkFmtFprintfString 249 221 -11.24%
BenchmarkFmtFprintfInt 247 223 -9.72%
BenchmarkFmtFprintfEmpty 76.5 69.6 -9.02%
BenchmarkBinaryTree17 4106631412 3744550160 -8.82%
BenchmarkFmtFprintfFloat 424 399 -5.90%
BenchmarkGoParse 4484421 4242115 -5.40%
BenchmarkGobEncode 8803668 8449107 -4.03%
BenchmarkFmtManyArgs 1494 1436 -3.88%
BenchmarkGobDecode 10431051 10032606 -3.82%
BenchmarkFannkuch11 2591306713 2517400464 -2.85%
BenchmarkTimeParse 361 371 +2.77%
BenchmarkJSONDecode 70620492 68830357 -2.53%
BenchmarkRegexpMatchMedium_1K 54693 53343 -2.47%
BenchmarkTemplate 90008879 91929940 +2.13%
BenchmarkTimeFormat 380 387 +1.84%
BenchmarkRegexpMatchEasy1_32 111 113 +1.80%
BenchmarkJSONEncode 21359159 21007583 -1.65%
BenchmarkRegexpMatchEasy1_1K 603 613 +1.66%
BenchmarkRegexpMatchEasy0_32 127 129 +1.57%
BenchmarkFmtFprintfIntInt 399 393 -1.50%
BenchmarkRegexpMatchEasy0_1K 373 378 +1.34%
Change-Id: I78e297161026f8b5cc7507c965fd3e486f81ed29
Reviewed-on: https://go-review.googlesource.com/8980
Reviewed-by: Austin Clements <austin@google.com>
http://golang.org/cl/7623 refactored how line history works and
introduced a new TrimPathPrefix field to replace the existing Trimpath
field, but never removed the latter or updated its users.
Fixes #10503.
Change-Id: Ief90a55b6cef2e8062b59856a4c7dcc0df01d3f2
Reviewed-on: https://go-review.googlesource.com/9113
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
This is primarily about making the code clearer, but as part of the cleanup
componentgen is now much more consistent about what it does and does
not attempt.
The new limit is 8 move instructions.
The old limit was either 3 or 4 small things, but in the details it was
quite inconsistent: ints, interfaces, strings, and slices all counted as small;
it handled a struct containing two ints, but not a struct containing a struct
containing two ints; it handled slices and interfaces and a struct containing
a slice but not a struct containing an interface; and so on.
The new code runs at about the same speed as the old code if limited to 4 moves,
but that's much more restrictive when the pieces are strings or interfaces.
With the limit raised to 8 moves, this CL is sometimes a significant improvement:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4361174290 4362870005 +0.04%
BenchmarkFannkuch11 3008201483 2974408533 -1.12%
BenchmarkFmtFprintfEmpty 79.0 79.5 +0.63%
BenchmarkFmtFprintfString 281 261 -7.12%
BenchmarkFmtFprintfInt 264 262 -0.76%
BenchmarkFmtFprintfIntInt 447 443 -0.89%
BenchmarkFmtFprintfPrefixedInt 354 361 +1.98%
BenchmarkFmtFprintfFloat 500 452 -9.60%
BenchmarkFmtManyArgs 1688 1693 +0.30%
BenchmarkGobDecode 11718456 11741179 +0.19%
BenchmarkGobEncode 10144620 10161627 +0.17%
BenchmarkGzip 437631642 435271877 -0.54%
BenchmarkGunzip 109468858 110173606 +0.64%
BenchmarkHTTPClientServer 76248 75362 -1.16%
BenchmarkJSONEncode 24160474 23753091 -1.69%
BenchmarkJSONDecode 84470041 82902026 -1.86%
BenchmarkMandelbrot200 4676857 4687040 +0.22%
BenchmarkGoParse 4954602 4923965 -0.62%
BenchmarkRegexpMatchEasy0_32 151 151 +0.00%
BenchmarkRegexpMatchEasy0_1K 450 452 +0.44%
BenchmarkRegexpMatchEasy1_32 131 130 -0.76%
BenchmarkRegexpMatchEasy1_1K 713 695 -2.52%
BenchmarkRegexpMatchMedium_32 227 218 -3.96%
BenchmarkRegexpMatchMedium_1K 63911 62966 -1.48%
BenchmarkRegexpMatchHard_32 3163 3026 -4.33%
BenchmarkRegexpMatchHard_1K 93985 90266 -3.96%
BenchmarkRevcomp 650697093 649211600 -0.23%
BenchmarkTemplate 107049170 106804076 -0.23%
BenchmarkTimeParse 448 452 +0.89%
BenchmarkTimeFormat 468 460 -1.71%
Change-Id: I08563133883e88bb9db9e9e4dee438a5af2787da
Reviewed-on: https://go-review.googlesource.com/9004
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
This CL revises CL 7504 to use explicit uintptr types for the
struct fields that are going to be updated sometimes without
write barriers. The result is that the fields are now updated *always*
without write barriers.
This approach has two important properties:
1) Now the GC never looks at the field, so if the missing reference
could cause a problem, it will do so all the time, not just when the
write barrier is missed at just the right moment.
2) Now a write barrier never happens for the field, avoiding the
(correct) detection of inconsistent write barriers when GODEBUG=wbshadow=1.
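The pattern in miniature (nothing here is the runtime's real type; it
just shows why a uintptr field never gets a write barrier):

	package sketch

	import "unsafe"

	// Storing a reference as a uintptr hides it from the GC entirely:
	// the field is never scanned and plain stores to it never get a
	// write barrier. The referent must be kept alive by other means.
	type header struct {
		buf uintptr // really a *byte, managed manually
	}

	func setBuf(h *header, p unsafe.Pointer) {
		h.buf = uintptr(p) // ordinary integer store; no write barrier emitted
	}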
Change-Id: Iebd3962c727c0046495cc08914a8dc0808460e0e
Reviewed-on: https://go-review.googlesource.com/9019
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The majority of this CL was prepared via scripted invocations of
`gofmt -w -r "$SYM -> obj.$SYM" cmd/internal/ld/*.go` and `gofmt -w -r
"ld.$SYM -> obj.$SYM" cmd/?l/*.go`.
Because of issue #7417, that was followed by repeatedly running an AWK
script to identify lines that differed other than whitespace changes
or "ld." or "obj." prefixes and manually restoring comments.
Finally, the redundant constants from cmd/internal/ld/link.go were
removed, and "goimports -w" was used to cleanup import lines.
Passes rsc.io/toolstash/buildall, even when modified to also build cmd.
Fixes #10055.
Change-Id: Icd5dbe819a3b6520ce883748e60017dc8e9a2e85
Reviewed-on: https://go-review.googlesource.com/9112
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The go command prints paths in errors relative to its current directory.
Since all.bash and run.bash are run in $GOROOT/src, prefer to run
the go command, so that the relative paths are correct.
Before this CL, running all.bash in $GOROOT/src:
##### Testing race detector
# net/http
src/net/http/transport.go:1257: cannot take the address of <node EFACE>
This is wrong (or at least less useful) because there is no $GOROOT/src/src/net/http directory.
Change-Id: I0c0d52c22830d79b3715f51a6329a3d33de52a72
Reviewed-on: https://go-review.googlesource.com/9157
Reviewed-by: Rob Pike <r@golang.org>
The callee-saved registers must be saved because for the c-shared case
this code is invoked from C code in the system library, and that code
expects the registers to be saved. The tests were passing because in
the normal case the code calls a cgo function that naturally saves
callee-saved registers anyhow. However, it fails when the code takes
the non-cgo path.
Change-Id: I9c1f5e884f5a72db9614478049b1863641c8b2b9
Reviewed-on: https://go-review.googlesource.com/9114
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Turns out all the necessary pieces have already been submitted.
Change-Id: I19c8d614cd756821ce400ca7a338029002780b18
Reviewed-on: https://go-review.googlesource.com/9076
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Change-Id: I0d3f9841500e0a41f1c427244869bf3736a31e18
Reviewed-on: https://go-review.googlesource.com/9075
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
In https://golang.org/cl/7797 I attempted to use myimportpath to set the value
of the go.importpath.$foo. symbol for the module being compiled, but I messed
it up and only set the name (which the linker rewrites anyway). This led to
the importpath for the module being compiled being "". This was hard to notice,
because all modules that import another define the importpath for their
imported modules correctly -- but main is not imported, and this meant that the
reflect module saw all fields of all types defined in the main module as
exported.
The fix is to do what I meant to do the first time: add a test and
change the go tool to compile main packages with -p main rather than
-p command-line-arguments.
Fixes #10332
Change-Id: I5fc6e9b1dc2b26f058641e382f9a56a526eca291
Reviewed-on: https://go-review.googlesource.com/8481
Reviewed-by: Russ Cox <rsc@golang.org>
This avoids a race condition with go1.go wanting to examine files in
the current directory with filepath.Walk(".", walkFn).
Fixes #10497.
Change-Id: I2159f40a08d1a768195dbb7ea3c27e38cf9740bb
Reviewed-on: https://go-review.googlesource.com/9110
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change fixes inconsistent error values on Accept{,TCP,Unix}.
Updates #4856.
Change-Id: Ie3bb534c19a724cacb3ea3f3656e46c810b2123f
Reviewed-on: https://go-review.googlesource.com/8996
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change fixes inconsistent error values on Close, CloseRead and
CloseWrite.
Updates #4856.
Change-Id: I3c4d46ccd7d6e1a2f52d8e75b512f62c533a368d
Reviewed-on: https://go-review.googlesource.com/8994
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change fixes inconsistent error values on Write,
WriteTo{,UDP,IP,Unix} and WriteMsg{UDP,IP,Unix}.
Updates #4856.
Change-Id: I4208ab6a0650455ad7d70a80a2d6169351d6055f
Reviewed-on: https://go-review.googlesource.com/8993
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change fixes inconsistent error values on Read,
ReadFrom{,UDP,IP,Unix} and ReadMsg{UDP,IP,Unix}.
Updates #4856.
Change-Id: I7de5663094e09be2d78cdb18ce6f1e7ec260888d
Reviewed-on: https://go-review.googlesource.com/8992
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Clear out gg.go files, and move things into consistent places between
the cmd/?g directories.
Change-Id: I81e04180613b806e0bfbb88519e66111ce9f74a3
Reviewed-on: https://go-review.googlesource.com/9080
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change adds a type addrinfoErrno to represent getaddrinfo- and
getnameinfo-specific errors, and uses it in cgo-based lookup functions.
Also retags cgo files for clarity and does minor cleanup.
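The shape of the new type, roughly (a sketch; the real implementation
wraps the C error codes and gai_strerror via cgo):

	package sketch

	import "fmt"

	// addrinfoErrno mirrors an EAI_* code from getaddrinfo/getnameinfo.
	// eaiAgain is an invented stand-in for C.EAI_AGAIN.
	type addrinfoErrno int

	const eaiAgain addrinfoErrno = -3

	func (e addrinfoErrno) Error() string   { return fmt.Sprintf("addrinfo errno %d", int(e)) }
	func (e addrinfoErrno) Temporary() bool { return e == eaiAgain }
	func (e addrinfoErrno) Timeout() bool   { return false }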
Change-Id: I6db7130ad7bf35bbd4e8839a97759e1364c43828
Reviewed-on: https://go-review.googlesource.com/9020
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I have left the Diag calls in place where I believe Ctxt.Cursym != nil,
which means this CL is not the improvement I had hoped for. However,
it is now safe to call Exitf wherever you are in the linker, which
makes it easier to reason about some code.
Change-Id: I8261e761ca9719f7d216e2747314adfe464e3337
Reviewed-on: https://go-review.googlesource.com/8668
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>