This works after golang.org/cl/13120 is running on the
coordinator (maybe it already is).
Change-Id: I4053d8e2f32fafd47b927203a6f66d5858e23376
Reviewed-on: https://go-review.googlesource.com/13165
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Tracing functionality was moved from runtime/pprof to runtime/trace.
Change-Id: I694e0f209d043c7ffecb113f1825175bf963dde3
Reviewed-on: https://go-review.googlesource.com/13074
Reviewed-by: Rob Pike <r@golang.org>
This is what is causing freebsd/arm to crash mysteriously when using cgo.
The bug was introduced in golang.org/cl/4030, which moved this code out
of rt0_go and into its own function. The ARM ABI says that calls must
be made with the stack pointer at an 8-byte boundary, but only FreeBSD
seems to crash when this is violated.
Fixes #10119.
Change-Id: Ibdbe76b2c7b80943ab66b8abbb38b47acb70b1e5
Reviewed-on: https://go-review.googlesource.com/13161
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
This change allows the download page to redirect the user to
/doc/install?download=filename so the user can see installation
instructions specific to the file they are downloading.
This change also expands the "Test your Go installation" section
to instruct the user to create a workspace, hopefully leading
to less confusion down the line.
It also changes the front page download link to go directly
to the downloads page, which will in turn take the user to the
installation instructions (the original destination).
This is related to this change to the tools repo:
https://golang.org/cl/13180
Change-Id: I658327bdb93ad228fb1846e389b281b15da91b1d
Reviewed-on: https://go-review.googlesource.com/13151
Reviewed-by: Chris Broadfoot <cbro@golang.org>
When commit 510fd13 enabled assists during the scan phase, it failed
to also update the code in the GC controller that computed the assist
CPU utilization and adjusted the trigger based on it. Fix that code so
it uses the start of the scan phase as the wall-clock time when
assists were enabled rather than the start of the mark phase.
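As a rough sketch of the corrected accounting (hypothetical names, not
the runtime's actual fields), the utilization window has to start when
assists could first run, which is now the start of the scan phase:

    package sketch

    // assistUtilization reports the fraction of CPU spent in assists over
    // the window in which assists were actually enabled.
    func assistUtilization(assistCPUTime, scanPhaseStart, now int64, gomaxprocs int) float64 {
        wall := now - scanPhaseStart // previously, incorrectly, the mark phase start
        if wall <= 0 {
            return 0
        }
        return float64(assistCPUTime) / float64(wall*int64(gomaxprocs))
    }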
Change-Id: I05013734b4448c3e2c730dc7b0b5ee28c86ed8cf
Reviewed-on: https://go-review.googlesource.com/13048
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
At the start of a GC cycle, the garbage collector computes the assist
ratio based on the total scannable heap size. This was intended to be
conservative; after all, this assumes the entire heap may be reachable
and hence needs to be scanned. But it only assumes that the *current*
entire heap may be reachable. It fails to account for heap allocated
during the GC cycle. If the trigger ratio is very low (near zero), and
most of the heap is reachable when GC starts (which is likely if the
trigger ratio is near zero), then it's possible for the mutator to
create new, reachable heap fast enough that the assists won't keep up
based on the assist ratio computed at the beginning of the cycle. As a
result, the heap can grow beyond the heap goal (by hundreds of megs in
stress tests like in issue #11911).
We already have some vestigial logic for dealing with situations like
this; it just doesn't run often enough. Currently, every 10 ms during
the GC cycle, the GC revises the assist ratio. This was put in before
we switched to a conservative assist ratio (when we really were using
estimates of scannable heap), and it turns out to be exactly what we
need now. However, every 10 ms is far too infrequent for a rapidly
allocating mutator.
This commit reuses this logic, but replaces the 10 ms timer with
revising the assist ratio every time the heap is locked, which
coincides precisely with when the statistics used to compute the
assist ratio are updated.
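A hedged sketch of what "revising the assist ratio" amounts to, with
hypothetical names (the real logic lives in the runtime's GC controller);
the point of the change is simply to call this wherever the heap lock is
taken, since that is exactly when these statistics change:

    package sketch

    // reviseAssistRatio recomputes the scan work a mutator must perform
    // per byte it allocates, from the *current* statistics.
    func reviseAssistRatio(scanWorkRemaining, heapGoal, heapLive int64) float64 {
        heapDistance := heapGoal - heapLive
        if heapDistance <= 0 {
            heapDistance = 1 // already past the goal: assist as hard as possible
        }
        return float64(scanWorkRemaining) / float64(heapDistance)
    }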
Fixes #11911.
Change-Id: I377b231ab064946228378fa10422a46d1b50f4c5
Reviewed-on: https://go-review.googlesource.com/13047
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This was useful in debugging the mutator assist behavior for #11911,
and it fits with the other gcpacertrace output.
Change-Id: I1e25590bb4098223a160de796578bd11086309c7
Reviewed-on: https://go-review.googlesource.com/13046
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Proportional concurrent sweep is currently based on a ratio of spans
to be swept per bytes of object allocation. However, proportional
sweeping is performed during span allocation, not object allocation,
in order to minimize contention and overhead. Since objects are
allocated from spans after those spans are allocated, the system tends
to operate in debt, which means when the next GC cycle starts, there
is often sweep debt remaining, so GC has to finish the sweep, which
delays the start of the cycle and delays enabling mutator assists.
For example, it's quite likely that many Ps will simultaneously refill
their span caches immediately after a GC cycle (because GC flushes the
span caches), but at this point, there has been very little object
allocation since the end of GC, so very little sweeping is done. The
Ps then allocate objects from these cached spans, which drives up the
bytes of object allocation, but since these allocations are coming
from cached spans, nothing considers whether more sweeping has to
happen. If the sweep ratio is high enough (which can happen if the
next GC trigger is very close to the retained heap size), this can
easily represent a sweep debt of thousands of pages.
Fix this by making proportional sweep proportional to the number of
bytes of spans allocated, rather than the number of bytes of objects
allocated. Prior to allocating a span, both the small object path and
the large object path ensure credit for allocating that span, so the
system operates in the black, rather than in the red.
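A sketch of the idea with made-up names (not the actual mcentral/mheap
code): before a span is handed out, sweep enough pages to cover that
span's bytes, so the credit is paid up front:

    package sketch

    const pageSize = 8 << 10

    // sweepPagesPerByte is the proportional-sweep ratio, now measured per
    // byte of span allocation rather than per byte of object allocation.
    var sweepPagesPerByte float64

    // deductSweepCredit sweeps pages to cover an upcoming allocation of
    // npages, keeping the system "in the black".
    func deductSweepCredit(npages uintptr, sweepOnePage func() bool) {
        pagesOwed := int64(sweepPagesPerByte * float64(npages*pageSize))
        for pagesOwed > 0 && sweepOnePage() {
            pagesOwed--
        }
    }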
Combined with the previous commit, this should eliminate all sweeping
from GC start up. On the stress test in issue #11911, this reduces the
time spent sweeping during GC (and delaying start up) by several
orders of magnitude:
                 mean   99%ile      max
    pre fix      1 ms    11 ms   144 ms
    post fix   270 ns   735 ns   916 ns
Updates #11911.
Change-Id: I89223712883954c9d6ec2a7a51ecb97172097df3
Reviewed-on: https://go-review.googlesource.com/13044
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently it's possible for the next_gc heap size trigger computed for
the next GC cycle to be less than the current allocated heap size.
This means the next cycle will start immediately, which means there's
no time to perform the concurrent sweep between GC cycles. This places
responsibility for finishing the sweep on GC itself, which delays GC
start-up and hence delays mutator assist.
Fix this by ensuring that next_gc is always at least a little higher
than the allocated heap size, so we won't trigger the next cycle
instantly.
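A minimal sketch of the clamp, with a made-up margin (the runtime
chooses its own):

    package sketch

    // clampNextGC keeps the next trigger strictly above the live heap so
    // a new cycle cannot fire instantly, leaving a window for sweeping.
    func clampNextGC(heapLive, computedTrigger uint64) uint64 {
        const minGap = 1 << 20 // illustrative 1 MB floor, not the real margin
        if computedTrigger < heapLive+minGap {
            computedTrigger = heapLive + minGap
        }
        return computedTrigger
    }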
Updates #11911.
Change-Id: I74f0b887bf187518d5fedffc7989817cbcf30592
Reviewed-on: https://go-review.googlesource.com/13043
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently there are two sensitive periods during which a mutator can
allocate past the heap goal but mutator assists can't be enabled: 1)
at the beginning of GC between when the heap first passes the heap
trigger and sweep termination and 2) at the end of GC between mark
termination and when the background GC goroutine parks. During these
periods there's no back-pressure or safety net, so a rapidly
allocating mutator can allocate past the heap goal. This is
exacerbated if there are many goroutines because the GC coordinator is
scheduled as any other goroutine, so if it gets preempted during one
of these periods, it may stay preempted for a long period (10s or 100s
of milliseconds).
Normally the mutator does scan work to create back-pressure against
allocation, but there is no scan work during these periods. Hence, as
a fall back, if a mutator would assist but can't yet, simply yield the
CPU. This delays the mutator somewhat, but more importantly gives more
CPU time to the GC coordinator for it to complete the transition.
This is obviously a workaround. Issue #11970 suggests a far better but
far more invasive way to fix this.
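The fallback is roughly the following, sketched with hypothetical flags
(the runtime tracks this state internally):

    package sketch

    import "runtime"

    // maybeYieldForGC is called on the allocation slow path: past the goal
    // but with assists not yet enabled, all we can do is hand the CPU back
    // so the GC coordinator can finish the phase transition.
    func maybeYieldForGC(heapLive, heapGoal uint64, assistsEnabled bool) {
        if heapLive > heapGoal && !assistsEnabled {
            runtime.Gosched()
        }
    }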
Updates #11911. (This very nearly fixes the issue, but about once
every 15 minutes I get a GC cycle where the assists are enabled but
don't do enough work.)
Change-Id: I9768b79e3778abd3e06d306596c3bd77f65bf3f1
Reviewed-on: https://go-review.googlesource.com/13026
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently allocation checks the GC trigger speculatively during
allocation and then triggers the GC without rechecking. As a result,
it's possible for G 1 and G 2 to detect the trigger simultaneously
and both enter startGC; G 1 actually starts GC while G 2 gets preempted
until after the whole GC cycle, and then G 2 immediately starts another
GC cycle even though the heap is now well under the trigger.
Fix this by re-checking the GC trigger non-speculatively just before
actually kicking off a new GC cycle.
This contributes to #11911 because when this happens, we definitely
don't finish the background sweep before starting the next GC cycle,
which can significantly delay the start of concurrent scan.
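In outline, the fix is the classic double-checked pattern; the plumbing
below is hypothetical, but the shape matches the description above:

    package sketch

    import "sync"

    type gcState struct {
        mu       sync.Mutex
        running  bool
        trigger  uint64
        heapLive func() uint64 // re-read for the second check, not cached
    }

    func (s *gcState) startIfNeeded(startCycle func()) {
        if s.heapLive() < s.trigger { // speculative check at allocation time
            return
        }
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.running || s.heapLive() < s.trigger { // non-speculative re-check
            return
        }
        s.running = true
        startCycle()
    }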
Change-Id: I560ab79ba5684ba435084410a9765d28f5745976
Reviewed-on: https://go-review.googlesource.com/13025
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Do not assume that if stat shows multiple links we should mark the
file as a hardlink in the tar format. If the hardlink target was not
referenced, this caused a link to "/". On an overlay file system, all
files have multiple links.
The caller must keep the inode references and set TypeLink, Size = 0,
and LinkName themselves.
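For illustration, a Linux-only sketch of what a caller now has to do
(all names here are hypothetical): keep an inode map and, on a repeat
sighting of a multi-link file, emit a hard-link entry with TypeLink,
Size 0, and Linkname pointing at the first path written for that inode:

    package sketch

    import (
        "archive/tar"
        "os"
        "syscall"
    )

    func headerFor(name string, fi os.FileInfo, seen map[uint64]string) (*tar.Header, error) {
        hdr, err := tar.FileInfoHeader(fi, "")
        if err != nil {
            return nil, err
        }
        hdr.Name = name
        if st, ok := fi.Sys().(*syscall.Stat_t); ok && st.Nlink > 1 {
            if first, ok := seen[st.Ino]; ok {
                hdr.Typeflag = tar.TypeLink
                hdr.Linkname = first
                hdr.Size = 0
            } else {
                seen[st.Ino] = name
            }
        }
        return hdr, nil
    }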
Change-Id: I873b8a235bc8f8fbb271db74ee54232da36ca013
Reviewed-on: https://go-review.googlesource.com/13045
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The buildmode docs mention exported functions, but don't say anything
about how to export them. Mention the cgo tool to make this somewhat
clearer.
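For reference, the mechanics the docs now point to: cgo's //export
directive is what makes a Go function callable from C when building with
-buildmode=c-archive or -buildmode=c-shared. A minimal example:

    package main

    import "C"

    //export Add
    func Add(a, b C.int) C.int {
        return a + b
    }

    // main is required even though it is not used in these build modes.
    func main() {}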
Fixes #11955.
Change-Id: Ie5420445daa87f5aceec6ad743465d5d32d0a786
Reviewed-on: https://go-review.googlesource.com/13080
Reviewed-by: Russ Cox <rsc@golang.org>
For niceness, when go/exact was moved from x/tools, it
was renamed go/constant.
For simplicity, when go/types was moved from x/tools, its
imports of (now) go/constant were done with a rename:
import exact "go/constant"
This kept the code just as it was before and avoided the issue
of what to call the internal constant called, um, constant.
But not all was hidden, as the text of some fields of structs and
the like leaked the old name, so things like "exact.Value" appeared
in type definitions and function signatures in the documentation.
This is unacceptable.
Fix the documentation issue by fixing the code. Rename the constant
constant constant_, and remove the renaming import.
This should go into 1.5. It's mostly a mechanical change, is
internal to the package, and fixes the documentation. It contains
no semantic changes except to fix a benchmark that was broken
in the original transition.
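After the change, client code and the documentation both see the package
under its real name, with no "exact" alias in sight, e.g.:

    package main

    import (
        "fmt"
        "go/constant"
        "go/token"
    )

    func main() {
        x := constant.MakeInt64(3)
        y := constant.MakeInt64(4)
        fmt.Println(constant.BinaryOp(x, token.ADD, y)) // 7
    }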
Fixes #11949.
Change-Id: Ieb94b6558535b504180b1378f19e8f5a96f92d3c
Reviewed-on: https://go-review.googlesource.com/13051
Reviewed-by: Russ Cox <rsc@golang.org>
I walked through the steps for a contribution but ended up
with an error when doing "git mail" because I didn't have a
signed agreement.
Added a section to check for or create one through Gerrit right
after the user has created the account and logged in.
Moved some info from the copyright section to the new section.
Change-Id: I79bbd3e18fc3a742fa59a242085da14be9e19ba0
Reviewed-on: https://go-review.googlesource.com/13062
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
This was confusing when I was trying to fix go build -o.
Perhaps due to that fix, this can now be simplified from
three functions to one.
Change-Id: I878a6d243b14132a631e7c62a3bb6d101bc243ea
Reviewed-on: https://go-review.googlesource.com/13027
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Quoting the new docs:
«
If the arguments to build are a list of .go files, build treats
them as a list of source files specifying a single package.
When compiling a single main package, build writes
the resulting executable to an output file named after
the first source file ('go build ed.go rx.go' writes 'ed' or 'ed.exe')
or the source code directory ('go build unix/sam' writes 'sam' or 'sam.exe').
The '.exe' suffix is added when writing a Windows executable.
When compiling multiple packages or a single non-main package,
build compiles the packages but discards the resulting object,
serving only as a check that the packages can be built.
The -o flag, only allowed when compiling a single package,
forces build to write the resulting executable or object
to the named output file, instead of the default behavior described
in the last two paragraphs.
»
There is a change in behavior here, namely that 'go build -o x.a x.go'
where x.go is not a command (not package main) did not write any
output files (back to at least Go 1.2) but now writes x.a.
This seems more reasonable than trying to explain that -o is
sometimes silently ignored.
Otherwise the behavior is unchanged.
The lines being deleted in goFilesPackage look like they are
setting up 'go build x.go' to write 'x.a', but they were overridden
by the p.target = "" in runBuild. Again back to at least Go 1.2,
'go build x.go' for a non-main package has never produced
output. It seems better to keep it that way than to change it,
both for historical consistency and for consistency with
'go build strings' and 'go build std'.
All of this behavior is now tested.
Fixes #10865.
Change-Id: Iccdf21f366fbc8b5ae600a1e50dfe7fc3bff8b1c
Reviewed-on: https://go-review.googlesource.com/13024
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
It was failing with multiple goroutines a few out of every thousand
runs (with errRequestCanceled) because it was using the same
*http.Request for all 5 RoundTrips, but the RoundTrips' goroutines
(notably the readLoop method) were all still running, sharing that
same pointer. Because the response has no body (which is what
TestZeroLengthPostAndResponse tests), the readLoop was marking the
connection as reusable early (before the caller read until the body's
EOF), but the Transport code was clearing the Request's cancelation
func *AFTER* the caller had already received it from RoundTrip. This
let the test continue looping and do the next request with the same
pointer, fetch a connection, and then between getConn and roundTrip
have an invariant violated: the Request's cancelation func was nil,
tripping this check:
    if !pc.t.replaceReqCanceler(req.Request, pc.cancelRequest) {
        pc.t.putIdleConn(pc)
        return nil, errRequestCanceled
    }
The solution is to clear the request cancelation func in the readLoop
goroutine in the no-body case before it's returned to the caller.
This now passes reliably:
$ go test -race -run=TestZeroLengthPostAndResponse -count=3000
I think we've only seen this recently because we now randomize scheduling
of goroutines in race mode (https://golang.org/cl/11795). This race
has existed for a long time but the window was hard to hit.
Change-Id: Idb91c582919f85aef5b9e5ef23706f1ba9126e9a
Reviewed-on: https://go-review.googlesource.com/13070
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Introduced in https://go-review.googlesource.com/12865 (git rev c2db5f4c).
This fix doesn't add any new lock acquisitions: it just moves the
existing one, previously taken inside the unreadDataSize method,
further out.
It became flaky at rev c2db5f4c, but now reliably passes again:
$ go test -v -race -run=TestTransportAndServerSharedBodyRace -count=100 net/http
Fixes #11985
Change-Id: I6956d62839fd7c37e2f7441b1d425793f4a0db30
Reviewed-on: https://go-review.googlesource.com/12909
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
HTTP servers attempt to entirely consume a request body before sending a
response. However, when doing so, the server previously ignored any errors
encountered.
Unfortunately, the errors triggered at this stage are indicative of at
least a couple problems: read timeouts and chunked encoding errors.
This means properly crafted and/or timed requests could lead to a
"smuggled" request.
The fix is to inspect the errors created by the response body Reader,
and treat anything other than io.EOF or ErrBodyReadAfterClose as
fatal to the connection.
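The classification is roughly the following (a hypothetical helper, not
the server's actual code path): only a clean EOF or a read-after-close
ends the drain benignly; anything else, such as a read timeout or a
malformed chunked body, must poison the connection:

    package sketch

    import (
        "io"
        "net/http"
    )

    func drainErrorIsFatal(err error) bool {
        switch err {
        case nil, io.EOF, http.ErrBodyReadAfterClose:
            return false
        default:
            return true
        }
    }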
Fixes #11930
Change-Id: I0bf18006d7d8f6537529823fc450f2e2bdb7c18e
Reviewed-on: https://go-review.googlesource.com/12865
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This makes the receiver name consistent with the rest of the methods on
type Server.
Change-Id: Ic2a007d3b5eb50bd87030e15405e9856109cf590
Reviewed-on: https://go-review.googlesource.com/13035
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
The test uses external linking mode, which is probably not available
if cgo does not work.
Fixes #11969.
Change-Id: Id1c2828cd2540391e16b422bf51674ba6ff084b0
Reviewed-on: https://go-review.googlesource.com/13005
Reviewed-by: Russ Cox <rsc@golang.org>
The percolation of errors upward in the load process could
drop errors, meaning that a build tree could, depending on the
processing order, import the same directory as both "p/vendor/x"
and as "x". That's not supposed to be allowed. But then, worse,
the build would generate two jobs for building that directory,
which would use the same work space and overwrite each other's files,
leading to very strange failures.
Two fixes:
1. Fix the propagation of errors upward (prefer errors over success).
2. Check explicitly for duplicated packages before starting a build.
New test for #1.
Since #2 can't happen once #1 is fixed, #2 was tested by hand after
reverting the fix for #1.
Fixes #11913.
Change-Id: I6d2fc65f93b8fb5f3b263ace8d5f68d803a2ae5c
Reviewed-on: https://go-review.googlesource.com/13022
Reviewed-by: Ian Lance Taylor <iant@golang.org>
On most systems, a pointer is the worst case alignment, so adding
a pointer field at the end of a struct guarantees there will be no
padding added after that field (to satisfy overall struct alignment
due to some more-aligned field also present).
In the runtime, the map implementation needs a quick way to
get to the overflow pointer, which is last in the bucket struct,
so it uses size - sizeof(pointer) as the offset.
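To illustrate the trick with a made-up bucket layout (not the
compiler-generated one): a trailing pointer field normally sits at
offset size-of-struct minus size-of-pointer, because nothing can be
padded in after it.

    package sketch

    import "unsafe"

    type bucket struct {
        tophash  [8]uint8
        keys     [8]uint64
        values   [8]uint64
        overflow *bucket // last field: no trailing padding on most systems
    }

    func overflowOf(b *bucket) *bucket {
        off := unsafe.Sizeof(*b) - unsafe.Sizeof(b) // size - sizeof(pointer)
        return *(**bucket)(unsafe.Pointer(uintptr(unsafe.Pointer(b)) + off))
    }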
NaCl/amd64p32 is the exception, as always.
The worst case alignment is 64 bits but pointers are 32 bits.
There's a long history that is not worth going into, but when
we moved the overflow pointer to the end of the struct,
we didn't get the padding computation right.
The compiler computed the regular struct size and then
on amd64p32 added another 32-bit field.
And the runtime assumed it could step back two 32-bit fields
(one 64-bit register size) to get to the overflow pointer.
But in fact if the struct needed 64-bit alignment, the computation
of the regular struct size would have added a 32-bit pad already,
and then the code unconditionally added a second 32-bit pad.
This placed the overflow pointer three words from the end, not two.
The last two were padding, and since the runtime was consistent
about using the second-to-last word as the overflow pointer,
no harm done in the sense of overwriting useful memory.
But writing the overflow pointer to a non-pointer word of memory
means that the GC can't see the overflow blocks, so it will
collect them prematurely. Then bad things happen.
Correct all this in a few steps:
1. Add an explicit check at the end of the bucket layout in the
compiler that the overflow field is last in the struct, never
followed by padding.
2. When padding is needed on nacl (not always, just when needed),
insert it before the overflow pointer, to preserve the "last in the struct"
property.
3. Let the compiler have the final word on the width of the struct,
by inserting an explicit padding field instead of overwriting the
results of the width computation it does.
4. For the same reason (tell the truth to the compiler), set the type
of the overflow field when we're trying to pretend it's not a pointer
(in this case the runtime maintains a list of the overflow blocks
elsewhere).
5. Make the runtime use "last in the struct" as its location algorithm.
This fixes TestTraceStress on nacl/amd64p32.
The 'bad map state' and 'invalid free list' failures no longer occur.
Fixes #11838.
Change-Id: If918887f8f252d988db0a35159944d2b36512f92
Reviewed-on: https://go-review.googlesource.com/12971
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Fixes some minor issues with quoted-string handling when parsing
the local-part.
Those strings should return an error:
- quoted-string without any content: `""@test.com`
- quoted-string containing tab: "\"\t\"@test.com"
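A quick way to exercise the two rejected forms with net/mail:

    package main

    import (
        "fmt"
        "net/mail"
    )

    func main() {
        for _, in := range []string{`""@test.com`, "\"\t\"@test.com"} {
            _, err := mail.ParseAddress(in)
            fmt.Printf("%q: %v\n", in, err) // both should now report a parse error
        }
    }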
Fixes #11293
Change-Id: Ied93eb6831915c9b1f8e727cea14168af21f8d3b
Reviewed-on: https://go-review.googlesource.com/12905
Reviewed-by: Russ Cox <rsc@golang.org>
The code already fixed large non-stack offsets
but explicitly excluded stack references.
Perhaps you could get away with that before,
but current versions of nacl reject such stack
references. Rewrite them the same as the others.
For #11956 but probably not the last problem.
Change-Id: I0db4e3a1ed4f88ccddf0d30228982960091d9fb7
Reviewed-on: https://go-review.googlesource.com/13010
Reviewed-by: Dave Cheney <dave@cheney.net>
Dangling pointer error. Unlikely to trigger in practice, but still.
Found by running GODEBUG=efence=1 GOGC=1 trace.test.
Change-Id: Ice474dedcf62dd33ab77526287a023ba3b166db9
Reviewed-on: https://go-review.googlesource.com/12991
Reviewed-by: Austin Clements <austin@google.com>
In https://golang.org/cl/12080 we forbade installing cross-compiled
binaries into a subdirectory of $GOBIN, in order to fix
https://golang.org/issue/9769. However, that fix was too aggressive,
in that it also forbade installing into a subdirectory of $GOPATH/bin.
This patch permits installing cross-compiled binaries into a
subdirectory of $GOPATH/bin while continuing to forbid installing into a
subdirectory of $GOBIN.
Fixes #11778.
Change-Id: Ibc9919554e8c275beff54ec8bf919cfaa03b11ba
Reviewed-on: https://go-review.googlesource.com/12938
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The spec didn't specify several aspects of expression switches:
- The switch expression is evaluated exactly once.
- Switch expressions evaluating to an untyped value are converted
to the respective default type before use.
- An (untyped) nil value is not permitted as expression switch
value. (We could permit it relatively easily, but gc doesn't,
and disallowing it is in symmetry with the rules for var decls
without explicit type and untyped initializer expressions.)
- The comparison x == t between each case expression x and
switch expression value t must be valid.
- (Some) duplicate constant case expressions are not permitted.
This change also clarifies the following issues:
4524:  mult. equal int const switch case values should be illegal
       -> spec issue fixed
6398:  switch w/ no value uses bool rather than untyped bool
       -> spec issue fixed
11578: allows duplicate switch cases      -> go/types bug
11667: int overflow in switch expression  -> go/types bug
11668: use of untyped nil in switch       -> not a gc bug
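A small program illustrating the rules spelled out above:

    package main

    import "fmt"

    func main() {
        // The switch expression is evaluated exactly once; an untyped
        // expression such as 1+2 is first converted to its default type (int).
        switch 1 + 2 {
        case 3:
            fmt.Println("three")
        // case 3: // duplicate constant case: rejected (issue 4524)
        }

        // A switch with no expression behaves like "switch true", where true
        // has default type bool (issue 6398), so each case must be a boolean
        // expression comparable with it.
        switch {
        case len("go") == 2:
            fmt.Println("yes")
        }

        // switch nil { ... }   // untyped nil is not permitted as the switch value
    }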
Fixes #4524.
Fixes #6398.
Fixes #11668.
Change-Id: Iae4ab3e714575a5d11c92c9b8fbf027aa706b370
Reviewed-on: https://go-review.googlesource.com/12711
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Rob Pike <r@golang.org>