TestMemStats currently requires that NumGC != 0, but GC may
legitimately not have run (for example, if this test runs first, or
GOGC is set high, etc). Accept NumGC == 0 and instead sanity check
NumGC by making sure that all pause times after NumGC are 0.
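A minimal sketch of the relaxed check (hypothetical test code, not
the actual TestMemStats change): NumGC may now be zero, but any
PauseNs slot that no GC has written yet must still be zero.

    package runtime_test

    import (
        "runtime"
        "testing"
    )

    func TestPauseSanity(t *testing.T) {
        var st runtime.MemStats
        runtime.ReadMemStats(&st)
        // PauseNs is a 256-entry circular buffer; with fewer than 256
        // GCs so far, the slots at index NumGC and above are untouched.
        for i := int(st.NumGC); i < len(st.PauseNs); i++ {
            if st.PauseNs[i] != 0 {
                t.Errorf("PauseNs[%d] = %d, want 0 (NumGC = %d)", i, st.PauseNs[i], st.NumGC)
            }
        }
    }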
Fixes #11989.
Change-Id: I4203859fbb83292d59a509f2eeb24d6033e7aabc
Reviewed-on: https://go-review.googlesource.com/17830
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
This simply copies the current version of math/big into the
compiler directory. The change was created automatically by
running cmd/compile/internal/big/vendor.bash. No other manual
changes.
Change-Id: Ica225d196b3ac10dfd9d4dc1e4e4ef0b22812ff9
Reviewed-on: https://go-review.googlesource.com/17900
Run-TryBot: Robert Griesemer <gri@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The Transport had a delicate protocol between its readLoop goroutine
and the goroutine calling RoundTrip. The basic concern is that the
caller's RoundTrip goroutine wants to wait for either a
connection-level error (the conn dying) or the response. But sometimes
both happen: there's a valid response (without a body), but the conn
is also going away. Both goroutines' logic dealing with this had grown
large and complicated with hard-to-follow comments over the years.
Simplify and document. Pull some bits into functions and do all
bodyless stuff in one place (it's special enough), rather than having
a bunch of conditionals scattered everywhere. One test is no longer
even applicable since the race it tested is no longer possible (the
code doesn't exist).
The bug that this fixes is that when the Transport reads a bodyless
response from a server, it was returning that response before
returning the persistent connection to the idle pool. As a result,
~1/1000 of serial requests would end up creating a new connection
rather than re-using the just-used connection due to goroutine
scheduling chance. Instead, this now adds bodyless responses'
connections back to the idle pool first, then sends the response to
the RoundTrip goroutine, while making sure that the RoundTrip goroutine
is outside of its select on the connection dying.
There's a new buffered channel involved now, which is a minor
complication, but it's much more self-contained and well-documented
than the previous complexity. (The alternative of making the
responseAndError channel itself unbuffered is too invasive and risky
at this point; it would require a number of changes to avoid
deadlocked goroutines in error cases.)
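The shape of the new hand-off, as an illustrative sketch only
(hypothetical types and helpers, not the actual Transport code): the
connection is recycled before the response is delivered, and the
delivery channel is buffered so the readLoop's send cannot block.

    package sketch

    type conn struct{}
    type response struct{}

    type result struct {
        res *response
        err error
    }

    // deliverBodyless recycles c into the idle pool and only then hands
    // the bodyless response to the RoundTrip goroutine. out must have
    // capacity 1 so this send never blocks the read loop, even if the
    // caller has already left its select.
    func deliverBodyless(c *conn, res *response, idle chan *conn, out chan result) {
        select {
        case idle <- c: // conn is reusable before the caller sees the response
        default: // idle pool full; this sketch just drops the conn
        }
        out <- result{res: res}
    }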
In any case, flakes look to be gone now. We'll see if trybots agree.
Fixes #13633
Change-Id: I95a22942b2aa334ae7c87331fddd751d4cdfdffc
Reviewed-on: https://go-review.googlesource.com/17890
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
This matches SIGEMT on other systems that use it (SIGEMT is not used
on most Linux systems).
Change-Id: If394c06c9ed1cb3ea2564385a8edfbed8b5566d1
Reviewed-on: https://go-review.googlesource.com/17874
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Also fixed a conversion bug and added a corresponding test case.
Change-Id: I26f143fbc8d40a6d073ecb095e61b461495f3d68
Reviewed-on: https://go-review.googlesource.com/17872
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
This might deflake it. Or it'll at least give us more debugging clues.
Fixes #13626 maybe
Change-Id: Ie8cd0375d60dad033ec6a64830a90e7b9152a3d9
Reviewed-on: https://go-review.googlesource.com/17825
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
CloseNotifier wasn't well specified previously. This CL simplifies its
implementation, clarifies the public documentation on CloseNotifier,
clarifies internal documentation on conn, and fixes two CloseNotifier
bugs in the process.
The main change, though, is tightening the rules and expectations for using
CloseNotifier:
* the caller must consume the Request.Body first (old rule, unwritten)
* the received value is the "true" value (old rule, unwritten)
* no promises for channel sends after Handler returns (old rule, unwritten)
* a subsequent pipelined request fires the CloseNotifier (new behavior;
previously it never fired and thus effectively deadlocked as in #13165)
* advise that it should only be used without HTTP/1.1 pipelining (use HTTP/2
or non-idempotent browsers). Not that browsers actually use pipelining.
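A hypothetical handler illustrating those rules (example code, not
part of this CL): drain the request body before relying on
CloseNotify, and treat a receive as the definitive signal.

    package main

    import (
        "io"
        "io/ioutil"
        "net/http"
        "time"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        io.Copy(ioutil.Discard, r.Body) // rule: consume the body first

        work := time.After(5 * time.Second) // stand-in for slow work
        select {
        case <-w.(http.CloseNotifier).CloseNotify():
            return // received value is authoritative; no sends promised after return
        case <-work:
            io.WriteString(w, "done\n")
        }
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }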
The main implementation change is that each Handler now gets its own
CloseNotifier channel value, rather than sharing one between the whole
conn. This means Handlers can't affect subsequent requests. This is
how HTTP/2's Server works too. The old docs never clarified a behavior
either way. The other side effect of each request getting its own
CloseNotifier channel is that one handler can't "poison" the
underlying conn preventing subsequent requests on the same connection
from using CloseNotifier (this is #9763).
In the old implementation, once any request on a connection used
CloseNotifier, the conn's underlying bufio.Reader source was switched
from the TCPConn to the read side of the pipe being fed by a
never-ending copy. Since it was impossible to abort that never-ending
copy, we could never get back to a fresh state where it was possible
to return the underlying TCPConn to callers of Hijack. Now, instead of
a never-ending Copy, the background goroutine doing a Read from the
TCPConn (or *tls.Conn) only reads a single byte. That single byte
can be in the request body, a socket timeout error, an io.EOF error, or
the first byte of the second body. In any case, the new *connReader
type stitches sync and async reads together like an io.MultiReader. To
clarify the flow of Read data and combat the complexity of too many
wrapper Reader types, the *connReader absorbs the io.LimitReader
previously used for bounding request header reads. The
liveSwitchReader type is removed. (An unused switchWriter type is also
removed.)
Many fields on *conn are also documented more fully.
Fixes #9763 (CloseNotify + Hijack together)
Fixes #13165 (deadlock with CloseNotify + pipelined requests)
Change-Id: I40abc0a1992d05b294d627d1838c33cbccb9dd65
Reviewed-on: https://go-review.googlesource.com/17750
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, sysmon triggers a forced GC solely based on
memstats.last_gc. However, memstats.last_gc isn't updated until mark
termination, so once sysmon starts triggering forced GC, it will keep
triggering them until GC finishes. The first of these actually starts
a GC; those in between print "GC forced" but gcStart returns
immediately because gcphase != _GCoff; and the last one may start
another GC if the previous GC finishes (and sets last_gc) between
sysmon triggering it and gcStart checking the GC phase.
Fix this by expanding the condition for starting a forced GC to also
require that no GC is currently running. This, combined with the way
forcegchelper blocks until the GC cycle is started, ensures sysmon
only starts one GC when the time exceeds the forced GC threshold.
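The expanded condition, as a sketch with hypothetical names (the real
check lives in sysmon and uses runtime internals):

    package sketch

    // shouldForceGC reports whether sysmon should wake the forced-GC
    // helper: the forced-GC period has elapsed since the last completed
    // cycle and no cycle is currently running.
    func shouldForceGC(lastGC, now, forcePeriod int64, gcRunning bool) bool {
        return lastGC != 0 && now-lastGC > forcePeriod && !gcRunning
    }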
Fixes #13458.
Change-Id: Ie6cf841927f6085136be3f45259956cd5cf10d23
Reviewed-on: https://go-review.googlesource.com/17819
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The addition of stack barrier locking to copystack subsumes the
partial fix from commit bbd1a1c for SIGPROF during copystack. With the
stack barrier locking, this commit simplifies the rule in sigprof to:
the user stack can be traced only if sigprof can acquire the stack
barrier lock.
Updates #12932, #13362.
Change-Id: I1c1f80015053d0ac7761e9e0c7437c2aba26663f
Reviewed-on: https://go-review.googlesource.com/17192
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
After fixing #13587, I noticed that the "OAS2FUNC in disguise" block
looked like it probably needed write barriers too. However, testing
revealed the multi-value "return f()" case was already being handled
correctly.
It turns out this block is dead code due to "return f()" already being
transformed into "t1, t2, ..., tN := f(); return t1, t2, ..., tN" by
orderstmt when f is a multi-valued function.
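For reference, the rewrite orderstmt performs, expressed as Go source
(the compiler does this on its internal representation):

    package sketch

    func f() (int, int) { return 1, 2 }

    // Before ordering:
    func g() (int, int) { return f() }

    // What orderstmt effectively produces, which is why the
    // "OAS2FUNC in disguise" block is unreachable:
    func gOrdered() (int, int) {
        t1, t2 := f()
        return t1, t2
    }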
Updates #13587.
Change-Id: Icde46dccc55beda2ea5fd5fcafc9aae26cec1552
Reviewed-on: https://go-review.googlesource.com/17759
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Currently, runtime/debug.SetGCPercent does not adjust the controller
trigger ratio. As a result, runtime reductions of GOGC don't take full
effect until after one more concurrent cycle has happened, which
adjusts the trigger ratio to account for the new gcpercent.
Fix this by lowering the trigger ratio if necessary in setGCPercent.
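A sketch of the adjustment with hypothetical names (the real code
manipulates the runtime's GC controller state under a lock):

    package sketch

    var (
        gcpercent    int32
        triggerRatio float64 // heap growth ratio at which the next GC starts
    )

    func setGCPercentSketch(in int32) {
        gcpercent = in
        // Lower the trigger ratio immediately so a reduced GOGC takes
        // effect on the next cycle rather than the one after it.
        if max := float64(gcpercent) / 100; triggerRatio > max {
            triggerRatio = max
        }
    }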
Change-Id: I4d23e0c58d91939b86ac60fa5d53ef91d0d89e0c
Reviewed-on: https://go-review.googlesource.com/17813
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we drop worldsema and then print the gctrace. We did this so
that if stderr is a pipe or a blocked terminal, blocking on printing
the gctrace would not block another GC from starting. However, this is
a bit of a fool's errand because a blocked runtime print will block
the whole M/P, so after GOMAXPROCS GC cycles, the whole system will
freeze. Furthermore, now this is much less of an issue because
allocation will block indefinitely if it can't start a GC (whereas it
used to be that allocation could run away). Finally, this allows
another GC cycle to start while the previous cycle is printing the
gctrace, which leads to races on reading various statistics to print
them and the next GC cycle overwriting those statistics.
Fix this by moving the release of worldsema after the gctrace print.
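Illustrative ordering only (hypothetical code, not the runtime's):
the trace is printed while worldsema is still held, so the next cycle
cannot start and overwrite the statistics being printed.

    package sketch

    import (
        "fmt"
        "sync"
    )

    func finishCycle(worldsema *sync.Mutex, gctrace bool, stats string) {
        if gctrace {
            fmt.Println(stats) // the next cycle cannot start yet
        }
        worldsema.Unlock() // release only after the stats are out
    }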
Change-Id: I3d044ea0f77d80f3b4050af6b771e7912258662a
Reviewed-on: https://go-review.googlesource.com/17812
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently we reset the sweep stats just after gcMarkTermination starts
the world and releases worldsema. However, background sweeping can
start the moment we start the world and, in fact, pause sweeping can
start the moment we release worldsema (because another GC cycle can
start up), so these need to be cleared before starting the world.
Change-Id: I95701e3de6af76bb3fbf2ee65719985bf57d20b2
Reviewed-on: https://go-review.googlesource.com/17811
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, we update memstats.heap_live from mcache.local_cachealloc
whenever we lock the heap (e.g., to obtain a fresh span or to release
an unused span). However, under the right circumstances,
local_cachealloc can accumulate allocations up to the size of
the *entire heap* without flushing them to heap_live. Specifically,
since span allocations from an mcentral don't lock the heap, if a
large number of pages are held in an mcentral and the application
continues to use and free objects of that size class (e.g., the
BinaryTree17 benchmark), local_cachealloc won't be flushed until the
mcentral runs out of spans.
This is a problem because, unlike many of the memory statistics that
are purely informative, heap_live is used to determine when the
garbage collector should start and how hard it should work.
This commit eliminates local_cachealloc, instead atomically updating
heap_live directly. To control contention, we do this only when
obtaining a span from an mcentral. Furthermore, we make heap_live
conservative: allocating a span assumes that all free slots in that
span will be used and accounts for these when the span is
allocated, *before* the objects themselves are. This is important
because 1) this triggers the GC earlier than necessary rather than
potentially too late and 2) this leads to a conservative GC rate
rather than a GC rate that is potentially too low.
Alternatively, we could have flushed local_cachealloc when it passed
some threshold, but this would require determining a threshold and
would cause heap_live to underestimate the true value rather than
overestimate.
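The accounting change, as a rough sketch with hypothetical names (not
the runtime's mcentral code): heap_live is credited for every free
slot in a span the moment the span is handed to an mcache.

    package sketch

    import "sync/atomic"

    var heapLive uint64 // bytes of heap considered live for GC pacing

    type span struct {
        freeSlots uintptr // objects still unallocated in this span
        elemSize  uintptr // size of one object
    }

    // cacheSpan charges heap_live as if every free slot in s will be
    // allocated, before any object actually is. This errs toward
    // triggering GC early rather than late.
    func cacheSpan(s *span) {
        atomic.AddUint64(&heapLive, uint64(s.freeSlots*s.elemSize))
    }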
Fixes #12199.
name old time/op new time/op delta
BinaryTree17-12 2.88s ± 4% 2.88s ± 1% ~ (p=0.470 n=19+19)
Fannkuch11-12 2.48s ± 1% 2.48s ± 1% ~ (p=0.243 n=16+19)
FmtFprintfEmpty-12 50.9ns ± 2% 50.7ns ± 1% ~ (p=0.238 n=15+14)
FmtFprintfString-12 175ns ± 1% 171ns ± 1% -2.48% (p=0.000 n=18+18)
FmtFprintfInt-12 159ns ± 1% 158ns ± 1% -0.78% (p=0.000 n=19+18)
FmtFprintfIntInt-12 270ns ± 1% 265ns ± 2% -1.67% (p=0.000 n=18+18)
FmtFprintfPrefixedInt-12 235ns ± 1% 234ns ± 0% ~ (p=0.362 n=18+19)
FmtFprintfFloat-12 309ns ± 1% 308ns ± 1% -0.41% (p=0.001 n=18+19)
FmtManyArgs-12 1.10µs ± 1% 1.08µs ± 0% -1.96% (p=0.000 n=19+18)
GobDecode-12 7.81ms ± 1% 7.80ms ± 1% ~ (p=0.425 n=18+19)
GobEncode-12 6.53ms ± 1% 6.53ms ± 1% ~ (p=0.817 n=19+19)
Gzip-12 312ms ± 1% 312ms ± 2% ~ (p=0.967 n=19+20)
Gunzip-12 42.0ms ± 1% 41.9ms ± 1% ~ (p=0.172 n=19+19)
HTTPClientServer-12 63.7µs ± 1% 63.8µs ± 1% ~ (p=0.639 n=19+19)
JSONEncode-12 16.4ms ± 1% 16.4ms ± 1% ~ (p=0.954 n=19+19)
JSONDecode-12 58.5ms ± 1% 57.8ms ± 1% -1.27% (p=0.000 n=18+19)
Mandelbrot200-12 3.86ms ± 1% 3.88ms ± 0% +0.44% (p=0.000 n=18+18)
GoParse-12 3.67ms ± 2% 3.66ms ± 1% -0.52% (p=0.001 n=18+19)
RegexpMatchEasy0_32-12 100ns ± 1% 100ns ± 0% ~ (p=0.257 n=19+18)
RegexpMatchEasy0_1K-12 347ns ± 1% 347ns ± 1% ~ (p=0.527 n=18+18)
RegexpMatchEasy1_32-12 83.7ns ± 2% 83.1ns ± 2% ~ (p=0.096 n=18+19)
RegexpMatchEasy1_1K-12 509ns ± 1% 505ns ± 1% -0.75% (p=0.000 n=18+19)
RegexpMatchMedium_32-12 130ns ± 2% 129ns ± 1% ~ (p=0.962 n=20+20)
RegexpMatchMedium_1K-12 39.5µs ± 2% 39.4µs ± 1% ~ (p=0.376 n=20+19)
RegexpMatchHard_32-12 2.04µs ± 0% 2.04µs ± 1% ~ (p=0.195 n=18+17)
RegexpMatchHard_1K-12 61.4µs ± 1% 61.4µs ± 1% ~ (p=0.885 n=19+19)
Revcomp-12 540ms ± 2% 542ms ± 4% ~ (p=0.552 n=19+17)
Template-12 69.6ms ± 1% 71.2ms ± 1% +2.39% (p=0.000 n=20+20)
TimeParse-12 357ns ± 1% 357ns ± 1% ~ (p=0.883 n=18+20)
TimeFormat-12 379ns ± 1% 362ns ± 1% -4.53% (p=0.000 n=18+19)
[Geo mean] 62.0µs 61.8µs -0.44%
name old time/op new time/op delta
XBenchGarbage-12 5.89ms ± 2% 5.81ms ± 2% -1.41% (p=0.000 n=19+18)
Change-Id: I96b31cca6ae77c30693a891cff3fe663fa2447a0
Reviewed-on: https://go-review.googlesource.com/17748
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
deductSweepCredit expects the size in bytes of the span being
allocated, but mCentral_CacheSpan passes the size of a single object
in the span. As a result, we don't sweep enough on that call and when
mCentral_CacheSpan later calls reimburseSweepCredit, it's very likely
to underflow mheap_.spanBytesAlloc, which causes the next call to
deductSweepCredit to think it owes a huge number of pages and finish
off the whole sweep.
In addition to causing the occasional allocation that triggers the
full sweep to be potentially extremely expensive relative to other
allocations, this can indirectly slow down many other allocations.
deductSweepCredit uses sweepone to sweep spans, which returns
fully-unused spans to the heap, where these spans are freed and
coalesced with neighboring free spans. On the other hand, when
mCentral_CacheSpan sweeps a span, it does so with the intent to
immediately reuse that span and, as a result, will not return the span
to the heap even if it is fully unused. This saves on the cost of
locking the heap, finding a span, and initializing that span. For
example, before this change, with GOMAXPROCS=1 (or the background
sweeper disabled) BinaryTree17 returned roughly 220K spans to the heap
and allocated new spans from the heap roughly 232K times. After this
change, it returns 1.3K spans to the heap and allocates new spans from
the heap 39K times. (With background sweeping these numbers are
effectively unchanged because the background sweeper sweeps almost all
of the spans with sweepone; however, parallel sweeping saves more than
the cost of allocating spans from the heap.)
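The fix in miniature (names follow the description above, and stubs
stand in for the real runtime functions): charge sweep credit for the
bytes of the whole span being acquired, not for a single object.

    package sketch

    func deductSweepCredit(spanBytes, callerSweepPages uintptr) {
        // sweep proportionally to the bytes about to be allocated (stub)
    }

    func reimburseSweepCredit(unusedBytes uintptr) {
        // return credit for bytes that were not actually allocated (stub)
    }

    func cacheSpan(npages, pageSize, objectSize uintptr) {
        spanBytes := npages * pageSize
        deductSweepCredit(spanBytes, 0) // was, in effect: deductSweepCredit(objectSize, 0)
        // ... obtain and sweep the span ...
        reimburseSweepCredit(spanBytes)
    }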
Fixes #13535.
Fixes #13589.
name old time/op new time/op delta
BinaryTree17-12 3.03s ± 1% 2.86s ± 4% -5.61% (p=0.000 n=18+20)
Fannkuch11-12 2.48s ± 1% 2.49s ± 1% ~ (p=0.060 n=17+20)
FmtFprintfEmpty-12 50.7ns ± 1% 50.9ns ± 1% +0.43% (p=0.025 n=15+16)
FmtFprintfString-12 174ns ± 2% 174ns ± 2% ~ (p=0.539 n=19+20)
FmtFprintfInt-12 158ns ± 1% 158ns ± 1% ~ (p=0.300 n=18+20)
FmtFprintfIntInt-12 269ns ± 2% 269ns ± 2% ~ (p=0.784 n=20+18)
FmtFprintfPrefixedInt-12 233ns ± 1% 234ns ± 1% ~ (p=0.389 n=18+18)
FmtFprintfFloat-12 309ns ± 1% 310ns ± 1% +0.25% (p=0.048 n=18+18)
FmtManyArgs-12 1.10µs ± 1% 1.10µs ± 1% ~ (p=0.259 n=18+19)
GobDecode-12 7.81ms ± 1% 7.72ms ± 1% -1.17% (p=0.000 n=19+19)
GobEncode-12 6.56ms ± 0% 6.55ms ± 1% ~ (p=0.433 n=17+19)
Gzip-12 318ms ± 2% 317ms ± 1% ~ (p=0.578 n=19+18)
Gunzip-12 42.1ms ± 2% 42.0ms ± 0% -0.45% (p=0.007 n=18+16)
HTTPClientServer-12 63.9µs ± 1% 64.0µs ± 1% ~ (p=0.146 n=17+19)
JSONEncode-12 16.4ms ± 1% 16.4ms ± 1% ~ (p=0.271 n=19+19)
JSONDecode-12 58.1ms ± 1% 58.0ms ± 1% ~ (p=0.152 n=18+18)
Mandelbrot200-12 3.85ms ± 0% 3.85ms ± 0% ~ (p=0.126 n=19+18)
GoParse-12 3.71ms ± 1% 3.64ms ± 1% -1.86% (p=0.000 n=20+18)
RegexpMatchEasy0_32-12 100ns ± 2% 100ns ± 1% ~ (p=0.588 n=20+20)
RegexpMatchEasy0_1K-12 346ns ± 1% 347ns ± 1% +0.27% (p=0.014 n=17+20)
RegexpMatchEasy1_32-12 82.9ns ± 3% 83.5ns ± 3% ~ (p=0.096 n=19+20)
RegexpMatchEasy1_1K-12 506ns ± 1% 506ns ± 1% ~ (p=0.530 n=19+19)
RegexpMatchMedium_32-12 129ns ± 2% 129ns ± 1% ~ (p=0.566 n=20+19)
RegexpMatchMedium_1K-12 39.4µs ± 1% 39.4µs ± 1% ~ (p=0.713 n=19+20)
RegexpMatchHard_32-12 2.05µs ± 1% 2.06µs ± 1% +0.36% (p=0.008 n=18+20)
RegexpMatchHard_1K-12 61.6µs ± 1% 61.7µs ± 1% ~ (p=0.286 n=19+20)
Revcomp-12 538ms ± 1% 541ms ± 2% ~ (p=0.081 n=18+19)
Template-12 71.5ms ± 2% 71.6ms ± 1% ~ (p=0.513 n=20+19)
TimeParse-12 357ns ± 1% 357ns ± 1% ~ (p=0.935 n=19+18)
TimeFormat-12 352ns ± 1% 352ns ± 1% ~ (p=0.293 n=19+20)
[Geo mean] 62.0µs 61.9µs -0.21%
name old time/op new time/op delta
XBenchGarbage-12 5.83ms ± 2% 5.86ms ± 3% ~ (p=0.247 n=19+20)
Change-Id: I790bb530adace27ccf25d372f24a11954b88443c
Reviewed-on: https://go-review.googlesource.com/17745
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Analogous to https://go-review.googlesource.com/#/c/8457/ this
code synthesizes a set of program arguments for Android on the
arm64 architecture.
Change-Id: I851958b4b0944ec79d7a1426a3bb2cfc31746797
Reviewed-on: https://go-review.googlesource.com/17782
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Otherwise it's hard to tell the difference between
link1 and link2 or other tests.
Change-Id: I36c153cccb10959535595938dfbc49db930b9fac
Reviewed-on: https://go-review.googlesource.com/17851
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Updates bundled copy of x/net/http2 to include
https://golang.org/cl/17823 (catching panics in Handlers)
Fixes #13555
Change-Id: I08e4e38e736a8d93f5ec200e8041c143fc6eafce
Reviewed-on: https://go-review.googlesource.com/17824
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Use two internal representations for Float values (similar to what is done
for Int values). Transparently switch to a big.Float representation when
big.Rat values become unwieldy. This is almost never needed for real-world
programs but it is trivial to create test cases that cannot be handled with
rational arithmetic alone.
As a consequence, the go/constant API semantics changes slightly: Until now,
a value could always be represented in its "smallest" form (e.g., float values
that happened to be integers would be represented as integers). Now, constant
Kind depends on how the value was created, rather than its actual value. (The
reason why we cannot automatically "normalize" values to their smallest form
anymore is because floating-point numbers are not exact in general; and thus
normalization is often not possible in the first place, or would throw away
precision when it is not desired.) This has repercussions as to how constant
Values are used in go/types and required corresponding adjustments.
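A small example of the new semantics and conversions (illustrative
only, using the exported go/constant API):

    package main

    import (
        "fmt"
        "go/constant"
        "go/token"
    )

    func main() {
        x := constant.MakeFromLiteral("3.0", token.FLOAT, 0)
        fmt.Println(x.Kind() == constant.Float) // true: Kind follows how the value was created
        y := constant.ToInt(x)
        fmt.Println(y.Kind() == constant.Int) // true: 3.0 converts to an integer exactly
        fmt.Println(x.String(), constant.ExactString(x)) // short form and exact form
    }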
Details of the changes:
go/constant package:
- use big.Rat and big.Float values to represent floating-point values
(internal change)
- changed semantic of Value.Kind accordingly
- String now returns a short, human-readable form of a value
(this leads to better error messages in go/types)
- added ToInt, ToFloat, and ToComplex conversion functions
- added ExactString to obtain an exact string form of a value
go/types:
- adjusted and simplified implementation of representableConst
- adjusted various places where Value.Kind was expected to be "smallest"
by calling the respective ToInt/Float/Complex conversion functions
- enabled 5 disabled tests in stdlib_test.go that now work
api checker:
- print all constant values in a short human-readable form (floats are
printed in floating-point form), but also print an exact form if it
is different from the short form
- adjusted test golden file and go.1.1.text reference file
Fixes #11327.
Change-Id: I492b704aae5b0238e5b7cee13e18ffce61193587
Reviewed-on: https://go-review.googlesource.com/17360
Reviewed-by: Alan Donovan <adonovan@google.com>
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This CL also updates the bundled http2 package with the h2 fix from
https://golang.org/cl/17757
Fixes #13159
Change-Id: If0e3b4bd04d0dceed67d1b416ed838c9f1961576
Reviewed-on: https://go-review.googlesource.com/17758
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
TestLchown was creating a hard-link instead of a symlink. It would
have passed if you replaced all Lchown() calls in it with Chown().
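For illustration (hypothetical paths and IDs): Lchown only differs
from Chown when the path is a symlink, so the test has to create one
with os.Symlink rather than os.Link.

    package main

    import "os"

    func main() {
        os.Symlink("target.txt", "link") // a symlink; os.Link would make a hard link
        os.Lchown("link", 1000, 1000)    // changes ownership of the link itself
        os.Chown("link", 1000, 1000)     // follows the link and changes the target
    }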
Change-Id: I3a108948ec25fcbac8ea890a6eaf5bac094f0800
Reviewed-on: https://go-review.googlesource.com/17397
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Followup to CL 17716, which updated cgo's boilerplate prologue code to
use standard C's _Complex instead of GCC's __complex extension.
Change-Id: I74f29b0cc3d13cab2853441cafbfe77853bba4f9
Reviewed-on: https://go-review.googlesource.com/17820
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
To prevent races with the garbage collector, stack spans cannot be
reused as heap spans during a GC. We deal with this by caching stack
spans during GC and releasing them at the end of mark termination.
However, while our cache lets us reuse small stack spans, currently
large stack spans are *not* reused. This can cause significant memory
growth in programs that allocate large stacks rapidly, but grow the
heap slowly (such as in issue #13552).
Fix this by adding logic to reuse large stack spans for other stacks.
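A cache sketch with hypothetical names (the runtime's actual
structures differ): during GC, freed large stack spans are parked by
size and handed out again for new stacks instead of going back to the
heap.

    package sketch

    type span struct{ npages uintptr }

    var largeStackCache = map[uintptr][]*span{} // npages -> spans freed during this GC

    func newSpanFromHeap(npages uintptr) *span { return &span{npages: npages} } // stub

    func allocLargeStack(npages uintptr) *span {
        if free := largeStackCache[npages]; len(free) > 0 {
            s := free[len(free)-1]
            largeStackCache[npages] = free[:len(free)-1]
            return s // reuse instead of growing the heap
        }
        return newSpanFromHeap(npages)
    }

    func freeLargeStackDuringGC(s *span) {
        largeStackCache[s.npages] = append(largeStackCache[s.npages], s)
    }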
Fixes #11466.
Fixes #13552. Without this change, the program in this issue creeps to
over 1GB of memory over the course of a few hours. With this change,
it stays rock solid at around 30MB.
Change-Id: If8b2d85464aa80c96230a1990715e39aa803904f
Reviewed-on: https://go-review.googlesource.com/17814
Reviewed-by: Keith Randall <khr@golang.org>
These three files contain only code written for Go
(and trivial amounts at that), not any code ported
from Inferno or Plan 9.
Remove the incorrect Inferno/Plan 9 notices.
Fixes #13576.
Change-Id: Ib9901fb360232282aae5ee0f4aa527bd6f4eaaed
Reviewed-on: https://go-review.googlesource.com/17779
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I thought that we avoided creating on-disk Unix sockets,
but I was mistaken. Use one to test CL 17458.
Fixes #11826.
Change-Id: Iaa1fb007b95fa6be48200586522a6d4789ecd346
Reviewed-on: https://go-review.googlesource.com/17725
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
(instead of using a GCC extension).
Change-Id: I110dc45bfe5f1377fe3453070eccde283b5cc161
Reviewed-on: https://go-review.googlesource.com/17716
Reviewed-by: Russ Cox <rsc@golang.org>
New implementation of TimeoutHandler: buffer everything to memory.
All or nothing: either the handler finishes completely within the
timeout (in which case the wrapper writes it all), or it misses the
timeout and none of it gets written, in which case the wrapper can
reliably print the error response without fear that some of the
wrapped Handler's code already wrote to the output.
Now the goroutine running the wrapped Handler has its own write buffer
and Header copy.
Document the limitations.
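An illustrative version of the all-or-nothing approach (a hypothetical
helper using httptest.ResponseRecorder as the buffer, not the actual
net/http implementation):

    package sketch

    import (
        "net/http"
        "net/http/httptest"
        "time"
    )

    func runWithTimeout(w http.ResponseWriter, r *http.Request, h http.Handler, d time.Duration) {
        rec := httptest.NewRecorder() // private write buffer and Header copy
        done := make(chan struct{})
        go func() {
            h.ServeHTTP(rec, r)
            close(done)
        }()
        select {
        case <-done:
            // Handler finished in time: flush everything it wrote.
            for k, v := range rec.Header() {
                w.Header()[k] = v
            }
            w.WriteHeader(rec.Code)
            w.Write(rec.Body.Bytes())
        case <-time.After(d):
            // Nothing has been written to w yet, so the error page is reliable.
            http.Error(w, "Service Unavailable", http.StatusServiceUnavailable)
        }
    }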
Fixes #9162
Change-Id: Ia058c1d62cefd11843e7a2fc1ae1609d75de2441
Reviewed-on: https://go-review.googlesource.com/17752
Reviewed-by: David Symonds <dsymonds@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
At present, the series of File{Conn,Listener,PacketConn} APIs are the
only way to configure platform-specific socket options such as
SO_REUSE{ADDR,PORT} and TCP_FASTOPEN. This change adds missing test cases
that test read and write operations on connections created by File APIs
and removes redundant parameter tests which are already tested in
server_test.go.
Also adds a comment on full stack test cases for IPConn.
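For context, a hedged, Unix-flavored sketch of that workflow (error
handling omitted; calls and constants are from syscall on Linux/BSD):
configure the socket yourself, then hand the descriptor to the net
package.

    package sketch

    import (
        "net"
        "os"
        "syscall"
    )

    func reuseAddrListener() (net.Listener, error) {
        fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
        if err != nil {
            return nil, err
        }
        syscall.SetsockoptInt(fd, syscall.SOL_SOCKET, syscall.SO_REUSEADDR, 1)
        syscall.Bind(fd, &syscall.SockaddrInet4{Port: 8080})
        syscall.Listen(fd, 128)

        f := os.NewFile(uintptr(fd), "listener")
        defer f.Close()            // FileListener dups the descriptor
        return net.FileListener(f) // a *net.TCPListener with SO_REUSEADDR already set
    }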
Fixes #10730.
Change-Id: I67abb083781b602e876f72a6775a593c0f363c38
Reviewed-on: https://go-review.googlesource.com/17476
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Replaced code that substituted 0 for rounded-up 1 with
code to try again. This has minimal effect on the existing
stream of random numbers, but restores uniformity.
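The approach in essence (a sketch of the technique rather than the
library's exact code): if the 63-bit sample rounds up to 1.0 in the
float64 conversion, draw again instead of substituting 0.

    package sketch

    import "math/rand"

    func float64From(src *rand.Rand) float64 {
    again:
        f := float64(src.Int63()) / (1 << 63)
        if f == 1 {
            goto again // vanishingly rare; resampling keeps the distribution uniform
        }
        return f
    }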
Fixes #12290.
Change-Id: Ib68f0b0a4a173339bcd0274cc16509f7b0977de8
Reviewed-on: https://go-review.googlesource.com/17670
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>