Currently runtime.callers invokes gentraceback with the pc and sp of
the G it is called from, but always passes g0 even if it was called
from a regular g. Right now this has no ill effects because
runtime.callers does not use either callback argument or the
_TraceJumpStack flag, but it makes the code fragile and will break
some upcoming changes.
Fix this by lifting the getg() call outside of the systemstack in
runtime.callers.
Change-Id: I4e1e927961c0e0cd4dcf28693be47df7bae9e122
Reviewed-on: https://go-review.googlesource.com/10292
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Also mentions golang.org/x/net/ipv4 and golang.org/x/net/ipv6.
Change-Id: I653deac7a5e2b129237655a72d6c91207f1b1685
Reviewed-on: https://go-review.googlesource.com/9779
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Formerly it would return a BadExpr.
This prevents partial syntax from being discarded, and makes the error
recovery logic more consistent with other places where an identifier
was expected but not found.
+ test
Change-Id: I223c0c0589e7ceb7207ae951b8f71b9275a1eb73
Reviewed-on: https://go-review.googlesource.com/10269
Reviewed-by: Robert Griesemer <gri@golang.org>
An error in string slice offsets caused the loop to run forever if the
first character in the argument was a period.
Fixes #10833.
Change-Id: Iefb6aac5cff8864fe93d08e2600cb07d82c6f6df
Reviewed-on: https://go-review.googlesource.com/10285
Reviewed-by: Russ Cox <rsc@golang.org>
This is dead code. If you want to quiesce the system, the
preferred way is to use forEachP(func(*p){}).
Change-Id: Ic7677a5dd55e3639b99e78ddeb2c71dd1dd091fa
Reviewed-on: https://go-review.googlesource.com/10267
Reviewed-by: Austin Clements <austin@google.com>
Makes little difference internally but makes go list output more useful.
Change-Id: I1fa1f839107de08818427382b2aef8dc4d765b36
Reviewed-on: https://go-review.googlesource.com/10192
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
On Windows, we need to make sure that the node under test has external
connectivity.
Fixes #10795.
Change-Id: I99f2336180c7b56474fa90a4a6cdd5a6c4dd3805
Reviewed-on: https://go-review.googlesource.com/10006
Reviewed-by: Ian Lance Taylor <iant@golang.org>
386 is not affected because it doesn't use ginscmp.
Fixes #10843.
Change-Id: I1b3a133bd1e5fabc85236f15d060dbaa4c391cf3
Reviewed-on: https://go-review.googlesource.com/10116
Reviewed-by: Russ Cox <rsc@golang.org>
In css, js, and html, the replacement operations are implemented
by iterating over strings rune by rune with a for/range statement.
The length of each rune is computed with utf8.RuneLen and added to
the index to slice the string. This is wrong for invalid strings:
the for/range statement advances the index by only 1 byte over an
invalid sequence, while the decoded rune is '\ufffd', for which
utf8.RuneLen returns 3, so the two can disagree.
htmlReplacer triggers a panic at slicing time for some
invalid strings.
Use a more robust iteration mechanism based on
utf8.DecodeRuneInString, and make sure the same
pattern is used for all similar functions in this
package.
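For illustration, a minimal sketch of the DecodeRuneInString pattern
(the names here are illustrative, not the package's actual code):

import "unicode/utf8"

func iterate(s string) {
	for i := 0; i < len(s); {
		r, size := utf8.DecodeRuneInString(s[i:])
		// size is the number of bytes actually consumed, even for
		// invalid UTF-8, so s[i:i+size] is always a valid slice.
		_ = r // look up and emit the replacement for r here
		i += size
	}
}

Unlike the buggy pattern, there is no separate utf8.RuneLen result
that can disagree with the index increment.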
Fixes #10799
Change-Id: Ibad3857b2819435d9fa564f06fc2ca8774102841
Reviewed-on: https://go-review.googlesource.com/10105
Reviewed-by: Rob Pike <r@golang.org>
Commit 9c9e36b pushed these errors down to where the write barriers
are actually emitted, but forgot to remove the original error that was
being pushed down.
Change-Id: I751752a896e78fb9e63d69f88e7fb8d1ff5d344c
Reviewed-on: https://go-review.googlesource.com/10264
Reviewed-by: Russ Cox <rsc@golang.org>
The existing implementation executes the `gofmt` binary found via
the PATH environment variable when the `go fmt` command is invoked.
Relying on PATH can confuse users who have several Go installations.
It's more appropriate to run `gofmt` from GOBIN (if defined) or GOROOT.
Fixes #10755
Change-Id: I56d42a747319c766f2911508fab3994c3a366d12
Reviewed-on: https://go-review.googlesource.com/9900
Reviewed-by: Rob Pike <r@golang.org>
Prior to this CL, whenever GC marking was enabled and a P was
looking for work, we supplied a G to help the GC do its marking
tasks. Once this G finished all the available marking, it would
release the P to find another available G. In the case where there
was no work, the P would drop into findrunnable, which would execute
the mark helper G, which would immediately return, and the P would
drop into findrunnable again, repeating the process. Since the P was
always given a G to run, it never blocked.
This CL first checks whether the GC mark helper G has available
work; if not, the P immediately falls through to its blocking logic.
Fixes #10901
Change-Id: I94ac9646866ba64b7892af358888bc9950de23b5
Reviewed-on: https://go-review.googlesource.com/10189
Reviewed-by: Austin Clements <austin@google.com>
Currently setGCPercent sets heapminimum to heapminimum*GOGC/100. The
real intent is to set heapminimum to a scaled multiple of a fixed
default heap minimum, not to scale heapminimum based on its current
value. This turns out to be okay because setGCPercent is only called
once and heapminimum is initially set to this default heap minimum.
However, the code as written is confusing, especially since
setGCPercent is otherwise written so it could be called again to
change GOGC. Fix this by introducing a defaultHeapMinimum constant and
using this instead of the current value of heapminimum to compute the
scaled heap minimum.
As part of this, this commit improves the documentation on
heapminimum.
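A minimal sketch of the intended computation, assuming a 4MB default
purely for illustration:

const defaultHeapMinimum = 4 << 20 // assumed value, for illustration

var (
	gcpercent   int32  = 100
	heapminimum uint64 = defaultHeapMinimum
)

func setGCPercent(in int32) {
	gcpercent = in
	// Scale the fixed default, not the current heapminimum, so a
	// second call to setGCPercent computes the same answer.
	heapminimum = defaultHeapMinimum * uint64(gcpercent) / 100
}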
Change-Id: I4eb82c73dc2eb44a6e5a17c780a747a2e73d7493
Reviewed-on: https://go-review.googlesource.com/10181
Reviewed-by: Russ Cox <rsc@golang.org>
I think "the flag" was a typo, and the word "after" was repetitive.
Change-Id: I81c034ca11a3a778ff1eb4b3af5b96bc525ab985
Reviewed-on: https://go-review.googlesource.com/10195
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
Rearrange Node fields to enable better struct packing.
This reduces readability in favor of shrinking
the size of Nodes.
This reduces the size of Node from 328 to 312.
This reduces the memory usage to compile the
rotate tests by about 4.4%.
No functional changes. Passes toolstash -cmp.
Updates #9933.
Change-Id: I2764c5847fb1635ddc898e2ee385d007d67f03c5
Reviewed-on: https://go-review.googlesource.com/10141
Reviewed-by: Russ Cox <rsc@golang.org>
Param will be converted from an anonymous to a
named field in a subsequent, automated CL.
Reduces Node size from 368 to 328.
Reduces inuse_space on the rotate tests by about 3%.
No functional changes. Passes toolstash -cmp.
Updates #9933.
Change-Id: I5867b00328abf17ee24aea6ca58876bae9d8bfed
Reviewed-on: https://go-review.googlesource.com/10210
Reviewed-by: Russ Cox <rsc@golang.org>
Funcdepth was already int32. Make Escloopdepth
and Decldepth also int32 instead of int.
No functional changes for non-absurd code. Passes toolstash -cmp.
Change-Id: I47e145dd732b6a73cfcc6d45956df0dbccdcd999
Reviewed-on: https://go-review.googlesource.com/10129
Reviewed-by: Russ Cox <rsc@golang.org>
This is a duplicate of CL 9491.
That CL broke the build due to pprof shortcomings
and was reverted in CL 9565.
CL 9623 fixed pprof, so this can go in again.
Fixes #10659.
Change-Id: If470fc90b3db2ade1d161b4417abd2f5c6c330b8
Reviewed-on: https://go-review.googlesource.com/10212
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Better layout.
Fixes #10859.
The issue suggests rearranging so the comment comes out
after the methods. I tried this and it looks good, but it is less
useful, since the stuff you're probably looking for - the methods
- is scrolled away by the comment. The most important
information should come last, because that leaves it on your
screen after the print if the output is long.
Change-Id: I560f992601ccbe2293c347fa1b1018a3f5346c82
Reviewed-on: https://go-review.googlesource.com/10160
Reviewed-by: Russ Cox <rsc@golang.org>
And fix to work on filesystems with only 1s resolution.
Fixes #10724
Change-Id: Ia07463f090b4290fc27f5953fa94186463d7afc7
Reviewed-on: https://go-review.googlesource.com/9768
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Modified esc.go to allow slice literals (before append)
to be non-escaping. Modified tests to account for changes
in escape behavior and to also test the two cases that
were previously not tested.
Also minor cleanups to debug printing within esc.go.
Allocation stats for running the compiler:
( cd src/html/template;
for i in {1..5} ; do
go tool 6g -memprofile=testzz.${i}.prof -memprofilerate=1 *.go ;
go tool pprof -alloc_objects -text testzz.${i}.prof ;
done ; )
before about 86k allocations
after about 83k allocations
Fixes #8972
Change-Id: Ib61dd70dc74adb40d6f6fdda6eaa4bf7d83481de
Reviewed-on: https://go-review.googlesource.com/10118
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, forEachP reuses the stopwait and stopnote fields from
stopTheWorld to track how many Ps have not responded to the safe-point
request and to sleep until all Ps have responded.
It was assumed this was safe because both stopTheWorld and forEachP
must occur under worldsema, and hence stopwait and stopnote cannot
be used for both purposes simultaneously, and callers could always
determine the appropriate use based on sched.gcwaiting (which is
only set by stopTheWorld). However, this is not the case, since it's
possible for there to be a window between when an M observes that
gcwaiting is set and when it checks stopwait during which stopwait
could have changed meanings. When this happens, the M decrements
stopwait and may wakeup stopnote, but does not otherwise participate
in the forEachP protocol. As a result, stopwait is decremented too
many times, so it may reach zero before all Ps have run the safe-point
function, causing forEachP to wake up early. It will then either
observe that some P has not run the safe-point function and panic with
"P did not run fn", or the remaining P (or Ps) will run the safe-point
function before it wakes up and it will observe that stopwait is
negative and panic with "not stopped".
Fix this problem by giving forEachP its own safePointWait and
safePointNote fields.
One known sequence of events that can cause this race is as
follows. It involves three actors:
G1 is running on M1 on P1. P1 has an empty run queue.
G2/M2 is in a blocked syscall and has lost its P. (The details of this
don't matter, it just needs to be in a position where it needs to grab
an idle P.)
GC just started on G3/M3/P3. (These aren't very involved, they just
have to be separate from the other G's, M's, and P's.)
1. GC calls stopTheWorld(), which sets sched.gcwaiting to 1.
Now G1/M1 begins to enter a syscall:
2. G1/M1 invokes reentersyscall, which sets the P1's status to
_Psyscall.
3. G1/M1's reentersyscall observes gcwaiting != 0 and calls
entersyscall_gcwait.
4. G1/M1's entersyscall_gcwait blocks acquiring sched.lock.
Back on GC:
5. stopTheWorld cas's P1's status to _Pgcstop, does other stuff, and
returns.
6. GC does stuff and then calls startTheWorld().
7. startTheWorld() calls procresize(), which sets P1's status to
_Pidle and puts P1 on the idle list.
Now G2/M2 returns from its syscall and takes over P1:
8. G2/M2 returns from its blocked syscall and gets P1 from the idle
list.
9. G2/M2 acquires P1, which sets P1's status to _Prunning.
10. G2/M2 starts a new syscall and invokes reentersyscall, which sets
P1's status to _Psyscall.
Back on G1/M1:
11. G1/M1 finally acquires sched.lock in entersyscall_gcwait.
At this point, G1/M1 still thinks it's running on P1. P1's status is
_Psyscall, which is consistent with what G1/M1 is doing, but it's
_Psyscall because *G2/M2* put it in to _Psyscall, not G1/M1. This is
basically an ABA race on P1's status.
Because forEachP currently shares stopwait with stopTheWorld, G1/M1's
entersyscall_gcwait observes the non-zero stopwait set by forEachP,
but mistakes it for a stopTheWorld. It cas's P1's status from
_Psyscall (set by G2/M2) to _Pgcstop and proceeds to decrement
stopwait one more time than forEachP was expecting.
Fixes #10618. (See the issue for details on why the above race is safe
when forEachP is not involved.)
Prior to this commit, the command
stress ./runtime.test -test.run TestFutexsleep\|TestGoroutineProfile
would reliably fail after a few hundred runs. With this commit, it
ran for over 2 million runs and never crashed.
Change-Id: I9a91ea20035b34b6e5f07ef135b144115f281f30
Reviewed-on: https://go-review.googlesource.com/10157
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, startTheWorld releases worldsema before starting the
world. Since startTheWorld can change gomaxprocs after allowing Ps to
run, this means that gomaxprocs can change while another P holds
worldsema.
Unfortunately, the garbage collector and forEachP assume that holding
worldsema protects against changes in gomaxprocs (which it *almost*
does). In particular, this is causing somewhat frequent "P did not run
fn" crashes in forEachP in the runtime tests because gomaxprocs is
changing between the several loops that forEachP does over all the Ps.
Fix this by only releasing worldsema after the world is started.
This relates to issue #10618. forEachP still fails under stress
testing, but much less frequently.
Change-Id: I085d627b70cca9ebe9af28fe73b9872f1bb224ff
Reviewed-on: https://go-review.googlesource.com/10156
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, startTheWorld clears preemptoff for the current M before
starting the world. A few callers increment m.locks around
startTheWorld, presumably to prevent preemption any time during
starting the world. This is almost certainly pointless (none of the
other callers do this), but there's no harm in making startTheWorld
keep preemption disabled until it's all done, which definitely lets us
drop these m.locks manipulations.
Change-Id: I8a93658abd0c72276c9bafa3d2c7848a65b4691a
Reviewed-on: https://go-review.googlesource.com/10155
Reviewed-by: Russ Cox <rsc@golang.org>
There are several steps to stopping and starting the world and
currently they're open-coded in several places. The garbage collector
is the only thing that needs to stop and start the world in a
non-trivial pattern. Replace all other uses with calls to higher-level
functions that implement the entire pattern necessary to stop and
start the world.
This is a pure refactoring and should not change any code semantics.
In the following commits, we'll make changes that are easier to do
with this abstraction in place.
This commit renames the old starttheworld to startTheWorldWithSema.
This is a slight misnomer right now because the callers release
worldsema just before calling this. However, a later commit will swap
these and I don't want to think of another name in the meantime.
Change-Id: I5dc97f87b44fb98963c49c777d7053653974c911
Reviewed-on: https://go-review.googlesource.com/10154
Reviewed-by: Russ Cox <rsc@golang.org>
In order to avoid deadlocks, startGC avoids kicking off GC if locks
are held by the calling M. However, it currently fails to check
preemptoff, which is the other way to disable preemption.
Fix this by adding a check for preemptoff.
Change-Id: Ie1083166e5ba4af5c9d6c5a42efdfaaef41ca997
Reviewed-on: https://go-review.googlesource.com/10153
Reviewed-by: Russ Cox <rsc@golang.org>
It is misleading when the stack trace says:
signal arrived during cgo execution
but we are not in a cgo call.
Change-Id: I627e2f2bdc7755074677f77f21befc070a101914
Reviewed-on: https://go-review.googlesource.com/9190
Reviewed-by: Russ Cox <rsc@golang.org>
If make.bash fails, there is no point continuing any further.
Fixes #10880.
Change-Id: I350cc16999372422ad3d2e0327d52d467886a5b1
Reviewed-on: https://go-review.googlesource.com/10180
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, runqsteal steals Gs from another P into an intermediate
buffer and then copies those Gs into the current P's run queue. This
intermediate buffer itself was moved from the stack to the P in commit
c4fe503 to eliminate the cost of zeroing it on every steal.
This commit follows up c4fe503 by stealing directly into the current
P's run queue, which eliminates the copy and the need for the
intermediate buffer. The update to the tail pointer is only committed
once the entire steal operation has succeeded, so the semantics of
stealing do not change.
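A simplified sketch of the commit-on-tail idea (illustrative types and
counts, not the runtime's actual ring buffer code):

import "sync/atomic"

type G struct{}

type runq struct {
	head uint32 // consumer index, read by stealing Ps
	tail uint32 // producer index, published atomically
	buf  [256]*G
}

// stealInto writes stolen Gs directly into q's free slots and only
// then publishes them with a single store of the new tail, so readers
// never observe a partially completed steal.
func (q *runq) stealInto(stolen []*G) {
	t := atomic.LoadUint32(&q.tail)
	for i, gp := range stolen {
		q.buf[(t+uint32(i))%uint32(len(q.buf))] = gp
	}
	atomic.StoreUint32(&q.tail, t+uint32(len(stolen)))
}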
Change-Id: Icdd7a0eb82668980bf42c0154b51eef6419fdd51
Reviewed-on: https://go-review.googlesource.com/9998
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Small types record the location of pointers in their memory layout
by using a simple bitmap. In Go 1.4 the bitmap held 4-bit entries,
and in Go 1.5 the bitmap holds 1-bit entries, but in both cases using
a bitmap for a large type containing arrays does not make sense:
if someone refers to the type [1<<28]*byte in a program in such
a way that the type information makes it into the binary, it would be
a waste of space to write a 128 MB (for 4-bit entries) or even 32 MB
(for 1-bit entries) bitmap full of 1s into the binary or even to keep
one in memory during the execution of the program.
For large types containing arrays, it is much more compact to describe
the locations of pointers using a notation that can express repetition
than to lay out a bitmap of pointers. Go 1.4 included such a notation,
called "GC programs", but it was complex, required recursion during
decoding, and was generally slow. Dmitriy measured the execution of
these programs writing directly to the heap bitmap as being 7x slower
than copying from a preunrolled 4-bit mask (and frankly that code was
not terribly fast either). For some tests, unrollgcprog1 was seen costing
as much as 3x more than the rest of malloc combined.
This CL introduces a different form for the GC programs. They use a
simple Lempel-Ziv-style encoding of the 1-bit pointer information,
in which the only operations are (1) emit the following n bits
and (2) repeat the last n bits c more times. This encoding can be
generated directly from the Go type information (using repetition
only for arrays or large runs of non-pointer data) and it can be decoded
very efficiently. In particular the decoding requires little state and
no recursion, so that the entire decoding can run without any memory
accesses other than the reads of the encoding and the writes of the
decoded form to the heap bitmap. For recursive types like arrays of
arrays of arrays, the inner instructions are only executed once, not
n times, so that large repetitions run at full speed. (In contrast, large
repetitions in the old programs repeated the individual bit-level layout
of the inner data over and over.) The result is as much as 25x faster
decoding compared to the old form.
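To make the two operations concrete, here is a toy decoder over a
simplified instruction form (one byte per bit; this shows the shape of
the encoding, not the runtime's actual bit-level program format):

// op is either a literal run of bits or a repeat of the output's tail.
type op struct {
	lit  []byte // (1) if non-nil, emit these bits
	n, c int    // (2) otherwise, repeat the last n bits c more times
}

func decode(prog []op) []byte {
	var out []byte
	for _, o := range prog {
		if o.lit != nil {
			out = append(out, o.lit...)
			continue
		}
		tail := out[len(out)-o.n:]
		for i := 0; i < o.c; i++ {
			out = append(out, tail...)
		}
	}
	return out
}

Note that decoding needs no recursion and no state beyond the output
itself.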
Because the old decoder was so slow, Go 1.4 had three (or so) cases
for how to set the heap bitmap bits for an allocation of a given type:
(1) If the type had an even number of words up to 32 words, then
the 4-bit pointer mask for the type fit in no more than 16 bytes;
store the 4-bit pointer mask directly in the binary and copy from it.
(1b) If the type had an odd number of words up to 15 words, then
the 4-bit pointer mask for the type, doubled to end on a byte boundary,
fit in no more than 16 bytes; store that doubled mask directly in the
binary and copy from it.
(2) If the type had an even number of words up to 128 words,
or an odd number of words up to 63 words (again due to doubling),
then the 4-bit pointer mask would fit in a 64-byte unrolled mask.
Store a GC program in the binary, but leave space in the BSS for
the unrolled mask. Execute the GC program to construct the mask the
first time it is needed, and thereafter copy from the mask.
(3) Otherwise, store a GC program and execute it to write directly to
the heap bitmap each time an object of that type is allocated.
(This is the case that was 7x slower than the other two.)
Because the new pointer masks store 1-bit entries instead of 4-bit
entries and because using the decoder no longer carries a significant
overhead, after this CL (that is, for Go 1.5) there are only two cases:
(1) If the type is 128 words or less (no condition about odd or even),
store the 1-bit pointer mask directly in the binary and use it to
initialize the heap bitmap during malloc. (Implemented in CL 9702.)
(2) There is no case 2 anymore.
(3) Otherwise, store a GC program and execute it to write directly to
the heap bitmap each time an object of that type is allocated.
Executing the GC program directly into the heap bitmap (case (3) above)
was disabled for the Go 1.5 dev cycle, both to avoid needing to use
GC programs for typedmemmove and to avoid updating that code as
the heap bitmap format changed. Typedmemmove no longer uses this
type information; as of CL 9886 it uses the heap bitmap directly.
Now that the heap bitmap format is stable, we reintroduce GC programs
and their space savings.
Benchmarks for heapBitsSetType, before this CL vs this CL:
name old mean new mean delta
SetTypePtr 7.59ns × (0.99,1.02) 5.16ns × (1.00,1.00) -32.05% (p=0.000)
SetTypePtr8 21.0ns × (0.98,1.05) 21.4ns × (1.00,1.00) ~ (p=0.179)
SetTypePtr16 24.1ns × (0.99,1.01) 24.6ns × (1.00,1.00) +2.41% (p=0.001)
SetTypePtr32 31.2ns × (0.99,1.01) 32.4ns × (0.99,1.02) +3.72% (p=0.001)
SetTypePtr64 45.2ns × (1.00,1.00) 47.2ns × (1.00,1.00) +4.42% (p=0.000)
SetTypePtr126 75.8ns × (0.99,1.01) 79.1ns × (1.00,1.00) +4.25% (p=0.000)
SetTypePtr128 74.3ns × (0.99,1.01) 77.6ns × (1.00,1.01) +4.55% (p=0.000)
SetTypePtrSlice 726ns × (1.00,1.01) 712ns × (1.00,1.00) -1.95% (p=0.001)
SetTypeNode1 20.0ns × (0.99,1.01) 20.7ns × (1.00,1.00) +3.71% (p=0.000)
SetTypeNode1Slice 112ns × (1.00,1.00) 113ns × (0.99,1.00) ~ (p=0.070)
SetTypeNode8 23.9ns × (1.00,1.00) 24.7ns × (1.00,1.01) +3.18% (p=0.000)
SetTypeNode8Slice 294ns × (0.99,1.02) 287ns × (0.99,1.01) -2.38% (p=0.015)
SetTypeNode64 52.8ns × (0.99,1.03) 51.8ns × (0.99,1.01) ~ (p=0.069)
SetTypeNode64Slice 1.13µs × (0.99,1.05) 1.14µs × (0.99,1.00) ~ (p=0.767)
SetTypeNode64Dead 36.0ns × (1.00,1.01) 32.5ns × (0.99,1.00) -9.67% (p=0.000)
SetTypeNode64DeadSlice 1.43µs × (0.99,1.01) 1.40µs × (1.00,1.00) -2.39% (p=0.001)
SetTypeNode124 75.7ns × (1.00,1.01) 79.0ns × (1.00,1.00) +4.44% (p=0.000)
SetTypeNode124Slice 1.94µs × (1.00,1.01) 2.04µs × (0.99,1.01) +4.98% (p=0.000)
SetTypeNode126 75.4ns × (1.00,1.01) 77.7ns × (0.99,1.01) +3.11% (p=0.000)
SetTypeNode126Slice 1.95µs × (0.99,1.01) 2.03µs × (1.00,1.00) +3.74% (p=0.000)
SetTypeNode128 85.4ns × (0.99,1.01) 122.0ns × (1.00,1.00) +42.89% (p=0.000)
SetTypeNode128Slice 2.20µs × (1.00,1.01) 2.36µs × (0.98,1.02) +7.48% (p=0.001)
SetTypeNode130 83.3ns × (1.00,1.00) 123.0ns × (1.00,1.00) +47.61% (p=0.000)
SetTypeNode130Slice 2.30µs × (0.99,1.01) 2.40µs × (0.98,1.01) +4.37% (p=0.000)
SetTypeNode1024 498ns × (1.00,1.00) 537ns × (1.00,1.00) +7.96% (p=0.000)
SetTypeNode1024Slice 15.5µs × (0.99,1.01) 17.8µs × (1.00,1.00) +15.27% (p=0.000)
The above compares always using a cached pointer mask (and the
corresponding waste of memory) against using the programs directly.
Some slowdown is expected, in exchange for having a better general algorithm.
The GC programs kick in for SetTypeNode128, SetTypeNode130, SetTypeNode1024,
along with the slice variants of those.
It is possible that the cutoff of 128 words (bits) should be raised
in a followup CL, but even with this low cutoff the GC programs are
faster than Go 1.4's "fast path" non-GC program case.
Benchmarks for heapBitsSetType, Go 1.4 vs this CL:
name old mean new mean delta
SetTypePtr 6.89ns × (1.00,1.00) 5.17ns × (1.00,1.00) -25.02% (p=0.000)
SetTypePtr8 25.8ns × (0.97,1.05) 21.5ns × (1.00,1.00) -16.70% (p=0.000)
SetTypePtr16 39.8ns × (0.97,1.02) 24.7ns × (0.99,1.01) -37.81% (p=0.000)
SetTypePtr32 68.8ns × (0.98,1.01) 32.2ns × (1.00,1.01) -53.18% (p=0.000)
SetTypePtr64 130ns × (1.00,1.00) 47ns × (1.00,1.00) -63.67% (p=0.000)
SetTypePtr126 241ns × (0.99,1.01) 79ns × (1.00,1.01) -67.25% (p=0.000)
SetTypePtr128 2.07µs × (1.00,1.00) 0.08µs × (1.00,1.00) -96.27% (p=0.000)
SetTypePtrSlice 1.05µs × (0.99,1.01) 0.72µs × (0.99,1.02) -31.70% (p=0.000)
SetTypeNode1 16.0ns × (0.99,1.01) 20.8ns × (0.99,1.03) +29.91% (p=0.000)
SetTypeNode1Slice 184ns × (0.99,1.01) 112ns × (0.99,1.01) -39.26% (p=0.000)
SetTypeNode8 29.5ns × (0.97,1.02) 24.6ns × (1.00,1.00) -16.50% (p=0.000)
SetTypeNode8Slice 624ns × (0.98,1.02) 285ns × (1.00,1.00) -54.31% (p=0.000)
SetTypeNode64 135ns × (0.96,1.08) 52ns × (0.99,1.02) -61.32% (p=0.000)
SetTypeNode64Slice 3.83µs × (1.00,1.00) 1.14µs × (0.99,1.01) -70.16% (p=0.000)
SetTypeNode64Dead 134ns × (0.99,1.01) 32ns × (1.00,1.01) -75.74% (p=0.000)
SetTypeNode64DeadSlice 3.83µs × (0.99,1.00) 1.40µs × (1.00,1.01) -63.42% (p=0.000)
SetTypeNode124 240ns × (0.99,1.01) 79ns × (1.00,1.01) -67.05% (p=0.000)
SetTypeNode124Slice 7.27µs × (1.00,1.00) 2.04µs × (1.00,1.00) -71.95% (p=0.000)
SetTypeNode126 2.06µs × (0.99,1.01) 0.08µs × (0.99,1.01) -96.23% (p=0.000)
SetTypeNode126Slice 64.4µs × (1.00,1.00) 2.0µs × (1.00,1.00) -96.85% (p=0.000)
SetTypeNode128 2.09µs × (1.00,1.01) 0.12µs × (1.00,1.00) -94.15% (p=0.000)
SetTypeNode128Slice 65.4µs × (1.00,1.00) 2.4µs × (0.99,1.03) -96.39% (p=0.000)
SetTypeNode130 2.11µs × (1.00,1.00) 0.12µs × (1.00,1.00) -94.18% (p=0.000)
SetTypeNode130Slice 66.3µs × (1.00,1.00) 2.4µs × (0.97,1.08) -96.34% (p=0.000)
SetTypeNode1024 16.0µs × (1.00,1.01) 0.5µs × (1.00,1.00) -96.65% (p=0.000)
SetTypeNode1024Slice 512µs × (1.00,1.00) 18µs × (0.98,1.04) -96.45% (p=0.000)
SetTypeNode124 uses a 124 data + 2 ptr = 126-word allocation.
Both Go 1.4 and this CL are using pointer bitmaps for this case,
so that's an overall 3x speedup for using pointer bitmaps.
SetTypeNode128 uses a 128 data + 2 ptr = 130-word allocation.
Both Go 1.4 and this CL are running the GC program for this case,
so that's an overall 17x speedup when using GC programs (and
I've seen >20x on other systems).
Comparing Go 1.4's SetTypeNode124 (pointer bitmap) against
this CL's SetTypeNode128 (GC program), the slow path in the
code in this CL is 2x faster than the fast path in Go 1.4.
The Go 1 benchmarks are basically unaffected compared to just before this CL.
Go 1 benchmarks, before this CL vs this CL:
name old mean new mean delta
BinaryTree17 5.87s × (0.97,1.04) 5.91s × (0.96,1.04) ~ (p=0.306)
Fannkuch11 4.38s × (1.00,1.00) 4.37s × (1.00,1.01) -0.22% (p=0.006)
FmtFprintfEmpty 90.7ns × (0.97,1.10) 89.3ns × (0.96,1.09) ~ (p=0.280)
FmtFprintfString 282ns × (0.98,1.04) 287ns × (0.98,1.07) +1.72% (p=0.039)
FmtFprintfInt 269ns × (0.99,1.03) 282ns × (0.97,1.04) +4.87% (p=0.000)
FmtFprintfIntInt 478ns × (0.99,1.02) 481ns × (0.99,1.02) +0.61% (p=0.048)
FmtFprintfPrefixedInt 399ns × (0.98,1.03) 400ns × (0.98,1.05) ~ (p=0.533)
FmtFprintfFloat 563ns × (0.99,1.01) 570ns × (1.00,1.01) +1.37% (p=0.000)
FmtManyArgs 1.89µs × (0.99,1.01) 1.92µs × (0.99,1.02) +1.88% (p=0.000)
GobDecode 15.2ms × (0.99,1.01) 15.2ms × (0.98,1.05) ~ (p=0.609)
GobEncode 11.6ms × (0.98,1.03) 11.9ms × (0.98,1.04) +2.17% (p=0.000)
Gzip 648ms × (0.99,1.01) 648ms × (1.00,1.01) ~ (p=0.835)
Gunzip 142ms × (1.00,1.00) 143ms × (1.00,1.01) ~ (p=0.169)
HTTPClientServer 90.5µs × (0.98,1.03) 91.5µs × (0.98,1.04) +1.04% (p=0.045)
JSONEncode 31.5ms × (0.98,1.03) 31.4ms × (0.98,1.03) ~ (p=0.549)
JSONDecode 111ms × (0.99,1.01) 107ms × (0.99,1.01) -3.21% (p=0.000)
Mandelbrot200 6.01ms × (1.00,1.00) 6.01ms × (1.00,1.00) ~ (p=0.878)
GoParse 6.54ms × (0.99,1.02) 6.61ms × (0.99,1.03) +1.08% (p=0.004)
RegexpMatchEasy0_32 160ns × (1.00,1.01) 161ns × (1.00,1.00) +0.40% (p=0.000)
RegexpMatchEasy0_1K 560ns × (0.99,1.01) 559ns × (0.99,1.01) ~ (p=0.088)
RegexpMatchEasy1_32 138ns × (0.99,1.01) 138ns × (1.00,1.00) ~ (p=0.380)
RegexpMatchEasy1_1K 877ns × (1.00,1.00) 878ns × (1.00,1.00) ~ (p=0.157)
RegexpMatchMedium_32 251ns × (0.99,1.00) 251ns × (1.00,1.01) +0.28% (p=0.021)
RegexpMatchMedium_1K 72.6µs × (1.00,1.00) 72.6µs × (1.00,1.00) ~ (p=0.539)
RegexpMatchHard_32 3.84µs × (1.00,1.00) 3.84µs × (1.00,1.00) ~ (p=0.378)
RegexpMatchHard_1K 117µs × (1.00,1.00) 117µs × (1.00,1.00) ~ (p=0.067)
Revcomp 904ms × (0.99,1.02) 904ms × (0.99,1.01) ~ (p=0.943)
Template 125ms × (0.99,1.02) 127ms × (0.99,1.01) +1.79% (p=0.000)
TimeParse 627ns × (0.99,1.01) 622ns × (0.99,1.01) -0.88% (p=0.000)
TimeFormat 655ns × (0.99,1.02) 655ns × (0.99,1.02) ~ (p=0.976)
For the record, Go 1 benchmarks, Go 1.4 vs this CL:
name old mean new mean delta
BinaryTree17 4.61s × (0.97,1.05) 5.91s × (0.98,1.03) +28.35% (p=0.000)
Fannkuch11 4.40s × (0.99,1.03) 4.41s × (0.99,1.01) ~ (p=0.212)
FmtFprintfEmpty 102ns × (0.99,1.01) 84ns × (0.99,1.02) -18.38% (p=0.000)
FmtFprintfString 302ns × (0.98,1.01) 303ns × (0.99,1.02) ~ (p=0.203)
FmtFprintfInt 313ns × (0.97,1.05) 270ns × (0.99,1.01) -13.69% (p=0.000)
FmtFprintfIntInt 524ns × (0.98,1.02) 477ns × (0.99,1.00) -8.87% (p=0.000)
FmtFprintfPrefixedInt 424ns × (0.98,1.02) 386ns × (0.99,1.01) -8.96% (p=0.000)
FmtFprintfFloat 652ns × (0.98,1.02) 594ns × (0.97,1.05) -8.97% (p=0.000)
FmtManyArgs 2.13µs × (0.99,1.02) 1.94µs × (0.99,1.01) -8.92% (p=0.000)
GobDecode 17.1ms × (0.99,1.02) 14.9ms × (0.98,1.03) -13.07% (p=0.000)
GobEncode 13.5ms × (0.98,1.03) 11.5ms × (0.98,1.03) -15.25% (p=0.000)
Gzip 656ms × (0.99,1.02) 647ms × (0.99,1.01) -1.29% (p=0.000)
Gunzip 143ms × (0.99,1.02) 144ms × (0.99,1.01) ~ (p=0.204)
HTTPClientServer 88.2µs × (0.98,1.02) 90.8µs × (0.98,1.01) +2.93% (p=0.000)
JSONEncode 32.2ms × (0.98,1.02) 30.9ms × (0.97,1.04) -4.06% (p=0.001)
JSONDecode 121ms × (0.98,1.02) 110ms × (0.98,1.05) -8.95% (p=0.000)
Mandelbrot200 6.06ms × (0.99,1.01) 6.11ms × (0.98,1.04) ~ (p=0.184)
GoParse 6.76ms × (0.97,1.04) 6.58ms × (0.98,1.05) -2.63% (p=0.003)
RegexpMatchEasy0_32 195ns × (1.00,1.01) 155ns × (0.99,1.01) -20.43% (p=0.000)
RegexpMatchEasy0_1K 479ns × (0.98,1.03) 535ns × (0.99,1.02) +11.59% (p=0.000)
RegexpMatchEasy1_32 169ns × (0.99,1.02) 131ns × (0.99,1.03) -22.44% (p=0.000)
RegexpMatchEasy1_1K 1.53µs × (0.99,1.01) 0.87µs × (0.99,1.02) -43.07% (p=0.000)
RegexpMatchMedium_32 334ns × (0.99,1.01) 242ns × (0.99,1.01) -27.53% (p=0.000)
RegexpMatchMedium_1K 125µs × (1.00,1.01) 72µs × (0.99,1.03) -42.53% (p=0.000)
RegexpMatchHard_32 6.03µs × (0.99,1.01) 3.79µs × (0.99,1.01) -37.12% (p=0.000)
RegexpMatchHard_1K 189µs × (0.99,1.02) 115µs × (0.99,1.01) -39.20% (p=0.000)
Revcomp 935ms × (0.96,1.03) 926ms × (0.98,1.02) ~ (p=0.083)
Template 146ms × (0.97,1.05) 119ms × (0.99,1.01) -18.37% (p=0.000)
TimeParse 660ns × (0.99,1.01) 624ns × (0.99,1.02) -5.43% (p=0.000)
TimeFormat 670ns × (0.98,1.02) 710ns × (1.00,1.01) +5.97% (p=0.000)
This CL is a bit larger than I would like, but the compiler, linker, runtime,
and package reflect all need to be in sync about the format of these programs,
so there is no easy way to split this into independent changes (at least
while keeping the build working at each change).
Fixes #9625.
Fixes #10524.
Change-Id: I9e3e20d6097099d0f8532d1cb5b1af528804989a
Reviewed-on: https://go-review.googlesource.com/9888
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
The Template objects are supposed to be goroutine-safe once they
have been parsed. This includes the text and html ones.
For html/template, the escape mechanism is triggered at execution
time. It may alter the internal structures of the template, so
a mutex protects them against concurrent accesses.
The text/template package is free of any synchronization primitive.
A race condition may occur when nested templates are escaped:
the escape algorithm alters the function maps of the associated
text templates, while a concurrent template execution may access
the function maps in read mode.
The least invasive fix I have found is to introduce an RWMutex in
text/template to protect the function maps. This is unfortunate
but it should be effective.
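A minimal sketch of the shape of the fix, with illustrative names
rather than the package's actual fields:

import (
	"reflect"
	"sync"
)

type common struct {
	muFuncs   sync.RWMutex // protects execFuncs
	execFuncs map[string]reflect.Value
}

// addFunc is the write side, used when the escaper rewrites a
// template's function map.
func (c *common) addFunc(name string, fn reflect.Value) {
	c.muFuncs.Lock()
	defer c.muFuncs.Unlock()
	c.execFuncs[name] = fn
}

// lookupFunc is the read side, used by concurrent executions.
func (c *common) lookupFunc(name string) (reflect.Value, bool) {
	c.muFuncs.RLock()
	defer c.muFuncs.RUnlock()
	fn, ok := c.execFuncs[name]
	return fn, ok
}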
Fixes #9945
Change-Id: I1edb73c0ed0f1fcddd2f1516230b548b92ab1269
Reviewed-on: https://go-review.googlesource.com/10101
Reviewed-by: Rob Pike <r@golang.org>
This allows the removal of a fudge in data.go.
We have to defer the calls to adddynlib on non-Darwin until after we have
decided whether we are externally or internally linking. The Macho/ELF
separation could do with some cleaning up, but: code freeze.
Fixing this once rather than per-arch is what inspired the previous CLs.
Change-Id: I0166f7078a045dc09827745479211247466c0c54
Reviewed-on: https://go-review.googlesource.com/10002
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
The only essential difference is elf32 vs elf64; I assume the other
differences are bugs in one version or another...
Change-Id: Ie6ff33d5574a6592b543df9983eff8fdf88c97a1
Reviewed-on: https://go-review.googlesource.com/10001
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Russ Cox <rsc@golang.org>
They were all essentially the same.
Change-Id: I6e0b548cda6e4bbe2ec3b3025b746d1f6d332d48
Reviewed-on: https://go-review.googlesource.com/10000
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This is an automated follow-up to CL 10120.
It was generated with a combination of eg and gofmt -r.
No functional changes. Passes toolstash -cmp.
Change-Id: I0dc6d146372012b4cce9cc4064066daa6694eee6
Reviewed-on: https://go-review.googlesource.com/10144
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The previous implementation spawned an extra goroutine to handle
rechecking resolv.conf for changes.
This change eliminates the extra goroutine, and has rechecking
done as part of a lookup. A side effect of this change is that the
first lookup after a resolv.conf change will now succeed, whereas
previously it would have failed. It also fixes the rechecking logic
to ignore resolv.conf parsing errors, as it should.
Fixes #8652
Fixes #10576
Fixes #10649
Fixes #10650
Fixes #10845
Change-Id: I502b587c445fa8eca5207ca4f2c8ec8c339fec7f
Reviewed-on: https://go-review.googlesource.com/9991
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This extends cmd/yacc with support for
%error { tokens } : message
syntax to specify custom error messages to use instead of the default
generic ones. This allows merging go.errors into go.y and removing
the yaccerrors.go tool.
Updates #9968.
Change-Id: I781219c568b86472755f877f48401eaeab00ead5
Reviewed-on: https://go-review.googlesource.com/8563
Reviewed-by: Russ Cox <rsc@golang.org>
This reverts commit 5726af54eb.
It broke all the builds.
Change-Id: I4b1dde86f9433717d303c1dabd6aa1a2bf97fab2
Reviewed-on: https://go-review.googlesource.com/10143
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
All slice types whose elements have kind reflect.Uint8 are marshalled
into base64 for compactness. When decoding such data into a custom
type based on []byte, the decoder checked the slice kind instead of
the slice element kind, so no appropriate decoder was found.
Fix this by letting the decoder check the slice element kind, as the
encoder does. This guarantees that already-encoded data can still be
successfully decoded.
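For example, a named byte-slice type now round-trips (a small
illustration, not a test from this CL):

import (
	"encoding/json"
	"fmt"
)

type Token []byte

func main() {
	b, _ := json.Marshal(Token("hello")) // encodes as "aGVsbG8=" (base64)
	var out Token
	// Before this fix, Unmarshal found no decoder for Token because
	// it checked the slice kind rather than the element kind.
	if err := json.Unmarshal(b, &out); err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // hello
}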
Fixes #8962.
Change-Id: Ia320d4dc2c6e9e5fe6d8dc15788c81da23d20c4f
Reviewed-on: https://go-review.googlesource.com/9371
Reviewed-by: Peter Waldschmidt <peter@waldschmidt.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Name will be converted from an anonymous to a
named field in a subsequent, automated CL.
No functional changes. Passes toolstash -cmp.
This reduces the size of gc.Node from 424 to 400 bytes.
This in turn reduces the permanent (pprof -inuse_space)
memory usage while compiling the test/rotate?.go tests:
test old(MB) new(MB) change
rotate0 379.49 367.30 -3.21%
rotate1 373.42 361.59 -3.16%
rotate2 381.17 368.77 -3.25%
rotate3 374.30 362.48 -3.15%
Updates #9933.
Change-Id: I21479527c136add4f1efb9342774e3be3e276e83
Reviewed-on: https://go-review.googlesource.com/10120
Reviewed-by: Russ Cox <rsc@golang.org>
This CL was generated by updating Val in go.go
and then running:
sed -i "" 's/\.U\.[SBXFC]val = /.U = /' *.go
sed -i "" 's/\.U\.Sval/.U.\(string\)/g' *.go *.y
sed -i "" 's/\.U\.Bval/.U.\(bool\)/g' *.go *.y
sed -i "" 's/\.U\.Xval/.U.\(\*Mpint\)/g' *.go *.y
sed -i "" 's/\.U\.Fval/.U.\(\*Mpflt\)/g' *.go *.y
sed -i "" 's/\.U\.Cval/.U.\(\*Mpcplx\)/g' *.go *.y
No functional changes. Passes toolstash -cmp.
This reduces the size of gc.Node from 424 to 392 bytes.
This in turn reduces the permanent (pprof -inuse_space)
memory usage while compiling the test/rotate?.go tests:
test old(MB) new(MB) change
rotate0 379.49 364.78 -3.87%
rotate1 373.42 359.07 -3.84%
rotate2 381.17 366.24 -3.91%
rotate3 374.30 359.95 -3.83%
CL 8445 was similar to this; gri asked that Val's implementation
be hidden first. CLs 8912, 9263, and 9267 have at least
isolated the changes to the cmd/internal/gc package.
Updates #9933.
Change-Id: I83ddfe003d48e0a73c92e819edd3b5e620023084
Reviewed-on: https://go-review.googlesource.com/10059
Reviewed-by: Russ Cox <rsc@golang.org>
This trivial change is a prerequisite to
converting Val.U to an interface{}.
No functional changes. Passes toolstash -cmp.
Change-Id: I17ff036f68d29a9ed0097a8b23ae1c91e6ce8c21
Reviewed-on: https://go-review.googlesource.com/10058
Reviewed-by: Russ Cox <rsc@golang.org>
Remove all uses of Node.Val outside of the gc package.
A subsequent, automated commit in the Go 1.6 cycle
will unexport Node.Val.
No functional changes. Passes toolstash -cmp.
Change-Id: Ia92ae6a7766c83ab3e45c69edab24a9581c824f9
Reviewed-on: https://go-review.googlesource.com/9267
Reviewed-by: Russ Cox <rsc@golang.org>
Remove all uses of Mp* outside of the gc package.
A subsequent, automated commit in the Go 1.6
cycle will unexport all Mp* functions and types.
No functional changes. Passes toolstash -cmp.
Change-Id: Ie1604cb5b84ffb30b47f4777d4235570f2c62709
Reviewed-on: https://go-review.googlesource.com/9263
Reviewed-by: Russ Cox <rsc@golang.org>
Preallocating them in reflect means that
(1) if you say _ = PtrTo(ArrayOf(1000000000, reflect.TypeOf(byte(0)))), you just allocated 1GB of data
(2) if you say it again, that's *another* GB of data.
The only use of t.zero in the runtime is for map elements.
Delay the allocation until the creation of a map with that element type,
and share the zeros.
The one downside of the shared zero is that it's not garbage collected,
but it's also never written, so the OS should be able to handle it fairly
efficiently.
Change-Id: I56b098a091abf3ac0945de28ebef9a6c08e76614
Reviewed-on: https://go-review.googlesource.com/10111
Reviewed-by: Keith Randall <khr@golang.org>
According to the MSDN RegQueryValueEx page:
If the data has the REG_SZ, REG_MULTI_SZ or REG_EXPAND_SZ type, the
string may not have been stored with the proper terminating null
characters. Therefore, even if the function returns ERROR_SUCCESS, the
application should ensure that the string is properly terminated before
using it; otherwise, it may overwrite a buffer. (Note that REG_MULTI_SZ
strings should have two terminating null characters.)
Test written by Alex Brainman <alex.brainman@gmail.com>
Change-Id: I8c0852e0527e27ceed949134ed5e6de944189986
Reviewed-on: https://go-review.googlesource.com/9806
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
See CL 9962 for the rationale.
Change-Id: I73c714fce258430eea1e61d3835f5c8e9014ca1f
Signed-off-by: Shenghou Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/9925
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Auto-generated using the following bash script:
for i in z*_*_*.go; do
goosgoarch=`basename ${i/${i/_*/}_/} .go`
goos=${goosgoarch/_*/}
goarch=${goosgoarch/*_/}
echo $i $goos $goarch
[ "$goos" = "windows" ] && continue
sed -i -e "/^package /i\/\/ +build $goarch,$goos\n" "$i"
done
Change-Id: I756fee551d1698080e4591fed8f058ae0450aaa5
Signed-off-by: Shenghou Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/10113
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Missed a case; just need to call validateType.
Fixes #10800.
Change-Id: I81997ca7a9feb1be31c8b47e631b32712d7ffb86
Reviewed-on: https://go-review.googlesource.com/10031
Reviewed-by: Andrew Gerrand <adg@golang.org>
This reduces the depth of the inlining at a particular call site.
The inliner introduces many temporary variables, and the compiler can do
a better job with fewer. Being verbose in the bodies of these helper functions
seems like a reasonable tradeoff: the uses are still just as readable, and
they run faster in some important cases.
Change-Id: I5323976ed3704d0acd18fb31176cfbf5ba23a89c
Reviewed-on: https://go-review.googlesource.com/9883
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
BenchmarkSkipValue was sensitive to the value of
b.N due to its significant startup cost.
Two adjacent runs before this CL:
BenchmarkSkipValue 50 21047499 ns/op 93.37 MB/s
BenchmarkSkipValue 100 17260554 ns/op 118.05 MB/s
After this CL, using benchtime to recreate the
difference in b.N:
BenchmarkSkipValue 50 15204797 ns/op 131.67 MB/s
BenchmarkSkipValue 100 15332319 ns/op 130.58 MB/s
Change-Id: Iac86f86dd774d535302fa5e4c08f89f8da00be9e
Reviewed-on: https://go-review.googlesource.com/10053
Reviewed-by: Andrew Gerrand <adg@golang.org>
The Transport's writer to the remote server is wrapped in a
bufio.Writer to suppress many small writes while writing headers and
trailers. However, when writing the request body, the buffering may get
in the way if the request body is arriving slowly.
Because the io.Copy from the Request.Body to the writer is already
buffered, the outer bufio.Writer is unnecessary and prevents small
Request.Body.Reads from going to the server right away (and the
io.Reader contract does say to return when you've got something,
instead of blocking waiting for more). After the body is finished, the
Transport's bufio.Writer is still used for any trailers following.
A previous attempted fix for this made the chunk writer always flush
if the underlying type was a bufio.Writer, but that is not quite
correct. This CL instead makes it opt-in by using a private sentinel
type (wrapping a *bufio.Writer) to the chunk writer that requests
Flushes after each chunk body (the chunk header & chunk body are still
buffered together into one write).
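A sketch of the opt-in sentinel pattern, with illustrative names (the
real type lives in net/http's internals):

import (
	"bufio"
	"fmt"
	"io"
)

// flushAfterChunkWriter marks a *bufio.Writer whose owner wants a
// Flush after each chunk body reaches the buffer.
type flushAfterChunkWriter struct {
	*bufio.Writer
}

func writeChunk(w io.Writer, body []byte) error {
	// The chunk header and body are still buffered into one write.
	if _, err := fmt.Fprintf(w, "%x\r\n%s\r\n", len(body), body); err != nil {
		return err
	}
	if bw, ok := w.(*flushAfterChunkWriter); ok {
		return bw.Flush() // only the sentinel type requests a flush
	}
	return nil
}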
Fixes #6574
Change-Id: Icefcdf17130c9e285c80b69af295bfd3e72c3a70
Reviewed-on: https://go-review.googlesource.com/10021
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
I was previously setting GOARM=arm5 (due to confusion from
previously seeing buildall.sh's temporary "arm5" value as a GOARCH
and misremembering), but GOARM=arm5 was acting like GOARM=5 only by
accident. See https://go-review.googlesource.com/#/c/10023/
Instead, fail if GOARM is not a known value.
Change-Id: I9ba4fd7268df233d40b09f0431f37cd85a049847
Reviewed-on: https://go-review.googlesource.com/10024
Reviewed-by: Minux Ma <minux@golang.org>
Was otherwise absent unless bound to an exported symbol,
as in the BUG with strings.Title.
Fixes #10781.
Change-Id: I1543137073a9dee9e546bc9d648ca54fc9632dde
Reviewed-on: https://go-review.googlesource.com/9899
Reviewed-by: Russ Cox <rsc@golang.org>
Set overflowing integer constants to 1 rather than 0 to avoid
spurious div-zero errors in subsequent constant expressions.
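For instance (condensed and illustrative of the error cascade being
avoided):

const c = 1 << 1000000 // overflows; the compiler reports an error here
const d = 1 / c        // previously drew a spurious division-by-zero error too

With c internally set to 1 rather than 0, only the real error is
reported.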
Also: Exclude new test case from go/types test since it's
running too long (go/types doesn't have an upper constant
size limit at the moment).
Fixes #7746.
Change-Id: I3768488ad9909a3cf995247b81ee78a8eb5a1e41
Reviewed-on: https://go-review.googlesource.com/9165
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
One important use case is a pipeline computation that passes values
from one Goroutine to the next and then exits or is placed in a
wait state. If GOMAXPROCS > 1 a Goroutine running on P1 will enable
another Goroutine and then immediately make P1 available to execute
it. We need to prevent other Ps from stealing the G that P1 is about
to execute. Otherwise the Gs can thrash between Ps causing unneeded
synchronization and slowing down throughput.
Fix this by changing the stealing logic so that when a P attempts to
steal the only G on some other P's run queue, it will pause
momentarily to allow the victim P to schedule the G.
As part of optimizing stealing we also use a per-P victim queue to
move stolen Gs. This eliminates the zeroing of a stack-local victim
queue, which turned out to be expensive.
This CL is a necessary but not sufficient prerequisite to changing
the default value of GOMAXPROCS to something > 1 which is another
CL/discussion.
For highly serialized programs, such as GoroutineRing below, this
can make a large difference. For larger and more parallel programs,
such as the x/benchmarks, there is no noticeable detriment.
~/work/code/src/rsc.io/benchstat/benchstat old.txt new.txt
name old mean new mean delta
GoroutineRing 30.2µs × (0.98,1.01) 30.1µs × (0.97,1.04) ~ (p=0.941)
GoroutineRing-2 113µs × (0.91,1.07) 30µs × (0.98,1.03) -73.17% (p=0.004)
GoroutineRing-4 144µs × (0.98,1.02) 32µs × (0.98,1.01) -77.69% (p=0.000)
GoroutineRingBuf 32.7µs × (0.97,1.03) 32.5µs × (0.97,1.02) ~ (p=0.795)
GoroutineRingBuf-2 120µs × (0.92,1.08) 33µs × (1.00,1.00) -72.48% (p=0.004)
GoroutineRingBuf-4 138µs × (0.92,1.06) 33µs × (1.00,1.00) -76.21% (p=0.003)
The bench benchmarks show little impact.
old new
garbage 7032879 7011696
httpold 25509 25301
splayold 1022073 1019499
jsonold 28230624 28081433
Change-Id: I228c48fed8d85c9bbef16a7edc53ab7898506f50
Reviewed-on: https://go-review.googlesource.com/9872
Reviewed-by: Austin Clements <austin@google.com>
The FileConn and FilePacketConn APIs accept user-configured socket
descriptors to make them work together with the runtime-integrated
network poller, but there's a limitation: the APIs reject protocol
sockets that are not supported by the standard library. It's very
hard for the net and syscall packages to look after every platform-
and feature-specific socket.
This change allows various platform- and feature-specific socket
descriptors to use the runtime-integrated network poller via the new
SocketConn and SocketPacketConn APIs, which bridge between the net
and syscall packages and the platforms.
New exposed APIs:
pkg net, func SocketConn(*os.File, SocketAddr) (Conn, error)
pkg net, func SocketPacketConn(*os.File, SocketAddr) (PacketConn, error)
pkg net, type SocketAddr interface { Addr, Raw }
pkg net, type SocketAddr interface, Addr([]uint8) Addr
pkg net, type SocketAddr interface, Raw(Addr) []uint8
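A hypothetical usage sketch against the signatures above (the address
translation is a stand-in for a real protocol's sockaddr layout):

import (
	"net"
	"os"
)

// mySocketAddr translates between raw sockaddr bytes and net.Addr for
// a protocol the standard library does not know about.
type mySocketAddr struct{}

func (mySocketAddr) Addr(raw []byte) net.Addr { return nil /* decode raw sockaddr */ }
func (mySocketAddr) Raw(a net.Addr) []byte    { return nil /* encode to raw sockaddr */ }

// newCustomConn wraps an already-configured platform-specific socket
// so it can use the runtime-integrated network poller.
func newCustomConn(fd int) (net.Conn, error) {
	f := os.NewFile(uintptr(fd), "custom-proto")
	return net.SocketConn(f, mySocketAddr{})
}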
Fixes #10565.
Change-Id: Iec57499b3d84bb5cb0bcf3f664330c535eec11e3
Reviewed-on: https://go-review.googlesource.com/9275
Reviewed-by: Ian Lance Taylor <iant@golang.org>
If __LP64__ is defined then the type "long" is 64-bits, and there is
no need to explicitly request _FILE_OFFSET_BITS == 64. This changes
the definitions of F_GETLK, F_SETLK, and F_SETLKW on PPC to the values
that the kernel requires. The values used in C when _FILE_OFFSET_BITS
== 64 are corrected by the glibc fcntl function before making the
system call.
With this change, regenerate ppc64le files on Ubuntu trusty.
Change-Id: I8dddbd8a6bae877efff818f5c5dd06291ade3238
Reviewed-on: https://go-review.googlesource.com/9962
Reviewed-by: Minux Ma <minux@golang.org>
This will skip system call numbers that are ifdef'ed out in unistd.h,
as occurs on PPC.
Change-Id: I88e640e4621c7a8cc266433f34a7b4be71543ec9
Reviewed-on: https://go-review.googlesource.com/9966
Reviewed-by: Minux Ma <minux@golang.org>
Running these tests on the system stack is problematic because they
allocate Ps, which are large enough to overflow the system stack if
they are stack-allocated. It used to be necessary to run these tests
on the system stack because they were written in C, but since this is
no longer the case, we can fix this problem by simply not running the
tests on the system stack.
This also means we no longer need the hack in one of these tests that
forces the allocated Ps to escape to the heap, so eliminate that as
well.
Change-Id: I9064f5f8fd7f7b446ff39a22a70b172cfcb2dc57
Reviewed-on: https://go-review.googlesource.com/9923
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
The line in cgen.go was lost during the ginscmp CL.
The ggen.go change is not strictly necessary, but
it makes the 5g -S output for x[0] match what it said
before the ginscmp CL.
Change-Id: I5890a9ec1ac69a38509416eda5aea13b8b12b94a
Reviewed-on: https://go-review.googlesource.com/9929
Reviewed-by: Russ Cox <rsc@golang.org>
Fix a bug in Linux SysProcAttr handling: setting both Pdeathsig and
Credential caused Pdeathsig to be ignored. This is because the
kernel clears the death signal field when performing a setuid/setgid
system call.
Avoid this by moving Pdeathsig handling after Credential handling.
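For illustration, a configuration that previously lost its death
signal (the command and ids here are arbitrary):

import (
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("sleep", "60")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// The setuid/setgid performed for Credential used to clear
		// the death signal requested below; handling Pdeathsig after
		// Credential preserves it.
		Credential: &syscall.Credential{Uid: 65534, Gid: 65534},
		Pdeathsig:  syscall.SIGTERM,
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
}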
Fixes #9686
Change-Id: Id01896ad4e979b8c448e0061f00aa8762ca0ac94
Reviewed-on: https://go-review.googlesource.com/3290
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The code generated for x = append(x, v) is roughly:
t := x
if len(t)+1 > cap(t) {
t = grow(t)
}
t[len(t)] = v
len(t)++
x = t
We used to generate this code as Go pseudocode during walk.
Generate it instead as actual instructions during gen.
Doing so lets us apply a few optimizations. The most important
is that when, as in the above example, the source slice and the
destination slice are the same, the code can instead do:
t := x
if len(t)+1 > cap(t) {
t = grow(t)
x = {base(t), len(t)+1, cap(t)}
} else {
len(x)++
}
t[len(t)] = v
That is, in the fast path that does not reallocate the array,
only the updated length needs to be written back to x,
not the array pointer and not the capacity. This is more like
what you'd write by hand in C. It's faster in general, since
the fast path elides two of the three stores, but it's especially
faster when the form of x is such that the base pointer write
would turn into a write barrier. No write, no barrier.
name old mean new mean delta
BinaryTree17 5.68s × (0.97,1.04) 5.81s × (0.98,1.03) +2.35% (p=0.023)
Fannkuch11 4.41s × (0.98,1.03) 4.35s × (1.00,1.00) ~ (p=0.090)
FmtFprintfEmpty 92.7ns × (0.91,1.16) 86.0ns × (0.94,1.11) -7.31% (p=0.038)
FmtFprintfString 281ns × (0.96,1.08) 276ns × (0.98,1.04) ~ (p=0.219)
FmtFprintfInt 288ns × (0.97,1.06) 274ns × (0.98,1.06) -4.94% (p=0.002)
FmtFprintfIntInt 493ns × (0.97,1.04) 506ns × (0.99,1.01) +2.65% (p=0.009)
FmtFprintfPrefixedInt 423ns × (0.97,1.04) 391ns × (0.99,1.01) -7.52% (p=0.000)
FmtFprintfFloat 598ns × (0.99,1.01) 566ns × (0.99,1.01) -5.27% (p=0.000)
FmtManyArgs 1.89µs × (0.98,1.05) 1.91µs × (0.99,1.01) ~ (p=0.231)
GobDecode 14.8ms × (0.98,1.03) 15.3ms × (0.99,1.02) +3.01% (p=0.000)
GobEncode 12.3ms × (0.98,1.01) 11.5ms × (0.97,1.03) -5.93% (p=0.000)
Gzip 656ms × (0.99,1.05) 645ms × (0.99,1.01) ~ (p=0.055)
Gunzip 142ms × (1.00,1.00) 142ms × (1.00,1.00) -0.32% (p=0.034)
HTTPClientServer 91.2µs × (0.97,1.04) 90.5µs × (0.97,1.04) ~ (p=0.468)
JSONEncode 32.6ms × (0.97,1.08) 32.0ms × (0.98,1.03) ~ (p=0.190)
JSONDecode 114ms × (0.97,1.05) 114ms × (0.99,1.01) ~ (p=0.887)
Mandelbrot200 6.11ms × (0.98,1.04) 6.04ms × (1.00,1.01) ~ (p=0.167)
GoParse 6.66ms × (0.97,1.04) 6.47ms × (0.97,1.05) -2.81% (p=0.014)
RegexpMatchEasy0_32 159ns × (0.99,1.00) 171ns × (0.93,1.07) +7.19% (p=0.002)
RegexpMatchEasy0_1K 538ns × (1.00,1.01) 550ns × (0.98,1.01) +2.30% (p=0.000)
RegexpMatchEasy1_32 138ns × (1.00,1.00) 135ns × (0.99,1.02) -1.60% (p=0.000)
RegexpMatchEasy1_1K 869ns × (0.99,1.01) 879ns × (1.00,1.01) +1.08% (p=0.000)
RegexpMatchMedium_32 252ns × (0.99,1.01) 243ns × (1.00,1.00) -3.71% (p=0.000)
RegexpMatchMedium_1K 72.7µs × (1.00,1.00) 70.3µs × (1.00,1.00) -3.34% (p=0.000)
RegexpMatchHard_32 3.85µs × (1.00,1.00) 3.82µs × (1.00,1.01) -0.81% (p=0.000)
RegexpMatchHard_1K 118µs × (1.00,1.00) 117µs × (1.00,1.00) -0.56% (p=0.000)
Revcomp 920ms × (0.97,1.07) 917ms × (0.97,1.04) ~ (p=0.808)
Template 129ms × (0.98,1.03) 114ms × (0.99,1.01) -12.06% (p=0.000)
TimeParse 619ns × (0.99,1.01) 622ns × (0.99,1.01) ~ (p=0.062)
TimeFormat 661ns × (0.98,1.04) 665ns × (0.99,1.01) ~ (p=0.524)
See next CL for combination with a similar optimization for slice.
The benchmarks that are slower in this CL are still faster overall
with the combination of the two.
Change-Id: I2a7421658091b2488c64741b4db15ab6c3b4cb7e
Reviewed-on: https://go-review.googlesource.com/9812
Reviewed-by: David Chase <drchase@google.com>
This lets us abstract away which arguments can be constants and so on
and lets the back ends reverse the order of arguments if that helps.
Change-Id: I283ec1d694f2dd84eba22e5eb4aad78a2d2d9eb0
Reviewed-on: https://go-review.googlesource.com/9810
Reviewed-by: David Chase <drchase@google.com>
Messages that are too big are rejected when read, so they should
be rejected when written too.
Fixes #10518.
Change-Id: I96678fbe2d94f51b957fe26faef33cd8df3823dd
Reviewed-on: https://go-review.googlesource.com/9965
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The status buffer built by the exit function
was not NUL-terminated.
Fixes #10789.
Change-Id: I2d34ac50a19d138176c4b47393497ba7070d5b61
Reviewed-on: https://go-review.googlesource.com/9953
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Once a note is added to the signal queue, the pointer passed to the
signal handler may no longer be valid. Instead of passing the
pointer to the note string, we copy the value of the note string
into a static array in the signal queue.
Fixes #10784.
Change-Id: Iddd6837b58a14dfaa16b069308ae28a7b8e0965b
Reviewed-on: https://go-review.googlesource.com/9950
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Today's earlier fix can stay, but it's a band-aid over the real problem,
which is that bad code was slipping through the type checker
into the back end (and luckily causing a type error there).
I discovered this because my new append does not use the same
temporaries and failed the test as written.
Fixes #9521.
Change-Id: I7e33e2ea15743406e15c6f3fdf73e1edecda69bd
Reviewed-on: https://go-review.googlesource.com/9921
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Any Git branch can be the default branch, not only master. Remove
the hardwired 'checkout master' and use 'checkout {tag}' instead:
it works with and without a master branch. Furthermore, it resolves
the GitHub default branch issue, since changing the GitHub default
branch effectively changes HEAD.
Fixes #9032
Change-Id: I19a1221bcefe0806e7556c124c6da7ac0c2160b5
Reviewed-on: https://go-review.googlesource.com/5312
Reviewed-by: Russ Cox <rsc@golang.org>
registry.ReadSubKeyNames requires the QUERY access right in
addition to ENUMERATE_SUB_KEYS.
This was making TestLocalZoneAbbr fail on Windows 7 in the
Paris/Madrid timezone. It succeeded on Windows 8 because the
timezone name changed from "Paris/Madrid" to "Romance Standard
Time", the latter being matched by an abbrs entry.
Change-Id: I791287ba9d1b3556246fa4e9e1604a1fbba1f5e6
Reviewed-on: https://go-review.googlesource.com/9809
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TestAcceptIgnoreSomeErrors was created to test that the network
accept function ignores some errors. But the conditions created
by the test also affect network reads. Change the test to
ignore these read errors when acceptable.
Fixes #10785
Change-Id: I3da85cb55bd3e78c1980ad949e53e82391f9b41e
Reviewed-on: https://go-review.googlesource.com/9942
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This:
1) Defines the ABI hash of a package (as the SHA1 of the __.PKGDEF)
2) Defines the ABI hash of a shared library (sort the packages by import
path, concatenate the hashes of the packages and SHA1 that)
3) When building a shared library, compute the above value and define a
global symbol that points to a go string that has the hash as its value.
4) When linking against a shared library, read the abi hash from the
library and put both the value seen at link time and a reference
to the global symbol into the moduledata.
5) During runtime initialization, check that the hash seen at link time
still matches the hash the global symbol points to.
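A sketch of the hash computation in steps (1) and (2) above
(illustrative, not the linker's actual code):

import (
	"crypto/sha1"
	"sort"
)

// abiHash combines each package's __.PKGDEF hash, keyed by import
// path, into the shared library's ABI hash.
func abiHash(pkgHash map[string][sha1.Size]byte) [sha1.Size]byte {
	paths := make([]string, 0, len(pkgHash))
	for p := range pkgHash {
		paths = append(paths, p)
	}
	sort.Strings(paths) // sort the packages by import path
	h := sha1.New()
	for _, p := range paths {
		hp := pkgHash[p]
		h.Write(hp[:]) // concatenate the per-package hashes...
	}
	var out [sha1.Size]byte
	copy(out[:], h.Sum(nil)) // ...and SHA1 the concatenation
	return out
}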
Change-Id: Iaa54c783790e6dde3057a2feadc35473d49614a5
Reviewed-on: https://go-review.googlesource.com/8773
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
addmoduledata is called from a .init_array function and needs to follow the
platform ABI. It contains accesses to global data which are rewritten to use
R15 by the assembler, and as R15 is callee-save we need to save it.
Change-Id: I03893efb1576aed4f102f2465421f256f3bb0f30
Reviewed-on: https://go-review.googlesource.com/9941
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change-Id: I89778988baec1cf4a35d9342c7dbe8c4c08ff3cd
Reviewed-on: https://go-review.googlesource.com/9893
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Instead of errors like:
./blank2.go:15: cannot use ~b1 (type []int) as type int in assignment
we now have:
./blank2.go:15: cannot use _ (type []int) as type int in assignment
Less confusing for users.
Fixes#9521
Change-Id: Ieab9859040e8e0df95deeaee7eeb408d3be61c0f
Reviewed-on: https://go-review.googlesource.com/9902
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
DWARF generation appears to assume Cpos is cheap. This change makes
linking godoc about 8% faster and linking the standard library into a
single shared library about 22% faster on my machine.
Updates #10571
Change-Id: I3f81efd0174e356716e7971c4f59810b72378177
Reviewed-on: https://go-review.googlesource.com/9913
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When running the client header timeout test, there is a race between
us timing out and waiting on the remaining requests to be serviced. If
the client times out before the server blocks on the channel in the
handler, we will be simultaneously adding to a waitgroup with the
value 0 and waiting on it when we call TestServer.Close().
This is largely a theoretical race. We have to time out before we
enter the handler, and the only reason we would time out is that
we're blocked on the channel. Nevertheless, make the race detector happy
by turning the close into a channel send. This turns the defer call
into a synchronization point and we can be sure that we've entered
the handler before we close the server.
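A hedged sketch of the ordering trick, assuming the usual testing,
net/http, and net/http/httptest imports; the names are illustrative
rather than the test's actual code:

    func TestHandlerEnteredBeforeClose(t *testing.T) {
        gate := make(chan bool)
        ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            <-gate // handler blocks until the test releases it
        }))
        defer ts.Close()
        // A send (unlike close) blocks until the handler receives,
        // so this deferred send orders "handler entered" before the
        // deferred ts.Close above runs.
        defer func() { gate <- true }()

        go http.Get(ts.URL) // client side; may time out in the real test
        // ... assertions elided ...
    }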
Fixes#10780
Change-Id: Id73b017d1eb7503e446aa51538712ef49f2f5c9e
Reviewed-on: https://go-review.googlesource.com/9905
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The current implementation of typedmemmove walks the ptrmask
in the type to find out where pointers are. This led to turning off
GC programs for the Go 1.5 dev cycle, so that there would always
be a ptrmask. Instead of also interpreting the GC programs,
interpret the heap bitmap, which we know must be available and
up to date. (There is no point in write barriers when writing outside
the heap.)
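In outline, the new code walks the copy word by word, asking the heap
bitmap whether each word is a pointer. A hedged sketch; the bitmap
lookup and barrier are stand-ins, not the runtime's API, and size is
assumed to be a multiple of the word size:

    package sketch

    import "unsafe"

    const ptrSize = unsafe.Sizeof(uintptr(0))

    // barrieredWrite stands in for the runtime's write barrier.
    func barrieredWrite(dst *unsafe.Pointer, src unsafe.Pointer) {
        *dst = src // the real barrier would also inform the GC
    }

    // typedmemmoveSketch copies size bytes word by word; isPtr plays
    // the role of the heap bitmap lookup.
    func typedmemmoveSketch(dst, src unsafe.Pointer, size uintptr, isPtr func(off uintptr) bool) {
        for off := uintptr(0); off < size; off += ptrSize {
            s := (*unsafe.Pointer)(unsafe.Pointer(uintptr(src) + off))
            d := (*unsafe.Pointer)(unsafe.Pointer(uintptr(dst) + off))
            if isPtr(off) {
                barrieredWrite(d, *s)
            } else {
                *d = *s
            }
        }
    }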
This CL is only about correctness. The next CL will optimize the code.
Change-Id: Id1305c7c071fd2734ab96634b0e1c745b23fa793
Reviewed-on: https://go-review.googlesource.com/9886
Reviewed-by: Austin Clements <austin@google.com>
We want typedmemmove to use the heap bitmap to determine
where pointers are, instead of reinterpreting the type information.
The heap bitmap is simpler to access.
In general, typedmemmove will need to be able to look up the bits
for any word and find valid pointer information, so fill even after the
dead marker. Not filling after the dead marker was an optimization
I introduced only a few days ago, when reintroducing the dead marker
code. At the time I said it probably wouldn't last, and it didn't.
Change-Id: I6ba01bff17ddee1ff429f454abe29867ec60606e
Reviewed-on: https://go-review.googlesource.com/9885
Reviewed-by: Austin Clements <austin@google.com>
Moving them up makes them properly aligned on 32-bit systems.
There are some odd fields above them right now
(like fixalloc and mutex maybe).
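The constraint being respected is Go's 64-bit atomic alignment rule:
on 32-bit systems, only the first word of a variable or allocated
struct is guaranteed 8-byte aligned. An illustrative example, not the
runtime struct this CL reorders:

    package main

    import "sync/atomic"

    type stats struct {
        counter uint64 // placed first so it is 8-byte aligned on 386/ARM
        label   string // non-atomic fields can follow
    }

    func main() {
        var s stats
        atomic.AddUint64(&s.counter, 1) // would fault if counter were misaligned
    }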
Change-Id: I57851a5bbb2e7cc339712f004f99bb6c0cce0ca5
Reviewed-on: https://go-review.googlesource.com/9889
Reviewed-by: Austin Clements <austin@google.com>
Dead code.
This field is left over from Go 1.4, when we elided the fake write
barrier in this case. Today, it's unused (always false).
The upcoming append/slice changes handle this case again,
but without needing this field.
Change-Id: Ic6f160b64efdc1bbed02097ee03050f8cd0ab1b8
Reviewed-on: https://go-review.googlesource.com/9789
Reviewed-by: David Chase <drchase@google.com>
If you are using -h to get a stack trace at the site of the failure,
Yyerror will never return. Dump the register allocation sites
before calling Yyerror.
Change-Id: I51266c03e06cb5084c2eaa89b367b9ed85ba286a
Reviewed-on: https://go-review.googlesource.com/9788
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Dave Cheney <dave@cheney.net>
The new(uint64) was being moved to the stack, where it may not be aligned.
Change-Id: Iad070964202001b52029494d43e299fed980f939
Reviewed-on: https://go-review.googlesource.com/9787
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David Chase <drchase@google.com>
The -g mode is a debugging mode that prints instructions
as they are constructed. Gbranch was just missing the print.
Change-Id: I3fb45fd9bd3996ed96df5be903b9fd6bd97148b0
Reviewed-on: https://go-review.googlesource.com/9827
Reviewed-by: Rick Hudson <rlh@golang.org>
Reintroduce an optimization discarded during the initial conversion
from 4-bit heap bitmaps to 2-bit heap bitmaps: when we reach the
place in the bitmap where there are no more pointers, mark that position
for the GC so that it can avoid scanning past that place.
During heapBitsSetType we can also avoid initializing heap bitmap
beyond that location, which gives a bit of a win compared to Go 1.4.
This particular optimization (not initializing the heap bitmap) may not last:
we might change typedmemmove to use the heap bitmap, in which
case it would all need to be initialized. The early stop in the GC scan
will stay no matter what.
Compared to Go 1.4 (github.com/rsc/go, branch go14bench):
name old mean new mean delta
SetTypeNode64 80.7ns × (1.00,1.01) 57.4ns × (1.00,1.01) -28.83% (p=0.000)
SetTypeNode64Dead 80.5ns × (1.00,1.01) 13.1ns × (0.99,1.02) -83.77% (p=0.000)
SetTypeNode64Slice 2.16µs × (1.00,1.01) 1.54µs × (1.00,1.01) -28.75% (p=0.000)
SetTypeNode64DeadSlice 2.16µs × (1.00,1.01) 1.52µs × (1.00,1.00) -29.74% (p=0.000)
Compared to previous CL:
name old mean new mean delta
SetTypeNode64 56.7ns × (1.00,1.00) 57.4ns × (1.00,1.01) +1.19% (p=0.000)
SetTypeNode64Dead 57.2ns × (1.00,1.00) 13.1ns × (0.99,1.02) -77.15% (p=0.000)
SetTypeNode64Slice 1.56µs × (1.00,1.01) 1.54µs × (1.00,1.01) -0.89% (p=0.000)
SetTypeNode64DeadSlice 1.55µs × (1.00,1.01) 1.52µs × (1.00,1.00) -2.23% (p=0.000)
This is the last CL in the sequence converting from the 4-bit heap
to the 2-bit heap, with all the same optimizations reenabled.
Compared to before that process began (compared to CL 9701 patch set 1):
name old mean new mean delta
BinaryTree17 5.87s × (0.94,1.09) 5.91s × (0.96,1.06) ~ (p=0.578)
Fannkuch11 4.32s × (1.00,1.00) 4.32s × (1.00,1.00) ~ (p=0.474)
FmtFprintfEmpty 89.1ns × (0.95,1.16) 89.0ns × (0.93,1.10) ~ (p=0.942)
FmtFprintfString 283ns × (0.98,1.02) 298ns × (0.98,1.06) +5.33% (p=0.000)
FmtFprintfInt 284ns × (0.98,1.04) 286ns × (0.98,1.03) ~ (p=0.208)
FmtFprintfIntInt 486ns × (0.98,1.03) 498ns × (0.97,1.06) +2.48% (p=0.000)
FmtFprintfPrefixedInt 400ns × (0.99,1.02) 408ns × (0.98,1.02) +2.23% (p=0.000)
FmtFprintfFloat 566ns × (0.99,1.01) 587ns × (0.98,1.01) +3.69% (p=0.000)
FmtManyArgs 1.91µs × (0.99,1.02) 1.94µs × (0.99,1.02) +1.81% (p=0.000)
GobDecode 15.5ms × (0.98,1.05) 15.8ms × (0.98,1.03) +1.94% (p=0.002)
GobEncode 11.9ms × (0.97,1.03) 12.0ms × (0.96,1.09) ~ (p=0.263)
Gzip 648ms × (0.99,1.01) 648ms × (0.99,1.01) ~ (p=0.992)
Gunzip 143ms × (1.00,1.00) 143ms × (1.00,1.01) ~ (p=0.585)
HTTPClientServer 89.2µs × (0.99,1.02) 90.3µs × (0.98,1.01) +1.24% (p=0.000)
JSONEncode 32.3ms × (0.97,1.06) 31.6ms × (0.99,1.01) -2.29% (p=0.000)
JSONDecode 106ms × (0.99,1.01) 107ms × (1.00,1.01) +0.62% (p=0.000)
Mandelbrot200 6.02ms × (1.00,1.00) 6.03ms × (1.00,1.01) ~ (p=0.250)
GoParse 6.57ms × (0.97,1.06) 6.53ms × (0.99,1.03) ~ (p=0.243)
RegexpMatchEasy0_32 162ns × (1.00,1.00) 161ns × (1.00,1.01) -0.80% (p=0.000)
RegexpMatchEasy0_1K 561ns × (0.99,1.02) 541ns × (0.99,1.01) -3.67% (p=0.000)
RegexpMatchEasy1_32 145ns × (0.95,1.04) 138ns × (1.00,1.00) -5.04% (p=0.000)
RegexpMatchEasy1_1K 864ns × (0.99,1.04) 887ns × (0.99,1.01) +2.57% (p=0.000)
RegexpMatchMedium_32 255ns × (0.99,1.04) 253ns × (0.99,1.01) -1.05% (p=0.012)
RegexpMatchMedium_1K 73.9µs × (0.98,1.04) 72.8µs × (1.00,1.00) -1.51% (p=0.005)
RegexpMatchHard_32 3.92µs × (0.98,1.04) 3.85µs × (1.00,1.01) -1.88% (p=0.002)
RegexpMatchHard_1K 120µs × (0.98,1.04) 117µs × (1.00,1.01) -2.02% (p=0.001)
Revcomp 936ms × (0.95,1.08) 922ms × (0.97,1.08) ~ (p=0.234)
Template 130ms × (0.98,1.04) 126ms × (0.99,1.01) -2.99% (p=0.000)
TimeParse 638ns × (0.98,1.05) 628ns × (0.99,1.01) -1.54% (p=0.004)
TimeFormat 674ns × (0.99,1.01) 668ns × (0.99,1.01) -0.80% (p=0.001)
The slowdown of the first few benchmarks seems to be due to the new
atomic operations for certain small size allocations. But the larger
benchmarks mostly improve, probably due to the decreased memory
pressure from having half as much heap bitmap.
CL 9706, which removes the (no longer used) wbshadow mode,
gets back what is lost in the early microbenchmarks.
Change-Id: I37423a209e8ec2a2e92538b45cac5422a6acd32d
Reviewed-on: https://go-review.googlesource.com/9705
Reviewed-by: Rick Hudson <rlh@golang.org>
For the conversion of the heap bitmap from 4-bit to 2-bit fields,
I replaced heapBitsSetType with the dumbest thing that could possibly work:
two atomic operations (atomicand8+atomicor8) per 2-bit field.
This CL replaces that code with a proper implementation that
avoids the atomics whenever possible. Benchmarks vs base CL
(before the conversion to 2-bit heap bitmap) and vs Go 1.4 below.
Compared to Go 1.4, SetTypePtr (a 1-pointer allocation)
is 10ns slower because a race against the concurrent GC requires the
use of an atomicor8 that used to be an ordinary write. This slowdown
was present even in the base CL.
Compared to both Go 1.4 and base, SetTypeNode8 (a 10-word allocation)
is 10ns slower because it too needs a new atomic, because with the
denser representation, the byte on the end of the allocation is now shared
with the object next to it; this was not true with the 4-bit representation.
Excluding these two (fundamental) slowdowns due to the use of atomics,
the new code is noticeably faster than both Go 1.4 and the base CL.
The next CL will reintroduce the "typeDead" optimization.
Stats are from 5 runs on a MacBookPro10,2 (late 2012 Core i5).
Compared to base CL (** = new atomic)
name old mean new mean delta
SetTypePtr 14.1ns × (0.99,1.02) 14.7ns × (0.93,1.10) ~ (p=0.175)
SetTypePtr8 18.4ns × (1.00,1.01) 18.6ns × (0.81,1.21) ~ (p=0.866)
SetTypePtr16 28.7ns × (1.00,1.00) 22.4ns × (0.90,1.27) -21.88% (p=0.015)
SetTypePtr32 52.3ns × (1.00,1.00) 33.8ns × (0.93,1.24) -35.37% (p=0.001)
SetTypePtr64 79.2ns × (1.00,1.00) 55.1ns × (1.00,1.01) -30.43% (p=0.000)
SetTypePtr126 118ns × (1.00,1.00) 100ns × (1.00,1.00) -15.97% (p=0.000)
SetTypePtr128 130ns × (0.92,1.19) 98ns × (1.00,1.00) -24.36% (p=0.008)
SetTypePtrSlice 726ns × (0.96,1.08) 760ns × (1.00,1.00) ~ (p=0.152)
SetTypeNode1 14.1ns × (0.94,1.15) 12.0ns × (1.00,1.01) -14.60% (p=0.020)
SetTypeNode1Slice 135ns × (0.96,1.07) 88ns × (1.00,1.00) -34.53% (p=0.000)
SetTypeNode8 20.9ns × (1.00,1.01) 32.6ns × (1.00,1.00) +55.37% (p=0.000) **
SetTypeNode8Slice 414ns × (0.99,1.02) 244ns × (1.00,1.00) -41.09% (p=0.000)
SetTypeNode64 80.0ns × (1.00,1.00) 57.4ns × (1.00,1.00) -28.23% (p=0.000)
SetTypeNode64Slice 2.15µs × (1.00,1.01) 1.56µs × (1.00,1.00) -27.43% (p=0.000)
SetTypeNode124 119ns × (0.99,1.00) 100ns × (1.00,1.00) -16.11% (p=0.000)
SetTypeNode124Slice 3.40µs × (1.00,1.00) 2.93µs × (1.00,1.00) -13.80% (p=0.000)
SetTypeNode126 120ns × (1.00,1.01) 98ns × (1.00,1.00) -18.19% (p=0.000)
SetTypeNode126Slice 3.53µs × (0.98,1.08) 3.02µs × (1.00,1.00) -14.49% (p=0.002)
SetTypeNode1024 726ns × (0.97,1.09) 740ns × (1.00,1.00) ~ (p=0.451)
SetTypeNode1024Slice 24.9µs × (0.89,1.37) 23.1µs × (1.00,1.00) ~ (p=0.476)
Compared to Go 1.4 (** = new atomic)
name old mean new mean delta
SetTypePtr 5.71ns × (0.89,1.19) 14.68ns × (0.93,1.10) +157.24% (p=0.000) **
SetTypePtr8 19.3ns × (0.96,1.10) 18.6ns × (0.81,1.21) ~ (p=0.638)
SetTypePtr16 30.7ns × (0.99,1.03) 22.4ns × (0.90,1.27) -26.88% (p=0.005)
SetTypePtr32 51.5ns × (1.00,1.00) 33.8ns × (0.93,1.24) -34.40% (p=0.001)
SetTypePtr64 83.6ns × (0.94,1.12) 55.1ns × (1.00,1.01) -34.12% (p=0.001)
SetTypePtr126 137ns × (0.87,1.26) 100ns × (1.00,1.00) -27.10% (p=0.028)
SetTypePtrSlice 865ns × (0.80,1.23) 760ns × (1.00,1.00) ~ (p=0.243)
SetTypeNode1 15.2ns × (0.88,1.12) 12.0ns × (1.00,1.01) -20.89% (p=0.014)
SetTypeNode1Slice 156ns × (0.93,1.16) 88ns × (1.00,1.00) -43.57% (p=0.001)
SetTypeNode8 23.8ns × (0.90,1.18) 32.6ns × (1.00,1.00) +36.76% (p=0.003) **
SetTypeNode8Slice 502ns × (0.92,1.10) 244ns × (1.00,1.00) -51.46% (p=0.000)
SetTypeNode64 85.6ns × (0.94,1.11) 57.4ns × (1.00,1.00) -32.89% (p=0.001)
SetTypeNode64Slice 2.36µs × (0.91,1.14) 1.56µs × (1.00,1.00) -33.96% (p=0.002)
SetTypeNode124 130ns × (0.91,1.12) 100ns × (1.00,1.00) -23.49% (p=0.004)
SetTypeNode124Slice 3.81µs × (0.90,1.22) 2.93µs × (1.00,1.00) -23.09% (p=0.025)
There are fewer benchmarks vs Go 1.4 because unrolling directly
into the heap bitmap is not yet implemented, so those would not
be meaningful comparisons.
These benchmarks were not present in Go 1.4 as distributed.
The backport to Go 1.4 is in github.com/rsc/go's go14bench branch,
commit 71d5ee5.
Change-Id: I95ed05a22bf484b0fc9efad549279e766c98d2b6
Reviewed-on: https://go-review.googlesource.com/9704
Reviewed-by: Rick Hudson <rlh@golang.org>
Previous CLs changed the representation of the non-heap type bitmaps
to be 1-bit bitmaps (pointer or not). Before this CL, the heap bitmap
stored a 2-bit type for each word and a mark bit and checkmark bit
for the first word of the object. (There used to be additional per-word bits.)
Reduce the heap bitmap to 2 bits per word, with 1 bit dedicated to
pointer-or-not and the other used for mark, checkmark, and "keep
scanning forward to find pointers in this object." See comments for details.
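A hedged sketch of reading such an encoding; the packing and constant
names are illustrative, and the runtime's actual byte layout differs:

    package sketch

    const (
        bitPointer = 1 << 0 // the word holds a pointer
        bitScan    = 1 << 1 // mark / checkmark / keep-scanning bit
    )

    // wordBits returns the 2-bit entry for a heap word, assuming
    // four entries packed per bitmap byte.
    func wordBits(bitmap []byte, word uintptr) byte {
        return bitmap[word/4] >> ((word % 4) * 2) & 3
    }

    func isPointer(bitmap []byte, word uintptr) bool {
        return wordBits(bitmap, word)&bitPointer != 0
    }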
This CL replaces heapBitsSetType with very slow but obviously correct code.
A followup CL will optimize it. (Spoiler: the new code is faster than Go 1.4 was.)
Change-Id: I999577a133f3cfecacebdec9cdc3573c235c7fb9
Reviewed-on: https://go-review.googlesource.com/9703
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The type information in reflect.Type and the GC programs is now
1 bit per word, down from 2 bits.
The in-memory unrolled type bitmap representation is now
1 bit per word, down from 4 bits.
The conversion from the unrolled (now 1-bit) bitmap to the
heap bitmap (still 4-bit) is not optimized. A followup CL will
work on that, after the heap bitmap has been converted to 2-bit.
The typeDead optimization, in which a special value denotes
that there are no more pointers anywhere in the object, is lost
in this CL. A followup CL will bring it back in the final form of
heapBitsSetType.
Change-Id: If61e67950c16a293b0b516a6fd9a1c755b6d5549
Reviewed-on: https://go-review.googlesource.com/9702
Reviewed-by: Austin Clements <austin@google.com>
There was an old benchmark that measured this indirectly
via allocation, but I don't understand how to factor out the
allocation cost when interpreting the numbers.
Replace with a benchmark that only calls heapBitsSetType,
that does not allocate. This was not possible when the
benchmark was first written, because heapBitsSetType had
not been factored out of mallocgc.
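The benchmark has roughly the shape below, assuming the testing and
unsafe imports; Node64, node64Type, and setType are stand-ins for the
test's types and the runtime call, since heapBitsSetType is internal:

    // Node64 is a stand-in payload; node64Type and setType stand in
    // for the type descriptor and the runtime's heapBitsSetType.
    type Node64 struct{ ptrs [64]*byte }

    var node64Type uintptr

    func setType(p unsafe.Pointer, size, typ uintptr) {}

    func BenchmarkSetTypeNode64(b *testing.B) {
        p := new(Node64) // allocated once, outside the timed loop
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            setType(unsafe.Pointer(p), unsafe.Sizeof(*p), node64Type)
        }
    }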
Change-Id: I30f0f02362efab3465a50769398be859832e6640
Reviewed-on: https://go-review.googlesource.com/9701
Reviewed-by: Austin Clements <austin@google.com>
Since we now have stack information for code running on the
systemstack, we can traceback over it. To make CPU profiles useful,
add a case in gentraceback to jump over systemstack switches.
Fixes#10609.
Change-Id: I21f47fcc802c07c5d4a1ada56374314e388a6dc7
Reviewed-on: https://go-review.googlesource.com/9506
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
I have around twenty such values on a Windows 7 development machine.
regedit displays (translated): "invalid 32-bit DWORD value".
Change-Id: Ib37a414ee4c85e891b0a25fed2ddad9e105f5f4e
Reviewed-on: https://go-review.googlesource.com/9901
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
On darwin/arm, the test sometimes fails with:
Process 557 resuming
--- FAIL: TestWriteTimeoutFluctuation (1.64s)
timeout_test.go:706: Write took over 1s; expected 0.1s
FAIL
Process 557 exited with status = 1 (0x00000001)
go_darwin_arm_exec: timeout running tests
This change increases the timeout on iOS builders from 1s to 3s as a
temporary fix.
Updates #10775.
Change-Id: Ifdaf99cf5b8582c1a636a0f7d5cc66bb276efd72
Reviewed-on: https://go-review.googlesource.com/9915
Reviewed-by: Minux Ma <minux@golang.org>