The cleanup also makes it ~5% faster, but that's
not the point of this CL.
Optimizations can come in future CLs.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6286043
It's very unfortunate that the type of the Data field of struct
RawSockaddr is [14]uint8 on Linux/ARM instead of [14]int8 as
on all the others.
By the way, it should be [14]int8 according to my header files.
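For reference, a sketch of the divergence (abbreviated to the
relevant field; the real definitions are machine-generated from
the C headers):

	// Most platforms:
	type RawSockaddr struct {
		Family uint16
		Data   [14]int8
	}

	// Linux/ARM generates instead:
	//	Data [14]uint8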
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6275050
Move address info flags to per-platform files. This is needed to
enable cgo on NetBSD (and later OpenBSD), as some of the currently
used AI_* defines do not exist on these platforms.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6250075
Use a perfect cuckoo hash to avoid binary search.
Define Atom bits as offset+length in one long string instead
of an enumeration, to avoid string headers.
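A sketch of the encoding (identifiers illustrative; the real
table is machine-generated):

	// An Atom packs an offset and a length into one shared string:
	// the high bits are the offset into atomText, the low 8 bits the length.
	type Atom uint32

	var atomText = "abbraddressareaarticle..." // every name, in one long string

	func (a Atom) String() string {
		start := uint32(a >> 8)
		n := uint32(a & 0xff)
		return atomText[start : start+n]
	}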
Before: 1909 string bytes + 6060 tables = 7969 total data
After: 1406 string bytes + 2048 tables = 3454 total data
benchmark old ns/op new ns/op delta
BenchmarkLookup 83878 64681 -22.89%
R=nigeltao, r
CC=golang-dev
https://golang.org/cl/6262051
Ceil to 4.81 from 20.6 ns/op
Floor to 4.37 from 13.5 ns/op
Trunc to 3.97 from 14.3 ns/op
Also changed three MOVSDs to MOVAPDs in log_amd64.s
R=rsc, golang-dev
CC=golang-dev
https://golang.org/cl/6262048
Currently walk() doesn't check for err == SkipDir when iterating
a directory list, but such a promise is made in the docs for WalkFunc.
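For example, the docs promise that returning SkipDir from a
WalkFunc prunes the directory (a minimal sketch, assuming the
usual os and path/filepath imports; the directory name is
arbitrary):

	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		if info.IsDir() && info.Name() == "testdata" {
			// walk must not descend into this directory
			return filepath.SkipDir
		}
		return nil
	})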
Fixes #3486.
R=rsc, r
CC=golang-dev
https://golang.org/cl/6257059
Now that gri has made go/parser 15% faster, I offer this
change to slow back down cmd/api ~proportionately, adding
FreeBSD to the go1-checked set of platforms.
Really we should have done this earlier. This will prevent us
from breaking FreeBSD compatibility accidentally in the
future.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/6279044
To avoid goroutines during init, the nextItem function was a
clever workaround. Now that init goroutines are permitted,
restore the original, simpler design.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6282043
- only compute current line position if needed
(i.e., if a comment is present)
- added benchmark
benchmark old ns/op new ns/op delta
BenchmarkParse 10902990 9313330 -14.58%
benchmark old MB/s new MB/s speedup
BenchmarkParse 5.31 6.22 1.17x
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/6270043
Saving the code in case we improve things enough that
it matters later, but at least right now it is not worth doing.
R=ken2
CC=golang-dev
https://golang.org/cl/6248071
The previous code preallocated arrays of entries sized as if
there were one entry every 128 bytes. Moving to a 4096-byte
interval reduces the overhead per megabyte of address space
to 2kB from 64kB (on 64-bit systems).
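(The arithmetic, assuming one 8-byte entry per interval:
2^20/4096 = 256 entries × 8 bytes = 2kB, versus
2^20/128 = 8192 entries × 8 bytes = 64kB.)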
The performance impact will be negative for very small MemProfileRate.
test/bench/garbage/tree2 -heapsize 800000000 (default memprofilerate)
Before: mprof 65993056 bytes (1664 bucketmem + 65991392 addrmem)
After: mprof 1989984 bytes (1680 bucketmem + 1988304 addrmem)
R=golang-dev, rsc
CC=golang-dev, remy
https://golang.org/cl/6257069
The previous heap profile format did not include buckets with
zero used bytes. Also add several missing MemStats fields in
debug mode.
R=golang-dev, rsc
CC=golang-dev, remy
https://golang.org/cl/6249068
Drop expecttaken function in favor of extra argument
to gbranch and bgen. Mark loop condition as likely to
be true, so that loops are generated inline.
The main benefit here is contiguous code when trying
to read the generated assembly. It has only minor effects
on the timing, and they mostly cancel the minor effects
that aligning function entry points had. One exception:
both changes made Fannkuch faster.
Compared to before CL 6244066 (before aligned functions)
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4222117400 4201958800 -0.48%
BenchmarkFannkuch11 3462631800 3215908600 -7.13%
BenchmarkGobDecode 20887622 20899164 +0.06%
BenchmarkGobEncode 9548772 9439083 -1.15%
BenchmarkGzip 151687 152060 +0.25%
BenchmarkGunzip 8742 8711 -0.35%
BenchmarkJSONEncode 62730560 62686700 -0.07%
BenchmarkJSONDecode 252569180 252368960 -0.08%
BenchmarkMandelbrot200 5267599 5252531 -0.29%
BenchmarkRevcomp25M 980813500 985248400 +0.45%
BenchmarkTemplate 361259100 357414680 -1.06%
Compared to tip (aligned functions):
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4140739800 4201958800 +1.48%
BenchmarkFannkuch11 3259914400 3215908600 -1.35%
BenchmarkGobDecode 20620222 20899164 +1.35%
BenchmarkGobEncode 9384886 9439083 +0.58%
BenchmarkGzip 150333 152060 +1.15%
BenchmarkGunzip 8741 8711 -0.34%
BenchmarkJSONEncode 65210990 62686700 -3.87%
BenchmarkJSONDecode 249394860 252368960 +1.19%
BenchmarkMandelbrot200 5273394 5252531 -0.40%
BenchmarkRevcomp25M 996013800 985248400 -1.08%
BenchmarkTemplate 360620840 357414680 -0.89%
R=ken2
CC=golang-dev
https://golang.org/cl/6245069
On 6l and 8l, this is a real instruction, guaranteed to
cause an 'undefined instruction' exception.
On 5l, we simulate it as BL to address 0.
The plan is to use it as a signal to the linker that this
point in the instruction stream cannot be reached
(hence the changes to nofollow). This will help the
compiler explain that panicindex and friends do not
return without having to put a list of these functions
in the linker.
R=ken2
CC=golang-dev
https://golang.org/cl/6255064
16 seems pretty standard on x86 for function entry.
I don't know if ARM would benefit, so I used just 4
(single instruction alignment).
This has a minor absolute effect on the current timings.
The main hope is that it will make them more consistent from
run to run.
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 4222117400 4140739800 -1.93%
BenchmarkFannkuch11 3462631800 3259914400 -5.85%
BenchmarkGobDecode 20887622 20620222 -1.28%
BenchmarkGobEncode 9548772 9384886 -1.72%
BenchmarkGzip 151687 150333 -0.89%
BenchmarkGunzip 8742 8741 -0.01%
BenchmarkJSONEncode 62730560 65210990 +3.95%
BenchmarkJSONDecode 252569180 249394860 -1.26%
BenchmarkMandelbrot200 5267599 5273394 +0.11%
BenchmarkRevcomp25M 980813500 996013800 +1.55%
BenchmarkTemplate 361259100 360620840 -0.18%
R=ken2
CC=golang-dev
https://golang.org/cl/6244066
The code was inconsistent about when it used
brchain(x) and when it used x directly, with the result
that you could end up emitting code for brchain(x) but
leave the jump pointing at an unemitted x.
R=ken2
CC=golang-dev
https://golang.org/cl/6250077
This bug was introduced in the following revision:
changeset: 11404:26dceba5c610
user: Ivan Krasin <krasin@golang.org>
date: Mon Jan 23 09:19:39 2012 -0500
summary: compress/flate: reduce memory pressure at cost of additional arithmetic operation.
This is the review page for that CL: https://golang.org/cl/5555070/
R=rsc, imkrasin
CC=golang-dev
https://golang.org/cl/6249067
The correct procid is needed for unparking LWPs on NetBSD - always
initialise procid in minit() so that cgo works correctly. The non-cgo
case already works correctly since procid is initialised via
lwp_create().
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6257071
On NetBSD a cgo enabled binary has more than 32 sections - bump NSECTS
so that we can actually link them successfully.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6261052
filterPaeth takes []byte arguments instead of byte arguments,
which avoids some redundant computation of the previous pixel
in the inner loop.
Also eliminate a bounds check in decoding the up filter.
benchmark old ns/op new ns/op delta
BenchmarkDecodeGray 3139636 2812531 -10.42%
BenchmarkDecodeNRGBAGradient 12341520 10971680 -11.10%
BenchmarkDecodeNRGBAOpaque 10740780 9612455 -10.51%
BenchmarkDecodePaletted 1819535 1818913 -0.03%
BenchmarkDecodeRGB 8974695 8178070 -8.88%
R=rsc
CC=golang-dev
https://golang.org/cl/6243061
A block with a finalizer might also be profiled. The special bit
is needed to unregister the block from the profile. It will be
unset only when the block is freed.
Fixes #3668.
R=golang-dev, rsc
CC=golang-dev, remy
https://golang.org/cl/6249066
The check for Stringer etc. can only fire if the operand is not a builtin, so avoid
the expensive check if we know there's no chance.
Also put in a fast path for pad, which saves a more modest amount.
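Illustrative only (not the printer's actual code), the shape of
the builtin guard is roughly:

	func printArg(arg interface{}) {
		switch arg.(type) {
		case bool, int, int64, uint64, float64, string:
			// builtin kinds are handled directly; no method checks needed
		default:
			// only now run the expensive interface assertions
			if s, ok := arg.(fmt.Stringer); ok {
				fmt.Print(s.String())
			}
		}
	}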
benchmark old ns/op new ns/op delta
BenchmarkSprintfEmpty 148 152 +2.70%
BenchmarkSprintfString 585 497 -15.04%
BenchmarkSprintfInt 441 396 -10.20%
BenchmarkSprintfIntInt 718 603 -16.02%
BenchmarkSprintfPrefixedInt 676 621 -8.14%
BenchmarkSprintfFloat 1003 953 -4.99%
BenchmarkManyArgs 2945 2312 -21.49%
BenchmarkScanInts 1704152 1734441 +1.78%
BenchmarkScanRecursiveInt 1837397 1828920 -0.46%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/6245068
This prevents clients from seeing RSTs and missing the response
body.
TCP stacks vary. The included test failed on Darwin before but
passed on Linux.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6256066
It was only being used for (*Stmt).Exec, not Query, and not for
the same two methods on *DB.
This unifies (*Stmt).Exec's old inline code into the old
subsetArgs function, renaming it in the process (changing the
old word "subset" to "driver", which was mostly converted earlier).
Fixes #3640.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6258045
It's sad to introduce a new macro, but rnd shows up consistently
in profiles, and the function call overwhelms the two arithmetic
instructions it performs.
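The two instructions in question are just a round-up-to-alignment;
in Go notation (assuming the alignment r is a power of two):

	rounded := (n + r - 1) &^ (r - 1)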
R=r
CC=golang-dev
https://golang.org/cl/6260051
Plan 9 versions for amd64 have 2 megabyte pages.
This also fixes the logic for 32-bit vs 64-bit Plan 9,
making 64-bit the default, and adds logic to generate
a symbols table.
R=golang-dev, rsc, rminnich, ality, 0intro
CC=golang-dev, john
https://golang.org/cl/6218046
The old code generated for a bounds check was

	CMP
	JLT ok
	CALL panicindex
ok:
	...

The new code is (once the linker finishes with it):

	CMP
	JGE panic
	...
panic:
	CALL panicindex
which moves the calls out of line, putting more useful
code in each cache line. This matters especially in tight
loops, such as in Fannkuch. The benefit is more modest
elsewhere, but real.
From test/bench/go1, amd64:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 6096092000 6088808000 -0.12%
BenchmarkFannkuch11 6151404000 4020463000 -34.64%
BenchmarkGobDecode 28990050 28894630 -0.33%
BenchmarkGobEncode 12406310 12136730 -2.17%
BenchmarkGzip 179923 179903 -0.01%
BenchmarkGunzip 11219 11130 -0.79%
BenchmarkJSONEncode 86429350 86515900 +0.10%
BenchmarkJSONDecode 334593800 315728400 -5.64%
BenchmarkRevcomp25M 1219763000 1180767000 -3.20%
BenchmarkTemplate 492947600 483646800 -1.89%
And 386:
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 6354902000 6243000000 -1.76%
BenchmarkFannkuch11 8043769000 7326965000 -8.91%
BenchmarkGobDecode 19010800 18941230 -0.37%
BenchmarkGobEncode 14077500 13792460 -2.02%
BenchmarkGzip 194087 193619 -0.24%
BenchmarkGunzip 12495 12457 -0.30%
BenchmarkJSONEncode 125636400 125451400 -0.15%
BenchmarkJSONDecode 696648600 685032800 -1.67%
BenchmarkRevcomp25M 2058088000 2052545000 -0.27%
BenchmarkTemplate 602140000 589876800 -2.04%
To implement this, two new instruction forms:
	JLT target     // same as always
	JLT $0, target // branch expected not taken
	JLT $1, target // branch expected taken
The linker could also emit the prediction prefixes, but it
does not: expected-taken branches are reversed so that the
expected case is not taken (as in the example above), and
the default expectation for such a jump is already
not taken.
R=golang-dev, gri, r, dave
CC=golang-dev
https://golang.org/cl/6248049
Implement the (3-per-family) Noah's Ark clause (i.e. don't put
more than three identical elements on the list of active
formatting elements).
Also, when running tests, sort attributes by name before dumping
them.
Pass 4 additional tests with Noah's Ark clause (including one
that needs attributes to be sorted).
Pass 5 additional, unrelated tests because of sorting attributes.
R=nigeltao, rsc
CC=golang-dev
https://golang.org/cl/6247056
CanonicalHeaderKey didn't allocate, but it did use unnecessary
CPU in the hot path, deciding it didn't need to allocate.
I considered using constants for all these common header keys
but I didn't think it would be prettier. "Content-Length" looks
better than contentLength or hdrContentLength, etc.
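A sketch of the kind of cheap scan involved (hypothetical, not
the actual net/http code):

	// isCanonical reports whether a key is already in canonical
	// form, so the common case allocates and copies nothing.
	func isCanonical(s string) bool {
		upper := true // first byte, and each byte after '-', should be upper case
		for i := 0; i < len(s); i++ {
			c := s[i]
			if upper && 'a' <= c && c <= 'z' {
				return false
			}
			if !upper && 'A' <= c && c <= 'Z' {
				return false
			}
			upper = c == '-'
		}
		return true
	}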
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/6255053
The comment on cache keys above connectMethod says "http to proxy, http
anywhere after that"; in reality, however, the target address was always
included, which prevented HTTP requests to different target
addresses from reusing the same HTTP proxy connection.
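A sketch of the intended keying (identifiers hypothetical):

	func cacheKey(proxyAddr, targetScheme, targetAddr string) string {
		if targetScheme == "http" {
			// plain HTTP via a proxy: one connection serves any
			// target, so the target address is not part of the key
			return proxyAddr
		}
		// CONNECT tunnels (https) are bound to a single target
		return proxyAddr + "|" + targetAddr
	}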
R=golang-dev, r, rsc, bradfitz
CC=golang-dev
https://golang.org/cl/5901064
CL 5956051 introduced too many call != nil checks, so
attempt to improve this by splitting logic into three
distinct parts.
R=r
CC=golang-dev
https://golang.org/cl/6248048
	for expr1, expr2 = range slice

was assigning to expr1 and expr2 in sequence
instead of in parallel. Now it assigns in parallel,
as it should. This matters for things like

	for i, x[i] = range slice
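The rule is the same as for any multiple assignment; a minimal
illustration:

	x := []int{1, 2, 3}
	i := 0
	i, x[i] = 1, 10 // parallel: the index uses the old i, so this writes x[0]
	// sequential assignment (the bug) would update i first and write x[1]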
Fixes #3464.
R=ken2
CC=golang-dev
https://golang.org/cl/6252048
This is from CL 5451105 but was dropped from that CL.
See also CL 6137051.
The only change compared to 5451105 is to check for
h != nil in reflect·mapiterinit; allowing use of nil maps
must have happened after that original CL.
Fixes #3573.
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/6215078
Remove redundant checks for integration points.
Ignore null bytes in text.
Don't break out of foreign content for a <font> tag unless it
has a color, face, or size attribute.
Check for MathML text integration points when breaking out of
foreign content.
Pass two new tests.
R=nigeltao
CC=golang-dev
https://golang.org/cl/6256045
The bulk of the gains come from hoisting the modulo ops outside of
the inner loop.
Reducing the digest type from 8 bytes to 4 bytes gains another 1% on
the hash/adler32 micro-benchmark.
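A sketch of the modulo hoisting described above (helper name
hypothetical; the chunk bound is zlib's standard NMAX):

	const mod = 65521 // largest prime below 65536 (RFC 1950)

	func update(s1, s2 uint32, p []byte) (uint32, uint32) {
		for len(p) > 0 {
			q := p
			if len(q) > 5552 { // most bytes possible before s2 can overflow uint32
				q = q[:5552]
			}
			p = p[len(q):]
			for _, b := range q {
				s1 += uint32(b)
				s2 += s1
			}
			// the modulo now runs once per chunk instead of once per byte
			s1 %= mod
			s2 %= mod
		}
		return s1, s2
	}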
Benchmarks for $GOOS,$GOARCH = linux,amd64 below.
hash/adler32 benchmark:
benchmark old ns/op new ns/op delta
BenchmarkAdler32KB 1660 1364 -17.83%
image/png benchmark:
benchmark old ns/op new ns/op delta
BenchmarkDecodeGray 2466909 2425539 -1.68%
BenchmarkDecodeNRGBAGradient 9884500 9751705 -1.34%
BenchmarkDecodeNRGBAOpaque 8511615 8379800 -1.55%
BenchmarkDecodePaletted 1366683 1330677 -2.63%
BenchmarkDecodeRGB 6987496 6884974 -1.47%
BenchmarkEncodePaletted 6292408 6040052 -4.01%
BenchmarkEncodeRGBOpaque 19780680 19178440 -3.04%
BenchmarkEncodeRGBA 80738600 79076800 -2.06%
Wall time for Denis Cheremisov's PNG-decoding program given in
https://groups.google.com/group/golang-nuts/browse_thread/thread/22aa8a05040fdd49
Before: 2.44s
After: 2.26s
Delta: -7%
R=rsc
CC=golang-dev
https://golang.org/cl/6251044
When the client fails to write a request, it sends the caller that
error. However, the server might have failed to read that request in
the meantime and replied with that error. When the client then reads
the response, the call is no longer pending, so call will be nil.
Handle this gracefully by discarding such server responses.
R=golang-dev, r
CC=golang-dev, rsc
https://golang.org/cl/5956051
* Eliminate bounds check on known small shifts.
* Rewrite x<<s | x>>(32-s) as a rotate (constant s; example below).
* More aggressive (but still minimal) range analysis.
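For the rotate rewrite, code of the form

	func rol7(x uint32) uint32 {
		return x<<7 | x>>(32-7)
	}

can now compile to a single rotate instruction instead of two
shifts and an OR.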
R=ken, dave, iant
CC=golang-dev
https://golang.org/cl/6209077
The previous attempt to explain this got it backwards (all the more reason to be
sad we couldn't make the two functions behave the same).
Fixes #3669.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6249051
There's no need for the 16-bit arithmetic here,
and it tickles a long-standing compiler bug.
Fix the exp code not to use 16-bit math and
create an explicit test for the compiler bug.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/6256048
- interface methods appeared under VarDecl in search results
(long-standing TODO)
- don't walk parts of AST which contain no indexable material
(minor performance tuning)
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6228047
The documentation says so, but in the case of a normalized
integral Rat, the denominator was a new value. Changed the
internal representation to use an Int to represent the
denominator (with the sign ignored), so a reference to it
can always be returned.
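The observable effect, sketched (using math/big):

	r := big.NewRat(3, 1) // normalized: the denominator is 1
	d := r.Denom()        // now a reference to r's own denominator,
	                      // not a freshly allocated value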
Clarified documentation and added test cases.
Fixes #3521.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/6237045
* Shift/rotate by constant doesn't have to stop subprop. (also in 8g)
* Remove redundant MOVLQZX instructions.
* An attempt at issuing loads early.
Good for 0.5% on a good day, might not be worth keeping.
Need to understand more about whether the x86
looks ahead to what loads might be coming up.
R=ken2, ken
CC=golang-dev
https://golang.org/cl/6203091
Detect HTML integration points and MathML text integration points.
At these points, process tokens as HTML, not as foreign content.
Pass 33 more tests.
R=nigeltao
CC=golang-dev
https://golang.org/cl/6249044