requires a decoder to do its own byte buffering instead of using
bufio.Reader, due to byte stuffing.
benchmark                    old MB/s     new MB/s     speedup
BenchmarkDecodeBaseline      33.40        50.65        1.52x
BenchmarkDecodeProgressive   24.34        31.92        1.31x
On 6g, unsafe.Sizeof(huffman{}) falls from 4872 to 964 bytes, and
the decoder struct contains 8 of those.
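To make the constraint concrete, here is a minimal sketch of
stuffing-aware reading, under assumed names (decoder, fill, and
readStuffedByte are illustrative, not the actual image/jpeg
internals): in entropy-coded data, every literal 0xff data byte is
stored as 0xff 0x00, and the stuffed 0x00 must be consumed, a
transformation bufio.Reader cannot do for us.

    package jpegsketch

    import (
        "fmt"
        "io"
    )

    type decoder struct {
        r    io.Reader
        buf  [4096]byte
        i, j int // valid bytes are buf[i:j]
    }

    func (d *decoder) fill() error {
        n, err := d.r.Read(d.buf[:])
        if n == 0 && err != nil {
            return err
        }
        d.i, d.j = 0, n
        return nil
    }

    // readStuffedByte returns the next entropy-coded byte, consuming
    // the 0x00 that JPEG stuffs after every literal 0xff data byte.
    func (d *decoder) readStuffedByte() (byte, error) {
        if d.i == d.j {
            if err := d.fill(); err != nil {
                return 0, err
            }
        }
        c := d.buf[d.i]
        d.i++
        if c != 0xff {
            return c, nil
        }
        if d.i == d.j {
            if err := d.fill(); err != nil {
                return 0, err
            }
        }
        if d.buf[d.i] == 0x00 {
            d.i++ // skip the stuffed byte; 0xff 0x00 encodes a literal 0xff
            return 0xff, nil
        }
        return 0, fmt.Errorf("unexpected marker byte 0x%02x", d.buf[d.i])
    }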
LGTM=r
R=r, nightlyone
CC=bradfitz, couchmoney, golang-codereviews, raph
https://golang.org/cl/109050045
This is a clone of 101370043, which I accidentally applied to the
release branch first.
No big deal; it needed to be applied there anyway.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/108090043
Storing temporary values to a slice is slower than storing
them to local variables of type byte.
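A hedged sketch of the pattern, using base64's bit layout
(illustrative code, not the actual encoding/base64 source): each
6-bit value is built in a local byte variable, and each destination
byte is stored exactly once, instead of shifting bits into dst with
repeated indexed writes.

    package encsketch

    // encodeGroup encodes 3 source bytes as 4 characters of alphabet.
    // The temporaries b0..b3 live in registers; the slice dst is only
    // written once per output byte.
    func encodeGroup(dst, src []byte, alphabet string) {
        b0 := src[0] >> 2
        b1 := (src[0]<<4 | src[1]>>4) & 0x3f
        b2 := (src[1]<<2 | src[2]>>6) & 0x3f
        b3 := src[2] & 0x3f

        dst[0] = alphabet[b0]
        dst[1] = alphabet[b1]
        dst[2] = alphabet[b2]
        dst[3] = alphabet[b3]
    }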
benchmark                       old MB/s     new MB/s     speedup
BenchmarkEncodeToStringBase32   102.21       156.66       1.53x
BenchmarkEncodeToStringBase64   124.25       177.91       1.43x
LGTM=crawshaw
R=golang-codereviews, crawshaw, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/109820045
Just to be more thorough.
No need to push this to 1.3; it's just a test change that
worked without any changes to the code being tested.
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/109080045
genericReplacer.lookup is called for each byte of an input
string. In many (most?) cases, lookup will fail for the first
byte, and it will return immediately. Adding a fast path for
that case seems worth it.
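A sketch of the fast-path shape, with assumed names (the real
strings.genericReplacer internals differ): a 256-entry table records
which bytes can start a pattern, and lookup returns before touching
the trie when the current byte cannot.

    package replacersketch

    type replacer struct {
        starts [256]bool // starts[b]: does any pattern begin with byte b?
        // trie and other state elided
    }

    func (r *replacer) lookup(s string) (val string, keyLen int, found bool) {
        // Fast path: most input bytes begin no pattern at all.
        if len(s) == 0 || !r.starts[s[0]] {
            return "", 0, false
        }
        // Slow path: full trie walk (elided).
        return "", 0, false
    }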
Benchmark on my Xeon 3.5GHz Linux box:
benchmark                       old ns/op    new ns/op    delta
BenchmarkGenericNoMatch         2691         774          -71.24%
BenchmarkGenericMatch1          7920         8151         +2.92%
BenchmarkGenericMatch2          52336        39927        -23.71%
BenchmarkSingleMaxSkipping      1575         1575         +0.00%
BenchmarkSingleLongSuffixFail   1429         1429         +0.00%
BenchmarkSingleMatch            56228        55444        -1.39%
BenchmarkByteByteNoMatch        568          568          +0.00%
BenchmarkByteByteMatch          977          972          -0.51%
BenchmarkByteStringMatch        1669         1687         +1.08%
BenchmarkHTMLEscapeNew          422          422          +0.00%
BenchmarkHTMLEscapeOld          692          670          -3.18%
BenchmarkByteByteReplaces       8492         8474         -0.21%
BenchmarkByteByteMap            2817         2808         -0.32%
LGTM=rsc
R=golang-codereviews, bradfitz, dave, rsc
CC=golang-codereviews
https://golang.org/cl/79200044
Bug was introduced recently. Add more tests, fix the bugs.
Suppress + sign when not required in zero padding.
Do not zero pad infinities.
All old tests still pass.
This time for sure!
Fixes #8217.
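A quick demonstration of the intended behavior (a sketch against
current fmt semantics; expected outputs in comments):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        fmt.Printf("%07.2f\n", 3.14)         // 0003.14 (no spurious + sign)
        fmt.Printf("%+07.2f\n", 3.14)        // +003.14 (+ only when requested)
        fmt.Printf("%07.2f\n", math.Inf(1))  //    +Inf (space padded, never zero padded)
        fmt.Printf("%07.2f\n", math.Inf(-1)) //    -Inf
    }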
LGTM=rsc
R=golang-codereviews, dan.kortschak, rsc
CC=golang-codereviews
https://golang.org/cl/103480043
Using flet to replace kill-region with delete-region was a hack;
flet is now (as of GNU Emacs 24.3) deprecated, and at least two
people have reported an issue where using go--delete-whole-line
would permanently break their kill ring. While that issue is
probably caused by faulty third-party code (possibly prelude), it's
easier to write a clean implementation than to tweak the hack.
LGTM=ruiu, adonovan
R=adonovan, ruiu
CC=adg, golang-codereviews
https://golang.org/cl/106010043
No functional changes.
Generating shorter functions improves compilation time. On my
laptop, this test's running time goes from 5.5s to 1.5s; the wall
clock time to run all tests goes down 1s. On Raspberry Pi, this CL
cuts 50s off the wall clock time to run all tests.
Fixes #7503.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/72590045
The go:nosplit change wasn't the problem; reinstating.
««« original CL description
undo CL 93380044 / 7f0999348917
Partial undo, just of go:nosplit annotation. Somehow it
is breaking the windows builders.
TBR=bradfitz
««« original CL description
runtime: implement string ops in Go
Also implement go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark               old ns/op    new ns/op    delta
BenchmarkRuneIterate    534          474          -11.24%
BenchmarkRuneIterate2   535          470          -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
»»»
TBR=bradfitz
CC=golang-codereviews
https://golang.org/cl/105260044
»»»
TBR=bradfitz
R=bradfitz, golang-codereviews
CC=golang-codereviews
https://golang.org/cl/103490043
Partial undo, just of go:nosplit annotation. Somehow it
is breaking the windows builders.
TBR=bradfitz
««« original CL description
runtime: implement string ops in Go
Also implement go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark               old ns/op    new ns/op    delta
BenchmarkRuneIterate    534          474          -11.24%
BenchmarkRuneIterate2   535          470          -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
»»»
TBR=bradfitz
CC=golang-codereviews
https://golang.org/cl/105260044
Also implement go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
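For reference, the directive's shape, as a minimal hypothetical
example (the function name is illustrative): the annotation must
immediately precede the declaration, and because the compiler omits
the stack-growth check from the prologue, the body must keep its
stack usage small and bounded.

    package sketch

    // The //go:nosplit directive tells the compiler not to emit the
    // stack-overflow check in this function's prologue, so the body
    // must not call anything that could grow the stack.
    //go:nosplit
    func add(a, b int) int {
        return a + b
    }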
benchmark               old ns/op    new ns/op    delta
BenchmarkRuneIterate    534          474          -11.24%
BenchmarkRuneIterate2   535          470          -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
We don't need to shift array elements to shuffle them.
We just have to swap a selected element with 0th element.
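A hedged sketch of the idea (names are hypothetical, not the code in
this CL): swap the chosen element to the front of the remaining
window in O(1) instead of deleting it and shifting the tail in O(n).

    package main

    import (
        "fmt"
        "math/rand"
    )

    // shuffle visits elements in random order by swapping each pick
    // to the front of the unshuffled window; nothing is shifted.
    func shuffle(a []int, pick func(n int) int) {
        for i := range a {
            j := i + pick(len(a)-i) // random index in the tail a[i:]
            a[i], a[j] = a[j], a[i] // O(1) swap instead of an O(n) shift
        }
    }

    func main() {
        a := []int{1, 2, 3, 4, 5}
        shuffle(a, rand.Intn)
        fmt.Println(a)
    }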
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/91750044
Printf("%x", "abc") was "0x610x620x63"; is now "0x616263", which
is surely better.
Printf("% #x", "abc") is still "0x61 0x62 0x63".
Fixes #8080.
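The three cases as a runnable example, with the outputs described
above shown in comments:

    package main

    import "fmt"

    func main() {
        fmt.Printf("%x\n", "abc")   // 616263
        fmt.Printf("%#x\n", "abc")  // 0x616263 (one prefix for the whole string)
        fmt.Printf("% #x\n", "abc") // 0x61 0x62 0x63 (space flag: prefix per byte)
    }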
LGTM=bradfitz, gri
R=golang-codereviews, bradfitz, gri
CC=golang-codereviews
https://golang.org/cl/106990043
Also added a test to verify that os.Getppid() works across all platforms.
LGTM=alex.brainman
R=golang-codereviews, alex.brainman, shreveal, iant
CC=golang-codereviews
https://golang.org/cl/102320044
Reportedly in the Linux 3.16 kernel the VDSO will not have
section headers or a normal symbol table.
Too late for 1.3 but perhaps for 1.3.1, if there is one.
Fixes #8197.
LGTM=rsc
R=golang-codereviews, mattn.jp, rsc
CC=golang-codereviews
https://golang.org/cl/101260044
bufio.Scanner.Scan returns whether the scan succeeded, not whether it
is done, so the test was mistakenly breaking early.
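For reference, the correct loop shape: iterate while Scan reports
that a token was read, then check Err afterwards.

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() { // true means "a token was read", not "done"
            fmt.Println(scanner.Text())
        }
        if err := scanner.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "reading input:", err)
        }
    }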
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/93670045
makes windows-amd64-race benchmarks slower
««« original CL description
testing: make benchmarking faster
Allow the number of benchmark iterations to grow faster for fast
benchmarks, and don't round up twice.
Using the default benchtime, this CL reduces wall clock time to run
benchmarks:
net/http        49s   -> 37s   (-24%)
runtime         8m31s -> 5m55s (-30%)
bytes           2m37s -> 1m29s (-43%)
encoding/json   29s   -> 21s   (-27%)
strings         1m16s -> 53s   (-30%)
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/101970047
»»»
TBR=josharian
CC=golang-codereviews
https://golang.org/cl/105950044
It appears that something about Go on Windows
cannot handle the fault caused by a jump to address 0.
The way Go represents and calls functions, this
never happened at all, until CL 105140044.
This CL changes the code added in CL 105140044
to make jump to 0 impossible once again.
Fixes #8047. (again, on Windows)
TBR=bradfitz
R=golang-codereviews, dave
CC=adg, golang-codereviews, iant, r
https://golang.org/cl/105120044
Rob asked for this change to make maintaining go1.4.txt easier.
If you are not sure of a change, it is still okay to send for review.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/109880044
jmpdefer modifies PC, SP, and LR, and not atomically,
so walking past jmpdefer will often end up in a state
where the three are not a consistent execution snapshot.
This was causing warning messages a few frames later
when the traceback realized it was confused, but given
the right memory it could easily crash instead.
Update #8153
LGTM=minux, iant
R=golang-codereviews, minux, iant
CC=golang-codereviews, r
https://golang.org/cl/107970043
The test requires that timerproc runs, but it busy-loops and starves
the scheduler so that, with high probability, timerproc doesn't run.
Avoid the issue by expecting the test to succeed; if not, a major
outside timeout will kill it and let us know.
As you can see from the diffs, there have been several attempts to
fix this with chicanery, but none has worked. Don't bother trying
any more.
Fixes #8136.
LGTM=rsc
R=rsc, josharian
CC=golang-codereviews
https://golang.org/cl/105140043
Allow the number of benchmark iterations to grow faster for fast
benchmarks, and don't round up twice.
Using the default benchtime, this CL reduces wall clock time to run
benchmarks:
net/http        49s   -> 37s   (-24%)
runtime         8m31s -> 5m55s (-30%)
bytes           2m37s -> 1m29s (-43%)
encoding/json   29s   -> 21s   (-27%)
strings         1m16s -> 53s   (-30%)
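A hedged sketch of the growth logic (illustrative of the approach,
not the testing package's exact code): predict the next iteration
count from the last run's speed, cap growth at 100x per round,
always make progress, and round to a readable number only once.

    package benchsketch

    // nextN predicts the next benchmark iteration count from the
    // previous round's per-op time and the target benchtime.
    func nextN(prevN int, prevNsPerOp, targetNs int64) int {
        if prevNsPerOp == 0 {
            prevNsPerOp = 1
        }
        n := targetNs / prevNsPerOp // predicted N to fill the target time
        n += n / 5                  // 1.2x headroom for noisy timings
        if n > 100*int64(prevN) {
            n = 100 * int64(prevN) // grow at most 100x per round
        }
        if n <= int64(prevN) {
            n = int64(prevN) + 1 // always make progress
        }
        return int(roundUp(n)) // round to a readable number exactly once
    }

    // roundUp rounds n up to 1, 2, 3, or 5 times a power of ten.
    func roundUp(n int64) int64 {
        base := int64(1)
        for base*10 < n {
            base *= 10
        }
        switch {
        case n <= base:
            return base
        case n <= 2*base:
            return 2 * base
        case n <= 3*base:
            return 3 * base
        case n <= 5*base:
            return 5 * base
        }
        return 10 * base
    }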
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/101970047