The previous call to parseRange already checks whether
all the ranges start before the end of file.
LGTM=robert.hencke, bradfitz
R=golang-codereviews, robert.hencke, gobot, bradfitz
CC=golang-codereviews
https://golang.org/cl/91880044
Update #1435
This proposal disables Setuid and Setgid on all Linux platforms.
Issue 1435 has been open for a long time and is unlikely to be
addressed soon, so an argument was made by a commenter
https://code.google.com/p/go/issues/detail?id=1435#c45
that these functions should be made to fail rather than succeed in their broken state.
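A minimal illustration of the intended behavior after this change,
assuming the calls simply return a non-nil error on Linux (a sketch,
not part of the CL):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // After this change, Setuid is expected to fail on Linux instead
        // of changing credentials for only the calling thread.
        if err := syscall.Setuid(1000); err != nil {
            fmt.Println("Setuid:", err)
        }
    }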
LGTM=ruiu, iant
R=iant, ruiu
CC=golang-codereviews
https://golang.org/cl/106170043
MOV with SSE registers seems faster than REP MOVSQ when the
size being copied is less than about 2K. Previously we
didn't use MOV if the memory region was larger than 256
bytes. This patch improves the performance of 257-2048
byte non-overlapping copies by using MOV.
Here are the benchmark results on an Intel Xeon 3.5GHz (Nehalem).
benchmark old ns/op new ns/op delta
BenchmarkMemmove16 4 4 +0.42%
BenchmarkMemmove32 5 5 -0.20%
BenchmarkMemmove64 6 6 -0.81%
BenchmarkMemmove128 7 7 -0.82%
BenchmarkMemmove256 10 10 +1.92%
BenchmarkMemmove512 29 16 -44.90%
BenchmarkMemmove1024 37 25 -31.55%
BenchmarkMemmove2048 55 44 -19.46%
BenchmarkMemmove4096 92 91 -0.76%
benchmark old MB/s new MB/s speedup
BenchmarkMemmove16 3370.61 3356.88 1.00x
BenchmarkMemmove32 6368.68 6386.99 1.00x
BenchmarkMemmove64 10367.37 10462.62 1.01x
BenchmarkMemmove128 17551.16 17713.48 1.01x
BenchmarkMemmove256 24692.81 24142.99 0.98x
BenchmarkMemmove512 17428.70 31687.72 1.82x
BenchmarkMemmove1024 27401.82 40009.45 1.46x
BenchmarkMemmove2048 36884.86 45766.98 1.24x
BenchmarkMemmove4096 44295.91 44627.86 1.01x
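For reference, a sketch of the kind of benchmark behind these numbers
(names and sizes are illustrative, not the exact runtime benchmark):

    package example

    import "testing"

    // benchmarkMemmove copies n non-overlapping bytes per iteration;
    // the built-in copy compiles down to the runtime memmove being measured.
    func benchmarkMemmove(b *testing.B, n int) {
        src := make([]byte, n)
        dst := make([]byte, n)
        b.SetBytes(int64(n))
        for i := 0; i < b.N; i++ {
            copy(dst, src)
        }
    }

    func BenchmarkMemmove512(b *testing.B)  { benchmarkMemmove(b, 512) }
    func BenchmarkMemmove1024(b *testing.B) { benchmarkMemmove(b, 1024) }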
LGTM=khr
R=golang-codereviews, gobot, khr
CC=golang-codereviews
https://golang.org/cl/90500043
sync.Pool is not supposed to be used everywhere, but is
a last resort.
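For context, the sort of last-resort use sync.Pool is meant for looks
roughly like this (a sketch, not the reverted code):

    package example

    import (
        "bytes"
        "sync"
    )

    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func handle() {
        // Reuse a scratch buffer across calls to avoid repeated
        // allocations under heavy load; return it when done.
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        // ... use buf ...
        bufPool.Put(buf)
    }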
««« original CL description
strings: use sync.Pool to cache buffer
benchmark old ns/op new ns/op delta
BenchmarkByteReplacerWriteString 3596 3094 -13.96%
benchmark old allocs new allocs delta
BenchmarkByteReplacerWriteString 1 0 -100.00%
LGTM=dvyukov
R=bradfitz, dave, dvyukov
CC=golang-codereviews
https://golang.org/cl/101330053
»»»
LGTM=dave
R=r, dave
CC=golang-codereviews
https://golang.org/cl/102610043
This requires minimal changes to the runtime hooks. In particular,
synchronization events must be done only on valid addresses now,
so I've added the additional checks to race.c.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/101000046
benchmark old ns/op new ns/op delta
BenchmarkByteReplacerWriteString 7359 3661 -50.25%
LGTM=dave
R=golang-codereviews, dave
CC=golang-codereviews
https://golang.org/cl/102550043
The Afterprologue check was required when we did not know about the
return arguments of functions and/or they were not zeroed.
Now 100% precision is required for stacks due to stack copying,
so it must work without the afterprologue check one way or another.
I could limit this change for 1.3 to merely adding a TODO,
but this check is super confusing, so I don't want this knowledge to get lost.
LGTM=rsc
R=golang-codereviews, gobot, rsc, khr
CC=golang-codereviews, khr, rsc
https://golang.org/cl/96580045
Use WriteString instead of allocating a byte slice as a
buffer. This was a TODO.
benchmark old ns/op new ns/op delta
BenchmarkWriteString 40139 19991 -50.20%
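A small illustration of the general pattern (hypothetical names, not
the actual change):

    package example

    import "bytes"

    func write(w *bytes.Buffer, s string) {
        // Before: allocate a temporary []byte just to write a string.
        w.Write([]byte(s))
        // After: WriteString writes the string directly, with no []byte
        // conversion or extra allocation.
        w.WriteString(s)
    }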
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/107190044
This requires a decoder to do its own byte buffering instead of using
bufio.Reader, due to byte stuffing.
benchmark old MB/s new MB/s speedup
BenchmarkDecodeBaseline 33.40 50.65 1.52x
BenchmarkDecodeProgressive 24.34 31.92 1.31x
On 6g, unsafe.Sizeof(huffman{}) falls from 4872 to 964 bytes, and
the decoder struct contains 8 of those.
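Byte stuffing means a 0xff byte in the entropy-coded data is followed
by a 0x00 that must be skipped; a rough sketch of that idea (not the
actual image/jpeg code):

    package example

    import (
        "fmt"
        "io"
    )

    // readStuffedByte reads one byte of entropy-coded JPEG data,
    // consuming the 0x00 that follows a stuffed 0xff.
    func readStuffedByte(r io.ByteReader) (byte, error) {
        b, err := r.ReadByte()
        if err != nil {
            return 0, err
        }
        if b == 0xff {
            b2, err := r.ReadByte()
            if err != nil {
                return 0, err
            }
            if b2 != 0x00 {
                return 0, fmt.Errorf("unexpected marker byte 0x%02x after 0xff", b2)
            }
        }
        return b, nil
    }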
LGTM=r
R=r, nightlyone
CC=bradfitz, couchmoney, golang-codereviews, raph
https://golang.org/cl/109050045
Storing temporary values in a slice is slower than storing
them in local variables of type byte.
benchmark old MB/s new MB/s speedup
BenchmarkEncodeToStringBase32 102.21 156.66 1.53x
BenchmarkEncodeToStringBase64 124.25 177.91 1.43x
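Roughly, the change keeps each input group in local byte variables
instead of indexing a scratch slice; a simplified base64-style sketch
(not the actual encoder):

    package example

    // encodeGroup encodes one 3-byte input group into 4 output
    // characters, holding the temporaries in local byte variables
    // rather than a slice.
    func encodeGroup(dst, src []byte, alphabet string) {
        b0, b1, b2 := src[0], src[1], src[2]
        dst[0] = alphabet[b0>>2]
        dst[1] = alphabet[(b0<<4|b1>>4)&0x3f]
        dst[2] = alphabet[(b1<<2|b2>>6)&0x3f]
        dst[3] = alphabet[b2&0x3f]
    }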
LGTM=crawshaw
R=golang-codereviews, crawshaw, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/109820045
Just to be more thorough.
No need to push this to 1.3; it's just a test change that
worked without any changes to the code being tested.
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/109080045
genericReplacer.lookup is called for each byte of an input
string. In many (most?) cases, lookup will fail for the first
byte, and it will return immediately. Adding a fast path for
that case seems worth it.
Benchmark on my Xeon 3.5GHz Linux box:
benchmark old ns/op new ns/op delta
BenchmarkGenericNoMatch 2691 774 -71.24%
BenchmarkGenericMatch1 7920 8151 +2.92%
BenchmarkGenericMatch2 52336 39927 -23.71%
BenchmarkSingleMaxSkipping 1575 1575 +0.00%
BenchmarkSingleLongSuffixFail 1429 1429 +0.00%
BenchmarkSingleMatch 56228 55444 -1.39%
BenchmarkByteByteNoMatch 568 568 +0.00%
BenchmarkByteByteMatch 977 972 -0.51%
BenchmarkByteStringMatch 1669 1687 +1.08%
BenchmarkHTMLEscapeNew 422 422 +0.00%
BenchmarkHTMLEscapeOld 692 670 -3.18%
BenchmarkByteByteReplaces 8492 8474 -0.21%
BenchmarkByteByteMap 2817 2808 -0.32%
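A rough sketch of the fast path described above, using a simplified
stand-in for the replacer's lookup trie (the real node layout and
names differ):

    package example

    // trieNode is a simplified stand-in for the generic replacer's trie.
    type trieNode struct {
        table [256]*trieNode
    }

    // lookup reports whether a replacement starts at the beginning of s.
    // The fast path: if the first byte has no child in the trie, fail
    // immediately instead of entering the general walk.
    func (t *trieNode) lookup(s string) (keylen int, found bool) {
        if len(s) == 0 || t.table[s[0]] == nil {
            return 0, false
        }
        // ... general trie walk elided ...
        return 0, false
    }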
LGTM=rsc
R=golang-codereviews, bradfitz, dave, rsc
CC=golang-codereviews
https://golang.org/cl/79200044
The bug was introduced recently. Add more tests, fix the bugs.
Suppress the + sign when it is not required in zero padding.
Do not zero-pad infinities.
All old tests still pass.
This time for sure!
Fixes #8217.
LGTM=rsc
R=golang-codereviews, dan.kortschak, rsc
CC=golang-codereviews
https://golang.org/cl/103480043
Also implement the go:nosplit annotation. It is not really needed
for now, but we'll definitely need it for other conversions.
benchmark old ns/op new ns/op delta
BenchmarkRuneIterate 534 474 -11.24%
BenchmarkRuneIterate2 535 470 -12.15%
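The go:nosplit annotation mentioned above is a compiler directive
written immediately before a function declaration; a tiny hypothetical
example:

    package example

    // add is a trivial function; the directive below asks the compiler
    // to omit the stack-split check in its prologue.
    //
    //go:nosplit
    func add(x, y int) int {
        return x + y
    }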
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
We don't need to shift array elements to shuffle them.
We just have to swap a selected element with the 0th element.
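A minimal sketch of the swap-based approach (hypothetical names; the
real code operates on its own slice type):

    package example

    import "math/rand"

    // pick moves a randomly selected element to position 0 with a
    // single swap, instead of shifting every element over by one.
    func pick(xs []int) int {
        j := rand.Intn(len(xs))
        xs[0], xs[j] = xs[j], xs[0]
        return xs[0]
    }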
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/91750044
Printf("%x", "abc") was "0x610x620x63"; is now "0x616263", which
is surely better.
Printf("% #x", "abc") is still "0x61 0x62 0x63".
Fixes #8080.
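With the fix, the hex verbs on a string behave like this
(illustrative; expected output in the comments):

    package main

    import "fmt"

    func main() {
        fmt.Printf("%x\n", "abc")   // 616263
        fmt.Printf("%#x\n", "abc")  // 0x616263
        fmt.Printf("% x\n", "abc")  // 61 62 63
        fmt.Printf("% #x\n", "abc") // 0x61 0x62 0x63
    }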
LGTM=bradfitz, gri
R=golang-codereviews, bradfitz, gri
CC=golang-codereviews
https://golang.org/cl/106990043
Also added a test to verify that os.Getppid() works across all platforms.
LGTM=alex.brainman
R=golang-codereviews, alex.brainman, shreveal, iant
CC=golang-codereviews
https://golang.org/cl/102320044
Reportedly in the Linux 3.16 kernel the VDSO will not have
section headers or a normal symbol table.
Too late for 1.3 but perhaps for 1.3.1, if there is one.
Fixes #8197.
LGTM=rsc
R=golang-codereviews, mattn.jp, rsc
CC=golang-codereviews
https://golang.org/cl/101260044
bufio.Scanner.Scan returns whether the scan succeeded, not whether it
is done, so the test was mistakenly breaking early.
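For reference, the idiomatic loop keeps scanning while Scan reports
success and checks Err afterward (a generic sketch, not the test in
question):

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() { // true means a token was read, not "done"
            fmt.Println(scanner.Text())
        }
        if err := scanner.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }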
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/93670045
The reverted change makes the windows-amd64-race benchmarks slower.
««« original CL description
testing: make benchmarking faster
Allow the number of benchmark iterations to grow faster for fast benchmarks, and don't round up twice.
Using the default benchtime, this CL reduces wall clock time to run benchmarks:
net/http 49s -> 37s (-24%)
runtime 8m31s -> 5m55s (-30%)
bytes 2m37s -> 1m29s (-43%)
encoding/json 29s -> 21s (-27%)
strings 1m16s -> 53s (-30%)
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/101970047
»»»
TBR=josharian
CC=golang-codereviews
https://golang.org/cl/105950044
It appears that something about Go on Windows
cannot handle the fault caused by a jump to address 0.
Given the way Go represents and calls functions, this
never happened at all until CL 105140044.
This CL changes the code added in CL 105140044
to make jump to 0 impossible once again.
Fixes #8047 (again, on Windows).
TBR=bradfitz
R=golang-codereviews, dave
CC=adg, golang-codereviews, iant, r
https://golang.org/cl/105120044