Update #1435
This proposal disables Setuid and Setgid on all Linux platforms.
Issue 1435 has been open for a long time and is unlikely to be
addressed soon, so an argument was made by a commenter
https://code.google.com/p/go/issues/detail?id=1435#c45
that these functions should be made to fail rather than succeed in
their broken state.
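A hedged sketch of what a caller now observes (the specific error
value returned is this sketch's assumption):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // After this change, Setuid no longer silently misbehaves on
        // Linux (previously it changed the UID of only one thread);
        // it returns an error instead.
        if err := syscall.Setuid(1000); err != nil {
            fmt.Println("setuid failed as intended:", err)
        }
    }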
LGTM=ruiu, iant
R=iant, ruiu
CC=golang-codereviews
https://golang.org/cl/106170043
MOV with SSE registers seems faster than REP MOVSQ if the
size being copied is less than about 2K. Previously we
didn't use MOV if the memory region was larger than 256
bytes. This patch improves the performance of 257-2048
byte non-overlapping copies by using MOV.
Here is the benchmark result on Intel Xeon 3.5GHz (Nehalem).
benchmark              old ns/op  new ns/op  delta
BenchmarkMemmove16     4          4          +0.42%
BenchmarkMemmove32     5          5          -0.20%
BenchmarkMemmove64     6          6          -0.81%
BenchmarkMemmove128    7          7          -0.82%
BenchmarkMemmove256    10         10         +1.92%
BenchmarkMemmove512    29         16         -44.90%
BenchmarkMemmove1024   37         25         -31.55%
BenchmarkMemmove2048   55         44         -19.46%
BenchmarkMemmove4096   92         91         -0.76%
benchmark              old MB/s   new MB/s   speedup
BenchmarkMemmove16     3370.61    3356.88    1.00x
BenchmarkMemmove32     6368.68    6386.99    1.00x
BenchmarkMemmove64     10367.37   10462.62   1.01x
BenchmarkMemmove128    17551.16   17713.48   1.01x
BenchmarkMemmove256    24692.81   24142.99   0.98x
BenchmarkMemmove512    17428.70   31687.72   1.82x
BenchmarkMemmove1024   27401.82   40009.45   1.46x
BenchmarkMemmove2048   36884.86   45766.98   1.24x
BenchmarkMemmove4096   44295.91   44627.86   1.01x
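Numbers in this range can be reproduced with a minimal benchmark
sketch like the one below (save as memmove_test.go and run
go test -bench Memmove; the runtime has its own BenchmarkMemmove
variants, so this is only an approximation):

    package bench

    import "testing"

    // benchCopy measures a non-overlapping copy of n bytes; the
    // compiler lowers copy on byte slices to the runtime's memmove.
    func benchCopy(b *testing.B, n int) {
        src := make([]byte, n)
        dst := make([]byte, n)
        b.SetBytes(int64(n))
        for i := 0; i < b.N; i++ {
            copy(dst, src)
        }
    }

    func BenchmarkMemmove512(b *testing.B)  { benchCopy(b, 512) }
    func BenchmarkMemmove1024(b *testing.B) { benchCopy(b, 1024) }
    func BenchmarkMemmove2048(b *testing.B) { benchCopy(b, 2048) }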
LGTM=khr
R=golang-codereviews, gobot, khr
CC=golang-codereviews
https://golang.org/cl/90500043
sync.Pool is not supposed to be used everywhere; it is
a last resort.
««« original CL description
strings: use sync.Pool to cache buffer
benchmark                         old ns/op   new ns/op   delta
BenchmarkByteReplacerWriteString  3596        3094        -13.96%
benchmark                         old allocs  new allocs  delta
BenchmarkByteReplacerWriteString  1           0           -100.00%
LGTM=dvyukov
R=bradfitz, dave, dvyukov
CC=golang-codereviews
https://golang.org/cl/101330053
»»»
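For context, the reverted pattern looked roughly like the sketch
below (illustrative, not the reverted code verbatim): a
package-level sync.Pool handing out scratch buffers, which is
worthwhile only for genuinely hot, global bottlenecks.

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // bufPool caches scratch buffers between calls.
    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func replaceAll(s string) string {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        defer bufPool.Put(buf)
        buf.WriteString(s) // stand-in for the real replacement work
        return buf.String()
    }

    func main() { fmt.Println(replaceAll("hello")) }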
LGTM=dave
R=r, dave
CC=golang-codereviews
https://golang.org/cl/102610043
Fixes #8074.
The issue was not reproducible at revision
go version devel +e0ad7e329637 Thu Jun 19 22:19:56 2014 -0700 linux/arm
but include the original test case in case the issue resurfaces.
LGTM=dvyukov
R=golang-codereviews, dvyukov
CC=golang-codereviews
https://golang.org/cl/107290043
This requires minimal changes to the runtime hooks. In particular,
synchronization events must now be performed only on valid addresses,
so I've added the additional checks to race.c.
LGTM=iant
R=iant
CC=golang-codereviews
https://golang.org/cl/101000046
benchmark                         old ns/op  new ns/op  delta
BenchmarkByteReplacerWriteString  7359       3661       -50.25%
LGTM=dave
R=golang-codereviews, dave
CC=golang-codereviews
https://golang.org/cl/102550043
The afterprologue check was required when we did not know
about the return arguments of functions and/or they were not zeroed.
Now 100% precision is required for stacks due to stack copying,
so it must work without the afterprologue check one way or another.
I could limit this change for 1.3 to merely adding a TODO,
but this check is so confusing that I don't want this knowledge to get lost.
LGTM=rsc
R=golang-codereviews, gobot, rsc, khr
CC=golang-codereviews, khr, rsc
https://golang.org/cl/96580045
Use WriteString instead of allocating a byte slice as a
buffer. This was a TODO.
benchmark             old ns/op  new ns/op  delta
BenchmarkWriteString  40139      19991      -50.20%
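The technique, as a minimal sketch (assuming the old code went
through a temporary []byte, which is what the TODO suggested
removing):

    package main

    import (
        "io"
        "os"
    )

    func main() {
        s := "hello, world\n"

        // Before: allocates a copy of s just to call Write.
        // os.Stdout.Write([]byte(s))

        // After: io.WriteString uses the writer's WriteString
        // method when available, avoiding the temporary []byte.
        io.WriteString(os.Stdout, s)
    }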
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/107190044
This requires a decoder to do its own byte buffering instead of using
bufio.Reader, due to byte stuffing.
benchmark                   old MB/s  new MB/s  speedup
BenchmarkDecodeBaseline     33.40     50.65     1.52x
BenchmarkDecodeProgressive  24.34     31.92     1.31x
On 6g, unsafe.Sizeof(huffman{}) falls from 4872 to 964 bytes, and
the decoder struct contains 8 of those.
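Byte stuffing means that in JPEG entropy-coded data, a literal 0xFF
data byte is written as 0xFF 0x00, and the 0x00 must be dropped on
read; bufio.Reader can't do that transparently. A hedged sketch of
the unstuffing rule (not the decoder's actual code; a real decoder
also handles 0xFF followed by a non-zero marker byte):

    package main

    import "fmt"

    // unstuff removes JPEG byte stuffing from entropy-coded data:
    // each 0xFF 0x00 pair encodes a literal 0xFF data byte.
    func unstuff(src []byte) []byte {
        dst := make([]byte, 0, len(src))
        for i := 0; i < len(src); i++ {
            dst = append(dst, src[i])
            if src[i] == 0xFF && i+1 < len(src) && src[i+1] == 0x00 {
                i++ // skip the stuffed 0x00
            }
        }
        return dst
    }

    func main() {
        fmt.Printf("% x\n", unstuff([]byte{0x12, 0xFF, 0x00, 0x34})) // 12 ff 34
    }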
LGTM=r
R=r, nightlyone
CC=bradfitz, couchmoney, golang-codereviews, raph
https://golang.org/cl/109050045
This is a clone of 101370043, which I accidentally applied to the
release branch first.
No big deal; it needed to be applied there anyway.
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/108090043
Storing temporary values in a slice is slower than storing
them in local variables of type byte.
benchmark                      old MB/s  new MB/s  speedup
BenchmarkEncodeToStringBase32  102.21    156.66    1.53x
BenchmarkEncodeToStringBase64  124.25    177.91    1.43x
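A hedged sketch of the technique for base64 (illustrative, not the
package's exact code): load the three input bytes into locals once,
rather than re-indexing a temporary slice.

    package main

    import "fmt"

    const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

    // encodeGroup encodes 3 input bytes as 4 output bytes, keeping
    // the inputs in local byte variables.
    func encodeGroup(dst, src []byte) {
        b0, b1, b2 := src[0], src[1], src[2]
        dst[0] = alphabet[b0>>2]
        dst[1] = alphabet[(b0<<4|b1>>4)&0x3F]
        dst[2] = alphabet[(b1<<2|b2>>6)&0x3F]
        dst[3] = alphabet[b2&0x3F]
    }

    func main() {
        dst := make([]byte, 4)
        encodeGroup(dst, []byte("Man"))
        fmt.Println(string(dst)) // TWFu
    }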
LGTM=crawshaw
R=golang-codereviews, crawshaw, bradfitz, dave
CC=golang-codereviews
https://golang.org/cl/109820045
Just to be more thorough.
No need to push this to 1.3; it's just a test change that
worked without any changes to the code being tested.
LGTM=crawshaw
R=golang-codereviews, crawshaw
CC=golang-codereviews
https://golang.org/cl/109080045
genericReplacer.lookup is called for each byte of an input
string. In many (most?) cases, lookup will fail for the first
byte, and it will return immediately. Adding a fast path for
that case seems worth it.
Benchmark on my Xeon 3.5GHz Linux box:
benchmark                      old ns/op  new ns/op  delta
BenchmarkGenericNoMatch        2691       774        -71.24%
BenchmarkGenericMatch1         7920       8151       +2.92%
BenchmarkGenericMatch2         52336      39927      -23.71%
BenchmarkSingleMaxSkipping     1575       1575       +0.00%
BenchmarkSingleLongSuffixFail  1429       1429       +0.00%
BenchmarkSingleMatch           56228      55444      -1.39%
BenchmarkByteByteNoMatch       568        568        +0.00%
BenchmarkByteByteMatch         977        972        -0.51%
BenchmarkByteStringMatch       1669       1687       +1.08%
BenchmarkHTMLEscapeNew         422        422        +0.00%
BenchmarkHTMLEscapeOld         692        670        -3.18%
BenchmarkByteByteReplaces      8492       8474       -0.21%
BenchmarkByteByteMap           2817       2808       -0.32%
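The shape of such a fast path, as a hedged sketch (the real
genericReplacer walks a trie; the canStart table here is this
sketch's simplification):

    package main

    import "fmt"

    // replacer sketches the fast path: canStart marks every byte
    // that can begin some pattern. Most input bytes fail this single
    // table lookup, so the full trie walk is skipped for them.
    type replacer struct {
        canStart [256]bool
    }

    func (r *replacer) lookup(s string) bool {
        if len(s) == 0 || !r.canStart[s[0]] {
            return false // fast path: byte can never start a match
        }
        // Slow path: a real replacer walks its trie from s[0] here.
        return true
    }

    func main() {
        var r replacer
        r.canStart['a'] = true
        fmt.Println(r.lookup("abc"), r.lookup("xyz")) // true false
    }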
LGTM=rsc
R=golang-codereviews, bradfitz, dave, rsc
CC=golang-codereviews
https://golang.org/cl/79200044
The bug was introduced recently. Add more tests, fix the bugs.
Suppress the + sign when it is not required in zero padding.
Do not zero-pad infinities.
All old tests still pass.
This time for sure!
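To illustrate the fixed behavior (a hedged example; the output
comments assume the post-fix semantics described above):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // Zero padding applies to finite numbers, with the sign
        // placed before the zeros...
        fmt.Printf("%+010.3f\n", 3.14)        // +00003.140
        // ...but infinities are padded with spaces, never zeros.
        fmt.Printf("%+010.3f\n", math.Inf(1)) // "      +Inf"
    }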
Fixes #8217.
LGTM=rsc
R=golang-codereviews, dan.kortschak, rsc
CC=golang-codereviews
https://golang.org/cl/103480043
Using flet to replace kill-region with delete-region was a hack;
flet is now deprecated (as of GNU Emacs 24.3), and at least two people
have reported an issue where using go--delete-whole-line would
permanently break their kill ring. While that issue is probably
caused by faulty third-party code (possibly prelude), it's easier
to write a clean implementation than to tweak the hack.
LGTM=ruiu, adonovan
R=adonovan, ruiu
CC=adg, golang-codereviews
https://golang.org/cl/106010043
No functional changes.
Generating shorter functions improves compilation time. On my laptop, this test's running time goes from 5.5s to 1.5s; the wall clock time to run all tests goes down 1s. On Raspberry Pi, this CL cuts 50s off the wall clock time to run all tests.
Fixes #7503.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/72590045
The go:nosplit change wasn't the problem; reinstating.
««« original CL description
undo CL 93380044 / 7f0999348917
Partial undo, just of the go:nosplit annotation. Somehow it
is breaking the Windows builders.
TBR=bradfitz
««« original CL description
runtime: implement string ops in Go
Also implement the go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark              old ns/op  new ns/op  delta
BenchmarkRuneIterate   534        474        -11.24%
BenchmarkRuneIterate2  535        470        -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
»»»
TBR=bradfitz
CC=golang-codereviews
https://golang.org/cl/105260044
»»»
TBR=bradfitz
R=bradfitz, golang-codereviews
CC=golang-codereviews
https://golang.org/cl/103490043
Partial undo, just of the go:nosplit annotation. Somehow it
is breaking the Windows builders.
TBR=bradfitz
««« original CL description
runtime: implement string ops in Go
Also implement the go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark              old ns/op  new ns/op  delta
BenchmarkRuneIterate   534        474        -11.24%
BenchmarkRuneIterate2  535        470        -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
»»»
TBR=bradfitz
CC=golang-codereviews
https://golang.org/cl/105260044
Also implement the go:nosplit annotation. Not really needed
for now, but we'll definitely need it for other conversions.
benchmark              old ns/op  new ns/op  delta
BenchmarkRuneIterate   534        474        -11.24%
BenchmarkRuneIterate2  535        470        -12.15%
LGTM=bradfitz
R=golang-codereviews, dave, bradfitz, minux
CC=golang-codereviews
https://golang.org/cl/93380044
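For reference, a minimal sketch of the annotation's use (the pragma
suppresses the stack-split check in the function's prologue, which
matters for low-level runtime code; add here is purely illustrative):

    package main

    // Functions marked go:nosplit must have a small, known stack
    // footprint, because the compiler omits the usual stack-growth
    // check in their prologue. The directive must appear directly
    // above the declaration.
    //
    //go:nosplit
    func add(a, b int) int {
        return a + b
    }

    func main() {
        println(add(1, 2))
    }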
We don't need to shift array elements to shuffle them.
We just have to swap a selected element with the 0th element.
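A hedged sketch of the swap-based approach (takeRandom is a
hypothetical helper, not the CL's code): to pick and remove a random
element, swap it with element 0 and reslice, instead of shifting the
remaining elements down by one.

    package main

    import (
        "fmt"
        "math/rand"
    )

    // takeRandom removes and returns a random element of s in O(1)
    // by swapping it with s[0] and reslicing, rather than shifting
    // every later element down (O(n)).
    func takeRandom(s []int) (int, []int) {
        i := rand.Intn(len(s))
        s[0], s[i] = s[i], s[0]
        return s[0], s[1:]
    }

    func main() {
        s := []int{10, 20, 30, 40}
        for len(s) > 0 {
            var x int
            x, s = takeRandom(s)
            fmt.Println(x)
        }
    }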
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/91750044