Share buffers between different bufio Readers rather than letting
them become garbage. When a Reader has no buffered data, put its
buffer into a pool. This exploits the fact that most bufio.Readers
are eventually read to completion, after which their buffers are no
longer needed.
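A minimal sketch of the pooling idea, using sync.Pool for
illustration (the names here are hypothetical; the actual change
manages its own free list of buffers):

    package main

    import "sync"

    const bufSize = 4096

    // bufPool holds the buffers of Readers that currently have no
    // buffered data, so that other Readers can reuse them.
    var bufPool = sync.Pool{
        New: func() interface{} { return make([]byte, bufSize) },
    }

    type reader struct {
        buf  []byte
        r, w int // read and write positions in buf
    }

    // putBuf returns the buffer to the pool once it is fully drained.
    func (b *reader) putBuf() {
        if b.buf != nil && b.r == b.w {
            bufPool.Put(b.buf)
            b.buf = nil
        }
    }

    func main() {
        rd := &reader{buf: bufPool.Get().([]byte)}
        rd.putBuf()
    }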
benchmark               old ns/op    new ns/op    delta
BenchmarkReaderEmpty    2993         1058         -64.65%

benchmark               old allocs   new allocs   delta
BenchmarkReaderEmpty    3            2            -33.33%

benchmark               old bytes    new bytes    delta
BenchmarkReaderEmpty    4278         133          -96.89%
Update #5100
R=r
CC=adg, dvyukov, gobot, golang-dev, rogpeppe
https://golang.org/cl/8819049
The stack scanner ignored the arguments area of not-yet-started
goroutines when its size was unknown. With this change, the distance
between the stack pointer and the stack base is used instead.
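Schematically (a hedged sketch in Go; the real scanner is C code in
the runtime):

    package main

    import "fmt"

    // argsRange returns the conservative range to scan for the
    // arguments of a not-yet-started goroutine when the argument
    // size is unknown: everything from the stack pointer up to the
    // stack base, rather than skipping the area entirely.
    func argsRange(sp, stackBase uintptr) (lo, hi uintptr) {
        return sp, stackBase
    }

    func main() {
        lo, hi := argsRange(0x7000, 0x8000)
        fmt.Printf("scan [%#x, %#x)\n", lo, hi)
    }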
Fixes #5486.
R=golang-dev, bradfitz, iant, dvyukov
CC=golang-dev
https://golang.org/cl/9440043
A test added in b37d2fdcc4d9 didn't work with some values of GOMAXPROCS
because the defer statements were in the wrong order: the Pipe could be
closed before the TLS Client was.
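For illustration, the safe ordering (a sketch, not the test's actual
code): defers run LIFO, so the Pipe's close must be deferred before
the TLS Client's so that it runs after it.

    package main

    import (
        "crypto/tls"
        "net"
    )

    func main() {
        c1, c2 := net.Pipe()
        defer c1.Close() // runs last: the raw pipe outlives the TLS conn
        defer c2.Close()

        client := tls.Client(c1, &tls.Config{InsecureSkipVerify: true})
        defer client.Close() // runs first: closed before the pipe
    }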
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/9187047
If a slice points to an array embedded in a struct,
the whole struct can be incorrectly scanned as the slice buffer.
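The problematic pattern looks like this (hypothetical types; the
collector must scan only the embedded array, not the enclosing
struct):

    package main

    type frame struct {
        data [32]byte // backing array for the slice below
        next *frame   // must not be mistaken for part of the buffer
    }

    func main() {
        f := &frame{}
        s := f.data[:] // slice pointing into an array embedded in a struct
        _ = s
    }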
Fixes #5443.
R=cshapiro, iant, r, cshapiro, minux.ma
CC=bradfitz, gobot, golang-dev
https://golang.org/cl/9372044
Allocations of size 16 can bypass the atomic set of the allocated
bit, while allocations of size 8 cannot. Allocations with and without
type info hit different paths inside malloc (a sketch follows the
results below).
Current results on linux/amd64:
BenchmarkMalloc8             50000000    43.6 ns/op
BenchmarkMalloc16            50000000    46.7 ns/op
BenchmarkMallocTypeInfo8     50000000    61.3 ns/op
BenchmarkMallocTypeInfo16    50000000    63.5 ns/op
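A hedged sketch of benchmarks along these lines (not necessarily the
CL's exact code): pointer-free allocations take the path without type
info, while allocations containing pointers carry it.

    package malloc_test

    import "testing"

    var sink interface{}

    func BenchmarkMalloc16(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = new([16]byte) // pointer-free: no type info needed
        }
    }

    func BenchmarkMallocTypeInfo16(b *testing.B) {
        type t struct{ p, q *byte } // 16 bytes with pointers on amd64
        for i := 0; i < b.N; i++ {
            sink = new(t) // carries type info for the collector
        }
    }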
R=golang-dev, remyoudompheng, minux.ma, bradfitz, iant
CC=golang-dev
https://golang.org/cl/9090045
Streamline the test for page boundaries in the hash functions. Also
avoid the boundary check when >=16 bytes are hashed.
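The boundary test amounts to something like this (an illustrative Go
sketch; the runtime's version lives in C/assembly):

    package main

    import "fmt"

    const pageSize = 4096

    // crossesPage reports whether reading n bytes starting at p would
    // cross a page boundary; only then must the hash fall back to a
    // byte-at-a-time tail load.
    func crossesPage(p, n uintptr) bool {
        return p&(pageSize-1) > pageSize-n
    }

    func main() {
        // With >=16 bytes of input the check can be skipped entirely:
        // the final 16-byte chunk can be loaded from (end-16), which
        // always lies within the data.
        fmt.Println(crossesPage(0x1ff8, 16)) // true: straddles a page
        fmt.Println(crossesPage(0x1f00, 16)) // false
    }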
benchmark                        old ns/op    new ns/op    delta
BenchmarkHashStringSpeed         23           22           -0.43%
BenchmarkHashBytesSpeed          44           42           -3.61%
BenchmarkHashStringArraySpeed    71           68           -4.05%
R=iant, khr
CC=gobot, golang-dev, google
https://golang.org/cl/9123046
Finer-grained transfers were relevant with per-M caches; with per-P
caches they are irrelevant and harmful to performance. For the few
small size classes where it makes a difference, it's fine to grab the
whole span (4K).
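Sketch of the shape of the change (hypothetical types, not the
runtime's code): move a span's entire free list at once instead of a
few objects at a time.

    package main

    type object struct{ next *object }

    type span struct{ freelist *object }

    type cache struct{ freelist *object }

    // refill hands the span's whole free list to the per-P cache in
    // one operation, rather than transferring finer-grained batches.
    func (c *cache) refill(s *span) {
        c.freelist = s.freelist
        s.freelist = nil
    }

    func main() {
        s := &span{freelist: &object{}}
        c := &cache{}
        c.refill(s)
    }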
benchmark          old ns/op    new ns/op    delta
BenchmarkMalloc    42           40           -4.45%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/9374043
The PKCS#1 spec requires that the PS padding in an RSA message be at
least 8 bytes long. We were not previously checking this. This isn't
important in the most common situation (session key encryption), but
the impact is unclear in other cases.
This change enforces the specified minimum size.
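In EME-PKCS1-v1_5 the encoded message is 0x00 || 0x02 || PS || 0x00
|| M. A hedged sketch of the check (the real code must do this in
constant time):

    package main

    import (
        "bytes"
        "errors"
        "fmt"
    )

    func unpad(em []byte) ([]byte, error) {
        if len(em) < 11 || em[0] != 0 || em[1] != 2 {
            return nil, errors.New("rsa: invalid padding")
        }
        i := bytes.IndexByte(em[2:], 0)
        if i < 8 { // PS must be at least 8 bytes long
            return nil, errors.New("rsa: PS too short")
        }
        return em[2+i+1:], nil
    }

    func main() {
        em := append([]byte{0, 2, 1, 2, 3, 4, 5, 6, 7, 8, 0}, []byte("msg")...)
        m, err := unpad(em)
        fmt.Println(string(m), err)
    }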
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/9222045
OpenSSL can be configured to send empty records in order to randomise
the CBC IV. This is an early version of the 1/n-1 record splitting
that Go also does, and is quite reasonable, but it results in
tls.Conn.Read returning (0, nil).
This change ignores up to 100 consecutive empty records to avoid
returning (0, nil) to callers.
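The shape of the fix, as a hedged sketch (hypothetical helper names,
not the package's internals):

    package main

    import "fmt"

    const maxConsecutiveEmptyRecords = 100

    // readNonEmpty keeps reading records until one carries data,
    // giving up after 100 consecutive empty ones so a hostile peer
    // cannot pin us in this loop; Read thus never returns (0, nil).
    func readNonEmpty(readRecord func() ([]byte, error)) ([]byte, error) {
        for i := 0; i < maxConsecutiveEmptyRecords; i++ {
            p, err := readRecord()
            if err != nil || len(p) > 0 {
                return p, err
            }
        }
        return nil, fmt.Errorf("tls: too many empty records received")
    }

    func main() {
        n := 0
        p, err := readNonEmpty(func() ([]byte, error) {
            n++
            if n < 3 {
                return nil, nil // two empty records, then data
            }
            return []byte("hello"), nil
        })
        fmt.Println(string(p), err)
    }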
Fixes #5309.
R=golang-dev, r, minux.ma
CC=golang-dev
https://golang.org/cl/8852044
This patch resulted from a bit of quick optimisation in response to a
golang-nuts post. It looks like one could save a couple other copies in
this function, but this addresses the inner loop and is fairly simple.
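The flavour of the optimisation, as a hedged sketch (illustrative,
not the CL's code): rotate three Ints through a Euclidean loop so the
inner loop allocates nothing.

    package main

    import (
        "fmt"
        "math/big"
    )

    // gcd reuses temporaries across iterations instead of allocating
    // a fresh big.Int on every pass through the inner loop.
    func gcd(x, y *big.Int) *big.Int {
        a := new(big.Int).Set(x)
        b := new(big.Int).Set(y)
        t := new(big.Int)
        for b.Sign() != 0 {
            t.Mod(a, b)
            a, b, t = b, t, a // rotate: old a becomes the next scratch
        }
        return a
    }

    func main() {
        fmt.Println(gcd(big.NewInt(1071), big.NewInt(462))) // 21
    }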
benchmark old ns/op new ns/op delta
BenchmarkGCD10x10 1964 1711 -12.88%
BenchmarkGCD10x100 2019 1736 -14.02%
BenchmarkGCD10x1000 2471 2171 -12.14%
BenchmarkGCD10x10000 6040 5778 -4.34%
BenchmarkGCD10x100000 43204 43025 -0.41%
BenchmarkGCD100x100 11004 8520 -22.57%
BenchmarkGCD100x1000 11820 9446 -20.08%
BenchmarkGCD100x10000 23846 21382 -10.33%
BenchmarkGCD100x100000 133691 131505 -1.64%
BenchmarkGCD1000x1000 120041 95591 -20.37%
BenchmarkGCD1000x10000 136887 113600 -17.01%
BenchmarkGCD1000x100000 295370 273912 -7.26%
BenchmarkGCD10000x10000 2556126 2205198 -13.73%
BenchmarkGCD10000x100000 3159512 2808038 -11.12%
BenchmarkGCD100000x100000 150543094 139986045 -7.01%
R=gri, remyoudompheng
CC=bradfitz, gobot, golang-dev, gri
https://golang.org/cl/9424043
This is needed for the preemptive scheduler: it will preempt only
when m->locks==0, and we do not want to be preempted while the lock
is not yet completely unlocked.
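Schematically (a Go-flavoured sketch of the runtime's C, with
hypothetical names):

    package main

    import "sync/atomic"

    type m struct{ locks int32 }

    type mutex struct{ key uint32 }

    func unlockImpl(l *mutex) { atomic.StoreUint32(&l.key, 0) }

    // unlock decrements m.locks only after the lock is fully
    // released; the scheduler preempts only when m.locks==0, so this
    // ordering guarantees we cannot be preempted mid-unlock.
    func (mp *m) unlock(l *mutex) {
        unlockImpl(l)
        atomic.AddInt32(&mp.locks, -1)
    }

    func main() {
        mp := &m{locks: 1}
        mp.unlock(&mutex{key: 1})
    }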
R=golang-dev, khr, iant
CC=golang-dev
https://golang.org/cl/9196047
Also change the table type from int32[] to int8[] to save space in
the L1 cache.
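Illustratively (hypothetical sizes, not the runtime's actual tables),
the lookup stays the same while the table shrinks 4x:

    package main

    import "fmt"

    // Size-to-class lookup table for sizes up to 1024 bytes, indexed
    // in 8-byte steps; int8 entries occupy a quarter of the cache
    // lines that int32 entries would.
    var sizeToClass [1024/8 + 1]int8

    func sizeClass(size int) int {
        return int(sizeToClass[(size+7)/8])
    }

    func main() {
        fmt.Println(sizeClass(24))
    }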
benchmark          old ns/op    new ns/op    delta
BenchmarkMalloc    42           40           -4.68%
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/9199044
Looking up the user's display name with directory services can take
several seconds when the user's computer is not in a domain. As a
workaround, first check whether the computer is joined to a domain,
and don't use directory services if it is not.
Additionally, don't leak tokens in user.Current().
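The workaround's shape, as a hedged sketch (the helpers here are
hypothetical stand-ins for Windows API calls such as
NetGetJoinInformation):

    package main

    import "fmt"

    // isDomainJoined stands in for a domain-membership query.
    func isDomainJoined() bool { return false }

    // lookupDisplayName stands in for the slow directory-services call.
    func lookupDisplayName(username string) string { return username }

    // displayName skips directory services entirely on machines that
    // are not joined to a domain, avoiding a multi-second stall.
    func displayName(username string) string {
        if !isDomainJoined() {
            return username
        }
        return lookupDisplayName(username)
    }

    func main() {
        fmt.Println(displayName("gopher"))
    }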
Fixes #5298.
R=golang-dev, bradfitz, alex.brainman, lucio.dere
CC=golang-dev
https://golang.org/cl/8541047
This fixes fontification, navigation and indentation for methods
of the form `func (Foo) Bar...`
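That is, methods whose receiver has a type but no name:

    package main

    import "fmt"

    type Foo struct{}

    // Bar has an anonymous receiver, the form the fix handles.
    func (Foo) Bar() { fmt.Println("bar") }

    func main() { Foo{}.Bar() }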
R=adonovan
CC=gobot, golang-dev
https://golang.org/cl/8951043
The *Encoder is almost always garbage. It doesn't need an
encodeState (and its bytes.Buffer) inside it, since one is only
needed locally, inside Encode.
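A hedged sketch of the new shape (hypothetical internals): Encode
obtains an encodeState locally instead of embedding one in the
Encoder.

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"
    )

    type encodeState struct{ bytes.Buffer }

    func newEncodeState() *encodeState { return new(encodeState) }

    // Encoder no longer carries an encodeState (and its
    // bytes.Buffer); a short-lived one is used only inside Encode.
    type Encoder struct {
        w   io.Writer
        err error
    }

    func (enc *Encoder) Encode(v interface{}) error {
        if enc.err != nil {
            return enc.err
        }
        e := newEncodeState()     // local: garbage as soon as Encode returns
        fmt.Fprintf(e, "%v\n", v) // placeholder for real JSON encoding
        _, err := enc.w.Write(e.Bytes())
        enc.err = err
        return err
    }

    func main() {
        (&Encoder{w: os.Stdout}).Encode("hello")
    }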
benchmark                 old ns/op    new ns/op    delta
BenchmarkEncoderEncode    2562         2553         -0.35%

benchmark                 old bytes    new bytes    delta
BenchmarkEncoderEncode    283          102          -63.96%
R=r
CC=gobot, golang-dev
https://golang.org/cl/9365044