The new mutex, fdMutex, handles both locking and the lifetime of sysfd,
and serializes the Read and Write methods.
This makes it possible to strip 2 sync.Mutex.Lock calls,
2 sync.Mutex.Unlock calls, 1 defer and some miscellaneous
overhead from every network operation.
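For illustration only, here is a minimal sketch of the idea, under an
assumption about its shape: a single atomic state word carries both a
lock bit and a reference count, so one CAS replaces a sync.Mutex
Lock/Unlock pair plus separate reference counting. Names are
hypothetical and this is not the actual fdMutex implementation (which
parks waiting goroutines rather than spinning):

package main

import (
	"fmt"
	"sync/atomic"
)

const (
	lockedBit = 1 << 0 // low bit: exclusive lock held
	refUnit   = 1 << 1 // higher bits: reference count
)

// fdState is a toy combination of a lock and a refcount in one word.
type fdState struct {
	state uint64
}

// lockAndRef acquires the lock and takes a reference in a single CAS.
func (m *fdState) lockAndRef() {
	for {
		old := atomic.LoadUint64(&m.state)
		if old&lockedBit != 0 {
			continue // toy spin-wait; real code would park the goroutine
		}
		if atomic.CompareAndSwapUint64(&m.state, old, (old+refUnit)|lockedBit) {
			return
		}
	}
}

// unlockAndDeref drops the lock and the reference in a single CAS.
func (m *fdState) unlockAndDeref() {
	for {
		old := atomic.LoadUint64(&m.state)
		if atomic.CompareAndSwapUint64(&m.state, old, (old&^lockedBit)-refUnit) {
			return
		}
	}
}

func main() {
	var m fdState
	m.lockAndRef()
	fmt.Printf("state after lock:   %#x\n", atomic.LoadUint64(&m.state))
	m.unlockAndDeref()
	fmt.Printf("state after unlock: %#x\n", atomic.LoadUint64(&m.state))
}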
On linux/amd64, Intel E5-2690:
benchmark                             old ns/op    new ns/op    delta
BenchmarkTCP4Persistent                    9595         9454   -1.47%
BenchmarkTCP4Persistent-2                  8978         8772   -2.29%
BenchmarkTCP4ConcurrentReadWrite           4900         4625   -5.61%
BenchmarkTCP4ConcurrentReadWrite-2         2603         2500   -3.96%
In general it strips 70-500 ns from every network operation, depending
on the processor model. On my relatively new E5-2690 that amounts to
~5% of the cost of a network op.
Fixes #6074.
R=golang-dev, bradfitz, alex.brainman, iant, mikioh.mikioh
CC=golang-dev
https://golang.org/cl/12418043
The old code was caching per-type struct field info. Instead,
cache type-specific encoding funcs, tailored for that
particular type to avoid unnecessary reflection at runtime.
Once the machine has been built, future encodings of that type
just run the func.
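A hedged sketch of that caching shape (hypothetical names, heavily
simplified, and not the actual encoding/json machinery): build an
encoder func per reflect.Type once, then reuse it for every later
value of that type:

package main

import (
	"fmt"
	"reflect"
	"strconv"
	"sync"
)

type encoderFunc func(v reflect.Value) string

var (
	mu    sync.RWMutex
	cache = map[reflect.Type]encoderFunc{}
)

// encoderFor returns a cached encoder for t, building it on first use.
func encoderFor(t reflect.Type) encoderFunc {
	mu.RLock()
	f, ok := cache[t]
	mu.RUnlock()
	if ok {
		return f
	}
	// The reflection work happens once per type, not once per value.
	switch t.Kind() {
	case reflect.String:
		f = func(v reflect.Value) string { return strconv.Quote(v.String()) }
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		f = func(v reflect.Value) string { return strconv.FormatInt(v.Int(), 10) }
	default:
		f = func(v reflect.Value) string { return fmt.Sprintf("%v", v.Interface()) }
	}
	mu.Lock()
	cache[t] = f
	mu.Unlock()
	return f
}

func main() {
	s := reflect.ValueOf("hello")
	n := reflect.ValueOf(42)
	fmt.Println(encoderFor(s.Type())(s)) // "hello"
	fmt.Println(encoderFor(n.Type())(n)) // 42
}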
benchmark               old ns/op    new ns/op    delta
BenchmarkCodeEncoder     48424939     36975320  -23.64%

benchmark               old MB/s     new MB/s  speedup
BenchmarkCodeEncoder        40.07        52.48    1.31x
Additionally, the numbers seem stable now at ~52 MB/s, whereas
the numbers for the old code were all over the place: 11 MB/s,
40 MB/s, 13 MB/s, 39 MB/s, etc. In the benchmark above I compared
against the best I saw the old code do.
R=rsc, adg
CC=gobot, golang-dev, r
https://golang.org/cl/9129044
Simple approach. Still generates garbage, but not as much.
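For reference, the call pattern the benchmark name suggests is simply
binary.Write with a slice of fixed-size values; the API itself is
unchanged:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"log"
)

func main() {
	data := make([]int32, 1000)
	for i := range data {
		data[i] = int32(i)
	}
	var buf bytes.Buffer
	// Encodes the whole slice; each int32 occupies 4 bytes.
	if err := binary.Write(&buf, binary.LittleEndian, data); err != nil {
		log.Fatal(err)
	}
	fmt.Println(buf.Len(), "bytes written") // 4000 bytes written
}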
benchmark                        old ns/op    new ns/op    delta
BenchmarkWriteSlice1000Int32s        40260        18791  -53.33%

benchmark                        old MB/s     new MB/s  speedup
BenchmarkWriteSlice1000Int32s        99.35       212.87    2.14x
Fixes #2634.
R=golang-dev, crawshaw
CC=golang-dev
https://golang.org/cl/12680046
Introduce a freezetheworld function that makes a best-effort attempt
to stop any concurrently running goroutines. Call it during crash.
Fixes #5873.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12054044
Original CL by rsc (11916045):
The motivation for disallowing trailing commas was RFC 4180 saying
"The last field in the record must not be followed by a comma."
I believe this is an admonition to CSV generators, not readers.
When reading, anything followed by a comma is not the last field.
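A small illustration of the resulting reader behavior (with a default
csv.Reader): a record that ends in a comma simply has an empty final
field:

package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	r := csv.NewReader(strings.NewReader("a,b,\n"))
	rec, err := r.Read()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%q\n", rec) // ["a" "b" ""]
}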
Fixes #5892.
R=golang-dev, rsc, r
CC=golang-dev
https://golang.org/cl/12294043
By separating finding the end of the comment from the end of the action,
we can diagnose malformed comments better.
Also tweak the documentation to make the comment syntax clearer.
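For reference, the comment form in question; per the package
documentation a comment must start and end at the action delimiters,
and it produces no output:

package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	// A well-formed comment; stray text between the comment and the
	// closing delimiter would be reported as a parse error.
	const src = "Hello{{/* this is a template comment */}}, world\n"
	t := template.Must(template.New("t").Parse(src))
	if err := t.Execute(os.Stdout, nil); err != nil {
		log.Fatal(err)
	}
	// Output: Hello, world
}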
Fixes #6022.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12570044
g% 6c ~/x.c
/Users/rsc/x.c:1 duplicate types given: STRUCT s and VOID
/Users/rsc/x.c:1 no return at end of function: f
g%
Fixes #6083.
R=ken2
CC=golang-dev
https://golang.org/cl/12691043
See issue 4949 for a full explanation.
Allocs go from 1 to zero in the non-addressable case.
Fixes #4949.
BenchmarkInterfaceBig      90    14  -84.01%
BenchmarkInterfaceSmall    14    14   +0.00%
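As background on the terminology only (this example is not from the
CL): a value is "non-addressable" when its address cannot be taken
directly, e.g. the result of a function call, as opposed to a named
variable:

package main

import "fmt"

type big struct{ a, b, c, d, e, f int64 }

func makeBig() big { return big{1, 2, 3, 4, 5, 6} }

func main() {
	v := makeBig()
	var fromVar interface{} = v          // source is addressable: the variable v
	var fromCall interface{} = makeBig() // source is non-addressable: a call result
	fmt.Println(fromVar, fromCall)
}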
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12646043
Unlike the net package's own pollster, the runtime-integrated network
pollster on BSD variants, which is based on kqueue, requires a socket
that has already been passed to syscall.Listen for a stream listener.
This CL separates pollDesc.Init of the Unix network pollster from newFD
to avoid any breakage in the transition from the Unix network pollster
to the runtime-integrated pollster. Upcoming CLs will rearrange the
call order of pollster and syscall functions as follows:
- For dialers that open active connections, pollDesc.Init will be
called in between syscall.Bind and syscall.Connect.
- For stream listeners that open passive stream connections,
pollDesc.Init will be called just after syscall.Listen.
- For datagram listeners that open datagram connections,
pollDesc.Init will be called just after syscall.Bind.
This is in preparation for the runtime-integrated network pollster
for BSD variants.
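As a concrete illustration of the stream-listener ordering above, here
is a hedged, syscall-level sketch for a Unix system (illustrative only,
with minimal error handling; it is not the net package's internal code):

package main

import (
	"log"
	"syscall"
)

func main() {
	// Open, bind and listen on a stream socket in the order the
	// runtime-integrated pollster needs for a stream listener.
	fd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer syscall.Close(fd)

	sa := &syscall.SockaddrInet4{Port: 0} // any free port
	if err := syscall.Bind(fd, sa); err != nil {
		log.Fatal(err)
	}
	if err := syscall.Listen(fd, syscall.SOMAXCONN); err != nil {
		log.Fatal(err)
	}
	// pollDesc.Init (registration with the kqueue-based pollster)
	// would happen here, only after syscall.Listen has succeeded.
}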
Update #5199
R=dvyukov, bradfitz
CC=golang-dev
https://golang.org/cl/12663043
Having a trailing dot in the string doesn't really simplify
the checking loop in isDomainName. Avoid this unnecessary allocation.
Also make the valid domain names more explicit by adding some more
test cases.
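A simplified, illustrative sketch of the loop shape (a hypothetical
helper, not the actual isDomainName): treating both '.' and the end of
the string as a label boundary means no trailing dot ever has to be
appended to the input:

package main

import "fmt"

// checkLabels is a toy stand-in for the real check: it only enforces
// label lengths, but shows how the end of the string closes the final
// label without appending "." to s (and therefore without allocating).
func checkLabels(s string) bool {
	partlen := 0
	for i := 0; i < len(s); i++ {
		if s[i] == '.' {
			if partlen == 0 || partlen > 63 {
				return false
			}
			partlen = 0
			continue
		}
		partlen++
	}
	// partlen == 0 here means the name ended in a dot, which is fine.
	return partlen <= 63
}

func main() {
	fmt.Println(checkLabels("golang.org"))  // true
	fmt.Println(checkLabels("golang..org")) // false: empty label
}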
benchmark             old ns/op    new ns/op    delta
BenchmarkDNSNames        2420.0        983.0  -59.38%

benchmark             old allocs   new allocs    delta
BenchmarkDNSNames            12            0  -100.00%

benchmark             old bytes     new bytes    delta
BenchmarkDNSNames           336             0  -100.00%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12662043
Renders code coverage as an overlay, replicating the look of the
HTML that go tool cover produces.
Also some cleanups.
R=adonovan, bradfitz
CC=golang-dev
https://golang.org/cl/12684043
MOVBS and MOVHS are defined as duplicates of MOVB and MOVH,
and perform sign-extending moves.
No change is made to code generation.
Update #1837
R=rsc, bradfitz
CC=golang-dev
https://golang.org/cl/12682043
- adjusted test files so that they actually type-check
- adjusted go1.txt, go1.1.txt, next.txt
- to run, provide build tag: api_tool
Fixes #4538.
R=bradfitz
CC=golang-dev
https://golang.org/cl/12300043
The ResponseWriter's ReadFrom method was causing side effects on
the output before any data was read.
Now, bail out early and do a normal copy (which does a read
before writing) when our input and output are known not to
be the pair of types we need for sendfile.
Fixes #5660
R=golang-dev, rsc, nightlyone
CC=golang-dev
https://golang.org/cl/12632043
I moved the pointer block from one end of the frame
to the other toward the end of working on the last CL,
and of course that made the optimization no longer work.
Now it works again:
0030 (bug361.go:12) DATA gclocals·0+0(SB)/4,$4
0030 (bug361.go:12) DATA gclocals·0+4(SB)/4,$3
0030 (bug361.go:12) GLOBL gclocals·0+0(SB),8,$8
Fixes arm build (this time for sure!).
TBR=golang-dev
CC=cshapiro, golang-dev, iant
https://golang.org/cl/12627044
Sort non-pointer-containing data to the low end of the
stack frame, and make the bitmaps only cover the
pointer-containing top end.
Generates significantly smaller garbage collection bitmaps
for programs with large byte buffers on the stack.
Only 2% shorter for godoc, but 99.99998% shorter
in some test cases.
Fixes arm build.
TBR=golang-dev
CC=cshapiro, golang-dev, iant
https://golang.org/cl/12541047
The receiver name is optional. When a method's receiver name is
missing, the functionList regex can't match the method,
e.g. `func (*T) ProtoMessage() {}`.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12530044
Individual variables bigger than 10 MB are now
moved to the heap, as if they had escaped on
their own.
This avoids ridiculous stacks for programs that
do things like
x := [1<<30]byte{}
... use x ...
If 10 MB is too small, we can raise the limit.
Fixes #6077.
R=ken2
CC=golang-dev
https://golang.org/cl/12650045
GetQueuedCompletionStatusEx makes it possible to dequeue a batch of
completion notifications, which is more efficient than dequeueing
them one by one.
benchmark                           old ns/op    new ns/op    delta
BenchmarkClientServerParallel4         100605        90945   -9.60%
BenchmarkClientServerParallel4-2        90225        74504  -17.42%
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12436044
In prep for Robert's forthcoming cmd/api rewrite which
depends on the go.tools subrepo, we'll need to be more
careful about how and when we run cmd/api.
Rather than implement this policy in both run.bash and
run.bat, this change moves the policy and mechanism into
cmd/api/run.go, which will then evolve.
The plan is in a TODO in run.go.
R=golang-dev, gri
CC=golang-dev
https://golang.org/cl/12482044
The test takes up to 64 seconds on Windows builders.
I've tried to reduce the number of iterations in the test,
but it does not affect the run time.
Fixes #6054.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12531043