This CL refactors the existing listenerSockaddr function into several
methods on netFD.
This is in preparation for the runtime-integrated network pollster for
BSD variants.
Update #5199
R=golang-dev, dave, alex.brainman, dvyukov, remyoudompheng
CC=golang-dev
https://golang.org/cl/12023043
Updates #6046.
This CL just does maxstring and concatstring. There are other functions
to fix, but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up on architectures I can't test
myself.
R=golang-dev, dave, iant
CC=golang-dev
https://golang.org/cl/12519044
I broke it with the darwin getwd attrlist stuff (0583e9d36dd).
plan9 doesn't have syscall.ENOTSUP.
It's in api/go1.txt as a symbol always available (not context-specific):
pkg syscall, const ENOTSUP Errno
... but plan9 isn't considered by cmd/api, so it only looks
universally available. Alternatively, we could add a fake ENOTSUP
to plan9, but they were making efforts earlier to clean their
syscall package, so I'd prefer not to dump more in it.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12509044
This change replaces the hard-coded switch on compression method
in the zip file reader and writer with a map into which users can
register compressors and decompressors in their init()s.
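For illustration, a minimal sketch of the kind of registration this enables; the package-level RegisterCompressor hook and the zip.Deflate constant are taken from today's archive/zip API and are assumptions relative to this CL:

    package main

    import (
        "archive/zip"
        "bytes"
        "compress/flate"
        "io"
    )

    func init() {
        // Override the default Deflate compressor with one that always
        // uses the best-compression level.
        zip.RegisterCompressor(zip.Deflate, func(w io.Writer) (io.WriteCloser, error) {
            return flate.NewWriter(w, flate.BestCompression)
        })
    }

    func main() {
        var buf bytes.Buffer
        zw := zip.NewWriter(&buf)
        f, err := zw.Create("hello.txt")
        if err != nil {
            panic(err)
        }
        f.Write([]byte("hello, zip"))
        zw.Close() // the archive in buf was written with the registered compressor
    }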
R=golang-dev, bradfitz, rsc
CC=golang-dev
https://golang.org/cl/12421043
NetBSD and OpenBSD are broken like OS X is. Good to know.
Drop required count from avg/2 to avg/3, because the
Plan 9 builder just barely missed avg/2 in one of its runs.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/12548043
Looks like the latest FreeBSD doesn't set the address family identifier
for RTAX_NETMASK stuff; probably RTAX_GENMASK too, though not confirmed.
This CL tries to identify address families by using the length of
each socket address if possible.
The issue is confirmed on FreeBSD 9.1.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12332043
Unlike the net package's existing pollster, the runtime-integrated
network pollster on BSD variants (in practice, kqueue) requires that
a stream listener's socket has already been passed to syscall.Listen.
This CL separates pollDesc.Init (actually runtime_pollOpen) from newFD
to allow control of each socket state, and adds an init method to netFD
instead. Upcoming CLs will rearrange the call order of the
runtime-integrated pollster and syscall functions as follows (see the
sketch after the list):
- For dialers that open active connections, runtime_pollOpen will be
called in between syscall.Bind and syscall.Connect.
- For stream listeners that open passive stream connections,
runtime_pollOpen will be called just after syscall.Listen.
- For datagram listeners that open datagram connections,
runtime_pollOpen will be called just after syscall.Bind.
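A rough sketch of the dialer ordering described above, assuming a Unix build; the hypothetical pollOpen callback stands in for netFD.init/runtime_pollOpen, and names and error handling are illustrative only:

    package dialsketch

    import "syscall"

    // dialOrder shows the intended sequence for an active connection:
    // socket -> (optional) bind -> runtime_pollOpen -> connect.
    func dialOrder(laddr, raddr syscall.Sockaddr, pollOpen func(s int) error) (int, error) {
        s, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
        if err != nil {
            return -1, err
        }
        if laddr != nil {
            if err := syscall.Bind(s, laddr); err != nil {
                syscall.Close(s)
                return -1, err
            }
        }
        // For dialers the pollster learns about the socket here,
        // in between syscall.Bind and syscall.Connect.
        if err := pollOpen(s); err != nil {
            syscall.Close(s)
            return -1, err
        }
        if err := syscall.Connect(s, raddr); err != nil {
            syscall.Close(s)
            return -1, err
        }
        return s, nil
    }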
This is in preparation for the runtime-integrated network pollster for
BSD variants.
Update #5199
R=dvyukov, alex.brainman, minux.ma
CC=golang-dev
https://golang.org/cl/8608044
Update #6046.
This CL just does findnull and findnullw. There are other functions
to fix, but doing them a few at a time will help isolate any (unlikely)
breakages these changes bring up on architectures I can't test
myself.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12520043
Embed all data necessary for read/write operations directly into netFD.
benchmark                    old ns/op    new ns/op    delta
BenchmarkTCP4Persistent          27669        23341  -15.64%
BenchmarkTCP4Persistent-2        18173        12558  -30.90%
BenchmarkTCP4Persistent-4        10390         7319  -29.56%
This change will intentionally break all builders to see
how many allocations they do per read/write.
This will be fixed soon afterwards.
R=golang-dev, alex.brainman
CC=golang-dev
https://golang.org/cl/12413043
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.
This is the same as reverted change 12250043
with save marked as textflag 7.
The problem was that if save calls morestack,
then the subsequent lessstack spoils g->sched.pc/sp,
and those bad values are then remembered in g->syscallpc/sp.
Entersyscallblock had the same problem,
but it has never been triggered to date.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12478043
Basically a partial rollback of 12053043 until I can
figure out what is really going on.
Fixes bug 6051.
R=golang-dev
CC=golang-dev
https://golang.org/cl/12496043
This means that pprof will no longer report profiles on OS X.
That's unfortunate, but the profiles were often wrong and, worse,
it was difficult to tell whether the profile was wrong or not.
The workarounds made the scheduler more complex,
possibly caused a deadlock (see issue 5519), and did not actually
deliver reliable results.
It may be possible for adventurous users to apply a patch to
their kernels to get working results, or perhaps having no results
will encourage someone to do the work of creating a profiling
thread like on Windows. Issue 6047 has details.
Fixes #5519.
Fixes #6047.
R=golang-dev, bradfitz, r
CC=golang-dev
https://golang.org/cl/12429045
This means that in the common case (a modern kernel), we make only
one system call to dup instead of two, and we also avoid grabbing
the syscall.ForkLock.
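A hedged sketch of the fast and slow paths being described, assuming linux/amd64; the dupCloseOnExec name and the exact fallback condition are illustrative, not necessarily the CL's:

    package fdsketch

    import "syscall"

    func dupCloseOnExec(fd int) (int, error) {
        // Fast path: a single fcntl(F_DUPFD_CLOEXEC) duplicates the
        // descriptor and sets close-on-exec atomically, no ForkLock needed.
        nfd, _, errno := syscall.Syscall(syscall.SYS_FCNTL, uintptr(fd), syscall.F_DUPFD_CLOEXEC, 0)
        if errno == 0 {
            return int(nfd), nil
        }
        if errno != syscall.EINVAL {
            return -1, errno
        }
        // Slow path (older kernels): two calls, dup + CloseOnExec, guarded
        // by ForkLock so no fork/exec can leak the descriptor in between.
        syscall.ForkLock.RLock()
        defer syscall.ForkLock.RUnlock()
        nfd2, err := syscall.Dup(fd)
        if err != nil {
            return -1, err
        }
        syscall.CloseOnExec(nfd2)
        return nfd2, nil
    }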
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/12476043
you do reflect.call with too big an argument list.
Not worth the hassle.
Fixes #6023.
Fixes #6033.
R=golang-dev, bradfitz, dave
CC=golang-dev
https://golang.org/cl/12485043
For normal slices a[i:j] we're generating 3 bounds
checks: j<={len(string),cap(slice)}, j<=j (!), and i<=j.
The second check somehow snuck in as part of the [i:j:k]
implementation, where it actually does something.
Remove the second check when we don't need it.
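For reference (not part of the CL), a small illustration of the bounds the two slice forms must satisfy:

    package main

    import "fmt"

    func main() {
        a := make([]int, 5, 10) // len 5, cap 10

        // a[i:j] requires 0 <= i <= j <= cap(a): only two useful checks.
        b := a[1:4]
        fmt.Println(len(b), cap(b)) // 3 9

        // a[i:j:k] also limits the result's capacity to k-i and
        // requires 0 <= i <= j <= k <= cap(a).
        c := a[1:4:6]
        fmt.Println(len(c), cap(c)) // 3 5
    }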
R=rsc, r
CC=golang-dev
https://golang.org/cl/12311046
While we're here, add a test for the same functionality in gzip,
which was already implemented, and add bzip2 CRC checks.
Fixes #5772.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/12387044
Breaks all 386 builders.
««« original CL description
runtime: use gcpc/gcsp during traceback of goroutines in syscalls
gcpc/gcsp are used by GC in similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall. So it has higher chances of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
»»»
R=rsc
CC=golang-dev
https://golang.org/cl/12424045
It was needed for the old scheduler,
because there could temporarily be more threads than gomaxprocs.
In the new scheduler, gomaxprocs is always respected.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12438043
gcpc/gcsp are used by the GC in a similar situation.
gcpc/gcsp are also more stable than gp->sched,
because gp->sched is mutated by entersyscall/exitsyscall
in morestack and mcall, so it has a higher chance of being inconsistent.
Also, rename gcpc/gcsp to syscallpc/syscallsp.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12250043
In the event that code tries to use a hash function that isn't compiled
in, and therefore panics, give the developer a fighting chance of
figuring out which hash function it needed.
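For context, a small sketch of how hash availability works: a crypto.Hash only becomes Available once the implementing package's init has registered it, usually via a blank import. The example uses the crypto and crypto/sha256 APIs as they exist today:

    package main

    import (
        "crypto"
        "fmt"

        _ "crypto/sha256" // init registers the SHA-256 constructor with package crypto
    )

    func main() {
        fmt.Println(crypto.SHA256.Available()) // true, thanks to the blank import

        // Without such an import, New panics; after this CL the panic
        // message should identify which hash function was missing.
        h := crypto.SHA256.New()
        h.Write([]byte("hello"))
        fmt.Printf("%x\n", h.Sum(nil))
    }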
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/12420045
Runtime netpoll supports at most one read waiter
and at most one write waiter; it is the responsibility
of the net package to ensure that. Currently the Windows
implementation allows more than one waiter in Accept,
which leads to "fatal error: netpollblock: double wait".
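One way to enforce the single-waiter invariant is to serialize Accept callers before they block on the pollster; the sketch below is illustrative only (listenFD, acceptMu, acceptOne and waitRead are hypothetical names, and the CL's actual fix may differ):

    package acceptsketch

    import (
        "sync"
        "syscall"
    )

    // listenFD is a stand-in for the Windows netFD of a listening socket.
    type listenFD struct {
        acceptMu sync.Mutex // admits one accept waiter at a time
    }

    // acceptOne stands in for a single non-blocking accept attempt.
    func (fd *listenFD) acceptOne() (int, error) { return -1, syscall.EAGAIN }

    // waitRead stands in for blocking in the runtime pollster.
    func (fd *listenFD) waitRead() error { return nil }

    // Accept retries until a connection arrives; only the acceptMu holder
    // ever reaches waitRead, so netpoll never sees two read waiters and
    // the "netpollblock: double wait" fatal error cannot occur.
    func (fd *listenFD) Accept() (int, error) {
        fd.acceptMu.Lock()
        defer fd.acceptMu.Unlock()
        for {
            nfd, err := fd.acceptOne()
            if err == syscall.EAGAIN {
                if err := fd.waitRead(); err != nil {
                    return -1, err
                }
                continue
            }
            return nfd, err
        }
    }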
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12400045
Whether the keys are concatenated or separate (or a mixture) depends on the server.
Fixes #5979.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12433043
Windows dynamic priority boosting assumes that a process has different types
of dedicated threads -- GUI, IO, computational, etc. Go processes use
equivalent threads that all do a mix of GUI, IO, computations, etc.
In such a context dynamic priority boosting does nothing but harm, so turn it off.
In particular, if two goroutines do heavy IO on a uniprocessor server machine,
Windows refuses to schedule the timer thread for 2+ seconds when priority boosting is enabled.
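For illustration only, a user-level sketch (not the runtime's actual change) of turning the boost off via the Win32 SetProcessPriorityBoost call, assuming a Windows build where syscall.NewLazyDLL and syscall.GetCurrentProcess are available:

    package main

    import "syscall"

    var (
        kernel32                    = syscall.NewLazyDLL("kernel32.dll")
        procSetProcessPriorityBoost = kernel32.NewProc("SetProcessPriorityBoost")
    )

    func disablePriorityBoost() error {
        h, err := syscall.GetCurrentProcess()
        if err != nil {
            return err
        }
        // Second argument TRUE (1) means "disable dynamic boosting"
        // for every thread of this process.
        r, _, callErr := procSetProcessPriorityBoost.Call(uintptr(h), 1)
        if r == 0 {
            return callErr
        }
        return nil
    }

    func main() {
        if err := disablePriorityBoost(); err != nil {
            panic(err)
        }
    }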
Fixes #5971.
R=alex.brainman
CC=golang-dev
https://golang.org/cl/12406043
The test isn't checking deliberate panics, so catching them just makes the code longer.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/12420043