If for whatever reason seh points into the Go heap region,
the dangling pointer will cause memory corruption during GC.
Update #5193.
R=golang-dev, alex.brainman, iant
CC=golang-dev
https://golang.org/cl/8402045
We've decided to leave logging to third parties (there are too
many formats), as others have done.
And we can't change the behavior of the various response
fields at this point anyway. Plus, I argue they're correct and
match their documentation.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8391043
Fixes #5175.
The race detector runtime expects values passed to MapShadow() to be
page-aligned, because they are used in an mmap() call. If they are not
aligned, mmap() trims either the beginning or the end of the mapping.
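A minimal sketch of the rounding involved, in Go for illustration
(pageAlign is a hypothetical helper; the race runtime itself is C):

    package sketch

    // pageAlign rounds [addr, addr+size) outward to page boundaries,
    // assuming pagesize is a power of two, so the span is safe to
    // pass to mmap-based APIs such as MapShadow.
    func pageAlign(addr, size, pagesize uintptr) (uintptr, uintptr) {
        end := addr + size
        addr &^= pagesize - 1                        // round start down
        end = (end + pagesize - 1) &^ (pagesize - 1) // round end up
        return addr, end - addr
    }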
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8325043
- comment fixes
- s/z/x/ in (*Rat).Float64 to match the convention for functions
returning a non-*Rat
- minor test output tweaking
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8327044
The smtp package originally allowed PLAIN at any time, but then
a TLS check was added out of paranoia. It's too paranoid:
it prevents using PLAIN auth even from localhost to localhost
when the server advertises PLAIN support.
This CL also permits the client to send PLAIN if the server
advertises it.
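For reference, a minimal client-side use of PLAIN via net/smtp;
the host, port, and credentials below are placeholders:

    package main

    import (
        "log"
        "net/smtp"
    )

    func main() {
        // With this change, the client may send PLAIN to a local
        // server that advertises it, even without TLS.
        auth := smtp.PlainAuth("", "user@example.com", "secret", "localhost")
        err := smtp.SendMail("localhost:25", auth, "user@example.com",
            []string{"rcpt@example.com"}, []byte("Subject: hi\r\n\r\nbody\r\n"))
        if err != nil {
            log.Fatal(err)
        }
    }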
Fixes #5184
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8279043
Save an allocation per GET request and don't call io.LimitReader(r, 0)
just to read 0 bytes. There's already an eofReader global variable
for when we just want a non-nil io.Reader to immediately EOF.
(Sorry, I know Rob told me to stop, but I was bored on the plane and
wrote this before I received the recent "please, really stop" email.)
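The idea, as a minimal sketch (not the actual net/http declaration):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // A single shared reader that is already at EOF; code that needs
    // a non-nil empty body can use it instead of allocating an
    // io.LimitReader per request.
    var eofReader io.Reader = strings.NewReader("")

    func main() {
        n, err := eofReader.Read(make([]byte, 8))
        fmt.Println(n, err) // 0 EOF
    }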
benchmark old ns/op new ns/op delta
BenchmarkServerHandlerTypeLen 13888 13279 -4.39%
BenchmarkServerHandlerNoLen 12912 12229 -5.29%
BenchmarkServerHandlerNoType 13348 12632 -5.36%
BenchmarkServerHandlerNoHeader 10911 10261 -5.96%
benchmark old allocs new allocs delta
BenchmarkServerHandlerTypeLen 20 19 -5.00%
BenchmarkServerHandlerNoLen 18 17 -5.56%
BenchmarkServerHandlerNoType 18 17 -5.56%
BenchmarkServerHandlerNoHeader 13 12 -7.69%
benchmark old bytes new bytes delta
BenchmarkServerHandlerTypeLen 1913 1878 -1.83%
BenchmarkServerHandlerNoLen 1878 1843 -1.86%
BenchmarkServerHandlerNoType 1878 1844 -1.81%
BenchmarkServerHandlerNoHeader 1085 1051 -3.13%
Fixes #5188
R=golang-dev, adg, r
CC=golang-dev
https://golang.org/cl/8297044
My old code was trying to be too smart.
Also: Slightly better error message format
for gofmt -r pattern errors.
Fixes #4406.
R=golang-dev, adg
CC=golang-dev
https://golang.org/cl/8267045
This changes the map lookup behavior for string maps with 2-8 keys.
There was already a fast path for 0 items and 1 item.
Now, if a string-keyed map has <= 8 items, first check all the
keys for length. If only one has the right length, then
just check it for equality and avoid hashing altogether. Once
the map has more than 8 items, always hash as normal.
I don't know why some of the other non-string map benchmarks
got faster. This was with benchtime=2s, multiple times. I haven't
seen anything else getting slower, though.
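A sketch of the fast path in Go (the real change is in the
runtime's C hashmap code; entry and lookupSmall are illustrative):

    package sketch

    type entry struct {
        key string
        val int
    }

    // lookupSmall sketches the <= 8 item fast path: scan key lengths
    // first, and hash only when that is inconclusive.
    func lookupSmall(entries []entry, key string) (int, bool) {
        match := -1
        for i := range entries {
            if len(entries[i].key) != len(key) {
                continue
            }
            if match >= 0 {
                // Several keys share this length; the runtime falls
                // back to hashing here (a plain scan stands in).
                for j := range entries {
                    if entries[j].key == key {
                        return entries[j].val, true
                    }
                }
                return 0, false
            }
            match = i
        }
        if match < 0 {
            return 0, false // no key even has the right length
        }
        // Exactly one candidate: one equality check, no hashing.
        if entries[match].key == key {
            return entries[match].val, true
        }
        return 0, false
    }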
benchmark old ns/op new ns/op delta
BenchmarkHashStringSpeed 37 34 -8.20%
BenchmarkHashInt32Speed 32 29 -10.67%
BenchmarkHashInt64Speed 31 27 -12.82%
BenchmarkHashStringArraySpeed 105 99 -5.43%
BenchmarkMegMap 274206 255153 -6.95%
BenchmarkMegOneMap 27 23 -14.80%
BenchmarkMegEqMap 148332 116089 -21.74%
BenchmarkMegEmptyMap 4 3 -12.72%
BenchmarkSmallStrMap 22 22 -0.89%
BenchmarkMapStringKeysEight_32 42 23 -43.71%
BenchmarkMapStringKeysEight_64 55 23 -56.96%
BenchmarkMapStringKeysEight_1M 279688 24 -99.99%
BenchmarkIntMap 16 15 -10.18%
BenchmarkRepeatedLookupStrMapKey32 40 37 -8.15%
BenchmarkRepeatedLookupStrMapKey1M 287918 272980 -5.19%
BenchmarkNewEmptyMap 156 130 -16.67%
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/7641057
It was unnecessarily cloning and then mutating a map that had
a very short lifetime (just that function).
No new tests, because they were added in revision 833bf2ef1527
(TestHeaderToWire). The benchmarks below are from the earlier
commit, revision 52e3407d.
I noticed this inefficiency when reviewing a change Peter Buhr
is looking into, which will also use these benchmarks.
benchmark old ns/op new ns/op delta
BenchmarkServerHandlerTypeLen 12547 12325 -1.77%
BenchmarkServerHandlerNoLen 12466 11167 -10.42%
BenchmarkServerHandlerNoType 12699 11800 -7.08%
BenchmarkServerHandlerNoHeader 11901 9210 -22.61%
benchmark old allocs new allocs delta
BenchmarkServerHandlerTypeLen 21 20 -4.76%
BenchmarkServerHandlerNoLen 20 18 -10.00%
BenchmarkServerHandlerNoType 20 18 -10.00%
BenchmarkServerHandlerNoHeader 17 13 -23.53%
benchmark old bytes new bytes delta
BenchmarkServerHandlerTypeLen 1930 1913 -0.88%
BenchmarkServerHandlerNoLen 1912 1879 -1.73%
BenchmarkServerHandlerNoType 1912 1878 -1.78%
BenchmarkServerHandlerNoHeader 1491 1086 -27.16%
R=golang-dev, adg
CC=golang-dev
https://golang.org/cl/8268046
The expected precision setting for the x87 on Win32 is 53-bit,
but MinGW resets the floating-point unit to 64-bit. Win32
object code generally expects values to be rounded to double,
not double-extended, precision.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/8175044
Demonstrates one way to sort a slice of structs according
to different sort criteria, done in sequence.
One possible answer to a question that comes up often.
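A compact sketch of the idea (the Change type and less functions
are illustrative, not the example added by this CL):

    package main

    import (
        "fmt"
        "sort"
    )

    type Change struct {
        User  string
        Lines int
    }

    // byFunc adapts an arbitrary less function to sort.Interface.
    type byFunc struct {
        c    []Change
        less func(a, b Change) bool
    }

    func (s byFunc) Len() int           { return len(s.c) }
    func (s byFunc) Swap(i, j int)      { s.c[i], s.c[j] = s.c[j], s.c[i] }
    func (s byFunc) Less(i, j int) bool { return s.less(s.c[i], s.c[j]) }

    func main() {
        changes := []Change{{"gri", 100}, {"r", 50}, {"gri", 20}}
        // Primary key User, secondary key Lines, tried in sequence.
        sort.Sort(byFunc{changes, func(a, b Change) bool {
            if a.User != b.User {
                return a.User < b.User
            }
            return a.Lines < b.Lines
        }})
        fmt.Println(changes) // [{gri 20} {gri 100} {r 50}]
    }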
R=golang-dev, gri, bradfitz, adg, adg, rogpeppe
CC=golang-dev
https://golang.org/cl/8182044
Doing grow work on reads is not thread-safe.
Changed code to do grow work only on inserts & deletes.
This is a short-term fix, eventually we'll want to do
grow work in parallel to recover the space of the old
table.
Fixes #5120.
R=bradfitz, khr
CC=golang-dev
https://golang.org/cl/8242043
The text is printed only if the test fails or -test.v is set.
Document this behavior in the testing package and 'go help test'.
Also put a 'go install' into mkdoc.sh so I don't get tricked by the
process of updating the documentation ever again.
Fixes #5174.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/8118047
Closes the API documentation gap between platforms.
Also makes the textual representation of the code the same across platforms.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8148043
Closes the API documentation gap between platforms.
Also makes the textual representation of the code the same across platforms.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8147043
Since we can't properly handle anything except 100, treat all
1xx informational responses as sketchy and don't reuse the
connection for future requests.
The only other 1xx response code currently in use in the wild
is WebSockets' use of "101 Switching Protocols", but our
code.google.com/p/go.net/websockets doesn't use Client or
Transport: it uses ReadResponse directly, so it is unaffected by
this CL (and its tests still pass).
So this CL is entirely just future-proofing paranoia.
Also: the Internet is weird.
Update #2184
Update #3665
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/8208043
Whoops. I'm surprised it even worked before. (Need two pipes,
not one.)
Also, remove the whole pipe registration business, since it
wasn't even required in the previous version. (I had fixed it
later, at the end of send100Response, but forgot to delete it.)
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/8191044
This CL ensures we use the correct socket options for
passive and active open sockets.
For the passive open sockets created by the Listen functions,
the additional SO_REUSEADDR and SO_REUSEPORT options are required
for quick service restart and/or multicasting.
For the active open sockets created by Dial functions, no
additional options are required.
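A sketch of the listener-side options via the syscall package
(darwin shown, where syscall defines SO_REUSEPORT; setListenerOpts
is a hypothetical helper, and fd handling is simplified):

    package sockopt

    import "syscall"

    func setListenerOpts(fd int) error {
        // Allow a service to restart quickly on the same address.
        if err := syscall.SetsockoptInt(fd, syscall.SOL_SOCKET, syscall.SO_REUSEADDR, 1); err != nil {
            return err
        }
        // Allow multiple listeners on one address for multicasting.
        return syscall.SetsockoptInt(fd, syscall.SOL_SOCKET, syscall.SO_REUSEPORT, 1)
    }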
R=golang-dev, dave, bradfitz, rsc
CC=golang-dev
https://golang.org/cl/7795050
"There are only two hard problems in computer science:
cache invalidation, naming things, and off-by-one errors."
The HTTP server code already strips Expect: 100-continue on
requests, so httputil.ReverseProxy should be unaffected, but
some servers send unsolicited HTTP/1.1 100 Continue responses,
so we need to skip over them if they're seen to avoid getting
off-by-one on Transport requests/responses.
This does change the behavior of people who were using Client
or Transport directly and explicitly setting "Expect: 100-continue"
themselves, but it didn't work before anyway. Instead of the
user code seeing a 100 response and then things blowing up,
it now basically works, except the Transport will still blast away
the full request body immediately. That's the part that needs
to be finished to close this issue.
This is the safe quick fix.
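The shape of the fix, sketched with the public API
(readSkipping100 is a stand-in for the Transport-internal
read path):

    package sketch

    import (
        "bufio"
        "net/http"
    )

    // If the server sends an unsolicited "100 Continue", discard it
    // and read the real response that follows.
    func readSkipping100(br *bufio.Reader, req *http.Request) (*http.Response, error) {
        resp, err := http.ReadResponse(br, req)
        if err == nil && resp.StatusCode == 100 {
            resp, err = http.ReadResponse(br, req)
        }
        return resp, err
    }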
Update #3665
R=golang-dev, dsymonds, dave, jgrahamc
CC=golang-dev
https://golang.org/cl/8166045
This benchmark verifies that CL #8173043 reduces time spent
sliding the Buffer's contents.
Results without and with CL #8173043 applied:
benchmark old ns/op new ns/op delta
BenchmarkBufferFullSmallReads 755336 175054 -76.82%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/8174043
Also added a new benchmark from the same test:
benchmark old ns/op new ns/op delta
BenchmarkBufferNotEmptyWriteRead 2643698 709189 -73.17%
Fixes #5154
R=golang-dev, r, gri
CC=golang-dev
https://golang.org/cl/8164043
Adds the missing wildcard port assignment description to ListenUDP.
Also updates the wildcard port description on ListenTCP.
R=golang-dev, dave, r
CC=golang-dev
https://golang.org/cl/8063043
Also removes redundant tests that run Go 1.0's non-IPv6
Windows code on IPv6-enabled Windows kernels.
R=alex.brainman, golang-dev, bradfitz, dave
CC=golang-dev
https://golang.org/cl/7812052
With the faster strings package, the difference between
the specialized code and strings.Split is in the noise:
benchmark old ns/op new ns/op delta
BenchmarkPrint 16724291 16686729 -0.22%
(Measured on a Mac Pro, 2.8GHz Quad-core Intel Xeon,
4GB 800 MHz DDR2, Mac OS X 10.8.3)
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/8100044
Saves both the textproto.Reader allocation and the growth of
its internal scratch buffer.
benchmark old ns/op new ns/op delta
BenchmarkServerFakeConnWithKeepAliveLite 10324 10149 -1.70%
benchmark old allocs new allocs delta
BenchmarkServerFakeConnWithKeepAliveLite 19 17 -10.53%
benchmark old bytes new bytes delta
BenchmarkServerFakeConnWithKeepAliveLite 1559 1492 -4.30%
R=golang-dev, r, gri
CC=golang-dev
https://golang.org/cl/8094046
Removes another per-request allocation. Also makes the code more
readable, IMO. And more testable.
benchmark old ns/op new ns/op delta
BenchmarkServerFakeConnWithKeepAliveLite 10539 10324 -2.04%
benchmark old allocs new allocs delta
BenchmarkServerFakeConnWithKeepAliveLite 20 19 -5.00%
benchmark old bytes new bytes delta
BenchmarkServerFakeConnWithKeepAliveLite 1609 1559 -3.11%
R=golang-dev, gri
CC=golang-dev
https://golang.org/cl/8118044
We already depend on strings in this file, so use it.
Plus strings.Index will be faster than a manual loop
once issue 3751 is finished.
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/8116043
A chunkWriter and a response are 1:1. Make them contiguous in
memory and save an allocation.
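The layout change in outline (the field set is simplified; only the
embedding relationship mirrors net/http):

    package sketch

    type chunkWriter struct {
        res *response
    }

    type response struct {
        cw chunkWriter // a value, not a pointer: allocated with response
    }

    // newResponse performs one allocation that now covers both structs.
    func newResponse() *response {
        r := new(response)
        r.cw.res = r
        return r
    }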
benchmark old ns/op new ns/op delta
BenchmarkServerFakeConnWithKeepAliveLite 10715 10539 -1.64%
benchmark old allocs new allocs delta
BenchmarkServerFakeConnWithKeepAliveLite 21 20 -4.76%
benchmark old bytes new bytes delta
BenchmarkServerFakeConnWithKeepAliveLite 1626 1609 -1.05%
R=golang-dev, gri
CC=golang-dev
https://golang.org/cl/8114043
Add more tests around the various orders handlers can access
and flush response headers.
Also clarify the documentation on fields of response and
chunkWriter.
While there, remove an allocation (a header clone) for simple
handlers.
benchmark old ns/op new ns/op delta
BenchmarkServerFakeConnWithKeepAliveLite 15245 14966 -1.83%
benchmark old allocs new allocs delta
BenchmarkServerFakeConnWithKeepAliveLite 24 23 -4.17%
benchmark old bytes new bytes delta
BenchmarkServerFakeConnWithKeepAliveLite 1717 1668 -2.85%
R=golang-dev, gri
CC=golang-dev
https://golang.org/cl/8101043
Motivated by garbage profiling in HTTP benchmarks. This
change means new empty maps are just one small allocation
(the HMap) instead of the HMap plus the relatively larger
h->buckets allocation. This helps maps which remain empty
throughout their life.
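In Go terms, the change behaves like allocating the bucket storage
lazily (hypothetical types; the real fix is in the runtime's C
hashmap code):

    package sketch

    type kv struct {
        key string
        val int
    }

    type hmapSketch struct {
        buckets []kv // nil until the first insert
    }

    func (h *hmapSketch) insert(k string, v int) {
        if h.buckets == nil {
            // Deferred until actually needed, so maps that stay
            // empty cost only the header allocation.
            h.buckets = make([]kv, 0, 8)
        }
        h.buckets = append(h.buckets, kv{k, v})
    }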
benchmark old ns/op new ns/op delta
BenchmarkNewEmptyMap 196 107 -45.41%
benchmark old allocs new allocs delta
BenchmarkNewEmptyMap 2 1 -50.00%
benchmark old bytes new bytes delta
BenchmarkNewEmptyMap 195 50 -74.36%
R=khr, golang-dev, r
CC=golang-dev
https://golang.org/cl/7722046
There was another bufio.Writer not being reused, found with
GOGC=off and -test.memprofile.
benchmark old ns/op new ns/op delta
BenchmarkServerFakeConnWithKeepAlive 18270 16046 -12.17%
benchmark old allocs new allocs delta
BenchmarkServerFakeConnWithKeepAlive 38 36 -5.26%
benchmark old bytes new bytes delta
BenchmarkServerFakeConnWithKeepAlive 4598 2488 -45.89%
Update #5100
R=golang-dev, gri
CC=golang-dev
https://golang.org/cl/8038047
An HMUL node appears in some constant divisions, but
to observe a false negative in the race detector the divisor
must be suitably chosen so that the only memory access
is done for the HMUL.
R=dvyukov
CC=golang-dev
https://golang.org/cl/7935045
For Go 1.1, stop checking the rlimit, because it broke now
that mheap is allocated using SysAlloc. See issue 5049.
R=r
CC=golang-dev
https://golang.org/cl/7741050
The arm gentraceback mishandled frame linkage values pointing
to the assembly return function. This function is special as
its frame size is zero and it contains only one instruction.
These conditions would preserve the frame pointer and result
in an off-by-one error when unwinding the caller.
Fixes #5124
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/8023043
Prevents a storm of error messages if something goes wrong.
In the case of issue 5073 the epoll fd was closed by the test.
Update #5073.
R=golang-dev, r, rsc
CC=golang-dev
https://golang.org/cl/7966043
This CL avoids test data sharing in repetitive test runs;
e.g., go test net -cpu=1,1,1
R=golang-dev, fullung, bradfitz
CC=golang-dev
https://golang.org/cl/8011043
Handle interface comparison correctly,
add a few more tests, mark more nodes as impossible.
R=dvyukov, golang-dev
CC=golang-dev
https://golang.org/cl/7942045
Make the copy directly in the convert switch instead of in an extra loop.
Also stops converting nil-[]byte to zero-[]byte when assigning to *interface.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/7962044
At some point in the past, I believe the GCD algorithm was setting d to
be negative. The RSA code has been correcting that ever since but, now,
it appears to have changed and the correction isn't needed.
Having d be too large is harmless; it's just a little odd, and I
happened to notice.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/7948044
This keeps the logic about how to set the thread-local variables
m and g in code compiled and linked by the gc toolchain,
an important property for upcoming cgo changes.
It's also just a nice cleanup: one less place to update when
these details change.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/7560048
The right operand of && and || is only executed conditionally,
so the instrumentation must be more careful. In particular
it should not turn nodes assumed to be cheap after walk into
expensive ones.
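Concretely, f() && g() behaves like the rewrite below, so g's
instrumentation has to be emitted inside the branch rather than
hoisted before it (a sketch, not actual compiler output):

    package sketch

    func f() bool { return true }
    func g() bool { return false } // may touch racy memory

    func cond() bool {
        t := false
        if f() {
            t = g() // g, and any instrumentation of its accesses,
        }           // runs only when f() returns true
        return t
    }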
Update #4228
R=dvyukov, golang-dev
CC=golang-dev
https://golang.org/cl/7986043
The ARM implementation of runtime.cgocallback_gofunc diverged
from the calling convention by leaving a word of garbage at
the top of the stack and storing the return PC above the
locals. This change stores the return PC at the top of the
stack and removes the save area above the locals.
Update #5124
This CL fixes the first part of the ARM issues and adds the unwind test.
R=golang-dev, bradfitz, minux.ma, cshapiro, rsc
CC=golang-dev
https://golang.org/cl/7728045
The edit makes Hypot's description match the form
used in the other routines in this package.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/8003046
Adds a new debugging constant, 'checkgc'. If its value is non-zero,
all calls to mallocgc() from hashmap.c will start a garbage collection.
Fixes #5074.
R=golang-dev, khr
CC=golang-dev, rsc
https://golang.org/cl/7663051
Fixes performance of the current Windows network poller
with the new scheduler.
Gives runtime a hint when GetQueuedCompletionStatus() will block.
Fixes #5068.
benchmark old ns/op new ns/op delta
BenchmarkTCP4Persistent 4004000 33906 -99.15%
BenchmarkTCP4Persistent-2 21790 17513 -19.63%
BenchmarkTCP4Persistent-4 44760 34270 -23.44%
BenchmarkTCP4Persistent-6 45280 43000 -5.04%
R=golang-dev, alex.brainman, coocood, rsc
CC=golang-dev
https://golang.org/cl/7612045