Fixes #8348.
Trying to work around clang's dodgy support for .arch by reverting to the external assembler didn't work out so well. Minux had a much better solution: encode the instructions we need as .word directives, which avoids .arch altogether.
I've confirmed with gdb that this form produces the expected machine code:
Dump of assembler code for function crosscall_arm1:
0x00000000 <+0>: push {r4, r5, r6, r7, r8, r9, r10, r11, r12, lr}
0x00000004 <+4>: mov r4, r0
0x00000008 <+8>: mov r5, r1
0x0000000c <+12>: mov r0, r2
0x00000010 <+16>: blx r5
0x00000014 <+20>: blx r4
0x00000018 <+24>: pop {r4, r5, r6, r7, r8, r9, r10, r11, r12, pc}
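For illustration (not part of the CL), a small Go sketch of the .word trick: the register-form BLX has a fixed bit pattern with the register in the low nibble, so the two calls above can be emitted as raw words without any .arch directive.

package main

import "fmt"

// blx returns the ARM "BLX Rm" (register form) encoding:
// cond=AL (0xe), then the fixed pattern 0001 0010 1111 1111
// 1111 0011, then Rm in the low four bits.
func blx(rm uint32) uint32 {
	return 0xe12fff30 | rm&0xf
}

func main() {
	fmt.Printf(".word 0x%08x @ blx r5\n", blx(5)) // 0xe12fff35, as at <+16>
	fmt.Printf(".word 0x%08x @ blx r4\n", blx(4)) // 0xe12fff34, as at <+20>
}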
There is another compilation failure that blocks building Go with clang on arm:
# ../misc/cgo/test
# _/home/dfc/go/misc/cgo/test
/tmp/--407b12.s: Assembler messages:
/tmp/--407b12.s:59: Error: selected processor does not support ARM mode `blx r0'
clang: error: assembler command failed with exit code 1 (use -v to see invocation)
FAIL _/home/dfc/go/misc/cgo/test [build failed]
I'll open a new issue for that.
LGTM=iant
R=iant, minux
CC=golang-codereviews
https://golang.org/cl/158180047
Set the correct version number in the Windows installer based on
Go's Mercurial tag.
Name                                     | Version
------------------------------------------------
Go Programming Language amd64 go1.3.3    | 1.3.3
Go Programming Language amd64 go1.2rc3   | 1.2
Go Programming Language amd64 go1.2beta1 | 1.2
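For illustration, a hypothetical Go sketch of the mapping in the table above (the installer's actual logic may differ): strip the "go" prefix, then drop any beta/rc suffix.

package main

import (
	"fmt"
	"strings"
)

// installerVersion maps a Mercurial tag such as "go1.2rc3" to the
// installer version "1.2".
func installerVersion(tag string) string {
	v := strings.TrimPrefix(tag, "go")
	for _, pre := range []string{"beta", "rc"} {
		if i := strings.Index(v, pre); i >= 0 {
			v = v[:i]
		}
	}
	return v
}

func main() {
	for _, tag := range []string{"go1.3.3", "go1.2rc3", "go1.2beta1"} {
		fmt.Println(tag, "->", installerVersion(tag)) // 1.3.3, 1.2, 1.2
	}
}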
Fixes #8239.
LGTM=adg
R=adg, c.emil.hessman, alex.brainman
CC=golang-codereviews
https://golang.org/cl/160950044
Partial undo; the changes to ldelf.c are retained.
Some platforms are still not working even with the integrated assembler disabled, so we will have to find another solution.
««« original CL description
cmd/cgo: disable clang's integrated assembler
Fixes #8348.
Clang's integrated assembler (enabled by default since clang 3.4) understands the .arch directive but doesn't change the default value of -march. This causes the build to fail when we use BLX (armv5 and above) while clang assembles for armv4t, which appears to be the default target architecture on all the distros I've used.
This is probably a clang bug, so work around it for the time being by disabling the integrated assembler when compiling the cgo assembly shim.
This CL also includes a small change to ldelf.c, which was required because clang 3.4 and above generate more unusual symtab entries.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/156430044
»»»
LGTM=minux
R=iant, minux
CC=golang-codereviews
https://golang.org/cl/162880044
Get rid of gocputicks(); it is no longer used.
LGTM=bradfitz, dave
R=golang-codereviews, bradfitz, dave, minux
CC=golang-codereviews
https://golang.org/cl/161110044
This brings cmd/gc in line with the spec on this question.
It might break existing code, but that code was not conformant
with the spec.
Credit to Rémy for finding the broken code.
Fixes #6366.
LGTM=r
R=golang-codereviews, r
CC=adonovan, golang-codereviews, gri
https://golang.org/cl/129550043
This allows parsing file formats that assign special meaning
to the stream in which data is found.
Will do the same for compress/bzip2 once this is
reviewed and submitted.
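For illustration, the reading pattern the new API enables (a sketch using the gzip Reader's Multistream and Reset methods):

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"io/ioutil"
	"log"
)

func main() {
	// Two independent gzip streams, concatenated back to back.
	var buf bytes.Buffer
	for _, s := range []string{"hello ", "world"} {
		zw := gzip.NewWriter(&buf)
		zw.Write([]byte(s))
		zw.Close()
	}

	zr, err := gzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	for {
		zr.Multistream(false) // stop at the current stream boundary
		data, err := ioutil.ReadAll(zr)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("stream: %q\n", data)
		if err := zr.Reset(&buf); err == io.EOF {
			break // no more streams
		} else if err != nil {
			log.Fatal(err)
		}
	}
}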
Fixes #6486.
LGTM=nigeltao
R=nigeltao, dan.kortschak
CC=adg, bradfitz, golang-codereviews, r
https://golang.org/cl/159120044
Fixes #8348.
Clang's integrated assembler (enabled by default since clang 3.4) understands the .arch directive but doesn't change the default value of -march. This causes the build to fail when we use BLX (armv5 and above) while clang assembles for armv4t, which appears to be the default target architecture on all the distros I've used.
This is probably a clang bug, so work around it for the time being by disabling the integrated assembler when compiling the cgo assembly shim.
This CL also includes a small change to ldelf.c, which was required because clang 3.4 and above generate more unusual symtab entries.
LGTM=iant
R=golang-codereviews, iant
CC=golang-codereviews
https://golang.org/cl/156430044
It has been failing periodically on Solaris/x64.
Change blockevent so it always records an event if we called
SetBlockProfileRate(1), even if the time delta is negative or zero.
Hopefully this will fix the test on Solaris.
Caveat: I don't actually know what the Solaris problem is; this
is just an educated guess.
LGTM=dave
R=dvyukov, dave
CC=golang-codereviews
https://golang.org/cl/159150043
Russ Cox pointed out that environment strings are not
required to be NUL-terminated on Plan 9.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/159130044
Since CL 104570043 and 112720043, we are using the
nsec system call instead of /dev/bintime on Plan 9.
LGTM=rsc
R=rsc
CC=aram, golang-codereviews
https://golang.org/cl/155590043
Shell scripts depend on the old behavior too often.
It's too late to make this change.
LGTM=bradfitz
R=rsc, bradfitz
CC=golang-codereviews
https://golang.org/cl/161890044
This test was failing but did not break the build because it
was not run when -test.short was used.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/157150043
I came across this while debugging a GC problem in gccgo.
There is code in assignTo and cvtT2I that handles assignment
to all interface values. It allocates an empty interface even
if the real type is a non-empty interface. The fields are
then set for a non-empty interface, but the memory is recorded
as holding an empty interface. This means that the GC has
incorrect information.
This is extremely unlikely to fail, because the code in the GC
that handles empty interfaces looks like this:
obj = nil;
typ = eface->type;
if(typ != nil) {
	if(!(typ->kind&KindDirectIface) || !(typ->kind&KindNoPointers))
		obj = eface->data;
}
In the current runtime the condition is always true--if
KindDirectIface is set, then KindNoPointers is clear--and we
always want to set obj = eface->data. So the question is what
happens when we incorrectly store a non-empty interface value
in memory marked as an empty interface. In that case
eface->type will not be a *rtype as we expect, but will
instead be a pointer to an Itab. We are going to use this
pointer to look at a *rtype kind field. The *rtype struct
starts out like this:
type rtype struct {
	size       uintptr
	hash       uint32 // hash of type; avoids computation in hash tables
	_          uint8  // unused/padding
	align      uint8  // alignment of variable with this type
	fieldAlign uint8  // alignment of struct field with this type
	kind       uint8  // enumeration for C
An Itab always has at least two pointers, so on a
little-endian 64-bit system the kind field will be the high
byte of the second pointer. This will normally be zero, so
the test of typ->kind will succeed, which is what we want.
On a 32-bit system it might be possible to construct a failing
case by somehow getting the Itab for an interface with one
method to be immediately followed by a word that is all ones.
The effect would be that the test would sometimes fail and the
GC would not mark obj, leading to an invalid dangling
pointer. I have not tried to construct this test.
I noticed this in gccgo, where this error is much more likely
to cause trouble for a rather random reason: gccgo uses a
different layout of rtype, and in gccgo the kind field happens
to be the low byte of a pointer, not the high byte.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/155450044
The stack blowout can no longer happen,
but we can still test that too-complex regexps
are rejected.
Replacement for CL 162770043.
LGTM=iant, r
R=r, iant
CC=bradfitz, golang-codereviews
https://golang.org/cl/162860043
This is already tested by TestRE2Exhaustive, but the build has
not broken because that test is not run when using -test.short.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/155580043
https://golang.org/cl/152700045/ made it possible for struct literals assigned to globals to use <N> as the RHS. Normally, this is to zero out variables on first use. Because globals are already zero (or their linker initialized value), we just ignored this.
Now that <N> can occur from non-initialization code, we need to emit this code. We don't use <N> for initialization of globals any more, so this shouldn't cause any excessive zeroing.
Fixes #8961.
LGTM=rsc
R=golang-codereviews, rsc
CC=bradfitz, golang-codereviews
https://golang.org/cl/154540044
As we did with encoding, provide a trivial byte reader for
faster decoding. We can also reduce some of the copying
by doing the allocation all at once using a slightly different
interface from byte buffers.
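A hypothetical sketch of such a reader (the real type and names in the CL may differ): a byte slice plus an offset, with just the Read and ReadByte methods decode needs.

package gob

import "io"

// decodeBuffer is a trivial reader over a byte slice, with none of
// the generality (or copying) of bytes.Buffer.
type decodeBuffer struct {
	data   []byte
	offset int
}

func (d *decodeBuffer) ReadByte() (byte, error) {
	if d.offset >= len(d.data) {
		return 0, io.EOF
	}
	c := d.data[d.offset]
	d.offset++
	return c, nil
}

func (d *decodeBuffer) Read(p []byte) (int, error) {
	if d.offset >= len(d.data) {
		return 0, io.EOF
	}
	n := copy(p, d.data[d.offset:])
	d.offset += n
	return n, nil
}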
benchmark old ns/op new ns/op delta
BenchmarkEndToEndPipe 13368 12902 -3.49%
BenchmarkEndToEndByteBuffer 5969 5642 -5.48%
BenchmarkEndToEndSliceByteBuffer 479485 470798 -1.81%
BenchmarkEncodeComplex128Slice 92367 92201 -0.18%
BenchmarkEncodeFloat64Slice 39990 38960 -2.58%
BenchmarkEncodeInt32Slice 30510 27938 -8.43%
BenchmarkEncodeStringSlice 33753 33365 -1.15%
BenchmarkDecodeComplex128Slice 232278 196704 -15.32%
BenchmarkDecodeFloat64Slice 150258 128191 -14.69%
BenchmarkDecodeInt32Slice 133806 115748 -13.50%
BenchmarkDecodeStringSlice 335117 300534 -10.32%
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/154360049
This adds a Reset() to compress/flate's decompressor and plumbs that through
to compress/zlib and compress/gzip's Readers so callers can avoid large
allocations when performing many inflate operations. In particular this
preserves the allocation of the decompressor.hist buffer, which is 32kb and
overwritten as needed while inflating.
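For illustration, the reuse pattern this enables (a sketch assuming the flate.Resetter interface shape that landed with this work):

package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io/ioutil"
	"log"
)

func main() {
	// Prepare two deflate blobs.
	var blobs [][]byte
	for _, s := range []string{"hello ", "world"} {
		var buf bytes.Buffer
		fw, _ := flate.NewWriter(&buf, flate.DefaultCompression)
		fw.Write([]byte(s))
		fw.Close()
		blobs = append(blobs, buf.Bytes())
	}

	fr := flate.NewReader(bytes.NewReader(blobs[0]))
	defer fr.Close()
	for i, b := range blobs {
		if i > 0 {
			// Reuse the decompressor, keeping its 32kb hist buffer,
			// instead of allocating a fresh one per blob.
			if err := fr.(flate.Resetter).Reset(bytes.NewReader(b), nil); err != nil {
				log.Fatal(err)
			}
		}
		data, err := ioutil.ReadAll(fr)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", data)
	}
	fmt.Println()
}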
On the benchmark described in issue 6317, this produces the following
speedup on my 2.3GHz Intel Core i7 MBP with go version devel
+6b696a34e0af Sun Aug 03 15:14:59 2014 -0700 darwin/amd64:
blocked.text w/out patch vs blocked.text w/ patch:
benchmark old ns/op new ns/op delta
BenchmarkGunzip 8371577533 7927917687 -5.30%
benchmark old allocs new allocs delta
BenchmarkGunzip 176818 148519 -16.00%
benchmark old bytes new bytes delta
BenchmarkGunzip 292184936 12739528 -95.64%
flat.text vs blocked.text w/patch:
benchmark old ns/op new ns/op delta
BenchmarkGunzip 7939447827 7927917687 -0.15%
benchmark old allocs new allocs delta
BenchmarkGunzip 90702 148519 +63.74%
benchmark old bytes new bytes delta
BenchmarkGunzip 9959528 12739528 +27.91%
Similar speedups to those bradfitz saw in https://golang.org/cl/13416045.
Fixes #6317.
Fixes #7950.
LGTM=nigeltao
R=golang-codereviews, bradfitz, dan.kortschak, adg, nigeltao, jamesr
CC=golang-codereviews
https://golang.org/cl/97140043
This is a day-one error in the flag package: it did not check
that a flag was set at most once on the command line.
Because user-defined flags may have more general
properties, the check applies only to the standard flag
types in this package: bool, string, etc.
Fixes #8960.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/156390043
select {
case <-c:
case <-c:
}
In this case, c.recvq lists two SudoGs which have the same G.
So we can't use the G as the key to dequeue the correct SudoG,
as that key is ambiguous. Dequeueing the wrong SudoG ends up
freeing a SudoG that is still in c.recvq.
The fix is to use the actual SudoG pointer as the key.
LGTM=dvyukov
R=rsc, bradfitz, dvyukov, khr
CC=austin, golang-codereviews
https://golang.org/cl/159040043
Simple bug in argument processing: The final arg may
be the pipeline value, in which case it gets bound to the
fixed argument section. The code got that wrong. Easy
to fix.
Fixes #8950.
LGTM=bradfitz
R=golang-codereviews, bradfitz
CC=golang-codereviews
https://golang.org/cl/161750043
bytes.Buffer has more API and is a little slower. Since appending
is a key part of the path in encode, using a faster implementation
speeds things up measurably.
The couple of positive swings are likely garbage-collection related
since memory allocation looks different in the benchmark now.
I am not concerned by them.
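A hypothetical sketch of such a buffer (the CL's actual type may differ slightly): append-backed, with only the methods encode needs.

package gob

// encBuffer is an append-backed writer with only the methods the
// encoder needs; appending to a slice beats bytes.Buffer's more
// general bookkeeping.
type encBuffer struct {
	data []byte
}

func (e *encBuffer) WriteByte(c byte) error {
	e.data = append(e.data, c)
	return nil
}

func (e *encBuffer) Write(p []byte) (int, error) {
	e.data = append(e.data, p...)
	return len(p), nil
}

func (e *encBuffer) Bytes() []byte { return e.data }

func (e *encBuffer) Reset() { e.data = e.data[:0] }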
benchmark old ns/op new ns/op delta
BenchmarkEndToEndPipe 6620 6388 -3.50%
BenchmarkEndToEndByteBuffer 3548 3600 +1.47%
BenchmarkEndToEndSliceByteBuffer 336678 367980 +9.30%
BenchmarkEncodeComplex128Slice 78199 71297 -8.83%
BenchmarkEncodeFloat64Slice 37731 32258 -14.51%
BenchmarkEncodeInt32Slice 26780 22977 -14.20%
BenchmarkEncodeStringSlice 35882 26492 -26.17%
BenchmarkDecodeComplex128Slice 194819 185126 -4.98%
BenchmarkDecodeFloat64Slice 120538 120102 -0.36%
BenchmarkDecodeInt32Slice 106442 107275 +0.78%
BenchmarkDecodeStringSlice 272902 269866 -1.11%
LGTM=ruiu
R=golang-codereviews, ruiu
CC=golang-codereviews
https://golang.org/cl/160990043
Use go generate to write better loops for decoding arrays,
just as we did for encoding. It doesn't help as much,
relatively speaking, but it's still noticeable.
benchmark old ns/op new ns/op delta
BenchmarkDecodeComplex128Slice 202348 184529 -8.81%
BenchmarkDecodeFloat64Slice 135800 120979 -10.91%
BenchmarkDecodeInt32Slice 121200 105149 -13.24%
BenchmarkDecodeStringSlice 288129 278214 -3.44%
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/154420044
Don't use cmd/pprof, as it is not necessarily installed
and does not work on nacl and plan9.
Instead just look at the raw profile.
LGTM=crawshaw, rsc
R=golang-codereviews, crawshaw, 0intro, rsc
CC=golang-codereviews
https://golang.org/cl/159010043
Better to avoid the memory loads and just use immediate constants.
This especially applies to zeroing, which was being done by
copying zeros from elsewhere in the binary, even if the value
was going to be completely initialized with non-zero values.
The zero writes were optimized away but the zero loads from
the data segment were not.
LGTM=r
R=r, bradfitz, dvyukov
CC=golang-codereviews
https://golang.org/cl/152700045
Replace i < 0 || i >= x with uint(i) >= uint(x).
Shorten a few other code sequences.
Move the kind bits to the bottom of the flag word, to avoid shifts.
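For illustration, the comparison rewrite in its simplest form (a sketch, not the CL's code): a negative index converts to a huge uint, so one unsigned compare covers both tests.

package sketch

func index(v []int, i int) int {
	// Replaces: if i < 0 || i >= len(v) { panic(...) }
	if uint(i) >= uint(len(v)) {
		panic("index out of range")
	}
	return v[i]
}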
LGTM=r
R=r, bradfitz
CC=golang-codereviews
https://golang.org/cl/159020043
We borrow a trick from the fmt package and avoid reflection
to walk the elements when possible. We could push further with
unsafe (and we may) but this is a good start.
Decode can benefit similarly; it will be done separately.
Use go generate (engen.go) to produce the helper functions
(enc_helpers.go).
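For illustration, the shape of a generated helper (a sketch; emit stands in for gob's internal encode routine): fetch the concrete slice once, then loop without reflecting on each element.

package sketch

import "reflect"

// encInt32Slice mirrors what a generated helper does: one type
// assertion up front, then a plain loop over the concrete slice.
func encInt32Slice(emit func(int64), v reflect.Value) bool {
	slice, ok := v.Interface().([]int32)
	if !ok {
		return false // named slice type; fall back to reflection
	}
	for _, x := range slice {
		emit(int64(x))
	}
	return true
}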
benchmark old ns/op new ns/op delta
BenchmarkEndToEndPipe 6593 6482 -1.68%
BenchmarkEndToEndByteBuffer 3662 3684 +0.60%
BenchmarkEndToEndSliceByteBuffer 350306 351693 +0.40%
BenchmarkComplex128Slice 96347 80045 -16.92%
BenchmarkInt32Slice 42484 26008 -38.78%
BenchmarkFloat64Slice 51143 36265 -29.09%
BenchmarkStringSlice 53402 35077 -34.32%
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/156310043
gogo called from GC is okay
for the same reasons that
gogo called from System or ExternalCode is okay.
All three are fake stack traces.
Fixes #8408.
LGTM=dvyukov, r
R=r, dvyukov
CC=golang-codereviews
https://golang.org/cl/152580043
Dmitriy believes this broke Windows.
It looks like build.golang.org stopped before that,
but it's worth a shot.
««« original CL description
runtime: make pprof a little nicer
Update #8942
This does not fully address issue 8942 but it does make
the profiles much more useful, until that issue can be
fixed completely.
LGTM=dvyukov
R=r, dvyukov
CC=golang-codereviews
https://golang.org/cl/159990043
»»»
TBR=dvyukov
CC=golang-codereviews
https://golang.org/cl/160030043
It cannot run 'go tool pprof'. There is no guarantee that's installed.
It needs to build a temporary pprof binary and run that.
It also needs to skip the test on systems that can't build and
run binaries, namely android and nacl.
See src/cmd/nm/nm_test.go's TestNM for a template.
Update #8867
TBR=dvyukov
CC=golang-codereviews
https://golang.org/cl/153710043
Update #8942
This does not fully address issue 8942 but it does make
the profiles much more useful, until that issue can be
fixed completely.
LGTM=dvyukov
R=r, dvyukov
CC=golang-codereviews
https://golang.org/cl/159990043
There are 3 issues:
1. The skip argument of callers is off by 3,
so all allocations are attributed deep inside the memory profiler.
2. Memory profiling statistics are not updated after runtime.GC.
3. Testing package does not update memory profiling statistics
before capturing the profile.
Also add an end-to-end test.
Fixes #8867.
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/148710043
Both x = x[0:y] and x = append(x, ...) can avoid writing to
the base pointer in x (in the slice, always; in the append, most of the time).
For Go 1.5, we will need to change the compilation of x = x[0:y]
to avoid writing to the base pointer, so that the elision is safe,
and we will need to change the compilation of x = append(x, ...)
to write to the base pointer (through a barrier) only when
growing the underlying array, so that the general elision is safe.
For Go 1.4, elide the write barrier always, a change that should
have equivalent performance characteristics but is much
simpler and therefore safer.
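For illustration, the two forms in question (a sketch, not the CL's code):

package main

import "fmt"

func main() {
	x := make([]int, 4, 8)
	// Reslicing never changes the base pointer of x, so the write
	// barrier on this assignment can always be elided.
	x = x[0:2]
	// Append changes the base pointer only when the underlying
	// array must grow; here it appends within capacity.
	x = append(x, 7)
	fmt.Println(len(x), cap(x)) // 3 8
}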
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 3910526122 3918802545 +0.21%
BenchmarkFannkuch11 3747650699 3732600693 -0.40%
BenchmarkFmtFprintfEmpty 106 98.7 -6.89%
BenchmarkFmtFprintfString 280 269 -3.93%
BenchmarkFmtFprintfInt 296 282 -4.73%
BenchmarkFmtFprintfIntInt 467 470 +0.64%
BenchmarkFmtFprintfPrefixedInt 418 398 -4.78%
BenchmarkFmtFprintfFloat 574 535 -6.79%
BenchmarkFmtManyArgs 1768 1818 +2.83%
BenchmarkGobDecode 14916799 14925182 +0.06%
BenchmarkGobEncode 14110076 13358298 -5.33%
BenchmarkGzip 546609795 542630402 -0.73%
BenchmarkGunzip 136270657 136496277 +0.17%
BenchmarkHTTPClientServer 126574 125245 -1.05%
BenchmarkJSONEncode 30006238 27862354 -7.14%
BenchmarkJSONDecode 106020889 102664600 -3.17%
BenchmarkMandelbrot200 5793550 5818320 +0.43%
BenchmarkGoParse 5437608 5463962 +0.48%
BenchmarkRegexpMatchEasy0_32 192 179 -6.77%
BenchmarkRegexpMatchEasy0_1K 462 460 -0.43%
BenchmarkRegexpMatchEasy1_32 168 153 -8.93%
BenchmarkRegexpMatchEasy1_1K 1420 1280 -9.86%
BenchmarkRegexpMatchMedium_32 338 286 -15.38%
BenchmarkRegexpMatchMedium_1K 107435 98027 -8.76%
BenchmarkRegexpMatchHard_32 5941 4846 -18.43%
BenchmarkRegexpMatchHard_1K 185965 153830 -17.28%
BenchmarkRevcomp 795497458 798447829 +0.37%
BenchmarkTemplate 132091559 134938425 +2.16%
BenchmarkTimeParse 604 608 +0.66%
BenchmarkTimeFormat 551 548 -0.54%
LGTM=r
R=r, dave
CC=golang-codereviews, iant, khr, rlh
https://golang.org/cl/159960043
A new attack on CBC padding in SSLv3 was released yesterday[1]. Go only
supports SSLv3 as a server, not as a client. An easy fix is to change
the default minimum version to TLS 1.0 but that seems a little much
this late in the 1.4 process as it may break some things.
Thus this patch adds server support for TLS_FALLBACK_SCSV[2] -- a
mechanism for solving the fallback problem overall. Chrome has
implemented this since February and Google has urged others to do so in
light of yesterday's news.
With this change, clients can indicate that they are doing a fallback
connection and Go servers will be able to correctly reject them.
[1] http://googleonlinesecurity.blogspot.com/2014/10/this-poodle-bites-exploiting-ssl-30.html
[2] https://tools.ietf.org/html/draft-ietf-tls-downgrade-scsv-00
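For illustration, the stricter alternative mentioned above, which server operators can opt into themselves (a sketch; cert.pem and key.pem are placeholder paths):

package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":8443",
		// Refuse SSLv3 outright by raising the minimum version.
		TLSConfig: &tls.Config{MinVersion: tls.VersionTLS10},
	}
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}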
LGTM=rsc
R=rsc
CC=golang-codereviews
https://golang.org/cl/157090043
Among other things, *x = T{} does not need a write barrier.
The changes here avoid an unnecessary copy even when
no pointers are involved, so it may have larger effects.
In 6g and 8g, avoid manually repeated STOSQ in favor of
writing explicit MOVs, under the theory that the MOVs
should have fewer dependencies and pipeline better.
Benchmarks compare best of 5 on a 2012 MacBook Pro Core i5
with TurboBoost disabled. Most improvements can be explained
by the changes in this CL.
The effect in Revcomp is real but harder to explain: none of
the instructions in the inner loop changed. I suspect loop
alignment but really have no idea.
benchmark old ns/op new ns/op delta
BenchmarkBinaryTree17 3809027371 3819907076 +0.29%
BenchmarkFannkuch11 3607547556 3686983012 +2.20%
BenchmarkFmtFprintfEmpty 118 103 -12.71%
BenchmarkFmtFprintfString 289 277 -4.15%
BenchmarkFmtFprintfInt 304 290 -4.61%
BenchmarkFmtFprintfIntInt 507 458 -9.66%
BenchmarkFmtFprintfPrefixedInt 425 408 -4.00%
BenchmarkFmtFprintfFloat 555 555 +0.00%
BenchmarkFmtManyArgs 1835 1733 -5.56%
BenchmarkGobDecode 14738209 14639331 -0.67%
BenchmarkGobEncode 14239039 13703571 -3.76%
BenchmarkGzip 538211054 538701315 +0.09%
BenchmarkGunzip 135430877 134818459 -0.45%
BenchmarkHTTPClientServer 116488 116618 +0.11%
BenchmarkJSONEncode 28923406 29294334 +1.28%
BenchmarkJSONDecode 105779820 104289543 -1.41%
BenchmarkMandelbrot200 5791758 5771964 -0.34%
BenchmarkGoParse 5376642 5310943 -1.22%
BenchmarkRegexpMatchEasy0_32 195 190 -2.56%
BenchmarkRegexpMatchEasy0_1K 477 455 -4.61%
BenchmarkRegexpMatchEasy1_32 170 165 -2.94%
BenchmarkRegexpMatchEasy1_1K 1410 1394 -1.13%
BenchmarkRegexpMatchMedium_32 336 329 -2.08%
BenchmarkRegexpMatchMedium_1K 108979 106328 -2.43%
BenchmarkRegexpMatchHard_32 5854 5821 -0.56%
BenchmarkRegexpMatchHard_1K 185089 182838 -1.22%
BenchmarkRevcomp 834920364 780202624 -6.55%
BenchmarkTemplate 137046937 129728756 -5.34%
BenchmarkTimeParse 600 594 -1.00%
BenchmarkTimeFormat 559 539 -3.58%
LGTM=r
R=r
CC=golang-codereviews, iant, khr, rlh
https://golang.org/cl/157910047