mirror of https://github.com/golang/go synced 2024-11-12 12:30:21 -07:00
Commit Graph

21600 Commits

Author SHA1 Message Date
Peter Waller
ececbe89d4 net/http/httputil: ReverseProxy request cancellation
If an inbound connection is closed, cancel the outbound http request.

This is particularly useful if the outbound request may consume resources
unnecessarily until it is cancelled.
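
A rough sketch of the pattern (not the patch itself): watch the inbound
connection via http.CloseNotifier and cancel the outbound request through
the transport when it closes.  The rw, outreq, and transport variables are
assumed to be in scope inside ServeHTTP, and the transport is assumed to be
an *http.Transport (which has a CancelRequest method).

	if cn, ok := rw.(http.CloseNotifier); ok {
		done := make(chan struct{})
		defer close(done)
		go func() {
			select {
			case <-cn.CloseNotify():
				// Inbound connection went away: abort the outbound request.
				transport.CancelRequest(outreq)
			case <-done:
				// Outbound request finished normally.
			}
		}()
	}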

Fixes #8406

Change-Id: I738c4489186ce342f7e21d0ea3f529722c5b443a
Signed-off-by: Peter Waller <p@pwaller.net>
Reviewed-on: https://go-review.googlesource.com/2320
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-09 19:44:13 +00:00
Shenghou Ma
3aba41d6c3 runtime: source startupRandomData from auxv AT_RANDOM on linux/arm.
Fixes #9541.

Change-Id: I5d659ad50d7c3d1c92ed9feb86cda4c1a6e62054
Reviewed-on: https://go-review.googlesource.com/2584
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-09 06:50:11 +00:00
Martin Möhrmann
a3876ac21c log: optimize itoa
Reduce the buffer to the maximum size needed for conversion of 64-bit integers.
Reduce the number of integer divisions used.
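
The idea, roughly (an illustrative sketch, not the actual log code): a
uint64 needs at most 20 decimal digits, so a fixed buffer filled from the
end suffices, and each digit costs a single division by 10.

	// n is the hypothetical non-negative input value.
	var buf [20]byte
	i := len(buf)
	u := uint64(n)
	for u >= 10 {
		i--
		buf[i] = byte('0' + u%10)
		u /= 10
	}
	i--
	buf[i] = byte('0' + u)
	s := string(buf[i:]) // the decimal representation of n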

benchmark            old ns/op     new ns/op     delta
BenchmarkItoa        144           119           -17.36%
BenchmarkPrintln     783           752           -3.96%

Change-Id: I6d57a7feebf90f303be5952767107302eccf4631
Reviewed-on: https://go-review.googlesource.com/2215
Reviewed-by: Rob Pike <r@golang.org>
2015-01-09 00:22:10 +00:00
Keith Randall
1de9c4073b runtime: use urandom instead of random
Random is bad; it can block and prevent binaries from starting.
Use urandom instead.  We'd rather have bad random bits than no
random bits.

Change-Id: I360e1cb90ace5518a1b51708822a1dae27071ebd
Reviewed-on: https://go-review.googlesource.com/2582
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Minux Ma <minux@golang.org>
2015-01-09 00:09:42 +00:00
Shenghou Ma
5664eda733 cmd/go: document import path checking
This is a replay of CL 189760043 that is in release-branch.go1.4,
but not in the master branch somehow.

Change-Id: I11eb40a24273e7be397e092ef040e54efb8ffe86
Reviewed-on: https://go-review.googlesource.com/2541
Reviewed-by: Andrew Gerrand <adg@golang.org>
2015-01-08 23:24:59 +00:00
Keith Randall
7b2524217e runtime: fix 32-bit build
In 32-bit worlds, 8-byte objects are only aligned to 4-byte boundaries.

Change-Id: I91469a9a67b1ee31dd508a4e105c39c815ecde58
Reviewed-on: https://go-review.googlesource.com/2581
Reviewed-by: Keith Randall <khr@golang.org>
2015-01-08 21:39:57 +00:00
Robert Griesemer
fbe2845cdd doc: document math/big performance improvements
Change-Id: I2b40cd544dda550ac6ac6da19ba3867ec30b2774
Reviewed-on: https://go-review.googlesource.com/2563
Reviewed-by: Robert Griesemer <gri@golang.org>
2015-01-08 21:10:39 +00:00
Keith Randall
6f07ac2f28 cmd/gc: pad structs which end in zero-sized fields
For a non-zero-sized struct with a final zero-sized field,
add a byte to the size (before rounding to alignment).  This
change ensures that taking the address of the zero-sized field
will not incorrectly leak the following object in memory.

reflect.funcLayout also needs this treatment.
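
A hypothetical illustration of the problem being fixed:

	type T struct {
		n int64
		z struct{} // zero-sized final field
	}
	var t T
	p := &t.z
	// Without the padding byte, p would point just past t (at &t plus the
	// size of T), and holding p could incorrectly keep the next heap object alive.
	_ = p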

Fixes #9401

Change-Id: I1dc503dc5af4ca22c8f8c048fb7b4541cc957e0f
Reviewed-on: https://go-review.googlesource.com/2452
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-08 21:05:10 +00:00
Robert Griesemer
654a185f20 math/big: faster assembly kernels for AddVx/SubVx for 386.
(analogous to Change-Id: Ia473e9ab9c63a955c252426684176bca566645ae)

Fixes #9243.

benchmark              old ns/op     new ns/op     delta
BenchmarkAddVV_1       5.76          5.60          -2.78%
BenchmarkAddVV_2       7.17          6.98          -2.65%
BenchmarkAddVV_3       8.69          8.57          -1.38%
BenchmarkAddVV_4       10.5          10.5          +0.00%
BenchmarkAddVV_5       13.3          11.6          -12.78%
BenchmarkAddVV_1e1     20.4          19.3          -5.39%
BenchmarkAddVV_1e2     166           140           -15.66%
BenchmarkAddVV_1e3     1588          1278          -19.52%
BenchmarkAddVV_1e4     16138         12657         -21.57%
BenchmarkAddVV_1e5     167608        127836        -23.73%
BenchmarkAddVW_1       4.87          4.76          -2.26%
BenchmarkAddVW_2       6.10          6.07          -0.49%
BenchmarkAddVW_3       7.75          7.65          -1.29%
BenchmarkAddVW_4       9.30          9.39          +0.97%
BenchmarkAddVW_5       10.8          10.9          +0.93%
BenchmarkAddVW_1e1     18.8          18.8          +0.00%
BenchmarkAddVW_1e2     143           134           -6.29%
BenchmarkAddVW_1e3     1390          1266          -8.92%
BenchmarkAddVW_1e4     13877         12545         -9.60%
BenchmarkAddVW_1e5     155330        125432        -19.25%

benchmark              old MB/s     new MB/s     speedup
BenchmarkAddVV_1       5556.09      5715.12      1.03x
BenchmarkAddVV_2       8926.55      9170.64      1.03x
BenchmarkAddVV_3       11042.15     11201.77     1.01x
BenchmarkAddVV_4       12168.21     12245.50     1.01x
BenchmarkAddVV_5       12041.39     13805.73     1.15x
BenchmarkAddVV_1e1     15659.65     16548.18     1.06x
BenchmarkAddVV_1e2     19268.57     22728.64     1.18x
BenchmarkAddVV_1e3     20141.45     25033.36     1.24x
BenchmarkAddVV_1e4     19827.86     25281.92     1.28x
BenchmarkAddVV_1e5     19092.06     25031.92     1.31x
BenchmarkAddVW_1       822.12       840.92       1.02x
BenchmarkAddVW_2       1310.89      1317.89      1.01x
BenchmarkAddVW_3       1549.31      1568.26      1.01x
BenchmarkAddVW_4       1720.45      1703.77      0.99x
BenchmarkAddVW_5       1857.12      1828.66      0.98x
BenchmarkAddVW_1e1     2126.39      2132.38      1.00x
BenchmarkAddVW_1e2     2784.49      2969.21      1.07x
BenchmarkAddVW_1e3     2876.89      3157.35      1.10x
BenchmarkAddVW_1e4     2882.32      3188.51      1.11x
BenchmarkAddVW_1e5     2575.16      3188.96      1.24x

(measured on OS X 10.9.5, 2.3 GHz Intel Core i7, 8GB 1333 MHz DDR3)

Change-Id: I46698729d5e0bc3e277aa0146a9d7a086c0c26f1
Reviewed-on: https://go-review.googlesource.com/2560
Reviewed-by: Keith Randall <khr@golang.org>
2015-01-08 20:58:59 +00:00
Martin Möhrmann
06ed8f0df7 strconv: speed up atoi for common cases
Add compile-time constants for bases 10 and 16 instead of computing the
cutoff value with a division on every invocation of ParseUint.

Reduce usage of slice operations.
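
Roughly, the change amounts to the following (a sketch with illustrative
names, not the actual strconv code):

	const maxUint64 = 1<<64 - 1

	// General case: one integer division per call.
	base := uint64(7) // any uncommon base
	cutoff := maxUint64/base + 1

	// Common cases: compile-time constants, no division at run time.
	const cutoff10 = maxUint64/10 + 1
	const cutoff16 = maxUint64/16 + 1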

amd64:
benchmark              old ns/op     new ns/op     delta
BenchmarkAtoi          44.6          36.0          -19.28%
BenchmarkAtoiNeg       44.2          38.9          -11.99%
BenchmarkAtoi64        72.5          56.7          -21.79%
BenchmarkAtoi64Neg     66.1          58.6          -11.35%

386:
benchmark              old ns/op     new ns/op     delta
BenchmarkAtoi          86.6          73.0          -15.70%
BenchmarkAtoiNeg       86.6          72.3          -16.51%
BenchmarkAtoi64        126           108           -14.29%
BenchmarkAtoi64Neg     126           108           -14.29%

Change-Id: I0a271132120d776c97bb4ed1099793c73e159893
Reviewed-on: https://go-review.googlesource.com/2460
Reviewed-by: Robert Griesemer <gri@golang.org>
2015-01-08 20:58:26 +00:00
Rick Hudson
db7fd1c142 runtime: increase GC concurrency.
Run GC in its own background goroutine, making the
caller runnable if resources are available. This is
critical in single-goroutine applications.
Allow goroutines that allocate a lot to help out
the GC, and in doing so throttle their own allocation.
Adjust the test so that it only detects that a GC is run
during init calls, not whether the GC is memory
efficient. Memory-efficiency work will happen later
in 1.5.

Change-Id: I4306f5e377bb47c69bda1aedba66164f12b20c2b
Reviewed-on: https://go-review.googlesource.com/2349
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-01-08 20:34:56 +00:00
Robert Griesemer
f21ee1e1d8 README: emphasize that we don't accept pull requests
Change-Id: Ie31f957f6b60b0a9405147c7a0af789df01a4b02
Reviewed-on: https://go-review.googlesource.com/2550
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-08 18:50:47 +00:00
Austin Clements
babeb4a963 runtime: improve GC times printing
This makes the printing of GC times both more human-friendly and
detailed enough for the construction of MMU curves and other
statistics.  The new times look like:

GC: #8 72413852ns @143036695895725 pause=622900 maxpause=427037 goroutines=11 gomaxprocs=4
GC:     sweep term: 190584ns	   max=190584	total=275001	procs=4
GC:     scan:       260397ns	   max=260397	total=902666	procs=1
GC:     install wb: 5279ns	   max=5279	total=18642	procs=4
GC:     mark:       71530555ns	   max=71530555	total=186694660	procs=1
GC:     mark term:  427037ns	   max=427037	total=1691184	procs=4

This prints gomaxprocs and the number of procs used in each phase for
the benefit of analyzing mutator utilization during concurrent phases.
This also means the analysis doesn't have to hard-code which phases
are STW.

This prints the absolute start time only for the GC cycle.  The other
start times can be derived from the phase durations.  This declutters
the view for human readers and doesn't pose any additional complexity
for machine readers.

This removes the confusing "cycle" terminology.  Instead, this places
the phase duration after the phase name and adds a "ns" unit, which
both makes it implicitly clear that this is the duration of that phase
and indicates the units of the times.

This adds a "GC:" prefix to all lines for easier identification.

Finally, this generally cleans up the code as well as the placement of
spaces in the output and adds print locking so the statistics blocks
are never interrupted by other prints.

Change-Id: Ifd056db83ed1b888de7dfa9a8fc5732b01ccc631
Reviewed-on: https://go-review.googlesource.com/2542
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-01-08 17:22:15 +00:00
Robert Griesemer
067acd51b0 math/big: faster "pure Go" addition/subtraction for long vectors
(platforms w/o corresponding assembly kernels)

For short vector adds there's some erratic slow-down, but overall
these routines have become significantly faster. This only matters
for platforms w/o native (assembly) versions of these kernels, so
we are not concerned about the minor slow-down for short vectors.

This code was already reviewed under Mercurial (golang.org/cl/172810043)
but wasn't submitted before the switch to git.

Benchmarks run on 2.3GHz Intel Core i7, running OS X 10.9.5,
with the respective AddVV and AddVW assembly routines disabled.

benchmark              old ns/op     new ns/op     delta
BenchmarkAddVV_1       6.59          7.09          +7.59%
BenchmarkAddVV_2       10.3          10.1          -1.94%
BenchmarkAddVV_3       10.9          12.6          +15.60%
BenchmarkAddVV_4       13.9          15.6          +12.23%
BenchmarkAddVV_5       16.8          17.3          +2.98%
BenchmarkAddVV_1e1     29.5          29.9          +1.36%
BenchmarkAddVV_1e2     246           232           -5.69%
BenchmarkAddVV_1e3     2374          2185          -7.96%
BenchmarkAddVV_1e4     58942         22292         -62.18%
BenchmarkAddVV_1e5     668622        225279        -66.31%
BenchmarkAddVW_1       6.81          5.58          -18.06%
BenchmarkAddVW_2       7.69          6.86          -10.79%
BenchmarkAddVW_3       9.56          8.32          -12.97%
BenchmarkAddVW_4       12.1          9.53          -21.24%
BenchmarkAddVW_5       13.2          10.9          -17.42%
BenchmarkAddVW_1e1     23.4          18.0          -23.08%
BenchmarkAddVW_1e2     175           141           -19.43%
BenchmarkAddVW_1e3     1568          1266          -19.26%
BenchmarkAddVW_1e4     15425         12596         -18.34%
BenchmarkAddVW_1e5     156737        133539        -14.80%
BenchmarkFibo          381678466     132958666     -65.16%

benchmark              old MB/s     new MB/s     speedup
BenchmarkAddVV_1       9715.25      9028.30      0.93x
BenchmarkAddVV_2       12461.72     12622.60     1.01x
BenchmarkAddVV_3       17549.64     15243.82     0.87x
BenchmarkAddVV_4       18392.54     16398.29     0.89x
BenchmarkAddVV_5       18995.23     18496.57     0.97x
BenchmarkAddVV_1e1     21708.98     21438.28     0.99x
BenchmarkAddVV_1e2     25956.53     27506.88     1.06x
BenchmarkAddVV_1e3     26947.93     29286.66     1.09x
BenchmarkAddVV_1e4     10857.96     28709.46     2.64x
BenchmarkAddVV_1e5     9571.91      28409.21     2.97x
BenchmarkAddVW_1       1175.28      1433.98      1.22x
BenchmarkAddVW_2       2080.01      2332.54      1.12x
BenchmarkAddVW_3       2509.28      2883.97      1.15x
BenchmarkAddVW_4       2646.09      3356.83      1.27x
BenchmarkAddVW_5       3020.69      3671.07      1.22x
BenchmarkAddVW_1e1     3425.76      4441.40      1.30x
BenchmarkAddVW_1e2     4553.17      5642.96      1.24x
BenchmarkAddVW_1e3     5100.14      6318.72      1.24x
BenchmarkAddVW_1e4     5186.15      6350.96      1.22x
BenchmarkAddVW_1e5     5104.07      5990.74      1.17x

Change-Id: I7a62023b1105248a0e85e5b9819d3fd4266123d4
Reviewed-on: https://go-review.googlesource.com/2480
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Alan Donovan <adonovan@google.com>
2015-01-08 17:00:59 +00:00
Robert Griesemer
80b3ff9f82 math/big: faster assembly kernels for AddVx/SubVx for amd64.
Replaced use of rotate instructions (RCRQ, RCLQ) with ADDQ/SBBQ
for restoring/saving the carry flag per suggestion from Torbjörn
Granlund (author of GMP bignum libs for C).
The rotate instructions tend to be slower on today's machines.

benchmark              old ns/op     new ns/op     delta
BenchmarkAddVV_1       5.69          5.51          -3.16%
BenchmarkAddVV_2       7.15          6.87          -3.92%
BenchmarkAddVV_3       8.69          8.06          -7.25%
BenchmarkAddVV_4       8.10          8.13          +0.37%
BenchmarkAddVV_5       8.37          8.47          +1.19%
BenchmarkAddVV_1e1     13.1          12.0          -8.40%
BenchmarkAddVV_1e2     78.1          69.4          -11.14%
BenchmarkAddVV_1e3     815           656           -19.51%
BenchmarkAddVV_1e4     8137          7345          -9.73%
BenchmarkAddVV_1e5     100127        93909         -6.21%
BenchmarkAddVW_1       4.86          4.71          -3.09%
BenchmarkAddVW_2       5.67          5.50          -3.00%
BenchmarkAddVW_3       6.51          6.34          -2.61%
BenchmarkAddVW_4       6.69          6.66          -0.45%
BenchmarkAddVW_5       7.20          7.21          +0.14%
BenchmarkAddVW_1e1     10.0          9.34          -6.60%
BenchmarkAddVW_1e2     45.4          52.3          +15.20%
BenchmarkAddVW_1e3     417           491           +17.75%
BenchmarkAddVW_1e4     4760          4852          +1.93%
BenchmarkAddVW_1e5     69107         67717         -2.01%

benchmark              old MB/s      new MB/s      speedup
BenchmarkAddVV_1       11241.82      11610.28      1.03x
BenchmarkAddVV_2       17902.68      18631.82      1.04x
BenchmarkAddVV_3       22082.43      23835.64      1.08x
BenchmarkAddVV_4       31588.18      31492.06      1.00x
BenchmarkAddVV_5       38229.90      37783.17      0.99x
BenchmarkAddVV_1e1     48891.67      53340.91      1.09x
BenchmarkAddVV_1e2     81940.61      92191.86      1.13x
BenchmarkAddVV_1e3     78443.09      97480.44      1.24x
BenchmarkAddVV_1e4     78644.18      87129.50      1.11x
BenchmarkAddVV_1e5     63918.48      68150.84      1.07x
BenchmarkAddVW_1       13165.09      13581.00      1.03x
BenchmarkAddVW_2       22588.04      23275.41      1.03x
BenchmarkAddVW_3       29483.82      30303.96      1.03x
BenchmarkAddVW_4       38286.54      38453.21      1.00x
BenchmarkAddVW_5       44414.57      44370.59      1.00x
BenchmarkAddVW_1e1     63816.84      68494.08      1.07x
BenchmarkAddVW_1e2     140885.41     122427.16     0.87x
BenchmarkAddVW_1e3     153258.31     130325.28     0.85x
BenchmarkAddVW_1e4     134447.63     131904.02     0.98x
BenchmarkAddVW_1e5     92609.41      94509.88      1.02x

Change-Id: Ia473e9ab9c63a955c252426684176bca566645ae
Reviewed-on: https://go-review.googlesource.com/2503
Reviewed-by: Keith Randall <khr@golang.org>
2015-01-08 16:57:11 +00:00
Martin Möhrmann
878fa886a6 strconv: add atoi tests for uncommon bases and syntax errors
Edge cases like base 2 and 36 conversions are now covered.
Many tests are mirrored from the itoa tests.

Added more test cases for syntax errors.

Change-Id: Iad8b2fb4854f898c2bfa18cdeb0cb4a758fcfc2e
Reviewed-on: https://go-review.googlesource.com/2463
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
2015-01-08 16:51:47 +00:00
Alex Brainman
d00024bd60 syscall: use go generate to build zsyscall_windows.go
I would like to create new syscalls in src/internal/syscall,
and I prefer not to add new shell scripts for that.

Replacement for CL 136000043.

Change-Id: I840116b5914a2324f516cdb8603c78973d28aeb4
Reviewed-on: https://go-review.googlesource.com/1940
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-08 06:07:56 +00:00
Keith Randall
fcfbeb3adf test: shorten test runtime
This test was taking a long time; reduce its zealousness.

Change-Id: Ib824247b84b0039a9ec690f72336bef3738d4c44
Reviewed-on: https://go-review.googlesource.com/2502
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
2015-01-08 04:49:43 +00:00
Brad Fitzpatrick
abb2aa2085 build: add GOTESTONLY environment test for Plan 9's run.rc
$GOTESTONLY controls which set of tests gets run. Only "std" is
supported. This should bring the time of the plan9 builder down
from 90 minutes to maybe 10-15 minutes when running on GCE.

(Plan 9 has performance problems when running on GCE, and/or with the
os/exec package)

This is a temporary workaround for one builder. The other Plan 9
builders will continue to do full builds. The plan9 builder will be
renamed plan9-386-gcepartial or something to indicate it's not running
the 'test/*' directory, or API tests. Go on Plan 9 has bigger problems
for now. This lets us get trybots going sooner including Plan 9,
without waiting 90+ minutes.

Update #9491

Change-Id: Ic505e9169c6b304ed4029b7bdfb77bb5c8fa8daa
Reviewed-on: https://go-review.googlesource.com/2522
Reviewed-by: Rob Pike <r@golang.org>
2015-01-08 04:35:23 +00:00
Brad Fitzpatrick
e16ab38dc9 build: increase Plan 9 timeout for runtime multi-CPU test, add temporary -v
This isn't the final answer, but it will give us a clue about what's
going on.

Update #9491

Change-Id: I997f6004eb97e86a4a89a8caabaf58cfdf92a8f0
Reviewed-on: https://go-review.googlesource.com/2510
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-08 01:11:43 +00:00
Matthew Dempsky
ee94cd1dff cmd/cgo, go/build: finish a cleanup TODO
Removing #cgo directive parsing from cmd/cgo was done in
https://golang.org/cl/8610044.

Change-Id: Id1bec58c6ec1f932df0ce0ee84ff253655bb73ff
Reviewed-on: https://go-review.googlesource.com/2501
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-08 00:59:37 +00:00
Shenghou Ma
583293349b misc/swig/stdio: fix broken nil pointer test
SWIG has always returned a typed interface value for a C++ class,
so the interface value will never be nil even if the pointer itself
is NULL. ptr == NULL in C/C++ should be ptr.Swigcptr() == 0 in Go.
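
For example, with a hypothetical SWIG-wrapped class Foo:

	f := GetFoo()              // may wrap a NULL C++ pointer
	wrong := f == nil          // always false: the interface holds a typed value
	right := f.Swigcptr() == 0 // true exactly when the underlying pointer is NULL
	_, _ = wrong, right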

Fixes #9514.

Change-Id: I3778b91acf54d2ff22d7427fbf2b6ec9b9ce3b43
Reviewed-on: https://go-review.googlesource.com/2440
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-07 23:24:23 +00:00
David du Colombier
8c69ce0b90 build: increase timeout in run.rc
Increasing the timeout prevents the runtime test
from timing out on the Plan 9 instances running on GCE.

Update golang/go#9491

Change-Id: Id9c2b0c4e59b103608565168655799b353afcd77
Reviewed-on: https://go-review.googlesource.com/2462
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-07 22:58:38 +00:00
Matthew Dempsky
b2aab72d9a cmd/cgo: remove obsolete -cdefs flag
Now that there's no 6c compiler anymore, there's no need for cgo to
generate C headers that are compatible with it.

Fixes #9528

Change-Id: I43f53869719eb9a6065f1b39f66f060e604cbee0
Reviewed-on: https://go-review.googlesource.com/2482
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-07 22:49:59 +00:00
Josh Bleecher Snyder
f7e43f14d3 runtime: remove stray commas in assembly
Change-Id: I4dc97ff8111bdc5ca6e4e3af06aaf4f768031c68
Reviewed-on: https://go-review.googlesource.com/2473
Reviewed-by: Minux Ma <minux@golang.org>
2015-01-07 22:41:54 +00:00
Josh Bleecher Snyder
43e6923131 cmd/gc: optimize existence-only map lookups
The compiler converts 'val, ok = m[key]' to

        tmp, ok = <runtime call>
        val = *tmp

For lookups of the form '_, ok = m[key]',
the second statement is unnecessary.
By not generating it we save a nil check.
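
For example:

	m := map[string]int{"a": 1}
	_, ok := m["a"] // existence-only lookup: no value load, so no nil check needed
	if ok {
		// key is present
	}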

Change-Id: I21346cc195cb3c62e041af8b18770c0940358695
Reviewed-on: https://go-review.googlesource.com/1975
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 22:36:06 +00:00
Josh Bleecher Snyder
43c87aa481 cmd/6g, cmd/8g, liblink: improve handling of float constants
* Enable basic constant propagation for floats.
  The constant propagation is still not as aggressive as it could be.
* Implement MOVSS $(0), Xx and MOVSD $(0), Xx as XORPS Xx, Xx.

Sample code:

func f32() float32 {
	var f float32
	return f
}

func f64() float64 {
	var f float64
	return f
}

Before:

"".f32 t=1 size=32 value=0 args=0x8 locals=0x0
	0x0000 00000 (demo.go:3)	TEXT	"".f32+0(SB),4,$0-8
	0x0000 00000 (demo.go:3)	FUNCDATA	$0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
	0x0000 00000 (demo.go:3)	FUNCDATA	$1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
	0x0000 00000 (demo.go:3)	MOVSS	$f32.00000000+0(SB),X0
	0x0008 00008 (demo.go:4)	MOVSS	$f32.00000000+0(SB),X0
	0x0010 00016 (demo.go:5)	MOVSS	X0,"".~r0+8(FP)
	0x0016 00022 (demo.go:5)	RET	,
"".f64 t=1 size=32 value=0 args=0x8 locals=0x0
	0x0000 00000 (demo.go:8)	TEXT	"".f64+0(SB),4,$0-8
	0x0000 00000 (demo.go:8)	FUNCDATA	$0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
	0x0000 00000 (demo.go:8)	FUNCDATA	$1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
	0x0000 00000 (demo.go:8)	MOVSD	$f64.0000000000000000+0(SB),X0
	0x0008 00008 (demo.go:9)	MOVSD	$f64.0000000000000000+0(SB),X0
	0x0010 00016 (demo.go:10)	MOVSD	X0,"".~r0+8(FP)
	0x0016 00022 (demo.go:10)	RET	,

After:

"".f32 t=1 size=16 value=0 args=0x8 locals=0x0
	0x0000 00000 (demo.go:3)	TEXT	"".f32+0(SB),4,$0-8
	0x0000 00000 (demo.go:3)	FUNCDATA	$0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
	0x0000 00000 (demo.go:3)	FUNCDATA	$1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
	0x0000 00000 (demo.go:3)	XORPS	X0,X0
	0x0003 00003 (demo.go:5)	MOVSS	X0,"".~r0+8(FP)
	0x0009 00009 (demo.go:5)	RET	,
"".f64 t=1 size=16 value=0 args=0x8 locals=0x0
	0x0000 00000 (demo.go:8)	TEXT	"".f64+0(SB),4,$0-8
	0x0000 00000 (demo.go:8)	FUNCDATA	$0,gclocals·a7a3692b8e27e823add69ec4239ba55f+0(SB)
	0x0000 00000 (demo.go:8)	FUNCDATA	$1,gclocals·3280bececceccd33cb74587feedb1f9f+0(SB)
	0x0000 00000 (demo.go:8)	XORPS	X0,X0
	0x0003 00003 (demo.go:10)	MOVSD	X0,"".~r0+8(FP)
	0x0009 00009 (demo.go:10)	RET	,

Change-Id: Ie9eb65e324af4f664153d0a7cd22bb16b0fba16d
Reviewed-on: https://go-review.googlesource.com/2053
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 22:26:55 +00:00
Keith Randall
d5e4c4061b runtime: remove size argument from hash and equal algorithms
The equal algorithm used to take the size
   equal(p, q *T, size uintptr) bool
With this change, it does not
   equal(p, q *T) bool
Similarly for the hash algorithm.

The size is rarely used, as most equal functions know the size
of the thing they are comparing.  For instance f32equal already
knows its inputs are 4 bytes in size.

For cases where the size is not known, we allocate a closure
(one for each size needed) that points to an assembly stub that
reads the size out of the closure and calls generic code that
has a size argument.

Reduces the size of the go binary by 0.07%.  Performance impact
is not measurable.

Change-Id: I6e00adf3dde7ad2974adbcff0ee91e86d2194fec
Reviewed-on: https://go-review.googlesource.com/2392
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 21:57:01 +00:00
Josh Bleecher Snyder
60801c4853 test: delete testlib
It is unused as of e7173dfd.

Change-Id: I3e4ea3fc66cf0a768ff28172a151b244952eefc9
Reviewed-on: https://go-review.googlesource.com/2093
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-07 21:48:48 +00:00
Keith Randall
63116de558 runtime: faster version of findfunc
Use a lookup table to find the function which contains a pc.  It is
faster than the old binary search.  findfunc is used primarily for
stack copying and garbage collection.

benchmark              old ns/op     new ns/op     delta
BenchmarkStackCopy     294746596     255400980     -13.35%

(findfunc is one of several tasks done by stack copy, the findfunc
time itself is about 2.5x faster.)

The lookup table is built at link time.  The table grows the binary
size by about 0.5% of the text segment.

We impose a lower limit of 16 bytes on any function, which should not
have much of an impact.  (The real constraint required is <=256
functions in every 4096 bytes, but 16 bytes/function is easier to
implement.)
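
Conceptually (a simplified sketch with hypothetical names, not the
runtime's actual data structure), an index keyed by pc bucket narrows the
search to a handful of functions:

	// funcs is sorted by entry pc; bucketIdx[i] is the index of the first
	// function that may contain pcs in bucket i.
	const bucketSize = 4096
	i := bucketIdx[(pc-textStart)/bucketSize]
	for i+1 < len(funcs) && funcs[i+1].entry <= pc {
		i++
	}
	f := funcs[i] // the function containing pc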

Change-Id: Ic315b7a2c83e1f7203cd2a50e5d21a822e18fdca
Reviewed-on: https://go-review.googlesource.com/2097
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 21:24:21 +00:00
Austin Clements
af7ca8dce4 cmd/cgo, runtime/cgo: support ppc64
This implements support for calls to and from C in the ppc64 C ABI, as
well as supporting functionality such as an entry point from the
dynamic linker.

Change-Id: I68da6df50d5638cb1a3d3fef773fb412d7bf631a
Reviewed-on: https://go-review.googlesource.com/2009
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:36:27 +00:00
Austin Clements
f1c4444dfc runtime: set up C TLS and save g to it on ppc64
Cgo will need this for calls from C to Go and for handling signals
that may occur in C code.

Change-Id: I50cc4caf17cd142bff501e7180a1e27721463ada
Reviewed-on: https://go-review.googlesource.com/2008
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:36:19 +00:00
Austin Clements
bbd2127909 cmd/9g: don't use R13
R13 is the C TLS pointer.  Once we're calling to and from C code, if
we clobber R13 in our code, sigtramp won't know whether to get the
current g from REGG or from C TLS.  The simplest solution is for Go
code to preserve the C TLS pointer.  This is equivalent to what other
platforms do, except that on other platforms the TLS pointer is in a
special register.

Change-Id: I076e9cb83fd78843eb68cb07c748c4705c9a4c82
Reviewed-on: https://go-review.googlesource.com/2007
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:36:08 +00:00
Austin Clements
db923390a0 cmd/9l: support internal linking
This implements the ELF relocations and dynamic linking tables
necessary to support internal linking on ppc64.  It also marks ppc64le
ELF files as ABI v2; failing to do this doesn't seem to confuse the
loader, but it does confuse libbfd (and hence gdb, objdump, etc).

Change-Id: I559dddf89b39052e1b6288a4dd5e72693b5355e4
Reviewed-on: https://go-review.googlesource.com/2006
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:35:54 +00:00
Austin Clements
ac5a1ac318 cmd/ld: support for relocation variants
Most ppc64 relocations come in six or more variants where the basic
relocation formula is the same, but which bits of the computed value
are installed, and where, varies.  Introduce the concept of "variants" for
internal relocations to support this.  Since this applies to
architecture-independent relocation types like R_PCREL, we do this in
relocsym.

Currently there is only an identity variant.  A later CL that adds
support for ppc64 ELF relocations will introduce more.

Change-Id: I0c5f0e7dbe5beece79cd24fe36267d37c52f1a0c
Reviewed-on: https://go-review.googlesource.com/2005
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:35:37 +00:00
Austin Clements
fcdffb3f33 cmd/ld: support 2 byte relocations
ppc64 has a bunch of these.

Change-Id: I3b93ed2bae378322a8dec036b1681e520b56ff53
Reviewed-on: https://go-review.googlesource.com/2003
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
2015-01-07 20:35:13 +00:00
Austin Clements
e32fe2049d cmd/ld: decode local entry offset from ppc64 symbols
ppc64 function symbols have both a global entry point and a local
entry point, where the difference is stashed in sym.other.  We'll need
this information to generate calls to ELF ABI functions.

Change-Id: Ibe343923f56801de7ebec29946c79690a9ffde57
Reviewed-on: https://go-review.googlesource.com/2002
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
2015-01-07 20:34:55 +00:00
Keith Randall
ec767c10b3 runtime: add comment about channels already handling zero-sized objects correctly.
update #9401

Change-Id: I634a772814e7cd066f631a68342e7c3dc9d27e72
Reviewed-on: https://go-review.googlesource.com/2370
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:25:06 +00:00
Keith Randall
5aae246f1e runtime: increase number of stack orders to 4
Cache 2KB, 4KB, 8KB, and 16KB stacks.  Larger stacks
will be allocated directly.  There is no point in caching
32KB+ stacks as we ask for and return 32KB at a time
from the allocator.

Note that the minimum stack is 8K on windows/64bit and 4K on
windows/32bit and plan9.  For these os/arch combinations,
the number of stack orders is less so that we have the same
maximum cached size.
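
In effect, a requested stack size maps to a cache order like this (an
illustrative sketch with hypothetical names; size is the requested stack
size, and fixedStack is 2KB here, the smallest stack on most platforms):

	const fixedStack = 2048
	const numStackOrders = 4
	order := 0
	for n := size; n > fixedStack; n >>= 1 {
		order++
	}
	cached := order < numStackOrders // 2KB, 4KB, 8KB, and 16KB stacks are cached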

Fixes #9045

Change-Id: Ia4195dd1858fb79fc0e6a91ae29c374d28839e44
Reviewed-on: https://go-review.googlesource.com/2098
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 20:13:06 +00:00
Oling Cat
ce36552083 doc/contribute: add necessary <code> tags, remove an extra close parenthesis.
Change-Id: I7238ae84d637534a345e5d077b8c63466148bd75
Reviewed-on: https://go-review.googlesource.com/1521
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 17:26:50 +00:00
Keith Randall
1dd0163ce3 runtime: remove trailing empty arrays in structs
The ones at the end of M and G are just used to compute
their size for use in assembly.  Generate the size explicitly.
The one at the end of itab is variable-sized, with at least one element.
The ones at the end of interfacetype and uncommontype are not
needed, as the preceding slice references them (the slice was
originally added for use by reflect?).
The one at the end of stackmap is already accessed correctly,
and the runtime never allocates one.

Update #9401

Change-Id: Ia75e3aaee38425f038c506868a17105bd64c712f
Reviewed-on: https://go-review.googlesource.com/2420
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 16:05:16 +00:00
Keith Randall
ce5cb037d1 runtime: use some startup randomness in the fallback hashes
Fold in some startup randomness to make the hash vary across
different runs.  This helps prevent attackers from choosing
keys that all map to the same bucket.

Also, reorganize the hash a bit.  Move the *m1 multiply to after
the xor of the current hash and the message.  For hash quality
it doesn't really matter, but for DDOS resistance it helps a lot
(any processing done to the message before it is merged with the
random seed is useless, as it is easily inverted by an attacker).
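
Schematically (an illustrative sketch, not the actual memhash code), with
seed the per-run random value, p the message bytes, and m1 an odd multiplier:

	h := seed ^ uintptr(len(p)) // fold in the per-run randomness first
	for _, b := range p {
		h ^= uintptr(b) // merge each message byte into already-random state
		h *= m1         // multiply only after the xor, so the mix can't be peeled off
	}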

Update #9365

Change-Id: Ib19968168e1bbc541d1d28be2701bb83e53f1e24
Reviewed-on: https://go-review.googlesource.com/2344
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-07 16:02:05 +00:00
Matthew Dempsky
31775c5a95 cmd/cgo: update code and docs to reflect post-6c world
The gc toolchain no longer includes a C compiler, so mentions of "6c"
can be removed or replaced by 6g as appropriate.  Similarly, some cgo
functions that previously generated C source output no longer need to.

Change-Id: I1ae6b02630cff9eaadeae6f3176c0c7824e8fbe5
Reviewed-on: https://go-review.googlesource.com/2391
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-07 15:14:07 +00:00
Brad Fitzpatrick
3ef39472a8 doc: add bufio.Reader.Discard to go1.5.txt
Change-Id: I315b338968cb1d9298664d181de44a691b325bb8
Reviewed-on: https://go-review.googlesource.com/2450
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-07 06:41:35 +00:00
Brad Fitzpatrick
ee2ecc4552 bufio: add Reader.Discard
Reader.Discard is the complement to Peek. It discards the next n bytes
of input.

We already have Reader.Buffered to see how many bytes of data are
sitting available in memory, and Reader.Peek to get at that buffer
directly. But once you're done with the Peek'd data, you can't get rid
of it, other than Reading it.
Both Read and io.CopyN(ioutil.Discard, bufReader, N) are relatively
slow. People instead resort to multiple blind ReadByte calls, just to
advance the internal b.r variable.
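
With Discard, the Peek-then-skip pattern becomes direct; for example
(isUninteresting is a hypothetical predicate):

	br := bufio.NewReader(r) // r is any io.Reader
	hdr, err := br.Peek(8)   // look at the next 8 bytes without consuming them
	if err == nil && isUninteresting(hdr) {
		if _, err := br.Discard(8); err != nil {
			// handle the error
		}
	}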

I've wanted this previously, several people have asked for it in the
past on golang-nuts/dev, and somebody just asked me for it again in a
private email. There are a few places in the standard library we'd use
it too.

Change-Id: I85dfad47704a58bd42f6867adbc9e4e1792bc3b0
Reviewed-on: https://go-review.googlesource.com/2260
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-07 06:37:57 +00:00
Shenghou Ma
5f179c7cef runtime: fix build for race detector
This CL only fixes the build, there are two failing tests:
RaceMapBigValAccess1 and RaceMapBigValAccess2
in runtime/race tests. I haven't investigated why yet.

Updates #9516.

Change-Id: If5bd2f0bee1ee45b1977990ab71e2917aada505f
Reviewed-on: https://go-review.googlesource.com/2401
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-01-07 03:20:42 +00:00
Martin Möhrmann
e5864cd939 sort: optimize symMerge performance for blocks with one element
Use direct binary insertion instead of recursive calls to symMerge
when one of the blocks has only one element.
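
The special case, roughly (an illustrative sketch over sort.Interface, not
the exact library code): when the left block is the single element data[a],
binary-search its insertion point in data[a+1:b] and slide it there instead
of recursing.

	// Find the first index in [a+1, b) whose element is not less than data[a].
	lo, hi := a+1, b
	for lo < hi {
		m := int(uint(lo+hi) >> 1)
		if data.Less(m, a) {
			lo = m + 1
		} else {
			hi = m
		}
	}
	// Slide data[a] rightward into position lo-1, keeping the rest in order.
	for i := a; i < lo-1; i++ {
		data.Swap(i, i+1)
	}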

benchmark                   old ns/op      new ns/op      delta
BenchmarkStableString1K     421999         397629         -5.77%
BenchmarkStableInt1K        123422         120592         -2.29%
BenchmarkStableInt64K       9629094        9620200        -0.09%
BenchmarkStable1e2          123089         120209         -2.34%
BenchmarkStable1e4          39505228       36870029       -6.67%
BenchmarkStable1e6          8196612367     7630840157     -6.90%

Change-Id: I49905a909e8595cfa05920ccf9aa00a8f3036110
Reviewed-on: https://go-review.googlesource.com/2219
Reviewed-by: Robert Griesemer <gri@golang.org>
2015-01-06 23:30:46 +00:00
Russ Cox
bc2601a1df runtime: allocate wbshadow at high address
sysReserve doesn't actually reserve the full amount requested on
64-bit systems, because of problems with ulimit. Instead it checks
that it can get the first 64 kB and assumes it can grab the rest as
needed. This doesn't work well with the "let the kernel pick an address"
mode, so don't do that. Pick a high address instead.

Change-Id: I4de143a0e6fdeb467fa6ecf63dcd0c1c1618a31c
Reviewed-on: https://go-review.googlesource.com/2345
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-01-06 22:28:52 +00:00
Russ Cox
9b638bf1bf runtime: adjust dropm for write barriers
The line 'mp.schedlink = mnext' has an implicit write barrier call,
which needs a valid g. Move it above the setg(nil).

Change-Id: If3e86c948e856e10032ad89f038bf569659300e0
Reviewed-on: https://go-review.googlesource.com/2347
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-01-06 22:23:14 +00:00
Russ Cox
949dd10222 misc/cgo: disable TestAllocateFromC in wbshadow mode
This test is doing pointer graph manipulation from C, and we
cannot support that with concurrent GC. The wbshadow mode
correctly diagnoses missing write barriers.

Disable the test in that mode for now. There is a bigger issue
behind it, namely SWIG, but for now we are focused on making
all.bash pass with wbshadow enabled.

Change-Id: I55891596d4c763e39b74082191d4a5fac7161642
Reviewed-on: https://go-review.googlesource.com/2346
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2015-01-06 22:22:59 +00:00