The only essential difference is elf32 vs elf64; I assume the other differences
are bugs in one version or another...
Change-Id: Ie6ff33d5574a6592b543df9983eff8fdf88c97a1
Reviewed-on: https://go-review.googlesource.com/10001
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Russ Cox <rsc@golang.org>
They were all essentially the same.
Change-Id: I6e0b548cda6e4bbe2ec3b3025b746d1f6d332d48
Reviewed-on: https://go-review.googlesource.com/10000
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This is an automated follow-up to CL 10120.
It was generated with a combination of eg and gofmt -r.
No functional changes. Passes toolstash -cmp.
Change-Id: I0dc6d146372012b4cce9cc4064066daa6694eee6
Reviewed-on: https://go-review.googlesource.com/10144
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The previous implementation spawned an extra goroutine to handle
rechecking resolv.conf for changes.
This change eliminates the extra goroutine, and has rechecking
done as part of a lookup. A side effect of this change is that the
first lookup after a resolv.conf change will now succeed, whereas
previously it would have failed. It also fixes the rechecking logic to
ignore resolv.conf parsing errors, as it should.
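A minimal sketch of the recheck-as-part-of-a-lookup idea, assuming a simple
time-based recheck; the names (resolvConf, maybeReload, parseResolvConf) are
illustrative, not the actual net package internals:
	package dnsconf

	import (
		"sync"
		"time"
	)

	type resolvConf struct {
		mu          sync.Mutex
		lastChecked time.Time
		servers     []string
	}

	// maybeReload is called at the start of a lookup instead of from a
	// dedicated goroutine.
	func (c *resolvConf) maybeReload(path string, interval time.Duration) {
		c.mu.Lock()
		defer c.mu.Unlock()
		if time.Since(c.lastChecked) < interval {
			return
		}
		c.lastChecked = time.Now()
		if servers, err := parseResolvConf(path); err == nil {
			c.servers = servers // on parse errors, keep the old configuration
		}
	}

	func parseResolvConf(path string) ([]string, error) {
		// placeholder for real resolv.conf parsing
		return []string{"8.8.8.8:53"}, nil
	}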
Fixes #8652
Fixes #10576
Fixes #10649
Fixes #10650
Fixes #10845
Change-Id: I502b587c445fa8eca5207ca4f2c8ec8c339fec7f
Reviewed-on: https://go-review.googlesource.com/9991
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This extends cmd/yacc with support for
%error { tokens } : message
syntax to specify custom error messages to use instead of the default
generic ones. This allows merging go.errors into go.y and removing
the yaccerrors.go tool.
Updates #9968.
Change-Id: I781219c568b86472755f877f48401eaeab00ead5
Reviewed-on: https://go-review.googlesource.com/8563
Reviewed-by: Russ Cox <rsc@golang.org>
This reverts commit 5726af54eb.
It broke all the builds.
Change-Id: I4b1dde86f9433717d303c1dabd6aa1a2bf97fab2
Reviewed-on: https://go-review.googlesource.com/10143
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
All slice types whose elements have kind reflect.Uint8 are marshalled
into base64 for compactness. When decoding such data into a custom type
based on []byte, the decoder checked the slice kind instead of the slice
element kind, so no appropriate decoder was found.
Fixed by letting the decoder check the slice element kind, like the encoder does.
This guarantees that already encoded data can still be successfully
decoded.
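A small illustration of the fixed round trip, assuming the package in question
is encoding/json (consistent with the issue below); CustomBytes is an invented
name:
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// CustomBytes is a named type whose underlying type is []byte,
	// so its elements have kind reflect.Uint8.
	type CustomBytes []byte

	func main() {
		enc, err := json.Marshal(CustomBytes("hello")) // encoded as base64: "aGVsbG8="
		if err != nil {
			panic(err)
		}
		fmt.Println(string(enc))

		// Before the fix this failed, because the decoder looked at the slice
		// kind rather than the element kind; now it round-trips.
		var dec CustomBytes
		if err := json.Unmarshal(enc, &dec); err != nil {
			panic(err)
		}
		fmt.Println(string(dec)) // hello
	}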
Fixes #8962.
Change-Id: Ia320d4dc2c6e9e5fe6d8dc15788c81da23d20c4f
Reviewed-on: https://go-review.googlesource.com/9371
Reviewed-by: Peter Waldschmidt <peter@waldschmidt.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Name will be converted from an anonymous to a
named field in a subsequent, automated CL.
No functional changes. Passes toolstash -cmp.
This reduces the size of gc.Node from 424 to 400 bytes.
This in turn reduces the permanent (pprof -inuse_space)
memory usage while compiling the test/rotate?.go tests:
test old(MB) new(MB) change
rotate0 379.49 367.30 -3.21%
rotate1 373.42 361.59 -3.16%
rotate2 381.17 368.77 -3.25%
rotate3 374.30 362.48 -3.15%
Updates #9933.
Change-Id: I21479527c136add4f1efb9342774e3be3e276e83
Reviewed-on: https://go-review.googlesource.com/10120
Reviewed-by: Russ Cox <rsc@golang.org>
This CL was generated by updating Val in go.go
and then running:
sed -i "" 's/\.U\.[SBXFC]val = /.U = /' *.go
sed -i "" 's/\.U\.Sval/.U.\(string\)/g' *.go *.y
sed -i "" 's/\.U\.Bval/.U.\(bool\)/g' *.go *.y
sed -i "" 's/\.U\.Xval/.U.\(\*Mpint\)/g' *.go *.y
sed -i "" 's/\.U\.Fval/.U.\(\*Mpflt\)/g' *.go *.y
sed -i "" 's/\.U\.Cval/.U.\(\*Mpcplx\)/g' *.go *.y
No functional changes. Passes toolstash -cmp.
This reduces the size of gc.Node from 424 to 392 bytes.
This in turn reduces the permanent (pprof -inuse_space)
memory usage while compiling the test/rotate?.go tests:
test old(MB) new(MB) change
rotate0 379.49 364.78 -3.87%
rotate1 373.42 359.07 -3.84%
rotate2 381.17 366.24 -3.91%
rotate3 374.30 359.95 -3.83%
CL 8445 was similar to this; gri asked that Val's implementation
be hidden first. CLs 8912, 9263, and 9267 have at least
isolated the changes to the cmd/internal/gc package.
Updates #9933.
Change-Id: I83ddfe003d48e0a73c92e819edd3b5e620023084
Reviewed-on: https://go-review.googlesource.com/10059
Reviewed-by: Russ Cox <rsc@golang.org>
This trivial change is a prerequisite to
converting Val.U to an interface{}.
No functional changes. Passes toolstash -cmp.
Change-Id: I17ff036f68d29a9ed0097a8b23ae1c91e6ce8c21
Reviewed-on: https://go-review.googlesource.com/10058
Reviewed-by: Russ Cox <rsc@golang.org>
Remove all uses of Node.Val outside of the gc package.
A subsequent, automated commit in the Go 1.6 cycle
will unexport Node.Val.
No functional changes. Passes toolstash -cmp.
Change-Id: Ia92ae6a7766c83ab3e45c69edab24a9581c824f9
Reviewed-on: https://go-review.googlesource.com/9267
Reviewed-by: Russ Cox <rsc@golang.org>
Remove all uses of Mp* outside of the gc package.
A subsequent, automated commit in the Go 1.6
cycle will unexport all Mp* functions and types.
No functional changes. Passes toolstash -cmp.
Change-Id: Ie1604cb5b84ffb30b47f4777d4235570f2c62709
Reviewed-on: https://go-review.googlesource.com/9263
Reviewed-by: Russ Cox <rsc@golang.org>
Preallocating them in reflect means that
(1) if you say _ = PtrTo(ArrayOf(1000000000, reflect.TypeOf(byte(0)))), you just allocated 1GB of data
(2) if you say it again, that's *another* GB of data.
The only use of t.zero in the runtime is for map elements.
Delay the allocation until the creation of a map with that element type,
and share the zeros.
The one downside of the shared zero is that it's not garbage collected,
but it's also never written, so the OS should be able to handle it fairly
efficiently.
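For reference, a tiny program exercising the example above; the comments
restate the behavior described in this message, and the printed size is just
the element count times one byte:
	package main

	import (
		"fmt"
		"reflect"
	)

	func main() {
		// Building this type used to preallocate its zero value (1GB here);
		// with this change only the type descriptor is created, and a zero
		// block is allocated lazily (and shared) when a map with this
		// element type is made.
		big := reflect.ArrayOf(1000000000, reflect.TypeOf(byte(0)))
		_ = reflect.PtrTo(big)
		fmt.Println(big.Size()) // 1000000000
	}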
Change-Id: I56b098a091abf3ac0945de28ebef9a6c08e76614
Reviewed-on: https://go-review.googlesource.com/10111
Reviewed-by: Keith Randall <khr@golang.org>
According to MSDN RegQueryValueEx page:
If the data has the REG_SZ, REG_MULTI_SZ or REG_EXPAND_SZ type, the
string may not have been stored with the proper terminating null
characters. Therefore, even if the function returns ERROR_SUCCESS, the
application should ensure that the string is properly terminated before
using it; otherwise, it may overwrite a buffer. (Note that REG_MULTI_SZ
strings should have two terminating null characters.)
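A minimal sketch of the defensive handling the quote calls for; regstr and
toString are invented names, and registry string data is assumed to arrive as
a UTF-16 buffer:
	package regstr

	import "unicode/utf16"

	// toString cuts the buffer at the first NUL, since a REG_SZ value may or
	// may not include its terminator, then decodes the UTF-16 data.
	func toString(buf []uint16) string {
		for i, c := range buf {
			if c == 0 {
				buf = buf[:i]
				break
			}
		}
		return string(utf16.Decode(buf))
	}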
Test written by Alex Brainman <alex.brainman@gmail.com>
Change-Id: I8c0852e0527e27ceed949134ed5e6de944189986
Reviewed-on: https://go-review.googlesource.com/9806
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
See CL 9962 for the rationale.
Change-Id: I73c714fce258430eea1e61d3835f5c8e9014ca1f
Signed-off-by: Shenghou Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/9925
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Auto-generated using the following bash script:
for i in z*_*_*.go; do
goosgoarch=`basename ${i/${i/_*/}_/} .go`
goos=${goosgoarch/_*/}
goarch=${goosgoarch/*_/}
echo $i $goos $goarch
[ "$goos" = "windows" ] && continue
sed -i -e "/^package /i\/\/ +build $goarch,$goos\n" "$i"
done
Change-Id: I756fee551d1698080e4591fed8f058ae0450aaa5
Signed-off-by: Shenghou Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/10113
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Missed a case; just need to call validateType.
Fixes #10800.
Change-Id: I81997ca7a9feb1be31c8b47e631b32712d7ffb86
Reviewed-on: https://go-review.googlesource.com/10031
Reviewed-by: Andrew Gerrand <adg@golang.org>
This reduces the depth of the inlining at a particular call site.
The inliner introduces many temporary variables, and the compiler can do
a better job with fewer. Being verbose in the bodies of these helper functions
seems like a reasonable tradeoff: the uses are still just as readable, and
they run faster in some important cases.
Change-Id: I5323976ed3704d0acd18fb31176cfbf5ba23a89c
Reviewed-on: https://go-review.googlesource.com/9883
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
BenchmarkSkipValue was sensitive to the value of
b.N due to its significant startup cost.
Two adjacent runs before this CL:
BenchmarkSkipValue 50 21047499 ns/op 93.37 MB/s
BenchmarkSkipValue 100 17260554 ns/op 118.05 MB/s
After this CL, using benchtime to recreate the
difference in b.N:
BenchmarkSkipValue 50 15204797 ns/op 131.67 MB/s
BenchmarkSkipValue 100 15332319 ns/op 130.58 MB/s
Change-Id: Iac86f86dd774d535302fa5e4c08f89f8da00be9e
Reviewed-on: https://go-review.googlesource.com/10053
Reviewed-by: Andrew Gerrand <adg@golang.org>
The Transport's writer to the remote server is wrapped in a
bufio.Writer to suppress many small writes while writing headers and
trailers. However, when writing the request body, the buffering may get
in the way if the request body is arriving slowly.
Because the io.Copy from the Request.Body to the writer is already
buffered, the outer bufio.Writer is unnecessary and prevents small
Request.Body.Reads from going to the server right away. (and the
io.Reader contract does say to return when you've got something,
instead of blocking waiting for more). After the body is finished, the
Transport's bufio.Writer is still used for any trailers following.
A previous attempted fix for this made the chunk writer always flush
if the underlying type was a bufio.Writer, but that is not quite
correct. This CL instead makes it opt-in by using a private sentinel
type (wrapping a *bufio.Writer) to the chunk writer that requests
Flushes after each chunk body (the chunk header & chunk body are still
buffered together into one write).
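A rough sketch of the opt-in sentinel idea; the type and function names are
illustrative, not the actual net/http internals:
	package chunkdemo

	import (
		"bufio"
		"fmt"
		"io"
	)

	// flushAfterChunkWriter is a sentinel wrapper: its presence tells the
	// chunk writer to flush the underlying bufio.Writer after each chunk body.
	type flushAfterChunkWriter struct {
		*bufio.Writer
	}

	func writeChunk(w io.Writer, body []byte) error {
		// The chunk header and chunk body are still buffered together
		// into one write when w is buffered.
		if _, err := fmt.Fprintf(w, "%x\r\n", len(body)); err != nil {
			return err
		}
		if _, err := w.Write(body); err != nil {
			return err
		}
		if _, err := io.WriteString(w, "\r\n"); err != nil {
			return err
		}
		// Opt-in: flush only when the caller passed the sentinel type,
		// so slowly arriving request body reads reach the server promptly.
		if fw, ok := w.(flushAfterChunkWriter); ok {
			return fw.Flush()
		}
		return nil
	}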
Fixes #6574
Change-Id: Icefcdf17130c9e285c80b69af295bfd3e72c3a70
Reviewed-on: https://go-review.googlesource.com/10021
Reviewed-by: Andrew Gerrand <adg@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
I was previously setting GOARM=arm5 (due to confusion from previously
seeing buildall.sh's temporary use of "arm5" as a GOARCH and
misremembering), but GOARM=arm5 was acting like GOARM=5 only by
accident. See https://go-review.googlesource.com/#/c/10023/
Instead, fail if GOARM is not a known value.
Change-Id: I9ba4fd7268df233d40b09f0431f37cd85a049847
Reviewed-on: https://go-review.googlesource.com/10024
Reviewed-by: Minux Ma <minux@golang.org>
Was otherwise absent unless bound to an exported symbol,
as in the BUG with strings.Title.
Fixes #10781.
Change-Id: I1543137073a9dee9e546bc9d648ca54fc9632dde
Reviewed-on: https://go-review.googlesource.com/9899
Reviewed-by: Russ Cox <rsc@golang.org>
Set overflowing integer constants to 1 rather than 0 to avoid
spurious div-zero errors in subsequent constant expressions.
Also: Exclude new test case from go/types test since it's
running too long (go/types doesn't have an upper constant
size limit at the moment).
Fixes #7746.
Change-Id: I3768488ad9909a3cf995247b81ee78a8eb5a1e41
Reviewed-on: https://go-review.googlesource.com/9165
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
One important use case is a pipeline computation that passes values
from one Goroutine to the next and then exits or is placed in a
wait state. If GOMAXPROCS > 1 a Goroutine running on P1 will enable
another Goroutine and then immediately make P1 available to execute
it. We need to prevent other Ps from stealing the G that P1 is about
to execute. Otherwise the Gs can thrash between Ps causing unneeded
synchronization and slowing down throughput.
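A toy version of such a pipeline (purely illustrative):
	package main

	import "fmt"

	func main() {
		const stages = 4
		chans := make([]chan int, stages+1)
		for i := range chans {
			chans[i] = make(chan int)
		}
		// Each stage receives a value, readies the next goroutine by sending,
		// and then blocks again; with GOMAXPROCS > 1 the sender's P goes idle
		// right after the send, so without the fix another P could steal the
		// goroutine it was about to run.
		for i := 0; i < stages; i++ {
			go func(in <-chan int, out chan<- int) {
				for v := range in {
					out <- v + 1
				}
				close(out)
			}(chans[i], chans[i+1])
		}
		for i := 0; i < 3; i++ {
			chans[0] <- i
			fmt.Println(<-chans[stages]) // i + stages
		}
		close(chans[0])
	}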
Fix this by changing the stealing logic so that when a P attempts to
steal the only G on some other P's run queue, it will pause
momentarily to allow the victim P to schedule the G.
As part of optimizing stealing we also use a per-P victim queue to
move stolen Gs. This eliminates the zeroing of a stack-local victim
queue, which turned out to be expensive.
This CL is a necessary but not sufficient prerequisite to changing
the default value of GOMAXPROCS to something > 1 which is another
CL/discussion.
For highly serialized programs, such as GoroutineRing below, this can
make a large difference. For larger and more parallel programs such
as the x/benchmarks there is no noticeable detriment.
~/work/code/src/rsc.io/benchstat/benchstat old.txt new.txt
name old mean new mean delta
GoroutineRing 30.2µs × (0.98,1.01) 30.1µs × (0.97,1.04) ~ (p=0.941)
GoroutineRing-2 113µs × (0.91,1.07) 30µs × (0.98,1.03) -73.17% (p=0.004)
GoroutineRing-4 144µs × (0.98,1.02) 32µs × (0.98,1.01) -77.69% (p=0.000)
GoroutineRingBuf 32.7µs × (0.97,1.03) 32.5µs × (0.97,1.02) ~ (p=0.795)
GoroutineRingBuf-2 120µs × (0.92,1.08) 33µs × (1.00,1.00) -72.48% (p=0.004)
GoroutineRingBuf-4 138µs × (0.92,1.06) 33µs × (1.00,1.00) -76.21% (p=0.003)
The bench benchmarks show little impact.
old new
garbage 7032879 7011696
httpold 25509 25301
splayold 1022073 1019499
jsonold 28230624 28081433
Change-Id: I228c48fed8d85c9bbef16a7edc53ab7898506f50
Reviewed-on: https://go-review.googlesource.com/9872
Reviewed-by: Austin Clements <austin@google.com>
The FileConn and FilePacketConn APIs accept user-configured socket
descriptors to make them work together with the runtime-integrated
network poller, but there's a limitation: the APIs reject protocol
sockets that are not supported by the standard library. It's very hard
for the net and syscall packages to look after all platform- and
feature-specific sockets.
This change allows various platform- and feature-specific socket
descriptors to use the runtime-integrated network poller via the new
SocketConn and SocketPacketConn APIs, which bridge between the net and
syscall packages and the platforms.
New exposed APIs:
pkg net, func SocketConn(*os.File, SocketAddr) (Conn, error)
pkg net, func SocketPacketConn(*os.File, SocketAddr) (PacketConn, error)
pkg net, type SocketAddr interface { Addr, Raw }
pkg net, type SocketAddr interface, Addr([]uint8) Addr
pkg net, type SocketAddr interface, Raw(Addr) []uint8
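For context, a minimal example of the existing FileConn path that these new
APIs generalize (Unix-only; the socket pair and names are illustrative):
	package main

	import (
		"fmt"
		"net"
		"os"
		"syscall"
	)

	func main() {
		// A connected pair of Unix-domain sockets created directly with syscalls.
		fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
		if err != nil {
			panic(err)
		}
		f0 := os.NewFile(uintptr(fds[0]), "sock0")
		f1 := os.NewFile(uintptr(fds[1]), "sock1")

		// FileConn registers the user-supplied descriptors with the
		// runtime-integrated network poller.
		c0, err := net.FileConn(f0)
		if err != nil {
			panic(err)
		}
		c1, err := net.FileConn(f1)
		if err != nil {
			panic(err)
		}
		defer c0.Close()
		defer c1.Close()

		go c0.Write([]byte("hello"))
		buf := make([]byte, 5)
		n, err := c1.Read(buf)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(buf[:n]))
	}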
Fixes #10565.
Change-Id: Iec57499b3d84bb5cb0bcf3f664330c535eec11e3
Reviewed-on: https://go-review.googlesource.com/9275
Reviewed-by: Ian Lance Taylor <iant@golang.org>
If __LP64__ is defined then the type "long" is 64-bits, and there is
no need to explicitly request _FILE_OFFSET_BITS == 64. This changes
the definitions of F_GETLK, F_SETLK, and F_SETLKW on PPC to the values
that the kernel requires. The values used in C when _FILE_OFFSET_BITS
== 64 are corrected by the glibc fcntl function before making the
system call.
With this change, regenerate ppc64le files on Ubuntu trusty.
Change-Id: I8dddbd8a6bae877efff818f5c5dd06291ade3238
Reviewed-on: https://go-review.googlesource.com/9962
Reviewed-by: Minux Ma <minux@golang.org>
This will skip system call numbers that are ifdef'ed out in unistd.h,
as occurs on PPC.
Change-Id: I88e640e4621c7a8cc266433f34a7b4be71543ec9
Reviewed-on: https://go-review.googlesource.com/9966
Reviewed-by: Minux Ma <minux@golang.org>
Running these tests on the system stack is problematic because they
allocate Ps, which are large enough to overflow the system stack if
they are stack-allocated. It used to be necessary to run these tests
on the system stack because they were written in C, but since this is
no longer the case, we can fix this problem by simply not running the
tests on the system stack.
This also means we no longer need the hack in one of these tests that
forces the allocated Ps to escape to the heap, so eliminate that as
well.
Change-Id: I9064f5f8fd7f7b446ff39a22a70b172cfcb2dc57
Reviewed-on: https://go-review.googlesource.com/9923
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
The line in cgen.go was lost during the ginscmp CL.
The ggen.go change is not strictly necessary, but
it makes the 5g -S output for x[0] match what it said
before the ginscmp CL.
Change-Id: I5890a9ec1ac69a38509416eda5aea13b8b12b94a
Reviewed-on: https://go-review.googlesource.com/9929
Reviewed-by: Russ Cox <rsc@golang.org>
Fix a bug in Linux SysProcAttr handling: setting both Pdeathsig and
Credential caused Pdeathsig to be ignored. This is because the kernel
clears the deathsignal field when performing a setuid/setgid
system call.
Avoid this by moving Pdeathsig handling after Credential handling.
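A hedged example of the affected configuration (Linux-only; the uid/gid values
are placeholders, and changing credentials requires sufficient privileges):
	package main

	import (
		"os/exec"
		"syscall"
	)

	func main() {
		cmd := exec.Command("/bin/sleep", "60")
		cmd.SysProcAttr = &syscall.SysProcAttr{
			// Switching credentials makes the child perform setuid/setgid,
			// which clears the parent-death signal, so the runtime now sets
			// Pdeathsig only after the credential change.
			Credential: &syscall.Credential{Uid: 1000, Gid: 1000},
			Pdeathsig:  syscall.SIGTERM,
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		_ = cmd.Wait()
	}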
Fixes #9686
Change-Id: Id01896ad4e979b8c448e0061f00aa8762ca0ac94
Reviewed-on: https://go-review.googlesource.com/3290
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The code generated for x = append(x, v) is roughly:
	t := x
	if len(t)+1 > cap(t) {
		t = grow(t)
	}
	t[len(t)] = v
	len(t)++
	x = t
We used to generate this code as Go pseudocode during walk.
Generate it instead as actual instructions during gen.
Doing so lets us apply a few optimizations. The most important
is that when, as in the above example, the source slice and the
destination slice are the same, the code can instead do:
	t := x
	if len(t)+1 > cap(t) {
		t = grow(t)
		x = {base(t), len(t)+1, cap(t)}
	} else {
		len(x)++
	}
	t[len(t)] = v
That is, in the fast path that does not reallocate the array,
only the updated length needs to be written back to x,
not the array pointer and not the capacity. This is more like
what you'd write by hand in C. It's faster in general, since
the fast path elides two of the three stores, but it's especially
faster when the form of x is such that the base pointer write
would turn into a write barrier. No write, no barrier.
name old mean new mean delta
BinaryTree17 5.68s × (0.97,1.04) 5.81s × (0.98,1.03) +2.35% (p=0.023)
Fannkuch11 4.41s × (0.98,1.03) 4.35s × (1.00,1.00) ~ (p=0.090)
FmtFprintfEmpty 92.7ns × (0.91,1.16) 86.0ns × (0.94,1.11) -7.31% (p=0.038)
FmtFprintfString 281ns × (0.96,1.08) 276ns × (0.98,1.04) ~ (p=0.219)
FmtFprintfInt 288ns × (0.97,1.06) 274ns × (0.98,1.06) -4.94% (p=0.002)
FmtFprintfIntInt 493ns × (0.97,1.04) 506ns × (0.99,1.01) +2.65% (p=0.009)
FmtFprintfPrefixedInt 423ns × (0.97,1.04) 391ns × (0.99,1.01) -7.52% (p=0.000)
FmtFprintfFloat 598ns × (0.99,1.01) 566ns × (0.99,1.01) -5.27% (p=0.000)
FmtManyArgs 1.89µs × (0.98,1.05) 1.91µs × (0.99,1.01) ~ (p=0.231)
GobDecode 14.8ms × (0.98,1.03) 15.3ms × (0.99,1.02) +3.01% (p=0.000)
GobEncode 12.3ms × (0.98,1.01) 11.5ms × (0.97,1.03) -5.93% (p=0.000)
Gzip 656ms × (0.99,1.05) 645ms × (0.99,1.01) ~ (p=0.055)
Gunzip 142ms × (1.00,1.00) 142ms × (1.00,1.00) -0.32% (p=0.034)
HTTPClientServer 91.2µs × (0.97,1.04) 90.5µs × (0.97,1.04) ~ (p=0.468)
JSONEncode 32.6ms × (0.97,1.08) 32.0ms × (0.98,1.03) ~ (p=0.190)
JSONDecode 114ms × (0.97,1.05) 114ms × (0.99,1.01) ~ (p=0.887)
Mandelbrot200 6.11ms × (0.98,1.04) 6.04ms × (1.00,1.01) ~ (p=0.167)
GoParse 6.66ms × (0.97,1.04) 6.47ms × (0.97,1.05) -2.81% (p=0.014)
RegexpMatchEasy0_32 159ns × (0.99,1.00) 171ns × (0.93,1.07) +7.19% (p=0.002)
RegexpMatchEasy0_1K 538ns × (1.00,1.01) 550ns × (0.98,1.01) +2.30% (p=0.000)
RegexpMatchEasy1_32 138ns × (1.00,1.00) 135ns × (0.99,1.02) -1.60% (p=0.000)
RegexpMatchEasy1_1K 869ns × (0.99,1.01) 879ns × (1.00,1.01) +1.08% (p=0.000)
RegexpMatchMedium_32 252ns × (0.99,1.01) 243ns × (1.00,1.00) -3.71% (p=0.000)
RegexpMatchMedium_1K 72.7µs × (1.00,1.00) 70.3µs × (1.00,1.00) -3.34% (p=0.000)
RegexpMatchHard_32 3.85µs × (1.00,1.00) 3.82µs × (1.00,1.01) -0.81% (p=0.000)
RegexpMatchHard_1K 118µs × (1.00,1.00) 117µs × (1.00,1.00) -0.56% (p=0.000)
Revcomp 920ms × (0.97,1.07) 917ms × (0.97,1.04) ~ (p=0.808)
Template 129ms × (0.98,1.03) 114ms × (0.99,1.01) -12.06% (p=0.000)
TimeParse 619ns × (0.99,1.01) 622ns × (0.99,1.01) ~ (p=0.062)
TimeFormat 661ns × (0.98,1.04) 665ns × (0.99,1.01) ~ (p=0.524)
See next CL for combination with a similar optimization for slice.
The benchmarks that are slower in this CL are still faster overall
with the combination of the two.
Change-Id: I2a7421658091b2488c64741b4db15ab6c3b4cb7e
Reviewed-on: https://go-review.googlesource.com/9812
Reviewed-by: David Chase <drchase@google.com>
This lets us abstract away which arguments can be constants and so on
and lets the back ends reverse the order of arguments if that helps.
Change-Id: I283ec1d694f2dd84eba22e5eb4aad78a2d2d9eb0
Reviewed-on: https://go-review.googlesource.com/9810
Reviewed-by: David Chase <drchase@google.com>
Messages that are too big are rejected when read, so they should
be rejected when written too.
Fixes #10518.
Change-Id: I96678fbe2d94f51b957fe26faef33cd8df3823dd
Reviewed-on: https://go-review.googlesource.com/9965
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The status buffer built by the exit function
was not nil-terminated.
Fixes #10789.
Change-Id: I2d34ac50a19d138176c4b47393497ba7070d5b61
Reviewed-on: https://go-review.googlesource.com/9953
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Once added to the signal queue, the pointer passed to the
signal handler could no longer be valid. Instead of passing
the pointer to the note string, we recopy the value of the
note string to a static array in the signal queue.
Fixes #10784.
Change-Id: Iddd6837b58a14dfaa16b069308ae28a7b8e0965b
Reviewed-on: https://go-review.googlesource.com/9950
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Today's earlier fix can stay, but it's a band-aid over the real problem,
which is that bad code was slipping through the type checker
into the back end (and luckily causing a type error there).
I discovered this because my new append does not use the same
temporaries and failed the test as written.
Fixes #9521.
Change-Id: I7e33e2ea15743406e15c6f3fdf73e1edecda69bd
Reviewed-on: https://go-review.googlesource.com/9921
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Any Git branch can be the default branch, not only master. Removing
the hardwired 'checkout master' and using 'checkout {tag}' instead is
the best choice: it works with and without a master branch. Furthermore,
it resolves the GitHub default branch issue, since changing the GitHub
default branch effectively changes HEAD.
Fixes #9032
Change-Id: I19a1221bcefe0806e7556c124c6da7ac0c2160b5
Reviewed-on: https://go-review.googlesource.com/5312
Reviewed-by: Russ Cox <rsc@golang.org>
registry.ReadSubKeyNames requires the QUERY access right in addition to
ENUMERATE_SUB_KEYS.
This was making TestLocalZoneAbbr fail on Windows 7 in the Paris/Madrid
timezone. It succeeded on Windows 8 because the timezone name changed from
"Paris/Madrid" to "Romance Standard Time", the latter being matched by
an abbrs entry.
Change-Id: I791287ba9d1b3556246fa4e9e1604a1fbba1f5e6
Reviewed-on: https://go-review.googlesource.com/9809
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TestAcceptIgnoreSomeErrors was created to test that the network
accept function ignores some errors. But conditions created
by the test also affect network reads. Change the test to
ignore these read errors when acceptable.
Fixes #10785
Change-Id: I3da85cb55bd3e78c1980ad949e53e82391f9b41e
Reviewed-on: https://go-review.googlesource.com/9942
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This:
1) Defines the ABI hash of a package (as the SHA1 of the __.PKGDEF)
2) Defines the ABI hash of a shared library (sort the packages by import
path, concatenate the hashes of the packages and SHA1 that)
3) When building a shared library, compute the above value and define a
global symbol that points to a go string that has the hash as its value.
4) When linking against a shared library, read the abi hash from the
library and put both the value seen at link time and a reference
to the global symbol into the moduledata.
5) During runtime initialization, check that the hash seen at link time
still matches the hash the global symbol points to.
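A minimal sketch of steps (1) and (2), assuming the __.PKGDEF contents and
per-package hashes are already in hand; the function names are invented:
	package main

	import (
		"crypto/sha1"
		"fmt"
		"io"
		"sort"
	)

	// pkgABIHash is step (1): the SHA1 of a package's __.PKGDEF contents.
	func pkgABIHash(pkgdef []byte) string {
		return fmt.Sprintf("%x", sha1.Sum(pkgdef))
	}

	// libABIHash is step (2): sort the packages by import path, concatenate
	// their hashes, and SHA1 the result.
	func libABIHash(pkgHashes map[string]string) string {
		paths := make([]string, 0, len(pkgHashes))
		for p := range pkgHashes {
			paths = append(paths, p)
		}
		sort.Strings(paths)
		h := sha1.New()
		for _, p := range paths {
			io.WriteString(h, pkgHashes[p])
		}
		return fmt.Sprintf("%x", h.Sum(nil))
	}

	func main() {
		hashes := map[string]string{
			"fmt":     pkgABIHash([]byte("fake __.PKGDEF for fmt")),
			"runtime": pkgABIHash([]byte("fake __.PKGDEF for runtime")),
		}
		fmt.Println(libABIHash(hashes))
	}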
Change-Id: Iaa54c783790e6dde3057a2feadc35473d49614a5
Reviewed-on: https://go-review.googlesource.com/8773
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
addmoduledata is called from a .init_array function and needs to follow the
platform ABI. It contains accesses to global data which are rewritten to use
R15 by the assembler, and as R15 is callee-save we need to save it.
Change-Id: I03893efb1576aed4f102f2465421f256f3bb0f30
Reviewed-on: https://go-review.googlesource.com/9941
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Change-Id: I89778988baec1cf4a35d9342c7dbe8c4c08ff3cd
Reviewed-on: https://go-review.googlesource.com/9893
Run-TryBot: Hyang-Ah Hana Kim <hyangah@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Hyang-Ah Hana Kim <hyangah@gmail.com>
Instead of errors like:
./blank2.go:15: cannot use ~b1 (type []int) as type int in assignment
we now have:
./blank2.go:15: cannot use _ (type []int) as type int in assignment
Less confusing for users.
Fixes #9521
Change-Id: Ieab9859040e8e0df95deeaee7eeb408d3be61c0f
Reviewed-on: https://go-review.googlesource.com/9902
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
DWARF generation appears to assume Cpos is cheap and this makes linking godoc
about 8% faster and linking the standard library into a single shared library
about 22% faster on my machine.
Updates #10571
Change-Id: I3f81efd0174e356716e7971c4f59810b72378177
Reviewed-on: https://go-review.googlesource.com/9913
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When running the client header timeout test, there is a race between
us timing out and waiting on the remaining requests to be serviced. If
the client times out before the server blocks on the channel in the
handler, we will be simultaneously adding to a waitgroup with the
value 0 and waiting on it when we call TestServer.Close().
This is largely a theoretical race. We have to time out before we
enter the handler, and the only reason we would time out is if we're
blocked on the channel. Nevertheless, make the race detector happy
by turning the close into a channel send. This turns the defer call
into a synchronization point and we can be sure that we've entered
the handler before we close the server.
Fixes #10780
Change-Id: Id73b017d1eb7503e446aa51538712ef49f2f5c9e
Reviewed-on: https://go-review.googlesource.com/9905
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The current implementation of typedmemmove walks the ptrmask
in the type to find out where pointers are. This led to turning off
GC programs for the Go 1.5 dev cycle, so that there would always
be a ptrmask. Instead of also interpreting the GC programs,
interpret the heap bitmap, which we know must be available and
up to date. (There is no point to write barriers when writing outside
the heap.)
This CL is only about correctness. The next CL will optimize the code.
Change-Id: Id1305c7c071fd2734ab96634b0e1c745b23fa793
Reviewed-on: https://go-review.googlesource.com/9886
Reviewed-by: Austin Clements <austin@google.com>
We want typedmemmove to use the heap bitmap to determine
where pointers are, instead of reinterpreting the type information.
The heap bitmap is simpler to access.
In general, typedmemmove will need to be able to look up the bits
for any word and find valid pointer information, so fill even after the
dead marker. Not filling after the dead marker was an optimization
I introduced only a few days ago, when reintroducing the dead marker
code. At the time I said it probably wouldn't last, and it didn't.
Change-Id: I6ba01bff17ddee1ff429f454abe29867ec60606e
Reviewed-on: https://go-review.googlesource.com/9885
Reviewed-by: Austin Clements <austin@google.com>
Moving them up makes them properly aligned on 32-bit systems.
There are some odd fields above them right now
(like fixalloc and mutex maybe).
Change-Id: I57851a5bbb2e7cc339712f004f99bb6c0cce0ca5
Reviewed-on: https://go-review.googlesource.com/9889
Reviewed-by: Austin Clements <austin@google.com>
Dead code.
This field is left over from Go 1.4, when we elided the fake write
barrier in this case. Today, it's unused (always false).
The upcoming append/slice changes handle this case again,
but without needing this field.
Change-Id: Ic6f160b64efdc1bbed02097ee03050f8cd0ab1b8
Reviewed-on: https://go-review.googlesource.com/9789
Reviewed-by: David Chase <drchase@google.com>
If you are using -h to get a stack trace at the site of the failure,
Yyerror will never return. Dump the register allocation sites
before calling Yyerror.
Change-Id: I51266c03e06cb5084c2eaa89b367b9ed85ba286a
Reviewed-on: https://go-review.googlesource.com/9788
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Dave Cheney <dave@cheney.net>
The new(uint64) was moving to the stack, which may not be aligned.
Change-Id: Iad070964202001b52029494d43e299fed980f939
Reviewed-on: https://go-review.googlesource.com/9787
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David Chase <drchase@google.com>
The -g mode is a debugging mode that prints instructions
as they are constructed. Gbranch was just missing the print.
Change-Id: I3fb45fd9bd3996ed96df5be903b9fd6bd97148b0
Reviewed-on: https://go-review.googlesource.com/9827
Reviewed-by: Rick Hudson <rlh@golang.org>
Reintroduce an optimization discarded during the initial conversion
from 4-bit heap bitmaps to 2-bit heap bitmaps: when we reach the
place in the bitmap where there are no more pointers, mark that position
for the GC so that it can avoid scanning past that place.
During heapBitsSetType we can also avoid initializing heap bitmap
beyond that location, which gives a bit of a win compared to Go 1.4.
This particular optimization (not initializing the heap bitmap) may not last:
we might change typedmemmove to use the heap bitmap, in which
case it would all need to be initialized. The early stop in the GC scan
will stay no matter what.
Compared to Go 1.4 (github.com/rsc/go, branch go14bench):
name old mean new mean delta
SetTypeNode64 80.7ns × (1.00,1.01) 57.4ns × (1.00,1.01) -28.83% (p=0.000)
SetTypeNode64Dead 80.5ns × (1.00,1.01) 13.1ns × (0.99,1.02) -83.77% (p=0.000)
SetTypeNode64Slice 2.16µs × (1.00,1.01) 1.54µs × (1.00,1.01) -28.75% (p=0.000)
SetTypeNode64DeadSlice 2.16µs × (1.00,1.01) 1.52µs × (1.00,1.00) -29.74% (p=0.000)
Compared to previous CL:
name old mean new mean delta
SetTypeNode64 56.7ns × (1.00,1.00) 57.4ns × (1.00,1.01) +1.19% (p=0.000)
SetTypeNode64Dead 57.2ns × (1.00,1.00) 13.1ns × (0.99,1.02) -77.15% (p=0.000)
SetTypeNode64Slice 1.56µs × (1.00,1.01) 1.54µs × (1.00,1.01) -0.89% (p=0.000)
SetTypeNode64DeadSlice 1.55µs × (1.00,1.01) 1.52µs × (1.00,1.00) -2.23% (p=0.000)
This is the last CL in the sequence converting from the 4-bit heap
to the 2-bit heap, with all the same optimizations reenabled.
Compared to before that process began (compared to CL 9701 patch set 1):
name old mean new mean delta
BinaryTree17 5.87s × (0.94,1.09) 5.91s × (0.96,1.06) ~ (p=0.578)
Fannkuch11 4.32s × (1.00,1.00) 4.32s × (1.00,1.00) ~ (p=0.474)
FmtFprintfEmpty 89.1ns × (0.95,1.16) 89.0ns × (0.93,1.10) ~ (p=0.942)
FmtFprintfString 283ns × (0.98,1.02) 298ns × (0.98,1.06) +5.33% (p=0.000)
FmtFprintfInt 284ns × (0.98,1.04) 286ns × (0.98,1.03) ~ (p=0.208)
FmtFprintfIntInt 486ns × (0.98,1.03) 498ns × (0.97,1.06) +2.48% (p=0.000)
FmtFprintfPrefixedInt 400ns × (0.99,1.02) 408ns × (0.98,1.02) +2.23% (p=0.000)
FmtFprintfFloat 566ns × (0.99,1.01) 587ns × (0.98,1.01) +3.69% (p=0.000)
FmtManyArgs 1.91µs × (0.99,1.02) 1.94µs × (0.99,1.02) +1.81% (p=0.000)
GobDecode 15.5ms × (0.98,1.05) 15.8ms × (0.98,1.03) +1.94% (p=0.002)
GobEncode 11.9ms × (0.97,1.03) 12.0ms × (0.96,1.09) ~ (p=0.263)
Gzip 648ms × (0.99,1.01) 648ms × (0.99,1.01) ~ (p=0.992)
Gunzip 143ms × (1.00,1.00) 143ms × (1.00,1.01) ~ (p=0.585)
HTTPClientServer 89.2µs × (0.99,1.02) 90.3µs × (0.98,1.01) +1.24% (p=0.000)
JSONEncode 32.3ms × (0.97,1.06) 31.6ms × (0.99,1.01) -2.29% (p=0.000)
JSONDecode 106ms × (0.99,1.01) 107ms × (1.00,1.01) +0.62% (p=0.000)
Mandelbrot200 6.02ms × (1.00,1.00) 6.03ms × (1.00,1.01) ~ (p=0.250)
GoParse 6.57ms × (0.97,1.06) 6.53ms × (0.99,1.03) ~ (p=0.243)
RegexpMatchEasy0_32 162ns × (1.00,1.00) 161ns × (1.00,1.01) -0.80% (p=0.000)
RegexpMatchEasy0_1K 561ns × (0.99,1.02) 541ns × (0.99,1.01) -3.67% (p=0.000)
RegexpMatchEasy1_32 145ns × (0.95,1.04) 138ns × (1.00,1.00) -5.04% (p=0.000)
RegexpMatchEasy1_1K 864ns × (0.99,1.04) 887ns × (0.99,1.01) +2.57% (p=0.000)
RegexpMatchMedium_32 255ns × (0.99,1.04) 253ns × (0.99,1.01) -1.05% (p=0.012)
RegexpMatchMedium_1K 73.9µs × (0.98,1.04) 72.8µs × (1.00,1.00) -1.51% (p=0.005)
RegexpMatchHard_32 3.92µs × (0.98,1.04) 3.85µs × (1.00,1.01) -1.88% (p=0.002)
RegexpMatchHard_1K 120µs × (0.98,1.04) 117µs × (1.00,1.01) -2.02% (p=0.001)
Revcomp 936ms × (0.95,1.08) 922ms × (0.97,1.08) ~ (p=0.234)
Template 130ms × (0.98,1.04) 126ms × (0.99,1.01) -2.99% (p=0.000)
TimeParse 638ns × (0.98,1.05) 628ns × (0.99,1.01) -1.54% (p=0.004)
TimeFormat 674ns × (0.99,1.01) 668ns × (0.99,1.01) -0.80% (p=0.001)
The slowdown of the first few benchmarks seems to be due to the new
atomic operations for certain small size allocations. But the larger
benchmarks mostly improve, probably due to the decreased memory
pressure from having half as much heap bitmap.
CL 9706, which removes the (never used anymore) wbshadow mode,
gets back what is lost in the early microbenchmarks.
Change-Id: I37423a209e8ec2a2e92538b45cac5422a6acd32d
Reviewed-on: https://go-review.googlesource.com/9705
Reviewed-by: Rick Hudson <rlh@golang.org>
For the conversion of the heap bitmap from 4-bit to 2-bit fields,
I replaced heapBitsSetType with the dumbest thing that could possibly work:
two atomic operations (atomicand8+atomicor8) per 2-bit field.
This CL replaces that code with a proper implementation that
avoids the atomics whenever possible. Benchmarks vs base CL
(before the conversion to 2-bit heap bitmap) and vs Go 1.4 below.
Compared to Go 1.4, SetTypePtr (a 1-pointer allocation)
is 10ns slower because a race against the concurrent GC requires the
use of an atomicor8 that used to be an ordinary write. This slowdown
was present even in the base CL.
Compared to both Go 1.4 and base, SetTypeNode8 (a 10-word allocation)
is 10ns slower because it too needs a new atomic, because with the
denser representation, the byte on the end of the allocation is now shared
with the object next to it; this was not true with the 4-bit representation.
Excluding these two (fundamental) slowdowns due to the use of atomics,
the new code is noticeably faster than both Go 1.4 and the base CL.
The next CL will reintroduce the ``typeDead'' optimization.
Stats are from 5 runs on a MacBookPro10,2 (late 2012 Core i5).
Compared to base CL (** = new atomic)
name old mean new mean delta
SetTypePtr 14.1ns × (0.99,1.02) 14.7ns × (0.93,1.10) ~ (p=0.175)
SetTypePtr8 18.4ns × (1.00,1.01) 18.6ns × (0.81,1.21) ~ (p=0.866)
SetTypePtr16 28.7ns × (1.00,1.00) 22.4ns × (0.90,1.27) -21.88% (p=0.015)
SetTypePtr32 52.3ns × (1.00,1.00) 33.8ns × (0.93,1.24) -35.37% (p=0.001)
SetTypePtr64 79.2ns × (1.00,1.00) 55.1ns × (1.00,1.01) -30.43% (p=0.000)
SetTypePtr126 118ns × (1.00,1.00) 100ns × (1.00,1.00) -15.97% (p=0.000)
SetTypePtr128 130ns × (0.92,1.19) 98ns × (1.00,1.00) -24.36% (p=0.008)
SetTypePtrSlice 726ns × (0.96,1.08) 760ns × (1.00,1.00) ~ (p=0.152)
SetTypeNode1 14.1ns × (0.94,1.15) 12.0ns × (1.00,1.01) -14.60% (p=0.020)
SetTypeNode1Slice 135ns × (0.96,1.07) 88ns × (1.00,1.00) -34.53% (p=0.000)
SetTypeNode8 20.9ns × (1.00,1.01) 32.6ns × (1.00,1.00) +55.37% (p=0.000) **
SetTypeNode8Slice 414ns × (0.99,1.02) 244ns × (1.00,1.00) -41.09% (p=0.000)
SetTypeNode64 80.0ns × (1.00,1.00) 57.4ns × (1.00,1.00) -28.23% (p=0.000)
SetTypeNode64Slice 2.15µs × (1.00,1.01) 1.56µs × (1.00,1.00) -27.43% (p=0.000)
SetTypeNode124 119ns × (0.99,1.00) 100ns × (1.00,1.00) -16.11% (p=0.000)
SetTypeNode124Slice 3.40µs × (1.00,1.00) 2.93µs × (1.00,1.00) -13.80% (p=0.000)
SetTypeNode126 120ns × (1.00,1.01) 98ns × (1.00,1.00) -18.19% (p=0.000)
SetTypeNode126Slice 3.53µs × (0.98,1.08) 3.02µs × (1.00,1.00) -14.49% (p=0.002)
SetTypeNode1024 726ns × (0.97,1.09) 740ns × (1.00,1.00) ~ (p=0.451)
SetTypeNode1024Slice 24.9µs × (0.89,1.37) 23.1µs × (1.00,1.00) ~ (p=0.476)
Compared to Go 1.4 (** = new atomic)
name old mean new mean delta
SetTypePtr 5.71ns × (0.89,1.19) 14.68ns × (0.93,1.10) +157.24% (p=0.000) **
SetTypePtr8 19.3ns × (0.96,1.10) 18.6ns × (0.81,1.21) ~ (p=0.638)
SetTypePtr16 30.7ns × (0.99,1.03) 22.4ns × (0.90,1.27) -26.88% (p=0.005)
SetTypePtr32 51.5ns × (1.00,1.00) 33.8ns × (0.93,1.24) -34.40% (p=0.001)
SetTypePtr64 83.6ns × (0.94,1.12) 55.1ns × (1.00,1.01) -34.12% (p=0.001)
SetTypePtr126 137ns × (0.87,1.26) 100ns × (1.00,1.00) -27.10% (p=0.028)
SetTypePtrSlice 865ns × (0.80,1.23) 760ns × (1.00,1.00) ~ (p=0.243)
SetTypeNode1 15.2ns × (0.88,1.12) 12.0ns × (1.00,1.01) -20.89% (p=0.014)
SetTypeNode1Slice 156ns × (0.93,1.16) 88ns × (1.00,1.00) -43.57% (p=0.001)
SetTypeNode8 23.8ns × (0.90,1.18) 32.6ns × (1.00,1.00) +36.76% (p=0.003) **
SetTypeNode8Slice 502ns × (0.92,1.10) 244ns × (1.00,1.00) -51.46% (p=0.000)
SetTypeNode64 85.6ns × (0.94,1.11) 57.4ns × (1.00,1.00) -32.89% (p=0.001)
SetTypeNode64Slice 2.36µs × (0.91,1.14) 1.56µs × (1.00,1.00) -33.96% (p=0.002)
SetTypeNode124 130ns × (0.91,1.12) 100ns × (1.00,1.00) -23.49% (p=0.004)
SetTypeNode124Slice 3.81µs × (0.90,1.22) 2.93µs × (1.00,1.00) -23.09% (p=0.025)
There are fewer benchmarks vs Go 1.4 because unrolling directly
into the heap bitmap is not yet implemented, so those would not
be meaningful comparisons.
These benchmarks were not present in Go 1.4 as distributed.
The backport to Go 1.4 is in github.com/rsc/go's go14bench branch,
commit 71d5ee5.
Change-Id: I95ed05a22bf484b0fc9efad549279e766c98d2b6
Reviewed-on: https://go-review.googlesource.com/9704
Reviewed-by: Rick Hudson <rlh@golang.org>
Previous CLs changed the representation of the non-heap type bitmaps
to be 1-bit bitmaps (pointer or not). Before this CL, the heap bitmap
stored a 2-bit type for each word and a mark bit and checkmark bit
for the first word of the object. (There used to be additional per-word bits.)
Reduce heap bitmap to 2-bit, with 1 dedicated to pointer or not,
and the other used for mark, checkmark, and "keep scanning forward
to find pointers in this object." See comments for details.
This CL replaces heapBitsSetType with very slow but obviously correct code.
A followup CL will optimize it. (Spoiler: the new code is faster than Go 1.4 was.)
Change-Id: I999577a133f3cfecacebdec9cdc3573c235c7fb9
Reviewed-on: https://go-review.googlesource.com/9703
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The type information in reflect.Type and the GC programs is now
1 bit per word, down from 2 bits.
The in-memory unrolled type bitmap representation is now
1 bit per word, down from 4 bits.
The conversion from the unrolled (now 1-bit) bitmap to the
heap bitmap (still 4-bit) is not optimized. A followup CL will
work on that, after the heap bitmap has been converted to 2-bit.
The typeDead optimization, in which a special value denotes
that there are no more pointers anywhere in the object, is lost
in this CL. A followup CL will bring it back in the final form of
heapBitsSetType.
Change-Id: If61e67950c16a293b0b516a6fd9a1c755b6d5549
Reviewed-on: https://go-review.googlesource.com/9702
Reviewed-by: Austin Clements <austin@google.com>
There was an old benchmark that measured this indirectly
via allocation, but I don't understand how to factor out the
allocation cost when interpreting the numbers.
Replace with a benchmark that only calls heapBitsSetType,
that does not allocate. This was not possible when the
benchmark was first written, because heapBitsSetType had
not been factored out of mallocgc.
Change-Id: I30f0f02362efab3465a50769398be859832e6640
Reviewed-on: https://go-review.googlesource.com/9701
Reviewed-by: Austin Clements <austin@google.com>
Since we now have stack information for code running on the
systemstack, we can traceback over it. To make cpu profiles useful,
add a case in gentraceback to jump over systemstack switches.
Fixes #10609.
Change-Id: I21f47fcc802c07c5d4a1ada56374314e388a6dc7
Reviewed-on: https://go-review.googlesource.com/9506
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
I have around twenty such values on a Windows 7 development machine.
regedit displays (translated): "invalid 32-bit DWORD value".
Change-Id: Ib37a414ee4c85e891b0a25fed2ddad9e105f5f4e
Reviewed-on: https://go-review.googlesource.com/9901
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
On darwin/arm, the test sometimes fails with:
Process 557 resuming
--- FAIL: TestWriteTimeoutFluctuation (1.64s)
timeout_test.go:706: Write took over 1s; expected 0.1s
FAIL
Process 557 exited with status = 1 (0x00000001)
go_darwin_arm_exec: timeout running tests
This change increases the timeout on iOS builders from 1s to 3s as a
temporary fix.
Updates #10775.
Change-Id: Ifdaf99cf5b8582c1a636a0f7d5cc66bb276efd72
Reviewed-on: https://go-review.googlesource.com/9915
Reviewed-by: Minux Ma <minux@golang.org>
ExpandString correctly loops on the syscall until it reaches the
required buffer size but truncates the result before converting it
back to a string. The truncation limit is increased to 2^15 bytes,
which is the documented maximum ExpandEnvironmentStrings output size.
This fixes TestExpandString on systems where len($PATH) > 1024.
Change-Id: I2a6f184eeca939121b458bcffe1a436a50f3298e
Reviewed-on: https://go-review.googlesource.com/9805
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
There is no SYS_INOTIFY_INIT on linux/arm64, only SYS_INOTIFY_INIT1.
Change-Id: I97f430f2c2b910fb19dce495ff1adf591b8634fc
Reviewed-on: https://go-review.googlesource.com/9870
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
The html package uses some special-purpose code to escape special
characters. A strings.Replacer can be used instead and is much more
efficient. The converse operation is more complex but can still be
slightly optimized.
Credits to Ken Bloom (kabloom@google.com), who first submitted a
similar patch at https://codereview.appspot.com/141930043
Added benchmarks and slightly optimized UnescapeString.
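A sketch of the Replacer-based approach; htmlEscaper is an illustrative name,
and the replacement table matches the entities produced by html.EscapeString:
	package main

	import (
		"fmt"
		"strings"
	)

	// A single Replacer rewrites all special characters in one pass over the
	// input instead of building the output piece by piece.
	var htmlEscaper = strings.NewReplacer(
		`&`, "&amp;",
		`'`, "&#39;",
		`<`, "&lt;",
		`>`, "&gt;",
		`"`, "&#34;",
	)

	func main() {
		fmt.Println(htmlEscaper.Replace(`<a href="x">Q&A</a>`))
		// &lt;a href=&#34;x&#34;&gt;Q&amp;A&lt;/a&gt;
	}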
benchmark old ns/op new ns/op delta
BenchmarkEscape-4 118713 19825 -83.30%
BenchmarkEscapeNone-4 87653 3784 -95.68%
BenchmarkUnescape-4 24888 23417 -5.91%
BenchmarkUnescapeNone-4 14423 157 -98.91%
benchmark old allocs new allocs delta
BenchmarkEscape-4 9 2 -77.78%
BenchmarkEscapeNone-4 0 0 +0.00%
BenchmarkUnescape-4 2 2 +0.00%
BenchmarkUnescapeNone-4 0 0 +0.00%
benchmark old bytes new bytes delta
BenchmarkEscape-4 24800 12288 -50.45%
BenchmarkEscapeNone-4 0 0 +0.00%
BenchmarkUnescape-4 10240 10240 +0.00%
BenchmarkUnescapeNone-4 0 0 +0.00%
Fixes #8697
Change-Id: I208261ed7cbe9b3dee6317851f8c0cf15528bce4
Reviewed-on: https://go-review.googlesource.com/9808
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Delete the colon from RUN: for examples, since it's not there for tests.
Add spaces to line up RUN and PASS: lines.
Before:
=== RUN TestCount
--- PASS: TestCount (0.00s)
=== RUN: ExampleFields
--- PASS: ExampleFields (0.00s)
After:
=== RUN   TestCount
--- PASS: TestCount (0.00s)
=== RUN   ExampleFields
--- PASS: ExampleFields (0.00s)
Fixes #10594.
Change-Id: I189c80a5d99101ee72d8c9c3a4639c07e640cbd8
Reviewed-on: https://go-review.googlesource.com/9846
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Pipelines are altered by inserting sanitizers if they are not
already present. The code makes the assumption that the first
operand of each command is a function identifier.
This is wrong, since they can also be methods. It results in
a panic with templates such as {{1|print 2|.f 3}}
Adds an extra type assertion to make sure only identifiers
are compared with sanitizers.
Fixes #10673
Change-Id: I3eb820982675231dbfa970f197abc5ef335ce86b
Reviewed-on: https://go-review.googlesource.com/9801
Reviewed-by: Rob Pike <r@golang.org>
This will make it possible for C++ code to #include the export header
file and see the correct declarations.
The preamble remains the user's responsibility. It would not be
appropriate to wrap the preamble in extern "C", because it might
include header files that work with both C and C++. Putting those
header files in an extern "C" block would break them.
Change-Id: Ifb40879d709d26596d5c80b1307a49f1bd70932a
Reviewed-on: https://go-review.googlesource.com/9850
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
With this patch, gdb seems to be able to correctly backtrace Go
processes on at least linux/{arm,arm64,ppc64}.
Change-Id: Ic40a2a70e71a19c4a92e4655710f38a807b67e9a
Reviewed-on: https://go-review.googlesource.com/9822
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
It was testing the mark bits on what roots pointed at,
but not the remainder of the live heap, because in
CL 2991 I accidentally inverted this check during
refactoring.
The next CL will turn it back off by default again,
but I want one run on the builders with the full
checkmark checks.
Change-Id: Ic166458cea25c0a56e5387fc527cb166ff2e5ada
Reviewed-on: https://go-review.googlesource.com/9824
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently the heap minimum is set to 4MB, which prevents us from
collecting at every allocation by setting GOGC=0. This adjusts the
heap minimum to 4MB*GOGC/100, thus re-enabling collection at every allocation.
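(For example, the default GOGC=100 keeps the minimum at 4MB, GOGC=50 lowers it
to 2MB, and GOGC=0 lowers it to 0, so every allocation can trigger a collection.)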
Fixes #10681
Change-Id: I912d027dac4b14ae535597e8beefa9ac3fb8ad94
Reviewed-on: https://go-review.googlesource.com/9814
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
This was added during testing but is unnecessary.
Thanks to gravis on GitHub for catching it.
See #10574.
Change-Id: I4a8f76d237e67f5a0ea189a0f3cadddbf426778a
Reviewed-on: https://go-review.googlesource.com/9841
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The code already handled high widths but not high precisions.
Also make sure it handles the harder cases of %U.
Fixes #10745.
Change-Id: Ib4d394d49a9941eeeaff866dc59d80483e312a98
Reviewed-on: https://go-review.googlesource.com/9769
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When using -buildmode=c-archive or c-shared, and when installing
packages that use cgo, and when those packages export some functions
via //export comments, then for each such package install a pkg.h
header file that declares the functions.
This permits C code to #include the header when calling the Go
functions.
This is a little awkward to use when there are multiple packages that
export functions, as you have to "go install" your c-archive/c-shared
object and then pull it out of the package directory. When compiling
your C code you have to -I pkg/$GOOS_$GOARCH. I haven't thought of
any more convenient approach. It's simpler when only the main package
has exported functions.
When using c-shared you currently have to use a _shared suffix in the
-I option; it would be nice to fix that somehow.
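A tiny example of the kind of package this applies to; the package and
function names are made up. Installing it with -buildmode=c-archive or
c-shared would now also install a header declaring Add:
	package mylib

	import "C"

	//export Add
	func Add(a, b C.int) C.int {
		return a + b
	}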
Change-Id: I5d8cf08914b7d3c2b194120c77791d2732ffd26e
Reviewed-on: https://go-review.googlesource.com/9798
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Try to provide hints for common areas, either *interface
where interface would have been better, and note incorrect
capitalization (but don't be more ambitious than that, at
least not today).
Added code and test for cases
ptrInterface.ExistingMethod
ptrInterface.unexportedMethod
ptrInterface.MissingMethod
ptrInterface.withwRongcASEdMethod
interface.withwRongcASEdMethod
ptrStruct.withwRongcASEdMethod
struct.withwRongcASEdMethod
also included tests for related errors to check for
unintentional changes and consistent wording.
Somewhat simplified from previous versions to avoid second-guessing
user errors, yet also biased to point out the most likely root cause.
Fixes #10700
Change-Id: I16693e93cc8d8ca195e7742a222d640c262105b4
Reviewed-on: https://go-review.googlesource.com/9731
Reviewed-by: Russ Cox <rsc@golang.org>
The current code just checks the consistency (that the functab is correctly
sorted by PC, etc.) of the moduledata object that the runtime belongs to.
Change it to check all of them.
Change-Id: I544a44c5de7445fff87d3cdb4840ff04c5e2bf75
Reviewed-on: https://go-review.googlesource.com/9773
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When AttrByteSize is not present for a type, we can still determine the
size in two more cases: when the type is a Typedef referring to another
type, and when the type is a pointer and we know the default address
size.
entry.go: return after setting an error if the offset is out of range.
Change-Id: I63a922ca4e4ad2fc9e9be3e5b47f59fae7d0eb5c
Reviewed-on: https://go-review.googlesource.com/9663
Reviewed-by: Austin Clements <austin@google.com>
No code changes, but the test passes here.
And TryBots are happy.
Fixes #8662 maybe
Change-Id: Id37380f72a951c9ad7cf96c0db153c05167e62ed
Reviewed-on: https://go-review.googlesource.com/9778
Reviewed-by: Minux Ma <minux@golang.org>
The -exportheader option tells cgo to generate a header file declaring
exported functions. The header file is only created if there are, in
fact, some exported functions, so it also serves as a signal as to
whether there were any.
In future CLs the go tool will use this option to install header files
for packages that use cgo and export functions.
Change-Id: I5b04357d453a9a8f0e70d37f8f18274cf40d74c9
Reviewed-on: https://go-review.googlesource.com/9796
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Similar for linux/386 with GO386=387.
Change-Id: If8b6f8a0659a1b3e078d87a43fcfe8a38af20308
Reviewed-on: https://go-review.googlesource.com/9821
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change doesn't work perfectly on IPv6-only kernels, including
CLAT-enabled kernels, but it works well enough on IPv4-only kernels.
Fixes #10721.
Updates #10729.
Change-Id: I7db0e572e252aa0a9f9f54c8e557955077b72e44
Reviewed-on: https://go-review.googlesource.com/9777
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Also adds missing temporary file deletion.
Change-Id: Ia644b0898022e05d2f5232af38f51d55e40c6fb5
Reviewed-on: https://go-review.googlesource.com/9772
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Makes searching in source code easier.
Change-Id: Ie2e85934d23920ac0bc01d28168bcfbbdc465580
Reviewed-on: https://go-review.googlesource.com/9774
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Reviewed-by: Minux Ma <minux@golang.org>
Also copy doc comments from Go code to _cgo_export.h.
This is a step toward installing this generated file when using
-buildmode=c-archive or c-shared, so that C code can #include it.
Change-Id: I3a243f7b386b58ec5c5ddb9a246bb9f9eddc5fb8
Reviewed-on: https://go-review.googlesource.com/9790
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Already done for constants and funcs, but I didn't realize that some
global vars were also not in the global list. This fixes
go doc build.Default
Change-Id: I768bde13a400259df3e46dddc9f58c8f0e993c72
Reviewed-on: https://go-review.googlesource.com/9764
Reviewed-by: Andrew Gerrand <adg@golang.org>
Currently, the GC uses a moving average of recent scan work ratios to
estimate the total scan work required by this cycle. This is in turn
used to compute how much scan work should be done by mutators when
they allocate in order to perform all expected scan work by the time
the allocated heap reaches the heap goal.
However, our current scan work estimate can be arbitrarily wrong if
the heap topography changes significantly from one cycle to the
next. For example, in the go1 benchmarks, at the beginning of each
benchmark, the heap is dominated by a 256MB no-scan object, so the GC
learns that the scan density of the heap is very low. In benchmarks
that then rapidly allocate pointer-dense objects, by the time of the
next GC cycle, our estimate of the scan work can be too low by a large
factor. This in turn lets the mutator allocate faster than the GC can
collect, allowing it to get arbitrarily far ahead of the scan work
estimate, which leads to very long GC cycles with very little mutator
assist that can overshoot the heap goal by large margins. This is
particularly easy to demonstrate with BinaryTree17:
$ GODEBUG=gctrace=1 ./go1.test -test.bench BinaryTree17
gc #1 @0.017s 2%: 0+0+0+0+0 ms clock, 0+0+0+0/0/0+0 ms cpu, 4->262->262 MB, 4 MB goal, 1 P
gc #2 @0.026s 3%: 0+0+0+0+0 ms clock, 0+0+0+0/0/0+0 ms cpu, 262->262->262 MB, 524 MB goal, 1 P
testing: warning: no tests to run
PASS
BenchmarkBinaryTree17 gc #3 @1.906s 0%: 0+0+0+0+7 ms clock, 0+0+0+0/0/0+7 ms cpu, 325->325->287 MB, 325 MB goal, 1 P (forced)
gc #4 @12.203s 20%: 0+0+0+10067+10 ms clock, 0+0+0+0/2523/852+10 ms cpu, 430->2092->1950 MB, 574 MB goal, 1 P
1 9150447353 ns/op
Change this estimate to instead use the *current* scannable heap
size. This has the advantage of being based solely on the current
state of the heap, not on past densities or reachable heap sizes, so
it isn't susceptible to falling behind during these sorts of phase
changes. This is strictly an over-estimate, but it's better to
over-estimate and get more assist than necessary than it is to
under-estimate and potentially spiral out of control. Experiments with
scaling this estimate back showed no obvious benefit for mutator
utilization, heap size, or assist time.
This new estimate has little effect for most benchmarks, including
most go1 benchmarks, x/benchmarks, and the 6g benchmark. It has a huge
effect for benchmarks that triggered the bad pacer behavior:
name old mean new mean delta
BinaryTree17 10.0s × (1.00,1.00) 3.5s × (0.98,1.01) -64.93% (p=0.000)
Fannkuch11 2.74s × (1.00,1.01) 2.65s × (1.00,1.00) -3.52% (p=0.000)
FmtFprintfEmpty 56.4ns × (0.99,1.00) 57.8ns × (1.00,1.01) +2.43% (p=0.000)
FmtFprintfString 187ns × (0.99,1.00) 185ns × (0.99,1.01) -1.19% (p=0.010)
FmtFprintfInt 184ns × (1.00,1.00) 183ns × (1.00,1.00) (no variance)
FmtFprintfIntInt 321ns × (1.00,1.00) 315ns × (1.00,1.00) -1.80% (p=0.000)
FmtFprintfPrefixedInt 266ns × (1.00,1.00) 263ns × (1.00,1.00) -1.22% (p=0.000)
FmtFprintfFloat 353ns × (1.00,1.00) 353ns × (1.00,1.00) -0.13% (p=0.035)
FmtManyArgs 1.21µs × (1.00,1.00) 1.19µs × (1.00,1.00) -1.33% (p=0.000)
GobDecode 9.69ms × (1.00,1.00) 9.59ms × (1.00,1.00) -1.07% (p=0.000)
GobEncode 7.89ms × (0.99,1.01) 7.74ms × (1.00,1.00) -1.92% (p=0.000)
Gzip 391ms × (1.00,1.00) 392ms × (1.00,1.00) ~ (p=0.522)
Gunzip 97.1ms × (1.00,1.00) 97.0ms × (1.00,1.00) -0.10% (p=0.000)
HTTPClientServer 55.7µs × (0.99,1.01) 56.7µs × (0.99,1.01) +1.81% (p=0.001)
JSONEncode 19.1ms × (1.00,1.00) 19.0ms × (1.00,1.00) -0.85% (p=0.000)
JSONDecode 66.8ms × (1.00,1.00) 66.9ms × (1.00,1.00) ~ (p=0.288)
Mandelbrot200 4.13ms × (1.00,1.00) 4.12ms × (1.00,1.00) -0.08% (p=0.000)
GoParse 3.97ms × (1.00,1.01) 4.01ms × (1.00,1.00) +0.99% (p=0.000)
RegexpMatchEasy0_32 114ns × (1.00,1.00) 115ns × (0.99,1.00) ~ (p=0.070)
RegexpMatchEasy0_1K 376ns × (1.00,1.00) 376ns × (1.00,1.00) ~ (p=0.900)
RegexpMatchEasy1_32 94.9ns × (1.00,1.00) 96.3ns × (1.00,1.01) +1.53% (p=0.001)
RegexpMatchEasy1_1K 568ns × (1.00,1.00) 567ns × (1.00,1.00) -0.22% (p=0.001)
RegexpMatchMedium_32 159ns × (1.00,1.00) 159ns × (1.00,1.00) ~ (p=0.178)
RegexpMatchMedium_1K 46.4µs × (1.00,1.00) 46.6µs × (1.00,1.00) +0.29% (p=0.000)
RegexpMatchHard_32 2.37µs × (1.00,1.00) 2.37µs × (1.00,1.00) ~ (p=0.722)
RegexpMatchHard_1K 71.1µs × (1.00,1.00) 71.2µs × (1.00,1.00) ~ (p=0.229)
Revcomp 565ms × (1.00,1.00) 562ms × (1.00,1.00) -0.52% (p=0.000)
Template 81.0ms × (1.00,1.00) 80.2ms × (1.00,1.00) -0.97% (p=0.000)
TimeParse 380ns × (1.00,1.00) 380ns × (1.00,1.00) ~ (p=0.148)
TimeFormat 405ns × (0.99,1.00) 385ns × (0.99,1.00) -5.00% (p=0.000)
Change-Id: I11274158bf3affaf62662e02de7af12d5fb789e4
Reviewed-on: https://go-review.googlesource.com/9696
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
This tracks the number of scannable bytes in the allocated heap. That
is, bytes that the garbage collector must scan before reaching the
last pointer field in each object.
This will be used to compute a more robust estimate of the GC scan
work.
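A toy model of the accounting (illustrative names, not runtime code): each allocation contributes its type's pointer-bearing prefix to a running total of scannable heap bytes.

package main

import "fmt"

type typeInfo struct {
	size    uintptr // total object size
	ptrdata uintptr // bytes up to and including the last pointer field
}

type heapStats struct {
	heapScan uint64 // total scannable bytes in the allocated heap
}

func (h *heapStats) noteAlloc(t typeInfo) {
	// only the pointer-bearing prefix must be scanned; the scalar tail is not
	h.heapScan += uint64(t.ptrdata)
}

func main() {
	var h heapStats
	h.noteAlloc(typeInfo{size: 64, ptrdata: 16})
	h.noteAlloc(typeInfo{size: 32, ptrdata: 0}) // no-scan object adds nothing
	fmt.Println("scannable bytes:", h.heapScan)
}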
Change-Id: I1eecd45ef9cdd65b69d2afb5db5da885c80086bb
Reviewed-on: https://go-review.googlesource.com/9695
Reviewed-by: Russ Cox <rsc@golang.org>
The garbage collector predicts how much "scan work" must be done in a
cycle to determine how much work should be done by mutators when they
allocate. Most code doesn't care what units the scan work is in: it
simply knows that a certain amount of scan work has to be done in the
cycle. Currently, the GC uses the number of pointer slots scanned as
the scan work on the theory that this is the bulk of the time spent in
the garbage collector and hence reflects real CPU resource usage.
However, this metric is difficult to estimate at the beginning of a
cycle.
Switch to counting the total number of bytes scanned, including both
pointer and scalar slots. This is still less than the total marked
heap since it omits no-scan objects and no-scan tails of objects. This
metric may not reflect absolute performance as well as the count of
scanned pointer slots (though it still takes time to scan scalar
fields), but it will be much easier to estimate robustly, which is
more important.
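A toy comparison of the two units (illustrative only, not the runtime scanner): the old metric counts pointer slots scanned, the new one counts every byte walked up to the last pointer field, scalar slots included.

package main

import "fmt"

// object records, for one heap object, the portion that the scanner walks.
type object struct {
	scanBytes int64 // bytes up to and including the last pointer field
	ptrSlots  int64 // pointer slots inside that prefix
}

func main() {
	objs := []object{
		{scanBytes: 32, ptrSlots: 2},
		{scanBytes: 4096, ptrSlots: 1}, // mostly scalar, still counted in bytes
	}
	var oldMetric, newMetric int64
	for _, o := range objs {
		oldMetric += o.ptrSlots  // old unit: pointer slots scanned
		newMetric += o.scanBytes // new unit: total bytes scanned
	}
	fmt.Printf("pointer slots: %d, bytes scanned: %d\n", oldMetric, newMetric)
}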
Change-Id: Ie3a5eeeb0384a1ca566f61b2f11e9ff3a75ca121
Reviewed-on: https://go-review.googlesource.com/9694
Reviewed-by: Russ Cox <rsc@golang.org>
Currently, we only flush the per-P gcWork caches in gcMark, at the
beginning of mark termination. This is necessary to ensure that no
work is held up in these caches.
However, this flush happens after we update the GC controller state,
which depends on statistics about marked heap size and scan work that
are only updated by this flush. Hence, the controller is missing the
bulk of heap marking and scan work. This bug was introduced in commit
1b4025f, which introduced the per-P gcWork caches.
Fix this by flushing these caches before we update the GC controller
state. We continue to flush them at the beginning of mark termination
as well to be robust in case any write barriers happened between the
previous flush and entering mark termination, but this should be a
no-op.
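Schematically, the fix is an ordering change (toy types, not the runtime's): flush the per-P caches into the global statistics before the controller reads them; the later flush in mark termination remains as a safety net.

package main

import "fmt"

type gcWorkCache struct{ scanWork, bytesMarked int64 }

type controller struct{ scanWork, bytesMarked int64 }

// flush moves per-P totals into the controller's statistics.
func (c *controller) flush(caches []*gcWorkCache) {
	for _, w := range caches {
		c.scanWork += w.scanWork
		c.bytesMarked += w.bytesMarked
		w.scanWork, w.bytesMarked = 0, 0
	}
}

func main() {
	caches := []*gcWorkCache{{scanWork: 100, bytesMarked: 4096}, {scanWork: 50, bytesMarked: 2048}}
	var c controller
	c.flush(caches) // flush first...
	// ...so the controller update that follows sees the full totals
	fmt.Printf("controller sees scanWork=%d bytesMarked=%d\n", c.scanWork, c.bytesMarked)
	c.flush(caches) // the second flush at mark termination is now a no-op
}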
Change-Id: I8f0f91024df967ebf0c616d1c4f0c339c304ebaa
Reviewed-on: https://go-review.googlesource.com/9646
Reviewed-by: Russ Cox <rsc@golang.org>
Ramp up the delay on subsequent attempts. Fast builders have the same delay.
Not a perfect fix, but should make it better. And this is easy.
Fixes #9903 maybe
Fixes #10680 maybe
Change-Id: I967380c2cb8196e6da9a71116961229d37b36335
Reviewed-on: https://go-review.googlesource.com/9795
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Android has (had?) its own local DNS resolver daemon, also my fault:
007e987fee
And you access that via libc, not DNS.
Fixes #10714
Change-Id: Iaff752872ce19bb5c7771ab048fd50e3f72cb73c
Reviewed-on: https://go-review.googlesource.com/9793
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Improving the usability further.
Before:
$ go doc bytes.Read
doc: symbol Read not present in package bytes installed in "bytes"
$
After:
$ go doc bytes.Read
func (b *Buffer) Read(p []byte) (n int, err error)
Read reads the next len(p) bytes from the buffer or until the buffer is drained.
The return value n is the number of bytes read. If the buffer has no data to
return, err is io.EOF (unless len(p) is zero); otherwise it is nil.
func (r *Reader) Read(b []byte) (n int, err error)
$
Change-Id: I646511fada138bd09e9b39820da01a5ccef4a90f
Reviewed-on: https://go-review.googlesource.com/9656
Reviewed-by: Russ Cox <rsc@golang.org>
GOROOT is not dependably set.
When I first wrote this test, I thought it was a waste of time
because the function can't fail if the other environment functions
work, but I didn't want to add functionality without testing it.
Of course, the test broke, and I learned something: GOROOT is not
set on iOS or, to put it more broadly, the world continues to
surprise me with its complexity and horror, such as a version of
cat with syntax coloring.
In that vein, I built this test around smallpox.
Change-Id: Ifa6c218a927399d05c47954fdcaea1015e558fb6
Reviewed-on: https://go-review.googlesource.com/9791
Reviewed-by: Russ Cox <rsc@golang.org>
During development some tracing routines were added that are not
needed in the release. These included GCstarttimes, GCendtimes, and
GCprinttimes.
Fixes #10462
Change-Id: I0788e6409d61038571a5ae0cbbab793102df0a65
Reviewed-on: https://go-review.googlesource.com/9689
Reviewed-by: Austin Clements <austin@google.com>
Fixes #10221.
Change-Id: Ib23805494d8af1946360bfea767f9727e2504dc5
Reviewed-on: https://go-review.googlesource.com/7941
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Unfortunately Oracle Solaris does not have TCP_KEEPIDLE and
TCP_KEEPINTVL. TCP_KEEPIDLE is equivalent to TCP_KEEPALIVE_THRESHOLD,
but TCP_KEEPINTVL does not have a direct equivalent, so we don't set
TCP_KEEPINTVL any more.
Old Darwin versions also lack TCP_KEEPINTVL, but the code tries to set
it anyway so that it works on newer versions. We can't do that because
Oracle might assign the number illumos uses for TCP_KEEPINTVL to a
constant with a different meaning.
Unfortunately there's nothing we can do if we want to support both
illumos and Oracle Solaris with the same GOOS.
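A rough sketch of the resulting behaviour, assuming syscall exposes TCP_KEEPALIVE_THRESHOLD on Solaris (illustrative only, not the net package's code): set only the idle threshold, in milliseconds, and leave TCP_KEEPINTVL alone.

//go:build solaris

package keepalive

import (
	"syscall"
	"time"
)

// SetIdle sets the keep-alive idle threshold on a raw socket descriptor.
// The kernel expects the value in milliseconds.
func SetIdle(fd int, d time.Duration) error {
	msecs := int(d / time.Millisecond)
	return syscall.SetsockoptInt(fd, syscall.IPPROTO_TCP, syscall.TCP_KEEPALIVE_THRESHOLD, msecs)
}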
Updates #9614.
Change-Id: Id39eb5147f7afa8e951f886c0bf529d00f0e1bd4
Reviewed-on: https://go-review.googlesource.com/7690
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Mikio Hara <mikioh.mikioh@gmail.com>
Before CL 8214 (use .plt instead of .got on Solaris) Solaris used a
dynamic linking scheme that didn't permit lazy binding. To speed program
startup, Go binaries only used it for a small number of symbols required
by the runtime. Other symbols were resolved on demand on first use, and
were cached for subsequent use. This required some moderately complex
code in the syscall package.
CL 8214 changed the way dynamic linking is implemented, and now lazy
binding is supported. As now all symbols are resolved lazily by the
dynamic loader, there is no need for the complex code in the syscall
package that did the same. This CL makes Go programs link directly
with the necessary shared libraries and deletes the lazy-loading code
implemented in Go.
Change-Id: Ifd7275db72de61b70647242e7056dd303b1aee9e
Reviewed-on: https://go-review.googlesource.com/9184
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
ELF normally requires this, and the Solaris runtime loader will crash if we
don't do it.
Fixes Solaris build.
Change-Id: I0482eed890aff2d346136ae7f9caf8f094f502ed
Reviewed-on: https://go-review.googlesource.com/8216
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The linker always uses .plt for externals, so libcFunc is now an actual
external symbol instead of a pointer to one.
Fixes most of the breakage introduced in the previous CL.
Change-Id: I64b8c96f93127f2d13b5289b024677fd3ea7dbea
Reviewed-on: https://go-review.googlesource.com/8215
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Solaris requires all external procedures to be accessed through the
PLT. If 6l won't do it, /bin/ld will, so all the code written with .GOT
in mind won't work with the external linker.
This CL makes external linking work, opening the path to cgo support
on Solaris.
This CL breaks the Solaris build; this is fixed in subsequent CLs in
this series.
Change-Id: If370a79f49fdbe66d28b89fa463b4f3e91685f69
Reviewed-on: https://go-review.googlesource.com/8214
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
This change simplifies unnecessarily redundant error messages in tests.
There's no need to worry any more because package APIs now return
consistent, self-descriptive error values.
Also renames ambiguous test functions and makes use of test tables.
Change-Id: I7b61027607c4ae2a3cf605d08d58cf449fa27eb2
Reviewed-on: https://go-review.googlesource.com/9662
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Mikio Hara <mikioh.mikioh@gmail.com>
This change makes TestDualStack{TCP,UDP}Listener work more reliably by
attempting to reserve an available service port before testing.
Also simplifies error messages in tests.
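A minimal sketch of how such a reservation can work (the names here are illustrative, not the test's actual helpers): ask the kernel for an ephemeral port, record it, and release the listener before the test proper runs.

package main

import (
	"fmt"
	"net"
)

// availablePort asks the kernel for a currently unused port on the loopback
// interface and returns it as a string.
func availablePort(network string) (string, error) {
	ln, err := net.Listen(network, "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()
	_, port, err := net.SplitHostPort(ln.Addr().String())
	return port, err
}

func main() {
	port, err := availablePort("tcp")
	if err != nil {
		fmt.Println("no port available:", err)
		return
	}
	fmt.Println("reserved port:", port)
}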
Fixes #5001.
Change-Id: If13b0d0039878c9bd32061a0440664e4fa7abaf7
Reviewed-on: https://go-review.googlesource.com/9661
Reviewed-by: Ian Lance Taylor <iant@golang.org>
isn't (0, 0).
Also fix a s/b.Min.X/b.Max.X/ typo in bounds checking.
Fixes #10676
Change-Id: Ie5ff7ec20ca30367a8e65d32061959a2d8e089e9
Reviewed-on: https://go-review.googlesource.com/9712
Reviewed-by: Rob Pike <r@golang.org>
An ELF linker handles a PC-relative reference to an STT_FUNC defined in a
shared library by building a PLT entry and referring to that, so do the
same in 6l.
Fixes #10690
Change-Id: I061a96fd4400d957e301d0ac86760ce256910e1d
Reviewed-on: https://go-review.googlesource.com/9711
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Catch some malformed pipelines at parsing time.
The current code accepts pipelines such as:
{{12|.}}
{{"hello"|print|false}}
{{.|"blah blah"}}
Such pipelines generate panic in html/template at execution time.
Add an extra check to verify all the commands of the pipeline are executable
(except for the first one).
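A quick check of the new behaviour (assuming the stricter parser described above): such a pipeline should now be rejected by Parse instead of panicking later at execution time.

package main

import (
	"fmt"
	"text/template"
)

func main() {
	_, err := template.New("t").Parse(`{{12|.}}`)
	fmt.Println("parse error:", err) // expected to be non-nil with this change
}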
Fixes #10610
Change-Id: Id72236ba8f76a59fa284fe3d4c2cb073e50b51f1
Reviewed-on: https://go-review.googlesource.com/9626
Reviewed-by: Rob Pike <r@golang.org>
This makes the intermediate object file a little bigger but it doesn't waste
any space in the final shared library.
Fixes #10691
Change-Id: Ic51a571d60291f1ac2dad1b50dba4679643168ae
Reviewed-on: https://go-review.googlesource.com/9710
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This changes the action graph when shared libraries are involved to always have
an action for the shared library (which does nothing when the shared library
is up to date).
Change-Id: Ibbc70fd01cbb3f4e8c0ef96e62a151002d446144
Reviewed-on: https://go-review.googlesource.com/8934
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Fixes #10660
Fix the clang-only builder by passing -extld down to the linker when needed.
The build passed on most hosts because gcc is almost always present. The bug
was verified by symlinking bin/false in place of gcc in my $PATH and running
the build.
Also, resolve a TODO and move the support logic into its own function.
Tested manually
env CC=clang-3.5 ./all.bash # linux/amd64
env CC=gcc-4.8 ./all.bash # linux/amd64
./all.bash # linux/amd64
./all.bash # darwin/amd64
Change-Id: I4e27a1119356e295500a0d19ad7a4ec14207bf10
Reviewed-on: https://go-review.googlesource.com/9526
Run-TryBot: Dave Cheney <dave@cheney.net>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I just submitted the real fix for #10641.
This reverts commit 3120adc212.
Change-Id: I55051515f697e27ca887ed21c2ac985f0b9b062b
Reviewed-on: https://go-review.googlesource.com/9720
Reviewed-by: Joel Sing <jsing@google.com>
No semantic change.
Fixes #8708.
Change-Id: Ieda04a86a19bb69bfc2519d381a2f025e7cb8279
Reviewed-on: https://go-review.googlesource.com/9740
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I forgot there is already a ptrSize constant.
Rename field to avoid some confusion.
Change-Id: I098fdcc8afc947d6c02c41c6e6de24624cc1c8ff
Reviewed-on: https://go-review.googlesource.com/9700
Reviewed-by: Austin Clements <austin@google.com>
The old one was inferior.
Fixes #10695.
Change-Id: Ia7fb88c9ceb1b10197b77a54f729865385288d98
Reviewed-on: https://go-review.googlesource.com/9709
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
When a parse error occurred, the lexing goroutine would lie idle.
It's not likely a problem but if the program is for some reason
accepting badly formed data repeatedly, it's wasteful.
The solution is easy: Just drain the input on error. We know this
will succeed because the input is always a string and is therefore
guaranteed finite.
With debugging prints in the package tests I've shown this is effective,
shutting down 79 goroutines that would otherwise linger, out of 123 total.
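A minimal, self-contained sketch of the idea (not the package's actual lexer): on error, drain the remaining items so the producing goroutine can run to completion instead of blocking forever on its channel.

package main

import "fmt"

// lex stands in for the lexing goroutine: it sends items until the input ends.
func lex(input string, items chan<- rune) {
	for _, r := range input {
		items <- r // blocks until someone receives
	}
	close(items)
}

// drain discards the remaining items; the input is a finite string, so this
// always terminates.
func drain(items <-chan rune) {
	for range items {
	}
}

func main() {
	items := make(chan rune)
	go lex("{{malformed", items)
	<-items // pretend the parser hit an error after the first item
	drain(items)
	fmt.Println("lexing goroutine can now exit")
}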
Fixes #10574.
Change-Id: I8aa536e327b219189a7e7f604a116fa562ae1c39
Reviewed-on: https://go-review.googlesource.com/9658
Reviewed-by: Russ Cox <rsc@golang.org>
Freezetheworld still has stuff to do when gomaxprocs=1.
In particular, signals can come in on other Ms (like the GC M, say)
and the single user M is still running.
Fixes #10546
Change-Id: I2f07f17d1c81e93cf905df2cb087112d436ca7e7
Reviewed-on: https://go-review.googlesource.com/9551
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
When emulating ARM FSQRT instruction, the sqrt function itself
should not use any floating point arithmetics, otherwise it will
clobber the user software FP registers.
Fortunately, the sqrt function only uses floating point instructions
to test for corner cases, so it's easy to make that function do all
of its work using pure integer arithmetic. I've verified that after
this change, runtime.stepflt and runtime.sqrt don't contain any
calls to _sfloat. (Perhaps we should add //go:nosfloat to make
the compiler enforce this?)
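An illustrative way to test the corner cases with integer operations only, via the raw bit pattern (this is not the runtime's softfloat code):

package main

import (
	"fmt"
	"math"
)

// classify inspects only the IEEE 754 bit pattern: no floating point
// comparisons are needed to detect the sqrt corner cases.
func classify(x float64) string {
	bits := math.Float64bits(x)
	exp := int((bits >> 52) & 0x7FF)
	frac := bits & ((1 << 52) - 1)
	sign := bits >> 63
	switch {
	case exp == 0x7FF && frac != 0:
		return "NaN"
	case exp == 0x7FF:
		return "Inf"
	case exp == 0 && frac == 0:
		return "zero"
	case sign == 1:
		return "negative (sqrt is NaN)"
	default:
		return "normal or subnormal"
	}
}

func main() {
	for _, x := range []float64{4, 0, math.Inf(1), math.NaN(), -1} {
		fmt.Printf("%v: %s\n", x, classify(x))
	}
}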
Fixes #10641.
Change-Id: Ida4742c49000fae4fea4649f28afde630ce4c576
Signed-off-by: Shenghou Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/9570
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Keith Randall <khr@golang.org>
OSQRT currently produces incorrect results when used on arm with softfloat.
Disable it on GOARM=5 until the actual problem is found and fixed.
Updates #10641
Change-Id: Ia6f6879fbbb05cb24399c2feee93c1be21113e73
Reviewed-on: https://go-review.googlesource.com/9524
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Whenever we introduce a new GOARCH, older Go releases won't
recognize it, and this causes trouble for both our users and
us (we need to add unnecessary build tags).
Go 1.5 has introduced three new GOARCHes so far: arm64, ppc64,
and ppc64le. We can take the time to introduce GOARCHes for all
common architectures that Go might support in the future and
avoid the problem.
Fixes #10165.
Change-Id: Ida4f9112897cfb1e85b06538db79125955ad0f4c
Reviewed-on: https://go-review.googlesource.com/9644
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Update #10652
This proposal deletes cmd/internal/ld.Biobuf and replaces all uses with
cmd/internal/obj.Biobuf. As cmd/internal/ld already imported cmd/internal/obj,
no additional dependencies are created.
Notes:
- ld.Boffset included more checks, so it was merged into obj.Boffset
- obj.Bflush was removed in 8d16253c90, so all calls to ld.Bflush were
replaced with obj.Biobuf.Flush.
- Almost all of this change was prepared with sed.
Change-Id: I814854d52f5729a5a40c523c8188e465246b88da
Reviewed-on: https://go-review.googlesource.com/9660
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dave Cheney <dave@cheney.net>
Write should return ErrWriteAfterClose instead
of ErrWriteTooLong when called after Close.
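A quick demonstration of the corrected behaviour:

package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	hdr := &tar.Header{Name: "hello.txt", Mode: 0600, Size: 5}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write([]byte("hello")); err != nil {
		panic(err)
	}
	tw.Close()
	_, err := tw.Write([]byte("more"))
	fmt.Println(err == tar.ErrWriteAfterClose) // true with this change
}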
Change-Id: If5ec4ef924e4c56489e0d426976f7e5fad79be9b
Reviewed-on: https://go-review.googlesource.com/9259
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This adds a field to the runtime type structure that records the size
of the prefix of objects of that type containing pointers. Any data
after this offset is scalar data.
This is necessary for shrinking the type bitmaps to 1 bit and will
help the garbage collector efficiently estimate the amount of heap
that needs to be scanned.
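An illustrative example of the idea (not the runtime structure itself): for the struct below, every pointer sits in a prefix of the object, and the bytes after that prefix never need scanning.

package main

import (
	"fmt"
	"unsafe"
)

type T struct {
	name *string  // pointer field
	next *T       // pointer field
	n    int64    // scalar tail starts here
	pad  [4]int64 // more scalar data
}

func main() {
	var t T
	prefix := unsafe.Offsetof(t.n) // end of the pointer-bearing prefix
	size := unsafe.Sizeof(t)
	fmt.Printf("size=%d bytes, pointer prefix=%d bytes, scalar tail=%d bytes\n",
		size, prefix, size-prefix)
}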
Change-Id: I1318d79e6360dca0ac980245016c562e61f52ff5
Reviewed-on: https://go-review.googlesource.com/9691
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Move the one instance of type structure decoding in the linker that
doesn't live in decodesym.go into decodesym.go.
Change-Id: Ic6a23500deb72f0e9c8227ab611511e9781fac70
Reviewed-on: https://go-review.googlesource.com/9690
Reviewed-by: Russ Cox <rsc@golang.org>
shouldtriggergc is slightly expensive due to the call overhead
and the use of an atomic. This CL reduces the number of times a check
is made for whether a GC should be done, from once at each allocation
to once when a span is allocated. Since shouldtriggergc is an
important abstraction, simply hand-inlining it, along with its
atomic instruction, would lose the abstraction.
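A schematic sketch of the change (toy code, not the allocator): the check keeps its abstraction, including the atomic load, but is made on the span-allocation path rather than on every object allocation.

package main

import (
	"fmt"
	"sync/atomic"
)

const gcTrigger = 4 << 20 // bytes

var heapLive int64

// shouldTriggerGC keeps the abstraction: one atomic load per call.
func shouldTriggerGC() bool {
	return atomic.LoadInt64(&heapLive) >= gcTrigger
}

// allocSpan is the slower path; the trigger check now lives here,
// instead of on every individual object allocation.
func allocSpan(bytes int64) {
	atomic.AddInt64(&heapLive, bytes)
	if shouldTriggerGC() {
		fmt.Println("would start a GC cycle")
	}
}

func main() {
	for i := 0; i < 70; i++ {
		allocSpan(64 << 10)
	}
}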
Change-Id: Ia3210655b4b3d433f77064a21ecb54e4d9d435f7
Reviewed-on: https://go-review.googlesource.com/9403
Reviewed-by: Austin Clements <austin@google.com>
At the end of lexInsideAction(), we return lexInsideAction: this is the default
behaviour when we are still parsing an action. But some switch branches return
lexInsideAction too.
So let's ensure code consistency by always reaching the end of the
lexInsideAction function when needed.
Change-Id: I7e9d8d6e51f29ecd6db6bdd63b36017845d95368
Reviewed-on: https://go-review.googlesource.com/9441
Reviewed-by: Rob Pike <r@golang.org>
We shouldn't sort the slots array, as it is used each time the
test is run. Tests after the first should continue to use the
unsorted ordering.
Note that this doesn't fix the flaky test. Just a bug I saw
while investigating.
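One common way to avoid this kind of bug (a sketch, not the test's actual code) is to sort a copy, so the shared slice keeps its original order for later runs:

package main

import (
	"fmt"
	"sort"
)

var slots = []int{3, 1, 2} // shared across test runs; must stay unsorted

func sortedSlots() []int {
	s := append([]int(nil), slots...) // copy first
	sort.Ints(s)
	return s
}

func main() {
	fmt.Println(sortedSlots(), slots) // [1 2 3] [3 1 2]
}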
Change-Id: Ic03cca637829d569d50d3a2278d19410d4dedba9
Reviewed-on: https://go-review.googlesource.com/9637
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
global color table.
Change-Id: Ia38f75708ed5e5b430680a1eecafb4fc8047269c
Reviewed-on: https://go-review.googlesource.com/9467
Reviewed-by: Rob Pike <r@golang.org>
The script was renamed in b3000b6.
Change-Id: I45ecafff7400e4bff14f31906278609abf2bcb9f
Reviewed-on: https://go-review.googlesource.com/9667
Reviewed-by: Andrew Gerrand <adg@golang.org>
Ideal constants in the template package are a little different from Go.
This is a case that slipped through the cracks: A huge integer number
was accepted as a floating-point number, but this loses precision
and is confusing. Also, the code in the template package (as opposed
to the parse package) wasn't expecting it.
Root this out at the source: If an integer doesn't fit an int64 or uint64,
complain right away.
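A small check of the new behaviour (assuming the stricter constant handling): an integer literal too large for both int64 and uint64 should now be reported at parse time rather than silently becoming an imprecise float.

package main

import (
	"fmt"
	"text/template"
)

func main() {
	_, err := template.New("t").Parse(`{{print 12345678901234567890123456789}}`)
	fmt.Println("error:", err) // expected to be non-nil with this change
}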
Change-Id: I375621e6f5333c4d53f053a3c84a9af051711b7a
Reviewed-on: https://go-review.googlesource.com/9651
Reviewed-by: Russ Cox <rsc@golang.org>
The siz argument to both runtime.newproc and runtime.deferproc is
int32, not uintptr. This problem won't manifest on little-endian
systems because that stack slot is uintptr sized anyway. However,
on big-endian systems, it will make a difference.
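An illustration of why the width mismatch only bites on big-endian targets: a 32-bit value stored into a 64-bit slot lands in different bytes depending on byte order, so reading the slot at the wrong width stays harmless on little-endian machines but not on big-endian ones.

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	const siz uint32 = 24
	slot := make([]byte, 8) // a uintptr-sized stack slot on a 64-bit system

	binary.LittleEndian.PutUint32(slot, siz)
	fmt.Println("little-endian read as 64-bit:", binary.LittleEndian.Uint64(slot)) // 24

	for i := range slot {
		slot[i] = 0
	}
	binary.BigEndian.PutUint32(slot, siz)
	fmt.Println("big-endian read as 64-bit:   ", binary.BigEndian.Uint64(slot)) // 24<<32, not 24
}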
Change-Id: I2351d1ec81839abe25375cff95e327b80764c2b5
Reviewed-on: https://go-review.googlesource.com/9647
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The current parser ignores obvious errors such as:
{{0.1.E}}
{{true.any}}
{{"hello".wrong}}
{{nil.E}}
The common problem is that a chain is built from
a literal value. It then panics at execution time.
Furthermore, a double dot triggers the same behavior:
{{..E}}
Addresses a TODO left in Tree.operand to catch these
errors at parsing time.
Note that identifiers can include a '.', and pipelines
could return an object which a field can be derived
from (like a variable), so they are excluded from the check.
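A quick check of the stricter operand parsing described above (expected behaviour with this change): each of these templates should now fail to parse instead of panicking at execution time.

package main

import (
	"fmt"
	"text/template"
)

func main() {
	for _, src := range []string{`{{true.any}}`, `{{"hello".wrong}}`, `{{..E}}`} {
		_, err := template.New("t").Parse(src)
		fmt.Printf("%-20s -> %v\n", src, err) // each expected to be non-nil
	}
}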
Fixes #10615
Change-Id: I903706d1c17861b5a8354632c291e73c9c0bc4e1
Reviewed-on: https://go-review.googlesource.com/9621
Reviewed-by: Rob Pike <r@golang.org>
There are three problems:
1. There is no CR at the end of the message.
2. The message is unconditionally printed.
3. The message is printed to stdout.
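A tiny sketch of the shape of the fix (the actual message and guarding condition are not shown in this log, so both are hypothetical here): print conditionally, to stderr, with a trailing newline.

package main

import (
	"fmt"
	"os"
)

func main() {
	verbose := os.Getenv("DEBUG") != "" // hypothetical guard; the real condition differs
	if verbose {
		fmt.Fprintf(os.Stderr, "diagnostic message\n") // stderr, newline-terminated
	}
}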
Change-Id: Ib2d880eea03348e8a69720aad7752302a75bd277
Reviewed-on: https://go-review.googlesource.com/9622
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>