builder.copyFile ensures that the destination is an object file. This
wouldn't be true if we are not writing to a regular file and the copy
fails. Check whether the destination is an object file only if we are
writing to a regular file. When removing the file, ensure that it is a
regular file so that device files and the like aren't removed when
running as a user with sufficient privileges.
Fixes #12407
Change-Id: Ie86ce9770fa59aa56fc486a5962287859b69db3d
Reviewed-on: https://go-review.googlesource.com/16585
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This introduces a recursive variant of the go:nowritebarrier
annotation that prohibits write barriers not only in the annotated
function, but in all functions it calls, recursively. The error
message gives the shortest call stack from the annotated function to
the function containing the prohibited write barrier, including the
names of the functions and the line numbers of the calls.
To demonstrate the annotation, we apply it to gcmarkwb_m, the write
barrier itself.
This is a new annotation rather than a modification of the existing
go:nowritebarrier annotation because, for better or worse, there are
many go:nowritebarrier functions that do call functions with write
barriers. In most of these cases this is benign because the annotation
was conservative, but it means we cannot simply co-opt the existing
annotation.
Change-Id: I225ca483c8f699e8436373ed96349e80ca2c2479
Reviewed-on: https://go-review.googlesource.com/16554
Reviewed-by: Keith Randall <khr@golang.org>
The code works without the newline, but it looks funny:
func _cgoexp_15afe6549f62_GoFn(a unsafe.Pointer, n int32) { fn := GoFn
This adds a newline after the '{'.
Change-Id: I6c465abe16f47924426d1b22b91004b3a3586ebd
Reviewed-on: https://go-review.googlesource.com/16612
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The Server's server goroutine was panicking (but recovering) when
cleaning up after handling a request. It was pretty harmless (it just
closed that one connection and didn't kill the whole process) but it
was distracting.
Updates #13135
Change-Id: I2a0ce9e8b52c8d364e3f4ce245e05c6f8d62df14
Reviewed-on: https://go-review.googlesource.com/16572
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
The width of the type of an external variable defined with a type
literal may not be set when the instrumentation pass is run. There are
two cases in the standard library that fail without the call to dowidth:
../../../src/encoding/base32/base32.go:322: constant -1000000000 overflows uintptr
../../../src/encoding/base32/base32.go:329: constant -1000000000 overflows uintptr
../../../src/encoding/json/encode.go:385: constant -1000000000 overflows uintptr
../../../src/encoding/json/encode.go:387: constant -1000000000 overflows uintptr
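For illustration, the kind of declaration involved looks like this (a
hypothetical example, not one of the declarations behind the errors above):
a package-level ("external") variable whose type is an unnamed type literal,
whose width the instrumentation pass needs.

    package main

    import "fmt"

    // Hypothetical package-level variable declared with a type literal
    // ([256]int8 is not a named type). Under -race, the instrumentation
    // pass needs this type's width, which is what dowidth computes.
    var decodeMap = [256]int8{'A': 0, 'B': 1, 'C': 2}

    func main() {
            fmt.Println(decodeMap['C'])
    }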
Change-Id: I7c3334f7decdb7488595ffe4090cd262d7334283
Reviewed-on: https://go-review.googlesource.com/16331
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
On older versions of GCC we need to pass a file name before GCC will
report an unrecognized option.
Fixes #13065.
Change-Id: I7ed34c01a006966a446059025f7d10235c649072
Reviewed-on: https://go-review.googlesource.com/16589
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Handling of special records for tiny allocations has two problems:
1. Once we queue a finalizer we mark the object. As a result, any
subsequent finalizers for the same object will not be queued
during this GC cycle. If we have 16 finalizers set up (the worst case),
finalization will take 16 GC cycles. This is what caused the misbehavior
of tinyfin.go. The actual flakiness was caused by the fact that fing
is asynchronous and doesn't always run before the check.
2. If a tiny block has both finalizer and profile specials,
it is possible that we queue the finalizer, keep the object live,
and yet free the profile record. As a result, the heap profile can be skewed.
Fix both issues by analyzing all special records for a single object at once.
Also, make tinyfin test stricter and remove reliance on real time.
Also, add a test for problem 2. Currently the heap profile misses about
half of the live memory.
Fixes #13100
Change-Id: I9ae4dc1c44893724138a4565ca5cae29f2e97544
Reviewed-on: https://go-review.googlesource.com/16591
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Currently the GC work buffers are only 256 bytes and hence can record
only 24 64-bit pointers. They were reduced from 4K in commits db7fd1c
and a15818f as a way to minimize the amount of work the per-P workbuf
caches could "hide" from the mark phase and carry in to the mark
termination phase. However, this approach wasn't very robust and we
later added a "mark 2" phase to address this problem head-on.
Because of mark 2, there's now no benefit to having very small work
buffers. But there are plenty of downsides: small work buffers
increase contention on the work lists, increase the frequency and
hence net overhead of acquiring and releasing work buffers, and
somewhat increase memory overhead of the GC.
This commit expands work buffers back to 4K (504 64-bit pointers).
This reduces the rate of writes to work.full in the garbage benchmark
from a peak of ~780,000 writes/sec to a peak of ~32,000 writes/sec.
This has negligible effect on the go1 benchmarks. It slightly slows
down the garbage benchmark.
name old time/op new time/op delta
XBenchGarbage-12 5.37ms ± 5% 5.60ms ± 2% +4.37% (p=0.000 n=20+20)
Change-Id: Ic9cc28e7a125d23d9faf4f5e690fb8aa9bcdfb28
Reviewed-on: https://go-review.googlesource.com/15893
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently, assists are non-preemptible, which means a heavily
assisting G can block other Gs from running. At the beginning of a GC
cycle, it can also delay scang, which will spin until the assist is
done. Since scanning is currently done sequentially, this can
seriously extend the length of the scan phase.
Fix this by making assists preemptible. Since the assist holds work
buffers and runs on the system stack, this must be done cooperatively:
we make gcDrainN return on preemption, and make the assist return from
the system stack and voluntarily Gosched.
This is a prerequisite to enlarging the work buffers. Without this
change, the delays and spinning in scang increase significantly.
This has no effect on the go1 benchmarks.
name old time/op new time/op delta
XBenchGarbage-12 5.72ms ± 4% 5.37ms ± 5% -6.11% (p=0.000 n=20+20)
Change-Id: I829e732a0f23b126da633516a1a9ec1a508fdbf1
Reviewed-on: https://go-review.googlesource.com/15894
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
GC assists must block until the assist can be satisfied (either
through stealing credit or doing work) or the GC cycle ends.
Currently, this is implemented as a retry loop with a 100 µs delay.
This obviously isn't ideal, as it wastes CPU and delays mutator
execution. It also has the somewhat peculiar downside that sleeping a
G requires allocation, and this requires working around recursive
allocation.
Replace this timed delay with a proper scheduling queue. When an
assist can't be satisfied immediately, it adds the allocating G to a
queue and parks it. Any time background scan credit is flushed, it
consults this queue, directly satisfies the debt of queued assists,
and wakes up satisfied assists before flushing any remaining credit to
the background credit pool.
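Roughly, the queue behaves like the user-level sketch below (hypothetical
types and names; the real implementation parks the G with gopark and tracks
per-G assist debt rather than using channels):

    package main

    import (
            "fmt"
            "sync"
            "time"
    )

    type assistQueue struct {
            mu      sync.Mutex
            pool    int64           // background credit pool
            debts   []int64         // outstanding debt of each parked assist
            waiters []chan struct{} // closed when the matching assist is satisfied
    }

    // assist blocks until debt is covered by stolen or flushed credit.
    func (q *assistQueue) assist(debt int64) {
            q.mu.Lock()
            if q.pool >= debt { // steal credit immediately
                    q.pool -= debt
                    q.mu.Unlock()
                    return
            }
            debt -= q.pool // steal what is there, queue for the rest
            q.pool = 0
            done := make(chan struct{})
            q.debts = append(q.debts, debt)
            q.waiters = append(q.waiters, done)
            q.mu.Unlock()
            <-done // "park" until a flush satisfies the remaining debt
    }

    // flush satisfies parked assists first, then banks the remainder.
    func (q *assistQueue) flush(credit int64) {
            q.mu.Lock()
            for len(q.debts) > 0 && credit >= q.debts[0] {
                    credit -= q.debts[0]
                    close(q.waiters[0]) // wake the satisfied assist
                    q.debts, q.waiters = q.debts[1:], q.waiters[1:]
            }
            q.pool += credit
            q.mu.Unlock()
    }

    func main() {
            q := &assistQueue{}
            var wg sync.WaitGroup
            wg.Add(1)
            go func() { defer wg.Done(); q.assist(100) }()
            time.Sleep(10 * time.Millisecond) // let the assist park
            q.flush(150)                      // covers the 100 debt; 50 is banked
            wg.Wait()
            fmt.Println("banked credit:", q.pool)
    }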
No effect on the go1 benchmarks. Slightly speeds up the garbage
benchmark.
name old time/op new time/op delta
XBenchGarbage-12 5.81ms ± 1% 5.72ms ± 4% -1.65% (p=0.011 n=20+20)
Updates #12041.
Change-Id: I8ee3b6274dd097b12b10a8030796a958a4b0e7b7
Reviewed-on: https://go-review.googlesource.com/15890
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This eliminates many write barriers in the scheduler code that are
unnecessary and will interfere with upcoming changes where the garbage
collector will have to invoke run queue functions in contexts that
must not have write barriers.
Change-Id: I702d0ac99cfd00ffff406e7362917db6a43e7e55
Reviewed-on: https://go-review.googlesource.com/16556
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
The error message should indicate the name of the unset variable,
rather than the value. The value will always be empty.
Change-Id: I6f6c165074dfce857b6523703a890d205423cd28
Reviewed-on: https://go-review.googlesource.com/16555
Reviewed-by: David Crawshaw <crawshaw@golang.org>
golang.org/cl/16436 added a local symbol for every global function, but also
added a duplicate entry for the global symbol. Surprisingly this hasn't caused
any noticeable problems, but it's still wrong.
Change-Id: Icd3906760f8aaf7bef31ffd4f2d866d73d36dc2c
Reviewed-on: https://go-review.googlesource.com/16581
Reviewed-by: Ian Lance Taylor <iant@golang.org>
* use new(int32) to be pedantic about documented SetFinalizer rules:
"The argument x must be a pointer to an object allocated by calling
new or by taking the address of a composite literal"
* remove the amd64-only restriction. The GC is fully precise everywhere
now, even on 32-bit. (keep the gccgo restriction, though)
* remove a data race (perhaps the actual bug) and use atomic.LoadInt32
for the final check. The race detector is now happy, too.
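The pattern the test now follows looks roughly like this (a standalone
sketch, not the test itself):

    package main

    import (
            "fmt"
            "runtime"
            "sync/atomic"
            "time"
    )

    func main() {
            var finalized int32

            // Per the SetFinalizer rules, the object comes from new.
            p := new(int32)
            runtime.SetFinalizer(p, func(*int32) {
                    atomic.StoreInt32(&finalized, 1)
            })

            p = nil // drop the only reference

            // Finalizers run asynchronously on a separate goroutine, so use
            // an atomic load in the check instead of racing on a plain bool,
            // and don't rely on a single GC cycle or on real time.
            for atomic.LoadInt32(&finalized) == 0 {
                    runtime.GC()
                    time.Sleep(time.Millisecond)
            }
            fmt.Println("finalizer ran")
    }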
Updates #13100
Change-Id: I8d05c0ac4f046af9ba05701ad709c57984b34893
Reviewed-on: https://go-review.googlesource.com/16535
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
To avoid collisions with what existing code may already be doing.
Change-Id: Ice639440aafc0724714c25333d90a49954372230
Reviewed-on: https://go-review.googlesource.com/16503
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change makes Dial, Listen and ListenPacket with an invalid port
fail regardless of the GODEBUG=netdns setting.
Note that cgoLookupPort with an out-of-range literal number may return
either the lower or upper bound value, 0 or 65535, with no error on
some platforms.
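A minimal illustration of the intended behaviour (not a test from this CL):

    package main

    import (
            "fmt"
            "net"
    )

    func main() {
            // 65536 is outside the valid 0-65535 range, so this fails no
            // matter which resolver GODEBUG=netdns selects.
            _, err := net.Listen("tcp", "127.0.0.1:65536")
            fmt.Println(err)
    }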
Fixes #11715.
Change-Id: I43f9c4fb5526d1bf50b97698e0eb39d29fd74c35
Reviewed-on: https://go-review.googlesource.com/12447
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Also, new tests are added, so perhaps this CL corrects
even more broken EvalSymlinks behaviour.
Fixes #12451
Change-Id: I81b9d92bab74bcb8eca6db6633546982fe5cec87
Reviewed-on: https://go-review.googlesource.com/16192
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The GNU binutils recently picked up support for new 386/amd64
relocations. Add support for them in the Go linker when doing an
internal link.
The 386 relocation R_386_GOT32X was proposed in
https://groups.google.com/forum/#!topic/ia32-abi/GbJJskkid4I . It can
be treated as identical to the R_386_GOT32 relocation.
The amd64 relocations R_X86_64_GOTPCRELX and R_X86_64_REX_GOTPCRELX were
proposed in
https://groups.google.com/forum/#!topic/x86-64-abi/n9AWHogmVY0 . They
can both be treated as identical to the R_X86_64_GOTPCREL relocation.
The purpose of the new relocations is to permit additional linker
relaxations in some cases. We do not attempt to support those cases.
While we're at it, remove the unused and in some cases out of date
_COUNT names from ld/elf.go.
Fixes #13114.
Change-Id: I34ef07f6fcd00cdd2996038ecf46bb77a49e968b
Reviewed-on: https://go-review.googlesource.com/16529
Reviewed-by: Minux Ma <minux@golang.org>
The code is meant to return "<nil>", but because of a make([]Type, 8)
call that should be make([]Type, 0, 8), the nil Type happens to
already appear in the array.
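For readers who don't have the distinction at hand, a generic sketch (using
int in place of the package's Type):

    package main

    import "fmt"

    func main() {
            a := make([]int, 8)    // len 8, cap 8: eight zero values already present
            b := make([]int, 0, 8) // len 0, cap 8: empty, elements only appear via append
            fmt.Println(len(a), cap(a)) // 8 8
            fmt.Println(len(b), cap(b)) // 0 8
    }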
Change-Id: I2db140046e52f27db1b0ac84bde2b6680677dd95
Reviewed-on: https://go-review.googlesource.com/16464
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Noticed by cmd/vet.
Expected values array produced by Python instead of Keisan because:
1) Keisan's website calculator is painfully difficult to copy/paste
values into and out of, and
2) after tediously computing e^(vf[i] * 10) - 1 via Keisan I
discovered that Keisan computing vf[i]*10 in a higher precision was
giving substantially different output values.
Also, testing uses "close" instead of "veryclose" because 386's
assembly implementation produces values for some of the test cases
that fail "veryclose". Curiously, Expm1(vf[i]*10) is identical to
Exp(vf[i]*10)-1 on 386, whereas with the portable implementation
they're only "veryclose".
Investigating these questions is left to someone else. I just wanted
to fix the cmd/vet warning.
Fixes #13101.
Change-Id: Ica8f6c267d01aa4cc31f53593e95812746942fbc
Reviewed-on: https://go-review.googlesource.com/16505
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Klaus Post <klauspost@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Currently the concurrent root scan is performed in its entirety by the
GC coordinator before entering concurrent mark (which enables GC
workers). This scan is done sequentially, which can prolong the scan
phase, delay the mark phase, and means that the scan phase does not
obey the 25% CPU goal. Furthermore, there's no need to complete the
root scan before starting marking (in fact, we already allow GC
assists to happen during the scan phase), so this acts as an
unnecessary barrier between root scanning and marking.
This change shifts the root scan work out of the GC coordinator and
into the GC workers. The coordinator simply sets up the scan state and
enqueues the right number of root scan jobs. The GC workers then drain
the root scan jobs prior to draining heap scan jobs.
This parallelizes the root scan process, makes it obey the 25% CPU
goal, and effectively eliminates root scanning as an isolated phase,
allowing the system to smoothly transition from root scanning to heap
marking. This also eliminates a major non-STW responsibility of the GC
coordinator, which will make it easier to switch to a decentralized
state machine. Finally, it puts us in a good position to perform root
scanning in assists as well, which will help satisfy assists at the
beginning of the GC cycle.
This is mostly straightforward. One tricky aspect is that we have to
deal with preemption deadlock, where two non-preemptible goroutines
are trying to preempt each other to perform a stack scan. Given the
context where this happens, the only instance of this is two
background workers trying to scan each other. We avoid this by simply
not scanning the stacks of background workers during the concurrent
phase; this is safe because we'll scan them during mark termination
(and their stacks are *very* small and should not contain any new
pointers).
This change also switches the root marking during mark termination to
use the same gcDrain-based code path as concurrent mark. This
shouldn't affect performance because STW root marking was already
parallel and tasks switched to heap marking immediately when no more
root marking tasks were available. However, it simplifies the code and
unifies these code paths.
This has negligible effect on the go1 benchmarks. It slightly slows
down the garbage benchmark, possibly by making GC run slightly more
frequently.
name old time/op new time/op delta
XBenchGarbage-12 5.10ms ± 1% 5.24ms ± 1% +2.87% (p=0.000 n=18+18)
name old time/op new time/op delta
BinaryTree17-12 3.25s ± 3% 3.20s ± 5% -1.57% (p=0.013 n=20+20)
Fannkuch11-12 2.45s ± 1% 2.46s ± 1% +0.38% (p=0.019 n=20+18)
FmtFprintfEmpty-12 49.7ns ± 3% 49.9ns ± 4% ~ (p=0.851 n=19+20)
FmtFprintfString-12 170ns ± 2% 170ns ± 1% ~ (p=0.775 n=20+19)
FmtFprintfInt-12 161ns ± 1% 160ns ± 1% -0.78% (p=0.000 n=19+18)
FmtFprintfIntInt-12 267ns ± 1% 270ns ± 1% +1.04% (p=0.000 n=19+19)
FmtFprintfPrefixedInt-12 238ns ± 2% 238ns ± 1% ~ (p=0.133 n=18+19)
FmtFprintfFloat-12 311ns ± 1% 310ns ± 2% -0.35% (p=0.023 n=20+19)
FmtManyArgs-12 1.08µs ± 1% 1.06µs ± 1% -2.31% (p=0.000 n=20+20)
GobDecode-12 8.65ms ± 1% 8.63ms ± 1% ~ (p=0.377 n=18+20)
GobEncode-12 6.49ms ± 1% 6.52ms ± 1% +0.37% (p=0.015 n=20+20)
Gzip-12 319ms ± 3% 318ms ± 1% ~ (p=0.975 n=19+17)
Gunzip-12 41.9ms ± 1% 42.1ms ± 2% +0.65% (p=0.004 n=19+20)
HTTPClientServer-12 61.7µs ± 1% 62.6µs ± 1% +1.40% (p=0.000 n=18+20)
JSONEncode-12 16.8ms ± 1% 16.9ms ± 1% ~ (p=0.239 n=20+18)
JSONDecode-12 58.4ms ± 1% 60.7ms ± 1% +3.85% (p=0.000 n=19+20)
Mandelbrot200-12 3.86ms ± 0% 3.86ms ± 1% ~ (p=0.092 n=18+19)
GoParse-12 3.75ms ± 2% 3.75ms ± 2% ~ (p=0.708 n=19+20)
RegexpMatchEasy0_32-12 100ns ± 1% 100ns ± 2% +0.60% (p=0.010 n=17+20)
RegexpMatchEasy0_1K-12 341ns ± 1% 342ns ± 2% ~ (p=0.203 n=20+19)
RegexpMatchEasy1_32-12 82.5ns ± 2% 83.2ns ± 2% +0.83% (p=0.007 n=19+19)
RegexpMatchEasy1_1K-12 495ns ± 1% 495ns ± 2% ~ (p=0.970 n=19+18)
RegexpMatchMedium_32-12 130ns ± 2% 130ns ± 2% +0.59% (p=0.039 n=19+20)
RegexpMatchMedium_1K-12 39.2µs ± 1% 39.3µs ± 1% ~ (p=0.214 n=18+18)
RegexpMatchHard_32-12 2.03µs ± 2% 2.02µs ± 1% ~ (p=0.166 n=18+19)
RegexpMatchHard_1K-12 61.0µs ± 1% 60.9µs ± 1% ~ (p=0.169 n=20+18)
Revcomp-12 533ms ± 1% 535ms ± 1% ~ (p=0.071 n=19+17)
Template-12 68.1ms ± 2% 73.0ms ± 1% +7.26% (p=0.000 n=19+20)
TimeParse-12 355ns ± 2% 356ns ± 2% ~ (p=0.530 n=19+20)
TimeFormat-12 357ns ± 2% 347ns ± 1% -2.59% (p=0.000 n=20+19)
[Geo mean] 62.1µs 62.3µs +0.31%
name old speed new speed delta
GobDecode-12 88.7MB/s ± 1% 88.9MB/s ± 1% ~ (p=0.377 n=18+20)
GobEncode-12 118MB/s ± 1% 118MB/s ± 1% -0.37% (p=0.015 n=20+20)
Gzip-12 60.9MB/s ± 3% 60.9MB/s ± 1% ~ (p=0.944 n=19+17)
Gunzip-12 464MB/s ± 1% 461MB/s ± 2% -0.64% (p=0.004 n=19+20)
JSONEncode-12 115MB/s ± 1% 115MB/s ± 1% ~ (p=0.236 n=20+18)
JSONDecode-12 33.2MB/s ± 1% 32.0MB/s ± 1% -3.71% (p=0.000 n=19+20)
GoParse-12 15.5MB/s ± 2% 15.5MB/s ± 2% ~ (p=0.702 n=19+20)
RegexpMatchEasy0_32-12 320MB/s ± 1% 318MB/s ± 2% ~ (p=0.094 n=18+20)
RegexpMatchEasy0_1K-12 3.00GB/s ± 1% 2.99GB/s ± 1% ~ (p=0.194 n=20+19)
RegexpMatchEasy1_32-12 388MB/s ± 2% 385MB/s ± 2% -0.83% (p=0.008 n=19+19)
RegexpMatchEasy1_1K-12 2.07GB/s ± 1% 2.07GB/s ± 1% ~ (p=0.964 n=19+18)
RegexpMatchMedium_32-12 7.68MB/s ± 1% 7.64MB/s ± 2% -0.57% (p=0.020 n=19+20)
RegexpMatchMedium_1K-12 26.1MB/s ± 1% 26.1MB/s ± 1% ~ (p=0.211 n=18+18)
RegexpMatchHard_32-12 15.8MB/s ± 1% 15.8MB/s ± 1% ~ (p=0.180 n=18+19)
RegexpMatchHard_1K-12 16.8MB/s ± 1% 16.8MB/s ± 2% ~ (p=0.236 n=20+19)
Revcomp-12 477MB/s ± 1% 475MB/s ± 1% ~ (p=0.071 n=19+17)
Template-12 28.5MB/s ± 2% 26.6MB/s ± 1% -6.77% (p=0.000 n=19+20)
[Geo mean] 100MB/s 99.0MB/s -0.82%
Change-Id: I875bf6ceb306d1ee2f470cabf88aa6ede27c47a0
Reviewed-on: https://go-review.googlesource.com/16059
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We already have gcMarkWorkAvailable, but the check for GC mark work is
open-coded in several places. Generalize gcMarkWorkAvailable slightly
and replace these open-coded checks with calls to gcMarkWorkAvailable.
In addition to cleaning up the code, this puts us in a better position
to make this check slightly more complicated.
Change-Id: I1b29883300ecd82a1bf6be193e9b4ee96582a860
Reviewed-on: https://go-review.googlesource.com/16058
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Type Op is enforced now.
Type EType will need further CLs.
Added TODOs where Node.EType is used as a union type.
The TODOs have the format `TODO(marvin): Fix Node.EType union type.`.
Furthermore:
- The flag of the Econv function in fmt.go is removed, since it is unused.
- Some cleanup along the way, e.g. declaring vars at the point where they are initialized.
Passes go build -toolexec 'toolstash -cmp' -a std.
Fixes #11846
Change-Id: I908b955d5a78a195604970983fb9194bd9e9260b
Reviewed-on: https://go-review.googlesource.com/14956
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Include syscall.Stat_t on unix in the
unexported fileStat structure rather than
accessing it through an interface.
Additionally add a benchmark for Readdir
(and Readdirnames).
Tested on linux, freebsd, netbsd, openbsd,
darwin, solaris; does not touch windows
stuff. Does not change the API, as
discussed on golang-dev.
E.g. on linux/amd64 with a directory of 65 files:
benchmark old ns/op new ns/op delta
BenchmarkReaddir-4 67774 66225 -2.29%
benchmark old allocs new allocs delta
BenchmarkReaddir-4 334 269 -19.46%
benchmark old bytes new bytes delta
BenchmarkReaddir-4 25208 24168 -4.13%
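The added benchmark is along these lines (a sketch of the kind of benchmark
added, not the exact test code):

    package os_test

    import (
            "os"
            "testing"
    )

    func BenchmarkReaddirnames(b *testing.B) {
            for i := 0; i < b.N; i++ {
                    f, err := os.Open(".")
                    if err != nil {
                            b.Fatal(err)
                    }
                    if _, err := f.Readdirnames(-1); err != nil {
                            b.Fatal(err)
                    }
                    f.Close()
            }
    }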
Change-Id: I44ef72a04ad7055523a980f29aa11122040ae8fe
Reviewed-on: https://go-review.googlesource.com/16423
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Abandon (but still support) the old numbering system.
GOTRACEBACK=none is old 0
GOTRACEBACK=single is the new behavior
GOTRACEBACK=all is old 1
GOTRACEBACK=system is old 2
GOTRACEBACK=crash is unchanged
See doc comment change in runtime1.go for details.
Filed #13107 to decide whether to change default back to GOTRACEBACK=all for Go 1.6 release.
If you run into programs where printing only the current goroutine omits
needed information, please add details in a comment on that issue.
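A small program for seeing the difference (illustrative only):

    package main

    import "time"

    func main() {
            // A background goroutine that is idle when the panic happens.
            go func() { time.Sleep(time.Hour) }()

            // With GOTRACEBACK=single only this goroutine's stack is printed;
            // GOTRACEBACK=all also prints the sleeper's stack, and
            // GOTRACEBACK=system adds runtime-internal goroutines.
            panic("boom")
    }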
Fixes #12366.
Change-Id: I82ca8b99b5d86dceb3f7102d38d2659d45dbe0db
Reviewed-on: https://go-review.googlesource.com/16512
Reviewed-by: Austin Clements <austin@google.com>
Change parameter name from uid to gid, to fix an obvious copy-paste
error.
Change-Id: Iba13a45c87fde9625b82976a7d7901af4b705230
Reviewed-on: https://go-review.googlesource.com/16474
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Some tests need to disable inlining of a function. It's currently done
in one of a few ways (adding a function call, an empty switch, or a
defer). Add support for a less fragile 'go:noinline' directive that
prevents inlining.
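Usage looks like this (the directive comment goes immediately before the
function, with no space after the //):

    package main

    import "fmt"

    //go:noinline
    func add(a, b int) int {
            return a + b
    }

    func main() {
            fmt.Println(add(1, 2))
    }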
Fixes #12312
Change-Id: Ife444e13361b4a927709d81aa41e448f32eec8d4
Reviewed-on: https://go-review.googlesource.com/13911
Run-TryBot: Todd Neal <todd@tneal.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
On ppc64x, the thread pointer, held in R13, points 0x7000 bytes past where
thread-local storage begins (presumably to maximize the amount of storage that
can be accessed with a 16-bit signed displacement). The relocations used to
indicate thread-local storage to the platform linker account for this, so to be
able to support external linking we need to change things so the linker applies
this offset instead of the runtime assembly.
Change-Id: I2556c249ab2d802cae62c44b2b4c5b44787d7059
Reviewed-on: https://go-review.googlesource.com/14233
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
The bug number was a typo, and I forgot to switch the implementation
back to if statements after the first patch set's change from
Float64bits back to branching.
if statements can currently be inlined, but switch cannot (#13071).
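For context, the if-based shape reads roughly as follows (a sketch, not the
exact math package source):

    package main

    import (
            "fmt"
            "math"
    )

    // if statements instead of a switch so the function can be inlined
    // (#13071); the x == 0 case makes abs(-0) return +0.
    func abs(x float64) float64 {
            if x < 0 {
                    return -x
            }
            if x == 0 {
                    return 0
            }
            return x
    }

    func main() {
            fmt.Println(abs(-1.5), abs(math.Copysign(0, -1))) // 1.5 0
    }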
Change-Id: I81d0cf64bda69186c3d747a07047f6a694f8fa70
Reviewed-on: https://go-review.googlesource.com/16446
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
And get rid of the stupid game of encoding the instruction in the addend.
Change-Id: Ib4de7515196cbc1e63b4261b01931cf02a44c1e6
Reviewed-on: https://go-review.googlesource.com/14055
Reviewed-by: Russ Cox <rsc@golang.org>
Errors with http.Redirect and http.StatusOK seem
to occur from time to time on the IRC channel.
This change adds documentation suggesting the use
of one of the 3xx codes rather than StatusOK
with Redirect.
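For example (hypothetical handler):

    package main

    import "net/http"

    func main() {
            http.HandleFunc("/old", func(w http.ResponseWriter, r *http.Request) {
                    // Use a 3xx status; StatusOK would not make clients follow
                    // the Location header.
                    http.Redirect(w, r, "/new", http.StatusMovedPermanently)
            })
            http.ListenAndServe(":8080", nil)
    }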
Change-Id: I6b900a8eb868265fbbb846ee6a53e426d90a727d
Reviewed-on: https://go-review.googlesource.com/15980
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The compiler can do a fine job, and can also inline it.
From Jeremy Jackins's observation and rsc's recommendation in thread:
"Pure Go math.Abs outperforms assembly version"
https://groups.google.com/forum/#!topic/golang-dev/nP5mWvwAXZo
Updates #13095
Change-Id: I3066f8eaa327bb403173b29791cc8661d7c0532c
Reviewed-on: https://go-review.googlesource.com/16444
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When dynamically linking, we want references to functions defined
in this module to always be to the function object, not to the
PLT. We force this by writing an additional local symbol for
every global function symbol and making all relocations against
the global symbol refer to this local symbol instead. This is
approximately equivalent to the ELF linker -Bsymbolic-functions
option, but that is buggy on several platforms.
Change-Id: Ie6983eb4d1947f8543736fd349f9a90df3cce91a
Reviewed-on: https://go-review.googlesource.com/16436
Reviewed-by: Ian Lance Taylor <iant@golang.org>