For some reason git won't let me write
doc/effective_go.html: reword confusing sentence
or even
doc/effective_go: reword confusing sentence
as the subject line for this CL, but that's not important. The
actual CL just rewrites one sentence and adds an option to grep in
the associated example.
Fixes #15875
Change-Id: Iee159ea751caf4b73eacf3dfc86e29032646373f
Reviewed-on: https://go-review.googlesource.com/32110
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently, if the number of stack barriers for a stack is 0, we'll
create a zero-length slice that points just past the end of the stack
allocation. This bad pointer causes GC panics.
Fix this by creating a nil slice if the stack barrier count is 0.
In practice, the only way this can happen is if
GODEBUG=gcstackbarrieroff=1 is set because even the minimum size stack
reserves space for two stack barriers.
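A minimal sketch of the fix pattern, with made-up names (barrierSlice and
this stkbar type are hypothetical, not the runtime's code): when the count
is zero, keep the slice nil instead of building a zero-length slice whose
data pointer would sit just past the end of the stack allocation.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type stkbar struct{ savedLRPtr, savedLRVal uintptr } // stand-in type

    // barrierSlice returns nil for n == 0, so the GC never sees a data
    // pointer pointing just past the underlying allocation.
    func barrierSlice(base *stkbar, n int) []stkbar {
        if n == 0 {
            return nil
        }
        return unsafe.Slice(base, n)
    }

    func main() {
        storage := make([]stkbar, 4)
        fmt.Println(barrierSlice(&storage[0], 2) == nil) // false
        fmt.Println(barrierSlice(&storage[0], 0) == nil) // true: the n == 0 case stays nil
    }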
Change-Id: I3527c9a504c445b64b81170ee285a28594e7983d
Reviewed-on: https://go-review.googlesource.com/31762
Reviewed-by: Rick Hudson <rlh@golang.org>
This adds debug code enabled in gccheckmark mode that panics if we
attempt to mark an unallocated object. This is a common issue with the
hybrid barrier when we're manipulating uninitialized memory that
contains stale pointers. This also tends to catch bugs that will lead
to "sweep increased allocation count" crashes closer to the source of
the bug.
Change-Id: I443ead3eac6f316a46f50b106078b524cac317f4
Reviewed-on: https://go-review.googlesource.com/31761
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently reflectcall has a subtle dance with write barriers where the
assembly code copies the result values from the stack to the in-heap
argument frame without write barriers and then calls into the runtime
after the fact to invoke the necessary write barriers.
For the hybrid barrier (and for ROC), we need to switch to a
*pre*-write write barrier, which is very difficult to do with the
current setup. We could tie ourselves in knots of subtle reasoning
about why it's okay in this particular case to have a post-write write
barrier, but this commit instead takes a different approach. Rather
than making things more complex, this simplifies reflection calls so
that the argument copy is done in Go using normal bulk write barriers.
The one difficulty with this approach is that calling into Go requires
putting arguments on the stack, but the call* functions "donate" their
entire stack frame to the called function. We can get away with this
now because the copy avoids using the stack and has copied the results
out before we clobber the stack frame to call into the write barrier.
The solution in this CL is to call another function, passing arguments
in registers instead of on the stack, and let that other function
reserve more stack space and set up the arguments for the runtime.
This approach seemed to work out the best. I also tried making the
call* functions reserve 32 extra bytes of frame for the write barrier
arguments and adjust SP up by 32 bytes around the call. However, even
with the necessary changes to the assembler to correct the spdelta
table, the runtime was still having trouble with the frame layout (and
the changes to the assembler caused many other things that do strange
things with the SP to fail to assemble). The approach I took doesn't
require any funny business with the SP.
Updates #17503.
Change-Id: Ie2bb0084b24d6cff38b5afb218b9e0534ad2119e
Reviewed-on: https://go-review.googlesource.com/31655
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
The logic for saving the list of packages did not always
correctly prefer to keep error messages around.
The missed error led to an internal consistency failure later.
Fixes #17119.
Change-Id: I9723b5d2518c25e2cac5249e6a7b907be95b521c
Reviewed-on: https://go-review.googlesource.com/31812
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Quentin Smith <quentin@golang.org>
If we leave it for compilation, the error sometimes appears first
in derived vendor paths, without any indication of where they came from.
This is better.
$ go1.7 build canonical/d
cmd/go/testdata/src/canonical/a/a.go:3: non-canonical import path "canonical/a//vendor/c" (should be "canonical/a/vendor/c")
cmd/go/testdata/src/canonical/a/a.go:3: can't find import: "canonical/a//vendor/c"
$ go build canonical/d
package canonical/d
imports canonical/b
imports canonical/a/: non-canonical import path: "canonical/a/" should be "canonical/a"
$
Fixes #16954.
Change-Id: I315ccec92a00d98a08c139b3dc4e17dbc640edd0
Reviewed-on: https://go-review.googlesource.com/31668
Reviewed-by: Quentin Smith <quentin@golang.org>
This is not strictly illegal, but it probably should be (too late),
and it doesn't mean what it looks like it means:
the second key-value pair has the key ",xml".
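A small illustration of the pitfall, assuming a tag of this shape (the
field and tag names are made up, not taken from the issue): the stray comma
makes the second key ",xml", so a lookup for "xml" finds nothing.

    package main

    import (
        "fmt"
        "reflect"
    )

    type T struct {
        Name string `json:"name",xml:"name"` // likely intended: `json:"name" xml:"name"`
    }

    func main() {
        f, _ := reflect.TypeOf(T{}).FieldByName("Name")
        v, ok := f.Tag.Lookup("xml")
        fmt.Printf("%q %v\n", v, ok) // "" false: the second key is ",xml", not "xml"
    }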
Fixes #14466.
Change-Id: I174bccc23fd28affeb87f57f77c6591634ade641
Reviewed-on: https://go-review.googlesource.com/32031
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Quentin Smith <quentin@golang.org>
This CL introduces some minor changes to make the match rules
correspond more closely to the instructions they target. s390x logical
instructions that take an immediate operand typically leave some bits
in the target register unchanged. This means, for example, that an XOR
with -1 requires 2 instructions. In cases such as this it is better
to create a constant and leave it visible to the compiler so that it
can be reused, rather than hiding it in the assembler.
This CL also tweaks the rules a bit to ensure that constants are
folded when possible.
Change-Id: I1c6dee31ece00fc3c5fdf6a24f1abbc91dd2db2a
Reviewed-on: https://go-review.googlesource.com/31754
Reviewed-by: Keith Randall <khr@golang.org>
Currently any warning will make dist fail because the warning
text is treated as part of the package list.
Change-Id: I09a14089cd0448c3779e2f767e9356fe3325d8d9
Reviewed-on: https://go-review.googlesource.com/32111
TryBot-Result: Gobot Gobot <gobot@golang.org>
Run-TryBot: Andrew Gerrand <adg@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
In the situation where the Client.Jar is set and the Request.Header
has cookies manually inserted, the redirect logic needs to be
able to apply changes to cookies from "Set-Cookie" headers to both
the Jar and the manually inserted Header cookies.
Since Header cookies lack information about the original domain
and path, the logic in this CL simply removes cookies from the
initial Header if any subsequent "Set-Cookie" matches. Thus,
in the event of cookie conflicts, the logic preserves the behavior
prior to the change made in golang.org/cl/28930.
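The following is only a rough sketch of that rule, not the net/http
implementation (stripConflictingCookies is a made-up name): once a redirect
response carries a matching "Set-Cookie", the manually inserted header
cookie is dropped so the Jar's copy wins.

    package main

    import (
        "fmt"
        "net/http"
    )

    func stripConflictingCookies(initial []*http.Cookie, resp *http.Response) []*http.Cookie {
        set := make(map[string]bool)
        for _, c := range resp.Cookies() { // cookies from the response's "Set-Cookie" headers
            set[c.Name] = true
        }
        var kept []*http.Cookie
        for _, c := range initial {
            if !set[c.Name] {
                kept = append(kept, c)
            }
        }
        return kept
    }

    func main() {
        initial := []*http.Cookie{
            {Name: "session", Value: "manual"},
            {Name: "lang", Value: "en"},
        }
        resp := &http.Response{Header: http.Header{"Set-Cookie": {"session=fresh"}}}
        for _, c := range stripConflictingCookies(initial, resp) {
            fmt.Println(c.Name, c.Value) // only "lang en" survives the conflict
        }
    }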
Fixes #17494
Updates #4800
Change-Id: I645194d9f97ff4d95bd07ca36de1d6cdf2f32429
Reviewed-on: https://go-review.googlesource.com/31435
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Now that sweeping and span marking use the sweep list, there's no need
for the work.spans snapshot of the allspans list. This change
eliminates the few remaining uses of it, which are either dead code or
can use allspans directly, and removes work.spans and its support
functions.
Change-Id: Id5388b42b1e68e8baee853d8eafb8bb4ff95bb43
Reviewed-on: https://go-review.googlesource.com/30537
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently markrootSpans iterates over all spans ever allocated to find
the in-use spans. Since we now have a list of in-use spans, change it
to iterate over that instead.
This, combined with the previous change, fixes #9265. Before these two
changes, blowing up the heap to 8GB and then shrinking it to a 0MB
live set caused the small-heap portion of the test to run 60x slower
than without the initial blowup. With these two changes, the time is
indistinguishable.
No significant effect on other benchmarks.
Change-Id: I4a27e533efecfb5d18cba3a87c0181a81d0ddc1e
Reviewed-on: https://go-review.googlesource.com/30536
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently sweeping walks the list of all spans, which means the work
in sweeping is proportional to the maximum number of spans ever used.
If the heap was once large but is now small, this causes an
amortization failure: on a small heap, GCs happen frequently, but a
full sweep still has to happen in each GC cycle, which means we spend
a lot of time sweeping.
Fix this by creating a separate list consisting of just the in-use
spans to be swept, so sweeping is proportional to the number of in-use
spans (which is proportional to the live heap). Specifically, we
create two lists: a list of unswept in-use spans and a list of swept
in-use spans. At the start of the sweep cycle, the swept list becomes
the unswept list and the new swept list is empty. Allocating a new
in-use span adds it to the swept list. Sweeping moves spans from the
unswept list to the swept list.
This fixes the amortization problem because a shrinking heap moves
spans off the unswept list without adding them to the swept list,
reducing the time required by the next sweep cycle.
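A toy sketch of the two-list scheme described above, using made-up types
rather than the runtime's span data structures: swapping the lists at the
start of a cycle makes per-cycle sweep work proportional to the in-use
spans.

    package main

    import "fmt"

    type span struct{ id int }

    type sweeper struct {
        unswept []*span // in-use spans still to be swept this cycle
        swept   []*span // in-use spans already swept this cycle
    }

    // startCycle makes last cycle's swept list the new unswept list.
    func (s *sweeper) startCycle() {
        s.unswept, s.swept = s.swept, nil
    }

    // alloc records a newly allocated in-use span; new spans are born swept.
    func (s *sweeper) alloc(id int) *span {
        sp := &span{id: id}
        s.swept = append(s.swept, sp)
        return sp
    }

    // sweepOne moves one span from the unswept list to the swept list.
    func (s *sweeper) sweepOne() bool {
        n := len(s.unswept)
        if n == 0 {
            return false
        }
        sp := s.unswept[n-1]
        s.unswept = s.unswept[:n-1]
        s.swept = append(s.swept, sp)
        return true
    }

    func main() {
        var s sweeper
        s.alloc(1)
        s.alloc(2)
        s.startCycle()
        for s.sweepOne() {
        }
        fmt.Println(len(s.unswept), len(s.swept)) // 0 2
    }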
Updates #9265. This fix eliminates almost all of the time spent in
sweepone; however, markrootSpans has essentially the same bug, so now
the test program from this issue spends all of its time in
markrootSpans.
No significant effect on other benchmarks.
Change-Id: Ib382e82790aad907da1c127e62b3ab45d7a4ac1e
Reviewed-on: https://go-review.googlesource.com/30535
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently we set the len and cap of h.spans to the full reserved
region of the address space and track the actual mapped region
separately in h.spans_mapped. Since we have both the len and cap at
our disposal, change things so len(h.spans) tracks how much of the
spans array is mapped and eliminate h.spans_mapped. This simplifies
mheap and means we'll get nice "index out of range" panics rather
than a SIGSEGV if we do try to go off the end of the spans.
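A rough illustration of the idea in plain Go (not mheap itself): keep cap
at the size of the full reservation and let len track how much is "mapped",
so going past the mapped part trips a bounds check instead of faulting.

    package main

    import "fmt"

    func main() {
        const reserved = 1 << 10
        backing := make([]int, reserved) // stands in for the reserved address range
        spans := backing[:0:reserved]    // len 0 ("nothing mapped"), cap = full reservation

        spans = spans[:4]                   // "map" the first 4 entries by growing len
        fmt.Println(len(spans), cap(spans)) // 4 1024

        defer func() { fmt.Println("recovered:", recover()) }()
        _ = spans[8] // past the mapped part: index out of range, not a SIGSEGV
    }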
Change-Id: I8ed9a1a9a844d90e9fd2e269add4704623dbdfe6
Reviewed-on: https://go-review.googlesource.com/30533
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Like h_allspans and mheap_.allspans, these were two ways of referring
to the spans array from when the runtime was split between C and Go.
Clean this up by making mheap_.spans a slice and eliminating h_spans.
Change-Id: I3aa7038d53c3a4252050aa33e468c48dfed0b70e
Reviewed-on: https://go-review.googlesource.com/30532
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
This was necessary in the C days when allspans was an mspan**, but now
that allspans is a Go slice, this is redundant with len(allspans) and
we can use range loops over allspans.
Change-Id: Ie1dc39611e574e29a896e01690582933f4c5be7e
Reviewed-on: https://go-review.googlesource.com/30531
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
These are two ways to refer to the allspans array that hark back to
when the runtime was split between C and Go. Clean this up by making
mheap_.allspans a slice and eliminating h_allspans.
Change-Id: Ic9360d040cf3eb590b5dfbab0b82e8ace8525610
Reviewed-on: https://go-review.googlesource.com/30530
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
When the compiler inserts write barriers, the frontend makes
conservative decisions at an early stage. This may produce false
positives, which result in write barriers for stack writes.
A new phase, writebarrier, is added to the SSA backend to delay
the decision and eliminate false positives. The frontend still
makes conservative decisions. When building SSA, instead of
emitting runtime calls directly, it emits WB ops (StoreWB,
MoveWB, etc.), which are expanded to branches and runtime
calls in the writebarrier phase. Writes to static locations on
the stack are detected, and their write barriers are removed.
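A rough sketch of that expansion in ordinary Go rather than SSA ops
(writeBarrierEnabled, recordWrite, and storeWB are made-up stand-ins, not
the compiler's names): a StoreWB-style op becomes a branch that either
calls into the runtime or performs a plain store.

    package main

    import "fmt"

    var writeBarrierEnabled bool // stand-in for the runtime's enable flag

    // recordWrite stands in for the runtime's write-barrier call.
    func recordWrite(slot **int, ptr *int) {
        fmt.Println("write barrier fired")
        *slot = ptr
    }

    // storeWB is roughly what a single StoreWB op expands into; writes the
    // phase can prove are to the stack skip this and keep the plain store.
    func storeWB(slot **int, ptr *int) {
        if writeBarrierEnabled {
            recordWrite(slot, ptr)
        } else {
            *slot = ptr
        }
    }

    func main() {
        x := 42
        var p *int
        storeWB(&p, &x) // barrier disabled: plain store
        writeBarrierEnabled = true
        storeWB(&p, &x) // barrier enabled: runtime call
        fmt.Println(p == &x) // true
    }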
All write barriers for stack writes found by the script from
issue #17330 are eliminated (except two false positives).
Fixes #17330.
Change-Id: I9bd66333da9d0ceb64dcaa3c6f33502798d1a0f8
Reviewed-on: https://go-review.googlesource.com/31131
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
The runtime traceback code assumes a non-empty frame has its link
register saved on LR architectures. Make sure this is so in the
assembler.
Also make sure that LR is stored before updating SP, so the traceback
code will not see a half-updated stack frame if a signal arrives
during the execution of the function prologue.
Fixes #17381.
Change-Id: I668b04501999b7f9b080275a2d1f8a57029cbbb3
Reviewed-on: https://go-review.googlesource.com/31760
Reviewed-by: Michael Munday <munday@ca.ibm.com>
Update the ppc64x disassembly code for use by objdump
from golang.org/x/arch/ppc64/ppc64asm commit fcea5ea.
Enable the objdump testcase for external linking on ppc64le
and make a minor fix to the expected output.
Fixes #17447
Change-Id: I769cc7f8bfade594690a476dfe77ab33677ac03b
Reviewed-on: https://go-review.googlesource.com/32015
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The Dragonfly libc returns a non-zero value for malloc(-1).
Fixes #17585.
Change-Id: Icfe68011ccbc75c676273ee3c3efdf24a520a004
Reviewed-on: https://go-review.googlesource.com/32050
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
For historical reasons, the go/doc package does not include
the methods within an interface as part of the documented
methods for that type. Thus,
go doc ast.Node.Pos
gives an incorrect and confusing error message:
doc: no method Node.Pos in package go/ast
This CL does some dirty work to dig down to the methods
so interface methods now present their documentation:
% go doc ast.node.pos
func Pos() token.Pos // position of first character belonging to the node
%
It must largely sidestep the doc package to do this, which
is a shame. Perhaps things will improve there one day.
The change does not handle embeddings, and in principle the
same approach could be done for struct fields, but that is also
not here yet. But this CL fixes the thing that was bugging me.
Change-Id: Ic10a91936da96f54ee0b2f4a4fe4a8c9b93a5b4a
Reviewed-on: https://go-review.googlesource.com/31852
Reviewed-by: Robert Griesemer <gri@golang.org>
Instead of generating typelink symbols in the compiler,
mark types that should have typelinks with a flag.
The linker detects this flag and adds the marked types
to the typelink table.
name old s/op new s/op delta
LinkCmdCompile 0.27 ± 6% 0.25 ± 6% -6.93% (p=0.000 n=97+98)
LinkCmdGo 0.30 ± 5% 0.29 ±10% -4.22% (p=0.000 n=97+99)
name old MaxRSS new MaxRSS delta
LinkCmdCompile 112k ± 3% 106k ± 2% -4.85% (p=0.000 n=100+100)
LinkCmdGo 107k ± 3% 103k ± 3% -3.00% (p=0.000 n=100+100)
Change-Id: Ic95dd4b0101e90c1fa262c9c6c03a2028d6b3623
Reviewed-on: https://go-review.googlesource.com/31772
Run-TryBot: Shahar Kohanim <skohanim@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The condition that determines whether any further iterations are
needed evaluates to false when it encounters a NaN. Instead, flip the
condition to keep looping until the factor is greater than the machine
roundoff error.
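An illustration of the NaN pitfall behind this fix, not the actual loop
(delta and eps are made-up names): every comparison involving NaN is false,
so a "keep going while delta > eps" test exits as soon as delta becomes
NaN, while the flipped form keeps iterating.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        delta := math.NaN()
        const eps = 1e-15
        fmt.Println(delta > eps)     // false: a "continue while >" loop would stop here
        fmt.Println(!(delta <= eps)) // true: the flipped test keeps the loop going on NaN
    }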
Updates #17577
Change-Id: I058abe73fcd49d3ae4e2f7b33020437cc8f290c3
Reviewed-on: https://go-review.googlesource.com/31952
Reviewed-by: Robert Griesemer <gri@golang.org>
In sinit.go, gdata can already handle strings and complex values, so
there is no reason to handle them separately.
In obj.go, inline gdatastring and gdatacomplex into gdata, since it is
their only caller. This allows the common Linksym calls to be extracted.
Passes toolstash -cmp.
Change-Id: I3cb18d9b4206a8a269c36e0d30a345d8e6caba1f
Reviewed-on: https://go-review.googlesource.com/31498
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Bug 15141 was apparently fixed by some other change to the
compiler (this is plausible: it was a weird bug dependent
on a particular way of returning a large named array result).
Add the test to ensure that it stays fixed.
Updates #15141.
Change-Id: I3d6937556413fab1af31c5a1940e6931563ce2f3
Reviewed-on: https://go-review.googlesource.com/31972
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It no longer exists as of CL 31010.
Change-Id: Idd61f392544cad8b3f3f8d984dc5c953b473e2e5
Reviewed-on: https://go-review.googlesource.com/31934
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Rob Pike <r@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Some original shift opcodes for ppc64x expected an operand to be
a mask instead of a shift count, preventing some valid shift counts
from being written.
This adds new opcodes for shifts where needed, using mnemonics that
match the ppc64 asm and allowing the assembler to accept the full set
of valid shift counts.
Fixes #15016
Change-Id: Id573489f852038d06def279c13fd0523736878a7
Reviewed-on: https://go-review.googlesource.com/31853
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Carlos Eduardo Seo <cseo@linux.vnet.ibm.com>
Reviewed-by: David Chase <drchase@google.com>