These used to be used for the list of newly freed objects, but that's
no longer a thing.
Change-Id: I5a4503137b74ec0eae5372ca271b1aa0b32df074
Reviewed-on: https://go-review.googlesource.com/22557
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The .bss section has no data stored in the PE file, but when .bss
section data is used by the linker it is assumed that every byte is
zero. (*Section).Data currently returns garbage. Change (*Section).Data
so that it returns a slice filled with 0s.
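For illustration, a minimal sketch of the intended behavior (hedged:
not the actual (*Section).Data implementation, and the Size == 0 test
is an assumed marker for "no raw data stored in the file"):

    import "debug/pe"

    // bssData returns a section's contents as the linker expects to
    // see them: a section like .bss stores no data in the PE file, so
    // every byte must read as zero.
    func bssData(s *pe.Section) ([]byte, error) {
        if s.Size == 0 { // assumption: no raw data stored in the file
            return make([]byte, s.VirtualSize), nil
        }
        return s.Data()
    }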
Updates #15345
Change-Id: I1fa5138244a9447e1d59dec24178b1dd0fd4c5d7
Reviewed-on: https://go-review.googlesource.com/22544
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is an error according to the spec, but Firefox and Google Chrome
seem OK with this.
Fixes #15059.
Change-Id: I841cf44e96655e91a2481555f38fbd7055a32202
Reviewed-on: https://go-review.googlesource.com/22546
Reviewed-by: Rob Pike <r@golang.org>
Our compilers now provide intrinsics, including
sys.Ctz64, that support CTZ (count trailing zeros)
instructions. This CL replaces the Go versions
of CTZ with the compiler intrinsic.
Count trailing zeros (CTZ) finds the least
significant 1 in a word and returns the number
of less significant 0s in the word.
Allocation uses the bitmap created by the garbage
collector to locate an unmarked object. The logic
takes a word of the bitmap, complements it, and
then caches it. It then uses CTZ to locate an
available unmarked object and shifts the consumed
bits out of the bitmap word, preparing it for the
next search. Once all the unmarked objects in the
cached word are used, the bitmap supplies another
word and the process repeats.
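A sketch of that scan (hedged: the runtime uses the internal sys.Ctz64
intrinsic; math/bits.TrailingZeros64 stands in for it here, and the
names are illustrative):

    import "math/bits"

    // cache holds a complemented word of the mark bitmap, so a 1 bit
    // means "free". freeindex is the object index that the low bit of
    // cache corresponds to.
    func nextFree(cache *uint64, freeindex *int) int {
        bit := bits.TrailingZeros64(*cache) // CTZ: index of least significant 1, 64 if none
        if bit == 64 {
            return -1 // word exhausted; caller refills cache from the bitmap
        }
        idx := *freeindex + bit
        *cache >>= uint(bit + 1) // shift consumed bits out for the next search
        *freeindex = idx + 1
        return idx
    }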
Change-Id: Id2fc42d1d4b9893efaa2e1bd01896985b7e42f82
Reviewed-on: https://go-review.googlesource.com/21366
Reviewed-by: Austin Clements <austin@google.com>
Two changes are included here, each dependent on the other.
The first is that allocBits and gcmarkBits are changed to
a *uint8 which points to the first byte of that span's
mark and alloc bits. Several places were altered to
perform pointer arithmetic to locate the byte corresponding
to an object in the span. The actual bit corresponding
to an object is indexed within the byte using the lower
three bits of the object's index.
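A sketch of that indexing (illustrative names, not the runtime's exact
code):

    import "unsafe"

    // bitOf locates the byte and mask for object objIndex, given the
    // *uint8 pointing at the first byte of a span's alloc or mark bits.
    func bitOf(bits *uint8, objIndex uintptr) (bytep *uint8, mask uint8) {
        p := uintptr(unsafe.Pointer(bits)) + objIndex/8 // byte holding the bit
        bytep = (*uint8)(unsafe.Pointer(p))
        mask = uint8(1) << (objIndex % 8) // lower three bits pick the bit
        return bytep, mask
    }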
The second change avoids the redundant calculation of an
object's index. The index is returned from heapBitsForObject
and then used by the functions indexing allocBits
and gcmarkBits.
Finally we no longer allocate the gc bits in the span
structures. Instead we use an arena based allocation scheme
that allows for a more compact bit map as well as recycling
and bulk clearing of the mark bits.
Change-Id: If4d04b2021c092ec39a4caef5937a8182c64dfef
Reviewed-on: https://go-review.googlesource.com/20705
Reviewed-by: Austin Clements <austin@google.com>
golang.org/cl/22453 was supposed to pass -no-pie to the linker when linking a
race-enabled binary if the host toolchain supports it. But I bungled the
support check: I forgot to pass -c to the host compiler, so it tried to
compile a zero-byte .c file into an executable, which will never work. Fix it
to pass -c as it should have all along.
Change-Id: I4801345c7a29cb18d5f22cec5337ce535f92135d
Reviewed-on: https://go-review.googlesource.com/22587
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
It is unused, remove the clutter.
Change-Id: I51a44326b125ef79241459c463441f76a289cc08
Reviewed-on: https://go-review.googlesource.com/22586
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The _SigUnblock flag was appended to the SIGSYS slot of the runtime
signal table for Linux in https://go-review.googlesource.com/22202, but
there is still no concrete opinion on whether SIGSYS must be an
unblocked signal for the runtime.
This change removes the _SigUnblock flag from SIGSYS on Linux for
consistency in runtime signal handling, and adds a reference to #15204
to the runtime signal table for FreeBSD.
Updates #15204.
Change-Id: I42992b1d852c2ab5dd37d6dbb481dba46929f665
Reviewed-on: https://go-review.googlesource.com/22537
Reviewed-by: Ian Lance Taylor <iant@golang.org>
DNS packing and unpacking uses hand-coded struct walking functions
rather than reflection, so these tags are unneeded and just contribute
to their runtime reflect metadata size.
Change-Id: I2db09d5159912bcbc3b482cbf23a50fa8fa807fa
Reviewed-on: https://go-review.googlesource.com/22594
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
There are no real world use cases for HINFO, MINFO, MB, MG, or MR
records, and package net's exposed APIs don't provide any way to
access them even if there were. If a use ever does show up, we can
revive them. In the meantime, this is just effectively-dead code that
sticks around because of rr_mk.
Change-Id: I6c188b5ee32f3b3a04588b79a0ee9c2e3e725ccc
Reviewed-on: https://go-review.googlesource.com/22593
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Goroutine leak checking is still too tedious, so this is untested.
See #6705, which is my fault for forgetting to mail out.
Change-Id: I899fb311c9d4229ff1dbd3f54fe307805e17efee
Reviewed-on: https://go-review.googlesource.com/22581
Reviewed-by: Ahmed W. <oneofone@gmail.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
MGHI (16-bit signed immediate) is now used where possible for both
MULLW and MULLD. MGHI is 2 bytes shorter than MSGFI.
Change-Id: I5d0648934f28b3403b1126913fd703d8f62b9e9f
Reviewed-on: https://go-review.googlesource.com/22398
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bill O'Farrell <billotosyr@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Avoids some extra work and string concatenation at query time.
benchmark                                   old allocs   new allocs   delta
BenchmarkGoLookupIP-32                      154          150          -2.60%
BenchmarkGoLookupIPNoSuchHost-32            446          442          -0.90%
BenchmarkGoLookupIPWithBrokenNameServer-32  564          568          +0.71%
benchmark                                   old bytes    new bytes    delta
BenchmarkGoLookupIP-32                      10824        10704        -1.11%
BenchmarkGoLookupIPNoSuchHost-32            43140        42992        -0.34%
BenchmarkGoLookupIPWithBrokenNameServer-32  46616        46680        +0.14%
BenchmarkGoLookupIPWithBrokenNameServer's regression appears to be
because it's actually only performing 1 LookupIP call, so the extra
work done parsing the DNS config file doesn't amortize as well as for
BenchmarkGoLookupIP or BenchmarkGoLOokupIPNoSuchHost, which perform
2000+ LookupIP calls per run.
Updates #15473.
Change-Id: I98c8072f2f39e2f2ccd6c55e9e9bd309f5ad68f8
Reviewed-on: https://go-review.googlesource.com/22571
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This change adds Config.Renegotiation which controls whether a TLS
client will accept renegotiation requests from a server. This is used,
for example, by some web servers that wish to “add” a client certificate
to an HTTPS connection.
This is disabled by default because it significantly complicates the
state machine.
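For example, a client that wants to permit a single server-initiated
renegotiation could opt in like this (a sketch; constant names as in
the final crypto/tls API):

    import "crypto/tls"

    func dialRenegotiable(addr string) (*tls.Conn, error) {
        // Permit one server-initiated renegotiation; the default,
        // RenegotiateNever, rejects all renegotiation requests.
        config := &tls.Config{Renegotiation: tls.RenegotiateOnceAsClient}
        return tls.Dial("tcp", addr, config)
    }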
Originally, handshakeMutex was taken before locking either Conn.in or
Conn.out. However, if renegotiation is permitted then a handshake may
be triggered during a Read() call. If Conn.in were unlocked before
taking handshakeMutex then a concurrent Read() call could see an
intermediate state and trigger an error. Thus handshakeMutex is now
locked after Conn.in and the handshake functions assume that Conn.in is
locked for the duration of the handshake.
Additionally, handshakeMutex used to protect Conn.out also. With the
possibility of renegotiation that's no longer viable and so
writeRecordLocked has been split off.
Fixes #5742.
Change-Id: I935914db1f185d507ff39bba8274c148d756a1c8
Reviewed-on: https://go-review.googlesource.com/22475
Run-TryBot: Adam Langley <agl@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Make sure we don't do O(n^2) work to eliminate a chain
of n copies.
benchmark                    old ns/op      new ns/op    delta
BenchmarkCopyElim1-8         1418           1406         -0.85%
BenchmarkCopyElim10-8        5289           5162         -2.40%
BenchmarkCopyElim100-8       52618          41684        -20.78%
BenchmarkCopyElim1000-8      2473878        424339       -82.85%
BenchmarkCopyElim10000-8     269373954      6367971      -97.64%
BenchmarkCopyElim100000-8    31272781165    104357244    -99.67%
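The fix amounts to following each chain once and rewriting the copies
to point directly at the underlying value, in the spirit of path
compression. A sketch with illustrative types (the real pass operates
on *ssa.Value in cmd/compile/internal/ssa; cycle detection omitted):

    type Value struct {
        Op   string // "Copy" or any other op
        Args []*Value
    }

    // copySource walks a copy chain to its non-copy source, then points
    // every copy on the chain directly at that source, so a chain of n
    // copies costs O(n) total work rather than O(n^2).
    func copySource(v *Value) *Value {
        w := v
        for w.Op == "Copy" { // find the underlying value
            w = w.Args[0]
        }
        for v.Op == "Copy" { // compress: rewrite each copy on the path
            next := v.Args[0]
            v.Args[0] = w
            v = next
        }
        return w
    }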
Change-Id: I680f906f70f2ee1a8615cb1046bc510c77d59284
Reviewed-on: https://go-review.googlesource.com/22535
Reviewed-by: Alexandru Moșoi <alexandru@mosoi.ro>
Comparison of certain map types could fail to be antisymmetric.
This corrects that.
Change-Id: I88c6256053ce29950ced4ba4d538e241ee8591fe
Reviewed-on: https://go-review.googlesource.com/22552
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: jcd . <jcd@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Instead of keeping the desired number of seconds and converting to
time.Duration for every query, convert to time.Duration when
building the config.
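A sketch of the idea (field and function names are illustrative, not
package net's actual ones):

    import "time"

    type dnsConfig struct {
        timeout time.Duration // stored ready to use, not as seconds
    }

    func buildConfig(timeoutSeconds int) *dnsConfig {
        // Convert once here instead of on every query.
        return &dnsConfig{timeout: time.Duration(timeoutSeconds) * time.Second}
    }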
Updates #15473
Change-Id: Ib24c050b593b3109011e359f4ed837a3fb45dc65
Reviewed-on: https://go-review.googlesource.com/22548
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
- Simplified the code.
- Removed types for slice aliases from composite literals' whitelist, since they
are properly handled by vet.
Fixes #15408
Updates #9171
Updates #11041
Change-Id: Ia1806c9eb3f327c09d2e28da4ffdb233b5a159b0
Reviewed-on: https://go-review.googlesource.com/22318
Run-TryBot: Rob Pike <r@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
Binary export format only.
Make sure we don't accidentally export an unnamed parameter
in signatures which expect all named parameters; otherwise
we crash during import. Appears to happen for _ (blank)
parameter names, as observed in method signatures such as
the one at: x/tools/godoc/analysis/analysis.go:76.
Fixes #15470.
TBR=mdempsky
Change-Id: I1b1184bf08c4c09d8a46946539c4b8c341acdb84
Reviewed-on: https://go-review.googlesource.com/22543
Reviewed-by: Robert Griesemer <gri@golang.org>
Updates #15462
Unexport Jconv, Sconv, Fconv, Hconv, Bconv, and Vconv as they are
not referenced outside internal/gc.
Econv was only called by EType.String, so merge it into that method.
Change-Id: Iad9b06078eb513b85a03a43cd9eb9366477643d1
Reviewed-on: https://go-review.googlesource.com/22531
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Updates #15462
Replace all use of oconv(op, FmtSharp) with fmt.Printf("%#v", op).
This removes all the callers of oconv.
Change-Id: Ic3bf22495147f8497c8bada01d681428e2405b0e
Reviewed-on: https://go-review.googlesource.com/22530
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It wasn't rendering as HTML nicely.
Change-Id: I5408ec22932a05e85c210c0faa434bd19dce5650
Reviewed-on: https://go-review.googlesource.com/22532
Reviewed-by: Ian Lance Taylor <iant@golang.org>
These symbols are de-duplicated in the linker, but the compiler generates
quite a few duplicates too: 2425 of the 13769 total symbols for runtime.a,
for example. De-duplicating them in the compiler saves the linker a bit of work.
Fixes #14983
Change-Id: I5f18e5f9743563c795aad8f0a22d17a7ed147711
Reviewed-on: https://go-review.googlesource.com/22293
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The complexity of the GC work buffers' put and tryGet
functions prevented them from being inlined. This CL
simplifies the fast path, thus enabling inlining. If
the fast path does not succeed, the previous put and
tryGet functions are called.
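A sketch of the shape of the fast path (names and sizes illustrative,
not the runtime's exact code):

    type workbuf struct {
        obj  [253]uintptr
        nobj int
    }

    type gcWork struct {
        wbuf *workbuf
    }

    // put stays small enough to inline; only the overflow case pays
    // for a call into the original, more complex slow path.
    func (w *gcWork) put(obj uintptr) {
        wbuf := w.wbuf
        if wbuf != nil && wbuf.nobj < len(wbuf.obj) { // fast path: room left
            wbuf.obj[wbuf.nobj] = obj
            wbuf.nobj++
            return
        }
        w.putSlow(obj) // slow path: acquire a fresh buffer, then store
    }

    func (w *gcWork) putSlow(obj uintptr) { /* original put logic */ }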
Change-Id: I6da6495d0dadf42bd0377c110b502274cc01acf5
Reviewed-on: https://go-review.googlesource.com/20704
Reviewed-by: Austin Clements <austin@google.com>
Prior to this CL the base of a span was calculated in various
places using shifts or calls to base(). This CL makes
everything call base(), which has been optimized to calculate
the base of the span when the span is initialized and store
that value in the span structure.
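In sketch form (hedged; field name as described, not the exact runtime
code):

    type mspan struct {
        startAddr uintptr // computed once when the span is initialized
    }

    // base returns the cached span base instead of recomputing it
    // with shifts at every call site.
    func (s *mspan) base() uintptr { return s.startAddr }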
Change-Id: I661f2bfa21e3748a249cdf049ef9062db6e78100
Reviewed-on: https://go-review.googlesource.com/20703
Reviewed-by: Austin Clements <austin@google.com>
Prior to this CL the sweep phase was responsible for locating
all objects that were about to be freed and calling a function
to process the object. This was done by the function
heapBitsSweepSpan. Part of processing included calls to
tracefree and msanfree as well as counting how many objects
were freed.
The calls to tracefree and msanfree have been moved into the
mallocgc routine and are made when the object is about to be
reallocated. The counting of free objects has been optimized
using an array-based popcnt algorithm, and if all the objects
in a span are free then the span is freed.
Similarly, the code to locate the next free object has been
optimized to use an array-based ctz (count trailing zeros).
Various hot paths in the allocation logic have been optimized.
At this point the garbage benchmark is within 3% of the 1.6
release.
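A sketch of an array-based popcount of the kind described, using an
8-bit lookup table (illustrative, not the runtime's exact code):

    var popcnt8 [256]uint8

    func init() {
        for i := range popcnt8 {
            popcnt8[i] = popcnt8[i/2] + uint8(i&1) // i/2 is already filled in
        }
    }

    // countMarked totals the 1 bits in a span's mark bits; the number
    // of free objects is then the span's nelems minus this count.
    func countMarked(markBits []uint8) int {
        n := 0
        for _, b := range markBits {
            n += int(popcnt8[b])
        }
        return n
    }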
Change-Id: I00643c442e2ada1685c010c3447e4ea8537d2dfa
Reviewed-on: https://go-review.googlesource.com/20201
Reviewed-by: Austin Clements <austin@google.com>
Add to each span a 64-bit cache (allocCache) of the allocBits
at freeindex. allocCache is shifted such that its lowest bit
corresponds to the bit at freeindex. allocBits uses a 0 to
indicate that an object is free; allocCache, on the other hand,
uses a 1 to indicate that an object is free. This facilitates
ctz64 (count trailing zeros), which counts the number of 0s
trailing the least significant 1; that count is also the index
of the least significant 1.
Each span maintains a freeindex indicating the boundary
between allocated objects and unallocated objects. allocCache
is shifted as freeindex is incremented such that the low bit
in allocCache corresponds to the bit at freeindex in the
allocBits array.
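A sketch of refilling the cache (hedged; names follow the description
above, and freeindex is assumed to be a multiple of 64 at refill time):

    // refillAllocCache loads 64 allocBits starting at freeindex and
    // complements them, so a 1 in the result means the corresponding
    // object is free.
    func refillAllocCache(allocBits []uint8, freeindex uintptr) uint64 {
        whichByte := freeindex / 8
        var cache uint64
        for i := uintptr(0); i < 8; i++ {
            cache |= uint64(allocBits[whichByte+i]) << (8 * i)
        }
        return ^cache
    }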
Currently ctz64 is written in Go using a for loop, so it is
not very efficient. Use of the hardware instruction will
follow. With this in mind, comparisons of the garbage
benchmark are as follows:
1.6 release         2.8 seconds
dev:garbage branch  3.1 seconds
Profiling shows the Go implementation of ctz64 takes up
1% of the total time.
Change-Id: If084ed9c3b1eda9f3c6ab2e794625cb870b8167f
Reviewed-on: https://go-review.googlesource.com/20200
Reviewed-by: Austin Clements <austin@google.com>
Most (all?) processors that Go supports supply a hardware
instruction that takes a byte and returns the number
of zeros trailing the first 1 encountered, or 8
if no 1s are found; this is the index within the
byte of the first 1 encountered. CTZ should improve the
performance of the nextFreeIndex function.
Since nextFreeIndex wants the next unmarked (0) bit,
a bit-wise complement is needed before calling ctz.
Furthermore, unmarked bits associated with previously
allocated objects need to be ignored. Instead of writing
a 1 as we allocate, the code masks off all bits below the
freeindex after loading the byte.
While this CL does not actually execute a CTZ instruction,
it supplies a ctz function with the appropriate signature
along with the logic to use it.
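A sketch of that per-byte logic (hedged; the loop stands in for the
eventual hardware instruction, and names are illustrative):

    // ctzMasked complements one byte of mark bits, masks off bits
    // below freeindex (objects already allocated), and counts trailing
    // zeros. It returns 8 if no 1 is found in the byte.
    func ctzMasked(markByte uint8, freeindex uintptr) int {
        b := ^markByte                      // 1 now means unmarked (free)
        b &= uint8(0xff) << (freeindex % 8) // ignore bits below freeindex
        if b == 0 {
            return 8
        }
        n := 0
        for b&1 == 0 { // software CTZ; a hardware instruction will follow
            b >>= 1
            n++
        }
        return n
    }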
Change-Id: I5c55ce0ed48ca22c21c4dd9f969b0819b4eadaa7
Reviewed-on: https://go-review.googlesource.com/20169
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
This is a renaming of the field ref to the
more appropriate allocCount. The field
holds the number of objects in the span
that are currently allocated. Some throw
messages were adjusted to more accurately
convey the meaning of allocCount.
Change-Id: I10daf44e3e9cc24a10912638c7de3c1984ef8efe
Reviewed-on: https://go-review.googlesource.com/19518
Reviewed-by: Austin Clements <austin@google.com>
Instead of building a freelist from the mark bits generated
by the GC, this CL allocates directly from the mark bits.
The approach moves the mark bits from the pointer/no-pointer
heap structures into their own per-span data structures. The
mark/allocation vectors consist of a single mark bit per
object. Two vectors are maintained, one for allocation and
one for the GC's mark phase. During the GC cycle's sweep
phase the interpretation of the vectors is swapped. The
mark vector becomes the allocation vector and the old
allocation vector is cleared and becomes the mark vector that
the next GC cycle will use.
Marked entries in the allocation vector indicate that the
object is not free. Each allocation vector maintains a boundary
between areas of the span already allocated from and areas
not yet allocated from. As objects are allocated this boundary
is moved until it reaches the end of the span. At this point
further allocations will be done from another span.
Since we no longer sweep a span inspecting each freed object,
maintaining the pointer/scalar bits in the heap bitmap is now
the responsibility of the routines doing the actual allocation.
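In sketch form, the swap at the end of sweep looks like this (field
names as described above; hedged, not the exact runtime code):

    type mspan struct {
        nelems     uintptr
        allocBits  []uint8 // 1 bit per object; 1 means allocated
        gcmarkBits []uint8 // 1 bit per object; 1 means marked
    }

    // sweepSwap makes the mark vector the new allocation vector and
    // installs a cleared vector for the next GC cycle's marking.
    func (s *mspan) sweepSwap() {
        s.allocBits = s.gcmarkBits
        s.gcmarkBits = make([]uint8, (s.nelems+7)/8) // cleared mark bits
    }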
This CL is functionally complete and ready for performance
tuning.
Change-Id: I336e0fc21eef1066e0b68c7067cc71b9f3d50e04
Reviewed-on: https://go-review.googlesource.com/19470
Reviewed-by: Austin Clements <austin@google.com>
The gcmarkBits is a bit vector used by the GC to mark
reachable objects. Once a GC cycle is complete, the gcmarkBits
swaps places with the allocBits. allocBits is then used directly
by malloc to locate free objects, thus avoiding the
construction of a linked free list. This CL introduces a set
of helper functions for manipulating gcmarkBits and allocBits
that will be used by later CLs to realize the actual
algorithm. Minimal attempts have been made to optimize these
helper routines.
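The helpers are of this shape (a sketch with illustrative names; the
runtime versions operate on byte pointers rather than slices):

    // markBits addresses one object's bit within a span's bit vector.
    type markBits struct {
        bits  []uint8
        mask  uint8
        index uintptr // byte index within bits
    }

    func markBitsForIndex(bits []uint8, i uintptr) markBits {
        return markBits{bits: bits, mask: 1 << (i % 8), index: i / 8}
    }

    func (m markBits) isMarked() bool { return m.bits[m.index]&m.mask != 0 }
    func (m markBits) setMarked()     { m.bits[m.index] |= m.mask }
    func (m markBits) clearMarked()   { m.bits[m.index] &^= m.mask }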
Change-Id: I55ad6240ca32cd456e8ed4973c6970b3b882dd34
Reviewed-on: https://go-review.googlesource.com/19420
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Rick Hudson <rlh@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In preparation for changing how the next free object is chosen,
refactor and consolidate code into a single function.
Change-Id: I6836cd88ed7cbf0b2df87abd7c1c3b9fabc1cbd8
Reviewed-on: https://go-review.googlesource.com/19317
Reviewed-by: Austin Clements <austin@google.com>
The freelist for normal objects and the freelist
for stacks share the same mspan field for holding
the list head but are operated on by different code
sequences. This overloading complicates the use of bit
vectors for the allocation of normal objects. This change
refactors the stack free list out into its own stackfreelist
field, separate from freelist.
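In sketch form (hedged; gclinkptr stands in for the runtime's
free-list link type, and only the relevant fields are shown):

    type gclinkptr uintptr

    // Separate list heads mean object allocation and stack allocation
    // no longer overload a single field.
    type mspan struct {
        freelist      gclinkptr // free objects, used by the allocator
        stackfreelist gclinkptr // free stacks, used only by stack code
    }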
Change-Id: I5b155b5b8a1fcd8e24c12ee1eb0800ad9b6b4fa0
Reviewed-on: https://go-review.googlesource.com/19315
Reviewed-by: Austin Clements <austin@google.com>
This CL adds prototypes of the bitmap allocation data
structures. Before this is released these underlying data
structures need to be made more performant, but the signatures
of the helper functions utilizing them will remain stable.
Change-Id: I5ace12f2fb512a7038a52bbde2bfb7e98783bcbe
Reviewed-on: https://go-review.googlesource.com/19221
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>