The old URL 404s because the file no longer exists on master; change it
to point to the Android 10 release branch.
Change-Id: If0f8b645f2c746f9fc8bbd68f4d1fe41868493ba
Reviewed-on: https://go-review.googlesource.com/c/go/+/232809
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Only enable broadcast on SOCK_DGRAM and SOCK_RAW sockets; SOCK_STREAM
and others don't support it.
Don't enable SO_BROADCAST on UNIX domain sockets either, as they don't
support it.
This caused failures on WSL, which strictly checks setsockopt calls,
unlike other OSes, which often silently ignore bad options.
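As a rough illustration, here is a minimal sketch of that rule (not the
actual net package code; it assumes a Unix platform and the "os" and
"syscall" imports):
    func setBroadcast(s, family, sotype int) error {
        // Enable broadcast only where the socket type supports it, and
        // never on UNIX domain sockets.
        if (sotype == syscall.SOCK_DGRAM || sotype == syscall.SOCK_RAW) &&
            family != syscall.AF_UNIX {
            return os.NewSyscallError("setsockopt",
                syscall.SetsockoptInt(s, syscall.SOL_SOCKET, syscall.SO_BROADCAST, 1))
        }
        return nil
    }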
Also return an error from the SO_BROADCAST setsockopt call on Windows,
matching all other platforms, but only for IPv4, as the option isn't
supported for IPv6 as per:
https://docs.microsoft.com/en-us/windows/win32/winsock/socket-options
Fixes #38954
Change-Id: I0503fd1ce96102b17121af548b66b3e9c2bb80d3
Reviewed-on: https://go-review.googlesource.com/c/go/+/232807
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Before this commit, the code declares "n" and assigns it the result of
io.ReadFull(), but the value is not used. The variable is then reused
later in the function.
This commit removes the first declaration of "n" and declares it closer
to where it is used.
Change-Id: I7ffe19a10f2a563c306bb6fe6562493435b9dc5a
Reviewed-on: https://go-review.googlesource.com/c/go/+/232917
Reviewed-by: Rob Pike <r@golang.org>
testing.M.Run has this bit of code:
    if !flag.Parsed() {
        flag.Parse()
    }
It makes sense, and it's common knowledge for many Go developers that
test flags are automatically parsed by the time tests and benchmarks are
run. However, the docs didn't clarify that. The previous wording only
mentioned that flag.Parse isn't run before TestMain, which doesn't
necessarily mean that it's run afterwards.
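For illustration, a minimal TestMain relying on that behavior (a sketch,
not from the CL; assumes the "os" and "testing" imports):
    func TestMain(m *testing.M) {
        // Flags are not parsed yet here.
        os.Exit(m.Run()) // m.Run calls flag.Parse if needed, then runs the tests.
    }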
Fixes #38952.
Change-Id: I85f7a9dce637a23c5cb9abc485d47415c1a1ca27
Reviewed-on: https://go-review.googlesource.com/c/go/+/232806
Reviewed-by: Ian Lance Taylor <iant@golang.org>
When we decode into a struct, each input key-value pair may be decoded
into one of the struct's fields. In particular, existing data isn't
dropped, so some sub-fields can be decoded into without zeroing all of
the other data.
However, decoding into a map behaved in the opposite way. Whenever a
key-value pair was decoded, it completely replaced the previous map
element. If the map contained any non-zero data under that key, it was
dropped.
Instead, try to reuse the existing element value if possible. If the map
element type is a pointer, and the value is non-nil, we can decode
directly into it. If it's not a pointer, make a copy and decode into
that copy, as map element values aren't addressable.
This means we have to parse and convert the map element key before the
value, to be able to obtain the existing element value. This is fine,
though. Moreover, reporting errors on the key before the value follows
the input order more closely.
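For example (a hedged sketch, not the CL's test code), with a pointer
map element the existing data now survives:
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type T struct{ A, B int }

    func main() {
        m := map[string]*T{"k": {A: 1, B: 2}}
        // The existing *T is decoded into directly, so B is preserved;
        // previously the element was replaced and B was dropped.
        json.Unmarshal([]byte(`{"k": {"A": 10}}`), &m)
        fmt.Println(m["k"]) // &{10 2}
    }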
Finally, add a test to explore the four combinations, involving pointer
and non-pointer, and non-zero and zero values. A table-driven test
wasn't used, as each case required different checks, such as checking
that the non-nil pointer case doesn't end up with a different pointer.
Fixes #31924.
Change-Id: I5ca40c9963a98aaf92f26f0b35843c021028dfca
Reviewed-on: https://go-review.googlesource.com/c/go/+/179337
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The BITCON test, isbitcon, assumes 32-bit constants are expanded
repeatedly, i.e. by copying the low 32 bits to high 32 bits,
instead of zero extending. We already do such expansion in
progedit. In con32class when classifying 32-bit constants, we
should use the expanded constant, instead of zero-extending it.
TODO: we could have better encoding for things like ANDW $-1, Rx.
Fixes #38946.
Change-Id: I37d0c95d744834419db5c897fd1f6c187595c926
Reviewed-on: https://go-review.googlesource.com/c/go/+/232984
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Improve the user experience of the errors reported when users try to
set or refer to unexported fields and methods of struct literals, by
directly saying "cannot refer to unexported field or method"
Fixes #31053
Change-Id: I6fd3caf64b7ca9f9d8ea60b7756875e340792d59
Reviewed-on: https://go-review.googlesource.com/c/go/+/201657
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The recently added function parseFloatPrefix tested the entire
string for correct placement of separators rather than just the
consumed part. The 4-char fix is in readFloat (atof.go:303).
Added more tests. Also added some white space for nicer
grouping of the test cases.
While at it, removed the need for calling testing.Run.
Fixes #38962.
Change-Id: Ifce84f362bb4ede559103f8d535556d3de9325f1
Reviewed-on: https://go-review.googlesource.com/c/go/+/233017
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Omits printing the file:line:column when trying to
open non-existent files.
Given:
    go tool compile x.go
* Before:
    x.go:0: open x.go: no such file or directory
* After:
    open x.go: no such file or directory
Reverts the revert in CL 231043 by only fixing the case
of non-existent file errors, which is what the original bug
was about. The fix for "permission errors" will come later
on when I have bandwidth to investigate the differences
between running with root and why os.Open works for some
builders and not others.
Fixes #36437
Change-Id: I9c8a0981ad708b504bb43990a4105b42266fa41f
Reviewed-on: https://go-review.googlesource.com/c/go/+/230941
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
Fix TestFreeBSDNumCPU on newer versions of FreeBSD, which have
multi-line output from cpuset, e.g.:
    cpuset -g -p 4141
    pid 4141 mask: 0, 1, 2, 3, 4, 5, 6, 7, 8
    pid 4141 domain policy: first-touch mask: 0, 1
The test now uses just the first line of output.
Fixes #38937
Fixes #25924
Change-Id: If082ee6b82120ebde4dc437e58343b3dad69c65f
Reviewed-on: https://go-review.googlesource.com/c/go/+/232801
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The added comment contains some context. The original optimization
assumed that each call to unquoteBytes (or unquote) followed its
corresponding call to rescanLiteral. Otherwise, unquoting a literal
might use d.safeUnquote from another re-scanned literal.
Unfortunately, this assumption is wrong. When decoding {"foo": "bar"}
into a map[T]string where T implements TextUnmarshaler, the sequence of
calls would be as follows:
1) rescanLiteral "foo"
2) unquoteBytes "foo"
3) rescanLiteral "bar"
4) unquoteBytes "foo" (for UnmarshalText)
5) unquoteBytes "bar"
Note that the call to UnmarshalText happens in literalStore, which
repeats the work to unquote the input string literal. But, since that
happens after we've re-scanned "bar", we're using the wrong safeUnquote
field value.
In the added test case, the second string had a non-zero number of safe
bytes, and the first string had none since it was all non-ASCII. Thus,
"safely" unquoting a number of the first string's bytes could cut a rune
in half, and thus mangle the runes.
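For reference, a hedged sketch of the shape that triggers the sequence
above (not the exact test added by this CL; assumes the encoding/json
import):
    type K string // a map key type implementing encoding.TextUnmarshaler

    func (k *K) UnmarshalText(b []byte) error { *k = K(b); return nil }

    func decode() (map[K]string, error) {
        var m map[K]string
        // An all-non-ASCII key followed by an ASCII value, matching the
        // scenario described above.
        err := json.Unmarshal([]byte(`{"开源": "bar"}`), &m)
        return m, err
    }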
A rather simple fix, without a full revert, is to only allow one use of
safeUnquote per call to unquoteBytes. Each call to rescanLiteral when
we have a string is soon followed by a call to unquoteBytes, so it's no
longer possible for us to use the wrong index.
Also add a test case from #38126, which is the same underlying bug, but
affecting the ",string" option.
Before the fix, the test would fail, just like in the original two issues:
    --- FAIL: TestUnmarshalRescanLiteralMangledUnquote (0.00s)
        decode_test.go:2443: Key "开源" does not exist in map: map[开���:12345开源]
        decode_test.go:2458: Unmarshal unexpected error: json: invalid use of ,string struct tag, trying to unmarshal "\"aaa\tbbb\"" into string
Fixes #38105.
For #38126.
Change-Id: I761e54924e9a971a4f9eaa70bbf72014bb1476e6
Reviewed-on: https://go-review.googlesource.com/c/go/+/226218
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
This change uses the new offAddr type in more parts of the runtime where
we've been implicitly switching from the default address space to a
contiguous view. The purpose of offAddr is to represent addresses in the
contiguous view of the address space, and to make direct computations
between real addresses and offset addresses impossible. This change thus
improves readability in the runtime.
Updates #35788.
Change-Id: I4e1c5fed3ed68aa12f49a42b82eb3f46aba82fc1
Reviewed-on: https://go-review.googlesource.com/c/go/+/230718
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently addrRange and addrRanges operate on real addresses. That is,
the addresses they manipulate don't include arenaBaseOffset. When added
to an address, arenaBaseOffset makes the address space appear contiguous
on platforms where the address space is segmented. While this is
generally OK because even those platforms which have a segmented address
space usually don't give addresses in a different segment, today it
causes a mismatch between the scavenger and the rest of the page
allocator. The scavenger scavenges from the highest addresses first, but
only via real address, whereas the page allocator allocates memory in
offset address order.
So this change makes addrRange and addrRanges, i.e. what the scavenger
operates on, use offset addresses. However, lots of the page allocator
relies on an addrRange containing real addresses.
To make this transition less error-prone, this change introduces a new
type, offAddr, whose purpose is to make offset addresses a distinct
type, so any attempt to trivially mix real and offset addresses will
trigger a compilation error.
This change doesn't attempt to use offAddr in all of the runtime; a
follow-up change will look for and catch remaining uses of an offset
address which doesn't use the type.
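Roughly, the idea is (a sketch, not the actual runtime definition):
    // offAddr holds an address in the contiguous, offset view of the
    // address space. Being a distinct type, it cannot be mixed with a
    // plain uintptr address without an explicit conversion.
    type offAddr struct {
        a uintptr
    }

    func (l offAddr) lessThan(m offAddr) bool { return l.a < m.a }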
Updates #35788.
Change-Id: I991d891ac8ace8339ca180daafdf6b261a4d43d1
Reviewed-on: https://go-review.googlesource.com/c/go/+/230717
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently the scavenger will reset to the top of the heap every GC. This
means if it scavenges a bunch of memory which doesn't get used again,
it's going to keep re-scanning that memory on subsequent cycles. This
problem is especially bad when it comes to heap spikes: suppose an
application's heap spikes to 2x its steady-state size. The scavenger
will run over the top half of that heap even if the heap shrinks, for
the rest of the application's lifetime.
To fix this, we maintain two numbers: a "free" high watermark, which
represents the highest address freed to the page allocator in that
cycle, and a "scavenged" low watermark, which represents how low of an
address the scavenger got to when scavenging. If the "free" watermark
exceeds the "scavenged" watermark, then we pick the "free" watermark as
the new "top of the heap" for the scavenger when starting the next
scavenger cycle. Otherwise, we have the scavenger pick up where it left
off.
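In pseudocode, the reset rule looks roughly like this (a sketch with
illustrative names, not the runtime's actual fields):
    func nextScavengeTop(freeHWM, scavengedLWM uintptr) uintptr {
        if freeHWM > scavengedLWM {
            // Memory was freed above where the scavenger stopped, so
            // restart from the highest address freed this cycle.
            return freeHWM
        }
        // Otherwise pick up where the scavenger left off.
        return scavengedLWM
    }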
With this mechanism, we only ever re-scan scavenged memory if a random
page gets freed very high up in the heap address space while most of the
action is happening in the lower parts. This case should be exceedingly
unlikely because the page reclaimer walks over the heap from low address
to high addresses, and we use a first-fit address-ordered allocation
policy.
Updates #35788.
Change-Id: Id335603b526ce3a0eb79ef286d1a4e876abc9cab
Reviewed-on: https://go-review.googlesource.com/c/go/+/218997
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
This change removes the concept of s.scavAddr in favor of explicitly
reserving and unreserving address ranges. s.scavAddr has several
raciness problems that can cause the scavenger to miss updates or move
it back unnecessarily, forcing future scavenge calls to iterate over
already-searched address space.
This change achieves this by replacing scavAddr with a second addrRanges
which is cloned from s.inUse at the end of each sweep phase. Ranges from
this second addrRanges are then reserved by scavengers (with the
reservation size proportional to the heap size) who are then able to
safely iterate over those ranges without worry of another scavenger
coming in.
Fixes #35788.
Change-Id: Ief01ae170384174875118742f6c26b2a41cbb66d
Reviewed-on: https://go-review.googlesource.com/c/go/+/208378
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Austin Clements <austin@google.com>
golang.org/cl/193604 fixed one bug when one encodes a string with the
",string" option: if SetEscapeHTML(false) is used, we should not be
using HTML escaping for the inner string encoding. The CL correctly
fixed that.
The CL also tried to speed up this edge case. By avoiding an entire new
call to Marshal, the new Issue34127 benchmark reduced its time/op by
45%, and lowered the allocs/op from 3 to 2.
However, that last optimization wasn't correct:
    Since Go 1.2 every string can be marshaled to JSON without error
    even if it contains invalid UTF-8 byte sequences. Therefore
    there is no need to use Marshal again for the only reason of
    enclosing the string in double quotes.
JSON string encoding isn't just about adding quotes and taking care of
invalid UTF-8. We also need to escape some characters, like tabs and
newlines.
The new code failed to do that. The bug resulted in the added test case
failing to roundtrip properly; before our fix here, we'd see an error:
    invalid use of ,string struct tag, trying to unmarshal "\"\b\f\n\r\t\"\\\"" into string
If you pay close attention, you'll notice that the special characters
like tab and newline are only encoded once, not twice. When decoding
with the ",string" option, the outer string decode works, but the inner
string decode fails, as we are now decoding a JSON string with unescaped
special characters.
The fix we apply here isn't to go back to Marshal, as that would
re-introduce the bug with SetEscapeHTML(false). Instead, we can use a
new encode state from the pool - it results in minimal performance
impact, and even reduces allocs/op further. The performance impact seems
fair, given that we need to check the entire string for characters that
need to be escaped.
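A small roundtrip illustrating the fixed case (a sketch, not the CL's
table-driven test):
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type T struct {
        F string `json:",string"`
    }

    func main() {
        b, _ := json.Marshal(T{F: "a\tb\n"}) // the inner encoding must escape \t and \n too
        var out T
        err := json.Unmarshal(b, &out) // this roundtrip failed before the fix
        fmt.Printf("%s %q %v\n", b, out.F, err)
    }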
name old time/op new time/op delta
Issue34127-8 89.7ns ± 2% 100.8ns ± 1% +12.27% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
Issue34127-8 40.0B ± 0% 32.0B ± 0% -20.00% (p=0.000 n=8+8)
name old allocs/op new allocs/op delta
Issue34127-8 2.00 ± 0% 1.00 ± 0% -50.00% (p=0.000 n=8+8)
Instead of adding another standalone test, we convert an existing
"string tag" test to be table-based, and add another test case there.
One test case from the original CL also had to be amended, due to the
same problem - when escaping '<' due to SetEscapeHTML(true), we need to
end up with double escaping, since we're using ",string".
Fixes #38173.
Change-Id: I2b0df9e4f1d3452fff74fe910e189c930dde4b5b
Reviewed-on: https://go-review.googlesource.com/c/go/+/226498
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Joe Tsai <thebrokentoaster@gmail.com>
Since the ConnectionState will now be available during
verification, some code was moved around in order to
initialize and make available as many of the fields on
Conn as possible before the ConnectionState is verified.
Fixes #36736
Change-Id: I0e3efa97565ead7de5c48bb8a87e3ea54fbde140
Reviewed-on: https://go-review.googlesource.com/c/go/+/229122
Run-TryBot: Katie Hockman <katie@golang.org>
Reviewed-by: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Automatically rotate session ticket keys for servers
that don't already have sessionTicketKeys and that
haven't called SetSessionTicketKeys.
Now, session ticket keys will be rotated every 24 hours
with a lifetime of 7 days. This adds a small performance
cost to existing clients that don't provide a session
ticket encrypted with a fresh enough session ticket key,
which would require a full handshake.
Updates #25256
Change-Id: I15b46af7a82aab9a108bceb706bbf66243a1510f
Reviewed-on: https://go-review.googlesource.com/c/go/+/230679
Run-TryBot: Katie Hockman <katie@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Filippo Valsorda <filippo@golang.org>
Per X.690 Section 11.6, sort the SET OF components when generating
DER. This CL makes no changes to Unmarshal, meaning unordered components
will still be accepted and won't be re-ordered during parsing.
In order to sort the components, a new encoder, setEncoder, similar to
multiEncoder, is added. The functional difference is that setEncoder
encodes each component to a [][]byte, sorts the slice using a sort.Sort
interface, and then writes it out to the destination slice. The ordering
matches the output of OpenSSL.
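For example (hedged; this assumes the encoding/asn1 "set" struct tag,
which requests SET OF encoding for a slice):
    func marshalSet() ([]byte, error) {
        type withSet struct {
            Ints []int `asn1:"set"`
        }
        // With this change the SET OF components come out in sorted DER
        // order; Unmarshal still accepts unsorted input.
        return asn1.Marshal(withSet{Ints: []int{3, 1, 2}})
    }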
Fixes #24254
Change-Id: Iff4560f0b8c2dce5aae616ba30226f39c10b972e
GitHub-Last-Rev: e52fc43658
GitHub-Pull-Request: golang/go#38228
Reviewed-on: https://go-review.googlesource.com/c/go/+/226984
Reviewed-by: Filippo Valsorda <filippo@golang.org>
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Colons are port separators, so it's risky to allow them in hostnames.
Per the CL 231377 rule, if we at least consider them invalid we will not
apply wildcard processing to them, making behavior a little more
predictable.
We were considering hostnames with colons valid (against spec) because
that meant we'd not ignore them in Common Name. (There was at least
one deployment that was putting colons in Common Name and expecting it
to verify.)
Now that Common Name is ignored by default, those clients will break
again, so it's a good time to drop the exception. Hopefully they moved
to SANs, where invalid hostnames are checked 1:1 (ignoring wildcards)
but still work. (If they didn't, this change means they can't use
GODEBUG=x509ignoreCN=0 to opt back in, but again you don't get to use a
legacy deprecated field AND invalid hostnames.)
Updates #24151
Change-Id: Id44b4fecb2d620480acdfc65fea1473f7abbca7f
Reviewed-on: https://go-review.googlesource.com/c/go/+/231381
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Katie Hockman <katie@golang.org>
Trailing dots are not allowed in certificate fields like CN and SANs
(while they are allowed and ignored as inputs to verification APIs).
Move to considering names with trailing dots in certificates as invalid
hostnames.
Following the rule of CL 231378, these invalid names lose wildcard
processing, but can still match if there is a 1:1 match, trailing dot
included, with the VerifyHostname input.
They also become ignored Common Name values regardless of the
GODEBUG=x509ignoreCN=X value, because we have to ignore invalid
hostnames in Common Name for #24151. The error message automatically
accounts for this, and doesn't suggest the environment variable. You
don't get to use a legacy deprecated field AND invalid hostnames.
(While at it, also consider wildcards in VerifyHostname inputs as
invalid hostnames, not that it should change any observed behavior.)
Change-Id: Iecdee8927df50c1d9daf904776b051de9f5e76ad
Reviewed-on: https://go-review.googlesource.com/c/go/+/231380
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Katie Hockman <katie@golang.org>
Common Name has been deprecated for 20 years, and has horrible
interactions with Name Constraints. The browsers managed to drop it last
year, let's try flicking the switch to disabled by default.
Return helpful errors for things that would get unbroken by flipping the
switch back with the environment variable.
Had to refresh a test certificate that was too old to have SANs.
Updates #24151
Change-Id: I2ab78577fd936ba67969d3417284dbe46e4ae02f
Reviewed-on: https://go-review.googlesource.com/c/go/+/231379
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Katie Hockman <katie@golang.org>
When the input or SAN dNSNames are not valid hostnames, the specs don't
define what should happen, because this should ideally never happen, so
everything we do is undefined behavior. Browsers get to just return an
error, because browsers can assume that the resolving layer is DNS. We
can't, names can be resolved by anything implementing a Dial function,
and the crypto/x509 APIs can also be used directly without actual
networks in sight.
Trying to process invalid hostnames leads to issues like #27591 where
wildcards glob stuff they aren't expected to, because wildcards are only
defined on hostnames.
Try to rationalize the behavior like this: if both the VerifyHostname
input and the SAN dNSNames are a valid hostname, follow the specs;
otherwise, only accept perfect 1:1 case-insensitive matches (without
wildcards or trailing dot processing).
This should allow us to keep supporting weird names, with less
unexpected side-effects from undefined behavior. Also, it's a rule, even
if completely made up, so something we can reason about and code against.
The commonName field does allow any string, but no specs define how to
process it. Processing it differently from dNSNames would be confusing,
and allowing it to match invalid hostnames is incompatible with Name
Constraint processing (#24151).
This does encourage invalid dNSNames, regrettably, but we need some way
for the standard API to match weird names, and the alternative of
keeping CN alive sounds less appealing.
Fixes #27591
Change-Id: Id2d515f068a17ff796a32b30733abe44ad4f0339
Reviewed-on: https://go-review.googlesource.com/c/go/+/231378
Run-TryBot: Filippo Valsorda <filippo@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Katie Hockman <katie@golang.org>
If the dst slice length is zero in makeslicecopy then the mallocgc call
uses a fast path and only returns a pointer to runtime.zerobase.
There may be no heapBits for that address readable by
bulkBarrierPreWriteSrcOnly, which will cause a panic.
Protect against this by not calling bulkBarrierPreWriteSrcOnly if
there is nothing to copy. This is always the case when the length of
the destination slice is zero.
runtime.growslice and runtime.typedslicecopy have fast paths that
do not call bulkBarrierPreWrite for zero copy lengths either.
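A hedged sketch of the kind of code affected, where the make+copy pair
compiles to a makeslicecopy call and the length happens to be zero at
run time:
    func clone(src []*int) []*int {
        dst := make([]*int, len(src)) // len(src) may be zero
        copy(dst, src)                // previously this could hit the panic described above
        return dst
    }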
Fixes #38929
Change-Id: I78ece600203a0a8d24de5b6c9eef56f605d44e99
Reviewed-on: https://go-review.googlesource.com/c/go/+/232800
Reviewed-by: Keith Randall <khr@golang.org>
Missed in CL 221790
This is the only remaining use of math.Float64frombits in the .rules
file that isn't already guarded.
Fixes #38880
Change-Id: I11f71e3a48516748d8d2701c6cf6920a7bc9e216
Reviewed-on: https://go-review.googlesource.com/c/go/+/232859
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Currently linearAlloc manages an exclusive "end" address for the top of
its reserved space. While unlikely for a linearAlloc to be allocated
with an "end" address hitting the top of the address space, it is
possible and could lead to overflow.
Defensively avoid overflow by chopping off the last byte of the
linearAlloc if it bumps up against the top of the address space. In
practice, this means that if 32-bit platforms map the top of the address
space and use the linearAlloc to acquire arenas, the top arena will not
be usable.
Fixes #35954.
Change-Id: I512cddcd34fd1ab15cb6ca92bbf899fc1ef22ff6
Reviewed-on: https://go-review.googlesource.com/c/go/+/231338
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently when checking if we can grow the heap into the current arena,
we do an addition which may overflow. This is particularly likely on
32-bit systems.
Avoid this situation by explicitly checking for overflow, and adding in
some comments about when overflow is possible, when it isn't, and why.
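The usual overflow-safe form of such a check looks like this (a generic
sketch, not the actual runtime code):
    func fits(base, end, n uintptr) bool {
        // base+n > end can wrap around near the top of the address space;
        // end-base cannot (given end >= base), so compare against the
        // remaining room instead.
        return n <= end-base
    }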
For #35954.
Change-Id: I2d4ecbb1ccbd43da55979cc721f0cd8d1757add2
Reviewed-on: https://go-review.googlesource.com/c/go/+/231337
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
I added routines that can acquire/release a particular rank without
acquiring/releasing an associated lock. I added lockRankGscan as a rank
for acquiring/releasing the Gscan bit.
castogscanstatus() and casGtoPreemptScan() are acquires of the Gscan
bit. casfrom_Gscanstatus() is a release of the Gscan bit. casgstatus()
is like an acquire and release of the Gscan bit, since it will wait if
Gscan bit is currently set.
We have a cycle between hchan and Gscan. The acquisition of Gscan and
then hchan only happens in syncadjustsudogs() when the G is suspended,
so the main normal ordering (get hchan, then get Gscan) can't be
happening. So, I added a new rank lockRankHchanLeaf that is used when
acquiring hchan locks in syncadjustsudogs. This ranking is set so no
other locks can be acquired except other hchan locks.
Fixes #38922
Change-Id: I58ce526a74ba856cb42078f7b9901f2832e1d45c
Reviewed-on: https://go-review.googlesource.com/c/go/+/228417
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Fix a minor bug where it should use Textp2 instead of Textp. This
doesn't affect correctness. It just made the pre-allocation less
effective.
Change-Id: Ib3fa8ab3c64037e3582933970d051f278286353b
Reviewed-on: https://go-review.googlesource.com/c/go/+/232837
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
+----------------------------------------------------------------------+
| Hello, if you are reading this and run macOS, please test this code: |
|                                                                      |
| $ GO111MODULE=on go get golang.org/dl/gotip@latest                   |
| $ gotip download                                                     |
| $ GODEBUG=x509roots=1 gotip test crypto/x509 -v -run TestSystemRoots |
+----------------------------------------------------------------------+
We currently have two code paths to extract system roots on macOS: one
uses cgo to invoke a maze of Security.framework APIs; the other is a
horrible fallback that runs "/usr/bin/security verify-cert" on every
root that has custom policies to check if it's trusted for SSL.
The fallback is not only terrifying because it shells out to a binary,
but also because it lets in certificates that are not trusted roots but
are signed by trusted roots, and because it applies some filters (EKUs
and expiration) only to roots with custom policies, as the others are
not passed to verify-cert. The other code path, of course, requires cgo,
so can't be used when cross-compiling and involves a large ball of C.
It's all a mess, and it broke oh-so-many times (#14514, #16532, #19436,
#20990, #21416, #24437, #24652, #25649, #26073, #27958, #28025, #28092,
#29497, #30471, #30672, #30763, #30889, #32891, #38215, #38365, ...).
Since macOS does not have a stable syscall ABI, we already dynamically
link and invoke libSystem.dylib regardless of cgo availability (#17490).
How that works is that functions in package syscall (like syscall.Open)
take the address of assembly trampolines (like libc_open_trampoline)
that jump to symbols imported with cgo_import_dynamic (like libc_open),
and pass them along with arguments to syscall.syscall (which is
implemented as runtime.syscall_syscall). syscall_syscall informs the
scheduler and profiler, and then uses asmcgocall to switch to a system
stack and invoke runtime.syscall. The latter is an assembly trampoline
that unpacks the Go ABI arguments passed to syscall.syscall, finally
calls the remote function, and puts the return value on the Go stack.
(This last bit is the part that cgo compiles from a C wrapper.)
We can do something similar to link and invoke Security.framework!
The one difference is that runtime.syscall and friends check errors
based on the errno convention, which Security doesn't follow, so I added
runtime.syscallNoErr which just skips interpreting the return value.
We only need a variant with six arguments because the calling convention
is register-based, and extra arguments simply zero out some registers.
That's plumbed through as crypto/x509/internal/macOS.syscall. The rest
of that package is a set of wrappers for Security.framework and Core
Foundation functions, like syscall is for libSystem. In theory, as long
as macOS respects ABI backwards compatibility (a.k.a. as long as
binaries built for a previous OS version keep running) this should be
stable, as the final result is not different from what a C compiler
would make. (One exception might be dictionary key strings, which we
make our own copy of instead of using the dynamic symbol. If they change
the value of those strings things might break. But why would they.)
Finally, I rewrote the crypto/x509 cgo logic in Go using those wrappers.
It works! I tried to make it match 1:1 the old logic, so that
root_darwin_amd64.go can be reviewed by comparing it to
root_cgo_darwin_amd64.go. The only difference is that we do proper error
handling now, and assume that if there is no error the return values are
there, while before we'd just check for nil pointers and move on.
I kept the cgo logic to help with review and testing, but we should
delete it once we are confident the new code works.
The nocgo logic is gone and we shall never speak of it again.
Fixes #32604
Fixes #19561
Fixes #38365
Awakens Cthulhu
Change-Id: Id850962bad667f71e3af594bdfebbbb1edfbcbb4
Reviewed-on: https://go-review.googlesource.com/c/go/+/227037
Reviewed-by: Katie Hockman <katie@golang.org>
Also encode the certificates in a way that's more
consistent with TLS 1.3 (with a 24 byte length prefix).
Note that this will have an additional performance cost
requiring clients to do a full handshake every 7 days
where previously they were able to use the same ticket
indefinitely.
Updates #25256
Change-Id: Ic4d1ba0d92773c490b33b5f6c1320d557cc7347d
Reviewed-on: https://go-review.googlesource.com/c/go/+/231317
Run-TryBot: Katie Hockman <katie@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Filippo Valsorda <filippo@golang.org>
We might as well grow the stack at least as large as we'll need for
the frame that is calling morestack. It doesn't help with the
lots-of-small-frames case, but it may help a bit with the
few-big-frames case.
Update #18138
Change-Id: I1f49c97706a70e20b30433cbec99a7901528ea52
Reviewed-on: https://go-review.googlesource.com/c/go/+/225800
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
match:
    m = make([]T, x); copy(m, s)
for pointer free T and x==len(s) rewrite to:
    m = mallocgc(x*elemsize(T), nil, false); memmove(&m, &s, x*elemsize(T))
otherwise rewrite to:
    m = makeslicecopy([]T, x, s)
This avoids memclear and shading of pointers in the newly created slice
before the copy.
With this CL "s" is only be allowed to bev a variable and not a more
complex expression. This restriction could be lifted in future versions
of this optimization when it can be proven that "s" is not referencing "m".
Triggers 450 times during make.bash..
Reduces go binary size by ~8 kbyte.
name old time/op new time/op delta
MakeSliceCopy/mallocmove/Byte 71.1ns ± 1% 65.8ns ± 0% -7.49% (p=0.000 n=10+9)
MakeSliceCopy/mallocmove/Int 71.2ns ± 1% 66.0ns ± 0% -7.27% (p=0.000 n=10+8)
MakeSliceCopy/mallocmove/Ptr 104ns ± 4% 99ns ± 1% -5.13% (p=0.000 n=10+10)
MakeSliceCopy/makecopy/Byte 70.3ns ± 0% 68.0ns ± 0% -3.22% (p=0.000 n=10+9)
MakeSliceCopy/makecopy/Int 70.3ns ± 0% 68.5ns ± 1% -2.59% (p=0.000 n=9+10)
MakeSliceCopy/makecopy/Ptr 102ns ± 0% 99ns ± 1% -2.97% (p=0.000 n=9+9)
MakeSliceCopy/nilappend/Byte 75.4ns ± 0% 74.9ns ± 2% -0.63% (p=0.015 n=9+9)
MakeSliceCopy/nilappend/Int 75.6ns ± 0% 76.4ns ± 3% ~ (p=0.245 n=9+10)
MakeSliceCopy/nilappend/Ptr 107ns ± 0% 108ns ± 1% +0.93% (p=0.005 n=9+10)
Fixes #26252
Change-Id: Iec553dd1fef6ded16197216a472351c8799a8e71
Reviewed-on: https://go-review.googlesource.com/c/go/+/146719
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Speed up repeated HMAC operations with the same key by not recomputing
the first block of the inner and outer hashes in Reset and Sum, saving
two block computations each time.
This is a significant win for applications which hash many small
messages with the same key. In x/crypto/pbkdf2 for example, this
optimization cuts the number of block computations in half, speeding it
up by 25%-40% depending on the hash function.
The hash function needs to implement binary.Marshaler and
binary.Unmarshaler for this optimization to work, so that we can save
and restore its internal state. All hash functions in the standard
library are marshalable (CL 66710) but if the hash isn't marshalable, we
fall back on the old behaviour.
Marshaling the hashes does add a couple unavoidable new allocations, but
this only has to be done once, so the cost is amortized over repeated
uses. To minimize impact to applications which don't (or can't) reuse
hmac objects, marshaling is performed in Reset (rather than in New),
since calling Reset seems like a good indication that the caller intends
to reuse the hmac object later.
I had to add a boolean field to the hmac state to remember if we've
marshaled the hashes or not. This is paid for by removing the size and
blocksize fields, which were basically unused except for some
initialization work in New, and to fulfill the Size and Blocksize
methods. Size and Blocksize can just be forwarded to the underlying
hash, so there doesn't really seem to be any reason to waste space
caching their values.
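The usage pattern that benefits looks like this (a sketch; assumes the
crypto/hmac and crypto/sha256 imports):
    func macAll(key []byte, msgs [][]byte) [][]byte {
        mac := hmac.New(sha256.New, key)
        sums := make([][]byte, 0, len(msgs))
        for _, msg := range msgs {
            mac.Reset() // Reset and Sum now reuse the saved key-derived block states
            mac.Write(msg)
            sums = append(sums, mac.Sum(nil))
        }
        return sums
    }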
crypto/hmac benchmarks:
name old time/op new time/op delta
HMAC_Reset/SHA1/1K-2 4.06µs ± 0% 3.77µs ± 0% -7.29% (p=0.000 n=8+10)
HMAC_Reset/SHA1/32-2 1.08µs ± 0% 0.78µs ± 1% -27.67% (p=0.000 n=10+10)
HMAC_Reset/SHA256/1K-2 10.3µs ± 0% 9.4µs ± 0% -9.03% (p=0.000 n=10+10)
HMAC_Reset/SHA256/32-2 2.32µs ± 0% 1.42µs ± 0% -38.87% (p=0.000 n=10+10)
HMAC_Reset/SHA512/1K-2 8.22µs ± 0% 7.04µs ± 0% -14.32% (p=0.000 n=9+9)
HMAC_Reset/SHA512/32-2 3.08µs ± 0% 1.89µs ± 0% -38.54% (p=0.000 n=10+9)
HMAC_New/SHA1/1K-2 4.86µs ± 1% 4.93µs ± 1% +1.30% (p=0.000 n=10+9)
HMAC_New/SHA1/32-2 1.91µs ± 1% 1.95µs ± 1% +1.84% (p=0.000 n=10+9)
HMAC_New/SHA256/1K-2 11.2µs ± 1% 11.2µs ± 0% ~ (p=1.000 n=9+10)
HMAC_New/SHA256/32-2 3.22µs ± 2% 3.19µs ± 2% -1.07% (p=0.018 n=9+10)
HMAC_New/SHA512/1K-2 9.54µs ± 0% 9.66µs ± 1% +1.31% (p=0.000 n=9+10)
HMAC_New/SHA512/32-2 4.37µs ± 1% 4.46µs ± 1% +1.97% (p=0.000 n=10+9)
name old speed new speed delta
HMAC_Reset/SHA1/1K-2 252MB/s ± 0% 272MB/s ± 0% +7.86% (p=0.000 n=8+10)
HMAC_Reset/SHA1/32-2 29.7MB/s ± 0% 41.1MB/s ± 1% +38.26% (p=0.000 n=10+10)
HMAC_Reset/SHA256/1K-2 99.1MB/s ± 0% 108.9MB/s ± 0% +9.93% (p=0.000 n=10+10)
HMAC_Reset/SHA256/32-2 13.8MB/s ± 0% 22.6MB/s ± 0% +63.57% (p=0.000 n=10+10)
HMAC_Reset/SHA512/1K-2 125MB/s ± 0% 145MB/s ± 0% +16.71% (p=0.000 n=9+9)
HMAC_Reset/SHA512/32-2 10.4MB/s ± 0% 16.9MB/s ± 0% +62.69% (p=0.000 n=10+9)
HMAC_New/SHA1/1K-2 211MB/s ± 1% 208MB/s ± 1% -1.29% (p=0.000 n=10+9)
HMAC_New/SHA1/32-2 16.7MB/s ± 1% 16.4MB/s ± 1% -1.81% (p=0.000 n=10+9)
HMAC_New/SHA256/1K-2 91.3MB/s ± 1% 91.5MB/s ± 0% ~ (p=0.950 n=9+10)
HMAC_New/SHA256/32-2 9.94MB/s ± 2% 10.04MB/s ± 2% +1.09% (p=0.021 n=9+10)
HMAC_New/SHA512/1K-2 107MB/s ± 0% 106MB/s ± 1% -1.29% (p=0.000 n=9+10)
HMAC_New/SHA512/32-2 7.32MB/s ± 1% 7.18MB/s ± 1% -1.89% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
HMAC_Reset/SHA1/1K-2 0.00B ±NaN% 0.00B ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA1/32-2 0.00B ±NaN% 0.00B ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA256/1K-2 0.00B ±NaN% 0.00B ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA256/32-2 0.00B ±NaN% 0.00B ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA512/1K-2 0.00B ±NaN% 0.00B ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA512/32-2 0.00B ±NaN% 0.00B ±NaN% ~ (all samples are equal)
HMAC_New/SHA1/1K-2 448B ± 0% 448B ± 0% ~ (all samples are equal)
HMAC_New/SHA1/32-2 448B ± 0% 448B ± 0% ~ (all samples are equal)
HMAC_New/SHA256/1K-2 480B ± 0% 480B ± 0% ~ (all samples are equal)
HMAC_New/SHA256/32-2 480B ± 0% 480B ± 0% ~ (all samples are equal)
HMAC_New/SHA512/1K-2 800B ± 0% 800B ± 0% ~ (all samples are equal)
HMAC_New/SHA512/32-2 800B ± 0% 800B ± 0% ~ (all samples are equal)
name old allocs/op new allocs/op delta
HMAC_Reset/SHA1/1K-2 0.00 ±NaN% 0.00 ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA1/32-2 0.00 ±NaN% 0.00 ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA256/1K-2 0.00 ±NaN% 0.00 ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA256/32-2 0.00 ±NaN% 0.00 ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA512/1K-2 0.00 ±NaN% 0.00 ±NaN% ~ (all samples are equal)
HMAC_Reset/SHA512/32-2 0.00 ±NaN% 0.00 ±NaN% ~ (all samples are equal)
HMAC_New/SHA1/1K-2 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
HMAC_New/SHA1/32-2 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
HMAC_New/SHA256/1K-2 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
HMAC_New/SHA256/32-2 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
HMAC_New/SHA512/1K-2 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
HMAC_New/SHA512/32-2 5.00 ± 0% 5.00 ± 0% ~ (all samples are equal)
x/crypto/pbkdf2 benchmarks:
name old time/op new time/op delta
HMACSHA1-2 4.63ms ± 0% 3.40ms ± 0% -26.58% (p=0.000 n=10+9)
HMACSHA256-2 9.75ms ± 0% 5.98ms ± 0% -38.62% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
HMACSHA1-2 516B ± 0% 708B ± 0% +37.21% (p=0.000 n=10+10)
HMACSHA256-2 549B ± 0% 772B ± 0% +40.62% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
HMACSHA1-2 8.00 ± 0% 10.00 ± 0% +25.00% (p=0.000 n=10+10)
HMACSHA256-2 8.00 ± 0% 10.00 ± 0% +25.00% (p=0.000 n=10+10)
Fixes #19941
Change-Id: I7077a6f875be68d3da05f7b3664e18514861886f
Reviewed-on: https://go-review.googlesource.com/c/go/+/27458
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Filippo Valsorda <filippo@golang.org>
The code writes text sections twice, once with Codeblk and once with
Datblk. The second write shouldn't be there.
May fix #38898.
Change-Id: I4ec70294059ec9aa0fc4cc69a3cd824f5843287b
Reviewed-on: https://go-review.googlesource.com/c/go/+/232661
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
Reject base 128 encoded integers that aren't using minimal encoding,
specifically if the leading octet of an encoded integer is 0x80. This
only affects parsing of tags and OIDs, both of which expect this
encoding (see X.690 8.1.2.4.2 and 8.19.2).
Fixes #36881
Change-Id: I969cf48ac1fba7e56bac334672806a0784d3e123
GitHub-Last-Rev: fefc03d202
GitHub-Pull-Request: golang/go#38281
Reviewed-on: https://go-review.googlesource.com/c/go/+/227320
Reviewed-by: Filippo Valsorda <filippo@golang.org>
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The previous behavior directly contradicted the docs that have been in
place for years:
    To unmarshal a JSON array into a slice, Unmarshal resets the
    slice length to zero and then appends each element to the slice.
We could use reflect.New to create a new element and reflect.Append to
then append it to the destination slice, but benchmarks have shown that
reflect.Append is very slow compared to the code that manually grows a
slice in this file.
Instead, if we're decoding into an element that came from the original
backing array, zero it before decoding into it. We're going to be using
the CodeDecoder benchmark, as it has a slice of struct pointers that's
decoded very often.
Note that we still reuse existing values from arrays being decoded into,
as the documentation agrees with the existing implementation in that
case:
    To unmarshal a JSON array into a Go array, Unmarshal decodes
    JSON array elements into corresponding Go array elements.
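A small example of the documented slice behavior (a sketch, not the
CL's test):
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type T struct{ A, B int }

    func main() {
        s := []T{{A: 1, B: 2}}
        // The reused backing-array element is zeroed before decoding,
        // so B does not leak through; previously this printed [{3 2}].
        json.Unmarshal([]byte(`[{"A": 3}]`), &s)
        fmt.Println(s) // [{3 0}]
    }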
The numbers with the benchmark as-is might seem catastrophic, but that's
only because the benchmark is decoding into the same variable over and
over again. Since the old decoder was happy to reuse slice elements, it
would save a lot of allocations by not having to zero and re-allocate
said elements:
name old time/op new time/op delta
CodeDecoder-8 10.4ms ± 1% 10.9ms ± 1% +4.41% (p=0.000 n=10+10)
name old speed new speed delta
CodeDecoder-8 186MB/s ± 1% 178MB/s ± 1% -4.23% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
CodeDecoder-8 2.19MB ± 0% 3.59MB ± 0% +64.09% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
CodeDecoder-8 76.8k ± 0% 92.7k ± 0% +20.71% (p=0.000 n=10+10)
We can prove this by moving 'var r codeResponse' into the loop, so that
the benchmark no longer reuses the destination pointer. And sure enough,
we no longer see the slow-down caused by the extra allocations:
name old time/op new time/op delta
CodeDecoder-8 10.9ms ± 0% 10.9ms ± 1% -0.37% (p=0.043 n=10+10)
name old speed new speed delta
CodeDecoder-8 177MB/s ± 0% 178MB/s ± 1% +0.37% (p=0.041 n=10+10)
name old alloc/op new alloc/op delta
CodeDecoder-8 3.59MB ± 0% 3.59MB ± 0% ~ (p=0.780 n=10+10)
name old allocs/op new allocs/op delta
CodeDecoder-8 92.7k ± 0% 92.7k ± 0% ~ (all equal)
I believe that it's useful to leave the benchmarks as they are now,
because the decoder does reuse memory in some cases. For example,
existing map elements are reused. However, subtle changes like this one
need to be benchmarked carefully.
Finally, add a couple of tests involving both a slice and an array of
structs.
Fixes #21092.
Change-Id: I8b1194f25e723a31abd146fbfe9428ac10c1389d
Reviewed-on: https://go-review.googlesource.com/c/go/+/191783
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The `yield := osyield` line doesn't serve any purpose; it was committed in `2015`, so it's time to delete that line. :)
Change-Id: I382d4d32cf320f054f011f3b6684c868cbcb0ff2
GitHub-Last-Rev: 7a0aa25e55
GitHub-Pull-Request: golang/go#36078
Reviewed-on: https://go-review.googlesource.com/c/go/+/210837
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This commit moves the isSelect bool below the ticket uint32. The
boolean was consuming 8 bytes of the struct. The uint32 was also
consuming 8 bytes, so we can pack isSelect below the uint32 and save 8
bytes. This reduces the sudog struct from 96 bytes to 88 bytes.
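An illustration of the padding effect (hypothetical types, not the real
sudog layout; sizes are for 64-bit platforms):
    package main

    import (
        "fmt"
        "unsafe"
    )

    // Placing the bool before an 8-byte-aligned field wastes 7 bytes of
    // padding; packing it next to the uint32 shares one 8-byte slot.
    type loose struct {
        isSelect bool
        p1       *int
        ticket   uint32
        p2       *int
    }

    type packed struct {
        p1       *int
        ticket   uint32
        isSelect bool
        p2       *int
    }

    func main() {
        fmt.Println(unsafe.Sizeof(loose{}), unsafe.Sizeof(packed{})) // 32 24
    }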
Change-Id: If555cdaf2f5eaa125e2590fc4d113dbc99750738
GitHub-Last-Rev: d63b4e086b
GitHub-Pull-Request: golang/go#36552
Reviewed-on: https://go-review.googlesource.com/c/go/+/214677
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Change-Id: I493bb7e5e9a9e1752236dea1e032b317da7f67f1
Reviewed-on: https://go-review.googlesource.com/c/go/+/211560
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This CL sets positions for errors from calls to load within the load
call itself, similar to how the rest of the code in pkg.go sets
positions right after the error is set on the package.
This allows the code to ensure that we only add positions either for
ImportPathErrors, or if an error was passed into load, and was set
using setLoadPackageDataError. (Though I'm wondering if the call
to setLoadPackageDataError should be done before the call to load).
Fixes #38034
Change-Id: I0748866933b4c1a329954b4b96640bef702a4644
Reviewed-on: https://go-review.googlesource.com/c/go/+/228784
Reviewed-by: Bryan C. Mills <bcmills@google.com>
Reviewed-by: Jay Conrod <jayconrod@google.com>
Run-TryBot: Bryan C. Mills <bcmills@google.com>