The former was broken deliberately; see #58110. The latter is just an
in-progress port.
Updates #58110, #56001.
Change-Id: I7f1c5e2ac016fb7c65c081174d19239fc9b1ea32
Reviewed-on: https://go-review.googlesource.com/c/go/+/482115
Auto-Submit: Heschi Kreinick <heschi@google.com>
TryBot-Bypass: Heschi Kreinick <heschi@google.com>
Run-TryBot: Heschi Kreinick <heschi@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
ECMAScript 6 introduced template literals [0][1], which are delimited
with backticks. These need to be escaped in a similar fashion to the
delimiters for other string literals. Additionally template literals can
contain special syntax for string interpolation.
There is no clear way to allow safe insertion of actions within JS
template literals, as handling (JS) string interpolation inside of these
literals is rather complex. As such we've chosen to simply disallow
template actions within these template literals.
A new error code is added for this parsing failure case, errJsTmplLit,
but it is unexported, since introducing an API change in a minor
release would not be backwards compatible with other minor release
versions. We will export this code in the next major release.
The previous behavior (with the caveat that backticks are now escaped
properly) can be re-enabled with GODEBUG=jstmpllitinterp=1.
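As a minimal illustration (hypothetical template, not taken from the CL),
a template like the following now fails at execution time unless the
GODEBUG setting above is used:

    package main

    import (
        "html/template"
        "os"
    )

    func main() {
        // The action inside the backtick-delimited JS template literal is
        // the part that is now disallowed.
        const src = `<script>var msg = ` + "`hi {{.Name}}`" + `;</script>`
        t := template.Must(template.New("page").Parse(src))
        if err := t.Execute(os.Stdout, map[string]string{"Name": "gopher"}); err != nil {
            // With this change the escaper reports an error here; with
            // GODEBUG=jstmpllitinterp=1 the old interpolating behavior returns.
            os.Stderr.WriteString(err.Error() + "\n")
        }
    }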
This change subsumes CL471455.
Thanks to Sohom Datta, Manipal Institute of Technology, for reporting
this issue.
Fixes CVE-2023-24538
Fixes #59234
[0] https://tc39.es/ecma262/multipage/ecmascript-language-expressions.html#sec-template-literals
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals
Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802457
Reviewed-by: Damien Neil <dneil@google.com>
Run-TryBot: Damien Neil <dneil@google.com>
Reviewed-by: Julie Qiu <julieqiu@google.com>
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Change-Id: Ia221fefdb273bd0f066dffc2abcf2a616801d2f2
Reviewed-on: https://go-review.googlesource.com/c/go/+/482079
TryBot-Bypass: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Setting a large line or column number using a //line directive can cause
integer overflow even in small source files.
Limit line and column numbers in //line directives to 2^30-1, which
is small enough to avoid int32 overflow on all reasonably sized files.
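For illustration, a hypothetical directive of the kind that previously
risked overflowing position arithmetic; its line number is now limited
to 2^30-1:

    //line big.go:4000000000:1
    package p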
For #59180
Fixes CVE-2023-24537
Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802456
Reviewed-by: Julie Qiu <julieqiu@google.com>
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Run-TryBot: Damien Neil <dneil@google.com>
Change-Id: I149bf34deca532af7994203fa1e6aca3c890ea14
Reviewed-on: https://go-review.googlesource.com/c/go/+/482078
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Bypass: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
The parsed forms of MIME headers and multipart forms can consume
substantially more memory than the size of the input data.
A malicious input containing a very large number of headers or
form parts can cause excessively large memory allocations.
Set limits on the size of MIME data:
Reader.NextPart and Reader.NextRawPart limit the number
of headers in a part to 10000.
Reader.ReadForm limits the total number of headers in all
FileHeaders to 10000.
Both of these limits may be set with
GODEBUG=multipartmaxheaders=<value>.
Reader.ReadForm limits the number of parts in a form to 1000.
This limit may be set with GODEBUG=multipartmaxparts=<value>.
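A hedged sketch of server code affected by these limits (hypothetical
handler; the GODEBUG names are the ones listed above):

    package main

    import (
        "fmt"
        "net/http"
    )

    func upload(w http.ResponseWriter, r *http.Request) {
        mr, err := r.MultipartReader()
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // ReadForm is now subject to the part and header limits; inputs
        // that exceed them fail here instead of exhausting memory.
        form, err := mr.ReadForm(10 << 20) // keep up to 10 MiB in memory
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        defer form.RemoveAll()
        fmt.Fprintf(w, "%d value fields, %d file fields\n", len(form.Value), len(form.File))
    }

    func main() {
        http.HandleFunc("/upload", upload)
        http.ListenAndServe("localhost:8080", nil)
    }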
Thanks to Jakob Ackermann (@das7pad) for reporting this issue.
For CVE-2023-24536
For #59153
Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802455
Run-TryBot: Damien Neil <dneil@google.com>
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Reviewed-by: Julie Qiu <julieqiu@google.com>
Change-Id: I08dd297bd75724aade4b0bd6a7d19aeca5bbf99f
Reviewed-on: https://go-review.googlesource.com/c/go/+/482077
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
For requests containing large numbers of small parts,
memory consumption of a parsed form could be about 250%
over the estimated size.
When considering the size of parsed forms, account for the size of
FileHeader structs and increase the estimate of memory consumed by
map entries.
Thanks to Jakob Ackermann (@das7pad) for reporting this issue.
For CVE-2023-24536
For #59153
Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802454
Run-TryBot: Damien Neil <dneil@google.com>
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Reviewed-by: Julie Qiu <julieqiu@google.com>
Change-Id: I9620758495ed77c09ca6dc5db4b723c29f3baad8
Reviewed-on: https://go-review.googlesource.com/c/go/+/482076
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
When copying form data to disk with io.Copy,
allocate only one copy buffer and reuse it rather than
creating two buffers per file (one from io.multiReader.WriteTo,
and a second one from os.File.ReadFrom).
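A minimal standalone sketch of the buffer-reuse pattern (not the
multipart code itself); the wrapper types hide WriteTo/ReadFrom so that
io.CopyBuffer actually uses the supplied buffer:

    package main

    import (
        "io"
        "os"
        "strings"
    )

    // readerOnly and writerOnly strip the WriteTo/ReadFrom fast paths, which
    // would otherwise make io.CopyBuffer ignore the explicit buffer.
    type readerOnly struct{ io.Reader }
    type writerOnly struct{ io.Writer }

    func main() {
        buf := make([]byte, 32*1024) // one buffer, reused for every part
        parts := []io.Reader{
            strings.NewReader("first part\n"),
            strings.NewReader("second part\n"),
        }
        for _, p := range parts {
            if _, err := io.CopyBuffer(writerOnly{os.Stdout}, readerOnly{p}, buf); err != nil {
                panic(err)
            }
        }
    }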
Thanks to Jakob Ackermann (@das7pad) for reporting this issue.
For CVE-2023-24536
For #59153
Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802453
Run-TryBot: Damien Neil <dneil@google.com>
Reviewed-by: Julie Qiu <julieqiu@google.com>
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Change-Id: I732bd2e1e7467918cac8ab9d65d089272ba4656f
Reviewed-on: https://go-review.googlesource.com/c/go/+/482075
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Bypass: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
A parsed MIME header is a map[string][]string. In the common case,
a header contains many one-element []string slices. To avoid
allocating a separate slice for each key, ReadMIMEHeader looks
ahead in the input to predict the number of keys that will be
parsed, and allocates a single []string of that length.
The individual slices are then allocated out of the larger one.
The prediction of the number of header keys was done by counting
newlines in the input buffer, which does not take into account
header continuation lines (where a header key/value spans multiple
lines) or the end of the header block and the start of the body.
This could lead to a substantial amount of overallocation, for
example when the body consists of nothing but a large block of
newlines.
Fix header key count prediction to take into account the end of
the headers (indicated by a blank line) and continuation lines
(starting with whitespace).
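A simplified sketch of the corrected prediction (not the actual
textproto code): count only lines that start a new key, and stop at the
blank line that ends the header block.

    package main

    import (
        "bytes"
        "fmt"
    )

    // countHeaderKeys counts lines that begin a new header key, skipping
    // continuation lines (which start with a space or tab) and stopping at
    // the blank line that separates the headers from the body.
    func countHeaderKeys(buf []byte) int {
        n := 0
        for {
            i := bytes.IndexByte(buf, '\n')
            if i < 0 {
                return n
            }
            line := bytes.TrimRight(buf[:i], "\r")
            buf = buf[i+1:]
            if len(line) == 0 {
                return n // end of header block: ignore everything after it
            }
            if line[0] != ' ' && line[0] != '\t' {
                n++ // a new key, not a continuation of the previous one
            }
        }
    }

    func main() {
        hdr := []byte("Subject: a folded\r\n subject line\r\nFrom: x@example.com\r\n\r\n\n\n\n")
        fmt.Println(countHeaderKeys(hdr)) // 2, even though the input has 7 newlines
    }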
Thanks to Jakob Ackermann (@das7pad) for reporting this issue.
For #58975
Fixes CVE-2023-24534
Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802452
Run-TryBot: Damien Neil <dneil@google.com>
Reviewed-by: Roland Shoemaker <bracewell@google.com>
Reviewed-by: Julie Qiu <julieqiu@google.com>
Change-Id: Iacc1c2b5ea6509529845a972414199f988ede1e5
Reviewed-on: https://go-review.googlesource.com/c/go/+/481994
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
man getaddrinfo:
    EAI_NODATA
        The specified network host exists, but does not have any
        network addresses defined.
In the Go resolver we treat this kind of error as "no such host".
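For callers, an EAI_NODATA result now surfaces like any other "no such
host" failure; a brief sketch (hypothetical host name):

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    func main() {
        _, err := net.LookupHost("host-with-no-addresses.example")
        var dnsErr *net.DNSError
        // With this change, EAI_NODATA is reported like EAI_NONAME:
        // as a "no such host" DNS error.
        if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
            fmt.Println("no such host:", dnsErr.Name)
        }
    }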
Change-Id: I69fab6f8da8e3a86907e65104bca9f055968633a
GitHub-Last-Rev: b4891e2add
GitHub-Pull-Request: golang/go#57507
Reviewed-on: https://go-review.googlesource.com/c/go/+/459955
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Run-TryBot: Mateusz Poliwczak <mpoliwczak34@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Fixes the misuse of "a" vs "an", according to English grammatical
expectations and using https://www.a-or-an.com/
Change-Id: I53ac724070e3ff3d33c304483fe72c023c7cda47
Reviewed-on: https://go-review.googlesource.com/c/go/+/480536
Run-TryBot: shuang cui <imcusg@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Growing by one is a simpler, and often cheaper,
operation compared to appending one (newly created) zero value.
The method was introduced in Go 1.20.
growSlice in dec_helpers.go is left alone,
as it grows using the builtin append instead of reflect.Append.
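A minimal standalone sketch (not the gob decoder itself) contrasting the
two ways of extending a slice held in a reflect.Value by one element:

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        s := []int{1, 2, 3}
        v := reflect.ValueOf(&s).Elem()

        // Old approach: build a zero value and append it.
        v.Set(reflect.Append(v, reflect.Zero(v.Type().Elem())))

        // New approach: grow the capacity by one, then extend the length
        // in place; the new element is already zeroed.
        v.Grow(1)
        v.SetLen(v.Len() + 1)

        fmt.Println(s) // [1 2 3 0 0]
    }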
No noticeable performance difference on any of our benchmarks,
as this code only runs for slices large enough to not fit in
saferio.SliceCap, and none of our benchmarks use data that large.
goos: linux
goarch: amd64
pkg: encoding/gob
cpu: AMD Ryzen 7 PRO 5850U with Radeon Graphics
│ old │ new │
│ sec/op │ sec/op vs base │
DecodeBytesSlice-8 11.37µ ± 1% 11.46µ ± 4% ~ (p=0.315 n=10)
DecodeInterfaceSlice-8 96.49µ ± 1% 95.75µ ± 1% ~ (p=0.436 n=10)
geomean 33.12µ 33.12µ +0.01%
│ old │ new │
│ B/op │ B/op vs base │
DecodeBytesSlice-8 22.39Ki ± 0% 22.39Ki ± 0% ~ (p=1.000 n=10)
DecodeInterfaceSlice-8 80.25Ki ± 0% 80.25Ki ± 0% ~ (p=0.650 n=10)
geomean 42.39Ki 42.39Ki +0.00%
│ old │ new │
│ allocs/op │ allocs/op vs base │
DecodeBytesSlice-8 169.0 ± 0% 169.0 ± 0% ~ (p=1.000 n=10) ¹
DecodeInterfaceSlice-8 3.178k ± 0% 3.178k ± 0% ~ (p=1.000 n=10) ¹
geomean 732.9 732.9 +0.00%
Change-Id: I468aebf4ae6f197a1fd35f6fee809ca591c1788f
Reviewed-on: https://go-review.googlesource.com/c/go/+/481376
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
I almost exclusively use these benchmarks with -benchtime already.
Change-Id: I6539cbba6abbdb6b275502e122f4e16856d8b9e4
Reviewed-on: https://go-review.googlesource.com/c/go/+/481375
Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Rob Pike <r@golang.org>
This was intended to be merged together with changes in CL 479616.
For #58141
Change-Id: I76c38d3d4dfee93a1a170e28af28f0c9d6382830
Reviewed-on: https://go-review.googlesource.com/c/go/+/480656
Run-TryBot: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Auto-Submit: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Add new helper macros to further simplify the transition from
the host's ABI to Go. Fortunately the same one should work for
all PPC64 targets.
Update the other call site that uses these wrappers, to consolidate
further. Also, update the call to runtime.sigtrampgo to call the
ABIInternal version directly.
Also, update the SAVE/RESTORE_VR macros to accept R0.
Change-Id: I0046176029e1e1b25838688e4b7bf57805b01bd4
Reviewed-on: https://go-review.googlesource.com/c/go/+/476297
Reviewed-by: Archana Ravindar <aravind5@in.ibm.com>
Run-TryBot: Paul Murphy <murp@ibm.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts CL 481059, which in turn reverts CL 478917.
Reason for revert: reapply the original CL.
Change-Id: Icf6bb6a620313b44fadcc7f69a62fdbb943e34fd
Reviewed-on: https://go-review.googlesource.com/c/go/+/481075
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Paul Murphy <murp@ibm.com>
Skip one of the testpoints that verifies inlining, since it
no longer passes as a result of reverting CL 479095. Once we
roll forward with a new version of CL 479095 we can re-enable
this testpoint.
Change-Id: I41f6fb3fce78f31e60c5f0ed2856be0e66865149
Reviewed-on: https://go-review.googlesource.com/c/go/+/481755
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Than McIntosh <thanm@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
This reapplies CL 392854, with the followup fixes in CL 479255,
CL 479915, and CL 481057 incorporated.
CL 392854, by doujiang24 <doujiang24@gmail.com>, speed up C to Go
calls by binding the M to the C thread. See below for its
description.
CL 479255 is a followup fix for a small bug in ARM assembly code.
CL 479915 is another followup fix to address C to Go calls after
the C code uses some stack, but that CL is also buggy.
CL 481057, by Michael Knyszek, is a followup fix for a memory leak
bug of CL 479915.
[Original CL 392854 description]
In a C thread, it's necessary to acquire an extra M by using needm
while invoking a Go function from C. But needm and dropm are costly
due to the signal-related syscalls.
So we no longer dropm when returning back to C, which means the extra
M is bound to the C thread until it exits, avoiding needm and dropm on
each C to Go call.
Instead, we only dropm when the C thread exits, so the extra M won't
leak.
When invoking a Go function from C:
Allocate a pthread variable using pthread_key_create, only once per
shared object, and register a thread-exit-time destructor.
Store the g0 of the current m into the thread-specific value of the
pthread key, only once per C thread, so that the destructor can put
the extra M back onto the extra M list when the C thread exits.
When returning back to C:
Skip dropm in cgocallback when the pthread variable has been created,
so that the extra M will be reused the next time a Go function is
invoked from C.
This is purely a performance optimization. The old version, in which
needm & dropm happen on each cgo call, is still correct too, and we
have to keep the old version on systems with cgo but without pthreads,
like Windows.
This optimization is significant, and the exact speedup depends on the
OS and CPU, but in general it can be considered roughly 10x faster for
a simple Go function call from a C thread.
For the newly added BenchmarkCGoInCThread, some benchmark results:
1. it's 28x faster, from 3395 ns/op to 121 ns/op, in darwin OS & Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
2. it's 6.5x faster, from 1495 ns/op to 230 ns/op, in Linux OS & Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz
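A hedged sketch (hypothetical file layout, not the benchmark itself) of
the workload that benefits, where a thread created in C calls an
exported Go function in a loop:

    // thread.c -- kept in a separate file because //export forbids
    // definitions in the cgo preamble:
    //
    //     #include <pthread.h>
    //     extern void GoNop(void);           // the exported Go function
    //     static void *loop(void *arg) {
    //         for (int i = 0; i < 1000000; i++)
    //             GoNop();                   // C -> Go call on a C-owned thread
    //         return NULL;
    //     }
    //     void runOnCThread(void) {
    //         pthread_t t;
    //         pthread_create(&t, NULL, loop, NULL);
    //         pthread_join(t, NULL);
    //     }

    // main.go
    package main

    // void runOnCThread(void);
    import "C"

    //export GoNop
    func GoNop() {}

    func main() {
        C.runOnCThread()
    }

Before this change every GoNop call paid for needm and dropm; afterwards
the extra M stays bound to the C thread until that thread exits.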
[CL 479915 description]
Currently, when C calls into Go the first time, we grab an M
using needm, which sets m.g0's stack bounds using the SP. We don't
know how big the stack is, so we simply assume 32K. Previously,
when the Go function returns to C, we drop the M, and the next
time C calls into Go, we put a new stack bound on the g0 based on
the current SP. After CL 392854, we don't drop the M, and the next
time C calls into Go, we reuse the same g0, without recomputing
the stack bounds. If the C code uses quite a bit of stack space
before calling into Go, the SP may be well below the 32K stack
bound we assumed, so the runtime thinks the g0 stack overflows.
This CL makes needm get a more accurate stack bound from
pthread. (On some platforms this may still be a guess, as we don't
know exactly where we are in the C stack, but it is probably better
than simply assuming 32K.)
Fixes #51676.
Fixes #59294.
Change-Id: I9bf1400106d5c08ce621d2ed1df3a2d9e3f55494
Reviewed-on: https://go-review.googlesource.com/c/go/+/481061
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: DeJiang Zhu (doujiang) <doujiang24@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
These are replaced by unsafe.String etc, which were added in Go 1.20.
Per https://go.dev/wiki/Deprecated, we must wait until Go 1.21
to mark them deprecated.
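For reference, a hedged sketch of the replacement pattern (typical
usage, not from this CL): build strings and slices with the unsafe
helpers added in Go 1.20 instead of filling in the deprecated header
structs by hand.

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        b := []byte("hello")

        // Instead of populating a reflect.StringHeader manually:
        s := unsafe.String(unsafe.SliceData(b), len(b))

        // Instead of reading a reflect.SliceHeader's Data and Len fields:
        b2 := unsafe.Slice(unsafe.StringData(s), len(s))

        fmt.Println(s, string(b2))
    }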
Fixes #56906.
Change-Id: I4198c3f3456e9e2031f6c7232842e187e6448892
Reviewed-on: https://go-review.googlesource.com/c/go/+/452762
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
Run-TryBot: Russ Cox <rsc@golang.org>
This file is itself template input, so we have to hide the template
in the go command example.
Change-Id: Ifc4eaff35ca8dc2fb479f8e28d64c06b2a9c9d3b
Reviewed-on: https://go-review.googlesource.com/c/go/+/480995
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Casting to a *uintptr is not OK if there aren't at least 8 bytes of
data backing that pointer (on 64-bit archs).
So although we end up making a slice of length 0 with that pointer,
the cast itself doesn't know that.
Instead, bail out early if the result is going to have length 0.
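A simplified sketch of the safer ordering (hypothetical helper, not the
actual runtime code): check for the empty result before forming the
wider typed pointer.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // uintptrSlice bails out before the cast, because (*uintptr)(p) by itself
    // asserts that at least 8 bytes (on 64-bit) of valid data sit behind p.
    func uintptrSlice(p unsafe.Pointer, n int) []uintptr {
        if n == 0 {
            return nil // bail early; never touch p
        }
        return unsafe.Slice((*uintptr)(p), n)
    }

    func main() {
        var x uintptr = 42
        fmt.Println(uintptrSlice(unsafe.Pointer(&x), 1)) // [42]
        fmt.Println(uintptrSlice(nil, 0))                // []
    }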
Fixes #59334
Change-Id: Id3c0e09d341d838835c0382cccfb0f71dc3dc7e6
Reviewed-on: https://go-review.googlesource.com/c/go/+/480575
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
File.Chmod has been supported on Windows since CL 250077, so there is
no need to skip the call anymore.
Updates #18026
Change-Id: Ie03cf016e651b93241f73067614fc4cb341504ef
Reviewed-on: https://go-review.googlesource.com/c/go/+/480416
Run-TryBot: Quim Muntal <quimmuntal@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
There are many tests in internal/gcimporter that are skipped on
Windows because they build a test program that needs the -D flag
when invoking the Go compiler.
This flag has been passed since CL 442303, so there is no need to
skip those tests.
Change-Id: I877e670194048bda9a52ad2568650cf33eacfb5a
Reviewed-on: https://go-review.googlesource.com/c/go/+/480415
Run-TryBot: Quim Muntal <quimmuntal@gmail.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Windows has supported external linking for a while, so there is no
need to skip this test.
Change-Id: Ic3d0cc3441ee670767dae085db5e62fce205ff04
Reviewed-on: https://go-review.googlesource.com/c/go/+/480417
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Quim Muntal <quimmuntal@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
This adds the three functions from #56102 to the sync package. These
provide a convenient API for the most common uses of sync.Once.
The performance of these is comparable to direct use of sync.Once:
$ go test -run ^$ -bench OnceFunc\|OnceVal -count 20 | benchstat -row .name -col /v
goos: linux
goarch: amd64
pkg: sync
cpu: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz
│ Once │ Global │ Local │
│ sec/op │ sec/op vs base │ sec/op vs base │
OnceFunc 1.3500n ± 6% 2.7030n ± 1% +100.22% (p=0.000 n=20) 0.3935n ± 0% -70.86% (p=0.000 n=20)
OnceValue 1.3155n ± 0% 2.7460n ± 1% +108.74% (p=0.000 n=20) 0.5478n ± 1% -58.35% (p=0.000 n=20)
The "Once" column represents the baseline of how code would typically
express these patterns using sync.Once. "Global" binds the closure
returned by OnceFunc/OnceValue to global, which is how I expect these
to be used most of the time. Currently, this defeats some inlining
opportunities, which roughly doubles the cost over sync.Once; however,
it's still *extremely* fast. Finally, "Local" binds the returned
closure to a local variable. This unlocks several levels of inlining
and represents pretty much the best possible case for these APIs, but
is also unlikely to happen in practice. In principle the compiler
could recognize that the global in the "Global" case is initialized in
place and never mutated and do the same optimizations it does in the
"Local" case, but it currently does not.
Fixes #56102
Change-Id: If7355eccd7c8de7288d89a4282ff15ab1469e420
Reviewed-on: https://go-review.googlesource.com/c/go/+/451356
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Caleb Spare <cespare@gmail.com>
Auto-Submit: Austin Clements <austin@google.com>
Currently, when the inliner is determining if a function is
inlineable, it descends into the bodies of closures constructed by
that function. This has several unfortunate consequences:
- If the closure contains a disallowed operation (e.g., a defer), then
the outer function can't be inlined. It makes sense that the
*closure* can't be inlined in this case, but it doesn't make sense
to punish the function that constructs the closure.
- The hairiness of the closure counts against the inlining budget of
the outer function. Since we currently copy the closure body when
inlining the outer function, this makes sense from the perspective
of export data size and binary size, but ultimately doesn't make
much sense from the perspective of what should be inlineable.
- Since the inliner walks into every closure created by an outer
function in addition to starting a walk at every closure, this adds
an n^2 factor to inlinability analysis.
This CL simply drops this behavior.
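For illustration, a hedged example of the first point above: the defer
makes the closure itself non-inlinable, but it no longer disqualifies
the function that merely constructs the closure.

    package main

    import "fmt"

    func trace() func() { return func() {} }

    // counter constructs a closure containing a defer. With this CL the
    // defer only rules out inlining the closure, not counter itself.
    func counter() func() int {
        n := 0
        return func() int {
            defer trace()()
            n++
            return n
        }
    }

    func main() {
        next := counter()
        fmt.Println(next(), next()) // 1 2
    }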
In std, this makes 57 more functions inlinable, and disallows inlining
for 10 (due to the basic instability of our bottom-up inlining
approach), for a net increase of 47 inlinable functions (+0.6%).
This will help significantly with the performance of the functions to
be added for #56102, which have a somewhat complicated nesting of
closures with a performance-critical fast path.
The downside of this seems to be a potential increase in export data
and text size, but the practical impact of this seems to be
negligible:
│ before │ after │
│ bytes │ bytes vs base │
Go/binary 15.12Mi ± 0% 15.14Mi ± 0% +0.16% (n=1)
Go/text 5.220Mi ± 0% 5.237Mi ± 0% +0.32% (n=1)
Compile/binary 22.92Mi ± 0% 22.94Mi ± 0% +0.07% (n=1)
Compile/text 8.428Mi ± 0% 8.435Mi ± 0% +0.08% (n=1)
Change-Id: Ie9e38104fed5689a94c368288653fd7cb4b7a35e
Reviewed-on: https://go-review.googlesource.com/c/go/+/479095
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
This reverts CL 392854.
Reason for revert: caused #59294, which was derived from Google
internal tests. The attempted fix of #59294 caused more breakage.
Change-Id: I5a061561ac2740856b7ecc09725ac28bd30f8bba
Reviewed-on: https://go-review.googlesource.com/c/go/+/481060
Reviewed-by: Heschi Kreinick <heschi@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts CL 479255.
Reason for revert: need to revert CL 392854, and this caused a conflict.
Change-Id: I6cb105c62e51b47de3f652df5f5ee92673a93919
Reviewed-on: https://go-review.googlesource.com/c/go/+/481058
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
This reverts CL 478917.
Reason for revert: need to revert CL 392854, and this caused a conflict.
Change-Id: I02c3285de5635b431a99adc8790c8310d1c4e6a2
Reviewed-on: https://go-review.googlesource.com/c/go/+/481059
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
This reverts CL 479915.
Reason for revert: breaks a lot of Google internal tests.
Change-Id: I13a9422e810af7ba58cbf4a7e6e55f4d8cc0ca51
Reviewed-on: https://go-review.googlesource.com/c/go/+/481055
Reviewed-by: Chressie Himpel <chressie@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This was found by running
`git grep 'fmt.Sprintf("%d",' | grep -v test | grep -v vendor`
and was fixed automatically with gotiti (https://github.com/catenacyber/gotiti).
unconvert (https://github.com/mdempsky/unconvert) was then used to
check whether anything was left over (that tool also fixed another
useless cast).
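A hedged before/after sketch of the kind of call this targets, assuming
the usual strconv replacement (values are hypothetical):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        port := 8080

        // Before: formatting a lone integer through the fmt machinery.
        before := "localhost:" + fmt.Sprintf("%d", port)

        // After: strconv does the same job without the reflection-based
        // formatter.
        after := "localhost:" + strconv.Itoa(port)

        fmt.Println(before == after) // true
    }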
Change-Id: I023926bc4aa8d51de45f712ac739a0a80145c28c
GitHub-Last-Rev: 1063e32e5b
GitHub-Pull-Request: golang/go#59144
Reviewed-on: https://go-review.googlesource.com/c/go/+/477675
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
For an f()() call, the compiler rewrites it roughly to:
autotmp := f()
autotmp()
However, if f() is inlined, escape analysis gets confused about the
lifetime of autotmp, leading to a bad escaping decision.
This CL fixes this issue by rewriting f()() to:
var autotmp
autotmp = f()
autotmp()
This problem also happened with Unified IR, until CL 421821 landed.
Fixes #57434
Change-Id: I159a7e4c93bbc172f0eae60e7d40fc64ba70b236
Reviewed-on: https://go-review.googlesource.com/c/go/+/459295
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Document the changes to GODEBUG implemented as
part of proposal #56986.
Fixes #56986.
Change-Id: I23153a123e23820c5b22db4767620e037bbdd083
Reviewed-on: https://go-review.googlesource.com/c/go/+/462202
Reviewed-by: Damien Neil <dneil@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Russ Cox <rsc@golang.org>
There is currently no support for GOARCH=loong32, so the Optab.family
field is unused so far. Remove it to simplify the optab; by the time
we start to care about loong32, the loong assembler backend will
likely have been overhauled into a sufficiently different shape that
the data we have today would be useless anyway.
While at it, add an operand class slot for the 3rd source operand
(support for which will arrive in later commits), and rename the other
operand class fields to be self-documenting. These changes are merged
into this patch for the sake of reducing code churn.
Change-Id: Icf0988e34ff1c0f762c8e0708cfcef2e7954760c
Reviewed-on: https://go-review.googlesource.com/c/go/+/477715
Reviewed-by: abner chenc <chenguoqi@loongson.cn>
Run-TryBot: Ben Shi <powerman1st@163.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Wayne Zuo <wdvxdr@golangcn.org>
LoongArch (except for the extremely reduced LA32 Primary subset) has
dedicated beqz/bnez instructions as alternative encodings for beq/bne
with one of the source registers being R0, that allow the offset field
to occupy 5 more bits, giving 21 bits in total (equal to the FP
branches). Make use of them instead of beq/bne if one source operand is
omitted in asm, or if one of the registers being compared is R0.
Multiple go1 benchmark runs indicate the change is not perf-sensitive.
Change-Id: If6267623c82092e81d75578091fb4e013658b9f3
Reviewed-on: https://go-review.googlesource.com/c/go/+/478377
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: abner chenc <chenguoqi@loongson.cn>
Run-TryBot: Ben Shi <powerman1st@163.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Wayne Zuo <wdvxdr@golangcn.org>
Document that errors returned by Join always implement the
Unwrap() []error method.
Explicitly state that errors.Unwrap does not unwrap errors
with an Unwrap() []error method.
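A short example of the distinction being documented:

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        joined := errors.Join(errors.New("first"), errors.New("second"))

        // The joined error always implements Unwrap() []error.
        if m, ok := joined.(interface{ Unwrap() []error }); ok {
            fmt.Println(len(m.Unwrap())) // 2
        }

        // errors.Unwrap only calls Unwrap() error, so it does not unwrap it.
        fmt.Println(errors.Unwrap(joined)) // <nil>
    }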
Change-Id: Id610345dcf43ca54a9dde157e56c5815c5112073
GitHub-Last-Rev: 7a0ec450bd
GitHub-Pull-Request: golang/go#59301
Reviewed-on: https://go-review.googlesource.com/c/go/+/480021
Run-TryBot: Emmanuel Odeke <emmanuel@orijtech.com>
Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Damien Neil <dneil@google.com>
Currently, when C calls into Go the first time, we grab an M
using needm, which sets m.g0's stack bounds using the SP. We don't
know how big the stack is, so we simply assume 32K. Previously,
when the Go function returns to C, we drop the M, and the next
time C calls into Go, we put a new stack bound on the g0 based on
the current SP. After CL 392854, we don't drop the M, and the next
time C calls into Go, we reuse the same g0, without recomputing
the stack bounds. If the C code uses quite a bit of stack space
before calling into Go, the SP may be well below the 32K stack
bound we assumed, so the runtime thinks the g0 stack overflows.
This CL makes needm get a more accurate stack bound from
pthread. (On some platforms this may still be a guess, as we don't
know exactly where we are in the C stack, but it is probably better
than simply assuming 32K.)
For #59294.
Change-Id: Ie52a8f931e0648d8753e4c1dbe45468b8748b527
Reviewed-on: https://go-review.googlesource.com/c/go/+/479915
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Mark the wasip1/wasm port as broken until it has been fully merged.
Change-Id: I58592b43c82513b079c561673de99b41c94b11c8
Reviewed-on: https://go-review.googlesource.com/c/go/+/480655
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Run-TryBot: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Introduce a new m.incgocallback field that is true while C code calls
into Go code. Use it in the tracer in order to fallback to the default
unwinder instead of frame pointer unwinding for this scenario. The
existing fields (incgo, ncgo) were not sufficient to detect the case
where a thread created in C calls into Go code.
Motivation:
1. Take advantage of a cgo symbolizer, if registered, to unwind through
C stacks without frame pointers.
2. Reduce the chance of crashes. It seems unsafe to follow frame
pointers when there could be C code that was compiled without frame
pointers.
Removing the curgp.m.incgocallback check in traceStackID shows the
following minor differences between frame pointer unwinding and the
default unwinder when there is no cgo symbolizer involved.
trace_test.go:60: "goCalledFromCThread": got stack:
main.goCalledFromCThread
/src/runtime/testdata/testprogcgo/trace.go:58
_cgoexp_45c15a3efb3a_goCalledFromCThread
_cgo_gotypes.go:694
runtime.cgocallbackg1
/src/runtime/cgocall.go:318
runtime.cgocallbackg
/src/runtime/cgocall.go:236
runtime.cgocallback
/src/runtime/asm_amd64.s:998
crosscall2
/src/runtime/cgo/asm_amd64.s:30
want stack:
main.goCalledFromCThread
/src/runtime/testdata/testprogcgo/trace.go:58
_cgoexp_45c15a3efb3a_goCalledFromCThread
_cgo_gotypes.go:694
runtime.cgocallbackg1
/src/runtime/cgocall.go:318
runtime.cgocallbackg
/src/runtime/cgocall.go:236
runtime.cgocallback
/src/runtime/asm_amd64.s:998
trace_test.go:60: "goCalledFromC": got stack:
main.goCalledFromC
/src/runtime/testdata/testprogcgo/trace.go:51
_cgoexp_45c15a3efb3a_goCalledFromC
_cgo_gotypes.go:687
runtime.cgocallbackg1
/src/runtime/cgocall.go:318
runtime.cgocallbackg
/src/runtime/cgocall.go:236
runtime.cgocallback
/src/runtime/asm_amd64.s:998
crosscall2
/src/runtime/cgo/asm_amd64.s:30
runtime.asmcgocall
/src/runtime/asm_amd64.s:848
main._Cfunc_cCalledFromGo
_cgo_gotypes.go:263
main.goCalledFromGo
/src/runtime/testdata/testprogcgo/trace.go:46
main.Trace
/src/runtime/testdata/testprogcgo/trace.go:37
main.main
/src/runtime/testdata/testprogcgo/main.go:34
want stack:
main.goCalledFromC
/src/runtime/testdata/testprogcgo/trace.go:51
_cgoexp_45c15a3efb3a_goCalledFromC
_cgo_gotypes.go:687
runtime.cgocallbackg1
/src/runtime/cgocall.go:318
runtime.cgocallbackg
/src/runtime/cgocall.go:236
runtime.cgocallback
/src/runtime/asm_amd64.s:998
runtime.systemstack_switch
/src/runtime/asm_amd64.s:463
runtime.cgocall
/src/runtime/cgocall.go:168
main._Cfunc_cCalledFromGo
_cgo_gotypes.go:263
main.goCalledFromGo
/src/runtime/testdata/testprogcgo/trace.go:46
main.Trace
/src/runtime/testdata/testprogcgo/trace.go:37
main.main
/src/runtime/testdata/testprogcgo/main.go:34
For #16638
Change-Id: I95fa27a3170c5abd923afc6eadab4eae777ced31
Reviewed-on: https://go-review.googlesource.com/c/go/+/474916
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
Change tracer to use frame pointer unwinding by default on amd64. The
expansion of inline frames is delayed until the stack table is dumped at
the end of the trace. This requires storing the skip argument in the
stack table, which now resides in pcBuf[0]. For stacks that are not
produced by traceStackID (e.g. CPU samples), a logicalStackSentinel
value in pcBuf[0] indicates that no inline expansion is needed.
Add new GODEBUG=tracefpunwindoff=1 option to use the old unwinder if
needed.
Benchmarks show a considerable decrease in CPU overhead when using frame
pointer unwinding for trace events:
GODEBUG=tracefpunwindoff=1 ../bin/go test -run '^$' -bench '.+PingPong' -count 20 -v -trace /dev/null ./runtime | tee tracefpunwindoff1.txt
GODEBUG=tracefpunwindoff=0 ../bin/go test -run '^$' -bench '.+PingPong' -count 20 -v -trace /dev/null ./runtime | tee tracefpunwindoff0.txt
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz
│ tracefpunwindoff1.txt │ tracefpunwindoff0.txt │
│ sec/op │ sec/op vs base │
PingPongHog-32 3782.5n ± 0% 740.7n ± 2% -80.42% (p=0.000 n=20)
For #16638
Change-Id: I2928a2fcd8779a31c45ce0f2fbcc0179641190bb
Reviewed-on: https://go-review.googlesource.com/c/go/+/463835
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
This commit addresses a regression caused by commit
43f911b0b6 (CL 472195) which led to frame
pointer cycles, causing frame pointer unwinders (refer to CL 463835) to
encounter repetitive stack frames.
The issue occurs when mcall invokes fn on g0's stack. fn is expected not
to return but to continue g's execution through gogo(&g.sched). To
achieve this, g.sched must hold the sp, pc, and bp of mcall's caller. CL
472195 mistakenly altered g.sched.bp to store mcall's own bp, causing
gogo to resume execution with a bp value that points downwards into the
now non-existent mcall frame. This results in the next function call
executed by mcall's callee pushing a bp that points to itself on the
stack, creating a pointer loop.
Fix this by dereferencing bp before storing it in g.sched.bp to
reinstate the correct behavior. Although this problem could potentially
be resolved by reverting the mcall-related changes from CL 472195, doing
so would hide mcall's caller frame from async frame pointer unwinders
like Linux perf when unwinding during fn's execution.
Currently, there is no test coverage for frame pointers to validate
these changes. However, runtime/trace.TestTraceSymbolize at CL 463835
will add basic test coverage and can be used to validate this change.
Change-Id: Iad3c42908eeb1b0009fcb839d7fcfffe53d13326
Reviewed-on: https://go-review.googlesource.com/c/go/+/476235
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
The go:wasmimport compiler directive is used by the wasi port
to access host APIs, some of which need to be implemented in the
syscall package.
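A hedged sketch of the directive's shape (illustrative only; the real
syscall declarations differ):

    //go:build wasip1

    package wasisketch

    import "unsafe"

    // A //go:wasmimport directive binds a bodyless Go function declaration
    // to a host function: the module name comes first, then the field name
    // within that module.
    //
    //go:wasmimport wasi_snapshot_preview1 random_get
    func random_get(buf unsafe.Pointer, bufLen uint32) uint32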
Co-authored-by: Richard Musiol <neelance@gmail.com>
Co-authored-by: Achille Roussel <achille.roussel@gmail.com>
Co-authored-by: Julien Fabre <ju.pryz@gmail.com>
Co-authored-by: Evan Phoenix <evan@phx.io>
Change-Id: I3851e154c6989094effcd25bba5864faa133564e
Reviewed-on: https://go-review.googlesource.com/c/go/+/479615
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Johan Brandhorst-Satzkorn <johan.brandhorst@gmail.com>
The comment justifies exporting GOROOT by saying the api test needs it,
which was relevant back when it was added in CL 99870043, but is no
longer true.
As of Go 1.8, GOPATH can be unset (https://go.dev/doc/go1.8#gopath).
At some point it also became okay to leave GOROOT unset, at least when
one wants to use the default GOROOT tree of the go command being
executed and is not intentionally changing it to a custom directory.
The export is also not present in the .bat and .rc variants of this
script.
Drop it.
Change-Id: Ibcb386c560523fcfbfec8020f90692dcfa5aa686
Reviewed-on: https://go-review.googlesource.com/c/go/+/480376
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
Run-TryBot: Dmitri Shuralyov <dmitshur@golang.org>
This adds a simple test validating MPTCP Sock for Linux implementation:
- A Listener is created with MPTCP support, accepting new connections in
a new thread.
- A Dialer with MPTCP support connects to this new Listener
- On both sides, MPTCP should be used. Note that at this point, we
cannot check if a fallback to TCP has been done nor if the correct
protocol is being used.
Technically, a localServer from mockserver_test.go is used, similar to
TestIPv6LinkLocalUnicastTCP from tcpsock_test.go. Here with MPTCP, the
Listen step is done manually to force using MPTCP and a post step is
done to verify extra status after the Accept. More checks are going to
be done in the future.
Please note that the test is skipped if the kernel doesn't allow the
creation of an MPTCP socket at all when starting the test.
The test can be executed with this command:
$ ../bin/go test -v net -run "^TestMultiPathTCP$"
The "-race" option has also been checked.
This work has been co-developed by Benjamin Hesmans
<benjamin.hesmans@tessares.net> and Gregory Detal
<gregory.detal@tessares.net>.
Fixes #56539
Change-Id: I4b6b39e9175a20f98497b5ea56934e242da06194
Reviewed-on: https://go-review.googlesource.com/c/go/+/471141
Reviewed-by: Damien Neil <dneil@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Emmanuel Odeke <emmanuel@orijtech.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>