mirror of https://github.com/golang/go synced 2024-11-17 17:34:46 -07:00
Commit Graph

52610 Commits

Author SHA1 Message Date
Cuong Manh Le
0e90ba9335 cmd/compile: fix constructing expr side-effects when comparing 0-size types
In walkCompare, any ir.OCONVNOP is removed from both operands. So when
constructing assignments for them to preserve any side effects, using
temporary variables can cause a type mismatch with the original type.

Instead, using blank assignments prevents that issue and still makes
sure that the operands are evaluated.
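
A minimal, illustrative example (not from this CL) of source that exercises
this path: comparing zero-size values whose operands have side effects, which
must still be evaluated:

    package main

    type empty struct{}

    func f() empty { println("f evaluated"); return empty{} }
    func g() empty { println("g evaluated"); return empty{} }

    func main() {
        // The comparison result is knowable, but f() and g() must still be
        // evaluated, roughly as "_ = f(); _ = g()".
        if f() == g() {
            println("equal")
        }
    }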

Fixes #52701

Change-Id: I229046acb154890bb36fe441d258563687fdce37
Reviewed-on: https://go-review.googlesource.com/c/go/+/403997
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Robert Griesemer <gri@google.com>
2022-05-05 04:53:27 +00:00
fanzha02
d544591d72 cmd/dist: add asan tests for global objects in testsanitizers package
Add tests checking that -asan in Go can detect erroneous memory accesses
to global objects.

Updates #44853.

Change-Id: I612a048460b497d18389160b66e6f818342d3941
Reviewed-on: https://go-review.googlesource.com/c/go/+/321716
Run-TryBot: Fannie Zhang <Fannie.Zhang@arm.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Benny Siegert <bsiegert@gmail.com>
2022-05-05 04:05:43 +00:00
Cuong Manh Le
0bd7408f90 cmd/compile: fix static init of literals containing dynamic exprs
Fixes #52673

Change-Id: Ib2faa5a669c05778fc6beb38c3e63d558af9b2be
Reviewed-on: https://go-review.googlesource.com/c/go/+/403995
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2022-05-05 01:33:11 +00:00
Cuong Manh Le
ed462a66a5 cmd/compile: use appendWalkStmt when possible
Change-Id: I718ad2fd98a3d30087e169abdd5ba3cc4f4b36ba
Reviewed-on: https://go-review.googlesource.com/c/go/+/403999
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
2022-05-05 01:33:01 +00:00
Cuong Manh Le
b922055572 cmd/compile/internal/walk: remove litas
Its only usage is in maplit, and its body is simple enough to be inlined
directly at the call site.

Change-Id: Id6b8a9d230d0e1e7f8da8d33bbc0073d3e816fb5
Reviewed-on: https://go-review.googlesource.com/c/go/+/403998
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
2022-05-05 01:32:18 +00:00
Michael Matloob
aeb933d3ff cmd/go: skip computing BuildInfo in go list unless it's needed
The only fields of the go list output that require BuildInfo to be
computed are the Stale and StaleReason fields. If a user explicitly
requests JSON fields and does not ask for Stale or StaleReason, skip
the computation of BuildInfo.

For #29666

Change-Id: Ie77581c44babedcb5cb7f3dc7d6ed1078b56eee4
Reviewed-on: https://go-review.googlesource.com/c/go/+/402736
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Michael Matloob <matloob@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
2022-05-04 21:35:22 +00:00
Paul E. Murphy
2f23364340 cmd/asm: on PPC64, allow ISEL to accept a CR bit arg
Using CR bit register arguments makes it easier to understand which
condition and CR field are being tested when using ISEL.

Likewise, clean up the optab setup for ISEL. ISEL should only accept
a 5-bit unsigned constant (C_U5CON); C_ZCON arguments are accepted by
a C_U5CON optab arg.

Change-Id: I2495dbe3595dd3f16c510b3492a88133af9f7e1a
Reviewed-on: https://go-review.googlesource.com/c/go/+/402375
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Run-TryBot: Paul Murphy <murp@ibm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
2022-05-04 21:03:31 +00:00
Jorropo
e1e056fa6a cmd/compile: fold constants found by prove
It is hit ~70k times while building go.
This makes the go binary 0.04% smaller.
I didn't include benchmarks because this is just constant folding
and is hard to measure objectively.

For example, this enables rewriting things like:
  if x == 20 {
    return x + 30 + z
  }

Into:
  if x == 20 {
    return 50 + z
  }

It's not just fixing programmers' code;
the SSA generator sometimes generates code like this.

Change-Id: I0861f342b27f7227b5f1c34d8267fa0057b1bbbc
GitHub-Last-Rev: 4c2f9b5216
GitHub-Pull-Request: golang/go#52669
Reviewed-on: https://go-review.googlesource.com/c/go/+/403735
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2022-05-04 20:30:17 +00:00
Robert Griesemer
fbb47e81c1 cmd/compile/internal/syntax: minor cleanups in extractName
Backport the recommended changes suggested in CL 403937.

Change-Id: I3ac29c90977e33899881838825da033627344ed2
Reviewed-on: https://go-review.googlesource.com/c/go/+/403853
Run-TryBot: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Findley <rfindley@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@google.com>
2022-05-04 20:20:28 +00:00
Robert Griesemer
ab0bb52f2f go/parser: accept all valid type parameter lists
This is a port of CL 402256 from the syntax package to go/parser
with adjustments because of the different AST structure, and
excluding any necessary go/printer changes (separate CL).

Type parameter lists starting with the form [name *T|...] or
[name (X)|...] may look like an array length expression [x].
Only after parsing the entire initial expression and checking
whether the expression contains type elements or is followed
by a comma can we make the final decision.

This change simplifies the existing parsing strategy: instead
of trying to make an upfront decision with limited information
(which is insufficient), the parser now parses the start of a
type parameter list or array length specification as an expression.
In a second step, if the expression can be split into a name
followed by a type element, or a name followed by an ordinary
expression which is succeeded by a comma, we assume a type
parameter list (because it can't be an array length).
In all other cases we assume an array length specification.
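
A small illustrative example (names made up) of the two readings a bracketed
form can have, which is why the decision must be deferred:

    package p

    type T int

    // Parsed as a type parameter list: "P *T | ~int" is a name followed by
    // a type element, so it cannot be an array length.
    type List[P *T | ~int] struct{ v P }

    // Parsed as an array length specification in the same bracket position.
    const n = 4

    type Arr [n]int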

Fixes #52559.

Change-Id: I11ab6e62b073b78b2331bb6063cf74d2a9eaa236
Reviewed-on: https://go-review.googlesource.com/c/go/+/403937
Run-TryBot: Robert Griesemer <gri@google.com>
Reviewed-by: Robert Findley <rfindley@google.com>
Reviewed-by: Robert Griesemer <gri@google.com>
2022-05-04 20:20:27 +00:00
Carl Johnson
fd6ef06296 io: add an Err field to LimitedReader
Fixes #51115
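
A minimal sketch of how the new field might be used, assuming the semantics
proposed in #51115 (Err, if non-nil, is returned in place of io.EOF once the
limit is reached):

    package main

    import (
        "errors"
        "io"
        "os"
        "strings"
    )

    func main() {
        errTooBig := errors.New("input too large")
        // Err is the field added by this CL (assumed semantics as above).
        lr := &io.LimitedReader{R: strings.NewReader("0123456789"), N: 4, Err: errTooBig}
        _, err := io.Copy(os.Stdout, lr)
        _ = err // err is errTooBig once the limit is hit
    }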

Change-Id: I3c5296e4adc71c1c1b1808a45abd4801ae43465a
GitHub-Last-Rev: 4c197acd51
GitHub-Pull-Request: golang/go#51990
Reviewed-on: https://go-review.googlesource.com/c/go/+/396215
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
2022-05-04 20:06:32 +00:00
Tobias Klauser
037b209ae3 net/netip: use strconv.AppendUint in AddrPort.AppendTo
This is already used in AddrPort.String.

name                               old time/op    new time/op    delta
AddrPortMarshalText/v4-4             67.9ns ± 3%    61.5ns ± 6%  -9.43%  (p=0.000 n=10+9)
AddrPortMarshalText/v6-4              168ns ± 4%     155ns ± 2%  -7.54%  (p=0.000 n=10+10)
AddrPortMarshalText/v6_ellipsis-4     169ns ± 3%     162ns ± 5%  -4.03%  (p=0.001 n=9+10)
AddrPortMarshalText/v6_v4-4          87.0ns ± 4%    85.8ns ± 4%    ~     (p=0.165 n=10+10)
AddrPortMarshalText/v6_zone-4         169ns ± 3%     166ns ± 3%    ~     (p=0.066 n=10+10)

Change-Id: If16eda4bf48224029dcaf2069ae87f8d89865d99
Reviewed-on: https://go-review.googlesource.com/c/go/+/404014
Run-TryBot: Ian Lance Taylor <iant@google.com>
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2022-05-04 18:54:31 +00:00
fanzha02
1b0f9fbb67 cmd/compile: enable Asan check for global variables
With this patch, the -asan option can detect erroneous memory accesses
to global variables.

So this patch makes a few changes:

1. Add the asanregisterglobals runtime support function,
which calls the asan runtime function _asan_register_globals
to register global variables.

2. Create a new initialization function for the package
being compiled. This function initializes an array of
instrumented global variables and passes it to the function
runtime.asanregisterglobals. An instrumented global
variable has a trailing redzone.

3. Write the new size of instrumented global variables
that have trailing redzones into the object file.

4. Note that the current implementation is only compatible with
the ASan library from version v7 to v9. Therefore, using the
-asan option requires that the gcc version is not less than 7
and the clang version is not less than 4; otherwise a segmentation
fault will occur. So this patch adds a check in cmd/go on whether
the compiler being used is a supported version.
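
An illustrative (made-up) example of the class of bug -asan can now report:
an out-of-bounds access to a global that lands in its trailing redzone.

    package main

    import "unsafe"

    var global [8]byte

    func main() {
        p := unsafe.Pointer(&global[0])
        oob := (*byte)(unsafe.Add(p, len(global))) // one byte past the end
        _ = *oob                                   // reported when built with -asan
    }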

(This is a redo of CL 401775 with a fix for a build break due to an
intervening commit that removed the internal/execabs package.)

Updates #44853.

Change-Id: I719d4ef2b22cb2d5516e1494cd453c3efb47d6c7
Reviewed-on: https://go-review.googlesource.com/c/go/+/403851
Auto-Submit: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
2022-05-04 18:51:19 +00:00
Paul E. Murphy
c570f0eda2 cmd/compile: combine OR + NOT into ORN on PPC64
This shows up in a few crypto functions, and other
assorted places.
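
An illustrative sketch of the pattern being combined: the OR of one operand
with the complement of another now becomes a single ORN on PPC64.

    // On PPC64, a | ^b now compiles to a single ORN instruction.
    func orn(a, b uint64) uint64 {
        return a | ^b
    }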

Change-Id: I5a7f4c25ddd4a6499dc295ef693b9fe43d2448ab
Reviewed-on: https://go-review.googlesource.com/c/go/+/404057
Run-TryBot: Paul Murphy <murp@ibm.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
2022-05-04 18:49:50 +00:00
Russ Cox
ffe48e00ad sync/atomic: add typed atomic values
These implementations will inline to the lower-level primitives,
but they hide the underlying values so that all accesses are
forced to use the atomic APIs. They also allow the use of shorter
names (methods instead of functions) at call sites, making code
more readable.

Pointer[T] also avoids conversions using unsafe.Pointer at call sites.
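
A minimal sketch of the new typed APIs in use (Config is just an
illustrative type):

    package main

    import "sync/atomic"

    type Config struct{ verbose bool }

    var ops atomic.Int64           // methods instead of atomic.AddInt64 etc.
    var cfg atomic.Pointer[Config] // no unsafe.Pointer conversions needed

    func main() {
        ops.Add(1)
        cfg.Store(&Config{verbose: true})
        _ = ops.Load()
        _ = cfg.Load()
    }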

Discussed on #47141.
See also https://research.swtch.com/gomm for background.

Fixes #50860.

Change-Id: I0b178ee0c7747fa8985f8e48cd7b01063feb7dcc
Reviewed-on: https://go-review.googlesource.com/c/go/+/381317
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-04 18:05:18 +00:00
Bryan C. Mills
cf697253ab cmd/go: write changes to go.mod and go.sum after loading the command-line-arguments package
This entrypoint was missed in CL 349600, and the behavior happened not
to be covered by existing tests.

Fixes #52331.

Change-Id: Iccf12e8e633215abe4bfa1c3ca2fe3a8391b5ba5
Reviewed-on: https://go-review.googlesource.com/c/go/+/401536
Run-TryBot: Bryan Mills <bcmills@google.com>
Auto-Submit: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
2022-05-04 16:42:28 +00:00
Bryan Mills
265b5fd2f1 Revert "cmd/compile: enable Asan check for global variables"
This reverts CL 401775.

Reason for revert: broke build.

Change-Id: I4f6f2edff1e4afcf31cd90e26dacf303979eb10c
Reviewed-on: https://go-review.googlesource.com/c/go/+/403981
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
2022-05-04 16:41:22 +00:00
fanzha02
f52b4ec63d cmd/compile: enable Asan check for global variables
With this patch, the -asan option can detect erroneous memory accesses
to global variables.

So this patch makes a few changes:

1. Add the asanregisterglobals runtime support function,
which calls the asan runtime function _asan_register_globals
to register global variables.

2. Create a new initialization function for the package
being compiled. This function initializes an array of
instrumented global variables and passes it to the function
runtime.asanregisterglobals. An instrumented global
variable has a trailing redzone.

3. Write the new size of instrumented global variables
that have trailing redzones into the object file.

4. Note that the current implementation is only compatible with
the ASan library from version v7 to v9. Therefore, using the
-asan option requires that the gcc version is not less than 7
and the clang version is not less than 4; otherwise a segmentation
fault will occur. So this patch adds a check in cmd/go on whether
the compiler being used is a supported version.

Updates #44853.

Change-Id: Ib877a817209ab2be68a8e22c418fe4a4a20880fc
Reviewed-on: https://go-review.googlesource.com/c/go/+/401775
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
Auto-Submit: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-04 16:13:20 +00:00
Ilya Leoshkevich
87cf92e7d5 cmd,runtime: enable race detector on s390x
LLVM now has SystemZ ThreadSanitizer support [1]; this patch integrates
it with Go. The biggest part is the glue code in race_s390x.s,
which is derived from race_arm64.s; beyond that, the support needs to be
enabled in four places.

[1] https://reviews.llvm.org/D105629

Change-Id: I1d4e51beb4042603b681e4aca9af6072879d54d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/336549
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
2022-05-04 14:17:20 +00:00
Meng Zhuo
93218ea1a7 reflect: implement float32 for regabi riscv64
This CL implements archFloat32FromReg and archFloat32ToReg,
needed for riscv64 due to differences in the way float32 values are
represented in registers compared to other platforms.

Change-Id: I5eab27df242f84b387b0c8dc7f347c93b3fd9df0
Reviewed-on: https://go-review.googlesource.com/c/go/+/403134
Run-TryBot: mzh <mzh@golangcn.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-04 13:38:32 +00:00
Robert Griesemer
7cf3268666 go/parser: parser to accept ~x as unary expression
This is a port of CL 402255 from the syntax package to go/parser
with adjustments because of the different AST structure.

Accept ~x as ordinary unary expression in the parser but recognize
such expressions as invalid in the type checker.

This change opens the door to recognizing complex type constraint
literals such as `*E|~int` in `[P *E|~int]` and parsing them correctly
instead of reporting a parse error, because `P*E|~int` syntactically
looks like an incorrect array length expression (a binary expression
where the RHS of | is the invalid unary expression ~int).

As a result, the parser is more forgiving with expressions but the
type checker will reject invalid uses as before.

We could pass extra information into the binary/unary expression
parse functions to prevent the use of ~ in invalid situations but
it doesn't seem worth the trouble. In fact it may be advantageous
to allow a more liberal expression syntax especially in the presence
of errors (better parser synchronization after an error).

Preparation for fixing #52559.

Change-Id: I48562cf40ccf5f14c20fcd92c40a0303b2d8b2b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/403696
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: David Chase <drchase@google.com>
2022-05-04 04:21:29 +00:00
eric fang
4f898840d1 runtime: improve the annotation of debugCallV2 for arm64
This CL improves the annotation documentation of the debugCallV2 function
for arm64.

Change-Id: Icc2b52063cf4fe779071039d6a3bca1951108eb0
Reviewed-on: https://go-review.googlesource.com/c/go/+/402514
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Eric Fang <eric.fang@arm.com>
Reviewed-by: Alessandro Arzilli <alessandro.arzilli@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2022-05-04 03:08:40 +00:00
Joe Tsai
8b389eb2be reflect: derive correct Value method in panic messages
methodName was brittle in that it assumed exactly where
in the call stack the exported Value method is.
This broke when recent inlining optimizations changed
exactly which frame the exported method was located in.
Instead, iterate through a sufficient number of stack entries
and dynamically determine the exported Value method name.
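
A rough sketch (not reflect's exact code) of the dynamic approach: walk a
bounded number of caller frames and return the first exported reflect.Value
method found.

    package reflectsketch

    import (
        "runtime"
        "strings"
    )

    func valueMethodName() string {
        var pc [16]uintptr
        n := runtime.Callers(1, pc[:])
        frames := runtime.CallersFrames(pc[:n])
        for more := true; more; {
            var frame runtime.Frame
            frame, more = frames.Next()
            if strings.HasPrefix(frame.Function, "reflect.Value.") {
                return frame.Function // e.g. "reflect.Value.Interface"
            }
        }
        return "unknown method"
    }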

This is more maintainable, but slightly slower.
The slowdown is acceptable since panics are not the common case.

Change-Id: I9fc939627007d7bae004b4969516ad44be09c270
Reviewed-on: https://go-review.googlesource.com/c/go/+/403494
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
Auto-Submit: Russ Cox <rsc@golang.org>
2022-05-04 00:51:08 +00:00
Wayne Zuo
64b6e44ad7 cmd/compile: use dictionary to convert type to shaped interface type
When we convert a type to a shaped interface type, we are not able
to recognize the itab, so pass the itab via the dictionary as a
workaround.

Fixes #52026.

Change-Id: I75c23c7dd215daf9761dc24116a8af2c28c6d948
Reviewed-on: https://go-review.googlesource.com/c/go/+/401034
Run-TryBot: Wayne Zuo <wdvxdr@golangcn.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2022-05-03 21:36:22 +00:00
Rhys Hiltner
23f13255f0 runtime: re-add import in trace.go
CL 400795, which uses the runtime/internal/atomic package in trace.go,
raced against CL 397014 removing that import. Re-add the import.

Change-Id: If847ec23f9a0fdff91dab07e93d9fb1b2efed85b
Reviewed-on: https://go-review.googlesource.com/c/go/+/403845
Run-TryBot: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
Reviewed-by: Ian Lance Taylor <iant@google.com>
2022-05-03 21:11:24 +00:00
Paul E. Murphy
d24c65d8bf cmd/internal/notsha256: revert PPC64 removal, and fix PPC64 asm
This reverts commit a41e37f56a, and
updates the ASM usage to be go1.8 compliant. go 1.18 added support
for using VRs in place of VSR arguments.

The following transformations are made:

  XXLOR Vx, Vx, VSy  -> XXLORQ VSx+32, VSx+32, VSy
  XXLOR VSx, VSx, Vy -> XXLORQ VSx, VSx, VSy+32

XXLOR is broken on 1.8, but XXLORQ is identical and still supported
today.

Change-Id: Icc9cd5511b412c30a14e6afd07a51839aaaf6021
Reviewed-on: https://go-review.googlesource.com/c/go/+/403734
Reviewed-by: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Paul Murphy <murp@ibm.com>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: David Chase <drchase@google.com>
2022-05-03 20:54:41 +00:00
Rhys Hiltner
89c0dd829f runtime: split mprof locks
The profiles for memory allocations, sync.Mutex contention, and general
blocking store their data in a shared hash table. The bookkeeping work
at the end of a garbage collection cycle involves maintenance on each
memory allocation record. Previously, a single lock guarded access to
the hash table and the contents of all records. When a program has
allocated memory at a large number of unique call stacks, the
maintenance following every garbage collection can hold that lock for
several milliseconds. That can prevent progress on all other goroutines
by delaying acquirep's call to mcache.prepareForSweep, which needs the
lock in mProf_Free to report when a profiled allocation is no longer in
use. With no user goroutines making progress, it is in effect a
multi-millisecond GC-related stop-the-world pause.

Split the lock so the call to mProf_Flush no longer delays each P's call
to mProf_Free: mProf_Free uses a lock on the memory records' N+1 cycle,
and mProf_Flush uses locks on the memory records' accumulator and their
N cycle. mProf_Malloc also no longer competes with mProf_Flush, as it
uses a lock on the memory records' N+2 cycle. The profiles for
sync.Mutex contention and general blocking now share a separate lock,
and another lock guards insertions to the shared hash table (uncommon in
the steady-state). Consumers of each type of profile take the matching
accumulator lock, so will observe consistent count and magnitude values
for each record.

For #45894

Change-Id: I615ff80618d10e71025423daa64b0b7f9dc57daa
Reviewed-on: https://go-review.googlesource.com/c/go/+/399956
Reviewed-by: Carlos Amedee <carlos@golang.org>
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 20:49:59 +00:00
Rhys Hiltner
2c0a9884e0 runtime: add CPU samples to execution trace
When the CPU profiler and execution tracer are both active, report the
CPU profile samples in the execution trace data stream.

Include only samples that arrive on the threads known to the runtime,
but include them even when running g0 (such as near the scheduler) or if
there's no P (such as near syscalls).

Render them in "go tool trace" as instantaneous events.
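
A minimal sketch of the setup this describes (file names are illustrative):
run the CPU profiler and the execution tracer together, and the samples now
appear in the trace.

    package main

    import (
        "os"
        "runtime/pprof"
        "runtime/trace"
    )

    func main() {
        cpu, _ := os.Create("cpu.pprof")
        out, _ := os.Create("trace.out")
        pprof.StartCPUProfile(cpu) // CPU samples are taken while...
        trace.Start(out)           // ...the tracer records; samples land in trace.out
        // ... workload ...
        trace.Stop()
        pprof.StopCPUProfile()
    }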

For #16895

Change-Id: I0aa501a7b450c971e510961c0290838729033f7f
Reviewed-on: https://go-review.googlesource.com/c/go/+/400795
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 20:49:46 +00:00
Rhys Hiltner
52bd1c4d6c runtime: decrease STW pause for goroutine profile
The goroutine profile needs to stop the world to get a consistent
snapshot of all goroutines in the app. Leaving the world stopped while
iterating over allgs leads to a pause proportional to the number of
goroutines in the app (or its high-water mark).

Instead, do only a fixed amount of bookkeeping while the world is
stopped. Install a barrier so the scheduler confirms that a goroutine
appears in the profile, with its stack recorded exactly as it was during
the stop-the-world pause, before it allows that goroutine to execute.
Iterate over allgs while the app resumes normal operations, adding each
to the profile unless they've been scheduled in the meantime (and so
have profiled themselves). Stop the world a second time to remove the
barrier and do a fixed amount of cleanup work.

This increases both the fixed overhead and per-goroutine CPU-time cost
of GoroutineProfile. It also increases the wall-clock latency of the
call to GoroutineProfile, since the scheduler may interrupt it to
execute other goroutines.
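
For reference, a minimal sketch of calling the profiler whose stop-the-world
cost this change bounds:

    package main

    import "runtime"

    func main() {
        records := make([]runtime.StackRecord, runtime.NumGoroutine()+16)
        // With this change, the stop-the-world portions of GoroutineProfile
        // do a fixed amount of work regardless of how many goroutines exist.
        if n, ok := runtime.GoroutineProfile(records); ok {
            _ = records[:n]
        }
    }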

    name                                  old time/op    new time/op    delta
    GoroutineProfile/small/loaded-8         1.05ms ± 5%    4.99ms ±31%   +376.85%  (p=0.000 n=10+9)
    GoroutineProfile/sparse/loaded-8        1.04ms ± 4%    3.61ms ±27%   +246.61%  (p=0.000 n=10+10)
    GoroutineProfile/large/loaded-8         7.69ms ±17%   20.35ms ± 4%   +164.50%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle              958µs ± 0%    1820µs ±23%    +89.91%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/idle-8          1.00ms ± 3%    1.52ms ±17%    +51.18%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle-8           1.01ms ± 4%    1.47ms ± 7%    +45.28%  (p=0.000 n=9+9)
    GoroutineProfile/sparse/idle             980µs ± 1%    1403µs ± 2%    +43.22%  (p=0.000 n=9+10)
    GoroutineProfile/large/idle-8           7.19ms ± 8%    8.43ms ±21%    +17.22%  (p=0.011 n=10+10)
    PingPongHog                              511ns ± 8%     585ns ± 9%    +14.39%  (p=0.000 n=10+10)
    GoroutineProfile/large/idle             6.71ms ± 0%    7.58ms ± 3%    +13.08%  (p=0.000 n=8+10)
    PingPongHog-8                            469ns ± 8%     509ns ±12%     +8.62%  (p=0.010 n=9+10)
    WakeupParallelSyscall/5µs                216µs ± 4%     229µs ± 3%     +6.06%  (p=0.000 n=10+9)
    WakeupParallelSyscall/5µs-8              147µs ± 1%     149µs ± 2%     +1.12%  (p=0.009 n=10+10)
    WakeupParallelSyscall/2µs-8              140µs ± 0%     142µs ± 1%     +1.11%  (p=0.001 n=10+9)
    WakeupParallelSyscall/50µs-8             236µs ± 0%     238µs ± 1%     +1.08%  (p=0.000 n=9+10)
    WakeupParallelSyscall/1µs-8              138µs ± 0%     140µs ± 1%     +1.05%  (p=0.013 n=10+9)
    Matmult                                 8.52ns ± 1%    8.61ns ± 0%     +0.98%  (p=0.002 n=10+8)
    WakeupParallelSyscall/10µs-8             157µs ± 1%     158µs ± 1%     +0.58%  (p=0.003 n=10+8)
    CreateGoroutinesSingle-8                 328ns ± 0%     330ns ± 1%     +0.57%  (p=0.000 n=9+9)
    WakeupParallelSpinning/100µs-8           343µs ± 0%     344µs ± 1%     +0.30%  (p=0.015 n=8+8)
    WakeupParallelSyscall/20µs-8             178µs ± 0%     178µs ± 0%     +0.18%  (p=0.043 n=10+9)
    StackGrowthDeep-8                       22.8µs ± 0%    22.9µs ± 0%     +0.12%  (p=0.006 n=10+10)
    StackGrowth                             1.06µs ± 0%    1.06µs ± 0%     +0.09%  (p=0.000 n=8+9)
    WakeupParallelSpinning/0s               10.7µs ± 0%    10.7µs ± 0%     +0.08%  (p=0.000 n=9+9)
    WakeupParallelSpinning/5µs              30.7µs ± 0%    30.7µs ± 0%     +0.04%  (p=0.000 n=10+10)
    WakeupParallelSpinning/100µs             411µs ± 0%     411µs ± 0%     +0.03%  (p=0.000 n=10+9)
    WakeupParallelSpinning/2µs              18.7µs ± 0%    18.7µs ± 0%     +0.02%  (p=0.026 n=10+10)
    WakeupParallelSpinning/20µs-8           93.0µs ± 0%    93.0µs ± 0%     +0.01%  (p=0.021 n=9+10)
    StackGrowth-8                            216ns ± 0%     216ns ± 0%       ~     (p=0.209 n=10+10)
    CreateGoroutinesParallel-8              49.5ns ± 2%    49.3ns ± 1%       ~     (p=0.591 n=10+10)
    CreateGoroutinesSingle                   699ns ±20%     748ns ±19%       ~     (p=0.353 n=10+10)
    WakeupParallelSpinning/0s-8             15.9µs ± 2%    16.0µs ± 3%       ~     (p=0.315 n=10+10)
    WakeupParallelSpinning/1µs              14.6µs ± 0%    14.6µs ± 0%       ~     (p=0.513 n=10+10)
    WakeupParallelSpinning/2µs-8            24.2µs ± 3%    24.1µs ± 2%       ~     (p=0.971 n=10+10)
    WakeupParallelSpinning/10µs             50.7µs ± 0%    50.7µs ± 0%       ~     (p=0.101 n=10+10)
    WakeupParallelSpinning/20µs             90.7µs ± 0%    90.7µs ± 0%       ~     (p=0.898 n=10+10)
    WakeupParallelSpinning/50µs              211µs ± 0%     211µs ± 0%       ~     (p=0.382 n=10+10)
    WakeupParallelSyscall/0s-8               137µs ± 1%     138µs ± 0%       ~     (p=0.075 n=10+10)
    WakeupParallelSyscall/1µs                216µs ± 1%     219µs ± 3%       ~     (p=0.065 n=10+9)
    WakeupParallelSyscall/2µs                214µs ± 7%     219µs ± 1%       ~     (p=0.101 n=10+8)
    WakeupParallelSyscall/50µs               317µs ± 5%     326µs ± 4%       ~     (p=0.123 n=10+10)
    WakeupParallelSyscall/100µs              450µs ± 9%     459µs ± 8%       ~     (p=0.247 n=10+10)
    WakeupParallelSyscall/100µs-8            337µs ± 0%     338µs ± 1%       ~     (p=0.089 n=10+10)
    WakeupParallelSpinning/5µs-8            32.2µs ± 0%    32.2µs ± 0%     -0.05%  (p=0.026 n=9+10)
    WakeupParallelSpinning/50µs-8            216µs ± 0%     216µs ± 0%     -0.12%  (p=0.004 n=10+10)
    WakeupParallelSpinning/1µs-8            20.6µs ± 0%    20.5µs ± 0%     -0.22%  (p=0.014 n=10+10)
    WakeupParallelSpinning/10µs-8           54.5µs ± 0%    54.2µs ± 1%     -0.57%  (p=0.000 n=10+10)
    CreateGoroutines-8                       213ns ± 1%     211ns ± 1%     -0.86%  (p=0.002 n=10+10)
    CreateGoroutinesCapture                 1.03µs ± 0%    1.02µs ± 0%     -0.91%  (p=0.000 n=10+10)
    CreateGoroutinesCapture-8               1.32µs ± 1%    1.31µs ± 1%     -1.06%  (p=0.001 n=10+9)
    CreateGoroutines                         188ns ± 0%     186ns ± 0%     -1.06%  (p=0.000 n=9+10)
    CreateGoroutinesParallel                 188ns ± 0%     186ns ± 0%     -1.27%  (p=0.000 n=8+10)
    WakeupParallelSyscall/0s                 210µs ± 3%     207µs ± 3%     -1.60%  (p=0.043 n=10+10)
    StackGrowthDeep                          121µs ± 1%     119µs ± 1%     -1.70%  (p=0.000 n=9+10)
    Matmult-8                               1.82ns ± 3%    1.78ns ± 3%     -2.16%  (p=0.020 n=10+10)
    WakeupParallelSyscall/20µs               281µs ± 3%     269µs ± 4%     -4.44%  (p=0.000 n=10+10)
    WakeupParallelSyscall/10µs               239µs ± 3%     228µs ± 9%     -4.70%  (p=0.001 n=10+10)
    GoroutineProfile/sparse-nil/idle-8       485µs ± 2%      12µs ± 4%    -97.56%  (p=0.000 n=10+10)
    GoroutineProfile/small-nil/idle-8        484µs ± 2%      12µs ± 1%    -97.60%  (p=0.000 n=10+7)
    GoroutineProfile/small-nil/loaded-8      487µs ± 2%      11µs ± 3%    -97.68%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/loaded-8     507µs ± 4%      11µs ± 6%    -97.78%  (p=0.000 n=10+10)
    GoroutineProfile/large-nil/idle-8        709µs ± 2%      11µs ± 4%    -98.38%  (p=0.000 n=10+10)
    GoroutineProfile/large-nil/loaded-8      717µs ± 2%      11µs ± 3%    -98.43%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/idle         465µs ± 3%       1µs ± 1%    -99.84%  (p=0.000 n=10+10)
    GoroutineProfile/small-nil/idle          493µs ± 3%       1µs ± 0%    -99.85%  (p=0.000 n=10+9)
    GoroutineProfile/large-nil/idle          716µs ± 1%       1µs ± 2%    -99.89%  (p=0.000 n=7+10)

    name                                  old alloc/op   new alloc/op   delta
    CreateGoroutinesCapture                   144B ± 0%      144B ± 0%       ~     (all equal)
    CreateGoroutinesCapture-8                 144B ± 0%      144B ± 0%       ~     (all equal)

    name                                  old allocs/op  new allocs/op  delta
    CreateGoroutinesCapture                   5.00 ± 0%      5.00 ± 0%       ~     (all equal)
    CreateGoroutinesCapture-8                 5.00 ± 0%      5.00 ± 0%       ~     (all equal)

    name                                  old p50-ns     new p50-ns     delta
    GoroutineProfile/small/loaded-8          1.01M ± 3%     3.87M ±45%   +282.15%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/loaded-8         1.02M ± 3%     2.43M ±41%   +138.42%  (p=0.000 n=10+10)
    GoroutineProfile/large/loaded-8          7.43M ±16%    17.28M ± 2%   +132.43%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle               956k ± 0%     1559k ±16%    +63.03%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle-8            1.01M ± 3%     1.45M ± 7%    +44.31%  (p=0.000 n=10+9)
    GoroutineProfile/sparse/idle              977k ± 1%     1399k ± 2%    +43.20%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/idle-8           1.00M ± 3%     1.41M ± 3%    +40.47%  (p=0.000 n=10+10)
    GoroutineProfile/large/idle-8            6.97M ± 1%     8.41M ±25%    +20.54%  (p=0.003 n=8+10)
    GoroutineProfile/large/idle              6.71M ± 1%     7.46M ± 4%    +11.15%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/idle-8        483k ± 3%       13k ± 3%    -97.41%  (p=0.000 n=10+9)
    GoroutineProfile/small-nil/idle-8         483k ± 2%       12k ± 1%    -97.43%  (p=0.000 n=10+8)
    GoroutineProfile/small-nil/loaded-8       484k ± 3%       10k ± 2%    -97.93%  (p=0.000 n=10+8)
    GoroutineProfile/sparse-nil/loaded-8      492k ± 2%       10k ± 4%    -97.97%  (p=0.000 n=10+8)
    GoroutineProfile/large-nil/idle-8         708k ± 2%       12k ±15%    -98.36%  (p=0.000 n=10+10)
    GoroutineProfile/large-nil/loaded-8       714k ± 2%       10k ± 2%    -98.60%  (p=0.000 n=10+8)
    GoroutineProfile/sparse-nil/idle          459k ± 1%        1k ± 1%    -99.85%  (p=0.000 n=10+10)
    GoroutineProfile/small-nil/idle           477k ± 1%        1k ± 0%    -99.85%  (p=0.000 n=10+9)
    GoroutineProfile/large-nil/idle           712k ± 1%        1k ± 1%    -99.90%  (p=0.000 n=7+10)

    name                                  old p90-ns     new p90-ns     delta
    GoroutineProfile/small/loaded-8          1.13M ±10%     7.49M ±35%   +562.07%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/loaded-8         1.10M ±12%     4.58M ±31%   +318.02%  (p=0.000 n=10+9)
    GoroutineProfile/large/loaded-8          8.78M ±24%    27.83M ± 2%   +217.00%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle               967k ± 0%     2909k ±50%   +200.91%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/idle-8           1.02M ± 3%     1.96M ±76%    +92.99%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle-8            1.07M ±17%     1.55M ±12%    +45.23%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/idle              992k ± 1%     1417k ± 3%    +42.79%  (p=0.000 n=9+10)
    GoroutineProfile/large/idle              6.73M ± 0%     7.99M ± 8%    +18.80%  (p=0.000 n=8+10)
    GoroutineProfile/large/idle-8            8.20M ±25%     9.18M ±25%       ~     (p=0.315 n=10+10)
    GoroutineProfile/sparse-nil/idle-8        495k ± 3%       13k ± 1%    -97.36%  (p=0.000 n=10+9)
    GoroutineProfile/small-nil/idle-8         494k ± 2%       13k ± 3%    -97.36%  (p=0.000 n=10+10)
    GoroutineProfile/small-nil/loaded-8       496k ± 2%       13k ± 1%    -97.41%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/loaded-8      544k ±11%       13k ± 1%    -97.62%  (p=0.000 n=10+9)
    GoroutineProfile/large-nil/idle-8         724k ± 1%       13k ± 3%    -98.20%  (p=0.000 n=10+10)
    GoroutineProfile/large-nil/loaded-8       729k ± 3%       13k ± 2%    -98.23%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/idle          476k ± 4%        1k ± 1%    -99.85%  (p=0.000 n=9+10)
    GoroutineProfile/small-nil/idle           537k ±10%        1k ± 0%    -99.87%  (p=0.000 n=10+9)
    GoroutineProfile/large-nil/idle           729k ± 0%        1k ± 1%    -99.90%  (p=0.000 n=7+10)

    name                                  old p99-ns     new p99-ns     delta
    GoroutineProfile/sparse/loaded-8         1.27M ±33%    20.49M ±17%  +1514.61%  (p=0.000 n=10+10)
    GoroutineProfile/small/loaded-8          1.37M ±29%    20.48M ±23%  +1399.35%  (p=0.000 n=10+10)
    GoroutineProfile/large/loaded-8          9.76M ±23%    39.98M ±22%   +309.52%  (p=0.000 n=10+8)
    GoroutineProfile/small/idle               976k ± 1%     3367k ±55%   +244.94%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/idle-8           1.03M ± 3%     2.50M ±65%   +142.30%  (p=0.000 n=10+10)
    GoroutineProfile/small/idle-8            1.17M ±34%     1.70M ±14%    +45.15%  (p=0.000 n=10+10)
    GoroutineProfile/sparse/idle             1.02M ± 3%     1.45M ± 4%    +42.64%  (p=0.000 n=9+10)
    GoroutineProfile/large/idle              6.92M ± 2%     9.00M ± 7%    +29.98%  (p=0.000 n=8+9)
    GoroutineProfile/large/idle-8            8.74M ±23%     9.90M ±24%       ~     (p=0.190 n=10+10)
    GoroutineProfile/sparse-nil/idle-8        508k ± 4%       16k ± 2%    -96.90%  (p=0.000 n=10+9)
    GoroutineProfile/small-nil/idle-8         508k ± 4%       16k ± 3%    -96.91%  (p=0.000 n=10+9)
    GoroutineProfile/small-nil/loaded-8       542k ± 5%       15k ±15%    -97.15%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/loaded-8      649k ±16%       15k ±18%    -97.67%  (p=0.000 n=10+10)
    GoroutineProfile/large-nil/idle-8         738k ± 2%       16k ± 2%    -97.86%  (p=0.000 n=10+10)
    GoroutineProfile/large-nil/loaded-8       765k ± 4%       15k ±17%    -98.03%  (p=0.000 n=10+10)
    GoroutineProfile/sparse-nil/idle          539k ±26%        1k ±17%    -99.84%  (p=0.000 n=10+10)
    GoroutineProfile/small-nil/idle           659k ±25%        1k ± 0%    -99.84%  (p=0.000 n=10+8)
    GoroutineProfile/large-nil/idle           760k ± 2%        1k ±22%    -99.88%  (p=0.000 n=9+10)

Fixes #33250
For #50794

Change-Id: I862a2bc4e991cec485f21a6fce4fca84f2c6435b
Reviewed-on: https://go-review.googlesource.com/c/go/+/387415
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 20:49:34 +00:00
Rhys Hiltner
b9dee7e59b runtime/pprof: stress test goroutine profiler
For #33250

Change-Id: Ic7aa74b1bb5da9c4319718bac96316b236cb40b2
Reviewed-on: https://go-review.googlesource.com/c/go/+/387414
Run-TryBot: Rhys Hiltner <rhys@justin.tv>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: David Chase <drchase@google.com>
2022-05-03 20:49:18 +00:00
Rhys Hiltner
209942fa88 runtime/pprof: add race annotations for goroutine profiles
The race annotations for goroutine label maps covered the special type
of read necessary to create CPU profiles. Extend that to include
goroutine profiles. Annotate the copy involved in creating new
goroutines.

Fixes #50292

Change-Id: I10f69314e4f4eba85c506590fe4781f4d6b8ec2d
Reviewed-on: https://go-review.googlesource.com/c/go/+/385660
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2022-05-03 20:49:07 +00:00
Michael Anthony Knyszek
7c404d59db runtime: store consistent total allocation stats as uint64
Currently the consistent total allocation stats are managed as uintptrs,
which means they can easily overflow on 32-bit systems. Fix this by
storing these stats as uint64s. This will cause some minor performance
degradation on 32-bit systems, but there really isn't a way around this,
and it affects the correctness of the metrics we export.

Fixes #52680.

Change-Id: I7e6ca44047d46b4bd91c6f87c2d29f730e0d6191
Reviewed-on: https://go-review.googlesource.com/c/go/+/403758
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2022-05-03 19:58:15 +00:00
Ian Lance Taylor
bccce90289 vendor, cmd/vendor: update to current x/sys repo
Ran, in src and src/cmd:

    go get -u golang.org/x/sys
    go mod vendor
    go mod tidy

This brings in loong64 support.

Change-Id: Ide30bd7bd073f473be9d8329e6a4f1d2c903d9a3
Reviewed-on: https://go-review.googlesource.com/c/go/+/401855
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@google.com>
2022-05-03 19:48:07 +00:00
zhouguangyuan
884530b374 cmd/compile: mark shape type dupok
Fixes #52633

Change-Id: I3f19804cd7c00cee7e365062402c264d84b596c1
Reviewed-on: https://go-review.googlesource.com/c/go/+/403316
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: guangyuan zhou <zhouguangyuan@golangcn.org>
2022-05-03 19:04:56 +00:00
Keith Randall
5a103ca5e9 cmd/compile: fix bit length intrinsic for 16/8 bits on GOAMD64=v3
Upper bits of registers for uint8/uint16 are junk. Make sure we
mask those off before using LZCNT (leading zeros count).
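
An illustrative example of code that reaches the fixed intrinsics: bit lengths
of 8- and 16-bit values, which lower to LZCNT under GOAMD64=v3.

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        var a uint8 = 0x10
        var b uint16 = 0x0100
        // The upper register bits holding a and b are junk and must be
        // masked off before the LZCNT-based lowering.
        fmt.Println(bits.Len8(a), bits.Len16(b)) // 5 9
    }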

Fixes #52681

Change-Id: I0ca9e62f23bcb1f6ad2a787fa9895322afaa2533
Reviewed-on: https://go-review.googlesource.com/c/go/+/403815
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@google.com>
2022-05-03 18:33:02 +00:00
teivah
8a5845e4e3 encoding/base32: decoder output depends on chunking of underlying reader
After some analysis, I figured that a way to fix this is to check, after
the call to readEncodedData, whether the decoder has already seen the end.
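
A small sketch of the property being fixed: decoding must not depend on how
the underlying reader chunks its data.

    package main

    import (
        "encoding/base32"
        "fmt"
        "io"
        "strings"
        "testing/iotest"
    )

    func main() {
        enc := base32.StdEncoding.EncodeToString([]byte("hello, base32"))
        all, _ := io.ReadAll(base32.NewDecoder(base32.StdEncoding, strings.NewReader(enc)))
        // The same input delivered one byte at a time must decode identically.
        slow, _ := io.ReadAll(base32.NewDecoder(base32.StdEncoding,
            iotest.OneByteReader(strings.NewReader(enc))))
        fmt.Println(string(all) == string(slow)) // true
    }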

Fixes #38657

Change-Id: I06fd718ea4ee6ded2cb26c2866b28581ad86e271
GitHub-Last-Rev: d0b7bb38e4
GitHub-Pull-Request: golang/go#52631
Reviewed-on: https://go-review.googlesource.com/c/go/+/403315
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
2022-05-03 18:30:15 +00:00
Tobias Klauser
35d02791b9 net: remove fallback path in sysSocket
Support for operating system versions requiring this fallback path was
dropped from recent Go versions. The minimum Linux kernel version is
2.6.32 as of Go 1.18. FreeBSD 10 is no longer supported as of Go 1.13.

Change-Id: I7e74768146dd43a36d0d26fcb08eed9ace82189f
Reviewed-on: https://go-review.googlesource.com/c/go/+/403634
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com>
Auto-Submit: Tobias Klauser <tobias.klauser@gmail.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: David Chase <drchase@google.com>
2022-05-03 18:23:13 +00:00
Kale Blankenship
356c8c5452 archive/zip: remove unused File.descErr field
Found via staticcheck. Unused as of CL 357489.

Change-Id: I3aa409994ba4388912ac7e7809168529a5b6e31c
Reviewed-on: https://go-review.googlesource.com/c/go/+/403814
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Kale B <kale@lemnisys.com>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: David Chase <drchase@google.com>
2022-05-03 18:22:07 +00:00
zhangyunhao
8efd059f10 A+C: add YunHao Zhang
Change-Id: I898733dff529a40eeec9f9db2a0a59a6757c3827
Reviewed-on: https://go-review.googlesource.com/c/go/+/402515
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Eli Bendersky <eliben@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
2022-05-03 18:08:38 +00:00
Bryan C. Mills
61a585a32c os/exec: in Command, update cmd.Path even if LookPath returns an error
Fixes #52666.
Updates #43724.
Updates #43947.

Change-Id: I72cb585036b7e93cd7adbff318b400586ea97bd5
Reviewed-on: https://go-review.googlesource.com/c/go/+/403694
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Bryan Mills <bcmills@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Bryan Mills <bcmills@google.com>
2022-05-03 16:29:56 +00:00
Michael Anthony Knyszek
e7508598bb runtime: use Escape instead of escape in export_test.go
I landed the bottom CL of my stack without rebasing or retrying trybots,
but in the rebase "escape" was removed in favor of "Escape."

Change-Id: Icdc4d8de8b6ebc782215f2836cd191377cc211df
Reviewed-on: https://go-review.googlesource.com/c/go/+/403755
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
2022-05-03 15:51:30 +00:00
Michael Anthony Knyszek
f01c20bf2b runtime/debug: export SetMemoryLimit
This change also adds an end-to-end test for SetMemoryLimit as a
testprog.
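
A minimal sketch of the newly exported API (the limit chosen here is
arbitrary):

    package main

    import "runtime/debug"

    func main() {
        // Cap the Go runtime's total memory use at roughly 512 MiB.
        // The previous limit is returned; a negative argument only queries it.
        prev := debug.SetMemoryLimit(512 << 20)
        _ = prev
    }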

Fixes #48409.

Change-Id: I102d64acf0f36a43ee17b7029e8dfdd1ee5f057d
Reviewed-on: https://go-review.googlesource.com/c/go/+/397018
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:14:09 +00:00
Michael Anthony Knyszek
91f863013e runtime: redesign scavenging algorithm
Currently the runtime's scavenging algorithm involves running from the
top of the heap address space to the bottom (or as far as it gets) once
per GC cycle. Once it treads some ground, it doesn't tread it again
until the next GC cycle.

This works just fine for the background scavenger, for heap-growth
scavenging, and for debug.FreeOSMemory. However, it breaks down in the
face of a memory limit for small heaps in the tens of MiB. Basically,
because the scavenger never retreads old ground, it's completely
oblivious to new memory it could scavenge, and that it really *should*
in the face of a memory limit.

Also, every time some thread goes to scavenge in the runtime, it
reserves what could be a considerable amount of address space, hiding it
from other scavengers.

This change modifies and simplifies the implementation overall. It's
less code with complexities that are much better encapsulated. The
current implementation iterates optimistically over the address space
looking for memory to scavenge, keeping track of what it last saw. The
new implementation does the same, but instead of directly iterating over
pages, it iterates over chunks. It maintains an index of chunks (as a
bitmap over the address space) that indicate which chunks may contain
scavenge work. The page allocator populates this index, while scavengers
consume it and iterate over it optimistically.

This has two key benefits:
1. Scavenging is much simpler: find a candidate chunk, and check it,
   essentially just using the scavengeOne fast path. There's no need for
   the complexity of iterating beyond one chunk, because the index is
   lock-free and already maintains that information.
2. If pages are freed to the page allocator (always guaranteed to be
   unscavenged), the page allocator immediately notifies all scavengers
   of the new source of work, avoiding the hiding issues of the old
   implementation.

One downside of the new implementation, however, is that it's
potentially more expensive to find pages to scavenge. In the past, if
a single page would become free high up in the address space, the
runtime's scavengers would ignore it. Now that scavengers won't, one or
more scavengers may need to iterate potentially across the whole heap to
find the next source of work. For the background scavenger, this just
means a potentially less reactive scavenger -- overall it should still
use the same amount of CPU. It means worse overheads for memory limit
scavenging, but that's not exactly something with a baseline yet.

In practice, this hopefully shouldn't be too bad, since the chunk index
is extremely compact. For a 48-bit address space, the index is only 8
MiB in size at worst, but even just one physical page in the index is
able to support up to 128 GiB heaps, provided they aren't terribly
sparse. On 32-bit platforms, the index is only 128 bytes in size.

For #48409.

Change-Id: I72b7e74365046b18c64a6417224c5d85511194fb
Reviewed-on: https://go-review.googlesource.com/c/go/+/399474
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:13:53 +00:00
Michael Anthony Knyszek
b4d81147d8 runtime: make the scavenger and allocator respect the memory limit
This change does everything necessary to make the memory allocator and
the scavenger respect the memory limit. In particular, it:

- Adds a second goal for the background scavenge that's based on the
  memory limit, setting a target 5% below the limit to make sure it's
  working hard when the application is close to it.
- Makes span allocation assist the scavenger if the next allocation is
  about to put total memory use above the memory limit.
- Measures any scavenge assist time and adds it to GC assist time for
  the sake of GC CPU limiting, to avoid a death spiral as a result of
  scavenging too much.

All of these changes have a relatively small impact, but each is
intimately related and thus benefit from being done together.

For #48409.

Change-Id: I35517a752f74dd12a151dd620f102c77e095d3e8
Reviewed-on: https://go-review.googlesource.com/c/go/+/397017
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:13:45 +00:00
Michael Anthony Knyszek
7e4bc74119 runtime: set the heap goal from the memory limit
This change makes the memory limit functional by including it in the
heap goal calculation. Specifically, we derive a heap goal from the
memory limit, and compare that to the GOGC-based goal. If the goal based
on the memory limit is lower, we prefer that.
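
A tiny sketch of the selection rule above (names are illustrative, not the
runtime's):

    // heapGoal picks the lower of the GOGC-based goal and the
    // memory-limit-based goal.
    func heapGoal(gogcGoal, memoryLimitGoal uint64) uint64 {
        if memoryLimitGoal < gogcGoal {
            return memoryLimitGoal
        }
        return gogcGoal
    }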

To derive the memory limit goal, the heap goal calculation now takes
a few additional parameters as input. As a result, the heap goal, in the
presence of a memory limit, may change dynamically. The consequences of
this are that different parts of the runtime can have different views of
the heap goal; this is OK. What's important is that all of the runtime
is able to observe the correct heap goal for the moment it's doing
something that affects it, like anything that should trigger a GC cycle.

On the topic of triggering a GC cycle, this change also allows any
manually managed memory allocation from the page heap to trigger a GC.
So, specifically workbufs, unrolled GC scan programs, and goroutine
stacks. The reason for this is that now non-heap memory can affect the
trigger or the heap goal.

Most sources of non-heap memory only change slowly, like GC pointer
bitmaps, or change in response to explicit function calls like
GOMAXPROCS. Note also that unrolled GC scan programs and workbufs are
really only relevant during a GC cycle anyway, so they won't actually
ever trigger a GC. Our primary target here is goroutine stacks.

Goroutine stacks can increase quickly, and this is currently totally
independent of the GC cycle. Thus, if for example a goroutine begins to
recurse suddenly and deeply, then even though the heap goal and trigger
react, we might not notice until it's too late. As a result, we need to
trigger a GC cycle.

We do this trigger in allocManual instead of in stackalloc because it's
far more general. We ultimately care about memory that's mapped
read/write and not returned to the OS, which is much more the domain of
the page heap than the stack allocator. Furthermore, there may be new
sources of memory manual allocation in the future (e.g. arenas) that
need to trigger a GC if necessary. As such, I'm inclined to leave the
trigger in allocManual as an extra defensive measure.

It's worth noting that because goroutine stacks do not behave quite as
predictably as other non-heap memory, there is the potential for the
heap goal to swing wildly. Fortunately, goroutine stacks that haven't
been set up to shrink by the last GC cycle will not shrink until after
the next one. This reduces the amount of possible churn in the heap goal
because it means that shrinkage only happens once per goroutine, per GC
cycle. After all the goroutines that should shrink did, then goroutine
stacks will only grow. The shrink mechanism is analogous to sweeping,
which is incremental and thus tends toward a steady amount of heap
memory used. As a result, in practice, I expect this to be a non-issue.

Note that if the memory limit is not set, this change should be a no-op.

For #48409.

Change-Id: Ie06d10175e5e36f9fb6450e26ed8acd3d30c681c
Reviewed-on: https://go-review.googlesource.com/c/go/+/394221
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2022-05-03 15:13:35 +00:00
Michael Anthony Knyszek
973dcbb87c runtime: remove float64 multiplication in heap trigger compute path
As of the last CL, the heap trigger is computed as-needed. This means
that some of the niceties we assumed (that the float64 computations
don't matter because we're doing this rarely anyway) are no longer true.
While we're not exactly on a hot path right now, the trigger check still
happens often enough that it's a little too hot for comfort.

This change optimizes the computation by replacing the float64
multiplication with a shift and a constant integer multiplication.
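
An illustrative sketch of that kind of rewrite (the constants here are made
up, not the runtime's):

    // Before: scaled := uint64(float64(goal) * 0.95)
    // After: a shift (divide by a power of two) plus a constant integer
    // multiply that approximates the same factor.
    func scale(goal uint64) uint64 {
        const num, den = 973, 1024 // 973/1024 ≈ 0.95
        return goal / den * num
    }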

I ran an allocation microbenchmark for an allocation size that would hit
this path often. CPU profiles seem to indicate this path was ~0.1% of
cycles (dwarfed by other costs, e.g. zeroing memory) even if all we're
doing is allocating, so the "optimization" here isn't particularly
important. However, since the code here is executed significantly more
frequently, and this change isn't particularly complicated, let's err
on the side of efficiency if we can help it.

Note that because of the way the constants are represented now, they're
ever so slightly different from before, so this change technically isn't
a total no-op. In practice however, it should be. These constants are
fuzzy and hand-picked anyway, so having them shift a little is unlikely
to make a significant change to the behavior of the GC.

For #48409.

Change-Id: Iabb2385920f7d891b25040226f35a3f31b7bf844
Reviewed-on: https://go-review.googlesource.com/c/go/+/397015
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2022-05-03 15:13:21 +00:00
Michael Anthony Knyszek
129dcb7226 runtime: check the heap goal and trigger dynamically
As it stands, the heap goal and the trigger are set once by
gcController.commit, and then read out of gcController. However with the
coming memory limit we need the GC to be able to respond to changes in
non-heap memory. The simplest way of achieving this is to compute the
heap goal and its associated trigger dynamically.

In order to make this easier to implement, the GC trigger is now based
on the heap goal, as opposed to the status quo of computing both
simultaneously. In many cases we just want the heap goal anyway, not
both, but we definitely need the goal to compute the trigger, because
the trigger's bounds are entirely based on the goal (the initial runway
is not). A consequence of this is that we can't rely on the trigger to
enforce a minimum heap size anymore, and we need to lift that up
directly to the goal. Specifically, we need to lift up any part of the
calculation that *could* put the trigger ahead of the goal. Luckily this
is just the heap minimum and minimum sweep distance. In the first case,
the pacer may behave slightly differently, as the heap minimum is no
longer the minimum trigger, but the actual minimum heap goal. In the
second case it should be the same, as we ensure the additional runway
for sweeping is added to both the goal *and* the trigger, as before, by
computing that in gcControllerState.commit.

There's also another place we update the heap goal: if a GC starts and
we triggered beyond the goal, we always ensure there's some runway.
That calculation uses the current trigger, which violates the rule of
keeping the goal based on the trigger. Notice, however, that using the
precomputed trigger for this isn't even quite correct: due to a bug, or
something else, we might trigger a GC beyond the precomputed trigger.

So this change also adds a "triggered" field to gcControllerState that
tracks the point at which a GC actually triggered. This is independent
of the precomputed trigger, so it's fine for the heap goal calculation
to rely on it. It also turns out, there's more than just that one place
where we really should be using the actual trigger point, so this change
fixes those up too.

Also, because the heap minimum is set by the goal and not the trigger,
the maximum trigger calculation now happens *after* the goal is set, so
the maximum trigger actually does what I originally intended (and what
the comment says): at small heaps, the pacer picks 95% of the runway as
the maximum trigger. Currently, the pacer picks a small trigger based
on a not-yet-rounded-up heap goal, so the trigger gets rounded up to the
goal, and as per the "ensure there's some runway" check, the runway ends
up at always being 64 KiB. That check is supposed to be for exceptional
circumstances, not the status quo. There's a test introduced in the last
CL that needs to be updated to accommodate this slight change in
behavior.

So, this all sounds like a lot that changed, but what we're talking about
here are really, really tight corner cases that arise from situations
outside of our control, like pathologically bad behavior on the part of
an OS or CPU. Even in these corner cases, it's very unlikely that users
will notice any difference at all. What's more important, I think, is
that the pacer behaves more closely to what all the comments describe,
and what the original intent was.

Another note: at first, one might think that computing the heap goal and
trigger dynamically introduces some raciness, but not in this CL: the heap
goal and trigger are completely static.

Allocation outside of a GC cycle may now be a bit slower than before, as
the GC trigger check is now significantly more complex. However, note
that this executes basically just as often as gcController.revise, and
that makes up for a vanishingly small part of any CPU profile. The next
CL cleans up the floating point multiplications on this path
nonetheless, just to be safe.

For #48409.

Change-Id: I280f5ad607a86756d33fb8449ad08555cbee93f9
Reviewed-on: https://go-review.googlesource.com/c/go/+/397014
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:12:52 +00:00
Michael Anthony Knyszek
473c99643f runtime: rewrite pacer max trigger calculation
Currently the maximum trigger calculation is totally incorrect with
respect to the comment above it and its intent. This change rectifies
this mistake.

For #48409.

Change-Id: Ifef647040a8bdd304dd327695f5f315796a61a74
Reviewed-on: https://go-review.googlesource.com/c/go/+/398834
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:12:45 +00:00
Michael Anthony Knyszek
375d696ddf runtime: move inconsistent memstats into gcController
Fundamentally, all of these memstats exist to serve the runtime in
managing memory. For the sake of simpler testing, couple these stats
more tightly with the GC.

This CL was mostly done automatically. The fields had to be moved
manually, but the references to the fields were updated via

    gofmt -w -r 'memstats.<field> -> gcController.<field>' *.go

For #48409.

Change-Id: Ic036e875c98138d9a11e1c35f8c61b784c376134
Reviewed-on: https://go-review.googlesource.com/c/go/+/397678
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:12:38 +00:00
Michael Anthony Knyszek
d36d5bd3c1 runtime: clean up inconsistent heap stats
The inconsistent heaps stats in memstats are a bit messy. Primarily,
heap_sys is non-orthogonal with heap_released and heap_inuse. In later
CLs, we're going to want heap_sys-heap_released-heap_inuse, so clean
this up by replacing heap_sys with an orthogonal metric: heapFree.
heapFree represents page heap memory that is free but not released.

I think this change also simplifies a lot of reasoning about these
stats; it's much clearer what they mean, and to obtain HeapSys for
memstats, we no longer need to do the strange subtraction from heap_sys
when allocating specifically non-heap memory from the page heap.

Because we're removing heap_sys, we need to replace it with a sysMemStat
for mem.go functions. In this case, heap_released is the most
appropriate because we increase it anyway (again, non-orthogonality). In
which case, it makes sense for heap_inuse, heap_released, and heapFree
to become more uniform, and to just represent them all as sysMemStats.

While we're here and messing with the types of heap_inuse and
heap_released, let's also fix their names (and last_heap_inuse's name)
up to the more modern Go convention of camelCase.

For #48409.

Change-Id: I87fcbf143b3e36b065c7faf9aa888d86bd11710b
Reviewed-on: https://go-review.googlesource.com/c/go/+/397677
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-03 15:12:31 +00:00