Mirror of https://github.com/golang/go
Synced 2024-11-24 04:00:13 -07:00 | 23f13255f0 | 52436 commits
Rhys Hiltner | 23f13255f0 | runtime: re-add import in trace.go
CL 400795, which uses the runtime/internal/atomic package in trace.go, raced against CL 397014 removing that import. Re-add the import. Change-Id: If847ec23f9a0fdff91dab07e93d9fb1b2efed85b Reviewed-on: https://go-review.googlesource.com/c/go/+/403845 Run-TryBot: Ian Lance Taylor <iant@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Rhys Hiltner <rhys@justin.tv> Reviewed-by: Ian Lance Taylor <iant@google.com> |

Paul E. Murphy | d24c65d8bf | cmd/internal/notsha256: revert PPC64 removal, and fix PPC64 asm
This reverts commit

Rhys Hiltner | 89c0dd829f | runtime: split mprof locks
The profiles for memory allocations, sync.Mutex contention, and general blocking store their data in a shared hash table. The bookkeeping work at the end of a garbage collection cycle involves maintenance on each memory allocation record. Previously, a single lock guarded access to the hash table and the contents of all records. When a program has allocated memory at a large number of unique call stacks, the maintenance following every garbage collection can hold that lock for several milliseconds. That can prevent progress on all other goroutines by delaying acquirep's call to mcache.prepareForSweep, which needs the lock in mProf_Free to report when a profiled allocation is no longer in use. With no user goroutines making progress, it is in effect a multi-millisecond GC-related stop-the-world pause.

Split the lock so the call to mProf_Flush no longer delays each P's call to mProf_Free: mProf_Free uses a lock on the memory records' N+1 cycle, and mProf_Flush uses locks on the memory records' accumulator and their N cycle. mProf_Malloc also no longer competes with mProf_Flush, as it uses a lock on the memory records' N+2 cycle. The profiles for sync.Mutex contention and general blocking now share a separate lock, and another lock guards insertions to the shared hash table (uncommon in the steady-state). Consumers of each type of profile take the matching accumulator lock, so they will observe consistent count and magnitude values for each record.

For #45894 Change-Id: I615ff80618d10e71025423daa64b0b7f9dc57daa Reviewed-on: https://go-review.googlesource.com/c/go/+/399956 Reviewed-by: Carlos Amedee <carlos@golang.org> Run-TryBot: Rhys Hiltner <rhys@justin.tv> Reviewed-by: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
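The cycle-based split lends itself to a compact illustration. The sketch below is invented for this digest (type and method names are not the runtime's), showing how three concerns can lock three disjoint pieces of record state:

```go
package mprofsketch

import "sync"

// memRecord buffers profile data for three GC cycles. Each cycle's buffer,
// and the published accumulator, is guarded by its own lock, so end-of-GC
// maintenance no longer blocks allocation and sweep paths.
type memRecord struct {
	published uint64    // totals visible to profile readers
	pending   [3]uint64 // deltas for cycles N, N+1, N+2
}

type profTable struct {
	insertMu sync.Mutex // guards hash-table inserts (rare at steady state)
	records  map[uintptr]*memRecord

	flushMu  sync.Mutex // cycle N + published totals (mProf_Flush, readers)
	freeMu   sync.Mutex // cycle N+1 (mProf_Free, called during sweep)
	mallocMu sync.Mutex // cycle N+2 (mProf_Malloc)
}

// malloc records an allocation without contending with flush or free.
func (t *profTable) malloc(rec *memRecord, bytes uint64) {
	t.mallocMu.Lock()
	rec.pending[2] += bytes
	t.mallocMu.Unlock()
}

// flush publishes cycle-N data after a GC. Rotation of the pending
// buffers between cycles is omitted for brevity.
func (t *profTable) flush() {
	t.flushMu.Lock()
	for _, rec := range t.records {
		rec.published += rec.pending[0]
		rec.pending[0] = 0
	}
	t.flushMu.Unlock()
}
```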

Rhys Hiltner | 2c0a9884e0 | runtime: add CPU samples to execution trace
When the CPU profiler and execution tracer are both active, report the CPU profile samples in the execution trace data stream. Include only samples that arrive on the threads known to the runtime, but include them even when running g0 (such as near the scheduler) or if there's no P (such as near syscalls). Render them in "go tool trace" as instantaneous events. For #16895 Change-Id: I0aa501a7b450c971e510961c0290838729033f7f Reviewed-on: https://go-review.googlesource.com/c/go/+/400795 Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Rhys Hiltner <rhys@justin.tv> Reviewed-by: David Chase <drchase@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
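A minimal way to exercise this from user code, assuming the usual profiling APIs (file names and the busy-loop function are placeholders):

```go
package main

import (
	"os"
	"runtime/pprof"
	"runtime/trace"
)

func main() {
	cpuOut, _ := os.Create("cpu.pprof")
	traceOut, _ := os.Create("trace.out")
	defer cpuOut.Close()
	defer traceOut.Close()

	// While both are active, CPU samples arriving on runtime-known
	// threads are also written into the execution trace stream.
	pprof.StartCPUProfile(cpuOut)
	defer pprof.StopCPUProfile()
	trace.Start(traceOut)
	defer trace.Stop()

	spin()
}

// spin burns CPU so the sampling profiler has something to record.
func spin() {
	x := 0
	for i := 0; i < 1e9; i++ {
		x += i
	}
	_ = x
}
```

Running `go tool trace trace.out` then shows the samples as instantaneous events.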

Rhys Hiltner | 52bd1c4d6c | runtime: decrease STW pause for goroutine profile
The goroutine profile needs to stop the world to get a consistent snapshot of all goroutines in the app. Leaving the world stopped while iterating over allgs leads to a pause proportional to the number of goroutines in the app (or its high-water mark).

Instead, do only a fixed amount of bookkeeping while the world is stopped. Install a barrier so the scheduler confirms that a goroutine appears in the profile, with its stack recorded exactly as it was during the stop-the-world pause, before it allows that goroutine to execute. Iterate over allgs while the app resumes normal operations, adding each to the profile unless they've been scheduled in the meantime (and so have profiled themselves). Stop the world a second time to remove the barrier and do a fixed amount of cleanup work.

This increases both the fixed overhead and per-goroutine CPU-time cost of GoroutineProfile. It also increases the wall-clock latency of the call to GoroutineProfile, since the scheduler may interrupt it to execute other goroutines.

name old time/op new time/op delta
GoroutineProfile/small/loaded-8 1.05ms ± 5% 4.99ms ±31% +376.85% (p=0.000 n=10+9)
GoroutineProfile/sparse/loaded-8 1.04ms ± 4% 3.61ms ±27% +246.61% (p=0.000 n=10+10)
GoroutineProfile/large/loaded-8 7.69ms ±17% 20.35ms ± 4% +164.50% (p=0.000 n=10+10)
GoroutineProfile/small/idle 958µs ± 0% 1820µs ±23% +89.91% (p=0.000 n=10+10)
GoroutineProfile/sparse/idle-8 1.00ms ± 3% 1.52ms ±17% +51.18% (p=0.000 n=10+10)
GoroutineProfile/small/idle-8 1.01ms ± 4% 1.47ms ± 7% +45.28% (p=0.000 n=9+9)
GoroutineProfile/sparse/idle 980µs ± 1% 1403µs ± 2% +43.22% (p=0.000 n=9+10)
GoroutineProfile/large/idle-8 7.19ms ± 8% 8.43ms ±21% +17.22% (p=0.011 n=10+10)
PingPongHog 511ns ± 8% 585ns ± 9% +14.39% (p=0.000 n=10+10)
GoroutineProfile/large/idle 6.71ms ± 0% 7.58ms ± 3% +13.08% (p=0.000 n=8+10)
PingPongHog-8 469ns ± 8% 509ns ±12% +8.62% (p=0.010 n=9+10)
WakeupParallelSyscall/5µs 216µs ± 4% 229µs ± 3% +6.06% (p=0.000 n=10+9)
WakeupParallelSyscall/5µs-8 147µs ± 1% 149µs ± 2% +1.12% (p=0.009 n=10+10)
WakeupParallelSyscall/2µs-8 140µs ± 0% 142µs ± 1% +1.11% (p=0.001 n=10+9)
WakeupParallelSyscall/50µs-8 236µs ± 0% 238µs ± 1% +1.08% (p=0.000 n=9+10)
WakeupParallelSyscall/1µs-8 138µs ± 0% 140µs ± 1% +1.05% (p=0.013 n=10+9)
Matmult 8.52ns ± 1% 8.61ns ± 0% +0.98% (p=0.002 n=10+8)
WakeupParallelSyscall/10µs-8 157µs ± 1% 158µs ± 1% +0.58% (p=0.003 n=10+8)
CreateGoroutinesSingle-8 328ns ± 0% 330ns ± 1% +0.57% (p=0.000 n=9+9)
WakeupParallelSpinning/100µs-8 343µs ± 0% 344µs ± 1% +0.30% (p=0.015 n=8+8)
WakeupParallelSyscall/20µs-8 178µs ± 0% 178µs ± 0% +0.18% (p=0.043 n=10+9)
StackGrowthDeep-8 22.8µs ± 0% 22.9µs ± 0% +0.12% (p=0.006 n=10+10)
StackGrowth 1.06µs ± 0% 1.06µs ± 0% +0.09% (p=0.000 n=8+9)
WakeupParallelSpinning/0s 10.7µs ± 0% 10.7µs ± 0% +0.08% (p=0.000 n=9+9)
WakeupParallelSpinning/5µs 30.7µs ± 0% 30.7µs ± 0% +0.04% (p=0.000 n=10+10)
WakeupParallelSpinning/100µs 411µs ± 0% 411µs ± 0% +0.03% (p=0.000 n=10+9)
WakeupParallelSpinning/2µs 18.7µs ± 0% 18.7µs ± 0% +0.02% (p=0.026 n=10+10)
WakeupParallelSpinning/20µs-8 93.0µs ± 0% 93.0µs ± 0% +0.01% (p=0.021 n=9+10)
StackGrowth-8 216ns ± 0% 216ns ± 0% ~ (p=0.209 n=10+10)
CreateGoroutinesParallel-8 49.5ns ± 2% 49.3ns ± 1% ~ (p=0.591 n=10+10)
CreateGoroutinesSingle 699ns ±20% 748ns ±19% ~ (p=0.353 n=10+10)
WakeupParallelSpinning/0s-8 15.9µs ± 2% 16.0µs ± 3% ~ (p=0.315 n=10+10)
WakeupParallelSpinning/1µs 14.6µs ± 0% 14.6µs ± 0% ~ (p=0.513 n=10+10)
WakeupParallelSpinning/2µs-8 24.2µs ± 3% 24.1µs ± 2% ~ (p=0.971 n=10+10)
WakeupParallelSpinning/10µs 50.7µs ± 0% 50.7µs ± 0% ~ (p=0.101 n=10+10)
WakeupParallelSpinning/20µs 90.7µs ± 0% 90.7µs ± 0% ~ (p=0.898 n=10+10)
WakeupParallelSpinning/50µs 211µs ± 0% 211µs ± 0% ~ (p=0.382 n=10+10)
WakeupParallelSyscall/0s-8 137µs ± 1% 138µs ± 0% ~ (p=0.075 n=10+10)
WakeupParallelSyscall/1µs 216µs ± 1% 219µs ± 3% ~ (p=0.065 n=10+9)
WakeupParallelSyscall/2µs 214µs ± 7% 219µs ± 1% ~ (p=0.101 n=10+8)
WakeupParallelSyscall/50µs 317µs ± 5% 326µs ± 4% ~ (p=0.123 n=10+10)
WakeupParallelSyscall/100µs 450µs ± 9% 459µs ± 8% ~ (p=0.247 n=10+10)
WakeupParallelSyscall/100µs-8 337µs ± 0% 338µs ± 1% ~ (p=0.089 n=10+10)
WakeupParallelSpinning/5µs-8 32.2µs ± 0% 32.2µs ± 0% -0.05% (p=0.026 n=9+10)
WakeupParallelSpinning/50µs-8 216µs ± 0% 216µs ± 0% -0.12% (p=0.004 n=10+10)
WakeupParallelSpinning/1µs-8 20.6µs ± 0% 20.5µs ± 0% -0.22% (p=0.014 n=10+10)
WakeupParallelSpinning/10µs-8 54.5µs ± 0% 54.2µs ± 1% -0.57% (p=0.000 n=10+10)
CreateGoroutines-8 213ns ± 1% 211ns ± 1% -0.86% (p=0.002 n=10+10)
CreateGoroutinesCapture 1.03µs ± 0% 1.02µs ± 0% -0.91% (p=0.000 n=10+10)
CreateGoroutinesCapture-8 1.32µs ± 1% 1.31µs ± 1% -1.06% (p=0.001 n=10+9)
CreateGoroutines 188ns ± 0% 186ns ± 0% -1.06% (p=0.000 n=9+10)
CreateGoroutinesParallel 188ns ± 0% 186ns ± 0% -1.27% (p=0.000 n=8+10)
WakeupParallelSyscall/0s 210µs ± 3% 207µs ± 3% -1.60% (p=0.043 n=10+10)
StackGrowthDeep 121µs ± 1% 119µs ± 1% -1.70% (p=0.000 n=9+10)
Matmult-8 1.82ns ± 3% 1.78ns ± 3% -2.16% (p=0.020 n=10+10)
WakeupParallelSyscall/20µs 281µs ± 3% 269µs ± 4% -4.44% (p=0.000 n=10+10)
WakeupParallelSyscall/10µs 239µs ± 3% 228µs ± 9% -4.70% (p=0.001 n=10+10)
GoroutineProfile/sparse-nil/idle-8 485µs ± 2% 12µs ± 4% -97.56% (p=0.000 n=10+10)
GoroutineProfile/small-nil/idle-8 484µs ± 2% 12µs ± 1% -97.60% (p=0.000 n=10+7)
GoroutineProfile/small-nil/loaded-8 487µs ± 2% 11µs ± 3% -97.68% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/loaded-8 507µs ± 4% 11µs ± 6% -97.78% (p=0.000 n=10+10)
GoroutineProfile/large-nil/idle-8 709µs ± 2% 11µs ± 4% -98.38% (p=0.000 n=10+10)
GoroutineProfile/large-nil/loaded-8 717µs ± 2% 11µs ± 3% -98.43% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/idle 465µs ± 3% 1µs ± 1% -99.84% (p=0.000 n=10+10)
GoroutineProfile/small-nil/idle 493µs ± 3% 1µs ± 0% -99.85% (p=0.000 n=10+9)
GoroutineProfile/large-nil/idle 716µs ± 1% 1µs ± 2% -99.89% (p=0.000 n=7+10)

name old alloc/op new alloc/op delta
CreateGoroutinesCapture 144B ± 0% 144B ± 0% ~ (all equal)
CreateGoroutinesCapture-8 144B ± 0% 144B ± 0% ~ (all equal)

name old allocs/op new allocs/op delta
CreateGoroutinesCapture 5.00 ± 0% 5.00 ± 0% ~ (all equal)
CreateGoroutinesCapture-8 5.00 ± 0% 5.00 ± 0% ~ (all equal)

name old p50-ns new p50-ns delta
GoroutineProfile/small/loaded-8 1.01M ± 3% 3.87M ±45% +282.15% (p=0.000 n=10+10)
GoroutineProfile/sparse/loaded-8 1.02M ± 3% 2.43M ±41% +138.42% (p=0.000 n=10+10)
GoroutineProfile/large/loaded-8 7.43M ±16% 17.28M ± 2% +132.43% (p=0.000 n=10+10)
GoroutineProfile/small/idle 956k ± 0% 1559k ±16% +63.03% (p=0.000 n=10+10)
GoroutineProfile/small/idle-8 1.01M ± 3% 1.45M ± 7% +44.31% (p=0.000 n=10+9)
GoroutineProfile/sparse/idle 977k ± 1% 1399k ± 2% +43.20% (p=0.000 n=10+10)
GoroutineProfile/sparse/idle-8 1.00M ± 3% 1.41M ± 3% +40.47% (p=0.000 n=10+10)
GoroutineProfile/large/idle-8 6.97M ± 1% 8.41M ±25% +20.54% (p=0.003 n=8+10)
GoroutineProfile/large/idle 6.71M ± 1% 7.46M ± 4% +11.15% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/idle-8 483k ± 3% 13k ± 3% -97.41% (p=0.000 n=10+9)
GoroutineProfile/small-nil/idle-8 483k ± 2% 12k ± 1% -97.43% (p=0.000 n=10+8)
GoroutineProfile/small-nil/loaded-8 484k ± 3% 10k ± 2% -97.93% (p=0.000 n=10+8)
GoroutineProfile/sparse-nil/loaded-8 492k ± 2% 10k ± 4% -97.97% (p=0.000 n=10+8)
GoroutineProfile/large-nil/idle-8 708k ± 2% 12k ±15% -98.36% (p=0.000 n=10+10)
GoroutineProfile/large-nil/loaded-8 714k ± 2% 10k ± 2% -98.60% (p=0.000 n=10+8)
GoroutineProfile/sparse-nil/idle 459k ± 1% 1k ± 1% -99.85% (p=0.000 n=10+10)
GoroutineProfile/small-nil/idle 477k ± 1% 1k ± 0% -99.85% (p=0.000 n=10+9)
GoroutineProfile/large-nil/idle 712k ± 1% 1k ± 1% -99.90% (p=0.000 n=7+10)

name old p90-ns new p90-ns delta
GoroutineProfile/small/loaded-8 1.13M ±10% 7.49M ±35% +562.07% (p=0.000 n=10+10)
GoroutineProfile/sparse/loaded-8 1.10M ±12% 4.58M ±31% +318.02% (p=0.000 n=10+9)
GoroutineProfile/large/loaded-8 8.78M ±24% 27.83M ± 2% +217.00% (p=0.000 n=10+10)
GoroutineProfile/small/idle 967k ± 0% 2909k ±50% +200.91% (p=0.000 n=10+10)
GoroutineProfile/sparse/idle-8 1.02M ± 3% 1.96M ±76% +92.99% (p=0.000 n=10+10)
GoroutineProfile/small/idle-8 1.07M ±17% 1.55M ±12% +45.23% (p=0.000 n=10+10)
GoroutineProfile/sparse/idle 992k ± 1% 1417k ± 3% +42.79% (p=0.000 n=9+10)
GoroutineProfile/large/idle 6.73M ± 0% 7.99M ± 8% +18.80% (p=0.000 n=8+10)
GoroutineProfile/large/idle-8 8.20M ±25% 9.18M ±25% ~ (p=0.315 n=10+10)
GoroutineProfile/sparse-nil/idle-8 495k ± 3% 13k ± 1% -97.36% (p=0.000 n=10+9)
GoroutineProfile/small-nil/idle-8 494k ± 2% 13k ± 3% -97.36% (p=0.000 n=10+10)
GoroutineProfile/small-nil/loaded-8 496k ± 2% 13k ± 1% -97.41% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/loaded-8 544k ±11% 13k ± 1% -97.62% (p=0.000 n=10+9)
GoroutineProfile/large-nil/idle-8 724k ± 1% 13k ± 3% -98.20% (p=0.000 n=10+10)
GoroutineProfile/large-nil/loaded-8 729k ± 3% 13k ± 2% -98.23% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/idle 476k ± 4% 1k ± 1% -99.85% (p=0.000 n=9+10)
GoroutineProfile/small-nil/idle 537k ±10% 1k ± 0% -99.87% (p=0.000 n=10+9)
GoroutineProfile/large-nil/idle 729k ± 0% 1k ± 1% -99.90% (p=0.000 n=7+10)

name old p99-ns new p99-ns delta
GoroutineProfile/sparse/loaded-8 1.27M ±33% 20.49M ±17% +1514.61% (p=0.000 n=10+10)
GoroutineProfile/small/loaded-8 1.37M ±29% 20.48M ±23% +1399.35% (p=0.000 n=10+10)
GoroutineProfile/large/loaded-8 9.76M ±23% 39.98M ±22% +309.52% (p=0.000 n=10+8)
GoroutineProfile/small/idle 976k ± 1% 3367k ±55% +244.94% (p=0.000 n=10+10)
GoroutineProfile/sparse/idle-8 1.03M ± 3% 2.50M ±65% +142.30% (p=0.000 n=10+10)
GoroutineProfile/small/idle-8 1.17M ±34% 1.70M ±14% +45.15% (p=0.000 n=10+10)
GoroutineProfile/sparse/idle 1.02M ± 3% 1.45M ± 4% +42.64% (p=0.000 n=9+10)
GoroutineProfile/large/idle 6.92M ± 2% 9.00M ± 7% +29.98% (p=0.000 n=8+9)
GoroutineProfile/large/idle-8 8.74M ±23% 9.90M ±24% ~ (p=0.190 n=10+10)
GoroutineProfile/sparse-nil/idle-8 508k ± 4% 16k ± 2% -96.90% (p=0.000 n=10+9)
GoroutineProfile/small-nil/idle-8 508k ± 4% 16k ± 3% -96.91% (p=0.000 n=10+9)
GoroutineProfile/small-nil/loaded-8 542k ± 5% 15k ±15% -97.15% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/loaded-8 649k ±16% 15k ±18% -97.67% (p=0.000 n=10+10)
GoroutineProfile/large-nil/idle-8 738k ± 2% 16k ± 2% -97.86% (p=0.000 n=10+10)
GoroutineProfile/large-nil/loaded-8 765k ± 4% 15k ±17% -98.03% (p=0.000 n=10+10)
GoroutineProfile/sparse-nil/idle 539k ±26% 1k ±17% -99.84% (p=0.000 n=10+10)
GoroutineProfile/small-nil/idle 659k ±25% 1k ± 0% -99.84% (p=0.000 n=10+8)
GoroutineProfile/large-nil/idle 760k ± 2% 1k ±22% -99.88% (p=0.000 n=9+10)

Fixes #33250 For #50794 Change-Id: I862a2bc4e991cec485f21a6fce4fca84f2c6435b Reviewed-on: https://go-review.googlesource.com/c/go/+/387415 Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: Than McIntosh <thanm@google.com> Run-TryBot: Rhys Hiltner <rhys@justin.tv> TryBot-Result: Gopher Robot <gobot@golang.org>
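For reference, the API measured above can be driven directly with the usual grow-and-retry loop; a small sketch:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Ask for the record count first, then grow the buffer until the
	// snapshot fits; only the snapshot itself briefly stops the world.
	n, _ := runtime.GoroutineProfile(nil)
	records := make([]runtime.StackRecord, n+8)
	for {
		n, ok := runtime.GoroutineProfile(records)
		if ok {
			records = records[:n]
			break
		}
		// Raced with goroutine creation; grow and retry.
		records = make([]runtime.StackRecord, n+8)
	}
	fmt.Println("captured", len(records), "goroutines")
}
```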

Rhys Hiltner | b9dee7e59b | runtime/pprof: stress test goroutine profiler
For #33250 Change-Id: Ic7aa74b1bb5da9c4319718bac96316b236cb40b2 Reviewed-on: https://go-review.googlesource.com/c/go/+/387414 Run-TryBot: Rhys Hiltner <rhys@justin.tv> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com> Reviewed-by: David Chase <drchase@google.com> |

Rhys Hiltner | 209942fa88 | runtime/pprof: add race annotations for goroutine profiles
The race annotations for goroutine label maps covered the special type of read necessary to create CPU profiles. Extend that to include goroutine profiles. Annotate the copy involved in creating new goroutines. Fixes #50292 Change-Id: I10f69314e4f4eba85c506590fe4781f4d6b8ec2d Reviewed-on: https://go-review.googlesource.com/c/go/+/385660 Reviewed-by: Cherry Mui <cherryyz@google.com> Reviewed-by: Austin Clements <austin@google.com>
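The label maps in question are the ones set via runtime/pprof's public API; a short example (label key and value are arbitrary) of labels flowing into goroutine profiles:

```go
package main

import (
	"context"
	"os"
	"runtime/pprof"
)

func main() {
	// pprof.Do sets labels for the duration of the function; goroutines
	// started inside inherit them (that copy is what the CL annotates
	// for the race detector).
	pprof.Do(context.Background(), pprof.Labels("worker", "indexer"), func(ctx context.Context) {
		done := make(chan struct{})
		go func() { // inherits the worker=indexer label
			close(done)
		}()
		<-done
		// Labeled goroutines show up in the goroutine profile dump.
		pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
	})
}
```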

Michael Anthony Knyszek | 7c404d59db | runtime: store consistent total allocation stats as uint64
Currently the consistent total allocation stats are managed as uintptrs, which means they can easily overflow on 32-bit systems. Fix this by storing these stats as uint64s. This will cause some minor performance degradation on 32-bit systems, but there really isn't a way around this, and it affects the correctness of the metrics we export. Fixes #52680. Change-Id: I7e6ca44047d46b4bd91c6f87c2d29f730e0d6191 Reviewed-on: https://go-review.googlesource.com/c/go/+/403758 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Michael Knyszek <mknyszek@google.com> Reviewed-by: Austin Clements <austin@google.com>
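The failure mode is ordinary integer wraparound; a tiny demonstration with uint32 standing in for uintptr on a 32-bit system:

```go
package main

import "fmt"

func main() {
	var narrow uint32 // like uintptr on a 32-bit system
	var wide uint64
	for i := 0; i < 5; i++ {
		narrow += 1 << 30 // count 1 GiB of cumulative allocation
		wide += 1 << 30
	}
	// The narrow counter has wrapped past 4 GiB; the wide one is correct.
	fmt.Println(narrow, wide) // 1073741824 5368709120
}
```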

Ian Lance Taylor | bccce90289 | vendor, cmd/vendor: update to current x/sys repo
Ran, in src and src/cmd:

    go get -u golang.org/x/sys
    go mod vendor
    go mod tidy

This brings in loong64 support. Change-Id: Ide30bd7bd073f473be9d8329e6a4f1d2c903d9a3 Reviewed-on: https://go-review.googlesource.com/c/go/+/401855 Reviewed-by: David Chase <drchase@google.com> Run-TryBot: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@google.com>

zhouguangyuan | 884530b374 | cmd/compile: mark shape type dupok
Fixes #52633 Change-Id: I3f19804cd7c00cee7e365062402c264d84b596c1 Reviewed-on: https://go-review.googlesource.com/c/go/+/403316 Reviewed-by: Matthew Dempsky <mdempsky@google.com> Reviewed-by: Keith Randall <khr@google.com> Auto-Submit: Keith Randall <khr@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: guangyuan zhou <zhouguangyuan@golangcn.org> |

Keith Randall | 5a103ca5e9 | cmd/compile: fix bit length intrinsic for 16/8 bits on GOAMD64=v3
Upper bits of registers for uint8/uint16 are junk. Make sure we mask those off before using LZCNT (leading zeros count). Fixes #52681 Change-Id: I0ca9e62f23bcb1f6ad2a787fa9895322afaa2533 Reviewed-on: https://go-review.googlesource.com/c/go/+/403815 TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com> Reviewed-by: Keith Randall <khr@google.com> Run-TryBot: Keith Randall <khr@golang.org> Auto-Submit: Keith Randall <khr@google.com>
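The user-visible entry points are math/bits' Len8/Len16. The idea of the fix, expressed in Go rather than in the compiler's SSA rules: widen explicitly so the upper register bits are defined before counting leading zeros.

```go
package main

import (
	"fmt"
	"math/bits"
)

// bitLen16 widens explicitly: uint32(x) zeroes bits 16-31, so a 32-bit
// leading-zeros instruction (e.g. LZCNT on GOAMD64=v3) sees no junk.
func bitLen16(x uint16) int {
	return 32 - bits.LeadingZeros32(uint32(x))
}

func main() {
	fmt.Println(bitLen16(0x00ff))                    // 8
	fmt.Println(bits.Len16(0x00ff), bits.Len8(0x0f)) // 8 4
}
```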

teivah | 8a5845e4e3 | encoding/base32: decoder output depends on chunking of underlying reader
After some analysis, I figured that a way to do it could be to check, after the call to readEncodedData, whether the decoder has already seen the end of the input or not.
Fixes #38657
Change-Id: I06fd718ea4ee6ded2cb26c2866b28581ad86e271
GitHub-Last-Rev:
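The property at stake can be checked from outside the package: decoded output must not depend on how the underlying reader chunks its data. A sketch using testing/iotest to force one-byte reads:

```go
package main

import (
	"bytes"
	"encoding/base32"
	"fmt"
	"io"
	"strings"
	"testing/iotest"
)

func main() {
	enc := base32.StdEncoding.EncodeToString([]byte("hello, base32"))

	// Decode the same data from a reader that returns it all at once
	// and from one that returns a single byte per Read call.
	whole, _ := io.ReadAll(base32.NewDecoder(base32.StdEncoding,
		strings.NewReader(enc)))
	byByte, _ := io.ReadAll(base32.NewDecoder(base32.StdEncoding,
		iotest.OneByteReader(strings.NewReader(enc))))

	// With #38657 fixed, chunking makes no difference.
	fmt.Println(bytes.Equal(whole, byByte))
}
```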

Tobias Klauser | 35d02791b9 | net: remove fallback path in sysSocket
Support for operating system versions requiring this fallback path was dropped from recent Go versions. The minimum Linux kernel version is 2.6.32 as of Go 1.18. FreeBSD 10 is no longer supported as of Go 1.13. Change-Id: I7e74768146dd43a36d0d26fcb08eed9ace82189f Reviewed-on: https://go-review.googlesource.com/c/go/+/403634 TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> Auto-Submit: Tobias Klauser <tobias.klauser@gmail.com> Run-TryBot: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Ian Lance Taylor <iant@google.com> Reviewed-by: David Chase <drchase@google.com> |

Kale Blankenship | 356c8c5452 | archive/zip: remove unused File.descErr field
Found via staticcheck. Unused as of CL 357489. Change-Id: I3aa409994ba4388912ac7e7809168529a5b6e31c Reviewed-on: https://go-review.googlesource.com/c/go/+/403814 TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Kale B <kale@lemnisys.com> Run-TryBot: Ian Lance Taylor <iant@google.com> Reviewed-by: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: David Chase <drchase@google.com> |

zhangyunhao | 8efd059f10 | A+C: add YunHao Zhang
Change-Id: I898733dff529a40eeec9f9db2a0a59a6757c3827 Reviewed-on: https://go-review.googlesource.com/c/go/+/402515 Run-TryBot: Ian Lance Taylor <iant@google.com> Reviewed-by: Eli Bendersky <eliben@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Ian Lance Taylor <iant@google.com> |

Bryan C. Mills | 61a585a32c | os/exec: in Command, update cmd.Path even if LookPath returns an error
Fixes #52666. Updates #43724. Updates #43947. Change-Id: I72cb585036b7e93cd7adbff318b400586ea97bd5 Reviewed-on: https://go-review.googlesource.com/c/go/+/403694 Reviewed-by: Russ Cox <rsc@golang.org> Run-TryBot: Bryan Mills <bcmills@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Bryan Mills <bcmills@google.com> |

Michael Anthony Knyszek | e7508598bb | runtime: use Escape instead of escape in export_test.go
I landed the bottom CL of my stack without rebasing or retrying trybots, but in the rebase "escape" was removed in favor of "Escape." Change-Id: Icdc4d8de8b6ebc782215f2836cd191377cc211df Reviewed-on: https://go-review.googlesource.com/c/go/+/403755 Reviewed-by: Bryan Mills <bcmills@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> |

Michael Anthony Knyszek | f01c20bf2b | runtime/debug: export SetMemoryLimit
This change also adds an end-to-end test for SetMemoryLimit as a testprog. Fixes #48409. Change-Id: I102d64acf0f36a43ee17b7029e8dfdd1ee5f057d Reviewed-on: https://go-review.googlesource.com/c/go/+/397018 Reviewed-by: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
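Usage is a one-liner; a negative input queries the limit without changing it:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Read the current limit (the default is math.MaxInt64: no limit).
	fmt.Println("current limit:", debug.SetMemoryLimit(-1))

	// Keep the runtime's total memory use under 512 MiB. The limit is
	// soft: the GC runs more aggressively near it rather than failing
	// allocations.
	debug.SetMemoryLimit(512 << 20)
}
```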

Michael Anthony Knyszek | 91f863013e | runtime: redesign scavenging algorithm
Currently the runtime's scavenging algorithm involves running from the top of the heap address space to the bottom (or as far as it gets) once per GC cycle. Once it treads some ground, it doesn't tread it again until the next GC cycle. This works just fine for the background scavenger, for heap-growth scavenging, and for debug.FreeOSMemory. However, it breaks down in the face of a memory limit for small heaps in the tens of MiB. Basically, because the scavenger never retreads old ground, it's completely oblivious to new memory it could scavenge, and that it really *should* in the face of a memory limit. Also, every time some thread goes to scavenge in the runtime, it reserves what could be a considerable amount of address space, hiding it from other scavengers.

This change modifies and simplifies the implementation overall. It's less code with complexities that are much better encapsulated. The current implementation iterates optimistically over the address space looking for memory to scavenge, keeping track of what it last saw. The new implementation does the same, but instead of directly iterating over pages, it iterates over chunks. It maintains an index of chunks (as a bitmap over the address space) that indicates which chunks may contain scavenge work. The page allocator populates this index, while scavengers consume it and iterate over it optimistically. This has two key benefits:

1. Scavenging is much simpler: find a candidate chunk, and check it, essentially just using the scavengeOne fast path. There's no need for the complexity of iterating beyond one chunk, because the index is lock-free and already maintains that information.
2. If pages are freed to the page allocator (always guaranteed to be unscavenged), the page allocator immediately notifies all scavengers of the new source of work, avoiding the hiding issues of the old implementation.

One downside of the new implementation, however, is that it's potentially more expensive to find pages to scavenge. In the past, if a single page would become free high up in the address space, the runtime's scavengers would ignore it. Now that scavengers won't, one or more scavengers may need to iterate potentially across the whole heap to find the next source of work. For the background scavenger, this just means a potentially less reactive scavenger -- overall it should still use the same amount of CPU. It means worse overheads for memory limit scavenging, but that's not exactly something with a baseline yet. In practice, this shouldn't be too bad, hopefully, since the chunk index is extremely compact. For a 48-bit address space, the index is only 8 MiB in size at worst, but even just one physical page in the index is able to support up to 128 GiB heaps, provided they aren't terribly sparse. On 32-bit platforms, the index is only 128 bytes in size.

For #48409. Change-Id: I72b7e74365046b18c64a6417224c5d85511194fb Reviewed-on: https://go-review.googlesource.com/c/go/+/399474 Reviewed-by: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>
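The chunk index is the core of the new design. A toy version of such a lock-free bitmap, with invented names (the runtime's real index carries more structure):

```go
package scavsketch

import (
	"math/bits"
	"sync/atomic"
)

type searchIdx struct {
	words []atomic.Uint64 // one bit per chunk: set = may contain work
}

func newSearchIdx(nchunks int) *searchIdx {
	return &searchIdx{words: make([]atomic.Uint64, (nchunks+63)/64)}
}

// mark publishes that chunk i may have scavengable pages (in the CL, the
// page allocator does this when pages are freed).
func (s *searchIdx) mark(i int) {
	w := &s.words[i/64]
	for {
		old := w.Load()
		if w.CompareAndSwap(old, old|uint64(1)<<(i%64)) {
			return
		}
	}
}

// take claims the first marked chunk at or after word from/64 and returns
// its index, or -1 if none; scavengers iterate with this and then inspect
// the chunk itself (the scavengeOne fast path in the CL's terms).
func (s *searchIdx) take(from int) int {
	for wi := from / 64; wi < len(s.words); wi++ {
		for {
			old := s.words[wi].Load()
			if old == 0 {
				break
			}
			b := bits.TrailingZeros64(old)
			if s.words[wi].CompareAndSwap(old, old&^(uint64(1)<<b)) {
				return wi*64 + b
			}
		}
	}
	return -1
}
```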

Michael Anthony Knyszek | b4d81147d8 | runtime: make the scavenger and allocator respect the memory limit
This change does everything necessary to make the memory allocator and the scavenger respect the memory limit. In particular, it:

- Adds a second goal for the background scavenge that's based on the memory limit, setting a target 5% below the limit to make sure it's working hard when the application is close to it.
- Makes span allocation assist the scavenger if the next allocation is about to put total memory use above the memory limit.
- Measures any scavenge assist time and adds it to GC assist time for the sake of GC CPU limiting, to avoid a death spiral as a result of scavenging too much.

All of these changes have a relatively small impact, but they are intimately related and thus benefit from being done together. For #48409. Change-Id: I35517a752f74dd12a151dd620f102c77e095d3e8 Reviewed-on: https://go-review.googlesource.com/c/go/+/397017 Reviewed-by: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>

Michael Anthony Knyszek | 7e4bc74119 | runtime: set the heap goal from the memory limit
This change makes the memory limit functional by including it in the heap goal calculation. Specifically, we derive a heap goal from the memory limit, and compare that to the GOGC-based goal. If the goal based on the memory limit is lower, we prefer that. To derive the memory limit goal, the heap goal calculation now takes a few additional parameters as input. As a result, the heap goal, in the presence of a memory limit, may change dynamically. The consequences of this are that different parts of the runtime can have different views of the heap goal; this is OK. What's important is that all of the runtime is able to observe the correct heap goal for the moment it's doing something that affects it, like anything that should trigger a GC cycle.

On the topic of triggering a GC cycle, this change also allows any manually managed memory allocation from the page heap to trigger a GC. So, specifically workbufs, unrolled GC scan programs, and goroutine stacks. The reason for this is that now non-heap memory can affect the trigger or the heap goal. Most sources of non-heap memory only change slowly, like GC pointer bitmaps, or change in response to explicit function calls like GOMAXPROCS. Note also that unrolled GC scan programs and workbufs are really only relevant during a GC cycle anyway, so they won't actually ever trigger a GC. Our primary target here is goroutine stacks.

Goroutine stacks can increase quickly, and this is currently totally independent of the GC cycle. Thus, if for example a goroutine begins to recurse suddenly and deeply, then even though the heap goal and trigger react, we might not notice until it's too late. As a result, we need to trigger a GC cycle. We do this trigger in allocManual instead of in stackalloc because it's far more general. We ultimately care about memory that's mapped read/write and not returned to the OS, which is much more the domain of the page heap than the stack allocator. Furthermore, there may be new sources of manual memory allocation in the future (e.g. arenas) that need to trigger a GC if necessary. As such, I'm inclined to leave the trigger in allocManual as an extra defensive measure.

It's worth noting that because goroutine stacks do not behave quite as predictably as other non-heap memory, there is the potential for the heap goal to swing wildly. Fortunately, goroutine stacks that haven't been set up to shrink by the last GC cycle will not shrink until after the next one. This reduces the amount of possible churn in the heap goal because it means that shrinkage only happens once per goroutine, per GC cycle. After all the goroutines that should shrink did, then goroutine stacks will only grow. The shrink mechanism is analogous to sweeping, which is incremental and thus tends toward a steady amount of heap memory used. As a result, in practice, I expect this to be a non-issue.

Note that if the memory limit is not set, this change should be a no-op. For #48409. Change-Id: Ie06d10175e5e36f9fb6450e26ed8acd3d30c681c Reviewed-on: https://go-review.googlesource.com/c/go/+/394221 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
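The goal selection itself reduces to taking a minimum; a sketch with invented names and simplified accounting (the runtime's real inputs are more involved):

```go
package main

import "fmt"

// heapGoal picks the lower of the GOGC-derived goal and a goal derived
// from the memory limit minus non-heap memory use.
func heapGoal(gogcGoal, memoryLimit, nonHeapMemory uint64) uint64 {
	limitGoal := uint64(0)
	if memoryLimit > nonHeapMemory {
		limitGoal = memoryLimit - nonHeapMemory
	}
	if limitGoal < gogcGoal {
		return limitGoal
	}
	return gogcGoal
}

func main() {
	// GOGC says 800 MiB, but a 512 MiB limit with 64 MiB of non-heap
	// memory (e.g. goroutine stacks) caps the goal at 448 MiB.
	fmt.Println(heapGoal(800<<20, 512<<20, 64<<20)>>20, "MiB") // 448 MiB
}
```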

Michael Anthony Knyszek | 973dcbb87c | runtime: remove float64 multiplication in heap trigger compute path
As of the last CL, the heap trigger is computed as-needed. This means that some of the niceties we assumed (that the float64 computations don't matter because we're doing this rarely anyway) are no longer true. While we're not exactly on a hot path right now, the trigger check still happens often enough that it's a little too hot for comfort. This change optimizes the computation by replacing the float64 multiplication with a shift and a constant integer multiplication.

I ran an allocation microbenchmark for an allocation size that would hit this path often. CPU profiles seem to indicate this path was ~0.1% of cycles (dwarfed by other costs, e.g. zeroing memory) even if all we're doing is allocating, so the "optimization" here isn't particularly important. However, since the code here is executed significantly more frequently, and this change isn't particularly complicated, let's err on the side of efficiency if we can help it.

Note that because of the way the constants are represented now, they're ever so slightly different from before, so this change technically isn't a total no-op. In practice however, it should be. These constants are fuzzy and hand-picked anyway, so having them shift a little is unlikely to make a significant change to the behavior of the GC.

For #48409. Change-Id: Iabb2385920f7d891b25040226f35a3f31b7bf844 Reviewed-on: https://go-review.googlesource.com/c/go/+/397015 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>
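The shape of the optimization, with illustrative constants rather than the pacer's actual ones:

```go
package main

import "fmt"

func viaFloat(goal uint64) uint64 { return uint64(float64(goal) * 0.95) }

// viaFixed approximates the same ratio with integer ops only: 61/64 =
// 0.953125. Dividing before multiplying avoids overflow for large goals.
func viaFixed(goal uint64) uint64 { return goal / 64 * 61 }

func main() {
	for _, g := range []uint64{4 << 20, 256 << 20, 16 << 30} {
		fmt.Println(viaFloat(g), viaFixed(g))
	}
}
```

The small drift between the two results is exactly the "not a total no-op" effect the message describes.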

Michael Anthony Knyszek | 129dcb7226 | runtime: check the heap goal and trigger dynamically
As it stands, the heap goal and the trigger are set once by gcController.commit, and then read out of gcController. However, with the coming memory limit we need the GC to be able to respond to changes in non-heap memory. The simplest way of achieving this is to compute the heap goal and its associated trigger dynamically. In order to make this easier to implement, the GC trigger is now based on the heap goal, as opposed to the status quo of computing both simultaneously. In many cases we just want the heap goal anyway, not both, but we definitely need the goal to compute the trigger, because the trigger's bounds are entirely based on the goal (the initial runway is not).

A consequence of this is that we can't rely on the trigger to enforce a minimum heap size anymore, and we need to lift that up directly to the goal. Specifically, we need to lift up any part of the calculation that *could* put the trigger ahead of the goal. Luckily this is just the heap minimum and minimum sweep distance. In the first case, the pacer may behave slightly differently, as the heap minimum is no longer the minimum trigger, but the actual minimum heap goal. In the second case it should be the same, as we ensure the additional runway for sweeping is added to both the goal *and* the trigger, as before, by computing that in gcControllerState.commit.

There's also another place we update the heap goal: if a GC starts and we triggered beyond the goal, we always ensure there's some runway. That calculation uses the current trigger, which violates the rule of keeping the goal based on the trigger. Notice, however, that using the precomputed trigger for this isn't even quite correct: due to a bug, or something else, we might trigger a GC beyond the precomputed trigger. So this change also adds a "triggered" field to gcControllerState that tracks the point at which a GC actually triggered. This is independent of the precomputed trigger, so it's fine for the heap goal calculation to rely on it. It also turns out, there's more than just that one place where we really should be using the actual trigger point, so this change fixes those up too.

Also, because the heap minimum is set by the goal and not the trigger, the maximum trigger calculation now happens *after* the goal is set, so the maximum trigger actually does what I originally intended (and what the comment says): at small heaps, the pacer picks 95% of the runway as the maximum trigger. Currently, the pacer picks a small trigger based on a not-yet-rounded-up heap goal, so the trigger gets rounded up to the goal, and as per the "ensure there's some runway" check, the runway ends up always being 64 KiB. That check is supposed to be for exceptional circumstances, not the status quo. There's a test introduced in the last CL that needs to be updated to accommodate this slight change in behavior.

So, this all sounds like a lot that changed, but what we're talking about here are really, really tight corner cases that arise from situations outside of our control, like pathologically bad behavior on the part of an OS or CPU. Even in these corner cases, it's very unlikely that users will notice any difference at all. What's more important, I think, is that the pacer behaves more closely to what all the comments describe, and what the original intent was. Another note: at first, one might think that computing the heap goal and trigger dynamically introduces some raciness, but not in this CL: the heap goal and trigger are completely static.

Allocation outside of a GC cycle may now be a bit slower than before, as the GC trigger check is now significantly more complex. However, note that this executes basically just as often as gcController.revise, and that makes up a vanishingly small part of any CPU profile. The next CL cleans up the floating point multiplications on this path nonetheless, just to be safe. For #48409. Change-Id: I280f5ad607a86756d33fb8449ad08555cbee93f9 Reviewed-on: https://go-review.googlesource.com/c/go/+/397014 Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>

Michael Anthony Knyszek | 473c99643f | runtime: rewrite pacer max trigger calculation
Currently the maximum trigger calculation is totally incorrect with respect to the comment above it and its intent. This change rectifies this mistake. For #48409. Change-Id: Ifef647040a8bdd304dd327695f5f315796a61a74 Reviewed-on: https://go-review.googlesource.com/c/go/+/398834 Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> |

Michael Anthony Knyszek | 375d696ddf | runtime: move inconsistent memstats into gcController
Fundamentally, all of these memstats exist to serve the runtime in managing memory. For the sake of simpler testing, couple these stats more tightly with the GC. This CL was mostly done automatically. The fields had to be moved manually, but the references to the fields were updated via:

    gofmt -w -r 'memstats.<field> -> gcController.<field>' *.go

For #48409. Change-Id: Ic036e875c98138d9a11e1c35f8c61b784c376134 Reviewed-on: https://go-review.googlesource.com/c/go/+/397678 Reviewed-by: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>

Michael Anthony Knyszek | d36d5bd3c1 | runtime: clean up inconsistent heap stats
The inconsistent heap stats in memstats are a bit messy. Primarily, heap_sys is non-orthogonal with heap_released and heap_inuse. In later CLs, we're going to want heap_sys-heap_released-heap_inuse, so clean this up by replacing heap_sys with an orthogonal metric: heapFree. heapFree represents page heap memory that is free but not released.

I think this change also simplifies a lot of reasoning about these stats; it's much clearer what they mean, and to obtain HeapSys for memstats, we no longer need to do the strange subtraction from heap_sys when allocating specifically non-heap memory from the page heap.

Because we're removing heap_sys, we need to replace it with a sysMemStat for mem.go functions. In this case, heap_released is the most appropriate because we increase it anyway (again, non-orthogonality). In which case, it makes sense for heap_inuse, heap_released, and heapFree to become more uniform, and to just represent them all as sysMemStats. While we're here and messing with the types of heap_inuse and heap_released, let's also fix their names (and last_heap_inuse's name) up to the more modern Go convention of camelCase.

For #48409. Change-Id: I87fcbf143b3e36b065c7faf9aa888d86bd11710b Reviewed-on: https://go-review.googlesource.com/c/go/+/397677 Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Michael Pratt <mpratt@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>

Michael Anthony Knyszek | 4649a43903 | runtime: track how much memory is mapped in the Ready state
This change adds a field to memstats called mappedReady that tracks how much memory is in the Ready state at any given time. In essence, it's the total memory usage by the Go runtime (with one exception which is documented): all memory mapped read/write that has either been paged in or soon will be. To make tracking this not involve the many different stats that track mapped memory, we track this statistic at a very low level.

The downside of tracking this statistic at such a low level is that it managed to catch lots of situations where the runtime wasn't fully accounting for memory. This change rectifies these situations by always accounting for memory that's mapped in some way (i.e. always passing a sysMemStat to a mem.go function), with *three* exceptions. Rectifying these situations means also having the memory mapped during testing being accounted for, so that tests (i.e. ReadMemStats) that ultimately check mappedReady continue to work correctly without special exceptions. We choose to simply account for this memory in other_sys.

Let's talk about the exceptions. The first is that the arenas array, used for finding heap arena metadata from an address, is mapped as read/write in one large chunk. It's tens of MiB in size. On systems with demand paging, we assume that the whole thing isn't paged in at once (after all, it maps to the whole address space, and it's exceedingly difficult with today's technology to even broach having as much physical memory as the total address space). On systems where we have to commit memory manually, we use a two-level structure. Now, the reason why this is an exception is because we have no mechanism to track what memory is paged in, and we can't just account for the entire thing, because that would *look* like an enormous overhead. Furthermore, this structure is on a few really, really critical paths in the runtime, so doing more explicit tracking isn't really an option. So, we explicitly don't, and call sysAllocOS to map this memory.

The second exception is that we call sysFree with no accounting to clean up address space reservations, or otherwise to throw out mappings we don't care about. In this case, we also drop down to a lower level and call sysFreeOS to explicitly avoid accounting.

The third exception is debuglog allocations. That is purely a debugging facility and ideally we want it to have as small an impact on the runtime as possible. If we include it in mappedReady calculations, it could cause GC pacing shifts in future CLs, especially if one increases the debuglog buffer sizes as a one-off.

As of this CL, these are the only three places in the runtime that would pass nil for a stat to any of the functions in mem.go. As a result, this CL makes sysMemStats mandatory to facilitate better accounting in the future. It's now much easier to grep and find out where accounting is explicitly elided, because one doesn't have to follow the trail of sysMemStat nil pointer values, and can just look at the function name.

For #48409. Change-Id: I274eb467fc2603881717482214fddc47c9eaf218 Reviewed-on: https://go-review.googlesource.com/c/go/+/393402 Reviewed-by: Michael Pratt <mpratt@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Michael Knyszek <mknyszek@google.com>

Michael Anthony Knyszek | 85d466493d | runtime: maintain a direct count of total allocs and frees
This will be used by the memory limit computation to determine overheads. For #48409. Change-Id: Iaa4e26e1e6e46f88d10ba8ebb6b001be876dc5cd Reviewed-on: https://go-review.googlesource.com/c/go/+/394220 Reviewed-by: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> |

Michael Anthony Knyszek | 986a31053d | runtime: add a non-functional memory limit to the pacer
Nothing much to see here, just some plumbing to make later CLs smaller and clearer. For #48409. Change-Id: Ide23812d5553e0b6eea5616c277d1a760afb4ed0 Reviewed-on: https://go-review.googlesource.com/c/go/+/393401 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com>

Michael Anthony Knyszek | 0feebe6eb5 | runtime: add byte count parser for GOMEMLIMIT
This change adds a parser for the GOMEMLIMIT environment variable's input. This environment variable accepts a number followed by an optional suffix expressing the unit. Acceptable units include B, KiB, MiB, GiB, TiB, where *iB is a power-of-two byte unit. For #48409. Change-Id: I6a3b4c02b175bfcf9c4debee6118cf5dda93bb6f Reviewed-on: https://go-review.googlesource.com/c/go/+/393400 TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Michael Pratt <mpratt@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com>
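A sketch of such a parser in ordinary Go; the real one lives inside the runtime and cannot use the standard library, and all names here are invented:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseByteCount accepts a number with an optional byte-unit suffix,
// where *iB units are powers of two. Overflow checking is omitted.
func parseByteCount(s string) (int64, error) {
	units := []struct {
		suffix string
		shift  uint
	}{{"TiB", 40}, {"GiB", 30}, {"MiB", 20}, {"KiB", 10}, {"B", 0}}
	var shift uint
	for _, u := range units {
		if strings.HasSuffix(s, u.suffix) {
			s, shift = strings.TrimSuffix(s, u.suffix), u.shift
			break
		}
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return n << shift, nil
}

func main() {
	n, _ := parseByteCount("4GiB")
	fmt.Println(n) // 4294967296
}
```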

Michael Knyszek | 01359b4681 | runtime: add GC CPU utilization limiter
This change adds a GC CPU utilization limiter to the GC. It disables assists to ensure GC CPU utilization remains under 50%. It uses a leaky bucket mechanism that will only fill if GC CPU utilization exceeds 50%. Once the bucket begins to overflow, GC assists are limited until the bucket empties, at the risk of GC overshoot. The limiter is primarily updated by assists. The scheduler may also update it, but only if the GC is on and a few milliseconds have passed since the last update. This second case exists to ensure that if the limiter is on, and no assists are happening, we're still updating the limiter regularly.

The purpose of this limiter is to mitigate GC death spirals, opting to use more memory instead. This change turns the limiter on always. In practice, 50% overall GC CPU utilization is very difficult to hit unless you're trying; even the most allocation-heavy applications with complex heaps still need to do something with that memory. Note that small GOGC values (i.e. single-digit, or low teens) are more likely to trigger the limiter, which means the GOGC tradeoff may no longer be respected. Even so, it should still be relatively rare.

This change also introduces the feature flag for code to support the memory limit feature. For #48409. Change-Id: Ia30f914e683e491a00900fd27868446c65e5d3c2 Reviewed-on: https://go-review.googlesource.com/c/go/+/353989 Reviewed-by: Michael Pratt <mpratt@google.com>
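The mechanism is a classic leaky bucket. A toy model of the fill rule described above (the capacity and the bookkeeping are invented; the runtime's limiter tracks much more state):

```go
package main

import "fmt"

type limiter struct {
	fill, capacity int64 // nanoseconds of excess GC CPU time
}

// accumulate updates the bucket with a window of gcCPU out of totalCPU
// time: the bucket fills only when GC CPU use exceeds 50% of the total.
func (l *limiter) accumulate(gcCPU, totalCPU int64) {
	l.fill += gcCPU - totalCPU/2
	if l.fill < 0 {
		l.fill = 0
	}
	if l.fill > l.capacity {
		l.fill = l.capacity // overflow: the limiter engages
	}
}

// limiting reports whether assists should currently be limited.
func (l *limiter) limiting() bool { return l.fill == l.capacity }

func main() {
	l := &limiter{capacity: 1e9}
	l.accumulate(9e8, 1e9) // 90% GC CPU over 1s: bucket gains 0.4s
	fmt.Println(l.fill, l.limiting())
}
```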

Robert Griesemer | 920b9ab57d | cmd/compile/internal/syntax: accept all valid type parameter lists
Type parameter lists starting with the form [name *T|...] or [name (X)|...] may look like an array length expression [x]. Only after parsing the entire initial expression and checking whether the expression contains type elements or is followed by a comma can we make the final decision.

This change simplifies the existing parsing strategy: instead of trying to make an upfront decision with limited information (which is insufficient), the parser now parses the start of a type parameter list or array length specification as an expression. In a second step, if the expression can be split into a name followed by a type element, or a name followed by an ordinary expression which is succeeded by a comma, we assume a type parameter list (because it can't be an array length). In all other cases we assume an array length specification.

Fixes #49482. Change-Id: I269b6291999bf60dc697d33d24a5635f01e065b9 Reviewed-on: https://go-review.googlesource.com/c/go/+/402256 Reviewed-by: Benny Siegert <bsiegert@gmail.com> Reviewed-by: Ian Lance Taylor <iant@google.com> Reviewed-by: Ian Lance Taylor <iant@golang.org>
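Concretely, the declarations this disambiguation serves look like the following (examples constructed for illustration):

```go
package main

import "fmt"

const x = 2

type A [x]int          // array type: [x] is a constant length expression
type B[T any] struct{} // generic type: [T any] is a type parameter list

// The hard case: after "[P *int" the parser cannot yet distinguish a
// pointer type element from the multiplication "P * int" in an array
// length; only the "|" (or a following comma) settles it as a type
// parameter list.
type C[P *int | *string] struct{ v P }

func main() {
	fmt.Println(A{1, 2}, B[int]{}, C[*int]{})
}
```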

Tobias Klauser | 3e00bd0ae4 | internal/poll, net, syscall: use accept4 on solaris
Solaris supports accept4 since version 11.4, see https://docs.oracle.com/cd/E88353_01/html/E37843/accept4-3c.html Use it in internal/poll.accept like on other platforms. Change-Id: I3d9830a85e93bbbed60486247c2f91abc646371f Reviewed-on: https://go-review.googlesource.com/c/go/+/403394 Reviewed-by: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> Reviewed-by: Benny Siegert <bsiegert@gmail.com> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Ian Lance Taylor <iant@google.com> |

Jorropo | 0537a74b76 | io: NopCloser forward WriterTo implementations if the reader supports it
This patch also includes related fixes to net/http. io_test.go doesn't test reading or WriteTo of the wrapper because the logic is simple. NopCloser didn't even have direct tests before.
Fixes #51566
Change-Id: I1943ee2c20d0fe749f4d04177342ce6eca443efe
GitHub-Last-Rev:
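The technique fits in a few lines. This sketch mirrors the idea rather than the standard library's exact code; after the CL, io.NopCloser performs this selection internally:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// nopCloserWriterTo wraps a reader that is known to implement io.WriterTo,
// so io.Copy can bypass its intermediate buffer.
type nopCloserWriterTo struct{ io.Reader }

func (nopCloserWriterTo) Close() error { return nil }

func (c nopCloserWriterTo) WriteTo(w io.Writer) (int64, error) {
	// Safe: nopCloser only builds this type when the assertion holds.
	return c.Reader.(io.WriterTo).WriteTo(w)
}

func nopCloser(r io.Reader) io.ReadCloser {
	if _, ok := r.(io.WriterTo); ok {
		return nopCloserWriterTo{r}
	}
	return io.NopCloser(r)
}

func main() {
	rc := nopCloser(strings.NewReader("hello\n")) // *strings.Reader has WriteTo
	n, _ := io.Copy(os.Stdout, rc)                // uses WriteTo, no copy buffer
	fmt.Println(n)
}
```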

Meng Zhuo | d0cda4d95f | internal/bytealg: mask high bit for riscv64 regabi
This CL masks byte params whose high bits (~0xff) are unused for the riscv64 regabi. Currently the compiler only guarantees that the low bits contain the value. Change-Id: I6dd6c867e60d2143fefde92c866f78c4b007a2f7 Reviewed-on: https://go-review.googlesource.com/c/go/+/402894 Reviewed-by: Cherry Mui <cherryyz@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: mzh <mzh@golangcn.org> Reviewed-by: Benny Siegert <bsiegert@gmail.com>

Robert Findley | b75e492b35 | go/types,types2: delay the check for conflicting struct field names
In #52529, we observed that checking types for duplicate fields and methods during method collection can result in incorrect early expansion of the base type. Fix this by delaying the check for duplicate fields. Notably, we can't delay the check for duplicate methods, as we must preserve the invariant that added method names are unique.

After this change, it may be possible in the presence of errors to have a type-checked type containing a method name that conflicts with a field name. With the previous logic conflicting methods would have been skipped. This is a change in behavior, but only for invalid code. Preserving the existing behavior would likely require delaying method collection, which could have more significant consequences.

As a result of this change, the compiler test fixedbugs/issue28268.go started passing with types2, having previously been marked as broken. The fix was not actually related to the duplicate method error, but rather the fact that we stopped reporting redundant errors on the calls to x.b() and x.E(), because they are now (valid!) methods.

Fixes #52529 Change-Id: I850ce85c6ba76d79544f46bfd3deb8538d8c7d00 Reviewed-on: https://go-review.googlesource.com/c/go/+/403455 Reviewed-by: Robert Griesemer <gri@google.com> Run-TryBot: Robert Findley <rfindley@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>

Russ Cox | a41e37f56a | cmd/internal/notsha256: fix ppc64 build
The Go 1.8 toolchain on the builder does not support the assembly in this directory for ppc64, so just delete it. Change-Id: I97caf9d176b7d72b4a265a008b84d91bb86ef70e Reviewed-on: https://go-review.googlesource.com/c/go/+/403616 TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Russ Cox <rsc@golang.org> Reviewed-by: Ian Lance Taylor <iant@google.com> Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com> |

Cuong Manh Le | 0668e3cb1a | cmd/compile: support pointers to arrays in arrayClear
Fixes #52635 Change-Id: I85f182931e30292983ef86c55a0ab6e01282395c Reviewed-on: https://go-review.googlesource.com/c/go/+/403337 Reviewed-by: Matthew Dempsky <mdempsky@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com> Reviewed-by: Ian Lance Taylor <iant@google.com>
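The loop pattern the compiler recognizes, now also through a pointer to an array:

```go
package main

import "fmt"

func main() {
	a := &[4]int{1, 2, 3, 4}
	// This zeroing-loop form is lowered to a memclr; after this CL that
	// also happens when ranging over a pointer to an array.
	for i := range a {
		a[i] = 0
	}
	fmt.Println(*a) // [0 0 0 0]
}
```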

Russ Cox | f771edd7f9 | all: REVERSE MERGE dev.boringcrypto (cdcb4b6) into master
This commit is a REVERSE MERGE. It merges dev.boringcrypto back into its parent branch, master. This marks the end of development on dev.boringcrypto.

Manual changes:

- git rm README.boringcrypto.md
- git rm -r misc/boring
- git rm src/cmd/internal/notsha256/sha256block_arm64.s
- git cherry-pick -n 5856aa74 # remove GOEXPERIMENT=boringcrypto forcing in cmd/dist

There are some minor cleanups like merging import statements that I will apply in a follow-up CL. Merge List: + 2022-04-29

Ian Lance Taylor | 99f1bf54eb | bufio: clarify io.EOF behavior of Reader.Read
Fixes #52577 Change-Id: Idaff2604979f9a9c1c7d3140c8a5d218fcd27a56 Reviewed-on: https://go-review.googlesource.com/c/go/+/403594 Reviewed-by: Joseph Tsai <joetsai@digital-static.net> Reviewed-by: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Bryan Mills <bcmills@google.com> Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Ian Lance Taylor <iant@google.com> |

Joe Tsai | a887579976 | compress/flate: move idempotent close logic to compressor
The compressor methods already have logic for handling a sticky error. Merge the logic from CL 136475 into that. This slightly changes the error message to be more sensible in the situation where it's returned by Flush. Updates #27741 Change-Id: Ie34cf3164d0fa6bd0811175ca467dbbcb3be1395 Reviewed-on: https://go-review.googlesource.com/c/go/+/403514 Reviewed-by: Dmitri Shuralyov <dmitshur@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@google.com> Run-TryBot: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> |

Paul E. Murphy | 8375b54d44 | internal/bytealg: improve PPC64 equal
Rewrite the vector loop to process 64B per iteration; this greatly improves performance on POWER9/POWER10 for large sizes. Likewise, use similar tricks for sizes >= 8 and <= 64. And rewrite small comparisons: it's a little slower for 1 byte, but constant time for 1-7 bytes. Thus, it is increasingly faster for 2-7B.

Benchmark results below are from P8/P9 ppc64le (in that order); several additional testcases have been added to test interesting sizes. Likewise, the old variant was padded to the same code size as the new variant to minimize layout related noise.

POWER8/ppc64le/linux:

name old speed new speed delta
Equal/1 110MB/s ± 0% 106MB/s ± 0% -3.26%
Equal/2 202MB/s ± 0% 203MB/s ± 0% +0.18%
Equal/3 280MB/s ± 0% 319MB/s ± 0% +13.89%
Equal/4 350MB/s ± 0% 414MB/s ± 0% +18.27%
Equal/5 412MB/s ± 0% 533MB/s ± 0% +29.19%
Equal/6 462MB/s ± 0% 620MB/s ± 0% +34.11%
Equal/7 507MB/s ± 0% 745MB/s ± 0% +47.02%
Equal/8 913MB/s ± 0% 994MB/s ± 0% +8.84%
Equal/9 909MB/s ± 0% 1117MB/s ± 0% +22.85%
Equal/10 937MB/s ± 0% 1242MB/s ± 0% +32.59%
Equal/11 962MB/s ± 0% 1370MB/s ± 0% +42.37%
Equal/12 989MB/s ± 0% 1490MB/s ± 0% +50.60%
Equal/13 1.01GB/s ± 0% 1.61GB/s ± 0% +60.27%
Equal/14 1.02GB/s ± 0% 1.74GB/s ± 0% +71.22%
Equal/15 1.03GB/s ± 0% 1.86GB/s ± 0% +81.45%
Equal/16 1.60GB/s ± 0% 1.99GB/s ± 0% +24.21%
Equal/17 1.54GB/s ± 0% 2.04GB/s ± 0% +32.28%
Equal/20 1.48GB/s ± 0% 2.40GB/s ± 0% +62.64%
Equal/32 3.58GB/s ± 0% 3.84GB/s ± 0% +7.18%
Equal/63 3.74GB/s ± 0% 7.17GB/s ± 0% +91.79%
Equal/64 6.35GB/s ± 0% 7.29GB/s ± 0% +14.75%
Equal/65 5.85GB/s ± 0% 7.00GB/s ± 0% +19.66%
Equal/127 6.74GB/s ± 0% 13.74GB/s ± 0% +103.77%
Equal/128 10.6GB/s ± 0% 12.9GB/s ± 0% +21.98%
Equal/129 9.66GB/s ± 0% 11.96GB/s ± 0% +23.85%
Equal/191 9.12GB/s ± 0% 17.80GB/s ± 0% +95.26%
Equal/192 13.4GB/s ± 0% 17.2GB/s ± 0% +28.66%
Equal/4K 29.5GB/s ± 0% 37.3GB/s ± 0% +26.39%
Equal/4M 22.6GB/s ± 0% 23.1GB/s ± 0% +2.40%
Equal/64M 10.6GB/s ± 0% 11.2GB/s ± 0% +5.83%

POWER9/ppc64le/linux:

name old speed new speed delta
Equal/1 122MB/s ± 0% 121MB/s ± 0% -0.94%
Equal/2 223MB/s ± 0% 241MB/s ± 0% +8.29%
Equal/3 289MB/s ± 0% 362MB/s ± 0% +24.90%
Equal/4 366MB/s ± 0% 483MB/s ± 0% +31.82%
Equal/5 427MB/s ± 0% 603MB/s ± 0% +41.28%
Equal/6 462MB/s ± 0% 723MB/s ± 0% +56.65%
Equal/7 509MB/s ± 0% 843MB/s ± 0% +65.57%
Equal/8 974MB/s ± 0% 1066MB/s ± 0% +9.46%
Equal/9 1.00GB/s ± 0% 1.20GB/s ± 0% +19.53%
Equal/10 1.00GB/s ± 0% 1.33GB/s ± 0% +32.81%
Equal/11 1.01GB/s ± 0% 1.47GB/s ± 0% +45.28%
Equal/12 1.04GB/s ± 0% 1.60GB/s ± 0% +53.46%
Equal/13 1.05GB/s ± 0% 1.73GB/s ± 0% +64.67%
Equal/14 1.02GB/s ± 0% 1.87GB/s ± 0% +82.93%
Equal/15 1.04GB/s ± 0% 2.00GB/s ± 0% +92.07%
Equal/16 1.83GB/s ± 0% 2.13GB/s ± 0% +16.58%
Equal/17 1.78GB/s ± 0% 2.18GB/s ± 0% +22.65%
Equal/20 1.72GB/s ± 0% 2.57GB/s ± 0% +49.24%
Equal/32 3.89GB/s ± 0% 4.10GB/s ± 0% +5.53%
Equal/63 3.63GB/s ± 0% 7.63GB/s ± 0% +110.45%
Equal/64 6.69GB/s ± 0% 7.75GB/s ± 0% +15.84%
Equal/65 6.28GB/s ± 0% 7.07GB/s ± 0% +12.46%
Equal/127 6.41GB/s ± 0% 13.65GB/s ± 0% +112.95%
Equal/128 11.1GB/s ± 0% 14.1GB/s ± 0% +26.56%
Equal/129 10.2GB/s ± 0% 11.2GB/s ± 0% +9.44%
Equal/191 8.64GB/s ± 0% 16.39GB/s ± 0% +89.75%
Equal/192 15.3GB/s ± 0% 17.8GB/s ± 0% +16.31%
Equal/4K 24.6GB/s ± 0% 27.8GB/s ± 0% +13.12%
Equal/4M 21.1GB/s ± 0% 22.7GB/s ± 0% +7.66%
Equal/64M 20.8GB/s ± 0% 22.4GB/s ± 0% +8.06%

Change-Id: Ie3c582133d526cc14e8846ef364c44c93eb7b9a2 Reviewed-on: https://go-review.googlesource.com/c/go/+/399976 Reviewed-by: Carlos Amedee <carlos@golang.org> Run-TryBot: Paul Murphy <murp@ibm.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Cherry Mui <cherryyz@google.com> Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>

Austin Clements | 3b01a80319 | cmd/compile: fix loopreschedchecks for regabi
The loopreschedchecks pass (GOEXPERIMENT=preemptibleloops) had bit-rotted in two ways because of the regabi experiment:

1. The call to goschedguarded was generating a pre-regabi StaticCall. This CL updates it to construct a new-style StaticCall.

2. The mem finder did not account for tuples or results containing a mem. This caused it to construct phis that were supposed to thread the mem into the added blocks, but they could instead thread a tuple or results containing a mem, causing things to go wrong later. This CL updates the mem finder to add an op to select out the mem if it finds the last live mem in a block is a tuple or results. This isn't ideal since we'll deadcode out most of these, but it's the easiest thing to do and this is just an experiment.

Tested by running the runtime tests. Ideally we'd have a real test for this, but I don't think it's worth the effort for code that clearly hasn't been enabled by anyone for at least a year.

Change-Id: I8ed01207637c454b68a551d38986c947e17d520b Reviewed-on: https://go-review.googlesource.com/c/go/+/403475 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: David Chase <drchase@google.com>

Matthew Dempsky | d4bfc87218 | cmd/compile: simplify code from CL 398474
This CL:

1. extracts typecheck.LookupNum into a method on *types.Pkg, so that it can be used with any package, not just types.LocalPkg,
2. adds a new helper function closureSym to generate symbols in the appropriate package as needed within stencil.go, and
3. updates the existing typecheck.LookupNum+Name.SetSym code to call closureSym instead.

No functional change (so no need to backport to Go 1.18), but a little cleaner, and avoids polluting types.LocalPkg.Syms with symbols that we won't end up using. Updates #52117. Change-Id: Ifc8a3b76a37c830125e9d494530d1f5b2e3e3e2a Reviewed-on: https://go-review.googlesource.com/c/go/+/403197 Reviewed-by: David Chase <drchase@google.com> Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Matthew Dempsky <mdempsky@google.com> Run-TryBot: Matthew Dempsky <mdempsky@google.com>

Xiaodong Liu | af99c2092a | cmd/internal/objabi: define Go relocation types for loong64
Contributors to the loong64 port are:

Weining Lu <luweining@loongson.cn>
Lei Wang <wanglei@loongson.cn>
Lingqin Gong <gonglingqin@loongson.cn>
Xiaolin Zhao <zhaoxiaolin@loongson.cn>
Meidan Li <limeidan@loongson.cn>
Xiaojuan Zhai <zhaixiaojuan@loongson.cn>
Qiyuan Pu <puqiyuan@loongson.cn>
Guoqi Chen <chenguoqi@loongson.cn>

This port has been updated to Go 1.15.6: https://github.com/loongson/go Updates #46229 Change-Id: I8d31b3cd827325aa0ff748ca8c0c0da6df6ed99f Reviewed-on: https://go-review.googlesource.com/c/go/+/396734 Reviewed-by: David Chase <drchase@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Ian Lance Taylor <iant@google.com> Run-TryBot: Ian Lance Taylor <iant@google.com>

Russ Cox | d922c0a8f5 | all: use os/exec instead of internal/execabs
We added internal/execabs back in January 2021 in order to fix a security problem caused by os/exec's handling of the current directory. Now that os/exec has that code, internal/execabs is superfluous and can be deleted. This commit rewrites all the imports back to os/exec and deletes internal/execabs. For #43724. Change-Id: Ib9736baf978be2afd42a1225e2ab3fd5d33d19df Reviewed-on: https://go-review.googlesource.com/c/go/+/381375 Run-TryBot: Russ Cox <rsc@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org> Auto-Submit: Russ Cox <rsc@golang.org> Reviewed-by: Dmitri Shuralyov <dmitshur@google.com> |

Wayne Zuo | 64369c3ea0 | A+C: add Wayne Zuo (individual CLA)
Change-Id: I12fe0b7952a41f6d0f78f892d823244793745279 Reviewed-on: https://go-review.googlesource.com/c/go/+/403336 TryBot-Result: Gopher Robot <gobot@golang.org> Run-TryBot: Wayne Zuo <wdvxdr@golangcn.org> Run-TryBot: Ian Lance Taylor <iant@google.com> Auto-Submit: Benny Siegert <bsiegert@gmail.com> Reviewed-by: Ian Lance Taylor <iant@google.com> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Benny Siegert <bsiegert@gmail.com> |

Russ Cox | 06338941ea | net/http: fix for recent go.mod update
cmd/internal/moddeps was failing. Ran the commands it suggested:

    % go mod tidy                              # to remove extraneous dependencies
    % go mod vendor                            # to vendor dependencies
    % go generate -run=bundle std              # to regenerate bundled packages
    % go generate syscall internal/syscall/... # to regenerate syscall packages

cmd/internal/moddeps is happy now. Change-Id: I4ee212cdc323f62a6cdcfdddb6813397b23d89e5 Reviewed-on: https://go-review.googlesource.com/c/go/+/403454 Run-TryBot: Russ Cox <rsc@golang.org> Auto-Submit: Russ Cox <rsc@golang.org> Reviewed-by: Than McIntosh <thanm@google.com> TryBot-Result: Gopher Robot <gobot@golang.org>

Russ Cox | 349cc83389 | os/exec: return error when PATH lookup would use current directory
CL 381374 was reverted because x/sys/execabs broke.
This CL reapplies CL 381374, but adding a lookPathErr error
field back, for execabs to manipulate with reflect.
That field will just be a bit of scar tissue in this package forever,
to keep old code working with new toolchains.
CL 403256 fixes x/sys/execabs's test to be ready for the change.
Older versions of x/sys/execabs will keep working
(that is, will keep rejecting what they should reject),
but they will return a slightly different error from LookPath
without that CL, and the test fails because of the different
error text.
For #43724.
This reverts commit
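From the caller's side, the behavior surfaces as an error recorded on the Cmd; a sketch using exec.ErrDot and Cmd.Err, the names this mechanism ships under in Go 1.19:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("prog") // bare name: resolved via PATH
	// If the name would have resolved only via the current directory,
	// Command records an error instead of silently running "./prog".
	if errors.Is(cmd.Err, exec.ErrDot) {
		cmd.Err = nil // explicit opt-in to the old behavior
	}
	if err := cmd.Run(); err != nil {
		fmt.Println("run:", err)
	}
}
```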

Baokun Lee | 32acceb359 | internal/poll: clear completed Buffers to permit earlier collection
Updates #45163 Change-Id: I73a6f22715550e0e8b83fbd3ebec72ef019f153f Reviewed-on: https://go-review.googlesource.com/c/go/+/373374 Run-TryBot: Lee Baokun <bk@golangcn.org> Run-TryBot: Ian Lance Taylor <iant@google.com> TryBot-Result: Gopher Robot <gobot@golang.org> Auto-Submit: Ian Lance Taylor <iant@google.com> Reviewed-by: Carlos Amedee <carlos@golang.org> Reviewed-by: Ian Lance Taylor <iant@google.com> |