mirror of https://github.com/golang/go synced 2024-10-02 10:28:34 -06:00
Commit Graph

14 Commits

Author SHA1 Message Date
Muhammad Falak R Wani
58cb8a3c8f runtime: remove redeclared structs to make tests build
The struct32 and struct40 structs are already declared; remove the
duplicate declarations so the runtime tests build.
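
A minimal, hypothetical illustration of the failure mode (the field
layout is assumed, not taken from the runtime tests): declaring the
same type twice in one package is a compile error, which is why the
duplicate definitions broke the test build.

    package main

    type struct32 struct{ a, b, c, d int64 }

    // Declaring it a second time in the same package fails to compile
    // with "struct32 redeclared in this block":
    // type struct32 struct{ a, b, c, d int64 }

    func main() {}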

Change-Id: I3814f2b850dcb15c4002a3aa22e2a9326e5a5e53
Reviewed-on: https://go-review.googlesource.com/55614
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2017-08-15 07:19:25 +00:00
Martin Möhrmann
8a6e51aede cmd/compile: generate makechan calls with int arguments
Where possible, generate calls to runtime makechan with int arguments
at compile time instead of makechan with int64 arguments.

This eliminates converting arguments to int64 for makechan calls on
platforms where int64 values do not fit into arguments of type int.

A similar optimization for makeslice was introduced in CL
golang.org/cl/27851.
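
The sketch below models only the size handling this change is about;
the real runtime entry points operate on *chantype and *hchan, so the
names and signatures here are illustrative, not the actual runtime API.

    package main

    import "fmt"

    // makechanSketch stands in for the int-argument path.
    func makechanSketch(size int) string {
        return fmt.Sprintf("chan with capacity %d", size)
    }

    // makechan64Sketch stands in for the int64 path: it rejects sizes
    // that do not fit in an int and then defers to the int path, so the
    // conversion is only paid when the compiler cannot prove that the
    // requested size fits in an int.
    func makechan64Sketch(size int64) string {
        if int64(int(size)) != size {
            panic("makechan: size out of range")
        }
        return makechanSketch(int(size))
    }

    func main() {
        fmt.Println(makechanSketch(10)) // e.g. make(chan byte, 10)
        fmt.Println(makechan64Sketch(1 << 20))
    }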

386:
name                old time/op  new time/op  delta
MakeChan/Byte       52.4ns ± 6%  45.0ns ± 1%  -14.14%  (p=0.000 n=10+10)
MakeChan/Int        54.5ns ± 1%  49.1ns ± 1%   -9.87%  (p=0.000 n=10+10)
MakeChan/Ptr         150ns ± 1%   143ns ± 0%   -4.38%  (p=0.000 n=9+7)
MakeChan/Struct/0   49.2ns ± 2%  43.2ns ± 2%  -12.27%  (p=0.000 n=10+10)
MakeChan/Struct/32  81.7ns ± 2%  76.2ns ± 1%   -6.71%  (p=0.000 n=10+10)
MakeChan/Struct/40  88.4ns ± 2%  82.5ns ± 2%   -6.60%  (p=0.000 n=10+10)

AMD64:
name                old time/op  new time/op  delta
MakeChan/Byte       83.4ns ± 8%  80.8ns ± 3%    ~     (p=0.171 n=10+10)
MakeChan/Int         101ns ± 3%   101ns ± 2%    ~     (p=0.412 n=10+10)
MakeChan/Ptr         128ns ± 1%   128ns ± 1%    ~     (p=0.191 n=10+10)
MakeChan/Struct/0   67.6ns ± 3%  68.7ns ± 4%    ~     (p=0.224 n=10+10)
MakeChan/Struct/32   138ns ± 1%   139ns ± 1%    ~     (p=0.185 n=10+9)
MakeChan/Struct/40   154ns ± 1%   154ns ± 1%  -0.55%  (p=0.027 n=10+9)

Change-Id: Ie854cb066007232c5e9f71ea7d6fe27e81a9c050
Reviewed-on: https://go-review.googlesource.com/55140
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-15 05:54:24 +00:00
Ian Lance Taylor
a145890059 all: don't call t.Fatal from a goroutine
Fixes #17900.
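
The testing package requires FailNow (and therefore Fatal/Fatalf) to be
called from the goroutine running the test, not from goroutines the
test starts. A minimal sketch of the safe pattern (doWork and the
timeout are hypothetical):

    package example

    import (
        "testing"
        "time"
    )

    // doWork is a stand-in for the operation being exercised.
    func doWork() error { return nil }

    func TestWorker(t *testing.T) {
        errc := make(chan error, 1)
        go func() {
            // Do not call t.Fatal here; report the result back instead.
            // (t.Error is safe to call from other goroutines.)
            errc <- doWork()
        }()

        select {
        case err := <-errc:
            if err != nil {
                t.Fatal(err) // safe: back on the test goroutine
            }
        case <-time.After(5 * time.Second):
            t.Fatal("timed out waiting for worker")
        }
    }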

Change-Id: I42cda6ac9cf48ed739d3a015a90b3cb15edf8ddf
Reviewed-on: https://go-review.googlesource.com/33243
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-11-15 15:13:48 +00:00
Ian Lance Taylor
84bb9e62f0 runtime: handle selects with duplicate channels in shrinkstack
The shrinkstack code locks all the channels a goroutine is waiting for,
but it did not handle the case of the same channel appearing in the list
multiple times, which led to a deadlock. The channels are sorted, so it
is easy to avoid locking the same channel twice.
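
A hedged sketch of the skip-duplicates idea: because the wait list is
sorted by channel, duplicates are adjacent, and comparing each entry
with the previously locked one is enough (sync.Mutex stands in for the
runtime's channel lock; the identifiers are illustrative):

    package main

    import (
        "fmt"
        "sync"
    )

    // lockSorted locks each distinct lock in a slice that is already
    // sorted by address, so duplicates sit next to each other and a
    // single comparison with the previous entry avoids double-locking.
    func lockSorted(locks []*sync.Mutex) {
        var prev *sync.Mutex
        for _, l := range locks {
            if l != prev {
                l.Lock()
                prev = l
            }
        }
    }

    func main() {
        a, b := new(sync.Mutex), new(sync.Mutex)
        // The same channel can appear twice in one select, e.g.
        // select { case <-c: case <-c: }, so the list holds duplicates.
        lockSorted([]*sync.Mutex{a, a, b})
        fmt.Println("each distinct lock acquired exactly once")
    }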

Fixes #16286.

Change-Id: Ie514805d0532f61c942e85af5b7b8ac405e2ff65
Reviewed-on: https://go-review.googlesource.com/24815
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2016-07-08 02:05:40 +00:00
Dmitry Vyukov
3b246fa863 runtime: sleep less when we can do work
Usleep(100) in runqgrab negatively affects latency and throughput
of parallel applications. We are sleeping instead of doing useful work.
This effect is particularly visible on windows, where the minimal
sleep duration is 1-15ms.

Reduce sleep from 100us to 3us and use osyield on windows.
Sync chan send/recv takes ~50ns, so 3us gives us ~50x overshoot.
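
A hedged sketch of the backoff choice described above (the GOOS check
and constants are illustrative; the real logic lives in the scheduler's
runqgrab, and runtime.Gosched stands in for the internal osyield):

    package main

    import (
        "runtime"
        "time"
    )

    // backoff pauses briefly before retrying to grab work. On windows
    // the timer granularity makes even a short sleep cost 1-15ms, so
    // yielding the processor is preferable; elsewhere a 3us sleep keeps
    // the overshoot relative to a ~50ns channel operation small.
    func backoff() {
        if runtime.GOOS == "windows" {
            runtime.Gosched()
        } else {
            time.Sleep(3 * time.Microsecond)
        }
    }

    func main() {
        for i := 0; i < 3; i++ {
            backoff()
        }
    }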

benchmark                    old ns/op     new ns/op     delta
BenchmarkChanSync-12         216           217           +0.46%
BenchmarkChanSyncWork-12     27213         25816         -5.13%

CPU consumption goes up from 106% to 108% in the first case,
and from 107% to 125% in the second case.

Test case from #14790 on windows:

benchmark                       old ns/op    new ns/op    delta
BenchmarkDefaultResolution-8    4583372      29720        -99.35%
Benchmark1ms-8                  992056       30701        -96.91%

99-th latency percentile for HTTP request serving is improved by up to 15%
(see http://golang.org/cl/20835 for details).

The following benchmarks are from the change that originally added this sleep
(see https://golang.org/s/go15gomaxprocs):

name        old time/op  new time/op  delta
Chain       22.6µs ± 2%  22.7µs ± 6%    ~      (p=0.905 n=9+10)
ChainBuf    22.4µs ± 3%  22.5µs ± 4%    ~      (p=0.780 n=9+10)
Chain-2     23.5µs ± 4%  24.9µs ± 1%  +5.66%   (p=0.000 n=10+9)
ChainBuf-2  23.7µs ± 1%  24.4µs ± 1%  +3.31%   (p=0.000 n=9+10)
Chain-4     24.2µs ± 2%  25.1µs ± 3%  +3.70%   (p=0.000 n=9+10)
ChainBuf-4  24.4µs ± 5%  25.0µs ± 2%  +2.37%  (p=0.023 n=10+10)
Powser       2.37s ± 1%   2.37s ± 1%    ~       (p=0.423 n=8+9)
Powser-2     2.48s ± 2%   2.57s ± 2%  +3.74%   (p=0.000 n=10+9)
Powser-4     2.66s ± 1%   2.75s ± 1%  +3.40%  (p=0.000 n=10+10)
Sieve        13.3s ± 2%   13.3s ± 2%    ~      (p=1.000 n=10+9)
Sieve-2      7.00s ± 2%   7.44s ±16%    ~      (p=0.408 n=8+10)
Sieve-4      4.13s ±21%   3.85s ±22%    ~       (p=0.113 n=9+9)

Fixes #14790

Change-Id: Ie7c6a1c4f9c8eb2f5d65ab127a3845386d6f8b5d
Reviewed-on: https://go-review.googlesource.com/20835
Reviewed-by: Austin Clements <austin@google.com>
2016-04-05 15:32:06 +00:00
Austin Clements
276b177771 runtime: make shrinkstack concurrent-safe
Currently shrinkstack is only safe during STW because it adjusts
channel-related stack pointers and moves send/receive stack slots
without synchronizing with the channel code. Make it safe to use when
the world isn't stopped by:

1) Locking all channels the G is blocked on while adjusting the sudogs
   and copying the area of the stack that may contain send/receive
   slots.

2) For any stack frames that may contain send/receive slots, using an
   atomic CAS to adjust pointers to prevent races between adjusting a
   pointer in a receive slot and a concurrent send writing to that
   receive slot.
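
A hedged, self-contained sketch of the CAS idea in point 2: only
retarget a slot if it still points into the old stack range, so a
concurrent sender that has already overwritten the slot simply wins the
race (the ranges and names are illustrative, not the runtime's
stack-adjustment code):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // adjustSlot moves *slot from the old stack range to the new one
    // with a compare-and-swap. If a concurrent writer has already
    // stored a new value, the CAS fails and nothing is left to adjust.
    func adjustSlot(slot *uintptr, oldLo, oldHi, delta uintptr) bool {
        old := atomic.LoadUintptr(slot)
        if old < oldLo || old >= oldHi {
            return false // not a pointer into the old stack
        }
        return atomic.CompareAndSwapUintptr(slot, old, old+delta)
    }

    func main() {
        // Pretend the old stack spans [0x1000, 0x2000) and the new
        // copy starts 0x4000 bytes higher.
        var slot uintptr = 0x1234
        moved := adjustSlot(&slot, 0x1000, 0x2000, 0x4000)
        fmt.Printf("moved=%v slot=%#x\n", moved, slot)
    }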

In principle, the synchronization could be finer-grained. For example,
we considered synchronizing around the sudogs, which would allow
channel operations involving other Gs to continue if the G being
shrunk was far enough down the send/receive queue. However, using the
channel lock means no additional locks are necessary in the channel
code. Furthermore, the stack shrinking code holds the channel lock for
a very short time (much less than the time required to shrink the
stack).

This does not yet make stack shrinking concurrent; it merely makes
doing so safe.

This has negligible effect on the go1 and garbage benchmarks.

For #12967.

Change-Id: Ia49df3a8a7be4b36e365aac4155a2416b94b988c
Reviewed-on: https://go-review.googlesource.com/20042
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
2016-03-16 20:13:20 +00:00
Martin Möhrmann
fdd0179bb1 all: fix typos and spelling
Change-Id: Icd06d99c42b8299fd931c7da821e1f418684d913
Reviewed-on: https://go-review.googlesource.com/19829
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-02-24 18:42:29 +00:00
Mikio Hara
1fa0a8cec5 runtime: fix data race in BenchmarkChanPopular
Fixes #11014.

Change-Id: I9a18dacd10564d3eaa1fea4d77f1a48e08e79f53
Reviewed-on: https://go-review.googlesource.com/10563
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-06-02 11:16:01 +00:00
Alex Brainman
031c3bc9ae runtime: fix stackDebug comment
Change-Id: Ia9191bd7ecdf7bd5ee7d69ae23aa71760f379aa8
Reviewed-on: https://go-review.googlesource.com/9590
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-05-02 02:39:50 +00:00
Dmitry Vyukov
fcb895feef runtime: add a select test
One of my earlier versions of finer-grained select locking
failed on this test. If you just naively lock and check channels
one-by-one, it is possible that you skip over ready channels.
Consider that initially c1 is ready and c2 is not. Select checks c2.
Then another goroutine makes c1 not ready and c2 ready (in that order).
Then select checks c1, concludes that no channels are ready and
executes the default case. But there was no point in time at which
no channel was ready, so the default case must not be executed.
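
A hedged sketch of the kind of test this describes (not necessarily the
exact test that was added): one goroutine keeps flipping which channel
holds a value, so at every instant at least one channel is ready and
the default case must never be taken.

    package example

    import "testing"

    func TestNonblockSelectNeverDefaults(t *testing.T) {
        c1 := make(chan int, 1)
        c2 := make(chan int, 1)
        c1 <- 1 // initially c1 is ready and c2 is not
        done := make(chan bool)
        go func() {
            for i := 0; i < 1000; i++ {
                c2 <- 1 // make c2 ready ...
                <-c1    // ... then make c1 not ready, and so on
                c1 <- 1
                <-c2
            }
            done <- true
        }()
        for i := 0; i < 1000; i++ {
            select {
            case v := <-c1:
                c1 <- v
            case v := <-c2:
                c2 <- v
            default:
                t.Error("default taken although a channel was ready")
            }
        }
        <-done
    }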

Change-Id: I3594bf1f36cfb120be65e2474794f0562aebcbbd
Reviewed-on: https://go-review.googlesource.com/7550
Reviewed-by: Russ Cox <rsc@golang.org>
2015-03-18 08:57:30 +00:00
Keith Randall
cd5b144d98 runtime,reflect,cmd/internal/gc: Fix comments referring to .c/.h files
Everything has moved to Go, but comments still refer to .c/.h files.
Fix all of those up, at least for these three directories.

Fixes #10138

Change-Id: Ie5efe89b247841e0b3f82aac5256b2c606ef67dc
Reviewed-on: https://go-review.googlesource.com/7431
Reviewed-by: Russ Cox <rsc@golang.org>
2015-03-11 20:19:43 +00:00
Keith Randall
8eb8b40a49 runtime: use doubly-linked lists for channel send/recv queues.
Avoids a potential O(n^2) performance problem when dequeueing
from very popular channels.
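
A hedged, simplified sketch of why the doubly-linked list helps:
removing an arbitrary waiter becomes O(1) once each node knows its
predecessor, instead of requiring a scan from the head of the queue.
The types below are illustrative stand-ins, not the runtime's
waitq/sudog definitions.

    package main

    import "fmt"

    // waiter stands in for a sudog: one blocked operation on a channel.
    type waiter struct {
        id         int
        prev, next *waiter
    }

    // waitq is a doubly-linked queue of waiters.
    type waitq struct{ first, last *waiter }

    func (q *waitq) enqueue(w *waiter) {
        w.prev = q.last
        if q.last == nil {
            q.first = w
        } else {
            q.last.next = w
        }
        q.last = w
    }

    // remove unlinks w in O(1); a singly-linked list would need a scan
    // from the head, which is what made popular channels O(n^2) overall.
    func (q *waitq) remove(w *waiter) {
        if w.prev == nil {
            q.first = w.next
        } else {
            w.prev.next = w.next
        }
        if w.next == nil {
            q.last = w.prev
        } else {
            w.next.prev = w.prev
        }
        w.prev, w.next = nil, nil
    }

    func main() {
        var q waitq
        ws := []*waiter{{id: 1}, {id: 2}, {id: 3}}
        for _, w := range ws {
            q.enqueue(w)
        }
        q.remove(ws[1]) // unlink the middle waiter without scanning
        for w := q.first; w != nil; w = w.next {
            fmt.Println(w.id)
        }
    }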

benchmark                old ns/op     new ns/op     delta
BenchmarkChanPopular     2563782       627201        -75.54%

Change-Id: I231aaeafea0ecd93d27b268a0b2128530df3ddd6
Reviewed-on: https://go-review.googlesource.com/1200
Reviewed-by: Russ Cox <rsc@golang.org>
2014-12-08 19:20:12 +00:00
Keith Randall
e330cc16f4 runtime: dequeue the correct SudoG
select {
case <-c:
case <-c:
}

In this case, c.recvq lists two SudoGs which have the same G.
So we can't use the G as the key to dequeue the correct SudoG,
as that key is ambiguous.  Dequeueing the wrong SudoG ends up
freeing a SudoG that is still in c.recvq.

The fix is to use the actual SudoG pointer as the key.
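
A hedged sketch contrasting the two keys on a simplified singly-linked
queue (identifiers are illustrative, not the runtime's): keying the
dequeue by the goroutine is ambiguous when the same G has two waiters
on one channel, while keying by the specific SudoG pointer is not.

    package main

    import "fmt"

    type g struct{ name string }

    // sudog stands in for one pending channel operation of a goroutine.
    type sudog struct {
        g    *g
        next *sudog
    }

    // dequeueByG removes the first waiter belonging to gp; with
    // duplicate channels in a select this may unlink the wrong one.
    func dequeueByG(head **sudog, gp *g) *sudog {
        for p := head; *p != nil; p = &(*p).next {
            if (*p).g == gp {
                sg := *p
                *p = sg.next
                sg.next = nil
                return sg
            }
        }
        return nil
    }

    // dequeueBySudog removes exactly the waiter that was woken; the
    // pointer is unambiguous even when the same G appears twice.
    func dequeueBySudog(head **sudog, target *sudog) {
        for p := head; *p != nil; p = &(*p).next {
            if *p == target {
                *p = target.next
                target.next = nil
                return
            }
        }
    }

    func main() {
        gp := &g{name: "G1"}
        sg1 := &sudog{g: gp}
        sg2 := &sudog{g: gp}
        head := sg1
        sg1.next = sg2

        // Suppose the operation that completed was the one behind sg2:
        // keying by G still removes sg1, the wrong waiter.
        fmt.Println(dequeueByG(&head, gp) == sg1) // true

        // Keying by the sudog pointer removes exactly sg2.
        head, sg1.next = sg1, sg2
        dequeueBySudog(&head, sg2)
        fmt.Println(head == sg1 && head.next == nil) // true
    }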

LGTM=dvyukov
R=rsc, bradfitz, dvyukov, khr
CC=austin, golang-codereviews
https://golang.org/cl/159040043
2014-10-18 21:02:49 -07:00
Russ Cox
c007ce824d build: move package sources from src/pkg to src
Preparation was in CL 134570043.
This CL contains only the effect of 'hg mv src/pkg/* src'.
For more about the move, see golang.org/s/go14nopkg.
2014-09-08 00:08:51 -04:00