linux/386 depends on the modify_ldt system call, but recent Linux kernels
can disable it. When the call is disabled, any Go program built for
linux/386 crashes with the message 'Trace/breakpoint trap'.
The kernel config option CONFIG_MODIFY_LDT_SYSCALL, which controls
whether modify_ldt is available, is disabled on Amazon Linux 2016.03.
Fix this by using set_thread_area instead of modify_ldt
on linux/386.
Fixes #14795.
Change-Id: I0cc5139e40e9e5591945164156a77b6bdff2c7f1
Reviewed-on: https://go-review.googlesource.com/21190
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
One intrinsic was needed to help get the very best
performance out of a future GC; as long as that one was
being added, I also added Bswap since that is sometimes
a handy thing to have. I had intended to fill out the
bit-scan intrinsic family, but the mismatch between the
"scan forward" instruction and "count leading zeroes"
was large enough to cause me to leave it out -- it poses
a dilemma that I'd rather dodge right now.
These intrinsics are not exposed for general use.
That's a separate issue requiring an API change proposal
(https://github.com/golang/proposal).
All intrinsics are tested, both that they are substituted
on the appropriate architecture, and that they produce the
expected result.
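For reference, a pure-Go sketch of what the 64-bit Bswap intrinsic
computes (on architectures with a byte-swap instruction the compiler
replaces the runtime-internal call with that single instruction):

// bswap64 reverses the byte order of x: what the Bswap intrinsic
// computes in one instruction where the hardware supports it.
func bswap64(x uint64) uint64 {
	x = x>>32 | x<<32                                           // swap halves
	x = (x&0xFFFF0000FFFF0000)>>16 | (x&0x0000FFFF0000FFFF)<<16 // swap 16-bit pairs
	x = (x&0xFF00FF00FF00FF00)>>8 | (x&0x00FF00FF00FF00FF)<<8   // swap bytes
	return x
}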
Change-Id: I5848037cfd97de4f75bdc33bdd89bba00af4a8ee
Reviewed-on: https://go-review.googlesource.com/20564
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
For #14876.
Change-Id: I0992859264cbaf9c9b691fad53345bbb01b4cf3b
Reviewed-on: https://go-review.googlesource.com/21085
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
For #14876.
Change-Id: I33947f74e8058437a784862f1f064974afc99250
Reviewed-on: https://go-review.googlesource.com/21084
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This is a follow-up to https://go-review.googlesource.com/#/c/20653/.
Special-case the computation for slices with elements of byte size or
pointer size.
name old time/op new time/op delta
GrowSliceBytes-4 86.2ns ± 3% 75.4ns ± 2% -12.50% (p=0.000 n=20+20)
GrowSliceInts-4 161ns ± 3% 136ns ± 3% -15.59% (p=0.000 n=19+19)
GrowSlicePtr-4 239ns ± 2% 233ns ± 2% -2.52% (p=0.000 n=20+20)
GrowSliceStruct24Bytes-4 258ns ± 3% 256ns ± 3% ~ (p=0.134 n=20+20)
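The shape of the special-casing can be sketched standalone (names
hypothetical; the real change lives inside runtime.growslice):

package main

import "fmt"

const ptrSize = 8 // assuming a 64-bit platform

// roundupsize stands in for the runtime's size-class rounding; here
// it just rounds up to a multiple of 16.
func roundupsize(n uintptr) uintptr { return (n + 15) &^ 15 }

// capmem mirrors the shape of the special-casing: element sizes of 1
// and ptrSize avoid the general multiply/divide pair entirely.
func capmem(elemSize uintptr, newcap int) (mem uintptr, adjusted int) {
	switch elemSize {
	case 1:
		mem = roundupsize(uintptr(newcap))
		return mem, int(mem)
	case ptrSize:
		mem = roundupsize(uintptr(newcap) * ptrSize)
		return mem, int(mem / ptrSize)
	default:
		mem = roundupsize(uintptr(newcap) * elemSize)
		return mem, int(mem / elemSize)
	}
}

func main() {
	fmt.Println(capmem(1, 17)) // 32 32: bytes need no division at all
	fmt.Println(capmem(8, 5))  // 48 6: pointer size reduces to shifts
}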
Change-Id: Ice5fa648058fe9d7fa89dee97ca359966f671128
Reviewed-on: https://go-review.googlesource.com/21101
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
There's a race between runtime.goexitsall killing all OS processes
of a Go program in order to exit, and runtime.newosproc forking a
new one. If the new process has been created but has not yet stored
its pid in m.procid, it will not be killed by goexitsall, and
deadlock results.
This CL prevents the race by making the newly forked process
check whether the program is exiting. It also prevents a
potential "shoot-out" if multiple goroutines call Exit at
the same time, which could possibly lead to two processes
killing each other and leaving the rest deadlocked.
Change-Id: I3170b4a62d2461f6b029b3d6aad70373714ed53e
Reviewed-on: https://go-review.googlesource.com/21135
Run-TryBot: David du Colombier <0intro@gmail.com>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David du Colombier <0intro@gmail.com>
During random stealing we steal 4*GOMAXPROCS times from random procs.
One would expect that most of the time we check all procs this way,
but due to the low-quality PRNG we actually miss procs with frightening
probability. Below are results of a modelling experiment with 1e6 tries:
GOMAXPROCS = 2 : missed 1 procs 7944 times
GOMAXPROCS = 3 : missed 1 procs 101620 times
GOMAXPROCS = 3 : missed 2 procs 3571 times
GOMAXPROCS = 4 : missed 1 procs 63916 times
GOMAXPROCS = 4 : missed 2 procs 61 times
GOMAXPROCS = 4 : missed 3 procs 16 times
GOMAXPROCS = 5 : missed 1 procs 133136 times
GOMAXPROCS = 5 : missed 2 procs 1025 times
GOMAXPROCS = 5 : missed 3 procs 101 times
GOMAXPROCS = 5 : missed 4 procs 15 times
GOMAXPROCS = 8 : missed 1 procs 151765 times
GOMAXPROCS = 8 : missed 2 procs 5057 times
GOMAXPROCS = 8 : missed 3 procs 1726 times
GOMAXPROCS = 8 : missed 4 procs 68 times
GOMAXPROCS = 12 : missed 1 procs 199081 times
GOMAXPROCS = 12 : missed 2 procs 27489 times
GOMAXPROCS = 12 : missed 3 procs 3113 times
GOMAXPROCS = 12 : missed 4 procs 233 times
GOMAXPROCS = 12 : missed 5 procs 9 times
GOMAXPROCS = 16 : missed 1 procs 237477 times
GOMAXPROCS = 16 : missed 2 procs 30037 times
GOMAXPROCS = 16 : missed 3 procs 9466 times
GOMAXPROCS = 16 : missed 4 procs 1334 times
GOMAXPROCS = 16 : missed 5 procs 192 times
GOMAXPROCS = 16 : missed 6 procs 5 times
GOMAXPROCS = 16 : missed 7 procs 1 times
GOMAXPROCS = 16 : missed 8 procs 1 times
A missed proc won't lead to underutilization because we check all procs
again after dropping P. But it can lead to an unpleasant situation
when we miss a proc, drop P, check all procs, discover work, acquire P,
miss the proc again, repeat.
Improve stealing logic to cover all procs.
Also don't enter spinning mode and try to steal when there is nobody around.
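To illustrate the covering idea (a sketch only, not the runtime's
actual code): start at a random proc and step by a stride coprime to
the proc count, which walks a full cycle and visits every proc once.

// gcd is used to pick a stride coprime to n.
func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

// stealOrder returns a permutation of [0,n) built from a random start
// pos and stride inc; a stride coprime to n visits every index once.
func stealOrder(n, pos, inc int) []int {
	pos, inc = pos%n, inc%n
	for gcd(n, inc) != 1 {
		inc = (inc + 1) % n // nudge forward until coprime
	}
	order := make([]int, 0, n)
	for i := 0; i < n; i++ {
		order = append(order, pos)
		pos = (pos + inc) % n
	}
	return order
}

For example, stealOrder(8, 3, 5) yields [3 0 5 2 7 4 1 6]: all eight
procs, no repeats.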
Change-Id: Ibb6b122cc7fb836991bad7d0639b77c807aab4c2
Reviewed-on: https://go-review.googlesource.com/20836
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Marvin Stenger <marvin.stenger94@gmail.com>
Create a byte encoding designed for static Go names.
It is intended to be a compact representation of a name
and optional tag data that can be turned into a Go string
without allocating, and it describes whether or not the name is
exported without consulting the unicode tables.
The encoding is described in reflect/type.go:
// The first byte is a bit field containing:
//
// 1<<0 the name is exported
// 1<<1 tag data follows the name
// 1<<2 pkgPath *string follows the name and tag
//
// The next two bytes are the data length:
//
// l := uint16(data[1])<<8 | uint16(data[2])
//
// Bytes [3:3+l] are the string data.
//
// If tag data follows then bytes 3+l and 3+l+1 are the tag length,
// with the data following.
//
// If the import path follows, then ptrSize bytes at the end of
// the data form a *string. The import path is only set for concrete
// methods that are defined in a different package than their type.
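A minimal decoder for this layout (a sketch; the real accessors live
in reflect and runtime and avoid the allocation that string() does
here):

// nameInfo unpacks the header of an encoded name per the comment
// above; for illustration only.
func nameInfo(data []byte) (name string, exported, hasTag bool) {
	exported = data[0]&(1<<0) != 0
	hasTag = data[0]&(1<<1) != 0
	l := int(uint16(data[1])<<8 | uint16(data[2]))
	return string(data[3 : 3+l]), exported, hasTag
}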
Shrinks binary sizes:
cmd/go: 164KB (1.6%)
jujud: 1.0MB (1.5%)
For #6853.
Change-Id: I46b6591015b17936a443c9efb5009de8dfe8b609
Reviewed-on: https://go-review.googlesource.com/20968
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
The current runtime attempts to forward signals generated by non-Go
code to the original signal handler. If it can't call the original
handler directly, it currently attempts to re-raise the signal after
resetting the handler. In this case, the original context is lost.
This fix prevents that problem by simply returning from the Go signal
handler after resetting the original handler. It only does this when
the original handler is the system default handler, which in all cases
is known to not recover. The signal is not reset, so it is retriggered
and the original handler takes over with the proper context.
Fixes #14899
Change-Id: Ib1c19dfa4b50d9732d7a453de3784c8141e1cbb3
Reviewed-on: https://go-review.googlesource.com/21006
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Fixes #14938.
Additionally some simplifications along the way.
Change-Id: I2c5fb7e32dcc6fab68fff36a49cb72e715756abe
Reviewed-on: https://go-review.googlesource.com/21046
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
For darwin/arm{,64} a non-Go thread is created to convert
EXC_BAD_ACCESS to panics. However, the Go signal handler refuses to
handle signals that would otherwise be ignored if they arrive on
non-Go threads.
Block all (POSIX) signals for that thread, making sure that
no unexpected signals arrive on it. At least one test, TestStop in
os/signal, depends on signals not arriving on any non-Go threads.
For #14318
Change-Id: I901467fb53bdadb0d03b0f1a537116c7f4754423
Reviewed-on: https://go-review.googlesource.com/21047
Reviewed-by: David Crawshaw <crawshaw@golang.org>
The existing implementations of Equal and similar
functions in the bytes package operate on one byte at
a time. This performs poorly on ppc64/ppc64le, especially
when the byte buffers are large. This change improves
those functions by loading and comparing double words where
possible. The common code has been moved to a function
that can be shared by the other functions in this
file which perform the same type of comparison.
Further optimizations are done for the case where
>= 32 bytes are being compared. The new function
memeqbody is used by memeq_varlen, Equal, and eqstring.
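The new code is ppc64 assembly, but its core idea can be sketched
portably (ignoring the alignment handling the assembly must do):

import "encoding/binary"

// equal compares eight bytes (one doubleword) per iteration where
// possible, falling back to single bytes for the tail.
func equal(a, b []byte) bool {
	if len(a) != len(b) {
		return false
	}
	i := 0
	for ; i+8 <= len(a); i += 8 {
		if binary.LittleEndian.Uint64(a[i:]) != binary.LittleEndian.Uint64(b[i:]) {
			return false
		}
	}
	for ; i < len(a); i++ {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}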
When running the bytes test with -test.bench=Equal
benchmark old MB/s new MB/s speedup
BenchmarkEqual1 164.83 129.49 0.79x
BenchmarkEqual6 563.51 445.47 0.79x
BenchmarkEqual9 656.15 1099.00 1.67x
BenchmarkEqual15 591.93 1024.30 1.73x
BenchmarkEqual16 613.25 1914.12 3.12x
BenchmarkEqual20 682.37 1687.04 2.47x
BenchmarkEqual32 807.96 3843.29 4.76x
BenchmarkEqual4K 1076.25 23280.51 21.63x
BenchmarkEqual4M 1079.30 13120.14 12.16x
BenchmarkEqual64M 1073.28 10876.92 10.13x
It was determined that the degradation in the smaller byte tests
was due to unfavorable code alignment of the single-byte loop.
Fixes #14368
Change-Id: I0dd87382c28887c70f4fbe80877a8ba03c31d7cd
Reviewed-on: https://go-review.googlesource.com/20249
Reviewed-by: Minux Ma <minux@golang.org>
MOVSB is quite a bit faster for unaligned moves.
Possibly we should use MOVSB all of the time, but Intel folks
say it might be a bit faster to use MOVSQ on some processors
(but not any I have access to at the moment).
benchmark old ns/op new ns/op delta
BenchmarkMemmove4096-8 93.9 93.2 -0.75%
BenchmarkMemmoveUnalignedDst4096-8 256 151 -41.02%
BenchmarkMemmoveUnalignedSrc4096-8 175 90.5 -48.29%
Fixes #14630
Change-Id: I568e6d6590eb3615e6a699fb474020596be665ff
Reviewed-on: https://go-review.googlesource.com/20293
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Missed this in review of CL 20812.
Change-Id: I01e220499dcd58e1a7205e2a577dd9630a8b7174
Reviewed-on: https://go-review.googlesource.com/20819
Reviewed-by: Keith Randall <khr@golang.org>
We're about to add another root marking job that needs to happen only
during the first markroot pass (whether that's concurrent or STW),
just like finalizer scanning. Rather than introducing another flag
that has the same value as finalizersDone, just rename finalizersDone
to markrootDone.
Change-Id: I535356c6ea1f3734cb5b6add264cb7bf48de95e8
Reviewed-on: https://go-review.googlesource.com/20043
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently shrinkstack is only safe during STW because it adjusts
channel-related stack pointers and moves send/receive stack slots
without synchronizing with the channel code. Make it safe to use when
the world isn't stopped by:
1) Locking all channels the G is blocked on while adjusting the sudogs
and copying the area of the stack that may contain send/receive
slots.
2) For any stack frames that may contain send/receive slots, using an
atomic CAS to adjust pointers, which prevents races between adjusting a
pointer in a receive slot and a concurrent send writing to that
receive slot (see the sketch after this list).
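A schematic of the CAS in step 2 (a hypothetical helper, not the
runtime's code; it uses runtime-style uintptr arithmetic):

import (
	"sync/atomic"
	"unsafe"
)

// adjustSlot moves a pointer targeting the old stack span [lo, hi)
// by delta to its new-stack address. If a concurrent send stores a
// different value first, the CAS fails and the send's value stands.
func adjustSlot(slot *unsafe.Pointer, lo, hi, delta uintptr) {
	for {
		p := atomic.LoadPointer(slot)
		u := uintptr(p)
		if u < lo || u >= hi {
			return // no longer points into the old stack
		}
		if atomic.CompareAndSwapPointer(slot, p, unsafe.Pointer(u+delta)) {
			return
		}
	}
}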
In principle, the synchronization could be finer-grained. For example,
we considered synchronizing around the sudogs, which would allow
channel operations involving other Gs to continue if the G being
shrunk was far enough down the send/receive queue. However, using the
channel lock means no additional locks are necessary in the channel
code. Furthermore, the stack shrinking code holds the channel lock for
a very short time (much less than the time required to shrink the
stack).
This does not yet make stack shrinking concurrent; it merely makes
doing so safe.
This has negligible effect on the go1 and garbage benchmarks.
For #12967.
Change-Id: Ia49df3a8a7be4b36e365aac4155a2416b94b988c
Reviewed-on: https://go-review.googlesource.com/20042
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently, locking a G's stack by setting its status to _Gcopystack or
_Gscan is unordered with respect to channel locks. However, when we
make stack shrinking concurrent, stack shrinking will need to lock the
G and then acquire channel locks, which imposes an order on these.
Document this lock ordering and fix closechan to respect it.
Everything else already happens to respect it.
For #12967.
Change-Id: I4dd02675efffb3e7daa5285cf75bf24f987d90d4
Reviewed-on: https://go-review.googlesource.com/20041
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently sudog.elem is never accessed concurrently, so in several
cases we drop the channel lock just before reading/writing the
sent/received value from/to sudog.elem. However, concurrent stack
shrinking is going to have to adjust sudog.elem to point to the new
stack, which means it needs a way to synchronize with accesses to
sudog.elem. Hence, add sudog.elem to the fields protected by
hchan.lock and scoot the unlocks down past the uses of sudog.elem.
While we're here, better document the channel synchronization rules.
For #12967.
Change-Id: I3ad0ca71f0a74b0716c261aef21b2f7f13f74917
Reviewed-on: https://go-review.googlesource.com/20040
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
With concurrent stack shrinking, the stack can move the instant after
a G enters _Gwaiting. There are only two places that put a G into
_Gwaiting: gopark and newstack. We fixed uses of gopark. This commit
fixes newstack by simplifying its G transitions and, in particular,
eliminating or narrowing the transient _Gwaiting states it passes
through so it's clear nothing in the G is accessed while in _Gwaiting.
For #12967.
Change-Id: I2440ead411d2bc61beb1e2ab020ebe3cb3481af9
Reviewed-on: https://go-review.googlesource.com/20039
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
gopark calls the unlock function after setting the G to _Gwaiting.
This means it's generally unsafe to access the G's stack from the
unlock function because the G may start running on another P. Once we
start shrinking stacks concurrently, a stack shrink could also move
the stack the moment after it enters _Gwaiting and before the unlock
function is called.
Document this restriction and fix the two places where we currently
violate it.
This is unlikely to be a problem in practice for these two places
right now, but they're already skating on thin ice. For example, the
following sequence could in principle cause corruption, deadlock, or a
panic in the select code:
On M1/P1:
1. G1 selects on channels A and B.
2. selectgoImpl calls gopark.
3. gopark puts G1 in _Gwaiting.
4. gopark calls selparkcommit.
5. selparkcommit releases the lock on channel A.
On M2/P2:
6. G2 sends to channel A.
7. The send puts G1 in _Grunnable and puts it on P2's run queue.
8. The scheduler runs, selects G1, puts it in _Grunning, and resumes G1.
9. On G1, the sellock immediately following the gopark gets called.
10. sellock grows and moves the stack.
On M1/P1:
11. selparkcommit continues to scan the lock order for the next
channel to unlock, but it's now reading from a freed (and possibly
reused) stack.
This shouldn't happen in practice because step 10 isn't the first call
to sellock, so the stack should already be big enough. However, once
we start shrinking stacks concurrently, this reasoning won't work any
more.
For #12967.
Change-Id: I3660c5be37e5be9f87433cb8141bdfdf37fadc4c
Reviewed-on: https://go-review.googlesource.com/20038
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the g.waiting list created by a select is in poll order.
However, nothing depends on this, and we're going to need access to
the channel lock order in other places shortly, so modify select to
put the waiting list in channel lock order.
For #12967.
Change-Id: If0d38816216ecbb37a36624d9b25dd96e0a775ec
Reviewed-on: https://go-review.googlesource.com/20037
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently the select lock order is a []*hchan. We're going to need to
refer to things other than the channel itself in lock order shortly,
so switch this to a []uint16 of indexes into the select cases. This
parallels the existing representation for the poll order.
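A toy version of the representation change (hypothetical types; the
runtime itself cannot use the sort package and sorts by hand):

package main

import (
	"fmt"
	"sort"
)

// scase stands in for the runtime's select case; chanID stands in
// for the channel's address.
type scase struct {
	chanID uintptr
}

// lockOrder returns case indexes sorted by channel "address", the
// same []uint16 shape as the poll order.
func lockOrder(cases []scase) []uint16 {
	order := make([]uint16, len(cases))
	for i := range order {
		order[i] = uint16(i)
	}
	sort.Slice(order, func(i, j int) bool {
		return cases[order[i]].chanID < cases[order[j]].chanID
	})
	return order
}

func main() {
	fmt.Println(lockOrder([]scase{{30}, {10}, {20}})) // [1 2 0]
}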
Change-Id: I89262223fe20b4ddf5321592655ba9eac489cda1
Reviewed-on: https://go-review.googlesource.com/20036
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Given a G, there's currently no way to find the channel it's blocking
on. We'll need this information to fix a (probably theoretical) bug in
select and to implement concurrent stack shrinking, so record the
channel in the sudog.
For #12967.
Change-Id: If8fb63a140f1d07175818824d08c0ebeec2bdf66
Reviewed-on: https://go-review.googlesource.com/20035
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
gcMarkRootCheck is too expensive to do during mark termination.
However, since it's a useful check and it complements checkmark mode
nicely, enable it during mark termination if checkmark is enabled.
Change-Id: Icd9039e85e6e9d22747454441b50f1cdd1412202
Reviewed-on: https://go-review.googlesource.com/20663
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change Cond implementation to use a notification list such that waiters
can first register for a notification, release the lock, then actually
wait. Signalers never have to park anymore.
This is intended to address an issue in the previous implementation
where Broadcast could fail to signal all waiters.
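The register-then-wait shape can be sketched outside the runtime
(names hypothetical; only Broadcast is shown, and the real code uses
a runtime notifyList rather than channels):

package main

import (
	"fmt"
	"sync"
)

// notifyList: waiters take a ticket while the Cond's locker is still
// held, release the locker, then block on the ticket. Broadcast wakes
// every ticket issued so far, so a registered waiter is never missed.
type notifyList struct {
	mu    sync.Mutex
	next  uint64                   // ticket for the next waiter
	done  uint64                   // tickets below this are signaled
	waits map[uint64]chan struct{} // blocked tickets
}

func newNotifyList() *notifyList {
	return &notifyList{waits: make(map[uint64]chan struct{})}
}

// register is called with the Cond's locker held.
func (l *notifyList) register() uint64 {
	l.mu.Lock()
	defer l.mu.Unlock()
	t := l.next
	l.next++
	return t
}

// wait blocks until ticket t is signaled; called after the Cond's
// locker has been released.
func (l *notifyList) wait(t uint64) {
	l.mu.Lock()
	if t < l.done { // a broadcast already covered this ticket
		l.mu.Unlock()
		return
	}
	ch := make(chan struct{})
	l.waits[t] = ch
	l.mu.Unlock()
	<-ch
}

// broadcast signals every ticket issued so far; it never blocks.
func (l *notifyList) broadcast() {
	l.mu.Lock()
	l.done = l.next
	for t, ch := range l.waits {
		close(ch)
		delete(l.waits, t)
	}
	l.mu.Unlock()
}

func main() {
	l := newNotifyList()
	t := l.register() // under the Cond's locker
	ok := make(chan struct{})
	go func() { l.wait(t); close(ok) }()
	l.broadcast()
	<-ok
	fmt.Println("waiter woken")
}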
Results of the existing benchmark are below.
Original New Diff
BenchmarkCond1-48 2000000 745 ns/op 755 +1.3%
BenchmarkCond2-48 1000000 1545 ns/op 1532 -0.8%
BenchmarkCond4-48 300000 3833 ns/op 3896 +1.6%
BenchmarkCond8-48 200000 10049 ns/op 10257 +2.1%
BenchmarkCond16-48 100000 21123 ns/op 21236 +0.5%
BenchmarkCond32-48 30000 40393 ns/op 41097 +1.7%
Fixes #14064
Change-Id: I083466d61593a791a034df61f5305adfb8f1c7f9
Reviewed-on: https://go-review.googlesource.com/18892
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Caleb Spare <cespare@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The type information for a method includes two variants: a func
without the receiver, and a func with the receiver as the first
parameter. The former is used as part of the dynamic interface
checks, but the latter is only returned as a type in the
reflect.Method struct.
Instead of computing it at compile time, construct it at run time
with reflect.FuncOf.
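A sketch of that construction using the public reflect API (the real
code works on *rtype internally): prepend the receiver to the
receiver-less method type's inputs.

import "reflect"

// methodFuncType builds the "receiver first" func type from the
// receiver type and the receiver-less method type m.
func methodFuncType(recv, m reflect.Type) reflect.Type {
	in := []reflect.Type{recv}
	for i := 0; i < m.NumIn(); i++ {
		in = append(in, m.In(i))
	}
	out := make([]reflect.Type, 0, m.NumOut())
	for i := 0; i < m.NumOut(); i++ {
		out = append(out, m.Out(i))
	}
	return reflect.FuncOf(in, out, m.IsVariadic())
}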
Using cl/20701 as a baseline,
cmd/go: -480KB (4.4%)
jujud: -5.6MB (7.8%)
For #6853.
Change-Id: I1b8c73f3ab894735f53d00cb9c0b506d84d54e92
Reviewed-on: https://go-review.googlesource.com/20709
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
CL 14603 attempted to preserve the callee-save registers for
the darwin/arm runtime initialization routine, but I believe it
wasn't sufficient and resulted in the crash reported in issue
#14778. Saving and restoring the registers on the stack the same way
linux/arm does seems more obvious and fixes #14778, so do that.
Even though #14778 is not reproducible on darwin/arm64, I applied
a similar change there, and to linux/arm64 which obeys the same
calling convention.
Finally, this CL is a candidate for a 1.6 minor release for the same
reason CL 14603 was in a 1.5 minor release (as CL 16968). It is
small, it only touches the iOS platforms, and gomobile on darwin/arm
is currently useless without it.
Fixes #14778
Fixes #12590 (again)
Change-Id: I7401daf0bbd7c579a7e84761384a7b763651752a
Reviewed-on: https://go-review.googlesource.com/20621
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Elias Naur <elias.naur@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In particular, write down the rules for stack ownership because the
details of this are about to get very important with concurrent stack
shrinking. (Interestingly, the details don't actually change, but
anything that's currently skating on thin ice is likely to fall
through.)
For #12967.
Change-Id: I561e2610e864295e9faba07717a934aabefcaab9
Reviewed-on: https://go-review.googlesource.com/20034
Reviewed-by: Rick Hudson <rlh@golang.org>
Currently copystack adjusts pointers in the old stack and then copies
the adjusted stack to the new stack. In addition to being generally
confusing, this is going to make concurrent stack shrinking harder.
Switch this around so that we first copy the stack and then adjust
pointers on the new stack (never writing to the old stack).
This reprises CL 15996, but takes a different and simpler approach. CL
15996 still walked the old stack while adjusting pointers on the new
stack. In this CL, we adjust auxiliary structures before walking the
stack, so we can just walk the new stack.
For #12967.
Change-Id: I94fa86f823ba9ee478e73b2ba509eed3361c43df
Reviewed-on: https://go-review.googlesource.com/20033
Reviewed-by: Rick Hudson <rlh@golang.org>
In particular, document that *sel is on the stack no matter what.
Change-Id: I1c264215e026c27721b13eedae73ec845066cdec
Reviewed-on: https://go-review.googlesource.com/20032
Reviewed-by: Rick Hudson <rlh@golang.org>
Compute the maximum number of allowed elements per slice only once.
Special-case the newcap computation for slices with byte-sized elements.
name old time/op new time/op delta
GrowSliceBytes-2 61.1ns ± 1% 43.4ns ± 1% -29.00% (p=0.000 n=20+20)
GrowSliceInts-2 85.9ns ± 1% 75.7ns ± 1% -11.80% (p=0.000 n=20+20)
Change-Id: I5d9c0d5987cdd108ac29dc32e31912dcefa2324d
Reviewed-on: https://go-review.googlesource.com/20653
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Move functions testSchedLocalQueueLocal and testSchedLocalQueueSteal
from proc.go to export_test.go, the only place where they are used.
Fixes #14796
Change-Id: I16b6fa4a13835eab33f66a2c2e87a5f5c79b7bd3
Reviewed-on: https://go-review.googlesource.com/20640
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
In addition to reflect.Value.Call, exported methods can be invoked
by the Func value in the reflect.Method struct. This CL has the
compiler track what functions get access to a legitimate reflect.Method
struct by looking for interface calls to either of:
Method(int) reflect.Method
MethodByName(string) (reflect.Method, bool)
This is a little overly conservative. If a user implements a type
with one of these methods without using the underlying calls on
reflect.Type, the linker will assume the worst and include all
exported methods. But it's cheap.
No change to any of the binary sizes reported in cl/20483.
For #14740
Change-Id: Ie17786395d0453ce0384d8b240ecb043b7726137
Reviewed-on: https://go-review.googlesource.com/20489
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Still fails about 20% of the time on my laptop.
Fixes #14766.
Change-Id: I169ab728c6022dceeb91188f5ad466ed6413c062
Reviewed-on: https://go-review.googlesource.com/20590
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
On Plan 9, there's no "kill all threads" system call, so exit is done
by sending a "go: exit" note to each OS process. If concurrent GC
occurs during this loop, deadlock sometimes results. Prevent this by
incrementing m.locks before sending notes.
Change-Id: I31aa15134ff6e42d9a82f9f8a308620b3ad1b1b1
Reviewed-on: https://go-review.googlesource.com/20477
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Alternative to golang.org/cl/19852. This memory layout doesn't have
an easy type representation, but it is noticeably smaller than the
current funcType, and saves significant extra space.
Some notes on the layout are in reflect/type.go:
// A *rtype for each in and out parameter is stored in an array that
// directly follows the funcType (and possibly its uncommonType). So
// a function type with one method, one input, and one output is:
//
// struct {
// funcType
// uncommonType
// [2]*rtype // [0] is in, [1] is out
// uncommonTypeSliceContents
// }
There are three arbitrary limits introduced by this CL:
1. No more than 65535 function input parameters.
2. No more than 32767 function output parameters.
3. reflect.FuncOf is limited to 128 parameters.
I don't think these will be limits in practice, but they are worth noting.
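The first two limits suggest a header packing along these lines
(field names are assumptions, not the actual reflect source):

// funcHeader sketches the count fields implied by limits 1 and 2.
type funcHeader struct {
	inCount  uint16 // up to 65535 inputs
	outCount uint16 // top bit flags variadic, leaving 15 bits: 32767 outputs
}

func (h funcHeader) numIn() int     { return int(h.inCount) }
func (h funcHeader) numOut() int    { return int(h.outCount &^ (1 << 15)) }
func (h funcHeader) variadic() bool { return h.outCount&(1<<15) != 0 }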
Reduces godoc binary size by 2.4%, 330KB.
For #6853.
Change-Id: I225c0a0516ebdbe92d41dfdf43f716da42dfe347
Reviewed-on: https://go-review.googlesource.com/19916
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>