and methodValueCall directly. Instead, we inline their behavior
inside of reflect.call.
This change is required because otherwise we have a situation where
reflect.callXX calls makeFuncStub, neither of which knows the
layout of the args passed between them. That's bad for
precise GC and stack copying.
Fixes #6619.
R=golang-dev, dvyukov, rsc, iant, khr
CC=golang-dev
https://golang.org/cl/26970044
This change is part of the plan to get rid of all vararg C calls
which are a pain for getting exact stack scanning.
We allocate a chunk of zero memory to return a pointer to when a
map access doesn't find the key. This is simpler than returning nil
and fixing things up in the caller. Linker magic allocates a single
zero memory area that is shared by all (non-reflect-generated) map
types.
Passing things by reference gets rid of some copies, so it speeds
up code with big keys/values.
benchmark old ns/op new ns/op delta
BenchmarkBigKeyMap 34 31 -8.48%
BenchmarkBigValMap 37 30 -18.62%
BenchmarkSmallKeyMap 26 23 -11.28%
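As a rough sketch of the zero-memory idea (identifiers like
zeroVal, maxZero, and lookup are illustrative, not the runtime's
actual code):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // maxZero is the size of a shared all-zero area; a miss on any
    // map type can return a pointer into it instead of nil.
    const maxZero = 1024

    var zeroVal [maxZero]byte

    // lookup sketches the by-reference style: return a pointer to
    // the value, or a pointer into shared zero memory when the key
    // is absent, so callers never need a nil check before copying
    // the value out.
    func lookup(m map[string]*[16]byte, k string) *[16]byte {
        if p, ok := m[k]; ok {
            return p
        }
        return (*[16]byte)(unsafe.Pointer(&zeroVal))
    }

    func main() {
        v := [16]byte{1}
        m := map[string]*[16]byte{"a": &v}
        fmt.Println(lookup(m, "a")[0]) // 1
        fmt.Println(lookup(m, "b")[0]) // 0, from the shared zero area
    }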
R=golang-dev, dvyukov, khr, rsc
CC=golang-dev
https://golang.org/cl/14794043
The argument slice was kept hidden from the garbage collector
by destroying its referent in an unsafe.Pointer to uintptr
conversion. This change preserves the unsafe.Pointer referent
and performs unsafe.Pointer-to-uintptr conversions only within
expressions that construct new unsafe.Pointer values.
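A minimal sketch of the resulting pattern (the index helper is
hypothetical, not reflect's code):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // index returns a pointer to s[i] via pointer arithmetic. The
    // unsafe.Pointer held in base keeps the backing array visible
    // to the garbage collector; the uintptr conversion happens only
    // inside the expression constructing the new unsafe.Pointer.
    func index(s []int, i int) *int {
        base := unsafe.Pointer(&s[0])
        return (*int)(unsafe.Pointer(uintptr(base) + uintptr(i)*unsafe.Sizeof(s[0])))
    }

    func main() {
        s := []int{10, 20, 30}
        fmt.Println(*index(s, 2)) // 30

        // By contrast, storing only uintptr(unsafe.Pointer(&s[0]))
        // would hide the referent from the collector entirely.
    }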
R=golang-dev, khr, rsc
CC=golang-dev
https://golang.org/cl/14008043
Bug #1:
Issue 5406 identified an interesting case:
defer iface.M()
may end up calling a wrapper that copies an indirect receiver
from the iface value and then calls the real M method. That's
two calls down, not just one, and so recover() == nil always
in the real M method, even during a panic.
[For the purposes of this entire discussion, a wrapper's
implementation is a function containing an ordinary call, not
the optimized tail call form that is sometimes possible. The
tail call does not create a second frame, so it is already
handled correctly.]
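A self-contained sketch of the situation (the recover failure
shown is the pre-fix behavior; with this change the recover
succeeds):

    package main

    import "fmt"

    type T struct{}

    // M has a value receiver, so calling it through an interface
    // holding *T goes via a compiler-generated wrapper that copies
    // the receiver and then calls the real M.
    func (T) M() {
        // Before this fix, recover returned nil here even during a
        // panic, because the real M sat two calls below the defer.
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }

    type I interface{ M() }

    func main() {
        var i I = &T{}
        defer i.M()
        panic("boom")
    }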
Fix this bug by introducing g->panicwrap, which counts the
number of bytes on the current stack segment that are due to
wrapper calls that should not count against the recover
check. All wrapper functions must now adjust g->panicwrap up
on entry and back down on exit. This adds slightly to their
expense; on the x86 it is a single instruction at entry and
exit; on the ARM it is three. However, the alternative is to
make a call to recover depend on being able to walk the stack,
which I very much want to avoid. We have enough problems
walking the stack for garbage collection and profiling.
Also, if performance is critical in a specific case, it is already
faster to use a pointer receiver and avoid this kind of wrapper
entirely.
Bug #2:
The old code, which did not consider the possibility of two
calls, already contained a check to see if the call had split
its stack and so the panic-created segment was one behind the
current segment. In the wrapper case, both of the two calls
might split their stacks, so the panic-created segment can be
two behind the current segment.
Fix this by propagating the Stktop.panic flag forward during
stack splits instead of looking backward during recover.
Fixes #5406.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13367052
pointer. An example that triggers the bad behavior on a 64-bit
machine: http://play.golang.org/p/GrNFakAYLN
rv1 := reflect.ValueOf(complex128(0))
rt := rv1.Type()
rv2 := rv1.Convert(rt)
rv3 := reflect.New(rt).Elem()
rv3.Set(rv2)
Running the code fails with the following:
panic: reflect: internal error: storeIword of 16-byte value
I've tested on a 64-bit machine and verified this fixes the panic. I
haven't tested on a 32-bit machine so I haven't verified the other
cases, but they follow logically.
R=golang-dev, r, iant
CC=golang-dev
https://golang.org/cl/12805045
See issue 4949 for a full explanation.
Allocs go from 1 to zero in the non-addressable case.
Fixes #4949.
benchmark old ns/op new ns/op delta
BenchmarkInterfaceBig 90 14 -84.01%
BenchmarkInterfaceSmall 14 14 +0.00%
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/12646043
It changes an exported API and breaks the build.
««« original CL description
reflect: use unsafe.Pointer in StringHeader and SliceHeader
Relates to issue 5193.
R=r
CC=golang-dev
https://golang.org/cl/8363045
»»»
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/8357051
The new garbage collector (CL 6114046) may find the fake *[]byte value
and interpret its contents as bytes rather than as potential pointers.
This may lead the garbage collector to free memory blocks that
shouldn't be freed.
R=dvyukov, rsc, dave, minux.ma, remyoudompheng, iant
CC=golang-dev
https://golang.org/cl/7000059
In order to add these, we need to be able to find references
to such types that already exist in the binary. To do that, introduce
a new linker section holding a list of the types corresponding to
arrays, chans, maps, and slices.
To offset the storage cost of this list, and to simplify the code,
remove the interface{} header from the representation of a
runtime type. It was used in early versions of the code but was
made obsolete by the kind field: a switch on kind is more efficient
than a type switch.
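For illustration, dispatching on the kind is a plain integer
switch, as the exported API shows (the describe helper is
hypothetical):

    package main

    import (
        "fmt"
        "reflect"
    )

    // describe switches on the stored kind: one integer comparison
    // per case, with no interface-header inspection involved.
    func describe(t reflect.Type) string {
        switch t.Kind() {
        case reflect.Array:
            return "array of " + t.Elem().String()
        case reflect.Map:
            return "map with key " + t.Key().String()
        case reflect.Slice:
            return "slice of " + t.Elem().String()
        default:
            return t.Kind().String()
        }
    }

    func main() {
        fmt.Println(describe(reflect.TypeOf([4]byte{})))        // array of uint8
        fmt.Println(describe(reflect.TypeOf(map[string]int{}))) // map with key string
    }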
In the godoc binary, removing the interface{} header cuts two
words from each of about 10,000 types. Adding back the list of pointers
to array, chan, map, and slice types reintroduces one word for
each of about 500 types. On a 64-bit machine, then, this CL *removes*
a net 156 kB of read-only data from the binary.
This CL does not include the needed support for precise garbage
collection. I have created issue 4375 to track that.
This CL also does not correctly set the 'algorithm', specifically
the equality and copy functions, for a new array, so I have
unexported ArrayOf for now. That is also part of issue 4375.
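A short usage sketch of the exported constructors this CL adds
(ArrayOf omitted, since it is unexported for now):

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        elem := reflect.TypeOf(0)

        fmt.Println(reflect.SliceOf(elem))                   // []int
        fmt.Println(reflect.MapOf(reflect.TypeOf(""), elem)) // map[string]int
        fmt.Println(reflect.ChanOf(reflect.BothDir, elem))   // chan int

        // The results are first-class types: values can be made
        // from them directly.
        s := reflect.MakeSlice(reflect.SliceOf(elem), 0, 4)
        s = reflect.Append(s, reflect.ValueOf(42))
        fmt.Println(s.Interface()) // [42]
    }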
Fixes #2339.
R=r, remyoudompheng, mirtchovski, iant
CC=golang-dev
https://golang.org/cl/6572043
This CL makes the runtime understand that the type of
the len or cap of a map, slice, or string is 'int', not 'int32',
and it is also careful to distinguish between function arguments
and results of type 'int' vs type 'int32'.
In the runtime, the new typedefs 'intgo' and 'uintgo' refer
to Go int and uint. The C types int and uint continue to be
unavailable (using them causes intentional compile errors).
This CL does not change the meaning of int, but it should make
the eventual change of the meaning of int on amd64 a bit
smoother.
Update #2188.
R=iant, r, dave, remyoudompheng
CC=golang-dev
https://golang.org/cl/6551067
unsafe: delete Typeof, Reflect, Unreflect, New, NewArray
Part of issue 2955 and issue 2968.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/5650069
Making Value opaque means we can drop the interface kludges
in favor of a significantly simpler and faster representation.
v.Kind() will be a prime candidate for inlining too.
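A rough sketch of the new shape (field names and flag layout here
are illustrative, not the exact representation):

    package value

    import "unsafe"

    // Kind mirrors reflect.Kind for this sketch.
    type Kind uint

    // flagKindMask: the low flag bits hold the Kind.
    const flagKindMask = 0x1f

    // rtype stands in for the runtime's type descriptor.
    type rtype struct{}

    // Value drops the interface kludges: just a type pointer, a
    // data word, and metadata bits, with no interface header.
    type Value struct {
        typ  *rtype
        val  unsafe.Pointer
        flag uintptr
    }

    // Kind reduces to a single mask, making it a prime candidate
    // for inlining.
    func (v Value) Kind() Kind { return Kind(v.flag & flagKindMask) }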
On a Thinkpad X201s using -benchtime 10:
benchmark old ns/op new ns/op delta
json.BenchmarkCodeEncoder 284391780 157415960 -44.65%
json.BenchmarkCodeMarshal 286979140 158992020 -44.60%
json.BenchmarkCodeDecoder 717175800 388288220 -45.86%
json.BenchmarkCodeUnmarshal 734470500 404548520 -44.92%
json.BenchmarkCodeUnmarshalReuse 707172280 385258720 -45.52%
json.BenchmarkSkipValue 24630036 18557062 -24.66%
benchmark old MB/s new MB/s speedup
json.BenchmarkCodeEncoder 6.82 12.33 1.81x
json.BenchmarkCodeMarshal 6.76 12.20 1.80x
json.BenchmarkCodeDecoder 2.71 5.00 1.85x
json.BenchmarkCodeUnmarshal 2.64 4.80 1.82x
json.BenchmarkCodeUnmarshalReuse 2.74 5.04 1.84x
json.BenchmarkSkipValue 77.92 103.42 1.33x
I cannot explain why BenchmarkSkipValue gets faster.
Maybe it is one of those code alignment things.
R=iant, r, gri, r
CC=golang-dev
https://golang.org/cl/5373101
Revert the workaround in the compiler and
revert the test for that workaround.
Tested that the 386 build continues to fail if
the gc change is made without the reflect change.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/5312041
Had been allowing it for use by fmt, but it is too hard to lock down.
Fix other packages not to depend on it.
R=r, r
CC=golang-dev
https://golang.org/cl/5266054
-s now means *disable* escape analysis.
Fix escape leaks for struct/slice/map literals.
Add ... tracking.
Rewrite new(T) and slice literal into stack allocation when safe.
Add annotations to reflect.
Reflect is too chummy with the compiler,
so changes like these affect it more than they should.
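For example, the new(T) rewrite applies when the pointer provably
does not escape (hypothetical function; inspect the decision with
go build -gcflags=-m):

    package main

    import "fmt"

    // sum's pointer never leaves the function, so escape analysis
    // can rewrite new(int) into a stack allocation.
    func sum(a, b int) int {
        p := new(int) // does not escape
        *p = a + b
        return *p
    }

    func main() {
        fmt.Println(sum(2, 3)) // 5
    }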
R=lvd, dave, gustavo
CC=golang-dev
https://golang.org/cl/4954043
This allows code that wants to handle
[]byte separately to get at the actual slice
instead of just at individual bytes.
It seems to come up often enough.
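A sketch of the sort of code this enables, using
reflect.Value.Bytes (assuming that is the accessor in question):

    package main

    import (
        "fmt"
        "reflect"
    )

    // dump special-cases []byte: it grabs the actual slice in one
    // call instead of reading the bytes element by element.
    func dump(v interface{}) {
        rv := reflect.ValueOf(v)
        if rv.Kind() == reflect.Slice && rv.Type().Elem().Kind() == reflect.Uint8 {
            fmt.Printf("%q\n", rv.Bytes())
            return
        }
        fmt.Println(rv.Interface())
    }

    func main() {
        dump([]byte("hello")) // "hello"
        dump(42)              // 42
    }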
R=r
CC=golang-dev
https://golang.org/cl/4942051
This was initially pushed as part of CL 4876046; the problem was
found when logic in exp/template used the method on an Invalid
value.
R=rsc
CC=golang-dev
https://golang.org/cl/4890043
It's probably just an oversight that it doesn't work,
perhaps caused by analogy with Cap.
R=golang-dev, dsymonds
CC=golang-dev
https://golang.org/cl/4634125