Map iteration previously started from a random bucket, but walked each
bucket from the beginning. Now, iteration always starts from the first
bucket and walks each bucket starting at a random offset. For
performance, the random offset is selected at the start of iteration
and reused for each bucket.
Iteration over a map with 8 or fewer elements (a single bucket) will
now be non-deterministic. There will now be only 8 different possible
map iterations, one for each starting offset within the bucket.
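The visible effect, in ordinary Go (a quick demo, nothing
runtime-internal): successive range loops over even a tiny map may
visit keys in different orders within a single run.

    package main

    import "fmt"

    func main() {
            m := map[string]int{"a": 1, "b": 2, "c": 3}
            // Each range picks a fresh random offset within the single
            // bucket, so the printed orders can differ.
            for i := 0; i < 3; i++ {
                    for k := range m {
                            fmt.Print(k, " ")
                    }
                    fmt.Println()
            }
    }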
Significant benchmark changes, on my OS X laptop (rough but consistent):
benchmark                  old ns/op    new ns/op    delta
BenchmarkMapIter                 128          121    -5.47%
BenchmarkMapIterEmpty           4.26         4.45    +4.46%
BenchmarkNewEmptyMap             114          111    -2.63%
Fixes #6719.
R=khr, bradfitz
CC=golang-codereviews
https://golang.org/cl/47370043
Profiling of multithreaded applications works correctly on OpenBSD
5.4-current, so enable the profiling test.
R=golang-codereviews, minux.ma
CC=golang-codereviews
https://golang.org/cl/50940043
Update Go so that it continues to work past the OpenBSD system ABI
break, with 64-bit time_t:
http://www.openbsd.org/faq/current.html#20130813
Note: this makes OpenBSD 5.5 (currently 5.4-current) the minimum
supported release for Go.
Fixes #7049.
R=golang-codereviews, mikioh.mikioh
CC=golang-codereviews
https://golang.org/cl/13368046
The spans array is allocated in runtime·mallocinit. On a
32-bit system the number of entries in the spans array is
MaxArena32 / PageSize, which is (2U << 30) / (1 << 12) == (1 << 19).
So we are allocating an array that can hold 19 bits for an
index that can hold 20 bits. According to the comment in the
function, this is intentional: we only allocate enough spans
(and bitmaps) for a 2G arena, because allocating more would
probably be wasteful.
But since the span index is simply the upper 20 bits of the
memory address, this scheme only works if memory addresses are
limited to the low 2G of memory. That would be OK if we were
careful to enforce it, but we're not. What we are careful to
enforce, in functions like runtime·MHeap_SysAlloc, is that we
always return addresses between the heap's arena_start and
arena_start + MaxArena32.
We generally get away with it because we start allocating just
after the program end, so we only run into trouble with
programs that allocate a lot of memory, enough to get past
address 0x80000000.
This changes the code that computes a span index to subtract
arena_start on 32-bit systems just as we currently do on
64-bit systems.
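A minimal sketch of the index arithmetic (illustrative Go, not the
runtime's C code; the constant and variable names are stand-ins):

    const (
            PageShift  = 12                      // 4 kB pages
            MaxArena32 = 2 << 30                 // 2 GB 32-bit arena
            NumSpans   = MaxArena32 >> PageShift // 1 << 19 entries
    )

    // Old: index straight from the address; overflows the spans
    // array once an address crosses 0x80000000.
    func spanIndexOld(addr uintptr) uintptr {
            return addr >> PageShift
    }

    // New: index relative to arena_start, matching 64-bit systems;
    // stays below NumSpans for any address inside the arena.
    func spanIndexNew(addr, arenaStart uintptr) uintptr {
            return (addr - arenaStart) >> PageShift
    }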
R=golang-codereviews, rsc
CC=golang-codereviews
https://golang.org/cl/49460043
NPTL uses SIGRTMIN (signal 32) to effect thread cancellation.
Go's runtime replaces NPTL's signal handler with its own, and
the program ends up aborting if it uses a C library that
calls pthread_cancel.
This patch prevents runtime from replacing NPTL's handler.
Fixes #6997.
R=golang-codereviews, iant, dvyukov
CC=golang-codereviews
https://golang.org/cl/47540043
This prevents callers from using reflect to create a new
instance of errorCString with an arbitrary value and calling
the Error method to examine arbitrary memory.
Fixes #7084.
R=golang-codereviews, minux.ma, bradfitz
CC=golang-codereviews
https://golang.org/cl/49600043
This lets stack splits work correctly when running under gdb
when gdb has inserted a breakpoint somewhere on the call
stack.
Fixes #6834.
R=golang-codereviews, minux.ma
CC=golang-codereviews
https://golang.org/cl/48650043
Record finalizers and heap profile info in per-span special
records. This enables removing the special bit from the heap
bitmap, and also provides a generic mechanism for annotating
occasional heap objects.
finalizers:
          overhead       per obj
old       680 B          80 B avg
new       16 B/span      48 B

profile:
          overhead       per obj
old       32KB           24 B + hash tables
new       16 B/span      24 B
R=cshapiro, khr, dvyukov, gobot
CC=golang-codereviews
https://golang.org/cl/13314053
Some builders broke on this test; I'm guessing that was because
this test didn't try hard enough to find a different iteration order.
Update #6719
R=dave
CC=golang-codereviews
https://golang.org/cl/47300043
Technically the spec does not guarantee that the iteration order is random,
but it is a property that we have consciously pursued, and so it seems
right to verify that our implementation does indeed randomise.
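A sketch of the shape such a test can take (hypothetical helper
names, needs testing and reflect; the real test lives in the runtime
package's tests), retrying enough times that a deterministic order
would almost surely be caught:

    func TestMapIterationOrderRandomized(t *testing.T) {
            m := map[int]bool{0: true, 1: true, 2: true, 3: true}
            first := keys(m)
            // With only 8 possible orders, 100 tries virtually
            // guarantee seeing a second order if one exists.
            for i := 0; i < 100; i++ {
                    if !reflect.DeepEqual(first, keys(m)) {
                            return // saw two different orders
                    }
            }
            t.Fatal("map iteration order appears deterministic")
    }

    func keys(m map[int]bool) []int {
            var ks []int
            for k := range m {
                    ks = append(ks, k)
            }
            return ks
    }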
Update #6719.
R=khr, bradfitz
CC=golang-codereviews
https://golang.org/cl/47010043
Fixes #6952.
runtime.asminit was incorrectly loading runtime.goarm as a word, not a
uint8, which made it subject to alignment issues on ARMv5 platforms.
Alignment aside, this also meant that the top 3 bytes of R11 would have
been garbage, so the value could not be relied on when setting up the FPU.
R=iant, minux.ma
CC=golang-codereviews
https://golang.org/cl/46240043
This was done correctly for most targets but was missing from
FreeBSD/ARM and Linux/ARM.
R=golang-codereviews, dave
CC=golang-codereviews
https://golang.org/cl/45180043
sigprocmask use in a multithreaded environment is undefined, so replace it with pthread_sigmask.
Fixes #6811.
R=jsing, iant
CC=golang-codereviews, golang-dev
https://golang.org/cl/30460043
Needed for precise gc and copying stacks.
reflect.Value now takes 4 words instead of 3.
Still to do:
- un-iword-ify channel ops.
- un-iword-ify method receivers.
R=golang-dev, iant, rsc, khr
CC=golang-dev
https://golang.org/cl/43040043
The runtime tests are executed 4 times in all.bash
and there is currently a 5-second delay each time.
R=golang-dev, minux.ma, khr, bradfitz
CC=golang-dev
https://golang.org/cl/42450043
On the plus side, we don't need to change the bits when mallocing
pointerless objects. On the other hand, we need to mark objects in the
free lists during GC. But the free lists are small at GC time, so it
should be a net win.
benchmark                    old ns/op    new ns/op    delta
BenchmarkMalloc8                    40           33   -17.65%
BenchmarkMalloc16                   45           38   -15.72%
BenchmarkMallocTypeInfo8            58           59    +0.85%
BenchmarkMallocTypeInfo16           63           64    +1.10%
R=golang-dev, rsc, dvyukov
CC=cshapiro, golang-dev
https://golang.org/cl/41040043
Adds the Pool type and docs, and uses it in fmt.
This is a temporary implementation, until Dmitry
makes it fast.
Uses the API proposal from Russ in http://goo.gl/cCKeb2 but
adds an optional New field, as used in fmt and elsewhere.
Almost all callers want that.
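In use, the API looks like this (the fmt usage has this shape; the
buffer type here is just an example):

    import (
            "bytes"
            "sync"
    )

    var bufPool = sync.Pool{
            New: func() interface{} { return new(bytes.Buffer) },
    }

    func render(s string) string {
            b := bufPool.Get().(*bytes.Buffer)
            b.Reset() // a recycled buffer may hold old contents
            b.WriteString(s)
            out := b.String()
            bufPool.Put(b)
            return out
    }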
Update #4720
R=golang-dev, rsc, cshapiro, iant, r, dvyukov, khr
CC=golang-dev
https://golang.org/cl/41860043
Hash tables currently store an evacuated bit in the low bit
of the overflow pointer. That's probably not sustainable in the
long term as GC wants correctly typed & aligned pointers. It is
also a pain to move any of this code to Go in the current state.
This change moves the evacuated bit into the tophash entries.
Performance change is negligible.
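A sketch of the encoding (illustrative Go; the exact reserved values
in the runtime may differ): a few small tophash values become flags,
and real hashes are nudged above them.

    import "unsafe"

    const (
            empty      = 0 // slot is unused
            evacuatedX = 2 // entry moved to first half of grown table
            evacuatedY = 3 // entry moved to second half
            minTopHash = 4 // smallest tophash of an ordinary slot
    )

    // tophash extracts the cached hash byte, steering clear of the
    // reserved flag values above.
    func tophash(hash uintptr) uint8 {
            top := uint8(hash >> (8*unsafe.Sizeof(hash) - 8))
            if top < minTopHash {
                    top += minTopHash
            }
            return top
    }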
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/14412043
And document it explicitly, even though it already said
it wasn't guaranteed.
Fixes #6857
R=golang-dev, khr
CC=golang-dev
https://golang.org/cl/43580043
Fixes a regression introduced in revision 4cb93e2900d0.
That revision changed runtime·memmove to use SSE MOVOU
instructions for sizes between 17 and 256 bytes. We were
using memmove to save a copy of the note string during
the note handler. The Plan 9 kernel does not allow the
use of floating point in note handlers (which includes
MOVOU since it touches the XMM registers).
Arguably, runtime·memmove should not be using MOVOU when
GO386=387, but that wouldn't help us on amd64. It's very
important that we guard against any future changes, so we
use a simple copy loop instead.
This change is extracted from CL 9796043 (since that CL
is still being ironed out).
R=rsc
CC=golang-dev
https://golang.org/cl/34640045
The funcdata symbol incorrectly named the dead value map the
dead pointer map. The dead value map identifies all dead
values, including pointers and non-pointers, in a stack frame.
The purpose of this map is to allow the runtime to poison
locations of dead data to catch lost invariants.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/38670043
The new linker will disallow this on arm
(it is already disallowed on amd64 and 386)
in order to be able to lay out each function
separately.
The restriction is only for jumps into the middle
of a function; jumps to the beginning of a function
remain fine.
Prereq for linker cleanup (golang.org/s/go13linker).
R=iant, r, minux.ma
CC=golang-dev
https://golang.org/cl/35800043
When enabled, this new debugging mode will allocate objects on
their own page and never recycle memory addresses. This is an
essential tool for root-causing a broad class of heap corruption.
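In later releases this mode is exposed as the efence setting of
GODEBUG (the flag name follows the runtime documentation; whether
this CL used that exact spelling is an assumption), so it is enabled
per run rather than per build:

    GODEBUG=efence=1 ./prog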
R=golang-dev, dave, daniel.morsing, dvyukov, rsc, iant, cshapiro
CC=golang-dev
https://golang.org/cl/22060046
This change allows the garbage collector to examine only those
stack slots determined to be live and to contain a pointer
value. This results in a mean
reduction of 65% in the number of stack slots scanned during
an invocation of "GOGC=1 all.bash".
Unfortunately, this does not yet allow garbage collection to
be precise for the stack slots computed as live. Pointers
confound the determination of what definitions reach a given
instruction. In general, this problem is not solvable without
runtime cost but some advanced cooperation from the compiler
might mitigate common cases.
R=golang-dev, rsc, cshapiro
CC=golang-dev
https://golang.org/cl/14430048
Pass as a slice of strings instead. For 2-5 strings, implement
dedicated routines so no slices are needed.
static call counts in the go binary:
2 strings: 342 occurrences
3 strings: 98
4 strings: 30
5 strings: 13
6+ strings: 14
Why? C varargs, bad for stack scanning and copying.
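Assuming this is the string-concatenation path (the a+b+c expansion;
the static call counts above fit that reading), the new entry points
have roughly this shape. Illustrative Go bodies; the names follow the
concatstring pattern:

    // Fixed arity for 2-5 strings: the compiler calls these without
    // materializing a slice at the call site.
    func concatstring2(a0, a1 string) string {
            return concatstrings([]string{a0, a1})
    }

    // General case: one typed, scannable slice parameter instead of
    // C varargs.
    func concatstrings(a []string) string {
            n := 0
            for _, s := range a {
                    n += len(s)
            }
            b := make([]byte, 0, n)
            for _, s := range a {
                    b = append(b, s...)
            }
            return string(b)
    }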
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/36380043
Now that the map implementation is reading the keys and values from
arbitrary memory (instead of from stack slots), it needs to tell the
race detector when it does so.
Fixes #6875.
R=golang-dev, dave
CC=golang-dev
https://golang.org/cl/36360043
This change is part of the plan to get rid of all vararg C calls
which are a pain for getting exact stack scanning.
We allocate a chunk of zero memory to return a pointer to when a
map access doesn't find the key. This is simpler than returning nil
and fixing things up in the caller. Linker magic allocates a single
zero memory area that is shared by all (non-reflect-generated) map
types.
Passing things by reference gets rid of some copies, so it speeds
up code with big keys/values.
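A sketch of the miss path (illustrative Go with toy stand-ins; the
real version is C plus the linker-allocated shared zero area):

    import "unsafe"

    var zeroArea [1024]byte // shared zeroed block; size illustrative

    type hmap struct{} // toy stand-in for the real map header

    func lookup(h *hmap, key unsafe.Pointer) unsafe.Pointer {
            return nil // toy: pretend the key is absent
    }

    func mapaccessSketch(h *hmap, key unsafe.Pointer) unsafe.Pointer {
            if v := lookup(h, key); v != nil {
                    return v
            }
            // Miss: point at zero memory instead of returning nil, so
            // the caller copies a zero value with no special case.
            return unsafe.Pointer(&zeroArea[0])
    }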
benchmark               old ns/op    new ns/op    delta
BenchmarkBigKeyMap             34           31    -8.48%
BenchmarkBigValMap             37           30   -18.62%
BenchmarkSmallKeyMap           26           23   -11.28%
R=golang-dev, dvyukov, khr, rsc
CC=golang-dev
https://golang.org/cl/14794043
Two bugs:
1. The first iteration of the traceback always uses LR when provided,
which it is (only) during a profiling signal, but in fact LR is correct
only if the stack frame has not been allocated yet. Otherwise an
intervening call may have changed LR, and the saved copy in the stack
frame should be used. Fix in traceback_arm.c.
2. The division runtime call adds 8 bytes to the stack. In order to
keep the traceback routines happy, it must copy the saved LR into
the new 0(SP). Change
        SUB $8, SP
into
        MOVW 0(SP), R11 // r11 is temporary, for use by linker
        MOVW.W R11, -8(SP)
to update SP and 0(SP) atomically, so that the traceback always
sees a saved LR at 0(SP).
Fixes #6681.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/19910044
The CL causes misc/cgo/test to fail randomly.
I suspect that the problem is the use of a division instruction
in usleep, which can be called while trying to acquire an m
and therefore cannot store the denominator in m.
The solution to that would be to rewrite the code to use a
magic multiply instead of a divide, but now we're getting
pretty far off the original code.
Go back to the original in preparation for a different,
less efficient but simpler fix.
««« original CL description
cmd/5l, runtime: make ARM integer division profiler-friendly
The implementation of division constructed non-standard
stack frames that could not be handled by the traceback
routines.
CL 13239052 left the frames non-standard but fixed them
for the specific case of a divide-by-zero panic.
A profiling signal can arrive at any time, so that fix
is not sufficient.
Change the division to store the extra argument in the M struct
instead of in a new stack slot. That keeps the frames bog standard
at all times.
Also fix a related bug in the traceback code: when starting
a traceback, the LR register should be ignored if the current
function has already allocated its stack frame and saved the
original LR on the stack. The stack copy should be used, as the
LR register may have been modified.
Combined, these make the torture test from issue 6681 pass.
Fixes #6681.
R=golang-dev, r, josharian
CC=golang-dev
https://golang.org/cl/19810043
»»»
TBR=r
CC=golang-dev
https://golang.org/cl/20350043
The implementation of division constructed non-standard
stack frames that could not be handled by the traceback
routines.
CL 13239052 left the frames non-standard but fixed them
for the specific case of a divide-by-zero panic.
A profiling signal can arrive at any time, so that fix
is not sufficient.
Change the division to store the extra argument in the M struct
instead of in a new stack slot. That keeps the frames bog standard
at all times.
Also fix a related bug in the traceback code: when starting
a traceback, the LR register should be ignored if the current
function has already allocated its stack frame and saved the
original LR on the stack. The stack copy should be used, as the
LR register may have been modified.
Combined, these make the torture test from issue 6681 pass.
Fixes #6681.
R=golang-dev, r, josharian
CC=golang-dev
https://golang.org/cl/19810043
The case can happen when starttheworld is calling acquirep
to get things moving again and acquirep gets preempted.
The stack trace is in golang.org/issue/6644.
It is difficult to build a short test case for this, but
the person who reported issue 6644 confirms that this
solves the problem.
Fixes #6644.
R=golang-dev, r
CC=golang-dev
https://golang.org/cl/18740044
Nomemprof seems to be unneeded now; there is no recursion.
If the recursion is ever re-introduced, it will break loudly by deadlocking.
Fixes #6566.
R=golang-dev, minux.ma, rsc
CC=golang-dev
https://golang.org/cl/14695044
Instead of adding an -march=armv5t flag to the gcc command
line, the same effect is obtained with an ".arch armv5t"
pseudo op in the assembly file that uses armv5t instructions.
R=golang-dev, iant, dave
CC=golang-dev
https://golang.org/cl/14511044
If an iterator is started while a map is in the middle of a grow,
and the map has NaN keys, then those keys might get returned by
the iterator more than once. This fix makes the evacuation decision
deterministic and repeatable for NaN keys so each one gets returned
only once.
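The reason NaN keys are special: every inserted NaN is a distinct
key, because NaN != NaN. Plain Go shows them piling up:

    package main

    import (
            "fmt"
            "math"
    )

    func main() {
            m := map[float64]int{}
            for i := 0; i < 3; i++ {
                    m[math.NaN()] = i // each NaN is a new key
            }
            fmt.Println(len(m)) // 3
    }

Since such keys can never be looked up again by value, their
evacuation target has to be decided by something repeatable rather
than by rehashing.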
R=golang-dev, r, khr, iant
CC=golang-dev
https://golang.org/cl/14367043
Changing from 4 kB to 8 kB brings significant improvement
on a variety of the Go 1 benchmarks, on both amd64
and 386 systems.
Significant runtime reductions:
               amd64     386
GoParse         -14%     -1%
GobDecode       -12%    -20%
GobEncode       -64%     -1%
JSONDecode       -9%     -4%
JSONEncode      -15%     -5%
Template        -17%    -14%
In the longer term, khr's new stacks will avoid needing to
make this decision at all, but for Go 1.2 this is a reasonable
stopgap that makes performance significantly better.
Demand paging should mean that if the second 4 kB is not
used, it will not be brought into memory, so the change
should not adversely affect resident set size.
The same argument could justify bumping as high as 64 kB
on 64-bit machines, but there are diminishing returns
after 8 kB, and using 8 kB limits the possible unintended
memory overheads we are not aware of.
Benchmark graphs at
http://swtch.com/~rsc/gostackamd64.html
http://swtch.com/~rsc/gostack386.html
Full data at
http://swtch.com/~rsc/gostack.zip
R=golang-dev, khr, dave, bradfitz, dvyukov
CC=golang-dev
https://golang.org/cl/14317043
It is not possible to use it (there is no declaration in package syscall),
and no one seems to care.
Alex Brainman may bring this back properly for Go 1.3.
Fixes #6338.
R=golang-dev, r, alex.brainman
CC=golang-dev
https://golang.org/cl/14287043
Not scanning the stack by frames means we are reintroducing
a few false positives into the collection. Run the finalizer registration
in its own goroutine so that its stack is guaranteed to be out of
consideration in a later collection.
This is working around a regression from yesterday's tip, but
it's not a regression from Go 1.1.
R=golang-dev
TBR=golang-dev
CC=golang-dev
https://golang.org/cl/14290043
Makes build unnecessarily slower. Will fix the parser instead.
««« original CL description
runtime/pprof: run TestGoroutineSwitch for longer
Short test now takes about 0.5 second here.
Fixes #6417.
The failure was also seen on our builders.
R=golang-dev, minux.ma, r
CC=golang-dev
https://golang.org/cl/13321048
»»»
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/13720048
Short test now takes about 0.5 second here.
Fixes #6417.
The failure was also seen on our builders.
R=golang-dev, minux.ma, r
CC=golang-dev
https://golang.org/cl/13321048
If a fault happens in malloc, inevitably the next thing that happens
is a deadlock trying to allocate the panic value that says the fault
happened. Stop doing that, two ways.
First, reject panic in malloc just as we reject panic in garbage collection.
Second, runtime.panicstring was using an error implementation
backed by a Go string, so the interface held an allocated *string.
Since the actual errors are C strings, define a new error
implementation backed by a C char*, which needs no indirection
and therefore no allocation.
This second fix will avoid allocation for errors like nil pointer derefs
or division by zero, so it is worth doing even though the first fix
should take care of faults during malloc.
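A toy model of the allocation-free error value (illustrative Go; the
runtime's type is C, and a separate CL in this log hardens it against
being forged via reflect):

    import "unsafe"

    // The error value is one word: the address of a NUL-terminated C
    // string. Creating it allocates nothing; bytes are only read if
    // Error is actually called.
    type errorCString struct{ cstr uintptr }

    func (e errorCString) Error() string {
            var b []byte
            for p := e.cstr; ; p++ {
                    c := *(*byte)(unsafe.Pointer(p))
                    if c == 0 {
                            break
                    }
                    b = append(b, c)
            }
            return "runtime error: " + string(b)
    }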
Update #6419
R=golang-dev, dvyukov, dave
CC=golang-dev
https://golang.org/cl/13774043
Keeping pointers from the pre-walk phase confuses
the race detection instrumentation.
Fixes #6418.
R=golang-dev, dvyukov, r
CC=golang-dev
https://golang.org/cl/13368057
This interface is required to use the PCDATA interface
implemented in Go 1.2. While initially entirely private, the
FUNCDATA side of the interface has been made public. This
change completes the FUNCDATA/PCDATA interface.
R=golang-dev, rsc
CC=golang-dev
https://golang.org/cl/13735043
The code for call site-specific pointer bitmaps was not ready in time,
but the zeroing required without it is too expensive to use by default.
We will have to wait for precise collection of stack frames until Go 1.3.
The precise collection can be re-enabled by
GOEXPERIMENT=precisestack ./all.bash
but that will not be the default for a Go 1.2 build.
Fixes #6087.
R=golang-dev, jeremyjackins, dan.kortschak, r
CC=golang-dev
https://golang.org/cl/13677045
The uint64 divide function calls _mul64x32 to do a 64x32-bit multiply
and then compares the result against the 64-bit numerator.
If the result is bigger than the numerator, we must use the slow path.
Unfortunately, the 64x32 multiply produces a 96-bit product, and only the
low 64 bits were being used in the comparison. Return all 96 bits,
the bottom 64 via the original uint64* pointer, and the top 32
as the function's return value.
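The arithmetic, sketched with math/bits for clarity (a modern
convenience; the real helper is ARM assembly): the overflow test is
only sound if the top 32 bits of the 96-bit product participate.

    import "math/bits"

    // productExceeds reports whether x*y > num for a 64x32-bit multiply.
    func productExceeds(x uint64, y uint32, num uint64) bool {
            hi, lo := bits.Mul64(x, uint64(y))
            return hi != 0 || lo > num // hi carries the dropped bits
    }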
Fixes 386 build (broken by ARM division tests).
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13722044
Because we can, and because it otherwise might crash
the program if we think we're out of memory.
Fixes #6390.
R=golang-dev, iant, minux.ma
CC=golang-dev
https://golang.org/cl/13345048
The implementation of division in the 5 toolchain is a bit too magical.
Hide the magic from the traceback routines.
Also add a test for the results of the software divide routine.
Fixes #5805.
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/13239052
The kernel implementation of the fast system call path,
the one invoked by the SYSCALL instruction, is broken for
restarting system calls. A C program demonstrating this is below.
Change the system calls to use INT $0x80 instead, because
that (perhaps slightly slower) system call path actually works.
I filed http://www.freebsd.org/cgi/query-pr.cgi?pr=182161.
The C program demonstrating that it is FreeBSD's fault is below.
It reports the same "Bad address" failures from wait.
#include <sys/time.h>
#include <sys/resource.h>       /* struct rusage */
#include <sys/signal.h>
#include <pthread.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

static void handler(int);
static void* looper(void*);

int
main(void)
{
        int i;
        struct sigaction sa;
        pthread_cond_t cond;
        pthread_mutex_t mu;

        /* install a SIGCHLD handler with SA_RESTART: wait4 should restart */
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handler;
        sa.sa_flags = SA_RESTART;
        memset(&sa.sa_mask, 0xff, sizeof sa.sa_mask);
        sigaction(SIGCHLD, &sa, 0);

        for(i=0; i<2; i++)
                pthread_create(0, 0, looper, 0);

        pthread_mutex_init(&mu, 0);
        pthread_mutex_lock(&mu);
        pthread_cond_init(&cond, 0);
        for(;;)
                pthread_cond_wait(&cond, &mu);
        return 0;
}

static void
handler(int sig)
{
}

int
mywait4(int pid, int *stat, int options, struct rusage *rusage)
{
        int result;

        /* raw wait4 via SYSCALL; 7 is wait4 on FreeBSD/amd64 */
        asm("movq %%rcx, %%r10; syscall"
                : "=a" (result)
                : "a" (7),
                  "D" (pid),
                  "S" (stat),
                  "d" (options),
                  "c" (rusage));
        return result;
}

static void*
looper(void *v)
{
        int pid, stat, out;
        struct rusage rusage;

        for(;;) {
                if((pid = fork()) == 0)
                        _exit(0);
                out = mywait4(pid, &stat, 0, &rusage);
                if(out != pid) {
                        printf("wait4 returned %d\n", out);
                }
        }
        return 0;
}
Fixes #6372.
R=golang-dev, bradfitz
CC=golang-dev
https://golang.org/cl/13582047
The test 'gp == m->curg' is not valid on Windows,
because the goroutine being profiled is not from the
current m.
TBR=golang-dev
CC=golang-dev
https://golang.org/cl/13718043
Because profiling signals can arrive at any time, we must
handle the case where a profiling signal arrives halfway
through a goroutine switch. Luckily, although there is much
to think through, very little needs to change.
Fixes#6000.
Fixes#6015.
R=golang-dev, dvyukov
CC=golang-dev
https://golang.org/cl/13421048
Bug #1:
Issue 5406 identified an interesting case:
        defer iface.M()
may end up calling a wrapper that copies an indirect receiver
from the iface value and then calls the real M method. That's
two calls down, not just one, and so recover() == nil always
in the real M method, even during a panic.
[For the purposes of this entire discussion, a wrapper's
implementation is a function containing an ordinary call, not
the optimized tail call form that is sometimes possible. The
tail call does not create a second frame, so it is already
handled correctly.]
Fix this bug by introducing g->panicwrap, which counts the
number of bytes on the current stack segment that are due to
wrapper calls that should not count against the recover
check. All wrapper functions must now adjust g->panicwrap up
on entry and back down on exit. This adds slightly to their
expense; on the x86 it is a single instruction at entry and
exit; on the ARM it is three. However, the alternative is to
make a call to recover depend on being able to walk the stack,
which I very much want to avoid. We have enough problems
walking the stack for garbage collection and profiling.
Also, if performance is critical in a specific case, it is already
faster to use a pointer receiver and avoid this kind of wrapper
entirely.
Bug #2:
The old code, which did not consider the possibility of two
calls, already contained a check to see if the call had split
its stack and so the panic-created segment was one behind the
current segment. In the wrapper case, both of the two calls
might split their stacks, so the panic-created segment can be
two behind the current segment.
Fix this by propagating the Stktop.panic flag forward during
stack splits instead of looking backward during recover.
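A toy model of the bookkeeping (illustrative Go with a stand-in
goroutine struct; the real mechanism is compiler-emitted prologue and
epilogue code adjusting g->panicwrap):

    type gStandIn struct {
            panicwrap uintptr // bytes of wrapper frames on this segment
    }

    // What a generated wrapper conceptually does: discount its own
    // frame on entry and restore on exit, so the recover check still
    // lines up with the frame the user actually wrote.
    func callThroughWrapper(g *gStandIn, frameSize uintptr, realMethod func()) {
            g.panicwrap += frameSize
            defer func() { g.panicwrap -= frameSize }()
            realMethod()
    }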
Fixes #5406.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13367052
args is useful for printing tracebacks.
frame is not necessary anymore, but we might some day
get back to functions where the frame size does not vary
by program counter, and if so we'll need it. Avoid needing
to introduce a new struct format later by keeping it now.
Fixes #5907.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13632051
The various throwing > 0 checks finish a change started
in a previous CL, which sets throwing = -1 to mean
"don't show the internals". That gets set during the
"all goroutines are asleep - deadlock!" crash, and it
should also be set during any other expected crash
that does not indicate a problem within the runtime.
Most runtime.throw do indicate a problem within the
runtime, however, so we should be able to enumerate
the ones that should be silent. The goroutine sleeping
deadlock is the only one I can think of.
Update #5139
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13662043
Otherwise, if panic starts running deferred functions,
the code that panicked appears to be calling those
functions directly, which is not the case and can be
confusing.
For example:
main.Two()
        /Users/rsc/x.go:12 +0x2a
runtime.panic(0x20dc0, 0x2100cc010)
        /Users/rsc/g/go/src/pkg/runtime/panic.c:248 +0x106
main.One()
        /Users/rsc/x.go:8 +0x55
This makes clear(er) that main.Two is being called during
a panic, not as a direct call from main.One.
Fixes #5832.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13302051
This allows us to make two changes:
1. Force the argument type to be size_t, even on broken
systems that declare malloc to take a ulong.
2. Call runtime.throw if malloc fails.
(That is, the program crashes; it does not panic.)
Fixes #3403.
Fixes #5926.
R=golang-dev, iant
CC=golang-dev
https://golang.org/cl/13413047
If other gdb python scripts that have a different __init__ prototype
are loaded before Go's runtime-gdb.py, loading fails:
Traceback (most recent call last):
  File "/usr/lib/go/src/pkg/runtime/runtime-gdb.py", line 446, in <module>
    k()
TypeError: __init__() takes exactly 3 arguments (1 given)
The problem is that gdb keeps all python scripts in the same namespace,
so vars() contains them. To avoid that, load helpers one by one.
R=iant, rsc
CC=gobot, golang-dev
https://golang.org/cl/9752044
The code in question is trying to print a nice error message
when a Go EABI binary runs on an OABI machine.
Unfortunately, the only way to do that is to use
ARM Thumb instructions, which we otherwise don't use.
There exist ARM EABI machines that do not support Thumb.
We could run on them if not for this OABI check, so disable it.
Fixes #5685.
R=golang-dev, minux.ma
CC=golang-dev
https://golang.org/cl/13234050
Currently lots of sys allocations are not accounted in any of XxxSys,
including GC bitmap, spans table, GC roots blocks, GC finalizer blocks,
iface table, netpoll descriptors and more. Up to ~20% can be unaccounted.
This change introduces 2 new stats: GCSys and OtherSys for GC metadata
and all other misc allocations, respectively.
Also ensures that all XxxSys indeed sum up to Sys. All sys memory allocation
functions require the stat for accounting, so that it's impossible to miss something.
Also fix updating of mcache_sys/inuse; they were not updated after deallocation.
test/bench/garbage/parser before:
Sys          670064344
HeapSys      610271232
StackSys         65536
MSpanSys      14204928
MCacheSys        16384
BuckHashSys    1439992
after:
Sys          670064344
HeapSys      610271232
StackSys         65536
MSpanSys      14188544
MCacheSys        16384
BuckHashSys    3194304
GCSys         39198688
OtherSys       3129656
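Both new stats are visible through the ordinary runtime API, and the
XxxSys components now really do sum to Sys:

    package main

    import (
            "fmt"
            "runtime"
    )

    func main() {
            var ms runtime.MemStats
            runtime.ReadMemStats(&ms)
            sum := ms.HeapSys + ms.StackSys + ms.MSpanSys +
                    ms.MCacheSys + ms.BuckHashSys + ms.GCSys + ms.OtherSys
            fmt.Println("Sys:", ms.Sys, "sum of parts:", sum)
    }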
Fixes #5799.
R=rsc, dave, alex.brainman
CC=golang-dev
https://golang.org/cl/12946043