// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Garbage collector.

#include "runtime.h"
#include "arch_GOARCH.h"
#include "malloc.h"
#include "stack.h"

enum {
	Debug = 0,
	PtrSize = sizeof(void*),
	DebugMark = 0, // run second pass to check mark
	DataBlock = 8*1024,

	// Four bits per word (see #defines below).
	wordsPerBitmapWord = sizeof(void*)*8/4,
	bitShift = sizeof(void*)*8/4,
};

// Bits in per-word bitmap.
// #defines because enum might not be able to hold the values.
//
// Each word in the bitmap describes wordsPerBitmapWord words
// of heap memory. There are 4 bitmap bits dedicated to each heap word,
// so on a 64-bit system there is one bitmap word per 16 heap words.
// The bits in the word are packed together by type first, then by
// heap location, so each 64-bit bitmap word consists of, from top to bottom,
// the 16 bitSpecial bits for the corresponding heap words, then the 16 bitMarked bits,
// then the 16 bitNoPointers/bitBlockBoundary bits, then the 16 bitAllocated bits.
// This layout makes it easier to iterate over the bits of a given type.
//
// The bitmap starts at mheap.arena_start and extends *backward* from
// there. On a 64-bit system the off'th word in the arena is tracked by
// the off/16+1'th word before mheap.arena_start. (On a 32-bit system,
// the only difference is that the divisor is 8.)
//
// To pull out the bits corresponding to a given pointer p, we use:
//
//	off = p - (uintptr*)mheap.arena_start;  // word offset
//	b = (uintptr*)mheap.arena_start - off/wordsPerBitmapWord - 1;
//	shift = off % wordsPerBitmapWord
//	bits = *b >> shift;
//	/* then test bits & bitAllocated, bits & bitMarked, etc. */
//
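// For concreteness, a worked example (hypothetical offset; 64-bit layout
// as described above): if p is 0x100 bytes past arena_start, then
//
//	off = 0x100/8 = 32                       // word offset
//	b = (uintptr*)arena_start - 32/16 - 1    // 3 words before arena_start
//	shift = 32 % 16 = 0
//
// so p's four bitmap bits sit at bit positions 0 (bitAllocated),
// 16 (bitNoPointers/bitBlockBoundary), 32 (bitMarked), and
// 48 (bitSpecial) of *b.
//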
#define bitAllocated ((uintptr)1<<(bitShift*0))
#define bitNoPointers ((uintptr)1<<(bitShift*1)) /* when bitAllocated is set */
#define bitMarked ((uintptr)1<<(bitShift*2)) /* when bitAllocated is set */
#define bitSpecial ((uintptr)1<<(bitShift*3)) /* when bitAllocated is set - has finalizer or being profiled */
#define bitBlockBoundary ((uintptr)1<<(bitShift*1)) /* when bitAllocated is NOT set */

#define bitMask (bitBlockBoundary | bitAllocated | bitMarked | bitSpecial)

// Holding worldsema grants an M the right to try to stop the world.
// The procedure is:
//
//	runtime·semacquire(&runtime·worldsema);
//	m->gcing = 1;
//	runtime·stoptheworld();
//
//	... do stuff ...
//
//	m->gcing = 0;
//	runtime·semrelease(&runtime·worldsema);
//	runtime·starttheworld();
//
uint32 runtime·worldsema = 1;

static int32 gctrace;

typedef struct Workbuf Workbuf;
struct Workbuf
{
	LFNode node; // must be first
	uintptr nobj;
	byte *obj[512-(sizeof(LFNode)+sizeof(uintptr))/sizeof(byte*)];
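	// (The obj array is sized so that sizeof(Workbuf) works out to
	// 512 pointer-sized words: 4KB on 64-bit systems, 2KB on 32-bit,
	// assuming sizeof(LFNode) is a multiple of the pointer size.)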
};

typedef struct Finalizer Finalizer;
struct Finalizer
{
	void (*fn)(void*);
	void *arg;
	int32 nret;
};

typedef struct FinBlock FinBlock;
struct FinBlock
{
	FinBlock *alllink;
	FinBlock *next;
	int32 cnt;
	int32 cap;
	Finalizer fin[1];
};

extern byte data[];
extern byte etext[];
extern byte ebss[];

static G *fing;
static FinBlock *finq; // list of finalizers that are to be executed
static FinBlock *finc; // cache of free blocks
static FinBlock *allfin; // list of all blocks
static Lock finlock;
static int32 fingwait;

static void runfinq(void);
static Workbuf* getempty(Workbuf*);
static Workbuf* getfull(Workbuf*);
static void putempty(Workbuf*);
static Workbuf* handoff(Workbuf*);

typedef struct GcRoot GcRoot;
struct GcRoot
{
	byte *p;
	uintptr n;
};

static struct {
	uint64 full;  // lock-free list of full blocks
	uint64 empty; // lock-free list of empty blocks
	byte pad0[CacheLineSize]; // prevents false-sharing between full/empty and nproc/nwait
	uint32 nproc;
	volatile uint32 nwait;
	volatile uint32 ndone;
	volatile uint32 debugmarkdone;
	Note alldone;
	ParFor *markfor;
	ParFor *sweepfor;

	Lock;
	byte *chunk;
	uintptr nchunk;

	GcRoot *roots;
	uint32 nroot;
	uint32 rootcap;
} work;

// scanblock scans a block of n bytes starting at pointer b for references
// to other objects, scanning any it finds recursively until there are no
// unscanned objects left. Instead of using an explicit recursion, it keeps
// a work list in the Workbuf* structures and loops in the main function
// body. Keeping an explicit work list is easier on the stack allocator and
// more efficient.
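//
// A rough sketch of the loop below (illustrative, not exact):
//
//	for(;;) {
//		scan [b, b+n) word by word, queueing candidate pointers in wbuf;
//		if(wbuf is empty) { refill wbuf from work.full, or return; }
//		b, n = next object popped from wbuf;
//	}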
static void
scanblock(byte *b, uintptr n)
{
	byte *obj, *arena_start, *arena_used, *p;
	void **vp;
	uintptr size, *bitp, bits, shift, i, j, x, xbits, off, nobj, nproc;
	MSpan *s;
	PageID k;
	void **wp;
	Workbuf *wbuf;
	bool keepworking;

	if((intptr)n < 0) {
		runtime·printf("scanblock %p %D\n", b, (int64)n);
		runtime·throw("scanblock");
	}

	// Memory arena parameters.
	arena_start = runtime·mheap.arena_start;
	arena_used = runtime·mheap.arena_used;
	nproc = work.nproc;

	wbuf = nil; // current work buffer
	wp = nil;   // storage for next queued pointer (write pointer)
	nobj = 0;   // number of queued objects

	// Scanblock helpers pass b==nil.
	// Procs need to return to make more
	// calls to scanblock. But if work.nproc==1 then
	// might as well process blocks as soon as we
	// have them.
	keepworking = b == nil || work.nproc == 1;

	// Align b to a word boundary.
	off = (uintptr)b & (PtrSize-1);
	if(off != 0) {
		b += PtrSize - off;
		n -= PtrSize - off;
	}

	for(;;) {
		// Each iteration scans the block b of length n, queueing pointers in
		// the work buffer.
		if(Debug > 1)
			runtime·printf("scanblock %p %D\n", b, (int64)n);

		vp = (void**)b;
		n >>= (2+PtrSize/8); /* n /= PtrSize (4 or 8) */
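		// (2+PtrSize/8 evaluates to 2 when PtrSize is 4 and to 3 when
		// PtrSize is 8, so the shift above divides n by PtrSize.)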
		for(i=0; i<n; i++) {
			obj = (byte*)vp[i];

			// Words outside the arena cannot be pointers.
			if((byte*)obj < arena_start || (byte*)obj >= arena_used)
				continue;

			// obj may be a pointer to a live object.
			// Try to find the beginning of the object.

			// Round down to word boundary.
			obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));

			// Find bits for this word.
			off = (uintptr*)obj - (uintptr*)arena_start;
			bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
			shift = off % wordsPerBitmapWord;
			xbits = *bitp;
			bits = xbits >> shift;

			// Pointing at the beginning of a block?
			if((bits & (bitAllocated|bitBlockBoundary)) != 0)
				goto found;

			// Pointing just past the beginning?
			// Scan backward a little to find a block boundary.
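			// (Only the current bitmap word is examined, so this finds
			// boundaries at most wordsPerBitmapWord-1 words back;
			// anything farther falls through to the span lookup below.)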
			for(j=shift; j-->0; ) {
				if(((xbits>>j) & (bitAllocated|bitBlockBoundary)) != 0) {
					obj = (byte*)obj - (shift-j)*PtrSize;
					shift = j;
					bits = xbits>>shift;
					goto found;
				}
			}

			// Otherwise consult span table to find beginning.
			// (Manually inlined copy of MHeap_LookupMaybe.)
			k = (uintptr)obj>>PageShift;
			x = k;
			if(sizeof(void*) == 8)
				x -= (uintptr)arena_start>>PageShift;
			s = runtime·mheap.map[x];
			if(s == nil || k < s->start || k - s->start >= s->npages || s->state != MSpanInUse)
				continue;
			p = (byte*)((uintptr)s->start<<PageShift);
			if(s->sizeclass == 0) {
				obj = p;
			} else {
				if((byte*)obj >= (byte*)s->limit)
					continue;
				size = runtime·class_to_size[s->sizeclass];
				int32 i = ((byte*)obj - p)/size;
				obj = p+i*size;
			}

			// Now that we know the object header, reload bits.
			off = (uintptr*)obj - (uintptr*)arena_start;
			bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
			shift = off % wordsPerBitmapWord;
			xbits = *bitp;
			bits = xbits >> shift;

		found:
			// If another proc wants a pointer, give it some.
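			// (handoff, declared above and defined later in this file,
			// passes roughly half of this buffer's queued pointers to
			// an idle proc.)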
			if(work.nwait > 0 && nobj > 4 && work.full == 0) {
				wbuf->nobj = nobj;
				wbuf = handoff(wbuf);
				nobj = wbuf->nobj;
				wp = wbuf->obj + nobj;
			}

			// Now we have bits, bitp, and shift correct for
			// obj pointing at the base of the object.
			// Only care about allocated and not marked.
			if((bits & (bitAllocated|bitMarked)) != bitAllocated)
				continue;
			if(nproc == 1)
				*bitp |= bitMarked<<shift;
			else {
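				// Another proc may be updating other bits in this
				// bitmap word concurrently, so set bitMarked with a
				// compare-and-swap loop, retrying if the word changed.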
				for(;;) {
					x = *bitp;
					if(x & (bitMarked<<shift))
						goto continue_obj;
					if(runtime·casp((void**)bitp, (void*)x, (void*)(x|(bitMarked<<shift))))
						break;
				}
			}

			// If object has no pointers, don't need to scan further.
			if((bits & bitNoPointers) != 0)
				continue;

			PREFETCH(obj);

			// If buffer is full, get a new one.
			if(wbuf == nil || nobj >= nelem(wbuf->obj)) {
				if(wbuf != nil)
					wbuf->nobj = nobj;
				wbuf = getempty(wbuf);
				wp = wbuf->obj;
				nobj = 0;
			}
			*wp++ = obj;
			nobj++;
		continue_obj:;
		}

		// Done scanning [b, b+n). Prepare for the next iteration of
		// the loop by setting b and n to the parameters for the next block.

		// Fetch b from the work buffer.
		if(nobj == 0) {
			if(!keepworking) {
				if(wbuf)
					putempty(wbuf);
				return;
			}
			// Emptied our buffer: refill.
			wbuf = getfull(wbuf);
			if(wbuf == nil)
				return;
			nobj = wbuf->nobj;
			wp = wbuf->obj + wbuf->nobj;
		}
		b = *--wp;
		nobj--;

		// Ask span about size class.
		// (Manually inlined copy of MHeap_Lookup.)
		x = (uintptr)b>>PageShift;
		if(sizeof(void*) == 8)
			x -= (uintptr)arena_start>>PageShift;
		s = runtime·mheap.map[x];
		if(s->sizeclass == 0)
			n = s->npages<<PageShift;
		else
			n = runtime·class_to_size[s->sizeclass];
	}
}

// debug_scanblock is the debug copy of scanblock.
// It is simpler, slower, single-threaded, recursive,
// and uses bitSpecial as the mark bit.
static void
debug_scanblock(byte *b, uintptr n)
{
	byte *obj, *p;
	void **vp;
	uintptr size, *bitp, bits, shift, i, xbits, off;
	MSpan *s;

	if(!DebugMark)
		runtime·throw("debug_scanblock without DebugMark");

	if((intptr)n < 0) {
		runtime·printf("debug_scanblock %p %D\n", b, (int64)n);
|
runtime: parallelize garbage collector mark + sweep
Running test/garbage/parser.out.
On a 4-core Lenovo X201s (Linux):
31.12u 0.60s 31.74r 1 cpu, no atomics
32.27u 0.58s 32.86r 1 cpu, atomic instructions
33.04u 0.83s 27.47r 2 cpu
On a 16-core Xeon (Linux):
33.08u 0.65s 33.80r 1 cpu, no atomics
34.87u 1.12s 29.60r 2 cpu
36.00u 1.87s 28.43r 3 cpu
36.46u 2.34s 27.10r 4 cpu
38.28u 3.85s 26.92r 5 cpu
37.72u 5.25s 26.73r 6 cpu
39.63u 7.11s 26.95r 7 cpu
39.67u 8.10s 26.68r 8 cpu
On a 2-core MacBook Pro Core 2 Duo 2.26 (circa 2009, MacBookPro5,5):
39.43u 1.45s 41.27r 1 cpu, no atomics
43.98u 2.95s 38.69r 2 cpu
On a 2-core Mac Mini Core 2 Duo 1.83 (circa 2008; Macmini2,1):
48.81u 2.12s 51.76r 1 cpu, no atomics
57.15u 4.72s 51.54r 2 cpu
The handoff algorithm is really only good for two cores.
Beyond that we will need to so something more sophisticated,
like have each core hand off to the next one, around a circle.
Even so, the code is a good checkpoint; for now we'll limit the
number of gc procs to at most 2.
R=dvyukov
CC=golang-dev
https://golang.org/cl/4641082
2011-09-30 07:40:01 -06:00
|
|
|
runtime·throw("debug_scanblock");
|
|
|
|
}
|
|
|
|
|
|
|
|
// Align b to a word boundary.
|
|
|
|
off = (uintptr)b & (PtrSize-1);
|
|
|
|
if(off != 0) {
|
|
|
|
b += PtrSize - off;
|
|
|
|
n -= PtrSize - off;
|
|
|
|
}
|
|
|
|
|
|
|
|
vp = (void**)b;
|
|
|
|
n /= PtrSize;
|
|
|
|
for(i=0; i<n; i++) {
|
|
|
|
obj = (byte*)vp[i];
|
|
|
|
|
|
|
|
// Words outside the arena cannot be pointers.
|
|
|
|
if((byte*)obj < runtime·mheap.arena_start || (byte*)obj >= runtime·mheap.arena_used)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
// Round down to word boundary.
|
|
|
|
obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));
|
|
|
|
|
|
|
|
// Consult span table to find beginning.
|
|
|
|
s = runtime·MHeap_LookupMaybe(&runtime·mheap, obj);
|
|
|
|
if(s == nil)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
p = (byte*)((uintptr)s->start<<PageShift);
|
|
|
|
if(s->sizeclass == 0) {
|
|
|
|
obj = p;
|
|
|
|
size = (uintptr)s->npages<<PageShift;
|
|
|
|
} else {
|
|
|
|
if((byte*)obj >= (byte*)s->limit)
|
|
|
|
continue;
|
|
|
|
size = runtime·class_to_size[s->sizeclass];
|
|
|
|
int32 i = ((byte*)obj - p)/size;
|
|
|
|
obj = p+i*size;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Now that we know the object header, reload bits.
|
|
|
|
off = (uintptr*)obj - (uintptr*)runtime·mheap.arena_start;
|
|
|
|
bitp = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
|
|
|
|
shift = off % wordsPerBitmapWord;
|
|
|
|
xbits = *bitp;
|
|
|
|
bits = xbits >> shift;

		// Now we have bits, bitp, and shift correct for
		// obj pointing at the base of the object.
		// If not allocated or already marked, done.
		if((bits & bitAllocated) == 0 || (bits & bitSpecial) != 0)	// NOTE: bitSpecial not bitMarked
			continue;
		*bitp |= bitSpecial<<shift;
		if(!(bits & bitMarked))
			runtime·printf("found unmarked block %p in %p\n", obj, vp+i);

		// If object has no pointers, don't need to scan further.
		if((bits & bitNoPointers) != 0)
			continue;

		debug_scanblock(obj, size);
	}
}
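
// When DebugMark is on, this single-threaded pass re-traces the heap
// after the parallel mark, using bitSpecial as its own "visited" bit
// and reporting any reachable block the parallel mark failed to mark.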

// One iteration of the parallel mark: scan the i'th root range.
static void
markroot(ParFor *desc, uint32 i)
{
	USED(&desc);
	scanblock(work.roots[i].p, work.roots[i].n);
}
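
// markroot runs via runtime·parfordo(work.markfor), which deals the
// index range [0, work.nroot) out to the gc workers (a reading of the
// parfor setup elsewhere in this file, not an original comment here).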

// Get an empty work buffer off the work.empty list,
// allocating new buffers as needed.
// An incoming b is a buffer the caller has filled; it is pushed
// onto work.full so that other workers can steal from it.
static Workbuf*
getempty(Workbuf *b)
{
	if(b != nil)
		runtime·lfstackpush(&work.full, &b->node);
	b = (Workbuf*)runtime·lfstackpop(&work.empty);
	if(b == nil) {
		// Need to allocate.
		runtime·lock(&work);
		if(work.nchunk < sizeof *b) {
			work.nchunk = 1<<20;
			work.chunk = runtime·SysAlloc(work.nchunk);
		}
		b = (Workbuf*)work.chunk;
		work.chunk += sizeof *b;
		work.nchunk -= sizeof *b;
		runtime·unlock(&work);
	}
	b->nobj = 0;
	return b;
}
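
// work.empty and work.full are lock-free stacks (runtime·lfstackpush /
// runtime·lfstackpop), so buffers circulate between workers without a
// lock; work's lock above only guards carving new Workbufs out of the
// 1MB chunk.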

// Return an empty work buffer to the work.empty list.
static void
putempty(Workbuf *b)
{
	runtime·lfstackpush(&work.empty, &b->node);
}

// Get a full work buffer off the work.full list, or return nil.
static Workbuf*
getfull(Workbuf *b)
{
	int32 i;

	if(b != nil)
		runtime·lfstackpush(&work.empty, &b->node);
	b = (Workbuf*)runtime·lfstackpop(&work.full);
	if(b != nil || work.nproc == 1)
		return b;

	runtime·xadd(&work.nwait, +1);
	for(i=0;; i++) {
		if(work.full != 0) {
			runtime·xadd(&work.nwait, -1);
			b = (Workbuf*)runtime·lfstackpop(&work.full);
			if(b != nil)
				return b;
			runtime·xadd(&work.nwait, +1);
		}
		if(work.nwait == work.nproc)
			return nil;
		if(i < 10) {
			m->gcstats.nprocyield++;
			runtime·procyield(20);
		} else if(i < 20) {
			m->gcstats.nosyield++;
			runtime·osyield();
		} else {
			m->gcstats.nsleep++;
			runtime·usleep(100);
		}
	}
}
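
// The wait loop above is a three-stage backoff (summary, not an
// original comment): the first ~10 empty iterations spin with
// runtime·procyield(20), the next ~10 yield the OS thread, and after
// that each iteration sleeps 100us. When every worker is waiting
// (work.nwait == work.nproc) and no full buffer has appeared, marking
// is complete and getfull returns nil.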

static Workbuf*
handoff(Workbuf *b)
{
	int32 n;
	Workbuf *b1;

	// Make new buffer with half of b's pointers.
	b1 = getempty(nil);
	n = b->nobj/2;
	b->nobj -= n;
	b1->nobj = n;
	runtime·memmove(b1->obj, b->obj+b->nobj, n*sizeof b1->obj[0]);
	m->gcstats.nhandoff++;
	m->gcstats.nhandoffcnt += n;

	// Put b on full list - let first half of b get stolen.
	runtime·lfstackpush(&work.full, &b->node);
	return b1;
}
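
// Example (illustrative): if b holds 6 objects, handoff keeps the
// first 3 in b, copies the last 3 into a fresh b1, publishes b on
// work.full for an idle worker to steal, and continues scanning out
// of b1.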

// Record a root range [p, p+n) to be scanned by the parallel mark.
static void
addroot(byte *p, uintptr n)
{
	uint32 cap;
	GcRoot *new;

	if(work.nroot >= work.rootcap) {
		// Grow the root array: at least a page's worth, else double.
		cap = PageSize/sizeof(GcRoot);
		if(cap < 2*work.rootcap)
			cap = 2*work.rootcap;
		new = (GcRoot*)runtime·SysAlloc(cap*sizeof(GcRoot));
		if(work.roots != nil) {
			runtime·memmove(new, work.roots, work.rootcap*sizeof(GcRoot));
			runtime·SysFree(work.roots, work.rootcap*sizeof(GcRoot));
		}
		work.roots = new;
		work.rootcap = cap;
	}
	work.roots[work.nroot].p = p;
	work.roots[work.nroot].n = n;
	work.nroot++;
}

// Add each segment of gp's stack as a root.
static void
addstackroots(G *gp)
{
	M *mp;
	int32 n;
	Stktop *stk;
	byte *sp, *guard;

	stk = (Stktop*)gp->stackbase;
	guard = (byte*)gp->stackguard;

	if(gp == g) {
		// Scanning our own stack: start at &gp.
		sp = (byte*)&gp;
	} else if((mp = gp->m) != nil && mp->helpgc) {
		// gchelper's stack is in active use and has no interesting pointers.
		return;
	} else {
		// Scanning another goroutine's stack.
		// The goroutine is usually asleep (the world is stopped).
		sp = (byte*)gp->sched.sp;

		// The exception is that if the goroutine is about to enter or might
		// have just exited a system call, it may be executing code such
		// as schedlock and may have needed to start a new stack segment.
		// Use the stack segment and stack pointer at the time of
		// the system call instead, since that won't change underfoot.
		if(gp->gcstack != (uintptr)nil) {
			stk = (Stktop*)gp->gcstack;
			sp = (byte*)gp->gcsp;
			guard = (byte*)gp->gcguard;
		}
	}

	n = 0;
	while(stk) {
		if(sp < guard-StackGuard || (byte*)stk < sp) {
			runtime·printf("scanstack inconsistent: g%d#%d sp=%p not in [%p,%p]\n", gp->goid, n, sp, guard-StackGuard, stk);
			runtime·throw("scanstack");
		}
		addroot(sp, (byte*)stk - sp);
		sp = (byte*)stk->gobuf.sp;
		guard = stk->stackguard;
		stk = (Stktop*)stk->stackbase;
		n++;
	}
}
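
// A note on the walk above (descriptive, not an original comment):
// stacks are segmented, and the Stktop at the top of each segment
// records the sp, guard, and base of the next segment down. The loop
// scans [sp, stk) in the current segment, then follows the saved
// links to the segment below, stopping when the bottom link is nil.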

// Called by runtime·walkfintab for each object that has a finalizer:
// keep the object's referents alive without marking the object itself.
static void
addfinroots(void *v)
{
	uintptr size;

	size = 0;
	if(!runtime·mlookup(v, &v, &size, nil) || !runtime·blockspecial(v))
		runtime·throw("mark - finalizer inconsistency");

	// do not mark the finalizer block itself. just mark the things it points at.
	addroot(v, size);
}

// Collect all GC roots: data+bss, goroutine stacks, and finalizer data.
static void
addroots(void)
{
	G *gp;
	FinBlock *fb;
	byte *p;

	work.nroot = 0;

	// mark data+bss.
	for(p=data; p<ebss; p+=DataBlock)
		addroot(p, p+DataBlock < ebss ? DataBlock : ebss-p);

	for(gp=runtime·allg; gp!=nil; gp=gp->alllink) {
		switch(gp->status){
		default:
			runtime·printf("unexpected G.status %d\n", gp->status);
			runtime·throw("mark - bad status");
		case Gdead:
			break;
		case Grunning:
			if(gp != g)
				runtime·throw("mark - world not stopped");
			addstackroots(gp);
			break;
		case Grunnable:
		case Gsyscall:
		case Gwaiting:
			addstackroots(gp);
			break;
		}
	}

	runtime·walkfintab(addfinroots);

	for(fb=allfin; fb; fb=fb->alllink)
		addroot((byte*)fb->fin, fb->cnt*sizeof(fb->fin[0]));
}
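
// The data+bss loop above registers the segment in DataBlock-sized
// pieces rather than as one giant root, so the parallel markroot can
// spread a large data segment across several gc workers (a reading of
// the code, not an original comment).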

// Handle a block with the special bit set but no mark: if it has a
// finalizer, queue the finalizer and report true so the sweep keeps
// the block alive; otherwise drop its profile record and report false
// so the block can be freed.
static bool
handlespecial(byte *p, uintptr size)
{
	void (*fn)(void*);
	int32 nret;
	FinBlock *block;
	Finalizer *f;

	if(!runtime·getfinalizer(p, true, &fn, &nret)) {
		runtime·setblockspecial(p, false);
		runtime·MProf_Free(p, size);
		return false;
	}

	runtime·lock(&finlock);
	if(finq == nil || finq->cnt == finq->cap) {
		if(finc == nil) {
			finc = runtime·SysAlloc(PageSize);
			finc->cap = (PageSize - sizeof(FinBlock)) / sizeof(Finalizer) + 1;
			finc->alllink = allfin;
			allfin = finc;
		}
		block = finc;
		finc = block->next;
		block->next = finq;
		finq = block;
	}
	f = &finq->fin[finq->cnt];
	finq->cnt++;
	f->fn = fn;
	f->nret = nret;
	f->arg = p;
	runtime·unlock(&finlock);
	return true;
}
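
// finc is a free list of FinBlocks carved out of whole pages; finq is
// the queue of pending finalizers, drained by the finalizer goroutine
// once the collection finishes (see runfinq elsewhere in the runtime).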

// Sweep frees or collects finalizers for blocks not marked in the mark phase.
// It clears the mark bits in preparation for the next GC round.
static void
sweepspan(ParFor *desc, uint32 idx)
{
	int32 cl, n, npages;
	uintptr size;
	byte *p;
	MCache *c;
	byte *arena_start;
	MLink *start, *end;
	int32 nfree;
	MSpan *s;

	USED(&desc);
	s = runtime·mheap.allspans[idx];
	// Stamp newly unused spans. The scavenger will use that
	// info to potentially give back some pages to the OS.
	if(s->state == MSpanFree && s->unusedsince == 0)
		s->unusedsince = runtime·nanotime();
	if(s->state != MSpanInUse)
		return;
	arena_start = runtime·mheap.arena_start;
	p = (byte*)(s->start << PageShift);
	cl = s->sizeclass;
	if(cl == 0) {
		size = s->npages<<PageShift;
		n = 1;
	} else {
		// Chunk full of small blocks.
		size = runtime·class_to_size[cl];
		npages = runtime·class_to_allocnpages[cl];
		n = (npages << PageShift) / size;
	}
	nfree = 0;
	start = end = nil;
	c = m->mcache;

	// Sweep through n objects of given size starting at p.
	// This thread owns the span now, so it can manipulate
	// the block bitmap without atomic operations.
	for(; n > 0; n--, p += size) {
		uintptr off, *bitp, shift, bits;

		off = (uintptr*)p - (uintptr*)arena_start;
		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		bits = *bitp>>shift;

		if((bits & bitAllocated) == 0)
			continue;

		if((bits & bitMarked) != 0) {
			if(DebugMark) {
				if(!(bits & bitSpecial))
					runtime·printf("found spurious mark on %p\n", p);
				*bitp &= ~(bitSpecial<<shift);
			}
			*bitp &= ~(bitMarked<<shift);
			continue;
		}

		// Special means it has a finalizer or is being profiled.
		// In DebugMark mode, the bit has been coopted so
		// we have to assume all blocks are special.
		if(DebugMark || (bits & bitSpecial) != 0) {
			if(handlespecial(p, size))
				continue;
		}

		// Mark freed; restore block boundary bit.
		*bitp = (*bitp & ~(bitMask<<shift)) | (bitBlockBoundary<<shift);

		if(s->sizeclass == 0) {
			// Free large span.
			runtime·unmarkspan(p, 1<<PageShift);
			*(uintptr*)p = 1;	// needs zeroing
			runtime·MHeap_Free(&runtime·mheap, s, 1);
			c->local_alloc -= size;
			c->local_nfree++;
		} else {
			// Free small object.
			if(size > sizeof(uintptr))
				((uintptr*)p)[1] = 1;	// mark as "needs to be zeroed"
			if(nfree)
				end->next = (MLink*)p;
			else
				start = (MLink*)p;
			end = (MLink*)p;
			nfree++;
		}
	}

	if(nfree) {
		c->local_by_size[s->sizeclass].nfree += nfree;
		c->local_alloc -= size * nfree;
		c->local_nfree += nfree;
		c->local_cachealloc -= nfree * size;
		c->local_objects -= nfree;
		runtime·MCentral_FreeSpan(&runtime·mheap.central[cl], s, nfree, start, end);
	}
}
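
// Per-object outcomes of the sweep loop above (summary, not an
// original comment): not allocated -> skip; marked -> clear the mark
// bit and keep; unmarked but special -> queue its finalizer and keep
// it one more cycle; otherwise it is garbage. A large span goes back
// to the heap, while small objects are chained into start..end and
// returned to their MCentral in a single batch.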
void
runtime·gchelper(void)
{
	// parallel mark over the gc roots
	runtime·parfordo(work.markfor);

	// help other threads scan secondary blocks
	scanblock(nil, 0);

	if(DebugMark) {
		// wait while the main thread executes mark(debug_scanblock)
		while(runtime·atomicload(&work.debugmarkdone) == 0)
			runtime·usleep(10);
	}

	runtime·parfordo(work.sweepfor);
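	// work.nproc counts the coordinating thread too, so when the last
	// of the nproc-1 helpers checks in below it wakes the coordinator
	// sleeping on work.alldone.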
	if(runtime·xadd(&work.ndone, +1) == work.nproc-1)
		runtime·notewakeup(&work.alldone);
}

// Initialized from $GOGC.  GOGC=off means no gc.
//
// Next gc is after we've allocated an extra amount of
// memory proportional to the amount already in use.
// If gcpercent=100 and we're using 4M, we'll gc again
// when we get to 8M.  This keeps the gc cost in linear
// proportion to the allocation cost.  Adjusting gcpercent
// just changes the linear constant (and also the amount of
// extra memory used).
static int32 gcpercent = -2;
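
// A minimal worked example (hypothetical numbers): with gcpercent=100
// and heap_alloc at 4M after a collection, runtime·gc below computes
//	next_gc = heap_alloc + heap_alloc*gcpercent/100 = 4M + 4M = 8M
// so the next collection triggers once the heap reaches 8M; GOGC=200
// would defer it to 12M at the cost of extra memory.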
static void
stealcache(void)
{
	M *m;

	for(m=runtime·allm; m; m=m->alllink)
		runtime·MCache_ReleaseAll(m->mcache);
}

static void
cachestats(GCStats *stats)
{
	M *m;
	MCache *c;
	int32 i;
	uint64 stacks_inuse;
	uint64 stacks_sys;
	uint64 *src, *dst;

	if(stats)
		runtime·memclr((byte*)stats, sizeof(*stats));
	stacks_inuse = 0;
	stacks_sys = 0;
	for(m=runtime·allm; m; m=m->alllink) {
		runtime·purgecachedstats(m);
		stacks_inuse += m->stackalloc->inuse;
		stacks_sys += m->stackalloc->sys;
		if(stats) {
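			// GCStats holds only uint64 counters, so each M's stats
			// can be accumulated by walking both structs as flat
			// uint64 arrays.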
			src = (uint64*)&m->gcstats;
			dst = (uint64*)stats;
			for(i=0; i<sizeof(*stats)/sizeof(uint64); i++)
				dst[i] += src[i];
			runtime·memclr((byte*)&m->gcstats, sizeof(m->gcstats));
		}
		c = m->mcache;
		for(i=0; i<nelem(c->local_by_size); i++) {
			mstats.by_size[i].nmalloc += c->local_by_size[i].nmalloc;
			c->local_by_size[i].nmalloc = 0;
			mstats.by_size[i].nfree += c->local_by_size[i].nfree;
			c->local_by_size[i].nfree = 0;
		}
	}
	mstats.stacks_inuse = stacks_inuse;
	mstats.stacks_sys = stacks_sys;
}

void
runtime·gc(int32 force)
{
	int64 t0, t1, t2, t3;
	uint64 heap0, heap1, obj0, obj1;
	byte *p;
	GCStats stats;
	uint32 i;

	// The gc is turned off (via enablegc) until
	// the bootstrap has completed.
	// Also, malloc gets called in the guts
	// of a number of libraries that might be
	// holding locks.  To avoid priority inversion
	// problems, don't bother trying to run gc
	// while holding a lock.  The next mallocgc
	// without a lock will do the gc instead.
	if(!mstats.enablegc || m->locks > 0 || runtime·panicking)
		return;

	if(gcpercent == -2) {	// first time through
		p = runtime·getenv("GOGC");
		if(p == nil || p[0] == '\0')
			gcpercent = 100;
		else if(runtime·strcmp(p, (byte*)"off") == 0)
			gcpercent = -1;
		else
			gcpercent = runtime·atoi(p);

		p = runtime·getenv("GOGCTRACE");
		if(p != nil)
			gctrace = runtime·atoi(p);
	}
	if(gcpercent < 0)
		return;

	runtime·semacquire(&runtime·worldsema);
	if(!force && mstats.heap_alloc < mstats.next_gc) {
		runtime·semrelease(&runtime·worldsema);
		return;
	}

	t0 = runtime·nanotime();

	m->gcing = 1;
	runtime·stoptheworld();

	heap0 = 0;
	obj0 = 0;
	if(gctrace) {
		cachestats(nil);
		heap0 = mstats.heap_alloc;
		obj0 = mstats.nmalloc - mstats.nfree;
	}

	work.nwait = 0;
	work.ndone = 0;
	work.debugmarkdone = 0;
	work.nproc = runtime·gcprocs();
	addroots();
	if(work.markfor == nil)
		work.markfor = runtime·parforalloc(MaxGcproc);
	runtime·parforsetup(work.markfor, work.nproc, work.nroot, nil, false, markroot);
	if(work.sweepfor == nil)
		work.sweepfor = runtime·parforalloc(MaxGcproc);
	runtime·parforsetup(work.sweepfor, work.nproc, runtime·mheap.nspan, nil, true, sweepspan);
	if(work.nproc > 1) {
		runtime·noteclear(&work.alldone);
		runtime·helpgc(work.nproc);
	}
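
	// work.nproc includes this coordinating thread, so the parfor
	// descriptors above divide work.nroot roots and runtime·mheap.nspan
	// spans across all nproc threads; the helpers run the same
	// descriptors from runtime·gchelper.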

	runtime·parfordo(work.markfor);
	scanblock(nil, 0);

	if(DebugMark) {
		for(i=0; i<work.nroot; i++)
			debug_scanblock(work.roots[i].p, work.roots[i].n);
		runtime·atomicstore(&work.debugmarkdone, 1);
	}
	t1 = runtime·nanotime();

	runtime·parfordo(work.sweepfor);
	t2 = runtime·nanotime();

	stealcache();
	cachestats(&stats);

	if(work.nproc > 1)
		runtime·notesleep(&work.alldone);

	stats.nprocyield += work.sweepfor->nprocyield;
	stats.nosyield += work.sweepfor->nosyield;
	stats.nsleep += work.sweepfor->nsleep;

	mstats.next_gc = mstats.heap_alloc+mstats.heap_alloc*gcpercent/100;
	m->gcing = 0;

	if(finq != nil) {
		m->locks++;	// disable gc during the mallocs in newproc
		// kick off or wake up goroutine to run queued finalizers
		if(fing == nil)
			fing = runtime·newproc1((byte*)runfinq, nil, 0, 0, runtime·gc);
		else if(fingwait) {
			fingwait = 0;
			runtime·ready(fing);
		}
		m->locks--;
	}

	heap1 = mstats.heap_alloc;
	obj1 = mstats.nmalloc - mstats.nfree;

	t3 = runtime·nanotime();
	mstats.last_gc = t3;
	mstats.pause_ns[mstats.numgc%nelem(mstats.pause_ns)] = t3 - t0;
	mstats.pause_total_ns += t3 - t0;
	mstats.numgc++;
	if(mstats.debuggc)
		runtime·printf("pause %D\n", t3-t0);

	if(gctrace) {
		runtime·printf("gc%d(%d): %D+%D+%D ms, %D -> %D MB %D -> %D (%D-%D) objects,"
				" %D(%D) handoff, %D(%D) steal, %D/%D/%D yields\n",
			mstats.numgc, work.nproc, (t1-t0)/1000000, (t2-t1)/1000000, (t3-t2)/1000000,
			heap0>>20, heap1>>20, obj0, obj1,
			mstats.nmalloc, mstats.nfree,
			stats.nhandoff, stats.nhandoffcnt,
			work.sweepfor->nsteal, work.sweepfor->nstealcnt,
			stats.nprocyield, stats.nosyield, stats.nsleep);
	}
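
	// A GOGCTRACE=1 line from the printf above might look like this
	// (all numbers invented for illustration):
	//	gc5(2): 1+2+0 ms, 4 -> 3 MB 60000 -> 40000 (210000-170000) objects,
	//		5(120) handoff, 3(42) steal, 10/4/2 yields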

	runtime·MProf_GC();
	runtime·semrelease(&runtime·worldsema);
	runtime·starttheworld();

	// give the queued finalizers, if any, a chance to run
	if(finq != nil)
		runtime·gosched();

	if(gctrace > 1 && !force)
		runtime·gc(1);
}

void
runtime·ReadMemStats(MStats *stats)
{
	// Have to acquire worldsema to stop the world,
	// because stoptheworld can only be used by
	// one goroutine at a time, and there might be
	// a pending garbage collection already calling it.
	runtime·semacquire(&runtime·worldsema);
	m->gcing = 1;
	runtime·stoptheworld();
	cachestats(nil);
	*stats = mstats;
	m->gcing = 0;
	runtime·semrelease(&runtime·worldsema);
	runtime·starttheworld();
}

static void
runfinq(void)
{
	Finalizer *f;
	FinBlock *fb, *next;
	byte *frame;
	uint32 framesz, framecap, i;

	frame = nil;
	framecap = 0;
	for(;;) {
		// There's no need for a lock in this section
		// because it only conflicts with the garbage
		// collector, and the garbage collector only
		// runs when everyone else is stopped, and
		// runfinq only stops at the gosched() or
		// during the calls in the for loop.
		fb = finq;
		finq = nil;
		if(fb == nil) {
			fingwait = 1;
			g->status = Gwaiting;
			g->waitreason = "finalizer wait";
			runtime·gosched();
			continue;
		}
		for(; fb; fb=next) {
			next = fb->next;
			for(i=0; i<fb->cnt; i++) {
				f = &fb->fin[i];
				framesz = sizeof(uintptr) + f->nret;
				if(framecap < framesz) {
					runtime·free(frame);
					frame = runtime·mal(framesz);
					framecap = framesz;
				}
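				// Frame layout for the reflect·call below: one word
				// for the finalizer's argument, then f->nret bytes
				// that receive its return values.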
				*(void**)frame = f->arg;
				reflect·call((byte*)f->fn, frame, sizeof(uintptr) + f->nret);
				f->fn = nil;
				f->arg = nil;
			}
			fb->cnt = 0;
			fb->next = finc;
			finc = fb;
		}
		runtime·gc(1);	// trigger another gc to clean up the finalized objects, if possible
	}
}

// mark the block at v of size n as allocated.
// If noptr is true, mark it as having no pointers.
void
runtime·markallocated(void *v, uintptr n, bool noptr)
{
	uintptr *b, obits, bits, off, shift;

	if(0)
		runtime·printf("markallocated %p+%p\n", v, n);

	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
		runtime·throw("markallocated: bad pointer");

	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	shift = off % wordsPerBitmapWord;
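
	// The bitmap word b is shared with neighboring blocks, so while
	// other goroutines may be running the update must go through a
	// compare-and-swap retry loop rather than a plain store.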
	for(;;) {
		obits = *b;
		bits = (obits & ~(bitMask<<shift)) | (bitAllocated<<shift);
		if(noptr)
			bits |= bitNoPointers<<shift;
		if(runtime·singleproc) {
			*b = bits;
			break;
		} else {
			// more than one goroutine is potentially running: use atomic op
			if(runtime·casp((void**)b, (void*)obits, (void*)bits))
				break;
		}
	}
}

// mark the block at v of size n as freed.
void
runtime·markfreed(void *v, uintptr n)
{
	uintptr *b, obits, bits, off, shift;

	if(0)
		runtime·printf("markfreed %p+%p\n", v, n);

	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
		runtime·throw("markfreed: bad pointer");

	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	shift = off % wordsPerBitmapWord;

	for(;;) {
		obits = *b;
		bits = (obits & ~(bitMask<<shift)) | (bitBlockBoundary<<shift);
		if(runtime·singleproc) {
			*b = bits;
			break;
		} else {
			// more than one goroutine is potentially running: use atomic op
			if(runtime·casp((void**)b, (void*)obits, (void*)bits))
				break;
		}
	}
}

// check that the block at v of size n is marked freed.
void
runtime·checkfreed(void *v, uintptr n)
{
	uintptr *b, bits, off, shift;

	if(!runtime·checking)
		return;

	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
		return;	// not allocated, so okay

	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	shift = off % wordsPerBitmapWord;

	bits = *b>>shift;
	if((bits & bitAllocated) != 0) {
		runtime·printf("checkfreed %p+%p: off=%p have=%p\n",
			v, n, off, bits & bitMask);
		runtime·throw("checkfreed: not freed");
	}
}

// mark the span of memory at v as having n blocks of the given size.
// if leftover is true, there is left over space at the end of the span.
void
runtime·markspan(void *v, uintptr size, uintptr n, bool leftover)
{
	uintptr *b, off, shift;
	byte *p;

	if((byte*)v+size*n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
		runtime·throw("markspan: bad pointer");

	p = v;
	if(leftover)	// mark a boundary just past end of last block too
		n++;
	for(; n-- > 0; p += size) {
		// Okay to use non-atomic ops here, because we control
		// the entire span, and each bitmap word has bits for only
		// one span, so no other goroutines are changing these
		// bitmap words.
		off = (uintptr*)p - (uintptr*)runtime·mheap.arena_start;  // word offset
		b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		*b = (*b & ~(bitMask<<shift)) | (bitBlockBoundary<<shift);
	}
}

// unmark the span of memory at v of length n bytes.
void
runtime·unmarkspan(void *v, uintptr n)
{
	uintptr *p, *b, off;

	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
		runtime·throw("unmarkspan: bad pointer");

	p = v;
	off = p - (uintptr*)runtime·mheap.arena_start;  // word offset
	if(off % wordsPerBitmapWord != 0)
		runtime·throw("unmarkspan: unaligned pointer");
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	n /= PtrSize;
	if(n%wordsPerBitmapWord != 0)
		runtime·throw("unmarkspan: unaligned length");
	// Okay to use non-atomic ops here, because we control
	// the entire span, and each bitmap word has bits for only
	// one span, so no other goroutines are changing these
	// bitmap words.
	n /= wordsPerBitmapWord;
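	// e.g. (hypothetical, 64-bit): n=8192 bytes is 1024 heap words,
	// which is 64 bitmap words at 16 heap words per bitmap word, so
	// the loop below zeroes 64 words, walking downward from b.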
	while(n-- > 0)
		*b-- = 0;
}

bool
runtime·blockspecial(void *v)
{
	uintptr *b, off, shift;

	if(DebugMark)
		return true;

	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	shift = off % wordsPerBitmapWord;

	return (*b & (bitSpecial<<shift)) != 0;
}

void
runtime·setblockspecial(void *v, bool s)
{
	uintptr *b, off, shift, bits, obits;

	if(DebugMark)
		return;

	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	shift = off % wordsPerBitmapWord;

	for(;;) {
		obits = *b;
		if(s)
			bits = obits | (bitSpecial<<shift);
		else
			bits = obits & ~(bitSpecial<<shift);
		if(runtime·singleproc) {
			*b = bits;
			break;
		} else {
			// more than one goroutine is potentially running: use atomic op
			if(runtime·casp((void**)b, (void*)obits, (void*)bits))
				break;
		}
	}
}

void
runtime·MHeap_MapBits(MHeap *h)
{
	// Caller has added extra mappings to the arena.
	// Add extra mappings of bitmap words as needed.
	// We allocate extra bitmap pieces in chunks of bitmapChunk.
	enum {
		bitmapChunk = 8192
	};
	uintptr n;

	n = (h->arena_used - h->arena_start) / wordsPerBitmapWord;
	n = (n+bitmapChunk-1) & ~(bitmapChunk-1);
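	// e.g. (hypothetical): a bitmap need of 10000 bytes rounds up to
	// n=16384, the next multiple of bitmapChunk. The bitmap grows
	// downward from arena_start, so the new pages land at arena_start-n.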
	if(h->bitmap_mapped >= n)
		return;

	runtime·SysMap(h->arena_start - n, n - h->bitmap_mapped);
	h->bitmap_mapped = n;
}
|