2009-01-26 18:37:05 -07:00
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
2011-02-02 21:03:47 -07:00
// Garbage collector.
2009-01-26 18:37:05 -07:00
#include "runtime.h"
2011-12-16 13:33:58 -07:00
#include "arch_GOARCH.h"
2009-01-26 18:37:05 -07:00
#include "malloc.h"
runtime: stack split + garbage collection bug
The g->sched.sp saved stack pointer and the
g->stackbase and g->stackguard stack bounds
can change even while "the world is stopped",
because a goroutine has to call functions (and
therefore might split its stack) when exiting a
system call to check whether the world is stopped
(and if so, wait until the world continues).
That means the garbage collector cannot access
those values safely (without a race) for goroutines
executing system calls. Instead, save a consistent
triple in g->gcsp, g->gcstack, g->gcguard during
entersyscall and have the garbage collector refer
to those.
The old code was occasionally seeing (because of
the race) an sp and stk that did not correspond to
each other, so that stk - sp was not the number of
stack bytes following sp. In that case, if sp < stk
then the call scanblock(sp, stk - sp) scanned too
many bytes (anything between the two pointers,
which pointed into different allocation blocks).
If sp > stk then stk - sp wrapped around.
On 32-bit, stk - sp is a uintptr (uint32) converted
to int64 in the call to scanblock, so a large (~4G)
but positive number. Scanblock would try to scan
that many bytes and eventually fault accessing
unmapped memory. On 64-bit, stk - sp is a uintptr (uint64)
promoted to int64 in the call to scanblock, so a negative
number. Scanblock would not scan anything, possibly
causing in-use blocks to be freed.
In short, 32-bit platforms would have seen either
ineffective garbage collection or crashes during garbage
collection, while 64-bit platforms would have seen
either ineffective or incorrect garbage collection.
You can see the invalid arguments to scanblock in the
stack traces in issue 1620.
Fixes #1620.
Fixes #1746.
R=iant, r
CC=golang-dev
https://golang.org/cl/4437075
2011-04-27 21:21:12 -06:00
#include "stack.h"
2012-12-16 17:32:12 -07:00
#include "mgc0.h"
2012-10-07 12:05:32 -06:00
#include "race.h"
2013-01-10 13:45:46 -07:00
#include "type.h"
#include "typekind.h"
2013-07-19 14:04:09 -06:00
#include "funcdata.h"
2013-08-29 13:36:59 -06:00
#include "../../cmd/ld/textflag.h"
2009-01-26 18:37:05 -07:00
enum {
2011-02-02 21:03:47 -07:00
	Debug = 0,
runtime: parallelize garbage collector mark + sweep
Running test/garbage/parser.out.
On a 4-core Lenovo X201s (Linux):
31.12u 0.60s 31.74r 1 cpu, no atomics
32.27u 0.58s 32.86r 1 cpu, atomic instructions
33.04u 0.83s 27.47r 2 cpu
On a 16-core Xeon (Linux):
33.08u 0.65s 33.80r 1 cpu, no atomics
34.87u 1.12s 29.60r 2 cpu
36.00u 1.87s 28.43r 3 cpu
36.46u 2.34s 27.10r 4 cpu
38.28u 3.85s 26.92r 5 cpu
37.72u 5.25s 26.73r 6 cpu
39.63u 7.11s 26.95r 7 cpu
39.67u 8.10s 26.68r 8 cpu
On a 2-core MacBook Pro Core 2 Duo 2.26 (circa 2009, MacBookPro5,5):
39.43u 1.45s 41.27r 1 cpu, no atomics
43.98u 2.95s 38.69r 2 cpu
On a 2-core Mac Mini Core 2 Duo 1.83 (circa 2008; Macmini2,1):
48.81u 2.12s 51.76r 1 cpu, no atomics
57.15u 4.72s 51.54r 2 cpu
The handoff algorithm is really only good for two cores.
Beyond that we will need to do something more sophisticated,
like have each core hand off to the next one, around a circle.
Even so, the code is a good checkpoint; for now we'll limit the
number of gc procs to at most 2.
R=dvyukov
CC=golang-dev
https://golang.org/cl/4641082
2011-09-30 07:40:01 -06:00
	DebugMark = 0,	// run second pass to check mark
2013-03-04 08:54:37 -07:00
	CollectStats = 0,
2013-12-03 15:12:55 -07:00
	ScanStackByFrames = 1,
2013-04-12 06:23:38 -06:00
	IgnorePreciseGC = 0,
2011-09-30 07:40:01 -06:00
2011-02-02 21:03:47 -07:00
	// Four bits per word (see #defines below).
	wordsPerBitmapWord = sizeof(void*)*8/4,
	bitShift = sizeof(void*)*8/4,
2012-12-16 17:32:12 -07:00
	handoffThreshold = 4,
	IntermediateBufferCapacity = 64,
2013-01-10 13:45:46 -07:00
	// Bits in type information
	PRECISE = 1,
	LOOP = 2,
	PC_BITS = PRECISE | LOOP,
2013-08-09 17:48:12 -06:00
	// Pointer map
	BitsPerPointer = 2,
2013-08-21 14:51:00 -06:00
	BitsNoPointer = 0,
	BitsPointer = 1,
	BitsIface = 2,
	BitsEface = 3,
2009-01-26 18:37:05 -07:00
};
2013-12-18 12:08:34 -07:00
static struct
{
	Lock;
	void	*head;
} pools;

void
sync·runtime_registerPool(void **p)
{
	runtime·lock(&pools);
	p[0] = pools.head;
	pools.head = p;
	runtime·unlock(&pools);
}

static void
clearpools(void)
{
	void **p, **next;

	for(p = pools.head; p != nil; p = next) {
		next = p[0];
		p[0] = nil; // next
		p[1] = nil; // slice
		p[2] = nil;
		p[3] = nil;
	}
	pools.head = nil;
}
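
// Illustrative note (not from the original source): the four words
// cleared above are assumed to be the runtime-visible prefix of a
// registered pool, roughly:
//
//	struct PoolShape	// hypothetical name; the real layout lives in package sync
//	{
//		void	*next;	// p[0]: next pool in the runtime's list
//		Slice	list;	// p[1..3]: slice header (array/len/cap) of cached objects
//	};
//
// Dropping the list on every collection keeps pooled objects from
// being retained across GCs.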
2011-02-02 21:03:47 -07:00
// Bits in per-word bitmap.
// #defines because enum might not be able to hold the values.
//
// Each word in the bitmap describes wordsPerBitmapWord words
// of heap memory. There are 4 bitmap bits dedicated to each heap word,
// so on a 64-bit system there is one bitmap word per 16 heap words.
// The bits in the word are packed together by type first, then by
// heap location, so each 64-bit bitmap word consists of, from top to bottom,
// the 16 bitSpecial bits for the corresponding heap words, then the 16 bitMarked bits,
2013-12-18 18:13:59 -07:00
// then the 16 bitScan/bitBlockBoundary bits, then the 16 bitAllocated bits.
2011-02-02 21:03:47 -07:00
// This layout makes it easier to iterate over the bits of a given type.
//
// The bitmap starts at mheap.arena_start and extends *backward* from
// there. On a 64-bit system the off'th word in the arena is tracked by
// the off/16+1'th word before mheap.arena_start. (On a 32-bit system,
// the only difference is that the divisor is 8.)
//
// To pull out the bits corresponding to a given pointer p, we use:
//
// off = p - (uintptr*)mheap.arena_start; // word offset
// b = (uintptr*)mheap.arena_start - off/wordsPerBitmapWord - 1;
// shift = off % wordsPerBitmapWord
// bits = *b >> shift;
// /* then test bits & bitAllocated, bits & bitMarked, etc. */
//
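//
// As an illustrative sketch (assuming only the layout described above),
// the lookup can be packaged as:
//
//	static uintptr
//	bitsforptr(void *p)	// hypothetical helper, not in the original file
//	{
//		uintptr off, shift, *b;
//
//		off = (uintptr*)p - (uintptr*)runtime·mheap.arena_start;
//		b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
//		shift = off % wordsPerBitmapWord;
//		return *b >> shift;
//	}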
2013-12-18 18:13:59 -07:00
#define bitAllocated		((uintptr)1<<(bitShift*0))	/* block start; eligible for garbage collection */
#define bitScan			((uintptr)1<<(bitShift*1))	/* when bitAllocated is set */
2011-02-02 21:03:47 -07:00
#define bitMarked		((uintptr)1<<(bitShift*2))	/* when bitAllocated is set */
#define bitSpecial		((uintptr)1<<(bitShift*3))	/* when bitAllocated is set - has finalizer or being profiled */
2013-12-18 18:13:59 -07:00
#define bitBlockBoundary	((uintptr)1<<(bitShift*1))	/* when bitAllocated is NOT set - mark for FlagNoGC objects */
2011-02-02 21:03:47 -07:00
2013-12-18 18:13:59 -07:00
#define bitMask			(bitAllocated | bitScan | bitMarked | bitSpecial)
2011-02-02 21:03:47 -07:00
2012-02-22 19:45:01 -07:00
// Holding worldsema grants an M the right to try to stop the world.
// The procedure is:
//
// runtime·semacquire(&runtime·worldsema);
// m->gcing = 1;
// runtime·stoptheworld();
//
// ... do stuff ...
//
// m->gcing = 0;
// runtime·semrelease(&runtime·worldsema);
// runtime·starttheworld();
//
uint32 runtime·worldsema = 1;
2012-12-16 17:32:12 -07:00
typedef struct Obj Obj;
struct Obj
{
	byte	*p;	// data pointer
	uintptr	n;	// size of data in bytes
	uintptr	ti;	// type info
};

// The size of Workbuf is N*PageSize.
2011-02-02 21:03:47 -07:00
typedef struct Workbuf Workbuf;
struct Workbuf
2010-09-07 07:57:22 -06:00
{
2012-12-16 17:32:12 -07:00
#define SIZE (2*PageSize-sizeof(LFNode)-sizeof(uintptr))
	LFNode	node; // must be first
2011-09-30 07:40:01 -06:00
	uintptr	nobj;
2012-12-16 17:32:12 -07:00
	Obj	obj[SIZE/sizeof(Obj)-1];
	uint8	_padding[SIZE%sizeof(Obj)+sizeof(Obj)];
#undef SIZE
2010-09-07 07:57:22 -06:00
};
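
// The obj array plus the trailing padding fill SIZE exactly, so
// sizeof(Workbuf) comes out to 2*PageSize (N=2); scanblock verifies the
// multiple-of-PageSize invariant before doing any work.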
2011-10-06 09:42:51 -06:00
typedef struct Finalizer Finalizer;
struct Finalizer
{
2013-02-21 15:01:13 -07:00
	FuncVal	*fn;
2011-10-06 09:42:51 -06:00
	void	*arg;
2012-09-24 12:58:34 -06:00
	uintptr	nret;
2013-08-14 12:54:31 -06:00
	Type	*fint;
2013-07-29 09:43:08 -06:00
	PtrType	*ot;
2011-10-06 09:42:51 -06:00
};

typedef struct FinBlock FinBlock;
struct FinBlock
{
	FinBlock	*alllink;
	FinBlock	*next;
	int32	cnt;
	int32	cap;
	Finalizer	fin[1];
};
2009-08-20 17:09:38 -06:00
extern byte data[];
2012-12-16 17:32:12 -07:00
extern byte edata[];
extern byte bss[];
2012-02-21 20:08:42 -07:00
extern byte ebss[];
2009-01-26 18:37:05 -07:00
2012-12-16 17:32:12 -07:00
extern byte gcdata[];
extern byte gcbss[];
2010-03-26 15:15:30 -06:00
static G *fing;
2011-10-06 09:42:51 -06:00
static FinBlock *finq; // list of finalizers that are to be executed
static FinBlock *finc; // cache of free blocks
static FinBlock *allfin; // list of all blocks
static Lock finlock;
2010-04-07 21:38:02 -06:00
static int32 fingwait;
2010-03-26 15:15:30 -06:00
static void runfinq(void);
2011-02-02 21:03:47 -07:00
static Workbuf* getempty(Workbuf*);
static Workbuf* getfull(Workbuf*);
2011-09-30 07:40:01 -06:00
static void putempty(Workbuf*);
static Workbuf* handoff(Workbuf*);
2013-03-21 02:48:02 -06:00
static void gchelperstart(void);
2013-12-03 15:12:55 -07:00
static void scanstack(G* gp, void* scanbuf);
2011-09-30 07:40:01 -06:00
static struct {
2012-05-24 00:55:50 -06:00
	uint64	full;  // lock-free list of full blocks
	uint64	empty; // lock-free list of empty blocks
	byte	pad0[CacheLineSize]; // prevents false-sharing between full/empty and nproc/nwait
2011-09-30 07:40:01 -06:00
	uint32	nproc;
	volatile uint32	nwait;
	volatile uint32	ndone;
2012-05-22 11:35:52 -06:00
	volatile uint32	debugmarkdone;
2011-09-30 07:40:01 -06:00
	Note	alldone;
2012-05-24 00:55:50 -06:00
	ParFor	*markfor;
2012-05-22 11:35:52 -06:00
	ParFor	*sweepfor;
2011-09-30 07:40:01 -06:00
	Lock;
	byte	*chunk;
	uintptr	nchunk;
2012-05-24 00:55:50 -06:00
2012-12-16 17:32:12 -07:00
	Obj	*roots;
2012-05-24 00:55:50 -06:00
	uint32	nroot;
	uint32	rootcap;
2011-09-30 07:40:01 -06:00
} work;
2011-02-02 21:03:47 -07:00
2012-12-16 17:32:12 -07:00
enum {
	GC_DEFAULT_PTR = GC_NUM_INSTR,
2013-02-25 13:58:23 -07:00
	GC_CHAN,
2013-12-03 15:12:55 -07:00
	GC_G_PTR,
2013-03-04 08:54:37 -07:00
	GC_NUM_INSTR2
2012-12-16 17:32:12 -07:00
};
2013-03-04 08:54:37 -07:00
static struct {
	struct {
		uint64 sum;
		uint64 cnt;
	} ptr;
	uint64 nbytes;
	struct {
		uint64 sum;
		uint64 cnt;
		uint64 notype;
		uint64 typelookup;
	} obj;
	uint64 rescan;
	uint64 rescanbytes;
	uint64 instr[GC_NUM_INSTR2];
	uint64 putempty;
	uint64 getfull;
2013-08-29 14:52:38 -06:00
	struct {
		uint64 foundbit;
		uint64 foundword;
		uint64 foundspan;
	} flushptrbuf;
	struct {
		uint64 foundbit;
		uint64 foundword;
		uint64 foundspan;
	} markonly;
2013-03-04 08:54:37 -07:00
} gcstats;
2013-02-08 14:00:33 -07:00
// markonly marks an object. It returns true if the object
// has been marked by this function, false otherwise.
2013-03-15 02:02:36 -06:00
// This function doesn't append the object to any buffer.
2013-02-08 14:00:33 -07:00
static bool
markonly(void *obj)
{
	byte *p;
2013-08-29 14:52:38 -06:00
	uintptr *bitp, bits, shift, x, xbits, off, j;
2013-02-08 14:00:33 -07:00
	MSpan *s;
	PageID k;

	// Words outside the arena cannot be pointers.
2013-05-28 12:14:47 -06:00
	if(obj < runtime·mheap.arena_start || obj >= runtime·mheap.arena_used)
2013-02-08 14:00:33 -07:00
		return false;

	// obj may be a pointer to a live object.
	// Try to find the beginning of the object.

	// Round down to word boundary.
	obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));

	// Find bits for this word.
2013-05-28 12:14:47 -06:00
	off = (uintptr*)obj - (uintptr*)runtime·mheap.arena_start;
	bitp = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2013-02-08 14:00:33 -07:00
	shift = off % wordsPerBitmapWord;
	xbits = *bitp;
	bits = xbits >> shift;

	// Pointing at the beginning of a block?
2013-08-29 14:52:38 -06:00
	if((bits & (bitAllocated|bitBlockBoundary)) != 0) {
		if(CollectStats)
			runtime·xadd64(&gcstats.markonly.foundbit, 1);
2013-02-08 14:00:33 -07:00
		goto found;
2013-08-29 14:52:38 -06:00
	}

	// Pointing just past the beginning?
	// Scan backward a little to find a block boundary.
	for(j=shift; j-->0; ) {
		if(((xbits>>j) & (bitAllocated|bitBlockBoundary)) != 0) {
			shift = j;
			bits = xbits>>shift;
			if(CollectStats)
				runtime·xadd64(&gcstats.markonly.foundword, 1);
			goto found;
		}
	}
2013-02-08 14:00:33 -07:00
	// Otherwise consult span table to find beginning.
	// (Manually inlined copy of MHeap_LookupMaybe.)
	k = (uintptr)obj>>PageShift;
	x = k;
	if(sizeof(void*) == 8)
2013-05-28 12:14:47 -06:00
		x -= (uintptr)runtime·mheap.arena_start>>PageShift;
2013-05-30 07:09:58 -06:00
	s = runtime·mheap.spans[x];
2013-05-30 22:32:20 -06:00
	if(s == nil || k < s->start || obj >= s->limit || s->state != MSpanInUse)
2013-02-08 14:00:33 -07:00
		return false;
	p = (byte*)((uintptr)s->start<<PageShift);
	if(s->sizeclass == 0) {
		obj = p;
	} else {
		uintptr size = s->elemsize;
		int32 i = ((byte*)obj - p)/size;
		obj = p+i*size;
	}

	// Now that we know the object header, reload bits.
2013-05-28 12:14:47 -06:00
	off = (uintptr*)obj - (uintptr*)runtime·mheap.arena_start;
	bitp = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2013-02-08 14:00:33 -07:00
	shift = off % wordsPerBitmapWord;
	xbits = *bitp;
	bits = xbits >> shift;
2013-08-29 14:52:38 -06:00
	if(CollectStats)
		runtime·xadd64(&gcstats.markonly.foundspan, 1);
2013-02-08 14:00:33 -07:00
found:
	// Now we have bits, bitp, and shift correct for
	// obj pointing at the base of the object.
	// Only care about allocated and not marked.
	if((bits & (bitAllocated|bitMarked)) != bitAllocated)
		return false;
2013-03-15 02:02:36 -06:00
	if(work.nproc == 1)
		*bitp |= bitMarked<<shift;
	else {
		for(;;) {
			x = *bitp;
			if(x & (bitMarked<<shift))
				return false;
			if(runtime·casp((void**)bitp, (void*)x, (void*)(x|(bitMarked<<shift))))
				break;
		}
	}
2013-02-08 14:00:33 -07:00
	// The object is now marked
	return true;
}
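
// Illustrative usage: markonly is for blocks that must be kept live but
// are not scanned here. GC_STRING below simply calls markonly(obj),
// while GC_CHAN_PTR branches on the "newly marked" result before
// deciding whether to scan the channel's buffer.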
2013-03-15 10:37:40 -06:00
// PtrTarget is a structure used by intermediate buffers.
2012-12-16 17:32:12 -07:00
// The intermediate buffers hold GC data before it
// is moved/flushed to the work buffer (Workbuf).
// The size of an intermediate buffer is very small,
// such as 32 or 64 elements.
2013-01-04 08:20:50 -07:00
typedef struct PtrTarget PtrTarget;
2012-12-16 17:32:12 -07:00
struct PtrTarget
{
	void	*p;
	uintptr	ti;
};
2013-12-03 15:12:55 -07:00
typedef struct Scanbuf Scanbuf;
struct Scanbuf
{
	struct {
		PtrTarget	*begin;
		PtrTarget	*end;
		PtrTarget	*pos;
	} ptr;
	struct {
		Obj	*begin;
		Obj	*end;
		Obj	*pos;
	} obj;
	Workbuf	*wbuf;
	Obj	*wp;
	uintptr	nobj;
};
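
// The scan loop fills these windows with an append-then-flush idiom
// (a sketch of the pattern used throughout scanblock below):
//
//	*sbuf.ptr.pos++ = (PtrTarget){obj, objti};
//	if(sbuf.ptr.pos == sbuf.ptr.end)
//		flushptrbuf(&sbuf);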
2013-01-04 08:20:50 -07:00
typedef struct BufferList BufferList;
2012-12-16 17:32:12 -07:00
struct BufferList
{
2013-01-04 08:20:50 -07:00
	PtrTarget ptrtarget[IntermediateBufferCapacity];
2013-02-08 14:00:33 -07:00
	Obj obj[IntermediateBufferCapacity];
2013-03-21 02:48:02 -06:00
	uint32 busy;
	byte pad[CacheLineSize];
2012-12-16 17:32:12 -07:00
};
2013-08-29 13:36:59 -06:00
#pragma dataflag NOPTR
2013-03-21 02:48:02 -06:00
static BufferList bufferList[MaxGcproc];
2012-12-16 17:32:12 -07:00
2013-01-10 13:45:46 -07:00
static Type *itabtype;

static void enqueue(Obj obj, Workbuf **_wbuf, Obj **_wp, uintptr *_nobj);
2012-12-16 17:32:12 -07:00
// flushptrbuf moves data from the PtrTarget buffer to the work buffer.
// The PtrTarget buffer contains blocks irrespective of whether the blocks have been marked or scanned,
// while the work buffer contains blocks which have been marked
// and are prepared to be scanned by the garbage collector.
//
// sbuf.wp, sbuf.wbuf, and sbuf.nobj are input/output parameters that describe the work buffer.
//
// A simplified drawing explaining how the todo-list moves from one structure to another:
//
//	scanblock
//	(find pointers)
//	  Obj ------> PtrTarget (pointer targets)
//	   ↑          |
2013-03-15 10:37:40 -06:00
//	   |          |
//	   `----------'
//	flushptrbuf
//	(find block start, mark and enqueue)
2009-01-26 18:37:05 -07:00
static void
2013-12-03 15:12:55 -07:00
flushptrbuf(Scanbuf *sbuf)
2009-01-26 18:37:05 -07:00
{
2012-12-16 17:32:12 -07:00
	byte *p, *arena_start, *obj;
2013-01-10 13:45:46 -07:00
	uintptr size, *bitp, bits, shift, j, x, xbits, off, nobj, ti, n;
2011-02-02 21:03:47 -07:00
	MSpan *s;
	PageID k;
2012-12-16 17:32:12 -07:00
	Obj *wp;
2011-02-02 21:03:47 -07:00
	Workbuf *wbuf;
2013-12-03 15:12:55 -07:00
	PtrTarget *ptrbuf;
2013-01-04 08:20:50 -07:00
	PtrTarget *ptrbuf_end;
2011-04-27 21:21:12 -06:00
2013-05-28 12:14:47 -06:00
	arena_start = runtime·mheap.arena_start;
2011-09-30 07:40:01 -06:00
2013-12-03 15:12:55 -07:00
	wp = sbuf->wp;
	wbuf = sbuf->wbuf;
	nobj = sbuf->nobj;
2011-09-30 07:40:01 -06:00
2013-12-03 15:12:55 -07:00
	ptrbuf = sbuf->ptr.begin;
	ptrbuf_end = sbuf->ptr.pos;
	n = ptrbuf_end - sbuf->ptr.begin;
	sbuf->ptr.pos = sbuf->ptr.begin;
2011-02-02 21:03:47 -07:00
2013-03-04 08:54:37 -07:00
	if(CollectStats) {
		runtime·xadd64(&gcstats.ptr.sum, n);
		runtime·xadd64(&gcstats.ptr.cnt, 1);
	}
2012-12-16 17:32:12 -07:00
	// If buffer is nearly full, get a new one.
	if(wbuf == nil || nobj+n >= nelem(wbuf->obj)) {
		if(wbuf != nil)
			wbuf->nobj = nobj;
		wbuf = getempty(wbuf);
		wp = wbuf->obj;
		nobj = 0;

		if(n >= nelem(wbuf->obj))
			runtime·throw("ptrbuf has to be smaller than WorkBuf");
2011-02-02 21:03:47 -07:00
	}
2010-09-07 07:57:22 -06:00
2013-12-10 12:17:43 -07:00
	while(ptrbuf < ptrbuf_end) {
		obj = ptrbuf->p;
		ti = ptrbuf->ti;
		ptrbuf++;

		// obj belongs to interval [mheap.arena_start, mheap.arena_used).
		if(Debug > 1) {
			if(obj < runtime·mheap.arena_start || obj >= runtime·mheap.arena_used)
				runtime·throw("object is outside of mheap");
		}
2011-09-30 07:40:01 -06:00
2013-12-10 12:17:43 -07:00
		// obj may be a pointer to a live object.
		// Try to find the beginning of the object.
2011-09-30 07:40:01 -06:00
2013-12-10 12:17:43 -07:00
		// Round down to word boundary.
		if(((uintptr)obj & ((uintptr)PtrSize-1)) != 0) {
			obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));
			ti = 0;
		}
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
		// Find bits for this word.
		off = (uintptr*)obj - (uintptr*)arena_start;
		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		xbits = *bitp;
		bits = xbits >> shift;
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
		// Pointing at the beginning of a block?
		if((bits & (bitAllocated|bitBlockBoundary)) != 0) {
			if(CollectStats)
				runtime·xadd64(&gcstats.flushptrbuf.foundbit, 1);
			goto found;
		}

		ti = 0;

		// Pointing just past the beginning?
		// Scan backward a little to find a block boundary.
		for(j=shift; j-->0; ) {
			if(((xbits>>j) & (bitAllocated|bitBlockBoundary)) != 0) {
				obj = (byte*)obj - (shift-j)*PtrSize;
				shift = j;
				bits = xbits>>shift;
2013-08-29 14:52:38 -06:00
				if(CollectStats)
2013-12-10 12:17:43 -07:00
					runtime·xadd64(&gcstats.flushptrbuf.foundword, 1);
2011-02-02 21:03:47 -07:00
				goto found;
2013-08-29 14:52:38 -06:00
			}
2013-12-10 12:17:43 -07:00
		}
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
		// Otherwise consult span table to find beginning.
		// (Manually inlined copy of MHeap_LookupMaybe.)
		k = (uintptr)obj>>PageShift;
		x = k;
		if(sizeof(void*) == 8)
			x -= (uintptr)arena_start>>PageShift;
		s = runtime·mheap.spans[x];
		if(s == nil || k < s->start || obj >= s->limit || s->state != MSpanInUse)
			continue;
		p = (byte*)((uintptr)s->start<<PageShift);
		if(s->sizeclass == 0) {
			obj = p;
		} else {
			size = s->elemsize;
			int32 i = ((byte*)obj - p)/size;
			obj = p+i*size;
		}
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
		// Now that we know the object header, reload bits.
		off = (uintptr*)obj - (uintptr*)arena_start;
		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		xbits = *bitp;
		bits = xbits >> shift;
		if(CollectStats)
			runtime·xadd64(&gcstats.flushptrbuf.foundspan, 1);
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
	found:
		// Now we have bits, bitp, and shift correct for
		// obj pointing at the base of the object.
		// Only care about allocated and not marked.
		if((bits & (bitAllocated|bitMarked)) != bitAllocated)
			continue;
		if(work.nproc == 1)
			*bitp |= bitMarked<<shift;
		else {
			for(;;) {
				x = *bitp;
				if(x & (bitMarked<<shift))
					goto continue_obj;
				if(runtime·casp((void**)bitp, (void*)x, (void*)(x|(bitMarked<<shift))))
					break;
2013-03-15 02:02:36 -06:00
			}
2013-12-10 12:17:43 -07:00
		}
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
		// If object has no pointers, don't need to scan further.
2013-12-18 18:13:59 -07:00
		if((bits & bitScan) == 0)
2013-12-10 12:17:43 -07:00
			continue;
2011-02-02 21:03:47 -07:00
2013-12-10 12:17:43 -07:00
		// Ask span about size class.
		// (Manually inlined copy of MHeap_Lookup.)
		x = (uintptr)obj>>PageShift;
		if(sizeof(void*) == 8)
			x -= (uintptr)arena_start>>PageShift;
		s = runtime·mheap.spans[x];
2012-12-16 17:32:12 -07:00
2013-12-10 12:17:43 -07:00
		PREFETCH(obj);
2012-04-07 07:02:44 -06:00
2013-12-10 12:17:43 -07:00
		*wp = (Obj){obj, s->elemsize, ti};
		wp++;
		nobj++;
	continue_obj:;
	}
2012-12-16 17:32:12 -07:00
2013-12-10 12:17:43 -07:00
	// If another proc wants a pointer, give it some.
	if(work.nwait > 0 && nobj > handoffThreshold && work.full == 0) {
		wbuf->nobj = nobj;
		wbuf = handoff(wbuf);
		nobj = wbuf->nobj;
		wp = wbuf->obj + nobj;
2012-12-16 17:32:12 -07:00
	}
2013-12-03 15:12:55 -07:00
	sbuf->wp = wp;
	sbuf->wbuf = wbuf;
	sbuf->nobj = nobj;
2012-12-16 17:32:12 -07:00
}
2013-02-08 14:00:33 -07:00
static void
2013-12-03 15:12:55 -07:00
flushobjbuf(Scanbuf *sbuf)
2013-02-08 14:00:33 -07:00
{
	uintptr nobj, off;
	Obj *wp, obj;
	Workbuf *wbuf;
2013-12-03 15:12:55 -07:00
	Obj *objbuf;
2013-02-08 14:00:33 -07:00
	Obj *objbuf_end;

2013-12-03 15:12:55 -07:00
	wp = sbuf->wp;
	wbuf = sbuf->wbuf;
	nobj = sbuf->nobj;
2013-02-08 14:00:33 -07:00
2013-12-03 15:12:55 -07:00
	objbuf = sbuf->obj.begin;
	objbuf_end = sbuf->obj.pos;
	sbuf->obj.pos = sbuf->obj.begin;
2013-02-08 14:00:33 -07:00
	while(objbuf < objbuf_end) {
		obj = *objbuf++;

		// Align obj.p to a word boundary.
		off = (uintptr)obj.p & (PtrSize-1);
		if(off != 0) {
			obj.p += PtrSize - off;
			obj.n -= PtrSize - off;
			obj.ti = 0;
		}

		if(obj.p == nil || obj.n == 0)
			continue;

		// If buffer is full, get a new one.
		if(wbuf == nil || nobj >= nelem(wbuf->obj)) {
			if(wbuf != nil)
				wbuf->nobj = nobj;
			wbuf = getempty(wbuf);
			wp = wbuf->obj;
			nobj = 0;
		}

		*wp = obj;
		wp++;
		nobj++;
	}

	// If another proc wants a pointer, give it some.
	if(work.nwait > 0 && nobj > handoffThreshold && work.full == 0) {
		wbuf->nobj = nobj;
		wbuf = handoff(wbuf);
		nobj = wbuf->nobj;
		wp = wbuf->obj + nobj;
	}

2013-12-03 15:12:55 -07:00
	sbuf->wp = wp;
	sbuf->wbuf = wbuf;
	sbuf->nobj = nobj;
2013-02-08 14:00:33 -07:00
}
2012-12-16 17:32:12 -07:00
// Program that scans the whole block and treats every block element as a potential pointer
static uintptr defaultProg[2] = {PtrSize, GC_DEFAULT_PTR};
2013-02-25 13:58:23 -07:00
// Hchan program
static uintptr chanProg[2] = {0, GC_CHAN};
2013-12-03 15:12:55 -07:00
// G* program
static uintptr gptrProg[2] = {0, GC_G_PTR};
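
// A GC program is a uintptr array: word 0 is the element size, and the
// words after it are opcodes with inline operands. defaultProg reads
// "elements are PtrSize bytes; treat every word as a potential pointer"
// (GC_DEFAULT_PTR), while chanProg and gptrProg dispatch straight to
// their single instruction.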
2013-01-10 13:45:46 -07:00
// Local variables of a program fragment or loop
typedef struct Frame Frame;
struct Frame {
	uintptr count, elemsize, b;
	uintptr *loop_or_ret;
};
2013-04-08 14:36:35 -06:00
// Sanity check for the derived type info objti.
static void
checkptr(void *obj, uintptr objti)
{
	uintptr *pc1, *pc2, type, tisize, i, j, x;
	byte *objstart;
	Type *t;
	MSpan *s;

	if(!Debug)
		runtime·throw("checkptr is debug only");

2013-05-28 12:14:47 -06:00
	if(obj < runtime·mheap.arena_start || obj >= runtime·mheap.arena_used)
2013-04-08 14:36:35 -06:00
		return;
	type = runtime·gettype(obj);
	t = (Type*)(type & ~(uintptr)(PtrSize-1));
	if(t == nil)
		return;
	x = (uintptr)obj>>PageShift;
	if(sizeof(void*) == 8)
2013-05-28 12:14:47 -06:00
		x -= (uintptr)(runtime·mheap.arena_start)>>PageShift;
2013-05-30 07:09:58 -06:00
	s = runtime·mheap.spans[x];
2013-04-08 14:36:35 -06:00
	objstart = (byte*)((uintptr)s->start<<PageShift);
	if(s->sizeclass != 0) {
		i = ((byte*)obj - objstart)/s->elemsize;
		objstart += i*s->elemsize;
	}
	tisize = *(uintptr*)objti;
	// Sanity check for object size: it should fit into the memory block.
2013-08-31 10:09:50 -06:00
	if((byte*)obj + tisize > objstart + s->elemsize) {
		runtime·printf("object of type '%S' at %p/%p does not fit in block %p/%p\n",
			       *t->string, obj, tisize, objstart, s->elemsize);
2013-04-08 14:36:35 -06:00
		runtime·throw("invalid gc type info");
2013-08-31 10:09:50 -06:00
	}
2013-04-08 14:36:35 -06:00
	if(obj != objstart)
		return;
	// If obj points to the beginning of the memory block,
	// check type info as well.
	if(t->string == nil ||
		// Gob allocates unsafe pointers for indirection.
		(runtime·strcmp(t->string->str, (byte*)"unsafe.Pointer") &&
		// Runtime and gc think differently about closures.
		runtime·strstr(t->string->str, (byte*)"struct { F uintptr") != t->string->str)) {
		pc1 = (uintptr*)objti;
		pc2 = (uintptr*)t->gc;
		// A simple best-effort check until first GC_END.
		for(j = 1; pc1[j] != GC_END && pc2[j] != GC_END; j++) {
			if(pc1[j] != pc2[j]) {
				runtime·printf("invalid gc type info for '%s' at %p, type info %p, block info %p\n",
2013-08-31 10:09:50 -06:00
					       t->string ? (int8*)t->string->str : (int8*)"?", j, pc1[j], pc2[j]);
2013-04-08 14:36:35 -06:00
				runtime·throw("invalid gc type info");
			}
		}
	}
}
2012-12-16 17:32:12 -07:00
// scanblock scans a block of n bytes starting at pointer b for references
// to other objects, scanning any it finds recursively until there are no
// unscanned objects left. Instead of using an explicit recursion, it keeps
// a work list in the Workbuf* structures and loops in the main function
// body. Keeping an explicit work list is easier on the stack allocator and
// more efficient.
//
// wbuf: current work buffer
// wp: storage for next queued pointer (write pointer)
// nobj: number of queued objects
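//
// A typical top-level call is scanblock(nil, nil, 0, true): with no
// local work queued, control falls through to the next_block epilogue,
// which refills from the shared work lists via getfull.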
static void
scanblock(Workbuf *wbuf, Obj *wp, uintptr nobj, bool keepworking)
{
	byte *b, *arena_start, *arena_used;
2013-02-27 09:28:53 -07:00
	uintptr n, i, end_b, elemsize, size, ti, objti, count, type;
2013-01-10 13:45:46 -07:00
	uintptr *pc, precise_type, nominal_size;
2013-08-31 10:09:50 -06:00
	uintptr *chan_ret, chancap;
2012-12-16 17:32:12 -07:00
	void *obj;
2013-01-10 13:45:46 -07:00
	Type *t;
	Slice *sliceptr;
	Frame *stack_ptr, stack_top, stack[GC_STACK_CAPACITY+4];
2013-01-04 08:20:50 -07:00
	BufferList *scanbuffers;
2013-12-03 15:12:55 -07:00
	Scanbuf sbuf;
2013-01-10 13:45:46 -07:00
	Eface *eface;
	Iface *iface;
2013-02-25 13:58:23 -07:00
	Hchan *chan;
	ChanType *chantype;
2012-12-16 17:32:12 -07:00
	if(sizeof(Workbuf) % PageSize != 0)
		runtime·throw("scanblock: size of Workbuf is suboptimal");

	// Memory arena parameters.
2013-05-28 12:14:47 -06:00
	arena_start = runtime·mheap.arena_start;
	arena_used = runtime·mheap.arena_used;
2012-12-16 17:32:12 -07:00
2013-01-10 13:45:46 -07:00
	stack_ptr = stack + nelem(stack) - 1;
2013-12-03 15:12:55 -07:00
2013-01-10 13:45:46 -07:00
	precise_type = false;
	nominal_size = 0;
2013-12-03 15:12:55 -07:00
	// Initialize sbuf
	scanbuffers = &bufferList[m->helpgc];

	sbuf.ptr.begin = sbuf.ptr.pos = &scanbuffers->ptrtarget[0];
	sbuf.ptr.end = sbuf.ptr.begin + nelem(scanbuffers->ptrtarget);
2012-12-16 17:32:12 -07:00
2013-12-03 15:12:55 -07:00
	sbuf.obj.begin = sbuf.obj.pos = &scanbuffers->obj[0];
	sbuf.obj.end = sbuf.obj.begin + nelem(scanbuffers->obj);

	sbuf.wbuf = wbuf;
	sbuf.wp = wp;
	sbuf.nobj = nobj;
2013-02-08 14:00:33 -07:00
	// (Silence the compiler)
2013-02-25 13:58:23 -07:00
	chan = nil;
	chantype = nil;
2013-03-19 12:51:03 -06:00
	chan_ret = nil;
2012-12-16 17:32:12 -07:00
	goto next_block;

	for(;;) {
		// Each iteration scans the block b of length n, queueing pointers in
		// the work buffer.
		if(Debug > 1) {
			runtime·printf("scanblock %p %D\n", b, (int64)n);
		}
2013-03-04 08:54:37 -07:00
		if(CollectStats) {
			runtime·xadd64(&gcstats.nbytes, n);
2013-12-03 15:12:55 -07:00
			runtime·xadd64(&gcstats.obj.sum, sbuf.nobj);
2013-03-04 08:54:37 -07:00
			runtime·xadd64(&gcstats.obj.cnt, 1);
		}
2013-01-10 13:45:46 -07:00
		if(ti != 0) {
			pc = (uintptr*)(ti & ~(uintptr)PC_BITS);
			precise_type = (ti & PRECISE);
			stack_top.elemsize = pc[0];
			if(!precise_type)
				nominal_size = pc[0];
			if(ti & LOOP) {
				stack_top.count = 0;	// 0 means an infinite number of iterations
				stack_top.loop_or_ret = pc+1;
			} else {
				stack_top.count = 1;
			}
2013-04-08 14:36:35 -06:00
			if(Debug) {
				// Simple sanity check for provided type info ti:
				// The declared size of the object must be not larger than the actual size
				// (it can be smaller due to inferior pointers).
				// It's difficult to make a comprehensive check due to inferior pointers,
				// reflection, gob, etc.
				if(pc[0] > n) {
					runtime·printf("invalid gc type info: type info size %p, block size %p\n", pc[0], n);
					runtime·throw("invalid gc type info");
				}
			}
2013-01-18 14:56:17 -07:00
		} else if(UseSpanType) {
2013-03-04 08:54:37 -07:00
			if(CollectStats)
				runtime·xadd64(&gcstats.obj.notype, 1);
2013-01-18 14:56:17 -07:00
			type = runtime·gettype(b);
			if(type != 0) {
2013-03-04 08:54:37 -07:00
				if(CollectStats)
					runtime·xadd64(&gcstats.obj.typelookup, 1);
2013-01-18 14:56:17 -07:00
				t = (Type*)(type & ~(uintptr)(PtrSize-1));
				switch(type & (PtrSize-1)) {
				case TypeInfo_SingleObject:
					pc = (uintptr*)t->gc;
					precise_type = true;	// type information about 'b' is precise
					stack_top.count = 1;
					stack_top.elemsize = pc[0];
					break;
				case TypeInfo_Array:
					pc = (uintptr*)t->gc;
					if(pc[0] == 0)
						goto next_block;
					precise_type = true;	// type information about 'b' is precise
					stack_top.count = 0;	// 0 means an infinite number of iterations
					stack_top.elemsize = pc[0];
					stack_top.loop_or_ret = pc+1;
					break;
2013-02-25 13:58:23 -07:00
				case TypeInfo_Chan:
					chan = (Hchan*)b;
					chantype = (ChanType*)t;
2013-03-19 12:51:03 -06:00
					chan_ret = nil;
2013-02-25 13:58:23 -07:00
					pc = chanProg;
					break;
2013-01-18 14:56:17 -07:00
				default:
					runtime·throw("scanblock: invalid type");
					return;
				}
			} else {
				pc = defaultProg;
			}
2013-01-10 13:45:46 -07:00
		} else {
			pc = defaultProg;
		}
2012-12-16 17:32:12 -07:00
2013-04-12 06:23:38 -06:00
		if(IgnorePreciseGC)
			pc = defaultProg;

2012-12-16 17:32:12 -07:00
		pc++;
		stack_top.b = (uintptr)b;
		end_b = (uintptr)b + n - PtrSize;
2013-01-10 13:45:46 -07:00
		for(;;) {
2013-03-04 08:54:37 -07:00
			if(CollectStats)
				runtime·xadd64(&gcstats.instr[pc[0]], 1);
2013-01-10 13:45:46 -07:00
			obj = nil;
			objti = 0;
2012-12-16 17:32:12 -07:00
			switch(pc[0]) {
2013-01-10 13:45:46 -07:00
			case GC_PTR:
				obj = *(void**)(stack_top.b + pc[1]);
				objti = pc[2];
				pc += 3;
2013-04-08 14:36:35 -06:00
				if(Debug)
					checkptr(obj, objti);
2013-01-10 13:45:46 -07:00
				break;
			case GC_SLICE:
				sliceptr = (Slice*)(stack_top.b + pc[1]);
				if(sliceptr->cap != 0) {
					obj = sliceptr->array;
2013-05-15 13:50:32 -06:00
					// Can't use slice element type for scanning,
					// because if it points to an array embedded
					// in the beginning of a struct,
					// we will scan the whole struct as the slice.
					// So just obtain type info from heap.
2013-01-10 13:45:46 -07:00
				}
				pc += 3;
				break;
			case GC_APTR:
				obj = *(void**)(stack_top.b + pc[1]);
				pc += 2;
				break;
			case GC_STRING:
				obj = *(void**)(stack_top.b + pc[1]);
2013-03-21 12:00:02 -06:00
				markonly(obj);
2013-01-10 13:45:46 -07:00
				pc += 2;
2013-03-21 12:00:02 -06:00
				continue;
2013-01-10 13:45:46 -07:00
			case GC_EFACE:
				eface = (Eface*)(stack_top.b + pc[1]);
				pc += 2;
2013-03-15 14:07:52 -06:00
				if(eface->type == nil)
					continue;

				// eface->type
				t = eface->type;
				if((void*)t >= arena_start && (void*)t < arena_used) {
2013-12-03 15:12:55 -07:00
					*sbuf.ptr.pos++ = (PtrTarget){t, 0};
					if(sbuf.ptr.pos == sbuf.ptr.end)
						flushptrbuf(&sbuf);
2013-03-15 14:07:52 -06:00
				}

				// eface->data
				if(eface->data >= arena_start && eface->data < arena_used) {
2013-01-10 13:45:46 -07:00
					if(t->size <= sizeof(void*)) {
						if((t->kind & KindNoPointers))
2013-03-15 14:07:52 -06:00
							continue;
2013-01-10 13:45:46 -07:00
						obj = eface->data;
						if((t->kind & ~KindNoPointers) == KindPtr)
							objti = (uintptr)((PtrType*)t)->elem->gc;
					} else {
						obj = eface->data;
						objti = (uintptr)t->gc;
					}
				}
				break;
			case GC_IFACE:
				iface = (Iface*)(stack_top.b + pc[1]);
				pc += 2;
				if(iface->tab == nil)
2013-03-15 14:07:52 -06:00
					continue;
2013-01-10 13:45:46 -07:00
				// iface->tab
				if((void*)iface->tab >= arena_start && (void*)iface->tab < arena_used) {
2013-12-03 15:12:55 -07:00
					*sbuf.ptr.pos++ = (PtrTarget){iface->tab, (uintptr)itabtype->gc};
					if(sbuf.ptr.pos == sbuf.ptr.end)
						flushptrbuf(&sbuf);
2013-01-10 13:45:46 -07:00
				}

				// iface->data
				if(iface->data >= arena_start && iface->data < arena_used) {
					t = iface->tab->type;
					if(t->size <= sizeof(void*)) {
						if((t->kind & KindNoPointers))
2013-03-15 14:07:52 -06:00
							continue;
2013-01-10 13:45:46 -07:00
						obj = iface->data;
						if((t->kind & ~KindNoPointers) == KindPtr)
							objti = (uintptr)((PtrType*)t)->elem->gc;
					} else {
						obj = iface->data;
						objti = (uintptr)t->gc;
					}
				}
				break;
2012-12-16 17:32:12 -07:00
			case GC_DEFAULT_PTR:
2013-03-21 12:00:02 -06:00
				while(stack_top.b <= end_b) {
					obj = *(byte**)stack_top.b;
2012-12-16 17:32:12 -07:00
					stack_top.b += PtrSize;
					if(obj >= arena_start && obj < arena_used) {
2013-12-03 15:12:55 -07:00
						*sbuf.ptr.pos++ = (PtrTarget){obj, 0};
						if(sbuf.ptr.pos == sbuf.ptr.end)
							flushptrbuf(&sbuf);
2013-01-10 13:45:46 -07:00
					}
				}
				goto next_block;
			case GC_END:
				if(--stack_top.count != 0) {
					// Next iteration of a loop if possible.
2013-03-15 14:07:52 -06:00
					stack_top.b += stack_top.elemsize;
					if(stack_top.b + stack_top.elemsize <= end_b+PtrSize) {
2013-01-10 13:45:46 -07:00
						pc = stack_top.loop_or_ret;
						continue;
					}
					i = stack_top.b;
				} else {
					// Stack pop if possible.
					if(stack_ptr+1 < stack+nelem(stack)) {
						pc = stack_top.loop_or_ret;
						stack_top = *(++stack_ptr);
						continue;
					}
					i = (uintptr)b + nominal_size;
				}
				if(!precise_type) {
					// Quickly scan [b+i,b+n) for possible pointers.
					for(; i<=end_b; i+=PtrSize) {
						if(*(byte**)i != nil) {
							// Found a value that may be a pointer.
							// Do a rescan of the entire block.
2013-12-03 15:12:55 -07:00
							enqueue((Obj){b, n, 0}, &sbuf.wbuf, &sbuf.wp, &sbuf.nobj);
2013-03-04 08:54:37 -07:00
							if(CollectStats) {
								runtime·xadd64(&gcstats.rescan, 1);
								runtime·xadd64(&gcstats.rescanbytes, n);
							}
2013-01-10 13:45:46 -07:00
							break;
						}
2012-12-16 17:32:12 -07:00
					}
				}
2013-01-10 13:45:46 -07:00
				goto next_block;
			case GC_ARRAY_START:
				i = stack_top.b + pc[1];
				count = pc[2];
				elemsize = pc[3];
				pc += 4;

				// Stack push.
				*stack_ptr-- = stack_top;
				stack_top = (Frame){count, elemsize, i, pc};
				continue;
			case GC_ARRAY_NEXT:
				if(--stack_top.count != 0) {
					stack_top.b += stack_top.elemsize;
					pc = stack_top.loop_or_ret;
				} else {
					// Stack pop.
					stack_top = *(++stack_ptr);
					pc += 1;
				}
				continue;
			case GC_CALL:
				// Stack push.
				*stack_ptr-- = stack_top;
				stack_top = (Frame){1, 0, stack_top.b + pc[1], pc+3 /*return address*/};
2013-02-26 20:42:56 -07:00
				pc = (uintptr*)((byte*)pc + *(int32*)(pc+2));  // target of the CALL instruction
2013-01-10 13:45:46 -07:00
				continue;
			case GC_REGION:
				obj = (void*)(stack_top.b + pc[1]);
2013-02-27 09:28:53 -07:00
				size = pc[2];
				objti = pc[3];
2013-01-10 13:45:46 -07:00
				pc += 4;
2013-02-27 09:28:53 -07:00
2013-12-03 15:12:55 -07:00
				*sbuf.obj.pos++ = (Obj){obj, size, objti};
				if(sbuf.obj.pos == sbuf.obj.end)
					flushobjbuf(&sbuf);
2013-03-15 14:07:52 -06:00
				continue;
2012-12-16 17:32:12 -07:00
2013-03-19 12:51:03 -06:00
			case GC_CHAN_PTR:
				chan = *(Hchan**)(stack_top.b + pc[1]);
				if(chan == nil) {
					pc += 3;
					continue;
				}
				if(markonly(chan)) {
					chantype = (ChanType*)pc[2];
					if(!(chantype->elem->kind & KindNoPointers)) {
						// Start chanProg.
						chan_ret = pc+3;
						pc = chanProg+1;
						continue;
					}
				}
				pc += 3;
				continue;
2013-02-25 13:58:23 -07:00
			case GC_CHAN:
				// There are no heap pointers in struct Hchan,
				// so we can ignore the leading sizeof(Hchan) bytes.
				if(!(chantype->elem->kind & KindNoPointers)) {
					// Channel's buffer follows Hchan immediately in memory.
					// Size of buffer (cap(c)) is second int in the chan struct.
2013-05-28 09:17:47 -06:00
					chancap = ((uintgo*)chan)[1];
					if(chancap > 0) {
2013-02-25 13:58:23 -07:00
						// TODO(atom): split into two chunks so that only the
						// in-use part of the circular buffer is scanned.
						// (Channel routines zero the unused part, so the current
						// code does not lead to leaks, it's just a little inefficient.)
2013-12-03 15:12:55 -07:00
						*sbuf.obj.pos++ = (Obj){(byte*)chan+runtime·Hchansize, chancap*chantype->elem->size,
2013-02-25 13:58:23 -07:00
							(uintptr)chantype->elem->gc | PRECISE | LOOP};
2013-12-03 15:12:55 -07:00
						if(sbuf.obj.pos == sbuf.obj.end)
							flushobjbuf(&sbuf);
2013-02-25 13:58:23 -07:00
					}
				}
2013-03-19 12:51:03 -06:00
				if(chan_ret == nil)
					goto next_block;
				pc = chan_ret;
				continue;
2013-02-25 13:58:23 -07:00
2013-12-03 15:12:55 -07:00
			case GC_G_PTR:
				obj = (void*)stack_top.b;
				scanstack(obj, &sbuf);
				goto next_block;
2012-12-16 17:32:12 -07:00
			default:
				runtime·throw("scanblock: invalid GC instruction");
				return;
2011-02-02 21:03:47 -07:00
			}
2011-09-30 07:40:01 -06:00
2013-01-10 13:45:46 -07:00
			if(obj >= arena_start && obj < arena_used) {
2013-12-03 15:12:55 -07:00
				*sbuf.ptr.pos++ = (PtrTarget){obj, objti};
				if(sbuf.ptr.pos == sbuf.ptr.end)
					flushptrbuf(&sbuf);
2013-01-10 13:45:46 -07:00
			}
		}
2012-12-16 17:32:12 -07:00
	next_block:
2011-02-02 21:03:47 -07:00
		// Done scanning [b, b+n). Prepare for the next iteration of
2013-01-10 13:45:46 -07:00
		// the loop by setting b, n, ti to the parameters for the next block.
2011-02-02 21:03:47 -07:00
2013-12-03 15:12:55 -07:00
		if(sbuf.nobj == 0) {
			flushptrbuf(&sbuf);
			flushobjbuf(&sbuf);
2012-12-16 17:32:12 -07:00
2013-12-03 15:12:55 -07:00
			if(sbuf.nobj == 0) {
2012-12-16 17:32:12 -07:00
				if(!keepworking) {
2013-12-03 15:12:55 -07:00
					if(sbuf.wbuf)
						putempty(sbuf.wbuf);
					return;
2012-12-16 17:32:12 -07:00
				}
				// Emptied our buffer: refill.
2013-12-03 15:12:55 -07:00
				sbuf.wbuf = getfull(sbuf.wbuf);
				if(sbuf.wbuf == nil)
					return;
				sbuf.nobj = sbuf.wbuf->nobj;
				sbuf.wp = sbuf.wbuf->obj + sbuf.wbuf->nobj;
2011-09-30 07:40:01 -06:00
			}
		}
		// Fetch b from the work buffer.
		--sbuf.wp;
		b = sbuf.wp->p;
		n = sbuf.wp->n;
		ti = sbuf.wp->ti;
		sbuf.nobj--;
	}
}
// debug_scanblock is the debug copy of scanblock.
// it is simpler, slower, single-threaded, recursive,
// and uses bitSpecial as the mark bit.
static void
debug_scanblock(byte *b, uintptr n)
{
	byte *obj, *p;
	void **vp;
	uintptr size, *bitp, bits, shift, i, xbits, off;
	MSpan *s;

	if(!DebugMark)
		runtime·throw("debug_scanblock without DebugMark");

	if((intptr)n < 0) {
		runtime·printf("debug_scanblock %p %D\n", b, (int64)n);
		runtime·throw("debug_scanblock");
	}

	// Align b to a word boundary.
	off = (uintptr)b & (PtrSize-1);
	if(off != 0) {
		b += PtrSize - off;
		n -= PtrSize - off;
	}

	vp = (void**)b;
	n /= PtrSize;
	for(i=0; i<n; i++) {
		obj = (byte*)vp[i];

		// Words outside the arena cannot be pointers.
		if((byte*)obj < runtime·mheap.arena_start || (byte*)obj >= runtime·mheap.arena_used)
			continue;

		// Round down to word boundary.
		obj = (void*)((uintptr)obj & ~((uintptr)PtrSize-1));

		// Consult span table to find beginning.
		s = runtime·MHeap_LookupMaybe(&runtime·mheap, obj);
		if(s == nil)
			continue;

		p = (byte*)((uintptr)s->start<<PageShift);
		size = s->elemsize;
		if(s->sizeclass == 0) {
			obj = p;
		} else {
			int32 i = ((byte*)obj - p)/size;
			obj = p+i*size;
		}

		// Now that we know the object header, reload bits.
		off = (uintptr*)obj - (uintptr*)runtime·mheap.arena_start;
		bitp = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		xbits = *bitp;
		bits = xbits >> shift;

		// Now we have bits, bitp, and shift correct for
		// obj pointing at the base of the object.
		// If not allocated or already marked, done.
		if((bits & bitAllocated) == 0 || (bits & bitSpecial) != 0)  // NOTE: bitSpecial not bitMarked
			continue;
		*bitp |= bitSpecial<<shift;
		if(!(bits & bitMarked))
			runtime·printf("found unmarked block %p in %p\n", obj, vp+i);

		// If object has no pointers, don't need to scan further.
		if((bits & bitScan) == 0)
			continue;

		debug_scanblock(obj, size);
	}
}
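
// Bitmap addressing sketch (illustrative, derived from the code
// above): the mark bitmap sits just below arena_start and grows
// downward, so the descriptor bits for the heap word at index off
// (counting up from arena_start) live in the bitmap word
//	(uintptr*)arena_start - off/wordsPerBitmapWord - 1
// at shift off%wordsPerBitmapWord. Both debug_scanblock and the main
// scanner rely on this same layout.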

// Append obj to the work buffer.
// _wbuf, _wp, _nobj are input/output parameters specifying the work buffer.
static void
enqueue(Obj obj, Workbuf **_wbuf, Obj **_wp, uintptr *_nobj)
{
	uintptr nobj, off;
	Obj *wp;
	Workbuf *wbuf;

	if(Debug > 1)
		runtime·printf("append obj(%p %D %p)\n", obj.p, (int64)obj.n, obj.ti);

	// Align obj.b to a word boundary.
	off = (uintptr)obj.p & (PtrSize-1);
	if(off != 0) {
		obj.p += PtrSize - off;
		obj.n -= PtrSize - off;
		obj.ti = 0;
	}

	if(obj.p == nil || obj.n == 0)
		return;

	// Load work buffer state
	wp = *_wp;
	wbuf = *_wbuf;
	nobj = *_nobj;

	// If another proc wants a pointer, give it some.
	if(work.nwait > 0 && nobj > handoffThreshold && work.full == 0) {
		wbuf->nobj = nobj;
		wbuf = handoff(wbuf);
		nobj = wbuf->nobj;
		wp = wbuf->obj + nobj;
	}

	// If buffer is full, get a new one.
	if(wbuf == nil || nobj >= nelem(wbuf->obj)) {
		if(wbuf != nil)
			wbuf->nobj = nobj;
		wbuf = getempty(wbuf);
		wp = wbuf->obj;
		nobj = 0;
	}

	*wp = obj;
	wp++;
	nobj++;

	// Save work buffer state
	*_wp = wp;
	*_wbuf = wbuf;
	*_nobj = nobj;
}
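
// Typical use of enqueue (a sketch; markroot below is the real,
// minimal caller): thread the same wbuf/wp/nobj state through
// successive calls, then pass whatever accumulated to scanblock.
//
//	Obj *wp = nil;
//	Workbuf *wbuf = nil;
//	uintptr nobj = 0;
//
//	enqueue(obj, &wbuf, &wp, &nobj);
//	scanblock(wbuf, wp, nobj, false);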

static void
markroot(ParFor *desc, uint32 i)
{
	Obj *wp;
	Workbuf *wbuf;
	uintptr nobj;

	USED(&desc);
	wp = nil;
	wbuf = nil;
	nobj = 0;
	enqueue(work.roots[i], &wbuf, &wp, &nobj);
	scanblock(wbuf, wp, nobj, false);
}

// Get an empty work buffer off the work.empty list,
// allocating new buffers as needed.
static Workbuf*
getempty(Workbuf *b)
{
	if(b != nil)
		runtime·lfstackpush(&work.full, &b->node);
	b = (Workbuf*)runtime·lfstackpop(&work.empty);
	if(b == nil) {
		// Need to allocate.
		runtime·lock(&work);
		if(work.nchunk < sizeof *b) {
			work.nchunk = 1<<20;
			work.chunk = runtime·SysAlloc(work.nchunk, &mstats.gc_sys);
			if(work.chunk == nil)
				runtime·throw("runtime: cannot allocate memory");
		}
		b = (Workbuf*)work.chunk;
		work.chunk += sizeof *b;
		work.nchunk -= sizeof *b;
		runtime·unlock(&work);
	}
	b->nobj = 0;
	return b;
}
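
// Note on the chunking above: buffers are carved out of 1MB chunks
// (work.nchunk = 1<<20), so one SysAlloc serves on the order of
// (1<<20)/sizeof(Workbuf) getempty calls before the work lock must
// allocate again; the exact count depends on sizeof(Workbuf), which
// varies by platform.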

static void
putempty(Workbuf *b)
{
	if(CollectStats)
		runtime·xadd64(&gcstats.putempty, 1);

	runtime·lfstackpush(&work.empty, &b->node);
}

// Get a full work buffer off the work.full list, or return nil.
static Workbuf*
getfull(Workbuf *b)
{
	int32 i;

	if(CollectStats)
		runtime·xadd64(&gcstats.getfull, 1);

	if(b != nil)
		runtime·lfstackpush(&work.empty, &b->node);
	b = (Workbuf*)runtime·lfstackpop(&work.full);
	if(b != nil || work.nproc == 1)
		return b;

	runtime·xadd(&work.nwait, +1);
	for(i=0;; i++) {
		if(work.full != 0) {
			runtime·xadd(&work.nwait, -1);
			b = (Workbuf*)runtime·lfstackpop(&work.full);
			if(b != nil)
				return b;
			runtime·xadd(&work.nwait, +1);
		}
		if(work.nwait == work.nproc)
			return nil;
		if(i < 10) {
			m->gcstats.nprocyield++;
			runtime·procyield(20);
		} else if(i < 20) {
			m->gcstats.nosyield++;
			runtime·osyield();
		} else {
			m->gcstats.nsleep++;
			runtime·usleep(100);
		}
	}
}
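
// The wait loop in getfull backs off in three stages as i grows:
// spin on the CPU (runtime·procyield) for the first 10 iterations,
// yield the OS thread (runtime·osyield) for the next 10, then sleep
// 100us per iteration. It returns only when a full buffer appears or
// when work.nwait == work.nproc, i.e. every proc is idle and no more
// work can be produced.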

static Workbuf*
handoff(Workbuf *b)
{
	int32 n;
	Workbuf *b1;

	// Make new buffer with half of b's pointers.
	b1 = getempty(nil);
	n = b->nobj/2;
	b->nobj -= n;
	b1->nobj = n;
	runtime·memmove(b1->obj, b->obj+b->nobj, n*sizeof b1->obj[0]);
	m->gcstats.nhandoff++;
	m->gcstats.nhandoffcnt += n;

	// Put b on full list - let first half of b get stolen.
	runtime·lfstackpush(&work.full, &b->node);
	return b1;
}
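
// Worked example for handoff: with b->nobj == 10, n == 5, so b1
// receives the last five entries (b->obj[5..9]) and is returned to
// the caller, while b keeps b->obj[0..4] and is pushed onto
// work.full for another proc to steal. An odd count such as 9 leaves
// 5 entries with b and moves 4 to b1.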

static void
addroot(Obj obj)
{
	uint32 cap;
	Obj *new;

	if(work.nroot >= work.rootcap) {
		cap = PageSize/sizeof(Obj);
		if(cap < 2*work.rootcap)
			cap = 2*work.rootcap;
		new = (Obj*)runtime·SysAlloc(cap*sizeof(Obj), &mstats.gc_sys);
		if(new == nil)
			runtime·throw("runtime: cannot allocate memory");
		if(work.roots != nil) {
			runtime·memmove(new, work.roots, work.rootcap*sizeof(Obj));
			runtime·SysFree(work.roots, work.rootcap*sizeof(Obj), &mstats.gc_sys);
		}
		work.roots = new;
		work.rootcap = cap;
	}
	work.roots[work.nroot] = obj;
	work.nroot++;
}
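
// Growth sketch for the root array: rootcap starts at
// PageSize/sizeof(Obj) entries and doubles thereafter. Assuming a
// 4KB page and a three-word Obj on 64-bit (24 bytes; an assumption,
// not a guarantee), that is roughly 170, 340, 680, ... entries, with
// each growth doing one memmove and freeing the old array.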

extern byte pclntab[]; // base for f->ptrsoff

typedef struct BitVector BitVector;
struct BitVector
{
	int32 n;
	uint32 data[];
};
typedef struct StackMap StackMap;
struct StackMap
{
	int32 n;
	uint32 data[];
};

static BitVector*
stackmapdata(StackMap *stackmap, int32 n)
{
	BitVector *bv;
	uint32 *ptr;
	uint32 words;
	int32 i;

	if(n < 0 || n >= stackmap->n)
		runtime·throw("stackmapdata: index out of range");
	ptr = stackmap->data;
	for(i = 0; i < n; i++) {
		bv = (BitVector*)ptr;
		words = ((bv->n + 31) / 32) + 1;
		ptr += words;
	}
	return (BitVector*)ptr;
}
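
// Layout sketch: a StackMap packs its n BitVectors back to back,
// each stored as one uint32 length word followed by (bv->n + 31)/32
// data words. With three hypothetical 40-bit bitmaps, each occupies
// 1+2 = 3 words, and stackmapdata(sm, 2) skips 6 words to return the
// third one. Lookup walks the preceding bitmaps, so it is O(n).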

// Scans an interface data value when the interface type indicates
// that it is a pointer.
static void
scaninterfacedata(uintptr bits, byte *scanp, bool afterprologue, Scanbuf *sbuf)
{
	Itab *tab;
	Type *type;

	if(runtime·precisestack && afterprologue) {
		if(bits == BitsIface) {
			tab = *(Itab**)scanp;
			if(tab->type->size <= sizeof(void*) && (tab->type->kind & KindNoPointers))
				return;
		} else { // bits == BitsEface
			type = *(Type**)scanp;
			if(type->size <= sizeof(void*) && (type->kind & KindNoPointers))
				return;
		}
	}
	*sbuf->obj.pos++ = (Obj){scanp+PtrSize, PtrSize, 0};
	if(sbuf->obj.pos == sbuf->obj.end)
		flushobjbuf(sbuf);
}
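
// Note: scanp points at the first word of an interface value (the
// Itab* for an iface, the Type* for an eface); the data word is one
// pointer past it, which is why the Obj queued above is
// {scanp+PtrSize, PtrSize, 0}. The early returns skip interfaces
// whose dynamic type is pointer-free and small enough to be stored
// directly in the data word.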

// Starting from scanp, scans words corresponding to set bits.
static void
scanbitvector(byte *scanp, BitVector *bv, bool afterprologue, Scanbuf *sbuf)
{
	uintptr word, bits;
	uint32 *wordp;
	int32 i, remptrs;

	wordp = bv->data;
	for(remptrs = bv->n; remptrs > 0; remptrs -= 32) {
		word = *wordp++;
		if(remptrs < 32)
			i = remptrs;
		else
			i = 32;
		i /= BitsPerPointer;
		for(; i > 0; i--) {
			bits = word & 3;
			if(bits != BitsNoPointer && *(void**)scanp != nil) {
				if(bits == BitsPointer) {
					*sbuf->obj.pos++ = (Obj){scanp, PtrSize, 0};
					if(sbuf->obj.pos == sbuf->obj.end)
						flushobjbuf(sbuf);
				} else
					scaninterfacedata(bits, scanp, afterprologue, sbuf);
			}
			word >>= BitsPerPointer;
			scanp += PtrSize;
		}
	}
}
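
// Encoding sketch: each stack word is described by BitsPerPointer
// (2) bits, so one 32-bit bitmap word covers 16 stack words,
// consumed from the low bits up. BitsNoPointer slots and nil slots
// are skipped; BitsPointer slots are queued as one-word Objs, and
// the interface kinds (BitsIface, BitsEface) go through
// scaninterfacedata above.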

// Scan a stack frame: local variables and function arguments/results.
static void
scanframe(Stkframe *frame, void *arg)
{
	Func *f;
	Scanbuf *sbuf;
	StackMap *stackmap;
	BitVector *bv;
	uintptr size;
	uintptr targetpc;
	int32 pcdata;
	bool afterprologue;

	f = frame->fn;
	targetpc = frame->pc;
	if(targetpc != f->entry)
		targetpc--;
	pcdata = runtime·pcdatavalue(f, PCDATA_StackMapIndex, targetpc);
	if(pcdata == -1) {
		// We do not have a valid pcdata value but there might be a
		// stackmap for this function.  It is likely that we are looking
		// at the function prologue, assume so and hope for the best.
		pcdata = 0;
	}

	sbuf = arg;

	// Scan local variables if stack frame has been allocated.
	// Use pointer information if known.
	afterprologue = (frame->varp > (byte*)frame->sp);
	if(afterprologue) {
		stackmap = runtime·funcdata(f, FUNCDATA_LocalsPointerMaps);
		if(stackmap == nil) {
			// No locals information, scan everything.
			size = frame->varp - (byte*)frame->sp;
			*sbuf->obj.pos++ = (Obj){frame->varp - size, size, 0};
			if(sbuf->obj.pos == sbuf->obj.end)
				flushobjbuf(sbuf);
		} else if(stackmap->n < 0) {
			// Locals size information, scan just the locals.
			size = -stackmap->n;
			*sbuf->obj.pos++ = (Obj){frame->varp - size, size, 0};
			if(sbuf->obj.pos == sbuf->obj.end)
				flushobjbuf(sbuf);
		} else if(stackmap->n > 0) {
			// Locals bitmap information, scan just the pointers in
			// locals.
			if(pcdata < 0 || pcdata >= stackmap->n) {
				// don't know where we are
				runtime·printf("pcdata is %d and %d stack map entries\n", pcdata, stackmap->n);
				runtime·throw("scanframe: bad symbol table");
			}
			bv = stackmapdata(stackmap, pcdata);
			size = (bv->n * PtrSize) / BitsPerPointer;
			scanbitvector(frame->varp - size, bv, afterprologue, sbuf);
		}
	}

	// Scan arguments.
	// Use pointer information if known.
	stackmap = runtime·funcdata(f, FUNCDATA_ArgsPointerMaps);
	if(stackmap != nil) {
		bv = stackmapdata(stackmap, pcdata);
		scanbitvector(frame->argp, bv, false, sbuf);
	} else {
		*sbuf->obj.pos++ = (Obj){frame->argp, frame->arglen, 0};
		if(sbuf->obj.pos == sbuf->obj.end)
			flushobjbuf(sbuf);
	}
}
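
// Size arithmetic used in scanframe: a locals bitmap of bv->n bits
// describes bv->n/BitsPerPointer stack words, so
// size = (bv->n * PtrSize)/BitsPerPointer bytes and the scanned
// region is [frame->varp - size, frame->varp). Arguments are scanned
// upward from frame->argp the same way, with afterprologue false so
// the interface-type shortcut is not applied.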

static void
scanstack(G *gp, void *scanbuf)
{
	runtime·gentraceback(~(uintptr)0, ~(uintptr)0, 0, gp, 0, nil, 0x7fffffff, scanframe, scanbuf, false);
}
static void
addstackroots(G *gp)
{
	M *mp;
	int32 n;
	Stktop *stk;
	uintptr sp, guard;
	void *base;
	uintptr size;

	if(gp == g)
		runtime·throw("can't scan our own stack");
	if((mp = gp->m) != nil && mp->helpgc)
		runtime·throw("can't scan gchelper stack");
	if(gp->syscallstack != (uintptr)nil) {
		// Scanning another goroutine that is about to enter or might
		// have just exited a system call. It may be executing code such
		// as schedlock and may have needed to start a new stack segment.
		// Use the stack segment and stack pointer at the time of
		// the system call instead, since that won't change underfoot.
		sp = gp->syscallsp;
		stk = (Stktop*)gp->syscallstack;
		guard = gp->syscallguard;
	} else {
		// Scanning another goroutine's stack.
		// The goroutine is usually asleep (the world is stopped).
		sp = gp->sched.sp;
		stk = (Stktop*)gp->stackbase;
		guard = gp->stackguard;
		// For function about to start, context argument is a root too.
		if(gp->sched.ctxt != 0 && runtime·mlookup(gp->sched.ctxt, &base, &size, nil))
			addroot((Obj){base, size, 0});
	}
	if(ScanStackByFrames) {
		USED(sp);
		USED(stk);
		USED(guard);
		addroot((Obj){(byte*)gp, PtrSize, (uintptr)gptrProg});
	} else {
		n = 0;
		while(stk) {
			if(sp < guard-StackGuard || (uintptr)stk < sp) {
				runtime·printf("scanstack inconsistent: g%D#%d sp=%p not in [%p,%p]\n", gp->goid, n, sp, guard-StackGuard, stk);
				runtime·throw("scanstack");
			}
			addroot((Obj){(byte*)sp, (uintptr)stk - sp, (uintptr)defaultProg | PRECISE | LOOP});
			sp = stk->gobuf.sp;
			guard = stk->stackguard;
			stk = (Stktop*)stk->stackbase;
			n++;
		}
	}
}
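
// When ScanStackByFrames is off, the loop above walks the segmented
// stack directly: each segment is the range [sp, stk), with stk the
// Stktop at the segment's base, and stk->gobuf.sp / stk->stackbase
// chaining to the next older segment. The consistency check rejects
// an sp outside [guard-StackGuard, stk], which would mean the g's
// saved state is corrupt.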

static void
addroots(void)
{
	G *gp;
	FinBlock *fb;
	MSpan *s, **allspans;
	uint32 spanidx;
	Special *sp;
	SpecialFinalizer *spf;

	work.nroot = 0;

	// data & bss
	// TODO(atom): load balancing
	addroot((Obj){data, edata - data, (uintptr)gcdata});
	addroot((Obj){bss, ebss - bss, (uintptr)gcbss});

	// MSpan.types
	allspans = runtime·mheap.allspans;
	for(spanidx=0; spanidx<runtime·mheap.nspan; spanidx++) {
		s = allspans[spanidx];
		if(s->state == MSpanInUse) {
			// The garbage collector ignores type pointers stored in MSpan.types:
			//  - Compiler-generated types are stored outside of heap.
			//  - The reflect package has runtime-generated types cached in its data structures.
			//    The garbage collector relies on finding the references via that cache.
			switch(s->types.compression) {
			case MTypes_Empty:
			case MTypes_Single:
				break;
			case MTypes_Words:
			case MTypes_Bytes:
				markonly((byte*)s->types.data);
				break;
			}
		}
	}

	// MSpan.specials
	allspans = runtime·mheap.allspans;
	for(spanidx=0; spanidx<runtime·mheap.nspan; spanidx++) {
		s = allspans[spanidx];
		if(s->state != MSpanInUse)
			continue;
		for(sp = s->specials; sp != nil; sp = sp->next) {
			switch(sp->kind) {
			case KindSpecialFinalizer:
				spf = (SpecialFinalizer*)sp;
				// don't mark finalized object, but scan it so we
				// retain everything it points to.
				addroot((Obj){(void*)((s->start << PageShift) + spf->offset), s->elemsize, 0});
				addroot((Obj){(void*)&spf->fn, PtrSize, 0});
				addroot((Obj){(void*)&spf->fint, PtrSize, 0});
				addroot((Obj){(void*)&spf->ot, PtrSize, 0});
				break;
			case KindSpecialProfile:
				break;
			}
		}
	}

// stacks
	for(gp = runtime·allg; gp != nil; gp = gp->alllink) {
		switch(gp->status) {
		default:
			runtime·printf("unexpected G.status %d\n", gp->status);
			runtime·throw("mark - bad status");
2009-01-26 18:37:05 -07:00
		case Gdead:
			break;
		case Grunning:
2013-08-21 16:17:45 -06:00
			runtime·throw("mark - world not stopped");
2009-01-26 18:37:05 -07:00
		case Grunnable:
		case Gsyscall:
		case Gwaiting:
2012-05-24 00:55:50 -06:00
			addstackroots(gp);
2009-01-26 18:37:05 -07:00
			break;
		}
	}
2011-10-06 09:42:51 -06:00
	for(fb=allfin; fb; fb=fb->alllink)
2012-12-16 17:32:12 -07:00
		addroot((Obj){(byte*)fb->fin, fb->cnt*sizeof(fb->fin[0]), 0});
2010-03-26 15:15:30 -06:00
}
2010-02-03 17:31:34 -07:00
2013-12-18 18:13:59 -07:00
static void
addfreelists(void)
{
	int32 i;
	P *p, **pp;
	MCache *c;
	MLink *m;

	// Mark objects in the MCache of each P so we don't collect them.
	for(pp=runtime·allp; p=*pp; pp++) {
		c = p->mcache;
		if(c==nil)
			continue;
		for(i = 0; i < NumSizeClasses; i++) {
			for(m = c->list[i].list; m != nil; m = m->next) {
				markonly(m);
			}
		}
	}

	// Note: the sweeper will mark objects in each span's freelist.
}
2014-01-07 14:45:50 -07:00
void
runtime·queuefinalizer(byte *p, FuncVal *fn, uintptr nret, Type *fint, PtrType *ot)
2011-10-06 09:42:51 -06:00
{
	FinBlock *block;
	Finalizer *f;
2012-05-15 09:10:16 -06:00
2011-10-06 09:42:51 -06:00
	runtime·lock(&finlock);
	if(finq == nil || finq->cnt == finq->cap) {
		if(finc == nil) {
runtime: account for all sys memory in MemStats
Currently lots of sys allocations are not accounted in any of XxxSys,
including GC bitmap, spans table, GC roots blocks, GC finalizer blocks,
iface table, netpoll descriptors and more. Up to ~20% can unaccounted.
This change introduces 2 new stats: GCSys and OtherSys for GC metadata
and all other misc allocations, respectively.
Also ensures that all XxxSys indeed sum up to Sys. All sys memory allocation
functions require the stat for accounting, so that it's impossible to miss something.
Also fix updating of mcache_sys/inuse, they were not updated after deallocation.
test/bench/garbage/parser before:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14204928
MCacheSys 16384
BuckHashSys 1439992
after:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14188544
MCacheSys 16384
BuckHashSys 3194304
GCSys 39198688
OtherSys 3129656
Fixes #5799.
R=rsc, dave, alex.brainman
CC=golang-dev
https://golang.org/cl/12946043
2013-09-06 14:55:40 -06:00
			finc = runtime·persistentalloc(PageSize, 0, &mstats.gc_sys);
2011-10-06 09:42:51 -06:00
			finc->cap = (PageSize - sizeof(FinBlock)) / sizeof(Finalizer) + 1;
			finc->alllink = allfin;
			allfin = finc;
		}
		block = finc;
		finc = block->next;
		block->next = finq;
		finq = block;
	}
	f = &finq->fin[finq->cnt];
	finq->cnt++;
	f->fn = fn;
	f->nret = nret;
2013-08-14 12:54:31 -06:00
	f->fint = fint;
2013-07-29 09:43:08 -06:00
	f->ot = ot;
2011-10-06 09:42:51 -06:00
	f->arg = p;
2012-05-15 09:10:16 -06:00
	runtime·unlock(&finlock);
2011-10-06 09:42:51 -06:00
}
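
// Illustrative capacity arithmetic (a sketch, assuming FinBlock ends in a
// one-element fin[] array as declared in malloc.h): each FinBlock is one
// PageSize allocation whose tail holds Finalizer records, so
//	cap = (PageSize - sizeof(FinBlock))/sizeof(Finalizer) + 1
// where the +1 recovers the fin[1] element already counted inside
// sizeof(FinBlock).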
// Sweep frees or collects finalizers for blocks not marked in the mark phase.
2011-02-02 21:03:47 -07:00
// It clears the mark bits in preparation for the next GC round.
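//
// As a reading aid, a sketch (assuming the bit definitions used throughout
// this file) of the per-object bitmap states the sweep distinguishes:
//
//	bitAllocated|bitMarked   reachable: clear bitMarked, keep the object
//	bitAllocated only        unreachable: free it (a finalizer may defer this)
//	neither                  not the start of an allocated object: skip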
2010-02-03 17:31:34 -07:00
static void
2012-05-22 11:35:52 -06:00
sweepspan(ParFor *desc, uint32 idx)
2012-04-05 11:02:20 -06:00
{
	int32 cl, n, npages;
2014-01-07 14:45:50 -07:00
	uintptr size, off, *bitp, shift, bits;
2012-04-05 11:02:20 -06:00
	byte *p;
	MCache *c;
	byte *arena_start;
2012-09-24 18:08:05 -06:00
	MLink head, *end;
2012-04-12 02:01:24 -06:00
	int32 nfree;
2012-09-24 18:08:05 -06:00
	byte *type_data;
	byte compression;
	uintptr type_data_inc;
2012-05-22 11:35:52 -06:00
	MSpan *s;
2013-12-18 18:13:59 -07:00
	MLink *x;
2014-01-07 14:45:50 -07:00
	Special *special, **specialp, *y;
2010-02-10 15:59:39 -07:00
2012-05-22 11:35:52 -06:00
	USED(&desc);
2013-05-28 12:14:47 -06:00
	s = runtime·mheap.allspans[idx];
2012-05-22 11:35:52 -06:00
	if(s->state != MSpanInUse)
		return;
2013-05-28 12:14:47 -06:00
	arena_start = runtime·mheap.arena_start;
2012-04-05 11:02:20 -06:00
	cl = s->sizeclass;
2012-09-24 18:08:05 -06:00
	size = s->elemsize;
2012-04-05 11:02:20 -06:00
	if(cl == 0) {
		n = 1;
	} else {
		// Chunk full of small blocks.
		npages = runtime·class_to_allocnpages[cl];
		n = (npages << PageShift) / size;
	}
2012-04-12 02:01:24 -06:00
	nfree = 0;
2012-09-24 18:08:05 -06:00
	end = &head;
2012-04-12 02:01:24 -06:00
	c = m->mcache;
2013-12-18 18:13:59 -07:00
	// mark any free objects in this span so we don't collect them
	for(x = s->freelist; x != nil; x = x->next) {
		// This is markonly(x) but faster because we don't need
		// atomic access and we're guaranteed to be pointing at
		// the head of a valid object.
		off = (uintptr*)x - (uintptr*)runtime·mheap.arena_start;
		bitp = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		*bitp |= bitMarked<<shift;
	}
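	// The off/bitp/shift arithmetic above is the bitmap lookup used
	// throughout this file.  A worked example (illustrative numbers only):
	// on 64-bit, wordsPerBitmapWord is 16 (4 bitmap bits per heap word),
	// so the heap word at offset off=37 from arena_start is described by
	// the bitmap word at arena_start - 37/16 - 1 = arena_start-3, at bit
	// shift 37%16 = 5.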
2012-09-24 18:08:05 -06:00
2014-01-07 14:45:50 -07:00
	// Unlink & free special records for any objects we're about to free.
	specialp = &s->specials;
	special = *specialp;
	while(special != nil) {
		p = (byte*)(s->start << PageShift) + special->offset;
		off = (uintptr*)p - (uintptr*)arena_start;
		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		bits = *bitp>>shift;
		if((bits & (bitAllocated|bitMarked)) == bitAllocated) {
			// about to free object: splice out special record
			y = special;
			special = special->next;
			*specialp = special;
			if(!runtime·freespecial(y, p, size)) {
				// stop freeing of object if it has a finalizer
				*bitp |= bitMarked << shift;
			}
		} else {
			// object is still live: keep special record
			specialp = &special->next;
			special = *specialp;
		}
	}
2012-09-24 18:08:05 -06:00
	type_data = (byte*)s->types.data;
	type_data_inc = sizeof(uintptr);
	compression = s->types.compression;
	switch(compression) {
	case MTypes_Bytes:
		type_data += 8*sizeof(uintptr);
		type_data_inc = 1;
		break;
	}
2011-02-02 21:03:47 -07:00
2012-04-05 11:02:20 -06:00
	// Sweep through n objects of given size starting at p.
	// This thread owns the span now, so it can manipulate
	// the block bitmap without atomic operations.
2014-01-07 14:45:50 -07:00
	p = (byte*)(s->start << PageShift);
2012-09-24 18:08:05 -06:00
	for(; n > 0; n--, p += size, type_data += type_data_inc) {
2012-04-05 11:02:20 -06:00
		off = (uintptr*)p - (uintptr*)arena_start;
		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		bits = *bitp>>shift;
2011-02-02 21:03:47 -07:00
2012-04-05 11:02:20 -06:00
		if((bits & bitAllocated) == 0)
			continue;

		if((bits & bitMarked) != 0) {
			if(DebugMark) {
				if(!(bits & bitSpecial))
					runtime·printf("found spurious mark on %p\n", p);
				*bitp &= ~(bitSpecial<<shift);
2011-02-02 21:03:47 -07:00
			}
2012-04-05 11:02:20 -06:00
			*bitp &= ~(bitMarked<<shift);
			continue;
		}
2011-02-02 21:03:47 -07:00
2013-12-18 18:13:59 -07:00
		// Clear mark, scan, and special bits.
		*bitp &= ~((bitScan|bitMarked|bitSpecial)<<shift);
2012-04-05 11:02:20 -06:00
2012-09-24 18:08:05 -06:00
		if(cl == 0) {
2012-04-05 11:02:20 -06:00
			// Free large span.
			runtime·unmarkspan(p, 1<<PageShift);
2013-04-04 15:18:52 -06:00
			*(uintptr*)p = (uintptr)0xdeaddeaddeaddeadll;	// needs zeroing
2013-12-06 15:40:45 -07:00
			if(runtime·debug.efence)
				runtime·SysFree(p, size, &mstats.gc_sys);
			else
				runtime·MHeap_Free(&runtime·mheap, s, 1);
2013-06-06 04:56:50 -06:00
			c->local_nlargefree++;
			c->local_largefree += size;
2012-04-05 11:02:20 -06:00
		} else {
			// Free small object.
2012-09-24 18:08:05 -06:00
			switch(compression) {
			case MTypes_Words:
				*(uintptr*)type_data = 0;
				break;
			case MTypes_Bytes:
				*(byte*)type_data = 0;
				break;
			}
2012-04-05 11:02:20 -06:00
			if(size > sizeof(uintptr))
2013-04-04 15:18:52 -06:00
				((uintptr*)p)[1] = (uintptr)0xdeaddeaddeaddeadll;	// mark as "needs to be zeroed"
2012-09-24 18:08:05 -06:00
			end->next = (MLink*)p;
2012-04-12 02:01:24 -06:00
			end = (MLink*)p;
			nfree++;
2009-01-26 18:37:05 -07:00
		}
2012-04-12 02:01:24 -06:00
	}
	if(nfree) {
2013-06-06 04:56:50 -06:00
		c->local_nsmallfree[cl] += nfree;
2012-04-12 02:01:24 -06:00
		c->local_cachealloc -= nfree * size;
2013-05-28 12:14:47 -06:00
		runtime·MCentral_FreeSpan(&runtime·mheap.central[cl], s, nfree, head.next, end);
2009-01-26 18:37:05 -07:00
	}
}
2012-11-01 10:56:25 -06:00
static void
dumpspan(uint32 idx)
{
	int32 sizeclass, n, npages, i, column;
	uintptr size;
	byte *p;
	byte *arena_start;
	MSpan *s;
	bool allocated, special;

2013-05-28 12:14:47 -06:00
	s = runtime·mheap.allspans[idx];
2012-11-01 10:56:25 -06:00
	if(s->state != MSpanInUse)
		return;
2013-05-28 12:14:47 -06:00
	arena_start = runtime·mheap.arena_start;
2012-11-01 10:56:25 -06:00
	p = (byte*)(s->start << PageShift);
	sizeclass = s->sizeclass;
	size = s->elemsize;
	if(sizeclass == 0) {
		n = 1;
	} else {
		npages = runtime·class_to_allocnpages[sizeclass];
		n = (npages << PageShift) / size;
	}

	runtime·printf("%p .. %p:\n", p, p+n*size);
	column = 0;
	for(; n>0; n--, p+=size) {
		uintptr off, *bitp, shift, bits;

		off = (uintptr*)p - (uintptr*)arena_start;
		bitp = (uintptr*)arena_start - off/wordsPerBitmapWord - 1;
		shift = off % wordsPerBitmapWord;
		bits = *bitp>>shift;

		allocated = ((bits & bitAllocated) != 0);
		special = ((bits & bitSpecial) != 0);

		for(i=0; i<size; i+=sizeof(void*)) {
			if(column == 0) {
				runtime·printf("\t");
			}
			if(i == 0) {
				runtime·printf(allocated ? "(" : "[");
				runtime·printf(special ? "@" : " ");
				runtime·printf("%p: ", p+i);
			} else {
				runtime·printf(" ");
			}

			runtime·printf("%p", *(void**)(p+i));

			if(i+sizeof(void*) >= size) {
				runtime·printf(allocated ? ") " : "] ");
			}

			column++;
			if(column == 8) {
				runtime·printf("\n");
				column = 0;
			}
		}
	}
	runtime·printf("\n");
}
// A debugging function to dump the contents of memory
void
runtime·memorydump(void)
{
	uint32 spanidx;

2013-05-28 12:14:47 -06:00
	for(spanidx=0; spanidx<runtime·mheap.nspan; spanidx++) {
2012-11-01 10:56:25 -06:00
		dumpspan(spanidx);
	}
}
2012-11-27 11:04:59 -07:00
2011-09-30 07:40:01 -06:00
void
runtime·gchelper(void)
{
2013-03-21 02:48:02 -06:00
	gchelperstart();
2012-05-24 00:55:50 -06:00
	// parallel mark over the gc roots
	runtime·parfordo(work.markfor);
2012-12-16 17:32:12 -07:00
2012-05-24 00:55:50 -06:00
	// help other threads scan secondary blocks
2012-12-16 17:32:12 -07:00
	scanblock(nil, nil, 0, true);
2011-09-30 07:40:01 -06:00
2012-05-22 11:35:52 -06:00
	if(DebugMark) {
		// wait while the main thread executes mark(debug_scanblock)
		while(runtime·atomicload(&work.debugmarkdone) == 0)
			runtime·usleep(10);
	}
2011-09-30 07:40:01 -06:00
2012-05-22 11:35:52 -06:00
	runtime·parfordo(work.sweepfor);
2013-03-21 02:48:02 -06:00
	bufferList[m->helpgc].busy = 0;
2011-09-30 07:40:01 -06:00
	if(runtime·xadd(&work.ndone, +1) == work.nproc-1)
		runtime·notewakeup(&work.alldone);
}
2013-02-03 22:00:55 -07:00
#define GcpercentUnknown (-2)
2009-01-26 18:37:05 -07:00
// Initialized from $GOGC. GOGC=off means no gc.
//
// Next gc is after we've allocated an extra amount of
// memory proportional to the amount already in use.
// If gcpercent=100 and we're using 4M, we'll gc again
// when we get to 8M. This keeps the gc cost in linear
// proportion to the allocation cost. Adjusting gcpercent
// just changes the linear constant (and also the amount of
// extra memory used).
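//
// For example (illustrative arithmetic only): with gcpercent=100 and
// 4M live after a collection, next_gc = 4M + 4M*100/100 = 8M; with
// gcpercent=50 the next collection would instead come at 6M.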
2013-02-03 22:00:55 -07:00
static int32 gcpercent = GcpercentUnknown;
2009-01-26 18:37:05 -07:00
2010-09-07 07:57:22 -06:00
static void
2013-06-06 04:56:50 -06:00
cachestats(void)
{
	MCache *c;
	P *p, **pp;

	for(pp=runtime·allp; p=*pp; pp++) {
		c = p->mcache;
		if(c==nil)
			continue;
		runtime·purgecachedstats(c);
	}
}
static void
updatememstats(GCStats *stats)
2010-09-07 07:57:22 -06:00
{
2012-12-18 09:30:29 -07:00
	M *mp;
2013-06-06 04:56:50 -06:00
	MSpan *s;
2010-09-07 07:57:22 -06:00
	MCache *c;
2013-03-01 04:49:16 -07:00
	P *p, **pp;
2011-07-18 12:52:57 -06:00
	int32 i;
2013-06-06 04:56:50 -06:00
	uint64 stacks_inuse, smallfree;
2012-04-05 10:48:28 -06:00
	uint64 *src, *dst;
2010-09-07 07:57:22 -06:00
2012-04-05 10:48:28 -06:00
	if(stats)
		runtime·memclr((byte*)stats, sizeof(*stats));
2011-07-18 12:52:57 -06:00
	stacks_inuse = 0;
2012-12-18 09:30:29 -07:00
	for(mp=runtime·allm; mp; mp=mp->alllink) {
2013-01-09 22:57:06 -07:00
		stacks_inuse += mp->stackinuse*FixedStack;
2012-04-05 10:48:28 -06:00
		if(stats) {
2012-12-18 09:30:29 -07:00
			src = (uint64*)&mp->gcstats;
2012-04-05 10:48:28 -06:00
			dst = (uint64*)stats;
			for(i=0; i<sizeof(*stats)/sizeof(uint64); i++)
				dst[i] += src[i];
2012-12-18 09:30:29 -07:00
			runtime·memclr((byte*)&mp->gcstats, sizeof(mp->gcstats));
2012-04-05 10:48:28 -06:00
		}
2013-03-01 04:49:16 -07:00
	}
2013-06-06 04:56:50 -06:00
	mstats.stacks_inuse = stacks_inuse;
2013-09-06 14:55:40 -06:00
	mstats.mcache_inuse = runtime·mheap.cachealloc.inuse;
	mstats.mspan_inuse = runtime·mheap.spanalloc.inuse;
	mstats.sys = mstats.heap_sys + mstats.stacks_sys + mstats.mspan_sys +
		mstats.mcache_sys + mstats.buckhash_sys + mstats.gc_sys + mstats.other_sys;
2013-06-06 04:56:50 -06:00
	// Calculate memory allocator stats.
	// During program execution we only count number of frees and amount of freed memory.
	// Current number of alive objects in the heap and amount of alive heap memory
	// are calculated by scanning all spans.
	// Total number of mallocs is calculated as number of frees plus number of alive objects.
	// Similarly, total amount of allocated memory is calculated as amount of freed memory
	// plus amount of alive heap memory.
	mstats.alloc = 0;
	mstats.total_alloc = 0;
	mstats.nmalloc = 0;
	mstats.nfree = 0;
	for(i = 0; i < nelem(mstats.by_size); i++) {
		mstats.by_size[i].nmalloc = 0;
		mstats.by_size[i].nfree = 0;
	}
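
	// Worked example of the identities above (illustrative numbers):
	// if scanning the spans finds 1000 live objects and the heap has
	// recorded 4000 frees, then nmalloc = 4000 + 1000 = 5000 and
	// heap_objects = nmalloc - nfree = 1000.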
	// Flush MCache's to MCentral.
2013-03-01 04:49:16 -07:00
	for(pp=runtime·allp; p=*pp; pp++) {
		c = p->mcache;
		if(c==nil)
			continue;
2013-06-06 04:56:50 -06:00
		runtime·MCache_ReleaseAll(c);
2010-09-07 07:57:22 -06:00
	}
2013-06-06 04:56:50 -06:00
	// Aggregate local stats.
	cachestats();

	// Scan all spans and count number of alive objects.
	for(i = 0; i < runtime·mheap.nspan; i++) {
		s = runtime·mheap.allspans[i];
		if(s->state != MSpanInUse)
			continue;
		if(s->sizeclass == 0) {
			mstats.nmalloc++;
			mstats.alloc += s->elemsize;
		} else {
			mstats.nmalloc += s->ref;
			mstats.by_size[s->sizeclass].nmalloc += s->ref;
			mstats.alloc += s->ref*s->elemsize;
		}
	}

	// Aggregate by size class.
	smallfree = 0;
	mstats.nfree = runtime·mheap.nlargefree;
	for(i = 0; i < nelem(mstats.by_size); i++) {
		mstats.nfree += runtime·mheap.nsmallfree[i];
		mstats.by_size[i].nfree = runtime·mheap.nsmallfree[i];
		mstats.by_size[i].nmalloc += runtime·mheap.nsmallfree[i];
		smallfree += runtime·mheap.nsmallfree[i] * runtime·class_to_size[i];
	}
	mstats.nmalloc += mstats.nfree;

	// Calculate derived stats.
	mstats.total_alloc = mstats.alloc + runtime·mheap.largefree + smallfree;
	mstats.heap_alloc = mstats.alloc;
	mstats.heap_objects = mstats.nmalloc - mstats.nfree;
2010-09-07 07:57:22 -06:00
}
2012-11-27 11:04:59 -07:00
// Structure of arguments passed to function gc().
2013-05-31 21:43:33 -06:00
// This allows the arguments to be passed via runtime·mcall.
2012-11-27 11:04:59 -07:00
struct gc_args
{
2013-05-31 21:43:33 -06:00
	int64 start_time; // start time of GC in ns (just before stoptheworld)
2012-11-27 11:04:59 -07:00
};
static void gc(struct gc_args *args);
2013-05-31 21:43:33 -06:00
static void mgc(G *gp);
2012-11-27 11:04:59 -07:00
2013-02-03 22:00:55 -07:00
static int32
readgogc(void)
{
	byte *p;

	p = runtime·getenv("GOGC");
	if(p == nil || p[0] == '\0')
		return 100;
	if(runtime·strcmp(p, (byte*)"off") == 0)
		return -1;
	return runtime·atoi(p);
}
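
// Illustrative mapping (follows directly from the checks above):
//	GOGC unset or empty  ->  100  (the default)
//	GOGC=off             ->  -1   (collection disabled)
//	GOGC=200             ->  200  (next gc once the heap triples the live size)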
2013-05-31 21:43:33 -06:00
static FuncVal runfinqv = {runfinq};
2009-01-26 18:37:05 -07:00
void
2010-11-04 12:00:19 -06:00
runtime·gc(int32 force)
2009-01-26 18:37:05 -07:00
{
2013-05-31 21:43:33 -06:00
	struct gc_args a;
	int32 i;
2009-01-26 18:37:05 -07:00
2012-10-09 10:50:06 -06:00
	// The atomic operations are not atomic if the uint64s
	// are not aligned on uint64 boundaries. This has been
	// a problem in the past.
	if((((uintptr)&work.empty) & 7) != 0)
		runtime·throw("runtime: gc work buffer is misaligned");
2013-03-10 10:46:11 -06:00
	if((((uintptr)&work.full) & 7) != 0)
		runtime·throw("runtime: gc work buffer is misaligned");
2012-10-09 10:50:06 -06:00
2009-01-26 18:37:05 -07:00
	// The gc is turned off (via enablegc) until
	// the bootstrap has completed.
	// Also, malloc gets called in the guts
	// of a number of libraries that might be
	// holding locks. To avoid priority inversion
	// problems, don't bother trying to run gc
	// while holding a lock. The next mallocgc
	// without a lock will do the gc instead.
2013-08-21 16:17:45 -06:00
	if(!mstats.enablegc || g == m->g0 || m->locks > 0 || runtime·panicking)
2009-01-26 18:37:05 -07:00
		return;
2013-02-03 22:00:55 -07:00
	if(gcpercent == GcpercentUnknown) {	// first time through
2013-06-15 06:07:06 -06:00
		runtime·lock(&runtime·mheap);
		if(gcpercent == GcpercentUnknown)
			gcpercent = readgogc();
		runtime·unlock(&runtime·mheap);
2009-01-26 18:37:05 -07:00
	}
2009-06-05 11:59:37 -06:00
	if(gcpercent < 0)
2009-01-26 18:37:05 -07:00
		return;
net: add special netFD mutex
The mutex, fdMutex, handles locking and lifetime of sysfd,
and serializes Read and Write methods.
This allows to strip 2 sync.Mutex.Lock calls,
2 sync.Mutex.Unlock calls, 1 defer and some amount
of misc overhead from every network operation.
On linux/amd64, Intel E5-2690:
benchmark old ns/op new ns/op delta
BenchmarkTCP4Persistent 9595 9454 -1.47%
BenchmarkTCP4Persistent-2 8978 8772 -2.29%
BenchmarkTCP4ConcurrentReadWrite 4900 4625 -5.61%
BenchmarkTCP4ConcurrentReadWrite-2 2603 2500 -3.96%
In general it strips 70-500 ns from every network operation depending
on processor model. On my relatively new E5-2690 it accounts to ~5%
of network op cost.
Fixes #6074.
R=golang-dev, bradfitz, alex.brainman, iant, mikioh.mikioh
CC=golang-dev
https://golang.org/cl/12418043
2013-08-09 11:43:00 -06:00
	runtime·semacquire(&runtime·worldsema, false);
2013-05-31 21:43:33 -06:00
	if(!force && mstats.heap_alloc < mstats.next_gc) {
		// typically threads which lost the race to grab
		// worldsema exit here when gc is done.
		runtime·semrelease(&runtime·worldsema);
		return;
	}

	// Ok, we're doing it!  Stop everybody else
	a.start_time = runtime·nanotime();
	m->gcing = 1;
	runtime·stoptheworld();
2013-12-18 12:08:34 -07:00
	clearpools();
2013-05-31 21:43:33 -06:00
	// Run gc on the g0 stack. We do this so that the g stack
	// we're currently running on will no longer change. Cuts
	// the root set down a bit (g0 stacks are not scanned, and
	// we don't need to scan gc's internal state). Also an
	// enabler for copyable stacks.
2013-06-28 08:37:06 -06:00
	for(i = 0; i < (runtime·debug.gctrace > 1 ? 2 : 1); i++) {
2013-08-21 16:17:45 -06:00
		// switch to g0, call gc(&a), then switch back
		g->param = &a;
		g->status = Gwaiting;
		g->waitreason = "garbage collection";
		runtime·mcall(mgc);
2013-05-31 21:43:33 -06:00
		// record a new start time in case we're going around again
		a.start_time = runtime·nanotime();
	}
	// all done
2013-07-17 10:52:37 -06:00
	m->gcing = 0;
2013-07-19 14:04:09 -06:00
	m->locks++;
2013-05-31 21:43:33 -06:00
	runtime·semrelease(&runtime·worldsema);
	runtime·starttheworld();
2013-07-19 14:04:09 -06:00
	m->locks--;
2013-05-31 21:43:33 -06:00
2013-08-19 13:20:50 -06:00
	// now that gc is done, kick off finalizer thread if needed
2013-05-31 21:43:33 -06:00
	if(finq != nil) {
		runtime·lock(&finlock);
		// kick off or wake up goroutine to run queued finalizers
		if(fing == nil)
			fing = runtime·newproc1(&runfinqv, nil, 0, 0, runtime·gc);
		else if(fingwait) {
			fingwait = 0;
			runtime·ready(fing);
		}
		runtime·unlock(&finlock);
2012-11-27 11:04:59 -07:00
	}
2013-08-19 13:20:50 -06:00
	// give the queued finalizers, if any, a chance to run
2013-08-21 16:17:45 -06:00
	runtime·gosched();
2012-11-27 11:04:59 -07:00
}
2013-05-31 21:43:33 -06:00
static void
mgc(G *gp)
{
	gc(gp->param);
	gp->param = nil;
runtime: record proper goroutine state during stack split
Until now, the goroutine state has been scattered during the
execution of newstack and oldstack. It's all there, and those routines
know how to get back to a working goroutine, but other pieces of
the system, like stack traces, do not. If something does interrupt
the newstack or oldstack execution, the rest of the system can't
understand the goroutine. For example, if newstack decides there
is an overflow and calls throw, the stack tracer wouldn't dump the
goroutine correctly.
For newstack to save a useful state snapshot, it needs to be able
to rewind the PC in the function that triggered the split back to
the beginning of the function. (The PC is a few instructions in, just
after the call to morestack.) To make that possible, we change the
prologues to insert a jmp back to the beginning of the function
after the call to morestack. That is, the prologue used to be roughly:
TEXT myfunc
check for split
jmpcond nosplit
call morestack
nosplit:
sub $xxx, sp
Now an extra instruction is inserted after the call:
TEXT myfunc
start:
check for split
jmpcond nosplit
call morestack
jmp start
nosplit:
sub $xxx, sp
The jmp is not executed directly. It is decoded and simulated by
runtime.rewindmorestack to discover the beginning of the function,
and then the call to morestack returns directly to the start label
instead of to the jump instruction. So logically the jmp is still
executed, just not by the cpu.
The prologue thus repeats in the case of a function that needs a
stack split, but against the cost of the split itself, the extra few
instructions are noise. The repeated prologue has the nice effect of
making a stack split double-check that the new stack is big enough:
if morestack happens to return on a too-small stack, we'll now notice
before corruption happens.
The ability for newstack to rewind to the beginning of the function
should help preemption too. If newstack decides that it was called
for preemption instead of a stack split, it now has the goroutine state
correctly paused if rescheduling is needed, and when the goroutine
can run again, it can return to the start label on its original stack
and re-execute the split check.
Here is an example of a split stack overflow showing the full
trace, without any special cases in the stack printer.
(This one was triggered by making the split check incorrect.)
runtime: newstack framesize=0x0 argsize=0x18 sp=0x6aebd0 stack=[0x6b0000, 0x6b0fa0]
morebuf={pc:0x69f5b sp:0x6aebd8 lr:0x0}
sched={pc:0x68880 sp:0x6aebd0 lr:0x0 ctxt:0x34e700}
runtime: split stack overflow: 0x6aebd0 < 0x6b0000
fatal error: runtime: split stack overflow
goroutine 1 [stack split]:
runtime.mallocgc(0x290, 0x100000000, 0x1)
/Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:21 fp=0x6aebd8
runtime.new()
/Users/rsc/g/go/src/pkg/runtime/zmalloc_darwin_amd64.c:682 +0x5b fp=0x6aec08
go/build.(*Context).Import(0x5ae340, 0xc210030c71, 0xa, 0xc2100b4380, 0x1b, ...)
/Users/rsc/g/go/src/pkg/go/build/build.go:424 +0x3a fp=0x6b00a0
main.loadImport(0xc210030c71, 0xa, 0xc2100b4380, 0x1b, 0xc2100b42c0, ...)
/Users/rsc/g/go/src/cmd/go/pkg.go:249 +0x371 fp=0x6b01a8
main.(*Package).load(0xc21017c800, 0xc2100b42c0, 0xc2101828c0, 0x0, 0x0, ...)
/Users/rsc/g/go/src/cmd/go/pkg.go:431 +0x2801 fp=0x6b0c98
main.loadPackage(0x369040, 0x7, 0xc2100b42c0, 0x0)
/Users/rsc/g/go/src/cmd/go/pkg.go:709 +0x857 fp=0x6b0f80
----- stack segment boundary -----
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc2100e6c00, 0xc2100e5750, ...)
/Users/rsc/g/go/src/cmd/go/build.go:539 +0x437 fp=0x6b14a0
main.(*builder).action(0xc2100902a0, 0x0, 0x0, 0xc21015b400, 0x2, ...)
/Users/rsc/g/go/src/cmd/go/build.go:528 +0x1d2 fp=0x6b1658
main.(*builder).test(0xc2100902a0, 0xc210092000, 0x0, 0x0, 0xc21008ff60, ...)
/Users/rsc/g/go/src/cmd/go/test.go:622 +0x1b53 fp=0x6b1f68
----- stack segment boundary -----
main.runTest(0x5a6b20, 0xc21000a020, 0x2, 0x2)
/Users/rsc/g/go/src/cmd/go/test.go:366 +0xd09 fp=0x6a5cf0
main.main()
/Users/rsc/g/go/src/cmd/go/main.go:161 +0x4f9 fp=0x6a5f78
runtime.main()
/Users/rsc/g/go/src/pkg/runtime/proc.c:183 +0x92 fp=0x6a5fa0
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1266 fp=0x6a5fa8
And here is a seg fault during oldstack:
SIGSEGV: segmentation violation
PC=0x1b2a6
runtime.oldstack()
/Users/rsc/g/go/src/pkg/runtime/stack.c:159 +0x76
runtime.lessstack()
/Users/rsc/g/go/src/pkg/runtime/asm_amd64.s:270 +0x22
goroutine 1 [stack unsplit]:
fmt.(*pp).printArg(0x2102e64e0, 0xe5c80, 0x2102c9220, 0x73, 0x0, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:818 +0x3d3 fp=0x221031e6f8
fmt.(*pp).doPrintf(0x2102e64e0, 0x12fb20, 0x2, 0x221031eb98, 0x1, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:1183 +0x15cb fp=0x221031eaf0
fmt.Sprintf(0x12fb20, 0x2, 0x221031eb98, 0x1, 0x1, ...)
/Users/rsc/g/go/src/pkg/fmt/print.go:234 +0x67 fp=0x221031eb40
flag.(*stringValue).String(0x2102c9210, 0x1, 0x0)
/Users/rsc/g/go/src/pkg/flag/flag.go:180 +0xb3 fp=0x221031ebb0
flag.(*FlagSet).Var(0x2102f6000, 0x293d38, 0x2102c9210, 0x143490, 0xa, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:633 +0x40 fp=0x221031eca0
flag.(*FlagSet).StringVar(0x2102f6000, 0x2102c9210, 0x143490, 0xa, 0x12fa60, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:550 +0x91 fp=0x221031ece8
flag.(*FlagSet).String(0x2102f6000, 0x143490, 0xa, 0x12fa60, 0x0, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:563 +0x87 fp=0x221031ed38
flag.String(0x143490, 0xa, 0x12fa60, 0x0, 0x161950, ...)
/Users/rsc/g/go/src/pkg/flag/flag.go:570 +0x6b fp=0x221031ed80
testing.init()
/Users/rsc/g/go/src/pkg/testing/testing.go:-531 +0xbb fp=0x221031edc0
strings_test.init()
/Users/rsc/g/go/src/pkg/strings/strings_test.go:1115 +0x62 fp=0x221031ef70
main.init()
strings/_test/_testmain.go:90 +0x3d fp=0x221031ef78
runtime.main()
/Users/rsc/g/go/src/pkg/runtime/proc.c:180 +0x8a fp=0x221031efa0
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1269 fp=0x221031efa8
goroutine 2 [runnable]:
runtime.MHeap_Scavenger()
/Users/rsc/g/go/src/pkg/runtime/mheap.c:438
runtime.goexit()
/Users/rsc/g/go/src/pkg/runtime/proc.c:1269
created by runtime.main
/Users/rsc/g/go/src/pkg/runtime/proc.c:166
rax 0x23ccc0
rbx 0x23ccc0
rcx 0x0
rdx 0x38
rdi 0x2102c0170
rsi 0x221032cfe0
rbp 0x221032cfa0
rsp 0x7fff5fbff5b0
r8 0x2102c0120
r9 0x221032cfa0
r10 0x221032c000
r11 0x104ce8
r12 0xe5c80
r13 0x1be82baac718
r14 0x13091135f7d69200
r15 0x0
rip 0x1b2a6
rflags 0x10246
cs 0x2b
fs 0x0
gs 0x0
Fixes #5723.
R=r, dvyukov, go.peter.90, dave, iant
CC=golang-dev
https://golang.org/cl/10360048
2013-06-27 09:32:01 -06:00
	gp->status = Grunning;
2013-06-12 13:22:26 -06:00
	runtime·gogo(&gp->sched);
2013-05-31 21:43:33 -06:00
}
2013-02-21 15:01:13 -07:00
2012-11-27 11:04:59 -07:00
static void
gc(struct gc_args *args)
{
2013-01-22 02:44:49 -07:00
	int64 t0, t1, t2, t3, t4;
2013-03-04 08:54:37 -07:00
	uint64 heap0, heap1, obj0, obj1, ninstr;
2012-11-27 11:04:59 -07:00
	GCStats stats;
2012-12-18 09:30:29 -07:00
	M *mp;
2012-11-27 11:04:59 -07:00
	uint32 i;
2013-01-10 13:45:46 -07:00
	Eface eface;
2012-11-27 11:04:59 -07:00
2013-05-31 21:43:33 -06:00
	t0 = args->start_time;
2011-02-02 21:03:47 -07:00
2013-03-04 08:54:37 -07:00
	if(CollectStats)
		runtime·memclr((byte*)&gcstats, sizeof(gcstats));
2012-12-18 09:30:29 -07:00
	for(mp=runtime·allm; mp; mp=mp->alllink)
2013-08-04 13:32:06 -06:00
		runtime·settype_flush(mp);
2012-09-24 18:08:05 -06:00
2012-05-15 09:10:16 -06:00
	heap0 = 0;
	obj0 = 0;
2013-06-28 08:37:06 -06:00
	if(runtime·debug.gctrace) {
2013-06-06 04:56:50 -06:00
		updatememstats(nil);
2012-05-15 09:10:16 -06:00
		heap0 = mstats.heap_alloc;
		obj0 = mstats.nmalloc - mstats.nfree;
	}
2011-02-02 21:03:47 -07:00
2012-12-16 17:32:12 -07:00
	m->locks++;	// disable gc during mallocs in parforalloc
	if(work.markfor == nil)
		work.markfor = runtime·parforalloc(MaxGcproc);
	if(work.sweepfor == nil)
		work.sweepfor = runtime·parforalloc(MaxGcproc);
	m->locks--;
2013-01-10 13:45:46 -07:00
	if(itabtype == nil) {
		// get C pointer to the Go type "itab"
		runtime·gc_itab_ptr(&eface);
		itabtype = ((PtrType*)eface.type)->elem;
	}
2012-05-22 11:35:52 -06:00
	work.nwait = 0;
	work.ndone = 0;
	work.debugmarkdone = 0;
2012-05-15 09:10:16 -06:00
	work.nproc = runtime·gcprocs();
2012-05-24 00:55:50 -06:00
	addroots();
2013-12-18 18:13:59 -07:00
	addfreelists();
2012-05-24 00:55:50 -06:00
	runtime·parforsetup(work.markfor, work.nproc, work.nroot, nil, false, markroot);
2013-05-28 12:14:47 -06:00
	runtime·parforsetup(work.sweepfor, work.nproc, runtime·mheap.nspan, nil, true, sweepspan);
2012-05-15 09:10:16 -06:00
	if(work.nproc > 1) {
2011-09-30 07:40:01 -06:00
		runtime·noteclear(&work.alldone);
2012-05-15 09:10:16 -06:00
		runtime·helpgc(work.nproc);
2011-09-30 07:40:01 -06:00
	}
2013-01-22 02:44:49 -07:00
	t1 = runtime·nanotime();
2013-03-21 02:48:02 -06:00
	gchelperstart();
2012-05-24 00:55:50 -06:00
	runtime·parfordo(work.markfor);
2012-12-16 17:32:12 -07:00
	scanblock(nil, nil, 0, true);
2012-05-24 00:55:50 -06:00
2012-05-22 11:35:52 -06:00
	if(DebugMark) {
2012-05-24 00:55:50 -06:00
		for(i=0; i<work.nroot; i++)
			debug_scanblock(work.roots[i].p, work.roots[i].n);
2012-05-22 11:35:52 -06:00
		runtime·atomicstore(&work.debugmarkdone, 1);
	}
2013-01-22 02:44:49 -07:00
	t2 = runtime·nanotime();
2011-09-30 07:40:01 -06:00
2012-05-22 11:35:52 -06:00
	runtime·parfordo(work.sweepfor);
2013-03-21 02:48:02 -06:00
	bufferList[m->helpgc].busy = 0;
2013-01-22 02:44:49 -07:00
	t3 = runtime·nanotime();
2011-09-30 07:40:01 -06:00
2012-05-22 11:35:52 -06:00
	if(work.nproc > 1)
		runtime·notesleep(&work.alldone);
2013-06-06 04:56:50 -06:00
	cachestats();
2011-02-02 21:03:47 -07:00
	mstats.next_gc = mstats.heap_alloc+mstats.heap_alloc*gcpercent/100;
2010-02-10 01:00:12 -07:00
2013-01-22 02:44:49 -07:00
	t4 = runtime·nanotime();
	mstats.last_gc = t4;
	mstats.pause_ns[mstats.numgc%nelem(mstats.pause_ns)] = t4 - t0;
	mstats.pause_total_ns += t4 - t0;
2010-02-08 15:32:22 -07:00
	mstats.numgc++;
	if(mstats.debuggc)
2013-01-22 02:44:49 -07:00
		runtime·printf("pause %D\n", t4-t0);
2011-09-30 07:40:01 -06:00
2013-06-28 08:37:06 -06:00
	if(runtime·debug.gctrace) {
2013-06-06 04:56:50 -06:00
		updatememstats(&stats);
		heap1 = mstats.heap_alloc;
		obj1 = mstats.nmalloc - mstats.nfree;
		stats.nprocyield += work.sweepfor->nprocyield;
		stats.nosyield += work.sweepfor->nosyield;
		stats.nsleep += work.sweepfor->nsleep;
2012-04-05 10:48:28 -06:00
		runtime·printf("gc%d(%d): %D+%D+%D ms, %D -> %D MB %D -> %D (%D-%D) objects,"
2012-05-22 11:35:52 -06:00
			" %D(%D) handoff, %D(%D) steal, %D/%D/%D yields\n",
2013-01-22 02:44:49 -07:00
			mstats.numgc, work.nproc, (t2-t1)/1000000, (t3-t2)/1000000, (t1-t0+t4-t3)/1000000,
2011-02-02 21:03:47 -07:00
			heap0>>20, heap1>>20, obj0, obj1,
			mstats.nmalloc, mstats.nfree,
2012-04-05 10:48:28 -06:00
			stats.nhandoff, stats.nhandoffcnt,
2012-05-22 11:35:52 -06:00
			work.sweepfor->nsteal, work.sweepfor->nstealcnt,
2012-04-05 10:48:28 -06:00
			stats.nprocyield, stats.nosyield, stats.nsleep);
2013-03-04 08:54:37 -07:00
		if(CollectStats) {
			runtime·printf("scan: %D bytes, %D objects, %D untyped, %D types from MSpan\n",
				gcstats.nbytes, gcstats.obj.cnt, gcstats.obj.notype, gcstats.obj.typelookup);
			if(gcstats.ptr.cnt != 0)
				runtime·printf("avg ptrbufsize: %D (%D/%D)\n",
					gcstats.ptr.sum/gcstats.ptr.cnt, gcstats.ptr.sum, gcstats.ptr.cnt);
			if(gcstats.obj.cnt != 0)
				runtime·printf("avg nobj: %D (%D/%D)\n",
					gcstats.obj.sum/gcstats.obj.cnt, gcstats.obj.sum, gcstats.obj.cnt);
			runtime·printf("rescans: %D, %D bytes\n", gcstats.rescan, gcstats.rescanbytes);

			runtime·printf("instruction counts:\n");
			ninstr = 0;
			for(i=0; i<nelem(gcstats.instr); i++) {
				runtime·printf("\t%d:\t%D\n", i, gcstats.instr[i]);
				ninstr += gcstats.instr[i];
			}
			runtime·printf("\ttotal:\t%D\n", ninstr);

			runtime·printf("putempty: %D, getfull: %D\n", gcstats.putempty, gcstats.getfull);
2013-08-29 14:52:38 -06:00
			runtime·printf("markonly base lookup: bit %D word %D span %D\n", gcstats.markonly.foundbit, gcstats.markonly.foundword, gcstats.markonly.foundspan);
			runtime·printf("flushptrbuf base lookup: bit %D word %D span %D\n", gcstats.flushptrbuf.foundbit, gcstats.flushptrbuf.foundword, gcstats.flushptrbuf.foundspan);
2013-03-04 08:54:37 -07:00
		}
2011-02-02 21:03:47 -07:00
	}
2012-05-22 11:35:52 -06:00
2012-02-22 19:45:01 -07:00
	runtime·MProf_GC();
2010-03-26 15:15:30 -06:00
}
2011-07-21 22:55:01 -06:00
void
2012-02-06 11:16:26 -07:00
runtime·ReadMemStats(MStats *stats)
2011-07-21 22:55:01 -06:00
{
2012-02-22 19:45:01 -07:00
	// Have to acquire worldsema to stop the world,
	// because stoptheworld can only be used by
	// one goroutine at a time, and there might be
	// a pending garbage collection already calling it.
2013-08-09 11:43:00 -06:00
	runtime·semacquire(&runtime·worldsema, false);
2011-07-21 22:55:01 -06:00
	m->gcing = 1;
	runtime·stoptheworld();
2013-06-06 04:56:50 -06:00
	updatememstats(nil);
2012-02-06 11:16:26 -07:00
	*stats = mstats;
2011-07-21 22:55:01 -06:00
	m->gcing = 0;
2013-07-19 14:04:09 -06:00
	m->locks++;
2012-02-22 19:45:01 -07:00
	runtime·semrelease(&runtime·worldsema);
2012-05-15 09:10:16 -06:00
	runtime·starttheworld();
2013-07-19 14:04:09 -06:00
	m->locks--;
2011-07-21 22:55:01 -06:00
}
2013-02-03 22:00:55 -07:00
void
runtime∕debug·readGCStats(Slice *pauses)
{
	uint64 *p;
	uint32 i, n;

	// Calling code in runtime/debug should make the slice large enough.
	if(pauses->cap < nelem(mstats.pause_ns)+3)
		runtime·throw("runtime: short slice passed to readGCStats");

	// Pass back: pauses, last gc (absolute time), number of gc, total pause ns.
	p = (uint64*)pauses->array;
2013-05-28 12:14:47 -06:00
	runtime·lock(&runtime·mheap);
2013-02-03 22:00:55 -07:00
	n = mstats.numgc;
	if(n > nelem(mstats.pause_ns))
		n = nelem(mstats.pause_ns);

	// The pause buffer is circular. The most recent pause is at
	// pause_ns[(numgc-1)%nelem(pause_ns)], and then backward
	// from there to go back farther in time. We deliver the times
	// most recent first (in p[0]).
	for(i=0; i<n; i++)
		p[i] = mstats.pause_ns[(mstats.numgc-1-i)%nelem(mstats.pause_ns)];

	p[n] = mstats.last_gc;
	p[n+1] = mstats.numgc;
	p[n+2] = mstats.pause_total_ns;
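
	// Worked example of the circular indexing (illustrative numbers):
	// with nelem(pause_ns)=256 and numgc=260, the most recent pause is
	// pause_ns[(260-1)%256] = pause_ns[3], so p[0]=pause_ns[3],
	// p[1]=pause_ns[2], and so on; only the last 256 pauses survive.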
2013-05-28 12:14:47 -06:00
	runtime·unlock(&runtime·mheap);
2013-02-03 22:00:55 -07:00
	pauses->len = n+3;
}
void
runtime∕debug·setGCPercent(intgo in, intgo out)
{
2013-05-28 12:14:47 -06:00
	runtime·lock(&runtime·mheap);
2013-02-03 22:00:55 -07:00
	if(gcpercent == GcpercentUnknown)
		gcpercent = readgogc();
	out = gcpercent;
	if(in < 0)
		in = -1;
	gcpercent = in;
2013-05-28 12:14:47 -06:00
	runtime·unlock(&runtime·mheap);
2013-02-03 22:00:55 -07:00
	FLUSH(&out);
}
2013-03-21 02:48:02 -06:00
static void
gchelperstart(void)
{
	if(m->helpgc < 0 || m->helpgc >= MaxGcproc)
		runtime·throw("gchelperstart: bad m->helpgc");
	if(runtime·xchg(&bufferList[m->helpgc].busy, 1))
		runtime·throw("gchelperstart: already busy");
2013-05-31 21:43:33 -06:00
	if(g != m->g0)
		runtime·throw("gchelper not running on g0 stack");
2013-03-21 02:48:02 -06:00
}
2010-03-26 15:15:30 -06:00
static void
runfinq(void)
{
2011-10-06 09:42:51 -06:00
	Finalizer *f;
	FinBlock *fb, *next;
2010-03-26 15:15:30 -06:00
	byte *frame;
2011-10-06 09:42:51 -06:00
	uint32 framesz, framecap, i;
2013-08-14 12:54:31 -06:00
	Eface *ef, ef1;
2010-03-26 15:15:30 -06:00
2011-10-06 09:42:51 -06:00
	frame = nil;
	framecap = 0;
2010-03-26 15:15:30 -06:00
	for(;;) {
2013-05-22 13:04:46 -06:00
		runtime·lock(&finlock);
2011-10-06 09:42:51 -06:00
		fb = finq;
2010-03-26 15:15:30 -06:00
		finq = nil;
2011-10-06 09:42:51 -06:00
		if(fb == nil) {
2010-04-07 21:38:02 -06:00
			fingwait = 1;
2013-05-22 13:04:46 -06:00
			runtime·park(runtime·unlock, &finlock, "finalizer wait");
2010-03-26 15:15:30 -06:00
			continue;
		}
2013-05-22 13:04:46 -06:00
		runtime·unlock(&finlock);
2012-11-14 05:58:10 -07:00
		if(raceenabled)
			runtime·racefingo();
2011-10-06 09:42:51 -06:00
		for(; fb; fb=next) {
			next = fb->next;
			for(i=0; i<fb->cnt; i++) {
				f = &fb->fin[i];
2013-07-29 09:43:08 -06:00
				framesz = sizeof(Eface) + f->nret;
2011-10-06 09:42:51 -06:00
				if(framecap < framesz) {
					runtime·free(frame);
2013-07-19 08:01:33 -06:00
					// The frame does not contain pointers interesting for GC,
					// all not yet finalized objects are stored in finc.
2013-08-23 18:28:47 -06:00
					// If we do not mark it as FlagNoScan,
2013-07-19 08:01:33 -06:00
					// the last finalized object is not collected.
2013-08-23 18:28:47 -06:00
					frame = runtime·mallocgc(framesz, 0, FlagNoScan|FlagNoInvokeGC);
2011-10-06 09:42:51 -06:00
					framecap = framesz;
				}
2013-08-14 12:54:31 -06:00
				if(f->fint == nil)
					runtime·throw("missing type in runfinq");
				if(f->fint->kind == KindPtr) {
					// direct use of pointer
2013-07-29 09:43:08 -06:00
					*(void**)frame = f->arg;
2013-08-14 12:54:31 -06:00
				} else if(((InterfaceType*)f->fint)->mhdr.len == 0) {
					// convert to empty interface
2013-07-29 09:43:08 -06:00
					ef = (Eface*)frame;
					ef->type = f->ot;
					ef->data = f->arg;
2013-08-14 12:54:31 -06:00
				} else {
					// convert to interface with methods, via empty interface.
					ef1.type = f->ot;
					ef1.data = f->arg;
					if(!runtime·ifaceE2I2((InterfaceType*)f->fint, ef1, (Iface*)frame))
						runtime·throw("invalid type conversion in runfinq");
2013-07-29 09:43:08 -06:00
				}
				reflect·call(f->fn, frame, framesz);
2011-10-06 09:42:51 -06:00
				f->fn = nil;
				f->arg = nil;
2013-07-29 09:43:08 -06:00
				f->ot = nil;
2011-10-06 09:42:51 -06:00
			}
			fb->cnt = 0;
			fb->next = finc;
			finc = fb;
2010-03-26 15:15:30 -06:00
		}
2010-11-04 12:00:19 -06:00
		runtime·gc(1);	// trigger another gc to clean up the finalized objects, if possible
2010-03-26 15:15:30 -06:00
	}
2009-01-26 18:37:05 -07:00
}
2011-02-02 21:03:47 -07:00
void
2013-12-18 18:13:59 -07:00
runtime·marknogc(void *v)
2011-02-02 21:03:47 -07:00
{
2011-02-16 11:21:20 -07:00
	uintptr *b, obits, bits, off, shift;
2011-02-02 21:03:47 -07:00
2013-12-18 18:13:59 -07:00
	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
	shift = off % wordsPerBitmapWord;
2011-02-02 21:03:47 -07:00
2013-12-18 18:13:59 -07:00
	for(;;) {
		obits = *b;
		if((obits>>shift & bitMask) != bitAllocated)
			runtime·throw("bad initial state for marknogc");
		bits = (obits & ~(bitAllocated<<shift)) | bitBlockBoundary<<shift;
		if(runtime·gomaxprocs == 1) {
			*b = bits;
			break;
		} else {
			// more than one goroutine is potentially running: use atomic op
			if(runtime·casp((void**)b, (void*)obits, (void*)bits))
				break;
		}
	}
}
void
runtime·markscan(void *v)
{
	uintptr *b, obits, bits, off, shift;
2011-02-02 21:03:47 -07:00
2013-05-28 12:14:47 -06:00
	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2011-02-02 21:03:47 -07:00
	shift = off % wordsPerBitmapWord;
2011-02-16 11:21:20 -07:00
	for(;;) {
		obits = *b;
2013-12-18 18:13:59 -07:00
		if((obits>>shift & bitMask) != bitAllocated)
			runtime·throw("bad initial state for markscan");
		bits = obits | bitScan<<shift;
2013-08-05 12:58:02 -06:00
		if(runtime·gomaxprocs == 1) {
2011-02-16 11:21:20 -07:00
			*b = bits;
			break;
		} else {
2011-08-16 14:53:02 -06:00
			// more than one goroutine is potentially running: use atomic op
2011-02-16 11:21:20 -07:00
			if(runtime·casp((void**)b, (void*)obits, (void*)bits))
				break;
		}
	}
2011-02-02 21:03:47 -07:00
}
// mark the block at v of size n as freed.
void
runtime·markfreed(void *v, uintptr n)
{
2011-02-16 11:21:20 -07:00
	uintptr *b, obits, bits, off, shift;
2011-02-02 21:03:47 -07:00
	if(0)
2013-10-09 14:28:47 -06:00
		runtime·printf("markfreed %p+%p\n", v, n);
2011-02-02 21:03:47 -07:00
2013-05-28 12:14:47 -06:00
	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
2013-10-09 14:28:47 -06:00
		runtime·throw("markfreed: bad pointer");
2011-02-02 21:03:47 -07:00
2013-05-28 12:14:47 -06:00
	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2011-02-02 21:03:47 -07:00
	shift = off % wordsPerBitmapWord;
2011-02-16 11:21:20 -07:00
	for(;;) {
		obits = *b;
2013-12-18 18:13:59 -07:00
		// This could be a free of a gc-eligible object (bitAllocated + others) or
		// a FlagNoGC object (bitBlockBoundary set).  In either case, we revert to
		// a simple no-scan allocated object because it is going on a free list.
		bits = (obits & ~(bitMask<<shift)) | (bitAllocated<<shift);
2013-08-05 12:58:02 -06:00
		if(runtime·gomaxprocs == 1) {
2011-02-16 11:21:20 -07:00
			*b = bits;
			break;
		} else {
2011-08-16 14:53:02 -06:00
			// more than one goroutine is potentially running: use atomic op
2011-02-16 11:21:20 -07:00
			if(runtime·casp((void**)b, (void*)obits, (void*)bits))
				break;
		}
	}
2011-02-02 21:03:47 -07:00
}
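The bits expression in markfreed is a read-modify-write of one object's bitmap entry: ~(bitMask<<shift) clears all of the object's bits, then bitAllocated<<shift is ORed back in. A tiny worked example under assumed compact constants (the runtime's real bit constants are spread across the bitmap word, so these values are illustrative only):

#include <stdint.h>
#include <assert.h>

// Hypothetical compact constants for illustration only.
enum { bitAllocated = 1, bitScan = 2, bitMarked = 4, bitMask = 0xF };

int
main(void)
{
	uintptr_t word = 0, shift = 4;

	// allocate, then mark: field update followed by a set-bit
	word = (word & ~((uintptr_t)bitMask<<shift)) | ((uintptr_t)bitAllocated<<shift);
	word |= (uintptr_t)bitMarked<<shift;
	assert((word>>shift & bitMask) == (bitAllocated|bitMarked));

	// free: revert to a plain allocated block, as markfreed does
	word = (word & ~((uintptr_t)bitMask<<shift)) | ((uintptr_t)bitAllocated<<shift);
	assert((word>>shift & bitMask) == bitAllocated);
	return 0;
}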
// check that the block at v of size n is marked freed.
void
runtime·checkfreed(void *v, uintptr n)
{
	uintptr *b, bits, off, shift;

	if(!runtime·checking)
		return;
2013-05-28 12:14:47 -06:00
	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
2011-02-02 21:03:47 -07:00
		return;	// not allocated, so okay
2013-05-28 12:14:47 -06:00
	off = (uintptr*)v - (uintptr*)runtime·mheap.arena_start;  // word offset
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2011-02-02 21:03:47 -07:00
	shift = off % wordsPerBitmapWord;

	bits = *b>>shift;
	if((bits & bitAllocated) != 0) {
		runtime·printf("checkfreed %p+%p: off=%p have=%p\n",
			v, n, off, bits & bitMask);
		runtime·throw("checkfreed: not freed");
	}
}
// mark the span of memory at v as having n blocks of the given size.
// if leftover is true, there is left over space at the end of the span.
void
runtime·markspan(void *v, uintptr size, uintptr n, bool leftover)
{
2013-12-18 18:13:59 -07:00
	uintptr *b, off, shift, i;
2011-02-02 21:03:47 -07:00
	byte *p;
2013-05-28 12:14:47 -06:00
	if((byte*)v+size*n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
2011-02-02 21:03:47 -07:00
		runtime·throw("markspan: bad pointer");
2013-12-18 18:13:59 -07:00
	if(runtime·checking) {
		// bits should be all zero at the start
		off = (byte*)v + size - runtime·mheap.arena_start;
		b = (uintptr*)(runtime·mheap.arena_start - off/wordsPerBitmapWord);
		for(i = 0; i < size/PtrSize/wordsPerBitmapWord; i++) {
			if(b[i] != 0)
				runtime·throw("markspan: span bits not zero");
		}
	}
2011-02-02 21:03:47 -07:00
	p = v;
	if(leftover)	// mark a boundary just past end of last block too
		n++;
	for(; n-- > 0; p += size) {
2011-02-16 11:21:20 -07:00
		// Okay to use non-atomic ops here, because we control
		// the entire span, and each bitmap word has bits for only
		// one span, so no other goroutines are changing these
		// bitmap words.
2013-05-28 12:14:47 -06:00
		off = (uintptr*)p - (uintptr*)runtime·mheap.arena_start;  // word offset
		b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2011-02-02 21:03:47 -07:00
		shift = off % wordsPerBitmapWord;
2013-12-18 18:13:59 -07:00
		*b = (*b & ~(bitMask<<shift)) | (bitAllocated<<shift);
2011-02-02 21:03:47 -07:00
	}
}
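markspan stamps a fresh bitAllocated entry at the start of each block; the extra iteration when leftover is set plants a boundary just past the last block, so the unused tail of the span is not mistaken for a live object. A small sketch of the loop shape, with a hypothetical mark() standing in for the bitmap store:

#include <stdint.h>
#include <stdio.h>

// Hypothetical stand-in for the bitmap write.
static void
mark(uintptr_t addr) { printf("mark %#lx\n", (unsigned long)addr); }

// Visit the start of each block in a span, plus one boundary past the
// last block when there is leftover space -- the same loop shape as
// runtime·markspan.
static void
mark_span(uintptr_t v, uintptr_t size, uintptr_t n, int leftover)
{
	uintptr_t p = v;

	if(leftover)	// mark a boundary just past end of last block too
		n++;
	for(; n-- > 0; p += size)
		mark(p);
}

int
main(void)
{
	mark_span(0x1000, 0x100, 3, 1);  // blocks at 0x1000,0x1100,0x1200; boundary at 0x1300
	return 0;
}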
// unmark the span of memory at v of length n bytes.
void
runtime·unmarkspan(void *v, uintptr n)
{
	uintptr *p, *b, off;
2013-05-28 12:14:47 -06:00
	if((byte*)v+n > (byte*)runtime·mheap.arena_used || (byte*)v < runtime·mheap.arena_start)
2011-02-02 21:03:47 -07:00
		runtime·throw("unmarkspan: bad pointer");
	p = v;
2013-05-28 12:14:47 -06:00
	off = p - (uintptr*)runtime·mheap.arena_start;  // word offset
2011-02-02 21:03:47 -07:00
	if(off % wordsPerBitmapWord != 0)
		runtime·throw("unmarkspan: unaligned pointer");
2013-05-28 12:14:47 -06:00
	b = (uintptr*)runtime·mheap.arena_start - off/wordsPerBitmapWord - 1;
2011-02-02 21:03:47 -07:00
	n /= PtrSize;
	if(n % wordsPerBitmapWord != 0)
		runtime·throw("unmarkspan: unaligned length");
2011-02-16 11:21:20 -07:00
	// Okay to use non-atomic ops here, because we control
	// the entire span, and each bitmap word has bits for only
	// one span, so no other goroutines are changing these
	// bitmap words.
2011-02-02 21:03:47 -07:00
	n /= wordsPerBitmapWord;
	while(n-- > 0)
		*b-- = 0;
}
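Because unmarkspan requires v and n to be bitmap-word aligned, it can clear whole bitmap words rather than individual entries, walking downward since the bitmap grows down from arena_start. A minimal sketch of that downward zeroing (hypothetical names, not runtime API):

#include <stdint.h>
#include <assert.h>

// Zero nwords of bitmap ending at (and including) *b, walking downward,
// mirroring the "*b-- = 0" loop in runtime·unmarkspan.
static void
zero_bitmap_down(uintptr_t *b, uintptr_t nwords)
{
	while(nwords-- > 0)
		*b-- = 0;
}

int
main(void)
{
	uintptr_t bitmap[4] = {1, 2, 3, 4};

	zero_bitmap_down(&bitmap[3], 4);
	assert(bitmap[0] == 0 && bitmap[3] == 0);
	return 0;
}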
void
runtime·MHeap_MapBits(MHeap *h)
{
	// Caller has added extra mappings to the arena.
	// Add extra mappings of bitmap words as needed.
	// We allocate extra bitmap pieces in chunks of bitmapChunk.
	enum {
		bitmapChunk = 8192
	};
	uintptr n;
2011-09-30 07:40:01 -06:00
2011-02-02 21:03:47 -07:00
	n = (h->arena_used - h->arena_start) / wordsPerBitmapWord;
2013-05-28 12:04:34 -06:00
	n = ROUND(n, bitmapChunk);
2011-02-02 21:03:47 -07:00
	if(h->bitmap_mapped >= n)
		return;
runtime: account for all sys memory in MemStats
Currently lots of sys allocations are not accounted in any of XxxSys,
including GC bitmap, spans table, GC roots blocks, GC finalizer blocks,
iface table, netpoll descriptors, and more. Up to ~20% can be unaccounted.
This change introduces 2 new stats: GCSys and OtherSys for GC metadata
and all other misc allocations, respectively.
Also ensures that all XxxSys indeed sum up to Sys. All sys memory allocation
functions require the stat for accounting, so that it's impossible to miss something.
Also fix updating of mcache_sys/inuse; they were not updated after deallocation.
test/bench/garbage/parser before:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14204928
MCacheSys 16384
BuckHashSys 1439992
after:
Sys 670064344
HeapSys 610271232
StackSys 65536
MSpanSys 14188544
MCacheSys 16384
BuckHashSys 3194304
GCSys 39198688
OtherSys 3129656
Fixes #5799.
R=rsc, dave, alex.brainman
CC=golang-dev
https://golang.org/cl/12946043
2013-09-06 14:55:40 -06:00
	runtime·SysMap(h->arena_start - n, n - h->bitmap_mapped, &mstats.gc_sys);
2011-02-02 21:03:47 -07:00
	h->bitmap_mapped = n;
}
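MHeap_MapBits keeps the bitmap mapping in step with arena growth: compute the bitmap bytes needed for the used arena, round up to a whole bitmapChunk, and map only the not-yet-mapped portion (the bitmap grows down from arena_start, hence arena_start - n); the &mstats.gc_sys argument charges the mapping to GCSys per the accounting change above. A standalone sketch of that bookkeeping, with a hypothetical map_pages() in place of runtime·SysMap:

#include <stdint.h>
#include <stdio.h>

enum { bitmapChunk = 8192, wordsPerBitmapWord = 16 };  // assumption: 64-bit layout

// Round x up to a multiple of n (n a power of two), like the runtime's ROUND macro.
#define ROUND(x, n) (((x) + (n) - 1) & ~((uintptr_t)(n) - 1))

static uintptr_t bitmap_mapped;  // bytes of bitmap mapped so far

// Hypothetical stand-in for runtime·SysMap.
static void
map_pages(uintptr_t addr, uintptr_t len)
{
	printf("map %#lx+%#lx\n", (unsigned long)addr, (unsigned long)len);
}

static void
map_bits(uintptr_t arena_start, uintptr_t arena_used)
{
	uintptr_t n;

	n = (arena_used - arena_start) / wordsPerBitmapWord;
	n = ROUND(n, bitmapChunk);
	if(bitmap_mapped >= n)
		return;  // current mapping already covers the used arena
	// Bitmap grows down from arena_start; map only the new portion.
	map_pages(arena_start - n, n - bitmap_mapped);
	bitmap_mapped = n;
}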