
runtime: don't corrupt arena bounds on low mmap

If mheap.sysAlloc doesn't have room in the heap arena for an
allocation, it will attempt to map more address space with sysReserve.
sysReserve is given a hint, but can return any unused address range.
Currently, mheap.sysAlloc incorrectly assumes the returned region will
never fall between arena_start and arena_used. If it does,
mheap.sysAlloc will blindly accept the new region as the new
arena_used and arena_end, causing these to decrease and make it so any
Go heap above the new arena_used is no longer considered part of the
Go heap. This assumption *used to be* safe because we had all memory
between arena_start and arena_used mapped, but when we switched to an
arena_start of 0 on 32-bit, it stopped being safe.
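
To illustrate (a minimal sketch with hypothetical names, addresses,
and simplified bookkeeping, not the runtime's actual code): if any
reservation at or above arena_start is accepted, the bounds can move
backward:

	package main

	import "fmt"

	// arena models just the three bounds involved. All names and
	// addresses below are hypothetical, chosen for illustration.
	type arena struct {
		start, used, end uintptr
	}

	// acceptBuggy mimics the old acceptance test: any reservation
	// at p >= start is adopted, even when p lands below used.
	func (a *arena) acceptBuggy(p, size uintptr) {
		if a.start <= p {
			a.end = p + size
			a.used = p // decreases if p < the old used
		}
	}

	func main() {
		a := arena{start: 0, used: 0x40000000, end: 0x50000000}
		// The kernel returns a region below arena_used, which
		// is possible once arena_start is 0 on 32-bit:
		a.acceptBuggy(0x10000000, 0x1000000)
		// used and end have both decreased; heap pages above
		// the new used are no longer considered in the arena.
		fmt.Printf("used=%#x end=%#x\n", a.used, a.end)
	}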

Most likely, we've only recently seen this bug occur because we
usually start arena_used just above the binary, which is low in the
address space. Hence, the kernel is very unlikely to give us a region
before arena_used.

Since mheap.sysAlloc is a linear allocator, there's not much we can do
to handle this well. Hence, we fix this problem by simply rejecting
the new region if it isn't after arena_end. In this case, we'll take
the fall-back path and mmap a small region at any address just for the
requested memory.
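
In terms of the sketch above, the fixed acceptance test only adopts a
region strictly beyond the current arena_end, so the bounds can only
grow (the real code additionally page-aligns the new used boundary
and enforces the _MaxMem limit; this is a simplification):

	// acceptFixed mimics the new test: only adopt the reservation
	// when it lies beyond the current end of the arena; otherwise
	// reject it and let the caller take the fall-back path.
	func (a *arena) acceptFixed(p, size uintptr) bool {
		if a.end < p {
			a.end = p + size
			a.used = p
			return true
		}
		return false
	}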

Fixes #20259.

Change-Id: Ib72e8cd621545002d595c7cade1e817cfe3e5b1e
Reviewed-on: https://go-review.googlesource.com/43870
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Austin Clements 2017-05-22 15:53:49 -04:00
parent 95d991d30c
commit e5a5c03f5b


@@ -411,12 +411,14 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 			if p == 0 {
 				return nil
 			}
+			// p can be just about anywhere in the address
+			// space, including before arena_end.
 			if p == h.arena_end {
 				// The new reservation is contiguous
 				// with the old reservation.
 				h.arena_end = new_end
 				h.arena_reserved = reserved
-			} else if h.arena_start <= p && p+p_size-h.arena_start-1 <= _MaxMem {
+			} else if h.arena_end < p && p+p_size-h.arena_start-1 <= _MaxMem {
 				// We were able to reserve more memory
 				// within the arena space, but it's
 				// not contiguous with our previous
@@ -430,6 +432,16 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 				h.setArenaUsed(used, false)
 				h.arena_reserved = reserved
 			} else {
+				// We got a mapping, but it's not
+				// linear with our current arena, so
+				// we can't use it.
+				//
+				// TODO: Make it possible to allocate
+				// from this. We can't decrease
+				// arena_used, but we could introduce
+				// a new variable for the current
+				// allocation position.
+
 				// We haven't added this allocation to
 				// the stats, so subtract it from a
 				// fake stat (but avoid underflow).
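
An aside on the unchanged half of the condition:
p+p_size-h.arena_start-1 <= _MaxMem tests the offset of the region's
*last* byte from arena_start, which presumably keeps the test
meaningful even when p+p_size wraps past the top of the address
space. A standalone sketch (hypothetical values, modeling a 32-bit
address space with uint32 so the wrap is visible on any host):

	package main

	import "fmt"

	func main() {
		// maxMem is a stand-in for a _MaxMem-style limit; all
		// values here are hypothetical.
		const maxMem = uint32(1<<31 - 1)
		start := uint32(0)

		// A reservation ending exactly at the top of the
		// address space: p+size wraps around to 0.
		p := uint32(0xFFFF0000)
		size := uint32(0x10000)

		end := p + size
		fmt.Printf("p+size wraps to %#x\n", end) // 0x0

		// A naive end-based test is fooled by the wrap and
		// would accept the region:
		fmt.Println("naive:", end-start <= maxMem) // true

		// The last-byte form used in the diff rejects it:
		fmt.Println("last byte:", p+size-start-1 <= maxMem) // false
	}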