runtime: check for updated arena_end overflow
Currently, if an allocation is large enough that arena_end + size
overflows (which is not hard to do on 32-bit), we go ahead and call
sysReserve with the impossible base and length and depend on this to
either directly fail because the kernel can't possibly fulfill the
requested mapping (causing mheap.sysAlloc to return nil) or to succeed
with a mapping at some other address which will then be rejected as
outside the arena.
In order to make this less subtle, less dependent on the kernel
getting all of this right, and to eliminate the hopeless system call,
add an explicit overflow check.
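For illustration only (not part of this change), here is a minimal Go sketch of the wrap-around described above and of the guard the patch adds. arenaEnd and pSize are hypothetical stand-ins for h.arena_end and p_size, modeled with uint32 to mimic 32-bit uintptr arithmetic:

package main

import "fmt"

func main() {
	// Model 32-bit uintptr arithmetic with uint32: addition wraps at 2^32.
	var arenaEnd uint32 = 0xF0000000 // hypothetical current arena end
	var pSize uint32 = 512 << 20     // hypothetical reservation size

	newEnd := arenaEnd + pSize // wraps around to 0x10000000
	if arenaEnd <= newEnd {
		fmt.Printf("no overflow, new end = %#x\n", newEnd)
	} else {
		fmt.Printf("overflow: %#x + %#x wrapped to %#x, refuse to grow\n", arenaEnd, pSize, newEnd)
	}
}

Because unsigned addition wraps silently, the in-band check is the one the patch uses: the sum overflowed exactly when it came out smaller than one of the operands.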
Updates #13143. The real issue has been fixed by 0de59c2, but this is
a belt-and-suspenders improvement on top of that. It was uncovered by
my symbolic modeling of that bug.
Change-Id: I85fa868a33286fdcc23cdd7cdf86b19abf1cb2d1
Reviewed-on: https://go-review.googlesource.com/16961
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
parent fbe855ba29
commit f9357cdec1
@@ -392,8 +392,8 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 		// We are in 32-bit mode, maybe we didn't use all possible address space yet.
 		// Reserve some more space.
 		p_size := round(n+_PageSize, 256<<20)
-		new_end := h.arena_end + p_size
-		if new_end <= h.arena_start+_MaxArena32 {
+		new_end := h.arena_end + p_size // Careful: can overflow
+		if h.arena_end <= new_end && new_end <= h.arena_start+_MaxArena32 {
 			// TODO: It would be bad if part of the arena
 			// is reserved and part is not.
 			var reserved bool