test: move allocation before munmap in recover4

recover4 allocates 16 pages of memory via mmap, makes a 4-page hole in it
with munmap, allocates another 16 pages of memory via normal allocation,
and then tries to copy from one to the other. For some reason on arm64
(but no other platform I have tested), the second allocation sometimes
causes the runtime to ask the kernel for 4 additional pages of memory,
which the kernel satisfies by remapping the pages that were just unmapped!

Moving the second allocation before the munmap fixes this behaviour: with
the fix I can run recover4 tens of thousands of times without failure,
versus a failure rate of ~0.5% before.

Fixes #12549

Change-Id: I490b895b606897e4f7f25b1b51f5d485a366fffb
Reviewed-on: https://go-review.googlesource.com/14632
Reviewed-by: Dave Cheney <dave@cheney.net>
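
In other words, the failing sequence looks roughly like the following
(a minimal standalone sketch, assuming Linux; `size` stands for the page
size, as in the test — this is illustrative, not the test's actual code):

//go:build linux

package main

import (
	"fmt"
	"log"
	"syscall"
	"unsafe"
)

func main() {
	size := syscall.Getpagesize()

	// Map 16 pages of anonymous memory.
	data, err := syscall.Mmap(-1, 0, 16*size,
		syscall.PROT_READ|syscall.PROT_WRITE,
		syscall.MAP_ANON|syscall.MAP_PRIVATE)
	if err != nil {
		log.Fatalf("mmap: %v", err)
	}

	// Punch a 4-page hole in the middle with a raw munmap syscall.
	// (syscall.Munmap refuses to unmap only part of a mapped region.)
	if _, _, errno := syscall.Syscall(syscall.SYS_MUNMAP,
		uintptr(unsafe.Pointer(&data[8*size])), uintptr(4*size), 0); errno != 0 {
		log.Fatalf("munmap: %v", errno)
	}

	// Danger zone: if this allocation grows the heap, the kernel may hand
	// the runtime the very pages just unmapped, silently filling the hole.
	// That is why the fix moves this allocation before the munmap.
	other := make([]byte, 16*size)

	fmt.Println(len(data), len(other))
}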
parent 7c61d24f97
commit 955b4caa48
test/recover4.go

@@ -52,6 +52,8 @@ func main() {
 		log.Fatalf("mmap: %v", err)
 	}
 
+	other := make([]byte, 16*size)
+
 	// Note: Cannot call syscall.Munmap, because Munmap checks
 	// that you are unmapping a whole region returned by Mmap.
 	// We are trying to unmap just a hole in the middle.
@@ -59,8 +61,6 @@ func main() {
 		log.Fatalf("munmap: %v", err)
 	}
 
-	other := make([]byte, 16*size)
-
 	// Check that memcopy returns the actual amount copied
 	// before the fault (8*size - 5, the offset we skip in the argument).
 	n, err := memcopy(data[5:], other)