Commit f9357cde authored by Austin Clements

runtime: check for updated arena_end overflow

Currently, if an allocation is large enough that arena_end + size
overflows (which is not hard to do on 32-bit), we go ahead and call
sysReserve with the impossible base and length and depend on this to
either directly fail because the kernel can't possibly fulfill the
requested mapping (causing mheap.sysAlloc to return nil) or to succeed
with a mapping at some other address which will then be rejected as
outside the arena.
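
For illustration only (a standalone sketch, not runtime code, with
hypothetical values): on a 32-bit platform uintptr is 32 bits wide, so
the addition simply wraps. Modeled here with uint32 for portability:

	package main

	import "fmt"

	func main() {
		// Hypothetical arena end near the top of a 32-bit address space.
		var arenaEnd uint32 = 0xfff00000
		pSize := uint32(256 << 20) // the 256 MB reservation increment
		newEnd := arenaEnd + pSize // wraps: 0xfff00000 + 0x10000000 = 0x0ff00000
		fmt.Printf("newEnd=%#x overflowed=%v\n", newEnd, newEnd < arenaEnd)
		// After the wrap, the naive check newEnd <= limit can spuriously pass.
	}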

In order to make this less subtle, less dependent on the kernel
getting all of this right, and to eliminate the hopeless system call,
add an explicit overflow check.
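
A minimal sketch of the idiom (standalone, with hypothetical names
mirroring the runtime fields): because the addition is unsigned, a
wrapped result is strictly less than the original arena_end, so a
single extra comparison detects the overflow:

	// reserveOK reports whether extending the arena by pSize stays in
	// bounds, rejecting any request whose end address overflows.
	func reserveOK(arenaEnd, pSize, limit uintptr) bool {
		newEnd := arenaEnd + pSize // Careful: can overflow
		return arenaEnd <= newEnd && newEnd <= limit
	}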

Updates #13143. The real issue has been fixed by 0de59c27, but this is
a belt-and-suspenders improvement on top of that. It was uncovered by
my symbolic modeling of that bug.

Change-Id: I85fa868a33286fdcc23cdd7cdf86b19abf1cb2d1
Reviewed-on: https://go-review.googlesource.com/16961
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
parent fbe855ba
@@ -392,8 +392,8 @@ func (h *mheap) sysAlloc(n uintptr) unsafe.Pointer {
 		// We are in 32-bit mode, maybe we didn't use all possible address space yet.
 		// Reserve some more space.
 		p_size := round(n+_PageSize, 256<<20)
-		new_end := h.arena_end + p_size
-		if new_end <= h.arena_start+_MaxArena32 {
+		new_end := h.arena_end + p_size // Careful: can overflow
+		if h.arena_end <= new_end && new_end <= h.arena_start+_MaxArena32 {
 			// TODO: It would be bad if part of the arena
 			// is reserved and part is not.
 			var reserved bool