Commit eabdd058 authored by Austin Clements

runtime: document memory ordering for h_spans

h_spans can be accessed concurrently without synchronization from
other threads, which means it needs the appropriate memory barriers on
weakly ordered machines. It happens to already have the necessary
memory barriers because all accesses to h_spans are currently
protected by the heap lock and the unlocks happen in exactly the
places where release barriers are needed, but it's easy to imagine
that this could change in the future. Document the fact that we're
depending on the barrier implied by the unlock.

Related to issue #9984.

Change-Id: I1bc3c95cd73361b041c8c95cd4bb92daf8c1f94a
Reviewed-on: https://go-review.googlesource.com/11361
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
parent ef4a17bc
@@ -444,6 +444,16 @@ func mHeap_Alloc_m(h *mheap, npage uintptr, sizeclass int32, large bool) *mspan
 	if trace.enabled {
 		traceHeapAlloc()
 	}
+
+	// h_spans is accessed concurrently without synchronization
+	// from other threads. Hence, there must be a store/store
+	// barrier here to ensure the writes to h_spans above happen
+	// before the caller can publish a pointer p to an object
+	// allocated from s. As soon as this happens, the garbage
+	// collector running on another processor could read p and
+	// look up s in h_spans. The unlock acts as the barrier to
+	// order these writes. On the read side, the data dependency
+	// between p and the index in h_spans orders the reads.
 	unlock(&h.lock)
 	return s
 }
@@ -479,6 +489,8 @@ func mHeap_AllocStack(h *mheap, npage uintptr) *mspan {
 		s.ref = 0
 		memstats.stacks_inuse += uint64(s.npages << _PageShift)
 	}
+
+	// This unlock acts as a release barrier. See mHeap_Alloc_m.
 	unlock(&h.lock)
 	return s
 }
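
For readers unfamiliar with the pattern the new comments describe, here is a minimal sketch of it in ordinary Go. The names (span, spans, allocate, lookup, pageShift) are invented stand-ins for mspan, h_spans, and the runtime's allocation and lookup paths, not the real runtime code. The point it illustrates is the one in the comment: every write to the table happens before the unlock, the unlock's release semantics make those writes visible before the caller can publish a pointer derived from the span, and the reader orders its loads through the data dependency between the pointer and the table index. (The runtime performs the lookup without a lock; ordinary Go code should not rely on data-dependency ordering this way, and the race detector would flag it.)

// Illustrative sketch only; assumed names, not runtime code.
package main

import (
	"fmt"
	"sync"
)

type span struct {
	base   uintptr // first byte covered by the span
	npages uintptr
}

const pageShift = 13 // stand-in for the runtime's page-size shift

var (
	lock  sync.Mutex
	spans = make([]*span, 1024) // stand-in for h_spans: indexed by page number
)

// allocate records s in the table while holding the lock. Every write
// to spans happens before Unlock; the unlock's release semantics make
// those writes visible before the caller can publish any pointer
// derived from s (mirroring the role of unlock(&h.lock) in the diff).
func allocate(page uintptr) *span {
	lock.Lock()
	s := &span{base: page << pageShift, npages: 1}
	spans[page] = s
	lock.Unlock() // acts as the release barrier
	return s
}

// lookup models the read side: the index is computed from the pointer
// p itself, so the load of spans[idx] is ordered after the load of p
// by the data dependency.
func lookup(p uintptr) *span {
	idx := p >> pageShift
	return spans[idx]
}

func main() {
	s := allocate(3)
	p := s.base                 // the "published" pointer handed to a caller
	fmt.Println(lookup(p) == s) // prints true: the reader finds the span via p
}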