1. 29 Jul, 2015 4 commits
  2. 28 Jul, 2015 23 commits
  3. 27 Jul, 2015 13 commits
    • net/http: pause briefly after closing Server connection when body remains · a3ffd836
      Brad Fitzpatrick authored
      From https://github.com/golang/go/issues/11745#issuecomment-123555313
      this implements option (b), having the server pause slightly after
      sending the final response on a TCP connection when we're about to close
      it when we know there's a request body outstanding. This biases the
      client (which might not be Go) to prefer our response header over the
      request body write error.
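      The idea can be sketched as a small standalone helper (the constant name, duration, and function name here are hypothetical illustrations, not the patch itself): half-close the write side so the final response is flushed, pause briefly, then fully close.

```go
package main

import (
	"io"
	"net"
	"time"
)

// rstAvoidanceDelay is a hypothetical name and duration for the brief
// pause described in the commit; the real identifier and value may differ.
const rstAvoidanceDelay = 500 * time.Millisecond

// closeWriteAndWait sketches option (b): half-close the write side so
// the final response reaches the peer, pause briefly, then fully close.
// The pause biases the client toward reading our response header before
// it observes a write error (or RST) on its still-unread request body.
func closeWriteAndWait(c *net.TCPConn) error {
	if err := c.CloseWrite(); err != nil {
		return err
	}
	time.Sleep(rstAvoidanceDelay)
	return c.Close()
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		c, err := net.Dial("tcp", ln.Addr().String())
		if err != nil {
			return
		}
		io.Copy(io.Discard, c) // client reads until the server closes
		c.Close()
	}()
	c, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	if err := closeWriteAndWait(c.(*net.TCPConn)); err != nil {
		panic(err)
	}
}
```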
      
      Updates #11745
      
      Change-Id: I07cb0b74519d266c8049d9e0eb23a61304eedbf8
      Reviewed-on: https://go-review.googlesource.com/12658
      Reviewed-by: Russ Cox <rsc@golang.org>
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
    • runtime/cgo: remove TMPDIR logic for iOS · 249894ab
      David Crawshaw authored
      Seems like the simplest solution for 1.5. All the parts of the test
      suite I can run on my current device (for which my exception handler
      fix no longer works, apparently) pass without this code. I'll move it
      into x/mobile/app.
      
      Fixes #11884
      
      Change-Id: I2da40c8c7b48a4c6970c4d709dd7c148a22e8727
      Reviewed-on: https://go-review.googlesource.com/12721
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
    • runtime: close window that hides GC work from concurrent mark · c1f7a56f
      Austin Clements authored
      Currently we enter mark 2 by first flushing all existing gcWork caches
      and then setting gcBlackenPromptly, which disables further gcWork
      caching. However, if a worker or assist pulls a work buffer in to its
      gcWork cache after that cache has been flushed but before caching is
      disabled, that work may remain in that cache until mark termination.
      If that work represents a heap bottleneck (e.g., a single pointer that
      is the only way to reach a large amount of the heap), this can force
      mark termination to do a large amount of work, resulting in a long
      STW.
      
      Fix this by reversing the order of these steps: first disable caching,
      then flush all existing caches.
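      A simplified model of the corrected ordering (all names here are hypothetical, not the runtime's actual identifiers): once the prompt-blackening flag is set, any worker trying to cache work publishes it globally instead, so flushing after setting the flag cannot race with a refill.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// gcState models the relevant pieces: a flag disabling local caching,
// a global work list, and per-worker caches. All names are hypothetical.
type gcState struct {
	blackenPromptly atomic.Bool
	mu              sync.Mutex
	globalWork      []int // pointers to scan, modeled as ints
	caches          map[int][]int
}

// putWork caches work locally unless caching is disabled, in which
// case the work must go straight to the global list.
func (s *gcState) putWork(worker, w int) {
	if s.blackenPromptly.Load() {
		s.mu.Lock()
		s.globalWork = append(s.globalWork, w)
		s.mu.Unlock()
		return
	}
	s.caches[worker] = append(s.caches[worker], w)
}

// enterMark2 applies the fix's ordering: first disable caching, then
// flush every existing cache. A worker acquiring work in between now
// lands on the global list rather than in a soon-invisible cache.
func (s *gcState) enterMark2() {
	s.blackenPromptly.Store(true) // step 1: disable caching
	s.mu.Lock()                   // step 2: flush existing caches
	for w, c := range s.caches {
		s.globalWork = append(s.globalWork, c...)
		delete(s.caches, w)
	}
	s.mu.Unlock()
}

func main() {
	s := &gcState{caches: map[int][]int{0: {1, 2}}}
	s.enterMark2()
	s.putWork(1, 3) // arrives after the transition: goes global
	fmt.Println(len(s.globalWork), len(s.caches)) // 3 0
}
```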
      
      Rick Hudson <rlh> did the hard work of tracking this down. This CL
      combined with CL 12672 and CL 12646 distills the critical parts of his
      fix from CL 12539.
      
      Fixes #11694.
      
      Change-Id: Ib10d0a21e3f6170a80727d0286f9990df049fed2
      Reviewed-on: https://go-review.googlesource.com/12688
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: enable GC assists ASAP · 510fd135
      Austin Clements authored
      Currently the GC coordinator enables GC assists at the same time it
      enables background mark workers, after the concurrent scan phase is
      done. However, this means a rapidly allocating mutator has the entire
      scan phase during which to allocate beyond the heap trigger and
      potentially beyond the heap goal with no back-pressure from assists.
      This prevents the feedback system that's supposed to keep the heap
      size under the heap goal from doing its job.
      
      Fix this by enabling mutator assists during the scan phase. This is
      safe because the write barrier is already enabled and globally
      acknowledged at this point.
      
      There's still a very small window between when the heap size reaches
      the heap trigger and when the GC coordinator is able to stop the world
      during which the mutator can allocate unabated. This allows *very*
      rapidly allocating mutators like TestTraceStress to still occasionally
      exceed the heap goal by a small amount (~20 MB at most for
      TestTraceStress). However, this seems like a corner case.
      
      Fixes #11677.
      
      Change-Id: I0f80d949ec82341cd31ca1604a626efb7295a819
      Reviewed-on: https://go-review.googlesource.com/12674
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: allow GC drain whenever write barrier is enabled · f5e67e53
      Austin Clements authored
      Currently we hand-code a set of phases when draining is allowed.
      However, this set of phases is conservative. The critical invariant is
      simply that the write barrier must be enabled if we're draining.
      
      Shortly we're going to enable mutator assists during the scan phase,
      which means we may drain during the scan phase. In preparation, this
      commit generalizes these assertions to check the fundamental condition
      that the write barrier is enabled, rather than checking that we're in
      any particular phase.
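      The shape of the generalization can be sketched as follows (phase names and fields are hypothetical stand-ins, not the runtime's actual symbols): the old check enumerates phases, while the new one tests the invariant directly.

```go
package main

import "fmt"

// Hypothetical phase constants standing in for the runtime's GC phases.
const (
	gcPhaseScan = iota
	gcPhaseMark
	gcPhaseMarkTermination
)

type gcController struct {
	phase          int
	writeBarrierOn bool
}

// canDrainOld hand-codes the allowed phases (the conservative check
// the commit removes).
func (c *gcController) canDrainOld() bool {
	return c.phase == gcPhaseMark || c.phase == gcPhaseMarkTermination
}

// canDrainNew checks the fundamental invariant instead: draining is
// safe exactly when the write barrier is enabled.
func (c *gcController) canDrainNew() bool {
	return c.writeBarrierOn
}

func main() {
	// During scan the write barrier is already on, so the new check
	// permits draining where the old phase list did not.
	c := &gcController{phase: gcPhaseScan, writeBarrierOn: true}
	fmt.Println(c.canDrainOld(), c.canDrainNew()) // false true
}
```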
      
      Change-Id: I0e1bec1ca823d4a697a0831ec4c50f5dd3f2a893
      Reviewed-on: https://go-review.googlesource.com/12673
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: don't start workers between mark 1 & 2 · 64a32ffe
      Austin Clements authored
      Currently we clear both the mark 1 and mark 2 signals at the beginning
      of concurrent mark. If either of these is clear, it acts as a signal
      to the scheduler that it should start background workers. However,
      this means that in the interim *between* mark 1 and mark 2, the
      scheduler basically loops starting up new workers only to have them
      return with nothing to do. In addition to harming performance and
      delaying mutator work, this approach has a race where workers started
      for mark 1 can mistakenly signal mark 2, causing it to complete
      prematurely. This approach also interferes with starting assists
      earlier to fix #11677.
      
      Fix this by initially setting both mark 1 and mark 2 to "signaled".
      The scheduler will not start background mark workers, though assists
      can still run. When we're ready to enter mark 1, we clear the mark 1
      signal and wait for it. Then, when we're ready to enter mark 2, we
      clear the mark 2 signal and wait for it.
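      The signal protocol can be modeled with a toy sketch (names modeled loosely on bgMark1/bgMark2; the fields and scheduler check are illustrative, not the real code): both signals start "set", and the scheduler only starts workers for a sub-phase whose signal the coordinator has cleared.

```go
package main

import "fmt"

// markSignal models bgMark1/bgMark2 from the commit: "done" starts
// true ("signaled"), so the scheduler sees nothing to do until the
// coordinator explicitly clears it.
type markSignal struct{ done bool }

type coordinator struct {
	bgMark1, bgMark2 markSignal
}

func newCoordinator() *coordinator {
	// Initially both signals are set: no background workers start.
	return &coordinator{markSignal{true}, markSignal{true}}
}

// shouldStartWorker is the scheduler-side check: start workers only
// for a sub-phase whose completion signal has been cleared.
func (c *coordinator) shouldStartWorker() bool {
	return !c.bgMark1.done || !c.bgMark2.done
}

func main() {
	c := newCoordinator()
	fmt.Println(c.shouldStartWorker()) // before mark 1: no workers

	c.bgMark1.done = false // coordinator enters mark 1
	fmt.Println(c.shouldStartWorker())

	c.bgMark1.done = true // workers finish mark 1; interim: no workers
	fmt.Println(c.shouldStartWorker())

	c.bgMark2.done = false // coordinator enters mark 2
	fmt.Println(c.shouldStartWorker())
}
```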
      
      This structure also lets us deal cleanly with the situation where all
      work is drained *prior* to the mark 2 wait, meaning that there may be
      no workers to signal completion. Currently we deal with this using a
      racy (and possibly incorrect) check for work in the coordinator itself
      to skip the mark 2 wait if there's no work. This change makes the
      coordinator unconditionally wait for mark completion and makes the
      scheduler itself signal completion by slightly extending the logic it
      already has to determine that there's no work and hence no use in
      starting a new worker.
      
      This is a prerequisite to fixing the remaining component of #11677,
      which will require enabling assists during the scan phase. However, we
      don't want to enable background workers until the mark phase because
      they will compete with the scan. This change lets us use bgMark1 and
      bgMark2 to indicate when it's okay to start background workers
      independent of assists.
      
      This is also a prerequisite to fixing #11694. It significantly reduces
      the occurrence of long mark termination pauses in #11694 (from 64 out
      of 1000 to 2 out of 1000 in one experiment).
      
      Coincidentally, this also reduces the final heap size (and hence run
      time) of TestTraceStress from ~100 MB and ~1.9 seconds to ~14 MB and
      ~0.4 seconds because it significantly shortens concurrent mark
      duration.
      
      Rick Hudson <rlh> did the hard work of tracking this down.
      
      Change-Id: I12ea9ee2db9a0ae9d3a90dde4944a75fcf408f4c
      Reviewed-on: https://go-review.googlesource.com/12672
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: retry GC assist until debt is paid off · 8f34b253
      Austin Clements authored
      Currently, there are three ways to satisfy a GC assist: 1) the mutator
      steals credit from background GC, 2) the mutator actually does GC
      work, and 3) there is no more work available. 3 was never really
      intended as a way to satisfy an assist, and it causes problems: there
      are periods when it's expected that the GC won't have any work, such
      as when transitioning from mark 1 to mark 2 and from mark 2 to mark
      termination. During these periods, there's no back-pressure on rapidly
      allocating mutators, which lets them race ahead of the heap goal.
      
      For example, test/init1.go and the runtime/trace test both have small
      reachable heaps and contain loops that rapidly allocate large garbage
      byte slices. This bug lets these tests exceed the heap goal by several
      orders of magnitude.
      
      Fix this by forcing the assist (and hence the allocation) to block
      until it can satisfy its debt via either 1 or 2, or the GC cycle
      terminates.
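      The retry loop can be sketched in miniature (function and parameter names are hypothetical; a real assist would park rather than spin): the assist only returns once its debt is paid via ways 1 or 2, or the cycle ends.

```go
package main

import "fmt"

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// gcAssist is a toy model of the fixed assist: it must fully pay its
// debt via stolen background credit (way 1) or its own mark work
// (way 2) before returning; "no work available" (way 3) no longer
// satisfies it. Only cycle termination forgives the remaining debt.
func gcAssist(debt int, bgCredit *int, doWork func(max int) int, cycleDone func() bool) int {
	for debt > 0 {
		if *bgCredit > 0 { // way 1: steal background credit
			steal := minInt(debt, *bgCredit)
			*bgCredit -= steal
			debt -= steal
			continue
		}
		if n := doWork(debt); n > 0 { // way 2: do mark work ourselves
			debt -= n
			continue
		}
		if cycleDone() { // GC cycle terminated: debt is forgiven
			return 0
		}
		// Otherwise retry; the real assist blocks here until more
		// work or credit appears.
	}
	return debt
}

func main() {
	credit := 3
	work := 10
	doWork := func(max int) int { n := minInt(max, work); work -= n; return n }
	left := gcAssist(8, &credit, doWork, func() bool { return false })
	fmt.Println(left, credit, work) // 0 0 5
}
```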
      
      This fixes one of the causes of #11677. It's still possible to overshoot
      the GC heap goal, but with this change the overshoot is almost exactly
      by the amount of allocation that happens during the concurrent scan
      phase, between when the heap passes the GC trigger and when the GC
      enables assists.
      
      Change-Id: I5ef4edcb0d2e13a1e432e66e8245f2bd9f8995be
      Reviewed-on: https://go-review.googlesource.com/12671
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: yield to GC coordinator after assist completion · 500c88d4
      Austin Clements authored
      Currently it's possible for the GC assist to signal completion of the
      mark phase, which puts the GC coordinator goroutine on the current P's
      run queue, and then return to mutator code that delays until the next
      forced preemption before actually yielding control to the GC
      coordinator, dragging out completion of the mark phase. This delay can
      be further exacerbated if the mutator makes other goroutines runnable
      before yielding control, since this will push the GC coordinator on
      the back of the P's run queue.
      
      To fix this, this adds a Gosched to the assist if it completed the
      mark phase. This immediately and directly yields control to the GC
      coordinator. This already happens implicitly in the background mark
      workers because they park immediately after completing the mark.
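      A minimal sketch of the pattern (function and callback names are hypothetical): if this assist completed the mark phase, ready the coordinator and yield immediately rather than running on to the next preemption point.

```go
package main

import (
	"fmt"
	"runtime"
)

// signalAndYield sketches the fix: when this assist observed mark
// completion, it readies the coordinator goroutine and then yields at
// once with Gosched, instead of continuing in mutator code until the
// next forced preemption.
func signalAndYield(completed bool, readyCoordinator func()) {
	if !completed {
		return
	}
	readyCoordinator() // puts the coordinator G on this P's run queue
	runtime.Gosched()  // yield now so the coordinator actually runs
}

func main() {
	readied := false
	signalAndYield(true, func() { readied = true })
	fmt.Println(readied) // true
}
```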
      
      This is one of the reasons completion of the mark phase is being
      dragged out and allowing the mutator to allocate without assisting,
      leading to the large heap goal overshoot in issue #11677. This is also
      a prerequisite to making the assist block when it can't pay off its
      debt.
      
      Change-Id: I586adfbecb3ca042a37966752c1dc757f5c7fc78
      Reviewed-on: https://go-review.googlesource.com/12670
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: disallow GC assists in non-preemptible contexts · 4f188c2d
      Austin Clements authored
      Currently it's possible to perform GC work on a system stack or when
      locks are held if there's an allocation that triggers an assist. This
      is generally a bad idea because of the fragility of these contexts,
      and it's incompatible with two changes we're about to make: one is to
      yield after signaling mark completion (which we can't do from a
      non-preemptible context) and the other is to make assists block if
      there's no other way for them to pay off the assist debt.
      
      This commit simply skips the assist if it's called from a
      non-preemptible context. The allocation will still count toward the
      assist debt, so it will be paid off by a later assist. There should be
      little allocation from non-preemptible contexts, so this shouldn't
      harm the overall assist mechanism.
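      The gate can be sketched like this (the struct and field names are hypothetical stand-ins loosely modeled on scheduler state; the real check differs): assist only from a preemptible context, and let the debt carry over otherwise.

```go
package main

import "fmt"

// m is a toy stand-in for per-thread scheduler state; held locks, an
// active preemption-off region, or running on the system stack all
// make the context non-preemptible. Field names are hypothetical.
type m struct {
	locks         int
	preemptOff    string
	onSystemStack bool
}

// shouldAssist implements the commit's rule: skip the assist entirely
// in a non-preemptible context. The allocation still counts toward
// the assist debt, which a later, preemptible assist pays off.
func shouldAssist(mp *m) bool {
	return mp.locks == 0 && mp.preemptOff == "" && !mp.onSystemStack
}

func main() {
	fmt.Println(shouldAssist(&m{}))                   // true
	fmt.Println(shouldAssist(&m{locks: 1}))           // false
	fmt.Println(shouldAssist(&m{onSystemStack: true})) // false
}
```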
      
      Change-Id: I7bf0e6c73e659fe6b52f27437abf39d76b245c79
      Reviewed-on: https://go-review.googlesource.com/12649
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: make notetsleep_internal nowritebarrier · dff9108d
      Austin Clements authored
      When notetsleep_internal is called from notetsleepg, notetsleepg has
      just given up the P, so write barriers are not allowed in
      notetsleep_internal.
      
      Change-Id: I1b214fa388b1ea05b8ce2dcfe1c0074c0a3c8870
      Reviewed-on: https://go-review.googlesource.com/12647
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: fix mark 2 completion in fractional/idle workers · cf225a17
      Austin Clements authored
      Currently fractional and idle mark workers dispose of their gcWork
      cache during mark 2 after incrementing work.nwait and after checking
      whether there are any workers or any work available. This creates a
      window for two races:
      
      1) If the only remaining work is in this worker's gcWork cache, it
         will see that there are no more workers and no more work on the
         global lists (since it has not yet flushed its own cache) and
         prematurely signal mark 2 completion.
      
      2) After this worker has incremented work.nwait but before it has
         flushed its cache, another worker may observe that there are no
         more workers and no more work and prematurely signal mark 2
         completion.
      
      We can fix both of these by simply moving the cache flush above the
      increment of nwait and the test of the completion condition.
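      The fixed ordering can be sketched in a toy model (all names hypothetical): flush the cache first, so the completion test can see any work this worker was still holding.

```go
package main

import "fmt"

// workState is a toy model of the shared mark state: a global work
// list, the count of idle workers, and the total worker count.
type workState struct {
	globalWork   []int
	nwait, nproc int
}

// finishWorkerFixed applies the commit's ordering: (1) flush this
// worker's cache to the global list, (2) count ourselves as idle,
// (3) test completion. In the racy original, steps 2-3 came before
// step 1, so cached work was invisible to the completion check and
// mark 2 could be signaled prematurely.
func finishWorkerFixed(ws *workState, cache []int) bool {
	ws.globalWork = append(ws.globalWork, cache...) // 1. flush cache
	ws.nwait++                                      // 2. count as idle
	return ws.nwait == ws.nproc && len(ws.globalWork) == 0
}

func main() {
	ws := &workState{nproc: 1}
	// The last worker still holds cached work: completion must not fire.
	done := finishWorkerFixed(ws, []int{42})
	fmt.Println(done, len(ws.globalWork)) // false 1
}
```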
      
      This is probably contributing to #11694, though this alone is not
      enough to fix it.
      
      Change-Id: Idcf9656e5c460c5ea0d23c19c6c51e951f7716c3
      Reviewed-on: https://go-review.googlesource.com/12646
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: steal the correct amount of GC assist credit · b8526a83
      Austin Clements authored
      GC assists are supposed to steal at most the amount of background GC
      credit available so that background GC credit doesn't go negative.
      However, they are instead stealing the *total* amount of their debt
      but only claiming up to the amount of credit that was available. This
      results in draining the background GC credit pool too quickly, which
      results in unnecessary assist work.
      
      The fix is trivial: steal the amount of work we meant to steal (which
      is already computed).
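      The corrected arithmetic amounts to taking the minimum of the debt and the available credit (the function name here is illustrative, not the runtime's):

```go
package main

import "fmt"

// stealCredit returns how much background GC credit an assist should
// take: min(debt, available). Stealing the full debt regardless of
// what is available (the bug) drains the background pool too quickly.
func stealCredit(debt, available int64) int64 {
	if debt < available {
		return debt
	}
	return available
}

func main() {
	// Debt 100 with only 30 credit available: steal 30, not 100.
	fmt.Println(stealCredit(100, 30)) // 30
	// Debt 20 with 30 available: steal only the 20 we owe.
	fmt.Println(stealCredit(20, 30)) // 20
}
```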
      
      Change-Id: I837fe60ed515ba91c6baf363248069734a7895ef
      Reviewed-on: https://go-review.googlesource.com/12643
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • cmd/dist: run misc/cgo/testsovar on darwin and netbsd · 129cfa27
      Ian Lance Taylor authored
      CL https://golang.org/cl/12470 has reportedly fixed the problems that
      the misc/cgo/testsovar test encountered on darwin and netbsd.  Let's
      actually run the test.
      
      Update #10360.
      Update #11654.
      
      Change-Id: I4cdd27a8ec8713620e0135780a03f63cfcc538d0
      Reviewed-on: https://go-review.googlesource.com/12702
      Reviewed-by: Russ Cox <rsc@golang.org>