06 Aug, 2015 (6 commits)
05 Aug, 2015 (22 commits)
04 Aug, 2015 (12 commits)
    • reflect: fix doc string · 3cfc34a5
      Robert Griesemer authored
      Fixes #12017.
      
      Change-Id: I3dfcf9d0b62cae02eca1973383f0aad286a6ef4d
      Reviewed-on: https://go-review.googlesource.com/13136
      Reviewed-by: Keith Randall <khr@golang.org>
    • runtime: fix typos in comments · be39a429
      Austin Clements authored
      Change-Id: I66f7937b22bb6e05c3f2f0f2a057151020ad9699
      Reviewed-on: https://go-review.googlesource.com/13049
      Reviewed-by: Ian Lance Taylor <iant@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: fix assist utilization computation · e3870aa6
      Austin Clements authored
      When commit 510fd135 enabled assists during the scan phase, it failed
      to also update the code in the GC controller that computed the assist
      CPU utilization and adjusted the trigger based on it. Fix that code so
      it uses the start of the scan phase as the wall-clock time when
      assists were enabled rather than the start of the mark phase.
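A minimal sketch of the corrected computation, with illustrative names rather than the runtime's own identifiers: assist utilization is assist CPU time divided by total CPU time elapsed since assists were enabled, so the wall-clock baseline must be the scan-phase start.

```go
package main

import (
	"fmt"
	"time"
)

// assistUtilization is a sketch, not the runtime's code: the fraction of
// total CPU time spent in assists since assists were enabled. The fix makes
// the baseline the scan-phase start, when assists actually turn on.
func assistUtilization(assistCPU time.Duration, assistsEnabled time.Time, gomaxprocs int) float64 {
	totalCPU := time.Since(assistsEnabled) * time.Duration(gomaxprocs)
	return float64(assistCPU) / float64(totalCPU)
}

func main() {
	scanStart := time.Now().Add(-100 * time.Millisecond) // pretend scanning began 100 ms ago
	fmt.Printf("assist utilization: %.2f\n", assistUtilization(25*time.Millisecond, scanStart, 4))
}
```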
      
      Change-Id: I05013734b4448c3e2c730dc7b0b5ee28c86ed8cf
      Reviewed-on: https://go-review.googlesource.com/13048
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: revise assist ratio aggressively · 1fb01a88
      Austin Clements authored
      At the start of a GC cycle, the garbage collector computes the assist
      ratio based on the total scannable heap size. This was intended to be
      conservative; after all, this assumes the entire heap may be reachable
      and hence needs to be scanned. But it only assumes that the *current*
      entire heap may be reachable. It fails to account for heap allocated
      during the GC cycle. If the trigger ratio is very low (near zero), and
      most of the heap is reachable when GC starts (which is likely if the
      trigger ratio is near zero), then it's possible for the mutator to
      create new, reachable heap fast enough that the assists won't keep up
      based on the assist ratio computed at the beginning of the cycle. As a
      result, the heap can grow beyond the heap goal (by hundreds of megs in
      stress tests like in issue #11911).
      
      We already have some vestigial logic for dealing with situations like
      this; it just doesn't run often enough. Currently, every 10 ms during
      the GC cycle, the GC revises the assist ratio. This was put in before
      we switched to a conservative assist ratio (when we really were using
      estimates of scannable heap), and it turns out to be exactly what we
      need now. However, every 10 ms is far too infrequent for a rapidly
      allocating mutator.
      
      This commit reuses that logic, but instead of a 10 ms timer it
      revises the assist ratio every time the heap is locked, which
      coincides precisely with when the statistics used to compute the
      assist ratio are updated.
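To make the mechanism concrete, here is a hedged sketch of a revise step; gcController, its fields, and revise are illustrative stand-ins, not the runtime's identifiers:

```go
package main

import "fmt"

// gcController is a stand-in for the runtime's pacer state.
type gcController struct {
	heapGoal     uint64 // heap size at which this cycle should finish
	heapLive     uint64 // bytes currently allocated
	scanWorkLeft int64  // estimated scan work remaining
	assistRatio  float64
}

// revise recomputes the assist ratio from the latest statistics. The fix
// runs the equivalent of this every time the heap is locked, which is
// exactly when heapLive and scanWorkLeft are updated.
func (c *gcController) revise() {
	headroom := int64(c.heapGoal) - int64(c.heapLive)
	if headroom <= 0 {
		headroom = 1 // at or past the goal: assist as aggressively as possible
	}
	c.assistRatio = float64(c.scanWorkLeft) / float64(headroom)
}

func main() {
	c := &gcController{heapGoal: 100 << 20, heapLive: 90 << 20, scanWorkLeft: 40 << 20}
	c.revise()
	fmt.Printf("assist ratio: %.2f units of scan work per byte of headroom\n", c.assistRatio)
}
```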
      
      Fixes #11911.
      
      Change-Id: I377b231ab064946228378fa10422a46d1b50f4c5
      Reviewed-on: https://go-review.googlesource.com/13047
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: when gcpacertrace > 0, print information about assist ratio · f9dc3382
      Austin Clements authored
      This was useful in debugging the mutator assist behavior for #11911,
      and it fits with the other gcpacertrace output.
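For reference, the pacer trace is enabled through the gcpacertrace value in GODEBUG. A minimal driver that runs a program with tracing on (./myprog is a placeholder binary; the flag itself is real):

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("./myprog") // placeholder: any Go binary to trace
	cmd.Env = append(os.Environ(), "GODEBUG=gcpacertrace=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr // pacer trace lines are printed to stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```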
      
      Change-Id: I1e25590bb4098223a160de796578bd11086309c7
      Reviewed-on: https://go-review.googlesource.com/13046
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: make sweep proportional to spans bytes allocated · fc9ca85f
      Austin Clements authored
      Proportional concurrent sweep is currently based on a ratio of spans
      to be swept per byte of object allocation. However, proportional
      sweeping is performed during span allocation, not object allocation,
      in order to minimize contention and overhead. Since objects are
      allocated from spans after those spans are allocated, the system
      tends to operate in debt. As a result, when the next GC cycle starts,
      there is often sweep debt remaining, so GC has to finish the sweep,
      which delays the start of the cycle and delays enabling mutator
      assists.
      
      For example, it's quite likely that many Ps will simultaneously refill
      their span caches immediately after a GC cycle (because GC flushes the
      span caches), but at this point, there has been very little object
      allocation since the end of GC, so very little sweeping is done. The
      Ps then allocate objects from these cached spans, which drives up the
      bytes of object allocation, but since these allocations are coming
      from cached spans, nothing considers whether more sweeping has to
      happen. If the sweep ratio is high enough (which can happen if the
      next GC trigger is very close to the retained heap size), this can
      easily represent a sweep debt of thousands of pages.
      
      Fix this by making proportional sweep proportional to the number of
      bytes of spans allocated, rather than the number of bytes of objects
      allocated. Prior to allocating a span, both the small object path and
      the large object path ensure credit for allocating that span, so the
      system operates in the black, rather than in the red.
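A hedged sketch of the "operate in the black" idea; the names (deductSweepCredit, sweepPagesPerByte) illustrate the approach and are not quotations of the runtime's code:

```go
package main

import "fmt"

const pageSize = 8192

// Illustrative pacing state for one GC cycle.
var (
	sweepPagesPerByte float64 // sweep ratio fixed at the end of the last cycle
	pagesSwept        int64   // pages swept so far this cycle
	spanBytesAlloc    int64   // span bytes allocated so far this cycle
)

// deductSweepCredit sweeps enough pages up front to pay for an upcoming
// span allocation of spanBytes, so sweeping stays ahead of allocation
// rather than behind it.
func deductSweepCredit(spanBytes int64) {
	target := int64(sweepPagesPerByte * float64(spanBytesAlloc+spanBytes))
	for pagesSwept < target {
		sweepOnePage()
	}
	spanBytesAlloc += spanBytes
}

func sweepOnePage() { pagesSwept++ } // placeholder for the real page sweeper

func main() {
	sweepPagesPerByte = 0.01
	deductSweepCredit(4 * pageSize)
	fmt.Println("pages swept up front:", pagesSwept)
}
```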
      
      Combined with the previous commit, this should eliminate all sweeping
      from GC start up. On the stress test in issue #11911, this reduces the
      time spent sweeping during GC (and delaying start up) by several
      orders of magnitude:
      
                      mean    99%ile     max
          pre fix      1 ms    11 ms   144 ms
          post fix   270 ns   735 ns   916 ns
      
      Updates #11911.
      
      Change-Id: I89223712883954c9d6ec2a7a51ecb97172097df3
      Reviewed-on: https://go-review.googlesource.com/13044
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: always give concurrent sweep some heap distance · e30c6d64
      Austin Clements authored
      Currently it's possible for the next_gc heap size trigger computed for
      the next GC cycle to be less than the current allocated heap size.
      This means the next cycle will start immediately, which means there's
      no time to perform the concurrent sweep between GC cycles. This places
      responsibility for finishing the sweep on GC itself, which delays GC
      start-up and hence delays mutator assist.
      
      Fix this by ensuring that next_gc is always at least a little higher
      than the allocated heap size, so we won't trigger the next cycle
      instantly.
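The invariant can be sketched as a clamp; the 1 MB distance below is illustrative, not necessarily the margin the runtime uses:

```go
package main

import "fmt"

// clampNextGC keeps the next trigger at least a little above the live heap
// so the next cycle cannot start instantly and concurrent sweep gets room
// to run between cycles.
func clampNextGC(nextGC, heapLive uint64) uint64 {
	const minHeapDistance = 1 << 20 // illustrative margin
	if nextGC < heapLive+minHeapDistance {
		nextGC = heapLive + minHeapDistance
	}
	return nextGC
}

func main() {
	// A trigger computed below the live heap gets pushed up.
	fmt.Println(clampNextGC(50<<20, 60<<20)>>20, "MB")
}
```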
      
      Updates #11911.
      
      Change-Id: I74f0b887bf187518d5fedffc7989817cbcf30592
      Reviewed-on: https://go-review.googlesource.com/13043
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Russ Cox <rsc@golang.org>
    • runtime: assist the GC during GC startup and shutdown · fb5230af
      Austin Clements authored
      Currently there are two sensitive periods during which a mutator can
      allocate past the heap goal but mutator assists can't be enabled: 1)
      at the beginning of GC, between when the heap first passes the heap
      trigger and sweep termination, and 2) at the end of GC, between mark
      termination and when the background GC goroutine parks. During these
      periods there's no back-pressure or safety net, so a rapidly
      allocating mutator can allocate past the heap goal. This is
      exacerbated if there are many goroutines, because the GC coordinator
      is scheduled like any other goroutine, so if it gets preempted during
      one of these periods, it may stay preempted for a long time (tens or
      hundreds of milliseconds).
      
      Normally the mutator does scan work to create back-pressure against
      allocation, but there is no scan work during these periods. Hence, as
      a fallback, if a mutator would assist but can't yet, it simply yields
      the CPU. This delays the mutator somewhat but, more importantly,
      gives the GC coordinator more CPU time to complete the transition.
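A sketch of the fallback under stated assumptions: assistsEnabled, payAllocDebt, and doAssistWork are stand-ins for the runtime's phase flag and assist path, not its actual identifiers:

```go
package main

import (
	"runtime"
	"sync/atomic"
)

var assistsEnabled atomic.Bool // stand-in for "is the mark phase running?"

// payAllocDebt is what a mutator conceptually does once it has allocated
// past its budget: assist if assists are running, otherwise just yield the
// CPU so the GC coordinator can finish the phase transition.
func payAllocDebt(debt int64) {
	if debt <= 0 {
		return
	}
	if !assistsEnabled.Load() {
		runtime.Gosched() // no scan work exists yet: give up the CPU instead
		return
	}
	doAssistWork(debt)
}

func doAssistWork(debt int64) { /* placeholder for real scan work */ }

func main() { payAllocDebt(1 << 10) }
```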
      
      This is obviously a workaround. Issue #11970 suggests a far better but
      far more invasive way to fix this.
      
      Updates #11911. (This very nearly fixes the issue, but about once
      every 15 minutes I get a GC cycle where the assists are enabled but
      don't do enough work.)
      
      Change-Id: I9768b79e3778abd3e06d306596c3bd77f65bf3f1
      Reviewed-on: https://go-review.googlesource.com/13026
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • runtime: recheck GC trigger before actually starting GC · 88e945fd
      Austin Clements authored
      Currently, allocation checks the GC trigger speculatively and then
      triggers the GC without rechecking. As a result, it's possible for
      G 1 and G 2 to detect the trigger simultaneously and both enter
      startGC: G 1 actually starts GC, while G 2 is preempted until after
      the whole GC cycle and then immediately starts another cycle, even
      though the heap is now well under the trigger.
      
      Fix this by re-checking the GC trigger non-speculatively just before
      actually kicking off a new GC cycle.
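The fix is essentially a double-checked trigger. A minimal sketch, with illustrative names and a plain mutex standing in for the runtime's synchronization:

```go
package main

import "sync"

var (
	gcLock    sync.Mutex
	heapLive  uint64
	gcTrigger uint64 = 4 << 20
)

// maybeStartGC does a cheap, speculative check on the allocation path and
// then a non-speculative recheck under the lock before actually starting a
// cycle, so a G that lost the race cannot start a redundant cycle.
func maybeStartGC() {
	if heapLive < gcTrigger { // speculative: may race with another G
		return
	}
	gcLock.Lock()
	defer gcLock.Unlock()
	if heapLive >= gcTrigger { // recheck: another G may have just finished a cycle
		startCycle()
	}
}

func startCycle() { /* placeholder: kick off a GC cycle */ }

func main() {
	heapLive = 5 << 20
	maybeStartGC()
}
```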
      
      This contributes to #11911 because when this happens, we definitely
      don't finish the background sweep before starting the next GC cycle,
      which can significantly delay the start of concurrent scan.
      
      Change-Id: I560ab79ba5684ba435084410a9765d28f5745976
      Reviewed-on: https://go-review.googlesource.com/13025
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Rick Hudson <rlh@golang.org>
    • doc: link to design doc for GOMAXPROCS change in go1.5.html · d5f5e658
      Ian Lance Taylor authored
      Change-Id: Ifac10621fece766f3a0e8551e98d1f8d7072852f
      Reviewed-on: https://go-review.googlesource.com/13068
      Reviewed-by: Russ Cox <rsc@golang.org>
      Reviewed-by: Rob Pike <r@golang.org>
    • cmd/go: fix documentation for exported functions · c2ef8e75
      Ian Lance Taylor authored
      I accidentally submitted https://golang.org/cl/13080 too early.
      
      Update #11955.
      
      Change-Id: I1a5a6860bb46bc4bc6fd278f8a867d2dd9e411e1
      Reviewed-on: https://go-review.googlesource.com/13096
      Reviewed-by: Andrew Gerrand <adg@golang.org>
    • archive/tar: don't treat multiple file system links as a tar hardlink · a1d093d9
      Vincent Batts authored
      Do not assume that if stat shows multiple links we should mark the
      file as a hardlink in the tar format. If the link target was never
      referenced, this caused a link to "/". On an overlay file system,
      all files have multiple links.
      
      Callers must track the inode references themselves and set TypeLink,
      Size = 0, and LinkName accordingly, as sketched below.
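A sketch of that caller-side bookkeeping, assuming a Unix system where os.FileInfo.Sys() yields *syscall.Stat_t; headerFor and the seen map are illustrative helpers, not part of archive/tar:

```go
package main

import (
	"archive/tar"
	"os"
	"syscall"
)

// headerFor marks a file as a hardlink only if the caller has already
// written another path with the same inode. seen maps inode -> first path.
func headerFor(path string, fi os.FileInfo, seen map[uint64]string) (*tar.Header, error) {
	hdr, err := tar.FileInfoHeader(fi, "")
	if err != nil {
		return nil, err
	}
	hdr.Name = path
	if st, ok := fi.Sys().(*syscall.Stat_t); ok && st.Nlink > 1 {
		if first, ok := seen[st.Ino]; ok {
			hdr.Typeflag = tar.TypeLink // later occurrence: record as a hardlink
			hdr.Linkname = first
			hdr.Size = 0
		} else {
			seen[st.Ino] = path // first occurrence: archive normally
		}
	}
	return hdr, nil
}

func main() {
	fi, err := os.Lstat("/etc/hosts")
	if err != nil {
		panic(err)
	}
	if _, err := headerFor("etc/hosts", fi, map[uint64]string{}); err != nil {
		panic(err)
	}
}
```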
      
      Change-Id: I873b8a235bc8f8fbb271db74ee54232da36ca013
      Reviewed-on: https://go-review.googlesource.com/13045
      Reviewed-by: Ian Lance Taylor <iant@golang.org>