- 03 Apr, 2017 1 commit
-
Josh Bleecher Snyder authored
nodfp is a global, so modifying it is unsafe in a concurrent backend. It is also not necessary, since the Used marks are only relevant for nodes in fn.Dcl. For good measure, mark nodfp as always used.

Passes toolstash-check.

Updates #15756

Change-Id: I5320459f5eced2898615a17b395a10c1064bcaf5
Reviewed-on: https://go-review.googlesource.com/39200
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
- 02 Apr, 2017 1 commit
-
Cherry Zhang authored
A dupok symbol may be defined in multiple packages, and its associated package is chosen somewhat arbitrarily (the first containing package that the linker loads). Canonicalize its package to the package with which it will be laid down in text: the first package in dependency order that defines the symbol. Later passes (for example, the trampoline insertion pass) then know that the dupok symbol is laid down along with that package.

Fixes #19764.

Change-Id: I7cbc7474ff3016d5069c8b7be04af934abab8bc3
Reviewed-on: https://go-review.googlesource.com/39150
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: David Chase <drchase@google.com>
-
- 01 Apr, 2017 5 commits
-
Dave Cheney authored
Each architecture's Rconv function is only used inside its respective package, so it does not need to be exported.

Change-Id: Ifbd629964d7a9edd66501d7cdf4750621d66d646
Reviewed-on: https://go-review.googlesource.com/39110
Run-TryBot: Dave Cheney <dave@cheney.net>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Jamie Stackhouse authored
This change makes it possible to retrieve the size of a file part without having to Seek to determine the file size.

Resolves #19501

Change-Id: I7b9994c4cf41c9b06a046eb7046f8952ae1f15e9
Reviewed-on: https://go-review.googlesource.com/39223
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
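For illustration, a minimal sketch of the new capability, assuming Go 1.9+ where mime/multipart's FileHeader carries a Size field; the handler and form-field name below are illustrative, not part of the change:

package main

import (
	"fmt"
	"net/http"
)

// uploadHandler reports the uploaded file's size straight from the parsed
// multipart header, rather than seeking to the end of the file and back.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	file, header, err := r.FormFile("upload") // header is a *multipart.FileHeader
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	defer file.Close()
	fmt.Fprintf(w, "received %q (%d bytes)\n", header.Filename, header.Size)
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil)
}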
-
Emmanuel Odeke authored
Replace yyerror usages with yyerrorl in function typecheckswitch.

Updates #19683.

Change-Id: I7188cdecddd2ce4e06b8cee45b57f3765a979405
Reviewed-on: https://go-review.googlesource.com/38597
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Robert Griesemer authored
For #13560.

Change-Id: I884e63f89d0756ca87b8c2092b4fd8360f791a2f
Reviewed-on: https://go-review.googlesource.com/39171
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Josh Bleecher Snyder authored
For backend concurrency safety. Follow-up to CL 38721.

This does introduce a Nodes where there wasn't one before, but these are so rare that the performance impact is negligible.

Does not pass toolstash-check, but the only change is line numbers, and the new line numbers appear preferable.

Updates #15756

name        old alloc/op    new alloc/op    delta
Template    39.9MB ± 0%     39.9MB ± 0%     ~        (p=0.841 n=5+5)
Unicode     29.8MB ± 0%     29.8MB ± 0%     ~        (p=0.690 n=5+5)
GoTypes     113MB ± 0%      113MB ± 0%      +0.09%   (p=0.008 n=5+5)
SSA         854MB ± 0%      855MB ± 0%      ~        (p=0.222 n=5+5)
Flate       25.3MB ± 0%     25.3MB ± 0%     ~        (p=0.690 n=5+5)
GoParser    31.8MB ± 0%     31.9MB ± 0%     ~        (p=0.421 n=5+5)
Reflect     78.2MB ± 0%     78.3MB ± 0%     ~        (p=0.548 n=5+5)
Tar         26.7MB ± 0%     26.7MB ± 0%     ~        (p=0.690 n=5+5)
XML         42.3MB ± 0%     42.3MB ± 0%     ~        (p=0.222 n=5+5)

name        old allocs/op   new allocs/op   delta
Template    391k ± 1%       391k ± 0%       ~        (p=0.841 n=5+5)
Unicode     320k ± 0%       320k ± 0%       ~        (p=0.841 n=5+5)
GoTypes     1.14M ± 0%      1.14M ± 0%      +0.26%   (p=0.008 n=5+5)
SSA         7.60M ± 0%      7.60M ± 0%      ~        (p=0.548 n=5+5)
Flate       234k ± 0%       234k ± 1%       ~        (p=1.000 n=5+5)
GoParser    316k ± 1%       317k ± 0%       ~        (p=0.841 n=5+5)
Reflect     979k ± 0%       980k ± 0%       ~        (p=0.690 n=5+5)
Tar         251k ± 1%       251k ± 0%       ~        (p=0.595 n=5+5)
XML         394k ± 0%       393k ± 0%       ~        (p=0.222 n=5+5)

Change-Id: I237ae5502db4560f78ce021dc62f6d289797afd6
Reviewed-on: https://go-review.googlesource.com/39197
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
-
- 31 Mar, 2017 33 commits
-
Josh Bleecher Snyder authored
Follow-up to review comments on CL 39193.

Change-Id: I7649af9d70ad73e039061a7a66fea416a7476192
Reviewed-on: https://go-review.googlesource.com/39199
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Josh Bleecher Snyder authored
A few gc.Node ops may be shared across functions. The compiler is (mostly) already careful to avoid mutating them. However, from a concurrency perspective, replacing (say) an empty list with an empty list still counts as a mutation. One place this occurs is orderinit. Avoid it.

This requires fixing one spot where shared nodes were mutated. It doesn't result in any functional or performance changes.

Passes toolstash-check.

Updates #15756

Change-Id: I63c93b31baeeac62d7574804acb6b7f2bc9d14a9
Reviewed-on: https://go-review.googlesource.com/39196
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Lynn Boger authored
This change improves the performance of LoweredMove on ppc64le and ppc64.

benchmark                     old ns/op   new ns/op   delta
BenchmarkCopyFat8-16          0.93        0.69        -25.81%
BenchmarkCopyFat12-16         2.61        1.85        -29.12%
BenchmarkCopyFat16-16         9.68        1.89        -80.48%
BenchmarkCopyFat24-16         4.48        1.85        -58.71%
BenchmarkCopyFat32-16         6.12        1.82        -70.26%
BenchmarkCopyFat64-16         21.2        2.70        -87.26%
BenchmarkCopyFat128-16        29.6        3.97        -86.59%
BenchmarkCopyFat256-16        52.6        13.4        -74.52%
BenchmarkCopyFat512-16        97.1        18.7        -80.74%
BenchmarkCopyFat1024-16       186         35.3        -81.02%
BenchmarkAssertE2TLarge-16    14.2        5.06        -64.37%

Fixes #19785

Change-Id: I7d5e0052712b75811c02c7d86c5112e5649ad782
Reviewed-on: https://go-review.googlesource.com/38950
Reviewed-by: Keith Randall <khr@golang.org>
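For readers who want to take measurements of this kind themselves, a benchmark in the spirit of the cited CopyFat numbers might look as follows; the struct size and names are illustrative, not the runtime's actual test code:

package copyfat_test

import "testing"

// fat32 is 32 bytes; assigning a value of this type is typically compiled
// as a single block move (the LoweredMove path sped up on ppc64/ppc64le).
type fat32 struct{ a, b, c, d int64 }

var sink fat32

func BenchmarkCopyFat32(b *testing.B) {
	var src fat32
	for i := 0; i < b.N; i++ {
		sink = src // fixed-size struct copy
	}
}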
-
Russ Cox authored
Round uses r+r < d to decide whether the remainder is above or below half of d (to decide whether to round up or down). This is wrong when r+r wraps negative, because it looks < d but is really > d.

No one will ever care about rounding to a multiple of d > 2⁶² (about 146 years), but might as well get it right.

Fixes #19807.

Change-Id: I1b55a742dc36e02a7465bc778bf5dd74fe71f7c0
Reviewed-on: https://go-review.googlesource.com/39151
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
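A minimal illustration of the comparison in question, assuming Go 1.9+ where time.Duration has a Round method; the raw int64 arithmetic below only demonstrates the wraparound and is not the library's source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Ordinary rounding: remainder 1s is at least half of 2s, so round up.
	fmt.Println((7 * time.Second).Round(2 * time.Second)) // 8s

	// The edge case being fixed: for a multiple larger than 2^62 ns
	// (about 146 years), the remainder r can exceed 2^62, and the old
	// test "r+r < d" overflows int64 and goes negative.
	r := int64(1) << 62
	fmt.Println(r + r) // -9223372036854775808: wrapped, so it compares as "< d"
}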
-
Josh Bleecher Snyder authored
The node in typenamesym requires neither a position nor a curfn.

Passes toolstash-check.

Updates #15756

Change-Id: I6d39a8961e5578fe5924aaceb29045b6de2699df
Reviewed-on: https://go-review.googlesource.com/39194
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Josh Bleecher Snyder authored
For concurrency safety.

Passes toolstash-check.

Updates #15756.

Change-Id: I1caca231a962781ff8f4f589b2e0454d2820ffb6
Reviewed-on: https://go-review.googlesource.com/39192
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Josh Bleecher Snyder authored
newnamel is newname but with no dependency on lineno or Curfn. This makes it suitable for use in a concurrent back end. Use it now to make tempAt global-free.

The decision to push the assignment to n.Name.Curfn to the caller of newnamel is based on mdempsky's comments in #19683 that he'd like to do that for callers of newname as well.

Passes toolstash-check. No compiler performance impact.

Updates #19683
Updates #15756

Change-Id: Idc461a1716916d268c9ff323129830d9a6e4a4d9
Reviewed-on: https://go-review.googlesource.com/39191
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Josh Bleecher Snyder authored
This causes a minor reduction in allocations, because the old funcdatasym names were being interned unnecessarily.

Updates #15756

name        old alloc/op    new alloc/op    delta
Template    39.9MB ± 0%     39.9MB ± 0%     ~        (p=0.280 n=10+10)
Unicode     29.9MB ± 0%     29.8MB ± 0%     -0.26%   (p=0.000 n=10+10)
GoTypes     113MB ± 0%      113MB ± 0%      -0.12%   (p=0.000 n=10+10)
SSA         855MB ± 0%      855MB ± 0%      -0.03%   (p=0.001 n=10+10)
Flate       25.4MB ± 0%     25.3MB ± 0%     -0.30%   (p=0.000 n=10+10)
GoParser    31.9MB ± 0%     31.8MB ± 0%     ~        (p=0.065 n=10+9)
Reflect     78.4MB ± 0%     78.2MB ± 0%     -0.15%   (p=0.000 n=9+10)
Tar         26.7MB ± 0%     26.7MB ± 0%     -0.17%   (p=0.000 n=9+10)
XML         42.3MB ± 0%     42.4MB ± 0%     +0.07%   (p=0.011 n=10+10)

name        old allocs/op   new allocs/op   delta
Template    390k ± 0%       390k ± 0%       ~        (p=0.905 n=9+10)
Unicode     319k ± 1%       319k ± 1%       ~        (p=0.724 n=10+10)
GoTypes     1.14M ± 0%      1.14M ± 0%      ~        (p=0.393 n=10+10)
SSA         7.60M ± 0%      7.60M ± 0%      ~        (p=0.604 n=9+10)
Flate       235k ± 1%       234k ± 1%       ~        (p=0.105 n=10+10)
GoParser    317k ± 0%       316k ± 1%       ~        (p=0.280 n=10+10)
Reflect     979k ± 0%       979k ± 0%       ~        (p=0.315 n=10+10)
Tar         251k ± 0%       251k ± 1%       ~        (p=0.762 n=8+10)
XML         393k ± 0%       394k ± 1%       ~        (p=0.095 n=9+10)

name        old text-bytes  new text-bytes  delta
HelloSize   684k ± 0%       684k ± 0%       ~        (all equal)

name        old data-bytes  new data-bytes  delta
HelloSize   138k ± 0%       138k ± 0%       ~        (all equal)

name        old exe-bytes   new exe-bytes   delta
HelloSize   1.03M ± 0%      1.03M ± 0%      ~        (all equal)

Change-Id: Idba33da4e89c325984ac46e4852cf12e4a7fd1a9
Reviewed-on: https://go-review.googlesource.com/39032
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Robert Griesemer <gri@golang.org>
-
Josh Bleecher Snyder authored
Convert yyerrors into Fatals. Remove the goto. Move variable declaration closer to use. Unify printing strings a bit. Convert an int param into a bool.

Passes toolstash-check. No compiler performance impact.

Change-Id: I9017681417b785cf8693d18b124ac4f1ff37f2b5
Reviewed-on: https://go-review.googlesource.com/39170
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Josh Bleecher Snyder authored
The names never occur more than once, so interning the results is counterproductive. The impact is not very big, but neither is the fix.

name     old time/op  new time/op  delta
Unicode  90.2ms ± 3%  88.3ms ± 5%  -2.10%  (p=0.000 n=94+98)

Change-Id: I1e3a24433db4ae0c9a6e98166568941824ff0779
Reviewed-on: https://go-review.googlesource.com/39193
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
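For context, the generic shape of interning (this sketch is illustrative, not the compiler's code): a lookup-and-store through a map pays off only when the same string recurs; for names generated exactly once, the map work is pure overhead.

package intern

var interned = map[string]string{}

// intern returns a canonical copy of s so that repeated strings share
// storage. If every s is unique, the map insert (and growth) is wasted work.
func intern(s string) string {
	if c, ok := interned[s]; ok {
		return c
	}
	interned[s] = s
	return s
}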
-
Robert Griesemer authored
Also: Fix (testdata/gen/) copyGen.go, zeroGen.go, and arithConstGen.go to actually match (testdata/) copy.go, zero.go, and arithConst.go, all of which were manually edited in https://go-review.googlesource.com/20823 and https://go-review.googlesource.com/22748 despite the 'do not edit' (or perhaps because it was missing in the case of arithConst.go).

For #13560.

Change-Id: I366e1b521e51885e0d318ae848760e5e14ccd488
Reviewed-on: https://go-review.googlesource.com/39172
Reviewed-by: Rob Pike <r@golang.org>
-
Josh Bleecher Snyder authored
Change-Id: I986f0c106e059455874692f5bfe2b5af25cf470e
Reviewed-on: https://go-review.googlesource.com/39090
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Josh Bleecher Snyder authored
Prior to this CL, the SSA backend reported violations of the //go:nowritebarrier annotation immediately. This necessitated emitting errors during SSA compilation, which is not compatible with a concurrent backend.

Instead, check for such violations later. We already save the data required to do a late check for violations of the //go:nowritebarrierrec annotation. Use the same data, and check //go:nowritebarrier at the same time.

One downside to doing this is that now only a single violation will be reported per function. Given that this is for the runtime only, and violations are rare, this seems an acceptable cost.

While we are here, remove several 'nerrors != 0' checks that are rendered pointless.

Updates #15756
Fixes #19250 (as much as it ever can be)

Change-Id: Ia44c4ad5b6fd6f804d9f88d9571cec8d23665cb3
Reviewed-on: https://go-review.googlesource.com/38973
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Josh Bleecher Snyder authored
We don't support stack frames over 2GB. Rather than detect this during backend compilation, check for it at the end of compilation. This is arguably a more accurate check anyway, since it takes into account the full frame, including local stack, arguments, and arch-specific rounding, although it's unlikely anyone would ever notice.

Also, rather than reporting the error right away, take note of it and report it later, at the top level. This is not relevant now, but it will help with making the backend concurrent, as the append to the list of oversized functions can be cheaply protected by a plain mutex.

Updates #15756
Updates #19250

Change-Id: Id3fa21906616d62e9dc66e27a17fd5f83304e96e
Reviewed-on: https://go-review.googlesource.com/38972
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Ben Shi authored
By checking GOARM in ssa/gen/ARM.rules, each intermediate operator can be lowered to a different instruction sequence depending on the target. It is up to the user to choose between compatibility and efficiency. Bswap32(x) is optimized to REV(x) when GOARM >= 6, and CTZ(x) is optimized to CLZ(RBIT x) when GOARM == 7.

Change-Id: Ie9ee645fa39333fa79ad84ed4d1cefac30422814
Reviewed-on: https://go-review.googlesource.com/35610
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
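As a reference for what those two ops compute, a pure-Go sketch of the semantics (not the compiler's rule syntax or library code): REV reverses byte order, and CLZ of a bit-reversed value yields the count of trailing zeros.

package main

import "fmt"

// bswap32 reverses the byte order of x; ARMv6+ does this in one REV instruction.
func bswap32(x uint32) uint32 {
	return x>>24 | x>>8&0xff00 | x<<8&0xff0000 | x<<24
}

// ctz32 counts trailing zero bits; ARMv7 computes this as CLZ(RBIT x).
func ctz32(x uint32) int {
	if x == 0 {
		return 32
	}
	n := 0
	for x&1 == 0 {
		x >>= 1
		n++
	}
	return n
}

func main() {
	fmt.Printf("%#x\n", bswap32(0x11223344)) // 0x44332211
	fmt.Println(ctz32(0x50))                 // 4
}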
-
Carlos Eduardo Seo authored
Starting in go1.9, the minimum processor requirement for ppc64 is POWER8. So it may now use the same divWW implementation as ppc64le.

Updates #19074

Change-Id: If1a85f175cda89eee06a1024ccd468da6124c844
Reviewed-on: https://go-review.googlesource.com/39010
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
-
Daniel Theophanes authored
User-defined numeric types such as "type Int int64" could already be scanned into without a custom scanner, via the reflect-based code path used to convert between the various numeric types. Add a corresponding path for string types, for symmetry and least surprise.

Fixes #18101

Change-Id: I00553bcf021ffe6d95047eca0067ee94b54ff501
Reviewed-on: https://go-review.googlesource.com/39031
Run-TryBot: Daniel Theophanes <kardianos@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
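A sketch of what this enables; the driver import, DSN, and schema are placeholders, and the point is only that a type whose underlying kind is string can now be a Scan destination without implementing sql.Scanner:

package main

import (
	"database/sql"
	"log"

	_ "example.com/somedriver" // hypothetical driver registration
)

// Email is a user-defined string type with no Scan method.
type Email string

func main() {
	db, err := sql.Open("somedriver", "placeholder-dsn")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var addr Email
	// With this change, Scan converts the column value into Email via
	// reflection, just as it already did for user-defined numeric types.
	if err := db.QueryRow("SELECT email FROM users WHERE id = ?", 1).Scan(&addr); err != nil {
		log.Fatal(err)
	}
	log.Println(addr)
}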
-
Dave Cheney authored
Rather than using arm64.Rconv directly in the archArm64 constructor use the generic obj.Rconv helper. This removes the only use of arm64.Rconv outside the arm64 package itself.

Change-Id: I99e9e7156b52cd26dc134f610f764ec794264e2c
Reviewed-on: https://go-review.googlesource.com/38756
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
-
Brad Fitzpatrick authored
TestUnshareMountNameSpace fails on arm64 due to permission problems. Skip that test for now when permission problems are encountered, so we don't regress elsewhere in the meantime.

Updates #19698

Change-Id: I9058928afa474b813652c9489f343b8957160a6c
Reviewed-on: https://go-review.googlesource.com/39052
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
-
Austin Clements authored
Currently runtime.GC() triggers a STW GC. For common uses in tests and benchmarks, it doesn't matter whether it's STW or concurrent, but for uses in servers for things like collecting heap profiles and controlling memory footprint, this pause can be a problem for latency.

This changes runtime.GC() to trigger a concurrent GC. In order to remain as close as possible to its current meaning, we define it to always perform a full mark/sweep GC cycle before returning (even if that means it has to finish up a cycle we're in the middle of first) and to publish the heap profile as of the triggered mark termination. While it must perform a full cycle, simultaneous runtime.GC() calls can be consolidated into a single full cycle.

Fixes #18216.

Change-Id: I9088cc5deef4ab6bcf0245ed1982a852a01c44b5
Reviewed-on: https://go-review.googlesource.com/37520
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
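A small sketch of the server use case mentioned above (the handler path is illustrative): force a full GC cycle so the heap profile reflects only live data, then write the profile with the standard pprof API.

package main

import (
	"log"
	"net/http"
	"runtime"
	"runtime/pprof"
)

func heapProfileHandler(w http.ResponseWriter, r *http.Request) {
	// Still a full mark/sweep cycle, but no longer fully stop-the-world,
	// so the rest of the server keeps serving while it runs.
	runtime.GC()
	if err := pprof.Lookup("heap").WriteTo(w, 0); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/debug/heap", heapProfileHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}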
-
Austin Clements authored
sweepone returns ^uintptr(0) when there are no more spans to *start* sweeping, but there may be spans being swept concurrently at the time and there's currently no efficient way to tell when the sweeper is done sweeping all the spans. We'll need this for concurrent runtime.GC(), so add a count of the number of active sweepone calls to make it possible to block until sweeping is truly done.

This is also useful for more accurately printing the gcpacertrace, since that should be printed after all of the sweeping stats are in (currently we can print it slightly too early).

For #18216.

Change-Id: I06e6240c9e7b40aca6fd7b788bb6962107c10a0f
Reviewed-on: https://go-review.googlesource.com/37716
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Forced GCs don't provide good information about how to adjust the GC trigger. Currently we avoid adjusting the trigger on forced GC because forced GC is also STW and we don't adjust the trigger on STW GC. However, this will become a problem when forced GC is concurrent. Fix this by skipping trigger adjustment if the GC was user-forced.

For #18216.

Change-Id: I03dfdad12ecd3cfeca4573140a0768abb29aac5e
Reviewed-on: https://go-review.googlesource.com/38951
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently gcMode != gcBackgroundMode implies this was a user-forced GC cycle. This is no longer going to be true when we make runtime.GC() trigger a concurrent GC, so replace this with an explicit work.userForced bit.

For #18216.

Change-Id: If7d71bbca78b5f0b35641b070f9d457f5c9a52bd
Reviewed-on: https://go-review.googlesource.com/37519
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently freeOSMemory calls gcStart directly, but we really just want it to behave like runtime.GC() and then perform a scavenge, so make it call runtime.GC() rather than gcStart.

For #18216.

Change-Id: I548ec007afc788e87d383532a443a10d92105937
Reviewed-on: https://go-review.googlesource.com/37518
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
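For reference, the public entry point whose behavior this affects is runtime/debug.FreeOSMemory; a minimal usage sketch (the allocation size and printed field are chosen for illustration):

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	buf := make([]byte, 64<<20) // ~64 MB of soon-to-be garbage
	_ = buf
	buf = nil

	// Runs a GC (now via runtime.GC()) and then returns as much memory
	// to the operating system as possible.
	debug.FreeOSMemory()

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapReleased = %d bytes\n", m.HeapReleased)
}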
-
Austin Clements authored
Now that the gcMode is no longer involved in the GC trigger condition, we can simplify the triggering of forced GCs. By making the trigger condition for forced GCs true even if gcphase is not _GCoff, we don't need any special case path in gcStart to ensure that forced GCs don't get consolidated.

Change-Id: I6067a13d76e40ff2eef8fade6fc14adb0cb58ee5
Reviewed-on: https://go-review.googlesource.com/37517
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently the GC triggering condition is an awkward combination of the gcMode (whether or not it's gcBackgroundMode) and a boolean "forceTrigger" flag. Replace this with a new gcTrigger type that represents the range of transition predicates we need. This has several advantages:

1. We can remove the awkward logic that affects the trigger behavior based on the gcMode. Now gcMode purely controls whether to run a STW GC or not and the gcTrigger controls whether this is a forced GC that cannot be consolidated with other GC cycles.

2. We can lift the time-based triggering logic in sysmon to just another type of GC trigger and move the logic to the trigger test.

3. This sets us up to have a cycle count-based trigger, which we'll use to make runtime.GC trigger concurrent GC with the desired consolidation properties.

For #18216.

Change-Id: If9cd49349579a548800f5022ae47b8128004bbfc
Reviewed-on: https://go-review.googlesource.com/37516
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently sysmon triggers periodic GC if GC is not currently running and it's been long enough since the last GC. This misses some important conditions; for example, whether GC is enabled at all by GOGC. As a result, if GOGC is off, once we pass the timeout for periodic GC, sysmon will attempt to trigger a GC every 10ms. This GC will be a no-op because gcStart will check all of the appropriate conditions and do nothing, but it still goes through the motions of waking the forcegc goroutine and printing a gctrace line.

Fix this by making sysmon call gcShouldStart to check *all* of the appropriate transition conditions before attempting to trigger a periodic GC.

Fixes #19247.

Change-Id: Icee5521ce175e8419f934723849853d53773af31
Reviewed-on: https://go-review.googlesource.com/37515
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
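The configuration that exposed the problem is simply running with GC disabled; besides GOGC=off in the environment, that state can be reached programmatically, as in this small sketch:

package main

import (
	"runtime/debug"
	"time"
)

func main() {
	// Disable automatic GC; equivalent to running with GOGC=off.
	old := debug.SetGCPercent(-1)
	defer debug.SetGCPercent(old)

	// Run a long-lived workload. Before this fix, once the periodic-GC
	// timeout passed, sysmon would repeatedly wake the forcegc goroutine
	// (and emit gctrace lines) even though each attempt was a no-op.
	time.Sleep(10 * time.Minute)
}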
-
Austin Clements authored
Currently the heap profile is flushed by *either* gcSweep in STW mode or by gcMarkTermination in concurrent mode. Simplify this by making gcMarkTermination always flush the heap profile and by making gcSweep do one extra flush (instead of two) in STW mode.

Change-Id: I62147afb2a128e1f3d92ef4bb8144c8a345f53c4
Reviewed-on: https://go-review.googlesource.com/37715
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently we snapshot the heap profile just *after* mark termination starts the world because it's a relatively expensive operation. However, this means any alloc or free events that happen between starting the world and snapshotting the heap profile can be accounted to the wrong cycle. In the worst case, a free can be accounted to the cycle before the alloc; if the heap is small, this can result temporarily in a negative "in use" count in the profile.

Fix this without making STW more expensive by using a global heap profile cycle counter. This lets us split the operation into two parts: 1) a super-cheap snapshot operation that simply increments the global cycle counter during STW, and 2) a more expensive cleanup operation we can do after starting the world that frees up a slot in all buckets for use by the next heap profile cycle.

Fixes #19311.

Change-Id: I6bdafabf111c48b3d26fe2d91267f7bef0bd4270
Reviewed-on: https://go-review.googlesource.com/37714
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently memRecord has the same set of four fields repeated three times. Pull these into a type and use this type three times. This cleans up and simplifies the code a bit and will make it easier to switch to a globally tracked heap profile cycle for #19311.

Change-Id: I414d15673feaa406a8366b48784437c642997cf2
Reviewed-on: https://go-review.googlesource.com/37713
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
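The general shape of that refactor, sketched with illustrative names (not the runtime's actual fields): the four counters become one small struct, and the record holds three values of it.

package profsketch

// Before: the same four counters repeated three times in one struct.
type memRecordBefore struct {
	activeAllocs, activeFrees, activeAllocBytes, activeFreeBytes uintptr
	prevAllocs, prevFrees, prevAllocBytes, prevFreeBytes         uintptr
	recentAllocs, recentFrees, recentAllocBytes, recentFreeBytes uintptr
}

// After: the four counters are pulled into their own type...
type memRecordCycle struct {
	allocs, frees         uintptr
	allocBytes, freeBytes uintptr
}

// ...and the record simply uses that type three times.
type memRecordAfter struct {
	active, prev, recent memRecordCycle
}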
-
Robert Griesemer authored
The symExport flag tells whether a symbol is in the export list already or not (and it's also used to avoid being added to that list). Exporting is based on that export list - no need to check again.

Change-Id: I6056f97aa5c24a19376957da29199135c8da35f9
Reviewed-on: https://go-review.googlesource.com/39033
Reviewed-by: Dave Cheney <dave@cheney.net>
-
Austin Clements authored
Every time I modify heap profiling, I find myself redrawing this diagram, so add it to the comments. This shows how allocations and frees are accounted, how we arrive at consistent profile snapshots, and when those snapshots are published to the user.

Change-Id: I106aba1200af3c773b46e24e5f50205e808e2c69
Reviewed-on: https://go-review.googlesource.com/37514
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Now that we have a nice predicate system, improve the tests performed by TestMemStats. We add some more non-zero checks (now that we force a GC, things like NumGC must be non-zero), checks for trivial boolean fields, and a few more range checks.

Change-Id: I6da46d33fa0ce5738407ee57d587825479413171
Reviewed-on: https://go-review.googlesource.com/37513
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
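To illustrate the kinds of invariants involved, a small sketch using only the public API (the specific checks below are examples, not the test's code): after an explicit GC, counters such as NumGC must be non-zero, and simple range relations between fields must hold.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	runtime.GC() // force at least one collection

	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	fmt.Println("NumGC > 0:               ", m.NumGC > 0)
	fmt.Println("PauseTotalNs > 0:        ", m.PauseTotalNs > 0)
	fmt.Println("HeapAlloc <= HeapSys:    ", m.HeapAlloc <= m.HeapSys)
	fmt.Println("EnableGC (boolean field):", m.EnableGC)
}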
-