- 09 Nov, 2019 2 commits
-
-
David Chase authored
This restores intrinsic status to functions copied from math/bits into runtime/internal/sys, as an aid to runtime performance. Updates #35112. Change-Id: I41a7d87cf00f1e64d82aa95c5b1000bc128de820 Reviewed-on: https://go-review.googlesource.com/c/go/+/206200 Run-TryBot: David Chase <drchase@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Ian Lance Taylor authored
Now that the runtime can send preemption signals, it is possible that a channel that asks for all signals can see both SIGURG and SIGHUP before reading either, in which case one of the signals will be dropped. We have to use a larger buffer so that the test sees the signal it expects. Fixes #35466 Change-Id: I36271eae0661c421780c72292a5bcbd443ada987 Reviewed-on: https://go-review.googlesource.com/c/go/+/206257 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Bryan C. Mills <bcmills@google.com>
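A minimal Unix-only sketch of the fix's idea (the test's real code differs): a capacity-2 buffer leaves room for an unexpected SIGURG alongside the signal under test.

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        // With capacity 1, a runtime-preemption SIGURG arriving first would
        // occupy the only slot, and the SIGHUP below would be dropped.
        c := make(chan os.Signal, 2)
        signal.Notify(c) // subscribe to all signals, as the test does
        syscall.Kill(syscall.Getpid(), syscall.SIGHUP)
        fmt.Println(<-c) // may print "urgent I/O condition" or "hangup"
    }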
-
- 08 Nov, 2019 38 commits
-
-
David Chase authored
CL 201765 activated calls from the runtime to functions in math/bits. When coverage and race detection were simultaneously enabled, this caused a crash when the covered+race-checked code in math/bits was called from the runtime before there was even a P. PS Win for gdlv in helping sort this out. TODO - next CL intrinsifies the new functions in runtime/internal/sys TODO/Would-be-nice - Ctz64 and TrailingZeros64 are the same function; 386.s is intrinsified; clean all that up. Fixes #35461. Updates #35112. Change-Id: I750a54dba493130ad3e68a06530ede7687d41e1d Reviewed-on: https://go-review.googlesource.com/c/go/+/206199 Reviewed-by: Michael Knyszek <mknyszek@google.com> Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
witchard authored
Enables insecure fetching of dependencies whose path matches those specified in the environment variable GOINSECURE. Fixes #32966 Change-Id: I378920fbd5a4436df0b5af3fb5533e663e2cc758 GitHub-Last-Rev: 2c87b303acbe86e273bd0b8514e338d34794b0d6 GitHub-Pull-Request: golang/go#35357 Reviewed-on: https://go-review.googlesource.com/c/go/+/205238 Run-TryBot: Bryan C. Mills <bcmills@google.com> Reviewed-by: Bryan C. Mills <bcmills@google.com> Reviewed-by: Jay Conrod <jayconrod@google.com>
-
Bryan C. Mills authored
Updates #35471 Change-Id: Ie06c442e405a267eb909621e1205444b6a00fda1 Reviewed-on: https://go-review.googlesource.com/c/go/+/206197 Run-TryBot: Bryan C. Mills <bcmills@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: David du Colombier <0intro@gmail.com>
-
Matthew Dempsky authored
Previously langSupported applied -lang as though it's a global restriction, but it's actually a per-package restriction. This CL fixes langSupported to take a *types.Pkg parameter to reflect this and updates its callers accordingly. This is relevant for signed shift counts (added in Go 1.13), because they can be inlined into a Go 1.12 package; and for overlapping interfaces (added in Go 1.14), because they can be exported as part of the package's API. Today we require all Go packages to be compiled with the same toolchain, and all uses of langSupported are for controlling backwards-compatible features. So we can simply assume that since the imported packages type-checked successfully, they must have been compiled with an appropriate -lang setting. In the future if we ever want to use langSupported to control backwards-incompatible language changes, we might need to record the -lang flag used for compiling a package in its export data. Fixes #35437. Fixes #35442. Change-Id: Ifdf6a62ee80cd5fb4366cbf12933152506d1b36e Reviewed-on: https://go-review.googlesource.com/c/go/+/205977 Reviewed-by: Bryan C. Mills <bcmills@google.com> Reviewed-by: Robert Griesemer <gri@golang.org>
-
Keith Randall authored
Unlike function calls, when processing instructions that directly fault we must not subtract 1 from the pc before looking up the file/line information. Since the file/line lookup unconditionally subtracts 1, add 1 to the faulting instruction PCs to compensate. Fixes #34123 Change-Id: Ie7361e3d2f84a0d4f48d97e5a9e74f6291ba7a8b Reviewed-on: https://go-review.googlesource.com/c/go/+/196962 Run-TryBot: Keith Randall <khr@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
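A toy illustration of the compensation, with made-up PC values (the real adjustment lives in the runtime's traceback code):

    package main

    import "fmt"

    // fileLinePC models a lookup that unconditionally subtracts 1 so that
    // call return addresses map back into the call instruction itself.
    func fileLinePC(pc uintptr) uintptr { return pc - 1 }

    func main() {
        const retPC = 0x401008   // return address: points just past the call
        const faultPC = 0x401010 // fault: points at the faulting instruction
        fmt.Printf("call site:  %#x\n", fileLinePC(retPC))
        // For a direct fault, pass pc+1 so the -1 inside the lookup cancels
        // out and the fault is attributed to the correct instruction.
        fmt.Printf("fault site: %#x\n", fileLinePC(faultPC+1))
    }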
-
Ariel Mashraki authored
Change the Import function to return a PackageNotInModuleError if no package was found in a local module. This replaces the vague message "missing dot in first path element" you get today with a much friendlier one: "module was found, but does not contain package". Fixes #35273 Change-Id: I6d726c17e6412258274b10f58f76621617d26e0a Reviewed-on: https://go-review.googlesource.com/c/go/+/203118 Reviewed-by: Bryan C. Mills <bcmills@google.com> Run-TryBot: Bryan C. Mills <bcmills@google.com>
-
Gerrit Code Review authored
-
Brad Fitzpatrick authored
Fixes #35423 Change-Id: Idb254d6a2c4b147d20e290411e4380df5cdcb306 Reviewed-on: https://go-review.googlesource.com/c/go/+/206178 Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Ian Lance Taylor authored
This adds pipe/pipe2 on Solaris as they exist on other Unix systems. They were not added previously because Solaris does not need them for netpollBreak. They are added now in preparation for using pipes in TestSignalM. Updates #35276 Change-Id: I53dfdf077430153155f0a79715af98b0972a841c Reviewed-on: https://go-review.googlesource.com/c/go/+/206077 Run-TryBot: Ian Lance Taylor <iant@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Baokun Lee authored
Fixes #35338 Change-Id: Ic2a3a446ef56b1e5723d6192c8aeec32ae0bbeac Reviewed-on: https://go-review.googlesource.com/c/go/+/205779 Run-TryBot: Baokun Lee <nototon@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Bryan C. Mills <bcmills@google.com>
-
Emmanuel T Odeke authored
Support "gzip" aka "x-gzip" as a transfer-encoding for requests and responses as per RFC 7230 Section 3.3.1. "gzip" and "x-gzip" are equivalents as requested by RFC 7230 Section 4.2.3. Transfer-Encoding is an on-fly property of the body that can be applied by proxies, other servers and basically any intermediary to transport the content e.g. across data centers or backends/machine to machine that need compression. For this change, "gzip" is both explicitly and implicitly combined with transfer-encoding "chunked" in an ordering such as: Transfer-Encoding: gzip, chunked and NOT Transfer-Encoding: chunked, gzip Obviously the latter form is counter-intuitive for streaming. Thus "chunked" is the last value to appear in that transfer-encoding header, if explicitly included. When parsing the response, the chunked body is concatenated as "chunked" does, before finally being decompressed as "gzip". A chunked and compressed body would typically look like this: <LENGTH_1>\r\n<CHUNK_1_GZIPPED_BODY>\r\n<LENGTH_2>\r\n<CHUNK_2_GZIPPED_BODY>\0\r\n which when being processed we would contentate <FULL_BODY> := <CHUNK_1_GZIPPED_BODY> + <CHUNK_2_GZIPPED_BODY> + ... and then finally gunzip it <FINAL_BODY> := gunzip(<FULL_BODY>) If a "chunked" transfer-encoding is NOT applied but "gzip" is applied, we implicitly assume that they requested using "chunked" at the end. This is as per the recommendation of RFC 3.3.1. which explicitly says that for: * Request: " If any transfer coding other than chunked is applied to a request payload body, the sender MUST apply chunked as the final transfer coding to ensure that the message is properly framed." * Response: " If any transfer coding other than chunked is applied to a response payload body, the sender MUST either apply chunked as the final transfer coding or terminate the message by closing the connection." RELNOTE=yes Fixes #29162 Change-Id: Icb8b8b838cf4119705605b29725cabb1fe258491 Reviewed-on: https://go-review.googlesource.com/c/go/+/166517 Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Than McIntosh authored
This reverts CL 160819 (commit 4692343c). Reason for revert: causing lots of failures on master. Change-Id: I96fd39ae80fe350ba8b3aa310443d41daec38093 Reviewed-on: https://go-review.googlesource.com/c/go/+/206146 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Visweswara R authored
Fixes #28957 Change-Id: Ie8ba841bd4ee71766bcfbbfbdc9173b9be867ed1 Reviewed-on: https://go-review.googlesource.com/c/go/+/151479 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Ian Lance Taylor authored
Fixes #35376 Change-Id: Ib95ad336425e73cc4d412dafed0ba5e0a8130bd2 Reviewed-on: https://go-review.googlesource.com/c/go/+/205718 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Bryan C. Mills <bcmills@google.com>
-
Martin Garton authored
This adds float type support to the main switch blocks in Read and Write, instead of falling back to reflection. This gives a considerable speedup for the float types:

    ReadFloats-8              129ns ± 9%   70ns ± 8%  -46.02%  (p=0.001 n=7+7)
    WriteFloats-8             131ns ± 6%   86ns ±11%  -34.59%  (p=0.001 n=7+7)
    ReadSlice1000Float32s-8   14.6µs ±14%  4.8µs ±12%  -67.29%  (p=0.001 n=7+7)
    WriteSlice1000Float32s-8  16.4µs ±20%  4.7µs ± 8%  -71.01%  (p=0.001 n=7+7)

Change-Id: I0be99d068b07d10dd6eb1137b45eff6f7c216b87 GitHub-Last-Rev: 4ff326e99ca35977d819f0ba29c10d9efc7e811c GitHub-Pull-Request: golang/go#31803 Reviewed-on: https://go-review.googlesource.com/c/go/+/174959 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
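The fast path is hit by ordinary encoding/binary calls; for example:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    func main() {
        var buf bytes.Buffer
        // float32/float64 values (and slices of them) now take the
        // type-switch fast path in Write/Read instead of reflection.
        if err := binary.Write(&buf, binary.LittleEndian, []float32{1.5, 2.5}); err != nil {
            panic(err)
        }
        out := make([]float32, 2)
        if err := binary.Read(&buf, binary.LittleEndian, out); err != nil {
            panic(err)
        }
        fmt.Println(out) // [1.5 2.5]
    }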
-
sergey authored
Pregrow the result array to avoid small allocations. Change-Id: Ife5f815efa4c163ecdbb3a4c16bfb60a484dfa11 Reviewed-on: https://go-review.googlesource.com/c/go/+/174706 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
kaxapi authored
Fixes #27426 Change-Id: I34d4784658ce7b9e6130bae9717e80d0e9a290a2 GitHub-Last-Rev: 6de610cdcef11832f131b84a0338b68af16b10da GitHub-Pull-Request: golang/go#30059 Reviewed-on: https://go-review.googlesource.com/c/go/+/160819 Reviewed-by: Agniva De Sarker <agniva.quicksilver@gmail.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> Run-TryBot: Agniva De Sarker <agniva.quicksilver@gmail.com>
-
Chris Stockton authored
Share a slice backing between the host address, network ip and mask. Add tests to verify that each slice header has len==cap to prevent introducing new behavior into Go programs. This has a small tradeoff of allocating a larger slice backing when the address is invalid. Earlier error detection of invalid prefix length helps balance this cost and a new benchmark for ParseCIDR helps measure it. This yields a ~22% speedup for all nil err cidr tests:

    name               old time/op    new time/op    delta
    ParseCIDR/IPv4-24     9.17µs ± 6%    7.20µs ± 7%  -21.47%  (p=0.000 n=20+20)
    ParseCIDR/IPv6-24     9.02µs ± 6%    6.95µs ± 9%  -23.02%  (p=0.000 n=20+20)

    name               old alloc/op   new alloc/op   delta
    ParseCIDR/IPv4-24     1.51kB ± 0%    1.55kB ± 0%   +2.65%  (p=0.000 n=20+20)
    ParseCIDR/IPv6-24     1.51kB ± 0%    1.55kB ± 0%   +2.65%  (p=0.000 n=20+20)

    name               old allocs/op  new allocs/op  delta
    ParseCIDR/IPv4-24       68.0 ± 0%      34.0 ± 0%  -50.00%  (p=0.000 n=20+20)
    ParseCIDR/IPv6-24       68.0 ± 0%      34.0 ± 0%  -50.00%  (p=0.000 n=20+20)

Including non-nil err cidr tests gains around 25%:

    name               old time/op    new time/op    delta
    ParseCIDR/IPv4-24     11.8µs ±11%     8.9µs ± 8%  -24.88%  (p=0.000 n=20+20)
    ParseCIDR/IPv6-24     11.7µs ± 7%     8.7µs ± 5%  -25.93%  (p=0.000 n=20+20)

    name               old alloc/op   new alloc/op   delta
    ParseCIDR/IPv4-24     1.98kB ± 0%    2.00kB ± 0%   +1.21%  (p=0.000 n=20+20)
    ParseCIDR/IPv6-24     1.98kB ± 0%    2.00kB ± 0%   +1.21%  (p=0.000 n=20+20)

    name               old allocs/op  new allocs/op  delta
    ParseCIDR/IPv4-24       87.0 ± 0%      48.0 ± 0%  -44.83%  (p=0.000 n=20+20)
    ParseCIDR/IPv6-24       87.0 ± 0%      48.0 ± 0%  -44.83%  (p=0.000 n=20+20)

Change-Id: I17f33c9049f7875b6ebdfde1f80b386a7aef9b94 GitHub-Last-Rev: 0a031f44b458e2c6465d0e59fb4653e08c44a854 GitHub-Pull-Request: golang/go#26948 Reviewed-on: https://go-review.googlesource.com/c/go/+/129118 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
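The len==cap invariant that the new tests lock in can be checked directly:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ip, ipnet, err := net.ParseCIDR("192.0.2.1/24")
        if err != nil {
            panic(err)
        }
        // Each slice header has len == cap, so appending to one result
        // cannot scribble over the shared backing of another.
        fmt.Println(len(ip) == cap(ip), len(ipnet.IP) == cap(ipnet.IP),
            len(ipnet.Mask) == cap(ipnet.Mask))
    }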
-
David Ndungu authored
Use names to better communicate when a test case fails. Change-Id: Id882783cb5e444b705443fbcdf612713f8a3b032 Reviewed-on: https://go-review.googlesource.com/c/go/+/187823 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Ian Lance Taylor authored
Without this CL, one of the TestDebugCall tests would fail 1% to 2% of the time on the android-amd64-emu gomote. With this CL, I ran the tests for 1000 iterations with no failures. Fixes #32985 Change-Id: I541268a2a0c10d0cd7604f0b2dbd15c1d18e5730 Reviewed-on: https://go-review.googlesource.com/c/go/+/205248 Run-TryBot: Ian Lance Taylor <iant@golang.org> Reviewed-by: Bryan C. Mills <bcmills@google.com>
-
Michael Anthony Knyszek authored
This change adds a per-p free page cache which the page allocator may allocate out of without a lock. The change also introduces a completely lockless page allocator fast path. Although the cache contains at most 64 pages (and usually less), the vast majority (85%+) of page allocations are exactly 1 page in size. Updates #35112. Change-Id: I170bf0a9375873e7e3230845eb1df7e5cf741b78 Reviewed-on: https://go-review.googlesource.com/c/go/+/195701 Run-TryBot: Michael Knyszek <mknyszek@google.com> Reviewed-by: Austin Clements <austin@google.com>
-
Michael Anthony Knyszek authored
This change adds a page cache structure which owns a chunk of free pages at a given base address. It also adds code to allocate to this cache from the page allocator. Finally, it adds tests for both. Notably this change does not yet integrate the code into the runtime, just into runtime tests. Updates #35112. Change-Id: Ibe121498d5c3be40390fab58a3816295601670df Reviewed-on: https://go-review.googlesource.com/c/go/+/196643 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
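A rough sketch of the idea, with illustrative names rather than the runtime's actual declarations: a 64-bit bitmap over a run of 64 pages lets the owning P allocate with a couple of bit operations and no lock.

    package main

    import (
        "fmt"
        "math/bits"
    )

    const pageSize = 8192

    // pageCache caches a 64-page run at a fixed base; it is owned by a
    // single P, so alloc needs no lock.
    type pageCache struct {
        base  uintptr // base address of the run
        cache uint64  // bitmap: set bits are free pages
    }

    func (c *pageCache) alloc() (uintptr, bool) {
        if c.cache == 0 {
            return 0, false // empty: refill from the page allocator (locked)
        }
        i := uint(bits.TrailingZeros64(c.cache))
        c.cache &^= 1 << i // mark the page allocated
        return c.base + uintptr(i)*pageSize, true
    }

    func main() {
        c := pageCache{base: 0x100000, cache: 0b1010}
        addr, ok := c.alloc()
        fmt.Printf("%#x %v\n", addr, ok) // 0x102000 true (page index 1)
    }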
-
Udalov Max authored
Make binary.Read return an error when the passed `data` argument is not a pointer to a fixed-size value or a slice of fixed-size values. Fixes #32927 Change-Id: I04f48be55fe9b0cc66c983d152407d0e42cbcd95 Reviewed-on: https://go-review.googlesource.com/c/go/+/184957 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
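For example, plain int has no fixed wire size, so it is rejected up front (sketch; the exact error text may differ):

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    func main() {
        var n int // not fixed-size; use int32 or int64 instead
        err := binary.Read(bytes.NewReader(make([]byte, 8)), binary.LittleEndian, &n)
        fmt.Println(err) // e.g. "binary.Read: invalid type *int"
    }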
-
Bryan C. Mills authored
Updates #35425 Change-Id: I9ca2251246ee2fa9bb7a335d5eff94d3c9f1f004 Reviewed-on: https://go-review.googlesource.com/c/go/+/206143 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Bryan C. Mills authored
Updates #34634 Fixes #35425 Change-Id: I878a8d229b33dcde9e7d4dfd82ddf9815d38a465 Reviewed-on: https://go-review.googlesource.com/c/go/+/206142 Run-TryBot: Bryan C. Mills <bcmills@google.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Michael Anthony Knyszek authored
This change adds a per-p mspan object cache similar to the sudog cache. Unfortunately this cache can't quite operate like the sudog cache, since it is used in contexts where write barriers are disallowed (i.e. allocation codepaths), so rather than managing an array and a slice, it's just an array and a length. A little bit more unsafe, but avoids any write barriers. The purpose of this change is to reduce the number of operations which require the heap lock in allocation, paving the way for a lockless fast path. Updates #35112. Change-Id: I32cfdcd8528fb7be985640e4f3a13cb98ffb7865 Reviewed-on: https://go-review.googlesource.com/c/go/+/196642 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
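The shape of the trick, sketched with hypothetical names (the runtime's real cache lives on p and holds its off-heap mspans):

    package main

    import "fmt"

    // mspan stands in for the runtime's span type.
    type mspan struct{ id int }

    // spanCache is a fixed array plus a count instead of a slice: there is
    // no slice header to update on get/put, which is what lets the real
    // cache run on allocation paths where write barriers are disallowed.
    type spanCache struct {
        buf [128]*mspan
        len int
    }

    func (c *spanCache) put(s *mspan) bool {
        if c.len == len(c.buf) {
            return false // full: caller flushes spans to the central list
        }
        c.buf[c.len] = s
        c.len++
        return true
    }

    func (c *spanCache) get() *mspan {
        if c.len == 0 {
            return nil // empty: caller refills from the central list under lock
        }
        c.len--
        return c.buf[c.len]
    }

    func main() {
        var c spanCache
        c.put(&mspan{id: 1})
        fmt.Println(c.get().id) // 1
    }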
-
Michael Anthony Knyszek authored
This change combines the functionality of allocSpanLocked, allocManual, and alloc_m into a new method called allocSpan. While these methods' abstraction boundaries are OK when the heap lock is held throughout, they start to break down when we want finer-grained locking in the page allocator. allocSpan does just that, and only locks the heap when it absolutely has to. Piggy-backing off of work in previous CLs to make more of span initialization lockless, this change makes span initialization entirely lockless as part of the reorganization. Ultimately this change will enable us to add a lockless fast path to allocSpan. Updates #35112. Change-Id: I99875939d75fb4e958a67ac99e4a7cda44f06864 Reviewed-on: https://go-review.googlesource.com/c/go/+/196641 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Michael Anthony Knyszek authored
Currently gcSweepBuf guarantees that push operations may be performed concurrently with each other and that block operations may be performed concurrently with push operations as well. Unfortunately, this isn't quite true. The existing code allows push operations to happen concurrently with each other, but block operations may return blocks with nil entries. The way this can happen is if two concurrent pushers grab a slot to push to, and the first one (the one with the earlier slot in the buffer) doesn't quite write a span value when the block is called. The existing code in block only checks if the very last value in the block is nil, when really an arbitrary number of the last few values in the block may or may not be nil. Today, this case can't actually happen because when push operations happen concurrently during a GC (which is the only time block is called), they only ever happen during an allocation with the heap lock held, effectively serializing them. A block operation may happen concurrently with one of these pushes, but its callers will never see a nil mspan. Outside of a GC, this isn't a problem because although push operations from allocations can run concurrently with push operations from sweeping, block operations will never run. In essence, the real concurrency guarantees provided by gcSweepBuf are that block operations may happen concurrently with push operations, but that push operations may not be concurrent with each other if there are any block operations. To fix this, and to prepare for push operations happening without the heap lock held in a future CL, we update the documentation for block to correctly state that there may be nil entries in the returned slice. While we're here, make the mspan writes into the buffer atomic to avoid a block user racing on a nil check, and document that the user should load mspan values from the returned slice atomically. Finally, we make all callers of block adhere to the new rules. We choose to allow nil values rather than filter them out because the only caller of block is markrootSpans, and if it catches a nil entry, then there wasn't anything to mark in there anyway since the span is just being created. Updates #35112. Change-Id: I6450aab15f51690d7a000ba5b3d529cf2ca5da1e Reviewed-on: https://go-review.googlesource.com/c/go/+/203318 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
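A sketch of the consumer side of those rules, using sync/atomic in place of the runtime's internal atomics (names are stand-ins):

    package main

    import (
        "fmt"
        "sync/atomic"
        "unsafe"
    )

    type mspan struct{ id int } // stand-in for the runtime's mspan

    func main() {
        blk := make([]*mspan, 4)
        // Pushers publish entries atomically; slots 1 and 2 here model
        // reserved-but-not-yet-written entries.
        atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&blk[0])), unsafe.Pointer(&mspan{id: 0}))
        atomic.StorePointer((*unsafe.Pointer)(unsafe.Pointer(&blk[3])), unsafe.Pointer(&mspan{id: 3}))

        for i := range blk {
            // Load atomically and tolerate nils, as markrootSpans must.
            p := atomic.LoadPointer((*unsafe.Pointer)(unsafe.Pointer(&blk[i])))
            if p == nil {
                continue // slot reserved but not yet published; nothing to mark
            }
            fmt.Println((*mspan)(p).id)
        }
    }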
-
Michael Anthony Knyszek authored
This change makes it so that allocation and free related page sweeper metadata operations (e.g. pageInUse and pagesInUse) are atomic rather than protected by the heap lock. This will help in reducing the length of the critical path with the heap lock held in future changes. Updates #35112. Change-Id: Ie82bff024204dd17c4c671af63350a7a41add354 Reviewed-on: https://go-review.googlesource.com/c/go/+/196640 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Cherry Zhang authored
When the frame size is large, we generate

    MOVD.P 0xf0(SP), LR
    ADD $(framesize-0xf0), SP

This is problematic: after the first instruction, we have a partial frame of size (framesize-0xf0). If we try to unwind the stack at this point, we'll try to read the LR from the stack at 0(SP) (the new SP) as the frame size is not 0. But this slot does not contain a valid LR. Fix this by not changing SP in two instructions. Instead, generate

    MOVD (SP), LR
    ADD $framesize, SP

This affects not only async preemption but also profiling. So we change the generated instructions, instead of marking an unsafe point. Change-Id: I4e78c62d50ffc4acff70ccfbfec16a5ccae17f24 Reviewed-on: https://go-review.googlesource.com/c/go/+/206057 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Cherry Zhang authored
This CL adds support of call injection and async preemption on PPC64. For the injected call to return to the preempted PC, we have to clobber either LR or CTR. For reasons mentioned in previous CLs, we choose CTR. Previous CLs have marked code sequences that use CTR async-nonpreemptible. Change-Id: Ia642b5f06a890dd52476f45023b2a830c522eee0 Reviewed-on: https://go-review.googlesource.com/c/go/+/203824 Run-TryBot: Cherry Zhang <cherryyz@google.com> Reviewed-by: Keith Randall <khr@golang.org>
-
Michael Anthony Knyszek authored
mheap_.alloc currently accepts both a spanClass and a "large" parameter indicating whether the allocation is large. These are redundant, since spanClass.sizeclass() == 0 is an equivalent way to determine this and is already used in mheap_.alloc. There are no places in the runtime where the size class could be non-zero and large == true. Updates #35112. Change-Id: Ie66facf8f0faca6f4cd3d20a8ac4bc259e11823d Reviewed-on: https://go-review.googlesource.com/c/go/+/196639 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Michael Anthony Knyszek authored
This change defines a maximum supported physical and huge page size in the runtime based on the new page allocator's implementation, and uses them where appropriate. Furthermore, if the system exceeds the maximum supported huge page size, we simply ignore it silently. It also fixes a huge-page-related test which is only triggered by a condition which is definitely wrong. Finally, it adds a few TODOs related to code clean-up and supporting larger huge page sizes. Updates #35112. Fixes #35431. Change-Id: Ie4348afb6bf047cce2c1433576d1514720d8230f Reviewed-on: https://go-review.googlesource.com/c/go/+/205937 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Bryan C. Mills authored
If QEMU user-mode is actually a supported configuration, then per http://golang.org/wiki/PortingPolicy it needs to have a builder running tests for all packages, not just a simple “hello world” program. Updates #1508 Updates #13024 Fixes #35457 Change-Id: Ib6122b06ad1d265550a0e92131506266495893cc Reviewed-on: https://go-review.googlesource.com/c/go/+/206137 Run-TryBot: Bryan C. Mills <bcmills@google.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Michael Anthony Knyszek authored
For the most part, heap memstats are already updated atomically when passed down to OS-level memory functions (e.g. sysMap). Elsewhere, however, they're updated with the heap lock. In order to facilitate holding the heap lock for less time during allocation paths, this change more consistently makes the update of these statistics atomic by calling mSysStat{Inc,Dec} appropriately instead of simply adding or subtracting. It also ensures these values are loaded atomically. Furthermore, an undocumented but safe update condition for these memstats is during STW, at which point using atomics is unnecessary. This change also documents this condition in mstats.go. Updates #35112. Change-Id: I87d0b6c27b98c88099acd2563ea23f8da1239b66 Reviewed-on: https://go-review.googlesource.com/c/go/+/196638 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Michael Anthony Knyszek authored
This change removes useless additional heap_objects accounting for large objects. heap_objects is computed from scratch at ReadMemStats time (which stops the world) by using nlargealloc and nlargefree, so mutating heap_objects turns out to be pointless. As a result, the "large" parameter on "mheap_.freeSpan" is no longer necessary and so this change cleans that up too. Change-Id: I7d6b486d9b57c018e3db46221d81b55fe4c1b021 Reviewed-on: https://go-review.googlesource.com/c/go/+/196637 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Michael Anthony Knyszek authored
In preparation for a lockless fast path in the page allocator, this change makes it so that checking if an allocation needs to be zeroed may be done atomically. Unfortunately, this means there is a CAS-loop to ensure monotonicity of the zeroedBase value in heapArena. This CAS-loop exits if an allocator acquiring memory further on in the arena wins or if it succeeds. The CAS-loop should have a relatively small amount of contention because of this monotonicity, though it would be ideal if we could just have CAS-ers with the greatest value always win. The CAS-loop is unnecessary in the steady-state, but should bring some start-up performance gains as it's likely cheaper than the additional zeroing required, especially for large allocations. For very large allocations that span arenas, the CAS-loop should be completely uncontended for most of the arenas it touches, it may only encounter contention on the first and last arena. Updates #35112. Change-Id: If3d19198b33f1b1387b71e1ce5902d39a5c0f98e Reviewed-on: https://go-review.googlesource.com/c/go/+/203859 Run-TryBot: Michael Knyszek <mknyszek@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
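A sketch of the monotonic CAS-loop described above (hypothetical function; the real field lives on heapArena):

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // advanceZeroedBase moves zeroedBase forward to newBase, but never
    // backward: if a concurrent allocator already published a larger
    // value, the loop exits without writing.
    func advanceZeroedBase(zeroedBase *uintptr, newBase uintptr) {
        for {
            old := atomic.LoadUintptr(zeroedBase)
            if old >= newBase {
                return // an allocator further on in the arena won
            }
            if atomic.CompareAndSwapUintptr(zeroedBase, old, newBase) {
                return
            }
            // CAS lost to a smaller-but-still-advancing writer; retry.
        }
    }

    func main() {
        var zb uintptr
        advanceZeroedBase(&zb, 4096)
        advanceZeroedBase(&zb, 2048) // no-op: the value is monotonic
        fmt.Println(zb)              // 4096
    }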
-
Bryan C. Mills authored
This test is failing consistently in the longtest builders, potentially masking regressions in other packages. Updates #35271 Change-Id: Idc03171c0109b5c8d4913e0af2078c1115666897 Reviewed-on: https://go-review.googlesource.com/c/go/+/206098 Reviewed-by: Carlos Amedee <carlos@golang.org>
-