- 28 Oct, 2019 9 commits
-
-
Cherry Zhang authored
ZeroAuto was used with the ambiguously live logic. The ambiguously live logic is removed as we switched to stack objects. It is now never called. Remove. Change-Id: If4cdd7fed5297f8ab591cc392a76c80f57820856 Reviewed-on: https://go-review.googlesource.com/c/go/+/203538 Run-TryBot: Cherry Zhang <cherryyz@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Brad Fitzpatrick authored
Fixes #34751 Change-Id: I5ae1bb2bfddaa05245b364556d2b999b158a4cc4 Reviewed-on: https://go-review.googlesource.com/c/go/+/203879 Reviewed-by: Bryan C. Mills <bcmills@google.com>
-
Aditya Harindar authored
The first letter of an error string should not be capitalized, as prescribed in the [wiki](https://github.com/golang/go/wiki/Errors). Change-Id: Iea1413f19b5240d3bef79f216094d210b54bdb62 GitHub-Last-Rev: d8e167107122b603c4f647d722537668ad1c680d GitHub-Pull-Request: golang/go#35203 Reviewed-on: https://go-review.googlesource.com/c/go/+/203797 Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Lynn Boger authored
If a compilation has multiple text sections, the code in textOff must compare the offset argument against the range of each text section to determine which one contains it. The comparison looks like this:

if uintptr(off) >= sectaddr && uintptr(off) <= sectaddr+sectlen

If the off value being compared is equal to sectaddr+sectlen, then it is not within the range of the text section but just past it. The upper comparison should be just '<'. Updates #35207 Change-Id: I114633fd734563d38f4e842dd884c6c239f73c95 Reviewed-on: https://go-review.googlesource.com/c/go/+/203817 Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
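Restated as a standalone Go sketch (not the runtime source; names follow the commit message), the corrected check treats each text section as the half-open interval [sectaddr, sectaddr+sectlen):

package textoff // hypothetical package, for illustration only

// inSection reports whether off falls inside the section's half-open range.
// The bug was using '<=' for the upper bound, which wrongly accepted the
// first offset just past the end of the section.
func inSection(off, sectaddr, sectlen uintptr) bool {
	return off >= sectaddr && off < sectaddr+sectlen
}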
-
Lynn Boger authored
This adds an asm implementation of the p256 functions used in crypto/elliptic, utilizing VMX and VSX instructions to improve performance. On a power9 the improvement is:

elliptic benchmarks:
name            old time/op  new time/op  delta
BaseMult        1.40ms ± 0%  1.44ms ± 0%   +2.66%  (p=0.029 n=4+4)
BaseMultP256     317µs ± 0%    50µs ± 0%  -84.14%  (p=0.029 n=4+4)
ScalarMultP256   854µs ± 2%   214µs ± 0%  -74.91%  (p=0.029 n=4+4)

ecdsa benchmarks:
name           old time/op  new time/op  delta
SignP256        377µs ± 0%   111µs ± 0%  -70.57%  (p=0.029 n=4+4)
SignP384       6.55ms ± 0%  6.48ms ± 0%   -1.03%  (p=0.029 n=4+4)
VerifyP256     1.19ms ± 0%  0.26ms ± 0%  -78.54%  (p=0.029 n=4+4)
KeyGeneration   319µs ± 0%    52µs ± 0%  -83.56%  (p=0.029 n=4+4)

This implementation is based on the s390x implementation, using comparable instructions for most operations, with some minor changes where the instructions are not quite the same. Some changes were also needed because s390x is big endian and ppc64le is little endian. This also enables the fuzz_test for ppc64le. Change-Id: I59a69515703b82ad2929f68ba2f11208fa833181 Reviewed-on: https://go-review.googlesource.com/c/go/+/168478 Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Michael Munday <mike.munday@ibm.com>
-
Phil Pearl authored
If we marshal a non-pointer struct field whose type implements Marshaler with a non-pointer receiver, then we avoid an allocation if we take the address of the field before casting it to an interface.

name               old time/op    new time/op    delta
EncodeMarshaler-8  104ns ± 1%     92ns ± 2%      -11.72%  (p=0.001 n=7+7)

name               old alloc/op   new alloc/op   delta
EncodeMarshaler-8  36.0B ± 0%     4.0B ± 0%      -88.89%  (p=0.000 n=8+8)

name               old allocs/op  new allocs/op  delta
EncodeMarshaler-8  2.00 ± 0%      1.00 ± 0%      -50.00%  (p=0.000 n=8+8)

Test coverage already looks good enough for this change. TestRefValMarshal already covers all possible combinations of value & pointer receivers on value and pointer struct fields. Change-Id: I6fc7f72396396d98f9a90c3c86e813690f41c099 Reviewed-on: https://go-review.googlesource.com/c/go/+/203608 Reviewed-by: Daniel Martí <mvdan@mvdan.cc> Run-TryBot: Daniel Martí <mvdan@mvdan.cc> TryBot-Result: Gobot Gobot <gobot@golang.org>
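A minimal sketch of the language rule the CL exploits (the types here are made up; the real change is inside encoding/json's encoder): converting a struct field to an interface copies the value and may allocate, while converting the field's address does not copy the value, and a value-receiver MarshalJSON is still in the pointer type's method set:

package main

import (
	"encoding/json"
	"fmt"
)

type RawCents int64

// MarshalJSON has a value receiver, so both RawCents and *RawCents satisfy json.Marshaler.
func (c RawCents) MarshalJSON() ([]byte, error) {
	return []byte(fmt.Sprintf(`"%d.%02d"`, c/100, c%100)), nil
}

type Invoice struct {
	Total RawCents
}

func main() {
	inv := Invoice{Total: 12345}

	var byValue json.Marshaler = inv.Total // copies the field into the interface (may allocate)
	var byAddr json.Marshaler = &inv.Total // stores the field's address; no copy of the value

	b1, _ := byValue.MarshalJSON()
	b2, _ := byAddr.MarshalJSON()
	fmt.Println(string(b1), string(b2)) // "123.45" "123.45"
}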
-
Tobias Klauser authored
Based on work by Mikaël Urankar (@MikaelUrankar). Updates #24715 Updates #35197 Change-Id: I91144101043d67d3f8444bf8389c9606abe2a66c Reviewed-on: https://go-review.googlesource.com/c/go/+/199919 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Dmitri Goutnik authored
Updates golang/go#35197 Change-Id: I4fd85c84475761d71d2c17e62796e0a411cf91d8 Reviewed-on: https://go-review.googlesource.com/c/go/+/203519 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Tobias Klauser <tobias.klauser@gmail.com>
-
Dan Scales authored
_defer.fn can be nil, so we need to add a check when dumping _defer.fn.fn. Fixes #35172 Change-Id: Ic1138be5ec9dce915a87467cfa51ff83acc6e3a9 Reviewed-on: https://go-review.googlesource.com/c/go/+/203697 Run-TryBot: Dan Scales <danscales@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
- 27 Oct, 2019 3 commits
-
-
Phil Pearl authored
This change improves performance of Compact by using a sync.Pool to allow re-use of a scanner. This also has the side-effect of removing an allocation for each field that implements Marshaler when marshalling JSON.

name               old time/op    new time/op    delta
EncodeMarshaler-8  118ns ± 2%     104ns ± 1%     -12.21%  (p=0.001 n=7+7)

name               old alloc/op   new alloc/op   delta
EncodeMarshaler-8  100B ± 0%      36B ± 0%       -64.00%  (p=0.000 n=8+8)

name               old allocs/op  new allocs/op  delta
EncodeMarshaler-8  3.00 ± 0%      2.00 ± 0%      -33.33%  (p=0.000 n=8+8)

Change-Id: Ic70c61a0a6354823da5220f5aad04b94c054f233 Reviewed-on: https://go-review.googlesource.com/c/go/+/200864 Reviewed-by: Daniel Martí <mvdan@mvdan.cc> Run-TryBot: Daniel Martí <mvdan@mvdan.cc> TryBot-Result: Gobot Gobot <gobot@golang.org>
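A hedged, generic sketch of the pattern applied inside encoding/json (scanState and its fields are hypothetical; the real scanner is an unexported type in the package): reusable per-call state is kept in a sync.Pool so hot paths such as Compact don't allocate a fresh scanner on every call:

package jsonutil

import (
	"bytes"
	"sync"
)

// scanState stands in for the reusable per-call scratch state.
type scanState struct {
	scratch bytes.Buffer
	// ... other per-call scratch fields ...
}

var scanPool = sync.Pool{
	New: func() interface{} { return new(scanState) },
}

// compact sketches the pooled pattern: borrow state, use it, reset it, return it.
func compact(dst *bytes.Buffer, src []byte) error {
	s := scanPool.Get().(*scanState)
	defer func() {
		s.scratch.Reset() // clear reusable state before returning it to the pool
		scanPool.Put(s)
	}()
	// Placeholder for the real work: scan src with s and write the compact form to dst.
	_, err := dst.Write(src)
	return err
}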
-
Jason A. Donenfeld authored
There's no reason not to enable DEP in 2019, especially given Go's minimum operating system level. RELNOTE=yes Change-Id: I9c3bbc5b05a1654876a218123dd57b9c9077b780 Reviewed-on: https://go-review.googlesource.com/c/go/+/203601 Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
-
Jason A. Donenfeld authored
This test's existence was predicated on assumptions about the full range of known data types and the data stored in those types. However, we've learned from Microsoft that there are several undocumented secret registry types in use by various parts of Windows, and we've learned from inspection that many Microsoft uses of registry types don't strictly adhere to the recommended value size. It's therefore foolhardy to make any assumptions about what goes in and out of the registry, and so this test, as well as its "blacklist", is meaningless. Fixes #35084 Change-Id: I6c3fe5fb0e740e88858321b3b042c0ff1a23284e Reviewed-on: https://go-review.googlesource.com/c/go/+/203604 Run-TryBot: Jason A. Donenfeld <Jason@zx2c4.com> Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
-
- 26 Oct, 2019 16 commits
-
-
Josh Bleecher Snyder authored
Rhys Hiltner noted in #14939 that this defer was syntactically inside a loop, but was only ever executed once. Now that defer in a loop is significantly slower, pull this one out.

name                                     old time/op  new time/op  delta
Throughput/MaxPacket/1MB/TLSv12-8        3.94ms ± 8%  3.93ms ±13%     ~     (p=0.967 n=15+15)
Throughput/MaxPacket/1MB/TLSv13-8        4.33ms ± 3%  4.51ms ± 7%   +4.00%  (p=0.000 n=14+14)
Throughput/MaxPacket/2MB/TLSv12-8        6.80ms ± 6%  7.01ms ± 4%   +3.15%  (p=0.000 n=14+14)
Throughput/MaxPacket/2MB/TLSv13-8        6.96ms ± 5%  6.80ms ± 5%   -2.43%  (p=0.006 n=15+14)
Throughput/MaxPacket/4MB/TLSv12-8        12.0ms ± 3%  11.7ms ± 2%   -2.88%  (p=0.000 n=15+13)
Throughput/MaxPacket/4MB/TLSv13-8        12.1ms ± 3%  11.7ms ± 2%   -3.54%  (p=0.000 n=13+13)
Throughput/MaxPacket/8MB/TLSv12-8        22.2ms ± 3%  21.6ms ± 3%   -2.97%  (p=0.000 n=15+15)
Throughput/MaxPacket/8MB/TLSv13-8        22.5ms ± 5%  22.0ms ± 3%   -2.34%  (p=0.004 n=15+15)
Throughput/MaxPacket/16MB/TLSv12-8       42.4ms ± 3%  41.3ms ± 3%   -2.49%  (p=0.001 n=15+15)
Throughput/MaxPacket/16MB/TLSv13-8       43.4ms ± 5%  42.3ms ± 3%   -2.33%  (p=0.006 n=15+14)
Throughput/MaxPacket/32MB/TLSv12-8       83.1ms ± 4%  80.6ms ± 3%   -2.98%  (p=0.000 n=15+15)
Throughput/MaxPacket/32MB/TLSv13-8       85.2ms ± 8%  82.6ms ± 4%   -3.02%  (p=0.005 n=15+15)
Throughput/MaxPacket/64MB/TLSv12-8        167ms ± 7%   158ms ± 2%   -5.21%  (p=0.000 n=15+15)
Throughput/MaxPacket/64MB/TLSv13-8        170ms ± 4%   162ms ± 3%   -4.83%  (p=0.000 n=15+15)
Throughput/DynamicPacket/1MB/TLSv12-8    4.13ms ± 7%  4.00ms ± 8%     ~     (p=0.061 n=15+15)
Throughput/DynamicPacket/1MB/TLSv13-8    4.72ms ± 6%  4.64ms ± 7%     ~     (p=0.377 n=14+15)
Throughput/DynamicPacket/2MB/TLSv12-8    7.29ms ± 7%  7.09ms ± 7%     ~     (p=0.070 n=15+14)
Throughput/DynamicPacket/2MB/TLSv13-8    7.18ms ± 5%  6.59ms ± 4%   -8.34%  (p=0.000 n=15+15)
Throughput/DynamicPacket/4MB/TLSv12-8    12.3ms ± 3%  11.9ms ± 4%   -3.31%  (p=0.000 n=15+14)
Throughput/DynamicPacket/4MB/TLSv13-8    12.2ms ± 4%  12.0ms ± 4%   -1.91%  (p=0.019 n=15+15)
Throughput/DynamicPacket/8MB/TLSv12-8    22.4ms ± 3%  21.9ms ± 3%   -2.18%  (p=0.000 n=15+15)
Throughput/DynamicPacket/8MB/TLSv13-8    22.7ms ± 3%  22.2ms ± 3%   -2.35%  (p=0.000 n=15+15)
Throughput/DynamicPacket/16MB/TLSv12-8   42.3ms ± 3%  42.1ms ± 3%     ~     (p=0.505 n=14+15)
Throughput/DynamicPacket/16MB/TLSv13-8   42.7ms ± 3%  43.3ms ± 7%     ~     (p=0.123 n=15+14)
Throughput/DynamicPacket/32MB/TLSv12-8   82.8ms ± 3%  81.9ms ± 3%     ~     (p=0.112 n=14+15)
Throughput/DynamicPacket/32MB/TLSv13-8   84.6ms ± 6%  83.9ms ± 4%     ~     (p=0.624 n=15+15)
Throughput/DynamicPacket/64MB/TLSv12-8    166ms ± 4%   163ms ± 6%     ~     (p=0.081 n=15+15)
Throughput/DynamicPacket/64MB/TLSv13-8    165ms ± 3%   168ms ± 3%   +1.56%  (p=0.029 n=15+15)

Change-Id: I22409b05afe761b8ed1912b15c67fc03f88d3d1f Reviewed-on: https://go-review.googlesource.com/c/go/+/203481 Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
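A hypothetical illustration of the pattern, not the actual crypto/tls change: when a deferred call written inside a loop can only ever run once, hoisting it out of the loop avoids re-queuing it per iteration and keeps it on the cheaper non-loop defer path. The function and its deadline handling are invented for the sketch:

package tlsutil

import (
	"crypto/tls"
	"time"
)

// writeRecords sets a write deadline once and clears it once on return,
// with the defer hoisted above the loop instead of inside the loop body.
func writeRecords(c *tls.Conn, records [][]byte) error {
	if err := c.SetWriteDeadline(time.Now().Add(time.Minute)); err != nil {
		return err
	}
	defer c.SetWriteDeadline(time.Time{}) // runs exactly once, at function exit

	for _, rec := range records {
		if _, err := c.Write(rec); err != nil {
			return err
		}
	}
	return nil
}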
-
Giovanni Bajo authored
Updates #35157 (the bug there was fixed by CL 200861) Change-Id: I67069207b4cdc2ad4a475dd0bbc8555ecc5f534f Reviewed-on: https://go-review.googlesource.com/c/go/+/203598 Run-TryBot: Giovanni Bajo <rasky@develer.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Alberto Donizetti <alb.donizetti@gmail.com>
-
Giovanni Bajo authored
Sometimes, poset needs to collapse a path, making all nodes in the path aliases. For instance, if we know that A<=N1<=B and we learn that B<=A, we can deduce A==N1==B, and thus collapse all paths from A to B into a single aliased node. Currently, this is a TODO. This CL implements the path-collapsing primitive by doing a DFS walk to build a bitset of all nodes across all paths, and then calling the new aliasnodes, which allows marking multiple nodes as aliases of a single master node. This helps only 4 times in std+cmd, but it will be fundamental when we rely on poset to calculate numerical limits, in order to compute the correct values. This also fixes #35157, a bug uncovered by a previous CL in this series. A testcase will be added soon. Change-Id: I5fc54259711769d7bd7c2d166a5abc1cddc26350 Reviewed-on: https://go-review.googlesource.com/c/go/+/200861 Run-TryBot: Giovanni Bajo <rasky@develer.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Keith Randall <khr@golang.org>
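The path-collapsing step can be sketched outside the compiler like this (a map stands in for the poset's bitset, and the graph's edges encode '<=' relations): the nodes lying on any path from a to b are exactly those reachable from a that can also reach b; once b<=a is learned they all become aliases of one representative:

package main

import "fmt"

type graph map[int][]int // edge u -> v means u <= v

// reach returns the set of nodes reachable from start (including start) via DFS.
func reach(g graph, start int) map[int]bool {
	seen := map[int]bool{start: true}
	stack := []int{start}
	for len(stack) > 0 {
		u := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		for _, v := range g[u] {
			if !seen[v] {
				seen[v] = true
				stack = append(stack, v)
			}
		}
	}
	return seen
}

// onAllPaths returns every node that lies on some a->b path:
// reachable from a in g, and able to reach b (i.e. reachable from b in the reversed graph).
func onAllPaths(g, rev graph, a, b int) map[int]bool {
	fromA, toB := reach(g, a), reach(rev, b)
	nodes := map[int]bool{}
	for n := range fromA {
		if toB[n] {
			nodes[n] = true // all of these collapse into one aliased node
		}
	}
	return nodes
}

func main() {
	g := graph{1: {2}, 2: {3}}   // 1<=2<=3
	rev := graph{3: {2}, 2: {1}} // reversed edges
	fmt.Println(onAllPaths(g, rev, 1, 3)) // map[1:true 2:true 3:true]
}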
-
Giovanni Bajo authored
Change aliasnode into aliasnodes, to allow recording multiple aliases in a single pass. The nodes being aliased are passed as a bitset for performance reasons (O(1) lookups). This does look worse in the existing SetEqual case, where we now need to allocate a bitset for just a single node, but the new API will allow a path-collapsing primitive to be fully implemented in the next CL. No functional changes, passes toolstash -cmp. Change-Id: I06259610e8ef478106b36852464ed2caacd29ab5 Reviewed-on: https://go-review.googlesource.com/c/go/+/200860 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Giovanni Bajo <rasky@develer.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Giovanni Bajo authored
In preparation for allowing multiple nodes to be marked as aliases in a single pass, refactor aliasnode by splitting the case in which one of the nodes is not yet in the poset into a new function (aliasnewnode). No functional changes, passes toolstash -cmp. Change-Id: I19ca6ef8426f8aec9f2622b6151c5c617dbb25b5 Reviewed-on: https://go-review.googlesource.com/c/go/+/200859 Reviewed-by: Keith Randall <khr@golang.org> Run-TryBot: Giovanni Bajo <rasky@develer.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
nikita-vanyasin authored
A similar 'elapsed' function and its usages were also deleted. Fixes #19865. Change-Id: Ib125365e69cf2eda60de64fa74290c8c7d1fd65a Reviewed-on: https://go-review.googlesource.com/c/go/+/171730 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com> Reviewed-by: Than McIntosh <thanm@google.com>
-
Ben Shi authored
ARM's R4-R8 & R10-R11 are callee-save registers, and R9 may or may not be callee-saved depending on the platform. This CL saves them at the beginning of sigtramp and restores them at the end. Fixes #32738 Change-Id: Ib7eb80836bc074e2e6a46ae4602ba8a3b96c5456 Reviewed-on: https://go-review.googlesource.com/c/go/+/183777 Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
For #10958, #24543. Change-Id: Ib009a83fe02bc623894f4908fe8f6b266382ba95 Reviewed-on: https://go-review.googlesource.com/c/go/+/201404 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
For #10958, #24543. Change-Id: I82bee63b49e15bd5a53228eb85179814c80437ef Reviewed-on: https://go-review.googlesource.com/c/go/+/201403 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
For these, we split up the existing runtime.raise assembly implementation into its constituent "get thread ID" and "signal thread" parts. This lets us implement signalM and reimplement raise in pure Go. (NetBSD conveniently already had lwp_self.) We also change minit to store the procid directly, rather than depending on newosproc to do so. This is because newosproc isn't called for the bootstrap M, but we need a procid for every M. This is also simpler overall. For #10958, #24543. Change-Id: Ie5f1fcada6a33046375066bcbe054d1f784d39c0 Reviewed-on: https://go-review.googlesource.com/c/go/+/201402 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
We'll add a test once all of the POSIX platforms are done. For #10958, #24543. Change-Id: If7e3f14e8391791364877629bf415d9f8e788b0a Reviewed-on: https://go-review.googlesource.com/c/go/+/201401 Run-TryBot: Austin Clements <austin@google.com> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Vivek V authored
Change-Id: I8dcb425c37a067277549ba3bda6a21206459890a GitHub-Last-Rev: dc51b9b425ca51cd0a4ac494d4c7a3c0e2306951 GitHub-Pull-Request: golang/go#35132 Reviewed-on: https://go-review.googlesource.com/c/go/+/203097 Run-TryBot: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Bryan C. Mills authored
Previously we injected an error, and the injection points were (empirically) not realistic on some platforms. Instead, we now make the directory read-only, which (on most platforms) suffices to prevent the removal of its files. Fixes #35117 Updates #29921 Change-Id: Ica4e2818566f8c14df3eed7c3b8de5c0abeb6963 Reviewed-on: https://go-review.googlesource.com/c/go/+/203502 Reviewed-by: Ian Lance Taylor <iant@golang.org> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
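A hedged sketch of the strategy (the package, test name, and layout are illustrative, not the actual cmd/go test): making the directory read-only causes removal of its contents to fail naturally on most Unix-like platforms, with no artificial error injection:

package example // in a _test.go file

import (
	"os"
	"path/filepath"
	"testing"
)

func TestRemoveFailsOnReadOnlyDir(t *testing.T) {
	dir := t.TempDir()
	if err := os.WriteFile(filepath.Join(dir, "f.txt"), []byte("x"), 0o644); err != nil {
		t.Fatal(err)
	}
	if err := os.Chmod(dir, 0o555); err != nil { // read+execute only: entries can't be unlinked
		t.Fatal(err)
	}
	defer os.Chmod(dir, 0o755) // restore so test cleanup can remove the directory

	if err := os.RemoveAll(dir); err == nil {
		t.Log("RemoveAll unexpectedly succeeded (some platforms ignore the directory write bit)")
	}
}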
-
Cuong Manh Le authored
findObject takes the pointer argument as a uintptr. If the pointer points into the local stack and the call to findObject happens to require the stack to be reallocated, then spanOf is called with the old pointer. Marking findObject as nosplit fixes the issue. Fixes #35068 Change-Id: I029d36f9c23f91812f18f98839edf02e0ba4082e Reviewed-on: https://go-review.googlesource.com/c/go/+/202798 Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
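A standalone illustration of the hazard the nosplit annotation guards against (this is not the runtime's code): a uintptr derived from a stack address silently goes stale once the goroutine's stack is grown and copied:

package main

import (
	"fmt"
	"unsafe"
)

//go:noinline
func use(b []byte) byte { return b[0] }

//go:noinline
func grow(n int) byte {
	var buf [1 << 10]byte // a large frame so deep recursion forces stack growth
	buf[0] = byte(n)
	if n == 0 {
		return use(buf[:])
	}
	return grow(n-1) + use(buf[:])
}

func main() {
	var x int
	before := uintptr(unsafe.Pointer(&x)) // address before any stack growth
	grow(200)                             // deep recursion triggers a stack copy
	after := uintptr(unsafe.Pointer(&x))
	fmt.Println("stack moved:", before != after) // typically true once the stack was copied
}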
-
Cuong Manh Le authored
This helps keep findObject's frame small. Updates #35068 Change-Id: I1b8c1fcc5831944c86f1a30ed2f2d867a5f2b242 Reviewed-on: https://go-review.googlesource.com/c/go/+/202797 Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com>
-
Cuong Manh Le authored
Factor out the s == nil case, making the code cleaner and easier to read. Change-Id: I63f52e14351c0a0d20a611b1fe10fdc0d4947d96 Reviewed-on: https://go-review.googlesource.com/c/go/+/202498 Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Austin Clements <austin@google.com> Reviewed-by: Keith Randall <khr@golang.org>
-
- 25 Oct, 2019 12 commits
-
-
Austin Clements authored
We check whether an M is preemptible in a surprising number of places. Put it in one function. For #10958, #24543. Change-Id: I305090fdb1ea7f7a55ffe25851c1e35012d0d06c Reviewed-on: https://go-review.googlesource.com/c/go/+/201439 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
We're about to introduce asynchronous safe points, where we won't have precise pointer maps for all stack frames. That's okay for scanning the stack (conservatively), but not for shrinking the stack. Hence, this CL prepares for this by only shrinking the stack as part of the stack scan if the goroutine is stopped at a synchronous safe point. Otherwise, it queues up the stack shrink for the next synchronous safe point. We already have one condition under which we can't shrink the stack for very similar reasons: syscalls. Currently, we just give up on shrinking the stack if it's in a syscall. But with this mechanism, we defer that stack shrink until the next synchronous safe point. For #10958, #24543. Change-Id: Ifa1dec6f33fdf30f9067be2ce3f7ab8a7f62ce38 Reviewed-on: https://go-review.googlesource.com/c/go/+/201438 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
When we copy a stack of a goroutine blocked in a channel operation, we have to be very careful because other goroutines may be writing to that goroutine's stack. To handle this, stack copying acquires the locks for the channels a goroutine is waiting on. One complication is that stack growth may happen while a goroutine holds these locks, in which case stack copying must *not* acquire these locks because that would self-deadlock. Currently, stack growth never acquires these locks because stack growth only happens when a goroutine is running, which means it's either not blocking on a channel or it's holding the channel locks already. Stack shrinking always acquires these locks because shrinking happens asynchronously, so the goroutine is never running, so there are either no locks or they've been released by the goroutine. However, we're about to change when stack shrinking can happen, which is going to break the current rules. Rather than find a new way to derive whether to acquire these locks or not, this CL simply adds a flag to the g struct that indicates that stack copying should acquire channel locks. This flag is set while the goroutine is blocked on a channel op. For #10958, #24543. Change-Id: Ia2ac8831b1bfda98d39bb30285e144c4f7eaf9ab Reviewed-on: https://go-review.googlesource.com/c/go/+/172982 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Michael Knyszek <mknyszek@google.com>
-
Austin Clements authored
Currently, gcscanvalid is used to resolve a race between attempts to scan a stack. Now that there's a clear owner of the stack scan operation, there's no longer any danger of racing or attempting to scan a stack more than once, so this CL eliminates gcscanvalid. I double-checked my reasoning by first adding a throw if gcscanvalid was set in scanstack and verifying that all.bash still passed. For #10958, #24543. Fixes #24363. Change-Id: I76794a5fcda325ed7cfc2b545e2a839b8b3bc713 Reviewed-on: https://go-review.googlesource.com/c/go/+/201139 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
This removes scang and preemptscan, since the stack scanning code now uses suspendG. For #10958, #24543. Change-Id: Ic868bf5d6dcce40662a82cb27bb996cb74d0720e Reviewed-on: https://go-review.googlesource.com/c/go/+/201138 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
Austin Clements authored
Currently, the process of suspending a goroutine is tied to stack scanning. In preparation for non-cooperative preemption, this CL abstracts this into general purpose suspendG/resumeG functions. suspendG and resumeG closely follow the existing scang and restartg functions with one exception: the addition of a _Gpreempted status. Currently, preemption tasks (stack scanning) are carried out by the target goroutine if it's in _Grunning. In this new approach, the task is always carried out by the goroutine that called suspendG. Thus, we need a reliable way to drive the target goroutine out of _Grunning until the requesting goroutine is ready to resume it. The new _Gpreempted state provides the handshake: when a runnable goroutine responds to a preemption request, it now parks itself and enters _Gpreempted. The requesting goroutine races to put it in _Gwaiting, which gives it ownership, but also the responsibility to start it again. This CL adds several TODOs about improving the synchronization on the G status. The existing code already has these problems; we're just taking note of them. The next CL will remove the now-dead scang and preemptscan. For #10958, #24543. Change-Id: I16dbf87bea9d50399cc86719c156f48e67198f16 Reviewed-on: https://go-review.googlesource.com/c/go/+/201137 Run-TryBot: Austin Clements <austin@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Cherry Zhang <cherryyz@google.com>
-
pokutuna authored
Fixes #35161 Updates #34439 Change-Id: I978534cbb8b9fb32c115dba0066cf099c61d8ee9 GitHub-Last-Rev: d60581635e8cefb7cfc4b571057542395034c575 GitHub-Pull-Request: golang/go#35162 Reviewed-on: https://go-review.googlesource.com/c/go/+/203478 Reviewed-by: Bryan C. Mills <bcmills@google.com> Run-TryBot: Bryan C. Mills <bcmills@google.com> TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Bryan C. Mills authored
mod_get_svn passes, and I also tested this manually on a real-world svn-hosted package:

example.com$ go mod init example.com
go: creating new go.mod: module example.com
example.com$ GOPROXY=direct GONOSUMDB=llvm.org go get -d llvm.org/llvm/bindings/go/llvm
go: finding llvm.org/llvm latest
go: finding llvm.org/llvm/bindings/go/llvm latest
go: downloading llvm.org/llvm v0.0.0-20191022153947-000000375505
go: extracting llvm.org/llvm v0.0.0-20191022153947-000000375505
example.com$ go list llvm.org/llvm/bindings/...
llvm.org/llvm/bindings/go
llvm.org/llvm/bindings/go/llvm

Fixes #26092 Change-Id: Iefe2151b82a0225c73bb6f8dd7cd8a352897d4c0 Reviewed-on: https://go-review.googlesource.com/c/go/+/203497 Run-TryBot: Bryan C. Mills <bcmills@google.com> Reviewed-by: Jay Conrod <jayconrod@google.com>
-
Tobias Klauser authored
CL 198544 broke the linux/arm64 build because it declares emptyfunc for GOARCH=arm64, but only freebsd/arm64 defines it. Make it a static assembly function specific to freebsd/arm64 and remove the stub. Fixes #35160 Change-Id: I5fd94249b60c6fd259c251407b6eccc8fa512934 Reviewed-on: https://go-review.googlesource.com/c/go/+/203418 Reviewed-by: Bryan C. Mills <bcmills@google.com>
-
Bryan C. Mills authored
bradfitz is actively thinking about a proper fix. In the meantime, skip the test to suss out any other failures in the builder. Updates #35122 Change-Id: I9bf0640222e3d385c1a3e2be5ab52b80d3e8c21a Reviewed-on: https://go-review.googlesource.com/c/go/+/203500 Run-TryBot: Bryan C. Mills <bcmills@google.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Bryan C. Mills authored
Previously, TestAtomicStop used a hard-coded 2-second timeout. That empirically is not long enough on certain builders. Rather than adjusting it to a different arbitrary value, use a slice of the overall timeout for the test binary. If everything is working, we won't block nearly that long anyway. Updates #35085 Change-Id: I7b789388e3152413395088088fc497419976cf5c Reviewed-on: https://go-review.googlesource.com/c/go/+/203499 Run-TryBot: Bryan C. Mills <bcmills@google.com> Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
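A hedged sketch of deriving a wait budget from the test binary's overall deadline rather than a hard-coded constant. It uses testing.T's Deadline method purely for illustration; that API postdates this CL, so this is not what the change itself does:

package example // in a _test.go file

import (
	"testing"
	"time"
)

// waitBudget returns a per-operation timeout carved out of the remaining
// time before the test binary's -timeout deadline.
func waitBudget(t *testing.T) time.Duration {
	deadline, ok := t.Deadline()
	if !ok {
		return time.Minute // no -timeout set; fall back to a generous default
	}
	// Use a quarter of the remaining time, leaving room for the rest of the test.
	return time.Until(deadline) / 4
}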
-
Tobias Klauser authored
Updates #24715 Change-Id: I110a10a5d5ed4a471f67f35cbcdcbea296c5dcaf Reviewed-on: https://go-review.googlesource.com/c/go/+/198542 Run-TryBot: Tobias Klauser <tobias.klauser@gmail.com> TryBot-Result: Gobot Gobot <gobot@golang.org> Reviewed-by: Ian Lance Taylor <iant@golang.org>
-