- 23 Nov, 2015 6 commits
-
-
Russ Cox authored
Solaris needs to make system calls without a g, and Solaris uses asmcgocall to make system calls. I know, I know. I hope this makes CL 16915, fixing #12277, work on Solaris.

Change-Id: If988dfd37f418b302da9c7096f598e5113ecea87
Reviewed-on: https://go-review.googlesource.com/17072
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Aram Hăvărneanu <aram@mgk.ro>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-on: https://go-review.googlesource.com/17129
-
Russ Cox authored
It's intended primarily as a torture test for OS X. Apparently Windows can't take it. Updates fix for #12327.

Change-Id: If2af249ea8e2f55bff8f232dce06172e6fef9f49
Reviewed-on: https://go-review.googlesource.com/17073
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-on: https://go-review.googlesource.com/17128
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Russ Cox authored
This is a bit of a belt-and-suspenders fix. On OS X, we now parse the Mach-O file to find the __text section, which is arguably the more proper fix. But it's a bit worrisome to depend on a name like __text not changing, so we also read more of the initial file (now 32 kB, up from 8 kB) and scan that too.

Fixes #12327.

Change-Id: I3a201a3dc278d24707109bb3961c3bdd8b8a0b7b
Reviewed-on: https://go-review.googlesource.com/17038
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/17127
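For readers unfamiliar with the approach, here is a minimal sketch of the two-pronged idea described above, using the standard debug/macho package to locate the __text section and falling back to scanning the first 32 kB of the file. The function name and error handling are illustrative only, not the actual cmd/go code.

```go
package main

import (
	"debug/macho"
	"fmt"
	"io"
	"os"
)

// textBytes returns data likely to contain the build ID: the __text
// section if the Mach-O headers can be parsed, otherwise the first 32 kB
// of the file (the belt-and-suspenders fallback).
func textBytes(name string) ([]byte, error) {
	// Preferred path: ask the Mach-O headers where __text lives.
	if mf, err := macho.Open(name); err == nil {
		defer mf.Close()
		if sect := mf.Section("__text"); sect != nil {
			if data, err := sect.Data(); err == nil {
				return data, nil
			}
		}
	}
	// Fallback: scan the first 32 kB (up from 8 kB) of the raw file.
	f, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	buf := make([]byte, 32*1024)
	n, err := io.ReadFull(f, buf)
	if err != nil && err != io.ErrUnexpectedEOF {
		return nil, err
	}
	return buf[:n], nil
}

func main() {
	data, err := textBytes(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("scanned %d bytes of text\n", len(data))
}
```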
-
Russ Cox authored
Does not fix #12327 but nicer anyway.

Change-Id: I4ad730a4ca833d76957b7571895b3a08a6a530d4
Reviewed-on: https://go-review.googlesource.com/16964
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/17126
-
Russ Cox authored
[release-branch.go1.5] cmd/compile: fix crash with -race on large expr containing string->[]byte conversion

The assumption is that there are no nested function calls in complex expressions. For the most part that assumption is true. It wasn't for these calls inserted during walk. Fix that.

I looked through all the calls to mkcall in walk and these were the only cases that emitted calls, that could be part of larger expressions (like not delete), and that were not already handled.

Fixes #12225.

Change-Id: Iad380683fe2e054d480e7ae4e8faf1078cdd744c
Reviewed-on: https://go-review.googlesource.com/17034
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/17125
-
Russ Cox authored
Fixes #12686.

Change-Id: I7a9f49dbd1f60b1d0240de57787753b425f9548c
Reviewed-on: https://go-review.googlesource.com/17031
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/17124
-
- 20 Nov, 2015 3 commits
-
-
Austin Clements authored
A sigprof during stack barrier insertion or removal can crash if it detects an inconsistency between the stkbar array and the stack itself. Currently we protect against this when scanning another G's stack using stackLock, but we don't protect against it when unwinding stack barriers for a recover or a memmove to the stack.

This commit cleans up and improves the stack locking code. It abstracts out the lock and unlock operations. It uses the lock consistently everywhere we perform stack operations, and pushes the lock/unlock down closer to where the stack barrier operations happen to make it more obvious what it's protecting. Finally, it modifies sigprof so that instead of spinning until it acquires the lock, it simply doesn't perform a traceback if it can't acquire it. This is necessary to prevent self-deadlock.

Updates #11863, which introduced stackLock to fix some of these issues, but didn't go far enough.

Updates #12528.

Change-Id: I9d1fa88ae3744d31ba91500c96c6988ce1a3a349
Reviewed-on: https://go-review.googlesource.com/17036
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/17057
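For a rough feel of the acquire-or-skip idea (not the runtime's actual code; the type and field names here are made up), a sketch in ordinary Go: normal stack operations spin to take the stack lock, while the profiling signal handler tries exactly once and drops the sample if the stack is busy, which is what avoids self-deadlock.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type gStack struct {
	stackLock uint32 // hypothetical per-G stack lock
}

// lockStack spins until it owns the stack (scan/copy/unwind paths).
func (g *gStack) lockStack() {
	for !atomic.CompareAndSwapUint32(&g.stackLock, 0, 1) {
		// spin; the real runtime yields the processor here
	}
}

func (g *gStack) unlockStack() { atomic.StoreUint32(&g.stackLock, 0) }

// tryLockStack is all a signal handler can safely do: one attempt.
func (g *gStack) tryLockStack() bool {
	return atomic.CompareAndSwapUint32(&g.stackLock, 0, 1)
}

func sigprofTraceback(g *gStack) {
	if !g.tryLockStack() {
		// Stack is being scanned, copied, or unwound: drop this sample.
		fmt.Println("sigprof: stack busy, skipping traceback")
		return
	}
	defer g.unlockStack()
	fmt.Println("sigprof: traceback taken")
}

func main() {
	g := &gStack{}
	sigprofTraceback(g) // lock free: traceback taken
	g.lockStack()
	sigprofTraceback(g) // lock held: sample dropped instead of deadlocking
	g.unlockStack()
}
```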
-
Russ Cox authored
During a crash showing goroutine stacks of all threads (with GOTRACEBACK=crash), it can be that f == nil. Only happens on Solaris; not sure why.

Change-Id: Iee2c394a0cf19fa0a24f6befbc70776b9e42d25a
Reviewed-on: https://go-review.googlesource.com/17110
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-on: https://go-review.googlesource.com/17122
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Currently, if a profiling signal happens in the middle of stackBarrier, gentraceback may see inconsistencies between stkbar and the barriers on the stack and it will certainly get the wrong return PC for stackBarrier. In most cases, the return PC won't be a PC at all and this will immediately abort the traceback (which is considered okay for a sigprof), but if it happens to be a valid PC this may send gentraceback down a rabbit hole.

Fix this by detecting when the gentraceback starts in stackBarrier and simulating the completion of the barrier to get the correct initial frame.

Change-Id: Ib11f705ac9194925f63fe5dfbfc84013a38333e6
Reviewed-on: https://go-review.googlesource.com/17035
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/17056
-
- 17 Nov, 2015 14 commits
-
-
Austin Clements authored
In mheap.sysAlloc, if an allocation at arena_used would exceed arena_end (but wouldn't yet push us past arena_start+_MaxArean32), it tries to extend the arena reservation by another 256 MB. It extends the arena by calling sysReserve, which, on 32-bit, calls mmap without MAP_FIXED, which means the address is just a hint and the kernel can put the mapping wherever it wants. In particular, mmap may choose an address below arena_start (the kernel also chose arena_start, so there could be lots of space below it). Currently, we don't detect this case and, if it happens, mheap.sysAlloc will corrupt arena_end and arena_used, then return the low pointer to mheap.grow, which will crash when it attempts to index into h_spans with an underflowed index.

Fix this by checking not only that p+p_size isn't too high, but that p isn't too low.

Fixes #13143.

Change-Id: I8d0f42bd1484460282a83c6f1a6f8f0df7fb2048
Reviewed-on: https://go-review.googlesource.com/16927
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/16988
Reviewed-by: Russ Cox <rsc@golang.org>
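A simplified sketch of the bounds check the fix adds: after asking the OS for more arena (a hint-only mmap on 32-bit), the mapping is usable only if it lies entirely inside the arena's address range. The names and the 2 GB cap below are illustrative assumptions, not the runtime's code.

```go
package main

import "fmt"

const maxArena32 = 2 << 30 // hypothetical cap on a 32-bit arena

// usableArena reports whether the region [p, p+size) returned by the OS
// can be used to grow the arena rooted at arenaStart.
func usableArena(p, size, arenaStart uintptr) bool {
	if p < arenaStart {
		// Kernel placed the mapping below the arena: before the fix this
		// case went undetected and corrupted arena_end/arena_used.
		return false
	}
	if p+size > arenaStart+maxArena32 {
		// Too high: past the end of the addressable arena.
		return false
	}
	return true
}

func main() {
	const arenaStart = 0x10000000
	fmt.Println(usableArena(arenaStart+0x1000000, 256<<20, arenaStart)) // true: inside the arena
	fmt.Println(usableArena(0x00100000, 256<<20, arenaStart))           // false: below arena_start
}
```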
-
Austin Clements authored
If the area returned by sysReserve in mheap.sysAlloc is outside the usable arena, we sysFree it. We pass a fake stat pointer to sysFree because we haven't added the allocation to any stat at that point. However, we pass a 0 stat, so sysFree panics when it decrements the stat because the fake stat underflows.

Fix this by setting the fake stat to the allocation size.

Updates #13143 (this is a prerequisite to fixing that bug).

Change-Id: I61a6c9be19ac1c95863cf6a8435e19790c8bfc9a
Reviewed-on: https://go-review.googlesource.com/16926
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/16987
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
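A toy illustration of the underflow in question: the memory stats are unsigned counters, so freeing against a stat of 0 wraps around (the real runtime detects this and throws), while pre-setting the fake stat to the allocation size makes the decrement land back at exactly 0. This is a model of the arithmetic, not the runtime's sysFree.

```go
package main

import "fmt"

// sysFreeStat models the stat bookkeeping done when memory is released:
// unsigned subtraction, which wraps if *stat < n.
func sysFreeStat(stat *uint64, n uint64) {
	*stat -= n
}

func main() {
	var bad uint64 // fake stat left at 0 (the bug)
	sysFreeStat(&bad, 4096)
	fmt.Println(bad) // 18446744073709547520: the underflow the runtime panics on

	good := uint64(4096) // fake stat pre-set to the allocation size (the fix)
	sysFreeStat(&good, 4096)
	fmt.Println(good) // 0
}
```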
-
Michael Hudson-Doyle authored
A forward port of https://codereview.appspot.com/124900043/ which somehow got lost somewhere.

Fixes #13024

Change-Id: Iab128899e65c51d90f6704e3e1b2fc9326e3a1c2
Reviewed-on: https://go-review.googlesource.com/16853
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-on: https://go-review.googlesource.com/16986
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Austin Clements authored
sigprof tracebacks the stack across systemstack switches to make profile tracebacks more complete. However, it does this even if the user stack is currently being copied, which means it may be in an inconsistent state that will cause the traceback to panic.

One specific way this can happen is during stack shrinking. Some goroutine blocks for STW, then enters gchelper, which then assists with root marking. If that root marking happens to pick the original goroutine and its stack needs to be shrunk, it will begin to copy that stack. During this copy, the stack is generally inconsistent and, in particular, the actual locations of the stack barriers and their recorded locations are temporarily out of sync. If a SIGPROF happens during this inconsistency, it will walk the stack all the way back to the blocked goroutine and panic when it fails to unwind the stack barriers.

Fix this by disallowing jumping to the user stack during SIGPROF if that user stack is in the process of being copied.

Fixes #12932.

Change-Id: I9ef694c2c01e3653e292ce22612418dd3daff1b4
Reviewed-on: https://go-review.googlesource.com/16819
Reviewed-by: Daniel Morsing <daniel.morsing@gmail.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/16985
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Alex Brainman authored
Fixes #12301

Change-Id: I8d01ec9551c6cff7e6129e06a7deb36a3be9de41
Reviewed-on: https://go-review.googlesource.com/16751
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/16984
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Michael Hudson-Doyle authored
sradi and sradi. encode the top bit of their immediate argument apart from the rest of it, but the code only handled the sradi case. I'm pretty sure this is the only instruction missing (a couple of the rotate instructions encode their immediate the same way, but their handling looks OK).

This fixes the failure of "GOARCH=amd64 ~/go/bin/go install -v runtime" as reported in the bug.

Fixes #11987

Change-Id: I0cdefcd7a04e0e8fce45827e7054ffde9a83f589
Reviewed-on: https://go-review.googlesource.com/16710
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/16983
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Ian Lance Taylor authored
The GNU binutils recently picked up support for new 386/amd64 relocations. Add support for them in the Go linker when doing an internal link.

The 386 relocation R_386_GOT32X was proposed in https://groups.google.com/forum/#!topic/ia32-abi/GbJJskkid4I . It can be treated as identical to the R_386_GOT32 relocation.

The amd64 relocations R_X86_64_GOTPCRELX and R_X86_64_REX_GOTPCRELX were proposed in https://groups.google.com/forum/#!topic/x86-64-abi/n9AWHogmVY0 . They can both be treated as identical to the R_X86_64_GOTPCREL relocation.

The purpose of the new relocations is to permit additional linker relaxations in some cases. We do not attempt to support those cases.

While we're at it, remove the unused and in some cases out of date _COUNT names from ld/elf.go.

Fixes #13114.

Change-Id: I34ef07f6fcd00cdd2996038ecf46bb77a49e968b
Reviewed-on: https://go-review.googlesource.com/16529
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/16982
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
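Conceptually, the change folds the new relocation types into the handling of the old ones. The sketch below illustrates that shape for the amd64 case (the 386 R_386_GOT32X case is analogous); the switch and the constant names are illustrative, not cmd/link's code, with the numeric values taken from the psABI proposals referenced above.

```go
package main

import "fmt"

const (
	rX86_64_GOTPCREL      = 9  // existing relocation
	rX86_64_GOTPCRELX     = 41 // new, treated like GOTPCREL
	rX86_64_REX_GOTPCRELX = 42 // new, treated like GOTPCREL
)

// classify maps a relocation type to the handling it receives; the new
// relaxation-enabling variants get the plain GOT-relative treatment.
func classify(rtype int) string {
	switch rtype {
	case rX86_64_GOTPCREL, rX86_64_GOTPCRELX, rX86_64_REX_GOTPCRELX:
		return "got-pc-relative"
	default:
		return "other"
	}
}

func main() {
	for _, r := range []int{9, 41, 42, 1} {
		fmt.Println(r, classify(r))
	}
}
```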
-
Michael Hudson-Doyle authored
ppc64 codegen assumes that it is OK to stomp on r31 at any time, but it is not excluded from the set of registers that regopt is allowed to use.

Fixes #12597

Change-Id: I29c7655e32abd22f3c21d88427b73e4fca055233
Reviewed-on: https://go-review.googlesource.com/15245
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/16981
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
This fixes an issue where the runtime panics with "out of memory" or "cannot allocate memory" even though there's ample memory by reducing the number of memory mappings created by the memory allocator.

Commit 7e1b61c7 worked around issue #8832 where Linux's transparent huge page support could dramatically increase the RSS of a Go process by setting the MADV_NOHUGEPAGE flag on any regions of pages released to the OS with MADV_DONTNEED. This had the side effect of also increasing the number of VMAs (memory mappings) in a Go address space because a separate VMA is needed for every region of the virtual address space with different flags. Unfortunately, by default, Linux limits the number of VMAs in an address space to 65530, and a large heap can quickly reach this limit when the runtime starts scavenging memory.

This commit dramatically reduces the number of VMAs. It does this primarily by only adjusting the huge page flag at huge page granularity. With this change, on amd64, even a pessimal heap that alternates between MADV_NOHUGEPAGE and MADV_HUGEPAGE must reach 128GB to reach the VMA limit. Because of this rounding to huge page granularity, this change is also careful to leave large used and unused regions huge page-enabled.

This change reduces the maximum number of VMAs during the runtime benchmarks with GODEBUG=scavenge=1 from 692 to 49.

Fixes #12233.

Change-Id: Ic397776d042f20d53783a1cacf122e2e2db00584
Reviewed-on: https://go-review.googlesource.com/15191
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-on: https://go-review.googlesource.com/16980
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
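A sketch of the granularity idea: only flip the huge page flag on whole huge pages, so each flag change creates at most one VMA boundary per huge page. The 2 MB huge page size and the helper name are assumptions for illustration, not the runtime's code.

```go
package main

import "fmt"

const hugePageSize = 2 << 20 // typical x86-64 transparent huge page size

// hugePageRange rounds [addr, addr+n) inward to huge page boundaries and
// returns the aligned sub-range whose flag may safely be changed; regions
// smaller than a huge page are left alone.
func hugePageRange(addr, n uintptr) (start, size uintptr) {
	start = (addr + hugePageSize - 1) &^ (hugePageSize - 1) // round start up
	end := (addr + n) &^ (hugePageSize - 1)                 // round end down
	if end <= start {
		return 0, 0
	}
	return start, end - start
}

func main() {
	start, size := hugePageRange(0x40001000, 5<<20)
	fmt.Printf("madvise range: %#x..%#x (%d MB)\n", start, start+size, size>>20)
}
```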
-
Francisco Claude authored
When parsing multipart data, if the delimiter appears but isn't followed by -- or \n or \r\n, the reader assumes the buffered data can be consumed. This is incorrect when the peek buffer ends with --delimiter- (a partial match of the closing delimiter).

Fixes #12662

Change-Id: I329556a9a206407c0958289bf7a9009229120bb9
Reviewed-on: https://go-review.googlesource.com/14652
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-on: https://go-review.googlesource.com/16969
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
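A simplified model of the hazard (not the mime/multipart implementation): before handing buffered bytes to the caller, hold back any trailing bytes that are still a prefix of the full delimiter, since a peek buffer ending in "\r\n--boundary-" may yet turn out to be the terminator "\r\n--boundary--".

```go
package main

import (
	"bytes"
	"fmt"
)

// safeCount reports how many leading bytes of peek can be consumed as
// body data without risking eating the start of the delimiter.
func safeCount(peek, delim []byte) int {
	limit := len(delim) - 1
	if limit > len(peek) {
		limit = len(peek)
	}
	// Hold back the longest suffix of peek that is a prefix of delim.
	for k := limit; k > 0; k-- {
		if bytes.HasPrefix(delim, peek[len(peek)-k:]) {
			return len(peek) - k
		}
	}
	return len(peek)
}

func main() {
	delim := []byte("\r\n--boundary--")
	fmt.Println(safeCount([]byte("body data\r\n--boundary-"), delim)) // 9: partial delimiter held back
	fmt.Println(safeCount([]byte("just body data"), delim))           // 14: everything is safe
}
```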
-
David Crawshaw authored
The _rt0_arm_darwin_lib entrypoint has to conform to the darwin ARMv7 calling convention, which requires functions to preserve the value of R11. Go uses R11 as the liblink REGTMP register, so save it manually. Also avoid using R4, which is also callee-save.

Fixes #12590

Change-Id: I9c3b374e330f81ff8fc9c01fa20505a33ddcf39a
Reviewed-on: https://go-review.googlesource.com/14603
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/16968
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Ian Lance Taylor authored
Glibc uses some special signals for special thread operations. These signals will be used in programs that use cgo and invoke certain glibc functions, such as setgid. In order for this to work, these signals need to not be masked by any thread. Before this change, they were being masked by programs that used os/signal.Notify, because it carefully masks all non-thread-specific signals in all threads so that a dedicated thread will collect and report those signals (see ensureSigM in signal1_unix.go).

This change adds the two glibc special signals to the set of signals that are unmasked in each thread.

Fixes #12498.

Change-Id: I797d71a099a2169c186f024185d44a2e1972d4ad
Reviewed-on: https://go-review.googlesource.com/14297
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-on: https://go-review.googlesource.com/16967
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
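A schematic model of the fix (not the runtime's sigset code): when the signal-watching thread masks "all" signals, the two signals glibc reserves for internal thread operations on Linux (SIGCANCEL = 32 for pthread_cancel, SIGSETXID = 33 for broadcasting setuid/setgid) are removed from the mask so no thread ever blocks them. The sigset representation below is a toy.

```go
package main

import "fmt"

const (
	sigCancel = 32 // used internally by glibc for pthread_cancel
	sigSetXID = 33 // used internally by glibc to broadcast setuid/setgid
)

type sigset uint64 // bit i-1 set => signal i blocked (toy representation)

// maskAllButGlibc blocks every signal except the two glibc needs.
func maskAllButGlibc() sigset {
	s := ^sigset(0)            // start with every signal blocked
	s &^= 1 << (sigCancel - 1) // leave SIGCANCEL deliverable
	s &^= 1 << (sigSetXID - 1) // leave SIGSETXID deliverable
	return s
}

func main() {
	s := maskAllButGlibc()
	fmt.Printf("signal 32 blocked: %v\n", s&(1<<(sigCancel-1)) != 0)
	fmt.Printf("signal 33 blocked: %v\n", s&(1<<(sigSetXID-1)) != 0)
	fmt.Printf("signal  2 blocked: %v\n", s&(1<<(2-1)) != 0)
}
```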
-
Shenghou Ma authored
It's because runtime links to ntdll, and ntdll exports a couple of incompatible libc functions. We must link to msvcrt first and then try ntdll.

Fixes #12030.

Change-Id: I0105417bada108da55f5ae4482c2423ac7a92957
Reviewed-on: https://go-review.googlesource.com/14472
Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/16966
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
Keith Randall authored
We compute whether two keys k1 and k2 in a map literal are duplicates by constructing the expression OEQ(k1, k2) and calling the constant expression evaluator on that expression, then extracting the boolean result. Unfortunately, the constant expression evaluator can fail for various reasons. I'm not really sure why it is dying in the case of 12536, but to be safe we should use the result only if we get a constant back (if we get a constant back, it must be boolean).

This probably isn't a permanent fix, but it should be good enough for 1.5.2. A permanent fix would be to ensure that the constant expression evaluator can always work for map literal keys, and if not the compiler should generate an error saying that the key isn't a constant (or isn't comparable to some specific other key).

This patch has the effect of allowing the map literal to compile when constant eval of the OEQ fails. If the keys are really equal (which the map impl will notice at runtime), one will overwrite the other in the resulting map. Not great, but better than a compiler crash.

Fixes #12536

Change-Id: Ic151a5e3f131c2e8efa0c25c9218b431c55c1b30
Reviewed-on: https://go-review.googlesource.com/14400
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/16965
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
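A small demonstration of the fallback behavior described in the last paragraph: when duplicate literal keys are not rejected at compile time, the map still builds and the later entry overwrites the earlier one at run time. The keys here are deliberately non-constant so the compile-time duplicate check does not apply at all; it is an analogy for the failure mode, not a reproduction of issue 12536.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Two key expressions that denote the same value but are not
	// constants, so the compiler cannot flag them as duplicates.
	k1 := strings.ToLower("GOPHER")
	k2 := "gopher"
	m := map[string]string{
		k1: "first",
		k2: "second", // in practice the later entry wins
	}
	fmt.Println(len(m), m["gopher"])
}
```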
-
- 13 Nov, 2015 4 commits
-
-
Keith Randall authored
Make sure that we're moving or zeroing pointers atomically. Anything that is a multiple of pointer size and at least pointer aligned might have pointers in it. All the code looks ok except for the 1-pointer-sized moves.

Fixes #13160
Update #12552

Change-Id: Ib97d9b918fa9f4cc5c56c67ed90255b7fdfb7b45
Reviewed-on: https://go-review.googlesource.com/16668
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/16910
Reviewed-by: Russ Cox <rsc@golang.org>
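A sketch of the invariant being enforced: if a block is at least pointer-aligned and a multiple of the pointer size, it is copied one word at a time so the garbage collector can never observe a half-written pointer. This is a high-level Go model only; the real memmove and memclr are assembly.

```go
package main

import (
	"fmt"
	"unsafe"
)

const ptrSize = unsafe.Sizeof(uintptr(0))

// wordwiseCopy copies src into dst, using word-sized stores whenever the
// block could legally hold pointers, and plain byte copies otherwise.
func wordwiseCopy(dst, src []byte) {
	n := uintptr(len(src))
	dp := unsafe.Pointer(&dst[0])
	sp := unsafe.Pointer(&src[0])
	if n%ptrSize == 0 && uintptr(dp)%ptrSize == 0 && uintptr(sp)%ptrSize == 0 {
		// Might contain pointers: never tear a word.
		for i := uintptr(0); i < n; i += ptrSize {
			*(*uintptr)(unsafe.Add(dp, i)) = *(*uintptr)(unsafe.Add(sp, i))
		}
		return
	}
	// Cannot contain aligned pointers: byte-at-a-time is fine.
	copy(dst, src)
}

func main() {
	src := []byte("0123456789abcdef")
	dst := make([]byte, len(src))
	wordwiseCopy(dst, src)
	fmt.Println(string(dst))
}
```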
-
Michael Hudson-Doyle authored
[release-branch.go1.5] runtime: adjust the arm64 memmove and memclr to operate by word as much as they can

Not only is this an obvious optimization:

benchmark                          old MB/s     new MB/s     speedup
BenchmarkMemmove1-4                35.35        29.65        0.84x
BenchmarkMemmove2-4                63.78        52.53        0.82x
BenchmarkMemmove3-4                89.72        73.96        0.82x
BenchmarkMemmove4-4                109.94       95.73        0.87x
BenchmarkMemmove5-4                127.60       112.80       0.88x
BenchmarkMemmove6-4                143.59       126.67       0.88x
BenchmarkMemmove7-4                157.90       138.92       0.88x
BenchmarkMemmove8-4                167.18       231.81       1.39x
BenchmarkMemmove9-4                175.23       252.07       1.44x
BenchmarkMemmove10-4               165.68       261.10       1.58x
BenchmarkMemmove11-4               174.43       263.31       1.51x
BenchmarkMemmove12-4               180.76       267.56       1.48x
BenchmarkMemmove13-4               189.06       284.93       1.51x
BenchmarkMemmove14-4               186.31       284.72       1.53x
BenchmarkMemmove15-4               195.75       281.62       1.44x
BenchmarkMemmove16-4               202.96       439.23       2.16x
BenchmarkMemmove32-4               264.77       775.77       2.93x
BenchmarkMemmove64-4               306.81       1209.64      3.94x
BenchmarkMemmove128-4              357.03       1515.41      4.24x
BenchmarkMemmove256-4              380.77       2066.01      5.43x
BenchmarkMemmove512-4              385.05       2556.45      6.64x
BenchmarkMemmove1024-4             381.23       2804.10      7.36x
BenchmarkMemmove2048-4             379.06       2814.83      7.43x
BenchmarkMemmove4096-4             387.43       3064.96      7.91x
BenchmarkMemmoveUnaligned1-4       28.91        25.40        0.88x
BenchmarkMemmoveUnaligned2-4       56.13        47.56        0.85x
BenchmarkMemmoveUnaligned3-4       74.32        69.31        0.93x
BenchmarkMemmoveUnaligned4-4       97.02        83.58        0.86x
BenchmarkMemmoveUnaligned5-4       110.17       103.62       0.94x
BenchmarkMemmoveUnaligned6-4       124.95       113.26       0.91x
BenchmarkMemmoveUnaligned7-4       142.37       130.82       0.92x
BenchmarkMemmoveUnaligned8-4       151.20       205.64       1.36x
BenchmarkMemmoveUnaligned9-4       166.97       215.42       1.29x
BenchmarkMemmoveUnaligned10-4      148.49       221.22       1.49x
BenchmarkMemmoveUnaligned11-4      159.47       239.57       1.50x
BenchmarkMemmoveUnaligned12-4      163.52       247.32       1.51x
BenchmarkMemmoveUnaligned13-4      167.55       256.54       1.53x
BenchmarkMemmoveUnaligned14-4      175.12       251.03       1.43x
BenchmarkMemmoveUnaligned15-4      192.10       267.13       1.39x
BenchmarkMemmoveUnaligned16-4      190.76       378.87       1.99x
BenchmarkMemmoveUnaligned32-4      259.02       562.98       2.17x
BenchmarkMemmoveUnaligned64-4      317.72       842.44       2.65x
BenchmarkMemmoveUnaligned128-4     355.43       1274.49      3.59x
BenchmarkMemmoveUnaligned256-4     378.17       1815.74      4.80x
BenchmarkMemmoveUnaligned512-4     362.15       2180.81      6.02x
BenchmarkMemmoveUnaligned1024-4    376.07       2453.58      6.52x
BenchmarkMemmoveUnaligned2048-4    381.66       2568.32      6.73x
BenchmarkMemmoveUnaligned4096-4    398.51       2669.36      6.70x
BenchmarkMemclr5-4                 113.83       107.93       0.95x
BenchmarkMemclr16-4                223.84       389.63       1.74x
BenchmarkMemclr64-4                421.99       1209.58      2.87x
BenchmarkMemclr256-4               525.94       2411.58      4.59x
BenchmarkMemclr4096-4              581.66       4372.20      7.52x
BenchmarkMemclr65536-4             565.84       4747.48      8.39x
BenchmarkGoMemclr5-4               194.63       160.31       0.82x
BenchmarkGoMemclr16-4              295.30       630.07       2.13x
BenchmarkGoMemclr64-4              480.24       1884.03      3.92x
BenchmarkGoMemclr256-4             540.23       2926.49      5.42x

but it turns out that it's necessary to avoid the GC seeing partially written pointers.

It's of course possible to be more sophisticated (using ldp/stp to move 16 bytes at a time in the core loop and unrolling the tail copying loops being the obvious ideas) but I wanted something simple and (reasonably) obviously correct.

Fixes #12552

Change-Id: Iaeaf8a812cd06f4747ba2f792de1ded738890735
Reviewed-on: https://go-review.googlesource.com/14813
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-on: https://go-review.googlesource.com/16909
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
Currently, amd64p32's memmove and memclr use 8 byte writes as much as possible and 1 byte writes for the tail of the object. However, if an object ends with a 4 byte pointer at an 8 byte aligned offset, this may copy/zero the pointer field one byte at a time, allowing the garbage collector to observe a partially copied pointer.

Fix this by using 4 byte writes instead of 8 byte writes.

Updates #12552.

Change-Id: I13324fd05756fb25ae57e812e836f0a975b5595c
Reviewed-on: https://go-review.googlesource.com/15370
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-on: https://go-review.googlesource.com/16908
Reviewed-by: Russ Cox <rsc@golang.org>
-
Michael Hudson-Doyle authored
[release-branch.go1.5] runtime: adjust the ppc64x memmove and memclr to copy by word as much as it can

Issue #12552 can happen on ppc64 too, although much less frequently in my testing. I'm fairly sure this fixes it (2 out of 200 runs of oracle.test failed without this change and 0 of 200 failed with it). It's also a lot faster for large moves/clears:

name              old speed      new speed       delta
Memmove1-6        157MB/s ± 9%   144MB/s ± 0%    -8.20%  (p=0.004 n=10+9)
Memmove2-6        281MB/s ± 1%   249MB/s ± 1%   -11.53%  (p=0.000 n=10+10)
Memmove3-6        376MB/s ± 1%   328MB/s ± 1%   -12.64%  (p=0.000 n=10+10)
Memmove4-6        475MB/s ± 4%   345MB/s ± 1%   -27.28%  (p=0.000 n=10+8)
Memmove5-6        540MB/s ± 1%   393MB/s ± 0%   -27.21%  (p=0.000 n=10+10)
Memmove6-6        609MB/s ± 0%   423MB/s ± 0%   -30.56%  (p=0.000 n=9+10)
Memmove7-6        659MB/s ± 0%   468MB/s ± 0%   -28.99%  (p=0.000 n=8+10)
Memmove8-6        705MB/s ± 0%   1295MB/s ± 1%  +83.73%  (p=0.000 n=9+9)
Memmove9-6        740MB/s ± 1%   1241MB/s ± 1%  +67.61%  (p=0.000 n=10+8)
Memmove10-6       780MB/s ± 0%   1162MB/s ± 1%  +48.95%  (p=0.000 n=10+9)
Memmove11-6       811MB/s ± 0%   1180MB/s ± 0%  +45.58%  (p=0.000 n=8+9)
Memmove12-6       820MB/s ± 1%   1073MB/s ± 1%  +30.83%  (p=0.000 n=10+9)
Memmove13-6       849MB/s ± 0%   1068MB/s ± 1%  +25.87%  (p=0.000 n=10+10)
Memmove14-6       877MB/s ± 0%   911MB/s ± 0%    +3.83%  (p=0.000 n=10+10)
Memmove15-6       893MB/s ± 0%   922MB/s ± 0%    +3.25%  (p=0.000 n=10+9)
Memmove16-6       897MB/s ± 1%   2418MB/s ± 1%  +169.67% (p=0.000 n=10+9)
Memmove32-6       908MB/s ± 0%   3927MB/s ± 2%  +332.64% (p=0.000 n=10+8)
Memmove64-6       1.11GB/s ± 0%  5.59GB/s ± 0%  +404.64% (p=0.000 n=9+9)
Memmove128-6      1.25GB/s ± 0%  6.71GB/s ± 2%  +437.49% (p=0.000 n=9+10)
Memmove256-6      1.33GB/s ± 0%  7.25GB/s ± 1%  +445.06% (p=0.000 n=10+10)
Memmove512-6      1.38GB/s ± 0%  8.87GB/s ± 0%  +544.43% (p=0.000 n=10+10)
Memmove1024-6     1.40GB/s ± 0%  10.00GB/s ± 0% +613.80% (p=0.000 n=10+10)
Memmove2048-6     1.41GB/s ± 0%  10.65GB/s ± 0% +652.95% (p=0.000 n=9+10)
Memmove4096-6     1.42GB/s ± 0%  11.01GB/s ± 0% +675.37% (p=0.000 n=8+10)
Memclr5-6         269MB/s ± 1%   264MB/s ± 0%    -1.80%  (p=0.000 n=10+10)
Memclr16-6        600MB/s ± 0%   887MB/s ± 1%   +47.83%  (p=0.000 n=10+10)
Memclr64-6        1.06GB/s ± 0%  2.91GB/s ± 1%  +174.58% (p=0.000 n=8+10)
Memclr256-6       1.32GB/s ± 0%  6.58GB/s ± 0%  +399.86% (p=0.000 n=9+10)
Memclr4096-6      1.42GB/s ± 0%  10.90GB/s ± 0% +668.03% (p=0.000 n=8+10)
Memclr65536-6     1.43GB/s ± 0%  11.37GB/s ± 0% +697.83% (p=0.000 n=9+8)
GoMemclr5-6       359MB/s ± 0%   360MB/s ± 0%    +0.46%  (p=0.000 n=10+10)
GoMemclr16-6      750MB/s ± 0%   1264MB/s ± 1%  +68.45%  (p=0.000 n=10+10)
GoMemclr64-6      1.17GB/s ± 0%  3.78GB/s ± 1%  +223.58% (p=0.000 n=10+9)
GoMemclr256-6     1.35GB/s ± 0%  7.47GB/s ± 0%  +452.44% (p=0.000 n=10+10)

Update #12552

Change-Id: I7192e9deb9684a843aed37f58a16a4e29970e893
Reviewed-on: https://go-review.googlesource.com/14840
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-on: https://go-review.googlesource.com/16907
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 15 Oct, 2015 1 commit
-
-
Austin Clements authored
Commit c257dfb1 attempted to fix recursive allocation in gcAssistAlloc; however, it only reduced it: setting gp.gcalloc to 0 isn't sufficient to disable assists at the beginning of the GC cycle when gp.gcscanwork is also small or zero.

Fix this recursion more completely by setting gcalloc to a sentinel value that directly disables assists.

Fixes #12894 (again).

Change-Id: I9599566222d8f540d0b39806846bfc702e6666e5
Reviewed-on: https://go-review.googlesource.com/15891
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
-
- 12 Oct, 2015 1 commit
-
-
Austin Clements authored
If gcAssistAlloc is unable to steal or perform enough scan work, it calls timeSleep, which allocates. If this allocation requires obtaining a new span, it will in turn attempt to assist GC. Since there's likely still no way to satisfy the assist, it will sleep again, and so on, leading to potentially deep (not infinite, but also not bounded) recursion.

Fix this by disallowing assists during the timeSleep.

This same problem was fixed on master by 65aa2da6. That commit built on several other changes and hence can't be directly cherry-picked. This commit implements the same idea.

Fixes #12894.

Change-Id: I152977eb1d0a3005c42ff3985d58778f054a86d4
Reviewed-on: https://go-review.googlesource.com/15720
Reviewed-by: Rick Hudson <rlh@golang.org>
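A schematic of the re-entrancy guard described above (not the runtime's code; the names and the depth cutoff are made up for illustration): marking the goroutine before the sleep means the allocation performed inside the sleep cannot trigger another assist.

```go
package main

import "fmt"

type g struct {
	assistDisabled bool // sentinel: assists are off for this goroutine
}

// assistAlloc models gcAssistAlloc: if it cannot find scan work it sleeps,
// and the sleep itself allocates, which re-enters assistAlloc.
func assistAlloc(gp *g, guarded bool, depth int) {
	if gp.assistDisabled {
		return // allocation inside the sleep: skip the assist entirely
	}
	if depth > 5 {
		fmt.Println("deep recursion (the unguarded behavior)")
		return
	}
	if guarded {
		gp.assistDisabled = true
		defer func() { gp.assistDisabled = false }()
	}
	timeSleep(gp, guarded, depth) // sleeping allocates...
}

func timeSleep(gp *g, guarded bool, depth int) {
	assistAlloc(gp, guarded, depth+1) // ...so it tries to assist again
}

func main() {
	assistAlloc(&g{}, false, 0) // recurses until the artificial cutoff
	assistAlloc(&g{}, true, 0)  // returns after a single nested call
	fmt.Println("guarded assist returned promptly")
}
```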
-
- 03 Oct, 2015 1 commit
-
-
Austin Clements authored
Change-Id: I8da1c7a86adf6672e5e5c44cbab422706833c1da
Reviewed-on: https://go-review.googlesource.com/15350
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-on: https://go-review.googlesource.com/15244
-
- 23 Sep, 2015 1 commit
-
-
Chris Broadfoot authored
Change-Id: Ib1bfe4038e2b125a31acd9ff7772e462b0a6358f
Reviewed-on: https://go-review.googlesource.com/14852
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-on: https://go-review.googlesource.com/14854
-
- 09 Sep, 2015 2 commits
-
-
Chris Broadfoot authored
Change-Id: I98d9fefd923e2a35031385045382ba372f1d614a
Reviewed-on: https://go-review.googlesource.com/14401
Reviewed-by: Andrew Gerrand <adg@golang.org>
-
Chris Broadfoot authored
Change-Id: I56452559acc432e06c15844d3f25dbeacafe77b7
Reviewed-on: https://go-review.googlesource.com/14402
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-on: https://go-review.googlesource.com/14403
-
- 08 Sep, 2015 7 commits
-
-
Rob Pike authored
These instructions are special cases that were missed in the translation. The second argument must go into the Reg field not the To field.

Fixes #12458
For Go 1.5.1

Change-Id: Iad57c60c7e38e3bcfafda483ed5037ce670e8816
Reviewed-on: https://go-review.googlesource.com/14183
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-on: https://go-review.googlesource.com/14358
Reviewed-by: Chris Broadfoot <cbro@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
-
Rob Pike authored
In short, %c should just give you the next rune, period. Apparently this is the design. I use the term loosely.

Fixes #12275

Change-Id: I6f30bed442c0e88eac2244d465c7d151b29cf393
Reviewed-on: https://go-review.googlesource.com/13821
Reviewed-by: Andrew Gerrand <adg@golang.org>
Reviewed-on: https://go-review.googlesource.com/14395
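A quick demonstration of the behavior this pins down, using today's fmt scanning API: every scanning verb except %c skips leading spaces, whereas %c consumes exactly the next rune, whatever it is.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	var a, b, c rune
	// %c does not skip whitespace, so the second verb reads the space.
	n, err := fmt.Fscanf(strings.NewReader("x y"), "%c%c%c", &a, &b, &c)
	fmt.Println(n, err)               // 3 <nil>
	fmt.Printf("%q %q %q\n", a, b, c) // 'x' ' ' 'y'
}
```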
-
Rob Pike authored
Fixes #12288. For inclusion in the 1.5.1 release.

Change-Id: I9354b7eaa76000498465c4a5cbab7246de9ecb7c
Reviewed-on: https://go-review.googlesource.com/14382
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/14394
-
Brad Fitzpatrick authored
Some commits made by Aram from his personal email address are actually copyright Oracle:

a77fcb3f net: fix comment in sendFile
b0e71f46 net: link with networking libraries when net package is in use
92e959a4 syscall, net: use sendfile on Solaris
db8d5b76 net: try to fix setKeepAlivePeriod on Solaris
fe5ef5c9 runtime, syscall: link Solaris binaries directly instead of using dlopen/dlsym
2b90c3e8 go/build: enable cgo by default on solaris/amd64
2d18ab75 doc/progs: disable cgo tests that use C.Stdout on Solaris
2230e9d2 misc/cgo: add various solaris build lines
649c7b6d net: add cgo support for Solaris
24396dae os/user: small fixes for Solaris
121489cb runtime/cgo: add cgo support for solaris/amd64
83b25d93 cmd/ld: make .rela and .rela.plt sections contiguous
c94f1f79 runtime: always load address of libcFunc on Solaris
e481aac0 cmd/6l: use .plt instead of .got on Solaris

See bug for clarification.

Fixes #12452

Change-Id: I0aeb1b46c0c7d09c5c736e383ecf40240d2cf85f
Reviewed-on: https://go-review.googlesource.com/14380
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-on: https://go-review.googlesource.com/14393
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Chris Broadfoot authored
This reverts commit f265044a.

Change-Id: I454f9da3a40d6724ab106aae904b8e77756aae99
Reviewed-on: https://go-review.googlesource.com/14383
Run-TryBot: Chris Broadfoot <cbro@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
Currently the stack barrier stub blindly unwinds the next stack barrier from the G's stack barrier array without checking that it's the right stack barrier. If through some bug the stack barrier array position gets out of sync with where we actually are on the stack, this could return to the wrong PC, which would lead to difficult to debug crashes. To address this, this commit adds a check to the amd64 stack barrier stub that it's unwinding the correct stack barrier.

Updates #12238.

Change-Id: If824d95191d07e2512dc5dba0d9978cfd9f54e02
Reviewed-on: https://go-review.googlesource.com/13948
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-on: https://go-review.googlesource.com/14241
Reviewed-by: Austin Clements <austin@google.com>
-
Brad Fitzpatrick authored
Accepting a request with a nil body was never explicitly supported but happened to work in the past. This doesn't happen in most cases because usually people pass a Server's incoming Request to the ReverseProxy's ServeHTTP method, and incoming server requests are guaranteed to have non-nil bodies. Still, it's a regression, so fix.

Fixes #12344

Change-Id: Id9a5a47aea3f2875d195b66c9a5f8581c4ca2aed
Reviewed-on: https://go-review.googlesource.com/13935
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-on: https://go-review.googlesource.com/14245
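A small usage example of the scenario in question, sketched against the current net/http/httputil API: calling a ReverseProxy's ServeHTTP directly with a hand-built GET request whose Body is nil (http.NewRequest leaves Body nil when no body is supplied), rather than a server-provided request.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello from backend")
	}))
	defer backend.Close()

	target, err := url.Parse(backend.URL)
	if err != nil {
		panic(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	// A manually constructed request: for a GET with no body,
	// req.Body is nil, which is the case the fix re-allows.
	req, err := http.NewRequest("GET", backend.URL, nil)
	if err != nil {
		panic(err)
	}
	rec := httptest.NewRecorder()
	proxy.ServeHTTP(rec, req)
	fmt.Println(rec.Code, rec.Body.String())
}
```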
-