- 29 Apr, 2016 25 commits
-
Michael Munday authored
This commit adds the cbcEncAble and cbcDecAble interfaces that can be implemented by block ciphers that support an optimized implementation of CBC. This is similar to what is done for GCM with the gcmAble interface. The cbcEncAble, cbcDecAble and gcmAble interfaces all now have tests to ensure they are detected correctly in the cipher package.

name              old speed     new speed      delta
AESCBCEncrypt1K   152MB/s ± 1%  1362MB/s ± 0%  +795.59%  (p=0.000 n=10+9)
AESCBCDecrypt1K   143MB/s ± 1%  1362MB/s ± 0%  +853.00%  (p=0.000 n=10+9)

Change-Id: I715f686ab3686b189a3dac02f86001178fa60580
Reviewed-on: https://go-review.googlesource.com/22523
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Adam Langley <agl@golang.org>
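
The detection works the same way as the existing gcmAble check: the CBC constructor type-asserts the Block against an optional interface and uses the cipher's own implementation when present. A minimal sketch of that pattern inside crypto/cipher, assuming the interface shape described above (the generic-fallback helper name is hypothetical):

    // cbcEncAble can be implemented by a Block that provides an
    // optimized CBC encrypter (for example, one backed by a dedicated
    // CPU instruction).
    type cbcEncAble interface {
        NewCBCEncrypter(iv []byte) BlockMode
    }

    // NewCBCEncrypter uses the block cipher's own CBC implementation
    // when it offers one, and otherwise falls back to the generic code.
    func NewCBCEncrypter(b Block, iv []byte) BlockMode {
        if cbc, ok := b.(cbcEncAble); ok {
            return cbc.NewCBCEncrypter(iv) // optimized path
        }
        return newGenericCBCEncrypter(b, iv) // hypothetical generic fallback
    }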
-
Keith Randall authored
Fixes #15488

Change-Id: I054eb1e1c859de315e3cdbdef5428682bce693fd
Reviewed-on: https://go-review.googlesource.com/22609
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
-
Rick Hudson authored
This commit moves the GC from free list allocation to bit mark allocation. Instead of using the bitmaps generated during the mark phases to generate free lists and then using the free lists for allocation, we allocate directly from the bitmaps.

The change in the garbage benchmark:

name              old time/op  new time/op  delta
XBenchGarbage-12  2.22ms ± 1%  2.13ms ± 1%  -3.90%  (p=0.000 n=18+18)

Change-Id: I17f57233336f0ca5ef5404c3be4ecb443ab622aa
-
Rick Hudson authored
nextFreeFast is currently not inlined by the compiler due to its size and complexity. This CL simplifies nextFreeFast by letting the slow path (nextFree) handle the corner cases.

Change-Id: Ia9c5d1a7912bcb4bec072f5fd240f0e0bafb20e4
Reviewed-on: https://go-review.googlesource.com/22598
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
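
After the simplification, the fast path is essentially one count-trailing-zeros over the span's cached allocation-bitmap word, with every corner case answered by returning 0 so the caller falls through to the slow path. A hedged sketch of that shape, using the runtime's names but simplified from its actual code:

    // nextFreeFast returns the next free object in s's cached bitmap
    // word (allocCache), or 0 to send the caller to nextFree.
    func nextFreeFast(s *mspan) gclinkptr {
        theBit := sys.Ctz64(s.allocCache) // lowest 1 bit = next free object
        if theBit < 64 {
            result := s.freeindex + uintptr(theBit)
            if result < s.nelems {
                s.allocCache >>= theBit + 1 // consume the bit
                s.freeindex = result + 1
                s.allocCount++
                return gclinkptr(result*s.elemsize + s.base())
            }
        }
        return 0 // exhausted cache or end of span: let nextFree handle it
    }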
-
David Chase authored
This is necessary to avoid disrupting the go1 suite and gives us a place to put other tests of basic compiler function and correctness.

Change-Id: I36933819ff2bfe6a2121fff2be9a98efd2123d9a
Reviewed-on: https://go-review.googlesource.com/22597
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
-
Keith Randall authored
Break really long lines. Add spacing to line up columns. In AMD64, put all the optimization rules after all the lowering rules.

Change-Id: I45cc7368bf278416e67f89e74358db1bd4326a93
Reviewed-on: https://go-review.googlesource.com/22470
Reviewed-by: David Chase <drchase@google.com>
-
Austin Clements authored
sweep used to skip mcentral.freeSpan (and its locking) if it didn't find any new free objects. We lost that optimization when the freed-object counting changed in dad83f7 to count total free objects instead of newly freed objects. The previous commit brings back counting of newly freed objects, so we can easily revive this optimization by checking that count (like we used to) instead of the total free object count.

Change-Id: I43658707a1c61674d0366124d5976b00d98741a9
Reviewed-on: https://go-review.googlesource.com/22596
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Commit 8dda1c4c changed the meaning of "nfree" in sweep from the number of newly freed objects to the total number of free objects in the span, but didn't update where sweep added nfree to c.local_nsmallfree. Hence, we're over-accounting the number of frees. This is causing TestArrayHash to fail with "too many allocs NNN - hash not balanced". Fix this by computing the number of newly freed objects and adding that to c.local_nsmallfree, so it behaves like it used to. Computing this requires a small tweak to mallocgc: apparently we've never set s.allocCount when allocating a large object; fix this by setting it to 1 so sweep doesn't get confused.

Change-Id: I31902ffd310110da4ffd807c5c06f1117b872dc8
Reviewed-on: https://go-review.googlesource.com/22595
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
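
The restored accounting is simple arithmetic: with nfree now meaning all free slots in the span after sweeping, the newly freed count is what was allocated before the sweep minus what is still allocated after it. A sketch under those definitions (names follow the commit message, not necessarily the runtime source):

    // s.allocCount      objects allocated in the span before this sweep
    // nfree             total free slots, counted from the mark bitmap
    // s.nelems - nfree  objects still allocated after this sweep
    nfreed := int(s.allocCount) - (int(s.nelems) - nfree)
    c.local_nsmallfree += uintptr(nfreed) // account only the newly freed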
-
Austin Clements authored
We broke tracing of freed objects in GODEBUG=allocfreetrace=1 mode when we removed the sweep over the mark bitmap. Fix it by re-introducing the sweep over the bitmap specifically if we're in allocfreetrace mode. This doesn't have to be even remotely efficient, since the overhead of allocfreetrace is huge anyway, so we can keep the code for this down to just a few lines.

Change-Id: I9e176b3b04c73608a0ea3068d5d0cd30760ebd40
Reviewed-on: https://go-review.googlesource.com/22592
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
-
Austin Clements authored
Currently we always zero objects when we allocate them. We used to have an optimization that would not zero objects that had not been allocated since the whole span was last zeroed (either by getting it from the system or by getting it from the heap, which does a bulk zero), but this depended on the sweeper clobbering the first two words of each object. Hence, we lost this optimization when the bitmap sweeper went away.

Re-introduce this optimization using a different mechanism. Each span already keeps a flag indicating that it just came from the OS or was just bulk zeroed by the mheap. We can simply use this flag to know when we don't need to zero an object. This is slightly less efficient than the old optimization: if a span gets allocated and partially used, then GC happens and the span gets returned to the mcentral, then the span gets re-acquired, the old optimization knew that it only had to re-zero the objects that had been reclaimed, whereas this optimization will re-zero everything. However, in this case, you're already paying for the garbage collection, and you've only wasted one zeroing of the span, so in practice there seems to be little difference. (If we did want to revive the full optimization, each span could keep track of a frontier beyond which all free slots are zeroed. I prototyped this and it didn't obviously do any better than the much simpler approach in this commit.)

This significantly improves BinaryTree17, which is allocation-heavy (and runs first, so most pages are already zeroed), and slightly improves everything else.

name              old time/op  new time/op  delta
XBenchGarbage-12  2.15ms ± 1%  2.14ms ± 1%  -0.80%  (p=0.000 n=17+17)

name                      old time/op  new time/op  delta
BinaryTree17-12           2.71s ± 1%   2.56s ± 1%   -5.73%  (p=0.000 n=18+19)
DivconstI64-12            1.70ns ± 1%  1.70ns ± 1%  ~       (p=0.562 n=18+18)
DivconstU64-12            1.74ns ± 2%  1.74ns ± 1%  ~       (p=0.394 n=20+20)
DivconstI32-12            1.74ns ± 0%  1.74ns ± 0%  ~       (all samples are equal)
DivconstU32-12            1.66ns ± 1%  1.66ns ± 0%  ~       (p=0.516 n=15+16)
DivconstI16-12            1.84ns ± 0%  1.84ns ± 0%  ~       (all samples are equal)
DivconstU16-12            1.82ns ± 0%  1.82ns ± 0%  ~       (all samples are equal)
DivconstI8-12             1.79ns ± 0%  1.79ns ± 0%  ~       (all samples are equal)
DivconstU8-12             1.60ns ± 0%  1.60ns ± 1%  ~       (p=0.603 n=17+19)
Fannkuch11-12             2.11s ± 1%   2.11s ± 0%   ~       (p=0.333 n=16+19)
FmtFprintfEmpty-12        45.1ns ± 4%  45.4ns ± 5%  ~       (p=0.111 n=20+20)
FmtFprintfString-12       134ns ± 0%   129ns ± 0%   -3.45%  (p=0.000 n=18+16)
FmtFprintfInt-12          131ns ± 1%   129ns ± 1%   -1.54%  (p=0.000 n=16+18)
FmtFprintfIntInt-12       205ns ± 2%   203ns ± 0%   -0.56%  (p=0.014 n=20+18)
FmtFprintfPrefixedInt-12  200ns ± 2%   197ns ± 1%   -1.48%  (p=0.000 n=20+18)
FmtFprintfFloat-12        256ns ± 1%   256ns ± 0%   -0.21%  (p=0.008 n=18+20)
FmtManyArgs-12            805ns ± 0%   804ns ± 0%   -0.19%  (p=0.001 n=18+18)
GobDecode-12              7.21ms ± 1%  7.14ms ± 1%  -0.92%  (p=0.000 n=19+20)
GobEncode-12              5.88ms ± 1%  5.88ms ± 1%  ~       (p=0.641 n=18+19)
Gzip-12                   218ms ± 1%   218ms ± 1%   ~       (p=0.271 n=19+18)
Gunzip-12                 37.1ms ± 0%  36.9ms ± 0%  -0.29%  (p=0.000 n=18+17)
HTTPClientServer-12       78.1µs ± 2%  77.4µs ± 2%  ~       (p=0.070 n=19+19)
JSONEncode-12             15.5ms ± 1%  15.5ms ± 0%  ~       (p=0.063 n=20+18)
JSONDecode-12             56.1ms ± 0%  55.4ms ± 1%  -1.18%  (p=0.000 n=19+18)
Mandelbrot200-12          4.05ms ± 0%  4.06ms ± 0%  +0.29%  (p=0.001 n=18+18)
GoParse-12                3.28ms ± 1%  3.21ms ± 1%  -2.30%  (p=0.000 n=20+20)
RegexpMatchEasy0_32-12    69.4ns ± 2%  69.3ns ± 1%  ~       (p=0.205 n=18+16)
RegexpMatchEasy0_1K-12    239ns ± 0%   239ns ± 0%   ~       (all samples are equal)
RegexpMatchEasy1_32-12    69.4ns ± 1%  69.4ns ± 1%  ~       (p=0.620 n=15+18)
RegexpMatchEasy1_1K-12    370ns ± 1%   369ns ± 2%   ~       (p=0.088 n=20+20)
RegexpMatchMedium_32-12   108ns ± 0%   108ns ± 0%   ~       (all samples are equal)
RegexpMatchMedium_1K-12   33.6µs ± 3%  33.5µs ± 3%  ~       (p=0.718 n=20+20)
RegexpMatchHard_32-12     1.68µs ± 1%  1.67µs ± 2%  ~       (p=0.316 n=20+20)
RegexpMatchHard_1K-12     50.5µs ± 3%  50.4µs ± 3%  ~       (p=0.659 n=20+20)
Revcomp-12                381ms ± 1%   381ms ± 1%   ~       (p=0.916 n=19+18)
Template-12               66.5ms ± 1%  65.8ms ± 2%  -1.08%  (p=0.000 n=20+20)
TimeParse-12              317ns ± 0%   319ns ± 0%   +0.48%  (p=0.000 n=19+12)
TimeFormat-12             338ns ± 0%   338ns ± 0%   ~       (p=0.124 n=19+18)
[Geo mean]                5.99µs       5.96µs       -0.54%

Change-Id: I638ffd9d9f178835bbfa499bac20bd7224f1a907
Reviewed-on: https://go-review.googlesource.com/22591
Reviewed-by: Rick Hudson <rlh@golang.org>
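
The mechanism amounts to one flag check at allocation time. A rough sketch of the idea, assuming a span-level "needs zeroing" flag as described above (not the runtime's exact code):

    // In mallocgc, when handing out object v of the given size from span s:
    if needzero && s.needzero != 0 {
        // The span has been used before, so the slot may hold stale data.
        memclr(unsafe.Pointer(v), size)
    }
    // Otherwise the span just came from the OS or was bulk-zeroed by the
    // mheap, so every free slot is already zero and the clear is skipped.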
-
Nigel Tao authored
This makes compress/flate's version of Snappy diverge from the upstream golang/snappy version, but the latter has a goal of matching C++ snappy output byte-for-byte. Both C++ and the asm version of golang/snappy can use a smaller N for the O(N) zero-initialization of the hash table when the input is small, even if the pure Go golang/snappy algorithm cannot: "var table [tableSize]uint16" zeroes all tableSize elements. For this package, we don't have the match-C++-snappy goal, so we can use a different (constant) hash table size.

This is a small win, in terms of throughput and output size, but it also enables us to re-use the (constant size) hash table between encodeBestSpeed calls, avoiding the cost of zero-initializing the hash table altogether. This will be implemented in follow-up commits.

This package's benchmarks:

name                    old speed      new speed      delta
EncodeDigitsSpeed1e4-8  72.8MB/s ± 1%  73.5MB/s ± 1%  +0.86%  (p=0.000 n=10+10)
EncodeDigitsSpeed1e5-8  77.5MB/s ± 1%  78.0MB/s ± 0%  +0.69%  (p=0.000 n=10+10)
EncodeDigitsSpeed1e6-8  82.0MB/s ± 1%  82.7MB/s ± 1%  +0.85%  (p=0.000 n=10+9)
EncodeTwainSpeed1e4-8   65.1MB/s ± 1%  65.6MB/s ± 0%  +0.78%  (p=0.000 n=10+9)
EncodeTwainSpeed1e5-8   80.0MB/s ± 0%  80.6MB/s ± 1%  +0.66%  (p=0.000 n=9+10)
EncodeTwainSpeed1e6-8   81.6MB/s ± 1%  82.1MB/s ± 1%  +0.55%  (p=0.017 n=10+10)

Input size in bytes, output size (and time taken) before and after on some larger files:

1073741824    57269781 (  3183ms)    57269781 (  3177ms)  adresser.001
1000000000   391052000 ( 11071ms)   391051996 ( 11067ms)  enwik9
1911399616   378679516 ( 13450ms)   378679514 ( 13079ms)  gob-stream
8558382592  3972329193 ( 99962ms)  3972329193 ( 91290ms)  rawstudio-mint14.tar
 200000000   200015265 (   776ms)   200015265 (   774ms)  sharnd.out

Thanks to Klaus Post for the original suggestion on cl/21021.

Change-Id: Ia4c63a8d1b92c67e1765ec5c3c8c69d289d9a6ce
Reviewed-on: https://go-review.googlesource.com/22604
Reviewed-by: Russ Cox <rsc@golang.org>
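
To make the zero-initialization point concrete: a table declared inside the encode function is zeroed on every call, while a table owned by a long-lived encoder is zeroed once at allocation. A sketch of the reuse idea mentioned above (names and the table size are illustrative, not the package's actual ones):

    // Illustrative: a constant, input-independent hash table size.
    const tableBits = 14
    const tableSize = 1 << tableBits

    // Declaring the table inside the function zeroes all tableSize
    // elements on every call:
    func encodeOnce(src []byte) {
        var table [tableSize]uint16 // O(tableSize) zeroing, every call
        _ = table
    }

    // Keeping the table in a long-lived encoder pays that cost once,
    // enabling reuse across encodeBestSpeed calls:
    type encoder struct {
        table [tableSize]uint16
    }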
-
Dave Cheney authored
Drive-by gardening of bv.go.

- Unexport the Bvec type, it is not used outside internal/gc. (machine translated with gofmt -r)
- Removed unused constants and functions. (driven by cmd/unused)

Change-Id: I3433758ad4e62439f802f4b0ed306e67336d9aba
Reviewed-on: https://go-review.googlesource.com/22602
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Cherry Zhang authored
After CL 22461, c-archive build on darwin/arm is by default compiled with -shared and installed in pkg/darwin_arm_shared. Fix build (2nd time...).

Change-Id: Ia2bb09bb6e1ebc9bc74f7570dd80c81d05eaf744
Reviewed-on: https://go-review.googlesource.com/22534
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Nigel Tao authored
This encoding algorithm, which prioritizes speed over output size, is based on Snappy's LZ77-style encoder: github.com/golang/snappy

This commit keeps the diff between this package's encodeBestSpeed function and Snappy's encodeBlock function as small as possible (see the diff below). Follow-up commits will improve this package's performance and output size.

This package's speed benchmarks:

name                    old speed      new speed      delta
EncodeDigitsSpeed1e4-8  40.7MB/s ± 0%  73.0MB/s ± 0%   +79.18%  (p=0.008 n=5+5)
EncodeDigitsSpeed1e5-8  33.0MB/s ± 0%  77.3MB/s ± 1%  +134.04%  (p=0.008 n=5+5)
EncodeDigitsSpeed1e6-8  32.1MB/s ± 0%  82.1MB/s ± 0%  +156.18%  (p=0.008 n=5+5)
EncodeTwainSpeed1e4-8   42.1MB/s ± 0%  65.0MB/s ± 0%   +54.61%  (p=0.008 n=5+5)
EncodeTwainSpeed1e5-8   46.3MB/s ± 0%  80.0MB/s ± 0%   +72.81%  (p=0.008 n=5+5)
EncodeTwainSpeed1e6-8   47.3MB/s ± 0%  81.7MB/s ± 0%   +72.86%  (p=0.008 n=5+5)

Here's the milliseconds taken, before and after this commit, to compress a number of test files:

Go's src/compress/testdata files:
     4      1  e.txt
     8      4  Mark.Twain-Tom.Sawyer.txt

github.com/golang/snappy's benchmark files:
     3      1  alice29.txt
    12      3  asyoulik.txt
     6      1  fireworks.jpeg
     1      1  geo.protodata
     1      0  html
     2      2  html_x_4
     6      3  kppkn.gtb
    11      4  lcet10.txt
     5      1  paper-100k.pdf
    14      6  plrabn12.txt
    17      6  urls.10K

Larger files linked to from https://docs.google.com/spreadsheets/d/1VLxi-ac0BAtf735HyH3c1xRulbkYYUkFecKdLPH7NIQ/edit#gid=166102500:
  2409   3182  adresser.001
 16757  11027  enwik9
 13764  12946  gob-stream
153978  74317  rawstudio-mint14.tar
  4371    770  sharnd.out

Output size is larger. In the table below, the first column is the input size, the second column is the output size prior to this commit, the third column is the output size after this commit.

    100003      47707      50006  e.txt
    387851     172707     182930  Mark.Twain-Tom.Sawyer.txt
    152089      62457      66705  alice29.txt
    125179      54503      57274  asyoulik.txt
    123093     122827     123108  fireworks.jpeg
    118588      18574      20558  geo.protodata
    102400      16601      17305  html
    409600      65506      70313  html_x_4
    184320      49007      50944  kppkn.gtb
    426754     166957     179355  lcet10.txt
    102400      82126      84937  paper-100k.pdf
    481861     218617     231988  plrabn12.txt
    702087     241774     258020  urls.10K
1073741824   43074110   57269781  adresser.001
1000000000  365772256  391052000  enwik9
1911399616  340364558  378679516  gob-stream
8558382592 3807229562 3972329193  rawstudio-mint14.tar
 200000000  200061040  200015265  sharnd.out

The diff between github.com/golang/snappy's encodeBlock function and this commit's encodeBestSpeed function:

1c1,7
< func encodeBlock(dst, src []byte) (d int) {
---
> func encodeBestSpeed(dst []token, src []byte) []token {
> 	// This check isn't in the Snappy implementation, but there, the caller
> 	// instead of the callee handles this case.
> 	if len(src) < minNonLiteralBlockSize {
> 		return emitLiteral(dst, src)
> 	}
>
4c10
< // and len(src) <= maxBlockSize and maxBlockSize == 65536.
---
> // and len(src) <= maxStoreBlockSize and maxStoreBlockSize == 65535.
65c71
< if load32(src, s) == load32(src, candidate) {
---
> if s-candidate < maxOffset && load32(src, s) == load32(src, candidate) {
73c79
< d += emitLiteral(dst[d:], src[nextEmit:s])
---
> dst = emitLiteral(dst, src[nextEmit:s])
90c96
< // This is an inlined version of:
---
> // This is an inlined version of Snappy's:
93c99,103
< for i := candidate + 4; s < len(src) && src[i] == src[s]; i, s = i+1, s+1 {
---
> s1 := base + maxMatchLength
> if s1 > len(src) {
> 	s1 = len(src)
> }
> for i := candidate + 4; s < s1 && src[i] == src[s]; i, s = i+1, s+1 {
96c106,107
< d += emitCopy(dst[d:], base-candidate, s-base)
---
> // matchToken is flate's equivalent of Snappy's emitCopy.
> dst = append(dst, matchToken(uint32(s-base-3), uint32(base-candidate-minOffsetSize)))
114c125
< if uint32(x>>8) != load32(src, candidate) {
---
> if s-candidate >= maxOffset || uint32(x>>8) != load32(src, candidate) {
124c135
< d += emitLiteral(dst[d:], src[nextEmit:])
---
> dst = emitLiteral(dst, src[nextEmit:])
126c137
< return d
---
> return dst

This change is based on https://go-review.googlesource.com/#/c/21021/ by Klaus Post, but it is a separate changelist as cl/21021 seems to have stalled in code review, and the Go 1.7 feature freeze approaches.

Golang-dev discussion: https://groups.google.com/d/topic/golang-dev/XYgHX9p8IOk/discussion and of course cl/21021.

Change-Id: Ib662439417b3bd0b61c2977c12c658db3e44d164
Reviewed-on: https://go-review.googlesource.com/22370
Reviewed-by: Russ Cox <rsc@golang.org>
-
Austin Clements authored
This converts all remaining uses of mspan.start to instead use mspan.base(). In many cases, this actually reduces the complexity of the code.

Change-Id: If113840e00d3345a6cf979637f6a152e6344aee7
Reviewed-on: https://go-review.googlesource.com/22590
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
-
Austin Clements authored
Currently we have lots of (s.start << _PageShift) and variants. We now have an s.base() function that returns this. It's faster and more readable, so use it.

Change-Id: I888060a9dae15ea75ca8cc1c2b31c905e71b452b
Reviewed-on: https://go-review.googlesource.com/22559
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
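
The helper is a one-line translation from the span's starting page number to a byte address; every (s.start << _PageShift) site becomes a call. A sketch matching that description (the runtime's own definition may differ in detail):

    // base returns the address of the first byte of the span:
    // the starting page number shifted up into a byte address.
    func (s *mspan) base() uintptr {
        return uintptr(s.start) << _PageShift
    }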
-
Austin Clements authored
In particular, it always returns an aligned pointer.

Change-Id: I763789a539a4bfd8b0efb36a39a80be1a479d3e2
Reviewed-on: https://go-review.googlesource.com/22558
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Austin Clements authored
These used to be used for the list of newly freed objects, but that's no longer a thing.

Change-Id: I5a4503137b74ec0eae5372ca271b1aa0b32df074
Reviewed-on: https://go-review.googlesource.com/22557
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Brad Fitzpatrick authored
Change-Id: I2e8ae403622ba7131cadaba506100d79613183f1
Reviewed-on: https://go-review.googlesource.com/22601
Reviewed-by: Russ Cox <rsc@golang.org>
Reviewed-by: Andrew Gerrand <adg@golang.org>
-
Alex Brainman authored
The .bss section has no data stored in the PE file. But when .bss section data is used by the linker, it is assumed that every byte is set to zero. (*Section).Data currently returns garbage for such sections. Change (*Section).Data so it returns a slice filled with 0s.

Updates #15345

Change-Id: I1fa5138244a9447e1d59dec24178b1dd0fd4c5d7
Reviewed-on: https://go-review.googlesource.com/22544
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Alex Brainman <alex.brainman@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
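
The shape of the desired behavior, sketched as a wrapper over debug/pe's exported API rather than as the internal change itself: when a section's in-memory size exceeds its on-disk bytes (the .bss case), the missing tail must read as zeros.

    import "debug/pe"

    // sectionBytes returns section contents as the linker expects them:
    // the on-disk bytes followed by zeros up to the in-memory size.
    // For .bss, everything comes from the zero-filled tail.
    func sectionBytes(s *pe.Section) ([]byte, error) {
        data, err := s.Data() // bytes actually stored in the PE file
        if err != nil {
            return nil, err
        }
        if vs := int(s.VirtualSize); vs > len(data) {
            zeroed := make([]byte, vs) // make() supplies the zero bytes
            copy(zeroed, data)
            return zeroed, nil
        }
        return data, nil
    }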
-
Robert Griesemer authored
Follow-up to https://golang.org/cl/22543.

Change-Id: I873b4fa6616ac2aea8faada2fccd126233bbc07f
Reviewed-on: https://go-review.googlesource.com/22583
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Russ Cox authored
See https://golang.org/design/2775-binary-only-packages for design.

Fixes #2775.

Change-Id: I33e74eebffadc14d3340bba96083af0dec5172d5
Reviewed-on: https://go-review.googlesource.com/22433
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Nigel Tao authored
This is an error according to the spec, but Firefox and Google Chrome seem OK with this.

Fixes #15059.

Change-Id: I841cf44e96655e91a2481555f38fbd7055a32202
Reviewed-on: https://go-review.googlesource.com/22546
Reviewed-by: Rob Pike <r@golang.org>
-
Rick Hudson authored
Our compiler now provides intrinsics, including sys.Ctz64, that support CTZ (count trailing zeros) instructions. This CL replaces the Go versions of CTZ with the compiler intrinsic.

CTZ finds the least significant 1 in a word and returns the number of less significant 0s in the word.

Allocation uses the bitmap created by the garbage collector to locate an unmarked object. The logic takes a word of the bitmap, complements it, and then caches it. It then uses CTZ to locate an available unmarked object. It then shifts marked bits out of the bitmap word, preparing it for the next search. Once all the unmarked objects are used in the cached word, the bitmap gets another word and repeats the process.

Change-Id: Id2fc42d1d4b9893efaa2e1bd01896985b7e42f82
Reviewed-on: https://go-review.googlesource.com/21366
Reviewed-by: Austin Clements <austin@google.com>
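
The complement-cache-CTZ-shift loop described above, written out as a hedged sketch (standalone Go using math/bits; the runtime itself uses the sys.Ctz64 intrinsic and its own span types):

    package main

    import (
        "fmt"
        "math/bits"
    )

    // nextFree consumes the lowest 1 bit of cache, where cache is a
    // complemented mark-bitmap word: 1 bits are unmarked (free) objects.
    // base is the object index of cache's bit 0; ok is false when the
    // word is exhausted and the next bitmap word should be fetched.
    func nextFree(cache *uint64, base uint) (obj, newBase uint, ok bool) {
        bit := bits.TrailingZeros64(*cache) // the CTZ step; 64 if no bits set
        if bit == 64 {
            return 0, base, false
        }
        *cache >>= uint(bit) + 1 // shift consumed bits out for the next search
        obj = base + uint(bit)
        return obj, obj + 1, true
    }

    func main() {
        marks := uint64(0xb7)        // 0b10110111: 1 = marked (in use)
        cache := ^marks & 0xff       // complement an 8-object word: 1 = free
        base := uint(0)
        for {
            obj, nb, ok := nextFree(&cache, base)
            if !ok {
                break
            }
            fmt.Println("free object index:", obj) // prints 3, then 6
            base = nb
        }
    }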
-
Rick Hudson authored
Two changes are included here that are dependent on each other.

The first is that allocBits and gcmarkBits are changed to a *uint8 which points to the first byte of that span's mark and alloc bits. Several places were altered to perform pointer arithmetic to locate the byte corresponding to an object in the span. The actual bit corresponding to an object is indexed in the byte by using the lower three bits of the object's index.

The second change avoids the redundant calculation of an object's index. The index is returned from heapBitsForObject and then used by the functions indexing allocBits and gcmarkBits.

Finally, we no longer allocate the gc bits in the span structures. Instead we use an arena based allocation scheme that allows for a more compact bit map as well as recycling and bulk clearing of the mark bits.

Change-Id: If4d04b2021c092ec39a4caef5937a8182c64dfef
Reviewed-on: https://go-review.googlesource.com/20705
Reviewed-by: Austin Clements <austin@google.com>
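
The byte/bit indexing in the first change is ordinary arithmetic: object i lives in bitmap byte i/8, at bit i&7. A self-contained sketch of that scheme (the types and names here are illustrative, not the runtime's):

    package main

    import "unsafe"

    // span models the two per-span bitmap pointers described above.
    type span struct {
        allocBits  *uint8 // first byte of the allocation bitmap
        gcmarkBits *uint8 // first byte of the mark bitmap
    }

    // bitOf locates object i's bitmap byte (i/8 bytes past the start)
    // and its bit within that byte (the lower three bits of i).
    func bitOf(bitmap *uint8, i uintptr) (bytep *uint8, mask uint8) {
        p := unsafe.Pointer(uintptr(unsafe.Pointer(bitmap)) + i/8)
        return (*uint8)(p), uint8(1) << (i & 7)
    }

    // isMarked reports whether object i is marked in the span.
    func (s *span) isMarked(i uintptr) bool {
        bytep, mask := bitOf(s.gcmarkBits, i)
        return *bytep&mask != 0
    }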
-
- 28 Apr, 2016 15 commits
-
Nigel Tao authored
See Section 23. Graphic Control Extension of the spec: https://www.w3.org/Graphics/GIF/spec-gif89a.txt

Change-Id: Ie78b4ff4aa97e1b332ade67ae4fa25f7c0770610
Reviewed-on: https://go-review.googlesource.com/22547
Reviewed-by: Rob Pike <r@golang.org>
-
Michael Hudson-Doyle authored
golang.org/cl/22453 was supposed to pass -no-pie to the linker when linking a race-enabled binary if the host toolchain supports it. But I bungled the support check, as I forgot to pass -c to the host compiler, so it tried to compile a 0 byte .c file into an executable, which will never work. Fix it to pass -c as it should have all along.

Change-Id: I4801345c7a29cb18d5f22cec5337ce535f92135d
Reviewed-on: https://go-review.googlesource.com/22587
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
-
Keith Randall authored
It is unused, remove the clutter.

Change-Id: I51a44326b125ef79241459c463441f76a289cc08
Reviewed-on: https://go-review.googlesource.com/22586
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
David Symonds authored
Change-Id: I46d9ea31cf5836d054a9ce22af4dd1742a418a07
Reviewed-on: https://go-review.googlesource.com/22588
Run-TryBot: David Symonds <dsymonds@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
-
Mikio Hara authored
The _SigUnblock flag was appended to SIGSYS slot of runtime signal table for Linux in https://go-review.googlesource.com/22202, but there is still no concrete opinion on whether SIGSYS must be an unblocked signal for runtime. This change removes _SigUnblock flag from SIGSYS on Linux for consistency in runtime signal handling and adds a reference to #15204 to runtime signal table for FreeBSD.

Updates #15204.

Change-Id: I42992b1d852c2ab5dd37d6dbb481dba46929f665
Reviewed-on: https://go-review.googlesource.com/22537
Reviewed-by: Ian Lance Taylor <iant@golang.org>
-
Matthew Dempsky authored
DNS packing and unpacking uses hand-coded struct walking functions rather than reflection, so these tags are unneeded and just contribute to their runtime reflect metadata size.

Change-Id: I2db09d5159912bcbc3b482cbf23a50fa8fa807fa
Reviewed-on: https://go-review.googlesource.com/22594
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
There are no real world use cases for HINFO, MINFO, MB, MG, or MR records, and package net's exposed APIs don't provide any way to access them even if there were. If a use ever does show up, we can revive them. In the mean time, this is just effectively-dead code that sticks around because of rr_mk.

Change-Id: I6c188b5ee32f3b3a04588b79a0ee9c2e3e725ccc
Reviewed-on: https://go-review.googlesource.com/22593
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Brad Fitzpatrick authored
Updates #12580

Change-Id: I9f9578148ef2b48dffede1007317032d39f6af55
Reviewed-on: https://go-review.googlesource.com/22191
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Tom Bergan <tombergan@google.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-
Brad Fitzpatrick authored
Goroutine leak checking is still too tedious, so untested. See #6705, which is my fault for forgetting to mail out.

Change-Id: I899fb311c9d4229ff1dbd3f54fe307805e17efee
Reviewed-on: https://go-review.googlesource.com/22581
Reviewed-by: Ahmed W. <oneofone@gmail.com>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Robert Griesemer authored
This reduces the export data size significantly (15%-25%) for some packages, especially where the paths are very long or if there are many files involved. Slight (2%) reduction on average, with virtually no increases in export data size.

Selected export data sizes for packages with |delta %| > 3%:

package                     before    after   delta     %
cmd/asm/internal/arch        11647    11088    -559    -4%
cmd/compile/internal/amd64     838      600    -238   -27%
cmd/compile/internal/arm      7323     6793    -530    -6%
cmd/compile/internal/arm64   19948    18971    -977    -4%
cmd/compile/internal/big      9043     8548    -495    -4%
cmd/compile/internal/mips64    645      482    -163   -24%
cmd/compile/internal/ppc64     695      497    -198   -27%
cmd/compile/internal/s390x     553      433    -120   -21%
cmd/compile/internal/x86       744      555    -189   -24%
cmd/dist                       145      121     -24   -16%
cmd/internal/objfile         17359    16474    -885    -4%
cmd/internal/pprof/symbolz    8346     7941    -405    -4%
cmd/link/internal/amd64      11178    10604    -574    -4%
cmd/link/internal/arm          204      171     -33   -15%
cmd/link/internal/arm64        210      175     -35   -16%
cmd/link/internal/mips64       213      177     -36   -16%
cmd/link/internal/ppc64        211      176     -35   -16%
cmd/link/internal/s390x        210      175     -35   -16%
cmd/link/internal/x86          203      170     -33   -15%
cmd/trace                      782      744     -38    -4%
compress/lzw                   402      383     -19    -4%
crypto/aes                     311      262     -49   -15%
crypto/cipher                 1138      959    -179   -15%
crypto/des                     315      288     -27    -8%
crypto/elliptic               6063     5746    -317    -4%
crypto/rc4                     317      295     -22    -6%
crypto/sha256                  348      312     -36    -9%
crypto/sha512                  487      451     -36    -6%
go/doc                        3871     3649    -222    -5%
go/internal/gccgoimporter     2063     1949    -114    -5%
go/internal/gcimporter        3253     3096    -157    -4%
math                          4343     3572    -771   -17%
math/cmplx                    1580     1274    -306   -18%
math/rand                      982      926     -56    -5%
net/internal/socktest         2159     2049    -110    -4%
os/exec                       7928     7492    -436    -4%
os/signal                      237      208     -29   -11%
os/user                        717      682     -35    -4%
runtime/internal/atomic        728      693     -35    -4%
runtime/internal/sys          2287     2107    -180    -7%
sync                          1306     1214     -92    -6%
all packages               1509255  1465507  -43748    -2%

Change-Id: I98a11521b552166b7f47f2039a29f106748bf5d4
Reviewed-on: https://go-review.googlesource.com/22580
Reviewed-by: Alan Donovan <adonovan@google.com>
-
Matthew Dempsky authored
Change-Id: Icecbf9bae8c39670d1ceef62dd94b36e90b27b04
Reviewed-on: https://go-review.googlesource.com/22570
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Michael Munday authored
MGHI (16-bit signed immediate) is now used where possible for both MULLW and MULLD. MGHI is 2 bytes shorter than MSGFI.

Change-Id: I5d0648934f28b3403b1126913fd703d8f62b9e9f
Reviewed-on: https://go-review.googlesource.com/22398
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Bill O'Farrell <billotosyr@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
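
The selection criterion reduces to checking whether the multiplier fits in a signed 16-bit immediate. An illustrative check of that condition (not the assembler's actual code):

    // fitsMGHI reports whether v can be encoded as MGHI's 16-bit
    // signed immediate; otherwise the longer MSGFI form is needed.
    func fitsMGHI(v int64) bool {
        return v == int64(int16(v)) // round-trips through int16 unchanged
    }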
-
Matthew Dempsky authored
Avoids some extra work and string concatenation at query time.

benchmark                                   old allocs  new allocs  delta
BenchmarkGoLookupIP-32                             154         150  -2.60%
BenchmarkGoLookupIPNoSuchHost-32                   446         442  -0.90%
BenchmarkGoLookupIPWithBrokenNameServer-32         564         568  +0.71%

benchmark                                   old bytes  new bytes  delta
BenchmarkGoLookupIP-32                          10824      10704  -1.11%
BenchmarkGoLookupIPNoSuchHost-32                43140      42992  -0.34%
BenchmarkGoLookupIPWithBrokenNameServer-32      46616      46680  +0.14%

BenchmarkGoLookupIPWithBrokenNameServer's regression appears to be because it's actually only performing 1 LookupIP call, so the extra work done parsing the DNS config file doesn't amortize as well as for BenchmarkGoLookupIP or BenchmarkGoLookupIPNoSuchHost, which perform 2000+ LookupIP calls per run.

Update #15473.

Change-Id: I98c8072f2f39e2f2ccd6c55e9e9bd309f5ad68f8
Reviewed-on: https://go-review.googlesource.com/22571
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Matthew Dempsky authored
Avoids generating some redundant garbage from re-concatenating the same string for every DNS query.

benchmark                                   old allocs  new allocs  delta
BenchmarkGoLookupIP-32                             156         154  -1.28%
BenchmarkGoLookupIPNoSuchHost-32                   456         446  -2.19%
BenchmarkGoLookupIPWithBrokenNameServer-32         577         564  -2.25%

benchmark                                   old bytes  new bytes  delta
BenchmarkGoLookupIP-32                          10873      10824  -0.45%
BenchmarkGoLookupIPNoSuchHost-32                43303      43140  -0.38%
BenchmarkGoLookupIPWithBrokenNameServer-32      46824      46616  -0.44%

Update #15473.

Change-Id: I3b0173dfedf31bd08eaea1069968b416850864a1
Reviewed-on: https://go-review.googlesource.com/22556
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
-
Brad Fitzpatrick authored
Updates #14660

Change-Id: Ifa5c97ba327ad7ceea0a9a252e3dbd9d079dae54
Reviewed-on: https://go-review.googlesource.com/22529
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
-