1. 11 Aug, 2016 3 commits
  2. 10 Aug, 2016 3 commits
    • [dev.ssa] cmd/internal/obj/arm64: fix encoding constant into some instructions · 748aa844
      Cherry Zhang authored
      When a constant can be encoded in a logical instruction (BITCON), do
      it this way instead of using the constant pool. The BITCON testing
      code runs faster than table lookup (using map):
      
      (on AMD64 machine, with pseudo random input)
      BenchmarkIsBitcon-4   	300000000	         4.04 ns/op
      BenchmarkTable-4      	50000000	        27.3 ns/op
      
      The equivalent C code of BITCON testing is formally verified with
      model checker CBMC against linear search of the lookup table.
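      As a sketch of the idea (not the exact asm7.go code), the BITCON test can be done with a couple of bit tricks instead of a table lookup: shrink the value to its smallest repeating period, then check that the period (or its complement) is a rotated run of ones:

      ```go
      package main

      // sequenceOfOnes reports whether x is a contiguous run of ones.
      // Adding the lowest set bit of such a run to the run itself
      // yields a single power of two.
      func sequenceOfOnes(x uint64) bool {
      	y := x & -x // lowest set bit of x
      	y += x
      	return (y-1)&y == 0
      }

      // isbitcon sketches the BITCON test: x is encodable in a logical
      // instruction if it is a repetition of a 2-, 4-, ..., or 64-bit
      // pattern that is a rotation of a run of ones, and x is neither
      // all zeros nor all ones.
      func isbitcon(x uint64) bool {
      	if x == 0 || x == 0xffffffffffffffff {
      		return false
      	}
      	// Determine the repeating period and sign-extend one unit of
      	// the pattern to 64 bits.
      	switch {
      	case x != x>>32|x<<32:
      		// period is 64; use x as is
      	case x != x>>16|x<<48:
      		x = uint64(int64(int32(x)))
      	case x != x>>8|x<<56:
      		x = uint64(int64(int16(x)))
      	case x != x>>4|x<<60:
      		x = uint64(int64(int8(x)))
      	default:
      		// period is 4 or 2; always encodable
      		return true
      	}
      	// A rotated run of ones is either a plain run of ones or the
      	// complement of one (when the run wraps around the top bit).
      	return sequenceOfOnes(x) || sequenceOfOnes(^x)
      }
      ```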
      
      Also handle cases where a constant can be encoded in a MOV
      instruction. In this case, materialize the constant into REGTMP
      without using the constant pool.
      
      When constants need to be added to the constant pool, make sure to
      check whether they fit in 32 bits; if not, store them as 64 bits.
      
      Both legacy and SSA compiler backends are happy with this.
      
      Fixes #16226.
      
      Change-Id: I883e3069dee093a1cdc40853c42221a198a152b0
      Reviewed-on: https://go-review.googlesource.com/26631
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      Reviewed-by: David Chase <drchase@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
    • [dev.ssa] cmd/compile: implement GO386=387 · c069bc49
      Keith Randall authored
      Last part of the 386 SSA port.
      
      Modify the x86 backend to simulate SSE registers and
      instructions with 387 registers and instructions.
      The simulation isn't terribly performant, but it works, and the
      old implementation wasn't very performant either. Optimization is
      left to those who care about 387.
      
      Turn on SSA backend for 386 by default.
      
      Fixes #16358
      
      Change-Id: I678fb59132620b2c47e993c1c10c4c21135f70c0
      Reviewed-on: https://go-review.googlesource.com/25271
      Run-TryBot: Keith Randall <khr@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
    • [dev.ssa] cmd/compile: more fixes for 386 shared libraries · 77ef597f
      Keith Randall authored
      Use the destination register to materialize the PC
      for GOT references as well. See https://go-review.googlesource.com/c/25442/
      The SSA backend assumes CX does not get clobbered for these instructions.
      
      Mark duffzero as clobbering CX. The linker needs to clobber CX
      to materialize the address to call. (This affects the non-shared-library
      duffzero also, but hopefully forbidding one register across duffzero
      won't be a big deal.)
      
      Hopefully this is all the cases where the linker is clobbering CX
      under the hood and SSA assumes it isn't.
      
      Change-Id: I080c938170193df57cd5ce1f2a956b68a34cc886
      Reviewed-on: https://go-review.googlesource.com/26611
      Run-TryBot: Keith Randall <khr@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
  3. 09 Aug, 2016 3 commits
    • [dev.ssa] cmd/compile: PPC: FP load/store/const/cmp/neg; div/mod · ff37d0e6
      David Chase authored
      FP<->int conversions remain.
      
      Updates #16010.
      
      Change-Id: I38d7a4923e34d0a489935fffc4c96c020cafdba2
      Reviewed-on: https://go-review.googlesource.com/25589
      Run-TryBot: David Chase <drchase@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Keith Randall <khr@golang.org>
    • [dev.ssa] cmd/compile: fix PIC for SSA-generated code · 2cbdd55d
      Keith Randall authored
      Access to globals requires a 2-instruction sequence on PIC 386.
      
          MOVL foo(SB), AX
      
      is translated by the obj package into:
      
          CALL getPCofNextInstructionInTempRegister(SB)
          MOVL (&foo-&thisInstruction)(tmpReg), AX
      
      The call returns the PC of the next instruction in a register.
      The next instruction then offsets from that register to get the
      address required.  The tricky part is the allocation of the
      temp register.  The legacy compiler always used CX, and forbade
      the register allocator from allocating CX when in PIC mode.
      We can't easily do that in SSA because CX is actually a required
      register for shift instructions. (I think the old backend got away
      with this because the register allocator never uses CX, only
      codegen knows that shifts must use CX.)
      
      Instead, we allow the temp register to be anything.  When the
      destination of the MOV (or LEA) is an integer register, we can
      use that register.  Otherwise, we make sure to compile the
      operation using an LEA to reference the global.  So
      
          MOVL AX, foo(SB)
      
      is never generated directly.  Instead, SSA generates:
      
          LEAL foo(SB), DX
          MOVL AX, (DX)
      
      which is then rewritten by the obj package to:
      
          CALL getPcInDX(SB)
          LEAL (&foo-&thisInstruction)(DX), AX
          MOVL AX, (DX)
      
      So this CL modifies the obj package to use different thunks
      to materialize the pc into different registers.  We use the
      registers that regalloc chose so that SSA can still allocate
      the full set of registers.
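      As an illustrative sketch (the names follow the GNU __x86.get_pc_thunk.* convention; the exact symbols emitted by the obj package may differ), selecting a per-register thunk is just a matter of naming the thunk after the regalloc-chosen register:

      ```go
      package main

      import "strings"

      // pcThunkSym returns the name of the thunk that materializes the
      // PC into the given 386 register, following the
      // __x86.get_pc_thunk.* naming convention. With one thunk per
      // register, the obj package can emit
      //     CALL <thunk>(SB)
      //     LEAL (&sym-&thisInstruction)(reg), reg
      // using whichever register the SSA register allocator chose.
      func pcThunkSym(reg string) string {
      	return "__x86.get_pc_thunk." + strings.ToLower(reg)
      }
      ```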
      
      Change-Id: Ie095644f7164a026c62e95baf9d18a8bcaed0bba
      Reviewed-on: https://go-review.googlesource.com/25442
      Run-TryBot: Keith Randall <khr@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
    • [dev.ssa] cmd/compile: port SSA backend to amd64p32 · 69a755b6
      Keith Randall authored
      It's not a new backend, just a PtrSize==4 modification
      of the existing AMD64 backend.
      
      Change-Id: Icc63521a5cf4ebb379f7430ef3f070894c09afda
      Reviewed-on: https://go-review.googlesource.com/25586
      Run-TryBot: Keith Randall <khr@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
  4. 08 Aug, 2016 1 commit
  5. 07 Aug, 2016 1 commit
  6. 06 Aug, 2016 1 commit
  7. 04 Aug, 2016 4 commits
  8. 03 Aug, 2016 1 commit
    • [dev.ssa] cmd/compile: refactor out rulegen value parsing · 6a1153ac
      Josh Bleecher Snyder authored
      Previously, genMatch0 and genResult0 contained
      lots of duplication: locating the op, parsing
      the value, validation, etc.
      Parsing and validation was mixed in with code gen.
      
      Extract a helper, parseValue. It is responsible
      for parsing the value, locating the op, and doing
      shared validation.
      
      As a bonus (and possibly as my original motivation),
      make op selection pay attention to the number
      of args present.
      This allows arch-specific ops to share a name
      with generic ops as long as there is no ambiguity.
      It also detects and reports unresolved ambiguity,
      unlike before, where it would simply always
      pick the generic op, with no warning.
      
      Also use parseValue when generating the top-level
      op dispatch, to ensure its opinion about ops
      matches genMatch0 and genResult0.
      
      The order of statements in the generated code used
      to depend on the exact rule. It is now somewhat
      independent of the rule. That is the source
      of some of the generated code changes in this CL.
      See rewritedec64 and rewritegeneric for examples.
      It is a one-time change.
      
      The op dispatch switch and functions used to be
      sorted by opname without architecture. The sort
      now includes the architecture, leading to further
      generated code changes.
      See rewriteARM and rewriteAMD64 for examples.
      Again, it is a one-time change.
      
      There are no functional changes.
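      A sketch of the arity-based selection described above (the types and names here are hypothetical stand-ins, not the actual rulegen code):

      ```go
      package main

      import "fmt"

      // opInfo is a hypothetical stand-in for rulegen's op description.
      type opInfo struct {
      	name   string
      	arch   string // "" for a generic op
      	argLen int
      }

      // selectOp picks the op matching a rule's name and argument count.
      // An arch-specific op may share a name with a generic op as long
      // as the argument counts disambiguate them; a remaining ambiguity
      // is reported instead of silently picking the generic op.
      func selectOp(name string, nargs int, ops []opInfo) (opInfo, error) {
      	var matches []opInfo
      	for _, op := range ops {
      		if op.name == name && op.argLen == nargs {
      			matches = append(matches, op)
      		}
      	}
      	switch len(matches) {
      	case 0:
      		return opInfo{}, fmt.Errorf("unknown op %s with %d args", name, nargs)
      	case 1:
      		return matches[0], nil
      	default:
      		return opInfo{}, fmt.Errorf("ambiguous op %s with %d args", name, nargs)
      	}
      }
      ```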
      
      Change-Id: I22c989183ad5651741ebdc0566349c5fd6c6b23c
      Reviewed-on: https://go-review.googlesource.com/24649
      Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
      Reviewed-by: Keith Randall <khr@golang.org>
  9. 02 Aug, 2016 10 commits
  10. 01 Aug, 2016 1 commit
  11. 29 Jul, 2016 1 commit
    • cmd/compile: fix possible spill of invalid pointer with DUFFZERO on AMD64 · 111d590f
      Cherry Zhang authored
      The SSA compiler on AMD64 may spill a Duff-adjusted address as a
      scalar. If the object is on the stack and the stack moves, the
      spilled address becomes invalid.
      
      Making the spill pointer-typed does not work: the Duff-adjusted
      address points to memory before the area to be zeroed and may
      itself be invalid, which could make the stack scanning code panic.
      
      Fix it by doing Duff-adjustment in genValue, so the intermediate value
      is not seen by the reg allocator, and will not be spilled.
      
      Add a test to cover both cases. As it depends on register
      allocation, it may not always trigger.
      
      Fixes #16515.
      
      Change-Id: Ia81d60204782de7405b7046165ad063384ede0db
      Reviewed-on: https://go-review.googlesource.com/25309
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
  12. 28 Jul, 2016 2 commits
  13. 27 Jul, 2016 4 commits
    • runtime: reduce GC assist extra credit · ccca9c9c
      Rhys Hiltner authored
      Mutator goroutines that allocate memory during the concurrent mark
      phase are required to spend some time assisting the garbage
      collector. The magnitude of this mandatory assistance is proportional
      to the goroutine's allocation debt and subject to the assistance
      ratio as calculated by the pacer.
      
      When assisting the garbage collector, a mutator goroutine will go
      beyond paying off its allocation debt. It will build up extra credit
      to amortize the overhead of the assist.
      
      In fast-allocating applications with high assist ratios, building up
      this credit can take the affected goroutine's entire time slice.
      Reduce the penalty on each goroutine being selected to assist the GC
      in two ways, to spread the responsibility more evenly.
      
      First, do a consistent amount of extra scan work without regard for
      the pacer's assistance ratio. Second, reduce the magnitude of the
      extra scan work so it can be completed within a few hundred
      microseconds.
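      In pseudo-Go terms (the constant and helper below are illustrative, not the runtime's actual code), the change makes the extra credit a small fixed amount instead of one that also grows with the assist ratio:

      ```go
      package main

      // gcOverAssistWork is the fixed amount of extra scan work each
      // assist performs beyond its debt; the value here is illustrative.
      const gcOverAssistWork = 64 << 10

      // assistScanWork computes the scan work a mutator assist performs:
      // the mandatory part (allocation debt times the pacer's assist
      // ratio) plus a fixed over-assist, so a high assist ratio no
      // longer inflates the over-assist as well.
      func assistScanWork(debtBytes int64, assistRatio float64) int64 {
      	return int64(float64(debtBytes)*assistRatio) + gcOverAssistWork
      }
      ```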
      
      Commentary on gcOverAssistWork is by Austin Clements, originally in
      https://golang.org/cl/24704
      
      Updates #14812
      Fixes #16432
      
      Change-Id: I436f899e778c20daa314f3e9f0e2a1bbd53b43e1
      Reviewed-on: https://go-review.googlesource.com/25155
      Run-TryBot: Austin Clements <austin@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Austin Clements <austin@google.com>
      Reviewed-by: Rick Hudson <rlh@golang.org>
      Reviewed-by: Chris Broadfoot <cbro@golang.org>
    • [dev.ssa] cmd/compile: fix possible invalid pointer spill in large Zero/Move on ARM · 114c0596
      Cherry Zhang authored
      Instead of comparing the address of the end of the memory to be
      zeroed/copied, compare the address of the last element, which is
      a valid pointer. Also unify large and unaligned Zero/Move by
      passing the alignment as AuxInt.
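      By analogy in Go (using indices instead of pointers; the real change is in the generated ARM code), looping to the last element rather than one past the end keeps the loop-control value inside the object:

      ```go
      package main

      // zeroWords zeroes buf using the index of the LAST element as the
      // loop bound. In the generated code the analogous trick makes the
      // comparison operand the address of the last element - a valid
      // interior pointer - rather than the one-past-the-end address,
      // which may not point into the object.
      func zeroWords(buf []uint64) {
      	if len(buf) == 0 {
      		return
      	}
      	last := len(buf) - 1
      	for i := 0; ; i++ {
      		buf[i] = 0
      		if i == last {
      			break
      		}
      	}
      }
      ```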
      
      Fixes #16515 for ARM.
      
      Change-Id: I19a62b31c5acf5c55c16a89bea1039c926dc91e5
      Reviewed-on: https://go-review.googlesource.com/25300
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
    • [dev.ssa] cmd/compile: add more on ARM64 SSA · 83208504
      Cherry Zhang authored
      Support the following:
      - Shifts. ARM64 machine instructions use only the lowest 6 bits of
        the shift amount (i.e. shift mod 64). Use conditional selection
        instructions to ensure Go semantics.
      - Zero/Move. Alignment is ensured.
      - Hmul, Avg64u, Sqrt.
      - Reserve R18 (the platform register in the ARM64 ABI) and R29
        (the frame pointer in the ARM64 ABI).
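      The shift-semantics gap can be sketched in Go itself (the function is illustrative; it shows what `x << s` must mean and what the compare-and-select guards against):

      ```go
      package main

      // shiftLeft shows the semantics the ARM64 backend must preserve:
      // the hardware LSL uses only the low 6 bits of the shift amount
      // (s & 63), so the compiler emits a compare plus a conditional
      // select (CSEL) to return 0 when the shift amount is 64 or more,
      // as Go requires.
      func shiftLeft(x, s uint64) uint64 {
      	r := x << (s & 63) // what the bare hardware shift computes
      	if s >= 64 {       // the conditional-select guard
      		return 0
      	}
      	return r
      }
      ```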
      
      Everything compiles, all.bash passed (with non-SSA test disabled).
      
      Change-Id: Ia8ed58dae5cbc001946f0b889357b258655078b1
      Reviewed-on: https://go-review.googlesource.com/25290
      Run-TryBot: Cherry Zhang <cherryyz@google.com>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: David Chase <drchase@google.com>
    • net/http: fix data race with concurrent use of Server.Serve · c80e0d37
      Brad Fitzpatrick authored
      Fixes #16505
      
      Change-Id: I0afabcc8b1be3a5dbee59946b0c44d4c00a28d71
      Reviewed-on: https://go-review.googlesource.com/25280
      Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
      TryBot-Result: Gobot Gobot <gobot@golang.org>
      Reviewed-by: Chris Broadfoot <cbro@golang.org>
  14. 26 Jul, 2016 5 commits