1. 24 Aug, 2015 3 commits
    • [dev.ssa] cmd/compile: support spilling and loading flags · 9f8f8c27
      Josh Bleecher Snyder authored
      This CL takes a simple approach to spilling and loading flags.
      We never spill. When a load is needed, we recalculate,
      loading the arguments as needed.
      
      This is simple and architecture-independent.
      It is not very efficient, but as of this CL,
      there are fewer than 200 flag spills during make.bash.
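      The recalculation strategy can be sketched roughly as follows. This is a toy model with hypothetical Value/Op types, not the real cmd/compile/internal/ssa API: instead of loading flags from a spill slot, we issue a fresh copy of the flag-producing op, with its arguments reloaded as needed.

```go
package main

import "fmt"

// Value is a stand-in for an SSA value (hypothetical, illustrative only).
type Value struct {
	Op   string
	Args []*Value
}

// recalcFlags returns a new computation of the flags value v instead of
// loading it from a spill slot. In the real compiler the arguments would
// themselves be reloaded from their spill slots as needed.
func recalcFlags(v *Value) *Value {
	return &Value{Op: v.Op, Args: append([]*Value(nil), v.Args...)}
}

func main() {
	x := &Value{Op: "MOVQload"}
	y := &Value{Op: "MOVQconst"}
	cmp := &Value{Op: "CMPQ", Args: []*Value{x, y}}
	re := recalcFlags(cmp)
	// A distinct value that recomputes the same comparison.
	fmt.Println(re.Op, re != cmp)
}
```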
      
      This was tested by manually reverting CLs 13813 and 13843,
      causing SETcc, MOV, and LEA instructions to clobber flags,
      which dramatically increases the number of flags spills.
      With that done, all stdlib tests that used to pass
      still pass.
      
      For future reference, here are some other, more efficient
      amd64-only schemes that we could adapt in the future if needed.
      
      (1) Spill exactly the flags needed.
      
      For example, if we know that the flags will be needed
      by a SETcc or Jcc op later, we could use SETcc to
      extract just the relevant flag. When needed,
      we could use TESTB and change the op to JNE/SETNE.
      (Alternatively, we could leave the op unaltered
      and prepare an appropriate CMPB instruction
      to produce the desired flag.)
      
      However, this requires separate handling for every
      instruction that uses the flags register,
      including (say) SBBQcarrymask.
      
      We could enable this on an ad hoc basis for common cases
      and fall back to recalculation for other cases.
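      As a hedged illustration (ordinary Go, not compiler code), scheme (1) amounts to the following, with the corresponding amd64 instructions noted in comments:

```go
package main

import "fmt"

// lessThanLater mimics scheme (1) for the L ("less than") flag:
// materialize just that one flag into a byte with SETL, let intervening
// work clobber the flags register, then recover the branch later with
// TESTB + JNE on the spilled byte.
func lessThanLater(a, b int64) bool {
	var spill uint8
	if a < b { // CMPQ b, a; SETL spill
		spill = 1
	}
	// ... flag-clobbering instructions may run here ...
	return spill != 0 // TESTB spill, spill; JNE
}

func main() {
	fmt.Println(lessThanLater(1, 2), lessThanLater(2, 1))
}
```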
      
      (2) Spill all flags with PUSHF and POPF.
      
      This modifies SP, which the runtime won't like.
      It also requires coordination with stackalloc to
      make sure that we have a stack slot ready for use.
      
      (3) Spill almost all flags with LAHF, SETO, and SAHF.
      
      See http://blog.freearrow.com/archives/396
      for details. This would handle all the flags we currently
      use. However, LAHF and SAHF are not universally available,
      and this scheme requires arranging for AX to be free.
      
      Change-Id: Ie36600fd8e807ef2bee83e2e2ae3685112a7f276
      Reviewed-on: https://go-review.googlesource.com/13844
      Reviewed-by: Keith Randall <khr@golang.org>
    • [dev.ssa] cmd/compile: mark LEA and MOV instructions as not clobbering flags · f3171994
      Josh Bleecher Snyder authored
      This further reduces the number of flags spills
      during make.bash by about 50%.
      
      Note that GetG is implemented by one or two MOVs,
      which is why it does not clobber flags.
      
      Change-Id: I6fede8c027b7dc340e00d1e15df1b87bf2b2d9ec
      Reviewed-on: https://go-review.googlesource.com/13843
      Reviewed-by: Keith Randall <khr@golang.org>
    • [dev.ssa] cmd/compile: make "*Value".String more robust · 220e7054
      Josh Bleecher Snyder authored
      Change-Id: I4ae38440a33574421c9e3e350701e86e8a224b92
      Reviewed-on: https://go-review.googlesource.com/13842
      Reviewed-by: Todd Neal <todd@tneal.org>
      Reviewed-by: Keith Randall <khr@golang.org>
  2. 22 Aug, 2015 1 commit
  3. 21 Aug, 2015 3 commits
  4. 20 Aug, 2015 1 commit
    • [dev.ssa] cmd/compile: add decompose pass · 9f954db1
      Keith Randall authored
      Decompose breaks compound objects up into pieces that can be
      operated on by the target architecture.  The decompose pass itself
      only handles phi ops; the rest is done by the rewrite rules in generic.rules.
      
      Compound objects include strings, slices, interfaces, structs, and arrays.
      
      Arrays aren't decomposed because of indexing (we could support
      constant indexes, but dynamic indexes can't be handled using SSA).
      Structs will come in a subsequent CL.
      
      TODO: after this pass we have lost the association between, e.g.,
      a string's pointer and its size.  It would be nice if we could keep
      that information around for debugging info somehow.
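      A rough sketch of the phi handling, with hypothetical types rather than the real ssa package API: a phi over strings becomes a StringMake of phis over the pointer and length pieces, so later passes only see machine-sized values.

```go
package main

import "fmt"

// Value is a stand-in for an SSA value (illustrative only).
type Value struct {
	Op   string
	Args []*Value
}

// decomposeStringPhi splits Phi(s1, s2, ...) over strings into
// StringMake(Phi(ptr pieces...), Phi(len pieces...)).
func decomposeStringPhi(phi *Value) *Value {
	ptrs := make([]*Value, len(phi.Args))
	lens := make([]*Value, len(phi.Args))
	for i, a := range phi.Args {
		ptrs[i] = &Value{Op: "StringPtr", Args: []*Value{a}}
		lens[i] = &Value{Op: "StringLen", Args: []*Value{a}}
	}
	return &Value{Op: "StringMake", Args: []*Value{
		{Op: "Phi", Args: ptrs},
		{Op: "Phi", Args: lens},
	}}
}

func main() {
	phi := &Value{Op: "Phi", Args: []*Value{{Op: "Const"}, {Op: "Const"}}}
	d := decomposeStringPhi(phi)
	fmt.Println(d.Op, d.Args[0].Op, d.Args[1].Op)
}
```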
      
      Change-Id: I6379ab962a7beef62297d0f68c421f22aa0a0901
      Reviewed-on: https://go-review.googlesource.com/13683
      Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
  5. 19 Aug, 2015 3 commits
  6. 18 Aug, 2015 2 commits
  7. 17 Aug, 2015 4 commits
  8. 15 Aug, 2015 1 commit
    • [dev.ssa] cmd/compile/internal/ssa: Use explicit size for store ops · d4cc51d4
      Keith Randall authored
      Using the type of the store argument is not safe: it may change
      during rewriting, giving us the wrong store width.
      
      (Store ptr (Trunc32to16 val) mem)
      
      This should be a 2-byte store.  But we have the rule:
      
      (Trunc32to16 x) -> x
      
      So if the Trunc rewrite happens before the Store -> MOVW rewrite,
      then the Store thinks that the value it is storing is 4 bytes
      in size and uses a MOVL.  Bad things ensue.
      
      Fix this by encoding the store width explicitly in the auxint field.
      
      In general, we can't rely on the type of arguments, as they may
      change during rewrites.  The type of the op itself (as used by
      the Load rules) is still ok to use.
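      The hazard and the fix can be sketched as follows (a hypothetical Value type, not the real ssa package): the (Trunc32to16 x) -> x rewrite swaps the store's argument for its 4-byte source, but the width recorded in AuxInt when the Store was built survives the rewrite.

```go
package main

import "fmt"

// Value is a stand-in for an SSA value (illustrative only).
type Value struct {
	Op     string
	AuxInt int64 // store width in bytes, fixed when the Store is built
	Args   []*Value
}

// applyTruncRewrite fires the rule (Trunc32to16 x) -> x on the store's
// value argument, as the generic rewrite pass might.
func applyTruncRewrite(st *Value) {
	if a := st.Args[0]; a.Op == "Trunc32to16" {
		st.Args[0] = a.Args[0]
	}
}

func main() {
	x := &Value{Op: "Const32"} // 4-byte source value
	tr := &Value{Op: "Trunc32to16", Args: []*Value{x}}
	st := &Value{Op: "Store", AuxInt: 2, Args: []*Value{tr}}
	applyTruncRewrite(st)
	// The argument is now the 4-byte Const32, but AuxInt still says 2,
	// so lowering would pick a 2-byte MOVW rather than a 4-byte MOVL.
	fmt.Println(st.Args[0].Op, st.AuxInt)
}
```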
      
      Change-Id: I9e2359e4f657bb0ea0e40038969628bf0f84e584
      Reviewed-on: https://go-review.googlesource.com/13636
      Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
  9. 14 Aug, 2015 2 commits
  10. 13 Aug, 2015 6 commits
  11. 12 Aug, 2015 8 commits
  12. 11 Aug, 2015 5 commits
  13. 10 Aug, 2015 1 commit