Commit 46b88c9f authored by Josh Bleecher Snyder

cmd/compile: change ssa.Type into *types.Type

When package ssa was created, Type was in package gc.
To avoid circular dependencies, we used an interface (ssa.Type)
to represent type information in SSA.

In the Go 1.9 cycle, gri (Robert Griesemer) extricated the Type type
from package gc. As a result, we can now use it in package ssa.
Now, instead of package types depending on package ssa,
it is the other way around.
This is a more sensible dependency tree,
and it helps compiler performance a bit.

Though this is a big CL, most of the changes are
mechanical and uninteresting.

Interesting bits:

* Add new singleton globals to package types for the special
  SSA types Memory, Void, Invalid, Flags, and Int128.
* Add two new Etype values: TSSA, for the special types,
  and TTUPLE, for SSA tuple types.
  ssa.MakeTuple is now types.NewTuple.
* Move type comparison result constants CMPlt, CMPeq, and CMPgt
  to package types.
* We had picked the name "types" in our rules for the handy
  list of types provided by ssa.Config. That conflicted with
  the types package name, so change it to "typ".
* Update the type comparison routine to handle tuples and special
  types inline.
* Teach gc/fmt.go how to print special types.
* We can now eliminate ElemTypes in favor of just Elem,
  and probably also some other duplicated Type methods
  designed to return ssa.Type instead of *types.Type.
* The ssa tests were using their own dummy types,
  and they were not particularly careful about types in general.
  Of necessity, this CL switches them to use *types.Type;
  it does not make them more type-accurate.
  Unfortunately, using types.Type means initializing a bit
  of the types universe.
  This is ripe for refactoring and improvement.

This shrinks ssa.Value; it now fits in a smaller size class
on 64-bit systems. This doesn't have a giant impact,
though, since most Values are preallocated in a chunk.

name        old alloc/op      new alloc/op      delta
Template         37.9MB ± 0%       37.7MB ± 0%  -0.57%  (p=0.000 n=10+8)
Unicode          28.9MB ± 0%       28.7MB ± 0%  -0.52%  (p=0.000 n=10+10)
GoTypes           110MB ± 0%        109MB ± 0%  -0.88%  (p=0.000 n=10+10)
Flate            24.7MB ± 0%       24.6MB ± 0%  -0.66%  (p=0.000 n=10+10)
GoParser         31.1MB ± 0%       30.9MB ± 0%  -0.61%  (p=0.000 n=10+9)
Reflect          73.9MB ± 0%       73.4MB ± 0%  -0.62%  (p=0.000 n=10+8)
Tar              25.8MB ± 0%       25.6MB ± 0%  -0.77%  (p=0.000 n=9+10)
XML              41.2MB ± 0%       40.9MB ± 0%  -0.80%  (p=0.000 n=10+10)
[Geo mean]       40.5MB            40.3MB       -0.68%

name        old allocs/op     new allocs/op     delta
Template           385k ± 0%         386k ± 0%    ~     (p=0.356 n=10+9)
Unicode            343k ± 1%         344k ± 0%    ~     (p=0.481 n=10+10)
GoTypes           1.16M ± 0%        1.16M ± 0%  -0.16%  (p=0.004 n=10+10)
Flate              238k ± 1%         238k ± 1%    ~     (p=0.853 n=10+10)
GoParser           320k ± 0%         320k ± 0%    ~     (p=0.720 n=10+9)
Reflect            957k ± 0%         957k ± 0%    ~     (p=0.460 n=10+8)
Tar                252k ± 0%         252k ± 0%    ~     (p=0.133 n=9+10)
XML                400k ± 0%         400k ± 0%    ~     (p=0.796 n=10+10)
[Geo mean]         428k              428k       -0.01%


Removing all the interface calls helps non-trivially with CPU, though.

name        old time/op       new time/op       delta
Template          178ms ± 4%        173ms ± 3%  -2.90%  (p=0.000 n=94+96)
Unicode          85.0ms ± 4%       83.9ms ± 4%  -1.23%  (p=0.000 n=96+96)
GoTypes           543ms ± 3%        528ms ± 3%  -2.73%  (p=0.000 n=98+96)
Flate             116ms ± 3%        113ms ± 4%  -2.34%  (p=0.000 n=96+99)
GoParser          144ms ± 3%        140ms ± 4%  -2.80%  (p=0.000 n=99+97)
Reflect           344ms ± 3%        334ms ± 4%  -3.02%  (p=0.000 n=100+99)
Tar               106ms ± 5%        103ms ± 4%  -3.30%  (p=0.000 n=98+94)
XML               198ms ± 5%        192ms ± 4%  -2.88%  (p=0.000 n=92+95)
[Geo mean]        178ms             173ms       -2.65%

name        old user-time/op  new user-time/op  delta
Template          229ms ± 5%        224ms ± 5%  -2.36%  (p=0.000 n=95+99)
Unicode           107ms ± 6%        106ms ± 5%  -1.13%  (p=0.001 n=93+95)
GoTypes           696ms ± 4%        679ms ± 4%  -2.45%  (p=0.000 n=97+99)
Flate             137ms ± 4%        134ms ± 5%  -2.66%  (p=0.000 n=99+96)
GoParser          176ms ± 5%        172ms ± 8%  -2.27%  (p=0.000 n=98+100)
Reflect           430ms ± 6%        411ms ± 5%  -4.46%  (p=0.000 n=100+92)
Tar               128ms ±13%        123ms ±13%  -4.21%  (p=0.000 n=100+100)
XML               239ms ± 6%        233ms ± 6%  -2.50%  (p=0.000 n=95+97)
[Geo mean]        220ms             213ms       -2.76%
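The CPU win above is the cost of interface dispatch repeated across millions of method calls. A standalone sketch of how one might measure that difference (results vary by Go version, since newer compilers can devirtualize calls whose concrete type is provable; the names here are illustrative, not from the CL):

```go
package main

import (
	"fmt"
	"testing"
)

type typer interface{ Size() int64 }

type T struct{ size int64 }

func (t *T) Size() int64 { return t.size }

// Package-level variables help keep the interface call dynamic.
var ti typer = &T{size: 8}
var tp = &T{size: 8}

// sink prevents the loops from being optimized away.
var sink int64

func main() {
	viaInterface := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink += ti.Size() // dynamic dispatch through the itab
		}
	})
	direct := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			sink += tp.Size() // direct, inlinable call
		}
	})
	fmt.Println("interface call:", viaInterface)
	fmt.Println("direct call:   ", direct)
}
```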


Change-Id: I15c7d6268347f8358e75066dfdbd77db24e8d0c1
Reviewed-on: https://go-review.googlesource.com/42145
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
parent 6a24b2d0
......@@ -590,6 +590,7 @@ var knownFormats = map[string]string{
"*cmd/compile/internal/types.Type %L": "",
"*cmd/compile/internal/types.Type %S": "",
"*cmd/compile/internal/types.Type %p": "",
"*cmd/compile/internal/types.Type %s": "",
"*cmd/compile/internal/types.Type %v": "",
"*cmd/internal/obj.Addr %v": "",
"*cmd/internal/obj.LSym %v": "",
......@@ -633,8 +634,6 @@ var knownFormats = map[string]string{
"cmd/compile/internal/ssa.Location %v": "",
"cmd/compile/internal/ssa.Op %s": "",
"cmd/compile/internal/ssa.Op %v": "",
"cmd/compile/internal/ssa.Type %s": "",
"cmd/compile/internal/ssa.Type %v": "",
"cmd/compile/internal/ssa.ValAndOff %s": "",
"cmd/compile/internal/ssa.rbrank %d": "",
"cmd/compile/internal/ssa.regMask %d": "",
......
......@@ -10,6 +10,7 @@ import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/x86"
)
......@@ -38,7 +39,7 @@ func ssaMarkMoves(s *gc.SSAGenState, b *ssa.Block) {
}
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type) obj.As {
func loadByType(t *types.Type) obj.As {
// Avoid partial register write
if !t.IsFloat() && t.Size() <= 2 {
if t.Size() == 1 {
......@@ -52,7 +53,7 @@ func loadByType(t ssa.Type) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type) obj.As {
func storeByType(t *types.Type) obj.As {
width := t.Size()
if t.IsFloat() {
switch width {
......@@ -77,7 +78,7 @@ func storeByType(t ssa.Type) obj.As {
}
// moveByType returns the reg->reg move instruction of the given type.
func moveByType(t ssa.Type) obj.As {
func moveByType(t *types.Type) obj.As {
if t.IsFloat() {
// Moving the whole sse2 register is faster
// than moving just the correct low portion of it.
......
......@@ -10,12 +10,13 @@ import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/arm"
)
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type) obj.As {
func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......@@ -45,7 +46,7 @@ func loadByType(t ssa.Type) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type) obj.As {
func storeByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......
......@@ -9,12 +9,13 @@ import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/arm64"
)
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type) obj.As {
func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......@@ -50,7 +51,7 @@ func loadByType(t ssa.Type) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type) obj.As {
func storeByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......
......@@ -1794,6 +1794,12 @@ func tconv(t *types.Type, flag FmtFlag, mode fmtMode, depth int) string {
if t == nil {
return "<T>"
}
if t.Etype == types.TSSA {
return t.Extra.(string)
}
if t.Etype == types.TTUPLE {
return t.FieldType(0).String() + "," + t.FieldType(1).String()
}
if depth > 100 {
return "<...>"
......
......@@ -6,6 +6,7 @@ package gc
import (
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/src"
"container/heap"
"fmt"
......@@ -71,7 +72,7 @@ func (s *phiState) insertPhis() {
// Generate a numbering for these variables.
s.varnum = map[*Node]int32{}
var vars []*Node
var vartypes []ssa.Type
var vartypes []*types.Type
for _, b := range s.f.Blocks {
for _, v := range b.Values {
if v.Op != ssa.OpFwdRef {
......@@ -162,7 +163,7 @@ levels:
s.queued = newSparseSet(s.f.NumBlocks())
s.hasPhi = newSparseSet(s.f.NumBlocks())
s.hasDef = newSparseSet(s.f.NumBlocks())
s.placeholder = s.s.entryNewValue0(ssa.OpUnknown, ssa.TypeInvalid)
s.placeholder = s.s.entryNewValue0(ssa.OpUnknown, types.TypeInvalid)
// Generate phi ops for each variable.
for n := range vartypes {
......@@ -182,7 +183,7 @@ levels:
}
}
func (s *phiState) insertVarPhis(n int, var_ *Node, defs []*ssa.Block, typ ssa.Type) {
func (s *phiState) insertVarPhis(n int, var_ *Node, defs []*ssa.Block, typ *types.Type) {
priq := &s.priq
q := s.q
queued := s.queued
......@@ -509,7 +510,7 @@ loop:
}
// lookupVarOutgoing finds the variable's value at the end of block b.
func (s *simplePhiState) lookupVarOutgoing(b *ssa.Block, t ssa.Type, var_ *Node, line src.XPos) *ssa.Value {
func (s *simplePhiState) lookupVarOutgoing(b *ssa.Block, t *types.Type, var_ *Node, line src.XPos) *ssa.Value {
for {
if v := s.defvars[b.ID][var_]; v != nil {
return v
......
......@@ -930,7 +930,7 @@ func clobberPtr(b *ssa.Block, v *Node, offset int64) {
} else {
aux = &ssa.ArgSymbol{Node: v}
}
b.NewValue0IA(src.NoXPos, ssa.OpClobber, ssa.TypeVoid, offset, aux)
b.NewValue0IA(src.NoXPos, ssa.OpClobber, types.TypeVoid, offset, aux)
}
func (lv *Liveness) avarinitanyall(b *ssa.Block, any, all bvec) {
......
......@@ -9,6 +9,7 @@ import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/mips"
)
......@@ -24,7 +25,7 @@ func isHILO(r int16) bool {
}
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type, r int16) obj.As {
func loadByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
......@@ -53,7 +54,7 @@ func loadByType(t ssa.Type, r int16) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type, r int16) obj.As {
func storeByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
......
......@@ -9,6 +9,7 @@ import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/mips"
)
......@@ -24,7 +25,7 @@ func isHILO(r int16) bool {
}
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type, r int16) obj.As {
func loadByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
......@@ -59,7 +60,7 @@ func loadByType(t ssa.Type, r int16) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type, r int16) obj.As {
func storeByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
......
......@@ -7,6 +7,7 @@ package ppc64
import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/ppc64"
"math"
......@@ -58,7 +59,7 @@ func ssaMarkMoves(s *gc.SSAGenState, b *ssa.Block) {
}
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type) obj.As {
func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......@@ -94,7 +95,7 @@ func loadByType(t ssa.Type) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type) obj.As {
func storeByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......
......@@ -9,6 +9,7 @@ import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/s390x"
)
......@@ -37,7 +38,7 @@ func ssaMarkMoves(s *gc.SSAGenState, b *ssa.Block) {
}
// loadByType returns the load instruction of the given type.
func loadByType(t ssa.Type) obj.As {
func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
......@@ -73,7 +74,7 @@ func loadByType(t ssa.Type) obj.As {
}
// storeByType returns the store instruction of the given type.
func storeByType(t ssa.Type) obj.As {
func storeByType(t *types.Type) obj.As {
width := t.Size()
if t.IsFloat() {
switch width {
......@@ -98,7 +99,7 @@ func storeByType(t ssa.Type) obj.As {
}
// moveByType returns the reg->reg move instruction of the given type.
func moveByType(t ssa.Type) obj.As {
func moveByType(t *types.Type) obj.As {
if t.IsFloat() {
return s390x.AFMOVD
} else {
......
......@@ -5,6 +5,7 @@
package ssa
import (
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/objabi"
"cmd/internal/src"
......@@ -45,28 +46,28 @@ type (
)
type Types struct {
Bool Type
Int8 Type
Int16 Type
Int32 Type
Int64 Type
UInt8 Type
UInt16 Type
UInt32 Type
UInt64 Type
Int Type
Float32 Type
Float64 Type
Uintptr Type
String Type
BytePtr Type // TODO: use unsafe.Pointer instead?
Int32Ptr Type
UInt32Ptr Type
IntPtr Type
UintptrPtr Type
Float32Ptr Type
Float64Ptr Type
BytePtrPtr Type
Bool *types.Type
Int8 *types.Type
Int16 *types.Type
Int32 *types.Type
Int64 *types.Type
UInt8 *types.Type
UInt16 *types.Type
UInt32 *types.Type
UInt64 *types.Type
Int *types.Type
Float32 *types.Type
Float64 *types.Type
Uintptr *types.Type
String *types.Type
BytePtr *types.Type // TODO: use unsafe.Pointer instead?
Int32Ptr *types.Type
UInt32Ptr *types.Type
IntPtr *types.Type
UintptrPtr *types.Type
Float32Ptr *types.Type
Float64Ptr *types.Type
BytePtrPtr *types.Type
}
type Logger interface {
......@@ -89,7 +90,7 @@ type Logger interface {
}
type Frontend interface {
CanSSA(t Type) bool
CanSSA(t *types.Type) bool
Logger
......@@ -98,7 +99,7 @@ type Frontend interface {
// Auto returns a Node for an auto variable of the given type.
// The SSA compiler uses this function to allocate space for spills.
Auto(src.XPos, Type) GCNode
Auto(src.XPos, *types.Type) GCNode
// Given the name for a compound type, returns the name we should use
// for the parts of that compound type.
......@@ -133,7 +134,7 @@ type Frontend interface {
// interface used to hold *gc.Node. We'd use *gc.Node directly but
// that would lead to an import cycle.
type GCNode interface {
Typ() Type
Typ() *types.Type
String() string
}
......
......@@ -5,6 +5,7 @@
package ssa
import (
"cmd/compile/internal/types"
"fmt"
"testing"
)
......@@ -20,11 +21,11 @@ func benchmarkCopyElim(b *testing.B, n int) {
c := testConfig(b)
values := make([]interface{}, 0, n+2)
values = append(values, Valu("mem", OpInitMem, TypeMem, 0, nil))
values = append(values, Valu("mem", OpInitMem, types.TypeMem, 0, nil))
last := "mem"
for i := 0; i < n; i++ {
name := fmt.Sprintf("copy%d", i)
values = append(values, Valu(name, OpCopy, TypeMem, 0, nil, last))
values = append(values, Valu(name, OpCopy, types.TypeMem, 0, nil, last))
last = name
}
values = append(values, Exit(last))
......
......@@ -5,6 +5,7 @@
package ssa
import (
"cmd/compile/internal/types"
"fmt"
"sort"
)
......@@ -281,7 +282,7 @@ func partitionValues(a []*Value, auxIDs auxmap) []eqclass {
j := 1
for ; j < len(a); j++ {
w := a[j]
if cmpVal(v, w, auxIDs) != CMPeq {
if cmpVal(v, w, auxIDs) != types.CMPeq {
break
}
}
......@@ -293,16 +294,16 @@ func partitionValues(a []*Value, auxIDs auxmap) []eqclass {
return partition
}
func lt2Cmp(isLt bool) Cmp {
func lt2Cmp(isLt bool) types.Cmp {
if isLt {
return CMPlt
return types.CMPlt
}
return CMPgt
return types.CMPgt
}
type auxmap map[interface{}]int32
func cmpVal(v, w *Value, auxIDs auxmap) Cmp {
func cmpVal(v, w *Value, auxIDs auxmap) types.Cmp {
// Try to order these comparison by cost (cheaper first)
if v.Op != w.Op {
return lt2Cmp(v.Op < w.Op)
......@@ -322,21 +323,21 @@ func cmpVal(v, w *Value, auxIDs auxmap) Cmp {
return lt2Cmp(v.ID < w.ID)
}
if tc := v.Type.Compare(w.Type); tc != CMPeq {
if tc := v.Type.Compare(w.Type); tc != types.CMPeq {
return tc
}
if v.Aux != w.Aux {
if v.Aux == nil {
return CMPlt
return types.CMPlt
}
if w.Aux == nil {
return CMPgt
return types.CMPgt
}
return lt2Cmp(auxIDs[v.Aux] < auxIDs[w.Aux])
}
return CMPeq
return types.CMPeq
}
// Sort values to make the initial partition.
......@@ -350,8 +351,8 @@ func (sv sortvalues) Swap(i, j int) { sv.a[i], sv.a[j] = sv.a[j], sv.a[i] }
func (sv sortvalues) Less(i, j int) bool {
v := sv.a[i]
w := sv.a[j]
if cmp := cmpVal(v, w, sv.auxIDs); cmp != CMPeq {
return cmp == CMPlt
if cmp := cmpVal(v, w, sv.auxIDs); cmp != types.CMPeq {
return cmp == types.CMPlt
}
// Sort by value ID last to keep the sort result deterministic.
......
......@@ -4,7 +4,10 @@
package ssa
import "testing"
import (
"cmd/compile/internal/types"
"testing"
)
type tstAux struct {
s string
......@@ -21,24 +24,24 @@ func TestCSEAuxPartitionBug(t *testing.T) {
// them in an order that triggers the bug
fun := c.Fun("entry",
Bloc("entry",
Valu("start", OpInitMem, TypeMem, 0, nil),
Valu("sp", OpSP, TypeBytePtr, 0, nil),
Valu("r7", OpAdd64, TypeInt64, 0, nil, "arg3", "arg1"),
Valu("r1", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
Valu("arg1", OpArg, TypeInt64, 0, arg1Aux),
Valu("arg2", OpArg, TypeInt64, 0, arg2Aux),
Valu("arg3", OpArg, TypeInt64, 0, arg3Aux),
Valu("r9", OpAdd64, TypeInt64, 0, nil, "r7", "r8"),
Valu("r4", OpAdd64, TypeInt64, 0, nil, "r1", "r2"),
Valu("r8", OpAdd64, TypeInt64, 0, nil, "arg3", "arg2"),
Valu("r2", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
Valu("raddr", OpAddr, TypeInt64Ptr, 0, nil, "sp"),
Valu("raddrdef", OpVarDef, TypeMem, 0, nil, "start"),
Valu("r6", OpAdd64, TypeInt64, 0, nil, "r4", "r5"),
Valu("r3", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
Valu("r5", OpAdd64, TypeInt64, 0, nil, "r2", "r3"),
Valu("r10", OpAdd64, TypeInt64, 0, nil, "r6", "r9"),
Valu("rstore", OpStore, TypeMem, 0, TypeInt64, "raddr", "r10", "raddrdef"),
Valu("start", OpInitMem, types.TypeMem, 0, nil),
Valu("sp", OpSP, c.config.Types.BytePtr, 0, nil),
Valu("r7", OpAdd64, c.config.Types.Int64, 0, nil, "arg3", "arg1"),
Valu("r1", OpAdd64, c.config.Types.Int64, 0, nil, "arg1", "arg2"),
Valu("arg1", OpArg, c.config.Types.Int64, 0, arg1Aux),
Valu("arg2", OpArg, c.config.Types.Int64, 0, arg2Aux),
Valu("arg3", OpArg, c.config.Types.Int64, 0, arg3Aux),
Valu("r9", OpAdd64, c.config.Types.Int64, 0, nil, "r7", "r8"),
Valu("r4", OpAdd64, c.config.Types.Int64, 0, nil, "r1", "r2"),
Valu("r8", OpAdd64, c.config.Types.Int64, 0, nil, "arg3", "arg2"),
Valu("r2", OpAdd64, c.config.Types.Int64, 0, nil, "arg1", "arg2"),
Valu("raddr", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sp"),
Valu("raddrdef", OpVarDef, types.TypeMem, 0, nil, "start"),
Valu("r6", OpAdd64, c.config.Types.Int64, 0, nil, "r4", "r5"),
Valu("r3", OpAdd64, c.config.Types.Int64, 0, nil, "arg1", "arg2"),
Valu("r5", OpAdd64, c.config.Types.Int64, 0, nil, "r2", "r3"),
Valu("r10", OpAdd64, c.config.Types.Int64, 0, nil, "r6", "r9"),
Valu("rstore", OpStore, types.TypeMem, 0, c.config.Types.Int64, "raddr", "r10", "raddrdef"),
Goto("exit")),
Bloc("exit",
Exit("rstore")))
......@@ -89,22 +92,22 @@ func TestZCSE(t *testing.T) {
fun := c.Fun("entry",
Bloc("entry",
Valu("start", OpInitMem, TypeMem, 0, nil),
Valu("sp", OpSP, TypeBytePtr, 0, nil),
Valu("sb1", OpSB, TypeBytePtr, 0, nil),
Valu("sb2", OpSB, TypeBytePtr, 0, nil),
Valu("addr1", OpAddr, TypeInt64Ptr, 0, nil, "sb1"),
Valu("addr2", OpAddr, TypeInt64Ptr, 0, nil, "sb2"),
Valu("a1ld", OpLoad, TypeInt64, 0, nil, "addr1", "start"),
Valu("a2ld", OpLoad, TypeInt64, 0, nil, "addr2", "start"),
Valu("c1", OpConst64, TypeInt64, 1, nil),
Valu("r1", OpAdd64, TypeInt64, 0, nil, "a1ld", "c1"),
Valu("c2", OpConst64, TypeInt64, 1, nil),
Valu("r2", OpAdd64, TypeInt64, 0, nil, "a2ld", "c2"),
Valu("r3", OpAdd64, TypeInt64, 0, nil, "r1", "r2"),
Valu("raddr", OpAddr, TypeInt64Ptr, 0, nil, "sp"),
Valu("raddrdef", OpVarDef, TypeMem, 0, nil, "start"),
Valu("rstore", OpStore, TypeMem, 0, TypeInt64, "raddr", "r3", "raddrdef"),
Valu("start", OpInitMem, types.TypeMem, 0, nil),
Valu("sp", OpSP, c.config.Types.BytePtr, 0, nil),
Valu("sb1", OpSB, c.config.Types.BytePtr, 0, nil),
Valu("sb2", OpSB, c.config.Types.BytePtr, 0, nil),
Valu("addr1", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sb1"),
Valu("addr2", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sb2"),
Valu("a1ld", OpLoad, c.config.Types.Int64, 0, nil, "addr1", "start"),
Valu("a2ld", OpLoad, c.config.Types.Int64, 0, nil, "addr2", "start"),
Valu("c1", OpConst64, c.config.Types.Int64, 1, nil),
Valu("r1", OpAdd64, c.config.Types.Int64, 0, nil, "a1ld", "c1"),
Valu("c2", OpConst64, c.config.Types.Int64, 1, nil),
Valu("r2", OpAdd64, c.config.Types.Int64, 0, nil, "a2ld", "c2"),
Valu("r3", OpAdd64, c.config.Types.Int64, 0, nil, "r1", "r2"),
Valu("raddr", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sp"),
Valu("raddrdef", OpVarDef, types.TypeMem, 0, nil, "start"),
Valu("rstore", OpStore, types.TypeMem, 0, c.config.Types.Int64, "raddr", "r3", "raddrdef"),
Goto("exit")),
Bloc("exit",
Exit("rstore")))
......
......@@ -5,6 +5,7 @@
package ssa
import (
"cmd/compile/internal/types"
"fmt"
"strconv"
"testing"
......@@ -14,14 +15,14 @@ func TestDeadLoop(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem")),
// dead loop
Bloc("deadblock",
// dead value in dead block
Valu("deadval", OpConstBool, TypeBool, 1, nil),
Valu("deadval", OpConstBool, c.config.Types.Bool, 1, nil),
If("deadval", "deadblock", "exit")))
CheckFunc(fun.f)
......@@ -44,8 +45,8 @@ func TestDeadValue(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("deadval", OpConst64, TypeInt64, 37, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("deadval", OpConst64, c.config.Types.Int64, 37, nil),
Goto("exit")),
Bloc("exit",
Exit("mem")))
......@@ -67,8 +68,8 @@ func TestNeverTaken(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("cond", OpConstBool, TypeBool, 0, nil),
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("cond", OpConstBool, c.config.Types.Bool, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
If("cond", "then", "else")),
Bloc("then",
Goto("exit")),
......@@ -102,8 +103,8 @@ func TestNestedDeadBlocks(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("cond", OpConstBool, TypeBool, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("cond", OpConstBool, c.config.Types.Bool, 0, nil),
If("cond", "b2", "b4")),
Bloc("b2",
If("cond", "b3", "b4")),
......@@ -144,7 +145,7 @@ func BenchmarkDeadCode(b *testing.B) {
blocks := make([]bloc, 0, n+2)
blocks = append(blocks,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")))
blocks = append(blocks, Bloc("exit", Exit("mem")))
for i := 0; i < n; i++ {
......
......@@ -4,7 +4,10 @@
package ssa
import "cmd/internal/src"
import (
"cmd/compile/internal/types"
"cmd/internal/src"
)
// dse does dead-store elimination on the Function.
// Dead stores are those which are unconditionally followed by
......@@ -88,7 +91,7 @@ func dse(f *Func) {
if v.Op == OpStore || v.Op == OpZero {
var sz int64
if v.Op == OpStore {
sz = v.Aux.(Type).Size()
sz = v.Aux.(*types.Type).Size()
} else { // OpZero
sz = v.AuxInt
}
......
......@@ -4,25 +4,28 @@
package ssa
import "testing"
import (
"cmd/compile/internal/types"
"testing"
)
func TestDeadStore(t *testing.T) {
c := testConfig(t)
elemType := &TypeImpl{Size_: 1, Name: "testtype"}
ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr", Elem_: elemType} // dummy for testing
ptrType := c.config.Types.BytePtr
t.Logf("PTRTYPE %v", ptrType)
fun := c.Fun("entry",
Bloc("entry",
Valu("start", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("v", OpConstBool, TypeBool, 1, nil),
Valu("start", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr1", OpAddr, ptrType, 0, nil, "sb"),
Valu("addr2", OpAddr, ptrType, 0, nil, "sb"),
Valu("addr3", OpAddr, ptrType, 0, nil, "sb"),
Valu("zero1", OpZero, TypeMem, 1, TypeBool, "addr3", "start"),
Valu("store1", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "zero1"),
Valu("store2", OpStore, TypeMem, 0, TypeBool, "addr2", "v", "store1"),
Valu("store3", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "store2"),
Valu("store4", OpStore, TypeMem, 0, TypeBool, "addr3", "v", "store3"),
Valu("zero1", OpZero, types.TypeMem, 1, c.config.Types.Bool, "addr3", "start"),
Valu("store1", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "zero1"),
Valu("store2", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr2", "v", "store1"),
Valu("store3", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "store2"),
Valu("store4", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr3", "v", "store3"),
Goto("exit")),
Bloc("exit",
Exit("store3")))
......@@ -44,17 +47,17 @@ func TestDeadStore(t *testing.T) {
func TestDeadStorePhi(t *testing.T) {
// make sure we don't get into an infinite loop with phi values.
c := testConfig(t)
ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
Valu("start", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("v", OpConstBool, TypeBool, 1, nil),
Valu("start", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr", OpAddr, ptrType, 0, nil, "sb"),
Goto("loop")),
Bloc("loop",
Valu("phi", OpPhi, TypeMem, 0, nil, "start", "store"),
Valu("store", OpStore, TypeMem, 0, TypeBool, "addr", "v", "phi"),
Valu("phi", OpPhi, types.TypeMem, 0, nil, "start", "store"),
Valu("store", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr", "v", "phi"),
If("v", "loop", "exit")),
Bloc("exit",
Exit("store")))
......@@ -70,17 +73,17 @@ func TestDeadStoreTypes(t *testing.T) {
// types of the address fields are identical (where identicalness is
// decided by the CSE pass).
c := testConfig(t)
t1 := &TypeImpl{Size_: 8, Ptr: true, Name: "t1"}
t2 := &TypeImpl{Size_: 4, Ptr: true, Name: "t2"}
t1 := c.config.Types.UInt64.PtrTo()
t2 := c.config.Types.UInt32.PtrTo()
fun := c.Fun("entry",
Bloc("entry",
Valu("start", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("v", OpConstBool, TypeBool, 1, nil),
Valu("start", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr1", OpAddr, t1, 0, nil, "sb"),
Valu("addr2", OpAddr, t2, 0, nil, "sb"),
Valu("store1", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "start"),
Valu("store2", OpStore, TypeMem, 0, TypeBool, "addr2", "v", "store1"),
Valu("store1", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "start"),
Valu("store2", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr2", "v", "store1"),
Goto("exit")),
Bloc("exit",
Exit("store2")))
......@@ -101,15 +104,15 @@ func TestDeadStoreUnsafe(t *testing.T) {
// covers the case of two different types, but unsafe pointer casting
// can get to a point where the size is changed but type unchanged.
c := testConfig(t)
ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
ptrType := c.config.Types.UInt64.PtrTo()
fun := c.Fun("entry",
Bloc("entry",
Valu("start", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("v", OpConstBool, TypeBool, 1, nil),
Valu("start", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr1", OpAddr, ptrType, 0, nil, "sb"),
Valu("store1", OpStore, TypeMem, 0, TypeInt64, "addr1", "v", "start"), // store 8 bytes
Valu("store2", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "store1"), // store 1 byte
Valu("store1", OpStore, types.TypeMem, 0, c.config.Types.Int64, "addr1", "v", "start"), // store 8 bytes
Valu("store2", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "store1"), // store 1 byte
Goto("exit")),
Bloc("exit",
Exit("store2")))
......
......@@ -4,6 +4,8 @@
package ssa
import "cmd/compile/internal/types"
// decompose converts phi ops on compound builtin types into phi
// ops on simple types.
// (The remaining compound ops are decomposed with rewrite rules.)
......@@ -26,7 +28,7 @@ func decomposeBuiltIn(f *Func) {
t := name.Type
switch {
case t.IsInteger() && t.Size() > f.Config.RegSize:
var elemType Type
var elemType *types.Type
if t.IsSigned() {
elemType = f.Config.Types.Int32
} else {
......@@ -42,7 +44,7 @@ func decomposeBuiltIn(f *Func) {
}
delete(f.NamedValues, name)
case t.IsComplex():
var elemType Type
var elemType *types.Type
if t.Size() == 16 {
elemType = f.Config.Types.Float64
} else {
......@@ -160,19 +162,19 @@ func decomposeSlicePhi(v *Value) {
}
func decomposeInt64Phi(v *Value) {
types := &v.Block.Func.Config.Types
var partType Type
cfgtypes := &v.Block.Func.Config.Types
var partType *types.Type
if v.Type.IsSigned() {
partType = types.Int32
partType = cfgtypes.Int32
} else {
partType = types.UInt32
partType = cfgtypes.UInt32
}
hi := v.Block.NewValue0(v.Pos, OpPhi, partType)
lo := v.Block.NewValue0(v.Pos, OpPhi, types.UInt32)
lo := v.Block.NewValue0(v.Pos, OpPhi, cfgtypes.UInt32)
for _, a := range v.Args {
hi.AddArg(a.Block.NewValue1(v.Pos, OpInt64Hi, partType, a))
lo.AddArg(a.Block.NewValue1(v.Pos, OpInt64Lo, types.UInt32, a))
lo.AddArg(a.Block.NewValue1(v.Pos, OpInt64Lo, cfgtypes.UInt32, a))
}
v.reset(OpInt64Make)
v.AddArg(hi)
......@@ -180,13 +182,13 @@ func decomposeInt64Phi(v *Value) {
}
func decomposeComplexPhi(v *Value) {
types := &v.Block.Func.Config.Types
var partType Type
cfgtypes := &v.Block.Func.Config.Types
var partType *types.Type
switch z := v.Type.Size(); z {
case 8:
partType = types.Float32
partType = cfgtypes.Float32
case 16:
partType = types.Float64
partType = cfgtypes.Float64
default:
v.Fatalf("decomposeComplexPhi: bad complex size %d", z)
}
......
......@@ -4,7 +4,10 @@
package ssa
import "testing"
import (
"cmd/compile/internal/types"
"testing"
)
func BenchmarkDominatorsLinear(b *testing.B) { benchmarkDominators(b, 10000, genLinear) }
func BenchmarkDominatorsFwdBack(b *testing.B) { benchmarkDominators(b, 10000, genFwdBack) }
......@@ -20,7 +23,7 @@ func genLinear(size int) []bloc {
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto(blockn(0)),
),
)
......@@ -43,8 +46,8 @@ func genFwdBack(size int) []bloc {
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
......@@ -73,8 +76,8 @@ func genManyPred(size int) []bloc {
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
......@@ -85,15 +88,15 @@ func genManyPred(size int) []bloc {
switch i % 3 {
case 0:
blocs = append(blocs, Bloc(blockn(i),
Valu("a", OpConstBool, TypeBool, 1, nil),
Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(i+1))))
case 1:
blocs = append(blocs, Bloc(blockn(i),
Valu("a", OpConstBool, TypeBool, 1, nil),
Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", blockn(i+1), blockn(0))))
case 2:
blocs = append(blocs, Bloc(blockn(i),
Valu("a", OpConstBool, TypeBool, 1, nil),
Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", blockn(i+1), blockn(size))))
}
}
......@@ -111,8 +114,8 @@ func genMaxPred(size int) []bloc {
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
......@@ -136,15 +139,15 @@ func genMaxPredValue(size int) []bloc {
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
for i := 0; i < size; i++ {
blocs = append(blocs, Bloc(blockn(i),
Valu("a", OpConstBool, TypeBool, 1, nil),
Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", blockn(i+1), "exit")))
}
......@@ -223,7 +226,7 @@ func TestDominatorsSingleBlock(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Exit("mem")))
doms := map[string]string{}
......@@ -238,7 +241,7 @@ func TestDominatorsSimple(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("a")),
Bloc("a",
Goto("b")),
......@@ -266,8 +269,8 @@ func TestDominatorsMultPredFwd(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", "a", "c")),
Bloc("a",
If("p", "b", "c")),
......@@ -294,8 +297,8 @@ func TestDominatorsDeadCode(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 0, nil),
If("p", "b3", "b5")),
Bloc("b2", Exit("mem")),
Bloc("b3", Goto("b2")),
......@@ -319,8 +322,8 @@ func TestDominatorsMultPredRev(t *testing.T) {
Bloc("entry",
Goto("first")),
Bloc("first",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto("a")),
Bloc("a",
If("p", "b", "first")),
......@@ -348,8 +351,8 @@ func TestDominatorsMultPred(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", "a", "c")),
Bloc("a",
If("p", "b", "c")),
......@@ -377,8 +380,8 @@ func TestInfiniteLoop(t *testing.T) {
// note lack of an exit block
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto("a")),
Bloc("a",
Goto("b")),
......@@ -414,8 +417,8 @@ func TestDomTricky(t *testing.T) {
cfg := testConfig(t)
fun := cfg.Fun("1",
Bloc("1",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto("4")),
Bloc("2",
Goto("11")),
......@@ -490,8 +493,8 @@ func testDominatorsPostTricky(t *testing.T, b7then, b7else, b12then, b12else, b1
c := testConfig(t)
fun := c.Fun("b1",
Bloc("b1",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("p", OpConstBool, TypeBool, 1, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", "b3", "b2")),
Bloc("b3",
If("p", "b5", "b6")),
......
......@@ -5,10 +5,12 @@
package ssa
import (
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/s390x"
"cmd/internal/obj/x86"
"cmd/internal/src"
"fmt"
"testing"
)
......@@ -61,11 +63,11 @@ type DummyFrontend struct {
}
type DummyAuto struct {
t Type
t *types.Type
s string
}
func (d *DummyAuto) Typ() Type {
func (d *DummyAuto) Typ() *types.Type {
return d.t
}
......@@ -76,7 +78,7 @@ func (d *DummyAuto) String() string {
func (DummyFrontend) StringData(s string) interface{} {
return nil
}
func (DummyFrontend) Auto(pos src.XPos, t Type) GCNode {
func (DummyFrontend) Auto(pos src.XPos, t *types.Type) GCNode {
return &DummyAuto{t: t, s: "aDummyAuto"}
}
func (d DummyFrontend) SplitString(s LocalSlot) (LocalSlot, LocalSlot) {
......@@ -128,34 +130,81 @@ func (d DummyFrontend) Warnl(_ src.XPos, msg string, args ...interface{}) { d.t
func (d DummyFrontend) Debug_checknil() bool { return false }
func (d DummyFrontend) Debug_wb() bool { return false }
var dummyTypes = Types{
Bool: TypeBool,
Int8: TypeInt8,
Int16: TypeInt16,
Int32: TypeInt32,
Int64: TypeInt64,
UInt8: TypeUInt8,
UInt16: TypeUInt16,
UInt32: TypeUInt32,
UInt64: TypeUInt64,
Float32: TypeFloat32,
Float64: TypeFloat64,
Int: TypeInt64,
Uintptr: TypeUInt64,
String: nil,
BytePtr: TypeBytePtr,
Int32Ptr: TypeInt32.PtrTo(),
UInt32Ptr: TypeUInt32.PtrTo(),
IntPtr: TypeInt64.PtrTo(),
UintptrPtr: TypeUInt64.PtrTo(),
Float32Ptr: TypeFloat32.PtrTo(),
Float64Ptr: TypeFloat64.PtrTo(),
BytePtrPtr: TypeBytePtr.PtrTo(),
var dummyTypes Types
func init() {
// Initialize just enough of the universe and the types package to make our tests function.
// TODO(josharian): move universe initialization to the types package,
// so this test setup can share it.
types.Tconv = func(t *types.Type, flag, mode, depth int) string {
return t.Etype.String()
}
types.Sconv = func(s *types.Sym, flag, mode int) string {
return "sym"
}
types.FormatSym = func(sym *types.Sym, s fmt.State, verb rune, mode int) {
fmt.Fprintf(s, "sym")
}
types.FormatType = func(t *types.Type, s fmt.State, verb rune, mode int) {
fmt.Fprintf(s, "%v", t.Etype)
}
types.Dowidth = func(t *types.Type) {}
types.Tptr = types.TPTR64
for _, typ := range [...]struct {
width int64
et types.EType
}{
{1, types.TINT8},
{1, types.TUINT8},
{1, types.TBOOL},
{2, types.TINT16},
{2, types.TUINT16},
{4, types.TINT32},
{4, types.TUINT32},
{4, types.TFLOAT32},
{8, types.TFLOAT64},
{8, types.TUINT64},
{8, types.TINT64},
{8, types.TINT},
{8, types.TUINTPTR},
} {
t := types.New(typ.et)
t.Width = typ.width
t.Align = uint8(typ.width)
types.Types[typ.et] = t
}
dummyTypes = Types{
Bool: types.Types[types.TBOOL],
Int8: types.Types[types.TINT8],
Int16: types.Types[types.TINT16],
Int32: types.Types[types.TINT32],
Int64: types.Types[types.TINT64],
UInt8: types.Types[types.TUINT8],
UInt16: types.Types[types.TUINT16],
UInt32: types.Types[types.TUINT32],
UInt64: types.Types[types.TUINT64],
Float32: types.Types[types.TFLOAT32],
Float64: types.Types[types.TFLOAT64],
Int: types.Types[types.TINT],
Uintptr: types.Types[types.TUINTPTR],
String: types.Types[types.TSTRING],
BytePtr: types.NewPtr(types.Types[types.TUINT8]),
Int32Ptr: types.NewPtr(types.Types[types.TINT32]),
UInt32Ptr: types.NewPtr(types.Types[types.TUINT32]),
IntPtr: types.NewPtr(types.Types[types.TINT]),
UintptrPtr: types.NewPtr(types.Types[types.TUINTPTR]),
Float32Ptr: types.NewPtr(types.Types[types.TFLOAT32]),
Float64Ptr: types.NewPtr(types.Types[types.TFLOAT64]),
BytePtrPtr: types.NewPtr(types.NewPtr(types.Types[types.TUINT8])),
}
}
func (d DummyFrontend) DerefItab(sym *obj.LSym, off int64) *obj.LSym { return nil }
func (d DummyFrontend) CanSSA(t Type) bool {
func (d DummyFrontend) CanSSA(t *types.Type) bool {
// There are no un-SSAable types in dummy land.
return true
}
......@@ -18,12 +18,12 @@
//
// fun := Fun("entry",
// Bloc("entry",
// Valu("mem", OpInitMem, TypeMem, 0, nil),
// Valu("mem", OpInitMem, types.TypeMem, 0, nil),
// Goto("exit")),
// Bloc("exit",
// Exit("mem")),
// Bloc("deadblock",
// Valu("deadval", OpConstBool, TypeBool, 0, true),
// Valu("deadval", OpConstBool, c.config.Types.Bool, 0, true),
// If("deadval", "deadblock", "exit")))
//
// and the Blocks or Values used in the Func can be accessed
......@@ -37,6 +37,7 @@ package ssa
// the parser can be used instead of Fun.
import (
"cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
"reflect"
......@@ -223,7 +224,7 @@ func Bloc(name string, entries ...interface{}) bloc {
}
// Valu defines a value in a block.
func Valu(name string, op Op, t Type, auxint int64, aux interface{}, args ...string) valu {
func Valu(name string, op Op, t *types.Type, auxint int64, aux interface{}, args ...string) valu {
return valu{name, op, t, auxint, aux, args}
}
......@@ -266,7 +267,7 @@ type ctrl struct {
type valu struct {
name string
op Op
t Type
t *types.Type
auxint int64
aux interface{}
args []string
......@@ -276,10 +277,10 @@ func TestArgs(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, c.config.Types.Int64, 14, nil),
Valu("b", OpConst64, c.config.Types.Int64, 26, nil),
Valu("sum", OpAdd64, c.config.Types.Int64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem")))
......@@ -299,19 +300,19 @@ func TestEquiv(t *testing.T) {
{
cfg.Fun("entry",
Bloc("entry",
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
......@@ -320,10 +321,10 @@ func TestEquiv(t *testing.T) {
{
cfg.Fun("entry",
Bloc("entry",
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
......@@ -331,10 +332,10 @@ func TestEquiv(t *testing.T) {
Bloc("exit",
Exit("mem")),
Bloc("entry",
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit"))),
},
}
......@@ -351,71 +352,71 @@ func TestEquiv(t *testing.T) {
{
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Exit("mem"))),
},
// value order changed
{
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Exit("mem"))),
},
// value auxint different
{
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 26, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 26, nil),
Exit("mem"))),
},
// value aux different
{
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 0, 14),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 0, 14),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 0, 26),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 0, 26),
Exit("mem"))),
},
// value args different
{
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 14, nil),
Valu("b", OpConst64, TypeInt64, 26, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("a", OpConst64, TypeInt64, 0, nil),
Valu("b", OpConst64, TypeInt64, 14, nil),
Valu("sum", OpAdd64, TypeInt64, 0, nil, "b", "a"),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 0, nil),
Valu("b", OpConst64, cfg.config.Types.Int64, 14, nil),
Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "b", "a"),
Exit("mem"))),
},
}
......@@ -434,14 +435,14 @@ func TestConstCache(t *testing.T) {
c := testConfig(t)
f := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Exit("mem")))
v1 := f.f.ConstBool(src.NoXPos, TypeBool, false)
v2 := f.f.ConstBool(src.NoXPos, TypeBool, true)
v1 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, false)
v2 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, true)
f.f.freeValue(v1)
f.f.freeValue(v2)
v3 := f.f.ConstBool(src.NoXPos, TypeBool, false)
v4 := f.f.ConstBool(src.NoXPos, TypeBool, true)
v3 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, false)
v4 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, true)
if v3.AuxInt != 0 {
t.Errorf("expected %s to have auxint of 0\n", v3.LongString())
}
......
package ssa
import (
"cmd/compile/internal/types"
"fmt"
"strconv"
"testing"
)
func TestFuseEliminatesOneBranch(t *testing.T) {
ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "then", "exit")),
Bloc("then",
Goto("exit")),
......@@ -35,17 +36,17 @@ func TestFuseEliminatesOneBranch(t *testing.T) {
}
func TestFuseEliminatesBothBranches(t *testing.T) {
ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "then", "else")),
Bloc("then",
Goto("exit")),
......@@ -68,17 +69,17 @@ func TestFuseEliminatesBothBranches(t *testing.T) {
}
func TestFuseHandlesPhis(t *testing.T) {
ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "then", "else")),
Bloc("then",
Goto("exit")),
......@@ -105,8 +106,8 @@ func TestFuseEliminatesEmptyBlocks(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("sb", OpSB, TypeInvalid, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("z0")),
Bloc("z1",
Goto("z2")),
......@@ -138,9 +139,9 @@ func BenchmarkFuse(b *testing.B) {
blocks := make([]bloc, 0, 2*n+3)
blocks = append(blocks,
Bloc("entry",
Valu("mem", OpInitMem, TypeMem, 0, nil),
Valu("cond", OpArg, TypeBool, 0, nil),
Valu("x", OpArg, TypeInt64, 0, nil),
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("cond", OpArg, c.config.Types.Bool, 0, nil),
Valu("x", OpArg, c.config.Types.Int64, 0, nil),
Goto("exit")))
phiArgs := make([]string, 0, 2*n)
......@@ -153,7 +154,7 @@ func BenchmarkFuse(b *testing.B) {
}
blocks = append(blocks,
Bloc("merge",
Valu("phi", OpPhi, TypeMem, 0, nil, phiArgs...),
Valu("phi", OpPhi, types.TypeMem, 0, nil, phiArgs...),
Goto("exit")),
Bloc("exit",
Exit("mem")))
......
......@@ -68,8 +68,8 @@
(Neg32 x) -> (NEGL x)
(Neg16 x) -> (NEGL x)
(Neg8 x) -> (NEGL x)
(Neg32F x) && !config.use387 -> (PXOR x (MOVSSconst <types.Float32> [f2i(math.Copysign(0, -1))]))
(Neg64F x) && !config.use387 -> (PXOR x (MOVSDconst <types.Float64> [f2i(math.Copysign(0, -1))]))
(Neg32F x) && !config.use387 -> (PXOR x (MOVSSconst <typ.Float32> [f2i(math.Copysign(0, -1))]))
(Neg64F x) && !config.use387 -> (PXOR x (MOVSDconst <typ.Float64> [f2i(math.Copysign(0, -1))]))
(Neg32F x) && config.use387 -> (FCHS x)
(Neg64F x) && config.use387 -> (FCHS x)
......@@ -256,12 +256,12 @@
// Lowering stores
// These more-specific FP versions of Store pattern should come first.
(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 4 -> (MOVLstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVWstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 -> (MOVLstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVWstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Lowering moves
(Move [0] _ _ mem) -> mem
......
......@@ -78,8 +78,8 @@
(Neg32 x) -> (NEGL x)
(Neg16 x) -> (NEGL x)
(Neg8 x) -> (NEGL x)
(Neg32F x) -> (PXOR x (MOVSSconst <types.Float32> [f2i(math.Copysign(0, -1))]))
(Neg64F x) -> (PXOR x (MOVSDconst <types.Float64> [f2i(math.Copysign(0, -1))]))
(Neg32F x) -> (PXOR x (MOVSSconst <typ.Float32> [f2i(math.Copysign(0, -1))]))
(Neg64F x) -> (PXOR x (MOVSDconst <typ.Float64> [f2i(math.Copysign(0, -1))]))
(Com64 x) -> (NOTQ x)
(Com32 x) -> (NOTL x)
......@@ -97,19 +97,19 @@
(OffPtr [off] ptr) && config.PtrSize == 4 -> (ADDLconst [off] ptr)
// Lowering other arithmetic
(Ctz64 <t> x) -> (CMOVQEQ (Select0 <t> (BSFQ x)) (MOVQconst <t> [64]) (Select1 <TypeFlags> (BSFQ x)))
(Ctz32 x) -> (Select0 (BSFQ (ORQ <types.UInt64> (MOVQconst [1<<32]) x)))
(Ctz64 <t> x) -> (CMOVQEQ (Select0 <t> (BSFQ x)) (MOVQconst <t> [64]) (Select1 <types.TypeFlags> (BSFQ x)))
(Ctz32 x) -> (Select0 (BSFQ (ORQ <typ.UInt64> (MOVQconst [1<<32]) x)))
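The `Ctz32` rule ORs in `1<<32` before the 64-bit bit scan so that a zero input still terminates the scan at bit 32, which is exactly the defined result of Ctz32(0). The identity, restated portably (`ctz32` is an illustrative name):

```go
package main

import (
	"fmt"
	"math/bits"
)

// ctz32 sketches the (Ctz32 x) lowering: widen to 64 bits, set bit 32
// as a sentinel, then do one trailing-zero count. Nonzero inputs are
// unaffected; zero yields 32.
func ctz32(x uint32) int {
	return bits.TrailingZeros64(uint64(x) | 1<<32)
}

func main() {
	fmt.Println(ctz32(0)) // 32
	fmt.Println(ctz32(8)) // 3
}
```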
(BitLen64 <t> x) -> (ADDQconst [1] (CMOVQEQ <t> (Select0 <t> (BSRQ x)) (MOVQconst <t> [-1]) (Select1 <TypeFlags> (BSRQ x))))
(BitLen32 x) -> (BitLen64 (MOVLQZX <types.UInt64> x))
(BitLen64 <t> x) -> (ADDQconst [1] (CMOVQEQ <t> (Select0 <t> (BSRQ x)) (MOVQconst <t> [-1]) (Select1 <types.TypeFlags> (BSRQ x))))
(BitLen32 x) -> (BitLen64 (MOVLQZX <typ.UInt64> x))
(Bswap64 x) -> (BSWAPQ x)
(Bswap32 x) -> (BSWAPL x)
(PopCount64 x) -> (POPCNTQ x)
(PopCount32 x) -> (POPCNTL x)
(PopCount16 x) -> (POPCNTL (MOVWQZX <types.UInt32> x))
(PopCount8 x) -> (POPCNTL (MOVBQZX <types.UInt32> x))
(PopCount16 x) -> (POPCNTL (MOVWQZX <typ.UInt32> x))
(PopCount8 x) -> (POPCNTL (MOVBQZX <typ.UInt32> x))
(Sqrt x) -> (SQRTSD x)
......@@ -305,13 +305,13 @@
// Lowering stores
// These more-specific FP versions of Store pattern should come first.
(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 8 -> (MOVQstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 4 -> (MOVLstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVWstore ptr val mem)
(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 -> (MOVQstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 -> (MOVLstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVWstore ptr val mem)
(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Lowering moves
(Move [0] _ _ mem) -> mem
......@@ -477,10 +477,10 @@
// Atomic stores. We use XCHG to prevent the hardware reordering a subsequent load.
// TODO: most runtime uses of atomic stores don't need that property. Use normal stores for those?
(AtomicStore32 ptr val mem) -> (Select1 (XCHGL <MakeTuple(types.UInt32,TypeMem)> val ptr mem))
(AtomicStore64 ptr val mem) -> (Select1 (XCHGQ <MakeTuple(types.UInt64,TypeMem)> val ptr mem))
(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 8 -> (Select1 (XCHGQ <MakeTuple(types.BytePtr,TypeMem)> val ptr mem))
(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 4 -> (Select1 (XCHGL <MakeTuple(types.BytePtr,TypeMem)> val ptr mem))
(AtomicStore32 ptr val mem) -> (Select1 (XCHGL <types.NewTuple(typ.UInt32,types.TypeMem)> val ptr mem))
(AtomicStore64 ptr val mem) -> (Select1 (XCHGQ <types.NewTuple(typ.UInt64,types.TypeMem)> val ptr mem))
(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 8 -> (Select1 (XCHGQ <types.NewTuple(typ.BytePtr,types.TypeMem)> val ptr mem))
(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 4 -> (Select1 (XCHGL <types.NewTuple(typ.BytePtr,types.TypeMem)> val ptr mem))
// Atomic exchanges.
(AtomicExchange32 ptr val mem) -> (XCHGL val ptr mem)
......@@ -566,8 +566,8 @@
(NE (TESTB (SETNEF cmp) (SETNEF cmp)) yes no) -> (NEF cmp yes no)
// Disabled because it interferes with the pattern match above and makes worse code.
// (SETNEF x) -> (ORQ (SETNE <types.Int8> x) (SETNAN <types.Int8> x))
// (SETEQF x) -> (ANDQ (SETEQ <types.Int8> x) (SETORD <types.Int8> x))
// (SETNEF x) -> (ORQ (SETNE <typ.Int8> x) (SETNAN <typ.Int8> x))
// (SETEQF x) -> (ANDQ (SETEQ <typ.Int8> x) (SETORD <typ.Int8> x))
// fold constants into instructions
(ADDQ x (MOVQconst [c])) && is32Bit(c) -> (ADDQconst [c] x)
......@@ -1898,7 +1898,7 @@
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
-> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
-> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
(ORQ
s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem))
......@@ -1919,7 +1919,7 @@
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
(ORQ
s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem)))
......@@ -1944,7 +1944,7 @@
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLload [i0] {s} p mem))) y)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
// Big-endian indexed loads
......@@ -2044,7 +2044,7 @@
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
-> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
-> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
(ORQ
s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem))
......@@ -2065,7 +2065,7 @@
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
(ORQ
s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem)))
......@@ -2090,7 +2090,7 @@
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
-> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
// Combine 2 byte stores + shift into rolw 8 + word store
(MOVBstore [i] {s} p w
......
......@@ -34,12 +34,12 @@
(Mul32uhilo x y) -> (MULLU x y)
(Div32 x y) ->
(SUB (XOR <types.UInt32> // negate the result if one operand is negative
(Select0 <types.UInt32> (CALLudiv
(SUB <types.UInt32> (XOR x <types.UInt32> (Signmask x)) (Signmask x)) // negate x if negative
(SUB <types.UInt32> (XOR y <types.UInt32> (Signmask y)) (Signmask y)))) // negate y if negative
(Signmask (XOR <types.UInt32> x y))) (Signmask (XOR <types.UInt32> x y)))
(Div32u x y) -> (Select0 <types.UInt32> (CALLudiv x y))
(SUB (XOR <typ.UInt32> // negate the result if one operand is negative
(Select0 <typ.UInt32> (CALLudiv
(SUB <typ.UInt32> (XOR x <typ.UInt32> (Signmask x)) (Signmask x)) // negate x if negative
(SUB <typ.UInt32> (XOR y <typ.UInt32> (Signmask y)) (Signmask y)))) // negate y if negative
(Signmask (XOR <typ.UInt32> x y))) (Signmask (XOR <typ.UInt32> x y)))
(Div32u x y) -> (Select0 <typ.UInt32> (CALLudiv x y))
(Div16 x y) -> (Div32 (SignExt16to32 x) (SignExt16to32 y))
(Div16u x y) -> (Div32u (ZeroExt16to32 x) (ZeroExt16to32 y))
(Div8 x y) -> (Div32 (SignExt8to32 x) (SignExt8to32 y))
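The `Div32` rule above implements signed division on top of the unsigned `CALLudiv` helper using sign masks: `(x ^ s) - s` negates `x` when `s` is all ones and is a no-op when `s` is zero. A hedged, self-contained restatement of the trick (function names are mine, not the compiler's):

```go
package main

import "fmt"

// signmask mirrors (Signmask x) -> (SRAconst x [31]):
// 0 for non-negative x, all ones for negative x.
func signmask(x int32) int32 { return x >> 31 }

// div32 sketches the Div32 lowering: take absolute values via
// XOR-and-subtract, divide unsigned, then negate the quotient iff
// exactly one operand was negative (Signmask (XOR x y)).
func div32(x, y int32) int32 {
	sx, sy := signmask(x), signmask(y)
	ux := uint32((x ^ sx) - sx) // |x|
	uy := uint32((y ^ sy) - sy) // |y|
	q := int32(ux / uy)
	s := signmask(x ^ y) // all ones iff signs differ
	return (q ^ s) - s
}

func main() {
	fmt.Println(div32(-7, 2), div32(7, -2), div32(-7, -2), div32(7, 2)) // -3 -3 3 3
}
```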
......@@ -48,12 +48,12 @@
(Div64F x y) -> (DIVD x y)
(Mod32 x y) ->
(SUB (XOR <types.UInt32> // negate the result if x is negative
(Select1 <types.UInt32> (CALLudiv
(SUB <types.UInt32> (XOR <types.UInt32> x (Signmask x)) (Signmask x)) // negate x if negative
(SUB <types.UInt32> (XOR <types.UInt32> y (Signmask y)) (Signmask y)))) // negate y if negative
(SUB (XOR <typ.UInt32> // negate the result if x is negative
(Select1 <typ.UInt32> (CALLudiv
(SUB <typ.UInt32> (XOR <typ.UInt32> x (Signmask x)) (Signmask x)) // negate x if negative
(SUB <typ.UInt32> (XOR <typ.UInt32> y (Signmask y)) (Signmask y)))) // negate y if negative
(Signmask x)) (Signmask x))
(Mod32u x y) -> (Select1 <types.UInt32> (CALLudiv x y))
(Mod32u x y) -> (Select1 <typ.UInt32> (CALLudiv x y))
(Mod16 x y) -> (Mod32 (SignExt16to32 x) (SignExt16to32 y))
(Mod16u x y) -> (Mod32u (ZeroExt16to32 x) (ZeroExt16to32 y))
(Mod8 x y) -> (Mod32 (SignExt8to32 x) (SignExt8to32 y))
......@@ -117,7 +117,7 @@
// boolean ops -- booleans are represented with 0=false, 1=true
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
(EqB x y) -> (XORconst [1] (XOR <types.Bool> x y))
(EqB x y) -> (XORconst [1] (XOR <typ.Bool> x y))
(NeqB x y) -> (XOR x y)
(Not x) -> (XORconst [1] x)
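Since booleans are 0 or 1 here, `(EqB x y) -> (XORconst [1] (XOR x y))` is just equality as arithmetic: XOR is 1 exactly when the operands differ, and XORing with 1 inverts that. A quick sketch (`eqb` is an illustrative name):

```go
package main

import "fmt"

// eqb computes boolean equality on 0/1 values the way the EqB rule
// does: invert the XOR of the inputs.
func eqb(x, y uint8) uint8 { return 1 ^ (x ^ y) }

func main() {
	fmt.Println(eqb(0, 0), eqb(0, 1), eqb(1, 0), eqb(1, 1)) // 1 0 0 1
}
```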
......@@ -166,11 +166,11 @@
(Rsh32x64 x (Const64 [c])) && uint64(c) < 32 -> (SRAconst x [c])
(Rsh32Ux64 x (Const64 [c])) && uint64(c) < 32 -> (SRLconst x [c])
(Lsh16x64 x (Const64 [c])) && uint64(c) < 16 -> (SLLconst x [c])
(Rsh16x64 x (Const64 [c])) && uint64(c) < 16 -> (SRAconst (SLLconst <types.UInt32> x [16]) [c+16])
(Rsh16Ux64 x (Const64 [c])) && uint64(c) < 16 -> (SRLconst (SLLconst <types.UInt32> x [16]) [c+16])
(Rsh16x64 x (Const64 [c])) && uint64(c) < 16 -> (SRAconst (SLLconst <typ.UInt32> x [16]) [c+16])
(Rsh16Ux64 x (Const64 [c])) && uint64(c) < 16 -> (SRLconst (SLLconst <typ.UInt32> x [16]) [c+16])
(Lsh8x64 x (Const64 [c])) && uint64(c) < 8 -> (SLLconst x [c])
(Rsh8x64 x (Const64 [c])) && uint64(c) < 8 -> (SRAconst (SLLconst <types.UInt32> x [24]) [c+24])
(Rsh8Ux64 x (Const64 [c])) && uint64(c) < 8 -> (SRLconst (SLLconst <types.UInt32> x [24]) [c+24])
(Rsh8x64 x (Const64 [c])) && uint64(c) < 8 -> (SRAconst (SLLconst <typ.UInt32> x [24]) [c+24])
(Rsh8Ux64 x (Const64 [c])) && uint64(c) < 8 -> (SRLconst (SLLconst <typ.UInt32> x [24]) [c+24])
// large constant shifts
(Lsh32x64 _ (Const64 [c])) && uint64(c) >= 32 -> (Const32 [0])
......@@ -182,8 +182,8 @@
// large constant signed right shift, we leave the sign bit
(Rsh32x64 x (Const64 [c])) && uint64(c) >= 32 -> (SRAconst x [31])
(Rsh16x64 x (Const64 [c])) && uint64(c) >= 16 -> (SRAconst (SLLconst <types.UInt32> x [16]) [31])
(Rsh8x64 x (Const64 [c])) && uint64(c) >= 8 -> (SRAconst (SLLconst <types.UInt32> x [24]) [31])
(Rsh16x64 x (Const64 [c])) && uint64(c) >= 16 -> (SRAconst (SLLconst <typ.UInt32> x [16]) [31])
(Rsh8x64 x (Const64 [c])) && uint64(c) >= 8 -> (SRAconst (SLLconst <typ.UInt32> x [24]) [31])
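Sub-word values live in 32-bit registers whose high bits may be stale, so these rules first shift left to place the value's sign bit at bit 31, then shift arithmetically all the way back down, leaving only copies of the sign bit. A sketch of the 16-bit case (`rsh16Large` is an illustrative name):

```go
package main

import "fmt"

// rsh16Large models (SRAconst (SLLconst x [16]) [31]) for a shift
// count >= 16: the result is -1 for negative x and 0 otherwise.
func rsh16Large(x int16) int16 {
	return int16(int32(x) << 16 >> 31)
}

func main() {
	fmt.Println(rsh16Large(-5), rsh16Large(7)) // -1 0
}
```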
// constants
(Const8 [val]) -> (MOVWconst [val])
......@@ -210,7 +210,7 @@
(SignExt16to32 x) -> (MOVHreg x)
(Signmask x) -> (SRAconst x [31])
(Zeromask x) -> (SRAconst (RSBshiftRL <types.Int32> x x [1]) [31]) // sign bit of uint32(x)>>1 - x
(Zeromask x) -> (SRAconst (RSBshiftRL <typ.Int32> x x [1]) [31]) // sign bit of uint32(x)>>1 - x
(Slicemask <t> x) -> (SRAconst (RSBconst <t> [0] x) [31])
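The comment on the Zeromask rule can be sanity-checked outside the compiler. The snippet below is a stand-alone illustrative sketch (not compiler code) of what the lowered sequence computes: `uint32(x)>>1 - x` wraps negative exactly when `x != 0`, and the arithmetic shift by 31 smears that sign bit across the word.

```go
package main

import "fmt"

// zeromask mirrors the lowered SRAconst(RSBshiftRL ...) [31] sequence:
// all ones if x != 0, all zeros if x == 0.
func zeromask(x uint32) uint32 {
	// x>>1 - x wraps negative for every nonzero x, so its sign bit is
	// set; the arithmetic right shift by 31 smears that bit.
	return uint32(int32(x>>1-x) >> 31)
}

func main() {
	fmt.Printf("%#x\n", zeromask(0))
	fmt.Printf("%#x\n", zeromask(1))
	fmt.Printf("%#x\n", zeromask(0x80000000))
}
```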
// float <-> int conversion
@@ -299,23 +299,23 @@
(Load <t> ptr mem) && is64BitFloat(t) -> (MOVDload ptr mem)
// stores
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
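These Store rules dispatch purely on the stored type's size and floatness. As a rough illustration (`storeOp` is a hypothetical helper, not part of the compiler; the real dispatch is the generated rewrite code), the selection amounts to:

```go
package main

import "fmt"

// storeOp sketches the dispatch performed by the ARM Store lowering
// rules: pick a store instruction from the value's size and floatness.
func storeOp(size int64, isFloat bool) string {
	switch {
	case size == 1:
		return "MOVBstore"
	case size == 2:
		return "MOVHstore"
	case size == 4 && !isFloat:
		return "MOVWstore"
	case size == 4 && isFloat:
		return "MOVFstore"
	case size == 8 && isFloat:
		return "MOVDstore"
	}
	return "unhandled"
}

func main() {
	fmt.Println(storeOp(4, false)) // integer word store
	fmt.Println(storeOp(4, true))  // single-precision float store
	fmt.Println(storeOp(8, true))  // double-precision float store
}
```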
// zero instructions
(Zero [0] _ mem) -> mem
(Zero [1] ptr mem) -> (MOVBstore ptr (MOVWconst [0]) mem)
-(Zero [2] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [2] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore ptr (MOVWconst [0]) mem)
(Zero [2] ptr mem) ->
(MOVBstore [1] ptr (MOVWconst [0])
(MOVBstore [0] ptr (MOVWconst [0]) mem))
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore ptr (MOVWconst [0]) mem)
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] ptr (MOVWconst [0])
(MOVHstore [0] ptr (MOVWconst [0]) mem))
(Zero [4] ptr mem) ->
@@ -333,29 +333,29 @@
// 4 and 128 are magic constants, see runtime/mkduff.go
(Zero [s] {t} ptr mem)
&& s%4 == 0 && s > 4 && s <= 512
-&& t.(Type).Alignment()%4 == 0 && !config.noDuffDevice ->
+&& t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice ->
(DUFFZERO [4 * (128 - int64(s/4))] ptr (MOVWconst [0]) mem)
// Large zeroing uses a loop
(Zero [s] {t} ptr mem)
-&& (s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0 ->
-(LoweredZero [t.(Type).Alignment()]
+&& (s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0 ->
+(LoweredZero [t.(*types.Type).Alignment()]
ptr
-(ADDconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)])
+(ADDconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)])
(MOVWconst [0])
mem)
// moves
(Move [0] _ _ mem) -> mem
(Move [1] dst src mem) -> (MOVBstore dst (MOVBUload src mem) mem)
-(Move [2] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [2] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore dst (MOVHUload src mem) mem)
(Move [2] dst src mem) ->
(MOVBstore [1] dst (MOVBUload [1] src mem)
(MOVBstore dst (MOVBUload src mem) mem))
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore dst (MOVWload src mem) mem)
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] dst (MOVHUload [2] src mem)
(MOVHstore dst (MOVHUload src mem) mem))
(Move [4] dst src mem) ->
@@ -373,16 +373,16 @@
// 8 and 128 are magic constants, see runtime/mkduff.go
(Move [s] {t} dst src mem)
&& s%4 == 0 && s > 4 && s <= 512
-&& t.(Type).Alignment()%4 == 0 && !config.noDuffDevice ->
+&& t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice ->
(DUFFCOPY [8 * (128 - int64(s/4))] dst src mem)
// Large move uses a loop
(Move [s] {t} dst src mem)
-&& (s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0 ->
-(LoweredMove [t.(Type).Alignment()]
+&& (s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0 ->
+(LoweredMove [t.(*types.Type).Alignment()]
dst
src
-(ADDconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)])
+(ADDconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)])
mem)
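The `[s-moveSize(...)]` operand computes the address of the last unit the LoweredMove/LoweredZero loop touches. A stand-alone sketch of moveSize's assumed semantics (the widest store unit the alignment permits; the signature is simplified from the real helper, which takes a *Config):

```go
package main

import "fmt"

// moveSize sketches the assumed semantics: the widest aligned unit a
// lowered move/zero loop can advance by per iteration.
func moveSize(align, ptrSize int64) int64 {
	switch {
	case align%8 == 0 && ptrSize == 8:
		return 8
	case align%4 == 0:
		return 4
	case align%2 == 0:
		return 2
	}
	return 1
}

func main() {
	// On 32-bit ARM (ptrSize 4) a 4-aligned move advances a word at a time.
	fmt.Println(moveSize(4, 4))
	fmt.Println(moveSize(2, 4))
	fmt.Println(moveSize(1, 4))
}
```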
// calls
@@ -27,8 +27,8 @@
(Hmul64 x y) -> (MULH x y)
(Hmul64u x y) -> (UMULH x y)
-(Hmul32 x y) -> (SRAconst (MULL <types.Int64> x y) [32])
-(Hmul32u x y) -> (SRAconst (UMULL <types.UInt64> x y) [32])
+(Hmul32 x y) -> (SRAconst (MULL <typ.Int64> x y) [32])
+(Hmul32u x y) -> (SRAconst (UMULL <typ.UInt64> x y) [32])
(Div64 x y) -> (DIV x y)
(Div64u x y) -> (UDIV x y)
@@ -86,20 +86,20 @@
(Ctz64 <t> x) -> (CLZ (RBIT <t> x))
(Ctz32 <t> x) -> (CLZW (RBITW <t> x))
-(BitLen64 x) -> (SUB (MOVDconst [64]) (CLZ <types.Int> x))
+(BitLen64 x) -> (SUB (MOVDconst [64]) (CLZ <typ.Int> x))
(Bswap64 x) -> (REV x)
(Bswap32 x) -> (REVW x)
(BitRev64 x) -> (RBIT x)
(BitRev32 x) -> (RBITW x)
-(BitRev16 x) -> (SRLconst [48] (RBIT <types.UInt64> x))
-(BitRev8 x) -> (SRLconst [56] (RBIT <types.UInt64> x))
+(BitRev16 x) -> (SRLconst [48] (RBIT <typ.UInt64> x))
+(BitRev8 x) -> (SRLconst [56] (RBIT <typ.UInt64> x))
// boolean ops -- booleans are represented with 0=false, 1=true
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
-(EqB x y) -> (XOR (MOVDconst [1]) (XOR <types.Bool> x y))
+(EqB x y) -> (XOR (MOVDconst [1]) (XOR <typ.Bool> x y))
(NeqB x y) -> (XOR x y)
(Not x) -> (XOR (MOVDconst [1]) x)
@@ -338,12 +338,12 @@
(Load <t> ptr mem) && is64BitFloat(t) -> (FMOVDload ptr mem)
// stores
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && !is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
// zeroing
(Zero [0] _ mem) -> mem
@@ -322,13 +322,13 @@
// Lowering stores
// These more-specific FP versions of Store pattern should come first.
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 -> (MOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Lowering moves
@@ -437,7 +437,7 @@
(If (MOVDGTnoinv (MOVDconst [0]) (MOVDconst [1]) cmp) yes no) -> (GTF cmp yes no)
(If (MOVDGEnoinv (MOVDconst [0]) (MOVDconst [1]) cmp) yes no) -> (GEF cmp yes no)
-(If cond yes no) -> (NE (CMPWconst [0] (MOVBZreg <types.Bool> cond)) yes no)
+(If cond yes no) -> (NE (CMPWconst [0] (MOVBZreg <typ.Bool> cond)) yes no)
// ***************************
// Above: lowering rules
@@ -446,8 +446,8 @@
// TODO: Should the optimizations be a separate pass?
// Fold unnecessary type conversions.
-(MOVDreg <t> x) && t.Compare(x.Type) == CMPeq -> x
-(MOVDnop <t> x) && t.Compare(x.Type) == CMPeq -> x
+(MOVDreg <t> x) && t.Compare(x.Type) == types.CMPeq -> x
+(MOVDnop <t> x) && t.Compare(x.Type) == types.CMPeq -> x
// Propagate constants through type conversions.
(MOVDreg (MOVDconst [c])) -> (MOVDconst [c])
@@ -13,28 +13,28 @@
(Load <t> ptr mem) && t.IsComplex() && t.Size() == 8 ->
(ComplexMake
-(Load <types.Float32> ptr mem)
-(Load <types.Float32>
-(OffPtr <types.Float32Ptr> [4] ptr)
+(Load <typ.Float32> ptr mem)
+(Load <typ.Float32>
+(OffPtr <typ.Float32Ptr> [4] ptr)
mem)
)
-(Store {t} dst (ComplexMake real imag) mem) && t.(Type).Size() == 8 ->
-(Store {types.Float32}
-(OffPtr <types.Float32Ptr> [4] dst)
+(Store {t} dst (ComplexMake real imag) mem) && t.(*types.Type).Size() == 8 ->
+(Store {typ.Float32}
+(OffPtr <typ.Float32Ptr> [4] dst)
imag
-(Store {types.Float32} dst real mem))
+(Store {typ.Float32} dst real mem))
(Load <t> ptr mem) && t.IsComplex() && t.Size() == 16 ->
(ComplexMake
-(Load <types.Float64> ptr mem)
-(Load <types.Float64>
-(OffPtr <types.Float64Ptr> [8] ptr)
+(Load <typ.Float64> ptr mem)
+(Load <typ.Float64>
+(OffPtr <typ.Float64Ptr> [8] ptr)
mem)
)
-(Store {t} dst (ComplexMake real imag) mem) && t.(Type).Size() == 16 ->
-(Store {types.Float64}
-(OffPtr <types.Float64Ptr> [8] dst)
+(Store {t} dst (ComplexMake real imag) mem) && t.(*types.Type).Size() == 16 ->
+(Store {typ.Float64}
+(OffPtr <typ.Float64Ptr> [8] dst)
imag
-(Store {types.Float64} dst real mem))
+(Store {typ.Float64} dst real mem))
// string ops
(StringPtr (StringMake ptr _)) -> ptr
@@ -42,15 +42,15 @@
(Load <t> ptr mem) && t.IsString() ->
(StringMake
-(Load <types.BytePtr> ptr mem)
-(Load <types.Int>
-(OffPtr <types.IntPtr> [config.PtrSize] ptr)
+(Load <typ.BytePtr> ptr mem)
+(Load <typ.Int>
+(OffPtr <typ.IntPtr> [config.PtrSize] ptr)
mem))
(Store dst (StringMake ptr len) mem) ->
-(Store {types.Int}
-(OffPtr <types.IntPtr> [config.PtrSize] dst)
+(Store {typ.Int}
+(OffPtr <typ.IntPtr> [config.PtrSize] dst)
len
-(Store {types.BytePtr} dst ptr mem))
+(Store {typ.BytePtr} dst ptr mem))
// slice ops
(SlicePtr (SliceMake ptr _ _ )) -> ptr
@@ -60,20 +60,20 @@
(Load <t> ptr mem) && t.IsSlice() ->
(SliceMake
(Load <t.ElemType().PtrTo()> ptr mem)
-(Load <types.Int>
-(OffPtr <types.IntPtr> [config.PtrSize] ptr)
+(Load <typ.Int>
+(OffPtr <typ.IntPtr> [config.PtrSize] ptr)
mem)
-(Load <types.Int>
-(OffPtr <types.IntPtr> [2*config.PtrSize] ptr)
+(Load <typ.Int>
+(OffPtr <typ.IntPtr> [2*config.PtrSize] ptr)
mem))
(Store dst (SliceMake ptr len cap) mem) ->
-(Store {types.Int}
-(OffPtr <types.IntPtr> [2*config.PtrSize] dst)
+(Store {typ.Int}
+(OffPtr <typ.IntPtr> [2*config.PtrSize] dst)
cap
-(Store {types.Int}
-(OffPtr <types.IntPtr> [config.PtrSize] dst)
+(Store {typ.Int}
+(OffPtr <typ.IntPtr> [config.PtrSize] dst)
len
-(Store {types.BytePtr} dst ptr mem)))
+(Store {typ.BytePtr} dst ptr mem)))
// interface ops
(ITab (IMake itab _)) -> itab
@@ -81,12 +81,12 @@
(Load <t> ptr mem) && t.IsInterface() ->
(IMake
-(Load <types.BytePtr> ptr mem)
-(Load <types.BytePtr>
-(OffPtr <types.BytePtrPtr> [config.PtrSize] ptr)
+(Load <typ.BytePtr> ptr mem)
+(Load <typ.BytePtr>
+(OffPtr <typ.BytePtrPtr> [config.PtrSize] ptr)
mem))
(Store dst (IMake itab data) mem) ->
-(Store {types.BytePtr}
-(OffPtr <types.BytePtrPtr> [config.PtrSize] dst)
+(Store {typ.BytePtr}
+(OffPtr <typ.BytePtrPtr> [config.PtrSize] dst)
data
-(Store {types.Uintptr} dst itab mem))
+(Store {typ.Uintptr} dst itab mem))
@@ -157,9 +157,11 @@ func genRules(arch arch) {
fmt.Fprintln(w, "import \"math\"")
fmt.Fprintln(w, "import \"cmd/internal/obj\"")
fmt.Fprintln(w, "import \"cmd/internal/objabi\"")
+fmt.Fprintln(w, "import \"cmd/compile/internal/types\"")
fmt.Fprintln(w, "var _ = math.MinInt8 // in case not otherwise used")
fmt.Fprintln(w, "var _ = obj.ANOP // in case not otherwise used")
fmt.Fprintln(w, "var _ = objabi.GOROOT // in case not otherwise used")
+fmt.Fprintln(w, "var _ = types.TypeMem // in case not otherwise used")
fmt.Fprintln(w)
const chunkSize = 10
@@ -230,9 +232,9 @@ func genRules(arch arch) {
hasb := strings.Contains(body, "b.")
hasconfig := strings.Contains(body, "config.") || strings.Contains(body, "config)")
hasfe := strings.Contains(body, "fe.")
-hasts := strings.Contains(body, "types.")
+hastyps := strings.Contains(body, "typ.")
fmt.Fprintf(w, "func rewriteValue%s_%s_%d(v *Value) bool {\n", arch.name, op, chunk)
-if hasb || hasconfig || hasfe {
+if hasb || hasconfig || hasfe || hastyps {
fmt.Fprintln(w, "b := v.Block")
fmt.Fprintln(w, "_ = b")
}
@@ -244,9 +246,9 @@ func genRules(arch arch) {
fmt.Fprintln(w, "fe := b.Func.fe")
fmt.Fprintln(w, "_ = fe")
}
-if hasts {
-fmt.Fprintln(w, "types := &b.Func.Config.Types")
-fmt.Fprintln(w, "_ = types")
+if hastyps {
+fmt.Fprintln(w, "typ := &b.Func.Config.Types")
+fmt.Fprintln(w, "_ = typ")
}
fmt.Fprint(w, body)
fmt.Fprintf(w, "}\n")
@@ -260,8 +262,8 @@ func genRules(arch arch) {
fmt.Fprintln(w, "_ = config")
fmt.Fprintln(w, "fe := b.Func.fe")
fmt.Fprintln(w, "_ = fe")
-fmt.Fprintln(w, "types := &config.Types")
-fmt.Fprintln(w, "_ = types")
+fmt.Fprintln(w, "typ := &config.Types")
+fmt.Fprintln(w, "_ = typ")
fmt.Fprintf(w, "switch b.Kind {\n")
ops = nil
for op := range blockrules {
@@ -731,13 +733,13 @@ func typeName(typ string) string {
if len(ts) != 2 {
panic("Tuple expect 2 arguments")
}
-return "MakeTuple(" + typeName(ts[0]) + ", " + typeName(ts[1]) + ")"
+return "types.NewTuple(" + typeName(ts[0]) + ", " + typeName(ts[1]) + ")"
}
switch typ {
case "Flags", "Mem", "Void", "Int128":
-return "Type" + typ
+return "types.Type" + typ
default:
-return "types." + typ
+return "typ." + typ
}
}
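Extracted as a stand-alone sketch, the updated typeName mapping can be exercised directly. The Tuple-prefix check and whitespace trimming below are illustrative simplifications of the rulegen code: special SSA types become package-level singletons ("types.TypeMem"), tuples go through types.NewTuple, and ordinary types route through the per-config "typ" struct ("typ.UInt32").

```go
package main

import (
	"fmt"
	"strings"
)

// typeName maps a rule-file type name to the Go expression emitted
// into the generated rewrite code.
func typeName(typ string) string {
	if strings.HasPrefix(typ, "Tuple(") && strings.HasSuffix(typ, ")") {
		ts := strings.Split(typ[len("Tuple("):len(typ)-1], ",")
		if len(ts) != 2 {
			panic("Tuple expects 2 arguments")
		}
		return "types.NewTuple(" + typeName(strings.TrimSpace(ts[0])) + ", " + typeName(strings.TrimSpace(ts[1])) + ")"
	}
	switch typ {
	case "Flags", "Mem", "Void", "Int128":
		return "types.Type" + typ // special SSA singleton types
	default:
		return "typ." + typ // ordinary types via the config's Types struct
	}
}

func main() {
	fmt.Println(typeName("Mem"))
	fmt.Println(typeName("UInt32"))
	fmt.Println(typeName("Tuple(Int64,Flags)"))
}
```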
@@ -4,7 +4,10 @@
package ssa
-import "fmt"
+import (
+	"cmd/compile/internal/types"
+	"fmt"
+)
// A place that an ssa variable can reside.
type Location interface {
@@ -26,9 +29,9 @@ func (r *Register) Name() string {
// A LocalSlot is a location in the stack frame.
// It is (possibly a subpiece of) a PPARAM, PPARAMOUT, or PAUTO ONAME node.
type LocalSlot struct {
-N GCNode // an ONAME *gc.Node representing a variable on the stack
-Type Type // type of slot
-Off int64 // offset of slot in N
+N GCNode // an ONAME *gc.Node representing a variable on the stack
+Type *types.Type // type of slot
+Off int64 // offset of slot in N
}
func (s LocalSlot) Name() string {
@@ -5,6 +5,7 @@
package ssa
import (
+"cmd/compile/internal/types"
"cmd/internal/src"
"testing"
)
@@ -47,27 +48,27 @@ func TestLoopConditionS390X(t *testing.T) {
c := testConfigS390X(t)
fun := c.Fun("entry",
Bloc("entry",
-Valu("mem", OpInitMem, TypeMem, 0, nil),
-Valu("SP", OpSP, TypeUInt64, 0, nil),
-Valu("ret", OpAddr, TypeInt64Ptr, 0, nil, "SP"),
-Valu("N", OpArg, TypeInt64, 0, c.Frontend().Auto(src.NoXPos, TypeInt64)),
-Valu("starti", OpConst64, TypeInt64, 0, nil),
-Valu("startsum", OpConst64, TypeInt64, 0, nil),
+Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+Valu("SP", OpSP, c.config.Types.UInt64, 0, nil),
+Valu("ret", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "SP"),
+Valu("N", OpArg, c.config.Types.Int64, 0, c.Frontend().Auto(src.NoXPos, c.config.Types.Int64)),
+Valu("starti", OpConst64, c.config.Types.Int64, 0, nil),
+Valu("startsum", OpConst64, c.config.Types.Int64, 0, nil),
Goto("b1")),
Bloc("b1",
-Valu("phii", OpPhi, TypeInt64, 0, nil, "starti", "i"),
-Valu("phisum", OpPhi, TypeInt64, 0, nil, "startsum", "sum"),
-Valu("cmp1", OpLess64, TypeBool, 0, nil, "phii", "N"),
+Valu("phii", OpPhi, c.config.Types.Int64, 0, nil, "starti", "i"),
+Valu("phisum", OpPhi, c.config.Types.Int64, 0, nil, "startsum", "sum"),
+Valu("cmp1", OpLess64, c.config.Types.Bool, 0, nil, "phii", "N"),
If("cmp1", "b2", "b3")),
Bloc("b2",
-Valu("c1", OpConst64, TypeInt64, 1, nil),
-Valu("i", OpAdd64, TypeInt64, 0, nil, "phii", "c1"),
-Valu("c3", OpConst64, TypeInt64, 3, nil),
-Valu("sum", OpAdd64, TypeInt64, 0, nil, "phisum", "c3"),
+Valu("c1", OpConst64, c.config.Types.Int64, 1, nil),
+Valu("i", OpAdd64, c.config.Types.Int64, 0, nil, "phii", "c1"),
+Valu("c3", OpConst64, c.config.Types.Int64, 3, nil),
+Valu("sum", OpAdd64, c.config.Types.Int64, 0, nil, "phisum", "c3"),
Goto("b1")),
Bloc("b3",
-Valu("retdef", OpVarDef, TypeMem, 0, nil, "mem"),
-Valu("store", OpStore, TypeMem, 0, TypeInt64, "ret", "phisum", "retdef"),
+Valu("retdef", OpVarDef, types.TypeMem, 0, nil, "mem"),
+Valu("store", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ret", "phisum", "retdef"),
Exit("store")))
CheckFunc(fun.f)
Compile(fun.f)
@@ -4,7 +4,10 @@
package ssa
-import "fmt"
+import (
+	"cmd/compile/internal/types"
+	"fmt"
+)
// an edgeMem records a backedge, together with the memory
// phi functions at the target of the backedge that must
@@ -84,7 +87,7 @@ func insertLoopReschedChecks(f *Func) {
// It's possible that there is no memory state (no global/pointer loads/stores or calls)
if lastMems[f.Entry.ID] == nil {
-lastMems[f.Entry.ID] = f.Entry.NewValue0(f.Entry.Pos, OpInitMem, TypeMem)
+lastMems[f.Entry.ID] = f.Entry.NewValue0(f.Entry.Pos, OpInitMem, types.TypeMem)
}
memDefsAtBlockEnds := make([]*Value, f.NumBlocks()) // For each block, the mem def seen at its bottom. Could be from earlier block.
@@ -197,8 +200,8 @@ func insertLoopReschedChecks(f *Func) {
// if sp < g.limit { goto sched }
// goto header
-types := &f.Config.Types
-pt := types.Uintptr
+cfgtypes := &f.Config.Types
+pt := cfgtypes.Uintptr
g := test.NewValue1(bb.Pos, OpGetG, pt, mem0)
sp := test.NewValue0(bb.Pos, OpSP, pt)
cmpOp := OpLess64U
@@ -207,7 +210,7 @@
}
limaddr := test.NewValue1I(bb.Pos, OpOffPtr, pt, 2*pt.Size(), g)
lim := test.NewValue2(bb.Pos, OpLoad, pt, limaddr, mem0)
-cmp := test.NewValue2(bb.Pos, cmpOp, types.Bool, sp, lim)
+cmp := test.NewValue2(bb.Pos, cmpOp, cfgtypes.Bool, sp, lim)
test.SetControl(cmp)
// if true, goto sched
@@ -226,7 +229,7 @@
// mem1 := call resched (mem0)
// goto header
resched := f.fe.Syslook("goschedguarded")
-mem1 := sched.NewValue1A(bb.Pos, OpStaticCall, TypeMem, resched, mem0)
+mem1 := sched.NewValue1A(bb.Pos, OpStaticCall, types.TypeMem, resched, mem0)
sched.AddEdgeTo(h)
headerMemPhi.AddArg(mem1)
@@ -4,6 +4,7 @@
package ssa
import (
+"cmd/compile/internal/types"
"fmt"
"testing"
)
@@ -60,32 +61,32 @@ func benchFnBlock(b *testing.B, fn passFunc, bg blockGen) {
func genFunction(size int) []bloc {
var blocs []bloc
-elemType := &TypeImpl{Size_: 8, Name: "testtype"}
-ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr", Elem_: elemType} // dummy for testing
+elemType := types.Types[types.TINT64]
+ptrType := elemType.PtrTo()
valn := func(s string, m, n int) string { return fmt.Sprintf("%s%d-%d", s, m, n) }
blocs = append(blocs,
Bloc("entry",
-Valu(valn("store", 0, 4), OpInitMem, TypeMem, 0, nil),
-Valu("sb", OpSB, TypeInvalid, 0, nil),
+Valu(valn("store", 0, 4), OpInitMem, types.TypeMem, 0, nil),
+Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto(blockn(1)),
),
)
for i := 1; i < size+1; i++ {
blocs = append(blocs, Bloc(blockn(i),
-Valu(valn("v", i, 0), OpConstBool, TypeBool, 1, nil),
+Valu(valn("v", i, 0), OpConstBool, types.Types[types.TBOOL], 1, nil),
Valu(valn("addr", i, 1), OpAddr, ptrType, 0, nil, "sb"),
Valu(valn("addr", i, 2), OpAddr, ptrType, 0, nil, "sb"),
Valu(valn("addr", i, 3), OpAddr, ptrType, 0, nil, "sb"),
-Valu(valn("zero", i, 1), OpZero, TypeMem, 8, elemType, valn("addr", i, 3),
+Valu(valn("zero", i, 1), OpZero, types.TypeMem, 8, elemType, valn("addr", i, 3),
valn("store", i-1, 4)),
-Valu(valn("store", i, 1), OpStore, TypeMem, 0, elemType, valn("addr", i, 1),
+Valu(valn("store", i, 1), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 1),
valn("v", i, 0), valn("zero", i, 1)),
-Valu(valn("store", i, 2), OpStore, TypeMem, 0, elemType, valn("addr", i, 2),
+Valu(valn("store", i, 2), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 2),
valn("v", i, 0), valn("store", i, 1)),
-Valu(valn("store", i, 3), OpStore, TypeMem, 0, elemType, valn("addr", i, 1),
+Valu(valn("store", i, 3), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 1),
valn("v", i, 0), valn("store", i, 2)),
-Valu(valn("store", i, 4), OpStore, TypeMem, 0, elemType, valn("addr", i, 3),
+Valu(valn("store", i, 4), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 3),
valn("v", i, 0), valn("store", i, 3)),
Goto(blockn(i+1))))
}
@@ -114,6 +114,7 @@
package ssa
import (
+"cmd/compile/internal/types"
"cmd/internal/objabi"
"cmd/internal/src"
"fmt"
@@ -698,12 +699,12 @@ func (s *regAllocState) setState(regs []endReg) {
}
// compatRegs returns the set of registers which can store a type t.
-func (s *regAllocState) compatRegs(t Type) regMask {
+func (s *regAllocState) compatRegs(t *types.Type) regMask {
var m regMask
if t.IsTuple() || t.IsFlags() {
return 0
}
-if t.IsFloat() || t == TypeInt128 {
+if t.IsFloat() || t == types.TypeInt128 {
m = s.f.Config.fpRegMask
} else {
m = s.f.Config.gpRegMask
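The register-class dispatch in compatRegs can be sketched as a pure function. The boolean parameters and masks below are hypothetical stand-ins for the *types.Type queries and the Config masks; only the branch structure mirrors the diff.

```go
package main

import "fmt"

type regMask uint64

// compatRegs sketches the dispatch: tuple and flags values get no
// registers, floats and the special 128-bit type get the FP mask,
// everything else the GP mask.
func compatRegs(isTuple, isFlags, isFloat, isInt128 bool, fpMask, gpMask regMask) regMask {
	if isTuple || isFlags {
		return 0
	}
	if isFloat || isInt128 {
		return fpMask
	}
	return gpMask
}

func main() {
	const fp, gp regMask = 0xffff0000, 0x0000ffff // hypothetical masks
	fmt.Printf("%#x\n", compatRegs(false, true, false, false, fp, gp))
	fmt.Printf("%#x\n", compatRegs(false, false, true, false, fp, gp))
	fmt.Printf("%#x\n", compatRegs(false, false, false, false, fp, gp))
}
```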
@@ -2078,7 +2079,7 @@ func (e *edgeState) erase(loc Location) {
}
// findRegFor finds a register we can use to make a temp copy of type typ.
-func (e *edgeState) findRegFor(typ Type) Location {
+func (e *edgeState) findRegFor(typ *types.Type) Location {
// Which registers are possibilities.
var m regMask
types := &e.s.f.Config.Types
@@ -5,6 +5,7 @@
package ssa
import (
+"cmd/compile/internal/types"
"cmd/internal/src"
"testing"
)
@@ -13,11 +14,11 @@ func TestLiveControlOps(t *testing.T) {
c := testConfig(t)
f := c.Fun("entry",
Bloc("entry",
-Valu("mem", OpInitMem, TypeMem, 0, nil),
-Valu("x", OpAMD64MOVLconst, TypeInt8, 1, nil),
-Valu("y", OpAMD64MOVLconst, TypeInt8, 2, nil),
-Valu("a", OpAMD64TESTB, TypeFlags, 0, nil, "x", "y"),
-Valu("b", OpAMD64TESTB, TypeFlags, 0, nil, "y", "x"),
+Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+Valu("x", OpAMD64MOVLconst, c.config.Types.Int8, 1, nil),
+Valu("y", OpAMD64MOVLconst, c.config.Types.Int8, 2, nil),
+Valu("a", OpAMD64TESTB, types.TypeFlags, 0, nil, "x", "y"),
+Valu("b", OpAMD64TESTB, types.TypeFlags, 0, nil, "y", "x"),
Eq("a", "if", "exit"),
),
Bloc("if",
@@ -41,23 +42,23 @@ func TestSpillWithLoop(t *testing.T) {
c := testConfig(t)
f := c.Fun("entry",
Bloc("entry",
-Valu("mem", OpInitMem, TypeMem, 0, nil),
-Valu("ptr", OpArg, TypeInt64Ptr, 0, c.Frontend().Auto(src.NoXPos, TypeInt64)),
-Valu("cond", OpArg, TypeBool, 0, c.Frontend().Auto(src.NoXPos, TypeBool)),
-Valu("ld", OpAMD64MOVQload, TypeInt64, 0, nil, "ptr", "mem"), // this value needs a spill
+Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+Valu("ptr", OpArg, c.config.Types.Int64.PtrTo(), 0, c.Frontend().Auto(src.NoXPos, c.config.Types.Int64)),
+Valu("cond", OpArg, c.config.Types.Bool, 0, c.Frontend().Auto(src.NoXPos, c.config.Types.Bool)),
+Valu("ld", OpAMD64MOVQload, c.config.Types.Int64, 0, nil, "ptr", "mem"), // this value needs a spill
Goto("loop"),
),
Bloc("loop",
-Valu("memphi", OpPhi, TypeMem, 0, nil, "mem", "call"),
-Valu("call", OpAMD64CALLstatic, TypeMem, 0, nil, "memphi"),
-Valu("test", OpAMD64CMPBconst, TypeFlags, 0, nil, "cond"),
+Valu("memphi", OpPhi, types.TypeMem, 0, nil, "mem", "call"),
+Valu("call", OpAMD64CALLstatic, types.TypeMem, 0, nil, "memphi"),
+Valu("test", OpAMD64CMPBconst, types.TypeFlags, 0, nil, "cond"),
Eq("test", "next", "exit"),
),
Bloc("next",
Goto("loop"),
),
Bloc("exit",
-Valu("store", OpAMD64MOVQstore, TypeMem, 0, nil, "ptr", "ld", "call"),
+Valu("store", OpAMD64MOVQstore, types.TypeMem, 0, nil, "ptr", "ld", "call"),
Exit("store"),
),
)
@@ -5,6 +5,7 @@
package ssa
import (
+"cmd/compile/internal/types"
"cmd/internal/obj"
"fmt"
"io"
@@ -84,39 +85,39 @@ func applyRewrite(f *Func, rb blockRewriter, rv valueRewriter) {
// Common functions called from rewriting rules
-func is64BitFloat(t Type) bool {
+func is64BitFloat(t *types.Type) bool {
return t.Size() == 8 && t.IsFloat()
}
-func is32BitFloat(t Type) bool {
+func is32BitFloat(t *types.Type) bool {
return t.Size() == 4 && t.IsFloat()
}
-func is64BitInt(t Type) bool {
+func is64BitInt(t *types.Type) bool {
return t.Size() == 8 && t.IsInteger()
}
-func is32BitInt(t Type) bool {
+func is32BitInt(t *types.Type) bool {
return t.Size() == 4 && t.IsInteger()
}
-func is16BitInt(t Type) bool {
+func is16BitInt(t *types.Type) bool {
return t.Size() == 2 && t.IsInteger()
}
-func is8BitInt(t Type) bool {
+func is8BitInt(t *types.Type) bool {
return t.Size() == 1 && t.IsInteger()
}
-func isPtr(t Type) bool {
+func isPtr(t *types.Type) bool {
return t.IsPtrShaped()
}
-func isSigned(t Type) bool {
+func isSigned(t *types.Type) bool {
return t.IsSigned()
}
-func typeSize(t Type) int64 {
+func typeSize(t *types.Type) int64 {
return t.Size()
}
@@ -4,32 +4,35 @@
package ssa
-import "testing"
+import (
+	"cmd/compile/internal/types"
+	"testing"
+)
func TestShortCircuit(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
-Valu("mem", OpInitMem, TypeMem, 0, nil),
-Valu("arg1", OpArg, TypeInt64, 0, nil),
-Valu("arg2", OpArg, TypeInt64, 0, nil),
-Valu("arg3", OpArg, TypeInt64, 0, nil),
+Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+Valu("arg1", OpArg, c.config.Types.Int64, 0, nil),
+Valu("arg2", OpArg, c.config.Types.Int64, 0, nil),
+Valu("arg3", OpArg, c.config.Types.Int64, 0, nil),
Goto("b1")),
Bloc("b1",
-Valu("cmp1", OpLess64, TypeBool, 0, nil, "arg1", "arg2"),
+Valu("cmp1", OpLess64, c.config.Types.Bool, 0, nil, "arg1", "arg2"),
If("cmp1", "b2", "b3")),
Bloc("b2",
-Valu("cmp2", OpLess64, TypeBool, 0, nil, "arg2", "arg3"),
+Valu("cmp2", OpLess64, c.config.Types.Bool, 0, nil, "arg2", "arg3"),
Goto("b3")),
Bloc("b3",
-Valu("phi2", OpPhi, TypeBool, 0, nil, "cmp1", "cmp2"),
+Valu("phi2", OpPhi, c.config.Types.Bool, 0, nil, "cmp1", "cmp2"),
If("phi2", "b4", "b5")),
Bloc("b4",
-Valu("cmp3", OpLess64, TypeBool, 0, nil, "arg3", "arg1"),
+Valu("cmp3", OpLess64, c.config.Types.Bool, 0, nil, "arg3", "arg1"),
Goto("b5")),
Bloc("b5",
-Valu("phi3", OpPhi, TypeBool, 0, nil, "phi2", "cmp3"),
+Valu("phi3", OpPhi, c.config.Types.Bool, 0, nil, "phi2", "cmp3"),
If("phi3", "b6", "b7")),
Bloc("b6",
Exit("mem")),