//===-- SystemZISelLowering.h - SystemZ DAG lowering interface --*- C++ -*-===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file defines the interfaces that SystemZ uses to lower LLVM code into a
// selection DAG.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZISELLOWERING_H
#define LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZISELLOWERING_H

#include "SystemZ.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/TargetLowering.h"

namespace llvm {
namespace SystemZISD {
enum NodeType : unsigned {
  FIRST_NUMBER = ISD::BUILTIN_OP_END,

  // Return with a flag operand. Operand 0 is the chain operand.
  RET_FLAG,

  // Calls a function. Operand 0 is the chain operand and operand 1
  // is the target address. The arguments start at operand 2.
  // There is an optional glue operand at the end.
  CALL,
  SIBCALL,

  // TLS calls. Like regular calls, except operand 1 is the TLS symbol.
  // (The call target is implicitly __tls_get_offset.)
  TLS_GDCALL,
  TLS_LDCALL,

  // Wraps a TargetGlobalAddress that should be loaded using PC-relative
  // accesses (LARL). Operand 0 is the address.
  PCREL_WRAPPER,

  // Used in cases where an offset is applied to a TargetGlobalAddress.
  // Operand 0 is the full TargetGlobalAddress and operand 1 is a
  // PCREL_WRAPPER for an anchor point. This is used so that we can
  // cheaply refer to either the full address or the anchor point
  // as a register base.
  PCREL_OFFSET,

  // Integer absolute.
  IABS,

  // Integer comparisons. There are three operands: the two values
  // to compare, and an integer of type SystemZICMP.
  ICMP,

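  //
  // For illustration, a lowering routine would typically build an ICMP node
  // roughly as follows (a sketch only; DAG, DL, CmpOp0 and CmpOp1 stand for
  // the SelectionDAG, debug location and operands already in scope):
  //
  //   SDValue CC = DAG.getNode(SystemZISD::ICMP, DL, MVT::i32, CmpOp0, CmpOp1,
  //                            DAG.getConstant(SystemZICMP::SignedOnly, DL,
  //                                            MVT::i32));
  //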
  // Floating-point comparisons. The two operands are the values to compare.
  FCMP,

  // Test under mask. The first operand is ANDed with the second operand
  // and the condition codes are set on the result. The third operand is
  // a boolean that is true if the condition codes need to distinguish
  // between CCMASK_TM_MIXED_MSB_0 and CCMASK_TM_MIXED_MSB_1 (which the
  // register forms do but the memory forms don't).
  TM,

  // Branches if a condition is true. Operand 0 is the chain operand;
  // operand 1 is the 4-bit condition-code mask, with bit N in
  // big-endian order meaning "branch if CC=N"; operand 2 is the
  // target block and operand 3 is the flag operand.
  BR_CCMASK,

  // Selects between operand 0 and operand 1. Operand 2 is the
  // mask of condition-code values for which operand 0 should be
  // chosen over operand 1; it has the same form as BR_CCMASK.
  // Operand 3 is the flag operand.
  SELECT_CCMASK,

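  //
  // As a worked example of the mask encoding shared by BR_CCMASK and
  // SELECT_CCMASK: bit N, counted from the most significant of the four
  // bits, means "CC=N", so a mask of 0b1000 (8) accepts only CC 0, while
  // 0b1010 (10) accepts CC 0 or CC 2.
  //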
  // Evaluates to the gap between the stack pointer and the
  // base of the dynamically-allocatable area.
  ADJDYNALLOC,

  // Count number of bits set in operand 0 per byte.
  POPCNT,

  // Wrappers around the ISD opcodes of the same name. The output is GR128.
  // Input operands may be GR64 or GR32, depending on the instruction.
  SMUL_LOHI,
  UMUL_LOHI,
  SDIVREM,
  UDIVREM,

  // Add/subtract with overflow/carry. These have the same operands as
  // the corresponding standard operations, except with the carry flag
  // replaced by a condition code value.
  SADDO, SSUBO, UADDO, USUBO, ADDCARRY, SUBCARRY,

  // Set the condition code from a boolean value in operand 0.
  // Operand 1 is a mask of all condition-code values that may result from
  // this operation; operand 2 is a mask of condition-code values that may
  // result if the boolean is true.
  // Note that this operation is always optimized away; we will never
  // generate any code for it.
  GET_CCMASK,

  // Use a series of MVCs to copy bytes from one memory location to another.
  // The operands are:
  // - the target address
  // - the source address
  // - the constant length
  //
  // This isn't a memory opcode because we'd need to attach two
  // MachineMemOperands rather than one.
  MVC,

  // Like MVC, but implemented as a loop that handles X*256 bytes
  // followed by straight-line code to handle the rest (if any).
  // The value of X is passed as an additional operand.
  MVC_LOOP,

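  //
  // For example, a memcpy of known constant length could be emitted roughly
  // as follows (a sketch; Chain, Dst, Src, Size and PtrVT are assumed to be
  // values the caller has already computed):
  //
  //   SDValue Copy = DAG.getNode(SystemZISD::MVC, DL, MVT::Other,
  //                              Chain, Dst, Src,
  //                              DAG.getConstant(Size, DL, PtrVT));
  //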
  // Similar to MVC and MVC_LOOP, but for logic operations (AND, OR, XOR).
  NC,
  NC_LOOP,
  OC,
  OC_LOOP,
  XC,
  XC_LOOP,

  // Use CLC to compare two blocks of memory, with the same comments
  // as for MVC and MVC_LOOP.
  CLC,
  CLC_LOOP,

  // Use an MVST-based sequence to implement stpcpy().
  STPCPY,

  // Use a CLST-based sequence to implement strcmp(). The two input operands
  // are the addresses of the strings to compare.
  STRCMP,

  // Use an SRST-based sequence to search a block of memory. The first
  // operand is the end address, the second is the start, and the third
  // is the character to search for. CC is set to 1 on success and 2
  // on failure.
  SEARCH_STRING,

  // Store the CC value in bits 29 and 28 of an integer.
  IPM,

  // Compiler barrier only; generate a no-op.
  MEMBARRIER,

  // Transaction begin. The first operand is the chain, the second
  // the TDB pointer, and the third the immediate control field.
  // Returns CC value and chain.
  TBEGIN,
  TBEGIN_NOFLOAT,

  // Transaction end. Just the chain operand. Returns CC value and chain.
  TEND,

  // Create a vector constant by filling byte N of the result with bit
  // 15-N of the single operand.
  BYTE_MASK,

  // Create a vector constant by replicating an element-sized RISBG-style mask.
  // The first operand specifies the starting set bit and the second operand
  // specifies the ending set bit. Both operands count from the MSB of the
  // element.
  ROTATE_MASK,

  // Replicate a GPR scalar value into all elements of a vector.
  REPLICATE,

  // Create a vector from two i64 GPRs.
  JOIN_DWORDS,

  // Replicate one element of a vector into all elements. The first operand
  // is the vector and the second is the index of the element to replicate.
  SPLAT,

  // Interleave elements from the high half of operand 0 and the high half
  // of operand 1.
  MERGE_HIGH,

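  //
  // For example, with v4i32 operands A = <a0,a1,a2,a3> and
  // B = <b0,b1,b2,b3>, MERGE_HIGH(A, B) produces <a0,b0,a1,b1>.
  //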
  // Likewise for the low halves.
  MERGE_LOW,

  // Concatenate the vectors in the first two operands, shift them left
  // by the third operand, and take the first half of the result.
  SHL_DOUBLE,

  // Take one element of the first v2i64 operand and one element of the
  // second v2i64 operand and concatenate them to form a v2i64 result.
  // The third operand is a 4-bit value of the form 0A0B, where A and B
  // are the element selectors for the first and second operands
  // respectively.
  PERMUTE_DWORDS,

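  //
  // For example, a PERMUTE_DWORDS selector of 0b0100 picks element 1 of the
  // first operand and element 0 of the second, producing <Op0[1], Op1[0]>,
  // while 0b0001 produces <Op0[0], Op1[1]>.
  //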
  // Perform a general vector permute on vector operands 0 and 1.
  // Each byte of operand 2 controls the corresponding byte of the result,
  // in the same way as a byte-level VECTOR_SHUFFLE mask.
  PERMUTE,

  // Pack vector operands 0 and 1 into a single vector with half-sized elements.
  PACK,

  // Likewise, but saturate the result and set CC. PACKS_CC does signed
  // saturation and PACKLS_CC does unsigned saturation.
  PACKS_CC,
  PACKLS_CC,

  // Unpack the first half of vector operand 0 into double-sized elements.
  // UNPACK_HIGH sign-extends and UNPACKL_HIGH zero-extends.
  UNPACK_HIGH,
  UNPACKL_HIGH,

  // Likewise for the second half.
  UNPACK_LOW,
  UNPACKL_LOW,

  // Shift each element of vector operand 0 by the number of bits specified
  // by scalar operand 1.
  VSHL_BY_SCALAR,
  VSRL_BY_SCALAR,
  VSRA_BY_SCALAR,

  // For each element of the output type, sum across all sub-elements of
  // operand 0 belonging to the corresponding element, and add in the
  // rightmost sub-element of the corresponding element of operand 1.
  VSUM,

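  //
  // For example, with two v16i8 inputs and a v4i32 result, element I of the
  // VSUM result is
  //   Op0[4*I] + Op0[4*I+1] + Op0[4*I+2] + Op0[4*I+3] + Op1[4*I+3]
  // (sub-elements numbered from the left).
  //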
  // Compare integer vector operands 0 and 1 to produce the usual 0/-1
  // vector result. VICMPE is for equality, VICMPH for "signed greater than"
  // and VICMPHL for "unsigned greater than".
  VICMPE,
  VICMPH,
  VICMPHL,

  // Likewise, but also set the condition codes on the result.
  VICMPES,
  VICMPHS,
  VICMPHLS,

  // Compare floating-point vector operands 0 and 1 to produce the usual 0/-1
  // vector result. VFCMPE is for "ordered and equal", VFCMPH for "ordered and
  // greater than" and VFCMPHE for "ordered and greater than or equal to".
  VFCMPE,
  VFCMPH,
  VFCMPHE,

  // Likewise, but also set the condition codes on the result.
  VFCMPES,
  VFCMPHS,
  VFCMPHES,

  // Test floating-point data class for vectors.
  VFTCI,

  // Extend the even f32 elements of vector operand 0 to produce a vector
  // of f64 elements.
  VEXTEND,

  // Round the f64 elements of vector operand 0 to f32s and store them in the
  // even elements of the result.
  VROUND,

  // AND the two vector operands together and set CC based on the result.
  VTM,

  // String operations that set CC as a side-effect.
  VFAE_CC,
  VFAEZ_CC,
  VFEE_CC,
  VFEEZ_CC,
  VFENE_CC,
  VFENEZ_CC,
  VISTR_CC,
  VSTRC_CC,
  VSTRCZ_CC,

  // Test Data Class.
  //
  // Operand 0: the value to test
  // Operand 1: the bit mask
  TDC,

  // Wrappers around the inner loop of an 8- or 16-bit ATOMIC_SWAP or
  // ATOMIC_LOAD_<op>.
  //
  // Operand 0: the address of the containing 32-bit-aligned field
  // Operand 1: the second operand of <op>, in the high bits of an i32
  //            for everything except ATOMIC_SWAPW
  // Operand 2: how many bits to rotate the i32 left to bring the first
  //            operand into the high bits
  // Operand 3: the negative of operand 2, for rotating the other way
  // Operand 4: the width of the field in bits (8 or 16)
  ATOMIC_SWAPW = ISD::FIRST_TARGET_MEMORY_OPCODE,
  ATOMIC_LOADW_ADD,
  ATOMIC_LOADW_SUB,
  ATOMIC_LOADW_AND,
  ATOMIC_LOADW_OR,
  ATOMIC_LOADW_XOR,
  ATOMIC_LOADW_NAND,
  ATOMIC_LOADW_MIN,
  ATOMIC_LOADW_MAX,
  ATOMIC_LOADW_UMIN,
  ATOMIC_LOADW_UMAX,

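  //
  // Shape-wise, building one of these wrappers looks roughly like the
  // following sketch (operand names as in the list above; VTList, NarrowVT
  // and MMO are assumed to describe the result types, the memory type and
  // the memory operand):
  //
  //   SDValue Ops[] = { Chain, Addr, Src2, BitShift, NegBitShift,
  //                     DAG.getConstant(BitSize, DL, MVT::i32) };
  //   SDValue Res = DAG.getMemIntrinsicNode(SystemZISD::ATOMIC_LOADW_ADD, DL,
  //                                         VTList, Ops, NarrowVT, MMO);
  //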
  // A wrapper around the inner loop of an ATOMIC_CMP_SWAP.
  //
  // Operand 0: the address of the containing 32-bit-aligned field
  // Operand 1: the compare value, in the low bits of an i32
  // Operand 2: the swap value, in the low bits of an i32
  // Operand 3: how many bits to rotate the i32 left to bring the first
  //            operand into the high bits
  // Operand 4: the negative of operand 3, for rotating the other way
  // Operand 5: the width of the field in bits (8 or 16)
  ATOMIC_CMP_SWAPW,

  // Atomic compare-and-swap returning CC value.
  // Val, CC, OUTCHAIN = ATOMIC_CMP_SWAP(INCHAIN, ptr, cmp, swap)
  ATOMIC_CMP_SWAP,

  // 128-bit atomic load.
  // Val, OUTCHAIN = ATOMIC_LOAD_128(INCHAIN, ptr)
  ATOMIC_LOAD_128,

  // 128-bit atomic store.
  // OUTCHAIN = ATOMIC_STORE_128(INCHAIN, val, ptr)
  ATOMIC_STORE_128,

  // 128-bit atomic compare-and-swap.
  // Val, CC, OUTCHAIN = ATOMIC_CMP_SWAP_128(INCHAIN, ptr, cmp, swap)
  ATOMIC_CMP_SWAP_128,

  // Byte swapping load/store. Same operands as regular load/store.
  LRV, STRV,

  // Prefetch from the second operand using the 4-bit control code in
  // the first operand. The code is 1 for a load prefetch and 2 for
  // a store prefetch.
  PREFETCH
};

// Return true if OPCODE is some kind of PC-relative address.
inline bool isPCREL(unsigned Opcode) {
  return Opcode == PCREL_WRAPPER || Opcode == PCREL_OFFSET;
}
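
// For example, an address-related DAG combine might use this helper as a
// guard (a sketch; N is whatever SDNode is being inspected):
//
//   if (SystemZISD::isPCREL(N->getOpcode())) {
//     // N computes a PC-relative (LARL-style) address.
//   }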
} // end namespace SystemZISD

namespace SystemZICMP {
// Describes whether an integer comparison needs to be signed or unsigned,
// or whether either type is OK.
enum {
  Any,
  UnsignedOnly,
  SignedOnly
};
} // end namespace SystemZICMP

class SystemZSubtarget;
class SystemZTargetMachine;

class SystemZTargetLowering : public TargetLowering {
public:
  explicit SystemZTargetLowering(const TargetMachine &TM,
                                 const SystemZSubtarget &STI);

  // Override TargetLowering.
  MVT getScalarShiftAmountTy(const DataLayout &, EVT) const override {
    return MVT::i32;
  }
  MVT getVectorIdxTy(const DataLayout &DL) const override {
    // Only the lower 12 bits of an element index are used, so we don't
    // want to clobber the upper 32 bits of a GPR unnecessarily.
    return MVT::i32;
  }
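  // For example, with the i32 index type returned above, an element extract
  // is built as (a sketch; Vec is a vector value and EltVT its element type):
  //
  //   SDValue Elt = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, EltVT, Vec,
  //                             DAG.getConstant(2, DL, MVT::i32));
  //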
  TargetLoweringBase::LegalizeTypeAction getPreferredVectorAction(MVT VT)
    const override {
    // Widen subvectors to the full width rather than promoting integer
    // elements. This is better because:
    //
    // (a) it means that we can handle the ABI for passing and returning
    //     sub-128 vectors without having to handle them as legal types.
    //
    // (b) we don't have instructions to extend on load and truncate on store,
    //     so promoting the integers is less efficient.
    //
    // (c) there are no multiplication instructions for the widest integer
    //     type (v2i64).
    if (VT.getScalarSizeInBits() % 8 == 0)
      return TypeWidenVector;
    return TargetLoweringBase::getPreferredVectorAction(VT);
  }
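  // For example, under the widening choice above an illegal v4i8 value is
  // widened to the full 128-bit v16i8 type (leaving the extra elements
  // undefined) rather than promoted to v4i32, so byte-element instructions
  // can still operate on it directly.
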
  EVT getSetCCResultType(const DataLayout &DL, LLVMContext &,
                         EVT) const override;
  bool isFMAFasterThanFMulAndFAdd(EVT VT) const override;
  bool isFPImmLegal(const APFloat &Imm, EVT VT) const override;
  bool isLegalICmpImmediate(int64_t Imm) const override;
  bool isLegalAddImmediate(int64_t Imm) const override;
  bool isLegalAddressingMode(const DataLayout &DL, const AddrMode &AM, Type *Ty,
                             unsigned AS,
                             Instruction *I = nullptr) const override;
  bool allowsMisalignedMemoryAccesses(EVT VT, unsigned AS,
                                      unsigned Align,
                                      bool *Fast) const override;
  bool isTruncateFree(Type *, Type *) const override;
  bool isTruncateFree(EVT, EVT) const override;
  const char *getTargetNodeName(unsigned Opcode) const override;
  std::pair<unsigned, const TargetRegisterClass *>
  getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI,
                               StringRef Constraint, MVT VT) const override;
  TargetLowering::ConstraintType
  getConstraintType(StringRef Constraint) const override;
  TargetLowering::ConstraintWeight
  getSingleConstraintMatchWeight(AsmOperandInfo &info,
                                 const char *constraint) const override;
  void LowerAsmOperandForConstraint(SDValue Op,
                                    std::string &Constraint,
                                    std::vector<SDValue> &Ops,
                                    SelectionDAG &DAG) const override;

  unsigned getInlineAsmMemConstraint(StringRef ConstraintCode) const override {
    if (ConstraintCode.size() == 1) {
      switch(ConstraintCode[0]) {
      default:
        break;
      case 'o':
        return InlineAsm::Constraint_o;
      case 'Q':
        return InlineAsm::Constraint_Q;
      case 'R':
        return InlineAsm::Constraint_R;
      case 'S':
        return InlineAsm::Constraint_S;
      case 'T':
        return InlineAsm::Constraint_T;
      }
    }
    return TargetLowering::getInlineAsmMemConstraint(ConstraintCode);
  }
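  // For example, inline asm along the lines of
  //
  //   asm volatile ("mvhi %0,42" : "=Q" (x));
  //
  // reaches this hook with ConstraintCode "Q" (a memory operand with a short
  // displacement and no index), and the returned constraint kind then guides
  // how the memory operand is selected. The asm text here is illustrative
  // only.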

  /// If a physical register, this returns the register that receives the
  /// exception address on entry to an EH pad.
  unsigned
  getExceptionPointerRegister(const Constant *PersonalityFn) const override {
    return SystemZ::R6D;
  }

  /// If a physical register, this returns the register that receives the
  /// exception typeid on entry to a landing pad.
  unsigned
  getExceptionSelectorRegister(const Constant *PersonalityFn) const override {
    return SystemZ::R7D;
  }

  /// Override to support customized stack guard loading.
  bool useLoadStackGuardNode() const override {
    return true;
  }
  void insertSSPDeclarations(Module &M) const override {
  }

  MachineBasicBlock *
  EmitInstrWithCustomInserter(MachineInstr &MI,
                              MachineBasicBlock *BB) const override;
  SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const override;
  void LowerOperationWrapper(SDNode *N, SmallVectorImpl<SDValue> &Results,
                             SelectionDAG &DAG) const override;
  void ReplaceNodeResults(SDNode *N, SmallVectorImpl<SDValue> &Results,
                          SelectionDAG &DAG) const override;
  const MCPhysReg *getScratchRegisters(CallingConv::ID CC) const override;
  bool allowTruncateForTailCall(Type *, Type *) const override;
  bool mayBeEmittedAsTailCall(const CallInst *CI) const override;
  SDValue LowerFormalArguments(SDValue Chain, CallingConv::ID CallConv,
                               bool isVarArg,
                               const SmallVectorImpl<ISD::InputArg> &Ins,
                               const SDLoc &DL, SelectionDAG &DAG,
                               SmallVectorImpl<SDValue> &InVals) const override;
  SDValue LowerCall(CallLoweringInfo &CLI,
                    SmallVectorImpl<SDValue> &InVals) const override;

  bool CanLowerReturn(CallingConv::ID CallConv, MachineFunction &MF,
                      bool isVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      LLVMContext &Context) const override;
  SDValue LowerReturn(SDValue Chain, CallingConv::ID CallConv, bool IsVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      const SmallVectorImpl<SDValue> &OutVals, const SDLoc &DL,
                      SelectionDAG &DAG) const override;
  SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const override;

  /// Determine which of the bits specified in Mask are known to be either
  /// zero or one and return them in the KnownZero/KnownOne bitsets.
  void computeKnownBitsForTargetNode(const SDValue Op,
                                     KnownBits &Known,
                                     const APInt &DemandedElts,
                                     const SelectionDAG &DAG,
                                     unsigned Depth = 0) const override;

  /// Determine the number of bits in the operation that are sign bits.
  unsigned ComputeNumSignBitsForTargetNode(SDValue Op,
                                           const APInt &DemandedElts,
                                           const SelectionDAG &DAG,
                                           unsigned Depth) const override;

  ISD::NodeType getExtendForAtomicOps() const override {
    return ISD::ANY_EXTEND;
  }
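  // Roughly speaking, ANY_EXTEND above means the high bits of a sub-word
  // atomic result are treated as unspecified, so the generic atomic
  // expansion and any users insert an explicit zero or sign extension
  // wherever the full register value matters.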

  bool supportSwiftError() const override {
    return true;
  }

private:
  const SystemZSubtarget &Subtarget;

  // Implement LowerOperation for individual opcodes.
  SDValue getVectorCmp(SelectionDAG &DAG, unsigned Opcode,
                       const SDLoc &DL, EVT VT,
                       SDValue CmpOp0, SDValue CmpOp1) const;
  SDValue lowerVectorSETCC(SelectionDAG &DAG, const SDLoc &DL,
                           EVT VT, ISD::CondCode CC,
                           SDValue CmpOp0, SDValue CmpOp1) const;
  SDValue lowerSETCC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBR_CC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSELECT_CC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerGlobalAddress(GlobalAddressSDNode *Node,
                             SelectionDAG &DAG) const;
  SDValue lowerTLSGetOffset(GlobalAddressSDNode *Node,
                            SelectionDAG &DAG, unsigned Opcode,
                            SDValue GOTOffset) const;
  SDValue lowerThreadPointer(const SDLoc &DL, SelectionDAG &DAG) const;
  SDValue lowerGlobalTLSAddress(GlobalAddressSDNode *Node,
                                SelectionDAG &DAG) const;
  SDValue lowerBlockAddress(BlockAddressSDNode *Node,
                            SelectionDAG &DAG) const;
  SDValue lowerJumpTable(JumpTableSDNode *JT, SelectionDAG &DAG) const;
  SDValue lowerConstantPool(ConstantPoolSDNode *CP, SelectionDAG &DAG) const;
  SDValue lowerFRAMEADDR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerRETURNADDR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVASTART(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVACOPY(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerDYNAMIC_STACKALLOC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerGET_DYNAMIC_AREA_OFFSET(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSMUL_LOHI(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerUMUL_LOHI(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSDIVREM(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerUDIVREM(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerXALUO(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerADDSUBCARRY(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerCTPOP(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_LOAD(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_STORE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_LOAD_OP(SDValue Op, SelectionDAG &DAG,
                              unsigned Opcode) const;
  SDValue lowerATOMIC_LOAD_SUB(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_CMP_SWAP(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSTACKSAVE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSTACKRESTORE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerPREFETCH(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSCALAR_TO_VECTOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINSERT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerExtendVectorInreg(SDValue Op, SelectionDAG &DAG,
                                 unsigned UnpackHigh) const;
  SDValue lowerShift(SDValue Op, SelectionDAG &DAG, unsigned ByScalar) const;

  bool canTreatAsByteVector(EVT VT) const;
  SDValue combineExtract(const SDLoc &DL, EVT ElemVT, EVT VecVT, SDValue OrigOp,
                         unsigned Index, DAGCombinerInfo &DCI,
                         bool Force) const;
  SDValue combineTruncateExtract(const SDLoc &DL, EVT TruncVT, SDValue Op,
                                 DAGCombinerInfo &DCI) const;
  SDValue combineZERO_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSIGN_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSIGN_EXTEND_INREG(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineMERGE(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineLOAD(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSTORE(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineEXTRACT_VECTOR_ELT(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineJOIN_DWORDS(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineFP_ROUND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineFP_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineBSWAP(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineBR_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSELECT_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineGET_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineIntDIVREM(SDNode *N, DAGCombinerInfo &DCI) const;

  // If the last instruction before MBBI in MBB was some form of COMPARE,
  // try to replace it with a COMPARE AND BRANCH just before MBBI.
  // CCMask and Target are the BRC-like operands for the branch.
  // Return true if the change was made.
  bool convertPrevCompareToBranch(MachineBasicBlock *MBB,
                                  MachineBasicBlock::iterator MBBI,
                                  unsigned CCMask,
                                  MachineBasicBlock *Target) const;

  // Implement EmitInstrWithCustomInserter for individual operation types.
  MachineBasicBlock *emitSelect(MachineInstr &MI, MachineBasicBlock *BB) const;
  MachineBasicBlock *emitCondStore(MachineInstr &MI, MachineBasicBlock *BB,
                                   unsigned StoreOpcode, unsigned STOCOpcode,
                                   bool Invert) const;
  MachineBasicBlock *emitPair128(MachineInstr &MI,
                                 MachineBasicBlock *MBB) const;
  MachineBasicBlock *emitExt128(MachineInstr &MI, MachineBasicBlock *MBB,
                                bool ClearEven) const;
  MachineBasicBlock *emitAtomicLoadBinary(MachineInstr &MI,
                                          MachineBasicBlock *BB,
                                          unsigned BinOpcode, unsigned BitSize,
                                          bool Invert = false) const;
  MachineBasicBlock *emitAtomicLoadMinMax(MachineInstr &MI,
                                          MachineBasicBlock *MBB,
                                          unsigned CompareOpcode,
                                          unsigned KeepOldMask,
                                          unsigned BitSize) const;
  MachineBasicBlock *emitAtomicCmpSwapW(MachineInstr &MI,
                                        MachineBasicBlock *BB) const;
  MachineBasicBlock *emitMemMemWrapper(MachineInstr &MI, MachineBasicBlock *BB,
                                       unsigned Opcode) const;
  MachineBasicBlock *emitStringWrapper(MachineInstr &MI, MachineBasicBlock *BB,
                                       unsigned Opcode) const;
  MachineBasicBlock *emitTransactionBegin(MachineInstr &MI,
                                          MachineBasicBlock *MBB,
                                          unsigned Opcode, bool NoFloat) const;
  MachineBasicBlock *emitLoadAndTestCmp0(MachineInstr &MI,
                                         MachineBasicBlock *MBB,
                                         unsigned Opcode) const;

  const TargetRegisterClass *getRepRegClassFor(MVT VT) const override;
};
} // end namespace llvm

#endif