LLVM 8.0.1
LSUnit.h
1 //===------------------------- LSUnit.h --------------------------*- C++-*-===//
2 //
3 // The LLVM Compiler Infrastructure
4 //
5 // This file is distributed under the University of Illinois Open Source
6 // License. See LICENSE.TXT for details.
7 //
8 //===----------------------------------------------------------------------===//
9 /// \file
10 ///
11 /// A Load/Store unit class that models load/store queues and that implements
12 /// a simple weak memory consistency model.
13 ///
14 //===----------------------------------------------------------------------===//
15 
16 #ifndef LLVM_MCA_LSUNIT_H
17 #define LLVM_MCA_LSUNIT_H
18 
19 #include "llvm/ADT/SmallSet.h"
20 #include "llvm/MC/MCSchedule.h"
21 #include "llvm/MCA/HardwareUnits/HardwareUnit.h"
22 
23 namespace llvm {
24 namespace mca {
25 
26 class InstRef;
27 class Scheduler;
28 
29 /// A Load/Store Unit implementing load and store queues.
30 ///
31 /// This class implements a load queue and a store queue to emulate the
32 /// out-of-order execution of memory operations.
33 /// Each load (or store) consumes an entry in the load (or store) queue.
34 ///
35 /// Rules are:
36 /// 1) A younger load is allowed to pass an older load only if there are no
37 /// stores nor barriers in between the two loads.
38 /// 2) A younger store is not allowed to pass an older store.
39 /// 3) A younger store is not allowed to pass an older load.
40 /// 4) A younger load is allowed to pass an older store only if the load does
41 /// not alias with the store.
42 ///
43 /// This class optimistically assumes that loads don't alias store operations.
44 /// Under this assumption, younger loads are always allowed to pass older
45 /// stores (this only affects rule 4).
46 /// Essentially, this class doesn't perform any sort of alias analysis to
47 /// identify aliasing loads and stores.
48 ///
49 /// To model potential aliasing between loads and stores, flag `AssumeNoAlias`
50 /// must be set to `false` when constructing the LSUnit.
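///
/// For illustration only, a unit that conservatively models potential aliasing
/// could be constructed as follows; the queue sizes below are made-up values,
/// and `SM` is assumed to be an MCSchedModel already available to the caller:
/// \code
///   LSUnit LSU(SM, /*LQ=*/36, /*SQ=*/24, /*AssumeNoAlias=*/false);
/// \endcode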
51 ///
52 /// Note that this class doesn't know about the existence of different memory
53 /// types for memory operations (example: write-through, write-combining, etc.).
54 /// Derived classes are responsible for implementing that extra knowledge, and
55 /// provide different sets of rules for loads and stores by overriding method
56 /// `isReady()`.
57 /// To emulate a write-combining memory type, rule 2 must be relaxed in a
58 /// derived class to enable the reordering of non-aliasing store operations.
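///
/// As a sketch of that extension point (the derived class below is purely
/// hypothetical and not part of LLVM), a write-combining model could override
/// `isReady()` and then defer to the default ordering checks:
/// \code
///   class WriteCombiningLSUnit : public LSUnit {
///   public:
///     using LSUnit::LSUnit;
///     bool isReady(const InstRef &IR) const override {
///       // Relax rule 2 here for non-aliasing stores before falling back to
///       // the base implementation.
///       return LSUnit::isReady(IR);
///     }
///   };
/// \endcode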
59 ///
60 /// No assumptions are made by this class on the size of the store buffer. This
61 /// class doesn't know how to identify cases where store-to-load forwarding may
62 /// occur.
63 ///
64 /// LSUnit doesn't attempt to predict whether a load or store hits or misses
65 /// the L1 cache. To be more specific, LSUnit doesn't know anything about
66 /// cache hierarchy and memory types.
67 /// It only knows if an instruction "mayLoad" and/or "mayStore". For loads, the
68 /// scheduling model provides an "optimistic" load-to-use latency (which usually
69 /// matches the load-to-use latency for when there is a hit in the L1D).
70 /// Derived classes may expand this knowledge.
71 ///
72 /// Class MCInstrDesc in LLVM doesn't know about serializing operations, nor
73 /// memory-barrier like instructions.
74 /// LSUnit conservatively assumes that an instruction which `mayLoad` and has
75 /// `unmodeled side effects` behaves like a "soft" load barrier. That means it
76 /// serializes loads without forcing a flush of the load queue.
77 /// Similarly, instructions that both `mayStore` and have `unmodeled side
78 /// effects` are treated like store barriers. A full memory
79 /// barrier is a 'mayLoad' and 'mayStore' instruction with unmodeled side
80 /// effects. This is obviously inaccurate, but this is the best that we can do
81 /// at the moment.
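///
/// As a rough sketch (this is not code from this class), that classification
/// can be phrased in terms of the MCInstrDesc flags mentioned above, where
/// `MCID` stands for the descriptor of the instruction being dispatched:
/// \code
///   bool IsLoadBarrier  = MCID.mayLoad()  && MCID.hasUnmodeledSideEffects();
///   bool IsStoreBarrier = MCID.mayStore() && MCID.hasUnmodeledSideEffects();
///   bool IsFullBarrier  = IsLoadBarrier && IsStoreBarrier;
/// \endcode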
82 ///
83 /// Each load/store barrier consumes one entry in the load/store queue. A
84 /// load/store barrier enforces ordering of loads/stores:
85 /// - A younger load cannot pass a load barrier.
86 /// - A younger store cannot pass a store barrier.
87 ///
88 /// A younger load has to wait for the memory load barrier to execute.
89 /// A load/store barrier is "executed" when it becomes the oldest entry in
90 /// the load/store queue(s). That also means all the older loads/stores have
91 /// already been executed.
92 class LSUnit : public HardwareUnit {
93  // Load queue size.
94  // LQ_Size == 0 means that there are infinite slots in the load queue.
95  unsigned LQ_Size;
96 
97  // Store queue size.
98  // SQ_Size == 0 means that there are infinite slots in the store queue.
99  unsigned SQ_Size;
100 
101  // If true, loads will never alias with stores. This is the default.
102  bool NoAlias;
103 
104  // When a `MayLoad` instruction is dispatched to the schedulers for execution,
105  // the LSUnit reserves an entry in the `LoadQueue` for it.
106  //
107  // LoadQueue keeps track of all the loads that are in-flight. A load
108  // instruction is eventually removed from the LoadQueue when it reaches
109  // the completion stage. That means a load leaves the queue when it is 'executed',
110  // and its value can be forwarded on the data path to outside units.
111  //
112  // This class doesn't know about the latency of a load instruction. So, it
113  // conservatively/pessimistically assumes that the latency of a load opcode
114  // matches the instruction latency.
115  //
116  // FIXME: In the absence of cache misses (i.e. L1I/L1D/iTLB/dTLB hits/misses),
117  // and load/store conflicts, the latency of a load is determined by the depth
118  // of the load pipeline. So, we could use field `LoadLatency` in the
119  // MCSchedModel to model that latency.
120  // Field `LoadLatency` often matches the so-called 'load-to-use' latency from
121  // L1D, and it usually already accounts for any extra latency due to data
122  // forwarding.
123  // When doing throughput analysis, `LoadLatency` is likely to
124  // be a better predictor of load latency than instruction latency. This is
125  // particularly true when simulating code with temporal/spatial locality of
126  // memory accesses.
127  // Using `LoadLatency` (instead of the instruction latency) is also expected
128  // to improve the load queue allocation for long latency instructions with
129  // folded memory operands (See PR39829).
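  //
  // A minimal sketch of that suggestion (illustrative only, not what this
  // class currently does): given the MCSchedModel `SM` passed to the
  // constructor, and with `InstrLatency` as a placeholder for the latency the
  // instruction would otherwise report:
  //   unsigned Cycles = SM.LoadLatency ? SM.LoadLatency : InstrLatency;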
130  //
131  // FIXME: On some processors, load/store operations are split into multiple
132  // uOps. For example, X86 AMD Jaguar natively supports 128-bit data types, but
133  // not 256-bit data types. So, a 256-bit load is effectively split into two
134  // 128-bit loads, and each split load consumes one 'LoadQueue' entry. For
135  // simplicity, this class optimistically assumes that a load instruction only
136  // consumes one entry in the LoadQueue. Similarly, store instructions only
137  // consume a single entry in the StoreQueue.
138  // In the future, we should reassess the quality of this design and consider
139  // alternative approaches that let instructions specify the number of
140  // load/store queue entries which they consume at dispatch stage (See
141  // PR39830).
142  SmallSet<unsigned, 16> LoadQueue;
143  SmallSet<unsigned, 16> StoreQueue;
144 
145  void assignLQSlot(unsigned Index);
146  void assignSQSlot(unsigned Index);
147  bool isReadyNoAlias(unsigned Index) const;
148 
149  // An instruction that both 'mayStore' and 'HasUnmodeledSideEffects' is
150  // conservatively treated as a store barrier. It forces older stores to be
151  // executed before newer stores are issued.
152  SmallSet<unsigned, 8> StoreBarriers;
153 
154  // An instruction that both 'MayLoad' and 'HasUnmodeledSideEffects' is
155  // conservatively treated as a load barrier. It forces older loads to execute
156  // before newer loads are issued.
157  SmallSet<unsigned, 8> LoadBarriers;
158 
159  bool isSQEmpty() const { return StoreQueue.empty(); }
160  bool isLQEmpty() const { return LoadQueue.empty(); }
161  bool isSQFull() const { return SQ_Size != 0 && StoreQueue.size() == SQ_Size; }
162  bool isLQFull() const { return LQ_Size != 0 && LoadQueue.size() == LQ_Size; }
163 
164 public:
165  LSUnit(const MCSchedModel &SM, unsigned LQ = 0, unsigned SQ = 0,
166  bool AssumeNoAlias = false);
167 
168 #ifndef NDEBUG
169  void dump() const;
170 #endif
171 
172  enum Status { LSU_AVAILABLE = 0, LSU_LQUEUE_FULL, LSU_SQUEUE_FULL };
173 
174  // Returns LSU_AVAILABLE if there are enough load/store queue entries to serve
175  // IR. It also returns LSU_AVAILABLE if IR is not a memory operation.
176  Status isAvailable(const InstRef &IR) const;
177 
178  // Allocates load/store queue resources for IR.
179  //
180  // This method assumes that a previous call to `isAvailable(IR)` returned
181  // LSU_AVAILABLE, and that IR is a memory operation.
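  //
  // For example (illustrative only; how a dispatch stage obtains IR is outside
  // the scope of this file, and `IsMemOp` is a placeholder for the caller's
  // mayLoad/mayStore check), a caller is expected to do something like:
  //   if (IsMemOp && LSU.isAvailable(IR) == LSUnit::LSU_AVAILABLE)
  //     LSU.dispatch(IR);
  //   // otherwise, stall IR until a queue entry becomes available.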
182  void dispatch(const InstRef &IR);
183 
184  // By default, rules are:
185  // 1. A store may not pass a previous store.
186  // 2. A load may not pass a previous store unless flag 'NoAlias' is set.
187  // 3. A load may pass a previous load.
188  // 4. A store may not pass a previous load (regardless of flag 'NoAlias').
189  // 5. A load has to wait until an older load barrier is fully executed.
190  // 6. A store has to wait until an older store barrier is fully executed.
191  virtual bool isReady(const InstRef &IR) const;
192 
193  // Load and store instructions are tracked by their corresponding queues from
194  // dispatch until the "instruction executed" event.
195  // Only when a load instruction reaches the 'Executed' stage does its value
196  // become available to the users. At that point, the load no longer needs to
197  // be tracked by the load queue.
198  // FIXME: For simplicity, we optimistically assume a similar behavior for
199  // store instructions. In practice, store operations don't tend to leave the
200  // store queue until they reach the 'Retired' stage (See PR39830).
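  //
  // To illustrate the intended interaction with a simulated scheduler (sketch
  // only; the surrounding event plumbing is not defined in this file): memory
  // operations are issued only once `isReady()` returns true, and their queue
  // entries are released when execution completes:
  //   if (LSU.isReady(IR))
  //     /* issue IR to its pipelines */;
  //   // ... later, when the pipeline reports that IR has finished executing:
  //   LSU.onInstructionExecuted(IR);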
201  void onInstructionExecuted(const InstRef &IR);
202 };
203 
204 } // namespace mca
205 } // namespace llvm
206 
207 #endif // LLVM_MCA_LSUNIT_H