1//===- InferAddressSpaces.cpp ---------------------------------------------===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8//
9// CUDA C/C++ includes memory space designation as variable type qualifiers (such
10// as __global__ and __shared__). Knowing the space of a memory access allows
11// CUDA compilers to emit faster PTX loads and stores. For example, a load from
12// shared memory can be translated to `ld.shared` which is roughly 10% faster
13// than a generic `ld` on an NVIDIA Tesla K40c.
14//
15// Unfortunately, type qualifiers only apply to variable declarations, so CUDA
16// compilers must infer the memory space of an address expression from
17// type-qualified variables.
18//
19// LLVM IR uses non-zero (so-called specific) address spaces to represent memory
20// spaces (e.g. addrspace(3) means shared memory). The Clang frontend
21// places only type-qualified variables in specific address spaces, and then
22// conservatively `addrspacecast`s each type-qualified variable to addrspace(0)
23// (the so-called generic address space) for other instructions to use.
24//
25// For example, Clang translates the following CUDA code
26// __shared__ float a[10];
27// float v = a[i];
28// to
29// %0 = addrspacecast [10 x float] addrspace(3)* @a to [10 x float]*
30// %1 = gep [10 x float], [10 x float]* %0, i64 0, i64 %i
31// %v = load float, float* %1 ; emits ld.f32
32// @a is in addrspace(3) since it's type-qualified, but its use from %1 is
33// redirected to %0 (the generic version of @a).
34//
35// The optimization implemented in this file propagates specific address spaces
36// from type-qualified variable declarations to their users. For example, it
37// optimizes the above IR to
38// %1 = gep [10 x float] addrspace(3)* @a, i64 0, i64 %i
39// %v = load float addrspace(3)* %1 ; emits ld.shared.f32
40// propagating the addrspace(3) from @a to %1. As a result, the NVPTX
41// codegen is able to emit ld.shared.f32 for %v.
42//
43// Address space inference works in two steps. First, it uses a data-flow
44// analysis to infer as many generic pointers as possible to point to only one
45// specific address space. In the above example, it can prove that %1 only
46// points to addrspace(3). This algorithm was published in
47// CUDA: Compiling and optimizing for a GPU platform
48// Chakrabarti, Grover, Aarts, Kong, Kudlur, Lin, Marathe, Murphy, Wang
49// ICCS 2012
50//
51// Then, address space inference replaces all refinable generic pointers with
52// equivalent specific pointers.
53//
54// The major challenge of implementing this optimization is handling PHINodes,
55// which may create loops in the data flow graph. This brings two complications.
56//
57// First, the data flow analysis in Step 1 needs to be circular. For example,
58// %generic.input = addrspacecast float addrspace(3)* %input to float*
59// loop:
60// %y = phi [ %generic.input, %y2 ]
61// %y2 = getelementptr %y, 1
62// %v = load %y2
63// br ..., label %loop, ...
64// proving %y specific requires proving both %generic.input and %y2 specific,
65// but proving %y2 specific circles back to %y. To address this complication,
66// the data flow analysis operates on a lattice:
67// uninitialized > specific address spaces > generic.
68// All address expressions (our implementation only considers phi, bitcast,
69// addrspacecast, and getelementptr) start with the uninitialized address space.
70// The monotone transfer function moves the address space of a pointer down a
71// lattice path from uninitialized to specific and then to generic. A join
72// operation of two different specific address spaces pushes the expression down
73// to the generic address space. The analysis completes once it reaches a fixed
74// point.
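//
// As an illustrative sketch (the address space numbers below are arbitrary, not
// target mandated), for two specific address spaces AS(1) and AS(3) the join
// behaves as:
//   join(uninitialized, AS(3))  = AS(3)
//   join(AS(3), AS(3))          = AS(3)
//   join(AS(1), AS(3))          = generic
//   join(generic, anything)     = generic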
75//
76// Second, IR rewriting in Step 2 also needs to be circular. For example,
77// converting %y to addrspace(3) requires the compiler to know the converted
78// %y2, but converting %y2 needs the converted %y. To address this complication,
79// we break these cycles using "poison" placeholders. When converting an
80// instruction `I` to a new address space, if its operand `Op` is not converted
81// yet, we let `I` temporarily use `poison` and fix all the uses later.
82// For instance, our algorithm first converts %y to
83// %y' = phi float addrspace(3)* [ %input, poison ]
84// Then, it converts %y2 to
85// %y2' = getelementptr %y', 1
86// Finally, it fixes the poison in %y' so that
87// %y' = phi float addrspace(3)* [ %input, %y2' ]
88//
89//===----------------------------------------------------------------------===//
90
91#include "llvm/Transforms/Scalar/InferAddressSpaces.h"
92#include "llvm/ADT/ArrayRef.h"
93#include "llvm/ADT/DenseMap.h"
94#include "llvm/ADT/DenseSet.h"
95#include "llvm/ADT/SetVector.h"
96#include "llvm/ADT/SmallVector.h"
97#include "llvm/Analysis/AssumptionCache.h"
98#include "llvm/Analysis/TargetTransformInfo.h"
99#include "llvm/Analysis/ValueTracking.h"
100#include "llvm/IR/BasicBlock.h"
101#include "llvm/IR/Constant.h"
102#include "llvm/IR/Constants.h"
103#include "llvm/IR/Dominators.h"
104#include "llvm/IR/Function.h"
105#include "llvm/IR/IRBuilder.h"
106#include "llvm/IR/InstIterator.h"
107#include "llvm/IR/Instruction.h"
108#include "llvm/IR/Instructions.h"
109#include "llvm/IR/IntrinsicInst.h"
110#include "llvm/IR/Intrinsics.h"
111#include "llvm/IR/LLVMContext.h"
112#include "llvm/IR/Operator.h"
113#include "llvm/IR/PassManager.h"
114#include "llvm/IR/Type.h"
115#include "llvm/IR/Use.h"
116#include "llvm/IR/User.h"
117#include "llvm/IR/Value.h"
118#include "llvm/IR/ValueHandle.h"
119#include "llvm/InitializePasses.h"
120#include "llvm/Pass.h"
121#include "llvm/Support/Casting.h"
122#include "llvm/Support/CommandLine.h"
123#include "llvm/Support/Debug.h"
124#include "llvm/Support/ErrorHandling.h"
125#include "llvm/Support/raw_ostream.h"
126#include "llvm/Transforms/Scalar.h"
127#include "llvm/Transforms/Utils/Local.h"
128#include "llvm/Transforms/Utils/ValueMapper.h"
129#include <cassert>
130#include <iterator>
131#include <limits>
132#include <utility>
133#include <vector>
134
135#define DEBUG_TYPE "infer-address-spaces"
136
137using namespace llvm;
138
139static cl::opt<bool> AssumeDefaultIsFlatAddressSpace(
140 "assume-default-is-flat-addrspace", cl::init(false), cl::ReallyHidden,
141 cl::desc("The default address space is assumed as the flat address space. "
142 "This is mainly for test purpose."));
143
144static const unsigned UninitializedAddressSpace =
145 std::numeric_limits<unsigned>::max();
146
147namespace {
148
149using ValueToAddrSpaceMapTy = DenseMap<const Value *, unsigned>;
150// Different from ValueToAddrSpaceMapTy, where a new addrspace is inferred on
151// the *def* of a value, PredicatedAddrSpaceMapTy is a map where a new
152// addrspace is inferred on the *use* of a pointer. This map is introduced to
153// infer addrspace from the addrspace predicate assumption built from an
154// assume intrinsic. In that scenario, only specific uses (under a valid
155// assumption context) can be inferred to have a new addrspace.
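// As an illustrative example (the predicate intrinsic is target-specific;
// llvm.amdgcn.is.shared below is only an assumed placeholder), given
//   %is.shared = call i1 @llvm.amdgcn.is.shared(ptr %p)
//   call void @llvm.assume(i1 %is.shared)
//   %v = load float, ptr %p
// the use of %p in the load (dominated by the assume) may be rewritten to
// addrspace(3), while the definition of %p itself stays generic.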
156using PredicatedAddrSpaceMapTy =
157 DenseMap<std::pair<const Value *, const Value *>, unsigned>;
158using PostorderStackTy = llvm::SmallVector<PointerIntPair<Value *, 1, bool>, 4>;
159
160class InferAddressSpaces : public FunctionPass {
161 unsigned FlatAddrSpace = 0;
162
163public:
164 static char ID;
165
166 InferAddressSpaces()
167 : FunctionPass(ID), FlatAddrSpace(UninitializedAddressSpace) {
168 initializeInferAddressSpacesPass(*PassRegistry::getPassRegistry());
169 }
170 InferAddressSpaces(unsigned AS) : FunctionPass(ID), FlatAddrSpace(AS) {
171 initializeInferAddressSpacesPass(*PassRegistry::getPassRegistry());
172 }
173
174 void getAnalysisUsage(AnalysisUsage &AU) const override {
175 AU.setPreservesCFG();
176 AU.addPreserved<DominatorTreeWrapperPass>();
177 AU.addRequired<AssumptionCacheTracker>();
178 AU.addRequired<TargetTransformInfoWrapperPass>();
179 }
180
181 bool runOnFunction(Function &F) override;
182};
183
184class InferAddressSpacesImpl {
185 AssumptionCache &AC;
186 Function *F = nullptr;
187 const DominatorTree *DT = nullptr;
188 const TargetTransformInfo *TTI = nullptr;
189 const DataLayout *DL = nullptr;
190
191 /// Target specific address space whose uses should be replaced if
192 /// possible.
193 unsigned FlatAddrSpace = 0;
194
195 // Try to update the address space of V. Returns true if V is updated, and
196 // false otherwise.
197 bool updateAddressSpace(const Value &V,
198 ValueToAddrSpaceMapTy &InferredAddrSpace,
199 PredicatedAddrSpaceMapTy &PredicatedAS) const;
200
201 // Tries to infer the specific address space of each address expression in
202 // Postorder.
203 void inferAddressSpaces(ArrayRef<WeakTrackingVH> Postorder,
204 ValueToAddrSpaceMapTy &InferredAddrSpace,
205 PredicatedAddrSpaceMapTy &PredicatedAS) const;
206
207 bool isSafeToCastConstAddrSpace(Constant *C, unsigned NewAS) const;
208
209 Value *cloneInstructionWithNewAddressSpace(
210 Instruction *I, unsigned NewAddrSpace,
211 const ValueToValueMapTy &ValueWithNewAddrSpace,
212 const PredicatedAddrSpaceMapTy &PredicatedAS,
213 SmallVectorImpl<const Use *> *PoisonUsesToFix) const;
214
215 void performPointerReplacement(
216 Value *V, Value *NewV, Use &U, ValueToValueMapTy &ValueWithNewAddrSpace,
217 SmallVectorImpl<Instruction *> &DeadInstructions) const;
218
219 // Changes the flat address expressions in function F to point to specific
220 // address spaces if InferredAddrSpace says so. Postorder is the postorder of
221 // all flat expressions in the use-def graph of function F.
222 bool rewriteWithNewAddressSpaces(
223 ArrayRef<WeakTrackingVH> Postorder,
224 const ValueToAddrSpaceMapTy &InferredAddrSpace,
225 const PredicatedAddrSpaceMapTy &PredicatedAS) const;
226
227 void appendsFlatAddressExpressionToPostorderStack(
228 Value *V, PostorderStackTy &PostorderStack,
229 DenseSet<Value *> &Visited) const;
230
231 bool rewriteIntrinsicOperands(IntrinsicInst *II, Value *OldV,
232 Value *NewV) const;
233 void collectRewritableIntrinsicOperands(IntrinsicInst *II,
234 PostorderStackTy &PostorderStack,
235 DenseSet<Value *> &Visited) const;
236
237 std::vector<WeakTrackingVH> collectFlatAddressExpressions(Function &F) const;
238
239 Value *cloneValueWithNewAddressSpace(
240 Value *V, unsigned NewAddrSpace,
241 const ValueToValueMapTy &ValueWithNewAddrSpace,
242 const PredicatedAddrSpaceMapTy &PredicatedAS,
243 SmallVectorImpl<const Use *> *PoisonUsesToFix) const;
244 unsigned joinAddressSpaces(unsigned AS1, unsigned AS2) const;
245
246 unsigned getPredicatedAddrSpace(const Value &PtrV,
247 const Value *UserCtx) const;
248
249public:
250 InferAddressSpacesImpl(AssumptionCache &AC, const DominatorTree *DT,
251 const TargetTransformInfo *TTI, unsigned FlatAddrSpace)
252 : AC(AC), DT(DT), TTI(TTI), FlatAddrSpace(FlatAddrSpace) {}
253 bool run(Function &F);
254};
255
256} // end anonymous namespace
257
258char InferAddressSpaces::ID = 0;
259
260INITIALIZE_PASS_BEGIN(InferAddressSpaces, DEBUG_TYPE, "Infer address spaces",
261 false, false)
262INITIALIZE_PASS_DEPENDENCY(AssumptionCacheTracker)
263INITIALIZE_PASS_DEPENDENCY(TargetTransformInfoWrapperPass)
264INITIALIZE_PASS_END(InferAddressSpaces, DEBUG_TYPE, "Infer address spaces",
265 false, false)
266
267static Type *getPtrOrVecOfPtrsWithNewAS(Type *Ty, unsigned NewAddrSpace) {
268 assert(Ty->isPtrOrPtrVectorTy());
269 PointerType *NPT = PointerType::get(Ty->getContext(), NewAddrSpace);
270 return Ty->getWithNewType(NPT);
271}
272
273// Check whether that's a no-op pointer bitcast implemented as a pair of
274// `ptrtoint`/`inttoptr`, a pattern used because there is no no-op pointer
275// bitcast over different address spaces.
276static bool isNoopPtrIntCastPair(const Operator *I2P, const DataLayout &DL,
277 const TargetTransformInfo *TTI) {
278 assert(I2P->getOpcode() == Instruction::IntToPtr);
279 auto *P2I = dyn_cast<Operator>(I2P->getOperand(0));
280 if (!P2I || P2I->getOpcode() != Instruction::PtrToInt)
281 return false;
282 // Check it's really safe to treat that pair of `ptrtoint`/`inttoptr` as a
283 // no-op cast. Besides checking both of them are no-op casts, as the
284 // reinterpreted pointer may be used in other pointer arithmetic, we also
285 // need to double-check that through the target-specific hook. That ensures
286 // the underlying target also agrees that's a no-op address space cast and
287 // pointer bits are preserved.
288 // The current IR spec doesn't have clear rules on address space casts,
289 // especially a clear definition for pointer bits in non-default address
290// spaces. It would be undefined behavior if that pointer is dereferenced
291// after an invalid reinterpret cast. Also, because the current spec is unclear
292// about the meaning of pointer bits in non-default address spaces, pointer
293// arithmetic may also be undefined after an invalid pointer reinterpret cast.
294 // However, as we confirm through the target hooks that it's a no-op
295 // addrspacecast, it doesn't matter since the bits should be the same.
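 // For example (an illustrative pattern, not taken from a specific test), a
 // pair such as
 //   %i = ptrtoint ptr addrspace(1) %p to i64
 //   %q = inttoptr i64 %i to ptr
 // is treated as a no-op cast of %p into the flat address space, provided the
 // target confirms that the addrspace(1)-to-flat cast preserves all pointer
 // bits.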
296 unsigned P2IOp0AS = P2I->getOperand(0)->getType()->getPointerAddressSpace();
297 unsigned I2PAS = I2P->getType()->getPointerAddressSpace();
298 return CastInst::isNoopCast(Instruction::CastOps(I2P->getOpcode()),
299 I2P->getOperand(0)->getType(), I2P->getType(),
300 DL) &&
301 CastInst::isNoopCast(Instruction::CastOps(P2I->getOpcode()),
302 P2I->getOperand(0)->getType(), P2I->getType(),
303 DL) &&
303 DL) &&
304 (P2IOp0AS == I2PAS || TTI->isNoopAddrSpaceCast(P2IOp0AS, I2PAS));
305}
306
307// Returns true if V is an address expression.
308// TODO: Currently, we only consider:
309// - arguments
310// - phi, bitcast, addrspacecast, and getelementptr operators
311static bool isAddressExpression(const Value &V, const DataLayout &DL,
312 const TargetTransformInfo *TTI) {
313
314 if (const Argument *Arg = dyn_cast<Argument>(&V))
315 return Arg->getType()->isPointerTy() &&
316 TTI->getAssumedAddrSpace(Arg) != UninitializedAddressSpace;
317
318 const Operator *Op = dyn_cast<Operator>(&V);
319 if (!Op)
320 return false;
321
322 switch (Op->getOpcode()) {
323 case Instruction::PHI:
324 assert(Op->getType()->isPtrOrPtrVectorTy());
325 return true;
326 case Instruction::BitCast:
327 case Instruction::AddrSpaceCast:
328 case Instruction::GetElementPtr:
329 return true;
330 case Instruction::Select:
331 return Op->getType()->isPtrOrPtrVectorTy();
332 case Instruction::Call: {
333 const IntrinsicInst *II = dyn_cast<IntrinsicInst>(&V);
334 return II && II->getIntrinsicID() == Intrinsic::ptrmask;
335 }
336 case Instruction::IntToPtr:
337 return isNoopPtrIntCastPair(Op, DL, TTI);
338 default:
339 // That value is an address expression if it has an assumed address space.
340 return TTI->getAssumedAddrSpace(&V) != UninitializedAddressSpace;
341 }
342}
343
344// Returns the pointer operands of V.
345//
346// Precondition: V is an address expression.
347static SmallVector<Value *, 2> getPointerOperands(const Value &V,
348 const DataLayout &DL,
349 const TargetTransformInfo *TTI) {
350 if (isa<Argument>(&V))
351 return {};
352
353 const Operator &Op = cast<Operator>(V);
354 switch (Op.getOpcode()) {
355 case Instruction::PHI: {
356 auto IncomingValues = cast<PHINode>(Op).incoming_values();
357 return {IncomingValues.begin(), IncomingValues.end()};
358 }
359 case Instruction::BitCast:
360 case Instruction::AddrSpaceCast:
361 case Instruction::GetElementPtr:
362 return {Op.getOperand(0)};
363 case Instruction::Select:
364 return {Op.getOperand(1), Op.getOperand(2)};
365 case Instruction::Call: {
366 const IntrinsicInst &II = cast<IntrinsicInst>(Op);
367 assert(II.getIntrinsicID() == Intrinsic::ptrmask &&
368 "unexpected intrinsic call");
369 return {II.getArgOperand(0)};
370 }
371 case Instruction::IntToPtr: {
372 assert(isNoopPtrIntCastPair(&Op, DL, TTI));
373 auto *P2I = cast<Operator>(Op.getOperand(0));
374 return {P2I->getOperand(0)};
375 }
376 default:
377 llvm_unreachable("Unexpected instruction type.");
378 }
379}
380
381bool InferAddressSpacesImpl::rewriteIntrinsicOperands(IntrinsicInst *II,
382 Value *OldV,
383 Value *NewV) const {
384 Module *M = II->getParent()->getParent()->getParent();
385 Intrinsic::ID IID = II->getIntrinsicID();
386 switch (IID) {
387 case Intrinsic::objectsize:
388 case Intrinsic::masked_load: {
389 Type *DestTy = II->getType();
390 Type *SrcTy = NewV->getType();
391 Function *NewDecl =
392 Intrinsic::getOrInsertDeclaration(M, IID, {DestTy, SrcTy});
393 II->setArgOperand(0, NewV);
394 II->setCalledFunction(NewDecl);
395 return true;
396 }
397 case Intrinsic::ptrmask:
398 // This is handled as an address expression, not as a use memory operation.
399 return false;
400 case Intrinsic::masked_gather: {
401 Type *RetTy = II->getType();
402 Type *NewPtrTy = NewV->getType();
403 Function *NewDecl =
404 Intrinsic::getOrInsertDeclaration(M, IID, {RetTy, NewPtrTy});
405 II->setArgOperand(0, NewV);
406 II->setCalledFunction(NewDecl);
407 return true;
408 }
409 case Intrinsic::masked_store:
410 case Intrinsic::masked_scatter: {
411 Type *ValueTy = II->getOperand(0)->getType();
412 Type *NewPtrTy = NewV->getType();
413 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
414 M, II->getIntrinsicID(), {ValueTy, NewPtrTy});
415 II->setArgOperand(1, NewV);
416 II->setCalledFunction(NewDecl);
417 return true;
418 }
419 case Intrinsic::prefetch:
420 case Intrinsic::is_constant: {
421 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
422 M, II->getIntrinsicID(), {NewV->getType()});
423 II->setArgOperand(0, NewV);
424 II->setCalledFunction(NewDecl);
425 return true;
426 }
427 case Intrinsic::fake_use: {
428 II->replaceUsesOfWith(OldV, NewV);
429 return true;
430 }
431 case Intrinsic::lifetime_start:
432 case Intrinsic::lifetime_end: {
433 // Always force lifetime markers to work directly on the alloca.
434 NewV = NewV->stripPointerCasts();
435 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
436 M, II->getIntrinsicID(), {NewV->getType()});
437 II->setArgOperand(0, NewV);
438 II->setCalledFunction(NewDecl);
439 return true;
440 }
441 default: {
442 Value *Rewrite = TTI->rewriteIntrinsicWithAddressSpace(II, OldV, NewV);
443 if (!Rewrite)
444 return false;
445 if (Rewrite != II)
446 II->replaceAllUsesWith(Rewrite);
447 return true;
448 }
449 }
450}
451
452void InferAddressSpacesImpl::collectRewritableIntrinsicOperands(
453 IntrinsicInst *II, PostorderStackTy &PostorderStack,
454 DenseSet<Value *> &Visited) const {
455 auto IID = II->getIntrinsicID();
456 switch (IID) {
457 case Intrinsic::ptrmask:
458 case Intrinsic::objectsize:
459 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
460 PostorderStack, Visited);
461 break;
462 case Intrinsic::is_constant: {
463 Value *Ptr = II->getArgOperand(0);
464 if (Ptr->getType()->isPtrOrPtrVectorTy()) {
465 appendsFlatAddressExpressionToPostorderStack(Ptr, PostorderStack,
466 Visited);
467 }
468
469 break;
470 }
471 case Intrinsic::masked_load:
472 case Intrinsic::masked_gather:
473 case Intrinsic::prefetch:
474 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
475 PostorderStack, Visited);
476 break;
477 case Intrinsic::masked_store:
478 case Intrinsic::masked_scatter:
479 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(1),
480 PostorderStack, Visited);
481 break;
482 case Intrinsic::fake_use: {
483 for (Value *Op : II->operands()) {
484 if (Op->getType()->isPtrOrPtrVectorTy()) {
485 appendsFlatAddressExpressionToPostorderStack(Op, PostorderStack,
486 Visited);
487 }
488 }
489
490 break;
491 }
492 case Intrinsic::lifetime_start:
493 case Intrinsic::lifetime_end: {
494 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
495 PostorderStack, Visited);
496 break;
497 }
498 default:
499 SmallVector<int, 2> OpIndexes;
500 if (TTI->collectFlatAddressOperands(OpIndexes, IID)) {
501 for (int Idx : OpIndexes) {
502 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(Idx),
503 PostorderStack, Visited);
504 }
505 }
506 break;
507 }
508}
509
511// If V is an unvisited flat address expression, appends V to PostorderStack
512// and marks it as visited.
513void InferAddressSpacesImpl::appendsFlatAddressExpressionToPostorderStack(
514 Value *V, PostorderStackTy &PostorderStack,
515 DenseSet<Value *> &Visited) const {
516 assert(V->getType()->isPtrOrPtrVectorTy());
517
518 // Generic addressing expressions may be hidden in nested constant
519 // expressions.
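 // For example (illustrative), in
 //   %v = load float, ptr getelementptr inbounds ([10 x float],
 //            ptr addrspacecast (ptr addrspace(3) @a to ptr), i64 0, i64 3)
 // the generic addrspacecast is nested inside a getelementptr constant
 // expression instead of appearing as a standalone instruction.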
520 if (ConstantExpr *CE = dyn_cast<ConstantExpr>(V)) {
521 // TODO: Look in non-address parts, like icmp operands.
522 if (isAddressExpression(*CE, *DL, TTI) && Visited.insert(CE).second)
523 PostorderStack.emplace_back(CE, false);
524
525 return;
526 }
527
528 if (V->getType()->getPointerAddressSpace() == FlatAddrSpace &&
529 isAddressExpression(*V, *DL, TTI)) {
530 if (Visited.insert(V).second) {
531 PostorderStack.emplace_back(V, false);
532
533 if (auto *Op = dyn_cast<Operator>(V))
534 for (auto &O : Op->operands())
535 if (ConstantExpr *CE = dyn_cast<ConstantExpr>(O))
536 if (isAddressExpression(*CE, *DL, TTI) && Visited.insert(CE).second)
537 PostorderStack.emplace_back(CE, false);
538 }
539 }
540}
541
542// Returns all flat address expressions in function F. The elements are ordered
543// in postorder.
544std::vector<WeakTrackingVH>
545InferAddressSpacesImpl::collectFlatAddressExpressions(Function &F) const {
546 // This function implements a non-recursive postorder traversal of a partial
547 // use-def graph of function F.
548 PostorderStackTy PostorderStack;
549 // The set of visited expressions.
550 DenseSet<Value *> Visited;
551
552 auto PushPtrOperand = [&](Value *Ptr) {
553 appendsFlatAddressExpressionToPostorderStack(Ptr, PostorderStack, Visited);
554 };
555
556 // Look at operations that may be interesting to accelerate by moving to a
557 // known address space. We mainly target loads and stores, but pure
558 // addressing calculations may also be faster.
559 for (Instruction &I : instructions(F)) {
560 if (auto *GEP = dyn_cast<GetElementPtrInst>(&I)) {
561 PushPtrOperand(GEP->getPointerOperand());
562 } else if (auto *LI = dyn_cast<LoadInst>(&I))
563 PushPtrOperand(LI->getPointerOperand());
564 else if (auto *SI = dyn_cast<StoreInst>(&I))
565 PushPtrOperand(SI->getPointerOperand());
566 else if (auto *RMW = dyn_cast<AtomicRMWInst>(&I))
567 PushPtrOperand(RMW->getPointerOperand());
568 else if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(&I))
569 PushPtrOperand(CmpX->getPointerOperand());
570 else if (auto *MI = dyn_cast<MemIntrinsic>(&I)) {
571 // For memset/memcpy/memmove, any pointer operand can be replaced.
572 PushPtrOperand(MI->getRawDest());
573
574 // Handle 2nd operand for memcpy/memmove.
575 if (auto *MTI = dyn_cast<MemTransferInst>(MI))
576 PushPtrOperand(MTI->getRawSource());
577 } else if (auto *II = dyn_cast<IntrinsicInst>(&I))
578 collectRewritableIntrinsicOperands(II, PostorderStack, Visited);
579 else if (ICmpInst *Cmp = dyn_cast<ICmpInst>(&I)) {
580 if (Cmp->getOperand(0)->getType()->isPtrOrPtrVectorTy()) {
581 PushPtrOperand(Cmp->getOperand(0));
582 PushPtrOperand(Cmp->getOperand(1));
583 }
584 } else if (auto *ASC = dyn_cast<AddrSpaceCastInst>(&I)) {
585 PushPtrOperand(ASC->getPointerOperand());
586 } else if (auto *I2P = dyn_cast<IntToPtrInst>(&I)) {
587 if (isNoopPtrIntCastPair(cast<Operator>(I2P), *DL, TTI))
588 PushPtrOperand(cast<Operator>(I2P->getOperand(0))->getOperand(0));
589 } else if (auto *RI = dyn_cast<ReturnInst>(&I)) {
590 if (auto *RV = RI->getReturnValue();
591 RV && RV->getType()->isPtrOrPtrVectorTy())
592 PushPtrOperand(RV);
593 }
594 }
595
596 std::vector<WeakTrackingVH> Postorder; // The resultant postorder.
597 while (!PostorderStack.empty()) {
598 Value *TopVal = PostorderStack.back().getPointer();
599 // If the operands of the expression on the top are already explored,
600 // adds that expression to the resultant postorder.
601 if (PostorderStack.back().getInt()) {
602 if (TopVal->getType()->getPointerAddressSpace() == FlatAddrSpace)
603 Postorder.push_back(TopVal);
604 PostorderStack.pop_back();
605 continue;
606 }
607 // Otherwise, adds its operands to the stack and explores them.
608 PostorderStack.back().setInt(true);
609 // Skip values with an assumed address space.
610 if (TTI->getAssumedAddrSpace(TopVal) == UninitializedAddressSpace) {
611 for (Value *PtrOperand : getPointerOperands(*TopVal, *DL, TTI)) {
612 appendsFlatAddressExpressionToPostorderStack(PtrOperand, PostorderStack,
613 Visited);
614 }
615 }
616 }
617 return Postorder;
618}
619
620// A helper function for cloneInstructionWithNewAddressSpace. Returns the clone
621// of OperandUse.get() in the new address space. If the clone is not ready yet,
622// returns poison in the new address space as a placeholder.
623static Value *operandWithNewAddressSpaceOrCreatePoison(
624 const Use &OperandUse, unsigned NewAddrSpace,
625 const ValueToValueMapTy &ValueWithNewAddrSpace,
626 const PredicatedAddrSpaceMapTy &PredicatedAS,
627 SmallVectorImpl<const Use *> *PoisonUsesToFix) {
628 Value *Operand = OperandUse.get();
629
630 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(Operand->getType(), NewAddrSpace);
631
632 if (Constant *C = dyn_cast<Constant>(Operand))
633 return ConstantExpr::getAddrSpaceCast(C, NewPtrTy);
634
635 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(Operand))
636 return NewOperand;
637
638 Instruction *Inst = cast<Instruction>(OperandUse.getUser());
639 auto I = PredicatedAS.find(std::make_pair(Inst, Operand));
640 if (I != PredicatedAS.end()) {
641 // Insert an addrspacecast on that operand before the user.
642 unsigned NewAS = I->second;
643 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(Operand->getType(), NewAS);
644 auto *NewI = new AddrSpaceCastInst(Operand, NewPtrTy);
645 NewI->insertBefore(Inst->getIterator());
646 NewI->setDebugLoc(Inst->getDebugLoc());
647 return NewI;
648 }
649
650 PoisonUsesToFix->push_back(&OperandUse);
651 return PoisonValue::get(NewPtrTy);
652}
653
654// Returns a clone of `I` with its operands converted to those specified in
655// ValueWithNewAddrSpace. Due to potential cycles in the data flow graph, an
656// operand whose address space needs to be modified might not exist in
657// ValueWithNewAddrSpace. In that case, poison is used as a placeholder operand
658// and that operand use is added to PoisonUsesToFix so the caller can fix it later.
659//
660// Note that we do not necessarily clone `I`, e.g., if it is an addrspacecast
661// from a pointer whose type already matches. Therefore, this function returns a
662// Value* instead of an Instruction*.
663//
664// This may also return nullptr in the case the instruction could not be
665// rewritten.
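//
// For example (illustrative), when rewriting
//   %p = addrspacecast ptr addrspace(3) %src to ptr
// to addrspace(3), this function simply returns %src instead of cloning the
// cast.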
666Value *InferAddressSpacesImpl::cloneInstructionWithNewAddressSpace(
667 Instruction *I, unsigned NewAddrSpace,
668 const ValueToValueMapTy &ValueWithNewAddrSpace,
669 const PredicatedAddrSpaceMapTy &PredicatedAS,
670 SmallVectorImpl<const Use *> *PoisonUsesToFix) const {
671 Type *NewPtrType = getPtrOrVecOfPtrsWithNewAS(I->getType(), NewAddrSpace);
672
673 if (I->getOpcode() == Instruction::AddrSpaceCast) {
674 Value *Src = I->getOperand(0);
675 // Because `I` is flat, the source address space must be specific.
676 // Therefore, the inferred address space must be the source space, according
677 // to our algorithm.
678 assert(Src->getType()->getPointerAddressSpace() == NewAddrSpace);
679 return Src;
680 }
681
682 if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
683 // Calls are handled early; the only intrinsic that reaches here as an
684 // address expression is ptrmask, whose pointer-typed argument is rewritten.
685 assert(II->getIntrinsicID() == Intrinsic::ptrmask);
686 Value *NewPtr = operandWithNewAddressSpaceOrCreatePoison(
687 II->getArgOperandUse(0), NewAddrSpace, ValueWithNewAddrSpace,
688 PredicatedAS, PoisonUsesToFix);
689 Value *Rewrite =
690 TTI->rewriteIntrinsicWithAddressSpace(II, II->getArgOperand(0), NewPtr);
691 if (Rewrite) {
692 assert(Rewrite != II && "cannot modify this pointer operation in place");
693 return Rewrite;
694 }
695
696 return nullptr;
697 }
698
699 unsigned AS = TTI->getAssumedAddrSpace(I);
700 if (AS != UninitializedAddressSpace) {
701 // For the assumed address space, insert an `addrspacecast` to make that
702 // explicit.
703 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(I->getType(), AS);
704 auto *NewI = new AddrSpaceCastInst(I, NewPtrTy);
705 NewI->insertAfter(I->getIterator());
706 NewI->setDebugLoc(I->getDebugLoc());
707 return NewI;
708 }
709
710 // Computes the converted pointer operands.
711 SmallVector<Value *, 4> NewPointerOperands;
712 for (const Use &OperandUse : I->operands()) {
713 if (!OperandUse.get()->getType()->isPtrOrPtrVectorTy())
714 NewPointerOperands.push_back(nullptr);
715 else
716 NewPointerOperands.push_back(operandWithNewAddressSpaceOrCreatePoison(
717 OperandUse, NewAddrSpace, ValueWithNewAddrSpace, PredicatedAS,
718 PoisonUsesToFix));
719 }
720
721 switch (I->getOpcode()) {
722 case Instruction::BitCast:
723 return new BitCastInst(NewPointerOperands[0], NewPtrType);
724 case Instruction::PHI: {
725 assert(I->getType()->isPtrOrPtrVectorTy());
726 PHINode *PHI = cast<PHINode>(I);
727 PHINode *NewPHI = PHINode::Create(NewPtrType, PHI->getNumIncomingValues());
728 for (unsigned Index = 0; Index < PHI->getNumIncomingValues(); ++Index) {
729 unsigned OperandNo = PHINode::getOperandNumForIncomingValue(Index);
730 NewPHI->addIncoming(NewPointerOperands[OperandNo],
731 PHI->getIncomingBlock(Index));
732 }
733 return NewPHI;
734 }
735 case Instruction::GetElementPtr: {
736 GetElementPtrInst *GEP = cast<GetElementPtrInst>(I);
737 GetElementPtrInst *NewGEP = GetElementPtrInst::Create(
738 GEP->getSourceElementType(), NewPointerOperands[0],
739 SmallVector<Value *, 4>(GEP->indices()));
740 NewGEP->setIsInBounds(GEP->isInBounds());
741 return NewGEP;
742 }
743 case Instruction::Select:
744 assert(I->getType()->isPtrOrPtrVectorTy());
745 return SelectInst::Create(I->getOperand(0), NewPointerOperands[1],
746 NewPointerOperands[2], "", nullptr, I);
747 case Instruction::IntToPtr: {
748 assert(isNoopPtrIntCastPair(cast<Operator>(I), *DL, TTI));
749 Value *Src = cast<Operator>(I->getOperand(0))->getOperand(0);
750 if (Src->getType() == NewPtrType)
751 return Src;
752
753 // If we had a no-op inttoptr/ptrtoint pair, we may still have inferred a
754 // specific address space for a generic pointer source, so we need to insert
755 // an addrspacecast from the source pointer.
756 return new AddrSpaceCastInst(Src, NewPtrType);
757 }
758 default:
759 llvm_unreachable("Unexpected opcode");
760 }
761}
762
763// Similar to cloneInstructionWithNewAddressSpace, returns a clone of the
764// constant expression `CE` with its operands replaced as specified in
765// ValueWithNewAddrSpace.
766static Value *cloneConstantExprWithNewAddressSpace(
767 ConstantExpr *CE, unsigned NewAddrSpace,
768 const ValueToValueMapTy &ValueWithNewAddrSpace, const DataLayout *DL,
769 const TargetTransformInfo *TTI) {
770 Type *TargetType =
771 CE->getType()->isPtrOrPtrVectorTy()
772 ? getPtrOrVecOfPtrsWithNewAS(CE->getType(), NewAddrSpace)
773 : CE->getType();
774
775 if (CE->getOpcode() == Instruction::AddrSpaceCast) {
776 // Because CE is flat, the source address space must be specific.
777 // Therefore, the inferred address space must be the source space according
778 // to our algorithm.
779 assert(CE->getOperand(0)->getType()->getPointerAddressSpace() ==
780 NewAddrSpace);
781 return CE->getOperand(0);
782 }
783
784 if (CE->getOpcode() == Instruction::BitCast) {
785 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(CE->getOperand(0)))
786 return ConstantExpr::getBitCast(cast<Constant>(NewOperand), TargetType);
787 return ConstantExpr::getAddrSpaceCast(CE, TargetType);
788 }
789
790 if (CE->getOpcode() == Instruction::IntToPtr) {
791 assert(isNoopPtrIntCastPair(cast<Operator>(CE), *DL, TTI));
792 Constant *Src = cast<ConstantExpr>(CE->getOperand(0))->getOperand(0);
793 assert(Src->getType()->getPointerAddressSpace() == NewAddrSpace);
794 return Src;
795 }
796
797 // Computes the operands of the new constant expression.
798 bool IsNew = false;
799 SmallVector<Constant *, 4> NewOperands;
800 for (unsigned Index = 0; Index < CE->getNumOperands(); ++Index) {
801 Constant *Operand = CE->getOperand(Index);
802 // If the address space of `Operand` needs to be modified, the new operand
803 // with the new address space should already be in ValueWithNewAddrSpace
804 // because (1) the constant expressions we consider (i.e. addrspacecast,
805 // bitcast, and getelementptr) do not incur cycles in the data flow graph
806 // and (2) this function is called on constant expressions in postorder.
807 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(Operand)) {
808 IsNew = true;
809 NewOperands.push_back(cast<Constant>(NewOperand));
810 continue;
811 }
812 if (auto *CExpr = dyn_cast<ConstantExpr>(Operand))
813 if (Value *NewOperand = cloneConstantExprWithNewAddressSpace(
814 CExpr, NewAddrSpace, ValueWithNewAddrSpace, DL, TTI)) {
815 IsNew = true;
816 NewOperands.push_back(cast<Constant>(NewOperand));
817 continue;
818 }
819 // Otherwise, reuses the old operand.
820 NewOperands.push_back(Operand);
821 }
822
823 // If !IsNew, we will replace the Value with itself. However, replaced values
824 // are assumed to be wrapped in an addrspacecast later, so drop it now.
825 if (!IsNew)
826 return nullptr;
827
828 if (CE->getOpcode() == Instruction::GetElementPtr) {
829 // Needs to specify the source type while constructing a getelementptr
830 // constant expression.
831 return CE->getWithOperands(NewOperands, TargetType, /*OnlyIfReduced=*/false,
832 cast<GEPOperator>(CE)->getSourceElementType());
833 }
834
835 return CE->getWithOperands(NewOperands, TargetType);
836}
837
838// Returns a clone of the value `V`, with its operands replaced as specified in
839// ValueWithNewAddrSpace. This function is called on every flat address
840// expression whose address space needs to be modified, in postorder.
841//
842// See cloneInstructionWithNewAddressSpace for the meaning of PoisonUsesToFix.
843Value *InferAddressSpacesImpl::cloneValueWithNewAddressSpace(
844 Value *V, unsigned NewAddrSpace,
845 const ValueToValueMapTy &ValueWithNewAddrSpace,
846 const PredicatedAddrSpaceMapTy &PredicatedAS,
847 SmallVectorImpl<const Use *> *PoisonUsesToFix) const {
848 // All values in Postorder are flat address expressions.
849 assert(V->getType()->getPointerAddressSpace() == FlatAddrSpace &&
850 isAddressExpression(*V, *DL, TTI));
851
852 if (auto *Arg = dyn_cast<Argument>(V)) {
853 // Arguments are address space casted in the function body, as we do not
854 // want to change the function signature.
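 // For example (illustrative), a flat pointer argument `ptr %arg` inferred to
 // be addrspace(3) is left untouched; instead, an
 //   %0 = addrspacecast ptr %arg to ptr addrspace(3)
 // is inserted at the start of the entry block and rewritten users refer to it.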
855 Function *F = Arg->getParent();
856 BasicBlock::iterator Insert = F->getEntryBlock().getFirstNonPHIIt();
857
858 Type *NewPtrTy = PointerType::get(Arg->getContext(), NewAddrSpace);
859 auto *NewI = new AddrSpaceCastInst(Arg, NewPtrTy);
860 NewI->insertBefore(Insert);
861 return NewI;
862 }
863
864 if (Instruction *I = dyn_cast<Instruction>(V)) {
865 Value *NewV = cloneInstructionWithNewAddressSpace(
866 I, NewAddrSpace, ValueWithNewAddrSpace, PredicatedAS, PoisonUsesToFix);
867 if (Instruction *NewI = dyn_cast_or_null<Instruction>(NewV)) {
868 if (NewI->getParent() == nullptr) {
869 NewI->insertBefore(I->getIterator());
870 NewI->takeName(I);
871 NewI->setDebugLoc(I->getDebugLoc());
872 }
873 }
874 return NewV;
875 }
876
877 return cloneConstantExprWithNewAddressSpace(
878 cast<ConstantExpr>(V), NewAddrSpace, ValueWithNewAddrSpace, DL, TTI);
879}
880
881// Defines the join operation on the address space lattice (see the file header
882// comments).
883unsigned InferAddressSpacesImpl::joinAddressSpaces(unsigned AS1,
884 unsigned AS2) const {
885 if (AS1 == FlatAddrSpace || AS2 == FlatAddrSpace)
886 return FlatAddrSpace;
887
888 if (AS1 == UninitializedAddressSpace)
889 return AS2;
890 if (AS2 == UninitializedAddressSpace)
891 return AS1;
892
893 // The join of two different specific address spaces is flat.
894 return (AS1 == AS2) ? AS1 : FlatAddrSpace;
895}
896
897bool InferAddressSpacesImpl::run(Function &CurFn) {
898 F = &CurFn;
899 DL = &F->getDataLayout();
900
901 if (AssumeDefaultIsFlatAddressSpace)
902 FlatAddrSpace = 0;
903
904 if (FlatAddrSpace == UninitializedAddressSpace) {
905 FlatAddrSpace = TTI->getFlatAddressSpace();
906 if (FlatAddrSpace == UninitializedAddressSpace)
907 return false;
908 }
909
910 // Collects all flat address expressions in postorder.
911 std::vector<WeakTrackingVH> Postorder = collectFlatAddressExpressions(*F);
912
913 // Runs a data-flow analysis to refine the address spaces of every expression
914 // in Postorder.
915 ValueToAddrSpaceMapTy InferredAddrSpace;
916 PredicatedAddrSpaceMapTy PredicatedAS;
917 inferAddressSpaces(Postorder, InferredAddrSpace, PredicatedAS);
918
919 // Changes the address spaces of the flat address expressions that are
920 // inferred to point to a specific address space.
921 return rewriteWithNewAddressSpaces(Postorder, InferredAddrSpace,
922 PredicatedAS);
923}
924
925// Constants need to be tracked through RAUW to handle cases with nested
926// constant expressions, so wrap values in WeakTrackingVH.
927void InferAddressSpacesImpl::inferAddressSpaces(
928 ArrayRef<WeakTrackingVH> Postorder,
929 ValueToAddrSpaceMapTy &InferredAddrSpace,
930 PredicatedAddrSpaceMapTy &PredicatedAS) const {
931 SetVector<Value *> Worklist(llvm::from_range, Postorder);
932 // Initially, all expressions are in the uninitialized address space.
933 for (Value *V : Postorder)
934 InferredAddrSpace[V] = UninitializedAddressSpace;
935
936 while (!Worklist.empty()) {
937 Value *V = Worklist.pop_back_val();
938
939 // Try to update the address space of V according to the address spaces of
940 // its operands.
941 if (!updateAddressSpace(*V, InferredAddrSpace, PredicatedAS))
942 continue;
943
944 for (Value *User : V->users()) {
945 // Skip if User is already in the worklist.
946 if (Worklist.count(User))
947 continue;
948
949 auto Pos = InferredAddrSpace.find(User);
950 // Our algorithm only updates the address spaces of flat address
951 // expressions, which are those in InferredAddrSpace.
952 if (Pos == InferredAddrSpace.end())
953 continue;
954
955 // Function updateAddressSpace moves the address space down a lattice
956 // path. Therefore, nothing to do if User is already inferred as flat (the
957 // bottom element in the lattice).
958 if (Pos->second == FlatAddrSpace)
959 continue;
960
961 Worklist.insert(User);
962 }
963 }
964}
965
966unsigned
967InferAddressSpacesImpl::getPredicatedAddrSpace(const Value &Ptr,
968 const Value *UserCtx) const {
969 const Instruction *UserCtxI = dyn_cast<Instruction>(UserCtx);
970 if (!UserCtxI)
971 return UninitializedAddressSpace;
972
973 const Value *StrippedPtr = Ptr.stripInBoundsOffsets();
974 for (auto &AssumeVH : AC.assumptionsFor(StrippedPtr)) {
975 if (!AssumeVH)
976 continue;
977 CallInst *CI = cast<CallInst>(AssumeVH);
978 if (!isValidAssumeForContext(CI, UserCtxI, DT))
979 continue;
980
981 const Value *Ptr;
982 unsigned AS;
983 std::tie(Ptr, AS) = TTI->getPredicatedAddrSpace(CI->getArgOperand(0));
984 if (Ptr)
985 return AS;
986 }
987
988 return UninitializedAddressSpace;
989}
990
991bool InferAddressSpacesImpl::updateAddressSpace(
992 const Value &V, ValueToAddrSpaceMapTy &InferredAddrSpace,
993 PredicatedAddrSpaceMapTy &PredicatedAS) const {
994 assert(InferredAddrSpace.count(&V));
995
996 LLVM_DEBUG(dbgs() << "Updating the address space of\n " << V << '\n');
997
998 // The new inferred address space equals the join of the address spaces
999 // of all its pointer operands.
1000 unsigned NewAS = UninitializedAddressSpace;
1001
1002 // isAddressExpression should guarantee that V is an operator or an argument.
1003 assert(isa<Operator>(V) || isa<Argument>(V));
1004
1005 if (isa<Operator>(V) &&
1006 cast<Operator>(V).getOpcode() == Instruction::Select) {
1007 const Operator &Op = cast<Operator>(V);
1008 Value *Src0 = Op.getOperand(1);
1009 Value *Src1 = Op.getOperand(2);
1010
1011 auto I = InferredAddrSpace.find(Src0);
1012 unsigned Src0AS = (I != InferredAddrSpace.end())
1013 ? I->second
1014 : Src0->getType()->getPointerAddressSpace();
1015
1016 auto J = InferredAddrSpace.find(Src1);
1017 unsigned Src1AS = (J != InferredAddrSpace.end())
1018 ? J->second
1019 : Src1->getType()->getPointerAddressSpace();
1020
1021 auto *C0 = dyn_cast<Constant>(Src0);
1022 auto *C1 = dyn_cast<Constant>(Src1);
1023
1024 // If one of the inputs is a constant, we may be able to do a constant
1025 // addrspacecast of it. Defer inferring the address space until the input
1026 // address space is known.
1027 if ((C1 && Src0AS == UninitializedAddressSpace) ||
1028 (C0 && Src1AS == UninitializedAddressSpace))
1029 return false;
1030
1031 if (C0 && isSafeToCastConstAddrSpace(C0, Src1AS))
1032 NewAS = Src1AS;
1033 else if (C1 && isSafeToCastConstAddrSpace(C1, Src0AS))
1034 NewAS = Src0AS;
1035 else
1036 NewAS = joinAddressSpaces(Src0AS, Src1AS);
1037 } else {
1038 unsigned AS = TTI->getAssumedAddrSpace(&V);
1039 if (AS != UninitializedAddressSpace) {
1040 // Use the assumed address space directly.
1041 NewAS = AS;
1042 } else {
1043 // Otherwise, infer the address space from its pointer operands.
1044 for (Value *PtrOperand : getPointerOperands(V, *DL, TTI)) {
1045 auto I = InferredAddrSpace.find(PtrOperand);
1046 unsigned OperandAS;
1047 if (I == InferredAddrSpace.end()) {
1048 OperandAS = PtrOperand->getType()->getPointerAddressSpace();
1049 if (OperandAS == FlatAddrSpace) {
1050 // Check AC for assumption dominating V.
1051 unsigned AS = getPredicatedAddrSpace(*PtrOperand, &V);
1052 if (AS != UninitializedAddressSpace) {
1053 LLVM_DEBUG(dbgs()
1054 << " deduce operand AS from the predicate addrspace "
1055 << AS << '\n');
1056 OperandAS = AS;
1057 // Record this use with the predicated AS.
1058 PredicatedAS[std::make_pair(&V, PtrOperand)] = OperandAS;
1059 }
1060 }
1061 } else
1062 OperandAS = I->second;
1063
1064 // join(flat, *) = flat. So we can break if NewAS is already flat.
1065 NewAS = joinAddressSpaces(NewAS, OperandAS);
1066 if (NewAS == FlatAddrSpace)
1067 break;
1068 }
1069 }
1070 }
1071
1072 unsigned OldAS = InferredAddrSpace.lookup(&V);
1073 assert(OldAS != FlatAddrSpace);
1074 if (OldAS == NewAS)
1075 return false;
1076
1077 // If any update is made, the users of V are added back to the worklist,
1078 // because their address spaces may also need to be updated.
1079 LLVM_DEBUG(dbgs() << " to " << NewAS << '\n');
1080 InferredAddrSpace[&V] = NewAS;
1081 return true;
1082}
1083
1084/// Replace operand \p OpIdx in \p Inst, if the value is the same as \p OldVal
1085/// with \p NewVal.
1086static bool replaceOperandIfSame(Instruction *Inst, unsigned OpIdx,
1087 Value *OldVal, Value *NewVal) {
1088 Use &U = Inst->getOperandUse(OpIdx);
1089 if (U.get() == OldVal) {
1090 U.set(NewVal);
1091 return true;
1092 }
1093
1094 return false;
1095}
1096
1097template <typename InstrType>
1098static bool replaceSimplePointerUse(const TargetTransformInfo &TTI,
1099 InstrType *MemInstr, unsigned AddrSpace,
1100 Value *OldV, Value *NewV) {
1101 if (!MemInstr->isVolatile() || TTI.hasVolatileVariant(MemInstr, AddrSpace)) {
1102 return replaceOperandIfSame(MemInstr, InstrType::getPointerOperandIndex(),
1103 OldV, NewV);
1104 }
1105
1106 return false;
1107}
1108
1109/// If \p OldV is used as the pointer operand of a compatible memory operation
1110/// \p Inst, replaces the pointer operand with \p NewV.
1111///
1112/// This covers memory instructions with a single pointer operand that can have
1113/// its address space changed by simply mutating the use to a new value.
1114///
1115/// \returns true if the replacement was made.
1116static bool replaceIfSimplePointerUse(const TargetTransformInfo &TTI,
1117 User *Inst, unsigned AddrSpace,
1118 Value *OldV, Value *NewV) {
1119 if (auto *LI = dyn_cast<LoadInst>(Inst))
1120 return replaceSimplePointerUse(TTI, LI, AddrSpace, OldV, NewV);
1121
1122 if (auto *SI = dyn_cast<StoreInst>(Inst))
1123 return replaceSimplePointerUse(TTI, SI, AddrSpace, OldV, NewV);
1124
1125 if (auto *RMW = dyn_cast<AtomicRMWInst>(Inst))
1126 return replaceSimplePointerUse(TTI, RMW, AddrSpace, OldV, NewV);
1127
1128 if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(Inst))
1129 return replaceSimplePointerUse(TTI, CmpX, AddrSpace, OldV, NewV);
1130
1131 return false;
1132}
1133
1134/// Update memory intrinsic uses that require more complex processing than
1135/// simple memory instructions. These require re-mangling and may have multiple
1136/// pointer operands.
1137static bool handleMemIntrinsicPtrUse(MemIntrinsic *MI, Value *OldV,
1138 Value *NewV) {
1139 IRBuilder<> B(MI);
1140 if (auto *MSI = dyn_cast<MemSetInst>(MI)) {
1141 B.CreateMemSet(NewV, MSI->getValue(), MSI->getLength(), MSI->getDestAlign(),
1142 false, // isVolatile
1143 MI->getAAMetadata());
1144 } else if (auto *MTI = dyn_cast<MemTransferInst>(MI)) {
1145 Value *Src = MTI->getRawSource();
1146 Value *Dest = MTI->getRawDest();
1147
1148 // Be careful in case this is a self-to-self copy.
1149 if (Src == OldV)
1150 Src = NewV;
1151
1152 if (Dest == OldV)
1153 Dest = NewV;
1154
1155 if (auto *MCI = dyn_cast<MemCpyInst>(MTI)) {
1156 if (MCI->isForceInlined())
1157 B.CreateMemCpyInline(Dest, MTI->getDestAlign(), Src,
1158 MTI->getSourceAlign(), MTI->getLength(),
1159 false, // isVolatile
1160 MI->getAAMetadata());
1161 else
1162 B.CreateMemCpy(Dest, MTI->getDestAlign(), Src, MTI->getSourceAlign(),
1163 MTI->getLength(),
1164 false, // isVolatile
1165 MI->getAAMetadata());
1166 } else {
1167 assert(isa<MemMoveInst>(MTI));
1168 B.CreateMemMove(Dest, MTI->getDestAlign(), Src, MTI->getSourceAlign(),
1169 MTI->getLength(),
1170 false, // isVolatile
1171 MI->getAAMetadata());
1172 }
1173 } else
1174 llvm_unreachable("unhandled MemIntrinsic");
1175
1176 MI->eraseFromParent();
1177 return true;
1178}
1179
1180// \returns true if it is OK to change the address space of constant \p C with
1181// a ConstantExpr addrspacecast.
1182bool InferAddressSpacesImpl::isSafeToCastConstAddrSpace(Constant *C,
1183 unsigned NewAS) const {
1184 assert(NewAS != UninitializedAddressSpace);
1185
1186 unsigned SrcAS = C->getType()->getPointerAddressSpace();
1187 if (SrcAS == NewAS || isa<UndefValue>(C))
1188 return true;
1189
1190 // Prevent illegal casts between different non-flat address spaces.
1191 if (SrcAS != FlatAddrSpace && NewAS != FlatAddrSpace)
1192 return false;
1193
1194 if (isa<ConstantPointerNull>(C))
1195 return true;
1196
1197 if (auto *Op = dyn_cast<Operator>(C)) {
1198 // If we already have a constant addrspacecast, it should be safe to cast it
1199 // off.
1200 if (Op->getOpcode() == Instruction::AddrSpaceCast)
1201 return isSafeToCastConstAddrSpace(cast<Constant>(Op->getOperand(0)),
1202 NewAS);
1203
1204 if (Op->getOpcode() == Instruction::IntToPtr &&
1205 Op->getType()->getPointerAddressSpace() == FlatAddrSpace)
1206 return true;
1207 }
1208
1209 return false;
1210}
1211
1212static Value::use_iterator skipToNextUser(Value::use_iterator I,
1213 Value::use_iterator End) {
1214 User *CurUser = I->getUser();
1215 ++I;
1216
1217 while (I != End && I->getUser() == CurUser)
1218 ++I;
1219
1220 return I;
1221}
1222
1223void InferAddressSpacesImpl::performPointerReplacement(
1224 Value *V, Value *NewV, Use &U, ValueToValueMapTy &ValueWithNewAddrSpace,
1225 SmallVectorImpl<Instruction *> &DeadInstructions) const {
1226
1227 User *CurUser = U.getUser();
1228
1229 unsigned AddrSpace = V->getType()->getPointerAddressSpace();
1230 if (replaceIfSimplePointerUse(*TTI, CurUser, AddrSpace, V, NewV))
1231 return;
1232
1233 // Skip if the current user is the new value itself.
1234 if (CurUser == NewV)
1235 return;
1236
1237 auto *CurUserI = dyn_cast<Instruction>(CurUser);
1238 if (!CurUserI || CurUserI->getFunction() != F)
1239 return;
1240
1241 // Handle more complex cases like intrinsics that need to be remangled.
1242 if (auto *MI = dyn_cast<MemIntrinsic>(CurUser)) {
1243 if (!MI->isVolatile() && handleMemIntrinsicPtrUse(MI, V, NewV))
1244 return;
1245 }
1246
1247 if (auto *II = dyn_cast<IntrinsicInst>(CurUser)) {
1248 if (rewriteIntrinsicOperands(II, V, NewV))
1249 return;
1250 }
1251
1252 if (ICmpInst *Cmp = dyn_cast<ICmpInst>(CurUserI)) {
1253 // If we can infer that both pointers are in the same addrspace,
1254 // transform e.g.
1255 // %cmp = icmp eq float* %p, %q
1256 // into
1257 // %cmp = icmp eq float addrspace(3)* %new_p, %new_q
1258
1259 unsigned NewAS = NewV->getType()->getPointerAddressSpace();
1260 int SrcIdx = U.getOperandNo();
1261 int OtherIdx = (SrcIdx == 0) ? 1 : 0;
1262 Value *OtherSrc = Cmp->getOperand(OtherIdx);
1263
1264 if (Value *OtherNewV = ValueWithNewAddrSpace.lookup(OtherSrc)) {
1265 if (OtherNewV->getType()->getPointerAddressSpace() == NewAS) {
1266 Cmp->setOperand(OtherIdx, OtherNewV);
1267 Cmp->setOperand(SrcIdx, NewV);
1268 return;
1269 }
1270 }
1271
1272 // Even if the type mismatches, we can cast the constant.
1273 if (auto *KOtherSrc = dyn_cast<Constant>(OtherSrc)) {
1274 if (isSafeToCastConstAddrSpace(KOtherSrc, NewAS)) {
1275 Cmp->setOperand(SrcIdx, NewV);
1276 Cmp->setOperand(OtherIdx, ConstantExpr::getAddrSpaceCast(
1277 KOtherSrc, NewV->getType()));
1278 return;
1279 }
1280 }
1281 }
1282
1283 if (AddrSpaceCastInst *ASC = dyn_cast<AddrSpaceCastInst>(CurUserI)) {
1284 unsigned NewAS = NewV->getType()->getPointerAddressSpace();
1285 if (ASC->getDestAddressSpace() == NewAS) {
1286 ASC->replaceAllUsesWith(NewV);
1287 DeadInstructions.push_back(ASC);
1288 return;
1289 }
1290 }
1291
1292 // Otherwise, replaces the use with flat(NewV).
1293 if (isa<Instruction>(V) || isa<Instruction>(NewV)) {
1294 // Don't create a copy of the original addrspacecast.
1295 if (U == V && isa<AddrSpaceCastInst>(V))
1296 return;
1297
1298 // Insert the addrspacecast after NewV.
1299 BasicBlock::iterator InsertPos;
1300 if (Instruction *NewVInst = dyn_cast<Instruction>(NewV))
1301 InsertPos = std::next(NewVInst->getIterator());
1302 else
1303 InsertPos = std::next(cast<Instruction>(V)->getIterator());
1304
1305 while (isa<PHINode>(InsertPos))
1306 ++InsertPos;
1307 // This instruction may contain multiple uses of V, update them all.
1308 CurUser->replaceUsesOfWith(
1309 V, new AddrSpaceCastInst(NewV, V->getType(), "", InsertPos));
1310 } else {
1311 CurUserI->replaceUsesOfWith(
1312 V, ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV), V->getType()));
1313 }
1314}
1315
1316bool InferAddressSpacesImpl::rewriteWithNewAddressSpaces(
1317 ArrayRef<WeakTrackingVH> Postorder,
1318 const ValueToAddrSpaceMapTy &InferredAddrSpace,
1319 const PredicatedAddrSpaceMapTy &PredicatedAS) const {
1320 // For each address expression to be modified, creates a clone of it with its
1321 // pointer operands converted to the new address space. Since the pointer
1322 // operands are converted, the clone is naturally in the new address space by
1323 // construction.
1324 ValueToValueMapTy ValueWithNewAddrSpace;
1325 SmallVector<const Use *, 32> PoisonUsesToFix;
1326 for (Value *V : Postorder) {
1327 unsigned NewAddrSpace = InferredAddrSpace.lookup(V);
1328
1329 // In some degenerate cases (e.g. invalid IR in unreachable code), we may
1330 // not even infer the value to have its original address space.
1331 if (NewAddrSpace == UninitializedAddressSpace)
1332 continue;
1333
1334 if (V->getType()->getPointerAddressSpace() != NewAddrSpace) {
1335 Value *New =
1336 cloneValueWithNewAddressSpace(V, NewAddrSpace, ValueWithNewAddrSpace,
1337 PredicatedAS, &PoisonUsesToFix);
1338 if (New)
1339 ValueWithNewAddrSpace[V] = New;
1340 }
1341 }
1342
1343 if (ValueWithNewAddrSpace.empty())
1344 return false;
1345
1346 // Fixes all the poison uses generated by cloneInstructionWithNewAddressSpace.
1347 for (const Use *PoisonUse : PoisonUsesToFix) {
1348 User *V = PoisonUse->getUser();
1349 User *NewV = cast_or_null<User>(ValueWithNewAddrSpace.lookup(V));
1350 if (!NewV)
1351 continue;
1352
1353 unsigned OperandNo = PoisonUse->getOperandNo();
1354 assert(isa<PoisonValue>(NewV->getOperand(OperandNo)));
1355 NewV->setOperand(OperandNo, ValueWithNewAddrSpace.lookup(PoisonUse->get()));
1356 }
1357
1358 SmallVector<Instruction *, 16> DeadInstructions;
1359 ValueToValueMapTy VMap;
1360 ValueMapper VMapper(VMap, RF_NoModuleLevelChanges | RF_IgnoreMissingLocals);
1361
1362 // Replaces the uses of the old address expressions with the new ones.
1363 for (const WeakTrackingVH &WVH : Postorder) {
1364 assert(WVH && "value was unexpectedly deleted");
1365 Value *V = WVH;
1366 Value *NewV = ValueWithNewAddrSpace.lookup(V);
1367 if (NewV == nullptr)
1368 continue;
1369
1370 LLVM_DEBUG(dbgs() << "Replacing the uses of " << *V << "\n with\n "
1371 << *NewV << '\n');
1372
1373 if (Constant *C = dyn_cast<Constant>(V)) {
1374 Constant *Replace =
1375 ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV), C->getType());
1376 if (C != Replace) {
1377 LLVM_DEBUG(dbgs() << "Inserting replacement const cast: " << Replace
1378 << ": " << *Replace << '\n');
1379 SmallVector<User *, 16> WorkList;
1380 for (User *U : make_early_inc_range(C->users())) {
1381 if (auto *I = dyn_cast<Instruction>(U)) {
1382 if (I->getFunction() == F)
1383 I->replaceUsesOfWith(C, Replace);
1384 } else {
1385 WorkList.append(U->user_begin(), U->user_end());
1386 }
1387 }
1388 if (!WorkList.empty()) {
1389 VMap[C] = Replace;
1390 DenseSet<User *> Visited{WorkList.begin(), WorkList.end()};
1391 while (!WorkList.empty()) {
1392 User *U = WorkList.pop_back_val();
1393 if (auto *I = dyn_cast<Instruction>(U)) {
1394 if (I->getFunction() == F)
1395 VMapper.remapInstruction(*I);
1396 continue;
1397 }
1398 for (User *U2 : U->users())
1399 if (Visited.insert(U2).second)
1400 WorkList.push_back(U2);
1401 }
1402 }
1403 V = Replace;
1404 }
1405 }
1406
1407 Value::use_iterator I, E, Next;
1408 for (I = V->use_begin(), E = V->use_end(); I != E;) {
1409 Use &U = *I;
1410
1411 // Some users may see the same pointer operand in multiple operands. Skip
1412 // to the next instruction.
1413 I = skipToNextUser(I, E);
1414
1415 performPointerReplacement(V, NewV, U, ValueWithNewAddrSpace,
1416 DeadInstructions);
1417 }
1418
1419 if (V->use_empty()) {
1420 if (Instruction *I = dyn_cast<Instruction>(V))
1421 DeadInstructions.push_back(I);
1422 }
1423 }
1424
1425 for (Instruction *I : DeadInstructions)
1426 I->eraseFromParent();
1427
1428 return true;
1429}
1430
1431bool InferAddressSpaces::runOnFunction(Function &F) {
1432 if (skipFunction(F))
1433 return false;
1434
1435 auto *DTWP = getAnalysisIfAvailable<DominatorTreeWrapperPass>();
1436 DominatorTree *DT = DTWP ? &DTWP->getDomTree() : nullptr;
1437 return InferAddressSpacesImpl(
1438 getAnalysis<AssumptionCacheTracker>().getAssumptionCache(F), DT,
1439 &getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F),
1440 FlatAddrSpace)
1441 .run(F);
1442}
1443
1444FunctionPass *llvm::createInferAddressSpacesPass(unsigned AddressSpace) {
1445 return new InferAddressSpaces(AddressSpace);
1446}
1447
1448InferAddressSpacesPass::InferAddressSpacesPass()
1449 : FlatAddrSpace(UninitializedAddressSpace) {}
1450InferAddressSpacesPass::InferAddressSpacesPass(unsigned AddressSpace)
1451 : FlatAddrSpace(AddressSpace) {}
1452
1453PreservedAnalyses InferAddressSpacesPass::run(Function &F,
1454 FunctionAnalysisManager &AM) {
1455 bool Changed =
1456 InferAddressSpacesImpl(AM.getResult<AssumptionAnalysis>(F),
1457 AM.getCachedResult<DominatorTreeAnalysis>(F),
1458 &AM.getResult<TargetIRAnalysis>(F), FlatAddrSpace)
1459 .run(F);
1460 if (Changed) {
1461 PreservedAnalyses PA;
1462 PA.preserveSet<CFGAnalyses>();
1463 PA.preserve<DominatorTreeAnalysis>();
1464 return PA;
1465 }
1466 return PreservedAnalyses::all();
1467}
assert(UImm &&(UImm !=~static_cast< T >(0)) &&"Invalid immediate!")
Rewrite undef for PHI
MachineBasicBlock MachineBasicBlock::iterator DebugLoc DL
Expand Atomic instructions
static GCRegistry::Add< OcamlGC > B("ocaml", "ocaml 3.10-compatible GC")
This file contains the declarations for the subclasses of Constant, which represent the different fla...
return RetTy
Returns the sub type a function will return at a given Idx Should correspond to the result type of an ExtractValue instruction executed with just that one unsigned Idx
This file defines the DenseMap class.
This file defines the DenseSet and SmallDenseSet classes.
bool End
Definition: ELF_riscv.cpp:480
Hexagon Common GEP
IRTranslator LLVM IR MI
This header defines various interfaces for pass management in LLVM.
This defines the Use class.
static bool replaceIfSimplePointerUse(const TargetTransformInfo &TTI, User *Inst, unsigned AddrSpace, Value *OldV, Value *NewV)
If OldV is used as the pointer operand of a compatible memory operation Inst, replaces the pointer op...
static bool replaceOperandIfSame(Instruction *Inst, unsigned OpIdx, Value *OldVal, Value *NewVal)
Replace operand OpIdx in Inst, if the value is the same as OldVal with NewVal.
static cl::opt< bool > AssumeDefaultIsFlatAddressSpace("assume-default-is-flat-addrspace", cl::init(false), cl::ReallyHidden, cl::desc("The default address space is assumed as the flat address space. " "This is mainly for test purpose."))
static bool isNoopPtrIntCastPair(const Operator *I2P, const DataLayout &DL, const TargetTransformInfo *TTI)
static bool isAddressExpression(const Value &V, const DataLayout &DL, const TargetTransformInfo *TTI)
static bool handleMemIntrinsicPtrUse(MemIntrinsic *MI, Value *OldV, Value *NewV)
Update memory intrinsic uses that require more complex processing than simple memory instructions.
Infer address spaces
static SmallVector< Value *, 2 > getPointerOperands(const Value &V, const DataLayout &DL, const TargetTransformInfo *TTI)
static Value * operandWithNewAddressSpaceOrCreatePoison(const Use &OperandUse, unsigned NewAddrSpace, const ValueToValueMapTy &ValueWithNewAddrSpace, const PredicatedAddrSpaceMapTy &PredicatedAS, SmallVectorImpl< const Use * > *PoisonUsesToFix)
static Value * cloneConstantExprWithNewAddressSpace(ConstantExpr *CE, unsigned NewAddrSpace, const ValueToValueMapTy &ValueWithNewAddrSpace, const DataLayout *DL, const TargetTransformInfo *TTI)
static Value::use_iterator skipToNextUser(Value::use_iterator I, Value::use_iterator End)
Infer address static false Type * getPtrOrVecOfPtrsWithNewAS(Type *Ty, unsigned NewAddrSpace)
static bool replaceSimplePointerUse(const TargetTransformInfo &TTI, InstrType *MemInstr, unsigned AddrSpace, Value *OldV, Value *NewV)
#define DEBUG_TYPE
static const unsigned UninitializedAddressSpace
#define F(x, y, z)
Definition: MD5.cpp:55
#define I(x, y, z)
Definition: MD5.cpp:58
MachineInstr unsigned OpIdx
uint64_t IntrinsicInst * II
#define INITIALIZE_PASS_DEPENDENCY(depName)
Definition: PassSupport.h:42
#define INITIALIZE_PASS_END(passName, arg, name, cfg, analysis)
Definition: PassSupport.h:44
#define INITIALIZE_PASS_BEGIN(passName, arg, name, cfg, analysis)
Definition: PassSupport.h:39
This file implements a set that has insertion order iteration characteristics.
This file defines the SmallVector class.
#define LLVM_DEBUG(...)
Definition: Debug.h:119
This pass exposes codegen information to IR-level passes.
static std::optional< unsigned > getOpcode(ArrayRef< VPValue * > Values)
Returns the opcode of Values or ~0 if they do not all agree.
Definition: VPlanSLP.cpp:247
This class represents a conversion between pointers from one address space to another.
A container for analyses that lazily runs them and caches their results.
Definition: PassManager.h:255
PassT::Result * getCachedResult(IRUnitT &IR) const
Get the cached result of an analysis pass for a given IR unit.
Definition: PassManager.h:431
PassT::Result & getResult(IRUnitT &IR, ExtraArgTs... ExtraArgs)
Get the result of an analysis pass for a given IR unit.
Definition: PassManager.h:412
Represent the analysis usage information of a pass.
AnalysisUsage & addRequired()
AnalysisUsage & addPreserved()
Add the specified Pass class to the set of analyses preserved by this pass.
LLVM_ABI void setPreservesCFG()
This function should be called by the pass iff it does not add or remove basic blocks from the function or modify terminator instructions in any way.
Definition: Pass.cpp:270
This class represents an incoming formal argument to a Function.
Definition: Argument.h:32
ArrayRef - Represent a constant reference to an array (0 or more elements consecutively in memory),...
Definition: ArrayRef.h:41
A function analysis which provides an AssumptionCache.
An immutable pass that tracks lazily created AssumptionCache objects.
A cache of @llvm.assume calls within a function.
InstListType::iterator iterator
Instruction iterators...
Definition: BasicBlock.h:170
This class represents a no-op cast from one type to another.
Represents analyses that only rely on functions' control flow.
Definition: Analysis.h:73
Value * getArgOperand(unsigned i) const
Definition: InstrTypes.h:1292
This class represents a function call, abstracting a target machine's calling convention.
static LLVM_ABI bool isNoopCast(Instruction::CastOps Opcode, Type *SrcTy, Type *DstTy, const DataLayout &DL)
A no-op cast is one that can be effected without changing any bits.
A constant value that is initialized with an expression using other constant values.
Definition: Constants.h:1120
static LLVM_ABI Constant * getAddrSpaceCast(Constant *C, Type *Ty, bool OnlyIfReduced=false)
Definition: Constants.cpp:2340
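For illustration, this is the kind of constant cast a frontend emits to give a specific-address-space global a generic alias (SharedGlobal and Ctx are placeholders, not names from this file):
  Constant *GenericAlias = ConstantExpr::getAddrSpaceCast(
      SharedGlobal, PointerType::get(Ctx, /*AddrSpace=*/0));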
static LLVM_ABI Constant * getBitCast(Constant *C, Type *Ty, bool OnlyIfReduced=false)
Definition: Constants.cpp:2328
This is an important base class in LLVM.
Definition: Constant.h:43
This class represents an Operation in the Expression.
A parsed version of the target data layout string, and methods for querying it.
Definition: DataLayout.h:63
Implements a dense probed hash-table based set.
Definition: DenseSet.h:263
Analysis pass which computes a DominatorTree.
Definition: Dominators.h:284
Legacy analysis pass which computes a DominatorTree.
Definition: Dominators.h:322
Concrete subclass of DominatorTreeBase that is used to compute a normal dominator tree.
Definition: Dominators.h:165
FunctionPass class - This class is used to implement most global optimizations.
Definition: Pass.h:314
virtual bool runOnFunction(Function &F)=0
runOnFunction - Virtual method overridden by subclasses to do the per-function processing of the pass.
an instruction for type-safe pointer arithmetic to access elements of arrays and structs
Definition: Instructions.h:949
static GetElementPtrInst * Create(Type *PointeeType, Value *Ptr, ArrayRef< Value * > IdxList, const Twine &NameStr="", InsertPosition InsertBefore=nullptr)
Definition: Instructions.h:973
LLVM_ABI void setIsInBounds(bool b=true)
Set or clear the inbounds flag on this GEP instruction.
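A hedged sketch of cloning a GEP on top of a pointer operand that now lives in the inferred address space; GEP and NewPtr are placeholders for the original instruction and its rewritten operand:
  SmallVector<Value *, 4> Indices(GEP->indices());
  GetElementPtrInst *NewGEP = GetElementPtrInst::Create(
      GEP->getSourceElementType(), NewPtr, Indices, "", GEP->getIterator());
  NewGEP->setIsInBounds(GEP->isInBounds());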
This instruction compares its operands according to the predicate given to the constructor.
This provides a uniform API for creating instructions and inserting them into a basic block: either a...
Definition: IRBuilder.h:2780
const DebugLoc & getDebugLoc() const
Return the debug location for this node as a DebugLoc.
Definition: Instruction.h:513
A wrapper class for inspecting calls to intrinsic functions.
Definition: IntrinsicInst.h:49
This is the common base class for memset/memcpy/memmove.
A Module instance is used to store all the information related to an LLVM module.
Definition: Module.h:67
This is a utility class that provides an abstraction for the common functionality between Instruction...
Definition: Operator.h:33
unsigned getOpcode() const
Return the opcode for this Instruction or ConstantExpr.
Definition: Operator.h:43
void addIncoming(Value *V, BasicBlock *BB)
Add an incoming value to the end of the PHI list.
static unsigned getOperandNumForIncomingValue(unsigned i)
static PHINode * Create(Type *Ty, unsigned NumReservedValues, const Twine &NameStr="", InsertPosition InsertBefore=nullptr)
Constructors - NumReservedValues is a hint for the number of incoming edges that this phi node will h...
static LLVM_ABI PassRegistry * getPassRegistry()
getPassRegistry - Access the global registry object, which is automatically initialized at applicatio...
virtual void getAnalysisUsage(AnalysisUsage &) const
getAnalysisUsage - This function should be overridden by passes that need analysis information to do their job.
Definition: Pass.cpp:112
static LLVM_ABI PoisonValue * get(Type *T)
Static factory methods - Return a 'poison' object of the specified type.
Definition: Constants.cpp:1885
A set of analyses that are preserved following a run of a transformation pass.
Definition: Analysis.h:112
static PreservedAnalyses all()
Construct a special preserved set that preserves all passes.
Definition: Analysis.h:118
PreservedAnalyses & preserveSet()
Mark an analysis set as preserved.
Definition: Analysis.h:151
PreservedAnalyses & preserve()
Mark an analysis as preserved.
Definition: Analysis.h:132
static SelectInst * Create(Value *C, Value *S1, Value *S2, const Twine &NameStr="", InsertPosition InsertBefore=nullptr, Instruction *MDFrom=nullptr)
A vector that has set insertion semantics.
Definition: SetVector.h:59
bool empty() const
Definition: SmallVector.h:82
This class consists of common code factored out of the SmallVector class to reduce code duplication b...
Definition: SmallVector.h:574
void append(ItTy in_start, ItTy in_end)
Add the specified range to the end of the SmallVector.
Definition: SmallVector.h:684
void push_back(const T &Elt)
Definition: SmallVector.h:414
This is a 'vector' (really, a variable-sized array), optimized for the case when the array is small.
Definition: SmallVector.h:1197
Analysis pass providing the TargetTransformInfo.
Wrapper pass for TargetTransformInfo.
This pass provides access to the codegen interfaces that are needed for IR-level transformations.
LLVM_ABI unsigned getAssumedAddrSpace(const Value *V) const
LLVM_ABI bool isNoopAddrSpaceCast(unsigned FromAS, unsigned ToAS) const
LLVM_ABI std::pair< const Value *, unsigned > getPredicatedAddrSpace(const Value *V) const
LLVM_ABI bool collectFlatAddressOperands(SmallVectorImpl< int > &OpIndexes, Intrinsic::ID IID) const
Return any intrinsic address operand indexes which may be rewritten if they use a flat address space ...
LLVM_ABI Value * rewriteIntrinsicWithAddressSpace(IntrinsicInst *II, Value *OldV, Value *NewV) const
Rewrite intrinsic call II such that OldV will be replaced with NewV, which has a different address sp...
LLVM_ABI unsigned getFlatAddressSpace() const
Returns the address space ID for a target's 'flat' address space.
LLVM_ABI bool hasVolatileVariant(Instruction *I, unsigned AddrSpace) const
Return true if the given instruction (assumed to be a memory access instruction) has a volatile varia...
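These TargetTransformInfo hooks are the target-specific knobs such an inference pass relies on. A hedged illustration of querying them (TTI and PtrVal are placeholders; per the TTI documentation, a target without a flat address space reports an invalid ID of -1):
  unsigned FlatAS = TTI.getFlatAddressSpace();
  // Is a cast from shared memory (addrspace 3 on NVPTX) to flat free?
  bool CastIsFree = TTI.isNoopAddrSpaceCast(/*FromAS=*/3, /*ToAS=*/FlatAS);
  // Target-specific knowledge about a particular pointer value.
  unsigned AssumedAS = TTI.getAssumedAddrSpace(PtrVal);
  std::pair<const Value *, unsigned> Pred = TTI.getPredicatedAddrSpace(PtrVal);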
The instances of the Type class are immutable: once they are created, they are never changed.
Definition: Type.h:45
LLVM_ABI unsigned getPointerAddressSpace() const
Get the address space of this pointer or pointer vector type.
A Use represents the edge between a Value definition and its users.
Definition: Use.h:35
User * getUser() const
Returns the User that contains this Use.
Definition: Use.h:61
Value * get() const
Definition: Use.h:55
LLVM_ABI bool replaceUsesOfWith(Value *From, Value *To)
Replace uses of one Value with another.
Definition: User.cpp:21
const Use & getOperandUse(unsigned i) const
Definition: User.h:245
void setOperand(unsigned i, Value *Val)
Definition: User.h:237
Value * getOperand(unsigned i) const
Definition: User.h:232
ValueT lookup(const KeyT &Val) const
lookup - Return the entry for the specified key, or a default constructed value if no such entry exis...
Definition: ValueMap.h:169
bool empty() const
Definition: ValueMap.h:143
Context for (re-)mapping values (and metadata).
Definition: ValueMapper.h:163
LLVM Value Representation.
Definition: Value.h:75
Type * getType() const
All values are typed, get the type of this value.
Definition: Value.h:256
use_iterator_impl< Use > use_iterator
Definition: Value.h:353
LLVM_ABI const Value * stripPointerCasts() const
Strip off pointer casts, all-zero GEPs and address space casts.
Definition: Value.cpp:701
Value handle that is nullable, but tries to track the Value.
Definition: ValueHandle.h:205
std::pair< iterator, bool > insert(const ValueT &V)
Definition: DenseSet.h:194
self_iterator getIterator()
Definition: ilist_node.h:134
#define llvm_unreachable(msg)
Marks that the current location is not supposed to be reachable.
LLVM_ABI Function * getOrInsertDeclaration(Module *M, ID id, ArrayRef< Type * > Tys={})
Look up the Function declaration of the intrinsic id in the Module M.
Definition: Intrinsics.cpp:751
@ ReallyHidden
Definition: CommandLine.h:139
initializer< Ty > init(const Ty &Val)
Definition: CommandLine.h:444
LLVM_ABI bool isValidAssumeForContext(const Instruction *I, const Instruction *CxtI, const DominatorTree *DT=nullptr, bool AllowEphemerals=false)
Return true if it is valid to use the assumptions provided by an assume intrinsic,...
LLVM_ABI bool RecursivelyDeleteTriviallyDeadInstructions(Value *V, const TargetLibraryInfo *TLI=nullptr, MemorySSAUpdater *MSSAU=nullptr, std::function< void(Value *)> AboutToDeleteCallback=std::function< void(Value *)>())
If the specified value is a trivially dead instruction, delete it.
Definition: Local.cpp:533
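After every use of a flat pointer has been redirected to its specific-address-space clone, the old value chain is usually trivially dead. A hedged cleanup sketch (OldV is a placeholder for the replaced value):
  if (auto *OldI = dyn_cast<Instruction>(OldV))
    if (OldI->use_empty())
      RecursivelyDeleteTriviallyDeadInstructions(OldI);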
LLVM_ABI void initializeInferAddressSpacesPass(PassRegistry &)
constexpr from_range_t from_range
iterator_range< early_inc_iterator_impl< detail::IterOfRange< RangeT > > > make_early_inc_range(RangeT &&Range)
Make a range that does early increment to allow mutation of the underlying range without disrupting i...
Definition: STLExtras.h:663
@ RF_IgnoreMissingLocals
If this flag is set, the remapper ignores missing function-local entries (Argument,...
Definition: ValueMapper.h:98
@ RF_NoModuleLevelChanges
If this flag is set, the remapper knows that only local values within a function (such as an instruct...
Definition: ValueMapper.h:80
LLVM_ABI raw_ostream & dbgs()
dbgs() - This returns a reference to a raw_ostream for debugging messages.
Definition: Debug.cpp:207
LLVM_ABI FunctionPass * createInferAddressSpacesPass(unsigned AddressSpace=~0u)
PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM)
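A hedged sketch of driving the pass from the new pass manager (M is a placeholder Module; on the command line the same pass is typically reached with opt -passes=infer-address-spaces):
  FunctionAnalysisManager FAM;
  PassBuilder PB;
  PB.registerFunctionAnalyses(FAM);
  FunctionPassManager FPM;
  FPM.addPass(InferAddressSpacesPass());
  for (Function &F : M)
    if (!F.isDeclaration())
      FPM.run(F, FAM);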