
Static Single Assignment (with relevant examples)

Static Single Assignment was presented in 1988 by Barry K. Rosen, Mark N. Wegman, and F. Kenneth Zadeck.

In compiler design, Static Single Assignment (abbreviated SSA) is a means of structuring the IR (intermediate representation) such that every variable is assigned a value only once and every variable is defined before its use. The prime benefit of SSA is that it simplifies variable properties, which in turn simplifies and improves the results of compiler optimisation algorithms. Some algorithms improved by the application of SSA:

  • Constant Propagation – translation of calculations from runtime to compile time, e.g. the instruction v = 2*7+13 is treated as v = 27.
  • Value Range Propagation – finding the possible range of values a calculation could result in.
  • Dead Code Elimination – removing code that is not reachable and has no effect on the results whatsoever.
  • Strength Reduction – replacing computationally expensive calculations with inexpensive ones.
  • Register Allocation – optimising the use of registers for calculations.

Any code can be converted to SSA form by replacing the target variable of each assignment with a new variable and substituting each use of a variable with the version of that variable reaching that point. Versions are created by splitting the original variables existing in the IR and are represented by the original name with a subscript, so that every variable gets its own version.

Example #1:

Convert the following code segment to SSA form:
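(The original figure is not preserved; the snippet below is a reconstruction consistent with the variable versions described underneath.)

    x = y * z
    s = x + p
    s = s - q
    s = s * 2
    x = s + q
    s = x + y

In SSA form:

    x = y * z
    s = x + p
    s_2 = s - q
    s_3 = s_2 * 2
    x_2 = s_3 + q
    s_4 = x_2 + y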

Here x, y, z, s, p, q are original variables and x_2, s_2, s_3, s_4 are versions of x and s.

Example #2:
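Convert the following code segment to SSA form (again a reconstruction consistent with the description underneath):

    a = b + c
    q = a - d
    q = q * e
    a = q + s
    q = a - b

In SSA form:

    a = b + c
    q = a - d
    q_2 = q * e
    a_2 = q_2 + s
    q_3 = a_2 - b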

Here a, b, c, d, e, q, s are original variables and a_2, q_2, q_3 are versions of a and q.


ENOSUCHBLOG

Understanding static single assignment forms

Oct 23, 2020. Tags: llvm, programming.


With thanks to Niki Carroll, winny, and kurufu for their invaluable proofreading and advice.

By popular demand, I’m doing another LLVM post. This time, it’s static single assignment (or SSA) form, a common feature in the intermediate representations of optimizing compilers.

Like the last one, SSA is a topic in compiler and IR design that I mostly understand but could benefit from some self-guided education on. So here we are.

How to represent a program

At the highest level, a compiler’s job is singular: to turn some source language input into some machine language output . Internally, this breaks down into a sequence of clearly delineated 1 tasks:

  1. Lexing the source into a sequence of tokens
  2. Parsing the token stream into an abstract syntax tree , or AST 2
  3. Validating the AST (e.g., ensuring that all uses of identifiers are consistent with the source language’s scoping and definition rules) 3
  4. Translating the AST into machine code, with all of its complexities (instruction selection, register allocation, frame generation, &c)

In a single-pass compiler, step (4) is monolithic: machine code is generated as the compiler walks the AST, with no revisiting of previously generated code. This is extremely fast (in terms of compiler performance) in exchange for a few significant limitations:

Optimization potential: because machine code is generated in a single pass, it can’t be revisited for optimizations. Single-pass compilers tend to generate extremely slow and conservative machine code.

By way of example: the System V ABI (used by Linux and macOS) defines a special 128-byte region beyond the current stack pointer (%rsp), the “red zone”, that can be used by leaf functions whose stack frames fit within it. This, in turn, saves a few stack management instructions in the function prologue and epilogue.

A single-pass compiler will struggle to take advantage of this ABI-supplied optimization: it needs to emit a stack slot for each automatic variable as they’re visited, and cannot revisit its function prologue for erasure if all variables fit within the red zone.

Language limitations: single-pass compilers struggle with common language design decisions, like allowing use of identifiers before their declaration or definition. For example, the following is valid C++:
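A stand-in for the original snippet (the member names Rect::width and Rect::height come from the discussion below; the rest is illustrative):

    struct Rect {
      // area() calls member functions that are declared later in the class.
      int area() const { return width() * height(); }
      int width() const { return w; }
      int height() const { return h; }
      int w, h;
    };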

C and C++ generally require pre-declaration and/or definition for identifiers, but member function bodies may reference the entire class scope. This will frustrate a single-pass compiler, which expects Rect::width and Rect::height to already exist in some symbol lookup table for call generation.

Consequently, (virtually) all modern compilers are multi-pass .

Pictured: Leeloo Dallas from The Fifth Element holding up her multi-pass.

Multi-pass compilers break the translation phase down even more:

  • The AST is lowered into an intermediate representation , or IR
  • Analyses (or passes) are performed on the IR, refining it according to some optimization profile (code size, performance, &c)
  • The IR is either translated to machine code or lowered to another IR, for further target specialization or optimization 4

So, we want an IR that’s easy to correctly transform and that’s amenable to optimization. Let’s talk about why IRs that have the static single assignment property fill that niche.

At its core, the SSA form of any source program introduces only one new constraint: all variables are assigned (i.e., stored to) exactly once .

By way of example: the following (not actually very helpful) function is not in a valid SSA form with respect to the flags variable:
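A minimal sketch of such a function (the body is illustrative; the conditional O_CREAT store matches the discussion below):

    #include <fcntl.h>
    #include <unistd.h>

    int helpful_open(char *fname) {
      int flags = O_RDONLY;
      if (access(fname, F_OK) != 0) {
        flags |= O_CREAT; /* a second store to flags */
      }
      return open(fname, flags, 0644);
    }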

Why? Because flags is stored to twice: once for initialization, and (potentially) again inside the conditional body.

As programmers, we could rewrite helpful_open to only ever store once to each automatic variable:
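One possible single-store rewrite of the sketch above, duplicating the open call along each path:

    int helpful_open(char *fname) {
      if (access(fname, F_OK) != 0) {
        return open(fname, O_RDONLY | O_CREAT, 0644);
      }
      return open(fname, O_RDONLY, 0644);
    }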

But this is clumsy and repetitive: we essentially need to duplicate every chain of uses that follow any variable that is stored to more than once. That’s not great for readability, maintainability, or code size.

So, we do what we always do: make the compiler do the hard work for us. Fortunately there exists a transformation from every valid program into an equivalent SSA form, conditioned on two simple rules.

Rule #1: Whenever we see a store to an already-stored variable, we replace it with a brand new “version” of that variable.

Using rule #1 and the example above, we can rewrite flags using _N suffixes to indicate versions:
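Sketched in pseudo-C:

    int flags_0 = O_RDONLY;
    if (access(fname, F_OK) != 0) {
      int flags_1 = flags_0 | O_CREAT;
    }
    return open(fname, flags_1, 0644); /* or should it be flags_0? */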

But wait a second: we’ve made a mistake!

  • open(..., flags_1, ...) is incorrect: it unconditionally assigns O_CREAT , which wasn’t in the original function semantics.
  • open(..., flags_0, ...) is also incorrect: it never assigns O_CREAT , and thus is wrong for the same reason.

So, what do we do? We use rule 2!

Rule #2: Whenever we need to choose a variable based on control flow, we use the Phi function (φ) to introduce a new variable based on our choice.

Using our example once more:
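Again in pseudo-C (φ is symbolic here, as explained below):

    int flags_0 = O_RDONLY;
    int flags_1;
    if (access(fname, F_OK) != 0) {
      flags_1 = flags_0 | O_CREAT;
    }
    int flags_2 = φ(flags_0, flags_1);
    return open(fname, flags_2, 0644);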

Our quandary is resolved: open always takes flags_2 , where flags_2 is a fresh SSA variable produced by applying φ to flags_0 and flags_1 .

Observe, too, that φ is a symbolic function: compilers that use SSA forms internally do not emit real φ functions in generated code 5 . φ exists solely to reconcile rule #1 with the existence of control flow.

As such, it’s a little bit silly to talk about SSA forms with C examples (since C and other high-level languages are what we’re translating from in the first place). Let’s dive into how LLVM’s IR actually represents them.

SSA in LLVM

First of all, let’s see what happens when we run our very first helpful_open through clang with no optimizations:
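Abridged, the interesting part of the output looks roughly like this (a reconstruction; the register names follow the prose below, and O_RDONLY is 0 on Linux):

    %flags = alloca i32, align 4
    store i32 0, i32* %flags, align 4
    ; ... a conditional store of O_CREAT into %flags ...
    %3 = load i32, i32* %flags, align 4
    %call = call i32 (i8*, i32, ...) @open(i8* %0, i32 %3)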

(View it on Godbolt.)

So, we call open with %3 , which comes from…a load from an i32* named %flags ? Where’s the φ?

This is something that consistently slips me up when reading LLVM’s IR: only values , not memory, are in SSA form. Because we’ve compiled with optimizations disabled, %flags is just a stack slot that we can store into as many times as we please, and that’s exactly what LLVM has elected to do above.

As such, LLVM’s SSA-based optimizations aren’t all that useful when passed IR that makes direct use of stack slots. We want to maximize our use of SSA variables, whenever possible, to make future optimization passes as effective as possible.

This is where mem2reg comes in:

This file (optimization pass) promotes memory references to be register references. It promotes alloca instructions which only have loads and stores as uses. An alloca is transformed by using dominator frontiers to place phi nodes, then traversing the function in depth-first order to rewrite loads and stores as appropriate. This is just the standard SSA construction algorithm to construct “pruned” SSA form.

(Parenthetical mine.)

mem2reg gets run at -O1 and higher, so let’s do exactly that:

Foiled again! Our stack slots are gone thanks to mem2reg , but LLVM has actually optimized too far : it figured out that our flags value is wholly dependent on the return value of our access call and erased the conditional entirely.

Instead of a φ node, we got this select :
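It looks roughly like this (reconstructed; the condition name and constants are illustrative, with 64 standing in for O_CREAT on Linux):

    %flags.0 = select i1 %tobool, i32 64, i32 0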

which the LLVM Language Reference describes concisely:

The ‘select’ instruction is used to choose one value based on a condition, without IR-level branching.

So we need a better example. Let’s do something that LLVM can’t trivially optimize into a select (or sequence of select s), like adding an else if with a function that we’ve only provided the declaration for:
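For instance (a reconstruction; check_magic is a hypothetical function for which only a declaration is visible):

    int check_magic(const char *fname); /* declaration only */

    int helpful_open(char *fname) {
      int flags = O_RDONLY;
      if (access(fname, F_OK) != 0) {
        flags |= O_CREAT;
      } else if (check_magic(fname)) {
        flags |= O_APPEND;
      }
      return open(fname, flags, 0644);
    }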

That’s more like it! Here’s our magical φ:
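Roughly (reconstructed; the block names are illustrative):

    %flags.0 = phi i32 [ 64, %if.then ], [ %spec.select, %if.else ]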

LLVM’s phi is slightly more complicated than the φ(flags_0, flags_1) that I made up before, but not by much: it takes a list of pairs (two, in this case), with each pair containing a possible value and that value’s originating basic block (which, by construction, is always a predecessor block in the context of the φ node).

The Language Reference backs us up:

The type of the incoming values is specified with the first type field. After this, the ‘phi’ instruction takes a list of pairs as arguments, with one pair for each predecessor basic block of the current block. Only values of first class type may be used as the value arguments to the PHI node. Only labels may be used as the label arguments. There must be no non-phi instructions between the start of a basic block and the PHI instructions: i.e. PHI instructions must be first in a basic block.

Observe, too, that LLVM is still being clever: one of our φ choices is a computed select ( %spec.select ), so LLVM still managed to partially erase the original control flow.

So that’s cool. But there’s a piece of control flow that we’ve conspicuously ignored.

What about loops?
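Consider a loop like this one (a reconstruction; the parameter names line up with the IR values discussed next):

    int sum(int base, int count) {
      for (int i = 0; i < count; i++) {
        base += i;
      }
      return base;
    }

The φs in the resulting IR look roughly like this (names reconstructed; only the shape matters):

    %base.addr.0.lcssa = phi i32 [ %base, %entry ], [ %add, %for.body ]
    %i.07 = phi i32 [ %inc, %for.body ], [ 0, %for.body.preheader ]
    %base.addr.06 = phi i32 [ %add, %for.body ], [ %base, %for.body.preheader ]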

Not one, not two, but three φs! In order of appearance:

Because we supply the loop bounds via count , LLVM has no way to ensure that we actually enter the loop body. Consequently, our very first φ selects between the initial %base and %add . LLVM’s phi syntax helpfully tells us that %base comes from the entry block and %add from the loop, just as we expect. I have no idea why LLVM selected such a hideous name for the resulting value ( %base.addr.0.lcssa ).

Our index variable is initialized once and then updated with each for iteration, so it also needs a φ. Our selections here are %inc (which each body computes from %i.07 ) and the 0 literal (i.e., our initialization value).

Finally, the heart of our loop body: we need to get base , where base is either the initial base value ( %base ) or the value computed as part of the prior loop ( %add ). One last φ gets us there.

The rest of the IR is bookkeeping: we need separate SSA variables to compute the addition ( %add ), increment ( %inc ), and exit check ( %exitcond.not ) with each loop iteration.

So now we know what an SSA form is , and how LLVM represents them 6 . Why should we care?

As I briefly alluded to early in the post, it comes down to optimization potential: the SSA forms of programs are particularly suited to a number of effective optimizations.

Let’s go through a select few of them.

Dead code elimination

One of the simplest things that an optimizing compiler can do is remove code that cannot possibly be executed . This makes the resulting binary smaller (and usually faster, since more of it can fit in the instruction cache).

“Dead” code falls into several categories 7 , but a common one is assignments that cannot affect program behavior, like redundant initialization:
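For example (illustrative):

    int dead_store(void) {
      int x = 100; /* redundant: overwritten before any use */
      x = 200;
      return x;
    }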

Without an SSA form, an optimizing compiler would need to check whether any use of x reaches its original definition ( x = 100 ). Tedious. In SSA form, the impossibility of that is obvious:
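(Reconstructed in the _N notation from above:)

    x_0 = 100  ; never used below, so trivially dead
    x_1 = 200
    return x_1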

And sure enough, LLVM eliminates the initial assignment of 100 entirely:
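(Reconstructed output, compiled at -O1 or higher:)

    define i32 @dead_store() {
      ret i32 200
    }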

Constant propagation

Compilers can also optimize a program by substituting uses of a constant variable for the constant value itself. Let’s take a look at another blob of C:
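(A reconstruction; the shape follows the y, z, and a_{1..4} discussion below:)

    int constants(int x) {
      int y = 7;
      int z = 10;
      int a = x + y;
      a = a + z;
      a = a + y + z;
      a = a - z;
      return a;
    }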

As humans, we can see that y and z are trivially assigned and never modified 8 . For the compiler, however, this is a variant of the reaching definition problem from above: before it can replace y and z with 7 and 10 respectively, it needs to make sure that y and z are never assigned a different value.

Let’s do our SSA reduction:
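(In the same _N notation:)

    y_0 = 7
    z_0 = 10
    a_1 = x_0 + y_0
    a_2 = a_1 + z_0
    a_3 = a_2 + y_0 + z_0
    a_4 = a_3 - z_0
    return a_4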

This is virtually identical to our original form, but with one critical difference: the compiler can now see that every load of y and z is the original assignment. In other words, they’re all safe to replace!

So we’ve gotten rid of a few potential register operations, which is nice. But here’s the really critical part: we’ve set ourselves up for several other optimizations :

Now that we’ve propagated some of our constants, we can do some trivial constant folding : 7 + 10 becomes 17 , and so forth.

In SSA form, it’s trivial to observe that only x and a_{1..4} can affect the program’s behavior. So we can apply our dead code elimination from above and delete y and z entirely!

This is the real magic of an optimizing compiler: each individual optimization is simple and largely independent, but together they produce a virtuous cycle that can be repeated until gains diminish.

One potential virtuous cycle.

Register allocation

Register allocation (alternatively: register scheduling) is less of an optimization itself , and more of an unavoidable problem in compiler engineering: it’s fun to pretend to have access to an infinite number of addressable variables, but the compiler eventually insists that we boil our operations down to a small, fixed set of CPU registers .

The constraints and complexities of register allocation vary by architecture: x86 (prior to AMD64) is notoriously starved for registers 9 (only 8 full general purpose registers, of which 6 might be usable within a function’s scope 10 ), while RISC architectures typically employ larger numbers of registers to compensate for the lack of register-memory operations.

Just as above, reductions to SSA form have both indirect and direct advantages for the register allocator:

Indirectly: elimination of redundant loads and stores reduces the overall pressure on the register allocator, allowing it to avoid expensive spills (i.e., having to temporarily transfer a live register to main memory to accommodate another instruction).

Directly: Compilers have historically lowered φs into copies before register allocation, meaning that register allocators traditionally haven’t benefited from the SSA form itself 11 . There is, however, (semi-)recent research on direct application of SSA forms to both linear and coloring allocators 12 13 .

A concrete example: modern JavaScript engines use JITs to accelerate program evaluation. These JITs frequently use linear register allocators for their acceptable tradeoff between register selection speed (linear, as the name suggests) and acceptable register scheduling. Converting out of SSA form is a timely operation of its own, so linear allocation on the SSA representation itself is appealing in JITs and other contexts where compile time is part of execution time.

There are many things about SSA that I didn’t cover in this post: dominance frontiers , tradeoffs between “pruned” and less optimal SSA forms, and feedback mechanisms between the SSA form of a program and the compiler’s decision to cease optimizing, among others. Each of these could be its own blog post, and maybe will be in the future!

In the sense that each task is conceptually isolated and has well-defined inputs and outputs. Individual compilers have some flexibility with respect to whether they combine or further split the tasks.  ↩

The distinction between an AST and an intermediate representation is hazy: Rust converts their AST to HIR early in the compilation process, and languages can be designed to have ASTs that are amenable to analyses that would otherwise be best done on an IR.  ↩

This can be broken up into lexical validation (e.g. use of an undeclared identifier) and semantic validation (e.g. incorrect initialization of a type).  ↩

This is what LLVM does: LLVM IR is lowered to MIR (not to be confused with Rust’s MIR ), which is subsequently lowered to machine code.  ↩

Not because they can’t: the SSA form of a program can be executed by evaluating φ with concrete control flow.  ↩

We haven’t talked at all about minimal or pruned SSAs, and I don’t plan on doing so in this post. The TL;DR of them: naïve SSA form generation can lead to lots of unnecessary φ nodes, impeding analyses. LLVM (and GCC, and anything else that uses SSAs probably) will attempt to translate any initial SSA form into one with a minimally viable number of φs. For LLVM, this is tied directly to the rest of mem2reg .  ↩

Including removing code that has undefined behavior in it, since “doesn’t run at all” is a valid consequence of invoking UB.  ↩

And are also function scoped, meaning that another translation unit can’t address them.  ↩

x86 makes up for this by not being a load-store architecture : many instructions can pay the price of a memory round-trip in exchange for saving a register.  ↩

Assuming that %esp and %ebp are being used by the compiler to manage the function’s frame.  ↩

LLVM, for example, lowers all φs as one of its very first preparations for register allocation. See this 2009 LLVM Developers’ Meeting talk .  ↩

Wimmer 2010a: “Linear Scan Register Allocation on SSA Form” ( PDF )  ↩

Hack 2005: “Towards Register Allocation for Programs in SSA-form” ( PDF )  ↩

What is the Static Single Assignment Form (SSA) and when to use it?

When coding, we are used to reassigning variables repeatedly. Just take for (int i = 0; i < 100; i++) { } as an example, where the value of i changes a hundred times after its initialization. But what happens when we allow variables to be assigned only once? And why should we even do it?

By applying static single assignment form , or short SSA , each variable is assigned exactly once. This concept is utilized, for instance, in intermediate representations such as in compilers. For achieving SSA, variables get versioned, usually by adding an index to the variable’s name. For example, let’s translate the following lines of code into SSA:
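(The snippet, reconstructed from the description below:)

    a = 1
    a = 2
    b = a

In SSA:

    a_0 = 1
    a_1 = 2
    b_0 = a_1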

The variables a_0, a_1, and b_0 are each assigned a value only once, whereas a is set twice, resulting in two versions of the variable name in SSA.

Why is SSA important?

In the example above, it is easy for the human eye to see that the first assignment a = 1 is not necessary since this value of the variable is never used. For a computer, however, it is not, as it would need to perform further analysis to spot this. But when SSA is applied to the code, even a computer can immediately recognize that a_0 is not used at all.

In this manner, SSA enables different kinds of optimization in compilers. It aids tasks such as eliminating dead code (i.e. code that has no effect on the outcome) or determining when two operations are equivalent in order to replace expensive computations with cheaper, equivalent ones.

What does SSA have to do with Symflower’s core technology?

Similarly to a compiler, Symflower’s symbolic execution needs to traverse the given source code, or respectively, an intermediate representation of it and translate it to something it can work with. By symbolically executing all parts of a program, Symflower can find all relevant paths through the code and, at the same time, the corresponding conditions around how the paths are reached. In order to assemble these constraints, we use the translation of the code into what you might have already guessed: SSA. This way, a constraint solver can efficiently produce values that satisfy the collected requirements.

The results computed by the symbolic execution are program inputs that trigger certain behavior. Read the first blog post of this series on Symflower’s Core Technology to learn more about symbolic execution.

The example we investigated in the first blog post was the following function:

We applied symbolic execution by hand to the problem of finding a division by zero in the last statement of the function. Let’s see how the collected constraints look in SSA.

Please note that the variables for which we want the solver to compute a value are not given a value; they are only defined, so that the assignment can still be done by the solver. Because of this, the SSA variables in Symflower’s symbolic execution have exactly one value only after we query the results from the solver. Until that point, they have at most one . In these pictures, we represent the definition of variables without assignments with the prefix “var”. Therefore, the first assignments of the variables x and y and their corresponding constraints look as follows:

Please check out the blog post on the topic to find out how the paths and their constraints are collected. The result in SSA is the following:

Along each path, every SSA variable is assigned at most once. In the solvable case, each variable has exactly one value as expected.

If you enjoyed learning about SSA and peeking into Symflower’s Core Technology, stay tuned for the next blog posts of this series.

Series: Core Technology

  • What is symbolic execution for software programs?
  • Methods for automated test value generation


13.3 Static Single Assignment

Most of the tree optimizers rely on the data flow information provided by the Static Single Assignment (SSA) form. We implement the SSA form as described in R. Cytron, J. Ferrante, B. Rosen, M. Wegman, and K. Zadeck. Efficiently Computing Static Single Assignment Form and the Control Dependence Graph. ACM Transactions on Programming Languages and Systems, 13(4):451-490, October 1991 .

The SSA form is based on the premise that program variables are assigned in exactly one location in the program. Multiple assignments to the same variable create new versions of that variable. Naturally, actual programs are seldom in SSA form initially because variables tend to be assigned multiple times. The compiler modifies the program representation so that every time a variable is assigned in the code, a new version of the variable is created. Different versions of the same variable are distinguished by subscripting the variable name with its version number. Variables used in the right-hand side of expressions are renamed so that their version number matches that of the most recent assignment.

We represent variable versions using SSA_NAME nodes. The renaming process in tree-ssa.cc wraps every real and virtual operand with an SSA_NAME node which contains the version number and the statement that created the SSA_NAME . Only definitions and virtual definitions may create new SSA_NAME nodes.

Sometimes, flow of control makes it impossible to determine the most recent version of a variable. In these cases, the compiler inserts an artificial definition for that variable called PHI function or PHI node . This new definition merges all the incoming versions of the variable to create a new name for it. For instance,
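(Example reconstructed from the surrounding text:)

    if (...)
      a_1 = 5;
    else if (...)
      a_2 = 2;
    else
      a_3 = 13;

    # a_4 = PHI <a_1, a_2, a_3>
    return a_4;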

Since it is not possible to determine which of the three branches will be taken at runtime, we don’t know which of a_1 , a_2 or a_3 to use at the return statement. So, the SSA renamer creates a new version a_4 which is assigned the result of “merging” a_1 , a_2 and a_3 . Hence, PHI nodes mean “one of these operands. I don’t know which”.

The following functions can be used to examine PHI nodes:

gimple_phi_result (phi)
Returns the SSA_NAME created by PHI node phi (i.e., phi ’s LHS).

gimple_phi_num_args (phi)
Returns the number of arguments in phi . This number is exactly the number of incoming edges to the basic block holding phi .

gimple_phi_arg (phi, i)
Returns the i th argument of phi .

gimple_phi_arg_edge (phi, i)
Returns the incoming edge for the i th argument of phi .

gimple_phi_arg_def (phi, i)
Returns the SSA_NAME for the i th argument of phi .

  • Preserving the SSA form
  • Examining SSA_NAME nodes
  • Walking the dominator tree

13.3.1 Preserving the SSA form

Some optimization passes make changes to the function that invalidate the SSA property. This can happen when a pass has added new symbols or changed the program so that variables that were previously aliased aren’t anymore. Whenever something like this happens, the affected symbols must be renamed into SSA form again. Transformations that emit new code or replicate existing statements will also need to update the SSA form.

Since GCC implements two different SSA forms for register and virtual variables, keeping the SSA form up to date depends on whether you are updating register or virtual names. In both cases, the general idea behind incremental SSA updates is similar: when new SSA names are created, they typically are meant to replace other existing names in the program.

For instance, given the following code:
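(A skeleton consistent with the line numbers referenced below; the statements themselves are elided:)

     1  L1:
     2  x_1 = ...
     3  if (...)
     4    x_10 = ...
     5    use (x_1)
     6    ...
     7  else
     8    x_11 = ...
     9    use (x_1); use (x_7)
    10  endif
    11  use (x_1)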

Suppose that we insert new names x_10 and x_11 (lines 4 and 8).

We want to replace all the uses of x_1 with the new definitions of x_10 and x_11. Note that the only uses that should be replaced are those at lines 5, 9 and 11. Also, the use of x_7 at line 9 should not be replaced (this is why we cannot just mark symbol x for renaming).

Additionally, we may need to insert a PHI node at line 11 because that is a merge point for x_10 and x_11 . So the use of x_1 at line 11 will be replaced with the new PHI node. The insertion of PHI nodes is optional. They are not strictly necessary to preserve the SSA form, and depending on what the caller inserted, they may not even be useful for the optimizers.

Updating the SSA form is a two step process. First, the pass has to identify which names need to be updated and/or which symbols need to be renamed into SSA form for the first time. When new names are introduced to replace existing names in the program, the mapping between the old and the new names is registered by calling register_new_name_mapping (note that if your pass creates new code by duplicating basic blocks, the call to tree_duplicate_bb will set up the necessary mappings automatically).

After the replacement mappings have been registered and new symbols marked for renaming, a call to update_ssa makes the registered changes. This can be done with an explicit call or by creating TODO flags in the tree_opt_pass structure for your pass. There are several TODO flags that control the behavior of update_ssa :

  • TODO_update_ssa . Update the SSA form inserting PHI nodes for newly exposed symbols and virtual names marked for updating. When updating real names, only insert PHI nodes for a real name O_j in blocks reached by all the new and old definitions for O_j . If the iterated dominance frontier for O_j is not pruned, we may end up inserting PHI nodes in blocks that have one or more edges with no incoming definition for O_j . This would lead to uninitialized warnings for O_j ’s symbol.
  • TODO_update_ssa_no_phi . Update the SSA form without inserting any new PHI nodes at all. This is used by passes that have either inserted all the PHI nodes themselves or passes that need only to patch use-def and def-def chains for virtuals (e.g., DCE).

WARNING: If you need to use this flag, chances are that your pass may be doing something wrong. Inserting PHI nodes for an old name where not all edges carry a new replacement may lead to silent codegen errors or spurious uninitialized warnings.

  • TODO_update_ssa_only_virtuals . Passes that update the SSA form on their own may want to delegate the updating of virtual names to the generic updater. Since FUD chains are easier to maintain, this simplifies the work they need to do. NOTE: If this flag is used, any OLD->NEW mappings for real names are explicitly destroyed and only the symbols marked for renaming are processed.

13.3.2 Examining SSA_NAME nodes

The following macros can be used to examine SSA_NAME nodes:

SSA_NAME_DEF_STMT (var)
Returns the statement s that creates the SSA_NAME var . If s is an empty statement (i.e., IS_EMPTY_STMT ( s ) returns true ), it means that the first reference to this variable is a USE or a VUSE.

SSA_NAME_VERSION (var)
Returns the version number of the SSA_NAME object var .

13.3.3 Walking the dominator tree

The function walk_dominator_tree walks the dominator tree for the current CFG calling a set of callback functions defined in struct dom_walk_data in domwalk.h . The callback functions you need to define give you hooks to execute custom code at various points during traversal:

  • Once to initialize any local data needed while processing bb and its children. This local data is pushed into an internal stack which is automatically pushed and popped as the walker traverses the dominator tree.
  • Once before traversing all the statements in the bb .
  • Once for every statement inside bb .
  • Once after traversing all the statements and before recursing into bb ’s dominator children.
  • It then recurses into all the dominator children of bb .
  • After recursing into all the dominator children of bb it can, optionally, traverse every statement in bb again (i.e., repeating steps 2 and 3).
  • Once after walking the statements in bb and bb ’s dominator children. At this stage, the block local data stack is popped.

Lesson 5: Global Analysis & SSA

  • global analysis & optimization
  • static single assignment
  • SSA slides from Todd Mowry at CMU: another presentation of the pseudocode for various algorithms herein
  • Revisiting Out-of-SSA Translation for Correctness, Code Quality, and Efficiency by Boissinot et al., on more sophisticated ways to translate out of SSA form
  • tasks due October 7

Lots of definitions!

  • Reminders: Successors & predecessors. Paths in CFGs.
  • A dominates B iff all paths from the entry to B include A .
  • The dominator tree is a convenient data structure for storing the dominance relationships in an entire function. The recursive children of a given node in a tree are the nodes that that node dominates.
  • A strictly dominates B iff A dominates B and A ≠ B . (Dominance is reflexive, so "strict" dominance just takes that part away.)
  • A immediately dominates B iff A strictly dominates B but A does not strictly dominate any other node that strictly dominates B . (In which case A is B 's direct parent in the dominator tree.)
  • A dominance frontier is the set of nodes that are just "one edge away" from being dominated by a given node. Put differently, A 's dominance frontier contains B iff A does not strictly dominate B , but A does dominate some predecessor of B .
  • Post-dominance is the reverse of dominance. A post-dominates B iff all paths from B to the exit include A . (You can extend the strict version, the immediate version, trees, etc. to post-dominance.)

An algorithm for finding dominators:
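A pseudocode sketch (dom[b] is the set of b's dominators; preds are CFG predecessors):

    dom = {every block -> all blocks}
    dom[entry] = {entry}
    while dom is still changing:
        for vertex in CFG except entry:
            dom[vertex] = {vertex} ∪ ⋂(dom[p] for p in vertex.preds)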

The dom relation will, in the end, map each block to its set of dominators. We initialize it as the "complete" relation, i.e., mapping every block to the set of all blocks. The loop pares down the sets by iterating to convergence.

The running time is O(n²) in the worst case. But there's a trick: if you iterate over the CFG in reverse post-order , and the CFG is well behaved (reducible), it runs in linear time—the outer loop runs a constant number of times.

Natural Loops

Some things about loops:

  • Natural loops are strongly connected components in the CFG with a single entry.
  • Natural loops are formed around backedges , which are edges from A to B where B dominates A .
  • A natural loop is the smallest set of vertices L including A and B such that, for every v in L , either all the predecessors of v are in L or v = B .
  • A language that only has for , while , if , break , continue , etc. can only generate reducible CFGs. You need goto or something to generate irreducible CFGs.

Loop-Invariant Code Motion (LICM)

And finally, loop-invariant code motion (LICM) is an optimization that works on natural loops. It moves code from inside a loop to before the loop, if the computation always does the same thing on every iteration of the loop.

A loop's preheader is its header's unique predecessor. LICM moves code to the preheader. But while natural loops need to have a unique header, the header does not necessarily have a unique predecessor. So it's often convenient to invent an empty preheader block that jumps directly to the header, and then move all the in-edges to the header to point there instead.

LICM needs two ingredients: identifying loop-invariant instructions in the loop body, and deciding when it's safe to move one from the body to the preheader.

To identify loop-invariant instructions:
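A sketch of the marking rule:

    iterate to convergence:
        for every instruction in the loop:
            mark it as loop-invariant iff, for all arguments x, either:
                all reaching definitions of x are outside of the loop, or
                there is exactly one definition, and it is already marked as
                    loop-invariant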

(This determination requires that you already calculated reaching definitions! Presumably using data flow.)

It's safe to move a loop-invariant instruction to the preheader iff:

  • The definition dominates all of its uses, and
  • No other definitions of the same variable exist in the loop, and
  • The instruction dominates all loop exits.

The last criterion is somewhat tricky: it ensures that the computation would have been computed eventually anyway, so it's safe to just do it earlier. But it's not true of loops that may execute zero times, which, when you think about it, rules out most for loops! It's possible to relax this condition if:

  • The assigned-to variable is dead after the loop, and
  • The instruction can't have side effects, including exceptions—generally ruling out division because it might divide by zero. (A thing that you generally need to be careful of in such speculative optimizations that do computations that might not actually be necessary.)

Static Single Assignment (SSA)

You have undoubtedly noticed by now that many of the annoying problems in implementing analyses & optimizations stem from variable name conflicts. Wouldn't it be nice if every assignment in a program used a unique variable name? Of course, people don't write programs that way, so we're out of luck. Right?

Wrong! Many compilers convert programs into static single assignment (SSA) form, which does exactly what it says: it ensures that, globally, every variable has exactly one static assignment location. (Of course, that statement might be executed multiple times, which is why it's not dynamic single assignment.) In Bril terms, we convert a program like this:
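(A small Bril-flavored sketch:)

    @main {
      a: int = const 4;
      a: int = add a a;
      print a;
    }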

Into a program like this, by renaming all the variables:
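(The same sketch, renamed:)

    @main {
      a_1: int = const 4;
      a_2: int = add a_1 a_1;
      print a_2;
    }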

Of course, things will get a little more complicated when there is control flow. And because real machines are not SSA, using separate variables (i.e., memory locations and registers) for everything is bound to be inefficient. The idea in SSA is to convert general programs into SSA form, do all our optimization there, and then convert back to a standard mutating form before we generate backend code.

Just renaming assignments willy-nilly will quickly run into problems. Consider this program:
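(A sketch with a branch; the labels are illustrative:)

    @main(cond: bool) {
    .entry:
      a: int = const 47;
      br cond .left .right;
    .left:
      a: int = add a a;
      jmp .exit;
    .right:
      a: int = mul a a;
      jmp .exit;
    .exit:
      print a;
    }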

If we start renaming all the occurrences of a , everything goes fine until we try to write that last print a . Which "version" of a should it use?

To match the expressiveness of unrestricted programs, SSA adds a new kind of instruction: a ϕ-node . ϕ-nodes are flow-sensitive copy instructions: they get a value from one of several variables, depending on which incoming CFG edge was most recently taken to get to them.

In Bril, a ϕ-node appears as a phi instruction:
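Roughly like this, with value arguments followed by their source labels:

    x: int = phi a b .left .right;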

The phi instruction chooses between any number of variables, and it picks between them based on labels. If the program most recently executed a basic block with the given label, then the phi instruction takes its value from the corresponding variable.

You can write the above program in SSA like this:
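(Continuing the branching sketch from above:)

    @main(cond: bool) {
    .entry:
      a_0: int = const 47;
      br cond .left .right;
    .left:
      a_1: int = add a_0 a_0;
      jmp .exit;
    .right:
      a_2: int = mul a_0 a_0;
      jmp .exit;
    .exit:
      a_3: int = phi a_1 a_2 .left .right;
      print a_3;
    }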

It can also be useful to see how ϕ-nodes crop up in loops.
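A counting loop needs a ϕ at the loop header, because the variable's value may come from the entry block or from the previous iteration (a sketch):

    @main {
    .entry:
      x: int = const 0;
      one: int = const 1;
      limit: int = const 10;
      jmp .loop;
    .loop:
      x_1: int = phi x x_2 .entry .loop;
      x_2: int = add x_1 one;
      cond: bool = lt x_2 limit;
      br cond .loop .exit;
    .exit:
      print x_2;
    }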

(An aside: some recent SSA-form IRs, such as MLIR and Swift's IR , use an alternative to ϕ-nodes called basic block arguments . Instead of making ϕ-nodes look like weird instructions, these IRs bake the need for ϕ-like conditional copies into the structure of the CFG. Basic blocks have named parameters, and whenever you jump to a block, you must provide arguments for those parameters. With ϕ-nodes, a basic block enumerates all the possible sources for a given variable, one for each in-edge in the CFG; with basic block arguments, the sources are distributed to the "other end" of the CFG edge. Basic block arguments are a nice alternative for "SSA-native" IRs because they avoid messy problems that arise when needing to treat ϕ-nodes differently from every other kind of instruction.)

Bril in SSA

Bril has an SSA extension . It adds support for a phi instruction. Beyond that, SSA form is just a restriction on the normal expressiveness of Bril—if you solemnly promise never to assign statically to the same variable twice, you are writing "SSA Bril."

The reference interpreter has built-in support for phi , so you can execute your SSA-form Bril programs without fuss.

The SSA Philosophy

In addition to a language form, SSA is also a philosophy! It can fundamentally change the way you think about programs. In the SSA philosophy:

  • definitions == variables
  • instructions == values
  • arguments == data flow graph edges

In LLVM, for example, instructions do not refer to argument variables by name—an argument is a pointer to the defining instruction.

Converting to SSA

To convert to SSA, we want to insert ϕ-nodes whenever there are distinct paths containing distinct definitions of a variable. We don't need ϕ-nodes in places that are dominated by a definition of the variable. So what's a way to know when control reachable from a definition is not dominated by that definition? The dominance frontier!

We do it in two steps. First, insert ϕ-nodes:
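(A pseudocode sketch:)

    for v in vars:
        for d in Defs[v]:          # blocks where v is assigned
            for block in DF[d]:    # dominance frontier
                add a ϕ-node for v to block, unless we have already
                add block to Defs[v] (it now writes to v!), unless it's already in there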

Then, rename variables:
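(A pseudocode sketch; stack[v] holds the current names for each original variable v:)

    def rename(block):
        for instr in block:
            replace each argument to instr with the top of stack[old name]
            replace instr's destination with a fresh name
            push that new name onto stack[old name]
        for s in block's successors:
            for p in s's ϕ-nodes:
                assuming p is for a variable v, make it read from the top of stack[v]
        for b in blocks immediately dominated by block:
            rename(b)              # children in the dominance tree
        pop everything we just pushed onto the stacks

    rename(entry)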

Converting from SSA

Eventually, we need to convert out of SSA form to generate efficient code for real machines that don't have phi -nodes and do have finite space for variable storage.

The basic algorithm is pretty straightforward. If you have a ϕ-node:
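(Sketched; v, x, y, and the labels are illustrative:)

    v: int = phi x y .one .two;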

Then there must be assignments to x and y (recursively) preceding this statement in the CFG. The paths from x to the phi -containing block and from y to the same block must "converge" at that block. So insert code into the phi -containing block's immediate predecessors along each of those two paths: one that does v = id x and one that does v = id y . Then you can delete the phi instruction.

This basic approach can introduce some redundant copying. (Take a look at the code it generates after you implement it!) Non-SSA copy propagation optimization can work well as a post-processing step. For a more extensive take on how to translate out of SSA efficiently, see “Revisiting Out-of-SSA Translation for Correctness, Code Quality, and Efficiency” by Boissinot et al.

  • Find dominators for a function.
  • Construct the dominance tree.
  • Compute the dominance frontier.
  • One thing to watch out for: a tricky part of the translation from the pseudocode to the real world is dealing with variables that are undefined along some paths.
  • You will want to make sure the output of your "to SSA" pass is actually in SSA form. There's a really simple is_ssa.py script that can check that for you.
  • You'll also want to make sure that programs do the same thing when converted to SSA form and back again. Fortunately, brili supports the phi instruction, so you can interpret your SSA-form programs if you want to check the midpoint of that round trip.
  • For bonus "points," implement global value numbering for SSA-form Bril code.

static single assignment for functional programmers

Friends, I have an admission to make: I am a functional programmer.

By that I mean that lambda is my tribe. And you know how tribalism works: when two tribes meet, it's usually to argue and not to communicate.

So it is that I've been well-indoctrinated in the lore of the lambda calculus, continuation-passing style intermediate languages, closure conversion and lambda lifting. But when it comes to ideas from outside our tribe, we in the lambda tribe tune out, generally.

At the last Scheme workshop in Montreal, some poor fellow had the temerity to mention SSA on stage. (SSA is what the "machine tribe" uses as an intermediate language in their compilers.) I don't think the "A" was out of his mouth before Olin Shivers' booming drawl started, "d'you mean CPS?" (CPS is what "we" use.) There were titters from the audience, myself included.

But there are valuable lessons to be learned from SSA language and the optimizations that it enables, come though it may from another tribe. In this article I'd like to look at what the essence of SSA is. To do so, I'll start with explaining the functional programming story on intermediate languages, as many of my readers are not of my tribe. Then we'll use that as a fixed point against which SSA may be compared.

the lambda tribe in two sentences

In the beginning was the lambda . God saw it, realized he didn't need anything else, and stopped there.

Hey, it's true, right? The lambda-calculus is great because of its expressivity and precision. In that sense this evaluation is a utilitarian one: the lambda-calculus allows us to reason about computation with precision, so it is worth keeping around.

I don't think that Church was thinking about digital computers when he came up with the lambda-calculus back in the 1930s, given that digital computers didn't exist yet. Nor was McCarthy thinking about computers when he came up with Lisp in the 1960s. But one of McCarthy's students did hack it up, and that's still where we are now: translating between the language of the lambda-calculus and machine language.

This translation process is compilation, of course. For the first 20 years or so of practicing computer science, compilers (and indeed, languages) were very ad-hoc. In the beginning they didn't exist, and you just wrote machine code directly, using switches on a control panel or other such things, and later, assembly language. But eventually folks figured out parsing, and you get the first compilers for high-level languages.

I've written before about C not being a high-level assembly language , but back then, FORTRAN was indeed such a language. There wasn't much between the parser and the code generator. Everyone knows how good compilers work these days: you parse, you optimize, then you generate code. The medium in which you do your work is your intermediate language . A good intermediate language should be simple, so your optimizer can be simple; expressive, so that you can easily produce it from your source program; and utilitarian, in that its structure enables the kinds of optimizations that you want to make.

The lambda tribe already had a good intermediate language in this regard, in the form of the lambda-calculus itself. In many ways, solving a logic problem in the lambda-calculus is a lot like optimizing a program. Copy propagation is beta-reduction . Inlining is copy propagation extended to lambda expressions. Eta-conversion of continuations eliminates "forwarding blocks" -- basic blocks which have no statements, and just jump to some other continuation. Eta-conversion of functions eliminates functional trampolines.

continuation-passing style

But I'm getting ahead of myself. In the lambda tribe, we don't actually program in the lambda-calculus, you see. If you read any of our papers there's always a section in the beginning that defines the language we're working in, and then defines its semantics as a translation to the lambda-calculus.

This translation is always possible, for any programming language, and indeed Peter Landin did so in 1965 for Algol. Landin's original translations used his "J operator" to capture continuations, allowing a more direct translation of code into the lambda-calculus.

I wrote more on Landin, letrec, and the Y combinator a couple of years ago, but I wanted to mention one recent paper that takes a modern look at J, A Rational Deconstruction of Landin's J Operator . This paper is co-authored by V8 hacker Kevin Millikin, and cites work by V8 hackers Mads Sig Ager and Lasse R. Nielsen. Indeed all three seem to have had the privilege of having Olivier Danvy as PhD advisor. That's my tribe!

Anyway, J was useful in the context of Landin's abstract SECD machine, used to investigate the semantics of programs and programming languages. However it does not help the implementor of a compiler to a normal machine, and intermediate languages are all about utility. The answer to this problem, for functional programmers, was to convert the source program to what is known as continuation-passing style (CPS).

With CPS, the program is turned inside out. So instead of (+ 1 (f (+ 2 3))) , you would have:
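(A rough ML-ish sketch, with the continuation k passed explicitly:)

    fun k =>
      add (2, 3, fun v0 =>
        f (v0, fun v1 =>
          add (1, v1, k)))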

Usually the outer lambda is left off, as it is implicit. Every call in a CPS program is a tail call, for the purposes of the lambda calculus. Continuations are explicitly represented as lambda expressions. Every function call or primitive operation takes the continuation as an argument. Papers in this field usually use Church's original lambda-calculus notation instead of the ML-like notation I give here. Continuations introduced by a CPS transformation are usually marked as such, so that they can be efficiently compiled later, without any flow analysis.

Expressing a program in CPS has a number of practical advantages:

CPS is capable of expressing higher-order control-flow, for languages in which functions may be passed as values.

All temporary values are named. Unreferenced names represent dead code, or code compiled for effect only. Referenced names are the natural input to a register allocator.

Continuations correspond to named basic blocks. Their names in the source code correspond to a natural flow analysis simply by tracing the definitions and uses of the names. Flow analysis enables more optimizations, like code motion.

Full beta-reduction is sound on terms of this type, even in call-by-value languages.

Depending on how you implement your CPS language, you can also attach notes to different continuations to help your graph reduce further: this continuation is an effect context (because its formal parameter is unreferenced in its body, or because you knew that when you made it), so its caller can be processed for effect and not for value; this one is of variable arity (e.g. can receive one or two values), so we could jump directly to the right handler, depending on what we want; etc. Guile's compiler is not in CPS right now, but I am thinking of rewriting it for this reason, to allow more transparent handling of control flow.

Note that nowhere in there did I mention Scheme's call-with-current-continuation ! For me, the utility of CPS is in its explicit naming of temporaries, continuations, and its affordances for optimization. Call/cc is a rare construct in Guile Scheme, that could be optimized better with CPS, but one that I don't care a whole lot about, because it's often impossible to prove that the continuation doesn't escape, and in that case you're on the slow path anyway.

So that's CPS. Continuations compile to jumps within a function, and functions get compiled to closures, or labels for toplevel functions. The best reference I can give on it is Andrew Kennedy's 2007 paper, Compiling With Continuations, Continued . CWCC is a really fantastic paper and I highly recommend it.

a digression: anf

CPS fell out of favor in the nineties, in favor of what became known as Administrative Normal Form, or ANF. ANF is like CPS except instead of naming the continuations, the code is left in so-called "direct-style", in which the continuations are implicit. So my previous example would look like this:
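(The same computation, sketched in direct style:)

    let v0 = 2 + 3 in
    let v1 = f v0 in
    1 + v1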

There are ANF correspondences for CPS reductions, like the beta-rule. See the Essence of Compiling With Continuations paper, which introduced ANF and sparked the decline of the original CPS formulation, for more.

This CPS-vs-ANF discussion still goes on, even now in 2011. In particular, Kennedy's CWCC paper is quite compelling. But the debate has been largely mooted by the advances made by the machine tribe, as enabled by their SSA intermediate language.

the machine tribe in two sentences

In the beginning was the Segmentation fault (core dumped)

(Just kidding, guys & ladies!)

Instead of compiling abstract ideas of naming and control to existing hardware, as the lambda tribe did, the machine tribe took as a given the hardware available and tried to expose the capabilities of the machine to the programmer.

The machine tribe doesn't roll with closures, continuations, or tail calls. But they do have loops, and they crunch a lot of data. The most important thing for a compiler of a machine-tribe language like C is to produce efficient machine code for loops.

Clearly, I'm making some simplifications here. But if you look at a machine-tribe language like Java, you will be dealing with many control-flow constructs that are built-in to the language ( for , while , etc.) instead of layered on top of recursion like loops in Scheme. What this means is that large, important parts of your program have already collapsed to a first-order control-flow graph problem. Layering other optimizations on top of this like inlining ( the mother of all optimizations ) only expands this first-order flow graph. More on "first-order" later.

So! After decades of struggling with this problem, after having abstracted away from assembly language to three-address register transfer language, finally the machine folks came up with something truly great: static single-assignment (SSA) form. The arc here is away from the machine, and towards more abstraction, in order to be able to optimize better, and thus generate better code.

It's precisely for this utilitarian reason that SSA was developed. Consider one of the earliest SSA papers, Global Value Numbers and Redundant Comparisons by Rosen, Wegman, and Zadeck. Rosen et al were concerned about being able to move invariant expressions out of loops, extending the "value numbering" technique to operate across basic blocks. But the assignment-oriented intermediate languages that they had been using were getting in the way of code motion.

To fix this issue, Rosen et al switched from the assignment-oriented model of the machine tribe to the binding-oriented model of the lambda tribe.

In SSA, variables are never mutated (assigned); they are bound once and then left alone. Assignment to a source-program variable produces a new binding in the SSA intermediate language.

For the following function:
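(Reconstructed; the clamp signature comes from the discussion below:)

    function clamp (x, lower, upper) {
      if (x < lower)
        x = lower;
      else if (x > upper)
        x = upper;
      return x;
    }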

The SSA translation would be:
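(Again reconstructed; the block names b0, b1, ... and exit match the dominance discussion below:)

    function clamp (x, lower, upper):
      b0:
        if x < lower goto b1 else goto b2
      b1:
        x1 := lower
        goto exit
      b2:
        if x > upper goto b3 else goto exit
      b3:
        x2 := upper
        goto exit
      exit:
        x3 := phi(x, x1, x2)
        return x3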

SSA form breaks down a procedure into basic blocks, each of which ends with a branch to another block, either conditional or unconditional. Usually temporary values receive their own names as well, as it facilitates optimization.

phony functions

The funny thing about SSA is the last bit, the "phi" function. Phi functions are placed at control-flow joins. In our case, the value of x may proceed from the argument or from the assignment in the first or second if statement. The phi function indicates that.

But you know, lambda tribe, I didn't really get what this meant. What is a phi function? It doesn't help to consider where the name comes from, that the original IBM compiler hackers put in a "phony" function to merge the various values, but considered that "phi" was a better name if they wanted to be taken seriously by journal editors.

Maybe phi functions are intuitive to the machine tribe; I don't know. I doubt it. But fortunately there is another interpretation: that each basic block is a function, and that a phi function indicates that the basic block has an argument.
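(Sketched, with each block as a local function and exit taking an argument where the phi used to be:)

    function clamp (x, lower, upper):
      letrec b0()    = if x < lower then b1() else b2()
             b1()    = exit(lower)
             b2()    = if x > upper then b3() else exit(x)
             b3()    = exit(upper)
             exit(x3) = return x3
      in b0()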

Here I have represented basic blocks as named functions instead of labels. Instead of phi functions, we allow the blocks to take a number of arguments; the call sites determine the values that the phi function may take on.

Note that all calls to blocks are tail calls. Reminds you of CPS, doesn't it? For more, see Richard Kelsey's classic paper, A Correspondence Between Continuation-Passing Style and Static Single Assignment Form , or my earlier article about Landin, Steele, letrec, and labels .

But for a shorter, readable, entertaining piece, see Appel's SSA is Functional Programming . I agree with Appel that we in the lambda-tribe get too hung up on our formalisms, when sometimes the right thing to do is draw a flow-graph.

so what's the big deal?

If it were only this, what I've said up to now, then SSA would not be as interesting as CPS, or even ANF. But SSA is not just about binding, it is also about control flow. In order to place your phi functions correctly, you need to build what is called a dominator tree . One basic block is said to dominate another if all control paths must pass through the first before reaching the second.

For example, the entry block always dominates the entirety of a function. In our example above, b0 also dominates every other block. However though b1 does branch to exit , it does not dominate it, as exit may be reached on other paths.

It turns out that you need to place phi functions wherever a definition of a variable meets a use of the variable that is not strictly dominated by the definition. In our case, that means we place a phi node on exit . The dominator tree is a precise, efficient control-flow analysis that allows us to answer questions like this one (where do I place a phi node?).

For more on SSA and dominators, see the very readable 1991 paper by Cytron, Ferrante, Rosen, Wegman, and Zadeck, Efficiently Computing Static Single Assignment Form and the Control Dependence Graph .

Typical implementations of SSA embed in each basic block pointers to the predecessors and successors of the blocks, as well as the block's dominators and (sometimes) post-dominators. (A predecessor is a block that precedes the given node in the control-flow graph; a successor succeeds it. A post-dominator is like a dominator, but for the reverse control flow; search the tubes for more.) There are well-known algorithms to calculate these links in linear time, and the SSA community has developed a number of optimizations on top of this cheap flow information.

In contrast, the focus in the lambda tribe has been more on interprocedural control flow, which -- as far as I can tell -- no one does in less than O(N²) time, which is, as my grandmother would say, "just turrible".

I started off with a mention of global value numbering (GVN) on purpose. This is still, 20+ years later, the big algorithm for code motion in JIT compilers. HotSpot C1 and V8 both use it, and it just landed in IonMonkey . GVN is well-known, well-studied, and it works. It results in loop-invariant code motion: if an invariant definition reaches a loop header, it can be hoisted out of the loop. In contrast I don't know of anything from the lambda tribe that really stacks up. There probably is something, but it's certainly not as well-studied.

why not ssa?

Peoples of the machine tribe, could you imagine returning a block as a value? Didn't think so. It doesn't make sense to return a label. But that's exactly what the lambda-calculus is about. One may represent blocks as functions, and use them as such, but one may also pass them as arguments and return them as values. Such blocks are of a higher order than the normal kind of block that is a jump target. Indeed it's the only way to express recursion in the basic lambda calculus.
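A sketch of the difference: in a higher-order language a block can escape as a value, which a first-order jump target cannot.

  function pickBlock(cond: boolean): () => number {
    const blockA = () => 1;
    const blockB = () => 2;
    return cond ? blockA : blockB; // returning a "label" as a value
  }
  console.log(pickBlock(true)());  // 1
  console.log(pickBlock(false)()); // 2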

That's what I mean when I say that CPS is good as a higher-order intermediate language, and when I say that SSA is a good first-order intermediate language.

If you have a fundamentally higher-order language, one in which you need to loop by recursion, then you have two options: do whole-program analysis to aggressively closure-convert your program to be first-order, and then use SSA; or keep a higher-order IL, something more like CPS.

MLton is an example of a compiler that does the former. Actually, MLton's SSA implementation is simply lovely. They do represent blocks as functions with arguments instead of labels and phi functions.

But if you can't do whole-program analysis -- maybe because you want to extend your program at runtime, support separate compilation, or whatever -- then you can't use SSA as a global IL. That's not to say that you shouldn't identify first-order segments of your program and apply SSA-like analysis and optimization on them, of course! That's really where the lambda tribe should go.

I wrote this because I was in the middle of V8's Crankshaft compiler and realized I didn't understand some of the idioms, so I went off to read a bunch of papers. At the same time, I wanted to settle the CPS-versus-ANF question for my Guile work. (Guile currently has a direct-style compiler, for which there are precious few optimizations; this is mostly because the IL is difficult to work with.)

This post summarizes my findings, but I'm sure I made a mistake somewhere. Please note any corrections in the comments.

related articles

  • effects analysis in guile
  • a continuation-passing style intermediate language for guile
  • a register vm for guile
  • the half strap: self-hosting and guile
  • revisiting common subexpression elimination in guile
  • a closer look at crankshaft, v8's optimizing compiler

7 responses

I don't see any mistakes, but I only know SSA and no CPS/ANF.

What is a phi-function? You may see it as a funny way to denote copies, because this is how they are deconstructed/removed. I'd write your example in SSA-form like this:

function clamp (x0, lower, upper) {
  if (x0 < lower)
    x1 = lower;
  else if (x0 > upper)
    x2 = upper;
  x3 = phi(x1, x2, x0);
  return x3;
}

Since copies are noops, this can be simplified:

function clamp (x, lower, upper) {
  if (x < lower);
  else if (x > upper);
  x2 = phi(lower, upper, x);
  return x2;
}

And deconstructed to:

function clamp (x, lower, upper) {
  if (x < lower)
    x2 = lower;
  else if (x > upper)
    x2 = upper;
  else
    x2 = x;
  return x2;
}

For example, LLVM does this in the backend. Since x2 is assigned twice, this is not in SSA-form anymore.

Also, note that SSA-form is not a kind or type of IL. It is a property of a representation. And there is nothing about SSA that you could not do without it; it just encodes analyses like "reaching definitions" into the program representation.

Finally, a pet peeve of mine: SSA actually makes the concept of variables unnecessary. Since any operand has exactly one defining operation, you can represent it just by a reference to that operation. This makes stuff like copy propagation implicit, since copies are no-ops. See http://pp.info.uni-karlsruhe.de/publication.php?id=braun11wir
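In sketch form (TypeScript; the type and field names are mine, just to illustrate the idea):

  // Since every SSA value has exactly one defining operation, an operand
  // can simply be a reference to that operation; named variables disappear.
  type Instr =
    | { op: "const"; value: number }
    | { op: "add"; left: Instr; right: Instr };

  const two: Instr = { op: "const", value: 2 };
  const three: Instr = { op: "const", value: 3 };
  const sum: Instr = { op: "add", left: two, right: three }; // "x = 2 + 3"
  const copyOfSum = sum; // "y = x" is just another reference: the copy is a no-op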

Thanks for the article! This is a really good introduction.

"In contrast, the focus in the lambda tribe has been more on *inter*procedural control flow, which -- as far as I can tell -- no one does in less than O(N^2) time, which is, as my grandmother would say, "just turrible"."

Does what in less than O(N^2) time?

If you do certain things lazily, you can get below that for the common case. Whether you can or not depends on what you're actually trying to analyse, though.

Note that you can convert between SSA and ANF form. See the 2003 paper, "A Functional Perspective on SSA Optimisation Algorithms", Chakravarty, Keller and Zadarnowski, which introduces (and implements) a bi-directional translation between SSA and ANF.

http://www.jantar.org/papers/chakravarty03perspective.pdf

If SSA can't handle functions as values, how does it deal with constructs like computed gotos?

Good point about names, Andreas! I suspect the same thing applies to continuations in CPS.

Verte, I was referring to the kCFA control flow analysis algorithms, of which only 0CFA is polynomial. Higher-order (k > 0) seems to take exponential time!

Matthew, from gccint:

computed jumps: Computed jumps contain edges to all labels in the function referenced from the code. All those edges have EDGE_ABNORMAL flag set. The edges used to represent computed jumps often cause compile time performance problems, since functions consisting of many taken labels and many computed jumps may have very dense flow graphs, so these edges need to be handled with special care...

MLton seems to have moved to GitHub:

https://github.com/MLton/mlton/blob/master/mlton/ssa/ssa-tree.sig

