If you’ve read anything about compilers in the last two decades or so, you have almost certainly heard of SSA compilers, a popular architecture featured in many optimizing compilers, including ahead-of-time compilers such as LLVM, GCC, Go, CUDA (and various shader compilers), Swift1, and MSVC2, and just-in-time compilers such as HotSpot C23, V84, SpiderMonkey5, LuaJIT, and the Android Runtime6.
SSA is hugely popular, to the point that most compiler projects no longer bother with other IRs for optimization7. This is because SSA is incredibly nimble at the types of program analysis and transformation that compiler optimizations want to do on your code. But why? Many of my friends who don’t do compilers often say that compilers seem like opaque magical black boxes, and SSA, as it often appears in the literature, is impenetrably complex.
But it’s not! SSA is actually very simple once you forget everything you think your programs are actually doing. We will develop the concept of SSA form, a simple SSA IR, prove facts about it, and design some optimizations on it.
I have previously written about the granddaddy of all modern SSA compilers, LLVM. This article is about SSA in general, and won’t really have anything to do with LLVM. However, it may be helpful to read that article to make some of the things in this article feel more concrete.
SSA is a property of intermediate representations (IRs), primarily used by compilers for optimizing imperative code that targets a register machine. Register machines are computers that feature a fixed set of registers that can be used as the operands for instructions: this includes virtually all physical processors, including CPUs, GPUs, and weird things like DSPs.
SSA is most frequently found in compiler middle-ends, the optimizing component between the frontend (which deals with the surface language programmers write, and lowers it into the middle-end’s IR), and the backend (which takes the optimized IR and lowers it into the target platform’s assembly).
SSA IRs, however, often have little resemblance to the surface language they lower out of, or the assembly language they target. This is because neither of these representations make it easy for a compiler to intuit optimization opportunities.
Imperative code consists of a sequence of operations that mutate the executing machine’s state to produce a desired result. For example, consider the following C program:
But, how would you write a general algorithm to detect that all of the operations cancel out? You’re forced to keep in mind program order to perform the necessary dataflow analysis, following mutations of a and b through the program. But this isn’t very general, and traversing all of those paths makes the search space for large functions very big. Instead, you would like to rewrite the program such that a and b gradually get replaced with the expression that calculates the most recent value, like this:
And finally, we see that we’re returning argc - argc, and can replace it with 0. All the other variables are now unused, so we can delete them.
The reason this works so well is because we took a function with mutation, and converted it into a combinatorial circuit, a type of digital logic circuit that has no state, and which is very easy to analyze. The dependencies between nodes in the circuit (corresponding to primitive operations such as addition or multiplication) are obvious from its structure. For example, consider the following circuit diagram for a one-bit multiplier:
A binary multiplier (Wikipedia)
This graph representation of a program's operations has two huge benefits:
The powerful tools of graph theory can be used to algorithmically analyze the program and discover useful properties, such as operations that are independent of each other or whose results are never used.
The operations are not ordered with respect to each other except when there is a dependency; this is useful for reordering operations, something compilers really like to do.
The reason combinatorial circuits are the best circuits is because they are directed acyclic graphs (DAGs), which admit really nice algorithms. For example, longest path in a general graph is NP-hard (and because P ≠ NP8, has complexity O(2^n)). However, if the graph is a DAG, it admits an O(n) solution!
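To see how much the DAG structure buys us, here is a sketch in Go of the linear-time longest-path computation (hypothetical helper, assuming the nodes are already given in topological order):

```go
// LongestPath returns the number of edges on the longest path in a DAG.
// order lists the nodes in topological order; succs is the adjacency list.
// One pass suffices because every edge goes "forward" in the order.
func LongestPath(order []int, succs map[int][]int) int {
	longest := make(map[int]int) // longest path ending at each node
	best := 0
	for _, n := range order {
		if longest[n] > best {
			best = longest[n]
		}
		for _, s := range succs[n] {
			if longest[n]+1 > longest[s] {
				longest[s] = longest[n] + 1
			}
		}
	}
	return best
}
```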
To understand this benefit, consider another program:
Suppose we wanted to replace each variable with its definition like we did before. We can’t just replace each constant variable with the expression that defines it though, because we would wind up with a different program!
```c
int f(int x) {
  int y = x * 2;
  x *= y;
  // const int z = y;  // Replace z with its definition.
  y *= y;
  return x + y;
}
```
Now, we pick up an extra y term because the squaring operation is no longer unused! We can put this into circuit form, but it requires inserting new variables for every mutation.
But we can’t do this when complex control flow is involved! So all of our algorithms need to carefully account for mutations and program order, meaning that we don’t get to use the nice graph algorithms without careful modification.
SSA stands for “static single assignment”, and was developed in the 80s as a way to enhance existing three-address code (where every statement is in the form x = y op z) so that every program was circuit-like, using a very similar procedure to the one described above.
The SSA invariant states that every variable in the program is assigned to by precisely one operation. If every operation in the program is visited once, they form a combinatorial circuit. Transformations are required to respect this invariant. In circuit form, a program is a graph where operations are nodes, and “registers” (which is what variables are usually called in SSA) are edges (specifically, each output of an operation corresponds to a register).
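To make the circuit picture concrete, here is a minimal sketch of what such a representation might look like in memory (hypothetical types, not any particular compiler's):

```go
// Every operation defines exactly one register (the SSA invariant), and every
// register remembers its unique defining operation; the uses of a register
// are the edges of the circuit.
type Register struct {
	Def  *Op   // the single operation that assigns this register
	Uses []*Op // every operation that reads it
}

type Op struct {
	Kind string      // "add", "mul", "load", ...
	Args []*Register // the registers this operation reads
	Out  *Register   // the single register this operation defines
}
```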
But, again, control flow. We can’t hope to circuitize a loop, right? The key observation of SSA is that most parts of a program are circuit-like. A basic block is a maximal circuital component of a program. Simply put, it is a sequence of non-control flow operations, and a final terminator operation that transfers control to another basic block.
The basic blocks themselves form a graph, the control flow graph, or CFG. This formulation of SSA is sometimes called SSA-CFG9. This graph is not a DAG in general; however, separating the program into basic blocks conveniently factors out the “non-DAG” parts of the program, allowing for simpler analysis within basic blocks.
There are two equivalent formalisms for SSA-CFG. The traditional one uses special “phi” operations (often called phi nodes, which is what I will call them here) to link registers across basic blocks. This is the formalism LLVM uses. A more modern approach, used by MLIR, is block arguments: each basic block specifies parameters, like a function, and blocks transferring control flow to it must pass arguments of those types to it.
How might we express this in an SSA-CFG IR? Let’s start inventing our SSA IR! It will look a little bit like LLVM IR, since that’s what I’m used to looking at.
```
// Globals (including functions) start with &, registers with %.
// Each function declares a signature.
func &fib(%n: i32) -> (i32) {
  // The first block has no label and can't be "jumped to".
  //
  // Single-argument goto jumps directly into a block with
  // the given arguments.
  goto @loop.start(%n, 0, 1)

// Block labels start with a `@`, can contain dots, and
// define parameters. Register names are scoped to a block.
@loop.start(%n, %a, %b: i32):
  // Integer comparison: %n > 0.
  %cont = cmp.gt %n, 0

  // Multi-argument goto is a switch statement. The compiler
  // may assume that `%cont` is among the cases listed in the
  // goto.
  goto %cont {
    0 -> @ret(%a),  // Goto can jump to the function exit.
    1 -> @loop.body(%n, %a, %b),
  }

@loop.body(%n, %a, %b: i32):
  // Addition and subtraction.
  %c = add %a, %b
  %n.2 = sub %n, 1

  // Note the assignments in @loop.start:
  // %n = %n.2, %a = %b, %b = %c.
  goto @loop.start(%n.2, %b, %c)
}
```
Every block ends in a goto, which transfers control to one of several possible blocks. In the process, it calls that block with the given arguments. One can think of a basic block as a tiny function which tails10 into other basic blocks in the same function.
LLVM IR is… older, so it uses the older formalism of phi nodes. “Phi” comes from “phony”, because it is an operation that doesn’t do anything; it just links registers from predecessors.
A phi operation is essentially a switch-case on the predecessors, each case selecting a register from that predecessor (or an immediate). For example, @loop.start has two predecessors, the implicit entry block @entry, and @loop.body. In a phi node IR, instead of taking a block argument for %n, it would specify a phi that selects %n when coming from @entry and %n.2 when coming from @loop.body.
The value of the phi operation is the value from whichever block jumped to this one.
This can be awkward to type out by hand and to read, but it is a more convenient representation for describing algorithms (just “add a phi node” instead of “add a parameter and a corresponding argument”) and for the in-memory representation; otherwise, the two formalisms are completely equivalent.
It’s a bit easier to understand the transformation from C to our IR if we first rewrite the C to use goto instead of a for loop:
However, we still have mutation in the picture, so this isn’t SSA. To get into SSA, we need to replace every assignment with a new register, and somehow insert block arguments…
The above IR code is already partially optimized; the named variables in the C program have been lifted out of memory and into registers. If we represent each named variable in our C program with a pointer, we can avoid needing to put the program into SSA form immediately. This technique is used by frontends that lower into LLVM, like Clang.
We’ll enhance our IR by adding a stack declaration for functions, which defines scratch space on the stack for the function to use. Each stack slot produces a pointer that we can load from and store to.
Our Fibonacci function would now look like so:
```
func &fib(%n: i32) -> (i32) {
  // Declare stack slots.
  %np = stack i32
  %ap = stack i32
  %bp = stack i32

  // Load initial values into them.
  store %np, %n
  store %ap, 0
  store %bp, 1

  // Start the loop.
  goto @loop.start(%np, %ap, %bp)

@loop.start(%np, %ap, %bp: ptr):
  %n = load %np
  %cont = cmp.gt %n, 0
  goto %cont {
    0 -> @exit(%ap),
    1 -> @loop.body(%np, %ap, %bp),
  }

@loop.body(%np, %ap, %bp: ptr):
  %a = load %ap
  %b = load %bp
  %c = add %a, %b
  store %ap, %b
  store %bp, %c

  %n = load %np
  %n.2 = sub %n, 1
  store %np, %n.2

  goto @loop.start(%np, %ap, %bp)

@exit(%ap: ptr):
  %a = load %ap
  goto @ret(%a)
}
```
Any time we reference a named variable, we load from its stack slot, and any time we assign it, we store to that slot. This is very easy to get into from C, but the code sucks because it’s doing lots of unnecessary pointer operations. How do we get from this to the register-only function I showed earlier?
We want program order to not matter for the purposes of reordering, but as we’ve written code here, program order does matter: loads depend on prior stores but stores don’t produce a value that can be used to link the two operations.
We can recover this independence from program order by introducing operands representing an “address space”: loads and stores take an address space as an argument, and stores return a new address space. An address space, or mem, represents the state of some region of memory. Loads and stores are independent when they are not connected by a mem argument.
This type of enhancement is used by Go’s SSA IR, for example. However, it adds a layer of complexity to the examples, so instead I will hand-wave this away.
The predecessors (or “preds”) of a basic block are the set of blocks with an outgoing edge to that block. A block may be its own predecessor.
Some literature calls these “direct” or immediate predecessors. For example, in our example the preds of @loop.start are @entry (the special name for the function entry point) and @loop.body.
The successors (no, not “succs”) of a basic block are the set of blocks it has an outgoing edge to. A block may be its own successor.
The successors of @loop.start are @exit and @loop.body; they are listed in @loop.start's goto.
If a block @a is a transitive pred of a block @b, we say that @a weakly dominates @b, or that it is a weak dominator of @b. For example, @entry, @loop.start, and @loop.body all weakly dominate @exit.
However, this is not usually an especially useful relationship. Instead, we want to speak of dominators: a block @a dominates a block @b (equivalently, @a is a dominator of @b) if @a is @b, or if @a dominates every pred of @b.
We only consider CFGs which are flowgraphs, that is, all blocks are reachable from the root block @entry, which has no preds. This is necessary to eliminate some pathological graphs from our proofs. Importantly, we can always ask for an acyclic path11 from @entry to any block @b.
An equivalent way to state the dominance relationship is that every path from @entry to @b contains all of @b’s dominators.
First, assume every @entry-to-@b path contains @a. If @b is @a, we’re done. Otherwise, we need to prove that each predecessor of @b is dominated by @a; we do this by induction on the length of acyclic paths from @entry to @b. Consider a pred @p of @b that is not @a, and any acyclic path p from @entry to @p; by appending @b to it, we get an acyclic path p′ from @entry to @b, which must contain @a. Because both the last and second-to-last elements of p′ are not @a, @a must appear within p, which is shorter than p′. Thus, by induction, @a dominates @p, and therefore @a dominates @b.
Going the other way, suppose @a dominates @b, and consider a path p from @entry to @b. The second-to-last element of p is a pred @p of @b; if it is @a, we are done. Otherwise, consider the path p′ made by deleting @b from the end. @p is dominated by @a, and p′ is shorter than p, so we can proceed by induction as above.
Onto those nice properties. Dominance allows us to take an arbitrarily complicated CFG and extract from it a DAG, composed of blocks ordered by dominance.
Dominance is reflexive and transitive by definition, so we only need to show that two distinct blocks cannot dominate each other (antisymmetry).
Suppose distinct @a and @b dominate each other. Pick an acyclic path p from @entry to @a. Because @b dominates @a, there is a prefix p′ of this path ending in @b. But because @a dominates @b, some prefix p′′ of p′ ends in @a. But now p must contain @a twice, contradicting that it is acyclic.
This allows us to write @a < @b when @a dominates @b. There is an even more refined graph structure that we can build out of dominators, which follows from the partial order theorem plus one more fact: the dominators of any given block are totally ordered by dominance.
Suppose @a1 < @b and @a2 < @b, but neither dominates the other. Then there must exist acyclic paths from @entry to @b which contain both, but in different orders. Take the subpath @entry ... @a1 of the path where @a1 appears first, and the subpath @a1 ... @b of the path where @a2 appears first; because the paths are acyclic, neither subpath contains @a2. Concatenating these paths yields a path from @entry to @b that does not contain @a2, a contradiction.
This tells us that the DAG we get from the dominance relation is actually a tree, rooted at @entry. The parent of a node in this tree is called its immediate dominator.
Computing dominators can be done iteratively: the dominator set of a block @b is the intersection of the dominator sets of its preds, plus @b itself. This algorithm runs in quadratic time.
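A sketch of that iterative algorithm in Go (a compact but unoptimized rendering, assuming blocks are numbered with @entry as block 0 and preds[b] listing each block's preds):

```go
// Dominators returns dom, where dom[b] is the set of blocks that dominate b.
func Dominators(n int, preds [][]int) []map[int]bool {
	all := map[int]bool{}
	for b := 0; b < n; b++ {
		all[b] = true
	}
	dom := make([]map[int]bool, n)
	dom[0] = map[int]bool{0: true} // @entry is dominated only by itself.
	for b := 1; b < n; b++ {
		dom[b] = all // Start from "everything" and shrink.
	}
	for changed := true; changed; {
		changed = false
		for b := 1; b < n; b++ {
			// dom[b] = {b} ∪ the intersection of its preds' dominator sets.
			next := map[int]bool{b: true}
			for d := range intersect(dom, preds[b]) {
				next[d] = true
			}
			if len(next) != len(dom[b]) {
				dom[b], changed = next, true
			}
		}
	}
	return dom
}

func intersect(dom []map[int]bool, preds []int) map[int]bool {
	out := map[int]bool{}
	if len(preds) == 0 {
		return out
	}
	for d := range dom[preds[0]] {
		out[d] = true
	}
	for _, p := range preds[1:] {
		for d := range out {
			if !dom[p][d] {
				delete(out, d)
			}
		}
	}
	return out
}
```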
A better algorithm is the Lengauer-Tarjan algorithm. It is relatively simple, but explaining how to implement it is a bit out of scope for this article. I found a nice treatment of it here.
What’s important is we can compute the dominator tree without breaking the bank, and given any node, we can ask for its immediate dominator. Using immediate dominators, we can introduce the final, important property of dominators.
The dominance frontier of a block @a is the set of all blocks not dominated by @a with at least one pred which @a dominates.
These are points where control flow merges from distinct paths: one containing @a and one not. The dominance frontier of @loop.body is @loop.start, whose preds are @entry and @loop.body.
There are many ways to calculate dominance frontiers, but with a dominance tree in hand, we can do it like this:
For each block @b with more than one pred, for each of its preds, let @p be that pred. Add @b to the dominance frontier of @p and all of its dominators, stopping when encountering @b's immediate dominator.
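A sketch of that walk in Go, assuming idom[b] gives each block's immediate dominator (with idom[0] == 0 for @entry):

```go
// DominanceFrontiers returns df, where df[a] is the dominance frontier of a.
func DominanceFrontiers(n int, preds [][]int, idom []int) []map[int]bool {
	df := make([]map[int]bool, n)
	for b := range df {
		df[b] = map[int]bool{}
	}
	for b := 0; b < n; b++ {
		if len(preds[b]) < 2 {
			continue // Only join points can appear in a frontier.
		}
		for _, p := range preds[b] {
			// Walk up the dominator tree from the pred, adding b to each
			// block's frontier, and stop at b's immediate dominator.
			for runner := p; runner != idom[b]; runner = idom[runner] {
				df[runner][b] = true
			}
		}
	}
	return df
}
```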
We need to prove that every block examined by the algorithm winds up in the correct frontiers.
First, we check that every examined block @b is added to the correct frontiers. Suppose @a < @p, where @p is a pred of @b, and let @d be @b's immediate dominator. If @a < @d, then @a must dominate @b, so @b does not belong in @a's frontier; consistently, the walk stops at @d and never reaches @a. Otherwise, @b must be in @a's frontier: @a dominates a pred of @b, but it cannot dominate @b, because any strict dominator of @b either is @d or dominates @d, and @a is neither (the walk never examines @d itself).
Second, we check that every frontier is complete. Consider a block @a. If a block @b belongs in its frontier, then @a must be among the dominators of some pred @p of @b, and @a must be dominated by @b's immediate dominator; otherwise, @a would dominate @b (and thus @b would not be in its frontier). Thus, the walk from @p passes through @a before stopping, and @b gets added to @a's frontier.
You might notice that all of these algorithms are quadratic. This is actually a very good time complexity for a compilers-related graph algorithm. Cubic and quartic algorithms are not especially uncommon, and yes, your optimizing compiler’s time complexity is probably cubic or quartic in the size of the program!
Ok. Let’s construct an optimization. We want to figure out if we can replace a load from a pointer with the most recent store to that pointer. This will allow us to fully lift values out of memory by cancelling out store/load pairs.
This will make use of yet another implicit graph data structure.
The dataflow graph is the directed graph made up of the internal circuit graphs of each basic block, connected along block arguments.
To follow a use-def chain is to walk this graph forward from an operation to discover operations that potentially depend on it, or backwards to find operations it potentially depends on.
It’s important to remember that the dataflow graph, like the CFG, does not have a well defined “up” direction. Navigating it and the CFG requires the dominator tree.
One other important thing to remember here is that every instruction in a basic block always executes if the block executes. In much of this analysis, we need to appeal to “program order” to select the last load in a block, but we are always able to do so. This is an important property of basic blocks that makes them essential for constructing optimizations.
For a given store %p, %v, we want to identify all loads that depend on it. We can follow the use-def chain of %p to find which blocks contain loads that potentially depend on the store (call it s).
First, we can eliminate loads within the same basic block (call it @a). Replace all load %p instructions after s (but before any other store %p, _, in program order) with %v's def. If s is not the last store in this block, we're done.
Otherwise, follow the use-def chain of %p to successors which use %p, i.e., successors whose goto case has %p as at least one argument. Recurse into those successors, and now replacing the pointer %p of interest with the parameters of the successor which were set to %p (more than one argument may be %p).
If successor @b loads from one of the registers holding %p, replace all such loads before a store to %p. We also now need to send %v into @b somehow.
This is where we run into something of a wrinkle. If @b has exactly one predecessor, we need to add a new block argument to pass whichever register is holding %v (which exists by induction). If %v is already passed into @b by another argument, we can use that one.
However, if @b has multiple predecessors, we need to make sure that every path from @a to @b sends %v, and canonicalizing those will be tricky. Worse still, if @b is in @a’s domination frontier, a different store could be contributing to that load! For this reason, dataflow from stores to loads is not a great strategy.
Instead, we’ll look at dataflow from loads backwards to stores (in general, dataflow from uses to defs tends to be more useful), which we can use to augment the above forward dataflow analysis to remove the complex issues around domination frontiers.
Let’s analyze loads instead. For each load %p in @a, we want to determine all stores that could potentially contribute to its value. We can find those stores as follows:
We want to be able to determine which register in a given block corresponds to the value of %p, and then find its last store in that block.
To do this, we’ll flood-fill the CFG backwards in BFS order. This means that we’ll follow preds (through the use-def chain) recursively, visiting each pred before visiting their preds, and never revisiting a basic block (except we may need to come back to @a at the end).
Determining the “equivalent”12 of %p in @b (we’ll call it %p.b) can be done recursively: while examining @b, follow the def of %p.b. If %p.b is a block parameter, for each pred @c, set %p.c to the corresponding argument in the @b(...) case in @c’s goto.
Using this information, we can collect all stores that the load potentially depends on. If a predecessor @b stores to %p.b, we add the last such store in @b (in program order) to our set of stores, and do not recurse to @b's preds (because this store overwrites all past stores). Note that we may revisit @a in this process, and collect a store to %p from it if one occurs in the block. This is necessary in the case of loops.
The result is a set stores of (store %p.s, %v.s; @s) pairs: each contributing store and the block @s it lives in. In the process, we also collected the set of all blocks visited, subgraph, which are the blocks we will need to plumb a %v.b through. This process is called memory dependency analysis, and is a key component of many optimizations.
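A simplified sketch of that backward walk, reusing the Register/Op types sketched earlier plus a hypothetical Block type; ArgFor maps one of a block's parameters to the register a given pred passes for it, and the sketch glosses over immediates and over reaching the stack op itself:

```go
type Block struct {
	Ops   []*Op // in program order
	Preds []*Block
}

// ArgFor returns the register that pred passes for the given parameter of b.
func (b *Block) ArgFor(pred *Block, param *Register) *Register { /* ... */ return nil }

// ContributingStores finds every store that the load of p at Ops[loadIdx] in
// start might observe, flood-filling preds and never revisiting a block
// (except possibly start itself, which matters for loops).
func ContributingStores(start *Block, loadIdx int, p *Register) []*Op {
	type item struct {
		b *Block
		p *Register // the equivalent of the loaded pointer in b
	}
	// A store in the same block, before the load, settles it immediately.
	if s := lastStoreBefore(start, loadIdx, p); s != nil {
		return []*Op{s}
	}
	var stores []*Op
	visited := map[*Block]bool{}
	var queue []item
	for _, pred := range start.Preds {
		queue = append(queue, item{pred, start.ArgFor(pred, p)})
	}
	for len(queue) > 0 {
		it := queue[0]
		queue = queue[1:]
		if visited[it.b] {
			continue
		}
		visited[it.b] = true
		// The last store to the pointer overwrites all earlier ones, so we
		// record it and stop walking through this block.
		if s := lastStoreBefore(it.b, len(it.b.Ops), it.p); s != nil {
			stores = append(stores, s)
			continue
		}
		for _, pred := range it.b.Preds {
			queue = append(queue, item{pred, it.b.ArgFor(pred, it.p)})
		}
	}
	return stores
}

// lastStoreBefore finds the last `store p, _` in b before index end, if any.
func lastStoreBefore(b *Block, end int, p *Register) *Op {
	for i := end - 1; i >= 0; i-- {
		if op := b.Ops[i]; op.Kind == "store" && op.Args[0] == p {
			return op
		}
	}
	return nil
}
```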
Not all contributing operations are stores. Some may be references to globals (which we're disregarding), or function arguments or the results of a function call (which means we probably can't lift this load). For example, if %p gets traced all the way back to a function argument, there is a code path which loads from a pointer whose stores we can't see.
It may also trace back to a stack slot that is potentially not stored to. This means there is a code path that can potentially load uninitialized memory. Like LLVM, we can assume this is not observable behavior, so we can discount such dependencies. If all of the dependencies are uninitialized loads, we can potentially delete not just the load, but operations which depend on it (reverse dataflow analysis is the origin of so-called “time-traveling” UB).
Now that we have the full set of dependency information, we can start lifting loads. Loads can be safely lifted when all of their dependencies are stores in the current function, or dependencies we can disregard thanks to UB in the surface language (such as null loads or uninitialized loads).
There is a lot of fuss in this algorithm about plumbing values through block arguments. A lot of IRs make a simplifying change, where every block implicitly receives the registers from its dominators as block arguments.
I am keeping the fuss because it makes it clearer what’s going on, but in practice, most of this plumbing, except at dominance frontiers, would be happening in the background.
Suppose we can safely lift some load. Now we need to plumb the stored values down to the load, visiting each block @b in subgraph (from here on, all blocks mentioned are in subgraph unless stated otherwise). We will be building two mappings: one, (@s, @b) -> %v.s.b, gives the register equivalent to %v.s in each block; the other, @b -> %v.b, gives the value that %p's memory must hold in that block.
Prepare a work queue, with each @s in it initially.
Pop a block @a from the queue. For each successor @b (in subgraph):
If %v.b isn’t already defined, add it as a block argument. Have @a pass %v.a to that argument.
If @b hasn’t been visited yet, and isn’t the block containing the load we’re deleting, add it to the queue.
Once we’re done, if @a is the block that contains the load, we can now replace all loads to %p before any stores to %p with %v.a.
There are cases where this whole process can be skipped, by applying a “peephole” optimization. For example, stores followed by loads within the same basic block can be optimized away locally, leaving the heavy-weight analysis for cross-block store/load pairs.
Let's look at L1. Its contributing stores are in @entry and @loop.body. So we add a new parameter %n: in @entry, we pass %n for that parameter (since that's what is stored to it in @entry), while in @loop.body, we pass %n.2.
What about L4? The contributing stores are also in @entry and @loop.body, but one of those isn't a pred of @exit. @loop.start is also in the subgraph for this load, though. So, starting from @entry, we add a new parameter %a to @loop.start and feed 0 (the stored value, an immediate this time) through it. Now looking at @loop.body, we see there is already a parameter for this load (%a), so we just pass %b as that argument. Now we process @loop.start, which @entry pushed onto the queue. @exit gets a new parameter %a, which is fed @loop.start's own %a. We do not re-process @loop.body, even though it also appears in @loop.start's goto, because we already visited it.
After doing this for the other two loads, every load has been replaced by a register plumbed through block arguments; the stack slots and stores are still there, but nothing reads them anymore.
After lifting, if we know that a stack slot’s pointer does not escape (i.e., none of its uses wind up going into a function call13) or a write to a global (or a pointer that escapes), we can delete every store to that pointer. If we delete every store to a stack slot, we can delete the stack slot altogether (there should be no loads left for that stack slot at this point).
This analysis is simple, because it assumes pointers do not alias in general. Alias analysis is necessary for more accurate dependency analysis. This is necessary, for example, for lifting loads of fields of structs through subobject pointers, and dealing with pointer arithmetic in general.
However, our dependency analysis is robust to passing different pointers as arguments to the same block from different predecessors. This is the case that is specifically handled by all of the fussing about with dominance frontiers. This robustness ultimately comes from SSA’s circuital nature.
Similarly, this analysis needs to be tweaked to deal with something like select %cond, %a, %b (a ternary, essentially). selects of pointers need to be replaced with selects of the loaded values, which means we need to do the lifting transformation “all at once”: lifting some liftable loads will leave the IR in an inconsistent state, until all of them have been lifted.
Many optimizations will make a mess of the CFG, so it’s useful to have simple passes that “clean up” the mess left by transformations. Here’s some easy examples.
If an operation’s result has zero uses, and the operation has no side-effects, it can be deleted. This allows us to then delete operations that it depended on that now have no side effects. Doing this is very simple, due to the circuital nature of SSA: collect all instructions whose outputs have zero uses, and delete them. Then, examine the defs of their operands; if those operations now have no uses, delete them, and recurse.
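A sketch of this, in terms of the Op/Register types from earlier (HasSideEffects, RemoveUse, and Delete are hypothetical helpers):

```go
// EliminateUnused deletes pure operations whose results are never used, then
// chases their operands, which may have just lost their last use.
func EliminateUnused(ops []*Op) {
	queue := append([]*Op(nil), ops...)
	deleted := map[*Op]bool{}
	for len(queue) > 0 {
		op := queue[len(queue)-1]
		queue = queue[:len(queue)-1]
		if deleted[op] || op.HasSideEffects() || len(op.Out.Uses) > 0 {
			continue
		}
		deleted[op] = true
		for _, arg := range op.Args {
			arg.RemoveUse(op)              // unlink the use-def edge
			queue = append(queue, arg.Def) // its def may now be dead
		}
		op.Delete()
	}
}
```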
This bubbles up all the way to block arguments. Deleting block arguments is a bit trickier, but we can use a work queue to do it. Put all of the blocks into a work queue.
Pop a block from the queue.
Run unused result elimination on its operations.
If it now has parameters with no uses, remove those parameters.
For each pred, delete the corresponding arguments to this block. Then, place those preds into the work queue (since some of their operations may have lost their last use).
There are many CFG configurations that are redundant and can be simplified to reduce the number of basic blocks.
For example, unreachable code elimination can delete whole blocks. Other optimizations may cause the goto at the end of a block to be empty (because all of its successors were optimized away). We treat an empty goto as being unreachable (since it has no cases!), so we can delete every operation in the block up to the last non-pure operation. If we delete every instruction in the block, we can delete the block entirely, and delete it from its preds' gotos. This is a form of dead code elimination, or DCE, which combines with the previous optimization to aggressively delete redundant code.
Some jumps are redundant. For example, if a block has exactly one pred and one successor, the pred’s goto case for that block can be wired directly to the successor. Similarly, if two blocks are each other’s unique predecessor/successor, they can be fused, creating a single block by connecting the input blocks’ circuits directly, instead of through a goto.
If we have a ternary select operation, we can do more sophisticated fusion. If a block has two successors, both of which have the same unique successor, and those successors consist only of gotos, we can fuse all four blocks, replacing the CFG diamond with a select. In terms of C, this is the transformation from if (c) { x = a; } else { x = b; } to x = c ? a : b.
I am hoping to write more about SSA optimization passes. This is a very rich subject, and viewing optimizations in isolation is a great way to understand how a sophisticated optimization pipeline is built out of simple, dumb components.
It’s also a practical application of graph theory that shows just how powerful it can be, and (at least in my opinion), is an intuitive setting for understanding graph theory, which can feel very abstract otherwise.
In the future, I’d like to cover CSE/GVN, loop optimizations, and, if I’m feeling brave, getting out of SSA into a finite-register machine (backends are not my strong suit!).
Specifically the Swift frontend before lowering into LLVM IR. ↩
Microsoft Visual C++, a non-conforming C++ compiler sold by Microsoft ↩
HotSpot is the JVM implementation provided by OpenJDK; C2 is the “second compiler”, which has the best performance among HotSpot’s Java execution engines. ↩
The Android Runtime (ART) is the “JVM” (scare quotes) on the Android platform. ↩
The Glasgow Haskell Compiler (GHC) does not use SSA; it (like some other pure-functional languages) uses a continuation-oriented IR (compare to Scheme's call/cc). ↩
Every compiler person firmly believes that P ≠ NP, because program optimization is full of NP-hard problems and we would have definitely found polynomial ideal register allocation by now if it existed. ↩
Some more recent IRs use a different version of SSA called “structured control flow”, or SCF. Wasm is a notable example of an SCF IR. SSA-SCF is equivalent to SSA-CFG, and polynomial time algorithms exist for losslessly converting between them (LLVM compiling Wasm, for example, converts its CFG into SCF using a “relooping algorithm”).
In SCF, operations like switch statements and loops are represented as macro operations that contain basic blocks. For example, a switch operation might take a value as input, select a basic block to execute based on that, and return the value that basic block evaluates to as its output.
RVSDG is a notable innovation in this space, because it allows circuit analysis of entire imperative programs.
I am covering SSA-CFG instead of SSA-SCF simply because it's more common, and because it's what LLVM IR is.
Tail calling is when a function call is the last operation in a function; this allows the caller to jump directly to the callee, recycling its own stack frame for it instead of requiring it to allocate its own. ↩
Given any path from @a to @b, we can make it acyclic by replacing each subpath from @c to @c with a single @c node. ↩
When moving from a basic block to a pred, a register in that block which is defined as a block parameter corresponds to some register (or immediate) in each predecessor. That is the “equivalent” of %p.
One possible option for the “equivalent” is an immediate: for example, null or the address of a global. In the case of a global &g, assuming no data races, we would instead need alias information to tell if stores to this global within the current function (a) exist and (b) are liftable at all.
If the equivalent is null, we can proceed in one of two ways depending on optimization level. If we want loads of null to trap (as in Go), we need to mark this load as not being liftable, because it may trap. If we want loads of null to be UB, we simply ignore that pred, because we can assume (for our analysis) that if the pointer is null, it is never loaded from. ↩
Returned stack pointers do not escape: stack slots' lifetimes end at function exit, so returning one just returns a dangling pointer, which we assume is never loaded. So stores to that pointer before returning it can be discarded. ↩
Go’s interfaces are very funny. Rather than being explicitly implemented, like in Java or Rust, they are simply a collection of methods (a “method set”) that the concrete type must happen to have. This is called structural typing, which is the opposite of nominal typing.
Go interfaces are very cute, but this conceptual simplicity leads to a lot of implementation problems (a theme with Go, honestly). It removes a lot of intentionality from implementing interfaces, and there is no canonical way to document that A satisfies1 B, nor can you avoid conforming to interfaces, especially if one forces a particular method on you.
It also has very quirky results for the language runtime. To cast an interface value to another interface type (via the type assertion syntax a.(B)), the runtime essentially has to use reflection to go through the method set of the concrete type of a. I go into detail on how this is implemented here.
Because of their structural nature, this also means that you can’t add new methods to an interface without breaking existing code, because there is no way to attach default implementations to interface methods. This results in very silly APIs because someone screwed up an interface.
For example, in the standard library’s package flag, the interface flag.Value represents a value which can be parsed as a CLI flag. It looks like this:
```go
type Value interface {
	// Get a string representation of the value.
	String() string

	// Parse a value from a string, possibly returning an error.
	Set(string) error
}
```
flag.Value also has an optional method, which is only specified in the documentation. If the concrete type happens to provide IsBoolFlag() bool, it will be queried to determine whether the flag should have bool-like behavior. Essentially, this means that something like this exists in the flag library:
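Something along these lines (a simplified sketch of the kind of check the flag package has to do; boolFlag here is illustrative):

```go
// The optional method has to be discovered with an interface-to-interface
// cast at parse time.
type boolFlag interface {
	Value
	IsBoolFlag() bool
}

func isBoolValue(v Value) bool {
	bv, ok := v.(boolFlag)
	return ok && bv.IsBoolFlag()
}
```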
The flag package already uses reflection, but you can see how it might be a problem if this interface-to-interface cast happens regularly, even taking into account Go’s caching of cast results.
There is also flag.Getter, which exists because they messed up and didn’t provide a way for a flag.Value to unwrap into the value it contains. For example, if a flag is defined with flag.Int, and then that flag is looked up with flag.Lookup, there’s no straightforward way to get the int out of the returned flag.Value.
Instead, you have to side-cast to flag.Getter:
```go
type Getter interface {
	Value

	// Returns the value of the flag.
	Get() any
}
```
As a result, flag.Lookup("...").(flag.Getter) needs to do a lot more work than if flag.Value had just added Get() any, with a default return value of nil.
It turns out that there is a rather elegant workaround for this.
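As a concrete setup for the next paragraph (hypothetical types, chosen to match the names used below):

```go
type A struct{}

func (A) Foo() {}
func (A) Bar() {}

type B struct {
	A       // an embedded field: just a type, no field name
	Foo int // shadows A's Foo at the B level
}
```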
The A-typed embedded field behaves as if we had declared the field A A, but selectors on var b B will search in A if they do not match something at the B level. For example, if A has a method Bar, and B does not, b.Bar() will resolve to b.A.Bar(). However, even though A has a method Foo, b.Foo resolves to B's own field Foo, not b.A.Foo, because names declared directly on B shadow promoted ones.
Importantly, any methods from A which B does not already have will be added to B’s method set. So this works:
```go
type (
	A int
	B struct{ A }
	C interface {
		Foo()
		Bar()
	}
)

func (A) Foo() {}
func (B) Bar() {}

var _ C = B{} // B satisfies C.
```
Now, suppose that we were trying to add Get() any to flag.Value. Let’s suppose that we had also defined flag.ValueDefaults, a type that all satisfiers of flag.Value must embed. Then, we can write the following:
```go
type Value interface {
	String() string
	Set(string) error
	Get() any // New method.
}

type ValueDefaults struct{}

func (ValueDefaults) Get() any { return nil }
```
Now, this only works if we had required in the first place that anyone satisfying flag.Value embeds flag.ValueDefaults. How can we force that?
A little-known Go feature is that interfaces can have unexported methods. The way these work, for the purposes of interface conformance, is that exported methods are matched just by their name, but unexported methods must match both name and package.
So, if we have an interface like interface { foo() }, then foo will only match methods defined in the same package that this interface expression appears in. This is useful for preventing satisfaction of interfaces.
However, there is a loophole: embedding inherits the entire method set, including unexported methods. Therefore, we can enhance Value to account for this:
```go
type Value interface {
	String() string
	Set(string) error
	Get() any // New method.

	value() // Unexported!
}

type ValueDefaults struct{}

func (ValueDefaults) Get() any { return nil }
func (ValueDefaults) value()   {}
```
Now, it’s impossible for any type defined outside of this package to satisfy flag.Value, without embedding flag.ValueDefaults (either directly or through another embedded flag.Value).
Now, another problem is that you can’t control the name of embedded fields. If the embedded type is Foo, the field’s name is Foo. Except, it’s not based on the name of the type itself; it will pick up the name of a type alias. So, if you want to unexport the defaults struct, you can simply write:
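For example, a user of our hypothetical flag.ValueDefaults could embed it under an unexported alias (a sketch):

```go
type valueDefaults = flag.ValueDefaults // an unexported alias

type MyValue struct {
	valueDefaults // the embedded field is now named valueDefaults
	// ...
}
```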
This also has the side-effect of hiding all of ValueDefaults' methods from MyValue's documentation, despite the fact that exported fields and methods are still selectable and callable by other packages (including via interfaces). As far as I can tell, this is simply a bug in godoc, since this behavior is not documented.
There is still a failure mode: a user type satisfying flag.Value might happen to define a Get method with a different signature. In this case, that Get takes precedence, and changes to flag.Value will break users.
There are two workarounds:
Tell people not to define methods on their satisfying type, and if they do, they’re screwed. Because satisfying flag.Value is now explicit, this is not too difficult to ask for.
Pick a name for new methods that is unlikely to collide with anything.
Unfortunately, this runs into a big issue with structural typing, which is that it is very difficult to avoid making mistakes when making changes, due to the lack of intent involved. A similar problem occurs with C++ templates, where the interfaces defined by concepts are implicit, and can result in violating contract expectations.
Go has historically been relatively cavalier about this kind of issue, so I think that breaking people based on this is fine.
And of course, you cannot retrofit a defaults struct into an interface; you have to define it from day one.
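If it had been there from day one, an optional method like IsBoolFlag could later be added as a real method with a default; a sketch, continuing the hypothetical Value/ValueDefaults from above:

```go
type Value interface {
	String() string
	Set(string) error
	Get() any

	// Added after the fact, with a default of false.
	IsBoolFlag() bool

	value() // Unexported!
}

func (ValueDefaults) IsBoolFlag() bool { return false }
```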
Now IsBoolFlag is more than just a random throw-away comment on a type.
We can also use defaults to speed up side casts. Many functions around the io package will cast an io.Reader into an io.Seeker or io.ReaderAt to perform more efficient I/O.
In a hypothetical world where we had defaults structs for all the io interfaces, we can enhance io.Reader with a ReadAt default method that by default returns an error.
We can do something similar for io.Seeker, but because it’s a rather general interface, it’s better to keep io.Seeker as-is. So, we can add a conversion method:
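A sketch of that conversion method, as a hypothetical change to io.Reader plus a ReaderDefaults struct (imagine this inside package io, where Seeker is already defined):

```go
type Reader interface {
	Read(p []byte) (n int, err error)

	// Returns the receiver as a Seeker, or nil if it cannot seek.
	Seeker() Seeker

	reader() // Unexported: forces embedding ReaderDefaults.
}

type ReaderDefaults struct{}

func (ReaderDefaults) Seeker() Seeker { return nil }
func (ReaderDefaults) reader()        {}
```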
Here, Reader.Seeker() converts to an io.Seeker, returning nil if that’s not possible. How is this faster than r.(io.Seeker)? Well, consider what this would look like in user code:
Now, we might consider looking past that, but it becomes a big problem with reflection. If we passed Foo(x) into reflect.ValueOf, the resulting any conversion would discard the defaulted method, meaning that it would not be findable by reflect.Value.MethodByName(). Oops.
So we need to somehow add Baz to MyFoo’s method set. Maybe we say that if MyFoo is ever converted into Foo, it gets the method. But this doesn’t work, because the compiler might not be able to see through something like any(MyFoo{...}).(Foo). This means that Baz must be applied unconditionally. But, now we have the problem that if we have another interface interface { Bar(); Baz(int) }, MyFoo would need to receive incompatible signatures for Baz.
Again, we’re screwed by the non-intentionality of structural typing.
Ok, let's forget about default method implementations; that doesn't seem to be workable. What if we make some methods optional, like IsBoolFlag() earlier? Let's invent some syntax for it: say, a way to mark a method as optional directly in the interface declaration.
Then, suppose that MyFoo provides Bar but not Baz (or Baz with the wrong signature). Then, the entry in the itab for Baz would contain a nil function pointer, such that x.Baz() panics! To determine if Baz is safe to call, we would use the following idiom: check whether the method is present before calling it, as in if x.Baz != nil { x.Baz() }.
The compiler is already smart enough to elide construction of funcvals for cases like this, although it does mean that x.Func in general, for an interface value x, requires an extra cmov or similar to make sure that x.Func is nil when it’s a missing method.
All of the use cases described above would work Just Fine using this construction, though! However, we run into the same issue that Foo(x) appears to have a larger method set than x. It is not clear if Foo(x) should conform to interface { Bar(); Baz() }, where Baz is required. My intuition would be no: Foo is a strictly weaker interface. Perhaps it might be necessary to avoid the method access syntax for optional methods, but that’s a question of aesthetics.
This idea of having nulls in place of function pointers in a vtable is not new, but to my knowledge is not used especially widely. It would be very useful in C++, for example, to be able to determine if no implementation was provided for a non-pure virtual function. However, the nominal nature of C++’s virtual functions does not make this as big of a need.
Another alternative is to store related interfaces' itabs inside an itab. Suppose that we invent the syntax A<- within an interface to indicate that that interface will likely get cast to A, for example type B interface { Bar(); A<- }.
Satisfying B does not require satisfying A. However, the A<- must be part of the public API, because an interface{ Bar() } cannot be used in place of an interface{ A<- }.
Within B’s itab, after all of the methods, there is a pointer to an itab for A, if the concrete type for this itab also happens to satisfy A. Then, a cast from B to A is just loading a pointer from the itab. If the cast would fail, the loaded pointer will be nil.
I had always assumed that Go did an optimization like this for embedding interfaces, but no! Any inter-interface conversion, including upcasts, goes through the whole type assertion machinery! Of course, Go cannot hope to generate an itab for every possible subset of the method set of an interface (exponential blow-up), but it’s surprising that they don’t do this for embedded interfaces, which are Go’s equivalent of superinterfaces (present in basically every language with interfaces).
Using this feature, we could update flag.Value to carry a Getter<- annotation, so that every flag.Value itab also stores a pointer to the corresponding flag.Getter itab.
Unfortunately, because A<- changes the ABI of an interface, it does not seem possible to actually add this to existing interfaces, because the following code is valid:
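Presumably the problem is code like the following, which is valid today: an anonymous interface with the same method set is the same type as flag.Value and shares its representation, annotations and all.

```go
var v flag.Value // some flag's value

// Structurally identical, so this is the *same* interface type as flag.Value;
// both must agree on their ABI.
var w interface {
	String() string
	Set(string) error
} = v
```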
Even though this fix seems really clean, it doesn’t work! The only way it could work is if PGO determines that a particular interface conversion A to B happens a lot, and updates the ABI of all interfaces with the method set of A, program-globally, to contain a pointer to a B itab if available.
Go's interfaces are pretty bad; in my opinion, a feature that looks good on a slide, but which results in a lot of mess due to its granular and intention-less nature. We can sort of patch over it with embeds, but there are still problems.
Due to how method sets work in Go, it's very hard to "add" methods to an interface, and honestly at this point, any interface mechanism that makes it impossible (or expensive) to add new functions is going to be a huge problem.
Missing methods seems like the best way out of this problem, but for now, we can stick to the janky embedded structs.
Go uses the term “implements” to say that a type satisfies an interface. I am instead intentionally using the term “satisfies”, because it makes the structural, passive nature of implementing an interface clearer. This is also more in-line with interfaces’ use as generic constraints.
Swift uses the term “conform” instead, which I am avoiding for this reason. ↩
Historically I have worked on many projects related to high-performance Protobuf, be that on the C++ runtime, on the Rust runtime, or on integrating UPB, the fastest Protobuf runtime, written by my colleague Josh Haberman. I generally don’t post directly about my current job, but my most recent toy-turned-product is something I’m very excited to write about: hyperpb.
Here’s how we measure up against other Go Protobuf parsers. This is a subset of my benchmarks, since the benchmark suite contains many dozens of specimens. This was recorded on an AMD Zen 4 machine.
Throughput for various configurations of hyperpb (colored bars) vs. competing parsers (grey bars). Each successive hyperpb includes all previous optimizations, corresponding to zerocopy mode, arena reuse, and profile-guided optimization. Bigger is better.
Traditionally, Protobuf backends would generate parsers by generating source code specialized to each type. Naively, this would give the best performance, because everything would be “right-sized” to a particular message type. Unfortunately, now that we know better, there are a bunch of drawbacks:
Every type you care about must be compiled ahead-of-time. Tricky for when you want to build something generic over schemas your users provide you.
Every type contributes to a cost on the instruction cache, meaning that if your program parses a lot of different types, it will essentially flush your instruction cache any time you enter a parser. Worse still, if a parse involves enough types, the parser itself will hit instruction decoding throughput issues.
These effects are not directly visible in normal workloads, but other side-effects are visible: for example, giant switches on field numbers can turn into chains of branch instructions, meaning that higher-numbered fields will be quite slow. Even binary-searching on field numbers isn’t exactly ideal. However, we know that every Protobuf codec ever emits fields in index order (i.e., declaration order in the .proto file), which is a data conditioning fact we don’t take advantage of with a switch.
UPB solves this problem. It is a small C kernel for parsing Protobuf messages, which is completely dynamic: a UPB “parser” is actually a collection of data tables that are evaluated by a table-driven parser. In other words, a UPB parser is actually configuration for an interpreter VM, which executes Protobuf messages as its bytecode. UPB also contains many arena optimizations to improve allocation throughput when parsing complex messages.
hyperpb is a brand new library, written in the most cursed Go imaginable, which brings many of the optimizations of UPB to Go, and many new ones, while being tuned to Go's own weird needs. The result leaves the competition in the dust in virtually every benchmark, while being completely runtime-dynamic. This means it's faster than Protobuf Go's own generated code, and vtprotobuf (a popular but non-conforming1 parser generator for Go).
This post is about some of the internals of hyperpb. I have also prepared a more sales-ey writeup, which you can read on the Buf blog.
UPB is awesome. It can slot easily into any language that has C FFI, which is basically every language ever.
Unfortunately, Go’s C FFI is really, really bad. It’s hard to overstate how bad cgo is. There isn’t a good way to cooperate with C on memory allocation (C can’t really handle Go memory without a lot of problems, due to the GC). Having C memory get cleaned up by the GC requires finalizers, which are very slow. Calling into C is very slow, because Go pessimistically assumes that C requires a large stack, and also calling into C does nasty things to the scheduler.
All of these things can be worked around, of course. For a while I considered compiling UPB to assembly, and rewriting that assembly into Go’s awful assembly syntax2, and then having Go assemble UPB out of that. This presents a few issues though, particularly because Go’s assembly calling convention is still in the stone age3 (arguments are passed on the stack), and because we would still need to do a lot of work to get UPB to match the protoreflect API.
Go also has a few… unique qualities that make writing a Protobuf interpreter an interesting challenge with exciting optimization opportunities.
First, of course, is the register ABI, which on x86_64 gives us a whopping nine argument and return registers, meaning that we can simply pass the entire parser state in registers all the time.
Second is that Go does not have much UB to speak of, so we can get away with a lot of very evil pointer crimes that we could not in C++ or Rust.
Third is that Protobuf Go has a robust reflection system that we can target if we design specifically for it.
Also, the Go ecosystem seems much more tolerant of less-than-ideal startup times (because the language loves life-before-main due to init() functions), so unlike UPB, we can require that the interpreter’s program be generated at runtime, meaning that we can design for online PGO. In other words, we have the perfect storm to create the first-ever Protobuf JIT compiler (which we also refer to as “online PGO” or “real-time PGO”).
Right now, hyperpb's API is very simple. There are hyperpb.Compile* functions that accept some representation of a message descriptor, and return a *hyperpb.MessageType, which implements the protoreflect type APIs. This can be used to allocate a new *hyperpb.Message, which you can shove into proto.Unmarshal and do reflection on the result. However, you can't mutate *hyperpb.Messages currently, because the main use-cases I am optimizing for are read-only. All mutations panic instead.
The hero use-case, using Buf’s protovalidate library, uses reflection to execute validation predicates. It looks like this:
```go
// Compile a new message type, deserializing an encoded FileDescriptorSet.
msgType := hyperpb.CompileForBytes(schema, "my.api.v1.Request")

// Allocate a new message of that type.
msg := hyperpb.NewMessage(msgType)

// Unmarshal like you would any other message, using proto.Unmarshal.
if err := proto.Unmarshal(data, msg); err != nil {
	// Handle parse failure.
}

// Validate the message. Protovalidate uses reflection, so this Just Works.
if err := protovalidate.Validate(msg); err != nil {
	// Handle validation failure.
}
```
We tell users to make sure to cache the compilation step because compilation can be arbitrarily slow: it’s an optimizing compiler! This is not unlike the same warning on regexp.Compile, which makes it easy to teach users how to use this API correctly.
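A sketch of that caching pattern (schema here holds the encoded FileDescriptorSet from the example above; parseRequest is a hypothetical helper):

```go
var requestType = sync.OnceValue(func() *hyperpb.MessageType {
	return hyperpb.CompileForBytes(schema, "my.api.v1.Request")
})

func parseRequest(data []byte) (*hyperpb.Message, error) {
	msg := hyperpb.NewMessage(requestType())
	if err := proto.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	return msg, nil
}
```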
In addition to the main API, there’s a bunch of performance tuning knobs for the compiler, for unmarshaling, and for recording profiles. Types can be recompiled using a recorded profile to be more optimized for the kinds of messages that actually come on the wire. hyperpb PGO4 affects a number of things that we’ll get into as I dive into the implementation details.
Most of the core implementation lives under internal/tdp. The main components are as follows:
tdp, which defines the “object code format” for the interpreter. This includes definitions for describing types and fields to the parser.
tdp/compiler, which contains all of the code for converting a protoreflect.MessageDescriptor into a tdp.Library, which contains all of the types relevant to a particular parsing operation.
tdp/dynamic defines what dynamic message types look like. The compiler does a bunch of layout work that gets stored in tdp.Type values, which a dynamic.Message interprets to find the offsets of fields within itself.
tdp/vm contains the core interpreter implementation, including the VM state that is passed in registers everywhere. It also includes hand-optimized routines for parsing varints and validating UTF-8.
tdp/thunks defines archetypes, which are classes of fields that all use the same layout and parsers. This corresponds roughly to a (presence, kind) pair, but not exactly. There are around 200 different archetypes.
This article won’t be a deep-dive into everything in the parser, and even this excludes large portions of hyperpb. For example, the internal/arena package is already described in a different blogpost of mine. I recommend taking a look at that to learn about how we implement a GC-friendly arena for hyperpb .
Instead, I will give a brief overview of how the object code is organized and how the parser interprets it. I will also go over a few of the more interesting optimizations we have.
Every MessageDescriptor that is reachable from the root message (either as a field or as an extension) becomes a tdp.Type. This contains the dynamic size of the corresponding message type, a pointer to the type's default parser (there can be more than one parser for a type), and a variable number of tdp.Field values. These specify the offset of each field and provide accessor thunks, for actually extracting the value of the field.
A tdp.TypeParser is what the parser VM interprets alongside encoded Protobuf data. It contains all of the information needed for decoding a message in compact form, including tdp.FieldParsers for each of its fields (and extensions), as well as a hashtable for looking up a field by tag, which is used by the VM as a fallback.
The tdp.FieldParsers each contain:
The same offset information as a tdp.Field.
The field’s tag, in a special format.
A function pointer that gets called to parse the field.
The next field(s) to try parsing after this one is parsed.
Each tdp.FieldParser actually corresponds to a possible tag on a record for this message. Some fields have multiple different tags: for example, a repeated int32 can have a VARINT-type tag for the repeated representation, and a LEN-type tag for the packed representation.
Each field specifies which fields to try next. This allows the compiler to perform field scheduling, by carefully deciding which order to try fields in based both on their declaration order and a rough estimation of their “hotness”, much like branch scheduling happens in a program compiler. This avoids almost all of the work of looking up the next field in the common case, because we have already pre-loaded the correct guess.
I haven’t managed to nail down a good algorithm for this yet, but I am working on a system for implementing a type of “branch prediction” for PGO, that tries to provide better predictions for the next fields to try based on what has been seen before.
The offset information for a field is more than just a memory offset. A tdp.Offset includes a bit offset, for fields which request allocation of individual bits in the message’s bitfields. These are used to implement the hasbits of optional fields (and the values of bool fields). It also includes a byte offset for larger storage. However, this byte offset can be negative, in which case it’s actually an offset into the cold region.
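Illustratively, such an offset might look something like this (not hyperpb's actual definition):

```go
type Offset struct {
	Bit  uint32 // bit within the message's bitfield words (hasbits, bools)
	Byte int32  // byte offset into the message; negative means the cold region
}
```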
In many messages, most fields won’t be set, particularly extensions. But we would like to avoid having to allocate memory for the very rare (i.e., “cold”) fields. For this, a special “cold region” exists in a separate allocation from the main message, which is referenced via a compressed pointer. If a message happens to need a cold field set, it takes a slow path to allocate a cold region only if needed. Whether a field is cold is a dynamic property that can be affected by PGO.
The parser is designed to make maximal use of Go’s generous ABI without spilling anything to the stack that isn’t absolutely necessary. The parser state consists of eight 64-bit integers, split across two types: vm.P1 and vm.P2. Unfortunately, these can’t be merged due to a compiler bug, as documented in vm/vm.go.
Every parser function takes these two structs as its first two arguments, and returns them as its first two results. This ensures that register allocation tries its darnedest to keep those eight integers in the first eight argument registers, even across calls. This leads to the common idiom of
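```go
// DoSomething stands in for any parser routine with this signature.
p1, p2 = DoSomething(p1, p2)
```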
Overwriting the parser state like this ensures that future uses of p1 and p2 use the values that DoSomething places in registers for us.
I spent a lot of time and a lot of profiling catching all of the places where Go would incorrectly spill parser state to the stack, which would result in stalls. I found quite a few codegen bugs in the process. Particularly notable (and shocking!) is #73589. Go has somehow made it a decade without a very basic pointer-to-SSA lifting pass (for comparison, this is a heavy-lifting cleanup pass (mem2reg) in LLVM).
The core loop of the VM goes something like this:
1. Are we out of bytes to parse? If so, pop a parser stack frame5. If we popped the last stack frame, parsing is done; return success.
2. Parse a tag. This does not fully decode the tag, because tdp.FieldParsers contain a carefully-formatted, partially-decoded tag to reduce decoding work.
3. Check if the next field we would parse matches the tag.
   a. If yes, call the function pointer tdp.Field.Parser; update the current field to tdp.Field.NextOk; goto 1.
   b. If no, update the current field to tdp.Field.NextErr; goto 3.
   c. If “no” has happened “enough times”, fall through.
4. Slow path: hit tdp.Field.Tags to find the matching field for that tag.
   a. If it matches, go to 3a.
   b. If not, this is an unknown field; put it into the unknown field set; parse a tag and goto 4.
Naturally, this is implemented as a single function whose control flow consists exclusively of ifs and gotos, because getting Go to generate good control flow otherwise proved too hard.
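To make that shape concrete, here is a heavily simplified sketch. Every name, the miss limit, and the stubbed-out state helpers are mine, and the real parser threads vm.P1/vm.P2 rather than bare uint64s:

```go
package vmsketch

// Illustrative sketch of the VM's control-flow shape: one function, ifs and
// gotos. Everything here is an assumption for illustration.

type fieldParser struct {
	tag     uint32
	parse   func(p1, p2 uint64) (uint64, uint64)
	nextOk  *fieldParser
	nextErr *fieldParser
}

type typeParser struct {
	byTag map[uint32]*fieldParser // fallback hashtable from tag to field
}

const maxMisses = 4 // arbitrary: how many mismatches before the hashtable fallback

func run(p1, p2 uint64, field *fieldParser, ty *typeParser) (uint64, uint64) {
	var tag uint32
	var misses int

outer: // step 1: out of bytes?
	if outOfBytes(p1, p2) {
		if lastFrame(p1, p2) {
			return p1, p2 // parsing is done
		}
		p1, p2 = popFrame(p1, p2)
		goto outer
	}
	// step 2: parse (most of) a tag.
	p1, p2, tag = parseTag(p1, p2)
	misses = 0

try: // step 3: does the predicted field match?
	if field.tag == tag {
		p1, p2 = field.parse(p1, p2) // the indirect call discussed below
		field = field.nextOk
		goto outer
	}
	field = field.nextErr
	misses++
	if misses < maxMisses {
		goto try
	}

	// step 4, slow path: look the tag up in the hashtable.
	if f, ok := ty.byTag[tag]; ok {
		field = f
		goto try
	}
	p1, p2 = recordUnknown(p1, p2, tag)
	goto outer
}

// Stubs standing in for real parser-state manipulation.
func outOfBytes(p1, p2 uint64) bool                            { return p1 == 0 }
func lastFrame(p1, p2 uint64) bool                             { return p2 == 0 }
func popFrame(p1, p2 uint64) (uint64, uint64)                  { return p1, p2 }
func parseTag(p1, p2 uint64) (uint64, uint64, uint32)          { return p1, p2, 0 }
func recordUnknown(p1, p2 uint64, tag uint32) (uint64, uint64) { return p1, p2 }
```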
Now, you might be wondering why the hot loop for the parser includes calling a virtual function. Conventional wisdom holds that virtual calls are slow. After all, the actual virtual call instruction is quite slow, because it’s an indirect branch, meaning that it can easily stall the CPU. However, it’s actually much faster than the alternatives in this case, due to a few quirks of our workload and how modern CPUs are designed:
Modern CPUs are not great at traversing complex “branch mazes”. This means that selecting one of ~100 alternatives using branches, even if they are well-predicted and you use unrolled binary search, is still likely to result in frequent mispredictions, and is an obstacle to other JIT optimizations in the processor’s backend.
Predicting a single indirect branch with dozens of popular targets is something modern CPUs are pretty good at. Chips and Cheese have a great writeup on the indirect prediction characteristics of Zen 4 chips.
In fact, the “optimized” form of a large switch is a jump table, which is essentially an array of function pointers. Rather than doing a large number of comparisons and direct branches, a jump table turns a switch into a load and an indirect branch.
This is great news for us, because it means we can make use of a powerful assumption about most messages: most messages only feature a handful of field archetypes. How often is it that you see a message which has more in it than int32, int64, string, and submessages? In effect, this allows us to have a very large “instruction set”, consisting of all of the different field archetypes, but a particular message only pays for what it uses. The fewer archetypes it uses at runtime, the better the CPU can predict this indirect jump.
On the other hand, we can just keep adding archetypes over time to specialize for common parse workloads, which PGO can select for. Adding new archetypes that are not used by most messages does not incur a performance penalty.
We’ve already discussed the hot/cold split, and briefly touched on the message bitfields used for bools and hasbits. I’d like to mention a few other cool optimizations that help cover all our bases, as far as high-performance parsing goes.
The fastest memcpy implementation is the one you don’t call. For this reason, we try, whenever possible, to avoid copying anything out of the input buffer. strings and bytes are represented as zc.Ranges: an offset and a length packed into a single uint64. Protobuf is not able to handle lengths greater than 2GB properly, so we can assume that this covers all the data we could ever care about. This means that a bytes field is 8 bytes, rather than 24, in our representation.
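A sketch of the idea, with a bit split I chose for illustration (the real zc.Range encoding may differ):

```go
package zcsketch

// Illustrative sketch of a zero-copy range: an offset and a length packed into
// one uint64. Since Protobuf can't represent lengths over 2GB, 32 bits for
// each half is plenty.
type Range uint64

func NewRange(offset, length uint32) Range {
	return Range(uint64(offset)<<32 | uint64(length))
}

func (r Range) Offset() int { return int(r >> 32) }
func (r Range) Len() int    { return int(uint32(r)) }

// Bytes resolves the range against the original input buffer with no copying.
func (r Range) Bytes(src []byte) []byte {
	return src[r.Offset() : r.Offset()+r.Len()]
}
```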
Zerocopy is also used for packed fields. For example, a repeated double will typically be encoded as a LEN record. The number of float64s in this record is equal to its length divided by 8, and the float64s are already encoded in IEEE 754 format for us. So we can just retain the whole repeated field as a zc.Range. Of course, we need to be able to handle cases where there are multiple disjoint records, so the backing repeated.Scalars can also function as a 24-byte arena slice. Being able to switch between these modes gracefully is a delicate and carefully-tested part of the repeated field thunks.
Surprisingly, we also use zerocopy for varint fields, such as repeated int32. Varints are variable-length, so we can’t just index directly into the packed buffer to get the nth element… unless all of the elements happen to be the same size. In the case that every varint is one byte (so, between 0 and 127), we can zerocopy the packed field. This is a relatively common scenario, too, so it results in big savings6. We already count the number of varints in the packed field in order to preallocate space for it, so this doesn’t add extra cost. This counting is very efficient because I have manually vectorized the loop.
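A scalar sketch of that check (the names are mine, and hyperpb's actual loop is vectorized):

```go
package zcsketch

// countVarints counts the varints in a packed record and reports whether every
// one is a single byte (i.e. every value is in [0, 127]). When allOneByte is
// true, the record can be indexed directly and kept zero-copy.
func countVarints(packed []byte) (count int, allOneByte bool) {
	for _, b := range packed {
		if b < 0x80 { // a byte without the continuation bit terminates a varint
			count++
		}
	}
	return count, count == len(packed)
}
```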
PGO records the median size of each repeated/map field, and that is used to calculate a “preload” for each repeated field. Whenever the field is first allocated, it is pre-allocated using the preload to try to right-size the field with minimal waste.
Using the median ensures that large outliers don’t result in huge memory waste; instead, this guarantees that at least 50% of repeated fields will only need to allocate from the arena once. Packed fields don’t use the preload, since in the common case only one record appears for packed fields. This mostly benefits string- and message-typed repeated fields, which can’t be packed.
We don’t use Go’s built-in map, because it has significant overhead in some cases: in particular, it has to support Go’s mutation-during-iteration semantics, as well as deletion. Although both are Swisstables7 under the hood, my implementation can afford to take a few shortcuts. Rolling our own also lets the implementation use arena-managed memory. swiss.Tables are used both for the backing store of map fields, and for maps inside of tdp.Types.
Currently, the hash used is the variant of fxhash used by the Rust compiler. This greatly outperforms Go’s maphash for integers, but maphash is better for larger strings. I hope to switch to maphash at some point for large strings, but it hasn’t been a priority.
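For reference, the mixing step of that fxhash variant looks roughly like this; the rotate/xor/multiply structure and the constant come from rustc's FxHasher, but treat this sketch as an approximation rather than hyperpb's exact hash:

```go
package hashsketch

import "math/bits"

// fxSeed is the multiplier used by rustc's FxHasher on 64-bit platforms.
const fxSeed = 0x517cc1b727220a95

// fxMix folds one 64-bit word into the hash state: rotate, xor, multiply.
// This is extremely cheap for integer keys; maphash does better on long strings.
func fxMix(state, word uint64) uint64 {
	return (bits.RotateLeft64(state, 5) ^ word) * fxSeed
}

// hashUint64 hashes a single integer key.
func hashUint64(key uint64) uint64 {
	return fxMix(0, key)
}
```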
Hitting the Go allocator is always going to be a little slow, because it’s a general-case allocator. Ideally, we should learn the estimated memory requirements for a particular workload, and then allocate a single block of that size for the arena to portion out.
The best way to do this is via arena reuse. In the context of a service, each request puts a bounded lifetime on the message that it parses: once that lifetime is over (the request is complete), the message is discarded. This gives the programmer an opportunity to reset the backing arena, so that it keeps its largest memory block for re-allocation.
You can show that, over time, this will cause the arena to never hit the Go allocator. If the largest block is too small for a message, a block twice as large will wind up getting allocated. Messages of similar size will keep doubling the largest block until it is large enough to fit an entire message, at which point memory usage is at worst 2x the size of that message. Note that, thanks to extensive use of zero-copy optimizations, we can often avoid allocating memory for large portions of the message.
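A toy sketch of that growth-and-reuse policy (this is not hyperpb's arena, just an illustration of why the doubling converges):

```go
package arenasketch

// Toy arena showing the reuse behavior described above: Alloc doubles the
// block when it runs out, and Reset keeps the largest block, so a steady-state
// workload eventually stops hitting the Go allocator.
type arena struct {
	block []byte // len = bytes handed out from the current block, cap = block size
}

// Alloc hands out n bytes, growing the block geometrically when it runs out.
func (a *arena) Alloc(n int) []byte {
	if len(a.block)+n > cap(a.block) {
		size := 2 * cap(a.block) // double the largest block...
		if size < n {
			size = n // ...but always make room for this allocation
		}
		a.block = make([]byte, 0, size)
	}
	a.block = a.block[:len(a.block)+n]
	return a.block[len(a.block)-n:]
}

// Reset forgets everything that was handed out but keeps the largest block.
// Only safe once the previously parsed message is no longer in use.
func (a *arena) Reset() {
	a.block = a.block[:0]
}
```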
Of course, arena re-use poses a memory safety danger, if the previously allocated message is kept around after the arena is reset. For this reason, it’s not the default behavior. Using arena resets is a double-digit percentage improvement, however.
Go does not properly support unions, because the GC does not keep the book-keeping necessary to distinguish a memory location that may hold either an integer or a pointer at runtime. Instead, this gets worked around using interfaces, which always hold a pointer to some memory. Go’s GC can handle untyped pointers just fine, so this just works.
The generated API for Protobuf Go uses interface values for oneofs. This API is… pretty messy to use, unfortunately, and triggers unnecessary allocations (much like optional fields do in the open API).
However, my arena design (read about it here) makes it possible to store arena pointers on the arena as if they are integers, since the GC does not need to scan through arena memory. Thus, our oneofs are true unions, like in C++.
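For contrast with the interface-based approach, a true-union oneof can be sketched like this; the layout and names are my own illustration, not hyperpb's generated code:

```go
package oneofsketch

// Illustrative sketch of a oneof stored as a true union: one word of payload
// plus a small discriminant, instead of an interface value per case. Storing a
// pointer case as a plain integer (e.g. an arena offset) is only possible
// because, as described above, the GC never scans arena memory.
type oneof struct {
	which uint32 // field number of the member that is set (0 = none)
	bits  uint64 // an int64, a float64's bits, or an arena offset to a string/message
}

func (o *oneof) SetInt64(fieldNumber uint32, v int64) {
	o.which = fieldNumber
	o.bits = uint64(v)
}

func (o *oneof) SetMessage(fieldNumber uint32, arenaOffset uint64) {
	o.which = fieldNumber
	o.bits = arenaOffset // resolved against the arena on access
}
```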
hyperpb is really exciting because its growing JIT capabilities offer an improvement in the state of the art over UPB. It’s also been a really fun challenge working around Go compiler bugs to get the best assembly possible. The code is already so well-optimized that re-building the benchmarks with the Go compiler’s own PGO mode (based on a profile collected from the benchmarks) didn’t really seem to move the needle!
I’m always working on making hyperpb better (I get paid for it!) and I’m always excited to try new optimizations. If you think of something, file an issue! I have meticulously commented most things within hyperpb, so it should be pretty easy to get an idea of where things are if you want to contribute.
I would like to write more posts diving into some of the weeds of the implementation. I can’t promise anything, but there’s lots to talk about. For now… have fun source-diving!
There are a lot of other things we could be doing: for example, we could be using SIMD to parse varints, we could have smarter parser scheduling, we could be allocating small submessages inline to improve locality… there’s still so much we can do!
And most importantly, I hope you’ve learned something new about performance optimization!
vtprotobuf gets a lot of things wrong that make it beat us in like two benchmarks, because it’s so sloppy. For example, vtprotobuf believes that it’s ok to not validate UTF-8 strings. This is non-conforming behavior. It also believes that map entries’ fields are always in order and always populated, meaning that valid Protobuf messages containing maps can be parsed incorrectly. This sloppiness is unacceptable, which is why hyperpb goes to great lengths to implement all of Protobuf correctly. ↩
Never gonna let Rob live that one down. Of all of Rob’s careless design decisions, the assembler is definitely one of the least forgivable ones. ↩
There are only really two pieces of code in hyperpb that could benefit from hand-written assembly: varint decoding and UTF-8 validation. Both of these vectorize well; however, ABI0 is so inefficient that no hand-written implementation would be faster.
If I do wind up doing this, it will require a build tag like hyperasm, along with something like -gcflags=buf.build/go/hyperpb/internal/asm/...=-+ to treat the assembly implementations as part of the Go runtime, allowing the use of ABIInternal. But even then, this only speeds up parsing of large (>2 byte) varints. ↩
This is PGO performed by hyperpb itself; this is unrelated to gc’s own PGO mode, which seems to not actually make hyperpb faster. ↩
Yes, the parser manages its own stack separate from the goroutine stack. This ensures that nothing in the parser has to be reentrant. The only time the stack is pushed to is when we “recurse” into a submessage. ↩
Large packed repeated fields are where the biggest wins are for us. Being able to zero-copy large packed int32 fields full of small values allows us to eliminate all of the overhead that the other runtimes are paying for; we also choose different parsing strategies depending on the byte-to-varint ratio of the record.
Throughput for various repeated field benchmarks. This excludes the repeated fixed32 benchmarks, since those achieve such high throughputs (~20 Gbps) that they make the chart unreadable.
These optimizations account for the performance difference between descriptor/#00 and descriptor/#01 in the first benchmark chart. The latter is a FileDescriptorSet containing SourceCodeInfo, Protobuf’s janky debuginfo format. It is dominated by repeated int32 fields.
NB: This chart is currently missing its Y-axis; I need to have it re-made. ↩
Map parsing performance has been a bit of a puzzle. vtprotobuf cheats by rejecting some valid map entry encodings, such as (in Protoscope) {1: {"key"}} (value is implied to be ""), while mis-parsing others, such as {2: {"value"} 1: {"key"}} (fields can go in any order), since they don’t actually validate the field numbers like hyperpb does.
Here’s where the benchmarks currently stand for maps:
Throughput for various map parsing benchmarks.
Maps, I’m told, are not very popular in Protobuf, so they’re not something I have tried to optimize as hard as packed repeated fields. ↩