Two Faces of Uncertainty: An Epistemic Theory of Strategy

Imagine that you stumbled upon a genie, and he could grant you a single wish.  You might wish for a giant pile of “fuck you cash”; that’s pretty straightforward, we all know what money is.  I suppose you might have to make room for a lot of physical cash or get some special debit card, since your bank might get suspicious if it were to suddenly see ten million dollars appear in your account, but that’s all straightforward enough.  Even if it’s one of those devils that interprets your wishes as maliciously as possible, there’s not much they can do to twist what you’re asking for.

But maybe your wish is something a little bit more nuanced: you want to find true love or become an expert at programming, or wish away a psychological condition.  Now it gets tricky: how does it happen? Who is it that you meet, and how? What tradeoffs will occur in life as a consequence of your meeting them? Maybe instead you asked for a specific person to fall in love with you, but what is it that caused the attraction? Is it a sudden physical attraction, something they saw in you, or some moment of intimacy that ignited the spark? Most “reasons” you fell for someone might be rationalizations, but there still was a reason it happened, albeit an opaque one.

The programming example might seem a little bit more straightforward by comparison, but it’s actually much less so.  There might be a common body of knowledge essential for learning the basics of a skill, but when you get to the more advanced levels your experience becomes specific to the task at hand.  It’s not enough to merely enumerate things like what programming languages you’d like to know; every expert has a very specific body of knowledge that allows them to solve certain problems and not others.  Conversely, this same idiosyncrasy of experience is what allows masters of their craft to discover and formulate genuinely novel ideas and solutions; you can’t create anything new if there’s no unique signature to your comprehension.  Put another way, mastery doesn’t require 10,000 hours of practice, it requires 10,000 hours of experience.

Now, in theory this genie could grant you this kind of wish, but in order to do so they’d have to set up a sufficiently complex scenario, and this is where you have a problem.  Even if the genie isn’t a swindler along the lines of Elizabeth Hurley in Bedazzled, there are so many inevitable tradeoffs among countless factors involved that even the nicest genie in the universe would still have trouble figuring out which ones are and aren’t acceptable to you.  Unlike the wish for money, where (for all intents and purposes) you’ll acquire an edge regardless of context, these experience-based wishes require a narrative that captures the most salient aspects of your desired outcome.  Even then, you’d best not ask these kinds of things of a genie: any specific outcome pursued unconditionally invites the possibility of severe unintended consequences; to deal with this requires constant tinkering with not only your means but also your ends.  All of this boils down to the inescapable fact that the significance of any outcome is path-dependent: you can’t truly understand the import of a goal without looking at all of the events that preceded it.

Such worries don’t matter very much in the real world where no such genies exist, but it begs the question of whether there’s any value at all in setting goals, let alone strategizing about how to achieve them.  If we can’t anticipate or even understand the final outcome, then why bother trying to get from point A to point B? The point, however, is not to get to point B, it’s to get away from point A.  In the most basic sense, it’s a kind of “annealing”, in which injecting randomness into a situation can break you out of a rut into a different situation.  In most cases, however, random action is not enough: to achieve escape velocity, you need to coordinate your actions by means of a unifying strategy.
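The “annealing” metaphor comes from an actual optimization technique, and a minimal sketch shows why injected randomness helps: early on, the algorithm sometimes accepts moves that make things worse, which is precisely what lets it climb out of a shallow rut that pure downhill motion would never leave.  The landscape and all the parameters below are invented purely for illustration.

```python
import math
import random

def f(x):
    # toy landscape: a shallow rut near x = +1 and a deeper basin near x = -1
    return (x * x - 1) ** 2 + 0.3 * x

def anneal(x, temp=2.0, cooling=0.995, steps=4000, seed=0):
    """Simulated annealing: propose random moves, and accept even harmful
    ones with a probability that shrinks as the temperature cools.  The
    early "bad" moves are what allow escape from a local rut."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)
        delta = f(candidate) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x

# start in the shallow rut at x = +1; most runs wander out and settle
# in the deeper basin near x = -1
print(anneal(1.0))
```

Pure greedy descent started at x = +1 would tend to stay in the right-hand rut; the point of the temperature schedule is that getting away from point A comes before, and enables, finding a better point B.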

Strategic Narratives, Tactical Value

It’s not enough, however, to simply “coordinate” a number of actions; in order for such an idea to even make sense, its constituent “actions” need to be compared using a common system of coordinates.  To understand how this is done in a less nebulous way requires drawing a distinction between strategy and tactics.  Fortunately, a good distinction already exists thanks to an old essay by Venkatesh Rao:

Strategies are imagined stories about possible worlds, whose constraints are determined by elements of doctrine, and whose vocabulary is determined by available tactics.  Converting those stories into reality through appropriate mixes of deliberative, reactive and opportunistic planning, scheduling, resource allocation and risk management, in the fog of action, is the discipline of operations.

-Strategy, Tactics, Operations, and Doctrine: A Decision Language Tutorial

It’s worth noting how these definitions relate to our genie: a given “scenario” that a genie would have to construct is a “possible world”, whereas the overall narrative that we provide to capture the most important things about the scenario is equivalent to a set of “imagined stories”.  Tactics, by contrast, are defined in another part of the essay as “abstract action[s] that can be applied in any of a large class of situations that conform to set criteria”.  These may not seem like the unambiguous wishes one could make of our genie, but consider the things you could easily ask of a genie if they knew the context: if you were fighting a pitched battle and asked them to “blind the enemy”, it’s easy to infer how one could do so: take down their satellites, jam their communications, put up a smoke screen.  Even if there are many different manifestations, they all converge on a common purpose.  By contrast, wishing to be an expert at programming diverges in its ramifications depending on how the wish manifests.  Whether a wish is tactical or strategic is not a question of how idiomatic or abstract it is, but one of whether or not its significance can be narrowed down.  For those wondering about the other two terms, operations is equivalent to the “tinkering” that needs to be done in pursuit of any strategic goal.  As for doctrine, I’ll get back to that later on.

Before moving on, I’d like to make it clear that the concept of “unintended consequences” isn’t relevant to the distinction that I’ve made.  It’s possible, for example, that if you down the enemy’s satellites you might affect the communications of other countries and in doing so inadvertently cause them to declare war on you, but whatever consequences there may be after the fact, the wish can be granted without need for further qualification.  The reason why “unintended consequences” seem especially vicious with strategic wishes has to do with the fact that an entire scenario has to be constructed, which means that you are not only dealing with possible consequences of a given outcome, but also the details of an intractable number of events leading up to the outcome that you are unaware of.  But these are not actually “consequences” at all, they’re simply the unenumerable details that constitute one of countless possible worlds that could fit a given narrative.  A tactical wish, by contrast, has a specific meaning framed by its strategic context, and as I’ll explain in a moment, can be assigned a specific “value” even if our numeraire (inevitably) fails to capture all possible consequences of our actions.

“Cheap Tricks” and Compound Interest

To understand how tactics differ from strategy in that they can have “value” requires understanding how strategies are constructed.  As I mentioned before, a strategy defines its tactics by relating them according to a common set of coordinates.  As Venkat’s earlier definition suggests, such tactics form the “vocabulary” of a given strategy, juxtaposing strategy and tactics in a chicken-and-egg relationship that made way for the concept of the cheap trick: a “crystallizing insight” around which we form the schwerpunkt (roughly translatable to “focal point” or “epicenter”) of a new strategy on the basis of a temporarily exploitable “free lunch”.

This dialectic process is key to understanding how the value of a given tactic can be determined in a non-arbitrary way by framing it within a coherent strategy.  Specifically, a strategy is not an arbitrary a priori framework, but a system that gives tactics value by making them mutually fungible.  With this in place, tactical decisions are no longer a question of ambiguous tradeoffs but instead a process of arbitrage by which one consistently acquires what card-counters call “the edge”.  The idea that many qualitatively different actions can be made fungible like this sounds like black magic, but it’s actually not mysterious or complicated at all: all it requires is that your strategy’s constituent tactics pay compound interest.

It’s easiest to explain this using a relatively simple example: suppose you’re trying to become a great programmer.  One thing you have to do is practice, but of course you also have to sleep, or else you won’t be able to keep going.  In fact, you might want to take care to get more hours of sleep per night than you usually would in order to boost your productivity and memory retention.  This in turn means that you become a better programmer faster, which may allow you to get not just any programming job but one that gives you enough money and flexibility that you can reduce stress in other areas of your life and get more sleep.  In addition, a higher-paying job will probably be a more challenging one, which will allow you to accumulate valuable experience even faster.  All of these taken together form a virtuous cycle in which your returns grow at a non-linear rate.  While some such strategies can even achieve exponential speeds, this is in all likelihood limited to the domain of finance.
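A toy model makes the shape of this cycle visible (the coupling constants here are made up; only the qualitative contrast matters): when two tactics feed each other’s growth, the same per-step effort compounds instead of merely accumulating.

```python
def coupled_growth(steps=20, rate=0.1):
    """Skill and capacity (sleep, money, flexibility) each feed the
    other's growth, so the pair compounds geometrically."""
    skill, capacity = 1.0, 1.0
    for _ in range(steps):
        skill, capacity = skill + rate * capacity, capacity + rate * skill
    return skill

def isolated_growth(steps=20, rate=0.1):
    """The same per-step effort poured into skill alone adds up linearly."""
    skill = 1.0
    for _ in range(steps):
        skill += rate
    return skill

print(coupled_growth(), isolated_growth())  # roughly 6.73 vs 3.0
```

Starting from equal footing, the coupled pair grows by a factor of 1.1 per step (6.73x after twenty steps) while the isolated effort only triples: the mutual bolstering, not the raw effort, is what produces the curve.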

Note, however, that such compound interest does not come simply from going full-throttle on how much you practice or how difficult a job you take on.  Even assuming you can focus indefinitely, you’d still hurt yourself after a point by not making enough time for sleep or relaxation.  You also wouldn’t want to stick too much to a single specialty, since depending too much on other people for other skills could ultimately slow you down in your work.  One could even go a bit further and consider how practicing seemingly unrelated skills may ultimately bolster your progress by helping you better understand your specialty in subtle ways.  Your goal is not to maximize the tactic that gives you the most returns; instead, you want to calibrate them such that they provide the maximum possible synergy.  The mathematical analogue to this is a concept in finance known as the Kelly Criterion:

The Kelly Criterion is based on the idea that whenever you bet on anything, there are two things you want to do: (1) avoid any chance of bankruptcy by betting a percentage of your money rather than an absolute amount, and (2) bet the correct percentage of your money such that it maximizes the compound interest you receive.  To use a simple example, imagine a slightly weighted coin (60% heads) that you’ll flip 100 times.  If you bet an absolute number each time, then a single streak of bad luck can bankrupt you, and good luck will not give you significant gains anyway.  By contrast, if you bet a certain percentage, it’s impossible to go bankrupt, and you’ll be able to reap compounding returns from cumulative wins.  In fact, the amount you’re expected to win is completely independent of the order of the flips.  In this case, the optimal amount to bet is 20%; bet more, and you’ll actually make less in the long run.  Why? Without going into unnecessary detail about the precise math, there’s such a thing as betting too aggressively because if you lose 20% of your money and then win 20% back, you’re still down by 4%, so the volatility always imposes a tax on the edge that you’ve gained.*  The smaller the volatility and the larger the edge, the higher the optimal betting number.
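The arithmetic of the coin example is easy to check.  For an even-money bet won with probability p, the expected log-growth per flip from betting a fraction f of your bankroll is p·log(1+f) + (1−p)·log(1−f), which peaks at the Kelly fraction f* = 2p − 1.  A short sketch (the 60% coin comes from the example above; the rest is standard Kelly arithmetic):

```python
import math

def growth_rate(p, f):
    """Expected log-growth per flip when betting fraction f of bankroll
    on an even-money coin that lands in your favor with probability p.
    Because bets are fractional, the final bankroll is
    start * (1+f)**wins * (1-f)**losses -- independent of flip order."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6                 # the 60% coin from the example
f_star = 2 * p - 1      # Kelly fraction for an even-money bet: 20%

for f in (0.1, 0.2, 0.3, 0.4):
    print(f"bet {f:.0%}: log-growth {growth_rate(p, f):+.4f} per flip")
```

Betting 20% beats both 10% and 30%, and at 40% the growth rate actually turns negative: despite a genuine edge, sufficiently aggressive betting loses money in the long run, which is exactly the volatility tax at work.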

Note that this doesn’t apply only to a single series of bets: the Kelly criterion is generally used to figure out how you should spread your bets across multiple possibilities.  The only difference is that in this case, instead of just figuring out the optimal size for a given bet, you may also have to compare your bets when your individually “optimal” bets total up to more than 100% of your cash.
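A naive sketch of that allocation step (my own simplification, not the exact simultaneous-Kelly solution, which requires jointly maximizing expected log-wealth): size each bet by its individual Kelly fraction, then scale everything down proportionally if the total would exceed the bankroll.

```python
def kelly_fraction(p):
    """Kelly fraction for an even-money bet won with probability p."""
    return max(0.0, 2 * p - 1)

def allocate(probs):
    """Spread the bankroll across simultaneous even-money bets.  If the
    individual Kelly fractions total more than 100% of cash, shrink them
    proportionally so the whole pot is never committed."""
    fractions = [kelly_fraction(p) for p in probs]
    total = sum(fractions)
    if total > 1.0:
        fractions = [f / total for f in fractions]
    return fractions

# individual fractions of 20%, 40% and 80% total 140%, so all are scaled down
print(allocate([0.6, 0.7, 0.9]))
```

Proportional scaling keeps the relative sizes of the bets intact while respecting the cardinal Kelly rule: never expose the entire bankroll at once.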

The fact that a certain distribution of betting will maximize your compound interest gives you an objective criterion for how to distribute your bets.  Similarly, linking various tactics in such a way that they pay compound interest allows you to assign each of them an objective value based on what will pay the most compound interest.  Even though a good night’s sleep and a hard day’s work seem like apples and oranges, one can compare the relative values of the two so long as they both support one another in reaching the same goal.  The fact that they bolster one another is essential: otherwise, you cannot use the gains from one tactic to apply more resources to another, in which case there’s no Kelly betting of any kind.

The reason this mutual bolstering is so important is that otherwise the “value” of a decision becomes contingent on unknown outcomes.  What do I mean by this? If you’re making a one-time bet equivalent to most of your net worth, there’s no way to assign a value to this even if the odds of winning are 90%.  Maybe it’s worth the risk, maybe it’s not, but this is the territory of strategy: a subjective degree of belief in whether the tradeoff is “worth it”.  Luckily, the Kelly criterion shows us that when we bet by allocating percentages of a common currency, we don’t have to worry about a streak of bad luck.  So while you can’t claim to have an “edge” in a one-time bet of all your money (however lopsided it may be), the “edge” is a very real concept in Kelly betting because you’re all but guaranteed to benefit in proportion to how much of an “edge” you have.  For those who are into probability theory, you may have noticed that where strategy is a Bayesian concept, based on subjective degree-of-belief, tactics belong in the domain of frequentism: given enough repetitions, your gains will converge on a certain number.

Tactical decisions, in this sense, are about efficiency: you’re looking to optimize your edge based on your numeraire, albeit one that’s been decided by how these various tactics relate to one another.  Strategy, by contrast, is not about efficiency: leaving aside the fact that strategic decisions cannot be made using a numeraire, they are fundamentally ambiguous choices and will therefore on average be a net loss when looked at through a randomly chosen numeraire, since betting on pure randomness is expected to be a net loss (a win/loss pair of 20% = a loss of 4%).  Despite the fact that strategies are fundamentally inefficient, they are nonetheless effective in that they are able to break you out of a given equilibrium.  Seen another way, strategy is the means by which entities maintain a state of disequilibrium against the forces of entropy.

An Epistemic Cleavage

There is still one question that hasn’t been directly addressed: if the consequences of our actions are ultimately unpredictable, then what’s the point of having a numeraire? The answer to this question is that a numeraire is not supposed to measure “value” in some omniscient sense, nor anticipate any and all possibilities: you can only infer value in a context that relates all relevant outcomes such that they’re reliably fungible.  There may be unforeseen consequences that could arise from any such action, but to value those possibilities within the numeraire simply doesn’t make sense: these unknowns are strategic, not tactical concerns.

This is not to say that we simply ignore contingencies: it’s just that they don’t factor into the tactical level of decision making.  On a strategic level, however, we deal with these hidden dangers and opportunities through brainstorming, contingency planning, and experimentation.  In finance, this dichotomy is expressed through a framework known as Value at Risk (VaR), as explained by Aaron Brown in his book Red-Blooded Risk: on “normal” days (somewhere around 99% of days, give or take), losses will be less than some given number (the number itself is determined by statistical algorithms that are rigorously tested against past data and current results).  On the other 1% or so of days, losses will exceed this number; these are known as “breaks”.  It’s important to understand that this is not a worst-case scenario: it signifies the point at which you are no longer in familiar territory and novel measures must be brought to bear.
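Brown’s two-regime picture can be sketched with a historical-simulation VaR (a deliberately crude stand-in for the sophisticated algorithms he describes; the P&L numbers below are invented for illustration):

```python
import math

def var_threshold(daily_pnl, tail=0.01):
    """Historical-simulation Value at Risk: a loss level that past P&L
    exceeded on roughly `tail` of days.  Days beyond it are "breaks" --
    the signal that you've left familiar territory, not a worst case."""
    losses = sorted((-x for x in daily_pnl), reverse=True)  # biggest loss first
    k = max(1, math.ceil(tail * len(losses)))
    return losses[k - 1]

# 198 quiet days plus two bad ones (all numbers invented)
pnl = [0.1 * (-1) ** i for i in range(198)] + [-5.0, -8.0]
threshold = var_threshold(pnl)                  # the 2nd-worst loss: 5.0
breaks = sum(1 for x in pnl if -x > threshold)  # only the -8.0 day breaks
```

The threshold says nothing about how bad a break can get; it only marks where tactical, within-the-model reasoning stops being reliable.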

This difference between the normal days and the “break” days represents a fundamental boundary between domains.  It exists not just in finance, but in science as “normal science” and “extraordinary science”, in geopolitics as political systems and their revolutions, in evolution as direct competition and the formation of new niches, in storytelling as canonicity and breach.  In each of these situations, there is a territory beyond the boundaries of VaR in which the rules are no longer legible, at which tactical decisions can no longer be relied upon.  This “VaR boundary”, by the way, is analogous to Venkat’s definition of “doctrine”.

Strategic ideas, in stark contrast to tactical decisions, arise through a process of bricolage by incrementally linking tactics to one another until a coherent schwerpunkt forms the nucleus of a new strategy.  These tactics, coming from various disparate domains, can only be linked through the use of metaphor, from which a new domain is defined that unifies these seemingly unrelated ideas into a coherent whole.  This concept, formulated by John Boyd in his magnum opus, A Discourse on Winning and Losing, was succinctly recapitulated in his parable about how one could conceivably come up with the concept of the snowmobile:

Imagine that you are on a ski slope with other skiers.  Imagine that you are in Florida riding in an outboard motor boat, maybe even towing water skiers.  Imagine that you are riding a bicycle on a nice spring day.  Imagine that you are a parent taking your son to a department store and that you notice he is fascinated by the toy tractors or tanks with rubber caterpillar treads.
Now imagine that you pull the skis off but you are still on the ski slope.  Imagine also that you remove the outboard motor from the motor boat, and you are no longer in Florida.  And from the bicycle you remove the handlebar and discard the rest of the bike.  Finally, you take off the rubber treads from the toy tractor or tanks; this leaves only the following separate pieces: skis, outboard motor, handlebars and rubber treads.
He then asks: what emerges when you pull all this together? SNOWMOBILE
The message is obvious; to discern what is happening, we must interact in a variety of ways with our environment.  We must be able to look at the world from numerous perspectives so that we can generate mental images or impressions (orientation) that correspond with “what’s happening now?”

That last part is key to understanding the nature of this process: given the inevitable limits of thinking within a single paradigm, we must constantly re-organize reality with new perspectives.  These novel perspectives function as options, things you can (but are not obligated to) use should they prove useful in the future.  In finance, this is equivalent to betting on events that reside on the tails of probability distributions, also known as black swans, by buying out-of-the-money options; any individual one is unlikely to pay off, but both their value and their likelihood are fundamentally incomputable, leading most to underestimate their ultimate impact in the big picture.

Although it might be tempting to assume that these black swans are the only birds you should keep your eye on (and the statistics often seem to suggest as much), I’m convinced that these two sides of the VaR boundary coexist in a Yin-Yang relationship.  In the book The Black Swan, this avian vulnerability is attributed to “the narrative fallacy”, which highlights our tendency to look for patterns even where none exist.  For many readers, the takeaway is ironically simplistic: avoid causal explanations, at least the ones made by people who are smarter than you.

The problem with this is that once you insulate yourself from anything that stinks of “narrative”, you’re stuck with the narratives you already have (which you’re probably not even aware of).  All narratives are dangerous, but those that you never let go of are a death sentence.  Skepticism comes not from a frantic evasion of Descartes’ demon in which one rejects all incoming information, but from an acknowledgement of the perennial tension between narratives and their inevitable contradictions.

The fundamental limit that defines the contours of the VaR boundary is determined by the inevitable semantic gaps that exist in the creation of any schwerpunkt.  No matter what defining metaphors you use to relate various tactics to one another, there will be parts that just don’t match; every metaphor, even the ones that define our most subconscious levels of thinking, works by foregrounding some parallels while obscuring others.  Many are quick to dismiss such voluntary bias as irrational if not downright foolish, but it’s a well-known truth that an idea that explains everything explains nothing.

However long this maxim may have been known, it was only formally demonstrated in the middle of the 20th century by Kurt Gödel’s incompleteness theorems, which showed that a sufficiently powerful mathematical system cannot be both consistent and complete.  For a system to be consistent, it must never prove a contradiction; for it to be complete, every true statement expressible in it must be provable.  While the logic of the proof itself is beyond the scope of this essay, the takeaway is that any sufficiently powerful system of logic will contain truths it cannot prove on its own terms; in other words, there exists no all-encompassing system of logic that stands on its own.

Any and all doctrine that forms the basis for a given strategy suffers this same tragic flaw: a system that is perfectly sound will be fundamentally limited in its expressiveness.  We cannot move past banality without accepting that there will be a point at which our underlying assumptions break down.  These breakdowns are nonetheless essential to adapting to changing conditions: feedback comes not from statistical inference on a stream of raw sense-data (which would require a sound and complete organizing framework), but a catabolic process by which tactics are pulled from various domains to form a new schwerpunkt.

Where the unanticipated breakdowns of doctrine correspond to harmful, or “negative” black swans, “positive” black swans find their home in the process of synthesis that gives birth to unforeseen insights.  Even then, however, there is no escaping from the pull of entropy.  While some fantasize about the idea of finding a “right tail” that outweighs the “left tail”**, the idea doesn’t actually work because these “tails” do not objectively exist as part of a numeraire; they are ultimately signifiers of what we cannot understand, an acknowledgement of where we’re flying by the seat of our pants under the gaze of an angry and impulsive god.

None of this is to suggest that we “shouldn’t get out of bed”, as some people put it.  Life is not about avoiding mutilation, but figuring out what’s worth enduring mutilation for: great scientists have died from voluntary radiation exposure, economic progress climbs across the bodies of fallen entrepreneurs who went ahead with meager odds of success, technological innovation is reliably accompanied by economic booms and busts, and as the actual writer of The Black Swan has noted many times, the ancients were afraid not of death, but of an unheroic death.

Nor is this to say that we should be completely uninhibited in our risk taking: while one may eventually feel the need to put all of their chips on the table, this only makes sense as a last resort when you feel that the bet really is worth it, and nobody but you can decide when that moment comes.  The purpose of strategic thinking is ultimately the same as that of tactical thinking: to allocate your bets appropriately so that you can achieve momentum while avoiding the catastrophic failure that comes with overbetting.  Given the lack of a numeraire, however, this process works instead by looking for positive black swans while avoiding the possibility of ruin, the latter of which requires that you don’t inadvertently bet the entire pot.  While there’s no guarantee that you’ll always be aware of when you’re overbetting, that’s no excuse to do it knowingly.

This point about overbetting highlights the contrast between my cautionary attitude about GMOs and my lack of concern about “superintelligent” AI.  In the case of the former, there is a clear mechanism by which a novel organism could gain dominance over its ecosystem and proceed to spread and do the same thing elsewhere–we’ve seen the destructive effects of an invasive species in a region before, what happens when a species is foreign to the entire biosphere? It’s unlikely, but the scenario is clearly defined by a mechanism of compound interest.  One might say that in billions of years nature never did this (though maybe it did to an extent), but nature also never played matchmaker for a fish and an ear of corn.  The idea of an emerging AI superintelligence, by contrast, cannot be well-defined in this way: gains in processing power, however exponential, say nothing about a qualitative change in what a computer can do any more than increasing the horsepower of a car makes it do anything other than drive faster.  One might object that one can always come up with a Rube Goldberg-style narrative if they try hard enough, but the issue there is that once you reach a certain level of contingency in your explanations, you’re no longer talking about fat tails.

This is a very important caveat, as fat tails are what distinguish signal from noise in the absence of certainty.  The more preconditions a given scenario relies on, the less it can be distinguished from a randomly chosen “possible world”.  When we detect plausible patterns of compound interest, however, the number of possible worlds containing the idea vastly increases, enough so that a genuine pattern emerges.  Storytelling is what allows us to identify these patterns.

It’s a Spiral, Stupid!

All of this talk about uncertainty and how to handle it leads me back to what I consider to be one of the most important questions of all time: to try or not to try? Interestingly, the answer from the most seasoned people usually seems to be the latter, but on many levels it just seems too simplistic: I’d hardly consider it a good idea to eat a pint of ice cream every day and stop showing up to my job.  But at the same time, there is a point: as I’ve elaborated on before[link], feedback, not “willpower”, is the driver of efficacy.

What this eventually led me to realize, however long it took, is that before you can truly “not try”, you have to become a certain person.  A return to innocence is not the same as innocence.  To consider the difference, think back to the slightly cliched notion of “formlessness”:

If I determine the enemy’s disposition of forces while I have no perceptible form, I can concentrate my forces while the enemy is fragmented.  The pinnacle of military deployment approaches the formless: if it is formless, then even the deepest spy cannot discern it nor the wise make plans against it.

-Sun Tzu, The Art of War

He’s absolutely right, but it also begs the question of whether you can remember the last time a puddle punched you in the face.  Note, however, that he speaks not of being formless, but of approaching the formless: rather than have no form, you must continually improve the nuance with which you act, approaching formlessness not as a destination but as an asymptote.  For a sufficiently advanced practitioner to do things with “effort” would indeed make little sense: they’ve already reached a point where their modus operandi has gone far beyond any process of naive deliberation.  For the rest of us, there’s grit.

This idea goes back to my previous essay, in which I noted that systems do not attempt to stay in some equilibrium but become increasingly complex by acquiring optionality.  Similarly, the purpose of strategy is not to reach some final destination but to maintain disequilibrium by becoming ever more complex.  Tactics, by contrast, provide the necessary scale to push us away from any attractor we may fall into.  As the dichotomy between strategy and tactics (as well as their companion concepts, doctrine and operations) fades away, these attractors ultimately resolve into a single strange attractor.

It is at this point that we approach a forever elusive “return to innocence”, enacting our journey not as a circle in which beginning and end are indistinguishable, but a spiral in which one never truly escapes the duality of these two faces of uncertainty.


*This example is taken from Red-Blooded Risk by Aaron Brown.  The concept is common knowledge, but the numbers I’m choosing come straight out of his book.

** The “tails” refer to the far ends of a probability distribution where extreme values reside.  To the left, you have negative black swans, to the right, positive ones.


