Category Archives: Semiotics

Shouts, Whispers, and the Myth of Willpower: A Recursive Guide to Efficacy

It occurred to me as soon as I started writing that people absolutely love saying “I’ve always been fascinated (!) by this topic.” I don’t know how many times I’ve done it on this blog or in other pieces of writing, but I’ll try to refrain from it here. In this case, though, the subject is to some extent a personal one. Without getting into too much detail, I’ve always struggled with attention. By this, I don’t mean merely sitting through boring shit, but an actual lifelong difficulty with executing tasks that interest me. I’ve found ways to ameliorate it, but my resources still feel frustratingly underutilized on a regular basis. Just as a lifelong struggle with obesity can lead a person to look beyond a lack of “self-control” as the cause of their condition, noticing the odd discrepancies between my intentions and my actions led me to realize that “self-control” is an explanation with no information or utility of any kind. Even if such a mechanism independently exists, perhaps commandeered by an ethereal homunculus or some kind of genetic encoding, it doesn’t give you any actionable knowledge that you didn’t have before; in fact, I see little difference between unqualified ideas of “personal responsibility” and new age books like The Secret (but let’s not have that argument today).*

My own struggles with what’s commonly called Attention Deficit Disorder certainly don’t make me unique or special in any way. I have yet to meet the person who feels satisfied with what they perceive as their self-control. Whether it’s not going to the gym enough, not getting enough sleep, or struggling to be more outgoing in social situations, we all have problems that manifest themselves as patterns of behavior. We’re told that it’s a choice, but we’re not told who exactly is choosing. Most would say that we can’t choose to crave junk food but we can choose not to give in, but that sounds to me like saying that we can’t choose whether we get cancer but we can choose not to be sedentary. The more recent explanation put forward by behavioral psychologists is that we have a finite reserve of “willpower” that acts as a brake on our impulses and takes up physical energy, and that we make bad decisions under “ego depletion”, when we no longer have the energy to pass up instant gratification. This reserve relies on energy in the same way the rest of our body does: it requires sufficient blood sugar, rest, and available bandwidth. Numerous experiments suggest that memorizing digits and refusing temptation draw on the same resources, and that those resources become very strained when blood sugar is low.

The limits of this model become apparent to me, however, when the idea of willpower is applied to dieting. The common explanation goes that if you’re fatigued, you won’t have as much willpower left over to resist junk food. Fair enough, but what if this fatigue is coming from low blood sugar? In that case, it can’t just be a matter of willpower, because the very craving for junk food comes from the fact that when blood sugar is low, the brain sends signals demanding that glucose be delivered as quickly as possible; something junk food is very good at. Worse, even if you resist the cravings using what willpower is left, your metabolism may slow down anyway in order to conserve energy. So you’d need to apply willpower to exercise, but even assuming that your body won’t cut energy elsewhere to compensate, that would just drain more energy, eventually leaving you with no “willpower” at all. Taking all this into consideration, Occam’s razor would suggest that it’s metabolism, not a lack of willpower, that turns us into lazy overeaters.

I’ve already talked at length about the details of this metabolic process in a previous post, so I won’t go into further detail, but the purpose of this example was to show that whether or not our folk-concept of “willpower” as a manual override exists and influences our decisions about eating, it does not exist in a vacuum. Similarly, I’ve come to the conclusion that what we call “attention” is not a simple matter of “self discipline” but a complex nonlinear process that is fundamentally grounded in feedback.

Information Sensitivity and the Habit Loop

For those who read my post on allostasis, you might recall that the health of a complex system can in fact be defined by its ability to process feedback, which looks like high uncertainty and volatility from the outside. An inability to process feedback manifests as insensitivity, which can take many forms: insensitivity to insulin among the obese and diabetic, runaway inflation from excessive money printing, tolerance to a drug, hearing loss from exposure to the same noise over and over, not listening to the boy who cried “wolf!” because he was full of shit every other time. More recently, I came across an insightful but disappointingly flawed book called Knowledge and Power: The Information Theory of Capitalism by George Gilder. Although many parts of the book were so illogical, unrigorous, and ideologically driven that I almost devoted an entry on this blog to ripping it apart, there were a few gems, including a very elegant metaphor for how cell phones and wireless internet expanded so much despite what seemed to be hard limits on the amount of bandwidth offered by the electromagnetic spectrum, and by extension, how economies can grow at an exponential rate relative to the resources they consume.

Imagine a cocktail party in which everyone is talking to one another. You might have trouble hearing your friend because of all the noise, which causes you to speak up. Unfortunately, everyone else might do the same, which means that everyone has to raise their voice even more to be heard. Eventually, it gets so loud that your voice is hoarse and you can still barely hear your friend. Imagine, by contrast, that instead of raising their voices, everyone agrees to keep their voice down and compensate for the background noise by having conversations in different languages. The chatter in the background becomes much less distracting, since it cannot be mistaken in any way for the words in your own conversation. In wireless communications, the standard approach for most companies was the equivalent of speaking louder: give each call its own exclusive slot with enough power to overpower any data that slipped in from outside the channel, and dedicate considerable bandwidth to buffers between channels to keep noise out. This family of approaches is exemplified by Time Division Multiple Access (TDMA). The alternative, pioneered by a company called Qualcomm, was Code Division Multiple Access (CDMA), in which all conversations share the same band simultaneously, but each channel relies on a unique code, its own language, so that interfering bits of data do not register as anything intelligible. To see the difference: imagine that someone says “my train leaves at five”, but you hear “my train leaves at nine.” You wouldn’t know that you misheard. If, on the other hand, you heard “my train leaves at hobgoblin”, it won’t make any sense, so you’ll know that you didn’t get the right information. With this in mind, all that was needed was to build powerful decoders on each side of the conversation, and decoders face far looser constraints than the hard limits on bandwidth that cap the potential of TDMA.
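If you’d like to see the “different languages” trick in miniature, here’s a toy sketch of code-division multiplexing (a deliberately simplified model of my own, not how real handsets work; actual CDMA adds long pseudo-random codes, synchronization, and power control). Two senders transmit over the same channel at the same time, each spreading its bits with its own orthogonal code, and each receiver recovers only the conversation whose “language” it knows:

```python
# Toy sketch of code-division multiplexing (hypothetical, simplified;
# real CDMA uses long pseudo-random codes, synchronization, and power
# control). Two senders share one channel by "speaking different
# languages": orthogonal spreading codes.

CODE_A = [1, 1, 1, 1]    # Walsh-style codes: orthogonal, so their
CODE_B = [1, -1, 1, -1]  # dot product is zero

def spread(bits, code):
    """Encode each bit (0/1 -> -1/+1) as a scaled copy of the code."""
    return [(1 if b else -1) * chip for b in bits for chip in code]

def despread(signal, code):
    """Correlate the shared signal against one code; the orthogonal
    sender's contribution cancels to zero and only 'our' bits remain."""
    n = len(code)
    bits = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        bits.append(1 if corr > 0 else 0)
    return bits

bits_a, bits_b = [1, 0, 1], [0, 0, 1]
# Both transmissions occupy the channel at the same time: just add them.
channel = [a + b for a, b in zip(spread(bits_a, CODE_A),
                                 spread(bits_b, CODE_B))]

assert despread(channel, CODE_A) == bits_a  # A hears only A
assert despread(channel, CODE_B) == bits_b  # B hears only B
```

The two transmissions literally add together on the wire, yet each decoder’s correlation step makes the other conversation sum to zero: the mathematical analogue of hearing “hobgoblin” and knowing it isn’t part of your conversation.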

What gives CDMA its incredible advantage over TDMA is that rather than turning up the volume, it relies on a more sensitive device. The “sensitivity” of this device is not a matter of picking up more distant signals, but of detecting patterns that other devices can’t. A murmur in the background usually doesn’t register, but if you hear someone say your name, you’ll often find yourself turning around to see if someone was addressing you. In the same manner, an experienced firefighter might not have physically sensitive ears (quite the contrary, considering all the extremely loud noises involved in fighting a single fire), but their understanding of relevant information is complex enough that even the slightest whiff of smoke or creak of the floorboards can cause them to order everyone out of the building before they even know why they gave the order. The decoders used in CDMA phones do the same thing: they are not more sensitive to bits of data; they just have a better understanding of which data is relevant. This doesn’t mean that the quiet is unnecessary: there’s still only so much bandwidth, and just as you need to speak at least as loudly as the background noise no matter what, the same goes for wireless communications.

A similar case can be made for how our attention works. Despite all the wonders of our inner logical faculties, we’re still like every other animal in that we rely on feedback to learn how to navigate our environment. Although we often make deliberate choices and suppress our impulses, emotions, bodily sensations, and experience remain the primary determinants of our behavior. Our conscious minds, although undoubtedly impressive, are strongly limited in both their knowledge and their control of the information we process and the choices that we make. The backbone of our decision making through the day consists of habits, each of which may be accessible to our consciousness to some degree. For a long time, I assumed that habits were just a matter of repetition: if you repeat something enough, you start doing it automatically. This is true to a degree, but there was another mechanism I wasn’t considering until I came across The Power of Habit by Charles Duhigg.

Duhigg’s fundamental insight is that habits operate as a loop, and must complete a circuit in order to be effective. The loop consists of three parts: a cue, a routine (the action itself), and a reward. A good, albeit very basic, example is the loop of hunger -> eating -> satiety. Forming a habit is a matter of feedback: a specific action in a specific context (cue) leads to a specific outcome (reward). Without such feedback, there is no basis for connecting a cue and a reward. This process, although very elementary, is the primary building block of learning and mastering skills. We instinctively engage in trial and error by creating behaviors that seek out a very specific reward: control over our surroundings. If I go outside and start shooting baskets, every shot gives me some information about what to do the next time, and I get feedback in the form of making more baskets per shot. If I were to see no improvement even after an entire afternoon, however, I would get frustrated and not find the task very engrossing. This is very useful, since it tells us not to waste time on things that we either can’t control or can’t make sense of. These habits of trial and error gradually layer on top of one another, and can become such sophisticated patterns that a chess grandmaster can feel excitement or anxiety at a configuration of pieces that means nothing to a novice.
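The basketball example can be sketched as a minimal feedback loop (the cue -> routine -> reward structure is Duhigg’s; the learning rule and every number here are invented purely for illustration):

```python
# Minimal sketch of a habit loop driven by feedback. The cue -> routine
# -> reward structure follows Duhigg; the learning rule and the numbers
# are invented purely for illustration.

def practice_session(shots, skill=0.2, learning_rate=0.1):
    """Each shot is one pass through the loop: the cue (holding the
    ball) triggers the routine (shooting), and the reward (seeing the
    result) feeds back into skill."""
    for _ in range(shots):
        room_to_improve = 1.0 - skill             # how much signal is left
        skill += learning_rate * room_to_improve  # feedback updates skill
    return skill

# With feedback, an afternoon of shots compounds rapidly at first,
# then yields diminishing returns as mastery approaches.
assert practice_session(50) > 0.9
# With the feedback channel severed (learning_rate=0), nothing changes,
# which is exactly the frustrating, non-engrossing case.
assert practice_session(50, learning_rate=0.0) == 0.2
```

The point is not the arithmetic but the circuit: without the reward term feeding back into the next attempt, repetition alone accomplishes nothing.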

When feedback becomes reliable and informative to the point of complete engrossment, the person is said to be in a state of flow, a term coined by the psychologist Mihaly Csikszentmihalyi. Although widely celebrated in many circles, a substantial case against it was made by the computer scientist and self-help author Cal Newport, who asserts that “flow” is a nice way to enjoy the skills one acquires through hard work, but that real practice is ultimately about delayed gratification. For a very long time, I almost entirely agreed with him. It certainly is the case, for example, that however much joy I take in programming or writing, it requires some mental effort, and while sometimes engrossing, does not usually feel like gently coasting downstream. What I found odd, however, is how much of a struggle it has been at varying points in my life to stay focused on things that I’m genuinely passionate about. Even when I would rather be doing them than watching TV, there were times when I felt incapable of it. At some point, something else struck me as extremely odd: I never encountered these problems when playing video games. While mathematics was often a struggle because I would make careless mistakes and lose track of what was on the page, a video game can so effortlessly take me away from reality that it’s not uncommon for me not to realize that several hours have passed. Nor did improving seem to require any kind of delayed gratification (though it is true that if you’re serious about, say, StarCraft [I’m not], you need to resist the temptation to get easy wins through amateur moves and learn the techniques that will help you truly improve). What was it about video games that could make my attention this laser sharp?

Two Faces of Flow: Learning vs. Addiction

I finally realized a probable answer a few weeks after I re-read a post on Ribbonfarm called The Calculus of Grit, in which the author argues that, contrary to popular belief, those who become masters of a craft do not have superhuman willpower. Rather, they’ve become adept enough at their own field that they can leverage their past experience as a means of exponentially accelerating feedback. What looks like “willpower” is actually very specific to the particular pursuit; an amazing writer may spend 8 hours a day completely focused on his writing, but may otherwise have horrible self-control when it comes to cleaning his house or sitting down to pay his taxes. You’d also be unlikely to find him sitting through a course on differential equations or spending 12 hours a week bodybuilding at the gym. All of this makes sense if you think about it in terms of aptitude: if you have no aptitude for programming, you won’t understand your own mistakes, and the lack of feedback will make it a painful and most likely fruitless process. If you have some skill, on the other hand, you’ll progress so long as you’re interested and given the right set of challenges: too easy, and you’ll get bored; too difficult, and you’ll get frustrated.

All this suggests that self-discipline is a local phenomenon; it occurs almost entirely in places where there’s feedback. With this feedback comes improvement, and with improvement comes even greater returns on feedback. Why? Because the better you understand what you’re doing, the more sensitive you are to the feedback you’re getting. If someone like myself, who has never once been serious about playing chess (though I was naive enough to think that I was good at it when I was 12), were to study the record of a game between two grandmasters, I’d have trouble gleaning anything useful from it. For a serious student of chess, however, there exist all kinds of patterns that they could see from years of study and experience. It also goes without saying that a more experienced chess student would find the literature more engrossing than even the most enthusiastic novice. Putting two and two together, it finally hit me that attention can be specifically defined as sensitivity to feedback.

So what does all this have to do with video games and their uncanny ability to fully engross even the most scatterbrained individuals? The answer lies in the specific type of feedback video games provide. Most video games, in comparison to many other challenges, provide feedback that is extremely loud, extremely frequent, and extremely simple. While a game like chess resides in a world of abstractions where a move in and of itself provides little in the way of stimulation, a game like Doom or Halo accessorizes every decision, however minor or inconsequential, with a pattering of footsteps or a loud explosion. Immersed in a dazzling audiovisual spectacle, the player is constantly saturated with shiny objects that draw them into the world created by the game. Even a simple strategy game from the ’90s offers an immediate sensory thrill that can’t be rivaled by a book, a chessboard, or a board full of equations.

The spectacle itself, however, is not the central mechanism by which games hook themselves into the player, but rather a supplement to it. The core is the very structure of the feedback: most video games are designed so that even the simplest tasks are rewarded with some tinge of satisfaction. A few more points for every monster slain, various badges for different accomplishments, all paced to give the player a cookie right before they get bored. The role of the game’s audio and visual elements is to create a sensory anchor for this feedback loop, providing visceral cues and rewards for the habit loop constructed by the game. The player literally sees and hears the satisfaction that will come from obeying the cue that has just appeared.

All of this comes at a price. Complex systems learn by adjusting to feedback, and feedback that is sufficiently loud and frequent will oversaturate the system’s inputs, leading it to reduce its overall sensitivity so that changes still register. When instant gratification becomes the norm, more subtle forms of feedback become harder to detect. Getting engrossed in a book becomes increasingly difficult. The same goes for different kinds of stories: it’s easier to sit through an action movie than a drama because the story is simple and the movie mostly consists of satisfying bits of conflict resolution in the simple form of karate chops and shootouts. We might force ourselves to sit through a few chapters of Tolstoy, but the real issue is that we ultimately have to re-calibrate our receptivity to feedback in order to take interest in more subtle flavors of experience.**

At this point, I may understandably sound like a puritanical naysayer conjuring the cultural paranoia of generations past, and I wouldn’t blame you for thinking so. So I should clarify that video games are not categorically bad. Attention is a local phenomenon, and reduced sensitivity to a stimulus is a valid adaptation. We stop listening to the boy who cried “wolf!” because it’s a waste of time and energy to get into a frenzy over consistently false information. Similarly, becoming wired for more frequent and intense feedback might prove beneficial in some scenarios: while the internet might have made us a bit less singular in our focus, it can be slightly painful to watch people of the baby boomer generation work with a computer as if it were a complex and dangerous welding machine at a manufacturing facility. Nor would it be fair to say that video games never amount to anything more than a digital cocaine-pellet dispenser. While I myself don’t understand the appeal of being a professional StarCraft player, I’ve made a hobby of watching some professional games and have noted the degree to which these players have exhaustively studied the possibilities and developed a rigorous set of techniques that occasionally branch out into subtle novelties that throw the other player off guard. Unlike someone playing just for fun, they’ll watch replays of previous games and deliberately practice both fundamental maneuvers and techniques they’re not used to in order to improve at the game.

Taking this into account, it’s apparent that while video games are just one of many sets of stimuli we adapt to, there’s still a fine line between the addictive behavior of sitting in front of the TV all day with nothing to show for it and becoming a professional gamer through the kind of consistent deliberate practice that most players wouldn’t feel compelled to engage in. To Cal Newport, this is the distinction between flow and deliberate practice: one involves the joyful feeling of getting lost that can only happen through reckless instant gratification, while the other involves the hard work of resisting that temptation and practicing what is difficult and frustrating until you get it right. I don’t think he’s entirely wrong in practice, but I’m convinced that he has set up the wrong dichotomy; it hearkens back too easily to the folk concept of willpower, which involves hanging in there in the absence of meaningful feedback (in his defense, that’s not exactly what he’s saying, but he very clearly states his opposition to the idea of flow as being conducive to improvement). By contrast, I think the dichotomy is between two different kinds of flow: one that promotes growth, and another that promotes atrophy.

To get a rough idea of the difference between the two, imagine a very large linoleum board with many different interconnecting grooves etched into it. It has all kinds of rivers, hills, valleys, basins, and mountains. Now take a pinball and place it onto the board, watching it roll downhill traversing various nooks and crannies. Things stay interesting as long as the ball continues to roll onto a new path, not stopping for good at any one location. But imagine that it reaches a wide basin where it starts circling around but never gathers enough momentum to get out. From here, the path of least resistance is to stay in one place, slowly losing momentum as it never travels anywhere else, eventually coming to a full stop.

As long as the ball is moving, it’s learning: the path of least resistance offers territory that hasn’t been previously explored. By contrast, once it enters a basin, the ball’s fate is to stay there; following the path of least resistance now keeps it in a single place. This analogy is admittedly a very flawed one, but I chose it because not everyone who reads this blog is necessarily familiar with chaos theory, which provides a much more faithful version of the same idea. For the rest of you nerds: you can imagine that we are learning so long as we haven’t fallen into a basin of attraction, after which we cascade towards a point of total repetition. Even more technically, you can imagine that learning is a strange attractor, and that addiction is a state in which we get dragged toward a stable attractor, the stable attractor itself being some kind of literal or figurative death:


A strange attractor, visualized: no point in phase space is ever repeated, leading to intricate and complex patterns. If and when the system visits the same point twice, it never visits a new point again, ending the hope of any additional information. Image courtesy of the Space Telescope Science Institute.
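For the nerds who would rather run the dynamics than imagine them, the logistic map offers a compact demonstration of the two regimes (the parameter values below are standard textbook choices from chaos theory, not anything derived from the psychology being discussed):

```python
# Illustrative sketch of the attractor analogy using the logistic map
# x -> r * x * (1 - x). Parameter values are standard examples from
# chaos theory, not anything derived from the essay's subject matter.

def orbit(r, x0=0.3, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.5: a stable fixed-point attractor. The orbit settles down and
# stops visiting new states ("addiction": pure repetition).
settled = orbit(2.5)
assert abs(settled[-1] - settled[-2]) < 1e-9

# r = 3.9: the chaotic regime. The orbit keeps wandering to new points
# without ever exactly repeating ("learning": always new information).
late = orbit(3.9)[-50:]
assert all(abs(a - b) > 1e-9
           for i, a in enumerate(late) for b in late[i + 1:])
```

The same rule produces both behaviors; only the parameter differs, which is the sense in which learning and addiction can be two flavors of the same flow.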

Whatever analogy you prefer, the key difference is that when we’re learning, the feedback guides us to new information and new possibilities, and when we’re addicted, the feedback encourages repetitive behaviors that keep us in a zone of comfort. While this is easy to see in the case of learning to play the guitar versus spending an entire weekend watching Netflix, it can also take more subtle forms. Many people who are allegedly “workaholics” find comfort in the validation of staying within zones where they feel strong. This behavior is not necessarily addiction: after all, we avoid excessive failure because there’s little point in spending time on something we have no aptitude for. Taken too far, on the other hand, workaholism can not only become a means of avoiding other uncomfortable areas of life, such as socializing and personal development, but can even arrest the person’s development at that particular task by driving them to consistently punch below their weight class in order to avoid the possibility of failure. The epitome of such a person might be Brian from Family Guy or Oscar from The Office, both of whom circle a drain of mediocrity as they validate themselves by chiming into conversations with fragments of mainstream quasi-intellectual trivia that nominally qualifies them as “the smart guy” of the group. This kind of identity-wearing is in fact a very strong sign of someone who engages in strength-based addictions. At a greater extreme, such addictions can take the form of alcoholism or drug addiction, which provides the means for instant pleasure and/or pain relief, and is Brian’s eventual crutch as he goes from Brown dropout to depressed house pet.

Although alcoholism is not the same thing as avoiding failure, let alone doing some task with no purpose other than repetitive instant gratification, they are for our purposes the same systematic behavior, albeit at very different magnitudes. When feedback fails to foster growth, the inevitable outcome is atrophy, as the subject not only fails to expand their knowledge but becomes further trapped in a habit loop with diminishing returns, their sensitivity to feedback dulled by repetitive stimuli. It’s also a relative phenomenon: Bobby Fischer may have been addicted to chess as a way of (self-admittedly) avoiding the world outside the board (Boris Spassky also saw chess as an escape, having found it as a sanctuary while growing up in poverty in the USSR), but he was nonetheless constantly pushing his limits at the game, never letting himself become complacent with his own abilities. The distinction between learning and addiction is useful here insofar as it explains when flow is conducive to improving at something, and when it facilitates the exact opposite. All this leaves the question of how we can control this process and ultimately engage in a state of flow while avoiding addiction.

Willpower Revisited: Stress Responses and Signal Amplification

While I fundamentally disagree with Cal Newport’s belief that flow is inherently opposed to deliberate practice, that doesn’t mean his ideas are entirely wrong. Most of the things he says are compatible with the dichotomy that I outlined: that deep procrastination is the result of not having a viable plan (in other words, that procrastination comes from a lack of intelligible feedback), that passion is the result of mastering something you have some aptitude for and not some pre-determined magic bullet, and that developing a sustainable and effective road to expertise requires taking the approach of a “craftsman”, gradually tinkering with what you do via small bets.

All three of these ideas fit in with the notion that expertise is based on a process of feedback, and certainly don’t contradict any of what I’ve said about flow. On a macro level, all of his ideas about how to become successful focus on working where there’s feedback and resisting the temptation to attempt go-for-broke efforts, which he identifies as the courage fallacy. On a micro level, however, the difference between our dichotomies becomes significant. Newport’s distrust of “flow” comes from the fact that, as the name suggests, it’s an act of following the path of least resistance. In terms of my dichotomy of learning vs. addiction, Newport would likely see flow as inherently addiction-based. Avoiding such addictive behaviors requires “deliberate practice”, in which one applies willpower to work outside one’s comfort zone.

The difficulty of deliberate practice, as Newport himself notes, lies in the fact that there’s a vacuum of novelty that we are constantly tempted to fill. To Newport, resisting this temptation is a matter of willpower, and requires that we cultivate the metacognitive skill of hard focus. Cultivating this skill requires building it up through training, in the same way one might train for a marathon. From this point of view, “hard focus” is a habit that we reinforce through practice. The issue I have with this view is that habits do not exist in a vacuum: they rely on cues and rewards, which may vary wildly depending on the task at hand. We may be able to improve our general ability to diligently push ourselves through tasks by developing better metacognition about our habits, but just as playing chess only improves one’s memory for chess positions, our raw “focus” is likely task-specific. The belief in a more universal notion of willpower or focus seems instead to come from the general analogy that willpower is a muscle; which ironically betrays the flaws behind our folk concept of not just mental performance, but physical performance as well.

Although exercise often leads to an overall increase of muscle mass, it is hardly the only factor in our ability to perform physical tasks. A person’s ability to lift a certain amount of weight in a certain way depends not only on the muscles used, but also their neural coordination, the ability of their metabolism to apply energy in the right places, and the distribution of muscle fibers (fast twitch vs. slow twitch). In fact, if we use Newport’s chosen analogy of running a marathon (which comes from Haruki Murakami’s excellent book, What I Talk About When I Talk About Running), the analogy falls apart even further. Due to the nature of the Krebs Cycle, aerobic exercise works primarily by optimizing the body’s metabolic pathways for a specific task: a person who is a champion at the stair-master may be completely unable to run a decent mile outdoors or on a treadmill.

None of this is to invalidate Newport’s legitimate concern that the path of least resistance, at least in a world offering constant novelty, will likely lead us down a path of addictive behaviors that bombard us with the most frequent, loud, and easily acquired forms of novelty. Just as our temptations to gorge on sugar, starch, and fat are the legacy of a world where calories were relatively scarce, data only became abundant with the invention of the printing press, and that abundance has now been dwarfed by the arrival of the internet. The issue is that the folk concept of “willpower” does not seem to offer much of a solution. Just as applying “willpower” to the issue of dieting separates our decisions from our metabolic signals with a causal hatchet, looking at focus through this same lens ignores the habitual, emotional, and physiological factors that play a role in our decision making about work. Luckily, the analogy of “willpower is a muscle” can also lead us to another view, one that doesn’t force us to create a mythical homunculus that polices some vaguely defined set of our actions.

In order to get rid of this causal separation of willpower from the rest of our decision making apparatus, we need to get rid of the notion that our willpower works completely independently of feedback from other systems. In recent decades, cognitive scientists have begun this process by studying phenomena such as ego depletion, in which one loses the ability to resist temptation due to physical fatigue or the mental fatigue of applying too much willpower without recovering. Studies have also shown that willpower is lessened when the subject is busy concentrating on some other task, a phenomenon known as cognitive load. Understanding impaired judgement under cognitive load is easy if you realize that the connection between our consciousness and our actions has very low bandwidth: we do not have the resources to make very many conscious decisions at once. Imagine, by analogy, the president of the United States: he makes a lot of key decisions, but he certainly can’t micromanage every piece of legislation being passed in every county, municipality, and state. He can only make so many decisions, and must work from a bird’s eye view, leaving much of what happens to the logic of an intractably complex system.

This “bandwidth limit” is in fact one of the things that makes the concept of willpower so troublesome: it assumes that the best decisions are made consciously, which ignores that the vast majority of the information needed to make coherent decisions resides in complex systems that are invisible to our conscious selves. The idea that willpower is some separate mechanism that acts completely independently, albeit constrained by some “energy budget” (as ego depletion suggests), implies that there is some strict binary between a decision that’s consciously willed and one that’s not, something that doesn’t make sense if every decision requires a significant amount of unconscious information, and makes even less sense when we consider the emotional and physiological factors that continually shape our conscious experience. On a more practical level, this is important because sometimes our instincts are good for telling us what we “should” do, leaving the bigger question of how much “willpower” we ought to have in the first place.

With the help of some simple mathematical concepts, however, it’s possible to escape this superstitious separation of subject and object by re-framing “willpower” as an issue of information rather than some raw force of conscious will. Consider again the concept of ego depletion: while we may only be able to concentrate on a difficult math problem for so many hours in a day and feel fatigued afterwards, that fatigue doesn’t compromise our more basic mechanisms of self-control. We may be more tactless after a 12-hour shift at work, but barring severe intoxication, we do not enter some state of absolute zero inhibition. It’s not that there are no limits to our self-control; it’s that our capacity for self-control decreases not linearly, but geometrically. This realization is actually a much more faithful interpretation of the idea that willpower is a muscle when one considers the difference between fast-twitch and slow-twitch fibers: the former get used up when lifting a heavy load and take days to recover, whereas the latter can fully recover in mere minutes, which is why no amount of weightlifting, barring severe injury, will prevent us from walking out of the gym. One might ask whether this is just a matter of habit, in which our willpower is not required because we’ve created a routine that gets rid of the need for it. While there is some truth to this, it’s hard to untangle habit from self-control because all of our decisions act on feedback of some kind. Luckily, we can account for this non-linearity without forcing a dichotomy between “willpower” and habits by modeling the folk concept of “willpower” as a signal whose efficacy is based on sensitivity, rather than as an independent mechanism that draws energy from a finite gas tank.
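To make the distinction concrete, here is a toy sketch with made-up numbers (my own illustration, not a model from any study) of the difference between a finite gas tank and a geometrically decaying capacity:

```python
# A "finite gas tank" depletes linearly and hits zero; a geometrically
# decaying capacity shrinks fast at first but never reaches a state of
# absolute zero inhibition. (Illustrative numbers only.)

def linear_reserve(start, cost_per_use, uses):
    # willpower as a gas tank: each act of self-control burns a fixed amount
    return max(0.0, start - cost_per_use * uses)

def geometric_capacity(start, retention, uses):
    # willpower as a signal: each act leaves only a fixed *fraction* behind
    return start * (retention ** uses)

for uses in (0, 3, 6, 9, 12):
    print(uses, linear_reserve(100.0, 10.0, uses),
          round(geometric_capacity(100.0, 0.7, uses), 2))
```

The gas-tank model runs dry after ten uses; the geometric model is badly degraded but never empty, which matches the observation that a long shift makes us tactless rather than uninhibited.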

In particular, I’d like to suggest that “willpower” is a stress-response that happens in the absence of feedback. To get an idea of how a stress response works, consider the hormone cortisol. Cortisol is one of several hormones secreted in the human stress response, and it helps us avoid danger by shifting our body’s resources so that getting rid of the threat becomes our top priority. Pain, normally a signal that tells us to avoid harm, is dulled, since surviving is more important than avoiding injury. Our digestive system also shuts down, and we are able to use up more of our body’s energy than usual, since there’s no point conserving energy if we’ll get killed doing so. Once the threat is gone, our body goes into rest mode, and we become more sedentary than usual in order to recover the energy that was lost.

In a scenario where threats are sufficiently spaced out, there is no problem. Unfortunately, the modern world has brought on the phenomenon of chronic stress. Even though the stress is rarely as acute as a life-threatening situation, the stress-response is turned on for an abnormal amount of time and can easily desensitize us, leading to conditions such as adrenal fatigue. In more extreme scenarios, such as war, it can lead to conditions such as Post-Traumatic Stress Disorder (PTSD). While the layman’s idea of PTSD is that it’s caused by acutely threatening situations, the evidence suggests a more nuanced view. Soldiers on the front lines actually have lower PTSD rates than logistics soldiers, because they are in more apparent control of their situation than the people whose job is not to fight back but to keep the supply lines running. Even then, modern warfare involves being on high alert for hours, and sometimes days, on end, and often engaging with threats in a way that’s contrary to one’s individual self-interest. Like many disorders, PTSD likely comes not from the experience of acute stress, but from a mismatch between the signal of stress and the person’s response to said signal.

Another important fact about this stress-response is that it benefits us up to a certain point. Police officers, firefighters, and soldiers are put through a certain amount of training to blunt their response to dangerous situations, so that when a real emergency happens, their response hits the “sweet spot” at which focus and energy improve, rather than going too far and causing a total loss of control, something that can leave people too paralyzed to even dial 911. Like all responses to stressors, this one follows an upside-down-U-shaped curve that benefits us up to a point before it starts to harm us. For the third time over the course of five posts, I’m going to post the exact same graph by Nassim Nicholas Taleb:


Because, yes, it’s pretty damn important.

Oddly, we seem to have a similar stress response regarding attention. Just as a perceived loss of control can raise cortisol levels and ultimately cause anxiety and depression, the same thing seems to happen when we lack good feedback. When we are not sufficiently engaged in meaningful tasks, boredom, and eventually anxiety, slip in. An episode of Breaking Bad is fun to watch after you’ve called it a day, but spending an entire day watching TV is often a desperate bid to alleviate a sense of boredom and dread (I say this from experience.) Given the higher incidence of stress among those who feel less in control, and the tendency of cortisol-raising drugs such as caffeine, speed, and various ADHD medications to significantly raise people’s focus, I’m actually convinced that cortisol is one of the hormones involved in this stress response, and that the stress response I speak of now is simply a variation on the basic biochemical response that moves us out of harm’s way. But I digress, as I have no intention of speculating on the biochemistry behind these ideas beyond certain basic patterns.

There is, however, one more aspect of stress that I haven’t covered: in order to be in control, one has to be able to sufficiently predict the cause and effect of one’s actions. Prediction is in fact so vital to our sense of security that the most acute (and ultimately damaging) stress responses happen when harmful and threatening stimuli cannot be predicted at all. Shocks administered without any rhyme or reason are far more stress-inducing than those given with regularity. From an evolutionary standpoint, this also makes sense: if we are in an environment that we cannot predict, we’re in serious danger. If you know when the predators come out and where, you can use that information to make safer decisions. In the absence of such information, it’s imperative that you figure out what needs to be done or move to an environment you’re more familiar with. In other words, an absence of feedback will create a stress response, causing us to either double down and search for more information, or get out of harm’s way as quickly as we can. In the modern world, this stress response makes us decide whether to “buckle down” and filter out peripheral information (thus increasing our sensitivity to feedback), or walk away.

The region of the graph that rests above the horizontal line is our beloved “sweet spot”. Here, there is some absence of feedback that we respond to by learning and adapting, whereas to the right, we become increasingly frustrated and restless as we fail to make sense of our surroundings and become increasingly likely to quit (the same logic can also apply to being bored by something that is too abstruse or irrelevant). To the left lies the zone in which feedback is abundant and there is little ambiguity in what we are doing, leading to the kind of addictive behavior in which we grab the low-hanging fruit at the expense of development. Since such behaviors can desensitize our stress-response, this addictive behavior is harmful in that it causes atrophy, making us regress by default to even less nuanced feedback. Most importantly, the stress response does not get “trained” in any way; it is rather a means of helping us become more focused at specific tasks. When we are able to hit the “sweet spot” on a regular basis, we can engage in the kind of deliberate practice that Cal Newport advocates.

So what implications does this have for improving our efficacy at tasks? Is this just a long-winded way of restating the idea that we need to be diligent and not just take the path of least resistance? In part, yes, but not without some details that give us more useful information than the folk concept of “self-discipline.” By talking about attention as sensitivity to feedback and willpower as a stress response in the absence of feedback, we can revisit Gilder’s wireless communications story as a way of understanding how to approach the issue of focus on a more practical level. If you recall the metaphor of the cocktail party, the approach of everyone trying to talk over the noise of the crowd will result in little gain and hoarse voices. In wireless communications, the equivalent practice is to use additional energy to boost the clarity of a signal over a channel. Unfortunately, just as at the cocktail party, the returns diminish quickly, as it takes a quadratically increasing amount of power to boost a channel’s signal. Meanwhile, the loud noise from everywhere else has to be compensated for by using spare bandwidth to create thick buffers that block out interference, diminishing the efficiency of the wireless network.

The stress-response that dictates deliberate self-control works in the same way. Although a certain amount of it is necessary and even beneficial, we quickly get diminishing returns on the signal. Add to this that the stress-response gets blunted over time as we become decreasingly sensitive, and returns will diminish even more quickly. We can also block out noisy stimuli by filtering out irrelevant stimuli, but given our limited bandwidth, this too is a drain on resources.

Just as our stress-response peaks in effectiveness before hitting diminishing returns, a similar dynamic is at play in Gilder’s explanation of wireless communications. At the cocktail party, if everyone tries to talk louder, it becomes no easier to hear anyone; worse, everyone’s voice goes hoarse and everyone’s hearing gets shot as they constantly try to talk over one another. The same thing happens even less ambiguously in wireless communications: boosting the signal (turning up the volume) of a channel requires a quadratically increasing amount of power, which means that you won’t be able to cost-effectively boost the signal past a certain point. Our stress response undergoes the same dynamics, becoming increasingly ineffective with overstimulation.
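Just how punishing this gets can be sketched with the Shannon–Hartley capacity formula (a toy illustration of my own, with arbitrary units; the exact growth rate of the power cost depends on the regime, but capacity is always logarithmic in power, so past the low-power regime each additional bit per second per hertz costs roughly a doubling of transmit power):

```python
import math

def capacity(bandwidth, signal_power, noise_power):
    # Shannon–Hartley: C = B * log2(1 + S/N)
    return bandwidth * math.log2(1 + signal_power / noise_power)

# With noise fixed at 1, each doubling of transmit power buys at most
# one extra bit/s per hertz of bandwidth:
for power in (1, 2, 4, 8, 16):
    print(power, round(capacity(1.0, power, 1.0), 3))
```

Sixteen times the power doesn’t even quadruple the capacity here, which is the shouting-over-the-crowd strategy in miniature.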

More speculatively, I suspect the stress response has two separate mechanisms that mirror the channel-boosting/insulating dynamics of TDMA. By amplifying the signal with additional power, we increase the neurological “volume” of feedback, creating a bigger rush of neurotransmitters (or something similar) for every stimulus. In addition, it uses up bandwidth to insulate us from interfering stimuli elsewhere, leaving us limited in our ability to make other decisions during demanding times. Yet another way of looking at it is that our stress response, by creating such tunnel-vision, narrows the possibilities we have: perhaps this is what’s behind the experience many have of stimulants reducing their creativity. With this comes an additional cost: the more over-saturated we are with feedback elsewhere, the more resources we have to devote to boosting the channel and insulating it from interference. Our energy and bandwidth are finite, and bandwidth that is spent blocking distractions is bandwidth that can’t do other things. Meanwhile, the more energy we spend getting a signal through a single channel, the more bandwidth is wasted that could have been used for more meaningful pursuits. Personally, I’m sick of it: I have too many days where I get home from work feeling too exhausted and unfocused to be productive, but still feel a nagging restlessness. Luckily, I believe that there’s an equivalent to CDMA that maximizes our available bandwidth and greatly reduces the amount of energy needed to create a clear channel of feedback.

Attention Gardening: Via Negativa and Craftsmanship in Extremistan

Unsurprisingly, there’s a lot we can do if we think about all this in less linear terms. Going to the gym, even several days a week, only takes up a small slice of our waking hours, but the intensity of the effort significantly shapes our metabolism by initiating cascades of signals. Many, myself included, have found success in intermittent fasting, which creates a large systemic impact using only a brief period of stress. In both cases, applying a relatively small amount of energy results in chains of feedback with convex effects. The upside-down-U curve that I showed earlier actually mimics this, as (up to a point) the value of f(x) not only increases faster than x, but in fact accelerates. The reason is that the right chain of feedback will have compounding returns; in other words, it will be a positive feedback loop. More importantly, you don’t always need to get it right: as Aaron Brown explains about investing in Red-Blooded Risk, “Successful risk taking is not about winning a big bet, or even a long series of bets. Success comes from winning a sufficient fraction of a series of bets, where your gains and losses are multiplicative. That pattern of gains and losses leads to exponential growth. This appears to observers as overnight success.” Nor am I the first to consider this approach to productivity: both Venkatesh Rao and Gregory Rader have talked about such an idea in terms of achieving “thrust” in order to make accelerating progress in a pursuit. Unsurprisingly, their parabolic thrust/drag model once again mimics the curve that I’ve repeatedly talked about in this entry (and this blog in general.)

The thrust-and-drag analogy has a lot in common with my own analogy to Gilder’s talk on TDMA vs CDMA in wireless communications. What they call “drag” can be identified as interference that makes the channel noisy. Just as it costs a quadratic amount of power to boost the signal over a linear addition of noise, drag has a quadratic effect on the trajectory of a projectile. Therefore, reduction of drag is crucial to cultivating convex returns on your efforts. All this suggests that Jensen’s Inequality is a crucial element of any sound productivity strategy: the dose must be concentrated enough for a positive feedback loop to occur, so up to a point, returns are convex.
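Jensen’s Inequality in miniature (a toy example of my own, with x² standing in for any convex response): when returns are convex, the same total “dose” of effort yields far more when concentrated than when spread thinly.

```python
# For a convex response f with f(0) = 0, one concentrated dose beats the
# same total divided into many small doses. (x**2 is a stand-in for any
# accelerating-returns curve, valid only "up to a point" as in the essay.)

def response(dose):
    return dose ** 2  # convex: returns accelerate with dose

total_effort = 10.0
concentrated = response(total_effort)        # one burst of 10
spread = 10 * response(total_effort / 10)    # ten bursts of 1
print(concentrated, spread)  # → 100.0 10.0
```

This is the arithmetic behind the gym and fasting examples: brief intense stressors land on the steep part of the curve, while the same effort diluted across the whole day barely registers.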

This still leaves two things to consider. First, we only have so much “rocket fuel” to expend, and creating sufficient “thrust” will require a significant up-front investment. Is there any way that we could cut our investments into small pieces in the way Aaron Brown recommends? Second, a positive feedback loop can often be (and may inherently be) an addiction in the sense that I described earlier; all learning ultimately relies on a degree of negative feedback, which is behavior that corrects by filling informational gaps. It’s worth noting that Venkat himself, in the article that I posted, uses the word “addiction” to describe the flow of a creative task, and in fact inspired my own slightly more technical definition of addiction with his similar idea known as Gollumization. These two points considered, how can we maximize such “thrust” without falling into addictive behaviors or falling back on naive ideas about self-discipline?

While Rao and Rader are both right on the money regarding the necessity of removing drag, I think that moving beyond a thermodynamic analogy can provide us with a better outlook on how to create momentum in our pursuits at minimal cost. In informational terms, drag is the absence of feedback that will be acted on via the stress response I talked about earlier. While too many distractions can definitely cause this, so can a lack of overall sensitivity, which can occur if we’re used to receiving intense novelty on a frequent basis at little cost. It’s also worth noting that this “drag” is reduced with increased adeptness, as we gain a greater sensibility for the actual subject at hand; like I said before, we cannot register feedback if we’re working with something we simply don’t understand. For this reason, it’s not only important to set the appropriate difficulty level (too easy is addictive, too hard is unintelligible), but to make sure that the pursuit automatically calibrates the difficulty level as we advance so that we don’t find ourselves struggling to stay focused after a series of promising initial gains.

To get an idea of how this “sensibility” matters, consider the use of unique codes in CDMA. These unique codes are usable because the decoders on the devices are extremely powerful and intelligent. By analogy, we can work according to feedback with minimal excess energy by having the sensibility and experience to see the nuances in the feedback that we’re receiving. I don’t think this is just a matter of using skills that we’ve already learned; it is, in fact, the reason why we can see such “accelerating returns” in creative pursuits, and even gain the metacognition to become better at focusing and learning across broad arrays of tasks. Although it’s known that becoming good at chess doesn’t improve memory in any area except for chess positions, it still seems to be the case that broad erudition makes us more suited for an uncertain future. Although I do not know of any empirical evidence, I strongly suspect that the ability to break domain dependence by reasoning through analogy allows us to draw more general lessons from earlier pursuits to accelerate the necessary learning curve of later ones. In a broad sense, becoming more sensitive to certain kinds of feedback not only frees up room to listen to new kinds of feedback, it also provides information of its own that we can apply to increase our sensitivity to those novel forms of feedback (thus freeing up even more room in a kind of virtuous cycle.)

The other issue with talking about chess is that as an activity, it’s a well-defined closed system. Closed systems can be mastered, albeit with difficulty, through a relatively straightforward kind of practice in which feedback comes at a reliable pace and common lessons can be easily passed down by more experienced practitioners. In less definite fields, feedback is not so straightforward, there is not nearly as much of an externally defined set of rules, and worst of all, even when both seem to be present, the logic of the system may be much more wild and unpredictable than it looks, even if it looks super calm.*** In the case of these open systems, metacognition has more of an impact, because there is no definite set of rules from which you can derive the logic of the system, and because the possibility of extreme outcomes means that insights can make you either fragile or antifragile. But once again I digress; this is getting into a whole other topic that I’d like to discuss one day, but simply can’t right now.

What’s worth noting about the analogy to CDMA is that it does seem possible to quiet all of the individual channels down and maximize our productivity by learning how to become more sensitive to feedback. In fact, if one considers Claude Shannon’s Noisy-Channel Coding Theorem, we can maximize the effectiveness (transmission rate) of our time (bandwidth/channel capacity) by reducing interference (error) to virtually zero through the right transmission code. In this case, I think there’s strong reason to believe that our transmission code is the sensibility we develop. The theorem suggests that the approach of creating more complex codes while avoiding power-boosting really is the optimal approach to wireless communications. This is admittedly speculation, as I still have much to learn about information theory and how to apply it, but I think that this possible insight can help us understand, by analogy, how to gain productivity through finesse rather than brute strength.
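To get a feel for what a transmission code buys, here is the crudest one imaginable, repetition with a majority vote (a toy sketch of my own, and a wasteful one: it spends rate lavishly to buy reliability, whereas Shannon’s theorem promises far cleverer codes that drive the error toward zero while keeping the rate near capacity):

```python
from math import comb

# Repeat each bit n times over a channel that flips every copy
# independently with probability p; decode by majority vote.
# Decoding fails when more than half the copies get flipped.

def majority_error(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 9):  # odd n, so there is always a clear majority
    print(n, round(majority_error(0.1, n), 6))
```

Even this dumb code collapses a 10% error rate to well under 1% by the ninth repetition; the point of the theorem is that finesse in the code, not extra power, is what makes a noisy channel usable.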

I think that the general idea can be achieved by using the logic of nonlinear cascades used above. By not overusing our “stress response”, increasing our sensitivity to feedback, and properly structuring/planning our tasks such that feedback comes to us in the right intervals, we can spread our energy widely without tiring ourselves out or obviating the possibility of compounding returns. To do this, I advocate for an approach in which we apply our energy not to running some kind of a marathon, but instead to “planting seeds”. Rather than trying to beat a challenge by force of will, or merely do the opposite and take the path of least resistance (which could just land us right into addictive patterns), we should make it our task to create the right conditions for productivity and allow the actual work to develop based on feedback. In other words, our job is not to make it happen, but to make sure that external conditions allow it to happen.

This is not just a matter of “rationing willpower”; it is actually key to an intrinsically better learning process. Our stress response, which comes from the absence of feedback, is a signal, and as such, is meant to tell us when to double down and when to walk away. This can be very helpful, since if you don’t have much aptitude for something, or if you don’t have a pressing reason to do it, then it’s probably a waste of time and energy. If there were no limits to our “willpower”, we would be able to easily override our instincts, gut feelings, and tacit knowledge, in such a way that our conscious awareness, armed with much less knowledge than we think, would endanger us by overwriting the deeper logic that exists beyond our awareness. The reason why I am making an analogy to gardening is that plants grow according to their own fractal logic, something that we cannot and should not have any control over. Our job, as gardeners, is to facilitate this logic, such that the plant remains in a state of growth rather than a state of atrophy. With this in mind, how can we create such conditions?

First, we need the channels to be quiet. We only have so much “power”, as our stress response will lose its effectiveness with too much coffee, forced focus, and anxiety. The more we can cut down on means of artificially raising the stress response, and the more we can get away from noisy stimuli such as excessive TV, video games, social media, and news, the more sensitive we’ll be to feedback on the whole. Once there are few enough clouds in the area, we can decide to actually plant the seed. Planting the seed still requires some non-negligible amount of investment, but since this works as a stress-response and not as a finite gas tank of “willpower”, we can get more for our money by increasing our sensitivity to the stress response, and we can preserve that sensitivity by having it work in moderate pulses rather than chronically activating it (yet another instance of Jensen’s Inequality). Once planted, our tree will grow according to the path of least resistance, dictated by an informationally rich logic that does most of the work for us. We may, however, have to prune the branches every so often if something goes seriously awry; if we enter into a cycle of addictive behavior, then it is time to intervene. Knowing when to do this is not an algorithm but a matter of metacognition, and it relates back to another one of Taleb’s heuristics involving Jensen’s Inequality: let minor problems take care of themselves, but do not hesitate to intervene in serious rare threats. Nor does it have to be perfect: since feedback is part of even our most high-level conscious experiences, we should be content to cut our trees in a wabi-sabi kind of way.

The tree will eventually bear fruit, and that will give us the means to plant yet more seeds. What’s important to note is that our conscious role in all this is the metacognition necessary to make the system run. Beyond that, too much tampering will replace the nuanced information of the feedback cycles with sloppy pseudo-approximations made within the limited scope of our awareness. So on a final note, don’t ever make “goals” or “plans” or “schedules” in the traditional sense. Such management is good if and only if it’s about setting up the conditions for an information-rich learning process. Although it may seem like I’ve gone way too far to explain a few simple concepts, I think that this appreciative model allows us to move beyond platitudes and come up with real reasons why expectations and beating ourselves up over failure are not parts of a good strategy, and why it doesn’t make sense to treat focus like a raw force of will. Focus comes from being sufficiently sensitized to feedback, a product of a well-calibrated stress-response, fine-tuned sensibilities, and the proper alignment of skill and complexity. In other words, it’s a matter of preserving the integrity and clarity of new information. Learning this has led me to a much more hands-off approach in which my primary concern is looking at what major events will trigger or inhibit compound intellectual and creative growth, and it has made me wonder if we can see substantial changes in how we think about learning disabilities in the same way that we’ve gained a more nuanced and effective understanding of obesity. Best of all, I might just get over the fact that I’m far from satisfied with this essay.

*Some of you might be wondering if this is a sort of moral nihilism. Hardly: I believe that morality is a matter of accounting. Whatever the reason was that we did something wrong, it’s imperative that we be held accountable so that people aren’t encouraged to go do it. Justice is about the task of honest accounting, doing only what’s necessary for the sake of holding society together. Going beyond that is immoral.


**A similar argument was made by Nicholas Carr in The Shallows, but I had not connected his idea with video games, since I figured “I’m not multitasking.” I also don’t think that his argument was as fundamentally about feedback, though perhaps I’m not giving him enough credit.

***This is based on a relatively technical probabilistic/statistical point: the variance of a statistical sample does not necessarily reveal the variance of the true generator. In other words, the fact that something looks tame doesn’t mean that it actually is tame, as there may be black swans waiting. See The Black Swan or Antifragile for further reading.

Deconstructive Economics Part I: Economic Paradigms

In my last post that touched on the subject of economics, I considered the idea of a paradigm of economics based on allostasis. It left a lot of questions unanswered, but it strengthened my suspicion of a hypothesis that I’ve been mulling over for some time, one that may apply to complex systems in general: that an economy works not by allocating resources more “efficiently” but by continually learning. I put “efficiency” in quotes because the notion of efficiency can’t be discussed in a vacuum, an issue that has helped lead me to my current hypothesis. Efficiency implies that there is a metric being optimized for, something that only exists in some unambiguous sense in the event of a major purpose such as a major war, or perhaps the renovation of a nation’s infrastructure. I should also note that in the presence of such goals, the idea that the free market is “more efficient” seems somewhat unsubstantiated: I have a hard time believing that the Second World War would have been more effectively fought had nations relied entirely on “market solutions” to pump out the manpower and materiel needed for the massive undertaking.

Yet somehow, even in the absence of a definite notion of “efficiency”, there are still things that could obviously be considered “malinvestments”: if a restaurant is bailed out at all costs, no matter how terrible the food, it is uselessly monopolizing claims on all kinds of material wealth that would be better spent elsewhere. This left me with the question of how we can make any claim to something being wasteful in the absence of a clear notion of “value”. One might come up with reasons outside the scope of markets by making arguments for the intrinsic value of railroads or libraries, but when applied on a macroeconomic scale these arguments amount to epistemically arrogant just-so stories that can never be substantiated in any kind of logically rigorous way. Nor are libertarians off the hook: the “free market” in any incarnation is a structure that is built and maintained by central authorities, and while many make the argument that the government should limit its role to providing the absolute basic necessities for an ideal free market, such an argument implies that there is an ideal “free market” that should be created and maintained, which itself assumes that there’s some way a categorical notion of “efficiency” can be derived from some top-down model of reality.

The underlying issue is not just that our economic theories are models of a much more complex reality, but that the market, at any given point in time, in whatever incarnation, is a model of reality that is simultaneously propped up by and utilized by the encompassing entity we call the economy. Where the economy is the collective exchange and utilization of goods, services, land, labor, commodities, information, etc. carried out by society, the market is a model of reality, a set of scripts, that guides our economic behavior. In order to do so, it must do two things: (1) it needs to provide information that is sufficiently clear and reliable for us to decide to follow the script, and (2) it needs to continually update its instructions so that the information remains reliable. In other words, the system needs to maintain the ability to process information coherently; it must be allostatic.

There are many such scripts, and further reading can be found in places such as Venkatesh Rao’s essay on the unraveling of scripts, but markets are a very specific type of script. Prior to the emergence of industrialized society, markets were peripheral to everyday life, and most household and community needs were met through autarky. With the industrial age came what Karl Polanyi calls “the market pattern”, in which providing for one’s material well-being became increasingly dependent on specialization and exchange. This general “pattern”, which is so strongly entrenched in our culture that our textbooks assume that currency was preceded by barter despite the mountain of historical evidence to the contrary, is the template for all market-scripts, which share the intertwined assumptions that goods are (1) exclusively owned by a single party, (2) fungible and interchangeable, and (3) enumerable according to some ranking. By virtue of these three axioms, market scripts dictate, through the information embedded in currency, institutions, and laws, a set of assumptions about how to determine economic “value”.

The idea of economic value is relative, but that does not mean that it’s unfalsifiable. A market’s script for determining “value” is only viable insofar as it maintains a sufficient signal-to-noise ratio in its processing of feedback. When this fails to happen, price signals stop working and the economy grinds to a halt as people look to other means of economic well-being. At that point, feedback becomes increasingly weak until a new script is implemented. This period of economic crisis is inevitable due to the constant changing of conditions on the ground and the inevitable expiration of any model that makes sense of the world. For a better understanding of how such a process works, it helps to be familiar with the schema of scientific paradigms, as coined by Thomas Kuhn in his book The Structure of Scientific Revolutions.

Kuhn’s Ladder and the Languages of Knowledge

In today’s culture, science is held up with praise, and sometimes disdain, as being an enterprise of absolutes: absolute knowledge confirmed by the absolutes of experimentation and repetition. While I won’t deny that the law of gravity is absolute, the practice of science in many ways resembles Einstein’s relativistic view of the universe. Just as any notion of “up” can only be talked about relative to gravitational fields, the notion of objectivity in science is a social construction that relies on professional consensus regarding various ideas, definitions, technical practices, and accepted theories. This is most evident in the practice of peer review, in which a study is not considered scientifically valid until it has been deemed sound by other scientists within the same field. More subtle and important, however, is the fact that without the existence of such consensus, the scientific enterprise would helplessly drown in a sea of noise.

Consider the field of epigenetics as an example. Genes, as a concept, are considered a scientific fact. The debate surrounding epigenetics is not about the existence of genes but about if and how they do different things in different environments. Getting to this point requires an extremely detailed infrastructure of consensus, not just in terms of guiding theories, but down to the relative meanings of the data returned by an instrument. To get an idea of just how precise this is, imagine trying to explain to a scientist from 300 years ago what a virus is. Without any framework of microorganisms, germs, genetics, cells, or proteins, it would be virtually impossible to give them any definition beyond “these little thingies jump from person to person and make you sick.” Even if they suspend their disbelief, what experiments would you be able to run to convince them that this was true? For any kind of scientific research to proceed, there needs to be a shared language. If you can’t agree on whether genes exist, you can’t have a debate about gene expression. The next rung on the ladder can only be reached if you can plant your foot on the previous rung–otherwise, there is nothing that can be labeled “up” or “down”.

These shared languages, known as scientific paradigms, can also be thought of as a kind of data compression. You don’t need to thoroughly understand every single observation and theory that came before in order to become a scientist–you just have to know enough of it that you have a common semantic frame for building hypotheses and describing the setup and results of your experiments. Under these conditions, the field proceeds under what Kuhn calls normal science: a state in which a number of questions have emerged within the constraints of the paradigm and scientists can spend their time further elaborating on and classifying phenomena within the paradigm’s theoretical framework. This state can only last so long as the paradigm remains a cost-effective way of compressing the data. If the paradigm fails to make meaningful predictions, scientists will slowly look for alternatives and lose faith in the current framework, leading to a period of extraordinary science. Prior to this, theories may be patched up so that they fit the data, and wrong predictions may be outright ignored, but this can only continue as long as the benefits of the paradigm outweigh the cost. If your inbox puts a few of your important e-mails in “miscellaneous”, it still might save you a good deal of energy. You probably wouldn’t say the same if that’s what happened to 80% of your important e-mails.

Most importantly, the theories that comprise a scientific paradigm are not formulated in some universal language of first principles. There are reasons why this is in fact impossible, but such ideas could fill up entire books, and in fact do. For our purposes, it suffices to say that the theories of paradigms are semantically grounded through a combination of shared language with other paradigms, subordination to other paradigms (such as a theory of metabolism being constrained by the laws of thermodynamics), and the possibility that a paradigm or group of paradigms contradicts itself due to an oversight regarding its initial assumptions. Due to the fundamental limits of any sufficiently complex logical system, scientific paradigms in fact hold the seeds of their own destruction, providing feedback as they encounter real-world observations before the feedback inevitably hits diminishing returns followed by an outright harmful ratio of noise to signal:


Courtesy of Nassim Nicholas Taleb: Antifragile

In this sense, every paradigm is ultimately “wrong”, but to look at it through the lens of right and wrong would be a mistake. Science does not, and cannot, happen in a vacuum: in order to get an answer, you first have to ask a question. Every scientific paradigm is fundamentally a set of questions, each with a range of intelligible answers (saying 2 + 2 = 5 is wrong but intelligible, saying 2 + 2 = “ham sandwich” doesn’t make any sense whatsoever.) Knowing which questions to ask requires having an idea of what you’re looking for, which can only be done by finding answers that reveal the contradictions in your original set of questions. Once you find a paradox, you can find a new frame to make sense of your data, but until then, what we cannot speak of must be passed over in silence.

Markets, Paradigms, and Disequilibrium

When I last talked about the phenomenon of feedback in an economy, I suggested that feedback was good up until the point that it compromised the system’s ability to process feedback. At the time, I had no good answer as to when this point was: after all, sometimes the system should outright fail so that a new system, better suited to new realities, can take its place. If we frame markets as Kuhnian paradigms on the other hand, the question can be brought into much sharper focus. Just as a scientific paradigm provides scientists with guiding questions and theories to make sense of their observations and guide their experiments, the currency, laws, and institutions of a market work together to make sense of the feedback that occurs within an economy. In order to get an idea of how this works, we’ll have to revisit our old frenemy, the axiom of utility.

First things first: utility is not about “rationality” in the sense of “smoking is irrational because it’s bad for you.” It simply means that your preferences are consistent: that you do not prefer steak to chicken, chicken to salmon, and salmon to steak. While this is not actually how people behave, as confirmed by numerous psychological experiments, it’s nonetheless a useful concept when not looked at in a vacuum. Within the scope of the market, transactions are by definition an indicator of utility. If you’re willing to pay more for a pound of steak than for a pound of chicken, then that pound of steak is more important to you than that pound of chicken. It might be for the most whimsical or irrational reasons, but in that moment, you’ve made the unambiguous decision that one thing is more valuable to you than another. In the framework of decisions within a market, currency is an accounting identity: you can choose to buy and sell whatever you want, but you have to make a decision about the relative value of everything you consume, sell, and save.
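The consistency requirement amounts to transitivity over pairwise preferences, and it’s mechanical enough to check with a few lines of code. Here is a minimal sketch (the goods and preference pairs are hypothetical, chosen to mirror the steak/chicken/salmon example above):

```python
from itertools import permutations

def is_consistent(prefs):
    """Return False if the preferences contain a cycle (a > b > c > a),
    i.e. they violate transitivity; True otherwise."""
    goods = {g for pair in prefs for g in pair}
    for a, b, c in permutations(goods, 3):
        if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs:
            return False
    return True

# Each pair reads (preferred, dispreferred).
cyclic = {("steak", "chicken"), ("chicken", "salmon"), ("salmon", "steak")}
print(is_consistent(cyclic))  # prints False: steak > chicken > salmon > steak
```

An agent whose preferences fail this check can be turned into a “money pump”: you can sell them a sequence of trades, each of which they prefer, that leaves them back where they started minus the fees.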

Within a scientific paradigm, scientists work to make sense of discrepancies between their observations and the tenets of the paradigm. Within markets, the same thing happens regarding discrepancies between what individual actors value and what the market values. This is most apparent in finance, where investors look to find discrepancies between the price of an asset as assigned by the market (itself an implicit prediction about the later price) and what the investor thinks the price will be later on. The same discrepancies also matter to businesses, which look to make a profit by selling something that’s worth more than what it cost to procure–a complex process that requires all kinds of consideration about present and future prices and the future needs of consumers. Even among consumers the same thing takes place as they strive to get something for nothing by paying less for goods than what they consider the goods to be worth. Each of these transactions acts as feedback, with the market adjusting its prices to fill the gap between actual behavior and expected behavior. All of these examples are extreme simplifications, but the main idea is that economic actors generate feedback by exploiting the differences between what the market knows and what the actor knows, a process known in finance as arbitrage.

It would be a fatal mistake however to assume that this means that the market simply strives towards equilibrium as the discrepancies between supply and demand are flattened out. On the contrary, most of these behaviors push transactions away from equilibrium by adding more economic complexity: innovations create new demand for and dimensions of comparison between goods, investors place bets based on information that has not yet been accounted for, and gluts and scarcities of goods spur the use of substitutes that may not have been used otherwise. With each instance of feedback, actors fill the information gap with information that introduces new gaps; this continues so long as the market can honestly account for the economic behavior of its constituent actors. This process, in which the market effectively processes feedback and creates wealth by reliably increasing in complexity, could be analogously called normal economics.

In the absence of such honest accounting, the market can no longer effectively process feedback and will collapse as it increasingly loses relevance with regards to people’s present needs. To give an example, let’s consider a highly skilled programmer who does work for open source projects. While he might work on these projects for recreational or altruistic purposes, he can only spend as much time on these projects as his finances will allow. Meanwhile, others may benefit from his contributions, but they will spend no money on them no matter how valuable they are, while spending more of their money on things that wouldn’t have as high a relative value were they forced to pay for the software. As a result, markets overstate the value of these other goods and services while understating the value of the software.

This is not to say that there is something categorically wrong with people giving things away for free; remember, all notions of “value” are defined relative to the axioms of the market, not as some categorical good. What it does mean is that the market as a paradigm becomes less useful because the information it provides about relative needs is less reliable. Just like too much of a mismatch between a scientific paradigm and its individual observations can render it ineffective or even downright useless, a failure to account for a new technology or a potential collapse in credit can render a market useless. People will still continue to transact, but more and more of it will be off the books, and a new market will eventually form in order to streamline the extremely inefficient endeavor of performing transactions off the record. During this time, the economy enters a period of extraordinary economics, in which the current market does not make sufficient sense of the economy. We are in one such period now for several reasons, and explaining why may make this idea more clear.

The Theories of Currency: A Speculative Parable

At some point, I’d like to go into a much deeper historical digression to really get at the meat of the ideas posted above, but given the length of this post and my own lack of erudition, we’ll have to settle for a few key points about the past 100 years with some disgusting simplifications. Going forward, I’d like to state that this should all be read as a parable meant to demonstrate a broad idea, not an empirical hypothesis about the causes behind past and present economic crises. More specifically, but just as important, remember that this is about how markets themselves act as tacit models, not a discussion of macroeconomic theory.

The economic crisis of a few years ago spurred a lot of interest in a pivotal moment in American history: The Great Depression. The narrative, supported by the dominant Neo-Keynesian and Monetarist schools of economics, was that this time, with our better understanding of economics, we weren’t going to make the mistake made by fiscal conservatives back in the 1930s. Unfortunately, things have not gone according to plan, with “improvements” in unemployment numbers coming from a combination of lower wages, reduced hours, and a shrinking of the labor force. GDP has not fared much better, showing little increase beyond the tautological increase in government debt. The common reaction to this by libertarians, fiscal conservatives, and members of the Austrian school is that Keynes was a charlatan who was wrong all along. While that may or may not be the case, I contest their claim on the basis that they’re talking completely out of historical context: just because Keynesian economics doesn’t make sense now, that doesn’t mean that it never made sense. Just as every market is a model of a particular time and place, every system of currencies also models within it certain assumptions. These assumptions are too complex to be fully summarized, but I can still get across the gist of what I mean.

During the period in which the Great Depression took place, there was a great deal of easy potential for economic growth. Oil was still a recent discovery and the process of mechanization was still in full swing. For many countries, especially the United States, discovery rates of oil were increasing rapidly with each year (the US did not hit a peak in oil production until 1970) and there was so much to go around that it was a waste not to do something with it. All this growth eventually led to a period of intense speculation, culminating in the events of Black Tuesday, when a collapse in the stock market and the resulting bank run led to a severe deflationary spiral.

None of this happened for lack of material wealth: sure, plenty was poorly invested during the boom years, but most of the resulting damage came from a vicious cycle in which a lack of available money caused cuts in spending, causing further cuts in wages and employment, which caused there to be even less money, and around and around ad nauseam; all of this initially coming from the bank runs that caused most of the available credit in the market to disappear. Had the Federal Reserve been able to create more money, this may have been averted, but as it stood at the time, the United States was on a gold standard, meaning that any available money in the economy had to be backed by a fixed amount of gold. But before the Keynesians jump for joy and the Austrians burn me at the stake, I’d like to point out that this has to be taken in context: yes, there were misplaced investments that had to be corrected by the market, but beyond a certain point, the economy was creating a self-fulfilling state of scarcity despite the enormous amount of material wealth available. The gold standard, in which money is a static and fixed quantity, represents a world where wealth neither grows nor shrinks in the future. This is not only counter-productive in the case of a self-fulfilling deflationary cycle, but is in fact a recipe for disaster as the economy grows too big with too little credit to support it. Although other factors, such as the forced deleveraging via wartime austerity, arguably played a major role in the end of the Great Depression, the world economy’s transition away from the gold standard and the subsequent economic recovery imply a paradigm shift in which a finite money supply based on gold gave way to the fiat money we have today.

Zoom to 2008, when the banks catastrophically failed and were bailed out by the government. Despite taking all the measures that helped end the Great Depression, the recovery has been very limited and some would say that it happened only on paper. Once again, it’s worthwhile to put this in historical context, something that can be done with the help of two pictures (courtesy of Chris Martenson and the EIA respectively):


The first picture shows the ratio of credit market debt to GDP. Other than the spike to the left, which was caused not by a rise in credit (remember: gold standard) but by a rapid drop in GDP, the ratio of debt to GDP (private and public) has reached unprecedented levels in the past few decades. The reason for this literally exponential growth is that our current system of money is based on the issuing of debt. What that means is that money is created whenever someone takes out a loan from a bank. In order to pay off that loan, the debtor not only has to pay back the principal, but also the interest, meaning that they’re going to have to acquire more money than they originally had. Apply this to every dollar circulating in the economy, and it means that an amount of money proportional to the amount of money currently in the system has to be created out of thin air; something that is done not by directly printing money, but by having people take out more loans from more banks. Meanwhile, banks themselves need only keep a small fraction of their deposits in reserve–so for every dollar deposited to a bank, several more dollars are introduced into the market. The result is a money supply that grows exponentially (if you feel the need for further elaboration on this subject, I recommend this documentary.)
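The fractional-reserve arithmetic behind that last point can be sketched in a few lines. The 10% reserve ratio below is a hypothetical illustration, not a claim about actual banking regulation; the point is that repeated lending and re-depositing converges to a geometric series:

```python
def money_created(initial_deposit, reserve_ratio, rounds=1000):
    """Simulate repeated deposit-and-relend cycles: each round the bank keeps
    reserve_ratio of the deposit and lends out the rest, which is re-deposited
    elsewhere. The total converges to initial_deposit / reserve_ratio."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lent-out fraction re-enters as a new deposit
    return total

# With a 10% reserve ratio, $100 of base money supports roughly $1000 of deposits.
print(round(money_created(100, 0.10)))  # prints 1000
```

This is why a dollar lost by a bank removes several dollars from the economy: the multiplier works in reverse when credit contracts.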

The issue here is the opposite of the gold standard. Whereas the gold standard fails when the economy becomes too big for its money supply, debt-based currency can only go on so long as the debt is continually rolled over. If not, then credit will collapse as people default on their loans and banks become insolvent (remember: since a bank only needs to store a small fraction of the loans, that means that for every dollar a bank loses, the economy loses several dollars.) In the event that there’s easy wealth to be exploited that just requires more capital, government intervention has a decent chance of solving the problem. If, however, the money supply has far outpaced any plausible rate of growth in material wealth, then government intervention potentially delays the inevitable by further misdirecting available resources. Where the gold standard failed us by fooling us into thinking that there wasn’t enough to go around, currency based on debt constantly tells us to go ahead and borrow because the future will be more full of schwag than ever. The chart on the right is not very reassuring: the production of the world’s most important energy source remains stagnant even in spite of rising gas prices and the government intervention needed to provide sufficient capital.
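The rollover dynamic can be made concrete with a toy series. If every dollar in circulation originates as a loan at some average interest rate, then repaying the outstanding debt each period requires the money supply to grow by at least that rate; the 5% rate and starting figure below are hypothetical:

```python
r = 0.05       # hypothetical average interest rate across the economy
money = 100.0  # stipulate that all money in circulation originates as loans

# Each period, repaying principal plus interest requires new loans at least
# r larger than the outstanding stock, so the required supply compounds.
for year in range(1, 4):
    money *= (1 + r)
    print(f"year {year}: money supply must reach {money:.2f}")
```

This compounding requirement is the mirror image of the gold standard’s fixed supply: one paradigm assumes wealth never grows, the other assumes it never stops growing.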

Again, none of this should be taken too seriously. All of the ideas of scarcity and abundance that I’ve put forward are based on assumptions about the future availability and economic significance of fossil fuels. While we can make some educated guesses from 50,000 feet, the actual information comes from the feedback provided by the market in the form of currency-based signals. But if that’s so, then what allows me to call the level of debt problematic? Shouldn’t I take it as a signal that the future will be abundant enough to pay it off? The answer is not to look for an overt match with reality, but to look at the level of clarity provided by the current paradigm. In the case of our current monetary system, it helps to look at the signals provided by the centrally controlled discount window, which loans money to America’s major banks at an interest rate decided on by the Federal Reserve. These interest rates generally have a great influence on the cost of borrowing in general, since the cheaper it is for a bank to acquire cash, the more competitively they can price their own loans, which gives the Federal Reserve a way to influence signals of scarcity and abundance. Prior to the crash, interest rates were set extremely low in order to avoid a recession after the dot-com bubble, following which they remained that way in the belief that it was creating robust economic growth. This was not, however, matched up with reality: consumers, businesses, and banks all took on a dangerous amount of debt that failed to take into account the probability of a catastrophic crash. The paradigm’s predictions* miserably failed.

Since interest rates were low, there was little leeway left for lowering interest rates further. Even after resorting to making credit free, banks continued to hoard money and businesses failed to expand or hire. Meanwhile, the stock market has soared while banks pay record bonuses to their executives, creating a scenario in which both the relative and absolute wealth of the most powerful figures in the US economy has increased despite high unemployment and record numbers of people receiving emergency government assistance in order to get by. All of this signifies faulty feedback reminiscent of Kuhn’s extraordinary science, with the current paradigm getting patched up in such a way that it technically fixes the falsifications; corporate profits, GDP, the stock market, and money supply are all healthy as a result of monetary intervention, but the script only survives by fixing the game for a shrinking number of parties at everyone else’s expense. If you look at all of the unemployment data and not the fudged numbers of the official “unemployment rate”, you can see that fewer and fewer people are gainfully employed, as the recovery in the official numbers has been due to a combination of an increase in part time jobs and a decrease in the number of people counted in the labor force. This cannot be overstated: the economic script followed by the United States depends on gainful employment. If you don’t have a full time job, you fall out of the system into the underclass, which is supported by an increasingly large amount of direct government spending. This propping up of a permanent underclass is yet another duct-tape fix that keeps the paradigm from being abandoned at the cost of information content (NB: I am NOT advocating that we starve the poor or get rid of our safety net. I am only pointing out that failing economic systems can push their failures under the table in order to stay afloat.)

What do I mean by information content? Think back to the importance of honest accounting: corporations and banks continue to make profits under the principles of the “free market”, but these profits are largely the result of government spending that props up both the corporations and the consumers who might otherwise not have money to spend on their products and services. Zombie corporations hog resources that may otherwise have been put to use differently, and people who may have found work in an updated economy instead must rely on government handouts as obsolete firms fail to make use of the spare labor around them. Every dollar spent attempting to preserve an outdated paradigm is a dollar that can’t work as feedback, diminishing the effectiveness of price signals as corporations and banks get a free lunch from a system whose resources are ultimately finite. Instead of creating wealth, these bailed out corporations simply relocate it, eventually compromising economic allostasis as ever fewer actors are left to contribute information to the larger economy.

All of this may sound like a staunch argument for an unfettered free market with minimal government intervention, but that is actually not what I’m saying. In this particular case, the fiscal and monetary policy of the United States seems to be a desperate attempt to preserve a paradigm that is no longer working, but that does not mean that unfettered markets generate the most wealth. Since there is actually no such thing as a totally free market, it’s indisputable that every market paradigm is formed by a combination of principles via positiva and principles via negativa and that any successful market must be constructed with both kinds of measures in mind. Many libertarian ideas currently make sense because there are many government interventions that do not make sense in the context of how price signals currently work, but that doesn’t change the fact that the very system of price signals in a market economy is based on an a priori model of what constitutes an effective economy. There are plenty of instances, even now, where a lack of government enforcement is actually detrimental to proper market feedback. Take the example of digital media, where file-sharing has led to consumers being able to understate how much the media was actually worth to them while artists lose the capacity to produce more work due to a lack of compensation. In re-thinking our economic paradigm, including our system of currency, much will be constructed in a top-down manner no matter what.

When dealing with problems within a paradigm, it suffices to look at the internal contradictions and the degradation of feedback, but when constructing a new one, scientists inevitably look for new a priori principles. Ours will inevitably be determined by a number of environmental, technological, geopolitical, and cultural factors; ideas that I would like to elaborate on should I find the stamina to write a second part. In particular, I’d like to get into how the intertwined history of industrialization, centralized states, and the corporation underlies the paradigm of the modern free market. I’d also like to consider some other systems of currency that could not be talked about in this short parable: the Bretton Woods system, privately issued bank notes, and derivatives; all of which broaden our ideas of how currency underpins the kind of feedback that occurs in a market economy. From there, I hope to take a more nuanced view of some of the more apparent problems in the near future: remuneration in an age of information, the tragedy of the commons concerning environmental problems, the loss of gainful employment due to outsourcing and robotics, and how we may be able to reduce economic fragility without compromising the complexity that has brought us so much wealth in the past few hundred years.


*For uncertainty geeks out there, take the word “prediction” with a grain of salt. I do not necessarily mean that banks or economists, or even economies as a whole, are supposed to predict a precise outcome. They are instead supposed to robustly account for present and future needs, often by correctly taking what is fundamentally not certain into account.

Dionysian Ethics and the Antihero’s Journey

Most of us, having grown up on Saturday morning cartoons, had a pretty simple idea of heroism as kids.  The frames by which we understand Star Wars or Batman are some of the first narrative templates we acquire, and as children they seem to pervade the romanticized accounts we have of various historical events; not least the John Wayne “good vs. evil” frames by which we see the American Revolution, the American Civil War, and World War II.  Movies such as Abraham Lincoln: Vampire Hunter (which I haven’t seen, but know enough about the plot to talk about) are a prime example: the first scene in the movie shows Abraham Lincoln as a kid getting into a fight with a slaveowner as an act of protest against the racist forces of his time (in reality, even Harriet Beecher Stowe, one of the most influential abolitionists of all time, had opinions about blacks that would rightfully make her a bigot by today’s standards), and later on his wife is shown as admiring his allegedly anti-racist sentiments (in reality, his wife’s family owned tons of slaves.)  But you don’t need me to tell you that those childhood tales in which only the truly and irredeemably evil were killed were mere fantasies; the ugly complexities and horrors of the world quickly come into view when one starts reading more in adolescence, and while the problems initially seem like inexcusable acts that can be resisted with simple, albeit difficult, actions, it becomes more clear as time goes on that the modern world, in its vast complexity and interconnectedness, takes us along for the ride whether we like it or not.

Such thoughts led me to wonder about what is and what should be considered ethical in a world that has created a horrifying synergy between violence and complexity.  While an analytic approach would leave us fruitlessly looking for axioms and leave us running in circles from the inevitable creep of unexamined assumptions, a lot can be learned by examining the cultural archetypes that shape our unstable consensus about morality: particularly the hero, a figure that stands at the intersection of morality and agency.  For our more complex and cynical postmodern age, however, there is another trope that fascinates me: the antihero.  While the antihero has arguably existed as long as the hero (I think back to the seemingly ambiguous ethics of Hellenic heroes such as Odysseus), today’s ubiquity of antiheroes in TV, movies, and novels seems to reflect some of our collective confusion while simultaneously paying tribute to more timeless ideas that may have been overlooked in moments of idealism.

Where the stereotypical selfless hero serves the Apollonian ideas of order and generativity, contemporary antiheroes are Dionysian destroyers; sometimes selfishly so, other times for a more nuanced version of what we might tentatively call “The Greater Good”.  The use of such terms is not just pretentious window dressing either: the labels of “Apollonian” and “Dionysian” refer to the two main archetypes in Friedrich Nietzsche’s The Birth of Tragedy, respectively representing creation and destruction.  While Nietzsche was primarily concerned with aesthetics, his archetypes were later conceptually re-imagined as the essential conflict between creation and destruction known as dialectic, which has shaped the work of writers such as Hegel, Marx, Schumpeter, Boyd, Derrida, and Taleb among many others.  It’s the key concept behind my entry on allostatic economics, and it also underlies one of the most common templates for understanding the heroic archetype: The Hero’s Journey, as elucidated in Joseph Campbell’s book The Hero with a Thousand Faces.  Before getting into that, however, let’s talk about “The Greater Good” (before we continue, here’s some comic relief in case you need it.)

Feel Locally, Pause Globally

The issues of morality that we struggle with have more to do with scale than anything else.  While we may occasionally debate about whether to tell someone a “white lie” or tell them the harsh truth, our knowledge of morality on a micro-scale is hardly ever a topic for debate.  Anyone with half a brain can tell you that if someone is in danger, helping them is the right thing to do, or that it’s not okay to murder someone.  The kinds of ethical rules that we follow when dealing with our friends and family and those who are in our immediate vicinity are what make up micro-morality.  Micro-morality is instinctive: it’s based on the response we have to what is up close and personal: our horror at seeing abused animals or starving children, or our instinct to run to help our child when they scrape their knee on the playground.  Rather than being derived from any kind of logical inference, we choose our responses based on our sentiments.

Our sentiments do very well for handling the specific, but they are utterly clueless when it comes to the general.  While it may seem that we are horrified by the wreckage of a natural disaster far away or the casualties of a major war, this cringing of ours only happens when we are thinking about a specific image or story about the event; one in which we can imagine a specific person that is affected by the tragedy; thus the dreary aphorism that “one death is a tragedy, a million a statistic.”  Unfortunately, an individual story can never be an appraisal of the big picture: the world is too complex, and more than just the aggregate of many statistically independent vectors.  Nor does utilitarianism fare any better (in fact, it fares much worse when we consider the kinds of utopian delusions it can lead to), as there is no simple numeraire, let alone method of calculation, that can tell us what is categorically best in the big picture.  Even if we could, it would likely look horrifyingly callous to our sentiment-based instincts: for every isolationist who sees Obama’s drone strikes as a heartless calculated slaughter, there’s a neoliberal who sees Ron Paul’s fiscal conservatism as an economic nightmare for those whose survival would be threatened by the resulting contraction of the world economy.  Even in the cases where one measure could obviously do more good than another, it often contradicts our basic moral instincts: the second set of casualties from the September 11th attacks came from people avoiding flying and driving their cars more often.  Cars are literally over 50 times as dangerous (in casualties per million miles traveled) as planes, and tens of thousands of people in the US die in car crashes every year, but we’ll unfailingly feel more terror at the instant one-time taking of 3,000 lives.

One might consider that to be a case for moral relativism, but this is something that I strongly disagree with.  The tired argument that there is no logically provable system of first principles for “right” and “wrong” runs in direct contradiction to our instinctively defined ideas about how people ought to be treated.  Those who say “well, just because our genes are fooling us doesn’t mean it’s true” somehow miss that our subjective experience as humans is fundamentally incorrigible, and that saying logic trumps our experience is like saying that someone crying out in pain isn’t “actually in pain” because the wrong area of the brain is lighting up.  This has always been very tough for me to argue, and I blame it on how we currently use the dichotomy of “subjective” and “objective” to mean “relative” and “absolute”–but that implies that there’s something inherently “relative” about experience and something inherently “absolute” about things that are defined independently of experience.  Instead, I’d say that micro-morality is subjective but absolute, due to its strict adherence to our emotional state, and that macro-morality is objective but relative, in that it is something we can reason about but too complex for us to ever find an absolute answer.

Of course, even if we had a good idea of what counts as “the greater good”, we still have the pesky issue of unintended consequences, which plague nearly every well-intentioned effort in our complex global civilization.  If the risk of basing our macro-morality on sentiment requires us to be cold and calculating, then the futility of quantifying what serves the “greater good” requires that we be merely cold; to pause and think twice before we choose our actions.  Put more simply: micro-morality is hot, macro-morality is cold.  Unsurprisingly, the courageous and selfless heroes we see in stories are almost always hot-blooded: they feel anger, passion, indignation, and determination.  Antiheroes (who, yes, I still have yet to define more clearly), by contrast, have cultivated a greater stillness; perhaps with the exception of their more personal hangups, which can serve to motivate them in their goals.  This is no easy task: it is inescapably human to see a wound and want to bandage it, but sometimes this very urge gets in our way when what’s necessary is not creation but destruction.  The movie Batman Begins revolves around this idea, but rather than watch two hours of Christian Bale making funny noises (okay, okay, it was actually a good movie), I invite you to watch a ten-minute presentation by Slavoj Zizek on the unintended consequences of charity.  I don’t fully agree with him, but his argument is framed along the same lines, and acknowledges the unpleasant necessity of creative destruction:

Smith’s Telos vs. Schumpeter’s Deities

Seeing destruction as good is inherently counter-intuitive.  Although we sometimes speak of “destroying” corruption or poverty or discrimination, these are gentle abstractions.  Real destruction always comes with morbidity, and oftentimes with casualties.  Our association of good with creation and evil with destruction is in fact a natural extension of our micro-morality, and is embedded deeply enough into our psyche that even Zizek, despite seeing charity as a tragic irony that keeps a failing system from being destroyed, immediately sees George Soros’ acts of economic destruction as a supposedly obvious example of why global capitalism is a morally problematic system (NB: I am not making an argument for or against capitalism; that’s an entirely different subject.)

Without such economic destruction, however, resources would be indefinitely tied up in places where they don’t do any good (for more elaboration, see my previous post Phenomenological Opacity, Accounting Identities, and Allostasis).  That is not to say, however, that this destruction is not a dirty job.  It is tempting here to even say that Soros is doing nothing wrong, that he is merely “allocating resources more efficiently”, but such an idea hearkens back to the flawed concept of utilitarianism by implying that there is something to be maximized, and by extension some final outcome that we get closer to with every improvement in efficiency.  The result is not a cyclical view of history, but rather a teleological one; one that has arguably entered our own modern times in the form of our popular faith in the notion of “human progress.”

Although I’m no expert on early liberal philosophy, the philosopher John Gray, in his book Black Mass, has taken note of the teleology inherent in philosophers and economists of the liberal tradition.  The popular notion of the “Invisible Hand” of the market was originally a reference to God, whom Smith believed to be the guiding force behind the complex coordination of many individual actors.  This might just be a pantheistic interpretation of the process of self-organization, but Smith’s devout Christianity suggests that this process was nonetheless directed towards some final end.  The presence of a divine benevolence behind these transactions also provides a comfortable means to reconcile our sentiments with the greater order of things, as God is a human face to put on what is otherwise an emergent network of individual heuristics.  From this more teleological viewpoint, there is hope of a sound justification: the idea that should we choose wisely, or perhaps submit to a higher power, we’ll have done what’s ultimately correct; an idea that I’ll revisit in a bit.

In the absence of efficiency, utilitarianism, or any kind of anthropomorphic teleological force, morality takes on a quality of absurdity.  With no cosmic plan on which to anchor our macro-morality, we are left to look at Soros’ creative destruction through the micro-moral lens of sentiment.  It would almost seem here that he really is doing no good through his acts of destruction, but as my peer Greg Linster put it, without death, there cannot be life.  Forest fires burn down trees and spread the nutrients contained in the ashes, genes improve through natural selection, and failed businesses go bankrupt and cede their resources to new ventures.  This is the essence of what the economist Joseph Schumpeter called “creative destruction.”**  History, in this view, is cyclical rather than teleological; a fundamentally allostatic process where the flattening out of its cycles is mere death rather than any grand revelation.  In this view of things, destruction is necessary, but not in any way that is unambiguously reconcilable with our sentiments.  There’s no grand plan that tells us what to destroy, just destruction happening for its own reasons that vary with each case.

Even then, it is not only human but also morally imperative that we do not entirely disconnect from our sentiments, as they still serve as the sole basis for our macro-morality; we still ought to cringe when we witness the suffering of individuals.  For this reason, the contradictions of macro-morality do not suggest moral relativism so much as they reveal some fundamental absurdity about the world.  Where Adam Smith’s invisible hand can be equated with a benevolent God that will eventually liberate us from a petty and convoluted reality, Schumpeter’s blind and ubiquitous creative destruction more closely resembles the rowdy and debauched deities of the classical era.  According to the Hero’s Journey, the hero always submits himself to a higher power; the antihero, should he take on his mission, does the same for Schumpeter’s raucous band of supernatural knaves.

The Dionysian Journey

While the concepts of dialectic and creative destruction may only have been formalized a short time ago, they’ve been passed down tacitly through myths for countless generations, according to Campbell.  The central theme of the hero’s journey is rebirth: death symbolizes a fundamental rite of passage.  Thousands of years later, the same wisdom still applies: in order to grow, your old self needs to be put to the test and broken; you need to die so you can emerge as something stronger.  Every adaptation is a kind of death, and those who become too fearful of such death inevitably submit themselves to a slow and painful process of atrophy.  The hero undergoes this transformation after he is given the call to action, when some force has thrown the world out of balance.  In order to bring the world back into balance, he must undergo a process of transformation, after which he is able to restore the world as he knew it.  The same task is incumbent upon the antihero, but his transformation is a far harsher one that tests the limits of his very humanity.  It is different for each antihero, and they fall into a number of different archetypes.

The most straightforward kind of antihero may be the one known as “lawful neutral.”  They are not necessarily unethical, but their primary drive is a relatively rigid sense of duty rather than sentiment.  James Bond is a textbook example: many of the villains he faces are agents who were previously betrayed by either him or his organization in favor of the mission.  In newer incarnations, Bond is a much more dark and brooding character who’s learned the hard way that sentiment is a liability, and has stoically resigned himself to serving as an apparatus of order.  His duty is simultaneously his remaining connection to humanity–not necessarily teleological, but the priority of saving human lives is a micro-moral one that keeps some semblance of reason.  Jack Bauer is another good example of such a character: he is willing to kill, torture, and break the law in order to protect the country from existential threats; but not without a heavy moral toll that exacts itself upon him and culminates in his ritual of atonement with an Imam at the end of the penultimate season.  The link to humanity is nonetheless a precarious one: both Bond and Bauer have enemies who were originally on their side but have suffered too much from the violence of their role.  Some, like Alec Trevelyan and Tony Almeida, have simply taken too much damage from what they’ve been through (a near-death experience and the loss of a wife and child, respectively.)  Others, such as Stephen Saunders from the third season of 24, have lost faith in the system that they support, and decide that something drastic must be done.  They, too, are a kind of antihero, and one that I consider to be the opposite of the lawful neutral: the fundamentalist.

Where the lawful neutral usually plays the role of hero, the fundamentalist more often than not takes up the role of villain.  They are almost always seeking a kind of finality; unlike the conventional hero who works to keep the world in a kind of balance, the fundamentalist is looking for dramatic changes, revolution, and in some cases either apocalypse or utopia.  For this reason, they are oftentimes the villains of stories, as they cross the line from respecting Schumpeter’s deities to delusionally believing in Smith’s benevolent telos.  Saunders, whom I mentioned a moment ago, is one such arguably delusional fundamentalist.  After years of torture in a POW camp in the Balkans, his perspective changes and he finds the system upheld by Bauer and his counter-terrorist friends to be morally abhorrent.  His grievances against the United States, although arguably valid (NB: not making an anti-American argument here, just acknowledging that empires commit large-scale crimes), are taken to an extreme in his plan to cripple the American empire once and for all by releasing a deadly contagious virus that kills 90% of its victims.  A more subtle example can be found in Alan Moore’s graphic novel Watchmen, in which a former superhero by the name of Ozymandias stops an impending nuclear war by faking an alien invasion that results in the death of hundreds of thousands in New York City.  His act was very likely necessary for the survival of civilization, and his own words show just how aware he was of the gravity of his actions:

“What’s significant is that I know.  I know I’ve struggled on the backs of murdered innocents to save humanity… But someone had to take the weight of that awful, necessary crime.”

It would seem from this moment of alleged self-awareness that he had indeed grown from an idealistic young crime-fighter who believed that violence could be solved like a simple optimization problem, to a man who embraced the worst violence imaginable to prevent an even bigger catastrophe.  His utopian hubris, however, is shown in full view as he gloats to his would-be saboteurs about what he has supposedly done for humanity:

“My new world demands less obvious heroism, making your schoolboy heroics redundant.  What have they achieved?  Failing to prevent Earth’s salvation is your only triumph, and yet that failure overshadows every past success!  By default you usher in an age of illumination so dazzling that humanity will reject the darkness in its heart…”

The lawful neutral and the fundamentalist both find manifestations in real life as well.  Modern warfare exacts suffering on a horrifying scale, yet even in the best of cases it is often deemed a necessary action.  Two people could spend a lifetime arguing whether Henry Kissinger was a war criminal or a national hero, but for our purposes it suffices to say that he felt obliged to keep peace by maintaining a balance of power between the world’s two superpowers and to protect the well-being and safety of the American people.  The fundamentalist also embraces this ambiguity in real life, though I’m convinced that fundamentalist organizations such as Al-Qaeda engage in a much more unambiguously senseless brand of violence that comes from utopian fantasies; a matter that I may revisit in later posts.

Perhaps most true to the antihero is a third kind, which, taking a note from Venkatesh Rao’s essay The Gervais Principle, I have labeled the sociopath.  Rao’s sociopath is not so much a sociopath in layman’s terms as he is someone who has withdrawn from the socially constructed reality of his former companions (who make up two other subgroups, the “losers” and the “clueless”) in order to answer to what he considers a higher ethical code.  In this way, he is very similar to my own description of the antihero, but differs in that he gains a more fundamentally nihilist outlook and may find himself completely disconnected from any trace of human sentiment.  Having already addressed order and revolution as two different antiheroic moral codes, my own categorization of the sociopath is one who has abandoned much of his morality to seek his own personal gain.  Unlike the villain, however, the sociopath is usually presented more sympathetically and is ultimately looking for some kind of redemption.  Walter White, the main character of the show Breaking Bad, is a prime example: originally cooking meth in an attempt to save his family’s finances before he dies of cancer, he slowly slips away from his loved ones and becomes caught in a struggle against his own addiction to the new-found power he feels as he ascends to the status of drug kingpin.  In the realm of video games, Sarah Kerrigan, one of several protagonists in the Starcraft franchise, undergoes a similar transformation: originally an idealistic freedom fighter with a history of family deaths, abduction, and experimentation, she is eventually mutated into an alien-human hybrid who quickly becomes known as the scourge of the sector.  Consumed with rage at the betrayal that led to the incident, she is seen as a sympathetic character despite her status as arch-villain.  When she miraculously regains her humanity, she quickly realizes that it is her inevitable fate to transform back into the hybrid, and while her actions become more moral, she nonetheless decimates entire worlds in order to get back at those who betrayed her and to prepare her army to face a greater power that threatens the entire sector.

An arguable sub-category of the sociopath is the narcissist; although in real life the narcissist is different from the sociopath, they are very similar in my current taxonomy.  They are both self-interested, but the quest of the narcissist is more particular: it is a quest for identity.  Walter White in fact falls under this category, because his quest for power is ultimately one for validation.  A more striking example, however, is Don Draper of Mad Men; a man whose only purpose is to build and inhabit his new identity.  Prior to faking his own death, he was Dick Whitman.  Since then, he will do everything in his power to prevent anybody from finding out his past, even declining to tell his first wife and keeping traces of his old life in a locked cabinet.  It is not just this overt scam that he is trying to preserve, however: everything from his marriages, to his affairs, to his quest for power is part of his attempt to create a convincing identity to inhabit that is as far away as possible from his old self.  As the facade breaks down in various places, he ends the most recent season by making plans to move to California, yet another scheme that gives him the hope of truly “starting over.”  The narcissist, interestingly enough, could be seen as a cross between the sociopath and the fundamentalist, since the quest for identity is its own utopianism; a belief that if only they could play the perfect role, all would be well.

Like all antiheroic quests, however, this one does not lend itself to clean endings and revelations.  Some antiheroes, like the lawful neutral, are more likely to understand this than others, such as the fundamentalist.  Like the hero, however, the antihero undergoes change, and so the fundamentalist can always take on a less ruthless goal than utopia, while the lawful neutral may one day become disillusioned with the continual injustice he has decided to prop up.  At the end of Watchmen, we are left to wonder what choice Ozymandias will make when he asks the godlike being Dr. Manhattan about the true significance of his actions:

Ozymandias: I did the right thing, didn’t I?  It all works out in the end.

Dr. Manhattan: “In the end”?  Nothing ends, Adrian.  Nothing ever ends.

*This is one of the reasons why I continue to see literary analysis as a field that is far from irrelevant, even if it produces a great deal of frivolity in the process.


**There was some philosopher well before Schumpeter that came up with this idea, but I don’t remember who that is.  Whether we like it or not, the concept is popularly attributed to Schumpeter.


Phenomenological Opacity, Accounting Identities, and Allostasis

In my previous post, I made a distinction between cybernetic theories, which address the internal decision making process of a system, and phenomenological theories, which identify stable correlations between observable properties.  In that post, I suggested that we can use cybernetic theories to figure out which phenomenological theories can give us the most leverage with regards to changing outcomes; for example, indirectly controlling your body’s energy balance through changing what you eat is a more leveraged strategy than trying to directly control your calorie intake.  The truth, however, gets even more complex: there are some phenomenological constructs that are so basic yet so shrouded by complexity that you cannot observe them in a very meaningful way: instead, they can only be used as a construction that makes more complex predictive theories logically sound.  Here, I’d like to show that these two concepts, opacity and accounting identities, can illuminate how systems primarily manage and adapt to feedback in order to stay alive, and how this changes the way we should look at economics among many other fields.

Calories, Again

In my nutrition example, I advocated for an approach to eating that emphasizes what you eat rather than how much you eat.  My explanation at the time had to do with the concept of leverage; and while it is true that there is more leverage in this approach, there was another fact that I simply left out: we don’t have an accurate idea of our calorie intake and expenditure.  Despite the fact that people calorie count by logging what they eat, what they do at the gym, how much they walk, and so forth, it is still a very crude approximation.  Not only do we not know exactly what is in our food and exactly how much any given exercise session will burn, we also need to account for all kinds of things such as resting metabolism (which is affected by all sorts of factors), thermogenesis, whether calories are going to fat or muscle, the calories burned by our brain (I’m hungrier at lunch on days where I have to concentrate a lot), where your energy comes from while exercising, how efficiently your body performs a specific exercise, and so on.  You might think that even with all this, it’s reasonable to approximate; the problem is that it only takes 3500 excess calories to gain a pound of weight.  That means that eating just 50 more calories per day (about 2.5% of your standard 2000 calorie diet, which is considered the margin of error for simple statistics and most likely an unrealistically low margin of error for something as imprecise as calorie counting) will mean gaining a pound in a little over two months.  That on its own doesn’t sound like much, but it adds up to about 5 pounds per year, which would be quite a bit over a few years.
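The arithmetic above can be sketched in a few lines, using the common (and admittedly simplified) rule of thumb that roughly 3500 excess kcal corresponds to one pound:

```python
# Sketch: how a small, persistent calorie-counting error compounds.
KCAL_PER_POUND = 3500          # simplified rule of thumb
daily_surplus = 50             # kcal/day of hidden error

days_per_pound = KCAL_PER_POUND / daily_surplus
pounds_per_year = daily_surplus * 365 / KCAL_PER_POUND

print(days_per_pound)               # 70.0 days — a little over two months
print(round(pounds_per_year, 1))    # 5.2 pounds per year
```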

You might think that this is a simple matter of errors cancelling one another out; that you’ll have as many days where you’re 50 calories below your target as days where you’re 50 calories above it.  In order to explain why this thinking is flawed, I’ll take a detour into different kinds of randomness.  The most commonly known kind is Gaussian randomness.  This kind of randomness is predictable and works as follows: imagine that you have a coin and decide to toss it 8 times.  The odds of it coming up heads are always 50%, the same on every toss.  That means that you can easily get the odds of how many heads you get out of the 8 tosses.  The chances of getting no heads at all (or no tails at all) are pretty low (50% to the eighth power), because there is only one way to arrive at such a configuration.  On the other hand, there’s a very high chance of getting four heads and four tails, or three heads and five tails, or five heads and three tails, because there are many different timelines that will get you to that configuration (maybe the first four tosses are heads and the second four are tails, or maybe it alternates, or any number of things.)  In fact, the odds of getting all heads on as few as 8 coins are so low you should never worry about it (1 in 256).  You can in fact see the probabilities of various outcomes (all tails on the left, all heads on the right) in a simple (and well known) curve:

Source: Wikipedia
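For what it’s worth, the probabilities behind that curve can be computed exactly; a minimal sketch for 8 independent fair tosses:

```python
# Exact binomial probabilities for 8 independent fair coin tosses.
from math import comb

n = 8
probs = [comb(n, k) * 0.5**n for k in range(n + 1)]  # k = number of heads

print(probs[8])   # 0.00390625 — all heads: 1 of the 256 sequences
print(probs[4])   # 0.2734375  — four heads: 70 of the 256 sequences
```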

Why is that important?  Because you know that to a certain degree, your coin tosses will almost certainly cancel one another out.  The problem is that the reason this works for the coins is the same reason it won’t work for other things: the outcomes of the coins are independent of one another.  A coin coming up heads on one toss does not affect the probability of a coin coming up heads on the next toss.  This cancellation breaks down, however, as soon as the factors interact.  And this is exactly the problem with calorie expenditure: your diet and exercise are constantly interacting with the various processes in your body that are beyond your control, and even if you eat and exercise exactly as you’ve planned, your body will still be making decisions about all kinds of processes you don’t control.  When you have these interactions, you have a curve that looks something more like this:

Source:  (please contact me if you are the owner and don’t want this image used.)

If we used the dark blue curve for coin tosses, that would imply a higher probability for something like all 8 coins coming up heads–and it would actually be true if the outcomes of the coin tosses affected one another.  What’s more important to note for our purposes is that there is no guarantee that individual outcomes cancel one another out–which was the reason why, in our original example, we didn’t have to worry about getting 8 heads in a row.  And I haven’t even added the fact that our behavior is not totally in our control, and that even if we superficially maintain some rules, there will always be subtle ways to work around them (maybe you start running twice a week but then end up spending more of your spare time parked in front of the TV.)
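To make the effect of dependence concrete, here’s a toy simulation (the “sticky” coin is my own illustrative assumption, not anything from the statistics literature): each toss repeats the previous outcome with probability 0.8 instead of being independent.  Extreme runs like 8 straight heads, negligible for independent tosses, become common:

```python
import random

def prob_all_heads(n_tosses=8, stick=0.8, trials=200_000, seed=1):
    """Estimate P(all heads) when each toss repeats the previous
    outcome with probability `stick` (the first toss is fair)."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        heads = random.random() < 0.5   # fair first toss
        streak = heads
        for _ in range(n_tosses - 1):
            if random.random() >= stick:  # with prob 1 - stick, flip
                heads = not heads
            streak = streak and heads
        hits += streak
    return hits / trials

# Exact value is 0.5 * 0.8**7 ≈ 0.105, versus 1/256 ≈ 0.004 when independent.
print(prob_all_heads())
```

The same number of tosses, the same 50/50 coin on average, but the tails of the distribution fatten dramatically once outcomes influence one another.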

A fair question to ask right now is: “Alex, what’s the difference between this and what you were talking about yesterday?  Isn’t this just more stuff about leverage?”  Not quite.  In my previous entry, I was talking about how much control we have over a given variable.  Here, I’m talking about how much knowledge we have of a given variable.  It’s not just that we have little direct control over calorie intake; we can’t even get a reasonable approximation of how many calories we eat and expend in a short period of time.  In other words, our energy balance is opaque.

So what makes this a phenomenological variable at all if we can’t observe it?  The answer is that the phenomenon is (to an extent) observable; we know for a fact that the mathematics do work out such that organisms get bigger with calorie surpluses and smaller with calorie deficits, but when we look at the big picture we simply can’t know or predict the exact rate at which calories are entering and leaving the body at any given moment.  The problem is that we believe we can; but let’s take a look at the actual definition of “energy balance”:

Energy Intake = Internal Heat Produced + External Work + Energy Stored

Note that all this does is take four variables and relate them to one another–that if you’re gaining weight (an increase in “energy stored”), then by definition we are either taking in more energy, producing less internal heat, or doing less external work.  At no point is there any kind of inference happening–these variables simply describe what is happening.  These definitions are important for making sure that any theory of weight gain or weight loss is consistent with thermodynamics, but that does not endow them with any kind of inferential power.  What we are left with is an accounting identity, a mathematical definition that unfalsifiably relates variables to one another.  Even though the laws of thermodynamics are actually falsifiable, for the purposes of nutrition, if we were to find ourselves gaining weight, we would not question the laws of thermodynamics; we know from this definition that it would either have to be a rise in energy intake, a drop in thermogenesis (body heat production), or a drop in exercise.  And of course, that isn’t even addressing whether the extra weight is muscle or fat; most importantly, however, it does not predict anything.
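In code terms (the variable names are mine, purely illustrative), the identity is just bookkeeping: given any three terms, the fourth is fixed by definition, and no combination of inputs can ever contradict it:

```python
def energy_stored(intake, internal_heat, external_work):
    """Energy Stored = Energy Intake - Internal Heat - External Work.
    This is a definition, not a prediction: it always balances."""
    return intake - internal_heat - external_work

print(energy_stored(2500, 1700, 600))   # 200 kcal stored, by definition
print(energy_stored(2500, 1700, 900))   # -100: a deficit, by definition
```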

And yet, despite all this opacity, we are all remarkably stable in our weights.  This is not only true of people of an average weight–it’s also the case for people who are obese; they do not keep gaining weight indefinitely.  As pointed out in my last entry, the body can regulate itself with a remarkable degree of sophistication; and it must–although we constantly speak of calories, it is absurd to forget that our diet requires many different nutrients at varying levels, which themselves control all of the processes that ultimately decide the flow of energy; and that’s just one of many nuances in our overall nutrition.  If you believe in calorie counting, then I have one piece of advice: instead of thinking of it as “I’m going to try to control how many calories I eat”, instead think “I’m going to try to implement a pattern of eating and exercise that results in a calorie deficit.”  In other words, use calories as a proxy for whether what you’re doing makes sense or not.  If cutting down calories means feeling dizzy and irritable, you’re doing it wrong; your brain is not supposed to go on a diet.

But the significance of accounting identities and the opacity of the phenomena that they represent may apply much more deeply to a field whose language games are far more sinister than that of nutrition: economics.


The Elusive Concept of “Wealth”

When I was younger and knew even less about economics than the paltry amount that I know now, I found myself confused by the abstract numbers and concepts that seemed to dominate any discussion on the economy: GDP, inflation, interest rates, employment, and so forth.  Although many of these numbers serve their purpose, I found, and continue to find, that many of them act as if there is absolutely no real world behind the economy from which we get finite resources and use them with our finite amounts of energy and time: a problem that is really the inverse of the “calorie fallacy” (impromptu name.)  This led me to an analogy that I still continue to use to this day: talking about economics without natural resources is like talking about metabolism without food.

Rather than hearing much about things like the world’s supply of oil or the amount of energy needed to procure food, economists think in terms of prices, credit, liquidity, employment, and other factors that are not about the wealth itself but about the system that controls all of the wealth.  To someone who has never read any economics, or perhaps has never lived in a society that uses money, this must seem absurd: isn’t what matters how much actual wealth we have?  Well, yes; that, and our ability to allocate that wealth, are what actually matter.  But this raises two questions: (1) what counts as “wealth”?  How do we compare food and fuel, or luxury and necessity?  Is a pound of corn of the same value as a pound of barley?  What about less tangible things such as safety or the satisfaction of our emotional needs?  (2) how do we, as a society, choose how to allocate our resources in such a way that we can meet our needs and grow our collective wealth?

As a tentative answer to question (1), I will define wealth as surplus thermodynamic energy.  This may seem a bit strange, but it will make more sense upon explanation.  For answer (2), I will have to go into a little bit of economic theory, explaining the concept of comparative advantage, which is arguably the cornerstone of classical economic theory.  These two concepts, surplus energy and comparative advantage, are tightly linked and when put together illuminate a third concept that I would have trouble explaining otherwise.

So what do I mean by surplus energy?  The definition of energy is quite simple: the ability to do work.  In classical mechanics, work is defined as the ability to move an object that is in a state of rest or to stop an object that is in a state of motion–in other words, the ability to overcome inertia.  The thermodynamic definition of work is more nuanced and would be more comprehensive, but all we need to know for our purposes is that we need energy to grow food, to stay warm, to reproduce, to protect ourselves from predators, to maintain the rule of law, to conduct symphonies, etc.  In fact, it’s required for any kind of activity, mental or physical.  The more energy we have, the more of these things we can do.

In early agricultural societies, most of this work went to the bare necessities: staying fed, staying warm, and staying safe.  Almost all of the energy provided by the food grown was spent on growing more food and doing anything else that was necessary for survival.  With so little energy left over, there wasn’t much capacity for doing other things; so a primitive society might have a priest or a shaman of some kind for spiritual guidance, along with a few other simple specialists.  On the other hand, should this society domesticate animals capable of doing heavy lifting, it will be able to grow more food with less energy, leaving spare energy for people to pursue more specialized pursuits and creating a more complex society.  The same may happen with a labor-saving device such as the plow, or some fertilizer that makes crops more nutritious.

You may notice, however, that this is not simply “free energy” coming out of the ether.  In the case of domesticated animals, the animals still have to be fed, or else they’ll starve and won’t be able to do any work at all.  As for labor saving devices, someone still has to put in the work, just not quite as much.  In other words, the surplus energy comes from the tribe becoming more efficient with the energy that they have.  A horse may require food to run a plow, but running a horse with a plow gets much more food grown per calorie spent than having a human do the same thing with a simple shovel.  This notion of efficiency is also the basis for comparative advantage, and by extension, for the entire science of economics.

So what is comparative advantage?  This could best be described with a thought experiment.  Let’s take two tribes, the Oomphs and the Bumps.  The Oomphs are expert lumberjacks, chopping down trees with incredible efficiency and organization; but their farming system is quite inefficient, and so they spend that saved up energy on making up for their lackluster farming abilities.  Meanwhile, the Bumps are most excellent farmers, but they are quite atrocious at cutting down trees.  How can these two tribes improve their lot?  The answer is easy: by trading.  The Oomphs can buy food from the Bumps using their spare lumber.  Since they are so much better at woodcutting than they are at farming, they’ll spend much less energy cutting down the extra lumber to trade than they would by growing the food themselves.  Meanwhile, the Bumps can do the exact same thing with their food supply.  This means that both tribes have much more energy to spare, which can be spent on all manner of things.
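To make the arithmetic concrete, here is a toy version of the Oomphs and the Bumps in a few lines of code (all of the energy costs are invented for illustration):

```python
# A toy version of the Oomphs and Bumps (all energy costs invented).
# Cost, in arbitrary calorie units, for each tribe to produce one unit of a good.
costs = {
    "Oomphs": {"lumber": 1, "food": 5},  # expert lumberjacks, lackluster farmers
    "Bumps":  {"lumber": 5, "food": 1},  # excellent farmers, atrocious lumberjacks
}

def autarky_cost(tribe):
    """Energy spent if the tribe makes one unit of each good for itself."""
    return costs[tribe]["lumber"] + costs[tribe]["food"]

def trade_cost(tribe, specialty):
    """Energy spent making two units of the specialty and trading one away."""
    return 2 * costs[tribe][specialty]

print(autarky_cost("Oomphs"), trade_cost("Oomphs", "lumber"))  # 6 2
print(autarky_cost("Bumps"), trade_cost("Bumps", "food"))      # 6 2
```

Each tribe ends up with one unit of lumber and one unit of food either way, but trading costs 2 units of energy instead of 6; the other 4 units are the surplus.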

But what’s truly important is that this doesn’t just apply to trade between societies–it is also how a modern economy works on an individual level.  Instead of having to grow my own food, provide my own self-defense, and build my own house, I can simply pay someone else to do it, and earn the necessary money by doing what I’m good at.  Note that this is basically what money is for: it allows people to offer to trade their services without having to know exactly what other people want or need.  Now, money is actually far more complicated than this simplified concept, but we can get to those questions later.  What’s important to note for now is that comparative advantage optimizes our use of energy, and in doing so gives us energy to spare, allowing us to create a more complex society.

But one can only optimize energy so much, whether through dividing labor or discovering other ways to use energy more efficiently, leaving the question of how further economic growth happens.  There are two relatively simple answers: either grow the population, or discover new sources of energy.  New sources of energy have been discovered throughout the entirety of human history: fire was mastered over a million years ago, and with it, we were able to cook our food, which breaks much of it down before our bodies have to do any of the work; this meant we needed less energy for digestion and could devote more to other enhancements such as increased intelligence or better hunting abilities.  The energy provided by the wind became the primary means of propulsion for ships and a way to mechanically grind grains.  The examples go on and on, but the most potent one is the discovery of fossil fuels, or more accurately, the discovery of how to put fossil fuels to use through combustion.  It’s no coincidence that since this discovery, economic growth has accelerated at an unbelievable pace; the amount of energy provided by a single gallon of gas is estimated to be around 500 man-hours of manual labor.
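That man-hours figure is easy to sanity-check with back-of-the-envelope numbers (the two constants below are ballpark assumptions of mine, not figures from any particular source):

```python
# Back-of-the-envelope check of the "gallon of gas ~ 500 man-hours" figure.
# Both constants are rough assumptions, not authoritative values.
GALLON_OF_GAS_KWH = 33.4   # approximate chemical energy in a gallon of gasoline
HUMAN_OUTPUT_KW = 0.075    # roughly 75 watts of sustained manual labor

man_hours = GALLON_OF_GAS_KWH / HUMAN_OUTPUT_KW
print(round(man_hours))  # 445: the same order of magnitude as the 500 quoted
```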

You may have noticed by now something else that’s important: if we want to discover new sources of energy, we’ll need surplus energy.  The discovery and utilization of a new source of energy is an effort carried out through enormous amounts of trial and error on the part of scientists, entrepreneurs, tinkerers, and specialists of all kinds.  Solar power has become increasingly advanced and affordable thanks to materials and designs of such complexity that it takes hundreds of people with extremely specific jobs all working together to develop them.  Even the tinkerers who have found simpler approaches could not have done so without the spare time given to us by the conveniences of modern society.  Even the extraction of crude oil now requires amazing complexity as more and more of what’s left is drilled out of reservoirs that lie thousands of feet beneath the sea.

Now that I’ve taken you through the process of specialization and the importance of surplus energy, one can see that specialization is a cybernetic theory and surplus energy is a phenomenological theory.  The problem, however, is that one can’t easily measure “surplus energy”: since a lot of it comes from increased efficiency, we can’t simply measure the amount of electricity, combustible energy, and dietary calories expended by a society in a given year.  In addition, I’ve only been using the notion of “efficiency” in the context of the amount of energy that doesn’t simply get lost in transmission (every transfer of energy loses at least some of it to irreversible entropy), and have not considered that a person may just be spending the energy foolishly.  Nor have we taken into account something else that is much more important: which natural resources will lead to more energy?  Just as our bodies need many different nutrients and can’t use all calories in the same way, our society needs different raw materials and skills to do different things: rare earth metals for solar power and electric cars, rubber for creating tires, plastics for insulating circuits, etc.  All of these resources work together in complex ways to determine what energy we can extract, what energy we can save, and how much energy it will cost to ultimately meet our real needs and preferences.  Saying that the economy needs to take in more energy than it spends in order to grow is every bit as banal as saying that a person needs to take in fewer calories than they expend in order to lose weight.


No Accounting for Taste

Unlike the human body, however, the economy doesn’t even let us reliably use energy balance as an accounting identity, because we simply have no real idea of what “efficiency” is, since we don’t have any true sense of what ultimately benefits us.  In nutrition, we know that body fat is (up to a point) wasted energy, so we know that if we have less than 15% body fat, we have no serious problems with body composition (and even then, body composition does not tell the whole picture about health; there are all sorts of other illnesses and morbidities that can still occur.)  Instead, we need a different accounting identity.  In classical economics, this need was answered by the idea of utility: every person has a set of preferences for what they want, the only rule being that you can’t prefer apples to oranges, oranges to bananas, and bananas to apples all at the same time, since this would not be consistent.
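That consistency rule is what economists call transitivity, and it is mechanical enough to check by machine.  A minimal sketch, using hypothetical fruit preferences:

```python
# The consistency rule is transitivity: pairwise preferences must not form a
# cycle.  A minimal checker over hypothetical fruit preferences.
def has_cycle(prefers):
    """prefers: list of (better, worse) pairs; True if any preference cycle exists."""
    better_than = {}
    for better, worse in prefers:
        better_than.setdefault(better, set()).add(worse)

    def reachable(start, goal, seen=()):
        # Can we get from `start` to `goal` by following "preferred to" links?
        for nxt in better_than.get(start, ()):
            if nxt == goal or (nxt not in seen and reachable(nxt, goal, seen + (start,))):
                return True
        return False

    # A cycle exists if something is (transitively) preferred to itself.
    return any(reachable(worse, better) for better, worse in prefers)

print(has_cycle([("apples", "oranges"), ("oranges", "bananas")]))  # False
print(has_cycle([("apples", "oranges"), ("oranges", "bananas"),
                 ("bananas", "apples")]))                          # True
```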

Utility, however, can only be a theoretical construct.  Ignoring for the moment that people don’t even have consistent needs and preferences, the concept of utility would also imply perfect information about the present and the future; something that only an omniscient being could have.  Instead, we use money; a highly unstable and crude signifier of wealth.  It is a signifier (as opposed to an indicator) of wealth because, as we saw earlier, the concept of “wealth”, let alone “value”, is intractable.  But even if money can’t act as a gauge of wealth, it can still act as a unit of account by allowing us to create stable accounting identities for economics.  Just like the rules of energy balance hold, so do the rules of monetary transaction; if you owe more than you have, you are in debt, and if you import more than you export, you have a trade deficit.  A trade deficit can only be shrunk by exporting more “wealth” as denominated in money; this may be done by devaluing the currency (you sell the goods for more money, but that money is worth less) or by consuming less and exporting the excess wealth, but no matter the method, the money itself must unambiguously balance out.  Another accounting identity that uses money as a unit of account is the “size” of an economy:

GDP = Consumption + Investment + Government Spending + (Exports – Imports) [net exports]

Note that this is just saying “the total amount of economic activity has to be the sum of how much money is spent on consumption, how much is invested, how much the government spends, and how much money is made from exports that wasn’t spent on imports.”  That last item on the list may be tricky to understand, but think about it this way: all spending on imports is already counted in the other categories, so if you didn’t subtract imports, you’d be double-counting them and crediting the domestic economy with foreign production.  What’s important to note is that this identity is not making any inferences, but only stating the clear, unambiguous rule that every dollar that goes through the economy must be categorized under one of these four variables.
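A toy set of books may help; all of the figures below are invented, and the point is only the bookkeeping:

```python
# A toy set of national accounts (all figures invented).  Spending on imports
# is already inside the domestic spending categories, so it must be netted
# out to avoid crediting the domestic economy with foreign production.
consumption = 700   # includes 80 spent on imported consumer goods
investment = 150    # includes 20 spent on imported machinery
government = 200
exports = 100
imports = 100       # the 80 + 20 above

gdp = consumption + investment + government + (exports - imports)
print(gdp)  # 1050: every dollar lands in exactly one category
```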

Since currency does not actually signify wealth in any tractable way, this can be at best a rough approximation.  Although economists talk about “real growth” and “real incomes” by “adjusting” for “inflation”, the truth is that the very concept of inflation is based on comparing money to wealth, which for reasons we’ve already been over is extremely problematic.  So if GDP can’t measure growth or prosperity in any way at all, what’s the point of talking about this or any other money-based accounting identity?  The answer is that we’re asking the wrong question.  It’s not just money that’s the problem, it’s that the very concept of “growth” or “prosperity” is fundamentally the wrong way to think about economics.  Along the same lines, “conservation” or “sustainability” is no better when we consider that we cannot anticipate our future needs any better.  That’s not to say that we shouldn’t worry about the world’s supply of water, oil, topsoil, or food; but addressing those issues in a simplistic top-down manner won’t work because they are so phenomenologically opaque that the only accounting identities we have available to us have to use money as a unit of account.

So what is the right question if it’s not about how to grow or how to conserve?  Before going into the answer, consider the function of money: it provides information to the economy and influences behavior.  You, as an individual, know what you can and can’t own based on not just how much money you have on hand, but also by how expensive it is to borrow money and how available new revenue is.  In other words, money is also a cybernetic entity; it provides feedback, which allows the economy to adapt to novel needs and challenges as they arise.  The purpose of economics is adaptation, with money being one of many mechanisms that provide the information essential to this function.  While more money does not translate to more adaptability, one should remember that calories are not unambiguously linked to health: instead, the dynamics of calories and the dynamics of money both provide us the necessary constraints to make further inferences.  In the case of money, we’ll be able to use its mathematical constraints to illuminate how economies work as systems of adaptation:


Allostatic Economics

The most simple form of feedback in economics is supply and demand, which itself is mediated by money.  The price of something goes up if demand outpaces supply, and will continue to do so until either fewer people want it (for that price) or more of it is supplied (it goes without saying that this also applies vice-versa.)  The same thing also happens with money itself: if there is more money, the “price” of money goes down–both in the form of borrowed money costing less interest and other goods costing more money.  The closing of these gaps is a form of negative feedback, and could be considered the most basic kind of feedback in an economy.  There are, however, more intense versions of this feedback, such as when some type of good or service is extremely overpriced (often because people see its price going up and want to try to buy it and re-sell it for a more expensive price) and then finally the price drops down to something more reasonable.  Another more intense version may come from a change in the outside world, such as the price of some important item skyrocketing due to scarcity, in which case people must cut back their consumption of other goods or find alternatives to the item in question, leading to prices dropping elsewhere and an overall decrease in wealth that makes some sectors of the economy unsupportable.  In all of these cases, the behavior of individuals will have to adjust, and in order to make that adjustment, the amount of money circulating in the economy will decrease since lost jobs, lost sales, failed investments, etc. will require that people conserve what they have.  Keep in mind throughout all of this that we can simply think about this in terms of money, and do not have to think beyond a very rudimentary level about the material wealth underlying the money.
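This most basic feedback loop, with price rising on excess demand and falling on excess supply, can be sketched in a few lines (the demand and supply curves here are invented purely for illustration):

```python
# A minimal sketch of price as a negative feedback signal (the demand and
# supply curves are invented for illustration).
def demand(price):
    return 100 - 2 * price   # fewer buyers as the price rises

def supply(price):
    return 10 + price        # more sellers as the price rises

price = 5.0
for _ in range(50):
    gap = demand(price) - supply(price)  # excess demand at the current price
    price += 0.1 * gap                   # price rises when demand outpaces supply

print(round(price))  # 30: the gap has closed at the clearing price
```

Starting the price too low or too high makes no difference; the gap itself drives the correction, which is exactly what makes the feedback negative.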

From this point of view, recessions, while painful, are necessary feedback.  If credit has been lent too freely, then interest rates (the price of acquiring credit) should commensurately go up; and if houses have become overpriced due to bubble behavior, then we should not continue to pay more for houses.  The same goes for gasoline: high gas prices signal that we need to be wiser about how we use gas, or that we should look harder for new sources of energy.  While this is all true, there’s one major problem: feedback does not exist in a vacuum.  If the economy is harmed too much, it may compromise the very mechanisms that process this feedback.  Consider, for example, lifting heavy weights at the gym.  Up to a certain point, it will feel stressful and may even hurt a bit; you’re giving it your all and dripping sweat on the floor.  After all this pain, you go walk it off and rest for a few days and come back to the gym able to lift an even heavier weight because of the adaptation.  Now consider that this next time, you decide that you can do even more, and raise the weight by a much higher amount than usual.  In the middle of your set, you feel a sharp pain and before you know it you’ve torn a muscle in your arm.  Now you’ll certainly get weaker, at least in that arm, due to the fact that you won’t be able to do any heavy exercise with it for at least a few weeks.  That’s the difference between just enough pain and too much pain.

With recessions, the same logic applies.  For example, if too many people are out of work, they won’t be able to buy anything, and more places either lay off workers or go out of business entirely.  When that happens, it can turn into a vicious cycle; or, if you read my previous entry, a positive feedback loop.  While some pain will correct the relative prices of goods and weed out irrelevant skills and unsustainable businesses, too much at once can lead to a runaway chain reaction.  So we want harm, but not too much concentrated harm.  More specifically, we want negative feedback, because that’s the kind of feedback that results in a correction, as opposed to positive feedback, where pain begets more pain.  Even then, however, there’s a problem: we don’t necessarily know what’s going to spiral out of control and what’s going to ultimately act as beneficial feedback.  In fact, we want the feedback to be sufficiently concentrated up to a point.  To show why, let’s go back to the gym: this time, you’re benching 150 pounds.  After about 10 repetitions, you can’t do another one, and you call it a day.  Your friend next to you, although just as strong, benches 15 pounds and stops after 100 repetitions (for those who don’t believe me, I’ll give you a more extreme example: your friend benches 1.5 pounds 1000 times.)  You both got feedback from the stressors, but you’ll benefit much, much more than he will because of the concentrated dose.  What does this suggest?  That the intensity of feedback has accelerating benefits before it starts to cause harm, an idea that has been explored in more depth by Nassim Nicholas Taleb in his book Antifragile.
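The bench-press comparison can be restated numerically.  Assuming, purely for illustration, that the adaptive benefit of a stressor grows convexly with its intensity (the function below is invented, not anything from Antifragile), the same total workload pays off very differently depending on how it is concentrated:

```python
# Restating the bench-press example numerically.  The benefit curve is an
# invented convex function; any convex function makes the same point.
def benefit(intensity):
    return intensity ** 2

# Same total workload three ways: 150 lbs x 10, 15 lbs x 100, 1.5 lbs x 1000.
for weight, reps in [(150, 10), (15, 100), (1.5, 1000)]:
    print(weight, "lbs:", reps * benefit(weight))
# 150 lbs: 225000, 15 lbs: 22500, 1.5 lbs: 2250.0
```

Under convexity, spreading the same dose ten times thinner costs a factor of ten in benefit, which is the whole argument in miniature.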

Source: Antifragile by Nassim Nicholas Taleb

So why should feedback work better if it’s concentrated, if all that matters is eventually correcting discrepancies?  Before getting into that, I need to address something that has been mostly ignored thus far: economic booms.  Economic recessions almost always follow a time of rapid economic growth (denominated in whatever currency you’d like.)  It is during this time that the discrepancies are built up, since people have more money to spend, and this money ends up getting spent in inefficient and wasteful ways.  Economists of the Austrian school call these built-up discrepancies “malinvestments”: investments in which resources are wasted (or if you want more mathematical precision, investments in which resources are not allocated optimally).  As we noted before, these kinds of discrepancies are happening at all times, though usually in very small amounts, with booms and busts happening when many of them happen at once–which occurs more often and with more intensity than even many economists realize because of how interconnected economic events are (recall my spiel on probability distributions at the beginning of this post.)

Due to the intractability of both our present and our future needs, these malinvestments are inevitable.  Fortunately, they are also desirable (to an extent) for the exact same reason.  Consider the internet as it is now; it is extremely fast and ubiquitous, to the point where it is free to instantly communicate with somebody on the other side of the world.  The infrastructure for this is in part made up of sprawling networks of fiber-optic cables that traverse entire oceans and continents.  Many of these were built during the dot-com bubble in the late 90s and early 2000s, and this was possible because of the amount of money people were foolishly willing to invest in all kinds of digital technologies.  Eventually, most people lost their shirts in these investments and a recession followed, but not without making all of these fiber-optic cables dirt cheap as investors sold off whatever assets remained, providing the world with a whole new infrastructure.

But why the bust, you may ask?  Can’t we just get this growth and try to cushion any fall that happens afterwards?  The problem is that just because we don’t ultimately know what is wasteful, it doesn’t mean that there’s no such thing as waste.  If collapsing housing prices are propped up by government subsidies to consumers, then the government will have to pay for it somewhere; if not by cutting costs elsewhere, then by raising taxes elsewhere or by printing money.  While printing money may sound like the solution, one needs to remember that behind all of the money is a finite, though often growing, amount of material wealth, and buying more of one thing means buying less of another.  The labor, raw materials, and loans that might have gone somewhere else are now tied up in a place where they aren’t worth it.  Just consider what would happen if every restaurant were propped up: there would be tons of real estate, personnel, food, electricity, and gas tied up in restaurants that almost nobody wants to eat at.

The common retort is that this cushion doesn’t matter because growth will eventually outstrip it, but this neglects the possibility that misdirecting too many of our limited resources may in fact hamper future growth by not allowing adaptations to occur.  I blame the common emphasis on the word “growth” for this misunderstanding: when the focus of economics becomes adaptation rather than growth, the boom and the bust are suddenly two sides of the same coin; both of them an essential part of making the changes that better suit us to both present and future needs.  Consider, in addition, that when we measure “growth”, we are talking about it in terms of money, which is not a direct measurement of wealth but a feedback mechanism that follows certain basic constraints.  Booms and busts are increases and decreases in the activity of money, so we should realize that what we’re looking at is not a pattern of abundance and scarcity per se, but signals of abundance and scarcity.  This might seem contrary to ideas such as stagflation, but consider that stagflation is a phenomenon in which the purchasing power of money goes down while GDP, the flow of money through the economy, stays stagnant.

Noting that these ideas of growth and recession are fundamentally about information, I can now make a big claim: it is not growth or atrophy that matters, it is the pattern of growth and atrophy.  This statement, along with the fact that we patently need to both do stupid things and pay for our stupidity (rather than be smart), means that while an economy strives for adaptation, it does not do so through homeostasis, since it does not thrive by staying close to some equilibrium.  The correct word is allostasis, long-term quasi-stability achieved through volatility.  Without this volatility, the economy would be extremely brittle, as all of its decisions would be based on the market’s current (implicit) hypothesis about our current and future needs, allowing no room for the randomness that is necessary to compensate for what is unknown.  More importantly, however much it may seem otherwise, the money itself is just information; our actual security, material wealth, and future challenges are a sea of chaos that is traversed through feedback and adaptation.

What, then, makes a healthy economy?  The answer is volatility above all other things.*  Money does not provide knowledge, but it provides feedback.  Volatility is an indicator of feedback in two ways: the negative feedback loops make corrections as errors come, while the positive feedback loops provide a level of randomness that appropriately handles the uncertainty of what’s unknown.  So if you want to see whether or not an economy is doing well, don’t look at its growth, but rather at its variance; the wider the gaps between boom and bust, the better.  The same also goes for living things: despite the craze for a low resting heart rate, the evidence seems to suggest that it is the variation in heart rate that may ultimately matter.  But forget longevity for a second: anybody who isn’t completely neurotic understands that health is the ability to live a good life, not a long one; and living a good life means having the capacity for wild swings of both good times and bad times.  That’s why in those pharmaceutical commercials you see those scenes with the dad going mountain biking with his kids because he finally got his COPD under control–because he can now have more intense experiences without choking to death.  A better measure, for that reason, might be metabolic range: by how much can you multiply your metabolic output?  I don’t know much about its measurement, but such a metric exists, and it seems to be linked to your peak physical capacity.

After all this, chances are that more questions than answers have come up; not least of which is how we can know the difference between out-of-control positive feedback and very large swings of negative feedback.  A related concern is that there are probably many layers of feedback mechanisms rather than just one, such that less effective feedback mechanisms should be destroyed to make room for new ones–that alone is a headache to think about.  While there is no simple answer to these things, we can still keep our sights in a reasonable range by remembering what Keynes once said: “In the long run we are all dead.”  At the same time, understanding the centrality of allostasis may mean that we can finally get away from the clusterfuck that is occurring between the neo-Keynesians, the Austrians, the conservationists, and probably many more schools of thought that I’ve forgotten.


*For further discussion on volatility, I strongly recommend Antifragile, which I have cited multiple times here.  It is a somewhat less theoretical, but much more empirical, treatment of many of the themes in this post’s subject matter.

Cybernetic and Phenomenological Theories

In a number of debates I’ve had in the past few years, I started to see a pattern in which I came to the same fundamental impasse with people again and again.  It wasn’t about disagreeing over facts, but about a semantic difference that I could not describe until around half a year ago (and have since found so daunting to write about that I’ve put it off for all those months.)  The difference came up specifically in debates about things of enough complexity that we do not understand what drives their behavior on the inside, but have a good idea of how their more apparent and observable properties are related.  The result was a constant battle of language games in which theories were seen as nonsensical because they were supposedly in contradiction with things that were much more apparent.  How could carbs/genes/hormones/etc. be responsible for obesity if the “real” cause was taking in too many calories?  How could unemployment lead to less overall wealth if jobs are only a means to an end?  If someone is depressed and behaving in self-destructive ways, why can’t they simply choose to do something to help themselves?  The problem is that none of these questions were dealing with things that were mutually exclusive; in all of these cases, they were dealing with two different types of theories.

The theories dealing with larger structures such as genetics, employment, and behavioral disorders are ones that I describe as cybernetic theories.  Cybernetics is the study of how a system regulates its inputs and outputs in order to maintain stability.  That can apply to something as simple as how a thermostat regulates a room’s temperature, or to how a human body regulates its metabolism, energy levels, and behavior in order to maintain homeostasis.  Rather than looking at mere correlations between things that happen, it looks at the actual decision-making of a system.  But what makes a particular theory cybernetic?  A cybernetic theory is a hypothesis that attempts to explain a mechanism by which a system’s behavior can be predicted.

The more apparent causes that can be seen through observation are ones that I describe as phenomenological theories.  In science, a phenomenological theory is “a theory that expresses mathematically the results of observed phenomena without paying detailed attention to their fundamental significance” (Thewlis, J. (Ed.) (1973). Concise Dictionary of Physics. Oxford: Pergamon Press, p. 248.)  An example would be the fact that we can observe that an organism loses mass when it consumes fewer calories than it expends, and gains mass when it consumes more calories than it expends; we don’t have to know why it’s true to see that it is.  One can also note that the prosperity of a nation depends not on abstract economic numbers, but on actual material wealth: fuel, food, infrastructure, etc.  More abstract entities such as currency are things that help decide where resources are allocated and who gets what–so a theory of currency is a cybernetic theory explaining how resources are acquired, distributed, and used; currency doesn’t make more resources pop out of the ground, but it does give people an incentive to look for resources that are in demand and helps prioritize who should get what resources.  In the same way, everyone can agree that jobs are not an end in themselves (otherwise, it would just be useless work), but most of us see employment as an important number because if not enough people have jobs, it would require that we devise a completely new system for distributing wealth to people.

Domain      | Cybernetic                      | Phenomenological
Nutrition   | Hormones                        | Calories
Economics   | Currency, Employment, Interest  | Resources, Labor
Psychology  | Pathology                       | Behavior


Your Decisions vs. Your Body’s Decisions

Now that I’ve gotten the general gist across, we can get into examples.  In order to keep things clean, I’ll only go into one: nutrition.  This is where I’ve encountered endless language games in which many people make the ridiculous accusation that those who go beyond calories-in-calories-out are denying the rules of thermodynamics.  I’ve seen this problem even among some of the smartest people I’ve read, such as a debate about the problems of overeating between Martin Berkhan and Gary Taubes, author of Why We Get Fat.  Their views are mostly similar (though not entirely), but their biggest disagreements seem to largely come from arguments that are ultimately about semantics.  Taubes says that overeating is not the true cause of obesity, but merely an inevitable side effect of the true cause, which is a bad diet.  Berkhan responds by saying that you don’t magically burn off all of the food if you eat more calories than you expend, but then says that the reason that dietary fat is less fattening is that fat is more satiating than carbohydrates.  What Berkhan missed is that Taubes would agree–it’s not that the calories magically disappear, it’s that the amount of calories eaten is regulated by a mechanism that responds differently to carbohydrates than it does to fat.  I would personally add that not only is that the case, but that a good diet and a healthy body mean that the excess energy in your body is more easily accessible, and so you will not only have an easier time burning it, but will be naturally inclined to do so.  Body fat is a battery, and obesity occurs when the body keeps charging the battery but not using any of it.*

They’re actually agreeing about the cybernetics of this: in both accounts, eating more fat and protein and fewer carbohydrates leads to the decision to consume fewer calories; what we experience as hunger and satiety are expressions of more fundamental mechanisms that interact in order to regulate the system’s decisions, the most central of these being hormones.  Hormones in our body act as messengers and end up deciding how hungry we feel, where calories in our bodies go, how physically restless or restful we feel, and so on.  If food is the natural resource base of our body, then metabolism is the web of economic links, with hormones perhaps acting as our financial and monetary system (interestingly, I believe that there is an analogue between the hormone insulin and the effect of interest rates on economies–a topic I’ll briefly revisit later in this post.)  At the same time, Taubes does not deny that there is an absolute correlation between calorie surplus and weight gain–the difference is that he rightfully points out that nobody is answering the question of why this calorie surplus is happening:

We don’t get fat because we overeat; we overeat because we’re getting fat.

Taubes, Gary (2010-12-28). Why We Get Fat: And What to Do About It (Kindle Locations 1431-1432). Knopf Doubleday Publishing Group. Kindle Edition.

This seems like gobbledygook, or at least weird wording, when one first reads it, but the logic is actually simple: obesity is a condition in which the body decides to overeat and allocate the excess calories to fat.  Sounds implausible?  Then consider this: kids run a constant calorie surplus because their bodies are telling them that they need to grow–it’s not like they’re consciously deciding to grow.  In both cases, the word decision is key–obesity, just like growing, is a cybernetic phenomenon in which the body is accumulating calories because that’s what it decided to do.  A more detailed explanation of both what is believed to happen and the scientific evidence backing it up is a topic fit for entire books, so I can’t go into it here, but Taubes is a great place to start.  What’s important to note here is that whether we end up running a surplus or deficit of calories is a decision made by the body.

But what about self-control?  That’s an important question, and it makes this post controversial, because the first thing I have to say is: no, you don’t have total autonomy over what your body does.  Yes, you can use your conscious will to keep calories under a certain level, but will it work?  If you are not taking in enough energy to get through a workout, you’ll become more sedentary in response.  In fact, in severe cases of metabolic syndrome, the condition that causes obesity, starvation may cause the body to break down muscle, bone, and even organ mass before burning through all of its fat reserves.  Why would the body do that?  Answering that requires understanding the essence of cybernetic systems: feedback.

Feedback or Die

The most basic example used to demonstrate cybernetics is a thermostat.  It has a built-in thermometer, a column of mercury that expands or contracts with changes in temperature.  This allows the thermostat to measure the discrepancy between its target temperature and the actual temperature around it–it will then turn on a heating or cooling system until the discrepancy goes away.  This kind of feedback is known as negative feedback because the feedback causes the discrepancy to shrink.  What’s important to note here is that the information received by a cybernetic system is based on some difference between two quantities–the thermostat behaves the way it does because it makes its decision based on whether the volume of the mercury inside its thermometer is less than, greater than, or roughly the same as some defined target.
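To make the loop concrete, here’s a minimal sketch in Python (the target, gain, and step count are made-up numbers for illustration, nothing more): each step measures the gap between actual and target, then acts in proportion to that gap, which is exactly what makes the feedback negative.

```python
# A toy thermostat: negative feedback shrinks the gap between the
# measured temperature and the target a little more each time step.
def run_thermostat(temp, target=20.0, gain=0.5, steps=20):
    history = [temp]
    for _ in range(steps):
        gap = target - temp      # the measured discrepancy
        temp += gain * gap       # heat or cool in proportion to the gap
        history.append(temp)
    return history

trace = run_thermostat(temp=10.0)
# Each step the remaining gap is halved, so the room converges on 20.0.
```

The only “decision rule” here is the sign and size of the gap–the same skeleton the body uses, just with far more dials.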

The human body, while operating on these same principles, is much more complex in its rules.  That said, there are still insights that can be gleaned from having a rough idea of how some of its key systems work.  One such system is the hormone known as insulin.  (NB: from here on, I am making a theoretical point with examples that may not exactly match up with the most up-to-date scientific theories.  The point of the following is a thought experiment meant to give an intuitive sketch of how feedback works in a cybernetic system.  I repeat, I am not making an empirical claim, I am using a simplification in order to illustrate a concept.)  Insulin is a hormone that is charged with the task of absorbing any glucose that is found in the bloodstream and transporting it to various parts of the body (namely fat and muscle.)  The fat cells and muscle cells that absorb the insulin do so by means of insulin receptors, which calibrate their sensitivity such that they absorb a certain amount of insulin before stopping.

These cells are very much like the thermostat, except that their target will be raised or lowered based on the relative amount of insulin running through the system.  The reason for this is that the body’s goal is to properly distribute nutrients and this distribution is determined by the insulin sensitivity of various parts of the body.  If a receptor is receiving too much insulin, its sensitivity reduces so as not to take in more than it needs.  Currency works like this as well: if an excess of money is flowing through the system but the amount of actual wealth (yes, loaded term, but bear with me) stays the same, then the purchasing power of the currency drops.  Insulin works the same way: just as currency represents a non-fixed amount of wealth, insulin represents a non-fixed amount of nutrients.
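That receptor recalibration can itself be sketched as a small negative feedback loop (a toy model with invented numbers and rates–not physiology, in keeping with the disclaimer above): a receptor dials its sensitivity down when it sees more insulin signal than it needs, and up when it sees less.

```python
# Toy receptor recalibration: sensitivity adjusts downward when the
# incoming insulin signal exceeds what the cell needs, upward when it
# falls short -- a negative feedback loop on the receptor's own dial.
def recalibrate(sensitivity, insulin_seen, insulin_needed, rate=0.1):
    gap = insulin_seen * sensitivity - insulin_needed
    return max(sensitivity - rate * gap, 0.0)

s = 1.0
for _ in range(50):
    # Chronic oversupply: twice as much insulin as the cell needs.
    s = recalibrate(s, insulin_seen=2.0, insulin_needed=1.0)
# Sensitivity settles near 0.5: the doubled insulin supply now carries
# the same effective signal, just as an inflated currency carries the
# same wealth at a lower value per unit.
```

This is the currency analogy in miniature: double the money supply against fixed wealth, and each unit ends up worth half as much.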

The condition known as insulin resistance arises when these receptors become so insensitive that they are no longer absorbing any significant amount of insulin.  The result is that the insulin, and any glucose it might be transporting, remains circulating in the bloodstream.  In order to get the glucose out of the bloodstream, more insulin is produced.  In theory, this should be okay; eventually the same amount of glucose is being transported around, it just takes more total insulin to represent it.  The same goes for money–100 years ago a dollar was worth a lot more, but nothing broke down because the change was gradual enough that at any given moment people had a stable sense of their purchasing power and there was enough time for wages to rise accordingly (it’s not this simple, but my point stands that the system did not collapse.)  In other words, everything is fine if enough of the system can recognize that everything is the same except that the yardstick has changed.

When the change happens too rapidly, however, the yardstick becomes mismatched with reality and inefficient behaviors arise; in extreme cases, the yardstick can become entirely useless.  In economics, the former case corresponds to the phenomenon of deflation, in which the purchasing power of money has increased due to lower prices, but unemployment results because wages do not fall nearly as fast (Keynes called this “sticky wages”.)  In the case of hyperinflation, prices rise so fast that the currency is no longer a reliable yardstick, and any information the money represented about who owns what vanishes.  While this may sound like an egalitarian’s dream, the problem is that so many vital systems rely on this information that the result is terrible poverty.

But how do these breakdowns occur?  What would make a discrepancy emerge so quickly and grow so fast that it can’t be compensated for?  The answer is positive feedback: where negative feedback closes a gap, positive feedback widens it.  And since positive feedback widens the gap, the same behavior is likely to repeat because the gap is still there.  Although not all positive feedback is necessarily bad, systems break when they enter some cycle of positive feedback that they can’t get out of.  In the case of deflation, the unemployment caused by falling prices and lower wages means that people will spend even less.  The result?  Prices drop even further and more people are put out of work.  Whether or not bailouts and stimulus packages are a good idea, their intent is to nip the cycle in the bud while it’s still affordable to do so.  In the case of metabolism, the issue is that once the cells are too insensitive to insulin, the body will produce even larger amounts of insulin in order to compensate, but this will inevitably lower the insulin sensitivity of the already resistant receptors.  This can go two ways.  In the first, insulin secretion eventually outpaces the receptors’ loss of sensitivity, and some stable point is reached; sadly, this often happens through the body creating new fat cells and eventually becoming obese enough to stabilize the situation.  For those who were wondering why the body would make a decision in which fat absorbs the lion’s share of nutrients to the detriment of everything else, now you know: (relatively) insulin-sensitive fat cells have been recruited to help keep excess glucose out of the bloodstream; they are the nouveau riche of your metabolic system.  The second way is much less pretty: insulin stops being secreted for good and the yardstick is gone; this is diabetes.  From then on, insulin must be regulated through artificial means (injecting insulin manually around every meal.)
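The runaway cycle can be caricatured in a few lines (purely illustrative numbers and rates–a cartoon of positive feedback, not a metabolic model): dulled receptors prompt more insulin, and the extra insulin dulls the receptors further, so the gap the system is trying to close keeps reopening.

```python
# Toy positive-feedback loop: each round of compensation makes the
# next round necessary, so the gap never closes.
def insulin_spiral(steps, insulin=1.0, sensitivity=1.0, glucose_target=1.0):
    for _ in range(steps):
        uptake = insulin * sensitivity            # glucose actually cleared
        shortfall = max(glucose_target - uptake, 0.0)
        insulin += 2.0 * shortfall                # compensate with more insulin
        sensitivity *= 0.8                        # the extra exposure dulls receptors
    return insulin, sensitivity

early = insulin_spiral(steps=5)
late = insulin_spiral(steps=40)
# Insulin keeps climbing while sensitivity collapses toward zero: the
# same compensating move repeats because the gap is always still there.
```

Contrast this with the thermostat: there, acting on the gap shrinks it; here, acting on the gap feeds the very process that widens it.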

The big jump to make is to realize that any sufficiently complex entity requires reliable feedback.  All of the materials in the world are useless if they cannot work together to create the necessary complex behavior.  If the feedback becomes too unreliable, the behavior becomes at best unpredictable, and at worst too incoherent for anything to work properly.  The loss of the ability to produce insulin is the loss of an entire feedback mechanism, and the only reason that diabetes does not guarantee death is that humans have enough metacognition to use conscious regulation as a backup system to regulate glucose manually.  But take that away, and a more fundamental point becomes clear: a system’s health and survival depend on the integrity of its feedback.

Leveraged Phenomenology

This is not to say, however, that phenomenology is useless.  On the contrary, it is actually essential to sound decision-making: the truth is that everything I’ve written above about insulin is an oversimplification of a very complex theory that even with all of its details and nuances cannot fully account for the complexity of the human body.  But then why look at cybernetic theories at all?  If we’re interested in weight loss and we ultimately can only rely on phenomenological theories, wouldn’t it just be best to look at calorie intake and expenditure?

Not so fast; there is one caveat that has not been stated here: you don’t have direct control over your calorie intake.  It’s not just that you don’t directly control what goes to muscle and what goes to fat; your actual behavior with regard to diet and exercise is largely dependent on the messages of your metabolism.  Too much sugar intake will most definitely affect your levels of hunger and your body’s ability to process nutrients efficiently.  Whereas the target of a thermostat is something we have complete control over (we just turn the dial), there is no equivalent part of our body that we have such direct control over.  This doesn’t, however, mean that we have no control; instead, we have differing degrees of control over different inputs.

Since different inputs allow different amounts of control, we need to go by the phenomenological theories that provide us the greatest degree of leverage.  Calorie counting, when it works, works because we make the decision to eat foods that give us more bang for our buck.  While it might be phenomenologically true that we’ll lose weight should we take in fewer calories than we expend, this on its own does not provide us much in the way of leverage at all.  On the other hand, there is much phenomenological evidence to show that cutting out certain types of food or engaging in intense exercise sessions a couple of times per week does the same thing–and these are inputs over which we have a much greater degree of control.  But how do we know what will provide us leverage and what won’t?  The answer is simple: for any input, get an idea of how dependent it is on feedback from the system’s prior behavior.  The more feedback-dependence, the less direct control.

Without understanding cybernetic theories, we would not be equipped to see this difference.  Cybernetic theories offer us the ability to see how different phenomena are related through cascades of feedback, and consequently allow us to see which phenomenological theories provide us the most control over future outcomes.  But the example I’ve given here only scratches the surface–however powerful a framework cybernetics is for appreciating complex decision making, it is virtually impossible to decode the entirety of something as complex as the human body, let alone the world economy (which is likely even more complex, since financial numbers have no theoretical limit.)  It goes without saying that this greatly complicates what began as something that felt simple–but the goal of this entry was to clear up a language game that hinders further inquiry into these ideas; as such, I’ll have to leave countless questions that I haven’t even mentioned for another time.

*If I’ve misremembered or misphrased this argument in any way, please let me know.  I have no interest in putting words in anyone’s mouth.