
Deconstructive Economics Part I: Economic Paradigms

In my last post on economics, I considered the idea of an economic paradigm based on allostasis. It left a lot of questions unanswered, but it strengthened my suspicion of a hypothesis I’ve been mulling over for some time, one that may apply to complex systems in general: that an economy works not by allocating resources more “efficiently” but by continually learning. I put “efficiency” in quotes because the notion of efficiency can’t be discussed in a vacuum, an issue that helped lead me to my current hypothesis. Efficiency implies that there is a metric being optimized for, something that only exists in an unambiguous sense in the presence of an overriding purpose such as a major war or, perhaps, the renovation of a nation’s infrastructure. I should also note that in the presence of such goals, the idea that the free market is “more efficient” seems somewhat unsubstantiated: I have a hard time believing that the Second World War would have been fought more effectively had nations relied entirely on “market solutions” to pump out the manpower and materiel needed for the massive undertaking.

Yet somehow, even in the absence of a definite notion of “efficiency”, there are still things that could obviously be considered “malinvestments”: if a restaurant is bailed out at all costs, no matter how terrible the food, it is uselessly monopolizing claims on all kinds of material wealth that would be better spent elsewhere. This left me with the question of how we can make any claim that something is wasteful in the absence of a clear notion of “value”. One might come up with reasons outside the scope of markets by making arguments for the intrinsic value of railroads or libraries, but when applied on a macroeconomic scale these arguments amount to epistemically arrogant just-so stories that can never be substantiated in any kind of logically rigorous way. Nor are libertarians off the hook: the “free market” in any incarnation is a structure that is built and maintained by central authorities, and while many argue that the government should limit its role to providing the absolute basic necessities for an ideal free market, such an argument implies that there is an ideal “free market” that should be created and maintained, which itself assumes that there is some way a categorical notion of “efficiency” can be derived from some top-down model of reality.

The underlying issue is not just that our economic theories are models of a much more complex reality, but that the market, at any given point in time, in whatever incarnation, is a model of reality that is simultaneously propped up by and utilized by the encompassing entity we call the economy. Where the economy is the collective exchange and utilization of goods, services, land, labor, commodities, information, etc. carried out by society, the market is a model of reality, a set of scripts, that guides our economic behavior. In order to do so, these scripts must do two things: (1) they need to provide information that is sufficiently clear and reliable for us to decide to follow them, and (2) they need to continually update their instructions so that the information remains reliable. In other words, the system needs to maintain the ability to process information coherently; it must be allostatic.

There are many such scripts, and further reading can be found in places such as Venkatesh Rao’s essay on the unraveling of scripts, but markets are a very specific type of script. Prior to the emergence of industrialized society, markets were peripheral to everyday life and most household and community needs were met through autarky. With the industrial age came what Karl Polanyi calls “the market pattern”, in which providing for one’s material well-being became increasingly dependent on specialization and exchange. This general “pattern”, which is so strongly entrenched in our culture that our textbooks assume that currency was preceded by barter despite the mountain of historical evidence to the contrary, is the template for all market-scripts, which share the intertwined assumptions that goods are (1) exclusively owned by a single party, (2) fungible and interchangeable, and (3) enumerable according to some ranking. By virtue of these three axioms, market scripts dictate, through the information embedded in currency, institutions, and laws, a set of assumptions about how to determine economic “value”.

The idea of economic value is relative, but that does not mean it’s unfalsifiable. A market’s script for determining “value” is only viable insofar as it maintains a sufficient signal-to-noise ratio in its processing of feedback. When this fails to happen, price signals stop working and the economy grinds to a halt as people look to other means of economic well-being. At this point, feedback becomes increasingly weak until a new script is implemented. This period of economic crisis is inevitable due to the constant changing of conditions on the ground and the eventual expiration of any model that makes sense of the world. For a better understanding of how such a process works, it helps to be familiar with the schema of scientific paradigms, as developed by Thomas Kuhn in his book The Structure of Scientific Revolutions.

Kuhn’s Ladder and the Languages of Knowledge

In today’s culture, science is held up, with praise and sometimes disdain, as an enterprise of absolutes: absolute knowledge confirmed by the absolutes of experimentation and repetition. While I won’t deny that the law of gravity is absolute, the practice of science in many ways resembles Einstein’s relativistic view of the universe. Just as any notion of “up” can only be talked about relative to gravitational fields, the notion of objectivity in science is a social construction that relies on professional consensus regarding various ideas, definitions, technical practices, and accepted theories. This is most evident in the practice of peer review, in which a study is not considered scientifically valid until it has been deemed sound by other scientists within the same field. More subtle and important, however, is the fact that without the existence of such consensus, the scientific enterprise would helplessly drown in a sea of noise.

Consider the field of epigenetics as an example. Genes, as a concept, are considered a scientific fact. The debate surrounding epigenetics is not about the existence of genes but about whether and how they do different things in different environments. Getting to this point requires an extremely detailed infrastructure of consensus, not just in terms of guiding theories, but down to the relative meanings of the data returned by an instrument. To get an idea of just how precise this is, imagine trying to explain to a scientist from 300 years ago what a virus is. Without any framework of microorganisms, germs, genetics, cells, or proteins, it would be virtually impossible to give them any definition beyond “these little thingies jump from person to person and make you sick.” Even if they suspend their disbelief, what experiments would you be able to run to convince them that this was true? For any kind of scientific research to proceed, there needs to be a shared language. If you can’t agree on whether genes exist, you can’t have a debate about gene expression. The next rung on the ladder can only be reached if you can plant your foot on the previous rung–otherwise, there is nothing that can be labeled “up” or “down”.

These shared languages, known as scientific paradigms, can also be thought of as a kind of data compression. You don’t need to thoroughly understand every single observation and theory that came before you in order to become a scientist–you just have to know enough of it to have a common semantic frame for building hypotheses and describing the setup and results of your experiments. Under these conditions, the field proceeds under what Kuhn calls normal science: a state in which a number of questions have emerged within the constraints of the paradigm and scientists can spend their time further elaborating on and classifying phenomena within the paradigm’s theoretical framework. This state can only last so long as the paradigm remains a cost-effective way of compressing the data. If the paradigm fails to make meaningful predictions, scientists will slowly look for alternatives and lose faith in the current framework, leading to a period of extraordinary science. Prior to this, theories may be patched up so that they fit the data, and wrong predictions may be outright ignored, but this can only continue as long as the benefits of the paradigm outweigh the costs. If your inbox misfiles a few of your important e-mails under “miscellaneous”, it still might save you a good deal of energy. You probably wouldn’t say the same if that’s what happened to 80% of your important e-mails.

Most importantly, the theories that comprise a scientific paradigm are not formulated in some universal language of first principles. There are reasons why this is impossible, but such ideas could fill entire books, and in fact do. For our purposes, it suffices to say that the theories of paradigms are semantically grounded through a combination of shared language with other paradigms, subordination to other paradigms (such as a theory of metabolism being constrained by the laws of thermodynamics), and the possibility that a paradigm or group of paradigms contradicts itself due to an oversight regarding its initial assumptions. Due to the fundamental limits of any sufficiently complex logical system, scientific paradigms hold the seeds of their own destruction, providing feedback as they encounter real-world observations before the feedback inevitably hits diminishing returns, followed by an outright harmful ratio of noise to signal:

[Figure: hormesis/dose-response curve. Courtesy of Nassim Nicholas Taleb: Antifragile]

In this sense, every paradigm is ultimately “wrong”, but to look at it through the lens of right and wrong would be a mistake. Science does not, and cannot, happen in a vacuum: in order to get an answer, you first have to ask a question. Every scientific paradigm is fundamentally a set of questions, each with a range of intelligible answers (saying 2 + 2 = 5 is wrong but intelligible; saying 2 + 2 = “ham sandwich” doesn’t make any sense whatsoever). Knowing which questions to ask requires having an idea of what you’re looking for, which can only be done by finding answers that reveal the contradictions in your original set of questions. Once you find a paradox, you can find a new frame to make sense of your data, but until then, what we cannot speak of must be passed over in silence.

Markets, Paradigms, and Disequilibrium

When I last talked about the phenomenon of feedback in an economy, I suggested that feedback was good up until the point that it compromised the system’s ability to process feedback. At the time, I had no good answer as to when this point was: after all, sometimes the system should outright fail so that a new system, better suited to new realities, can take its place. If we frame markets as Kuhnian paradigms, on the other hand, the question can be brought into much sharper focus. Just as a scientific paradigm provides scientists with guiding questions and theories to make sense of their observations and guide their experiments, the currency, laws, and institutions of a market work together to make sense of the feedback that occurs within an economy. In order to get an idea of how this works, we’ll have to revisit our old frenemy, the axiom of utility.

First things first: utility is not about “rationality” in the sense of “smoking is irrational because it’s bad for you.” It simply means that your preferences are consistent: that you do not prefer steak to chicken, chicken to salmon, and salmon to steak. While this is not actually how people behave, as confirmed by numerous psychological experiments, it’s nonetheless a useful concept when not looked at in a vacuum. Within the scope of the market, transactions are by definition an indicator of utility. If you’re willing to pay more for a pound of steak than for a pound of chicken, then that pound of steak is more important to you than that pound of chicken. It might be for the most whimsical or irrational reasons, but in that moment, you’ve made the unambiguous decision that one thing is more valuable to you than another. In the framework of decisions within a market, currency is an accounting identity: you can choose to buy and sell whatever you want, but you have to make a decision about the relative value of everything you consume, sell, and save.
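
What this consistency requirement amounts to is just the absence of a cycle in your preferences. A minimal sketch in Python (the function and the example preferences are mine, purely for illustration):

```python
# A preference is a pair (a, b) meaning "a is preferred to b".
# Consistency, in this narrow sense, means the relation contains no cycles.
def has_cycle(preferences):
    graph = {}
    for better, worse in preferences:
        graph.setdefault(better, set()).add(worse)

    def reachable(start, target, seen):
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen and reachable(nxt, target, seen | {nxt})):
                return True
        return False

    # A cycle exists if some "worse" item leads back to the thing preferred over it.
    return any(reachable(worse, better, set()) for better, worse in preferences)

print(has_cycle([("steak", "chicken"), ("chicken", "salmon")]))   # False: consistent
print(has_cycle([("steak", "chicken"), ("chicken", "salmon"),
                 ("salmon", "steak")]))                           # True: inconsistent
```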

Within a scientific paradigm, scientists work to make sense of discrepancies between their observations and the tenets of the paradigm. Within markets, the same thing happens regarding discrepancies between what individual actors value and what the market values. This is most apparent in finance, where investors look to find discrepancies between the price of an asset as assigned by the market (itself an implicit prediction about the later price) and what the investor thinks the price will be later on. The same discrepancies also matter to businesses, which look to make a profit by selling something that’s worth more than what it cost to procure–a complex process that requires all kinds of consideration about present and future prices and the future needs of consumers. Even among consumers the same thing takes place as they strive to get something for nothing by paying less for goods than what they consider the goods to be worth. Each of these transactions acts as feedback, with the market adjusting its prices to fill the gap between actual behavior and expected behavior. All of these examples are extreme simplifications, but the main idea is that economic actors generate feedback by exploiting the differences between what the market knows and what the actor knows, a process known in finance as arbitrage.

It would be a fatal mistake, however, to assume that this means the market simply strives towards equilibrium as the discrepancies between supply and demand are flattened out. On the contrary, most of these behaviors push transactions away from equilibrium by adding more economic complexity: innovations create new demand for and dimensions of comparison between goods, investors place bets based on information that has not yet been accounted for, and gluts and scarcities of goods spur the use of substitutes that may not have been used otherwise. With each instance of feedback, actors fill the information gap with information that introduces new gaps, and the cycle continues so long as the market can honestly account for the economic behavior of its constituent actors. This process, in which the market effectively processes feedback and creates wealth by reliably increasing in complexity, could be analogously called normal economics.

In the absence of such honest accounting, the market can no longer effectively process feedback and will collapse as it increasingly loses relevance with regard to people’s present needs. To give an example, let’s consider a highly skilled programmer who does work for open source projects. While he might work on these projects for recreational or altruistic purposes, he can only spend as much time on them as his finances will allow. Meanwhile, others may benefit from his contributions, but they will spend no money on them no matter how valuable they are, while spending more of their money on things that wouldn’t have as high a relative value were they forced to pay for the software. As a result, markets overstate the value of these other goods and services while understating the value of the software.

This is not to say that there is something categorically wrong with people giving things away for free; remember, all notions of “value” are defined relative to the axioms of the market, not as some categorical good. What it does mean is that the market as a paradigm becomes less useful because the information it provides about relative needs is less reliable. Just as too much of a mismatch between a scientific paradigm and its individual observations can render it ineffective or even downright useless, a failure to account for a new technology or a potential collapse in credit can render a market useless. People will still continue to transact, but more and more of it will be off the books, and a new market will eventually form in order to streamline the extremely inefficient endeavor of performing transactions off the record. During this time, the economy enters a period of extraordinary economics, in which the current market does not make sufficient sense of the economy. We are in one such period now for several reasons, and explaining why may make this idea clearer.

The Theories of Currency: A Speculative Parable

At some point, I’d like to go into a much deeper historical digression to really get at the meat of the ideas posted above, but given the length of this post and my own lack of erudition, we’ll have to settle for a few key points about the past 100 years with some disgusting simplifications. Going forward, I’d like to state that this should all be read as a parable meant to demonstrate a broad idea, not an empirical hypothesis about the causes behind past and present economic crises. More specifically, but just as important, remember that this is about how markets themselves act as tacit models, not a discussion of macroeconomic theory.

The economic crisis of a few years ago spurred a lot of interest in a pivotal moment in American history: the Great Depression. The narrative, supported by the dominant Neo-Keynesian and Monetarist schools of economics, was that this time, with our better understanding of economics, we weren’t going to make the mistakes made by fiscal conservatives back in the 1930s. Unfortunately, things have not gone according to plan, with “improvements” in unemployment numbers coming from a combination of lower wages, reduced hours, and a shrinking labor force. GDP has not fared much better, showing little increase beyond the tautological increase in government debt. The common reaction to this by libertarians, fiscal conservatives, and members of the Austrian school is that Keynes was a charlatan who was wrong all along. While that may or may not be the case, I contest their claim on the basis that they’re talking completely out of historical context: just because Keynesian economics doesn’t make sense now doesn’t mean that it never made sense. Just as every market is a model of a particular time and place, every system of currency also embeds within it certain assumptions. These assumptions are too complex to be fully summarized, but I can still get across the gist of what I mean.

During the period in which the Great Depression took place, there was a great deal of easy potential for economic growth. Oil was still a recent discovery and the process of mechanization was still in full swing. For many countries, especially the United States, discovery rates of oil were increasing rapidly with each year (the US did not hit a peak in oil production until 1970) and there was so much to go around that it was a waste not to do something with it. All this growth eventually led to a period of intense speculation, culminating in the events of Black Tuesday, when a collapse in the stock market and the resulting bank run led to a severe deflationary spiral.

None of this happened for lack of material wealth: sure, plenty was poorly invested during the boom years, but most of the resulting damage came from a vicious cycle in which a lack of available money caused cuts in spending, which caused further cuts in wages and employment, which left even less money available, and so on ad nauseam; all of it initially set off by the bank runs that caused most of the available credit in the market to disappear. Had the Federal Reserve been able to create more money, this might have been averted, but as it stood at the time, the United States was on a gold standard, meaning that any available money in the economy had to be backed by a fixed amount of gold. But before the Keynesians jump for joy and the Austrians burn me at the stake, I’d like to point out that this has to be taken in context: yes, there were misplaced investments that had to be corrected by the market, but beyond a certain point, the economy was creating a self-fulfilling state of scarcity despite the enormous amount of material wealth available. The gold standard, in which money is a static and fixed quantity, represents a world where wealth neither grows nor shrinks in the future. This is not only counter-productive in the case of a self-fulfilling deflationary cycle, but is in fact a recipe for disaster as the economy grows too big with too little credit to support it. Although other factors, such as the forced deleveraging via wartime austerity, arguably played a major role in the end of the Great Depression, the world economy’s transition away from the gold standard and the subsequent economic recovery imply a paradigm shift in which a finite money supply based on gold gave way to the fiat money we have today.

Zoom to 2008, when the banks catastrophically failed and were bailed out by the government. Despite taking all the measures that helped end the Great Depression, the recovery has been very limited and some would say that it happened only on paper. Once again, it’s worthwhile to put this in historical context, something that can be done with the help of two pictures (courtesy of Chris Martenson and the EIA respectively):

[Figures: total US credit market debt as a ratio of GDP, and world oil production]

The first picture is the ratio of credit market debt to GDP. Other than the spike to the left, which was caused not by a rise in credit (remember: gold standard) but by a rapid drop in GDP, the ratio of debt to GDP (private and public) has reached unprecedented levels in the past few decades. The reason for this literally exponential growth is that our current system of money is based on the issuing of debt. What that means is that money is created whenever someone takes out a loan from a bank. In order to pay off that loan, the debtor not only has to pay back the principal, but also the interest, meaning that they’re going to have to acquire more money than they originally had. Apply this to every dollar circulating in the economy, and it means that an amount of money proportional to the amount of money currently in the system has to be created out of thin air; something that is done not by directly printing money, but by having people take out more loans from more banks. Meanwhile, banks themselves need only keep a small fraction of their deposits in reserve–so for every dollar deposited to a bank, several more dollars are introduced into the market. The result is a money supply that grows exponentially (if you feel the need for further elaboration on this subject, I recommend this documentary.)
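
To make the arithmetic concrete, here is a toy sketch in Python of the two mechanisms just described: repeated lending against a reserve requirement multiplying an initial deposit, and interest forcing total debt to grow if it is only ever rolled over. The reserve ratio and interest rate are made-up illustrative numbers, not a claim about the actual banking system.

```python
# Toy illustration of debt-based money creation; all parameters are made up.
RESERVE_RATIO = 0.10   # fraction of each new deposit a bank must hold back
INTEREST_RATE = 0.05   # annual interest owed on outstanding loans

def money_supply_after_relending(initial_deposit, rounds):
    """Each round, the bank lends out everything above its reserve requirement,
    and the loan comes back as a new deposit somewhere else, creating money."""
    money_supply = initial_deposit
    loanable = initial_deposit * (1 - RESERVE_RATIO)
    for _ in range(rounds):
        money_supply += loanable            # the new loan is spent and re-deposited
        loanable *= (1 - RESERVE_RATIO)     # the next bank holds back its reserve slice
    return money_supply

def debt_if_rolled_over(principal, years):
    """Total debt grows geometrically if it is never paid down, only rolled over."""
    return principal * (1 + INTEREST_RATE) ** years

print(round(money_supply_after_relending(100.0, rounds=50)))   # ~995, approaching 100 / 0.10
print(round(debt_if_rolled_over(1000.0, years=30)))            # ~4322: more money must come from somewhere
```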

The issue here is the opposite of the gold standard. Whereas the gold standard fails when the economy becomes too big for its money supply, debt-based currency can only go on so long as the debt is continually rolled over. If it isn’t, then credit will collapse as people default on their loans and banks become insolvent (remember: since a bank keeps only a small fraction of its deposits in reserve, for every dollar a bank loses, the economy loses several dollars). In the event that there’s easy wealth to be exploited that just requires more capital, government intervention has a decent chance of solving the problem. If, however, the money supply has far outpaced any plausible rate of growth in material wealth, then government intervention potentially delays the inevitable by further misdirecting available resources. Where the gold standard failed us by fooling us into thinking that there wasn’t enough to go around, currency based on debt constantly tells us to go ahead and borrow because the future will be more full of schwag than ever. The chart on the right is not very reassuring: the production of the world’s most important energy source remains stagnant even in the face of rising gas prices and the government intervention needed to provide sufficient capital.

Again, none of this should be taken too seriously. All of the ideas of scarcity and abundance that I’ve put forward are based on assumptions about the future availability and economic significance of fossil fuels. While we can make some educated guesses from 50,000 feet, the actual information comes from the feedback provided by the market in the form of currency-based signals. But if that’s so, then what allows me to call the level of debt problematic? Shouldn’t I take it as a signal that the future will be abundant enough to pay it off? The answer is not to look for an overt match with reality, but to look at the level of clarity provided by the current paradigm. In the case of our current monetary system, it helps to look at the signals provided by the centrally controlled discount window, which lends money to America’s major banks at an interest rate decided on by the Federal Reserve. These interest rates have a great influence on the cost of borrowing in general, since the cheaper it is for a bank to acquire cash, the more competitively it can price its own loans, which gives the Federal Reserve a way to influence signals of scarcity and abundance. Prior to the crash, interest rates were set extremely low in order to avoid a recession after the dot-com bubble, and they remained that way in the belief that this was creating robust economic growth. This was not, however, matched by reality: consumers, businesses, and banks all took on a dangerous amount of debt that failed to take into account the probability of a catastrophic crash. The paradigm’s predictions* failed miserably.

Since interest rates were already low, there was little leeway left to lower them further. Even after resorting to making credit free, banks continued to hoard money and businesses failed to expand or hire. Meanwhile, the stock market has soared while banks pay record bonuses to their executives, creating a scenario in which both the relative and absolute wealth of the most powerful figures in the US economy has increased despite high unemployment and record numbers of people receiving emergency government assistance in order to get by. All of this signifies faulty feedback reminiscent of Kuhn’s extraordinary science, with the current paradigm getting patched up in such a way that it technically fixes the falsifications; corporate profits, GDP, the stock market, and the money supply are all healthy as a result of monetary intervention, but the script only survives by fixing the game for a shrinking number of parties at everyone else’s expense. If you look at all of the unemployment data and not the fudged numbers of the official “unemployment rate”, you can see that fewer and fewer people are gainfully employed, as the recovery in the official numbers has been due to a combination of an increase in part-time jobs and a decrease in the number of people counted in the labor force. This cannot be overstated: the economic script followed by the United States depends on gainful employment. If you don’t have a full-time job, you fall out of the system into the underclass, which is supported by an increasingly large amount of direct government spending. This propping up of a permanent underclass is yet another duct-tape fix that keeps the paradigm from being abandoned at the cost of information content (NB: I am NOT advocating that we starve the poor or get rid of our safety net. I am only pointing out that failing economic systems can push their failures under the table in order to stay afloat.)

What do I mean by information content? Think back to the importance of honest accounting: corporations and banks continue to make profits under the principles of the “free market”, but these profits are largely the result of government spending that props up both the corporations and the consumers who might otherwise not have money to spend on their products and services. Zombie corporations hog resources that may otherwise have been put to use differently, and people who may have found work in an updated economy instead must rely on government handouts as obsolete firms fail to make use of the spare labor around them. Every dollar spent attempting to preserve an outdated paradigm is a dollar that can’t work as feedback, diminishing the effectiveness of price signals as corporations and banks get a free lunch from a system whose resources are ultimately finite. Instead of creating wealth, these bailed out corporations simply relocate it, eventually compromising economic allostasis as ever fewer actors are left to contribute information to the larger economy.

All of this may sound like a staunch argument for an unfettered free market with minimal government intervention, but that is actually not what I’m saying. In this particular case, the fiscal and monetary policy of the United States seems to be a desperate attempt to preserve a paradigm that is no longer working, but that does not mean that unfettered markets generate the most wealth. Since there is actually no such thing as a totally free market, it’s indisputable that every market paradigm is formed by a combination of principles via positiva and principles via negativa, and that any successful market must be constructed with both kinds of measures in mind. Many libertarian ideas currently make sense because there are many government interventions that do not make sense in the context of how price signals currently work, but that doesn’t change the fact that the very system of price signals in a market economy is based on an a priori model of what constitutes an effective economy. There are plenty of instances, even now, where a lack of government enforcement is actually detrimental to proper market feedback. Take the example of digital media, where file-sharing has let consumers understate how much the media was actually worth to them, while artists lose the capacity to produce more work due to a lack of compensation. In re-thinking our economic paradigm, including our system of currency, much will be constructed in a top-down manner no matter what.

When dealing with problems within a paradigm, it suffices to look at the internal contradictions and the degradation of feedback, but when constructing a new one, scientists inevitably look for new a priori principles. Ours will be determined by a number of environmental, technological, geopolitical, and cultural factors; ideas that I would like to elaborate on should I find the stamina to write a second part. In particular, I’d like to get into how the intertwined history of industrialization, centralized states, and the corporation underlies the paradigm of the modern free market. I’d also like to consider some other systems of currency that could not be talked about in this short parable: the Bretton Woods system, privately issued bank notes, and derivatives; all of which broaden our ideas of how currency underpins the kind of feedback that occurs in a market economy. From there, I hope to take a more nuanced view of some of the more apparent problems in the near future: remuneration in an age of information, the tragedy of the commons concerning environmental problems, the loss of gainful employment due to outsourcing and robotics, and how we may be able to reduce economic fragility without compromising the complexity that has brought us so much wealth in the past few hundred years.

Notes:

*For uncertainty geeks out there, take the word “prediction” with a grain of salt. I do not necessarily mean that banks or economists, or even economies as a whole, are supposed to predict a precise outcome. They are instead supposed to robustly account for present and future needs, often by correctly taking what is fundamentally not certain into account.

Phenomenological Opacity, Accounting Identities, and Allostasis

In my previous post, I made a distinction between cybernetic theories, which address the internal decision-making process of a system, and phenomenological theories, which identify stable correlations between observable properties. In that post, I suggested that we can use cybernetic theories to figure out which phenomenological theories can give us the most leverage with regard to changing outcomes; for example, indirectly controlling your body’s energy balance through changing what you eat is a more leveraged strategy than trying to directly control your calorie intake. The truth, however, gets even more complex: there are some phenomenological constructs that are so basic yet so shrouded by complexity that you cannot observe them in any very meaningful way; instead, they can only be used as constructions that make more complex predictive theories logically sound. Here, I’d like to show that these two concepts, opacity and accounting identities, can illuminate how systems primarily manage and adapt to feedback in order to stay alive, and how this changes the way we should look at economics, among many other fields.

Calories, Again

In my nutrition example, I advocated for an approach to eating that emphasizes what you eat rather than how much you eat. My explanation at the time had to do with the concept of leverage; and while it is true that there is more leverage in this approach, there was another fact that I simply left out: we don’t have an accurate idea of our calorie intake and expenditure. Despite the fact that people count calories by logging what they eat, what they do at the gym, how much they walk, and so forth, it is still a very crude approximation. Not only do we not know exactly what is in our food and exactly how much any given exercise session will burn, we also need to account for all kinds of things such as resting metabolism (which is affected by all sorts of factors), thermogenesis, whether calories are going to fat or muscle, the calories burned by our brain (I’m hungrier at lunch on days where I have to concentrate a lot), where your energy comes from while exercising, how efficiently your body performs a specific exercise, and so on. You might think that even with all this, it’s reasonable to approximate; the problem is that it only takes 3500 excess calories to gain a pound of weight. That means that eating just 50 more calories per day (about 3% of your standard 2000 calorie diet, which is considered the margin of error for simple statistics and most likely an unrealistically low margin of error for something as imprecise as calorie counting) will mean gaining a pound in a little over two months. That on its own doesn’t sound like much, but it adds up to gaining about 5 pounds per year, which would amount to quite a bit over a few years.
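
As a quick check of that arithmetic (using the conventional figure of 3,500 excess calories per pound cited above):

```python
# Rough arithmetic for a small but persistent calorie-counting error.
KCAL_PER_POUND = 3500    # conventional figure for one pound of body weight
daily_error = 50         # kcal/day, roughly 3% of a 2000 kcal diet

days_per_pound = KCAL_PER_POUND / daily_error          # 70 days: a bit over two months
pounds_per_year = daily_error * 365 / KCAL_PER_POUND   # about 5.2 pounds per year

print(days_per_pound, round(pounds_per_year, 1))
```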

You might think that this is a simple matter of errors cancelling one another out, that you’ll have as many days where you’re 50 calories below your target as you will where you’re 50 calories above your target. In order to explain why this thinking is flawed, I’ll take a detour into different kinds of randomness. The most commonly known kind is Gaussian randomness. This kind of randomness is predictable and works as follows: imagine that you have a coin and decide to toss it 8 times. The odds of it coming up heads are always 50%, and they’re the same on every toss. That means that you can easily work out the odds of getting any particular number of heads out of the 8 tosses. The chance of getting no heads at all (or, equally, no tails at all) is pretty low (50% to the eighth power), because there is only one way to reach such a configuration. On the other hand, there’s a much higher chance of getting four heads and four tails, or three heads and five tails, or five heads and three tails, because there are many different timelines that will get you to that configuration (maybe the first four tosses are heads and the second four are tails, or maybe it alternates, or any number of things.) In fact, the odds of getting all heads on as few as 8 coins are so low you should never have to worry about it (1 in 256, or about 0.4%). You can in fact see the probabilities of the various outcomes (all tails on the left, all heads on the right) in a simple (and well-known) curve:

[Figure: the Gaussian/binomial probability curve for the coin-toss outcomes. Source: Wikipedia]
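
For the curious, the probabilities behind that curve come straight from the binomial formula; a minimal sketch, assuming independent, fair tosses as described above:

```python
from math import comb

# Probability of k heads in 8 independent fair coin tosses.
n = 8
for k in range(n + 1):
    p = comb(n, k) * 0.5 ** n
    print(f"{k} heads: {p:.4f}")  # 0 or 8 heads: ~0.0039 (1 in 256); 4 heads: ~0.2734
```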

Why is that important? Because you know that, to a certain degree, your coin tosses will almost certainly cancel one another out. The problem is that the reason this works for the coins is the same reason it won’t work for other things: the outcomes of the coins are independent of one another. A coin coming up heads on one toss does not affect the probability of it coming up heads on the next toss. On the other hand, you can no longer count on that cancellation when the factors interact. And this is exactly the problem with calorie expenditure: your diet and exercise are constantly interacting with the various processes in your body that are beyond your control, and even if you eat and exercise exactly as you’ve planned, your body will still be making decisions about all kinds of processes you don’t control. When you have these interactions, you have a curve that looks something more like this:

[Figure: a power-law distribution compared with a Gaussian. Source: http://ross.typepad.com/blog/2004/03/power_laws_and_.html (please contact me if you are the owner and don’t want this image used.)]

If we used the dark blue curve for coin tosses, it would imply a higher probability for something like all 8 coins coming up heads–and that would actually be true if the outcomes of the coin tosses affected one another. What’s more important to note for our purposes is that there is no guarantee that individual outcomes cancel one another out–which was the reason why, in our original example, we didn’t have to worry about getting 8 heads in a row. Note that I haven’t even added the fact that our behavior is not totally within our control, and that even if we superficially maintain some rules, there will always be subtle ways to work around them (maybe you start running twice a week but then end up spending more of your spare time parked in front of the TV.)
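
To see how dependence fattens the tails, here is a small simulation comparing independent tosses with a hypothetical “sticky” coin whose tosses tend to repeat the previous outcome (the persistence parameter is made up purely for illustration):

```python
import random

def freq_all_heads(n_tosses=8, trials=100_000, persistence=0.0):
    """Estimate how often all tosses come up heads. persistence=0 gives
    independent fair tosses; higher values make each toss tend to copy the last."""
    all_heads = 0
    for _ in range(trials):
        last = random.random() < 0.5
        heads = int(last)
        for _ in range(n_tosses - 1):
            if random.random() < persistence:
                toss = last                      # repeat the previous outcome
            else:
                toss = random.random() < 0.5     # fresh, independent toss
            heads += toss
            last = toss
        if heads == n_tosses:
            all_heads += 1
    return all_heads / trials

print(freq_all_heads(persistence=0.0))  # ~0.004, the independent 1-in-256 case
print(freq_all_heads(persistence=0.5))  # noticeably larger: extreme runs become likelier
```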

A fair question to ask right now is Alex, what’s the difference between this and what you were talking about yesterday?  Isn’t this just more stuff about leverage?  Not quite.  In my previous entry, I was talking about how much control we have over a given variable.  Here, I’m talking about how much knowledge we have of a given variable.  It’s not just that we have little direct control over calorie intake, we can’t even get a reasonable approximation of how many calories we eat and expend in a short period of time.  In other words, our energy balance is opaque.

So what makes this a phenomenological variable at all if we can’t observe it?  The answer is that the phenomenon is (to an extent) observable; we know for a fact that the mathematics do work out such that organisms get bigger with calorie surpluses and smaller with calorie deficits, but when we look at the big picture we simply can’t know or predict the exact rate at which calories are entering and leaving the body at any given moment.  The problem is that we believe we can; but let’s take a look at the actual definition of “energy balance”:

Energy Intake = Internal Heat Produced + External Work + Energy Stored

Note that all this does is take four variables and relate them to one another–it says that if you’re gaining weight (an increase in “energy stored”), then by definition you are either taking in more energy, producing less internal heat, or doing less external work. At no point is there any kind of inference happening–these variables simply describe what is happening. These definitions are important for making sure that any theory of weight gain or weight loss is consistent with thermodynamics, but that does not endow them with any kind of inferential power. What we are left with is an accounting identity, a mathematical definition that unfalsifiably relates variables to one another. Even though the laws of thermodynamics are actually falsifiable, for the purposes of nutrition, if we were to find ourselves gaining weight, we would not question the laws of thermodynamics; we would know from this definition that it would have to be either a rise in energy intake, a drop in thermogenesis (body heat production), or a drop in exercise. And of course, that isn’t even addressing whether the extra weight is muscle or fat; most importantly, however, it does not predict anything.
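
A sketch of what “no inferential power” means in practice: given three of the terms, the identity pins down the fourth, but it can never tell you which term actually changed or why (the variable names and numbers are mine, purely illustrative):

```python
def energy_stored(intake, internal_heat, external_work):
    """Rearranged energy balance: Intake = Internal Heat + External Work + Stored.
    This is bookkeeping only; it cannot say why any term changed."""
    return intake - internal_heat - external_work

# Two very different stories yield exactly the same daily surplus of 100:
print(energy_stored(intake=2500, internal_heat=1700, external_work=700))  # 100
print(energy_stored(intake=2300, internal_heat=1500, external_work=700))  # 100
```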

And yet, despite all this opacity, we are all remarkably stable in our weights.  This is not only true of people of an average weight–it’s also the case for people who are obese; they do not keep gaining weight indefinitely.  As pointed out in my last entry, the body can regulate itself with a remarkable degree of sophistication; and it must–although we constantly speak of calories, it is absurd to forget that our diet requires many different nutrients at varying levels, which themselves control all of the processes that ultimately decide the flow of energy; and that’s just one of many nuances in our overall nutrition.  If you believe in calorie counting, then I have one piece of advice: instead of thinking of it as “I’m going to try to control how many calories I eat”, instead think “I’m going to try to implement a pattern of eating and exercise that results in a calorie deficit.”  In other words, use calories as a proxy for whether what you’re doing makes sense or not.  If cutting down calories means feeling dizzy and irritable, you’re doing it wrong; your brain is not supposed to go on a diet.

But the significance of accounting identities and the opacity of the phenomena that they represent may apply much more deeply to a field whose language games are far more sinister than that of nutrition: economics.


The Elusive Concept of “Wealth”

When I was younger and knew even less about economics than the paltry amount that I know now, I found myself confused by the abstract numbers and concepts that seemed to dominate any discussion on the economy: GDP, inflation, interest rates, employment, and so forth.  Although many of these numbers serve their purpose, I found, and continue to find, that many of them act as if there is absolutely no real world behind the economy from which we get finite resources and use them with our finite amounts of energy and time: a problem that is really the inverse of the “calorie fallacy” (impromptu name.)  This led me to an analogy that I still continue to use to this day: talking about economics without natural resources is like talking about metabolism without food.

Rather than hearing much about things like the world’s supply of oil or the amount of energy needed to procure food, economists think in terms of prices, credit, liquidity, employment, and other factors that are not about the wealth itself but about the system that controls all of the wealth. To someone who has never read any economics, or perhaps has never lived in a society that uses money, this must seem absurd: isn’t what matters how much actual wealth we have? Well, yes; that, and our ability to allocate that wealth, are what actually matter. But this raises two questions: (1) what counts as “wealth”? How do we compare food and fuel, or luxury and necessity? Is a pound of corn of the same value as a pound of barley? What about less tangible things such as safety or the satisfaction of our emotional needs? (2) how do we, as a society, choose to allocate our resources in such a way that we can meet our needs and grow our collective wealth?

As a tentative answer to question (1), I will define wealth as surplus thermodynamic energy. This may seem a bit strange, but it will make more sense upon explanation. For answer (2), I will have to go into a little bit of economic theory, explaining the concept of comparative advantage, which is arguably the cornerstone of classical economics. These two concepts, surplus energy and comparative advantage, are tightly linked, and when put together they illuminate a third concept that I would have trouble explaining otherwise.

So what do I mean by surplus energy? The definition of energy is quite simple: the ability to do work. In classical mechanics, work is defined as the ability to move an object that is in a state of rest or to stop an object that is in a state of motion–in other words, the ability to overcome inertia. The thermodynamic definition of work is more nuanced and would be more comprehensive, but all we need to know for our purposes is that we need energy to grow food, to stay warm, to reproduce, to protect ourselves from predators, to maintain the rule of law, to conduct symphonies, etc. In fact, it’s required for any kind of activity, mental or physical. The more energy we have, the more of these things we can do.

In early agricultural societies, most of this work went to the bare necessities, staying fed, staying warm, and staying safe.  Almost all of the energy provided by the food grown was spent on growing more food and doing anything else that was necessary for survival.  With so little energy left, there isn’t much capacity for doing other things; so in a primitive society you may have a priest or a shaman of some kind for spiritual guidance, along with a few other simple specialists.  On the other hand, should this society domesticate animals that are capable of doing heavy lifting, they’ll be able to grow more food with less energy, leaving spare energy for people to pursue more specialized pursuits and creating a more complex society.  The same may happen with a labor saving device such as the plow or some fertilizer that makes crops more nutritious.

You may notice, however, that this is not simply “free energy” coming out of the ether.  In the case of domesticated animals, the animals still have to be fed, or else they’ll starve and won’t be able to do any work at all.  As for labor saving devices, someone still has to put in the work, just not quite as much.  In other words, the surplus energy comes from the tribe becoming more efficient with the energy that they have.  A horse may require food to run a plow, but running a horse with a plow gets much more food grown per calorie spent than having a human do the same thing with a simple shovel.  This notion of efficiency is also the basis for comparative advantage, and by extension, for the entire science of economics.

So what is comparative advantage?  This could best be described with a thought experiment.  Let’s take two tribes, the Oomphs and the Bumps.  The Oomphs are expert lumberjacks, chopping down trees with incredible efficiency and organization; but their farming system is quite inefficient, and so they spend that saved up energy on making up for their lackluster farming abilities.  Meanwhile, the Bumps are most excellent farmers, but they are quite atrocious at cutting down trees.  How can these two tribes improve their lot?  The answer is easy: by trading.  The Oomphs can buy food from the Bumps using their spare lumber.  Since they are so much better at woodcutting than they are at farming, they’ll spend much less energy cutting down the extra lumber to trade than they would by growing the food themselves.  Meanwhile, the Bumps can do the exact same thing with their food supply.  This means that both tribes have much more energy to spare, which can be spent on all manner of things.
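
Here is the same thought experiment with made-up energy costs, just to show where the freed-up energy comes from (assume each tribe needs one unit of food and one unit of lumber):

```python
# Energy cost (arbitrary units) for each tribe to produce one unit of each good.
cost = {
    "Oomphs": {"lumber": 1, "food": 5},   # expert lumberjacks, lackluster farmers
    "Bumps":  {"lumber": 5, "food": 1},   # excellent farmers, atrocious lumberjacks
}

# Self-sufficiency: each tribe produces one unit of both goods on its own.
alone = {tribe: goods["lumber"] + goods["food"] for tribe, goods in cost.items()}

# Trade: each tribe produces two units of its specialty and swaps one unit.
with_trade = {"Oomphs": 2 * cost["Oomphs"]["lumber"],
              "Bumps":  2 * cost["Bumps"]["food"]}

for tribe in cost:
    print(f"{tribe}: {alone[tribe]} units of energy alone, "
          f"{with_trade[tribe]} with trade, {alone[tribe] - with_trade[tribe]} freed up")
```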

But what’s truly important is that this doesn’t just apply to trade between societies–it is also how a modern economy works on an individual level. Instead of having to grow my own food, prepare my own self-defense, and build my own house, I can simply pay someone else to do it, and earn the necessary money by doing what I’m good at. Note that this is basically what money is for: it allows people to offer to trade their services without having to know exactly what other people want or need. Now, money is actually far more complicated than this simplified concept, but we can get to those questions later. What’s important to note as of now is that comparative advantage optimizes our use of energy, and in doing so gives us energy to spare and allows us to create a more complex society.

But one can only optimize energy so much, whether through dividing labor or discovering other ways to use energy more efficiently, leaving the question of how further economic growth happens. There are two relatively simple answers: either grow the population, or discover new sources of energy. New sources of energy have been discovered throughout the entirety of human history: fire was tamed perhaps a million or more years ago, and with it we were able to cook our food, which metabolizes a lot of the food before we have to do any of the work ourselves; this meant we needed less time to digest our food and could devote more energy to other enhancements such as increased intelligence or better hunting abilities. The energy provided by the wind became the primary means of propulsion for ships and a way to mechanically grind grains. The examples go on and on, but the most potent one is the discovery of fossil fuels, or more accurately, the discovery of how to put fossil fuels to use through combustion. It’s no coincidence that since this discovery, economic growth has accelerated at an unbelievable pace; the amount of energy provided by a single gallon of gas is estimated to be around 500 man-hours of manual labor.

You may have noticed by now something else that’s important: if we want to discover new sources of energy, we’ll need surplus energy. The discovery and utilization of a new source of energy is an effort carried out through tons of trial and error on the part of scientists, entrepreneurs, tinkerers, and specialists of all kinds. Solar power has become increasingly advanced and affordable thanks to materials and designs of such complexity that it takes hundreds of people with extremely specific jobs all working together to develop them. Even the tinkerers who have found simpler ways could not have done so without the amount of spare time given to us by the conveniences of modern society. Even the extraction of crude oil now requires amazing complexity as more and more of what’s left is drilled out of reservoirs that lie thousands of feet beneath the sea.

Now that I’ve taken you through the process of specialization and the importance of surplus energy, you can see that specialization is a cybernetic theory and surplus energy is a phenomenological one. The problem, however, is that one can’t easily measure “surplus energy”: since a lot of it comes from increased efficiency, we can’t simply measure the amount of electricity, combustible energy, and dietary calories expended by a society in a given year. In addition, I’ve only been using the notion of “efficiency” in the context of the amount of energy that doesn’t simply get lost in transmission (every transfer of energy loses at least some of the energy to irreversible entropy), and have not considered that a person may just be spending the energy foolishly. Nor have we taken into account something else that is much more important: which natural resources will lead to more energy? Just as our bodies need many different nutrients and can’t use all calories in the same way, our society needs different raw materials and skills to do different things: rare earth metals for solar power and electric cars, rubber for creating tires, plastics for insulating circuits, etc. All of these resources work together in complex ways to determine what energy we can extract, what energy we can save, and how much energy it will cost to ultimately meet our real needs and preferences. Saying that the economy needs to take in more energy than it spends in order to grow is every bit as banal as saying that a person needs to take in fewer calories than they expend in order to lose weight.


No Accounting for Taste

Unlike the human body, however, we can’t even reliably use energy balance as an accounting identity, because we simply have no real idea of what “efficiency” is; we don’t have any true sense of what ultimately benefits us. In nutrition, we know that body fat is (up to a point) wasted energy, so we know that if we have less than 15% body fat, we have no serious problems with body composition (and even then, body composition does not tell the whole picture about health; there are all sorts of other illnesses and morbidities that can still occur). Instead, we need a different accounting identity. In classical economics, this need was answered by the idea of utility: every person has a set of preferences for what they want, the only rule being that you can’t prefer apples to oranges, oranges to bananas, and bananas to apples all at the same time, since this would not be consistent.

Utility, however, can only be a theoretical construct. Ignoring for the moment that people don’t even have consistent needs and preferences, the concept of utility would also imply perfect information about the present and the future; something that only an omniscient being could have. Instead, we use money, a highly unstable and crude signifier of wealth. It is a signifier (as opposed to an indicator) of wealth because, as we saw earlier, the concept of “wealth”, let alone “value”, is intractable. But even if money can’t act as a gauge of wealth, it can still act as a unit of account by allowing us to create stable accounting identities for economics. Just as the rules of energy balance hold, so do the rules of monetary transaction: if you owe more than you have, you are in debt, and if you import more than you export, you have a trade deficit. If you have a trade deficit, it can only be shrunk by exporting more “wealth” as denominated in money; this may be done by devaluing the currency (you sell the goods for more money, but that money is worth less) or by consuming less and exporting the excess wealth, but no matter the method, the money itself must unambiguously balance out. Another identity that uses money as a unit of account is the “size” of an economy:

GDP = Consumption + Investment + Government Spending + (Exports – Imports) [net exports]

Note that this is just saying “the total amount of economic activity has to be the sum of how much money is spent on consumption, how much is invested, how much the government spends, and how much money is made from exports that wasn’t spent on imports.” That last item on the list may be tricky to understand, but think about it this way: all imports are already accounted for as consumption, so if you counted the money made from exports that was spent on imports, you’d be double-counting the consumption of imports. What’s important to note is that this identity is not making any inferences; it only lays down the clear, unambiguous rule that every dollar that goes through the economy must be categorized under one of these four variables.
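
As with the energy-balance identity, this is pure bookkeeping; a small sketch with arbitrary numbers, showing that the identity balances by construction rather than predicting anything:

```python
def gdp(consumption, investment, government, exports, imports):
    """Expenditure identity: every dollar is booked under exactly one category."""
    return consumption + investment + government + (exports - imports)

# Arbitrary illustrative figures (pick any units you like):
print(gdp(consumption=700, investment=150, government=200, exports=100, imports=120))  # 1030
```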

Since currency does not actually signify wealth in any tractable way, this can be at best a rough approximation. Although economists talk about “real growth” and “real incomes” by “adjusting” for “inflation”, the truth is that the very concept of inflation is based on comparing money to wealth, which for reasons we’ve already been over is extremely problematic. So if GDP can’t measure growth or prosperity in any way at all, what’s the point of talking about this or any other money-based accounting identity? The answer is that we’re asking the wrong question. It’s not just money that’s the problem; it’s that the very concept of “growth” or “prosperity” is fundamentally the wrong way to think about economics. Along the same lines, “conservation” or “sustainability” is no better when we consider that we cannot anticipate our future needs any more reliably. That’s not to say that we shouldn’t worry about the world’s supply of water, oil, topsoil, or food; but addressing those issues in a simplistic top-down manner won’t work because they are so phenomenologically opaque that the only accounting identities we have available to us have to use money as a unit of account.

So what is the right question, if it’s not about how to grow or how to conserve? Before going into the answer, consider the function of money: it provides information to the economy and influences behavior. You, as an individual, know what you can and can’t own based not just on how much money you have on hand, but also on how expensive it is to borrow money and how available new revenue is. In other words, money is also a cybernetic entity; it provides feedback, which allows the economy to adapt to novel needs and challenges as they arise. The purpose of economics is adaptation, with money being one of many mechanisms that provide the information essential to this function. While more money does not translate to more adaptability, remember that calories are not unambiguously linked to health either: instead, the dynamics of calories and the dynamics of money both provide us with the constraints necessary to make further inferences. In the case of money, we’ll be able to use its mathematical constraints to illuminate how economies work as systems of adaptation:

 

Allostatic Economics

The simplest form of feedback in economics is supply and demand, which itself is mediated by money.  The price of something goes up if demand outpaces supply, and will continue to do so until either fewer people want it (at that price) or more of it is supplied (it goes without saying that this also applies vice-versa.)  The same thing also happens with money itself: if there is more money, the “price” of money goes down–both in the form of borrowed money costing less interest and other goods costing more money.  The closing of these gaps is a form of negative feedback, and could be considered the most basic kind of feedback in an economy.  There are, however, more intense versions of this feedback, such as when some type of good or service becomes extremely overpriced (often because people see its price going up and want to buy it and re-sell it at a higher price) and then finally the price drops down to something more reasonable.  Another more intense version may come from a change in the outside world, such as the price of some important item skyrocketing due to scarcity, in which case people must cut back their consumption of other goods or find alternatives to the item in question, leading to prices dropping elsewhere and an overall decrease in wealth that makes some sectors of the economy unsupportable.  In all of these cases, the behavior of individuals will have to adjust, and in order to make that adjustment, the amount of money circulating in the economy will decrease, since lost jobs, lost sales, failed investments, etc. will require that people conserve what they have.  Keep in mind throughout that we can think about all of this purely in terms of money, and do not have to think beyond a very rudimentary level about the material wealth underlying it.
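To see the negative feedback in isolation, here is a toy sketch in Python; the demand and supply curves are invented, and the only point is that a price nudged by the gap between them keeps shrinking that gap.

    # Toy linear curves; nothing here is calibrated to any real market.
    def demand(price):
        return max(0.0, 100.0 - 2.0 * price)   # fewer buyers at higher prices

    def supply(price):
        return 3.0 * price                      # more sellers at higher prices

    price = 5.0
    for _ in range(50):
        gap = demand(price) - supply(price)     # excess demand (negative means excess supply)
        price += 0.05 * gap                     # the price moves to close the gap
    print(round(price, 2))                      # settles near 20.0, where supply meets demand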

From this point of view, recessions, while painful, are necessary feedback.  If credit has been lent too freely, then interest rates (the price of acquiring credit) should commensurately go up; and if houses have become overpriced due to bubble behavior, then we should not continue to pay more for houses.  The same goes for gasoline: high gas prices signal that we need to be wiser about how we use gas, or that we should look harder for new sources of energy.  While this is all true, there’s one major problem: feedback does not exist in a vacuum.  If the economy is harmed too much, it may compromise the very mechanisms that process this feedback.  Consider, for example, lifting heavy weights at the gym.  Up to a certain point, it will feel stressful and may even hurt a bit; you’re giving it your all and dripping sweat on the floor.  After all this pain, you go walk it off and rest for a few days and come back to the gym able to lift an even heavier weight because of the adaptation.  Now consider that this next time, you decide that you can do even more, and raise the weight by a much higher amount than usual.  In the middle of your set, you feel a sharp pain and before you know it you’ve torn a muscle in your arm.  Now you’ll certainly get weaker, at least in that arm, due to the fact that you won’t be able to do any heavy exercise with it for at least a few weeks.  That’s the difference between just enough pain and too much pain.

With recessions, the same logic applies.  For example, if too many people are out of work, they won’t be able to buy anything, and more businesses either lay off workers or close entirely.  When that happens, it can turn into a vicious cycle; or, if you read my previous entry, a positive feedback loop.  While some pain will correct the relative prices of goods and weed out irrelevant skills and unsustainable businesses, too much at once can lead to a runaway chain reaction.  So we want harm, but not too much concentrated harm.  More specifically, we want negative feedback, because that’s the kind of feedback that results in a correction, as opposed to positive feedback, where pain begets more pain.  Even then, however, there’s a problem: we don’t necessarily know what’s going to spiral out of control and what’s going to ultimately act as beneficial feedback.  In fact, we want the feedback to be sufficiently concentrated up to a point.  To show why, let’s go back to the gym: this time, you’re benching 150 pounds.  After about 10 repetitions, you can’t do another one, and you call it a day.  Your friend next to you, although just as strong, benches 15 pounds and stops after 100 repetitions (for those who don’t believe me, I’ll give you a more extreme example: your friend benches 1.5 pounds 1000 times.)  You both got feedback from the stressors, but you’ll benefit much, much more than he will because of the concentrated dose.  What does this suggest?  That intensity of feedback has accelerating benefits before it starts to cause harm, an idea that has been explored in more depth by Nassim Nicholas Taleb in his book Antifragile.

[Figure: hormesis dose-response curve, in which benefits accelerate with the intensity of a stressor up to a point and then turn into harm.  Source: Antifragile by Nassim Nicholas Taleb]
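As a purely illustrative sketch (the curve and the threshold are invented, not taken from Taleb), the hormesis idea can be caricatured in a few lines of Python: the adaptive response grows faster than linearly with the dose up to some threshold, beyond which it flips into injury.

    # A caricature of hormesis; the functional form and the constants are made up.
    def adaptation(dose, threshold=100.0):
        if dose <= threshold:
            return (dose / threshold) ** 2        # accelerating benefit below the threshold
        return 1.0 - 0.1 * (dose - threshold)     # past the threshold, the response turns into harm

    # e.g. 1.5 or 15 lbs for many reps vs. 150 lbs for a few: similar total work, very different dose
    for dose in (1.5, 15.0, 100.0, 150.0):
        print(dose, round(adaptation(dose), 4))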

So why should feedback work better if it’s concentrated, if all that matters is eventually correcting discrepancies?  Before getting into that, I need to address something that has been mostly ignored thus far: economic booms.  Economic recessions almost always follow a time of rapid economic growth (denominated in whatever currency you’d like.)  It is during this time that the discrepancies are built, since people have more money to spend, and this money ends up getting spent in inefficient and wasteful ways.  Economists of the Austrian school call these built-up discrepancies “malinvestments”: investments in which resources are wasted (or if you want more mathematical precision, investments in which resources are allocated suboptimally).  As we noted before, these kinds of discrepancies are happening at all times, but oftentimes in very small amounts; booms and busts happen when many of them pile up at once, which occurs more often and with more intensity than even many economists realize because of how interconnected economic events are (recall my spiel on probability distributions at the beginning of this post.)

Due to the intractability of both our present and our future needs, these malinvestments are inevitable.  Fortunately, they are also desirable (to an extent) for the exact same reason.  Consider the internet as it is now; it is extremely fast and ubiquitous, to the point where it is essentially free to instantly communicate with somebody on the other side of the world.  The infrastructure for this is in part made up of sprawling networks of fiber-optic cables that traverse entire oceans and continents.  Many of these were built during the dot-com bubble in the late 90s and early 2000s, and that buildout was possible because of the amount of money people were foolishly willing to invest in all kinds of digital technologies.  Eventually, most people lost their shirts in these investments and a recession followed, but not without making all of these fiber-optic cables dirt cheap as investors sold off what assets remained, providing the world with a whole new infrastructure.

But why the bust, you may ask?  Can’t we just get this growth and try to cushion any fall that happens afterwards?  The problem is that just because we don’t ultimately know what is wasteful, it doesn’t mean that there’s no such thing as waste.  If collapsing housing prices are propped up by the government giving subsidies to consumers, then the government will have to pay for it somewhere: if not by cutting costs elsewhere, then by raising taxes elsewhere or by printing money.  While printing money may sound like the solution, one needs to remember that behind all of the money is a finite, though often growing, amount of material wealth, and buying more of one thing means buying less of another.  The labor, raw materials, and loans that might have gone somewhere else are now tied up in a place where they aren’t worth it.  Just consider if every restaurant were propped up: there would be tons of real estate, personnel, food, electricity, and gas tied up in restaurants that almost nobody wants to eat at.

The common retort is that this cushion doesn’t matter because growth will eventually outstrip it, but this neglects the possibility that misdirecting too many of our limited resources may in fact hamper future growth by not allowing adaptations to occur.  I blame the common emphasis on the word “growth” for this misunderstanding: when the focus of economics becomes adaptation rather than growth, the boom and the bust are suddenly two sides of the same coin; both of them an essential part of making the changes that better suit us to both present and future needs.  Consider, in addition, that when we measure “growth”, we are talking about it in terms of money, which is not a direct measurement of wealth but a feedback mechanism that follows certain basic constraints.  Booms and busts are increases and decreases in the activity of money, so we should realize that what we’re looking at is not a pattern of abundance and scarcity per se, but signals of abundance and scarcity.  This might seem contrary to ideas such as stagflation, but consider that stagflation is a phenomenon in which the purchasing power of money goes down while GDP, the amount of money circulating in the economy, stays stagnant.

Noting that these ideas of growth and recession are fundamentally about information, I can now make a big claim: it is not growth or atrophy that matters, it is the pattern of growth and atrophy.  This statement, along with the fact that we patently need to both do stupid things and pay for our stupidity (rather than be smart), means that while an economy strives for adaptation, it does not do so through homeostasis, since it does not thrive by staying close to some equilibrium.  The correct word is allostasis, long-term quasi-stability achieved through volatility.  Without this volatility, the economy would be extremely brittle, as all of its decisions would be based on the market’s current (implicit) hypothesis about our current and future needs, allowing no room for the randomness that is necessary to compensate for what is unknown.  More importantly, however much it may seem otherwise, the money itself is just information; our actual security, material wealth, and future challenges are a sea of chaos that is traversed through feedback and adaptation.

What, then, makes a healthy economy?  The answer is volatility above all other things.*  Money does not provide knowledge, but it provides feedback.  Volatility is an indicator of feedback in two ways: the negative feedback loops make corrections as errors come, while the positive feedback loops provide a level of randomness that appropriately handles the uncertainty of what’s unknown.  So if you want to see whether or not an economy is doing well, don’t look at its growth, but rather at its variance; the wider the gaps between boom and bust, the better.  The same also goes for living things: despite the craze for a low resting heart rate, the evidence seems to suggest that it is the variation in heart rate that may ultimately matter.  But forget longevity for a second: anybody who isn’t completely neurotic understands that health is the ability to live a good life, not a long one; and living a good life means having the capacity for wild swings of both good times and bad times.  That’s why in those pharmaceutical commercials you see those scenes with the dad going mountain biking with his kids because he finally got rid of his COPD–because he can now have more intense experiences without choking to death.  A better measure, for that reason, might be metabolic range: by how much can you multiply your metabolic output?  I don’t know much about the measurement, but there is a metric, and it seems to be linked to your peak physical capacity.
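As a crude illustration of judging by variance rather than by average growth, here is a minimal Python sketch with made-up series; on the view above, the second series, despite a similar mean, reflects the richer feedback.

    from statistics import mean, pstdev

    # Invented "growth rate" series; neither corresponds to any real economy.
    steady   = [2.0, 2.1, 1.9, 2.0, 2.1, 2.0]      # placid trend, little feedback visible
    volatile = [6.0, -3.0, 8.0, -2.0, 7.0, -4.0]   # wide swings between boom and bust

    for name, series in (("steady", steady), ("volatile", volatile)):
        print(name, round(mean(series), 2), round(pstdev(series), 2))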

After all this, chances are that more questions than answers have come up; not least of which is how we can know the difference between out-of-control positive feedback and very large swings of negative feedback.  A related concern is that there are probably many layers of feedback mechanisms rather than just one, such that less effective feedback mechanisms should be destroyed to make room for new ones–that alone is a headache to think about.  While there is no simple answer to these things, we can still keep our sights in a reasonable range by remembering what Keynes once said: “In the long run, we are all dead.”  At the same time, understanding the centrality of allostasis may mean that we can finally get away from the clusterfuck that is occurring between the neo-Keynesians, the Austrians, the conservationists, and probably many more schools of thought that I’ve forgotten.

 

*For further discussion on volatility, I strongly recommend Antifragile, which I have cited multiple times here.  It is a somewhat less theoretical, but much more empirical, treatment of many of the themes in this post.

Cybernetic and Phenomenological Theories

In a number of debates I’ve had in the past few years, I started to see a pattern in which I came to the same fundamental impasse with people again and again.  It wasn’t about disagreeing over facts, but about a semantic difference that I could not describe until around half a year ago (and have since found so daunting to write about that I’ve put it off for all those months.)  The difference came up specifically in debates about things of enough complexity that we do not understand what drives their behavior on the inside, but have a good idea of how their more apparent and observable properties are related.  The result was a constant battle of language games in which theories were seen as nonsensical because they were supposedly in contradiction to things that were much more apparent.  How could carbs/genes/hormones/etc be responsible for obesity if the “real” cause was taking in too many calories?  How could unemployment lead to less overall wealth if jobs are only a means to an end?  If someone is depressed and behaving in self-destructive ways, why can’t they simply choose to do something to help themselves?  The problem is that none of these questions were dealing with things that were mutually exclusive; in every case, the debate was really about two different types of theories.

The theories dealing with larger structures such as genetics, employment, and behavioral disorders are ones that I describe as cybernetic theories.  Cybernetics is the study of how a system regulates its inputs and outputs in order to maintain stability.  That can apply to something as simple as how a thermostat regulates a room’s temperature, or to how a human body regulates its metabolism, energy levels, and behavior in order to maintain homeostasis.  Rather than looking at mere correlations between things that happen, it looks at the actual decision making of a system.  But what makes a particular cybernetic theory?  A cybernetic theory is a hypothesis that attempts to explain a mechanism by which a system regulates itself, and by which its behavior can be predicted.

The more apparent causes that can be seen through observation are ones that I describe as phenomenological theories.  In science, a phenomenological theory is “a theory that expresses mathematically the results of observed phenomena without paying detailed attention to their fundamental significance” (Thewlis, J. (Ed.) (1973). Concise Dictionary of Physics. Oxford: Pergamon Press, p. 248).  An example of this would be the fact that we can observe that an organism loses mass when it consumes fewer calories than it expends, and gains mass when it consumes more calories than it expends; we don’t have to know why it’s true to see that it is.  One can also note that the prosperity of a nation is dependent not on abstract economic numbers, but on actual material wealth: fuel, food, infrastructure, etc.  More abstract entities such as currency help decide where resources are allocated and who gets what–so a theory about currency is a cybernetic theory explaining how resources are acquired, distributed, and used; it doesn’t make more resources pop out of the ground, but it does give people an incentive to look for resources that are in demand and helps prioritize who should get which resources.  In the same way, everyone can agree that jobs are not an end in themselves (otherwise, it would just be useless work), but most of us see employment as an important number because if not enough people have jobs, it would require that we devise a completely new system for distributing wealth to people.

Domain       | Cybernetic                      | Phenomenological
Nutrition    | Hormones                        | Calories
Economics    | Currency, Employment, Interest  | Resources, Labor
Psychology   | Pathology                       | Behavior

 

Your Decisions vs. Your Body’s Decisions

Now that I’ve gotten the general gist across, we can get into examples.  In order to keep things clean, I’ll only go into one: nutrition.  This is where I’ve encountered endless language games in which many people make the ridiculous accusation that those who go beyond calories-in-calories-out are denying the rules of thermodynamics.  I’ve seen this problem even among some of the smartest people I’ve read, such as a debate between Martin Berkhan of Leangains.com and Gary Taubes, author of Why We Get Fat, about the problems of overeating.  Their views are mostly similar (though not entirely), but their biggest disagreements seem to largely come from arguments that are ultimately about semantics.  Taubes says that overeating is not the true cause of obesity, but merely an inevitable side effect of the true cause, which is a bad diet.  Berkhan responds by saying that you don’t magically burn off all of the food if you eat more calories than you expend, but then says that the reason that dietary fat is less fattening is that fat is more satiating than carbohydrates.  What Berkhan missed is that Taubes would agree–it’s not that the calories magically disappear, it’s that the amount of calories eaten is regulated by a mechanism that responds differently to carbohydrates than it does to fat.  I would personally add that not only is that the case, but that a good diet and a healthy body mean that the excess energy in your body is more easily accessible, so you will not only have an easier time burning it, but will be naturally inclined to do so.  Body fat is a battery, and obesity occurs when the body keeps charging the battery without ever drawing on it.*

In both cases, they’re actually agreeing about the cybernetics: eating more fat and protein and fewer carbohydrates leads to the decision to consume fewer calories; what we experience as hunger and satiety are expressions of more fundamental mechanisms that interact in order to regulate the system’s decisions, the most central of these being hormones.  Hormones act as our body’s messengers and end up deciding how hungry we feel, where the calories in our bodies go, how physically restless or restful we feel, and so on.  If food is the natural resource base of our body, then metabolism is the web of economic links, with hormones perhaps acting as our financial and monetary system (interestingly, I believe that there is an analogy between the hormone insulin and the effect of interest rates on economies–a topic I’ll briefly revisit later in this post.)  At the same time, Taubes does not deny that there is an absolute correlation between calorie surplus and weight gain–the difference is that he rightfully points out that nobody is answering the question of why this calorie surplus is happening:

We don’t get fat because we overeat; we overeat because we’re getting fat.

Taubes, Gary (2010-12-28). Why We Get Fat: And What to Do About It (Kindle Locations 1431-1432). Knopf Doubleday Publishing Group. Kindle Edition.

This seems like gobbledygook, or at least weird wording, when one first reads it, but the logic is actually simple: obesity is the condition in which the body decides to overeat and allocate the excess calories to fat.  Sounds implausible?  Then consider this: kids run a constant calorie surplus because their bodies are telling them that they need to grow–it’s not as if they consciously decide to grow.  In both cases, the word decision is key–obesity, just like growing, is a cybernetic phenomenon in which the body is accumulating calories because that’s what it decided to do.  A more detailed explanation of both what is believed to happen and the scientific evidence backing it up is a topic fit for entire books, so I can’t go into it here, but Taubes is a great place to start.  What’s important to note here is that whether we end up running a surplus or deficit of calories is a decision made by the body.

But what about self-control?  That’s an important question, and it makes this post controversial because the first thing I have to say is, no, you don’t have total autonomy over what your body does.  Yes, you can use your conscious will to keep calories under a certain level, but will it work?  If you are not taking in enough energy to get through a workout, then you’ll become more sedentary in response.  In fact, in severe cases of metabolic syndrome, the condition that causes obesity, starvation may cause the body to break down muscle, bone, and even organ mass before burning through all of its fat reserves.  Why would the body do that?  This requires understanding the essence of cybernetic systems: feedback.  

Feedback or Die

The most basic example used to demonstrate cybernetics is a thermostat.  It has a built-in thermometer, which consists of mercury that either expands or contracts due to changes in temperature.  This allows the thermostat to measure the discrepancy between its target temperature and the actual temperature of the room–it will then turn on a heating or cooling system until the discrepancy goes away.  This kind of feedback is known as negative feedback because the feedback causes the discrepancy to shrink.  What’s important to note here is that the information received by a cybernetic system is based on the difference between two measured quantities–the thermostat behaves the way it does because it makes its decision based on whether the volume of the mercury inside its thermometer is less than, greater than, or roughly the same as some defined target.
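The thermostat is simple enough to write down directly; here is a minimal sketch in Python (the temperatures and step sizes are arbitrary), showing negative feedback acting on nothing but the sign of the discrepancy.

    # A toy thermostat: act on the sign of the gap between target and measurement.
    def thermostat_action(temperature, target, tolerance=0.5):
        if temperature < target - tolerance:
            return "heat"
        if temperature > target + tolerance:
            return "cool"
        return "off"

    temperature, target = 15.0, 20.0
    for _ in range(20):
        action = thermostat_action(temperature, target)
        if action == "heat":
            temperature += 1.0    # heating shrinks the discrepancy from below
        elif action == "cool":
            temperature -= 1.0    # cooling shrinks it from above
    print(round(temperature, 1))  # ends up inside the target band around 20.0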

The human body, while operating on these same principles, is much more complex in its rules.  That said, there are still insights that can be gleaned from having a rough idea of how some of its key systems work.  One such system centers on the hormone known as insulin.  (NB: from here on, I am making a theoretical point with examples that may not exactly match up with the most up-to-date scientific theories.  The point of the following is a thought experiment meant to give an intuitive sketch of how feedback works in a cybernetic system.  I repeat: I am not making an empirical claim; I am using a simplification in order to illustrate a concept.)  Insulin is a hormone that is charged with the task of absorbing any glucose that is found in the bloodstream and transporting it to various parts of the body (namely fat and muscle.)  The fat cells and muscle cells that absorb the insulin do so by means of insulin receptors, which calibrate their sensitivity such that they absorb a certain amount of insulin before stopping.

These cells are very much like the thermostat, except that their target will be raised or lowered based on the relative amount of insulin running through the system.  The reason for this is that the body’s goal is to properly distribute nutrients and this distribution is determined by the insulin sensitivity of various parts of the body.  If a receptor is receiving too much insulin, its sensitivity reduces so as not to take in more than it needs.  Currency works like this as well: if an excess of money is flowing through the system but the amount of actual wealth (yes, loaded term, but bear with me) stays the same, then the purchasing power of the currency drops.  Insulin works the same way: just as currency represents a non-fixed amount of wealth, insulin represents a non-fixed amount of nutrients.
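Sticking with the disclaimer above that this is a simplification rather than physiology, the shared logic of the two paragraphs can be caricatured in Python: a receptor (or a currency) dials its sensitivity down when the signal it receives grows faster than the underlying quantity it stands for, so that uptake (or purchasing power) drifts back toward what is actually needed.  Every constant here is invented.

    # A hand-wavy sketch of the receptor/currency analogy; constants are made up.
    def recalibrate(sensitivity, signal, needed, rate=0.1):
        # If effective uptake (sensitivity * signal) overshoots what is needed,
        # desensitize a little; if it undershoots, re-sensitize a little.
        error = sensitivity * signal - needed
        return max(0.0, sensitivity - rate * error / max(signal, 1e-9))

    sensitivity, needed = 1.0, 10.0
    for signal in (10.0, 20.0, 40.0, 80.0):        # the "signal" keeps doubling
        for _ in range(30):                        # receptors adjust gradually, not instantly
            sensitivity = recalibrate(sensitivity, signal, needed)
        print(signal, round(sensitivity, 2), round(sensitivity * signal, 2))
    # Sensitivity keeps falling while effective uptake drifts back toward 10;
    # the yardstick changes even though the underlying need does not.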

The condition known as insulin resistance comes when these receptors become so insensitive that they are no longer absorbing any significant amount of insulin.  The result of this is that the insulin, and any glucose that it might be transporting, remains circulating in the bloodstream.  In order to get the glucose out of the bloodstream, more insulin is produced.  In theory, this should be okay; eventually the same amount of glucose is being transported around, it just needs more total insulin to represent it.  The same goes with money–100 years ago a dollar was worth a lot more, but nothing has broken down because it was gradual enough that at any given moment people had a stable sense of their purchasing power and there was enough time for wages to rise accordingly (it’s not this simple, but my point stands that the system did not collapse.)  In other words, everything is fine if enough of the system can recognize that everything is the same except that the yardstick has changed.

When the change happens too rapidly, however, the yardstick gets mismatched with reality and inefficient behaviors arise; in extreme cases, the yardstick can become entirely useless.  In economics, the former case matches up with the phenomenon of deflation, in which the purchasing power of money has increased due to lower prices, but unemployment results because wages do not fall nearly as fast (Keynes called this “sticky wages”.)  In the case of hyperinflation, prices rise so fast that the currency is no longer a reliable yardstick, and any information the money represented about who owns what vanishes.  While this may sound like an egalitarian’s dream, the problem is that so many vital systems rely on this information that the result is terrible poverty.

But how do these breakdowns occur?  What would make a discrepancy emerge so quickly and grow so fast that it can’t be compensated for?  The answer is positive feedback: where negative feedback closes a gap, positive feedback increases it.  And since the positive feedback increases the gap, it’s likely that the same behavior will repeat because the gap is still there.  Although not all positive feedback is necessarily bad, systems break when they enter some cycle of positive feedback that they can’t get out of.  In the case of deflation, the unemployment caused by falling prices and lower wages means that people will spend even less.  The result?  Prices drop even further and more people are put out of work.  Whether or not bailouts and stimulus packages are a good idea, their intent is to nip the cycle in the bud while it’s still affordable to do so.  In the case of metabolism, the issue is that once the cells are too insensitive to insulin, the body will produce even larger amounts of insulin in order to compensate, but this will inevitably lower the insulin sensitivity of the already resistant receptors.  This can go two ways: the insulin secretion eventually outpaces the receptors’ reduction in insulin sensitivity, in which case some stable point is reached; sadly, this often happens through the body creating new fat cells and eventually becoming obese enough to stabilize the situation.  For those who were wondering why the body would make a decision in which fat absorbs the lion’s share of nutrients to the detriment of everything else, now you know: (relatively) insulin-sensitive fat cells have been recruited to help keep excess glucose out of the bloodstream; they are the nouveau riche of your metabolic system.  The second way is much less pretty: insulin stops being secreted for good, the yardstick is gone; this is diabetes.  From then on, insulin must be regulated through artificial means (injecting insulin manually whenever a meal is eaten.)
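The deflationary half of that story is easy to caricature in code; here is a toy Python sketch (all coefficients invented) in which each pass through the loop widens the very gap that triggered it, the opposite of the thermostat above.

    # A toy deflationary spiral: falling spending -> layoffs -> even less spending.
    spending, capacity = 95.0, 100.0       # index values after an initial shock; not real data
    for month in range(6):
        if spending < capacity:            # demand has fallen below what the economy can produce
            capacity *= 0.95               # layoffs and closures shrink capacity a little
            spending *= 0.90               # the newly unemployed cut spending even more
        print(month, round(spending, 1), round(capacity, 1))
    # The spending/capacity gap grows each month instead of closing: positive feedback.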

The big jump to make is to realize that any sufficiently complex entity requires reliable feedback.  All of the materials in the world are useless if they cannot work together to create the necessary complex behavior.  If the feedback becomes too unreliable, the behavior becomes at best unpredictable, and at worst too incoherent for anything to work properly.  The loss of the ability to produce insulin is the loss of an entire feedback mechanism, and the only reason that diabetes does not guarantee death is that humans have enough metacognition to use conscious regulation as a backup system to regulate glucose manually.  But take that away, and a more fundamental point becomes clear: a system’s health and survival depend on the integrity of its feedback.

Leveraged Phenomenology

This is not to say, however, that phenomenology is useless.  On the contrary, it is actually essential to sound decision-making: the truth is that everything I’ve written above about insulin is an oversimplification of a very complex theory that even with all of its details and nuances cannot fully account for the complexity of the human body.  But then why look at cybernetic theories at all?  If we’re interested in weight loss and we ultimately can only rely on phenomenological theories, wouldn’t it just be best to look at calorie intake and expenditure?

Not so fast; there is one caveat that has not been stated here: you don’t have direct control over your calorie intake.  It’s not just that you don’t directly control what goes to muscle and what goes to fat; your actual behavior with regards to diet and exercise is largely dependent on the messages of your metabolism.  Too much sugar intake will most definitely affect your levels of hunger and your body’s ability to process nutrients efficiently.  Whereas the target of a thermostat is something we have complete control over (we just turn the dial), there is no equivalent part of our body that we have such direct control over.  This doesn’t, however, mean that we have no control; instead, we have differing degrees of control over different inputs.

Since different inputs allow different amounts of control, we need to go by the phenomenological theories that provide us the greatest degree of leverage.  Calorie counting, when it works, works because we make the decision to eat foods that give us more bang for our buck.  While it might be phenomenologically true that we’ll lose weight should we take in fewer calories than we expend, this on its own does not provide us very much in the way of leverage.  On the other hand, there is much phenomenological evidence to show that cutting out certain types of food or engaging in intense exercise sessions a couple of times per week does the same thing–and these are inputs over which we have a much greater degree of control.  But how do we know what will provide us leverage and what won’t?  The answer is simple: for any input, get an idea of how dependent it is on feedback from the system’s prior behavior.  The more feedback-dependence, the less direct control.

Without understanding cybernetic theories, we would not be equipped to see this difference.  Cybernetic theories offer us the ability to see how different phenomena are related through cascades of feedback, and consequently allow us to see which phenomenological theories provide us the most control over future outcomes.  But the example I’ve given here only scratches the surface–however powerful a framework cybernetics is for appreciating complex decision making, it is virtually impossible to decode the entirety of something as complex as the human body, let alone the world economy (which is likely even more complex due to the fact that financial numbers have no theoretical limit.)  It goes without saying that this greatly complicates what began as something that felt simple–but the goal of this entry was to clear up a language game that hinders further inquiry into these ideas; as such, I’ll have to leave the countless questions that I haven’t even mentioned for another time.


*If I’ve misremembered or misphrased this argument in any way, please let me know.  I have no interest in putting words in anyone’s mouth.