Deconstructive Economics Part I: Economic Paradigms

In my last post that touched on the subject of economics, I considered the idea of a paradigm of economics based on allostasis. It left a lot of questions unanswered, but it strengthened my suspicion of a hypothesis that I’ve been mulling over for some time, one that may apply to complex systems in general: that an economy works not by allocating resources more “efficiently” but by continually learning. I put “efficiency” in quotes because the notion of efficiency can’t be discussed in a vacuum, an issue that helped lead me to my current hypothesis. Efficiency implies that there is a metric being optimized for, something that exists in an unambiguous sense only in the presence of an overriding purpose such as a major war or, perhaps, the renovation of a nation’s infrastructure. I should also note that in the presence of such goals, the idea that the free market is “more efficient” seems somewhat unsubstantiated: I have a hard time believing that the Second World War would have been more effectively fought had nations relied entirely on “market solutions” to pump out the manpower and materiel needed for the massive undertaking.

Yet somehow, even in the absence of a definite notion of “efficiency”, there are still things that could obviously be considered “malinvestments”: if a restaurant is bailed out at all costs, no matter how terrible the food, it is uselessly monopolizing claims on all kinds of material wealth that would be better spent elsewhere. This left me with the question of how we can make any claim to something being wasteful in the absence of a clear notion of “value”. One might come up with reasons outside the scope of markets by making arguments for the intrinsic value of railroads or libraries, but when applied on a macroeconomic scale these arguments amount to epistemically arrogant just-so stories that can never be substantiated in any kind of logically rigorous way. Nor are libertarians off the hook: the “free market” in any incarnation is a structure that is built and maintained by central authorities, and while many make the argument that the government should limit its role to providing the absolute basic necessities for an ideal free market, such an argument implies that there is an ideal “free market” that should be created and maintained, which itself assumes that some categorical notion of “efficiency” can be derived from a top-down model of reality.

The underlying issue is not just that our economic theories are models of a much more complex reality, but that the market, at any given point in time, in whatever incarnation, is a model of reality that is simultaneously propped up by and utilized by the encompassing entity we call the economy. Where the economy is the collective exchange and utilization of goods, services, land, labor, commodities, information, etc. carried out by society, the market is a model of reality, a set of scripts, that guides our economic behavior. In order to do so, these scripts must do two things: (1) they need to provide information that is sufficiently clear and reliable for us to decide to follow the script, and (2) they need to continually update their instructions so that the information remains reliable. In other words, the system needs to maintain the ability to process information coherently; it must be allostatic.

There are many such scripts, and further reading can be found in places such as Venkatesh Rao’s essay on the unraveling of scripts, but markets are a very specific type of script. Prior to the emergence of industrialized society, markets were peripheral to everyday life and most household and community needs were met through autarky. With the industrial age came what Karl Polanyi calls “the market pattern”, in which providing for one’s material well-being became increasingly dependent on specialization and exchange. This general “pattern”, which is so strongly entrenched in our culture that our textbooks assume that currency was preceded by barter despite the mountain of historical evidence to the contrary, is the template for all market-scripts, which share the intertwined assumptions that goods are (1) exclusively owned by a single party, (2) fungible and interchangeable, and (3) enumerable according to some ranking. By virtue of these three axioms, market scripts dictate, through the information embedded in currency, institutions, and laws, a set of assumptions about how to determine economic “value”.

The idea of economic value is relative, but that does not mean that it’s unfalsifiable. A market’s script for determining “value” is only viable insofar as it maintains a sufficient signal-to-noise ratio in its processing of feedback. When this fails to happen, price signals stop working and the economy grinds to a halt as people look to other means of economic well-being. At this point, feedback becomes increasingly weak until a new script is implemented. This period of economic crisis is inevitable due to the constant changing of conditions on the ground and the inevitable expiration of any model that makes sense of the world. For a better understanding of how such a process works, it helps to be familiar with the schema of scientific paradigms, as described by Thomas Kuhn in his book The Structure of Scientific Revolutions.

Kuhn’s Ladder and the Languages of Knowledge

In today’s culture, science is held up with praise, and sometimes disdain, as being an enterprise of absolutes: absolute knowledge confirmed by the absolutes of experimentation and repetition. While I won’t deny that the law of gravity is absolute, the practice of science in many ways resembles Einstein’s relativistic view of the universe. Just as any notion of “up” can only be talked about relative to gravitational fields, the notion of objectivity in science is a social construction that relies on professional consensus regarding various ideas, definitions, technical practices, and accepted theories. This is most evident in the practice of peer review, in which a study is not considered scientifically valid until it has been deemed sound by other scientists within the same field. More subtle and important, however, is the fact that without the existence of such consensus, the scientific enterprise would helplessly drown in a sea of noise.

Consider the field of epigenetics as an example. Genes, as a concept, are considered a scientific fact. The debate surrounding epigenetics is not about the existence of genes but about if and how they do different things in different environments. Getting to this point requires an extremely detailed infrastructure of consensus, not just in terms of guiding theories, but down to the relative meanings of the data returned by an instrument. To get an idea of just how precise this is, imagine trying to explain to a scientist from 300 years ago what a virus is. Without any framework of microorganisms, germs, genetics, cells, or proteins, it would be virtually impossible to give them any definition beyond “these little thingies jump from person to person and make you sick.” Even if they suspend their disbelief, what experiments would you be able to run to convince them that this was true? For any kind of scientific research to proceed, there needs to be a shared language. If you can’t agree on whether genes exist, you can’t have a debate about gene expression. The next rung on the ladder can only be reached if you can plant your foot on the previous rung–otherwise, there is nothing that can be labeled “up” or “down”.

These shared languages, known as scientific paradigms, can also be thought of as a kind of data compression. You don’t need to thoroughly understand every single observation and theory that came before in order to become a scientist–you just have to know enough of it that you have a common semantic frame for building hypotheses and describing the setup and results of your experiments. Under these conditions, the field proceeds under what Kuhn calls normal science: a state in which a number of questions have emerged within the constraints of the paradigm and scientists can spend their time further elaborating on and classifying phenomena within the paradigm’s theoretical framework. This state can only last so long as the paradigm remains a cost-effective way of compressing the data. If the paradigm fails to make meaningful predictions, scientists will slowly look for alternatives and lose faith in the current framework, leading to a period of extraordinary science. Prior to this, theories may be patched up so that they fit the data, and wrong predictions may be outright ignored, but this can only continue as long as the benefits of the paradigm outweigh the cost. If your inbox puts a few of your important e-mails in “miscellaneous”, it still might save you a good deal of energy. You probably wouldn’t say the same if that’s what happened to 80% of your important e-mails.

Most importantly, the theories that comprise a scientific paradigm are not formulated in some universal language of first principles. There are reasons why this is in fact impossible, but such ideas could fill up entire books, and in fact do. For our purposes, it suffices to say that the theories of paradigms are semantically grounded through a combination of shared language with other paradigms, subordination to other paradigms (such as a theory of metabolism being constrained by the laws of thermodynamics), and the possibility that a paradigm or group of paradigms contradicts itself due to an oversight regarding its initial assumptions. Due to the fundamental limits of any sufficiently complex logical system, scientific paradigms in fact hold the seeds of their own destruction, providing feedback as they encounter real-world observations before the feedback inevitably hits diminishing returns followed by an outright harmful ratio of noise to signal:

[Figure: hormesis curve, courtesy of Nassim Nicholas Taleb, Antifragile]

In this sense, every paradigm is ultimately “wrong”, but to look at it through the lens of right and wrong would be a mistake. Science does not, and cannot, happen in a vacuum: in order to get an answer, you first have to ask a question. Every scientific paradigm is fundamentally a set of questions, each with a range of intelligible answers (saying 2 + 2 = 5 is wrong but intelligible; saying 2 + 2 = “ham sandwich” doesn’t make any sense whatsoever.) Knowing which questions to ask requires having an idea of what you’re looking for, which can only be done by finding answers that reveal the contradictions in your original set of questions. Once you find a paradox, you can find a new frame to make sense of your data, but until then, what we cannot speak of must be passed over in silence.

Markets, Paradigms, and Disequilibrium

When I last talked about the phenomenon of feedback in an economy, I suggested that feedback was good up until the point that it compromised the system’s ability to process feedback. At the time, I had no good answer as to when this point was: after all, sometimes the system should outright fail so that a new system, better suited to new realities, can take its place. If we frame markets as Kuhnian paradigms, on the other hand, the question can be brought into much sharper focus. Just as a scientific paradigm provides scientists with guiding questions and theories to make sense of their observations and guide their experiments, the currency, laws, and institutions of a market work together to make sense of the feedback that occurs within an economy. In order to get an idea of how this works, we’ll have to revisit our old frenemy, the axiom of utility.

First things first: utility is not about “rationality” in the sense of “smoking is irrational because it’s bad for you.” It simply means that your preferences are consistent: that you do not prefer steak to chicken, chicken to salmon, and salmon to steak. While this is not actually how people behave, as confirmed by numerous psychological experiments, it’s nonetheless a useful concept when not looked at in a vacuum. Within the scope of the market, transactions are by definition an indicator of utility. If you’re willing to pay more for a pound of steak than for a pound of chicken, then that pound of steak is more important to you than that pound of chicken. It might be for the most whimsical or irrational reasons, but in that moment, you’ve made the unambiguous decision that one thing is more valuable to you than another. In the framework of decisions within a market, currency is an accounting identity: you can choose to buy and sell whatever you want, but you have to make a decision about the relative value of everything you consume, sell, and save.
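As a toy sketch of what consistency means here (the example and function names are mine, not standard economics), the transitivity requirement amounts to demanding that pairwise preferences contain no cycle, which can be checked mechanically:

```python
# Toy illustration (my own, not from the post): the axiom of utility demands
# that pairwise preferences contain no cycle like steak > chicken > salmon > steak.

def has_preference_cycle(prefers):
    """prefers: set of (a, b) pairs meaning 'a is preferred to b'.
    Returns True if the preferences contain a cycle (an inconsistency)."""
    graph = {}
    for a, b in prefers:
        graph.setdefault(a, set()).add(b)

    def reachable(start, target, seen):
        # Depth-first search: can we get from `start` back to `target`?
        for nxt in graph.get(start, ()):
            if nxt == target or (nxt not in seen and reachable(nxt, target, seen | {nxt})):
                return True
        return False

    # A cycle exists iff some less-preferred item leads back to its preferrer.
    return any(reachable(b, a, {b}) for a, b in prefers)

consistent = {("steak", "chicken"), ("chicken", "salmon"), ("steak", "salmon")}
cyclic = {("steak", "chicken"), ("chicken", "salmon"), ("salmon", "steak")}
```

The consistent set admits a ranking (steak over chicken over salmon); the cyclic one admits none, which is exactly what the axiom rules out.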

Within a scientific paradigm, scientists work to make sense of discrepancies between their observations and the tenets of the paradigm. Within markets, the same thing happens regarding discrepancies between what individual actors value and what the market values. This is most apparent in finance, where investors look to find discrepancies between the price of an asset as assigned by the market (itself an implicit prediction about the later price) and what the investor thinks the price will be later on. The same discrepancies also matter to businesses, which look to make a profit by selling something that’s worth more than what it cost to procure–a complex process that requires all kinds of consideration about present and future prices and the future needs of consumers. Even among consumers the same thing takes place as they strive to get something for nothing by paying less for goods than what they consider the goods to be worth. Each of these transactions acts as feedback, with the market adjusting its prices to fill the gap between actual behavior and expected behavior. All of these examples are extreme simplifications, but the main idea is that economic actors generate feedback by exploiting the differences between what the market knows and what the actor knows, a process known in finance as arbitrage.
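To make the feedback mechanism concrete, here is a deliberately crude sketch (the `impact` parameter and the starting prices are illustrative assumptions of mine, not a model from finance) in which each trade against a price gap nudges the two prices toward each other:

```python
# Hypothetical sketch: an arbitrageur exploits a price gap between two markets
# for the same good, and each trade feeds information back into both prices.

def arbitrage_step(price_a, price_b, impact=0.1):
    """Buy in the cheaper market, sell in the dearer one.
    Buying pushes the low price up; selling pushes the high price down."""
    low, high = min(price_a, price_b), max(price_a, price_b)
    profit = high - low
    low += impact * profit   # buying pressure raises the low price
    high -= impact * profit  # selling pressure lowers the high price
    return (low, high, profit) if price_a <= price_b else (high, low, profit)

a, b = 90.0, 110.0
for _ in range(5):
    a, b, gained = arbitrage_step(a, b)
# After each step the exploitable gap shrinks: the arbitrageur's profit is
# exactly the discrepancy between what the two markets "knew".
```

Each iteration earns a smaller `gained` than the last, which is the sense in which acting on a discrepancy simultaneously profits the actor and erodes the discrepancy.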

It would be a fatal mistake, however, to assume that this means that the market simply strives towards equilibrium as the discrepancies between supply and demand are flattened out. On the contrary, most of these behaviors push transactions away from equilibrium by adding more economic complexity: innovations create new demand for and dimensions of comparison between goods, investors place bets based on information that has not yet been accounted for, and gluts and scarcities of goods spur the use of substitutes that may not have been used otherwise. With each instance of feedback, actors fill the information gap with information that introduces new gaps. This continues so long as the market can honestly account for the economic behavior of its constituent actors. This process, in which the market effectively processes feedback and creates wealth by reliably increasing in complexity, could be analogously called normal economics.

In the absence of such honest accounting, the market can no longer effectively process feedback and will collapse as it increasingly loses relevance with regards to people’s present needs. To give an example, let’s consider a highly skilled programmer who does work for open source projects. While he might work on these projects for recreational or altruistic purposes, he can only spend as much time on them as his finances will allow. Meanwhile, while others may benefit from his contributions, they will spend no money on the software no matter how valuable it is to them, while spending more of their money on things that wouldn’t have as high a relative value were they forced to pay for the software. As a result, the market overstates the value of these other goods and services while understating the value of the software.

This is not to say that there is something categorically wrong with people giving things away for free; remember, all notions of “value” are defined relative to the axioms of the market, not as some categorical good. What it does mean is that the market as a paradigm becomes less useful because the information it provides about relative needs is less reliable. Just as too much of a mismatch between a scientific paradigm and its individual observations can render it ineffective or even downright useless, a failure to account for a new technology or a potential collapse in credit can render a market useless. People will still continue to transact, but more and more of it will be off the books, and a new market will eventually form in order to streamline the extremely inefficient endeavor of performing transactions off the record. During this time, the economy enters a period of extraordinary economics, in which the current market does not make sufficient sense of the economy. We are in one such period now for several reasons, and explaining why may make this idea more clear.

The Theories of Currency: A Speculative Parable

At some point, I’d like to go into a much deeper historical digression to really get at the meat of the ideas posted above, but given the length of this post and my own lack of erudition, we’ll have to settle for a few key points about the past 100 years with some disgusting simplifications. Going forward, I’d like to state that this should all be read as a parable meant to demonstrate a broad idea, not an empirical hypothesis about the causes behind past and present economic crises. More specifically, but just as important, remember that this is about how markets themselves act as tacit models, not a discussion of macroeconomic theory.

The economic crisis of a few years ago spurred a lot of interest in a pivotal moment in American history: the Great Depression. The narrative, supported by the dominant Neo-Keynesian and Monetarist schools of economics, was that this time, with our better understanding of economics, we weren’t going to make the mistake made by fiscal conservatives back in the 1930s. Unfortunately, things have not gone according to plan, with “improvements” in unemployment numbers coming from a combination of lower wages, reduced hours, and a shrinking of the labor force. GDP has not fared much better, showing little increase beyond the tautological increase in government debt. The common reaction to this by libertarians, fiscal conservatives, and members of the Austrian school is that Keynes was a charlatan who was wrong all along. While that may or may not be the case, I contest their claim on the basis that they’re talking completely out of historical context: just because Keynesian economics doesn’t make sense now, that doesn’t mean that it never made sense. Just as every market is a model of a particular time and place, every system of currency also embeds within it certain assumptions. These assumptions are too complex to be fully summarized, but I can still get across the gist of what I mean.

During the period in which the Great Depression took place, there was a great deal of easy potential for economic growth. Oil was still a recent discovery and the process of mechanization was still in full swing. For many countries, especially the United States, discovery rates of oil were increasing rapidly with each year (the US did not hit a peak in oil production until 1970) and there was so much to go around that it was a waste not to do something with it. All this growth eventually led to a period of intense speculation, culminating in the events of Black Tuesday, when a collapse in the stock market and the resulting bank run led to a severe deflationary spiral.

None of this happened for lack of material wealth: sure, plenty was poorly invested during the boom years, but most of the resulting damage came from a vicious cycle in which a lack of available money caused cuts in spending, which caused further cuts in wages and employment, which left even less money in circulation, and so on ad nauseam; all of this stemming initially from the bank runs that caused most of the available credit in the market to disappear. Had the Federal Reserve been able to create more money, this might have been averted, but as it stood at the time, the United States was on a gold standard, meaning that any available money in the economy had to be backed by a fixed amount of gold. But before the Keynesians jump for joy and the Austrians burn me at the stake, I’d like to point out that this has to be taken in context: yes, there were misplaced investments that had to be corrected by the market, but beyond a certain point, the economy was creating a self-fulfilling state of scarcity despite the enormous amount of material wealth available. The gold standard, in which money is a static and fixed quantity, represents a world where wealth neither grows nor shrinks in the future. This is not only counter-productive in the case of a self-fulfilling deflationary cycle, but is in fact a recipe for disaster as the economy grows too big with too little credit to support it. Although other factors, such as the forced deleveraging via wartime austerity, arguably played a major role in the end of the Great Depression, the world economy’s transition away from the gold standard and the subsequent economic recovery imply a paradigm shift in which a finite money supply based on gold gave way to the fiat money we have today.
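The vicious cycle described above can be caricatured in a few lines (the propensity-to-spend figure below is an arbitrary assumption for illustration, not an empirical estimate):

```python
# Stylized sketch of a deflationary spiral: each round, actors spend only a
# fraction of the income they received last round, so total spending (and
# hence everyone's income) shrinks multiplicatively.

def deflationary_spiral(initial_spending, propensity_to_spend, rounds):
    """propensity_to_spend < 1 models hoarding after a credit collapse;
    one actor's spending is another's income, so the cut compounds."""
    spending = initial_spending
    history = [spending]
    for _ in range(rounds):
        spending *= propensity_to_spend  # less income -> even less spending
        history.append(spending)
    return history

path = deflationary_spiral(100.0, 0.9, 10)
```

With everyone spending 90 cents of each dollar received, aggregate spending falls by more than half in ten rounds, even though no physical wealth was destroyed anywhere in the process, which is the self-fulfilling scarcity the paragraph above describes.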

Zoom to 2008, when the banks catastrophically failed and were bailed out by the government. Despite taking all the measures that helped end the Great Depression, the recovery has been very limited and some would say that it happened only on paper. Once again, it’s worthwhile to put this in historical context, something that can be done with the help of two pictures (courtesy of Chris Martenson and the EIA respectively):

[Charts: US credit market debt as a share of GDP, courtesy of Chris Martenson; world oil production, courtesy of the EIA]

The first picture is the ratio of credit market debt to GDP. Other than the spike to the left, which was caused not by a rise in credit (remember: gold standard) but by a rapid drop in GDP, the ratio of debt to GDP (private and public) has reached unprecedented levels in the past few decades. The reason for this literally exponential growth is that our current system of money is based on the issuing of debt. What that means is that money is created whenever someone takes out a loan from a bank. In order to pay off that loan, the debtor not only has to pay back the principal, but also the interest, meaning that they’re going to have to acquire more money than they originally had. Apply this to every dollar circulating in the economy, and it means that an amount of money proportional to the amount of money currently in the system has to be created out of thin air; something that is done not by directly printing money, but by having people take out more loans from more banks. Meanwhile, banks themselves need only keep a small fraction of their deposits in reserve–so for every dollar deposited to a bank, several more dollars are introduced into the market. The result is a money supply that grows exponentially (if you feel the need for further elaboration on this subject, I recommend this documentary.)
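The fractional-reserve mechanism described above is, at bottom, a geometric series, and can be sketched directly (the 10% reserve ratio is just the standard textbook figure, not a claim about actual requirements):

```python
# Back-of-the-envelope sketch of fractional-reserve money creation: each
# deposit is mostly re-lent, and the loaned money comes back as a new deposit.

def money_created(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the geometric series of successive re-deposits."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lendable fraction is re-deposited
    return total

# With a 10% reserve requirement, $1 of base money ends up supporting about
# $10 of deposits, matching the textbook multiplier of 1 / reserve_ratio.
total = money_created(1.0, 0.10)
```

This is the sense in which "for every dollar deposited to a bank, several more dollars are introduced into the market": the multiplication happens through repeated lending and re-depositing, not through the printing press.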

The issue here is the opposite of the gold standard. Whereas the gold standard fails when the economy becomes too big for its money supply, debt-based currency can only go on so long as the debt is continually rolled over. If not, then credit will collapse as people default on their loans and banks become insolvent (remember: since a bank keeps only a small fraction of its deposits in reserve, for every dollar a bank loses, the economy loses several dollars.) In the event that there’s easy wealth to be exploited that just requires more capital, government intervention has a decent chance of solving the problem. If, however, the money supply has far outpaced any plausible rate of growth in material wealth, then government intervention potentially delays the inevitable by further misdirecting available resources. Where the gold standard failed us by fooling us into thinking that there wasn’t enough to go around, currency based on debt constantly tells us to go ahead and borrow because the future will be more full of schwag than ever. The chart on the right is not very reassuring: the production of the world’s most important energy source remains stagnant even in spite of rising gas prices and the government intervention needed to provide sufficient capital.
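The asymmetry between money and obligations can also be caricatured in a few lines (the interest rate and time horizon below are arbitrary assumptions for illustration):

```python
# Stylized sketch: if all money enters circulation as interest-bearing loans,
# total obligations exceed the money in existence, so aggregate debt must be
# rolled over, and must grow, for the books to balance.

def debt_vs_money(initial_loan, interest_rate, years):
    """The money supply stays at the loaned principal; the obligation compounds."""
    money = initial_loan  # only the principal was ever created
    owed = initial_loan
    for _ in range(years):
        owed *= (1 + interest_rate)  # interest accrues on the whole balance
    return money, owed

money, owed = debt_vs_money(100.0, 0.05, 10)
# owed > money: the shortfall can only be covered by someone, somewhere,
# taking out new loans, which is why the system depends on continual rollover.
```

The gap between `owed` and `money` is the structural pressure for perpetual credit expansion that the paragraph above describes.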

Again, none of this should be taken too seriously. All of the ideas of scarcity and abundance that I’ve put forward are based on assumptions about the future availability and economic significance of fossil fuels. While we can make some educated guesses from 50,000 feet, the actual information comes from the feedback provided by the market in the form of currency-based signals. But if that’s so, then what allows me to call the level of debt problematic? Shouldn’t I take it as a signal that the future will be abundant enough to pay it off? The answer is not to look for an overt match with reality, but to look at the level of clarity provided by the current paradigm. In the case of our current monetary system, it helps to look at the signals provided by the centrally controlled discount window, which loans money to America’s major banks at an interest rate decided on by the Federal Reserve. These interest rates generally have a great influence on the cost of borrowing in general, since the cheaper it is for a bank to acquire cash, the more competitively they can price their own loans, which gives the Federal Reserve a way to influence signals of scarcity and abundance. Prior to the crash, interest rates were set extremely low in order to avoid a recession after the dot-com bubble, following which they remained that way in the belief that it was creating robust economic growth. This was not, however, matched up with reality: consumers, businesses, and banks all took on a dangerous amount of debt that failed to take into account the probability of a catastrophic crash. The paradigm’s predictions* miserably failed.

Since interest rates were low, there was little leeway left for lowering interest rates further. Even after resorting to making credit free, banks continued to hoard money and businesses failed to expand or hire. Meanwhile, the stock market has soared while banks pay record bonuses to their executives, creating a scenario in which both the relative and absolute wealth of the most powerful figures in the US economy has increased despite high unemployment and record numbers of people receiving emergency government assistance in order to get by. All of this signifies faulty feedback reminiscent of Kuhn’s extraordinary science, with the current paradigm getting patched up in such a way that it technically fixes the falsifications; corporate profits, GDP, the stock market, and money supply are all healthy as a result of monetary intervention, but the script only survives by fixing the game for a shrinking number of parties at everyone else’s expense. If you look at all of the unemployment data and not the fudged numbers of the official “unemployment rate”, you can see that fewer and fewer people are gainfully employed, as the recovery in the official numbers has been due to a combination of an increase in part time jobs and a decrease in the number of people counted in the labor force. This cannot be overstated: the economic script followed by the United States depends on gainful employment. If you don’t have a full time job, you fall out of the system into the underclass, which is supported by an increasingly large amount of direct government spending. This propping up of a permanent underclass is yet another duct-tape fix that keeps the paradigm from being abandoned at the cost of information content (NB: I am NOT advocating that we starve the poor or get rid of our safety net. I am only pointing out that failing economic systems can push their failures under the table in order to stay afloat.)

What do I mean by information content? Think back to the importance of honest accounting: corporations and banks continue to make profits under the principles of the “free market”, but these profits are largely the result of government spending that props up both the corporations and the consumers who might otherwise not have money to spend on their products and services. Zombie corporations hog resources that may otherwise have been put to use differently, and people who may have found work in an updated economy instead must rely on government handouts as obsolete firms fail to make use of the spare labor around them. Every dollar spent attempting to preserve an outdated paradigm is a dollar that can’t work as feedback, diminishing the effectiveness of price signals as corporations and banks get a free lunch from a system whose resources are ultimately finite. Instead of creating wealth, these bailed out corporations simply relocate it, eventually compromising economic allostasis as ever fewer actors are left to contribute information to the larger economy.

All of this may sound like a staunch argument for an unfettered free market with minimal government intervention, but that is actually not what I’m saying. In this particular case, the fiscal and monetary policy of the United States seems to be a desperate attempt to preserve a paradigm that is no longer working, but that does not mean that unfettered markets generate the most wealth. Since there is actually no such thing as a totally free market, it’s indisputable that every market paradigm is formed by a combination of principles via positiva and principles via negativa and that any successful market must be constructed with both kinds of measures in mind. Many libertarian ideas currently make sense because there are many government interventions that do not make sense in the context of how price signals currently work, but that doesn’t change the fact that the very system of price signals in a market economy is based on an a priori model of what constitutes an effective economy. There are plenty of instances, even now, where a lack of government enforcement is actually detrimental to proper market feedback. Take the example of digital media, where file-sharing has led to consumers being able to understate how much the media was actually worth to them while artists lose the capacity to produce more work due to a lack of compensation. In re-thinking our economic paradigm, including our system of currency, much will be constructed in a top-down manner no matter what.

When dealing with problems within a paradigm, it suffices to look at the internal contradictions and the degradation of feedback, but when constructing a new one, scientists inevitably look for new a priori principles. Ours will inevitably be determined by a number of environmental, technological, geopolitical, and cultural factors; ideas that I would like to elaborate on should I find the stamina to write a second part. In particular, I’d like to get into how the intertwined history of industrialization, centralized states, and the corporation underlies the paradigm of the modern free market. I’d also like to consider some other systems of currency that could not be talked about in this short parable: the Bretton Woods system, privately issued bank notes, and derivatives; all of which broaden our ideas of how currency underpins the kind of feedback that occurs in a market economy. From there, I hope to take a more nuanced view of some of the more apparent problems in the near future: remuneration in an age of information, the tragedy of the commons concerning environmental problems, the loss of gainful employment due to outsourcing and robotics, and how we may be able to reduce economic fragility without compromising the complexity that has brought us so much wealth in the past few hundred years.

Notes:

*For uncertainty geeks out there, take the word “prediction” with a grain of salt. I do not necessarily mean that banks or economists, or even economies as a whole, are supposed to predict a precise outcome. They are instead supposed to robustly account for present and future needs, often by correctly taking what is fundamentally not certain into account.