Wednesday, September 30, 2015

Theory vs. Data in economics

OK, I promised a more pompous/wanky followup to my last post about "theory vs. data", so here it is. What's really going on in econ? Here are my guesses.

First of all, there's a difference between empirics and empiricism. Empirics is just the practice of analyzing data. Empiricism is a philosophy - it's about how much you believe theories in the absence of data. You can be a pure theorist and still subscribe to empiricism - you just don't believe your theories (or anyone else's theories) until they've been successfully tested against data. Of course, empiricism isn't a binary, yes-or-no thing, nor can it be quantitatively measured. It's just a general idea. Empiricism can encompass things like having diffuse priors, incorporating model uncertainty into decision-making, heavily penalizing Type 1 errors, etc.

Traditionally, econ doesn't seem to have been very empiricist. Economists had strong priors. They tended to believe their theories in the absence of evidence to the contrary - and since evidence of any kind was sparse before the IT Revolution, that meant that people believed a lot of untested theories. It was an age of great theoryderp.

That created a scientific culture that valued theory very highly. Valuable skills included the ability to make theories (math skill), and the ability to argue for your theories (rhetorical skill). Econ courses taught math skill, while econ seminars taught rhetorical skill.

Then came the IT Revolution, which dramatically reduced the costs of gathering, transmitting, and analyzing data. It became much, much easier to do both high-quality empirical econ and low-quality empirical econ.

But at the same time, doing mediocre theory became easier and easier. The DSGE revolution established a paradigm - an example plus a framework - that made it really easy to do mediocre theory. Just make some assumptions, plug them into an RBC-type model, and see what pops out. With tools like Dynare, doing this kind of plug-and-chug theory became almost as easy as running regressions.

But Dynare and RBC didn't make it any easier to do really good theory. Really good theory requires either incorporating new math techniques, or coming up with new intuition. Computers still can't do that for us, and the supply of humans who can do that can't be easily increased.

So the supply of both good and mediocre empirics has increased, but only the supply of mediocre theory has increased. And demand for good papers - in the form of top-journal publications - is basically constant. The natural result is that empirical papers are crowding out theory papers.

But - and here comes some vigorous hand-waving - it takes some time for culture to adjust. Econ departments were slow to realize that these supply shifts would be as dramatic and swift as they were. So they focused too much on teaching people how to do (mediocre) theory, and not enough on teaching them how to do empirics. Plus you have all the old folks who learned to rely on theory in a theory-driven age. That probably left a lot of economists with skill mismatch, and those people are going to be mad.

At the same time (more hand-waving) the abruptness of the shift probably creates the fear that older economists - who review papers, grant tenure, etc. - won't be able to tell good empirical econ from mediocre. Hence, even empirical economists are quick to police the overuse of sloppy empirical methods, to separate the wheat from the chaff.

Now add two more factors - 1) philosophy, and 2) politics.

We have a deep-seated need to think we know how the world works. We have a very hard time living with uncertainty - most of us are not like Feynman. When all we have is theory, we believe it. We hate Popperianism - we recoil against the idea that we can only falsify theories, but never confirm them.

But when we have both facts and theory, and the two come into a local conflict, we tend to go with the facts over the theory. The stronger the facts (i.e. the more plausible the identification strategy seems), the more this is true.

The data revolution, especially the "credibility revolution" (natural experiments), means that more and more econ theories are getting locally falsified. But unlike in the lab sciences, where experiments allow you to test theories much more globally, these new facts are killing a lot of econ theories but not confirming many others. It's a Popperian nightmare. Local evidence is telling us a lot about what doesn't work, but not a lot about what does.

In physics it's easy to be a philosophical empiricist. As a physics theorist, you don't need to be afraid that the data will leave you adrift in the waters of existential uncertainty for very long. Physics is very non-Popperian - experimental evidence kills the bad theories, but it also confirms the good ones. In the early 20th century, a bunch of experimental results poked holes in classical theories, but quickly confirmed that relativity and quantum mechanics were good replacements. Crisis averted.

But that doesn't work in econ. A natural experiment can tell you that raising the minimum wage from $4.25 to $5.05 in New Jersey in 1992 didn't cause big drops in employment. But it doesn't tell you why. Since you can't easily repeat that natural experiment for other regions, other wage levels, and other time periods, you don't get a general understanding of how employment responds to minimum wages, or how labor markets work in general. Crisis not averted.

So philosophical empiricism is far more frightening for economists than for natural scientists. Living in a world of theoryderp is easy and comforting. Moving from that world into a Popperian void of uncertainty and frustration is a daunting prospect. But that is exactly what the credibility revolution demands.

So that's probably going to cause some instinctive pushback to the empirical revolution.

The final factor is politics. Theoretical priors tend to be influenced to some degree by politics (in sociology, that's usually left-wing politics, while in econ it tends to be more libertarian politics, though some left-wing politics is also out there). A long age of theoryderp created a certain mix of political opinions in the econ profession. New empirical results are certain to contradict those political biases in many cases. That's going to add to the pushback against empirics.

So there are a lot of reasons that the econ profession will tend to push back against the empirical tide: skill mismatch, the limitations of natural experiments, and the existing mix of political ideology.

Of course, all this is just my hand-waving guess as to what's going on in the profession. My guess is that econ will be dragged kicking and screaming into the empiricist fold, but will get there in the end.

Monday, September 28, 2015

A bit of pushback against the empirical tide

There has naturally been a bit of pushback against empiricist triumphalism in econ. Here are a couple of blog posts that I think represent the pushback fairly well, and probably represent some of the things that are being said at seminars and the like.

First, Ryan Decker has a post about how the results of natural experiments give you only limited information about policy choices:
[T]he “credibility revolution”...which in my view has dramatically elevated the value and usefulness of the profession, typically produces results that are local to the data used. Often it's reasonable to assume that the "real world" is approximately linear locally, which is why this research agenda is so useful and successful. But...the usefulness of such results declines as the policies motivated by them get further from the specific dataset with which the results were derived. The only way around this is to make assumptions about the linearity of the “real world”[.] (emphasis mine)
Great point. For example, suppose one city hikes minimum wages from $10 to $11, and careful econometric analysis shows that the effect on employment was tiny. We can probably assume that going to $11.50 wouldn't be a lot worse. But how about $13? How about $15? By the time we try to push our luck all the way to $50, we're almost certainly going to be outside of the model's domain of applicability.
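Here's a toy numerical sketch of that extrapolation problem - every number and the functional form below are made up, purely to illustrate how a locally-fit linear rule can go badly wrong outside the observed range:

```python
# Toy illustration with a made-up "true" relationship (not real data):
# a linear rule estimated from one small policy change can extrapolate badly.

def true_employment_change(min_wage):
    # Hypothetical nonlinear effect: negligible near $10, huge at high floors.
    return -0.002 * (min_wage - 10.0) ** 3

# The natural experiment only observes the move from $10 to $11.
slope = true_employment_change(11.0) - true_employment_change(10.0)  # per dollar

def linear_guess(min_wage):
    return true_employment_change(10.0) + slope * (min_wage - 10.0)

for w in (11.5, 13.0, 15.0, 50.0):
    print(f"${w:5.1f}: linear guess {linear_guess(w):+.3f}, "
          f"'true' effect {true_employment_change(w):+.3f}")
```

The point isn't the specific numbers; it's that nothing in the original $10-to-$11 experiment tells you when the linear guess stops being a good approximation.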

I have not seen economists spend much time thinking about domains of applicability (what physicists would call a theory's "domain of validity"). But it's an important topic to think about.

Ryan doesn't say it, but his post also shows one reason why natural experiments are still not as good as lab experiments. With lab experiments you can retest and retest a hypothesis over a wide set of different conditions. This allows you to effectively test whole theories. Of course, at some point your ability to build ever bigger particle colliders will fail, so you can never verify that you have The Final Theory of Everything. But you can get a really good sense of whether a theory is reliable for any practical application.

Not so in econ. You have to take natural experiments as they come. You can test hypotheses locally, but you usually can't test whole theories. There are exceptions, especially in micro, where for example you can test out auction theories over a huge range of auction situations. But in terms of policy-relevant theories, you're usually stuck with only a small epsilon-sized ball of knowledge, and no one tells you how large epsilon is.

This, I think, is why economists talk about "theory vs. data", whereas you almost never hear lab scientists frame it as a conflict. In econ policy-making or policy-recommending, you're often left with a choice of A) extending a local empirical result with a simple linear theory and hoping it holds, or B) buying into a complicated nonlinear theory that sounds plausible but which hasn't really been tested in the relevant domain. That choice is really what the "theory vs. data" argument is all about.

Anyway, the second blog post is Kevin Grier on Instrumental Variables. Grier basically says IV sucks and you shouldn't use it, because people can always easily question your identification assumptions:
First of all, no matter what you may have read or been taught, identification is always and everywhere an ASSUMPTION. You cannot prove your IV is valid...
I pretty much refuse to let my grad students go on the market with an IV in the job market paper. No way, no how. Even the 80 year old deadwoods in the back of the seminar room at your job talk know how to argue about the validity of your instruments. It's one of the easiest ways to lose control of your seminar. 
We've had really good luck placing students who used Diff in diff (in diff), propensity score matching, synthetic control, and even regression discontinuity. All of these approaches have their own problems, but they are like little grains of sand compared to the boulder-sized issues in IV.
He's absolutely right about the seminar thing. Every IV seminar degenerates into hand-waving about whether the instrument is valid. He doesn't mention the problem of weak instruments, either, which is a big problem that has been recognized for decades.
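For what it's worth, the weak-instrument problem is easy to see in a quick simulation. Here's a minimal sketch with a made-up data-generating process (not anyone's actual application): when the first stage is weak, the just-identified IV estimate becomes extremely noisy, even though the instrument is, by construction, perfectly valid:

```python
# Minimal weak-instrument simulation (made-up data-generating process).
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_beta = 500, 2000, 1.0
ols_estimates, iv_estimates = [], []

for _ in range(reps):
    z = rng.normal(size=n)                      # candidate instrument
    u = rng.normal(size=n)                      # unobserved confounder
    x = 0.05 * z + u + rng.normal(size=n)       # very weak first stage
    y = true_beta * x + u + rng.normal(size=n)  # outcome
    cxy = np.cov(x, y)
    ols_estimates.append(cxy[0, 1] / cxy[0, 0])
    # Just-identified IV is the ratio of reduced-form to first-stage covariances.
    iv_estimates.append(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])

print("true beta:", true_beta)
print("median OLS estimate:", round(float(np.median(ols_estimates)), 2))
print("median IV estimate:", round(float(np.median(iv_estimates)), 2))
print("IV 5th-95th percentile:", np.round(np.percentile(iv_estimates, [5, 95]), 2))
```

And that's the good case, where the exclusion restriction holds by construction. In real applications you get the noise and the seminar-room arguments about validity.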

Now, Kevin is being hyperbolic when he categorically rejects IV as a technique. If you find a great instrument, it's really no different than regression discontinuity. And when you find a really good instrument, even the "deadwoods" in the back of the room are going to recognize it.

As for IV's weakness in the job market, that's probably partly because it has been eclipsed by newer methods that haven't been around long enough to accumulate the same baggage. If and when those methods get overused, people will start making a lot of noise about their limitations too. And as Ed Leamer reminds us, there will always be holes to poke.

Anyway, these posts both make good points, though Kevin's is a little over-the-top. Any research trend will have a pushback. In a later, more pompous/wanky post, I'll try to think about how this will affect the overall trend toward empiricism in econ... (Update: Here you go!)

Saturday, September 19, 2015

Is the EMH research project dead?

Brad DeLong:
[I]t is, I think, worth stepping back to recognize how very little is left of the original efficient market hypothesis project, and how far the finance community has drifted--nay, galloped--away from it, all the while claiming that it has not done so... 
The original EMH claim was...[y]ou can expect to earn higher average returns [than the market], but only by taking on unwarranted systematic risks that place you at a lower expected utility... 
[But f]inance today has given up any preference that the--widely fluctuating over time--expected systematic risk premium has anything to do with [risk]...It is very, very possible for the average person to beat the market in a utility sense and quite probably in a money sense by [buying portfolios of systematically mispriced assets].
DeLong cites the interesting new paper "Mispricing Factors", by Robert Stambaugh and Yu Yuan. The paper puts sentiment-based mispricing into the form of a traditional factor model.

Is DeLong right? Is the Efficient Markets research project dead?

Well, no. Models that explain time-varying risk premia (really, time-varying excess returns) as the result of time-varying utility are far from dead. The finance academia community doesn't use these models exclusively, but they are still very common. Probably the most popular of these is the "long-run risks" model of Bansal and Yaron (2004), which combines Epstein-Zin preferences with persistent consumption risks and stochastic volatility to generate time-varying risk premia. As far as I am aware, lots of people in finance academia still consider this to be the best explanation for "excess volatility" (the time-series part of the EMH anomalies literature). In a different paper from around the same time, Bansal et al. claim that this approach can also explain the cross-section of expected returns.

(Note: As Brad mentions in the comments, Epstein-Zin preferences are different from von Neumann-Morgenstern expected utility. They represent a departure from the standard model of risk preferences, but not from the core idea of the risk-return tradeoff.)

So the idea of explaining asset returns with funky risk preferences is not dead by any means. But this literature does seem to have diverged a bit from the literature on factor models.

As soon as multifactor models like Fama-French started coming out, people pointed out that they weren't microfounded in economic behavior. There was no concrete reason to think that size and value should be associated with higher risk to the marginal investor. EMH-leaning supporters of the models - like Fama himself - waved their hands and suggested that these factors might be connected to the business cycle, and thus possibly to risk preferences. But in the end, it didn't really matter. The models seemed to work - they fit the data, so practitioners started using them.

But since factor models aren't explicitly connected to preferences, there's no reason not to simply treat apparent mispricings as factors in a factor model. Really, the first example of this was "momentum factors". But the new Stambaugh and Yuan paper takes this approach further. From their abstract:
A four-factor model with two "mispricing" factors, in addition to market and size factors, accommodates a large set of anomalies better than notable four- and five-factor alternative models...The mispricing factors aggregate information across 11 prominent anomalies...Investor sentiment predicts the mispricing factors...consistent with a mispricing interpretation and the asymmetry in ease of buying versus shorting. Replacing book-to-market with a single composite mispricing factor produces a better-performing three-factor model.
Stambaugh and Yuan take the "mispricing factors" approach further than in the past, by looking at limits to arbitrage and at investor sentiment. Limits to arbitrage and investor sentiment are microfoundations - they are an explanation of mispricing factors in terms of deeper things in the financial markets. In other words, Stambaugh and Yuan aren't just fitting curves, as the momentum factor people were. This is behavioral finance in action.
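For readers who haven't seen how these factor-model horse races work mechanically: you regress a candidate anomaly portfolio's returns on the proposed factors and ask whether the intercept - the "alpha" - is statistically close to zero. Here's a generic sketch on simulated data (nothing below is the Stambaugh-Yuan data or their factor construction):

```python
# Generic factor-model "horse race" mechanics, on simulated data
# (not the Stambaugh-Yuan factors or their data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 600  # months of fake data

# Pretend these columns are market, size, and two mispricing factors.
factors = rng.normal(scale=0.04, size=(T, 4))
true_loadings = np.array([1.0, 0.3, 0.5, 0.2])
# A test portfolio whose returns are fully explained by the factors.
portfolio = factors @ true_loadings + rng.normal(scale=0.02, size=T)

fit = sm.OLS(portfolio, sm.add_constant(factors)).fit()
print("alpha (should be ~0 if the factors price the portfolio):", round(fit.params[0], 4))
print("alpha t-stat:", round(fit.tvalues[0], 2))
print("estimated loadings:", np.round(fit.params[1:], 2))
```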

Now this doesn't mean that the EMH research project is dead. First of all, Stambaugh and Yuan still have to compete with papers by Bansal and other people working on the EMH research project. Second of all, increased attention to the "mispricing factors", or decreases in the institutional limits to arbitrage, may make them go away in the future. Third, risk-preference-based factors may still coexist with mispricing factors. And fourth, even if the mispricing factors are robust, the EMH is still a great jumping-off point for thinking about financial markets.

So I think the rise of mispricing factors doesn't really signal the death of the EMH research project. What I think it signals is that finance researchers as a group are open-minded and eclectic, unwilling to restrict themselves to a single paradigm. Which I think is a good thing, and something econ people could stand to learn from...

Wednesday, September 09, 2015

Whig vs. Haan

If you want to understand Whig History, just look at the difference between the traditional European and the Disney versions of The Little Mermaid (spoiler alert!). Up until the end, they're pretty much the same - the mermaid dreams of love, and makes a deal with the evil witch, but she fails to get the prince to kiss her, and as a result she forfeits her life to the witch. In the European version, the mermaid dies and turns into sea foam, her dreams dashed. In the American version, however, the mermaid and the prince simply stab the witch in the chest with a broken bowsprit, and everyone lives happily ever after.

I think this difference is no coincidence. Around 1800, history had a structural break. Suddenly, the old Malthusian cycle of boom and bust was broken, and living standards entered a rapid exponential increase that is still going today. No wonder Americans love the Hollywood ending. In an economic sense, that's all we've ever really known. 

So Whig History - the notion that everything gets better and better - overcame Malthusian History. But there's another challenge to historical optimism that's much less easy to overcome. This is the notion that no matter how much better things get, society is fundamentally evil and unfair. 

I know of only one good name for this: the Korean word "Haan". (It's often spelled "Han," but I'll use the double "a" to avoid confusion with the Chinese ethnic group, the Chinese dynasty, and the Korean surname.) Wikipedia defines Haan thus:
Haan is a concept in Korean culture attributed as a unique Korean cultural trait which has resulted from Korea's frequent exposure to invasions by overwhelming foreign powers. [Haan] denotes a collective feeling of oppression and isolation in the face of insurmountable odds (the overcoming of which is beyond the nation's capabilities on its own). It connotes aspects of lament and unavenged injustice. 
The [writer] Suh Nam-dong describes [haan] as a "feeling of unresolved resentment against injustices suffered, a sense of helplessness because of the overwhelming odds against one, a feeling of acute pain in one's guts and bowels, making the whole body writhe and squirm, and an obstinate urge to take revenge and to right the wrong—all these combined."... 
Some scholars theorize the concept of [Haan] evolved from Korea's history of having been invaded by other neighboring nations, such as Han China, the Khitans, the Manchu/Jurchens, the Mongols, and the Japanese.
Though Korean writers claim that Haan is a uniquely and indescribably Korean experience, there seem to be parallels in certain other cultures. A number of Koreans have told me that "Korea is the Ireland of the East," comparing Korea's frequent subjugation to the domination of Ireland by England. 

Now, I am hugely skeptical of cultural essentialism. I doubt Haan is either unique to certain cultures or indelible. In fact, I bet that economic progress will drastically reduce it. There are signs that this is already happening - young Koreans are much, much less antagonistic toward Japan than the older generation.

But in a more general sense, Haan seems to describe an undercurrent of thought that runs through many modern, rich societies. You see it, for example, in leftist resistance to Steven Pinker's thesis that violence has decreased hugely. Pinker brought huge reams of data showing that violent crime and war have been in a long-term decline for centuries now. Leftist critics respond by citing anecdotal examples of war, atrocity, and injustice that still exist.

This seems like a Haan view to me. The idea is that as long as examples of serious violence exist, it's not just incorrect but immoral to celebrate the fact that they are much more rare and generally less severe than in past times. 

Actually, talking about Pinker can often draw out what I think of as Haan attitudes. I was talking about Pinker to a friend of mine, a very sensitive lefty writer type. Instead of citing ISIS or the Iraq War as counterexamples, she talked about the problem of transphobia, and how "trans panic" legal defenses were still being used to excuse the murder of transsexual people. I checked, and this has in fact happened once or twice. My friend presented this as evidence that - contra Pinker - the world isn't really getting better. Injustice anywhere, under Haan thinking, invalidates justice everywhere else.

Another example of Haan is Ta-Nehisi Coates' view of history. The subheading of Coates' epic article, "The Case for Reparations," is this:
Two hundred fifty years of slavery. Ninety years of Jim Crow. Sixty years of separate but equal. Thirty-five years of racist housing policy. Until we reckon with our compounding moral debts, America will never be whole.
Now unless Coates gets to write his own subheadings, he didn't write those words. But they accurately sum up the message of the piece. The idea is that these wrongs against African Americans create a moral debt that needs to be repaid. It's not clear, of course, how the debt could be repaid, or what "reparations" actually would entail. But what's clear is the anti-Whig perspective. Progress does not fix things. The fact that Jim Crow was less horrible than slavery, and that redlining was less horrible than Jim Crow, and that today's housing policy is less horrible than redlining, does not mean that things are getting better. What matters is not just the flow of current injustice, but the stock of past injustices.

Haan presents a vision of stasis that is different from the Malthusian version. By focusing on the accumulated weight of history instead of the current situation, and by focusing on the injustices and atrocities and negative aspects of history, it asserts that the modern age, for all its comforts and liberties and sensitivity, is inherently wrong.

And Haan asserts that the world will remain wrong, until...what? That's usually not clearly specified. For Korean Haan theorists, it's a vague notion of "vengeance." For Coates, it's "reparations". For leftists, it's usually a revolution - a massive social upheaval that will overthrow all aspects of current power, hierarchy, and privilege, and make a new society ex nihilo. The details of that revolution are usually left a bit ambiguous.

But the vagueness and ambiguity of the imagined deliverance doesn't seem to be a big problem for most Haan thinking. What's important seems to be the constant struggle. In a world pervaded and defined by injustice and wrongness, the only true victory is in resistance. Ta-Nehisi Coates expressed this in an open letter to his son, when he wrote: "You are called to struggle, not because it assures you victory but because it assures you an honorable and sane life."

Haan thinking presents a big challenge for Whig thinking.

Whig History didn't have much trouble beating the old Malthusian version of history - after a hundred years of progress, people realized that this time was different. But Haan thinking presents a much bigger challenge, because progress doesn't automatically disprove Haan ideas. Making the world better satisfies Whigs, but doesn't remove the accumulated weight of history that fuels Haan. 

Nor can all instances of injustice be eliminated. It will never be a perfect world, and the better the world gets, the more each case of remaining injustice stands out to an increasingly sensitive populace. One or two cases of "trans panic" murder would barely have merited mention in the America of 1860. But precisely because there has been so much progress - precisely because our world is so much more peaceful and so much more just now than it was then - those cases stick out like a sore thumb now. So Whig progress makes Haan anger easier, by raising people's expectations.

There's also the question: Should Whigs even want to defeat the Haan mentality? After all, if we trust in the inevitability of progress, it may sap our motivation to fight for further progress. Optimism can lead to complacency. So Haan resentment might be the fuel that Whigs need to see our visions fulfilled.

But Haan carries some risks. Massive social revolutions, when they happen, are capable of producing nightmare regimes like the USSR. With a few exceptions, the kind of progress Whigs like is usually achieved by the amelioration of specific ills - either by gradual reform, or by violent action like the Civil War - rather than by a comprehensive revolution that seeks to remake society from scratch. In other words, as one might expect, Whig goals are usually best achieved by Whig ends.

As a character would always say in a video game I used to play, "I am a staunch believer in amelioration."

In any case, I personally like the Whig view of the world, and I want to see it triumph. The idea of a world that gets better and better is appealing on every level. I don't just want to believe in it (though I do believe in it). I want to actually make it happen. And when I make it happen, or when I see it happen, I want to feel good about that. I want to savor the victories of progress, and the expectation of future victories, rather than to be tormented by the weight of unhappy history that can never be undone. I want to be able to think not just about the people around the world who are still suffering from deprivation, violence, and injustice, but also about the people who are no longer suffering from these things.

To me, the Whig view of history and progress is the only acceptable one. But Haan presents a stern challenge to that view - a challenge that Whigs have yet to find a way to overcome.

Update: Thabiti Anyabwile, writing in The Atlantic, says similar things in reference to Coates' writings.

Monday, September 07, 2015

"Loan fairness" as redistribution

I've noticed an interesting desire, especially on the political left, to use loans as a means of redistribution. The idea is that lenders should be willing to make loans to poor people when the risk-return tradeoff is worse than for loans to rich people. This could mean, for example, loaning money to high-default-risk poor borrowers at the same interest rate as to low-default-risk rich borrowers. Or it could mean extending loans to poor people whose perceived default risk would previously have prevented them from getting loans. The notion that this is "fair" - or that lenders "owe" it to poor people to give them favorable lending terms - pervades such works as David Graeber's Debt: The First 5,000 Years.

A more recent example is Cathy O'Neil's recent post on Big Data and disparate impact in lending:
Did you hear about this recent story whereby Facebook just got a patent to measure someone’s creditworthiness by looking at who their friends are and what their credit scores are? The idea is, you are more likely to be able to pay back your loans if the people you’re friends with pay back their loans... 
[This] sounds like an unfair way to distribute loans... 
[In the neoliberal mindset], why would anyone want to loan money to a poor person? That wouldn’t make economic sense. Or, more relevantly, why would anyone not distinguish between a poor person and a rich person before making a loan? That’s the absolute heart of how the big data movement operates. Changing that would be like throwing away money. 
Since every interaction boils down to game theory and strategies for winning, “fairness” doesn’t come into the equation (note, the more equations the better!) of an individual’s striving for more opportunity and more money. Fairness isn’t even definable unless you give context, and context is exactly what this [neoliberal] mindset ignores. 
Here’s how I talk to someone when this subject comes up. I right away distinguish between the goal of the loaner – namely, accuracy and profit – and the goal of the public at large, namely that we have a reasonable financial system that doesn’t exacerbate the current inequalities or send people into debt spirals. This second goal has a lot to do with fairness and definitely pertains broadly to groups of people.
I don't get the random swipe at "equations", but the rest all seems pretty clear, even if it is couched in vague terms like "context", "reasonable", and "pertains broadly to groups of people". The basic idea is simple: Society is more fair when lenders give poor borrowers favorable terms relative to rich borrowers.

Let's think about this idea.

One problem with the idea would be that following it might force lenders to accept negative expected returns, which would drive them into bankruptcy. But let's assume for the moment that this doesn't happen - that lenders can lend to poor people and make lower, but still positive, profit margins overall. Loan "fairness" would then act as a subsidy from lenders to borrowers - a form of redistribution via a tax on loan-making businesses.
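To put rough numbers on the size of that implicit tax, here's some back-of-the-envelope arithmetic - all the figures are made up for illustration:

```python
# Back-of-the-envelope arithmetic with made-up numbers: the implicit subsidy
# from lending to a high-default-risk borrower at a low-risk borrower's rate.

def expected_return(rate, default_prob, recovery=0.0):
    # Expected net return per dollar lent on a simple one-period loan.
    return (1 - default_prob) * (1 + rate) + default_prob * recovery - 1

safe_rate = 0.05    # rate charged to a low-default-risk borrower
p_default = 0.20    # hypothetical default probability for the risky borrower

# Break-even rate on the risky loan: (1 - p)(1 + r) = 1  =>  r = p / (1 - p)
breakeven_rate = p_default / (1 - p_default)

print("expected return at the 'fair' (safe) rate:",
      round(expected_return(safe_rate, p_default), 3))   # negative: a subsidy
print("break-even rate for the risky borrower:", round(breakeven_rate, 3))
```

The gap between the "fair" rate and the break-even rate is the per-dollar subsidy the lender is being asked to eat.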

Another problem would be a more subtle version of the first problem - the implicit "fairness tax" on lenders might reduce the amount that they lend overall, and thus hurt the economy. This would be an example of the "leaky bucket" of taxation, in which we trade efficiency losses for welfare gains.

But let's ignore that issue. Let's think not about efficiency concerns, but only about the fairness of this type of redistribution.

Obviously fairness is a matter of opinion, but there are some things we can clarify. Who are the recipients of "loan fairness" redistribution? Answer: Poor people who ask for loans.

Some poor people ask for loans because they have businesses to start, or for standard consumption-smoothing reasons. If these people are currently subject to borrowing constraints because of asymmetric information - in other words, if they can't get a loan because lenders don't realize they can and will pay it back - then these borrowing constraints will be ameliorated by "loan fairness" redistribution. That seems like a good (and fair) thing to me.

Other poor people ask for loans that they are unlikely to be able to pay back. This might be because they don't realize that their chances of repayment are low. Or it might be because they don't really intend to pay the loans back. Both of these groups of people will benefit from "loan fairness" redistribution.

One effect of implementing "loan fairness" redistribution would be an incentive for more people to join the latter group. Once poor people realize that society's desire for redistribution has given them the opportunity to get loans on more favorable terms, some poor people - it's not clear how many, but more than zero - will certainly take advantage of this by taking out a bunch of loans that they can't or don't intend to pay back.

A final group will be those poor people who don't ask for loans. Some will probably have ideas of morality that tell them to work hard, save money, and "neither a borrower nor a lender be". Others will think it unfair to request loans that they know they are unlikely to pay back. Others will simply not need to borrow that much. These groups of poor people will not benefit from "loan fairness" redistribution, because they will not ask for loans.

This introduces what I see as a source of unfairness. Poor people who are honest, and who refuse to borrow money that they know they can't pay back, will suffer compared to poor people who are dishonest and will just borrow as much as they can without any intention of returning the money. I think one could probably find some evidence of this kind of behavior among poor-country governments that borrow money and then ask for loan "forgiveness".

That seems clearly unfair. But there also seems to be another source of borderline unfairness here. Poor people whose moral values prevent them from asking for loans will be disadvantaged relative to poor people who have no moral problem asking for loans. Morality-based redistribution sounds a little iffy to me in the fairness department.

So purely in terms of the fairness of "loan fairness" redistribution - without even talking about efficiency concerns - I see some big problems with the idea of opportunistically redistributing money to only those poor people who are willing to walk into a lender's office and ask for a loan.

A more intuitively fair method of redistribution might simply be to tax rich people and give the money to poor people. Crazy idea, I know.

Sunday, September 06, 2015

"The Case For Mindless Economics", 10 years on

Ten years ago, two economic theorists, Faruk Gul and Wolfgang Pesendorfer, wrote an essay called "The Case for Mindless Economics". The essay pushes back against the enthusiasm for neuroeconomics and behavioral economics. It's a very interesting read, both for people interested in philosophy of science, and anyone who wants to know how economists think about what they do. (Before you read this post, consider reading the whole essay, because there's lots in there that I gloss over.)

Gul and Pesendorfer don't discount the possibility that neurological and psychological research can be useful in economics. They write:
Neuroeconomics goes beyond the common practice of economists to use psychological insights as inspiration for economic modeling or to take into account experimental evidence that challenges behavioral assumptions of economic models. Neuroeconomics appeals directly to the neuroscience evidence to reject standard economic models or to question economic constructs.
In other words, GP aren't arguing against using neuroscience and psychology to inform economic model-making. What are they arguing against? Two things:

1. The use of neuro/psych findings to support or reject economic models, and

2. The use of neuro/psych to establish new welfare criteria, e.g. happiness.

GP's argument against using neuro/psych to test economic models can basically be summed up by these excerpts:
Standard economics focuses on revealed preference because economic data come in this form. Economic data can — at best — reveal what the agent wants (or has chosen) in a particular situation...The standard approach provides no methods for utilizing non-choice data to calibrate preference parameters. The individual’s coefficient of risk aversion, for example, cannot be identified through a physiological examination; it can only be revealed through choice behavior. If an economist proposes a new theory based on non-choice evidence then either the new theory leads to novel behavioral predictions, in which case it can be tested with revealed preference evidence, or it does not, in which case the modification is vacuous. In standard economics, the testable implications of a theory are its content; once they are identified, the non-choice evidence that motivated a novel theory becomes irrelevant.
I'm not sure I buy the logic of this argument. In general, preemptively throwing away an entire category of evidence seems dangerous to me. Why should economists only validate/reject their models based on choice data?

Here's a concrete example to help explain what I mean. In finance, there is a big and ongoing debate over the reason for Shiller's excess volatility finding - i.e., the finding that market returns are slightly predictable over the long run. Some people say it's due to time-varying risk aversion. Others say that it's due to non-rational expectations. As John Cochrane has pointed out, price data - i.e., choice data - can't distinguish between these explanations. The standard asset pricing equation is of the form p = E[mx], where p is the price, x is the asset's payoff, m is the stochastic discount factor (which comes from preferences), and the expectation E is taken over investors' beliefs. You'll never be able to use price data alone to know whether price movements are due to changes in m or changes in beliefs. To do that, you need additional evidence - direct measures of either preferences, beliefs, or both.
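Here's a toy two-state version of that identification problem, with invented numbers, just to show the mechanics - a "preferences" story and a "beliefs" story that deliver exactly the same price:

```python
# Toy two-state example of the identification problem in p = E[mx]:
# the same price can come from curvature in m (preferences) or from
# distorted probabilities (beliefs), and price data alone can't tell which.
import numpy as np

payoff = np.array([1.2, 0.8])            # asset payoff in two states

# Story A: objective 50/50 beliefs plus a state-dependent discount factor m.
probs_a = np.array([0.5, 0.5])
m_a = np.array([0.85, 1.00])
price = float(np.sum(probs_a * m_a * payoff))

# Story B: a flat discount factor plus pessimistic beliefs, reverse-engineered
# to deliver the identical price.
probs_b = np.array([0.40, 0.60])
m_b = price / float(np.sum(probs_b * payoff))

print("observed price:", price)
print("Story B's flat m:", round(m_b, 4))
print("Story B's implied price:", round(m_b * float(np.sum(probs_b * payoff)), 4))
```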

That's the kind of evidence that psychology might - in principle - be able to provide. For example, suppose psychologists find that most human beings are incapable of forming the kind of expectations that time-varying utility models say they do. That would mean one of two things. It could mean that the economy as a whole behaves qualitatively differently than the individuals who make it up (in physics jargon, that would mean that the representative agent is "emergent"). Or it could mean that time-varying utility models must not be the reason for excess volatility.

So GP might respond something along the lines of: "So? Why do we care?" Of what use would be the knowledge that excess volatility is caused by psychological constraints rather than time-varying utility, if both ideas lead to the same predictions about prices? The answer is: They don't lead to the same predictions, if you expand the data set. For example, suppose you find that survey expectations can predict price movements. What once could be modeled only as randomness now becomes a partially predictable process. You can make some money with that knowledge! All you do is take a bunch of surveys, and place bets based on the results.
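Here's a sketch of what that expanded test looks like in practice, using simulated data purely to show the mechanics - regress subsequent returns on a survey-based expectations measure, and if the relationship is really there, a crude trading signal falls out of it:

```python
# Sketch of the "expand the data set" idea, on simulated data:
# do survey expectations predict subsequent returns?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 400
survey_optimism = rng.normal(size=T)
# Hypothetical world in which high optimism predicts low subsequent returns.
future_returns = -0.02 * survey_optimism + rng.normal(scale=0.05, size=T)

fit = sm.OLS(future_returns, sm.add_constant(survey_optimism)).fit()
print("predictive coefficient:", round(fit.params[1], 4),
      "p-value:", round(fit.pvalues[1], 4))

# A naive trading signal: buy when the survey is pessimistic, sell when euphoric.
signal = np.where(survey_optimism < 0, 1.0, -1.0)
print("mean return of the naive strategy:", round(float((signal * future_returns).mean()), 4))
```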

Are survey responses "economic data"? Are they choice data? That question is a bit academic. What if you could use brain scans to predict market movements?

In other words, I think there's not really any conceptual difference between what GP say psych can be used for ("tak[ing] into account experimental evidence that challenges behavioral assumptions of economic models") and what they say it can't be used for ("appeal[ing] directly to the neuroscience evidence to reject standard economic models or to question economic constructs"). It's all just the same thing - using evidence to create theories that help you predict stuff.

Anyway, GP's second point - that psych/neuro evidence can't provide new welfare criteria - also doesn't make sense to me, in principle. Here, in a nutshell, is their argument:
Welfare analysis for neuroeconomics is a form of social activism; it is a recommendation for someone to change his preferences or for someone in a position of authority to intervene on behalf of someone else. In contrast, welfare economics in the standard economic model is integrated with the model’s positive analysis; it takes agents’ preferences as given and evaluates the performance of economic institutions.
I don't see this distinction at all. To be blunt, all welfare criteria seem fairly arbitrary and made-up to me. Data on choices do not automatically give you a welfare measure - you have to decide how to aggregate those choices. Why simply add up people's utilities with equal weights to get welfare? Why not use the utility of the minimum-utility individual (a Rawlsian welfare function)? Or why not use a Nash welfare function? There seems no objective principle to select from the vast menu of welfare criteria already available. The selection of a welfare criterion thus seems like a matter of opinion - i.e., a normative question, or what GP call "social activism". So why not include happiness among the possible welfare criteria? Why restrict our set of possible welfare criteria to choice-based criteria? I don't see any reason, other than pure tradition and habit.
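To see how little the choice data pin down, here's a tiny example with made-up utilities for three people under two allocations - the standard criteria don't even agree on which allocation is better:

```python
# Two made-up allocations of utility across three people. Different welfare
# criteria - all equally consistent with the underlying choice data - rank
# them differently.
import numpy as np

alloc_a = np.array([3.0, 5.0, 7.0])
alloc_b = np.array([1.0, 6.0, 9.0])

def report(name, welfare_fn):
    wa, wb = welfare_fn(alloc_a), welfare_fn(alloc_b)
    print(f"{name:12s} A={wa:7.1f}  B={wb:7.1f}  -> prefers {'A' if wa > wb else 'B'}")

report("utilitarian", np.sum)   # add up utilities with equal weights
report("Rawlsian", np.min)      # utility of the worst-off person
report("Nash", np.prod)         # product of utilities
```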

So personally, I find the logic of both of GP's main arguments unconvincing. In principle, it seems like psych/neuro data could help choose between models when choice data is insufficient to do so. And in principle, it seems like neuro-based or psych-based welfare criteria are no more arbitrary than choice-based welfare criteria (or any other welfare criteria, like "virtue").

But that's in principle. What about in practice? It's been 10 years since GP's essay, and many more since psychology and neuroscience entered the economist's toolbox. Psychology seems to have made real contributions to certain areas of economics, in particular finance. In general, those contributions have come in the form of generating hypotheses about constraints - for example, attention constraints - rather than by motivating new behavioral assumptions for standard models. In other words, psych ideas have occasionally given economists power to predict real data in ways that standard behavioral models didn't allow. These contributions have been modest overall, but real.

But I can't really think of examples where neuroscience has made much of a successful contribution to economics yet. That might be because neuroscience is still too rudimentary. It might be that it has, and I just haven't heard of it. Or it might be that it's just incredibly hard to map from neuro concepts to econ models. In fact, GP spend much of their essay showing how incredibly hard it is to map from neuro to econ. They are right about this. (And in fact, GP's essay should be required reading for economists, because the difficulty of mapping between disciplines really gets at the heart of what models are and what we can expect them to do.)

Also, in practice, no psychology-based welfare criterion, including happiness, has gained much popular traction as a replacement for traditional utilitarian welfare criteria based on choices. So while welfare is a matter of opinion, most opinion seems to have sided with GP.

All this doesn't mean I think neuroeconomics is doomed to be useless, just that it seems like it's in its very early days. There are a few hints that neuro might be used to select between competing economic models. And the topic of using happiness as a measure of economic success occasionally crops up in the media. But the task of using neuro (and psych) for economics has turned out to be much harder than wild-eyed optimists probably assumed when the fields of neuroeconomics and behavioral economics were conceived.

So I think that while Gul and Pesendorfer didn't make a watertight logical case, their warnings about the difficulty of using neuro evidence for econ have been borne out in practice - so far. Ten years might seem like a long time, but let's see what happens in forty years.

Wednesday, September 02, 2015

RBC as gaslighting

"Say it wasn't you"
- Shaggy

On my last post, I wrote that "RBC gaslighting knows no shame." To which Steve Williamson said "You're a real meany with the poor RBC guys." Which reminds me that it's been a while since I wrote a gratuitous, cruel RBC-bashing post! (Fortunately the "poor RBC guys" all have high-paying jobs, secure legacies, and widespread intellectual respect that sometimes includes Nobel Prizes, so a mean blog post or two from lil' old me is unlikely to cause them any harm.)

Anyway, I used the word "gaslighting", but in case you don't know what it means, here's the def'n:
Gaslighting or gas-lighting is a form of mental abuse in which information is twisted or spun, selectively omitted to favor the abuser, or false information is presented with the intent of making victims doubt their own memory, perception, and sanity.
Basically, this is what Shaggy advises Rikrok to do in the hit song "It Wasn't Me." Rikrok's girlfriend saw him cheating, but Rikrok just keeps repeating his blatantly absurd defense until his girlfriend - presumably - starts to wonder if she's going crazy. Another classic example is the cheating wife in the third episode of Black Mirror.

The basic 1982 RBC model of Kydland and Prescott (which later won them a Nobel) - a complete-markets, representative-agent theory in which productivity shocks, leisure preference shocks, and/or government policy distortions drive business cycles - has never been very good at matching the data. This didn't take long to figure out - a lot of its implications seemed fishy right from the start and required patching. Simple patches, like news shocks, didn't really improve the fit that much. The model isn't very robust to small frictions, either. And one of the main data techniques used in RBC models - the Hodrick-Prescott filter - has been mathematically shown to be very dodgy. Furthermore, the Nobel-winning empirical work of Chris Sims showed that the main policy implication of RBC - that monetary policy can't be used to stabilize the real economy - doesn't hold up.
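For the uninitiated, the HP filter is the data step that produces the "business cycle" component RBC models are judged against. Here's a minimal sketch on a simulated series (not real GDP), illustrating one long-standing complaint - the extracted cycle depends heavily on an arbitrary smoothing parameter:

```python
# The standard RBC data step: detrend log output with the Hodrick-Prescott
# filter. Simulated series here (not real GDP), just to show that the
# extracted "cycle" depends heavily on the smoothing parameter lambda.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
T = 200
trend = 0.005 * np.arange(T)                       # slow deterministic growth
shocks = rng.normal(scale=0.01, size=T).cumsum()   # persistent fluctuations
log_gdp = trend + shocks

for lamb in (6.25, 1600, 129600):   # values commonly used for annual/quarterly/monthly data
    cycle, _ = hpfilter(log_gdp, lamb=lamb)
    print(f"lambda = {lamb:>8}: cycle std = {cycle.std():.4f}")
```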

Now, that doesn't mean RBC is a total failure. There are some cases, as with large oil discoveries, when it sort of looks like it's describing what's going on. And very advanced modifications of basic RBC - labor search models, heterogeneous-agent models, network models, etc. - offer some hope that models that rely on TFP shocks as the main stochastic driver of aggregate volatility may eventually fit the macro data.

But that's not enough for RBC fans! The idea of RBC as one potentially small ingredient of an eventual useful theory of the business cycle is not enough. RBC fans maintain that RBC is the basic workhorse business cycle model.

For example, just last year, Ed Prescott and Ellen McGrattan released a paper claiming that if you just patch basic RBC up with one additional type of capital, it fits the data just fine. As if this were the only empirical problem with RBC, and as if this new type of capital had empirical support!

A 2007 paper by Gomme, Ravikumar and Rupert (which I mentioned in a previous post) refers to RBC as "the standard business-cycle model". As if anyone actually uses it as such!

A 2015 Handbook of Macroeconomics chapter by Valerie Ramey says:
Of course, [the] view [that monetary policy is not an important factor in business cycles] was significantly strengthened by Kydland and Prescott’s (1982) seminal demonstration that business cycles could be explained with technology shocks.
As if any such thing was actually demonstrated!

There are a number of other examples.

This strikes me as a form of gaslighting - RBC fans just blithely repeat, again and again, that the 1982 RBC model was a great empirical success, that it is now the standard model, and that any flaws are easily and simply patched up. They do this without engaging with or even acknowledging the bulk of evidence from the 1990s and early 2000s showing numerous data holes and troubling implications for the model. They don't argue, they just bypass. Eventually, like the victims of gaslighting, skeptical readers may begin to wonder if maybe their reasoning capacity is broken.

Why do RBC fans keep on blithely repeating that RBC was a huge success, needs only minor patches, and is now the standard model? One reason might be a struggle over history. In case you haven't noticed from reading the blogs of Paul Romer, Roger Farmer, Steve Williamson, Simon Wren-Lewis, Robert Waldmann, Brad DeLong, John Cochrane, and Paul Krugman (to name just a few), there is a very contentious debate over whether the macro revolutions of the late 1970s and early 1980s were a good thing or a wrong turn. If RBC was refuted - or relegated to a minor role in more modern theories - it means that the Lucas/Prescott/Sargent revolution looks just a little bit more like a wrong turn. But if RBC sailed on victorious, then that revolution looks like an unmitigated victory for science. We may be through with the past, but the past is not through with us!

Or maybe RBC represents a form of wish fulfillment. If RBC is right, stabilization policy - which, if you believe Hayek, just might be the thin edge of a socialist wedge - is just a "rain dance". Maybe people just really hope that recessions are caused by technological slowdowns, outbreaks of laziness, and/or government meddling.

It could also be a sort of high-level debating tactic. Paul Krugman talks about how Lucas and other "freshwater" economists basically failed to engage with "saltwater" ideas, preferring instead to dismiss them (Prescott and McGrattan's paper does exactly this). Maybe the blithe insistence that RBC is the standard model is simply a dig at a competitor.

Anyway, whatever the reason, it's kind of entertaining to watch. For those who are secure in the knowledge of their own sanity, watching people try to gaslight can be a form of entertainment. And besides...who cares about any of this? It's not like anyone who opposes stabilization policy ever needed an RBC model to back them up.

Monday, August 31, 2015

Non-intuitive Neo-Fisherism

John Cochrane has another excellent post explaining the Neo-Fisherian view of monetary policy. Some key grafs ("graf" is journalism slang for a paragraph):
Why is there so little inflation now? How will a rate rise affect inflation? How can we trust models of the latter that are so wrong on the former? 
Well, why don't we turn to the most utterly standard model for the answers to this question -- the sticky-price intertemporal substitution model. (It's often called "new-Keynesian" but I'm trying to avoid that word since its operation and predictions turn out to be diametrically opposed to anything "Keynesian," as we'll see.) 
The basic simplest [New Keynesian] model makes a sharp and surprising [Neo-Fisherian] prediction... 
I started with the observation that it would be nice if the model we use to analyze the rate rise gave a vaguely plausible description of recent reality. 
The graph shows the Federal Funds rate (green), the 10 year bond rate (red) and core CPI inflation (blue). 
The conventional way of reading this graph is that inflation is unstable, and so needs the Fed to actively adjust rates...When inflation declines a bit, the Fed drives the funds rate down to push inflation back up...When inflation rises a bit, the Fed similarly quickly raises the funds rate. 
That view represents the conventional doctrine, that an interest rate peg is unstable, and will lead quickly to either hyperinflation (Milton Friedman's famous 1968 analysis) or to a deflationary "spiral" or "vortex."... 
But in 2008, interest rates hit zero...The conventional view predicted that the broom will topple. Traditional Keynesians warned that a deflationary "spiral" or "vortex" would break out. Traditional monetarists looked at QE, and warned hyperinflation would break out... 
The amazing thing about the last 7 years in the US and Europe -- and 20 in Japan -- is that nothing happened! After the recession ended, inflation continued its gently downward trend. 
This is monetary economics' Michelson–Morley moment. We set off what were supposed to be atomic bombs -- reserves rose from $50 billion to $3,000 billion, the crucial stabilizer of interest rate movements was stuck, and nothing happened. 
This is a powerful argument, and I think that those who sneer at Neo-Fisherism don't take it seriously enough.

That said, there are some serious caveats. The first is that although the recent American and Japanese experiences with QE are powerful pieces of evidence, they are by no means the only pieces of evidence or the only policy experiments. What about the Volcker disinflation, when Fed interest rate hikes were followed by disinflation? I assume there have been at least one or two similar episodes around the world in the last few decades.

Next, are we sure we want to think about interest rate policy as a series of interest rate pegs, each of which people believe will last forever? In the typical New Keynesian model, people believe something much more complicated - they believe that the Fed sets interest rates according to a Taylor-type rule, and monetary policy changes only cause people to change their beliefs when they represent a regime change - i.e. a change in the rule.
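For concreteness, the kind of rule people are assumed to believe in looks something like the original Taylor (1993) rule - the coefficients and the 2% values below are Taylor's original illustrative choices, not anything the Fed has announced:

```python
# The original Taylor (1993) rule, as a concrete example of the kind of
# interest-rate rule agents are assumed to believe the Fed follows.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal funds rate; all arguments in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Example: 1% inflation and a -2% output gap imply a 1.5% funds rate.
print(taylor_rule(inflation=1.0, output_gap=-2.0))
```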

But the last reason we should be a little wary of the Neo-Fisherian idea is that it goes against our basic partial-equilibrium Marshallian idea of supply and demand.

Our basic supply-and-demand intuition says that demand curves slope down and supply curves slope up. Dump a lot of a commodity on the market, and its price will fall. Start buying up a commodity, and its price will rise.

Neo-Fisherianism goes against this intuition. Suppose the Fed lowers interest rates. Abstracting from banks, reserves, etc., it does this by printing money and using that money to buy bonds from people in the private sector. That increase in demand for bonds makes the price of bonds go up, and since interest rates are inversely related to bond prices, it makes interest rates go down.
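The step in the middle - bond prices and interest rates moving inversely - is just discounting arithmetic. A minimal sketch for a zero-coupon bond:

```python
# The textbook inverse relation behind "bond prices up, interest rates down":
# the price of a zero-coupon bond falls as its yield rises, and vice versa.

def zero_coupon_price(ytm, face=100.0, years=10):
    return face / (1 + ytm) ** years

for y in (0.01, 0.02, 0.03, 0.05):
    print(f"yield {y:.0%}: price {zero_coupon_price(y):6.2f}")
```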

Now, you can write down a model in which this doesn't happen - for example, a model in which Fed money-printing-and-bond-buying stimulates the economy so much that interest rates end up rising instead of falling. But in practice, it looks like the Fed has total control over interest rates (at least, the Federal Funds Rate; let's put aside the question of heterogeneous interest rates).

So when the Fed lowers interest rates, it prints money in order to do so. But in a Neo-Fisherian world, that makes inflation fall - in other words, it makes money more valuable. That's worth repeating: In a Neo-Fisherian world, dumping a ton of new money on the market makes money a more valuable commodity.

That is weird! That totally goes against our Econ 101 intuition! How does dumping money on the market make money more valuable?? Well, it could be one of those weird general equilibrium results, like the "paradox of thrift" or the "paradox of toil". Or it could be because Neo-Fisherians make very strong assumptions about what the fiscal authority does. As Cochrane writes:
One warning. In the above model, the interest rate peg is stable only so long as fiscal policy is solvent. Technically, I assume that fiscal surpluses are enough to pay off government debt at whatever inflation or deflation occurs.  Historically, pegs have fallen apart many times, and always when the government did not have the fiscal resources or fiscal desire to support them. The statement "an interest rate peg is stable" needs this huge asterisk.
This makes sense, and it seems like a good reason to wonder if interest rate policy really is best viewed as a series of pegs. If interest rate pegs historically fall apart because the fiscal authority couldn't do its part in maintaining them, it stands to reason that people wouldn't generally expect the current interest rate target to be permanent. Instead, it might be more reasonable for people to expect something more along the lines of a Taylor-type rule, as in the standard New Keynesian model.
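To be clear about what the Neo-Fisherian steady state is: it's just the Fisher equation, i = r + pi, turned around. If the nominal rate is pegged forever and the real rate is pinned down by non-monetary factors, inflation has nowhere to settle except i - r (a trivial sketch, with a made-up real rate):

```python
# The steady-state Fisher relation behind the Neo-Fisherian claim:
# with the nominal rate i pegged forever and the real rate r fixed by
# non-monetary factors, inflation can only settle at pi = i - r.
real_rate = 1.0  # percent, assumed pinned down by real factors

for peg in (0.0, 2.0, 4.0):
    print(f"peg at {peg:.1f}% -> steady-state inflation {peg - real_rate:+.1f}%")
```

Whether the economy actually converges to that steady state under a peg is exactly what the stability debate - and Cochrane's fiscal caveat - is about.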

Anyway, Neo-Fisherianism continues to be an interesting idea, but I continue to have serious doubts. I want to see international evidence, and evidence with "high" pegs as well as "low" ones, before I believe we've seen a Michelson-Morley moment. I do agree, however, that everyone who still has a standard, Milton Friedman type concept of how monetary policy affects inflation needs to be doing some serious rethinking right now.


In the comments, Steve Williamson writes:
[I]n the VAR evidence, it can be hard to get rid of the "price puzzle." That was called a puzzle because tight monetary policy led to higher prices. Maybe that's not so puzzling.
He points me to this Handbook of Macroeconomics chapter by Valerie Ramey, whose section 3 concerns VAR studies of monetary policy. Ramey describes the Price Puzzle on p. 27:
Another issue that arose during this period was the “Price Puzzle,” a term coined by Eichenbaum (1992) to describe the common result that a contractionary shock to monetary policy appeared to raise the price level in the short-run... 
Christiano, Eichenbaum, and Evans’ 1999 Handbook of Macroeconomics chapter...summarized and explored the implications of many of the 1990 innovations in studying monetary policy shocks. Perhaps the most important message of the chapter was the robustness of the finding that monetary policy shocks, however measured, had significant effects on output. On the other hand, the pesky price puzzle continued to pop up in many specifications.
So the evidence from the 1990s and earlier says that monetary policy works in the classically expected direction (rate hikes lower inflation, rate cuts boost it), but that in the very short term after a policy change, the direction of the effect is often reversed.
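The mechanics behind the studies Ramey surveys look roughly like this: fit a small VAR on macro aggregates and the policy rate, identify the policy shock with some set of assumptions (often a recursive Cholesky ordering), and read off the impulse responses. Here's a bare-bones sketch on simulated placeholder data - real studies use actual output, price, and funds-rate series (often plus commodity prices), and the identification step is where all the fighting happens:

```python
# Rough mechanics of the VAR studies Ramey surveys: fit a small VAR,
# identify the policy shock recursively (Cholesky, policy rate ordered last),
# and inspect the impulse responses. Simulated placeholder data only.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 300
data = pd.DataFrame(
    rng.normal(size=(T, 3)),
    columns=["output_growth", "inflation", "fed_funds_rate"],
)

results = VAR(data).fit(maxlags=4)
irf = results.irf(20)            # orthogonalized IRFs use the Cholesky factor,
                                 # so the column ordering *is* the identification
print(irf.orth_irfs.shape)       # (horizons + 1, n_vars, n_vars)
```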

But if you look at Cochrane's Neo-Fisherian impulse response graph, that's exactly the opposite of what Ramey talks about:

In this graph, a rate hike is followed first by an indeterminate or perhaps slightly negative movement in inflation, and then by a slow convergence of inflation upward toward the new, higher interest rate. Ramey's summary of the evidence is the reverse: a rate hike is followed by an indeterminate or perhaps positive short-run movement in inflation (the Price Puzzle), and then by a longer-term decline in inflation - exactly the opposite of the graph above.

So I still think this is a puzzle for Neo-Fisherism.

Steve also has a post responding to mine. Particularly interesting is the argument that Volcker's rate hikes in the early 1980s actually made the inflation situation worse, and that it was his subsequent rate cuts that whipped inflation. I'm probably more open to that story than most people, but I think there are a number of things about it that are very fishy - for example, the fact that inflation started going down after the rate hikes instead of rising further.

Steve also shows a graph that displays a positive correlation between interest rates and inflation. But this sort of logic would also lead us to believe that going to the doctor is the cause of illness, so I would rather trust the VAR evidence that Ramey cites in the chapter Steve linked to. Of course, I don't trust that VAR evidence all that much either, since it's hard to get credible structural identification on a VAR.
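To make the "structural identification" worry concrete, here is a minimal sketch (my own, using statsmodels and simulated data - not the actual series Ramey uses) of the standard recursive approach: fit a reduced-form VAR, then orthogonalize the shocks with a Cholesky decomposition. Because the orthogonalization depends on the ordering of the variables, the "effect of a policy shock" you read off is only as credible as that ordering assumption.

```python
# Sketch only: simulated data, not Ramey's series. Requires numpy, pandas, statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T = 300

# Made-up "monthly" data with correlated shocks, so the Cholesky ordering matters.
mix = np.array([[1.0, 0.0, 0.0],
                [0.5, 1.0, 0.0],
                [0.3, 0.4, 1.0]])
eps = rng.normal(size=(T, 3)) @ mix.T
data = np.zeros((T, 3))
for t in range(1, T):
    data[t] = 0.5 * data[t - 1] + eps[t]

df = pd.DataFrame(data, columns=["output", "inflation", "fed_funds"])

# Fit the reduced-form VAR and compute orthogonalized (Cholesky) impulse responses.
res = VAR(df).fit(maxlags=4)
irf = res.irf(24)            # responses over 24 periods
orth = irf.orth_irfs         # shape (25, 3, 3): [horizon, responding variable, shock]

# Response of inflation to a "policy" shock under this particular ordering:
print(orth[:, df.columns.get_loc("inflation"), df.columns.get_loc("fed_funds")])

# Because the residuals are correlated, reordering the columns of df changes these
# numbers - that sensitivity to an untestable ordering assumption is the
# identification problem in a nutshell.
```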

As a final random fun note, the Ramey chapter - which appears on the Hoover Institution's website - contains the following footnote on page 25:
Of course, this view was significantly strengthened by Kydland and Prescott’s (1982) seminal demonstration that business cycles could be explained with technology shocks.
LOL. RBC gaslighting knows no shame.

Saturday, August 29, 2015

The macro/micro validity tradeoff

Michael Lind wrote an article recently suggesting that universities abolish the social sciences. He unfairly credits me with the term "mathiness", which of course is Paul Romer's thing. But anyway, I tweeted the article (though I disagree with it pretty strongly), and that provoked an interesting discussion with Ryan Decker.

When economists defend the use of mathematical modeling, they often argue - as Ryan does - that mathematical modeling is good because it makes you lay out your assumptions clearly. If you lay out your assumptions clearly, you can think about how plausible they are (or aren't). But if you hide your assumptions behind a fog of imprecise English words, you can't pin down the assumptions and therefore you can't evaluate their plausibility.

True enough. But here's another thing I've noticed. Many economists insist that the realism of their assumptions is not important - the only important thing is that at the end of the day, the model fits the data of whatever phenomenon it's supposed to be modeling. This is called an "as if" model. For example, maybe individuals don't have rational expectations, but if the economy behaves as if they do, then it's OK to use a rational expectations model.

So I realized that there's a fundamental tradeoff here. The more you insist on fitting the micro data (plausibility), the less you will be able to fit the macro data ("as if" validity). I tried to write about this earlier, but I think this is a cleaner way of putting it: There is a tradeoff between macro validity and micro validity.

How severe is the tradeoff? It depends. For example, in physical chemistry, there's barely any tradeoff at all. If you use more precise quantum mechanics to model a molecule (micro validity), it will only improve your modeling of chemical reactions involving that molecule (macro validity). That's because, as a positivist might say, quantum mechanics really is the thing that is making the chemical reactions happen.

In econ, the tradeoff is often far more severe. For example, Smets-Wouters type macro models fit some aggregate time-series really well, but they rely on a bunch of pretty dodgy assumptions to do it. Another example is the micro/macro conflict over the Frisch elasticity of labor supply.
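(For readers who haven't met it: the Frisch elasticity measures how strongly hours worked respond to the wage, holding the marginal utility of wealth fixed,

$$ \eta^{F} = \left. \frac{\partial \ln h}{\partial \ln w} \right|_{\lambda}, $$

and the conflict is that micro estimates - especially for prime-age workers on the intensive margin - tend to come in well below 1, while many macro calibrations need values of 2 or more to generate realistic fluctuations in aggregate hours.)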

Why is the macro/micro validity tradeoff often so severe in econ? I think this happens when an entire theoretical framework is weak - i.e., when there are basic background assumptions that people don't question or tinker with, that are messing up the models.

For example, suppose our basic model of markets is that prices and quantities are set based purely on norms. People charge - and pay - what their conscience tells them they ought to, and people consume - and produce - the amount of stuff that people think they ought to, in the moral sense. 

Now suppose we want to explain the price and quantity consumed of strawberries. Microeconomists measure people's norms about how much strawberries ought to cost, and how many strawberries people ought to eat. They do surveys, they do experiments, they look for quasi-experimental shifts that might be expected to create shifts in these norms. They get estimates for price and quantity norms. But they can't match the actual prices and quantities of strawberries. Not only that, they can't match other macro facts, like the covariance of strawberry prices with weather in strawberry-growing regions. (A few microeconomists even whisper about discarding the idea of norm-driven prices, but these heretics are harshly ridiculed on blogs and around the drink table at AEA meetings.)

So the macroeconomists take a crack at it. They make up a class of highly mathematical models that involve a lot of complicated odd-sounding mechanisms for the creation of strawberry-related norms. These assumptions don't look plausible at all, and in fact we know that some of them aren't realistic - for example, the macro people assume that weather creates new norms that then spread from person to person, which is something people have never actually observed happening. But anyway, after making these wacky, tortured models, the macro people manage to fit the facts - their models fit the observed patterns of strawberry prices and strawberry consumption, and other facts like the dependence on weather.

Now you get to choose. You can accept the macro models, with all of their weird assumptions, and say "The economy works as if norms spread from the weather", etc. etc. Or you can believe the micro evidence, and argue that the macro people are using implausible assumptions, and frame the facts as "puzzles" - the "strawberry weather premium puzzle" and so on. You have a tradeoff between valuing macro validity and valuing micro validity. 

But the real reason you have this tradeoff is because you have big huge unchallenged assumptions in the background governing your entire model-making process. By focusing on norms you ignore production costs, consumption utility, etc. You can tinker with the silly curve-fitting assumptions in the macro model all you like, but it won't do you any good, because you're using the wrong kind of model in the first place. 

So when we see this kind of tradeoff popping up a lot, I think it's a sign that there are some big deep problems with the modeling framework. 

What kind of big deep problems might there be in business cycle models? Well, people or firms might not have rational expectations. They might not act as price-takers. They might not be very forward-looking. Norms might actually matter a lot. Their preferences might be something very weird that no one has thought of yet. Or several of the above might be true.

But anyway, until we figure out what the heck is, as a positivist might say, really going on in economies, we're going to have to choose between having plausible assumptions and having models that work "as if" they're true.

Saturday, August 22, 2015

A great critique of Rational Expectations

How did I miss this great critique of Rational Expectations? Charles Manski, an econometrician at Northwestern University, published a paper in 2004 in Econometrica looking at the way economists measure expectations. Here is the final working-paper version. Manski spends a lot of his time discussing the possibility of measuring expectations through surveys. But in one section he critiques the idea of Rational Expectations, which is assumed in most economic models. Manski writes:
Suppose that the true state of nature actually is the realization of a random variable distributed P. A decision maker attempting to learn P faces the same inferential problems – identification and induction from finite samples – that empirical economists confront in their research. Whoever one is, decision maker or empirical economist, the inferences that one can logically draw are determined by the available data and the assumptions that one brings to bear. Empirical economists seldom are able to completely learn objective probability distributions of interest, and they often cannot learn much at all. It therefore seems hopelessly optimistic to suppose that, as a rule, expectations are either literally or approximately rational.
Rational Expectations basically says that economic agents behave as if the true model of the economy is the same as the model the economist is currently writing down. But that model includes stochastic processes. And in most situations, it's impossible to pin down the stochastic processes governing the economy - you have to make some guesses. Rational Expectations forces you to assume that economic agents are making all the same guesses you are. That goes way beyond rationality. It is also highly implausible, when you think about it, especially since econometricians themselves will almost always disagree on which guesses are appropriate.
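Spelled out in symbols (my notation, not Manski's), the assumption is that agents' subjective forecasts coincide with the mathematical expectation implied by the true law of motion:

$$ E^{subj}_t[x_{t+1}] = E\left[x_{t+1} \mid \Omega_t\right] \text{ under the true distribution } P, $$

so that forecast errors are unpredictable given the information set $\Omega_t$. Manski's point is that nobody - agent or econometrician - actually knows $P$.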

Manski continues:
I would particularly stress that decision makers and empirical economists alike must contend with the logical unobservability of counterfactual outcomes. Much as economists attempt to infer the returns to schooling from data on schooling choices and outcomes, youth may attempt to learn through observation of the outcomes experienced by family, friends, and others who have made their own past schooling decisions. However, youth cannot observe the outcomes that these people would have experienced had they made other decisions. The possibilities for inference, and the implications for decision making, depend fundamentally on the assumptions that youth maintain about these counterfactual outcomes. 
In other words, economic agents just have no physical way of learning about all of the possible outcomes in an economy that never end up happening. 

Here's a simple example. Suppose I think that if I use pachinko machine A, I'll win with a 51% chance and lose with a 49% chance. And suppose that I think that if I use pachinko machine B, I'll win with a 40% chance and lose with a 60% chance. What do I do? I use pachinko machine A every time. Now suppose that I'm right about the odds of machine A (which I confirm by multiple uses), but wrong about machine B. Suppose that machine B actually has odds of 55% win, 45% lose. I should be using machine B, but I never do, so I never find out that I'm wrong, and I keep making the wrong decision! 
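Here's a toy simulation of that story (my own sketch; the win probabilities are the ones from the example, and the confident priors are an extra assumption, there so that a short unlucky streak on machine A doesn't accidentally push the player onto B):

```python
# Toy version of the pachinko example: a player who always chooses the machine
# she currently believes is best, and only updates beliefs about machines she
# actually plays. Win probabilities are the ones from the post; the priors are
# deliberately confident so early noise doesn't flip the choice.
import random

random.seed(0)

TRUE_WIN_PROB = {"A": 0.51, "B": 0.55}   # machine B is actually better

# Beliefs stored as pseudo-counts: believed wins out of believed plays.
belief_wins = {"A": 510.0, "B": 400.0}
belief_plays = {"A": 1000.0, "B": 1000.0}
times_played = {"A": 0, "B": 0}

def believed_prob(machine):
    return belief_wins[machine] / belief_plays[machine]

for _ in range(100_000):
    choice = max(TRUE_WIN_PROB, key=believed_prob)   # greedy: play the machine believed best
    times_played[choice] += 1
    won = random.random() < TRUE_WIN_PROB[choice]
    # Learning happens only for the machine actually played.
    belief_wins[choice] += won
    belief_plays[choice] += 1

print(times_played)                                  # machine B essentially never gets played
print({m: round(believed_prob(m), 3) for m in "AB"}) # A's belief hugs 0.51; B's stays stuck at 0.40
```

The player pulls the worse lever a hundred thousand times in a row, fully confirmed in her belief about machine A and never corrected in her belief about machine B.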

Now, if there are lots of people playing on lots of machines and we can all observe each other, it's clear that we'll figure out the odds of all the machines. But many economic models are macro models. The macroeconomy can only make one decision at a time. What would have happened if we had stayed on the gold standard in the Great Depression? We can make guesses, but we'll never really know. So this kind of limited knowledge makes Rational Expectations especially difficult to swallow in the context of macro.

Note that a lot of people think that Rational Expectations becomes a better and better assumption as the economy settles down into a long-term steady state. But the pachinko example above shows how this may not be the case, since in the steady state, the decision maker never learns the truth.

So why does everyone and their dog use Rational Expectations? Manski says that, basically, it's because A) it's easy, and B) there's no obviously better alternative:
Why do economists so often assume that they and the decision makers they study share rational expectations? Part of the reason may be the elegant manner in which these assumptions close an economic model. A researcher specifies his own vision of how the economy works, and he assumes that the persons who populate the economy share this vision. This is tidy and self-gratifying. 
Another part of the reason must be the data used in empirical research. As illustrated in Section 2, choice data do not necessarily enable one to infer the expectations that decision makers hold. Hence, researchers who are uncomfortable with rational expectations assumptions can do no better than invoke some other unsubstantiated assumption. Rather than speculate on how expectations actually are formed, they follow convention and assume rational expectations.
I'd add a third, more cynical reason: Rational Expectations can't be challenged on data grounds. If you measure expectations with surveys, people can poke holes not just in your theoretical model, but in the expectations data that you gathered and the econometric methods that you used to extract a signal from it. But if you assume Rational Expectations, they can only poke holes in the model itself. Basically, substituting theoretical assumptions for empirical results makes a model a more hardened target. If it makes the model less able to fit the data at the end of the day, well..."all models are wrong", right?

Anyway, everyone should go read Manski's entire paper. Very interesting stuff, even if it's a decade old.

Thursday, August 20, 2015

Have interest rates actually risen?

The Council of Economic Advisers recently put out a report on the long, steady decline in long-term interest rates over the last two decades. John Cochrane called the report "excellent", and reposts the following graph:

These are government bond yields - they represent the government's cost of borrowing. Steve Williamson, however, notes that the yield on government bonds is not the same thing as the return on capital. He writes:
Some of this discussion seems to work from the assumption that the rate of return on government debt and the rate of return on capital are the same thing...Bernanke appears to think that low real Treasury yields are associated with low rates of return on capital.
Williamson is responding to a quote by Bernanke stating that if (real) interest rates get low enough, investment will eventually be stimulated. Williamson points out the distinction between government borrowing rates and rates of return on capital in order to argue (I think) that pushing down government bond rates will not necessarily induce companies to invest.

In defense of this thesis, Williamson cites a St. Louis Fed report by Paul Gomme, B. Ravikumar, and Peter Rupert, which in turn draws on this 2011 paper by the same authors (though the data series have been updated). Gomme et al. measure what they call the "real return on capital" by dividing an income measure by a measure of book value. Here, via Williamson, is the chart of what they find:

I am not sure whether Gomme et al. are measuring the return on capital correctly here. But I am pretty sure that Williamson is making sort of an error here - or at least overlooking an important distinction. What should matter for business investment is not businesses' return on capital, but the difference between their return on capital and their cost of capital. 

That's just basic corporate finance. If your internal rate of return (basically, your return on capital) is higher than your cost of capital, you buy the capital (i.e. you invest), and you undertake the project. 
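As a concrete (and entirely made-up) illustration of that rule: take a project's expected cash flows, back out the internal rate of return, and invest if it beats the firm's cost of capital - in practice a weighted average of debt and equity costs. None of the numbers below come from the post; they're just there to show the mechanics.

```python
# Textbook invest/don't-invest rule, with invented numbers.

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Internal rate of return via bisection (NPV is decreasing in the rate here)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: pay 100 today, receive 30 a year for five years.
project = [-100, 30, 30, 30, 30, 30]

# Hypothetical cost of capital: a blend of cheap debt and pricier equity (ignoring taxes).
cost_of_debt, cost_of_equity, debt_share = 0.04, 0.09, 0.5
wacc = debt_share * cost_of_debt + (1 - debt_share) * cost_of_equity   # 0.065

print(f"IRR  = {irr(project):.3f}")   # about 0.152
print(f"WACC = {wacc:.3f}")
print("Invest" if irr(project) > wacc else "Pass")
```

The decision turns entirely on the gap between the project's return and the cost of financing it - which is why a series for the return on capital alone can't settle the investment question.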

Gomme et al.'s time series - whether or not it's a good measure of the return on capital - is not a measure of the cost of capital. And if we're comparing government borrowing costs to business borrowing costs, we want to look at the cost of capital.

Now, there are two basic types of capital, equity capital and debt capital. To find the cost of equity capital - which is an opportunity cost - we need a model of risk. But to find the cost of debt capital is easy - it's just the yield on corporate bonds. So here, via FRED, is the nominal yield on Aaa and Baa corporate bonds:

Here we see the same story that we saw in the CEA graph. Nominal borrowing costs for businesses have been falling more-or-less steadily since the mid-80s.

How about real rates? Here are real rates (annual, not monthly like the previous series):

Again, same exact story.

So real corporate borrowing costs have been falling more-or-less steadily for decades, just like government borrowing costs. Gomme et al.'s work on rates of return on capital does not measure a risk premium, and it does not bear on this basic story.

That does, of course, lead to a little bit of a puzzle: If Gomme et al. measure rates of return on capital correctly, then why haven't falling real costs of capital, combined with rising rates of return on capital, led to a business investment boom? Even if the opportunity cost of equity capital (the equity risk premium) went up, wouldn't that just cause an investment boom accompanied by a shift from equity financing to debt financing? For this reason, I suspect that either 1) Gomme et al. have measured the return on capital incorrectly, or 2) basic corporate finance theory doesn't capture what's going on in our economy, or 3) both. (Note: As Robert Waldmann and many others have pointed out, Gomme et al.'s series is an average rate of return, whereas what should matter for investment is the marginal rate. So given that, it's not even clear what kind of conclusions we could draw from Gomme et al.'s time series.)

(Fun random tidbit: While I was looking at Gomme et al.'s 2011 paper, I noticed that the 2011 abstract uses the words "A fairly basic real business cycle model". But the 2007 working paper abstract uses the words "The standard business cycle model" to refer to the same thing. Hehehe. The standard business cycle model, eh? Riiiiight...)

Wednesday, August 19, 2015

Science vs. politics

Ever since Paul Romer went on the attack against what he sees as the politicization of growth theory, there has been a lively Twitter discussion about whether and how politics and science should be combined. Should we try to keep politics out of science? And what does that even mean? Sociology grad student Dan Hirschman challenged me to lay out my thoughts in a blog post, so here it is.

One thing I absolutely don't mean by "separate politics and science" is that scientists should refrain from political activism. I think scientists should definitely be free to engage in any political activism they like. I just think that they should try their best to avoid incorporating their activism into their science. To make an analogy, I think particle physicists should refrain from having sex inside a particle collider (cue SMBC comic!), but that doesn't mean I want particle physicists to be celibate.

Another thing I don't mean by "separate politics and science" is to claim that it is possible to do this 100%. It is inevitable that scientists' political views will sometimes seep into their assessment of the facts. But just because it's inevitable doesn't mean it's desirable. To make an analogy, every desk has some water particles on it, but that doesn't mean you shouldn't dry off your desk if you spill some water on it.

So that's what I don't mean. On to what I do mean.

I'm making a normative statement - I'm telling scientists what I think they ought to do. More specifically, I'm telling them about what they ought to try to do. I'm telling them what I think their objective function ought to be when they do science. In econ-ese, I have preferences over their preferences.

I'm assuming that there's a fundamental difference between factual assessments and desires. They affect each other, sure - no one is totally objective, and people's desires are also shaped by what they think is possible. But they aren't the same thing.

I'm saying that when doing science, people ought to try to ignore their desires and just assess facts. Basically, they should try to be as objective as they possibly can.

To be more precise, I think there ought to be an activity called "science" that consists only of people trying as hard as they can to ignore all desires and just assess facts. 

Now you might ask: "Noah, why do you think there ought to be such an activity?"

Well, I could just reply that it's purely my moral intuition, and as a Humean, I don't need any other justification. In fact, any justification I give will open itself up to questions of "But why?", until I finally just say "Because that's just how I feel", or "Oh come ON!". But just for fun, let me try to explain some of the "good" consequences I think will generally result from people following my science-and-politics norm.

Basically, I think societies where scientists obey this norm will generally be more effective - whatever their goals - than societies that don't. For example, suppose there are two societies, Raccoonia and Wombatistan, and both are suffering from lots of bacterial diseases. Both countries generally subscribe to a religion that says that invisible gnomes cause disease. But Raccoonia is committed to the norm of science that I described above, while in Wombatistan people think that politics and science should be mixed. In Raccoonia, scientists put aside their religion and discover that antibiotics fight bacterial disease, while in Wombatistan, scientists publish papers calling the Raccoonian papers into doubt, and arguing for gnome-based theories. Raccoonia will discover the truth more quickly and manage to save a lot of its people.

WAIT!, you say. Isn't the goal of stopping disease itself a political goal? Well, sure. There's a clear division of labor here: The politicians tell the scientists a goal ("Find the cause of disease!"), and the scientists pursue the goal (actually, the scientists could even assign themselves the goal for political reasons, then try to disregard politics while pursuing it, and they'd still be following my norm). When the scientists go into a "science mode" in which they disregard all political considerations, they are more effective in reaching the goal.

This norm I'm suggesting won't solve all of society's problems, obviously, because that depends on what you think is a problem. If you have bad politics - for example, if you think disease is a just punishment for sins and shouldn't be cured - then all the scientific discoveries in the world won't help you much (I think the Soviets kind of demonstrated this). But whatever your goals, following my norm of science will make you more effective in accomplishing them.

Now, I don't think this norm is universal and overriding. I'm a Humean, not a deontologist - I have no need to establish a priori moral axioms that encompass all situations. I can think of extreme situations where I'd violate this norm. If the Nazis tell you to build a nuke, go ahead and sabotage that project!

But I think that in the long term, the human race benefits from being able to do more things, not fewer. Fundamentally, that's what my science norm is all about - empowering the human species as a whole. Over the long term I trust the human species with power. Your mileage may vary.