A Primer for financial structural econometrics: the 2017 Mitsui Summer School

This week, I was fortunate to attend this year’s University of Michigan summer school on structural econometrics in financial economics, co-organized by Luke Taylor and Toni Whited (LW). There was a very dense web of knowledge to be gained from the camp, especially for structural newbies such as myself, and I would like to share a few of these insights with those of you who might (and should!) attend in future years, whether as an author or a consumer of this literature.

LW start their first lecture with the obvious question (in my own words): why should we even bother with structural? This may seem obvious ex-post, but I have to confess that, in graduate school, this question was never really asked because, duh, structural is about estimating economic models like an economist. But LW offer a more detailed answer by describing three applied benefits of structural work. A structural approach:

  1. Estimates economic primitives, often in the form of institutional or behavioral characteristics that determine a choice. Because we have an explicit model of choice, we can claim that we are drawing on revealed preference, a core axiom of economic analysis.
  2. Provides deep tests of a theory that go beyond a directional prediction. Are the magnitudes of a theory economically significant? Which of several competing theories contributes the most to explaining a phenomenon? Which are suitable proxies to test the theory?
  3. Allows researchers to conduct counterfactuals, such as inferring the potential consequences of a policy change that was NOT observed in the empirical sample (see the sketch after this list).
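
To make the third point concrete, here is a minimal sketch of a counterfactual exercise, using a toy mean-reversion model of my own invention (nothing here comes from the course materials, and every number is made up): estimate a structural primitive by matching a moment under the observed policy, then simulate the model under a policy that never appears in the sample.

```python
import numpy as np

# Hypothetical toy model (mine, not from the course): capital mean-reverts toward
# a target k_star that depends on a tax rate tau; phi is the structural primitive
# (speed of adjustment) to be estimated.  All numbers are made up.

def simulate(phi, tau, periods=500, seed=0):
    rng = np.random.default_rng(seed)
    k_star = 2.0 * (1.0 - tau)          # target capital falls with the tax rate
    k = np.empty(periods)
    k[0] = k_star                       # start at the target so the series is stationary
    for t in range(1, periods):
        k[t] = k[t - 1] + phi * (k_star - k[t - 1]) + 0.05 * rng.standard_normal()
    return k

def autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# "Data": generated under the observed policy tau = 0.30 with true phi = 0.25.
data = simulate(0.25, 0.30, seed=42)

# Estimate phi by matching one moment (the autocorrelation of capital, roughly
# 1 - phi), holding the simulation seed fixed so the objective is smooth in phi.
grid = np.linspace(0.05, 0.60, 200)
fit = [abs(autocorr(simulate(p, 0.30)) - autocorr(data)) for p in grid]
phi_hat = grid[int(np.argmin(fit))]

# Counterfactual: simulate outcomes under a tax rate never observed in the
# sample (tau = 0.10), carrying the estimated primitive phi_hat with us.
counterfactual = simulate(phi_hat, 0.10)
print(f"phi_hat = {phi_hat:.2f}; mean capital: {data.mean():.2f} under tau=0.30 "
      f"vs {counterfactual.mean():.2f} under tau=0.10")
```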

Let’s not hide it: structural work is hard! However, hard (to implement, or to do well) does not mean obscure. In fact, if one were to summarize one big non-technical insight from the course, it is the following:

The quality of structural work lies in making its assumptions, and how it extracts economic insights from the data (i.e., identification), transparent.

This, again, seems obvious ex-post, but it is a template for organizing and reading papers in this area. Because it is new in some fields, I would next like to cover various ways in which this general idea is implemented (of course, LW did not present this as a hard set of rules for any paper – so, reviewers, hold your rejection checklist – but as suggestions for organizing one’s mind on a problem).

Step 1: Features of the data and model. The model picks up on a subset of presumed first-order effects in a setting of interest, but LW strongly recommend, before any fitting exercise, identifying which features of the data are the object of interest and whether these features pin down the parameters of the model. How do we do this? By knowing one’s economic model very, very well: which characteristics of the model (a moment, a distribution, etc.) appear to change the most with the parameters, and which do not? By plotting comparative statics over observable implications of the model, we can see which of these observables are best suited to incorporate in the estimation, as in the sketch below.
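
As an illustration of what such a comparative-statics check might look like, here is a sketch using a made-up two-moment example (the model, moments, and parameter are placeholders of mine, not LW’s): plot each candidate moment against the parameter and keep the ones that actually move.

```python
import numpy as np
import matplotlib.pyplot as plt

# A made-up model (placeholder, not from the course): an observable x whose
# dispersion is governed by a parameter theta.  The point is to see which
# candidate moments actually move with theta before choosing what to target.

def simulated_moments(theta, n=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x = theta * rng.standard_normal(n) + 0.5 * rng.standard_normal(n)
    z = (x - x.mean()) / x.std()
    return np.var(x), np.mean(z ** 3)    # variance vs. skewness

thetas = np.linspace(0.1, 2.0, 40)
variance, skewness = zip(*(simulated_moments(t) for t in thetas))

# The variance rises steeply with theta (informative); the skewness stays flat
# at zero for every theta (uninformative), so it would not help pin theta down.
plt.plot(thetas, variance, label="variance of x")
plt.plot(thetas, skewness, label="skewness of x")
plt.xlabel("parameter theta")
plt.ylabel("simulated moment")
plt.legend()
plt.show()
```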

Step 2: Being conscious of data-model limitations (LW use the phrase “not taking a model too seriously,” but I do not like that phrase because not taking a model seriously is as sinful as not taking data seriously, and it adds obscurity about which parts of the model are serious and which are not). Many data sets used in the social sciences have serious limitations, whether unobserved heterogeneity, alternative economic forces, or variables that are not quite what we think they are, to note a few examples. Building a consistent general model of all these limitations is not feasible at this stage of the science (and, LW argue, might not even be that useful because it would soon turn into a black box). So what do we do? LW suggest using empirical methods that are suited to these limitations. A few examples follow: use GMM or SMM for simpler models, as they are more robust to misspecification and can target specific aspects of interest; control as much as possible for sources of variation outside of the economic model (e.g., scale variables to remove size effects, or run a firm and time fixed effects regression on the variables and extract the residual, as in the sketch below).
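
For the fixed-effects suggestion, here is a sketch of the kind of residualization one might run before computing the moments that enter a GMM/SMM objective; the panel, column names, and numbers are all hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical balanced panel of firm-year investment rates (column names made up).
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "firm": np.repeat(np.arange(50), 10),
    "year": np.tile(np.arange(2008, 2018), 50),
    "inv_rate": rng.normal(0.10, 0.05, 500),
})

# Two-way within transformation: subtract firm and year means and add back the
# grand mean.  In a balanced panel this equals the residual from a regression on
# firm and year fixed effects, sweeping out variation (size, aggregate shocks)
# that sits outside the economic model.
resid = (panel["inv_rate"]
         - panel.groupby("firm")["inv_rate"].transform("mean")
         - panel.groupby("year")["inv_rate"].transform("mean")
         + panel["inv_rate"].mean())

# Moments of the residualized variable are what would then enter the GMM/SMM
# objective, rather than moments of the raw variable.
data_moments = np.array([resid.var(), resid.abs().mean()])
print(data_moments)
```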

Step 3: Recognize that the value of a model is in understanding an economic trade-off. LW spent some time noting a misunderstanding in empirical research about model rejection (noting that this is still an ongoing disagreement with some of their co-authors). To quote them, “all models are bad” and will be rejected (if one tries hard enough!). So we learn little from whether or not we are able to reject a model, and a failed J test is not a deal-breaker. However, we learn a lot from analyzing where the model has failed, because it informs researchers about improvements to the approach. In fact, over-fixing a model by adding more structure to fit better can be counter-productive, in that it turns the model into a black box that no longer makes clear what is or is not explained. LW suggest looking at moments that were not matched well and examining (as clearly as possible) why these aspects are not explained well, as in the sketch below.
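
A sketch of what such a diagnostic might look like (all moment names, values, and standard errors below are invented): line up each targeted data moment against its simulated counterpart at the estimated parameters and flag the large misses, instead of stopping at the overall J statistic.

```python
import numpy as np

# Moment-by-moment diagnostic with invented numbers: which moments does the
# estimated model miss, and by how many standard errors?
names         = ["mean investment", "sd investment", "mean leverage", "sd leverage"]
data_moments  = np.array([0.12, 0.09, 0.25, 0.18])
model_moments = np.array([0.11, 0.10, 0.25, 0.30])   # simulated at theta_hat
std_errors    = np.array([0.01, 0.01, 0.02, 0.02])

for name, d, m, s in zip(names, data_moments, model_moments, std_errors):
    t_stat = (d - m) / s
    flag = "  <-- poorly matched: why?" if abs(t_stat) > 2 else ""
    print(f"{name:16s} data={d:.2f}  model={m:.2f}  t={t_stat:5.1f}{flag}")
```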

Step 4: LW note that this is not possible for every paper or every data set, but they recommend using out-of-sample tests to provide external validity to structural estimates. Disclaimer: I do not like the term “out-of-sample,” because out-of-sample usually refers to a sample that was not used in the estimation (different setting, different periods, etc.), and what they suggest is completely different. Semantics aside, their advice is very, very important for applied researchers. How do we suspend disbelief given some of the big questions answered by structural papers? We go back to the data and look at other implications of the model, seeing whether they match aspects of the data for which the field has strong-enough priors. Did the model do well at matching moments that were not targeted in the estimation? Should we expect the coefficients to go one way or another in particular subsamples? Can we estimate the model in a subsample with an exogenous shock, such that we have a strong prior about how parameters should change? A sketch of the first of these checks follows.
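
Here is a sketch of the first kind of check, with a placeholder model of my own: suppose the estimation targeted the variance of some outcome, so its autocorrelation is a free prediction of the model that can be held up against the data.

```python
import numpy as np

# Non-targeted ("out-of-sample" in LW's sense) check with a made-up AR(1)
# stand-in for the structural model; every number below is hypothetical.

def simulate_outcome(theta, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.standard_normal()
    return x

def autocorr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

theta_hat = 0.60                               # pretend this came out of the estimation
model_series = simulate_outcome(theta_hat)
data_series = simulate_outcome(0.55, seed=7)   # stand-in for the actual data

# The autocorrelation was never targeted, so agreement here is informal
# external validation; a large gap tells us where the model breaks down.
print("non-targeted moment (autocorrelation):",
      f"data = {autocorr(data_series):.2f}, model = {autocorr(model_series):.2f}")
```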

Another place where the summer school was highly successful is in giving a sense of where the literature stands at this point. With increases in computing speed and advances in econometric methods, there is an enormous opportunity to use these techniques in places where this was not possible even a decade ago. LW note that only 2% of theoretical models have been estimated (I think it is a lot less...), and that entire areas are yet to be approached: very little has been done on bankruptcy, household finance, labor finance, financial reporting... and areas such as banking are receiving growing interest.

So, okay, you’re sold; what to do next? Well, first things first: the summer school was intended primarily for Ph.D. students, and I would recommend getting in touch with one of the attendees and keeping an eye out for next year’s announcement and program (here). Beyond the lectures, the summer school had a great learning project that was carried out over the four-day course. It is also useful, before going to the summer school, to gain some working experience with Matlab or another equivalent language. Although LW and their TAs shared their code and helped attendees along with their project, I found it very useful to come with some moderate programming experience to better appreciate the course.
