Where does identification come from in structural work?

At the Stanford summer camp a couple of months ago, a presenter opened his talk by noting that structural estimation was the gold standard for identification but natural experiments were the platinum standard. I could not agree more. After all, platinum is about the same price per gram as gold but was hyped up by clever marketing during the 70s and 80s to differentiate the discriminating elite from gold, which had become a staple for the middle class.

Yet, more seriously, natural experiments seemingly have an edge in the clean identification of an exogenous shock, so let me ask the question:

Where does identification come from in structural work?

I think the answer is best obtained by considering identification in a natural experiment: there is an observable exogenous shock and we see responses to this shock. But what if we do not directly observe the shock? Should we give up on identification? This is exactly where structural estimation comes in: under certain conditions, we can fill in the gaps with plausible theory to analyze the consequences of explicitly described exogenous, but not directly observable, shocks. That is, exogenous shocks are central to the entire process of structural estimation. These shocks are sometimes called “the data-generating process.”

I am somewhat puzzled by extremists claiming that we should only rely on observable exogenous shocks. Allow me a quick off-topic parenthesis. I wonder how much of modern physics would survive if we required every fundamental particle to be directly observed, or the processes that happen at the core of cosmological objects to be seen. I’m not sure I know anybody who has seen the Big Bang directly, but I’d venture to say it’s well accepted.


These assumptions are precisely where structural estimation shines because, when you think about it, it delivers a joint identification: identification of the hidden exogenous shock and of the economic mechanism, yielding the true equations of how the world works. By contrast, very few natural experiments tell us much about the mechanism, the ‘Why.’

Consider the excellent paper by B. Gipper from the U. of Chicago (here), who examines the consequences of a shock to compensation disclosure, observing that compensation levels go up with greater disclosure. He argues that this is because firms will now try to hire away underpaid CEOs. Certainly possible. Alternatively, I have a paper (here) showing that disclosure could make higher-powered pay a signal of future performance, in turn causing higher pay to compensate for greater risk. Or it could be that matching was inefficient before, and greater transparency has increased the efficiency of the labor market, translating into shared benefits for all.

Any attempt to pick an explanation is speculation and, yet, these explanations have near-opposite policy implications (so it does matter!). In the first explanation, disclosure is largely redistributive. In the second, disclosure is probably bad because it increases signalling costs. In the third, disclosure is good because it makes CEOs work for the company where they are the most useful. So the evidence is certainly good to know, but it has not yet reached a form that can inform policy, simply because its welfare implications remain completely obscure.

I heard the economist U. Malmendier offer a good resolution of the mostly non-constructive debates about which method is the most precious. She notes that the view that natural experiments should be better or worse than structural work presumes that they are substitutes when, more plausibly, they are complements. Nothing prevents a paper built on a natural experiment from using a structural model to recover the details of the theory; in fact, such a step should be necessary, if only as a second step, to fully understand the mechanism, as she has done in her own work (here). And, vice-versa, a theory that is being used in a setting where shocks are unobservable should be validated in a different setting with observable exogenous shocks.

So this is my personal call for future research: let us leverage the institutional settings in which shocks are exogenous and try to identify the mechanisms. It’s a great place to envision structural work.


The many promises of structural estimation in accounting: Will it deliver?

“Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory”
K. Popper (1980), p. 132.

At the 2015 conference of the Journal of Accounting Research, the editors set up a paper and panel discussion about structural estimation. Structural estimation is an estimation procedure that recovers the estimated equations from a formal economic model, often but not always with an explicit reference to utility maximization. Setting up such a panel is courageous, given how little such work exists in accounting to date. The panel included several distinguished empiricists and econometricians (but, surprisingly, no theorist), and generally concluded that it is a worthwhile method to explore.

However, I am afraid that a lack of perspective about structural estimation might lead to the wrong conclusion about its place within the literature, by giving a misleading impression of what the method can deliver. The two econometricians on the panel, Peter Reiss and Christian Hansen, were both very, very clear that structural estimation does not magically grant identification in the sense of a randomized trial or field experiment. Endogeneity is a problem of lack of information in the data, and cannot be addressed with a more sophisticated technique, be it instrumental variables or structural estimation.

The role and place of structural estimation lie much closer to reduced-form papers that provide ‘evidence consistent with a model,’ as in Theory and Evidence papers. By noting that many theoretical predictions appear to be consistent with data, the reader of such a paper gains some confidence that the model correctly reflects the underlying data-generating process; of course, as more risky predictions are satisfied by the data, confidence in the model increases. With this approach, we can never be absolutely sure whether some other endogenous effect might actually be driving the results, but there is, clearly, some revision in the reader’s posterior belief.

Structural estimation works in the same manner, but to a much more extreme degree. The procedure forces researchers to explicitly use all the assumptions made in the theory, including its implied functional forms or moment restrictions. This makes for very, very risky predictions, because structural estimates try to predict not only the directions of effects (as in Theory and Tests) but also their magnitudes. A structural model that explains the data therefore provides a greater degree of confidence in a theory. In this light, structural estimation is a companion to carefully-crafted reduced-form evidence and, indeed, rests on the same scientific approach. Its success in accounting will, very probably, be tied to its ability to connect with the extensive empirical literature, and structural estimation should not be marketed, as I have heard, as a ‘revolution.’

Unfortunately, the novelty of the technique has created a lot of confusion among some who have never used it, do not know much about it and (this appears to be a constant in our area) still feel that they should talk about it. A young panelist at the conference stated that structural estimation is very difficult, takes two years to even estimate, and that any small change in an assumption would take six months to implement; in fact, he noted, the technique will not take off until a PROC STRUC becomes available. Of course, this comment is nonsense and is not based on any survey of the literature in economics, finance or marketing.

First, structural estimation relies on relatively simple strategic interactions and, unlike theory work, does not require a complete solution of a game, or the derivation of its many properties, since the computer will do the work. Most applied structural estimation is done in Matlab and relies on far fewer lines of code or commands than the typical data cleaning in SAS. As to the math, most programs rely on basic matrix algebra, not advanced mathematical concepts, and Matlab provides many optimization routines out of the box, just as SAS or Stata do for statistical analysis. Second, the vast majority of structural models run between an hour and a month, and those based on closed-form moment restrictions are usually at the low end of this range; there are exceptions of course, but such exceptions also exist in reduced-form analysis using, for example, gigantic datasets in physics or computer science. If the estimation takes longer, it can usually be parallelized (with a single extra command) or, for a lazy researcher like me, run by Amazon for an extra fee. Third, many changes, if they do not involve complete generalizations of the model, can be addressed with a simple tweak to the code, as in the case of regression. And, no, there will never be a PROC STRUC that creates a structural model on demand: the point of structural estimation is to adapt the estimation to the institutional and economic context.
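To make the "few lines of code" claim concrete, here is a minimal sketch of moment-based estimation, written in Python rather than Matlab (the logic ports directly). Everything here is illustrative, not drawn from any particular paper: the "model" is simply a normal distribution whose parameters are recovered by matching two sample moments with an off-the-shelf optimizer, the same pattern a more serious structural exercise would follow with richer moment restrictions.

```python
# Illustrative moment-matching estimation: recover (mu, sigma) of a normal
# model by matching the sample mean and uncentered second moment.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=10_000)  # simulated "observed" data

# Empirical moments: mean and uncentered second moment.
m_hat = np.array([data.mean(), (data ** 2).mean()])

def model_moments(theta):
    """Model-implied moments for x ~ Normal(mu, sigma)."""
    mu, sigma = theta
    return np.array([mu, mu ** 2 + sigma ** 2])

def objective(theta):
    g = model_moments(theta) - m_hat  # moment conditions
    return g @ g                      # identity weighting matrix

res = minimize(objective, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x
print(mu_hat, sigma_hat)  # should be near the true values 2.0 and 1.5
```

In a genuine application the moments would come from the economic model's equilibrium conditions and the weighting matrix would be chosen more carefully, but the computational skeleton is no heavier than this.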

So let us say it: structural estimation will not recover all the answers from endogenous data and will not replace reduced-form statistical analysis (or randomized experiments), but it is fresh and has great potential to add to the body of knowledge. It is also nascent in accounting, which means that it is at a point where the barriers to entry are small, because we are writing the very first, simple, such estimations. In my own work, for example, the structural estimation of the Dye model for sporadic (non-sticky) forecasters only requires a few lines of code in Matlab or Stata, essentially solving one non-linear equation, as compared with pages of Stata code to clean up the data. The barrier to entry is coding, not mathematics, and empirical researchers have the right skill set for it, much more so than theorists (though theorists have the motivation, which counts as well). This makes structural estimation a great place to start for any empiricist interested in trying something new and different, opening up an area ripe with the unexplored.
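For readers curious what that "one non-linear equation" can look like, here is a hedged sketch in Python rather than Matlab or Stata. It assumes, purely for illustration, a textbook Dye-style setting (not the estimation from my paper): firm value x is standard normal and the manager is informed with probability q. The nondisclosure threshold t solves t = E[x | nondisclosure], which under these assumptions reduces to t((1−q) + qΦ(t)) + qφ(t) = 0, a single root-finding problem.

```python
# Illustrative Dye-style disclosure threshold with x ~ N(0, 1) and
# probability q that the manager is informed. Nondisclosure pools the
# uninformed with informed managers whose value falls below t.
from scipy.optimize import brentq
from scipy.stats import norm

def disclosure_threshold(q):
    """Solve t = E[x | nondisclosure], i.e. t*((1-q) + q*Phi(t)) + q*phi(t) = 0.

    Uses the fact that, for the standard normal, the partial expectation
    integral of x up to t equals -phi(t).
    """
    f = lambda t: t * ((1 - q) + q * norm.cdf(t)) + q * norm.pdf(t)
    return brentq(f, -10.0, 0.0)  # the threshold lies below the prior mean of 0

print(disclosure_threshold(0.5))  # roughly -0.28
```

Note that the threshold falls as q rises: when the manager is more likely to be informed, silence is interpreted more harshly, and in the limit q → 1 the market unravels toward full disclosure.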

If you are structural-estimation curious, consider the following further readings:

Earnings management and earnings quality: theory and evidence, by A. Beyer, I. Guttman and I. Marinovic
Causal inference in accounting research, by D. Larcker, I. Gow and P. Reiss
Competition in the Audit Market: Policy Implications, by J. Gerakos and C. Syverson
Identifying Accounting Quality, by V. Nikolaev
How often do managers withhold information?, by J. Bertomeu, I. Marinovic and P. Ma
Are Top Management Teams Compensated as Teams? A Structural Approach, by C. Li
Measuring Intentional GAAP Violations: A Structural Approach, by A. Zakolyukina

Why is accounting theory in decline? Knowing the numbers and finding a way forward.


In this first entry, I want to discuss three questions. First, is there a decline of theory? Second, should accountants be worried about it? Third, what is the root cause of the decline and what can we do about it?

As to the first question, the numbers are clear. In the 1985 issue of The Accounting Review, 23 papers were published, of which 6 were theoretical (26%). Compare this to the 2014 issue, in which 69 papers were published, of which 5 were theoretical (7%), fewer than in 1985 despite a tripling of the number of papers published. This pattern is repeated in other journals, with currently about 10% theoretical work, versus 3 theoretical papers out of 9 published (33%) in the first 1979 issue of the Journal of Accounting and Economics, and 6 theoretical papers out of 17 (a whopping 35%) in the 1978 issue of the Journal of Accounting Research.

The second question is more difficult to answer, and I would like to try to answer it while avoiding, as much as possible, dogmatic statements about what research should be. One explanation is that theory has become less useful. If this were true, we should expect a similar trend in our sister areas of economics and finance. In the first 1980 issue of the American Economic Review (AER), 19 papers were published, of which 10 (53%) were theoretical. In the January 2015 issue of the AER, 15 papers were published, of which 11 (73%) involved formal theory. In the first 1980 issue of the Journal of Finance (JF), 13 papers were published, 5 of which were theoretical (38%). In the first 2015 issue of the JF, 12 papers were published, of which 3 (25%) included formal theory.

In a recent piece in the Journal of Financial Reporting, summarizing a panel discussion at the 2015 Jr. Accounting Theory Conference, Qi Chen, Joe Gerakos, Vincent Glode and Dan Taylor document worrying recent trends in accounting vs. finance.


In short, I do not see a decline of theory in economics – although it now often comes bundled with empirical research – and I see some decline in finance, although nothing nearly comparable to the trend in accounting. There is no reason to believe that theory work should be less (or, for that matter, more) useful in accounting than it is in finance or economics.

We should be greatly alarmed by this, and I would like to take the perspective of empirical work. In finance and economics, theory organizes empirical work, identifying the questions to be answered and the theories to be tested, or offering some structure for the data. Accounting is in a bad place: we have lots of observational data but do not know what to do with it, because the theories that would make sense of this data do not exist. Accountants are, therefore, forced to recycle theories from economics that do not fit the details of the institutional setting. As a result, empirical results often do not fit within a precise theoretical narrative and are read as disconnected individual pieces.

As to the root cause of the decline, here are a couple of sentences that you will hear about theory work in accounting. “In theory, the senior people eat their young” (heard at a major conference), “you have to be very good to do theory” (folk advice given to PhD students), “I do theory too, you’re just a modeler” (from an empirical person) or, my personal favorite, “theory is something that only a couple of people do in Minnesota” (an SSRN staffer).

A widespread belief holds that the general decline of theory was organized from the top, by editors, senior reviewers, associations, etc., who have stacked the deck against the next generation. According to this story, some prior generation believed that it had it all figured out, creating no need for papers other than their own and, to make sure of that, restricting the range of topics to the few specialized issues on which they are experts. The story has been used, again and again, as an excuse for researchers giving up on writing new, innovative papers.

This story is plainly wrong and does not stand up to the facts. First, rejection rates are greatest among junior reviewers (yes: I mean you!), not tenured faculty. Assumption-tolerance is an acquired taste: trigger-happy young reviewers will summarily reject a defensible assumption, giving the author no opportunity to respond or ignoring the explanation given in the text. Second, seasoned authors have published work across such a wide range of topics and methods that, unless we accuse them of schizophrenia, the story that ‘X only likes models of type Y’ does not fit. Third, rejection rates are no greater in theory work than in other methodologies, and errors, or published papers widely believed to be bad, do not seem any less common in theoretical papers.

The root cause of the decline of theory is more primitive. The majority of Ph.D.-granting universities do not offer any theory training, and few require a qualifying exam ensuring that graduates can read models in accounting, economics and finance. Often, the “theory” course, taken outside of accounting (frequently in the form of Micro 101), is a supplemental requirement. In the BYU accounting ranking for 2014, out of the top 40 schools in terms of publications, 18 have no theory faculty; out of the next 40 schools, 24 have none. This means that we can (conservatively) estimate that about half of new accounting Ph.D. graduates come out of their program with no functional knowledge of, say, Nash equilibrium or rational expectations. Theorists are concentrated in a few schools that give their students a concentrated dose of theory: Carnegie Mellon, Northwestern, Stanford, Chicago, Wharton, Minnesota, NYU, among others.

I will conclude with various prescriptions:

Theorists: please stop clustering together, as if theory advanced better when isolated from other methodologies.

Empiricists/Experimentalists: curious theorists enjoy and want to contribute to empirical research; the incremental value of a theorist to an all-empirical department is probably greater than that of an additional person with the same methodology.

Ph.D. students: if your school does not offer it, invest in learning how to solve models. You may write models in the future, or use your knowledge to extract subtle testable predictions from the models you read.

Ph.D. administrators: Micro 101, or a corporate finance elective, is not a substitute for a proper theory course. Non-accounting theory courses can be useful, but they do not train students in most of the questions and institutional details relevant to accounting research.