I’ve delayed this long enough and decided I should say a word about the situation in accounting, where meaningless reviews destroy incentives to innovate and often send years of authors’ hard work to the garbage bin. So here is the question: how should you review a theory paper?
I believe I have credibility in answering this question. I received an award from one of the accounting journals. I got this award not for quantity (I don’t review that much for this journal) and not for rejecting papers. I got it for accepting (good) papers and, instead of throwing a paper away and moving on with my own research, putting in a lot of work to understand the mindset of a paper and helping out. I started doing this because, as a struggling postdoc, I got fed up with evil behaviors in which competitive researchers play a zero-sum game and ignore the damage of dishing out rejections that are not commensurate with a paper’s quality.
Ok, so here is the short answer: if you write a review, read at least two recent issues of the journal – all papers, all methods – read them carefully and learn to appreciate the questions and approaches. Do not criticize; keep an open attitude and assume these are all useful contributions. Now, set the bar for accepting within the bottom tercile of this set, across methods. Recognize that it is hard to obtain perfect or even deep answers regardless of the method used, and benchmark a reasonable expectation to this. Once you have done this, consider whether, with sufficient work by you and the authors, the paper has either met this target or may meet it after two revisions. If the answer is yes, then do not reject the paper. When in doubt, always leave the door open for the editor to make the final call.
Few people talk about this, but let me first offer reasons that should NEVER, EVER show up as a primary motive for rejection. Then, I’ll give the three primary reasons why you may, sometimes, err toward rejection. Caveats: (1) this applies only to top journals; (2) these reasons are primary, but there could be secondary comments that are neither necessary nor sufficient for the decision (though they may be useful for revising); (3) even these reasons should be applied with extra caution, maybe not at all, for unusual papers that step outside common paradigms; (4) always apply them with caution: when in doubt, give the authors a chance.
Four reasons not to reject a paper
- Not realistic. The word realistic should never show up in a referee report as a motive for rejection. Stylized models are meant to be unrealistic because their point is to refine a trade-off by assuming away aspects that have nothing to do with the trade-off. It is very, very rare that something assumed away would completely subsume the trade-off and, if it does, this should be shown formally. A realistic model is a bad model, because it would necessarily fail at providing a clean argument. If you don’t believe me, check the most successful theory papers in accounting, from Verrecchia (1983) and Dye (1985) to Kanodia and Lee (1998) or Fischer and Verrecchia (2000), among many, many others. Are these models remotely realistic? Quite the opposite. In fact, ask authors to be realistic and you will get ugly models with lots of things going on and unclear trade-offs.
- Math too hard, model too general. Lazy reviewers will sometimes reject a paper because they don’t want to check that the appendix is correct, or because there may be an error somewhere. A version of this is results that are surprising but require a lot of analysis to get there. This forces authors to restrict their papers to closed-form binary/uniform/normal specifications even when the argument is much more robust. Ideally, a paper should be as general as it can be, as long as this preserves the main result and comparative statics. The tendency of accounting papers to be unnecessarily dumbed down has caused many cases in which an economics paper comes out that solves the same problem slightly more generally (but with no extra insight) and then appropriates the credit. So, don’t do it: if authors have a nice, general, elegant model that they spent years crafting, reward them, don’t punish them. Keep in mind that you are not responsible for errors – yes, you’re not. You are responsible for basic due diligence on the appendix, just as an audit does not check every single transaction. The authors, and only the authors, are responsible for errors.
- Not my framework. A very large portion of bad reviews look like this: you model this, but there is a type of model that looks at this problem that I like (or maybe don’t like, but have seen somewhere), and you don’t use it; therefore, what you are doing must be wrong. This type of approach often dismisses entire branches of the literature by imposing that everyone should adopt a particular framework, even when this framework does not directly speak to the question, would not subsume the insight, and would be very cumbersome to drag along. It also completely neutralizes innovation by rejecting any research that does not follow a particular dominant paradigm. These reviews completely ignore the fact that, in most cases, authors point out that the framework they use is actually well-accepted and useful. What’s going on here? It’s not your job as a reviewer to judge broad frameworks of the literature. If you’re not interested, don’t reject; have the courage to tell the editor that this is not a framework you want to review because you have no interest in it. Don’t ask authors to revise a framework that works well (or the semantics of their framework) because you have a taste against it. In the end, you should always ask yourself whether adding an ingredient to a model, even one that is among plausible first-order effects, really alters the main economic trade-off or intuition; if not, this ingredient should NOT be part of the model. A theoretical model is about illustrating one force in a stylized setting, not about incorporating all first-order and second-order effects into a general messy theory.
- The revision does not obey. Not all reviewers do this, but some have a dominance problem: they think they should impose their will on authors. In revisions, these reviewers will treat anything they wrote in a prior round as the ‘word of truth’ and evaluate whether the author has completed ALL demands in the prior report. As a reviewer, don’t do that. Even for primary concerns, I need to check whether I was correct to ask for something, and it can turn out that the request was not to the benefit of the paper. Authors live in fear of pushing back against even a bad comment because of this attitude of certain reviewers. I am confident enough to admit when I get something wrong, and my role as a reviewer is to make sure this does not contaminate the process or, worse, contaminate the published paper by adding mess to an otherwise clean argument. I have a process for this: if a point is not addressed, it should be stated in the authors’ response letter, and one needs to be very, very careful, as a reviewer, not to repeat the point (which is very bad, intellectually deaf) but to respond to what the authors say in the paper and in the letter, and explain why the concern remains or does not. Ask yourself whether this ingredient of the framework would remove an insight. Ask yourself why you think this other framework is indefensible when many good people use it to generate insight.
The only three reasons to have concerns
It’s not a competition: the field works better when everyone contributes and, ideally, we would not need rejections at all; projects would simply be built up until they meet the objectives for broad circulation. Yet, as the system is designed right now, there are places where an editor would need to know about certain concerns. Whether these concerns are sufficient to advise rejection, or should be left to the authors to address, is not something I can easily answer. What I can do, however, is speak about what I think are the ONLY reasons why a paper might need more work. Caveat: this does not describe the editor’s problem; remember that as a reviewer you are in charge of providing input to the editor, not of making the actual decision.
- No distance between assumptions and results. I’m sure that if I write this, at least some people will start putting it in their reviews, and I’ll start being blamed for rejections. Anyway, I do strongly believe in this. Our work as theorists is not ‘to assume’ but to build bridges from assumptions to insights. If the result is what we assume, then there is no work being done and it all becomes a very matter-of-fact assertion about what the world is. Evaluating the distance is not checking the length of an appendix, so don’t do that. To evaluate it, there is a simple test. Write down the assumptions in words, and write down the result; then try to argue the result away and check how much the formalism has helped in making the argument a tight one that does not confuse a listener. Do this with someone who has not read the model yet and check for flaws in the verbal argument; check whether the work done by the assumptions resolves these verbal logic flaws.
- Model and results have been done before. We can’t repeat the same model and the same trade-off ad infinitum if this amounts to the same insight, even if the model is interesting. Now, an important detail here. We use a model as a building block for many arguments, and that’s fine; we rely on the trade-off to have more things to say, and that’s fine as well. What’s important is to make sure that the same exact result within the same exact model is not already in the literature. And some vague notion that a result is well-known is not okay: the claim must be supported by a clear reference to the result and to a paper that derives that exact result. Say, showing that full information is not optimal is true in many models, but whether the same exact form occurs for the same exact reasons is what we want to check.
- Unreadable mess. Among all the reasons, this is the very worst one. Theory is about explaining things, so what can we do when the paper is incomprehensible? Maybe the assumptions are unclear. Maybe the results are unclear: lots of cases in all directions that never get regrouped, or a lot of algebra that is ambiguous about effects all the way through. I don’t like that at all, as it defeats the reason we do theory. We make strong assumptions to get clean insights, so messy results are evidence that the assumptions are not well-built. A theory is not meant to show that things are complicated in the real world – we know that already – it is meant to show why and when things go one way or the other. We understand there may be cases, because context matters, but it’s important to have a sense of when each case should apply. A big mess where the implication depends on complicated unobservable forces, or where we only end up with a result so limited in scope that it does not answer the question, is not helpful. So, be general if you want, but don’t be so general that you don’t get any result.