Uncovering JAPA

A Framework for Systematizing Rigor in Planning Research

Any researcher who has approached a new topic is familiar with the literature review, a type of article that gathers relevant studies and categorizes or assesses them based on different metrics or benchmarks. By compiling, synthesizing, and assessing findings on a given topic, literature reviews are a critical component of inquiry in the social sciences. They facilitate informed decision-making among practitioners and policy-makers by taking a critical stance on the existing body of literature and making a judgment on the overall quality of the evidence presented.

Quantitative Rigor in Urban Planning

Traditional literature reviews are narrative and largely qualitative, but many disciplines have developed more quantitative ways of testing rigor in reviews. Such methods, however, have "yet to be incorporated into urban planning," observe Léa Ravensbergen and Ahmed El-Geneidy in the viewpoint article "Toward Evidence-Based Urban Planning: Integrating Quality Assessments in Literature Reviews" (Journal of the American Planning Association, Vol. 89, No. 3).

Borrowing tools commonly used in public health literature reviews and offering examples of how they might be adapted for planning research, the authors present a "starting point" for incorporating more rigor into reviews, in the context of an ongoing "move toward a more evidence-based planning approach that many planning authorities have begun adopting in recent years."

In the main part of the article, Ravensbergen and El-Geneidy describe two common tools and provide examples of how they can be used in planning articles, specifically focusing on survey-based approaches:

  1. The Risk of Bias Assessment (RoB) is used to assess the risk that findings are biased, based on an article's methodology section, though the authors write, "It is surprising how infrequently we have seen RoBs used in our discipline." Though some types of bias are not relevant to planning studies — such as those found in clinical trials — the authors present an adapted tool with "guiding questions" that researchers can respond to, which can then be used to outline potential bias in surveys.
  2. The Evaluation of Certainty of Evidence (ECE) allows authors of literature reviews "to state how confident they are about the evidence as a whole." A higher ECE grade on a topic denotes a greater number of high-quality studies demonstrating a given relationship. The overall ECE grade represents "the final level of certainty," based on all of the constituent outcomes or findings.

Table 3: An example of how an ECE grade might be used in an urban planning literature review, based on a 2021 review of the relationship between active transport and physical activity.

Bridging Theory and Practice in Planning

The article's premise — the presence of, or need for, traditional empirical 'rigor' in planning research — is intriguing and timely, and it testifies to the substantive and methodological breadth of planning research: it can run the gamut from large-sample quantitative studies to individual case studies that feel almost ethnographic.

Without a careful, non-prescriptive approach, efforts to introduce empirical rigor into studies can lead to odd applications of incongruous or irrelevant theoretical frameworks. Thankfully, the authors don't prioritize some kinds of research over others, accounting for the fact that "urban planners make use of a wide array of research tools" that might draw from "different epistemological, ontological, and methodological foundations."

For the RoB metric, for example, the authors concede that some quality assessment tools for qualitative research might miss the point, as "different framings result in different ways to assess rigor." Similarly, for the ECE grade, the authors eschew traditional sample size benchmarks, which can bias against small studies, "many of which explore new and important topics."

The article fits into the long-standing debate within planning about the relationship between theory and practice, and the authors' proposals offer, in their words, "a starting point" to bridge that gap. These debates also raise fundamental questions about the role of the planner: where planners get their information and on what basis — knowledge, intuition, evidence, expertise — they make decisions. In this regard, I was especially intrigued by one benefit the authors mention: that rigor in planning reviews can lead to better interdisciplinary collaboration.

Planning by nature implicates many other disciplines and domains of intervention in the built environment, politics, and society. Though not all measures of rigor are relevant to planning, we should be mindful of how our work is presented to diverse audiences.

By making our research accessible and compelling to a variety of groups, planning can enhance its position and influence in important discussions. However, while interventions should always be supported by some empirical foundation, we should remember that the power of planning also lies in a high-level, holistic understanding of interconnected spatial processes, a healthy dose of ethics and an eye for equity, and good old judgment and discretion.

Top image: iStock/Getty Images Plus - gevende


About the author
Akiva Blander is a recent graduate of the Master in Urban Planning program at Harvard University.

June 8, 2023

By Akiva Blander