To build the Tools for Atrocity Prevention resource, we reviewed existing research and (for selected tools) interviewed experienced practitioners. We briefly outline below the methods used for each. Learn more about our methodology in our full report on research methods and procedures.

Research Review

Our approach drew on the practice of “systematic reviews,” which strive “to comprehensively identify, appraise, and synthesize all the relevant studies on a given topic” (Petticrew & Roberts 2006, 19).

While systematic reviews aspire to be comprehensive, our review inevitably falls short of that goal. In particular, it was limited to English-language reports and to studies that could be found via electronic database and web searches.

Our research review methodology entailed three main tasks: (1) searching systematically for relevant studies; (2) summarizing relevant findings from each study; and (3) aggregating findings across studies and assessing the strength of evidence in support of each.

Searching for Relevant Studies

To ensure that the results of our search were relatively comprehensive and unbiased, we developed an explicit, replicable process for identifying and screening studies for inclusion. The first step was a series of keyword searches in five electronic databases for papers and reports published from 1990 through 2022. The searches included terms associated with (1) an atrocity prevention tool (e.g., “peace operations” and “peacekeeping”); and (2) mass atrocities (e.g., “genocide,” “mass killing,” and “crimes against humanity”) or closely related outcomes (e.g., “civil war”). We also used similar keywords to conduct separate online searches for relevant policy reports and working papers, or “gray literature,” from a predetermined list of research organizations.
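As a rough illustration of how such keyword pairs can be combined, the sketch below builds a Boolean search string from one tool-related and one outcome-related keyword group. The term lists and query syntax are assumptions for illustration only; the full-text search strings we actually used are linked later in this section.

```python
# A minimal, illustrative sketch of combining tool-related and outcome-related
# keywords into a Boolean search string. The term lists and query syntax below
# are assumptions for illustration, not the exact strings we used.

TOOL_TERMS = ["peace operations", "peacekeeping"]            # example tool keywords
OUTCOME_TERMS = ["genocide", "mass killing",
                 "crimes against humanity", "civil war"]     # example outcome keywords

def build_search_string(tool_terms, outcome_terms):
    """Join each keyword group with OR, then require one term from each group with AND."""
    tool_clause = " OR ".join(f'"{term}"' for term in tool_terms)
    outcome_clause = " OR ".join(f'"{term}"' for term in outcome_terms)
    return f"({tool_clause}) AND ({outcome_clause})"

print(build_search_string(TOOL_TERMS, OUTCOME_TERMS))
# ("peace operations" OR "peacekeeping") AND ("genocide" OR "mass killing" OR ...)
```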

We screened the titles and abstracts of each candidate study based on five inclusion criteria:

  1. Empirical: All articles needed to base their findings on one or more real-world cases, not solely on theoretical arguments. 

  2. Original: All articles needed to base their findings on original research, not just a review of other literature.

  3. Relevant outcomes: All articles needed to focus on mass atrocities or a closely related outcome, which we defined to include violent conflict, human rights, and violence against civilians.

  4. Specific tools: All articles needed to draw conclusions about the effects of a specific atrocity prevention tool.

  5. No unpublished undergraduate or MA papers: We excluded unpublished manuscripts written to fulfill undergraduate or MA thesis requirements.

View the full-text search strings and learn more about our inclusion criteria.
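To illustrate how the five criteria function together as a screen, the sketch below applies them as a simple filter. The field names and the example record are hypothetical stand-ins for judgments that researchers made while reading titles and abstracts; screening was not an automated procedure.

```python
# A minimal sketch of the five inclusion criteria applied as a screen. The
# boolean fields are hypothetical stand-ins for researchers' judgments,
# not an automated procedure we actually used.

RELEVANT_OUTCOMES = {"mass atrocities", "violent conflict",
                     "human rights", "violence against civilians"}

def passes_screen(record):
    """Return True only if a candidate study meets all five inclusion criteria."""
    return (
        record["is_empirical"]                      # 1. based on one or more real-world cases
        and record["is_original"]                   # 2. original research, not only a literature review
        and record["outcome"] in RELEVANT_OUTCOMES  # 3. mass atrocities or a closely related outcome
        and record["examines_specific_tool"]        # 4. draws conclusions about a specific tool
        and not record["is_student_thesis"]         # 5. not an unpublished undergraduate/MA manuscript
    )

example_record = {
    "is_empirical": True,
    "is_original": True,
    "outcome": "violence against civilians",
    "examines_specific_tool": True,
    "is_student_thesis": False,
}
print(passes_screen(example_record))  # True
```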

Summarizing Research Findings

We summarized relevant findings in a standard way to enable synthesis of similar findings across multiple studies. We recorded two types of findings: (1) the overall or average effect of an atrocity prevention tool on mass atrocities or closely related outcomes (i.e., was the tool associated with higher or lower levels of mass atrocities, or did it have no or mixed effects?); and (2) factors that were associated with greater or lesser effectiveness of the prevention tool. Factors included characteristics of the context in which a tool was used (contextual factors) and the manner in which the tool was designed and implemented (design factors). Two researchers collected this information from each article independently and resolved any discrepancies by referring back to the text and agreeing on a best interpretation.
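The sketch below illustrates one way the two types of findings could be recorded in a structured form for later aggregation. The field names and example values are our own illustration, not the project's actual coding form.

```python
# A minimal sketch of a structured record for a single coded finding. The
# field names and example values are hypothetical illustrations, not the
# project's actual coding form.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    study_id: str                      # identifier for the study being coded
    tool: str                          # atrocity prevention tool examined
    finding_type: str                  # "overall_effect" or "factor"
    outcome: str                       # "mass atrocities" or a closely related outcome
    direction: str                     # "lower", "higher", "none", or "mixed"
    factor: Optional[str] = None       # the contextual or design factor, if finding_type == "factor"
    factor_kind: Optional[str] = None  # "contextual" or "design"
    coder: str = ""                    # researcher who recorded the finding

# Two researchers code the same finding independently; a discrepancy flags
# the finding for resolution by returning to the text of the article.
first = Finding("example_study_1", "peace operations", "overall_effect",
                "mass atrocities", "lower", coder="coder_1")
second = Finding("example_study_1", "peace operations", "overall_effect",
                 "mass atrocities", "none", coder="coder_2")
needs_resolution = first.direction != second.direction  # True
```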

Aggregating Research Findings and Assessing Strength of Evidence

We aggregated similar findings and rated the strength of each finding through a modified “vote counting” procedure. The greater the number of positive findings and the lower the number of negative and null findings, the stronger we considered the evidence. We weighted evidence about effects on mass atrocities more heavily than evidence about effects on closely related outcomes.
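As a rough illustration of a modified vote-counting procedure, the sketch below tallies weighted votes across findings, counting evidence about mass atrocities more heavily than evidence about closely related outcomes. The weights, the treatment of null or mixed findings, and the thresholds for the stronger/moderate/weaker labels are all illustrative assumptions; the rules we actually used are described in the linked materials below.

```python
# A minimal sketch of a modified vote-counting aggregation. The weights and
# thresholds below are illustrative assumptions, not the project's actual rules.

# Each finding is (direction, outcome_type):
#   direction: "positive" (tool associated with fewer atrocities), "negative", or "null"
#   outcome_type: "mass_atrocities" or "related_outcome"
findings = [
    ("positive", "mass_atrocities"),
    ("positive", "related_outcome"),
    ("null", "related_outcome"),
    ("negative", "related_outcome"),
]

# Illustrative weights: evidence about mass atrocities counts more heavily
# than evidence about closely related outcomes.
WEIGHTS = {"mass_atrocities": 2.0, "related_outcome": 1.0}

def vote_count(findings):
    """Add weighted votes for positive findings; subtract for negative and (at half weight) null findings."""
    score = 0.0
    for direction, outcome_type in findings:
        weight = WEIGHTS[outcome_type]
        if direction == "positive":
            score += weight
        elif direction == "negative":
            score -= weight
        else:  # null or mixed findings also count against the rating (illustratively at half weight)
            score -= 0.5 * weight
    return score

score = vote_count(findings)
# Illustrative thresholds for mapping the vote-count score to an evidence label.
label = "stronger" if score >= 3 else "moderate" if score >= 1 else "weaker"
print(score, label)  # 1.5 moderate
```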

Learn more about the vote-counting procedure and the rules we used to transform the vote-count rating into labels of stronger, moderate, and weaker evidence. 

Practitioner Interviews

To complement insights from the research literature, we interviewed experienced practitioners about selected atrocity prevention tools. We developed an initial list of potential respondents who had at least several years of policy or operational experience working on the tool, excluding current government officials. We then expanded the pool of potential respondents by asking each interviewee whom else we should speak with.

In each interview, we asked respondents to assess the average effects of the tool in helping prevent mass atrocities and to identify contextual and design factors that influence the effectiveness of the atrocity prevention tool. This procedure maximized the comparability of the findings from practitioner interviews with those from the research review.

As of April 2024, we have completed practitioner interviews about targeted sanctions and begun practitioner interviews about peace operations. Findings from completed practitioner interviews are reflected in the interactive Tools for Atrocity Prevention website and discussed in short, standalone reports. We expect to report findings from practitioner interviews about additional tools on a rolling basis.

Learn more about the interview protocol for practitioner interviews.