Principles for Planning and Conducting Comparative Effectiveness Research

Authors: Luce, BR; Drummond, MF; Dubois, RW; Neumann, PJ; Jonsson, B; Siebert, U; Schwartz, JS.
Publication: Journal of Comparative Effectiveness Research, September 2012

A group of leading researchers developed a set of 13 best practice principles that could help to ensure more consistency in how comparative effectiveness research (CER) is conducted and applied in real-world situations. These guiding principles were published as part of a peer-reviewed study in the September 2012 issue of the Journal of Comparative Effectiveness Research.

Principles for the Conduct of Comparative Effectiveness Research

To arrive at these best practice principles, the researchers examined existing health technology assessment principles and engaged multiple CER experts and stakeholders for feedback. The researchers acknowledge that “no one study will be able to fully meet all the principles,” but hold that CER should endeavor to fulfill most of them when feasible.

Specifically, the 13 principles are:

  • Study objectives: The objective of a CER study should be meaningful, explicitly stated and relevant for informing important clinical or health care decisions
  • Stakeholders: All relevant stakeholders should, to the extent feasible, be actively engaged, or at least consulted and informed, during key stages of a CER study
  • Perspective: CER studies should address the perspectives of affected decision makers
  • Relevance: From planning to conduct, study relevance should be evaluated in light of decision maker needs
  • Bias and transparency: Attempts should be made to minimize potential bias in CER studies and to conduct them in a transparent manner
  • Broad consideration of alternatives: CER studies should make all reasonable efforts to include the full range of relevant intervention, prevention, delivery, and organizational strategies
  • Outcomes: CER studies should evaluate those clinical, other health-related, and system outcomes most relevant to decision makers
  • Data: CER studies should take advantage of all relevant, available data, including information that becomes available during the course of the study
  • Methods: CER studies should incorporate appropriate methods for assessing outcomes of alternative interventions and intervention strategies
  • Heterogeneity: CER studies should identify and endeavor to evaluate intervention effects across patients, subpopulations, and systems
  • Uncertainty: CER studies should explicitly characterize the uncertainty in key study parameters and outcomes
  • Generalizability: CER studies should consider the generalizability and transferability of study findings across patients, settings, geography, and systems of care
  • Follow-through: CER studies should include a plan for dissemination, implementation and evaluation