AMCP Call for Comment on Addenda/Appendices to AMCP’s Format for Formulary Submissions Version 3.0

The National Pharmaceutical Council (NPC) welcomes the opportunity to comment on the Format addendum on Comparative Effectiveness Research (CER). As a policy research organization dedicated to the advancement of good evidence and science, we are happy to provide additional information or engage in additional dialogue to support this important initiative.

1. Definition

While many formal and informal definitions of CER exist, we suggest using the multi-stakeholder Institute of Medicine (IOM) committee definition. The IOM committee defines CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.”i Simply put, CER provides information to support the right care for the right patient at the right time, in the most appropriate setting.

2. Overview of CER Study Methods and Assessing the Quality of Studies

CER focuses on “real-world” settings and includes both systematic reviews and the generation of new research. In addition to the studies outlined in the Format, there are additional designs and analytic methods that may be employed to answer specific CER questions.

  • Making Informed Decisions: Assessing the Strengths and Weaknesses of Study Designs and Analytic Methods for Comparative Effectiveness. This white paper provides a brief description of various study designs and analytic methods. Each research approach has strengths and limitations related to reliability, the ability to address confounding, and feasibility and practicality. For example, if comparing all available comparators is of greatest interest, observational studies may be the optimal design given the prohibitive costs and sample size required for a multiple-comparator randomized controlled trial. Examples of each study design or analytic technique are provided to demonstrate the use of the described methods in the literature.ii
  • Observational Study Quality. The GRACE Principles address good practices for the design, conduct, analysis and reporting of observational studies.iii,iv GRACE 2.0 is developing and validating a quantitative tool that can be applied to evaluate the quality of an observational CER study. Development has included input from an advisory panel of 11 experts. To date, the tool has been through two rounds of review, with 180 assessments completed by 113 raters.
  • Pragmatic Trial Quality. Pragmatic clinical trials offer useful information to guide payer decisions. Unlike registration trials, which aim to determine whether a clinical intervention is effective under optimal circumstances, pragmatic trials seek to determine benefits as they occur in routine clinical practice, for example by enrolling a broader range of patients (relaxing the inclusion/exclusion criteria traditionally found in RCTs) or by using community-based study sites. We suggest that resources such as PRECIS (the Pragmatic Explanatory Continuum Indicator Summary)v or the CONSORT extension for pragmatic trialsvi be considered as supplementary materials in the Format.
  • Appendix of Good Research Practices. The AMCP/ISPOR/NPC CER Collaborative may offer additional insights into how best to evaluate and identify high-quality outcomes research for individual types of observational studies (including prospective, retrospective, modeling and indirect treatment comparison studies). While this initiative is still underway, the Appendix by Dreyer et al. may provide a useful reference of existing good research practices for various types of CER.vii

3. Resources for Analyzing and Interpreting CER

  • AMCP/ISPOR/NPC CER Collaborative. The AMCP/ISPOR/NPC CER Collaborative Initiative seeks to provide greater uniformity and transparency in the use and evaluation of outcomes research information, including observational studies, for coverage and health care decision making. It will do so by providing a user-friendly toolkit to help decision makers navigate the various types of observational study methods and synthesize various types of evidence. Given the multi-organization collaboration and the expertise among the three organizations, we suggest that the toolkit be considered a best practice for analyzing and interpreting CER and be incorporated into the addendum when it is available.
  • Demystifying Comparative Effectiveness Research: A Case Study Learning Guide. This guide provides a reference for evaluating three prominent types of CER (randomized controlled trials, meta-analyses, and observational studies). Using checklists designed for CER consumers, the guide walks through three questions to understand, interpret, and consider CER studies: 1) For whom are the findings applicable? 2) Are there any aspects of the study design that might greatly affect the results? 3) Are the findings likely to change with new research? Both an executive summary and a full report are available.viii,ix

4. Incorporating Population and Individual Treatment Effects in Analysis and Reporting

Not all patients respond in the same way to the same treatment, so the best treatment for an individual patient may vary. In fact, what is best on “average” may be highly effective for many patients, less effective for some, and may not work at all for others. While subpopulations and heterogeneity are noted in Section 2.2.1 Disease Description and Appendix E, there are few other opportunities to provide evidence on individual treatment effects in the Format. We suggest that this be considered in future editions of the Format.

5. Impact on Innovation

To ensure that CER informs appropriate decision making and helps improve the quality of health care, the Academy and the Format committee should consider the impact that CER may have on medical innovation. Practices for how CER will be conducted, including the length and size of trials and the identification of appropriate comparators and endpoints of interest, as well as how CER will be considered by the payer community, may create both incentives and disincentives for medical innovation.

NPC looks forward to participating in the dialogue to advance the development of good evidence and science for formulary decision-making, and to future collaborations with the Academy and the Format committee.

Best Regards,
Robert W. Dubois, MD, PhD
Chief Science Officer

i IOM (Institute of Medicine). 2009. Initial National Priorities for Comparative Effectiveness Research. Washington DC: The National Academies Press.
ii Velengtas P, Mohr P, Messner DA. Making Informed Decisions: Assessing the Strengths and Weaknesses of Study Designs and Analytic Methods for Comparative Effectiveness Research. February 2012. Available at [
iii Good ReseArch for Comparative Effectiveness. GRACE. Available at []
iv Dreyer NA, Schneeweiss S, McNeil BJ, et al. GRACE Principles: Recognizing High-Quality Observational Studies of Comparative Effectiveness. Am J Manag Care. 2010;16(6):467-471.
v Thorpe KE, Zwarenstein M, Oxman AD, Treweek S, Furberg CD, Altman DG, et al. A pragmatic-explanatory continuum indicator summary (PRECIS): a tool to help trial designers. Journal of Clinical Epidemiology 2009;62:464-475.
vi Zwarenstein M, Treweek S, Gagnier JJ, Altman DG, Tunis S, Haynes B, et al; CONSORT group. Pragmatic trials in healthcare (Practice) group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ 2008;337:a2390.
vii Dreyer NA, Tunis SR, Berger M, Ollendorf D, Mattox P, Gliklich R. Why observational studies should be among the tools used in comparative effectiveness research. Health Aff (Millwood). 2010;29(10):1818-25. Appendix Table 1. Exemplary Guidelines and Good Practices for Comparative Effectiveness Research Evidence. Available at []
viii Dubois RW, Kindermann SL. Demystifying Comparative Effectiveness Research: A Case Study Learning Guide. November 2009. Available at []
ix Dubois RW, Kindermann SL. Demystifying Comparative Effectiveness Research: A Case Study Learning Guide Executive Summary. November 2009. Available at [