Well-designed studies will help researchers and other stakeholders draw meaningful conclusions about which interventions are the most effective treatment options for specific conditions and patients.
By contrast, poorly designed research carries a cost. In the comparative effectiveness arena, faulty studies could lead consumers to make important treatment decisions based on incorrect data. To give patients the best chance to improve their health, it is crucial that they and their health care providers have sound evidence in hand to guide medical decisions.
A number of organizations, including the National Pharmaceutical Council (NPC), have worked to identify best practices and standards for collecting and analyzing real-world clinical experience evidence. Among these efforts:
- The Comparative Effectiveness Research Collaborative, composed of the Academy of Managed Care Pharmacy, the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and NPC, developed a questionnaire and online tool to assist decision-makers in assessing the relevance and credibility of prospective and retrospective real-world studies.
- The Good ReseArch for Comparative Effectiveness (GRACE) Principles and the 11-item GRACE Checklist were developed to help researchers and others follow good practices in the design, conduct, analysis and reporting of observational studies.
- A series of papers in the May 2012 issue of the Journal of Comparative Effectiveness Research explored the need for better resources. To develop the series, NPC teamed up with the Center for Medical Technology Policy and Outcome, a Quintiles Company, to examine the need for a translation table that researchers can use to determine the best approach to studying a given question. These three groups also developed Making Informed Decisions: Assessing the Strengths and Weaknesses of Study Designs and Analytic Methods for Comparative Effectiveness Research, a booklet that describes both experimental and nonexperimental study designs and methods that may be used to address comparative effectiveness research (CER) study questions.
Despite the increased availability of such standards, researchers and other stakeholders have reached little agreement on the best methods for conducting analyses using real-world clinical experience evidence.
- A study conducted by NPC and others compared and contrasted nine existing guidelines and standards for analyzing clinical experience evidence. It found that, at a high level, there is general agreement on the basic elements required; however, the guidelines and standards vary in how they recommend those elements be carried out.
- The research found that a study following one of these nine guidelines may subsequently be judged deficient when measured against a different set of good practices, creating disparities in what is considered credible research and, ultimately, in what evidence is used to guide care.
- To encourage the flow of high-quality research from those generating real-world clinical experience evidence to the health care decision-makers who apply it, the paper suggests that a common set of agreed-upon standards and guidelines is needed.
Frameworks exist for determining when studies from real-world clinical experience are useful, but they require adaptation for use in the regulatory environment.
NPC conducted several studies that examined, developed and tested frameworks for considering research for use in health care decision-making:
- Incorporating Stakeholder Perspectives in Developing a Translation Table Framework for Comparative Effectiveness Research used a stakeholder-driven process to understand the factors that should be considered when determining which types of research (e.g., real-world evidence or randomized controlled trials) are most appropriate to answer a variety of clinical research questions.
- When is the Evidence Enough? Identifying the Factors Associated with Adoption of Evidence for Decision-Making developed a conceptual framework to identify when research is appropriate for adoption in health care treatment decisions. Factors identified included the validity, reliability and maturity of the science; communication of the science; economic drivers; and patients’ and physicians’ ability to apply findings to clinical needs. The factors that drive the adoption of evidence in practice may point to opportunities for regulatory science to provide critical information for care decisions.
- Fit for Purpose: Developing a Framework for Tradeoffs Amongst Randomized Controlled Trials and Real-World Evidence identified key considerations for determining when different types of evidence meet payers’ needs and tested those considerations with payers. Payers sought real-world clinical experience evidence based on the outcomes of interest, care coordination, quality measures, and treatment differences among specific patient populations.
As more resources like these are developed, all stakeholders will be better equipped to apply meaningful criteria and standards in designing and evaluating research. Only well-designed studies will help researchers and other stakeholders draw meaningful conclusions about which interventions are the most effective treatment options for specific conditions and patients.