Guiding Practices for Patient-Centered Value Assessment

As our health care system continues to evolve from a volume-based system toward a value-based one, there is increasing interest in assessing value for all components of health care. Toward that end, a number of value assessment tools have emerged over the past year, and more may emerge in the future. Value assessment tools are one of many important inputs to complex treatment decisions. They can have considerable impact on patients, whether used by patients and their doctors as a shared decision-making tool or by payers to make coverage and reimbursement decisions, so maintaining patient-centricity in the assessment process is critical. Furthermore, assessment processes should not unduly delay patient access to innovation. Because this is a new and evolving area, it is important to establish good practices that guide meaningful value assessments.

Value encompasses the balance of benefits and costs experienced by patients and society over time. There is no single answer to a value assessment: the results will depend on the evidence, methods, models, and assumptions underlying it. Sensitivity analyses will produce a range of possible results, and varying the weights to reflect the preferences and circumstances of the individual user (e.g., patient or payer) will vary the results further. Assessments should value continued scientific and medical progress by accounting for personalized medicine, the step-wise nature of progress, and the inherent value of innovation. Establishing good practices to guide value assessments can help ensure they are effective tools that support value in patient care and outcomes, rather than well-intentioned but flawed tools that impede it.

The National Pharmaceutical Council has developed Guiding Practices for Patient-Centered Value Assessment, and an accompanying infographic, comprising 28 specific elements broken out into six key aspects of value assessments: the assessment process, methodology, benefits, costs, evidence, and dissemination and utilization. Guiding practices for budget impact assessment are outlined separately, as budget impact is not a measure of value.

Assessment Process

I.  Proposed assessment topic, process, and timelines should be announced in advance to enable stakeholder participation and feedback. Announcing assessment plans in advance provides interested stakeholders with ample opportunity to set aside needed resources to provide input into upcoming assessments.

II.  Interested stakeholders should be involved in the assessment process to represent all perspectives.1 2 Requesting comments from interested stakeholders at key points in the assessment process, such as the release of a draft report, ensures all perspectives are considered and provides the opportunity to fully vet the assessment. Provider and patient perspectives are especially important.

III.  The scope of an assessment should be defined a priori and incorporate stakeholder input.3 4 Requesting comments from interested stakeholders on draft key questions and scope prior to beginning an assessment ensures all perspectives are considered and provides the opportunity to fully vet the planned scope and questions, and refine them where indicated.

IV.  Public comment periods should be included, with sufficient time to review materials and submit comments, and with transparency around how comments are addressed by the convening body.5 6 Allowing sufficient time for interested stakeholders to review materials and prepare comments ensures that stakeholders are able to thoughtfully and comprehensively respond to the comment request. Providing transparency around how comments are addressed builds credibility and trust in the process.

V.  Assessments should be regularly reviewed and updated to keep pace with and account for medical innovation. There should be a continuous open process for stakeholders to request a timely review of an assessment to account for new technology or other changes in the evidence base.7 Changes in technology and the evidence base can cause an assessment to become outdated, and those outdated results could adversely impact patient care and outcomes. Having a regular review cycle, along with a process for requesting an updated review when indicated, can ensure assessment results remain current and provide the timeliest information to guide shared decision-making and patient care.

VI.  Sufficient time, staff and resources should be dedicated to support a thorough and robust assessment process. Considerable infrastructure and resources are needed to support a thorough and robust assessment process. Attempting to conduct assessments without sufficient time, staff and resources can lead to assessments of lesser quality which could adversely impact patient care and outcomes.

Methodology

VII.  Value assessments should focus broadly on all aspects of the healthcare system, not just on medications.8 9 10 Focusing on one component of an interconnected system does not provide a complete perspective on the system. Medications are only one component of the healthcare system, and focusing on them alone, while excluding the rest of the system (e.g., procedures, diagnostic tests, hospitalizations, office visits), will result in an incomplete assessment.

VIII.  Methods should be based on established health economic methodologies, consistent with accepted standards. Health economic assessment is a complex and sophisticated undertaking, and its methods have been shaped by many bodies of work and years of debate. Following accepted methodological standards (e.g., ISPOR Good Practices, Cochrane)11 12 13 14 is necessary to produce a meaningful and credible assessment of value.

IX.  Methods, models, and assumptions should be transparent and assessment results should be reproducible. To build credibility and trust in an assessment, the methods, models (including all calculations), and assumptions included in the assessment should be transparent to interested stakeholders,15 16 and they should be able to reproduce the assessment results on their own.

X.  Base case assumptions must represent reality.17 As the base case is the underpinning for all assessment results, it is critical that the assumptions inherent in the base case are realistic and accurate. Value assessment includes many assumptions, and these assumptions will drive the final results; unrealistic assumptions will drive unrealistic results.

XI.  Sensitivity analyses should be performed, taking into account input from external stakeholders. Where sensitivity analyses result in material changes to the interpretation of the results, a focused discussion should be included.18 19 Performing sensitivity analyses around key assumptions will identify how results could vary in differing scenarios, and will generate a range of potential results. The implications for the user may vary across this range, so clear guidance will be needed to help them understand which assumptions are driving the differences and why.
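The mechanics of a one-way sensitivity analysis can be illustrated with a small sketch. All numbers below are invented, and cost per QALY is used only as one common example of an assessment output; a real analysis would draw its parameter ranges from the evidence base and stakeholder input, as practice XI recommends.

```python
# Illustrative one-way sensitivity analysis (hypothetical numbers only).

def cost_per_qaly(drug_cost, offset, qalys_gained):
    """Net cost per quality-adjusted life year gained."""
    return (drug_cost - offset) / qalys_gained

# Base case assumptions (invented for illustration).
BASE = {"drug_cost": 50_000, "offset": 12_000, "qalys_gained": 0.8}

# Vary each parameter across a plausible low/high range,
# holding the other parameters at their base-case values.
RANGES = {
    "drug_cost": (40_000, 60_000),
    "offset": (8_000, 20_000),
    "qalys_gained": (0.5, 1.2),
}

for param, (low, high) in RANGES.items():
    results = [cost_per_qaly(**dict(BASE, **{param: value}))
               for value in (low, high)]
    # Reporting the range shows which assumptions drive the result.
    print(f"{param}: ${min(results):,.0f} to ${max(results):,.0f} per QALY")
```

Walking the user through which parameter produces the widest range is exactly the "focused discussion" the practice calls for.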

XII.  Weights should be included to accommodate varying user preferences. The user should be able to adjust the assessment assumptions and parameters to accommodate individual preferences for different outcomes and factors (e.g., patient preferences for clinical benefit vs. side effects) and make adjustments to represent different scenarios (e.g., payer ability to vary the population).
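A minimal sketch of what user-adjustable weights could look like in practice. The outcome scores and weights here are invented for illustration; a real assessment would elicit both through formal preference methods.

```python
# Hypothetical sketch: re-weighting the same outcome scores to reflect
# different user preferences. All values are invented for illustration.

# Normalized outcome scores for a hypothetical treatment (0 = worst, 1 = best).
SCORES = {"clinical_benefit": 0.8, "tolerability": 0.5, "convenience": 0.9}

def weighted_score(weights):
    """Overall score under a given user's weights (weights must sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(SCORES[outcome] * w for outcome, w in weights.items())

# Two users with different priorities reach different overall scores
# from exactly the same underlying evidence.
prioritizes_tolerability = {"clinical_benefit": 0.2, "tolerability": 0.6, "convenience": 0.2}
prioritizes_benefit = {"clinical_benefit": 0.7, "tolerability": 0.2, "convenience": 0.1}

print(f"{weighted_score(prioritizes_tolerability):.2f}")  # 0.64
print(f"{weighted_score(prioritizes_benefit):.2f}")       # 0.75
```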

Benefits

XIII.  The measurement of value should include a broad array of factors that are important to patients and society.20 21 Patients and society value a variety of factors such as survival, quality of life, the ability to participate in daily activities, caregiver burden, worker productivity, short-term disability, unmet need for diseases with limited or no treatments, burden of disease, and innovation. Not including these factors in a value assessment provides an incomplete picture of a treatment’s value.

XIV.  Clinical benefits and harms should be incorporated in a manner that recognizes the heterogeneity of treatment effect rather than the average response.22 Patients respond to treatments differently. Building flexibility into an assessment to account for this heterogeneity can make the assessment more meaningful for the full spectrum of patients.

XV.  The time horizon for value should be long-term, ideally lifetime.23 Many of the benefits of treatments, such as avoided events (e.g., heart attacks), show up in the longer term. To capture the full value of a treatment, the time horizon for clinical and care value should be long enough to capture these benefits, ideally covering a patient’s lifetime.

Costs

XVI.  All healthcare costs and cost offsets should be included.24 Treatments may have up-front costs that lead to long-term improvements in patient health. Those improvements may have “cost offsets,” or reductions in resource needs, such as reduced hospitalizations. By including both the costs and cost offsets, the full value of a treatment can be assessed. Only considering the treatment costs, but not the potential cost offsets, would lead to an incomplete assessment of value.

XVII.  The time horizon for costs should be long enough to incorporate the benefits of the treatment and the lower costs of medications when they become generic. Many of the cost offset benefits of treatment, such as avoided hospitalizations, show up in the longer term. To measure the full value of a treatment, the time horizon for costs should be long enough to capture these cost offsets,25 and to account for the lower costs of medications when generics and biosimilars are introduced.

XVIII.  Costs should be representative of the net price most relevant to the user.26 Costs are a driving component of a value assessment, and care should be taken to ensure that costs are as representative of actual price as possible in order to achieve an accurate assessment. For biopharmaceuticals, following International Society for Pharmacoeconomics and Outcomes Research (ISPOR) good research practices for measuring drug costs can help achieve this objective. Additionally, the included costs should be those most relevant for the user; if the user is the patient, measuring their copay will be more meaningful than measuring what their plan pays.

XIX.  Thresholds should be developed in a transparent manner, may vary by population and disease, and should undergo a multi-stakeholder evaluation process. Since thresholds are an emerging area, their development and application should be transparent27 and subject to a multi-stakeholder evaluation process reflecting societal values related to disease conditions and innovation. No single threshold can or should be universally applicable; thresholds are likely to vary by population and disease.

Evidence

XX.  Evidence should be identified in a systematic, transparent and robust manner. To maximize credibility and trust in the assessment process, the manner in which evidence is identified for the assessment should be systematic, transparent and robust.

XXI.  Stakeholders should be given the opportunity to submit relevant evidence, such as clinical trial and real-world evidence beyond the published literature.28 Stakeholders may have pertinent evidence that is not available in the published literature. To ensure the evidence base is as comprehensive as possible, stakeholders should be given the opportunity to submit this evidence for consideration.

XXII.  Best available evidence should be used for the assessment.29 30 Understanding a treatment’s impact on patient-centered outcomes is critical in an assessment of value. In certain circumstances, only randomized clinical trial evidence may be available. In others, real-world evidence may provide an additional understanding of how a treatment is used for typical patients and how it compares to alternative patient care options. Both high-quality clinical trial evidence and real-world evidence should be considered in any value assessment.

XXIII.  Accepted methods should be used to assess quality of evidence, certainty of evidence and conflicting evidence.31 The results of an assessment depend on the evidence that underlies it. Evidence can be of varying quality and certainty, and the findings from individual studies can conflict with each other. To produce a meaningful and credible assessment, accepted methods should be used to evaluate quality and certainty of evidence and to determine how to handle conflicting evidence.

XXIV.  Where evidence synthesis is warranted, formal analysis should be conducted, in accordance with accepted methodologies. The process of synthesizing evidence is a complex one. When there is a need to combine multiple sources of quantitative evidence, accepted methodologies should be followed in order to ensure a meaningful and credible assessment.

XXV.  Subjective evidence should be used minimally, if at all, and its inclusion should be clearly labeled.32 In situations where high-quality evidence is lacking, subjective evidence, such as expert opinion, might be considered. Expert opinion may be biased by the expert’s experiences or beliefs, making it less reliable. As such, it should be treated as lesser quality evidence and its use should be minimized. Subjective evidence should be transparently labeled and the user should be made aware of the potential limitations.

Dissemination and Utilization

XXVI.  Assessment results should be presented in a manner that is simple for the user to interpret and apply.33 The process and output of a value assessment can be complicated. Presenting the results in a manner that can be easily understood and applied by the user is critical for the value assessment to achieve its intended impact. Developing educational materials to assist the user in interpretation and application is recommended.

XXVII.  Value assessments should clearly state the intended use and audience to avoid misuse.34 With the broad interest in value assessments, there comes a risk that assessment results will be misused by an unintended audience. For example, a value assessment designed for payers may not be appropriate for shared decision-making between patients and their doctors, and vice versa. Safeguards against misuse should be incorporated, such as creating a guidance statement that is explicit about how assessments should (and should not) be used.

XXVIII.  Press releases should only be issued for final assessments, include limitations of the assessment, and highlight areas where sensitivity analyses result in material changes to the interpretation of the results.35 A draft value assessment is, by definition, a preliminary assessment. The final assessment incorporates the benefit of stakeholder input and is often materially different from the draft assessment. Issuing a press release for a draft assessment calls media attention to preliminary results and encourages widespread reporting of these preliminary findings. In the past, the media has reported on draft assessments and paid little attention to the final assessments, with the result that the preliminary findings are the ones that remain top of mind for the public.

GUIDING PRACTICES FOR BUDGET IMPACT ASSESSMENT

The ISPOR Budget Impact Analysis Good Practice II Task Force defines budget impact analysis (BIA) as an estimation of “…the expected changes in expenditure of a healthcare system after the adoption of a new intervention.”36 A BIA is a measure of resource use, not a measure of value. It can inform the user about what they are paying, but not about what they are paying for – value. Labeling a BIA as a measure of value is inaccurate and misleading; the label for a BIA should make clear that it is an assessment of budget impact, not of value.

BIAs can have considerable impact on patients through their use by payers to make coverage and reimbursement decisions. Their collective use can also have considerable impact on society, for example by disincentivizing innovation in highly prevalent diseases or by failing to give treatments with greater clinical benefit a larger share of the available budget. Given this potential impact, it is important to establish methodologic best practices. Recommended guiding practices are outlined below.

I.  Budget impact assessments should examine all aspects of the healthcare system, not just medications.37 Use of medications will have an impact on the use of other healthcare services (e.g., increased laboratory testing, decreased hospitalizations) and hence an impact on other condition-related costs. Considering only the medication cost in a BIA will result in an incomplete and inaccurate assessment.

II.  Budget impact assessments should be separate from value assessments.38 A BIA is a measure of resource use, not a measure of value. It can inform the user about what they are paying, but not about what they are paying for – value. This is reinforced in the Academy of Managed Care Pharmacy’s (AMCP) draft format for formulary submissions, version 4.0, which says, “Budget impact models are not intended to establish the overall value of healthcare technologies because they do not include the full impact of the technology on clinical and patient outcomes.”39 Attempting to combine the two concepts causes confusion and obscures the individual results from the two assessments.

III.  Budget impact assessments should include time frames that are long enough to incorporate the benefits of the innovation40 and the lower costs of medications when they become generic. Many of the cost-offset benefits of treatment, such as avoided hospitalizations, show up in the longer term. To fully measure the budget impact of a treatment, assessments should include a time horizon for costs that is long enough to capture these cost offsets, and to account for the lower costs of medications when they become generic.

IV.  Budget impact assessments should include realistic estimates regarding the uptake rate. Stakeholders may have done extensive assessments of potential uptake and should be given the opportunity to submit their results. A sensitivity analysis of different uptake rates should be conducted.41 Many factors will influence the uptake rate, such as the approved indication, utilization management restrictions, induced demand from previously untreated patients, and changes in provider patterns of use. Stakeholders who have conducted assessments of potential uptake should be given the opportunity to share their results to help inform the estimate. Sensitivity analysis should be performed to examine the impact of different assumptions about the size of the treated population and ranges should be reported.

V.  Budget impact assessments should acknowledge the considerable uncertainty in the inputs by incorporating sensitivity analyses and reporting ranges around estimates.42 There is considerable uncertainty in all inputs for a BIA and these will vary by healthcare system. For all key inputs, sensitivity analysis should be performed to examine the impact of varying assumptions and ranges should be reported.
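A minimal sketch of the uptake and uncertainty practices above, using invented inputs: the budget impact is reported as a range across uptake scenarios rather than as a single point estimate.

```python
# Hypothetical budget impact sketch. The population size, net cost, and
# uptake scenarios are all invented for illustration.

ELIGIBLE_POPULATION = 100_000
NET_COST_PER_PATIENT = 3_000   # new treatment cost minus offsets, per year

def budget_impact(uptake_rate):
    """Expected annual expenditure change under a given uptake assumption."""
    treated = ELIGIBLE_POPULATION * uptake_rate
    return treated * NET_COST_PER_PATIENT

# Report a range across low / base / high uptake scenarios, not a point
# estimate, so the uncertainty in the inputs stays visible to the user.
for label, rate in [("low", 0.05), ("base", 0.15), ("high", 0.30)]:
    print(f"{label}: ${budget_impact(rate):,.0f} per year")
```

The same scenario structure extends naturally to the other uncertain inputs (eligible population, net cost per patient), each varied across a stakeholder-informed range.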

VI.  A BIA is simply an assessment of budget impact, and should not be judged against artificial affordability caps. A BIA is an estimation of a healthcare system’s expenditure changes from a new treatment, not an assessment of whether the healthcare system can afford the new treatment. Given the uncertainty inherent in BIA estimates, and the system-specificity of affordability concerns, it is not the role of a BIA to make artificial determinations of affordability.

VII.  Assessments of ways to address budget impact concerns should include all relevant stakeholders and consider all approaches.43 If there are stakeholder concerns that a treatment that society values may be unaffordable, all interested stakeholders (e.g., patients, providers, employers, health plans) should be involved in considering alternative approaches for achieving affordability (e.g., alternative financing models, utilization management, reinsurance).

---------------------------------------------------------------------------

  1. Drummond MF, Schwartz JS, Jansson B, Luce BR, Neumann PJ, Siebert U, Sullivan SD. Principle 10. Key Principles for the Improved Conduct of Health Technology Assessments for Resource Allocation Decisions. International Journal of Technology Assessment in Health Care. 2008. 24:3:250.
  2. Luce BR, Drummond MF, Dubois RW, Neumann PJ, Jansson B, Siebert U, Schwartz JS. Principles 2 and 3. Principles for planning and conducting comparative effectiveness research. Journal of Comparative Effectiveness Research. 2012. 1 (5):433.
  3. Drummond MF, Schwartz JS, et al. Principle 1. 247.
  4. Luce BR, Drummond MF, et al. Principle 2. 433.
  5. Oliver A, Mossialos E, Robinson R. Health technology assessment and its influence on health-care priority setting. International Journal of Technology Assessment in Health Care. 2004. 20:1. 9.
  6. Drummond MF, Schwartz JS, et al. Principle 10. 253.
  7. Drummond MF, Schwartz JS, et al. Principle 13. 255.
  8. Drummond MF, Schwartz JS, et al. Principle 3. 249.
  9. Donaldson MS, Sox HC. Setting priorities for health technology assessment: a model process. Recommendation 2. Institute of Medicine. 1992. 7.
  10. Luce BR, Drummond MF, et al. Principle 6. 434.
  11. ISPOR Good Practices for Outcomes Research Index. http://www.ispor.org/workpaper/practices_index.asp accessed 12/22/2015.
  12. Cochrane Methods. http://methods.cochrane.org/ accessed 12/22/2015.
  13. Drummond MF, Schwartz JS, et al. Principle 5. 250.
  14. Luce BR, Drummond MF, et al. Principle 9. 435.
  15. Drummond MF, Schwartz JS, et al. Principle 2. 248.
  16. Luce BR, Drummond MF, et al. Principle 5. 434.
  17. Drummond MF, Schwartz JS, et al. Principle 5. 251.
  18. Drummond MF, Schwartz JS, et al. Principle 8. 252.
  19. Luce BR, Drummond MF, et al. Principle 11. 436.
  20. Luce BR, Drummond MF, et al. Principle 7. 434.
  21. Drummond MF, Schwartz JS, et al. Principle 6. 252.
  22. Luce BR, Drummond MF, et al. Principle 10. 436.
  23. Mathes T, Jacobs E, Morfeld JC, Pieper D. Methods of international health technology assessment agencies for economic evaluations- a comparative analysis. BMC Health Services Research 2013, 13:371.
  24. Drummond MF, Schwartz JS, et al. Principle 7. 252.
  25. Hay JW, Smeeding J, Carroll NV, et al. Good research practices for measuring drug costs in cost effectiveness analyses: issues and recommendations: the ISPOR drug cost task force report – Part I. Value Health 2010;13:3-7. Recommendation 5. 6.
  26. Hay JW, Smeeding J, et al. Recommendations 1, 4, 8. 6.
  27. Drummond MF, Schwartz JS, et al. Principle 15. 256.
  28. Drummond MF, Schwartz JS, et al. Principle 10. 254.
  29. Drummond MF, Schwartz JS, et al. Principle 11. 254.
  30. Luce BR, Drummond MF, et al. Principle 8. 435.
  31. Luce BR, Drummond MF, et al. Principle 9. 435.
  32. Balshem H, Helfand M, et al. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology 2011;64:401-406.
  33. Drummond MF, Schwartz JS, et al. Principle 14. 255.
  34. Donaldson MS, Sox HC. Guiding Principle 2. 53.
  35. News Story Review Criteria. Criterion 4. http://www.healthnewsreview.org/about-us/review-criteria/ accessed 1/18/2016.
  36. Sullivan SD, Mauskopf JA, Augustovski F, et al. Budget impact analysis – principles of good practice: report of the ISPOR 2012 budget impact analysis good practice II task force. Value in Health 2014;17:5-14.
  37. Sullivan SD, Mauskopf JA, et al. Impact on Other Costs. 8.
  38. Sullivan SD, Mauskopf JA, et al. Reporting BIAs Alongside CEAs. 13.
  39. AMCP Format for Formulary Submissions, Version 4.0 (draft), http://www.amcp.org/WorkArea/DownloadAsset.aspx?id=20528 accessed 1/19/2016.
  40. Sullivan SD, Mauskopf JA, et al. Time Horizon. 9.
  41. Sullivan SD, Mauskopf JA, et al. Eligible Population. 8.
  42. Sullivan SD, Mauskopf JA, et al. Uncertainty and Scenario Analyses. 9.
  43. Drummond MF, Schwartz JS, et al. Principle 7. 252. 8.