[Figure panel B: Decision tree for budget scenario with drug E]
9.1 Introduction to Validation of Budget Impact Analyses
The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society of Medical Decision Making (SMDM) appointed a task force to create recommendations for good modeling research practices. One of the task force’s charges was to make recommendations on transparency and validation of decision models (Eddy et al. 2012). Since BIA is typically performed using decision-analytic techniques or modeling, these research practices are applicable here.
Chapter Goal
To demonstrate the importance to the budget holder of validation of the analysis and to provide guidance and examples for validation of the structure, assumptions, input parameter values, and results of the budget impact analysis
J. Mauskopf (*) • S. Earnshaw
RTI Health Solutions, RTI International, Research Triangle Park, Durham, NC, USA
e-mail: jmauskopf@rti.org
In Box 9.1, we present the ISPOR-SMDM cost-effectiveness task force’s justification for transparency and ensuring the validity of economic models. We believe that their justification for transparency and validation is also applicable to models designed for performing budget impact analyses.
The task force identified five main types of validation: face validity, verification (or internal validity), cross validity, external validity, and predictive validity (Eddy et al. 2012). In this chapter, we describe these methods as they can be used for validating a budget impact model for a new drug1 in three sections:
• Establishing face validity for the model structure, structural assumptions, parameter values, and results including sensitivity analyses
• Establishing internal validity/verification of the computer program used to estimate the budget impact
• Establishing external validity of the results of the analysis by cross validity with other models, observed validity by comparing the results of the analysis with observed data, and predictive validity in which an opportunity arises to compare and contrast the results with actual budget impacts observed over the analysis time horizon

We should note that, in practice, budget impact analyses usually undergo internal validation/verification through quality checking of the computer program used to generate the model estimates. They also are frequently checked for face validity through review by clinicians and other budget holders familiar with the condition being modeled. However, cross validity, external validity, and predictive validity, which involve comparing the model structure, assumptions, inputs, and estimates with those of other costing or budget impact models, or comparing the model estimates with observed cost data or with costs incurred after introduction of the new product, are not generally assessed and are only briefly mentioned in published guidelines for performing budget impact analyses. Nevertheless, such validation is important to the budget holders.

1 In this chapter we make the simplifying assumption that the budget impact analysis is based on the introduction of a new drug to the current mix of drugs for treatment of a condition. Changes in our recommended approaches to estimate the budget impacts of other types of health care interventions (i.e., vaccines, diagnostics, surgery, and devices) are discussed in Chap. 13.

Box 9.1. ISPOR Task Force: Rationale for Validation of Economic Models (Eddy et al. 2012, page 844)

The purpose of health care models is to provide decision makers with quantitative information about the consequences of the options being considered. For a model to be useful for this purpose, decision makers need confidence in the model’s results.

Specifically, they need to know how accurately the model predicts the outcomes of interest and account for that information when deciding how to use the model results.

Modelers can impart such confidence and enhance model credibility in two main ways: 1) transparency—clearly describing the model structure, equations, parameter values, and assumptions to enable interested parties to understand the model and 2) validation—subjecting the model to tests such as comparing the model’s results with events observed in reality.

Reprinted from Value in Health, 15 (Eddy et al. 2012) Copyright 2012, with permission from Elsevier
In Box 9.2, we present statements on validity by the ISPOR Budget Impact Task Force and by those commenting on the Task Force report.
Box 9.2. ISPOR Budget Impact Task Force Comments on Validation (Sullivan et al. 2014, abstract and page 9; Watkins and Danielson 2014, page 3)
In the ISPOR budget impact analysis guidelines, the following statements are included relating to model validation (Sullivan et al. 2014):
The validation of the model should include at least face validity with decision makers and verification of the calculations.
The computing framework and input data used for a BIA [budget-impact analysis] must be sufficiently valid to credibly inform the budget holder’s decisions. Two of the standard steps in validation should be applied in the BIA: 1) determine face validity through agreement with relevant decision makers on the computing framework, aspects included, and how they are addressed (e.g., access restrictions and time horizon); and 2) verification of the cost calculator or model implementation, including all formulas (Eddy et al. 2012). In addition, where possible, the observed costs in a health plan with the current interventions should be compared with the initial-year estimates from a BIA. For research purposes, after the new intervention is introduced, data could be collected and compared with the estimates from a BIA. Although this would not be relevant for the decision already taken, if the results are close then it would provide confidence in the approach for future interventions.
Reprinted from Value in Health, 17 (Sullivan et al. 2014) Copyright 2014, with permission from Elsevier
A statement in an editorial (Watkins and Danielson 2014) commenting on the ISPOR Budget Impact Analysis Task Force report:
It cannot be overemphasized that the usefulness of an economic model to a user is limited by the accuracy with which it represents the realities of clinical practice in that user’s setting. Common threats to validity include unrealistic assumptions about clinical care pathways, frequency of certain diagnostic tests, and patient adherence outside of controlled trials. Models based on unrealistic clinical assumptions have little or no value to us.
Reprinted from Value in Health, 17 (Watkins and Danielson 2014) Copyright 2014, with permission from Elsevier
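The "verification of the calculations" that both task force statements call for can be made concrete with extreme-value tests on the model's computing framework. The following minimal sketch is purely illustrative: the drug names, market shares, and annual costs are hypothetical, and a real budget impact model would cover multiple years, uptake dynamics, and condition-related costs. It shows the kind of checks a verifier might run, such as confirming that zero uptake of the new drug yields zero budget impact.

```python
# Hypothetical sketch of internal validation/verification checks for a
# simple one-year budget impact calculation. All drug names, market
# shares, and annual costs below are illustrative assumptions.

def annual_drug_cost(population, market_shares, annual_costs):
    """Total annual drug cost for a treated population, given each
    drug's market share and annual cost per patient."""
    # Verification check: market shares must sum to 1.
    assert abs(sum(market_shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(population * share * annual_costs[drug]
               for drug, share in market_shares.items())

def budget_impact(population, current_mix, new_mix, annual_costs):
    """Budget impact = cost of the new treatment mix minus cost of the
    current treatment mix."""
    return (annual_drug_cost(population, new_mix, annual_costs)
            - annual_drug_cost(population, current_mix, annual_costs))

# Hypothetical inputs: drugs A and B on the market, new drug E entering.
costs = {"A": 1000.0, "B": 1500.0, "E": 2500.0}
current = {"A": 0.6, "B": 0.4, "E": 0.0}
new = {"A": 0.6, "B": 0.2, "E": 0.2}

# Extreme-value test 1: no uptake of drug E -> budget impact must be zero.
assert budget_impact(10_000, current, current, costs) == 0.0

# Extreme-value test 2: if drug E is priced the same as the drug it
# displaces, shifting share to it must not change the budget.
same_price = {"A": 1000.0, "B": 1500.0, "E": 1500.0}
assert abs(budget_impact(10_000, current, new, same_price)) < 1e-6

# Directional test: a higher-priced new drug taking market share must
# produce a positive budget impact.
assert budget_impact(10_000, current, new, costs) > 0
```

Such checks do not replace review by clinicians and budget holders, but they document that the calculator behaves sensibly at the boundaries of its inputs, which is the essence of the verification step described above.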