AHRM Blog

Data-Driven Pricing: Supporting Access through Clinical and Economic Endpoints

Posted by Raf Magar on Tue, Oct 22, 2019 @ 02:45 PM

A near-constant headline over the past several years has been the rising cost of healthcare, whether referring to a hospital visit, a novel (or, in some cases, existing) pharmaceutical compound, or a cutting-edge surgical procedure. As medical technologies become more advanced, the cost of research and development increases, while at the same time the economic impact of interventions on payers (whether governmental, commercial, or individual) remains a serious concern. How is it possible to balance these forces pushing and pulling on the economic proposition of a medical intervention?

In nearly every developed market, Health Technology Assessment (HTA) agencies are tasked with trying to understand the clinical and economic impact of healthcare interventions. These agencies may be government bodies (such as NICE in the UK and CADTH in Canada) or private institutions (such as ICER in the USA). In any case, they all seek to develop quantitative evaluations of the clinical benefit and economic impact of healthcare interventions, and their evaluations may be used directly or indirectly by payers to influence access to or pricing of interventions. Thus, in addition to the regulatory bodies that oversee clinical endpoints, the economic endpoints of budget holders now exert significant influence on the market access of a medical intervention.

With this in mind, companies developing healthcare interventions are now tasked with demonstrating a value proposition built on defensible, data-driven pricing. In practice this means performing research that establishes clinical and real-world evidence in support of the intervention's price, and that demonstrates the economics of the intervention fit within the guidelines of the various HTA agencies and the payers they may represent. Accomplishing this requires considering health-economic endpoints earlier in the R&D cycle of an intervention, so that data such as patient-reported outcomes or other quality-of-life measures can be collected and used to develop defensible, data-driven pricing. In AHRM's experience, including these data points in Phase II-IV studies is becoming increasingly important. In the end, the goal is to anticipate the findings of HTA agencies and develop an early clinical and economic strategy that will minimize friction.

Bringing an intervention to market without a serious data-driven pricing strategy presents a difficult situation: doing so opens the door for HTA agencies to exert significant influence over how the intervention is priced, and in some cases over whether the intervention is granted access or reimbursement in that market at all. There is no shortage of instances in which budget holders have simply declared an intervention too expensive for the clinical benefit it provides and subsequently limited access.

It is important to note that there is no upper limit on the absolute cost of an intervention. In repeated discussions with HTA agencies and payers, we have learned that expensive treatments and medications are certainly acceptable, but that they must be worth the cost. Interventions costing over USD 1 million are no longer hypothetical, but to warrant that price an intervention must deliver a tremendous improvement over the standard of care: interventions seeking a premium price should present research that shows a premium benefit to patients and payers.
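The "worth the cost" judgment that HTA agencies apply is often formalized as an incremental cost-effectiveness ratio (ICER) compared against a willingness-to-pay threshold. The sketch below illustrates that arithmetic; all figures, including the threshold, are hypothetical and chosen only to show the mechanics, not drawn from any actual evaluation.

```python
# Illustrative sketch of the cost-effectiveness logic HTA agencies apply.
# All figures below are hypothetical.

def icer(cost_new, cost_soc, qaly_new, qaly_soc):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY
    versus the standard of care (soc)."""
    return (cost_new - cost_soc) / (qaly_new - qaly_soc)

# Hypothetical one-time therapy vs. lifetime standard-of-care costs
ratio = icer(cost_new=1_200_000, cost_soc=400_000,
             qaly_new=14.0, qaly_soc=9.0)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # ICER: $160,000 per QALY gained

# A payer with a hypothetical willingness-to-pay threshold of
# $150,000/QALY would judge this price too high for the benefit.
threshold = 150_000
print("Acceptable at threshold?", ratio <= threshold)  # False
```

Even a seven-figure price can clear the threshold if the QALY gain is large enough, which is exactly the "premium price requires premium benefit" point above.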

AHRM's familiarity with HTA evaluations provides insight into the analyses these agencies will look to perform, the endpoints that may be useful for demonstrating patient impact, and how to correspond with HTA agencies about these matters. AHRM's ability to predict the likely outcomes of HTA evaluations can provide an edge for companies developing a novel intervention or looking to expand access.

For further information or discussion, please contact:

Raf Magar, MBA

rmagar@ahrminc.com

+1-919-758-8203 

Topics: Health Economics, Outcomes Research, HEOR, Clinical Trial, budget impact, Market Access, Drug Pricing

Shifting Expectations: Machine Learning and Artificial Intelligence in the World of Healthcare Outcomes Research

Posted by Christopher Tyson, Ph.D. on Wed, Jan 16, 2019 @ 11:48 AM

The applications of machine learning (ML) and artificial intelligence (AI) continue to grow each year as more resources are devoted to developing mathematical methodologies and computational hardware that expand the environments in which ML and AI can be used. The most direct interaction that people have with these technologies is with several popular consumer electronics devices and services: Amazon Alexa and Google Home both use AI for speech recognition and natural language processing, streaming services such as Spotify may use ML to generate recommendations for users based on prior media consumption, Amazon leverages ML to offer products in which it believes the user may have interest, and financial institutions utilize ML/AI to predict and detect fraudulent account activity.

Use of ML/AI in healthcare research to improve efficiencies and outcomes is growing as well. At AHRM we are working to tackle problems that may be unique to the healthcare realm, and one "non-engineering" issue we will consider here is that the output of ML/AI research differs in scope from the output of a conventional research project. Because of this, a shift in expectations is sometimes required when presenting ML/AI as a research option.

In a conventional research project, a hypothesis is proposed and a protocol is designed to test that hypothesis, including a specific set of statistical analyses that seek to answer whether the hypothesis is true or false within some range of uncertainty. By contrast, the result of research using ML or AI for healthcare outcomes is typically a predictive or classification model that can be applied to other datasets to generate estimates for future outcomes. Models as outputs of healthcare research aren't entirely new: most people are familiar with regression methods that generate equations describing relationships between exposures and outcomes. However, where regression models lose effectiveness and power (with many variables, or with complicated relationships that cannot be captured by explicit equations), ML/AI models can pick up and move forward. At AHRM we often say that ML/AI picks up where conventional statistics leaves off. Let's explore a case study to clarify what this really means.

Suppose we are examining drugs used to treat heart disease in a cohort of patients. Speaking in general terms, a conventional research project would establish a hypothesis regarding some important endpoints, then isolate each therapy and perform statistical analyses against those endpoints. We might also examine covariates or exposures across the therapies and adjust the analyses accordingly. The endpoint results would then be compared across treatments to determine the extent to which the original hypothesis is correct. For example, the mean change in some biomarker under therapy A may be 40% greater than the mean change under therapy B when controlled for covariates, and if the statistical power and error associated with the change are within the prescribed limits, we can say that the therapies achieve different outcomes.
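The conventional comparison described above reduces to a single test statistic. As a minimal sketch, assuming two invented samples of biomarker changes, a Welch's t-test boils the whole question down to one number:

```python
import statistics
from math import sqrt

# Hypothetical biomarker changes (%) under two therapies.
# All values are synthetic, for illustration only.
therapy_a = [42.0, 38.5, 45.1, 40.2, 39.8, 43.6, 41.0, 37.9]
therapy_b = [29.0, 31.2, 27.5, 30.8, 28.4, 32.1, 26.9, 30.0]

mean_a, mean_b = statistics.mean(therapy_a), statistics.mean(therapy_b)
var_a, var_b = statistics.variance(therapy_a), statistics.variance(therapy_b)
n_a, n_b = len(therapy_a), len(therapy_b)

# Welch's t-statistic: one comparison number for the hypothesis test,
# answering only "do A and B differ?" for these two groups.
t = (mean_a - mean_b) / sqrt(var_a / n_a + var_b / n_b)
print(f"Mean change A: {mean_a:.1f}%, B: {mean_b:.1f}%, t = {t:.2f}")
```

The output is narrow by design: a verdict on one pre-specified hypothesis, not a tool for predicting outcomes under other conditions.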

On the other hand, research using ML/AI in a similar situation would take into account all therapies, relevant exposures, and desired endpoints to develop a mathematical model relating all of those aspects. Rather than a single comparison statistic between therapies, we would have a model that describes the likely outcome for many combinations of therapy and covariates. Thus, we would be able to estimate the likely outcomes for patients on therapy A or B under any combination of covariates T, U, V, X, Y, and Z. Because of this, ML/AI research may not provide a direct comparison between therapies. While there are techniques that can move a model towards some kind of comparison, if a direct comparison is the goal, then in many cases it is better to use a conventional hypothesis-driven investigation.
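To make the contrast concrete, here is a toy sketch of the ML-style output: a nearest-neighbour classifier (one of the simplest predictive models, standing in for whatever method a real project would use) trained on invented therapy-plus-covariate records. The point is the shape of the result, not the method: the deliverable is a function that yields an estimate for any combination of inputs.

```python
from math import dist

# Toy training records: (therapy, age, baseline_severity) -> improved (1/0).
# All data are synthetic and for illustration only.
train = [
    (("A", 55, 2.1), 1), (("A", 70, 3.8), 0), (("A", 62, 2.9), 1),
    (("B", 58, 2.4), 1), (("B", 73, 3.5), 0), (("B", 66, 3.1), 0),
]

def encode(x):
    """Turn a record into a numeric feature vector on comparable scales."""
    therapy, age, severity = x
    return (0.0 if therapy == "A" else 1.0, age / 100.0, severity / 5.0)

def predict(x, k=3):
    """Majority vote of the k nearest training records: a 'model' that
    covers any therapy/covariate combination, not a single comparison."""
    nearest = sorted(train, key=lambda r: dist(encode(r[0]), encode(x)))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# The same model yields estimates for any combination of inputs:
print(predict(("A", 60, 2.5)))  # 1 (improvement predicted)
print(predict(("B", 72, 3.6)))  # 0 (no improvement predicted)
```

Notice that nothing here directly states whether therapy A beats therapy B overall; extracting such a comparison would require extra work on top of the model, which is the expectation shift discussed above.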

Those who are familiar with regression models may make the jump to ML/AI more easily, but regardless of research experience this difference in expectations can prove difficult for regulatory agencies, sponsors, IRBs, and decision-makers alike. A cooperative relationship between AHRM and the University at Buffalo Center for Computational Research not only provides nearly 4,000 cores and 72 teraflops of supercomputing power for data-intensive ML/AI projects, but also serves as an extension of AHRM's own knowledge and experience in navigating issues related to ML/AI research projects. Furthermore, at AHRM we understand that although ML/AI is one of the more revolutionary technologies at our disposal, it is not a magic bullet and not a replacement for other research methods. Nonetheless, AHRM sees machine learning and artificial intelligence as an integral part of the future of healthcare and is committed to investigating how and why to use these technologies for healthcare outcomes research.

For further information or discussion, please contact:

Christopher Tyson, Ph.D.

ctyson@ahrminc.com

+1-716-881-7565 

Topics: Outcomes Research, Real World, HEOR, New

Health Economic Evaluations: Observational vs Simulation

Posted by Chris Purdy on Tue, Oct 14, 2014 @ 03:09 PM

Pros and Cons: Observational and Simulation Economic Studies

Over the last several years, our company has been commissioned to carry out studies whose primary purpose is to provide evidence to support reimbursement of a specific healthcare technology (either pharmaceutical or device).  In general, both in the US and abroad, there has been an increase in health economic literature because reimbursement agencies, whether public or private, demand more of this type of information.  In our experience, these studies have typically taken one of two forms: simulation modeling or an observational study.  This blog explores some of the publication trends, as well as the pros and cons of the respective approaches.

Over the past five years, there has been an increase in publications associated with health economics in general.  For instance, in an advanced search (i.e. All Fields and Date of Publication) of PubMed, results for "cost" increased from 23,919 in 2007 to 37,624 in 2013 (a 57% increase), and results for "cost-utility" increased from 151 to 294 (a 95% increase) [1].  Over the same period, studies associated with the keyword "observational" increased from 4,915 to 13,622 (a 177% increase).  Although this is a crude method of categorizing studies, it suggests that observational studies have grown considerably faster than cost studies.
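The percentage increases quoted above follow directly from the raw counts; a few lines of arithmetic reproduce them:

```python
# Reproducing the percentage increases from the PubMed counts cited above.
def pct_increase(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round(100 * (new - old) / old)

print(pct_increase(23_919, 37_624))  # 57  ("cost")
print(pct_increase(151, 294))        # 95  ("cost-utility")
print(pct_increase(4_915, 13_622))   # 177 ("observational")
```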

This trend has paralleled some of our own experience over the past few years.  The limitations of cost-effectiveness analyses (i.e. simulation modeling) have often been noted [2].  The modeling effort itself depends upon data from previously published studies, so data quality is an important factor to consider.  While RCTs are generally considered the gold standard for data quality, there are other factors to consider when using efficacy results from RCTs in subsequent economic modeling: similarity in study design, completeness and consistency of reporting, and protocol-driven resource utilization are all factors to weigh when compiling the evidence that will form the foundation of the simulation model.

Under good circumstances, there may be several high-quality studies available for each intervention of interest, such that the reported outcomes (e.g. treatment success) are very similar across the set of studies.  The definitions of the patient populations should be as similar as possible.  Also, if a cost-utility analysis is desired, generic health preference scores must be available, or there must be a mapping algorithm from a disease-specific instrument [3].

However, in our experience, as well as that of others, there are many limitations to this approach [2].  In many cases there are significant differences in study design among the studies used to inform the models, and the QALY information is often not ideal.  While it is becoming more common to consider quality-of-life information earlier in the research process, very often generic health preferences are not available from the studies used to inform the efficacy parameters.  In some cases it is necessary to map scores from one instrument to another, such as mapping from the SF-12 to the EQ-5D [4].
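A mapping algorithm of this kind is typically a regression of the generic utility index on the instrument's summary scores. The sketch below shows the shape of such a mapping; the coefficients are invented placeholders, not the published Sullivan and Ghushchyan estimates [4], and a real analysis would use the published model and its stated range.

```python
# Illustrative SF-12 -> EQ-5D mapping. The coefficients are invented
# placeholders for this sketch, NOT the published regression estimates.

def eq5d_from_sf12(pcs, mcs, intercept=-0.3, b_pcs=0.012, b_mcs=0.009):
    """Map SF-12 physical (PCS) and mental (MCS) summary scores to an
    EQ-5D index via a simple linear model, then clamp to a plausible
    index range (the UK EQ-5D tariff spans roughly -0.594 to 1.0)."""
    index = intercept + b_pcs * pcs + b_mcs * mcs
    return max(min(index, 1.0), -0.594)

# Hypothetical trial-arm mean summary scores:
print(round(eq5d_from_sf12(pcs=45.0, mcs=50.0), 3))  # 0.69
```

The practical point is that the mapped index inherits the uncertainty of both the source instrument and the mapping model, which is part of why mapped utilities are considered less ideal than directly collected preference scores.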

While the above discussion covers some of the key limitations associated with economic modeling (i.e. simulation modeling), there are potentially others.  When data quality is high, the required data are available, and the similarity between the key studies is high, the resulting economic model has a good chance of being both reasonably accurate and persuasive.  More often than not, however, the actual situation is a mixed bag of higher-quality data and other data that is not ideal or comes from small samples.  In these cases, once the full set of assumptions and limitations is considered, the resulting analysis may not be as persuasive as desired.

While observational studies have limitations as well, some of the limitations that are part and parcel of simulation modeling can be overcome.  Since these studies are generally prospective, there is the opportunity to design the study to capture all of the necessary data elements.  Also, since the study is indeed observational, the limitations associated with protocol-driven resource utilization (i.e. from RCTs) are overcome.  Another factor to consider is that the resulting dataset is an original dataset, as opposed to an amalgamation of data elements from a series of previously published studies; this increases the interest and publishability of the work.

In summary, while it is somewhat more expensive and time-consuming to conduct a prospective observational study, the benefits may well be worth the effort.  The publication trends suggest that more sponsors are choosing to shift from health economic modeling toward observational studies in order to support the reimbursement of their technologies.

 

References:

  1. http://www.ncbi.nlm.nih.gov/pubmed/
  2. Weintraub WS, Cohen DJ.  The limits of cost-effectiveness analysis.  Circulation: Cardiovascular Quality and Outcomes. 2009; 2:55-58.
  3. Parker M, Haycox A, Graves J. Estimating the relationship between preference-based generic utility instruments and disease-specific quality-of-life measures in severe chronic constipation: challenges in practice. Pharmacoeconomics. 2011 Aug; 29(8):719-30.
  4. Sullivan PW, Ghushchyan V. Mapping the EQ-5D index from the SF-12: US general population preferences in a nationally representative sample. Med Decis Making. 2006 Jul-Aug; 26(4):401-9.

 

Topics: Health Economics, Outcomes Research, Real World, HEOR