AHRM Blog

Data-Driven Pricing: Supporting Access through Clinical and Economic Endpoints

Posted by Raf Magar on Tue, Oct 22, 2019 @ 02:45 PM

A near-constant headline over the past several years has been the rising cost of healthcare, whether referring to a hospital visit, a novel (or, in some cases, existing) pharmaceutical compound, or a cutting-edge surgical procedure. As medical technologies become more advanced, the cost of research and development increases, while at the same time the economic impact of interventions on payers (whether governmental, commercial, or individual) is a serious concern. How is it possible to balance these forces pushing and pulling on the economic proposition of a medical intervention?

In nearly every developed market, Health Technology Assessment (HTA) agencies are tasked with understanding the clinical and economic impact of healthcare interventions. These agencies may be government bodies (such as NICE in the UK and CADTH in Canada) or private institutions (such as ICER in the USA). In any case, they all seek to develop quantitative evaluations of the clinical benefit and economic impact of healthcare interventions. Their evaluations may be used, directly or indirectly, by payers to influence access to or pricing of interventions. Thus, in addition to the regulatory bodies that oversee clinical endpoints, the economic endpoints of budget holders now represent a significant influence on the market access of a medical intervention.

With this in mind, companies working to develop healthcare interventions are now tasked with demonstrating a value proposition focused on defensible, data-driven pricing. In practice this means performing research that establishes clinical and real-world evidence in support of the intervention's pricing, and demonstrating that the economics of the intervention fit within the guidelines of the various HTA agencies and the payers they may represent. Accomplishing this requires considering health-economic endpoints earlier in the R&D cycle of an intervention so that data such as patient-reported outcomes or other quality-of-life measures can be collected and used to develop defensible, data-driven pricing. In AHRM's experience, including these data points in Phase II-IV studies is becoming increasingly important. In the end, the goal is to anticipate the findings of HTA agencies and develop an early clinical and economic strategy that will minimize friction.

Bringing an intervention to market without a serious data-driven pricing strategy presents a difficult situation: doing so opens the door for HTA agencies to significantly influence how the intervention is priced and, in some cases, whether the intervention is granted access or reimbursement in that market at all. There is no shortage of instances in which budget holders have simply said that an intervention is too expensive for the clinical benefit it provides and subsequently limited access.

It is important to note that there is no upper limit on the absolute cost of an intervention. In repeated discussions with HTA agencies and payers, we have learned that expensive treatments and medications are certainly acceptable, but that they must be worth the cost. Interventions costing over USD 1 million are no longer hypothetical, but to warrant that price an intervention must deliver a tremendous improvement over the standard of care: interventions seeking a premium price should present research that shows a premium benefit to patients and payers.
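
To make "worth the cost" concrete, HTA evaluations typically reduce the question to a cost-per-benefit calculation such as the incremental cost-effectiveness ratio (ICER), expressed as incremental cost per quality-adjusted life year (QALY) gained and compared against a willingness-to-pay threshold. The sketch below is a minimal illustration with entirely hypothetical figures; it shows how even a seven-figure intervention can clear a threshold when the clinical benefit over the standard of care is large enough.

# Minimal ICER arithmetic with hypothetical figures. The threshold shown
# is only an example of the ranges agencies commonly reference.
cost_new, cost_soc = 1_000_000.0, 150_000.0   # lifetime cost per patient (USD)
qaly_new, qaly_soc = 14.0, 6.5                # quality-adjusted life years

icer = (cost_new - cost_soc) / (qaly_new - qaly_soc)
print(f"ICER: ${icer:,.0f} per QALY gained")  # -> ICER: $113,333 per QALY gained

threshold = 150_000.0                         # hypothetical willingness-to-pay
print("within threshold" if icer <= threshold else "above threshold")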

AHRM's familiarity with HTA evaluations can provide insight into the analyses agencies will look to perform, the endpoints that may be useful for demonstrating patient impact, and how to correspond with HTA agencies about these matters. AHRM's ability to anticipate the likely outcomes of HTA evaluations can provide an edge for companies developing a novel intervention or looking to expand access.

For further information or discussion, please contact:

Raf Magar, MBA

rmagar@ahrminc.com

+1-919-758-8203 

Topics: Health Economics, Outcomes Research, HEOR, Clinical Trial, budget impact, Market Access, Drug Pricing

Shifting Expectations: Machine Learning and Artificial Intelligence in the World of Healthcare Outcomes Research

Posted by Christopher Tyson, Ph.D. on Wed, Jan 16, 2019 @ 11:48 AM

The applications of machine learning (ML) and artificial intelligence (AI) continue to grow each year as more resources are devoted to developing the mathematical methodologies and computational hardware that expand the environments in which ML and AI can be used. The most direct interaction people have with these technologies is through popular consumer electronics devices and services: Amazon Alexa and Google Home both use AI for speech recognition and natural language processing; streaming services such as Spotify may use ML to generate recommendations for users based on prior media consumption; Amazon leverages ML to offer products in which it believes the user may have interest; and financial institutions utilize ML/AI to predict and detect fraudulent account activity.

Use of ML/AI in healthcare research to improve efficiencies and outcomes is growing as well. At AHRM we are working to tackle problems unique to the healthcare realm, and one "non-engineering" issue we will consider here is that the output of ML/AI research differs in scope from the output of a conventional research project. Because of this, a shift in expectations is sometimes required when presenting ML/AI as a research option.

In a conventional research project, a hypothesis is proposed and a protocol is designed to test it, including a specific set of statistical analyses that seek to answer whether the hypothesis is true or false within some range of uncertainty. The result of research using ML or AI for healthcare outcomes, on the other hand, is typically a predictive or classification model that can be applied to other datasets to generate estimates of future outcomes. Models as outputs of healthcare research are not entirely new; most people are familiar with regression methods that generate equations describing relationships between exposures and outcomes. However, where regression models lose effectiveness and power, with many variables or complicated relationships that cannot be defined by explicit equations, ML/AI models can pick up and move forward. At AHRM we often say that ML/AI picks up where conventional statistics leaves off. Let's explore a case study to clarify what this really means.

Suppose we are examining drugs used to treat heart disease in a cohort of patients. Speaking in general terms, a conventional research project would establish a hypothesis regarding some important endpoints, then isolate each therapy and perform statistical analyses against those endpoints. We might also examine covariates or exposures across the therapies and perform analyses taking those into consideration. The endpoint results would then be compared across treatments, and the extent to which the original hypothesis is correct would be determined. For example, the mean change in some biomarker under therapy A may be 40% greater than the mean change in that same biomarker under therapy B when controlled for covariates, and if the statistical power and error associated with the change are within the prescribed limits, we can say that the therapies achieve different outcomes.
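
As a minimal sketch of that workflow, the covariate-adjusted comparison might be run as an ANCOVA-style regression. The dataset and column names below (cohort.csv, therapy, age, baseline, biomarker_change) are hypothetical stand-ins, not a prescribed analysis plan.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level dataset: one row per subject with the
# assigned therapy, covariates, and the observed biomarker change.
df = pd.read_csv("cohort.csv")

# The coefficient on therapy estimates the covariate-adjusted difference
# in mean biomarker change between therapy A (reference) and therapy B.
model = smf.ols("biomarker_change ~ C(therapy) + age + baseline", data=df).fit()
print(model.summary())

# The p-value on the therapy term answers the prespecified hypothesis
# within the usual bounds of statistical error.
print(model.pvalues["C(therapy)[T.B]"])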

On the other hand, research using ML/AI in a similar situation would take all therapies, relevant exposures, and desired endpoints into account to develop a mathematical model relating all of those aspects. Rather than a single comparison statistic between therapies, we would have a model that describes the likely outcome for many combinations of therapy and covariates. Thus, we would be able to estimate the likely outcomes for patients on therapy A or B under any combination of covariates T, U, V, X, Y, Z, and so on. Because of this, ML/AI research may not provide a direct comparison between therapies. While there are techniques that can be used with the model to move towards some kind of comparison, if a direct comparison is desired then in many cases it is better to utilize a conventional hypothesis-driven investigation.
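
A minimal sketch of this approach, under the same hypothetical dataset and column names as above, with a random forest standing in for whichever learner suits the data:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("cohort.csv")                          # hypothetical dataset
X = pd.get_dummies(df[["therapy", "age", "baseline"]])  # one-hot encode therapy
y = df["biomarker_change"]

# One model over therapy and covariates together, rather than a
# separate analysis per therapy.
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

# Query the model for the same patient profile under each therapy.
profile = pd.DataFrame({
    "age": [62, 62],
    "baseline": [1.8, 1.8],
    "therapy_A": [1, 0],
    "therapy_B": [0, 1],
})[X.columns]  # align column order with the training matrix
print(model.predict(profile))  # estimated change under therapy A vs. therapy B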

Those who are familiar with regression models may be able to make the jump to ML/AI more easily, but regardless of research experience, this difference in expectations can prove difficult for regulatory agencies, sponsors, IRBs, and decision-makers alike. A cooperative relationship between AHRM and the University at Buffalo Center for Computational Research not only provides nearly 4000 cores and 72 teraflops of supercomputing power for data-intensive ML/AI projects, but also serves as an extension of AHRM's own knowledge and experience in navigating issues related to ML/AI research projects. Furthermore, we understand that although ML/AI is one of the more revolutionary technologies at our disposal, it is not a magic bullet and is not a replacement for other research methods. Nonetheless, AHRM sees machine learning and artificial intelligence as an integral part of the future of healthcare and is committed to investigating how and why to use these technologies for healthcare outcomes research.

For further information or discussion, please contact:

Christopher Tyson, Ph.D.

ctyson@ahrminc.com

+1-716-881-7565 

Topics: Outcomes Research, Real World, HEOR, New

Clinical Data Management and the Data Manager

Posted by Laura Dalfonso on Mon, Jan 12, 2015 @ 03:59 PM

What is Data Management?

Data Management is a term encompassing various functions and applicable within several industries. Within the field of research it is often referred to more specifically as Clinical Data Management. Data Management is an integral part of doing research and can best be described as a process for collecting, validating, and reporting the data produced during a clinical trial or other type of research study. Highly effective Data Management is crucial for the generation of reproducible and reliable study results. The degree of Data Management required will vary from one research effort to another, but all research efforts require some level of data management before the data can be analyzed and published. How the data is to be collected, validated, and reported is precisely outlined in a document called the Data Management Plan. This helps to ensure that the way data is collected and reported is consistent among all sites participating in the research effort, as is the way the data is analyzed.

 

What is the role of a data manager?

A data manager is an important member of the research team whose main priority is to ensure the integrity of the data generated during a clinical trial or other research effort. Data managers can be employed by pharmaceutical or medical device firms, as well as by contract research organizations. Some large hospitals or clinics may also hire data managers if their involvement in research is great enough to support the position(s); often, data managers at hospitals and clinics have other responsibilities as well, including direct patient care. For the remainder of this section, we will focus on the responsibilities of data managers employed by pharmaceutical/medical device firms or by contract research organizations.

The data manager is charged with a variety of tasks related to developing the processes and procedures for the collection, validation, and reporting of the data generated during a clinical trial or other research effort. Among these responsibilities are the development and processing of case report forms (CRFs), the identification and generation of necessary logic checks, and the writing and resolution of queries. Because much of a data manager's time is spent performing one of these three tasks, we will take a deeper look at each.

Case Report Forms (CRFs) are forms used during a clinical trial or other research effort to collect and report the required subject-level data. They can be in either electronic (eCRF) or paper (CRF) format. The number of CRFs used for a given research effort will depend on how much data is being collected, what types of data are being collected, and how often data is being collected on each subject. It will also depend on how each form is designed and how much information is included on a single form: one CRF may combine data related to medical history with the results of a physical exam, while another design may separate this information onto two forms.
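
As a minimal sketch, an eCRF definition in software might look like the following; the form title, fields, types, and ranges are all hypothetical.

# Hypothetical eCRF definition: each field carries the metadata that
# later drives validation (required flags, allowed values, ranges).
PHYSICAL_EXAM_FORM = {
    "title": "Physical Exam",
    "fields": [
        {"name": "subject_id", "type": "str",   "required": True},
        {"name": "visit_date", "type": "date",  "required": True},
        {"name": "age",        "type": "int",   "required": True, "range": (18, 110)},
        {"name": "gender",     "type": "enum",  "required": True,
         "allowed": ["Male", "Female"]},
        {"name": "weight_kg",  "type": "float", "required": False, "range": (30, 250)},
    ],
}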

Logic Checks, as indicated by their name, look for a logical pattern in the data that has been reported on the CRF/eCRF. Also referred to as edit checks, they are generated to identify errors in the data. The errors range from simple missing data to more complex issues, such as a lack of consistency between a data point reported on Form A and a related data point reported on Form B. For example, on Form A the subject's gender is reported as Male, but on Form B the result of the screening pregnancy test is reported as negative instead of not applicable. Logic checks also identify data that is out of range, meaning the value is abnormal and not likely to be true; an example would be a subject who is reported as being 152 years old.
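
A minimal sketch of both kinds of checks, using the two examples above; the form names, field names, and plausible age range are hypothetical.

def run_edit_checks(subject):
    """Return a list of error messages for one subject's reported data."""
    errors = []

    # Range check: flag values that are abnormal and not likely to be true.
    age = subject["Demographics"].get("age")
    if age is not None and not 0 <= age <= 110:
        errors.append(f"Demographics: age {age} is out of range")

    # Cross-form consistency check: a Male subject should have the
    # screening pregnancy test reported as 'not applicable'.
    gender = subject["Demographics"].get("gender")
    pregnancy = subject["Screening"].get("pregnancy_test")
    if gender == "Male" and pregnancy != "not applicable":
        errors.append(
            f"Screening: pregnancy test '{pregnancy}' is inconsistent "
            "with gender Male on Demographics"
        )
    return errors

subject = {
    "Demographics": {"age": 152, "gender": "Male"},
    "Screening": {"pregnancy_test": "negative"},
}
for error in run_edit_checks(subject):
    print(error)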

Writing and resolving queries is often the most time-intensive part of a data manager's job, especially early in the data collection process. It is not unusual for several queries to be written on each CRF submitted for the first several subjects. Generally speaking, as the research effort progresses, the queries decrease in number because the necessary corrections are made in the collection and reporting process. Although queries cannot be completely eliminated by the use of eCRFs, eCRFs do significantly decrease the number of queries generated by disallowing certain data from being entered, such as an age of 152 years, and by requiring in real time that all mandatory fields be completed.
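
A minimal sketch of what a well-identified query might carry; the identifiers mirror the fields the querying advice below says to double-check, and all values shown are hypothetical.

from dataclasses import dataclass

@dataclass
class Query:
    site_number: str       # the four identifiers below are what site staff
    subject_number: str    # need to locate the data point in question
    crf_title: str
    question_number: str
    message: str
    resolved: bool = False

query = Query(
    site_number="101",
    subject_number="101-0007",
    crf_title="Screening",
    question_number="Q4",
    message=(
        "Pregnancy test reported as 'negative' but gender on Demographics "
        "is 'Male'. Please confirm (options: negative, positive, not applicable)."
    ),
)
print(query)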

Querying advice for the data manager

When writing queries for research conducted using paper CRFs, double-check the Site Number, Subject Number, CRF Title, and Question Number before sending the query. Often an error in one of these makes it difficult or impossible for the site to answer. Even if site staff are able to determine the correct Site Number, Subject Number, CRF Title, or Question Number, doing so requires time that most site staff do not have the luxury of. These errors will also most likely result in having to send an additional query to the site, which staff will again have to spend time resolving. Other helpful tips include:

- Do not combine queries on separate CRFs into a single query.

- If a response is to be reported as a predefined value, list the options for the response, such as negative, positive, or not applicable.

The best advice you can give to a participating hospital or clinic regarding queries is to complete them as soon as possible after receiving them. This alerts site staff to errors and omissions sooner rather than later, and prevents them from continuing to complete additional forms in a manner that will generate the same queries.

Topics: Outcomes Research, CRO, Query, Data Management, Clinical Trial, CRF, Validation