AHRM Blog

Shifting Expectations: Machine Learning and Artificial Intelligence in the World of Healthcare Outcomes Research

Posted by Christopher Tyson, Ph.D. on Wed, Jan 16, 2019 @ 11:48 AM

The applications of machine learning (ML) and artificial intelligence (AI) continue to grow each year as more resources are devoted to the mathematical methodologies and computational hardware that expand the environments in which ML and AI can be used. The most direct interaction most people have with these technologies comes through popular consumer electronics and services: Amazon Alexa and Google Home both use AI for speech recognition and natural language processing, streaming services such as Spotify may use ML to generate recommendations based on prior media consumption, Amazon leverages ML to offer products a user may be interested in, and financial institutions use ML/AI to predict and detect fraudulent account activity.

Use of ML/AI in healthcare research to improve efficiencies and outcomes is growing as well. At AHRM we are working to tackle problems that may be unique to the healthcare realm, and one “non-engineering” issue we will consider here is that the output of ML/AI research differs in scope from the output of a conventional research project. Because of this, a shift in expectations is sometimes required when presenting ML/AI as a research option.

In a conventional research project, a hypothesis is proposed and a protocol is designed to test that hypothesis, including a specific set of statistical analyses that seek to answer whether the hypothesis is true or false within some range of uncertainty. On the other hand, the result of research using ML or AI for healthcare outcomes is typically a predictive or classification model that can be applied to other datasets to generate estimates for future outcomes. Models as outputs of healthcare research aren’t entirely new: most people are familiar with regression methods that generate equations describing relationships between exposures and outcomes. However, where regression models begin to lose effectiveness and power (with many variables, or with complicated relationships that cannot be captured by explicit equations), ML/AI models can pick up at that point and move forward. At AHRM we often say that ML/AI picks up where conventional statistics leaves off. Let’s explore a case study to clarify what this really means.

Suppose we are examining drugs used to treat heart disease in a cohort of patients. In general terms, a conventional research project would establish a hypothesis regarding some important endpoints, then isolate each therapy and perform statistical analyses to assess those endpoints for each therapy. We might also examine covariates or exposures across the therapies and adjust the analyses accordingly. The endpoint results would then be compared across treatments to determine the extent to which the original hypothesis holds. For example, the mean change in some biomarker under therapy A may be 40% greater than the mean change in that same biomarker under therapy B when controlled for covariates, and if the statistical power and error associated with that change are within the prescribed limits, we can say that the therapies achieve different outcomes.
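To make the conventional approach concrete, here is a minimal sketch of that kind of two-therapy comparison on simulated data. All values, effect sizes, and variable names are hypothetical, and a real analysis would adjust for covariates rather than use a simple two-sample test:

```python
import random
import statistics

random.seed(0)

# Hypothetical simulated data: percent change in a biomarker under two
# therapies, with therapy A simulated to have the larger mean improvement.
change_a = [random.gauss(14.0, 4.0) for _ in range(100)]
change_b = [random.gauss(10.0, 4.0) for _ in range(100)]

mean_a = statistics.fmean(change_a)
mean_b = statistics.fmean(change_b)

# Welch's t statistic for the difference in means (unequal variances).
var_a = statistics.variance(change_a)
var_b = statistics.variance(change_b)
n_a, n_b = len(change_a), len(change_b)
t_stat = (mean_a - mean_b) / ((var_a / n_a + var_b / n_b) ** 0.5)

print(f"mean change A: {mean_a:.1f}%  mean change B: {mean_b:.1f}%  t = {t_stat:.2f}")
```

The output of this style of analysis is a single comparison statistic: a difference in means and an assessment of whether it clears a prescribed significance threshold.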

On the other hand, research using ML/AI in a similar situation would take into account all therapies, relevant exposures, and desired endpoints to develop a mathematical model relating all of those aspects. Rather than a single comparison statistic between therapies, we would have a model that describes the likely outcome for many combinations of therapy and covariates. Thus, we would be able to estimate the likely outcomes for patients on therapy A or B under any combination of covariates T, U, V, X, Y, and Z. Because of this, ML/AI research may not provide a direct comparison between therapies. There are techniques that can be applied to the model to move toward some kind of comparison, but if a direct comparison is the goal, a conventional hypothesis-driven investigation is often the better choice in many cases.
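The model-as-output idea can be sketched with a toy example. Here a logistic regression (a minimal stand-in for the kind of predictive model an ML pipeline would produce) is fit by gradient descent to simulated patients described by therapy and two hypothetical covariates; the fitted model then estimates the likely outcome for any combination of inputs. Everything here, from the covariates to the hidden data-generating relationship, is illustrative:

```python
import math
import random

random.seed(1)

# Hypothetical simulated cohort: therapy (0 = A, 1 = B) plus two covariates,
# with a binary outcome (1 = treatment success).
def simulate_patient():
    therapy = random.randint(0, 1)
    age = random.gauss(60, 10)
    biomarker = random.gauss(5, 2)
    # Hidden "true" relationship, used only to generate the toy data.
    logit = 1.5 - 0.8 * therapy - 0.03 * (age - 60) + 0.2 * (biomarker - 5)
    outcome = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    # Features: intercept, therapy, and standardized covariates.
    return [1.0, therapy, (age - 60) / 10, (biomarker - 5) / 2], outcome

data = [simulate_patient() for _ in range(2000)]

# Fit a logistic regression by simple full-batch gradient ascent
# on the log-likelihood.
weights = [0.0] * 4
for _ in range(300):
    grad = [0.0] * 4
    for x, y in data:
        p = 1 / (1 + math.exp(-sum(w * xi for w, xi in zip(weights, x))))
        for j in range(4):
            grad[j] += (y - p) * x[j]
    weights = [w + 0.001 * g for w, g in zip(weights, grad)]

def predict(therapy, age, biomarker):
    """Estimated probability of success for one therapy/covariate combination."""
    x = [1.0, therapy, (age - 60) / 10, (biomarker - 5) / 2]
    return 1 / (1 + math.exp(-sum(w * xi for w, xi in zip(weights, x))))

# The output is a model, not a comparison statistic: it can be queried
# for any combination of therapy and covariate values.
print(f"P(success | therapy A, age 55): {predict(0, 55, 5):.2f}")
print(f"P(success | therapy B, age 55): {predict(1, 55, 5):.2f}")
```

Note that the two predicted probabilities can be compared informally, but turning such per-combination estimates into a formal between-therapy comparison is exactly the extra work described above.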

Those who are familiar with regression models may be able to make the jump to ML/AI more easily, but regardless of research experience this difference in expectation can prove difficult for regulatory agencies, sponsors, IRBs, and decision-makers alike. A cooperative relationship between AHRM and the University at Buffalo Center for Computational Research not only provides nearly 4000 cores and 72 teraflops of supercomputing power for data-intensive ML/AI projects, but also serves as an extension of AHRM’s own knowledge and experience in navigating issues related to ML/AI research projects. Furthermore, at AHRM we understand that although ML/AI is one of the more revolutionary technologies at our disposal, it is not a magic bullet and is not a replacement for other research methods. Nonetheless, AHRM sees machine learning and artificial intelligence as an integral part of the future of healthcare and is committed to investigating how and why to use these technologies for healthcare outcomes research.

For further information or discussion, please contact:

Christopher Tyson, Ph.D.

ctyson@ahrminc.com

+1-716-881-7565 

Topics: Outcomes Research, Real World, HEOR, New

Barriers to Conducting Research: Helping the Private Practice Physician Overcome the Limitations

Posted by Laura Dalfonso on Thu, May 04, 2017 @ 11:28 AM

Practicing physicians have consistently cited three major barriers to conducting independent research and/or participating in industry sponsored research: lack of time, lack of money, and lack of research staff. This is especially true for physicians in private practice without direct access to the type of resources available to those practicing in academic-affiliated institutions. 

Practicing physicians hold patient care as their highest priority and often cannot devote the time needed to identify funding opportunities or to develop the conceptual research designs, protocols, and CRFs required to conduct research. This is unfortunate, because these physicians often have questions or hypotheses regarding treatment comparisons, potential new indications, and other research ideas based on what they witness in real-world practice. Following through on such ideas could contribute to a further understanding of medications, diseases, economic impacts, and patient outcomes.

All research requires direct or indirect assets to complete the steps that bring it from concept to published results that can be shared with the scientific community, patients, payers, and other interested parties. Once a physician has a research concept, potential funding opportunities must be identified and applications made. When funding has been secured, a protocol must be written, case report forms designed, and a database built and tested. The research may also require informed consent documents, IRB submission, and a statistical analysis plan. All of this while trying to run a successful practice and keeping patient care at the forefront? Clearly this seems virtually impossible for the practicing private physician, regardless of their desire to conduct research.

Opportunities to participate in industry-sponsored research can also present limitations. The processes of screening subjects, obtaining informed consent, and completing case report forms are much too time-intensive for the typical physician to simply add to his or her list of current responsibilities. Considering all of this, it quickly becomes apparent just how difficult participation can be without a team in place.

We at AHRM Inc. have witnessed this over the past several years, and so we have developed a service to assist physicians who have a strong desire to conduct research. We collaborate with them to complete all of the steps necessary to bring their research idea to fruition: procuring funding, completing and submitting applications, and developing protocols, CRFs, informed consent documents, and more. We can build the database, perform the analysis, train the staff, and provide monitoring services. Our services are provided free of charge to the physician and are subcontracted through the funding source(s). This collaboration allows valuable research to be conducted by helping to break down the barriers faced by private practice physicians.

If you have an idea for research you are interested in conducting within your practice, or would like to participate in an available industry-sponsored opportunity, please email ldalfonso@ahrminc.com or call +1 (716) 994-7912 to schedule an initial discussion.

 

Topics: Real World, Data Management, Clinical Trial, CRF

Health Economic Evaluations: Observational vs Simulation

Posted by Chris Purdy on Tue, Oct 14, 2014 @ 03:09 PM

Pros and Cons: Observational and Simulation Economic Studies

Over the last several years, our company has been commissioned to carry out studies whose primary purpose is to provide evidence to support reimbursement of a specific healthcare technology (either a pharmaceutical or a device). In general, both in the US and abroad, health economic literature has grown because reimbursement agencies, whether public or private, are demanding more of this type of information. In our experience, these studies have typically taken one of two forms: simulation modeling or an observational study. This blog explores some of the trends in publications, as well as the pros and cons of the respective approaches.

In recent years, there has been an increase in publications associated with health economics in general. For instance, in an advanced PubMed search (i.e. All Fields and Date of Publication), the number of results for “cost” rose from 23,919 in 2007 to 37,624 in 2013 (a 57% increase), and for “cost-utility” from 151 to 294 (a 95% increase) [1]. Similarly, over the same period, studies associated with the keyword “observational” increased from 4,915 to 13,622 (a 177% increase). Although this is a crude way of categorizing studies, the larger relative growth suggests that observational studies are increasing faster than cost studies.
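The percentage increases quoted above follow directly from the reported hit counts; as a quick arithmetic check:

```python
# Recomputing the percent increases from the PubMed hit counts quoted above.
counts = {
    "cost": (23_919, 37_624),
    "cost-utility": (151, 294),
    "observational": (4_915, 13_622),
}

for term, (hits_2007, hits_2013) in counts.items():
    pct = 100 * (hits_2013 - hits_2007) / hits_2007
    print(f"{term}: {hits_2007} -> {hits_2013} (+{pct:.0f}%)")
```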

This trend parallels some of our own experience over the past few years. The limitations of cost-effectiveness analyses (i.e. simulation modeling) have often been noted [2]. The modeling effort depends on data from previously published studies, so data quality is an important consideration. While RCTs are generally considered the gold standard for data quality, there are other factors to weigh when using RCT efficacy results in a subsequent economic model: similarity in study design, completeness and consistency of reporting, and protocol-driven resource utilization all matter when compiling the evidence that will form the foundation of the simulation model.

Under “good circumstances”, several high-quality studies may be available for each intervention of interest, such that the reported outcomes (e.g. treatment success) are very similar across the set of studies. The patient population definitions should also be as similar as possible. And if a cost-utility analysis is desired, generic health preference scores must be available, or there must be a mapping algorithm from a disease-specific instrument [3].

However, in our experience, as well as that of others, this approach has many limitations [2]. There are often significant differences in study design among the studies used to inform the models, and the QALY information is frequently not ideal. While it is becoming more common to consider quality-of-life information earlier in the research process, the generic health preferences are very often not available from the studies that informed the efficacy parameters. In some cases it is necessary to map scores from one instrument to another, such as from the SF-12 to the EQ-5D [4].

While the above discussion covers some of the key limitations of economic modeling (i.e. simulation modeling), there are potentially others. In short, when the data quality is high, the required data are available, and the key studies are similar, the resulting economic model has a high likelihood of being both reasonably accurate and persuasive. More often than not, however, the actual situation is a mixed bag of higher-quality data alongside data that are not ideal or come from small samples. In these cases, once the full set of assumptions and limitations is considered, the resulting analysis may not be as persuasive as would be desirable.

While observational studies have limitations as well, some of the limitations that are part and parcel of simulation modeling can be overcome. Since these studies are generally prospective, there is the opportunity to design the study so that all of the necessary data elements are captured. Also, since the study is indeed “observational”, the limitations associated with protocol-driven resource utilization (i.e. from RCTs) are overcome. Another factor to consider is that the resulting dataset is an original dataset, rather than an amalgamation of data elements from previously published studies; this increases the interest and publishability of the work.

In summary, while a prospective observational study is somewhat more expensive and time-consuming to conduct, the benefits of this type of effort may well be worth it. The trends in publication numbers suggest that more sponsors are choosing to shift from health economic modeling toward observational studies in order to support reimbursement of their technologies.

 

References:

  1. PubMed. Available at: http://www.ncbi.nlm.nih.gov/pubmed/
  2. Weintraub WS, Cohen DJ.  The limits of cost-effectiveness analysis.  Circulation: Cardiovascular Quality and Outcomes. 2009; 2:55-58.
  3. Parker M, Haycox A, Graves J. Estimating the relationship between preference-based generic utility instruments and disease-specific quality-of-life measures in severe chronic constipation: challenges in practice. Pharmacoeconomics. 2011 Aug; 29(8):719-30.
  4. Sullivan PW, Ghushchyan V. Mapping the EQ-5D index from the SF-12: US general population preferences in a nationally representative sample. Med Decis Making. 2006 Jul-Aug; 26(4):401-9.

 

Topics: Health Economics, Outcomes Research, Real World, HEOR