July 29, 2019
Opportunities for AI in Healthcare and Big Data Challenges
In a panel discussion at the MedCity CONVERGE conference exploring the opportunities and challenges posed by the data underpinning AI tools, panelists representing big pharma, health tech and clinical data networks shared some of their insights.
Although artificial intelligence in healthcare is an area of intense interest, exploring the topic is akin to peeling an onion — each layer revealing a new set of opportunities and obstacles.
While clinical decision support and identifying targets for drug development are frequently cited examples of where AI is making an impact in healthcare, a deeper dive soon reveals the limitations of some data sources, how some companies are addressing them, and where standards and best practices are needed if this relatively young and vibrant corner of health tech is to continue to evolve in life sciences and healthcare.
The panelists included Chris Boone, head of real-world data and analytics at Pfizer; Gaurav Singal, chief data officer at Foundation Medicine; Janak Joshi, chief technology officer and head of strategy at Life Image; and Nate Nussbaum, senior medical director at Flatiron Health. Brenda Hodge, chief marketing officer for healthcare at Nuance, served as the moderator.
One of the challenges in harnessing the data to support AI in healthcare is simply gathering it. Nussbaum explained how Flatiron Health makes sense of the data:
“Teams of human abstractors review EHR data to understand what that unstructured documentation actually means and to pull the data out in a way that we can use as a source of truth. We then use it to build models, to gauge the quality of those models, and to understand things like how much bias a machine-learning model is introducing, so that we can ask research questions and have confidence in the answers.”
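The workflow Nussbaum describes can be sketched in a few lines: human-abstracted labels act as the gold standard, and a model's output is scored against them, both for overall agreement and for systematic bias. The labels and the simple rate-based bias check below are invented purely for illustration; Flatiron's actual methods are not public in this article.

```python
# Hypothetical sketch: human-abstracted labels as the source of truth.
# All labels here are invented for illustration.
truth = ["smoker", "non-smoker", "smoker", "smoker", "non-smoker", "smoker"]
model = ["smoker", "smoker", "smoker", "non-smoker", "smoker", "smoker"]

# Overall agreement with the abstracted gold standard.
agreement = sum(t == m for t, m in zip(truth, model)) / len(truth)

def rate(labels, cls):
    """Fraction of records assigned a given class."""
    return sum(label == cls for label in labels) / len(labels)

# A simple bias check: does the model over- or under-call one class
# relative to the human abstractors?
bias = rate(model, "smoker") - rate(truth, "smoker")

print(f"agreement={agreement:.2f}, smoker-rate bias={bias:+.2f}")
# prints: agreement=0.50, smoker-rate bias=+0.17
```

Even this toy version shows why the abstracted ground truth matters: without it, neither the agreement rate nor the direction of the model's bias could be measured at all.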
AI in oncology care — what’s being done and where is the potential?
Joshi cited the need for the clinical context found in unstructured data to assess therapy effectiveness, pointing to collaborations between Life Image and life science organizations that are working to identify the potential indicators.
“We are working with a couple of companies on non-small cell lung cancer to identify the signals that can potentially indicate therapy effectiveness. How do you conduct comparative effectiveness by marrying both genomic biomarker data as well as imaging data?
“Radiomics is a relatively new concept of marrying genomic biomarker data with imaging data. Currently, the output of this model is unknown. But what we are finding is an increasing need, utility and, most importantly, clinical relevancy of using not only medical claims data or structured data sets coming from EHRs but also the unstructured data coming from everything else that surrounds the patient.”
Joshi also cited another project the company is working on that illustrates how difficult it can be to develop accurate machine learning algorithms that effectively read and understand medical images in a clinical context.
“Writing a simple query that indicates how many patients diagnosed with non-small cell lung cancer were former smokers with cancer diagnosed specifically in the left lung is actually quite burdensome. The indication of ‘left lung’ is very hard to find in imaging data sets coming from PACS systems in hospitals. It is often a manually curated effort where a human says, ‘This is a left lung; this is a right lung,’ but, if you flip the image, you end up with false positives and false negatives. Life Image is essentially using [Cloud] AutoML functionality to identify that label. But more important than the label is going to be the classification around it. Once you know it’s a left lung, you need to determine how many other left lungs exist in your data set and whether there is a pattern at the pixel level associated with that. The labeling, classification, and normalization across multiple different vendors is a really hard problem to solve.”
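The "simple query that isn't" can be made concrete with a small sketch. When laterality lives only in free-text report fields, a keyword match both misses abbreviated mentions and matches negated ones, which is exactly the false-positive/false-negative problem Joshi describes. All records, field names, and report phrasings below are invented for illustration.

```python
# Hypothetical sketch: querying "NSCLC, former smoker, left lung" when
# laterality is buried in free text. Records are invented for illustration.
records = [
    {"dx": "non-small cell lung cancer", "smoking": "former",
     "report": "Mass in the left upper lobe."},
    {"dx": "non-small cell lung cancer", "smoking": "former",
     "report": "LUL nodule, 2.1 cm."},                 # laterality abbreviated
    {"dx": "non-small cell lung cancer", "smoking": "never",
     "report": "Right hilar mass."},
    {"dx": "non-small cell lung cancer", "smoking": "former",
     "report": "No left-sided findings; right lower lobe lesion."},  # negated
]

def naive_left_lung_query(rows):
    """Keyword match on the report text: the 'simple query' that isn't."""
    return [r for r in rows
            if r["dx"] == "non-small cell lung cancer"
            and r["smoking"] == "former"
            and "left" in r["report"].lower()]

hits = naive_left_lung_query(records)
print(len(hits))  # prints 2, but only 1 of the matches is actually correct
```

The abbreviation "LUL" produces a false negative and the negated "No left-sided findings" a false positive, out of just four records, before any pixel-level labeling or cross-vendor normalization even begins.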
Next week: A look at the democratization of data and the need to balance this with ethical considerations.