24 April 2017

Forget Bigger Data – Personalized Medicine Needs Smarter Data To Reach Its Full Potential

Foley & Lardner


Foley & Lardner LLP looks beyond the law to focus on the constantly evolving demands facing our clients and their industries. With over 1,100 lawyers in 24 offices across the United States, Mexico, Europe and Asia, Foley approaches client service by first understanding our clients’ priorities, objectives and challenges. We work hard to understand our clients’ issues and forge long-term relationships with them to help achieve successful outcomes and solve their legal issues through practical business advice and cutting-edge legal insight. Our clients view us as trusted business advisors because we understand that great legal service is only valuable if it is relevant, practical and beneficial to their businesses.

The advent of big data has helped enable the growth of personalized medicine. But if machine learning and analytics are to truly help transform health care, it won't be through bigger data, but through harmonized, smarter data.

That was the key takeaway from a panel discussion at the Business of Personalized Medicine Summit on March 28 in San Francisco. Led by Foley partner Beni Surpin, the panel explored how data systems and analytics tools could usher in a revolution in personalized medicine. Surpin and the four assembled experts touched on a wide range of salient topics and expressed a broad diversity of viewpoints. But throughout the discussion the panelists emphasized the importance of data quality.

"With all the data we're able to generate, the question now is whether bigger is better," said Surpin to kick off the conversation.

The answer is no, according to Nicholas Donoghoe, partner at McKinsey & Co. "We're quickly approaching a moment where we hit a bottleneck around interpretation," he said, "where separating the signal from the noise will be extremely difficult."

As providers and payers collect and combine data sets, the key will be ensuring that those sets are correctly linked and that the data itself provides enough depth to yield real insights, Donoghoe said. For instance, doctors in China might record a million patients' responses to a single question. That would create a large data set, but if the question is too simplistic – "What brings you in today?" for example – the data will have little value in developing personalized treatments.

That points to the need to set consistent standards for collecting data – a challenge in the fragmented world of health care providers. "If we're going to produce useful data we need to have a common vocabulary," said Gary (Yuan) Gao, Co-founder, President and Chief Executive at Med Data Quest and Co-Founder and Chairman at Singlera Genomics. "And we need to focus on consistency of data, not just quantity." He argued that physicians understand the value of personalized medicine, and want to embrace it, but need to be educated through medical journals and other trusted sources.

Tara Maddala, Head of Biostatistics and Data Management at GRAIL, one of the leading providers of data analytics for personalized medicine, said the way to get to clinical adoption is through interpretation. A statistician by training, she emphasized the importance of returning to first principles in data analytics. "Big data is a means to an end, but it's not the end," she said. "We need to think about the end points so we can harmonize the data, and that requires statistical analysis."

The proliferation of different, proprietary data sets is also slowing the growth and penetration of personalized medicine, the panel agreed. Each pharmaceutical company, bio bank and research organization has already collected large amounts of data from clinical trials, patients, providers and other sources. But the data an organization owns might not contain the insights it needs to achieve a breakthrough in personalized medicine.

There is talk in the industry of creating a collective data-sharing architecture that would allow all the players to access all the data. A so-called pre-competitive master database could help the field of personalized medicine take off, Donoghoe said, by enabling firms to shift the focus from data collection to data interpretation. "At that point you're not competing on what data you have," he said. "You're distinguishing yourself by how you analyze it."

But creating a collaborative database requires caution, said Leeland Ekstrom, Managing Director, BioVU Partnerships at Vanderbilt University Medical Center and Founder of Nashville Biosciences. Vanderbilt and other organizations will be reluctant to share data that they acquired at significant cost and effort. "When I hear pre-competitive what I think is 'You want to use my data without paying me for it,'" he said. To address that concern, he suggested appointing an honest broker to maintain the data and to ensure that all users extract value equal to what they put in.

Such a clearinghouse could also help standardize the language around data analytics – another important step to achieving significant adoption, and to unleashing the power of personalized medicine.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


