Strong federal role needed to organize productive use of patient data
The era of big data is approaching. It’s hard to overestimate its potential for revolutionizing the practice of medicine, developing and improving technologies, and upgrading the delivery system’s overall efficacy and efficiency. Yet none of that will happen unless the government plays a prominent role in organizing the collection and use of the massive amounts of patient information that will soon be within reach.
To gauge the importance of the federal role, look no further than the $22.5 billion it will spend on physicians and hospitals to adopt electronic health records. That spending accelerated a process that, sadly, had lagged behind other industries. But now the digitization of medical records is irreversible: adoption officially passed the halfway point last month. That creates a golden opportunity for researchers and analysts to gather privacy-protected data and put it to productive use.
Within hospitals, system officials will have the capacity to quickly analyze variations in practice patterns, determine what works best, and encourage practitioners to make adjustments that improve patient outcomes. Officials can pinpoint the causes of high readmission rates, spot medication errors and identify and better manage high-cost patients.
More broadly, the era of big data allows providers and payers to merge their de-identified patient records into huge cloud-based databases. Given access to this trove of information, researchers could run retrospective clinical trials, conduct comparative effectiveness research and analyze the real-world outcomes of highly touted technologies to see whether they actually live up to their promise.
No area will benefit more from the era of big data than oncology. Nearly 200 forms of cancer strike more than 1.2 million Americans every year and cause more than 500,000 deaths. Yet no one collects or analyzes the outcomes from the varying treatment regimens that people receive.
Earlier this month, Public Health England announced it would begin collecting and genetically analyzing tumor samples while tracking treatment regimens and outcomes for every cancer patient in its National Health Service. The database will allow researchers to analyze how each cancer subtype responds to different regimens and to adjust future treatments accordingly. What about here in the U.S.? A few years ago, the National Cancer Institute formed the cancer Biomedical Informatics Grid (caBIG) to collect similar information. But the effort fell apart because few institutions chose to participate, and the agency lost interest.
The American Society of Clinical Oncology, the oncologists’ professional society, recently announced its own data-mining effort for breast cancer, encompassing 150,000 medical records. That’s a pittance compared with the quarter-million American women newly diagnosed with breast cancer every year.
Not every federal agency has fumbled the ball. The Food and Drug Administration launched its Sentinel project, which is collecting more than 125 million patient drug records to look for safety problems in approved drugs. Project managers are already churning out studies. But creating a cloud-based database for drug-safety research is simple compared with integrating the complex payment and patient records of the broader healthcare system. That will require collaboration among practicing physicians, hospitals and post-acute-care providers on the care side, along with claims data from public and private payers.
With hundreds of consultants and companies now offering big data analytics to their clients, the specter arises of a cacophony of competing, non-communicating “solutions.” It will take the strong hand of government to ensure the era of big data in healthcare doesn’t take as long to arrive as EHRs did.