This past year, President Obama and Vice President Biden brought new energy to our country’s efforts to end cancer as we know it with the Cancer Moonshot initiative. The goal of the Moonshot is admirable – accelerating the progress of cancer research – but even its most fervent supporters know that there are barriers to achieving success.
First, a lack of collaboration between institutions, researchers and industry has long been an obstacle to moving the needle faster, in part because the healthcare system is set up to reward those who make diagnostic and drug discoveries. We reward patenting and publishing. With limited public funding and philanthropic support, investigators are often motivated to build walls around their research and silo their data instead of sharing ideas and democratizing their findings.
Second, highly regulated industries tend to adopt technology more slowly than non-regulated ones. As a result, cancer research has not kept pace in integrating information technology and modern software analytics that have impacted so many other sectors of our economy. The core technology platforms that are prevalent in most major hospitals are antiquated by today’s standards, leaving physicians without the tools they need to quickly access and analyze critical information.
Which brings me to the third barrier: data. There is simply not enough data for researchers to analyze and for clinicians to work with to effect change. While personalized medicine is happening in isolation, it is nearly impossible to scale these efforts without vast amounts of phenotypic, therapeutic, and molecular data. As of today, the two largest combined public data sets include data on roughly 20,000 patients, a tiny fraction of the nearly 50 million people who are living with cancer worldwide. And when you break these data sets down by cancer subtype, you are often left with a few hundred patients in total – far too few to produce statistically significant patterns.
While the Cancer Moonshot is admirable, there is a structural problem that must be addressed concurrently if we hope to tackle a disease that has more than 50 million years of evolution on its side. The healthcare system needs an independent data and analytics platform that physicians can connect to and utilize in both research and clinical settings. In other words, we need an Operating System for cancer.
The first step in building such a system is to foster the collection of molecular data. We need to take genomic sequencing and molecular profiling from a tool used largely in research to the status quo whenever a patient is diagnosed with cancer (especially late-stage or metastatic cancer). The second step is to combine molecular data with vast amounts of phenotypic, therapeutic, and outcome data extracted from electronic medical records, so we can analyze patients at a far more granular level, looking for clinically relevant patterns. And the final step (for now) is to build a scalable technology platform that allows physicians to test different therapies digitally, as well as biologically through in vivo and in vitro modeling, to see which ones might have the greatest impact on a patient’s disease.
I’ve built my career by using software, data and analytics to disrupt industries such as printing, logistics, media, manufacturing and local commerce. If we hope to have an impact on the nearly 1.7 million people who will be newly diagnosed with cancer this year in the United States alone, we need to disrupt the system, and that disruption begins by assembling all the necessary components of an Operating System that unifies the collection and analysis of clinically relevant data.
That’s the goal behind Tempus, a company I launched about a year ago to enable physicians to deliver more personalized cancer care by harnessing big data and machine learning as tools in treating their patients.
We have recruited a team of accomplished geneticists, computational biologists, data scientists and engineers who have developed software and analytic tools that work within a hospital’s existing infrastructure to augment the care that physicians are able to provide. These tools arm oncologists, pathologists, radiologists, and surgeons with data and insights that can help them make real-time, data-driven decisions.
We have built a platform with the capacity to analyze the molecular and clinical data of millions of patients fighting cancer. With this, we can provide physicians (and their patients) with the insight generated from those who have come before.
To win the battle against cancer, we need to bring together technologists, scientists, and physicians to work toward a common solution, all contributing data and insight to a collective system. Only through the universal adoption of a truly ubiquitous learning system (an Operating System for cancer) can we hope to lay the foundation for precision medicine in cancer care.
When the personal computer was first built, in a garage in California, it was just a pile of sensors and circuit boards, until someone wrote the first operating system. That system connected the keyboard to the screen. It fired up processors when the power was turned on. It ran applications and allowed people to enter commands. It unified a bunch of disparate functions into a cohesive experience.
We need the same thing in cancer today. We need one system that unifies disparate activities and provides the basic technology infrastructure that physicians need to dispense care.
It’s time that we put technology and big data to use in personalizing cancer treatment – giving physicians, patients and their families the fighting chance they deserve.