The May 10 issue of The Cancer Letter details a recent publication describing a new AI tool that may be able to match cancer drugs more precisely to patients.
Authored by Sanju Sinha, PhD, assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys, and the NCI’s Eytan Ruppin, MD, PhD, the “Trials & Tribulations” feature describes a first-of-its-kind computational tool to systematically predict patient response to cancer drugs at single-cell resolution. The study regarding this new tool was published on April 18, 2024, in the journal Nature Cancer.
The Cancer Letter was founded in 1973 and focuses its coverage on the development of cancer therapies, drug regulation, legislation, cancer research funding, health care finance and public health.
Institute News
NIH director highlights Sanford Burnham Prebys and National Cancer Institute project to improve precision oncology
The NIH director’s blog features a recent publication detailing the study of a new AI tool that may be able to match cancer drugs more precisely to patients.
Monica M. Bertagnolli, MD, director of the National Institutes of Health (NIH), highlighted a collaboration between scientists at Sanford Burnham Prebys and the National Cancer Institute (NCI) on the NIH director’s blog. Bertagnolli noted advances that have been made in precision oncology approaches using a growing array of tests to uncover molecular or genetic profiles of tumors that can help guide treatments. She also recognized that much more research is needed to realize the full potential of precision oncology.
The spotlighted Nature Cancer study demonstrates the potential to better predict how patients will respond to cancer drugs by using a new AI tool to analyze the sequences of the RNA within each cell of a tumor sample. Current precision oncology methods take an average of the DNA and RNA in all the cells in a tumor sample, which the research team hypothesized could hide certain subpopulations of cells—known as clones—that are more resistant to specific drugs.
Bertagnolli said, “Interestingly, their research shows that having just one clone in a tumor that is resistant to a particular drug is enough to thwart a response to that drug. As a result, the clone with the worst response in a tumor will best explain a person’s overall treatment response.”
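The worst-clone rule described above can be sketched in a few lines. This is an illustrative sketch only, not the authors’ actual model: `predict_clone_response` here stands in for any predictor that scores an individual clone’s response to a drug, such as one trained on single-cell RNA profiles.

```python
# Illustrative sketch of the "worst clone" rule, not the published model.
# predict_clone_response is a hypothetical stand-in for a per-clone
# predictor returning a score from 0 (resistant) to 1 (fully responsive).

def predict_tumor_response(clone_profiles, drug, predict_clone_response):
    """The tumor's overall response is limited by its least responsive clone."""
    scores = [predict_clone_response(clone, drug) for clone in clone_profiles]
    return min(scores)  # one resistant clone is enough to thwart the drug
```

Under this rule, a tumor whose clones score 0.9, 0.8 and 0.2 against a drug would be predicted to respond at 0.2, that is, poorly, even though most of its cells are sensitive.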
Sanju Sinha, PhD, assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys, is the first author on the featured study.
Media coverage of AI study predicting responses to cancer therapy ranks top 5% among published research
Last week, Sanford Burnham Prebys and the National Cancer Institute shared findings regarding a first-of-its-kind computational tool to systematically predict patient response to cancer drugs at single-cell resolution.
Many news outlets and trade publications took note of this study and the computational tool’s potential future use in hospitals and clinics. This coverage placed the paper in the top 5% of all manuscripts ranked by Altmetric—a service that tracks and analyzes online attention to published research to improve the understanding of how research affects people and communities.
The results from the highlighted study were published on April 18, 2024, in the journal Nature Cancer.
“Our goal is to create a clinical tool that can predict the treatment response of individual cancer patients in a systematic, data-driven manner. We hope these findings spur more data and more such studies, sooner rather than later,” says first author Sanju Sinha, PhD, assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys.
Here are a few of the venues that helped spread the word about this research:
AP News: “Researchers … suggest that such single-cell RNA sequencing data could one day be used to help doctors more precisely match cancer patients with drugs that will be effective for their cancer.”
Politico, fourth story in Future Pulse newsletter: “Our hope is that being able to characterize the tumors on a single-cell resolution will enable us to treat and target potentially the most resistant and aggressive [cells], which are currently missed.”
NIH.gov: “The researchers discovered that if just one clone were resistant to a particular drug, the patient would not respond to that drug, even if all the other clones responded.”
Inside Precision Medicine: “The model was validated by predicting the response to monotherapy and combination treatment in three independent, recently published clinical trials for multiple myeloma, breast, and lung cancer.”
“I’m very pleased with how many news outlets covered our work,” Sinha says. “It is important and will help us continue improving the tool with more data so it can one day benefit cancer patients.”
From postdoc to PI, it’s a journey. Don’t forget to pack some support
The journal Nature Cancer asked a dozen early-career investigators to share their thoughts and experiences about starting their own labs in 2023. Among them: Sanju Sinha, PhD, who joined Sanford Burnham Prebys in June. Below is his essay. You can read the rest here.
Don’t forget to pack support
Starting my laboratory in 2023, against the backdrop of a world emerging from a pandemic, was a whirlwind of excitement and anxiety.
The goal for my laboratory is to understand cancer initiation and use this knowledge to develop preventative therapies—a goal appreciated by many, yet understudied and underfunded. We are aiming to achieve this by developing computational techniques based on machine learning and leveraging big data from various sources, such as healthy tissues, pre-cancerous lesions and tumors. This journey has taken several unexpected turns, with its fair share of delights and challenges.
One significant hurdle appeared early: hiring. I recall the advice I received: “Forget it, you can’t hire a postdoc as an early-stage laboratory.” This made me ponder—if I were to choose right now, would I pursue a postdoc? My immediate answer was no. It struck me then: the traditional postdoc route needed a revamp.
Determined to instigate change, I introduced a new role: computational biologist. This position, an alternative to a postdoc, was tailored for transitioning to industry and offered better pay. The response was staggering—more than 400 applications.
Now, I’m proud to lead a fantastic team of three computational biologists from whom I am continually learning. This experience taught me a valuable lesson: crafting roles that serve both the goals of the laboratory and the career aspirations of the applicants can make a world of difference. I urge new principal investigators to shatter norms and design roles that provide fair compensation and smooth industry transition—reflecting the reality of the current job market.
However, the path to establishing a new laboratory was not without setbacks. Rejection is common in this field. I have already experienced a grant rejection and, considering the average grant success rate, I am prepared for many more.
Amid these challenges, my support system proved to be my lifeline. I’m grateful to be part of Sanford Burnham Prebys, which has proved to be more than just a top biomedical research institution. It is a community that provides unparalleled support for early principal investigators through generous startup packages, administrative assistance, hiring and grant-writing guidance, and a network of compassionate peers and mentors.
Equally important is my personal support system—my family, partner and friends who remind me that there is life beyond science, helping me maintain my well-being. This balance, I have realized, is the most crucial tool for anyone on a similar journey—so do not forget to pack support for the ride.
Is cloud computing a game changer in cancer research? Three big questions for Lukas Chavez
As an assistant professor at Sanford Burnham Prebys and director of the Neuro-Oncology Molecular Tumor Board at Rady Children’s Hospital, Lukas Chavez, PhD, leverages modern technology for precision diagnostics and for uncovering new treatment options for the most aggressive childhood brain cancers.
We spoke to Chavez about his work and asked him how modern technology—particularly cloud computing—is shifting the approach to cancer research.
How are you using new technologies to advance your research?
New technologies are helping us generate a huge amount of data as well as many new types of data. All this new information at our disposal has created a pressing need for tools to make sense of it and maximize its benefits. That’s where computational biology and bioinformatics come into play. The childhood brain cancers I work on are very rare, which has historically made it difficult to study large numbers of cases and identify patterns.
Now, data for thousands of cases can be stored in the cloud. By creating data analysis tools, we can reveal insights that we would never have seen otherwise. For example, we’ve developed tools that can use patient data in the cloud to categorize brain cancers into subtypes we’ve never identified before, and we’re learning that there are many more types of brain tumors than we previously understood. We’re basically taking the classic histopathological approach that people have used for decades, looking at tumor tissues under the microscope, and turning it into data science.
How is cloud computing improving cancer research in general?
Assembling big datasets delays everything, so I believe the main idea of cloud computing is really to store data in the cloud, then bring the computational tools to the data, not the other way around.
My team did one study where we assembled publicly available data, and basically downloaded everything locally. The data assembly process alone took at least two to three years because of all the data access agreements and legal offices that were involved.
And that is the burden that cloud computing infrastructures remove. All of this personalized cancer data can be centrally stored in the cloud, which makes it available to more researchers while keeping it secure to protect patient privacy. Researchers can get access without downloading the data, so they are not responsible for data protection anymore. It’s both faster and more secure to just bring your tools to the data.
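The “bring your tools to the data” pattern Chavez describes can be sketched as a tiny program. Everything here is hypothetical: the dataset, the `run_in_enclave` function, and the in-memory “enclave” all stand in for a real secure cloud platform and its API, which this sketch does not attempt to reproduce.

```python
# Purely illustrative sketch of the "bring the tools to the data" pattern:
# researcher-supplied code runs where the data lives, and only derived
# results leave the secure environment. All names are hypothetical.

PROTECTED_DATASETS = {  # stands in for data held in a secure cloud enclave
    "tumor_rnaseq_v1": [{"patient": i, "expression": i * 0.1} for i in range(10)],
}

def run_in_enclave(dataset_id, analysis_fn):
    """Execute the analysis next to the data; raw records never leave."""
    data = PROTECTED_DATASETS[dataset_id]
    return analysis_fn(data)  # should return only aggregate results

# A researcher submits a tool instead of downloading patient-level data:
mean_expression = run_in_enclave(
    "tumor_rnaseq_v1",
    lambda rows: sum(r["expression"] for r in rows) / len(rows),
)
```

The design point is the inversion of responsibility: because only the aggregate result crosses the boundary, the researcher never holds identifiable data and is no longer responsible for protecting it.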
Are there any risks we need to be aware of?
Like any new technology, we need to be clear about how we use it. The technology is another tool in the toolbox of patient care. It will never entirely replace physicians and researchers, but it can complement and assist them.
Also, because we use costly and sophisticated tools that are being built and trained on very specific patient groups, we need to be careful that these tools are not only helping wealthier segments of society. Ideally, these tools will be expanded worldwide to help everybody affected by cancer.
Three big questions for cutting-edge biologist Will Wang
Will Wang’s spatial omics approach to studying neuromuscular diseases is unique.
He works at the intersection of biology and computer science to study how complex systems of cells interact, specifically focusing on the connections between nerves, muscles, and the immune response and their role in neuromuscular diseases.
We sat down with Wang, who recently joined the Institute as an assistant professor, to discuss his work and how computer technology is shaping the landscape of biomedical research.
How is your team taking advantage of computer technology to study neuromuscular diseases?
No cell exists in isolation. All our cells are organized into complex tissues with different types of cells interacting with each other. We study what happens at these points of interaction, such as where nerves connect to muscle cells. Combining many different types of data, such as single-cell sequencing, spatial proteomics, and measures of cell-cell signaling, helps us get a more holistic look at how interactions between cells determine tissue function, as well as how these interactions are disrupted in injury and disease. Artificial neural networks help us make sense of these different types of data by finding patterns and insights the human brain can’t see on its own. And because computers can learn from the many modalities of data that we gather, we can also use them to help predict how biological systems will behave in the lab. The process goes both ways – from biology to computers and from computers to biology.
How will these technologies shape the future of biomedical research?
Biology and computer programming are two different languages. There are a lot of mathematicians and programmers who are great at coming up with solutions to process data, but biological questions can get lost in translation and it’s easy to miss the bigger picture. And pure biologists don’t necessarily understand the full scope of what computers can do for them. If we’re going to get the most out of this technology in biomedical research, we need people with enough expertise in both areas that they can bridge the gap, which is what our lab is trying to do. Over time we’re going to see more and more labs that combine traditional biological experiments and data analysis approaches with artificial intelligence and machine learning.
Are there any potential risks to these new technologies?
Artificial intelligence is here to accelerate discovery. Mundane tasks and measurements that took me weeks to carry out as a graduate student can be automated and completed in a matter of minutes. We can now find patterns in high-dimensional images that the human brain can’t easily visualize. However, any kind of artificial intelligence comes with a certain amount of risk if people don’t understand when and how to use the tools. If you just take the absolute word of the algorithm, there will inevitably be times when it’s not correct. As scientists, we use artificial intelligence as a cutting-edge discovery tool, but we need to validate the findings in terms of the biology. At the end of the day, it is us, scientists, who are here to drive the discovery process and design real-life experiments to make sure our therapies are safe and efficacious.
Simulation matters at Lake Nona Research Day, from cells to big data
Scientists, physicians and trainees recently gathered at the first Lake Nona Research Day to share the latest research and technologies that are contributing to innovations in health care. The event brought together senior and junior practitioners from Medical City’s five institutions.
“As we planned the symposium, we decided to focus on the trainees, who then became the glue that brought everything together,” said Philip Wood, D.V.M., PhD, director of academic affairs at Sanford Burnham Prebys Medical Discovery Institute (SBP) at Lake Nona and chair of the Medical City Research Council. “Their enthusiasm to share their science is evident in the 120 research posters that highlight the research emerging from SBP, the University of Central Florida, the University of Florida, Nemours Children’s Hospital, and the Orlando VA Medical Center.” The symposium was presented by the Lake Nona Institute.
Disease modeling by high-tech simulation and data mining were themes of featured talks. Lawrence Lesko, PhD, professor, Center for Pharmacometrics at UF, described using biosimulation to project drug performance in virtual patients. “What we do is like a flight simulator—we evaluate drug impact before testing in patients, frequently focusing on drug-drug interactions,” said Lesko.
Similarly, Daniel Kelly, MD, scientific director of SBP at Lake Nona, spoke about his lab’s work to study the changes in mitochondria function that are seen in heart failure patients and to simulate disease in a dish using human induced pluripotent stem cell-derived cardiomyocytes. “We need to become mitochondrial doctors to treat heart failure,” said Kelly. “These models will help us discover therapeutic approaches tailored to the etiology of a subset of heart failure cases that could be given earlier than current treatments.”
Steven Kern, PhD, deputy director, Quantitative Sciences at the Bill and Melinda Gates Foundation, delivered the keynote on using data to decide how to invest $1 billion in precision public health projects on a global scale. “We build drug-disease models to determine how to prevent epidemics like malaria. In our Healthy Birth and Growth Project, we model real-world data to determine the right interventions, in the right dosage, to get the right response—to get children to the healthiest stage at 100 days of life,” explained Kern.
David Odahowski, president and CEO of the Edyth Bush Charitable Foundation, which sponsored the symposium, concluded the program by observing that innovation often comes from the intersection of disciplines. “I think what we learned today is that collaboration is the true measure of success and that is especially true here in Medical City.”
Why share data from clinical trials? SBP’s CEO Perry Nisen weighs in
Sharing clinical trial data with researchers who weren’t involved in the original study maximizes the value of patients’ participation, allowing more research questions to be answered than those of the original study. However, figuring out what data should be shared and how to do it has proven to be difficult.
The most recent issue of the New England Journal of Medicine devoted three perspective articles and an editorial to the topic of data sharing. Perry Nisen, MD, PhD, CEO of Sanford Burnham Prebys Medical Discovery Institute (SBP), and his colleagues discuss efforts to share clinical trial data and the hurdles that investigators still face.
“One of the risks is that there will not be a single simple system where these data can be accessed and analyzed, and the benefits of meta-analyzing data from multiple studies will be limited by cost and complexities,” said Nisen.
GlaxoSmithKline was a first mover in making anonymized patient-level data available from clinical trials. In 2013, the Clinical Study Data Request was established. The site is now managed by the Wellcome Trust, an independent, non-sponsor safe harbor, and includes more than 3,000 trials from 13 industry sponsors.
Nisen answers key questions about the future of clinical data sharing:
Q: Why should research sponsors go to the expense of sharing data?
Clinical data sharing is the right thing to do for science and society. First, it increases transparency of clinical trial data. It maximizes the contribution of trial participants to new knowledge and understanding. This allows researchers to confirm or refute findings, and enables them to generate other hypotheses. Scientific research globally is moving toward more transparency in clinical trial reporting and this is an important step toward building trust.
Q: What are the challenges to a one-stop shop for sharing all clinical trials data?
Protecting patient privacy and confidentiality is a major concern. So are ensuring the data are used for valid scientific investigation, preventing erroneous claims of benefit or risk, and controlling the cost of anonymizing data in formats investigators can use effectively.
Other challenges inherent in data sharing include patient consent, data standards, standards for re-use, conflicts of interest, and intellectual property.
The editorial, also co-authored by Frank Rockhold, PhD, professor of biostatistics and bioinformatics at the Duke Clinical Research Institute, and Andrew Freeman, BSc, head of medical policy at GlaxoSmithKline, is available online here.
Aspiring scientists tackle big data at Sanford Burnham Prebys Medical Discovery Institute
Growing up, Courtney Astore was inquisitive about science and technology. So when she had the opportunity to participate in middle school science fairs, she jumped at the chance. In high school, Astore’s research in behavioral and social science, medicine and elaborate statistical algorithms led to her being a finalist at the Intel International Science and Engineering Fair three times.
Today, as an incoming sophomore at the University of Central Florida (UCF), Astore is majoring in Biotechnology with a focus on Bioinformatics. Together, with her lifelong friend Rebecca Elsishans at the University of Florida, she plans to launch a start-up company called Enasci-x that will use genetic analysis to aid in vaccine development.
Executives at UCF’s business incubator contacted Leslie Molony, PhD, senior director of Business Development for Sanford Burnham Prebys Medical Discovery Institute’s (SBP) Lake Nona campus, to inquire about providing training to aspiring scientists enrolled in the National Science Foundation I-Corps™-funded LaunchPad program.
The LaunchPad program fosters entrepreneurial research designed to help the commercialization of technology. Molony guided Astore and Elsishans in the biological science and business aspects of forming a start-up for their first product-in-development called Genes4Vaccines.
Her students received guidance on a top-level list of ‘how-to’s’:
how to understand protein structures
how to generate data that can lead to new drug discovery
how to define new products and commercialize them
how to develop business plans and ‘pitching’ strategies
“Courtney and Rebecca are phenomenal young women who are very eager to understand how the medical research process—vaccine discovery—can lead to commercial products,” said Molony. “They have great potential to become software service providers, or to use their talents to discover new vaccine targets that may lead to partnerships with pharmaceutical companies.”
“In terms of where we are today and how we’ve been able to map out what we need to do, we couldn’t have done any of this without Dr. Molony,” said Astore. “Her drug discovery background and business development expertise have opened our eyes to the potential of what we can accomplish, and what we need to do to get there. We know our next big steps are to finalize our minimum viable product, get data validation in the lab and then attract investors.”
Big data for medical research, adds Molony, is a growing niche in the field of infectious disease where vaccine and therapy needs arise quickly and unexpectedly.
To augment her students’ training, she connected Astore with Fraydoon Rastinejad, PhD, professor in SBP’s Center for Metabolic Origins of Disease at Lake Nona, who offered her a summer internship where she’ll be collecting data and analyzing human disease databases.
“Dr. Rastinejad is one of the most renowned researchers in the field, and I’m honored to have the opportunity to work with him. My internship will give me a deeper base of scientific knowledge to advance my research,” said Astore. “To work hands-on in his lab, analyzing data that recognizes patterns and clues to disease development is a dream come true.”
It’s called the Precision Medicine Initiative (PMI) Cohort Program, and it was just announced in February by President Obama. If you join the cohort (group of subjects tracked over a long period of time), you can help researchers improve precision medicine, in which doctors select the treatments and preventive strategies that will work best for each patient. This program is just one component of the larger Precision Medicine Initiative announced during last year’s State of the Union address.
What’s the goal? According to NIH Director Francis Collins, the cohort program “seeks to extend precision medicine to all diseases by building a national research cohort of one million or more U.S. participants,” all enrolled by 2019.
Why recruit so many people? Since the program is intended to benefit people affected by many diseases and conditions, it must include large, representative samples of people with each type. Large samples increase the likelihood that studies using these data will find new associations and interactions among genes, environmental factors, and disease risk.
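As a back-of-the-envelope illustration of why sample size matters (the numbers here are hypothetical, not from the PMI), a standard power approximation for comparing disease rates between two groups shows how a small risk difference that is invisible in a 1,000-person-per-group study becomes almost certain to be detected at cohort scale:

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power of a two-sided z-test comparing two proportions."""
    nd = NormalDist()
    se = ((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group) ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)      # about 1.96 for alpha = 0.05
    z_effect = abs(p2 - p1) / se
    return nd.cdf(z_effect - z_crit)

# Hypothetical example: a factor raises disease risk from 10.0% to 10.5%.
small = power_two_proportions(0.100, 0.105, n_per_group=1_000)
large = power_two_proportions(0.100, 0.105, n_per_group=500_000)
```

With these illustrative numbers, the small study has almost no chance of detecting the effect (power well under 10%), while the large one detects it essentially every time, which is the statistical logic behind recruiting a million-person cohort.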
What will participants do? Volunteers will share their health records, complete surveys on lifestyle and environmental exposures, undergo a physical, and provide a biological sample (e.g. blood) for genetic testing.
How will people benefit? Participants will be considered partners in research—they’ll have access to their genetic data and, where possible, how their genes, surroundings, and habits affect their health. They’ll also have a say in how the research is conducted and what questions it should address.
Who’s running it? The NIH is overseeing the whole program, but it will be directly run from multiple institutions (which are currently being selected). The pilot phase will be led by Vanderbilt University and Verily (formerly Google Life Sciences).
What’s the cost? $130 million has been allotted in this fiscal year, but more money will be needed to keep the program going.
Should I be excited about it? Maybe. Some leaders in the health field have criticized the program for throwing money at the latest big thing instead of more low-tech problems like unequal access to healthcare, but such a huge data resource is bound to lead to answers to many important questions.
What are the challenges for the PMI?
Scale—The program will generate one of the largest clinical databases yet, and it’s not clear how difficult it will be to make systems that can store and analyze it.
Privacy—Data will be anonymized, but keeping the health information of a million people in one place might represent a target for hackers sophisticated enough to figure out participants’ identities.
Interoperability—Health record systems are notoriously incompatible with one another. Though the PMI also has provisions to correct this, it likely won’t be a quick fix.
How can I sign up? Enrollment has not yet begun, but the NIH will announce when the public can get involved. So stay tuned…