AI Series Archives - Sanford Burnham Prebys
Institute News

Acceleration by automation 

Author: Greg Calhoun
Date: September 5, 2024

Increases in the scale and pace of research and drug discovery are being made possible by robotic automation of time-consuming tasks that must be repeated with exhausting exactness.

Humans have long been fascinated by automata, objects that move and act, or appear to, of their own volition. From the golems of Jewish folklore to Pinocchio and Frankenstein’s Creature—among the subjects of many other tales—storytellers have explored the potential consequences of creating beings that range from obedient robots to sentient saboteurs.

While the power of our imagination preceded the technology needed for such feats of automation, many scientists and engineers throughout history succeeded in creating automata that were as amusing as they were technically masterful. Three doll automata made by inventor Pierre Jaquet-Droz traveled around the world to delight kings and emperors by writing, drawing and playing music, and they now fascinate visitors to the Musée d’Art et d’Histoire of Neuchâtel, Switzerland.

While these more whimsical machines can be found in collections from the House on the Rock in Spring Green, Wis., to the Hermitage Museum in Saint Petersburg, Russia, most modern automation is built for labor and found in factories and workshops. More than 110 years after the introduction of the assembly line, the level of automation at research institutions does not compare to that of many manufacturing facilities, nor should it, given their differing aims. However, the mechanization of certain tasks in the scientific process has been critical to making the latest biomedical research techniques more accessible and current drug discovery methods possible.


“Genomic sequencing has become a very important procedure for experiments in many labs,” says Ian Pass, PhD, director of High-Throughput Screening at the Conrad Prebys Center for Chemical Genomics (Prebys Center) at Sanford Burnham Prebys. “Looking back just 20-30 years, the first sequenced human genome required the building of a robust international infrastructure and more than 12 years of active research. Now, with how we’ve refined and automated the process, I could probably have my genome sampled and sequenced in an afternoon.”

While many tasks in academic research labs require hands-on manipulation of pipettes, petri dishes, chemical reagents and other tools of the trade, automation has been a major factor enabling omics and other methods that process and sequence hundreds or thousands of samples to capture incredible amounts of information in a single experiment. Many of these sophisticated experiments would be simply too labor-intensive and expensive to conduct by hand.

Where some automata of yore would play a tune, enact a puppet show or tell a vague fortune when a coin was inserted, scientists now prepare samples for instruments equipped with advanced robotics, precise fluid-handling technologies, cameras and integrated data analysis capabilities. Automated liquid handling has enabled one of the biggest steps forward because it allows tests to be miniaturized. This not only yields major cost savings but also allows experiments to include many replicates, generating very high-quality, reliable data. Such data are a critical underpinning for ensuring the integrity of the scientific community’s findings and maintaining the public’s trust.

“At their simplest, many robotic platforms amount to one or more arms that have a grip that can be programmed to move objects around,” explains Pass. “If a task needs to be repeated just a few times, then it probably isn’t worth the effort to deploy a robot. But, once that step needs to be repeated thousands of times at precise intervals, and handled the exact same way each time, then miniaturization and automation are the answers.”

Ian Pass, PhD, is the director of High-Throughput Screening at the Conrad Prebys Center for Chemical Genomics.

As a premier drug discovery center, the Prebys Center team is well-versed in using automation to enable the testing of hundreds of thousands of chemicals to find new potential medicines. The center installed its first robotics platform, affectionately called “big yellow,” in the late 2000s to enable what is known as ultra-high-throughput screening (uHTS). Between 2009 and 2014, this robot was the workhorse for completing more than 100 uHTS campaigns of a large chemical library. It generated tens of millions of data points as part of an initiative funded by the National Institutes of Health (NIH) called the Molecular Libraries Program, which involved more than 50 research institutions across the U.S. The output of the program was the identification of hundreds of chemical probes that have been used to accelerate drug discovery and launch the field of chemical biology.

“Without automation, we simply couldn’t have done this,” says Pass. “If we were doing it manually, one experiment at a time, we’d still be on the first screen.”

Over the past 10 years, the center has shifted focus from discovering chemical probes to discovering drugs. Fortunately, much of the process is the same, but the scale of the experiments is even bigger, with screens of more than 750,000 chemicals. To screen such large libraries, highly miniaturized arrays are used in which 1,536 tests are conducted in parallel. Experiments are miniaturized to such an extent that hand pipetting is not possible, and acoustic dispensing (using sound waves) is used to precisely move the tiny amounts of liquid in a touchless, tipless automated process. In this way, more than 250,000 tests can be accomplished in a single day, allowing chemicals that bind to the drug target to be efficiently identified. Once the Prebys Center team identifies compounds that bind, these prototype drugs are improved by the medicinal chemistry team, ultimately generating drugs with properties suitable for advancing to phase I clinical trials in humans.
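
To make the scale concrete, the sketch below shows, in Python, one common way such screening data are reduced: normalize each 1,536-well plate against its own control wells, then flag wells whose activity crosses a threshold. It is a minimal toy with simulated numbers and made-up control positions, not the Prebys Center’s actual analysis pipeline.

```python
import numpy as np

def percent_inhibition(signal, pos_ctrl, neg_ctrl):
    """Normalize raw signals to 0-100% activity using on-plate controls."""
    return 100.0 * (neg_ctrl - signal) / (neg_ctrl - pos_ctrl)

def call_hits(plate, pos_wells, neg_wells, threshold=50.0):
    """Return indices of wells whose normalized activity meets the threshold."""
    pos = plate[pos_wells].mean()   # fully inhibited control wells
    neg = plate[neg_wells].mean()   # untreated control wells
    activity = percent_inhibition(plate, pos, neg)
    activity[pos_wells] = 0.0       # mask controls so they are
    activity[neg_wells] = 0.0       # not reported as hits
    return np.where(activity >= threshold)[0]

# Toy example: one simulated 1,536-well plate with three active compounds.
rng = np.random.default_rng(0)
plate = rng.normal(1000, 30, 1536)            # baseline assay signal
plate[[100, 500, 1200]] = 400                 # three "active" wells
pos_wells, neg_wells = np.arange(0, 8), np.arange(8, 16)
plate[pos_wells] = rng.normal(300, 10, 8)     # positive (inhibited) controls
plate[neg_wells] = rng.normal(1000, 10, 8)    # negative (untreated) controls
print(call_hits(plate, pos_wells, neg_wells))  # -> [ 100  500 1200]
```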

Within the last year, the Prebys Center has retired “big yellow” and replaced it with three acoustic dispensing-enabled uHTS robotic systems using 1,536-well high-density arrays that can run fully independently.

“We used to use big yellow for just uHTS library screening, but now, with the new lineup of robots, we use them for everything in the lab we can,” notes Pass. “It has really changed how we use automation to support and accelerate our science. Having multiple systems allows us to run simultaneous experiments and avoid scheduling conflicts. It also allows us to stay operational if one of the systems requires maintenance.”

One of the many drug discovery projects at the Prebys Center focuses on the national epidemic of opioid addiction. In 2021, fentanyl and other synthetic opioids accounted for nearly 71,000 of the 107,000 fatal drug overdoses in the U.S. By comparison, in 1999 drug-involved overdose deaths totaled fewer than 20,000 across all ages and genders.

Like other addictive substances, opioids are intimately related to the brain’s dopamine-based reward system. Dopamine is a neurotransmitter that serves critical roles in memory, movement, mood and attention. Michael Jackson, PhD, senior vice president of Drug Discovery and Development at the Prebys Center, and co-principal investigator Lawrence Barak, MD, PhD, at Duke University have been developing a completely new class of drugs that works by targeting a receptor on neurons, called neurotensin receptor 1 or NTSR1, which regulates dopamine release.

The researchers received a $6.3 million award from NIH and the National Institute on Drug Abuse (NIDA) in 2023 to advance their addiction drug candidate, called SBI-810, to the clinic. SBI-810 is an improved version of SBI-533, which previously had been shown to modulate NTSR1 signaling and demonstrated robust efficacy in mouse models of addiction without adverse side effects.

Michael Jackson, PhD, is the senior vice president of Drug Discovery and Development at the Conrad Prebys Center for Chemical Genomics.

The funding from the NIH and NIDA will be used to complete preclinical studies and initiate a Phase 1 clinical trial to evaluate safety in humans.

“The novel mechanism of action and broad efficacy of SBI-810 in preclinical models hold the promise of a truly new, first-in-class treatment for patients affected by addictive behaviors,” says Jackson.


Programming in a Petri Dish, an 8-part series

How artificial intelligence, machine learning and emerging computational technologies are changing biomedical research and the future of health care

  • Part 1 – Using machines to personalize patient care. Artificial intelligence and other computational techniques are aiding scientists and physicians in their quest to prescribe or create treatments for individuals rather than populations.
  • Part 2 – Objective omics. Although the hypothesis is a core concept in science, unbiased omics methods may reduce attachments to incorrect hypotheses that can reduce impartiality and slow progress.
  • Part 3 – Coding clinic. Rapidly evolving computational tools may unlock vast archives of untapped clinical information—and help solve complex challenges confronting health care providers.
  • Part 4 – Scripting their own futures. At Sanford Burnham Prebys Graduate School of Biomedical Sciences, students embrace computational methods to enhance their research careers.
  • Part 5 – Dodging AI and computational biology dangers. Sanford Burnham Prebys scientists say that understanding the potential pitfalls of using AI and other computational tools to guide biomedical research helps maximize benefits while minimizing concerns.
  • Part 6 – Mapping the human body to better treat disease. Scientists synthesize supersized sets of biological and clinical data to make discoveries and find promising treatments.
  • Part 7 – Simulating science or science fiction? By harnessing artificial intelligence and modern computing, scientists are simulating more complex biological, clinical and public health phenomena to accelerate discovery.
  • Part 8 – Automation accelerating research.
Institute News

Simulating science or science fiction? 

Author: Greg Calhoun
Date: August 27, 2024

By harnessing artificial intelligence and modern computing, scientists are simulating more complex biological, clinical and public health phenomena to accelerate discovery.

While scientists have always employed a vast set of methods to observe the immense worlds among and beyond our solar system, in our planet’s many ecosystems, and within the biology of Earth’s inhabitants, the public’s perception tends to reduce this mosaic to a single portrait.

A Google image search will reaffirm that the classic image of the scientist remains a person in a white coat staring intently at a microscope or sample in a beaker or petri dish. Many biomedical researchers do still use their fair share of glassware and plates while running experiments. These scientists, however, now often need advanced computational techniques to analyze the results of their studies, expanding the array of tools researchers must master to push knowledge forward. For every scientist pictured pipetting, we should imagine others writing code or sending instructions to a supercomputer.

In some cases, scientists are testing whether computers can be used to simulate the experiments themselves. Computational tools such as generative artificial intelligence (AI) may be able to help scientists improve data inputs, create scenarios and generate synthetic data by simulating biological processes, clinical outcomes and public health campaigns. Advances in simulation may one day help scientists more quickly home in on promising results that can then be confirmed more efficiently through real-world experiments.

“There are many different types of simulation in the life sciences,” says Kevin Yip, PhD, professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys and director of the Bioinformatics Shared Resource. “Molecular simulators, for example, have been used for a long time to show how certain molecules will change their shape and interact with other molecules.”

“One of the most successful examples is in structural biology with the program AlphaFold, which is used to predict protein structures and interactions,” adds Yip. “This program was built on a very solid foundation of actual experiments determining the structures of many proteins. This is something that other fields of science can work to emulate, but in most other cases simulation continues to be a work in progress rather than a trusted technique.”

In the Sanford Burnham Prebys Conrad Prebys Center for Chemical Genomics (Prebys Center), scientists are using simulation-based techniques to more effectively and efficiently find new potential drugs.

[Video: Nanome virtual reality demonstration]

“In my group, we know what the proteins of interest look like, so we can simulate how certain small molecules would fit into those proteins to try and design ones that fit really well,” says Steven Olson, PhD, executive director of Medicinal Chemistry at the Prebys Center. In addition to fit, Olson and team look for drugs that won’t be broken down too quickly after being taken.

“That can be the difference between a once-a-day drug and one you have to take multiple times a day, and we know that patients are less likely to take the optimal prescribed dose when it is more than once per day,” notes Olson. 

Steven Olson, PhD, is the executive director of Medicinal Chemistry at the Prebys Center.

“We can use computers now to design drugs that stick around and achieve concentrations that are pharmacologically effective and active. What the computers produce are just predictions that still need to be confirmed with actual experiments, but it is still incredibly useful.”
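
Olson’s point about drugs that “stick around” can be illustrated with textbook pharmacokinetics: in a one-compartment model, half-life follows directly from clearance and volume of distribution. The sketch below uses hypothetical parameter values and a deliberately crude redosing heuristic; it illustrates the arithmetic, not any predictive model used at the center.

```python
import math

def half_life_hours(clearance_l_per_h, volume_l):
    """One-compartment elimination half-life: t1/2 = ln(2) * V / CL."""
    return math.log(2) * volume_l / clearance_l_per_h

def doses_per_day(t_half, half_lives_per_dose=1.5):
    """Crude heuristic: redose roughly every 1.5 half-lives (illustrative only)."""
    interval = half_lives_per_dose * t_half
    return max(1, round(24 / interval))

# Two hypothetical analogs: same target potency, different metabolic stability.
for name, cl, v in [("fast-cleared analog", 40.0, 70.0),
                    ("optimized analog", 5.0, 70.0)]:
    t = half_life_hours(cl, v)
    print(f"{name}: t1/2 = {t:.1f} h -> about {doses_per_day(t)} dose(s)/day")
```

Cutting clearance eightfold turns a compound that would need near-continuous dosing into a plausible once- or twice-daily drug, which is the kind of difference Olson describes.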

In one example, Olson is working with a neurobiologist at the University of California Santa Barbara and an X-ray crystallographer at the University of California San Diego on new potential drugs for Alzheimer’s disease and other forms of dementia.

“This protein called farnesyltransferase was a big target for cancer drug discovery in the 1990s,” explains Olson. “While targeting it never showed promise in cancer, my collaborator showed that a farnesyltransferase inhibitor stopped proteins from aggregating in the brains of mice and creating tangles, which are a pathological hallmark of Alzheimer’s.”

“We’re working together to make drugs that would be safe enough and penetrate far enough into the brain to be potentially used in human clinical trials. We’ve made really good progress and we’re excited about where we’re headed.”

To expedite their drug discovery and optimization efforts, Olson’s team uses a suite of computing tools to run simulations that model the fit between proteins and potential drugs, how long it will take for drugs to break down in the body, and the likelihood of certain harmful side effects, among other properties. The Molecular Operating Environment program is one commercially available application that enables the team to visualize candidate drugs’ 3D structures and simulate interactions with proteins. Olson and his collaborators can manipulate the models of their compounds even more directly in virtual reality by using another software application known as Nanome. DeepMirror is an AI tool that helps predict the potency of new drugs while screening for side effects, and StarDrop uses machine learning models to help the team design drugs that aren’t metabolized too quickly or too slowly.
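
As a rough illustration of the kind of property profiling these commercial tools automate, the sketch below uses the open-source RDKit toolkit, which is not one of the applications named above, to compute the classic Lipinski “rule of five” descriptors for a candidate structure.

```python
# Requires the open-source RDKit toolkit: pip install rdkit
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_report(smiles):
    """Compute simple drug-likeness descriptors for a molecule given as SMILES."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return {
        "mol_weight": Descriptors.MolWt(mol),             # Lipinski: <= 500
        "logP": Descriptors.MolLogP(mol),                 # Lipinski: <= 5
        "h_bond_donors": Lipinski.NumHDonors(mol),        # Lipinski: <= 5
        "h_bond_acceptors": Lipinski.NumHAcceptors(mol),  # Lipinski: <= 10
    }

# Aspirin as a stand-in candidate; any project molecule's SMILES would do.
print(rule_of_five_report("CC(=O)Oc1ccccc1C(=O)O"))
```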

The Prebys Center team demonstrates how the software application known as Nanome allows scientists to manipulate the models of potential drug compounds directly in virtual reality.

“In addition, there are certain interactions that can only be understood by modeling with quantum mechanics,” Olson notes. “We use a program called Gaussian for that, and it is so computationally intense that we have to run it over the weekend and wait for the results.”

“We use these tools to help us visualize the drugs, make better plans and give us inspiration on what we should make. They also can help explain the results of our experiments. And as AI improves, it’s helping us to predict side effects, metabolism and all sorts of other properties that previously you would have to learn by trial and error.”

While simulation is playing an active and growing role in drug discovery, Olson continues to see it as complementary to the human expertise required to synthesize new drugs and put predictions to the test with actual experiments.

“The idea that we’re getting to a place where we can simulate the entire drug design process, that’s science fiction,” says Olson. “Things are evolving really fast right now, but I think in the future you’re still going to need a blend of human brainpower and computational brainpower to design drugs.”


Institute News

Mapping the human body to better treat disease

Author: Greg Calhoun
Date: August 20, 2024

Scientists build supersized sets of biological data to better treat diseases and reveal the secrets of youth by mapping the body at the single-cell level.

Scientists at Sanford Burnham Prebys are investigating the inner workings of our bodies and the trillions of cells within them at a level of detail that few futurists could have predicted. 

“The scale of the data we can generate and analyze has certainly exploded,” says Yu Xin (Will) Wang, PhD, assistant professor in the Development, Aging and Regeneration Program at Sanford Burnham Prebys. “When I was a graduate student, I would take about a hundred pictures for my experiment and spend weeks manually classifying certain characteristics of the imaged cells.” 

“Now, a single experiment would capture probably hundreds of thousands of images and study the gene and protein expression patterns of millions of individual cells.” 

The Wang lab specializes in advanced spatial multi-omic analyses that capture the location of cells, proteins and other molecules in the body. Wang uses spatial multi-omics to explore how dysfunctional autoimmune responses—when the immune system attacks the body’s own tissues—can interfere with its ability to repair and regenerate. As well as being relevant to disease, autoimmune responses also play a role in “inflammaging,” the low-level, chronic inflammation that occurs with age. Inflammaging is thought to contribute to many of the physical signs of aging.  

“My team thinks about diseases from the perspective of how cells behave in response to changes in the body,” says Wang. “We’re interested in how interactions between the immune and peripheral nervous systems change as people age and make us susceptible to frailty and disease.” 

A spectrum of immune cells being studied by Will Wang’s lab at Sanford Burnham Prebys. Image courtesy of postdoctoral associate Beatrice Silvestri, PhD.

Yu Xin (Will) Wang, PhD, is an assistant professor in the Development, Aging and Regeneration Program at Sanford Burnham Prebys.

This spatial multi-omics approach is helping scientific teams across the world on projects to understand how the body works at the cellular level. Efforts such as the Human Cell Atlas and the Human BioMolecular Atlas Program seek to develop a cellular map of the human body.  Researchers at Sanford Burnham Prebys are now using these tools to map complex diseases including cancer and degenerative conditions such as muscular dystrophy and ischemic injuries. Wang is also working to map cellular changes in aging through the San Diego Tissue Mapping Center of the Cellular Senescence Network (SenNet), a collaborative effort led by Peter D. Adams, PhD, director of, and professor in, the Institute’s Cancer Genome and Epigenetics Program and Bing Ren, PhD, professor of Cellular and Molecular Medicine at UC San Diego.  

“Integrating multiple types of -omics data can give us a much more comprehensive picture as we study health and disease,” notes Wang. Each additional layer of imaging and sequencing data adds more complexity to how Wang and his peers process and analyze their results. This has driven Wang and his colleagues to develop computational algorithms and AI tools to find patterns and novel translatable targets from these “big data” experiments. 
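
One basic building block behind this kind of pattern-finding is unsupervised clustering of cells by their expression profiles. The sketch below groups synthetic “cells” with scikit-learn; it is a toy stand-in for the far richer spatial multi-omics pipelines described here, with all data simulated.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-in for a cells-by-genes expression matrix:
# two cell populations with different mean expression across 50 genes.
pop_a = rng.normal(0.0, 1.0, size=(500, 50))
pop_b = rng.normal(2.0, 1.0, size=(500, 50))
cells = np.vstack([pop_a, pop_b])

# Standardize each gene, then cluster cells into candidate cell types.
scaled = StandardScaler().fit_transform(cells)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

# In a real spatial analysis, these cluster labels would be mapped back onto
# tissue coordinates to see where each cell population lives in the body.
print(np.bincount(labels))  # expect roughly [500, 500]
```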

Wang credits San Diego-based biotechnology company Illumina for playing a major role by creating next-generation sequencing technology that improved the speed and accuracy of genome sequencing. The cost of sequencing steadily declined after Illumina launched the Genome Analyzer platform in 2007, making this research method more accessible to scientists at Sanford Burnham Prebys and around the globe. 

A series of additional technology platforms and research disciplines have followed, allowing scientists to study other parts of biological systems in similarly exhaustive detail. These include epigenomics, transcriptomics, proteomics and metabolomics. Scientists are now able to incorporate more than one of these levels of inquiry into an experiment, which is known as multi-omics.    

Connections in the brain photographed during experiments at the Institute.
Image courtesy of postdoctoral associate Sara Ancel, PhD, and Annanya Sethiya, MS, research associate II.

“The amount of information you get back from these sequencing platforms, as well as the application of highly multiplexed biomolecular imaging, has exponentially increased, which really helps us to resolve what we couldn’t before to better understand the genetic regulation of cells and diseases,” says Wang. “The most challenging part is the work to derive the meaning from these massive amounts of information. Thankfully, that’s also the most fun part of what we do.” 


Institute News

Dodging AI and other computational biology dangers

Author: Greg Calhoun
Date: August 13, 2024

Sanford Burnham Prebys scientists say that understanding the potential pitfalls of using artificial intelligence and computational biology techniques in biomedical research helps maximize benefits while minimizing concerns.

ChatGPT, an artificial intelligence (AI) “chatbot” that can understand and generate human language, steals most AI-related headlines, along with rising concerns about the use of AI tools to create false “deepfake” images, audio and video that appear convincingly real.

But scientific applications of AI and other computational biology methods are gaining a greater share of the spotlight as research teams successfully employ these techniques to make new discoveries such as predicting how patients will respond to cancer drugs.

AI and computational biology have proven to be boons to scientists searching for patterns in massive datasets, but some researchers are raising alarms about how AI and other computational tools are developed and used.

“We cannot just purely trust AI,” says Yu Xin (Will) Wang, PhD, assistant professor in the Development, Aging and Regeneration Program at Sanford Burnham Prebys. “You need to understand its limitations, what it’s able to do and what it’s not able to do. Probably one of the simplest examples would be people asking ChatGPT about current events as they happen.”

(ChatGPT has access only to information available up to the training cutoff date of its most current version, so its awareness of current events is not necessarily current.)

“I see a misconception where some people think that AI is so intelligent that you can just throw data at an AI model and it will figure it all out by itself,” says Andrei Osterman, PhD, vice dean and associate dean of curriculum for the Graduate School of Biomedical Sciences and professor in the Immunity and Pathogenesis Program at Sanford Burnham Prebys.

Yu Xin (Will) Wang, PhD, is an assistant professor in the Development, Aging and Regeneration Program at Sanford Burnham Prebys.

“In many cases, it’s not that simple. We can’t look at these models as black boxes where you put the data in and get an answer out, where you have no idea how the answer was determined, what it means and how it is applicable and generalizable.”

“The very first thing to focus on when properly applying computational methods or AI methods is data quality,” adds Kevin Yip, PhD, professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys and director of the Bioinformatics Shared Resource. “Our mantra is ‘garbage in, garbage out.’”

Andrei Osterman, PhD, is a professor in the Immunity and Pathogenesis Program at Sanford Burnham Prebys.

Once researchers have ensured the quality of their data, Yip says the next step is to be prepared to confirm the results.

“Once we actually plug into certain tools, how can we actually tell whether they are doing a good job or not?” asks Yip. “We cannot just trust them. We need to have ways to validate either experimentally or even computationally using other ways to cross-check the findings.”
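
A small, generic example of what validating computationally can mean: never score a model on the same data used to fit it. The sketch below, on synthetic data, contrasts the optimistic “resubstitution” score with a cross-validated one; it illustrates the principle rather than any specific tool mentioned in this article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))            # synthetic features
y = (X[:, 0] + rng.normal(size=200) > 0)  # label driven by one feature plus noise

model = LogisticRegression(max_iter=1000)

# Fitting and scoring on the same data is optimistic; held-out folds give
# the honest computational cross-check Yip describes.
resub = model.fit(X, y).score(X, y)
cv = cross_val_score(model, X, y, cv=5).mean()
print(f"resubstitution accuracy: {resub:.2f}, cross-validated accuracy: {cv:.2f}")
```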

Yip is concerned that AI-based research and computational biology are moving too fast in some cases, contributing to challenges reproducing and generalizing results.

“There are so many new algorithms, so many tools published every day,” adds Yip. “Sometimes, they are not maintained very well, and the investigators cannot be reached when we can’t run their code or download the data they analyzed.”

For AI and computational biology techniques to continue their rapid development, it is important for the scientific community to be responsible, transparent and collaborative in sharing data and either code or trained AI models so that studies can be reproduced to enhance trust as these fields grow.

Privacy is another potential breeding ground for mistrust in research using AI algorithms to analyze medical data, from electronic health records to insurance claims data to biopsied patient samples.

“It is completely understandable that members of the public are concerned about the privacy of their personal data as it is a primary topic I discuss with colleagues at conferences,” says Yip. “When we work with patient data, there are very strict rules and policies that we have to follow.”

Yip adds that the most important rule is that scientists must never re-identify samples without proper consent; re-identification means using algorithms to predict which patient provided certain data.

Kevin Yip, PhD, is a professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys.

Ultimately for Yip, using AI and computational methods appropriately—within their limitations and without violating patients’ privacy—is a matter of professional integrity for the owners and users of these emerging technologies.

“As creators of AI and computational tools, we need to maintain our code and models and make sure they are accessible along with our data. On the other side, users need to understand the limitations and how to make good use of what we create without overstepping and claiming findings beyond the capability of the tools.”

“This level of shared responsibility is very important for the future of biomedical research during the data revolution.”


Institute News

Scripting their own futures

Author: Greg Calhoun
Date: August 8, 2024

At Sanford Burnham Prebys Graduate School of Biomedical Sciences, students embrace computational methods to enhance their research careers.

Although not every scientist-in-training will need to be an ace programmer, the next generation of scientists will need to take advantage of advances in artificial intelligence (AI) and computing that are shaping biomedical research. Scientists who understand how to best process, store, access and employ algorithms to analyze ever-increasing amounts of information will help lead the data revolution rather than follow in its wake.

“I think the way to do biology is very different from just a decade or so ago,” says Kevin Yip, PhD, a professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys and the director of the Bioinformatics Shared Resource. “Looking back, I could not have imagined playing much of a role as a data scientist, and now I see that my peers and I are at the core of the whole discovery process.”

In 2017, bioinformatics experts suggested in Genome Biology that graduate education programs should focus on teaching computational biology to all learners rather than just those with a special interest in programming or data science. The authors noted that the changing nature of the life sciences required researchers to respond in kind. Teams of scientists must be able to formulate algorithms to keep pace and detect new discoveries obscured within oceans of data too vast to parse with prior methods.

“I think most people now would agree that data science and the use of computational methods—AI included—are indispensable in biology,” says Yip. “To use these approaches to the greatest effect, computational biologists and bench laboratory scientists need to be trained to speak a common language.”

Kevin Yip, PhD, is a professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys.

When Yip joined Sanford Burnham Prebys in 2022, he was tasked with directing a course on computational biology for the Institute’s Graduate School of Biomedical Sciences.

“We believe that the new generation of graduate students needs to have the ability to understand what algorithms are and how they work, rather than just treating those tools as black boxes,” says Yip. “They may not be able to invent new algorithms right out of the course, but they’ll be better equipped to participate in collaborative projects.”

Andrei Osterman, PhD, is a professor in the Immunity and Pathogenesis Program at Sanford Burnham Prebys.

Yip’s work developing the course has been well-received by graduate students based on their evaluations of the class.  

“I loved the computational biology course,” says Katya Marchetti, a second-year PhD student in the lab of Karen Ocorr, PhD, and the recipient of an Association for Women in Science scholarship.

“It was so helpful to learn skills that I could immediately see incorporating into my own research. I’m so glad I had this course. I know for a fact that I will need this knowledge and experience to be successful in whatever comes after my PhD. The people who have these skills objectively do better in postdoctoral fellowships or in the biotechnology industry.”

Yip and his fellow faculty members in the graduate school see an opportunity to further expand their approach to computational biology and data science topics.

“In the current course, students learn to use computational methods to analyze transcriptomics data,” says Andrei Osterman, PhD, vice dean and associate dean of Curriculum for the Graduate School and a professor in the Immunity and Pathogenesis Program at Sanford Burnham Prebys. “This is very useful hands-on training, but not advanced enough for some students.”

“We are seeing students with a computer science background coming into our graduate program,” notes Yip. “We are thinking about adding a new elective course for students who want to go beyond what our current class is offering.”

Graduate education is quickly evolving at Sanford Burnham Prebys and throughout the biomedical research community to match the demands of an era defined by effectively integrating computation and biology.

“Mutual understanding among data scientists and biologists is very important for where research is heading,” says Yip. “We will keep improving our training to set our students up for success.”

Katya Marchetti is a second-year PhD student at Sanford Burnham Prebys.


Institute News

Coding clinic

Author: Greg Calhoun
Date: August 6, 2024

Rapidly evolving computational tools may unlock vast archives of untapped clinical information—and help solve complex challenges confronting health care providers.

The wealth of data stored in electronic medical records has long been considered a veritable treasure trove for scientists able to properly plumb its depths.  

Emerging computational techniques and data management technologies are making this more possible, while also addressing complicated clinical research challenges, such as optimizing the design of clinical trials and quickly matching eligible patients most likely to benefit.  

Scientists are also using new methods to find meaning in previously published studies and creating even larger, more accessible datasets.  

“While we are deep in the hype cycle of artificial intelligence [AI] right now, the more important topic is data,” says Sanju Sinha, PhD, an assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys. “Integrating data together in a clear, structured format and making it accessible to everyone is crucial to new discoveries in basic and clinical biomedical research.” 

Sinha is referring to resources such as the St. Jude-Washington University Pediatric Cancer Genome Project, which makes available to scientists whole genome sequencing data from cancerous and normal cells for more than 800 patients.

The Chavez lab uses fluorescent markers to observe circular extra-chromosomal DNA elements floating in cancer cells. Research has shown that these fragments of DNA are abundant in solid pediatric tumors and associated with poor clinical outcomes. Image courtesy of Lukas Chavez.

The Children’s Brain Tumor Network is another important repository for researchers studying pediatric brain cancer, such as Lukas Chavez, PhD, an assistant professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys. 

“We have analyzed thousands of whole genome sequencing datasets that we were able to access in these invaluable collections and have identified all kinds of structural rearrangements and mutations,” says Chavez. “Our focus is on a very specific type of structural rearrangement called circular extra-chromosomal DNA elements.” 

Sanju Sinha, PhD, is an assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys.

Circular extra-chromosomal DNA elements (ecDNA) are pieces of DNA that have broken off normal chromosomes and then been stitched together by DNA repair mechanisms. This phenomenon leads to circular DNA elements floating around in a cancer cell.  

“We have shown that they are much more abundant in solid pediatric tumors than we previously thought,” adds Chavez. “And we have also shown that they are associated with very poor outcomes.” 

To help translate this discovery for clinicians and their patients, Chavez is testing the use of deep learning AI algorithms to identify tumors with ecDNA by analyzing the biopsy slides that are routinely created by pathologists to diagnose brain cancer. 

“We have already done the genomic analysis, and we are now turning our attention to the histopathological images to see how much of the genomic information can be predicted from these images,” says Chavez. “Our hope is that we can identify tumors that have ecDNA by evaluating the images without having to go through the genomic sequencing process.”  
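
As a sketch of the general approach, and not the Chavez lab’s actual model, one common pattern is to fine-tune a small convolutional network so that, for each tile cut from a digitized slide, it outputs the probability that the tumor carries ecDNA. Everything below (architecture choice, tile size, labels) is a hypothetical illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a small image backbone and replace its head with a single
# logit: P(tile comes from an ecDNA-positive tumor).
backbone = models.resnet18(weights=None)  # weights="IMAGENET1K_V1" to pretrain
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(tiles, labels):
    """One gradient step on a batch of slide tiles.

    tiles:  float tensor of shape (batch, 3, 224, 224)
    labels: float tensor of shape (batch,), 1.0 = ecDNA-positive tumor
    """
    optimizer.zero_grad()
    logits = backbone(tiles).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random tensors standing in for stained-tissue image tiles.
print(train_step(torch.randn(4, 3, 224, 224), torch.tensor([0.0, 1.0, 0.0, 1.0])))
```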

Currently, this approach serves only as a clinical biomarker of a challenging prognosis, but Chavez believes it can also be a diagnostic tool—and a game changer for patients.  

“I’m optimistic that in the future we will have drugs that target these DNA circles and improve the therapeutic outcome of patients,” says Chavez.  

“Once medicine catches up, we need to be able to find the patients and match them to the right medicine,” says Chavez. “We’re not there yet, but that’s the goal.” 

Chavez is also advancing his work as scientific director of the Pediatric Neuro-Oncology Molecular Tumor Board at Rady Children’s Hospital in San Diego.  

“Recently, it has been shown that new sequencing technologies coupled with machine learning tools make it possible to compress the time it takes to sequence and classify types of tumors from days or weeks to about 70 minutes,” says Chavez. “This is quick enough to take that technology into the operating room and use a surgical biopsy to classify a tumor.  

“Then we could get feedback to the surgeon in real time so that more or less tissue can be removed depending on if it is a high- or low-grade tumor—and this could dramatically affect patient outcomes.  

“When I talk to neurosurgeons, they are always in a pickle between trying to be aggressive to reduce recurrence risk or being conservative to preserve as much cognitive function and memory as possible for these patients.  

“If the surgeon knows during surgery that it’s a tumor type that’s resistant to treatment versus one that responds very favorably to chemotherapy, radiation or other therapies, that will help in determining how to strike that surgical balance.” 

Lukas Chavez, PhD, is an assistant professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys.

Artist’s rendering of X-shaped chromosomes floating in a cell alongside circular extra-chromosomal DNA elements.

Rady Children’s Hospital has also contributed to the future of genomic and computational medicine through BeginNGS, a pilot project to complement traditional newborn health screening with genomic sequencing that screens for approximately 400 genetic conditions. 

“The idea is that if there is a newborn baby with a rare disease, their family often faces a very long odyssey before ever reaching a diagnosis,” says Chavez. “By sequencing newborns, this program has generated success stories, such as identifying genetic variants that have allowed the placement of a child on a specific diet to treat a metabolic disorder, and a child to receive a gene therapy to restore a functional immune system.”


Institute News

Objective omics

Author: Greg Calhoun
Date: August 1, 2024

Although the hypothesis is a core concept in science, unbiased omics methods may reduce attachments to incorrect hypotheses that can reduce impartiality and slow progress.

Biological techniques that study the entire landscape of a sample’s genes or proteins—genomics or proteomics, respectively—help scientists discover new results without becoming too narrowly focused on what they predicted would happen. Although some scientists pursuing studies with this wider lens have been accused of going on “fishing expeditions,” many researchers counter that they now are able to investigate their hypotheses without missing other important results.

“I am a major proponent of omics, and especially unbiased omics,” says Sanju Sinha, PhD, an assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys. “If someone now doesn’t show me unbiased results, it deeply bothers me. If every experiment only shows results from one pathway, it’s concerning and increases my skepticism about the study.”

The Sinha Lab

An omics approach differs from traditional hypothesis-driven research in that it includes a comprehensive perspective about the phenomenon a scientist is studying and what might be causing it.

“Unbiased omics look at the global picture of how everything is changing,” explains Sinha. “If you’re looking at genetic factors, you present all 20,000 genes and how they change, rather than just one pathway and maybe 10 genes.”
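
The contrast Sinha draws can be shown in a few lines: an unbiased analysis computes a change score for every gene and ranks the full list, and only afterward asks which pathways the top genes belong to. The sketch below uses synthetic expression values and invented gene names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
genes = [f"GENE{i:05d}" for i in range(20_000)]

# Synthetic mean expression for control vs. treated samples.
control = rng.lognormal(mean=2.0, sigma=1.0, size=20_000)
treated = control * rng.lognormal(mean=0.0, sigma=0.3, size=20_000)
treated[:25] *= 4.0  # plant a handful of strongly induced genes

df = pd.DataFrame({"gene": genes, "control": control, "treated": treated})
df["log2_fc"] = np.log2(df["treated"] / df["control"])

# Unbiased view: rank all 20,000 genes by their change, then interpret the
# top of the list, instead of only plotting the 10 genes of a favorite pathway.
print(df.sort_values("log2_fc", ascending=False).head(10))
```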

Sanju Sinha, PhD, is an assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys.

This method reflects the dynamic nature of biomedical research.

“Biomedical research is currently experiencing a period of accelerating and metamorphic discoveries fueled by unparalleled technologies that generate enormous amounts of data that, in turn, spur and spawn avenues of new inquiry and questions previously unimagined,” says David A. Brenner, MD, president and CEO of Sanford Burnham Prebys.

“An effective and successful biomedical researcher in the 21st century requires input from different disciplines that previously were not part of standard practice or the scientific method.”

Sinha agrees. “People used to work in small silos. They could work on the same biological pathway for 30 years.” The new model, he says, is quickly shifting to more multidisciplinary, team-based science in which experts from many fields collaborate to make the most of new technology and the rich data it can provide.

Some teams employing these omics approaches have been criticized for conducting aimless studies due to the lack of traditional hypotheses. Sinha is quick to defend against these claims.

“I don’t mind these so-called fishing expeditions. I like to say that there are only two kinds of science: applied science and not-yet-applied science. Fishing expeditions are valuable if the data is made available and other scientists can make discoveries with it for years to come.”

“We should remember that fishing expeditions in biomedical research have done a great service to humanity.”

The hypothesis is not an endangered species destined to be replaced by unbiased omics approaches. On the contrary, omics experiments can often be kick-starters that help scientists generate new hypotheses to explore.

A team of scientists at Sanford Burnham Prebys and their collaborators are using an omics technique called resistomics to develop a new class of antibiotics effective against a drug-resistant pathogen.

In a paper published on January 3, 2024 in Nature, a multi-institutional team including Andrei Osterman, PhD, a professor in the Immunity and Pathogenesis Program at Sanford Burnham Prebys, along with colleagues at Roche—the Swiss-based pharmaceutical and health care company—and others, describes a novel class of small-molecule-tethered macrocyclic peptide (MCP) antibiotics with potent antibacterial activity against carbapenem-resistant Acinetobacter baumannii (CRAB).

The World Health Organization and the Centers for Disease Control and Prevention have both categorized multidrug-resistant A. baumannii as a top-priority pathogen and public health threat.

In the study, Osterman and colleagues applied an experimental evolution approach to help identify the drug target (the LPS transporter complex) of a new class of antibiotics—a macrocyclic peptide called Zosurabalpin—and elucidate the dynamics and mechanisms of acquired drug resistance in four distinct strains of A. baumannii. 

Andrei Osterman, PhD, is a professor in the Immunity and Pathogenesis Program at Sanford Burnham Prebys.

They used an integrative workflow that employs continuous bacterial culturing in an “evolution machine” (morbidostat) followed by time-resolved, whole-genome sequencing and bioinformatics analysis to map resistance-inducing mutations. 
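
At its simplest, the bioinformatics step in this workflow looks for variants whose allele frequency climbs across sequencing time points as drug pressure continues. The sketch below uses a made-up variant table, with hypothetical gene and variant names, to show the pattern.

```python
import pandas as pd

# Hypothetical allele-frequency calls from time-resolved sequencing of one
# evolving morbidostat culture (variant names invented for illustration).
calls = pd.DataFrame({
    "variant": ["lptB:A123T"] * 3 + ["geneX:S512F"] * 3 + ["spurious:C9G"] * 3,
    "day":     [2, 6, 10] * 3,
    "freq":    [0.05, 0.40, 0.92,   # sweeping toward fixation -> candidate
                0.02, 0.15, 0.55,   # rising -> candidate
                0.10, 0.08, 0.04],  # drifting down -> ignore
})

def rising_variants(df, min_gain=0.3):
    """Flag variants whose allele frequency rises by at least min_gain."""
    wide = df.pivot(index="variant", columns="day", values="freq")
    gain = wide.iloc[:, -1] - wide.iloc[:, 0]  # last time point minus first
    return gain[gain >= min_gain].sort_values(ascending=False)

print(rising_variants(calls))  # -> lptB:A123T (0.87), geneX:S512F (0.53)
```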

“This comprehensive mapping of the drug-resistance landscape yields valuable insights for a variety of practical applications,” says Osterman, “from therapy optimization via genomics-based assessment of drug resistance/susceptibility of bacterial pathogens to a rational development of novel drugs with minimized resistibility potential.”


Institute News

Using machines to personalize patient care

Author: Greg Calhoun
Date: July 30, 2024

Artificial intelligence (AI) and other computational techniques are aiding scientists and physicians in their quest to create treatments for individuals rather than populations.

The Human Genome Project captured the public’s imagination with its global quest to better understand the genetic blueprint stored on the DNA within our cells. The project succeeded in delivering the first-ever sequence of the human genome while foreshadowing a future for medicine once considered to be science fiction. The project presaged the possibility that health care could be personalized based on clues within a patient’s unique genetic code.

The Chavez Lab

While many more people have undergone genetic testing through consumer genealogy and health services such as 23andMe and Ancestry than through health care systems, genomic sequencing has influenced clinical care in some specialties. Personalized medicine—also known as precision medicine or genomic medicine—has been especially helpful for people suffering from rare diseases that historically have been difficult to diagnose and treat.

Scientists at Sanford Burnham Prebys are employing new technologies and expertise to test ways to improve diagnoses and customize treatments for many diseases based on unique characteristics within tumors, blood samples and other biopsies.

AI and other computational techniques are enabling patient samples to be rapidly analyzed and compared to data from vast numbers of individuals who have been treated for the same condition. Physicians can use AI and other tools to identify subtypes of cancers and other conditions, as well as improve selection of eligible candidates for clinical trials.

“I think we’ve gotten a lot better at precision diagnostics,” says Lukas Chavez, PhD, an assistant professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys. “In my work at Rady Children’s Hospital in cancer, we can characterize a tumor based on mutations, including predicting how quickly different tumors will spread. What we too often lack, however, are better treatment approaches or medicines. That will be the next generation of precision medicine.”

Sanju Sinha, PhD, an assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys, is developing projects to help bridge the gap between precision diagnostics and treatment. He is partnering with the National Cancer Institute on a first-of-its-kind computational tool to systematically predict patient response to cancer drugs at single-cell resolution.

A study published in the journal Nature Cancer described how the tool, called PERCEPTION, was successfully validated by predicting responses to individual therapies and combination treatments in three independent published clinical trials for multiple myeloma, breast cancer and lung cancer.

Lukas Chavez, PhD, is an assistant professor in the Cancer Genome and Epigenetics Program at Sanford Burnham Prebys.

In each case, PERCEPTION correctly stratified patients into responder and non-responder categories. In lung cancer, it even captured the development of drug resistance as the disease progressed, a notable discovery with great potential.
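
Stratification like this is typically scored with standard metrics: rank patients by the model’s predicted sensitivity and measure how well that ranking separates responders from non-responders. The sketch below is a generic illustration with invented numbers, not the published PERCEPTION code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical trial: model-predicted drug sensitivity per patient (higher
# means predicted responder) and the observed clinical outcome (1 = responded).
predicted_sensitivity = np.array([0.91, 0.83, 0.75, 0.58, 0.44, 0.31, 0.22, 0.10])
responded = np.array([1, 1, 1, 0, 1, 0, 0, 0])

# AUC: probability a random responder ranks above a random non-responder.
auc = roc_auc_score(responded, predicted_sensitivity)

# A simple stratification rule: call the top half "predicted responders."
cutoff = np.median(predicted_sensitivity)
predicted_responder = predicted_sensitivity > cutoff
accuracy = (predicted_responder == responded.astype(bool)).mean()

print(f"AUC = {auc:.2f}, stratification accuracy = {accuracy:.2f}")
```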

Sanju Sinha, PhD, is an assistant professor in the Cancer Molecular Therapeutics Program at Sanford Burnham Prebys.

“The ability to monitor the emergence of resistance is the most exciting part for me,” says Sinha. “It has the potential to allow us to adapt to the evolution of cancer cells and even modify our treatment strategy.”

While PERCEPTION is not yet ready for clinics, Sinha hopes that widespread adoption of this technology will generate more data, which can be used to further develop and refine the technology for use by health care providers.

In another project, Sinha is focused on patients being treated for precancerous conditions that may never progress into dangerous cancers warranting treatment and its accompanying side effects.

“Many women who are diagnosed with precancerous changes in the breast seek early treatment,” says Sinha. “Most precancerous cells never lead to cancer, so it may be that as many as eight of 10 women with this diagnosis are being overtreated, which is a huge issue.”

To counter this phenomenon, Sinha is training AI models on images of biopsied samples in conjunction with multi-omics sequencing data. His team’s goal is to develop a tool capable of predicting which patients’ cancers would progress based on the imaged samples alone.

“In the field of precancer, insurance does not cover the cost of computing this omics data,” says Sinha. “Health care systems do routinely generate histopathological slides from patient biopsies, so we feel that a tool leveraging these images could be a scalable and accessible solution.”

If Sinha’s team is successful, an AI tool integrated into clinics would predict whether precancerous cells would progress within the next 10 years to guide treatment decisions and how patients are monitored.

“With precision medicine, our hope is not to just treat patients with better drugs, but also to make sure that patients are not unnecessarily treated and made to bear needless costs and side effects that disrupt their quality of life.”

