Doctors and scientists have long relied on microscopes to study human tissue and diagnose disease. But today’s medical research produces far more information than the human eye alone can handle, including detailed maps of genes and proteins inside cells.
A new study from Yale University researchers shows how artificial intelligence can bring these different kinds of data together, offering a clearer picture of what is happening inside the body and how diseases develop. The study is published in Nature Biomedical Engineering.
In the study, researchers introduce a new computer system called spEMO, short for “spatial multi-modal embeddings.” The system uses artificial intelligence to combine images of tissue slides with information about gene and protein activity, allowing scientists to analyze tissue in a more complete way than any single type of data allows on its own.
“Each type of data tells part of the story, but on its own it’s incomplete,” said Tianyu Liu, the study’s lead author and a PhD candidate working in the field of computational biology and biomedical informatics at Yale. “Our goal was to design a method that could integrate all of these signals so we can better understand how cells behave in real tissue.”
Traditionally, pathologists examine stained tissue samples under a microscope to identify signs of disease. At the same time, modern technologies can measure which genes are turned on or off at precise locations in those tissues. The challenge is that these different data types—images, gene activity, protein levels, and text-based biological knowledge—are difficult to analyze together.
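To make the integration idea concrete, here is a minimal toy sketch of one common way to combine different data types: encode each modality as a vector of numbers, normalize each vector so no single modality dominates, and join them into one shared embedding. This is an illustration of the general technique, not the actual spEMO method; all function names and numbers below are hypothetical.

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length so each modality contributes comparably."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

def fuse_modalities(image_feats, gene_expr, protein_levels):
    """Concatenate normalized per-modality vectors into one joint embedding."""
    return (l2_normalize(image_feats)
            + l2_normalize(gene_expr)
            + l2_normalize(protein_levels))

# One hypothetical tissue location, described three ways:
# image texture features, gene activity levels, protein measurements.
embedding = fuse_modalities([0.8, 0.1], [3.0, 0.0, 4.0], [1.0, 1.0])
print(len(embedding))  # one vector of 2 + 3 + 2 = 7 values
```

In this simplified picture, downstream analysis (say, clustering tissue regions or flagging disease signatures) would operate on the joint embedding rather than on any single data type, which is the core motivation the researchers describe.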