November 13, 2019 -- Researchers from Rensselaer Polytechnic Institute developed a new technique that applies deep learning to quantify images generated from fluorescence lifetime imaging. Details of the deep neural network they developed, called FLI-Net, are described in the November 12 issue of the Proceedings of the National Academy of Sciences.
Molecular imaging has long been an effective tool for understanding fundamental biology, with findings that can lead to clinical applications. Fluorescence lifetime imaging (FLI), in particular, provides a way to examine samples non-invasively and gain unique insights into the cellular microenvironment. FLI can be used to quantify protein-protein interactions, biosensor activity, and ligand-receptor engagement in vivo. However, this indirect imaging method requires post-collection processing in which datasets are analyzed through iterative optimization, a computationally demanding step that limits the technique.
The research team developed a 3D Convolutional Neural Network (CNN) specifically designed to process data from FLI systems, without requiring any user-defined parameter entry. Traditional analysis is time-consuming and depends on complex, user-driven mathematical fitting, which makes it difficult to produce consistent, reproducible images. Those difficulties have been barriers to using this technology in a clinical setting.
Moreover, the network can be trained efficiently using a synthetic data generator and validated with experimental datasets, avoiding the need to acquire massive training datasets experimentally. The CNN can process experimental fluorescence decays acquired by Time-Correlated Single Photon Counting (TCSPC)- or gated ICCD-based instruments, the most common FLI technologies.
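To illustrate the idea of a synthetic data generator, the sketch below produces the kind of training sample such a network could learn from: a bi-exponential decay with randomly drawn lifetimes and fractional amplitude, plus a crude noise term. The parameter ranges, gate count, and noise model are assumptions for illustration, not the authors' actual generator.

```python
import math
import random

def synthetic_decay(num_gates=256, gate_width_ns=0.04, seed=None):
    """Generate one synthetic bi-exponential fluorescence decay.

    Hypothetical sketch: two lifetimes (tau1, tau2) and one fractional
    amplitude (a_frac) are drawn at random (ranges are assumed), then a
    simple noise perturbation is added at each time gate.
    """
    rng = random.Random(seed)
    tau1 = rng.uniform(0.2, 0.8)    # short lifetime, ns (assumed range)
    tau2 = rng.uniform(1.5, 3.0)    # long lifetime, ns (assumed range)
    a_frac = rng.uniform(0.0, 1.0)  # fractional amplitude of tau1
    decay = []
    for k in range(num_gates):
        t = k * gate_width_ns
        ideal = a_frac * math.exp(-t / tau1) + (1 - a_frac) * math.exp(-t / tau2)
        # crude shot-noise stand-in: Gaussian perturbation scaled to sqrt(signal)
        noisy = max(0.0, ideal + rng.gauss(0, 0.01) * math.sqrt(ideal))
        decay.append(noisy)
    return decay, (tau1, tau2, a_frac)
```

A training set would pair many such decays with their ground-truth parameters, so the network can learn the inverse mapping without experimental acquisition.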
The 3D CNN, "FLI-Net" (Fluorescence Lifetime Imaging - Network), is designed to mimic a curve-fitting approach using layers of convolutional operations and non-linear activation functions. FLI-Net takes time- and spatially resolved fluorescence decays as input in the form of a 3D data cube (x, y, t), estimates bi-exponential parameters (two lifetimes and one fractional amplitude) at each pixel, and returns them as output images with the same lateral dimensions (x, y) as the input. The result is a comprehensive view of multiple biological processes happening within the tissues and cells.
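The (x, y, t)-cube-in, (x, y)-maps-out structure can be made concrete with a much simpler baseline. FLI-Net itself is a trained network estimating bi-exponential parameters; the sketch below instead applies classical rapid lifetime determination (RLD) to the mono-exponential case, pixel by pixel, purely to show the same input/output mapping. The function name and nested-list layout are illustrative choices, not part of the published method.

```python
import math

def rld_lifetime_map(cube, gate_width_ns):
    """Per-pixel rapid lifetime determination (RLD) on an (x, y, t) cube.

    For a mono-exponential decay sampled at two adjacent gates,
    tau = dt / ln(I_k / I_{k+1}). Input is a nested list indexed
    [x][y][t]; output is an (x, y) map of lifetimes.
    """
    tau_map = []
    for row in cube:
        tau_row = []
        for decay in row:
            i0, i1 = decay[0], decay[1]
            if i1 > 0 and i0 > i1:
                tau_row.append(gate_width_ns / math.log(i0 / i1))
            else:
                tau_row.append(float("nan"))  # undefined for non-decaying signal
        tau_map.append(tau_row)
    return tau_map
```

Where RLD uses two gates and a closed-form ratio, FLI-Net learns to use the full decay at every pixel, which is what lets it handle the harder bi-exponential case.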
To test FLI-Net, the researchers imaged cancer cells with FLI to visualize the metabolic status of live cells, and used Förster Resonance Energy Transfer (FRET) to measure levels of receptor engagement in a cell. The team found that the deep neural network performed as well as, and in some cases better than, commercial software currently in use. They also found that the technique required less light to produce detailed images.
"We are providing tools that are going to be far more amenable for the end-users, meaning the biologists, but also the surgeon," said Xavier Intes, a professor of biomedical engineering who led this research for Rensselaer.
The researchers found that the FLI-Net architecture is well suited for image formation paradigms such as FLI. Therefore, their deep-learning framework can be applied as a generalized new tool for fit-free analysis of complex fluorescence lifetime imaging processes.
The goal of the research is to bring the benefits of FLI into the clinical setting as a key tool in precision medicine. "This is an enabling technology for many clinical applications. For instance, it may be used for in vivo real-time imaging of a tumor, which may help surgeons see the lesion during their procedures, enabling them to completely remove cancer tissue with minimal damage to healthy tissue," said Pingkun Yan, an assistant professor of biomedical engineering at Rensselaer.