Deep learning assisted Fourier transform imaging spectroscopy

    Abstract number
    80
    Presentation Form
    Submitted Talk
    Corresponding Email
    [email protected]
    Session
    Stream 2: Machine Learning for Image Analysis
    Authors
    Cory Juntunen (1), Isabel Woller (1), Yongjin Sung (1)
    Affiliations
    1. University of Wisconsin-Milwaukee
    Keywords

    FTS, Deep Learning, Multi-Fluorescence Imaging

    Abstract text

    Summary

    We present a method of multi-color fluorescence imaging with unprecedented throughput by combining Fourier-transform spectroscopy (FTS) and deep learning.

    Introduction

    Multi-fluorescence imaging allows several fluorophores to be identified simultaneously without washing and re-staining the sample, minimizing contamination. Existing filter-based methods have low imaging throughput, and the number of fluorophores that can be used simultaneously is typically small. FTS-based multi-color imaging has been used when high spectral resolution is required, but its imaging throughput is even lower than that of filter-based methods because several thousand images must be recorded. Here we show that deep learning can dramatically increase the imaging throughput of FTS.

    Methods/Materials

    The proposed technique is built upon the combination of FTS and deep learning. The FTS module, which is built upon a Michelson-style interferometer with a separate beam path for the He-Ne correction, is attached to a custom-built epi-fluorescence microscope. All fluorescent dyes in the sample are excited simultaneously using a white-light light-emitting diode (LED) source and a multi-band fluorescence filter cube. The resulting interferogram images are captured by an electron-multiplying charge-coupled device (EMCCD) camera. We train a 1-dimensional convolutional neural network (1DCNN) to accurately determine the amount of fluorescent signal for each pixel. 840,000 interferograms, acquired from 28 samples, are used for training, and 30,000 interferograms from a different sample are used for validation. The 1DCNN is tested by predicting the type and amount of fluorescent signal at each pixel of a new, unknown sample, and then comparing these results with the ground-truth FTS-reconstructed fluorescent image. The 1DCNN consists of several convolutional, max pooling, and fully connected layers with ReLU activation, L2 regularization, and sigmoid activation in the output layer.
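    The architecture described above (stacked convolution/max-pooling stages, fully connected layers with ReLU, a sigmoid output, and L2 regularization) can be sketched as follows. This is a minimal illustration, not the authors' exact network: the layer widths, kernel sizes, the 50-sample input length (1/20 of a ~1000-sample scan), and the use of PyTorch are all assumptions.

    ```python
    import torch
    import torch.nn as nn

    class Interferogram1DCNN(nn.Module):
        """Per-pixel 1DCNN sketch: maps a short interferogram (assumed
        50 samples, ~1/20 of a full FTS scan) to the amounts of three
        fluorophores. Layer sizes here are illustrative assumptions."""

        def __init__(self, n_samples=50, n_dyes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * (n_samples // 4), 64), nn.ReLU(),
                nn.Linear(64, n_dyes),
                nn.Sigmoid(),  # bounded per-pixel dye amounts in (0, 1)
            )

        def forward(self, x):  # x: (batch, 1, n_samples)
            return self.head(self.features(x))

    model = Interferogram1DCNN()
    # L2 regularization applied as weight decay in the optimizer
    optimizer = torch.optim.Adam(model.parameters(), weight_decay=1e-4)
    out = model(torch.randn(8, 1, 50))  # a batch of 8 pixels' interferograms
    ```

    Each forward pass maps one pixel's interferogram directly to dye amounts, which is what allows classification without first reconstructing the emission spectrum.
    
    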

    Results and Discussion

    Built on an interferometer, FTS typically records more than 1000 samples (i.e., intensities of the interferogram) over varying optical path differences. Using deep learning, we demonstrate that the sampling number can be reduced to 1/20 for three-channel fluorescence imaging. Although compressed sensing has also been shown to reduce the sampling number, the deep-learning-based approach can classify the types of fluorescent dyes without reconstructing the emission spectrum and can thereby reduce the required sampling number further. We also demonstrate that robust classification is possible without relying on the laser interferometer, which is typically installed in parallel with the main beam path to monitor the actual optical path differences. We experimentally demonstrate our method using bovine pulmonary artery endothelial (BPAE) cells labeled with three fluorophores.

    Conclusion

    Using deep learning, we have demonstrated that the interferogram sampling can be reduced to 1/20 of that required by the existing FTS method for three-channel fluorescence imaging.
