Currently, we are pursuing the following research topics:

Ultrafast Biophotonics

Space and time, two key physical dimensions, constitute the basis of modern metrology. In bio-imaging, as recognized by the 2014 Nobel Prize in Chemistry, there have been breathtaking advances in improving the spatial resolution of microscopic imaging, resulting in an impressive arsenal of nanoscopy tools that can break the diffraction limit of light. Although equally important, the pursuit of high temporal resolution has only recently attracted attention, thanks to the emergence of several enabling technologies. The motivation to develop these ultrafast imagers stems from a landscape shift in contemporary biology, from morphological exploration and phenotypic probing of organisms toward quantitative insights into underlying mechanisms at the molecular level. Transient molecular events occur on timescales ranging from the tens and hundreds of microseconds that ligands take to bind, down to the tens of femtoseconds that molecules take to vibrate. Ultrafast imaging is therefore essential for observing and characterizing such dynamic events.

Heretofore, most ultrafast phenomena at microscopic scales were probed using non-imaging-based methods. However, since most transient molecular events arise from a cascade of molecular interactions rather than occurring in isolation, the lack of images limits the scope of the analysis. On the other hand, although conventional cameras based on electronic image sensors, such as CCD and CMOS, can capture two-dimensional images, they fall short of providing a high frame rate under desirable imaging conditions because of electronic bandwidth limitations (data transfer, digitization, and writing).

To solve this fundamental problem, our strategy is to introduce the paradigm of compressed sensing into high-speed optical imaging. Rather than measuring each spatiotemporal voxel of an event datacube, we leverage the compressibility of biological scenes and thereby use the camera's bandwidth more efficiently: the image data are compressed before being digitized and transferred to the host computer. This feature makes our approaches especially advantageous for recording high-speed image data, which would otherwise require tremendous camera bandwidth and hardware resources if measured under Nyquist sampling. Based on this strategy, we will explore ultrafast bioimaging at frame rates from a few MHz to ten THz, a range that is essential for understanding biomolecular behavior but is currently inaccessible to conventional high-speed cameras. The resulting research program will ultimately lead to a new generation of ultrafast bioimagers and transform the state of the art.
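To illustrate the compressed-sensing principle behind this strategy (this is a minimal textbook sketch, not our CUP reconstruction code; the random sensing matrix and all parameter values are assumptions chosen for demonstration), the example below recovers a sparse signal from far fewer measurements than Nyquist sampling would require, using iterative soft-thresholding (ISTA):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene": an n-sample signal with only k nonzero entries,
# standing in for a compressible spatiotemporal datacube.
n, k, m = 200, 5, 60
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Compressive measurement: m << n random projections (sub-Nyquist).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: alternate a gradient step and soft-thresholding to solve
#   min_x 0.5 * ||y - A x||^2 + lam * ||x||_1
lam = 0.02
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(3000):
    z = x - (A.T @ (A @ x - y)) / L    # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The same idea underlies compressed ultrafast photography: because the scene is sparse in some basis, far fewer encoded measurements than spatiotemporal voxels suffice for faithful reconstruction.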

See astonishing slow-motion movies captured by our CUP camera: Compressed Ultrafast Photography tops 100 billion fps

Representative publications:

  • J. Liang, C. Ma, L. Zhu, Y. Chen, L. Gao, and L. V. Wang, “Single-shot real-time video recording of a photonic Mach cone induced by a scattered light pulse,” Science Advances, 3, e1601814 (2017)
  • L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature, 516, 74-77 (2014)
  • J. Liang, L. Gao, P. Hai, and L. V. Wang, “Encrypted three-dimensional dynamic imaging using snapshot compressed ultrafast photography,” Scientific Reports, 5, 15504 (2015)

High-speed plenoptic medical cameras 

Optical imaging probes such as otoscopes and laryngoscopes are essential tools that doctors use to see deep into the human body. Until now, however, they have been limited to two-dimensional (2D) views of tissue lesions in vivo, a constraint that frequently jeopardizes their diagnostic usefulness. My team has developed a high-speed, three-dimensional (3D) plenoptic imaging tool that promises to transform diagnosis by markedly improving the sensitivity and specificity of the images produced.

Depth imaging is critically needed in medical diagnostics because most tissue lesions manifest as abnormal 3D structural changes. For example, otitis media is the most frequently diagnosed bacterial infection in children, accounting for over 20 million visits to office-based physicians in the US each year. The 3D shape of the tympanic membrane is one of the most effective diagnostic features for differentiating between a normal eardrum (generally flat), an eardrum with acute otitis media (AOM) (bulging), and an eardrum with otitis media with effusion (OME) (retracted). While AOM is generally treated with antibiotics, OME is managed more conservatively, without an antimicrobial agent. Nevertheless, differentiating OME from AOM is a non-trivial task with conventional otoscopes, which provide only a 2D image of the eardrum. Misjudgment of eardrum bulging contributes to the over-prescription of antibiotics, the most common mistake made in pediatric clinics.

Another striking example concerns the approximately 7.5 million people in the US who suffer from voice disorders due to trauma or disease. Human vocal fold vibration is a complex 3D movement whose frequency ranges from about 100 Hz (typical males) to 200 Hz (typical females). Aberrant vibration amplitude and frequency are hallmarks of vocal cord polyps, vocal cord nodules, recurrent nerve paralysis, and laryngeal cancer, the four primary conditions that cause hoarseness. The current standard-of-care methods for diagnosing these voice disorders are videostroboscopy and high-speed videoendoscopy. Both techniques image only the horizontal movements of the vocal folds, making accurate diagnosis difficult. Despite its vital importance in clinical diagnosis and in fundamental phonation research, the absolute vibration of the vocal folds along the airflow direction cannot be easily measured with conventional laryngoscopes.

To make transformative advances over state-of-the-art approaches, we have developed an innovative tool with many potential applications. Our plenoptic medical cameras enable high-speed 3D imaging in a variety of medical imaging applications (see a typical configuration below). Based on computational optical imaging, the camera captures the angles and locations of light rays emitted from a scene within a single exposure. Given prior knowledge of the disparity-to-depth mapping, a 3D scene can then be estimated from the measured light field data using algorithms such as the scale-depth space transform. Because no scanning is needed, the volumetric frame rate is limited only by the camera's data transfer bandwidth and can reach 1,000 Hz. Additionally, since plenoptic imaging measures depth at all spatial locations rather than at selected points or lines, it yields significantly higher spatial and depth resolution than laser triangulation and stereoscopy. Furthermore, the overall layout of a plenoptic medical camera is simple, and it can be built using only off-the-shelf optical components, such as lenses and prisms. The projected manufacturing cost is comparable to that of the conventional medical scopes routinely used in primary care clinics. Also, thanks to its simplicity, the system requires only minimal training to operate.
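As a simplified illustration of disparity-to-depth mapping (a sketch using the classic pinhole/stereo relation, not our calibrated mapping; the function name and all camera parameters are hypothetical), depth at each pixel can be recovered from the disparity between two sub-aperture views:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_mm):
    """Classic pinhole/stereo relation: depth = f * b / d.

    disparity_px    : per-pixel disparity between two sub-aperture views
    focal_length_px : focal length expressed in pixels
    baseline_mm     : separation between the two views' centers of projection
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)   # zero disparity -> infinitely far
    valid = d > 0
    depth[valid] = focal_length_px * baseline_mm / d[valid]
    return depth                      # same length unit as the baseline

# Toy 2x2 disparity map (pixels) with hypothetical camera parameters.
disp = np.array([[4.0, 2.0],
                 [1.0, 0.0]])
depth_mm = disparity_to_depth(disp, focal_length_px=800, baseline_mm=1.5)
print(depth_mm)   # closer objects produce larger disparity
```

In practice the mapping is obtained by calibration rather than from nominal optics, which is why prior knowledge of the disparity-to-depth relation is required before 3D reconstruction.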

Representative Publications:

  • S. Zhu, A. Lai, K. Eaton, P. Jin, and L. Gao, “On the fundamental comparison between unfocused and focused light field cameras,” Appl. Opt. 57, A1-A11 (2018)
  • L. Gao, N. Bedard, and I. Tošić, “Disparity-to-depth calibration in light field imaging,” in Computational Optical Sensing and Imaging, paper CW3D-2, Optical Society of America (2016)
  • L. Gao, I. Tošić, and N. Bedard, “Optical design of a light field otoscope,” U.S. Patent Application No. 15/197,601
  • L. Gao and I. Tošić, “Construction of an individual eye model using a plenoptic camera,” U.S. Patent Application No. US20170105615 A1

Photoacoustic imaging 

Photoacoustic imaging (PAI) has seen immense growth in the past decade, providing unprecedented spatial resolution and functional information at depths in the optical diffusive regime. PAI uniquely combines the advantages of optical excitation and acoustic detection. This hybrid imaging modality features high sensitivity to optical absorption and wide scalability of spatial resolution with the desired imaging depth. A major barrier to reaching its full potential is the lack of selective agents and systems for multiscale PAI. Our group focuses on developing next-generation PAI modalities and contrast agents for multiscale imaging applications, ranging from single-molecule analysis to whole-body imaging. We envision that our research will push the boundaries of imaging resolution, depth, sensitivity, and specificity through the integrated efforts of chemists, physicists, biologists, and engineers across the Illinois campus.

Figure. Multiscale photoacoustic imaging of (a) vasculature structure in a mouse ear, (b) melanin in melanoma cells, and (c) cell nuclei.

Representative Publications:

  • L. V. Wang and L. Gao, “Photoacoustic microscopy and computed tomography: from bench to bedside”, Annu. Rev. Biomed. Eng. 16, 155 (2014).
  • L. Zhu, L. Li, L. Gao, and L. V. Wang, “Multi-view optical resolution photoacoustic microscopy”, Optica. 1(4), 217 (2014).
  • J. Liang, L. Gao, C. Li, and L. V. Wang, “Spatially Fourier-encoded photoacoustic microscopy using a digital micromirror device”, Opt. Lett. 39(3), 430 (2014).
  • L. Gao, C. Zhang, C. Li, and L. V. Wang, “Intracellular temperature mapping with fluorescence-assisted photoacoustic-thermometry,” Appl. Phys. Lett. 102, 193705 (2013).
  • L. Gao, L. Wang, C. Li, Y. Liu, H. Ke, C. Zhang, and L. V. Wang, “Single-cell photoacoustic thermometry,” J. Biomed. Opt. 18, 026003 (2013).

Near-eye 3D display

Near-eye three-dimensional displays have seen rapid growth and hold great promise in a variety of applications, such as gaming, film viewing, and professional scene simulation. Currently, most near-eye three-dimensional displays are based on computer stereoscopy, which presents two images with parallax in front of the viewer's eyes. Stimulated by binocular disparity cues, the viewer's brain then creates an impression of the three-dimensional structure of the portrayed scene. However, stereoscopic displays suffer from a major drawback, the vergence-accommodation conflict, which reduces the viewer's ability to fuse the binocular stimuli while causing discomfort and fatigue. Because the images are displayed on a single surface, the focus cues specify the depth of the display screen (i.e., the accommodation distance) rather than the depths of the depicted scene (i.e., the vergence distance). This is the opposite of the viewer's perception in the real world, where these two distances always coincide. To alleviate this problem, one must present correct focus cues that are consistent with binocular stereopsis.

In our lab, we developed an optical mapping near-eye (OMNI) three-dimensional display method that provides correct focus cues for depth perception, thus eliminating the vergence-accommodation conflict. By mapping different sub-panel images of a display screen to various axial depths, we can create a high-resolution three-dimensional image over a wide depth range. The image dynamic range and refresh rate are limited only by the display screen itself, reaching up to 12 bits and 30 Hz, respectively. The OMNI display method will lead to a new generation of three-dimensional displays and holds great potential for various wearable applications.
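One common design choice for multifocal displays of this kind is to space the virtual focal planes uniformly in diopters (1/m), since the eye's accommodation response is approximately uniform on a dioptric scale. The sketch below illustrates that choice only; the function name, depth range, and plane count are assumptions, not the actual OMNI mapping:

```python
import numpy as np

def focal_plane_depths_m(near_m=0.25, far_m=4.0, n_planes=6):
    """Depths of n virtual focal planes, evenly spaced in diopters.

    near_m / far_m : nearest and farthest plane depths in meters
    Returns depths in meters, ordered from near to far.
    """
    diopters = np.linspace(1.0 / near_m, 1.0 / far_m, n_planes)
    return 1.0 / diopters

depths = focal_plane_depths_m()
print(np.round(depths, 3))  # plane depths in meters, near to far
```

Spacing planes evenly in diopters rather than in meters concentrates planes near the viewer, where the eye is most sensitive to focus error.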

Figure. Optical schematic of OMNI display

Representative Publications:

  • Wei Cui and Liang Gao, “Optical mapping near-eye three-dimensional display with correct focus cues,” Opt. Lett. 42, 2475-2478 (2017)