by Rohini Subrahmanyam, Harvard University
DEEP2 validation results on mouse pyramidal neurons with dendritic arbors at 2 and 6 scattering lengths (SLs) below the surface. PSTPM images of mouse pyramidal neurons were recorded, and their corresponding simulated DEEP-TFM image stacks were generated using the forward model. A subset of the data was used to train the DEEP2 inverse model, and the remaining unseen data were used to validate model performance. a, d, g, j Four representative simulated DEEP-TFM image stacks (averaged over the 32 patterns) used for validation. b, e, h, k The corresponding PSTPM ground truths for the (a), (d), (g) and (j) instances. c, f, i, l DEEP2 reconstructions corresponding to the (a), (d), (g) and (j) instances. The intensities along the yellow lines M, N, O, and P are plotted in (m–p). Credit: Light: Science & Applications (2023). DOI: 10.1038/s41377-023-01248-6
When an algorithm-driven microscopy technique developed in 2021 (one that needs only a fraction of the images earlier techniques required) isn't fast enough, what do you do?
Dive DEEPer, and square it. At least, that was the solution used by Dushan Wadduwage, John Harvard Distinguished Science Fellow at the FAS Center for Advanced Imaging.
Scientists have worked for decades to image the depths of a living brain. They first tried fluorescence microscopy, a century-old technique that relies on fluorescent molecules and light. However, the wavelengths weren't long enough: the light scattered before it could penetrate to an appreciable depth.
The invention of two-photon microscopy in 1990 allowed longer wavelengths of light to shine onto the tissue, causing fluorescent molecules to absorb not one but two photons. The longer wavelengths used to excite the molecules scattered less and could penetrate farther.
But two-photon microscopy can typically only excite one point on the tissue at a time, which makes for a long process requiring many measurements. A faster way to image would be to illuminate multiple points at once across a wider field of view, but this, too, has its drawbacks.
"If you excite multiple points at the same time, then you can't resolve them," Wadduwage said. "When it comes out, all the light is scattered, and you don't know where it comes from."
To overcome this difficulty, Wadduwage's group began using a special type of microscopy, described in Science Advances in 2021. The team excited multiple points on the tissue in a wide-field mode, using different pre-encoded excitation patterns. This technique—called De-scattering with Excitation Patterning, or DEEP—works with the help of a computational algorithm.
Dushan Wadduwage. Credit: Stephanie Mitchell/Harvard Staff Photographer
"The idea is that we use multiple excitation codes, or multiple patterns to excite, and we detect multiple images," Wadduwage said. "We can then use the information about the excitation patterns and the detected images and computationally reconstruct a clean image."
The results are comparable in quality to images produced by point-scanning two-photon microscopy. Yet they can be produced with just hundreds of images, rather than the hundreds of thousands typically needed for point scanning. With the new technique, Wadduwage's group was able to look as far as 300 microns deep into live mouse brains.
Still not good enough. Wadduwage wondered: Could DEEP produce a clear image with only tens of images?
In a recent paper published in Light: Science & Applications, he turned to machine learning to make the imaging technique even faster. He and his co-authors trained a neural network on multiple sets of images, eventually teaching it to reconstruct a fully resolved image from only 32 scattered images (rather than the 256 reported in their first paper). They named the new method DEEP-squared: Deep learning powered de-scattering with excitation patterning.
The team took images produced by conventional point-scanning two-photon microscopy, providing what Wadduwage called the "ground truth." A physics-based computational model of the DEEP microscope's image-formation process then simulated the corresponding scattered input images, which were used to train the DEEP-squared AI model. Once the model produced reconstructions that matched the ground-truth references, the researchers used it to capture new images of blood vessels in a mouse brain.
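As a sketch of that training recipe, the following PyTorch loop supervises a deliberately tiny convolutional network on simulated 32-pattern stacks against ground-truth images. The architecture, the L1 loss, and the random stand-in tensors are assumptions for illustration; the paper's actual DEEP-squared network and training setup differ.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in tensors: in the real pipeline `ground_truth` is PSTPM data and
# `scattered` comes from the physics-based forward model; random data here
# just lets the loop run end to end.
B, T, H, W = 4, 32, 64, 64
ground_truth = torch.rand(B, 1, H, W)
scattered = torch.rand(B, T, H, W)

class TinyDescatterNet(nn.Module):
    """A small stand-in for DEEP-squared: maps a 32-pattern stack to one image."""
    def __init__(self, n_patterns: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_patterns, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyDescatterNet(n_patterns=T)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # illustrative choice of reconstruction loss

for step in range(50):
    prediction = model(scattered)        # (B, 1, H, W) de-scattered estimate
    loss = loss_fn(prediction, ground_truth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key design point is that the expensive ingredient, paired training data, comes from the physics simulator: every ground-truth image yields a matched scattered stack without extra microscope time.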
"It is like a step-by-step process," Wadduwage said. "In the first paper we worked on the optics side and reached a good working state, and in the second paper we worked on the algorithm side and tried to push the boundary all the way and understand the limits. We now have a better understanding that this is probably the best we can do with the current data we acquire."
Still, Wadduwage has more ideas for boosting the capabilities of DEEP-squared, including improving the instrument design to acquire data faster. He said DEEP-squared exemplifies cross-disciplinary cooperation, as will any future innovations built on the technology.
"Biologists who did the animal experiments, physicists who built the optics, and computer scientists who developed the algorithms all came together to build one solution," he said.
More information: Navodini Wijethilake et al, DEEP-squared: deep learning powered De-scattering with Excitation Patterning, Light: Science & Applications (2023). DOI: 10.1038/s41377-023-01248-6
Provided by Harvard University
This story is published courtesy of the Harvard Gazette, Harvard University's official newspaper. For additional university news, visit Harvard.edu.