The microscope is a symbol of humanity’s entry into the microscopic world. Microscopes are essential analytical instruments in scientific research and medicine. As science advances, the performance demands placed on microscopes keep rising. Introducing artificial intelligence (AI) can help microscopes see more clearly and process more data faster, making them more real-time, accurate, and automated.
In recent years, many companies, research institutions, and universities have invested heavily in applying AI to microscopy, and intelligent microscope technology is now developing rapidly.
Over the years, researchers have worked to improve the resolution and clarity of microscopes. Advances in computer technology and tools, improvements in relevant theories and methods, better raw materials, refined manufacturing processes and detection methods, and innovative observation techniques have together improved microscope imaging quality and accelerated data processing, making microscopes more automated and intelligent.
The latest research progress of AI applied to microscopy
Taking medical care as an example, smart microscopes have four advantages over traditional microscopes:
- Efficiency
The smart microscope works automatically and efficiently, counting cells and measuring their area and other properties in real time.
- Accuracy
The smart microscope is accurate and consistent, avoiding differences in interpretation between doctors.
- Ease of use
The smart microscope is user-friendly, and the results are directly fed back to the eyepiece without interfering with the doctor’s reading and recording.
- Cost-effectiveness
The smart microscope’s algorithm can be upgraded to support new diseases.
The following five studies focus on artificial-intelligence-assisted microscopy imaging.
1. PyJAMAS: Open source, multi-modal segmentation and analysis of microscope images
Scientists’ ability to resolve fine details using light microscopy continues to improve, along with an increasing need for quantitative images to detect and measure phenotypes. Despite their central role in cell biology, many image analysis tools require financial investment, are released as proprietary software, or are implemented in languages that are not beginner-friendly and therefore are used as black boxes.
To overcome these limitations, researchers at the University of Toronto developed PyJAMAS, an open-source tool for image processing and analysis written in Python. PyJAMAS provides a variety of segmentation tools, including watershed- and machine-learning-based methods; uses Jupyter notebooks to display and reproduce data analyses; and is available through a cross-platform graphical user interface or, via a comprehensive application programming interface (API), from Python scripts.
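To give a flavor of the kind of analysis such tools automate, here is a minimal segmentation-and-measurement sketch using SciPy. It is not PyJAMAS’s actual API, just an illustration of the threshold-label-measure pipeline that quantitative image-analysis tools build on:

```python
import numpy as np
from scipy import ndimage

def segment_and_measure(image, threshold):
    """Threshold an intensity image, label connected components,
    and measure per-object area -- the kind of quantitative read-out
    PyJAMAS-style tools automate."""
    mask = image > threshold
    labels, n_objects = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=range(1, n_objects + 1))
    return labels, n_objects, areas

# Synthetic image: two bright square "cells" on a dark background.
img = np.zeros((32, 32))
img[2:6, 2:6] = 1.0      # 16-pixel object
img[20:28, 20:28] = 1.0  # 64-pixel object

labels, n, areas = segment_and_measure(img, threshold=0.5)
print(n, sorted(areas.tolist()))  # 2 objects, areas 16 and 64
```

Real microscopy images would of course need denoising and more robust segmentation (e.g. watershed) before measurement, which is exactly where dedicated tools earn their keep.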
2. A microscopy analysis neural network solves detection, enumeration, and segmentation from image-level annotations
The development of deep learning methods for detecting, segmenting, or classifying structures of interest has transformed the field of quantitative microscopy. High-throughput quantitative image analysis presents challenges due to the complexity of image content and the difficulty of obtaining precisely annotated data sets. Methods capable of reducing the annotation burden associated with training deep neural networks on microscopy images are therefore urgently needed.
Here, researchers from Canada’s CERVO Brain Research Center introduce a weakly supervised microscopy analysis neural network (MICRA-Net) that can be trained on simple primary classification tasks using image-level annotations to solve multiple more complex tasks, such as semantic segmentation. When no precisely annotated dataset is available, MICRA-Net relies on latent information embedded in the trained model to achieve similar performance to established architectures.
This learned information is extracted from the network using combined gradient class activation maps to generate detailed feature maps of biological structures of interest. The researchers demonstrate how MICRA-Net can significantly ease the expert annotation process on a variety of microscopy datasets and can be used for high-throughput quantitative analysis of microscopy images.
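The core weakly supervised move, turning a coarse class-activation map into a pixel-level mask, can be sketched with NumPy alone. This is an illustrative simplification, not MICRA-Net’s implementation (which combines gradient class activation maps from a trained network):

```python
import numpy as np

def mask_from_activation(cam, out_shape, thresh=0.5):
    """Upsample a coarse class-activation map to image resolution
    (nearest-neighbour) and threshold it into a binary mask -- the
    basic step by which weakly supervised models turn image-level
    classification signals into segmentations.
    Assumes out_shape is an integer multiple of cam's shape."""
    cam = (cam - cam.min()) / (np.ptp(cam) + 1e-8)   # normalise to [0, 1]
    ry = out_shape[0] // cam.shape[0]
    rx = out_shape[1] // cam.shape[1]
    upsampled = np.kron(cam, np.ones((ry, rx)))      # nearest-neighbour upsample
    return upsampled > thresh

# A 4x4 activation map that is "hot" in the lower-right corner.
cam = np.zeros((4, 4))
cam[2:, 2:] = 1.0
mask = mask_from_activation(cam, (16, 16))
print(mask.shape, int(mask.sum()))  # (16, 16) with an 8x8 active block
```

In the real method the activation maps come from gradients of a network trained only on image-level labels; the point here is just how a classification signal becomes a spatial mask.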
3. AI microscopy technology can quickly identify dead cells and accelerate research on neurodegenerative diseases
Cellular events in neurodegenerative diseases can be captured by longitudinal intravital microscopy of neurons. While the advent of robot-assisted microscopy has helped expand such work to high-throughput protocols with the statistical power to detect transient events, it requires time-intensive manual annotation.
Researchers at the Gladstone Institutes have addressed this fundamental limitation with Biomarker Optimized Convolutional Neural Networks (BO-CNN): interpretable computer vision models trained directly on biosensor activity. The researchers demonstrated BO-CNN’s ability to detect cell death, which trained annotators typically measure.
BO-CNNs detect cell death with superhuman accuracy and speed by learning to recognize the subcellular morphology that correlates with cell viability, despite receiving no explicit supervision to rely on these features. The models also revealed an intranuclear morphological signal that is difficult to detect by eye and had not previously been linked to cell death, yet reliably indicates it. BO-CNNs are broadly applicable to the analysis of intravital microscopy and are critical for interpreting high-throughput experiments.
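The central idea, letting a biosensor channel supply labels “for free” so no human annotation is needed, can be illustrated with a toy one-feature classifier. The distributions and the decision-stump “model” below are invented for illustration only and bear no relation to the real BO-CNN architecture:

```python
import numpy as np

# Toy version of biomarker-optimised training: labels come from
# thresholding a death biosensor channel (no human annotation), and a
# "model" -- here a 1-D decision stump on a single morphology feature --
# is fit to predict those labels from morphology alone.
rng = np.random.default_rng(1)
n = 1000
dead = rng.random(n) < 0.5
# Biosensor is bright in dead cells, dim in live ones (invented numbers).
biosensor = np.where(dead, rng.normal(0.8, 0.05, n), rng.normal(0.2, 0.05, n))
# A morphology feature that shifts with death (invented numbers).
morphology = np.where(dead, rng.normal(2.0, 0.3, n), rng.normal(1.0, 0.3, n))

labels = biosensor > 0.5                    # automatic labels, no annotator
cut = np.mean([morphology[labels].mean(),   # stump: midpoint of class means
               morphology[~labels].mean()])
pred = morphology > cut
accuracy = (pred == dead).mean()
print(f"morphology-only accuracy: {accuracy:.2f}")
```

The payoff is the same as in the study: once trained against the biosensor, the model no longer needs the biosensor (or an annotator) to call cell death.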
4. Correlative image learning of chemo-mechanics in phase-transforming solids
Constitutive laws underlie most physical processes in nature. However, learning such equations in heterogeneous solids (e.g. due to phase separation) is challenging. One such relationship is that between composition and intrinsic strain, which governs the chemical-mechanical expansion of solids.
Here, Stanford researchers develop a generalizable, physically constrained image learning framework to algorithmically learn nanoscale chemical-mechanical constitutive laws from correlated four-dimensional scanning transmission electron microscopy and X-ray spectro-ptychography images. The researchers demonstrated this approach on the technologically relevant battery cathode material LiXFePO4.
This study reveals the functional form of the relationship between composition and intrinsic strain in this two-phase binary solid over the entire composition range (0 ≤ X ≤ 1), including within the thermodynamically unstable miscibility gap. The learned relationship directly validates Vegard’s law of linear response at the nanoscale. The physically constrained, data-driven approach also directly visualizes residual strain fields (by removing compositional and coherency strains) that otherwise could not be quantified. Misfit dislocations cause heterogeneity in the residual strains, which was independently verified by X-ray diffraction line-profile analysis.
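Vegard’s law posits that the intrinsic (eigen)strain varies linearly with composition, eps(X) = eps0 + k·X. A toy sketch of recovering that linear law from noisy (composition, strain) pairs, loosely analogous to the pixel-wise fits in the study, with all numerical values invented:

```python
import numpy as np

# Synthetic data standing in for pixel-wise (composition, strain) pairs
# extracted from correlated 4D-STEM and ptychography images.
rng = np.random.default_rng(0)
k_true, eps0_true = 0.05, 0.001            # invented "true" Vegard slope/offset
X = rng.uniform(0.0, 1.0, 500)             # compositions spanning 0 <= X <= 1
strain = eps0_true + k_true * X + rng.normal(0, 1e-4, X.size)  # noisy strain

# Linear least-squares fit recovers the Vegard coefficient.
k_fit, eps0_fit = np.polyfit(X, strain, 1)
print(f"fitted slope {k_fit:.3f}, intercept {eps0_fit:.4f}")
```

The actual study constrains such fits with physics (mechanical equilibrium, coherency) rather than fitting raw pixels naively, but the linear composition-strain relationship it validates is exactly this form.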
This work provides a method to simultaneously quantify chemical expansion, coherent strain, and dislocations in battery electrodes, which have implications for rate capability and lifetime. Broadly speaking, this work also highlights the potential of integrating correlational microscopy and image learning to extract material properties and physics.
5. Learning Motifs and Their Hierarchy in Atomic Resolution Microscopy
Characterizing materials at atomic resolution and first-principles structural property prediction are two pillars of accelerating functional materials discovery. However, scientists still lack a fast, noise-resistant framework for extracting multi-level atomic structural motifs from complex materials to complement, inform, and guide their first-principles models.
Here, researchers from the National University of Singapore present a machine-learning framework that can quickly extract hierarchies of complex structural patterns from atomically resolved images. The researchers show how this hierarchy of motifs can quickly reconstruct specimens containing a variety of defects. Abstracting complex specimens into simplified motifs allowed them to discover previously unknown structures in Mo─V─Te─Nb polyoxometalates (POMs) and to quantify relative disorder in twisted bilayers of MoS2.
Furthermore, these motif hierarchies provide statistically grounded clues about favored and frustrated pathways during self-assembly. By framing motifs and their hierarchies as coarse-grained descriptions of disorder, the framework enables researchers to understand a wider range of multiscale samples with functional defects and non-trivial topological phases.
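The idea of extracting recurring local motifs from an atomically resolved image can be sketched as patch extraction plus clustering. The minimal k-means below is a toy stand-in for the NUS framework, which is far more sophisticated (hierarchical and noise-resistant):

```python
import numpy as np

def extract_patches(img, size=3):
    """All overlapping size x size patches of img, flattened to vectors."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

def kmeans(data, k, iters=10):
    """Minimal k-means with deterministic initialisation (first k points),
    enough to separate clearly distinct motifs in this toy example."""
    centers = data[:k].copy()
    for _ in range(iters):
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        centers = np.array([data[labels == c].mean(axis=0) for c in range(k)])
    return labels, centers

# Synthetic "lattice image": alternating bright/dark columns produce two
# recurring local motifs (patches starting on a bright vs. a dark column).
img = np.tile([[1.0, 0.0]], (8, 4))          # shape (8, 8)
patches = extract_patches(img)               # 36 patches of 9 pixels each
labels, centers = kmeans(patches, k=2)
print(len(set(labels.tolist())))             # two distinct motifs recovered
```

On real micrographs the motif count is unknown and the patches are noisy, which is why the published framework learns a hierarchy of motifs rather than a single flat clustering.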