Abstract
Label-free modes of light microscopy are a central pillar of routine workflows in cell culture laboratories. Scientists rely on these methods, combined with their deep insight into cell biology and their experience in judging such images, to assess the state of their cell cultures and to drive critical decisions. Here, we review an analysis paradigm that employs deep learning methodologies as the key ingredient to turn label-free microscopy into a universal, quantitative assay technique:
A human expert uses labeling software to highlight structures of interest on top of the original images. A suitable Artificial Neural Network (ANN) is then trained to reproduce these labels. Finally, the performance of the ANN is validated on new images that were not used during training.
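The paradigm of fitting a model to expert-drawn labels and validating it on held-out images can be illustrated with a deliberately minimal sketch. Here a trivial intensity-threshold "model" stands in for the ANN, and simulated pixel lists stand in for images and expert label masks; all names and numbers are illustrative assumptions, not part of the reviewed method, which would use a deep segmentation network trained on real annotated micrographs.

```python
# Minimal sketch of the label -> train -> validate paradigm.
# A threshold classifier is a stand-in for the ANN (illustrative only).
import random

random.seed(0)

def make_image(n_pixels=100):
    """Simulate a grayscale image and its expert label mask.
    Bright pixels (> 0.6) play the role of the structure of interest."""
    pixels = [random.random() for _ in range(n_pixels)]
    labels = [p > 0.6 for p in pixels]  # the expert's annotations
    return pixels, labels

dataset = [make_image() for _ in range(20)]
train, validation = dataset[:15], dataset[15:]  # hold out unseen images

def accuracy(threshold, images):
    """Fraction of pixels where the model reproduces the expert label."""
    correct = total = 0
    for pixels, labels in images:
        for p, label in zip(pixels, labels):
            correct += ((p > threshold) == label)
            total += 1
    return correct / total

# "Training": choose the threshold that best reproduces the labels.
best = max((t / 100 for t in range(100)), key=lambda t: accuracy(t, train))

# Validation: measure performance on images never used in training.
val_acc = accuracy(best, validation)
print("learned threshold:", round(best, 2), "validation accuracy:", round(val_acc, 2))
```

The essential point the sketch preserves is the strict separation of training and validation images: only the accuracy on the held-out set is evidence that the model generalizes rather than memorizes.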
In a number of examples, we demonstrate that very fine structures such as neurites can be detected, that treatment-induced effects can be quantified without prior staining of the structure, and that standard cell regions such as the nucleus and cytoplasm can be segmented with precision. Such a standardized and generic approach to cell morphology quantification opens the door to an unbiased, multiparametric characterization and analysis of complex biological effects.
To exploit the full potential of this methodology in productive routine use, it is best implemented in an automated system in which microscopy, machine learning, and data handling are tightly integrated. This minimizes wasted data scientists' time as well as variability and human error.