High content screening

Synonyms
HCS

MIA

Description

ModularImageAnalysis (MIA) is an ImageJ plugin which provides a modular framework for assembling image and object analysis workflows. Detected objects can be transformed, filtered, measured and related. Analysis workflows are batch-enabled by default, allowing easy processing of high-content datasets.

MIA is designed for “out-of-the-box” compatibility with spatially-calibrated 5D images, yielding measurements in both pixel and physical units. Functionality can be extended both internally, via integration with SciJava’s scripting interface, and externally, with Java modules that extend the MIA framework. Both have full access to all objects and images in the analysis workspace.

Workflows are, by default, compatible with batch processing of multiple files within a single folder. Thanks to Bio-Formats, MIA has native support for multi-series image formats such as Leica .lif and Nikon .nd2.

Workflows can be automated from initial image loading through processing, object detection, measurement extraction, visualisation, and data exporting. MIA includes nearly 200 modules integrated with key ImageJ plugins such as Bio-Formats, TrackMate and Weka Trainable Segmentation.

Module(s) can be turned on/off dynamically in response to factors such as availability of images and objects, user inputs and measurement-based filters. Switches can also be added to “processing view” for easy workflow control.

MIA is developed in the Wolfson Bioimaging Facility at the University of Bristol.

Description

Phindr3D is a comprehensive shallow-learning framework for automated quantitative phenotyping of three-dimensional (3D) high content screening image data. It uses unsupervised, data-driven, voxel-based feature learning, which enables computationally facile classification, clustering and data visualization.

Please see our GitHub page and the original publication for details.

Description

KNIME workflow to visualize a dataset described by multiple quantitative features (e.g. a list of samples or cells, each described by multiple morphological features) as a 3D cloud of points (each point corresponding to one sample/cell) as well as a line plot (one line per sample/cell).

For the 3D plot, the workflow uses Principal Component Analysis (PCA) for dimensionality reduction, i.e. it reduces the information for each sample from n features to 3 pseudo-features that are used as the x,y,z coordinates of that sample. The original features should cover similar value ranges to make sure the PCA is not biased towards features with large values. One option is to normalize the values (min/max or Z-score).

Also make sure that the resulting PCA captures a decent percentage of the original data variance (at least 70%); otherwise the PCA plot will not be representative of the original data distribution. The percentage is shown in the title of the PCA plot.
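
As an illustration of the same normalization and variance check outside KNIME, here is a minimal Python/scikit-learn sketch; the input file name, the use of a CSV table and the choice of Z-score normalization are assumptions for the example, not part of the workflow itself.

# Minimal sketch (assumed input: a CSV table with one row per sample/cell
# and one numeric column per quantitative feature; "features.csv" is a
# hypothetical file name).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

features = pd.read_csv("features.csv")
X = features.select_dtypes("number").to_numpy()   # keep numeric feature columns only

# Z-score normalization so that large-valued features do not dominate the PCA
X_scaled = StandardScaler().fit_transform(X)

# Reduce n features to 3 pseudo-features used as x,y,z coordinates
pca = PCA(n_components=3)
coords = pca.fit_transform(X_scaled)

# Check that the 3 components capture a decent share of the variance (>= 70%)
explained = pca.explained_variance_ratio_.sum()
print(f"Explained variance: {explained:.1%}")
if explained < 0.70:
    print("Warning: the 3D PCA plot may not represent the data distribution well.")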

The workflow is interactive, so selecting points in one panel of the figure highlights the corresponding samples/cells in the other panel.

It was originally published for the visualization of phenotypic kidney features in zebrafish, but the workflow is generic by design and can be reused for any quantitative feature set. 

Description

Multi-template matching can be used to localize multiple objects using one or several template images.

Unlike previous implementations that allow only a single template, here a set of templates can be used, or the initial template(s) can be transformed by rotation/flipping.

Detection of multiple objects without redundant detections is possible thanks to a Non-Maximum Suppression (NMS) step that relies on the degree of overlap between detections.
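
To illustrate the principle (this is not the API of the Fiji plugin or of the PyPI package), here is a minimal Python sketch using plain OpenCV: template matching with the original, flipped and rotated templates, followed by an overlap-based NMS. File names, score threshold, maximum overlap and the use of intersection-over-smaller-area as the overlap measure are illustrative assumptions.

import cv2
import numpy as np

def match_all(image, templates, score_threshold=0.5):
    """Run normalized cross-correlation for each template and collect candidate hits."""
    hits = []  # each hit: (score, x, y, w, h)
    for tmpl in templates:
        h, w = tmpl.shape[:2]
        res = cv2.matchTemplate(image, tmpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(res >= score_threshold)
        for x, y in zip(xs, ys):
            hits.append((float(res[y, x]), int(x), int(y), w, h))
    return hits

def overlap_ratio(a, b):
    """Overlap measured as intersection area divided by the smaller box area."""
    _, ax, ay, aw, ah = a
    _, bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / min(aw * ah, bw * bh)

def nms(hits, max_overlap=0.25):
    """Keep best-scoring detections, discarding any that overlap a kept one too much."""
    kept = []
    for hit in sorted(hits, key=lambda h: h[0], reverse=True):
        if all(overlap_ratio(hit, k) <= max_overlap for k in kept):
            kept.append(hit)
    return kept

# Hypothetical usage: one template plus its flipped and rotated variants.
image = cv2.imread("field.png", cv2.IMREAD_GRAYSCALE)        # assumed file name
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # assumed file name
templates = [template,
             cv2.flip(template, 1),
             cv2.rotate(template, cv2.ROTATE_90_CLOCKWISE)]
detections = nms(match_all(image, templates))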

The solution is available as a Fiji plugin (via the Multi-Template Matching and IJ-OpenCV update sites), as a Python package (Multi-Template-Matching on PyPI) and as a KNIME workflow (via the KNIME Hub).
