Description

This plugin is able to stitch an arbitrary collection or grid of images; it does not matter whether the images are 2D, 3D, 4D or 5D, as long as they are all of the same type. In contrast to the Pairwise Stitching of two images, this plugin loads (and potentially saves) the images from/to hard disk.
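
Besides the Fiji menu, the plugin can also be driven from a script. The sketch below is a minimal example assuming pyimagej and a local Fiji installation; the parameter string mimics what the Fiji macro recorder produces, and the directory, file-name pattern and grid values are placeholders that must be adapted to your data.

```python
# Minimal sketch: running Fiji's Grid/Collection stitching headlessly via pyimagej.
# The exact choice strings (e.g. grid type and order) are best copied from the
# Fiji macro recorder for your plugin version; paths and grid values are placeholders.
import imagej

ij = imagej.init("sc.fiji:fiji", mode="headless")

args = " ".join([
    "type=[Grid: row-by-row]",
    "order=[Right & Down]",
    "grid_size_x=3",
    "grid_size_y=3",
    "tile_overlap=10",
    "first_file_index_i=1",
    "directory=/path/to/tiles",          # placeholder folder containing the tiles
    "file_names=tile_{i}.tif",           # placeholder file-name pattern
    "output_textfile_name=TileConfiguration.txt",
    "fusion_method=[Linear Blending]",
    "compute_overlap",
    "image_output=[Write to disk]",
    "output_directory=/path/to/fused",   # placeholder output folder
])

ij.py.run_macro(f'run("Grid/Collection stitching", "{args}");')
```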

Grid stitching in Fiji
Description

Quote:

The Pairwise Stitching first queries for the two input images that you intend to stitch. They can contain rectangular ROIs which limit the search to those areas; however, the full images will be stitched together. Once you have selected the input images, the actual dialog for the Pairwise Stitching is shown.
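
The plugin itself is operated through the Fiji dialog, but the core idea, estimating the translation between two overlapping images and then fusing them, can be illustrated in Python. The sketch below uses scikit-image's phase correlation as a stand-in for the plugin's registration step; it is a conceptual example, not the plugin's implementation, and it ignores ROIs, subpixel accuracy and blending.

```python
# Conceptual sketch of pairwise stitching for two overlapping 2D images of the
# same shape: estimate the translation with phase correlation, then paste both
# images into a common canvas (the second image simply overwrites the overlap).
import numpy as np
from skimage.registration import phase_cross_correlation

def stitch_pair(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
    # Translation (row, col) that places img2 in img1's coordinate frame.
    shift, error, _ = phase_cross_correlation(img1, img2)
    dy, dx = np.round(shift).astype(int)

    # Canvas large enough to hold both images.
    h = max(img1.shape[0], img2.shape[0] + dy) - min(0, dy)
    w = max(img1.shape[1], img2.shape[1] + dx) - min(0, dx)
    canvas = np.zeros((h, w), dtype=img1.dtype)

    # Offsets so that all coordinates are non-negative.
    oy, ox = max(0, -dy), max(0, -dx)
    canvas[oy:oy + img1.shape[0], ox:ox + img1.shape[1]] = img1
    canvas[oy + dy:oy + dy + img2.shape[0],
           ox + dx:ox + dx + img2.shape[1]] = img2
    return canvas
```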

Description

3DeeCellTracker is a deep learning-based pipeline for tracking cells in 3D time-lapse images of deforming or moving organs.

The installation provides a set of Jupyter notebooks and the library they depend on. The workflow is split into separate training and segmentation/tracking steps.

Examples of cell tracking from the reference publication are: ~100 cells in a freely moving nematode brain, ~100 cells in a beating zebrafish heart, and ~1000 cells in a 3D tumor spheroid.
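
The actual pipeline relies on trained deep-learning models distributed with the notebooks, but the overall segment-then-link structure can be sketched generically. The Python sketch below is illustrative only and is not the 3DeeCellTracker API: a placeholder thresholding segmenter and a simple assignment step stand in for the package's learned segmentation and matching.

```python
# Generic segment-then-link 3D tracking loop (NOT the 3DeeCellTracker API):
# segment each volume, compute cell centroids, then match cells between
# consecutive frames by minimising total centroid distance.
import numpy as np
from scipy import ndimage
from scipy.optimize import linear_sum_assignment

def segment_volume(volume: np.ndarray) -> np.ndarray:
    """Placeholder segmenter: threshold + connected components on a 3D volume."""
    mask = volume > volume.mean() + 2 * volume.std()
    labels, _ = ndimage.label(mask)
    return labels

def centroids(labels: np.ndarray) -> np.ndarray:
    """Centroid (z, y, x) of every labelled cell."""
    ids = np.unique(labels)
    ids = ids[ids != 0]
    return np.array(ndimage.center_of_mass(labels > 0, labels, ids))

def link_frames(prev_pts: np.ndarray, curr_pts: np.ndarray) -> list[tuple[int, int]]:
    """Match cells between two frames with a minimum-cost assignment."""
    cost = np.linalg.norm(prev_pts[:, None, :] - curr_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

def track(movie: np.ndarray) -> list[list[tuple[int, int]]]:
    """movie: 4D array (t, z, y, x). Returns per-frame matches to the previous frame."""
    points = [centroids(segment_volume(vol)) for vol in movie]
    return [link_frames(points[t - 1], points[t]) for t in range(1, len(points))]
```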

Overall procedure of the method (Wen et al., eLife 2021, Figure 1)
Description

napari-lattice is a napari plugin designed for the analysis and visualization of Lattice Lightsheet Microscopy (LLSM) and Oblique Plane Microscopy (OPM) data, with a particular focus on data acquired from Zeiss Lattice Lightsheet systems. It is also available as lls-core, a command-line version of the same tool that does not require napari.

napari-lattice allows users to deskew and deconvolve lattice light sheet data, or any oblique plane microscopy data. To speed up processing, users can provide ROIs to be cropped and processed separately, which significantly reduces processing time and allows many options for parallelisation.
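
Deskewing itself is a shear transform determined by the light-sheet angle and the scan step. The CPU-only SciPy sketch below only illustrates that geometry; it is not napari-lattice / lls-core code, and the angle, step and pixel-size values are arbitrary examples rather than defaults of any particular instrument.

```python
# Conceptual CPU sketch of deskewing an obliquely acquired stack with a shear:
# each z-plane is shifted along y by an amount proportional to the scan step
# and the light-sheet angle. Illustration only, not the plugin's implementation.
import numpy as np
from scipy.ndimage import affine_transform

def deskew(stack: np.ndarray, angle_deg: float = 30.0,
           step_um: float = 0.4, pixel_um: float = 0.145) -> np.ndarray:
    """stack: 3D array (z, y, x); parameter values are hypothetical examples."""
    shift_per_plane = step_um * np.cos(np.deg2rad(angle_deg)) / pixel_um

    # Output y-range grows by the total shear over the stack.
    nz, ny, nx = stack.shape
    out_ny = ny + int(np.ceil(shift_per_plane * (nz - 1)))

    # affine_transform maps output (z, y, x) -> input coordinates:
    # input_y = output_y - shift_per_plane * z, other axes unchanged.
    matrix = np.array([
        [1.0, 0.0, 0.0],
        [-shift_per_plane, 1.0, 0.0],
        [0.0, 0.0, 1.0],
    ])
    return affine_transform(stack, matrix, output_shape=(nz, out_ny, nx), order=1)
```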

Description

This workflow integrates YOLO (You Only Look Once) machine learning models, image pre-processing scripts and labeling tools within the Galaxy platform. Galaxy is an open, web-based platform used primarily for data analysis in computational biology, but it also has applications in image processing and other fields.

How the Galaxy YOLO image segmentation tool works

The combination of Galaxy and YOLO allows researchers to perform object detection and image analysis without requiring extensive programming knowledge. Here's how it generally works: 

  • Web-based interface: Galaxy provides a graphical, user-friendly interface to access powerful analysis tools. Users can simply upload their image data, select the YOLO tool, and run the analysis.
  • YOLO model execution: The Galaxy tool executes a pre-trained YOLO model, often from the Ultralytics framework, on the input images. These models can perform tasks like object detection (drawing bounding boxes) or instance segmentation (creating pixel-level masks).
  • Training and prediction: Some tools allow for both model training and prediction. Users can train a custom YOLO model on their own labeled datasets to detect specific objects of interest; in bioimage analysis, for example, this may mean detecting cells or other structures (see the sketch after this list).
  • Other integrations: Other machine-learning tools can be integrated with YOLO in Galaxy. For instance, the AnyLabeling tool supports YOLO for semi-automated and active learning-based data annotation.
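
For orientation, the prediction and training steps that such a Galaxy tool wraps look roughly as follows when run directly with the Ultralytics Python API. Model names, file paths and training parameters below are placeholders; in Galaxy these choices are made through the web form rather than in code.

```python
# Rough sketch of the YOLO steps wrapped by the Galaxy tool, using the
# Ultralytics Python API directly. Paths and parameters are placeholders.
from ultralytics import YOLO

# Prediction with a pre-trained instance-segmentation model.
model = YOLO("yolov8n-seg.pt")                   # pre-trained weights, downloaded on first use
results = model.predict("cells.png", save=True)  # writes annotated output images
for r in results:
    print(r.boxes.xyxy)                          # detected bounding boxes
    if r.masks is not None:
        print(r.masks.data.shape)                # pixel-level instance masks

# Training a custom model on a labelled dataset described by a YOLO data YAML.
model = YOLO("yolov8n-seg.pt")
model.train(data="my_cells.yaml", epochs=50, imgsz=640)
```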