Cellpose

Background: Cellpose is a widely used, generalist deep-learning segmentation algorithm for microscopy images. It was designed to work “out of the box” across diverse imaging modalities and object types by training on a large, heterogeneous dataset of annotated cells and nuclei. The project has evolved into a family of releases, including Cellpose 1/2/3 and the newer Cellpose-SAM variant, that add capabilities such as human-in-the-loop model training, one-click image restoration, and improved generalization. The code and models are open source (BSD/OSI-approved license) and maintained on GitHub, with detailed documentation on readthedocs and example notebooks for reproducible workflows.

Core capabilities: At its core, Cellpose produces instance masks (per-object segmentation) and associated outputs compatible with common analysis tools. It handles single-channel and multi-channel images and supports multi-Z (3D) stacks by reusing 2D models for 3D segmentation. The toolkit provides pretrained generalist models that are downloaded automatically on first run, and recent model variants (Cellpose-SAM) emphasize robust generalization to images with shot noise, (an)isotropic blur, undersampling, contrast inversion, and unusual channel orders or object sizes. GPU acceleration (via PyTorch) speeds mask creation in 2D and 3D; Apple Silicon (MPS) is supported for many operations, although some newer mask-creation code may not yet be available on Mac.

Interfaces and deployment: Cellpose is accessible through multiple entry points to fit different user needs. A desktop GUI makes it easy to drag and drop typical image files (TIFF, PNG, JPG, GIF), calibrate object size, visualize flows and masks, and manually edit segmentations or create training labels. A command-line interface and Python API support scripted batch processing and pipeline integration.
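As a sketch of the Python API route, the snippet below segments a folder of images with a pretrained generalist model. It assumes cellpose is installed and follows the Cellpose 2/3-style API (`models.Cellpose` with a `channels` spec); the folder path, model choice, and channel values are illustrative, and newer releases such as Cellpose-SAM simplify channel handling.

```python
def make_channels(cyto_channel=0, nucleus_channel=0):
    """Build the [cytoplasm, nucleus] channel spec used by model.eval
    (0 = grayscale, 1 = red, 2 = green, 3 = blue)."""
    return [cyto_channel, nucleus_channel]


def segment_folder(image_dir, diameter=None, use_gpu=False):
    """Segment every image in image_dir; return {filename: label image}."""
    # Imported lazily so the helper above also works without cellpose.
    from cellpose import models, io

    model = models.Cellpose(gpu=use_gpu, model_type="cyto")
    results = {}
    for fname in io.get_image_files(image_dir, mask_filter="_masks"):
        img = io.imread(fname)
        # diameter=None asks Cellpose to estimate object size itself
        masks, flows, styles, diams = model.eval(
            img, diameter=diameter, channels=make_channels()
        )
        results[fname] = masks  # integer label image: 0 = background
    return results


# Usage (assuming an images/ folder of TIFFs exists):
#   all_masks = segment_folder("images/", use_gpu=True)
```

The same `eval` call accepts a `do_3D=True` argument for multi-Z stacks, which is how the 2D models are reused for 3D segmentation.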
The project provides Colab notebooks to run Cellpose (including training and 3D examples) without local installation, and a Hugging Face Space allows cloud batch processing for small jobs. Executable builds are available for Windows and macOS for users who prefer a bundled application. For very large datasets, there are community contributions and documentation for distributed Cellpose processing.

Training, fine-tuning and human-in-the-loop: While the generalist models work well on many image types, Cellpose also supports training custom models. Cellpose 2.0 introduced human-in-the-loop workflows that let users iteratively correct segmentations and refine a model with minimal labeled data. The GitHub repository and example notebooks include step-by-step tutorials for training and fine-tuning models on user-provided labels. Model weights are portable, and the project documents where to place downloaded models (~/.cellpose/models/) for offline use.

Outputs, formats and integrations: Segmentation outputs are saved in convenient formats for downstream analysis: mask arrays, _seg.npy files (also used in Cellpose 3 image-restoration workflows), and ROI exports compatible with ImageJ's ROI manager. The software also supports tiling and test-time augmentations to improve segmentation on large or heterogeneous images (note: the public web demo omits some of these to save compute). Common downstream workflows include export into CellProfiler, Fiji/ImageJ, or custom Python pipelines for object quantification, tracking, and spatial analyses.

Practical notes and licensing: Installation is flexible: Cellpose can be installed via pip (with the optional [gui] extra), via conda (recommended for dependency management), or from the GitHub repository in editable mode. GPU users must ensure correct CUDA drivers or follow the PyTorch install instructions; Colab is a convenient alternative for short 3D jobs or when local MKL/GPU compatibility is an issue.
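A minimal sketch of the pip/conda install and command-line routes, assuming the standard Cellpose CLI flags; the environment name and folder path are illustrative:

```shell
# Create a fresh conda environment and install with the GUI extra
conda create -n cellpose python=3.10
conda activate cellpose
python -m pip install "cellpose[gui]"

# Batch-segment a folder from the command line;
# --diameter 0 asks Cellpose to estimate object size automatically
python -m cellpose --dir images/ --pretrained_model cyto \
       --diameter 0 --save_png --use_gpu
```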
The Cellpose‑SAM training data and annotated dataset are distributed under CC‑BY‑NC terms and some downloadable datasets require acceptance of site terms. For support, users are encouraged to consult the project’s documentation, open issues on GitHub, or run the provided Colab and example notebooks.
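The _seg.npy outputs mentioned above can be consumed directly in a custom Python pipeline. The sketch below assumes the documented layout in which the file is a pickled dict whose "masks" key holds the labeled image, and uses a tiny synthetic mask in place of real output for the quantification step:

```python
import numpy as np


def load_masks(seg_path):
    """Load the labeled mask image from a Cellpose *_seg.npy file."""
    dat = np.load(seg_path, allow_pickle=True).item()
    return dat["masks"]


def object_sizes(masks):
    """Pixel count per object in an instance mask (0 = background, 1..N = objects)."""
    labels, counts = np.unique(masks[masks > 0], return_counts=True)
    return dict(zip(labels.tolist(), counts.tolist()))


# Tiny synthetic mask standing in for real Cellpose output:
demo = np.array([[0, 1, 1],
                 [2, 2, 0],
                 [2, 0, 0]])
print(object_sizes(demo))  # {1: 2, 2: 3}
```

From the same per-object labels, downstream tools such as CellProfiler or Fiji/ImageJ (via the ROI export) can compute richer morphology and intensity measurements.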