Documentation

From creating your first project to deploying trained models.

Getting started

Choose your starting point

New project

Start fresh with a new segmentation project

  1. Go to Projects
  2. Click New Project
  3. Define ROI classes
  4. Upload images

Import existing data

Bring in annotated masks created with other tools

  1. Create a project with matching classes
  2. Upload images
  3. Import masks (indexed or RGB)
  4. Validate the palette mapping

Run inference

Use a trained model on new images

  1. Select a project
  2. Go to the Inference tab
  3. Choose a model and dataset
  4. Review results

Complete workflow

From raw images to exported segmentation results

Step 1: Create Project

Start by creating a new project to organize your segmentation work.

  • Navigate to the Projects dashboard
  • Click the New Project button
  • Enter a descriptive name for your project
  • A default background class is created automatically

Tip: Use descriptive names like 'Kidney Glomeruli Study' for easy identification.
Step 2: Define Classes

Set up the ROI classes you want to segment in your images.

  • Access the Classes tab in your project
  • Add classes for each region type (e.g., Tumor, Stroma)
  • Assign distinct colors for visual clarity
  • Set label values (integers starting from 1)
  • Mark one class as background (typically label_value=0)

Tip: Plan your class structure before uploading images to avoid rework.
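The class setup above implies a few invariants: unique label values, distinct colors, and exactly one background class at label_value=0. A minimal sketch of that schema and its validation, using hypothetical field names rather than HistoScope's actual data model:

```python
# Sketch of a project class schema (field names are assumptions, mirroring
# the convention described above: 0 = background, 1..N = ROI classes).
roi_classes = [
    {"name": "Background", "label_value": 0, "color": "#000000", "is_background": True},
    {"name": "Tumor",      "label_value": 1, "color": "#e41a1c", "is_background": False},
    {"name": "Stroma",     "label_value": 2, "color": "#377eb8", "is_background": False},
]

def validate_classes(classes):
    """Check the invariants implied by the workflow: unique label values,
    distinct colors, and exactly one background class with label_value=0."""
    labels = [c["label_value"] for c in classes]
    colors = [c["color"] for c in classes]
    backgrounds = [c for c in classes if c["is_background"]]
    assert len(set(labels)) == len(labels), "label values must be unique"
    assert len(set(colors)) == len(colors), "colors must be distinct"
    assert len(backgrounds) == 1, "exactly one class must be background"
    assert backgrounds[0]["label_value"] == 0, "background uses label_value=0"
    return True

validate_classes(roi_classes)
```

Checking these invariants up front catches mapping problems before any masks are imported.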
Step 3: Upload Images & Masks

Add your histology images and optionally import pre-existing masks.

  • Create a dataset in the Datasets tab
  • Supported formats: PNG, JPEG, TIFF
  • Bulk upload multiple images at once
  • Optional: Import pre-annotated masks
  • Mask types: Indexed (preferred) or RGB with palette mapping

Tip: Validate imported masks to ensure correct class mapping before proceeding.
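A small pre-upload check for the supported formats listed above can look like the following; the exact extension set is an assumption, not HistoScope's documented behavior:

```python
import os

# Extensions assumed for the formats named above: PNG, JPEG, TIFF.
SUPPORTED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".tif", ".tiff"}

def is_supported_image(filename: str) -> bool:
    """Return True if the file extension matches a supported image format."""
    return os.path.splitext(filename.lower())[1] in SUPPORTED_EXTENSIONS

print(is_supported_image("slide_001.TIFF"))  # True
print(is_supported_image("slide_001.bmp"))   # False
```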
Step 4: Annotate Images

Use the annotation editor to label regions of interest.

  • Open any image in the annotate view
  • Select a class from the dropdown menu
  • Use the polygon tool: click to add points, close to complete
  • Edit polygons by dragging points
  • Delete by selecting and pressing the Delete key
  • Changes auto-save as you work

Tip: Zoom in for precise boundary tracing on complex structures.
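Polygon annotations are ordered vertex lists, so their pixel-space area follows from the shoelace formula. A sketch, not HistoScope's internal code:

```python
def polygon_area(points):
    """Shoelace formula: points is a list of (x, y) vertices in drawing
    order; returns the absolute enclosed area in square pixels."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(polygon_area(square))  # 100.0
```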
Step 5: Train Model

Train a deep learning model on your annotated data.

  • Go to the Training tab
  • Select training and validation datasets
  • Choose a preset: Baseline, Fast Dev, or High Quality
  • Or customize: epochs, batch size, learning rate, augmentation
  • Architecture: UNet with a configurable encoder
  • Monitor progress with real-time loss curves

Tip: Start with the Baseline preset, then adjust based on validation metrics.
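The presets can be written out as a configuration sketch. The epoch counts come from the Training Options reference (Baseline 50, Fast Dev 10, High Quality 100) and the default learning rate of 0.001 is documented; the batch sizes and augmentation pairings here are illustrative assumptions:

```python
# Preset table: epochs and learning rate follow the documented values;
# batch_size and augmentation choices are assumptions for illustration.
PRESETS = {
    "baseline":     {"epochs": 50,  "batch_size": 8, "learning_rate": 1e-3, "augmentation": "medium"},
    "fast_dev":     {"epochs": 10,  "batch_size": 8, "learning_rate": 1e-3, "augmentation": "light"},
    "high_quality": {"epochs": 100, "batch_size": 8, "learning_rate": 1e-3, "augmentation": "heavy"},
}

def build_config(preset="baseline", **overrides):
    """Start from a preset, then apply per-run overrides, mirroring the
    'choose a preset or customize' flow described above."""
    config = dict(PRESETS[preset])
    config.update(overrides)
    return config

print(build_config("baseline", batch_size=4))
```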
Step 6: Run Inference

Apply your trained model to segment new images.

  • Navigate to the Inference tab
  • Select a trained model from your project
  • Choose the target dataset to process
  • Start the inference job
  • Progress is shown in real-time

Tip: Use models trained on similar tissue types for best results.
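Since inference jobs run through the REST API, a submission might be assembled as below. The field names and endpoint shape are hypothetical, not the documented HistoScope API:

```python
import json

def build_inference_request(project_id, model_id, dataset_id):
    """Assemble the JSON body for a hypothetical POST /inference/jobs call
    (illustrative field names, not HistoScope's published schema)."""
    payload = {
        "project_id": project_id,
        "model_id": model_id,
        "dataset_id": dataset_id,
    }
    return json.dumps(payload)

body = build_inference_request(project_id=1, model_id=42, dataset_id=7)
print(body)
```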
Step 7: Review & Export

Review predictions, make corrections, and export results.

  • View predicted masks overlaid on original images
  • The ROI table shows measurements: area, perimeter, bounding box
  • Click any ROI to highlight it on the image
  • Edit predictions if needed
  • Export masks, ROI data (CSV), and metrics

Tip: Review edge cases where the model may be uncertain.
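The ROI table's area and bounding-box measurements can be sketched directly from an indexed mask. A minimal illustration with nested lists standing in for image arrays, not the production implementation:

```python
def roi_measurements(mask, label):
    """mask: 2-D list of class IDs. Returns pixel area and bounding box
    (min_row, min_col, max_row, max_col) for the given label."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v == label]
    if not coords:
        return {"area": 0, "bbox": None}
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return {"area": len(coords),
            "bbox": (min(rows), min(cols), max(rows), max(cols))}

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(roi_measurements(mask, 1))  # {'area': 4, 'bbox': (1, 1, 2, 2)}
```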

Features reference

Detailed documentation of key features

Annotation Tools

Polygon Tool: Click to place vertices; close the shape by clicking near the start point.
Class Selection: Choose the active class before drawing. Each class has a distinct color.
Editing: Select a polygon to show handles. Drag vertices to adjust boundaries.
Zoom & Pan: Use the mouse wheel to zoom; drag to pan when zoomed in.
Auto-save: Annotations are saved automatically as you work.

Training Options

Presets: Baseline (50 epochs), Fast Dev (10 epochs), High Quality (100 epochs).
Architecture: UNet with ResNet34 encoder (pretrained on ImageNet).
Augmentation: Light, Medium, or Heavy. Includes rotations, flips, and color jitter.
Batch Size: Higher values use more GPU memory but train faster.
Learning Rate: The default of 0.001 works well for most cases.

Supported Formats

Images: PNG, JPEG, TIFF (8-bit and 16-bit supported).
Masks (Indexed): Single-channel PNG where pixel values are class IDs (0=background, 1..N=classes).
Masks (RGB): Color masks converted via palette mapping. Each unique color maps to a class.
Export: Masks as indexed PNG, ROI data as CSV, metrics as JSON.
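The RGB-to-indexed conversion described above can be sketched as follows, with nested lists standing in for image arrays. This is a simplified illustration of palette mapping, not HistoScope's importer:

```python
def rgb_to_indexed(rgb_mask, palette):
    """rgb_mask: 2-D list of (r, g, b) tuples; palette: {color: class_id}.
    Raises on colors with no mapping, mirroring the validation step."""
    indexed = []
    for row in rgb_mask:
        out_row = []
        for color in row:
            if color not in palette:
                raise ValueError(f"unmapped color {color}")
            out_row.append(palette[color])
        indexed.append(out_row)
    return indexed

palette = {(0, 0, 0): 0, (255, 0, 0): 1}  # black=background, red=class 1
rgb = [[(0, 0, 0), (255, 0, 0)],
       [(255, 0, 0), (0, 0, 0)]]
print(rgb_to_indexed(rgb, palette))  # [[0, 1], [1, 0]]
```

Failing loudly on unmapped colors is what lets the importer ask you to complete the palette mapping instead of silently mislabeling pixels.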

Data Management

Datasets: Organize images into datasets (e.g., Train, Validation, Test).
Versioning: Each training run captures a snapshot of the configuration and dataset state.
Provenance: Results include metadata about which model, dataset, and parameters were used.
Storage: All data is stored in S3-compatible storage (MinIO).

Technical architecture

How HistoScope is built and deployed

HistoScope is a full-stack platform consisting of a web application for project management, annotation, and result review, backed by a REST API and asynchronous task processing for compute-intensive operations like model training and inference. All data is stored in S3-compatible object storage, with metadata tracked in PostgreSQL.

Frontend: Next.js 14, TypeScript, Tailwind CSS, Canvas, Recharts
Backend: FastAPI, SQLAlchemy, Pydantic, PostgreSQL
Storage: MinIO (S3-compatible)
ML Pipeline: Celery, Redis, PyTorch, segmentation-models, Albumentations
Deployment: Docker Compose, Nginx, SSL

Data flow

Upload: Images go to MinIO, metadata to PostgreSQL
Training: A worker reads from MinIO and saves checkpoints
Inference: The model is loaded and results are written to storage
Export: The bundle is assembled from storage

Mask format

Canonical format: Indexed (single-channel) PNG
Values: 0=background, 1..N=class IDs
RGB import: Converted via palette detection
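A quick validity check for the canonical format might look like this (a sketch; real masks would first be decoded from single-channel PNGs):

```python
def validate_indexed_mask(mask, num_classes):
    """mask: 2-D list of class IDs; num_classes: N, so valid IDs are 0..N
    (0 = background, 1..N = ROI classes, as described above)."""
    valid = set(range(num_classes + 1))
    bad = {v for row in mask for v in row if v not in valid}
    if bad:
        raise ValueError(f"invalid class IDs in mask: {sorted(bad)}")
    return True

mask = [[0, 1], [2, 0]]
print(validate_indexed_mask(mask, num_classes=2))  # True
```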

FAQ

Quick answers to common questions

What image formats are supported?

PNG, JPEG, and TIFF images. Both 8-bit and 16-bit are accepted. For best results, use PNG, which is lossless and avoids compression artifacts.

Can I import masks from other tools?

Yes, in two formats: (1) indexed masks, where pixel values directly represent class IDs (preferred), or (2) RGB color masks, where each class has a unique color. For RGB masks, HistoScope detects the palette and asks you to map colors to your project classes.

What is the background class for?

The background class (label_value=0) represents areas that are not part of any ROI. During training, the model learns to distinguish background from regions of interest. Make sure one class is marked as background in your project settings.

Why did my training job fail?

Common causes: no annotated images in the training dataset, a mismatch between mask values and project classes, running out of memory (try reducing the batch size), or invalid image files. Check the Training tab for detailed error messages.

How do I export results?

After inference, go to the Inference tab and view the results. You can export predicted masks as indexed PNG files, ROI measurements as CSV, and run metadata as JSON. Each export includes provenance information.

Can I correct model predictions?

Yes. After inference you can review each prediction and make corrections using the annotation tools. Edited results are saved separately and marked as edited to maintain provenance tracking.

How do I run HistoScope locally?

Clone the repository, copy .env.example to .env, then run docker compose up -d. The web app is served at localhost:3000 and the API at localhost:8000.

Do I need a GPU?

A GPU with at least 4GB of VRAM is recommended. Without a GPU, training runs on the CPU but is much slower. Inference can run on the CPU with reasonable performance for small batches.
HistoScope — Built with Next.js, FastAPI, and PyTorch