neuroGlia5D: GPU-based 5D segmentation, rendering and topological analysis for live biological cells

A GPU-native system designed to let you inhabit and explore dense, labeled 4D biological systems under real hardware constraints, without motion correction, preprocessing pipelines, or HPC clusters.

We invoice universities (NET 30) via purchase orders.

5D Segmentation for Any Cell Type

Advanced 4D and 5D segmentation for microglia, neurons, macrophages, immune cells, and any motile cells. Multi-channel (1-2 color) 3D + time-lapse analysis with automated cell detection and morphology tracking, even with rapid shape changes.

4D/5D Rendering

Real-time GPU-accelerated 5D rendering and interactive visualization. Volume ray-casting for two-photon imaging, confocal microscopy, and multiphoton datasets. Resolves sub-cellular features at ~0.5 µm lateral resolution and 0.1 Hz volume rate.

Specialized Cell Analysis

Quantify cell morphology, branch dynamics, and motility in live imaging. Track neuron-glia interactions, microglial activation, macrophage migration, and immune cell responses across 3D volumes over time.

If this sounds familiar…

  • 5D live microscopy datasets from two-photon imaging piling up because 5D segmentation is difficult.
  • Cells (microglia, macrophages, immune cells, neurons) that move or drift across time-lapse recordings.
  • Rapid morphology changes in motile cells that break naive 4D segmentation.
  • Need to track cell-cell interactions (neuron-glia, immune-neuron) in live imaging.
  • No professional 5D rendering software available for multiphoton microscopy data.
  • Need to resolve sub-cellular features at moderate resolution (~0.5 µm) and fast volume rates (0.1 Hz).
  • Working with 1-color or 2-color channel datasets that need synchronized analysis.

Meet neuroGlia5D — GPU-based 5D Analysis for Live Biological Cells

Cell5D proudly presents neuroGlia5D, a pioneering product unlike any other commercial or open-source software: it eliminates the need for both motion correction and High Performance Computing (HPC) clusters. All you need is a decent gaming laptop, and you can keep using it casually while neuroGlia5D runs at full throttle on your GPU.

We made this possible by designing every algorithm in neuroGlia5D, down to something as simple as reading your data, from the ground up to exploit the massive parallelism of GPUs, whose performance has far outpaced CPUs thanks to demand from the gaming market. neuroGlia5D works with a wide range of 3D/4D time-lapse data. It will even work without a time-lapse; in that case you will see 3D/4D renderings of segmented cells, but not the time-based statistics that track the topological evolution of each cell, including cells engulfing, or being engulfed by, other cells while moving in three dimensions.

neuroGlia5D tracks fluorescence variation (ΔF/F) over time for each cell, making a separate calcium imaging analysis pipeline unnecessary. Thanks to neuroGlia5D's background removal technique, false-positive firing events are particularly low compared to gold-standard pipelines. neuroGlia5D was originally designed to analyze microglia-neuron interactions in zebrafish whole-brain two-photon imaging data, after we exhausted the available software solutions: they required spatio-temporal resolution that could only be achieved by zooming into a small region, which does not tell the whole story. Although designed to investigate glia-neuron interactions, neuroGlia5D works with any cell type. We recommend choosing sparsely expressed or highly dynamic, macrophage-like cells as the primary channel and less dynamic cells as the secondary channel; if you have more than two cell types, simply repeat the process. neuroGlia5D resolved sub-cellular features of neurons and microglia from a live whole-brain dataset captured at 0.465 µm lateral resolution, 2.0 µm Z-step size, and 0.1 Hz volumetric rate. The attainable resolution of sub-cellular features can vary with the microscope and imaging conditions, even under identical acquisition parameters.

Lastly, neuroGlia5D offers an optional interactive GUI that lets you classify the currently displayed 4D/5D rendered object into morphology types and temporal-dynamics categories with a single click. Each rendering is saved as a high-quality MP4 video in a publication-ready format.

For the very first time, neuroGlia5D makes it possible to use affordable gaming hardware to analyze some of the world's most complex computer-vision data and produce scientifically accurate, publication-ready figures and statistics. We will soon post tutorials and examples on our YouTube channel. neuroGlia5D is still a few weeks from launch; you can show your support by signing up for the trial version and becoming one of our early beta testers.

Why neuroGlia5D Exists

Live microscopy is constrained by the tradeoff between spatial and temporal resolution, forcing existing workflows to zoom into a small field of view to preserve dynamic sub-cellular features. Capturing large-scale biological context requires zooming out and accepting lower resolution, slower acquisition, or heavy averaging. As a result, many analyses are confined to small regions containing only a few cells, even when the biology of interest is distributed across large volumes. This tradeoff is especially damaging for dynamic cell types such as microglia, macrophages, immune cells, and neurons: their behavior depends on spatial context, long-range interactions, and rare events that cannot be captured by imaging a small volume. Imaging small volumes often means missing the very phenomena under study, or misinterpreting a local statistical outlier as evidence for a robust scientific hypothesis. This is especially crucial in pharmacogenomic studies, where the transcriptomic profile across the brain governs the observations.

neuroGlia5D alleviates this bottleneck using state-of-the-art classical and machine-learning algorithms. Instead of treating resolution and field of view as mutually exclusive, it treats large-scale imaging as a navigation problem rather than a static rendering of motion-corrected timeframes. Even in noisy environments where fine cellular processes occasionally smear into the background, neuroGlia5D can reconstruct them from adjacent timepoints. No matter how large the dataset (even several times larger than available RAM), neuroGlia5D detects every object across all timepoints and provides detailed statistics, along with a 5D rendering that lets you walk through the forest of segmented cells spanning your entire dataset. The system streams and renders only what is relevant to the current view, while preserving object identity across space and time. This lets users move fluidly from whole-volume context down to individual sub-cellular processes without losing track of object identities or breaking temporal continuity.

This design also transforms functional analysis. neuroGlia5D integrates per-cell fluorescence variation (ΔF/F) directly into the same object-centric framework used for segmentation and tracking, making separate calcium imaging analysis pipelines unnecessary. Because background estimation and removal are performed at the level of individual objects rather than entire frames, false positive firing events are reduced compared to common frame-based workflows.
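As a rough illustration of per-object baseline estimation, here is a minimal NumPy sketch (an assumption for illustration only; neuroGlia5D's actual algorithm is GPU-native and not shown here) that computes ΔF/F from a single object's own fluorescence trace, using a rolling low-percentile baseline instead of a frame-wide background:

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20, window=50):
    """Per-object ΔF/F sketch (hypothetical parameters): the baseline F0
    is a rolling low percentile of the object's own mean fluorescence,
    not a whole-frame background estimate."""
    trace = np.asarray(trace, dtype=float)
    n = len(trace)
    f0 = np.empty(n)
    for t in range(n):
        lo = max(0, t - window)
        # Low percentile over the trailing window approximates the
        # resting fluorescence of this object alone.
        f0[t] = np.percentile(trace[lo:t + 1], baseline_percentile)
    return (trace - f0) / f0
```

Because the baseline is derived per object, a neighboring cell's activity or a frame-wide intensity drift does not inflate this object's apparent firing rate, which is the intuition behind the reduced false positives described above.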

The result is that high-resolution functional dynamics and large-scale structural context are no longer in conflict. Data can be acquired at the resolution required to resolve sub-cellular processes, then explored across whole-brain or whole-volume datasets after acquisition, moving from global context to individual cellular processes without redefining regions of interest or reprocessing data.

neuroGlia5D exists because the biology demands resolution and context simultaneously, and because existing software architectures force irreversible acquisition and analysis decisions before the relevant biological events can be observed.

Engine-Level Architecture

neuroGlia5D is built around GPU-native algorithms and does not rely on generic image-processing frameworks or repurposed CPU algorithms. Core operations, including data access, filtering, resampling, segmentation, and topology tracking, are designed with GPU memory behavior, execution models, and large-scale parallelism as first-class constraints.

Rather than treating labels as visual overlays, neuroGlia5D models each segmented cell as a persistent 4D object. Objects retain identity across space and time, enabling direct navigation through dense forests of thousands of cells. As users move and zoom, the engine dynamically loads, unloads, and resamples data to remain responsive within GPU memory limits.
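One common way to realize such a budgeted streaming scheme is an LRU brick cache that evicts the least-recently-viewed data when a memory budget is exceeded. The sketch below is a minimal CPU-side illustration of that idea (an assumed design, not neuroGlia5D's actual implementation):

```python
from collections import OrderedDict

class BrickCache:
    """Illustrative LRU cache of volume bricks bounded by a memory
    budget, sketching the load/unload behavior described above."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.bricks = OrderedDict()  # brick_id -> (size, data)

    def request(self, brick_id, load_fn, size):
        if brick_id in self.bricks:
            self.bricks.move_to_end(brick_id)  # mark most-recently-used
            return self.bricks[brick_id][1]
        # Evict least-recently-used bricks until the new one fits.
        while self.used + size > self.budget and self.bricks:
            _, (old_size, _) = self.bricks.popitem(last=False)
            self.used -= old_size
        data = load_fn(brick_id)
        self.bricks[brick_id] = (size, data)
        self.used += size
        return data
```

In a real GPU engine the "bricks" would be texture tiles at view-dependent resolution levels, but the budget-and-evict loop is the same: the working set tracks the camera, never the full dataset.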

neuroGlia5D includes a native 5D neural analysis engine that operates directly on persistent objects rather than independent frames. This enables analysis of cell states, morphology, temporal behavior, and cell–cell interactions such as contact, approach, engulfment, and separation without breaking temporal continuity or exporting data to external pipelines.
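To make the interaction taxonomy concrete, a toy classifier over two tracked objects might look like the following (hypothetical thresholds and rules, for illustration only; neuroGlia5D's actual criteria are not shown here), taking per-timepoint centroid distances and mask-overlap fractions as input:

```python
def classify_interaction(dist, overlap, contact_thresh=1.0):
    """Toy per-timepoint event labels for a pair of tracked objects.

    dist[t]    -- centroid distance at timepoint t (illustrative units)
    overlap[t] -- fraction of the smaller object's mask overlapping
                  the other object at timepoint t
    """
    events = []
    for t in range(1, len(dist)):
        if overlap[t] > 0.5:          # majority overlap -> engulfment
            events.append((t, "engulfment"))
        elif dist[t] <= contact_thresh:
            events.append((t, "contact"))
        elif dist[t] < dist[t - 1]:   # closing distance -> approach
            events.append((t, "approach"))
        else:
            events.append((t, "separation"))
    return events
```

The point of the sketch is that, once objects persist across time, interaction events reduce to simple per-timepoint predicates over paired trajectories, with no per-frame re-matching required.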

When users approach an object closely enough, its identity is revealed directly in the visualization. Time is treated as a first-class dimension throughout the system.

How neuroGlia5D Compares

The following comparisons describe differences in architectural focus and intended use, not performance rankings.

Cell5D is not affiliated with, endorsed by, or sponsored by any of the third-party products referenced below.

Several commercial and academic tools address parts of this problem space, but approach it using different abstractions.

Arivis Vision4D supports large volumetric microscopy datasets using streaming, multi-resolution pyramids, and GPU rendering. Labels primarily function as visual entities rather than as persistent drivers of navigation and analysis.

Imaris provides polished visualization and tracking of neurons and microglia using multi-resolution bricks and GPU streaming. Its core abstractions are tracks and surfaces rather than continuous 4D label volumes, and navigation is typically manual or scripted.

Amira / Avizo and VGStudio Max are general-purpose volume visualization systems with strong memory management. Labels are treated as annotations rather than semantic objects that guide navigation.

From adjacent fields, Unreal Engine (Nanite and virtual texturing) shares architectural concepts such as view-dependent streaming and resource budgets, but does not include biological or temporal semantics. NVIDIA IndeX offers out-of-core GPU volume rendering infrastructure without domain-specific modeling for live-cell biology.

Among open tools, Neuroglancer supports multiscale visualization with labels, but is browser-based and not designed for interactive GPU navigation through dense, time-resolved object forests.

ilastik is an interactive machine-learning annotation tool rather than a streaming visualization engine. It focuses on supervised labeling workflows and does not enforce GPU memory budgets or object-centric temporal navigation.

Imaris excels at viewing and track/surface workflows, and ilastik excels at interactive labeling. neuroGlia5D unifies aspects of both approaches and extends them with object-centric algorithms and GPU-native visualization techniques inspired by real-time rendering systems.

Questions?

Email cell5d@outlook.com. We're happy to help with datasets and performance tuning.

Watch tutorials and see Cell5D in action: YouTube @Cell5D