Getting Started¶
zea provides a framework for cognitive ultrasound imaging. At the heart of zea are the Data (zea.data), Pipeline (zea.Pipeline), and Models (zea.models) modules. These modules provide the necessary tools to load, process, and analyze ultrasound data.
Tip
A more complete set of examples can be found on the Examples page.
Let’s take a quick look at how to use zea to load and process ultrasound data.
import zea

# setting up cpu / gpu usage
zea.init_device()

# loading a config file from Hugging Face, but can also load a local config file
config = zea.Config.from_hf(
    "zeahub/configs", "config_picmus_rf.yaml", repo_type="dataset",
)

path = config.data.dataset_folder + "/" + config.data.file_path
with zea.File(path) as file:
    data = file.load_data("raw_data", indices=0)
    probe = file.probe()
    scan = file.scan(**config.scan)

# using the pipeline as specified in the config file
pipeline = zea.Pipeline.from_config(
    config.pipeline,
    with_batch_dim=False,
)

# preparing the parameters (converting to tensors)
parameters = pipeline.prepare_parameters(probe, scan)

# running the pipeline!
image = pipeline(data=data, **parameters)["data"]
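The resulting image is whatever the pipeline stores under its "data" key. As a quick sanity check you can display it, for example with matplotlib (a minimal sketch; converting to a NumPy array and the grayscale colormap are illustrative choices, not part of the zea API):

import numpy as np
import matplotlib.pyplot as plt

# convert the (possibly backend-specific) tensor to a NumPy array for plotting
frame = np.asarray(image)
plt.imshow(frame, cmap="gray")
plt.axis("off")
plt.show()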
Similarly, we can easily load one of the pretrained models from the zea.models module and use it for inference.
import zea
from zea.models.echonet import EchoNetDynamic

zea.init_device()

# presets can also be paths to local checkpoints of the model
model = EchoNetDynamic.from_preset("echonet-dynamic")

# we'll load a single file from the dataset
with zea.Dataset("hf://zeahub/camus-sample/", "image_sc") as dataset:
    file = dataset[0]
    image = file.load_data("image_sc", indices=0)

# map the image from its dynamic range (here assumed to be (-60, 0) dB)
# to the (-1, 1) range expected by the model
dynamic_range = (-60, 0)
image = zea.utils.translate(image, dynamic_range, (-1, 1))
masks = model(image[None, ..., None])
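EchoNetDynamic is a segmentation model for echocardiography, so masks holds the network output for the single frame we passed in. A minimal sketch for turning it into a binary mask, assuming the output is a per-pixel score map (the 0.5 threshold is an illustrative assumption):

import numpy as np

# take the first (and only) item in the batch and threshold the score map
mask = np.asarray(masks)[0]
binary_mask = (mask > 0.5).squeeze()
print(binary_mask.shape)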
zea also provides a simple command line interface (CLI) to quickly visualize a zea data file.
zea --config configs/config_picmus_rf.yaml
Installation¶
A simple pip command will install the latest version of zea from PyPI. For more installation instructions, please refer to the Installation page.
pip install zea
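zea needs a Keras backend at runtime (see Backend below), so you may want to install one in the same step. For example, with JAX (swap in torch or tensorflow if you prefer):

pip install zea jax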
Backend¶
zea is written in Python on top of Keras 3, which means that under the hood we use the Keras framework to implement the pipeline and models. Keras allows you to set a backend, so you can use zea alongside a project that uses your preferred machine learning framework.
To use zea, you need to install one of the supported machine learning backends: JAX, PyTorch, or TensorFlow. zea will not run without a backend installed.
If you are unsure which backend to use, we recommend JAX as it is currently the fastest backend.
After installing a backend, set the KERAS_BACKEND environment variable to one of the following:
For JAX:

# at the top of your script before other imports
import os
os.environ["KERAS_BACKEND"] = "jax"
import zea

# or set it once in your shell / conda environment
conda env config vars set KERAS_BACKEND=jax
export KERAS_BACKEND=jax

For PyTorch:

# at the top of your script before other imports
import os
os.environ["KERAS_BACKEND"] = "torch"
import zea

# or set it once in your shell / conda environment
conda env config vars set KERAS_BACKEND=torch
export KERAS_BACKEND=torch

For TensorFlow:

# at the top of your script before other imports
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import zea

# or set it once in your shell / conda environment
conda env config vars set KERAS_BACKEND=tensorflow
export KERAS_BACKEND=tensorflow

For NumPy (note: the NumPy backend has limited functionality):

# at the top of your script before other imports
import os
os.environ["KERAS_BACKEND"] = "numpy"
import zea

# or set it once in your shell / conda environment
conda env config vars set KERAS_BACKEND=numpy
export KERAS_BACKEND=numpy
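To double-check which backend is active, you can query Keras directly:

import keras

# prints "jax", "torch", "tensorflow", or "numpy"
print(keras.backend.backend())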
Citation¶
If you use zea in your research, please cite it using [B-1] and [B-2]. Also, don’t forget to give proper attribution to the authors of any specific models and datasets supported by zea that you use.
[B-1] Tristan S.W. Stevens, Wessel L. van Nierop, Ben Luijten, Vincent van de Schaft, Oisín I. Nolan, Beatrice Federici, Simon W. Penninga, Noortje I.P. Schueler, and Ruud J.G. van Sloun. zea: a toolbox for cognitive ultrasound imaging. 2025. URL: https://github.com/tue-bmd/zea.
[B-2] Ruud J.G. van Sloun. Active inference and deep generative modeling for cognitive ultrasound. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 2024.
Or you can use the following BibTeX entry:
@misc{stevens2025zea,
  author = {Stevens, Tristan S.W. and van Nierop, Wessel L. and Luijten, Ben and van de Schaft, Vincent and Nolan, Oisín I. and Federici, Beatrice and Penninga, Simon W. and Schueler, Noortje I.P. and van Sloun, Ruud J.G.},
  title = {zea: A Toolbox for Cognitive Ultrasound Imaging},
  url = {https://github.com/tue-bmd/zea},
  year = {2025},
}