Models¶
Collection of (generative) models for ultrasound imaging.
zea contains a collection of models for various tasks, all located in the zea.models package.
Currently, the following models are available (all inheriting from zea.models.BaseModel):

- zea.models.echonet.EchoNetDynamic: a model for echocardiography segmentation.
- zea.models.carotid_segmenter.CarotidSegmenter: a model for carotid artery segmentation.
- zea.models.unet.UNet: a simple U-Net implementation.
- zea.models.lpips.LPIPS: a model implementing the LPIPS perceptual similarity metric.
- zea.models.taesd.TinyAutoencoder: a tiny autoencoder model for image compression.
Presets for these models can be found in zea.models.presets.
To use these models, you can import them directly from the zea.models module and load the pretrained weights using the from_preset() method. For example:
from zea.models import UNet
model = UNet.from_preset("unet-echonet-inpainter")
You can list all available presets using the presets attribute:
presets = list(UNet.presets.keys())
print(f"Available built-in zea presets for UNet: {presets}")
zea generative models¶
In addition to these task-specific models, zea provides both classical and deep generative models for tasks such as image generation, inpainting, and denoising. These models inherit from zea.models.generative.GenerativeModel or zea.models.deepgenerative.DeepGenerativeModel.
Typically, these models have some additional methods, such as:

- fit() for training the model on data
- sample() for generating new samples from the learned distribution
- posterior_sample() for drawing samples from the posterior given measurements
- log_density() for computing the log-probability of data under the model
The following generative models are currently available:

- zea.models.diffusion.DiffusionModel: a deep generative diffusion model for ultrasound image generation.
- zea.models.gmm.GaussianMixtureModel: a Gaussian Mixture Model.
An example of how to use the zea.models.diffusion.DiffusionModel is shown below:
from zea.models import DiffusionModel
model = DiffusionModel.from_preset("diffusion-echonet-dynamic")
samples = model.sample(n_samples=4)
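The zea.models.gmm.GaussianMixtureModel exposes the same generic interface. The sketch below is illustrative only: the import path is assumed to mirror the examples above, and the constructor argument name is a placeholder rather than the actual zea signature.

import numpy as np

from zea.models import GaussianMixtureModel  # import path assumed to mirror the examples above

data = np.random.randn(1000, 2).astype("float32")  # toy 2-D data
gmm = GaussianMixtureModel(n_components=4)          # n_components is an illustrative argument name

gmm.fit(data)                         # train on the data
samples = gmm.sample(n_samples=16)    # draw new samples from the learned distribution
log_probs = gmm.log_density(data)     # log-probability of the data under the model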
Contributing and adding new models¶
Please follow the guidelines on the Contributing page if you would like to contribute a new model to zea.
The following steps are recommended when adding a new model:
1. Create a new module in the zea.models package for your model: zea.models.mymodel.
2. Add a model class that inherits from zea.models.base.Model. For generative models, inherit from zea.models.generative.GenerativeModel or zea.models.deepgenerative.DeepGenerativeModel as appropriate. Make sure you implement the call() method (a hypothetical module skeleton is sketched after these steps).
3. Upload the pretrained model weights to our Hugging Face. This should be a config.json and a model.weights.h5 file; see the Keras documentation for how these can be saved from your model (a minimal saving sketch also follows these steps). Simply drag and drop the files onto the Hugging Face website to upload them.

Tip

It is recommended to use the saving procedure mentioned above. However, alternative saving methods are also possible; see the zea.models.echonet.EchoNet module for an example. In that case, you do have to implement a custom_load_weights() method in your model class.

4. Add a preset for the model in zea.models.presets. This allows you to have multiple weight presets for a given model architecture.
5. Make sure to register the presets in your model module by importing the presets module and calling register_presets with the model class as an argument.
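As a rough orientation, a hypothetical skeleton covering steps 1, 2 and 5 could look as follows. The layer, the preset name, and the exact register_presets call are assumptions; mirror an existing module such as zea.models.unet for the real pattern.

# zea/models/mymodel.py -- hypothetical skeleton; all names below are illustrative
import keras

from zea.models import presets        # presets module (step 5); assumed import path
from zea.models.base import Model     # base class from step 2


class MyModel(Model):
    """Toy model showing the required structure."""

    def __init__(self, filters=16, **kwargs):
        super().__init__(**kwargs)
        self.conv = keras.layers.Conv2D(filters, 3, padding="same")

    def call(self, inputs):
        # call() is required for every zea model (step 2)
        return self.conv(inputs)


# Step 5: register the presets with the model class as an argument. The exact
# register_presets signature is an assumption -- check an existing zea model
# module for the real call.
presets.register_presets(presets.my_model_presets, MyModel)  # my_model_presets is hypothetical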
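For step 3, assuming your model class is a standard Keras model, the two files can be produced roughly as in the sketch below; the exact config.json contents zea expects may differ, so compare with an existing preset on Hugging Face.

import json

# `model` is a trained instance of your new model class (a keras.Model subclass)
model.save_weights("model.weights.h5")   # Keras 3 expects the .weights.h5 suffix

with open("config.json", "w") as f:
    json.dump(model.get_config(), f)     # one possible way to produce config.json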