bioimageio.core


Python specific core utilities for bioimage.io resources (in particular DL models).

Get started

To get started, we recommend installing bioimageio.core with conda together with a deep learning framework, e.g. pytorch, and running a few bioimageio commands to see what bioimageio.core has to offer:

  1. install with conda (for more details on conda environments, check out the conda docs)

    conda install -c conda-forge bioimageio.core pytorch
    
  2. test a model

    $ bioimageio test powerful-chipmunk
    ...
    

    (Click to expand output)

      ✔️                 bioimageio validation passed
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
      source            https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/powerful-chipmunk/1/files/rdf.yaml
      format version    model 0.4.10
      bioimageio.spec   0.5.3post4
      bioimageio.core   0.6.8
    
    
    
      ❓   location                                     detail
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
      ✔️                                                 initialized ModelDescr to describe model 0.4.10
    
      ✔️                                                 bioimageio.spec format validation model 0.4.10
      🔍   context.perform_io_checks                    True
      🔍   context.root                                 https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/powerful-chipmunk/1/files
      🔍   context.known_files.weights.pt               3bd9c518c8473f1e35abb7624f82f3aa92f1015e66fb1f6a9d08444e1f2f5698
      🔍   context.known_files.weights-torchscript.pt   4e568fd81c0ffa06ce13061327c3f673e1bac808891135badd3b0fcdacee086b
      🔍   context.warning_level                        error
    
      ✔️                                                 Reproduce test outputs from test inputs
    
      ✔️                                                 Reproduce test outputs from test inputs
    

    or

    $ bioimageio test impartial-shrimp
    ...
    

    (Click to expand output)

      ✔️                 bioimageio validation passed
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
      source            https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/impartial-shrimp/1.1/files/rdf.yaml
      format version    model 0.5.3
      bioimageio.spec   0.5.3.2
      bioimageio.core   0.6.9
    
    
      ❓   location                    detail
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
      ✔️                                initialized ModelDescr to describe model 0.5.3
    
    
      ✔️                                bioimageio.spec format validation model 0.5.3
    
      🔍   context.perform_io_checks   False
      🔍   context.warning_level       error
    
      ✔️                                Reproduce test outputs from test inputs (pytorch_state_dict)
    
    
      ✔️                                Run pytorch_state_dict inference for inputs with batch_size: 1 and size parameter n: 0

      ✔️                                Run pytorch_state_dict inference for inputs with batch_size: 2 and size parameter n: 0

      ✔️                                Run pytorch_state_dict inference for inputs with batch_size: 1 and size parameter n: 1

      ✔️                                Run pytorch_state_dict inference for inputs with batch_size: 2 and size parameter n: 1

      ✔️                                Run pytorch_state_dict inference for inputs with batch_size: 1 and size parameter n: 2

      ✔️                                Run pytorch_state_dict inference for inputs with batch_size: 2 and size parameter n: 2
    
      ✔️                                Reproduce test outputs from test inputs (torchscript)
    
    
      ✔️                                Run torchscript inference for inputs with batch_size: 1 and size parameter n: 0
    
    
      ✔️                                Run torchscript inference for inputs with batch_size: 2 and size parameter n: 0
    
    
      ✔️                                Run torchscript inference for inputs with batch_size: 1 and size parameter n: 1
    
    
      ✔️                                Run torchscript inference for inputs with batch_size: 2 and size parameter n: 1
    
    
      ✔️                                Run torchscript inference for inputs with batch_size: 1 and size parameter n: 2
    
    
      ✔️                                Run torchscript inference for inputs with batch_size: 2 and size parameter n: 2
    

  3. run prediction on your data
  • display the bioimageio-predict command help to get an overview:

    $ bioimageio predict --help
    ...
    

    (Click to expand output)

    usage: bioimageio predict [-h] [--inputs Sequence[Union[str,Annotated[Tuple[str,...],MinLen(min_length=1)]]]]
                              [--outputs {str,Tuple[str,...]}] [--overwrite bool] [--blockwise bool] [--stats Path]
                              [--preview bool]
                              [--weight_format {typing.Literal['keras_hdf5','onnx','pytorch_state_dict','tensorflow_js','tensorflow_saved_model_bundle','torchscript'],any}]
                              [--example bool]
                              SOURCE
    
    bioimageio-predict - Run inference on your data with a bioimage.io model.
    
    positional arguments:
      SOURCE                Url/path to a bioimageio.yaml/rdf.yaml file
                            or a bioimage.io resource identifier, e.g. 'affable-shark'
    
    optional arguments:
      -h, --help            show this help message and exit
      --inputs Sequence[Union[str,Annotated[Tuple[str,...],MinLen(min_length=1)]]]
                            Model input sample paths (for each input tensor)
    
                            The input paths are expected to have shape...
                            - (n_samples,) or (n_samples,1) for models expecting a single input tensor
                            - (n_samples,) containing the substring '{input_id}', or
                            - (n_samples, n_model_inputs) to provide each input tensor path explicitly.
    
                            All substrings that are replaced by metadata from the model description:
                            - '{model_id}'
                            - '{input_id}'
    
                            Example inputs to process sample 'a' and 'b'
                            for a model expecting a 'raw' and a 'mask' input tensor:
                            --inputs="[["a_raw.tif","a_mask.tif"],["b_raw.tif","b_mask.tif"]]"
                            (Note that JSON double quotes need to be escaped.)
    
                            Alternatively a `bioimageio-cli.yaml` (or `bioimageio-cli.json`) file
                            may provide the arguments, e.g.:
                            ```yaml
                            inputs:
                            - [a_raw.tif, a_mask.tif]
                            - [b_raw.tif, b_mask.tif]
                            ```

                            `.npy` and any file extension supported by imageio are supported.
                            Available formats are listed at
                            https://imageio.readthedocs.io/en/stable/formats/index.html#all-formats.
                            Some formats have additional dependencies.

                              (default: ('{input_id}/001.tif',))
    

    --outputs {str,Tuple[str,...]}
                        Model output path pattern (per output tensor)

                        All substrings that are replaced:
                        - '{model_id}' (from model description)
                        - '{output_id}' (from model description)
                        - '{sample_id}' (extracted from input paths)
    
                          (default: outputs_{model_id}/{output_id}/{sample_id}.tif)
    

    --overwrite bool    allow overwriting existing output files (default: False)
    --blockwise bool    process inputs blockwise (default: False)
    --stats Path        path to dataset statistics (will be written if it does not
                        exist, but the model requires statistical dataset measures)
                        (default: dataset_statistics.json)
    --preview bool      preview which files would be processed and what outputs
                        would be generated (default: False)
    --weight_format {typing.Literal['keras_hdf5','onnx','pytorch_state_dict','tensorflow_js','tensorflow_saved_model_bundle','torchscript'],any}
                        The weight format to use. (default: any)
    --example bool      generate and run an example

                        1. downloads example model inputs
                        2. creates a `{model_id}_example` folder
                        3. writes input arguments to `{model_id}_example/bioimageio-cli.yaml`
                        4. executes a preview dry-run
                        5. executes prediction with example input
    
                          (default: False)
    


  • create an example and run prediction locally!

    $ bioimageio predict impartial-shrimp --example=True
    ...
    (Click to expand output)


🛈 bioimageio prediction preview structure:
{'{sample_id}': {'inputs': {'{input_id}': '<input path>'},
                'outputs': {'{output_id}': '<output path>'}}}
🔎 bioimageio prediction preview output:
{'1': {'inputs': {'input0': 'impartial-shrimp_example/input0/001.tif'},
      'outputs': {'output0': 'impartial-shrimp_example/outputs/output0/1.tif'}}}
predict with impartial-shrimp: 100%|███████████████████████████████████████████████████| 1/1 [00:21<00:00, 21.76s/sample]
🎉 Successfully ran example prediction!
To predict the example input using the CLI example config file impartial-shrimp_example\bioimageio-cli.yaml, execute `bioimageio predict` from impartial-shrimp_example:
$ cd impartial-shrimp_example
$ bioimageio predict "impartial-shrimp"

Alternatively, run the following command in the current working directory, not the example folder:
$ bioimageio predict --preview=False --overwrite=True --stats="impartial-shrimp_example/dataset_statistics.json" --inputs="[[\"impartial-shrimp_example/input0/001.tif\"]]" --outputs="impartial-shrimp_example/outputs/{output_id}/{sample_id}.tif" "impartial-shrimp"
(note that a local 'bioimageio-cli.json' or 'bioimageio-cli.yaml' may interfere with this)

Installation

Via Conda

The bioimageio.core package can be installed from conda-forge via

conda install -c conda-forge bioimageio.core

If you do not install any additional deep learning libraries, you will only be able to use general convenience functionality, but not any functionality depending on model prediction. To install additional deep learning libraries add pytorch, onnxruntime, keras or tensorflow.


Via pip

The package is also available via pip (e.g. with recommended extras onnx and pytorch):

pip install "bioimageio.core[onnx,pytorch]"

🐍 Use in Python

bioimageio.core is a Python package that implements prediction with bioimageio models, including standardized pre- and postprocessing operations. These models are described by, and can be loaded with, the bioimageio.spec package.

In addition, bioimageio.core provides functionality to convert model weight formats.
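For orientation, a minimal Python sketch (hedged: it assumes network access and an installed DL backend; the tensor id "input0" and file name "my_image.tif" are placeholders, check `model.inputs` for a model's actual tensor ids):

```python
from bioimageio.core import load_model_description, predict, test_model

# download and parse a model description by its bioimage.io ID
model = load_model_description("affable-shark")

# reproduce the model's bundled test outputs from its test inputs
summary = test_model(model)
print(summary.status)

# run prediction on your own data; "input0" and "my_image.tif" are placeholders
sample = predict(model=model, inputs={"input0": "my_image.tif"})
```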

Documentation

You can find the bioimageio.core documentation here.

Examples

Notebooks that save and load resource descriptions and validate their format (using bioimageio.spec, a dependency of bioimageio.core):
  • load_model_and_create_your_own.ipynb
  • dataset_creation.ipynb

Use the described resources in Python with bioimageio.core:
  • model_usage.ipynb

💻 Use the Command Line Interface

bioimageio.core installs a command line interface (CLI) for testing models and other functionality. You can list all the available commands via:

bioimageio

For examples see Get started.

CLI inputs from file

For convenience, the command line options (not arguments) may be given in a bioimageio-cli.json or bioimageio-cli.yaml file, e.g.:

# bioimageio-cli.yaml
inputs: inputs/*_{tensor_id}.h5
outputs: outputs_{model_id}/{sample_id}_{tensor_id}.h5
overwrite: true
blockwise: true
stats: inputs/dataset_statistics.json
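
With such a file in the current working directory, the options are picked up automatically (see the example prediction output above), so a plain invocation like the following uses them, assuming the referenced input files exist:

    $ bioimageio predict impartial-shrimp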

Set up Development Environment

To set up a development conda environment run the following commands:

conda env create -f dev/env.yaml
conda activate core
pip install -e . --no-deps

There are different environment files available that only install tensorflow or pytorch as dependencies, see dev folder.

Logging level

bioimageio.spec and bioimageio.core use loguru for logging, hence the logging level may be controlled with the LOGURU_LEVEL environment variable.
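
For example, a minimal sketch (assumption: loguru configures its default handler from LOGURU_LEVEL at import time, so the variable must be set before importing bioimageio packages):

```python
import os

# set before any bioimageio import, as loguru reads LOGURU_LEVEL when imported
os.environ["LOGURU_LEVEL"] = "INFO"

import bioimageio.core  # noqa: E402
```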

Changelog

0.8.0

0.7.0

  • breaking:
    • bioimageio CLI now has implicit boolean flags
  • non-breaking:
    • use new ValidationDetail.recommended_env in ValidationSummary
    • improve get_io_sample_block_metas()
      • now works for sufficiently large, but not exactly shaped inputs
    • update to support zipfile.ZipFile object with bioimageio.spec==0.5.3.5
    • add io helpers resolve and resolve_and_extract
    • added enable_determinism function and determinism input argument for testing with seeded random generators and optionally (determinism=="full") instructing DL frameworks to use deterministic algorithms.

0.6.10

  • fix #423

0.6.9

  • improve bioimageio command line interface (details in #157)
    • add predict command
    • package command input path is now required

0.6.8

  • testing model inference will now check all weight formats (previously only the first one for which model adapter creation succeeded had been checked)
  • fix predict with blocking (Thanks @thodkatz)

0.6.7

0.6.6

  • add aliases to match previous API more closely

0.6.5

  • improve adapter error messages

0.6.4

  • add bioimageio validate-format command
  • improve error messages and display of command results

0.6.3

  • Fix #386
  • (in model inference testing) stop assuming model inputs are tileable

0.6.2

0.6.1

0.6.0

  • add compatibility with new bioimageio.spec 0.5 (0.5.2post1)
  • improve interfaces

0.5.10

  1"""
  2.. include:: ../../README.md
  3"""
  4
  5from bioimageio.spec import (
  6    build_description,
  7    dump_description,
  8    load_dataset_description,
  9    load_description,
 10    load_description_and_validate_format_only,
 11    load_model_description,
 12    save_bioimageio_package,
 13    save_bioimageio_package_as_folder,
 14    save_bioimageio_yaml_only,
 15    validate_format,
 16)
 17
 18from . import (
 19    axis,
 20    block_meta,
 21    cli,
 22    commands,
 23    common,
 24    digest_spec,
 25    io,
 26    model_adapters,
 27    prediction,
 28    proc_ops,
 29    proc_setup,
 30    sample,
 31    stat_calculators,
 32    stat_measures,
 33    tensor,
 34)
 35from ._prediction_pipeline import PredictionPipeline, create_prediction_pipeline
 36from ._resource_tests import (
 37    enable_determinism,
 38    load_description_and_test,
 39    test_description,
 40    test_model,
 41)
 42from ._settings import settings
 43from .axis import Axis, AxisId
 44from .backends import create_model_adapter
 45from .block_meta import BlockMeta
 46from .common import MemberId
 47from .prediction import predict, predict_many
 48from .sample import Sample
 49from .stat_calculators import compute_dataset_measures
 50from .stat_measures import Stat
 51from .tensor import Tensor
 52from .utils import VERSION
 53from .weight_converters import add_weights
 54
 55__version__ = VERSION
 56
 57
 58# aliases
 59test_resource = test_description
 60"""alias of `test_description`"""
 61load_resource = load_description
 62"""alias of `load_description`"""
 63load_model = load_model_description
 64"""alias of `load_model_description`"""
 65
 66__all__ = [
 67    "__version__",
 68    "add_weights",
 69    "axis",
 70    "Axis",
 71    "AxisId",
 72    "block_meta",
 73    "BlockMeta",
 74    "build_description",
 75    "cli",
 76    "commands",
 77    "common",
 78    "compute_dataset_measures",
 79    "create_model_adapter",
 80    "create_prediction_pipeline",
 81    "digest_spec",
 82    "dump_description",
 83    "enable_determinism",
 84    "io",
 85    "load_dataset_description",
 86    "load_description_and_test",
 87    "load_description_and_validate_format_only",
 88    "load_description",
 89    "load_model_description",
 90    "load_model",
 91    "load_resource",
 92    "MemberId",
 93    "model_adapters",
 94    "predict_many",
 95    "predict",
 96    "prediction",
 97    "PredictionPipeline",
 98    "proc_ops",
 99    "proc_setup",
100    "sample",
101    "Sample",
102    "save_bioimageio_package_as_folder",
103    "save_bioimageio_package",
104    "save_bioimageio_yaml_only",
105    "settings",
106    "stat_calculators",
107    "stat_measures",
108    "Stat",
109    "tensor",
110    "Tensor",
111    "test_description",
112    "test_model",
113    "test_resource",
114    "validate_format",
115]
__version__ = '0.8.0'
def add_weights( model_descr: bioimageio.spec.ModelDescr, *, output_path: Annotated[pathlib.Path, PathType(path_type='dir')], source_format: Optional[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']] = None, target_format: Optional[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']] = None, verbose: bool = False) -> Optional[bioimageio.spec.ModelDescr]:
def add_weights(
    model_descr: ModelDescr,
    *,
    output_path: DirectoryPath,
    source_format: Optional[WeightsFormat] = None,
    target_format: Optional[WeightsFormat] = None,
    verbose: bool = False,
) -> Optional[ModelDescr]:
    """Convert model weights to other formats and add them to the model description

    Args:
        output_path: Path to save updated model package to.
        source_format: convert from a specific weights format.
                       Default: choose automatically from any available.
        target_format: convert to a specific weights format.
                       Default: attempt to convert to any missing format.
        devices: Devices that may be used during conversion.
        verbose: log more (error) output

    Returns:
        - An updated model description if any converted weights were added.
        - `None` if no conversion was possible.
    """
    if not isinstance(model_descr, ModelDescr):
        if model_descr.type == "model" and not isinstance(model_descr, InvalidDescr):
            raise TypeError(
                f"Model format {model_descr.format} is not supported, please update"
                + f" model to format {ModelDescr.implemented_format_version} first."
            )

        raise TypeError(type(model_descr))

    # save model to local folder
    output_path = save_bioimageio_package_as_folder(
        model_descr, output_path=output_path
    )
    # reload from local folder to make sure we do not edit the given model
    _model_descr = load_model_description(output_path, perform_io_checks=False)
    assert isinstance(_model_descr, ModelDescr)
    model_descr = _model_descr
    del _model_descr

    if source_format is None:
        available = set(model_descr.weights.available_formats)
    else:
        available = {source_format}

    if target_format is None:
        missing = set(model_descr.weights.missing_formats)
    else:
        missing = {target_format}

    originally_missing = set(missing)

    if "pytorch_state_dict" in available and "torchscript" in missing:
        logger.info(
            "Attempting to convert 'pytorch_state_dict' weights to 'torchscript'."
        )
        from .pytorch_to_torchscript import convert

        try:
            torchscript_weights_path = output_path / "weights_torchscript.pt"
            model_descr.weights.torchscript = convert(
                model_descr,
                output_path=torchscript_weights_path,
                use_tracing=False,
            )
        except Exception as e:
            if verbose:
                traceback.print_exception(e)

            logger.error(e)
        else:
            available.add("torchscript")
            missing.discard("torchscript")

    if "pytorch_state_dict" in available and "torchscript" in missing:
        logger.info(
            "Attempting to convert 'pytorch_state_dict' weights to 'torchscript' by tracing."
        )
        from .pytorch_to_torchscript import convert

        try:
            torchscript_weights_path = output_path / "weights_torchscript_traced.pt"

            model_descr.weights.torchscript = convert(
                model_descr,
                output_path=torchscript_weights_path,
                use_tracing=True,
            )
        except Exception as e:
            if verbose:
                traceback.print_exception(e)

            logger.error(e)
        else:
            available.add("torchscript")
            missing.discard("torchscript")

    if "torchscript" in available and "onnx" in missing:
        logger.info("Attempting to convert 'torchscript' weights to 'onnx'.")
        from .torchscript_to_onnx import convert

        try:
            onnx_weights_path = output_path / "weights.onnx"
            model_descr.weights.onnx = convert(
                model_descr,
                output_path=onnx_weights_path,
            )
        except Exception as e:
            if verbose:
                traceback.print_exception(e)

            logger.error(e)
        else:
            available.add("onnx")
            missing.discard("onnx")

    if "pytorch_state_dict" in available and "onnx" in missing:
        logger.info("Attempting to convert 'pytorch_state_dict' weights to 'onnx'.")
        from .pytorch_to_onnx import convert

        try:
            onnx_weights_path = output_path / "weights.onnx"

            model_descr.weights.onnx = convert(
                model_descr,
                output_path=onnx_weights_path,
                verbose=verbose,
            )
        except Exception as e:
            if verbose:
                traceback.print_exception(e)

            logger.error(e)
        else:
            available.add("onnx")
            missing.discard("onnx")

    if missing:
        logger.warning(
            f"Converting from any of the available weights formats {available} to any"
            + f" of {missing} failed or is not yet implemented. Please create an issue"
            + " at https://github.com/bioimage-io/core-bioimage-io-python/issues/new/choose"
            + " if you would like bioimageio.core to support a particular conversion."
        )

    if originally_missing == missing:
        logger.warning("failed to add any converted weights")
        return None
    else:
        logger.info("added weights formats {}", originally_missing - missing)
        # resave model with updated rdf.yaml
        _ = save_bioimageio_package_as_folder(model_descr, output_path=output_path)
        tested_model_descr = load_description_and_test(model_descr)
        assert isinstance(tested_model_descr, ModelDescr)
        return tested_model_descr

Convert model weights to other formats and add them to the model description

Arguments:
  • output_path: Path to save updated model package to.
  • source_format: convert from a specific weights format. Default: choose automatically from any available.
  • target_format: convert to a specific weights format. Default: attempt to convert to any missing format.
  • devices: Devices that may be used during conversion.
  • verbose: log more (error) output
Returns:
  • An updated model description if any converted weights were added.
  • None if no conversion was possible.
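
For example, a hedged usage sketch ("my-model" is a placeholder for a bioimage.io model ID or rdf.yaml path; the model description must be in the current 0.5 format, and conversion only succeeds if the required frameworks are installed):

```python
from pathlib import Path

from bioimageio.core import add_weights, load_model_description

# "my-model" is a placeholder for a bioimage.io model ID, URL, or rdf.yaml path
model = load_model_description("my-model")

# try to add torchscript weights converted from available pytorch_state_dict weights
updated = add_weights(
    model,
    output_path=Path("my-model-with-torchscript"),
    target_format="torchscript",
)
if updated is None:
    print("conversion failed or was not possible")
```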
@dataclass
class Axis:
@dataclass
class Axis:
    id: AxisId
    type: Literal["batch", "channel", "index", "space", "time"]

    def __post_init__(self):
        if self.type == "batch":
            self.id = AxisId("batch")
        elif self.type == "channel":
            self.id = AxisId("channel")

    @classmethod
    def create(cls, axis: AxisLike) -> Axis:
        if isinstance(axis, cls):
            return axis
        elif isinstance(axis, Axis):
            return Axis(id=axis.id, type=axis.type)
        elif isinstance(axis, v0_5.AxisBase):
            return Axis(id=AxisId(axis.id), type=axis.type)
        elif isinstance(axis, str):
            return Axis(id=AxisId(axis), type=_guess_axis_type(axis))
        else:
            assert_never(axis)
Axis( id: AxisId, type: Literal['batch', 'channel', 'index', 'space', 'time'])
id: AxisId
type: Literal['batch', 'channel', 'index', 'space', 'time']
@classmethod
def create( cls, axis: Union[AxisId, Literal['b', 'i', 't', 'c', 'z', 'y', 'x'], Annotated[Union[bioimageio.spec.model.v0_5.BatchAxis, bioimageio.spec.model.v0_5.ChannelAxis, bioimageio.spec.model.v0_5.IndexInputAxis, bioimageio.spec.model.v0_5.TimeInputAxis, bioimageio.spec.model.v0_5.SpaceInputAxis], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[bioimageio.spec.model.v0_5.BatchAxis, bioimageio.spec.model.v0_5.ChannelAxis, bioimageio.spec.model.v0_5.IndexOutputAxis, Annotated[Union[Annotated[bioimageio.spec.model.v0_5.TimeOutputAxis, Tag(tag='wo_halo')], Annotated[bioimageio.spec.model.v0_5.TimeOutputAxisWithHalo, Tag(tag='with_halo')]], Discriminator(discriminator=<function _get_halo_axis_discriminator_value>, custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.model.v0_5.SpaceOutputAxis, Tag(tag='wo_halo')], Annotated[bioimageio.spec.model.v0_5.SpaceOutputAxisWithHalo, Tag(tag='with_halo')]], Discriminator(discriminator=<function _get_halo_axis_discriminator_value>, custom_error_type=None, custom_error_message=None, custom_error_context=None)]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Axis]) -> Axis:
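
A small sketch of how `Axis.create` normalizes different inputs (the axis-type guess for "b" is an assumption based on the `_guess_axis_type` call in the source above):

```python
from bioimageio.core import Axis, AxisId

# explicit construction
space_axis = Axis(id=AxisId("x"), type="space")

# from a single-letter string: "b" is guessed to be a batch axis,
# and __post_init__ normalizes its id to AxisId("batch")
batch_axis = Axis.create("b")
assert batch_axis.type == "batch" and batch_axis.id == AxisId("batch")
```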
class AxisId(bioimageio.spec._internal.types.LowerCaseIdentifier):
class AxisId(LowerCaseIdentifier):
    root_model: ClassVar[Type[RootModel[Any]]] = RootModel[
        Annotated[
            LowerCaseIdentifierAnno,
            MaxLen(16),
            AfterValidator(_normalize_axis_id),
        ]
    ]


root_model: ClassVar[Type[pydantic.root_model.RootModel[Any]]] = <class 'pydantic.root_model.RootModel[Annotated[str, MinLen, AfterValidator, AfterValidator, Annotated[TypeVar, Predicate], MaxLen, AfterValidator]]'>

the pydantic root model to validate the string

@dataclass(frozen=True)
class BlockMeta:
@dataclass(frozen=True)
class BlockMeta:
    """Block meta data of a sample member (a tensor in a sample)

    Figure for illustration:
    The first 2d block (dashed) of a sample member (**bold**).
    The inner slice (thin) is expanded by a halo in both dimensions on both sides.
    The outer slice reaches from the sample member origin (0, 0) to the right halo point.

    ```terminal
    ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─  ─ ─ ─ ─ ─ ─ ─ ┐
    ╷ halo(left)                         ╷
    ╷                                    ╷
    ╷  (0, 0)┏━━━━━━━━━━━━━━━━━┯━━━━━━━━━┯━━━➔
    ╷        ┃                 │         ╷  sample member
    ╷        ┃      inner      │         ╷
    ╷        ┃   (and outer)   │  outer  ╷
    ╷        ┃      slice      │  slice  ╷
    ╷        ┃                 │         ╷
    ╷        ┣─────────────────┘         ╷
    ╷        ┃   outer slice             ╷
    ╷        ┃               halo(right) ╷
    └ ─ ─ ─ ─┃─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┘

    ```

    note:
    - Inner and outer slices are specified in sample member coordinates.
    - The outer_slice of a block at the sample edge may overlap by more than the
        halo with the neighboring block (the inner slices will not overlap though).

    """

    sample_shape: PerAxis[int]
    """the axis sizes of the whole (unblocked) sample"""

    inner_slice: PerAxis[SliceInfo]
    """inner region (without halo) wrt the sample"""

    halo: PerAxis[Halo]
    """halo enlarging the inner region to the block's sizes"""

    block_index: BlockIndex
    """the i-th block of the sample"""

    blocks_in_sample: TotalNumberOfBlocks
    """total number of blocks in the sample"""

    @cached_property
    def shape(self) -> PerAxis[int]:
        """axis lengths of the block"""
        return Frozen(
            {
                a: s.stop - s.start + (sum(self.halo[a]) if a in self.halo else 0)
                for a, s in self.inner_slice.items()
            }
        )

    @cached_property
    def padding(self) -> PerAxis[PadWidth]:
        """padding to realize the halo at the sample edge
        where we cannot simply enlarge the inner slice"""
        return Frozen(
            {
                a: PadWidth(
                    (
                        self.halo[a].left
                        - (self.inner_slice[a].start - self.outer_slice[a].start)
                        if a in self.halo
                        else 0
                    ),
                    (
                        self.halo[a].right
                        - (self.outer_slice[a].stop - self.inner_slice[a].stop)
                        if a in self.halo
                        else 0
                    ),
                )
                for a in self.inner_slice
            }
        )

    @cached_property
    def outer_slice(self) -> PerAxis[SliceInfo]:
        """slice of the outer block (without padding) wrt the sample"""
        return Frozen(
            {
                a: SliceInfo(
                    max(
                        0,
                        min(
                            self.inner_slice[a].start
                            - (self.halo[a].left if a in self.halo else 0),
                            self.sample_shape[a]
                            - self.inner_shape[a]
                            - (self.halo[a].left if a in self.halo else 0),
                        ),
                    ),
                    min(
                        self.sample_shape[a],
                        self.inner_slice[a].stop
                        + (self.halo[a].right if a in self.halo else 0),
                    ),
                )
                for a in self.inner_slice
            }
        )

    @cached_property
    def inner_shape(self) -> PerAxis[int]:
        """axis lengths of the inner region (without halo)"""
        return Frozen({a: s.stop - s.start for a, s in self.inner_slice.items()})

    @cached_property
    def local_slice(self) -> PerAxis[SliceInfo]:
        """inner slice wrt the block, **not** the sample"""
        return Frozen(
            {
                a: SliceInfo(
                    self.halo[a].left,
                    self.halo[a].left + self.inner_shape[a],
                )
                for a in self.inner_slice
            }
        )

    @property
    def dims(self) -> Collection[AxisId]:
        return set(self.inner_shape)

    @property
    def tagged_shape(self) -> PerAxis[int]:
        """alias for shape"""
        return self.shape

    @property
    def inner_slice_wo_overlap(self):
        """subslice of the inner slice, such that all `inner_slice_wo_overlap` can be
        stitched together trivially to form the original sample.

        This can also be used to calculate statistics
        without overrepresenting block edge regions."""
        # TODO: update inner_slice_wo_overlap when adding block overlap
        return self.inner_slice

    def __post_init__(self):
        # freeze mutable inputs
        if not isinstance(self.sample_shape, Frozen):
            object.__setattr__(self, "sample_shape", Frozen(self.sample_shape))

        if not isinstance(self.inner_slice, Frozen):
            object.__setattr__(self, "inner_slice", Frozen(self.inner_slice))

        if not isinstance(self.halo, Frozen):
            object.__setattr__(self, "halo", Frozen(self.halo))

        assert all(
            a in self.sample_shape for a in self.inner_slice
        ), "block has axes not present in sample"

        assert all(
            a in self.inner_slice for a in self.halo
        ), "halo has axes not present in block"

        if any(s > self.sample_shape[a] for a, s in self.shape.items()):
            logger.warning(
                "block {} larger than sample {}", self.shape, self.sample_shape
            )

    def get_transformed(
        self, new_axes: PerAxis[Union[LinearAxisTransform, int]]
    ) -> Self:
        return self.__class__(
            sample_shape={
                a: (
                    trf
                    if isinstance(trf, int)
                    else trf.compute(self.sample_shape[trf.axis])
                )
                for a, trf in new_axes.items()
            },
            inner_slice={
                a: (
                    SliceInfo(0, trf)
                    if isinstance(trf, int)
                    else SliceInfo(
                        trf.compute(self.inner_slice[trf.axis].start),
                        trf.compute(self.inner_slice[trf.axis].stop),
                    )
                )
                for a, trf in new_axes.items()
            },
            halo={
                a: (
                    Halo(0, 0)
                    if isinstance(trf, int)
                    else Halo(self.halo[trf.axis].left, self.halo[trf.axis].right)
                )
                for a, trf in new_axes.items()
            },
            block_index=self.block_index,
            blocks_in_sample=self.blocks_in_sample,
        )

Block meta data of a sample member (a tensor in a sample)

Figure for illustration: The first 2d block (dashed) of a sample member (bold). The inner slice (thin) is expanded by a halo in both dimensions on both sides. The outer slice reaches from the sample member origin (0, 0) to the right halo point.

┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─  ─ ─ ─ ─ ─ ─ ─ ┐
╷ halo(left)                         ╷
╷                                    ╷
╷  (0, 0)┏━━━━━━━━━━━━━━━━━┯━━━━━━━━━┯━━━➔
╷        ┃                 │         ╷  sample member
╷        ┃      inner      │         ╷
╷        ┃   (and outer)   │  outer  ╷
╷        ┃      slice      │  slice  ╷
╷        ┃                 │         ╷
╷        ┣─────────────────┘         ╷
╷        ┃   outer slice             ╷
╷        ┃               halo(right) ╷
└ ─ ─ ─ ─┃─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┘
         ⬇

note:

  • Inner and outer slices are specified in sample member coordinates.
  • The outer_slice of a block at the sample edge may overlap by more than the halo with the neighboring block (the inner slices will not overlap though).
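
A small constructed sketch (the axis ids, sizes, and halo values here are made up for illustration; the printed values follow from the `shape` and `padding` definitions above):

```python
from bioimageio.core import AxisId, BlockMeta
from bioimageio.core.common import Halo, SliceInfo

x, y = AxisId("x"), AxisId("y")
block = BlockMeta(
    sample_shape={x: 256, y: 256},
    inner_slice={x: SliceInfo(0, 64), y: SliceInfo(0, 64)},  # first 64x64 block
    halo={x: Halo(8, 8), y: Halo(8, 8)},
    block_index=0,
    blocks_in_sample=16,
)
print(block.shape)    # 80 per axis: inner 64 plus left and right halo
print(block.padding)  # PadWidth(8, 0) per axis: at the sample origin the left
                      # halo cannot be realized by slicing and must be padded
```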
BlockMeta( sample_shape: Mapping[AxisId, int], inner_slice: Mapping[AxisId, bioimageio.core.common.SliceInfo], halo: Mapping[AxisId, bioimageio.core.common.Halo], block_index: int, blocks_in_sample: int)
sample_shape: Mapping[AxisId, int]

the axis sizes of the whole (unblocked) sample

inner_slice: Mapping[AxisId, bioimageio.core.common.SliceInfo]

inner region (without halo) wrt the sample

halo: Mapping[AxisId, bioimageio.core.common.Halo]

halo enlarging the inner region to the block's sizes

block_index: int

the i-th block of the sample

blocks_in_sample: int

total number of blocks in the sample

shape: Mapping[AxisId, int]

axis lengths of the block

padding: Mapping[AxisId, bioimageio.core.common.PadWidth]

padding to realize the halo at the sample edge where we cannot simply enlarge the inner slice

outer_slice: Mapping[AxisId, bioimageio.core.common.SliceInfo]

slice of the outer block (without padding) wrt the sample

inner_shape: Mapping[AxisId, int]

axis lengths of the inner region (without halo)

local_slice: Mapping[AxisId, bioimageio.core.common.SliceInfo]

inner slice wrt the block, not the sample

dims: Collection[AxisId]
tagged_shape: Mapping[AxisId, int]

alias for shape

inner_slice_wo_overlap

subslice of the inner slice, such that all inner_slice_wo_overlap can be stitched together trivially to form the original sample.

This can also be used to calculate statistics without overrepresenting block edge regions.

def get_transformed( self, new_axes: Mapping[AxisId, Union[bioimageio.core.block_meta.LinearAxisTransform, int]]) -> Self:
def build_description( content: Dict[str, YamlValue], /, *, context: Optional[bioimageio.spec.ValidationContext] = None, format_version: Union[Literal['latest', 'discover'], str] = 'discover') -> Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[bioimageio.spec.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[bioimageio.spec.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], bioimageio.spec.InvalidDescr]:
def build_description(
    content: BioimageioYamlContent,
    /,
    *,
    context: Optional[ValidationContext] = None,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
) -> Union[ResourceDescr, InvalidDescr]:
    """build a bioimage.io resource description from an RDF's content.

    Use `load_description` if you want to build a resource description from an rdf.yaml
    or bioimage.io zip-package.

    Args:
        content: loaded rdf.yaml file (loaded with YAML, not bioimageio.spec)
        context: validation context to use during validation
        format_version: (optional) use this argument to load the resource and
                        convert its metadata to a higher format_version

    Returns:
        An object holding all metadata of the bioimage.io resource

    """

    return build_description_impl(
        content,
        context=context,
        format_version=format_version,
        get_rd_class=_get_rd_class,
    )

build a bioimage.io resource description from an RDF's content.

Use load_description if you want to build a resource description from an rdf.yaml or bioimage.io zip-package.

Arguments:
  • content: loaded rdf.yaml file (loaded with YAML, not bioimageio.spec)
  • context: validation context to use during validation
  • format_version: (optional) use this argument to load the resource and convert its metadata to a higher format_version
Returns:

An object holding all metadata of the bioimage.io resource
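
A sketch of feeding hand-loaded YAML content to build_description (assumptions: PyYAML is available, "rdf.yaml" is a placeholder path, and `validation_summary` is the attribute to inspect on an `InvalidDescr`):

```python
import yaml  # assumption: PyYAML (or a compatible loader) is installed

from bioimageio.core import build_description
from bioimageio.spec import InvalidDescr

# "rdf.yaml" is a placeholder for a resource description you loaded yourself
with open("rdf.yaml") as f:
    content = yaml.safe_load(f)

descr = build_description(content)
if isinstance(descr, InvalidDescr):
    print("validation failed:", descr.validation_summary)
```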

def compute_dataset_measures( measures: Iterable[Annotated[Union[bioimageio.core.stat_measures.DatasetMean, bioimageio.core.stat_measures.DatasetStd, bioimageio.core.stat_measures.DatasetVar, bioimageio.core.stat_measures.DatasetPercentile], Discriminator(discriminator='name', custom_error_type=None, custom_error_message=None, custom_error_context=None)]], dataset: Iterable[Sample]) -> Dict[Annotated[Union[bioimageio.core.stat_measures.DatasetMean, bioimageio.core.stat_measures.DatasetStd, bioimageio.core.stat_measures.DatasetVar, bioimageio.core.stat_measures.DatasetPercentile], Discriminator(discriminator='name', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Union[float, Annotated[Tensor, BeforeValidator(func=<function tensor_custom_before_validator at 0x7f25f54c6f20>, json_schema_input_type=PydanticUndefined), PlainSerializer(func=<function tensor_custom_serializer at 0x7f25f54c7100>, return_type=PydanticUndefined, when_used='always')]]]:
def compute_dataset_measures(
    measures: Iterable[DatasetMeasure], dataset: Iterable[Sample]
) -> Dict[DatasetMeasure, MeasureValue]:
    """compute all dataset `measures` for the given `dataset`"""
    sample_calculators, calculators = get_measure_calculators(measures)
    assert not sample_calculators

    ret: Dict[DatasetMeasure, MeasureValue] = {}

    for sample in dataset:
        for calc in calculators:
            calc.update(sample)

    for calc in calculators:
        ret.update(calc.finalize().items())

    return ret

compute all dataset measures for the given dataset
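
A heavily hedged sketch (the `Tensor.from_numpy` and `DatasetMean` constructors used here are assumptions; consult `bioimageio.core.tensor` and `bioimageio.core.stat_measures` for the authoritative signatures):

```python
import numpy as np

from bioimageio.core import MemberId, Sample, Tensor, compute_dataset_measures
from bioimageio.core.stat_measures import DatasetMean

member = MemberId("raw")
dataset = [
    Sample(
        members={
            member: Tensor.from_numpy(np.random.rand(1, 32, 32), dims=("c", "y", "x"))
        },
        stat={},
        id=i,
    )
    for i in range(4)
]

measure = DatasetMean(member_id=member)  # assumed constructor; mean over all axes
stats = compute_dataset_measures(measures=[measure], dataset=dataset)
print(stats[measure])
```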

@final
@classmethod
def create_model_adapter( model_description: Union[bioimageio.spec.model.v0_4.ModelDescr, bioimageio.spec.ModelDescr], *, devices: Optional[Sequence[str]] = None, weight_format_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_saved_model_bundle', 'torchscript']]] = None):
    @final
    @classmethod
    def create(
        cls,
        model_description: Union[v0_4.ModelDescr, v0_5.ModelDescr],
        *,
        devices: Optional[Sequence[str]] = None,
        weight_format_priority_order: Optional[Sequence[SupportedWeightsFormat]] = None,
    ):
        """
        Creates model adapter based on the passed spec
        Note: All specific adapters should happen inside this function to prevent different framework
        initializations interfering with each other
        """
        if not isinstance(model_description, (v0_4.ModelDescr, v0_5.ModelDescr)):
            raise TypeError(
                f"expected v0_4.ModelDescr or v0_5.ModelDescr, but got {type(model_description)}"
            )

        weights = model_description.weights
        errors: List[Exception] = []
        weight_format_priority_order = (
            DEFAULT_WEIGHT_FORMAT_PRIORITY_ORDER
            if weight_format_priority_order is None
            else weight_format_priority_order
        )
        # limit weight formats to the ones present
        weight_format_priority_order_present: Sequence[SupportedWeightsFormat] = [
            w for w in weight_format_priority_order if getattr(weights, w) is not None
        ]
        if not weight_format_priority_order_present:
            raise ValueError(
                f"None of the specified weight formats ({weight_format_priority_order}) is present ({weight_format_priority_order_present})"
            )

        for wf in weight_format_priority_order_present:
            if wf == "pytorch_state_dict":
                assert weights.pytorch_state_dict is not None
                try:
                    from .pytorch_backend import PytorchModelAdapter

                    return PytorchModelAdapter(
                        model_description=model_description, devices=devices
                    )
                except Exception as e:
                    errors.append(e)
            elif wf == "tensorflow_saved_model_bundle":
                assert weights.tensorflow_saved_model_bundle is not None
                try:
                    from .tensorflow_backend import create_tf_model_adapter

                    return create_tf_model_adapter(
                        model_description=model_description, devices=devices
                    )
                except Exception as e:
                    errors.append(e)
            elif wf == "onnx":
                assert weights.onnx is not None
                try:
                    from .onnx_backend import ONNXModelAdapter

                    return ONNXModelAdapter(
                        model_description=model_description, devices=devices
                    )
                except Exception as e:
                    errors.append(e)
            elif wf == "torchscript":
                assert weights.torchscript is not None
                try:
                    from .torchscript_backend import TorchscriptModelAdapter

                    return TorchscriptModelAdapter(
                        model_description=model_description, devices=devices
                    )
                except Exception as e:
                    errors.append(e)
            elif wf == "keras_hdf5":
                assert weights.keras_hdf5 is not None
                # keras can either be installed as a separate package or used as part of tensorflow
                # we try to first import the keras model adapter using the separate package and,
                # if it is not available, try to load the one using tf
                try:
                    try:
                        from .keras_backend import KerasModelAdapter
                    except Exception:
                        from .tensorflow_backend import KerasModelAdapter

                    return KerasModelAdapter(
                        model_description=model_description, devices=devices
                    )
                except Exception as e:
                    errors.append(e)
            else:
                assert_never(wf)

        assert errors
        if len(weight_format_priority_order) == 1:
            assert len(errors) == 1
            raise errors[0]

        else:
            msg = (
                "None of the weight format specific model adapters could be created"
                + " in this environment."
            )
            if sys.version_info[:2] >= (3, 11):
                raise ExceptionGroup(msg, errors)
            else:
                raise ValueError(msg) from Exception(errors)

Creates a model adapter based on the passed spec. Note: all specific adapters should be created inside this function to prevent different framework initializations from interfering with each other.
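
A hedged usage sketch ("my-model" is a placeholder; the requested weight formats must be present in the model and their runtimes installed):

```python
from bioimageio.core import create_model_adapter, load_model_description

# "my-model" is a placeholder for a bioimage.io model ID, URL, or rdf.yaml path
model = load_model_description("my-model")

# prefer onnx weights, fall back to torchscript
adapter = create_model_adapter(
    model,
    weight_format_priority_order=("onnx", "torchscript"),
)
```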

def create_prediction_pipeline( bioimageio_model: Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], *, devices: Optional[Sequence[str]] = None, weight_format: Optional[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_saved_model_bundle', 'torchscript']] = None, weights_format: Optional[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_saved_model_bundle', 'torchscript']] = None, dataset_for_initial_statistics: Iterable[Union[Sample, Sequence[Tensor]]] = (), keep_updating_initial_dataset_statistics: bool = False, fixed_dataset_statistics: Mapping[Annotated[Union[bioimageio.core.stat_measures.DatasetMean, bioimageio.core.stat_measures.DatasetStd, bioimageio.core.stat_measures.DatasetVar, bioimageio.core.stat_measures.DatasetPercentile], Discriminator(discriminator='name', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Union[float, Annotated[Tensor, BeforeValidator(func=<function tensor_custom_before_validator>, json_schema_input_type=PydanticUndefined), PlainSerializer(func=<function tensor_custom_serializer>, return_type=PydanticUndefined, when_used='always')]]] = mappingproxy({}), model_adapter: Optional[bioimageio.core.backends._model_adapter.ModelAdapter] = None, ns: Union[int, Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, AxisId], int], NoneType] = None, default_blocksize_parameter: Union[int, Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, AxisId], int]] = 10, **deprecated_kwargs: Any) -> PredictionPipeline:
317def create_prediction_pipeline(
318    bioimageio_model: AnyModelDescr,
319    *,
320    devices: Optional[Sequence[str]] = None,
321    weight_format: Optional[SupportedWeightsFormat] = None,
322    weights_format: Optional[SupportedWeightsFormat] = None,
323    dataset_for_initial_statistics: Iterable[Union[Sample, Sequence[Tensor]]] = tuple(),
324    keep_updating_initial_dataset_statistics: bool = False,
325    fixed_dataset_statistics: Mapping[DatasetMeasure, MeasureValue] = MappingProxyType(
326        {}
327    ),
328    model_adapter: Optional[ModelAdapter] = None,
329    ns: Optional[BlocksizeParameter] = None,
330    default_blocksize_parameter: BlocksizeParameter = 10,
331    **deprecated_kwargs: Any,
332) -> PredictionPipeline:
333    """
334    Creates prediction pipeline which includes:
335    * computation of input statistics
336    * preprocessing
337    * model prediction
338    * computation of output statistics
339    * postprocessing
340
341    Args:
342        bioimageio_model: A bioimageio model description.
343        devices: (optional) Device names to run inference on.
344        weight_format: deprecated in favor of **weights_format**
345        weights_format: (optional) Use a specific **weights_format** rather than
346            choosing one automatically.
347            A corresponding `bioimageio.core.model_adapters.ModelAdapter` will be
348            created to run inference with the **bioimageio_model**.
349        dataset_for_initial_statistics: (optional) If preprocessing steps require input
350            dataset statistics, **dataset_for_initial_statistics** allows you to
351            specify a dataset from which these statistics are computed.
352        keep_updating_initial_dataset_statistics: (optional) Set to `True` if you want
353            to update dataset statistics with each processed sample.
354        fixed_dataset_statistics: (optional) Allows you to specify a mapping of
355            `DatasetMeasure`s to precomputed `MeasureValue`s.
356        model_adapter: (optional) Allows you to use a custom **model_adapter** instead
357            of creating one according to the present/selected **weights_format**.
358        ns: deprecated in favor of **default_blocksize_parameter**
359        default_blocksize_parameter: Allows controlling the default block size for
360            blockwise predictions, see `BlocksizeParameter`.
361
362    """
363    weights_format = weight_format or weights_format
364    del weight_format
365    default_blocksize_parameter = ns or default_blocksize_parameter
366    del ns
367    if deprecated_kwargs:
368        warnings.warn(
369            f"deprecated create_prediction_pipeline kwargs: {set(deprecated_kwargs)}"
370        )
371
372    model_adapter = model_adapter or create_model_adapter(
373        model_description=bioimageio_model,
374        devices=devices,
375        weight_format_priority_order=weights_format and (weights_format,),
376    )
377
378    input_ids = get_member_ids(bioimageio_model.inputs)
379
380    def dataset():
381        common_stat: Stat = {}
382        for i, x in enumerate(dataset_for_initial_statistics):
383            if isinstance(x, Sample):
384                yield x
385            else:
386                yield Sample(members=dict(zip(input_ids, x)), stat=common_stat, id=i)
387
388    preprocessing, postprocessing = setup_pre_and_postprocessing(
389        bioimageio_model,
390        dataset(),
391        keep_updating_initial_dataset_stats=keep_updating_initial_dataset_statistics,
392        fixed_dataset_stats=fixed_dataset_statistics,
393    )
394
395    return PredictionPipeline(
396        name=bioimageio_model.name,
397        model_description=bioimageio_model,
398        model_adapter=model_adapter,
399        preprocessing=preprocessing,
400        postprocessing=postprocessing,
401        default_blocksize_parameter=default_blocksize_parameter,
402    )

Creates prediction pipeline which includes:

  • computation of input statistics
  • preprocessing
  • model prediction
  • computation of output statistics
  • postprocessing
Arguments:
  • bioimageio_model: A bioimageio model description.
  • devices: (optional) Device names to run inference on.
  • weight_format: deprecated in favor of weights_format
  • weights_format: (optional) Use a specific weights_format rather than choosing one automatically. A corresponding bioimageio.core.model_adapters.ModelAdapter will be created to run inference with the bioimageio_model.
  • dataset_for_initial_statistics: (optional) If preprocessing steps require input dataset statistics, dataset_for_initial_statistics allows you to specify a dataset from which these statistics are computed.
  • keep_updating_initial_dataset_statistics: (optional) Set to True if you want to update dataset statistics with each processed sample.
  • fixed_dataset_statistics: (optional) Allows you to specify a mapping of DatasetMeasures to precomputed MeasureValues.
  • model_adapter: (optional) Allows you to use a custom model_adapter instead of creating one according to the present/selected weights_format.
  • ns: deprecated in favor of default_blocksize_parameter
  • default_blocksize_parameter: Allows controlling the default block size for blockwise predictions, see BlocksizeParameter.
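
A minimal usage sketch (the model id "powerful-chipmunk" from the examples above stands in for any model source; using the pipeline as a context manager loads and unloads the model weights):

    from bioimageio.core import create_prediction_pipeline, load_model_description

    model = load_model_description("powerful-chipmunk")
    with create_prediction_pipeline(model) as pp:
        # pp.predict_sample_without_blocking(sample) runs inference on a Sample
        print(pp.name)
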
def dump_description( rd: Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[bioimageio.spec.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[bioimageio.spec.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], bioimageio.spec.InvalidDescr], /, *, exclude_unset: bool = True, exclude_defaults: bool = False) -> Dict[str, YamlValue]:
66def dump_description(
67    rd: Union[ResourceDescr, InvalidDescr],
68    /,
69    *,
70    exclude_unset: bool = True,
71    exclude_defaults: bool = False,
72) -> BioimageioYamlContent:
73    """Converts a resource to a dictionary containing only simple types that can directly be serialzed to YAML.
74
75    Args:
76        rd: bioimageio resource description
77        exclude_unset: Exclude fields that have not been set explicitly.
78        exclude_defaults: Exclude fields that have the default value (even if set explicitly).
79    """
80    return rd.model_dump(
81        mode="json", exclude_unset=exclude_unset, exclude_defaults=exclude_defaults
82    )

Converts a resource to a dictionary containing only simple types that can directly be serialized to YAML.

Arguments:
  • rd: bioimageio resource description
  • exclude_unset: Exclude fields that have not been set explicitly.
  • exclude_defaults: Exclude fields that have the default value (even if set explicitly).
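
For example, a loaded description can be reduced to plain YAML content (the model id is an assumption; any resource source works):

    from bioimageio.core import dump_description, load_description

    rd = load_description("powerful-chipmunk")
    content = dump_description(rd)  # plain dict with YAML-compatible values
    print(content["type"], content["name"])
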
def enable_determinism( mode: Literal['seed_only', 'full'] = 'full', weight_formats: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_saved_model_bundle', 'torchscript']]] = None):
 75def enable_determinism(
 76    mode: Literal["seed_only", "full"] = "full",
 77    weight_formats: Optional[Sequence[SupportedWeightsFormat]] = None,
 78):
 79    """Seed and configure ML frameworks for maximum reproducibility.
 80    May degrade performance. Only recommended for testing reproducibility!
 81
 82    Seed any random generators and (if **mode**=="full") request ML frameworks to use
 83    deterministic algorithms.
 84
 85    Args:
 86        mode: determinism mode
 87            - 'seed_only' -- only set seeds, or
 88            - 'full' -- also enable determinism features (might degrade performance or throw exceptions)
 89        weight_formats: Limit which deep learning frameworks are imported
 90            based on weight_formats.
 91            E.g. this allows avoiding the import of tensorflow when testing with pytorch.
 92
 93    Notes:
 94        - **mode** == "full"  might degrade performance or throw exceptions.
 95        - Subsequent inference calls might still differ. Call before each function
 96          (sequence) that is expected to be reproducible.
 97        - Degraded performance: Use for testing reproducibility only!
 98        - Recipes:
 99            - [PyTorch](https://pytorch.org/docs/stable/notes/randomness.html)
100            - [Keras](https://keras.io/examples/keras_recipes/reproducibility_recipes/)
101            - [NumPy](https://numpy.org/doc/2.0/reference/random/generated/numpy.random.seed.html)
102    """
103    try:
104        try:
105            import numpy.random
106        except ImportError:
107            pass
108        else:
109            numpy.random.seed(0)
110    except Exception as e:
111        logger.debug(str(e))
112
113    if (
114        weight_formats is None
115        or "pytorch_state_dict" in weight_formats
116        or "torchscript" in weight_formats
117    ):
118        try:
119            try:
120                import torch
121            except ImportError:
122                pass
123            else:
124                _ = torch.manual_seed(0)
125                torch.use_deterministic_algorithms(mode == "full")
126        except Exception as e:
127            logger.debug(str(e))
128
129    if (
130        weight_formats is None
131        or "tensorflow_saved_model_bundle" in weight_formats
132        or "keras_hdf5" in weight_formats
133    ):
134        try:
135            os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"
136            try:
137                import tensorflow as tf  # pyright: ignore[reportMissingTypeStubs]
138            except ImportError:
139                pass
140            else:
141                tf.random.set_seed(0)
142                if mode == "full":
143                    tf.config.experimental.enable_op_determinism()
144                # TODO: find possibility to switch it off again??
145        except Exception as e:
146            logger.debug(str(e))
147
148    if weight_formats is None or "keras_hdf5" in weight_formats:
149        try:
150            try:
151                import keras  # pyright: ignore[reportMissingTypeStubs]
152            except ImportError:
153                pass
154            else:
155                keras.utils.set_random_seed(0)
156        except Exception as e:
157            logger.debug(str(e))

Seed and configure ML frameworks for maximum reproducibility. May degrade performance. Only recommended for testing reproducibility!

Seed any random generators and (if mode=="full") request ML frameworks to use deterministic algorithms.

Arguments:
  • mode: determinism mode
    • 'seed_only' -- only set seeds, or
    • 'full' -- also enable determinism features (might degrade performance or throw exceptions)
  • weight_formats: Limit which deep learning frameworks are imported based on weight_formats. E.g. this allows avoiding the import of tensorflow when testing with pytorch.
Notes:
  • mode == "full" might degrade performance or throw exceptions.
  • Subsequent inference calls might still differ. Call before each function (sequence) that is expected to be reproducible.
  • Degraded performance: Use for testing reproducibility only!
  • Recipes:
    • PyTorch: https://pytorch.org/docs/stable/notes/randomness.html
    • Keras: https://keras.io/examples/keras_recipes/reproducibility_recipes/
    • NumPy: https://numpy.org/doc/2.0/reference/random/generated/numpy.random.seed.html
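
A typical call before reproducibility tests (the chosen mode and weight format are only examples):

    from bioimageio.core import enable_determinism

    # seed RNGs only, and only import the pytorch-related frameworks
    enable_determinism(mode="seed_only", weight_formats=["pytorch_state_dict"])
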
def load_dataset_description( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, bioimageio.spec._internal.io_basics.Sha256]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')]:
177def load_dataset_description(
178    source: Union[PermissiveFileSource, ZipFile],
179    /,
180    *,
181    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
182    perform_io_checks: Optional[bool] = None,
183    known_files: Optional[Dict[str, Sha256]] = None,
184    sha256: Optional[Sha256] = None,
185) -> AnyDatasetDescr:
186    """same as `load_description`, but addtionally ensures that the loaded
187    description is valid and of type 'dataset'.
188    """
189    rd = load_description(
190        source,
191        format_version=format_version,
192        perform_io_checks=perform_io_checks,
193        known_files=known_files,
194        sha256=sha256,
195    )
196    return ensure_description_is_dataset(rd)

same as load_description, but additionally ensures that the loaded description is valid and of type 'dataset'.

def load_description_and_test( source: Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[bioimageio.spec.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[bioimageio.spec.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], Dict[str, YamlValue]], *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', weight_format: Optional[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_saved_model_bundle', 'torchscript']] = None, devices: Optional[Sequence[str]] = None, determinism: Literal['seed_only', 'full'] = 'seed_only', expected_type: Optional[str] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None, stop_early: bool = False, **deprecated: Unpack[bioimageio.core._resource_tests.DeprecatedKwargs]) -> Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, 
FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[bioimageio.spec.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[bioimageio.spec.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], bioimageio.spec.InvalidDescr]:
432def load_description_and_test(
433    source: Union[ResourceDescr, PermissiveFileSource, BioimageioYamlContent],
434    *,
435    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
436    weight_format: Optional[SupportedWeightsFormat] = None,
437    devices: Optional[Sequence[str]] = None,
438    determinism: Literal["seed_only", "full"] = "seed_only",
439    expected_type: Optional[str] = None,
440    sha256: Optional[Sha256] = None,
441    stop_early: bool = False,
442    **deprecated: Unpack[DeprecatedKwargs],
443) -> Union[ResourceDescr, InvalidDescr]:
444    """Test a bioimage.io resource dynamically,
445    for example run prediction of test tensors for models.
446
447    See `test_description` for more details.
448
449    Returns:
450        A (possibly invalid) resource description object
451        with a populated `.validation_summary` attribute.
452    """
453    if isinstance(source, ResourceDescrBase):
454        root = source.root
455        file_name = source.file_name
456        if (
457            (
458                format_version
459                not in (
460                    DISCOVER,
461                    source.format_version,
462                    ".".join(source.format_version.split(".")[:2]),
463                )
464            )
465            or (c := source.validation_summary.details[0].context) is None
466            or not c.perform_io_checks
467        ):
468            logger.debug(
469                "deserializing source to ensure we validate and test using format {} and perform io checks",
470                format_version,
471            )
472            source = dump_description(source)
473    else:
474        root = Path()
475        file_name = None
476
477    if isinstance(source, ResourceDescrBase):
478        rd = source
479    elif isinstance(source, dict):
480        # check context for a given root; default to root of source
481        context = get_validation_context(
482            ValidationContext(root=root, file_name=file_name)
483        ).replace(
484            perform_io_checks=True  # make sure we perform io checks though
485        )
486
487        rd = build_description(
488            source,
489            format_version=format_version,
490            context=context,
491        )
492    else:
493        rd = load_description(
494            source, format_version=format_version, sha256=sha256, perform_io_checks=True
495        )
496
497    rd.validation_summary.env.add(
498        InstalledPackage(name="bioimageio.core", version=VERSION)
499    )
500
501    if expected_type is not None:
502        _test_expected_resource_type(rd, expected_type)
503
504    if isinstance(rd, (v0_4.ModelDescr, v0_5.ModelDescr)):
505        if weight_format is None:
506            weight_formats: List[SupportedWeightsFormat] = [
507                w for w, we in rd.weights if we is not None
508            ]  # pyright: ignore[reportAssignmentType]
509        else:
510            weight_formats = [weight_format]
511
512        enable_determinism(determinism, weight_formats=weight_formats)
513        for w in weight_formats:
514            _test_model_inference(rd, w, devices, **deprecated)
515            if stop_early and rd.validation_summary.status == "failed":
516                break
517
518            if not isinstance(rd, v0_4.ModelDescr):
519                _test_model_inference_parametrized(
520                    rd, w, devices, stop_early=stop_early
521                )
522                if stop_early and rd.validation_summary.status == "failed":
523                    break
524
525    # TODO: add execution of jupyter notebooks
526    # TODO: add more tests
527
528    if rd.validation_summary.status == "valid-format":
529        rd.validation_summary.status = "passed"
530
531    return rd

Test a bioimage.io resource dynamically, for example run prediction of test tensors for models.

See test_description for more details.

Returns:

A (possibly invalid) resource description object with a populated .validation_summary attribute.
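
A minimal sketch (the model id and weight format are assumptions; omit weight_format to test all available weights entries):

    from bioimageio.core import load_description_and_test

    rd = load_description_and_test(
        "powerful-chipmunk",
        weight_format="torchscript",
        determinism="seed_only",
    )
    print(rd.validation_summary.status)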

def load_description_and_validate_format_only( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, bioimageio.spec._internal.io_basics.Sha256]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> bioimageio.spec.ValidationSummary:
229def load_description_and_validate_format_only(
230    source: Union[PermissiveFileSource, ZipFile],
231    /,
232    *,
233    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
234    perform_io_checks: Optional[bool] = None,
235    known_files: Optional[Dict[str, Sha256]] = None,
236    sha256: Optional[Sha256] = None,
237) -> ValidationSummary:
238    """same as `load_description`, but only return the validation summary.
239
240    Returns:
241        Validation summary of the bioimage.io resource found at `source`.
242
243    """
244    rd = load_description(
245        source,
246        format_version=format_version,
247        perform_io_checks=perform_io_checks,
248        known_files=known_files,
249        sha256=sha256,
250    )
251    assert rd.validation_summary is not None
252    return rd.validation_summary

same as load_description, but only return the validation summary.

Returns:

Validation summary of the bioimage.io resource found at source.
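
For example, to validate only the metadata format without running model tests (model id as above):

    from bioimageio.core import load_description_and_validate_format_only

    summary = load_description_and_validate_format_only("powerful-chipmunk")
    print(summary.status)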

def load_description( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, bioimageio.spec._internal.io_basics.Sha256]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[bioimageio.spec.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[bioimageio.spec.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], bioimageio.spec.InvalidDescr]:
 56def load_description(
 57    source: Union[PermissiveFileSource, ZipFile],
 58    /,
 59    *,
 60    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
 61    perform_io_checks: Optional[bool] = None,
 62    known_files: Optional[Dict[str, Sha256]] = None,
 63    sha256: Optional[Sha256] = None,
 64) -> Union[ResourceDescr, InvalidDescr]:
 65    """load a bioimage.io resource description
 66
 67    Args:
 68        source: Path or URL to an rdf.yaml or a bioimage.io package
 69                (zip-file with rdf.yaml in it).
 70        format_version: (optional) Use this argument to load the resource and
 71                        convert its metadata to a higher format_version.
 72        perform_io_checks: Whether or not to perform validation that requires file IO,
 73                           e.g. downloading remote files. The existence of local
 74                           absolute file paths is still being checked.
 75        known_files: Allows bypassing download and hashing of referenced files
 76                     (even if perform_io_checks is True).
 77        sha256: Optional SHA-256 value of **source**
 78
 79    Returns:
 80        An object holding all metadata of the bioimage.io resource
 81
 82    """
 83    if isinstance(source, ResourceDescrBase):
 84        name = getattr(source, "name", f"{str(source)[:10]}...")
 85        logger.warning("returning already loaded description '{}' as is", name)
 86        return source  # pyright: ignore[reportReturnType]
 87
 88    opened = open_bioimageio_yaml(source, sha256=sha256)
 89
 90    context = get_validation_context().replace(
 91        root=opened.original_root,
 92        file_name=opened.original_file_name,
 93        perform_io_checks=perform_io_checks,
 94        known_files=known_files,
 95    )
 96
 97    return build_description(
 98        opened.content,
 99        context=context,
100        format_version=format_version,
101    )

load a bioimage.io resource description

Arguments:
  • source: Path or URL to an rdf.yaml or a bioimage.io package (zip-file with rdf.yaml in it).
  • format_version: (optional) Use this argument to load the resource and convert its metadata to a higher format_version.
  • perform_io_checks: Whether or not to perform validation that requires file IO, e.g. downloading remote files. The existence of local absolute file paths is still being checked.
  • known_files: Allows bypassing download and hashing of referenced files (even if perform_io_checks is True).
  • sha256: Optional SHA-256 value of source
Returns:

An object holding all metadata of the bioimage.io resource
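
A minimal sketch, including the check for an invalid description (model id as in the examples above):

    from bioimageio.core import load_description
    from bioimageio.spec import InvalidDescr

    rd = load_description("powerful-chipmunk")
    if isinstance(rd, InvalidDescr):
        raise ValueError(f"invalid resource: {rd.validation_summary}")
    print(rd.name)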

def load_model_description( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, bioimageio.spec._internal.io_basics.Sha256]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')]:
128def load_model_description(
129    source: Union[PermissiveFileSource, ZipFile],
130    /,
131    *,
132    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
133    perform_io_checks: Optional[bool] = None,
134    known_files: Optional[Dict[str, Sha256]] = None,
135    sha256: Optional[Sha256] = None,
136) -> AnyModelDescr:
137    """same as `load_description`, but addtionally ensures that the loaded
138    description is valid and of type 'model'.
139
140    Raises:
141        ValueError: for invalid or non-model resources
142    """
143    rd = load_description(
144        source,
145        format_version=format_version,
146        perform_io_checks=perform_io_checks,
147        known_files=known_files,
148        sha256=sha256,
149    )
150    return ensure_description_is_model(rd)

same as load_description, but additionally ensures that the loaded description is valid and of type 'model'.

Raises:
  • ValueError: for invalid or non-model resources
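
For example (model id as above; a non-model source would raise ValueError):

    from bioimageio.core import load_model_description

    model = load_model_description("powerful-chipmunk")
    print(model.name, len(model.inputs), len(model.outputs))
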
def load_model( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, bioimageio.spec._internal.io_basics.Sha256]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')]:
Alias of load_model_description; its source is identical to the load_model_description listing above.

def load_resource( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, bioimageio.spec._internal.io_basics.Sha256]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[bioimageio.spec.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[bioimageio.spec.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[bioimageio.spec.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[bioimageio.spec.NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[bioimageio.spec.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], bioimageio.spec.InvalidDescr]:
Alias of load_description; its source is identical to the load_description listing above.

def predict_many( *, model: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], bioimageio.spec.model.v0_4.ModelDescr, bioimageio.spec.ModelDescr, PredictionPipeline], inputs: Union[Iterable[Mapping[bioimageio.spec.model.v0_5.TensorId, Union[Tensor, xarray.core.dataarray.DataArray, numpy.ndarray[Any, numpy.dtype[Any]], pathlib.Path]]], Iterable[Union[Tensor, xarray.core.dataarray.DataArray, numpy.ndarray[Any, numpy.dtype[Any]], pathlib.Path]]], sample_id: str = 'sample{i:03}', blocksize_parameter: Union[int, Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, AxisId], int], NoneType] = None, skip_preprocessing: bool = False, skip_postprocessing: bool = False, save_output_path: Union[pathlib.Path, str, NoneType] = None) -> Iterator[Sample]:
131def predict_many(
132    *,
133    model: Union[
134        PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline
135    ],
136    inputs: Union[Iterable[PerMember[TensorSource]], Iterable[TensorSource]],
137    sample_id: str = "sample{i:03}",
138    blocksize_parameter: Optional[
139        Union[
140            v0_5.ParameterizedSize_N,
141            Mapping[Tuple[MemberId, AxisId], v0_5.ParameterizedSize_N],
142        ]
143    ] = None,
144    skip_preprocessing: bool = False,
145    skip_postprocessing: bool = False,
146    save_output_path: Optional[Union[Path, str]] = None,
147) -> Iterator[Sample]:
148    """Run prediction for a multiple sets of inputs with a bioimage.io model
149
150    Args:
151        model: Model to predict with.
152            May be given as RDF source, model description or prediction pipeline.
153        inputs: An iterable of the named input(s) for this model as a dictionary.
154        sample_id: The sample id.
155            note: `{i}` will be formatted as the i-th sample.
156            If `{i}` (or `{i:`) is not present and `inputs` is not a mapping, `{i:03}`
157            is appended.
158        blocksize_parameter: (optional) Tile the input into blocks parametrized by
159            blocksize according to any parametrized axis sizes defined in the model RDF.
160        skip_preprocessing: Flag to skip the model's preprocessing.
161        skip_postprocessing: Flag to skip the model's postprocessing.
162        save_output_path: A path to save the output to.
163            Must contain:
164            - `{sample_id}` to differentiate predicted samples
165            - `{output_id}` (or `{member_id}`) if the model has multiple outputs
166    """
167    if save_output_path is not None and "{sample_id}" not in str(save_output_path):
168        raise ValueError(
169            f"Missing `{{sample_id}}` in save_output_path={save_output_path}"
170            + " to differentiate predicted samples."
171        )
172
173    if isinstance(model, PredictionPipeline):
174        pp = model
175    else:
176        if not isinstance(model, (v0_4.ModelDescr, v0_5.ModelDescr)):
177            loaded = load_description(model)
178            if not isinstance(loaded, (v0_4.ModelDescr, v0_5.ModelDescr)):
179                raise ValueError(f"expected model description, but got {loaded}")
180            model = loaded
181
182        pp = create_prediction_pipeline(model)
183
184    if not isinstance(inputs, collections.abc.Mapping):
185        if "{i}" not in sample_id and "{i:" not in sample_id:
186            sample_id += "{i:03}"
187
188        total = len(inputs) if isinstance(inputs, collections.abc.Sized) else None
189
190        for i, ipts in tqdm(enumerate(inputs), total=total):
191            yield predict(
192                model=pp,
193                inputs=ipts,
194                sample_id=sample_id.format(i=i),
195                blocksize_parameter=blocksize_parameter,
196                skip_preprocessing=skip_preprocessing,
197                skip_postprocessing=skip_postprocessing,
198                save_output_path=save_output_path,
199            )

Run prediction for multiple sets of inputs with a bioimage.io model

Arguments:
  • model: Model to predict with. May be given as RDF source, model description or prediction pipeline.
  • inputs: An iterable of the named input(s) for this model as a dictionary.
  • sample_id: The sample id. note: {i} will be formatted as the i-th sample. If {i} (or {i:) is not present and inputs is not a mapping, {i:03} is appended.
  • blocksize_parameter: (optional) Tile the input into blocks parametrized by blocksize according to any parametrized axis sizes defined in the model RDF.
  • skip_preprocessing: Flag to skip the model's preprocessing.
  • skip_postprocessing: Flag to skip the model's postprocessing.
  • save_output_path: A path to save the output to. Must contain:
    • {sample_id} to differentiate predicted samples
    • {output_id} (or {member_id}) if the model has multiple outputs
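
A minimal sketch (file names and the model id are assumptions; predict_many yields lazily, so the results must be iterated):

    from pathlib import Path
    from bioimageio.core import predict_many

    outputs = predict_many(
        model="powerful-chipmunk",
        inputs=[Path("im0.tif"), Path("im1.tif")],  # one tensor source per sample
        save_output_path="prediction_{sample_id}.tif",  # must contain {sample_id}
    )
    for sample in outputs:  # one output Sample per input set
        print(sample.id)
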
def predict( *, model: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, Annotated[pydantic_core._pydantic_core.Url, UrlConstraints(max_length=2083, allowed_schemes=['http', 'https'], host_required=None, default_host=None, default_port=None, default_path=None)], bioimageio.spec.model.v0_4.ModelDescr, bioimageio.spec.ModelDescr, PredictionPipeline], inputs: Union[Sample, Mapping[bioimageio.spec.model.v0_5.TensorId, Union[Tensor, xarray.core.dataarray.DataArray, numpy.ndarray[Any, numpy.dtype[Any]], pathlib.Path]], Tensor, xarray.core.dataarray.DataArray, numpy.ndarray[Any, numpy.dtype[Any]], pathlib.Path], sample_id: Hashable = 'sample', blocksize_parameter: Union[int, Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, AxisId], int], NoneType] = None, input_block_shape: Optional[Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[AxisId, int]]] = None, skip_preprocessing: bool = False, skip_postprocessing: bool = False, save_output_path: Union[pathlib.Path, str, NoneType] = None) -> Sample:
 29def predict(
 30    *,
 31    model: Union[
 32        PermissiveFileSource, v0_4.ModelDescr, v0_5.ModelDescr, PredictionPipeline
 33    ],
 34    inputs: Union[Sample, PerMember[TensorSource], TensorSource],
 35    sample_id: Hashable = "sample",
 36    blocksize_parameter: Optional[BlocksizeParameter] = None,
 37    input_block_shape: Optional[Mapping[MemberId, Mapping[AxisId, int]]] = None,
 38    skip_preprocessing: bool = False,
 39    skip_postprocessing: bool = False,
 40    save_output_path: Optional[Union[Path, str]] = None,
 41) -> Sample:
 42    """Run prediction for a single set of input(s) with a bioimage.io model
 43
 44    Args:
 45        model: Model to predict with.
 46            May be given as RDF source, model description or prediction pipeline.
 47        inputs: the input sample or the named input(s) for this model as a dictionary
 48        sample_id: the sample id.
 49            The **sample_id** is used to format **save_output_path**
 50            and to distinguish sample specific log messages.
 51        blocksize_parameter: (optional) Tile the input into blocks parametrized by
 52            **blocksize_parameter** according to any parametrized axis sizes defined
 53            by the **model**.
 54            See `bioimageio.spec.model.v0_5.ParameterizedSize` for details.
 55            Note: For a predetermined, fixed block shape use **input_block_shape**.
 56        input_block_shape: (optional) Tile the input sample tensors into blocks.
 57            Note: Use **blocksize_parameter** for a parameterized block shape to
 58                run prediction independent of the exact block shape.
 59        skip_preprocessing: Flag to skip the model's preprocessing.
 60        skip_postprocessing: Flag to skip the model's postprocessing.
 61        save_output_path: A path to save the output to.
 62            Must contain:
 63            - `{output_id}` (or `{member_id}`) if the model has multiple output tensors
 64            May contain:
 65            - `{sample_id}` to avoid overwriting recurrent calls
 66    """
 67    if isinstance(model, PredictionPipeline):
 68        pp = model
 69        model = pp.model_description
 70    else:
 71        if not isinstance(model, (v0_4.ModelDescr, v0_5.ModelDescr)):
 72            loaded = load_description(model)
 73            if not isinstance(loaded, (v0_4.ModelDescr, v0_5.ModelDescr)):
 74                raise ValueError(f"expected model description, but got {loaded}")
 75            model = loaded
 76
 77        pp = create_prediction_pipeline(model)
 78
 79    if save_output_path is not None:
 80        if (
 81            "{output_id}" not in str(save_output_path)
 82            and "{member_id}" not in str(save_output_path)
 83            and len(model.outputs) > 1
 84        ):
 85            raise ValueError(
 86                f"Missing `{{output_id}}` in save_output_path={save_output_path} to "
 87                + "distinguish model outputs "
 88                + str([get_member_id(d) for d in model.outputs])
 89            )
 90
 91    if isinstance(inputs, Sample):
 92        sample = inputs
 93    else:
 94        sample = create_sample_for_model(
 95            pp.model_description, inputs=inputs, sample_id=sample_id
 96        )
 97
 98    if input_block_shape is not None:
 99        if blocksize_parameter is not None:
100            logger.warning(
101                "ignoring blocksize_parameter={} in favor of input_block_shape={}",
102                blocksize_parameter,
103                input_block_shape,
104            )
105
106        output = pp.predict_sample_with_fixed_blocking(
107            sample,
108            input_block_shape=input_block_shape,
109            skip_preprocessing=skip_preprocessing,
110            skip_postprocessing=skip_postprocessing,
111        )
112    elif blocksize_parameter is not None:
113        output = pp.predict_sample_with_blocking(
114            sample,
115            skip_preprocessing=skip_preprocessing,
116            skip_postprocessing=skip_postprocessing,
117            ns=blocksize_parameter,
118        )
119    else:
120        output = pp.predict_sample_without_blocking(
121            sample,
122            skip_preprocessing=skip_preprocessing,
123            skip_postprocessing=skip_postprocessing,
124        )
125    if save_output_path:
126        save_sample(save_output_path, output)
127
128    return output

Run prediction for a single set of input(s) with a bioimage.io model

Arguments:
  • model: Model to predict with. May be given as RDF source, model description or prediction pipeline.
  • inputs: the input sample or the named input(s) for this model as a dictionary
  • sample_id: the sample id. The sample_id is used to format save_output_path and to distinguish sample specific log messages.
  • blocksize_parameter: (optional) Tile the input into blocks parametrized by blocksize_parameter according to any parametrized axis sizes defined by the model. See bioimageio.spec.model.v0_5.ParameterizedSize for details. Note: For a predetermined, fixed block shape use input_block_shape.
  • input_block_shape: (optional) Tile the input sample tensors into blocks. Note: Use blocksize_parameter for a parameterized block shape to run prediction independent of the exact block shape.
  • skip_preprocessing: Flag to skip the model's preprocessing.
  • skip_postprocessing: Flag to skip the model's postprocessing.
  • save_output_path: A path to save the output to. Must contain:
    • {output_id} (or {member_id}) if the model has multiple output tensors
    May contain:
    • {sample_id} to avoid overwriting recurrent calls
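
A minimal sketch (model id, input shape, and dtype are assumptions for illustration; a single array is accepted for single-input models):

    import numpy as np
    from bioimageio.core import predict

    sample = predict(
        model="powerful-chipmunk",
        inputs=np.zeros((1, 1, 256, 256), dtype="float32"),
    )
    print(list(sample.members))  # output tensor ids
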
class PredictionPipeline:
 51class PredictionPipeline:
 52    """
 53    Represents model computation including preprocessing and postprocessing
 54    Note: Ideally use the `PredictionPipeline` in a with statement
 55        (as a context manager).
 56    """
 57
 58    def __init__(
 59        self,
 60        *,
 61        name: str,
 62        model_description: AnyModelDescr,
 63        preprocessing: List[Processing],
 64        postprocessing: List[Processing],
 65        model_adapter: ModelAdapter,
 66        default_ns: Optional[BlocksizeParameter] = None,
 67        default_blocksize_parameter: BlocksizeParameter = 10,
 68        default_batch_size: int = 1,
 69    ) -> None:
 70        """Use `create_prediction_pipeline` to create a `PredictionPipeline`"""
 71        super().__init__()
 72        default_blocksize_parameter = default_ns or default_blocksize_parameter
 73        if default_ns is not None:
 74            warnings.warn(
 75                "Argument `default_ns` is deprecated in favor of"
 76                + " `default_blocksize_parameter` and will be removed soon."
 77            )
 78        del default_ns
 79
 80        if model_description.run_mode:
 81            warnings.warn(
 82                f"Not yet implemented inference for run mode '{model_description.run_mode.name}'"
 83            )
 84
 85        self.name = name
 86        self._preprocessing = preprocessing
 87        self._postprocessing = postprocessing
 88
 89        self.model_description = model_description
 90        if isinstance(model_description, v0_4.ModelDescr):
 91            self._default_input_halo: PerMember[PerAxis[Halo]] = {}
 92            self._block_transform = None
 93        else:
 94            default_output_halo = {
 95                t.id: {
 96                    a.id: Halo(a.halo, a.halo)
 97                    for a in t.axes
 98                    if isinstance(a, v0_5.WithHalo)
 99                }
100                for t in model_description.outputs
101            }
102            self._default_input_halo = get_input_halo(
103                model_description, default_output_halo
104            )
105            self._block_transform = get_block_transform(model_description)
106
107        self._default_blocksize_parameter = default_blocksize_parameter
108        self._default_batch_size = default_batch_size
109
110        self._input_ids = get_member_ids(model_description.inputs)
111        self._output_ids = get_member_ids(model_description.outputs)
112
113        self._adapter: ModelAdapter = model_adapter
114
115    def __enter__(self):
116        self.load()
117        return self
118
119    def __exit__(self, exc_type, exc_val, exc_tb):  # type: ignore
120        self.unload()
121        return False
122
123    def predict_sample_block(
124        self,
125        sample_block: SampleBlockWithOrigin,
126        skip_preprocessing: bool = False,
127        skip_postprocessing: bool = False,
128    ) -> SampleBlock:
129        if isinstance(self.model_description, v0_4.ModelDescr):
130            raise NotImplementedError(
131                f"predict_sample_block not implemented for model {self.model_description.format_version}"
132            )
133        else:
134            assert self._block_transform is not None
135
136        if not skip_preprocessing:
137            self.apply_preprocessing(sample_block)
138
139        output_meta = sample_block.get_transformed_meta(self._block_transform)
140        local_output = self._adapter.forward(sample_block)
141
142        output = output_meta.with_data(local_output.members, stat=local_output.stat)
143        if not skip_postprocessing:
144            self.apply_postprocessing(output)
145
146        return output
147
148    def predict_sample_without_blocking(
149        self,
150        sample: Sample,
151        skip_preprocessing: bool = False,
152        skip_postprocessing: bool = False,
153    ) -> Sample:
154        """predict a sample.
155        The sample's tensor shapes have to match the model's input tensor description.
156        If that is not the case, consider `predict_sample_with_blocking`"""
157
158        if not skip_preprocessing:
159            self.apply_preprocessing(sample)
160
161        output = self._adapter.forward(sample)
162        if not skip_postprocessing:
163            self.apply_postprocessing(output)
164
165        return output
166
167    def get_output_sample_id(self, input_sample_id: SampleId):
168        warnings.warn(
169            "`PredictionPipeline.get_output_sample_id()` is deprecated and will be"
170            + " removed soon. Output sample id is equal to input sample id, hence this"
171            + " function is not needed."
172        )
173        return input_sample_id
174
175    def predict_sample_with_fixed_blocking(
176        self,
177        sample: Sample,
178        input_block_shape: Mapping[MemberId, Mapping[AxisId, int]],
179        *,
180        skip_preprocessing: bool = False,
181        skip_postprocessing: bool = False,
182    ) -> Sample:
183        if not skip_preprocessing:
184            self.apply_preprocessing(sample)
185
186        n_blocks, input_blocks = sample.split_into_blocks(
187            input_block_shape,
188            halo=self._default_input_halo,
189            pad_mode="reflect",
190        )
191        input_blocks = list(input_blocks)
192        predicted_blocks: List[SampleBlock] = []
193        logger.info(
194            "split sample shape {} into {} blocks of {}.",
195            {k: dict(v) for k, v in sample.shape.items()},
196            n_blocks,
197            {k: dict(v) for k, v in input_block_shape.items()},
198        )
199        for b in tqdm(
200            input_blocks,
201            desc=f"predict {sample.id or ''} with {self.model_description.id or self.model_description.name}",
202            unit="block",
203            unit_divisor=1,
204            total=n_blocks,
205        ):
206            predicted_blocks.append(
207                self.predict_sample_block(
208                    b, skip_preprocessing=True, skip_postprocessing=True
209                )
210            )
211
212        predicted_sample = Sample.from_blocks(predicted_blocks)
213        if not skip_postprocessing:
214            self.apply_postprocessing(predicted_sample)
215
216        return predicted_sample
217
218    def predict_sample_with_blocking(
219        self,
220        sample: Sample,
221        skip_preprocessing: bool = False,
222        skip_postprocessing: bool = False,
223        ns: Optional[
224            Union[
225                v0_5.ParameterizedSize_N,
226                Mapping[Tuple[MemberId, AxisId], v0_5.ParameterizedSize_N],
227            ]
228        ] = None,
229        batch_size: Optional[int] = None,
230    ) -> Sample:
231        """predict a sample by splitting it into blocks according to the model and the `ns` parameter"""
232
233        if isinstance(self.model_description, v0_4.ModelDescr):
234            raise NotImplementedError(
235                "`predict_sample_with_blocking` not implemented for v0_4.ModelDescr"
236                + f" {self.model_description.name}."
237                + " Consider using `predict_sample_with_fixed_blocking`"
238            )
239
240        ns = ns or self._default_blocksize_parameter
241        if isinstance(ns, int):
242            ns = {
243                (ipt.id, a.id): ns
244                for ipt in self.model_description.inputs
245                for a in ipt.axes
246                if isinstance(a.size, v0_5.ParameterizedSize)
247            }
248        input_block_shape = self.model_description.get_tensor_sizes(
249            ns, batch_size or self._default_batch_size
250        ).inputs
251
252        return self.predict_sample_with_fixed_blocking(
253            sample,
254            input_block_shape=input_block_shape,
255            skip_preprocessing=skip_preprocessing,
256            skip_postprocessing=skip_postprocessing,
257        )
258
259    # def predict(
260    #     self,
261    #     inputs: Predict_IO,
262    #     skip_preprocessing: bool = False,
263    #     skip_postprocessing: bool = False,
264    # ) -> Predict_IO:
265    #     """Run model prediction **including** pre/postprocessing."""
266
267    #     if isinstance(inputs, Sample):
268    #         return self.predict_sample_with_blocking(
269    #             inputs,
270    #             skip_preprocessing=skip_preprocessing,
271    #             skip_postprocessing=skip_postprocessing,
272    #         )
273    #     elif isinstance(inputs, collections.abc.Iterable):
274    #         return (
275    #             self.predict(
276    #                 ipt,
277    #                 skip_preprocessing=skip_preprocessing,
278    #                 skip_postprocessing=skip_postprocessing,
279    #             )
280    #             for ipt in inputs
281    #         )
282    #     else:
283    #         assert_never(inputs)
284
285    def apply_preprocessing(self, sample: Union[Sample, SampleBlockWithOrigin]) -> None:
286        """apply preprocessing in-place, also updates sample stats"""
287        for op in self._preprocessing:
288            op(sample)
289
290    def apply_postprocessing(
291        self, sample: Union[Sample, SampleBlock, SampleBlockWithOrigin]
292    ) -> None:
 293        """apply postprocessing in-place, also updates sample stats"""
294        for op in self._postprocessing:
295            if isinstance(sample, (Sample, SampleBlockWithOrigin)):
296                op(sample)
297            elif not isinstance(op, BlockedOperator):
298                raise NotImplementedError(
299                    "block wise update of output statistics not yet implemented"
300                )
301            else:
302                op(sample)
303
304    def load(self):
305        """
306        optional step: load model onto devices before calling forward if not using it as context manager
307        """
308        pass
309
310    def unload(self):
311        """
312        free any device memory in use
313        """
314        self._adapter.unload()

Represents model computation including preprocessing and postprocessing. Note: ideally use the PredictionPipeline in a with statement (as a context manager).
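
A short sketch of the recommended context-manager usage; the model id and the prepared sample are illustrative assumptions:

    from bioimageio.core import create_prediction_pipeline, load_description

    model = load_description("affable-shark")  # hypothetical model id
    with create_prediction_pipeline(model) as pp:
        # `sample` is a bioimageio.core.Sample prepared to match the model's inputs
        output = pp.predict_sample_without_blocking(sample)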

PredictionPipeline( *, name: str, model_description: AnyModelDescr, preprocessing: List[Processing], postprocessing: List[Processing], model_adapter: ModelAdapter, default_ns: Optional[BlocksizeParameter] = None, default_blocksize_parameter: BlocksizeParameter = 10, default_batch_size: int = 1)
 58    def __init__(
 59        self,
 60        *,
 61        name: str,
 62        model_description: AnyModelDescr,
 63        preprocessing: List[Processing],
 64        postprocessing: List[Processing],
 65        model_adapter: ModelAdapter,
 66        default_ns: Optional[BlocksizeParameter] = None,
 67        default_blocksize_parameter: BlocksizeParameter = 10,
 68        default_batch_size: int = 1,
 69    ) -> None:
 70        """Use `create_prediction_pipeline` to create a `PredictionPipeline`"""
 71        super().__init__()
 72        default_blocksize_parameter = default_ns or default_blocksize_parameter
 73        if default_ns is not None:
 74            warnings.warn(
 75                "Argument `default_ns` is deprecated in favor of"
 76                + " `default_blocksize_parameter` and will be removed soon."
 77            )
 78        del default_ns
 79
 80        if model_description.run_mode:
 81            warnings.warn(
 82                f"Not yet implemented inference for run mode '{model_description.run_mode.name}'"
 83            )
 84
 85        self.name = name
 86        self._preprocessing = preprocessing
 87        self._postprocessing = postprocessing
 88
 89        self.model_description = model_description
 90        if isinstance(model_description, v0_4.ModelDescr):
 91            self._default_input_halo: PerMember[PerAxis[Halo]] = {}
 92            self._block_transform = None
 93        else:
 94            default_output_halo = {
 95                t.id: {
 96                    a.id: Halo(a.halo, a.halo)
 97                    for a in t.axes
 98                    if isinstance(a, v0_5.WithHalo)
 99                }
100                for t in model_description.outputs
101            }
102            self._default_input_halo = get_input_halo(
103                model_description, default_output_halo
104            )
105            self._block_transform = get_block_transform(model_description)
106
107        self._default_blocksize_parameter = default_blocksize_parameter
108        self._default_batch_size = default_batch_size
109
110        self._input_ids = get_member_ids(model_description.inputs)
111        self._output_ids = get_member_ids(model_description.outputs)
112
113        self._adapter: ModelAdapter = model_adapter
name
model_description
def predict_sample_block( self, sample_block: bioimageio.core.sample.SampleBlockWithOrigin, skip_preprocessing: bool = False, skip_postprocessing: bool = False) -> bioimageio.core.sample.SampleBlock:
123    def predict_sample_block(
124        self,
125        sample_block: SampleBlockWithOrigin,
126        skip_preprocessing: bool = False,
127        skip_postprocessing: bool = False,
128    ) -> SampleBlock:
129        if isinstance(self.model_description, v0_4.ModelDescr):
130            raise NotImplementedError(
131                f"predict_sample_block not implemented for model {self.model_description.format_version}"
132            )
133        else:
134            assert self._block_transform is not None
135
136        if not skip_preprocessing:
137            self.apply_preprocessing(sample_block)
138
139        output_meta = sample_block.get_transformed_meta(self._block_transform)
140        local_output = self._adapter.forward(sample_block)
141
142        output = output_meta.with_data(local_output.members, stat=local_output.stat)
143        if not skip_postprocessing:
144            self.apply_postprocessing(output)
145
146        return output
def predict_sample_without_blocking( self, sample: Sample, skip_preprocessing: bool = False, skip_postprocessing: bool = False) -> Sample:
148    def predict_sample_without_blocking(
149        self,
150        sample: Sample,
151        skip_preprocessing: bool = False,
152        skip_postprocessing: bool = False,
153    ) -> Sample:
154        """predict a sample.
155        The sample's tensor shapes have to match the model's input tensor description.
156        If that is not the case, consider `predict_sample_with_blocking`"""
157
158        if not skip_preprocessing:
159            self.apply_preprocessing(sample)
160
161        output = self._adapter.forward(sample)
162        if not skip_postprocessing:
163            self.apply_postprocessing(output)
164
165        return output

predict a sample. The sample's tensor shapes have to match the model's input tensor description. If that is not the case, consider predict_sample_with_blocking

def get_output_sample_id(self, input_sample_id: Hashable):
167    def get_output_sample_id(self, input_sample_id: SampleId):
168        warnings.warn(
169            "`PredictionPipeline.get_output_sample_id()` is deprecated and will be"
170            + " removed soon. Output sample id is equal to input sample id, hence this"
171            + " function is not needed."
172        )
173        return input_sample_id
def predict_sample_with_fixed_blocking( self, sample: Sample, input_block_shape: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[AxisId, int]], *, skip_preprocessing: bool = False, skip_postprocessing: bool = False) -> Sample:
175    def predict_sample_with_fixed_blocking(
176        self,
177        sample: Sample,
178        input_block_shape: Mapping[MemberId, Mapping[AxisId, int]],
179        *,
180        skip_preprocessing: bool = False,
181        skip_postprocessing: bool = False,
182    ) -> Sample:
183        if not skip_preprocessing:
184            self.apply_preprocessing(sample)
185
186        n_blocks, input_blocks = sample.split_into_blocks(
187            input_block_shape,
188            halo=self._default_input_halo,
189            pad_mode="reflect",
190        )
191        input_blocks = list(input_blocks)
192        predicted_blocks: List[SampleBlock] = []
193        logger.info(
194            "split sample shape {} into {} blocks of {}.",
195            {k: dict(v) for k, v in sample.shape.items()},
196            n_blocks,
197            {k: dict(v) for k, v in input_block_shape.items()},
198        )
199        for b in tqdm(
200            input_blocks,
201            desc=f"predict {sample.id or ''} with {self.model_description.id or self.model_description.name}",
202            unit="block",
203            unit_divisor=1,
204            total=n_blocks,
205        ):
206            predicted_blocks.append(
207                self.predict_sample_block(
208                    b, skip_preprocessing=True, skip_postprocessing=True
209                )
210            )
211
212        predicted_sample = Sample.from_blocks(predicted_blocks)
213        if not skip_postprocessing:
214            self.apply_postprocessing(predicted_sample)
215
216        return predicted_sample
def predict_sample_with_blocking( self, sample: Sample, skip_preprocessing: bool = False, skip_postprocessing: bool = False, ns: Union[int, Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, AxisId], int], NoneType] = None, batch_size: Optional[int] = None) -> Sample:
218    def predict_sample_with_blocking(
219        self,
220        sample: Sample,
221        skip_preprocessing: bool = False,
222        skip_postprocessing: bool = False,
223        ns: Optional[
224            Union[
225                v0_5.ParameterizedSize_N,
226                Mapping[Tuple[MemberId, AxisId], v0_5.ParameterizedSize_N],
227            ]
228        ] = None,
229        batch_size: Optional[int] = None,
230    ) -> Sample:
231        """predict a sample by splitting it into blocks according to the model and the `ns` parameter"""
232
233        if isinstance(self.model_description, v0_4.ModelDescr):
234            raise NotImplementedError(
235                "`predict_sample_with_blocking` not implemented for v0_4.ModelDescr"
236                + f" {self.model_description.name}."
237                + " Consider using `predict_sample_with_fixed_blocking`"
238            )
239
240        ns = ns or self._default_blocksize_parameter
241        if isinstance(ns, int):
242            ns = {
243                (ipt.id, a.id): ns
244                for ipt in self.model_description.inputs
245                for a in ipt.axes
246                if isinstance(a.size, v0_5.ParameterizedSize)
247            }
248        input_block_shape = self.model_description.get_tensor_sizes(
249            ns, batch_size or self._default_batch_size
250        ).inputs
251
252        return self.predict_sample_with_fixed_blocking(
253            sample,
254            input_block_shape=input_block_shape,
255            skip_preprocessing=skip_preprocessing,
256            skip_postprocessing=skip_postprocessing,
257        )

predict a sample by splitting it into blocks according to the model and the ns parameter
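
A sketch of both accepted forms of ns (a single int, or a mapping per tensor and axis); pp and sample continue the sketch above, and the ids "input0", "x", "y" are illustrative:

    # single size parameter n applied to every parameterized axis
    out = pp.predict_sample_with_blocking(sample, ns=10, batch_size=1)

    # or per (tensor id, axis id)
    from bioimageio.spec.model.v0_5 import AxisId, TensorId
    out = pp.predict_sample_with_blocking(
        sample,
        ns={
            (TensorId("input0"), AxisId("x")): 10,
            (TensorId("input0"), AxisId("y")): 10,
        },
    )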

def apply_preprocessing( self, sample: Union[Sample, bioimageio.core.sample.SampleBlockWithOrigin]) -> None:
285    def apply_preprocessing(self, sample: Union[Sample, SampleBlockWithOrigin]) -> None:
286        """apply preprocessing in-place, also updates sample stats"""
287        for op in self._preprocessing:
288            op(sample)

apply preprocessing in-place, also updates sample stats

def apply_postprocessing( self, sample: Union[Sample, bioimageio.core.sample.SampleBlock, bioimageio.core.sample.SampleBlockWithOrigin]) -> None:
290    def apply_postprocessing(
291        self, sample: Union[Sample, SampleBlock, SampleBlockWithOrigin]
292    ) -> None:
 293        """apply postprocessing in-place, also updates sample stats"""
294        for op in self._postprocessing:
295            if isinstance(sample, (Sample, SampleBlockWithOrigin)):
296                op(sample)
297            elif not isinstance(op, BlockedOperator):
298                raise NotImplementedError(
299                    "block wise update of output statistics not yet implemented"
300                )
301            else:
302                op(sample)

apply postprocessing in-place, also updates sample stats

def load(self):
304    def load(self):
305        """
306        optional step: load model onto devices before calling forward if not using it as context manager
307        """
308        pass

optional step: load model onto devices before calling forward if not using it as context manager

def unload(self):
310    def unload(self):
311        """
312        free any device memory in use
313        """
314        self._adapter.unload()

free any device memory in use

@dataclass
class Sample:
 46@dataclass
 47class Sample:
 48    """A dataset sample.
 49
 50    A `Sample` has `members`, which allows to combine multiple tensors into a single
 51    sample.
 52    For example a `Sample` from a dataset with masked images may contain a
 53    `MemberId("raw")` and `MemberId("mask")` image.
 54    """
 55
 56    members: Dict[MemberId, Tensor]
 57    """The sample's tensors"""
 58
 59    stat: Stat
 60    """Sample and dataset statistics"""
 61
 62    id: SampleId
 63    """Identifies the `Sample` within the dataset -- typically a number or a string."""
 64
 65    @property
 66    def shape(self) -> PerMember[PerAxis[int]]:
 67        return {tid: t.sizes for tid, t in self.members.items()}
 68
 69    def as_arrays(self) -> Dict[str, NDArray[Any]]:
 70        """Return sample as dictionary of arrays."""
 71        return {str(m): t.data.to_numpy() for m, t in self.members.items()}
 72
 73    def split_into_blocks(
 74        self,
 75        block_shapes: PerMember[PerAxis[int]],
 76        halo: PerMember[PerAxis[HaloLike]],
 77        pad_mode: PadMode,
 78        broadcast: bool = False,
 79    ) -> Tuple[TotalNumberOfBlocks, Iterable[SampleBlockWithOrigin]]:
 80        assert not (
 81            missing := [m for m in block_shapes if m not in self.members]
 82        ), f"`block_shapes` specified for unknown members: {missing}"
 83        assert not (
 84            missing := [m for m in halo if m not in block_shapes]
 85        ), f"`halo` specified for members without `block_shape`: {missing}"
 86
 87        n_blocks, blocks = split_multiple_shapes_into_blocks(
 88            shapes=self.shape,
 89            block_shapes=block_shapes,
 90            halo=halo,
 91            broadcast=broadcast,
 92        )
 93        return n_blocks, sample_block_generator(blocks, origin=self, pad_mode=pad_mode)
 94
 95    def as_single_block(self, halo: Optional[PerMember[PerAxis[Halo]]] = None):
 96        if halo is None:
 97            halo = {}
 98        return SampleBlockWithOrigin(
 99            sample_shape=self.shape,
100            sample_id=self.id,
101            blocks={
102                m: Block(
103                    sample_shape=self.shape[m],
104                    data=data,
105                    inner_slice={
106                        a: SliceInfo(0, s) for a, s in data.tagged_shape.items()
107                    },
108                    halo=halo.get(m, {}),
109                    block_index=0,
110                    blocks_in_sample=1,
111                )
112                for m, data in self.members.items()
113            },
114            stat=self.stat,
115            origin=self,
116            block_index=0,
117            blocks_in_sample=1,
118        )
119
120    @classmethod
121    def from_blocks(
122        cls,
123        sample_blocks: Iterable[SampleBlock],
124        *,
125        fill_value: float = float("nan"),
126    ) -> Self:
127        members: PerMember[Tensor] = {}
128        stat: Stat = {}
129        sample_id = None
130        for sample_block in sample_blocks:
131            assert sample_id is None or sample_id == sample_block.sample_id
132            sample_id = sample_block.sample_id
133            stat = sample_block.stat
134            for m, block in sample_block.blocks.items():
135                if m not in members:
136                    if -1 in block.sample_shape.values():
137                        raise NotImplementedError(
138                            "merging blocks with data dependent axis not yet implemented"
139                        )
140
141                    members[m] = Tensor(
142                        np.full(
143                            tuple(block.sample_shape[a] for a in block.data.dims),
144                            fill_value,
145                            dtype=block.data.dtype,
146                        ),
147                        dims=block.data.dims,
148                    )
149
150                members[m][block.inner_slice] = block.inner_data
151
152        return cls(members=members, stat=stat, id=sample_id)

A dataset sample.

A Sample has members, which allows to combine multiple tensors into a single sample. For example a Sample from a dataset with masked images may contain a MemberId("raw") and MemberId("mask") image.
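
A construction sketch; the member id "raw", the axis ids, and the shape are illustrative, and the import paths assume the public bioimageio.core namespace:

    import numpy as np
    from bioimageio.core import Sample, Tensor
    from bioimageio.spec.model.v0_5 import TensorId  # MemberId is an alias of TensorId

    raw = Tensor.from_numpy(
        np.random.rand(256, 256).astype("float32"),
        dims=("y", "x"),  # axis ids for a 2d image
    )
    sample = Sample(members={TensorId("raw"): raw}, stat={}, id="sample-0")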

Sample( members: Dict[MemberId, Tensor], stat: Stat, id: SampleId)

members: Dict[MemberId, Tensor]

The sample's tensors

stat: Stat

Sample and dataset statistics

id: Hashable

Identifies the Sample within the dataset -- typically a number or a string.

shape: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[AxisId, int]]
65    @property
66    def shape(self) -> PerMember[PerAxis[int]]:
67        return {tid: t.sizes for tid, t in self.members.items()}
def as_arrays(self) -> Dict[str, numpy.ndarray[Any, numpy.dtype[Any]]]:
69    def as_arrays(self) -> Dict[str, NDArray[Any]]:
70        """Return sample as dictionary of arrays."""
71        return {str(m): t.data.to_numpy() for m, t in self.members.items()}

Return sample as dictionary of arrays.

def split_into_blocks( self, block_shapes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[AxisId, int]], halo: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[AxisId, Union[int, Tuple[int, int], bioimageio.core.common.Halo]]], pad_mode: Literal['edge', 'reflect', 'symmetric'], broadcast: bool = False) -> Tuple[int, Iterable[bioimageio.core.sample.SampleBlockWithOrigin]]:
73    def split_into_blocks(
74        self,
75        block_shapes: PerMember[PerAxis[int]],
76        halo: PerMember[PerAxis[HaloLike]],
77        pad_mode: PadMode,
78        broadcast: bool = False,
79    ) -> Tuple[TotalNumberOfBlocks, Iterable[SampleBlockWithOrigin]]:
80        assert not (
81            missing := [m for m in block_shapes if m not in self.members]
82        ), f"`block_shapes` specified for unknown members: {missing}"
83        assert not (
84            missing := [m for m in halo if m not in block_shapes]
85        ), f"`halo` specified for members without `block_shape`: {missing}"
86
87        n_blocks, blocks = split_multiple_shapes_into_blocks(
88            shapes=self.shape,
89            block_shapes=block_shapes,
90            halo=halo,
91            broadcast=broadcast,
92        )
93        return n_blocks, sample_block_generator(blocks, origin=self, pad_mode=pad_mode)
def as_single_block( self, halo: Optional[Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[AxisId, bioimageio.core.common.Halo]]] = None):
 95    def as_single_block(self, halo: Optional[PerMember[PerAxis[Halo]]] = None):
 96        if halo is None:
 97            halo = {}
 98        return SampleBlockWithOrigin(
 99            sample_shape=self.shape,
100            sample_id=self.id,
101            blocks={
102                m: Block(
103                    sample_shape=self.shape[m],
104                    data=data,
105                    inner_slice={
106                        a: SliceInfo(0, s) for a, s in data.tagged_shape.items()
107                    },
108                    halo=halo.get(m, {}),
109                    block_index=0,
110                    blocks_in_sample=1,
111                )
112                for m, data in self.members.items()
113            },
114            stat=self.stat,
115            origin=self,
116            block_index=0,
117            blocks_in_sample=1,
118        )
@classmethod
def from_blocks( cls, sample_blocks: Iterable[bioimageio.core.sample.SampleBlock], *, fill_value: float = nan) -> Self:
120    @classmethod
121    def from_blocks(
122        cls,
123        sample_blocks: Iterable[SampleBlock],
124        *,
125        fill_value: float = float("nan"),
126    ) -> Self:
127        members: PerMember[Tensor] = {}
128        stat: Stat = {}
129        sample_id = None
130        for sample_block in sample_blocks:
131            assert sample_id is None or sample_id == sample_block.sample_id
132            sample_id = sample_block.sample_id
133            stat = sample_block.stat
134            for m, block in sample_block.blocks.items():
135                if m not in members:
136                    if -1 in block.sample_shape.values():
137                        raise NotImplementedError(
138                            "merging blocks with data dependent axis not yet implemented"
139                        )
140
141                    members[m] = Tensor(
142                        np.full(
143                            tuple(block.sample_shape[a] for a in block.data.dims),
144                            fill_value,
145                            dtype=block.data.dtype,
146                        ),
147                        dims=block.data.dims,
148                    )
149
150                members[m][block.inner_slice] = block.inner_data
151
152        return cls(members=members, stat=stat, id=sample_id)
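
A round-trip sketch continuing the construction example above: split the sample into blocks without a halo and reassemble it; block sizes and ids are illustrative:

    from bioimageio.spec.model.v0_5 import AxisId, TensorId

    n_blocks, blocks = sample.split_into_blocks(
        block_shapes={TensorId("raw"): {AxisId("y"): 128, AxisId("x"): 128}},
        halo={},            # no halo in this sketch
        pad_mode="reflect",
    )
    restored = Sample.from_blocks(blocks)  # reassembled sample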
def save_bioimageio_package_as_folder( source: Union[BioimageioYamlSource, ResourceDescr], /, *, output_path: Union[NewPath, DirectoryPath, None] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> DirectoryPath:
120def save_bioimageio_package_as_folder(
121    source: Union[BioimageioYamlSource, ResourceDescr],
122    /,
123    *,
124    output_path: Union[NewPath, DirectoryPath, None] = None,
125    weights_priority_order: Optional[  # model only
126        Sequence[
127            Literal[
128                "keras_hdf5",
129                "onnx",
130                "pytorch_state_dict",
131                "tensorflow_js",
132                "tensorflow_saved_model_bundle",
133                "torchscript",
134            ]
135        ]
136    ] = None,
137) -> DirectoryPath:
138    """Write the content of a bioimage.io resource package to a folder.
139
140    Args:
141        source: bioimageio resource description
142        output_path: file path to write package to
143        weights_priority_order: If given only the first weights format present in the model is included.
144                                If none of the prioritized weights formats is found all are included.
145
146    Returns:
147        directory path to bioimageio package folder
148    """
149    package_content = _prepare_resource_package(
150        source,
151        weights_priority_order=weights_priority_order,
152    )
153    if output_path is None:
154        output_path = Path(mkdtemp())
155    else:
156        output_path = Path(output_path)
157
158    output_path.mkdir(exist_ok=True, parents=True)
159    for name, src in package_content.items():
160        if isinstance(src, collections.abc.Mapping):
161            write_yaml(src, output_path / name)
162        elif isinstance(src, ZipPath):
163            extracted = Path(src.root.extract(src.name, output_path))
164            if extracted.name != src.name:
165                try:
166                    shutil.move(str(extracted), output_path / src.name)
167                except Exception as e:
168                    raise RuntimeError(
169                        f"Failed to rename extracted file '{extracted.name}'"
170                        + f" to '{src.name}'."
171                        + f" (extracted from '{src.name}' in '{src.root.filename}')"
172                    ) from e
173        else:
174            try:
175                shutil.copy(src, output_path / name)
176            except shutil.SameFileError:
177                pass
178
179    return output_path

Write the content of a bioimage.io resource package to a folder.

Arguments:
  • source: bioimageio resource description
  • output_path: file path to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.
Returns:

directory path to bioimageio package folder
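
A usage sketch; the source id and output folder are illustrative:

    from bioimageio.core import save_bioimageio_package_as_folder

    folder = save_bioimageio_package_as_folder("affable-shark", output_path="my-model")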

def save_bioimageio_package( source: Union[BioimageioYamlSource, ResourceDescr], /, *, compression: int = 8, compression_level: int = 1, output_path: Union[NewPath, FilePath, None] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> FilePath:
182def save_bioimageio_package(
183    source: Union[BioimageioYamlSource, ResourceDescr],
184    /,
185    *,
186    compression: int = ZIP_DEFLATED,
187    compression_level: int = 1,
188    output_path: Union[NewPath, FilePath, None] = None,
189    weights_priority_order: Optional[  # model only
190        Sequence[
191            Literal[
192                "keras_hdf5",
193                "onnx",
194                "pytorch_state_dict",
195                "tensorflow_js",
196                "tensorflow_saved_model_bundle",
197                "torchscript",
198            ]
199        ]
200    ] = None,
201) -> FilePath:
202    """Package a bioimageio resource as a zip file.
203
204    Args:
 205        source: bioimageio resource description
206        compression: The numeric constant of compression method.
207        compression_level: Compression level to use when writing files to the archive.
208                           See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
209        output_path: file path to write package to
210        weights_priority_order: If given only the first weights format present in the model is included.
211                                If none of the prioritized weights formats is found all are included.
212
213    Returns:
214        path to zipped bioimageio package
215    """
216    package_content = _prepare_resource_package(
217        source,
218        weights_priority_order=weights_priority_order,
219    )
220    if output_path is None:
221        output_path = Path(
222            NamedTemporaryFile(suffix=".bioimageio.zip", delete=False).name
223        )
224    else:
225        output_path = Path(output_path)
226
227    write_zip(
228        output_path,
229        package_content,
230        compression=compression,
231        compression_level=compression_level,
232    )
233    with get_validation_context().replace(warning_level=ERROR):
234        if isinstance((exported := load_description(output_path)), InvalidDescr):
235            raise ValueError(
236                f"Exported package '{output_path}' is invalid:"
237                + f" {exported.validation_summary}"
238            )
239
240    return output_path

Package a bioimageio resource as a zip file.

Arguments:
  • source: bioimageio resource description
  • compression: The numeric constant of compression method.
  • compression_level: Compression level to use when writing files to the archive. See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
  • output_path: file path to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.
Returns:

path to zipped bioimageio package
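
A usage sketch; the source id and output file name are illustrative:

    from zipfile import ZIP_DEFLATED
    from bioimageio.core import save_bioimageio_package

    zip_path = save_bioimageio_package(
        "affable-shark",
        output_path="affable-shark.bioimageio.zip",
        compression=ZIP_DEFLATED,  # matches the default (8)
        compression_level=9,
    )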

def save_bioimageio_yaml_only( rd: Union[ResourceDescr, BioimageioYamlContent, InvalidDescr], /, file: Union[NewPath, FilePath, TextIO], *, exclude_unset: bool = True, exclude_defaults: bool = False):
199def save_bioimageio_yaml_only(
200    rd: Union[ResourceDescr, BioimageioYamlContent, InvalidDescr],
201    /,
202    file: Union[NewPath, FilePath, TextIO],
203    *,
204    exclude_unset: bool = True,
205    exclude_defaults: bool = False,
206):
207    """write the metadata of a resource description (`rd`) to `file`
208    without writing any of the referenced files in it.
209
210    Args:
211        rd: bioimageio resource description
212        file: file or stream to save to
 213        exclude_unset: Exclude fields that have not explicitly been set.
214        exclude_defaults: Exclude fields that have the default value (even if set explicitly).
215
216    Note: To save a resource description with its associated files as a package,
217    use `save_bioimageio_package` or `save_bioimageio_package_as_folder`.
218    """
219    if isinstance(rd, ResourceDescrBase):
220        content = dump_description(
221            rd, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults
222        )
223    else:
224        content = rd
225
226    write_yaml(cast(YamlValue, content), file)

write the metadata of a resource description (rd) to file without writing any of the referenced files in it.

Arguments:
  • rd: bioimageio resource description
  • file: file or stream to save to
  • exclude_unset: Exclude fields that have not explicitly been set.
  • exclude_defaults: Exclude fields that have the default value (even if set explicitly).

Note: To save a resource description with its associated files as a package, use save_bioimageio_package or save_bioimageio_package_as_folder.
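
A usage sketch; the model id and file name are illustrative:

    from bioimageio.core import load_description, save_bioimageio_yaml_only

    rd = load_description("affable-shark")
    save_bioimageio_yaml_only(rd, "rdf.yaml")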

settings = Settings(cache_path=PosixPath('/home/runner/.cache/bioimageio'), collection_http_pattern='https://hypha.aicell.io/bioimage-io/artifacts/{bioimageio_id}/files/rdf.yaml', id_map='https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/id_map.json', id_map_draft='https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/id_map_draft.json', resolve_draft=True, perform_io_checks=True, log_warnings=True, github_username=None, github_token=None, CI='true', user_agent=None, keras_backend='torch')
Stat = Dict[Union[SampleMean, SampleStd, SampleVar, SampleQuantile, DatasetMean, DatasetStd, DatasetVar, DatasetPercentile], Union[float, Tensor]]
class Tensor(bioimageio.core._magic_tensor_ops.MagicTensorOpsMixin):
 49class Tensor(MagicTensorOpsMixin):
 50    """A wrapper around an xr.DataArray for better integration with bioimageio.spec
 51    and improved type annotations."""
 52
 53    _Compatible = Union["Tensor", xr.DataArray, _ScalarOrArray]
 54
 55    def __init__(
 56        self,
 57        array: NDArray[Any],
 58        dims: Sequence[Union[AxisId, AxisLike]],
 59    ) -> None:
 60        super().__init__()
 61        axes = tuple(
 62            a if isinstance(a, AxisId) else AxisInfo.create(a).id for a in dims
 63        )
 64        self._data = xr.DataArray(array, dims=axes)
 65
 66    def __array__(self, dtype: DTypeLike = None):
 67        return np.asarray(self._data, dtype=dtype)
 68
 69    def __getitem__(
 70        self, key: Union[SliceInfo, slice, int, PerAxis[Union[SliceInfo, slice, int]]]
 71    ) -> Self:
 72        if isinstance(key, SliceInfo):
 73            key = slice(*key)
 74        elif isinstance(key, collections.abc.Mapping):
 75            key = {
 76                a: s if isinstance(s, int) else s if isinstance(s, slice) else slice(*s)
 77                for a, s in key.items()
 78            }
 79        return self.__class__.from_xarray(self._data[key])
 80
 81    def __setitem__(self, key: PerAxis[Union[SliceInfo, slice]], value: Tensor) -> None:
 82        key = {a: s if isinstance(s, slice) else slice(*s) for a, s in key.items()}
 83        self._data[key] = value._data
 84
 85    def __len__(self) -> int:
 86        return len(self.data)
 87
 88    def _iter(self: Any) -> Iterator[Any]:
 89        for n in range(len(self)):
 90            yield self[n]
 91
 92    def __iter__(self: Any) -> Iterator[Any]:
 93        if self.ndim == 0:
 94            raise TypeError("iteration over a 0-d array")
 95        return self._iter()
 96
 97    def _binary_op(
 98        self,
 99        other: _Compatible,
100        f: Callable[[Any, Any], Any],
101        reflexive: bool = False,
102    ) -> Self:
103        data = self._data._binary_op(  # pyright: ignore[reportPrivateUsage]
104            (other._data if isinstance(other, Tensor) else other),
105            f,
106            reflexive,
107        )
108        return self.__class__.from_xarray(data)
109
110    def _inplace_binary_op(
111        self,
112        other: _Compatible,
113        f: Callable[[Any, Any], Any],
114    ) -> Self:
115        _ = self._data._inplace_binary_op(  # pyright: ignore[reportPrivateUsage]
116            (
117                other_d
118                if (other_d := getattr(other, "data")) is not None
119                and isinstance(
120                    other_d,
121                    xr.DataArray,
122                )
123                else other
124            ),
125            f,
126        )
127        return self
128
129    def _unary_op(self, f: Callable[[Any], Any], *args: Any, **kwargs: Any) -> Self:
130        data = self._data._unary_op(  # pyright: ignore[reportPrivateUsage]
131            f, *args, **kwargs
132        )
133        return self.__class__.from_xarray(data)
134
135    @classmethod
136    def from_xarray(cls, data_array: xr.DataArray) -> Self:
137        """create a `Tensor` from an xarray data array
138
 139        note for internal use: this factory method is round-trip safe
140            for any `Tensor`'s  `data` property (an xarray.DataArray).
141        """
142        return cls(
143            array=data_array.data, dims=tuple(AxisId(d) for d in data_array.dims)
144        )
145
146    @classmethod
147    def from_numpy(
148        cls,
149        array: NDArray[Any],
150        *,
151        dims: Optional[Union[AxisLike, Sequence[AxisLike]]],
152    ) -> Tensor:
153        """create a `Tensor` from a numpy array
154
155        Args:
156            array: the nd numpy array
 157            dims: A description of the array's axes,
 158                if None axes are guessed (which might fail and raise a ValueError).
159
160        Raises:
 161            ValueError: if `dims` is None and axes guessing fails.
162        """
163
164        if dims is None:
165            return cls._interprete_array_wo_known_axes(array)
166        elif isinstance(dims, (str, Axis, v0_5.AxisBase)):
167            dims = [dims]
168
169        axis_infos = [AxisInfo.create(a) for a in dims]
170        original_shape = tuple(array.shape)
171
172        successful_view = _get_array_view(array, axis_infos)
173        if successful_view is None:
174            raise ValueError(
175                f"Array shape {original_shape} does not map to axes {dims}"
176            )
177
178        return Tensor(successful_view, dims=tuple(a.id for a in axis_infos))
179
180    @property
181    def data(self):
182        return self._data
183
184    @property
185    def dims(self):  # TODO: rename to `axes`?
186        """Tuple of dimension names associated with this tensor."""
187        return cast(Tuple[AxisId, ...], self._data.dims)
188
189    @property
190    def dtype(self) -> DTypeStr:
191        dt = str(self.data.dtype)  # pyright: ignore[reportUnknownArgumentType]
192        assert dt in get_args(DTypeStr)
193        return dt  # pyright: ignore[reportReturnType]
194
195    @property
196    def ndim(self):
197        """Number of tensor dimensions."""
198        return self._data.ndim
199
200    @property
201    def shape(self):
202        """Tuple of tensor axes lengths"""
203        return self._data.shape
204
205    @property
206    def shape_tuple(self):
207        """Tuple of tensor axes lengths"""
208        return self._data.shape
209
210    @property
211    def size(self):
212        """Number of elements in the tensor.
213
 214        Equal to math.prod(tensor.shape), i.e., the product of the tensor's dimensions.
215        """
216        return self._data.size
217
218    @property
219    def sizes(self):
220        """Ordered, immutable mapping from axis ids to axis lengths."""
221        return cast(Mapping[AxisId, int], self.data.sizes)
222
223    @property
224    def tagged_shape(self):
225        """(alias for `sizes`) Ordered, immutable mapping from axis ids to lengths."""
226        return self.sizes
227
228    def argmax(self) -> Mapping[AxisId, int]:
229        ret = self._data.argmax(...)
230        assert isinstance(ret, dict)
231        return {cast(AxisId, k): cast(int, v.item()) for k, v in ret.items()}
232
233    def astype(self, dtype: DTypeStr, *, copy: bool = False):
234        """Return tensor cast to `dtype`
235
 236        note: if `dtype` is already satisfied, a copy is made only if `copy` is True"""
237        return self.__class__.from_xarray(self._data.astype(dtype, copy=copy))
238
239    def clip(self, min: Optional[float] = None, max: Optional[float] = None):
240        """Return a tensor whose values are limited to [min, max].
241        At least one of max or min must be given."""
242        return self.__class__.from_xarray(self._data.clip(min, max))
243
244    def crop_to(
245        self,
246        sizes: PerAxis[int],
247        crop_where: Union[
248            CropWhere,
249            PerAxis[CropWhere],
250        ] = "left_and_right",
251    ) -> Self:
252        """crop to match `sizes`"""
253        if isinstance(crop_where, str):
254            crop_axis_where: PerAxis[CropWhere] = {a: crop_where for a in self.dims}
255        else:
256            crop_axis_where = crop_where
257
258        slices: Dict[AxisId, SliceInfo] = {}
259
260        for a, s_is in self.sizes.items():
261            if a not in sizes or sizes[a] == s_is:
262                pass
263            elif sizes[a] > s_is:
264                logger.warning(
265                    "Cannot crop axis {} of size {} to larger size {}",
266                    a,
267                    s_is,
268                    sizes[a],
269                )
270            elif a not in crop_axis_where:
271                raise ValueError(
272                    f"Don't know where to crop axis {a}, `crop_where`={crop_where}"
273                )
274            else:
275                crop_this_axis_where = crop_axis_where[a]
276                if crop_this_axis_where == "left":
277                    slices[a] = SliceInfo(s_is - sizes[a], s_is)
278                elif crop_this_axis_where == "right":
279                    slices[a] = SliceInfo(0, sizes[a])
280                elif crop_this_axis_where == "left_and_right":
281                    slices[a] = SliceInfo(
282                        start := (s_is - sizes[a]) // 2, sizes[a] + start
283                    )
284                else:
285                    assert_never(crop_this_axis_where)
286
287        return self[slices]
288
289    def expand_dims(self, dims: Union[Sequence[AxisId], PerAxis[int]]) -> Self:
290        return self.__class__.from_xarray(self._data.expand_dims(dims=dims))
291
292    def item(
293        self,
294        key: Union[
295            None, SliceInfo, slice, int, PerAxis[Union[SliceInfo, slice, int]]
296        ] = None,
297    ):
298        """Copy a tensor element to a standard Python scalar and return it."""
299        if key is None:
300            ret = self._data.item()
301        else:
302            ret = self[key]._data.item()
303
304        assert isinstance(ret, (bool, float, int))
305        return ret
306
307    def mean(self, dim: Optional[Union[AxisId, Sequence[AxisId]]] = None) -> Self:
308        return self.__class__.from_xarray(self._data.mean(dim=dim))
309
310    def pad(
311        self,
312        pad_width: PerAxis[PadWidthLike],
313        mode: PadMode = "symmetric",
314    ) -> Self:
315        pad_width = {a: PadWidth.create(p) for a, p in pad_width.items()}
316        return self.__class__.from_xarray(
317            self._data.pad(pad_width=pad_width, mode=mode)
318        )
319
320    def pad_to(
321        self,
322        sizes: PerAxis[int],
323        pad_where: Union[PadWhere, PerAxis[PadWhere]] = "left_and_right",
324        mode: PadMode = "symmetric",
325    ) -> Self:
326        """pad `tensor` to match `sizes`"""
327        if isinstance(pad_where, str):
328            pad_axis_where: PerAxis[PadWhere] = {a: pad_where for a in self.dims}
329        else:
330            pad_axis_where = pad_where
331
332        pad_width: Dict[AxisId, PadWidth] = {}
333        for a, s_is in self.sizes.items():
334            if a not in sizes or sizes[a] == s_is:
335                pad_width[a] = PadWidth(0, 0)
336            elif s_is > sizes[a]:
337                pad_width[a] = PadWidth(0, 0)
338                logger.warning(
339                    "Cannot pad axis {} of size {} to smaller size {}",
340                    a,
341                    s_is,
342                    sizes[a],
343                )
344            elif a not in pad_axis_where:
345                raise ValueError(
346                    f"Don't know where to pad axis {a}, `pad_where`={pad_where}"
347                )
348            else:
349                pad_this_axis_where = pad_axis_where[a]
350                d = sizes[a] - s_is
351                if pad_this_axis_where == "left":
352                    pad_width[a] = PadWidth(d, 0)
353                elif pad_this_axis_where == "right":
354                    pad_width[a] = PadWidth(0, d)
355                elif pad_this_axis_where == "left_and_right":
356                    pad_width[a] = PadWidth(left := d // 2, d - left)
357                else:
358                    assert_never(pad_this_axis_where)
359
360        return self.pad(pad_width, mode)
361
362    def quantile(
363        self,
364        q: Union[float, Sequence[float]],
365        dim: Optional[Union[AxisId, Sequence[AxisId]]] = None,
366    ) -> Self:
367        assert (
368            isinstance(q, (float, int))
369            and q >= 0.0
370            or not isinstance(q, (float, int))
371            and all(qq >= 0.0 for qq in q)
372        )
373        assert (
374            isinstance(q, (float, int))
375            and q <= 1.0
376            or not isinstance(q, (float, int))
377            and all(qq <= 1.0 for qq in q)
378        )
379        assert dim is None or (
380            (quantile_dim := AxisId("quantile")) != dim and quantile_dim not in set(dim)
381        )
382        return self.__class__.from_xarray(self._data.quantile(q, dim=dim))
383
384    def resize_to(
385        self,
386        sizes: PerAxis[int],
387        *,
388        pad_where: Union[
389            PadWhere,
390            PerAxis[PadWhere],
391        ] = "left_and_right",
392        crop_where: Union[
393            CropWhere,
394            PerAxis[CropWhere],
395        ] = "left_and_right",
396        pad_mode: PadMode = "symmetric",
397    ):
398        """return cropped/padded tensor with `sizes`"""
399        crop_to_sizes: Dict[AxisId, int] = {}
400        pad_to_sizes: Dict[AxisId, int] = {}
401        new_axes = dict(sizes)
402        for a, s_is in self.sizes.items():
403            a = AxisId(str(a))
404            _ = new_axes.pop(a, None)
405            if a not in sizes or sizes[a] == s_is:
406                pass
407            elif s_is > sizes[a]:
408                crop_to_sizes[a] = sizes[a]
409            else:
410                pad_to_sizes[a] = sizes[a]
411
412        tensor = self
413        if crop_to_sizes:
414            tensor = tensor.crop_to(crop_to_sizes, crop_where=crop_where)
415
416        if pad_to_sizes:
417            tensor = tensor.pad_to(pad_to_sizes, pad_where=pad_where, mode=pad_mode)
418
419        if new_axes:
420            tensor = tensor.expand_dims(new_axes)
421
422        return tensor
423
424    def std(self, dim: Optional[Union[AxisId, Sequence[AxisId]]] = None) -> Self:
425        return self.__class__.from_xarray(self._data.std(dim=dim))
426
427    def sum(self, dim: Optional[Union[AxisId, Sequence[AxisId]]] = None) -> Self:
428        """Reduce this Tensor's data by applying sum along some dimension(s)."""
429        return self.__class__.from_xarray(self._data.sum(dim=dim))
430
431    def transpose(
432        self,
433        axes: Sequence[AxisId],
434    ) -> Self:
435        """return a transposed tensor
436
437        Args:
438            axes: the desired tensor axes
439        """
440        # expand missing tensor axes
441        missing_axes = tuple(a for a in axes if a not in self.dims)
442        array = self._data
443        if missing_axes:
444            array = array.expand_dims(missing_axes)
445
446        # transpose to the correct axis order
447        return self.__class__.from_xarray(array.transpose(*axes))
448
449    def var(self, dim: Optional[Union[AxisId, Sequence[AxisId]]] = None) -> Self:
450        return self.__class__.from_xarray(self._data.var(dim=dim))
451
452    @classmethod
453    def _interprete_array_wo_known_axes(cls, array: NDArray[Any]):
454        ndim = array.ndim
455        if ndim == 2:
456            current_axes = (
457                v0_5.SpaceInputAxis(id=AxisId("y"), size=array.shape[0]),
458                v0_5.SpaceInputAxis(id=AxisId("x"), size=array.shape[1]),
459            )
460        elif ndim == 3 and any(s <= 3 for s in array.shape):
461            current_axes = (
462                v0_5.ChannelAxis(
463                    channel_names=[
464                        v0_5.Identifier(f"channel{i}") for i in range(array.shape[0])
465                    ]
466                ),
467                v0_5.SpaceInputAxis(id=AxisId("y"), size=array.shape[1]),
468                v0_5.SpaceInputAxis(id=AxisId("x"), size=array.shape[2]),
469            )
470        elif ndim == 3:
471            current_axes = (
472                v0_5.SpaceInputAxis(id=AxisId("z"), size=array.shape[0]),
473                v0_5.SpaceInputAxis(id=AxisId("y"), size=array.shape[1]),
474                v0_5.SpaceInputAxis(id=AxisId("x"), size=array.shape[2]),
475            )
476        elif ndim == 4:
477            current_axes = (
478                v0_5.ChannelAxis(
479                    channel_names=[
480                        v0_5.Identifier(f"channel{i}") for i in range(array.shape[0])
481                    ]
482                ),
483                v0_5.SpaceInputAxis(id=AxisId("z"), size=array.shape[1]),
484                v0_5.SpaceInputAxis(id=AxisId("y"), size=array.shape[2]),
485                v0_5.SpaceInputAxis(id=AxisId("x"), size=array.shape[3]),
486            )
487        elif ndim == 5:
488            current_axes = (
489                v0_5.BatchAxis(),
490                v0_5.ChannelAxis(
491                    channel_names=[
492                        v0_5.Identifier(f"channel{i}") for i in range(array.shape[1])
493                    ]
494                ),
495                v0_5.SpaceInputAxis(id=AxisId("z"), size=array.shape[2]),
496                v0_5.SpaceInputAxis(id=AxisId("y"), size=array.shape[3]),
497                v0_5.SpaceInputAxis(id=AxisId("x"), size=array.shape[4]),
498            )
499        else:
500            raise ValueError(f"Could not guess an axis mapping for {array.shape}")
501
502        return cls(array, dims=tuple(a.id for a in current_axes))

A wrapper around an xr.DataArray for better integration with bioimageio.spec and improved type annotations.

Tensor( array: NDArray[Any], dims: Sequence[Union[AxisId, AxisLike]])
55    def __init__(
56        self,
57        array: NDArray[Any],
58        dims: Sequence[Union[AxisId, AxisLike]],
59    ) -> None:
60        super().__init__()
61        axes = tuple(
62            a if isinstance(a, AxisId) else AxisInfo.create(a).id for a in dims
63        )
64        self._data = xr.DataArray(array, dims=axes)
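
For example (a minimal sketch; the array shape and axis names below are made up), a Tensor is constructed by pairing a numpy array with one axis name per array dimension:

import numpy as np

from bioimageio.core import Tensor

array = np.zeros((512, 512), dtype="float32")  # hypothetical 2D image
tensor = Tensor(array, dims=("y", "x"))        # one axis name per dimension
print(tensor.sizes)  # ordered mapping: y -> 512, x -> 512
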
@classmethod
def from_xarray(cls, data_array: xarray.core.dataarray.DataArray) -> Self:
135    @classmethod
136    def from_xarray(cls, data_array: xr.DataArray) -> Self:
137        """create a `Tensor` from an xarray data array
138
139        note for internal use: this factory method is round-trip safe
140            for any `Tensor`'s `data` property (an xarray.DataArray).
141        """
142        return cls(
143            array=data_array.data, dims=tuple(AxisId(d) for d in data_array.dims)
144        )

create a Tensor from an xarray data array

note for internal use: this factory method is round-trip safe for any Tensor's data property (an xarray.DataArray).
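
A short round-trip sketch (array contents are arbitrary):

import numpy as np
import xarray as xr

from bioimageio.core import Tensor

da = xr.DataArray(np.zeros((2, 3)), dims=("y", "x"))
t = Tensor.from_xarray(da)
assert Tensor.from_xarray(t.data).dims == t.dims  # round-trip safe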

@classmethod
def from_numpy( cls, array: NDArray[Any], *, dims: Optional[Union[AxisLike, Sequence[AxisLike]]]) -> Tensor:

create a Tensor from a numpy array

Arguments:
  • array: the nd numpy array
  • dims: A description of the array's axes; if None, axes are guessed (which might fail and raise a ValueError).
Raises:
  • ValueError: if dims is None and axes guessing fails.
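
For example, following the guessing rules implemented in _interprete_array_wo_known_axes (see the class source above), a 3D array whose first dimension has length <= 3 is interpreted as channels-first. A sketch with made-up data:

import numpy as np

from bioimageio.core import Tensor

rgb = np.random.rand(3, 64, 64).astype("float32")  # small first dim -> channels
t = Tensor.from_numpy(rgb, dims=None)              # dims=None triggers guessing
print(t.dims)  # expected: ('channel', 'y', 'x')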
data
dims

Tuple of dimension names associated with this tensor.

dtype: Literal['bool', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'uint8', 'uint16', 'uint32', 'uint64']
ndim

Number of tensor dimensions.

shape

Tuple of tensor axes lengths

shape_tuple

Tuple of tensor axes lengths

size

Number of elements in the tensor.

Equal to math.prod(tensor.shape), i.e., the product of the tensor's dimensions.

sizes

Ordered, immutable mapping from axis ids to axis lengths.

tagged_shape

(alias for sizes) Ordered, immutable mapping from axis ids to lengths.
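
The shape accessors side by side (values are illustrative):

import numpy as np

from bioimageio.core import Tensor
from bioimageio.core.axis import AxisId

dims = tuple(map(AxisId, ("batch", "channel", "y", "x")))
t = Tensor(np.zeros((1, 3, 32, 32), dtype="float32"), dims=dims)
print(t.ndim)          # 4
print(t.shape)         # (1, 3, 32, 32)
print(t.size)          # 3072
print(t.tagged_shape)  # mapping: batch -> 1, channel -> 3, y -> 32, x -> 32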

def argmax(self) -> Mapping[AxisId, int]:
def astype( self, dtype: Literal['bool', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'uint8', 'uint16', 'uint32', 'uint64'], *, copy: bool = False):

Return tensor cast to dtype

note: If dtype is already satisfied, a copy is made only if copy is True.

def clip(self, min: Optional[float] = None, max: Optional[float] = None):

Return a tensor whose values are limited to [min, max]. At least one of max or min must be given.

def crop_to( self, sizes: Mapping[AxisId, int], crop_where: Union[Literal['left', 'right', 'left_and_right'], Mapping[AxisId, Literal['left', 'right', 'left_and_right']]] = 'left_and_right') -> Self:

crop to match sizes
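
For instance (sizes are made up), cropping one axis with the default centered strategy:

import numpy as np

from bioimageio.core import Tensor
from bioimageio.core.axis import AxisId

t = Tensor(np.arange(10, dtype="float32").reshape(1, 10), dims=("y", "x"))
c = t.crop_to({AxisId("x"): 4})  # default crop_where="left_and_right" (centered)
print(c.sizes)  # x cropped from 10 to 4; y left untouched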

def expand_dims( self, dims: Union[Sequence[AxisId], Mapping[AxisId, int]]) -> Self:
def item( self, key: Union[NoneType, bioimageio.core.common.SliceInfo, slice, int, Mapping[AxisId, Union[bioimageio.core.common.SliceInfo, slice, int]]] = None):

Copy a tensor element to a standard Python scalar and return it.

def mean( self, dim: Union[AxisId, Sequence[AxisId], NoneType] = None) -> Self:
def pad( self, pad_width: Mapping[AxisId, Union[int, Tuple[int, int], bioimageio.core.common.PadWidth]], mode: Literal['edge', 'reflect', 'symmetric'] = 'symmetric') -> Self:
def pad_to( self, sizes: Mapping[AxisId, int], pad_where: Union[Literal['left', 'right', 'left_and_right'], Mapping[AxisId, Literal['left', 'right', 'left_and_right']]] = 'left_and_right', mode: Literal['edge', 'reflect', 'symmetric'] = 'symmetric') -> Self:

pad tensor to match sizes
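
For instance (sizes and mode are made up), padding a single axis on the right with edge values:

import numpy as np

from bioimageio.core import Tensor
from bioimageio.core.axis import AxisId

t = Tensor(np.ones((5, 5), dtype="float32"), dims=("y", "x"))
p = t.pad_to({AxisId("y"): 8}, pad_where="right", mode="edge")
print(p.sizes)  # y padded from 5 to 8 on the right; x unchanged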

def quantile( self, q: Union[float, Sequence[float]], dim: Union[AxisId, Sequence[AxisId], NoneType] = None) -> Self:
def resize_to( self, sizes: Mapping[AxisId, int], *, pad_where: Union[Literal['left', 'right', 'left_and_right'], Mapping[AxisId, Literal['left', 'right', 'left_and_right']]] = 'left_and_right', crop_where: Union[Literal['left', 'right', 'left_and_right'], Mapping[AxisId, Literal['left', 'right', 'left_and_right']]] = 'left_and_right', pad_mode: Literal['edge', 'reflect', 'symmetric'] = 'symmetric'):

return cropped/padded tensor with sizes
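
A sketch combining both directions (sizes are made up): axes larger than the target are cropped, smaller ones are padded:

import numpy as np

from bioimageio.core import Tensor
from bioimageio.core.axis import AxisId

t = Tensor(np.ones((8, 10), dtype="float32"), dims=("y", "x"))
r = t.resize_to({AxisId("y"): 16, AxisId("x"): 6})
print(r.sizes)  # y padded 8 -> 16, x cropped 10 -> 6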

def std( self, dim: Union[AxisId, Sequence[AxisId], NoneType] = None) -> Self:
def sum( self, dim: Union[AxisId, Sequence[AxisId], NoneType] = None) -> Self:

Reduce this Tensor's data by applying sum along some dimension(s).

def transpose(self, axes: Sequence[AxisId]) -> Self:

return a transposed tensor

Arguments:
  • axes: the desired tensor axes
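
As the class source above shows, axes missing from the tensor are expanded (to length 1) before transposing. A sketch with made-up axes:

import numpy as np

from bioimageio.core import Tensor
from bioimageio.core.axis import AxisId

t = Tensor(np.zeros((4, 5), dtype="float32"), dims=("y", "x"))
t2 = t.transpose([AxisId("c"), AxisId("x"), AxisId("y")])  # 'c' is inserted
print(t2.sizes)  # c -> 1, x -> 5, y -> 4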
def var( self, dim: Union[AxisId, Sequence[AxisId], NoneType] = None) -> Self:
def test_description( source: Union[ResourceDescr, PermissiveFileSource, BioimageioYamlContent], *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', weight_format: Optional[SupportedWeightsFormat] = None, devices: Optional[Sequence[str]] = None, determinism: Literal['seed_only', 'full'] = 'seed_only', expected_type: Optional[str] = None, sha256: Optional[Sha256] = None, stop_early: bool = False, runtime_env: Union[Literal['currently-active', 'as-described'], Path, BioimageioCondaEnv] = 'currently-active', run_command: Callable[[Sequence[str]], None] = default_run_command, **deprecated: Unpack[DeprecatedKwargs]) -> ValidationSummary:
188def test_description(
189    source: Union[ResourceDescr, PermissiveFileSource, BioimageioYamlContent],
190    *,
191    format_version: Union[FormatVersionPlaceholder, str] = "discover",
192    weight_format: Optional[SupportedWeightsFormat] = None,
193    devices: Optional[Sequence[str]] = None,
194    determinism: Literal["seed_only", "full"] = "seed_only",
195    expected_type: Optional[str] = None,
196    sha256: Optional[Sha256] = None,
197    stop_early: bool = False,
198    runtime_env: Union[
199        Literal["currently-active", "as-described"], Path, BioimageioCondaEnv
200    ] = ("currently-active"),
201    run_command: Callable[[Sequence[str]], None] = default_run_command,
202    **deprecated: Unpack[DeprecatedKwargs],
203) -> ValidationSummary:
204    """Test a bioimage.io resource dynamically,
205    for example run prediction of test tensors for models.
206
207    Args:
208        source: model description source.
209        weight_format: Weight format to test.
210            Default: All weight formats present in **source**.
211        devices: Devices to test with, e.g. 'cpu', 'cuda'.
212            Default (may be weight format dependent): ['cuda'] if available, ['cpu'] otherwise.
213        determinism: Modes to improve reproducibility of test outputs.
214        expected_type: Assert an expected resource description `type`.
215        sha256: Expected SHA256 value of **source**.
216                (Ignored if **source** already is a loaded `ResourceDescr` object.)
217        stop_early: Do not run further subtests after a failed one.
218        runtime_env: (Experimental feature!) The Python environment to run the tests in
219            - `"currently-active"`: Use active Python interpreter.
220            - `"as-described"`: Use `bioimageio.spec.get_conda_env` to generate a conda
221                environment YAML file based on the model weights description.
222            - A `BioimageioCondaEnv` or a path to a conda environment YAML file.
223                Note: The `bioimageio.core` dependency will be added automatically if not present.
224        run_command: (Experimental feature!) Function to execute (conda) terminal commands in a subprocess
225            (ignored if **runtime_env** is `"currently-active"`).
226    """
227    if runtime_env == "currently-active":
228        rd = load_description_and_test(
229            source,
230            format_version=format_version,
231            weight_format=weight_format,
232            devices=devices,
233            determinism=determinism,
234            expected_type=expected_type,
235            sha256=sha256,
236            stop_early=stop_early,
237            **deprecated,
238        )
239        return rd.validation_summary
240
241    if runtime_env == "as-described":
242        conda_env = None
243    elif isinstance(runtime_env, (str, Path)):
244        conda_env = BioimageioCondaEnv.model_validate(read_yaml(Path(runtime_env)))
245    elif isinstance(runtime_env, BioimageioCondaEnv):
246        conda_env = runtime_env
247    else:
248        assert_never(runtime_env)
249
250    with TemporaryDirectory(ignore_cleanup_errors=True) as _d:
251        working_dir = Path(_d)
252        if isinstance(source, (dict, ResourceDescrBase)):
253            file_source = save_bioimageio_package(
254                source, output_path=working_dir / "package.zip"
255            )
256        else:
257            file_source = source
258
259        return _test_in_env(
260            file_source,
261            working_dir=working_dir,
262            weight_format=weight_format,
263            conda_env=conda_env,
264            devices=devices,
265            determinism=determinism,
266            expected_type=expected_type,
267            sha256=sha256,
268            stop_early=stop_early,
269            run_command=run_command,
270            **deprecated,
271        )

Test a bioimage.io resource dynamically, for example run prediction of test tensors for models.

Arguments:
  • source: model description source.
  • weight_format: Weight format to test. Default: All weight formats present in source.
  • devices: Devices to test with, e.g. 'cpu', 'cuda'. Default (may be weight format dependent): ['cuda'] if available, ['cpu'] otherwise.
  • determinism: Modes to improve reproducibility of test outputs.
  • expected_type: Assert an expected resource description type.
  • sha256: Expected SHA256 value of source. (Ignored if source already is a loaded ResourceDescr object.)
  • stop_early: Do not run further subtests after a failed one.
  • runtime_env: (Experimental feature!) The Python environment to run the tests in
    • "currently-active": Use active Python interpreter.
    • "as-described": Use bioimageio.spec.get_conda_env to generate a conda environment YAML file based on the model weights description.
    • A BioimageioCondaEnv or a path to a conda environment YAML file. Note: The bioimageio.core dependency will be added automatically if not present.
  • run_command: (Experimental feature!) Function to execute (conda) terminal commands in a subprocess (ignored if runtime_env is "currently-active").
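
A hedged usage sketch (assuming, as the CLI examples earlier in this README suggest, that a bioimage.io model id is accepted as source):

from bioimageio.core import test_description

summary = test_description("powerful-chipmunk", stop_early=True)
print(summary.status)  # e.g. "passed" or "failed"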
def test_model( source: Union[v0_4.ModelDescr, v0_5.ModelDescr, PermissiveFileSource], weight_format: Optional[SupportedWeightsFormat] = None, devices: Optional[List[str]] = None, *, determinism: Literal['seed_only', 'full'] = 'seed_only', sha256: Optional[Sha256] = None, stop_early: bool = False, **deprecated: Unpack[DeprecatedKwargs]) -> ValidationSummary:
160def test_model(
161    source: Union[v0_4.ModelDescr, v0_5.ModelDescr, PermissiveFileSource],
162    weight_format: Optional[SupportedWeightsFormat] = None,
163    devices: Optional[List[str]] = None,
164    *,
165    determinism: Literal["seed_only", "full"] = "seed_only",
166    sha256: Optional[Sha256] = None,
167    stop_early: bool = False,
168    **deprecated: Unpack[DeprecatedKwargs],
169) -> ValidationSummary:
170    """Test model inference"""
171    return test_description(
172        source,
173        weight_format=weight_format,
174        devices=devices,
175        determinism=determinism,
176        expected_type="model",
177        sha256=sha256,
178        stop_early=stop_early,
179        **deprecated,
180    )

Test model inference

def test_resource( source: Union[ResourceDescr, PermissiveFileSource, BioimageioYamlContent], *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', weight_format: Optional[SupportedWeightsFormat] = None, devices: Optional[Sequence[str]] = None, determinism: Literal['seed_only', 'full'] = 'seed_only', expected_type: Optional[str] = None, sha256: Optional[Sha256] = None, stop_early: bool = False, runtime_env: Union[Literal['currently-active', 'as-described'], Path, BioimageioCondaEnv] = 'currently-active', run_command: Callable[[Sequence[str]], None] = default_run_command, **deprecated: Unpack[DeprecatedKwargs]) -> ValidationSummary:

Identical to test_description above; see that entry for the source and argument details.
def validate_format( data: Dict[str, YamlValue], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', context: Optional[bioimageio.spec.ValidationContext] = None) -> bioimageio.spec.ValidationSummary:
204def validate_format(
205    data: BioimageioYamlContent,
206    /,
207    *,
208    format_version: Union[Literal["discover", "latest"], str] = DISCOVER,
209    context: Optional[ValidationContext] = None,
210) -> ValidationSummary:
211    """Validate a dictionary holding a bioimageio description.
212    See `bioimageio.spec.load_description_and_validate_format_only`
213    to validate a file source.
214
215    Args:
216        data: Dictionary holding the raw bioimageio.yaml content.
217        format_version: Format version to (update to and) use for validation.
218        context: Validation context, see `bioimageio.spec.ValidationContext`
219
220    Note:
221        Use `bioimageio.spec.load_description_and_validate_format_only` to validate a
222        file source instead of loading the YAML content and creating the appropriate
223        `ValidationContext`.
224
225        Alternatively you can use `bioimageio.spec.load_description` and access the
226        `validation_summary` attribute of the returned object.
227    """
228    with context or get_validation_context():
229        rd = build_description(data, format_version=format_version)
230
231    assert rd.validation_summary is not None
232    return rd.validation_summary

Validate a dictionary holding a bioimageio description. See bioimageio.spec.load_description_and_validate_format_only to validate a file source.

Arguments:
  • data: Dictionary holding the raw bioimageio.yaml content.
  • format_version: Format version to (update to and) use for validation.
  • context: Validation context, see bioimageio.spec.ValidationContext
Note:

Use bioimageio.spec.load_description_and_validate_format_only to validate a file source instead of loading the YAML content and creating the appropriate ValidationContext.

Alternatively you can use bioimageio.spec.load_description and access the validation_summary attribute of the returned object.
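
A minimal sketch, assuming the bioimageio.yaml content has already been loaded into a dict; the stub below is deliberately incomplete and will therefore yield a failing summary, it only illustrates the call:

from bioimageio.core import validate_format

data = {  # hypothetical, incomplete bioimageio.yaml content
    "type": "model",
    "format_version": "0.5.3",
    "name": "my-hypothetical-model",
}
summary = validate_format(data)
print(summary.status)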