bioimageio.spec


Specifications for bioimage.io

This repository contains the specifications of the standard format defined by the bioimage.io community for the content (i.e., models, datasets and applications) of the bioimage.io website. Each item in the content is always described using a YAML 1.2 file named rdf.yaml or bioimageio.yaml. This rdf.yaml / bioimageio.yaml file, along with the files referenced in it, can be downloaded from or uploaded to the bioimage.io website and may be produced or consumed by bioimage.io-compatible consumers (e.g., image analysis software like ilastik).

These are the rules and format that bioimage.io-compatible resources must fulfill.

Note that the Python package PyYAML does not support YAML 1.2. We therefore use and recommend ruyaml. For the differences between YAML 1.1 and 1.2 see https://ruamelyaml.readthedocs.io/en/latest/pyyaml.
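Why the YAML version matters: YAML 1.1 resolves many more plain scalars to booleans than YAML 1.2 does, which silently changes metadata values. The toy resolver below is illustrative only (the sets are simplified subsets of each spec's boolean forms, not a real parser):

```python
# Simplified boolean scalar tables: YAML 1.1 (as implemented by PyYAML)
# treats yes/no/on/off as booleans; the YAML 1.2 core schema does not.
YAML_1_1_BOOLS = {"y", "yes", "n", "no", "true", "false", "on", "off"}
YAML_1_2_BOOLS = {"true", "false"}

def resolves_to_bool(scalar: str, yaml_version: str) -> bool:
    """Return True if the plain scalar would be loaded as a boolean."""
    table = YAML_1_1_BOOLS if yaml_version == "1.1" else YAML_1_2_BOOLS
    return scalar.lower() in table

# "on" is a boolean under YAML 1.1 but a plain string under YAML 1.2
```

So a tag like `on` or a license field `no` survives as a string only under a YAML 1.2 parser, which is why a 1.2-capable library is recommended.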

Please also note that the best way to check whether your rdf.yaml file is bioimage.io-compliant is to call bioimageio.core.validate from the bioimageio.core Python package. The bioimageio.core Python package also provides the bioimageio command line interface (CLI) with the validate command:

bioimageio validate path/to/your/rdf.yaml

Format version overview

All bioimage.io description formats are defined as Pydantic models.

Type          Format Version   Documentation1      Developer Documentation2
model         0.5              model 0.5           ModelDescr_v0_5
model         0.4              model 0.4           ModelDescr_v0_4
dataset       0.3              dataset 0.3         DatasetDescr_v0_3
dataset       0.2              dataset 0.2         DatasetDescr_v0_2
notebook      0.3              notebook 0.3        NotebookDescr_v0_3
notebook      0.2              notebook 0.2        NotebookDescr_v0_2
application   0.3              application 0.3     ApplicationDescr_v0_3
application   0.2              application 0.2     ApplicationDescr_v0_2
generic       0.3              -                   GenericDescr_v0_3
generic       0.2              -                   GenericDescr_v0_2

JSON Schema

Simplified descriptions are available as JSON Schema (generated with Pydantic):

bioimageio.spec version JSON Schema documentation3
latest bioimageio_schema_latest.json latest documentation
0.5 bioimageio_schema_v0-5.json 0.5 documentation

Note: bioimageio_schema_v0-5.json and bioimageio_schema_latest.json are identical, but bioimageio_schema_latest.json will eventually refer to the future bioimageio_schema_v0-6.json.

Flattened, interactive docs

A flattened view of the types used by the spec that also shows value constraints.

rendered

You can also generate these docs locally by running PYTHONPATH=./scripts python -m interactive_docs

Examples

We provide some bioimageio.yaml/rdf.yaml example files to describe models, applications, notebooks and datasets; more examples are available at bioimage.io. There is also an example notebook demonstrating how to programmatically access the models, applications, notebooks and datasets descriptions in Python. For integration of bioimageio resources we recommend the bioimageio.core Python package.

💁 Recommendations

  • Use the bioimageio.core Python package to not only validate the format of your bioimageio.yaml/rdf.yaml file, but also test and deploy it (e.g. model inference).
  • bioimageio.spec keeps evolving. Try to use and upgrade to the most current format version! Note: The command line interface bioimageio (part of bioimageio.core) has the update-format command to help you with that.

⌨ bioimageio command-line interface (CLI)

The bioimageio CLI has moved to bioimageio.core.

🖥 Installation

bioimageio.spec can be installed with either conda or pip. We recommend installing bioimageio.core instead to get access to the Python programmatic features available in the BioImage.IO community:

conda install -c conda-forge bioimageio.core

or

pip install -U bioimageio.core

Still, if you only need a lighter package or just want to validate, you can install only the bioimageio.spec package:

conda install -c conda-forge bioimageio.spec

or

pip install -U bioimageio.spec

🏞 Environment variables

TODO: link to settings in dev docs

🤝 How to contribute

♥ Contributors

<img alt="bioimageio.spec contributors" src="https://contrib.rocks/image?repo=bioimage-io/spec-bioimage-io" />

Made with contrib.rocks.

🛈 Versioning scheme

To keep the bioimageio.spec Python package version in sync with the (model) description format version, bioimageio.spec is versioned as MAJOR.MINOR.PATCH.LIB, where MAJOR.MINOR.PATCH corresponds to the latest model description format version implemented and LIB may be bumped for library changes that do not affect the format version. This scheme was introduced with bioimageio.spec 0.5.3.1.
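The scheme above makes the implemented format version recoverable from the package version. A minimal sketch (plain Python, the helper name is illustrative):

```python
def split_spec_version(version: str) -> tuple[str, int]:
    """Split a bioimageio.spec version MAJOR.MINOR.PATCH.LIB into the
    implemented description format version and the library revision."""
    parts = version.split(".")
    if len(parts) != 4:
        raise ValueError(f"expected MAJOR.MINOR.PATCH.LIB, got {version!r}")
    format_version = ".".join(parts[:3])  # e.g. "0.5.3"
    lib_revision = int(parts[3])          # e.g. 1
    return format_version, lib_revision
```

For example, bioimageio.spec 0.5.3.1 implements format version 0.5.3 at library revision 1.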

Δ Changelog

The changelog of the bioimageio.spec Python package and the changes to the resource description format it implements can be found here.


  1. JSON Schema based documentation generated with json-schema-for-humans

  2. Part of the bioimageio.spec package documentation generated with pdoc

  3. Part of the bioimageio.spec package documentation generated with pdoc

"""
.. include:: ../../README.md
"""

from . import (
    application,
    common,
    conda_env,
    dataset,
    generic,
    model,
    pretty_validation_errors,
    summary,
    utils,
)
from ._description import (
    LatestResourceDescr,
    ResourceDescr,
    SpecificResourceDescr,
    build_description,
    dump_description,
    validate_format,
)
from ._get_conda_env import BioimageioCondaEnv, get_conda_env
from ._internal import settings
from ._internal.common_nodes import InvalidDescr
from ._internal.constants import VERSION
from ._internal.validation_context import ValidationContext, get_validation_context
from ._io import (
    load_dataset_description,
    load_description,
    load_description_and_validate_format_only,
    load_model_description,
    save_bioimageio_yaml_only,
    update_format,
    update_hashes,
)
from ._package import (
    get_resource_package_content,
    save_bioimageio_package,
    save_bioimageio_package_as_folder,
    save_bioimageio_package_to_stream,
)
from .application import AnyApplicationDescr, ApplicationDescr
from .dataset import AnyDatasetDescr, DatasetDescr
from .generic import AnyGenericDescr, GenericDescr
from .model import AnyModelDescr, ModelDescr
from .notebook import AnyNotebookDescr, NotebookDescr
from .pretty_validation_errors import enable_pretty_validation_errors_in_ipynb
from .summary import ValidationSummary

__version__ = VERSION

__all__ = [
    "__version__",
    "AnyApplicationDescr",
    "AnyDatasetDescr",
    "AnyGenericDescr",
    "AnyModelDescr",
    "AnyNotebookDescr",
    "application",
    "ApplicationDescr",
    "BioimageioCondaEnv",
    "build_description",
    "common",
    "conda_env",
    "dataset",
    "DatasetDescr",
    "dump_description",
    "enable_pretty_validation_errors_in_ipynb",
    "generic",
    "GenericDescr",
    "get_conda_env",
    "get_resource_package_content",
    "get_validation_context",
    "InvalidDescr",
    "LatestResourceDescr",
    "load_dataset_description",
    "load_description_and_validate_format_only",
    "load_description",
    "load_model_description",
    "model",
    "ModelDescr",
    "NotebookDescr",
    "pretty_validation_errors",
    "ResourceDescr",
    "save_bioimageio_package_as_folder",
    "save_bioimageio_package_to_stream",
    "save_bioimageio_package",
    "save_bioimageio_yaml_only",
    "settings",
    "SpecificResourceDescr",
    "summary",
    "update_format",
    "update_hashes",
    "utils",
    "validate_format",
    "ValidationContext",
    "ValidationSummary",
]
__version__ = '0.5.4.3'
AnyApplicationDescr = typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], typing.Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')]
AnyDatasetDescr = typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], typing.Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')]
AnyGenericDescr = typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], typing.Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')]
AnyModelDescr = typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], typing.Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')]
AnyNotebookDescr = typing.Annotated[typing.Union[typing.Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], typing.Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]
class ApplicationDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class ApplicationDescr(GenericDescrBase):
    """Bioimage.io description of an application."""

    implemented_type: ClassVar[Literal["application"]] = "application"
    if TYPE_CHECKING:
        type: Literal["application"] = "application"
    else:
        type: Literal["application"]

    id: Optional[ApplicationId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[ApplicationId] = None
    """The description from which this one is derived"""

    source: Annotated[
        Optional[FileSource_],
        Field(description="URL or path to the source of the application"),
    ] = None
    """The primary source of the application"""

Bioimage.io description of an application.

implemented_type: ClassVar[Literal['application']] = 'application'

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

The description from which this one is derived

source: Annotated[Optional[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')]), AfterValidator(func=<function wo_special_file_name at 0x7febd4d13c40>), PlainSerializer(func=<function _package_serializer at 0x7febd4daec00>, return_type=PydanticUndefined, when_used='unless-none')]], FieldInfo(annotation=NoneType, required=True, description='URL or path to the source of the application')]

The primary source of the application

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'forbid', 'frozen': False, 'populate_by_name': True, 'revalidate_instances': 'never', 'validate_assignment': True, 'validate_default': False, 'validate_return': True, 'use_attribute_docstrings': True, 'model_title_generator': <function _node_title_generator>, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
class BioimageioCondaEnv(bioimageio.spec.conda_env.CondaEnv):
class BioimageioCondaEnv(CondaEnv):
    """A special `CondaEnv` that
    - automatically adds bioimageio specific dependencies
    - sorts dependencies
    """

    @model_validator(mode="after")
    def _normalize_bioimageio_conda_env(self):
        """update a conda env such that we have bioimageio.core and sorted dependencies"""
        for req_channel in ("conda-forge", "nodefaults"):
            if req_channel not in self.channels:
                self.channels.append(req_channel)

        if "defaults" in self.channels:
            warnings.warn("removing 'defaults' from conda-channels")
            self.channels.remove("defaults")

        if "pip" not in self.dependencies:
            self.dependencies.append("pip")

        for dep in self.dependencies:
            if isinstance(dep, PipDeps):
                pip_section = dep
                pip_section.pip.sort()
                break
        else:
            pip_section = None

        if (
            pip_section is None
            or not any(pd.startswith("bioimageio.core") for pd in pip_section.pip)
        ) and not any(
            d.startswith("bioimageio.core")
            or d.startswith("conda-forge::bioimageio.core")
            for d in self.dependencies
            if not isinstance(d, PipDeps)
        ):
            self.dependencies.append("conda-forge::bioimageio.core")

        self.dependencies.sort()
        return self

A special CondaEnv that

  • automatically adds bioimageio specific dependencies
  • sorts dependencies
model_config: ClassVar[pydantic.config.ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
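The normalization the validator above performs can be illustrated on a plain dictionary representation of a conda environment. This is a simplified sketch (the function name is made up, and it ignores the nested pip section handled by the real pydantic model):

```python
def normalize_env(env: dict) -> dict:
    """Sketch of BioimageioCondaEnv's normalization on plain dicts:
    ensure required channels, drop 'defaults', ensure pip and a
    bioimageio.core dependency, then sort dependencies."""
    channels = list(env.get("channels", []))
    for required in ("conda-forge", "nodefaults"):
        if required not in channels:
            channels.append(required)
    if "defaults" in channels:
        channels.remove("defaults")

    deps = list(env.get("dependencies", []))
    if "pip" not in deps:
        deps.append("pip")
    # add bioimageio.core unless some dependency already provides it
    if not any(
        isinstance(d, str) and d.split("::")[-1].startswith("bioimageio.core")
        for d in deps
    ):
        deps.append("conda-forge::bioimageio.core")
    deps.sort(key=str)
    return {"channels": channels, "dependencies": deps}
```

Running it on an empty environment yields the minimal bioimageio-ready setup: conda-forge and nodefaults channels plus pip and bioimageio.core dependencies.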

def build_description( content: Mapping[str, YamlValueView], /, *, context: Optional[ValidationContext] = None, format_version: Union[Literal['latest', 'discover'], str] = 'discover') -> Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, 
custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], InvalidDescr]:
def build_description(
    content: BioimageioYamlContentView,
    /,
    *,
    context: Optional[ValidationContext] = None,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
) -> Union[ResourceDescr, InvalidDescr]:
    """build a bioimage.io resource description from an RDF's content.

    Use `load_description` if you want to build a resource description from an rdf.yaml
    or bioimage.io zip-package.

    Args:
        content: loaded rdf.yaml file (loaded with YAML, not bioimageio.spec)
        context: validation context to use during validation
        format_version: (optional) use this argument to load the resource and
                        convert its metadata to a higher format_version

    Returns:
        An object holding all metadata of the bioimage.io resource

    """

    return build_description_impl(
        content,
        context=context,
        format_version=format_version,
        get_rd_class=_get_rd_class,
    )

build a bioimage.io resource description from an RDF's content.

Use load_description if you want to build a resource description from an rdf.yaml or bioimage.io zip-package.

Arguments:
  • content: loaded rdf.yaml file (loaded with YAML, not bioimageio.spec)
  • context: validation context to use during validation
  • format_version: (optional) use this argument to load the resource and convert its metadata to a higher format_version
Returns:

An object holding all metadata of the bioimage.io resource
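The effect of the format_version argument ('discover' vs 'latest' vs an explicit version) can be sketched with a toy class registry. All names here are hypothetical; the real dispatch lives in bioimageio.spec internals:

```python
# toy registry mapping (type, format_version) to description class names;
# illustrative only, not bioimageio.spec's actual registry
REGISTRY = {
    ("model", "0.4"): "ModelDescr_v0_4",
    ("model", "0.5"): "ModelDescr_v0_5",
}
LATEST = {"model": "0.5"}

def get_rd_class(typ: str, content_fv: str, requested: str = "discover") -> str:
    """'discover' keeps the content's own format version;
    'latest' (or an explicit version) selects that version's class."""
    if requested == "discover":
        target = content_fv
    elif requested == "latest":
        target = LATEST[typ]
    else:
        target = requested
    return REGISTRY[(typ, target)]
```

With 'discover', a model written for format 0.4 is parsed by the 0.4 class; passing 'latest' selects the 0.5 class and thereby converts the metadata.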

class DatasetDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class DatasetDescr(GenericDescrBase):
    """A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage
    processing.
    """

    implemented_type: ClassVar[Literal["dataset"]] = "dataset"
    if TYPE_CHECKING:
        type: Literal["dataset"] = "dataset"
    else:
        type: Literal["dataset"]

    id: Optional[DatasetId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[DatasetId] = None
    """The description from which this one is derived"""

    source: Optional[HttpUrl] = None
    """URL to the source of the dataset."""

    @model_validator(mode="before")
    @classmethod
    def _convert(cls, data: Dict[str, Any], /) -> Dict[str, Any]:
        if (
            data.get("type") == "dataset"
            and isinstance(fv := data.get("format_version"), str)
            and fv.startswith("0.2.")
        ):
            old = DatasetDescr02.load(data)
            if isinstance(old, InvalidDescr):
                return data

            return cast(
                Dict[str, Any],
                (cls if TYPE_CHECKING else dict)(
                    attachments=(
                        []
                        if old.attachments is None
                        else [FileDescr(source=f) for f in old.attachments.files]
                    ),
                    authors=[
                        _author_conv.convert_as_dict(a) for a in old.authors
                    ],  # pyright: ignore[reportArgumentType]
                    badges=old.badges,
                    cite=[
                        {"text": c.text, "doi": c.doi, "url": c.url} for c in old.cite
                    ],  # pyright: ignore[reportArgumentType]
                    config=old.config,  # pyright: ignore[reportArgumentType]
                    covers=old.covers,
                    description=old.description,
                    documentation=old.documentation,
                    format_version="0.3.0",
                    git_repo=old.git_repo,  # pyright: ignore[reportArgumentType]
                    icon=old.icon,
                    id=None if old.id is None else DatasetId(old.id),
                    license=old.license,  # type: ignore
                    links=old.links,
                    maintainers=[
                        _maintainer_conv.convert_as_dict(m) for m in old.maintainers
                    ],  # pyright: ignore[reportArgumentType]
                    name=old.name,
                    source=old.source,
                    tags=old.tags,
                    type=old.type,
                    uploader=old.uploader,
                    version=old.version,
                    **(old.model_extra or {}),
                ),
            )

        return data

A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing.

implemented_type: ClassVar[Literal['dataset']] = 'dataset'

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

The description from which this one is derived

URL to the source of the dataset.

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'forbid', 'frozen': False, 'populate_by_name': True, 'revalidate_instances': 'never', 'validate_assignment': True, 'validate_default': False, 'validate_return': True, 'use_attribute_docstrings': True, 'model_title_generator': <function _node_title_generator>, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
def dump_description( rd: Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], 
Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], InvalidDescr], /, *, exclude_unset: bool = True, exclude_defaults: bool = False) -> Dict[str, YamlValue]:
def dump_description(
    rd: Union[ResourceDescr, InvalidDescr],
    /,
    *,
    exclude_unset: bool = True,
    exclude_defaults: bool = False,
) -> BioimageioYamlContent:
    """Converts a resource to a dictionary containing only simple types that can directly be serialized to YAML.

    Args:
        rd: bioimageio resource description
        exclude_unset: Exclude fields that have not explicitly been set.
        exclude_defaults: Exclude fields that have the default value (even if set explicitly).
    """
    return rd.model_dump(
        mode="json", exclude_unset=exclude_unset, exclude_defaults=exclude_defaults
    )

Converts a resource to a dictionary containing only simple types that can directly be serialized to YAML.

Arguments:
  • rd: bioimageio resource description
  • exclude_unset: Exclude fields that have not explicitly been set.
  • exclude_defaults: Exclude fields that have the default value (even if set explicitly).
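The difference between the two exclusion flags can be sketched without pydantic. This is a toy helper operating on plain dicts (not the actual model_dump implementation): exclude_unset drops fields that were never explicitly assigned, while exclude_defaults drops fields whose value equals the default, even if assigned explicitly.

```python
def dump(values: dict, explicitly_set: set, defaults: dict,
         exclude_unset: bool = True, exclude_defaults: bool = False) -> dict:
    """Toy illustration of dump_description's exclusion flags."""
    out = {}
    for name, value in values.items():
        if exclude_unset and name not in explicitly_set:
            continue  # field was never explicitly assigned
        if exclude_defaults and defaults.get(name) == value:
            continue  # field equals its default value
        out[name] = value
    return out
```

So with the defaults (exclude_unset=True), only explicitly set fields end up in the YAML content, which keeps dumped rdf.yaml files minimal.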
def enable_pretty_validation_errors_in_ipynb():
def enable_pretty_validation_errors_in_ipynb():
    """DEPRECATED; this is enabled by default at import time."""
    warnings.warn(
        "deprecated, this is enabled by default at import time.",
        DeprecationWarning,
        stacklevel=2,
    )

DEPRECATED; this is enabled by default at import time.

class GenericDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class GenericDescr(GenericDescrBase, extra="ignore"):
    """Specification of the fields used in a generic bioimage.io-compliant resource description file (RDF).

    An RDF is a YAML file that describes a resource such as a model, a dataset, or a notebook.
    Note that those resources are described with a type-specific RDF.
    Use this generic resource description, if none of the known specific types matches your resource.
    """

    implemented_type: ClassVar[Literal["generic"]] = "generic"
    if TYPE_CHECKING:
        type: Annotated[str, LowerCase] = "generic"
        """The resource type assigns a broad category to the resource."""
    else:
        type: Annotated[str, LowerCase]
        """The resource type assigns a broad category to the resource."""

    id: Optional[
        Annotated[ResourceId, Field(examples=["affable-shark", "ambitious-sloth"])]
    ] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[ResourceId] = None
    """The description from which this one is derived"""

    source: Optional[HttpUrl] = None
    """The primary source of the resource"""

    @field_validator("type", mode="after")
    @classmethod
    def check_specific_types(cls, value: str) -> str:
        if value in KNOWN_SPECIFIC_RESOURCE_TYPES:
            raise ValueError(
                f"Use the {value} description instead of this generic description for"
                + f" your '{value}' resource."
            )

        return value

Specification of the fields used in a generic bioimage.io-compliant resource description file (RDF).

An RDF is a YAML file that describes a resource such as a model, a dataset, or a notebook. Note that those resources are described with a type-specific RDF. Use this generic resource description, if none of the known specific types matches your resource.

implemented_type: ClassVar[Literal['generic']] = 'generic'
id: Optional[Annotated[bioimageio.spec.generic.v0_3.ResourceId, FieldInfo(annotation=NoneType, required=True, examples=['affable-shark', 'ambitious-sloth'])]]

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

The description from which this one is derived

The primary source of the resource

@field_validator('type', mode='after')
@classmethod
def check_specific_types(cls, value: str) -> str:
    @field_validator("type", mode="after")
    @classmethod
    def check_specific_types(cls, value: str) -> str:
        if value in KNOWN_SPECIFIC_RESOURCE_TYPES:
            raise ValueError(
                f"Use the {value} description instead of this generic description for"
                + f" your '{value}' resource."
            )

        return value
implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'ignore', 'frozen': False, 'populate_by_name': True, 'revalidate_instances': 'never', 'validate_assignment': True, 'validate_default': False, 'validate_return': True, 'use_attribute_docstrings': True, 'model_title_generator': <function _node_title_generator>, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
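The initialisation described above can be sketched in plain Python. Everything here is a stand-in: `_UNDEFINED` mimics `PydanticUndefined`, and `PrivateAttr` mimics pydantic's private-attribute descriptor; only the dunder name `__pydantic_private__` mirrors the real internals.

```python
# Plain-Python sketch of the private-attribute initialisation described
# above; all names here are stand-ins, not pydantic's actual classes.
_UNDEFINED = object()  # stand-in sentinel for PydanticUndefined

class PrivateAttr:
    def __init__(self, default=_UNDEFINED):
        self._default = default

    def get_default(self):
        return self._default

def init_private(obj, private_attributes: dict) -> None:
    """Collect defaults only for private attributes that actually define one."""
    if getattr(obj, "__pydantic_private__", None) is None:
        private = {
            name: attr.get_default()
            for name, attr in private_attributes.items()
            if attr.get_default() is not _UNDEFINED
        }
        # object.__setattr__ bypasses any frozen/validating __setattr__
        object.__setattr__(obj, "__pydantic_private__", private)
```

Attributes without a default are simply skipped rather than stored as an undefined sentinel.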
def get_conda_env(
    *,
    entry: SupportedWeightsEntry,
    env_name: Optional[Union[Literal["DROP"], str]] = None,
) -> BioimageioCondaEnv:
    """get the recommended Conda environment for a given weights entry description"""
    if isinstance(entry, (v0_4.OnnxWeightsDescr, v0_5.OnnxWeightsDescr)):
        conda_env = _get_default_onnx_env(opset_version=entry.opset_version)
    elif isinstance(
        entry,
        (
            v0_4.PytorchStateDictWeightsDescr,
            v0_5.PytorchStateDictWeightsDescr,
            v0_4.TorchscriptWeightsDescr,
            v0_5.TorchscriptWeightsDescr,
        ),
    ):
        if (
            isinstance(entry, v0_5.TorchscriptWeightsDescr)
            or entry.dependencies is None
        ):
            conda_env = _get_default_pytorch_env(pytorch_version=entry.pytorch_version)
        else:
            conda_env = _get_env_from_deps(entry.dependencies)

    elif isinstance(
        entry,
        (
            v0_4.TensorflowSavedModelBundleWeightsDescr,
            v0_5.TensorflowSavedModelBundleWeightsDescr,
        ),
    ):
        if entry.dependencies is None:
            conda_env = _get_default_tf_env(tensorflow_version=entry.tensorflow_version)
        else:
            conda_env = _get_env_from_deps(entry.dependencies)
    elif isinstance(
        entry,
        (v0_4.KerasHdf5WeightsDescr, v0_5.KerasHdf5WeightsDescr),
    ):
        conda_env = _get_default_tf_env(tensorflow_version=entry.tensorflow_version)
    else:
        assert_never(entry)

    if env_name == "DROP":
        conda_env.name = None
    elif env_name is not None:
        conda_env.name = env_name

    return conda_env

get the recommended Conda environment for a given weights entry description
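The function dispatches on the weights entry type and falls back to user-declared dependencies where the format allows them. This dispatch pattern can be sketched with stand-in classes; `OnnxWeights`, `PytorchStateDictWeights`, and `pick_env` below are hypothetical stand-ins, not part of bioimageio.spec.

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in weight descriptions (the real ones live in bioimageio.spec.model);
# this only sketches the dispatch logic of get_conda_env.
@dataclass
class OnnxWeights:
    opset_version: int

@dataclass
class PytorchStateDictWeights:
    pytorch_version: str
    dependencies: Optional[str] = None  # e.g. path to an environment.yaml

def pick_env(entry) -> str:
    """Return a label for the recommended environment (sketch only)."""
    if isinstance(entry, OnnxWeights):
        return f"onnx-opset{entry.opset_version}"
    if isinstance(entry, PytorchStateDictWeights):
        if entry.dependencies is None:
            # no custom dependencies: use the framework default
            return f"pytorch-{entry.pytorch_version}"
        return f"custom:{entry.dependencies}"
    raise TypeError(f"unsupported weights entry: {type(entry).__name__}")
```

As in the real function, declared dependencies take precedence over the framework default environment.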

def get_resource_package_content( rd: Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], 
Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')]], /, *, bioimageio_yaml_file_name: str = 'rdf.yaml', weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> Dict[str, Union[bioimageio.spec._internal.url.HttpUrl, Annotated[pathlib.Path, PathType(path_type='file'), Predicate(is_absolute), FieldInfo(annotation=NoneType, required=True, title='AbsoluteFilePath')], Dict[str, YamlValue], zipp.Path]]:
def get_resource_package_content(
    rd: ResourceDescr,
    /,
    *,
    bioimageio_yaml_file_name: FileName = BIOIMAGEIO_YAML,
    weights_priority_order: Optional[Sequence[WeightsFormat]] = None,  # model only
) -> Dict[FileName, Union[HttpUrl, AbsoluteFilePath, BioimageioYamlContent, ZipPath]]:
    ret: Dict[
        FileName, Union[HttpUrl, AbsoluteFilePath, BioimageioYamlContent, ZipPath]
    ] = {}
    for k, v in get_package_content(
        rd,
        bioimageio_yaml_file_name=bioimageio_yaml_file_name,
        weights_priority_order=weights_priority_order,
    ).items():
        if isinstance(v, FileDescr):
            if isinstance(v.source, (Path, RelativeFilePath)):
                ret[k] = v.source.absolute()
            else:
                ret[k] = v.source

        else:
            ret[k] = v

    return ret
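The source-resolution step above can be illustrated in isolation: file descriptions with a local path source become absolute paths, while everything else (URLs, inline YAML content) passes through unchanged. `FileDescr` here is a simplified stand-in for the bioimageio.spec class of the same name.

```python
from pathlib import Path

# FileDescr is a simplified stand-in for bioimageio.spec's class;
# only the .source attribute matters for this sketch.
class FileDescr:
    def __init__(self, source):
        self.source = source

def resolve(content: dict) -> dict:
    """Map package file names to absolute paths, URLs, or inline content."""
    ret = {}
    for name, v in content.items():
        if isinstance(v, FileDescr):
            # local paths are made absolute; URL strings pass through
            ret[name] = v.source.absolute() if isinstance(v.source, Path) else v.source
        else:
            ret[name] = v
    return ret
```

For example, a relative `Path("weights.pt")` is resolved against the current working directory, while an inline `rdf.yaml` dict stays a dict.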
def get_validation_context( default: Optional[ValidationContext] = None) -> ValidationContext:
def get_validation_context(
    default: Optional[ValidationContext] = None,
) -> ValidationContext:
    """Get the currently active validation context (or a default)"""
    return _validation_context_var.get() or default or ValidationContext()

Get the currently active validation context (or a default)

class InvalidDescr(
    ResourceDescrBase,
    extra="allow",
    title="An invalid resource description",
):
    """A representation of an invalid resource description"""

    implemented_type: ClassVar[Literal["unknown"]] = "unknown"
    if TYPE_CHECKING:  # see NodeWithExplicitlySetFields
        type: Any = "unknown"
    else:
        type: Any

    implemented_format_version: ClassVar[Literal["unknown"]] = "unknown"
    if TYPE_CHECKING:  # see NodeWithExplicitlySetFields
        format_version: Any = "unknown"
    else:
        format_version: Any

A representation of an invalid resource description

implemented_type: ClassVar[Literal['unknown']] = 'unknown'
implemented_format_version: ClassVar[Literal['unknown']] = 'unknown'
implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 0, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'allow', 'frozen': False, 'populate_by_name': True, 'revalidate_instances': 'never', 'validate_assignment': True, 'validate_default': False, 'validate_return': True, 'use_attribute_docstrings': True, 'model_title_generator': <function _node_title_generator>, 'validate_by_alias': True, 'validate_by_name': True, 'title': 'An invalid resource description'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
LatestResourceDescr = typing.Union[typing.Annotated[typing.Union[ApplicationDescr, DatasetDescr, ModelDescr, NotebookDescr], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], GenericDescr]
def load_dataset_description( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[bioimageio.spec._internal.io_basics.Sha256]]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')]:
def load_dataset_description(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> AnyDatasetDescr:
    """same as `load_description`, but additionally ensures that the loaded
    description is valid and of type 'dataset'.
    """
    rd = load_description(
        source,
        format_version=format_version,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
        sha256=sha256,
    )
    return ensure_description_is_dataset(rd)

same as load_description, but additionally ensures that the loaded description is valid and of type 'dataset'.

def load_description_and_validate_format_only( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[bioimageio.spec._internal.io_basics.Sha256]]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> ValidationSummary:
def load_description_and_validate_format_only(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> ValidationSummary:
    """same as `load_description`, but only return the validation summary.

    Returns:
        Validation summary of the bioimage.io resource found at `source`.

    """
    rd = load_description(
        source,
        format_version=format_version,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
        sha256=sha256,
    )
    assert rd.validation_summary is not None
    return rd.validation_summary

same as load_description, but only return the validation summary.

Returns:

Validation summary of the bioimage.io resource found at source.

def load_description( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[bioimageio.spec._internal.io_basics.Sha256]]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, 
title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], InvalidDescr]:
def load_description(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> Union[ResourceDescr, InvalidDescr]:
    """load a bioimage.io resource description

    Args:
        source: Path or URL to an rdf.yaml or a bioimage.io package
                (zip-file with rdf.yaml in it).
        format_version: (optional) Use this argument to load the resource and
                        convert its metadata to a higher format_version.
        perform_io_checks: Whether or not to perform validation that requires
                           file IO, e.g. downloading remote files. The existence
                           of local absolute file paths is still checked.
        known_files: Allows bypassing download and hashing of referenced files
                     (even if perform_io_checks is True).
                     Checked files will be added to this dictionary
                     with their SHA-256 value.
        sha256: Optional SHA-256 value of **source**

    Returns:
        An object holding all metadata of the bioimage.io resource

    """
    if isinstance(source, ResourceDescrBase):
        name = getattr(source, "name", f"{str(source)[:10]}...")
        logger.warning("returning already loaded description '{}' as is", name)
        return source  # pyright: ignore[reportReturnType]

    opened = open_bioimageio_yaml(source, sha256=sha256)

    context = get_validation_context().replace(
        root=opened.original_root,
        file_name=opened.original_file_name,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
    )

    return build_description(
        opened.content,
        context=context,
        format_version=format_version,
    )

load a bioimage.io resource description

Arguments:
  • source: Path or URL to an rdf.yaml or a bioimage.io package (zip-file with rdf.yaml in it).
  • format_version: (optional) Use this argument to load the resource and convert its metadata to a higher format_version.
  • perform_io_checks: Whether or not to perform validation that requires file IO, e.g. downloading remote files. The existence of local absolute file paths is still checked.
  • known_files: Allows bypassing download and hashing of referenced files (even if perform_io_checks is True). Checked files will be added to this dictionary with their SHA-256 value.
  • sha256: Optional SHA-256 value of source
Returns:

An object holding all metadata of the bioimage.io resource

def load_model_description( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[bioimageio.spec._internal.io_basics.Sha256]]] = None, sha256: Optional[bioimageio.spec._internal.io_basics.Sha256] = None) -> Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')]:
def load_model_description(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> AnyModelDescr:
    """same as `load_description`, but additionally ensures that the loaded
    description is valid and of type 'model'.

    Raises:
        ValueError: for invalid or non-model resources
    """
    rd = load_description(
        source,
        format_version=format_version,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
        sha256=sha256,
    )
    return ensure_description_is_model(rd)

same as load_description, but additionally ensures that the loaded description is valid and of type 'model'.

Raises:
  • ValueError: for invalid or non-model resources
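The type guard behind that ValueError (ensure_description_is_model in bioimageio.spec) can be sketched with plain dicts standing in for loaded descriptions; `ensure_is_model` below is a hypothetical stand-in, not the library function.

```python
# Sketch of the type guard behind load_model_description's ValueError;
# descriptions are stand-in dicts here, not bioimageio.spec objects.
def ensure_is_model(rd: dict) -> dict:
    """Raise ValueError unless the description is of type 'model'."""
    if rd.get("type") != "model":
        raise ValueError(
            f"expected a model description, got type={rd.get('type')!r}"
        )
    return rd
```

A dataset description, for instance, is rejected here even though it would load fine via the generic `load_description`.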
class ModelDescr(GenericModelDescrBase):
    """Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights.
    These fields are typically stored in a YAML file which we call a model resource description file (model RDF).
    """

    implemented_format_version: ClassVar[Literal["0.5.4"]] = "0.5.4"
    if TYPE_CHECKING:
        format_version: Literal["0.5.4"] = "0.5.4"
    else:
        format_version: Literal["0.5.4"]
        """Version of the bioimage.io model description specification used.
        When creating a new model always use the latest micro/patch version described here.
        The `format_version` is important for any consumer software to understand how to parse the fields.
        """

    implemented_type: ClassVar[Literal["model"]] = "model"
    if TYPE_CHECKING:
        type: Literal["model"] = "model"
    else:
        type: Literal["model"]
        """Specialized resource type 'model'"""

    id: Optional[ModelId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    authors: NotEmpty[List[Author]]
    """The authors are the creators of the model RDF and the primary points of contact."""

    documentation: FileSource_documentation
    """URL or relative path to a markdown file with additional documentation.
    The recommended documentation file name is `README.md`. An `.md` suffix is mandatory.
    The documentation should include a '#[#] Validation' (sub)section
    with details on how to quantitatively validate the model on unseen data."""

    @field_validator("documentation", mode="after")
    @classmethod
    def _validate_documentation(
        cls, value: FileSource_documentation
    ) -> FileSource_documentation:
        if not get_validation_context().perform_io_checks:
            return value

        doc_reader = get_reader(value)
        doc_content = doc_reader.read().decode(encoding="utf-8")
        if not re.search("#.*[vV]alidation", doc_content):
            issue_warning(
                "No '# Validation' (sub)section found in {value}.",
                value=value,
                field="documentation",
            )

        return value

    inputs: NotEmpty[Sequence[InputTensorDescr]]
    """Describes the input tensors expected by this model."""

    @field_validator("inputs", mode="after")
    @classmethod
    def _validate_input_axes(
        cls, inputs: Sequence[InputTensorDescr]
    ) -> Sequence[InputTensorDescr]:
        input_size_refs = cls._get_axes_with_independent_size(inputs)

        for i, ipt in enumerate(inputs):
            valid_independent_refs: Dict[
                Tuple[TensorId, AxisId],
                Tuple[TensorDescr, AnyAxis, Union[int, ParameterizedSize]],
            ] = {
                **{
                    (ipt.id, a.id): (ipt, a, a.size)
                    for a in ipt.axes
                    if not isinstance(a, BatchAxis)
                    and isinstance(a.size, (int, ParameterizedSize))
                },
                **input_size_refs,
            }
            for a, ax in enumerate(ipt.axes):
                cls._validate_axis(
                    "inputs",
                    i=i,
                    tensor_id=ipt.id,
                    a=a,
                    axis=ax,
                    valid_independent_refs=valid_independent_refs,
                )
        return inputs

    @staticmethod
    def _validate_axis(
        field_name: str,
        i: int,
        tensor_id: TensorId,
        a: int,
        axis: AnyAxis,
        valid_independent_refs: Dict[
            Tuple[TensorId, AxisId],
            Tuple[TensorDescr, AnyAxis, Union[int, ParameterizedSize]],
        ],
    ):
        if isinstance(axis, BatchAxis) or isinstance(
            axis.size, (int, ParameterizedSize, DataDependentSize)
        ):
            return
        elif not isinstance(axis.size, SizeReference):
            assert_never(axis.size)

        # validate axis.size SizeReference
        ref = (axis.size.tensor_id, axis.size.axis_id)
        if ref not in valid_independent_refs:
            raise ValueError(
                "Invalid tensor axis reference at"
                + f" {field_name}[{i}].axes[{a}].size: {axis.size}."
            )
        if ref == (tensor_id, axis.id):
            raise ValueError(
                "Self-referencing not allowed for"
                + f" {field_name}[{i}].axes[{a}].size: {axis.size}"
            )
        if axis.type == "channel":
            if valid_independent_refs[ref][1].type != "channel":
                raise ValueError(
                    "A channel axis' size may only reference another fixed size"
                    + " channel axis."
                )
            if isinstance(axis.channel_names, str) and "{i}" in axis.channel_names:
                ref_size = valid_independent_refs[ref][2]
                assert isinstance(ref_size, int), (
                    "channel axis ref (another channel axis) has to specify fixed"
                    + " size"
                )
                generated_channel_names = [
                    Identifier(axis.channel_names.format(i=i))
                    for i in range(1, ref_size + 1)
                ]
                axis.channel_names = generated_channel_names

        if (ax_unit := getattr(axis, "unit", None)) != (
            ref_unit := getattr(valid_independent_refs[ref][1], "unit", None)
        ):
            raise ValueError(
                "The units of an axis and its reference axis need to match, but"
                + f" '{ax_unit}' != '{ref_unit}'."
            )
        ref_axis = valid_independent_refs[ref][1]
        if isinstance(ref_axis, BatchAxis):
            raise ValueError(
                f"Invalid reference axis '{ref_axis.id}' for {tensor_id}.{axis.id}"
                + " (a batch axis is not allowed as reference)."
            )

        if isinstance(axis, WithHalo):
            min_size = axis.size.get_size(axis, ref_axis, n=0)
            if (min_size - 2 * axis.halo) < 1:
                raise ValueError(
                    f"axis {axis.id} with minimum size {min_size} is too small for halo"
                    + f" {axis.halo}."
                )

            input_halo = axis.halo * axis.scale / ref_axis.scale
            if input_halo != int(input_halo) or input_halo % 2 == 1:
                raise ValueError(
                    f"input_halo {input_halo} (output_halo {axis.halo} *"
                    + f" output_scale {axis.scale} / input_scale {ref_axis.scale})"
                    + f"     {tensor_id}.{axis.id}."
                )

    @model_validator(mode="after")
    def _validate_test_tensors(self) -> Self:
        if not get_validation_context().perform_io_checks:
            return self

        test_output_arrays = [load_array(descr.test_tensor) for descr in self.outputs]
        test_input_arrays = [load_array(descr.test_tensor) for descr in self.inputs]

        tensors = {
            descr.id: (descr, array)
            for descr, array in zip(
                chain(self.inputs, self.outputs), test_input_arrays + test_output_arrays
            )
        }
        validate_tensors(tensors, tensor_origin="test_tensor")

        output_arrays = {
            descr.id: array for descr, array in zip(self.outputs, test_output_arrays)
        }
        for rep_tol in self.config.bioimageio.reproducibility_tolerance:
            if not rep_tol.absolute_tolerance:
                continue

            if rep_tol.output_ids:
                out_arrays = {
                    oid: a
                    for oid, a in output_arrays.items()
                    if oid in rep_tol.output_ids
                }
            else:
                out_arrays = output_arrays

            for out_id, array in out_arrays.items():
                if rep_tol.absolute_tolerance > (max_test_value := array.max()) * 0.01:
                    raise ValueError(
                        "config.bioimageio.reproducibility_tolerance.absolute_tolerance="
                        + f"{rep_tol.absolute_tolerance} > 0.01*{max_test_value}"
                        + f" (1% of the maximum value of the test tensor '{out_id}')"
                    )

        return self

    @model_validator(mode="after")
    def _validate_tensor_references_in_proc_kwargs(self, info: ValidationInfo) -> Self:
        ipt_refs = {t.id for t in self.inputs}
        out_refs = {t.id for t in self.outputs}
        for ipt in self.inputs:
            for p in ipt.preprocessing:
                ref = p.kwargs.get("reference_tensor")
                if ref is None:
                    continue
                if ref not in ipt_refs:
                    raise ValueError(
                        f"`reference_tensor` '{ref}' not found. Valid input tensor"
                        + f" references are: {ipt_refs}."
                    )

        for out in self.outputs:
            for p in out.postprocessing:
                ref = p.kwargs.get("reference_tensor")
                if ref is None:
                    continue

                if ref not in ipt_refs and ref not in out_refs:
                    raise ValueError(
                        f"`reference_tensor` '{ref}' not found. Valid tensor references"
                        + f" are: {ipt_refs | out_refs}."
                    )

        return self

    # TODO: use validate funcs in validate_test_tensors
    # def validate_inputs(self, input_tensors: Mapping[TensorId, NDArray[Any]]) -> Mapping[TensorId, NDArray[Any]]:

2772    name: Annotated[
2773        Annotated[
2774            str, RestrictCharacters(string.ascii_letters + string.digits + "_+- ()")
2775        ],
2776        MinLen(5),
2777        MaxLen(128),
2778        warn(MaxLen(64), "Name longer than 64 characters.", INFO),
2779    ]
2780    """A human-readable name of this model.
2781    It should be no longer than 64 characters
2782    and may only contain letters, numbers, underscores, minus signs, parentheses and spaces.
2783    We recommend choosing a name that refers to the model's task and image modality.
2784    """
2785
2786    outputs: NotEmpty[Sequence[OutputTensorDescr]]
2787    """Describes the output tensors."""
2788
2789    @field_validator("outputs", mode="after")
2790    @classmethod
2791    def _validate_tensor_ids(
2792        cls, outputs: Sequence[OutputTensorDescr], info: ValidationInfo
2793    ) -> Sequence[OutputTensorDescr]:
2794        tensor_ids = [
2795            t.id for t in info.data.get("inputs", []) + info.data.get("outputs", [])
2796        ]
2797        duplicate_tensor_ids: List[str] = []
2798        seen: Set[str] = set()
2799        for t in tensor_ids:
2800            if t in seen:
2801                duplicate_tensor_ids.append(t)
2802
2803            seen.add(t)
2804
2805        if duplicate_tensor_ids:
2806            raise ValueError(f"Duplicate tensor ids: {duplicate_tensor_ids}")
2807
2808        return outputs
2809
2810    @staticmethod
2811    def _get_axes_with_parameterized_size(
2812        io: Union[Sequence[InputTensorDescr], Sequence[OutputTensorDescr]],
2813    ):
2814        return {
2815            f"{t.id}.{a.id}": (t, a, a.size)
2816            for t in io
2817            for a in t.axes
2818            if not isinstance(a, BatchAxis) and isinstance(a.size, ParameterizedSize)
2819        }
2820
2821    @staticmethod
2822    def _get_axes_with_independent_size(
2823        io: Union[Sequence[InputTensorDescr], Sequence[OutputTensorDescr]],
2824    ):
2825        return {
2826            (t.id, a.id): (t, a, a.size)
2827            for t in io
2828            for a in t.axes
2829            if not isinstance(a, BatchAxis)
2830            and isinstance(a.size, (int, ParameterizedSize))
2831        }
2832
2833    @field_validator("outputs", mode="after")
2834    @classmethod
2835    def _validate_output_axes(
2836        cls, outputs: List[OutputTensorDescr], info: ValidationInfo
2837    ) -> List[OutputTensorDescr]:
2838        input_size_refs = cls._get_axes_with_independent_size(
2839            info.data.get("inputs", [])
2840        )
2841        output_size_refs = cls._get_axes_with_independent_size(outputs)
2842
2843        for i, out in enumerate(outputs):
2844            valid_independent_refs: Dict[
2845                Tuple[TensorId, AxisId],
2846                Tuple[TensorDescr, AnyAxis, Union[int, ParameterizedSize]],
2847            ] = {
2848                **{
2849                    (out.id, a.id): (out, a, a.size)
2850                    for a in out.axes
2851                    if not isinstance(a, BatchAxis)
2852                    and isinstance(a.size, (int, ParameterizedSize))
2853                },
2854                **input_size_refs,
2855                **output_size_refs,
2856            }
2857            for a, ax in enumerate(out.axes):
2858                cls._validate_axis(
2859                    "outputs",
2860                    i,
2861                    out.id,
2862                    a,
2863                    ax,
2864                    valid_independent_refs=valid_independent_refs,
2865                )
2866
2867        return outputs
2868
2869    packaged_by: List[Author] = Field(
2870        default_factory=cast(Callable[[], List[Author]], list)
2871    )
2872    """The persons that have packaged and uploaded this model.
2873    Only required if those persons differ from the `authors`."""
2874
2875    parent: Optional[LinkedModel] = None
2876    """The model from which this model is derived, e.g. by fine-tuning the weights."""
2877
2878    @model_validator(mode="after")
2879    def _validate_parent_is_not_self(self) -> Self:
2880        if self.parent is not None and self.parent.id == self.id:
2881            raise ValueError("A model description may not reference itself as parent.")
2882
2883        return self
2884
2885    run_mode: Annotated[
2886        Optional[RunMode],
2887        warn(None, "Run mode '{value}' has limited support across consumer software."),
2888    ] = None
2889    """Custom run mode for this model: for more complex prediction procedures like test time
2890    data augmentation that currently cannot be expressed in the specification.
2891    No standard run modes are defined yet."""
2892
2893    timestamp: Datetime = Field(default_factory=Datetime.now)
2894    """Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format
2895    with a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).
2896    (In Python a datetime object is valid, too)."""
2897
2898    training_data: Annotated[
2899        Union[None, LinkedDataset, DatasetDescr, DatasetDescr02],
2900        Field(union_mode="left_to_right"),
2901    ] = None
2902    """The dataset used to train this model"""
2903
2904    weights: Annotated[WeightsDescr, WrapSerializer(package_weights)]
2905    """The weights for this model.
2906    Weights can be given for different formats, but should otherwise be equivalent.
2907    The available weight formats determine which consumers can use this model."""
2908
2909    config: Config = Field(default_factory=Config)
2910
2911    @model_validator(mode="after")
2912    def _add_default_cover(self) -> Self:
2913        if not get_validation_context().perform_io_checks or self.covers:
2914            return self
2915
2916        try:
2917            generated_covers = generate_covers(
2918                [(t, load_array(t.test_tensor)) for t in self.inputs],
2919                [(t, load_array(t.test_tensor)) for t in self.outputs],
2920            )
2921        except Exception as e:
2922            issue_warning(
2923                "Failed to generate cover image(s): {e}",
2924                value=self.covers,
2925                msg_context=dict(e=e),
2926                field="covers",
2927            )
2928        else:
2929            self.covers.extend(generated_covers)
2930
2931        return self
2932
2933    def get_input_test_arrays(self) -> List[NDArray[Any]]:
2934        data = [load_array(ipt.test_tensor) for ipt in self.inputs]
2935        assert all(isinstance(d, np.ndarray) for d in data)
2936        return data
2937
2938    def get_output_test_arrays(self) -> List[NDArray[Any]]:
2939        data = [load_array(out.test_tensor) for out in self.outputs]
2940        assert all(isinstance(d, np.ndarray) for d in data)
2941        return data
2942
2943    @staticmethod
2944    def get_batch_size(tensor_sizes: Mapping[TensorId, Mapping[AxisId, int]]) -> int:
2945        batch_size = 1
2946        tensor_with_batchsize: Optional[TensorId] = None
2947        for tid in tensor_sizes:
2948            for aid, s in tensor_sizes[tid].items():
2949                if aid != BATCH_AXIS_ID or s == 1 or s == batch_size:
2950                    continue
2951
2952                if batch_size != 1:
2953                    assert tensor_with_batchsize is not None
2954                    raise ValueError(
2955                        f"batch size mismatch for tensors '{tensor_with_batchsize}' ({batch_size}) and '{tid}' ({s})"
2956                    )
2957
2958                batch_size = s
2959                tensor_with_batchsize = tid
2960
2961        return batch_size
2962
2963    def get_output_tensor_sizes(
2964        self, input_sizes: Mapping[TensorId, Mapping[AxisId, int]]
2965    ) -> Dict[TensorId, Dict[AxisId, Union[int, _DataDepSize]]]:
2966        """Returns the tensor output sizes for given **input_sizes**.
2967        Only if **input_sizes** has a valid input shape, the tensor output size is exact.
2968        Otherwise it might be larger than the actual (valid) output"""
2969        batch_size = self.get_batch_size(input_sizes)
2970        ns = self.get_ns(input_sizes)
2971
2972        tensor_sizes = self.get_tensor_sizes(ns, batch_size=batch_size)
2973        return tensor_sizes.outputs
2974
2975    def get_ns(self, input_sizes: Mapping[TensorId, Mapping[AxisId, int]]):
2976        """get parameter `n` for each parameterized axis
2977        such that the valid input size is >= the given input size"""
2978        ret: Dict[Tuple[TensorId, AxisId], ParameterizedSize_N] = {}
2979        axes = {t.id: {a.id: a for a in t.axes} for t in self.inputs}
2980        for tid in input_sizes:
2981            for aid, s in input_sizes[tid].items():
2982                size_descr = axes[tid][aid].size
2983                if isinstance(size_descr, ParameterizedSize):
2984                    ret[(tid, aid)] = size_descr.get_n(s)
2985                elif size_descr is None or isinstance(size_descr, (int, SizeReference)):
2986                    pass
2987                else:
2988                    assert_never(size_descr)
2989
2990        return ret
2991
2992    def get_tensor_sizes(
2993        self, ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N], batch_size: int
2994    ) -> _TensorSizes:
2995        axis_sizes = self.get_axis_sizes(ns, batch_size=batch_size)
2996        return _TensorSizes(
2997            {
2998                t: {
2999                    aa: axis_sizes.inputs[(tt, aa)]
3000                    for tt, aa in axis_sizes.inputs
3001                    if tt == t
3002                }
3003                for t in {tt for tt, _ in axis_sizes.inputs}
3004            },
3005            {
3006                t: {
3007                    aa: axis_sizes.outputs[(tt, aa)]
3008                    for tt, aa in axis_sizes.outputs
3009                    if tt == t
3010                }
3011                for t in {tt for tt, _ in axis_sizes.outputs}
3012            },
3013        )
3014
3015    def get_axis_sizes(
3016        self,
3017        ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N],
3018        batch_size: Optional[int] = None,
3019        *,
3020        max_input_shape: Optional[Mapping[Tuple[TensorId, AxisId], int]] = None,
3021    ) -> _AxisSizes:
3022        """Determine input and output block shape for scale factors **ns**
3023        of parameterized input sizes.
3024
3025        Args:
3026            ns: Scale factor `n` for each axis (keyed by (tensor_id, axis_id))
3027                that is parameterized as `size = min + n * step`.
3028            batch_size: The desired size of the batch dimension.
3029                If given **batch_size** overwrites any batch size present in
3030                **max_input_shape**. Default 1.
3031            max_input_shape: Limits the derived block shapes.
3032                Each axis for which the input size, parameterized by `n`, is larger
3033                than **max_input_shape** is set to the minimal value `n_min` for which
3034                this is still true.
3035                Use this for small input samples or large values of **ns**.
3036                Or simply whenever you know the full input shape.
3037
3038        Returns:
3039            Resolved axis sizes for model inputs and outputs.
3040        """
3041        max_input_shape = max_input_shape or {}
3042        if batch_size is None:
3043            for (_t_id, a_id), s in max_input_shape.items():
3044                if a_id == BATCH_AXIS_ID:
3045                    batch_size = s
3046                    break
3047            else:
3048                batch_size = 1
3049
3050        all_axes = {
3051            t.id: {a.id: a for a in t.axes} for t in chain(self.inputs, self.outputs)
3052        }
3053
3054        inputs: Dict[Tuple[TensorId, AxisId], int] = {}
3055        outputs: Dict[Tuple[TensorId, AxisId], Union[int, _DataDepSize]] = {}
3056
3057        def get_axis_size(a: Union[InputAxis, OutputAxis]):
3058            if isinstance(a, BatchAxis):
3059                if (t_descr.id, a.id) in ns:
3060                    logger.warning(
3061                        "Ignoring unexpected size increment factor (n) for batch axis"
3062                        + " of tensor '{}'.",
3063                        t_descr.id,
3064                    )
3065                return batch_size
3066            elif isinstance(a.size, int):
3067                if (t_descr.id, a.id) in ns:
3068                    logger.warning(
3069                        "Ignoring unexpected size increment factor (n) for fixed size"
3070                        + " axis '{}' of tensor '{}'.",
3071                        a.id,
3072                        t_descr.id,
3073                    )
3074                return a.size
3075            elif isinstance(a.size, ParameterizedSize):
3076                if (t_descr.id, a.id) not in ns:
3077                    raise ValueError(
3078                        "Size increment factor (n) missing for parametrized axis"
3079                        + f" '{a.id}' of tensor '{t_descr.id}'."
3080                    )
3081                n = ns[(t_descr.id, a.id)]
3082                s_max = max_input_shape.get((t_descr.id, a.id))
3083                if s_max is not None:
3084                    n = min(n, a.size.get_n(s_max))
3085
3086                return a.size.get_size(n)
3087
3088            elif isinstance(a.size, SizeReference):
3089                if (t_descr.id, a.id) in ns:
3090                    logger.warning(
3091                        "Ignoring unexpected size increment factor (n) for axis '{}'"
3092                        + " of tensor '{}' with size reference.",
3093                        a.id,
3094                        t_descr.id,
3095                    )
3096                assert not isinstance(a, BatchAxis)
3097                ref_axis = all_axes[a.size.tensor_id][a.size.axis_id]
3098                assert not isinstance(ref_axis, BatchAxis)
3099                ref_key = (a.size.tensor_id, a.size.axis_id)
3100                ref_size = inputs.get(ref_key, outputs.get(ref_key))
3101                assert ref_size is not None, ref_key
3102                assert not isinstance(ref_size, _DataDepSize), ref_key
3103                return a.size.get_size(
3104                    axis=a,
3105                    ref_axis=ref_axis,
3106                    ref_size=ref_size,
3107                )
3108            elif isinstance(a.size, DataDependentSize):
3109                if (t_descr.id, a.id) in ns:
3110                    logger.warning(
3111                        "Ignoring unexpected increment factor (n) for data dependent"
3112                        + " size axis '{}' of tensor '{}'.",
3113                        a.id,
3114                        t_descr.id,
3115                    )
3116                return _DataDepSize(a.size.min, a.size.max)
3117            else:
3118                assert_never(a.size)
3119
3120        # first resolve all input sizes except the `SizeReference` ones
3121        for t_descr in self.inputs:
3122            for a in t_descr.axes:
3123                if not isinstance(a.size, SizeReference):
3124                    s = get_axis_size(a)
3125                    assert not isinstance(s, _DataDepSize)
3126                    inputs[t_descr.id, a.id] = s
3127
3128        # resolve all other input axis sizes
3129        for t_descr in self.inputs:
3130            for a in t_descr.axes:
3131                if isinstance(a.size, SizeReference):
3132                    s = get_axis_size(a)
3133                    assert not isinstance(s, _DataDepSize)
3134                    inputs[t_descr.id, a.id] = s
3135
3136        # resolve all output axis sizes
3137        for t_descr in self.outputs:
3138            for a in t_descr.axes:
3139                assert not isinstance(a.size, ParameterizedSize)
3140                s = get_axis_size(a)
3141                outputs[t_descr.id, a.id] = s
3142
3143        return _AxisSizes(inputs=inputs, outputs=outputs)
3144
3145    @model_validator(mode="before")
3146    @classmethod
3147    def _convert(cls, data: Dict[str, Any]) -> Dict[str, Any]:
3148        cls.convert_from_old_format_wo_validation(data)
3149        return data
3150
3151    @classmethod
3152    def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
3153        """Convert metadata following an older format version to this classes' format
3154        without validating the result.
3155        """
3156        if (
3157            data.get("type") == "model"
3158            and isinstance(fv := data.get("format_version"), str)
3159            and fv.count(".") == 2
3160        ):
3161            fv_parts = fv.split(".")
3162            if any(not p.isdigit() for p in fv_parts):
3163                return
3164
3165            fv_tuple = tuple(map(int, fv_parts))
3166
3167            assert cls.implemented_format_version_tuple[0:2] == (0, 5)
3168            if fv_tuple[:2] in ((0, 3), (0, 4)):
3169                m04 = _ModelDescr_v0_4.load(data)
3170                if isinstance(m04, InvalidDescr):
3171                    try:
3172                        updated = _model_conv.convert_as_dict(
3173                            m04  # pyright: ignore[reportArgumentType]
3174                        )
3175                    except Exception as e:
3176                        logger.error(
3177                            "Failed to convert from invalid model 0.4 description."
3178                            + f"\nerror: {e}"
3179                            + "\nProceeding with model 0.5 validation without conversion."
3180                        )
3181                        updated = None
3182                else:
3183                    updated = _model_conv.convert_as_dict(m04)
3184
3185                if updated is not None:
3186                    data.clear()
3187                    data.update(updated)
3188
3189            elif fv_tuple[:2] == (0, 5):
3190                # bump patch version
3191                data["format_version"] = cls.implemented_format_version

Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. These fields are typically stored in a YAML file which we call a model resource description file (model RDF).

implemented_format_version: ClassVar[Literal['0.5.4']] = '0.5.4'
implemented_type: ClassVar[Literal['model']] = 'model'

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

authors: Annotated[List[bioimageio.spec.generic.v0_3.Author], MinLen(min_length=1)]

The authors are the creators of the model RDF and the primary points of contact.

documentation: Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')]), AfterValidator(func=<function wo_special_file_name>), PlainSerializer(func=<function _package_serializer>, return_type=PydanticUndefined, when_used='unless-none'), WithSuffix(suffix='.md', case_sensitive=True), FieldInfo(annotation=NoneType, required=True, examples=['https://raw.githubusercontent.com/bioimage-io/spec-bioimage-io/main/example_descriptions/models/unet2d_nuclei_broad/README.md', 'README.md'])]

URL or relative path to a markdown file with additional documentation. The recommended documentation file name is README.md. An .md suffix is mandatory. The documentation should include a '#[#] Validation' (sub)section with details on how to quantitatively validate the model on unseen data.

inputs: Annotated[Sequence[bioimageio.spec.model.v0_5.InputTensorDescr], MinLen(min_length=1)]

Describes the input tensors expected by this model.

name: Annotated[str, RestrictCharacters(alphabet='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_+- ()'), MinLen(min_length=5), MaxLen(max_length=128), AfterWarner(func=<function as_warning.<locals>.wrapper>, severity=20, msg='Name longer than 64 characters.', context={'typ': Annotated[Any, MaxLen(max_length=64)]})]

A human-readable name of this model. It should be no longer than 64 characters and may only contain letters, numbers, underscores, minus signs, parentheses and spaces. We recommend choosing a name that refers to the model's task and image modality.
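These constraints (5 to 128 characters, a restricted alphabet, and a soft 64-character limit) can be checked outside the Pydantic model with a few lines of plain Python. `check_model_name` below is a hypothetical helper, not part of bioimageio.spec:

```python
import string
import warnings

# alphabet from the `name` field's RestrictCharacters annotation
ALLOWED = set(string.ascii_letters + string.digits + "_+- ()")

def check_model_name(name: str) -> str:
    """Hypothetical standalone check mirroring the documented constraints."""
    if not 5 <= len(name) <= 128:
        raise ValueError("name must be 5 to 128 characters long")
    invalid = set(name) - ALLOWED
    if invalid:
        raise ValueError(f"invalid characters in name: {sorted(invalid)}")
    if len(name) > 64:
        # the spec only warns here (severity INFO), it does not fail
        warnings.warn("Name longer than 64 characters.")
    return name

check_model_name("2D UNet (nuclei segmentation)")  # passes all checks
```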

outputs: Annotated[Sequence[bioimageio.spec.model.v0_5.OutputTensorDescr], MinLen(min_length=1)]

Describes the output tensors.

The persons that have packaged and uploaded this model. Only required if those persons differ from the authors.

The model from which this model is derived, e.g. by fine-tuning the weights.

run_mode: Annotated[Optional[bioimageio.spec.model.v0_4.RunMode], AfterWarner(func=<function as_warning.<locals>.wrapper>, severity=30, msg="Run mode '{value}' has limited support across consumer software.", context={'typ': None})]

Custom run mode for this model: for more complex prediction procedures like test time data augmentation that currently cannot be expressed in the specification. No standard run modes are defined yet.

Timestamp in ISO 8601 format with a few restrictions listed here. (In Python a datetime object is valid, too).
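The restrictions referenced above are those of Python's `datetime.fromisoformat`; a quick sanity check of what round-trips:

```python
from datetime import datetime, timezone

# Any string accepted by datetime.fromisoformat is a valid timestamp,
# e.g. a full date-time with an explicit UTC offset:
ts = datetime.fromisoformat("2019-12-11T12:22:32+00:00")
assert ts.tzinfo is not None  # timezone-aware

# In Python, a datetime object is accepted directly as well;
# its isoformat() string parses back to an equal value:
now = datetime.now(timezone.utc)
assert datetime.fromisoformat(now.isoformat()) == now
```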

training_data: Annotated[Union[NoneType, bioimageio.spec.dataset.v0_3.LinkedDataset, DatasetDescr, bioimageio.spec.dataset.v0_2.DatasetDescr], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])]

The dataset used to train this model

weights: Annotated[bioimageio.spec.model.v0_5.WeightsDescr, WrapSerializer(func=<function package_weights>, return_type=PydanticUndefined, when_used='always')]

The weights for this model. Weights can be given for different formats, but should otherwise be equivalent. The available weight formats determine which consumers can use this model.

def get_input_test_arrays(self) -> List[numpy.ndarray[tuple[Any, ...], numpy.dtype[Any]]]:
def get_output_test_arrays(self) -> List[numpy.ndarray[tuple[Any, ...], numpy.dtype[Any]]]:
@staticmethod
def get_batch_size(tensor_sizes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[bioimageio.spec.model.v0_5.AxisId, int]]) -> int:
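The reconciliation logic of `get_batch_size` (all tensors must agree on one batch size; a size of 1 is compatible with anything) can be mirrored in a standalone sketch with no bioimageio.spec dependency (hypothetical names, string ids standing in for `TensorId`/`AxisId`):

```python
from typing import Dict, Optional

BATCH_AXIS_ID = "batch"  # stands in for bioimageio.spec's batch AxisId

def get_batch_size(tensor_sizes: Dict[str, Dict[str, int]]) -> int:
    """Mirror of ModelDescr.get_batch_size: find the one batch size
    shared by all tensors, raising on a mismatch."""
    batch_size = 1
    tensor_with_batchsize: Optional[str] = None
    for tid, axes in tensor_sizes.items():
        for aid, s in axes.items():
            if aid != BATCH_AXIS_ID or s == 1 or s == batch_size:
                continue
            if batch_size != 1:
                raise ValueError(
                    f"batch size mismatch for tensors '{tensor_with_batchsize}'"
                    f" ({batch_size}) and '{tid}' ({s})"
                )
            batch_size = s
            tensor_with_batchsize = tid
    return batch_size

# a batch size of 1 in 'mask' is compatible with 4 in 'raw':
assert get_batch_size({"raw": {"batch": 4, "x": 256}, "mask": {"batch": 1}}) == 4
```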
def get_output_tensor_sizes(self, input_sizes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[bioimageio.spec.model.v0_5.AxisId, int]]) -> Dict[bioimageio.spec.model.v0_5.TensorId, Dict[bioimageio.spec.model.v0_5.AxisId, Union[int, bioimageio.spec.model.v0_5._DataDepSize]]]:

Returns the tensor output sizes for given input_sizes. The returned sizes are exact only if input_sizes is a valid input shape; otherwise they may be larger than the actual (valid) output sizes.

def get_ns(self, input_sizes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[bioimageio.spec.model.v0_5.AxisId, int]]):

get parameter n for each parameterized axis such that the valid input size is >= the given input size
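`get_ns` builds on `ParameterizedSize`, where a valid axis size is `size = min + n * step`. A minimal standalone sketch of the `get_n`/`get_size` round trip (a hypothetical stand-in class, with `get_n` implemented per the documented contract that the resulting valid size is >= the given size):

```python
import math
from dataclasses import dataclass

@dataclass
class ParamSize:
    """Stand-in for ParameterizedSize: valid sizes are
    min + n*step for n = 0, 1, 2, ..."""
    min: int
    step: int

    def get_size(self, n: int) -> int:
        return self.min + n * self.step

    def get_n(self, s: int) -> int:
        # smallest n such that get_size(n) >= s
        return max(0, math.ceil((s - self.min) / self.step))

size = ParamSize(min=64, step=16)
n = size.get_n(100)  # n = 3, since 64 + 3*16 = 112 >= 100
assert size.get_size(n) >= 100
```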

def get_tensor_sizes(self, ns: Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, bioimageio.spec.model.v0_5.AxisId], int], batch_size: int) -> bioimageio.spec.model.v0_5._TensorSizes:
def get_axis_sizes(self, ns: Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, bioimageio.spec.model.v0_5.AxisId], int], batch_size: Optional[int] = None, *, max_input_shape: Optional[Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, bioimageio.spec.model.v0_5.AxisId], int]] = None) -> bioimageio.spec.model.v0_5._AxisSizes:

Determine input and output block shapes for the scale factors ns of parameterized input sizes.

Arguments:
  • ns: Scale factor n for each axis (keyed by (tensor_id, axis_id)) that is parameterized as size = min + n * step.
  • batch_size: The desired size of the batch dimension. If given, batch_size overwrites any batch size present in max_input_shape. Default: 1.
  • max_input_shape: Limits the derived block shapes. Each axis for which the input size, parameterized by n, is larger than max_input_shape is reduced to the minimal n for which this still holds. Use this for small input samples or large values in ns, or simply whenever you know the full input shape.
Returns:

Resolved axis sizes for model inputs and outputs.
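The parameterized-size arithmetic above can be illustrated in isolation. This is a minimal sketch of the documented relation size = min + n * step and of solving for n given a target size; the helper names are hypothetical and the rounding in the library's `ParameterizedSize.get_n` may differ.

```python
import math

def parameterized_size(size_min: int, step: int, n: int) -> int:
    # resolved axis size for scale factor n: size = min + n * step
    return size_min + n * step

def n_for_size(size_min: int, step: int, s: int) -> int:
    # smallest n whose parameterized size is >= s (one plausible rounding;
    # the library's ParameterizedSize.get_n may round differently)
    return max(math.ceil((s - size_min) / step), 0)

# an axis with min=64, step=32:
assert parameterized_size(64, 32, 3) == 160
assert n_for_size(64, 32, 150) == 3  # 64 + 3*32 = 160 >= 150
```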

@classmethod
def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
    @classmethod
    def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
        """Convert metadata following an older format version to this class's format
        without validating the result.
        """
        if (
            data.get("type") == "model"
            and isinstance(fv := data.get("format_version"), str)
            and fv.count(".") == 2
        ):
            fv_parts = fv.split(".")
            if any(not p.isdigit() for p in fv_parts):
                return

            fv_tuple = tuple(map(int, fv_parts))

            assert cls.implemented_format_version_tuple[0:2] == (0, 5)
            if fv_tuple[:2] in ((0, 3), (0, 4)):
                m04 = _ModelDescr_v0_4.load(data)
                if isinstance(m04, InvalidDescr):
                    try:
                        updated = _model_conv.convert_as_dict(
                            m04  # pyright: ignore[reportArgumentType]
                        )
                    except Exception as e:
                        logger.error(
                            "Failed to convert from invalid model 0.4 description."
                            + f"\nerror: {e}"
                            + "\nProceeding with model 0.5 validation without conversion."
                        )
                        updated = None
                else:
                    updated = _model_conv.convert_as_dict(m04)

                if updated is not None:
                    data.clear()
                    data.update(updated)

            elif fv_tuple[:2] == (0, 5):
                # bump patch version
                data["format_version"] = cls.implemented_format_version

Convert metadata following an older format version to this class's format without validating the result.
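The version dispatch above can be condensed into a small standalone sketch (a hypothetical helper, not part of the package): parse `format_version`, convert 0.3/0.4 descriptions, and only bump the patch version for 0.5.

```python
def classify_format_version(fv) -> str:
    # mirror the dispatch in convert_from_old_format_wo_validation
    if not isinstance(fv, str) or fv.count(".") != 2:
        return "ignore"
    parts = fv.split(".")
    if any(not p.isdigit() for p in parts):
        return "ignore"            # e.g. "0.4.x" is left untouched
    major_minor = tuple(map(int, parts))[:2]
    if major_minor in ((0, 3), (0, 4)):
        return "convert"           # run the 0.4 -> 0.5 converter
    if major_minor == (0, 5):
        return "bump-patch"        # set the implemented patch version
    return "ignore"

assert classify_format_version("0.4.10") == "convert"
assert classify_format_version("0.5.1") == "bump-patch"
```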

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 5, 4)
model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'forbid', 'frozen': False, 'populate_by_name': True, 'revalidate_instances': 'never', 'validate_assignment': True, 'validate_default': False, 'validate_return': True, 'use_attribute_docstrings': True, 'model_title_generator': <function _node_title_generator>, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
class NotebookDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class NotebookDescr(GenericDescrBase):
    """Bioimage.io description of a Jupyter notebook."""

    implemented_type: ClassVar[Literal["notebook"]] = "notebook"
    if TYPE_CHECKING:
        type: Literal["notebook"] = "notebook"
    else:
        type: Literal["notebook"]

    id: Optional[NotebookId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[NotebookId] = None
    """The description from which this one is derived"""

    source: NotebookSource
    """The Jupyter notebook"""

Bioimage.io description of a Jupyter notebook.

implemented_type: ClassVar[Literal['notebook']] = 'notebook'
id: Optional[bioimageio.spec.notebook.v0_3.NotebookId]

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

parent: Optional[bioimageio.spec.notebook.v0_3.NotebookId]

The description from which this one is derived

source: Union[Annotated[bioimageio.spec._internal.url.HttpUrl, WithSuffix(suffix='.ipynb', case_sensitive=True)], Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath'), WithSuffix(suffix='.ipynb', case_sensitive=True)], Annotated[bioimageio.spec._internal.io.RelativeFilePath, WithSuffix(suffix='.ipynb', case_sensitive=True)]]

The Jupyter notebook

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'forbid', 'frozen': False, 'populate_by_name': True, 'revalidate_instances': 'never', 'validate_assignment': True, 'validate_default': False, 'validate_return': True, 'use_attribute_docstrings': True, 'model_title_generator': <function _node_title_generator>, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
ResourceDescr = typing.Union[typing.Annotated[typing.Union[typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], typing.Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], typing.Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], typing.Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], typing.Annotated[typing.Union[typing.Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], typing.Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], 
typing.Annotated[typing.Union[typing.Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], typing.Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')]]
def save_bioimageio_package_as_folder( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile, Dict[str, YamlValue], Mapping[str, YamlValueView], Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], 
Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')]], /, *, output_path: Union[Annotated[pathlib.Path, PathType(path_type='new')], Annotated[pathlib.Path, PathType(path_type='dir')], NoneType] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> Annotated[pathlib.Path, PathType(path_type='dir')]:
def save_bioimageio_package_as_folder(
    source: Union[BioimageioYamlSource, ResourceDescr],
    /,
    *,
    output_path: Union[NewPath, DirectoryPath, None] = None,
    weights_priority_order: Optional[  # model only
        Sequence[
            Literal[
                "keras_hdf5",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript",
            ]
        ]
    ] = None,
) -> DirectoryPath:
    """Write the content of a bioimage.io resource package to a folder.

    Args:
        source: bioimageio resource description
        output_path: file path to write package to
        weights_priority_order: If given, only the first weights format present in the model is included.
                                If none of the prioritized weights formats is found, all are included.

    Returns:
        directory path to bioimageio package folder
    """
    package_content = _prepare_resource_package(
        source,
        weights_priority_order=weights_priority_order,
    )
    if output_path is None:
        output_path = Path(mkdtemp())
    else:
        output_path = Path(output_path)

    output_path.mkdir(exist_ok=True, parents=True)
    for name, src in package_content.items():
        if isinstance(src, collections.abc.Mapping):
            write_yaml(src, output_path / name)
        else:
            with (output_path / name).open("wb") as dest:
                _ = shutil.copyfileobj(src, dest)

    return output_path

Write the content of a bioimage.io resource package to a folder.

Arguments:
  • source: bioimageio resource description
  • output_path: file path to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.
Returns:

directory path to bioimageio package folder
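The export loop above can be imitated with the standard library alone. In this sketch, JSON stands in for the package's `write_yaml`, and `package_content` is a hypothetical mapping of file names to either metadata dicts or binary streams; it is an illustration of the copy logic, not the library function itself.

```python
import io
import json
import shutil
import tempfile
from pathlib import Path

def write_package_folder(package_content: dict, output_path=None) -> Path:
    # mappings become a text metadata file (json here in place of the
    # library's YAML writer); anything else is treated as a binary
    # stream and copied verbatim, as in the loop above
    out = Path(output_path) if output_path else Path(tempfile.mkdtemp())
    out.mkdir(exist_ok=True, parents=True)
    for name, src in package_content.items():
        if isinstance(src, dict):
            (out / name).write_text(json.dumps(src))
        else:
            with (out / name).open("wb") as dest:
                shutil.copyfileobj(src, dest)
    return out

folder = write_package_folder(
    {"rdf.json": {"type": "model"}, "weights.bin": io.BytesIO(b"\x00\x01")}
)
```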

def save_bioimageio_package_to_stream( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile, Dict[str, YamlValue], Mapping[str, YamlValueView], Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], 
Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')]], /, *, compression: int = 8, compression_level: int = 1, output_stream: Optional[IO[bytes]] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> IO[bytes]:
def save_bioimageio_package_to_stream(
    source: Union[BioimageioYamlSource, ResourceDescr],
    /,
    *,
    compression: int = ZIP_DEFLATED,
    compression_level: int = 1,
    output_stream: Union[IO[bytes], None] = None,
    weights_priority_order: Optional[  # model only
        Sequence[
            Literal[
                "keras_hdf5",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript",
            ]
        ]
    ] = None,
) -> IO[bytes]:
    """Package a bioimageio resource into a stream.

    Args:
        source: bioimageio resource description
        compression: The numeric constant of the compression method.
        compression_level: Compression level to use when writing files to the archive.
                           See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
        output_stream: stream to write package to
        weights_priority_order: If given, only the first weights format present in the model is included.
                                If none of the prioritized weights formats is found, all are included.

    Note: this function bypasses safety checks and does not load/validate the model after writing.

    Returns:
        stream of zipped bioimageio package
    """
    if output_stream is None:
        output_stream = BytesIO()

    package_content = _prepare_resource_package(
        source,
        weights_priority_order=weights_priority_order,
    )

    write_zip(
        output_stream,
        package_content,
        compression=compression,
        compression_level=compression_level,
    )

    return output_stream

Package a bioimageio resource into a stream.

Arguments:
  • source: bioimageio resource description
  • compression: The numeric constant of the compression method.
  • compression_level: Compression level to use when writing files to the archive. See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
  • output_stream: stream to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.

Note: this function bypasses safety checks and does not load/validate the model after writing.

Returns:

stream of zipped bioimageio package
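The zip-to-stream path can be sketched with `zipfile` from the standard library. As before, JSON stands in for the library's YAML writer and `package_content` is a hypothetical mapping; `write_zip` itself is not shown here.

```python
import io
import json
import zipfile

def package_to_stream(package_content: dict,
                      compression: int = zipfile.ZIP_DEFLATED,
                      compression_level: int = 1) -> io.BytesIO:
    # zip every package entry into an in-memory stream; dicts are
    # serialized as json (standing in for YAML metadata)
    stream = io.BytesIO()
    with zipfile.ZipFile(stream, "w", compression=compression,
                         compresslevel=compression_level) as zf:
        for name, src in package_content.items():
            data = json.dumps(src).encode() if isinstance(src, dict) else src
            zf.writestr(name, data)
    stream.seek(0)
    return stream

zipped = package_to_stream({"rdf.json": {"type": "model"}})
```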

def save_bioimageio_package( source: Union[Annotated[Union[bioimageio.spec._internal.url.HttpUrl, bioimageio.spec._internal.io.RelativeFilePath, Annotated[pathlib.Path, PathType(path_type='file'), FieldInfo(annotation=NoneType, required=True, title='FilePath')]], FieldInfo(annotation=NoneType, required=True, metadata=[_PydanticGeneralMetadata(union_mode='left_to_right')])], str, pydantic.networks.HttpUrl, zipfile.ZipFile, Dict[str, YamlValue], Mapping[str, YamlValueView], Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], 
Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')]], /, *, compression: int = 8, compression_level: int = 1, output_path: Union[Annotated[pathlib.Path, PathType(path_type='new')], Annotated[pathlib.Path, PathType(path_type='file')], NoneType] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None, allow_invalid: bool = False) -> Annotated[pathlib.Path, PathType(path_type='file')]:
def save_bioimageio_package(
    source: Union[BioimageioYamlSource, ResourceDescr],
    /,
    *,
    compression: int = ZIP_DEFLATED,
    compression_level: int = 1,
    output_path: Union[NewPath, FilePath, None] = None,
    weights_priority_order: Optional[  # model only
        Sequence[
            Literal[
                "keras_hdf5",
                "onnx",
                "pytorch_state_dict",
                "tensorflow_js",
                "tensorflow_saved_model_bundle",
                "torchscript",
            ]
        ]
    ] = None,
    allow_invalid: bool = False,
) -> FilePath:
    """Package a bioimageio resource as a zip file.

    Args:
        source: bioimageio resource description
        compression: The numeric constant of the compression method.
        compression_level: Compression level to use when writing files to the archive.
                           See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
        output_path: file path to write package to
        weights_priority_order: If given, only the first weights format present in the model is included.
                                If none of the prioritized weights formats is found, all are included.
        allow_invalid: If the exported package fails validation, log an error instead of raising a ValueError.

    Returns:
        path to zipped bioimageio package
    """
    package_content = _prepare_resource_package(
        source,
        weights_priority_order=weights_priority_order,
    )
    if output_path is None:
        output_path = Path(
            NamedTemporaryFile(suffix=".bioimageio.zip", delete=False).name
        )
    else:
        output_path = Path(output_path)

    write_zip(
        output_path,
        package_content,
        compression=compression,
        compression_level=compression_level,
    )
    with get_validation_context().replace(warning_level=ERROR):
        if isinstance((exported := load_description(output_path)), InvalidDescr):
            exported.validation_summary.display()
            msg = f"Exported package at '{output_path}' is invalid."
            if allow_invalid:
                logger.error(msg)
            else:
                raise ValueError(msg)

    return output_path

Package a bioimageio resource as a zip file.

Arguments:
  • source: bioimageio resource description
  • compression: The numeric constant of the compression method.
  • compression_level: Compression level to use when writing files to the archive. See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
  • output_path: file path to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.
  • allow_invalid: If the exported package fails validation, log an error instead of raising a ValueError.
Returns:

path to zipped bioimageio package
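The weights_priority_order semantics described above can be sketched as a small standalone function (a hypothetical helper, not the package's internal `_prepare_resource_package`):

```python
def select_weights(available: dict, priority_order=None) -> dict:
    # keep only the first prioritized weights format that is present;
    # if none of the prioritized formats is found, keep all of them
    if priority_order is None:
        return available
    for fmt in priority_order:
        if fmt in available:
            return {fmt: available[fmt]}
    return available

weights = {"onnx": "w.onnx", "torchscript": "w.pt"}
assert select_weights(weights, ["torchscript", "onnx"]) == {"torchscript": "w.pt"}
assert select_weights(weights, ["keras_hdf5"]) == weights
```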

def save_bioimageio_yaml_only( rd: Union[Annotated[Union[Annotated[Union[Annotated[bioimageio.spec.application.v0_2.ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.2')], Annotated[ApplicationDescr, FieldInfo(annotation=NoneType, required=True, title='application 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='application')], Annotated[Union[Annotated[bioimageio.spec.dataset.v0_2.DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.2')], Annotated[DatasetDescr, FieldInfo(annotation=NoneType, required=True, title='dataset 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='dataset')], Annotated[Union[Annotated[bioimageio.spec.model.v0_4.ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.4')], Annotated[ModelDescr, FieldInfo(annotation=NoneType, required=True, title='model 0.5')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='model')], Annotated[Union[Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.2')], Annotated[NotebookDescr, FieldInfo(annotation=NoneType, required=True, title='notebook 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='notebook')]], Discriminator(discriminator='type', custom_error_type=None, custom_error_message=None, custom_error_context=None)], Annotated[Union[Annotated[bioimageio.spec.generic.v0_2.GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.2')], 
Annotated[GenericDescr, FieldInfo(annotation=NoneType, required=True, title='generic 0.3')]], Discriminator(discriminator='format_version', custom_error_type=None, custom_error_message=None, custom_error_context=None), FieldInfo(annotation=NoneType, required=True, title='generic')], Dict[str, YamlValue], InvalidDescr], /, file: Union[Annotated[pathlib.Path, PathType(path_type='new')], Annotated[pathlib.Path, PathType(path_type='file')], TextIO], *, exclude_unset: bool = True, exclude_defaults: bool = False):
def save_bioimageio_yaml_only(
    rd: Union[ResourceDescr, BioimageioYamlContent, InvalidDescr],
    /,
    file: Union[NewPath, FilePath, TextIO],
    *,
    exclude_unset: bool = True,
    exclude_defaults: bool = False,
):
    """Write the metadata of a resource description (`rd`) to `file`
    without writing any of the files referenced in it.

    Args:
        rd: bioimageio resource description
        file: file or stream to save to
        exclude_unset: Exclude fields that have not explicitly been set.
        exclude_defaults: Exclude fields that have the default value (even if set explicitly).

    Note: To save a resource description with its associated files as a package,
    use `save_bioimageio_package` or `save_bioimageio_package_as_folder`.
    """
    if isinstance(rd, ResourceDescrBase):
        content = dump_description(
            rd, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults
        )
    else:
        content = rd

    write_yaml(cast(YamlValue, content), file)

Write the metadata of a resource description (rd) to file without writing any of the files referenced in it.

Arguments:
  • rd: bioimageio resource description
  • file: file or stream to save to
  • exclude_unset: Exclude fields that have not explicitly been set.
  • exclude_defaults: Exclude fields that have the default value (even if set explicitly).

Note: To save a resource description with its associated files as a package, use save_bioimageio_package or save_bioimageio_package_as_folder.
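The dispatch inside `save_bioimageio_yaml_only` is simple: structured descriptions are serialized via `dump_description`, while raw mappings are written as-is. A minimal stdlib sketch of that shape, where `DescrLike` is a hypothetical stand-in for a parsed description and `json` stands in for the YAML writer:

```python
import io
import json


class DescrLike:
    """Hypothetical stand-in for a parsed resource description."""

    def __init__(self, content):
        self._content = content

    def dump(self):
        # plays the role of dump_description(rd, ...)
        return self._content


def save_yaml_only_sketch(rd, file):
    # structured descriptions are dumped first; raw mappings pass through
    content = rd.dump() if hasattr(rd, "dump") else rd
    json.dump(content, file)  # json stands in for write_yaml here


buf = io.StringIO()
save_yaml_only_sketch(DescrLike({"name": "my-model"}), buf)
print(buf.getvalue())  # → {"name": "my-model"}
```

Only the metadata dictionary is written; any files it references stay where they are.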

settings = Settings(
    allow_pickle=False,
    cache_path=PosixPath('/home/runner/.cache/bioimageio'),
    collection_http_pattern='https://hypha.aicell.io/bioimage-io/artifacts/{bioimageio_id}/files/rdf.yaml',
    id_map='https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/id_map.json',
    id_map_draft='https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/id_map_draft.json',
    perform_io_checks=True,
    resolve_draft=True,
    log_warnings=True,
    github_username=None,
    github_token=None,
    CI='true',
    user_agent=None,
)
SpecificResourceDescr = Annotated[
    Union[ApplicationDescr, DatasetDescr, ModelDescr, NotebookDescr],
    Discriminator('type'),
]
# Each member is itself a union over its supported format versions
# (e.g. application 0.2 | application 0.3), discriminated by 'format_version'.
def update_format( source: Union[ResourceDescr, PermissiveFileSource, ZipFile, Dict[str, YamlValue], InvalidDescr], /, *, output: Union[pathlib.Path, TextIO, NoneType] = None, exclude_defaults: bool = True, perform_io_checks: Optional[bool] = None) -> Union[LatestResourceDescr, InvalidDescr]:
def update_format(
    source: Union[
        ResourceDescr,
        PermissiveFileSource,
        ZipFile,
        BioimageioYamlContent,
        InvalidDescr,
    ],
    /,
    *,
    output: Union[Path, TextIO, None] = None,
    exclude_defaults: bool = True,
    perform_io_checks: Optional[bool] = None,
) -> Union[LatestResourceDescr, InvalidDescr]:
    """Update a resource description.

    Notes:
    - Invalid **source** descriptions may fail to update.
    - The updated description might be invalid (even if the **source** was valid).
    """

    if isinstance(source, ResourceDescrBase):
        root = source.root
        source = dump_description(source)
    else:
        root = None

    if isinstance(source, collections.abc.Mapping):
        descr = build_description(
            source,
            context=get_validation_context().replace(
                root=root, perform_io_checks=perform_io_checks
            ),
            format_version=LATEST,
        )

    else:
        descr = load_description(
            source,
            perform_io_checks=perform_io_checks,
            format_version=LATEST,
        )

    if output is not None:
        save_bioimageio_yaml_only(descr, file=output, exclude_defaults=exclude_defaults)

    return descr

Update a resource description.

Notes:

  • Invalid source descriptions may fail to update.
  • The updated description might be invalid (even if the source was valid).
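The first branch of `update_format` distinguishes in-memory mappings from file sources: a `collections.abc.Mapping` goes straight to `build_description`, while anything else is treated as a file source and loaded first. A small stdlib sketch of that dispatch (the returned labels are illustrative, not part of the API):

```python
import collections.abc


def build_or_load_sketch(source):
    # mirrors the branch in update_format: mappings are built directly,
    # everything else is treated as a file source and loaded first
    if isinstance(source, collections.abc.Mapping):
        return ("build_description", source)
    return ("load_description", source)


action, _ = build_or_load_sketch({"type": "model", "format_version": "0.5.0"})
print(action)  # → build_description
```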
def update_hashes( source: Union[PermissiveFileSource, ZipFile, ResourceDescr, Dict[str, YamlValue]], /) -> Union[ResourceDescr, InvalidDescr]:
def update_hashes(
    source: Union[PermissiveFileSource, ZipFile, ResourceDescr, BioimageioYamlContent],
    /,
) -> Union[ResourceDescr, InvalidDescr]:
    """Update hash values of the files referenced in **source**."""
    if isinstance(source, ResourceDescrBase):
        root = source.root
        source = dump_description(source)
    else:
        root = None

    context = get_validation_context().replace(
        update_hashes=True, root=root, perform_io_checks=True
    )
    with context:
        if isinstance(source, collections.abc.Mapping):
            return build_description(source)
        else:
            return load_description(source, perform_io_checks=True)

Update hash values of the files referenced in source.
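The hashes being updated are SHA-256 digests of the referenced files. Computing such a digest needs nothing beyond the standard library; a minimal sketch (the input bytes here are made up):

```python
import hashlib


def sha256_of_bytes(data: bytes) -> str:
    """Return the hex SHA-256 digest, as stored in bioimageio.yaml sha256 fields."""
    return hashlib.sha256(data).hexdigest()


digest = sha256_of_bytes(b"example weights content")
print(len(digest))  # → 64
```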

def validate_format( data: Dict[str, YamlValue], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', context: Optional[ValidationContext] = None) -> ValidationSummary:
def validate_format(
    data: BioimageioYamlContent,
    /,
    *,
    format_version: Union[Literal["discover", "latest"], str] = DISCOVER,
    context: Optional[ValidationContext] = None,
) -> ValidationSummary:
    """Validate a dictionary holding a bioimageio description.
    See `bioimageio.spec.load_description_and_validate_format_only`
    to validate a file source.

    Args:
        data: Dictionary holding the raw bioimageio.yaml content.
        format_version: Format version to (update to and) use for validation.
        context: Validation context, see `bioimageio.spec.ValidationContext`

    Note:
        Use `bioimageio.spec.load_description_and_validate_format_only` to validate a
        file source instead of loading the YAML content and creating the appropriate
        `ValidationContext`.

        Alternatively you can use `bioimageio.spec.load_description` and access the
        `validation_summary` attribute of the returned object.
    """
    with context or get_validation_context():
        rd = build_description(data, format_version=format_version)

    assert rd.validation_summary is not None
    return rd.validation_summary

Validate a dictionary holding a bioimageio description. See bioimageio.spec.load_description_and_validate_format_only to validate a file source.

Arguments:
  • data: Dictionary holding the raw bioimageio.yaml content.
  • format_version: Format version to (update to and) use for validation.
  • context: Validation context, see bioimageio.spec.ValidationContext.

Note:

Use bioimageio.spec.load_description_and_validate_format_only to validate a file source instead of loading the YAML content and creating the appropriate ValidationContext.

Alternatively, you can use bioimageio.spec.load_description and access the validation_summary attribute of the returned object.
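For illustration only, here is a toy stand-in for format validation that reports missing keys in a mapping. The real checks are performed by the Pydantic models; the set of fields shown is just an illustrative subset of what a bioimageio.yaml must contain:

```python
from typing import Dict, List


def minimal_format_check(data: Dict[str, object]) -> List[str]:
    # toy stand-in for validate_format: report missing keys
    # (illustrative subset only; real validation uses the Pydantic models)
    errors = []
    for key in ("type", "format_version", "name"):
        if key not in data:
            errors.append(f"missing required field: {key}")
    return errors


print(minimal_format_check({"type": "model", "name": "my-model"}))
# → ['missing required field: format_version']
```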

@dataclass(frozen=True)
class ValidationContext(bioimageio.spec._internal.validation_context.ValidationContextBase):
@dataclass(frozen=True)
class ValidationContext(ValidationContextBase):
    """A validation context used to control validation of bioimageio resources.

    For example, a relative file path in a bioimageio description requires the **root**
    context to evaluate whether the file is available and, if **perform_io_checks** is true,
    whether it matches its expected SHA256 hash value.
    """

    _context_tokens: "List[Token[Optional[ValidationContext]]]" = field(
        init=False,
        default_factory=cast(
            "Callable[[], List[Token[Optional[ValidationContext]]]]", list
        ),
    )

    cache: Union[
        DiskCache[RootHttpUrl], MemoryCache[RootHttpUrl], NoopCache[RootHttpUrl]
    ] = field(default=settings.disk_cache)
    disable_cache: bool = False
    """Disable caching downloads to `settings.cache_path`
    and (re)download them to memory instead."""

    root: Union[RootHttpUrl, DirectoryPath, ZipFile] = Path()
    """Url/directory/archive serving as base to resolve any relative file paths."""

    warning_level: WarningLevel = 50
    """Treat warnings of severity `s` as validation errors if `s >= warning_level`."""

    log_warnings: bool = settings.log_warnings
    """If `True`, warnings are logged to the terminal.

    Note: This setting does not affect warning entries
        of a generated `bioimageio.spec.ValidationSummary`.
    """

    progressbar_factory: Optional[Callable[[], Progressbar]] = None
    """Callable to return a tqdm-like progressbar.

    Currently this is only used for file downloads."""

    raise_errors: bool = False
    """Directly raise any validation errors
    instead of aggregating errors and returning a `bioimageio.spec.InvalidDescr` (for debugging)."""

    @property
    def summary(self):
        if isinstance(self.root, ZipFile):
            if self.root.filename is None:
                root = "in-memory"
            else:
                root = Path(self.root.filename)
        else:
            root = self.root

        return ValidationContextSummary(
            root=root,
            file_name=self.file_name,
            perform_io_checks=self.perform_io_checks,
            known_files=copy(self.known_files),
            update_hashes=self.update_hashes,
        )

    def __enter__(self):
        self._context_tokens.append(_validation_context_var.set(self))
        return self

    def __exit__(self, type, value, traceback):  # type: ignore
        _validation_context_var.reset(self._context_tokens.pop(-1))

    def replace(  # TODO: probably use __replace__ when py>=3.13
        self,
        root: Optional[Union[RootHttpUrl, DirectoryPath, ZipFile]] = None,
        warning_level: Optional[WarningLevel] = None,
        log_warnings: Optional[bool] = None,
        file_name: Optional[str] = None,
        perform_io_checks: Optional[bool] = None,
        known_files: Optional[Dict[str, Optional[Sha256]]] = None,
        raise_errors: Optional[bool] = None,
        update_hashes: Optional[bool] = None,
    ) -> Self:
        if known_files is None and root is not None and self.root != root:
            # reset known files if root changes, but no new known_files are given
            known_files = {}

        return self.__class__(
            root=self.root if root is None else root,
            warning_level=(
                self.warning_level if warning_level is None else warning_level
            ),
            log_warnings=self.log_warnings if log_warnings is None else log_warnings,
            file_name=self.file_name if file_name is None else file_name,
            perform_io_checks=(
                self.perform_io_checks
                if perform_io_checks is None
                else perform_io_checks
            ),
            known_files=self.known_files if known_files is None else known_files,
            raise_errors=self.raise_errors if raise_errors is None else raise_errors,
            update_hashes=(
                self.update_hashes if update_hashes is None else update_hashes
            ),
        )

    @property
    def source_name(self) -> str:
        if self.file_name is None:
            return "in-memory"
        else:
            try:
                if isinstance(self.root, Path):
                    source = (self.root / self.file_name).absolute()
                else:
                    parsed = urlsplit(str(self.root))
                    path = list(parsed.path.strip("/").split("/")) + [self.file_name]
                    source = urlunsplit(
                        (
                            parsed.scheme,
                            parsed.netloc,
                            "/".join(path),
                            parsed.query,
                            parsed.fragment,
                        )
                    )
            except ValueError:
                return self.file_name
            else:
                return str(source)

A validation context used to control validation of bioimageio resources.

For example, a relative file path in a bioimageio description requires the root context to evaluate whether the file is available and, if perform_io_checks is true, whether it matches its expected SHA256 hash value.

ValidationContext( file_name: Optional[str] = None, perform_io_checks: bool = True, known_files: Dict[str, Optional[Sha256]] = &lt;factory&gt;, update_hashes: bool = False, cache: Union[DiskCache[RootHttpUrl], MemoryCache[RootHttpUrl], NoopCache[RootHttpUrl]] = &lt;DiskCache object&gt;, disable_cache: bool = False, root: Union[RootHttpUrl, DirectoryPath, ZipFile] = PosixPath('.'), warning_level: Literal[20, 30, 35, 50] = 50, log_warnings: bool = True, progressbar_factory: Optional[Callable[[], Progressbar]] = None, raise_errors: bool = False)
cache: Union[DiskCache[RootHttpUrl], MemoryCache[RootHttpUrl], NoopCache[RootHttpUrl]] = &lt;DiskCache object&gt;
disable_cache: bool = False

Disable caching downloads to settings.cache_path and (re)download them to memory instead.

root: Union[RootHttpUrl, DirectoryPath, ZipFile] = PosixPath('.')

Url/directory/archive serving as base to resolve any relative file paths.

warning_level: Literal[20, 30, 35, 50] = 50

Treat warnings of severity s as validation errors if s >= warning_level.

log_warnings: bool = True

If True, warnings are logged to the terminal.

Note: This setting does not affect warning entries of a generated bioimageio.spec.ValidationSummary.

progressbar_factory: Optional[Callable[[], bioimageio.spec._internal.progress.Progressbar]] = None

Callable to return a tqdm-like progressbar.

Currently this is only used for file downloads.

raise_errors: bool = False

Directly raise any validation errors instead of aggregating errors and returning a bioimageio.spec.InvalidDescr (for debugging).
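`ValidationContext` is also a context manager: entering pushes the instance onto a `contextvars` variable and exiting pops it again (see `__enter__`/`__exit__` in the source above). A stripped-down stdlib sketch of that push/pop token pattern, with a plain dict standing in for the context object:

```python
import contextvars
from typing import List, Optional

_active: contextvars.ContextVar[Optional[dict]] = contextvars.ContextVar(
    "active_context", default=None
)


class ContextSketch:
    """Minimal stand-in for the ValidationContext enter/exit token handling."""

    def __init__(self, **settings: object):
        self.settings = settings
        self._tokens: List[contextvars.Token] = []

    def __enter__(self):
        # push: remember the reset token so nesting unwinds correctly
        self._tokens.append(_active.set(self.settings))
        return self

    def __exit__(self, exc_type, exc, tb):
        # pop: restore whatever context was active before entering
        _active.reset(self._tokens.pop())


with ContextSketch(perform_io_checks=False):
    inside = _active.get()
outside = _active.get()
print(inside, outside)  # → {'perform_io_checks': False} None
```

Keeping a list of tokens (rather than a single one) lets the same frozen instance be entered from nested scopes without losing the outer state.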

summary
    @property
    def summary(self):
        if isinstance(self.root, ZipFile):
            if self.root.filename is None:
                root = "in-memory"
            else:
                root = Path(self.root.filename)
        else:
            root = self.root

        return ValidationContextSummary(
            root=root,
            file_name=self.file_name,
            perform_io_checks=self.perform_io_checks,
            known_files=copy(self.known_files),
            update_hashes=self.update_hashes,
        )
def replace( self, root: Optional[Union[RootHttpUrl, DirectoryPath, ZipFile]] = None, warning_level: Optional[Literal[20, 30, 35, 50]] = None, log_warnings: Optional[bool] = None, file_name: Optional[str] = None, perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[Sha256]]] = None, raise_errors: Optional[bool] = None, update_hashes: Optional[bool] = None) -> Self:
    def replace(  # TODO: probably use __replace__ when py>=3.13
        self,
        root: Optional[Union[RootHttpUrl, DirectoryPath, ZipFile]] = None,
        warning_level: Optional[WarningLevel] = None,
        log_warnings: Optional[bool] = None,
        file_name: Optional[str] = None,
        perform_io_checks: Optional[bool] = None,
        known_files: Optional[Dict[str, Optional[Sha256]]] = None,
        raise_errors: Optional[bool] = None,
        update_hashes: Optional[bool] = None,
    ) -> Self:
        if known_files is None and root is not None and self.root != root:
            # reset known files if root changes, but no new known_files are given
            known_files = {}

        return self.__class__(
            root=self.root if root is None else root,
            warning_level=(
                self.warning_level if warning_level is None else warning_level
            ),
            log_warnings=self.log_warnings if log_warnings is None else log_warnings,
            file_name=self.file_name if file_name is None else file_name,
            perform_io_checks=(
                self.perform_io_checks
                if perform_io_checks is None
                else perform_io_checks
            ),
            known_files=self.known_files if known_files is None else known_files,
            raise_errors=self.raise_errors if raise_errors is None else raise_errors,
            update_hashes=(
                self.update_hashes if update_hashes is None else update_hashes
            ),
        )
source_name: str
    @property
    def source_name(self) -> str:
        if self.file_name is None:
            return "in-memory"
        else:
            try:
                if isinstance(self.root, Path):
                    source = (self.root / self.file_name).absolute()
                else:
                    parsed = urlsplit(str(self.root))
                    path = list(parsed.path.strip("/").split("/")) + [self.file_name]
                    source = urlunsplit(
                        (
                            parsed.scheme,
                            parsed.netloc,
                            "/".join(path),
                            parsed.query,
                            parsed.fragment,
                        )
                    )
            except ValueError:
                return self.file_name
            else:
                return str(source)
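The URL branch of `source_name` joins `root` and `file_name` with `urlsplit`/`urlunsplit`. That join can be reproduced with the standard library alone (the example URL is made up):

```python
from urllib.parse import urlsplit, urlunsplit


def join_root_and_file(root: str, file_name: str) -> str:
    # the same urlsplit/urlunsplit join used by ValidationContext.source_name
    parsed = urlsplit(root)
    path = parsed.path.strip("/").split("/") + [file_name]
    return urlunsplit(
        (parsed.scheme, parsed.netloc, "/".join(path), parsed.query, parsed.fragment)
    )


url = join_root_and_file("https://example.com/models", "rdf.yaml")
print(url)  # → https://example.com/models/rdf.yaml
```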
class ValidationSummary(pydantic.main.BaseModel):
class ValidationSummary(BaseModel, extra="allow"):
    """Summarizes output of all bioimageio validations and tests
    for one specific `ResourceDescr` instance."""

    name: str
    """name of the validation"""
    source_name: str
    """source of the validated bioimageio description"""
    id: Optional[str] = None
    """ID of the resource being validated"""
    type: str
    """type of the resource being validated"""
    format_version: str
    """format version of the resource being validated"""
    status: Literal["passed", "valid-format", "failed"]
    """overall status of the bioimageio validation"""
    details: List[ValidationDetail]
    """list of validation details"""
    env: Set[InstalledPackage] = Field(
        default_factory=lambda: {
            InstalledPackage(name="bioimageio.spec", version=VERSION)
        }
    )
    """list of selected, relevant package versions"""

    saved_conda_list: Optional[str] = None

    @field_serializer("saved_conda_list")
    def _save_conda_list(self, value: Optional[str]):
        return self.conda_list

    @property
    def conda_list(self):
        if self.saved_conda_list is None:
            p = subprocess.run(
                ["conda", "list"],
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                shell=True,
                text=True,
            )
            self.saved_conda_list = (
                p.stdout or f"`conda list` exited with {p.returncode}"
            )

        return self.saved_conda_list

    @property
    def status_icon(self):
        if self.status == "passed":
            return "✔️"
        elif self.status == "valid-format":
            return "🟡"
        else:
            return "❌"

    @property
    def errors(self) -> List[ErrorEntry]:
        return list(chain.from_iterable(d.errors for d in self.details))

    @property
    def warnings(self) -> List[WarningEntry]:
        return list(chain.from_iterable(d.warnings for d in self.details))

    def format(
        self,
        *,
        width: Optional[int] = None,
        include_conda_list: bool = False,
    ):
        """Format summary as Markdown string"""
        return self._format(
            width=width, target="md", include_conda_list=include_conda_list
        )

    format_md = format

    def format_html(
        self,
        *,
        width: Optional[int] = None,
        include_conda_list: bool = False,
    ):
        md_with_html = self._format(
            target="html", width=width, include_conda_list=include_conda_list
        )
        return markdown.markdown(
            md_with_html, extensions=["tables", "fenced_code", "nl2br"]
        )

    # TODO: fix bug which causes extensive white space between the info table and details table
    # (the generated markdown seems fine)
    @no_type_check
    def display(
        self,
        *,
        width: Optional[int] = None,
        include_conda_list: bool = False,
        tab_size: int = 4,
        soft_wrap: bool = True,
    ) -> None:
        try:  # render as HTML in Jupyter notebook
            from IPython.core.getipython import get_ipython
            from IPython.display import display_html
        except ImportError:
            pass
        else:
            if get_ipython() is not None:
                _ = display_html(
                    self.format_html(
                        width=width, include_conda_list=include_conda_list
                    ),
                    raw=True,
                )
                return

        # render with rich
        self._format(
            target=rich.console.Console(
                width=width,
                tab_size=tab_size,
                soft_wrap=soft_wrap,
            ),
            width=width,
            include_conda_list=include_conda_list,
        )

    def add_detail(self, detail: ValidationDetail):
        if detail.status == "failed":
            self.status = "failed"
        elif detail.status != "passed":
            assert_never(detail.status)

        self.details.append(detail)

    def log(
        self,
        to: Union[Literal["display"], Path, Sequence[Union[Literal["display"], Path]]],
    ) -> List[Path]:
        """Convenience method to display the validation summary in the terminal and/or
        save it to disk. See `save` for details."""
        if to == "display":
            display = True
            save_to = []
        elif isinstance(to, Path):
            display = False
            save_to = [to]
        else:
            display = "display" in to
            save_to = [p for p in to if p != "display"]

        if display:
            self.display()

        return self.save(save_to)

    def save(
        self, path: Union[Path, Sequence[Path]] = Path("{id}_summary_{now}")
    ) -> List[Path]:
        """Save the validation/test summary in JSON, Markdown or HTML format.

        Returns:
            List of file paths the summary was saved to.

        Notes:
        - Format is chosen based on the suffix: `.json`, `.md`, `.html`.
        - If **path** has no suffix it is assumed to be a directory to which a
          `summary.json`, `summary.md` and `summary.html` are saved.
        """
        if isinstance(path, (str, Path)):
            path = [Path(path)]

        # folder to file paths
        file_paths: List[Path] = []
        for p in path:
            if p.suffix:
                file_paths.append(p)
            else:
                file_paths.extend(
                    [
                        p / "summary.json",
                        p / "summary.md",
                        p / "summary.html",
                    ]
                )

        now = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        for p in file_paths:
            p = Path(str(p).format(id=self.id or "bioimageio", now=now))
            if p.suffix == ".json":
                self.save_json(p)
            elif p.suffix == ".md":
                self.save_markdown(p)
            elif p.suffix == ".html":
                self.save_html(p)
            else:
                raise ValueError(f"Unknown summary path suffix '{p.suffix}'")

        return file_paths

    def save_json(
        self, path: Path = Path("summary.json"), *, indent: Optional[int] = 2
    ):
        """Save validation/test summary as JSON file."""
        json_str = self.model_dump_json(indent=indent)
        path.parent.mkdir(exist_ok=True, parents=True)
        _ = path.write_text(json_str, encoding="utf-8")
447        logger.info("Saved summary to {}", path.absolute())
448
449    def save_markdown(self, path: Path = Path("summary.md")):
450        """Save rendered validation/test summary as Markdown file."""
451        formatted = self.format_md()
452        path.parent.mkdir(exist_ok=True, parents=True)
453        _ = path.write_text(formatted, encoding="utf-8")
454        logger.info("Saved Markdown formatted summary to {}", path.absolute())
455
456    def save_html(self, path: Path = Path("summary.html")) -> None:
457        """Save rendered validation/test summary as HTML file."""
458        path.parent.mkdir(exist_ok=True, parents=True)
459
460        html = self.format_html()
461        _ = path.write_text(html, encoding="utf-8")
462        logger.info("Saved HTML formatted summary to {}", path.absolute())
463
464    @classmethod
465    def load_json(cls, path: Path) -> Self:
466        """Load validation/test summary from a suitable JSON file"""
467        json_str = Path(path).read_text(encoding="utf-8")
468        return cls.model_validate_json(json_str)
469
470    @field_validator("env", mode="before")
471    def _convert_dict(cls, value: List[Union[List[str], Dict[str, str]]]):
472        """convert old env value for backwards compatibility"""
473        if isinstance(value, list):
474            return [
475                (
476                    (v["name"], v["version"], v.get("build", ""), v.get("channel", ""))
477                    if isinstance(v, dict) and "name" in v and "version" in v
478                    else v
479                )
480                for v in value
481            ]
482        else:
483            return value
484
485    def _format(
486        self,
487        *,
488        target: Union[rich.console.Console, Literal["html", "md"]],
489        width: Optional[int],
490        include_conda_list: bool,
491    ):
492        return _format_summary(
493            self,
494            target=target,
495            width=width or 100,
496            include_conda_list=include_conda_list,
497        )
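The `_convert_dict` validator above normalizes legacy `env` entries (dicts with `name` and `version` keys) into `(name, version, build, channel)` tuples. A standalone sketch of that conversion, independent of Pydantic (`convert_env_entries` is an illustrative helper, not part of the package API):

```python
from typing import Dict, List, Tuple, Union

Entry = Union[List[str], Dict[str, str], Tuple[str, str, str, str]]


def convert_env_entries(value: List[Entry]) -> List[Entry]:
    """Mimic the `_convert_dict` validator: turn legacy dict entries like
    {"name": ..., "version": ...} into (name, version, build, channel)
    tuples; anything else passes through unchanged."""
    return [
        (v["name"], v["version"], v.get("build", ""), v.get("channel", ""))
        if isinstance(v, dict) and "name" in v and "version" in v
        else v
        for v in value
    ]


old_style = [{"name": "numpy", "version": "1.26.4", "channel": "conda-forge"}]
print(convert_env_entries(old_style))
# → [('numpy', '1.26.4', '', 'conda-forge')]
```

Missing `build` and `channel` keys become empty strings, so old and new summaries validate against the same tuple shape.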

Summarizes output of all bioimageio validations and tests for one specific ResourceDescr instance.

name: str

name of the validation

source_name: str

source of the validated bioimageio description

id: Optional[str]

ID of the resource being validated

type: str

type of the resource being validated

format_version: str

format version of the resource being validated

status: Literal['passed', 'valid-format', 'failed']

overall status of the bioimageio validation

list of validation details

list of selected, relevant package versions

saved_conda_list: Optional[str]
conda_list
    @property
    def conda_list(self):
        if self.saved_conda_list is None:
            p = subprocess.run(
                ["conda", "list"],
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                text=True,
            )
            self.saved_conda_list = (
                p.stdout or f"`conda list` exited with {p.returncode}"
            )

        return self.saved_conda_list
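`conda_list` is a lazily cached property: the subprocess runs on first access only, and the captured output is stored in `saved_conda_list` for reuse. The same pattern, sketched with a portable command (`python --version` via `sys.executable`, since `conda` may not be on PATH; `EnvironmentInfo` is a toy stand-in, not part of the API):

```python
import subprocess
import sys
from typing import Optional


class EnvironmentInfo:
    def __init__(self) -> None:
        self.saved_output: Optional[str] = None

    @property
    def interpreter_version(self) -> str:
        # run the command only once; later accesses return the cached text
        if self.saved_output is None:
            p = subprocess.run(
                [sys.executable, "--version"],
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                text=True,
            )
            self.saved_output = p.stdout or f"command exited with {p.returncode}"
        return self.saved_output


info = EnvironmentInfo()
print(info.interpreter_version)  # e.g. "Python 3.11.9"
```

Caching the text in a plain field (rather than using `functools.cached_property`) keeps the value serializable as part of the summary.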
status_icon
    @property
    def status_icon(self):
        if self.status == "passed":
            return "✔️"
        elif self.status == "valid-format":
            return "🟡"
        else:
            return "❌"
errors: List[bioimageio.spec.summary.ErrorEntry]
    @property
    def errors(self) -> List[ErrorEntry]:
        return list(chain.from_iterable(d.errors for d in self.details))
warnings: List[bioimageio.spec.summary.WarningEntry]
    @property
    def warnings(self) -> List[WarningEntry]:
        return list(chain.from_iterable(d.warnings for d in self.details))
def format( self, *, width: Optional[int] = None, include_conda_list: bool = False):
    def format(
        self,
        *,
        width: Optional[int] = None,
        include_conda_list: bool = False,
    ):
        """Format summary as Markdown string"""
        return self._format(
            width=width, target="md", include_conda_list=include_conda_list
        )

Format summary as Markdown string

def format_md( self, *, width: Optional[int] = None, include_conda_list: bool = False):
Alias of `format`: formats the summary as a Markdown string.

def format_html( self, *, width: Optional[int] = None, include_conda_list: bool = False):
    def format_html(
        self,
        *,
        width: Optional[int] = None,
        include_conda_list: bool = False,
    ):
        md_with_html = self._format(
            target="html", width=width, include_conda_list=include_conda_list
        )
        return markdown.markdown(
            md_with_html, extensions=["tables", "fenced_code", "nl2br"]
        )
@no_type_check
def display( self, *, width: Optional[int] = None, include_conda_list: bool = False, tab_size: int = 4, soft_wrap: bool = True) -> None:
Render the summary: as HTML when running inside a Jupyter notebook, otherwise to the terminal via a `rich` console.
def add_detail(self, detail: bioimageio.spec.summary.ValidationDetail):
Append **detail** to `details`; if the detail failed, the overall status is set to `failed`.
def log( self, to: Union[Literal['display'], pathlib.Path, Sequence[Union[Literal['display'], pathlib.Path]]]) -> List[pathlib.Path]:

Convenience method to display the validation summary in the terminal and/or save it to disk. See save for details.
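The dispatch in `log` reduces to three cases: a bare `"display"` means show only, a single `Path` means save only, and a sequence may mix both. A minimal standalone version of that branching (`split_log_targets` is an illustrative helper, not part of the API):

```python
from pathlib import Path
from typing import List, Sequence, Tuple, Union


def split_log_targets(
    to: Union[str, Path, Sequence[Union[str, Path]]],
) -> Tuple[bool, List[Path]]:
    """Return (display?, paths to save to), mirroring `log`'s branching."""
    if to == "display":
        return True, []          # show in terminal/notebook only
    if isinstance(to, Path):
        return False, [to]       # save to a single path only
    # a sequence: display if requested, save to every real path
    return "display" in to, [Path(p) for p in to if p != "display"]


print(split_log_targets("display"))  # (True, [])
```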

def save( self, path: Union[pathlib.Path, Sequence[pathlib.Path]] = PosixPath('{id}_summary_{now}')) -> List[pathlib.Path]:

Save the validation/test summary in JSON, Markdown or HTML format.

Returns:

List of file paths the summary was saved to.

Notes:

  • Format is chosen based on the suffix: .json, .md, .html.
  • If path has no suffix it is assumed to be a directory to which summary.json, summary.md and summary.html are saved.
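The `{id}` and `{now}` placeholders in the default path are filled via `str.format` with the resource id and a UTC timestamp, and the suffix then selects the writer. That expansion can be sketched on its own (`resolve_summary_paths` is a hypothetical helper mirroring `save`'s path handling; `affable-shark` is just an example id):

```python
from datetime import datetime, timezone
from pathlib import Path
from typing import List, Optional


def resolve_summary_paths(path: Path, resource_id: Optional[str]) -> List[Path]:
    """Expand a (possibly suffix-less, templated) path the way `save` does."""
    # a path without a suffix is treated as a directory holding all three formats
    if path.suffix:
        file_paths = [path]
    else:
        file_paths = [path / f"summary.{ext}" for ext in ("json", "md", "html")]

    # fill the {id} and {now} placeholders
    now = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return [
        Path(str(p).format(id=resource_id or "bioimageio", now=now))
        for p in file_paths
    ]


paths = resolve_summary_paths(Path("{id}_summary_{now}"), resource_id="affable-shark")
print(paths)  # e.g. [Path('affable-shark_summary_20250101T000000Z/summary.json'), ...]
```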
def save_json( self, path: pathlib.Path = PosixPath('summary.json'), *, indent: Optional[int] = 2):

Save validation/test summary as JSON file.

def save_markdown(self, path: pathlib.Path = PosixPath('summary.md')):

Save rendered validation/test summary as Markdown file.

def save_html(self, path: pathlib.Path = PosixPath('summary.html')) -> None:

Save rendered validation/test summary as HTML file.

@classmethod
def load_json(cls, path: pathlib.Path) -> Self:

Load validation/test summary from a suitable JSON file
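`save_json` and `load_json` form a lossless round trip through Pydantic's `model_dump_json` / `model_validate_json`. The same round trip, sketched with the standard library only (a toy `MiniSummary` dataclass stands in for the Pydantic model):

```python
import json
from dataclasses import asdict, dataclass
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import List


@dataclass
class MiniSummary:
    name: str
    status: str
    details: List[str]

    def save_json(self, path: Path, indent: int = 2) -> None:
        # create parent folders as needed, like the real save_json
        path.parent.mkdir(exist_ok=True, parents=True)
        path.write_text(json.dumps(asdict(self), indent=indent), encoding="utf-8")

    @classmethod
    def load_json(cls, path: Path) -> "MiniSummary":
        return cls(**json.loads(path.read_text(encoding="utf-8")))


with TemporaryDirectory() as d:
    original = MiniSummary("bioimageio validation", "passed", ["format check"])
    p = Path(d) / "summary.json"
    original.save_json(p)
    assert MiniSummary.load_json(p) == original  # lossless round trip
```

The real class additionally validates field types on load, so a summary saved by one bioimageio.spec version can be checked when read back by another.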

model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'allow'}

Configuration for the model; a dictionary conforming to `pydantic.config.ConfigDict`.