bioimageio.spec


Specifications for bioimage.io

This repository contains the specifications of the standard format defined by the bioimage.io community for the content (i.e., models, datasets and applications) of the bioimage.io website. Each item of the content is always described using a YAML 1.2 file named rdf.yaml or bioimageio.yaml. This rdf.yaml / bioimageio.yaml file, along with the files referenced in it, can be downloaded from or uploaded to the bioimage.io website and may be produced or consumed by bioimage.io-compatible consumers (e.g., image analysis software such as ilastik).

These specifications define the rules and format that bioimage.io-compatible resources must fulfill.

Note that the Python package PyYAML does not support YAML 1.2. We therefore use and recommend ruyaml. For differences, see https://ruamelyaml.readthedocs.io/en/latest/pyyaml.
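One practical consequence of the YAML 1.1 vs 1.2 difference can be sketched in plain Python (an illustrative sketch, not bioimageio.spec code): YAML 1.1, which PyYAML implements, resolves plain scalars such as `on`, `off`, `yes` and `no` to booleans, while the YAML 1.2 core schema only treats `true`/`false` as booleans.

```python
# Illustrative sketch (not part of bioimageio.spec): how YAML 1.1 and the
# YAML 1.2 core schema resolve plain scalars to booleans.
YAML_1_1_BOOLS = {"y", "yes", "n", "no", "true", "false", "on", "off"}


def resolves_to_bool_yaml_1_1(scalar: str) -> bool:
    # YAML 1.1 also accepts capitalized/upper-case variants such as "Yes"/"YES"
    return scalar.lower() in YAML_1_1_BOOLS


def resolves_to_bool_yaml_1_2(scalar: str) -> bool:
    # YAML 1.2 core schema: only these exact scalars are booleans
    return scalar in {"true", "false"}


# "on" is a boolean under YAML 1.1 but a plain string under YAML 1.2:
assert resolves_to_bool_yaml_1_1("on") and not resolves_to_bool_yaml_1_2("on")
```

So a tag value like `on` in an rdf.yaml stays a string under YAML 1.2, which is one reason a YAML 1.2 parser is recommended.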

Please also note that the best way to check whether your rdf.yaml file is bioimage.io-compliant is to call bioimageio.core.validate from the bioimageio.core Python package. The bioimageio.core Python package also provides the bioimageio command line interface (CLI) with the validate command:

bioimageio validate path/to/your/rdf.yaml

Format version overview

All bioimage.io description formats are defined as Pydantic models.

| Type | Format Version | Documentation¹ | Developer Documentation² |
| --- | --- | --- | --- |
| model | 0.5 | model 0.5 | ModelDescr_v0_5 |
| model | 0.4 | model 0.4 | ModelDescr_v0_4 |
| dataset | 0.3 | dataset 0.3 | DatasetDescr_v0_3 |
| dataset | 0.2 | dataset 0.2 | DatasetDescr_v0_2 |
| notebook | 0.3 | notebook 0.3 | NotebookDescr_v0_3 |
| notebook | 0.2 | notebook 0.2 | NotebookDescr_v0_2 |
| application | 0.3 | application 0.3 | ApplicationDescr_v0_3 |
| application | 0.2 | application 0.2 | ApplicationDescr_v0_2 |
| generic | 0.3 | - | GenericDescr_v0_3 |
| generic | 0.2 | - | GenericDescr_v0_2 |

JSON Schema

Simplified descriptions are available as JSON Schema (generated with Pydantic):

| bioimageio.spec version | JSON Schema | documentation³ |
| --- | --- | --- |
| latest | bioimageio_schema_latest.json | latest documentation |
| 0.5 | bioimageio_schema_v0-5.json | 0.5 documentation |

Note: bioimageio_schema_v0-5.json and bioimageio_schema_latest.json are identical, but bioimageio_schema_latest.json will eventually refer to the future bioimageio_schema_v0-6.json.

Flattened, interactive docs

A flattened view of the types used by the spec that also shows value constraints.

rendered

You can also generate these docs locally by running:

PYTHONPATH=./scripts python -m interactive_docs

Examples

We provide some bioimageio.yaml/rdf.yaml example files to describe models, applications, notebooks and datasets; more examples are available at bioimage.io. There is also an example notebook demonstrating how to programmatically access the models, applications, notebooks and datasets descriptions in Python. For integration of bioimageio resources we recommend the bioimageio.core Python package.
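To give a rough idea of the file shape (a hypothetical, abridged sketch with invented values, not one of the official examples; consult the linked examples for the actual required fields), a dataset description might look like:

```yaml
# Hypothetical, abridged bioimageio.yaml / rdf.yaml sketch (invented values)
format_version: 0.3.0
type: dataset
name: my-example-dataset
description: A short description of the dataset.
authors:
  - name: Jane Doe
cite:
  - text: Doe et al. 2024
license: CC-BY-4.0
```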

💁 Recommendations

  • Use the bioimageio.core Python package not only to validate the format of your bioimageio.yaml/rdf.yaml file, but also to test and deploy it (e.g. for model inference).
  • bioimageio.spec keeps evolving. Try to use and upgrade to the most current format version! Note: The command line interface bioimageio (part of bioimageio.core) has the update-format command to help you with that.

⌨ bioimageio command-line interface (CLI)

The bioimageio CLI has moved to bioimageio.core.

🖥 Installation

bioimageio.spec can be installed with either conda or pip. We recommend installing bioimageio.core instead to get access to the Python programmatic features available in the BioImage.IO community:

conda install -c conda-forge bioimageio.core

or

pip install -U bioimageio.core

Still, for a lighter installation or just testing, you can install the bioimageio.spec package on its own:

conda install -c conda-forge bioimageio.spec

or

pip install -U bioimageio.spec

🏞 Environment variables

TODO: link to settings in dev docs

🤝 How to contribute

♥ Contributors

<img alt="bioimageio.spec contributors" src="https://contrib.rocks/image?repo=bioimage-io/spec-bioimage-io" />

Made with contrib.rocks.

🛈 Versioning scheme

To keep the bioimageio.spec Python package version in sync with the (model) description format version, bioimageio.spec is versioned as MAJOR.MINOR.PATCH.LIB, where MAJOR.MINOR.PATCH corresponds to the latest model description format version implemented and LIB may be bumped for library changes that do not affect the format version. This change was introduced with bioimageio.spec 0.5.3.1.
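The scheme above can be illustrated with a small sketch (illustrative only, not bioimageio.spec code):

```python
# Illustrative sketch: split a bioimageio.spec package version into the
# implemented description format version (MAJOR.MINOR.PATCH) and the
# library release counter (LIB).
def split_spec_version(version: str) -> tuple[str, int]:
    major, minor, patch, lib = version.split(".")
    return ".".join((major, minor, patch)), int(lib)


format_version, lib_release = split_spec_version("0.5.3.1")
# format_version == "0.5.3" (the implemented format), lib_release == 1
```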

Δ Changelog

The changelog of the bioimageio.spec Python package and the changes to the resource description format it implements can be found here.


  1. JSON Schema based documentation generated with json-schema-for-humans

  2. JSON Schema based documentation generated with json-schema-for-humans

  3. Part of the bioimageio.spec package documentation generated with pdoc

"""
.. include:: ../../README.md
"""

from . import (
    application,
    common,
    conda_env,
    dataset,
    generic,
    model,
    pretty_validation_errors,
    summary,
    utils,
)
from ._description import (
    LatestResourceDescr,
    ResourceDescr,
    SpecificResourceDescr,
    build_description,
    dump_description,
    validate_format,
)
from ._get_conda_env import BioimageioCondaEnv, get_conda_env
from ._internal import settings
from ._internal.common_nodes import InvalidDescr
from ._internal.constants import VERSION
from ._internal.validation_context import ValidationContext, get_validation_context
from ._io import (
    load_dataset_description,
    load_description,
    load_description_and_validate_format_only,
    load_model_description,
    save_bioimageio_yaml_only,
    update_format,
    update_hashes,
)
from ._package import (
    get_resource_package_content,
    save_bioimageio_package,
    save_bioimageio_package_as_folder,
    save_bioimageio_package_to_stream,
)
from ._upload import upload
from .application import AnyApplicationDescr, ApplicationDescr
from .dataset import AnyDatasetDescr, DatasetDescr
from .generic import AnyGenericDescr, GenericDescr
from .model import AnyModelDescr, ModelDescr
from .notebook import AnyNotebookDescr, NotebookDescr
from .pretty_validation_errors import enable_pretty_validation_errors_in_ipynb
from .summary import ValidationSummary

__version__ = VERSION

__all__ = [
    "__version__",
    "AnyApplicationDescr",
    "AnyDatasetDescr",
    "AnyGenericDescr",
    "AnyModelDescr",
    "AnyNotebookDescr",
    "application",
    "ApplicationDescr",
    "BioimageioCondaEnv",
    "build_description",
    "common",
    "conda_env",
    "dataset",
    "DatasetDescr",
    "dump_description",
    "enable_pretty_validation_errors_in_ipynb",
    "generic",
    "GenericDescr",
    "get_conda_env",
    "get_resource_package_content",
    "get_validation_context",
    "InvalidDescr",
    "LatestResourceDescr",
    "load_dataset_description",
    "load_description_and_validate_format_only",
    "load_description",
    "load_model_description",
    "model",
    "ModelDescr",
    "NotebookDescr",
    "pretty_validation_errors",
    "ResourceDescr",
    "save_bioimageio_package_as_folder",
    "save_bioimageio_package_to_stream",
    "save_bioimageio_package",
    "save_bioimageio_yaml_only",
    "settings",
    "SpecificResourceDescr",
    "summary",
    "update_format",
    "update_hashes",
    "upload",
    "utils",
    "validate_format",
    "ValidationContext",
    "ValidationSummary",
]
__version__ = '0.5.5.0'
AnyApplicationDescr = Annotated[Union[application.v0_2.ApplicationDescr, ApplicationDescr], Discriminator('format_version')]
AnyDatasetDescr = Annotated[Union[dataset.v0_2.DatasetDescr, DatasetDescr], Discriminator('format_version')]
AnyGenericDescr = Annotated[Union[generic.v0_2.GenericDescr, GenericDescr], Discriminator('format_version')]
AnyModelDescr = Annotated[Union[model.v0_4.ModelDescr, ModelDescr], Discriminator('format_version')]
AnyNotebookDescr = Annotated[Union[notebook.v0_2.NotebookDescr, NotebookDescr], Discriminator('format_version')]
class ApplicationDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class ApplicationDescr(GenericDescrBase):
    """Bioimage.io description of an application."""

    implemented_type: ClassVar[Literal["application"]] = "application"
    if TYPE_CHECKING:
        type: Literal["application"] = "application"
    else:
        type: Literal["application"]

    id: Optional[ApplicationId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[ApplicationId] = None
    """The description from which this one is derived"""

    source: Annotated[
        FAIR[Optional[FileSource_]],
        Field(description="URL or path to the source of the application"),
    ] = None
    """The primary source of the application"""

Bioimage.io description of an application.

implemented_type: ClassVar[Literal['application']] = 'application'

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

The description from which this one is derived

source: Annotated[FAIR[Optional[FileSource_]], Field(description='URL or path to the source of the application')]

The primary source of the application

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'allow_inf_nan': False, 'extra': 'forbid', 'frozen': False, 'model_title_generator': <function _node_title_generator>, 'populate_by_name': True, 'revalidate_instances': 'always', 'use_attribute_docstrings': True, 'validate_assignment': True, 'validate_default': True, 'validate_return': True, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
class BioimageioCondaEnv(bioimageio.spec.conda_env.CondaEnv):
class BioimageioCondaEnv(CondaEnv):
    """A special `CondaEnv` that
    - automatically adds bioimageio specific dependencies
    - sorts dependencies
    """

    @model_validator(mode="after")
    def _normalize_bioimageio_conda_env(self):
        """update a conda env such that we have bioimageio.core and sorted dependencies"""
        for req_channel in ("conda-forge", "nodefaults"):
            if req_channel not in self.channels:
                self.channels.append(req_channel)

        if "defaults" in self.channels:
            warnings.warn("removing 'defaults' from conda-channels")
            self.channels.remove("defaults")

        if "pip" not in self.dependencies:
            self.dependencies.append("pip")

        for dep in self.dependencies:
            if isinstance(dep, PipDeps):
                pip_section = dep
                pip_section.pip.sort()
                break
        else:
            pip_section = None

        if (
            pip_section is None
            or not any(pd.startswith("bioimageio.core") for pd in pip_section.pip)
        ) and not any(
            d.startswith("bioimageio.core")
            or d.startswith("conda-forge::bioimageio.core")
            for d in self.dependencies
            if not isinstance(d, PipDeps)
        ):
            self.dependencies.append("conda-forge::bioimageio.core")

        self.dependencies.sort()
        return self

A special CondaEnv that

  • automatically adds bioimageio specific dependencies
  • sorts dependencies
model_config: ClassVar[pydantic.config.ConfigDict] = {}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
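The normalization performed by `_normalize_bioimageio_conda_env` can be sketched in plain Python (a simplified sketch operating on plain strings only; the real validator works on the pydantic model and additionally sorts pip sections and checks them for bioimageio.core):

```python
# Simplified sketch of BioimageioCondaEnv's normalization (assumption:
# dependencies are plain strings; pip sections are not modeled here).
def normalize_conda_env(channels: list, dependencies: list):
    # ensure required channels are present
    for required in ("conda-forge", "nodefaults"):
        if required not in channels:
            channels.append(required)
    # the real code warns before removing 'defaults'
    if "defaults" in channels:
        channels.remove("defaults")
    # ensure pip is available
    if "pip" not in dependencies:
        dependencies.append("pip")
    # ensure bioimageio.core is a dependency
    if not any(
        d.startswith(("bioimageio.core", "conda-forge::bioimageio.core"))
        for d in dependencies
    ):
        dependencies.append("conda-forge::bioimageio.core")
    dependencies.sort()
    return channels, dependencies


channels, deps = normalize_conda_env(["defaults"], ["numpy"])
# channels: ["conda-forge", "nodefaults"]; deps sorted, with pip and bioimageio.core added
```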

def build_description( content: Mapping[str, YamlValueView], /, *, context: Optional[ValidationContext] = None, format_version: Union[Literal['latest', 'discover'], str] = 'discover') -> Union[ResourceDescr, InvalidDescr]:
def build_description(
    content: BioimageioYamlContentView,
    /,
    *,
    context: Optional[ValidationContext] = None,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
) -> Union[ResourceDescr, InvalidDescr]:
    """build a bioimage.io resource description from an RDF's content.

    Use `load_description` if you want to build a resource description from an rdf.yaml
    or bioimage.io zip-package.

    Args:
        content: loaded rdf.yaml file (loaded with YAML, not bioimageio.spec)
        context: validation context to use during validation
        format_version: (optional) use this argument to load the resource and
                        convert its metadata to a higher format_version

    Returns:
        An object holding all metadata of the bioimage.io resource

    """

    return build_description_impl(
        content,
        context=context,
        format_version=format_version,
        get_rd_class=_get_rd_class,
    )

build a bioimage.io resource description from an RDF's content.

Use load_description if you want to build a resource description from an rdf.yaml or bioimage.io zip-package.

Arguments:
  • content: loaded rdf.yaml file (loaded with YAML, not bioimageio.spec)
  • context: validation context to use during validation
  • format_version: (optional) use this argument to load the resource and convert its metadata to a higher format_version
Returns:

An object holding all metadata of the bioimage.io resource

class DatasetDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class DatasetDescr(GenericDescrBase):
    """A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage
    processing.
    """

    implemented_type: ClassVar[Literal["dataset"]] = "dataset"
    if TYPE_CHECKING:
        type: Literal["dataset"] = "dataset"
    else:
        type: Literal["dataset"]

    id: Optional[DatasetId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[DatasetId] = None
    """The description from which this one is derived"""

    source: FAIR[Optional[HttpUrl]] = None
    """URL to the source of the dataset."""

    @model_validator(mode="before")
    @classmethod
    def _convert(cls, data: Dict[str, Any], /) -> Dict[str, Any]:
        if (
            data.get("type") == "dataset"
            and isinstance(fv := data.get("format_version"), str)
            and fv.startswith("0.2.")
        ):
            old = DatasetDescr02.load(data)
            if isinstance(old, InvalidDescr):
                return data

            return cast(
                Dict[str, Any],
                (cls if TYPE_CHECKING else dict)(
                    attachments=(
                        []
                        if old.attachments is None
                        else [FileDescr(source=f) for f in old.attachments.files]
                    ),
                    authors=[
                        _author_conv.convert_as_dict(a) for a in old.authors
                    ],  # pyright: ignore[reportArgumentType]
                    badges=old.badges,
                    cite=[
                        {"text": c.text, "doi": c.doi, "url": c.url} for c in old.cite
                    ],  # pyright: ignore[reportArgumentType]
                    config=old.config,  # pyright: ignore[reportArgumentType]
                    covers=old.covers,
                    description=old.description,
                    documentation=old.documentation,
                    format_version="0.3.0",
                    git_repo=old.git_repo,  # pyright: ignore[reportArgumentType]
                    icon=old.icon,
                    id=None if old.id is None else DatasetId(old.id),
                    license=old.license,  # type: ignore
                    links=old.links,
                    maintainers=[
                        _maintainer_conv.convert_as_dict(m) for m in old.maintainers
                    ],  # pyright: ignore[reportArgumentType]
                    name=old.name,
                    source=old.source,
                    tags=old.tags,
                    type=old.type,
                    uploader=old.uploader,
                    version=old.version,
                    **(old.model_extra or {}),
                ),
            )

        return data

A bioimage.io dataset resource description file (dataset RDF) describes a dataset relevant to bioimage processing.

implemented_type: ClassVar[Literal['dataset']] = 'dataset'

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

The description from which this one is derived

source: FAIR[Optional[HttpUrl]]

URL to the source of the dataset.

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'allow_inf_nan': False, 'extra': 'forbid', 'frozen': False, 'model_title_generator': <function _node_title_generator>, 'populate_by_name': True, 'revalidate_instances': 'always', 'use_attribute_docstrings': True, 'validate_assignment': True, 'validate_default': True, 'validate_return': True, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
def dump_description( rd: Union[ResourceDescr, InvalidDescr], /, *, exclude_unset: bool = True, exclude_defaults: bool = False) -> Dict[str, YamlValue]:
def dump_description(
    rd: Union[ResourceDescr, InvalidDescr],
    /,
    *,
    exclude_unset: bool = True,
    exclude_defaults: bool = False,
) -> BioimageioYamlContent:
    """Converts a resource to a dictionary containing only simple types that can directly be serialized to YAML.

    Args:
        rd: bioimageio resource description
        exclude_unset: Exclude fields that have not explicitly been set.
        exclude_defaults: Exclude fields that have the default value (even if set explicitly).
    """
    return rd.model_dump(
        mode="json", exclude_unset=exclude_unset, exclude_defaults=exclude_defaults
    )

Converts a resource to a dictionary containing only simple types that can directly be serialized to YAML.

Arguments:
  • rd: bioimageio resource description
  • exclude_unset: Exclude fields that have not explicitly been set.
  • exclude_defaults: Exclude fields that have the default value (even if set explicitly).
def enable_pretty_validation_errors_in_ipynb():
def enable_pretty_validation_errors_in_ipynb():
    """DEPRECATED; this is enabled by default at import time."""
    warnings.warn(
        "deprecated, this is enabled by default at import time.",
        DeprecationWarning,
        stacklevel=2,
    )

DEPRECATED; this is enabled by default at import time.

class GenericDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
class GenericDescr(GenericDescrBase, extra="ignore"):
    """Specification of the fields used in a generic bioimage.io-compliant resource description file (RDF).

    An RDF is a YAML file that describes a resource such as a model, a dataset, or a notebook.
    Note that those resources are described with a type-specific RDF.
    Use this generic resource description, if none of the known specific types matches your resource.
    """

    implemented_type: ClassVar[Literal["generic"]] = "generic"
    if TYPE_CHECKING:
        type: Annotated[str, LowerCase] = "generic"
        """The resource type assigns a broad category to the resource."""
    else:
        type: Annotated[str, LowerCase]
        """The resource type assigns a broad category to the resource."""

    id: Optional[
        Annotated[ResourceId, Field(examples=["affable-shark", "ambitious-sloth"])]
    ] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    parent: Optional[ResourceId] = None
    """The description from which this one is derived"""

    source: Optional[HttpUrl] = None
    """The primary source of the resource"""

    @field_validator("type", mode="after")
    @classmethod
    def check_specific_types(cls, value: str) -> str:
        if value in KNOWN_SPECIFIC_RESOURCE_TYPES:
            raise ValueError(
                f"Use the {value} description instead of this generic description for"
                + f" your '{value}' resource."
            )

        return value

Specification of the fields used in a generic bioimage.io-compliant resource description file (RDF).

An RDF is a YAML file that describes a resource such as a model, a dataset, or a notebook. Note that those resources are described with a type-specific RDF. Use this generic resource description, if none of the known specific types matches your resource.

implemented_type: ClassVar[Literal['generic']] = 'generic'
id: Optional[Annotated[ResourceId, Field(examples=['affable-shark', 'ambitious-sloth'])]]

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

The description from which this one is derived

The primary source of the resource

@field_validator('type', mode='after')
@classmethod
def check_specific_types(cls, value: str) -> str:
    @field_validator("type", mode="after")
    @classmethod
    def check_specific_types(cls, value: str) -> str:
        if value in KNOWN_SPECIFIC_RESOURCE_TYPES:
            raise ValueError(
                f"Use the {value} description instead of this generic description for"
                + f" your '{value}' resource."
            )

        return value
implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'allow_inf_nan': False, 'extra': 'ignore', 'frozen': False, 'model_title_generator': <function _node_title_generator>, 'populate_by_name': True, 'revalidate_instances': 'always', 'use_attribute_docstrings': True, 'validate_assignment': True, 'validate_default': True, 'validate_return': True, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
def get_conda_env(
    *,
    entry: SupportedWeightsEntry,
    env_name: Optional[Union[Literal["DROP"], str]] = None,
) -> BioimageioCondaEnv:
    """get the recommended Conda environment for a given weights entry description"""
    if isinstance(entry, (v0_4.OnnxWeightsDescr, v0_5.OnnxWeightsDescr)):
        conda_env = _get_default_onnx_env(opset_version=entry.opset_version)
    elif isinstance(
        entry,
        (
            v0_4.PytorchStateDictWeightsDescr,
            v0_5.PytorchStateDictWeightsDescr,
            v0_4.TorchscriptWeightsDescr,
            v0_5.TorchscriptWeightsDescr,
        ),
    ):
        if (
            isinstance(entry, v0_5.TorchscriptWeightsDescr)
            or entry.dependencies is None
        ):
            conda_env = _get_default_pytorch_env(pytorch_version=entry.pytorch_version)
        else:
            conda_env = _get_env_from_deps(entry.dependencies)

    elif isinstance(
        entry,
        (
            v0_4.TensorflowSavedModelBundleWeightsDescr,
            v0_5.TensorflowSavedModelBundleWeightsDescr,
        ),
    ):
        if entry.dependencies is None:
            conda_env = _get_default_tf_env(tensorflow_version=entry.tensorflow_version)
        else:
            conda_env = _get_env_from_deps(entry.dependencies)
    elif isinstance(
        entry,
        (v0_4.KerasHdf5WeightsDescr, v0_5.KerasHdf5WeightsDescr),
    ):
        conda_env = _get_default_tf_env(tensorflow_version=entry.tensorflow_version)
    else:
        assert_never(entry)

    if env_name == "DROP":
        conda_env.name = None
    elif env_name is not None:
        conda_env.name = env_name

    return conda_env

get the recommended Conda environment for a given weights entry description
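The tail of `get_conda_env` implements a small naming convention worth spelling out: `env_name="DROP"` clears the environment name, any other string overrides it, and `None` keeps the default. A standalone sketch of just that rule, where `CondaEnv` is a hypothetical stand-in for `BioimageioCondaEnv`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CondaEnv:
    """Hypothetical stand-in for BioimageioCondaEnv (name field only)."""
    name: Optional[str] = None

def apply_env_name(env: CondaEnv, env_name: Optional[str]) -> CondaEnv:
    # mirrors the final branch of get_conda_env:
    # "DROP" removes the name, other strings override it, None keeps it
    if env_name == "DROP":
        env.name = None
    elif env_name is not None:
        env.name = env_name
    return env
```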

def get_resource_package_content(rd: ResourceDescr, /, *, bioimageio_yaml_file_name: FileName = 'rdf.yaml', weights_priority_order: Optional[Sequence[WeightsFormat]] = None) -> Dict[FileName, Union[HttpUrl, AbsoluteFilePath, BioimageioYamlContent, ZipPath]]:
def get_resource_package_content(
    rd: ResourceDescr,
    /,
    *,
    bioimageio_yaml_file_name: FileName = BIOIMAGEIO_YAML,
    weights_priority_order: Optional[Sequence[WeightsFormat]] = None,  # model only
) -> Dict[FileName, Union[HttpUrl, AbsoluteFilePath, BioimageioYamlContent, ZipPath]]:
    ret: Dict[
        FileName, Union[HttpUrl, AbsoluteFilePath, BioimageioYamlContent, ZipPath]
    ] = {}
    for k, v in get_package_content(
        rd,
        bioimageio_yaml_file_name=bioimageio_yaml_file_name,
        weights_priority_order=weights_priority_order,
    ).items():
        if isinstance(v, FileDescr):
            if isinstance(v.source, (Path, RelativeFilePath)):
                ret[k] = v.source.absolute()
            else:
                ret[k] = v.source

        else:
            ret[k] = v

    return ret
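The loop above boils down to: file descriptions are flattened to their source (made absolute for local paths), while URLs and in-memory YAML content pass through unchanged. A self-contained sketch of that flattening, using a minimal stand-in for `FileDescr` (not the real class):

```python
from pathlib import Path
from typing import Any, Dict

class FileDescrStub:
    """Minimal stand-in for bioimageio.spec's FileDescr (source field only)."""
    def __init__(self, source: Any):
        self.source = source

def flatten_package_content(content: Dict[str, Any]) -> Dict[str, Any]:
    ret: Dict[str, Any] = {}
    for k, v in content.items():
        if isinstance(v, FileDescrStub):
            # local paths are resolved to absolute paths; URLs pass through
            ret[k] = v.source.absolute() if isinstance(v.source, Path) else v.source
        else:
            ret[k] = v
    return ret
```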
def get_validation_context(default: Optional[ValidationContext] = None) -> ValidationContext:
def get_validation_context(
    default: Optional[ValidationContext] = None,
) -> ValidationContext:
    """Get the currently active validation context (or a default)"""
    return _validation_context_var.get() or default or ValidationContext()

Get the currently active validation context (or a default)

class InvalidDescr(
    ResourceDescrBase,
    extra="allow",
    title="An invalid resource description",
):
    """A representation of an invalid resource description"""

    implemented_type: ClassVar[Literal["unknown"]] = "unknown"
    if TYPE_CHECKING:  # see NodeWithExplicitlySetFields
        type: Any = "unknown"
    else:
        type: Any

    implemented_format_version: ClassVar[Literal["unknown"]] = "unknown"
    if TYPE_CHECKING:  # see NodeWithExplicitlySetFields
        format_version: Any = "unknown"
    else:
        format_version: Any

A representation of an invalid resource description

implemented_type: ClassVar[Literal['unknown']] = 'unknown'
implemented_format_version: ClassVar[Literal['unknown']] = 'unknown'
implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 0, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'allow_inf_nan': False, 'extra': 'allow', 'frozen': False, 'model_title_generator': <function _node_title_generator>, 'populate_by_name': True, 'revalidate_instances': 'always', 'use_attribute_docstrings': True, 'validate_assignment': True, 'validate_default': True, 'validate_return': True, 'validate_by_alias': True, 'validate_by_name': True, 'title': 'An invalid resource description'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
def init_private_attributes(self: BaseModel, context: Any, /) -> None:
    """This function is meant to behave like a BaseModel method to initialise private attributes.

    It takes context as an argument since that's what pydantic-core passes when calling it.

    Args:
        self: The BaseModel instance.
        context: The context.
    """
    if getattr(self, '__pydantic_private__', None) is None:
        pydantic_private = {}
        for name, private_attr in self.__private_attributes__.items():
            default = private_attr.get_default()
            if default is not PydanticUndefined:
                pydantic_private[name] = default
        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
LatestResourceDescr = Union[Annotated[Union[ApplicationDescr, DatasetDescr, ModelDescr, NotebookDescr], Discriminator('type')], GenericDescr]
def load_dataset_description(source: Union[PermissiveFileSource, ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[Sha256]]] = None, sha256: Optional[Sha256] = None) -> AnyDatasetDescr:
def load_dataset_description(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> AnyDatasetDescr:
    """same as `load_description`, but additionally ensures that the loaded
    description is valid and of type 'dataset'.
    """
    rd = load_description(
        source,
        format_version=format_version,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
        sha256=sha256,
    )
    return ensure_description_is_dataset(rd)

same as load_description, but additionally ensures that the loaded description is valid and of type 'dataset'.

def load_description_and_validate_format_only(source: Union[PermissiveFileSource, ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[Sha256]]] = None, sha256: Optional[Sha256] = None) -> ValidationSummary:
def load_description_and_validate_format_only(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> ValidationSummary:
    """same as `load_description`, but only return the validation summary.

    Returns:
        Validation summary of the bioimage.io resource found at `source`.

    """
    rd = load_description(
        source,
        format_version=format_version,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
        sha256=sha256,
    )
    assert rd.validation_summary is not None
    return rd.validation_summary

same as load_description, but only return the validation summary.

Returns:

Validation summary of the bioimage.io resource found at source.

def load_description(source: Union[PermissiveFileSource, ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[Sha256]]] = None, sha256: Optional[Sha256] = None) -> Union[ResourceDescr, InvalidDescr]:
def load_description(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> Union[ResourceDescr, InvalidDescr]:
    """load a bioimage.io resource description

    Args:
        source: Path or URL to an rdf.yaml or a bioimage.io package
                (zip-file with rdf.yaml in it).
        format_version: (optional) Use this argument to load the resource and
                        convert its metadata to a higher format_version.
        perform_io_checks: Whether or not to perform validation that requires
                           file IO, e.g. downloading remote files. The existence
                           of local absolute file paths is still being checked.
        known_files: Allows bypassing the download and hashing of referenced
                     files (even if perform_io_checks is True).
                     Checked files will be added to this dictionary
                     with their SHA-256 value.
        sha256: Optional SHA-256 value of **source**

    Returns:
        An object holding all metadata of the bioimage.io resource

    """
    if isinstance(source, ResourceDescrBase):
        name = getattr(source, "name", f"{str(source)[:10]}...")
        logger.warning("returning already loaded description '{}' as is", name)
        return source  # pyright: ignore[reportReturnType]

    opened = open_bioimageio_yaml(source, sha256=sha256)

    context = get_validation_context().replace(
        root=opened.original_root,
        file_name=opened.original_file_name,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
    )

    return build_description(
        opened.content,
        context=context,
        format_version=format_version,
    )

load a bioimage.io resource description

Arguments:
  • source: Path or URL to an rdf.yaml or a bioimage.io package (zip-file with rdf.yaml in it).
  • format_version: (optional) Use this argument to load the resource and convert its metadata to a higher format_version.
  • perform_io_checks: Whether or not to perform validation that requires file IO, e.g. downloading remote files. The existence of local absolute file paths is still being checked.
  • known_files: Allows bypassing the download and hashing of referenced files (even if perform_io_checks is True). Checked files will be added to this dictionary with their SHA-256 value.
  • sha256: Optional SHA-256 value of source
Returns:

An object holding all metadata of the bioimage.io resource

def load_model_description(source: Union[PermissiveFileSource, ZipFile], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[Sha256]]] = None, sha256: Optional[Sha256] = None) -> AnyModelDescr:
def load_model_description(
    source: Union[PermissiveFileSource, ZipFile],
    /,
    *,
    format_version: Union[FormatVersionPlaceholder, str] = DISCOVER,
    perform_io_checks: Optional[bool] = None,
    known_files: Optional[Dict[str, Optional[Sha256]]] = None,
    sha256: Optional[Sha256] = None,
) -> AnyModelDescr:
    """same as `load_description`, but additionally ensures that the loaded
    description is valid and of type 'model'.

    Raises:
        ValueError: for invalid or non-model resources
    """
    rd = load_description(
        source,
        format_version=format_version,
        perform_io_checks=perform_io_checks,
        known_files=known_files,
        sha256=sha256,
    )
    return ensure_description_is_model(rd)

same as load_description, but additionally ensures that the loaded description is valid and of type 'model'.

Raises:
  • ValueError: for invalid or non-model resources
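The ensure step after loading follows a simple narrowing pattern: reject invalid descriptions, reject wrong resource types, return the narrowed description. A sketch with placeholder classes (the real `ensure_description_is_model` lives in bioimageio.spec; the stub names here are hypothetical):

```python
class InvalidDescrStub:
    """Placeholder for InvalidDescr."""

class ModelDescrStub:
    """Placeholder for a (v0.4/v0.5) model description."""

def ensure_is_model(rd):
    # mirror the narrowing performed by ensure_description_is_model:
    # invalid descriptions and non-model resources raise ValueError
    if isinstance(rd, InvalidDescrStub):
        raise ValueError("invalid resource description")
    if not isinstance(rd, ModelDescrStub):
        raise ValueError(f"expected a model description, got {type(rd).__name__}")
    return rd
```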
class ModelDescr(GenericModelDescrBase):
    """Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights.
    These fields are typically stored in a YAML file which we call a model resource description file (model RDF).
    """

    implemented_format_version: ClassVar[Literal["0.5.5"]] = "0.5.5"
    if TYPE_CHECKING:
        format_version: Literal["0.5.5"] = "0.5.5"
    else:
        format_version: Literal["0.5.5"]
        """Version of the bioimage.io model description specification used.
        When creating a new model always use the latest micro/patch version described here.
        The `format_version` is important for any consumer software to understand how to parse the fields.
        """

    implemented_type: ClassVar[Literal["model"]] = "model"
    if TYPE_CHECKING:
        type: Literal["model"] = "model"
    else:
        type: Literal["model"]
        """Specialized resource type 'model'"""

    id: Optional[ModelId] = None
    """bioimage.io-wide unique resource identifier
    assigned by bioimage.io; version **un**specific."""

    authors: FAIR[List[Author]] = Field(
        default_factory=cast(Callable[[], List[Author]], list)
    )
    """The authors are the creators of the model RDF and the primary points of contact."""

    documentation: FAIR[Optional[FileSource_documentation]] = None
    """URL or relative path to a markdown file with additional documentation.
    The recommended documentation file name is `README.md`. An `.md` suffix is mandatory.
    The documentation should include a '#[#] Validation' (sub)section
    with details on how to quantitatively validate the model on unseen data."""

    @field_validator("documentation", mode="after")
    @classmethod
    def _validate_documentation(
        cls, value: Optional[FileSource_documentation]
    ) -> Optional[FileSource_documentation]:
        if not get_validation_context().perform_io_checks or value is None:
            return value

        doc_reader = get_reader(value)
        doc_content = doc_reader.read().decode(encoding="utf-8")
        if not re.search("#.*[vV]alidation", doc_content):
            issue_warning(
                "No '# Validation' (sub)section found in {value}.",
                value=value,
                field="documentation",
            )

        return value

    inputs: NotEmpty[Sequence[InputTensorDescr]]
    """Describes the input tensors expected by this model."""

    @field_validator("inputs", mode="after")
    @classmethod
    def _validate_input_axes(
        cls, inputs: Sequence[InputTensorDescr]
    ) -> Sequence[InputTensorDescr]:
        input_size_refs = cls._get_axes_with_independent_size(inputs)

        for i, ipt in enumerate(inputs):
            valid_independent_refs: Dict[
                Tuple[TensorId, AxisId],
                Tuple[TensorDescr, AnyAxis, Union[int, ParameterizedSize]],
            ] = {
                **{
                    (ipt.id, a.id): (ipt, a, a.size)
                    for a in ipt.axes
                    if not isinstance(a, BatchAxis)
                    and isinstance(a.size, (int, ParameterizedSize))
                },
                **input_size_refs,
            }
            for a, ax in enumerate(ipt.axes):
                cls._validate_axis(
                    "inputs",
                    i=i,
                    tensor_id=ipt.id,
                    a=a,
                    axis=ax,
                    valid_independent_refs=valid_independent_refs,
                )
        return inputs

    @staticmethod
    def _validate_axis(
        field_name: str,
        i: int,
        tensor_id: TensorId,
        a: int,
        axis: AnyAxis,
        valid_independent_refs: Dict[
            Tuple[TensorId, AxisId],
            Tuple[TensorDescr, AnyAxis, Union[int, ParameterizedSize]],
        ],
    ):
        if isinstance(axis, BatchAxis) or isinstance(
            axis.size, (int, ParameterizedSize, DataDependentSize)
        ):
            return
        elif not isinstance(axis.size, SizeReference):
            assert_never(axis.size)

        # validate axis.size SizeReference
        ref = (axis.size.tensor_id, axis.size.axis_id)
        if ref not in valid_independent_refs:
            raise ValueError(
                "Invalid tensor axis reference at"
                + f" {field_name}[{i}].axes[{a}].size: {axis.size}."
            )
        if ref == (tensor_id, axis.id):
            raise ValueError(
                "Self-referencing not allowed for"
                + f" {field_name}[{i}].axes[{a}].size: {axis.size}"
            )
        if axis.type == "channel":
            if valid_independent_refs[ref][1].type != "channel":
                raise ValueError(
                    "A channel axis' size may only reference another fixed size"
                    + " channel axis."
                )
            if isinstance(axis.channel_names, str) and "{i}" in axis.channel_names:
                ref_size = valid_independent_refs[ref][2]
                assert isinstance(ref_size, int), (
                    "channel axis ref (another channel axis) has to specify fixed"
                    + " size"
                )
                generated_channel_names = [
                    Identifier(axis.channel_names.format(i=i))
                    for i in range(1, ref_size + 1)
                ]
                axis.channel_names = generated_channel_names

        if (ax_unit := getattr(axis, "unit", None)) != (
            ref_unit := getattr(valid_independent_refs[ref][1], "unit", None)
        ):
            raise ValueError(
                "The units of an axis and its reference axis need to match, but"
                + f" '{ax_unit}' != '{ref_unit}'."
            )
        ref_axis = valid_independent_refs[ref][1]
        if isinstance(ref_axis, BatchAxis):
            raise ValueError(
                f"Invalid reference axis '{ref_axis.id}' for {tensor_id}.{axis.id}"
                + " (a batch axis is not allowed as reference)."
            )

        if isinstance(axis, WithHalo):
            min_size = axis.size.get_size(axis, ref_axis, n=0)
            if (min_size - 2 * axis.halo) < 1:
                raise ValueError(
                    f"axis {axis.id} with minimum size {min_size} is too small for halo"
                    + f" {axis.halo}."
                )

            input_halo = axis.halo * axis.scale / ref_axis.scale
            if input_halo != int(input_halo) or input_halo % 2 == 1:
                raise ValueError(
                    f"input_halo {input_halo} (output_halo {axis.halo} *"
                    + f" output_scale {axis.scale} / input_scale {ref_axis.scale})"
                    + f"     {tensor_id}.{axis.id}."
                )

    @model_validator(mode="after")
    def _validate_test_tensors(self) -> Self:
        if not get_validation_context().perform_io_checks:
            return self

        test_output_arrays = [
            None if descr.test_tensor is None else load_array(descr.test_tensor)
            for descr in self.outputs
        ]
        test_input_arrays = [
            None if descr.test_tensor is None else load_array(descr.test_tensor)
            for descr in self.inputs
        ]

        tensors = {
            descr.id: (descr, array)
            for descr, array in zip(
                chain(self.inputs, self.outputs), test_input_arrays + test_output_arrays
            )
        }
        validate_tensors(tensors, tensor_origin="test_tensor")

        output_arrays = {
            descr.id: array for descr, array in zip(self.outputs, test_output_arrays)
        }
        for rep_tol in self.config.bioimageio.reproducibility_tolerance:
            if not rep_tol.absolute_tolerance:
                continue

            if rep_tol.output_ids:
                out_arrays = {
                    oid: a
                    for oid, a in output_arrays.items()
                    if oid in rep_tol.output_ids
                }
            else:
                out_arrays = output_arrays

            for out_id, array in out_arrays.items():
                if array is None:
                    continue

                if rep_tol.absolute_tolerance > (max_test_value := array.max()) * 0.01:
                    raise ValueError(
                        "config.bioimageio.reproducibility_tolerance.absolute_tolerance="
                        + f"{rep_tol.absolute_tolerance} > 0.01*{max_test_value}"
                        + f" (1% of the maximum value of the test tensor '{out_id}')"
                    )

        return self

    @model_validator(mode="after")
    def _validate_tensor_references_in_proc_kwargs(self, info: ValidationInfo) -> Self:
        ipt_refs = {t.id for t in self.inputs}
        out_refs = {t.id for t in self.outputs}
        for ipt in self.inputs:
            for p in ipt.preprocessing:
                ref = p.kwargs.get("reference_tensor")
                if ref is None:
                    continue
                if ref not in ipt_refs:
                    raise ValueError(
                        f"`reference_tensor` '{ref}' not found. Valid input tensor"
                        + f" references are: {ipt_refs}."
                    )

        for out in self.outputs:
            for p in out.postprocessing:
                ref = p.kwargs.get("reference_tensor")
                if ref is None:
                    continue

                if ref not in ipt_refs and ref not in out_refs:
                    raise ValueError(
                        f"`reference_tensor` '{ref}' not found. Valid tensor references"
                        + f" are: {ipt_refs | out_refs}."
                    )

        return self

    # TODO: use validate funcs in validate_test_tensors
    # def validate_inputs(self, input_tensors: Mapping[TensorId, NDArray[Any]]) -> Mapping[TensorId, NDArray[Any]]:

    name: Annotated[
        str,
        RestrictCharacters(string.ascii_letters + string.digits + "_+- ()"),
        MinLen(5),
        MaxLen(128),
        warn(MaxLen(64), "Name longer than 64 characters.", INFO),
    ]
    """A human-readable name of this model.
    It should be no longer than 64 characters
    and may only contain letters, numbers, underscores, minus signs, parentheses and spaces.
    We recommend choosing a name that refers to the model's task and image modality.
    """

    outputs: NotEmpty[Sequence[OutputTensorDescr]]
    """Describes the output tensors."""

    @field_validator("outputs", mode="after")
    @classmethod
    def _validate_tensor_ids(
        cls, outputs: Sequence[OutputTensorDescr], info: ValidationInfo
    ) -> Sequence[OutputTensorDescr]:
        tensor_ids = [
            t.id for t in info.data.get("inputs", []) + info.data.get("outputs", [])
        ]
        duplicate_tensor_ids: List[str] = []
        seen: Set[str] = set()
        for t in tensor_ids:
            if t in seen:
                duplicate_tensor_ids.append(t)

            seen.add(t)

        if duplicate_tensor_ids:
            raise ValueError(f"Duplicate tensor ids: {duplicate_tensor_ids}")

        return outputs

    @staticmethod
    def _get_axes_with_parameterized_size(
2897        io: Union[Sequence[InputTensorDescr], Sequence[OutputTensorDescr]],
2898    ):
2899        return {
2900            f"{t.id}.{a.id}": (t, a, a.size)
2901            for t in io
2902            for a in t.axes
2903            if not isinstance(a, BatchAxis) and isinstance(a.size, ParameterizedSize)
2904        }
2905
2906    @staticmethod
2907    def _get_axes_with_independent_size(
2908        io: Union[Sequence[InputTensorDescr], Sequence[OutputTensorDescr]],
2909    ):
2910        return {
2911            (t.id, a.id): (t, a, a.size)
2912            for t in io
2913            for a in t.axes
2914            if not isinstance(a, BatchAxis)
2915            and isinstance(a.size, (int, ParameterizedSize))
2916        }
2917
2918    @field_validator("outputs", mode="after")
2919    @classmethod
2920    def _validate_output_axes(
2921        cls, outputs: List[OutputTensorDescr], info: ValidationInfo
2922    ) -> List[OutputTensorDescr]:
2923        input_size_refs = cls._get_axes_with_independent_size(
2924            info.data.get("inputs", [])
2925        )
2926        output_size_refs = cls._get_axes_with_independent_size(outputs)
2927
2928        for i, out in enumerate(outputs):
2929            valid_independent_refs: Dict[
2930                Tuple[TensorId, AxisId],
2931                Tuple[TensorDescr, AnyAxis, Union[int, ParameterizedSize]],
2932            ] = {
2933                **{
2934                    (out.id, a.id): (out, a, a.size)
2935                    for a in out.axes
2936                    if not isinstance(a, BatchAxis)
2937                    and isinstance(a.size, (int, ParameterizedSize))
2938                },
2939                **input_size_refs,
2940                **output_size_refs,
2941            }
2942            for a, ax in enumerate(out.axes):
2943                cls._validate_axis(
2944                    "outputs",
2945                    i,
2946                    out.id,
2947                    a,
2948                    ax,
2949                    valid_independent_refs=valid_independent_refs,
2950                )
2951
2952        return outputs
2953
2954    packaged_by: List[Author] = Field(
2955        default_factory=cast(Callable[[], List[Author]], list)
2956    )
2957    """The persons that have packaged and uploaded this model.
2958    Only required if those persons differ from the `authors`."""
2959
2960    parent: Optional[LinkedModel] = None
2961    """The model from which this model is derived, e.g. by fine-tuning the weights."""
2962
2963    @model_validator(mode="after")
2964    def _validate_parent_is_not_self(self) -> Self:
2965        if self.parent is not None and self.parent.id == self.id:
2966            raise ValueError("A model description may not reference itself as parent.")
2967
2968        return self
2969
2970    run_mode: Annotated[
2971        Optional[RunMode],
2972        warn(None, "Run mode '{value}' has limited support across consumer software."),
2973    ] = None
2974    """Custom run mode for this model: for more complex prediction procedures like test time
2975    data augmentation that currently cannot be expressed in the specification.
2976    No standard run modes are defined yet."""
2977
2978    timestamp: Datetime = Field(default_factory=Datetime.now)
2979    """Timestamp in [ISO 8601](#https://en.wikipedia.org/wiki/ISO_8601) format
2980    with a few restrictions listed [here](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat).
2981    (In Python a datetime object is valid, too)."""
2982
2983    training_data: Annotated[
2984        Union[None, LinkedDataset, DatasetDescr, DatasetDescr02],
2985        Field(union_mode="left_to_right"),
2986    ] = None
2987    """The dataset used to train this model"""
2988
2989    weights: Annotated[WeightsDescr, WrapSerializer(package_weights)]
2990    """The weights for this model.
2991    Weights can be given for different formats, but should otherwise be equivalent.
2992    The available weight formats determine which consumers can use this model."""
2993
2994    config: Config = Field(default_factory=Config.model_construct)
2995
2996    @model_validator(mode="after")
2997    def _add_default_cover(self) -> Self:
2998        if not get_validation_context().perform_io_checks or self.covers:
2999            return self
3000
3001        try:
3002            generated_covers = generate_covers(
3003                [
3004                    (t, load_array(t.test_tensor))
3005                    for t in self.inputs
3006                    if t.test_tensor is not None
3007                ],
3008                [
3009                    (t, load_array(t.test_tensor))
3010                    for t in self.outputs
3011                    if t.test_tensor is not None
3012                ],
3013            )
3014        except Exception as e:
3015            issue_warning(
3016                "Failed to generate cover image(s): {e}",
3017                value=self.covers,
3018                msg_context=dict(e=e),
3019                field="covers",
3020            )
3021        else:
3022            self.covers.extend(generated_covers)
3023
3024        return self
3025
3026    def get_input_test_arrays(self) -> List[NDArray[Any]]:
3027        return self._get_test_arrays(self.inputs)
3028
3029    def get_output_test_arrays(self) -> List[NDArray[Any]]:
3030        return self._get_test_arrays(self.outputs)
3031
3032    @staticmethod
3033    def _get_test_arrays(
3034        io_descr: Union[Sequence[InputTensorDescr], Sequence[OutputTensorDescr]],
3035    ):
3036        ts: List[FileDescr] = []
3037        for d in io_descr:
3038            if d.test_tensor is None:
3039                raise ValueError(
3040                    f"Failed to get test arrays: description of '{d.id}' is missing a `test_tensor`."
3041                )
3042            ts.append(d.test_tensor)
3043
3044        data = [load_array(t) for t in ts]
3045        assert all(isinstance(d, np.ndarray) for d in data)
3046        return data
3047
3048    @staticmethod
3049    def get_batch_size(tensor_sizes: Mapping[TensorId, Mapping[AxisId, int]]) -> int:
3050        batch_size = 1
3051        tensor_with_batchsize: Optional[TensorId] = None
3052        for tid in tensor_sizes:
3053            for aid, s in tensor_sizes[tid].items():
3054                if aid != BATCH_AXIS_ID or s == 1 or s == batch_size:
3055                    continue
3056
3057                if batch_size != 1:
3058                    assert tensor_with_batchsize is not None
3059                    raise ValueError(
3060                        f"batch size mismatch for tensors '{tensor_with_batchsize}' ({batch_size}) and '{tid}' ({s})"
3061                    )
3062
3063                batch_size = s
3064                tensor_with_batchsize = tid
3065
3066        return batch_size
3067
3068    def get_output_tensor_sizes(
3069        self, input_sizes: Mapping[TensorId, Mapping[AxisId, int]]
3070    ) -> Dict[TensorId, Dict[AxisId, Union[int, _DataDepSize]]]:
3071        """Returns the tensor output sizes for given **input_sizes**.
3072        Only if **input_sizes** has a valid input shape, the tensor output size is exact.
3073        Otherwise it might be larger than the actual (valid) output"""
3074        batch_size = self.get_batch_size(input_sizes)
3075        ns = self.get_ns(input_sizes)
3076
3077        tensor_sizes = self.get_tensor_sizes(ns, batch_size=batch_size)
3078        return tensor_sizes.outputs
3079
3080    def get_ns(self, input_sizes: Mapping[TensorId, Mapping[AxisId, int]]):
3081        """get parameter `n` for each parameterized axis
3082        such that the valid input size is >= the given input size"""
3083        ret: Dict[Tuple[TensorId, AxisId], ParameterizedSize_N] = {}
3084        axes = {t.id: {a.id: a for a in t.axes} for t in self.inputs}
3085        for tid in input_sizes:
3086            for aid, s in input_sizes[tid].items():
3087                size_descr = axes[tid][aid].size
3088                if isinstance(size_descr, ParameterizedSize):
3089                    ret[(tid, aid)] = size_descr.get_n(s)
3090                elif size_descr is None or isinstance(size_descr, (int, SizeReference)):
3091                    pass
3092                else:
3093                    assert_never(size_descr)
3094
3095        return ret
3096
3097    def get_tensor_sizes(
3098        self, ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N], batch_size: int
3099    ) -> _TensorSizes:
3100        axis_sizes = self.get_axis_sizes(ns, batch_size=batch_size)
3101        return _TensorSizes(
3102            {
3103                t: {
3104                    aa: axis_sizes.inputs[(tt, aa)]
3105                    for tt, aa in axis_sizes.inputs
3106                    if tt == t
3107                }
3108                for t in {tt for tt, _ in axis_sizes.inputs}
3109            },
3110            {
3111                t: {
3112                    aa: axis_sizes.outputs[(tt, aa)]
3113                    for tt, aa in axis_sizes.outputs
3114                    if tt == t
3115                }
3116                for t in {tt for tt, _ in axis_sizes.outputs}
3117            },
3118        )
3119
3120    def get_axis_sizes(
3121        self,
3122        ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N],
3123        batch_size: Optional[int] = None,
3124        *,
3125        max_input_shape: Optional[Mapping[Tuple[TensorId, AxisId], int]] = None,
3126    ) -> _AxisSizes:
3127        """Determine input and output block shape for scale factors **ns**
3128        of parameterized input sizes.
3129
3130        Args:
3131            ns: Scale factor `n` for each axis (keyed by (tensor_id, axis_id))
3132                that is parameterized as `size = min + n * step`.
3133            batch_size: The desired size of the batch dimension.
3134                If given **batch_size** overwrites any batch size present in
3135                **max_input_shape**. Default 1.
3136            max_input_shape: Limits the derived block shapes.
3137                Each axis for which the input size, parameterized by `n`, is larger
3138                than **max_input_shape** is set to the minimal value `n_min` for which
3139                this is still true.
3140                Use this for small input samples or large values of **ns**.
3141                Or simply whenever you know the full input shape.
3142
3143        Returns:
3144            Resolved axis sizes for model inputs and outputs.
3145        """
3146        max_input_shape = max_input_shape or {}
3147        if batch_size is None:
3148            for (_t_id, a_id), s in max_input_shape.items():
3149                if a_id == BATCH_AXIS_ID:
3150                    batch_size = s
3151                    break
3152            else:
3153                batch_size = 1
3154
3155        all_axes = {
3156            t.id: {a.id: a for a in t.axes} for t in chain(self.inputs, self.outputs)
3157        }
3158
3159        inputs: Dict[Tuple[TensorId, AxisId], int] = {}
3160        outputs: Dict[Tuple[TensorId, AxisId], Union[int, _DataDepSize]] = {}
3161
3162        def get_axis_size(a: Union[InputAxis, OutputAxis]):
3163            if isinstance(a, BatchAxis):
3164                if (t_descr.id, a.id) in ns:
3165                    logger.warning(
3166                        "Ignoring unexpected size increment factor (n) for batch axis"
3167                        + " of tensor '{}'.",
3168                        t_descr.id,
3169                    )
3170                return batch_size
3171            elif isinstance(a.size, int):
3172                if (t_descr.id, a.id) in ns:
3173                    logger.warning(
3174                        "Ignoring unexpected size increment factor (n) for fixed size"
3175                        + " axis '{}' of tensor '{}'.",
3176                        a.id,
3177                        t_descr.id,
3178                    )
3179                return a.size
3180            elif isinstance(a.size, ParameterizedSize):
3181                if (t_descr.id, a.id) not in ns:
3182                    raise ValueError(
3183                        "Size increment factor (n) missing for parametrized axis"
3184                        + f" '{a.id}' of tensor '{t_descr.id}'."
3185                    )
3186                n = ns[(t_descr.id, a.id)]
3187                s_max = max_input_shape.get((t_descr.id, a.id))
3188                if s_max is not None:
3189                    n = min(n, a.size.get_n(s_max))
3190
3191                return a.size.get_size(n)
3192
3193            elif isinstance(a.size, SizeReference):
3194                if (t_descr.id, a.id) in ns:
3195                    logger.warning(
3196                        "Ignoring unexpected size increment factor (n) for axis '{}'"
3197                        + " of tensor '{}' with size reference.",
3198                        a.id,
3199                        t_descr.id,
3200                    )
3201                assert not isinstance(a, BatchAxis)
3202                ref_axis = all_axes[a.size.tensor_id][a.size.axis_id]
3203                assert not isinstance(ref_axis, BatchAxis)
3204                ref_key = (a.size.tensor_id, a.size.axis_id)
3205                ref_size = inputs.get(ref_key, outputs.get(ref_key))
3206                assert ref_size is not None, ref_key
3207                assert not isinstance(ref_size, _DataDepSize), ref_key
3208                return a.size.get_size(
3209                    axis=a,
3210                    ref_axis=ref_axis,
3211                    ref_size=ref_size,
3212                )
3213            elif isinstance(a.size, DataDependentSize):
3214                if (t_descr.id, a.id) in ns:
3215                    logger.warning(
3216                        "Ignoring unexpected increment factor (n) for data dependent"
3217                        + " size axis '{}' of tensor '{}'.",
3218                        a.id,
3219                        t_descr.id,
3220                    )
3221                return _DataDepSize(a.size.min, a.size.max)
3222            else:
3223                assert_never(a.size)
3224
3225        # first resolve all input sizes except those given as a `SizeReference`
3226        for t_descr in self.inputs:
3227            for a in t_descr.axes:
3228                if not isinstance(a.size, SizeReference):
3229                    s = get_axis_size(a)
3230                    assert not isinstance(s, _DataDepSize)
3231                    inputs[t_descr.id, a.id] = s
3232
3233        # resolve all other input axis sizes
3234        for t_descr in self.inputs:
3235            for a in t_descr.axes:
3236                if isinstance(a.size, SizeReference):
3237                    s = get_axis_size(a)
3238                    assert not isinstance(s, _DataDepSize)
3239                    inputs[t_descr.id, a.id] = s
3240
3241        # resolve all output axis sizes
3242        for t_descr in self.outputs:
3243            for a in t_descr.axes:
3244                assert not isinstance(a.size, ParameterizedSize)
3245                s = get_axis_size(a)
3246                outputs[t_descr.id, a.id] = s
3247
3248        return _AxisSizes(inputs=inputs, outputs=outputs)
3249
3250    @model_validator(mode="before")
3251    @classmethod
3252    def _convert(cls, data: Dict[str, Any]) -> Dict[str, Any]:
3253        cls.convert_from_old_format_wo_validation(data)
3254        return data
3255
3256    @classmethod
3257    def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
3258        """Convert metadata following an older format version to this classes' format
3259        without validating the result.
3260        """
3261        if (
3262            data.get("type") == "model"
3263            and isinstance(fv := data.get("format_version"), str)
3264            and fv.count(".") == 2
3265        ):
3266            fv_parts = fv.split(".")
3267            if any(not p.isdigit() for p in fv_parts):
3268                return
3269
3270            fv_tuple = tuple(map(int, fv_parts))
3271
3272            assert cls.implemented_format_version_tuple[0:2] == (0, 5)
3273            if fv_tuple[:2] in ((0, 3), (0, 4)):
3274                m04 = _ModelDescr_v0_4.load(data)
3275                if isinstance(m04, InvalidDescr):
3276                    try:
3277                        updated = _model_conv.convert_as_dict(
3278                            m04  # pyright: ignore[reportArgumentType]
3279                        )
3280                    except Exception as e:
3281                        logger.error(
3282                            "Failed to convert from invalid model 0.4 description."
3283                            + f"\nerror: {e}"
3284                            + "\nProceeding with model 0.5 validation without conversion."
3285                        )
3286                        updated = None
3287                else:
3288                    updated = _model_conv.convert_as_dict(m04)
3289
3290                if updated is not None:
3291                    data.clear()
3292                    data.update(updated)
3293
3294            elif fv_tuple[:2] == (0, 5):
3295                # bump patch version
3296                data["format_version"] = cls.implemented_format_version

Specification of the fields used in a bioimage.io-compliant RDF to describe AI models with pretrained weights. These fields are typically stored in a YAML file which we call a model resource description file (model RDF).

implemented_format_version: ClassVar[Literal['0.5.5']] = '0.5.5'
implemented_type: ClassVar[Literal['model']] = 'model'

id: bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

authors: Annotated[List[bioimageio.spec.generic.v0_3.Author], AfterWarner(severity=35)]

The authors are the creators of the model RDF and the primary points of contact.

documentation: Annotated[Optional[Union[HttpUrl, RelativeFilePath, FilePath]], Field(union_mode='left_to_right'), WithSuffix(suffix='.md', case_sensitive=True)]

URL or relative path to a markdown file with additional documentation. The recommended documentation file name is README.md. An .md suffix is mandatory. The documentation should include a '#[#] Validation' (sub)section with details on how to quantitatively validate the model on unseen data.

inputs: Annotated[Sequence[bioimageio.spec.model.v0_5.InputTensorDescr], MinLen(min_length=1)]

Describes the input tensors expected by this model.

name: Annotated[str, RestrictCharacters(alphabet=string.ascii_letters + string.digits + '_+- ()'), MinLen(min_length=5), MaxLen(max_length=128), AfterWarner(severity=20, msg='Name longer than 64 characters.')]

A human-readable name of this model. It should be no longer than 64 characters and may only contain letters, numbers, underscores, minus signs, parentheses and spaces. We recommend choosing a name that refers to the model's task and image modality.
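The name constraints can be checked directly. A minimal sketch (the function name and warning handling are illustrative, not part of bioimageio.spec):

```python
import string

# allowed alphabet as restricted by the `name` field
ALLOWED = set(string.ascii_letters + string.digits + "_+- ()")

def check_model_name(name: str) -> None:
    """Mimic the `name` constraints: 5-128 characters from the restricted
    alphabet; more than 64 characters is allowed but discouraged."""
    if not 5 <= len(name) <= 128:
        raise ValueError("name must be 5 to 128 characters long")
    if invalid := set(name) - ALLOWED:
        raise ValueError(f"invalid characters in name: {sorted(invalid)}")
    if len(name) > 64:
        print("warning: name longer than 64 characters")

check_model_name("2D UNet (nuclei)")  # passes silently
```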

outputs: Annotated[Sequence[bioimageio.spec.model.v0_5.OutputTensorDescr], MinLen(min_length=1)]

Describes the output tensors.

packaged_by: The persons that have packaged and uploaded this model. Only required if those persons differ from the authors.

parent: The model from which this model is derived, e.g. by fine-tuning the weights.

run_mode: Annotated[Optional[bioimageio.spec.model.v0_4.RunMode], AfterWarner(severity=30, msg="Run mode '{value}' has limited support across consumer software.")]

Custom run mode for this model: for more complex prediction procedures like test time data augmentation that currently cannot be expressed in the specification. No standard run modes are defined yet.

timestamp: Timestamp in ISO 8601 format with a few restrictions (see Python's datetime.datetime.fromisoformat). (In Python a datetime object is valid, too.)
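For example (plain Python; `datetime.fromisoformat` enforces the ISO 8601 restrictions mentioned above):

```python
from datetime import datetime

# A model RDF timestamp like "2019-12-11T12:22:32+00:00" is valid ISO 8601
# and parseable with datetime.fromisoformat.
ts = datetime.fromisoformat("2019-12-11T12:22:32+00:00")
print(ts.year, ts.month, ts.day)  # 2019 12 11
```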

training_data: Annotated[Union[None, bioimageio.spec.dataset.v0_3.LinkedDataset, DatasetDescr, bioimageio.spec.dataset.v0_2.DatasetDescr], Field(union_mode='left_to_right')]

The dataset used to train this model

weights: Annotated[bioimageio.spec.model.v0_5.WeightsDescr, WrapSerializer(func=package_weights, when_used='always')]

The weights for this model. Weights can be given for different formats, but should otherwise be equivalent. The available weight formats determine which consumers can use this model.

def get_input_test_arrays(self) -> List[numpy.ndarray[tuple[Any, ...], numpy.dtype[Any]]]:
def get_output_test_arrays(self) -> List[numpy.ndarray[tuple[Any, ...], numpy.dtype[Any]]]:
@staticmethod
def get_batch_size( tensor_sizes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[bioimageio.spec.model.v0_5.AxisId, int]]) -> int:
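`get_batch_size` requires all tensors to agree on a common batch size (a batch axis of size 1 is treated as compatible with any batch size). A simplified, self-contained re-implementation for illustration, using a plain `"batch"` string where the spec uses `BATCH_AXIS_ID`:

```python
from typing import Mapping, Optional

def get_batch_size(tensor_sizes: Mapping[str, Mapping[str, int]]) -> int:
    """Simplified sketch: every tensor with a batch axis of size != 1
    must agree on that size; otherwise a ValueError is raised."""
    batch_size = 1
    seen: Optional[str] = None  # tensor that determined the batch size
    for tid, axes in tensor_sizes.items():
        s = axes.get("batch")
        if s in (None, 1) or s == batch_size:
            continue
        if batch_size != 1:
            raise ValueError(
                f"batch size mismatch: '{seen}' ({batch_size}) vs '{tid}' ({s})"
            )
        batch_size, seen = s, tid
    return batch_size

print(get_batch_size({"raw": {"batch": 4, "x": 256}, "mask": {"batch": 4}}))  # 4
```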
def get_output_tensor_sizes( self, input_sizes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[bioimageio.spec.model.v0_5.AxisId, int]]) -> Dict[bioimageio.spec.model.v0_5.TensorId, Dict[bioimageio.spec.model.v0_5.AxisId, Union[int, bioimageio.spec.model.v0_5._DataDepSize]]]:

Returns the tensor output sizes for the given input_sizes. The output sizes are exact only if input_sizes specifies a valid input shape; otherwise they may be larger than the actual (valid) output sizes.

def get_ns( self, input_sizes: Mapping[bioimageio.spec.model.v0_5.TensorId, Mapping[bioimageio.spec.model.v0_5.AxisId, int]]):

Get parameter `n` for each parameterized axis such that the resulting valid input size is >= the given input size.
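For a parameterized axis with `size = min + n * step`, the smallest suitable `n` can be computed as follows (a sketch of what `ParameterizedSize.get_n` does, not the bioimageio.spec implementation itself):

```python
import math

def get_n(s: int, size_min: int, step: int) -> int:
    """Smallest n >= 0 such that size_min + n * step >= s (assumes step > 0)."""
    return max(0, math.ceil((s - size_min) / step))

# e.g. an axis parameterized as size = 64 + n * 16 and a requested size of 100:
print(get_n(100, 64, 16))  # 3 -> valid size 64 + 3*16 = 112 >= 100
```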

def get_tensor_sizes( self, ns: Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, bioimageio.spec.model.v0_5.AxisId], int], batch_size: int) -> bioimageio.spec.model.v0_5._TensorSizes:
def get_axis_sizes( self, ns: Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, bioimageio.spec.model.v0_5.AxisId], int], batch_size: Optional[int] = None, *, max_input_shape: Optional[Mapping[Tuple[bioimageio.spec.model.v0_5.TensorId, bioimageio.spec.model.v0_5.AxisId], int]] = None) -> bioimageio.spec.model.v0_5._AxisSizes:
3120    def get_axis_sizes(
3121        self,
3122        ns: Mapping[Tuple[TensorId, AxisId], ParameterizedSize_N],
3123        batch_size: Optional[int] = None,
3124        *,
3125        max_input_shape: Optional[Mapping[Tuple[TensorId, AxisId], int]] = None,
3126    ) -> _AxisSizes:
3127        """Determine input and output block shape for scale factors **ns**
3128        of parameterized input sizes.
3129
3130        Args:
3131            ns: Scale factor `n` for each axis (keyed by (tensor_id, axis_id))
3132                that is parameterized as `size = min + n * step`.
3133            batch_size: The desired size of the batch dimension.
3134                If given **batch_size** overwrites any batch size present in
3135                **max_input_shape**. Default 1.
3136            max_input_shape: Limits the derived block shapes.
3137                Each axis for which the input size, parameterized by `n`, is larger
3138                than **max_input_shape** is set to the minimal value `n_min` for which
3139                this is still true.
3140                Use this for small input samples or large values of **ns**.
3141                Or simply whenever you know the full input shape.
3142
3143        Returns:
3144            Resolved axis sizes for model inputs and outputs.
3145        """
3146        max_input_shape = max_input_shape or {}
3147        if batch_size is None:
3148            for (_t_id, a_id), s in max_input_shape.items():
3149                if a_id == BATCH_AXIS_ID:
3150                    batch_size = s
3151                    break
3152            else:
3153                batch_size = 1
3154
3155        all_axes = {
3156            t.id: {a.id: a for a in t.axes} for t in chain(self.inputs, self.outputs)
3157        }
3158
3159        inputs: Dict[Tuple[TensorId, AxisId], int] = {}
3160        outputs: Dict[Tuple[TensorId, AxisId], Union[int, _DataDepSize]] = {}
3161
3162        def get_axis_size(a: Union[InputAxis, OutputAxis]):
3163            if isinstance(a, BatchAxis):
3164                if (t_descr.id, a.id) in ns:
3165                    logger.warning(
3166                        "Ignoring unexpected size increment factor (n) for batch axis"
3167                        + " of tensor '{}'.",
3168                        t_descr.id,
3169                    )
3170                return batch_size
3171            elif isinstance(a.size, int):
3172                if (t_descr.id, a.id) in ns:
3173                    logger.warning(
3174                        "Ignoring unexpected size increment factor (n) for fixed size"
3175                        + " axis '{}' of tensor '{}'.",
3176                        a.id,
3177                        t_descr.id,
3178                    )
3179                return a.size
3180            elif isinstance(a.size, ParameterizedSize):
3181                if (t_descr.id, a.id) not in ns:
3182                    raise ValueError(
3183                        "Size increment factor (n) missing for parametrized axis"
3184                        + f" '{a.id}' of tensor '{t_descr.id}'."
3185                    )
3186                n = ns[(t_descr.id, a.id)]
3187                s_max = max_input_shape.get((t_descr.id, a.id))
3188                if s_max is not None:
3189                    n = min(n, a.size.get_n(s_max))
3190
3191                return a.size.get_size(n)
3192
3193            elif isinstance(a.size, SizeReference):
3194                if (t_descr.id, a.id) in ns:
3195                    logger.warning(
3196                        "Ignoring unexpected size increment factor (n) for axis '{}'"
3197                        + " of tensor '{}' with size reference.",
3198                        a.id,
3199                        t_descr.id,
3200                    )
3201                assert not isinstance(a, BatchAxis)
3202                ref_axis = all_axes[a.size.tensor_id][a.size.axis_id]
3203                assert not isinstance(ref_axis, BatchAxis)
3204                ref_key = (a.size.tensor_id, a.size.axis_id)
3205                ref_size = inputs.get(ref_key, outputs.get(ref_key))
3206                assert ref_size is not None, ref_key
3207                assert not isinstance(ref_size, _DataDepSize), ref_key
3208                return a.size.get_size(
3209                    axis=a,
3210                    ref_axis=ref_axis,
3211                    ref_size=ref_size,
3212                )
3213            elif isinstance(a.size, DataDependentSize):
3214                if (t_descr.id, a.id) in ns:
3215                    logger.warning(
3216                        "Ignoring unexpected increment factor (n) for data dependent"
3217                        + " size axis '{}' of tensor '{}'.",
3218                        a.id,
3219                        t_descr.id,
3220                    )
3221                return _DataDepSize(a.size.min, a.size.max)
3222            else:
3223                assert_never(a.size)
3224
3225        # first resolve all but the `SizeReference` input sizes
3226        for t_descr in self.inputs:
3227            for a in t_descr.axes:
3228                if not isinstance(a.size, SizeReference):
3229                    s = get_axis_size(a)
3230                    assert not isinstance(s, _DataDepSize)
3231                    inputs[t_descr.id, a.id] = s
3232
3233        # resolve all other input axis sizes
3234        for t_descr in self.inputs:
3235            for a in t_descr.axes:
3236                if isinstance(a.size, SizeReference):
3237                    s = get_axis_size(a)
3238                    assert not isinstance(s, _DataDepSize)
3239                    inputs[t_descr.id, a.id] = s
3240
3241        # resolve all output axis sizes
3242        for t_descr in self.outputs:
3243            for a in t_descr.axes:
3244                assert not isinstance(a.size, ParameterizedSize)
3245                s = get_axis_size(a)
3246                outputs[t_descr.id, a.id] = s
3247
3248        return _AxisSizes(inputs=inputs, outputs=outputs)

Determine input and output block shape for scale factors ns of parameterized input sizes.

Arguments:
  • ns: Scale factor n for each axis (keyed by (tensor_id, axis_id)) that is parameterized as size = min + n * step.
  • batch_size: The desired size of the batch dimension. If given, batch_size overrides any batch size present in max_input_shape. Defaults to 1.
  • max_input_shape: Limits the derived block shapes. For each axis whose input size, parameterized by n, would exceed max_input_shape, n is capped at the largest value for which the size still fits. Use this for small input samples or large values in ns, or simply whenever you know the full input shape.
Returns:

Resolved axis sizes for model inputs and outputs.
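The parameterized-size arithmetic these lookups rely on can be sketched in plain Python. This is a simplified stand-in for `ParameterizedSize` (`size = min + n * step`), not the actual class:

```python
# Simplified stand-in for ParameterizedSize: size = min + n * step, n >= 0.

def get_size(min_size: int, step: int, n: int) -> int:
    """Axis size for a given increment factor n."""
    return min_size + n * step

def get_n(min_size: int, step: int, s_max: int) -> int:
    """Largest n such that the resulting size still fits within s_max."""
    if step == 0:
        return 0
    return max(0, (s_max - min_size) // step)

# An axis declared with min=32, step=16:
assert get_size(32, 16, n=4) == 96
# With a maximum input extent of 100, a requested n=10 is capped at n=4 (size 96),
# mirroring `n = min(n, a.size.get_n(s_max))` above:
n = min(10, get_n(32, 16, s_max=100))
assert get_size(32, 16, n) == 96
```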

@classmethod
def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
3256    @classmethod
3257    def convert_from_old_format_wo_validation(cls, data: Dict[str, Any]) -> None:
3258        """Convert metadata following an older format version to this class's format
3259        without validating the result.
3260        """
3261        if (
3262            data.get("type") == "model"
3263            and isinstance(fv := data.get("format_version"), str)
3264            and fv.count(".") == 2
3265        ):
3266            fv_parts = fv.split(".")
3267            if any(not p.isdigit() for p in fv_parts):
3268                return
3269
3270            fv_tuple = tuple(map(int, fv_parts))
3271
3272            assert cls.implemented_format_version_tuple[0:2] == (0, 5)
3273            if fv_tuple[:2] in ((0, 3), (0, 4)):
3274                m04 = _ModelDescr_v0_4.load(data)
3275                if isinstance(m04, InvalidDescr):
3276                    try:
3277                        updated = _model_conv.convert_as_dict(
3278                            m04  # pyright: ignore[reportArgumentType]
3279                        )
3280                    except Exception as e:
3281                        logger.error(
3282                            "Failed to convert from invalid model 0.4 description."
3283                            + f"\nerror: {e}"
3284                            + "\nProceeding with model 0.5 validation without conversion."
3285                        )
3286                        updated = None
3287                else:
3288                    updated = _model_conv.convert_as_dict(m04)
3289
3290                if updated is not None:
3291                    data.clear()
3292                    data.update(updated)
3293
3294            elif fv_tuple[:2] == (0, 5):
3295                # bump patch version
3296                data["format_version"] = cls.implemented_format_version

Convert metadata following an older format version to this class's format without validating the result.
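The version check above boils down to simple string parsing of `format_version`. A minimal sketch of that logic (the helper name is hypothetical, not part of the package):

```python
def parse_format_version(fv: str):
    """Return (major, minor, patch) if fv is a full 'x.y.z' version, else None."""
    parts = fv.split(".")
    if len(parts) != 3 or any(not p.isdigit() for p in parts):
        return None
    return tuple(map(int, parts))

assert parse_format_version("0.4.10") == (0, 4, 10)
assert parse_format_version("0.5") is None    # not a full x.y.z version
assert parse_format_version("0.4.x") is None  # non-numeric component
# Descriptions at 0.3/0.4 get converted; 0.5 only has its patch version bumped:
assert parse_format_version("0.4.10")[:2] in ((0, 3), (0, 4))
```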

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 5, 5)
model_config: ClassVar[pydantic.config.ConfigDict] = {'allow_inf_nan': False, 'extra': 'forbid', 'frozen': False, 'model_title_generator': <function _node_title_generator>, 'populate_by_name': True, 'revalidate_instances': 'always', 'use_attribute_docstrings': True, 'validate_assignment': True, 'validate_default': True, 'validate_return': True, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
337def init_private_attributes(self: BaseModel, context: Any, /) -> None:
338    """This function is meant to behave like a BaseModel method to initialise private attributes.
339
340    It takes context as an argument since that's what pydantic-core passes when calling it.
341
342    Args:
343        self: The BaseModel instance.
344        context: The context.
345    """
346    if getattr(self, '__pydantic_private__', None) is None:
347        pydantic_private = {}
348        for name, private_attr in self.__private_attributes__.items():
349            default = private_attr.get_default()
350            if default is not PydanticUndefined:
351                pydantic_private[name] = default
352        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
class NotebookDescr(bioimageio.spec.generic.v0_3.GenericDescrBase):
31class NotebookDescr(GenericDescrBase):
32    """Bioimage.io description of a Jupyter notebook."""
33
34    implemented_type: ClassVar[Literal["notebook"]] = "notebook"
35    if TYPE_CHECKING:
36        type: Literal["notebook"] = "notebook"
37    else:
38        type: Literal["notebook"]
39
40    id: Optional[NotebookId] = None
41    """bioimage.io-wide unique resource identifier
42    assigned by bioimage.io; version **un**specific."""
43
44    parent: Optional[NotebookId] = None
45    """The description from which this one is derived"""
46
47    source: NotebookSource
48    """The Jupyter notebook"""

Bioimage.io description of a Jupyter notebook.

implemented_type: ClassVar[Literal['notebook']] = 'notebook'
id: Optional[bioimageio.spec.notebook.v0_3.NotebookId]

bioimage.io-wide unique resource identifier assigned by bioimage.io; version unspecific.

parent: Optional[bioimageio.spec.notebook.v0_3.NotebookId]

The description from which this one is derived

source: Union[HttpUrl, FilePath, RelativeFilePath] (each required to have the suffix '.ipynb', case sensitive)

The Jupyter notebook
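Putting the fields together, a minimal notebook description might look like the following plain dict. The concrete values are made up for illustration; `name`, `description`, and `authors` come from `GenericDescrBase` and are not listed above:

```python
# Hypothetical minimal content of a bioimageio.yaml for a notebook resource.
# Field names follow NotebookDescr / GenericDescrBase; the values are invented.
notebook_rdf = {
    "type": "notebook",
    "format_version": "0.3.0",
    "name": "my-example-notebook",
    "description": "Demonstrates model inference.",
    "authors": [{"name": "Jane Doe"}],
    "source": "https://example.com/notebooks/demo.ipynb",  # must end in .ipynb
}

assert notebook_rdf["type"] == "notebook"
assert notebook_rdf["source"].endswith(".ipynb")
```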

implemented_format_version_tuple: ClassVar[Tuple[int, int, int]] = (0, 3, 0)
model_config: ClassVar[pydantic.config.ConfigDict] = {'allow_inf_nan': False, 'extra': 'forbid', 'frozen': False, 'model_title_generator': <function _node_title_generator>, 'populate_by_name': True, 'revalidate_instances': 'always', 'use_attribute_docstrings': True, 'validate_assignment': True, 'validate_default': True, 'validate_return': True, 'validate_by_alias': True, 'validate_by_name': True}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

def model_post_init(self: pydantic.main.BaseModel, context: Any, /) -> None:
337def init_private_attributes(self: BaseModel, context: Any, /) -> None:
338    """This function is meant to behave like a BaseModel method to initialise private attributes.
339
340    It takes context as an argument since that's what pydantic-core passes when calling it.
341
342    Args:
343        self: The BaseModel instance.
344        context: The context.
345    """
346    if getattr(self, '__pydantic_private__', None) is None:
347        pydantic_private = {}
348        for name, private_attr in self.__private_attributes__.items():
349            default = private_attr.get_default()
350            if default is not PydanticUndefined:
351                pydantic_private[name] = default
352        object_setattr(self, '__pydantic_private__', pydantic_private)

This function is meant to behave like a BaseModel method to initialise private attributes.

It takes context as an argument since that's what pydantic-core passes when calling it.

Arguments:
  • self: The BaseModel instance.
  • context: The context.
ResourceDescr = Union[ApplicationDescr (0.2 | 0.3), DatasetDescr (0.2 | 0.3), ModelDescr (0.4 | 0.5), NotebookDescr (0.2 | 0.3), GenericDescr (0.2 | 0.3)]

A union over all specific resource description classes, discriminated first by `type` and then by `format_version`.
def save_bioimageio_package_as_folder( source: Union[BioimageioYamlSource, ResourceDescr], /, *, output_path: Union[NewPath, DirectoryPath, None] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> DirectoryPath:
150def save_bioimageio_package_as_folder(
151    source: Union[BioimageioYamlSource, ResourceDescr],
152    /,
153    *,
154    output_path: Union[NewPath, DirectoryPath, None] = None,
155    weights_priority_order: Optional[  # model only
156        Sequence[
157            Literal[
158                "keras_hdf5",
159                "onnx",
160                "pytorch_state_dict",
161                "tensorflow_js",
162                "tensorflow_saved_model_bundle",
163                "torchscript",
164            ]
165        ]
166    ] = None,
167) -> DirectoryPath:
168    """Write the content of a bioimage.io resource package to a folder.
169
170    Args:
171        source: bioimageio resource description
172        output_path: file path to write package to
173        weights_priority_order: If given, only the first weights format present in the model is included.
174                                If none of the prioritized weights formats is found, all are included.
175
176    Returns:
177        directory path to bioimageio package folder
178    """
179    package_content = _prepare_resource_package(
180        source,
181        weights_priority_order=weights_priority_order,
182    )
183    if output_path is None:
184        output_path = Path(mkdtemp())
185    else:
186        output_path = Path(output_path)
187
188    output_path.mkdir(exist_ok=True, parents=True)
189    for name, src in package_content.items():
190        if isinstance(src, collections.abc.Mapping):
191            write_yaml(src, output_path / name)
192        else:
193            with (output_path / name).open("wb") as dest:
194                _ = shutil.copyfileobj(src, dest)
195
196    return output_path

Write the content of a bioimage.io resource package to a folder.

Arguments:
  • source: bioimageio resource description
  • output_path: file path to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.
Returns:

directory path to bioimageio package folder
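The write loop above distinguishes YAML content (a mapping) from referenced files (binary streams). The same pattern can be sketched with only the standard library, using `json` in place of the package's internal `write_yaml`:

```python
import io
import json
import shutil
import tempfile
from pathlib import Path

# package_content maps file names to either a mapping (metadata to serialize)
# or a binary stream (a referenced file to copy verbatim).
package_content = {
    "rdf.json": {"type": "model", "format_version": "0.5.5"},
    "weights.bin": io.BytesIO(b"\x00\x01\x02"),
}

output_path = Path(tempfile.mkdtemp())
for name, src in package_content.items():
    if isinstance(src, dict):
        (output_path / name).write_text(json.dumps(src))
    else:
        with (output_path / name).open("wb") as dest:
            shutil.copyfileobj(src, dest)

assert (output_path / "weights.bin").read_bytes() == b"\x00\x01\x02"
```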

def save_bioimageio_package_to_stream( source: Union[BioimageioYamlSource, ResourceDescr], /, *, compression: int = ZIP_DEFLATED, compression_level: int = 1, output_stream: Optional[IO[bytes]] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None) -> IO[bytes]:
263def save_bioimageio_package_to_stream(
264    source: Union[BioimageioYamlSource, ResourceDescr],
265    /,
266    *,
267    compression: int = ZIP_DEFLATED,
268    compression_level: int = 1,
269    output_stream: Union[IO[bytes], None] = None,
270    weights_priority_order: Optional[  # model only
271        Sequence[
272            Literal[
273                "keras_hdf5",
274                "onnx",
275                "pytorch_state_dict",
276                "tensorflow_js",
277                "tensorflow_saved_model_bundle",
278                "torchscript",
279            ]
280        ]
281    ] = None,
282) -> IO[bytes]:
283    """Package a bioimageio resource into a stream.
284
285    Args:
286        source: bioimageio resource description
287        compression: The numeric constant of the compression method.
288        compression_level: Compression level to use when writing files to the archive.
289                           See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
290        output_stream: stream to write package to
291        weights_priority_order: If given, only the first weights format present in the model is included.
292                                If none of the prioritized weights formats is found, all are included.
293
294    Note: this function bypasses safety checks and does not load/validate the model after writing.
295
296    Returns:
297        stream of zipped bioimageio package
298    """
299    if output_stream is None:
300        output_stream = BytesIO()
301
302    package_content = _prepare_resource_package(
303        source,
304        weights_priority_order=weights_priority_order,
305    )
306
307    write_zip(
308        output_stream,
309        package_content,
310        compression=compression,
311        compression_level=compression_level,
312    )
313
314    return output_stream

Package a bioimageio resource into a stream.

Arguments:
  • source: bioimageio resource description
  • compression: The numeric constant of the compression method.
  • compression_level: Compression level to use when writing files to the archive. See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
  • output_stream: stream to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.

Note: this function bypasses safety checks and does not load/validate the model after writing.

Returns:

stream of zipped bioimageio package
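Internally this leans on the stdlib zipfile module. A self-contained sketch of packaging a content mapping into an in-memory zip stream (`write_zip` itself is a bioimageio.spec internal; the content here is invented):

```python
import io
import zipfile

# File names mapped to raw bytes (in the real package: YAML content and files).
package_content = {
    "rdf.yaml": b"type: model\nformat_version: 0.5.5\n",
    "weights.bin": b"\x00\x01",
}

output_stream = io.BytesIO()
with zipfile.ZipFile(
    output_stream, "w", compression=zipfile.ZIP_DEFLATED, compresslevel=1
) as zf:
    for name, data in package_content.items():
        zf.writestr(name, data)

# The stream now holds a complete zip archive:
output_stream.seek(0)
with zipfile.ZipFile(output_stream) as zf:
    assert set(zf.namelist()) == {"rdf.yaml", "weights.bin"}
    assert zf.read("weights.bin") == b"\x00\x01"
```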

def save_bioimageio_package( source: Union[BioimageioYamlSource, ResourceDescr], /, *, compression: int = ZIP_DEFLATED, compression_level: int = 1, output_path: Union[NewPath, FilePath, None] = None, weights_priority_order: Optional[Sequence[Literal['keras_hdf5', 'onnx', 'pytorch_state_dict', 'tensorflow_js', 'tensorflow_saved_model_bundle', 'torchscript']]] = None, allow_invalid: bool = False) -> FilePath:
199def save_bioimageio_package(
200    source: Union[BioimageioYamlSource, ResourceDescr],
201    /,
202    *,
203    compression: int = ZIP_DEFLATED,
204    compression_level: int = 1,
205    output_path: Union[NewPath, FilePath, None] = None,
206    weights_priority_order: Optional[  # model only
207        Sequence[
208            Literal[
209                "keras_hdf5",
210                "onnx",
211                "pytorch_state_dict",
212                "tensorflow_js",
213                "tensorflow_saved_model_bundle",
214                "torchscript",
215            ]
216        ]
217    ] = None,
218    allow_invalid: bool = False,
219) -> FilePath:
220    """Package a bioimageio resource as a zip file.
221
222    Args:
223        source: bioimageio resource description
224        compression: The numeric constant of the compression method.
225        compression_level: Compression level to use when writing files to the archive.
226                           See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
227        output_path: file path to write package to
228        weights_priority_order: If given, only the first weights format present in the model is included.
229                                If none of the prioritized weights formats is found, all are included.
230
231    Returns:
232        path to zipped bioimageio package
233    """
234    package_content = _prepare_resource_package(
235        source,
236        weights_priority_order=weights_priority_order,
237    )
238    if output_path is None:
239        output_path = Path(
240            NamedTemporaryFile(suffix=".bioimageio.zip", delete=False).name
241        )
242    else:
243        output_path = Path(output_path)
244
245    write_zip(
246        output_path,
247        package_content,
248        compression=compression,
249        compression_level=compression_level,
250    )
251    with get_validation_context().replace(warning_level=ERROR):
252        if isinstance((exported := load_description(output_path)), InvalidDescr):
253            exported.validation_summary.display()
254            msg = f"Exported package at '{output_path}' is invalid."
255            if allow_invalid:
256                logger.error(msg)
257            else:
258                raise ValueError(msg)
259
260    return output_path

Package a bioimageio resource as a zip file.

Arguments:
  • source: bioimageio resource description
  • compression: The numeric constant of the compression method.
  • compression_level: Compression level to use when writing files to the archive. See https://docs.python.org/3/library/zipfile.html#zipfile.ZipFile
  • output_path: file path to write package to
  • weights_priority_order: If given, only the first weights format present in the model is included. If none of the prioritized weights formats is found, all are included.
Returns:

path to zipped bioimageio package

def save_bioimageio_yaml_only( rd: Union[ResourceDescr, BioimageioYamlContent, InvalidDescr], /, file: Union[NewPath, FilePath, TextIO], *, exclude_unset: bool = True, exclude_defaults: bool = False):
def save_bioimageio_yaml_only(
    rd: Union[ResourceDescr, BioimageioYamlContent, InvalidDescr],
    /,
    file: Union[NewPath, FilePath, TextIO],
    *,
    exclude_unset: bool = True,
    exclude_defaults: bool = False,
):
    """Write the metadata of a resource description (`rd`) to `file`
    without writing any of the referenced files in it.

    Args:
        rd: bioimageio resource description
        file: file or stream to save to
        exclude_unset: Exclude fields that have not explicitly been set.
        exclude_defaults: Exclude fields that have the default value (even if set explicitly).

    Note: To save a resource description with its associated files as a package,
    use `save_bioimageio_package` or `save_bioimageio_package_as_folder`.
    """
    if isinstance(rd, ResourceDescrBase):
        content = dump_description(
            rd, exclude_unset=exclude_unset, exclude_defaults=exclude_defaults
        )
    else:
        content = rd

    write_yaml(cast(YamlValue, content), file)

Write the metadata of a resource description (rd) to file without writing any of the referenced files in it.

Arguments:
  • rd: bioimageio resource description
  • file: file or stream to save to
  • exclude_unset: Exclude fields that have not explicitly been set.
  • exclude_defaults: Exclude fields that have the default value (even if set explicitly).

Note: To save a resource description with its associated files as a package, use save_bioimageio_package or save_bioimageio_package_as_folder.
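The interplay of exclude_unset and exclude_defaults can be illustrated with a small stdlib sketch that mimics Pydantic's dump semantics (the function and its arguments here are hypothetical, not part of bioimageio.spec):

```python
from typing import Any, Dict, Set


def dump_metadata(
    values: Dict[str, Any],
    fields_set: Set[str],
    defaults: Dict[str, Any],
    *,
    exclude_unset: bool = True,
    exclude_defaults: bool = False,
) -> Dict[str, Any]:
    """Filter a field mapping the way exclude_unset/exclude_defaults do."""
    out: Dict[str, Any] = {}
    for name, value in values.items():
        if exclude_unset and name not in fields_set:
            continue  # drop fields that were never explicitly assigned
        if exclude_defaults and name in defaults and defaults[name] == value:
            continue  # drop fields equal to their default, even if set explicitly
        out[name] = value
    return out
```

For example, a field explicitly set to its default value survives the default dump but is dropped once `exclude_defaults=True`.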

settings = Settings(allow_pickle=False, cache_path=PosixPath('/home/runner/.cache/bioimageio'), collection_http_pattern='https://hypha.aicell.io/bioimage-io/artifacts/{bioimageio_id}/files/rdf.yaml', hypha_upload='https://hypha.aicell.io/public/services/artifact-manager/create', hypha_upload_token=None, id_map='https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/id_map.json', id_map_draft='https://uk1s3.embassy.ebi.ac.uk/public-datasets/bioimage.io/id_map_draft.json', perform_io_checks=True, resolve_draft=True, log_warnings=True, github_username=None, github_token=None, CI='true', user_agent=None)
SpecificResourceDescr = Annotated[Union[ApplicationDescr, DatasetDescr, ModelDescr, NotebookDescr], Discriminator('type')], where each member is itself a union over its format versions (application/dataset/notebook 0.2 and 0.3, model 0.4 and 0.5) discriminated by format_version.
def update_format( source: Union[ResourceDescr, PermissiveFileSource, ZipFile, Dict[str, YamlValue], InvalidDescr], /, *, output: Union[pathlib.Path, TextIO, None] = None, exclude_defaults: bool = True, perform_io_checks: Optional[bool] = None) -> Union[LatestResourceDescr, InvalidDescr]:
def update_format(
    source: Union[
        ResourceDescr,
        PermissiveFileSource,
        ZipFile,
        BioimageioYamlContent,
        InvalidDescr,
    ],
    /,
    *,
    output: Union[Path, TextIO, None] = None,
    exclude_defaults: bool = True,
    perform_io_checks: Optional[bool] = None,
) -> Union[LatestResourceDescr, InvalidDescr]:
    """Update a resource description.

    Notes:
    - Invalid **source** descriptions may fail to update.
    - The updated description might be invalid (even if the **source** was valid).
    """

    if isinstance(source, ResourceDescrBase):
        root = source.root
        source = dump_description(source)
    else:
        root = None

    if isinstance(source, collections.abc.Mapping):
        descr = build_description(
            source,
            context=get_validation_context().replace(
                root=root, perform_io_checks=perform_io_checks
            ),
            format_version=LATEST,
        )

    else:
        descr = load_description(
            source,
            perform_io_checks=perform_io_checks,
            format_version=LATEST,
        )

    if output is not None:
        save_bioimageio_yaml_only(descr, file=output, exclude_defaults=exclude_defaults)

    return descr

Update a resource description.

Notes:

  • Invalid source descriptions may fail to update.
  • The updated description might be invalid (even if the source was valid).
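Conceptually, the update round-trips a description through its raw YAML content and rebuilds it against the latest format version. A toy stdlib sketch of that flow (all names hypothetical; the real field-by-field converters are omitted):

```python
from typing import Any, Dict

LATEST = "0.5"  # hypothetical latest model format version


def update_format_sketch(content: Dict[str, Any]) -> Dict[str, Any]:
    """Rebuild raw content at the latest format version (toy conversion)."""
    updated = dict(content)  # never mutate the caller's mapping
    old = updated.get("format_version", "0.4")
    updated["format_version"] = LATEST
    # the real implementation converts individual fields here;
    # this sketch only records the version change
    updated["version_history"] = list(content.get("version_history", [])) + [old]
    return updated
```

As the notes above warn, rebuilding at the latest format can surface new validation errors even for a previously valid source.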
def update_hashes( source: Union[PermissiveFileSource, ZipFile, ResourceDescr, Dict[str, YamlValue]], /) -> Union[ResourceDescr, InvalidDescr]:
def update_hashes(
    source: Union[PermissiveFileSource, ZipFile, ResourceDescr, BioimageioYamlContent],
    /,
) -> Union[ResourceDescr, InvalidDescr]:
    """Update hash values of the files referenced in **source**."""
    if isinstance(source, ResourceDescrBase):
        root = source.root
        source = dump_description(source)
    else:
        root = None

    context = get_validation_context().replace(
        update_hashes=True, root=root, perform_io_checks=True
    )
    with context:
        if isinstance(source, collections.abc.Mapping):
            return build_description(source)
        else:
            return load_description(source, perform_io_checks=True)

Update hash values of the files referenced in source.
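Updating hashes boils down to recomputing a SHA-256 digest for every referenced file. A minimal stdlib sketch of that primitive, reading in chunks as one would for large weight files (the helper name is illustrative, not part of the API):

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```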

def upload( source: Union[PermissiveFileSource, ZipFile, ResourceDescr, Dict[str, YamlValue]], /) -> HttpUrl:
def upload(
    source: Union[PermissiveFileSource, ZipFile, ResourceDescr, BioimageioYamlContent],
    /,
) -> HttpUrl:
    """Upload a new resource description (version) to the hypha server to be shared at bioimage.io.
    To edit an existing resource **version**, please login to https://bioimage.io and use the web interface.

    WARNING: This upload function is in alpha stage and might change in the future.

    Args:
        source: The resource description to upload.

    Returns:
        A URL to the uploaded resource description.
        Note: It might take some time until the resource is processed and available for download from the returned URL.
    """

    if settings.hypha_upload_token is None:
        raise ValueError(
            """
Upload token is not set. Please set BIOIMAGEIO_HYPHA_UPLOAD_TOKEN in your environment variables.
By setting this token you agree to our terms of service at https://bioimage.io/#/toc.

How to obtain a token:
    1. Login to https://bioimage.io
    2. Generate a new token at https://bioimage.io/#/api?tab=hypha-rpc
"""
        )

    if isinstance(source, ResourceDescrBase):
        # If source is already a ResourceDescr, we can use it directly
        descr = source
    elif isinstance(source, dict):
        descr = build_description(source)
    else:
        descr = load_description(source)

    if isinstance(descr, InvalidDescr):
        raise ValueError("Uploading invalid resource descriptions is not allowed.")

    if descr.type != "model":
        raise NotImplementedError(
            f"For now, only model resources can be uploaded (got type={descr.type})."
        )

    if descr.id is not None:
        raise ValueError(
            "You cannot upload a resource with an id. Please remove the id from the description and make sure to upload a new non-existing resource. To edit an existing resource, please use the web interface at https://bioimage.io."
        )

    content = get_resource_package_content(descr)

    metadata = content[BIOIMAGEIO_YAML]
    assert isinstance(metadata, dict)
    manifest = dict(metadata)

    # only admins can upload a resource with a version
    artifact_version = "stage"  # if descr.version is None else str(descr.version)

    # Create new model
    r = httpx.post(
        settings.hypha_upload,
        json={
            "parent_id": "bioimage-io/bioimage.io",
            "alias": (
                descr.id or "{animal_adjective}-{animal}"
            ),  # TODO: adapt for non-model uploads
            "type": descr.type,
            "manifest": manifest,
            "version": artifact_version,
        },
        headers=(
            headers := {
                "Authorization": f"Bearer {settings.hypha_upload_token}",
                "Content-Type": "application/json",
            }
        ),
    )

    response = r.json()
    artifact_id = response.get("id")
    if artifact_id is None:
        try:
            logger.error("Response detail: {}", "".join(response["detail"]))
        except Exception:
            logger.error("Response: {}", response)

        raise RuntimeError(f"Upload did not return resource id: {response}")
    else:
        logger.info("Uploaded resource description {}", artifact_id)

    for file_name, file_source in content.items():
        # Get upload URL for a file
        response = httpx.post(
            settings.hypha_upload.replace("/create", "/put_file"),
            json={
                "artifact_id": artifact_id,
                "file_path": file_name,
            },
            headers=headers,
            follow_redirects=True,
        )
        upload_url = response.raise_for_status().json()

        # Upload file to the provided URL
        if isinstance(file_source, collections.abc.Mapping):
            buf = io.BytesIO()
            write_yaml(file_source, buf)
            files = {file_name: buf}
        else:
            files = {file_name: get_reader(file_source)}

        response = httpx.put(
            upload_url,
            files=files,  # pyright: ignore[reportArgumentType]
            # TODO: follow up on https://github.com/encode/httpx/discussions/3611
            headers={"Content-Type": ""},  # Important for S3 uploads
            follow_redirects=True,
        )
        logger.info("Uploaded '{}' successfully", file_name)

    # Update model status
    manifest["status"] = "request-review"
    response = httpx.post(
        settings.hypha_upload.replace("/create", "/edit"),
        json={
            "artifact_id": artifact_id,
            "version": artifact_version,
            "manifest": manifest,
        },
        headers=headers,
        follow_redirects=True,
    )
    logger.info(
        "Updated status of {}/{} to 'request-review'", artifact_id, artifact_version
    )
    logger.warning(
        "Upload successful. Please note that the uploaded resource might not be available for download immediately."
    )
    with get_validation_context().replace(perform_io_checks=False):
        return HttpUrl(
            f"https://hypha.aicell.io/bioimage-io/artifacts/{artifact_id}/files/rdf.yaml?version={artifact_version}"
        )

Upload a new resource description (version) to the hypha server to be shared at bioimage.io. To edit an existing resource version, please login to https://bioimage.io and use the web interface.

WARNING: This upload function is in alpha stage and might change in the future.

Arguments:
  • source: The resource description to upload.
Returns:

A URL to the uploaded resource description. Note: It might take some time until the resource is processed and available for download from the returned URL.
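As the source above shows, the upload proceeds in three POSTs against the artifact manager: create the artifact, put_file for each packaged file, then edit to set the status. The JSON body of the create step can be sketched as follows (helper name hypothetical; field values taken from the source):

```python
from typing import Any, Dict


def create_artifact_payload(manifest: Dict[str, Any], resource_type: str) -> Dict[str, Any]:
    """Build the JSON body for the artifact-manager `create` call."""
    return {
        "parent_id": "bioimage-io/bioimage.io",
        "alias": "{animal_adjective}-{animal}",  # server expands this placeholder into an id
        "type": resource_type,
        "manifest": manifest,
        "version": "stage",  # only admins can upload a concrete version
    }
```

The same headers (bearer token, JSON content type) are reused for the put_file and edit calls.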

def validate_format( data: Dict[str, YamlValue], /, *, format_version: Union[Literal['latest', 'discover'], str] = 'discover', context: Optional[ValidationContext] = None) -> ValidationSummary:
def validate_format(
    data: BioimageioYamlContent,
    /,
    *,
    format_version: Union[Literal["discover", "latest"], str] = DISCOVER,
    context: Optional[ValidationContext] = None,
) -> ValidationSummary:
    """Validate a dictionary holding a bioimageio description.
    See `bioimageio.spec.load_description_and_validate_format_only`
    to validate a file source.

    Args:
        data: Dictionary holding the raw bioimageio.yaml content.
        format_version: Format version to (update to and) use for validation.
        context: Validation context, see `bioimageio.spec.ValidationContext`

    Note:
        Use `bioimageio.spec.load_description_and_validate_format_only` to validate a
        file source instead of loading the YAML content and creating the appropriate
        `ValidationContext`.

        Alternatively you can use `bioimageio.spec.load_description` and access the
        `validation_summary` attribute of the returned object.
    """
    with context or get_validation_context():
        rd = build_description(data, format_version=format_version)

    assert rd.validation_summary is not None
    return rd.validation_summary

Validate a dictionary holding a bioimageio description. See bioimageio.spec.load_description_and_validate_format_only to validate a file source.

Arguments:
  • data: Dictionary holding the raw bioimageio.yaml content.
  • format_version: Format version to (update to and) use for validation.
  • context: Validation context, see bioimageio.spec.ValidationContext
Note:

Use bioimageio.spec.load_description_and_validate_format_only to validate a file source instead of loading the YAML content and creating the appropriate ValidationContext.

Alternatively you can use bioimageio.spec.load_description and access the validation_summary attribute of the returned object.
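The shape of the API is worth noting: validation collects problems into a summary object rather than raising. A toy stdlib sketch of that shape (the checks and names here are hypothetical and far simpler than the real Pydantic validation):

```python
from typing import Any, Dict, List

REQUIRED = ("type", "format_version", "name")  # illustrative subset of required fields


def validate_format_sketch(data: Dict[str, Any]) -> Dict[str, Any]:
    """Return a summary dict instead of raising on invalid content."""
    errors: List[str] = [
        f"missing required field: {k}" for k in REQUIRED if k not in data
    ]
    return {"status": "passed" if not errors else "failed", "errors": errors}
```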

@dataclass(frozen=True)
class ValidationContext(bioimageio.spec._internal.validation_context.ValidationContextBase):
@dataclass(frozen=True)
class ValidationContext(ValidationContextBase):
    """A validation context used to control validation of bioimageio resources.

    For example a relative file path in a bioimageio description requires the **root**
    context to evaluate if the file is available and, if **perform_io_checks** is true,
    if it matches its expected SHA256 hash value.
    """

    _context_tokens: "List[Token[Optional[ValidationContext]]]" = field(
        init=False,
        default_factory=cast(
            "Callable[[], List[Token[Optional[ValidationContext]]]]", list
        ),
    )

    cache: Union[
        DiskCache[RootHttpUrl], MemoryCache[RootHttpUrl], NoopCache[RootHttpUrl]
    ] = field(default=settings.disk_cache)
    disable_cache: bool = False
    """Disable caching downloads to `settings.cache_path`
    and (re)download them to memory instead."""

    root: Union[RootHttpUrl, DirectoryPath, ZipFile] = Path()
    """Url/directory/archive serving as base to resolve any relative file paths."""

    warning_level: WarningLevel = 50
    """Treat warnings of severity `s` as validation errors if `s >= warning_level`."""

    log_warnings: bool = settings.log_warnings
    """If `True` warnings are logged to the terminal

    Note: This setting does not affect warning entries
        of a generated `bioimageio.spec.ValidationSummary`.
    """

    progressbar_factory: Optional[Callable[[], Progressbar]] = None
    """Callable to return a tqdm-like progressbar.

    Currently this is only used for file downloads."""

    raise_errors: bool = False
    """Directly raise any validation errors
    instead of aggregating errors and returning a `bioimageio.spec.InvalidDescr`. (for debugging)"""

    @property
    def summary(self):
        if isinstance(self.root, ZipFile):
            if self.root.filename is None:
                root = "in-memory"
            else:
                root = Path(self.root.filename)
        else:
            root = self.root

        return ValidationContextSummary(
            root=root,
            file_name=self.file_name,
            perform_io_checks=self.perform_io_checks,
            known_files=copy(self.known_files),
            update_hashes=self.update_hashes,
        )

    def __enter__(self):
        self._context_tokens.append(_validation_context_var.set(self))
        return self

    def __exit__(self, type, value, traceback):  # type: ignore
        _validation_context_var.reset(self._context_tokens.pop(-1))

    def replace(  # TODO: probably use __replace__ when py>=3.13
        self,
        root: Optional[Union[RootHttpUrl, DirectoryPath, ZipFile]] = None,
        warning_level: Optional[WarningLevel] = None,
        log_warnings: Optional[bool] = None,
        file_name: Optional[str] = None,
        perform_io_checks: Optional[bool] = None,
        known_files: Optional[Dict[str, Optional[Sha256]]] = None,
        raise_errors: Optional[bool] = None,
        update_hashes: Optional[bool] = None,
    ) -> Self:
        if known_files is None and root is not None and self.root != root:
            # reset known files if root changes, but no new known_files are given
            known_files = {}

        return self.__class__(
            root=self.root if root is None else root,
            warning_level=(
                self.warning_level if warning_level is None else warning_level
            ),
            log_warnings=self.log_warnings if log_warnings is None else log_warnings,
            file_name=self.file_name if file_name is None else file_name,
            perform_io_checks=(
                self.perform_io_checks
                if perform_io_checks is None
                else perform_io_checks
            ),
            known_files=self.known_files if known_files is None else known_files,
            raise_errors=self.raise_errors if raise_errors is None else raise_errors,
            update_hashes=(
                self.update_hashes if update_hashes is None else update_hashes
            ),
        )

    @property
    def source_name(self) -> str:
        if self.file_name is None:
            return "in-memory"
        else:
            try:
                if isinstance(self.root, Path):
                    source = (self.root / self.file_name).absolute()
                else:
                    parsed = urlsplit(str(self.root))
                    path = list(parsed.path.strip("/").split("/")) + [self.file_name]
                    source = urlunsplit(
                        (
                            parsed.scheme,
                            parsed.netloc,
                            "/".join(path),
                            parsed.query,
                            parsed.fragment,
                        )
                    )
            except ValueError:
                return self.file_name
            else:
                return str(source)

A validation context used to control validation of bioimageio resources.

For example a relative file path in a bioimageio description requires the root context to evaluate if the file is available and, if perform_io_checks is true, if it matches its expected SHA256 hash value.
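The `with context:` mechanics shown in the source rely on a ContextVar plus a token stack, so contexts nest and restore correctly on exit. A self-contained stdlib sketch of that pattern (class and variable names are illustrative, and the dataclass is left unfrozen for brevity):

```python
from contextvars import ContextVar, Token
from dataclasses import dataclass, field
from typing import List, Optional

_ctx_var: ContextVar[Optional["Ctx"]] = ContextVar("ctx", default=None)


@dataclass
class Ctx:
    perform_io_checks: bool = True
    _tokens: List[Token] = field(default_factory=list, repr=False)

    def __enter__(self) -> "Ctx":
        # remember the reset token so nested contexts unwind in order
        self._tokens.append(_ctx_var.set(self))
        return self

    def __exit__(self, *exc) -> None:
        _ctx_var.reset(self._tokens.pop())


def current_ctx() -> Ctx:
    """Return the active context, or a fresh default one."""
    return _ctx_var.get() or Ctx()
```

Because exits reset to the previously stored token, an inner `with` block restores the outer context rather than clearing it.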

ValidationContext( file_name: Optional[str] = None, perform_io_checks: bool = True, known_files: Dict[str, Optional[bioimageio.spec._internal.io_basics.Sha256]] = <factory>, update_hashes: bool = False, cache: Union[genericache.disk_cache.DiskCache[bioimageio.spec._internal.root_url.RootHttpUrl], genericache.memory_cache.MemoryCache[bioimageio.spec._internal.root_url.RootHttpUrl], genericache.noop_cache.NoopCache[bioimageio.spec._internal.root_url.RootHttpUrl]] = <genericache.disk_cache.DiskCache object>, disable_cache: bool = False, root: Union[bioimageio.spec._internal.root_url.RootHttpUrl, Annotated[pathlib.Path, PathType(path_type='dir')], zipfile.ZipFile] = PosixPath('.'), warning_level: Literal[20, 30, 35, 50] = 50, log_warnings: bool = True, progressbar_factory: Optional[Callable[[], bioimageio.spec._internal.progress.Progressbar]] = None, raise_errors: bool = False)
cache: Union[genericache.disk_cache.DiskCache[bioimageio.spec._internal.root_url.RootHttpUrl], genericache.memory_cache.MemoryCache[bioimageio.spec._internal.root_url.RootHttpUrl], genericache.noop_cache.NoopCache[bioimageio.spec._internal.root_url.RootHttpUrl]] = <genericache.disk_cache.DiskCache object>
disable_cache: bool = False

If True, do not cache downloads to settings.cache_path; (re)download them to memory instead.

root: Union[bioimageio.spec._internal.root_url.RootHttpUrl, Annotated[pathlib.Path, PathType(path_type='dir')], zipfile.ZipFile] = PosixPath('.')

URL, directory, or archive serving as the base for resolving relative file paths.

warning_level: Literal[20, 30, 35, 50] = 50

Treat warnings of severity s as validation errors if s >= warning_level.

log_warnings: bool = True

If True, warnings are logged to the terminal.

Note: This setting does not affect warning entries of a generated bioimageio.spec.ValidationSummary.

progressbar_factory: Optional[Callable[[], bioimageio.spec._internal.progress.Progressbar]] = None

Callable to return a tqdm-like progressbar.

Currently this is only used for file downloads.

raise_errors: bool = False

Directly raise any validation errors instead of aggregating them and returning a bioimageio.spec.InvalidDescr (useful for debugging).
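As a rough illustration of what these fields govern, here is a minimal stdlib-only sketch of the I/O check that perform_io_checks enables for a relative file path: resolve it against root and compare its SHA256 hash. The helper name `check_file` is hypothetical and not part of bioimageio.spec.

```python
# Hypothetical stdlib-only sketch of the check implied by perform_io_checks:
# resolve file_name against a directory root and verify its SHA256 hash.
# `check_file` is an illustrative name, not a bioimageio.spec API.
import hashlib
from pathlib import Path
from typing import Optional


def check_file(root: Path, file_name: str, expected_sha256: Optional[str]) -> str:
    """Return the resolved source path; raise if missing or hash-mismatched."""
    source = (root / file_name).absolute()
    if not source.exists():
        raise FileNotFoundError(source)

    if expected_sha256 is not None:
        digest = hashlib.sha256(source.read_bytes()).hexdigest()
        if digest != expected_sha256:
            raise ValueError(f"SHA256 mismatch for {source}")

    return str(source)
```

With perform_io_checks=False the real context skips this step entirely, which is why known hashes are only verified when the flag is set.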

summary
102    @property
103    def summary(self):
104        if isinstance(self.root, ZipFile):
105            if self.root.filename is None:
106                root = "in-memory"
107            else:
108                root = Path(self.root.filename)
109        else:
110            root = self.root
111
112        return ValidationContextSummary(
113            root=root,
114            file_name=self.file_name,
115            perform_io_checks=self.perform_io_checks,
116            known_files=copy(self.known_files),
117            update_hashes=self.update_hashes,
118        )
def replace( self, root: Union[bioimageio.spec._internal.root_url.RootHttpUrl, Annotated[pathlib.Path, PathType(path_type='dir')], zipfile.ZipFile, NoneType] = None, warning_level: Optional[Literal[20, 30, 35, 50]] = None, log_warnings: Optional[bool] = None, file_name: Optional[str] = None, perform_io_checks: Optional[bool] = None, known_files: Optional[Dict[str, Optional[bioimageio.spec._internal.io_basics.Sha256]]] = None, raise_errors: Optional[bool] = None, update_hashes: Optional[bool] = None) -> Self:
127    def replace(  # TODO: probably use __replace__ when py>=3.13
128        self,
129        root: Optional[Union[RootHttpUrl, DirectoryPath, ZipFile]] = None,
130        warning_level: Optional[WarningLevel] = None,
131        log_warnings: Optional[bool] = None,
132        file_name: Optional[str] = None,
133        perform_io_checks: Optional[bool] = None,
134        known_files: Optional[Dict[str, Optional[Sha256]]] = None,
135        raise_errors: Optional[bool] = None,
136        update_hashes: Optional[bool] = None,
137    ) -> Self:
138        if known_files is None and root is not None and self.root != root:
139            # reset known files if root changes, but no new known_files are given
140            known_files = {}
141
142        return self.__class__(
143            root=self.root if root is None else root,
144            warning_level=(
145                self.warning_level if warning_level is None else warning_level
146            ),
147            log_warnings=self.log_warnings if log_warnings is None else log_warnings,
148            file_name=self.file_name if file_name is None else file_name,
149            perform_io_checks=(
150                self.perform_io_checks
151                if perform_io_checks is None
152                else perform_io_checks
153            ),
154            known_files=self.known_files if known_files is None else known_files,
155            raise_errors=self.raise_errors if raise_errors is None else raise_errors,
156            update_hashes=(
157                self.update_hashes if update_hashes is None else update_hashes
158            ),
159        )
source_name: str
161    @property
162    def source_name(self) -> str:
163        if self.file_name is None:
164            return "in-memory"
165        else:
166            try:
167                if isinstance(self.root, Path):
168                    source = (self.root / self.file_name).absolute()
169                else:
170                    parsed = urlsplit(str(self.root))
171                    path = list(parsed.path.strip("/").split("/")) + [self.file_name]
172                    source = urlunsplit(
173                        (
174                            parsed.scheme,
175                            parsed.netloc,
176                            "/".join(path),
177                            parsed.query,
178                            parsed.fragment,
179                        )
180                    )
181            except ValueError:
182                return self.file_name
183            else:
184                return str(source)
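The URL branch of source_name above can be exercised in isolation with the standard library. `join_url` is an illustrative name, not a bioimageio.spec API; it mirrors how the property appends the file name to the root URL's path while preserving query and fragment.

```python
# Stdlib-only sketch of the URL branch of `source_name`.
# `join_url` is an illustrative name, not a bioimageio.spec API.
from urllib.parse import urlsplit, urlunsplit


def join_url(root: str, file_name: str) -> str:
    """Append file_name to the path component of root."""
    parsed = urlsplit(root)
    path = list(parsed.path.strip("/").split("/")) + [file_name]
    return urlunsplit(
        (parsed.scheme, parsed.netloc, "/".join(path), parsed.query, parsed.fragment)
    )


# join_url("https://example.com/models/unet", "rdf.yaml")
# → "https://example.com/models/unet/rdf.yaml"
```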
class ValidationSummary(pydantic.main.BaseModel):
240class ValidationSummary(BaseModel, extra="allow"):
241    """Summarizes output of all bioimageio validations and tests
242    for one specific `ResourceDescr` instance."""
243
244    name: str
245    """Name of the validation"""
246    source_name: str
247    """Source of the validated bioimageio description"""
248    id: Optional[str] = None
249    """ID of the resource being validated"""
250    type: str
251    """Type of the resource being validated"""
252    format_version: str
253    """Format version of the resource being validated"""
254    status: Literal["passed", "valid-format", "failed"]
255    """overall status of the bioimageio validation"""
256    metadata_completeness: Annotated[float, annotated_types.Interval(ge=0, le=1)] = 0.0
257    """Estimate of completeness of the metadata in the resource description.
258
259    Note: This completeness estimate may change with subsequent releases
260        and should be considered bioimageio.spec version specific.
261    """
262
263    details: List[ValidationDetail]
264    """List of validation details"""
265    env: Set[InstalledPackage] = Field(
266        default_factory=lambda: {
267            InstalledPackage(name="bioimageio.spec", version=VERSION)
268        }
269    )
270    """List of selected, relevant package versions"""
271
272    saved_conda_list: Optional[str] = None
273
274    @field_serializer("saved_conda_list")
275    def _save_conda_list(self, value: Optional[str]):
276        return self.conda_list
277
278    @property
279    def conda_list(self):
280        if self.saved_conda_list is None:
281            p = subprocess.run(
282                ["conda", "list"],
283                stdout=subprocess.PIPE,
284                stderr=subprocess.STDOUT,
285                shell=True,
286                text=True,
287            )
288            self.saved_conda_list = (
289                p.stdout or f"`conda list` exited with {p.returncode}"
290            )
291
292        return self.saved_conda_list
293
294    @property
295    def status_icon(self):
296        if self.status == "passed":
297            return "✔️"
298        elif self.status == "valid-format":
299            return "🟡"
300        else:
301            return "❌"
302
303    @property
304    def errors(self) -> List[ErrorEntry]:
305        return list(chain.from_iterable(d.errors for d in self.details))
306
307    @property
308    def warnings(self) -> List[WarningEntry]:
309        return list(chain.from_iterable(d.warnings for d in self.details))
310
311    def format(
312        self,
313        *,
314        width: Optional[int] = None,
315        include_conda_list: bool = False,
316    ):
317        """Format summary as Markdown string"""
318        return self._format(
319            width=width, target="md", include_conda_list=include_conda_list
320        )
321
322    format_md = format
323
324    def format_html(
325        self,
326        *,
327        width: Optional[int] = None,
328        include_conda_list: bool = False,
329    ):
330        md_with_html = self._format(
331            target="html", width=width, include_conda_list=include_conda_list
332        )
333        return markdown.markdown(
334            md_with_html, extensions=["tables", "fenced_code", "nl2br"]
335        )
336
337    def display(
338        self,
339        *,
340        width: Optional[int] = None,
341        include_conda_list: bool = False,
342        tab_size: int = 4,
343        soft_wrap: bool = True,
344    ) -> None:
345        try:  # render as HTML in Jupyter notebook
346            from IPython.core.getipython import get_ipython
347            from IPython.display import (
348                display_html,  # pyright: ignore[reportUnknownVariableType]
349            )
350        except ImportError:
351            pass
352        else:
353            if get_ipython() is not None:
354                _ = display_html(
355                    self.format_html(
356                        width=width, include_conda_list=include_conda_list
357                    ),
358                    raw=True,
359                )
360                return
361
362        # render with rich
363        _ = self._format(
364            target=rich.console.Console(
365                width=width,
366                tab_size=tab_size,
367                soft_wrap=soft_wrap,
368            ),
369            width=width,
370            include_conda_list=include_conda_list,
371        )
372
373    def add_detail(self, detail: ValidationDetail):
374        if detail.status == "failed":
375            self.status = "failed"
376        elif detail.status != "passed":
377            assert_never(detail.status)
378
379        self.details.append(detail)
380
381    def log(
382        self,
383        to: Union[Literal["display"], Path, Sequence[Union[Literal["display"], Path]]],
384    ) -> List[Path]:
385        """Convenience method to display the validation summary in the terminal and/or
386        save it to disk. See `save` for details."""
387        if to == "display":
388            display = True
389            save_to = []
390        elif isinstance(to, Path):
391            display = False
392            save_to = [to]
393        else:
394            display = "display" in to
395            save_to = [p for p in to if p != "display"]
396
397        if display:
398            self.display()
399
400        return self.save(save_to)
401
402    def save(
403        self, path: Union[Path, Sequence[Path]] = Path("{id}_summary_{now}")
404    ) -> List[Path]:
405        """Save the validation/test summary in JSON, Markdown or HTML format.
406
407        Returns:
408            List of file paths the summary was saved to.
409
410        Notes:
411        - Format is chosen based on the suffix: `.json`, `.md`, `.html`.
412        - If **path** has no suffix it is assumed to be a directory to which
413          `summary.json`, `summary.md` and `summary.html` are saved.
414        """
415        if isinstance(path, (str, Path)):
416            path = [Path(path)]
417
418        # folder to file paths
419        file_paths: List[Path] = []
420        for p in path:
421            if p.suffix:
422                file_paths.append(p)
423            else:
424                file_paths.extend(
425                    [
426                        p / "summary.json",
427                        p / "summary.md",
428                        p / "summary.html",
429                    ]
430                )
431
432        now = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
433        for p in file_paths:
434            p = Path(str(p).format(id=self.id or "bioimageio", now=now))
435            if p.suffix == ".json":
436                self.save_json(p)
437            elif p.suffix == ".md":
438                self.save_markdown(p)
439            elif p.suffix == ".html":
440                self.save_html(p)
441            else:
442                raise ValueError(f"Unknown summary path suffix '{p.suffix}'")
443
444        return file_paths
445
446    def save_json(
447        self, path: Path = Path("summary.json"), *, indent: Optional[int] = 2
448    ):
449        """Save validation/test summary as JSON file."""
450        json_str = self.model_dump_json(indent=indent)
451        path.parent.mkdir(exist_ok=True, parents=True)
452        _ = path.write_text(json_str, encoding="utf-8")
453        logger.info("Saved summary to {}", path.absolute())
454
455    def save_markdown(self, path: Path = Path("summary.md")):
456        """Save rendered validation/test summary as Markdown file."""
457        formatted = self.format_md()
458        path.parent.mkdir(exist_ok=True, parents=True)
459        _ = path.write_text(formatted, encoding="utf-8")
460        logger.info("Saved Markdown formatted summary to {}", path.absolute())
461
462    def save_html(self, path: Path = Path("summary.html")) -> None:
463        """Save rendered validation/test summary as HTML file."""
464        path.parent.mkdir(exist_ok=True, parents=True)
465
466        html = self.format_html()
467        _ = path.write_text(html, encoding="utf-8")
468        logger.info("Saved HTML formatted summary to {}", path.absolute())
469
470    @classmethod
471    def load_json(cls, path: Path) -> Self:
472        """Load validation/test summary from a suitable JSON file"""
473        json_str = Path(path).read_text(encoding="utf-8")
474        return cls.model_validate_json(json_str)
475
476    @field_validator("env", mode="before")
477    def _convert_dict(cls, value: List[Union[List[str], Dict[str, str]]]):
478        """convert old env value for backwards compatibility"""
479        if isinstance(value, list):
480            return [
481                (
482                    (v["name"], v["version"], v.get("build", ""), v.get("channel", ""))
483                    if isinstance(v, dict) and "name" in v and "version" in v
484                    else v
485                )
486                for v in value
487            ]
488        else:
489            return value
490
491    def _format(
492        self,
493        *,
494        target: Union[rich.console.Console, Literal["html", "md"]],
495        width: Optional[int],
496        include_conda_list: bool,
497    ):
498        return _format_summary(
499            self,
500            target=target,
501            width=width or 100,
502            include_conda_list=include_conda_list,
503        )

Summarizes output of all bioimageio validations and tests for one specific ResourceDescr instance.

name: str

Name of the validation

source_name: str

Source of the validated bioimageio description

id: Optional[str]

ID of the resource being validated

type: str

Type of the resource being validated

format_version: str

Format version of the resource being validated

status: Literal['passed', 'valid-format', 'failed']

overall status of the bioimageio validation

metadata_completeness: Annotated[float, Interval(gt=None, ge=0, lt=None, le=1)]

Estimate of completeness of the metadata in the resource description.

Note: This completeness estimate may change with subsequent releases and should be considered bioimageio.spec version specific.

List of validation details

List of selected, relevant package versions

saved_conda_list: Optional[str]
conda_list
278    @property
279    def conda_list(self):
280        if self.saved_conda_list is None:
281            p = subprocess.run(
282                ["conda", "list"],
283                stdout=subprocess.PIPE,
284                stderr=subprocess.STDOUT,
285                shell=True,
286                text=True,
287            )
288            self.saved_conda_list = (
289                p.stdout or f"`conda list` exited with {p.returncode}"
290            )
291
292        return self.saved_conda_list
status_icon
294    @property
295    def status_icon(self):
296        if self.status == "passed":
297            return "✔️"
298        elif self.status == "valid-format":
299            return "🟡"
300        else:
301            return "❌"
errors: List[bioimageio.spec.summary.ErrorEntry]
303    @property
304    def errors(self) -> List[ErrorEntry]:
305        return list(chain.from_iterable(d.errors for d in self.details))
warnings: List[bioimageio.spec.summary.WarningEntry]
307    @property
308    def warnings(self) -> List[WarningEntry]:
309        return list(chain.from_iterable(d.warnings for d in self.details))
def format( self, *, width: Optional[int] = None, include_conda_list: bool = False):
311    def format(
312        self,
313        *,
314        width: Optional[int] = None,
315        include_conda_list: bool = False,
316    ):
317        """Format summary as Markdown string"""
318        return self._format(
319            width=width, target="md", include_conda_list=include_conda_list
320        )

Format summary as Markdown string

def format_md( self, *, width: Optional[int] = None, include_conda_list: bool = False):
311    def format(
312        self,
313        *,
314        width: Optional[int] = None,
315        include_conda_list: bool = False,
316    ):
317        """Format summary as Markdown string"""
318        return self._format(
319            width=width, target="md", include_conda_list=include_conda_list
320        )

Format summary as Markdown string

def format_html( self, *, width: Optional[int] = None, include_conda_list: bool = False):
324    def format_html(
325        self,
326        *,
327        width: Optional[int] = None,
328        include_conda_list: bool = False,
329    ):
330        md_with_html = self._format(
331            target="html", width=width, include_conda_list=include_conda_list
332        )
333        return markdown.markdown(
334            md_with_html, extensions=["tables", "fenced_code", "nl2br"]
335        )
def display( self, *, width: Optional[int] = None, include_conda_list: bool = False, tab_size: int = 4, soft_wrap: bool = True) -> None:
337    def display(
338        self,
339        *,
340        width: Optional[int] = None,
341        include_conda_list: bool = False,
342        tab_size: int = 4,
343        soft_wrap: bool = True,
344    ) -> None:
345        try:  # render as HTML in Jupyter notebook
346            from IPython.core.getipython import get_ipython
347            from IPython.display import (
348                display_html,  # pyright: ignore[reportUnknownVariableType]
349            )
350        except ImportError:
351            pass
352        else:
353            if get_ipython() is not None:
354                _ = display_html(
355                    self.format_html(
356                        width=width, include_conda_list=include_conda_list
357                    ),
358                    raw=True,
359                )
360                return
361
362        # render with rich
363        _ = self._format(
364            target=rich.console.Console(
365                width=width,
366                tab_size=tab_size,
367                soft_wrap=soft_wrap,
368            ),
369            width=width,
370            include_conda_list=include_conda_list,
371        )
def add_detail(self, detail: bioimageio.spec.summary.ValidationDetail):
373    def add_detail(self, detail: ValidationDetail):
374        if detail.status == "failed":
375            self.status = "failed"
376        elif detail.status != "passed":
377            assert_never(detail.status)
378
379        self.details.append(detail)
def log( self, to: Union[Literal['display'], pathlib.Path, Sequence[Union[Literal['display'], pathlib.Path]]]) -> List[pathlib.Path]:
381    def log(
382        self,
383        to: Union[Literal["display"], Path, Sequence[Union[Literal["display"], Path]]],
384    ) -> List[Path]:
385        """Convenience method to display the validation summary in the terminal and/or
386        save it to disk. See `save` for details."""
387        if to == "display":
388            display = True
389            save_to = []
390        elif isinstance(to, Path):
391            display = False
392            save_to = [to]
393        else:
394            display = "display" in to
395            save_to = [p for p in to if p != "display"]
396
397        if display:
398            self.display()
399
400        return self.save(save_to)

Convenience method to display the validation summary in the terminal and/or save it to disk. See save for details.
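The dispatch on the to argument can be sketched as a small stdlib helper that splits it into a display flag and a list of save paths; `split_log_targets` is an illustrative name, not part of bioimageio.spec.

```python
# Stdlib-only sketch of how `log` normalizes its `to` argument.
# `split_log_targets` is an illustrative name, not a bioimageio.spec API.
from pathlib import Path
from typing import List, Sequence, Tuple, Union


def split_log_targets(
    to: Union[str, Path, Sequence[Union[str, Path]]]
) -> Tuple[bool, List[Path]]:
    """Return (display?, paths to save to)."""
    if to == "display":
        return True, []
    if isinstance(to, Path):
        return False, [to]
    # a sequence: "display" toggles terminal output, everything else is a path
    return "display" in to, [Path(p) for p in to if p != "display"]
```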

def save( self, path: Union[pathlib.Path, Sequence[pathlib.Path]] = PosixPath('{id}_summary_{now}')) -> List[pathlib.Path]:
402    def save(
403        self, path: Union[Path, Sequence[Path]] = Path("{id}_summary_{now}")
404    ) -> List[Path]:
405        """Save the validation/test summary in JSON, Markdown or HTML format.
406
407        Returns:
408            List of file paths the summary was saved to.
409
410        Notes:
411        - Format is chosen based on the suffix: `.json`, `.md`, `.html`.
412        - If **path** has no suffix it is assumed to be a directory to which
413          `summary.json`, `summary.md` and `summary.html` are saved.
414        """
415        if isinstance(path, (str, Path)):
416            path = [Path(path)]
417
418        # folder to file paths
419        file_paths: List[Path] = []
420        for p in path:
421            if p.suffix:
422                file_paths.append(p)
423            else:
424                file_paths.extend(
425                    [
426                        p / "summary.json",
427                        p / "summary.md",
428                        p / "summary.html",
429                    ]
430                )
431
432        now = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
433        for p in file_paths:
434            p = Path(str(p).format(id=self.id or "bioimageio", now=now))
435            if p.suffix == ".json":
436                self.save_json(p)
437            elif p.suffix == ".md":
438                self.save_markdown(p)
439            elif p.suffix == ".html":
440                self.save_html(p)
441            else:
442                raise ValueError(f"Unknown summary path suffix '{p.suffix}'")
443
444        return file_paths

Save the validation/test summary in JSON, Markdown or HTML format.

Returns:

List of file paths the summary was saved to.

Notes:

  • Format is chosen based on the suffix: .json, .md, .html.
  • If path has no suffix it is assumed to be a directory to which summary.json, summary.md and summary.html are saved.
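The path handling described in these notes (suffix dispatch plus {id} and {now} templating) can be sketched with pathlib alone; `expand_summary_paths` is an illustrative name, not a bioimageio.spec API.

```python
# Stdlib-only sketch of `save`'s path expansion: suffix-less paths become a
# directory of summary.{json,md,html}, then {id}/{now} placeholders are filled.
# `expand_summary_paths` is an illustrative name, not a bioimageio.spec API.
from datetime import datetime, timezone
from pathlib import Path
from typing import List, Optional


def expand_summary_paths(path: Path, resource_id: Optional[str] = None) -> List[Path]:
    targets = (
        [path]
        if path.suffix
        else [path / "summary.json", path / "summary.md", path / "summary.html"]
    )
    now = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return [
        Path(str(p).format(id=resource_id or "bioimageio", now=now)) for p in targets
    ]
```

With the default template this yields paths like bioimageio_summary_20240101T000000Z/summary.json, matching the {id}_summary_{now} default shown in the signature.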
def save_json( self, path: pathlib.Path = PosixPath('summary.json'), *, indent: Optional[int] = 2):
446    def save_json(
447        self, path: Path = Path("summary.json"), *, indent: Optional[int] = 2
448    ):
449        """Save validation/test summary as JSON file."""
450        json_str = self.model_dump_json(indent=indent)
451        path.parent.mkdir(exist_ok=True, parents=True)
452        _ = path.write_text(json_str, encoding="utf-8")
453        logger.info("Saved summary to {}", path.absolute())

Save validation/test summary as JSON file.

def save_markdown(self, path: pathlib.Path = PosixPath('summary.md')):
455    def save_markdown(self, path: Path = Path("summary.md")):
456        """Save rendered validation/test summary as Markdown file."""
457        formatted = self.format_md()
458        path.parent.mkdir(exist_ok=True, parents=True)
459        _ = path.write_text(formatted, encoding="utf-8")
460        logger.info("Saved Markdown formatted summary to {}", path.absolute())

Save rendered validation/test summary as Markdown file.

def save_html(self, path: pathlib.Path = PosixPath('summary.html')) -> None:
462    def save_html(self, path: Path = Path("summary.html")) -> None:
463        """Save rendered validation/test summary as HTML file."""
464        path.parent.mkdir(exist_ok=True, parents=True)
465
466        html = self.format_html()
467        _ = path.write_text(html, encoding="utf-8")
468        logger.info("Saved HTML formatted summary to {}", path.absolute())

Save rendered validation/test summary as HTML file.

@classmethod
def load_json(cls, path: pathlib.Path) -> Self:
470    @classmethod
471    def load_json(cls, path: Path) -> Self:
472        """Load validation/test summary from a suitable JSON file"""
473        json_str = Path(path).read_text(encoding="utf-8")
474        return cls.model_validate_json(json_str)

Load validation/test summary from a suitable JSON file

model_config: ClassVar[pydantic.config.ConfigDict] = {'extra': 'allow'}

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].